Some interesting projects are born over drinks at a bar. The Israeli mission to the Moon emerged from a gathering of friends over a few beers. Something similar happened with recent research on the characteristics of hoaxes, or fake news, on Twitter, one of the main social networks. Miguel Molina, a data scientist at Imperial College London, was chatting with Juan Gómez Romero, from the Department of Computer Science at the University of Granada, when they came up with a system to detect hoaxes. Their preliminary results reveal patterns in writing, posting, and behavior that open the door to fighting the plague of misinformation. Twitter, however, warns of the limitations of studies of this kind.
"The first thing was to narrow down the field of research and define fake news," Molina says. "In a very short way, they are intentional lies that seek money or traffic," he explains. This definition coincides with others in this same field that link the proliferation of fake news to attempts at destabilization, influence and monetization.
"A family member or friend may think he or she is sending information that he thinks is true, but has no intention of it," explains the researcher. With this premise, the team, with the collaboration of Imperial College, began collecting, scoring and selecting tweets that responded to their object of study.
The application of statistical and mathematical analyses to this material revealed distinctive characteristics in the writing of fake news: it usually incorporates capital letters, exclamation marks, emojis (pictograms), drawings, images, and videos. "They're looking for surprise, for grabbing attention," says Molina.
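The stylistic signals listed above can be sketched as a simple feature extractor. This is a minimal illustration of the kind of features described, not the authors' published pipeline; the feature names and the emoji heuristic are assumptions.

```python
import re

def stylistic_features(text: str) -> dict:
    """Extract simple stylistic signals of the kind the study describes.

    The exact features and their weighting are illustrative assumptions,
    not the authors' actual implementation.
    """
    letters = [c for c in text if c.isalpha()]
    uppercase_ratio = (
        sum(c.isupper() for c in letters) / len(letters) if letters else 0.0
    )
    return {
        "uppercase_ratio": uppercase_ratio,                       # shouting
        "exclamations": text.count("!"),                          # emphasis
        "has_media_link": bool(                                   # images/videos
            re.search(r"pic\.twitter\.com|https?://", text)
        ),
        # Crude heuristic: most emoji live above U+1F000
        "emoji_count": sum(ord(c) > 0x1F000 for c in text),
    }
```

In a real classifier, a vector like this would be one input among many; the account-level metadata discussed below would supply the rest.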
Nights of horror in Catalonia. Young pro-independence female activist kidnapped by police in Tarragona pic.twitter.com/HTJ9sXX3cw— Josep Lluís Alay 🎗 (@josepalay) 22 October 2019
Tweet posted on October 22nd which, under the headline "Nights of Horror in Catalonia", accuses the police of "kidnapping" (depriving a person or group of people of their freedom of movement and demanding, in exchange for their release, the fulfillment of some condition, such as the payment of a ransom) a "young person". It refers to the detention of a woman in Tarragona during the riots over the procés ruling. The magistrate of the Court of Instruction 1 of Tarragona, acting on duty, ordered provisional detention without bail for the defendant.
This behavior is intended to fish in the waters of polarization; that is, "they work because recipients are willing to believe fake news." With this complicity between sender and receiver, the capacity for penetration and diffusion is multiplied. Mass distribution alone achieves the desired effect when the intent is influence. If the aim is also to raise money, the tweets include links that funnel traffic to a certain website in order to monetize visits or attract purchases or payments.
The paper, published in the international journal IEEE Access, mathematically analyzes other characteristics of tweets, such as the metadata that identifies the account: author, number of followers, favorites, contacts, and date of registration on the social network.
All these characteristics, filtered by a computer program, have made it possible to determine that accounts sharing misinformation are created in connection with a specific current episode (riots in Catalonia, elections, Brexit). In this way, besides having more opportunities to attract attention by latching onto trending topics, they benefit from the shorter window that the social network's teams and fact-checkers have had to verify them.
They have also detected that fake news accounts use strange characters in both their name and description, and have few followers but follow many users, whom they try to win over with their follow so that they serve as a transmission belt. This behavior, known as altruistic reciprocity, allows "the creation of links to other nodes to prompt the latter to reciprocate by creating a link with the first," according to the research.
Juan Gómez Romero (left) and Miguel Molina Solana (right), at the Data Observatory at Imperial College London. UGR
In this way, whether created by bots or by people, these accounts seek, both in their creation and in their dissemination strategy, to exploit known human biases, such as confirmation bias (the tendency to favor, seek out, and remember information that confirms one's own beliefs) or the aforementioned altruistic reciprocity.
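The account-level signals the article describes (creation close to a news event, odd characters in the name, following many more accounts than follow back) can be sketched like this. The field names mirror Twitter's classic API user object, but the features themselves are illustrative assumptions, not the paper's exact variables.

```python
from datetime import datetime

def account_features(profile: dict, event_date: datetime) -> dict:
    """Compute account-level signals like those the paper describes.

    `profile` is assumed to carry Twitter-API-style fields; the chosen
    features are illustrative, not the authors' published set.
    """
    name = profile["screen_name"]
    followers = profile["followers_count"]
    following = profile["friends_count"]
    return {
        # Accounts created shortly before an episode are suspect
        "days_before_event": (event_date - profile["created_at"]).days,
        # Strange characters in the handle (underscore is legitimate)
        "strange_chars": sum(not (c.isalnum() or c == "_") for c in name),
        # Few followers but following many -> ratio well below 1
        "follow_ratio": followers / following if following else float("inf"),
    }
```

A low `follow_ratio` combined with a small `days_before_event` would flag the altruistic-reciprocity pattern described above.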
The model developed by Molina and Gómez with the collaboration of Imperial College assigns a numerical score to the probability that the analyzed tweet is a hoax. In this way, a numeric label (e.g., 50% probability of being fake news) or a color code (red or green) can warn readers that they might be facing an intentional lie.
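The reader-facing warning could be as simple as mapping the model's probability to a color and a percentage. The 0.5 cut-off and the two-color scheme below are assumptions for illustration, not the authors' published design.

```python
def hoax_warning(probability: float) -> str:
    """Turn a model's hoax probability into a reader-facing warning.

    Assumed design: red at or above 50%, green below; the threshold
    and wording are illustrative, not taken from the paper.
    """
    color = "red" if probability >= 0.5 else "green"
    return f"{color}: {probability:.0%} likelihood of being fake news"
```

For example, `hoax_warning(0.85)` would produce a red warning, while a tweet scored at 0.1 would be shown in green.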
Current manual verification systems cannot keep up with all the traffic that is generated, a shortfall that hoax spreaders exploit to flood the networks. With a mathematical algorithm, users could at least be warned about what kind of information they are receiving and how likely it is to be a falsehood, says Molina.
Twitter warns of the limitations of these programs
The social network has welcomed the research and points out that, for projects of this kind, it makes the data from its application programming interface (API) freely and publicly available. "No other service or platform does this," company sources claim.
However, the social network recalls that its head of site integrity, Yoel Roth, has already warned of flaws and failures in research into automatically created bots or messages. "We see a lot of research (...) that performs thorough assessments of account behaviors using only public signals, such as location (if cited), account content, tweet frequency, and followed accounts. To be clear: none of these indicators is sufficient to determine attribution definitively. Searching for accounts that look like those already disclosed is an equally flawed approach, as many bad actors mimic legitimate accounts to appear credible. This approach also often miscaptures legitimate voices that share a particular political point of view with which one disagrees."
"Before participating in this type of research and making these claims, ethical standards should be considered. Doing otherwise does not promote public knowledge, but risks profoundly undermining confidence in public debate and conversation," says Roth.