- A new app and browser extension uses computer power and human fact-checking to try to uncover misinformation online.
- One industry expert likens fake news to spam.
- Once disinformation-fighting software is released, adversaries can test the products and eventually find ways around them.
A newly launched app that claims to help spot fake news faces major technical and logistical hurdles. Fighting disinformation with artificial intelligence (AI) is difficult, much like trying to tame spam.
UK-based Logically rolled out its app and browser extension in the United States this week. The app uses a combination of computer power and human fact-checking to uncover misinformation online, arriving just in time for the fraught presidential election. The launch comes as tech companies like Facebook and Twitter wrestle with how to handle false information online, while the political discourse is charged with accusations of fake news and information warfare by foreign states.
“It’s a judgment call what spam is and what is fake news,” said Christian Huitema, a consultant focusing on privacy, in a phone interview. “Suppose I received an ad by email that asked me to give to the campaign of Donald Trump or to the Democrats. There are a bunch of people who think both are spam or one is and not the other. How do you know?”
Who Can We Trust?
Logically founder Lyric Jain hopes the app will restore public trust in online information.
“People are starting to distrust online experiences a lot,” Jain told Lifewire over the phone. “They don’t know who to trust on social media. It’s a very big democratic event that’s happening in November and we will try to give people context and access to a fact-checking service so that they can acquire evidence about certain claims.”
But artificial intelligence expert Huitema says that Logically and other companies trying to battle fake news online face tough challenges. He likens using computer algorithms to detect fake news to the never-ending battle against spam.
Another problem is that once disinformation-fighting software is released, adversaries can test it and eventually find ways around it. “It’s a constant battle,” Huitema added.
Humans to the Rescue
To overcome these obstacles, Jain says, his company pairs computers with humans to spot fake news, boasting that “we’ve got the world’s largest team of fact-checkers.”
Logically’s team of 45 fact-checkers, based around the world, relies on automation for much of its work. AI helps them identify fakes, for example, by using algorithms to spot masked faces, an indication that a video has been edited with the intent to deceive viewers, Jain said. Once a possible fake is spotted, the real work begins.
Sometimes, “it’s a simple matter of a database lookup,” he said.
In other cases, the fact-checkers make phone calls to verify facts or reach out to law enforcement agencies. The fact-checkers have backgrounds in journalism or in open source intelligence.
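Logically hasn’t published how its automated image screening works. As an illustration only, here is a minimal sketch of one common building block for flagging manipulated images: a perceptual “average hash” compared by Hamming distance, written in pure Python over a grayscale pixel grid. Every name, value, and threshold here is hypothetical; real systems use trained models on full-resolution images.

```python
def average_hash(pixels):
    """Compute a simple perceptual hash of an 8x8 grayscale grid.

    pixels: list of 8 rows, each a list of 8 brightness values (0-255).
    Returns a 64-bit integer: bit i is 1 if pixel i is above the mean.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for i, p in enumerate(flat):
        if p > mean:
            bits |= 1 << i
    return bits

def hamming_distance(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Two nearly identical images hash close together; a heavily
# edited one drifts further away (any cutoff would be tuned on data).
original = [[10 * (r + c) for c in range(8)] for r in range(8)]
slightly_edited = [row[:] for row in original]
slightly_edited[0][0] += 5  # tiny brightness tweak

d = hamming_distance(average_hash(original), average_hash(slightly_edited))
print("distance:", d)  # a small distance suggests the same underlying image
```

The appeal of this approach is that near-duplicates of a known original (recompressed, lightly retouched) still hash close to it, so a copy circulating with misleading edits can be matched back to its source for human review.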
Logically uses AI models alongside natural language processing to analyze text, network, and metadata. The software reviews more than 500,000 articles per day and evaluates indicators of an article’s accuracy, as well as the specific claims contained within the text, according to Jain.
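Logically hasn’t disclosed its models. As a hypothetical illustration of what “evaluating indicators of an article’s accuracy” can mean at the simplest level, here is a rule-based scorer that checks a few surface signals; every rule, phrase list, and score here is invented for the example, and a production system would use trained NLP classifiers instead.

```python
import re

# Hypothetical surface signals; a real system would learn these
# from labeled data rather than hard-code them.
CLICKBAIT = re.compile(r"\b(you won't believe|shocking|doctors hate)\b", re.I)
ATTRIBUTION = re.compile(r"\b(according to|said|reported|told)\b", re.I)

def credibility_indicators(headline, body):
    """Return (score, reasons). A lower score means more warning signs."""
    score, reasons = 0, []
    if headline.isupper():
        reasons.append("all-caps headline")
    else:
        score += 1
    if CLICKBAIT.search(headline):
        reasons.append("clickbait phrasing")
    else:
        score += 1
    if ATTRIBUTION.search(body):
        score += 1
    else:
        reasons.append("no attributed sources")
    return score, reasons

score, reasons = credibility_indicators(
    "SHOCKING CURE DOCTORS HATE",
    "A miracle pill melts fat overnight.",
)
print(score, reasons)  # low score, with all three warning signs listed
```

Even a toy scorer like this makes the workflow concrete: cheap automated signals triage half a million articles a day down to a queue that human fact-checkers can actually review.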
Lots of Data
But analyzing all that data is a difficult task. “You start by building a body of knowledge and you are probably going to use machine learning to investigate those articles,” said Huitema. “The problem is that training the algorithms is very difficult.”
Jain said the software has proven its abilities. During the 2019 India general election and the recent UK election, the company’s software spotted unreliable information and sources, including bots and bad actors. Since March, the company said it has been working on identifying COVID-related information threats.
Jain hinted that Logically is working with US government agencies to fight misinformation. “We are working with stakeholders at the state and federal level to help identify problematic content,” he said in an interview. “We maintain threat intelligence on who the bad actors are to be able to have predictive intelligence about their actions.”
But during the interview, a spokesperson for Logically interjected that the company was “not ready to announce” partnerships with US government agencies. In any case, Jain said, he believes the company’s main focus should be helping ordinary people rather than governments.
“Citizens need to be confident about making decisions based on real information,” he said. “We want to make sure they are not being hoodwinked.”