Fact-checking by AI is not only feasible but also necessary, and it’s almost the only solution — ONLY MAGIC CAN DEFEAT MAGIC

Henry Wu
10 min read · Jun 2, 2023


The proliferation of disinformation on the Internet has become a cancer on civilization: highly damaging and difficult to eradicate. To further complicate matters, the rapid rise of AI tools such as ChatGPT, Bard, and Midjourney has exacerbated the situation.

While AI technology is no natural panacea for combating disinformation, it has the potential to become a major accomplice if left unchecked. AI-generated content (AIGC) has significantly lowered the threshold for creating text, graphics, and videos, making production fast, cheap, and proficient.

However, there is one significant drawback: the potential dissemination of false information. This “small problem” acts as a gap in the dam, posing a threat to our trust in AI and potentially distorting our perception of reality. This is almost our last chance to take action before it is too late.

Photo by NASA on Unsplash

Why is disinformation so pervasive and difficult to eradicate?

First and foremost, it is essential to acknowledge that disinformation cannot be completely eradicated. Disinformation has existed throughout human civilization and will likely persist until its eventual demise. Two key reasons contribute to this enduring presence.

Firstly, disinformation is a symptom; at its core, it is an attempt to manipulate individuals or groups: information can influence opinions, and opinions can influence actions. By controlling the source of information, one can control emotions and behavior. Disinformation is unparalleled in its efficiency at manipulating people. As long as the desire to manipulate others exists in society, disinformation will remain challenging to eliminate.

In today’s world, creators of disinformation fall into two categories: traffic chasers and brainwashers.

The former group primarily seeks fame and profit by generating online traffic. When they realize that disseminating false information brings higher returns, they willingly engage in the practice. Conversely, on the rare occasions when they discover that spreading truth yields a higher return on investment, they readily shift their focus to truth. Profit drives their actions, making them opportunists.

The latter group has a clear objective: to solidify or alter people’s beliefs through the use of disinformation. As their views struggle to gain traction in fair, open, and transparent public opinion spaces, they resort to disinformation, including conspiracy theories, as a means of persuasion. Distorted by disinformation, individuals may come to hate a group of people they would not otherwise hate, or admire someone they should not; these emotions inevitably affect their behavior in the real world and may well lead to tragedy.

Beyond the fact that the motive at the core of disinformation is hard to eliminate, the second reason disinformation persists is the difficulty in establishing a public consensus on what precisely constitutes “disinformation.” Both subjective and objective factors contribute to this challenge.

From a subjective standpoint, identifying false information requires determining whether a person knowingly fabricated and disseminated it. Proving that is costly, and most of the time impossible: creators of disinformation can always quibble that they believed it was the truth.

Objectively, statements that are factually incorrect can be considered false information. Nevertheless, given that no individual possesses the complete truth, one must question whether everyone inadvertently shares false information. Compounding the issue, skilled manipulators often employ plausible representations or partial truths rather than outright lies, making it challenging to discern deliberate deception from a genuine lack of knowledge. This distinction is difficult to ascertain without incurring substantial expenses.

Sooner or later, we have to accept the reality that whether a representation is “disinformation” or not, very often the conclusion is not a binary 0 or 1, but a degree. Instead of striving for an unattainable “absolute truth,” we should pursue a “dynamic truth.”

This approach involves acknowledging that any representation will inevitably deviate from the truth. By listening to multiple perspectives and applying logical reasoning and experience, we can arrive at a representation that aligns more closely with the truth. Importantly, this representation of what we consider to be “the truth” is not unchallengeable, but is subject to constant revision as new information comes to light. What is most frightening is not false information, but information that is not open to question. Once this is accepted, we will find that in the war against disinformation, AI could help us a great deal.

Advantages and Disadvantages of Traditional Fact-Checking

Fortunately, fact-checking has been a longstanding practice in combating disinformation, and history has shown that truth ultimately prevails. As a saying often attributed to Abraham Lincoln goes:

“You can fool all the people some of the time, and some of the people all the time, but you cannot fool all the people all the time.”

However, uncovering the truth requires diligent effort. In modern society, those who specialize in fact-checking full-time can be broadly categorized into two groups: those who verify information before publication and those who fact-check after publication.

The former includes reputable mainstream media outlets like the New Yorker, which employ dedicated fact-checkers to ensure the accuracy of information through interviews, research, and verification prior to publication. The latter group comprises websites like factcheck.org, focused on debunking popular false information circulating in the public domain. In addition to professional fact-checkers, many individuals come forward to challenge false information that affects them personally or pertains to their respective fields of expertise.

Various methods exist for fact-checking, with a common approach involving a presumption of guilt until supporting evidence is found. For instance, when quoting literature or individuals, it is crucial to verify the authenticity of those quotes. When new information is presented, multiple reliable sources and relevant circumstantial evidence should be sought. If a new interpretation of an event is put forth, it must withstand logical scrutiny.

When conducted properly, traditional fact-checking boasts the advantage of high accuracy and comprehensive information. It may not always lead to definitive conclusions, but it provides readers with a broader understanding of the evidence supporting or challenging a claim, ultimately offering a more nuanced perspective. In many ways, it resembles a courtroom trial, albeit with the understanding that conclusions may be inconclusive rather than an absolute “guilty” or “innocent” verdict.

However, traditional fact-checking also has its drawbacks. As a former journalist and a professional in the mass communication industry for nearly a decade, I am well aware of the cost-ineffectiveness of fact-checking. The barrier to producing disinformation is incredibly low — all one needs is a keyboard. In contrast, fact-checking demands significant time and resources, including information gathering, logical analysis, interviews, and on-site investigations. To excel in this field, practitioners must possess extensive education, professional training, or experience, making it a costly endeavor.

Ironically, despite the high costs and stringent requirements of fact-checking, its economic return remains relatively low. In terms of generating online traffic, false information consistently surpasses fact-checked information unless platforms actively intervene. Consequently, the dedicated fact-checking industry, particularly the first category, has become financially burdensome, leading to layoffs despite their invaluable contributions.

Readers cannot be entirely blamed for their susceptibility to misinformation. Skillfully crafted disinformation specifically exploits human vulnerabilities, evoking emotions like anger, anxiety, fear, or gullibility to encourage readers to engage and share. Unless individuals receive deliberate training, it is difficult for ordinary people to resist such temptations.

Furthermore, individuals tend to gravitate towards information that instills anxiety and fear, even when its veracity is unknown. Evolution has selected against skepticism towards negative news, since the cost of a type II error (dismissing a genuine threat) is far higher than that of a type I error (believing a false one). For instance, in ancient times, if the statement “there is a tiger in the woods” turned out to be false 99 times but true once, those who consistently believed it survived, while those who doubted it, although usually correct, risked being devoured by the tiger.

In short, disinformation is easy to create, distribute on a large scale, and can be financially lucrative. On the other hand, fact-checking requires a higher level of expertise, cannot be conducted in bulk, and offers limited financial returns. Consequently, in the short term, disinformation tends to spread more easily. This situation seemed inevitable, until AI came along.

Defeating magic with magic: fact-checking by AI

The year 2023 will be remembered as one of the most important years in the history of mass communication. This is the year AIGC first appeared at scale: ChatGPT surpassed 100 million users. This irreversibly affects the information pool of human civilization, and the way humans create and access information has come to a crossroads.

If we do nothing, bad money will drive out good in the information market: disinformation makers, with the help of AI platforms such as ChatGPT, Bard, and Midjourney, will be able to produce fake text, images, and video at very low cost. Because false information is always eye-catching, its makers often attract more traffic, which incentivizes them to keep scaling up production, creating a vicious circle.

When disinformation, empowered by AI, comes in like a tsunami, human fact-checkers are like lighthouses on the shore: a symbol of human civilization, but of little help in warding off the wave. A shield produced in a handmade workshop cannot stop an industrially produced bullet. When the enemy uses firearms, continuing to fight with cold steel wins respect, but not the war.

Since AI can be used by disinformation makers, it can also be used by the fact-checking industry. This is technically feasible: AI models do not inherently know false from true; they simulate human behavior based on vast amounts of learned data. Given the right prompt, AI can produce the corresponding results. Since it can simulate the process of creating false information, it can also simulate the fact-checking process.

As Michelangelo said, “The statue was originally in the stone; I just removed the unwanted parts.” Very often, the truth itself already exists on the vast web, or in the data sources the AI has learned from. It is just mixed in with the disinformation, and we need a way to tell the AI how to identify and remove the disinformation, just as a fact-checker does.

Fact-checkers are not soothsayers who rely on crystal balls to determine truths and lies; they have a proven, scientific methodology: comparing historical information, performing logical reasoning on the text itself, cross-checking with authoritative sources, interviewing stakeholders, and so on. Some of these methods, such as communicating with real people to seek evidence, are hard for AI, but others are feasible. Maybe not as good as a human’s work, but perhaps good enough.

For example, one of the most common types of disinformation is to mislead readers by placing an irrelevant historical picture in a recent news story and drawing a false conclusion. For human fact-checkers, one way to verify this kind of false information is to run a reverse image search, find the earliest time the picture appeared on the Internet and the scene it originally depicted, and judge its authenticity accordingly. This is a basic function of search engines, but unfortunately, most of the public is unaware of it. For AI, this skill is easy to learn. I was glad that Google mentioned this approach at Bard’s launch: it is great, but it would have been better earlier.
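One recognized technique behind reverse image search is perceptual hashing: two visually similar images produce hashes that differ in only a few bits, even if the files differ byte for byte. The sketch below implements the simple "average hash" variant over grayscale pixel matrices as an illustration of the idea; it is an assumption for demonstration, not the algorithm any particular search engine uses, and a real pipeline would first decode and downscale the image files.

```python
def average_hash(pixels: list[list[int]]) -> int:
    """Hash a grayscale matrix: one bit per pixel, set if above the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count differing bits between two hashes; small distance = similar image."""
    return bin(a ^ b).count("1")

# Two nearly identical 4x4 "images" and one unrelated one.
original = [[10, 10, 200, 200]] * 4
recompressed = [[12, 9, 198, 205]] * 4   # same scene, slight noise
unrelated = [[200, 10, 200, 10]] * 4

assert hamming_distance(average_hash(original), average_hash(recompressed)) == 0
assert hamming_distance(average_hash(original), average_hash(unrelated)) > 0
```

Because recompression or mild noise barely changes which pixels sit above the mean, a reused photo can be matched to its earliest appearance even after re-encoding.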

Image search is just one of many means of fact-checking. By analogy, if we distill enough fact-checking methods from the experience of human fact-checkers and convert them into prompts, then theoretically, AI will be able to imitate humans and fact-check input text. In this way, AI can produce fact-checking information just as it produces false information: with a low threshold, low cost, high speed, and at scale.
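The conversion of human methods into prompts might look like the minimal sketch below: a checklist of verification steps wrapped into a reusable prompt template. The checklist items, wording, and function name are illustrative assumptions, not a tested or production prompt.

```python
# Illustrative checklist distilled from common human fact-checking practice.
CHECKS = [
    "Verify that every quoted person or publication actually said the quote.",
    "Seek at least two independent, reputable sources for each new claim.",
    "Check whether attached images or videos predate the event they describe.",
    "Test the claim's internal logic: do the stated facts support the conclusion?",
    "State clearly which points could not be verified.",
]

def build_fact_check_prompt(claim: str) -> str:
    """Wrap a claim in a fact-checking checklist prompt for an LLM."""
    checklist = "\n".join(f"{i}. {c}" for i, c in enumerate(CHECKS, 1))
    return (
        "You are a fact-checking assistant. Apply the following checklist "
        "to the claim below, then give a verdict with your reasoning.\n\n"
        f"Checklist:\n{checklist}\n\n"
        f"Claim: {claim}\n"
    )

prompt = build_fact_check_prompt(
    "A photo shows a shark swimming on a flooded highway."
)
```

Because the checklist lives in one place, new verification methods learned from human fact-checkers can be appended without rewriting the pipeline.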

Conclusion

To sum up, using fact-checked information created by AI to counteract false information created by AI is feasible and may be the only way.

However, even the most optimistic must admit that AI fact-checkers also won’t completely eradicate disinformation. AI’s abilities, while significant, may not reach the level of the most skilled human fact-checkers in analyzing complex issues. AI can excel in simpler cases, but more intricate and nuanced matters may still require the expertise of experienced human fact-checkers.

Rather than replacing human fact-checkers, AI can complement their work by handling simpler and lower-level disinformation. This allows human fact-checkers to focus on more complex and damaging forms of false information. AI’s involvement in fact-checking has the potential to raise the threshold for disinformation production and contribute to the overall battle against false information.

It is important for AI to adhere to certain principles when engaging in fact-checking:

1. Clearly disclose that the analysis was performed by AI.

2. Clearly explain the reasoning behind the conclusion.

3. Clearly state the limitations of the conclusion.

These principles are critical, since the greatest value of fact-checking to society is not just to inform the public of a conclusion, which can be wrong, but to foster a mindset of critical thinking among the public. If individuals possess the ability to discern the credibility of information, the destructive power of disinformation can be minimized. In this sense, everyone has the potential to be a fact-checker, and AI can be a helper.
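One way to make the three principles non-optional is to bake them into the output type itself, so no AI-generated verdict can be rendered without the disclosures. The sketch below is an assumed design, with hypothetical field names, not an existing schema.

```python
from dataclasses import dataclass

@dataclass
class AIFactCheckResult:
    claim: str
    verdict: str                 # e.g. "likely false", "unverified"
    reasoning: list[str]         # principle 2: how the conclusion was reached
    limitations: list[str]       # principle 3: what the check could not cover
    produced_by_ai: bool = True  # principle 1: always disclosed

    def render(self) -> str:
        """Produce a reader-facing report that includes all three disclosures."""
        lines = [f"Claim: {self.claim}", f"Verdict: {self.verdict}"]
        if self.produced_by_ai:
            lines.append("Note: this analysis was performed by an AI system.")
        lines += [f"Reasoning: {r}" for r in self.reasoning]
        lines += [f"Limitation: {lim}" for lim in self.limitations]
        return "\n".join(lines)
```

Because `reasoning` and `limitations` are required constructor arguments, a pipeline built on this type cannot emit a bare verdict with no explanation attached.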

In summary, leveraging AI technology for fact-checking to combat disinformation, even that generated by AI itself, and to promote critical thinking is technically feasible, situationally necessary, and perhaps the most viable option. Always, only magic can defeat magic.

This article is originally written by Henry Wu, translated by DeepL, polished by ChatGPT, proofread by Grammarly, and finally edited by Henry Wu on June 1st, 2023

Photo by Erwan Hesry on Unsplash


Henry Wu

Indie Developer/ Business Analyst/ Python/ AI/ Former Journalist/ Codewriter & Copywriter