
Google Snippets Bring Fake News: Google’s Approach to Fake News

Being exposed to misinformation campaigns is one of the many risks inherent in internet use. 

The proliferation of websites spreading false information, often known as “fake news,” is posing a major challenge to both casual readers and sophisticated algorithms like Google’s. 

The term was named Macquarie Dictionary’s Word of the Year for 2016, and the trend gained significant traction during the US election and its aftermath, influencing political discourse across the spectrum.

A significant concern with fake news is its deceptive nature. Often crafted to resemble legitimate articles, it can quickly spread through social media, making it difficult to debunk and control the misinformation. A notable example involves fake news websites created by 20th Century Fox to promote a horror movie, successfully deceiving thousands with fabricated anti-Trump stories that spread like wildfire.

Furthermore, this phenomenon presents a significant challenge for Google’s algorithms, which are designed to provide accurate information. Fake news stories have even infiltrated Google’s featured snippets, highlighting the vulnerability of even the most advanced systems.

This raises the question: why is this issue so critical for Google, and what steps are they taking to combat these deceptive websites? Read on to delve deeper into this complex challenge and explore potential solutions.

How Are Fake News Sites Affecting Google?

The fight against fake news takes a dangerous turn when it fools even Google’s most advanced algorithms. Not only will users encounter fabricated stories in their searches, but the very act of checking their veracity can backfire. While skepticism towards certain websites exists, Google snippets often hold an undeserved air of truth. We readily accept the information presented there as concise fact.

However, when Google falls prey to fake news, it unwittingly amplifies the deception. The snippet removes the context of the unreliable source, making the misinformation even more believable. This issue extends to Google’s “In the News” section, where a post-election story falsely claimed a landslide victory for both the Electoral College and popular vote.

Unlike curated platforms such as Google News, snippets and “In the News” rely on algorithms that scour the web indiscriminately. While adept at combating spam, they struggle against the evolving tactics of sophisticated fake news sites. These sites are now deliberately optimizing their SEO to appear in snippets and news carousels, exploiting the trust placed in Google’s algorithms.

This vulnerability has not gone unnoticed. Malicious actors are actively improving their SEO, posing a significant threat to our ability to discern truth from fiction. It’s crucial for Google to adapt its algorithms to stay ahead of these evolving tactics and protect the public from the harmful consequences of fake news.

What is Google Doing About Fake News Sites?

As the internet’s primary gateway to information, Google faces a significant challenge in combating fake news. To address this issue, they’ve implemented several key strategies:

  1. Fact-Checking Tagging:
  • Google’s fact-checking tool helps identify potentially misleading content.
  • When you open an article’s expanded story box, you may see a “fact check” tag highlighting potential inaccuracies.
  • The tool relies on trusted fact-checking organizations and schema.org ClaimReview markup (a markup sketch follows this list).
  • This instant flag prompts users to double-check information before sharing it.
  • Users can click the tag to access fact-checking sites and understand why an article is flagged.
  2. Restricting Ad Sources:
  • Google has updated its advertising policies to disincentivize fake news sites.
  • While details remain limited, they aim to restrict ads on sources lacking transparency about publishers, content, and website purpose.
  • This reduces the financial incentive for fake news sites to operate.
  • Without AdSense revenue from viral stories, their motivation to publish diminishes.
  • Google is actively analyzing and purging untrustworthy sources within its network.
  • They have already taken action against 340 policy-violating sites and pledge continued vigilance in blocking such websites.
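
To make the fact-check tagging concrete: ClaimReview is an open schema.org vocabulary that fact-checkers embed in their pages so that crawlers can attach a verdict to a specific claim. Below is a minimal sketch of such markup, assembled in Python purely for illustration; the claim, URLs, organization name, and rating values are all hypothetical.

```python
import json

# Hypothetical ClaimReview markup (schema.org) that a fact-checking page could
# embed as JSON-LD so that search engines can attach a "fact check" tag.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example-factchecker.org/review/landslide-claim",    # hypothetical review page
    "claimReviewed": "Candidate X won the Electoral College and the popular vote in a landslide",
    "itemReviewed": {
        "@type": "Claim",
        "appearance": "https://example-fake-news.site/landslide-story"  # hypothetical source of the claim
    },
    "author": {"@type": "Organization", "name": "Example Fact Checker"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "False"
    }
}

# In practice the JSON-LD blob sits inside a <script type="application/ld+json"> tag.
print(json.dumps(claim_review, indent=2))
```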

Google’s latest effort to combat fake news is Google Owl. Users may now:

  • Flag search autocomplete suggestions
  • Flag featured snippets

In addition, authority signals will increase in importance relative to context signals.

Through Owl, objectionable content will not surface as easily, so in that respect the measures serve their primary purpose. At the same time, we should expect the reporting system to be abused, and on any contentious issue several problems will follow:

  • Tyranny of the minority
  • Tepid answers to controversial questions
  • Filter bubble problem deepens
  • Google continues to suffer criticism for not doing enough

Similarly, in search, culling autocomplete suggestions and featured snippets means that a few highly motivated zealots can narrow public awareness and perception of an issue down to an inoffensive spectrum of views devoid of insight and analysis.

What More Might Google Do?

Google’s current ranking algorithm leans heavily on link popularity, which can be exploited by creators of fake news. To address this and solidify their commitment to truthfulness, Google may implement significant changes in the next algorithm update. This could include:

  1. Ranking Signal Modifications: Adjusting the algorithm to prioritize sources with established credibility over pure popularity. This may involve factoring in metrics like journalistic standards, author expertise, and fact-checking practices (a rough sketch of such a blended score follows this list).
  2. Enhanced User Feedback: Expanding the avenues for users to report and flag suspicious content directly within search results. This could involve dedicated buttons, comment sections, or AI-powered tools to analyze user-generated feedback.
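
As a rough illustration of the first point, the sketch below blends link popularity with credibility-style signals into a single score. The signal names, weights, and formula are assumptions made for this example, not Google’s actual ranking algorithm.

```python
from dataclasses import dataclass

@dataclass
class PageSignals:
    link_popularity: float      # 0..1, e.g. a normalized PageRank-style score
    source_credibility: float   # 0..1, journalistic standards / track record (hypothetical signal)
    author_expertise: float     # 0..1 (hypothetical signal)
    fact_check_history: float   # 0..1, share of the source's past claims rated accurate (hypothetical)

def ranking_score(p: PageSignals,
                  w_pop: float = 0.4, w_cred: float = 0.3,
                  w_auth: float = 0.15, w_fact: float = 0.15) -> float:
    """Blend popularity with credibility signals; the weights are illustrative only."""
    return (w_pop * p.link_popularity
            + w_cred * p.source_credibility
            + w_auth * p.author_expertise
            + w_fact * p.fact_check_history)

# A heavily linked but low-credibility page can now be outranked by a credible one.
viral_fake = PageSignals(link_popularity=0.9, source_credibility=0.1,
                         author_expertise=0.1, fact_check_history=0.0)
reputable = PageSignals(link_popularity=0.6, source_credibility=0.9,
                        author_expertise=0.8, fact_check_history=0.9)
print(ranking_score(viral_fake), ranking_score(reputable))  # 0.405 vs 0.765
```

Tuning such weights well is of course the hard part; the point is only that credibility can enter the score explicitly instead of being inferred from links alone.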

These potential changes reflect Google’s growing recognition of the fake news problem and their commitment to providing users with reliable information. By refining ranking signals and incorporating user feedback, Google can enhance its ability to detect and demote misleading content, paving the way for a healthier and more trustworthy online environment.

Google Rolls Out Fact-Check Tool For Images Globally

The Google Search team introduced fact-checking on images at Google I/O 2023. With the proliferation of fake news and misinformation, it’s high time we had tools that can identify fake images, videos, audio, code, or text. While other generative AI services like Bard may be useful, the pressing need of the hour is to tackle disinformation and misinformation head-on.

Google’s extensive experience in AI and their expertise in organizing information make them an ideal candidate for this task. However, they initially missed an opportunity by focusing on what OpenAI was working on rather than on tooling to counter the adverse effects of GPT on society, elections, and more.

Google Fights Back Against Misinformation

Perhaps worst of all for Google, search quality suffers, and the criticism will likely continue. Take science as an example: web pages discussing research on socially sensitive topics like obesity, gender, and ethnicity will certainly trigger a flood of flags. The outrage machine will go on.

Google can do much better than be reactive towards fake news and the outrage it causes. Rather than try to be an arbiter of truth, it is more useful to approach it as a persuasion problem. They can quell outrage and be profitable at the same time.

Doctors are required to advise patients before surgery of the risks, and not just focus on the benefits. In a similar vein, Google can get users to see and consider, however briefly, the opposite of the search result that they want.

To do this, Google can insert search results presenting the opposite side of the argument. Already, Google shows suggested searches beneath clicked results when users go back to SERPs. This feature can be retooled to show pages presenting the opposite side of contentious issues.

Let’s call it the persuasion bar.
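
As a thought experiment, here is a minimal sketch of how the related-searches slot could be retooled into a persuasion bar: given the stance of the clicked result on a contested claim, surface the strongest candidate arguing the opposite side. The stance labels, URLs, and scores are hypothetical inputs; reliably classifying a page’s stance is a hard problem in its own right and is not shown here.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Result:
    url: str
    rank_score: float   # relevance/quality score from the existing ranker (hypothetical)
    stance: str         # "pro" or "con" on the contested claim (hypothetical label)

def persuasion_bar(clicked: Result, candidates: list[Result]) -> Optional[Result]:
    """Return the best-ranked result that argues the opposite side of the clicked one."""
    opposite = "con" if clicked.stance == "pro" else "pro"
    opposing = [r for r in candidates if r.stance == opposite]
    return max(opposing, key=lambda r: r.rank_score, default=None)

clicked = Result("https://example.org/argues-pro", 0.92, "pro")
pool = [
    Result("https://example.org/argues-con-weak", 0.40, "con"),
    Result("https://example.org/argues-con-strong", 0.75, "con"),
]
print(persuasion_bar(clicked, pool))  # picks the strongest opposing page
```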

There are 3 benefits to showing a persuasion bar:

  • Silences the outrage machine
  • Improves the search experience
  • Increases ad revenue

The greatest good that Google can do for people who hold wrong beliefs about the world is to persuade them to reconsider their position. Persuasion results invite these users to do just that. We don’t expect it to be effective 100% of the time, just often enough to cajole a closed mind open a crack.

This feature could be rolled out first for topics that are controversial, as that’s where it can have the greatest impact. Controversial topics can be identified through a high volume of flags over a short time for given autocomplete suggestions and featured snippets.
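
One way that trigger might be operationalized, sketched here with made-up numbers: count user flags per suggestion or snippet over a sliding time window and mark the topic as controversial once the count crosses a threshold. The one-hour window and 100-flag threshold are placeholders, not known Google values.

```python
import time
from collections import deque
from typing import Optional

class ControversyDetector:
    """Flag-rate heuristic: a topic counts as controversial if it receives
    more than `threshold` user flags within the last `window_seconds`."""

    def __init__(self, window_seconds: float = 3600.0, threshold: int = 100):
        self.window_seconds = window_seconds
        self.threshold = threshold
        self.flags: dict[str, deque] = {}

    def record_flag(self, topic: str, timestamp: Optional[float] = None) -> None:
        ts = time.time() if timestamp is None else timestamp
        self.flags.setdefault(topic, deque()).append(ts)

    def is_controversial(self, topic: str, now: Optional[float] = None) -> bool:
        now = time.time() if now is None else now
        q = self.flags.get(topic, deque())
        while q and now - q[0] > self.window_seconds:   # drop flags outside the window
            q.popleft()
        return len(q) > self.threshold

detector = ControversyDetector()
for _ in range(150):
    detector.record_flag("contested featured snippet")
print(detector.is_controversial("contested featured snippet"))  # True
```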

Human analysts can still hold the power to override the algorithm, so that teenagers innocuously doing homework research don’t stumble upon the dark underbelly of the internet.

Critics will then move from decrying Google’s lack of action on fake news to complaining about the unpleasant results in the persuasion bar. Here I have to make an assumption, which I find reasonable.

On any given controversial topic, there are valid points on either side. And who better than Google to surface those valid points, cogently argued? The critics of such a measure can’t have it both ways, wanting to change their opponents’ minds while refusing to reconsider their own views.

How AI Is Being Used To Fight Fake News

Generative AI’s ability to create convincing content blurs the line between human and machine authorship, highlighting the importance of restoring trust in news production.

Current global discussions about AI governance and standards for trustworthy and responsible AI intersect with concerns about preserving journalistic integrity and ensuring the reliability of information accessible to the public.

These powerful technologies bring a new level of complexity to the challenge of fighting false information in the media, which Malaysia has grappled with for years.

Malaysia is considering regulating AI applications and platforms, covering crucial aspects such as data privacy and public awareness of AI use. The legislation wouldn’t hinder the progress of AI technology. It’s about balancing risk management and fostering innovation to ensure AI’s continued positive impact on the economy and society.

Final Thoughts

Google has done a fine job of persuading humongous companies and SEOs of the world to obey webmaster guidelines, through a combination of the carrot and the stick.

This may yet be the greatest challenge to its position, for users are not motivated by the same things as companies, which calls for a brand new cognitive science toolkit. We may not be able to turn lazy Elbonians around, but cultivating a society of open-mindedness is worthwhile irrespective of financial rewards.

Author: Christopher Smith

SEO and linkbuilding expert with more than 7 years of experience in website search engine optimization and a specialist in backlink promotion. Head of linkbuilding products at GREAT Guest Posts, a global linkbuilding platform. He regularly participates in SEO conferences and hosts webinars on website optimization, marketing tools, and strategies and trends in backlink promotion.
