You’re reading Lessons Learned, which distills practical takeaways from standout campaigns and peer-reviewed research in health and science communication. Want more Lessons Learned? Subscribe to our Call to Action newsletter.
Concerned that recent improvements in generative AI might be fueling a new generation of content farms, analysts at NewsGuard, a news-rating organization, recently attempted to estimate the proliferation of such websites. They ran keyword searches for phrases commonly produced by AI chatbots across several popular search engines and a media monitoring platform, then used text classification to verify that the sites were mostly or entirely generated by AI. They reported their preliminary findings, which have not been peer reviewed, on their website this May.
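For readers curious about what that first, phrase-matching step might look like in practice, here is a minimal sketch in Python. The phrase list and candidate URLs are illustrative assumptions, not NewsGuard's actual search terms, and the real verification step relied on a separate text classifier:

```python
import urllib.request

# Telltale error messages that chatbots sometimes leave behind in
# machine-generated articles. This list is an illustrative assumption,
# not NewsGuard's actual set of search phrases.
AI_TELLTALE_PHRASES = [
    "as an ai language model",
    "i cannot fulfill this request",
    "my knowledge cutoff",
    "i cannot complete this prompt",
]

def telltale_hits(url: str) -> list[str]:
    """Fetch a page and return any telltale AI phrases found in its text."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        text = resp.read().decode("utf-8", errors="ignore").lower()
    return [phrase for phrase in AI_TELLTALE_PHRASES if phrase in text]

if __name__ == "__main__":
    # Hypothetical candidate list; any hit only flags a site for closer
    # review (e.g., by a classifier or a human rater), it proves nothing.
    for url in ["https://example.com"]:
        hits = telltale_hits(url)
        if hits:
            print(url, "->", hits)
```

Note that in NewsGuard's workflow the keyword pass only surfaces candidates; confirming that a site is mostly or entirely AI-generated was a separate classification step.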
What they learned: In April, the NewsGuard team identified 49 news websites that appear to be entirely or mostly generated by AI software. The websites span seven languages and produce a high volume of content, including content on health topics.
Why it matters: Concerns that newly available and powerful AI tools “could be used to conjure up entire news organizations—once the subject of speculation by media scholars—have now become a reality,” according to NewsGuard analysts McKenzie Sadeghi and Lorenzo Arvanitis.
➡️ Idea worth stealing: Get into the habit of vetting news sites. First, do a deep dive into the outlet’s About Us section: Legitimate sites contain detailed information about the outlet, its parent company or funder, its leadership, its mission, and its ethical practices. Next, check that its content quotes sources, and that those sources are reputable and identifiable. And beware of odd domain names.
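The first two checks require human reading, but the last one (odd domain names) can be roughly triaged in code. The sketch below uses made-up red-flag patterns and a hypothetical domain; it is a crude heuristic, not a published vetting standard, and no substitute for the manual checks above:

```python
import re

# Crude red-flag patterns for domain names. These are illustrative
# assumptions, not criteria drawn from NewsGuard's report.
SUSPICIOUS_PATTERNS = [
    r"\d{2,}",                                   # long runs of digits
    r"(-\w+){2,}",                               # several hyphenated tokens
    r"(news|daily|times).*(news|daily|times)",   # stacked news-y keywords
]

def looks_odd(domain: str) -> bool:
    """Return True if the domain's first label matches any red-flag pattern."""
    label = domain.lower().split(".")[0]
    return any(re.search(pattern, label) for pattern in SUSPICIOUS_PATTERNS)

# Hypothetical examples: one flagged name, one unremarkable one.
print(looks_odd("breaking-news-daily-24.com"))  # True
print(looks_odd("example.com"))                 # False
```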
What to watch: Whether tools that detect AI-generated content can keep pace with how quickly such content is improving, and whether tools designed to make AI-generated content more transparent catch on.