You’re reading Lessons Learned, which distills practical takeaways from standout campaigns and peer-reviewed research in health and science communication.
- Study 1: 691 participants were shown political and economic statements (e.g., “40% of food in the US never gets eaten”), each followed by a real fact-check from checkyourfact.com that either confirmed or corrected the claim.
- Study 2: To show that this effect existed without an explicit fact-checking label, the study team showed 691 participants a marketing claim (e.g., “Sparboe Farms treats its chickens humanely”) followed by real articles that either confirmed or corrected it.
In both studies, participants rated how much they trusted the author of the fact-check or article, how much the fact-check surprised them, how much evidence they would need to believe it, and how exploitative it seemed.
What they learned: Across both studies, communicators who corrected claims were trusted less than communicators who confirmed them. People saw corrections as more surprising, more exploitative, and as requiring more evidence. These effects held regardless of participants’ prior beliefs about the facts and their political ideologies.
Why it matters: Health communicators are battling misinformation in a climate of low institutional trust. Understanding how to frame evidence-based information to maximize trust is essential to getting your message across.
➡️ Idea worth stealing: To build trust, uplift true information rather than correcting false information. When you must correct misinformation, frame the correction itself as positive. For example, if you are debunking an alternative medical therapy, leading with how much money the therapy costs believers can make your correction seem helpful and protective rather than threatening.
What to watch: How communicators learn to adapt their framing strategies to fight misinformation.