6 Ways Libraries Can Take On Science Misinformation

We often think of misinformation as a social media problem, something that festers in the chaotic wilderness of the web—far removed from the structured, carefully tended world of library-licensed content. But the reality is more complicated. Misinformation doesn’t just thrive in algorithmic echo chambers; it seeps into the broader knowledge ecosystem, shaping what gets cited, what gets reported, and ultimately, what people believe.

A recent report from the National Academies, Understanding and Addressing Misinformation About Science, unpacks how misinformation emerges, spreads, and embeds itself into public discourse. Reading it, I couldn’t help but consider the implications for libraries—not just as information providers, but as potential architects of knowledge systems that shape credibility, trust, and research habits.

If misinformation is an information problem, then libraries are well positioned to push back. But how? What role should libraries play in strengthening knowledge integrity? What practical steps can we take? Let’s explore some ideas based on the report.

Science is Iterative, Not Static—We Should Teach It That Way

People expect science to deliver hard truths, but the reality is messier. The knowledge-building process is iterative, meaning that what we know changes over time. This evolution is precisely where misinformation thrives—when outdated claims persist, when uncertainty is misrepresented as failure, or when new insights challenge political or personal beliefs. Shifts in consensus can feel destabilizing, particularly when they contradict deeply held worldviews. This is why misinformation isn’t just about wrong information—it’s about how people react to knowledge that changes.

Libraries can help reframe this. Instead of just providing access to research, we should be teaching users how to trace the evolution of ideas: What did we believe five years ago? What about twenty, or one hundred and twenty, years before that? How has the consensus shifted? What’s still being debated? Scientific progress isn’t (usually) about abrupt reversals—it’s about refining and improving our understanding. When people recognize this, they’re less likely to see changing knowledge as suspicious and more likely to see it as part of how science works.

This means looking beyond clickbait narratives and instead planting the seeds of inquiry. Instead of feeding people pre-packaged conclusions, libraries can cultivate a mindset of curiosity—encouraging users to ask better questions, follow the trajectory of research, and engage with complexity rather than retreating into skepticism. A well-placed question can do more to build resilience against misinformation than a thousand fact-checks. Libraries, as spaces of lifelong learning, are perfectly placed to foster this intellectual curiosity.

Trust is the Currency of Knowledge—And Libraries Are a Credibility Infrastructure

Trust in science isn’t uniform—it varies by discipline, by messenger, and by medium. Whom you hear it from and how you encounter it shapes not just what you think, but how you feel about it. Our challenge isn’t just making quality information available; it’s making it legible, contextualized, and accessible in ways that build confidence. How do we move beyond helping with course assignments to addressing the bigger challenge—helping people develop trust in knowledge when it impacts their health, their environment, or their community?

Libraries can serve as credibility infrastructure—not just curating trusted sources but shaping how people develop information habits. Trust is built through engagement, transparency, and participation in the research process. Instead of simply pointing to vetted sources, we might aim to demystify how they’re created—showing the rigor, debate, and self-correction that underpin scientific research. Trust isn’t about convincing people to believe the right things—it’s about equipping them with the skills to navigate uncertainty and complexity with confidence.

Libraries as Cognitive Firewalls—Anticipate Before You Correct

Misinformation spreads because it plays to emotions, reinforces biases, and fits neatly into existing mental models. By the time misinformation is corrected, it has often already shaped perception.

Libraries have spent decades advancing information and digital literacy—helping people assess and access sources, detect bias, and navigate evolving media landscapes. But misinformation demands more than reactive fact-checking.

Prebunking—teaching people to recognize misinformation tactics before they encounter them—is a stronger defense. Libraries could embed this into research consultations, workshops, and search strategies. Instead of just flagging unreliable sources, can we help people recognize how misinformation exploits uncertainty, manipulates trust, and distorts debate—through tactics like false balance and manufactured doubt (see Chapter 3 of the report)? Helping people find quality information isn’t enough. We need to equip them to recognize when they’re being misled.

Track, Don’t Just React—Misinformation Follows Patterns

Misinformation isn’t random; it follows predictable cycles. Anti-vaccine narratives, climate denialism, and swings between AI doomism and AI over-hype don’t emerge from nowhere—they resurface in waves, repackaging the same core claims with minor variations. Each time a new technology, public health crisis, or scientific breakthrough appears, misinformation adapts, borrowing from past tactics and reintroducing doubt under a new guise.

Instead of waiting for misinformation to take hold, could libraries play a more strategic role in tracking emerging patterns? This means identifying misinformation trends before they become dominant narratives and using this intelligence to refine search strategies, collection priorities, teaching, and public engagement efforts. We already do this kind of horizon scanning for research & data compliance and publishing trends—why not apply the same foresight to disinformation risks?

Librarians can partner with researchers in media studies, science communication, and computational social science to monitor misinformation flows, analyze how false claims evolve, and develop interventions that anticipate rather than just respond. This could mean curating rapid-response research guides when a misinformation surge hits or working with faculty to integrate misinformation analysis into coursework. If libraries can get ahead of the next wave of disinformation, we can help researchers, students, and the public stay ahead of it too.

Algorithms Shape What We See—And What We Research

Historically, libraries served as “gatekeepers” of knowledge—or at least as privileged intermediaries—determining what information was collected, indexed, and made accessible. That role has shifted. Today, algorithms decide what researchers see first, which studies gain visibility, and what narratives dominate search results.

Search engines and recommendation systems don’t just surface information; they shape scientific discourse. Content-ranking algorithms prioritize articles based on popularity, past user behavior, and optimization tactics, meaning visibility isn’t always aligned with credibility. Worse, these systems can inadvertently amplify misinformation by favoring engagement over accuracy. The report highlights how search algorithms influence exposure to health misinformation, sometimes making it difficult for users to find high-quality sources amid an ocean of misleading content (see Chapter 4).

Algorithmic awareness should be a core literacy skill. Libraries can help researchers and students navigate search bias, unpack recommendation systems, and understand why their search results are structured the way they are. Instead of assuming search results are neutral or unbiased, we can help people critically evaluate why they’re seeing certain information and what might be missing. If we want to support better research practices, we need to help scholars and scientists see not just the information in front of them, but the structures shaping what they find. While not in the report, Buddhists might call this seeing through the illusion of Māyā—recognizing that what appears objective is often shaped by deeper forces, whether algorithms, funding priorities, or personal and institutional biases.
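The gap between visibility and credibility is easy to demonstrate. Here is a toy sketch—all titles and scores are invented for illustration, not drawn from the report or any real system—showing how the same three results order differently when ranked by engagement versus an assumed credibility score:

```python
# Toy illustration: identical results, two ranking criteria.
# Every title and score below is invented for demonstration.
results = [
    {"title": "Viral 'miracle cure' thread", "engagement": 0.95, "credibility": 0.10},
    {"title": "Peer-reviewed meta-analysis", "engagement": 0.20, "credibility": 0.90},
    {"title": "Press release, single study", "engagement": 0.60, "credibility": 0.50},
]

def rank(items, score):
    """Return titles sorted by the given score, highest first."""
    return [r["title"] for r in sorted(items, key=lambda r: r[score], reverse=True)]

print(rank(results, "engagement"))   # the viral thread tops the list
print(rank(results, "credibility"))  # the meta-analysis tops the list
```

The point is not the numbers but the mechanism: whichever score the system optimizes for determines what users see first, and engagement and credibility are rarely the same score.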

Embed Misinformation Defense Directly Into Research Workflows

Misinformation isn’t just a public problem—it’s a research problem too. It slips into literature reviews, circulates through questionable citations, and gains legitimacy through predatory publishing and industry-backed studies. Researchers often assume that peer review acts as a natural filter, but the reality is more complicated. The report highlights how the rise of open access and preprint publishing—while increasing accessibility—has also created pathways for unvetted or misleading research to spread more quickly (see Chapter 5).

Libraries could integrate misinformation defense directly into research workflows, helping scholars assess sources, track citation credibility, and recognize when certain narratives are being artificially propped up by political or industry interests. This isn’t about gatekeeping information but about equipping researchers with the skills to evaluate the credibility of what they cite and build upon.

A few possible long-game approaches: developing tools that assess citation patterns for signs of manipulation, embedding source evaluation training into research consultations, and working with faculty to discuss how misinformation infiltrates academic discourse. If we want to protect the integrity of scientific communication, libraries have a crucial role in ensuring that scholars—and the public—have the tools to navigate an increasingly complex information landscape.
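As a sketch of what such a citation-pattern tool might look for—everything here is hypothetical, not a method described in the report—one detectable signal is an unusually high self-citation rate among an author’s incoming citations:

```python
from collections import defaultdict

def self_citation_rates(citations):
    """Given (citing_author, cited_author) pairs, return each cited
    author's self-citation rate: the fraction of their incoming
    citations that come from themselves."""
    incoming = defaultdict(int)
    self_cites = defaultdict(int)
    for citing, cited in citations:
        incoming[cited] += 1
        if citing == cited:
            self_cites[cited] += 1
    return {a: self_cites[a] / incoming[a] for a in incoming}

def flag_authors(citations, threshold=0.5):
    """Flag authors whose self-citation rate exceeds the threshold."""
    rates = self_citation_rates(citations)
    return sorted(a for a, r in rates.items() if r > threshold)
```

A real tool would need far more nuance—legitimate self-citation is common, and manipulation more often involves citation rings spanning multiple authors—but even a crude signal like this shows the kind of pattern analysis libraries could help surface.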

Final Thoughts: The Library as a Strategic Node in the Knowledge Ecosystem

Misinformation isn’t an isolated issue—this report helped me see more clearly how deeply it is woven into the ways knowledge is produced, shared, and understood. It’s a research problem, an access problem, and a trust problem. Libraries are uniquely positioned to respond, and it’s becoming increasingly clear that we have a responsibility to do so—not just as curators of reliable information, but as active designers (or interpreters?) of the knowledge ecosystem itself.

Addressing misinformation isn’t just an individual responsibility—it’s a systems problem. If we don’t design for credibility, misinformation will design itself into the system for us.

I need to reflect more on this, but the report reinforced something I’ve been thinking about for a while: the work we need to do goes beyond providing access to credible sources or helping people navigate databases and discovery tools. It requires building infrastructure (social & technical) that resists distortion, fosters trust, and supports critical engagement with information at every level. If we take this role seriously, we can do more than just correct misinformation after the fact—we can create a system where it struggles to take hold in the first place.

Along with the report, check out this related podcast interview: How Do You Solve a Problem Like Misinformation?
