This article is a collaborative perspective by Mimi Brooks, Founder & CEO of Logical Design Solutions, and Michael Carroll, Global Executive in Industrial Innovation & AI Research.
In early 2024, university researchers used Generative AI to produce a detailed fictional case report about a never-proven SARS-CoV-2 variant they dubbed the “Omega variant.” The report described a 35-year-old man with severe COVID-19 despite being vaccinated and previously infected. It “identified” thirty-one mutations, mimicked rigorous genomic analysis, traced contacts, and even suggested mechanisms like antibody-dependent enhancement.1
The case report was crafted to look like a peer-reviewed scientific case study, but it was entirely invented by AI. The output was highly realistic: it followed academic conventions, used scientific language, and cited methods typical of genuine studies.
In an age dominated by data and automation, we tend to talk about what AI knows: the insights it delivers, the patterns it detects, the scale at which it operates. But what’s less often examined, and far more dangerous, is what AI systems—and the organizations deploying them—choose not to know.
This is where agnotology comes in. Coined by historian Robert Proctor, the term names the deliberate construction of ignorance, not as a passive void but as a strategic choice. In AI, where systems process vast datasets to deliver predictions at unprecedented scale, the focus is usually on what machines know: their ability to optimize, predict, and transform. Yet what they do not know, and what their creators choose to leave unexamined, is far more consequential. Ignorance in AI is not always accidental; it is often engineered, woven into systems that prioritize efficiency, profit, or convenience over clarity, accountability, or truth. This is not a flaw; it is a feature, one that thrives in boardrooms, hospitals, and courtrooms, where opacity shields decisions from scrutiny and responsibility from consequence.
In the corporate world, strategic ignorance has become a boardroom staple. Companies deploy AI to guide decisions, from hiring to credit scoring, yet shield their algorithms under the guise of trade secrets. This opacity does more than protect intellectual property; it creates a buffer against accountability. When a loan is denied or a job candidate rejected, the system’s complexity becomes a convenient excuse. No one need explain why, for who can unravel a black box?
In 2024, a whistleblower at a major financial firm revealed that its credit-scoring algorithm systematically downgraded applicants from low-income neighborhoods, a bias hidden behind proprietary protections. The company’s defense, invoking trade-secret law, was not just a legal maneuver; it was a calculated act of ignorance, obscuring the human cost of its decisions.
This pattern extends beyond finance. In public safety, predictive policing algorithms have been trained on data drawn from historically biased arrest records. A 2023 study by the Brennan Center for Justice2 found that one such model, used in Chicago, amplified existing disparities, directing police to neighborhoods already over-patrolled while ignoring the broader context of systemic inequality. The algorithm did not know the difference between correlation and causation, nor was it designed to. Its creators, aware of these limitations, deployed it anyway, prioritizing efficiency over equity. This is agnotology at work: ignorance as strategy, profit, and power.
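The feedback dynamic described above can be made concrete with a toy simulation. The sketch below is purely illustrative: the district names, the equal underlying crime rates, and the patrols-in-proportion-to-past-arrests rule are our own assumptions, not details of the Chicago model or the Brennan Center study. It shows how two neighborhoods with identical crime rates diverge in recorded arrests once patrol allocation chases historical arrest counts.

```python
# Toy simulation of a predictive-policing feedback loop (illustrative only;
# all numbers and the allocation rule are assumptions, not real-world data).
import random

random.seed(0)

TRUE_CRIME_RATE = {"A": 0.10, "B": 0.10}   # identical underlying crime rates
arrest_history = {"A": 120, "B": 40}       # district A was historically over-patrolled

def allocate_patrols(history, total_patrols=100):
    """Naive model: assign patrols in proportion to past arrests (correlation, not causation)."""
    total = sum(history.values())
    return {district: round(total_patrols * count / total) for district, count in history.items()}

for year in range(1, 6):
    patrols = allocate_patrols(arrest_history)
    for district, rate in TRUE_CRIME_RATE.items():
        # Arrests scale with patrol presence, not with the (equal) true crime rate,
        # so the historical skew feeds back into next year's "predictions".
        encounters = patrols[district] * 10
        arrests = sum(random.random() < rate for _ in range(encounters))
        arrest_history[district] += arrests
    print(f"year {year}: patrols={patrols}, cumulative arrests={arrest_history}")
```

Each pass through the loop widens the gap: more patrols in district A produce more recorded arrests there, which in turn justify even more patrols, while the true crime rates never change.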
Nowhere is this more visceral than in healthcare, where the price of ignorance is measured in lives. Consider a 2023 case at Stanford University, where a dermatology AI model, celebrated for detecting melanoma with 95% accuracy, failed to diagnose the disease in an African-American patient. The reason? Its training data included few images of darker skin tones, a gap the developers understood but did not address.3 The model was marketed as objective, its limitations buried under claims of precision. Clinicians, pressed for time, relied on its recommendations, unaware of its blind spots. Patients, trusting the system, bore the consequences.
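The arithmetic behind such a blind spot is simple, and a disaggregated evaluation makes it visible. The sketch below uses entirely synthetic numbers; the group labels, counts, and error rates are assumptions for illustration, not the Stanford data. A headline accuracy near 96% can coexist with coin-flip performance on an underrepresented group that supplies only a sliver of the test set.

```python
# Illustrative only: aggregate accuracy can hide subgroup failure (synthetic data).
from collections import defaultdict

# (group, prediction_correct) pairs for a hypothetical melanoma classifier's test set
results = (
    [("lighter skin", True)] * 950 + [("lighter skin", False)] * 30    # ~97% correct
    + [("darker skin", True)] * 10 + [("darker skin", False)] * 10     # 50% correct
)

totals, correct = defaultdict(int), defaultdict(int)
for group, ok in results:
    totals[group] += 1
    correct[group] += ok

overall = sum(correct.values()) / sum(totals.values())
print(f"overall accuracy: {overall:.1%}")                       # ~96%: looks excellent
for group in totals:
    print(f"{group}: {correct[group] / totals[group]:.1%} accuracy on {totals[group]} images")
```

Reporting the per-group numbers alongside the headline figure is the difference between a marketed claim of precision and an honest account of where the model can be trusted.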
When a misdiagnosis occurs, the question of responsibility becomes a labyrinth: the model creator blames the health system, the health system the physician, the physician the algorithm. This opacity creates a vacuum of accountability, one that some institutions find disturbingly convenient.
The healthcare industry is riddled with such examples. Proprietary AI tools, their datasets and logic guarded as trade secrets, are often deployed without independent validation. A 2021 report from Cornell University, Quantifying machine learning-induced overdiagnosis in sepsis,4 was followed by a 2024 ProPublica investigation revealing that a diagnostic tool used in over 200 hospitals overstated its accuracy in detecting sepsis, leading to unnecessary treatments and patient harm. The company, citing proprietary protections, refused to share its validation data, leaving clinicians and researchers in the dark. This is not merely a technical issue; it is a leadership failure, a choice to prioritize market advantage over human lives.
As the Greek philosopher Socrates once said, “The only true wisdom is in knowing you know nothing.” In AI, this humility is not just a virtue; it is a necessity, a bulwark against the hubris of false certainty.
The regulatory landscape only deepens this challenge. Governments, lagging behind the pace of technological change, struggle to impose oversight. The FDA, tasked with regulating medical AI, often relies on company-submitted data, with little capacity for post-deployment auditing.
Politico reported in June 2025 that major AI firms have been actively lobbying for a light-touch, national AI regulatory framework,5 with the goal of preventing fragmented state-level rules. Without robust frameworks for transparency, companies can hide behind complexity, building competitive advantages on what is not understood rather than what is. This regulatory gap is not passive; it is sustained by design, a modern echo of the tobacco industry, which for decades sowed doubt to obscure the truth about smoking’s harms.
Even in courtrooms, where algorithms guide sentencing or parole, the same opacity shields bias from scrutiny. If ignorance can kill in a hospital, its toll in commerce and justice is more insidious, eroding trust, amplifying inequity, and reshaping society in ways we barely perceive.
As AI systems become more embedded in the workplace, we face a profound tension: we’re now able to act without fully understanding why.
Prediction, at scale, begins to replace human knowledge construction. Data outputs are treated as truth without examining causal frameworks or validating underlying assumptions. The result? Decision-making processes that are fast, scalable, and utterly opaque.
Algorithms themselves become objects of ignorance—designed without clarity, trained on partial truths, and deployed with blind trust. And in some organizations, that opacity isn’t a flaw. It’s a feature.
If ignorance can be engineered into systems, it can also be designed out. The challenge lies in making it visible—and building governance, practices, and tools that reduce its impact.
Organizations are beginning to take concrete actions to meet this challenge; we outline several of them below.
In a landscape shaped by speed, complexity, and scale, the temptation to let the machine “just decide” is strong. As AI becomes more embedded in the systems that shape our economy, institutions, and daily lives, we face a growing imperative: not just to ask what AI knows, but to confront what it doesn’t.
To address agnotology, organizations must act with intention, building systems that illuminate rather than obscure. Open models, subject to external validation, enable critical feedback and collective learning. For example, earlier this year a consortium of European hospitals adopted open-source AI diagnostics, sharing datasets and logic to improve accuracy across diverse populations. The result? A 20% reduction in misdiagnoses for underrepresented groups—a testament to the power of transparency.
Organizations must also work to standardize independent algorithmic audits. In 2024, the National Institute of Standards and Technology (NIST) launched its “AI Standards Zero Drafts” pilot project,6 a process-focused effort aimed at quickly developing voluntary standards and encouraging broad stakeholder input, including on transparency and bias-related documentation. When audit findings are shared publicly, companies are pressed to revise their models, proof that scrutiny can drive accountability. This requires not just technical effort but cultural commitment, a willingness to see those who have been systematically unseen.
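What such bias-related documentation might look like in practice can be sketched simply. The record below is a hypothetical, minimal illustration; the schema, field names, and numbers are our own assumptions, not a NIST-specified format or any regulatory requirement.

```python
# A minimal, machine-readable "model card"-style audit record (illustrative only;
# the schema and all values are hypothetical, not a NIST or regulatory format).
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ModelAuditRecord:
    model_name: str
    version: str
    intended_use: str
    training_data_summary: str
    subgroup_metrics: dict                      # per-group performance, disaggregated
    known_limitations: list = field(default_factory=list)
    independent_validation: str = "none to date"

record = ModelAuditRecord(
    model_name="melanoma-classifier",           # hypothetical system
    version="2.3.1",
    intended_use="clinical decision support only; not a substitute for clinical judgment",
    training_data_summary="dermoscopy images, 2015-2022; skin-tone distribution heavily skewed",
    subgroup_metrics={
        "lighter skin": {"accuracy": 0.97, "n": 980},
        "darker skin": {"accuracy": 0.50, "n": 20},
    },
    known_limitations=[
        "darker skin tones under-represented in training data",
        "no post-deployment performance monitoring",
    ],
)

# Publishing this alongside the model gives clinicians, auditors, and regulators
# something concrete to scrutinize instead of a proprietary black box.
print(json.dumps(asdict(record), indent=2))
```

The point is not the format but the commitment: whatever a model’s owners know about its gaps is written down, versioned, and exposed to independent eyes.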
The EU’s AI Act,7 whose obligations for high-risk AI systems include post-market monitoring, set a global precedent, reducing reliance on proprietary claims and fostering public trust. By promoting a culture of critical inquiry, organizations can encourage employees to question algorithms rather than defer to them. This cultural shift, though gradual, is the foundation of ethical AI, ensuring that human judgment remains the final arbiter.
When ignorance is unexamined, it can become embedded—into systems, decisions, and the future we’re building. As leaders and practitioners, we have a responsibility to interrogate the gaps, question the silences, and design with humility. Prioritizing the study and mitigation of ignorance must become a core tenet of AI strategy because a future shaped in the dark cannot serve the light.
Although the “Omega variant” case cited in our introduction was never mistaken for authentic by health authorities, it sparked important discourse on the need for rigorous source validation in science. The underlying paper emphasized that even well-structured, academic-style reports generated by AI require careful scrutiny before they can be considered credible. The case has become a modern textbook example of AI-enabled misinformation in healthcare, serving as a teaching point in AI safety seminars and discussions of medical misinformation.
Agnotology echoes Blaise Pascal’s insight that “knowledge is a sphere, its surface ever-expanding into the unknown.” In the quiet aftermath of false certainty, we are reminded that our first task is to confront the silences, to question the unseen, and to ensure that the future we build is not cloaked in manufactured darkness. The question lingers, as it must: what do we know, and when did we choose not to know it?
1) “Navigating the Peril of Generated Alternative Facts: A ChatGPT-4 Fabricated Omega Variant Case as a Cautionary Tale in Medical Misinformation,” Malik Sallam et al., 2024.
2) “Policing & Technology,” Brennan Center for Justice, 2023.
3) “AI Shows Dermatology Educational Materials Often Lack Darker Skin Tones,” Stanford University, 2023.
4) “Quantifying machine learning-induced overdiagnosis in sepsis,” Cornell University, 2021.
5) “The AI lobby plants its flag in Washington,” Politico, 2025.
6) “NIST’s AI Standards ‘Zero Drafts’ Pilot Project to Accelerate Standardization, Broaden Input,” NIST, 2024.
7) “AI Act,” European Commission, 2025.