AI & Cybersecurity in 2026: Should we Worry?
- Carl Burch
- 4 days ago
- 17 min read

A few years back (2018), while working for a blockchain company, I published an article titled “Cybersecurity: How Paranoid Should You Be?” At the time, blockchain was the “new kid on the block,” a technology that was going to “shake things up” in cybersecurity. Blockchain was being promoted as a tool that combats cybercrime by providing an immutable, decentralized, and transparent ledger that enhances security and aids in forensic investigations.
Did blockchain live up to its hype?
The unfortunate truth is that blockchain has not quite lived up to the “billing” that I discussed in the article. For example, in the article I discussed the use of smart contracts as a way to combat cybercrime by providing a more secure, automated alternative to traditional legal and financial agreements. However, smart contracts rely on the precision of their code, so a poorly written contract can be exploited, as illustrated by the Nomad Bridge Hack of August 2022.
The hack resulted in over $190 million being drained due to a single-character coding error. It was so simple to exploit that hundreds of copycat hackers joined in to drain the funds, in what became a colossal “crowdsourced hack.” (Immunefi, Hack Analysis: Nomad Bridge, August 2022)
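The class of bug behind the Nomad exploit can be illustrated with a deliberately simplified sketch. This is hypothetical Python, not the bridge's actual Solidity, and the names are made up; it shows a verification check that mistakenly treats the uninitialized, all-zero proof root as trusted, so messages that were never proven sail through.

```python
# Hypothetical, simplified sketch of an initialization bug like Nomad's:
# an upgrade mistakenly marks the zero root as "trusted," so ANY message
# whose proof root defaults to zero passes verification.

TRUSTED_ROOTS = {"0x00"}  # the fatal misconfiguration: zero root trusted

def root_of(message: str) -> str:
    """Look up the proof root for a message; unproven messages default to 0x00."""
    proven = {}  # no messages have actually been proven
    return proven.get(message, "0x00")

def process(message: str) -> bool:
    """Release funds if the message's root is trusted: the flawed check."""
    return root_of(message) in TRUSTED_ROOTS

# Any attacker-crafted message passes, because its root defaults to the
# mistakenly trusted zero value:
print(process("withdraw 100 ETH to attacker"))  # True
```

Copycats only had to replay the same transaction with their own address, which is why the exploit "crowdsourced" so easily.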
So, now, in 2026, the new “kid on the block” is artificial intelligence (AI). AI, at its core, is about teaching machines to learn from data rather than following rigid, step-by-step instructions. When considering cybersecurity, it becomes important to question whether the integration of AI into cybersecurity is ultimately a net positive or a significant cause for concern, or as the title asks, “Should we worry?”
AI is all around us…
Have you ever wondered how your phone predicts your next word, or why your favorite songs seem to play automatically? That's AI working quietly behind the scenes. AI is all around us, even if we’re not aware of it.
This subtle presence is what makes it both fascinating and, for some, a little unsettling. A term often used to describe AI is “invisible software layer.” This phrase captures a fundamental shift: AI is seamlessly integrated into our daily lives, operating in the background without needing direct user commands or a visible interface.
For instance, in my business I frequently use AI, often without even realizing it. AI streamlines my bookkeeping through QuickBooks Online, primarily via “Intuit Assist,” a suite of AI agents designed to automate repetitive manual tasks, reduce errors, and provide real-time financial insights. I also use AI to enhance my email messages to clients and to summarize complex accounting regulations (see my blog post of January 14). In that post, I outlined a scenario in which a senior partner of an accounting firm is required to examine lease agreements in accordance with ASC 842 (Lease Accounting).
I’m particularly intrigued by how voice assistants like Amazon’s Alexa and Apple’s Siri are able to understand and respond to spoken commands, all happening quietly in the background without you even realizing it. My grandson is especially fascinated by Alexa and loves to interact with it, often asking questions like, “Alexa, how old are you?” or “Alexa, are you my friend?”
So…what makes AI unsettling?
In November 2025, Anthropic, a leading AI safety and research company, documented a large-scale cyberattack executed with no substantial human intervention; the attackers carried it out by manipulating Anthropic's own tools. A summary of the attack is described below.
Anthropic reported a highly sophisticated espionage campaign in which a state-sponsored actor (assessed with high confidence to be a Chinese state-sponsored group) manipulated AI agents (specifically using a tool called Claude Code). What’s amazing is that the AI agent performed 80-90% of the hacking operations autonomously. The agent conducted reconnaissance, identified vulnerabilities, and attempted to infiltrate roughly 30 global targets, including “large tech companies, financial institutions, chemical manufacturing companies, and government agencies.” (Anthropic)
This marked the transition from “human-led, AI-assisted” attacks to “AI-led” operations that can overwhelm traditional defenses still relying on human reaction times. Now… that’s both sophisticated and unsettling!
The “AI arms race” in Cybersecurity
Although the Anthropic attack was not the first AI-related cybersecurity incident, it significantly raised awareness of vulnerabilities faced by organizations, giving rise to what is now called the "AI arms race." This term refers to the ongoing cycle where both cybercriminals and security professionals continuously develop and deploy advanced AI tools in an attempt to outmaneuver each other. As attackers use AI to increase the speed and complexity of cyberattacks, defenders must respond with their own AI-powered systems to identify and stop these threats more efficiently. (MSSPAlert)
Cybercriminals vs. Security Professionals: And the winner is?
Though it may be too soon to pronounce a winner, current data suggests security defenders have their hands full. According to DeepStrike, projected cybercrime costs for 2025 are expected to reach $10.5 trillion, a 15% increase over the previous year.
Another way to look at it—if this "cybercrime economy" were a country, it would be the world's third-largest economy, trailing only the United States and China. Unfortunately, cybersecurity experts only expect costs to rise in the coming years, with projections at $15 trillion by 2030. (Cyberdefensewire)
Is AI empowering cyberattacks?
Based on the predicted losses, the picture does seem bleak, with cybercriminals holding the upper hand in the so-called “arms race.” Part of the reason is that AI has fundamentally shifted the risk equation by allowing even non-technical individuals (a.k.a. novices) to launch complex, multi-step attacks with minimal human intervention. In the past, attackers needed specific “expert” skill sets, such as high-level coding, a deep understanding of server architectures, and the manual labor of researching targets.
The emergence of AI has introduced new complexities and challenges for cybersecurity. AI has changed the playing field by lowering the barrier to entry, giving someone with a computer and access to the internet the ability to be a player. Below, we compare the traditional “expert” hacker with today’s “novice” AI hacker.
Requirements | Traditional "Expert" Hacker | Modern "Novice" AI Hacker |
Technical Skill | The traditional hacker had to write custom code, find unpatched vulnerabilities manually, and build phishing kits from scratch. | Novices can now use "malicious LLMs" (like WormGPT or FraudGPT) to generate exploits, malware variants, and professional-grade phishing lures via simple prompts. |
Language & Social Engineering | Scams often failed due to poor grammar, typos, or lack of cultural context (e.g., the "Nigerian Prince" scam). | AI generates perfect, fluent prose in any language and analyzes social media to create hyper-personalized, "on-brand" messages that sound like real executives. |
Time & Effort | Researching a single high-value target could take weeks of manual work. | AI agents automate reconnaissance in minutes, scraping data from sites like LinkedIn and company sites to launch thousands of targeted attacks at once. |
Cost of Failure | High. An expert’s time is expensive, so a failed attack was a major loss of “investment.” | Extremely low. Scammers can rent AI attack platforms for as little as $90/month, allowing them to "spray and pray" until something sticks. |
Notes:
The “Nigerian Prince” scam (also known as a 419 scam) is one in which a fraudster emails a target, posing as a wealthy royal or government official in distress, to trick victims into paying money upfront for a non-existent fortune.
In cybercrime, "spray and pray" refers to a high-volume, low-precision attack strategy where criminals send out massive quantities of malicious content to as many people as possible, hoping that even a tiny fraction of recipients take the “bait.”
The Greatest Cyber Threat: Advanced Persistent Threats (APTs)
We would be remiss if we didn’t address the most significant cyber threat facing organizations today: the elite, state-sponsored hacking groups, commonly known as Advanced Persistent Threats (APTs). Because they are state-sponsored, APTs typically target:
Government and defense organizations: Seeking intelligence and strategic advantages.
Financial institutions: Aiming to access sensitive financial data or disrupt economic stability.
Healthcare providers: Targeting personal health information and research data.
Technology companies: Pursuing intellectual property and proprietary technologies.
Critical infrastructure sectors: Including energy, transportation, and telecommunications, where disruptions can have widespread impacts. (Microsoft Security)
The usual suspects committing cyberespionage include North Korea, China, and Russia. The Anthropic attack discussed earlier was assessed to be Chinese-sponsored; however, because of the war in Ukraine, Russia has stepped up its own cyberattacks against the West. (AP News)
State-sponsored hacking groups use AI to handle the tedious and lengthy parts of their attacks, like scanning for weaknesses and gathering information. By letting AI take over this groundwork, tasks that used to take weeks can now be completed in seconds. As a result, APTs are able to work on a much larger scale than ever before.
Why We Can Be Optimistic
Believe it or not, there are reasons to remain optimistic: organizations are shifting how they perceive and combat cyberattacks, integrating AI not simply as a technological upgrade but as a catalyst for entering what is referred to as a new "proactive-first" era. (IBM)
The idea of being “proactive-first” is simple: organizations move from a "Fortress” Mentality (reactive) to an "Immune System” Mentality (constant, proactive adaptation). This transition is why AI is considered a “force multiplier.” In cybersecurity terms, a force multiplier lets security teams with limited resources and manpower conduct a forensic-level cybersecurity sweep across an organization, something that in the past would have been deemed impossible. AI shifts an organization’s defensive posture from cleaning up after an attack to stopping the attack from happening in the first place.
The analogies below illustrate the distinction between these two mentalities.
Description | “Fortress” Mentality (Protect the perimeter like a castle) | “Immune System” Mentality (Imagine the human immune system) |
Strategy | The strategy is to build a “moat” (firewall) and “high walls” (passwords and anti-virus) to protect the organization. | Like your white blood cells, the "Immune System" doesn't wait for a doctor's appointment to find a virus. It is constantly circulating, identifying "self" vs. "non-self," and neutralizing threats the moment they enter the bloodstream. |
Findings | Weakness: It assumes that once someone is inside the gate, they’re "trusted." If a single brick fails or a gate is left open, the entire kingdom is vulnerable. | Strength: It assumes the "skin" (the perimeter) will eventually be pierced. The focus is on detection and neutralization. Even if an attacker gets inside, the system identifies the "infection" and isolates it before it can spread to vital organs (data). |
Outcomes | Defenses are rigid and reactive. If an attacker finds a new way to climb the wall, the fortress has no way to adapt until after the breach has occurred. | Defenses are adaptive and resilient. Every encounter with a "pathogen" (threat) makes the system smarter and faster for the next time. |
Building your Immune System
The table above compares the fortress mentality, where organizations defend their systems like a castle, using barriers such as firewalls and passwords, as part of a legacy approach. In contrast, modern organizations adopt an immune system mentality, distributing protection throughout their network similar to how a human immune system works.
Although the idea of applying the principles of the biological immune system to computer security goes back to the 1990s, the research firm Gartner is credited with the current “hype” and modern definition of the term—Digital Immune System (DIS).
Gartner defines DIS as…
“a set of practices and technologies for software engineering, testing, automation, and analytics combined to protect digital assets from threats.” (N-iX)
Think of DIS like this—"much like the biological immunity it mimics, a DIS autonomously identifies anomalies, isolates potential threats, and recovers from disruptions without manual intervention.” (N-iX)
Pillars of a Digital Immune System (DIS) Framework
For overall security, DIS relies on several core principles and technologies (pillars):
Observability: This is the "eyes" of the system. It builds monitoring directly into the software to provide deep visibility into performance and security anomalies, allowing for the system to “quickly pinpoint and address potential issues.” (Martello)
AI-Augmented Testing: This pillar moves beyond manual checking by “leveraging AI for more automated, efficient, and reliable software testing.” This ensures that software can withstand complex scenarios without human intervention. (Martello)
Chaos Engineering: This pillar involves controlled fault injection, where security teams intentionally introduce failures (like killing a server or slowing a network) into a system to uncover hidden weaknesses. “This helps prepare your systems to withstand various disruptions and exposes cracks in the armor before attackers can take advantage of them.” (N-iX)
Auto-Remediation: This is the "self-healing" pillar of the framework where the system has the ability to monitor itself and automatically correct issues, such as restarting a crashed service, without human intervention. (N-iX)
Site Reliability Engineering (SRE): “At its core, the main goal of SRE is to ensure applications stay reliable, even during frequent updates, especially when it comes to large-scale systems that would be unsustainable to manage manually. SRE focuses on enhancing user experience and improving collaboration between development and operations teams.” (N-iX)
Software Supply Chain Security: Gartner’s final pillar “addresses the risk of software supply chain attacks and improving transparency and integrity throughout the delivery cycle.” (Martello)
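To make the Auto-Remediation pillar above a little more concrete, here is a minimal, hypothetical sketch of a self-healing watchdog. The service names and the health/restart logic are illustrative stand-ins, not any specific vendor's API.

```python
# Minimal sketch of the Auto-Remediation pillar: a watchdog that checks
# service health and "restarts" anything unhealthy without human input.
# Service names and health/restart functions are illustrative only.

def check_health(services: dict) -> list:
    """Return the names of services currently reporting unhealthy."""
    return [name for name, healthy in services.items() if not healthy]

def auto_remediate(services: dict) -> list:
    """Self-heal by restarting each unhealthy service; return actions taken."""
    actions = []
    for name in check_health(services):
        services[name] = True  # stand-in for an actual restart call
        actions.append(f"restarted {name}")
    return actions

fleet = {"booking-api": True, "payments": False, "notifications": False}
print(auto_remediate(fleet))  # ['restarted payments', 'restarted notifications']
print(check_health(fleet))    # [] -- the fleet is healthy again
```

In a real DIS, the restart would be paired with the Observability pillar (detecting the failure) and logged for audit, but the loop itself is this simple in spirit: detect, act, verify.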
DIS in Action: American Airlines Case Study
A good example of a DIS in action, and one frequently cited in discussions of digital resilience, is the case study of American Airlines.
Case Study: American Airlines
American Airlines serves as a prime example of an organization implementing Digital Immune System (DIS) principles to transform from a "Fortress" to an "Immune System" mentality. By integrating AI-driven observability and automation across its legacy and modern systems, the airline has significantly improved its operational resilience.

Background

American Airlines offers an average of nearly 6,700 flights per day to nearly 350 destinations in more than 50 countries. Quick and seamless remediation of IT outages, failures, and breaches is vital to providing top-tier customer service; without it, the airline risks damaging the customer experience, its brand reputation, and its financial stability. So, when a technological failure stranded thousands of passengers on a rival airline in May 2017, American Airlines' leaders wanted to make sure the same thing couldn’t happen to them. (xMatters)

To solve the potential problem of passengers missing their connections, the airline used the DIS framework to overhaul its legacy infrastructure, transforming how it handles high-stakes operational data. By moving away from "reactive" fixes, they built a system that functions like a living organism.

Core Components of American Airlines' DIS

American Airlines uses several pillars of a DIS to protect its digital and operational infrastructure.
Key Outcomes and Benefits
Conclusion

The implementation of DIS allowed American Airlines to transition from a risk-averse, legacy IT culture to a resilient, data-driven operation. By focusing on the DIS pillars, the airline was able to reduce system failures that impact the customer experience.
The Cybersecurity Governance Landscape in 2026
“In 2026, you don't 'own' an AI; you own the data pipeline that feeds it. If the governance of that pipeline fails, the AI becomes a liability rather than an asset.”
While this particular quote is not attributed to a specific individual, it does sum up the feeling of AI governance in 2026, as reflected by figures like Andrew Ng (Founder of DeepLearning.AI) and echoed in recent Gartner and Deloitte 2025-2026 outlooks.
On this view, effective ownership of data pipelines requires rigorous AI governance, ensuring transparency regarding data origins, along with integrity checks to confirm data authenticity. Without these measures, an organization loses control over the knowledge and actions of its AI systems.
Note: “A data pipeline is an end-to-end sequence of digital processes used to collect, modify, and deliver data. Organizations use data pipelines to copy or move their data from one source to another so it can be stored, used for analytics, or combined with other data. Data pipelines ingest, process, prepare, transform, and enrich structured, unstructured, and semi-structured data in a governed manner; this is called data integration.” (Informatica)
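The ingest, transform, and deliver stages described in the note above can be sketched in a few lines of Python. The record fields and cleaning rules here are made up purely for illustration.

```python
# Simplified sketch of a governed data pipeline's stages:
# ingest -> transform (clean/enrich) -> deliver. Fields are illustrative.

def ingest(raw_rows):
    """Collect raw records from a source (here, a hardcoded list)."""
    return list(raw_rows)

def transform(rows):
    """Clean and enrich: drop incomplete rows, normalize names, add a flag."""
    cleaned = []
    for row in rows:
        if row.get("amount") is None:
            continue  # a governed pipeline rejects malformed records
        cleaned.append({
            "customer": row["customer"].strip().title(),
            "amount": round(float(row["amount"]), 2),
            "large": float(row["amount"]) > 1000,
        })
    return cleaned

def deliver(rows):
    """Hand the prepared records to a destination (here, just return them)."""
    return rows

source = [
    {"customer": "  acme corp ", "amount": "1500.5"},
    {"customer": "beta llc", "amount": None},  # rejected in transform
]
print(deliver(transform(ingest(source))))
# [{'customer': 'Acme Corp', 'amount': 1500.5, 'large': True}]
```

The governance point is in the `transform` step: the pipeline, not a downstream analyst, decides which records are authentic and well-formed before anything reaches an AI system.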
Traditional Corporate Governance
In my teachings of corporate governance, I present the conventional definition as “the system through which organizations are directed and controlled.” (The Cadbury Report, 1992) As I would explain, corporate governance is about providing leadership and direction so companies are able to achieve the objectives of their existence, which most often is associated with profitability.
Management is about making business decisions; governance is about monitoring and controlling management’s decisions. In this respect, corporate governance becomes a “checklist of activities,” because it focuses on structural and procedural requirements to ensure compliance and shareholder protections.
The Traditional Governance "Checklist"
The goal of a governance checklist is to tick off boxes that demonstrate a company’s compliance with established rules and regulations, such as the Sarbanes-Oxley Act of 2002 (SOX) or the UK Corporate Governance Code 2024 (published by the Financial Reporting Council (FRC)).
The key components of a governance checklist typically include:
Board Composition: Ensuring the board has the required number of independent non-executive directors (NEDs) to challenge management objectively.
Committee Structure: Verifying that mandatory committees exist, such as Audit, Remuneration (Compensation), Risk, and Nominating committees, and that they meet their specific charter requirements.
Meeting Requirements: Confirming a minimum number of meetings occur annually (e.g., at least four) and that accurate minutes and attendance are documented.
Separation of Duties: Explicitly separating the roles of CEO and board Chair to prevent a concentration of power.
Financial Disclosures & Reporting: Ticking off statutory reporting requirements, such as annual operating plans, capital budgets, and disclosure of related party transactions.
Risk Management & Internal Control: Making sure the company operates within acceptable levels of risk and ensuring, through a system of internal control, that the company’s resources are properly used and its assets, including data, are protected.
AI Governance
IBM defines AI governance as “the processes, standards and guardrails that help ensure AI systems and tools are safe and ethical. AI governance frameworks direct AI research, development, and application to help ensure safety, fairness, and respect for human rights.”
While traditional corporate governance ensures legal compliance, the rise of AI-powered cybercrime introduces unprecedented levels of operational risk. Leadership (i.e., management and board) must prioritize cyber threats, making AI governance critical for any organization involved with AI. While regulated industries might feel the most pressure, even small businesses face growing legal and reputational risks.
Amid the rise of AI-powered cyberattacks discussed earlier, the Digital Immune System (DIS) has emerged to help organizations focus cybersecurity efforts on adaptive, self-healing defenses that move beyond traditional perimeters. As automation has increased, the risks of unchecked autonomy have highlighted the need for DIS governance.
DIS Governance
While there is not a clear definition of DIS governance, it is intended to establish policies and operational guidelines for managing resilient, autonomous IT systems. It ensures that a DIS's automated threat responses support organizational goals, comply with regulations, and maintain safety standards.
Andrew Ng stressed the importance of a strong AI governance system to protect an organization’s data pipelines; under a DIS governance model, however, those pipelines must also be treated as critical infrastructure to ensure resilience.
Comparing Traditional, AI and DIS Governance Models
For clarity, the three models can be summarized this way: traditional governance oversees people, AI governance manages system logic, and DIS governance ensures the continued viability of the system.
Feature | Traditional Governance | AI Governance | DIS Governance |
Focus | Fiduciary duty and compliance | Trust and ethics. It prevents the "black box" problem where machines make life-altering decisions without explanation. | Operational resilience. The goal is "uninterrupted user experience" through autonomous self-healing. |
Primary Risk Mitigated | Fraud, corruption, legal liability, etc. | Algorithmic risks (losing accuracy over time), data poisoning, etc. | Systemic risks, such as software supply chain vulnerabilities. |
Control Mechanism | Management and Board meetings, annual audits, policy handbook. | Control is exercised through bias-detection tools, Explainable AI (XAI), and data lineage tracking. | Control is exercised through Chaos Engineering and Auto-Remediation. |
Change Pace | Slow / Periodic (Annual audits). | Dynamic (Continuous model tuning). | Real-time (Autonomous response). |
Notes:
“Managing system logic” refers to the structural process of defining, designing, and maintaining the rules and workflows that govern how a system operates. It acts as the “brain” of the organization.
"Black box” problem occurs when a model—usually a deep learning neural network—reaches a decision or prediction that its own creators cannot explain.
“Data poisoning” is a cyberattack where an adversary corrupts an AI model's training or input data to manipulate its future behavior.
Explainable AI (XAI) is a set of methods and processes designed to make the outputs of machine learning (ML) models transparent, understandable, and trustworthy for human users. It serves as a "cognitive translation" that bridges the gap between complex mathematical algorithms and human reasoning.
Data lineage tracking is the process of mapping and documenting the complete journey of data through its lifecycle, from its original source through every transformation and movement to its final destination.
Conclusion
The dangers of being attacked are real. The sheer volume of attacks and the value of losses (a $10.5 trillion projection for 2025) are expected to increase in the coming years. It’s not a matter of “if” an attack is going to happen; it’s only a matter of “when.” If you’re one of the lucky ones, your losses will be minimal—not enough to put your organization at risk.
So… should we worry?
We shouldn’t merely be "worried" about cyberattacks; they must be treated as an inevitable strategic risk. The threat landscape has shifted from periodic "hacks" to a persistent, industrialized shadow economy. The best way to protect an organization is by strengthening its governance system and implementing the DIS pillars.
Governance through the Ages
“The only constant in life is change.”
Greek philosopher Heraclitus (c. 500 B.C.E.)
The phrase “Governance through the Ages,” like Heraclitus’ quote, is meant to stress that no system is static, including governance; instead, governance is a perpetual experiment in finding the best way for businesses to stay in business.
The past twenty or so years have seen a proliferation of specialized governance models like IT Governance (provides the infrastructure), Data Governance (manages information within the infrastructure), and AI Governance (oversees the autonomous decisions made by that information). These models grew out of the realization that traditional corporate oversight was too broad to manage the unique technical, ethical, and legal risks of the digital age.
Cybersecurity is a serious issue, and the most effective way to avoid “silos” and ensure that your technical and security teams aren’t tripping over each other’s policies is to integrate these governance models (IT, Data, and AI) into a single “Integrated Governance” framework.
Note: In the context of IT, Data, and AI governance, "silos" refer to isolated sources of information or separate departments that operate independently without sharing data or communicating effectively with the rest of the organization. (Oracle)
The Power of "Integrated Governance"
An Integrated Governance framework strengthens an organization’s defense by embedding cybersecurity into its core business processes rather than treating it as an isolated IT task.
Benefits include:
Accountability & Oversight: It moves cyber-risk to the board level, ensuring that management and board are responsible for defining risk tolerance and allocating sufficient resources to security.
Breaking Silos: Instead of functional areas (e.g., Finance, IT, or HR) operating as independent "islands," breaking silos creates a unified environment where information, risks, and goals are managed collectively to achieve organizational success. This ensures that a new business initiative (like a cloud migration) is automatically assessed for security risks and legal compliance from day one.
Prioritization: It helps management focus on "business-critical" assets. Instead of trying to protect everything equally, integrated governance uses business impact data to prioritize the protection of the most vital services.
The Implementation of the DIS Pillars
DIS doesn't just block attacks; it makes the entire system "tougher" to infiltrate through the implementation of the DIS pillars.
Pillars | How it Protects |
Observability | Uses AI to monitor system behavior in real-time, spotting "symptoms" (anomalies) before they become full-blown infections. |
AI-Augmented Testing | Uses AI to perform more rigorous, automated testing, uncovering vulnerabilities that human testers might miss. |
Chaos Engineering | Intentionally "breaking" things in a controlled environment to find and fix hidden weaknesses. |
Auto-Remediation | The system automatically restarts, patches, or isolates a compromised segment without human intervention. |
Site Reliability (SRE) | Balances the need for fast updates with the need for a stable, secure foundation. |
Software Supply Chain Security | Mitigates the risk of attacks that come through third-party code or libraries by ensuring end-to-end transparency. |
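As a concrete illustration of the Observability row in the table above, "spotting symptoms" often comes down to comparing a new metric reading against a recent baseline. The sketch below is a minimal statistical example with made-up latency numbers, not a production anomaly detector.

```python
# Sketch of the Observability pillar's "symptom spotting": flag a new
# metric reading that deviates sharply from the recent baseline.
# Thresholds and latency values are illustrative.

from statistics import mean, stdev

def is_anomaly(baseline, new_reading, threshold=3.0):
    """True if new_reading is more than `threshold` standard deviations
    from the baseline mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(new_reading - mu) > threshold * sigma

# Steady response times (ms), then a sudden spike -- the "symptom":
baseline = [102, 98, 101, 99, 100, 103, 97]
print(is_anomaly(baseline, 980))  # True  -- investigate or auto-remediate
print(is_anomaly(baseline, 104))  # False -- normal variation
```

A real DIS would feed a flagged reading like this into the Auto-Remediation pillar, closing the detect-and-heal loop without waiting on a human analyst.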
Traditional cybersecurity is most often reactive (waiting for an alarm to go off); however, if you combine it with DIS, you get the following:
Integrated Governance + DIS model = Proactive and adaptive cybersecurity
By combining high-level leadership oversight (Integrated Governance) with a self-healing technical architecture (DIS), you get a system that doesn’t just wait for trouble; it anticipates it and evolves through it.
In future articles, I will delve deeper into the topic of Integrated Governance. If you have any thoughts on the topic, contact me at carl.burch@burchbusinesservices.com.
About Carl Burch
Carl Burch holds an MBA, CMA, CIA, FCCA, and is a QuickBooks ProAdvisor. He is also a co-founder of BURCH Business Services (BBS) located in Boston, MA. For more information on BBS, visit www.burchbusinesservices.com.
