How AI Can Boldly Scale Cybersecurity Beyond Human Limits | By John Spencer-Taylor

Data breaches create headaches in many ways, from lost work hours to potential liability to a damaged reputation in the court of public opinion.

The most significant breaches, such as last year’s ransomware attack at Change Healthcare, dominate the headlines. Eventually, it was determined that the attack affected 190 million Americans, making it the largest breach of health and medical data in U.S. history.

A single security lapse can lead to a breach like the one at Change Healthcare because cybercriminals continuously probe for weaknesses, using automated tools and evolving tactics to bypass traditional security measures.

Organizations must adopt a proactive defense strategy to stay ahead, which isn’t always easy when human capacity is limited. Fortunately, artificial intelligence can act as an auxiliary defense force, responding faster than human teams alone and improving cybersecurity efforts in a number of ways.

Patterns of Attacks

One of the things AI is adept at is identifying patterns. For example, when AI appears to be writing an email, a memo, or some other piece of content for you, what it is actually doing is applying the typical patterns of written language: given the words so far, it predicts the most likely next word, over and over.
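
To make that intuition concrete, here is a toy sketch in Python of the "most likely next word" idea, using simple word-pair counts over a made-up snippet of text. Real language models are vastly more sophisticated, but the pattern-matching principle is the same.

```python
from collections import Counter, defaultdict

# Made-up sample text for illustration only.
text = ("the patient record was updated the patient chart was updated "
        "the patient record was archived")
words = text.split()

# Count which word most often follows each word in the sample.
following = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1

def next_word(word):
    # Return the most frequent follower of the given word.
    return following[word].most_common(1)[0][0]

print(next_word("patient"))  # -> 'record', the most common continuation
```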

Like language, cyberattacks have patterns. AI-powered platforms like Darktrace, CrowdStrike Falcon, and Microsoft Defender AI analyze network traffic, endpoint behavior, and user activity to detect anomalies before they escalate into breaches. For example, hospitals using AI-driven SIEM (Security Information and Event Management) tools have reduced response times from hours to minutes by automating threat detection and prioritization. This is valuable because it allows cybersecurity teams to be more proactive. They can prioritize their defenses based on actual threat probability rather than reacting to incidents after they occur and the damage is already done.
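
The vendors named above don’t publish their model internals, but the underlying idea of behavioral anomaly detection can be sketched in a few lines of Python using scikit-learn’s IsolationForest. The feature names, data, and threshold below are invented for illustration, not taken from any real product.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row describes one hour of activity for a user or host:
# [logins_per_hour, megabytes_transferred, distinct_hosts_contacted]
baseline = np.array([
    [4, 120, 3],
    [5, 150, 4],
    [3, 90, 2],
    [6, 200, 5],
    [4, 110, 3],
])

# Train on normal behavior; contamination is the expected anomaly rate.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(baseline)

# Score new activity: -1 means anomalous, 1 means normal.
new_activity = np.array([
    [5, 130, 4],      # resembles the baseline
    [40, 5000, 60],   # burst of logins and data movement, likely flagged
])
for row, label in zip(new_activity, model.predict(new_activity)):
    status = "ANOMALY: escalate to analyst" if label == -1 else "normal"
    print(row, status)
```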

While AI strengthens cybersecurity, it also introduces new risks. Attackers can exploit vulnerabilities in AI models through adversarial attacks, model poisoning, or data manipulation. To ensure AI remains an asset rather than a liability, like with any cyber system, organizations must implement robust monitoring, access controls, and periodic validation of AI-generated outputs.

Healthcare cybersecurity teams are often understaffed, with too few people for too large a workload. AI can step in and assist with the mundane chores just as well as it can with the more intricate assignments. You can use it to automate routine security tasks such as log analysis, patch management, and incident triage.

With the day-to-day minutiae out of the way, the human team is freed up to spend more time and energy on high-impact threats and strategic improvements.
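
As a concrete illustration of the log-analysis piece, here is a minimal sketch that surfaces only the events worth an analyst’s attention. The log format, sample entries, and threshold are assumptions made for this example.

```python
import re
from collections import Counter

# Invented auth-log excerpt; real deployments would stream from a SIEM.
SAMPLE_LOG = """\
2025-03-01T02:14:05 sshd[1042]: Failed password for admin from 203.0.113.7
2025-03-01T02:14:09 sshd[1042]: Failed password for admin from 203.0.113.7
2025-03-01T02:14:12 sshd[1042]: Failed password for root from 203.0.113.7
2025-03-01T08:30:41 sshd[2210]: Accepted password for nurse1 from 10.0.4.22
"""

FAILED = re.compile(r"Failed password for (\S+) from (\S+)")
THRESHOLD = 3  # failures before flagging; tune for your environment

# Count failed logins per source IP.
failures = Counter()
for line in SAMPLE_LOG.splitlines():
    match = FAILED.search(line)
    if match:
        failures[match.group(2)] += 1

for source_ip, count in failures.items():
    if count >= THRESHOLD:
        print(f"Possible brute force from {source_ip}: {count} failed logins")
```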

Compliance Issues

AI can also be an ally to healthcare entities by helping them keep health records safe and comply with regulations such as the Health Insurance Portability and Accountability Act (HIPAA) and the Health Information Technology for Economic and Clinical Health (HITECH) Act.

Among the ways AI can do this is by continuously auditing security configurations, detecting non-compliance issues, automating documentation, and cleaning up reporting for regulatory review and filings. This reduces the likelihood of error and, once again, frees up the human team to concentrate on other duties rather than get caught up in these time-consuming tasks.
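
A minimal sketch of that continuous-auditing idea follows: compare each system’s reported configuration against a set of required settings. The rule names and values here are illustrative examples, not an official HIPAA checklist.

```python
# Required settings; a callable rule checks a numeric bound.
REQUIRED_SETTINGS = {
    "encryption_at_rest": True,
    "audit_logging_enabled": True,
    "session_timeout_minutes": lambda v: v is not None and v <= 15,
}

# Invented inventory of systems and their current configurations.
systems = [
    {"name": "ehr-db", "encryption_at_rest": True,
     "audit_logging_enabled": True, "session_timeout_minutes": 10},
    {"name": "imaging-server", "encryption_at_rest": False,
     "audit_logging_enabled": True, "session_timeout_minutes": 60},
]

for system in systems:
    for setting, rule in REQUIRED_SETTINGS.items():
        value = system.get(setting)
        ok = rule(value) if callable(rule) else value == rule
        if not ok:
            print(f"{system['name']}: non-compliant on {setting} "
                  f"(current value: {value})")
```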

Beyond that, AI can also partner in making the Internet of Medical Things (IoMT) more secure. The IoMT comprises all of the medical devices and applications that connect to healthcare information technology systems through online computer networks. This interconnection is convenient for everyone who works with these systems, supporting activities such as tracking patient medication orders or monitoring a patient’s vital signs remotely.

Unfortunately, even as the IoMT makes life more convenient for health providers and patients, it also gives hackers a massive and inviting target because the connected devices are becoming more numerous and complex, not less so.

Patient information must be kept secure, but many legacy medical devices that are part of the IoMT run outdated operating systems with known vulnerabilities. Updating or replacing them is often impractical due to FDA regulations, vendor restrictions, or the risk of disrupting patient care. On top of that, healthcare organizations struggle to maintain an accurate inventory of the many connected medical devices, leading to security blind spots.

Close the Gaps

AI has the capability to point out those blind spots and suggest ways to fortify them. Here’s how: AI can monitor vendor advisories, FDA cybersecurity alerts, and databases that catalog common vulnerabilities and exposures. By doing this, AI can identify security patches relevant to specific medical devices that can fix those vulnerabilities.
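
Here is a sketch of that matching step: cross-reference a device inventory against an advisory feed. The advisory entries, device records, and version scheme are invented for illustration; a real system would pull from vendor bulletins, FDA alerts, and the public CVE catalog.

```python
# Invented advisory feed entries.
advisories = [
    {"id": "CVE-2025-0001", "product": "InfusionPump OS", "fixed_in": "4.2"},
    {"id": "CVE-2025-0002", "product": "MRI Console", "fixed_in": "9.1"},
]

# Invented device inventory.
inventory = [
    {"device": "pump-ward-3", "product": "InfusionPump OS", "version": "4.0"},
    {"device": "mri-suite-1", "product": "MRI Console", "version": "9.1"},
]

def version_tuple(v):
    # Convert "4.2" to (4, 2) so versions compare numerically.
    return tuple(int(part) for part in v.split("."))

for device in inventory:
    for adv in advisories:
        if (device["product"] == adv["product"]
                and version_tuple(device["version"])
                < version_tuple(adv["fixed_in"])):
            print(f"{device['device']}: affected by {adv['id']}, "
                  f"patch available in version {adv['fixed_in']}")
```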

If for some reason a device cannot be patched due to operational constraints, AI will not be deterred. It can then recommend compensating controls, such as network segmentation or stricter access policies, minimizing the risk of exploitation.
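
That recommendation logic can also be sketched simply. The control names below are generic examples of compensating controls, not a standard or a vendor’s actual rule set.

```python
def recommend_controls(device):
    # Map device constraints to illustrative compensating controls.
    controls = []
    if not device["patchable"]:
        controls.append("isolate on a dedicated network segment (VLAN)")
        controls.append("restrict access to named clinical workstations")
    if device["internet_facing"]:
        controls.append("place behind a firewall and block inbound traffic")
    return controls

legacy_pump = {"name": "pump-ward-3", "patchable": False,
               "internet_facing": False}
for control in recommend_controls(legacy_pump):
    print(f"{legacy_pump['name']}: {control}")
```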

Finally, AI-powered Security Orchestration, Automation, and Response (SOAR) systems can analyze incidents, generate forensic reports, and recommend mitigation steps. This will reduce response time to security issues from hours to minutes, once again improving overall security for your systems.
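
A toy version of that SOAR-style flow appears below: score an incident against known playbooks and draft a report stub automatically. The incident types and playbook steps are invented here; real SOAR platforms are far richer.

```python
from datetime import datetime, timezone

# Illustrative playbooks keyed by incident type.
PLAYBOOKS = {
    "ransomware": ["isolate host", "disable affected accounts",
                   "notify incident response lead"],
    "phishing": ["quarantine email", "reset credentials",
                 "warn recipients"],
}

def triage(incident):
    # Pick a playbook, falling back to human escalation, and draft a report.
    steps = PLAYBOOKS.get(incident["type"], ["escalate to human analyst"])
    return {
        "generated": datetime.now(timezone.utc).isoformat(),
        "summary": f"{incident['type']} detected on {incident['host']}",
        "recommended_steps": steps,
    }

print(triage({"type": "ransomware", "host": "ehr-db"}))
```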

Humans can do all these things, too, but not always with the rapidity that artificial intelligence brings and, with time constraints, they certainly don’t always have the capacity. Working together, humans and AI can do an even better job of detecting and remediating security vulnerabilities, giving everyone more confidence that their systems and data are not exposed to the whims and pernicious intent of relentless hackers.

Bad actors worldwide will take advantage of any opportunity to exploit cybersecurity gaps. The more gaps that your human team, augmented by AI, can close, the better. To maximize AI’s impact on cybersecurity, organizations should:

  1. Integrate AI with existing security operations, complementing, not replacing, human expertise.
  2. Mitigate AI-specific risks by validating outputs and maintaining the same rigor as for other systems.
  3. Leverage proven off-the-shelf AI solutions to enhance threat detection rather than building custom tools in-house, as the market is evolving faster than most internal teams can keep up with.

Editor’s Note: John Spencer-Taylor, author of Change the Box: A Guide to Dream, Incubate, and Scale Your Innovations, is co-founder and CEO of BrainGu. Spencer-Taylor has nearly 25 years of experience in such areas as software development, business intelligence, financial systems, and cybersecurity. He and his team at BrainGu build platforms that help organizations deliver higher-quality software and are routinely deployed to assist, advise, and enhance the innovation efforts of organizations around the world, from the energy and finance sectors to national intelligence and defense agencies. Spencer-Taylor has a bachelor’s degree in computer science from Grand Valley State University. His book is available for purchase here.
