AI is ushering in new discoveries, creativity, and customer-focused innovation, but security risks loom. How can enterprises accelerate digital and security transformation that continuously balances reward and risk?
An entire gap-free sequence of the 3+ billion letters of our DNA is crucial to understanding and preventing disease. Beginning in 1990, the Human Genome Project required 13 years to sequence the human genome to about 92% completion. Fast forward to 2022, when Stanford University and its partners used AI to set the Guinness World Record for sequencing a human genome in 5 hours and 2 minutes, a record that still holds. Both time and cost matter when health is at stake, and the cost to sequence a human genome has fallen from about $3 billion in 2003 to roughly $500 today.
This example illustrates the rising value and accessibility of AI to organizations. But there is also a catch: an increasing reliance on AI-driven processes creates new vulnerabilities and potential attack vectors for cyber threats.
Bad actors have used AI in malware, distributed denial-of-service, and other attacks for years. Generative AI adds to the threat equation: cybercriminals use it to create convincing fake content for phishing and social media campaigns, to deceive other AI systems such as spam filters, to automate hacking tools, and to develop sophisticated malware. In some cases, no coding and only limited prompting skills are required; users can simply chat.
Further, even well-meaning developers can inadvertently do harm with generative AI by relying too heavily on AI output without sufficient testing to detect security exposures and other issues.
Enterprise and consumer use of generative AI large language models such as ChatGPT, Google Gemini, and Microsoft Copilot is growing rapidly. As of April 2024, the consumer version of OpenAI's ChatGPT was in use at “more than 92% of Fortune 500 companies.” Further, about 28% of employees use generative AI tools each day. How they use these tools introduces outsized vulnerabilities: much of the information shared with generative AI is sensitive, most often internal business information, source code that controls computer functionality, and corporate intellectual property.
As these security vulnerabilities grow, the ease, breadth, and speed of generative AI cyberattacks are alarming. In one case, a zero-budget, zero-day malware attack was created by one person in two hours. This was accomplished as a test in a controlled environment, with an engineer simply asking questions in the chat tool without writing a single line of code. This experiment and its outcome are not unique. Anyone can misuse generative AI, lowering the barrier to entry for malicious activity.
In 2023, Samsung banned the use of ChatGPT and other generative AI chatbots after three instances of data leaks. First, an engineer pasted source code into a chatbot to fix errors in the code. Second, an employee pasted code into a chatbot to optimize it. Third, an employee used a chatbot to generate minutes of an internal Samsung meeting. All three scenarios exposed sensitive internal information to an external service, creating data leaks and security vulnerabilities.
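Incidents like these have led some enterprises to screen prompts before they leave the corporate network. The sketch below is a minimal, hypothetical illustration of that idea, not any vendor's actual data-loss-prevention product; the pattern list and the is_safe_to_send function are invented for this example.

```python
# Illustrative sketch: a naive pre-submission filter that blocks prompts
# containing likely-sensitive material before they reach an external
# chatbot. The patterns and names here are hypothetical examples, not
# any vendor's real data-loss-prevention rules.
import re

SENSITIVE_PATTERNS = [
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),            # leaked keys
    re.compile(r"\b(api[_-]?key|secret|password)\s*[:=]", re.IGNORECASE),
    re.compile(r"\b(confidential|internal use only)\b", re.IGNORECASE),
    re.compile(r"^\s*(def |class |#include|import )", re.MULTILINE),   # pasted code
]

def is_safe_to_send(prompt: str) -> bool:
    """Return False if the prompt matches any sensitive pattern."""
    return not any(p.search(prompt) for p in SENSITIVE_PATTERNS)

if __name__ == "__main__":
    print(is_safe_to_send("Summarize the agenda for next week's demo."))   # True
    print(is_safe_to_send("Fix this: def charge(card, api_key=...)"))      # False
```

Real controls are far more sophisticated, layering classifiers, redaction, and allow-lists, but even a naive filter like this could have flagged pasted source code before it reached an external chatbot.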
Simply put, AI and generative AI create a “perfect storm” of vulnerability for these three reasons: first, generative AI lowers the barrier to entry, letting attackers do damage with little or no coding skill; second, even well-meaning developers can introduce security exposures by relying on AI output without sufficient testing; and third, employees routinely share sensitive business information, source code, and intellectual property with generative AI tools.
These three realities demonstrate the breadth of ways AI can be used for malicious purposes. In addition to fueling scamming and phishing schemes with fake content, AI-based attacks also include data poisoning, model theft, model evasion, and model data extraction.
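To make one of these attack classes concrete, here is a minimal, hypothetical sketch of data poisoning, in which an attacker who can tamper with training data flips a fraction of labels and quietly degrades the model later trained on them. The toy dataset and model are stand-ins chosen purely for illustration.

```python
# Hypothetical sketch of a data-poisoning attack: an adversary who can
# tamper with training data flips a fraction of labels, degrading the
# model that is later trained on that data. Illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy dataset standing in for, e.g., a spam filter's training corpus.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_after_poisoning(flip_fraction: float) -> float:
    """Train on data where `flip_fraction` of the labels were flipped."""
    rng = np.random.default_rng(0)
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # invert the chosen labels
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)   # evaluate on clean test data

for frac in (0.0, 0.1, 0.3):
    print(f"labels flipped: {frac:.0%}  test accuracy: {accuracy_after_poisoning(frac):.3f}")
```

Running the sketch shows test accuracy generally falling as the flipped fraction rises, which is why provenance and integrity checks on training data are part of securing AI pipelines.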
The promise of AI is so profound that enterprises can’t ignore it. AI leaders can significantly outpace the competition, but only if they mitigate security and other risks of using it. Here are some steps that enterprises can take to do so:
Implement a strategic plan for securing physical and digital assets. While cybersecurity is a priority for many IT organizations, MI&S has found that many security teams overlook physical assets that hide in plain view.
Future innovation leaders will be the enterprises that strike a deft balance between accelerating the extraordinary potential of AI and establishing countermeasures to reduce risk. The first steps are understanding the magnitude of the cybersecurity vulnerability, unpacking the mechanisms that can be exploited maliciously, and then acting to mitigate their impact. Organizations that do so can achieve notable outcomes, like sequencing the human genome in hours rather than years, while protecting themselves from the risks of AI.
For more information about security threats and how businesses and governments can address them, read “Protecting data where cybersecurity and global realities converge.” The paper is based on a panel discussion hosted by Iron Mountain at the 2024 World Economic Forum. The panel featured world-renowned experts from business, academia, and two of the world’s foremost intelligence agencies, moderated by a Pulitzer Prize-winning journalist and author.