Core Insights

The Cybersecurity Risks of Generative Artificial Intelligence for the Defense Industrial Base

October 31, 2023

By Shane Breland
Chief Information and Data Officer, Core4ce

Not since Otto Hahn and Fritz Strassmann demonstrated the power of splitting atoms has a more powerful and dangerous technology been introduced into our lives. While nuclear fission is relatively difficult to accomplish, Generative Artificial Intelligence (GenAI or AI) is now in the hands of everyone, including professionals handling some of our nation’s most sensitive information within the Defense Industrial Base (DIB) and adversaries who wish to steal that information.  

Employees across all sectors are leveraging GenAI through applications like OpenAI’s ChatGPT, Microsoft’s Bing Chat, Google’s Bard, Midjourney, and dozens more to enhance their productivity. Meanwhile, hackers, attackers, and nation-states are using AI with nefarious intent. The technology is tantalizingly simple to use and has significant implications for both innovation and cyber warfare. Government agencies and companies within the DIB must prioritize protective measures against our adversaries and establish AI usage policies that safeguard sensitive information.

Security teams and professionals working within the DIB should consider the following when using AI and defending against evolving threats:

  • Advanced Phishing Attacks – Generative AI can be used to automate and refine phishing attacks that trick individuals into revealing sensitive information, such as passwords or credit card numbers. With AI, these attacks can be more personalized, adaptable, and convincing, increasing their success rate. DIB employees should be exceedingly skeptical of any unexpected email that urges them to act immediately. While traditional email filtering can provide some protection (a simple illustration of the signals such filtering relies on appears after this list), end user training supplemented with simulated phishing campaigns can better equip DIB professionals to consistently recognize and report attacks. Security teams must train users to be cautious when asked to provide sensitive information and to verify any request by contacting the company or person directly through their official website or by phone.

  • Vulnerability Exploitation – Advanced AI systems can probe networks and systems, identify vulnerabilities, and exploit them faster than human hackers can. This creates a significant risk for DIB companies that have not adequately secured their digital assets. Core4ce has observed an increase in the speed at which known vulnerabilities are attacked, with new vulnerabilities being exploited twice as fast as they were two years ago. To combat this form of attack, ensure systems and software are updated with the latest patches and bug fixes as soon as they are released; a minimal sketch of automated vulnerability monitoring follows this list. Consider periodic third-party network vulnerability assessments to help validate defenses and identify weaknesses.

  • Deepfake Attacks – Applications like DeepBrain AI have made the creation of ‘deepfakes’ – highly realistic and difficult-to-detect fabrications of digital content – more accessible. Deepfakes can be used to create false images, videos, and audio clips that convincingly mimic real people. This technology poses significant cybersecurity threats, such as identity theft, misinformation campaigns, and fraud. If you see images, videos, or audio that seem strange or unexpected, verify the information through other means; do not immediately assume what you see is real, and check multiple sources. Ensure your company has a crisis communications plan in place to monitor, identify, and address false information that may damage your corporate reputation.
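To illustrate the signals that traditional email filtering weighs, the Python sketch below scores a message on urgency language, credential requests, and mismatches between the sender’s domain and the domains its links point to. The keyword list, weights, and threshold are hypothetical; production filters rely on trained models and sender-reputation data rather than fixed rules.

    import re

    # Hypothetical urgency cues often seen in phishing lures.
    URGENCY_TERMS = ("act now", "urgent", "verify your account", "password expires")

    def phishing_score(sender_domain: str, body: str, link_domains: list) -> int:
        """Return a crude risk score; higher means more phishing-like."""
        score = 0
        lowered = body.lower()
        # Urgent calls to action are a classic social-engineering signal.
        score += sum(2 for term in URGENCY_TERMS if term in lowered)
        # Links pointing somewhere other than the claimed sender are suspicious.
        score += sum(3 for domain in link_domains if domain != sender_domain)
        # Requests for credentials or payment data raise the score further.
        if re.search(r"password|credit card|ssn", lowered):
            score += 2
        return score

    risk = phishing_score(
        sender_domain="yourbank.com",
        body="URGENT: verify your account before your password expires today!",
        link_domains=["yourbank-login.example.net"],
    )
    print(f"Risk score: {risk}")  # a score above ~5 might warrant human review

No single rule is decisive on its own; the value comes from combining weak signals, which is also why user training remains essential for the messages that slip past any filter.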
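Patch urgency can likewise be monitored rather than guessed at. The following sketch, a minimal example and not a complete patch-management solution, queries NIST’s public National Vulnerability Database (NVD) API for CVEs published in the past week that match a product keyword. The keyword "apache httpd" and the seven-day window are placeholders to adapt to your own software inventory.

    import datetime
    import requests  # third-party library: pip install requests

    NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

    def recent_cves(keyword: str, days: int = 7) -> list:
        """Fetch CVEs published in the last `days` days that match a keyword."""
        end = datetime.datetime.now(datetime.timezone.utc)
        start = end - datetime.timedelta(days=days)
        params = {
            "keywordSearch": keyword,
            "pubStartDate": start.strftime("%Y-%m-%dT%H:%M:%S.000"),
            "pubEndDate": end.strftime("%Y-%m-%dT%H:%M:%S.000"),
        }
        response = requests.get(NVD_URL, params=params, timeout=30)
        response.raise_for_status()
        return response.json().get("vulnerabilities", [])

    # Placeholder keyword; substitute the products you actually run.
    for item in recent_cves("apache httpd"):
        cve = item["cve"]
        print(cve["id"], "-", cve["descriptions"][0]["value"][:80])

Running a check like this on a schedule, tied to a formal patch-management process, shortens the window between public disclosure and remediation that attackers increasingly exploit.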

Generative AI is becoming more capable and easier to access, which makes it tempting to use daily. If you routinely use generative AI for school, work, or entertainment, consider the following risks:

  • Privacy Concerns – GenAI requires extensive data to function effectively. This often means sharing information, which could put your privacy at risk if data security measures are not robust. Think about the information you are submitting to an AI tool; it could be stored by the system and used in ways you did not intend. Avoid inputting sensitive information like names, addresses, passwords, or proprietary corporate information. Use trusted AI tools and read their privacy policies so you can make informed decisions. DIB companies should consider developing policies that set guidelines for the corporate and government data that must not be input into AI tools (a simple screening sketch follows this list), and we will likely see government contract modifications addressing the use of AI in the near future.

  • AI Accuracy – AI algorithms are only as good as the data they learn from. If the training data is biased, the AI’s decisions may also be skewed. This can lead to unfair outcomes in AI-driven services like credit scoring or job application screening. Additionally, artificial intelligence can sometimes provide incorrect information. Large language models (LLMs) generate text by analyzing patterns in the data they were trained on and using statistics to mimic language without actual comprehension of the subject (the toy generator after this list shows the idea). Do not assume the output you receive from an AI chatbot is correct.

    Earlier this year, New York attorney Steven Schwartz used ChatGPT to help draft a brief for a personal injury case. The output from ChatGPT included references to several non-existent court cases, resulting in fines, sanctions, and embarrassment. Employees and companies need to understand the limitations and risks of using GenAI in the workplace.

  • AI Insurance – Cyber insurance has become commonplace for businesses looking to reduce risk when faced with cyberattacks, but AI insurance is relatively new and may not be in your business insurance portfolio. Privacy and accuracy risks could be reasons to consider coverage for your business. In 2023, iTutorGroup agreed to pay $365,000 to settle an Equal Employment Opportunity Commission (EEOC) lawsuit alleging that its AI-driven hiring software automatically rejected older applicants because of their age. There have also been several copyright infringement lawsuits filed against AI companies for scraping the internet to aggregate data and generate “unique” content, allegedly without proper author permission or attribution. In addition to training employees to consider these risks, companies should document how AI is being used in operations and consult with their insurance carriers to ensure coverage is appropriate.
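As a starting point for the privacy guidance above, the sketch below screens a prompt for obviously sensitive strings before it ever reaches an external AI tool. The three patterns are illustrative assumptions; a real data-loss-prevention control would cover far more identifiers and use context-aware detection.

    import re

    # Illustrative patterns only; a production control would cover many more.
    SENSITIVE_PATTERNS = {
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "document_marking": re.compile(r"\b(CUI|FOUO)\b"),  # controlled-data markings
    }

    def screen_prompt(prompt: str) -> list:
        """Return the names of any sensitive patterns found in a prompt."""
        return [name for name, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(prompt)]

    hits = screen_prompt("Summarize this FOUO report for employee 123-45-6789.")
    if hits:
        print("Blocked before reaching the AI tool; flagged:", hits)
    else:
        print("No sensitive markers found.")

A gateway that applies checks like these, paired with written policy, turns “do not paste sensitive data into chatbots” from a request into an enforced control.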
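To make the accuracy point concrete, the toy bigram generator below produces fluent-sounding text purely by sampling which word followed which in its training data. This is, vastly simplified, the same statistical mimicry that lets an LLM sound authoritative while citing a court case that does not exist; the corpus here is made up for illustration.

    import random
    from collections import defaultdict

    def train_bigrams(text: str) -> dict:
        """Record which word follows which -- pure statistics, no understanding."""
        words = text.split()
        model = defaultdict(list)
        for current, following in zip(words, words[1:]):
            model[current].append(following)
        return model

    def generate(model: dict, start: str, length: int = 10) -> str:
        """Emit a plausible word sequence by sampling observed successors."""
        out = [start]
        for _ in range(length):
            successors = model.get(out[-1])
            if not successors:
                break
            out.append(random.choice(successors))
        return " ".join(out)

    # Made-up corpus; the output sounds court-like without ever being
    # checked against reality, which is why chatbot output needs review.
    corpus = ("the court ruled for the plaintiff the court cited the case "
              "the case set precedent for the ruling")
    print(generate(train_bigrams(corpus), "the"))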

Core4ce is using AI for interesting and transformative work, from generating graphics to analyzing thousands of data sets, reducing the time needed to perform tedious tasks. Generative AI holds enormous potential for innovation and efficiency, but it also introduces new and complex cybersecurity risks. By understanding these risks and implementing robust mitigation strategies, you can harness the power of AI while ensuring the security of your digital assets and information. As we continue to explore the capabilities of AI, a focus on cybersecurity and data security will be paramount to safe and responsible development and use.

Shane Breland serves as the Chief Information and Data Officer for Core4ce. He oversees strategic information technology initiatives and resources that align with the company’s core objectives and long-term goals. He has engineered and fostered critical innovations and marked improvements in the company’s approach to cybersecurity and digital transformation.