‘When It’s Wrong, It’s Authoritatively Wrong’: Security Experts Describe Generative AI Adoption & Risk

Generative AI has taken the world by storm. Companies of all sizes and across all industries have been quick to leverage the technology to boost productivity and create smoother, more automated user experiences. OpenAI CEO Sam Altman has made it clear that the technology has a ways to go to earn the public’s trust and minimize bias, but the fact of the matter is that hundreds of millions of people are using it now.

Within our Trust practice, we’re thinking every day about the cybersecurity implications of this rapid and widespread adoption. What are the biggest threats posed by the technology? And more optimistically, how are defenders capitalizing on it to create safer and more secure products and experiences? 

Late last month, Mission North convened a LinkedIn Live panel, entitled “Generative AI: A Double-Edged Sword for Cybersecurity,” to explore these foundational questions. The event, moderated by Mission North’s Chief Client Officer Shannon Hutto, featured the following panelists [all are clients]:

- Nathan Wenzler, Chief Security Strategist, Tenable
- Roger Thornton, Co-Founder and General Partner, Ballistic Ventures
- Brian Fox, Co-Founder and CTO, Sonatype

Here are some key takeaways on the topics we discussed.

Next-Level Phishing

Bad actors can leverage generative AI tools at scale to overwhelm people and systems with far more plausible-sounding messages than, say, the phishing scams of old, with their glaring grammatical errors.

“[But] it’s not Skynet [the superintelligence system from the film franchise ‘Terminator’] – it’s not thinking for itself, it’s just guessing the next word,” said Tenable’s Nathan Wenzler. “If the data is not good, it won’t be good.”

“It’s an attack on trust,” he added. “It enables malicious actors to become better and better at bridging that trust gap and leading people to fall for tricks or scams because they believe or trust what they’re reading or hearing.”

<split-lines>"[But] it’s not Skynet, it’s not thinking for itself, it’s just guessing the next word. If the data is not good, it won’t be good.” - Nathan Wenzler, Chief Security Strategist, Tenable<split-lines>

AI as Code Writer?

One benefit of large language models (LLMs), which are trained on vast amounts of internet data to interpret written questions and generate responses, is that, with accurate prompts, they can write code, though arguably still with substantial human intervention. Yet code generation with LLMs is a complicated question: many developers are reluctant to use generated code because intellectual property (IP) ownership has not been established.
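
To make the “human intervention” point concrete, here is a minimal sketch of a review-gated code-generation workflow using the OpenAI Python client. The model name, prompts, and the `human_approved` gate are all illustrative assumptions, not a recommended pipeline; a real workflow would add tests, security scanning, and the license/IP checks the panelists raise.

```python
# Minimal sketch: LLM code generation with a human-review gate.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set;
# the model name and prompts are illustrative, not a recommendation.
from openai import OpenAI

client = OpenAI()

def generate_code(task: str) -> str:
    """Ask the model for code; treat the output as a draft, not trusted code."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system", "content": "You are a careful Python developer. Return only code."},
            {"role": "user", "content": task},
        ],
    )
    return response.choices[0].message.content

def human_approved(draft: str) -> bool:
    """Stand-in for the human-review step the panelists stress is still required."""
    print(draft)
    return input("Ship this code? [y/N] ").strip().lower() == "y"

if __name__ == "__main__":
    draft = generate_code("Write a function that validates an email address.")
    if human_approved(draft):
        print("Approved; route to code review, tests, and license/IP checks.")
    else:
        print("Rejected; revise the prompt or write it by hand.")
```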

“One of the biggest threats right now would be companies saying, ‘I need to adopt these new technologies to remain competitive.’ But there are flat-out business risks to doing so. If it’s generated code, do I own it?” said Ballistic Ventures’ Roger Thornton. “The consequences of using AI are manyfold. But there’s incredible pressure to adopt it very quickly.”

He reminded listeners that no matter the use case – code-creation or otherwise – the models can be flat-out wrong: “When they’re wrong, they’re authoritatively wrong – they drift, they have bias.”

Wenzler said: “Is it writing perfect code? No. Is it writing perfect malicious code? No. But as a criminal actor, [these tools] can help you cut down the time it takes to make code, so you can churn out more work.”

Where AI can shine with code, the experts agreed, is in identifying flaws and translating findings into plain language. The resulting time and cost savings can fund additional hires or a stronger overall risk posture.
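
That flaw-finding use case can be sketched the same way: feed the model a snippet and ask for findings plus a plain-language summary. The same assumptions apply (the `openai` package, an API key, an illustrative model name), and, as Thornton warns above, the output can be authoritatively wrong, so it is a first pass for a human reviewer, not a verdict.

```python
# Minimal sketch: using an LLM as a first-pass code reviewer that explains
# findings in plain language. Assumes `openai` is installed and
# OPENAI_API_KEY is set; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

SNIPPET = '''
def get_user(cursor, username):
    # Vulnerable on purpose: string-formatted SQL invites injection.
    cursor.execute(f"SELECT * FROM users WHERE name = '{username}'")
    return cursor.fetchone()
'''

def review(code: str) -> str:
    """Ask for likely flaws, then a summary a non-technical reader can follow."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system", "content": "You are a security reviewer. List likely flaws, then summarize the risk in plain language for a non-technical reader."},
            {"role": "user", "content": code},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Treat the output as a reviewer's first pass, not ground truth.
    print(review(SNIPPET))
```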

<split-lines>"The consequences of using AI are manyfold. But there’s incredible pressure to adopt it very quickly." - Roger Thornton, Co-Founder, General Partner, Ballistic Ventures<split-lines>

AI and the Days of Open Source

In describing the “land rush” around generative AI, Sonatype’s Brian Fox likened it to the early days of open source software, whose source code developers can freely modify and distribute.

“People were in denial about using open source because it wasn’t allowed in their companies, yet developers were using it all over the place,” he said. “And we’ve seen a lot of AI usage, but companies are [also] saying, ‘We’re not allowing it.’ These can’t simultaneously be true. Companies don’t know how to govern it, but the winners will figure out how to embrace it... These organizations will have a leg up in the market.”

That wasn’t the only bold comparison. The generative AI movement is “arguably one of the first times we’ve seen the bridge between rapidly adopted technology and one that the general population understands and interfaces with,” Wenzler said.

With the cloud, open source tools, and even the early internet, adoption was largely confined to technical experts. AI, however, touches everyone, he added.

<split-lines>"We’ve seen a lot of AI usage, but companies are [also] saying, ‘We’re not allowing it.’ These can’t simultaneously be true. Companies don’t know how to govern it, but the winners will figure out how to embrace it." - Brian Fox, Co-Founder, CTO, Sonatype<split-lines>

Ground Swell of New AI Tech Companies?

Thornton, whose firm Ballistic Ventures is dedicated to early-stage cybersecurity and cyber-related companies, predicts “a mass of new companies” based on this AI technology.

“Many will fail, but some will figure out great things,” he predicted. “To would-be entrepreneurs [I’d say]: ‘It’s easier to start a company by falling in love with a customer problem, and then going to look at how to solve it. It’s hard having a technology and then finding a problem.’”

“We’re always looking at cybersecurity companies, and prior to this ‘big awareness,’ only about half had an AI story to what they’re doing,” Thornton said. “Now, it’s easily 80%.”

<split-lines>"We’re always looking at cybersecurity companies, and prior to this ‘big awareness,’ only about half had an AI story to what they’re doing. Now, it’s easily 80%." - Ballistic Ventures' Roger Thornton<split-lines>

Narrowing the Skills Gap and the Evolving Role of the CISO

Panelists agreed that generative AI is having an impact on hiring, with CISOs expanding their job searches to include candidates with AI skills and experience. For example, “‘prompt engineer’ is a whole new skill and role that didn’t exist a few months ago,” said Sonatype’s Fox.

What’s more, the CISO role itself has evolved beyond the purely technical aspects of security, and security leaders would be wise to recruit employees who understand risk management more broadly, according to Tenable’s Wenzler. These hires could include people with finance or legal backgrounds who can help with new risk calculations.

“If you’re trying to be a successful CISO today, you are more aligned with the CFO and General Counsel than you are with the CIO and CTO,” said Wenzler.

Check out the session in its entirety here.
