Jesse Sampson — Dynamic Threat Landscape
“AI’s Impact in Cybersecurity” is a blog series based on interviews with a variety of experts at Palo Alto Networks and Unit 42 with roles in AI research, product management, consulting, engineering and more. Our objective is to present different viewpoints and predictions on how artificial intelligence is impacting the current threat landscape, how Palo Alto Networks protects itself and its customers, and the implications for the future of cybersecurity. In our recent interview with Jesse Sampson, a consulting manager at the renowned Unit 42 threat intelligence and research organization, we explored the dynamic threat landscape.
Artificial intelligence (AI) increasingly intersects with the defense against cyberthreats, and Sampson's experience and insights shed light on both the transformative potential and the challenges of integrating AI into the cybersecurity paradigm.
Sampson's deep understanding of AI's role in cybersecurity is evident in his observations of the widespread adoption of AI-powered tools. He foresees a trajectory where rising demand for AI skills and training programs will lead to an increasing array of tools designed to harness algorithms efficiently. According to Sampson, this proliferation is not solely in tool creation, but also in AI's capacity to manage and analyze the extensive data and alerts those tools generate – a capacity crucial to making the flood of information actionable and meaningful for security practitioners. He predicts, "There are going to be more and more tools that get created to utilize algorithms, as well as the need to sift through all the outputs of all those different tools in a way that's meaningful and actionable."
While acknowledging AI's potential, Sampson remains pragmatic, assessing the current trends in AI technology, particularly in the context of generative AI and large language models (LLMs). He suggests that the initial enthusiasm and exuberance surrounding these models might gradually recede as their limitations and actual applications become more apparent. This is similar to the evolution observed with prior AI technologies, he explains:
"I think that we're going to start seeing those types of models go down the hype curve a little bit, as we see with other technologies that have been part of AI's history. At one time, deep neural nets were supposed to be the gateway to artificial general intelligence, and they were going to solve everything. And, it turned out that they were really good at identifying images of cats. They're also pretty good at finding malware, but they can't do everything. It's not a miracle tool that's going to change all of industry and revolutionize everything. And, I think that we've found the same thing is true with LLMs. So, I think we're going to get out of the hype cycle piece of maturity and into the, 'okay, what really is the sweet spot for this newest technology?'"
Sampson's conversation also veers into the potential darker aspects of AI integration in cybersecurity. He expresses concerns about the misuse of AI for social engineering purposes, highlighting the rising threat of deep fakes and sophisticated phishing attempts that leverage AI-generated content. Sampson warns about the potential sophistication of attacks, including voice cloning and tailored social engineering as AI capabilities are harnessed by malicious actors.
Delving into the defensive strategies, Sampson emphasizes the proactive steps necessary to safeguard AI models from adversarial attacks and data manipulation. He underscores the significance of continually monitoring data quality, pipeline integrity and detecting anomalies within the training data – a crucial aspect in fortifying AI against potential vulnerabilities and manipulations. Sharing some of the questions to consider around data model integrity, Sampson further states:
"You have to monitor the regular stuff. I think this is the main thing you need to do to ensure data quality for a good model. Get a really good understanding of where your training data is coming from, and what that pipeline ought to look like. Are you monitoring that pipeline? Do you have metrics on your data pipelines? Are you looking at the outputs? Do you have a ton more detections than you had last week? Do you have as many detections as you had last week? Is there anything that's not getting scanned by your model? Just because it's automation and a model, doesn't mean it doesn't require a ton of maintenance and care and feeding. And, if you keep your eye on all those things and they're working, then you ought to be able to detect data poisoning, or something like that, if that happens."
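The pipeline checks Sampson describes can be sketched as a simple week-over-week comparison of detection volumes. This is a minimal illustration, not Unit 42 tooling; the threshold, source names and counts are hypothetical:

```python
# Compare this week's detection counts per data source against last
# week's, flagging sharp swings or sources that stopped being scanned --
# the kinds of signals that could indicate data poisoning or a broken
# pipeline. Thresholds and data are hypothetical illustrations.

def detection_anomalies(last_week, this_week, max_ratio=2.0):
    """Return alerts for sources whose detection volume changed sharply
    or that produced no detections at all this week."""
    alerts = []
    for source, prev in last_week.items():
        curr = this_week.get(source, 0)
        if curr == 0:
            alerts.append(f"{source}: no detections -- is it still being scanned?")
        elif prev and (curr / prev > max_ratio or prev / curr > max_ratio):
            alerts.append(f"{source}: detection volume changed {prev} -> {curr}")
    # Sources appearing for the first time have no baseline and merit review.
    for source in this_week.keys() - last_week.keys():
        alerts.append(f"{source}: new source with no baseline")
    return alerts

last = {"email_gateway": 120, "endpoint": 300, "web_proxy": 45}
now = {"email_gateway": 118, "endpoint": 950, "web_proxy": 0}
for alert in detection_anomalies(last, now):
    print(alert)
```

In this sketch, the stable email gateway passes quietly, the endpoint source trips the ratio check, and the silent web proxy is flagged for investigation.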
Discussing the future landscape of cybersecurity operations, Sampson envisions AI transforming conventional SOC operating models. He expects AI's ability to automate routine tasks to reshape the tiered SOC structure, freeing more advanced roles to focus on proactive threat hunting and mitigation: "I think that we're going to see a complete change of the traditional four-tier model because AI seems to be able to do a lot of the stuff that a Tier 1 SOC analyst would traditionally do."
While acknowledging AI's potential to bolster defense mechanisms, Sampson emphasizes the importance of vigilant, proactive defense strategies and a realistic understanding of AI's capabilities and limitations. As the cybersecurity landscape continues to change, the integration of AI demands a delicate balance between technological advancements and the human expertise required to navigate its complexities and challenges.
Learn more about AI in cybersecurity. See the latest innovations from XSIAM 2.0 in action through our on-demand demo.