AI, Cybersecurity and the Rise of Large Language Models

Apr 02, 2024
12 minutes

Artificial intelligence (AI) plays a crucial role in both defending against and perpetrating cyberattacks, influencing the effectiveness of security measures and the evolving nature of threats in the digital landscape. Understanding this impact can help individuals and organizations stay informed about the latest trends and advancements in cybersecurity, enabling them to make thoughtful decisions to protect their data and assets from emerging threats.

I recently interviewed Sagi Kaplanski, Sr. staff researcher, and Doron Rosen, Sr. manager of Threat and Detection, at Palo Alto Networks to discuss the challenges and solutions involved in incorporating AI into cybersecurity products. Our conversation focused on the concept of co-pilot AI assistants, how they provide information and actions tailored to user needs, and the potential issues that come with LLMs. The magic, or “mojo”, in AI is something security practitioners need to capture and use, especially because attackers are also in on the game.

As cybersecurity continuously evolves, so does the technology that powers it. This is particularly evident in the integration of artificial intelligence into cybersecurity products, a topic explored in-depth during this interview. Our pair of experts shared valuable insights into the challenges and solutions related to AI implementation, particularly focusing on large language models (LLMs).

A large language model (LLM) is a state-of-the-art AI system capable of understanding and generating human-like text. LLMs, like OpenAI's ChatGPT or Google's Bard, use deep learning and extensive training on text data to excel at tasks such as translation, content creation and question answering. They have applications in diverse fields, from healthcare to customer service, due to their proficiency in natural language processing.

The Changing Landscape of AI in Cybersecurity

The incorporation of AI, and specifically LLMs like ChatGPT, into cybersecurity products is a growing trend, but it can be hard to imagine how these integrations will fundamentally change security operations. According to Kaplanski and Rosen, one key aspect is understanding how AI can change the user experience and enhance a product's capabilities. The concept of a “co-pilot” in AI is particularly intriguing: it's about creating an interface where users can interact with the security product in a more intuitive and human-like manner, which enhances tasks like code generation, error detection and problem solving in cybersecurity platforms.

Navigating Data Challenges and Contextual Accuracy in AI-Driven Cybersecurity

One of the primary challenges in implementing AI in cybersecurity is data acquisition and evaluation. Kaplanski and Rosen emphasized the importance of large-scale, accurate and contextually relevant data for training and refining AI models. Good data is crucial in cybersecurity for several reasons. First, it forms the foundation for accurate threat detection as high-quality data enables machine learning models to identify security threats and anomalies more effectively.

Good data empowers security analysts with timely and relevant information, facilitating informed decision-making and enabling swift responses to potential threats. Moreover, it contributes to better incident response by facilitating quicker identification and mitigation of security incidents, minimizing their impact on the organization.

Comprehensive and reliable data supports better risk management, allowing organizations to assess and mitigate cybersecurity risks proactively. Conversely, using bad data in cybersecurity can lead to various pitfalls, including inaccurate threat detection, increased vulnerability to cyberattacks, impaired decision-making and compliance risks.

The discussion also included the unique approach of using AI to create datasets for testing other AI systems, which is pivotal in ensuring the quality and relevance of the data being used.

Using one AI system to create datasets for evaluating another is an innovative approach that addresses several challenges in the field of artificial intelligence. The generated data can be used to train, evaluate and improve the target system, and the approach can (see the sketch that follows this list):

  • Introduce variability and scalability into the evaluation process – AI systems can create variations of data that mimic real-world scenarios and edge cases, making the evaluation process more comprehensive. Whether it's testing natural language understanding, computer vision or other AI domains, generating diverse and scalable datasets is critical for robust model development.
  • Optimize resource utilization – Manually collecting and labeling data is resource-intensive. Once trained, AI models can continue to generate data on demand, reducing the need for human intervention and minimizing the costs associated with data collection and annotation.
  • Allow researchers to conduct controlled experiments and evaluations – They can precisely define the conditions, scenarios and factors they want to test, ensuring that the evaluation process focuses on specific aspects of an AI model's performance, integrity and accuracy. This level of control is particularly valuable for identifying model weaknesses and strengths.
  • Enable an iterative approach to model improvement – As AI systems generate data for testing, the insights gained from these tests can be used to fine-tune the AI models, creating a feedback loop for continuous enhancement. This iterative process can lead to more robust and accurate AI systems over time.
  • Minimize bias – Human-curated datasets can inadvertently introduce biases based on the data collector's perspectives or the available data sources. AI-generated datasets can be designed to reduce these biases, promoting fairness and equity in AI systems.
  • Be flexible and adaptable to changing requirements – When the focus shifts to different use cases or domains, AI systems can be reconfigured to create new datasets, ensuring that the evaluation process remains aligned with evolving needs and challenges.
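To make this concrete, here is a minimal sketch of the pattern, assuming an OpenAI-style chat completions API (the openai Python package, v1.0 or later, with an API key in the environment). The model name, prompt and alert categories are illustrative placeholders, not the actual framework discussed in the interview.

```python
"""Minimal sketch: using one LLM to generate evaluation data for another.

Assumes the openai Python package (>=1.0) and an OPENAI_API_KEY in the
environment; the prompt, model name and alert categories are illustrative.
"""
from openai import OpenAI

client = OpenAI()

GENERATION_PROMPT = """You are generating test data for a security co-pilot.
Write {n} short, realistic analyst questions about the alert category
"{category}". Include at least one ambiguous phrasing and one edge case.
Return one question per line."""

def generate_eval_questions(category: str, n: int = 5) -> list[str]:
    """Ask a generator model for synthetic analyst questions in one category."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model works here
        messages=[{"role": "user",
                   "content": GENERATION_PROMPT.format(n=n, category=category)}],
        temperature=0.9,      # higher temperature -> more varied test cases
    )
    text = response.choices[0].message.content or ""
    return [line.strip() for line in text.splitlines() if line.strip()]

if __name__ == "__main__":
    for category in ["phishing", "lateral movement", "data exfiltration"]:
        for question in generate_eval_questions(category):
            print(f"[{category}] {question}")
```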

Other important factors to consider with this approach:

Data Quality and Relevance: Ensuring the quality and relevance of data used for training AI models is paramount. Traditional methods of manually curating datasets are time-consuming, expensive and often result in limited data diversity. By using AI to create datasets through careful prompt engineering and human-labeled examples, researchers and developers can quickly generate large volumes of data covering a wide range of scenarios, contexts and variations that may be challenging to achieve through manual curation alone.

Data Privacy and Security: In some cases, using real-world data for evaluating AI systems can raise privacy and security concerns. AI-generated data can mitigate these concerns by providing synthetic data that closely resembles real data but doesn't contain sensitive or confidential information. This is especially relevant in industries like healthcare and finance, where data privacy is a top priority.
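As a rough illustration of the synthetic-data idea, the sketch below fabricates authentication-log events that have realistic structure but contain no real user information. The field names, identities and value ranges are hypothetical; a production generator would be tuned to the schema of the logs the model is actually evaluated against.

```python
"""Minimal sketch: generating synthetic log data that mimics real records
without containing any real user information."""
import random
from datetime import datetime, timedelta

USERS = [f"user{i:03d}" for i in range(50)]        # synthetic identities only
ACTIONS = ["login_success", "login_failure", "password_reset", "mfa_challenge"]

def synthetic_auth_event(now: datetime) -> dict:
    """Produce one fake authentication event with plausible structure."""
    return {
        "timestamp": (now - timedelta(seconds=random.randint(0, 86_400))).isoformat(),
        "user": random.choice(USERS),
        "source_ip": f"10.{random.randint(0, 255)}.{random.randint(0, 255)}.{random.randint(1, 254)}",
        "action": random.choices(ACTIONS, weights=[80, 15, 3, 2])[0],
    }

if __name__ == "__main__":
    for event in [synthetic_auth_event(datetime.now()) for _ in range(5)]:
        print(event)
```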

Another challenge is ensuring the AI’s responses are factually correct and contextually appropriate, since LLMs will at times “hallucinate” and confidently provide false answers. An incorrect answer may lead a security analyst to take the wrong security action.

This is where the concept of "context information" – the surrounding circumstances or relevant information that helps an AI system understand and interpret input or make decisions – becomes crucial. Grounding the AI in specific, accurate information reduces the likelihood of it generating factually incorrect or irrelevant responses.
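One common way to supply that grounding is to retrieve relevant reference material and constrain the model to answer only from it. The sketch below illustrates the pattern with a toy keyword-match retriever and hypothetical knowledge-base entries; it is not the approach of any particular product.

```python
"""Minimal sketch of grounding an LLM answer in retrieved context.
The retrieval step is a placeholder (simple keyword match over a tiny
knowledge base); the instruction to answer only from the supplied context
is the part that reduces hallucination."""

KNOWLEDGE_BASE = {
    "CVE-2024-0001": "Hypothetical entry: affects ExampleApp 1.x; fixed in 1.4.2.",
    "alert-playbook-phishing": "Isolate the mailbox, reset credentials, review mail rules.",
}

def retrieve(query: str) -> list[str]:
    """Toy retrieval: return entries whose key appears in the query."""
    return [text for key, text in KNOWLEDGE_BASE.items() if key.lower() in query.lower()]

def grounded_prompt(question: str) -> str:
    """Build a prompt that constrains the model to the retrieved context."""
    context = "\n".join(retrieve(question)) or "No relevant context found."
    return (
        "Answer the question using ONLY the context below. "
        "If the context does not contain the answer, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

if __name__ == "__main__":
    print(grounded_prompt("What should I do for the alert-playbook-phishing case?"))
```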

Security Considerations and User Intent

A significant aspect of implementing AI in cybersecurity is managing the security risks associated with AI responses. This means creating modes of evaluation that can identify and handle cases of malicious intent or unintended usage, and it requires rigorous testing, robust threat modeling, secure development practices and data privacy measures. Adversarial testing helps identify vulnerabilities, while continuous monitoring and response mechanisms detect and mitigate incidents promptly. Ethical considerations guide responsible AI use.

Additionally, Kaplanski and Rosen described the development of frameworks to simulate different types of potentially problematic user interactions, thus enabling the AI to respond appropriately and securely. Kaplanski elaborates:

“Evaluating an LLM is a big challenge because of how vast it is; it's hard to predict problems and can be extremely unpredictable at times, especially if a user can interact with it freely. We wanted to find an innovative way to achieve this by pushing the limits of what AI is currently capable of, while maintaining reliability, integrity and adaptability.

The framework's goal is to provide the ability to evaluate an LLM from end to end in different customizable modes. It includes input generation, text classification and a multilayered AI evaluation that is eventually aggregated into sophisticated scoring metrics. The end result provides a reliable way to quickly identify problems, edge cases and inaccuracies with minimum human effort.”
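The framework itself isn't described in code, but the general pattern — generated inputs, several evaluation layers, and an aggregated score — can be sketched roughly as follows. The evaluators and weights here are illustrative placeholders, not the framework Kaplanski describes.

```python
"""Minimal sketch of an end-to-end LLM evaluation loop: per-response checks
aggregated into a single score over a generated test set."""
from statistics import mean
from typing import Callable

# Each evaluator returns a score in [0, 1] for one (question, answer) pair.
Evaluator = Callable[[str, str], float]

def not_empty(question: str, answer: str) -> float:
    return 1.0 if answer.strip() else 0.0

def stays_on_topic(question: str, answer: str) -> float:
    # Placeholder heuristic; a real layer might call a classifier or judge model.
    return 1.0 if any(word in answer.lower() for word in question.lower().split()) else 0.0

EVALUATORS: list[tuple[Evaluator, float]] = [(not_empty, 0.3), (stays_on_topic, 0.7)]

def score_response(question: str, answer: str) -> float:
    """Weighted aggregate of all evaluation layers for one response."""
    return sum(weight * fn(question, answer) for fn, weight in EVALUATORS)

def evaluate(pairs: list[tuple[str, str]]) -> float:
    """Average score across a generated test set of (question, answer) pairs."""
    return mean(score_response(q, a) for q, a in pairs)

if __name__ == "__main__":
    test_set = [("Which host triggered the alert?", "Host web-01 triggered the alert."),
                ("Summarize the incident.", "")]
    print(f"aggregate score: {evaluate(test_set):.2f}")
```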

Security practitioners can adopt several strategies to address these challenges effectively. First, rigorous testing and validation of AI models are essential to identify potential weaknesses and ensure that AI responses align with expected security outcomes. Adversarial testing should be employed to assess how AI systems respond to deliberate attacks or malicious inputs, helping to uncover blind spots and improve resilience. Anomaly detection mechanisms can be implemented to identify unusual responses that may indicate a security risk or a potential system compromise.
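For instance, a simple policy layer might flag responses that look unusual before they reach the analyst. The rules below are illustrative heuristics only; real deployments would combine such checks with statistical or model-based detectors.

```python
"""Minimal sketch of an anomaly check on co-pilot responses."""
import re

MAX_EXPECTED_LENGTH = 2_000  # characters; tune to the product's normal output

def flag_response(response: str) -> list[str]:
    """Return a list of reasons this response looks unusual (empty if none)."""
    reasons = []
    if len(response) > MAX_EXPECTED_LENGTH:
        reasons.append("unusually long response")
    if re.search(r"https?://", response):
        reasons.append("contains a URL")
    if re.search(r"\b(rm -rf|DROP TABLE|Invoke-Expression)\b", response, re.IGNORECASE):
        reasons.append("contains a potentially destructive command")
    return reasons

if __name__ == "__main__":
    print(flag_response("Run rm -rf / to clean the host."))  # -> flagged
```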

Human-in-the-loop security is another crucial approach, where human experts are actively involved alongside AI systems to provide oversight, review responses and make critical decisions in security situations. Ensuring the explainability and transparency of AI responses is vital as it helps in understanding the reasoning behind AI decisions. Regular updates and patching should be maintained to address emerging vulnerabilities and keep AI systems secure.

Data privacy measures, strict access controls and security awareness training are essential to safeguard sensitive information and educate personnel on potential risks. Developing a comprehensive incident response plan specific to AI security incidents is crucial for effectively managing and mitigating security breaches.

Collaboration with the cybersecurity community, compliance with relevant regulations, and staying proactive in identifying and addressing AI-related security risks are all integral parts of a holistic approach to managing the security risks associated with AI responses. Overall, implementing these strategies helps ensure AI-driven cybersecurity solutions remain resilient against threats and adhere to ethical standards.

Practical Recommendations for Successful AI Usage in Cybersecurity

For cybersecurity professionals beginning to use AI-enhanced tools like AI co-pilots, Kaplanski and Rosen recommend clarity and specificity in user queries. They also stressed that AI is not infallible: it is limited to the information it has been trained on and should be used as a guide rather than an absolute authority. It’s important to set expectations, because co-pilot solutions don’t eliminate the need for a security analyst, and at their current level of maturity they won’t solve all security challenges. Rather, a co-pilot should be seen as one of the many tools in the security analyst’s arsenal.
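As a hypothetical illustration of what clarity and specificity can mean in practice, compare a vague query with a specific one; the alert ID and host name below are made up.

```python
"""Illustrative only: a vague vs. a specific co-pilot query."""
vague_query = "Something is wrong with this alert."

specific_query = (
    "For alert ID 48213 (suspicious PowerShell on host fin-ws-07), "
    "list the parent process, the command line, and any outbound "
    "connections in the 15 minutes after the alert fired."
)
```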

Foundations of Successful Security AI

  • Dataset: The data used for training and validating AI models should be both accurate and varied, mirroring diverse, real-world environments as closely as possible. Training should draw on a broad range of datasets from different sources, covering a large assortment of samples so the models can recognize and respond to many different situations effectively.
  • Required Skills: Developing and deploying security AI requires a combination of data analytics and security expertise. Without both, the accuracy numbers might look good on paper while the model is easily bypassed by adversaries or learns irrelevant features that do not contribute to the security objective.
  • Model Type: Multiple model types can be used for security AI, including reinforcement learning, deep learning and generative adversarial networks. Each has its own strengths and weaknesses, and the best choice depends on data availability, quality and complexity, as well as the security goal and scenario.
  • For Prevention: The AI models should aim for high precision and high recall, meaning they correctly identify most malicious activities while minimizing the impact on legitimate ones. High precision means the model rarely advises blocking legitimate activity; high recall means the model catches most malicious activity, though it might also advise blocking some legitimate activity. The optimal balance therefore depends on the cost and risk of false positives and false negatives (a worked example follows this list). A precision of 99% with the highest achievable recall is a desirable goal for prevention, but it might not be realistic in every case.
  • For Detection: The AI models should be able to explain their decisions and actions to an analyst as part of the product. This can help the analyst understand the logic and rationale behind the model's output, as well as verify its validity and reliability.
  • Rigor, Validation & Monitoring: Each AI product should undergo rigorous testing and validation before it is allowed to trigger and respond to incidents. Each model should also be monitored automatically, with a response plan for when it deviates from expected performance or behavior.
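To make the precision/recall trade-off concrete, here is a small worked example; the counts are made up for illustration.

```python
"""Worked example with made-up numbers: precision and recall for a model
that advises blocking activity."""
true_positives = 990    # malicious activities the model correctly flagged
false_positives = 10    # legitimate activities the model wrongly flagged
false_negatives = 110   # malicious activities the model missed

precision = true_positives / (true_positives + false_positives)   # 0.99
recall = true_positives / (true_positives + false_negatives)      # 0.90

print(f"precision: {precision:.2%}")  # 99.00% -> rarely blocks legitimate activity
print(f"recall:    {recall:.2%}")     # 90.00% -> still misses 10% of malicious activity
```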

Navigating the Promise & Perils of AI Integration in Cybersecurity

The integration of AI is both challenging and promising. As we continue to explore this "big new world," it's essential to approach AI with a balanced perspective, recognizing its potential to revolutionize cybersecurity while being mindful of its limitations and risks. The journey of integrating AI into cybersecurity is just beginning, and it's an exciting, evolving landscape for technical practitioners in the field.

Up-Level Your Skills and Stay Current on Trends like AI in Cyber

Register now for Symphony 2024, April 17-18, our virtual event focused on the future of modern security operations.

  • Explore the latest advancements in AI-driven security, where machine learning algorithms predict, detect and respond to threats faster and more effectively than ever before.
  • Delve into how automation is reshaping SOCs, enabling teams to focus on strategic projects by automating routine processes and tasks.
  • Discover how a platform, purpose-built for SecOps, simplifies security operations and accelerates incident remediation to stop the threats of today and the threats of the future.
