Security

Threat Modeling With LLM Support: Using an AI Chatbot to Support Development Teams With Security

Reading time: 14 min

Threat Modeling is a well-established approach for proactively and accurately addressing security issues. Key challenges for less experienced teams are identifying threats and the need for guidance by a security expert. We present our approach to Threat Modeling with LLM support in the form of a chatbot application with different use cases, demonstrating its effectiveness in analyzing data flow diagrams, proposing specific threats, and suggesting mitigation strategies. The tool not only helps elicit relevant threats but also enhances understanding through a guided interview format, making it accessible for teams with limited security expertise. Feedback from real project applications indicates that the chatbot significantly improves the Threat Modeling experience, providing a communicative and supportive environment.

Motivation

In today’s digital landscape, where attacks are increasingly sophisticated and pervasive, understanding how to protect our systems and data is more crucial than ever. Modern application security cannot be achieved with some auditing after development or a quick pentest before going live. Instead, security needs to be considered throughout the entire software development lifecycle, from the very beginning and in every process phase. This includes activities during the design of the software. Threat Modeling is an established method to address security topics in a preventive, precisely fitting way.

Threat Modeling in a nutshell

Threat Modeling provides a structured method for identifying potential threats, categorizing them, and devising countermeasures. One of the most popular methods is the “Four Question Framework” by Adam Shostack, which structures the process into four questions.

Figure: the Four Question Framework for Threat Modeling, visualized as four arrows labeled 1. What are we building? 2. What can go wrong? 3. What are we going to do about it? 4. Did we do a good enough job?

The first step is to identify and describe what you are building and are trying to protect. This is often done visually, using architectural diagrams or drawing new ones. Especially helpful are data flow diagrams depicting the processing and storing of data in the application and the involved processes. This also involves cataloging valuable assets such as data, applications, and systems, and assessing their importance. Understanding these assets helps prioritize security efforts.
Next, consider what can go wrong by identifying potential threats. This includes enumerating various threat actors and analyzing the vulnerabilities within your systems that could be exploited. This focuses on the actual threats for the specific application under review and is the main subject of the following blog articles.
After identifying threats, the focus shifts to determining what actions to take. Evaluate the risks associated with each identified threat and develop appropriate countermeasures. This may involve implementing preventive, detective, and responsive measures to mitigate risks effectively.
Finally, look back at your work and ahead to the future: Was this session helpful, and how should you continue? Discuss the outcome, how it helps you, and how the findings should be handled. Looking ahead, decide when to revisit the threat model and how to react to changes.
These four steps provide a framework for teams to proactively discover and manage risks and respond effectively with fitting solutions.

These steps provide a generic, ready-to-start framework for Threat Modeling across all kinds of domains, technologies, and applications. A deep dive into Threat Modeling for AI can also be found in this blog.

Struggles of teams starting Threat Modeling

Threat Modeling unfolds its potential when done by the development team as a whole, possibly supplemented by non-technical stakeholders or connected teams. This not only ensures that all the different perspectives are taken into account. It also functions as a communicative exchange, distributing knowledge among team members. This works towards a shared responsibility for security and encourages the involvement of all team members.

However, contributing to a Threat Modeling session can be challenging for some participants. While the first question (“What are we building?”) should be answerable by everyone, the second often is not. Thinking about specific threats, approaching the application from an attacker’s perspective, and considering weak points from a security perspective are difficult for beginners. Teams with little experience in Threat Modeling or application security in general may be overwhelmed by the variety of threats, struggle to find suitable ones, or forget important ones. This conflicts with the promise of a comprehensive, thorough analysis.
Security experts therefore play an important role in the success of Threat Modeling sessions, especially in the beginning. The scarcity of such experts relative to the number of developers, together with the goal of self-enablement of teams, calls for different approaches.

Helping inexperienced teams with Threat Modeling

As described, identifying relevant and comprehensive threats is key to successful Threat Modeling. If the team’s expertise and experience are not sufficient and no external support is available, several interim solutions come to mind.

One approach is consulting threat catalogs: long lists of possible threats for different components and scenarios. While such lists exist for different domains and technologies, they rarely match the team’s environment completely. All entries must then be assessed and the irrelevant ones filtered out, which takes considerable effort and is far from trivial for non-experts.
Threat Modeling card games like Elevation of Privilege, OWASP Cornucopia, or OWASP Cumulus also function as threat collections, without any claim to completeness, focusing instead on providing a different, fun approach to the method.

Commercial tools for Threat Modeling attempt to reduce the effort of selecting relevant threats by performing this assessment and filtering automatically. Given the costs of buying, configuring, and integrating such a tool, this is not the recommended approach for getting started with Threat Modeling.

In summary, we are looking for a method that yields specific threats for a more or less formal description of our system, matching our domain and tech stack, drawing on the large collections of publicly available threats, but tailored to our level of detail and knowledge. This is where an LLM-based chatbot for Threat Modeling support comes in.

Threat Modeling with LLM support

All four phases of Threat Modeling can be supported by an LLM: describing the relevant parts of a system, identifying threats, defining mitigations, and even judging the success of the session. Nevertheless, some questions are more challenging than others, and an LLM is more helpful in some tasks than in others.

Related approaches

There exist both academic concepts and practical implementations for Threat Modeling with LLM support.
For the related domain of privacy, the AI-powered tool PILLAR has been proposed. It uses a multi-agent collaboration architecture to discuss and prioritize privacy threats in a software system.
Existing open-source implementations like Stride-GPT or TaaC-AI focus more on a “one-stop shop” experience, with more structured and complete input and output.

Our approach

We identified four main use cases for LLM support in Threat Modeling and built a chatbot application to test their usefulness in practice.
In contrast to the related work mentioned above, we decided on a chatbot interface rather than a “standalone” threat analyzer that is fed all documentation and context information and responds with a long list of threats. The chatbot interface maintains the communicative aspect, and the LLM functions as helpful, understandable support, not an omniscient black box.

The chatbot is based on the existing inovex chatbot platform inovex-gpt and uses LLMs provided by OpenAI via the Azure OpenAI Service. It is built with Chainlit and provides a familiar chatbot interface for the user.
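To illustrate the wiring, here is a minimal sketch of such a setup, assuming Chainlit and the openai Python client; the endpoint, API version, deployment name, and preprompt are placeholders, not the actual inovex-gpt configuration:

```python
# Minimal sketch: Chainlit frontend, Azure OpenAI backend. All names and
# the preprompt are placeholders, not the actual inovex-gpt configuration.
import os

import chainlit as cl
from openai import AsyncAzureOpenAI

client = AsyncAzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

SYSTEM_PROMPT = (  # stand-in for the use-case-specific preprompts shown later
    "You are a supportive Threat Modeling assistant. Guide the team step "
    "by step and explain security terms in plain language."
)


@cl.on_chat_start
async def start() -> None:
    # Each chat session keeps its own history, seeded with the preprompt.
    cl.user_session.set("history", [{"role": "system", "content": SYSTEM_PROMPT}])


@cl.on_message
async def on_message(message: cl.Message) -> None:
    history = cl.user_session.get("history")
    history.append({"role": "user", "content": message.content})
    response = await client.chat.completions.create(
        model="gpt-4o",  # name of the Azure deployment, assumed here
        messages=history,
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    await cl.Message(content=answer).send()
```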

Screenshot of the start page of the Threat Modeling support chatbot, with an input field and four buttons to select one of the use cases

All use cases are therefore equipped with similar preprompts, setting the scene for this supportive character.

In the different use cases, the chatbot focuses on a specific task, but the structure of a guided interview stays the same.
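As an illustration of how such preprompts for the four use cases described below might be organized, here is a paraphrased sketch; the keys and wording are our own placeholders, not the production prompts:

```python
# Hypothetical per-use-case preprompts: paraphrases of the idea, not the
# production prompts. A shared preamble sets the supportive interview tone.
SHARED_PREAMBLE = (
    "You are a supportive Threat Modeling assistant. Conduct a guided "
    "interview: ask one question at a time, explain security terms in plain "
    "language, and let the user control depth and focus. "
)

USE_CASE_PREPROMPTS = {
    "architecture_interview": SHARED_PREAMBLE
    + "Interview the user about what they are building: components, data "
      "flows, stored data, and external dependencies. Summarize all "
      "gathered facts on request.",
    "threat_elicitation": SHARED_PREAMBLE
    + "Based on the provided system description, propose specific threats "
      "along STRIDE and the OWASP Top Ten, step by step, and discuss their "
      "applicability.",
    "dfd_analysis": SHARED_PREAMBLE
    + "The user uploads a data flow diagram. Identify external entities, "
      "processes, data stores, and trust boundaries, then derive concrete "
      "threats per element.",
    "mitigation": SHARED_PREAMBLE
    + "For a given threat or finding, explain the risk and potential damage "
      "and propose practice-oriented mitigation strategies.",
}
# The buttons on the start page would select one of these keys and seed
# the session's system message accordingly.
```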

Security architecture interview

To support answering the first question, “What are we building?”, the chatbot can guide the user through an interview, asking the questions relevant for the later security assessment. This is especially helpful for teams with little prior security knowledge, as it focuses attention on the relevant parts and may surface additional information. At the end, the application can provide a summary of all discovered facts and information for later use.
The pre-prompt for this task focuses on maintaining a conversation-like interview, with the user remaining in control of the depth and focus of the interview.

Threat elicitation

This use case arises from the second question, “What can go wrong?”. The chatbot is provided with existing documentation or the information summary from the security architecture interview. Based on this, it proposes specific threats and discusses their applicability with the team. It does so along vetted frameworks like STRIDE and lists such as the OWASP Top Ten, which allows cross-referencing and maintains a structure the users might know from other Threat Modeling sessions.
Thanks to the interactive chat interface, the user can always steer the threat elicitation by asking for details, focusing on known pain points, or discarding threat clusters that are out of scope. The preprompt ensures the interview is clear, step-by-step, and understandable.
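How the gathered context could be injected into the elicitation prompt is sketched below; the helper function and its wording are illustrative assumptions, not the actual implementation:

```python
# Sketch: wrapping documentation or an interview summary into the
# elicitation prompt. Names and wording are illustrative assumptions.
STRIDE = [
    "Spoofing", "Tampering", "Repudiation", "Information Disclosure",
    "Denial of Service", "Elevation of Privilege",
]


def build_elicitation_prompt(system_description: str) -> str:
    """Turn a system description into the initial user message."""
    return (
        "Based on the following system description, propose specific "
        f"threats grouped by the STRIDE categories ({', '.join(STRIDE)}). "
        "Cross-reference OWASP Top Ten entries where applicable, and "
        "present the threats one cluster at a time for discussion.\n\n"
        f"System description:\n{system_description}"
    )
```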

Analyze the data flow diagram

A special case of threat elicitation is based on data flow diagrams. To allow targeted use of this valuable form of system representation, a dedicated use case is implemented for it: based on a data flow diagram uploaded by the user, the LLM identifies key components and proposes concrete threats for them. Although the other use cases also allow file uploads, we implemented this one separately to give specific context about DFDs in the preprompt.
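A sketch of how such an upload could be handled in Chainlit and forwarded to a vision-capable model follows; the element handling follows the public Chainlit API, while the preprompt and model call are assumptions, not the actual implementation:

```python
# Sketch: receiving a DFD upload in Chainlit and preparing it for a
# vision-capable model. Preprompt and wiring are assumptions.
import base64

import chainlit as cl

DFD_PREPROMPT = (
    "The user uploads a data flow diagram (DFD). Identify external "
    "entities, processes, data stores, and trust boundaries, then propose "
    "concrete threats per element using STRIDE."
)


@cl.on_message
async def on_message(message: cl.Message) -> None:
    # Uploaded files arrive as elements attached to the message.
    images = [e for e in message.elements if e.mime and e.mime.startswith("image/")]
    if not images:
        await cl.Message(content="Please upload a data flow diagram.").send()
        return
    with open(images[0].path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode()
    user_content = [
        {"type": "text", "text": message.content or "Analyze this DFD."},
        {"type": "image_url",
         "image_url": {"url": f"data:{images[0].mime};base64,{encoded}"}},
    ]
    # ... send [{"role": "system", "content": DFD_PREPROMPT},
    #           {"role": "user", "content": user_content}]
    # to the chat model as in the earlier sketch.
```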

Defense and mitigation strategies

To address the third question, “What are we going to do about it?”, this use case provides explanations of given threats, their impact, and possible solutions. It can also be used outside the context of Threat Modeling, e.g. when evaluating an existing weakness such as a pentest finding. The chatbot helps to understand the risk and the potential damage, and proposes mitigation strategies.
The pre-prompt ensures a focus on the specific threat and a practice-oriented answer.

Evaluation of Threat Modeling with LLM support

We tested the proof-of-concept Threat Modeling chatbot using theoretical examples and real projects. A simple example is the data flow diagram of this e-commerce application.

Figure: simple data flow diagram of a webshop, including browser, webshop backend, database (order data, user data, logs), stock management, payment provider, log analysis, and sysadmin. Arrows indicate data flows; external entities, data stores, and trust boundaries are marked.

When uploading this diagram into the “Data Flow Diagram” use case, we get a summary of the contents, an explanation of the STRIDE categories, and first threat proposals. When asked for details on a particular threat, the chatbot elaborates.

Screenshot of the Threat Modeling support chatbot analyzing a data flow diagram

It is noticeable that the chatbot gains a good understanding of the application but starts with common threats first. This “low-hanging fruits” approach is helpful for teams starting with Threat Modeling but might be annoying for teams with more expertise. With additional information about already addressed threats or the focus of the analysis, this can be adjusted. Nevertheless, it is hard to steer the chatbot toward investigating complex threats like domain-specific logic flaws. The best results are therefore achieved in covering the more or less common threat areas.

For the other use cases, the chatbot can gain a fairly good understanding of the application with just around a dozen questions. Nevertheless, it proved good practice to export a summary of the analysis after each session to get started faster next time. Existing high-level documentation about the tech stack and use cases can also shorten the initial interview.

In practice, the results of the pre-engineered, increasingly fine-tuned system prompts are significantly better than simple “What are the threats concerning…” prompts in ChatGPT. A dedicated application also lowers the hurdle for teams to get started. In summary, even as a proof of concept, the chatbot application promises to be a useful tool for teams without security expertise and with too little access to security personnel.

Future extension

In the future, the application might be extended with further scenarios and tailored preprompts. Other input formats are also conceivable: similar to Dragon-GPT, an OWASP Threat Dragon export could be converted and used instead of a graphical representation.

Another use case on our roadmap is integrating a diagramming solution directly into the application. However, we need to ensure that the supportive character of the tool is maintained and that the entire process does not happen inside the application.

Summary

We built an easily usable application that supports development teams with little prior knowledge in Threat Modeling sessions. With fitting pre-prompts for the relevant use cases, the chatbot app enables a simple, communicative exchange between developers and LLMs. By deliberately not providing a “big input, comprehensive output” black-box application, but focusing on an explanatory, guided dialogue, we preserve the value of Threat Modeling as a knowledge-sharing, inclusive activity. While the chatbot cannot and should not replace the work of security experts in detecting sophisticated, complex threats, it offers a starting point that is more useful than threat catalogs, more comprehensive than security card games, and more communicative than commercial tools. In the future, more use cases for improved Threat Modeling with LLM support are imaginable.
