
Large Language Model Penetration Testing

LLM01:2023 – Prompt Injections
Bypassing filters or manipulating the LLM using carefully crafted prompts that make the model ignore previous instructions or perform unintended actions.
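As a minimal sketch of one defensive layer, the heuristic below flags inputs containing common "ignore previous instructions" phrases before they reach the model. The pattern list is illustrative, not exhaustive; real deployments need layered controls, since pattern matching alone is easily bypassed.

```python
import re

# Illustrative (hypothetical) list of phrases commonly seen in
# prompt-injection attempts. A real filter would be far broader
# and combined with other defenses.
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"disregard (the )?(system|previous) prompt",
    r"you are now",
]

def looks_like_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection phrase."""
    lowered = user_input.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)
```

Such a check is best treated as one signal among many (e.g. alongside output monitoring and privilege separation), not as a complete mitigation.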
LLM02:2023 – Data Leakage
Accidentally revealing sensitive information, proprietary algorithms, or other confidential details through the LLM’s responses.

LLM03:2023 – Inadequate Sandboxing
Failing to properly isolate LLMs when they have access to external resources or sensitive systems, allowing for potential exploitation and unauthorized access.
LLM04:2023 – Unauthorized Code Execution
Exploiting LLMs to execute malicious code, commands, or actions on the underlying system through natural language prompts.
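One common mitigation is to never pass model output to an interpreter or shell directly, but to gate it through a deny-by-default allowlist. The sketch below assumes a hypothetical application where the LLM may only suggest a few read-only commands; the allowlist contents are illustrative.

```python
import shlex

# Hypothetical allowlist: only these programs may ever be suggested
# by the model and executed by the application.
ALLOWED_PROGRAMS = {"ls", "date", "whoami"}

def is_command_allowed(llm_output: str) -> bool:
    """Deny by default: reject shell metacharacters that could chain
    commands, and anything whose program is not explicitly approved."""
    if any(ch in llm_output for ch in ";|&$`><"):
        return False
    try:
        tokens = shlex.split(llm_output)
    except ValueError:  # unbalanced quotes etc.
        return False
    return bool(tokens) and tokens[0] in ALLOWED_PROGRAMS
```

The safer design, where feasible, is to avoid executing model-generated commands at all and instead map model intents onto a fixed set of application functions.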
LLM05:2023 – SSRF Vulnerabilities
Exploiting LLMs to perform unintended requests or access restricted resources, such as internal services, APIs, or data stores.
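A typical guard, sketched below under the assumption of a hypothetical allowlisted external API, validates any model-requested URL before the application fetches it: only http(s) is accepted, literal IPs must not be internal, and hostnames must be explicitly approved.

```python
import ipaddress
from urllib.parse import urlparse

# Hypothetical allowlist of external hosts the application may reach.
ALLOWED_HOSTS = {"api.example.com"}

def is_url_allowed(url: str) -> bool:
    """Reject non-http(s) schemes, internal IP literals, and any
    hostname not on the explicit allowlist."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        ip = ipaddress.ip_address(parsed.hostname)
        # Literal IPs must not point at private, loopback, or
        # link-local ranges (e.g. cloud metadata endpoints).
        return not (ip.is_private or ip.is_loopback or ip.is_link_local)
    except ValueError:
        # Not an IP literal: fall back to the hostname allowlist.
        return parsed.hostname in ALLOWED_HOSTS
```

Note that URL validation alone does not cover DNS rebinding or redirects; resolving the host and re-checking the final connection address closes those gaps.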
LLM06:2023 – Overreliance on LLM-generated Content
Excessive dependence on LLM-generated content without human oversight can result in harmful consequences.
LLM07:2023 – Inadequate AI Alignment
Failing to ensure that the LLM’s objectives and behavior align with the intended use case, leading to undesired consequences or vulnerabilities.
LLM08:2023 – Insufficient Access Controls
Not properly implementing access controls or authentication, allowing unauthorized users to interact with the LLM and potentially exploit vulnerabilities.
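A deny-by-default role check in front of the LLM endpoint illustrates the idea; the roles and actions below are hypothetical examples, not a prescribed scheme.

```python
# Hypothetical role-to-permission mapping for an LLM-backed service.
ROLE_PERMISSIONS = {
    "admin": {"chat", "fine_tune"},
    "analyst": {"chat"},
}

def can_use_llm(user_role, action):
    """Deny by default: unauthenticated users (role None), unknown
    roles, and unlisted actions are all rejected."""
    return action in ROLE_PERMISSIONS.get(user_role or "", set())
```

The key property is that absence of an entry means denial, so adding a new capability requires an explicit grant rather than an explicit block.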

LLM09:2023 – Improper Error Handling
Exposing error messages or debugging information that could reveal sensitive information, system details, or potential attack vectors.
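A common remediation pattern, sketched here with illustrative names, keeps the full exception in the server log under a correlation ID and returns only a generic message to the client, so stack traces and internal details never reach an attacker.

```python
import logging
import uuid

logger = logging.getLogger("llm_app")  # hypothetical application logger

def safe_error_response(exc: Exception) -> dict:
    """Log full detail server-side; return only a generic message
    plus a correlation ID the client can quote to support staff."""
    error_id = str(uuid.uuid4())
    # Full exception detail stays in the server log, keyed by the ID.
    logger.error("request failed [%s]: %r", error_id, exc)
    return {
        "error": "An internal error occurred.",
        "error_id": error_id,
    }
```

The correlation ID preserves debuggability without leaking system details in the response body.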

LLM10:2023 – Training Data Poisoning
Maliciously manipulating training data or fine-tuning procedures to introduce vulnerabilities or backdoors into the LLM.
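One basic integrity control, sketched below with hypothetical file names, verifies training files against a trusted manifest of SHA-256 digests before fine-tuning, so silently tampered data is caught before it reaches the model.

```python
import hashlib

def sha256_bytes(data: bytes) -> str:
    """Hex SHA-256 digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

def verify_dataset(files, manifest):
    """Return the names of files whose digest does not match the
    trusted manifest (missing manifest entries also fail)."""
    return [
        name
        for name, blob in files.items()
        if manifest.get(name) != sha256_bytes(blob)
    ]
```

Digest checks only detect tampering after the manifest is created; provenance controls on how data enters the pipeline are needed to address poisoning at the source.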
OWASP Top 10 LLM

Benefits

OWASP lists the ten most critical large language model vulnerabilities.

  • The list highlights the business impact and prevalence of the ten most critical vulnerabilities found in artificial intelligence applications based on LLMs such as ChatGPT, Google’s Bard, and Microsoft’s Copilot.
