Mitre and Microsoft have added new vulnerabilities and case studies involving generative AI and large language models to a community knowledge base of adversary tactics and techniques that security professionals use to protect artificial intelligence-enabled tools.
The updated Mitre Adversarial Threat Landscape for AI Systems, or ATLAS, describes attack pathways in LLM-enabled platforms, information defenders can use to build up protections against malicious attacks on AI tools deployed in health care, transportation and finance, among other sectors, the nonprofit organization said Monday.
The updated ATLAS platform includes new adversary tactics and techniques drawn from case studies of real-world incidents involving generative AI and LLMs, including the ChatGPT plugin privacy leak, PoisonGPT and the MathGPT code execution incident.
Security researchers and individuals from government, industry and academia provided feedback that informed the generative AI-related adversary tactics incorporated into Mitre ATLAS.
The Mitre ATLAS community will focus on expanding the anonymized dataset by sharing information on AI-related vulnerabilities and incidents.