New AI Jailbreak Method 'Bad Likert Judge' Boosts Attack Success Rates by Over 60%

Recent reporting on AI-related security threats and vulnerabilities does not specifically mention a "Bad Likert Judge" AI jailbreak technique. However, several key developments and predictions highlight the evolving landscape of AI-driven security threats:

1. **Deepfake Cases and AI-Driven Fraud:** The Asia-Pacific region saw a 1,530% surge in deepfake cases from 2022 to 2023, making AI-driven fraud a prominent challenge.[1]
2. **Rise in "Living Off the Land" Attacks:** Predictions for 2025 include a rise in "living off the land" attacks, in which attackers exploit legitimate tools and processes within an organization's network to avoid detection. This technique is expected to be used by nation-state actors such as Russia, China, and Iran.[1]
3. **Social Engineering Attacks:** Social engineering attacks, including phishing, push bombing, and SIM swap attacks, will become more sophisticated as adversaries leverage AI and ML to be more convincing and to evade existing controls.[1]
4. **Identity Fraud and Transparency Demands:** Consumers increasingly demand transparency from businesses about their security practices and use of AI. This trend is driven by concerns over identity security and personal data protection, with 89% of consumers reporting concerns about AI's impact on their identity security.[1]
5. **Critical Infrastructure Attacks:** Critical infrastructure is expected to become a higher priority for nation-state threat actors in 2025, elevating such attacks to a matter of national security.[1]
6. **SaaS Identity-Based Attacks:** The next wave of threats will target SaaS identities, abusing single sign-on (SSO) to move laterally and access additional data through connected services. This makes every identity an attacker can obtain more valuable.[1]
7. **AI-Powered SAST Tools:** AI is enhancing Static Application Security Testing (SAST) tools by improving vulnerability detection and automating prioritization. However, sending proprietary code to third-party AI models raises concerns about compliance standards.[2]
8. **Injection Attacks and AI-Generated Code:** Injection attacks, fueled by vulnerabilities in AI-generated code, are set to re-emerge as a top threat in 2025. AI can speed up development but often produces code that does not follow security best practices.[4]
9. **Data Privacy Concerns:** The rapid increase in AI adoption has created significant data privacy issues, including a lack of data transparency, new endpoints for vulnerabilities, and potential regulatory gaps. Organizations must balance business value with data security to mitigate these risks.[5]
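The injection risk in item 8 is concrete. A minimal sketch (with a hypothetical `users` table, using Python's standard `sqlite3` module) shows the anti-pattern commonly flagged in AI-generated code, where SQL is built by string interpolation, next to the parameterized alternative:

```python
import sqlite3

# In-memory demo database; schema and data are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "s1"), ("bob", "s2")])

def lookup_vulnerable(name):
    # Anti-pattern: attacker-controlled input is spliced into the SQL text,
    # so crafted input can change the query's logic.
    query = f"SELECT secret FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def lookup_safe(name):
    # Parameterized query: the driver treats the input strictly as data.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "x' OR '1'='1"
print(lookup_vulnerable(payload))  # the OR clause leaks every row
print(lookup_safe(payload))        # no user literally named the payload
```

The vulnerable version returns every secret in the table for the classic `' OR '1'='1` payload, while the parameterized version returns nothing; static analysis (SAST) tools typically flag exactly this string-interpolation pattern.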

In summary, while recent reporting does not specifically mention a "Bad Likert Judge" AI jailbreak technique, it highlights the increasing sophistication of AI-driven attacks, the surge in deepfake cases, and the need for stronger data privacy measures as AI adoption continues to grow.