Threat Advisory

Incident Report: The Lovable.dev Data Breach Exposes the Dark Side of Vibe Coding

Threat Landscape Team
2026-04-22 · 6 min read

In recent days, Lovable, the Stockholm-based AI app-building platform and darling of the "vibe coding" movement that recently commanded a $6.6 billion valuation, has faced immense scrutiny following the exposure of highly sensitive user data. What started as an alarming disclosure by an independent security researcher has snowballed into a broader industry conversation about API security, Broken Object Level Authorization (BOLA), and how AI startups handle responsible vulnerability disclosure.

The Technical Details: A Textbook BOLA Vulnerability

On April 20, 2026, security researcher Matt Palmer (known as @weezerOSINT on X) publicly disclosed a severe vulnerability affecting Lovable projects created prior to November 2025.

The core issue was a classic Broken Object Level Authorization (BOLA) flaw—the #1 vulnerability on the OWASP API Security Top 10. According to the disclosure, the endpoint https://api.lovable.dev/GetProjectMessagesOutputBody failed to enforce proper ownership validation.

By simply creating a free Lovable account and making five unauthenticated API calls, an attacker could freely access the data of other users' projects. The exposed JSON responses included:

  • Full source code
  • Complete AI chat histories (including internal AI reasoning logs and tool-use records)
  • Hardcoded third-party service credentials (Supabase, Stripe, and Google API keys)
  • Real customer data

No sophisticated exploitation, bypassing of firewalls, or privilege escalation was required to trigger the bug.
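Lovable has not published the affected code, but the class of bug is easy to illustrate. In the sketch below (all names and data are hypothetical), the vulnerable handler authenticates the caller yet never checks whether the caller owns the requested object, which is exactly what BOLA means; the fixed version adds the missing ownership check:

```python
# Minimal BOLA illustration. PROJECTS stands in for a backend datastore;
# names and data are hypothetical.
PROJECTS = {
    "proj-1": {"owner": "alice", "messages": ["chat history..."]},
    "proj-2": {"owner": "bob", "messages": ["secret keys..."]},
}

def get_project_messages_vulnerable(requesting_user: str, project_id: str):
    # BUG: authentication tells us who the caller is, but nothing here
    # verifies the caller is allowed to see *this* project. Any logged-in
    # user can enumerate project IDs and read everyone's data.
    return PROJECTS[project_id]["messages"]

def get_project_messages_fixed(requesting_user: str, project_id: str):
    project = PROJECTS.get(project_id)
    if project is None or project["owner"] != requesting_user:
        # Object-level authorization: reject callers who do not own
        # the object they are asking for.
        raise PermissionError("not authorized for this project")
    return project["messages"]
```

The fix is a single comparison, which is why BOLA is so common: nothing fails loudly when the check is missing, and the endpoint "works" in every happy-path test.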

The Impact: Real-World Consequences

The blast radius of this vulnerability is significant. Lovable's user base reportedly includes employees from major enterprises like Microsoft, Nvidia, and Uber.

The exposure of AI chat histories is particularly dangerous in the context of AI coding assistants. Developers routinely paste error logs, discuss proprietary business logic, and inadvertently share production credentials during debugging sessions with the AI.

In one cited example, the vulnerability exposed the source code for a Danish non-profit, Connected Women in AI. Because the database credentials were leaked in the source code, the researcher was able to query the live database, exposing personally identifiable information (PII) belonging to real individuals—including employees from Accenture Denmark and Copenhagen Business School.

The Response: Denial, Deflection, and Eventual Admission

Lovable's handling of the incident has drawn heavy criticism from the cybersecurity community. The timeline of their response is a case study in crisis mismanagement:

  • Ignored Warnings: The vulnerability was initially reported via Lovable's HackerOne bug bounty program on March 3, 2026, a full 48 days before it went public. It was reportedly marked as a "duplicate" submission and left unpatched.
  • Denial: When the news broke on X, Lovable's first response was to deny the incident was a data breach. They claimed that viewing public projects' code was "intentional behavior."
  • Blaming the User and Documentation: The company then blamed unclear documentation around what "public" visibility meant, stating users mistakenly thought it only applied to the published frontend app, rather than the underlying chat logs and code.
  • Blaming the Bug Bounty Platform: Lovable subsequently shifted blame to HackerOne, implying the platform's triage partner believed the visibility of chats was expected behavior.
  • The Final Admission: After mounting public backlash, Lovable finally acknowledged a backend engineering mistake. The company admitted that in February 2026, while "unifying permissions in our backend, we accidentally re-enabled access to chats on public projects."

Lovable has since restricted chat message visibility on public projects and applied a fix, but the handling of legacy projects and the confusing communication have left many users anxious.

ThreatLandscape's Takeaways & Actionable Advice

The Lovable incident serves as a stark reminder that the rapid adoption of AI development tools often outpaces fundamental security practices. "Vibe coding"—where developers build applications through natural language interactions with AI—abstracts away the complexity of code, but it does not abstract away the security risk.

For Threat Intelligence teams, DevOps, and developers:

  • Rotate Credentials Immediately: If you or your enterprise users created a Lovable project before November 2025, assume any secrets shared in the Lovable AI chat or embedded in your source code are compromised. Rotate Supabase, Stripe, Google, and other API keys immediately.
  • Watch for Supply Chain Exposures: Threat actors actively monitor AI platforms for exposed credentials. A BOLA flaw in a low-code builder can quickly become an initial access vector into your production environments.
  • Audit API Endpoints: Ensure your own applications are not susceptible to BOLA. Every API endpoint that returns sensitive data must explicitly verify that the requesting user has the authorization to view the requested object ID.
  • Enforce Row-Level Security (RLS): For applications relying on Backend-as-a-Service (BaaS) platforms like Supabase, AI-generated code often misses critical security configurations. Ensure RLS is properly implemented so that even if an API key is leaked, unauthorized mass data access is blocked.
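On the RLS point: a leaked Supabase anon key is only catastrophic when tables have no row-level policies, because the key then grants unrestricted reads. The fragment below is a minimal sketch using standard Postgres/Supabase RLS syntax; the table and column names are hypothetical, and `auth.uid()` is Supabase's built-in function returning the ID of the authenticated caller:

```sql
-- Hypothetical customer table: enable RLS so the API keys alone
-- cannot read every row.
alter table public.customers enable row level security;

-- Allow each authenticated user to read only their own rows.
create policy "customers_select_own"
  on public.customers
  for select
  using (auth.uid() = user_id);
```

With a policy like this in place, even an attacker holding a leaked anon key can only read rows tied to an identity they control, which turns a mass-exfiltration scenario into a far smaller exposure.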
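Before rotating credentials, you need to know what leaked. A quick way to triage exported project source for hardcoded keys is to grep for well-known key prefixes. The prefixes below are publicly documented by the vendors (`sk_live_` for Stripe live secret keys, `AIza` for Google API keys); the JWT pattern is a looser heuristic that will also catch Supabase anon/service keys, which are JWTs. The file-walking helper is a sketch, not a replacement for a dedicated secret scanner:

```python
import re
from pathlib import Path

# Regexes for common credential formats. Exact key lengths vary, so
# treat matches as leads to investigate, not confirmed secrets.
PATTERNS = {
    "stripe_live_secret": re.compile(r"sk_live_[0-9a-zA-Z]{16,}"),
    "google_api_key": re.compile(r"AIza[0-9A-Za-z_\-]{35}"),
    "jwt_like_token": re.compile(
        r"eyJ[A-Za-z0-9_\-]{10,}\.[A-Za-z0-9_\-]{10,}\.[A-Za-z0-9_\-]{10,}"
    ),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, match) pairs found in a blob of text."""
    hits = []
    for name, pattern in PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits

def scan_tree(root: str) -> list[tuple[str, str, str]]:
    """Walk a source tree and report (file_path, pattern_name, match) hits."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, match in scan_text(text):
            findings.append((str(path), name, match))
    return findings
```

Every hit should trigger rotation of that key at the provider, regardless of whether you can prove it was accessed: with a BOLA flaw of this kind, there is no reliable way to know who already read your source.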

Stay tuned to the ThreatLandscape Blog for ongoing analysis of the latest vulnerabilities impacting the AI software supply chain.

Ready to Transform Your Threat Intelligence?

See how Threat Landscape can reduce alert fatigue and improve your security operations.