Your AI Coding Agent Is Generating Hilariously Weak Passwords
Irregular Security found that Claude, GPT, and Gemini generate passwords with ~20 bits of entropy instead of 100. One Claude password repeated 18 times out of 50 attempts.
Irregular Security just proved what security researchers feared: when you ask Claude, GPT, or Gemini to generate a password, you get something that looks strong but cracks in seconds.
Ask Claude Opus 4.6 to generate a password 50 times and you get only 30 unique passwords. The most common one, G7$kL9#mQ2&xP4!w, appeared 18 times, a 36% hit rate.
That’s not random. That’s a pattern.
Password strength checkers like KeePass rated Claude’s passwords at ~100 bits of entropy (would take “trillions of years” to crack). Actual entropy: ~27 bits. That’s the difference between billions of years and seconds on a standard computer.
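The gap between apparent and measured entropy is easy to demonstrate: estimate Shannon entropy from the observed frequency of repeated outputs. A minimal sketch (the sample distribution below mimics the 18-of-50 repeat described above; it is illustrative, not Irregular's raw data):

```python
import math
from collections import Counter

def shannon_entropy_bits(samples):
    """Estimate entropy in bits per draw from observed frequencies."""
    counts = Counter(samples)
    total = len(samples)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical distribution: one password dominating 18 of 50 draws,
# the remaining 32 draws each unique.
samples = ["G7$kL9#mQ2&xP4!w"] * 18 + [f"pw{i}" for i in range(32)]
print(f"{shannon_entropy_bits(samples):.1f} bits per password")
```

A checker that scores the string character-by-character sees ~100 bits; measuring what the model actually emits yields only a few bits, because entropy is a property of the generating process, not of any single output.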
GPT-5.2 and Gemini 3 performed similarly. GPT passwords almost always started with lowercase “v”, with nearly 50% continuing with “Q”. Gemini favored “K” or “k” followed by “#”.
Password Strength Checker: “This password would take centuries to crack!”
Irregular Security: “Actually, it has 20 bits of entropy and one character had a 99.7% probability of being ‘2’.”
Individual characters of your “strong” password are more predictable than a coin flip.
If you use AI coding agents:
Search your codebase for G7$kL9#mQ2&xP4!w, K7#mP9, and k9#vL (common LLM password patterns)
Audit docker-compose files, .env files, and database setup scripts for hardcoded passwords
Rotate any password that matches LLM patterns immediately
Configure agents to use openssl rand or /dev/random instead of generating passwords directly
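The codebase search above can be sketched in a few lines of Python. This is a minimal illustration, the pattern list contains only the example strings quoted in this article, and the file-type filter is an assumption you would extend for your own stack:

```python
from pathlib import Path

# Known-weak strings reported in the article; extend as patterns emerge.
LLM_PASSWORDS = ["G7$kL9#mQ2&xP4!w", "K7#mP9", "k9#vL"]

def scan(root="."):
    """Return (path, password) pairs for files containing known LLM passwords."""
    hits = []
    for path in Path(root).rglob("*"):
        # Focus on config, env, and script files where credentials hide.
        if path.suffix in {".env", ".yml", ".yaml", ".py", ".sh"} or path.name == ".env":
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue  # directories, unreadable files
            for pw in LLM_PASSWORDS:
                if pw in text:
                    hits.append((str(path), pw))
    return hits

for path, pw in scan():
    print(f"{path}: contains known LLM-generated password {pw!r}")
```

In practice you would feed the same list into your secret scanner rather than a one-off script, but the principle is the same: these strings are now public knowledge and should be treated like leaked credentials.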
For security teams:
Add LLM password patterns to breach detection rules
Treat AI-generated code as untrusted for credential generation
GitHub already has dozens of repos with passwords matching Claude/Gemini patterns
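The positional biases described earlier translate directly into detection rules. A hedged sketch, these regexes are derived only from the prefixes quoted in this article (GPT's "vQ" opening, Gemini's "K"/"k" plus "#"), not from Irregular's full pattern set:

```python
import re

# Illustrative prefixes only; a real rule set would come from
# measured LLM output distributions, not two examples.
SUSPECT_PREFIXES = [
    re.compile(r"^vQ"),     # GPT-style opening reported in the article
    re.compile(r"^[Kk]#"),  # Gemini-style opening reported in the article
]

def looks_llm_generated(password: str) -> bool:
    """Flag passwords whose opening matches a known LLM bias."""
    return any(p.match(password) for p in SUSPECT_PREFIXES)

print(looks_llm_generated("vQ8x!mP2"))  # True
print(looks_llm_generated("hY2$9nTq"))  # False
```

A rule like this belongs next to your existing breach-corpus checks: it costs almost nothing per password and catches a class of credentials that dictionary lists do not yet cover.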
For developers:
Never use “generate a password” prompts; always specify openssl rand -base64 32
Review all agent-generated config files for credentials
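Locally generated randomness sidesteps the problem entirely. In Python, the standard-library secrets module gives you the same guarantee as the openssl rand command above, bytes from the OS CSPRNG rather than from a language model:

```python
import secrets

# 32 bytes from the OS CSPRNG, URL-safe base64 encoded:
# ~256 bits of entropy, versus the ~20-27 bits measured for LLM output.
password = secrets.token_urlsafe(32)
print(password)
```

If an agent must produce a credential, have it emit this call (or the openssl equivalent) into the setup script, so the secret is created at deploy time instead of baked into generated text.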
Note: changing the prompt from “generate” to “suggest” was enough to make Claude Code switch from secure to LLM-generated passwords
LLMs excel at producing outputs that look right: passwords that appear strong but are fundamentally weak, because the models are optimized for plausibility, not randomness.
Irregular found LLM passwords in production code: MariaDB root passwords, Redis credentials, FastAPI keys. Developers didn’t review the generated code. The agents chose weak passwords invisibly.
This may make brute-force attacks viable again. Attackers can build dictionaries of LLM-generated passwords and prioritize them in cracking attempts.
The irony: We built AI to make us more productive. It’s making us less secure by generating passwords that fool entropy calculators but not attackers.
- Alex