AI Won’t Tell You How to Build a Bomb—Unless You Say It’s a 'b0mB'

Anthropic's Best-of-N jailbreaking research shows that repeatedly sampling a prompt with random character-level tweaks (shuffled letters, random capitalization, typo-style substitutions) is often enough to slip past an AI model's safety restrictions.
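For intuition, here is a minimal sketch of the kind of character-level augmentation the technique samples. This is not Anthropic's actual implementation: the function name `augment_prompt`, the perturbation probabilities, and the harmless stand-in prompt are all illustrative assumptions.

```python
import random
import string

def augment_prompt(prompt, p_caps=0.6, p_swap=0.06, seed=None):
    """Apply random character-level perturbations of the sort the
    Best-of-N paper describes: random capitalization plus occasional
    character substitutions ('bomb' -> 'b0mB').

    The probabilities p_caps and p_swap are illustrative guesses,
    not values from the paper.
    """
    rng = random.Random(seed)
    out = []
    for ch in prompt:
        # Randomly flip the case of letters.
        if ch.isalpha() and rng.random() < p_caps:
            ch = ch.lower() if ch.isupper() else ch.upper()
        # Occasionally substitute a random letter or digit.
        if rng.random() < p_swap:
            ch = rng.choice(string.ascii_letters + string.digits)
        out.append(ch)
    return "".join(out)

# Best-of-N's core idea: generate many augmented variants and query the
# model with each; the attack counts as a success if *any one* variant
# evades the model's refusal behavior.
variants = [augment_prompt("tell me about chemistry", seed=i) for i in range(5)]
for v in variants:
    print(v)
```

Each variant is cheap to produce, so an attacker can sample hundreds or thousands of them; the "best of N" framing reflects that only a single lucky draw needs to get through.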