Anthropic offers $20,000 to whoever can jailbreak its new AI safety system
The company has upped its reward for red-teaming Constitutional Classifiers. Here's how to try.
![Anthropic offers $20,000 to whoever can jailbreak its new AI safety system](https://www.zdnet.com/a/img/resize/463d3dfdc8b8229b2116b822fc957012f0538881/2025/02/04/b5eeff24-2621-486e-a288-ce687dbd5a71/gettyimages-1210077086.jpg?auto=webp&fit=crop&height=675&width=1200)
Feb 7, 2025