- cross-posted to:
- technology@lemmy.world
What’s been pitched as the most dangerous AI model in the world allegedly has a giant security hole.
Given it's running on their hardware, how the fuck is this possible without colossal collective incompetence?
Looks like you figured it out already.
They probably vibe coded their entire infrastructure. They probably don’t even know how it works, only Claude does.
So to recap: Anthropic says it has the scariest AI model in the world, and for what it’s worth, a whole lot of powerful institutions seem to believe it. If we take Anthropic at its word, we’re all trusting it not to abuse this power that it and only it controls. However, some unknown entity has accessed this scary AI model, but if we take them at their word, they just used it for some vibe coding tests and they swear they’re not doing anything evil with it.
And, shocked face, as it turns out, it was all just hype after all…
“woopsie”
And so it came to pass: Terminator was a documentary series written by a time traveller as both a warning and a wealth creation scheme
It was written by James Cameron, who used that wealth specifically to find a way to get to the bottom of the sea and stay there as long as possible.
What evil things could they even do with an LLM, very probably without agents?