The paperclip problem theorises that an AI given the task of producing paperclips wouldn’t know when to stop and would essentially end up destroying our world to achieve its goal. Now replace paperclips with nuclear warheads and you know you’ve entered 2025. This is the paperclip problem on steroids. What happens when we task AI with controlling every nuclear warhead “for defence”?
OpenAI has announced that the US National Laboratories will use its AI models to help with “a comprehensive program in nuclear security, focused on reducing the risk of nuclear war and securing nuclear materials and weapons worldwide”.
Their product is now “a tool for war”. No longer just “a tool for thought”.
They try to paint the announcement in a good light, but there is really only one way to read it. Boy, are we letting them move the goalposts while they keep adding fuel to the fire:
“This is the beginning of a new era, where AI will advance science, strengthen national security, and support U.S. government initiatives.”
It’s a sad development, but less and less surprising. I’ve written about similar developments before (OpenAI supporting autonomous war drones), and this is unlikely to be the last. For consumers, it’s important to know you don’t have to use ChatGPT and give them your money. There are alternatives, and in a lot of ways they are better products than ChatGPT. Check out Claude for now, but also keep your eyes open for other (maybe European 👀 Mistral) services.
Since DeepSeek, it has become a lot more likely that building AI of comparable quality to ChatGPT is possible with lesser means. With model quality becoming more democratised, a lot of the AI experience will come down to the UI and input. And that is something EU designers excel at.