
Your AI chat- & work-buddy is a war machine now

Published at 02:54 PM

A few days ago, OpenAI announced what I find a quite shocking and troubling collaboration with a weapons manufacturer. OpenAI partnered with Anduril (famously founded by the Trump-supporting sexual harasser Palmer Luckey) and removed the prohibition on using its AI services for weapons, military, and warfare.

That means that the same ChatGPT you use for emails, creative ideation, or your internal corporate intranet is now also being used for war and autonomous war drones.

While they argue that this is to protect the US and its allies, one would have to be quite gullible not to see how this plants a seed for general autonomous decision-making in policing and enforcement.

For my American friends and colleagues (I’m Danish, as you might know), I’m genuinely curious: How do you reconcile this with your deeply held beliefs about individual liberty and government overreach? Imagine a future where a supposedly neutral AI system makes real-time decisions about law enforcement. It seems fundamentally at odds with the principles of democratic oversight and individual rights.

One must hope this represents a red flag

For many large global organisations, this could create a risk of association. By using OpenAI products, you are funding research into AI for warfare, both with your money and with your data.

The same goes if you are an individual.

From the joint announcement:

“As part of the new initiative, Anduril and OpenAI will explore how leading edge AI models can be leveraged to rapidly synthesize time-sensitive data, reduce the burden on human operators, and improve situational awareness. These models, which will be trained on Anduril’s industry-leading library of data on CUAS threats and operations, will help protect U.S. and allied military personnel and ensure mission success.”

I think it is crystal clear: if you are doing any meaningfully important work with OpenAI’s products, evaluating and implementing alternatives should be a top priority.

What other options are there?

If you (or your organisation) still need AI but would like to stop supporting this behaviour, I can recommend using Claude instead. I am in no way affiliated with them; I just think it’s the best alternative. It has a nicer web interface with better features and a great mobile app, and they generally take a more ethical approach to data and privacy and a more careful approach to AI (though, to be fair, no AI provider can really be considered ethical, imo). Their API is also very similar to OpenAI’s, so your corporate intranet could switch to it if you can convince the IT department.
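To give a sense of how small the change is, here is a minimal sketch of the switch using the two official Python SDKs. The model names are only examples, and I am assuming API keys are set in the usual environment variables; your setup will differ.

```python
# Before: the OpenAI SDK (example model name; adjust to whatever you use).
from openai import OpenAI

openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
reply = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarise this meeting note: ..."}],
)
print(reply.choices[0].message.content)

# After: the Anthropic SDK -- same message structure, slightly different call.
import anthropic

claude_client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
message = claude_client.messages.create(
    model="claude-3-5-sonnet-20241022",  # example model name
    max_tokens=1024,  # required by the Anthropic API
    messages=[{"role": "user", "content": "Summarise this meeting note: ..."}],
)
print(message.content[0].text)
```

The main practical differences are the required max_tokens parameter and the shape of the response object; the message structure itself carries over almost unchanged.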

So if you work in a large organisation that uses AI, and you don’t like where this is going, I suggest you speak up now. I personally think whatever due diligence has been done should be considered void and redone.

Is it difficult to use another AI service?

It’s not hard to move away from OpenAI. Now is the easiest time. It will only get harder going forward.

I’ve been using Claude for my needs for the past year, and even before this escalation from OpenAI I would have recommended it.

Just to be clear: I am not by any means endorsing Claude uncritically. I just think this is such an aggravating development that pretty much any other choice is an improvement, and I wish people would vote with their money.