I highly doubt they are putting LLMs on their little throwaway drones. The US military has actually been working on “let’s figure out what that thing is and blow it up automatically” technology since at least as far back as the 90s; e.g. modern warship defense systems use it to be able to react faster than a human can to blow up an incoming missile.
Yea, let’s just slap the missile equivalent of chatgpt on a bunch of drone missiles, what could go wrong? /s
Seriously though, what happens if the AI driving the drone hallucinates? I wouldn’t want to be anywhere near these things when they’re being tested.
Personally I am much more worried about it working exactly as intended.