Pentagon launches tech to stop AI-powered killing machines from going rogue

Pentagon officials have sounded the alarm about 'unique classes of vulnerabilities for AI or autonomous systems,' which they hope new research can fix. 

The program, dubbed Guaranteeing AI Robustness against Deception (GARD), has been tasked since 2022 with identifying how visual data or other electronic signal inputs for AI might be gamed by the calculated introduction of noise.

Computer scientists with one of GARD's defense contractors have experimented with kaleidoscopic patches designed to fool AI-based systems into making false IDs. 

'You can essentially, by adding noise to an image or a sensor, perhaps break a downstream machine learning algorithm,' one senior Pentagon official managing the research explained Wednesday.
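
The 'noise' the official describes is what researchers call an adversarial perturbation. Below is a minimal sketch of the idea using the well-known Fast Gradient Sign Method, assuming an off-the-shelf PyTorch image classifier and a hypothetical input photo rather than any system GARD actually tested:

```python
# Sketch of an adversarial "noise" attack (Fast Gradient Sign Method).
# Assumes torch, torchvision, and Pillow are installed; "bus.jpg" is a
# hypothetical placeholder image, not from the GARD research.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

# Normalization is omitted for brevity; this is a sketch, not a benchmark.
preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
image = preprocess(Image.open("bus.jpg").convert("RGB")).unsqueeze(0)
image.requires_grad_(True)

# The model's original prediction on the clean image.
logits = model(image)
label = logits.argmax(dim=1)

# Gradient of the loss with respect to the input pixels.
loss = torch.nn.functional.cross_entropy(logits, label)
loss.backward()

# Nudge every pixel a tiny step in the direction that increases the loss.
epsilon = 0.03  # perturbation budget: small enough to look like faint noise
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

print("original prediction:   ", label.item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```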

The news comes as fears that the Pentagon has been 'building killer robots in the basement' have allegedly led to stricter AI rules for the US military — mandating that all systems must be approved before deployment.

Computer scientists with defense contractor MITRE Corp. managed to create visual noise that an AI mistook for apples on a grocery store shelf, a bag left behind outdoors, and even people

'You can also with knowledge of that algorithm sometimes create physically realizable attacks,' added that official, Matt Turek, deputy director for the Defense Advanced Research Projects Agency's (DARPA's) Information Innovation Office.

Technically, it is feasible to 'trick' an AI algorithm into mission-critical errors, making the AI misidentify a variety of patterned patches or stickers as a physical object that is not actually there.
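
To give a sense of how such a patch might be built, here is a toy sketch, again assuming a stock PyTorch classifier and an arbitrary target label, neither drawn from GARD's actual methodology: a small square of pixels is optimized so that, pasted onto an image, it pushes the model toward an attacker-chosen class.

```python
# Toy sketch of training an adversarial "sticker" (patch). The model,
# target class, and random stand-in images are illustrative assumptions.
import torch
import torchvision.models as models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the patch is trainable

target_class = 847  # ImageNet-1k class 847 ('tank'); an illustrative choice
patch = torch.rand(1, 3, 50, 50, requires_grad=True)  # the trainable sticker
optimizer = torch.optim.Adam([patch], lr=0.05)

def apply_patch(images, patch, y=80, x=80):
    """Paste the patch onto a batch of images at a fixed location."""
    patched = images.clone()
    patched[:, :, y:y+50, x:x+50] = patch
    return patched

# A real attack would loop over many photos and random placements;
# here one random batch stands in for training data.
images = torch.rand(8, 3, 224, 224)
for step in range(100):
    optimizer.zero_grad()
    logits = model(apply_patch(images, patch))
    # Maximize the model's confidence in the attacker's chosen class.
    loss = -logits[:, target_class].mean()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        patch.clamp_(0, 1)  # keep the sticker's pixels printable
```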

A bus packed with civilians, for example, could be misidentified as a tank by an AI, if it were tagged with the right 'visual noise,' as one national security reporter with the site ClearanceJobs proposed as an example. 

Such cheap and lightweight 'noise' tactics, in short, could cause vital military AI to misclassify enemy combatants as allies, and vice versa, during a critical mission. 

Researchers with the modestly budgeted GARD program have spent $51,000 investigating visual and signal noise tactics since 2022, Pentagon audits show. 

A 2020 study by MITRE illustrated how visual noise, which can appear merely decorative or inconsequential to human eyes, like a 1990s 'Magic Eye' poster, can be interpreted as a solid object by AI. Above, MITRE's visual noise tricks an AI into seeing apples
