
US Marines Defeat DARPA Robot by Hiding Under a Cardboard Box

Could this defeat the Pentagon’s latest human-identifying robot? Apparently so. (Image: Kelli McClintock/Unsplash)
The Pentagon’s Defense Advanced Research Projects Agency (DARPA) has invested some of its resources into a robot that’s been trained—likely among other things—to identify humans. There’s just one little problem: The robot is cartoonishly easy to confuse.

Army veteran, former Pentagon policy analyst, and author Paul Scharre is gearing up to release a new book, Four Battlegrounds: Power in the Age of Artificial Intelligence. Although it isn't scheduled to hit shelves until Feb. 28, Twitter users are already sharing excerpts. Among them is The Economist's defense editor, Shashank Joshi, who posted a particularly laughable passage.

In the excerpt, Scharre describes a week during which DARPA calibrated its robot's human-recognition algorithm alongside a group of US Marines. The Marines and a team of DARPA engineers spent six days walking around the robot, training it to identify the moving human form. On the seventh day, the engineers placed the robot at the center of a traffic circle and devised a little game: the Marines had to approach from a distance and touch the robot without being detected.

Solid Snake using a cardboard box as a disguise in Metal Gear Solid.

DARPA was quickly humbled. Scharre writes that all eight Marines were able to defeat the robot using techniques that could have come straight out of a Looney Tunes episode. Two of the Marines somersaulted toward the center of the traffic circle, a form of movement the robot hadn't been trained to identify. Another pair shuffled toward the robot under a cardboard box. One Marine even stripped a nearby fir tree and reached the robot by walking "like a fir tree" (the meaning of which Twitter users are still trying to figure out).

While it's funny to imagine a team of Marines using Metal Gear Solid's cardboard box strategy to defeat what's likely a very expensive robot, the incident detailed in Scharre's book reinforces something we already know: AI is only as useful as the data we give it. Just as AI becomes biased when it's fed biased data, an algorithm's blind spots mirror the gaps in its training data. If it's never shown what a somersaulting human or a human under a box looks like in motion, a robot can't pick that shape out of the surrounding noise, no matter how skilled its engineers are.
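To make that point concrete, here is a minimal, purely illustrative sketch in Python. It is not DARPA's actual system; the features and numbers are invented. A toy "human detector" is trained only on upright walkers, so observations that look like a somersault or a slow-moving box land closer to the "background clutter" examples and get waved through.

```python
# A minimal sketch (not DARPA's actual detector) of why a model trained only on
# upright, walking humans can miss a somersaulting Marine or one under a box.
# Feature names and values are invented for illustration.
import numpy as np

# Hypothetical training data: each row is (apparent height in m,
# vertical bob frequency in Hz, height-to-width aspect ratio).
walking_humans = np.array([
    [1.7, 2.0, 3.0],
    [1.8, 1.9, 3.2],
    [1.6, 2.1, 2.9],
])
background_clutter = np.array([  # boxes, shrubs, parked gear
    [0.8, 0.0, 1.0],
    [0.5, 0.0, 0.9],
    [1.0, 0.0, 1.1],
])

# Toy nearest-centroid classifier: label a new observation by whichever
# class average it sits closer to in feature space.
human_centroid = walking_humans.mean(axis=0)
clutter_centroid = background_clutter.mean(axis=0)

def classify(features: np.ndarray) -> str:
    d_human = np.linalg.norm(features - human_centroid)
    d_clutter = np.linalg.norm(features - clutter_centroid)
    return "human" if d_human < d_clutter else "not human"

# The Marines' tricks produce feature vectors the model never saw in training:
print(classify(np.array([1.75, 2.0, 3.1])))  # upright walker        -> human
print(classify(np.array([0.9, 0.5, 1.2])))   # somersaulting         -> not human
print(classify(np.array([0.6, 0.3, 1.0])))   # under a cardboard box -> not human
```

The toy model isn't "wrong" about what it learned; it simply never learned anything about rolling or box-shaped humans, which is the same gap the Marines exploited.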

