Boston Dynamics and Google DeepMind Teach Spot to Reason
Boston Dynamics has equipped its Spot robot with Google DeepMind's Gemini Robotics-ER 1.6, a high-level embodied reasoning model that makes the robot more capable and easier to direct on complex tasks like industrial inspection.
Why it matters
Advances in robot reasoning and understanding are critical for expanding the commercial viability of legged robots like Spot in industrial and real-world settings.
Key Points
- Boston Dynamics' Spot robot now runs Google DeepMind's Gemini Robotics-ER 1.6 AI model
- This allows Spot to better understand its environment and autonomously perform tasks like reading gauges and detecting hazards
- The goal is to make robots that can reliably and safely operate in the physical world, bridging the gap between human and robot understanding
Details
Boston Dynamics has partnered with Google DeepMind to equip its Spot quadruped robot with advanced AI capabilities. The new Gemini Robotics-ER 1.6 model brings higher-level reasoning and understanding to Spot, enabling it to better perceive and interact with its surroundings. This is crucial for commercial applications like industrial inspection, where Spot can now autonomously look for hazards, read complex instruments, and call on computer vision models to aid its understanding. The key challenge is bridging the gap between how humans and robots "understand" the world, so that robots can reliably and safely carry out instructions. Gemini Robotics-ER 1.6 aims to give Spot a more human-like grasp of safety and context, avoiding mistakes like gripping a can the wrong way. This marks an important step toward embodied AI systems that can truly operate in the physical world.
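The pattern described above, in which a high-level reasoning model decides what to do and delegates perception subtasks to specialist computer vision models, can be sketched as a simple reason-then-act loop. This is an illustrative toy only: the function names, the rule-based stand-in "reasoner", and the tool stubs are all hypothetical, not Boston Dynamics or DeepMind APIs.

```python
# Toy sketch of a reasoning model delegating to vision "tools" during an
# inspection patrol. Every name here is a hypothetical stand-in.

def reason_about_scene(observation: str) -> dict:
    """Stand-in for the high-level reasoning model: choose a tool to call."""
    if "gauge" in observation:
        return {"tool": "read_gauge", "arg": observation}
    if "spill" in observation or "obstruction" in observation:
        return {"tool": "flag_hazard", "arg": observation}
    return {"tool": "continue_patrol", "arg": observation}

def read_gauge(arg: str) -> str:
    """Stand-in for a specialist vision model that reads an instrument."""
    return f"gauge reading requested for: {arg}"

def flag_hazard(arg: str) -> str:
    """Stand-in for a hazard-detection model raising an alert."""
    return f"hazard flagged: {arg}"

def continue_patrol(arg: str) -> str:
    """Default action when nothing in the scene needs attention."""
    return "no action needed"

TOOLS = {
    "read_gauge": read_gauge,
    "flag_hazard": flag_hazard,
    "continue_patrol": continue_patrol,
}

def inspect(observation: str) -> str:
    """One step of the loop: reason about the scene, then act on the choice."""
    decision = reason_about_scene(observation)
    return TOOLS[decision["tool"]](decision["arg"])
```

In a real system the `reason_about_scene` step would be a call to an embodied reasoning model and the tools would be perception models or robot behaviors, but the control flow, where reasoning selects and parameterizes the next specialist call, is the same.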