
Figure Introduces Humanoid Robot Powered by OpenAI

Robotics startup Figure showcased a humanoid robot that integrates OpenAI’s language models, enabling real-time conversation and concurrent task execution.

Decrypt reports that robotics company Figure has announced its latest creation: a conversational humanoid robot powered by cutting-edge artificial intelligence from OpenAI. Dubbed Figure 01, the robot can understand and respond to human interaction in real time through the integration of OpenAI’s language models.

The company’s recent partnership with OpenAI brings advanced visual and linguistic intelligence to its robots, enabling “high-speed, low-level dexterity robot movements.” The synergy between advanced AI and robotics has produced machines that can not only communicate with humans but also carry out tasks and multitask seamlessly.

Breitbart News previously reported that Figure has attracted high-profile backing, including investments from Jeff Bezos and Nvidia.

In a video demo released by Figure, the Figure 01 robot is seen interacting with Figure senior AI engineer Corey Lynch, who puts it through a series of commands and questions in a simulated kitchen environment. The robot effortlessly identifies objects such as an apple, a plate, and a cup, and when asked for something to eat, it immediately hands over the apple, demonstrating its ability to understand commands and act accordingly.

Additionally, Figure 01 can collect trash into a basket while talking, highlighting its multitasking capabilities. Lynch said the robot can describe visual experiences, plan future actions, recall memories and verbally explain its reasoning. This is a feat that would have been unimaginable just a few years ago.

The key to Figure 01’s conversational capabilities lies in OpenAI’s integration of multimodal AI models. These models can understand and generate different data types, such as text and images, allowing robots to process visual and auditory input and respond accordingly. Lynch explained that the model processes the entire conversation history, including past images, to generate a verbal response that is read back to the human via text-to-speech.

Figure 01’s debut caused a huge stir on social media, with many impressed by the robot’s capabilities and some comparing it to a sci-fi scenario. Lynch also offered technical insight for AI developers and researchers, explaining that all of the robot’s behaviors are driven by neural network visuomotor policies that map pixels directly to actions.

Read more at Decrypt here.

Lucas Nolan is a reporter for Breitbart News covering free speech and online censorship issues.

