Facebook Is Studying the Sensitivity of Robots with Artificial Intelligence (AI)
As we have previously observed, Facebook has diversified over the years, investing in areas beyond its main social platform. Notable among these are its research fields, which study technologies and phenomena being developed around the world. On this occasion we will look at the progress of joint work between AI and robotics: investigating robots' sense of touch.
The background of the project is Facebook's research on instilling a sense of curiosity in AI, which focused on reducing uncertainty. It was applied to robots that seek to learn efficient ways to move in a given environment while starting out knowing nothing about it, so they must explore with curiosity. Facebook's latest efforts pursue the same goal, but in a more structured way.
“Actually, we started with a model that does not know much about itself,”
FAIR researcher Franziska Meier told Engadget.
“At this point, the robot knows how to hold its arm, but does not really know what actions to apply to achieve a certain goal.”
But as the robot learns which torques must be applied to move the arm to the next target configuration, it can eventually begin to optimize its planning.
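The idea of a model that starts out knowing little about itself and gradually learns which torques produce which motions can be sketched in a few lines. This is only an illustrative toy, not Facebook's code: it assumes a hypothetical linear arm whose dynamics the robot's model recovers from random exploratory torques.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy arm: the next joint configuration is an (unknown)
# linear function of the current angles and the applied torques.
true_A = np.array([[0.9, 0.1], [0.0, 0.95]])   # state transition (unknown to the robot)
true_B = np.array([[0.05, 0.0], [0.0, 0.05]])  # effect of torques (unknown to the robot)

def step(state, torque):
    """Environment: returns the next joint configuration."""
    return true_A @ state + true_B @ torque

# The robot's model starts out knowing nothing about itself.
A_hat = np.zeros((2, 2))
B_hat = np.zeros((2, 2))

lr = 0.1
for _ in range(2000):
    s = rng.normal(size=2)          # random current configuration
    u = rng.normal(size=2)          # random exploratory torque
    s_next = step(s, u)             # observe what actually happened
    pred = A_hat @ s + B_hat @ u    # what the model expected to happen
    err = pred - s_next
    # Gradient step on the squared prediction error.
    A_hat -= lr * np.outer(err, s)
    B_hat -= lr * np.outer(err, u)
# After exploration, A_hat and B_hat closely match the true dynamics,
# so the robot can predict where a torque will move its arm.
```

Once such a forward model is accurate, it can be rolled forward to plan several time steps in advance, as described next.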
“We use this model, which tells us this, to plan several time steps in advance,” she said. “And we tried to use this planning procedure to optimize the sequence of actions to accomplish the task.”
To prevent the robot from over-optimizing its routines and getting caught in a loop, the research team rewarded it for actions that resolved uncertainty.
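Planning several time steps ahead while rewarding uncertainty-resolving actions can be sketched as a random-shooting planner with a curiosity bonus. Everything here is an assumption for illustration (the toy ensemble of models, the scoring weights, the function names); it is not the team's actual method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ensemble of slightly different learned dynamics models.
# Disagreement between ensemble members serves as the uncertainty signal.
ensemble = [np.array([[0.9, 0.1], [0.0, 0.95]]) + rng.normal(scale=0.02, size=(2, 2))
            for _ in range(3)]
B = np.eye(2) * 0.05  # assumed torque-effect matrix

def rollout(model, state, actions):
    """Predict the state after applying a sequence of torques."""
    for u in actions:
        state = model @ state + B @ u
    return state

def plan(state, goal, horizon=5, candidates=200, curiosity=0.1):
    """Random-shooting planner: sample action sequences several time steps
    ahead, score each by closeness to the goal plus a bonus for visiting
    states the ensemble disagrees about (i.e. resolving uncertainty)."""
    best_score, best_seq = -np.inf, None
    for _ in range(candidates):
        seq = rng.normal(size=(horizon, 2))
        finals = np.array([rollout(m, state, seq) for m in ensemble])
        dist = np.linalg.norm(finals.mean(axis=0) - goal)
        disagreement = finals.std(axis=0).sum()
        score = -dist + curiosity * disagreement
        if score > best_score:
            best_score, best_seq = score, seq
    return best_seq

actions = plan(state=np.zeros(2), goal=np.array([0.5, -0.5]))
print(actions.shape)  # (5, 2): one torque vector per planned time step
```

The curiosity term keeps exploration alive: without it, the planner would only ever exploit what the model already knows.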
“By doing this exploration, we actually learn a better model faster, we get the job done faster, and we learn a model that generalizes better to new tasks,”
So with this new line of research, Facebook has been working hard to teach robots how to feel. Not emotionally, but physically. And it is taking advantage of a predictive deep learning model originally designed for video.
“It's essentially a technique that can predict videos from the current state, from the current image and action.”
The team trained the AI to operate directly on raw data, in this case from a high-resolution touch sensor, rather than through a model.
“Our work shows that such policies can be learned entirely without rewards, through unsupervised exploratory interactions with the environment,”
the researchers concluded. During the experiment, the robot was able to successfully manipulate a joystick, roll a ball and identify the correct face of a 20-sided die.
“We showed that we can essentially have a robot that manipulates small objects in an unsupervised way,”
“And what it means in practice is … we can really predict precisely what the result of a [given] action will be. This allows us to start planning into the future. We can optimize the sequence of actions that actually produce the desired result.”
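Using a learned predictor to choose the action that produces a desired result can be illustrated as follows. The linear predictor and all names here are hypothetical stand-ins for the deep model described above, and for brevity this sketch picks a single action rather than optimizing a whole sequence.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical action-conditioned predictor: maps (current tactile frame,
# action) to the predicted next frame. A fixed linear map stands in for
# the learned deep model.
D, A = 16, 2                       # tactile frame size, action dimension
W = rng.normal(scale=0.1, size=(D, D + A))

def predict(frame, action):
    """Predict the next tactile frame given the current frame and an action."""
    return W @ np.concatenate([frame, action])

def choose_action(frame, goal_frame, candidates=500):
    """Pick the action whose *predicted* outcome best matches the desired
    tactile reading, e.g. the die resting on the intended face."""
    best, best_dist = None, np.inf
    for _ in range(candidates):
        a = rng.normal(size=A)
        dist = np.linalg.norm(predict(frame, a) - goal_frame)
        if dist < best_dist:
            best, best_dist = a, dist
    return best

frame = rng.normal(size=D)   # current raw sensor reading
goal = rng.normal(size=D)    # desired sensor reading
action = choose_action(frame, goal)
```

Because the action is chosen purely by comparing predicted sensor readings, no reward signal or human labels are needed, which matches the unsupervised setup the researchers describe.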
The combination of visual and tactile inputs could greatly improve the functionality of future robotic platforms and improve learning techniques.
“To build machines that can learn by interacting with the world independently, we need robots that can take advantage of data from multiple senses,”
the team concluded. We can only imagine what Facebook has in store here. However, the company declined to comment on possible future practical applications of this research.