DeepMind Control Suite trains a (simulated) robot dog to run and fetch with reinforcement learning

Reinforcement learning enables autonomous robots to learn large repertoires of behavioral skills with minimal human intervention. In real physical systems, however, policies are often engineered by hand to achieve faster training times.

Deep reinforcement learning (RL) reduces the need for this manual input by learning general-purpose neural network policies for a given application. DeepMind’s dm_control package demonstrates the training of deep Q-functions that scale to complex 3D manipulation tasks and can learn deep neural network policies efficiently enough to train real-world robots.

These capabilities have ramifications for the future of AI for robots, especially for more complex locomotion and manipulation operations. The aim is to make it easier and faster to train robots to perform complicated tasks with multiple degrees of freedom and with minimal human input.

The dm_control software

The dm_control software package is a set of Python libraries and task suites for RL in simulated articulated-body environments. The package includes a MuJoCo wrapper, the PyMJCF and Composer libraries, the Control Suite, the Locomotion framework, and a set of customizable manipulation tasks.
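As a quick illustration of how these pieces fit together, the sketch below loads a Control Suite environment through the MuJoCo wrapper and runs it with uniformly random actions. It follows the basic pattern from the dm_control documentation; the 'cartpole' domain and 'swingup' task are just one example of the bundled tasks.

```python
import numpy as np
from dm_control import suite

# Load one of the bundled Control Suite tasks.
env = suite.load(domain_name='cartpole', task_name='swingup')
action_spec = env.action_spec()

# Run one episode with uniformly random actions.
time_step = env.reset()
while not time_step.last():
    action = np.random.uniform(action_spec.minimum,
                               action_spec.maximum,
                               size=action_spec.shape)
    time_step = env.step(action)
    print(time_step.reward, time_step.observation)
```

A real agent would replace the random sampling with a learned policy; the environment API stays the same.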

Software for research

One prerequisite for general intelligence is the ability to control the physical world, which amounts to controlling the positions and velocities of a body. The most familiar physical control tasks involve a fixed set of degrees of freedom, those of the body being actuated. DeepMind’s dm_control package is a collection of tools for running RL agents in articulated-body simulations, designed to make it easier to train robots to do complex tasks.

The package was designed to meet the continuous control and robotics needs of DeepMind scientists and engineers. The Control Suite environment includes a quadruped dog simulation, several locomotion tasks, and a single-arm robotic manipulation task. This makes the package useful for other engineers and researchers who want to develop AI for robots.
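For instance, the dog model that gives this article its title can be loaded directly from the Control Suite. The snippet below is a minimal sketch that assumes a dm_control release in which the 'dog' domain and its 'fetch' task are bundled:

```python
from dm_control import suite

# Load the quadruped dog simulation; the dog domain also ships
# tasks such as 'run' alongside 'fetch'.
env = suite.load(domain_name='dog', task_name='fetch')

# Inspect what an agent would observe and control.
print(env.action_spec())       # bounded vector of joint actuations
print(env.observation_spec())  # dict of proprioceptive and task observations
```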

The future of AI development for robots

dm_control is a starting point for the testing and performance comparison of RL algorithms for physics-based control. It includes a wide range of predesigned RL tasks and a rich framework for designing new ones.
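The predesigned tasks can be enumerated programmatically; the suite groups them into named collections of (domain, task) pairs for exactly this kind of benchmarking. A minimal sketch, assuming the suite.BENCHMARKING collection described in the dm_control documentation:

```python
from dm_control import suite

# List the (domain, task) pairs intended for benchmarking RL algorithms.
for domain_name, task_name in suite.BENCHMARKING:
    print(f'{domain_name}: {task_name}')
```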

By starting from predesigned RL tasks, dm_control streamlines AI development, making it easier and faster to simulate robot movements. It should also provide a more robust development environment that gives physical robots a foundation in deep RL, speeding up training and reducing the amount of human input needed.

That approach should stimulate the robotics industry, resulting in a greater variety of progressively more intelligent products. A faster way to train robots in simulation and in the physical world will reduce development costs, produce robots that learn more flexibly, and deliver more capable robots at a lower price point.

This development could point to a future trend in simulation and training based on deep RL that is faster and produces better results.

For more information on this topic, check out these Omdia blogs:

Reinforcement learning: A trend to watch in 2019

Reinforcement learning and its implications for enterprise artificial intelligence
