Designing a Nonlinear Feedback Controller for the DARPA Robotics Challenge

By Russ Tedrake, Massachusetts Institute of Technology


In December 2013, a humanoid robot broke through a wall, cleared debris from a doorway, unspooled and connected a fire hose, and drove a utility vehicle through an obstacle course at the Homestead-Miami Speedway.

The robot’s control software was developed in MATLAB® and Simulink® by a team from Massachusetts Institute of Technology participating in the DARPA Robotics Challenge (DRC). This competition is designed to spur research into developing robots that can work in hazardous areas with task-level autonomy. A robot with task-level autonomy can be instructed to perform simple tasks, such as turning a steering wheel or grasping a handle, which it then carries out on its own.

From the time we received our robot to the day of the competition, we had less than five months to develop, debug, and test our controller algorithms. MATLAB and Simulink helped us keep to this aggressive schedule. We were able to prototype highly sophisticated, optimization-based controllers at a pace that would have been impossible with C or another low-level language.

Making Robots Move with Grace and Efficiency

Our work at the DRC was a continuation of my research into making robots move gracefully in the real world. My goal is to design legged robots that move as skillfully as ballerinas and unmanned aerial vehicles that fly like birds. These are fundamental problems for the field of robotics; they also force us to solve hard nonlinear control problems that will have applications in many other domains.

Watch carefully the next time you see a bird fly past the window and land on a branch. That little bird is casually but dramatically outperforming some of the best control systems ever designed by humans. During a “perching” maneuver, birds rotate their wings and bodies so that they are almost perpendicular to the direction of travel and to oncoming airflow. This maneuver increases the aerodynamic drag on the bird, both by increasing the surface area exposed to the flow and by creating a low-pressure pocket of air behind the wing (Figure 1).

Figure 1. Top: Airflow over a wing as the wing’s angle increases. Bottom: Visualization of the airflow from the wing of a glider in a steep stall.

Viscous and pressure forces combine for the desired rapid deceleration, but the maneuver has important consequences: the wings become “stalled,” meaning that they experience a dramatic loss of lift and, potentially, of control authority. The aerodynamics become unsteady (time-varying) and nonlinear, making the aerodynamic forces difficult to model and predict accurately. Yet birds perch with apparent ease. By comparison, helicopters and vertical take-off and landing (VTOL) airplanes require considerable time and energy to land on a target. Similarly, few jet pilots would be willing to fly between skyscrapers, yet owls and hawks navigate dense forests with ease.

Control systems that mimic such feats must make sophisticated logical decisions about how and where the robot will move. Although the equations describing the system kinematics and dynamics are nonlinear, these equations have exploitable structure. For example, many can be described in terms of rational polynomial equations. This means that we can write efficient algorithms that use numerical algebraic geometry and convex optimization to assess system stability over a wide range of operating conditions.
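
To give a flavor of what such a computation looks like, here is a minimal sums-of-squares (SOS) sketch, assuming YALMIP and a semidefinite programming solver are installed; the two-state polynomial system and quadratic Lyapunov candidate below are illustrative toys, not the glider or robot models:

    % Certify that V = x'*x is nonincreasing on the sublevel set {V <= rho}
    % for a polynomial system, using an S-procedure multiplier L and SOS constraints.
    sdpvar x1 x2
    x = [x1; x2];
    f = [-x2; x1 + (x1^2 - 1)*x2];       % time-reversed Van der Pol oscillator (stable origin)
    V = x'*x;                            % candidate Lyapunov function
    Vdot = jacobian(V, x)*f;             % derivative of V along the dynamics

    rho = 0.9;                           % region {V <= rho} to certify
    [L, cL] = polynomial(x, 2);          % multiplier with unknown coefficients cL

    F = [sos(L), sos(-Vdot - L*(rho - V))];
    sol = solvesos(F, [], [], cL);       % feasibility => stability certificate on {V <= rho}
    certified = (sol.problem == 0);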

When we set out to land a small glider on a wire, we used MATLAB and Simulink to implement a relatively standard approach to trajectory optimization and time-varying linear feedback. Using this approach we produced controllers that could consistently land the plane as long as it was always launched from the same location and at the same velocity. By quickly evaluating the stability of this system using the polynomial approach, represented as a stability “funnel” (Figure 2), we were able to design a feedback library that would reliably land the aircraft on the perch from a wide range of initial conditions. Now we can simply throw the airplane towards the wire from any position, and it will always find its way to the perch.
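
As a concrete illustration of the time-varying feedback step, the sketch below integrates a finite-horizon Riccati equation backward along a nominal trajectory to obtain a gain schedule K(t). The two-state linearization A(t), B(t) and the cost weights are made up for illustration; in practice they come from the trajectory optimization and the aircraft model:

    n  = 2;                                        % state dimension
    Q  = eye(n);  R = 1;  Qf = 10*eye(n);          % illustrative cost weights
    tf = 2;                                        % length of the nominal trajectory (s)
    A  = @(t) [0, 1; -1 - 0.5*cos(t), -0.1];       % illustrative time-varying linearization
    B  = @(t) [0; 1];

    % Integrate the Riccati differential equation backward from S(tf) = Qf:
    %   -dS/dt = Q + A'*S + S*A - S*B*inv(R)*B'*S
    mat = @(s) reshape(s, n, n);
    rhs = @(t, s) reshape(-(Q + A(t)'*mat(s) + mat(s)*A(t) ...
                            - mat(s)*B(t)*(R\(B(t)'*mat(s)))), [], 1);
    [tS, S] = ode45(rhs, [tf 0], Qf(:));

    % Time-varying gain K(t) = inv(R)*B(t)'*S(t); the feedback applied to the plant is
    % u = u0(t) - K(t)*(x - x0(t)) around the nominal trajectory (x0, u0).
    Sfun = @(t) mat(interp1(flipud(tS), flipud(S), t));
    Kfun = @(t) R \ (B(t)' * Sfun(t));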

Figure 2. Funnel-shaped area from which the glider can reliably land on its perch.

Why MATLAB?

As we began work for the DARPA Robotics Challenge, some of my colleagues questioned the use of an interpreted language such as MATLAB in a real-time feedback control loop. They were concerned that the MATLAB algorithms would not execute quickly enough, and that interruptions from just-in-time compilation, garbage collection, or background processes would cause jitter and affect timing.

After carefully considering these issues, I determined that our MATLAB algorithms would be able to run on a PC at 300 Hz or faster, and with enough timing accuracy to meet the needs of our control design. I knew that if we needed to speed up critical or slow components, we could use C++ code within our MATLAB simulations. While higher sampling rates and more reliable timing make control design easier, I believe that controllers can and should be designed with sufficient robustness to handle lower rates and less accurate timing. Our human nervous system manages complex movements with relatively low bandwidth and high latencies, and I strive to develop controllers that can do the same.
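
The sketch below shows the general shape of a fixed-rate loop of this kind in plain MATLAB; the 300 Hz rate matches the figure above, but the gain, state, and plant update are placeholders rather than anything from our actual controller:

    dt = 1/300;                       % target period, roughly 300 Hz
    K  = [10 2];                      % placeholder feedback gain
    x  = [1; 0];                      % placeholder state (double integrator)
    t0 = tic;
    for k = 1:300                     % run for about one second
        u = -K*x;                     % evaluate the feedback law
        x = x + dt*[x(2); u];         % placeholder plant update; the real state comes from the robot
        % In the real system, u would be sent to the robot here.
        while toc(t0) < k*dt          % sleep until the next tick; tolerate occasional overruns
            pause(1e-4);
        end
    end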

The numerous advantages of using MATLAB and Simulink became apparent as development progressed. For example:

  • MATLAB is one of the best tools available for rapidly prototyping algorithms based on linear algebra.
  • Most of the commercial optimization solvers we use have a MATLAB interface that makes it easy to invoke them from MATLAB code (see the brief example after this list).
  • MATLAB and Simulink provide numerous ways to visualize data, simulation results, and the motion of virtual robots.
  • Simulink enables us to develop sophisticated models, incorporate MATLAB classes as S-functions, apply ODE solvers, and simulate hybrid systems that combine continuous and discrete components.
  • Many of the students on our DRC team had used MATLAB in their undergraduate or graduate studies in controls, communications, and signal processing.
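
As a small illustration of the second point, the snippet below sets up and solves a toy constrained least-squares problem of the kind that appears inside optimization-based controllers. It uses quadprog from Optimization Toolbox purely as a stand-in; the commercial solvers mentioned above are invoked in much the same way through their own MATLAB interfaces:

    % Minimize 0.5*||A*z - b||^2 subject to bound constraints on z.
    A  = [1 2; 3 4; 5 6];
    b  = [7; 8; 9];
    H  = A'*A;                        % quadratic term
    f  = -A'*b;                       % linear term
    lb = -ones(2,1);  ub = ones(2,1); % simple bounds
    z  = quadprog(H, f, [], [], [], [], lb, ub);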

The First DRC Event: Guiding a Simulated Robot in a Virtual Environment

Teams that did not have their own robots for the DRC were invited to participate in the Virtual Robotics Challenge, which tested software teams’ ability to control a simulated robot as it completed three tasks in a virtual environment. Seven finalists from this round would move forward to the DRC trials using an ATLAS robot provided by DARPA. ATLAS is a humanoid, hydraulically powered robot created by Boston Dynamics.

The three tasks, closely related to the eight tasks that the real robot would complete later in the competition, were quite complex. For example, they required the robot to walk over rough, variable terrain and manipulate a fire hose (Figure 3).

Figure 3. The virtual ATLAS robot manipulating a fire hose in the simulation environment.

For the DRC, we needed to make the whole-body motion planning and control algorithms for ATLAS execute quickly enough to run in real time. Operating in an unfamiliar environment, the robot would be commanded to execute a task, and the controller would need to immediately plan the motion of the entire robot. We achieved that in MATLAB by exploiting what we knew about the robot’s structure and the equations used to describe it, just as we had done in earlier research projects. For mechanical systems like ATLAS, we know that energy is conserved, that the mass matrix is positive definite, and that the dynamics of the center of mass are determined entirely by gravity and the contact forces between the robot and the environment. The equations also have important sparsity patterns: for example, the dynamics of the right hand are coupled to the dynamics of the left foot only through the mass matrix.
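
The center-of-mass observation, for example, is just Newton's second law applied to the whole robot and is cheap to evaluate. The numbers below are made up for illustration and are not ATLAS data:

    m = 150;                               % total robot mass in kg (illustrative)
    g = [0; 0; -9.81];                     % gravitational acceleration in m/s^2
    f_left  = [ 12;  -3; 780];             % contact force at the left foot in N (illustrative)
    f_right = [-10;   5; 720];             % contact force at the right foot in N (illustrative)
    com_accel = g + (f_left + f_right)/m;  % COM acceleration, independent of limb configuration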

We had just eight months from the Virtual Challenge kickoff meeting to the actual competition. With that aggressive timeline, we had to develop quickly. In MATLAB we were able to rapidly develop sophisticated control ideas, prototype them, and debug them visually, and that was much more important than having code that ran 2% or even 20% faster.

Two months after being named one of the seven winners of the Virtual Robotics Challenge, we received our ATLAS robot.

Taking ATLAS from a Virtual Environment to the Real World

Once again, a tight timeline made rapid development imperative. In this second phase of the competition we had just five months to program ATLAS to perform eight tasks, including walking on uneven terrain, climbing a ladder, clearing debris from a doorway, breaking through a wall, turning a valve, connecting a fire hose, and driving a utility vehicle. Humans were allowed to direct the robot, but only via a low bandwidth communications channel, making some degree of task-level autonomy essential.

The ability to quickly implement and debug algorithms with MATLAB and Simulink proved instrumental to our ability to produce a controller capable of guiding ATLAS through such complex tasks.

To plan ATLAS’ movements and perform other required tasks during the competition, we had five or six separate MATLAB and Simulink processes running simultaneously on desktop PCs. These processes communicated with the ATLAS robot via UDP using Lightweight Communications and Marshalling (LCM), a set of libraries for message passing and data marshalling designed for real-time systems.
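
For readers unfamiliar with LCM, the sketch below shows roughly what subscribing to a channel looks like from MATLAB through LCM's Java bindings; the channel name and message type are illustrative, not the ones our processes actually used:

    javaaddpath('lcm.jar');                  % LCM's Java library and generated message classes
    lc = lcm.lcm.LCM.getSingleton();
    aggregator = lcm.lcm.MessageAggregator();
    lc.subscribe('ROBOT_STATE', aggregator); % hypothetical channel name

    msg = aggregator.getNextMessage(1000);   % wait up to 1000 ms for a message
    if ~isempty(msg)
        state = exlcm.example_t(msg.data);   % decode with an LCM-generated class (illustrative type)
        % ... run the controller on the decoded state and publish a command with lc.publish(...) ...
    end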

Two tasks in particular underscored the value of the task-level autonomy that we had achieved with MATLAB and Simulink. When our ATLAS robot was clearing debris from the doorway, a board it had just moved fell across its feet. ATLAS excels at many tasks, but the kinematics of the robot makes touching its own toes very challenging. Still, our team was able to overcome the unanticipated setback, going “off-script” to successfully direct ATLAS to remove the board from its feet before clearing the remaining debris (Figure 4).

Figure 4. The ATLAS robot removing debris from a doorway at the DRC trials.

There was another challenge. We knew that ATLAS barely fit in the car it had to drive (Figure 5), but we had no practical means of attempting the driving task near the MIT campus. We had just 30 minutes to experiment with the car and ATLAS before the 30-minute task began. No team before us had managed to move the car off the starting line. We ended up spending 45 minutes getting ATLAS into the car, but thanks to the robustness of the solution we had developed using MATLAB and Simulink, the robot was then able to turn the wheel, depress the accelerator, and drive halfway down the course before our time expired.

Figure 5. MIT team members assessing how to fit ATLAS into the DRC utility vehicle.

Preparing for the DRC Finals

We placed fourth in the DRC trials and qualified for the DRC finals, scheduled for June 2015. In this phase of the competition, the robots will be required to complete a series of physical tasks with degraded communications between the robots and the teams that operate them. DARPA has recently announced that the finals will place even greater importance on task-level autonomy. Teams that rely on scripted solutions or extensive communication with their robots will be at a disadvantage. We believe that this phase will showcase the value of the approach we have taken.

Having put an immense amount of effort into software engineering, our team has decided to make a large portion of the software we've written for the DRC available as an open-source distribution called Drake, even before the final competition. Drake is a general planning and control toolbox for nonlinear dynamical systems. It includes a rich dynamics engine for rigid-body systems with frictional contact, trajectory motion planning, and nonlinear verification, as well as multiple hardware interfaces and methods for visualization, estimation, and parameter identification.

About the Author

Russ Tedrake is an associate professor in the Department of Electrical Engineering and Computer Science and the Department of Aeronautics and Astronautics at MIT, and the director of the Center for Robotics at the Computer Science and Artificial Intelligence Laboratory (CSAIL). He received his B.S.E. in computer engineering from the University of Michigan, Ann Arbor, and his Ph.D. in electrical engineering and computer science from MIT. Professor Tedrake's research group is interested in underactuated motor control systems in animals and in machines capable of executing dynamically dexterous tasks and interacting with unpredictable environments.

Published 2014 - 92229v00
