A lot of the motivation for modern robotics is “can we do better than PID?”, so knowing how PID works in depth can help us understand approaches that move beyond it. Furthermore, in cases where state-of-the-art control systems aren’t necessary (e.g. a hobby robot), being able to write a simple PID controller can be really helpful!

What is a PID controller and why do I care

PID stands for proportional, integral, derivative.

PID controllers are used everywhere. You can use one to control a thermostat (how much heater should I turn on? how much AC for how long to maximize energy savings and keep temperature within 0.x degrees?) or to control a robot to move its arm to a precise position. I once implemented a very naive approach to drone positioning for a NASA research experiment – in retrospect a PID controller would have been nice. You can also use one to control the speed of a robotically-driven car.

One really good motivating example is cruise control – one of the earliest electronic control systems to ship in consumer cars. You set cruise control and your car magically maintains a set speed (e.g. 65mph). How does it work?

Motivating Example: Cruise Control

If you were implementing cruise control naively, how would you go about it?

I’d say “well the closer we are to the target speed, the less we should try to adjust our speed.” So if our goal is 65mph and we’re currently at 55mph, then the difference in our speeds is 10mph and I’d want to open the throttle just a bit.

If the goal speed is 65mph and we’re currently at 30mph, then the difference in our speeds is 35mph and I’d want to open the throttle a lot.

This is the P in PID. Proportional. A proportional control system responds proportionally to how far it is from a target.
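The proportional idea can be sketched in a few lines. This is a minimal illustration – the gain `kp` and the speeds are made-up values, not tuned ones:

```python
def p_control(target_speed, current_speed, kp=0.05):
    """Return a throttle command proportional to the speed error."""
    error = target_speed - current_speed
    return kp * error

# Farther from the target speed -> bigger throttle command.
print(p_control(65, 55))  # error = 10 -> 0.5
print(p_control(65, 30))  # error = 35 -> 1.75
```

The single gain `kp` is the only knob: it decides how aggressively the controller reacts to a given error.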

A proportional control system by itself doesn’t work very well. For it to work perfectly every time, you’d have to know something like “if I open the throttle to .66 for 10 seconds then I’ll go from 55mph to 65mph and be at my target.” You’d also have to know “if I keep throttle at .33 then I’ll stay at 65mph.” This may be OK for some simulated world, but in the real world things aren’t quite this nice.

Driving up a hill

What happens when your car drives up a hill? Suddenly your hard-coded proportional increase doesn’t work. Up an incline it would take more throttle to get from 55mph to 65mph. How do you make your controller account for this?

Time Matters

Recall that distance is the integral (sum) of speed over time (60mph for 1 hour = 60 miles, 60mph for 1 minute = 1 mile).

Intuition says “the longer I’m not at my target, the more action I should probably take.” In a naive implementation, the integral term might accumulate the time we aren’t at the target and say “for every second we’re not at the target, add a bit to the throttle.”

This is the I in PID. Integral.
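Here’s a rough sketch of layering the integral term on top of P (the gains and timestep are placeholder values). The point to notice: stuck at 55mph on a hill, the P contribution is constant, but the accumulating integral keeps nudging the throttle up.

```python
def pi_control(target, measurements, kp=0.05, ki=0.01, dt=1.0):
    """Run a PI controller over a sequence of speed readings."""
    integral = 0.0
    commands = []
    for speed in measurements:
        error = target - speed
        integral += error * dt  # "for every second off target, add a bit"
        commands.append(kp * error + ki * integral)
    return commands

# Stuck at 55mph (say, on a hill): the error never changes,
# yet the commanded throttle rises every step.
cmds = pi_control(65, [55, 55, 55, 55])
```

`cmds` comes out strictly increasing (roughly 0.6, 0.7, 0.8, 0.9), which is exactly the behavior that eventually pushes the car up the hill.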

The overshooting problem

You might detect an issue with this approach. If I am truly far away from my target, then if I always add throttle per second, by the time I reach the target I won’t have slowed down. Now I’ll overshoot the target and have to reverse (brake) to compensate. So maybe we should take acceleration and deceleration into consideration; if I’m getting closer to the target I should be accelerating less and less.

Factoring in acceleration

Acceleration is the derivative of speed with respect to time. And since you know your speed, you should be able to calculate your acceleration over some time window and say “if I’m going a lot too slow I should probably have a large positive acceleration” and “if I’m only a little too slow I should probably have a small positive acceleration.”

You increase your acceleration by adding throttle and decrease your acceleration by applying brakes.

This is the D in PID. Derivative.
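All three terms together give the classic discrete PID update. This is a textbook sketch, not a production controller: it differentiates the error rather than the speed (for a fixed target those differ only in sign), and the gains used below are placeholders you’d have to tune.

```python
class PID:
    """Textbook discrete PID controller (gains are placeholders, not tuned)."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, target, measured, dt):
        error = target - measured
        self.integral += error * dt
        # Rate of change of the error: as we close in on the target, the
        # error shrinks, the derivative goes negative, and this term backs
        # off the throttle -- countering the overshoot problem above.
        if self.prev_error is None:
            derivative = 0.0
        else:
            derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

pid = PID(kp=0.5, ki=0.1, kd=0.2)
print(pid.step(65, 55, dt=1.0))  # 0.5*10 + 0.1*10 + 0      = 6.0
print(pid.step(65, 60, dt=1.0))  # 0.5*5  + 0.1*15 + 0.2*-5 = 3.0
```

Note how the second command is smaller than the first even before we hit the target: the shrinking error and the negative derivative are already easing off the throttle.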

Tuning the PID controller

By combining these three signals (the proportional error, the accumulated (integral) error, and the rate of change (derivative) of the error), you can build a robust controller. But how do you actually tune a PID controller? There are lots of methods, including coordinate ascent, Ziegler-Nichols, and good old gradient descent. By tuning we mean finding the exact weights (gains) you multiply each of the P, I, and D terms by for optimal performance.
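As a concrete example, here’s a sketch of coordinate ascent tuning (sometimes called “twiddle”): perturb one gain at a time, keep changes that reduce the loss, and shrink the probe size when nothing helps. `run_trial` is a hypothetical function you’d supply that runs your controller with the given gains and returns an accumulated-error score.

```python
def twiddle(run_trial, params=None, tol=0.01):
    """Coordinate-ascent gain tuning ("twiddle"), a rough sketch."""
    params = params or [0.0, 0.0, 0.0]  # [kp, ki, kd]
    deltas = [1.0, 1.0, 1.0]            # per-gain probe sizes
    best = run_trial(params)
    while sum(deltas) > tol:
        for i in range(len(params)):
            params[i] += deltas[i]
            err = run_trial(params)
            if err < best:
                best = err
                deltas[i] *= 1.1            # improvement: probe further
            else:
                params[i] -= 2 * deltas[i]  # try the other direction
                err = run_trial(params)
                if err < best:
                    best = err
                    deltas[i] *= 1.1
                else:
                    params[i] += deltas[i]  # revert and shrink the probe
                    deltas[i] *= 0.9
    return params
```

Ziegler-Nichols, by contrast, is a classical hand recipe: raise the proportional gain until the system oscillates steadily, then derive all three gains from that critical gain and the oscillation period.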

Notice that if you replace each of the PID terms with a node and treat the whole thing like a neural network (with distance from target as the loss function), you can use gradient descent to find working parameters! This is exactly what PIDNN, one of the first well-known PID nets, does. A lot of papers have come out since PIDNN was first introduced, most of which are about how to train it faster or tune it better.

PID controllers have major limitations

You might notice that a PID controller only ever combines three signals. What happens when you try to pilot a self-driving car with one?

Let’s conduct this thought experiment. We have some algorithm that estimates lanes and we have an obstacle-free closed-circuit course.

You’ll probably have a PID controller that tries to stay in lane, as well as a separate PID controller to deal with speed. What happens during turns – does the PID controller for staying-in-lane compensate in time? As your speed increases, you’ll notice that the parameters for staying in lane start to change. This is sort of an issue.

In practice, if you watch a PID-driven autonomous vehicle go, you’ll notice lane jitter. It’s unavoidable – the algorithm isn’t taking advantage of all the available information (e.g. looking ahead at upcoming turns and speed changes and adjusting the steering/throttle signals in anticipation). This is why end-to-end self-driving-car networks work a lot better in practice than PID-controlled vehicles.

Another problem with PID controllers is that their gains are fixed: a controller tuned for one operating regime can behave badly in another, so if your system is nonlinear or changes over time, you’re going to have bad performance.

That said, not every problem has that many variables. Moving a robot arm in a well-controlled environment is still an excellent use case for a PID controller, for example. You really don’t need anything fancy to control your thermostat either.