Control of Mobile Robots: Week 1 Review

Greetings fellow enthusiasts! Here is a review of week 1 of the free web course, “Control of Mobile Robots,” presented by Dr. Magnus Egerstedt of the Georgia Tech School of Electrical and Computer Engineering. A general comment: I am only giving a very brief and high-level review, so please investigate control theory further on your own to fully understand the topic.

The course began by introducing control theory and demonstrating the use of control techniques in robotics, thermostat control, circuits, engines, autopilots, and various other domains where stable and robust control systems are employed. With this wide range of applicability established, the basic building blocks of a generalized control system were introduced: state, dynamics, reference, output, input, and feedback. The core idea is to pick an input signal such that the system exhibits stability, tracking, robustness, disturbance rejection, and optimality. The system is typically modeled in continuous time by a differential equation describing its evolution from some time t = 0 to t + n, with n > 0.

I digress for a moment: if you are unfamiliar with calculus, I will spare you the full mathematics; however, if you’d like to review differentiation and integration, please check out https://www.coursera.org/course/calc1 or any general mathematics website on the subject.

Continuing on, the lecture goes forward to describe what is considered a bad control design, and builds upon this in order to develop a full understanding of the types of controllers used. This brings us to the PID controller. PID stands for Proportional-Integral-Derivative, and it is a generalized feedback scheme that uses the error between the target and measured values. What you get is a function of the error with respect to time that describes the corrective action applied to the system. The input function, u(t), is given as (equation from http://en.wikipedia.org/wiki/PID_controller):

u(t) = \mathrm{MV}(t) = K_p\,e(t) + K_i \int_{0}^{t} e(\tau)\,d\tau + K_d \frac{d}{dt}e(t)
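Since a real controller runs at discrete time steps, it is worth noting the discrete approximation this formula reduces to (my paraphrase, not from the lecture, with \Delta t the step size): the integral becomes a running sum and the derivative a difference quotient:

u_k = K_p\,e_k + K_i \sum_{j=0}^{k} e_j\,\Delta t + K_d \frac{e_k - e_{k-1}}{\Delta t}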

So what does this thing really tell us? Well, first let’s understand what the P, I, and D of PID do for us in real life. First, P contributes to stability and responds to external influences at a medium rate. Second, I gives us tracking and disturbance rejection, with a somewhat slower rate of responsiveness. Third, D gives fast response feedback that helps the system settle to the target value, though it is sensitive to noise.

Let’s now talk about the constants Kp, Ki, and Kd in the above equation. These constants describe how much each of the error terms in the above equation affects our u(t) input signal. Typical starting values are Kp = 1, Ki = 1, and Kd = 0.1; however, these are experimentally tuned parameters and need to be adjusted for the given application domain. Suffice it to say, the big message here is that we can tune how the error is handled, or even ignored, with these three constants.

Now for where the money is: the three error terms. What is e(t), you might ask? e(t) is the measured distance between your target value and your currently measured value. Said another way, it is the difference between where you are and where you want to be. You can use this difference alone, adjusting positively or negatively in your controller, to approach your target value. For instance, let’s say you want to travel at 60 mph and you are at 50 mph; at the next time step you would apply a positive input of 10 mph to reach your target velocity. The reverse is also true. But wait, why not just use this difference as the whole controller? We could simply use it to drive our motors… alas, no. You don’t want to do this. Why not? Simply stated, physics in the real world is not idealized. We must consider factors such as friction forces that work against our ideal control philosophy. If we ignored physics we would either never reach our target or bounce between too positive and too negative a signal, and our controller would be useless. If you don’t believe me, research it on your own and, more importantly, try it. 🙂
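To see the problem concretely, here is a minimal sketch (my own toy model, not from the course) of a proportional-only cruise controller fighting a hypothetical drag force; the function name and the drag coefficient are invented for illustration:

```python
# Toy cruise-control model: proportional-only control against drag.
# The drag term (0.5 * v * dt) is a made-up stand-in for real friction.
def simulate_p_only(kp, target=60.0, v=50.0, steps=200, dt=0.1):
    for _ in range(steps):
        error = target - v       # where we want to be minus where we are
        u = kp * error           # proportional-only input
        v += u * dt - 0.5 * v * dt
    return v

final = simulate_p_only(kp=1.0)
# final settles near 40 mph, well short of the 60 mph target: at
# equilibrium the proportional push exactly cancels the drag, which
# requires a permanent nonzero error (a steady-state offset).
```

This is the "never get to our target" failure mode: P alone can only push as hard as the remaining error, so a constant opposing force leaves a constant gap.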

This is where the integral and derivative of the error come in to save the day. Let’s first review what a derivative represents. Simply stated, it is a function that describes the rate of change of a quantity. For instance, let’s say we have a function that describes how a car moves in time. Let x(t) be the position of the car with respect to time, and let our car’s position in time be described by t^2. Or more succinctly:

x(t) = t^2 (t squared)

If we plotted this on a graph with the x-axis representing time and the y-axis position, we would see an arc from t = 0 onward. Try it; I will not reproduce the graph here for the sake of time. Continuing onward, if we take the derivative of this function, we get:

x’ (t) = 2t

Graphing this result, we would see a straight diagonal line rising upward from the origin. Again, graph it and see for yourself. If we take the derivative of this equation once more, we have the following:

x” (t) = 2

a constant. If you plot this, you would see a straight horizontal line in our x-t plot. So what do we have? x was our position in time. x’ … can you guess? It represents our velocity. And x”…? That is our acceleration. The function x(t) = t^2 describes our car accelerating at a constant rate of 2; our velocity increases by 2 with each unit of time.
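These claims are easy to check numerically. Here is a small sketch using a central finite difference (the helper names are my own):

```python
# Approximate the derivative of f at t with a central difference.
def derivative(f, t, h=1e-6):
    return (f(t + h) - f(t - h)) / (2 * h)

x = lambda t: t ** 2

# x'(3) should be 2 * 3 = 6 (velocity), and differentiating
# x'(t) = 2t at any point should give the constant acceleration 2.
print(derivative(x, 3.0))                 # ≈ 6.0
print(derivative(lambda t: 2 * t, 3.0))   # ≈ 2.0
```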

As for the integral: simply stated, it is the anti-derivative of our function. For simplicity (and please research this on your own), the anti-derivative reverses the steps above. So the integral of x” (t) = 2 is x’ (t) = 2t on some interval from t = 0 to t = n, with n > 0. The secret here is that if we know only the acceleration and some time interval, we can integrate back up to the position function of our system to determine how we move in space with respect to time. Newton and Leibniz changed the world with this stuff.
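Going the other way can also be sketched numerically. A simple Riemann sum (my own helper, not from the lecture) integrates the constant acceleration 2 back up to the velocity 2t:

```python
# Left Riemann sum: approximate the integral of f from a to b.
def integrate(f, a, b, n=100_000):
    dt = (b - a) / n
    return sum(f(a + i * dt) * dt for i in range(n))

accel = lambda t: 2.0
v_at_5 = integrate(accel, 0.0, 5.0)   # ≈ 10.0, matching x'(5) = 2 * 5
```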

Now what does this have to do with the error correction in our input function? Well, what I didn’t mention about the integral is that it represents the summation of all error at each time step over the course of time that our system is operating. Given an interval, say from t = 2 to t = 5, if we integrate the error function, we get the summation of that error. That doesn’t sound so special; however, it allows us to perturb our system so that we approximate our target value more closely, say the 60 mph given above. So our accumulated error changes as Enew = Eold + error. That is, our new error term is the summation of all previous error plus our newly measured error. Duh! But a mathematically proven duh. (This actually solves some very complicated real-world problems, but bear with this explanation.)
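In code, that accumulation is just a running sum. A tiny sketch with made-up error samples:

```python
# Accumulate error over time: E_new = E_old + error at each step.
errors = [10, 8, 5, 2, 0, -1]   # hypothetical measured errors per time step
E = 0
history = []
for error in errors:
    E = E + error                # all error seen so far
    history.append(E)
print(history)                   # [10, 18, 23, 25, 25, 24]
```

Notice that even when the per-step error drops to zero, the accumulated term stays nonzero, which is exactly what lets the integral term keep pushing against a constant disturbance.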

Let’s sum up. We have our measured error at some time t; we know how it changes over time, de(t)/dt; and we know the accumulation of all the error up until time t, ∫ e(t) dt. We can now use these three terms to reach our target value and remain stable and robust to perturbation.

Let’s see this in pseudo code:

kp, ki, kd = predetermined constants

// Call this function at each contiguous time step (as given by Dr. Egerstedt in lecture).

function getInput {
  read error;
  e_dot = error - old_e;     // derivative: change in error since last step
  E = E + error;             // integral: running sum of all error
  u = kp * error + ki * E + kd * e_dot;
  old_e = error;
  return u;
}
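Here is a runnable Python rendering of that pseudocode (my own translation, not Dr. Egerstedt’s code), closing the loop on the same toy drag model from the speed example; the gains and the drag term are illustrative assumptions:

```python
# PID controller following the pseudocode: P on the current error,
# I on the running sum of error, D on the change in error.
class PID:
    def __init__(self, kp=1.0, ki=1.0, kd=0.1):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.old_e = 0.0   # previous error (for the derivative term)
        self.E = 0.0       # accumulated error (for the integral term)

    def get_input(self, error):
        e_dot = error - self.old_e
        self.E += error
        u = self.kp * error + self.ki * self.E + self.kd * e_dot
        self.old_e = error
        return u

# Close the loop on a toy speed model with a made-up drag force.
pid = PID(kp=0.8, ki=0.05, kd=0.1)
v, target, dt = 50.0, 60.0, 0.1
for _ in range(300):
    v += pid.get_input(target - v) * dt - 0.5 * v * dt
# Unlike a P-only controller, v converges to the 60 mph target:
# the integral term winds up just enough to cancel the constant drag.
```

The gains here were picked by trial for this particular toy model; as noted above, real applications require their own tuning.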

To finish out the week, Dr. Egerstedt demonstrated this generalized controller using an ArDrone and the ultrasonic sensors to maintain altitude stabilization while periodically perturbing (pushing) the drone. You’ve seen this in action if you’ve been to one of our ArDrone sessions! I encourage you to write your own controllers and either use your own robot or one of the drones at Robot Garden.

Thanks for reading. Please research control theory for robotics applications further on your own. It gets harder and more detailed when going from theory to real-world implementation, but I will try my best to summarize clearly.

Best regards,

Jim
