Control of Mobile Robots: Week 2 Review

Another interesting week in mobile robotics control theory through the free online course, “Control of Mobile Robots,” hosted and presented by Dr. Egerstedt of the Georgia Institute of Technology.

This week’s topic: Mobile Robots

What do you need to move a robot from point A to point B autonomously, avoiding obstacles and tracking to the target with acceptable stability and reliability? This week, we observed that the world consists of dynamic and sometimes stochastic variables that prevent any single generalized controller design from handling every situation. The controller must respond to changing environmental conditions in real time, which is a non-trivial problem in robotics. We therefore took a layered, divide-and-conquer approach to the problem: first a go-to-goal stage, then an obstacle-avoidance stage, a follow-wall stage, target tracking, and so forth, building up a reliable and stable robot control program.

Behavior based robotics involves the study of mechanical systems and how they interact and address problems in the real world. I point you to Dr. Ronald Arkin’s text, “Behavior Based Robotics“, for a good introduction into the topic.

One of the first considerations in building a control program is to understand the model of the system we are attempting to control. For instance, a standard differential drive robot is used in the course to demonstrate the subtle yet important factors of wheeled robot control. We want to drive a robot from point A to point B, avoiding obstacles and picking an appropriate path. We typically consider three state derivatives for a general model: x’, y’, and phi’, where:

x’ = R/2(vr + vl)cos (phi)

y’ = R/2(vr + vl)sin(phi)

phi’ = R/L(vr – vl)

with R = the radius of the wheels on the robot, vr = velocity of the right wheel, vl = velocity of the left wheel, L = the distance between the two wheels (the wheelbase), and phi = the robot’s heading angle measured from the reference direction. We care about the state of the robot at any time, defined by these three parameters (x, y, phi).
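The model above can be sketched as a small Python function (a minimal sketch; the function and variable names are my own, not from the course):

```python
import math

def diff_drive_kinematics(v_r, v_l, phi, R, L):
    """Compute the state derivatives (x', y', phi') of a differential
    drive robot from its wheel velocities and current heading.

    R = wheel radius, L = wheelbase (distance between the wheels)."""
    x_dot = (R / 2.0) * (v_r + v_l) * math.cos(phi)
    y_dot = (R / 2.0) * (v_r + v_l) * math.sin(phi)
    phi_dot = (R / L) * (v_r - v_l)
    return x_dot, y_dot, phi_dot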

In a mobile platform, we want this state information readily available so that we can track and modify the robot’s trajectory using odometry techniques. We do this by using external and internal sensors for environmental feedback and control. A common method uses wheel encoders, which give the distance that each wheel has moved. For instance, if a two-wheeled robot is making a right turn, we can note three points describing the system: the left wheel, the robot center, and the right wheel. Using geometry, we find that the distance traveled by the center of the robot along the arc is given by:

Dc = (Dl + Dr)/2

with Dc = distance of center, Dl = distance of left wheel, and Dr = distance of the right wheel. Using our model above, we find that:

x’ = x + Dc cos(phi)

y’ = y + Dc sin(phi)

phi’ = phi + (Dr – Dl)/L

where x, y, and phi define the initial state (x, y, phi). We then have our new state (x’, y’, phi’). We also track our revolutions using ticks and can calculate our distance per wheel using:

D = 2(pi)R(delta(tick)/N)

with pi = 3.14159…, delta(tick) = tick’ – tick, R = radius, and N = total ticks per revolution for a given wheel.
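Putting the tick-to-distance conversion and the state update together, a dead-reckoning odometry step might look like this (a sketch under the same model; names are mine, not from the course):

```python
import math

def odometry_update(x, y, phi, d_tick_l, d_tick_r, R, L, N):
    """Dead-reckoning state update from encoder tick deltas.

    d_tick_l / d_tick_r = tick' - tick for each wheel,
    R = wheel radius, L = wheelbase, N = ticks per revolution."""
    # Distance per wheel: D = 2*pi*R*(delta_tick/N)
    D_l = 2.0 * math.pi * R * (d_tick_l / N)
    D_r = 2.0 * math.pi * R * (d_tick_r / N)
    D_c = (D_l + D_r) / 2.0           # distance of the robot center
    x_new = x + D_c * math.cos(phi)
    y_new = y + D_c * math.sin(phi)
    phi_new = phi + (D_r - D_l) / L   # heading change
    return x_new, y_new, phi_new
```

Note that this update assumes the heading is roughly constant over the step, so it should be called at a reasonably high rate.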

However, if we go off and implement this, we would find that a problem quickly arises: drift. Drift can come from wheel slippage, encoder miscounts, uncertainties in position, etc., so we need a mechanism to feed corrections back into our robot to compensate for the encoder uncertainties. We therefore must add additional sensors onto the platform, ranging from tactile switches and sonar/ultrasound to laser range finders.

So how is this done? We venture down the path pioneered by the behavior-based robotics experts. Our robot must now exhibit some intelligence in its motion and be able to react to its environment robustly, so that the motion can proceed successfully from our starting point to our destination. Let’s look at our generalized differential drive robot moving at a constant speed:

x’ = v0 cos(phi)

y’ = v0 sin(phi)

phi’ = w

We are moving in a straight line and wish to change our heading. Let’s define phi and phid: phi is our current heading relative to our reference, and phid is our desired heading. Using what we learned from week 1, we can use a PID controller to take the external error, given by our sensors, and adjust our control input to track to our destination (and do some obstacle avoidance, which I am not going to go into this week). What do we have now?

We have our reference, our model, a control input and the tracking error. Using our PID from last time:

e = phid – phi

w = PID(e)

So what’s phid? Let (x, y) be our start position and (xg, yg) be our goal position. phid is then given by:

phid = arctan((yg – y)/(xg-x))

where arctan is the arc-tangent of the change in y over the change in x (in practice, the two-argument atan2 function is used so that the correct quadrant is obtained). What we also see from experimenting with controllers of this type is that we can quickly get into trouble dealing with angles. We weight how the angle change interacts with our model and notice that too small or too large a weight, or K value, can lead to our robot approaching the goal and veering off, or circling the goal forever, never reaching the intended target. We can mathematically look at this in the following way:

w = K(phid – phi)

with K being defined by experimentation at this point.
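The go-to-goal steering law above can be sketched in a few lines of Python (a minimal sketch; the function name and default gain are my own assumptions, not from the course):

```python
import math

def go_to_goal_omega(x, y, phi, x_g, y_g, K=1.0):
    """Proportional heading controller: steer toward the goal (x_g, y_g).

    Uses atan2 so the desired heading is correct in every quadrant, and
    wraps the error to (-pi, pi] to avoid the angle trouble described
    above. K is a tuning gain, found by experimentation."""
    phi_d = math.atan2(y_g - y, x_g - x)       # desired heading
    e = phi_d - phi                            # heading error
    e = math.atan2(math.sin(e), math.cos(e))   # wrap error to (-pi, pi]
    return K * e                               # w = K * (phid - phi)
```

The atan2-based wrap is one common way to keep the error small even when phi and phid sit on opposite sides of the +/- pi boundary, which is exactly where a naive subtraction sends the robot spinning the long way around.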

That’s all for now. Please take a look at some robot simulators available that you can code and test your solutions. Below are some links and I will be back next week with more control theory fun!

— Jim


Dr. Ronald Arkin’s text, “Behavior Based Robotics”


GRITS – Matlab based Simulator


I recommend ROS and its RViz visualizer. They are free and relatively simple to use. (The simulator itself is actually Gazebo; RViz is the visualization tool.)
