Java: PID Control

From Deep Blue Robotics Wiki
Revision as of 23:14, 10 November 2016 by SammyZhang


PID is a closed-loop control system designed to reach a target value as quickly and accurately as possible. By tuning 3 constants (i.e. kP, kI, and kD), a PID loop can be optimized for a specific system.

State Machines

Before going into PID, we should consider a more basic form of robot control: state machines. In our robot code, we can often compute control outputs (e.g., Jaguar/Victor motor control settings) directly from current sensor values, such as joystick positions, slider positions, dashboard inputs, gyro readings, etc. However, sometimes current behavior needs to be modified by past events, such as a switch or button press. In this case, the behavior isn't driven directly by the input; it is driven by the "state", and the state is (potentially) changed by the input. Finite state machines are a simple class of state machine whose state can be represented by a single enumeration (e.g., a boolean toggle variable in the case of a 2-state system). See this presentation for examples.

Closed-Loop Control

Closed-loop control, aka feedback-based control, involves driving actuator outputs from sensor inputs. The objective of a feedback loop is to hold a system at a certain setpoint, preventing oscillations around the setpoint and thus maintaining system stability. Well-performing control systems also strive to reach their goals quickly, stay close to their targets, respond quickly and smoothly to disturbances, minimize overshoot and undershoot, and avoid output spikes, which could cause system stress and wear.

For example, let's say we want to implement a cruise-control system, to drive the robot at a specified velocity. If you've ever driven a car or ridden a bicycle, then intuitively you should know that static calibration of the output to the speed won’t work on its own, due to variance in battery voltage, incline, terrain, wind, mass of passengers or payload, etc. The only way to reach the desired velocity dependably is to measure the achieved velocity (typically using an encoder) and adjust motor output accordingly.

One common feedback-based control algorithm is called bang-bang control. With this approach, the output oscillates between completely on and completely off, with a variable duty cycle. This scheme is used in many heating and cooling systems, such as ovens, refrigerators, home heating systems, and air conditioning units. However, this approach is not suitable for velocity control due to the extreme spikes and oscillations it induces -- imagine riding in a car that alternated between flooring the accelerator and idling.

Another simple control strategy is to increment the output by a constant if under the goal and decrement it if over the goal. Due to response lag, this approach tends to overshoot. Also, small increments are needed to avoid oscillation, but they slow down convergence, since more steps are required.
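A minimal sketch of this increment/decrement strategy (class and method names are made up for illustration):

```java
// Increment/decrement controller: nudge the output by a fixed step
// toward the goal each cycle. Small steps avoid oscillation but
// converge slowly; large steps overshoot due to response lag.
class StepController {
    private final double step;   // fixed increment per cycle
    private double output;       // current actuator output

    StepController(double step) {
        this.step = step;
    }

    // One control cycle: compare the measurement to the setpoint
    // and nudge the output one step in the right direction.
    double update(double setpoint, double measurement) {
        if (measurement < setpoint) {
            output += step;
        } else if (measurement > setpoint) {
            output -= step;
        }
        return output;
    }
}
```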

PID Control

The increment/decrement approach could be improved by incrementing by larger amounts when far from the goal and smaller amounts when closer to it. This approach has been formalized in an algorithm called PID. The Simbotics have posted a short presentation about the technique. See also the 2013 WPI doc about PID, PIDSubsystem, and the SmartDashboard PID tuning widget. We go into much more detail below.

PID is an acronym for Proportional-Integral-Derivative, which refers to the three terms in the basic algorithm (called the "parallel", or independent, form):

 out(t) = Kp*error(t) + Ki*∫error(t)dt + Kd*(d/dt)(error(t))

Or, in discrete form, as code (note that the goal is called the setpoint):

 error = setpoint - input;
 totalError += error * period;
 output = P * error + I * totalError + D * (error - prevError) / period;
 prevError = error;

The "P" term is a measure of how far away the system is from the setpoint, the "I" term is a measure of how long the system has been away from the setpoint, and the "D" term is a measure of how quickly the system is moving towards the setpoint.

Tuning the coefficients P, I, and D produces different effects on the system. Increasing P produces a faster response from the system, at the price of instability - a high P coefficient causes more oscillation. Increasing I can help eliminate steady-state error, also known as droop (the limit of the difference between the setpoint and the input as time goes to infinity), but it can also cause overshoot of the setpoint. Increasing D counteracts the problems created by the first two terms, as it decreases overshoot and improves settling, but too much D also adds instability to the system. Ideally, all three terms balance each other out.

For speed control, the basic terms of the algorithm relate to distance travelled (integral of velocity), velocity, and acceleration, with extensions using jerk (derivative of acceleration) and jounce (derivative of jerk). A well tuned PID algorithm can converge quickly, minimize overshoot, reject spikes, and produce very smooth motion.
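The parallel-form update above can be packaged as a small class. This is a minimal sketch (the names, output clamping, and measured-period parameter are our own choices, not WPILib's PIDController):

```java
// Parallel-form PID: output = P*error + I*∫error dt + D*d(error)/dt.
// The measured period is passed in each cycle to scale the I and D
// terms, rather than assuming a constant loop rate.
class SimplePid {
    private final double kP, kI, kD;
    private final double minOutput, maxOutput;
    private double totalError;   // accumulated error for the I term
    private double prevError;
    private boolean first = true;

    SimplePid(double kP, double kI, double kD,
              double minOutput, double maxOutput) {
        this.kP = kP; this.kI = kI; this.kD = kD;
        this.minOutput = minOutput; this.maxOutput = maxOutput;
    }

    // One control cycle; period is the measured time since the last call.
    double update(double setpoint, double input, double period) {
        double error = setpoint - input;
        totalError += error * period;
        // Skip the D term on the first cycle (no previous error yet).
        double derivative = first ? 0.0 : (error - prevError) / period;
        first = false;
        prevError = error;
        double output = kP * error + kI * totalError + kD * derivative;
        // Clamp to the allowed output range; a fuller version would also
        // stop integrating while clamped (see the integral-windup notes).
        return Math.max(minOutput, Math.min(maxOutput, output));
    }
}
```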

In many articles about PID, you may see the "ideal" or "dependent" form:

 out(t) = Kc*(error(t) + (1/Ti)*∫error(t)dt + Td*(d/dt)(error(t)))

where Kc is called the controller gain, Ti is called the reset time (the time over which accumulated error should be eliminated), and Td is called the derivative time (the time horizon over which future error is predicted). Or, in discrete form, as code:

 error = setpoint - input;
 totalError += error * period;
 output = C * (error + totalError / Ti + Td * (error - prevError) / period);
 prevError = error;
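Expanding the dependent form shows that the two forms are interchangeable: Kp = Kc, Ki = Kc/Ti, and Kd = Kc*Td. A tiny sketch of the conversion (class name is made up):

```java
// Convert dependent-form gains (Kc, Ti, Td) to parallel-form (Kp, Ki, Kd):
//   out = Kc*(e + (1/Ti)*∫e dt + Td*de/dt)
//       = Kc*e + (Kc/Ti)*∫e dt + (Kc*Td)*de/dt
class PidForms {
    // Returns {Kp, Ki, Kd}. Ti must be non-zero.
    static double[] dependentToParallel(double kc, double ti, double td) {
        return new double[] { kc, kc / ti, kc * td };
    }
}
```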

The Jaguar speed controller has position and speed PID controllers built in. However, using these control loops appears to require the CAN bus and 2CAN (which a number of teams, including ours, have found unreliable in the past) to set the tuning parameters. Additionally, a separate encoder would be required to drive each Jaguar, and it would be difficult to control multiple Jaguars in a synchronized manner. Finally, according to a Chief Delphi thread, the algorithm has a number of limitations (e.g., no anti-windup, no feed forward), and we can't change it. Therefore, we won't be using the Jaguar's built-in PID controllers.

WPIlib provides a PIDController class, but we've found the need to modify it in order to change the algorithm and to make the parameters easier to display and tune via the SmartDashboard.

Issues to be aware of:

  • work in real-world units: The PID parameters are easier to tune if they're in terms of real-world measures.
  • validate the encoder/gear ratio: Compute the ticks per foot from first principles ((4 * lines_per_revolution * gear_ratio) / (wheel_diameter * PI), with the wheel diameter in feet), then check its accuracy by pushing the robot a few feet and reading the count.
  • don't use Encoder.getRate(): getRate() has had bugs in the past, and also doesn't use the same period as your control loops, so just call getRaw() and use the Timer class to compute the rate yourself.
  • velocity control vs. position control: In practice, PID for velocity control works a bit differently than for position control. Position control on a flat surface is an integrating process -- when you remove the controller output, the position remains the same. Velocity control is a self-regulating process -- it requires a constant output to maintain constant velocity. For position control, the P term alone can get you close to the goal, and the D term can provide damping to avoid overshoot. For velocity control, it's the I term that gets you close, with the P term providing damping. In this case (called "absolute velocity" below), the integral of the error is equal to the total distance error (velocity setpoint * elapsed time - total distance). Alternatively, it's possible to accumulate the PID output for velocity control and use PD as with position (computing the change in output -- essentially taking the derivative of the whole equation), and this approach (called "incremental velocity" below) has a number of advantages, such as making it easy to switch between manual and automatic control, and is our preferred approach. Sample code for this can be found here.
  • sampling frequency too low: Ideally, we'd like the control loop to be invoked at 100Hz (i.e., 100x per second). The standard PIDController class executes at 20Hz, IIRC, and IterativeRobot *Continuous methods are only invoked at about 50Hz.
  • irregular period: The timing of a high-frequency software loop can easily be perturbed by other functions or threads. Furthermore, if you need to change the period, you don't want to retune all your constants. Measure the actual period between invocations and use it to scale the terms rather than assuming a constant period and omitting it.
  • non-linear motor speed: The PID algorithm assumes that a linear change to the output will produce a linear change to the input and vice versa -- the P term works exactly that way. The motors are controlled by voltage, but the speed generated is not linear with the voltage. The Jaguars linearize the output for you. Victors don't. You can see this by plotting speeds at output settings .1*k for k = 1 to 10. Ideally, the linearization transformation would be invertible and smooth, but a piecewise linear interpolation could suffice.
  • identical motors don't necessarily behave identically: Be careful about sending the same output to multiple motors, even those of the same model that should have the same amount of load applied. They can turn at significantly different speeds (>10% different) at the same voltage. This can be handled by calibrating the motors and scaling the outputs, or by running separate control loops for each motor, which may be necessary to support turning, anyway.
  • dead zone: The motors don't move at all below a certain output threshold (e.g., ~0.05 for Jaguar+CIM). Jump over the range where output has no effect. Shift the whole output range by adding or subtracting the dead-zone threshold if using the whole output range, both forward and reverse, or simply treat the threshold as the minimum value if implementing a unidirectional controller.
  • division by zero: Division by zero either produces an exception or garbage. Ensure your denominators are non-zero. Since PID is tuned by setting most of the coefficients to zero initially, avoid dividing by the coefficients.
  • directionality: Ensure your encoders are counting in the positive direction during positive motion; SetReverseDirection if not. Similarly flip the output polarity by negating it, if necessary, so that a positive output from the PID moves in the forward direction. If you want to permit both forward and backward motion, you can use the full output range (minus the dead zone), which, for example, will permit braking and avoid non-linear saturation, thus improving performance of the controller.
  • encoder jitter: If you observe significant variation in encoder readings at a constant output and load, then you may want to compute a moving average of encoder deltas or use another filtering technique rather than using the raw encoder value. Sometimes only the D term is filtered, since it is particularly sensitive to noise, and sometimes the final output is filtered. The disadvantage of filtering is that it induces lag, since it takes longer for real changes to saturate the filter, and this lag can exacerbate overshoot and oscillation.
  • unachievable goal: It may be difficult or impossible to achieve the setpoint exactly. Permit a small but non-zero tolerance around the setpoint in order to avoid pointless oscillation around the setpoint. Additionally, PID depends on being able to adjust the output freely according to its formula. When the output saturates and it still can't achieve its goal, the algorithm doesn't perform well (see integral windup next for a specific problem). Moreover, if motors are maxed out under a larger than expected load, the robot simply won't be able to move as fast as desired and/or won't have any headroom left to maneuver without slowing down.
  • integral windup: The error integral accumulates in order to compensate for sustained error. However, if the output saturates and can't compensate any faster, then the integral term can become very large and cause a large overshoot. To avoid this problem, stop changing the accumulated error when output's maximum or minimum value is reached, and/or when I == 0. When using a goal position minus the actual position as the accumulated error for speed control, then increment the goal position by the actual position delta instead of the goal position delta. In general, when/whether to reduce the integral term can be a tricky problem. Some solutions include delaying integral accumulation, decaying the integral to give more weight to recent error, and a sliding window of integral values (e.g., average speed over last N seconds for velocity control).
  • proportional kick due to change in setpoint: A large discontinuous change in the setpoint can cause a large spike in the output. To smooth the change, instead of P*error, use -P*input (proportional-on-measurement). This only works if the P term isn't the first-order term, such as in the case of velocity control, since it removes the setpoint from the term entirely.
  • derivative kick due to change in setpoint: Similar to proportional kick. Instead of D*(error - prevError)/period, use -D*(input - prevInput)/period.
  • integral kick due to change in I constant: The integral term doesn't change dramatically in response to changes in setpoint, but does change dramatically to changes in the I constant. To address this, accumulate I*error as the whole I term instead of adding I * accumulated error to the output.
  • integral kick due to enabling the PID while already in motion via manual control: Initialize the accumulated error from the current output level (called back-calculation): totalError = output / I if I is non-zero, otherwise 0. One could also first subtract P and D terms based on the current error to get even closer. This technique can also address integral windup.
  • overshoot due to lag: A change in output doesn't immediately cause a commensurate change in input. The greater the lag, the greater the overshoot and oscillation. Motion profiles can address this problem, thus creating much smoother motion. A motion profile gradually adjusts the setpoint and/or other limits (e.g., output limit, velocity limit, acceleration limit) over time. The simplest, "rectangular" profile uses a constant velocity to advance a "filtered" goal position until close enough to reach the actual goal. A "trapezoidal" velocity profile increases the target velocity over time at a constant acceleration, up to a maximum velocity, then reverses the process to decelerate. An "S-curve" profile applies a "trapezoidal" shape to acceleration by enforcing a constant jerk, thereby further smoothing the motion and reducing overshoot. A "parabolic" profile is similar, but subtracts the jerk terms in order to compensate for torque drop-off. The discrete S-curve equations are as follows:
    • PT = PT + VT + .5*AT + ⅙*JT
    • VT = VT + AT + .5*JT
    • AT = AT + JT
  • slow response to major discontinuous disturbances: For control problems where the I term is the dominant term, such as with speed control, it can take some time for the I term to respond to large discontinuous disturbances. This problem can be addressed using a technique called "feed forward", which adds a term to the output based on a model of the response to the disturbance, in order to take corrective action before input responds and/or integral accumulates. This approach can also be applied to compute the bulk of the output value if a reasonably accurate model can be developed. To do this, add F * setpoint to the PID output, where F is the slope of the output/input curve derived from static, load-free calibration. F is the reciprocal of what is called the process gain, which can also be used to derive the other PID coefficients in the case that feed forward is not used. An alternative but mathematically equivalent expression of the feed-forward approach is called setpoint weighting, which uses a combined feed-forward/feedback term, P*(W*setpoint - input) instead of separate F*setpoint and P*(setpoint - error) terms.
  • fast process affecting slow process: In the case that the process being controlled is controlled indirectly, cascaded controllers can be used, where an outer loop controls the setpoint of an inner loop. In order for this approach to be necessary and effective, the inner process should be at least 3x faster than the outer one. Motion profiles are more appropriate for position and/or speed control applications. Kalman filters can be used to combine multiple inputs. One possible application would be to use an outer loop to control direction while an inner loop for each motor controls velocity. A balancing robot, such as a Segway, might also need cascaded PID loops.
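The "incremental velocity" approach from the list above can be sketched as follows (a hypothetical outline, not our actual sample code; names are made up). Each cycle adds a delta to a running output, so the accumulated output itself plays the integral role:

```java
// "Incremental velocity" PID sketch: the output is accumulated, so the
// kP gain here effectively integrates the error (acting like the usual
// I gain) and the kD gain provides the damping (acting like P).
class IncrementalVelocityPid {
    private final double kP, kD;             // gains on the velocity error
    private final double minOutput, maxOutput;
    private double output;                   // accumulated motor output
    private double prevError;
    private boolean first = true;

    IncrementalVelocityPid(double kP, double kD,
                           double minOutput, double maxOutput) {
        this.kP = kP; this.kD = kD;
        this.minOutput = minOutput; this.maxOutput = maxOutput;
    }

    // period is the measured time since the last call.
    double update(double targetVelocity, double measuredVelocity,
                  double period) {
        double error = targetVelocity - measuredVelocity;
        double derivative = first ? 0.0 : (error - prevError) / period;
        first = false;
        prevError = error;
        // Accumulate the change in output. Clamping the accumulator
        // itself limits integral windup by construction, and makes it
        // easy to hand off between manual and automatic control.
        output += (kP * error + kD * derivative) * period;
        output = Math.max(minOutput, Math.min(maxOutput, output));
        return output;
    }
}
```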
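The discrete S-curve equations above translate directly into a per-tick update (dt = 1 tick); a minimal sketch:

```java
// One tick of the discrete S-curve profile: position picks up velocity
// plus the higher-order Taylor terms, velocity picks up acceleration,
// and acceleration picks up jerk.
class SCurveProfile {
    double position, velocity, acceleration;

    void step(double jerk) {
        position += velocity + 0.5 * acceleration + jerk / 6.0;
        velocity += acceleration + 0.5 * jerk;
        acceleration += jerk;
    }
}
```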

PID Tuning

One of the biggest challenges in using PID is tuning the coefficients. Use the Preferences class to store tunable constants such as P, I, D, F, encoder/gear ratios, min/max caps, tolerance threshold, profile constants, and the desired period/frequency. Estimate the ballpark range of sane coefficient values by dividing the output range by the input range. Start with a value an order of magnitude smaller than this ballpark (e.g., 0.001), increase it by 10x until it has an effect, then "binary search" for a reasonable value, or just increment by smaller and smaller amounts.
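For example (the numbers here are illustrative): a motor output range of [-1, 1] (width 2) and a top speed of roughly 15 ft/s give a ballpark of 2/15 ≈ 0.13, so starting at 0.001 and multiplying by 10 quickly brackets it:

```java
// Ballpark a sane coefficient magnitude: output range / input range.
class TuningBallpark {
    static double ballpark(double outputRange, double inputRange) {
        return outputRange / inputRange;
    }
}
```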

Start with the coefficient of the dominant term (P for position or incremental velocity, I for absolute velocity), and leave the others zero -- ensure the PID algorithm works in the case of zero values for these coefficients, as mentioned above. You know you've found a "reasonable value" for this term when the algorithm reaches its goal reasonably quickly without ridiculous overshoot. One suggested approach is to increase the value until the controller oscillates, then cut the coefficient in half. Then move on to the 2nd-order term (D for position or incremental velocity, P for absolute velocity) and repeat the process, with the objective of reducing overshoot. Increase the value until oscillations are minimized; decrease the value if the controller is responding too slowly or oscillation increases. Repeat for other constants as necessary to refine the motion through "knees" in the motion curves. If P is your dominant term, you may need a small I value to adjust for steady-state residual error. If I is your dominant term, a feed-forward term could replace I as the dominant term in order to free I to serve this purpose.

Feedback control loops can do some crazy things while you are trying to debug them. One useful technique is to purposely limit your motor outputs to start with – to a narrow band around some level, or even to prevent reverse motion altogether. Restricting motor output values can both help prevent mechanical breakage and give you sufficient time to observe the algorithms in action (and hence a fighting chance to debug them). This can be a little tricky: if the limits are set too low, the motor cannot actually move the mechanism.

There are also some formal tuning methods (e.g., Ziegler-Nichols, IMC, or the simpler Good Gain method), but we haven't tried them yet.

Closed-Loop Control Challenges

To experiment with closed-loop control, check out the closed-loop control challenges at https://janismac.github.io/ControlChallenges/.

Resources

PID resources:

  • PID example in wpilib cookbook
  • PID loops
  • missile example animation
  • video of PID-controlled tracking robot
  • PID tuning
  • Integral windup
  • Practical PID control (book)
  • PID control chapter from Control System Design by Astrom
  • Control guru (online book)
  • Using PID for velocity control
  • PID without a PhD
  • PID design basics
  • PID tweaks
  • Motion profiling
  • diy-pid-control group
  • Introduction to the Kalman filter

Other forms of closed-loop control:

  • Control strategies
  • Sliding-mode control
  • Backstepping
  • Lead-lag compensation
  • Negative feedback amplification