Now consider the case in which a user commands an object to move. Examples include driving a car, flying a spaceship, or walking an avatar around. This introduces some new parameters, called the *controls*, actions, or inputs to the dynamical system. Differential equations that include these new parameters are called *control systems* [14].

Let $u = (u_1, u_2, \ldots, u_m)$ be a vector of controls. The state transition equation in (8.26) is simply extended to include $u$:

$$\dot{x} = f(x, u) . \qquad (8.29)$$

Figure 8.9 shows a useful example, which involves driving a car. The control $u_s$ determines the speed of the car. For example, $u_s = 1$ drives forward, and $u_s = -1$ drives in reverse. Setting $u_s = 10$ drives forward at a much faster rate. The control $u_\phi$ determines how the front wheels are steered. The state vector is $(x, z, \theta)$, which corresponds to the position and orientation of the car in the horizontal $xz$ plane.

The state transition equation is:

$$\dot{x} = u_s \cos\theta , \qquad \dot{z} = u_s \sin\theta , \qquad \dot{\theta} = \frac{u_s}{L} \tan u_\phi , \qquad (8.30)$$

in which $L$ is the distance between the front and rear axles.
Using Runge-Kutta integration, or a similar numerical method, future states of the car can be calculated, given that controls $u_s$ and $u_\phi$ are applied over time.
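As a concrete sketch, the car model (8.30) can be integrated with the classical fourth-order Runge-Kutta method. The axle separation $L = 2.5$, the time step, and the control values below are illustrative assumptions, not values from the text:

```python
import math

L = 2.5  # assumed distance between front and rear axles (meters)

def f(x, u):
    """Right-hand side of (8.30): state x = (x, z, theta), controls u = (u_s, u_phi)."""
    _, _, theta = x
    u_s, u_phi = u
    return (u_s * math.cos(theta),
            u_s * math.sin(theta),
            (u_s / L) * math.tan(u_phi))

def rk4_step(x, u, dt):
    """One fourth-order Runge-Kutta step, holding the control constant over dt."""
    def nudge(a, b, s):
        return tuple(ai + s * bi for ai, bi in zip(a, b))
    k1 = f(x, u)
    k2 = f(nudge(x, k1, dt / 2), u)
    k3 = f(nudge(x, k2, dt / 2), u)
    k4 = f(nudge(x, k3, dt), u)
    return tuple(xi + (dt / 6) * (a + 2*b + 2*c + d)
                 for xi, a, b, c, d in zip(x, k1, k2, k3, k4))

# Drive forward (u_s = 1) with the wheels slightly steered for 5 seconds.
x = (0.0, 0.0, 0.0)
for _ in range(500):
    x = rk4_step(x, (1.0, 0.1), 0.01)
```

With the steering angle fixed, the car traces a circular arc; the final state gives its position and heading after the controls have been applied.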

This model can also be used to steer the virtual walking of a VR user from a first-person perspective. The viewpoint then changes according to $(x, z, \theta)$, while the height $y$ remains fixed. For the model in (8.30), the car must drive forward or backward to change its orientation. By changing the third component to $\dot{\theta} = u_\omega$, the user could instead specify the angular velocity directly. This would cause the user to rotate in place, as if on a merry-go-round. Many more examples like these appear in Chapter 13 of [163], including bodies that are controlled via accelerations.
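For concreteness, here is how the third component of the model might be swapped for direct angular-velocity control; the control names, time step, and values are illustrative assumptions:

```python
import math

def f_omega(x, u):
    """Variant of (8.30) with the third component replaced by u_omega,
    so the user specifies angular velocity directly."""
    _, _, theta = x
    u_s, u_omega = u
    return (u_s * math.cos(theta),
            u_s * math.sin(theta),
            u_omega)

# With u_s = 0 and u_omega nonzero, the viewpoint rotates in place
# (the merry-go-round effect): Euler integration for 1 second.
x = (0.0, 0.0, 0.0)
dt = 0.01
for _ in range(100):
    dx = f_omega(x, (0.0, 0.5))
    x = tuple(xi + dt * dxi for xi, dxi in zip(x, dx))
```

The position components stay at the origin while the orientation accumulates, confirming a pure rotation in place.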

It is sometimes helpful conceptually to define the motions in terms of discrete points in time, called *stages*. Using numerical integration of (8.29), we can think about applying a control $u_k$ over a time interval $\Delta t$ to obtain a new state $x_{k+1}$:

$$x_{k+1} = F(x_k, u_k) . \qquad (8.31)$$

The function $F$ is obtained by integrating (8.29) over $\Delta t$. Thus, if the state is $x_k$, and $u_k$ is applied, then $F$ calculates $x_{k+1}$ as the state at the next stage.
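A minimal sketch of such a stage transition function, assuming the car model (8.30), a stage duration $\Delta t = 0.5$, and simple Euler substeps (all illustrative choices, not from the text):

```python
import math

L = 2.5    # assumed axle separation for the car model (8.30)
DT = 0.5   # assumed stage duration, Delta t

def f(x, u):
    """Right-hand side of (8.30): x = (x, z, theta), u = (u_s, u_phi)."""
    _, _, theta = x
    u_s, u_phi = u
    return (u_s * math.cos(theta),
            u_s * math.sin(theta),
            (u_s / L) * math.tan(u_phi))

def F(x, u, substeps=50):
    """Stage transition x_{k+1} = F(x_k, u_k): integrate (8.29) over
    Delta t with the control held constant (Euler substeps here)."""
    h = DT / substeps
    for _ in range(substeps):
        dx = f(x, u)
        x = tuple(xi + h * dxi for xi, dxi in zip(x, dx))
    return x

# Apply a straight-ahead control over three consecutive stages.
x = (0.0, 0.0, 0.0)
for _ in range(3):
    x = F(x, (1.0, 0.0))
```

Each call to `F` advances the state by one stage; chaining calls produces the state sequence $x_0, x_1, x_2, \ldots$ under the chosen controls.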

Steven M LaValle 2016-12-31