Perspective projection

Figure 3.16: Starting with any point $ (x,y,z)$, a line through the origin can be formed using a parameter $ \lambda $. It is the set of all points of the form $ (\lambda x, \lambda y, \lambda z)$ for any real value $ \lambda $. For example, $ \lambda = 1/2$ corresponds to the midpoint between $ (x,y,z)$ and $ (0,0,0)$ along the line.

Instead of using orthographic projection, we define a perspective projection. For each point $ (x,y,z)$, consider a line through the origin. This is the set of all points with coordinates

$\displaystyle (\lambda x, \lambda y, \lambda z) ,$ (3.39)

in which $ \lambda $ can be any real number. In other words, $ \lambda $ is a parameter that reaches all points on the line that contains both $ (x,y,z)$ and $ (0,0,0)$. See Figure 3.16.
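The parameterization in (3.39) can be sketched in a few lines of code. This is only an illustration (the function name is hypothetical, not from the text):

```python
def line_through_origin(x, y, z, lam):
    """Return the point (lam*x, lam*y, lam*z) on the line through
    the origin and (x, y, z), as in (3.39)."""
    return (lam * x, lam * y, lam * z)

# lam = 1/2 gives the midpoint between (x, y, z) and the origin,
# matching the example in Figure 3.16.
print(line_through_origin(2.0, 4.0, -6.0, 0.5))  # (1.0, 2.0, -3.0)
```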

Figure 3.17: An illustration of perspective projection. The model vertices are projected onto a virtual screen by drawing lines through them and the origin $ (0,0,0)$. The ``image'' of the points on the virtual screen corresponds to the intersections of the line with the screen.

Now we can place a planar ``movie screen'' anywhere in the virtual world and see where all of the lines pierce it. To keep the math simple, we pick the $ z = -1$ plane to place our virtual screen directly in front of the eye; see Figure 3.17. Using the third component of (3.39), we have $ \lambda z = -1$, implying that $ \lambda = -1/z$. Using the first two components of (3.39), the coordinates for the points on the screen are calculated as $ x' = -x/z$ and $ y'=-y/z$. Note that since $ x$ and $ y$ are both scaled by the same factor $ -1/z$, their aspect ratio is preserved on the screen.
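The projection onto the $ z = -1$ plane can be sketched as follows; the function name is hypothetical, and the eye is assumed to look down the $ -z$ axis as in the text:

```python
def project_to_unit_screen(x, y, z):
    """Perspective-project (x, y, z) onto the plane z = -1 through
    lines containing the origin: lam = -1/z, so x' = -x/z, y' = -y/z."""
    if z >= 0:
        raise ValueError("point must lie in front of the eye (z < 0)")
    return (-x / z, -y / z)

# A point twice as far away lands at screen coordinates half as large,
# which is exactly the perspective foreshortening being described.
print(project_to_unit_screen(1.0, 2.0, -2.0))  # (0.5, 1.0)
print(project_to_unit_screen(1.0, 2.0, -4.0))  # (0.25, 0.5)
```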

More generally, suppose the vertical screen is placed at some location $ d$ along the $ z$ axis. In this case, we obtain more general expressions for the location of a point on the screen:

\begin{displaymath}\begin{split}x' & = d x/z \\ y' & = d y/z . \end{split}\end{displaymath} (3.40)

This was obtained by solving $ d = \lambda z$ for $ \lambda $ and substituting it into (3.39).
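Generalizing the previous sketch to an arbitrary screen location $ d$, as in (3.40), gives the following (again a hypothetical helper; $ d$ is negative when the screen lies in front of the eye):

```python
def project_to_screen(x, y, z, d=-1.0):
    """General perspective projection (3.40): the screen is the plane
    z = d; solving d = lam*z gives lam = d/z, so x' = d*x/z, y' = d*y/z."""
    if z == 0:
        raise ValueError("cannot project a point in the z = 0 plane")
    return (d * x / z, d * y / z)

# With d = -1 this reduces to the z = -1 case from before.
print(project_to_screen(3.0, 6.0, -3.0, d=-1.0))  # (1.0, 2.0)
```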

This is all we need to project the points onto a virtual screen, while respecting the scaling properties of objects at various distances. Getting this right in VR helps in the perception of depth and scale, which are covered in Section 6.1. In Section 3.5, we will adapt (3.40) using transformation matrices. Furthermore, only points that lie within a zone in front of the eye will be projected onto the virtual screen. Points that are too close, too far, or outside the normal field of view will not be rendered on the virtual screen; this is addressed in Section 3.5 and Chapter 7.

Steven M LaValle 2016-12-31