## Creating stereoscopic left-right image pairs with POVRay

This page describes how to set up the POVRay camera in order to create stereoscopic left/right image pairs.


### StereoPOV

StereoPOV (written by Hermann Vosseler) is a patch for POVRay which allows the creation of left/right image pairs. It has a cache built in which should speed up rendering, but you will have trouble setting up this patch for non-Windows versions of POVRay.

StereoPOV will trace the left and right image simultaneously, resulting in an output image which is twice as wide; in order to cut the two sides apart, I wrote StereoJoin, a little tool which can operate on any number of PNG images.

But be warned: Setting up a stereoscopic camera is not as easy as setting up a normal camera. When using StereoPOV you will most likely end up using the camera's direction, right and up vectors again rather than the convenient look_at and angle keywords.

### Some theory

Since the StereoPOV patch has trouble under Linux and since I wanted to know exactly what I am doing, I chose to set up my own stereoscopic camera. Let us first have a look at what we're actually doing:

It is essential to understand the camera vectors used by POVRay. The camera is put at some location in space, looking in some direction. The up and right vectors define the image plane.

The figure on the left (made using XFig, by the way) shows all the vectors in their actual length. The image taken by the camera stays unchanged when all three vectors are scaled by the same amount. (When using StereoPOV, direction, up and right make up the stereoscopic window; see below for details.)
POVRay allows you to set the angle keyword, which is actually the horizontal view angle and can be computed from direction and right.

Since the image is normally wider than it is high rather than square (aspect ratio height/width < 1), the lengths of the right and up vectors need to be adjusted accordingly. For my cameras below, I chose to make |right|*|up|=1 (i.e. the product of the lengths equals 1). The angle is then selected via the appropriate direction vector length.
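For reference, this follows POVRay's own convention for the angle keyword: tan(angle/2) = |right| / (2*|direction|), i.e. |direction| = |right| / (2*tan(angle/2)). The same factor 2*tan(0.5*alpha_) shows up in the macro below.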

The figure on the left shows how stereoscopic camera pairs are normally set up [1]: The left and right cameras are separated by a distance e, called the stereo base, which corresponds to the eye distance and is often set to 65mm (the average human eye distance). Hence, each camera is moved from the (primary) location by +/- e/2 along the right vector.

Normal (perpendicular) cameras with parallel direction vectors are used, which means that the right and the left cameras see different parts of the scene: there is a vertical stripe in the left camera's image which shows parts of the scene not seen by the right camera, and vice versa. Hence, these image parts are cut off after rendering, which means that one wastes CPU time and needs to perform additional work to do the cuts.

This problem can be solved by using non-perpendicular cameras: While keeping the right vectors as above (and hence not changing the image plane), the direction vectors are adjusted so that they cross each other at the stereo window distance. At a distance of f, both cameras now see exactly the same part of the scene and cutting off parts of the images is no longer needed. These cameras are called non-perpendicular because the right and direction vectors are no longer perpendicular.
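To see why a tilt by e/(2*f) makes the rays cross at the window distance, here is a quick sketch (writing loc for the primary location and taking direction and right as unit vectors d and r): the right camera starts at loc + (e/2)*r and its center ray runs along d - e/(2*f)*r. After a forward distance f it reaches loc + (e/2)*r + f*d - f*(e/(2*f))*r = loc + f*d. The left camera (with both signs flipped) reaches exactly the same point, so the two center rays meet on the stereo window plane at distance f.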

When you look at a stereoscopic image, objects on the stereoscopic window appear to be painted on the image projection screen, objects farther away appear to lie behind the screen, while those between the stereoscopic window and the camera location seem to float in front of the projection plane.

This is the reason why one normally puts the stereoscopic window at the distance of the nearest foreground objects. You may experiment with that, but you will run into trouble when important parts of foreground objects touch the image border: they may then be visible to only one eye, especially if they are in front of the stereo window (and not, of course, when they are exactly at the depth of the stereo window).

Why not rotate the cameras?
This question commonly appears when one starts to think about what is being done rather than simply applying "HowTo"s from a text book. It seems natural to rotate the cameras (around the up axis) just as the eyes converge when looking at a near object.
For an explanation, a more in-depth consideration is needed [2].

The model above is just the result of a longer consideration of what we're doing here: Actually, it is not about "placing two cameras into the scene like two human eyes" but about "placing two cameras into the scene in a way that the resulting image, when viewed stereoscopically (as anaglyph or whatever), gives the onlooker the illusion of watching the scene with his own eyes".

Now, if one rotates the cameras, they have different image planes and hence the same object will have different vertical sizes in the images taken by the two cameras (see figure on the left). In other words: The red and cyan channels of an anaglyph will not match in size.

In contrast, the non-perpendicular shifted cameras (NP-camera in the figure) still have the same image plane. As pointed out above, these cameras are identical to ordinary perpendicular cameras, translated along the right axis (and not rotated) -- just that the direction was perturbed to eliminate the need for the cut-off.

[Figure: two comparison anaglyphs; left: "Rotated" cameras, right: "Shifted" cameras (click to enlarge, PNG)]

The two anaglyphs on the left illustrate the effect. They show the same scene with the same stereo window distance, but the left one uses rotated cameras while the right one uses the normal (shifted, non-perpendicular) ones. Especially note the grids around the robots: with the rotated cameras, the red and cyan channels of the grids do not match in height (vertical size), leading to image distortion.

The effect is not very strong and is best seen with a near stereo window (i.e. a large rotation angle). Make sure to enlarge the image to about 25cm (10 inch) width.

### How real is it and can we do better?

Short answer: It's perfect; we cannot do better.

Long answer (thanks to Hermann Voßeler for pointing this out): Provided that we use normal perspective cameras (with a flat image plane) and one common flat projection plane (which is the case for most common presentation methods like anaglyphs and parallel viewing on print media, monitor or screen projection, etc.), the above approach yields the exact (optimum) stereo image when viewed from the optimum position, the so-called orthostereographic view.

In this case, the direction of the light rays from the scene into the eye of the viewer does not change when the real scene is replaced by the stereographic projection. (Of course, the accommodation plane (i.e. the one the eyes have to focus on) then changes unavoidably since the projection is at a fixed distance from the viewer.) To create such a (perfect) orthostereographic projection, the following rules must be applied:

1. The stereo base used during creation of the stereo pairs (e) is equal to the stereo base used during presentation and equal to the eye distance of the viewer (commonly 65mm, the average human eye distance).
2. The projection is adjusted so that the images of infinitely distant objects on the projection plane have a (horizontal) separation which is equal to the eye distance of the onlooker (hence no eye convergence when looking at an infinitely distant object).
3. The viewer's eyes are located at the optimum viewing position (i.e. at the same distance as the stereo window distance during creation, and when looking at the image center, the eye rays should cross the projection plane orthogonally). (Hint: Try for yourself what happens to the stereo image when you move your head around.)
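As a sketch of what rule 2 amounts to with the camera model from above: a point straight ahead at distance D is seen by the two cameras at +/-(e/2)*(1 - f/D) on the stereo window plane, so the horizontal parallax in the rendered pair is p(D) = e*(1 - f/D). It is zero at D = f (the object sits on the window) and approaches e as D goes to infinity. Given rule 1 (e = eye distance), rule 2 is therefore met exactly when the stereo window is reproduced at its real-world size on the projection screen, because then the limiting parallax equals the stereo base and hence the eye distance.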

One should note, however, that under most circumstances, orthostereographic projection is not the principal goal. Instead, for practical (see e.g. 3.) and artistic reasons, approximately correct projections will still work just fine as long as they can be viewed without eye strain.

### POVRay SDL implementation

Taking the above into account, one can define a stereoscopic camera using:

• side: the side of the camera.
Use +1 for the right camera, -1 for the left one and 0 for a non-stereoscopic (perpendicular) camera.
• e: the stereo base in POV units.
Using 65mm is normally a good choice (human eye distance).
• f: the stereo window distance in POV units: objects in front of it will appear in front of the projection screen.
As a starting point, use about the distance of the nearest foreground objects.
• alpha: the horizontal view angle in degrees.
Values in the range 40..60 degrees are probably well-suited; this should actually be the angle under which the projection screen is seen by the viewer and hence depends on the ratio of screen width and screen distance (see the worked example after this list).
• prim_loc: the camera location (actually the center between the right and left camera). For my macro, this is fixed to the origin because one can translate later on.
• prim_dir, prim_right, prim_up: the other camera vectors. In my macro these are z, x, y, respectively, and aspect correction based on the image size (in pixels) is done automatically. Rotate the camera to point it at the desired location; you may even use look_at.
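A worked example for the alpha parameter (the numbers are just illustrative assumptions): a projection screen of 30cm width viewed from 60cm distance is seen under alpha = 2*atan(0.5*0.30/0.60) ≈ 28 degrees; a 40cm wide image viewed from 50cm gives about 44 degrees.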

Note that in order to set up a decent stereoscopic scene it is advisable to use real-world units as POV units (e.g. 1 POV unit = 1m). Otherwise you need to keep the scale factor in mind.
Furthermore, keep the intended projection environment (size and distance) in mind when choosing the camera angle.

Okay, all that can be put into a simple POVRay SDL #macro. I wrote the following one for my scenes; you can download it here (do not use copy-and-paste, the file contains additional info):

Source: stereocam.inc [4kb POV SDL], Version 1.0 (04/2004), Author: Wolfgang Wieser (report bugs here)
```
// Do NOT copy-and-paste; use link above to download.
#macro WWStereoCam( side, e_, alpha_, f_ )
#local prim_loc=<0,0,0>;
#local prim_dir=z;
#local prim_right=x;
#local prim_up=y;

// Perform aspect correction:
#local Aspect = image_height/image_width;
#local prim_up =    prim_up   *pow(Aspect,    .5 );
#local prim_right = prim_right/pow(Aspect, 1- .5 );

#local alpha_ = alpha_*pi/180;

// Camera moves along right vector for different sides:
#local cam_loc = prim_loc + side * 0.5*e_*vnormalize(prim_right);

// Camera direction: right eye looks left and left eye right so that
// the rays cross at the stereo window, i.e. at distance f_ from
// the camera. [case for f=0 skipped]
#local cam_dir = (prim_dir - side * e_/(2*f_)*vnormalize(prim_right));
// Apply appropriate scaling:
#local cam_dir = cam_dir / (2*tan(0.5*alpha_));

// At least the right and up vectors are easy...
#local cam_right=prim_right;
#local cam_up=prim_up;

// Here comes the camera def:
perspective
location cam_loc
direction cam_dir
right cam_right
up cam_up
#end
```
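Since the macro emits the camera body items (perspective, location, direction, right, up), it is presumably meant to be expanded inside a camera block. Here is a minimal usage sketch; the numbers, the rotate/translate values and the Side switch are illustrative assumptions, not taken from my scene files:

```
// Hypothetical usage sketch: 1 POV unit = 1m, stereo base 65mm,
// 50 degree view angle, stereo window 2m in front of the camera.
#declare Side = 1;   // +1: right eye image, -1: left eye, 0: mono

camera {
    WWStereoCam( Side, 0.065, 50, 2.0 )
    rotate <0, 30, 0>        // aim the whole stereo rig (both eyes together)
    translate <0, 1.7, -5>   // then move it to the desired position
}
```

Render the scene twice (Side=+1 and Side=-1) and combine the two PNGs into an anaglyph or a left/right pair.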

### Setting up the camera

Note the following when using the above macro WWStereoCam:

• You need to switch off POVRay's vista buffer (-UV) because it (currently) does not work with non-perpendicular cameras.
• You can use the macro GetViewAngle(d,s) to calculate the view angle of a (projection) screen of width s at a distance d in front of the spectator (a sketch of the formula it presumably implements follows this list).
• Changing the camera vectors prim_dir, prim_right, prim_up from cyclic to anti-cyclic or changing one sign will swap the camera's sides, which means that side=+1 will then be the left camera instead of the right one.
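GetViewAngle itself is not reproduced here; judging from its description it presumably just inverts the screen geometry, roughly like this (an assumption, not the original code from stereocam.inc):

```
// Hypothetical sketch of what GetViewAngle(d,s) presumably computes:
// the full horizontal angle (in degrees) under which a screen of
// width s_ is seen from distance d_.
#macro GetViewAngle( d_, s_ )
    ( 2*degrees(atan2( 0.5*s_, d_ )) )
#end
```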

The StereoPOV documentation has some hints on how to set up the scene and the camera for a nice stereoscopic image: place the foreground objects approximately at the depth of the stereo window and always keep in mind which projection device you will be using in the end. It is advisable to use real-world units in the POVRay code so that the calculations are easier.

### Sample images

Okay, here are finally some sample images made using the above camera macro. They are grayscale anaglyphs which should be viewed with red/cyan glasses (red being left).

The images below all show the same scene but with different stereoscopic window distances (f). On a 17 inch monitor, I recommend viewing them fullscreen at 60cm distance. The right images are the same as the left ones but with a visible stereoscopic window.
(And... what is it? Don't ask: The spherical thingy is a spy robot designed by me in 02/2004 which reminds me a bit of one such object seen in Star Wars...)

Normal setup: The stereoscopic window is at the position of the nearest foreground object. Looking at the image is like looking through the stereoscopic (computer monitor) window and seeing a 3d world behind the screen surface.

Special setup: The stereoscopic window is just behind the foreground object. The nearest flying robot seems to be in front of the screen while all the others are still behind it. Note that this only works well because no part of an object in front of the stereoscopic window is at the border of the image.

Why are the images so large?
Simply because they are PNGs and hence losslessly compressed. It is really a bad idea to use e.g. JPEG for anaglyph storage because compression artifacts will considerably degrade anaglyph quality. (I tried it: Even at 98% quality you can very well see "shadows".)