Make and Use a Twin Digital SLR Camera System for 3-D Stereo Photography and More

Here’s how to couple two digital single lens reflex cameras for a “3D DSLR” combining the image quality and flexibility of a DSLR with the three-dimensional view of today’s much more limited stereo devices—inexpensively and in about the space of a single big “professional” camera. Read on for special situations, suggested improvements, and applications.

Summary
You’ll need:
 * Two matching wireless shutter releases that can transmit a half-press of the shutter button. (For certain Canon cameras with “C3” style 3-pin remote-release sockets, a double-ended cable, which you may have to make by splicing, can take their place; for Nikon cameras with 10-pin connectors, a simple MC-23-compatible cable links them directly.)
 * Two matching DSLRs compatible with the releases and having tripod sockets in line with their optical axes (imaginary lines running forward and backward through the centers of the lenses).
 * Two matching lenses.
 * A threaded rod, such as a setscrew, fitting through not quite the combined depth of the tripod sockets, to hold the cameras together.
 * Some “invisible” office tape to shim the edges of the cameras’ mating surfaces.
Apply the tape along the left and right edges of one camera’s mating surface, building up skids that take up the slack in the screw’s last thread-pitch, so the cameras snug against each other parallel when you screw the tripod sockets onto the setscrew. Run left-to-right strips lengthwise, under the skids, at the front or back to correct “toe-in” or “toe-out”: the cameras’ views should be separated consistently by only their parallax, which you can measure with test photos that include a yardstick. Once you’ve worked out how to shim the bases, tack the cameras in place with a little white glue applied between the tape skids on one camera and a gapless body surface (or a little corresponding tape, for smoother assembly) on the other. Match the cameras’ settings and automation modes. Half-press the wireless release (or, with certain Canons or Nikons and the appropriate cable, one camera’s shutter release) to focus the cameras, then fully press it to trip the shutters with a constant, matching delay. Use flash from at most one camera to avoid automation and synchronization conflicts.
You may need to set single-shot autofocus mode or slower shutter speeds, possibly with one slightly longer than the other; avoid preflash-based TTL automation and second-curtain sync, or put the flash on the other camera, to ensure both shutters will be fully open when the flash fires. Identify the cameras’ corresponding pictures in a spreadsheet, rename one camera’s files to correspond to the other’s with pyRenamer, and use StereoPhotoMaker’s batch processing to align them precisely, match their color and brightness, and combine them for various viewing methods such as eye-crossing, special frames, or colored glasses.
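The renaming step can also be scripted directly if pyRenamer isn’t handy. The sketch below, which assumes the cameras’ clocks agree to within a couple of seconds and that file modification times approximate capture times, pairs each right-camera file with the nearest-in-time left-camera file and renames it to match (the “_R” suffix is just an illustrative convention):

```python
import os

def pair_and_rename(left_dir, right_dir, tolerance_s=2.0):
    """Match each right-camera file to the nearest-in-time left-camera file
    and rename it to the left file's basename plus an "_R" suffix.

    Assumes file modification times approximate capture times and that the
    cameras' clocks agree to within tolerance_s seconds.
    """
    def listing(d):
        return sorted((os.path.getmtime(os.path.join(d, f)), f)
                      for f in os.listdir(d))

    lefts = listing(left_dir)
    renamed = []
    for r_time, r_name in listing(right_dir):
        l_time, l_name = min(lefts, key=lambda p: abs(p[0] - r_time))
        if abs(l_time - r_time) <= tolerance_s:
            new_name = (os.path.splitext(l_name)[0] + "_R"
                        + os.path.splitext(r_name)[1])
            os.rename(os.path.join(right_dir, r_name),
                      os.path.join(right_dir, new_name))
            renamed.append((r_name, new_name))
    return renamed
```

Run it on copies first; bulk renames are hard to undo.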

Choose cameras
Choose two of the same model. The shutter lag will match closely, so you won’t need complicated, expensive adjustable-delay electronic triggers to capture a moving subject in the same place with each camera. The metering and autoexposure programming will match, so you’ll automatically get not only equal brightness but equal motion and out-of-focus blur which would be much harder to fix in post-processing. The resolution, angle of view, and tonality will match out of the box as well. Look for these features:


 * Two-stage remote shutter release compatibility. You’ll half-press a single remote-release button to prefocus both cameras, then full-press it to trigger them both with a constant delay.


 * Small, inexpensive wireless shutter release available.
 * Cheapness can be a virtue. A wireless shutter release receiver often stows in the camera’s accessory shoe. A simple, small receiver won’t bump against much or exert much leverage, and a flimsy plastic foot will sacrifice itself to protect the camera.
 * Yongnuo makes inexpensive wireless two-stage remote releases for some Canon and Nikon DSLRs. Some, at least, take lithium batteries and consume them quickly—buy extras online to avoid exorbitant retail “accessory” markups. Apparently similar releases are available under the name “Stado ST-WX2002” or simply “WX2002”.
 * If you can’t get an inexpensive wireless remote release, splice two wired remote releases.
 * Certain Canon cameras with “C3” style 3-pin remote-release sockets can connect directly with a double-ended cable, making the wireless releases unnecessary. If a double-ended C3 cable is not available off the shelf, you can make one by splicing the cut ends of two C3 cables (match each pin’s wire, solder, and replace the insulation and jacket with heat-shrink tubing), or by joining two C3-to-subminiature stereo audio adaptors, commonly used for generic two-stage shutter releases, through an audio coupler (heat-shrink tubing around the coupling can secure it and keep any rough parts from scratching the cameras). Ideally the connectors would face in a direction that neither blocks access doors on either camera nor requires the cable or its connectors to project behind the cameras, which would keep them from lying flat on their backs. This can be accomplished by rotating the inside of each connector on common rearward-facing cables by one “pin” in the appropriate direction, in manufacture or possibly reassembly.
 * Nikon cameras with 10-pin connectors can connect directly with an MC-23 compatible cable so that either camera’s shutter button will focus and trigger both, making the wireless releases unnecessary.
 * Multiple cameras could be synchronized with wireless shutter releases, or possibly coupled 10-pin or wired-remote-release cables for multiple perspectives in a single shot.


 * Flat, broad, sturdy baseplate perpendicular to the optical axis and near the bottom of the lens mount, with a metal tripod socket at the middle.
 * Avoid cameras with a large, fixed “grip” area below the lens. These would hold the cameras much further apart than your eyes for unrealistic, sometimes distracting excessive stereo base length and consequently divergence (“hyperstereo”) which would not be trivial to correct. Their mass and size would also increase both the force and leverage on the connection between the cameras.
 * Fixed grips, and the “professional” style cameras on which they are generally found, are stronger than removable grips once mounted, and may be stronger and more precise than spacers. So, they may be well suited to use with bulky lenses that would prevent other cameras from connecting directly and that do not accept tripod collars to connect to one another instead.
 * The tripod socket must be in line with the optical axis to avoid distracting misalignment of the images’ stereo base, also not trivial to correct, and to allow a similarly aligned pair of tripod foot sockets to hold big lenses parallel by a second coupling.
 * If you find yourself with misaligned tripod sockets but still want to attach the cameras to each other, hold them with the lenses horizontally side-by-side and rotate the pictures back to right-side-up (cropping unless you want a tilted format) so you don’t have to rotate your head to match the tilted stereo base. Or, make a bracket to hold each camera separately.
 * A broad baseplate with the socket roughly centered front-to-back increases the distance in each direction from the “fulcrums” formed by the cameras’ edges or the spacers holding the cameras apart evenly to the tripod sockets, reducing the leverage a bump can apply to bend or extract them.


 * The battery, memory card, and other frequently-accessed components or controls should be reachable when the cameras are coupled baseplate-to-baseplate so you won’t have to pivot the cameras apart and realign them frequently. A door on the bottom is more likely to be reachable with the cameras coupled if it is nestled against the usually further extending right side of the camera. Some Nikon “consumer” DSLRs have bottom doors aligned this way, but some Canon consumer DSLRs’ bottom doors run along the length of the baseplate. Measure the distance, directly or on a picture, from the inward edge of the battery door to the center of the tripod socket along the bottom of the camera you’re considering. It should exceed the distance from the center of the socket to the other end of the camera.


 * One or more alignment pin sockets or other coupling points on the baseplate could help you keep the cameras parallel with a more complicated connector such as a rail.


 * Single-lens reflex design. Autofocus SLRs generally have a rangefinder-like out-of-focus-direction sensitive “phase-detection” autofocus system faster than the contrast-detection and other systems common on other cameras. And, as the most popular kind of expensive camera, they’re economically equipped with the most sophisticated features.
 * Non-reflex cameras tend to have smaller, better wide-angle lenses because they don’t require a more complicated retrofocus design to make room for a mirror.


 * Digital. Using two cameras at a time can get complicated. Know when something has gone wrong and retake the picture for free.


 * Versatile in-camera processing or tethering for automated image acquisition and out-of-camera processing. Save up to twice as much time with two cameras.
 * Compatibility with free firmware such as CHDK and its StereoDataMaker extension (each for Canon compact cameras, not DSLRs, so far), free tethering software such as gphoto, and free RAW processing software such as GIMP plugins will over time give you fullest use of your cameras’ hardware and computing power, helping you automate taking, correcting, and otherwise processing multiple pictures for an ever-expanding set of applications including stereo, focus-stacking, HDR, stitching, and combinations thereof. Wireless tethering would be very convenient for multiple cameras, and handheld-device controlled tethering would be very convenient outside. A camera with fully electronic controls, such as a compact camera, may be more flexible to automate than a typical SLR. A 3D smartphone, or two smartphone cameras communicating with one another, would pair a modest camera with a great general-purpose networked computer for acquiring, combining, manipulating, and sharing images and video in the field—and for collaboratively developing new ways to use it and, eventually, other systems, from an old smartphone with two shots taken by hand to multiple fancy wirelessly tethered cameras.
 * In-camera correction of chromatic aberration and other lens faults would be particularly convenient for small, cheap lenses easy to carry in duplicate.
 * In-camera high dynamic range (HDR) bracketing, alignment, and tonemapping, done well, speeds an often particularly tedious picture-taking and editing process.
 * Pentax includes many kinds of in-camera processing across its range of DSLRs.


 * Built-in flash. Two cameras and wireless releases are bulky enough without an external flash. Some cameras’ built-in flashes can also wirelessly control other flashes.
 * A digital camera usually uses a pre-flash for sophisticated through-the-lens flash metering and may delay opening the shutter when doing so for time to make the measurement. A little built-in flash on the second camera, open but covered to not actually illuminate the subject, might be a handy way to force a corresponding delay that may be required to synchronize the two cameras to the first one’s flash.


 * Automated flash control supporting multiple flashes and selectable ratios. It’s particularly handy for off-camera lighting to illuminate a deep scene evenly and from similar directions in relation to a group of cameras to avoid inconsistent shadows, balanced directional light to accentuate three-dimensional shapes through shine and shadow, and bright, even light for small-aperture closeups without any bellows factor calculations. Just point the flashes, with any diffusers, reflectors, or bounce you like within the limits of their power and the camera’s sensitivity, in the right general directions, and let the camera’s through-the-lens meter measure the tone that the subject, its environment, and your setup create. Rebalancing lighting is straightforward with (ambient-light) exposure compensation, flash exposure compensation, and ratio settings.


 * Autofocus or at least electronic shutter operation. Autofocus will help you work quickly and avoid mistakes with two cameras at odd angles, one of which you won’t normally look through. Electronic timing is much more accurate than mechanical, for equal brightness side to side, and compatible with quick, consistent electronic and wireless shutter releases. But there are some two-camera mechanical cable releases, or you could couple two regular ones’ triggers.


 * Autofocus during recording and genlock or at least a workaround for synchronizing frames for video.
 * Shooting a still photo can synchronize Canon 5D Mark IIs.
 * The ability to use free firmware, such as Canon DSLRs’ “Magic Lantern”, might facilitate a future workaround for synchronization.
 * A flash burst before and after a desired sequence can mark corresponding frames in multiple recordings to guide sync adjustment.
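The flash-burst marking above lends itself to automation: extract a per-frame mean-brightness series from each recording with whatever tool you like, then locate the spikes. A minimal sketch, assuming one dominant, well-separated burst per clip and an arbitrary halfway-to-peak threshold:

```python
import numpy as np

def flash_offset(brightness_a, brightness_b):
    """Frame offset of clip B relative to clip A, from the first flash spike.

    Each argument is a sequence of per-frame mean brightness values.
    A frame counts as a flash if it is brighter than halfway between the
    clip's median brightness and its peak (assumes one dominant burst).
    """
    def first_spike(series):
        s = np.asarray(series, dtype=float)
        thresh = (np.median(s) + s.max()) / 2.0
        return int(np.nonzero(s > thresh)[0][0])

    return first_spike(brightness_b) - first_spike(brightness_a)
```

Shift clip B back by the returned number of frames before combining; a burst at both ends of the take lets you check for drift as well.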


 * Try a purpose-built 3D camera or attachment for video. One with separate lenses may provide a longer stereo base for more apparent depth and allow wider views because each side’s hardware doesn’t obstruct the other’s. An SLR’s resolution and low noise aren’t as important for video as for still photography since the pixels don’t hold still for peeping and none can yet even record the sensor’s full resolution at a high framerate. Shallow depth of field isn’t as important for 3D as for 2D photography since stereopsis conveys depth more directly.
 * Set as small an aperture as possible with lower-quality or auxiliary lenses, common with simpler cameras, to mitigate lens aberrations.


 * Returnable or resellable at a similar price if the project somehow fails.


 * Small. Two will be twice as heavy and more awkward. Some compact cameras have their lens openings at an edge or corner and if mounted—probably with a custom frame—with the lenses abutting one another, or even corner-to-corner for several very similar perspectives at once, would allow a very short stereo base suitable for macro photography. They could also take non-3D composite pictures (in HDR or at different focuses for instance) of objects at a distance where the difference in perspectives would be insignificant, as a simpler alternative to a beam-splitter. A set of such assemblies spaced further apart could provide the composite views in 3D.


 * Inexpensive. The assembly won’t have a baseplate facing down but instead two vulnerable top covers facing outward, and it will swing around and fall twice as hard. A camera might even work loose, drop, and smash itself.

Choose lenses

 * Choose two of the same model. Any shutter lag attributable to lens activity such as stopping down, and the actual effective focal length and thus magnification, will match closely.
 * Similar lenses of, or set to, about the same focal length can work. A program such as StereoPhotoMaker will stretch the pictures to align well with one another. But test mismatched lenses thoroughly before an important project to uncover any surprises.


 * Ultra-wide and tele lenses generally aren’t available on purpose-built 3D cameras, but can work very well. Beware of fat ultra-wide lenses, which won’t let the cameras attach directly to one another. Beware of physically-long, front-heavy tele lenses, especially if they don’t have tripod collars for additional support. Alignment is critical with tele lenses.


 * It’s easiest to use zooms at the ends of their ranges. It’s inconvenient and error-prone to set two cameras’ lenses to matching intermediate points and check that they haven’t been bumped away. Even a small mismatch in magnification can make a pair of pictures difficult or impossible to mentally “fuse” to a 3D image. (If you find yourself with this problem, enlarge and crop the wider picture to match the narrower one.) A picture from a high-resolution prime or mid-range zoom lens, even cropped and enlarged modestly, may be better than one from an all-in-one zoom with generally lower resolution and a usually too-long long end.
 * Inexpensive “plasticky” lenses’ stiction can hold ring-zoom settings better than expensive, smooth-turning lenses can.
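The enlarge-and-crop fix for mismatched magnification is simple geometry: crop the wider picture, centered, by the ratio of the focal lengths, then resample it up to the narrower picture’s pixel dimensions. A sketch of the crop-box arithmetic (the resampling itself would be done in your editor or an imaging library):

```python
def matching_crop(width, height, f_wide, f_narrow):
    """Centered crop box (left, top, right, bottom) that makes a picture
    taken at f_wide match the field of view of one taken at f_narrow."""
    if f_narrow < f_wide:
        raise ValueError("f_narrow must be the longer focal length")
    scale = f_wide / f_narrow              # linear fraction of the frame kept
    crop_w, crop_h = round(width * scale), round(height * scale)
    left, top = (width - crop_w) // 2, (height - crop_h) // 2
    return (left, top, left + crop_w, top + crop_h)
```

For example, a 6000×4000 frame shot at 35mm, matched against 50mm, keeps the central 4200×2800 pixels.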


 * Lens-ring settings can be mechanically synchronized. It’s easiest to synchronize zoom by setting each of the lenses to the same end of their range and synchronize focus by autofocusing on the same large object, and possibly best to synchronize each through electronic controls with the second camera’s active focusing area automatically diverging from the first’s to catch the same point on a close-up subject. But a ring on the lens you’re looking through can drive one on the one you aren’t through an added mechanical connection.
 * For manual settings such as zoom or, on a manual camera, aperture, you can supply the sometimes-substantial force to turn both lenses’ rings.
 * For powered settings such as autofocus, choose lenses whose relevant rings move forcefully under power (for the camera you’re looking through), are easy to turn in manual mode (for the camera you aren’t), and cannot move independently of the settings they control. Try a non-ultrasonic motorized lens, or a body-driven lens, especially with slow gearing.
 * Driving one lens with another could strain and damage the motor and gears. Stop immediately if you hear a motor struggle.
 * The links should avoid slipping on the lenses’ rings without having to stretch hard against them, which could strain their parts and the cameras’ mounts. Tripod collars or a brace between fixed parts of the lens barrels could relieve some strain. Try a toothed belt matching the longitudinal grooves on a pair of rings, or a belt or hinge-linked shaft stuck to each lens directly or with a collar at points where it would remain against their surfaces as they rotate. A flange on the belt could help it stay on a raised ring, and flanges added to the lens could help it stay on a flat grip area. Hood mounts could hold couplings for lenses lacking focus rings. Push-pull zooms could be synchronized with a belt that is wide to reduce transverse stretching and textured or glued to not slide back and forth, a broad, rigid link for the rings, or a fixed brace between any non-turning push-pull parts. A belt can be improvised from and attached to the lenses with non-stretching tape.
 * Match each setting before (or as) you couple it.
 * Focus can be critical and hard to synchronize precisely. Look through both cameras to match them initially—don’t rely on the positions of the focusing rings. Stop down a little in use to account for inconsistency in focusing due to variation and slop in the second lens that won’t be corrected by focusing it directly. Let the lenses focus independently unless you need to tie their focus together to avoid missing a small or distant subject with the one you’re not looking through.


 * Choose a wider lens than usual if you mostly take horizontal pictures. Mounting the cameras bottom-to-bottom and holding them with the lenses side-by-side gives vertical pictures. Rotating the assembly for a horizontal picture would mis-orient the images’ stereo base, so you’ll need to crop instead. A “crop factor” of 1.5 beyond that normally applicable to the camera follows from the typical 3:2 small-format sensor aspect ratio.
 * For wider horizontal or, sometimes, square views on a full-frame camera with smaller, cheaper, non-bulging filter-compatible lenses, you could use crop-sensor lenses compatible with full-frame mounts because the central frame areas actually used would be about the same size as cropped sensors and thus unaffected by vignetting.
 * For ultra-wide-angle views, the vertical image orientation is unusual for photos but might be more realistic than a horizontal one. One generally looks down to see where one is going more often than one turns to look past the limits of sideward peripheral vision. While the close foreground may not be very interesting, it does provide contrast and immersive depth.
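The 1.5 factor quoted above follows directly from the aspect ratio: a horizontal crop from a vertically oriented 3:2 frame can be no wider than the sensor’s short side. A sketch of the arithmetic with full-frame dimensions (any 3:2 sensor gives the same factor):

```python
def horizontal_crop_from_vertical(long_side=36.0, short_side=24.0):
    """Dimensions (w, h) of the largest horizontal frame, at the sensor's
    own aspect ratio, cropped from a vertically oriented frame, plus the
    extra linear crop factor that implies."""
    aspect = long_side / short_side        # 1.5 for a 3:2 sensor
    crop_w = short_side                    # vertical frame is only this wide
    crop_h = crop_w / aspect
    return crop_w, crop_h, long_side / crop_w
```

So a 36×24mm sensor held vertical yields a 24×16mm horizontal crop, and each lens behaves as if 1.5× longer, on top of any crop factor the camera already has.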


 * Generally avoid fast lenses. Fast lenses are less useful than in 2D photography because 3D photography can use depth rather than blur to separate a subject from its background. Although modest defocus may direct the viewer’s attention to the primary subject and more accurately reflect the appearance of other objects, especially through dilated pupils in low light, aperture patterns from defocused bright points at now-distinct depths around the subject may attract more and often undesired attention. Slow lenses’ lightness and cheapness are all the more important when you have to buy and carry two, and when a collision might crush one between the two cameras, another lens, and whatever you swing the assembly into. Stopped down, and with chromatic aberration corrected electronically, their quality can be close to more expensive lenses’.
 * Fast wide angle lenses’ good depth of field wide open could make them a good choice for low light.
 * Try a lens with a diffuse-edged aperture to soften out-of-focus highlights from circles to fuzzy points.
 * A 3D camera might be able to provide selective focus better and cheaper than a fast lens: a depthmap can be made from the stereo pair to distinguish parts to keep sharp from those to blur. The blur can be Gaussian rather than circular, for aesthetically “perfect” bokeh, need not increase with depth immediately or at the same rate as a real lens’s, and need not increase evenly away from any particular plane as a real camera’s: the subject’s nose, eyes and ears can be perfectly sharp, and the background perfectly blurred. Create the original stereo pair with focus stacking if you need some very close and very far objects sharp without camera movements. With a real aperture, background blur does not occlude and distract from a focused (or less-defocused) foreground subject as a result of defocus, since the foreground subject is itself a kind of aperture cutting off background rays that would overlap it. Replicate this effect by making areas closer to the camera opaque to contribution from light from further-back areas in proportion to the blur pattern of their own images (including, if you like, any area that is obstructed rather than defocused, as by a catadioptric lens’s secondary mirror) as the final image is assembled. For more realism, if not beauty, turn the bokeh into mechanical-vignetting half-moons toward the edges, and change its character with depth.
 * The GIMP depthmap plugin current as of mid-2011 matches nearby pixels rather than patterns between the two images, so it works best with just a few pixels’ width difference between each part of the two images and little noise. Open the images as layers, align them vertically, and align them horizontally at some medium-distance points to reduce the average amount of divergence. (To measure absolute depth consistently from picture to picture, leave horizontal alignment as it comes from the camera or match it for subjects at “infinity” or a known distance.) Divergence in pixels and noise can be reduced by reducing the resolution of the images from which the depthmap will be generated. (You can then enlarge the depthmap to guide edits to a full-size original image: a lack of precision in 'blur', for instance, will rarely be obvious.) Don’t get too close to the subject. A shorter stereo base would help, too, but fancy cameras’ own size tends to limit this: try a single camera shifted between exposures for a stationary subject, or a short-stereo-base 3D camera, a pair of small cameras, or a 3D accessory for a moving subject. The GIMP focus blur plugin can follow the depthmap generated (and then altered as you like). Since the depthmap is aligned to one side’s image, make another aligned to the other side’s if you want to precisely match guided edits to both halves of a stereo pair, or adapt the plugin to generate a depthmap aligned to each image from a single set of basic computations. Any additional software required to compile the plugins is generally available on Linux through the distribution’s package manager.
 * For defocused bright lights to show up as distinct bright blobs, you’ll need enough dynamic range to register their much-greater brightness in each color, rather than “burning them out” at the top of the scale with merely light-colored areas. Try your camera’s lowest ISO setting to use the low as well as high ranges of its sensitivity, or a composite HDR image. Having identified them as particularly bright, you could accentuate them further with computed “diffraction spikes” which aren’t depth dependent, but would be distracting if large and mismatched in a stereo pair. Or, even without converting the image or blur spots to a full 3D model to map for each eye, give the bokeh a rapidly-thicker-edge-inward look of a translucent or diffuse-edged sphere rather than a flat or too-tapered disk.
 * Multiple secondary images from a row, cross, ring or other arrangement of positions around a central “primary position” rather than just two images from adjacent positions could see around all the edges of foreground objects to admit background-blur contributions from bright lights behind their edges. The extra positions would also catch depth of positions some of their number miss from any given picture due to parallax and subject curvature.
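If you want to experiment with disparity estimation outside GIMP, the nearby-pixel matching idea can be prototyped as plain block matching. The sketch below is a deliberately tiny sum-of-absolute-differences matcher, not the GIMP plugin’s own algorithm, and assumes rectified, vertically aligned grayscale images; real tools add smoothing, sub-pixel precision, and occlusion handling:

```python
import numpy as np

def disparity_map(left, right, max_disp=8, block=3):
    """Per-pixel disparity by sum-of-absolute-differences block matching.

    Assumes rectified grayscale arrays: each left-image pixel appears in
    the right image shifted left by its disparity (0..max_disp pixels).
    """
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best_cost, best_d = None, 0
            for d in range(min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                cost = float(np.abs(patch - cand).sum())
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

As the text suggests, run it on reduced-resolution copies: the cost grows with image area times the disparity search range.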


 * Single-sensor stereo photography can provide perfect synchronization, although often with image quality reduced well below what one would expect from the smaller sensor area due to cheap dual lenses, additional and often imprecise optical elements, or, with a coded aperture, the correlation of divergence and defocus.
 * A coded aperture, whether having as components complementary colors as with the Vivitar “Qdos” lens whose sides and colors one’s brain can combine with the help of glasses (and software can convert to stereo pairs) or keyed shapes creating a picture whose defocused subjects can be backed out computationally, can avoid the problem of aligning long lenses extra-precisely. The aperture should generally lie in the same plane as the original aperture for best results. But, for a long lens including a mirror lens, adding the supplemental aperture as a cover on the front may be close enough for experimentation, with each part illuminating at least the center area well. Two separate colored apertures may work better than side-by-side colored halves of a single round aperture by resolving a defocused point into a pair of more-distinct divergent spots, but may interfere with a phase-detection autofocus system as the separate apertures may not correspond to the areas of the lens whose light that uses. Check by covering the other (generally there are two) that each one illuminates most of the image area rather than being obscured from it by mechanical vignetting. If it is, move the apertures closer to the center, or as close as possible with a mirror lens (one such as the Reflex-Nikkor 500mm f/8 with each part of the mirror illuminating each part of the image to produce uniform donut-like bokeh would be perfect). The original aperture must remain wide open so as not to obstruct parts of the supplemental coded one.
 * A colored aperture need only be partially exposed at a given image point to provide a recognizable image (although uneven illumination will affect color balance unless corrected); a shaped aperture should be fully exposed to preserve its shape. Check the area of acceptable illumination by taking pictures of point sources of light like Christmas lights or a grid of dots on a computer screen against dimmer backgrounds up close and far away, in and out of focus, to examine the character of the bokeh. If the lens exhibits mechanical vignetting with half-moon shaped bokeh with the standard aperture wide open, the coded aperture parts will have to be restricted to a more central area.
 * Most people shouldn’t take apart a good lens: reassembling it is difficult and aligning the elements may require special equipment. A cheap 500mm f/8 non-mirror lens would be a good choice for a project because it is very inexpensive, satisfies a high-magnification purpose regular stereo equipment may not serve well, and has a long focus which gives more leeway for placing a supplemental aperture (where the lens cap goes may work for the center of the image at least).


 * Generally avoid fat lenses. If the widest point on the bottom of a lens extends past the bottom of a camera, you’ll need spacers between the cameras’ baseplates to keep the optical axes parallel. This will increase the stereo base length, which the size of most DSLRs makes long enough already, and introduce more potentially irregular and deformable mating surfaces to misalign.


 * Look for a tripod foot. If it’s along the optical axis, you can use it to fix the cameras along two points, rather than just one. Unfortunately, its usual purpose is to hold up a lens too long and heavy to safely hang from the camera, so few wide-angle or small lenses have one or even a broad, sturdy, immobile place to mount a collar providing one.


 * Add a protective filter and a hood if possible. As the bulky pair of cameras bumps into things, they’ll protect the front threads often necessary to open a lens for maintenance, and absorb shocks that could affect the entire assembly.

Choose mounting hardware
You’ll need:
 * A threaded rod to fit the cameras’ tripod sockets. Most take the standard ¼ inch wide, 20 threads per inch “coarse” thread bolt size. The rod should be strong, to keep the cameras together; corrosion-resistant, to let you separate them; and smoothly finished, to preserve the hard-to-replace sockets and other accessories with which it may be used. It should have the usual right-handed thread continuously along its length, with each end tapered to screw into a fitting and neither sharply pointed. It should be a little shorter than the combined depth of the tripod sockets plus any pads or spacers: it should extend into both cameras to hold them together by the threads without bottoming out in both, which could damage the sockets or hold the cameras apart or turned at an awkward angle. A ¼-20 × ½” stainless steel setscrew would generally be a good choice.
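As a numeric check of the rod-length rule, here is a sketch with hypothetical socket depths and tolerances (measure your own, for instance with a toothpick and ruler):

```python
def setscrew_ok(rod_mm, socket_a_mm, socket_b_mm, shims_mm=0.0,
                clearance_mm=0.5, min_bite_mm=2.0):
    """Rough length check for the coupling screw. All figures illustrative.

    The rod must stop short of bottoming out in both sockets at once
    (clearance_mm below the combined depth plus shim thickness), yet be
    long enough to engage both sockets by min_bite_mm even when it sits
    fully seated in the deeper one.
    """
    too_long = rod_mm > socket_a_mm + socket_b_mm + shims_mm - clearance_mm
    too_short = rod_mm < max(socket_a_mm, socket_b_mm) + shims_mm + min_bite_mm
    return not (too_long or too_short)
```

With two hypothetical 8mm-deep sockets and 0.2mm of tape, a ½” (12.7mm) setscrew passes while a ¾” (19mm) one would bottom out.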


 * Spacers to fix the cameras’ alignment in three dimensions at points well away from the tripod socket bearing. The precise spacing needed depends on the orientation of and variation in the tripod sockets, and the orientation of the cameras about any other point of attachment such as tripod feet. “Invisible” office tape works well: it’s easy to attach and to remove with surface damage unlikely, thin enough to simply take up the slack in a gap of less than a single thread pitch rather than require a wider space allowing more play and room to lever one camera against the other, easy to build up in small increments whose irregularities will tend to average out, only moderately frictive to let the cameras slide together under force but then hold snug, and inexpensive.
 * For slightly larger gaps, cork or synthetic foam, which compresses within itself, would be better than rubber or solid soft plastic, which only unpredictably slips and bulges toward gaps. The spacers should match so that, spaced symmetrically around the tripod socket for equal leverage, they compress evenly and hold the cameras’ baseplates parallel. A small padded gap between the cameras may reduce damage from a mechanical shock, but don’t rely on it: the manufacturer probably didn’t expect the top of a camera to swing around facing outward or the tripod socket to take bumps from another camera hanging from it swinging into things. Self-adhesive cork cupboard door bumpers would generally be a good choice.
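The arithmetic behind the tape shims: a ¼-20 screw advances 1/20 inch (1.27mm) per turn, so the residual gap when the sockets first snug up is always less than 1.27mm. Assuming roughly 0.06mm per layer of “invisible” tape (an assumption; measure your own brand):

```python
import math

PITCH_MM = 25.4 / 20   # one full turn of a 1/4-20 screw: 1.27 mm

def tape_layers(gap_mm, layer_mm=0.06):
    """Layers of tape needed to take up a measured gap, rounding up.

    layer_mm is an assumed per-layer thickness; measure your own tape.
    """
    return math.ceil(gap_mm / layer_mm)
```

A measured 0.25mm gap would call for about five layers per skid, built up and tested a layer or two at a time.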


 * Temporary adhesive to hold the assembly together and aligned. Don’t use something that will mar the cameras or fix them together permanently in alignment that is probably not quite perfect initially and may change with stress and shocks. Adhesives generally have little tensile strength or toughness and may creep, so don’t rely on them to keep one camera bound to another but only to keep them aligned. Epoxy is sturdier than most, but, hardening by reaction, is almost impossible to remove: don’t use it unless the cameras are expendable and held firmly in perfect alignment while it cures.
 * “White glue” commonly used by children is a good choice, but like anything sticky, might mar or remove labels.
 * For a more permanent bond, try double-sided tape between the cameras themselves. The foam-core variety will bridge a gap between the cameras and can be sliced through with a string to free them. Stick it on each side of the bottom of one camera, toward the left and right to resist torque, and put a long strip of slick plastic similar to its backing strip on top. Fold one end of each strip over as you screw the cameras together. When they’re together, peel these cover strips back on themselves to expose the adhesive to the other camera; this avoids shearing a cover strip across the tape then, or shearing the camera across the tape beforehand. Use double-sided tape that isn’t thick enough to be compressed hard, so that you don’t have to adjust the cameras’ alignment again, now with the added difficulty of separating them, to account for the tape itself.
 * Or try inelastic tape pulled tight or a rigid plate with double-sided tape on the outsides of the ends of the cameras’ mating surface to keep them from twisting apart.


 * For fat lenses without tripod sockets: A broad, rigid but non-scratching spacer with a hole in the middle to go between the cameras’ aligned surfaces, with thin cushions such as cork cupboard door bumpers at the corners, to evenly separate cameras with wide lenses that don’t themselves provide another mounting point. Hard plastic could work well. Don’t mount lenses to each other except at a place designated for a tripod mount to avoid straining the camera, the lens’s body, or its moving parts with unexpected leverage.
 * A pair of battery grips coupled much as cameras would be directly could accommodate fat lenses and allow the cameras to be separated easily, at the expense of some rigidity and an exaggerated stereo base.
 * Chicago screws can easily attach repurposed camera mounting plate and tripod screw assemblies from battery grips. For best results, find mounting plates that include alignment pins to match cameras with them, and Chicago screws with shallow heads that allow the alignment pins to engage. Rivets with both ends flat could fit even flusher, but be more difficult to size and install and very difficult to remove.
 * Accessory or replacement baseplates with flat mounting areas, alignment pins, and thumbscrews similar to the kind on battery grips but removable or retractable—opposite offset sockets to accept matching connectors on another camera—would be great to quickly join cameras for part-time 3D use. Or, a separate coupling plate could hold both alignment pins and thumbscrews permanently. Adjustable pins or corner bearings, or even tape, could compensate for any imprecision in the mating parts.
 * Make these yourself by cannibalizing camera-attachment plates from battery grips and fixing together the parts that should overlap, aligning the plates to the cameras' required orientation, not to each other—very strongly and reliably, so a camera doesn't get dropped.


 * You could use a thin, inelastic sheet with cushions stuck on it or a sheet of padding such as cork or rubber foam as a spacer even with ordinary-width lenses to easily separate the cameras, have them free of stickers, and reattach them.


 * For lenses with tripod sockets along the optical axis: A threaded rod, such as a longer setscrew, whose length is equal to or a little less than the depth of a single tripod socket plus the distance between the cameras with the lenses mounted socket-to-socket and parallel; a nut to fit it; and some permanent-strength thread-locking fluid to glue the nut to the center of the rod. And, if the cameras will be far enough apart to accommodate them, two more nuts and washers to bear against the camera baseplates, holding them firmly against the threaded rod. Try nylon-ring locknuts. You might also need padded spacers for the sides of the facing surfaces of the camera baseplates if the assembly wobbles.


 * A “Z-bar” mount holding one camera right side up and one inverted to match up the typically shorter left sides in “landscape” picture orientation is popular, but being so long and typically unbraced can have too much flex, and getting the lenses anywhere near eye-width apart can be a challenge.
 * A Z-bar wider than the bottoms of the cameras and open in the center can allow them to abut one another and avoid extra width from the bar itself.
 * Cameras with shutter-release connectors other than on their short sides, such as their right sides, avoid the connector plugs forcing the cameras further apart.
 * Low-profile shutter-release cables can reduce the separation distance when they must go between the cameras. You can make one by attaching the cable to the plug at a right angle, with as little protective covering as needed to hold the joint together securely against being pulled out (a covering that can bend over may help).
 * Cameras with shutter-release connectors vertically aligned with the centers of their lenses can simply be plugged into one another with short double-ended male adapters. These can be made from replacement plugs.
 * The low-profile cable or double ended male adapter can fit through a gap in the Z-bar.
 * A Z-bar or components thereof can be bent from inexpensive steel bar stock (which is very stiff yet ultimately malleable) and drilled, milled and tapped with holes to accommodate tripod mounting screws, base-leveling screws, alignment pins if applicable, a central pass-through gap, a strap, quick releases (preferably only accommodating removal of the cameras in directions that will not damage what may be plugged into them) and much, much more.
 * A customized one firmly bracing the bottom of the cameras and containing holders for accessories such as a smartphone for control and connectivity and/or braces for lenses could be 3D printed—partly from models captured with a stereo camera!


 * Cameras with their lenses roughly vertically centered, such as “mirrorless” ones without top viewfinders, can be easily and securely coupled at their small ends by bars across the tops and bottoms, each bar secured to the tripod socket of one camera and a tripod-to-flash-shoe adapter on the other.

Couple the cameras
Work over the middle of a big table or sitting over a carpet in case you drop something. Have a body cap or a small, inexpensive lens with a lens cap (unless you’re coupling bigger lenses by their tripod feet) on each camera, and caps on any unmounted lens, so that you can manipulate it freely with less risk.


 * Clean the camera baseplates. A cloth dampened with alcohol or water and just a little detergent should work well.


 * Attach the cameras to each other as follows—or, if you’re coupling the lenses at their tripod feet with the cameras to be further apart, attach the tripod feet to each other, reading “camera” as “tripod foot”, and leave the cameras free for the time being. (Attaching tripod feet as well as cameras would be a pain, and just attaching the cameras makes nice pictures with small, moderately-long-focus lenses. If the tripod feet are somehow to be further apart than the cameras, connect the cameras first, then connect the lenses following the procedure described below for connecting tripod sockets after tripod feet. If the tripod collars come off the lenses, you might be able to attach the collars to each other before putting in the lenses, each camera bearing its lens and one bearing a connecting rod. You might also be able to mount the cameras to the lenses after attaching tripod-collared lenses to one another by turning the lenses within their collars.)
 * If your lenses have tripod collars whose openings are arranged so that they can attach to already-coupled cameras, it would be convenient to have an extra set permanently and firmly coupled, as with holes and bolts or epoxy, for that purpose.


 * Screw the screw into one camera finger-tight. About half should protrude.


 * Position the other camera’s tripod socket on the screw with both cameras facing in the same direction to see how the baseplates pair up. Notice the corners of the mating area.


 * To shim the camera baseplates with tape, build up front-to-back “skids” on each side of the mating area. Position these symmetrically, toward the edges of the mating area. Tape should cover, or face, projecting points such as screw heads, flanges around them, and foot-like bumps. These points focus force, possibly holding the cameras apart asymmetrically if they only bear against the tape on some corners and possibly scraping the opposing baseplate if tape skids missing them are not thick enough to hold them off it. The tape skids should extend past the front and back of each camera's baseplate so that the tape strips’ ends don’t snag as you turn the two cameras together. Trim off any great excess with scissors when you finish aligning the cameras.
 * Build up the tape skids evenly. Test-fit them by turning the cameras together as you go. Note what kind of tape you used and, as you go, how many layers you used in each place so that you can easily recreate the tape shims after wear or removal.
 * Projecting points on the cameras will tend to bite into tape they turn against directly, making it uneven. Prevent this by building up pairs of tape skids on both cameras, facing each other. Invisible tape’s slightly-rough surface will also stick against white glue to help you tack the cameras in position once they’re aligned. The total number of tape strips separating the cameras at each place should be about the same, so you'll need fewer strips of tape on each camera.
 * If the cameras have uneven bottoms, try building up each corner of the mating area with tape skids running diagonally to cover it separately.


 * To shim the camera baseplates with cork bumpers or similar pads, stick one at each corner of the mating area on one camera to be well covered by the other camera when you screw the two together. Consistently cover or avoid projecting points on the two cameras to maintain even separation.
 * Since the corresponding side of each camera’s bottom is identical, this will place them symmetrically for equal leverage and equal compression at the end of each diagonal set of contact points, automatically aligning the cameras in the plane of their baseplates. The front and back sides of the camera bottoms often aren’t identical, but are generally close, and placing the bumpers all the way out to the edges reduces leverage on the tripod sockets as much as possible.
 * All of the cork bumpers should be stuck to the same camera so that the two cameras don’t each have uneven projections to interfere as they come together.


 * If you’re using a spacer, attach the bumpers to the corners of the area between the cameras on both sides if the spacer isn’t itself padded, and set it over the screw between the cameras.


 * Screw the second camera onto the first. Notice that the lens mounts spin in and out of alignment with every turn of the screw. Stop when the lens mounts both face in the same direction and the spacers are compressed firmly enough that the cameras do not turn without deliberate force. With soft spacers like cork, this may require a turn past the point at which they first contact the other camera. You may have to use a non-scratching tool such as a plastic utensil to gently pinch down their edges or hold them in place to rotate the cameras past that point.


 * Attach a strap to the assembly. You can attach it to one of the cameras, to a single lug on each camera at whichever end of the assembly you want to be the “top” (you’ll hold the cameras side by side, each vertical), or to the “bottom” lug of the camera you’ll look through and the “top” lug of the other one to reduce the strap length needed to let you raise the cameras to your eye and keep the strap out of your face while still allowing the cameras to hang at rest facing forward and ready to use. Make a habit of putting on the strap as you pick up the assembly: it’s unwieldy, particularly with flashes and accessories hanging off at odd angles; the grips are in the wrong places; and it will fall twice as hard. A wide or padded strap or harness would be most comfortable, but avoid unprotected “quick-release” connectors which can all-too-quickly release the camera if you grab the strap the wrong way.
 * Attaching the strap to the “bottom” of one camera and the “top” of the other makes the assembly roughly radially symmetric. Keep each camera’s pictures’ side consistent by checking that the strap is not “inside out” as you pick up the cameras or by adding an identifying mark such as a sticker to one.
 * A strap with one end on each camera will not keep the assembly from dropping to one side or the other if the cameras come apart. A short connection between the two cameras’ other lugs would prevent the loop from opening if the tripod sockets separate.


 * If you connected the tripod feet first and have the camera tripod sockets to connect (or closer-together camera sockets first and have tripod feet to connect, in which case read “tripod foot” as “camera” and vice versa), connect them as follows:
 * With the tripod feet snug, hold the cameras parallel and measure between their tripod sockets to determine how far apart the cameras will need to be. Get a threaded rod of that length plus the depth of a single tripod socket, or just a little shorter.
 * With the rod separate from the cameras, screw a nut, no thicker than the distance to come between the cameras, onto its center with some permanent-strength thread-locking fluid. Allow the fluid to cure thoroughly and remove any excess so you don’t glue the rod into a camera.
 * If the cameras will be far apart, screw an extra nut down each side to use to lock the rod in place and slip a washer over each end for it to bear against.
 * Turn the cameras apart about the tripod feet so that the bottom of each is accessible. Screw the threaded rod with the nut on it into one loosely.
 * Bring the cameras back into alignment. Don’t scrape the rod against the bottom of the other camera. Slip a wrench between the cameras and unscrew the threaded rod from the first to space them correctly, within a single thread pitch.
 * Begin to screw the threaded rod into the second camera to fix the distance between the two and hold each. Stop when the middle of the threaded rod is in the middle of the gap between the cameras so that half of the extra length is in each.
 * To preserve the screw threads when cutting threaded rod, screw a nut onto each side of the place to be cut, then twist each over its side of the cut and off to force down burrs. Then smooth around the cut end with a file.
 * If you have nuts and washers on each side of the threaded rod, hold the rod in place and gently tighten the nuts and washers against the cameras to lock the rod and cameras together.
 * If the assembly wobbles from side to side, as it might if the cameras and lenses are big and heavy in relation to the mating area of the tripod feet, try padded spacers at the sides of the facing surfaces of the camera baseplates. It would be easiest to slip these in after the baseplates are connected. Try high-friction padding such as cork, or white glue, to keep them in place.
 * If the lenses are securely connected by a screw, spacers and double-sided tape could be an easy alternative to another screw to hold the cameras in alignment. Automotive double-sided tape is very strong as tape goes, but might remove paint or leave a residue.
 * Cameras already connected by their lenses could also be connected by a short threaded rod such as a setscrew in each and a tubular “coupling nut” that can be extended partially off of one screw and onto the other to bind the two.


 * Don’t mount lenses together by any moving part, usually including the front lens barrel, nor, preferably, anywhere else but a tripod foot or collar attachment point designed to accept and distribute the force. The leverage might damage something.

Align the cameras
The cameras will likely need adjustment to align in two main degrees of freedom: rotation about the tripod screw, and tilt toward or away from each other (“toe-in” or “toe-out”). Mis-rotation leaves a strip at the top of one image and the bottom of the other without matching points in the other image, which will have to be cut off. Toe-in or toe-out can cause excessive differences in divergence between objects at different depths, making the pictures hard to “fuse”, though this is potentially useful for measuring depth precisely. It can even cause the cameras’ views to overlap in only a narrow “tunnel” or miss each other’s fields of view entirely. A little toe-in can, however, help match fields of view for macro photography.


 * If you connected the cameras and lenses at multiple far-apart points, such as the camera socket and tripod foot along with side-to-side padding, they should be firmly aligned. Just confirm that they’re not tilted toward each other along one side of their optical axes by side-to-side wobble or uneven padding, or toed in or out significantly by connecting the cameras (or the lenses, if those are further apart) with too many or too few screw threads in between.
 * If you connected just the cameras, balanced pressure from the spacer pads about the tripod-socket connection should keep the optical axes in roughly parallel planes. You just have to apply a little pressure to the cameras to break stiction and pivot them so the lenses line up. Use relatively long lenses for more precision, and sight the front edge of one past the other. Look for a uniformly thin margin. Since the camera baseplates’ mating surface is generally wide, but not deep, toe-in and toe-out can be more of a problem.
 * Don’t push any extendible part of a lens that doesn’t have a sturdy grip ring on it, or push hard on any movable part of a lens. Even tough-looking barrels bear against miniature motors, gears, and other delicate parts.
 * Alternate, possibly more precise method: Take off the lenses, check that the front surfaces of the lens mounts are flat without protrusions other than spring-loaded parts meant to retract, and press them against a broad, hard, flat, smooth surface to align them. This would be easiest with cameras whose prism assemblies do not extend forward of the lens mounts, typically those without built-in flashes.


 * To check alignment, synchronize the cameras and take a picture of something detailed and at least several feet away so that the precise alignment of the cameras, not just their relative positions, controls their fields of view. The vertical field of view should match. The horizontal field of view should differ at any distance only by the constant parallax between the two lenses: usually three to four inches, but measure it from center to center with the middle part of a ruler—not an end, which could tip in and scratch something—laid flat across the preferably-capped fronts of the lenses.
 * One option is a subject far enough away that the separation between the cameras would not be expected to cover even a pixel, and brightly lit for a sharp exposure, such as a distant landscape or skyline: just zoom into the pictures and see if the edges match.
 * Another is a yardstick lying horizontally across and out one side of the cameras’ field of view. Take a picture from several feet away and check the markings on the yardstick toward the edge from which it exits each picture for correct parallax, toe-in, or toe-out. In these or other pictures, check the margins between a horizontal line and the tops or bottoms of the pictures for vertical alignment, and the orientation of vertical and horizontal lines in each frame for rotational alignment. Including a grid such as a window in each frame makes checking vertical and rotational alignment easy.
 * The alignment doesn’t have to be pixel-perfect. There may be enough variation and play in the cameras and lenses that it can’t be. Good alignment makes the pictures more comfortable to view, gives a more uniform and accurate sense of depth, and ensures that the cameras take pictures of the same thing—excepting parallax—with very-long-focus lenses.
 * Some lenses, particularly autofocus lenses, have some play in their barrel mounts. Lenses could even be a little misaligned. If you have multiple sets of lenses to interchange, align your cameras using the long ones. They’re most critical to match because they will magnify alignment problems. They’re also most likely to be straight because minor misalignment would magnify other problems for general use: calibrating the cameras to them should align the cameras well in an absolute sense, for the least aggregate error with other lenses’ presumably random small misalignments from desired straightness.
 * To stop worrying about imprecision, observe that you can often get a nice-looking 3D effect by simply hand-holding a camera a little to one side for a second picture: try gripping a compact camera firmly in both hands, squeezing the shutter button lightly with a “free” finger to not twist the camera as you take a picture, and shifting your weight from one foot to the other between shots for this “cha-cha” method. Unfortunately, this inexpensive trick doesn’t work for action, and doesn’t work well for big scenes where something will move between pictures.
 * For critical alignment in all directions, try a stiff, finely-adjustable shim like a tripod leveling plate. But, for technical measurement rather than ready-to-use realistic photography, one would probably want a long stereo base rather than a human-like one, and a compact package wouldn’t be so important.
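The yardstick test above can be reasoned about numerically: with truly parallel optical axes, the offset between corresponding readings equals the lens separation at every distance, while toe-in subtracts a term that grows with distance. A short Python sketch, where the 3.5-inch base and quarter-degree toe-in are illustrative assumptions:

```python
import math

def frame_offset(base, distance, toe_in_deg=0.0):
    """Offset between corresponding points in the two frames, in the same
    units as `base` and `distance`. Parallel axes (toe_in_deg == 0) give a
    constant offset equal to the stereo base; toe-in (per camera) shrinks
    it in proportion to distance."""
    return base - 2 * distance * math.tan(math.radians(toe_in_deg))

BASE_IN = 3.5  # assumed lens-center separation, inches
for d_ft in (3, 10, 30):
    d_in = d_ft * 12
    print(f"{d_ft:>2} ft: parallel {frame_offset(BASE_IN, d_in):.2f} in, "
          f"0.25 deg toe-in {frame_offset(BASE_IN, d_in, 0.25):.2f} in")
```

Even a quarter degree of toe-in per camera erases most of the offset by thirty feet, which is why a distant subject makes the most sensitive test.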


 * Adjust rotational alignment by simply turning the cameras against each other. (This won’t be possible, but shouldn’t be necessary, if the lenses are connected too.)


 * Adjust toe-in or toe-out by adjusting the shims between the cameras. (If the lenses are connected too, instead disconnect the further-apart tripod sockets and reconnect them leaving more or fewer screw threads between them.)
 * Compensate for subtle toe-in or toe-out with layers of tape running across the front or back, respectively, of the mating area. Cork bumper or similar spacers can be stacked to correct severe misalignment, built up with stickers or paper and just enough glue to attach them without stiffening their airy structure, or shimmed with tape attached to the surface that will face them.
 * Adding layers of tape to both the front and back, or likewise to the sides, of the mating area compresses the tape and so reduces a single layer’s marginal contribution to the cameras’ angle of alignment: four layers in front and five in back may cause less toe-in than two in front and three in back. This can help align tele lenses precisely, but the cameras’ fit should be no more than snug: the pressure on the tape continually strains the tripod sockets and baseplates.
 * Compensate for more complex misalignment by building up the too-close corner of the mating area on its own. A tape skid diagonally across the corner would work well.
 * Adding spacers between the cameras in a few places to align them also increases the overall stress on the connectors holding them together. If the cameras are hard to twist together after they have been aligned, reduce the spacing at each corner, generally equally, for identical alignment and a snug but not forceful fit. If there are front-to-back tape skids on each side, just use fewer strips in each.
 * Tape edges perpendicular to the cameras’ paths past one another as they screw into alignment tend to catch and curl up on irregularities and by their own adhesive. Once you figure out the number of layers of tape needed in each location, clean and retape the cameras with width-running and diagonal tape applied first, under front-to-back “skids”. Record the number of layers of tape and arrangement that aligns the cameras well to easily replicate it after you may have to separate the cameras in the future.
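To get a feel for how fine this adjustment is, the toe angle contributed by one extra layer of tape can be estimated from its thickness and the front-to-back span between the shim contact lines. Both figures below are assumptions; uncompressed invisible tape runs on the order of 0.06 mm.

```python
import math

TAPE_MM = 0.06   # assumed thickness of one layer of invisible tape
SPAN_MM = 40.0   # assumed front-to-back span between shim contact lines

def toe_per_layer_deg(layers=1):
    """Toe angle introduced by extra tape layers at one end of the span."""
    return math.degrees(math.atan(layers * TAPE_MM / SPAN_MM))

def aim_shift_mm(distance_mm, layers=1):
    """How far that angle swings the camera's aim at a given distance."""
    return distance_mm * math.tan(math.radians(toe_per_layer_deg(layers)))

print(f"one layer: {toe_per_layer_deg():.3f} deg, "
      f"{aim_shift_mm(3000):.1f} mm shift at 3 m")
```

A single layer tips a camera by under a tenth of a degree, swinging its aim only a few millimeters at room distances, which is why tape suits fine toe correction while stacked cork bumpers handle coarse errors.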


 * You can lock the rotation of the two cameras with a dot of white glue on each of the two cork pads, one toward the camera fronts and one toward their backs, that slide under the cameras last as you screw them together. Satisfy yourself that the cameras can be well aligned and hold somewhat stiffly against each other. Then twist them apart to expose a tape skid or the first cork pad on each side, check that the corresponding surface on the other camera bottom is free of gaps, and apply a dot of white glue to the center of each. Align the cameras and set them gently to dry without disturbing their alignment. (Don’t twist them by the strap.) If the glue sits between layers of tape or other non-porous surfaces, apply it thinly and allow it to dry for a day or two, because the moisture will have to escape through the tape’s edges and by diffusion through the tape itself. The white glue will hold the cameras against routine bumps, but a little force will break them loose and gentle scraping or rubbing should get rid of the residue entirely.

Configure the cameras
Set everything that doesn’t automatically adjust the same way, adjust the automated systems to be as likely as possible to reach the same results on each camera, and keep track of the pairs of images by matching the file numbering. In particular, match the following settings, and try setting certain ones as suggested.


 * Zoom: Manual-zoom lenses’ settings are most easily matched and checked at one end or the other of their range. A piece of tape could hold a carefully-matched intermediate setting. Use inelastic, low-residue tape like cellophane or masking tape.


 * Some compact cameras’ lenses power-zoom in consistent, easy to match steps.


 * Focus mode: As usual, use single-shot or “one shot” generally, continuous or “AI servo” for fast-moving subjects, and manual focus when autofocus fails or for critical focus on static subjects.
 * Manual-focus mode synchronizes the cameras most consistently because the lenses take the same amount of time—none—to focus before a shot.
 * Single-shot mode synchronizes the cameras well if they’re allowed to prefocus before a shot: the camera will leave them as they are. Set each camera to beep when it finishes focusing, or just listen for the motors stopping.
 * Continuous mode may synchronize the cameras less consistently. One or the other may take a fraction of a second to stop focusing before a picture. The subject may not move in the interim enough to blur or arrive at a noticeably different position in each picture, but the difference can be enough for one camera to miss the other’s flash at a high sync speed.


 * Focus point: Multi-point autofocus generally chooses the closest point in a wide central area. It’s perfect for catching a moving subject without much else around. But it can be unpredictable with multiple objects close in the frame to the primary subject, and you won’t know what the camera you aren’t looking through has focused on. So, for general use, set each camera to use single-shot mode and the middle focus point, half-press the remote shutter release to focus them on the primary subject—which should occupy a significant area in the frame so that the center of each camera’s frame will be sure to catch it—and keep the release half-pressed to hold focus as you reframe.
 * To maintain continuous autofocus on an off-center, moving subject, take the cameras’ rotation into account when selecting the off-center autofocus points. When each camera is held upright to adjust its controls, select “opposite” autofocus points (radially symmetric about the cameras’ connection) on the two so that when they are held side by side to use, each will focus on the upper, lower, left or right side of the scene, as the case may be.
 * “Live view” autofocus generally uses contrast detection only and so is slower, but would let you confirm on the displays that both cameras have focused and on the right thing before you take the picture. It may cover the entire frame without gaps, keeping you from missing a small subject with the second camera especially if it doesn’t have other things nearby to catch the focus system’s attention. Try face detection to choose the focus point if your subject has a face.
 * Both cameras can benefit from a steady autofocus assist light that some flashes and wireless flash controllers can project.


 * Sensitivity (“ISO”): Set the same one, or the same automation mode, on each camera. Set a low one to measure gradation in bright light as well as dim light for great dynamic range—enough, on some cameras, for tonemapping often associated with composite HDR pictures, but in a single exposure fully compatible with moving subjects and flash.
 * The eyes and brain adapt to bright and dark areas of a view rapidly, concentrating on color and making the areas look more similar in tone than they are. Mild tonemapping can make a picture not only prettier but more realistic-looking.


 * Exposure: Set the same automation mode, or the same manual exposure, on each camera.
 * Set the exposure-selection steps as far apart as possible, for instance to a full stop or half stop instead of a third stop, to reduce the chance that the cameras will make images of different brightness by choosing exposures on different sides of a fine line.
 * Exposure compensation or bracketing must be set, and later turned off, on both cameras.
 * If two otherwise identical cameras report a consistent difference in exposure or one camera consistently automatically makes brighter pictures than the other, set exposure compensation to compensate for the meter.
 * A camera to which a flash is attached may change exposure settings automatically. For instance, the shutter speed may be decreased to the flash-sync speed—you may need a still lower one to sync both cameras to the same flash—or the shutter speed may be increased to one generally safest for casual hand-holding. Confirm that exposures still match after you attach a flash.
 * If the cameras don’t provide equal exposures, as you can check from the data display in the LCDs, you may have to use manual exposures. Use the meter for assistance on one camera, then set the other to match. On a digital camera, err a little toward underexposure to avoid bright, empty “burned out” areas of off-the-scale highlights.
 * Precisely matching exposure is not necessary. Small differences in exposure and color can be compensated for in post-processing, even automatically matched by StereoPhotoMaker.
 * Generally have StereoPhotoMaker match the exposure and color to the camera you’re looking through, since you’ll probably pay closer attention to setting it correctly and have been likely to notice and fix it when it was set wrong before taking a picture. But if the flash is on the other camera, match exposure and color to that camera where the one you’re looking through failed to synchronize with the flash burst.
 * Small differences in depth of field or shutter speed at a given exposure are typically not noticeable, but set aperture-priority (or manual) mode if unusually great depth of field or defocus is important to the picture. Set shutter-priority (or manual) mode if stopping motion or allowing a particular amount of blur is important. Freezing motion with flash or perfect synchronization with a purpose-built camera or twin-lens attachment for a single sensor would best provide very precise synchronization.
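The step-size advice above comes down to rounding: each camera snaps its metered exposure to the nearest selectable step, and the cameras disagree only when their slightly different readings fall on opposite sides of a step boundary, which coarser steps space farther apart. A sketch with made-up meter readings about an eighth of a stop apart:

```python
def chosen_ev(metered_ev, step):
    """Exposure the camera actually sets: metered value rounded to a step."""
    return round(metered_ev / step) * step

cam_a, cam_b = 11.12, 11.21   # hypothetical metered EVs from the two cameras
for step, name in ((1/3, "third"), (1/2, "half"), (1, "full")):
    a, b = chosen_ev(cam_a, step), chosen_ev(cam_b, step)
    verdict = "match" if a == b else f"differ by {abs(a - b):.2f} EV"
    print(f"{name}-stop steps: {a:.2f} vs {b:.2f} -> {verdict}")
```

Here the readings straddle a third-stop boundary but not a half- or full-stop one. Of course, if the readings happen to straddle a coarse boundary instead, the mismatch is larger, so matched manual exposure remains the surest route.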


 * Exposure metering pattern: Use wide-area evaluative or averaging metering, not spot metering, because the cameras’ overall views will be similar but the objects at any given point in their frames will generally differ slightly.


 * Filters: If you use them, match them.
 * If you use polarizers, match their rotation. This is easier with lenses whose front elements don’t rotate as they focus or zoom. Look through one polarizer and then the other at a specular reflection to see if the directions of their polarization correspond equally to the markings on their rings. If they don’t, add marks or stickers to which they do.


 * “Drive” mode: Generally use single-shot mode. In continuous-drive mode the cameras’ timing will likely drift apart from shot to shot too much for them to share a flash, though perhaps not so much, at least initially, that the two sides’ pictures stop matching well enough to look good.
 * A flash may delay the camera to which it is attached in order to recharge once its capacitors are depleted.
 * If you confirm that the flashes won’t overlap, with all but the first shot on the second camera consistently not being illuminated at all by a flash on the other and vice versa, try flashes or radio flash triggers on both cameras. You’ll need a powerful flash running at reduced output to have enough stored power for a rapid burst of exposures.
 * For HDR, set the cameras to take all of the bracketed exposures with a single press of the shutter release. The cameras will have less time to move between them and the subject shouldn’t move since the object is to make multiple different-brightness exposures of the same scene. Flash generally isn’t used because exposure bracketing usually varies the shutter speed for matching depth of field, and flashes are so quick that their contribution to exposure varies instead with aperture. For a stationary subject, you could support the camera and vary the flash power along with the shutter-speed adjustment, or use neutral-density filtering instead of different shutter speeds, to evenly vary ambient and flash exposure between frames.


 * White balance: Set the same one on each camera. It’s usually best to avoid “automatic” both because the cameras could respond to different color mixes in their slightly offset images and because guessing at incident light’s white balance from reflected light as cameras generally do often reaches a result that is neither accurate nor pleasing. If the cameras’ white balances consistently mismatch, a custom white balance setting on one or both could match them. But small mismatches probably won’t be noticeable and can be adjusted in post-processing, including automatically by StereoPhotoMaker.


 * Other in-camera corrections and adjustments: match settings for things such as chromatic aberration, distortion, sharpness, brightness, and contrast.


 * Try fully-automatic (“green”) mode or a scene-type-specific setting on each camera in lieu of other specific settings if you like, but pay attention to the image data in the picture previews to be sure the cameras chose matching settings on their own.


 * File type and quality: Choose full-size ready-to-use files along with raw files for maximum convenience and quality, at the expense of buffer capacity for burst shooting. But JPGs work as well for stereo photography as for regular photography.


 * File numbering: It’s often better not to reset this, and not to configure the cameras to reset their file numbering whenever the memory cards are formatted, even in an attempt to match each camera’s numbering.
 * Multiple files with the same names from different sessions can be harder to keep track of.
 * Multiple files with the same names can slow workflow with StereoPhotoMaker (as of early 2012). An operating system will generally not allow files with the same name in the same directory, StereoPhotoMaker’s “multi conversion” batch processing to generate the final pictures for viewing generally works on one directory (or one for each side’s pictures) at a time, multiple instances of StereoPhotoMaker may not run well together, and processing hundreds of pictures can take hours. If all of the photos to be processed, even from different sessions, can go in the same directory (or at least copies of them can go in one temporarily, as each session may have a useful permanent directory name), StereoPhotoMaker can be set up once for all of them rather than having to be periodically set up again to work on a new batch.
 * If one camera’s file numbering can be advanced to meet the other’s without actually taking lots of pictures, which would wear the shutter, corresponding pictures can be given similarly-numbered names without often reusing filenames among a photo collection. Lacking this functionality, you could take a few pictures to bring the last two digits of the corresponding filenames into sync, making them easier to pair later.
 * If the file numbers get out of sync during a picture-taking session, usually due to bumping a shutter button on one camera inadvertently taking a picture with it on its own, you can take a picture with the other camera to resynchronize them.
 * Covers taped over the shutter release buttons could keep you from bumping them.


 * Time: Set each camera’s clock to precisely the same time simultaneously. This will often synchronize their internal clocks to within a second or two, even if only minutes can be set directly. Matching timestamps make image pairs easier to pick out when their filenames don’t match, as will happen if the cameras’ file numbering is not reset between memory cards, or when one camera alone deliberately or inadvertently takes an extra picture.
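With the clocks synchronized, pairing the two cards’ files can even be automated by matching timestamps. A sketch under assumed data (the filenames and times below are hypothetical; real timestamps could come from file modification times or EXIF data):

```python
def pair_by_time(left, right, tolerance_s=2.0):
    """Pair each left-camera file with the nearest-in-time right-camera file.

    `left` and `right` are lists of (filename, unix_timestamp) tuples.
    A left file with no right file within `tolerance_s` seconds is paired
    with None, as after an extra picture taken by one camera alone.
    """
    pairs = []
    for name_l, t_l in sorted(left, key=lambda fl: fl[1]):
        best = min(right, key=lambda fr: abs(fr[1] - t_l), default=None)
        if best is not None and abs(best[1] - t_l) <= tolerance_s:
            pairs.append((name_l, best[0]))
        else:
            pairs.append((name_l, None))
    return pairs

# Hypothetical filenames; the right camera's clock runs about a second behind.
left = [("L_0101.jpg", 100.0), ("L_0102.jpg", 160.0)]
right = [("R_0007.jpg", 101.2), ("R_0008.jpg", 161.1), ("R_0009.jpg", 300.0)]
print(pair_by_time(left, right))
# [('L_0101.jpg', 'R_0007.jpg'), ('L_0102.jpg', 'R_0008.jpg')]
```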


 * Set each camera to display each picture briefly, with its image data, after a shot. Check that the exposure and other data matches, that each was fully illuminated by flash if you used it, and, periodically, that the file numbers correlate. If the images look close, rely on the data displayed rather than nuances you may see: the displays may be imprecise and their apparent tone likely shifts with viewing angle.

Synchronize the cameras

 * Attach a wireless shutter release receiver to each camera’s port.
 * The body likely has a foot to stow in an accessory shoe, which is a good place for it when you don’t need a flash.
 * To use a flash, just tuck the receiver body somewhere, such as between the lens bases, if it’s small and light. Put the flash on the camera you’ll hold so it balances the weight of the other camera, rather than on the other camera, where it would aggravate the imbalance.
 * Or, make another temporary attachment point somewhere out of the way, such as a spot on the front of a camera body without controls and not normally held or a spot on a flash if you use it often, with hook-and-loop fastener tape. Put the soft loop side on the camera so the camera doesn’t snag things when you’re not using the wireless receivers.
 * Certain Canon cameras with “C3” style 3-pin remote-release sockets can connect directly with a double-ended cable, making the wireless releases unnecessary.
 * Nikon cameras with 10-pin connector ports, often covered by screw-in caps, can be synchronized by connecting their ports with a “MC-23” or compatible connector cable. Half-pressing one camera’s shutter button will tell them both to focus and fully pressing it will tell both to trip their shutters. Use the same model of cameras to match their internal delays as precisely as possible, since so far these are not consistent between models or adjustable let alone automatically configured when the cameras are connected.
 * Unfortunately Nikon has only provided the 10-pin connector on relatively big, expensive cameras and its cameras’ usability has been limited by a lack of open-source firmware and even “crippling” such as reduced configurability of less-expensive models’ electronic functions and encryption of raw image files.
 * Certain other cameras may be able to be synchronized by splicing their remote-release connectors or connecting them, often most easily by submini stereo audio adaptors commonly used for generic two-stage shutter releases and an audio Y-cable, to a single remote release.
 * Put on the strap, pick up the cameras, turn them and the wireless receivers on, and steady them side-by-side with both hands without bumping the shutter buttons. Don’t bump the zoom settings: one hand on each camera is often best. Pick up the wireless transmitter and turn it on, too.
 * A very small wireless transmitter could be attached to one of the cameras. Don’t rely on it as part of your grip on a camera unless it’s attached very securely.


 * Half-press the button to focus the cameras.


 * When the lenses are focused in single-shot autofocus mode, when the cameras have them tracking the subject in continuous autofocus mode, or immediately in manual focus mode, fully press the button to take the picture. Both cameras’ shutters should trip essentially simultaneously.
 * This might also adequately synchronize some video DSLRs that lack genlock, for a limited time at least, by starting recording simultaneously or tripping the shutters to reset the recording process.
 * Some wireless shutter releases will not transmit a half-press if the button is simply released halfway after a shot but instead require the button to be released fully, then half-pressed again.


 * Check synchronization with a non-dedicated flash on one camera—the camera could slow down to trigger and measure a preflash with a dedicated flash, rather than simply trigger a generic one—or by taking pictures or video of a cathode-ray tube television or monitor at a high shutter speed and comparing the progress of the beam’s scan in each picture.
 * The two cameras’ pictures don’t have to be perfectly synchronized for a pleasing 3D picture. Subject motion that would make an exposure of a few thousandths of a second look fuzzy won’t make two exposures separated by such a time different enough to not “merge” as a single image with the proper depths. And, if the pictures are primarily illuminated by a single flash or a set of flashes triggered together, they’ll consist mostly of the subject as it was at the time of the flash, with much-less-conspicuous veiling fuzz from its movement before and afterward.


 * Focus points on touchscreen cameras can be synchronized manually, especially during video recording, fairly reliably by tapping simultaneously with capacitive styli, either mounted the screens' separation apart or held chopstick-style, or perhaps by redirecting one camera's focus output to both lenses. The latter may require some mount-area surgery for infinity focus, and would best reset to normal operation when the synchronization hardware is unplugged or turned off.
 * With more work, one camera's lens could drive the other's focus remotely (and its zoom could drive the other's, by belt or synchronized motors). This works very well with stepping-motor lenses and an autofocus system designed to work with them, such as the Canon EOS 70D's: a basic implementation involves simply isolating one lens from its controlling camera and patching all of its electrical connections but "DLC" to the other camera (this simple direct coupling requires identical zoom settings).
 * The connection can be made with surgery to the kit lens, but would work more neatly for an entire outfit by modifying the cameras (or accessory lens mounts) to redirect, and accept redirection of, their signals. The "slave" can be one or more cameras with a compatible mount (including via adapter), even a high-speed or film camera (for which an integral digital autofocus system perpendicular to a pellicle mirror could be a neater option). Or, one camera could electronically tell the other what distance to focus at, using the encoding many lenses provide.
 * With even more work, one camera's shutter hardware could drive or trigger the other's directly (or a low-level program command could be sent) for the closest synchronization, without computing delays. Canons' remote capability and the Magic Lantern free software offer a starting point for the reversible software side.

Synchronize flash
A focal-plane shutter exposes only a moving slit above the “flash-sync speed”, commonly around 1/250 second on DSLRs. At that speed, the entire sensor is exposed at once for a much shorter time after one curtain has gone all the way across the frame and before the second follows: the stated exposure period is the longer time between the curtains’ passes across any given part of it. (An electric flash is nearly instantaneous.) Below the sync speed, the curtains generally travel just as fast, but the sensor is fully exposed longer. A camera usually by default uses “first curtain sync”, firing the flash as soon as the sensor is fully exposed for as little lag as possible between the press of the shutter release and the sharp, bright picture formed by the flash, but can be set to use “second curtain sync”, firing the flash immediately before the sensor begins to be covered again to form the sharp flash image in front of, rather than behind, a motion-blur trail. Digital cameras often use a weak “preflash” to measure exposure before opening the shutter in order to adjust the power of their main burst of light accordingly, and might increase shutter lag to do it. They may also pause briefly to stop a lens moving in continuous autofocus mode before tripping the shutter.

A flash for each camera would be inconvenient, could create distractingly sharp double images from camera or subject motion between flashes, could confuse automatic systems expecting to receive only their own flashes at a given point in time, and could misexpose parts of one or both pictures if both shutters aren’t fully open at the same time. So it’s better to just use a flash on one camera, and synchronize the other to it. Imprecise flash synchronization will clearly reveal itself through a lack of illumination across all of a picture, or across part of it if the flash fired as a shutter curtain partially covered the sensor. (A lens shutter as compact and medium-format cameras often have will not cast a well-defined shadow and may manifest being partially closed with less brightness or odd bokeh from the shutter blades.) To compensate for imprecise synchronization, try some of the following:
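The synchronization problem can be pictured with a simplified timing model: below the sync speed, each sensor is fully uncovered from (shutter lag + curtain travel time) until (shutter lag + shutter speed), and a first-curtain flash on one camera fires at the start of that camera’s own fully-open window. The numbers in this sketch are illustrative assumptions, not measurements of any particular camera:

```python
def fully_open_window(lag_s, travel_s, exposure_s):
    """Interval during which the whole sensor is uncovered (speeds below sync).

    lag_s: shutter lag after the release signal; travel_s: curtain transit
    time (roughly 1 / flash-sync speed); exposure_s: the set shutter speed.
    """
    return (lag_s + travel_s, lag_s + exposure_s)

def other_camera_sees_flash(lag_flash_cam, lag_other_cam, travel_s, exposure_other_s):
    """First-curtain flash on one camera: does it land while the other is fully open?"""
    flash_time = lag_flash_cam + travel_s      # fires as soon as its own sensor is uncovered
    open_b, close_b = fully_open_window(lag_other_cam, travel_s, exposure_other_s)
    return open_b <= flash_time <= close_b

# Assumed numbers: 4 ms curtain travel (a 1/250 s sync speed), and the flash
# camera delayed 3 ms by its flash automation.
print(other_camera_sees_flash(0.053, 0.050, 0.004, 1/250))  # False: 1/250 s leaves no leeway
print(other_camera_sees_flash(0.053, 0.050, 0.004, 1/60))   # True: a slower speed catches it
```

This is why the suggestions below lean on slower shutter speeds and on minimizing the flash camera’s extra delay.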


 * Switch the camera to which the flash (or flash controller) is attached. This may help if one camera is consistently firing the flash before the other can open its shutter fully.


 * Avoid continuous autofocus mode. A camera may divide its attention between the autofocusing process and actually tripping its shutter when it is told, and may pause to stop the moving lens first. Allowing each camera’s lens to focus at a single distance and stop, or manually prefocusing at the subject’s expected distance, prevents this interference.
 * A smaller aperture can compensate for imprecision in focusing from the lack of continuous and predictive adjustments.


 * Use manual or “thyristor” automatic flash instead of automatic flash with a pre-flash. This pre-flash process takes time which a camera that has no flash attached will not expect. Manually adjusting flash power (use the “guide number”) or a “thyristor” automatic setting which makes a single burst of light cut off as a sensor on the flash itself has received enough light back from the subject avoids the delay.
 * Attempting to duplicate the delay with a pre-flashing automatic flash on each camera could confuse their measurement process and result in ghosts. Such a method might work better with one flash covered to get the delay without the light. But, not seeing any light return from a pre-flash, it may fire at full power and need to recharge its batteries and cool for several seconds between pictures, and consume its battery power quickly.
 * Some automatic flashes are quicker than others. The newer camera-brand ones are often better for this.
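For the manual-flash option, the guide number makes the arithmetic simple: GN (in meters, at ISO 100) equals f-number times subject distance, and it scales with the square root of both ISO and fractional power. A sketch (the function name is ours):

```python
import math

def manual_flash_aperture(guide_number_m, distance_m, iso=100, power_fraction=1.0):
    """f-number for a manual flash: guide number (meters, ISO 100) = aperture x distance.

    The effective guide number scales with the square root of ISO (relative
    to 100) and of the fractional power setting.
    """
    gn = guide_number_m * math.sqrt(iso / 100) * math.sqrt(power_fraction)
    return gn / distance_m

# A GN 32 (m, ISO 100) flash at 4 m, full power: f/8.
print(round(manual_flash_aperture(32, 4), 1))                        # 8.0
# The same flash at quarter power halves the guide number: f/4.
print(round(manual_flash_aperture(32, 4, power_fraction=0.25), 1))   # 4.0
```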


 * Use first-curtain flash synchronization. Recall that an automatic flash with pre-flash slightly slows the camera to which it is attached. If one camera is using it, that camera will take a little longer to open its shutter. If that camera uses “second-curtain sync”, waiting until immediately before closing its shutter to tell the flash to make its main burst of light, the other camera’s shutter will already have closed. If it instead fires the flash immediately after opening its shutter, the other camera’s shutter may still be open, depending on the speed of the flash’s automation and the shutter speed.
 * Second-curtain flash synchronization does tend to work at slow shutter speeds. The camera with the flash may allow more time between firing the flash and closing the shutter, catching the other camera’s shutter still open.
 * Second-curtain flash synchronization might align otherwise mismatched shutter and flash timings better.
 * For reliable second-curtain synchronization with dim ambient light, set second-curtain sync and a slightly faster shutter speed, such as one-tenth second, on the camera with the flash. Set a slightly slower one, such as one-eighth second, on the camera without it. The ambient light won’t noticeably unbalance or blur images, and the cameras will have plenty of leeway to each capture the flash. (Choose exposure and sensitivity settings that would underexpose the images badly without the flash.)


 * Use a slower shutter speed. The cameras may allow a greater gap between the time the shutter fully opens or begins to close and the firing of the flash as the shutter speed slows, allowing more latitude for two cameras to capture one’s flash. A shutter speed a stop slower than the flash-sync speed may consistently allow synchronization with non-dedicated flash; a speed two or three stops below the flash-sync speed may consistently allow synchronization with a through-the-lens flash or controller or continuous autofocus.
 * Bounce flash or multiple flashes, most easily balanced with an automated controller, can be especially pleasing with a 3D picture because all levels of depth will be present to explore but the subject will still stand out at its own distinct distance. (A flash mounted on one of the cameras will be sideways, so you’ll need to use a reflector or rotate it to bounce it off a ceiling.)
 * Side lighting can accentuate a three-dimensional effect by demonstrating contours through brightness, shadow, and different angles’ reflectivity. Try multiple flashes triggered by a flash transmitter or an on-camera flash itself set not to flash or provide only low “fill” light. Automation of multiple flashes can involve each making its own distinct pre-flash, delaying the shutter of the camera controlling the flashes longer than a single automated flash would. A slower shutter speed may be needed to ensure the other camera doesn’t open and close its shutter before the main flash burst.


 * Match a dedicated flash’s delay of one camera with a dedicated flash on the other, covered so as not to interfere with the other’s preflash measurement or exposure. You could use the second camera’s built-in flash if it has one, but bear in mind that flash will fire at full power, recharge slowly, and drain batteries rapidly when the camera sees little or none of its preflash return and thinks it’s in a very big, dark place.


 * Use an electronic shutter release with a programmable delay such as a PocketWizard MultiMax. This expensive but straightforward method would be especially helpful with more than two cameras to synchronize to one another.


 * Use a long exposure and a strobe light, or a flash in strobe mode, for fast-moving objects against dim backgrounds and to detail motion. Both cameras should capture most of the images formed; part of an extra one on one side may not be important.

Support the cameras
Here are some ways to support the cameras without the now-coupled tripod sockets. Keep the strap on to avoid surprises!


 * A big “beanbag”, including a tough, inexpensive bag of actual beans or rice which is sometimes sold in a strong cloth outer bag and a sealed plastic inner one. Don’t use a bag of rocks or anything else that forms abrasive powder. Prop the cameras so the lenses are free to rotate to focus.
 * Nestle the cameras securely in a big bag if you want to gently adjust the cameras by hand between shots, as to adjust the lenses’ focus by hand for focus stacking.


 * Another tripod socket. You could use a threaded spacer between the cameras or lens tripod feet. Or permanently affix a tripod mount to an extra tripod ring, if your lenses use removable ones.


 * Cameras connected at the lenses’ tripod feet could be steadied by balancing or suspending the point between them for a gimbal-like mount.


 * Use a little Lazy Susan for multiple shots to stitch into a conventional panorama, or a big one with a hole in the middle for a non-rotating platform for multiple or wraparound views of an object. StereoPhotoMaker can arrange and combine numerous shots or even video frames a single camera takes over a wide arc for a stereo panorama. An aftermarket microwave oven turntable such as a “MicroGoRound” could rotate the camera evenly and inexpensively. Use a high shutter speed to avoid blur from uninterrupted rotation.

Carry the cameras
Mounting cameras baseplate-to-baseplate results in an assembly that is fragile and lacks accessible, sturdy support points. The lens fronts generally bear on delicate moving parts. The camera backs may have projections toward their upper edges, such as eyecups and large hotshoe-mounted wireless receivers; resting on a hard surface, these projections would direct some of the cameras’ weight to the baseplates’ relatively fragile attachment point.


 * Carry the cameras with a strap as you use them.


 * Use a camera bag that does not focus stress on the cameras’ attachment point as you carry or bump it. Try resting the cameras on their backs in a boxy shoulder bag with extra padding at the bottom, especially under the wide flat areas of the camera backs.

Macro
Many find a stereo pair’s relation of divergence to depth most pleasing when the distance to the closest subject is roughly 30 to 50 times the stereo base: several feet for a human-like stereo base of three inches or so. (Increase or decrease this distance proportionately to account for the differing magnification of a tele or ultrawide lens: with the usual stereo base, the field of view should be a few feet wide at the area of the closest subject.) With this modest separation, parallax keeps the two cameras’ images from overlapping only at their very edges, and each camera’s perspective captures only very little of the side of an object that doesn’t appear in the other camera’s view: you can look through one camera, take a picture, and expect that the other camera will make everything but the very edges appear three-dimensional.
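The rule of thumb above reduces to a one-line calculation. A sketch, with the 30x figure and the "normal" focal length as adjustable assumptions:

```python
def nearest_subject_m(stereo_base_mm, focal_mm=50, normal_mm=50, ratio=30):
    """Rule-of-thumb closest comfortable subject distance, in meters.

    Roughly `ratio` (30 to 50) times the stereo base, scaled proportionately
    for lenses longer or wider than a "normal" one on the same format.
    """
    return ratio * (stereo_base_mm / 1000) * (focal_mm / normal_mm)

# A human-like 75 mm stereo base with a normal lens: a bit over two meters.
print(round(nearest_subject_m(75), 2))                  # 2.25
# The same base with a 100 mm lens: stay about twice as far back.
print(round(nearest_subject_m(75, focal_mm=100), 2))    # 4.5
```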

Close-up or “macro” 3D photography presents a few problems. If the cameras are parallel their parallax will be large in relation to their fields of view, which will overlap only at a quickly shrinking inside area and ultimately not at all as you approach the subject. The stereo base will be large in relation to a moderately-magnified subject’s distance, resulting in extreme differences in divergence for small absolute changes in depth that make the images hard to fuse all over at once and make them “pop” out unappealingly much when they do. Wide-angle close-ups with a relatively long stereo base will also capture significantly different sides of an object: each lens will see much more of its own side of surfaces in the direction of, and thus lying edgewise or even occluded to, the other lens. “Toeing in” the cameras to face toward each other instead of straight ahead will misalign the subject planes from which their lenses render objects at equal sizes, “keystoning” each image for more magnification on its own side, and mismatch the background views (which may not matter if they’re plain or blurred to smoothness).

Here are a few ways to improve macro photos:


 * Reduce the stereo base for mostly-overlapping views and no need for toe-in at your subject distance. Unfortunately, DSLRs’ and other fancy cameras’ own size limits how close two can be to one another.
 * For a stationary subject, just use a tripod and a single camera shifted a short distance between pictures with a “slide bar”.
 * A lens’s image stabilizer could shift the image very quickly for you but generally is not user configurable.
 * Extremely small subjects can be shifted under a microscope.
 * The best option for a mobile subject is likely synchronized compact cameras. The kind with a lens window at the corner can be mounted with the optical axes especially close together. Since cameras ordinarily aren’t available in mirror-image variations, you’d probably need a frame or brackets to mount them lenses-inward and side-by-side with the camera bodies sticking away diagonally.
 * If both cameras’ flashes can fire in close enough synchronization to illuminate both sides’ images and aren’t too bright, you might not need any other light. If they’re not usable and there’s nowhere to plug in an extra flash, use bright ambient light, reflectors, or slave flashes (of the kind which can ignore preflashes, if your cameras use them) with the cameras’ flashes shielded from lighting the subject much directly. If each camera needs its own flash burst, try slave flashes reflecting off something a distance away so each casts the same pattern of diffuse light, with sensors arranged so each camera only triggers its own slave.
 * A single-lens stereo adapter generally has a shorter stereo base than two cameras.
 * Small-sensor cameras have greater apparent depth of field than larger-sensor ones. Use focus stacking for extreme depth of field with subjects that are stationary or can at least be added into otherwise stationary backgrounds.


 * Adjust the spacers between the cameras to toe them in gently if necessary. Use a little extra padding such as another thin cork disk at each pressure point on the back edge if you just couple the two cameras, or an extra few threads of the camera-coupling screw between the cameras if you couple the lenses too—with the tripod feet also coupled less tightly for play there. Work within the limits of the play in the assembly; don’t apply force to the tripod sockets and bend them into misalignment for long-distance photography.
 * By adjusting toe-in so the views converge at the minimum focal distance (which may not require an extreme angle with long-focus and non-macro normal lenses), you can position the camera easily: turn each lens’s focus ring to its limit, set each camera to manual focus mode, and move back and forth until the subject (or a point about one-third of the way into it, for optimal use of depth of field) is sharp. The depth of field will increase when the camera stops down to take the picture.
 * If the views converge at another distance, as you may need them to if the minimum focal distance would require too much toe-in, focus the lenses to the distance at which they do converge and fix the focus gently with tape.
 * Some autofocus cameras can confirm correct focus and even the direction of focus error in manual-focus mode.
 * Mounting the cameras’ tripod sockets to a door hinge with short, slim-headed bolts sometimes called “machine screws” can hold the cameras parallel in one axis while leaving them free to adjust convergence toward one another. The lenses could swing into and damage each other if not restrained or padded; self-closing “spring” door hinges can be set to draw them parallel when no force is applied. A light beam projected from each camera at a point as far away from the toe-in axis as the lens—even through the eyepiece and out the lens itself—could converge to confirm the images will overlap well at a given distance. (It should generally be turned off when the picture is taken.)


 * Reduce the need to toe in two cameras with longer-focus lenses.
 * You’ll need long-focus lenses that focus relatively close. They should extend far, not reduce their true focal lengths as some zooms do in “macro” mode.
 * A teleconverter will increase effective focal length and an extension tube will preserve it, but an accessory “close-up lens” will reduce it.
 * Mirror lenses can’t easily be stopped down to control depth of field, and produce odd out-of-focus patterns, but they’re light and inexpensive and you generally don’t want the misaligned backgrounds toe-in produces to be sharp.
 * A few degrees of toe-in generally doesn’t require image corrections for pleasant viewing.
 * Correct keystoning of toed-in images in a photo editing program. (A sophisticated stereo camera might apply toe-in and correct the keystoning automatically.)
 * Shifting a camera’s lens or sensor could apply toe-in without keystoning, but the image stabilization systems that could do this in most electronic cameras generally aren’t user configurable.


 * Reduce electronic flash power for an ultra-fast burst for very quick subjects like hummingbirds. For ordinary photographic equipment, speed may correspond to factors affecting absolute as well as fractional output: a small flash using most of its power can be just as fast as a big flash using little of its power. Try a reflector for perfectly synchronized light from multiple directions: a white one for soft light, or a silvery one for a bright beam.

Hyperstereo
Divergence falls off rapidly with distance. Depth sensation and depth resolution can both be increased by increasing the stereo base for faraway objects such as big buildings and landscapes. Generally avoid close-up objects, which will have too much divergence and be hard to “fuse”, or be left out of one picture entirely.
 * For buildings, try a stereo base of a few feet.
 * The cameras can be mounted on a bar. A closet vertical rail works well: it’s strong, finished, pre-punched with holes through which bolts for the tripod sockets can go, and inexpensive. If it’s U-shaped, the bolt heads can tuck into the recess and not scratch surrounding surfaces. Use another, or for greater strength a board drilled with holes to accommodate the cameras’ and lenses’ mounts, for the tripod sockets of lenses that have them. For precise alignment on any mount, put the cameras on a pair of tripod heads.


 * For distant landscapes such as mountains, try a stereo base of many yards. A mount that long might be impractical, so use a single camera and move it between pictures. Carry the camera between positions, matching reference points on very large objects essentially at infinity, such as clouds, to viewfinder reference points or edges, or watching the image shift evenly by the relevant distance in an overlay. Or, take a series of pictures quickly out the side of a vehicle, much as topographic survey photos are made, and choose the two with the most pleasing separation later.
 * Leaves and other natural elements move almost constantly. You can match these details by putting one camera on a tripod (beware of theft) or having someone else hold and aim it and tripping the shutters with a wireless release.


 * Hyperstereo views should, as before, differ only by their parallax—but it will be much greater. Look for objects of recognizable size to be offset only in the axis of the stereo base and by the distance separating the two viewpoints.


 * For accurate depth measurement of close-up and far-away objects, you could take several pictures at varying distances from the first camera and form a composite depth map.
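For a rectified, parallel pair, the underlying relation is the standard pinhole-stereo formula Z = f * B / d: depth equals focal length (in pixels) times baseline, divided by the point’s disparity between the two images. A minimal sketch with illustrative numbers:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a point from a rectified, parallel stereo pair: Z = f * B / d.

    focal_px: focal length expressed in pixels; baseline_m: stereo base in
    meters; disparity_px: the point's horizontal shift between the images.
    """
    return focal_px * baseline_m / disparity_px

# A 4000 px focal length, a 1 m hyperstereo base, 400 px of disparity:
# the point is 10 m away. Disparity halves for a point twice as far.
print(depth_from_disparity(4000, 1.0, 400))   # 10.0
```

The formula also shows why a longer baseline improves depth resolution at a distance: for the same depth, disparity grows in proportion to the stereo base.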

Viewing
Copy your images to your computer. Convert them to the proper format for a special 3D display if you have one. Or, view the cameras’ respective sides of a given picture side by side with the right camera’s image on the left and vice versa and cross your eyes so that each looks at the other side’s picture and your brain “merges” them into a 3D view.
 * The Gadmei P83 is an inexpensive autostereoscopic (no glasses) media player. Its internal processor is slower than a PC's, so it loads files resized to its native resolution noticeably more quickly than large ones.
 * Many devices that can take 3D pictures can also display 3D pictures from other cameras, but may require converting the pictures to a particular format.
 * Autostereoscopic displays generally rely on each eye facing its part of the screen at a given angle, so they do not work well for close-up wide-angle views. Displays that separate the views at the eye work better: a separate display for each eye (electronic, or printed as with stereo cards), or polarized or colored glasses.


 * Most people can cross their eyes but cannot easily diverge them ("wall-eyed" viewing), so the "parallel" viewing method of looking "through" the pictures, with the left one on the left for the left eye and the right one on the right for the right eye, doesn't work well unless the pictures are relatively small and far away. Move back to reduce the angle by which you'll have to cross your eyes to see big, detailed pictures, reducing strain and headaches at the expense of a less realistically wide view.
 * The vertical orientation of the pictures produced by most cameras mounted side-by-side at their bases maximizes image area for a given width, and thus a given eye-crossing angle.


 * A “3D wiggle” image animation swapping the two frames quickly uses motion parallax rather than stereopsis to convey depth, so it can be appreciated by people who can’t “merge” stereo pairs or are even blind in an eye.
 * Some programs, including Web-based stereo viewers and the GIMP Animation Package "gimp-gap", can smoothly "morph" from one frame to the other in a two-image series (or a longer multi-perspective series, especially useful where noticeable background details are occluded from one side's view). For best results, tell the "morphing" program which points correspond between the two images (if it can't detect them itself) to accentuate its stretching and bending where the change in perspective should be greatest.
 * These manual “workpoints” and the rough depthmap they create might help an automated depthmap function in its search for finer corresponding points.
 * A sweeping stereoscopic movie, rather than a simply motion-parallax-inducing 2D view, could be formed by directing to one eye, at any given point, the frame coming a few frames earlier in the morph sequence. This would effectively reduce the stereo base relative to that between the two extreme views from which the morph was created, and so might be best suited to a morph between hyper-stereo or multi-camera perspectives.


 * Keep your pictures organized: give the memory cards distinct enclosure and computer volume labels, keep each camera's pictures in its own folder, and give left and right images corresponding names.


 * Most DSLRs create an image file with the basic data arranged as the camera sees it, right side up, upside down, or sideways, but use an “orientation sensor” to tell the computer how the image should be rotated for display. The sensor may not work well for pictures taken upward or downward. If both pictures aren’t consistently right-side-up, you need an image viewer that recognizes the orientation data.
 * Many image editors can generate new files with the image data itself rotated, which some viewers and editors need, but avoid repeatedly saving an image in a lossy format, which degrades quality.


 * Use a viewer that can move through a series of files with keyboard or mouse shortcuts so you can advance each side’s pictures without having to re-merge your 3D view.


 * splitmpo.sh can convert MPO files from 3D cameras to stereo pairs for cross-eyed or parallel viewing.


 * ImageMagick can splice and adjust batches of images.


 * StereoPhotoMaker can assemble, align and crop stereo pairs and prepare them for many different viewing methods. It can also make panoramas, which it calls “mosaics”.
 * The "optimized anaglyph" setting, tinted red-cyan glasses, and a gamma increased above what cameras produce (and what tends to be appealing for non-anaglyph pictures) work well for many subjects.


 * pyRenamer is useful for renaming a series of image files from one camera to correspond to the names of image files from another. The two series may differ because each camera numbers pictures in a long-running continuous sequence—good for keeping filenames unique, but not so good for immediately recognizing correspondence—or because a picture was taken, deliberately or inadvertently, with one camera but not the other. Here's a sample workflow. Automating parts of it with a script, which might ask for confirmation of its guesses at harder tasks such as selecting pairs from sets of closely similar images, could be easy, since the programs involved are free software and largely graphical interfaces for readily interoperable command-line programs. StereoPhotoMaker might also be modified to sort through the pictures largely on its own, but it's not open-source "free software", so not everybody can make the changes.
 * Copy each camera’s pictures to its own directory on your computer.
 * Make a spreadsheet with a page for each pair of directories of pictures, most intuitively corresponding to a picture-taking session or memory card full, and within it columns for
 * the original filenames of the left camera’s pictures,
 * notes about the left camera’s pictures,
 * the original filenames of the right camera’s pictures, and
 * new filenames for and notes about the right camera’s pictures.
 * Look at the pictures to match the second side's to the first (generally sets of thumbnails will suffice) and fill in the spreadsheet. If you're right-handed, you'll probably look through the left camera and hold the right one, making you more likely to bump its shutter button accidentally, and the way the two cameras are connected will make the right camera more convenient for single shots. So leave the left pictures' filenames as-is and rename the right ones into a tidy progression of filenames from which the finished pictures will take their names, and try to match a picture on the right to each on the left rather than vice versa, reducing the number of times you must tediously confirm that there is no match.
 * If a picture from the camera on the right comprises a stereo pair with one from the camera on the left, note that picture’s number for it.
 * Mark the beginning and/or end of a series of such pictures from a given camera in text or with color to speed its manual recognition for renaming of the files later.
 * If a picture from either camera does not go with any other picture, just note that it is unpaired. It is a regular 2D picture.
 * If a picture from either camera forms a stereo pair with another from the same camera (or the other camera) taken at different times, as for instance when the camera is moved between pictures for a hyper-stereo view, note which pictures go together, indicating which cameras they are from, and which is on the left and which is on the right.
 * If a series of pictures forms a more complex composite, as, for instance, several forming a HDR or focus-stacked set for each side of a stereo pair, a set of several perspectives from left to right allowing the most appealing stereo base to be chosen later, or several views combining to form a 3D model, explain that in its notes.
 * Mark the entries for pictures that are unpaired or in need of special processing in their own special text style or color to speed their manual recognition later.
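Much of this matching could be pre-filled automatically by comparing capture times, since paired shots are tripped almost simultaneously while unpaired ones are not. A sketch, assuming you have already extracted each file's capture time in seconds (the two-second tolerance is an arbitrary starting point, not a measured figure):

```python
def pair_by_timestamp(left, right, tolerance=2.0):
    """Greedily match (filename, seconds) lists from two cameras.
    Returns a list of (left_name, right_name_or_None) pairs;
    right-side pictures within `tolerance` seconds are candidates."""
    right = sorted(right, key=lambda x: x[1])
    used = set()
    pairs = []
    for lname, lt in sorted(left, key=lambda x: x[1]):
        best = None
        for i, (rname, rt) in enumerate(right):
            if i in used:
                continue
            if abs(rt - lt) <= tolerance and (
                best is None or abs(rt - lt) < abs(right[best][1] - lt)
            ):
                best = i
        if best is None:
            pairs.append((lname, None))
        else:
            used.add(best)
            pairs.append((lname, right[best][0]))
    return pairs
```

A human pass over the leftovers, as the spreadsheet step describes, would still catch single-camera hyperstereo pairs and other special cases.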


 * Rename the files. To reliably pair the correct images from two directories, StereoPhotoMaker’s batch processing seems to require the files to have the same name, and the same sequence: there should not be files in one camera’s directory that do not go with pictures from the other camera.
 * Make copies of the directories containing the two cameras’ pictures to rename them without risk to the original files or their unique numbering.
 * Within each camera's directory, create a directory for "unpaired" files (those that will not simply combine with single pictures from the other camera to complete stereo pairs) and move those files to it. This includes truly unpaired pictures as well as ones that pair with other pictures from the same camera, as when a single camera is moved some distance between shots.
 * Use pyRenamer to quickly rename a series of images from the right camera to match an uninterrupted sequence from the left camera.
 * Create a directory within the right camera's directory for "renamed" files, to avoid conflicts between files' new names and other files' not-yet-changed names.
 * In pyRenamer, select a series of right-side picture files to be given new, sequential names (highlight them with the mouse). Use a directory path and pyRenamer’s variables to send them to the “renamed” directory, with names matching the names of the left side’s files. For instance, if the cameras’ filenames are in the format of “IMG_0001.JPG”, for a series to be renamed beginning with IMG_0021 use the string “renamed/IMG_{num4+21}.JPG”.
 * Give the left and right pictures of stereo pairs not taken in the usual way similarly corresponding names and locations in the left and right side picture directories (no matter which camera they were taken with), removing them from the “unpaired” directories. If their names conflict with others already there, change them slightly, for instance adding an underscore and a number to the name.
 * Similarly, put the left and right pictures resulting from any process StereoPhotoMaker or a similar stereo processing program to be used doesn’t handle automatically, such as HDR, into the left and right picture directories with corresponding names.
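The pyRenamer step above, renaming a run of right-side files into a sequence starting at a chosen number, can also be done with a few lines of Python. A sketch: the IMG_0001.JPG format is the example from the text, and the function only computes names rather than touching the disk, so you can review the plan first.

```python
import os

def plan_renames(filenames, start=21, dest="renamed"):
    """Map each selected filename, in sorted order, to
    dest/IMG_<start>.JPG, dest/IMG_<start+1>.JPG, ...
    mimicking pyRenamer's "renamed/IMG_{num4+21}.JPG" pattern."""
    plan = {}
    for offset, name in enumerate(sorted(filenames)):
        plan[name] = os.path.join(dest, "IMG_%04d.JPG" % (start + offset))
    return plan

# After reviewing the plan, apply it with os.rename(src, dst).
```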


 * Stereo pairs can also be combined into panoramas for anaglyph and simultaneously-panned viewing with software such as StereoPhotoMaker or Hugin. Try one panorama tilted upward and another tilted downward with a wide-angle lens for an all-encompassing view.
 * An easy way to make a conventional panorama is to pan a camera about its own axis over the desired angle of view, combine each side’s view in Hugin, then overlay them in StereoPhotoMaker. But this effectively decreases the stereo base toward the edges. It can be stretched back out at the expense of edge resolution with a rectilinear projection.
 * Three or four circular image fisheye lenses pointed directly up or down in a grid could capture a 360-degree 3D panorama all at once. (One would need an upward and a downward facing array if the directly downward view matters; many fisheye lenses have a view of slightly over 180 degrees which would reach somewhat below the horizon when faced upwards.)
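The shrinking stereo base mentioned above has simple geometry: as the view swings away from straight ahead, only the component of the base perpendicular to the viewing direction contributes. A rough model for illustration, not a calibrated correction:

```python
import math

def effective_base(base, pan_angle_deg):
    """Rough model of the stereo base seen by an off-axis part
    of a panorama: the component of the rig's base perpendicular
    to that viewing direction, shrinking with the pan angle."""
    return base * math.cos(math.radians(pan_angle_deg))
```

At 60 degrees off-axis the effective base is already halved, which is why stretching the edges back out (as with a rectilinear projection) trades resolution for depth.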


 * Depth maps can be generated from stereo pairs and used to make random-dot stereograms like “Magic Eye” pictures.


 * 3D models can be generated from multiple overlapping pictures of an object. You'll need many pictures for a wraparound view—and can synchronize them for a moving subject just as you would with two cameras. Space the viewpoints evenly around the subject.


 * Many Windows programs can run on Mac OS or Linux with compatibility software and some tinkering. StereoPhotoMaker runs easily with WINE.

Improvements
A few more difficult things could make paired cameras work much more smoothly.


 * Programs, scripts combining existing programs, or scripts within existing programs such as the GIMP or even camera firmware, to automatically or with some configuration or guidance:
 * Combine two cameras’ images into a single file when they are taken. (The cameras might communicate with a separate computer or with one another through their interface cables or wireless adapters.)
 * Find images forming sides of a pair, possibly from corresponding image data or content.
 * Determine which side is which, possibly from camera orientation or arrangement of divergence.
 * Combine sets of images taken to form each side of a pair with techniques such as HDR and focus-stacking, possibly automatically from variations in image data or content.
 * One camera could expose for the bright areas and the other for the dark ones, with the scene's correct tones (or, in the case of another compositing technique, its attributes as most accurately determined by either camera) computed from the two pictures of almost-identical scenes, and the different perspectives of its shapes preserved from the borders, which would generally be recognizable in both.
 * Combine pairs of finished images into cylindrical panoramas or wraparound “virtual reality” views.
 * Walk through collecting multiple images for a composite picture or even gather them concurrently through tethering.
 * Match resolution, angle of view, color, tone, distortion, and other image attributes in a pair of images from two different cameras, or two cameras set a little differently, possibly from differences determined through test-chart pictures.
 * Edit one side of a stereo pair automatically along with the other. A depth map could keep track of the disparity between the two pictures’ views of a given point.
 * Sweep and pivot smoothly through 3D panoramas and wraparound views, adjusting stereo separation as closer and more distant objects pan in and out of view possibly by reference to a depthmap, rotating it with the view, and swapping opposite sides’ views to pan across the zenith and nadir. The depthmap could guide stitching of the panorama into a variety of projections with two- as well as three-dimensional accuracy.
 * Store a depth channel with one or more of a set of images, allowing generation of all kinds of 3D views such as smoothly panned “wiggle” stereograms, anaglyphs of various color combinations, and reduced and increased divergence or rotated-stereo-base stereo pairs; seamless stitching and combination of 3D images; manipulation of depth and its relation with divergence to, for instance, increase depth perception for faraway objects or “push” and “pull” parts of the picture; and application of special effects such as depth blur.
 * The depth channel could cross-reference to actual depth by reference to the lenses’ focal lengths, separation, alignment and focal distance, which measurements could be calibrated with a few test pictures. Original pictures taken from slightly different positions will include the edges of objects to slightly different extents; a depthmap at these edges could be filled in with one or more extra “second” images from camera or coded-aperture positions capturing sides missed by the first, or by building upon detected contours to fill the depthmap and existing textures to fill the gaps in a later-generated stereogram. Curves heading sharply toward one another could be extrapolated to intersect sharply and those heading shallowly toward one another could be interpolated to intersect smoothly, or more complex predictions could be made on the basis of frequently photographed scene types, but the margin areas would generally be very small and aesthetically unimportant unless markedly different from their surroundings.
 * Calibrate separation in the stereo pair to real-world distances automatically. A camera’s alignment and magnification could be measured precisely with test charts at a given distance, or pictures of a standard object, possibly the moon with its relatively consistent and precisely predictable size and “infinite” distance, throughout the image area.
 * A program for a portable computer “tethered” to adjust and receive data from the cameras, or at least set to collect their pictures through memory cards with wireless interfaces, and presenting guides to take the pictures for various kinds of composite images it would process would be simpler than programming various kinds of cameras’ internal computers directly and allow a more familiar, detailed interface.
 * Canon “Magic Lantern” firmware can create HDR video by alternating exposure between nearly-identical video frames taken by an electronic rather than mechanical “shutter”. Similarly recording multiple pictures for compositing techniques such as focus-stacking (especially with power focus) or panorama stitching could be completed in fractions of a second and therefore handheld. Very high “shutter speeds” or a flash synchronized to the electronic “shutter” or at least catching some video frames as it “strobes” would capture little of the continual motion of hand-focusing or panning.
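The moon-based calibration suggested above is straightforward because the moon's angular diameter is nearly constant, about 0.52 degrees, at effectively infinite distance. One photo of it yields the angular scale of a camera-and-lens combination; this is a sketch, and a real calibration would also map distortion across the frame:

```python
MOON_ANGULAR_DIAMETER_DEG = 0.52  # nearly constant; "infinite" distance

def pixels_per_degree(moon_diameter_px):
    """Angular scale of a camera/lens from a single moon photo:
    measured moon diameter in pixels divided by its known
    angular diameter in degrees."""
    return moon_diameter_px / MOON_ANGULAR_DIAMETER_DEG
```

Comparing the two cameras' figures, and where the moon lands in each frame, reveals magnification mismatch and misalignment at once.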


 * Viewer-selectable 3D display method options for online photo galleries.


 * A bracket shaped to hold the cameras together firmly straight ahead, hold or incorporate the dual shutter release hardware, mount a flash above the two cameras, provide a balanced grip, and protect the cameras and their lenses with “crash bars”.
 * Or a rail providing these things and holding the cameras parallel for adjustable hyperstereo views conveying the shapes of faraway objects.
 * A hot-shoe to tripod screw adapter can mount a small camera upside down to a simple slotted bar, minimizing its stereo base next to a right-side-up similar camera whose lens is next to one end of the body (and whose necessary connections are away from the lens side or used with low-profile connectors). Look for hot-shoe to tripod-screw adapters made of stainless steel for strength, with narrower threadless screw shafts near the hot-shoe mounts so they can slide along common screw-retaining slots, or file down the threads in this area if needed (protecting the remaining threads and screw with tape during this operation to avoid a rough, potentially camera-damaging finish).
 * These could match or adapt to shims for different kinds of cameras and lenses and, for lenses, might include rollers to zoom two together.
 * Cameras with tripod-collared lenses can most firmly be supported with the tripod collars at right angles to their baseplates, affixed to a sturdy bracket. Pressure in almost every direction would thus bear against a broad area of a camera or lens rather than just tripod sockets.
 * Cameras’ lens mounts are generally sturdy and precisely aligned, so a single, sturdy, stiff device binding to them on one side and the lenses on the other (with mounts that would swivel so that one camera doesn’t prevent the other’s attachment) would keep everything straight without adjustment.
 * Because the device would take up space behind the lenses, focusing at infinity would require the lenses’ focusing mounts to compensate, as by not having infinity stops, or optical couplings such as teleconverters or focal reducers.
 * Not, however, if the lenses were designed for another mount with a larger flange focal distance, which could be designed for compatibility with a simple connector; examples are Four Thirds lenses on a Micro Four Thirds body, or EF and EF-S lenses on an EF-M body. The stereo-mount device could easily accommodate electronics for common control of the lenses.
 * A rugged or underwater housing could be made for a fitted bracket and its pair of cameras.
 * A "bracket" controlled by the cameras or under common control with them, or a purpose-built 3D camera (most easily with small lens/sensor assemblies, possibly in the "periscope" arrangement common on compacts), could automatically adjust the stereo base and angle by reference to the distance of objects in the scene for a pleasing but not excessive sense of depth.
 * This functionality could be moderated for video so that objects moving toward the camera do appear to get closer, just less rapidly so.
 * It could also use an adjustable mirror-based stereo apparatus, although those can have limited angles of view.


 * Firmware for one camera to operate another as a slave, identifying pairs and sets of related images as with focus or exposure bracketing or even combining them in-camera, or to remotely trip the shutter on an older model at the correct time.


 * Alignment and communication connectors or wireless transmitters on cameras to couple to adapters or one another. These could be adjustable to toe in for macro photos or calibrate for camera-to-camera and lens-to-lens variation and wear. Mechanical connectors such as replacement or supplemental baseplates or side panels with coupling, locking surfaces could be supplied as aftermarket accessories.


 * A camera body with a revolving sensor or a side as well as the bottom ending close to the lens so that it can take full-frame horizontal pictures as well as vertical ones with a normal stereo base.


 * Sensor or lens shift, possibly through an existing image-stabilization system’s components, to toe-in the cameras’ views for macro photography without the keystoning that would come with tilt. This could be keyed to focus distance.


 * A selectable, standardized delay from shutter release to exposure, so that differing camera models can fire in sync.


 * Configurable, easy-to-predict wide-area autofocus responses: the cameras' optical axes are a few inches apart, so they'll be easiest to use if their autofocus systems react consistently to mostly but not exactly overlapping image areas. They might, for instance, focus on the closest thing detected, for small subjects well in front of anything close beside them such as birds in flight, or on the second-closest area detected, to avoid obstructions such as animals' cages (which a complementary image may provide data to clone away entirely).
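The closest/second-closest behavior described above is simple to express. A sketch, assuming the autofocus system reports a list of candidate distances; the function name and its rank parameter are invented for illustration:

```python
def pick_focus_distance(detected_distances, rank=1):
    """Choose a focus distance from autofocus candidates.
    rank=1 focuses on the closest detection (birds in flight);
    rank=2 skips the closest (e.g. a cage) for the next one."""
    ordered = sorted(detected_distances)
    if not 1 <= rank <= len(ordered):
        raise ValueError("rank out of range")
    return ordered[rank - 1]
```

With both cameras running the same rule on their mostly-overlapping views, they should settle on the same subject.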


 * An orientation and direction sensor such as an electronic compass and level in a camera, and an electronic overlay of the first image in the viewfinder, to guide multiple-shot stereo photography.
 * Accessory-shoe mount levels are readily available and inexpensive. Since mounting the cameras baseplate-to-baseplate generally limits them to "portrait" format, when you don't actually need portrait framing try keeping the cameras level and cropping more from the top or bottom as necessary; this avoids converging verticals and the need to correct them afterward (a task an orientation sensor could guide automation of).


 * A camera body with two or more DSLR-style sensors and synchronized focus and exposure could be ideal. Regular lenses would keep costs down, but ganged tripod collars and synchronized power focus and zoom or at least precisely-fitted connectors would help two or more work as one.

Applications
Almost anything—if you don’t like the depth later, just use one camera’s picture! But, in particular:


 * Selling things for which size or shape is particularly important.
 * Real estate
 * Fancy cars
 * Personal electronics
 * Clothes
 * Jewelry
 * An image of a widely-recognized object, a person, or the customer could be automatically added to scale at a point of recognizable depth for comparison—even “inside” an item. With a common household item of similar weight listed to heft for comparison, a customer could get a very good “feel” for the item before ever seeing it in person.


 * Virtual tours. Depth adds a great deal to realism, and the lens and lighting options a pair of DSLRs can use enable high-quality pictures even when the subject is “too” close, far away, or poorly lit.


 * Measuring, comparing, and identifying things at high resolution in three dimensions from a varying point of view.
 * If the subject doesn’t have much of a pattern, project one on it for points of reference from which to determine its shape.
 * With a few 3D models of yourself stretching to different poses, you could measure yourself for clothes that fit perfectly with just enough room for movement everywhere. Have a computer adjust patterns for the clothes between several points or even automatically knit seamless tubes for them. You might even infer girth from a simple stereo photo to automatically choose the best-fitting off-the-rack clothes.
 * With a 3D printer, you could make sculptures—even in color—with a few clicks. Or embed your image in a 3D paperweight.
 * With a 3D picture of the previous shape or a computer-generated map of a new one you’ve designed, a computer could show you what to pull up, push down, stretch, and shrink on your damaged or custom car’s body as you tap it into shape.


 * Studying motion in three dimensions. Use an intervalometer for slow motion, video for medium-speed motion, and a strobe light for fast motion.


 * Illustrating a task in a busy-looking environment. The element of depth will show precisely what else something is and isn’t touching without the need for drawings to eliminate distractions.


 * Eliminating clutter. With two or more pictures from different viewpoints, distractions such as fences and passersby could be removed from an image by swapping in another camera’s unobstructed view of a particular area, stretching it to the first’s perspective to fit perfectly.


 * Distinguishing objects for non-3D-specific image editing. Edges in a depthmap, which might be stored as its own channel, would make picking individual objects easy and independent of color and shadow.


 * Identifying color, transparency, reflectivity, and refractivity (through refraction itself and angles of reflection), and so even materials, separately from brightness. From the position of each point, and thus the angle of each surface, in three dimensions, and the varying brightness of apparently uniform surfaces, you might infer a scene's light sources, determine the materials in the scene from their responses, and map a new light pattern onto the scene or even animate it. New shadows, at least, could be easy.


 * Shallow depth of field, broadly adjustable after the picture is taken, through depthmap-guided blur. Heavy, expensive fast lenses and perspective-control cameras are sometimes necessary to focus light, but a simple program can defocus it very well, leaving as much as you choose, where you choose, in the superior focus of a small, simple lens stopped down. Such a lens has particularly great depth of field and good quality, even on a nice small-sensor compact camera.
 * Sharpening could be applied selectively to image areas at a depth known—or determined after the fact—to be in focus, improving actual detail and sharpness without creating much distracting erroneous detail.
 * This could be especially useful for video since each frame can’t be carefully hand-focused beforehand.
 * Other image attributes such as color and tone could also be adjusted with depth to provide, for instance, a subtle warm “spotlight” on a primary subject and a dimmer, cooler background.
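The depthmap-guided blur above boils down to assigning each pixel a blur-circle size from its depth. One common model, sketched here with the thin-lens circle-of-confusion formula and all quantities in consistent units:

```python
def blur_circle(aperture_mm, focal_mm, focus_mm, subject_mm):
    """Thin-lens circle of confusion, in mm, for a point at
    subject_mm when the lens is focused at focus_mm:
    c = A * |S2 - S1| / S2 * f / (S1 - f)."""
    return (aperture_mm * abs(subject_mm - focus_mm) / subject_mm
            * focal_mm / (focus_mm - focal_mm))

# Points at the chosen focus distance get c == 0 (left sharp);
# larger c means more synthetic blur applied from the depthmap.
```

Picking a virtual aperture larger than the real lens's gives the fast-lens look the bullet describes, without the fast lens.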


 * Improving exposure. By preliminarily measuring the distance of each part of a scene and comparing its appearance under ambient light with and without the addition of a known light source, such as a flash, a camera could determine the true color and reflectivity of the scene. It could recognize a bright but backlit subject; know to present a dark subject as dark, even while giving it ample exposure to improve tonality and reduce noise; and adjust areas of a picture further from an unfiltered flash’s reach to match its color balance.
 * The depthmap could be computed from a 3D picture taken in a usual manner, estimated from phase-detection type sensors across the scene, or estimated from contrast detection of more and less focused areas as the lens focuses to various distances.


 * Most of these applications would also work with a "light field" camera, a coded aperture, or a set of cameras viewing the scene from more than two points, since these capture depth as well, with center or multiple side views preventing the missing depth of objects' sides that comes with only one view. But for practical rather than scientific photography, regular cameras (or a "light-field" camera capturing only a few left-, center-, and right-oriented directions' rays rather than all directions') would make better use of limited sensor resolution by recording only a few sharp views plus depth information, then appealingly faking rather than fully recording any desired out-of-focus blur.


 * Selling (or renting out) cameras.
 * “But I have one already.”
 * “Exactly!”