Cameras And Lenses: Capturing The 3D World

Do cameras and lenses see in 3D?

3D cameras and lenses use various methods to capture images with depth and dimension, simulating human binocular vision. This effect is achieved by using two or more lenses to capture multiple viewpoints, or a single lens that shifts position to capture similar images from slightly different angles. This replicates the way our brains interpret depth from the slightly different images formed by our two eyes. The use of 3D cameras and lenses can enhance the viewing experience, making it more immersive and realistic, but it also comes with challenges and limitations, such as bulkiness and higher costs.

Characteristics and values

  • How do cameras and lenses see in 3D? 3D cameras use two or more lenses to record multiple points of view, while others use a single lens that shifts its position.
  • How does this work with human vision? Each eye sees the world as a 2D image. The images formed by the two eyes are slightly offset, and the brain interprets them as a single 3D image.
  • How do 3D glasses work? 3D glasses use differently coloured or differently polarised lenses so that each eye sees a different picture.
  • How do you take images that look 3D? Use a limited depth of field centred on the main subject, keeping it sharp while acuity falls off towards the edges of the frame.

shundigital

3D cameras use two or more lenses to capture multiple points of view

The human brain interprets two 2D images from our eyes as a 3D image. Similarly, 3D cameras use multiple lenses to capture two or more 2D images, which are then combined to create a 3D image. This combination of perspectives is what makes depth perception possible.

The difference between objects seen through the left and right eyes is known as binocular disparity. This, along with focusing and visual centre interpretation, helps develop perspective in human eyesight. 3D cameras mimic this process by using multiple lenses to capture different points of view, which are then combined to create a 3D image.

While some 3D cameras use two or more lenses, others use a single lens that shifts its position to capture multiple points of view. In either case, the key to creating a 3D image is capturing and combining multiple perspectives.

In addition to 3D cameras, there are also other methods for creating 3D images and videos. For example, in post-production, 2D images can be converted into 3D stereoscopic images by creating anaglyphic (embossed effect) images. These images can then be viewed using special red and cyan 3D glasses. However, the most common method for creating 3D content is through the use of 3D cameras, which capture multiple points of view to create immersive 3D images and videos.
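The red-and-cyan anaglyph conversion mentioned above can be sketched in a few lines: the left view supplies the red channel and the right view supplies green and blue, so red/cyan glasses route one view to each eye. This is a minimal illustration using nested lists of (R, G, B) tuples rather than a real image library; the pixel values are made up for the example.

```python
def make_anaglyph(left, right):
    """Combine two same-sized RGB images into one red-cyan anaglyph.

    Red comes from the left-eye view; green and blue come from the
    right-eye view, so red/cyan glasses separate the two viewpoints.
    """
    anaglyph = []
    for left_row, right_row in zip(left, right):
        row = []
        for (lr, _lg, _lb), (_rr, rg, rb) in zip(left_row, right_row):
            row.append((lr, rg, rb))  # red from left, cyan from right
        anaglyph.append(row)
    return anaglyph

# Two tiny one-row "images" standing in for the left and right views:
left_view = [[(200, 10, 10), (180, 20, 20)]]
right_view = [[(10, 200, 200), (20, 180, 180)]]
print(make_anaglyph(left_view, right_view))
```

In practice the same per-pixel channel swap would be applied to full photographs with an imaging library, but the principle is exactly this recombination.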


3D glasses manipulate our brain to see images in 3D

The fundamental technology behind 3D involves presenting different images to the left and right eyes. The earliest 3D glasses used red and blue lenses to achieve this. Filmmakers would create two different images in red and blue, and the glasses would let one colour pass through each eye. This meant that the brain received two different images, which it could then interpret as 3D. However, this method altered the colours of the film, as only blue and red would pass through the filters.

To counter this drawback, 3D glasses started using polarization. Light waves have a particular direction of oscillation, known as the direction of polarization. Normal light has light waves in all directions of polarization, but when passed through a linear polarizer, only light with one direction of polarization passes through. Filmmakers could therefore create two images with different directions of polarization, and the 3D glasses would have one lens that would allow one direction of polarization to pass through, and the other lens would allow the other direction to pass through. This ensured that the brain received two different images and that the original colour of the image was maintained.

However, this method also had a drawback. If the viewer tilted their head, the direction of polarization of the glasses would also tilt, no longer matching the angle of the polarization of the light. This would result in very dim or no vision for the viewer. To counter this problem, the glasses needed to have circular polarizers instead of linear ones. A light wave can have only two types of circular polarization—clockwise and counterclockwise. Filmmakers could make two images for the 3D glasses using light with these polarizations, and the 3D glasses would have one lens that allowed one direction of polarization to pass through, and the other lens the other direction. This ensured that the viewer could see the image in 3D, even with a tilted head.
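The head-tilt dimming described above follows directly from Malus's law, which gives the fraction of linearly polarized light passing a polarizer misaligned by an angle theta as cos²(theta). A short sketch, using illustrative tilt angles:

```python
import math

def transmitted_fraction(tilt_degrees):
    """Malus's law: fraction of linearly polarized light passing a
    linear polarizer misaligned by tilt_degrees (I/I0 = cos^2 theta)."""
    theta = math.radians(tilt_degrees)
    return math.cos(theta) ** 2

# As the viewer's head tilts, the lens's polarization axis rotates
# away from the projected light's axis and the image dims:
for tilt in (0, 15, 45, 90):
    print(f"{tilt:>2} deg tilt -> {transmitted_fraction(tilt):.2f} of the light")
```

At a 45-degree tilt half the light is lost, and at 90 degrees the image goes dark entirely; circular polarization avoids this because clockwise and counterclockwise handedness do not change as the head rotates.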


Stereoscopy is the technique of capturing images and rendering them as three-dimensional

Stereoscopy, also known as stereoscopic imaging, is a technique used to create or enhance the illusion of depth in an image, thus rendering it as three-dimensional. It involves capturing two slightly different images of the same scene or object, each from a viewpoint corresponding to one of a human's two eyes. These two images are then presented separately to each of the viewer's eyes, either directly or with the aid of a device such as a stereoscope or 3D glasses. The human brain then combines the two images, interpreting them as a single 3D image with depth.
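The geometry behind those two slightly different viewpoints can be sketched with the standard pinhole-stereo relation: for two parallel cameras, the depth of a point is Z = f · B / d, where f is the focal length (in pixels), B the baseline between the two viewpoints, and d the horizontal disparity of the point between the two images. The numbers below (a roughly human 65 mm baseline, a nominal 800 px focal length) are illustrative assumptions, not values from the article.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a scene point from its disparity between the left and
    right views, assuming rectified parallel cameras: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# Assumed human-like baseline (~0.065 m) and nominal 800 px focal length:
for d in (40.0, 20.0, 10.0):
    z = depth_from_disparity(800.0, 0.065, d)
    print(f"disparity {d:>4} px -> depth {z:.2f} m")
```

Nearby objects produce large disparities and distant objects small ones, which is exactly the cue the brain (or a stereo camera pipeline) converts into perceived depth.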

The technique relies on the fact that human depth perception, or stereopsis, is based on the brain's ability to interpret two similar but distinct 2D images from the left and right eyes as a single 3D image. This is achieved through a variety of visual cues, including occlusion (when one object partially hides another), the subtended visual angle of an object, linear perspective, vertical position, haze or contrast, saturation and colour, and changes in the size of textured pattern details. By presenting two images that incorporate these cues, stereoscopy tricks the brain into perceiving depth in what are essentially two 2D images.

The use of stereoscopy to create 3D images predates practical photography. Sir Charles Wheatstone invented the stereoscope in the early 1830s and presented it to the world in 1838; the subsequent invention of photography led to the mass production of stereoscopic images, or stereoviews, in the Victorian era. Today, stereoscopy is commonly used in entertainment, such as 3D movies, as well as in fields like photogrammetry and experimental data visualisation.

While stereoscopy creates the illusion of depth, it does not truly replicate the three-dimensional viewing experience. The 3D effect lacks proper focal depth, resulting in what is known as the Vergence-Accommodation Conflict. Additionally, stereoscopy introduces unnatural effects for human vision, such as a mismatch between convergence and accommodation, and possible crosstalk between the eyes due to imperfect image separation. Despite these limitations, stereoscopy remains a widely used technique for creating and displaying 3D images.


3D camera rigs: parallel rig and beamsplitter

3D camera rigs are devices that mount two cameras together to create a 3D image or film. There are two main types of 3D camera rigs: side-by-side rigs and mirror rigs.

Parallel Rig

The parallel rig is a type of side-by-side rig where the cameras are aligned parallel to each other. This is the simplest and cheapest method for shooting in 3D. The parallel rig is optimal for shooting vast spaces, long shots, landscapes, architecture, big objects, and wide views. It is also suitable for wide shots like landscape or overview shots in sports broadcasting. However, it cannot be used for close-ups as the minimum distance between the cameras is limited by the size of the lenses.

Beamsplitter Rig

The beamsplitter rig is a type of mirror rig where the two cameras are placed at a 90-degree angle to each other, with one camera capturing the image directly and the other capturing the reflected image through a beamsplitter (a half-silvered mirror). This setup allows for smaller distances between the optical axes of the cameras, making it suitable for close-up shots, two shots, detail shots, and most dialogue scenes. The beamsplitter rig is considered the best option for shooting live-action films and telling stories with classical film narration tools. However, it is more expensive and sensitive to dust and fast accelerations than the parallel rig.
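The close-up limitation can be made concrete with the same pinhole-stereo relation: for parallel cameras, image-plane disparity grows as f · B / Z, so with the interaxial distance B floor-limited by the lens housings on a side-by-side rig, near subjects produce disparities too large for comfortable viewing, while a beamsplitter rig can shrink B well below the lens diameter. All numbers below (35 mm focal length, 65 mm vs 15 mm interaxial) are assumptions for illustration, not figures from the article.

```python
def disparity_mm(focal_mm, interaxial_mm, subject_mm):
    """Approximate sensor-plane disparity for a parallel stereo pair:
    disparity = f * B / Z (small-angle, rectified-camera sketch)."""
    return focal_mm * interaxial_mm / subject_mm

side_by_side_b = 65.0   # assumed minimum interaxial, limited by lens housings
beamsplitter_b = 15.0   # assumed achievable with a half-silvered mirror rig

for subject_m in (0.5, 2.0, 10.0):
    subject = subject_m * 1000.0
    print(f"{subject_m:>4} m subject: "
          f"parallel {disparity_mm(35.0, side_by_side_b, subject):.2f} mm, "
          f"beamsplitter {disparity_mm(35.0, beamsplitter_b, subject):.2f} mm")
```

At half a metre the parallel rig's disparity is several times the beamsplitter's, which is why mirror rigs are preferred for close-ups and dialogue scenes while parallel rigs excel at landscapes and wide shots.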


3D cameras are bulkier and heavier than 2D cameras

  • Multiple Lenses: Many 3D cameras use two or more lenses to capture multiple points of view and create a 3D effect. This array of lenses takes up more space and adds weight to the camera.
  • Larger Sensors: To capture depth information and create immersive 3D images, 3D cameras often employ larger image sensors. These sensors can contribute to the overall bulk and weight of the camera.
  • Specialized Optics: The lenses in 3D cameras may be larger or have specialized optical designs to facilitate depth perception. This can result in a more substantial lens assembly compared to a typical 2D camera.
  • Additional Hardware: Some 3D cameras have built-in displays, advanced image processing capabilities, and other features that require additional hardware. This extra hardware can make the camera bulkier and heavier.
  • Rugged Design: Certain 3D cameras, especially those designed for action or outdoor use, may have ruggedized or waterproof enclosures. This robust construction can add to the overall size and weight of the camera.
  • Heat Dissipation: 3D cameras with higher processing power or extended recording capabilities may require more effective heat dissipation mechanisms, such as larger heat sinks or additional cooling components.

While 3D cameras have traditionally been bulkier and heavier than their 2D counterparts, advancements in technology are leading to more compact and lightweight designs. For example, the Kandao QooCam EGO 3D, a top-rated 3D camera, is praised for its compact and lightweight design, weighing just 160 grams without its magnetic viewer.

Frequently asked questions

What is a 3D camera?
A 3D camera is an imaging device that allows the perception of depth in images, replicating three dimensions as experienced through human binocular vision.

How do 3D cameras work?
3D cameras use two or more lenses to record multiple points of view, capturing two similar images at slightly different angles. When each image is viewed by a different eye, the brain interprets the two 2D images as having depth.

What types of 3D camera rigs are there?
There are two main types of 3D camera rigs: a parallel rig, where the cameras are placed horizontally alongside each other, and a beamsplitter rig, where two cameras are placed at a 90-degree angle, pointing into a cube containing a mirror angled at 45 degrees.
