Capturing Life: Through My Eyes And Camera Lens

The human eye is an incredibly complex organ, but it doesn't always see things the way a camera does. While both our eyes and cameras view the world through a lens, the lens in our eyes is very different from camera lenses. Our eyes and brains also adapt continuously to changing light, whereas a camera commits to a single exposure each time the shutter opens. Understanding these differences helps photographers create better images by playing to a camera's unique characteristics.

Characteristics | Values
--- | ---
Human eye | Limited range of the electromagnetic spectrum
Camera | Sees beyond the human range of the electromagnetic spectrum
Human eye | Perceives three colour bands: red, green, and blue
Hyperspectral camera | Carves the electromagnetic spectrum into hundreds of bands
Human eye | Can't stop motion
Camera | Can capture split seconds of time
Human eye | Sees in three dimensions
Camera | Sees in two dimensions
Human eye | Sees through two lenses
Camera | Sees through one lens
Human eye | Automatically adjusts to changing light
Camera | Records light that hits the sensor at one aperture setting

Cameras can't produce a perfect exposure like the human eye

Cameras and the human eye see and capture images in different ways. While both view the world through a lens, the lens in our eyes is very different from camera lenses. The human eye adapts to an enormous range of light levels and is far better attuned to recognising detail in low light than a camera.

The image sensor in a camera sees light in a linear way: it simply counts the photons that arrive. A consequence of this linearity is that the brightest stop of light occupies half of all the tonal levels the sensor can record. This is not how the human eye perceives light. Our eyes are designed to see more detail in darker areas than in extremely bright areas. This presents a challenge for engineers, who must transpose a linear index into a non-linear, human-like system.
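As a rough illustration of that remapping, the sketch below applies the standard sRGB transfer curve, one widely used way of converting linear sensor values to a perceptual scale (the code and values here are illustrative, not taken from any particular camera):

```python
import numpy as np

def srgb_encode(linear):
    """Map linear sensor values (0..1) onto the perceptual sRGB scale.

    The piecewise curve is the standard sRGB transfer function: a short
    linear toe for the very darkest values, then a roughly 1/2.4 power
    curve that hands far more output levels to shadows than a straight
    linear mapping would.
    """
    linear = np.clip(linear, 0.0, 1.0)
    return np.where(
        linear <= 0.0031308,
        12.92 * linear,
        1.055 * np.power(linear, 1.0 / 2.4) - 0.055,
    )

# A surface reflecting only 18% of the light (photographic mid-grey)
# lands near the middle of the encoded range, not near the bottom:
print(srgb_encode(np.array([0.18])))  # ~0.46
```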

The dynamic range of the human eye is much larger than that of a camera. As a result, the camera has to guess what matters in that range and expose the image for a good grey average. The results can seem jarring to us because, although our eyes have similar instantaneous limitations, our brains interpret what we see in real time: we glance between shadows and highlights as we scan a scene, and our brains stitch it all together. A still photo must commit to one exposure, sacrificing detail at either the bright or the dark extreme.
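As a toy version of that "grey average" guess, the sketch below meters a scene by simple averaging; the 18% mid-grey target is a photographic convention, and real metering systems are far more sophisticated than this:

```python
import numpy as np

def average_meter(linear_scene, target=0.18):
    """Suggest an exposure shift, in stops, that moves the scene's mean
    luminance toward photographic mid-grey (about 18% reflectance).

    This mimics simple averaging metering: the camera cannot know what
    matters in the frame, so it exposes for a 'good grey average'.
    """
    mean = linear_scene.mean()
    return np.log2(target / mean)  # positive = open up, negative = stop down

# A scene dominated by bright sky meters dark: the camera pulls the
# average down to grey and underexposes everything else in the frame.
sky_heavy = np.concatenate([np.full(700, 0.8), np.full(300, 0.05)])
print(f"{average_meter(sky_heavy):+.1f} stops")  # about -1.7 stops
```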

Additionally, our brains are good at noticing and adapting to changes in brightness, but not good at measuring absolute light levels. A camera, on the other hand, can only measure the absolute light level. It doesn't understand how we perceive the scene, and our perception can change over time. For example, a room may seem "just plain dark" when we first enter it, but after a few minutes, it will feel like "normal" light as our eyes adjust.

In summary, while cameras and the human eye share some similarities in how they capture images, there are significant differences in their capabilities and limitations. These differences can present challenges for photographers, who must understand how their cameras work to create images that match their artistic vision.

Cameras can show things the human eye can't see

Cameras can show things that the human eye can't see. While the human eye is able to observe fast-moving events, it cannot freeze motion like a camera can. With enough light, a camera can capture motion that is too fast for the human eye to perceive, such as a bursting balloon or a lightbulb smashing on the ground. Cameras can also record blur from movement over much longer periods of time, creating soft and abstract images. For example, a camera can capture blurred clouds moving across the sky or blurred waves along a seafront.
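For a feel for the numbers involved in freezing motion, here is a rough back-of-the-envelope sketch; the thin-lens approximation and the assumed camera (50 mm lens, 36 mm sensor, 6000 pixels wide) are illustrative choices:

```python
def motion_blur_pixels(subject_speed_mps, exposure_s, distance_m,
                       focal_mm=50.0, sensor_width_mm=36.0,
                       image_width_px=6000):
    """Estimate how many pixels a moving subject smears across
    during one exposure (simple thin-lens approximation)."""
    # Distance the subject travels while the shutter is open (metres)
    travel_m = subject_speed_mps * exposure_s
    # Magnification: how large the subject appears on the sensor
    magnification = focal_mm / 1000.0 / distance_m
    blur_mm = travel_m * 1000.0 * magnification
    return blur_mm / sensor_width_mm * image_width_px

# A walker (1.5 m/s) at 5 m: sharp at 1/1000 s, streaked at 1/30 s.
print(motion_blur_pixels(1.5, 1 / 1000, 5.0))  # ~2.5 px (effectively frozen)
print(motion_blur_pixels(1.5, 1 / 30, 5.0))    # ~83 px (visible streak)
```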

The human eye can only achieve a shallow depth of field by focusing on something very close. In comparison, a camera can blur the background behind a subject standing a few metres away, drawing more attention to them.
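The near and far limits of that sharp zone can be estimated with the standard hyperfocal-distance formulas, sketched below; the 0.03 mm circle of confusion is a common full-frame convention assumed here:

```python
def depth_of_field(focal_mm, f_number, distance_m, coc_mm=0.03):
    """Near/far limits of acceptable sharpness, using the standard
    hyperfocal-distance formulas (coc_mm = circle of confusion)."""
    s = distance_m * 1000.0  # subject distance in mm
    H = focal_mm**2 / (f_number * coc_mm) + focal_mm  # hyperfocal distance
    near = s * (H - focal_mm) / (H + s - 2 * focal_mm)
    far = s * (H - focal_mm) / (H - s) if s < H else float("inf")
    return near / 1000.0, far / 1000.0  # back to metres

# An 85 mm lens at f/1.8, subject 3 m away: sharpness spans ~13 cm,
# so the background a few metres behind dissolves into blur.
near, far = depth_of_field(85, 1.8, 3.0)
print(f"{near:.2f} m to {far:.2f} m")  # ~2.94 m to ~3.07 m
```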

The human eye is also limited in its field of view. Using a telephoto lens with a digital camera gives a very narrow field of view, allowing users to capture close-up images of distant subjects. On the other hand, a wide-angle lens provides a broad field of view, capturing more scenery in the image.
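The angle of view follows directly from the focal length and sensor size; here is a quick sketch for a rectilinear lens on an assumed 36 mm-wide sensor:

```python
import math

def field_of_view_deg(focal_mm, sensor_mm=36.0):
    """Horizontal angle of view for a simple (rectilinear) lens."""
    return 2 * math.degrees(math.atan(sensor_mm / (2 * focal_mm)))

for focal in (16, 50, 400):  # wide-angle, 'normal', telephoto
    print(f"{focal:>3} mm -> {field_of_view_deg(focal):5.1f} degrees")
# 16 mm -> 96.7 degrees, 50 mm -> 39.6 degrees, 400 mm -> 5.2 degrees
```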

Another advantage of cameras is their ability to see very small details. Macro lenses can capture tiny details of plants and insects, revealing an alien world that the naked eye cannot perceive.

Additionally, cameras can see in black and white, reducing images to shades of gray and emphasizing differences in tone and shape. They can also capture images in low-light conditions by collecting light over a long period and using it to form a single image. This allows users to photograph very faint stars or create daytime-like images of the moonlit landscape.
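A minimal sketch of that black-and-white reduction, using the Rec. 709 luminance weights (one standard choice, assumed here) to collapse colour into tone:

```python
import numpy as np

def to_grayscale(rgb):
    """Reduce an RGB image (H x W x 3, linear values 0..1) to luminance.

    The Rec. 709 weights reflect the eye's uneven sensitivity to red,
    green, and blue; the result keeps tone while discarding hue.
    """
    weights = np.array([0.2126, 0.7152, 0.0722])
    return rgb @ weights

# A saturated red and a mid-green differ strongly in hue but land on
# similar grey tones, which is why B&W emphasises shape over colour.
pixels = np.array([[1.0, 0.0, 0.0], [0.0, 0.35, 0.0]])
print(to_grayscale(pixels))  # [0.2126, 0.2503]
```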

In the future, cameras like HyperCam, an affordable hyperspectral imaging camera, may be integrated into mobile phones. HyperCam can see beyond what human eyes are capable of by using visible light and invisible near-infrared light to create detailed images. It can be used for various purposes, including food inspection, biometric identification, and medical applications.

Cameras record images in two dimensions, not three

It is important to understand the differences between the way a camera records light and the way the brain interprets the information sent by our eyes. Being able to picture how a camera will record a scene is called visualisation, a skill that all photographers need to practise.

Firstly, cameras record images in two dimensions, not three. Humans have two eyes, and each looks at something from a slightly different angle. This enables us to judge distance and determine how close objects are to each other. Stereoscopic cameras aside, cameras see through a single lens and record the world in two-dimensional images. We may not even realise this until it is pointed out, as we are accustomed to seeing two-dimensional representations in the form of paintings, drawings, and photographs.
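A toy triangulation shows why two viewpoints reveal distance; the 6.5 cm baseline roughly matches human eye spacing, and the pixel focal length is an arbitrary assumption:

```python
def depth_from_disparity(baseline_m, focal_px, disparity_px):
    """Triangulate distance from two viewpoints, as stereo vision does:
    the nearer the object, the larger the shift (disparity) between views."""
    return baseline_m * focal_px / disparity_px

# Two 'eyes' 6.5 cm apart with a 1500-pixel focal length:
for d in (50, 10, 2):  # disparity in pixels
    print(f"disparity {d:>2} px -> {depth_from_disparity(0.065, 1500, d):6.2f} m")
# Large disparity = close (1.95 m); tiny disparity = far (48.75 m).
```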

Photographers do not have to worry about rendering depth too much as cameras take care of the business of accurately recording the scene in front of the lens. However, we must always be aware that the camera records an image with different characteristics than the one we perceive with our eyes.

Digital cameras have made the process of visualisation much easier. You can look at the image on the camera's LCD screen to see how it has been rendered in two dimensions. You can also use the camera's Live View feed to compose the image, rather than a viewfinder. These tools help you visualise how your photos will look after you have processed them.

Cameras record everything in a scene, not just what's interesting

Cameras and the human eye have many differences, and these can affect the final image. While a camera can record everything in a scene, our eyes and brains are selective – we tend to look at what interests us and ignore the rest. This is something photographers can use to their advantage, by guiding the viewer to look at what they deem important.

For example, if you're taking a portrait and someone walks by in the background, your brain may ignore it, but the camera will record it. Eliminating these distractions helps to simplify the composition and improve the photo.

Our eyes are constantly moving, taking in different aspects of a scene and adjusting to changes in brightness. The brain takes this information and builds it up into a selective, moving image. Our eyes and brains take numerous snapshots, and what we perceive is the combination of those snapshots. A camera, on the other hand, simply records the light that hits the sensor at one aperture setting. It can only have one exposure for the whole scene.

A camera is a light gatherer. It gathers photons and adds them up during an exposure. As more and more photons hit the camera's sensor, the image gets brighter. This means that if the shutter speed is too short, the final image will be underexposed, and if the shutter is open too long, the image will be overexposed.
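The sketch below simulates that photon-gathering for a single pixel; the Poisson arrival model is standard physics, while the photon rate and the 10,000-photon "full well" are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def expose(photon_rate, shutter_s, full_well=10_000):
    """Simulate a sensor pixel as a photon counter.

    Photon arrivals are Poisson-distributed; the pixel sums them for as
    long as the shutter is open, and clips ('blows out') at full well.
    """
    photons = rng.poisson(photon_rate * shutter_s)
    return min(photons, full_well)

rate = 200_000  # photons per second reaching this pixel
for t in (1 / 4000, 1 / 60, 1.0):
    print(f"{t:g} s shutter -> {expose(rate, t)} photons")
# Too short: barely any signal (underexposed); too long: the pixel clips.
```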

The human eye adapts to an enormous range of light levels and is simply more attuned to recognising detail in low light than a camera. At low light levels, our eyes are less sensitive to colour than normal, whereas camera sensors keep the same colour sensitivity regardless. That's why photographs taken in low light often show more colour than we remember seeing.

The longer the shutter remains open, the more light can enter the camera and hit the sensor. Therefore, long exposures can bring out objects that are faint in the sky, whereas our eyes will perceive no extra detail by looking at something longer.

Cameras can't adjust to changing light like the human eye

Cameras and the human eye have different ways of seeing the world. While cameras can be useful tools for capturing images, they cannot adjust to changing light in the same way that the human eye can.

The human eye is remarkably adaptable and can register light in a way that cameras cannot. Eyes are designed to see more detail in darker areas than in extremely bright ones. Cameras, on the other hand, record light in a linear fashion, which is why unprocessed sensor data looks very dark and lacking in detail to the human eye. A camera's image sensor registers light by volume, like a bucket filling with photons, and half of the available raw levels end up devoted to the brightest stop of light hitting the sensor.

This disparity presents a challenge for engineers: how to transpose a linear index into a non-linear or human system. While cameras can capture a wide range of tones or shades, the distribution of these tones often results in images that appear too dark. This is especially true when capturing images with a mix of bright and dark areas, such as a bright sky with detailed shadows. While a camera can capture detail in the bright portions, the shadows often turn completely black.
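To make the distribution concrete, here is how the raw levels of a linear sensor divide up stop by stop (a 12-bit sensor with 4,096 levels is assumed for illustration):

```python
# Levels available to each stop of a 12-bit linear sensor (4096 levels).
# Each stop down halves the light, so it also halves the raw values left:
# the brightest stop alone takes half of the entire registry.
levels = 4096
for stop in range(1, 7):
    top, bottom = levels // 2 ** (stop - 1), levels // 2 ** stop
    print(f"stop {stop} (from brightest): {top - bottom} levels")
# stop 1: 2048 levels ... stop 6: 64 levels -- shadow detail is starved,
# which is why linear data must be remapped before it looks natural.
```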

Additionally, cameras and eyes diverge in low light. Human eyes contain two types of light-sensitive cells: cones, which see colour, and rods, which are far more sensitive but effectively colour-blind, so our night vision is dim and nearly monochrome. That is why low-light events like the Northern Lights often look better through a phone camera: smartphone cameras use longer exposure times, larger apertures, and higher ISO settings to gather more light, then enhance the result using computational photography and AI.
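One ingredient of that computational approach is frame stacking: averaging many short, noisy captures of the same scene. The sketch below shows the principle only; the Gaussian read-noise model is an assumption, and real night modes also align frames and deal with motion:

```python
import numpy as np

rng = np.random.default_rng(1)

def stack_frames(true_signal, n_frames, read_noise=0.05):
    """Average several short, noisy exposures into one cleaner image.

    The signal is identical in every frame while the noise is random, so
    the noise shrinks roughly with the square root of the frame count.
    """
    frames = true_signal + rng.normal(0, read_noise,
                                      (n_frames,) + true_signal.shape)
    return frames.mean(axis=0)

scene = np.full(100_000, 0.1)  # a faint, uniform patch of sky
for n in (1, 16):
    noise = (stack_frames(scene, n) - scene).std()
    print(f"{n:>2} frames -> residual noise {noise:.4f}")
# 1 frame -> ~0.05, 16 frames -> ~0.0125 (4x lower)
```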

Overall, while cameras can capture images that the human eye cannot, they also face limitations when it comes to adjusting to changing light conditions.

Frequently asked questions

How is a camera's "eye" different from our own?
The camera's "eye" is different from our own. It requires a shutter, which our eyes do not need. Our eyes, along with our brain, can automatically produce a perfect exposure. A camera, on the other hand, is a light gatherer. It gathers photons and adds them up during an exposure. This means that as long as the shutter is open, the image continues to get brighter.

Why does it matter how my camera "sees"?
Understanding how your camera "sees" is the key to taking charge of your camera and making the images you envision. You can adjust settings like shutter speed, aperture, and ISO to capture more detail in low-light settings, or to freeze motion in fast-moving subjects.
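Those three settings trade off against each other through a single exposure value (EV); the quick sketch below uses the standard EV definition, with arbitrary example settings:

```python
import math

def exposure_value(f_number, shutter_s, iso=100):
    """Scene EV (referenced to ISO 100), the standard measure tying
    aperture, shutter speed, and sensitivity together:
    EV = log2(N^2 / t) - log2(ISO / 100)."""
    return math.log2(f_number**2 / shutter_s) - math.log2(iso / 100)

# These three settings land within a tenth of a stop of EV 11 -- the
# same total exposure, but different trade-offs in motion freezing,
# depth of field, and noise.
print(exposure_value(8.0, 1 / 30, 100))    # ~10.9 -> deep focus
print(exposure_value(2.8, 1 / 250, 100))   # ~10.9 -> frozen motion
print(exposure_value(8.0, 1 / 250, 800))   # ~11.0 -> dim light, more noise
```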

How do our eyes see a scene differently from a camera?
Our eyes are constantly moving, taking in different aspects of a scene and adjusting to changes in brightness. Our brain takes this information and builds it up into a selective, moving image. Cameras, on the other hand, record everything within the frame. They also see the world in two dimensions, whereas we have stereoscopic vision.

Can a camera show things my eyes can't see?
Yes. Because the camera sensor is a light gatherer, we can show things in our photos that our eyes simply can't see. For example, long exposures can bring out objects that are faint in the sky, or capture the movement of stars, which our eyes cannot perceive.
