Seeing and Perceiving

by ralph


Believe it or not, in the first instant that we behold a scene, most of the "visual information" that we "see" is supplied by the brain and not the eyes. It takes only a fraction of a second for the eyes to make a couple of quick scans and supply the information that you're looking at a familiar room, or a face, or a mountain with red rocks. The incredible brain (yes, even of dimwits) supplies a full-blown "probable" image while the eye hastily scans in many directions to fill in the details and look for any new updates.

This also applies to the fine detail of a scene. It has long been known that only the center of the field of vision, the part imaged by the fovea, is capable of high resolution. The rest falls off into unsharp aberrations and field curvature, just like the image produced by a three dollar magnifying glass. But when you first behold that scene you assume you're simply "just not looking at those outlying areas" -- and when you do, lo and behold, they're sharp -- and as your eyes rapidly scan they paint the scene in fine detail, which your brain artfully stitches together to produce a wide and sharp view. All of this assumes that you have "normal" or corrected vision, of course.

Probably the most amazing creation of the brain is the rectilinear world of straight lines -- rooms, buildings and stretched strings -- that we consider correct. After all, like the much maligned fish, our eyes really present a fisheye view of the world to that incredible computer known as the brain, which then has to straighten it all out. If the roughly 180 degree view that we see were formed by rectilinear optics, all objects, even in the center, would appear to be at a nearly infinite distance. 'Tain't so, of course. That means the effective focal length at the center, the one that puts objects at familiar distances, has to taper off to near zero at the periphery -- the very definition of a fisheye lens. This puts the fisheye lens in an almost mystic category. It probably sees the world the way we actually see it, but not as we finally perceive it.
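For the optics-minded, here is a rough numeric sketch of that argument. It is my own illustration, not from the article, and the focal length is an arbitrary assumption: a rectilinear lens places a ray at angle theta at r = f*tan(theta), which blows up toward 90 degrees, while an equidistant fisheye mapping r = f*theta stays finite across the whole field.

```python
import math

f = 17.0   # mm, an arbitrary focal length chosen purely for illustration

# Where a ray at angle theta from the optical axis lands on the image,
# for a rectilinear lens (r = f*tan(theta)) versus an equidistant
# fisheye mapping (r = f*theta).
for deg in (0, 30, 60, 80, 89, 89.9):
    theta = math.radians(deg)
    rectilinear = f * math.tan(theta)   # blows up as theta approaches 90 degrees
    fisheye = f * theta                 # stays finite across the whole field
    print(f"{deg:5.1f} deg   rectilinear {rectilinear:10.1f} mm   fisheye {fisheye:6.1f} mm")

# To squeeze a near-180-degree rectilinear image onto a finite "sensor" you would
# have to shrink f toward zero, which is why even central objects would look
# impossibly tiny and distant -- while the fisheye mapping covers the field easily.
```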

There is plenty of evidence for all this. How often have you looked for an object on a familiar surface like a desk or table and not seen it? Then a minute later you look again and there it is, in an unbelievably obvious place. It shows that you just don't see the whole scene like the click of a shutter. Your first scan tells your brain that you're looking at the familiar table, which it assures you that you "see," but it took the slower process of actual vision to spot the new addition. There is a famous experiment that was once conducted in a college psychology class. The quiet proceedings were suddenly interrupted when the door burst open and three "hoods" tumbled in and proceeded to yell at and beat up each other. In a matter of seconds they abruptly left. The whole scene, it turned out, was prearranged by the professor, who then asked the class to write down an account of what had just happened. Incredibly, the versions of the event were miles apart, even though the young observers had excellent vision. The length of the incident was stated as being anywhere from a few seconds to several minutes. There was no agreement about the race, sex or clothing of the persons involved. Most of the class didn't even get the count of the "troublemakers" correct, coming up with anywhere from 2 to 5 persons. So much for eyewitnesses!!

This concept has great importance for photographers. Think how often the beginner is disappointed because the picture just didn't come out the way they (think they) saw it. Typically the mistake made is that they tried to shoot the perception of a scene and not the actual scene itself. The old philosophical question of whether the camera lies really has it all backwards. The camera lens really sees things more like the eye - once you've taken its superior sharpness, linearity and limited field into account. It's the perception of a scene that really lies, and unfortunately the brain likes to keep you unaware of this whole complex process. It's as if the engine of your car were to say, "I'll worry about the pistons, pushrods, fuel pumps, camshaft, spark plugs and stuff --- you just steer."

Here's a concrete example. Beginner Billy wants to take a photo of his girlfriend Cindy. They get together, he takes one look at her and thinks "wow, she's gorgeous" ---- click. When he gets the print he says, "the camera lied." Here's what really happened. The camera faithfully recorded the fact that the overhead fluorescent light made her look greenish, with smoky, dark eye sockets and a dark, receding chin, and that a mess of (background) objects were growing out of her head - and, by the way, she was having a real bad hair day. The beautiful girl Billy saw, in the image supplied by his brain, was really distilled from many views in different lighting, different dress and better hair days. It's interesting to realize that the brain isn't trying to deceive -- just to arrive at a more universal truth.

Here are some more neat brain tricks. When the rather slow eye/brain system tries to construct reality it knows its limitations, so it quickly scans the most important things first, like moving objects (predators? prey?), then faces and bright things. The background isn't going anywhere, so it gets minimal attention. That's why a distracting background is so easily overlooked.

Here's another one. The scanning eye has a second function. Just as the relatively few nerve endings in a fingertip can tell the difference between silk and wool, the scanning eye can detect textures (by moving the subject image past the relatively coarse retina) that are actually finer than it can resolve. That's why, if you tack a newspaper to the wall, step back to where you can barely read it, move back another 50% and shoot it with any halfway decent camera, you come up with an odd paradox. The camera has resolved the lettering at a greater distance than your eye can, but if you print it "life size," hang it on the wall and come back to where you first read it, you'll see that it's not totally sharp.
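If you like numbers, here's a back-of-the-envelope version of that paradox. Every figure in it is an assumption of mine for illustration -- 1.5 mm newsprint lettering, roughly 5 arcminutes of subtense to barely read a letter, a 50 mm lens with 4 micron pixels -- not anything measured by the author.

```python
import math

# Back-of-the-envelope sketch of the newspaper-on-the-wall paradox.
# All numbers below are assumptions chosen for illustration only.

letter_h = 1.5          # mm, x-height of small newsprint lettering (assumed)
read_limit = 5 / 60     # degrees a letter must subtend to be barely readable (~5 arcmin, assumed)
focal = 50.0            # mm, lens focal length (assumed)
pixel = 0.004           # mm, sensor pixel pitch (assumed, ~4 microns)

# Farthest distance at which the eye can still just read the lettering.
d_eye = letter_h / math.tan(math.radians(read_limit))   # roughly 1 meter

# Step back another 50% and shoot from there.
d_shoot = 1.5 * d_eye

# Size of one letter on the sensor, and how many pixels span it.
image_h = letter_h * focal / d_shoot      # thin-lens magnification
px_per_letter = image_h / pixel

print(f"eye's reading limit: {d_eye / 1000:.2f} m")
print(f"camera distance:     {d_shoot / 1000:.2f} m")
print(f"letter on sensor:    {px_per_letter:.0f} px tall")
# A dozen or so pixels per letter is enough to keep it legible, so the camera
# "reads" the page from farther away than the eye -- yet a dozen pixels per
# letter is also why a life-size print looks slightly soft from reading distance.
```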

A third brain trick is important to us. Just as most video and digital cameras have "auto white balance", so does the brain. The whole vision system doesn't give a fig about the Kelvin color temperature of the light but it does want to maximize your ability to see all the colors in a scene so a quick scan constructs a neutralized "gray card reading" to keep the whites white. That's why Billy missed the greenish lighting.
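For the digitally inclined, the camera-side version of that trick is often a gray-world style of correction. The snippet below is only a minimal sketch of that idea, assuming a floating-point RGB image array; it is not a claim about how any particular camera, or the brain, actually does it.

```python
import numpy as np

def gray_world_balance(img: np.ndarray) -> np.ndarray:
    """Crude gray-world white balance: scale each channel so the scene
    averages out to neutral gray, much like the quick "gray card reading"
    described above. `img` is assumed to be a float RGB array in [0, 1]."""
    means = img.reshape(-1, 3).mean(axis=0)   # average R, G, B over the frame
    gains = means.mean() / means              # push each channel toward the overall mean
    return np.clip(img * gains, 0.0, 1.0)

# Example: a frame with a greenish fluorescent cast gets pulled back toward neutral.
greenish = np.random.rand(480, 640, 3) * np.array([0.8, 1.0, 0.7])
balanced = gray_world_balance(greenish)
```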

So what a photographer has to accomplish is to see through the veil of perceptions to the sharp-edged reality underneath. Fortunately we've developed some pragmatic guidelines to help do that, such as deliberately paying attention to the background and really analyzing the play of light in the scene or face. The loss of depth perception that comes from using only a single lens is why we have to be more diligent about separating foreground and background tonally.

The most important thing is to be aware of the slowness of seeing and to work with it. Look at a subject long and hard before tripping the shutter, to let the eye really see everything. This is one reason why I judge still life shots a little more harshly than other scenes. The shooter has had time to really evaluate it and to perfect the composition, focus and lighting -- or at least they should have. Also I think that if you go through this deliberate process repeatedly it speeds up, hopefully to the point where you can create well-crafted instantaneous photojournalism.

Photographers really do see better.