The process of creating technically solid images can seem a bit daunting. Yet there aren’t actually all that many variables a photographer has to contend with, nor that many things those variables directly influence. But, as with everything, the devil is in the details.
In a previous article, we looked at how the human brain, and in particular the brains of those in our audience, work to interpret the different elements of a photograph. In this article, we’ll look at things from a different perspective, that of the key technical choices that we as photographers must make when we create an image. In a final article, then, we’ll examine the links between the two, the ways that the choices we make can result in, not just technically sound images, but art.
On a technical level, the process of creating an image is pretty straightforward for most photographic genres. We head out into the urban or natural environment in search of subject matter. When we find something of interest, we point the camera, twist a knob or two, maybe twirl a ring, and, when the moment is right, release the shutter. That’s often pretty much it (don’t worry, we’ll highlight some important exceptions in a little bit).
Basic, modern photographic workflow.
When we get home, we grab our new loot off the camera and run it through a raw converter, likely making a few global adjustments here and there to some of our better captures. The best ones we might pull into Photoshop (or some other editor) for a few more granular tweaks, then send them off to a client, print them for a show, or throw them up on social media for our family, friends, or followers.
Technical Elements of an Image
The work we do in this process to seek out, capture, and develop an image typically impacts one of three aspects of the resulting photograph: the nature of the light, the subject matter, or the information content. The right side of the figure below highlights these broad categories and their primary technical components.
The key variables in image creation. Notice the many interconnections. While these introduce complexity, they also add flexibility, allowing us to occasionally do things like “fix it in Photoshop,” tweaking the color or quality of light a bit if we missed the optimal moment in the field or cropping out a distracting element near the corner of a frame.
The precise state of each of these elements is determined by a combination of the environment we choose to shoot in, the settings we use on our camera, and the post-processing we opt to do in the studio. One of the first things to note is that there isn’t a one-to-one relationship between the choices we make and the impacts those choices have on the resulting photograph. It’s a bit more complicated than that. For example, where and when we shoot can impact the lighting of a scene — think of the difference a few minutes can make between the golden and blue hours. Yet an image’s color balance can also be manipulated in post-processing (though obviously under some constraints). Decisions about the environment we shoot in and the post-processing we do can both have an impact on the same aspect of the final image. Conversely, a single choice we make can have multiple repercussions. Shooting a few minutes later might alter the light in a scene, but also allow that annoying guy standing in the middle of our frame to have moved on. That single decision affects both the lighting of and subject matter within a photograph.
While there aren’t all that many different technical aspects to an image, the degree of interconnectedness between them results in a fair amount of both complexity and flexibility. In the following few sections, we’ll look at some of these variables and their relationships in a bit more detail.
The single most important element of any image is the subject matter that we choose to capture. It’s the soul of what is said with a photograph, the story it tells. It says the most about who we are as individuals and what we find interesting about the world. Yet, the choice of subject matter is fundamentally constrained by our environment, something we often have surprisingly little control over. The world is a finicky, unstable, and oft-surprising place. Many of the skills related to the effective “use” of the environment as a photographic tool are, therefore, psychological: curiosity, vision, drive, patience, tenacity, a willingness to fail (over and over and over and...).
While the environment’s primary value photographically may be the subjects it provides for us to shoot, it also provides a source of light, determining the color, quality, and direction of the primary illumination in many genres of photography. These can certainly be manipulated after the fact in post-processing, but often only in comparatively limited ways.
Two images taken from our front deck a few months apart, a testament to how dramatically the quality and character of light can vary.
Together, the subject matter and the light also place significant constraints on the amount of information that can be gathered from a scene and how best to collect it. We’ll discuss these constraints in more detail in the following section.
Finally, notice that the link between artist and environment in the “Technical Elements” figure is dashed. It has, so far, been assumed that we have little direct control over the environment, that we’re observers rather than architects. There are genres of photography in which the artist has significantly more control — commercial, portraiture, food, architecture, tableaux, etc. In these genres, the photographer isn’t just discovering an existing environment, but actively creating or manipulating one. This can entail numerous technical skills — staging, lighting, posing, etc. — that are critical to achieving one’s objectives. A full discussion of the relevant techniques and considerations is well beyond the scope of this article. Yet, once the environment has been created, the broader discussion is still pertinent.
Modern cameras are marvels of technical prowess and computing power, but at the end of the day, the two most important decisions a photographer makes are still where to point the camera and when to release the shutter. Everything else is secondary. As alluded to above, these two decisions determine the subject matter and behaviors that will be recorded in an image, as well as how a scene will be lit. All the other buttons and knobs on the camera basically just select which information to collect from a scene and how much of it to gather. I would argue, however, that the most effective way to use those knobs has changed over time.
When I shot primarily on slide film, the objective of image exposure was a little different than it is now. One had to make the trade-offs necessary in-camera that would deliver as effective a visual representation of a scene directly onto the film as possible. With the advent of modern digital sensors, though, I’ve come to think about exposure in a different, um… light (sorry).
The information recorded on the sensor is no longer the final end state for most images. The camera simply collects information. How that information can most effectively be used to visually express an idea or emotion in the final image can be massaged or transformed in significant ways during post-processing. The goal of choosing an exposure is no longer to create an immediately appealing representation of the scene on film, but to maximize the amount of relevant information preserved. It’s a more information-theoretic perspective on exposure that we can aid with real-time data analysis in the field, e.g. through the use of luminosity or color channel histograms.
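To make that information-preserving mindset a little more concrete, here’s a rough sketch of the kind of clipping check a raw histogram gives us in the field. Everything here is illustrative: the helper name, the assumed 14-bit white level, and the synthetic frame are my own inventions, not any camera’s actual API.

```python
import numpy as np

def clipping_report(raw, white_level=16383, margin=0.03):
    """Fraction of pixels clipped, or within `margin` of the white level.

    raw: array of linear sensor values (assumed 14-bit, so 16383 = clipped).
    """
    near_white = white_level * (1.0 - margin)
    return {
        "clipped": float(np.mean(raw >= white_level)),
        "near_clip": float(np.mean(raw >= near_white)),
    }

# Synthetic example: a mid-grey frame with a single blown pixel.
frame = np.full((100, 100), 4000)
frame[0, 0] = 16383
report = clipping_report(frame)
```

A luminosity histogram hugging the right edge without spilling over it is the classic target of this way of thinking: as bright as possible (maximum signal) without irrecoverably clipping the highlights.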
Two exposures of the same scene: (left) making the compromises necessary to achieve a decent exposure straight out of camera, (right) intentionally preserving as much information as possible to support the final vision for the image.
The image pair above should help illustrate the difference. The image on the left is representative of the trade-offs that may be necessary to balance overexposing the sky against underexposing the foreground in the absence of a sensor with massive dynamic range as well as sophisticated post-processing algorithms. Such a trade-off might represent the best outcome from a tough lighting situation.
Modern digital sensors, however, provide substantially more latitude. The image on the right was intentionally underexposed by five stops to preserve as much color detail in the sky as possible. When the shadows were pulled up in post, more than enough information was retained to show what those sun rays were illuminating (see below). Exposure decisions made in the field should, these days, take the full image creation process into consideration. Camera settings can be reflective of the broader, final vision for an image, rather than more narrowly limited to creating the best unaltered visual.
Fynbos, South Africa. The original image was underexposed by 5 stops to retain as much information as possible around the sun and its rays.
Let’s think about things in a little bit more detail from the perspective of information content. There are two fundamental constraints on our ability to gather information about a scene. First, there’s a rate at which information is delivered from the elements within the environment to the sensor. This is determined by the number of photons of light flying around at any moment. The more light, the more photons; the more photons, the more information that can be conveyed about the environment to the sensor each second.
The second limitation results from the fact that elements of the environment may be moving, their location, pose, or expression changing over time. This limits how long we can collect information. A fast-moving animal might limit collection to a couple of thousandths of a second; a fleeting facial expression might limit collection to a couple of hundredths of a second; the slow twirling of the universe might limit collection to some fraction of a minute.
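These two constraints can be put on a rough quantitative footing. Photon arrival is random (a Poisson process), so the signal-to-noise ratio of a pixel grows only as the square root of the number of photons it collects. A sketch, with an arbitrary illustrative photon rate:

```python
import math

def snr(photon_rate, exposure_s):
    """Shot-noise-limited SNR: signal N over noise sqrt(N), i.e. sqrt(N)."""
    photons = photon_rate * exposure_s
    return math.sqrt(photons)

# Quadrupling the exposure time only doubles the SNR.
slow = snr(10_000, 1 / 25)   # 400 photons -> SNR of about 20
fast = snr(10_000, 1 / 100)  # 100 photons -> SNR of about 10
```

This is why the rate of illumination and the allowable collection duration jointly cap the information available: half the light or half the time costs us a fixed fraction of image quality, no matter what the camera settings are.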
The rate and maximum duration of information collection are fundamental physical constraints that we have no control over. What we can control is which aspect(s) of the available information we want to focus our attention on. Think about aperture, for example. At any given shutter speed and level of illumination, there’s a limited amount of information that can be collected. By using a narrow aperture, we can choose to collect information about the existence and position of objects both very close to the camera and quite far away. Yet, to do this, we give up a lot of information about hue, saturation, and even the precise locations of edges within a scene. These losses take the form of noise. By contrast, using a wide aperture would allow us to gather far more precise color and luminosity information about objects within the focal plane, but at the expense of detailed information about objects at other distances from the sensor. How we make this trade-off is an artistic decision that depends on our broader objectives for an image.
The image on the left was taken at f/3.2. It has a very shallow depth of field, but also very low noise, with buttery, saturated colors. The image on the right was taken at f/36. Coherent spatial information across a much broader range of depths is available, but at the expense of color precision. Both images were taken at the same shutter speed and under the same illumination so that the total amount of information available was the same.
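The spatial side of this trade-off can be estimated with the standard thin-lens depth-of-field approximations. The numbers below (a 50mm lens, a subject at 2m, and a 0.03mm circle of confusion typical for a full-frame sensor) are illustrative assumptions, not the settings used for the images above:

```python
def depth_of_field(f_mm, aperture, subject_mm, coc_mm=0.03):
    """Near/far limits of acceptable sharpness (simple hyperfocal approximation)."""
    h = f_mm ** 2 / (aperture * coc_mm)  # approximate hyperfocal distance
    near = h * subject_mm / (h + subject_mm)
    far = h * subject_mm / (h - subject_mm) if h > subject_mm else float("inf")
    return near, far

shallow = depth_of_field(50, 3.2, 2000)  # roughly 1.86m to 2.17m
deep = depth_of_field(50, 36, 2000)      # roughly 1.07m to 14.7m
```

At f/3.2 only about 30cm of the scene is rendered sharply; at f/36 the zone of sharpness stretches over more than a dozen meters, paid for with the stops of light (and hence color and luminosity precision) the narrow aperture discards.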
A similar tradeoff must be made with respect to shutter speed. If some elements of a scene are in motion, then a slower shutter speed means that we collect less information about the precise location of those things. This uncertainty takes the form of motion blur. On the other hand, with that slower shutter speed we will generally capture more information about precise color and luminosity values across the scene, especially in those subjects that aren’t moving. Alternatively, a faster shutter speed provides more precise information about the position of moving objects (less blur), but less precise information about their color or brightness (more noise). The same tradeoff is made with respect to camera shake.
Notice that I haven’t mentioned ISO. ISO isn’t something that we can change independently. Rather, it’s the gain, the amplification, that must be applied at a given exposure to bring the overall image up to a certain brightness. The problem is that when we amplify the signal, we also amplify the noise. The noise level is set by a combination of the available light, shutter speed, aperture, and sensor characteristics. The ISO setting (so long as we don’t saturate the highlights) just allows us to increase the apparent brightness of a scene so that it looks appropriately exposed to us. You can do that in camera with the ISO setting or as an exposure adjustment after the fact in your favorite raw converter. It doesn’t much matter which. You’ll get similar results either way. After five stops of lightening during post-processing, the foreground in the image of the Fynbos, above, has the noise equivalent of ISO 1,024 (though it was actually shot at ISO 32).
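That equivalence is just arithmetic in stops: each stop of brightening doubles the gain, amplifying signal and noise alike. A sketch of the Fynbos example (the `pushed_iso` helper is hypothetical, not a function from any real raw converter):

```python
def pushed_iso(base_iso, stops):
    """Effective ISO after brightening a linear raw capture by `stops` in post.

    Each stop doubles the gain and amplifies signal and noise alike, so the
    noise floor ends up where it would for this higher ISO in-camera."""
    return base_iso * 2 ** stops

# Shot at ISO 32, then lightened five stops in the raw converter:
effective = pushed_iso(32, 5)  # 1024, matching the noise level cited above
```

Whether those five stops of gain are applied by the camera’s ISO setting or by the exposure slider afterward, the information collected by the sensor is the same.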
There are obviously lots of other variables and options that can be adjusted on modern cameras. They nearly all help us to either maximize the quality of information that’s collected (e.g. long-exposure noise reduction) or ensure that the information we’re collecting comes from the right place in the scene (e.g. focus tracking systems). Getting the settings right in camera is critical. Once the shutter closes, we’ve gathered all the information we’re ever going to get about that scene at that moment.
That information can, however, be further manipulated. There are many different adjustments and transformations that may be made while post-processing. Yet, their effects fall into one of two categories: altering the subject matter of an image (fundamentally changing the information the image contains) or transforming the existing information to make the image more visually effective.
Alterations such as cropping, cloning, and compositing fall into the former category. They exclude (in the case of cropping) or replace (in the case of cloning or using composite techniques) elements of an image. These tools can help remove a distracting tourist from a landscape or introduce a commercial client’s whiskey bottle into a scene of rugged Scottish splendor. Such alterations to the contents of an image reflect which subjects the artist feels are important and which they feel aren’t. They can have a profound impact on the story the image tells — and for this reason, have sparked a number of interesting ethical debates about expectations for their use in different contexts and photographic genres.
The other tools available within raw converters or photo editors tend not to remove or replace information, but to mathematically transform it. The temperature, tint, hue, and saturation sliders, as well as the hue/saturation adjustment layer, alter color. The exposure, highlight, shadow, and black sliders, as well as the curves and levels layers, manipulate luminosity (sometimes in fairly complex ways that depend on the local luminosity gradient).
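To make “mathematically transform” concrete, here’s a minimal sketch of a contrast S-curve of the sort a curves adjustment applies, operating on luminosity values normalized to [0, 1]. The function and its `strength` parameter are illustrative assumptions, not any editor’s actual implementation:

```python
import numpy as np

def s_curve(x, strength=3.0):
    """Sigmoid contrast curve: darkens shadows, brightens highlights,
    and leaves middle grey (0.5) and the endpoints (0 and 1) fixed."""
    def f(v):
        return 1.0 / (1.0 + np.exp(-strength * (v - 0.5)))
    lo, hi = f(0.0), f(1.0)
    return (f(x) - lo) / (hi - lo)  # rescale so 0 -> 0 and 1 -> 1

tones = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
curved = s_curve(tones)  # shadows pushed down, highlights pulled up
```

No information is removed or replaced here; the same luminosity values are simply redistributed, which is what distinguishes these tools from cropping and cloning.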
There’s a nearly limitless trove of post-processing tutorials available on the web and on YouTube. Even a simple enumeration of the basic techniques goes well beyond what we can cover in this article. Think a bit, though, about some of the go-to methods you use in your standard editing workflow. In the next article, we’ll think about how some of these technical decisions and transformations impact our viewers' experience of an image.
How Does This Benefit Our Photography?
The objective of this article was to provide a somewhat higher-level perspective on the choices that we make about the environments we shoot in, the camera settings we use, and the post-processing we do. Those choices can affect a number of different technical elements in the final image and even interact with one another.
Key technical elements of a photograph.
The really interesting bit, though, comes about when we start to ask ourselves how we should make these choices. In a previous article, we looked at the key elements of a photograph from the perspective of how they might be interpreted by the human brain, the way the brain works to assign meaning and extract emotion from an image.
Key mechanisms by which the brain interprets a photograph.
How we turn technically sound photographs into works of art is a question that lives, at least in part, at the intersection of these two perspectives, in the connections between them. And that’s what we’ll look at in the final article in this series.