Giant leap backwards

Sitting on my desk is an object that could have been left over from the set of a sci-fi movie. It is a scanner with a sinister Darth Vader look to it, an Imacon Flextight virtual-drum scanner. When this technology first appeared, it didn’t create any revolutions in the publishing industry—it was simply a desktop version of what we already had. But it did mark a leap forward for this magazine. Finally we were able to control all aspects of transparency and negative-film scanning. Its creations have featured on these pages since issue 41.

Scanners scan photographs, converting analogue information, warts and all, into digital information. And the Flextight does this elegantly. I (the art director) make a seat-of-the-pants call on optimum size, contrast, saturation, exposure and information content for each slide, and if I get it wrong, I simply rescan the image choosing different settings.

OK, so the scanner is handy in a geeky, control-freak sort of way, but why should that be elegant? And why should you, a reader interested in the look of photos in our magazine but not really moved by the technicalities of magazine production, care? Why, indeed, am I boring you with this?

Well, there is a bigger issue concerning digital imaging that you may well care about, especially if you are interested in photography, own a digital camera, or are thinking of buying one.

Publishers and photographers will tell you that the most significant technological leap forward for them in recent times was the emergence of digital photography. Some publishers now insist on digital images from their photographers/suppliers. Digitals offer significant cost reductions compared with film, as material, developing and scanning are no longer necessary. Digital is rapidly replacing film photography in consumer and professional markets. This magazine now receives about half of its photography in digital format.

Digital photos are relatively easy to spot in the magazine. If a photograph is vibrant, highly contrasted, highly saturated, with great depth and rich colour, chances are it was shot on film. The dull, lifeless, flat ones are mostly digitals. Hold on a minute—this is blasphemy! Everyone knows that digital photography is an advance…isn’t it?

Outside of the sales-pitch hype and media raves, and even the industry “wisdom”, the digital revolution is not quite all it’s cracked up to be. In theory, digitals can equal or even surpass the quality of SLR film cameras, but this potential is seldom matched in practice. From a quality perspective, digitals still present no real challenge for film unless the photographer has invested in top-end studio equipment (which can cost the same as a new car), and assimilated a lot of new knowledge. But many photographers are seduced into using digital cameras without such an investment because anyone can point, shoot and produce a passable digital photo, so the story starts and ends there.

I think the problem is really two problems. One can be nailed to the analogue/digital divide, the fact that digitals sample the world rather than record it. Sample size is the main problem, and at the professional end digital cameras have already been evolving to remedy this. But there is another problem, around which there is much talk but little convincing action. It has to do with the fact that digital images bear colours that are assembled out of light. A printer uses pigment to assemble colour, which is not just incompatible—it is a completely opposite process. The radical transformation of screen-colour to ink can, and often does, lead to mediocre results. Before we consider why this is so, let’s look at image capture.

Our world is analogue. View the world through a telescope, bifocals, microscope or electron-microscope, and you’ll find different layers of information occupying the same space. We live on a human scale, but the universe contains larger and smaller vistas with one scale merging seamlessly into the next. Information, which passes through a lens (carried on light), can be plucked from anywhere along the near-infinite information-continuum. Better lenses collect fatter slices.

Film too is analogue, although there are technical limits to what can be recorded on its chemical surface: noise begins to dominate information as the scale of detail approaches the size of a film’s grain. But while it has limits, film also has latitude. It collects extra information that can be drawn on to correct a photograph’s deficiencies. You can still produce a good image from over- or underexposed pictures.

But a digital camera is a vastly different beast. Slightly overexposed images have no data in highlight areas, and underexposed images lose significant shadow detail. There is no latitude for correction if data is absent. Digitals build a tiled representation, like a pointillist’s painting, of the information that pours through the lens. Detail stops at the pixel level (pixel is shorthand for “picture element”—the smallest complete sample of an image) because each sensor element in the camera’s sensor array can record only the intensity of a single primary colour of light. It takes three such elements to create a colour/intensity value for a pixel. The technical limit on any information that can be garnered through the lens is tied to the number of sensors available in the camera’s array, which is usually hyped as a megapixel number (where higher is theoretically better, although the camera’s processing/interpolation system actually plays a bigger role in final quality). A photographer can, of course, collect less information by choosing to photograph at lower sample rates in order to pack more images into the camera’s memory.

Comparing the grain/emulsion limits imposed by film to the megapixel ceiling of digital cameras, film wins hands down. Photographic film can be made with a higher spatial resolution than any other type of imaging detector, and because of its logarithmic response to light, has a wider dynamic range than most digital detectors. For example, Agfa 10E56 holographic film, used in scientific applications, has a resolution equivalent to a pixel size of just 0.125 micrometres and a dynamic range of over five orders of magnitude for recording brightness, compared with typical digital sensors that might have 4–12 micrometre pixels and a dynamic range of 3–4 orders of magnitude. Even an off-the-shelf brand of 35mm film, such as Fuji’s Velvia (which we use a lot), holds detail equivalent to a 25+ megapixel array.
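Purely to put that quoted pitch in perspective, here is the arithmetic as a small Python sketch. The 36 x 24 mm frame is the standard 35mm format; the result is a theoretical ceiling implied by those numbers, not a claim about what any real scan delivers.

```python
# Theoretical sample count of a full 35mm frame (36 x 24 mm) at the
# 0.125-micrometre pixel equivalent quoted for Agfa 10E56 holographic film.
FRAME_WIDTH_MM, FRAME_HEIGHT_MM = 36, 24
PITCH_UM = 0.125                                 # micrometres per "pixel"

width_px = FRAME_WIDTH_MM * 1000 / PITCH_UM      # mm -> micrometres -> samples
height_px = FRAME_HEIGHT_MM * 1000 / PITCH_UM
print(f"{width_px * height_px / 1e9:.0f} gigapixels")   # roughly 55 gigapixels
```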

I can, therefore, scan a Velvia slide—which is the size of a postage stamp—on the Flextight and produce a crisp image that easily fills an A1 poster (1000 x enlargement of image area), and amazingly, every pixel on the newly created digitized file will contain a unique piece of information. I know that if I scan the film for a larger file, finer details recorded on the film’s emulsion will emerge that weren’t evident at the preceding step. Film always captures as much information as lens and grain allow. A digital image contains no untapped reservoir of information. It cannot be enlarged. I can run a digital at A4 only if it was shot at A4. The vast bulk of digital images submitted to this magazine are shot on cameras rated at 6 megapixels or fewer.

Sensor arrays are expensive items, so affordable digitals have small arrays and take smallish images: A5 or even A6 is a default size on many cameras and A5 is often the size-ceiling for cheaper cameras, while A4 can be a ceiling for better ones. A3 photography is not on the menu unless you part with a lot of cash.
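To make those paper sizes concrete, here is a rough Python calculation of the pixel counts they demand at 300 dpi, the print resolution discussed further on. The figures are approximate and assume no cropping.

```python
# Approximate pixel dimensions and megapixel counts needed to print
# standard paper sizes at 300 dpi (an illustrative print target).
MM_PER_INCH = 25.4
PRINT_DPI = 300

paper_sizes_mm = {"A6": (105, 148), "A5": (148, 210),
                  "A4": (210, 297), "A3": (297, 420)}

for name, (w_mm, h_mm) in paper_sizes_mm.items():
    w_px = round(w_mm / MM_PER_INCH * PRINT_DPI)
    h_px = round(h_mm / MM_PER_INCH * PRINT_DPI)
    print(f"{name}: {w_px} x {h_px} px  (~{w_px * h_px / 1e6:.1f} megapixels)")

# A6 ~2.2 MP, A5 ~4.3 MP, A4 ~8.7 MP, A3 ~17.4 MP -- which is why a
# 6-megapixel camera tops out somewhere between A5 and A4.
```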

This is the least tractable problem posed by digital camera images: a built-in and often very low information-ceiling (file size). If you scan film to run at A4 size, then change your mind and decide you want to run it at A3, you have information-rich film to go back to. What you won’t do is digitally resize your scanned image, because a digital file—whether it has been scanned or is a digital camera capture—contains all the information it will ever contain. Enlarging it simply draws attention to missing information.

But digitals are convenient if you want to build a photo-library on your computer, view images on screen, email them to friends, or sell a lot of stuff on Trademe. That convenience starts to wear thin when you hit the print button. You may have noticed the images that spit out of your desktop printer don’t match the seemingly excellent screen rendering of your photo. You blamed the printer, right? OK, maybe some quality issues stem from your printer, but it is just as likely that the problem is intrinsic to the photograph. It arises from the different requirements of screen and printer, and your digital camera has loaded everything in the screen’s favour.

Digital image files are commonly measured in dpi (dots per inch). An image looks good on your screen at just 72 dots per inch. This is because the average screen can render from a huge palette of colours at every position (256 intensities of light to the power of three channels = 16,777,216 colours). For a printer to even roughly approximate that on-screen performance, the file needs to be around 300 dpi. In other words, it has to be roughly four times the height and four times the width, which is to say about 16 times the file size, to match your screen without degrading. Your printer can only lay down a fixed volume of ink, from 0–100 per cent, in each of its four channels. This dynamic applies equally to a desktop printer and a commercial web-offset press.
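As a minimal sketch of that screen-versus-print arithmetic: strictly, 300/72 is a little over four, so the pixel multiplier lands nearer 17 than 16. The 1024 x 768 screen below is just an assumed example.

```python
# Pixel budget for the same physical size on screen (72 dpi) vs in print (300 dpi).
SCREEN_DPI, PRINT_DPI = 72, 300

linear_factor = PRINT_DPI / SCREEN_DPI     # ~4.2x taller and ~4.2x wider
area_factor = linear_factor ** 2           # ~17x as many pixels in total

screen_px = (1024, 768)                    # an assumed full-screen image
print_px = tuple(round(n * linear_factor) for n in screen_px)
print(f"x{linear_factor:.1f} linear, x{area_factor:.0f} pixels, {print_px}")
# -> x4.2 linear, x17 pixels, (4267, 3200)
```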

Then there is the problem of memory—storing images in the camera until you can download them. A digital camera’s user manual might boast that the camera holds 500 images in its memory, but if those images were a useful size, say A3 print-sized images (at 50 MB each uncompressed), that would require a lot of built-in memory (25 gigabytes)! To shoot these larger pictures, you would have to constantly download them to a computer or storage device, which in turn would fill up its hard drive in no time.
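The storage arithmetic is easy to check; the 500-image and 50 MB figures are simply the ones quoted above.

```python
# Storage needed if every shot were an uncompressed A3-sized file.
images = 500
mb_per_image = 50                                  # ~A3 print size, uncompressed
print(f"{images * mb_per_image / 1000:.0f} GB")    # 25 GB
```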

To ameliorate this storage problem, camera manufacturers have embraced image compression. A common form of compression, standard on most cameras, is called JPEG (Joint Photographic Experts Group). JPEG compression reduces image file size considerably. This system separates images into colour and detail information, compressing colour more than detail because eyes are more sensitive to detail than to colour. It also sorts the detail information into fine and coarse detail and discards fine detail (because our eyes are more sensitive to coarse detail than to fine detail). The end result is a tiled appearance where blocks contain crude and smeared-out information. At the highest quality (lowest compression ratio), the blocks are very small and JPEG compression can be quite unobtrusive, but at high compression a file becomes heavily degraded. As compression goes up, images show increasingly visible defects. Regardless of the compression ratio you opt for (or default to) when you compress an image, you are throwing away information, and that lost information can never be restored to the image.
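If you want to see the trade-off for yourself, here is a short sketch using the Pillow library; photo.tif is a placeholder for any uncompressed original you have to hand, and the quality settings are arbitrary examples.

```python
# Re-save one image at several JPEG quality settings and compare file sizes.
from io import BytesIO
from PIL import Image            # pip install Pillow

original = Image.open("photo.tif").convert("RGB")    # placeholder filename

for quality in (95, 75, 50, 10):
    buffer = BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    print(f"quality {quality:>3}: {buffer.tell() / 1024:8.1f} kB")

# The files shrink dramatically as quality drops, but the discarded colour
# and fine-detail information cannot be recovered by re-saving at a higher
# quality afterwards.
```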

Colour does not exist in the world the way we perceive it (adhering to objects); it is a sensation in the brain triggered by wavelengths of light received at the eye. Essentially, your mind paints colour on things based on their absorption/reflection of light frequencies. A differently wired brain-eye combination will paint the world from a different palette. A flower which is pink to you might be grey to a cat and violet with unusual blue blotches to a bee. Naturally it is what we can see that interests us. The part of the electromagnetic spectrum that we are sensitive to is called the visible spectrum.

We see a range of colours in the visible spectrum because our eyes contain three types of cone cells spread throughout the retina (but concentrated toward the centre). These are photoreceptors, each responding to a range of wavelengths: yellow-green (longer wavelength), blue-green (medium) and blue-violet (short). We see the colour yellow when the yellow-green receptor is stimulated slightly more than the blue-green receptor. If yellow-green stimulation is somewhat stronger, red is perceived. Blue-like hues result from stronger stimulation of the blue-violet receptors. A weighting in these receptors toward green explains why we are more sensitive to green than to the other primary colours.

Human colour perception was first quantified as a colour space by the Commission Internationale de l’Éclairage in 1931 (the CIE 1931 colour space). As we cannot see colour outside of this colour space, it draws a boundary around the visible spectrum. Colour rendering and display devices work with subsets of this colour space (no device can yet emulate the entire range of colours we can see).

TVs and monitors paint with light—they render colour by making coloured phosphors glow. These devices use the RGB (Red, Green, Blue) colour space with varying levels of efficiency, and each device has its own footprint, or gamut, within the colour space. Combinations of these three primary colours can render a large chunk of the visible spectrum. Dynamic range is greatly enhanced by intensity, which is why luminescent devices can maximise the RGB colour space.

Digital cameras also work in RGB gamuts. They have wavelength-sensitive receptors, similar to the photoreceptor cones in your eyes. Film is another medium that uses the RGB colour space: it has blue-, green- and red-sensitive chemical layers. With the luxury of a luminescent display or capture, any well-engineered device can have a gamut that emulates a large subset of the visible spectrum.

Without that luxury, it is a different matter. Printing relies on reflected light, which severely restricts the possible size of a device’s gamut. To print colours you have to lay down pigments. To get white you subtract ink. Paints, dyes and inks are said to be subtractive colours. Systems that use light to render colour are additive. When you mix two additive colours, the hybrid is lighter than either original. When you mix pigments, the hybrid colour is darker than either original.

Printers cope with this reverse dynamic by using sleight of hand. They have their own set of primary colours, cyan, magenta and yellow, which are fusions of the additive primary colours: blue, green and red. To get cleaner blacks and lessen ink saturation they add a fourth pigment: black. A printer’s colour system is called CMYK (black is K, denoting the printer’s key plate) or 4-colour. Images reproduced in this magazine show how vibrant and natural-looking 4-colour printing can be, but the reality is that CMYK offers a much smaller range of possible colours than RGB does.
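The relationship between the two sets of primaries can be written down as a naive formula. The sketch below is only that arithmetic; real prepress conversion goes through ICC profiles, black-generation and ink-limit curves rather than this simple rule.

```python
def rgb_to_cmyk_naive(r, g, b):
    """Textbook RGB (0-255) in, CMYK (0-1) out -- ignores ICC profiles entirely."""
    c, m, y = 1 - r / 255, 1 - g / 255, 1 - b / 255
    k = min(c, m, y)              # move the shared grey component onto the black plate
    if k == 1:                    # pure black: avoid dividing by zero below
        return 0.0, 0.0, 0.0, 1.0
    c, m, y = ((x - k) / (1 - k) for x in (c, m, y))
    return c, m, y, k

print(rgb_to_cmyk_naive(255, 0, 0))    # pure red -> (0.0, 1.0, 1.0, 0.0): magenta + yellow
print(rgb_to_cmyk_naive(40, 40, 40))   # dark grey -> (0, 0, 0, ~0.84): ink only on the black plate
```

Note that this naive rule always produces some CMYK value; the dulling described below comes from the profile-based conversions real devices use, which must squeeze out-of-gamut colours into the printable range.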

And here you have the main source of printing headaches. A camera—film or digital—attempting to capture information as faithfully and simply as possible produces photographs in RGB. Printers use a different, smaller colour space. Somewhere there has to be a conversion from one to the other.

CMYK colours occupy a subset of RGB colours. If you convert a CMYK file to RGB, it’s unlikely you’ll notice a difference, because almost every point in any CMYK gamut has an RGB counterpart. But when you go the other way, many RGB colours simply don’t have a CMYK counterpart. They need to be altered to fall within a printable range, as they otherwise cannot be represented on the output device and would simply be clipped. Iridescent RGB colours are well outside the CMYK envelope (known as “out of gamut”) and will be converted to a colour that bears little resemblance to the original. It will be considerably duller. To normalise out-of-gamut shifts, points close to these out-of-gamut readings will also be shifted—and the whole colour range becomes compressed.
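One way to preview that dulling on your own files is to round-trip an image through a press profile with Pillow’s ImageCms module (which wraps littleCMS). The press.icc filename below is a placeholder for whatever CMYK profile your printer or prepress house supplies.

```python
# Soft-proof a saturated colour: sRGB -> CMYK press profile -> back to sRGB.
from PIL import Image, ImageCms

srgb_profile = ImageCms.createProfile("sRGB")
cmyk_profile = ImageCms.getOpenProfile("press.icc")    # placeholder profile path

to_cmyk = ImageCms.buildTransform(srgb_profile, cmyk_profile, "RGB", "CMYK")
to_rgb = ImageCms.buildTransform(cmyk_profile, srgb_profile, "CMYK", "RGB")

img = Image.new("RGB", (64, 64), (0, 255, 128))        # an "electric" green
proof = ImageCms.applyTransform(ImageCms.applyTransform(img, to_cmyk), to_rgb)

print(img.getpixel((0, 0)), "->", proof.getpixel((0, 0)))
# The round-tripped value comes back visibly duller -- that colour has no
# CMYK counterpart, so it gets pulled inside the printable gamut.
```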

This is why printed versions of your digital photos are so dull.

In theory there should be no real difference between scanning RGB film into CMYK and converting an RGB digital image into CMYK using software such as Photoshop. However, my experience (gained from handling a lot of both types of file) is that digital camera files convert less elegantly. I am unsure whether this is due to qualities inherent in film, sensors or scanner hardware, but I do find that digital camera images tend to be constructed further out of gamut than film. Even with a lot of time spent colour-correcting digital images, they tend to print less appealingly than scans.

Without question, there is a lot of potential in digital photography, and it can only get better. But for now, the great leap forward has yet to make up the ground lost by taking two steps backward.
