Saturday, February 13, 2010

Giz Explains: Why ISO Is the New Megapixel [Giz Explains]


In 1975, the first digital camera took 23 seconds to record a 100-line black-and-white photo onto cassette tape. Today, a Nikon D3s takes photos with 12 million pixels at 1/8000 of a second. And it can see in the dark.

The conventional wisdom is that the romp-stomp-stomp of progress in digital imaging has proceeded on the mostly one-way track of ballooning pixel counts. Which wasn't always a pointless enterprise. I mean, 1.3-megapixel images, like you could take in 1991, aren't very big. The Nikon D1, introduced in 1999, was the digital camera that "replaced film at forward-looking newspapers." It cost $5,000 and, using a CCD sensor, shot 2.7-megapixel images large enough for many print applications. But still, there was room to grow, and so it did. Now pretty much every (non-phone) camera shoots at least 10-megapixel pictures, with 14 megapixels common even in baseline point-and-shoots. Cheap DSLRs from Canon are now scratching 18MP as standard. Megapixels were an easy-to-swallow specification to pitch in marketing, and became the way normal people assessed camera quality.

The now-common geek contrarianism is that more megapixels ain't more better. The new go-to standard for folks who consider themselves savvy is low-light performance. Arguably, this revamped arms race was kickstarted by the D3, Nikon's flagship DSLR that forsook megapixels for ISO. (Rumor had it that the D3 and D300 led Canon to shitcan their original, middling update to the 5D, pushing full steam ahead for a year to bring us the incredible 5D Mark II.) However it began, "amazing low-light performance" is now a standard bullet point for any camera that costs more than $300 (even if it's not true). Nikon and Canon's latest DSLRs have ISO speeds of over 100,000. Welcome to the new image war.

How a Camera Sees

The name of the game, as you've probably gathered by now, is collecting light. And in fact, the way a digital camera "sees" actually isn't all that different from the way our eyeballs do, at one level. Light, which is made up of photons, enters through a lens and hits the image sensor (that boring-looking rectangle above), which converts it into an electrical signal, sorta like light entering through an eye's lens and striking the retina, where it's also converted into an electrical signal. If nothing else after this makes sense, keep this in mind: The more light an image sensor can collect, the better.

When a camera is spec'd at 10 megapixels, it's not just telling you that its biggest photos will contain about 10 million pixels. Generally, it's also telling you the number of photosites, or photodiodes, on the image sensor; confusingly, these are also often referred to as pixels. Photodiodes are the part of the sensor that's actually sensitive to light, and if you remember your science, a photodiode converts light (photons) into electricity (electrons). The standard trope for explaining photosites is that they're tiny buckets left out in a downpour of photons, collecting the light particles as they rain down. As you might expect, the bigger the photosite, the more photons it can collect at the moment when it's exposed (i.e., when you press the shutter button).
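If the bucket metaphor helps, here's a toy sketch of it in Python. Every number in it is made up for illustration, not pulled from any real sensor's spec sheet:

import random

# The bucket metaphor, sketched: photons "rain" down at some density,
# and each photosite catches however many land in its area.
def collect(photosite_area_um2, photons_per_um2):
    expected = photosite_area_um2 * photons_per_um2
    # Photon arrivals are random, so the catch varies around the average
    # (Poisson statistics, approximated here with a Gaussian).
    return max(0, round(random.gauss(expected, expected ** 0.5)))

dslr_site = collect(photosite_area_um2=71.0, photons_per_um2=20)    # big bucket
compact_site = collect(photosite_area_um2=4.0, photons_per_um2=20)  # small bucket
print(dslr_site, compact_site)  # same rain, wildly different catches

Same downpour, but the big bucket ends up holding nearly eighteen times as many photons.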

Image sensors come in a range of sizes, as you can see in this helpful diagram from Wikipedia. A bigger sensor, like the full-frame slab used in the Canon 5D or Nikon D3, has more space for photosites than the thumbnail-sized sensor that fits in little point-and-shoots. So if they're both 12 megapixels (that is, they both have 12 million photosites), the bigger sensor can obviously collect a lot more light per pixel, since the pixels are bigger.

If you're grasping for a specification to look for, the distance between photosites is referred to as pixel pitch, which roughly tells you how big the photosite, or pixel, is. For instance, a Nikon D3 with a 36mm x 23.9mm sensor has a pixel pitch of 8.45 microns, while a Canon S90 point-and-shoot with a 7.60mm x 5.70mm sensor has a pitch of 2 microns. To put that in less math-y terms, if you got the same amount of light to hit the image sensors of the D3 and the S90—you know, you took the exact same exposure—the bigger pixels in the D3 would be able to collect and hold on to more of the light. When you're looking for low-light performance, it's immediately obvious why that's a good thing.
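Those pitch figures are easy to sanity-check: divide the sensor's width by the number of pixels across it. Here's the arithmetic as a quick Python sketch (the pixel counts are the published image dimensions of each camera):

# Back-of-the-envelope pixel pitch: sensor width divided by pixels across.
def pixel_pitch_um(sensor_width_mm, pixels_across):
    return sensor_width_mm * 1000 / pixels_across

d3 = pixel_pitch_um(36.0, 4256)  # Nikon D3: 4256 x 2832 images, full-frame
s90 = pixel_pitch_um(7.6, 3648)  # Canon S90: 3648 x 2736 images
print(f"D3: {d3:.2f} um, S90: {s90:.2f} um")  # ~8.46 um vs ~2.08 um

# Light-gathering area goes as the square of the pitch, so each D3 pixel
# sees roughly (8.46 / 2.08)^2 times the light of an S90 pixel.
print(f"~{(d3 / s90) ** 2:.0f}x more light per pixel")  # ~16x

In dim light, a 16x head start per pixel is the whole ballgame.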

Catch More Light, Faster, Faster

Okay, so that's easy enough: As an axiom, larger photodiodes result in more light sensitivity. (So with the 1D Mark IV, Canon kept the same photodiode size, but shrank the rest of the pixel to fit more of them on the same-size chip as its predecessor.) There's more to an image sensor than simply photosites, though, which is why I called up Dr. Peter B. Catrysse from the Department of Electrical Engineering at Stanford University. The "ideal pixel," he says, is flat—just an area that collects light—nearly bare silicon. But even at a basic level, a photodiode sits below layers of other stuff: a microlens (which directs light onto the photodiode), a color filter (necessary, 'cause image sensors are in fact color blind) and then a layer of gunk, like wiring. So one way manufacturers are improving sensors is by trying to make all of that as thin as possible—we're talking hundreds of nanometers—so more light gets through.

One major way that's happening, he says, is with back-illuminated sensors, which move the wiring to the back side of the silicon substrate, as illustrated in this diagram by Sony. It's currently still more expensive to make sensors this way, but since more light's getting through, you can use smaller pixels (and have more of them).

In your basic image sensor construction, there's an array of microlenses sitting above the photosites to direct light into them. Previously, you had gaps between the microlenses, which meant you had light falling through that wasn't being directed onto the actually light-sensitive parts of the sensor. Canon and Nikon have created gapless microlenses, so more of the light falling onto the sensor is directed into the diode, and not wasted. If you must persist with the bucket metaphor, think of it as putting a larger funnel over the bucket, one that can grab more because it has a wider mouth. Here's a shot of gapless microlens architecture:

A chief reason to gather as much light as possible is to bring up your signal-to-noise ratio, which is the province of true digital imaging nerds. Anyways, there are several different sources and kinds of noise. Worth knowing is "photon shot" or just "shot" noise, which occurs because the stream of photons hitting the image sensor isn't perfectly consistent in its timing; there's "read" noise, which is inherent to the sensor's read-out electronics; and "dark current" noise, which is basically stray electrons the sensor generates all by itself, with no light involved—they're often caused by heat.
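To see how those sources stack up, here's a crude single-pixel simulation in Python. The noise figures are illustrative guesses, not measurements from any real sensor:

import math
import random

READ_NOISE_E = 5.0    # electrons of read noise per read-out (illustrative)
DARK_CURRENT_E = 2.0  # average thermally generated electrons (illustrative)

def read_pixel(signal_e):
    """One simulated exposure: true signal plus shot, dark and read noise."""
    shot = random.gauss(signal_e, math.sqrt(signal_e))              # shot noise
    dark = random.gauss(DARK_CURRENT_E, math.sqrt(DARK_CURRENT_E))  # heat
    read = random.gauss(0.0, READ_NOISE_E)                          # electronics
    return shot + dark + read

for signal in (100, 400, 1600):
    samples = [read_pixel(signal) for _ in range(100_000)]
    mean = sum(samples) / len(samples)
    std = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
    print(f"{signal:>5} electrons: SNR ~ {mean / std:.1f}")

The pattern to notice: signal-to-noise grows roughly as the square root of the signal, so collecting four times the light makes the image about twice as clean. Bigger buckets win again.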

Taken with a Nikon D3s at ISO 102,400
Back in the day, when people shot photographs on this stuff called film, they actually bought it according to its light sensitivity, expressed as an ISO speed. (A standard set by the International Organization for Standardization, confusingly aka ISO. The speed standard for color negative film is ISO 5800:1987.) With digital cameras, you can also tell your camera how sensitive to light it should be using ISO, which is supposed to be equivalent to the film standard.
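The practical payoff is that ISO trades against shutter speed, stop for stop: double the ISO, and you can expose for half as long. A quick illustration in Python, assuming a fixed aperture:

# Each doubling of ISO is one stop: the same brightness from half the light.
base_iso, base_shutter = 100, 1 / 15  # 1/15 s is too slow to handhold
for stops in range(5):
    iso = base_iso * 2 ** stops
    shutter = base_shutter / 2 ** stops
    print(f"ISO {iso:>4}: shutter 1/{round(1 / shutter)} s")
# ISO 1600 turns a blurry 1/15 s exposure into a hand-holdable 1/240 s.

The catch, as the next paragraph explains, is that raising ISO doesn't make the sensor gather more light; it just amplifies whatever was collected.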

The thing is, whether you're shooting at ISO 100 or ISO 1600, the same number of photons hit your sensor—you're just boosting the signal from the sensor, and along with it, all the noise that was picked up on the way. If you've got more signal to work with (like in a camera whose sensor has some fat photon-collecting pixels), you get a higher signal-to-noise ratio when you crank it up, which is one reason a photo taken with a D3 at ISO 6400 looks way better than one from a teeny point-and-shoot, and why a 1D Mark IV or D3s can even think about shooting at an ISO of over 100,000, like the photo above. (Another reason is that a 1D Mark IV-level camera possesses vastly superior image processing, with faster processors that can crunch complex algorithms to help reduce noise.)
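A crude way to see why cranking ISO can't add detail: the gain is applied after the photons have been counted, so signal and noise scale together, and the signal-to-noise ratio is locked in at capture time. Toy numbers:

photons = 100  # what actually hit the pixel in a dim scene
noise = 10     # total noise in electrons, fixed at capture (illustrative)

for gain in (1, 4, 16):  # think ISO 100 -> 400 -> 1600
    print(f"gain {gain:>2}x: signal {photons * gain:>5}, "
          f"noise {noise * gain:>4}, SNR {photons * gain / (noise * gain):.1f}")
# SNR stays 10.0 at every gain: the image gets brighter, not cleaner.

The only fix is more photons in the first place: bigger pixels, better microlenses, a faster lens, more light.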

Sensor Shake and Bake

There are two kinds of image sensors that most digital cameras use today: CCD (charge-coupled device) sensors and CMOS (complementary metal-oxide-semiconductor) sensors, which are actually a kind of active-pixel sensor, but the way they're made has become a shorthand name. "Fundamentally, at least physics-wise, they work exactly the same," says Dr. Catrysse, so one's not intrinsically more awesome than the other. CCD sensors are the more mature imaging tech, so for a long time they tended to be better, but now CMOS sensors are taking over, having almost completely crowded CCDs out of cellphones and high-end DSLRs (Leica's M9 is an exception)—and Dr. Catrysse suspects the last place for CCD sensors is going to be in niche scientific applications.

A "CMOS sensor" is one that's made using the CMOS process, the way you make all kinds of integrated circuits—you know, stuff like CPUs, GPUs and RAM—so they're actually cheaper to make than CCD sensors. (The cheap-to-make aspect is why they've been the sensor of choice in cameraphones, and conversely, DSLRs with huge chips.) And, unlike a CCD sensor, which has to move all of the electrons off of the chip to run them through an analog-to-digital converter, with a CMOS sensor, all of that happens on the same integrated chip. So they're faster, and they use less power. Something to think about as well: Because they're made pretty much the same way as any other semiconductor, CMOS sensors progress along with advances in semiconductor manufacturing. Smaller transistors allow for more circuits in a pixel and the potential to remove more noise at the source, says Dr. Catrysse, bringing us closer to fundamental physical limits, like photon noise. And then we're talking about controlling light at the nanoscale.

The Point

We've reached, in many ways, a point of megapixel fatigue: They're not as valuable, or even as buzzy, as they used to be. Not many of us print billboard-sized images. But the technology continues to progress—more refined sensors, smarter image processors, sharper glass—and the camera industry needs something to sell us every year.

But that's not entirely a bad thing. Our friend and badass war photographer Teru Kuwayama says that while "increasing megapixel counts are mostly just a pain in the ass, unless you happen to be in the hard drive or memory card business, skyrocketing ISOs on the other hand, are a quantum leap, opening up a time-space dimension that didn't exist for previous generations of photographers. I'd happily trade half the megapixels for twice the light sensitivity."

Better images, not just bigger images. That's the promise of this massive shift. The clouds to this silver lining are that by next year, ISO speeds will likely be the headline, easy-to-digest spec for consumers. And like any other spec, just because the ISO ratings go higher doesn't mean low-light performance will be better. Remember, "more" isn't more better.

Still something you wanna know? Send questions about ISO, isometric exercise or isolation here with "Giz Explains" in the subject line.