A microscope camera is a big investment for a lab, so it is wise to take the time to learn how to choose the right type of camera for your particular needs. But unless you’re already a camera geek, the technical specifications can make your eyes glaze over. If possible, talk to camera-savvy colleagues who are familiar with the type of experiments you plan to conduct. Representatives from microscope camera suppliers are another excellent resource; they are often a wealth of practical knowledge because they interact with scientists like you every day. Here is some additional expert advice on the basics to consider when weighing microscope camera options.

Choosing a sensor: CCD vs sCMOS

Today, cameras for scientific purposes rely on one of two types of image sensor: a charge-coupled device (CCD) or a complementary metal oxide semiconductor (CMOS) sensor. In recent years, the latter has evolved into the more advanced scientific CMOS (sCMOS) sensor, opening new possibilities for many researchers, including acquisition times that are faster than those of CCD cameras.

An sCMOS camera will fit most researchers’ needs these days. “sCMOS-based cameras are widely available in packages with lower read noise, more pixels, larger FOV, better sampling, and higher frame rates than even the best CCDs of just a few years ago at very reasonable price points,” says Nathan Claxton, Biosystems Product Manager in Imaging at Nikon Instruments. “Even the average low-cost industrial machine vision CMOS camera is actually really fantastic.”


In the past, CCD sensors were the widely recognized choice for amplifying lower-light signals. “However, CMOS technology has caught up and light sensitivity is similar nowadays,” says Jan-Willem van Bree, Chief Technology Officer at CytoSMART. “Another advantage of CMOS cameras is that they consume 10x less energy and are therefore more suitable for live-cell imaging.”

However, sCMOS cameras have by no means supplanted CCD cameras, which remain important in special circumstances, such as detecting extremely low signals. Cooled CCD cameras provide excellent low-light sensitivity and can outperform sCMOS cameras when resolving low signals above dark current noise. “sCMOS generally still has a challenge with dark current, and even deep cooling is not as effective at mitigation with this technology compared to CCD,” says Claxton. “When signals are incredibly low and require very long exposure times, as could be the case with bioluminescence, astronomy, or other ‘sit and stare’ applications, even moderate and therefore affordable cooling—and even more so vacuum-chambered deep cooling—can bring the dark current of CCDs to nearly negligible levels.”
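
To make the cooling argument concrete, consider a rough long-exposure noise budget: total noise combines photon shot noise, dark-current shot noise (which grows with exposure time), and read noise. The sketch below is illustrative only; the dark-current and read-noise figures are assumptions made for the comparison, not values from any particular camera.

```python
import math

def total_noise_e(signal_e, dark_e_per_s, exposure_s, read_noise_e):
    """Combined noise in electrons: photon shot noise + dark-current shot noise + read noise."""
    return math.sqrt(signal_e + dark_e_per_s * exposure_s + read_noise_e ** 2)

signal = 50        # electrons per pixel from a very dim sample (assumed)
exposure = 600     # a 10-minute "sit and stare" exposure, e.g., bioluminescence

# Illustrative, assumed figures: a deep-cooled CCD vs. a moderately cooled sCMOS
for name, dark, read in [("deep-cooled CCD", 0.001, 6.0), ("sCMOS", 0.2, 1.6)]:
    snr = signal / total_noise_e(signal, dark, exposure, read)
    print(f"{name}: SNR ~ {snr:.1f} over a {exposure} s exposure")
```

Over an exposure this long, the dark-current term swamps the sCMOS sensor’s read-noise advantage, which is the scenario Claxton describes.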

Signal-to-noise ratio is everything

No matter how ingenious and meticulously planned your experiments are, they will ultimately be an exercise in futility if a low signal-to-noise ratio obscures your hard-earned data. Current sCMOS and CCD cameras can deliver signal-to-noise ratios that are suitable for most applications. “Cameras with high signal-to-noise can be important for niche fluorescence applications, but most applications in brightfield and fluorescence can be done with standard affordable cameras,” says van Bree.

Today’s bright, stable fluorophores for immunofluorescence and live-cell imaging get you off to a great start. “Modern cameras are so good that for the average researcher staining and imaging fixed slides, signal-to-noise might be of little concern as long as technique is sufficient,” says Claxton. “But often the exciting research is where signal-to-noise becomes limiting: very low expression of endogenous fluorescent proteins, and observing processes in live cells, tissues, and organisms.”

Most researchers will struggle to improve their signal-to-noise ratio at some point. Sometimes this can be accomplished—at least in part—by using a more sensitive camera. A camera’s sensitivity is affected by its quantum efficiency (QE) and its pixel area, which is set by the pixel pitch (the distance between adjacent pixel centers). “Contrary to popular belief, QE is not the sole influencing factor for increased sensitivity,” says Lauren Alvarenga, Senior Product Manager for Clinical Microscopy at Olympus Life Science. “Pixel pitch can offer a greater degree of improvement, even with a minor increase.” For example, increasing a sensor’s pixel pitch from 5.5 µm to 6.5 µm improves the sensitivity by around 40%, she notes, “whereas between a QE of 75% and 90%, there is only an improvement in sensitivity of 20%.”
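
The arithmetic behind those figures is simple: sensitivity scales roughly with a pixel’s light-collecting area (pitch squared) but only linearly with QE. A minimal sketch using the values quoted above:

```python
def pitch_gain(old_um: float, new_um: float) -> float:
    """Relative sensitivity gain from a larger pixel pitch (collecting area scales as pitch squared)."""
    return (new_um / old_um) ** 2 - 1

def qe_gain(old_qe: float, new_qe: float) -> float:
    """Relative sensitivity gain from higher quantum efficiency (linear in QE)."""
    return new_qe / old_qe - 1

print(f"Pitch 5.5 um -> 6.5 um: {pitch_gain(5.5, 6.5):.0%} more signal per pixel")  # ~40%
print(f"QE 75% -> 90%:          {qe_gain(0.75, 0.90):.0%} more signal per pixel")   # 20%
```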

About resolution

There isn’t a simple rule of thumb for resolution, which is variously expressed as a number of megapixels (i.e., the field of view), as pixels per inch, or as a pixel size. A camera’s spatial resolution is influenced by pixel size: at a given magnification, smaller pixels produce more detailed images.

However, size isn’t everything—drawbacks to smaller pixels can include a smaller dynamic range and a lower signal-to-noise ratio. “In some cases, a smaller pixel sensor cannot provide higher resolution because the light from the sample has been spread much larger than the pixel pitch through the point spread function based on the optical system’s magnification,” says Alvarenga.

A more accurate way to think about resolution involves multiple factors. “The key to achieving better resolution is to select the proper pixel pitch in relation to the numerical aperture (NA), the total magnification of the optical system, and the sample’s spatial frequency,” notes Alvarenga. Calculate the optical cutoff frequency by dividing 2 × NA by the wavelength of the light imaged. “If the sensor’s Nyquist frequency, which is half of the sampling frequency (or the reciprocal of the pixel pitch), is lower than the optical cutoff frequency, it’s worth trying a smaller pixel pitch to achieve higher resolution,” she explains.
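
As a rough worked example of that check (the NA, wavelength, magnification, and pixel pitch below are assumed values, not a recommendation for any particular system):

```python
def optical_cutoff(na: float, wavelength_um: float) -> float:
    """Optical cutoff frequency in the sample plane, in cycles/um: 2 * NA / wavelength."""
    return 2 * na / wavelength_um

def sensor_nyquist(pixel_pitch_um: float, magnification: float) -> float:
    """Nyquist frequency referred to the sample plane: half the sampling frequency."""
    effective_pitch = pixel_pitch_um / magnification  # pixel size projected onto the sample
    return 0.5 / effective_pitch

# Assumed example: 60x total magnification, NA 1.4, 510 nm emission, 6.5 um pixel pitch
cutoff = optical_cutoff(na=1.4, wavelength_um=0.510)            # ~5.5 cycles/um
nyquist = sensor_nyquist(pixel_pitch_um=6.5, magnification=60)  # ~4.6 cycles/um

if nyquist < cutoff:
    print("Undersampled: a smaller pixel pitch or more magnification could recover detail.")
else:
    print("Sampling already meets or exceeds the optical cutoff.")
```

In this illustrative case the sensor’s Nyquist frequency falls short of the optical cutoff, so a finer pixel pitch (or added magnification) would be worth trying, as Alvarenga suggests.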

For live-cell microscopy, physical space constraints often mean looking for a compact camera system. “Since incubator space is scarce, you don’t want to fill it with a huge imaging system,” says van Bree. “With that in mind, one should choose a high megapixel camera, with high pixels per square inch, keeping the magnification small and optical track short.”

Looking ahead

Claxton notes that imaging software is increasingly as important as a camera’s hardware. “As AI/deep-learning methods have swiftly proved incredibly powerful and promise even more, all parts of the imaging system will need to work in concert, and tight integration and development of the camera technology will be paramount,” he says. Choosing a camera that delivers the hardware you need, paired with software that can be updated as innovations arrive, is a sound strategy for nearly any application.