Single color luminescence assays and even multi-color fluorescence assays are great for screening, but they can only go so far. Add in microscopy (and the appropriate software to parse the images) and it’s now possible to calculate hundreds of different parameters to describe the phenotype of each cell—from size and shape to granularity to protein localization. The ability to image sequentially along the z-axis literally adds a new dimension, opening up the world of organoids, tumoroids, and even whole organisms like worms and fish.

Here we look at 3D high content imaging (HCI) systems, what they can do, and how they’re being used.

What is HCI?

HCI is often (but not always) used synonymously with high-content screening (HCS) and high-content analysis (HCA), and as such implies the use of high-throughput, automated systems. “HCS instruments tend to be expensive and generally not affordable to individual academics,” points out Peter Banks, scientific director at BioTek Instruments. But there are also “more affordable imaging devices such as BioTek’s Cytation or Lionheart product lines, which can be purchased with typical academic grant funding levels.”

Microscopic imaging provides a much richer choice of parameters than simply the color or brightness of a well. In HCI, this is combined with algorithmic analysis “because it’s just impossible to do the analysis by hand or by eye,” notes Misha Bashkurov, product support specialist, HCI, for GE Healthcare. A typical system is designed for pattern recognition.

Compared to the more flexible conventional microscopes-with-eyepieces, HCI microscopes-in-a-box are generally “meant to be doing the same thing over and over again, … trying to get massive data sets to achieve more robust statistics,” says Joseph Dragavon, director of the BioFrontiers Institute’s Advanced Light Microscopy Core at the University of Colorado Boulder. The data is stored in a database-like architecture, and it’s analyzed in a very different way.

Another dimension

There is a difference between imaging three-dimensional objects—such as a cell culture grown on a matrix—in two dimensions by focusing on the equatorial plane, versus 3D imaging. 3D imaging implies that multiple images are taken at different focal planes. “We have customers trying to understand whether cells in the spheroid core are different from those in the outer shell, and whether there is one lumen or multiple lumina inside a 3D object,” relates Karin Boettcher, associate product manager HCS at Revvity.

Because the results mimic the data obtained by imaging serial tissue sections in 2D, the process is called optical sectioning. “The confocal microscope is one of the common tools to perform optical sectioning,” says Bashkurov. Confocal technology suppresses out-of-focus light, allowing deeper penetration into thick samples.

Other technologies, such as multi-photon and light sheet microscopy, are also good at optical sectioning, but these have not yet been incorporated into commercial HCI systems. 3D images can also be acquired using widefield imaging followed by deconvolution—“by mathematically treating the data in order to try and get a confocal-like image,” Dragavon says. It’s not as good as a true confocal, “but it’s generally a lot cheaper.”
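The deconvolution Dragavon describes can be illustrated with the classic Richardson–Lucy algorithm, one of the standard iterative approaches: the estimate of the sharp image is repeatedly re-blurred with the point spread function (PSF) and corrected toward the observation. The sketch below is a minimal NumPy/SciPy implementation for illustration, not any vendor's software; the two-point-source test image, Gaussian PSF, and iteration count are all assumptions chosen to keep the example small.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=30):
    """Iteratively refine an estimate of the sharp image so that,
    when re-blurred with the PSF, it matches the observed image."""
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(observed, observed.mean())
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(reblurred, 1e-12)  # avoid divide-by-zero
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Demo: blur two point sources with a Gaussian PSF, then deconvolve.
yy, xx = np.mgrid[-3:4, -3:4]
psf = np.exp(-(yy**2 + xx**2) / 2.0)
psf /= psf.sum()

sharp = np.zeros((32, 32))
sharp[10, 10] = 1.0
sharp[20, 22] = 1.0
blurred = fftconvolve(sharp, psf, mode="same")
restored = richardson_lucy(blurred, psf)

# The restored image should be closer to the original than the blurred one.
err_blurred = np.mean((blurred - sharp) ** 2)
err_restored = np.mean((restored - sharp) ** 2)
```

In practice, real deconvolution packages add regularization, edge handling, and measured (rather than idealized) PSFs, which is part of why the results still fall short of a true confocal.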

With the need for additional investments in equipment, more time to acquire, more data storage and computational resources to analyze the data, “you need to have a biological question that requires 3D imaging,” Bashkurov says.

To the max

3D images are traditionally presented as a maximum intensity projection (MIP). The individual planar images are stacked, and the brightest pixel from each column is collapsed down to a single plane. “The problems with MIP are that a) you don’t have any spatial information along the z axis, and b) you cover up objects—if one object is directly above another object you can’t discriminate them,” explains Dragavon.
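In array terms, an MIP is just a maximum taken along the z axis of the image stack. The toy example below (NumPy, with made-up intensity values) shows both drawbacks Dragavon describes: the depth information disappears, and an object sitting directly above another hides it completely.

```python
import numpy as np

# A z-stack: 4 planes of a 3x3 field (axes: z, y, x).
stack = np.zeros((4, 3, 3))
stack[1, 1, 1] = 5.0   # dimmer object in a lower plane
stack[3, 1, 1] = 7.0   # brighter object directly above it

# MIP: keep the brightest pixel in each (y, x) column.
mip = stack.max(axis=0)

# The result is a single 2D plane: the z positions are gone, and
# only the brighter of the two stacked objects survives at (1, 1).
```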

Newer to the scene is volumetric analysis. Here the software examines each slice, and re-configures the data in those slices to identify objects regardless of their orientation, says Dan LaBarbera, director of the high-throughput screening and chemical biology core facility in the University of Colorado’s Skaggs School of Pharmacy and Pharmaceutical Sciences. Software capable of volumetric analysis is now available on several systems as well as from third-party vendors and as freeware. The downside is that “it’s very low throughput—you can’t screen thousands of compounds with that.”
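The core idea behind volumetric analysis can be sketched with 3D connected-component labeling: instead of analyzing each slice independently, the software treats voxels that touch across slices as one object and measures it as a whole. The example below uses SciPy on a toy binary volume; it is a conceptual sketch, not the algorithm of any particular commercial package.

```python
import numpy as np
from scipy import ndimage

# Toy segmented volume with two separate objects (axes: z, y, x).
vol = np.zeros((10, 10, 10), dtype=np.uint8)
vol[2:4, 2:4, 2:4] = 1   # a 2x2x2 object
vol[6:9, 6:9, 6:9] = 1   # a 3x3x3 object

# 3D connected-component labeling identifies each object as a whole,
# regardless of which slices it spans or how it is oriented.
labels, n_objects = ndimage.label(vol)

# Per-object voxel counts, i.e. volumes in voxel units.
volumes = ndimage.sum(vol, labels, index=range(1, n_objects + 1))
```

Real pipelines add segmentation, shape and texture descriptors, and per-object statistics on top of this labeling step, which is what makes full volumetric analysis so computationally expensive.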

Ever conscious of reducing computational burden, some vendors offer “software that finds the x, y, and z positions of objects with a low magnification objective and triggers a high magnification re-scan for detailed 3D analysis,” explains Boettcher. “This reduces the time needed for imaging up to 35-fold and reduces the volume of data up to 50-fold depending on the sample.”
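The two-pass strategy Boettcher describes can be mimicked in a few lines: segment objects in a low-magnification overview, then flag only the high-magnification fields that actually contain something. Everything in this sketch is hypothetical — the prescan image, the 8x8 field grid, and the fixed threshold are illustrative assumptions, not any vendor's parameters.

```python
import numpy as np
from scipy import ndimage

# Hypothetical 64x64 low-magnification prescan: two bright spheroids
# on a dim, noisy background.
rng = np.random.default_rng(0)
prescan = rng.normal(10, 1, size=(64, 64))
prescan[5:9, 12:16] += 100     # spheroid 1
prescan[40:45, 50:55] += 100   # spheroid 2

field = 8  # prescan pixels per high-mag field, per axis (assumed)

# Pass 1: segment candidate objects in the overview image.
mask = prescan > 50  # illustrative fixed threshold
labels, n = ndimage.label(mask)
centroids = ndimage.center_of_mass(mask, labels, range(1, n + 1))

# Pass 2: re-scan only the fields whose centroids contain an object.
fields_to_scan = {(int(y // field), int(x // field)) for y, x in centroids}
total_fields = (64 // field) ** 2
```

Here only 2 of 64 fields would be re-imaged at high magnification. (A real implementation would also re-scan neighboring fields when an object straddles a field boundary — centroid-only selection is a simplification.)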

In fact, Boettcher considers software capabilities to be a major differentiator among vendors, noting that Revvity’s Harmony® 4.9 software is a “highly advanced 3D high-content image analysis solution that can segment and quantify volumes, morphology, intensity, texture, positions and distances in 3D.”

Image: MDCK cysts, nuclei segmented and displayed in color by Harmony 4.8 software. Image courtesy of Revvity.

Meanwhile Bashkurov points to GE’s system that first utilizes unsupervised clustering to explore and discover new phenotypes, “and then you go into a supervised method. This hybrid method, as far as I know, is unique to GE.”

Keep in mind that data acquisition, visualization, and analysis are separate functions, and can be accomplished by separate or integrated software packages. “The majority of other providers have made that seamless, but we haven’t done that yet,” says Behrad Azimi, CTO of Vala Sciences, which began selling a comparatively inexpensive structured illumination-based 3D HCI system last year. He notes that oftentimes customers will have an existing databasing system that they want to use for their images.

What do you need?

Systems differ from each other not only in terms of their software, but in instrumentation as well. Some, for example, use spinning-disc rather than laser-scanning confocal technology. They may vary in the number of lasers (or LEDs), in the number of channels, and in the type and number of detectors (most use sCMOS). Some offer oil immersion while others sport water immersion objectives. Each has its upsides and downsides.

Dragavon recommends looking at usability, reliability, how helpful it is with data analysis, and functionality. “It’s going to be solely dependent on what they need. There’s no point buying something that’s going to be massive overkill.”