Pretty Pictures—and Quantifiable Ones, with the Latest 3D Imaging Tools

Jeffrey Perkel has been a scientific writer and editor since 2000. He holds a PhD in Cell and Molecular Biology from the University of Pennsylvania, and did postdoctoral work at the University of Pennsylvania and at Harvard Medical School.

Suppose you’re umpiring a Little League baseball game, and there’s a play at the plate. In all the commotion, you miss the action. Was the runner safe, or did the catcher make the tag in time? Fortunately, there was a photographer on the field, one of the players’ fathers armed with a digital camera and a good lens. Alas, our amateur shutterbug was standing near third base, and his photo cannot resolve the issue. What to do?

If you’re a biologist trying to work out, say, the colocalization of two fluorescent proteins using standard two-dimensional microscopy, you face a similar problem. You can see the proteins just fine: One’s red, the other green. And you can see that they occupy the same x-y coordinate in the 2D image. But are they truly touching? Perhaps one sits above the other, physically separated from it (that is, at a different z position). That possibility cannot be resolved in a planar snapshot; you’re out of position to make the call, so to speak.
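To make the problem concrete, here is a toy sketch in Python using NumPy (the volume, coordinates, and spot sizes are invented purely for illustration): two signals that overlap perfectly in a 2D projection yet never touch in 3D.

```python
import numpy as np

# Toy z-stack: two "proteins" as binary masks in a (z, y, x) volume.
# The red spot sits at z=2, the green spot at z=7: the same x-y
# footprint, but different depths.
vol_shape = (10, 32, 32)          # (z, y, x)
red = np.zeros(vol_shape, dtype=bool)
green = np.zeros(vol_shape, dtype=bool)
red[2, 10:15, 10:15] = True       # red signal in plane z=2
green[7, 10:15, 10:15] = True     # green signal in plane z=7

# A 2D projection (the planar snapshot) collapses z away entirely:
overlap_2d = (red.any(axis=0) & green.any(axis=0)).sum()

# The full 3D volume keeps the z information:
overlap_3d = (red & green).sum()

print(overlap_2d)  # 25 pixels: the spots appear colocalized in 2D
print(overlap_3d)  # 0 voxels: in 3D they never actually touch
```

The 2D overlap of 25 pixels is entirely an artifact of projecting away the z axis, which is exactly the ambiguity 3D imaging resolves.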

What the situation requires is a way to study the system from all angles: to rotate it, flip it around, and visualize it as the three-dimensional object it really is. What it needs, in short, is 3D imaging.

“Morphological misconceptions can be made in 2D that 3D data provides more info on,” says Claire Stewart, Associate Product Manager for Scientific Software at Revvity, one of a number of companies offering research tools to resolve these problems.

That’s true from the cellular level to the organismal level, and 3D imaging is possible at all of them. The technique encompasses everything from confocal microscopy of individual cells, to small-animal imagers that can scan an entire mouse, to the software tools, like Revvity’s Volocity, required to make sense of all those data. Applications run the gamut from straightforward protein colocalization studies to pharmacodynamic analyses of drug biodistribution, targeting, and excretion.

Cellular imaging

To image individual cells or bits of tissue in 3D, all that’s required is a microscope capable of capturing a so-called z-stack. A z-stack is a series of 2D planar images, each collected at a different depth (or z position), usually using a confocal microscope.

Once you have your z-stack, you can combine the individual images into a 3D rendering using any of a variety of software tools. Microscopy vendors offer their own software, of course. There are also free options, including NIH ImageJ and Fiji, as well as commercial third-party tools like Volocity (current version 6.2.1), Molecular Devices’ MetaMorph, and Andor™ Technology’s Imaris®.
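Conceptually, assembling a z-stack into a volume is simple: stack the planes along a new depth axis. Here is a minimal sketch in Python with NumPy; the planes are synthetic random images standing in for real microscope data, and the maximum-intensity projection shown is the simplest possible "rendering."

```python
import numpy as np

# A z-stack is just a series of 2D planes captured at successive depths.
# Here we simulate five 64x64 planes; in practice each would come from
# the microscope (e.g., read out of a multi-page TIFF).
rng = np.random.default_rng(0)
planes = [rng.random((64, 64)) for _ in range(5)]

# Stacking the planes along a new first axis yields a (z, y, x) volume,
# the raw material for any 3D rendering or analysis.
volume = np.stack(planes, axis=0)
print(volume.shape)  # (5, 64, 64)

# A maximum-intensity projection: for each x-y position, keep the
# brightest voxel along z.
mip = volume.max(axis=0)
print(mip.shape)     # (64, 64)
```

Real packages go far beyond this, of course, with shaded volume rendering, virtual sectioning, and fly-throughs, but they all start from the same (z, y, x) array.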

Each of these tools has its strengths and weaknesses, but in general, all enable researchers to visualize a dataset in 3D, including rotating it, taking virtual sections, doing “fly-throughs” of the data and so on. On the analytical side, the software typically can perform segmentation (that is, reducing an image into a collection of objects such as cells, nuclei, or mitochondria, based on size, shape, color, or intensity), and crunch the resulting numbers, for instance to track a particular particle over time throughout the 3D volume.
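The commercial packages keep their internals proprietary, but the basic segmentation recipe (threshold, label connected objects, measure them) can be sketched in a few lines of Python with SciPy. The image, threshold, and object shapes below are invented for illustration.

```python
import numpy as np
from scipy import ndimage

# Synthetic 2D image with two bright "nuclei" on a dark background.
img = np.zeros((40, 40))
img[5:10, 5:10] = 1.0
img[25:32, 20:28] = 1.0

# Segmentation: threshold on intensity, then label connected objects.
mask = img > 0.5
labels, n_objects = ndimage.label(mask)
print(n_objects)  # 2

# Per-object measurements: size in pixels, centroid position, and the
# distance between the two centroids.
idx = range(1, n_objects + 1)
sizes = ndimage.sum(mask, labels, index=idx)
centroids = ndimage.center_of_mass(mask, labels, index=idx)
dist = np.linalg.norm(np.subtract(centroids[0], centroids[1]))
print(sizes)           # object areas in pixels
print(round(dist, 1))  # centroid-to-centroid distance, 26.7 pixels
```

Extending the same recipe to a z-stack just means running it on the 3D array, so sizes become voxel volumes and distances become 3D distances.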

“In microscopy it is no longer enough to produce pretty pictures,” Stewart says. “We have to back up our observations with quantitative data that is statistically significant and robust.”

Often, these different features are available as modules within a larger software shell. Volocity, for instance, is available as Visualization, Quantitation, and Restoration products, and these can be purchased separately. The quantification product, says Stewart, offers intelligent tools that can automatically identify, say, nuclei, mitochondria, and cells, count those objects, measure the distances between them, and so on.

In many cases, however, 3D and 4D (a time series of 3D images) visualization is used for just that: visualization, says Chris Kier, Director of Product Management for Molecular Devices’ Bioimaging Group, whether to view a biological system like a neurosphere in its entirety or to produce a pretty picture for publication.

“A lot of people say they want 3D, but that isn’t really what they want,” Kier says. That’s because, when converting a stack of 2D images into a 3D volume, certain assumptions have to be made, which can distort the image and thus the data it yields. “You’re filling in the space between each plane, and how do you do that?” Kier asks.

As a result, many researchers perform quantitative measurements on z-stacks by analyzing the 2D slices one by one, rather than in the 3D rendering. “If you go back to the single plane and make the measurement, it’s much more accurate,” Kier says. “It’s the primary data rather than the rendered secondary data.”
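Kier’s point about filling in the space between planes can be demonstrated directly. In this sketch, linear interpolation along z (one common choice; rendering packages may use others, and the stack here is invented) manufactures intensity values at depths that were never actually imaged.

```python
import numpy as np
from scipy import ndimage

# A coarse z-stack: an object present only in plane z=1 of three planes.
stack = np.zeros((3, 8, 8))
stack[1, 2:6, 2:6] = 1.0

# Rendering software must fill the space between planes. Linear
# interpolation (order=1) along z, upsampling 4x, is one simple choice.
volume = ndimage.zoom(stack, (4, 1, 1), order=1)
print(volume.shape)  # (12, 8, 8)

# Count how many planes contain signal before and after interpolation.
measured_planes = int(np.sum(stack.max(axis=(1, 2)) > 0))
interpolated_planes = int(np.sum(volume.max(axis=(1, 2)) > 0))
print(measured_planes, interpolated_planes)
```

Only one plane was ever measured, yet the interpolated volume contains signal in several planes: rendered, secondary data rather than primary data, which is exactly why Kier recommends measuring on the original slices.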

Imaging brains in 3D

For Philipp Keller and Misha Ahrens, Fellows at the Howard Hughes Medical Institute’s Janelia Farm Research Campus in Ashburn, Va., the application that drove their latest bit of 3D imaging was brain activity and connectivity. Specifically, Keller and Ahrens were interested in imaging a vertebrate brain during development to see if they could identify neural circuits based on their firing patterns.

To address that question, Keller and Ahrens used a custom microscope built in Keller’s lab called SiMView.

“The idea with the SiMView light-sheet microscopy platform is to deal with the challenges you encounter when imaging a large, non-transparent biological sample and trying to capture global, dynamic events,” Keller explains.

SiMView is based on light-sheet microscopy, a technique that minimizes the phototoxicity inherent in extended live-cell imaging experiments by projecting a relatively benign planar “sheet” of light into the sample and imaging the resulting fluorescence from a detection objective positioned orthogonally (i.e., at 90 degrees) to the excitation light path.

Zeiss has commercialized the light-sheet microscopy concept in its Lightsheet Z.1 microscope. But to handle the large, opaque samples he’s interested in (entire developing embryos and large brains), Keller had to raise the bar, microscopically speaking. SiMView is a souped-up microscope featuring two independent detection and excitation paths (i.e., four objectives) positioned around the stationary sample like the arms of a cross. Using that configuration, the system can snap four simultaneous images representing four quarters of a single optical slice through the sample, rather than capturing four sequential images and hoping nothing moved in the interim.

In the initial SiMView publication, Keller’s team used the system to image a growing Drosophila embryo in its entirety, over and over again, every 30 seconds for about a day, using a genetically encoded, nuclear fluorescent protein to tag and track each cell. The result is an amazing visualization of cellular motion during embryogenesis. More recently, they imaged a zebrafish brain, at 96 million cubic microns about three times larger than a fruit fly embryo and containing about 100,000 cells, every second for an hour using a genetically encoded fluorescent calcium indicator called GCaMP5G.

“It’s a different kind of information we are after,” Keller explains. “We want to see neural activity on the single-cell level.”

Imaging animals in 3D

To view anatomic and molecular events on a larger scale, researchers use whole-animal scanners, which enable non-invasive live-animal imaging.

Small-animal imagers were covered in depth in a recent Biocompare editorial on in vivo toxicology imaging, but in brief, there are two basic kinds of scanner, says Mat Brevard, North American Vice President of Preclinical Imaging for Bruker BioSpin: those providing volumetric anatomical imaging and those enabling molecular imaging.

Volumetric anatomical imaging in three dimensions at high resolution is conducted primarily with x-ray computed tomography (CT) and magnetic resonance imaging (MRI). The former produces a three-dimensional x-ray image of a sample and is especially well suited to differentiating bone, lung, and some soft tissues such as adipose tissue, says Brevard, whereas the latter uses a magnetic field and radiofrequency pulses to image hydrogen atoms with sensitivity to differences in soft tissues.

Molecular imaging modalities are those that use some sort of molecular tracer to home in on specific markers, whether using a radioisotopically labeled molecule (as in positron emission tomography (PET) or single-photon emission computed tomography (SPECT)), or a fluorescent or bioluminescent protein (as in optical imaging).

PET and SPECT actually produce a tomographic, 3D image, which can be coregistered with CT or MR images if desired. But optical modalities, while less expensive and faster, generally cannot; they usually produce planar 2D images that can be overlaid, for instance, on white light or x-ray images of the animal for anatomic positioning. (One exception is Revvity’s line of FMT fluorescence molecular tomography systems, obtained when the company acquired VisEn.)

One alternative is photoacoustic tomography (PAT), a kind of optical imaging modality now available from Endra in Ann Arbor, Mich. According to President Michael Thornton, PAT takes advantage of two well-established techniques, optical absorption and ultrasound imaging.

With typical fluorescence imaging, excitation light must penetrate the sample to excite the fluorophores, and the emitted fluorescence must then travel back through the tissue to reach the detector. As a result, optical imaging loses resolution and sensitivity as depth increases. PAT also uses an external light source to excite fluorophores (as well as some endogenous molecules like hemoglobin), but rather than recording the resulting fluorescence, it listens for the acoustic shockwave that results from the heating caused by the absorbed light.

Essentially, Thornton explains, the laser emits very short, nanosecond-scale pulses. When a pulse strikes an absorbing molecule, such as a fluorophore, it excites the molecule to fluoresce. But it also heats it ever so slightly, by just a few thousandths of a kelvin. That temperature change, in turn, produces a pressure wave, which propagates out of the sample as sound. The result is a system capable of much deeper imaging than standard optical scanners (up to 3 or 4 cm in some cases), and one that produces inherently 3D images.
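The physics Thornton describes can be checked with back-of-the-envelope numbers. In the standard photoacoustics relation, the initial pressure equals the Grüneisen parameter times the absorbed energy density. All of the values below are representative assumptions for soft tissue, not figures from Endra.

```python
# Back-of-the-envelope photoacoustic estimates. All parameter values are
# assumed, order-of-magnitude numbers for soft tissue.
mu_a = 100.0       # optical absorption coefficient, 1/m (about 1 per cm)
fluence = 100.0    # laser fluence per pulse, J/m^2 (10 mJ/cm^2)
rho_cp = 4.0e6     # volumetric heat capacity of soft tissue, J/(m^3 K)
grueneisen = 0.2   # Grueneisen parameter of tissue (dimensionless)

# Absorbed energy density and the resulting temperature jump:
energy_density = mu_a * fluence          # J/m^3
delta_T = energy_density / rho_cp        # kelvin
print(f"temperature rise: {delta_T*1000:.1f} mK")

# Initial photoacoustic pressure, p0 = Grueneisen * absorbed energy density:
p0 = grueneisen * energy_density         # pascals
print(f"initial pressure: {p0/1000:.1f} kPa")
```

With these assumed inputs the temperature rise works out to a few thousandths of a kelvin, consistent with Thornton's description, and the initial pressure to roughly a couple of kilopascals, which is readily detectable with ultrasound transducers.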

“It’s almost like a cheat,” says Thornton. “You’re doing optical imaging but the image is formed with ultrasound. It’s the only imaging modality that comes to mind where you send in one form of energy and you detect another.”

Endra has commercialized PAT in the Nexus 128 imager, launched at the end of 2011. It already can overlay a PAT image on a white light image of an animal, and the company is working to enable overlays with 3D ultrasound and MR as well, Thornton says.

Other imaging modalities (PET, SPECT, microCT, MR, and optical) are available from Revvity and LI-COR Biosciences, as well as Bruker BioSpin, which recently launched the Bruker SkyScan 1272, a high-resolution, high-throughput microCT scanner capable of automatically imaging up to 16 ex vivo samples, each up to 7 cm x 7 cm, at a resolution of 0.4 microns, Brevard says.

Whatever your size scale, these systems can help you view it in three dimensions. You need never be out of position to make the call again.

The image at the top of the page is from Revvity's Volocity 3D Image Analysis Software.

