4 Tips for Optimizing Your Microarray Workflow

Microarrays are arrangements of thousands of biomolecules on a solid substrate, such as a silicon chip or glass slide. They are designed for high-throughput detection of target material (DNA, RNA or proteins) under various physiological, developmental or disease conditions. Labeled target binds to the array elements through specific recognition (in the case of nucleic acids, by hybridization), producing fluorescent signals that indicate the identity and concentration of the labeled species. Over the past 10 years, next-generation sequencing (NGS) has captured a large segment of the nucleic acid microarray market. Although NGS is often associated with better sensitivity in discovery-based projects, microarrays are still the way to go when processing a large number of samples with known content in a short period of time. Microarray platforms also typically have more mature software and algorithms, saving considerable time in downstream analyses. Below are some pointers to ensure a streamlined nucleic acid microarray workflow.

Study design is key

Probe selection, which forms the basis of any microarray application, is the most challenging aspect of study design. For researchers who purchase commercially available arrays—those produced by synthesis manufacturing, in which numerous DNA sequences are synthesized directly on the chip—a common problem is unfamiliarity with the type of analysis applicable to a specific array type. A vendor might offer a variety of human expression arrays that fall under different analysis categories and algorithms. Proper selection is contingent upon good study design and saves researchers time and money from the outset.

Good sample = Good hybridization

Other important considerations are sample quality and quantity, both of which directly affect hybridization quality. For instance, poor-quality RNA that is degraded or contaminated cannot be labeled efficiently and therefore hybridizes poorly, often resulting in low signal. One solution for low signal is to increase the amount of RNA used in labeling. David Galbraith, professor of plant sciences at the University of Arizona, also recommends that researchers use RNA purified by binding to a column or membrane rather than by precipitation, especially when working with plant material. In the past, his lab manufactured microarrays in-house using delivery-based technology, in which elements are printed directly onto glass slides with specialized pins.
“We say a good array is when you get something visually of T-shirt quality … that you can print and sell to someone,” says Galbraith. “You want nice round spots that stand out against the black background.”

Avoiding high background

High background also degrades image quality, and it can arise both before and after hybridization. Prior to hybridization, improperly coated slides used in printing may show significant autofluorescence, so samples of slides from each production batch should be scanned before printing. Following hybridization, comet-like streaking may occur when DNA is printed on a slide but does not bind to it, usually because the DNA was printed at too high a concentration; washing the slide stringently with 1% SDS prior to hybridization to remove the unbound DNA will help. Unincorporated fluorochrome molecules are another source of high background after hybridization; purifying the labeled target with commercial filters will help, and the resulting solution should look almost colorless after purification.

Closely review feature extraction

Feature extraction is an area in which researchers tend to run into confusion, yet it is a critical step that influences the outcome of any array experiment. Feature extraction is performed by spot-finding programs that convert scanned digital images into numerical values representing the signal intensity of each spot; most microarray scanners are bundled with such programs. Regardless of the program, Galbraith recommends reviewing the entire slide twice: the first pass to fix mismatches between the grid and the printed spots, and the second to mark obvious errors. These errors include smears, streaks or dust in which a spot is the same intensity as, or lighter than, the smear it sits in; spots that are out of alignment with the grid, indicating that their assigned position is being driven by signal-to-noise artifacts; and spots that have bled into adjacent spots. It's best to eliminate these spots from the data set entirely.
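To make the quantification step concrete, below is a minimal sketch in Python (using only NumPy) of how a spot-finding program might convert pixel intensities into per-spot signal and local-background values and flag questionable spots. The function names, radii and signal-to-noise threshold are illustrative assumptions, not the behavior of any particular vendor's feature-extraction software, which additionally refines the grid, segments irregular spots and normalizes across channels.

```python
# Minimal sketch of spot quantification and flagging. Assumes the scanned
# slide has already been loaded as a 2-D NumPy array of pixel intensities
# and that approximate grid positions (row, col) for each spot are known.
import numpy as np

def quantify_spot(image, center, spot_radius=6, bg_inner=9, bg_outer=14):
    """Return (foreground, background) median intensities for one spot.

    Foreground = pixels within spot_radius of the center.
    Background = pixels in an annulus between bg_inner and bg_outer.
    """
    rows, cols = np.indices(image.shape)
    dist = np.hypot(rows - center[0], cols - center[1])
    fg = np.median(image[dist <= spot_radius])
    bg = np.median(image[(dist > bg_inner) & (dist <= bg_outer)])
    return fg, bg

def flag_spot(fg, bg, min_snr=2.0):
    """Flag spots whose signal is not clearly above the local background."""
    return "ok" if fg > min_snr * max(bg, 1.0) else "flagged"

if __name__ == "__main__":
    # Synthetic 100x100 image with one strong spot and one near-background spot.
    rng = np.random.default_rng(0)
    img = rng.normal(100, 10, size=(100, 100))
    yy, xx = np.indices(img.shape)
    img[np.hypot(yy - 30, xx - 30) <= 5] += 800   # strong spot
    img[np.hypot(yy - 70, xx - 70) <= 5] += 30    # weak spot, barely above background

    for center in [(30, 30), (70, 70)]:
        fg, bg = quantify_spot(img, center)
        print(center, round(fg, 1), round(bg, 1), flag_spot(fg, bg))
```

The core idea is the same one a manual review targets: a good spot's foreground intensity should stand out clearly against its local background, and spots that do not clear that bar are candidates for removal from the data set.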

The future of microarrays

Overall, microarray analysis has transformed research both technologically and methodologically. It has helped advance genome-wide transcriptome analysis and has identified protein targets for therapeutics. Microarrays remain the platform of choice for profiling assays in which the goal is to screen thousands of patient samples for a diagnostic profile. However, the future of microarrays in light of NGS technology remains to be seen. Although NGS is currently more expensive per sample, its cost is dropping. For now, the choice of one technology over the other depends largely on a researcher's goals.
