
Space Ranger Algorithms: Image Processing

Space Ranger relies on image processing algorithms to solve two key problems with respect to the slide image: deciding where tissue has been placed and aligning the printed fiducial spot pattern. Tissue detection is needed to identify which barcodes/bins will be used for analysis. Fiducial alignment is needed so that Space Ranger knows where in the image each individual barcode or bin resides.

These problems are made difficult by the fact that the image scale and size are unknown, an unknown subset of the fiducial frame may be covered by tissue or debris, each tissue has its own unique appearance, and the image exposure may vary from run to run.

If either of the procedures described here fails to perform adequately, users can turn to manual alignment in Loupe Browser. In most cases, the pipeline cannot indicate that a failure has occurred, so users are encouraged to check the quality control image in the Summary tab of the web_summary.html file that Space Ranger outputs. Note that the user-supplied full resolution original image is used for downstream visualization in Loupe Browser.

Visium contains a system for identifying the slide-specific pattern of invisible capture spots printed on each slide and how these relate to the visible fiducial spots that form a frame around each capture area. The output of this process is a coordinate transform that relates the Visium barcoded slide pattern to the user's tissue image.

From Space Ranger v2.0 onwards, the spaceranger count pipeline runs with --reorient-images=true by default, which removes the constraint that the hourglass fiducial marker must be in the upper-left corner. This is accomplished by running the alignment algorithm against each of the eight possible fiducial frame transformations (all rotations by 90 degrees plus mirroring) and choosing the alignment with the best fit.
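
As an illustration of this search, the sketch below enumerates the eight candidate orientations (four 90-degree rotations, each with and without mirroring) and keeps the one whose alignment scores best. The function names are hypothetical and do not reflect Space Ranger's internal code.

```python
# Illustrative sketch only: enumerate the eight candidate orientations of the
# fiducial frame and keep the best-scoring alignment. `align_and_score` stands
# in for the actual fiducial alignment routine, which is not shown here.
import numpy as np

def candidate_orientations(image: np.ndarray):
    """Yield the eight rotated/mirrored versions of a 2D image."""
    for mirrored in (False, True):
        base = np.fliplr(image) if mirrored else image
        for quarter_turns in range(4):  # 0, 90, 180, 270 degrees
            yield np.rot90(base, quarter_turns), {"mirrored": mirrored,
                                                  "quarter_turns": quarter_turns}

def best_orientation(image: np.ndarray, align_and_score):
    """Return the orientation parameters whose alignment fits best."""
    scored = [(align_and_score(candidate), params)
              for candidate, params in candidate_orientations(image)]
    return max(scored, key=lambda item: item[0])[1]
```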

In Visium, the fiducial frame has unique corners and sides that the software attempts to identify. The alignment process first extracts features that "look" like fiducial spots and then attempts to align these candidate fiducial spots to the known fiducial spot pattern. The spots extracted from the image will necessarily contain some misses, for instance in places where the fiducial spots were covered by tissue, and some false positives, such as where debris on the slide or tissue features may look like fiducial spots.

After extraction of putative fiducial spots from the image, this pattern is aligned to the known fiducial spot pattern in a manner that is robust to a reasonable number of false positives and false negatives.
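
One common way to make such point-pattern alignment robust to spurious and missing detections is a RANSAC-style estimator, sketched below under the assumption of a similarity transform. This is a generic illustration, not necessarily the estimator Space Ranger implements.

```python
# Generic RANSAC-style sketch of robust point-pattern alignment; not
# necessarily the estimator Space Ranger implements.
import numpy as np

def fit_similarity(src: np.ndarray, dst: np.ndarray):
    """Least-squares similarity transform (scale, rotation, translation)
    mapping src points onto dst points; both are (N, 2) arrays."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c)
    d = np.sign(np.linalg.det(U @ Vt))  # avoid an unintended reflection
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    scale = (S * np.diag(D)).sum() / (src_c ** 2).sum()
    t = dst.mean(0) - scale * src.mean(0) @ R.T
    return scale, R, t

def ransac_align(detected: np.ndarray, reference: np.ndarray,
                 n_iter: int = 2000, tol: float = 2.0, seed: int = 0):
    """Hypothesize transforms from random pairs of detected/reference spots and
    keep the one that maps the most detected spots near a reference spot."""
    rng = np.random.default_rng(seed)
    best_count, best_tx = 0, None
    for _ in range(n_iter):
        i = rng.choice(len(detected), 2, replace=False)
        j = rng.choice(len(reference), 2, replace=False)
        scale, R, t = fit_similarity(detected[i], reference[j])
        mapped = scale * detected @ R.T + t
        dists = np.linalg.norm(mapped[:, None] - reference[None, :], axis=2).min(1)
        inliers = int((dists < tol).sum())
        if inliers > best_count:
            best_count, best_tx = inliers, (scale, R, t)
    return best_tx
```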

In Visium HD, the capture area is framed by unique concentric ring fiducials. The positions of these fiducials are identified by the pipeline (Calvet et al., 2016) and used for alignment. Some of these fiducials may be missed due to, for example, tissue coverage or debris, but the pipeline requires only a fraction of the fiducials to be detected for successful alignment. After extraction of putative fiducial rings from the image, this pattern is aligned to the known fiducial ring pattern.
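
As a much simpler illustration of locating circular fiducials, and not the Calvet et al. (2016) detector that the pipeline uses, a Hough transform over a grayscale image can propose candidate ring centers:

```python
# Simplified illustration of circular fiducial detection with a Hough
# transform (OpenCV). The pipeline uses the more robust method of Calvet et
# al. (2016); this sketch only conveys the general idea, and the parameter
# values are arbitrary.
import cv2
import numpy as np

def detect_circle_candidates(gray: np.ndarray) -> np.ndarray:
    """Return (x, y, radius) candidates for circular fiducials in an 8-bit
    grayscale image."""
    blurred = cv2.GaussianBlur(gray, (9, 9), 2)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.5, minDist=20,
                               param1=100, param2=30, minRadius=5, maxRadius=40)
    return np.empty((0, 3)) if circles is None else circles[0]
```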

The Space Ranger algorithm was optimized for cases where at least 25% of the fiducials are not covered by tissue. If fewer than 25% of the fiducials are exposed, users are more likely to need to run manual alignment.

Visium HD requires a high degree of accuracy in the image alignment process. Minor tilting of the CytAssist camera can noticeably increase tissue alignment error. To correct for this, the Visium HD pipeline computes a perspective correction during the fiducial alignment process. This correction is then applied to the CytAssist image to ensure accurate alignment of gene expression to tissue.
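
A minimal sketch of such a correction, assuming matched fiducial positions are available, is to estimate a homography from the detected positions to their expected design positions and warp the image accordingly. The inputs here are hypothetical placeholders, not pipeline outputs.

```python
# Hedged sketch of a perspective correction: estimate a homography from
# detected fiducial positions to their expected positions, then warp the
# CytAssist image. The inputs are hypothetical placeholders.
import cv2
import numpy as np

def correct_perspective(image: np.ndarray, detected_pts: np.ndarray,
                        expected_pts: np.ndarray) -> np.ndarray:
    """detected_pts / expected_pts: matched (N, 2) arrays of fiducial centers."""
    H, _ = cv2.findHomography(detected_pts.astype(np.float32),
                              expected_pts.astype(np.float32),
                              method=cv2.RANSAC, ransacReprojThreshold=3.0)
    height, width = image.shape[:2]
    return cv2.warpPerspective(image, H, (width, height))
```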

To restrict analysis to only those spots or squares where tissue was placed, Space Ranger uses an algorithm to identify tissue in the input brightfield image. This is always performed on the same image as fiducial detection, either the CytAssist image or the microscope image. Using a grayscale, downsampled version of the input image that maintains the original aspect ratio, multiple estimates of tissue section placement are calculated and compared. These estimates are used to train a statistical classifier that labels each pixel within the capture area as either tissue or background.
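
The sketch below is a minimal stand-in for this idea, assuming an initial tissue/background estimate from Otsu thresholding and a simple per-pixel classifier; it is not Space Ranger's actual classifier or feature set.

```python
# Minimal stand-in for tissue detection (not Space Ranger's actual classifier):
# downsample to grayscale, take an Otsu threshold as an initial tissue/background
# estimate, then train a simple per-pixel classifier on intensity and local
# variability features.
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.color import rgb2gray
from skimage.filters import threshold_otsu
from skimage.transform import rescale
from sklearn.naive_bayes import GaussianNB

def tissue_mask(rgb_image: np.ndarray, downsample: float = 0.1) -> np.ndarray:
    """Label each downsampled pixel as tissue (True) or background (False)."""
    gray = rescale(rgb2gray(rgb_image), downsample, anti_aliasing=True)
    # Initial estimate: tissue is darker than the bright slide background.
    initial = gray < threshold_otsu(gray)
    # Per-pixel features: intensity plus a crude local-variance measure that
    # favors the complex structure of tissue over a smooth background.
    local_var = uniform_filter(gray ** 2, size=7) - uniform_filter(gray, size=7) ** 2
    features = np.column_stack([gray.ravel(), local_var.ravel()])
    classifier = GaussianNB().fit(features, initial.ravel())
    return classifier.predict(features).reshape(gray.shape)
```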

In order to achieve optimal results, the algorithm expects an image with a smooth, bright background and darker tissue with a complex structure. If the area drawn in red does not coincide with the tissue, you can perform manual alignment and tissue selection in Loupe Browser before running Space Ranger.

Support for CytAssist-enabled Gene Expression analysis was introduced in Space Ranger v2.0. This workflow uses the CytAssist image, which contains the fiducial frame, and optionally a microscope image (brightfield or fluorescence) of the same tissue section on the standard glass slide. When both image inputs are provided, the images need to be registered to each other for downstream visualization. Additionally, users can optionally provide the microscope image scale in microns per pixel (using the command line option --image-scale), which improves the success rate of tissue registration.

The image registration process targets a 2D similarity transformation, which comprises homogeneous scaling, translation, rotation, and mirroring. To enable automated tissue image registration between the microscope image and the CytAssist image, the algorithm first estimates the appropriate scaling and translation so that the two tissue sections can be roughly overlaid. If --image-scale is provided, Space Ranger uses that value instead of estimating the scale. Because the correct rotation and mirroring are hard to estimate, the pipeline tests eight possible transformations (all rotations by 90 degrees plus mirroring). The estimated scaling, translation, and rotation serve as the initialization for the registration, which is an optimization process that relies on the Mattes mutual information metric from the open source Insight Toolkit (ITK). When optimization is complete for all initializations, the pipeline selects the candidate with the smallest Mattes mutual information metric value (entropy) as the final registration result.
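
The sketch below illustrates this strategy with SimpleITK (the Python wrapper for ITK): a 2D similarity transform optimized under the Mattes mutual information metric, initialized from each of the eight rotation/mirror combinations, keeping the result with the lowest metric value. It is a simplified illustration, not the pipeline's code, and the optimizer settings are arbitrary.

```python
# Simplified SimpleITK sketch of the registration strategy described above:
# try all eight rotation/mirror initializations of a 2D similarity transform,
# optimize each under the Mattes mutual information metric, and keep the best.
# Not the pipeline's code; optimizer settings are arbitrary.
import math
import SimpleITK as sitk

def image_center(img):
    return img.TransformContinuousIndexToPhysicalPoint(
        [(size - 1) / 2.0 for size in img.GetSize()])

def initial_similarity(fixed, moving, angle):
    """Similarity transform that overlays the image centers at a given angle."""
    tx = sitk.Similarity2DTransform()
    tx.SetCenter(image_center(fixed))
    tx.SetAngle(angle)
    tx.SetTranslation([m - f for m, f in zip(image_center(moving), image_center(fixed))])
    return tx

def register(fixed, moving):
    fixed = sitk.Cast(fixed, sitk.sitkFloat32)
    moving = sitk.Cast(moving, sitk.sitkFloat32)
    best_metric, best_result = float("inf"), None
    for mirrored in (False, True):
        candidate = sitk.Flip(moving, [True, False]) if mirrored else moving
        for quarter_turns in range(4):
            reg = sitk.ImageRegistrationMethod()
            reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
            reg.SetMetricSamplingStrategy(reg.RANDOM)
            reg.SetMetricSamplingPercentage(0.2)
            reg.SetInterpolator(sitk.sitkLinear)
            reg.SetOptimizerAsRegularStepGradientDescent(
                learningRate=1.0, minStep=1e-4, numberOfIterations=200)
            reg.SetOptimizerScalesFromPhysicalShift()
            reg.SetInitialTransform(
                initial_similarity(fixed, candidate, quarter_turns * math.pi / 2),
                inPlace=False)
            transform = reg.Execute(fixed, candidate)
            # A lower Mattes mutual information value indicates a better match.
            if reg.GetMetricValue() < best_metric:
                best_metric = reg.GetMetricValue()
                best_result = (mirrored, quarter_turns, transform)
    return best_result
```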

This section does not apply to Visium HD. From Space Ranger 2.0 onwards, for Visium v1/v2 data with fluorescence images, the fluorescence intensity at every barcoded location is quantified and recorded in the barcode_fluorescence_intensity.csv file. Each page of an input grayscale image is downsampled to the same dimensions as the tissue_hires_image.png image found in the outs/spatial directory. The pixels are then assigned to each barcoded spot for each page. The location and size of each spot are defined in tissue_positions.csv and scalefactors_json.json. For each page, Space Ranger then calculates the mean and standard deviation of the intensities of the pixels that lie entirely within the area of each known barcoded spot.
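
A hedged sketch of this quantification is shown below. It uses column names found in tissue_positions.csv, but simplifies the "entirely within the spot" criterion to a pixel-center distance test and assumes the spot coordinates have already been rescaled to the downsampled image.

```python
# Hedged sketch of per-spot fluorescence quantification. Column names follow
# tissue_positions.csv; the "entirely within the spot" criterion is simplified
# to a pixel-center distance test, and spot coordinates are assumed to already
# be scaled to the downsampled image.
import numpy as np
import pandas as pd

def spot_intensity_stats(channel: np.ndarray, positions: pd.DataFrame,
                         spot_radius_px: float) -> pd.DataFrame:
    """channel: one downsampled page of the fluorescence image.
    positions: DataFrame with barcode, pxl_row_in_fullres, pxl_col_in_fullres
    (already rescaled to the downsampled image dimensions)."""
    rows, cols = np.indices(channel.shape)
    records = []
    for spot in positions.itertuples():
        dist2 = (rows - spot.pxl_row_in_fullres) ** 2 + (cols - spot.pxl_col_in_fullres) ** 2
        inside = channel[dist2 <= spot_radius_px ** 2]
        records.append((spot.barcode, float(inside.mean()), float(inside.std())))
    return pd.DataFrame(records, columns=["barcode", "mean", "stdev"])
```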

Calvet, L., Gurdjos, P., Griwodz, C. & Gasparini, S. (2016). Detection and accurate localization of circular fiducials under highly challenging conditions. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.