Mosaicing of Widefield Endomicroscopy Images

The Bioengineering Department at Rice University in Texas has been developing fibre-bundle based widefield endomicroscopes for several years. While these devices lack the depth sectioning capabilities of confocal endomicroscopes, they can still produce useful images from certain tissues if a suitable topical fluorophore is applied. A recent paper from Richards-Kortum’s Group at Rice has demonstrated ‘real time’ mosaicing using their endomicroscope, allowing characterisation of much larger areas of tissue than would otherwise be possible.

The idea of online mosaicing for endomicroscopes isn’t new; real-time software ships with Mauna Kea’s CellVizio confocal endomicroscopy system, and Tom Vercauteren’s PhD thesis gives the topic a comprehensive treatment. A particular problem for confocal scanning endomicroscopes is the introduction of ‘shearing’ motion artefacts. These occur because the ‘top’ of the image is acquired slightly earlier than the ‘bottom’, so an estimate of the probe’s velocity vector must be made if we want to correct for them. Camera-based widefield endomicroscopes don’t have this problem because the whole frame is acquired in a single shot. The frame integration time can also often be lower, both because far more light is collected and because there is no need for mechanical scanning. This means that motion artefacts are generally much less of a problem, making high-speed mosaicing easier to implement.
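To make the geometry concrete, a shear of this kind could in principle be undone row by row once a velocity estimate is available. The sketch below is my own illustration, not code from CellVizio or the thesis; the function name, the constant-velocity assumption, and the restriction to the lateral velocity component are all simplifications:

```python
import numpy as np

def unshear(frame, velocity, line_time):
    """Undo the 'shearing' artefact of a line-scanned image by shifting
    each row back by the lateral distance the probe moved before that
    row was acquired.

    frame:     2-D array, rows acquired top to bottom
    velocity:  estimated probe velocity (vy, vx) in pixels per second;
               only the lateral component vx is corrected in this sketch
    line_time: time taken to scan one row, in seconds
    """
    out = np.empty_like(frame)
    for r in range(frame.shape[0]):
        # Lateral displacement accrued by the time row r was scanned
        dx = int(round(velocity[1] * line_time * r))
        out[r] = np.roll(frame[r], -dx)
    return out
```

A real implementation would also need sub-pixel interpolation and a way of estimating the velocity itself, typically from the inter-frame registration step.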

The authors’ method [1] involves two steps. The first is to remove the ‘honeycomb’ pattern that arises from the spacing of the fibre bundle cores. There are many different ways of doing this, the simplest being the application of a Gaussian filter. The method used in this paper is a little more complex, and involves finding the location of each core in an initial calibration stage. Subsequent images are then reconstructed by pulling out the pixel values at the centre of each core and interpolating to obtain values for all other pixels in the image. It’s essentially a simplification of a more complete approach which has been patented by Mauna Kea.
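A minimal sketch of the sample-at-cores-then-interpolate idea might look like the following. This is my own illustration rather than the authors’ code: the function name is invented, and the use of SciPy’s `griddata` for the interpolation step is an assumption.

```python
import numpy as np
from scipy.interpolate import griddata

def remove_core_pattern(image, core_coords):
    """Remove the fibre-bundle 'honeycomb' by sampling the pixel value at
    each core centre and interpolating those samples over the full grid.

    image:       2-D array, a raw bundle image
    core_coords: (N, 2) array of (row, col) core-centre positions, found
                 once in an initial calibration stage
    """
    # Pixel value at the centre of each core
    values = image[core_coords[:, 0], core_coords[:, 1]]
    # Interpolate the core samples onto every pixel in the frame
    rows, cols = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    return griddata(core_coords, values, (rows, cols),
                    method="linear", fill_value=0.0)
```

The calibration stage that finds `core_coords` (e.g. by peak detection on a flat-field image of the bundle) is left out here for brevity.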

Once the core pattern has been removed, image registration takes place. To determine the motion between two image frames the authors used a simple cross-correlation. This is a fairly standard, if not always ideal, way of determining motion from images. Once the relative shift between two or more images is known, they can then be stitched together to make the mosaic. The ‘stitching’ method reported in this paper was a simple ‘dead-leaf’ approach – when a frame is added to the mosaic the pixel values simply over-write those of existing frames. The authors didn’t look at more complex blending methods, presumably in an attempt to minimise computation time.
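The registration and stitching steps above could be sketched roughly as follows. Again this is my own illustration, not the paper’s code: it uses an FFT-based cross-correlation peak to estimate a whole-pixel translation, followed by dead-leaf pasting.

```python
import numpy as np

def estimate_shift(ref, frame):
    """Estimate the (row, col) translation of `frame` relative to `ref`
    by locating the peak of their circular cross-correlation (via FFT)."""
    corr = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(frame)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the halfway point wrap around to negative shifts
    return tuple(p if p <= s // 2 else p - s
                 for p, s in zip(peak, corr.shape))

def paste_dead_leaf(mosaic, frame, top, left):
    """'Dead-leaf' stitching: the new frame's pixels simply over-write
    whatever is already in the mosaic at that position."""
    h, w = frame.shape
    mosaic[top:top + h, left:left + w] = frame
    return mosaic
```

A full mosaicing loop would accumulate the frame-to-frame shifts into an absolute position for each frame and grow the mosaic canvas as needed; more careful implementations also window the images before the FFT to suppress edge effects.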

The mosaics look to be fairly free of artefacts, so this simple method appears to work quite well – assuming the examples weren’t cherry-picked. The limitation of all image-based motion estimation techniques is that they require a high degree of overlap between consecutive image frames. The authors achieved that here for ex vivo samples and for skin imaging by taking care to move the probe very slowly. It would be interesting to see whether this could be replicated if the device were actually used in an endoscope, in which case the operator would have much less fine control of the motion.


  1. Noah Bedard et al., “Real-time video mosaicing with a high-resolution microendoscope,” Biomedical Optics Express 3, 2428–2435 (2012)
