Within the world of high-definition microscopy, the adage “seeing is believing” can be misleading. When looking into a microscope, one does not actually look at or “see” the sample itself. Rather, what the microscope user views is a manufactured image of the sample after light from it has traveled through the optical path to the eye. Every lens, every medium, and every microscope setting affects the shape and symmetry of the point spread function (PSF) produced by a source of light below the diffraction limit. In simplest terms, the PSF can be thought of as the 3D representation of a point or point-like object under the microscope. The PSF varies with wavelength: shorter wavelengths create a smaller PSF, while longer wavelengths create a larger one.
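As a rough illustration of how PSF size scales with wavelength, the Abbe diffraction limit, d = λ / (2·NA), can be evaluated for two common laser lines. The specific wavelengths and numerical aperture below are illustrative choices, not values from the article:

```python
# Sketch: diffraction-limited lateral resolution via the Abbe limit, d = wavelength / (2 * NA).
# Illustrates why shorter wavelengths yield a smaller PSF; wavelengths and NA are illustrative.

def abbe_lateral_limit_nm(wavelength_nm: float, na: float) -> float:
    """Abbe diffraction limit for lateral resolution, in nanometers."""
    return wavelength_nm / (2.0 * na)

blue = abbe_lateral_limit_nm(488, 1.4)   # ~174 nm for a 488 nm line
red = abbe_lateral_limit_nm(640, 1.4)    # ~229 nm for a 640 nm line
print(f"488 nm: {blue:.0f} nm, 640 nm: {red:.0f} nm")
```

The same objective thus resolves finer detail with the shorter wavelength, which is why the PSF shrinks as the excitation wavelength decreases.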
The wavelike nature of light produces predictable optical patterns, enabling the construction of a theoretically perfect PSF (the “perfect” PSF would be the result of an extremely small point transmitted through a very high numerical aperture [NA] objective). PSFs are not only critical to assessing the performance of a microscopy system—they have also proven to be a valuable resource in the development of computational processing algorithms, and are a fundamental concept utilized in deconvolution. According to David Biggs, owner of KB Imaging Solutions and a pioneer in the development of deconvolution imaging technologies, “Modern 3D microscopy is now a computational imaging endeavor requiring both optimized hardware and optics, plus software algorithms to extract the truest representation of the specimen being observed by the instrument.”
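The article does not prescribe a specific deconvolution algorithm. As one concrete illustration of how a PSF model drives deconvolution, here is a minimal 1-D Richardson-Lucy iteration, a classical iterative method; the toy point sources and Gaussian PSF are invented for the demo:

```python
# Minimal 1-D Richardson-Lucy deconvolution sketch (one classical PSF-based method;
# not necessarily the algorithm used in any particular commercial system).
import numpy as np

def richardson_lucy(observed, psf, iterations=50):
    """Iteratively estimate the underlying signal from a blurred observation and a PSF."""
    psf = psf / psf.sum()          # normalize so the PSF conserves flux
    psf_mirror = psf[::-1]         # mirrored PSF for the correlation step
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)   # avoid division by zero
        estimate *= np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy example: two nearby point sources blurred by a Gaussian PSF.
x = np.arange(64)
truth = np.zeros(64); truth[28] = 1.0; truth[36] = 1.0
psf = np.exp(-0.5 * ((x - 32) / 3.0) ** 2)
observed = np.convolve(truth, psf / psf.sum(), mode="same")
restored = richardson_lucy(observed, psf, iterations=200)
```

After iteration, the blurred double peak is re-concentrated toward two sharp maxima, which is the intuition behind using a measured or theoretical PSF to recover detail.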
Detection capabilities dramatically improving
It has been well established that both lateral and axial resolution are affected by the size of a confocal pinhole aperture, requiring a delicate balance between resolution and photon detection. Shrinking the size of the confocal pinhole below 1 Airy Unit (AU) results in a smaller full-width at half-maximum (FWHM) and therefore improved resolution. However, potential image-forming photons outside of the confocal pinhole aperture in both lateral and axial dimensions are rejected, resulting in lower intensity values. This can be problematic for subtractive deconvolution algorithms (deblurring, nearest-neighbor deconvolution), which inherently reduce signal strength and increase noise, creating images with poor signal-to-noise ratios (SNRs).
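To make the FWHM metric concrete, the sketch below measures the full-width at half-maximum of a sampled intensity profile numerically. The Gaussian test profile is an illustrative stand-in for a PSF cross-section, not real confocal data:

```python
# Sketch: measuring the full-width at half-maximum (FWHM) of a sampled,
# single-peaked intensity profile. A smaller FWHM corresponds to higher resolution.
import numpy as np

def fwhm(x, y):
    """FWHM of a single-peaked profile (peak away from the edges), with
    linear interpolation at each half-maximum crossing for sub-pixel accuracy."""
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    left, right = above[0], above[-1]
    xl = np.interp(half, [y[left - 1], y[left]], [x[left - 1], x[left]])
    xr = np.interp(half, [y[right + 1], y[right]], [x[right + 1], x[right]])
    return xr - xl

x = np.linspace(-10, 10, 201)
y = np.exp(-x**2 / (2 * 2.0**2))               # Gaussian profile, sigma = 2
print(f"measured FWHM: {fwhm(x, y):.3f}")      # theory: 2.3548 * sigma ~ 4.710
```

For a Gaussian, the measured value should match the analytic 2·σ·√(2·ln 2), which is a quick sanity check when characterizing a system's PSF at different pinhole settings.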
Recently, detection capabilities in laser scanning confocal microscope systems have improved dramatically. Optimized high-sensitivity gallium arsenide phosphide (GaAsP) detectors with Peltier cooling, like those found in the Olympus FV3000 confocal microscope, offer quantum efficiencies up to 45%, maximizing photon collection and improving the SNR even at pinhole sizes less than 1 AU. With the FV3000, full flexibility can be realized through TruSpectral Detection capabilities that allow simultaneous acquisition of multiple wavelengths and customizable data collection. Olympus’ FV-OSR Super-Resolution further capitalizes on these improved SNRs, coupling them with advanced signal-processing algorithms to push spatial resolution down to 120 nanometers.
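The benefit of higher quantum efficiency (QE) can be sketched with shot-noise statistics: for Poisson-distributed photons, SNR grows as the square root of the number of detected photons. The 45% QE is the figure quoted above; the 25% baseline and the photon count are assumed comparison values:

```python
# Sketch: shot-noise-limited SNR as a function of detector quantum efficiency.
# For Poisson photon statistics, SNR = N_detected / sqrt(N_detected) = sqrt(QE * N_incident).
import math

def shot_noise_snr(incident_photons: float, qe: float) -> float:
    """Shot-noise-limited SNR for a detector with the given quantum efficiency."""
    detected = qe * incident_photons
    return math.sqrt(detected)

snr_baseline = shot_noise_snr(1000, 0.25)  # assumed conventional-detector QE
snr_gaasp = shot_noise_snr(1000, 0.45)     # GaAsP QE figure from the text
print(f"baseline: {snr_baseline:.1f}, GaAsP: {snr_gaasp:.1f}")
```

Under this model, moving from 25% to 45% QE improves SNR by a factor of √(0.45/0.25) ≈ 1.34 at the same illumination, which is why higher-QE detectors tolerate smaller, resolution-enhancing pinholes.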
“While it was originally a battle between confocal and deconvolution microscopy systems, it is now apparent that even 3D confocal data can benefit from the resolution enhancement and noise reduction provided by post-processing algorithms,” adds Biggs. “Today’s newest imaging systems, with resolution beyond the normal optical diffraction limit, such as SIM, require image processing algorithms as a fundamental part of the data collection pipeline, and will continue to push the limits of what can be observed.”
Indeed, optical and computational super-resolution techniques are coming together in commercial imaging instruments to offer flexible, multi-modal systems capable of producing images beyond the Abbe diffraction limit. As experimental needs change and grow, the question is no longer whether super-resolution is achievable but in which dimension it is needed.
Using silicone oil objective lenses to reduce spherical aberration
High-NA silicone oil objective lenses reduce spherical aberration by using immersion media designed to mimic the refractive index of living cells more closely than traditional oils or water, producing bright, consistent PSFs in the axial dimension even at great depths (Fig. 1). This allows the acquisition of brighter, higher-resolution 3D images of live cells and living tissue, especially at greater sample depths. Additionally, silicone oil objective lenses yield morphological data that more faithfully reflects actual sample morphology, particularly in the Z-dimension.
Figure 1: Comparison of point spread functions: traditional oil vs. water vs. silicone oil objective lenses.
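A rough paraxial estimate illustrates why index matching matters: the actual focal depth scales approximately with the ratio of sample to immersion refractive index, so a ratio near 1 preserves axial geometry. The index values below are typical literature figures assumed for illustration, not numbers from the article:

```python
# Sketch: paraxial estimate of axial (Z) distortion from refractive-index mismatch.
# Actual focus depth ~ nominal depth * (n_sample / n_immersion); a scaling near 1.0
# means less spherical aberration and truer Z-dimension morphology.
# All index values are typical literature figures, assumed for illustration.
n_sample = 1.40                                          # approximate RI of living cells
media = {"oil": 1.518, "water": 1.333, "silicone oil": 1.406}

for name, n_imm in media.items():
    scaling = n_sample / n_imm
    print(f"{name}: axial scaling ~ {scaling:.3f}")
```

Silicone oil's index sits closest to that of living tissue, so its axial scaling deviates least from 1.0 — consistent with the more faithful Z-dimension morphology described above.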
Maximizing output from traditional widefield sensors
In addition to 120-nanometer super-resolution, high temporal resolution can be achieved, capturing ultrafast dynamic intracellular processes in live cells that are inaccessible to localization microscopy techniques. Technologies like the Olympus SpinSR10, a high-speed structured illumination system built upon a spinning disk confocal, use novel optical and computational approaches to maximize the output from traditional widefield sensors. Acquisition on CMOS sensors offers additional benefits over PMT detectors, including the ability to image an entire field of view at once. Deconvolution of super-resolution images is now a possibility with the Olympus SpinSR as well (Fig. 2).
Figure 2: Image before and after deconvolution. Mouse kidney tissue stained with Alexa Fluor 488; image made using Olympus SpinSR.
With today’s commercial microscopes offering a variety of user-friendly technologies, deconvolution and super-resolution are now within the reach of lab staff without extensive optical backgrounds. With this new level of imaging support, experiments that once felt out of reach are now a possibility. Fewer experimental sacrifices need to be made, and existing fluorophore labeling protocols can be used to integrate super-resolution technologies into workflows without disruption.
“Advances in computer processing, particularly multi-gigabyte memory storage and GPUs capable of teraFLOPS of performance, mean that image processing operations are no longer a rate-limiting step,” concludes Biggs. “Additionally, processing algorithms have become smarter and more automated, making them more user-friendly.”
Written by Lauren Alvarenga, Associate Product Manager, Olympus Corporation of the Americas, Scientific Solutions Group