Research Article | ENGINEERING

A minimally invasive lens-free computational microendoscope


Science Advances  06 Dec 2019:
Vol. 5, no. 12, eaaw5595
DOI: 10.1126/sciadv.aaw5595

Abstract

Ultra-miniaturized microendoscopes are vital for numerous biomedical applications. Such minimally invasive imagers allow for navigation into hard-to-reach regions and observation of deep brain activity in freely moving animals. Conventional solutions use distal microlenses. However, as lenses become smaller and less invasive, they develop greater aberrations and restricted fields of view. In addition, most of the imagers capable of variable focusing require mechanical actuation of the lens, increasing the distal complexity and weight. Here, we demonstrate a distal lens-free approach to microendoscopy enabled by computational image recovery. Our approach is entirely actuation free and uses a single pseudorandom spatial mask at the distal end of a multicore fiber. Experimentally, this lensless approach increases the space-bandwidth product, i.e., field of view divided by resolution, by threefold over a best-case lens-based system. In addition, the microendoscope demonstrates color-resolved imaging and refocusing to 11 distinct depth planes from a single camera frame without any actuated parts.

INTRODUCTION

Optical endoscopes are widely used to image the interior of the human body, enabling disease diagnosis and surgical image guidance. In addition, fiber-optic microendoscopes are becoming extremely valuable tools for structural and functional brain imaging of live animals. Such behavioral studies demand tools with high spatiotemporal resolution that can image over a large space to capture large-scale neural activity deep in the brain (1–4). One current approach is to acquire each image pixel of a scene by distal scanning of a single-core fiber or proximal scanning using a multicore fiber. Such designs typically use a mechanical scanner and microlenses and recover images with high spatial resolution but with a field of view limited by the deflection angle of the scanner. Another approach is widefield illumination and detection using a multicore fiber or a fiber bundle, where fiber cores transmit the image pixels of a scene (5). In this case, widefield imaging is accompanied by degradation in image quality due to the cross-talk between fiber cores and pixelation artifacts. Furthermore, reducing the number of fiber cores improves miniaturization but reduces the field of view, with the aforementioned effects becoming more pronounced. Alternatively, handheld microscopes based on widefield illumination and collection using microlenses have recently been demonstrated for brain imaging of freely moving mice (6, 7). Regardless of the different approaches, the distal lenses that most approaches use impose an inherent trade-off between miniaturization of the imaging probes and their imaging performance (6–10). The physical limit to miniaturization is a particular problem for brain imaging, as probe implantation inevitably damages the intricate neural circuitry that such studies aim to understand. Several lensless endoscope designs using a multimode fiber, multicore fiber, or cannula have been proposed but show drawbacks such as sensitivity to bending, restricted field of view, or inability to resolve color (11–14).

Recently, lensless cameras based on coded-aperture imaging have been proposed for biological and commercial applications (15, 16). These cameras demonstrate flat form factors, comparable to the dimensions of the bare image sensor, with variable working distances, which allows one to avoid damaging the sample through contact. The working principle is to place a single spatial mask near the front of the bare sensor, followed by characterization of light propagation through the mask and onto the sensor. A least-squares minimization algorithm with a regularizer reconstructs the scene using a single snapshot of the scene's coded-aperture response. Notably, other coded aperture–based imaging systems have also demonstrated lightfield imaging capable of computationally refocusing objects located at different depths (17, 18). However, while these approaches can be very flat, they are large in the transverse dimension, limited by the size of the sensor array and associated electronics. Thus, these approaches are best suited for application at a tissue surface and are not effective for implantation deep within tissue.

Here, we combine coded-aperture imaging with a multicore fiber to create a distal lens-free microendoscope system that simultaneously achieves miniaturization and wide field of view. Figure 1A shows a simplified illustration of conventional lens-based imaging with a multicore fiber via widefield illumination and detection. Figure 1B shows a simplified illustration of our distal lensless imaging approach using a multicore fiber and coded aperture. In essence, distal lenses are replaced with a simple random binary spatial mask (i.e., coded aperture), which modulates the intensity of light propagating from the scene to the fiber face. Unlike the widefield illumination approach, each fiber core serves as a single measurement instead of an image pixel as the cores measure a pseudorandom linear combination of light emitted from various points within the scene, enabling image reconstruction without pixelation artifacts.

Fig. 1 Imaging using a multicore fiber and coded aperture.

(A) Simplified illustration of widefield illumination imaging using a multicore fiber and lens. (B) Our distal lensless imaging approach using a coded aperture.

Before imaging, we first characterize the light propagation through the coded aperture and multicore fiber. For calibration, an incoherent source [green or white light-emitting diode (LED)] and a digital micromirror device are used to project and scan a point source across the microscopic sample plane. The design of the calibration projector is described in the Supplementary Materials (fig. S1). The light transmitted through the coded aperture and multicore fiber is imaged at the proximal end of the multicore fiber onto a charge-coupled device camera, which captures the corresponding system response of each point source. For imaging, an object is placed in the sample plane, an incoherent source illuminates the sample plane, and a single snapshot of the object’s system response is captured using the camera at the proximal end of the fiber. An image of the scene is then reconstructed using the calibrated system response of individual point sources, the single frame of the object’s system response, and an image reconstruction algorithm. In comparison to previously demonstrated lensless approaches (11, 12), the proposed lensless imager is insensitive to bending of the fiber as the operation relies on the faithful transmission of intensity patterns, not phase, of the system responses of the point sources (fig. S6).
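
As a concrete illustration of this calibration procedure, the following Python sketch assembles a calibration matrix column by column. It is a simulation only: a fixed random transmission matrix T stands in for the physical propagation through the coded aperture and fiber, and the dimensions are scaled down from the experiment's 6000 cores and 60 × 60 point-source grid.

```python
import numpy as np

rng = np.random.default_rng(0)

# Scaled-down stand-in for the physical system. In the experiment, each column
# of A is measured by projecting one DMD point source onto the sample plane and
# recording the per-core intensities at the proximal end of the fiber.
M, N = 1200, 400                  # fiber cores, grid points (6000 and 60*60 in the paper)
T = rng.random((M, N))            # stand-in for mask + fiber intensity transport

def measure_response(j: int) -> np.ndarray:
    """Simulated per-core intensities while the j-th point source is projected."""
    point = np.zeros(N)
    point[j] = 1.0                # a single point source at grid position j
    return T @ point              # camera-side core intensities for this point

# Assemble the calibration matrix one scanned point source at a time.
A = np.column_stack([measure_response(j) for j in range(N)])
```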

The above processes can be written mathematically as follows. Let M and N represent the number of fiber cores in the multicore fiber and the number of pixels in the computational reconstruction, respectively. The imaging problem is written as y = Ax, where y ∈ ℝ^(M×1) is the object's system response, A ∈ ℝ^(M×N) is the calibration matrix whose columns are the system responses of individual point sources, and x ∈ ℝ^(N×1) is the image of the object to be recovered. To reconstruct the image of the object from the object's system response, we use l1 minimization coupled with a discrete cosine transform basis at the level of blocks of pixels called patches: any selected local patch should be sparse. Out of all candidate images consistent with the system response, the iterative optimization algorithm seeks the sparsest set of overlapped patches. A detailed mathematical description of the algorithm is given in Materials and Methods. In accordance with compressive sensing theory, the minimum number of measurements, i.e., fiber cores, needed to accurately reconstruct x is given by S log(N/S) ≤ M, where S is the number of nonzero elements in x, provided the calibration matrix satisfies the restricted isometry property (19).
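
As a worked check of this bound with the dimensions given in the text (M = 6000 cores, N = 60 × 60 pixels) and an assumed, illustrative sparsity level S:

```python
import numpy as np

# Measurement-count check from compressive sensing: S log(N/S) <= M.
# M and N are taken from the text; S = 200 is an assumed sparsity level.
M, N, S = 6000, 60 * 60, 200
bound = S * np.log(N / S)                   # ~578 required measurements
print(f"S log(N/S) = {bound:.0f} <= M = {M}: {bound <= M}")
```

With these numbers, the 6000 available cores exceed the roughly 578 required measurements by an order of magnitude, which is why a single proximal frame suffices for recovery.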

RESULTS

Lens-based versus lensless microendoscope

Example experimental results of the imaging system are shown in Fig. 2. For reference, images of projected test objects (Fig. 2, A and B) and a prepared slide of esophagus tissue (Fig. 2C) are acquired using a high-resolution bulk microscope. The corresponding objects imaged through a conventional lens-based multicore fiber microendoscope are also shown (Fig. 2, D to F), using a lens and a 30-cm-long multicore fiber with 6000 fiber cores, an image circle diameter of 270 μm, a fiber core diameter of 3 μm, and a core pitch of 3.3 μm. The raw camera images captured at the proximal end of the fiber in our distal lensless system, which uses a coded aperture and the same multicore fiber, are shown in Fig. 2 (G to I), and the corresponding image reconstructions are shown in Fig. 2 (J to L). Experimental results shown throughout this article have a 980-μm-wide field of view.

Fig. 2 Experimental imaging results.

(A to C) Object images acquired using a bulk microscope. Experimental results shown throughout have a 980-μm-wide field of view. (D to F) Objects imaged using a conventional lens-based multicore fiber microendoscope. Scene is demagnified to fit within the fiber's image circle diameter of 270 μm. (G to I) Raw images captured from the proximal end of the multicore fiber in our distal lensless microendoscope using a distal coded aperture, which are used to reconstruct (J) to (L). (J to L) Objects imaged using our distal lensless microendoscope.

Test for spatial resolution

Resolution targets are imaged (Fig. 3) to determine the spatial resolution of the imaging system. Microscope images of the resolution targets (Fig. 3, A to C), images using the conventional lens-based multicore fiber microendoscope (Fig. 3, D to F), and the distal lensless microendoscope image reconstructions (Fig. 3, G to I) are shown. The linewidths in Fig. 3 (A, D, and G) are 44, 40, and 33 μm; the linewidths in Fig. 3 (B, E, and H) are 32, 29, 26, and 22 μm; and the linewidths in Fig. 3 (C, F, and I) are 21, 19, 17, and 14 μm. Unlike the lens-based approach, lensless imaging is capable of resolving the 14-μm features, as shown in the Supplementary Materials (fig. S4).

Fig. 3 Test for spatial resolution.

(A to C) Images of the resolution target objects acquired using a bulk microscope. Experimental results shown throughout have a 980-μm-wide field of view. (D to F) Objects imaged using a conventional lens-based multicore fiber microendoscope. (G to I) Objects imaged using our lensless multicore fiber microendoscope using a distal coded aperture. (A, D, and G) Linewidths are 44, 40, and 33 μm. (B, E, and H) Linewidths are 32, 29, 26, and 22 μm. (C, F, and I) Linewidths are 21, 19, 17, and 14 μm.

Dynamic scene reconstruction

The imaging architecture presented here is comparable to the single-pixel camera, where each measurement carries global information about the scene (20–22). However, in contrast to single-pixel cameras, which mask the scene with varying spatial patterns and acquire each measurement sequentially, our imaging system requires only a single random spatial mask and acquires the spatially multiplexed measurements from a single camera frame; it is therefore highly suitable for capturing dynamic scenes. To demonstrate this, we experimentally reconstruct a time-varying scene acquired at the native frame rate of our camera (50 frames per second), which is provided in the Supplementary Materials (fig. S5 and movie S1). The pixel resolution of the camera does not dictate the frame rate of the lensless microendoscope, provided enough pixels are available to measure the light intensity in each fiber core. For calibration and imaging, we acquire images of the fiber cores using only ~10 camera pixels per core. Given the modest pixel requirements of the present system, we anticipate the signal-to-noise ratio of the system response, not the camera data throughput, to be the primary limiter of the maximum frame rate.
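
A sketch of the per-core readout that this frame-rate argument relies on is given below. The core centroid positions (located once, e.g., from a flat-field image) and the 3 × 3 summation window are assumptions consistent with the text's figure of roughly 10 camera pixels per core.

```python
import numpy as np

def core_intensities(frame: np.ndarray, centroids: np.ndarray, radius: int = 1) -> np.ndarray:
    """Reduce one camera frame to a single intensity value per fiber core.

    centroids holds integer (row, col) core positions; each core is summed over
    a (2*radius + 1)^2 pixel window, i.e., ~9-10 pixels for the default radius.
    """
    H, W = frame.shape
    y = np.empty(len(centroids))
    for i, (r, c) in enumerate(centroids.astype(int)):
        r0, r1 = max(r - radius, 0), min(r + radius + 1, H)
        c0, c1 = max(c - radius, 0), min(c + radius + 1, W)
        y[i] = frame[r0:r1, c0:c1].sum()     # per-core measurement entering y = Ax
    return y
```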

Computational refocusing

A marked benefit of the lensless microendoscope system presented here is the ability to computationally refocus on objects positioned at different depths, without any actuated components and using only a single camera frame. Conventionally, optical endoscopes with depth-scanning capabilities require components that physically vary the focal plane, such as an electrically tunable lens, which makes head-mounting on freely moving animals difficult due to the increased distal footprint and weight (23–26). In stark contrast to these bulky approaches, we can simply calibrate the system responses at different depths and reconstruct the scene volumetrically, without actuation, from a single camera snapshot. As a demonstration of this, Fig. 4 (A and B) shows the microscope images of two test objects separated in depth by 1.5 mm. Using a single snapshot (Fig. 4C), we can volumetrically reconstruct an image volume of the objects (Fig. 4D) and digitally focus on either object (Fig. 4, E and F, and movie S2) simply by choosing the depth plane within the reconstructed volume.
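
Conceptually, single-shot refocusing reuses the single-plane formulation: the per-depth calibration matrices are concatenated columnwise so that one proximal frame constrains all depth planes simultaneously. The sketch below assumes a list of matrices calibrated at D depths (11 in the experiment) and a generic single-system solver.

```python
import numpy as np

def assemble_volume_matrix(A_per_depth):
    """Stack per-depth calibration matrices into one system y = [A_1 ... A_D] x."""
    return np.concatenate(A_per_depth, axis=1)       # shape (M, D * N)

def depth_planes(x_volume, D, shape=(60, 60)):
    """Split a recovered volume vector into one image per calibrated depth."""
    N = shape[0] * shape[1]
    return [x_volume[d * N:(d + 1) * N].reshape(shape) for d in range(D)]
```

Refocusing is then just indexing: after solving the concatenated system once, selecting depth_planes(x_volume, 11)[k] digitally focuses on the k-th calibrated plane, with no moving parts involved.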

Fig. 4 Computational refocusing.

(A and B) Bulk microscope images of the test subject, which consists of two planar objects separated in depth by 1.5 mm. (C) Single image of the multicore fiber's proximal end from which the image volume is reconstructed. (D) Volumetric reconstruction with 11 depth layers, separated in depth by 300 μm, using the system response shown in (C). (E and F) Images from the volumetric reconstruction corresponding to the two depths at which the objects are in best focus.

Color imaging

Beyond computational refocusing, this lensless approach can also achieve color imaging without any additional components. In contrast, microlens-based systems suffer from substantial chromatic aberrations that are difficult to correct. Using the proposed lensless approach, one can simply use a color camera and calibrate the sensing matrix for each color channel, resulting, in principle, in no chromatic aberration. To demonstrate this color imaging capability, we used a white LED as the light source, then reconstructed and overlaid the images of each color channel to generate the results shown in Fig. 5.
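
A minimal sketch of this per-channel pipeline, assuming a color camera that yields one core-intensity vector per channel, one calibration matrix per channel, and any single-channel solver (such as the reconstruction routine described in Materials and Methods):

```python
import numpy as np

def reconstruct_color(y_rgb, A_rgb, solve):
    """Recover a color image channel by channel and stack the results.

    y_rgb: per-channel system-response vectors from one color camera frame.
    A_rgb: per-channel calibration matrices (each channel calibrated separately).
    solve: single-channel routine, solve(y, A) -> 2D image.
    """
    channels = [solve(y, A) for y, A in zip(y_rgb, A_rgb)]
    rgb = np.stack(channels, axis=-1)        # (H, W, 3) color reconstruction
    return rgb / rgb.max()                   # joint normalization preserves color balance
```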

Fig. 5 Demonstration of color imaging.

(A and B) Images of multicolor objects acquired using a bulk microscope. (C and D) Color image reconstructions of the same objects using our lensless microendoscope.

DISCUSSION

In summary, we have demonstrated a distal lensless, scan-free microendoscope using a coded aperture at the distal end of a multicore fiber. By replacing distal lenses with a single spatial mask, widefield images of the scene with a 980-μm-wide field of view are computationally recovered with superior image quality to a comparable conventional lens-based approach. In addition, the imaging system is capable of computationally refocusing on objects located 1.5 mm apart in depth, without actuation, using a single snapshot of the scene's coded-aperture response. Furthermore, the presented technique does not require additional elements to correct for chromatic aberrations, enabling color imaging by simply calibrating for each color channel. Thus, this distal lens-free microendoscope enables minimally invasive imaging with capabilities and performance that are not possible with conventional lens-based microendoscopes. Future improvements on this work will include optimizing the minimum feature size of the random spatial mask and its distance from the multicore fiber, so as to cast the smallest features on the distal end while preserving the decorrelation of each point source, thereby maximizing the lateral and axial resolutions. Furthermore, simultaneous illumination and detection through the multicore fiber remains to be implemented; this can be achieved by evenly illuminating the scene through the coded aperture, with all fiber cores used for both illumination and collection. For application to fluorescence imaging, we expect to use a fluorescence filter that sufficiently rejects the excitation light from the fluorescence emission, as is commonplace in conventional lens-based fluorescence microendoscopes. In addition, we aim to improve the scalability of the calibration module for high-resolution imaging with a large number of image pixels, either by using a high-speed two-dimensional (2D) galvanometer to scan a point in a thin fluorescence slide or by using a coded aperture with a separable mask pattern (15, 16). Overall, the presented imaging system demonstrates an alternative design for ultrathin microendoscopy with great potential for applications that demand extremely small and agile probes, such as real-time imaging of neural activity in freely moving animals.

MATERIALS AND METHODS

Multicore fiber and coded aperture

The multicore fiber (FIGH-06-300S, Fujikura, distributed by Myriad Fiber Imaging in the United States) used to acquire all experimental data is 30 cm long with 6000 fiber cores, a core diameter of 3 μm, a core pitch of 3.3 μm, an image circle diameter of 270 μm, a fiber diameter of 300 μm, and a coating diameter of 400 μm. The coded aperture used in the experiment has a minimum feature size of 10 μm, limited by our printing capabilities. The coded aperture was laser-printed on a transparency as a 2D square-shaped, uniformly distributed pseudorandom binary pattern. Because of the feature size of the coded aperture and the inherent cross-talk in the multicore fiber, the distance between the mask and the fiber is set to 1 mm, with the imager's working distance at approximately 4 mm, to ensure a sufficient shift in the coded-aperture responses of the individual point sources across the scene.
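
A mask of this kind, a uniformly distributed pseudorandom binary pattern, can be generated as follows; the 10-μm feature size is the printed minimum from the text, while the overall mask extent is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

feature_um = 10                    # minimum printed feature size (from the text)
n_features = 200                   # assumed extent: 200 features x 10 um = 2-mm square
# 2D uniformly distributed pseudorandom binary pattern: 0 = opaque, 1 = transparent
mask = rng.integers(0, 2, size=(n_features, n_features))
```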

Calibration

For all experimental results, the reconstructed images are 60 × 60 pixels, so we calibrated 60² = 3,600 point sources in our 980-μm-wide field of view for a single depth plane. The point sources are square shaped with a 16.3-μm width (a square point source with a 9.78-μm width was generated to acquire Fig. 3I). For the demonstration of volumetric imaging, we calibrated 11 depth layers separated by 300 μm in depth, which requires calibration of 11 × 60² point sources. In the present configuration, the lensless microendoscope demonstrates an axial resolution of approximately 300 μm, as shown in figs. S2 and S3. In this particular experiment, the number of calibrated point sources was limited purely by the 2-Hz scanning speed of the digital micromirror device.

Reconstruction algorithm

To reconstruct the image of the object from the object's system response, we used a reconstruction framework focusing on local image structures. A popular model to quantify local image information is sparsity in an appropriate domain. Given a patch, or block of pixels, z extracted at a random location from the image of the object, its coefficient vector α under some sparsifying transform Ψ̃(∙), defined by

α = Ψ̃(z)

should be sparse or compressible.

The reconstruction process estimates the sparse coefficient set of a patch set covering the entire image of interest, consistent with the object's system response. In particular, let {zₖ} be a patch set extracted from the original image x; the image of the object can then be represented by its patches as

x = P({zₖ})

where P(∙) is an operator that combines the patch set to obtain the original image. Denoting {αₖ} as the coefficients of the patches {zₖ} and Ψ(∙) as the inverse sparsifying transform of Ψ̃(∙), satisfying zₖ = Ψ(αₖ) for all k, the sensing process can be written as

y = A(P(Ψ({αₖ})))

We propose to obtain the sparse coefficients from the following optimization problem

min_{αₖ} Σₖ ‖αₖ‖₁  s.t.  A(P(Ψ({αₖ}))) = y

This optimization problem can be solved efficiently by an iteratively alternating minimization procedure. At iteration t of the algorithm, a noisy estimate x(t) of the original image consistent with the object’s system response is reconstructed on the basis of the information from the previous iteration. The estimates of the sparse coefficients {αk(t)} at this iteration can then be found by thresholding the coefficients of the noisy patches {zk(t)} extracted from x(t). The error between the true measurements and the sparsified reconstruction with the known coded aperture is used to generate the next image estimate x(t + 1). The algorithm stops when a maximum number of iterations is reached, or the inconsistency between the estimate and the measurements is sufficiently small.
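
The sketch below illustrates an iteration of this type, assuming non-overlapping 10 × 10 patches, a 2D discrete cosine transform as the sparsifying transform, and simple hard thresholding with a fixed relative cutoff. The published algorithm uses overlapping patches and a tuned thresholding schedule, so this is a simplified stand-in rather than the exact implementation.

```python
import numpy as np
from scipy.fft import dctn, idctn

def reconstruct(y, A, shape=(60, 60), patch=10, keep=0.1, iters=200):
    """Alternate between a gradient step toward consistency with y = A x
    and patchwise DCT thresholding (the sparsity prior)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # step from the largest singular value
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + step * (A.T @ (y - A @ x))       # move toward measurement consistency
        img = x.reshape(shape).copy()
        for r in range(0, shape[0], patch):      # sparsify each non-overlapping patch
            for c in range(0, shape[1], patch):
                a = dctn(img[r:r + patch, c:c + patch], norm='ortho')
                a[np.abs(a) < keep * np.abs(a).max()] = 0.0   # hard-threshold coefficients
                img[r:r + patch, c:c + patch] = idctn(a, norm='ortho')
        x = img.ravel()
    return img
```

Paired with a calibrated (or, for testing, simulated) matrix A and a single system-response vector y, reconstruct(y, A) returns the estimated 60 × 60 image; the patch size and threshold here are tuning parameters, not values taken from the paper.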

Lens-based microendoscope

Object images and lens-based imaging results in Figs. 2, 3, and 5 were acquired using a biconvex lens (LB1630-A, Thorlabs). The lens-based microendoscope used for comparison is home-built from this biconvex lens and the same multicore fiber (FIGH-06-300S, Fujikura) used in the lensless microendoscope. The lens demagnifies and relays the scene by a factor of 980 μm/270 μm ≈ 3.6 to fit the 980-μm-wide field of view within the multicore fiber's image circle diameter of 270 μm. Note that the lens-based microendoscope does not use a microlens and therefore represents best-case imaging performance with minimal optical aberrations.

Calculation of the space-bandwidth product

The space-bandwidth product of an imaging system measures the number of pixels required to image the full field of view at full resolution (at the Nyquist sampling rate). The 2D space-bandwidth product of a lens-based microendoscope is the total number of fiber cores in the multicore fiber, which is 6000. In comparison, the lensless microendoscope has a 2D space-bandwidth product of 0.96 mm²/(7 μm)² ≈ 19,592 for a single depth plane.
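
The comparison reduces to simple arithmetic, worked here with the numbers stated in the text:

```python
# Space-bandwidth product comparison using the values stated in the text.
lens_based = 6000                        # one image pixel per fiber core
fov_mm2 = 0.96                           # ~(0.98 mm)^2 field-of-view area
pixel_um = 7                             # Nyquist pitch for the 14-um resolved linewidth
lensless = fov_mm2 * 1e6 / pixel_um**2   # ~19,592 pixels for a single depth plane
print(f"improvement: {lensless / lens_based:.1f}x")   # ~3.3x, the abstract's threefold
```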

SUPPLEMENTARY MATERIALS

Supplementary material for this article is available at http://advances.sciencemag.org/cgi/content/full/5/12/eaaw5595/DC1

Fig. S1. Detailed schematic of our approach consisting of calibration optics and the imager.

Fig. S2. Determining the axial resolution of the lensless microendoscope.

Fig. S3. Volumetric reconstruction of two planar objects separated by 1.5 mm in depth (shown in Fig. 4 and movie S2).

Fig. S4. Comparison of spatial resolution between lens-based and lensless multicore fiber microendoscopes.

Fig. S5. Demonstration of time-varying scene reconstruction.

Fig. S6. Demonstration of insensitivity towards bending of the multicore fiber of the lensless microendoscope.

Movie S1. Dynamic scene reconstruction, acquired at 50 frames per second.

Movie S2. Computational refocusing of planar objects separated in depth.

This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial license, which permits use, distribution, and reproduction in any medium, so long as the resultant use is not for commercial advantage and provided the original work is properly cited.

REFERENCES AND NOTES

Acknowledgments: Funding: This work was supported by the National Eye Institute (NEI) (R21EY028436 and R21EY028381). Research reported in this publication was supported by the NEI of the National Institutes of Health. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. Author contributions: J.S. and M.A.F. conceived the experimental system. J.S. performed the experiments. D.N.T., S.C., and T.D.T. developed the algorithm. J.S. and J.R.S. processed and analyzed results. S.C., T.D.T., and M.A.F. directed the research. J.S. and M.A.F. prepared the manuscript with contributions from J.R.S., D.N.T., S.C., and T.D.T. Competing interests: The authors declare that they have no competing interests. Data and materials availability: All data needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary Materials. The data that support the findings in this study are available upon request from the corresponding author.