Research Article: Optics

Guidestar-free image-guided wavefront shaping


Science Advances  19 May 2021:
Vol. 7, no. 21, eabf5364
DOI: 10.1126/sciadv.abf5364


Optical imaging through scattering media is a fundamental challenge in many applications. Recently, breakthroughs such as imaging through biological tissues and looking around corners have been obtained via wavefront-shaping approaches. However, these require an implanted guidestar for determining the wavefront correction, controlled coherent illumination, and most often raster scanning of the shaped focus. Alternative novel computational approaches that exploit speckle correlations avoid guidestars and wavefront control but are limited to small two-dimensional objects contained within the “memory-effect” correlation range. Here, we present a new concept, image-guided wavefront shaping, allowing widefield noninvasive, guidestar-free, incoherent imaging through highly scattering layers, without illumination control. The wavefront correction is found even for objects that are larger than the memory-effect range, by blindly optimizing image quality metrics. We demonstrate imaging of extended objects through highly scattering layers and multicore fibers, paving the way for noninvasive imaging in various applications, from microscopy to endoscopy.


High-resolution imaging and light control through highly scattering, turbid media are fundamental problems that are important for a variety of applications, ranging from microscopy (1), through manipulation of cells and molecules (2), to astronomy (3). When light propagates in a turbid medium, its wavefront is strongly distorted due to the spatially varying refractive index, prohibiting the use of conventional imaging techniques. Great research efforts have led to the development of several approaches aimed at mitigating the effects of sample inhomogeneity. However, most of these approaches have limited success in highly scattering turbid media: Techniques that rely on unscattered, “ballistic” photons, such as optical coherence tomography and confocal and two-photon microscopy, are inefficient in thick multiple-scattering media due to the exponential decay of ballistic photons with depth (1). Adaptive-optics techniques (4), which aim at correcting low-order aberrations by the use of active elements such as deformable mirrors, are ineffective in highly scattering media, since the number of scattered modes is orders of magnitude larger than the number of controllable degrees of freedom (DOFs) of the correction device. As a result, imaging using adaptive optics is limited in its penetration depth in soft tissues.

In the past decade, a paradigm shift in imaging and focusing light inside and through turbid media has been made following the introduction of “wavefront shaping” (5–15). In wavefront shaping, the illuminating or detected wavefront is shaped by a high-resolution spatial light modulator (SLM) using 10³ to 10⁵ controlled DOFs to compensate for multiple scattering–induced distortions. In contrast to adaptive optics, the wavefront-shaping correction cannot perfectly correct the scattering wavefront distortions, since the number of scattered modes exceeds by orders of magnitude even the large number of controlled DOFs of an SLM. However, the SLM-based correction still yields a diffraction-limited, high-contrast focus (9), which has allowed breakthroughs such as looking around corners (12), microscopic imaging (5, 7, 8, 16), and focusing light through tissues in space (7) and time (11).

While in the early demonstrations of wavefront shaping, the wavefront correction was found by direct invasive access to the target object plane, located behind or inside the scattering medium (9, 12), in most practical applications such as biomedical imaging, such invasive procedures are impossible. Thus, the major challenge in the field today is in finding approaches that will allow one to determine the wavefront correction in a noninvasive, robust, and simple fashion, using only measurements performed outside of the scattering medium and ideally without requiring labeling of the sample.

Despite the great recent advancements, the state-of-the-art noninvasive approaches for wavefront shaping are severely restricted by requiring a guidestar to generate the feedback mechanism for the wavefront correction (7). The large variety of developed guidestars includes the use of optical nonlinearities (16–18), which require external and usually invasive labeling. Ultrasound-based guidestars that rely on photoacoustic (7, 19) or acousto-optic (13, 20, 21) feedback come at the price of substantially reduced resolution compared to the optical diffraction limit and an increase in the measurement system complexity. Very recently, several all-optical guidestar-free approaches for focusing using wavefront shaping have been presented. These approaches optimize a metric that relies either on the memory effect (22) or on fluorescence variance optimization (23, 24) for focusing. However, these require controlled coherent illumination, fluorescence labeling, and raster scanning for imaging (23, 24), rendering these approaches inapplicable for imaging under widefield illumination and necessitating acquisition times that are longer than the wavefront correction time.

Alternatively, novel computational approaches that rely on the memory effect for speckle correlations (25) avoid guidestars and wavefront shaping together (26, 27). These techniques computationally reconstruct the hidden objects using scattered light measurements (26, 27). However, these methods work only for small isolated objects that are contained within the memory effect’s narrow angular range (the isoplanatic patch size), severely restricting their application in realistic scenarios where extended objects are usually concerned. Moreover, these computational approaches suffer from limited reconstruction fidelity and low probability of convergence when imaging complex objects (27).

Here, we present a guidestar-free, all-optical, noninvasive, and widefield wavefront-shaping concept for direct imaging of hidden objects through highly scattering media, without any control over the illumination, labeling, or need for computational reconstruction. We avoid the strict usual requirement for a guidestar or known reference target (28) by adapting generalized image quality metrics as the feedback mechanism for determining the wavefront correction, effectively using the object as its own guidestar. We show that image-based metrics can be used even when the initial scattered light image is a low-contrast, seemingly featureless diffusive halo of multiply scattered light (Fig. 1B). Our approach converges to the corrected wavefront even when the object extends beyond the memory-effect range, correctly imaging the object parts that are within the memory-effect angular range, a feat that is beyond the capabilities of correlation-based computational approaches (25, 29).

Fig. 1 Noninvasive imaging via image-guided wavefront shaping, concept, and numerical results.

(A) An incoherently illuminated object is hidden behind a highly scattering medium. The scattered light wavefront is shaped by a high-resolution SLM, which is conjugated to the scattering medium surface. The wavefront-shaped light is Fourier-transformed by a lens on a high-resolution camera. (B) The initial camera image is a low-contrast diffusive blur. (C) Image-guided iterative optimization of the SLM phase correction aimed at maximizing a quality metric of the captured image yields a wavefront correction with ~10⁴ DOFs and a sharp, high-contrast widefield image of the hidden object.

Our method is inspired by image-guided adaptive optics (4, 30–32), where an image metric is used to find the wavefront correction for aberrations that are composed of several low-order modes. However, while image-guided adaptive optics is effective in coping only with low-order aberrations, the situation is profoundly different when combatting multiple scattering in turbid samples. Specifically, while low-order aberrations only mildly reduce the sharpness of the imaged objects, multiple scattering results in extremely low-contrast, seemingly information-less diffusive blurs (Fig. 1B), and conventional sharpness metrics may seem inappropriate. Another fundamental difference is that while aberrations are described by only a few tens to at most hundreds of parameters, in diffuse multiple scattering, the light is scattered to ≫10⁶ modes, and the wavefront-shaping correction requires the determination of thousands to hundreds of thousands of DOFs. As a result, the conventional starting point in adaptive optics is a slightly blurred image, and the final result is a nearly perfect image, whereas in wavefront shaping, the starting point is a seemingly information-less, low-contrast diffusive light pattern, and the optimal end result is a sharp image over a speckled background (Fig. 1, B and C).

Nonetheless, we show that an image-based quality metric can robustly guide a wavefront-shaping correction with ~10⁴ DOFs to directly image extended objects through highly scattering layers, when the initial image is a low-contrast diffusive blur. Notably, this guidestar-free guidance leads to imaging even when the object extends beyond the memory-effect range. Last, we demonstrate that this new concept is general and useful in other applications by experimentally demonstrating widefield lensless endoscopic imaging through a commercial multicore fiber (MCF) bundle, without invasive access to the distal fiber end (33) or nonlinear measurements (34).


Our concept for image-guided wavefront shaping is presented in Fig. 1A. The goal is to image a hidden, incoherently illuminated object that is located behind a strongly scattering medium. A high-resolution SLM is placed on the other side of the scattering medium to undo the scattering-induced wavefront distortions (35). While light from a spatially incoherent extended object has no well-defined wavefront, the light from each point on the object has a well-defined wavefront. These wavefronts, which in the absence of the scattering medium would be perfect spherical waves, are distorted, and the goal of our wavefront-shaping approach is to correct them back to resemble such spherical waves. The wavefront-shaped light from the SLM is Fourier-transformed by a single lens onto a high-resolution camera. To maximize the wavefront correction field of view (FoV), the SLM plane is optically conjugated to the scattering medium surface (12, 35) using a 4-f telescope (not shown, see section S1), such that the scattering medium surface is 4-f imaged on the SLM plane. Without correction, the scattered light image (Fig. 1B) is of extremely low contrast and results in a low-valued image quality metric. However, iteratively optimizing the SLM phase pattern to maximize an appropriate image quality metric yields a diffraction-limited, high-contrast image of the hidden object (Fig. 1C), in the same manner as would be obtained by using an implanted point-source guidestar (12, 36). Similarly to a correction using a guidestar, the angular FoV of the planar SLM correction is limited by the memory effect to θFoV ≈ λ/(2πL), where L is the thickness of the diffusive medium (12). Nonetheless, as we show below, image-guided wavefront shaping can robustly determine the wavefront correction without a guidestar, even when the hidden object extends beyond the memory-effect FoV.

To enable image-guided wavefront shaping, two major challenges need to be resolved: (i) An image metric that can robustly lead from the initial image to the final image needs to be developed. This is especially challenging during the first steps of the optimization where the contrast and signal to noise (SNR) are low. (ii) An appropriate optimization algorithm that will ensure the convergence of ~104 DOFs (SLM phases) needs to be developed. Following in-depth numerical and experimental studies of image metrics, optimization algorithms, objects, and optical scattering samples, we show that these two challenges can be addressed by an adaptation of two well-established image quality metrics optimized by a genetic algorithm (37). Specifically, we show that a combination of modified entropy and variance metrics, along with a coarse-to-fine partitioning of the SLM during optimization, successfully copes with the low contrast and SNR of the diffused images in the first optimization steps and converges to an appropriate wavefront correction, yielding sharp high-contrast images through highly scattering layers.

To understand and analyze the image formation in the system depicted in Fig. 1A, we consider the following mathematical model: Within the memory-effect range (25, 29), a diffusive medium can be modeled as a thin random phase screen, described by a spatial random phase ϕ(x, y) at a wavelength λ. Denoting the SLM phase correction by ϕSLM(x, y) and assuming that the phase screen is imaged on the SLM surface, the total accumulated phase of the corrected wavefront is ϕ(x, y) + ϕSLM(x, y). For a point object located on the optical axis at a distance u from the scattering layer, the intensity distribution on the camera is

$$\mathrm{PSF}(x,y) \propto \left|\mathcal{F}\left\{\exp\left[i\left(\frac{2\pi}{\lambda}\sqrt{u^2+x^2+y^2}+\phi(x,y)+\phi_{\mathrm{SLM}}(x,y)\right)\right]\right\}\right|^2 \qquad (1)$$

where $\mathcal{F}\{\cdot\}$ stands for a two-dimensional (2D) Fourier transform that is performed optically by the lens in Fig. 1A.

Without an SLM correction, i.e., for ϕSLM(x, y) = 0, the point-spread function, PSF(x, y), is a random speckle pattern (27). A perfect correction of both the scattering and defocus terms is obtained for

$$\phi_{\mathrm{SLM}}^{\mathrm{opt}}(x,y) = -\left[\frac{2\pi}{\lambda}\sqrt{u^2+x^2+y^2}+\phi(x,y)\right] \qquad (2)$$

For a point-object “guidestar,” this correction can be easily found by maximizing the intensity at any single camera pixel (12), where it is beneficial to select the initially brightest speckle position, or via phase conjugation (36). However, the case of an extended object cannot be solved in such a simple fashion (Fig. 2, A to D). For the case of an incoherently illuminated object that is contained within the memory effect, the scattering PSF is isoplanatic, and the resulting camera image is given by (27)

$$I_{\mathrm{cam}}(x,y) = I_{\mathrm{obj}}(x,y) \ast \mathrm{PSF}(x,y) \qquad (3)$$

where Iobj(x, y) is the intensity distribution at the object plane and $\ast$ denotes a 2D convolution.
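The forward model of Eqs. 1 to 3 can be sketched numerically. The grid size, the object, and the omission of the defocus term below are illustrative simplifications, not parameters of the actual experiments:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 128  # simulation grid size (illustrative)

# Thin-diffuser model: a random phase screen phi(x, y), conjugated to the SLM
phi = rng.uniform(0, 2 * np.pi, (N, N))

def psf(phi_slm):
    """Intensity PSF: |2D Fourier transform of the corrected field|^2 (Eq. 1).

    The defocus term of Eq. 1 is omitted here for brevity."""
    field = np.exp(1j * (phi + phi_slm))
    p = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    return p / p.sum()

def camera_image(obj, p):
    """Incoherent imaging: convolution of the object with the PSF (Eq. 3)."""
    return np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(np.fft.ifftshift(p))))

obj = np.zeros((N, N))
obj[60:68, 40:48] = 1.0  # simple extended object (illustrative)

img_speckled = camera_image(obj, psf(np.zeros((N, N))))  # no correction: diffusive blur
img_corrected = camera_image(obj, psf(-phi))             # perfect correction (Eq. 2)
```

With the perfect correction of Eq. 2, the PSF collapses to a single bright pixel and `img_corrected` reproduces the object, while `img_speckled` is a low-contrast diffusive blur, mirroring Fig. 1 (B and C).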

Fig. 2 Comparison of basic image quality metrics and description of the wavefront optimization process.

(A to C) 1D examples for imaging a simple two-point object using the following: (A) the initial uncorrected speckle PSF, (B) the desired wavefront-corrected diffraction-limited PSF, and (C) an undesired matched-filter PSF that resembles the object structure, which yields the same peak intensity in the camera image as the desired diffraction-limited PSF but does not recover the object. A desired image quality metric must distinguish between the two PSFs of (B) and (C). (D) Comparison of three basic metrics for the simulated cases of (A) to (C): peak intensity, image variance (“var”), and modified entropy (“H”; see section S3). The variance and entropy successfully distinguish the desired PSF of (B) from the undesired one of (C). (E to G) The proposed two-step wavefront optimization process: (E) initial uncorrected camera image for a flat-phase SLM. (F) As a first step, the SLM wavefront correction is optimized to minimize the modified entropy of the camera image. This results in partial correction with a few replications of the object, from which a small ROI (dashed line) for the second optimization step is extracted. (G) Variance maximization on the extracted ROI results in wavefront correction and high-resolution imaging.

Since both Iobj(x, y) and PSF(x, y) are non-negative functions and the PSF without correction is a random wide-angle speckle pattern, the initial camera image, Icam(x, y), before correction is a low-contrast diffusive blur (Fig. 1B). For a perfect SLM correction (Eq. 2), the PSF is a diffraction-limited spot, which results in a high-contrast diffraction-limited image of the object (Fig. 1C). Hence, an image quality metric that takes into account sharpness and contrast can be used as a feedback for determining the SLM correction. The challenge is to find an object-independent and robust image metric to guide the SLM correction.

One may naively expect that maximizing a single-pixel intensity (9) would lead to a perfect correction. However, a simple peak intensity metric cannot necessarily distinguish between the desired correction that leads to a diffraction-limited PSF (Fig. 2B) and a correction that results in a “matched-filter” PSF that is a mirror image of the object, as shown in the simplistic example in Fig. 2C (see section S3). This is a direct result of Eq. 3, where the intensity at a single pixel is the convolution of the object function with the PSF, i.e., the correlation of the object with PSF(−x, −y). Maximizing the pixel intensity thus maximizes the correlation between the two functions, resulting in a PSF that is the mirror image of the object (see section S4). This can be exemplified by considering the simple case of a two-point object, presented in Fig. 2 (A to D). For both the optimal correction with an intensity enhancement factor η ≡ max(PSFopt(x, y))/mean(PSFinit(x, y)) (Fig. 2B) and a matched-filter correction with an enhancement factor η/2 (Fig. 2C) (9), the peak intensity of the resulting image is the same (Fig. 2D), where PSFopt(x, y) and PSFinit(x, y) are the optimized and initial PSFs, respectively.
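This peak-intensity ambiguity for the two-point object of Fig. 2 (A to D) can be checked in a few lines; the grid size, point separation, and enhancement factor below are arbitrary illustrative choices:

```python
import numpy as np

eta = 100.0                    # illustrative enhancement factor
n, d = 256, 10                 # 1D grid size and two-point separation (assumed)

obj = np.zeros(n)
obj[100] = obj[100 + d] = 1.0  # two-point object

# Desired correction: a diffraction-limited PSF with peak enhancement eta
psf_focus = np.zeros(n)
psf_focus[0] = eta

# Undesired correction: a matched-filter PSF, i.e., the mirrored object,
# with enhancement eta/2 per peak
psf_matched = np.zeros(n)
psf_matched[0] = psf_matched[-d] = eta / 2

def image(p):
    # Incoherent image: circular convolution of the object and the PSF (Eq. 3)
    return np.real(np.fft.ifft(np.fft.fft(obj) * np.fft.fft(p)))

print(image(psf_focus).max(), image(psf_matched).max())   # equal peaks: eta, eta
print(image(psf_focus).var() > image(psf_matched).var())  # variance separates them
```

Both images peak at η, so a peak metric cannot choose between the two corrections, whereas the variance (and entropy) can, as in Fig. 2D.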

In the past decades, numerous image metrics have been studied for image-guided adaptive optics, with the earliest proposals by Muller and Buffington (30). Two of the well-established metrics are entropy minimization and variance maximization (30), both of which consider all image pixels rather than the single-pixel value measured in peak optimization. Our in-depth numerical and experimental studies have shown that using a modified entropy, H, of the normalized camera image leads to good imaging results for a large variety of objects and scattering layers (see Materials and Methods and section S5). However, we found that this metric may sometimes converge to local minima, appearing as replications of the object in the vicinity of the true object’s image (Fig. 2F). Unlike the modified entropy, maximizing a variance metric (see Materials and Methods) optimizes the image contrast and robustly converges to the object without any replications, under the condition that the region of interest (ROI) on the camera used for calculating the metric roughly matches the object dimensions. Otherwise, if the ROI is too large, the speckle background may dominate the calculated variance, hindering convergence to the object image.

Exploring various metrics, we have found that modified entropy minimization followed by variance maximization over a small ROI that is extracted from the entropy optimization results in sharp imaging through highly scattering media. The optimization process is explained and numerically demonstrated in Fig. 2 (E to G). The initial uncorrected image is presented in Fig. 2E. Modified entropy minimization converges to a sharp image with nearby replications (Fig. 2F), from which the ROI for the variance optimization can be extracted. A subsequent correction using variance maximization yields the desired corrected image (Fig. 2G).

Experimental imaging through scattering layers

As an experimental proof of concept, we imaged various incoherently illuminated objects through highly scattering optical diffusers. The results of two samples from these experiments using a single diffuser as the scattering medium are displayed in Fig. 3. These results demonstrate guidestar-free, high-contrast widefield imaging of the hidden unknown objects, starting from initial, very low-contrast scattered light images.

Fig. 3 Experimental imaging through a single highly scattering layer.

(A) The object as imaged directly without the scattering layer. (B) The initial image through a highly scattering thin diffuser (60° scattering angle). (C) Camera image after image-guided wavefront optimization. (D) Evolution of the variance metric gain as a function of the optimization iterations. (E to H) Same as (A) to (D) for a different object and 20° diffuser. Scale bars, (A to C) 2 mm and (E to G) 1.5 mm.

Guidestar-free widefield imaging through thin scattering layers, similar to those in Fig. 3, can also be obtained using computational approaches based on the memory effect (26, 27). However, these approaches fail when the object extends beyond the memory effect, as is most often the case in practice. In contrast, image-guided wavefront shaping does not suffer from this severe limitation and robustly finds the wavefront correction for a single isoplanatic patch even in such cases. This important and unique aspect of our approach is demonstrated numerically and experimentally in Fig. 4 (A to D and E to H, respectively). Figure 4 (A to D) presents a simulated scenario where the top and bottom halves of the scene (Fig. 4A) are scattered by uncorrelated scattering PSFs, each representing a single isoplanatic patch. Nonetheless, image-guided optimization robustly finds the correction for one of the object halves. Because of the stochasticity of the scattering and optimization process, a different half of the object is corrected in different numerical runs, but the optimization always results in imaging one isoplanatic patch. Figure 4 (E to H) experimentally demonstrates this capability, by imaging the same object used in Fig. 3E but through a scattering sample composed of a stack of two highly scattering diffusers with a small spacing, yielding a memory-effect FoV that is considerably narrower than the object size. As expected, the optimization results in high-contrast imaging of the object parts that are contained within the memory effect (Fig. 4G), a result that cannot be currently obtained by computational approaches without prior information on the hidden object.

Fig. 4 Imaging extended objects through volumetric scattering with a limited memory effect.

(A to D) Numerical results for the case of an object whose top and bottom halves are scattered by uncorrelated scattering functions. (A) The hidden object. (B) Initial camera image with uncorrected wavefront. (C and D) Final results of camera images from two independent optimizations, each randomly converges to a different wavefront correction matching a single isoplanatic patch (memory-effect FoV). (E to H) Experimental results through a scattering medium composed of two diffusers. (E) Object as imaged without scattering. (F) Camera image before correction. (G) Camera image after image-guided optimization corrects the left half of the object. (H) Final SLM phase pattern after optimization. Scale bars, (A to D) 1.1 mm and (E to G) 1.5 mm.

Application in lensless endoscopy

The concept of image-guided wavefront shaping is general and can be applied not only to imaging through scattering layers but also to, e.g., lensless endoscopic imaging. Lensless endoscopes are a desirable solution for minimally invasive microendoscopy because of their reduced footprint and capacity for dynamic 3D imaging (33, 38). One attractive potential platform for lensless endoscopy is MCF bundles, composed of thousands of individual cores (33, 34). Conventionally, the fiber distal tip is placed adjacent to the imaged object or conjugated to it using a distal lens. This results in a fixed focal plane and an increased footprint. To image objects located away from the fiber distal facet without a distal lens and in 3D, the random spatial phases of the different cores need to be corrected. Currently, this has only been performed by invasive access to the distal side (33) or by using nonlinear feedback signals (34). However, the concept of image-guided wavefront shaping and the analysis of Eqs. 1 to 3 are valid also when the scattering medium is replaced by a fiber bundle. To experimentally demonstrate this capability, we replaced the scattering layer in Fig. 1A with a 50-cm-long commercial MCF (Schott 191012-002), as detailed in section S2. The results in Fig. 5 (B to E) demonstrate the successful compensation of both the random intracore phase distortions and defocus by image-guided wavefront shaping, leading to widefield imaging.

Fig. 5 Lensless widefield endoscopic imaging through an MCF bundle.

(A) Schematic of the experiment: An incoherently illuminated object is placed at a small distance from the distal facet of an MCF bundle having thousands of cores. The fiber proximal facet is conjugated to the SLM (4-f imaging telescope omitted for simplicity) and Fourier-transformed on a camera. (B) The object: USAF target group 5 elements 3 to 5 placed 3.8 mm from the fiber bundle. (C) Conventional image of the fiber proximal facet does not reveal any object features. (D) The initial camera image without correction. (E) The final image after image-guided wavefront optimization, correcting both the intracore phase distortions and defocus, resulting in a diffraction-limited image of the object. Scale bars, 100 μm. See movie S1 for the evolution of the image correction. au, arbitrary units.


We have shown that guidestar-free wavefront shaping of extended objects through highly scattering layers can be obtained by image-guided wavefront correction. For the planar objects considered in our proof-of-principle experiments, the correction not only accounts for scattering distortions but also finds the correct axial focusing. However, the correction does not allow determination of the lateral position of the target, since a lateral shift does not change the optimized image metric. This single correction can also be used for 3D objects, as it simultaneously corrects out-of-focus planes that are within the memory-effect axial range (39). In the case of 3D objects that extend beyond the memory-effect axial range, the portions of the object that are not corrected will reduce the corrected image contrast, similar to the case of objects that extend beyond the transverse memory-effect range (Fig. 4).

The resolution cell size obtained by our approach is δx ≈ λR/(nD), where λ is the wavelength, R is the distance between the scattering layer and the object, n is the refractive index of the medium beyond the scattering layer, and D is the SLM image diameter on the scattering layer (12). A quantification of the imaging resolution through a fiber bundle is detailed in section S7.
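As a quick sanity check of this expression, plugging in numbers on the scale of the fiber-bundle experiment (He-Ne wavelength, the 3.8-mm object distance of Fig. 5, and, as an assumption, the 0.83-mm bundle diameter as the aperture D in air) gives a resolution cell of roughly 3 μm:

```python
wavelength = 633e-9  # He-Ne illumination wavelength (m)
R = 3.8e-3           # object distance from the bundle facet, as in Fig. 5 (m)
n = 1.0              # air between the bundle and the object (assumed)
D = 0.83e-3          # bundle diameter taken as the aperture (assumption)

delta_x = wavelength * R / (n * D)  # resolution cell, delta_x = lambda*R/(n*D)
print(f"{delta_x * 1e6:.1f} um")    # ~2.9 um
```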

As in other wavefront-shaping approaches, the intensity enhancement of the optimal correction in highly scattering media is $\eta \approx \frac{\pi}{4} N_{\mathrm{SLM}}$ (9), where NSLM is the number of controlled SLM pixels. Since the optimized PSF has a peak-to-background ratio of η, the optimized image contrast is approximately η/Nobj, where Nobj is the number of bright resolution cells of the imaged object, assuming narrowband detection (12). Thus, similar to other wavefront-shaping techniques, high-contrast images are obtained for mostly dark objects or under dark-field imaging conditions. Nonetheless, we successfully demonstrated experimental imaging of conventional USAF (United States Air Force) targets. For broadband illumination or detection, the contrast of the initial and optimized images is further lowered by a factor of approximately δf/Δf, where Δf is the detected spectral bandwidth and δf is the speckle spectral correlation bandwidth, assuming δf < Δf (12).

Since our approach is based on optimizing image metrics of severely distorted images, it requires the detection of small intensity fluctuations in the initial low-contrast image under the presence of noise. A detailed theoretical and numerical analysis of this requirement under shot-noise limited detection, a common condition with scientific complementary metal-oxide semiconductor (sCMOS) camera–based detection, leads to a requirement for a minimal average number of detected photo-electrons per speckle grain, Nelect: $N_{\mathrm{elect}} > N_{\mathrm{obj}} \cdot \Delta f / \delta f$, where Nobj is the number of bright resolution cells of the object (see sections S8 and S9). Thus, the more complex the target object and the broader the detection bandwidth, Δf (assuming that it is larger than the speckle spectral decorrelation width, δf), the lower the initial image contrast and the larger the required photon flux. This limit can be improved by the use of image denoising algorithms and more advanced image metrics.
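To make the shot-noise criterion concrete, the minimal photo-electron count per speckle grain can be evaluated for an assumed object complexity and bandwidth ratio; both numbers below are hypothetical and only illustrate the scaling:

```python
N_obj = 50       # bright resolution cells in the object (assumed)
delta_f = 2e9    # speckle spectral correlation width, Hz (assumed)
delta_F = 10e9   # detection bandwidth Delta_f, Hz (assumed, > delta_f)

# Shot-noise criterion: N_elect > N_obj * Delta_f / delta_f
N_elect_min = N_obj * delta_F / delta_f
print(N_elect_min)   # 250.0 photo-electrons per speckle grain
```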

In our proof-of-principle experiments, we used simple, high-contrast, binary-valued test objects. However, our approach can be used to image objects with a range of intensity values, as demonstrated numerically in fig. S7. While our method does not directly search for sharp object features, it implicitly relies on the target object having high–spatial frequency content to perform robustly. This can be understood from the fact that since the camera image is a convolution between the corrected PSF and the target object, the Fourier transform of the camera image is the product of the object angular spectrum and the PSF angular spectrum. For a smooth object that lacks high spatial frequencies, the camera image will lack these spatial frequencies irrespective of the optimized PSF shape. In the spatial domain, the optimized PSF obtained when using large featureless target objects will not be considerably smaller than the object itself in most cases, since sharpening the PSF much beyond the object dimensions will not significantly change the optimized image. Numerical examples of optimization failures in the case of an object lacking high–spatial frequency content or when the SNR is too low are presented in section S11.

We have used liquid crystal SLMs, which resulted in rather long optimization times. Orders of magnitude faster optimization times can be reached by using faster SLMs, such as digital micromirror devices (DMDs). In addition, more efficient optimization algorithms can be used to accelerate convergence. For optimal scattering compensation, the optimization time should be shorter than the sample speckle decorrelation time. In perfused tissue, the speckle decorrelation time can reach millisecond time scales, which is challenging for any iterative optimization scheme. In fiber bundles (also termed MCFs), the speckle decorrelation time depends on the dynamics of the fiber bending and on the specific type of fiber used: For immobile fibers, the decorrelation time can reach several hours. In MCFs having a single fused silica clad that is shared by all cores (34), moderate fiber bending mainly induces a linear phase ramp on the scattered wavefront, which results in a shift of the wavefront-shaped image (40), thus allowing continuous optimization provided that a sufficiently large optimization ROI is chosen. For leached fiber bundles that do not have a shared clad for all cores, such as the ones used in our experiments, fiber bending may cause strong decorrelation of the wavefront, which will require faster optimization. Recently introduced MCF designs feature significantly reduced bend sensitivity (41) and are especially attractive for applying our approach.

Last, we note that improved state-of-the-art image metrics that can take into account prior or learned information on the imaged scene and the scattering medium (42) are expected to improve imaging of more complex objects and potentially over larger FoVs, not only finding a single isoplanatic correction for each optimization (Fig. 4, C and D) but possibly also simultaneously correcting multiple isoplanatic patches. Merging wavefront shaping with the state-of-the-art computational approaches holds exciting potential for widefield imaging in various fields, ranging from biomedical to remote sensing applications.


Experimental setup

The complete experimental setups are detailed in figs. S1 and S2. In these experiments, the imaged objects were placed at distances of 0.38 to 40 cm behind the scattering sample and were illuminated by a spatially incoherent narrowband pseudothermal source composed of a 20-mW helium-neon laser (Thorlabs HNL210L) followed by a rotating diffuser. The SLM used was a Holoeye Pluto, and the scattered light was captured by a 5.5-megapixel Andor Zyla 5.5 sCMOS camera, with bandpass filters for filtering the illumination wavelength (Thorlabs FB630-10, Semrock LL01-633-25, and FF01-650/SP-25).

The scattering media in Fig. 2 (A to D and E to H) were diffusers with 60° and 20° scattering angles, respectively (Newport light shaping diffusers). In Fig. 3 (E to H), the scattering medium was composed of a 10° diffuser placed 1 mm away from another 10° diffuser (Newport light shaping diffusers). In Fig. 4 (D to G), we imaged a USAF test target (R1DS1N, negative 1951) through a commercial Schott 191012-002, 50-cm-long fiber bundle with 11,000 cores, a pixel size of 7.5 μm, and a 0.83-mm bundle diameter; the target was placed 3.8 mm from the fiber bundle. Camera exposure times were 70 to 400 ms. All objects were transmission plates with different features. The “ground truth” images of the objects without the scattering layer were acquired using a different 4-f imaging system composed of a microscope objective and an f = 180 mm tube lens.

Image metrics and optimization

The modified entropy metric, H, is calculated from the camera image after scaling its intensity to have a peak value of 0 < α < 1, which simultaneously encourages increasing the intensity of the bright pixels and decreasing the intensity of the dark pixels (see section S5): $H[\tilde{I}_{\mathrm{cam}}] = -\sum_{m=1}^{M}\sum_{n=1}^{N} \tilde{I}_{\mathrm{cam}}[m,n]\,\log\!\left(\tilde{I}_{\mathrm{cam}}[m,n]\right)$, where $\tilde{I}_{\mathrm{cam}}[m,n]$ is the scaled camera image $I_{\mathrm{cam}}(x,y)$ sampled by the camera pixels, α = 0.4, and the sum is performed over all M × N camera pixels.
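As an illustration, the modified entropy metric can be sketched in a few lines of NumPy. The function name, the peak-scaling step, and the small clipping constant used to avoid log(0) are our own choices for this sketch, not part of the published code:

```python
import numpy as np

def modified_entropy(I_cam, alpha=0.4):
    """Modified entropy metric: scale the image so its peak equals alpha (0 < alpha < 1),
    then sum -I*log(I) over all pixels; sharp images yield lower values."""
    I = alpha * I_cam / I_cam.max()   # peak intensity is now alpha
    I = np.clip(I, 1e-12, None)       # clip dark pixels to avoid log(0)
    return -np.sum(I * np.log(I))
```

A sharp image (a few bright pixels on a dark background) yields a much lower H than a blurred, near-uniform speckle image of the same total energy, which is what drives the optimization toward a scattering-compensating wavefront.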

The variance metric is calculated from the camera image by $V[I_{\mathrm{cam}}] = \frac{1}{M_{\mathrm{ROI}} N_{\mathrm{ROI}}}\sum_{m=1}^{M_{\mathrm{ROI}}}\sum_{n=1}^{N_{\mathrm{ROI}}} I_{\mathrm{cam}}^{2}[m,n] - \left(\frac{1}{M_{\mathrm{ROI}} N_{\mathrm{ROI}}}\sum_{m=1}^{M_{\mathrm{ROI}}}\sum_{n=1}^{N_{\mathrm{ROI}}} I_{\mathrm{cam}}[m,n]\right)^{2}$, where the sum is performed over a small ROI that is extracted from the entropy optimization and fits the object image size.
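The variance metric is likewise a one-line computation over the ROI (mean of squares minus square of the mean). The slice-based ROI interface below is an illustrative assumption:

```python
import numpy as np

def variance_metric(I_cam, roi):
    """Variance metric over a region of interest: mean of squares minus the
    square of the mean; higher for high-contrast, wavefront-corrected images."""
    I = I_cam[roi]                      # roi is e.g. a tuple of slices
    return np.mean(I**2) - np.mean(I)**2
```

This is simply the population variance of the ROI pixels, so it agrees with `np.var` on the same region.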

The image-based sharpness metrics were optimized by a genetic algorithm (37) with a population size of 50 for Figs. 3 and 5 and of 100 for Figs. 1, 2, and 4; initial and final mutation rates of R0 = 0.3 and Rend = 0.013, respectively; and a decay factor of 103. We used coarse-to-fine partitioning of the SLM during the optimization process, gradually refining the macro pixels from 120 × 120 to 15 × 15 SLM pixels, halving the macro-pixel size in each dimension at each refinement step. A refinement step occurs when the metric changes by less than 5% over 50 generations. We found that while the coarse-to-fine partitioning of the SLM was important for robust optimization convergence, the results were robust to changes in the genetic algorithm parameters, as long as a sufficiently large population size and number of generations were used, as was similarly demonstrated in (37). The same genetic algorithm parameters were used for all imaged objects.
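The mutation-rate schedule and coarse-to-fine SLM partitioning can be sketched as follows. The exponential-decay form of the schedule (commonly used with the algorithm of (37)) and the square 960 × 960 SLM region are assumptions for this sketch; the text specifies only R0, Rend, the decay factor, and the macro-pixel sizes:

```python
import numpy as np

R0, R_END, DECAY = 0.3, 0.013, 103.0   # GA parameters given in the text

def mutation_rate(n):
    """Assumed exponentially decaying mutation rate at generation n,
    R(n) = (R0 - Rend) * exp(-n / decay) + Rend, as in (37)."""
    return (R0 - R_END) * np.exp(-n / DECAY) + R_END

def upsample_mask(mask, n_slm=960):
    """Map a macro-pixel phase mask onto the full (assumed square) SLM by
    block repetition, e.g. an 8x8 mask of 120x120 macro pixels -> 960x960."""
    k = n_slm // mask.shape[0]
    return np.kron(mask, np.ones((k, k)))

def refine(mask):
    """Coarse-to-fine step: halve the macro-pixel size in each dimension,
    keeping the current phase pattern as the starting point (120 -> 60 -> 30 -> 15)."""
    return np.kron(mask, np.ones((2, 2)))
```

Each refinement quadruples the number of controlled DOFs while preserving the phase pattern found so far, which is what makes the coarse-to-fine schedule robust: early generations optimize a small, low-dimensional search space, and later generations fine-tune.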


Supplementary material for this article is available at

This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial license, which permits use, distribution, and reproduction in any medium, so long as the resultant use is not for commercial advantage and provided the original work is properly cited.


Acknowledgments: We would like to thank R. Fattal and Y. Hoshen of The Benin School of Computer Science and Engineering for fruitful discussions and A. She’altiely, S. Gabay, and the staff from the Fine Mechanical Workshop of the Racah Institute of Physics. Funding: This work is funded by the European Research Council (ERC) Horizon 2020 research and innovation program (grant no. 677909), Israeli Ministry of Science and Technology (grant 712845), Human Frontiers Science Program (grant RGP0015/2016), and Israel Science Foundation (1361/18). Author contributions: O.K. conceived the idea. T.Y. developed the optimization algorithm, performed the experiments, and analyzed the results. T.Y. and O.K. performed numerical simulations. T.Y. and O.K. wrote the manuscript. Competing interests: The authors declare that they have no competing interests. Data and materials availability: All data needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary Materials. Additional data related to this paper may be requested from the authors.
