Synthetic aperture radar is a well-known technique for improving resolution in radio imaging. Extending these synthetic aperture techniques to the visible light domain is not straightforward because optical receivers cannot measure phase information. We propose to use macroscopic Fourier ptychography (FP) as a practical means of creating a synthetic aperture for visible imaging to achieve subdiffraction-limited resolution. We demonstrate the first working prototype for macroscopic FP in a reflection imaging geometry that is capable of imaging optically rough objects. In addition, a novel image space denoising regularization is introduced during phase retrieval to reduce the effects of speckle and improve perceptual quality of the recovered high-resolution image. Our approach is validated experimentally where the resolution of various diffuse objects is improved sixfold.
- Computational Imaging
- Fourier Ptychography
- Phase Retrieval
- Synthetic Aperture Imaging
Imaging objects from large standoff distances is a requirement in many computer vision and imaging applications such as surveillance and remote sensing. In these scenarios, the imaging device is sufficiently far away from the object that imaging resolution is fundamentally limited not by magnification but rather by the diffraction of light at the limiting aperture of the imaging system—using a lens with a larger aperture will lead to increased spatial resolution. Physically increasing the aperture of the lens, by building a larger lens, results in expensive, heavy, and bulky optics and mechanics. Details on the rapid increase in cost and weight as the focal length increases can be found in part A of the Supplementary Materials. A number of techniques have been proposed to improve spatial resolution for various imaging systems, including refractive telescopes (1–6), holography (7–11), and incoherent superresolution (12–16).
The resolution of an imaging system is proportional to both the lens aperture size and the wavelength of the electromagnetic spectrum used. In long-wavelength regimes (such as radar), the direct coupling between image resolution and aperture size can be mitigated using synthetic aperture radar (SAR) techniques. SAR improves radio imaging resolution by capturing multiple measurements of a static object using a mobile recording platform, such as an airplane or satellite (Fig. 1A). For SAR, the resolution is determined by the synthetic aperture size, which can be many orders of magnitude larger than the physical aperture size. Stitching together multiple radar returns is possible because the full complex field (amplitude and phase) is directly measured by the antenna with picosecond timing resolution. However, to make a comparable measurement using visible light, a detector would have to continuously record information with a time resolution greater than 1 fs, a requirement well beyond the capabilities of modern devices. As such, current camera sensors record only the intensity of the incoming optical field, and all phase information is lost. Here, we seek to extend the concept of synthetic apertures for visible imaging (SAVI), paving the way toward long-distance, subdiffraction-limited imaging (Fig. 1B).
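To put this timing requirement in perspective, the period of one optical cycle at the 532-nm wavelength used later in this paper is under 2 fs; a minimal check (constants only, no experimental data):

```python
# Oscillation period of a green (532 nm) optical field.
# A detector would need timing resolution finer than this period
# to record phase directly, the way an SAR antenna does for radio.
C_LIGHT = 299_792_458.0   # speed of light (m/s)
WAVELENGTH = 532e-9       # laser wavelength used in this paper (m)

period_s = WAVELENGTH / C_LIGHT
period_fs = period_s * 1e15
print(f"optical period: {period_fs:.2f} fs")
```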
Fourier ptychography (FP) has emerged as a powerful tool to improve spatial resolution in microscopy. In FP, multiple low-resolution images with different illumination angles are captured and stitched together (17–27). Redundancy between measurements permits computational recovery of the missing phase information (18, 25, 28–31). See part B of the Supplementary Materials for a detailed discussion on previous work in FP microscopy and related phase retrieval techniques, as well as comparison with other methods for high-resolution imaging.
Adapting the technique to long-distance imaging requires two important modifications of previous FP microscopy implementations. First, the separation distance between the object and the camera increases by orders of magnitude. Second, a reflection imaging geometry must be used so that the illumination source and the camera are placed on the same side of the object. Dong et al. (20) and Holloway et al. (32) succeeded in the first task, scaling up the working distance to 0.7 and 1.5 m, respectively. Reflective FP microscopy setups have been proposed to fulfill the second task (33–35). However, these systems either require small working distances (34, 35) or exhibit limited reconstruction performance (33). Here, we present a highly flexible implementation based on separate illumination and imaging optical paths.
The results presented in this paper also differ from previous works in one key regard. We demonstrate FP for optically rough surfaces, that is, surfaces that produce speckle. This is more conducive to imaging everyday objects that scatter incident light in random directions. In Fig. 2, a comparison with existing FP implementations is shown. Previous works have relied on smooth objects and are loosely represented by the transmissive data set adapted from Holloway et al. (32) shown on the left side of Fig. 2. A sample data set of a diffuse object collected in a reflection mode geometry is shown on the right. The immediate difference between the two data sets is that the random phase associated with diffuse objects effectively spreads out information across the entire Fourier domain. As a consequence of the random phase, the spatial information is obfuscated by the resultant speckle.
The works most closely related to this paper are the synthetic aperture holographic setup proposed by Tippie et al. (11) and the optical synthetic aperture approach of Beck et al. (36). Tippie et al. (11) experimentally demonstrated synthetic aperture off-axis holographic capture of diffuse objects at a large standoff distance. Our approach can be interpreted as a reference-free extension of synthetic aperture holography in which computational reconstruction algorithms are used in place of interferometric capture, resulting in more stable implementations and widening the variety of application scenarios that could benefit from the approach. Beck et al. (36) seek to extend SAR techniques into optical wavelengths in the near-infrared regime of the electromagnetic spectrum. To record phase measurements, the object is raster-scanned by moving an aperture. The return signal is then demodulated using a reference signal to reduce the frequency to approximately 100 kHz, which can be sampled with commercially available analog-to-digital converters (ADCs). In contrast to the study of Beck et al. (36), our approach is reference-less, which improves signal strength and reduces hardware complexity, and markedly improves data acquisition rates by not having to raster-scan the illumination over the scene.
Here, we show, for the first time, long-range, high-resolution imaging of diffuse targets—well beyond the diffraction limit imposed by the imaging optics—using synthetic aperture FP-based techniques. An overview of the proposed SAVI creation technique is illustrated in Fig. 1C. We demonstrate, using our experimental test-bed, a sixfold improvement in image resolution. In addition, we introduce an image space regularization technique that reduces the speckle visible in the reconstructed images, which results in improved perceptual image quality.
To validate the proposed method of creating SAVI, we constructed a tabletop device to capture images of objects under coherent illumination. Image recovery is performed in MATLAB.
A simplified rendering of the experimental setup is shown in Fig. 3A, and the physical setup is shown in Fig. 3B. A laser diode (Edmund Optics #64-825) operating at 532 nm is passed through a spatial filter and a 2-inch-diameter focusing lens to illuminate an object 1 m away. For clarity, the spatial filter has been represented as a pinhole, and the camera body has been removed in Fig. 3A. Additional optical elements, such as neutral density filters and polarizers, have also been omitted from the diagram.
Light scattered off of the object is collected by a camera positioned near the focusing lens. To satisfy the Fraunhofer diffraction approximation for a short optical path, we used a lens to focus the coherent illumination at the aperture plane of the camera lens. Whereas the model in Materials and Methods assumes free space propagation, we show in part C of the Supplementary Materials that the analysis holds for converging spherical waves. Imaging over larger distances would not require a lens to satisfy the far-field conditions, and the focusing lens would be repurposed as a collimating lens instead. Low-resolution images are captured by moving the camera (both the lens and the sensor) on an XY translation stage to sample a larger region of Fourier space.
All of the results presented in this paper use the same experimental setup described above with the following parameters and components. The camera used is a Point Grey machine vision camera (BFLY-PGE-50A2M-CS) equipped with an Aptina MT9P031 complementary metal-oxide-semiconductor (CMOS) image sensor with a pixel pitch of 2.2 μm. In front of the sensor is a lens with a focal length of 75 mm and an aperture diameter of 2.5 mm (f/30), which produces a diffraction spot size of ~39 μm on the sensor. For the U.S. Air Force (USAF) target and fingerprint data sets, adjacent positions of the camera are 0.7 mm apart to ensure sufficient overlap between samples in the Fourier domain. A grid of 19 × 19 images is captured to produce a synthetic aperture of 15.1 mm, six times larger than the lens’ aperture. The parameters used to capture the reverse side of a U.S. $2 bill in Fig. 3D are slightly modified. A 21 × 21 grid of images is collected with adjacent positions separated by 0.6 mm, yielding a synthetic aperture of 14.5 mm.
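The quoted numbers follow directly from the optical parameters. The paper's reconstruction code is MATLAB; the sketch below uses Python with illustrative variable names (none are from the authors' code):

```python
# Sanity-check the capture parameters quoted above.
WAVELENGTH = 532e-9      # m
FOCAL_LENGTH = 75e-3     # m
APERTURE_D = 2.5e-3      # m  (f/30)

# Diffraction-limited spot (Airy disk diameter) on the sensor: 2.44 * lambda * f / d
spot = 2.44 * WAVELENGTH * FOCAL_LENGTH / APERTURE_D
print(f"spot size: {spot * 1e6:.1f} um")        # ~39 um

def synthetic_aperture(n_positions, step, aperture=APERTURE_D):
    """Span of an n x n grid of aperture positions separated by `step`."""
    return (n_positions - 1) * step + aperture

def linear_overlap(step, aperture=APERTURE_D):
    """Fractional overlap between adjacent apertures: 1 - shift/diameter."""
    return 1.0 - step / aperture

print(synthetic_aperture(19, 0.7e-3))   # 15.1 mm (USAF and fingerprint sets)
print(synthetic_aperture(21, 0.6e-3))   # 14.5 mm ($2 bill)
print(linear_overlap(0.7e-3))           # 0.72 overlap
print(linear_overlap(0.6e-3))           # 0.76 overlap
```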
Exposure bracketing and image averaging are used to increase the dynamic range and reduce read noise, respectively. At each position, the camera records images with three different shutter times (doubling the exposure between each). For each exposure time, between 5 and 10 images are averaged to reduce read noise. The exposures are then joined together to yield a high–dynamic range image. An image sensor with a larger ADC and larger pixels could be substituted to decrease acquisition time instead of using averaging and exposure bracketing.
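A minimal sketch of such a bracketed merge, assuming a linear sensor response and a simple saturation-weighted average (the authors' exact merging procedure is not specified here):

```python
import numpy as np

def hdr_merge(images, shutter_times, saturation=0.95):
    """Merge bracketed exposures of a linear sensor into one radiance map.

    Each image (values in [0, 1]) is divided by its shutter time, and
    saturated pixels are excluded from the weighted average.
    Illustrative only, not the authors' exact procedure.
    """
    images = [np.asarray(im, dtype=float) for im in images]
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for im, t in zip(images, shutter_times):
        w = (im < saturation).astype(float)   # trust only unsaturated pixels
        num += w * im / t
        den += w
    return num / np.maximum(den, 1)

# Bracketed captures of a toy scene, doubling the exposure each time
scene = np.array([0.2, 0.6])                  # true radiance (arbitrary units)
exposures = [1.0, 2.0, 4.0]
shots = [np.clip(scene * t, 0, 1) for t in exposures]
print(hdr_merge(shots, exposures))
```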
Accurate alignment of the low-resolution images is crucial to accurately reconstruct a high-resolution image. Checkerboard fiducial markers are placed at the periphery of the object, outside of the region illuminated by coherent light, to allow for the ease of alignment postcapture using standard tools (37). If fiducial markers are not present, diffuse images can be aligned by correlating speckle patterns in adjacent images (11). In long-distance imaging, it is likely that only a portion of the scene will be illuminated and key features in the rest of the scene may be used for image alignment, which matches the operating parameters of our experimental setup.
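Speckle-correlation alignment of adjacent captures can be sketched with FFT-based phase correlation; the function below is illustrative and recovers only integer-pixel shifts (library routines such as scikit-image's `phase_cross_correlation` add subpixel refinement):

```python
import numpy as np

def register_translation(ref, mov):
    """Estimate the integer-pixel shift of `ref` relative to `mov`
    via FFT phase correlation of the two intensity patterns."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(mov)
    cross = F1 * np.conj(F2)
    corr = np.fft.ifft2(cross / np.maximum(np.abs(cross), 1e-12))
    peak = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    # Wrap shifts larger than half the image to negative values
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shifts)

rng = np.random.default_rng(0)
speckle = rng.exponential(size=(64, 64))          # stand-in speckle intensity
shifted = np.roll(speckle, (3, -5), axis=(0, 1))  # simulated camera translation
print(register_translation(shifted, speckle))
```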
SAVI for diffuse objects
In the bottom row of Fig. 3, two common objects are illuminated with coherent light: In Fig. 3C, a fingerprint deposited on glass is presented, and Fig. 3D shows the reverse side of a U.S. $2 bill. A single captured image for either object is unrecognizable because of the extreme degradation from speckle and diffraction blur. After reconstruction using FP, which amounts to a sixfold improvement in resolution, both images are recognizable and fine details can be clearly observed.
The ability to resolve biometric markers from large standoff distances has many applications in surveillance and authentication. Current methods of data collection are often intrusive, for example, using a fingerprint scanner. In Fig. 3C, we show how a fingerprint can be recovered from a piece of glass by means of SAVI. The fingerprint is coated with powder to provide relief against the transparent glass. Fine features necessary to identify the print are recovered, as shown in the zoomed-in regions. Because of the limitations in the current prototype, only stationary objects can be recovered; however, a real-time acquisition system would enable contactless on-the-fly authentication of users simply holding a finger aloft.
Capturing legible text and images is also of interest in surveillance scenarios, such as verifying payment amounts at retail outlets. However, recovering the high-resolution image for the U.S. $2 bill (Fig. 3D) introduced an unanticipated problem during reconstruction. Unlike the fingerprint, the intensities on the cloth are not binary, which results in reduced contrast between the foreground and background. To combat the added difficulty of recovering such a varied object, the overlap between adjacent measurements was increased to 76%, and a 21 × 21 grid was sampled. Recovering scenes with low contrast is examined further in Discussion.
During phase retrieval, a high-resolution estimate of the phase emanating from the scene is recovered. As expected for a diffusely scattering object, the phase exhibits a random profile in the range [−π, π]. Here, we are primarily interested in recovering a high-resolution intensity image of the scene; thus, the randomness in the recovered phase has no impact. The recovered phase estimate for the U.S. $2 bill is discussed and shown in part F of the Supplementary Materials.
USAF resolution target
To quantitatively investigate the performance of SAVI, we image a negative USAF chrome-on-glass target with flat white paint applied to the chrome surface of the target, as suggested in the work of Tippie et al. (11). The target is imaged through the back of the glass plate to retain the high-resolution features characteristic of the resolution chart. An example of a captured image is shown in the first row of Fig. 4A.
The specular reflections off of the glass and chrome surfaces of the target are orders of magnitude stronger than the diffuse light scattered by the painted surface. To mitigate the effects of specular reflection, we adjusted the angle between the illumination, object, and camera so that the direct reflection does not enter the synthetic aperture of the camera system. In addition, crossed polarizers are used to attenuate the contributions of diffracted direct illumination (at the boundaries of the bars).
Directly imaging the target results in a characteristic speckle pattern (first row of Fig. 4A). For the given imaging geometry, resolution is limited to 1.26 line pairs per millimeter or a bar width of 397 μm. A common method to reduce speckle noise, and increase resolution, is to average multiple short-exposure images of a varying speckle field to extract spatial frequencies beyond the capability of any single measurement (1, 3). To induce a change in speckle pattern, we vibrated the target and acquired a sequence of 361 short-exposure images, equal to the number of images used to create the synthetic aperture. The exposure duration is set to be 1/5 of the middle exposure time used during FP data collection. The average of the short-exposure images is shown in the second row of Fig. 4A. As expected, the resolution of the averaged image surpasses that of the captured image, and 280-μm features are now resolvable.
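The benefit of averaging independent speckle realizations can be illustrated numerically: for fully developed speckle, the contrast (standard deviation over mean) of an M-frame average falls as 1/√M. A sketch, assuming idealized exponential speckle statistics:

```python
import numpy as np

rng = np.random.default_rng(1)
mu = 1.0          # mean intensity (arbitrary units)
M = 361           # number of short-exposure frames, as in the experiment
npix = 100_000

single = rng.exponential(mu, size=npix)              # one speckle frame
stack = rng.exponential(mu, size=(M, npix)).mean(0)  # M-frame average

contrast = lambda I: I.std() / I.mean()
print(contrast(single))                 # ~1.0 for fully developed speckle
print(contrast(stack) * np.sqrt(M))     # ~1.0: contrast fell by ~1/sqrt(M)
```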
Speckle can also be suppressed by inserting a rotating diffuser in the optical path before the object (38) to destroy the temporal coherence of the illumination source. As shown in the third row of Fig. 4A, imaging with a rotating diffuser improves resolution so that features as small as 157 μm are resolvable. A consequence of using a rotating diffuser is that light intensity falls off as 1/d², which greatly affects the intensity measured by the camera and limits the effective working distance of the system.
Whereas individual images exhibit significant blur and diffraction, which can be partially mitigated by averaging, the resulting image from FP has a sixfold improvement in resolution with resolvable features as small as 70 μm (fourth row of Fig. 4A). The increased resolution is also accompanied by a corresponding sixfold decrease in speckle size; however, the speckle is still present in the final reconstruction. By introducing a denoising prior into the reconstruction algorithm, the effect of the speckle can be further mitigated to improve contrast while retaining the same resolution improvements as traditional FP algorithms. The full reconstruction with regularization is shown in the fifth row of Fig. 4A.
Zoomed-in details of the five imaging scenarios are shown in Fig. 4B. Images above the dashed pink line are not resolvable. Notice that resolution and contrast quickly deteriorate for the individual captured images and the averaged image, whereas the rotating diffuser offers only a marginal improvement in resolution. The images obtained using FP exhibit higher resolution than those obtained by directly measuring the scene; furthermore, the addition of the denoiser reduces variation within the speckle regions to improve contrast.
The USAF resolution chart can be used to quantify the resolving power of an optical system. Many metrics to compute contrast rely on the ratio of the maximum value to the minimum value along a cross section perpendicular to the bar orientation. This is a serviceable definition in most cases; however, when speckle is present, the random distribution of intensities can skew contrast measurements. We add a slight modification to the standard contrast metric to account for the variable intensity due to speckle. Bar positions are known a priori, and the average intensities of the white (Ī_w) and black (Ī_b) bars are used to compute contrast. The contrast C is further scaled by the average intensity in the bright and dark regions to account for speckle migration during reconstruction

$$C = \frac{\bar{I}_w - \bar{I}_b}{\bar{I}_w + \bar{I}_b} \tag{1}$$
Bar locations are manually located using a high-resolution image of the USAF target and are scaled to the correct size for each test image. The threshold for resolvability must be adjusted to compensate for the additional scaling in Eq. 1. We define a contrast value of 0.05 to be the resolution limit.
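The bar-averaged contrast described above can be written as a small helper; this is a sketch of the metric, with the 0.05 resolvability threshold applied by the caller:

```python
import numpy as np

def bar_contrast(white_pixels, black_pixels):
    """Bar-averaged contrast: ratio of the difference to the sum of the
    mean intensities over the known white and black bar regions.
    A sketch of the modified metric described above; a bar group is
    deemed resolvable when the returned value is >= 0.05."""
    iw = np.mean(white_pixels)
    ib = np.mean(black_pixels)
    return (iw - ib) / (iw + ib)

# Perfectly resolved bars give contrast 1; statistically identical
# white and black regions give contrast 0.
print(bar_contrast([1.0, 1.0], [0.0, 0.0]))
print(bar_contrast([0.6, 0.4], [0.5, 0.5]))
```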
Contrast plots for the USAF target are presented in Fig. 4C. Contrast for the observation image and the averaged short-exposure image rapidly deteriorates in agreement with observations made in Fig. 4 (A and B). Reconstruction of the synthetic aperture image using the full complement of measurements significantly improves resolution. Including the image space denoising during reconstruction increases the contrast in the bars, which aids in discrimination of fine spatial features.
Resolution gains and speckle reduction
Because the goal of the proposed method is to emulate a larger synthetic aperture, and diffraction blur and speckle size are inversely proportional to the size of the imaging aperture, it is expected that the improvement in resolution should increase linearly with the increase in aperture diameter. To illustrate this effect, the USAF target was reconstructed with varying synthetic aperture sizes by only using subsets of the full 19 × 19 data set. In this manner, the size of the synthetic aperture was increased from 2.5 to 15.1 mm in steps of 0.7 mm. Figure 4D shows that the resolution improves and speckle size decreases according to theory. Recovery was performed without the use of the denoising regularizer so that an accurate measurement of the speckle size was obtained. Speckle size is measured as the full width at half maximum of the autocorrelation function of the intensity pattern (39). A region of the registration square at the top of the target (the square between group 1, element 1 and group 0, element 2) is chosen to measure the speckle size, and the reported value is the average of both the vertical and horizontal profiles. It should also be noted that the discrete nature of the USAF target causes the measured resolution to deviate from the predicted values; further details are provided in part D of the Supplementary Materials. The measured speckle size decreases according to theoretical predictions and demonstrates a sixfold improvement in spatial resolution.
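The relation between aperture size and speckle grain can also be checked in simulation. The sketch below low-pass filters a random-phase field with circular pupils of two diameters and compares the widths of the intensity autocorrelation peaks; the model and sizes are illustrative assumptions:

```python
import numpy as np

def speckle_intensity(n, aperture_frac, rng):
    """Simulate fully developed speckle: a random-phase field low-pass
    filtered by a circular pupil whose diameter is `aperture_frac`
    of the full frequency range (an idealized model)."""
    field = np.exp(2j * np.pi * rng.random((n, n)))
    fy, fx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
    pupil = np.hypot(fy, fx) < aperture_frac / 2
    return np.abs(np.fft.ifft2(np.fft.fft2(field) * pupil)) ** 2

def autocorr_width(I):
    """Samples above half the central autocorrelation peak along the
    horizontal profile; a proxy for speckle size in pixels."""
    I = I - I.mean()
    ac = np.fft.ifft2(np.abs(np.fft.fft2(I)) ** 2).real
    row = np.fft.fftshift(ac[0])          # profile through the peak
    return int(np.count_nonzero(row > 0.5 * row.max()))

rng = np.random.default_rng(5)
small_ap = autocorr_width(speckle_intensity(256, 0.1, rng))  # small aperture
large_ap = autocorr_width(speckle_intensity(256, 0.4, rng))  # 4x larger aperture
print(small_ap, large_ap)  # speckle grains shrink as the aperture grows
```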
Experimental results introduced in Results suggest that FP is a promising technique to achieve subdiffraction imaging at large standoff distances. However, there is still a long way to go before a full-scale implementation is realized.
One important area of further study is modeling and accounting for objects with low contrast. Although it is known that the difference in intensity between the background and foreground affects perceptual resolution in the presence of speckle (40), it is unclear how this affects the reconstruction performance of FP algorithms. We have observed that, for objects with a range of intensities (low contrast), such as the U.S. $2 bill shown in Fig. 3D, a greater amount of overlap may be required to obtain a satisfactory reconstruction.
Even with a good reconstruction, features may still be hard to make out if the contrast is low. In Fig. 5, we simulate super-resolving a complex object whose amplitude has varying brightness ratios. The owl is the brightest element (amplitude of 1), the background is the darkest element, and the brightness of the text and camera is the average of the owl and the background. The minimum amplitude varies from 0.1 to 0.5, and a simulated FP data capture is generated, matching the experimental parameters used for the USAF and fingerprint data sets (focal length, 75 mm; aperture diameter, 2.5 mm; overlap, 72%; synthetic aperture size, 15.1 mm). As the contrast decreases in the amplitude of the high-resolution object, a larger area of the captured images (left column of Fig. 5) exhibits speckle. The reconstruction quality likewise degrades as the contrast decreases (right column of Fig. 5). Speckle appears in the background, and fine features that can be seen with a relatively high contrast (first row) are no longer resolvable (third row).
Naturally, when the amplitude of the background is nonzero, speckles will form. Removing the speckle from the background will require stronger regularization than the method presented in this paper and is a promising avenue of research. Alternative strategies, such as destroying temporal coherence to reduce speckle contrast, have been used in FP microscopy (35) and may be of some benefit in near- to mid-range macroscopic imaging.
We have presented the first implementation of a macroscopic, reflective FP imaging system that is capable of reconstructing diffuse objects. Sixfold resolution improvements are validated experimentally, and reconstructions of other common objects demonstrate the versatility of FP to recover diffuse objects. These findings suggest that creating a SAVI is feasible and that the prospects are good for building a full-scale imaging platform in the near future.
As promising as these results are, there are still additional factors that will need to be overcome in a full-scale SAVI implementation. Here, we have only considered diffraction to be the limiting factor in spatial resolution loss. Atmospheric turbulence, long the bane of astronomers, will also affect image measurements made over large distances. There is some hope that the redundancy in the measurements for FP can help mitigate the effects of turbulence, just as aberrations and imperfections in the pupil function can be modeled and corrected, but significant research will be needed before dynamic effects can be incorporated into the model. Although this study has not solved every challenge that SAVI will face, we believe that our results lay important groundwork toward achieving its practical implementation.
MATERIALS AND METHODS
FP relies on the use of monochromatic illumination sources to provide coherent illumination. An overview of the forward model of FP is provided here.
Image formation model
In this experiment, we assumed a standard imaging geometry; that is, a source illuminates an object that reflects light back toward a camera. The source emits a field that is monochromatic with wavelength λ and is spatially coherent across the surface of the object of interest. The illumination field interacts with the object, and a portion of the field will be reflected back toward the imaging system. The field emanating from the object surface is denoted by the two-dimensional complex function ψ(x, y).
ψ(x, y) propagates over a sufficiently large distance z toward the imaging system to satisfy the far-field Fraunhofer approximation. The field at the aperture plane of the camera is related to the field at the object through a Fourier transform (41)

$$\Psi(u, v) = \frac{e^{jkz}}{j\lambda z}\, e^{\frac{jk}{2z}\left(u^2 + v^2\right)}\, \mathcal{F}_{1/\lambda z}\{\psi(x, y)\} \tag{2}$$

where k = 2π/λ is the wave number, and ℱ_{1/λz} is the two-dimensional Fourier transform scaled by 1/λz. For simplicity, we dropped the multiplicative phase factors and the coordinate scaling from the analysis, although these could be accounted for after phase retrieval if desired. To further reduce clutter, spatial coordinates (x, y) were represented by the vector x, and frequency coordinates (u, v) were represented as u.
The field that intercepts the pupil of the imaging system, Ψ(u), is effectively the Fourier transform of the field at the object plane. Because of the finite diameter of the lens, only a portion of the Fourier transform was imaged onto the camera sensor. Let the transmittance of the pupil be given by P(u). For an ideal circular lens, P(u) is defined as

$$P(\mathbf{u}) = \begin{cases} 1 & \text{if } \|\mathbf{u}\| \leq d/2 \\ 0 & \text{otherwise} \end{cases} \tag{3}$$

where d is the diameter of the lens.
The camera lens was focused on the image sensor and therefore also fulfilled the Fraunhofer approximation (41), such that the field at the sensor plane (again ignoring phase offsets and scaling) was the Fourier transform of the field immediately after the lens. Because the image sensor only recorded optical intensity, the final image is given as

$$I(\mathbf{x}, \mathbf{c}) = \left|\mathcal{F}\{\Psi(\mathbf{u} - \mathbf{c})\, P(\mathbf{u})\}\right|^2 \tag{4}$$

where c is the center of the pupil. In Eq. 4, the shift of the camera aperture to capture different regions of the Fourier domain is represented by the equivalent shift of the Fourier pattern relative to the camera. Because of the finite extent of the lens aperture, the recorded image will not contain all of the frequency content of the propagated field Ψ(u). For a lens with diameter d and focal length f, the smallest resolvable feature within one image will be approximately 1.22λf/d.
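The forward model can be sketched as a simulation in a few lines: Fourier-transform the object field, window it with a shifted circular pupil, and record only intensity. Units here are frequency pixels rather than calibrated physical coordinates:

```python
import numpy as np

def capture(psi, pupil_d, center):
    """One low-resolution intensity capture of the forward model:
    Fourier-transform the object field, window it with a circular pupil
    recentered at `center` (frequency pixels), and record intensity at
    the sensor. A simulation sketch, not calibrated optics."""
    Psi = np.fft.fftshift(np.fft.fft2(psi))
    n = Psi.shape[0]
    ky, kx = np.indices(Psi.shape)
    cy, cx = n // 2 + center[0], n // 2 + center[1]
    pupil = (ky - cy) ** 2 + (kx - cx) ** 2 <= (pupil_d / 2) ** 2
    return np.abs(np.fft.ifft2(np.fft.ifftshift(Psi * pupil))) ** 2

rng = np.random.default_rng(2)
psi = np.exp(2j * np.pi * rng.random((64, 64)))  # optically rough, unit amplitude
img = capture(psi, pupil_d=12, center=(0, 0))
print(img.shape)
```

Because the pupil discards most frequencies of the rough object, the captured image carries only a fraction of the field's energy, which is why many shifted captures are needed.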
Recovering high-resolution magnitude and phase
To facilitate the creation of a synthetic aperture, the camera was recentered in the Fourier plane at N different locations ci, i = 1, …, N. One consequence of sampling in the Fourier domain is that the images must be stitched together in the Fourier domain. From Eq. 4, the image sensor only recorded the intensity of the complex field and contained no information about the phase. It is therefore necessary to computationally recover the phase of the N intensity measurements. To ensure that sufficient information is available for postcapture phase retrieval, care must be taken to provide sufficient overlap between adjacent camera positions. From previous works, it is known that ≥65% overlap is typically required for phase retrieval to converge to an adequate solution (18, 25, 32).
Here, we expanded on the image recovery algorithm proposed by Ou et al. (18) and extended by Tian et al. (25), in which both an estimate of the high-resolution field Ψ(u) and an estimate of the pupil function P(u) were recovered. From Eq. 2, recovering Ψ(u) effectively recovered ψ(x) because the fields were related through a Fourier transform. We sought to solve the following optimization problem

$$\min_{\Psi(\mathbf{u}),\, P(\mathbf{u})} \sum_{i=1}^{N} \sum_{\mathbf{x}} \left( I_i(\mathbf{x}) - \left|\mathcal{F}\{\Phi(\mathbf{u}, \mathbf{c}_i)\}\right|^2 \right)^2 \tag{5}$$

where Φ(u, ci) is the optical field immediately following the aperture at the ith position, Φ(u, ci) = Ψ(u − ci)P(u).
Defining the data fidelity term of the reconstruction to be the L2 error between measured and estimated intensities in Eq. 5 resulted in a nonconvex optimization problem. Phase retrieval is typically solved using an iterative update scheme similar to that popularized by Gerchberg (42) and Fienup (43). In the mth iteration, the estimate of Φ(u) is propagated to the image plane for each camera position ci, whereupon the measured image intensities are enforced

$$\phi^m(\mathbf{x}, \mathbf{c}_i) = \mathcal{F}^{-1}\{\Phi^m(\mathbf{u}, \mathbf{c}_i)\} \tag{6}$$

$$\hat{\phi}^m(\mathbf{x}, \mathbf{c}_i) = \sqrt{I_i(\mathbf{x})}\,\frac{\phi^m(\mathbf{x}, \mathbf{c}_i)}{\left|\phi^m(\mathbf{x}, \mathbf{c}_i)\right|} \tag{7}$$

$$\hat{\Phi}^m(\mathbf{u}, \mathbf{c}_i) = \mathcal{F}\{\hat{\phi}^m(\mathbf{x}, \mathbf{c}_i)\} \tag{8}$$

$$\Delta\Phi^m(\mathbf{u}, \mathbf{c}_i) = \hat{\Phi}^m(\mathbf{u}, \mathbf{c}_i) - \Phi^m(\mathbf{u}, \mathbf{c}_i) \tag{9}$$
Differences between successive estimates of the field Φ(u, ci) were used to update the estimates of Ψ(u) and P(u) in the Fourier domain. Following the formulation in the study of Tian et al. (25), the estimate of Ψ(u) is given by

$$\Psi^{m+1}(\mathbf{u}) = \Psi^m(\mathbf{u}) + \frac{\left|P^m(\mathbf{u} + \mathbf{c}_i)\right| P^{m*}(\mathbf{u} + \mathbf{c}_i)}{\left|P^m(\mathbf{u})\right|_{\max}\left(\left|P^m(\mathbf{u} + \mathbf{c}_i)\right|^2 + \tau_1\right)}\,\Delta\Phi^m(\mathbf{u} + \mathbf{c}_i, \mathbf{c}_i) \tag{10}$$
The adaptive update step size |P^m(u + ci)|/|P^m(u)|max was used by Tian et al. (25) and was based on the work of Rodenburg and Faulkner (44). The contribution of the pupil function was first divided out of the difference, and the remainder was used to update the estimate of Ψ(u). A similar update step was used to update the estimate of the pupil function but with the roles of Ψ(u) and P(u) reversed

$$P^{m+1}(\mathbf{u}) = P^m(\mathbf{u}) + \frac{\left|\Psi^m(\mathbf{u} - \mathbf{c}_i)\right| \Psi^{m*}(\mathbf{u} - \mathbf{c}_i)}{\left|\Psi^m(\mathbf{u})\right|_{\max}\left(\left|\Psi^m(\mathbf{u} - \mathbf{c}_i)\right|^2 + \tau_2\right)}\,\Delta\Phi^m(\mathbf{u}, \mathbf{c}_i) \tag{11}$$
In the update steps shown in Eqs. 10 and 11, a small value (τ1 and τ2, respectively) was added to prevent division by zero. In the study of Ou et al. (18), the updated pupil function was constrained to lie within the support of the initial guess, which corresponded to the numerical aperture of the lens. A similar strategy was used in this work, with the support being twice as large as the initial guess to accommodate differences between the experimental setup and the forward model (such as the aperture not being a perfect circle).
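A compact end-to-end sketch of the alternating updates on simulated data follows (Python/NumPy rather than the authors' MATLAB). The pupil is assumed known and binary, so the pupil update of Eq. 11 is omitted and the object update of Eq. 10 reduces to pasting the corrected patch back into the spectrum estimate; grid sizes and step are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, step = 64, 16, 6   # image size, pupil diameter, grid step (pixels)
centers = [(dy, dx) for dy in range(-12, 13, step) for dx in range(-12, 13, step)]

# Ground-truth rough object: smooth amplitude, uniformly random phase
yy, xx = np.indices((n, n))
amp = 0.5 + 0.5 * (np.hypot(yy - n / 2, xx - n / 2) < n / 4)
psi = amp * np.exp(2j * np.pi * rng.random((n, n)))
Psi = np.fft.fftshift(np.fft.fft2(psi))

ky, kx = np.indices((n, n)) - n // 2
P = (ky**2 + kx**2 <= (d / 2) ** 2).astype(float)   # known, ideal pupil

def forward(Psi_est, c):
    Phi = np.roll(Psi_est, c, axis=(0, 1)) * P      # Phi(u) = Psi(u - c) P(u)
    return Phi, np.fft.ifft2(np.fft.ifftshift(Phi))

captures = {c: np.abs(forward(Psi, c)[1]) ** 2 for c in centers}

def misfit(Psi_est):
    return sum(np.sum((np.sqrt(I) - np.abs(forward(Psi_est, c)[1])) ** 2)
               for c, I in captures.items())

# Initialize from the average of all captured intensities (speckle-suppressed)
init = np.sqrt(np.mean(list(captures.values()), axis=0))
Psi_est = np.fft.fftshift(np.fft.fft2(init))
err0 = misfit(Psi_est)

for _ in range(20):
    for c, I in captures.items():
        Phi, phi = forward(Psi_est, c)
        phi_hat = np.sqrt(I) * np.exp(1j * np.angle(phi))  # enforce magnitudes
        Phi_hat = np.fft.fftshift(np.fft.fft2(phi_hat))
        # Paste the corrected patch back into the spectrum estimate
        Psi_est = Psi_est + np.roll(P * (Phi_hat - Phi), (-c[0], -c[1]), axis=(0, 1))

err1 = misfit(Psi_est)
print(err0, err1, err1 < err0)
```

On this noiseless simulation the data misfit drops over the sweeps, mirroring the convergence behavior described above; real captures additionally require the pupil update and regularization.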
Initial estimates of Ψ(u) and P(u) must be provided. The initial estimate of the pupil function is defined to be an ideal circular aperture from Eq. 3 with a diameter determined by the aperture. A common initialization of Ψ(u) for weakly scattering objects is to upsample any of the recorded images (often an image near the dc component) and take its Fourier transform (17, 18, 25). Here, we opted to take the Fourier transform of the average of all N captured intensity images. Averaging independent measurements of the field suppresses speckle in the initial estimate (1, 3).
Optically rough objects
Typical applications of FP in microscopy have dealt with objects that have gradual changes in refractive index. This, in turn, leads to transfer functions with relatively smooth phase components. However, the surfaces of most real-world objects are “optically rough” and exhibit random phase.
When coherent light reflects off of an object, each point along the surface acts as a secondary source of spherical illumination. The constituent components of the reflected optical field will be composed of a summation of each of the secondary sources. If the variation in path length exceeds the wavelength of the incident light (λ ~ 550 nm), the secondary sources will be out of phase with one another. Summation of the dephased point sources leads to destructive interference that manifests as speckle (45, 46).
Suppose that the variation of surface height is at least equal to λ and is uniformly distributed. For any point in the optical field, the probability density of measuring an intensity I (squared amplitude) follows a negative exponential distribution (39)

$$p(I) = \frac{1}{\mu}\, e^{-I/\mu} \tag{12}$$

where μ is the mean intensity. It should be noted that Eq. 12 holds for fully developed speckle, whereby polarized light maintains its polarization state after reflection. Most real-world objects exhibit subsurface scattering that destroys the polarization state of the incident light. In such a case, the intensity distribution is given as (47)

$$p(I) = \frac{4I}{\mu^2}\, e^{-2I/\mu} \tag{13}$$
For the purposes of this paper, it is sufficient to say that speckle intensity follows a negative exponential distribution.
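The negative exponential statistics can be verified numerically by summing many unit-amplitude phasors with uniformly random phases, an idealized rough-surface model:

```python
import numpy as np

rng = np.random.default_rng(4)

# Each pixel: sum of many unit phasors with random phases, as produced
# by the dephased secondary sources of a rough surface.
n_sources, n_pix = 500, 50_000
phases = 2 * np.pi * rng.random((n_pix, n_sources))
field = np.exp(1j * phases).sum(axis=1)
I = np.abs(field) ** 2
mu = I.mean()

# For a negative exponential distribution, std == mean and median == mu * ln 2
print(I.std() / mu)
print(np.median(I) / (mu * np.log(2)))
```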
In an imaging geometry, the apparent speckle size is compounded by diffraction blur induced by the aperture of the lens. Hence, the speckle size at the sensor plane is approximately twice that of the smallest resolvable image features (2.44λf/d) (39). FP is used to reduce diffraction blur by synthetically increasing the aperture diameter. In the presence of speckle, FP also reduces speckle size.
The formation of speckle is compatible with the analysis of the formation model given here and used in other previous work and, in fact, is a natural consequence of the object having a randomly distributed phase. Previous FP implementations have generally avoided dealing with speckle by imaging thin biological samples, which naturally have a smooth phase, by using partially coherent light (48), or by a combination of the two. Here, we presented results for what we believe to be the first macroscopic FP imaging system that recovers optically rough objects.
FP reduces speckle size by reducing diffraction blur; however, the intensity variation within the remaining speckle is still large. We introduced an additional step to the recovery algorithm described in Materials and Methods (see “Recovering high-resolution magnitude and phase”) to help suppress speckle in the estimate of Ψ(u). In this section, we describe the speckle suppression method used during reconstruction; we compare it with an alternative speckle suppression technique, averaging independent measurements (1, 3), in Results (see “SAVI for diffuse objects”).
From Eqs. 12 and 13, speckle intensity follows a negative exponential distribution, which is consistent with a multiplicative noise model. It is important to note that speckle is not noise in the conventional sense: the underlying random phase of the object distorts the intensity field recorded by the sensor. Because our goal is to recover a high-resolution intensity image, it is desirable to mitigate any distortion that manifests as speckle; in this sense, we refer to speckle as “noise.” Recovering a high-resolution estimate of the magnitude and phase may be useful for determining material properties and may be accomplished by running a second reconstruction that omits the speckle suppression step.
Much of the research related to image denoising techniques, including the state-of-the-art BM3D algorithm (49), assumes an additive noise model. Because the goal was to eventually recover a high-quality intensity image, the intensity of ψm(x) was denoised during image recovery. To overcome the multiplicative distortions caused by speckle, we first took the natural logarithm of the intensity at iteration m to convert the noise into a more convenient additive model

\(\xi_m(x) = \ln\!\left(|\psi_m(x)|^2\right),\)  (14)

where the new variable ξm(x) is modeled as the true (noise-free) signal ξ′(x) corrupted by additive noise η(x)

\(\xi_m(x) = \xi'(x) + \eta(x).\)  (15)
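The log transform can be checked with a small simulation; the flat test signal and the Exp(1) noise below are illustrative choices, not part of the method:

```python
import numpy as np

rng = np.random.default_rng(1)

# Multiplicative speckle model: I(x) = I_true(x) * n(x), with n ~ Exp(1).
I_true = np.full(10_000, 4.0)                # flat "noise-free" intensity
n = rng.exponential(1.0, size=I_true.shape)  # negative exponential speckle
I = I_true * n

# Taking the logarithm converts the multiplicative corruption into an
# additive one: xi = log I_true + log n.
xi = np.log(I)
eta = xi - np.log(I_true)  # additive noise term, independent of the signal

# E[log n] for n ~ Exp(1) is -gamma (Euler-Mascheroni constant, ~ -0.577).
print(round(eta.mean(), 2))
```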
Extracting an updated estimate, ξm+1(x), that better approximates ξ′(x) can be achieved using any of the standard image denoising techniques. Here, a straightforward image noise suppression method was used: wavelet decomposition followed by soft-thresholding of the wavelet coefficients with threshold ρ (50), which shrinks every coefficient toward zero and sets to zero those whose magnitudes do not exceed ρ. Denoising was accomplished by decomposing ξ using an orthogonal wavelet basis W and denoising operator D(α, ρ)

\(\alpha = W\,\xi_m(x),\)  (16)

\(\xi_{m+1}(x) = W^{-1}\,D(\alpha, \rho),\)  (17)

where \(D(\alpha, \rho) = \operatorname{sign}(\alpha)\max(|\alpha| - \rho,\, 0)\).
We used symlet wavelets of order eight as the basis for the wavelet decomposition, which was carried out to the second level. The value of ρ may be set a priori or may be updated automatically, for example, using Stein’s unbiased risk estimator (51); the latter method was adopted in this work. Finally, the amplitude of ψm+1(x) was updated in a manner similar to Eq. 9

\(\psi_{m+1}(x) = \sqrt{e^{\xi_{m+1}(x)}}\,\frac{\psi_m(x)}{|\psi_m(x)|},\)  (18)

which was then transformed back into the Fourier domain as Ψm+1(u), and the recovery algorithm proceeded as previously described. In the proposed speckle suppression, wavelet coefficients with small magnitudes were removed, which is similar to imposing, in the wavelet domain, the histogram constraint commonly used in phase retrieval (52).
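A minimal sketch of this denoising step is given below. For brevity, it substitutes a single-level orthogonal Haar transform and a fixed threshold ρ for the symlet-8, level-2 decomposition and SURE-based threshold used in the actual reconstruction, so it illustrates the structure of the step (log transform, threshold in a wavelet basis, amplitude replacement with phase preserved) rather than reproducing our implementation:

```python
import numpy as np

def soft(a, rho):
    # Soft-thresholding operator D(alpha, rho) = sign(a) * max(|a| - rho, 0).
    return np.sign(a) * np.maximum(np.abs(a) - rho, 0.0)

def haar2(x):
    # One level of an orthogonal 2D Haar decomposition (even-sized input).
    lo, hi = (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)
    rows = np.concatenate([lo, hi], axis=0)
    lo = (rows[:, 0::2] + rows[:, 1::2]) / np.sqrt(2)
    hi = (rows[:, 0::2] - rows[:, 1::2]) / np.sqrt(2)
    return np.concatenate([lo, hi], axis=1)

def ihaar2(c):
    # Inverse of haar2: undo the column step, then the row step.
    h = c.shape[1] // 2
    lo, hi = c[:, :h], c[:, h:]
    rows = np.empty_like(c)
    rows[:, 0::2] = (lo + hi) / np.sqrt(2)
    rows[:, 1::2] = (lo - hi) / np.sqrt(2)
    h = c.shape[0] // 2
    lo, hi = rows[:h], rows[h:]
    x = np.empty_like(c)
    x[0::2] = (lo + hi) / np.sqrt(2)
    x[1::2] = (lo - hi) / np.sqrt(2)
    return x

def denoise_step(psi, rho=0.5):
    # Move to log-intensity so the speckle becomes additive noise.
    xi = np.log(np.abs(psi) ** 2 + 1e-12)
    # Decompose, soft-threshold, and reconstruct.
    xi_dn = ihaar2(soft(haar2(xi), rho))
    # Replace the amplitude; keep the current phase estimate unchanged.
    return np.sqrt(np.exp(xi_dn)) * np.exp(1j * np.angle(psi))
```

With rho=0 the step is a round trip that leaves the estimate unchanged, which makes the transform pair easy to verify.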
Denoising was applied every s iterations of the recovery algorithm. Spacing out denoising allows adequate time for information transfer and reinforcement between the spatial and Fourier domains during phase retrieval. The recovery algorithm effectively has an outer loop to perform the denoising and an inner loop for phase retrieval. A visual representation of the complete recovery algorithm is shown on the left side of Fig. 6.
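The outer/inner loop structure described above can be sketched as follows; the two update functions are placeholders (hypothetical names) standing in for the Fourier-domain phase-retrieval update and the denoising step:

```python
# Skeleton of the recovery loop: an inner loop of s phase-retrieval
# iterations, followed by one denoising pass in the outer loop.
calls = {"retrieval": 0, "denoise": 0}

def phase_retrieval_update(psi):
    # Placeholder for one spatial/Fourier-domain constraint update.
    calls["retrieval"] += 1
    return psi

def denoise_step(psi):
    # Placeholder for the log-domain wavelet denoising step.
    calls["denoise"] += 1
    return psi

def recover(psi0, n_outer=10, s=5):
    psi = psi0
    for _ in range(n_outer):        # outer loop: denoising regularization
        for _ in range(s):          # inner loop: phase retrieval
            psi = phase_retrieval_update(psi)
        psi = denoise_step(psi)     # applied once every s iterations
    return psi

recover(None, n_outer=3, s=4)
print(calls)
```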
The right side of Fig. 6 demonstrates the benefit of incorporating the denoising step into the recovery algorithm in the presence of subjective speckle. A simulated high-resolution complex object was imaged by a diffraction-limited system. The amplitude is shown in Fig. 6A, and the phase is randomly distributed in the range [−π, π]. Direct observation of the object resulted in an image degraded by diffraction blur and speckle; one such example is shown in Fig. 6B. Using FP without the denoising regularization resulted in artifacts in the reconstruction, as shown in Fig. 6C. Incorporating the denoising step during reconstruction, as in Fig. 6D, reduced some of the artifacts present in Fig. 6C and improved overall contrast by reducing the intensity variation in the speckle regions. The brightness of the outsets in Fig. 6 (B to D) was increased to highlight the presence of reconstruction artifacts.
To compare the performance between Fig. 6C and Fig. 6D, the intensity variance of the white owl region and of the gray text and camera region was computed. Without denoising, the variance of the white and gray regions was 44.6 and 3.2, respectively (on a 0-to-255 intensity scale). Using the proposed denoising method reduced the variance by more than half, to 20.4 and 1.5 in the white and gray regions, respectively.
Supplementary material for this article is available at http://advances.sciencemag.org/cgi/content/full/3/4/e1602564/DC1
fig. S1. Cost and weight increase rapidly when improving lens resolution.
fig. S2. Comparison with incoherent superresolution methods for diffuse surfaces.
fig. S3. A simplified depiction of the imaging geometry.
fig. S4. Discretization in the USAF resolution chart.
fig. S5. Convergence of proposed phase retrieval algorithm.
fig. S6. Example of recovered phase map.
This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial license, which permits use, distribution, and reproduction in any medium, so long as the resultant use is not for commercial advantage and provided the original work is properly cited.
- Copyright © 2017, The Authors