Research Article | Engineering

Autofocals: Evaluating gaze-contingent eyeglasses for presbyopes


Science Advances  28 Jun 2019:
Vol. 5, no. 6, eaav6187
DOI: 10.1126/sciadv.aav6187
  • Fig. 1 Typical presbyopic vision with various methods of correction.

    Without any correction, near distances are blurry. Progressives and monovision allow focus at both near and far distances, either by splitting up the field of view or by using a different eye for each distance, as illustrated. Autofocals use information from each eye’s gaze to dynamically update the focus to near or far. (Foreground image: Nitish Padmanaban, Stanford; background image: https://pxhere.com/en/photo/1383278).
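The gaze-contingent refocusing described above can be illustrated with a minimal sketch: when both eyes rotate inward to fixate a point, the vergence angle and the interpupillary distance geometrically determine the fixation distance, which maps to the focal power a tunable lens should adopt. The function names, the symmetric-fixation simplification, and the diopter range below are illustrative assumptions, not the paper's implementation (the actual prototype additionally fuses eye tracking with a depth camera).

```python
import math

def vergence_distance(ipd_m: float, theta_left: float, theta_right: float) -> float:
    """Estimate fixation distance (m) from each eye's inward rotation
    angle (radians), assuming a fixation point centered between the eyes."""
    denom = math.tan(theta_left) + math.tan(theta_right)
    if denom <= 0:
        return float("inf")  # eyes parallel or diverging: fixating far away
    return ipd_m / denom

def lens_power_diopters(distance_m: float, min_p: float = 0.0, max_p: float = 3.0) -> float:
    """Convert fixation distance to a focal power (diopters) for a
    focus-tunable lens, clamped to an assumed tunable range."""
    power = 0.0 if math.isinf(distance_m) else 1.0 / distance_m
    return max(min_p, min(max_p, power))

# Example: 64 mm IPD, each eye rotated inward ~2.6 degrees
# yields a fixation distance of roughly 0.7 m.
d = vergence_distance(0.064, math.radians(2.6), math.radians(2.6))
p = lens_power_diopters(d)
```

In practice, raw vergence estimates from eye trackers are noisy, which is why the supplementary algorithms combine them with depth-sensor measurements before driving the lenses.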

  • Fig. 2 Acuity measurements for presbyopes wearing their own correction compared to wearing autofocals.

    (Left) Average acuities for users who typically wear progressive lenses, either using their own correction or while wearing autofocals. (Right) Average acuities for monovision wearers using their own correction or wearing autofocals. Autofocals are, on average, better than the users’ own corrections at nearly all tested distances and are comparable to progressives at the farthest distance. Asterisks indicate significance at the *P = 0.05 level. Error bars represent SE.

  • Fig. 3 Contrast sensitivity and task performance for presbyopes wearing their own correction compared to wearing autofocals.

    (Left) Average log contrast sensitivities (logCS) grouped by users’ usual correction (progressives or monovision) and whether they were wearing their own correction or autofocals. All corrections perform similarly. (Middle) The average speed and (right) accuracy for the refocusing task, grouped by usual correction and whether they used autofocals. Baseline for accuracy is set to 50%, corresponding to random guessing. Autofocals are faster on average than users’ own corrections while not sacrificing accuracy, and are significantly more accurate than progressives (*P < 0.05). Error bars represent SE.

  • Fig. 4 Rankings from the three preference questions.

    Each black dot represents a user ranking. (Left) The users’ own corrections are more physically comfortable, even with only a short period of wear. Some prefer autofocals, citing lack of a need to crane their necks back. (Middle) On the other hand, the ease of refocus question shows a clear preference for autofocals, especially over the depth-tracked mode (i.e., eye tracking disabled). (Right) Autofocals are also preferred for convenience, although only slightly over their current correction. Again, the depth-tracked mode fares poorly. Significance is indicated at the *P = 0.05, **P = 0.01, and ***P = 0.001 levels.

  • Fig. 5 Front and side views of our autofocal prototype.

    The RealSense R200 depth camera, the Optotune EL-30-45 focus-tunable lenses, the offset lens holders for prescription correction, and the Pupil Labs eye trackers are shown. (Photo credit: Nitish Padmanaban, Stanford).

Supplementary Materials

  • Supplementary material for this article is available at http://advances.sciencemag.org/cgi/content/full/5/6/eaav6187/DC1

    Supplementary Materials and Methods

    Supplementary Text

    Preference Questionnaire

    Fig. S1. A partially exploded view of the headset computer-aided design model.

    Fig. S2. An image of the previous prototype, which had a glasses form factor.

    Fig. S3. Optical characteristics of the Optotune EL-30-45 focus-tunable lenses captured using a camera.

    Fig. S4. A wavefront map of the coma correctors, designed in Zemax.

    Fig. S5. Focus accuracy at different positions before and after addition of the coma corrector.

    Fig. S6. Measured optical lens power as a function of target lens power.

    Fig. S7. Evaluations of the accuracy of the two main external sensors.

    Fig. S8. A visual representation of sources of error in the estimated vergence.

    Fig. S9. Error in the estimated vergence distance from various sources.

    Fig. S10. An example recording of the sensor fusion algorithm.

    Algorithm S1. Sensor fusion: Vergence + error.

    Algorithm S2. Depth denoiser.

    Data S1. A zip file containing comma-separated values (CSV) files with the raw data for participants for visual acuity, contrast sensitivity, and task performance.

    Data S2. A CSV file containing the raw data for participants for the natural use questionnaire.

    References (45–48)
