Research Article | APPLIED SCIENCES AND ENGINEERING

Utilizing sensory prediction errors for movement intention decoding: A new methodology


Science Advances  09 May 2018:
Vol. 4, no. 5, eaaq0183
DOI: 10.1126/sciadv.aaq0183

Figures

  • Fig. 1 Proposal and experiment paradigm.

    (A) We propose to use a sensory stimulator in parallel with EEG and to decode whether the stimulation matches the sensory feedback corresponding to the user’s motor intention. The presented experiment simulated a wheelchair-turning scenario and used a galvanic vestibular stimulator (GVS). (B) GVS electrodes were affixed to the participants during the EEG recording. The subliminal GVS induced a sensation of turning either right or left. (C) Experiment timeline: In each trial, participants seated on a rotating chair were instructed, via stereo speakers and a “high”-frequency beep, to imagine turning either left or right. A subliminal GVS was applied 2 s after the end of each cue, randomly corresponding to turning either right or left. This was followed by a 3-s rest period cued by a “low”-frequency beep (stop cue).

  • Fig. 2 Decoding performance summary.

    The across-participant median decoding performance is shown in red and pink when decoding the direction in which a participant wants to turn (that is, the cue direction), and in black when decoding whether the applied GVS MATCHED or MISMATCHED the participant’s intention. The data at each time point represent the decoding performance using data from the time period between a reference point (“cue” for the red data and “GVS start” for the pink and black data) and that time point; a minimal sketch of this expanding-window evaluation is given after the figure list. Box plot boundaries represent the 25th and 75th percentiles, whereas the whiskers represent the data range across participants. The inset histograms show the participant ensemble decoding performance in the 140 (20 test trials × 7 participants) test trials, with each participant’s data shown in a different color.

  • Fig. 3 Summary of decoding performance from various analyses in the 96-ms time period.

    The MATCHED/MISMATCHED decoding with GVS is shown in black. When the GVS was applied in the front-back configuration (green data), the decoding performance decreased. Decoding for cue direction, using either all the data since the cue (red data) or the data since the GVS start (pink), did not show performance above chance. Decoding performance was similarly low using frequency features between 0.01 and 30 Hz, typical of ERD-based decoders (orange), indicating that our task (imagining turning right or left) did not induce significant ERD differences in participants. Finally, decoding whether the participant was thinking of turning or not (cyan) was also significant after GVS.

  • Fig. 4 Spatiotemporal features selected in the 96-ms time period.

    (A) The weights selected by the decoder, averaged across channels and time, are plotted in shades of gray. Nine channels (indicated by the blue disks) were selected in six or more participants; a sketch of this consensus count is given after the figure list. (B) The across-participant median (data point) and time range (whiskers) over which these nine channels were selected.

  • Fig. 5 GVS perception questionnaire.

    Reports of vestibular sensation, muscle twitch, and tactile stimulation in the experiment conducted to validate that the GVS was subliminal. An experimenter applied GVS at 1 Hz, with amplitudes increasing from 0.2 to 3 mA (in both the rightward and leftward directions). Participants scored any sensation they experienced between 0 and 6 (0 representing no sensation reported). The box plots show the across-participant median, 75th percentile, and range (whiskers) of the participant reports. Participant reports were zero for all sensations at the GVS amplitude (0.8 mA) used in our experiment.
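
A minimal sketch of the expanding-window evaluation described for Fig. 2 is given below, assuming epoched EEG arrays, a 512-Hz sampling rate, and a linear support vector classifier; these names, parameters, and the classifier choice are illustrative assumptions, not the authors’ implementation.

    import numpy as np
    from sklearn.svm import LinearSVC

    def expanding_window_accuracy(X, y, fs=512, window_ends_ms=(32, 64, 96, 128)):
        """Decode labels y from epochs X (n_trials, n_channels, n_samples)
        aligned to a reference point (cue or GVS onset), using all samples
        from the reference point up to each candidate end time."""
        n_trials = X.shape[0]
        split = n_trials - 20                      # hold out 20 test trials, as in Fig. 2
        accuracies = {}
        for end_ms in window_ends_ms:
            n_samp = int(end_ms * fs / 1000)       # samples in the expanding window
            feats = X[:, :, :n_samp].reshape(n_trials, -1)   # flatten channels x time
            clf = LinearSVC().fit(feats[:split], y[:split])
            accuracies[end_ms] = 100.0 * clf.score(feats[split:], y[split:])
        return accuracies

Repeating this per participant and taking the across-participant median and 25th/75th percentiles of the returned accuracies would give the kind of box-plot summary shown in Fig. 2.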

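The consensus count behind Fig. 4A (channels selected in six or more of the seven participants) could be computed along the lines sketched below; the weight-array layout and the “any non-zero weight counts as selected” criterion are assumptions made for illustration.

    import numpy as np

    def consensus_channels(weights_per_participant, min_participants=6):
        """weights_per_participant: one (n_channels, n_samples) decoder weight
        array per participant; returns indices of channels that carry weight
        in at least `min_participants` participants."""
        counts = None
        for W in weights_per_participant:
            selected = np.abs(W).mean(axis=1) > 0      # channel used at any time point
            counts = selected.astype(int) if counts is None else counts + selected
        return np.flatnonzero(counts >= min_participants)
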
Tables

  • Table 1 MATCH/MISMATCH decoding accuracy (%) in the 96-ms time period.
    Participant    Median    Mean     SD
    1              89.58     89.27    5.86
    2              72.50     72.00    5.71
    3              79.17     77.11    5.74
    4              87.50     87.19    4.18
    5              97.92     97.71    2.02
    6              77.08     78.33    6.02
    7              88.54     87.50    5.97
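
Each row of Table 1 is a summary of decoding accuracies; a minimal sketch of how such a row could be computed from repeated cross-validation fold accuracies is shown below. The example fold accuracies are made up for illustration and are not the study’s data.

    import numpy as np

    def summarize_accuracy(fold_accuracies_percent):
        """Median, mean, and sample SD of decoding accuracies (%) obtained
        over repeated cross-validation folds for one participant."""
        a = np.asarray(fold_accuracies_percent, dtype=float)
        return {"median": np.median(a), "mean": a.mean(), "sd": a.std(ddof=1)}

    # Illustrative call with made-up fold accuracies:
    # summarize_accuracy([87.5, 90.0, 89.6, 85.4, 91.7])  # -> median 89.6, mean 88.84, sd ~2.44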