Recursive sequence generation in monkeys, children, U.S. adults, and native Amazonians


Science Advances  26 Jun 2020:
Vol. 6, no. 26, eaaz1002
DOI: 10.1126/sciadv.aaz1002

eLetters is an online forum for ongoing peer review. Submission of eLetters is open to all. Please read our guidelines before submitting your own eLetter.


  • Simple models of sequential processing cannot explain center-embedded generalizations
    • Stephen Ferrigno, Postdoctoral Fellow, Harvard University
    • Other Contributors:
      • Samuel J. Cheyette, Graduate Student, University of California, Berkeley
      • Abhishek Dedhe, Graduate Student, Carnegie Mellon University
      • Steven T. Piantadosi, Professor, University of California, Berkeley
      • Jessica F. Cantlon, Professor, Carnegie Mellon University

    Lakretz raises several interesting perspectives on data we recently reported in Ferrigno et al. (2020).

    First, he argues that "non-recursive mechanisms" may explain the data we observed. We certainly agree that computational models can handle stimuli like ours when they have the necessary architectural constraints built in (such as the specific LSTM model Lakretz suggests). In our own modeling, we have found that simple recurrent neural networks learn ordinal knowledge but not the required hierarchical generalizations. These results emphasize that success on this task is interesting and informative because it rules out a variety of plausible learning architectures. We also agree that success on the task does not mean that a learning model has acquired a grammar that can generate arbitrarily deep embedding. Lakretz suggests that it is crucial to test recursion depths greater than those used in training, but we note that no matter how much training is provided, a critic could always construct an ad hoc architecture to process it and claim that still deeper nesting is required for "true" recursion. In this context, it is important to remember that, at least in language, humans themselves are incapable of more than two levels of center-embedding (Gibson & Thomas, 1999). Such limitations are not unique to humans but are present in all physical computational systems.
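
    The contrast at issue can be made concrete. A grammar with a single recursive rule, S → open S close | empty, generates center-embedded sequences of any requested depth, even though any physical system running it is depth-limited in practice. The sketch below is illustrative only (the bracket inventory is assumed, not taken from the paper):

```python
import random

# Illustrative bracket pairs (assumed for this sketch).
PAIRS = [("(", ")"), ("[", "]"), ("{", "}")]

def embed(depth):
    """Unroll the recursive rule S -> open S close | empty to a given depth,
    returning a fully nested (center-embedded) sequence of symbols."""
    if depth == 0:
        return []
    opener, closer = random.choice(PAIRS)
    return [opener] + embed(depth - 1) + [closer]
```

    Nothing in the rule itself caps the depth; the cap comes from the system executing it, which is the point about human performance above.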

    Due essentially to these limits on recursion depth, formal notions of recursion are notoriously diff...

    Competing Interests: None declared.
  • Recursive Processing of Nested Structures in Monkeys? Two Alternative Accounts
    • Yair Lakretz, Post-doctoral researcher, Cognitive Neuroimaging Unit, NeuroSpin Center, 91191 Gif-sur-Yvette, France
    • Other Contributors:
      • Stanislas Dehaene, Principal Researcher and Director, Cognitive Neuroimaging Unit, NeuroSpin Center, 91191 Gif-sur-Yvette, France; Collège de France, Paris.

    Ferrigno et al. (2020) introduced an ingenious task to investigate recursion in human and non-human primates. Participants view a touch screen with four bracket signs, such as } { ] and [, presented at random positions. During training, they are rewarded for touching these items in a center-embedded sequence, either { ( ) } or { [ ] }. At test time, generalization to new sequences is examined: given a new display of ( ) [ ], do the participants spontaneously produce a center-embedded structure such as ( [ ] ) or [ ( ) ]? American adults, Tsimane adults, and 3- to 5-year-old children did. Macaque monkeys required additional training, but two out of three eventually showed good generalization and scored above many Tsimane and child participants. Moreover, when tested on sequences composed of new bracket signs such as < and >, the monkeys still showed good performance. The authors thus concluded that recursive nesting is not unique to humans.
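
    The response classification in the task can be sketched as a small checker. A fully nested response such as ( [ ] ) matches brackets in last-in-first-out order and reaches maximal nesting depth, whereas a crossed response such as ( [ ) ] or a tail response such as ( ) [ ] does not. This is a minimal sketch, not the authors' scoring code:

```python
# Openers mapped to their expected closers (illustrative inventory,
# including the transfer brackets < and > mentioned above).
PAIRS = {"{": "}", "[": "]", "(": ")", "<": ">"}

def is_center_embedded(seq):
    """True only if every bracket pair is nested inside the previous one:
    LIFO matching succeeds AND nesting depth reaches len(seq) // 2."""
    stack, max_depth = [], 0
    for sym in seq:
        if sym in PAIRS:                      # opener: push expected closer
            stack.append(PAIRS[sym])
            max_depth = max(max_depth, len(stack))
        elif not stack or sym != stack.pop():
            return False                      # crossed or unmatched closer
    return not stack and max_depth == len(seq) // 2
```

    The depth condition is what separates the center-embedded response from the merely well-bracketed tail ordering ( ) [ ].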

    Here, we dispute the claim by showing that at least two alternative interpretations remain tenable. We examine this conclusion in light of recent findings from modern artificial recurrent neural networks (RNNs), regarding how these networks encode sequences. We show that although RNNs, like monkeys, succeed on demanding generalization tasks as in Ferrigno et al., the underlying neural mechanisms are not recursive. Moreover, we show that when the networks are tested on sequences with deeper center-embedded structures compared to tra...
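
    One way to make a non-recursive alternative concrete: a learner could assign each bracket a fixed ordinal rank and simply emit the displayed symbols in rank order, with no stack and no recursive rule. If the rank order is itself a nested sequence, this flat strategy reproduces center-embedded responses on depth-2 displays. The ranks below are hypothetical, chosen only to illustrate the idea:

```python
# Hypothetical learned ranks; the rank order { [ ( ) ] } is itself nested,
# so sorting by rank yields center-embedded output without any recursion.
RANK = {"{": 0, "[": 1, "(": 2, ")": 3, "]": 4, "}": 5}

def ordinal_response(display):
    """Emit the displayed brackets in fixed rank order (a flat, ordinal
    strategy with no stack): e.g. ( ) [ ] -> [ ( ) ]."""
    return sorted(display, key=lambda sym: RANK[sym])
```

    Success on the generalization test therefore cannot, by itself, distinguish such an ordinal strategy from a genuinely recursive one.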

    Competing Interests: None declared.
