Research Article | PSYCHOLOGY

The origin of pointing: Evidence for the touch hypothesis

Science Advances  10 Jul 2019:
Vol. 5, no. 7, eaav2558
DOI: 10.1126/sciadv.aav2558

Abstract

Pointing gestures play a foundational role in human language, but up to now, we have not known where these gestures come from. Here, we investigated the hypothesis that pointing originates in touch. We found, first, that when pointing at a target, children and adults oriented their fingers not as though trying to create an “arrow” that picks out the target but instead as though they were aiming to touch it; second, that when pointing at a target at an angle, participants rotated their wrists to match that angle as they would if they were trying to touch the target; and last, that young children interpret pointing gestures as if they were attempts to touch things, not as arrows. These results provide the first substantial evidence that pointing originates in touch.

INTRODUCTION

In every human culture that has been studied, typically developing infants begin to point at 9 to 14 months (1). The onset of this ability lays the foundation for language acquisition: It is arguably the first purely informative gesture produced by children (2); children who are delayed in pointing are delayed in subsequent language acquisition (3); and other animals, which lack language, systematically fail to understand informative pointing (4). Determining the origin of pointing is therefore essential to our understanding of human language and uniqueness, and yet, up to now, we have known next to nothing about where it comes from (5).

Some have speculated that pointing might begin in reaching (6). Perhaps a child begins by reaching for objects, and a parent hands her the object she reaches toward. The child learns that she can use that reach to have things handed to her, and the action is “ritualized” into a gesture (4). However, a clear distinction can be made between gestures that likely result from reaching and prototypical pointing gestures. The former are “imperative” (as opposed to “informative”) since children use them to have things handed to them, rather than simply to direct attention; they are produced with an open hand rather than a single index finger; and they are accompanied by significantly fewer vocalizations and less joint attention than prototypical pointing gestures (7, 8). Since prototypical pointing gestures are so different, they are unlikely to originate in reaching. Others have proposed that children may learn pointing by imitating their parents (8). But if pointing were acquired by imitation, it should vary across cultures: Learning by imitation is widely understood to be one of the main sources of cultural variation due to the errors that are inevitable in this kind of learning (9). Instead, we find that pointing exhibits remarkably little variation. Its morphology is seemingly universal, with infants in all cultures pointing in the same way, and its age of onset is also invariant across cultures (1). If pointing were acquired by imitation, then we should also expect that it would become more frequent with training, and yet, its acquisition is not affected by training (10).

Here, we explore an alternative hypothesis: that pointing originates in touch. There are already good reasons to see pointing and touch as connected. Children use a prototypical pointing hand shape to explore objects tactually from as early as 6 months (11), and as the frequency of pointing gestures increases from around 9 months of age, the frequency of this kind of exploratory touch decreases (12), suggesting that pointing is somehow “taking over” from touch. To investigate this issue more thoroughly, we tested three predictions about the production and interpretation of pointing gestures that we thought should be confirmed if pointing does indeed originate in touch. The first is that, when pointing at a target, people should orient their fingers as though aiming to touch it rather than as though creating an “arrow” that picks it out (study 1); the second is that, when pointing at a target at an odd angle, people should rotate their wrists as they would if they were trying to touch the target (study 2); and the last is that we should interpret pointing gestures more as if they were attempts to touch things than as arrows (study 3).

STUDY 1: REFERENCE FIXING IN POINTING GESTURES

Our first prediction concerned the way that pointing gestures pick out their referents. It is sometimes supposed that pointing gestures work like arrows, much as the direction indicated by a road sign is determined by the orientation of the sign. On this view, a pointing gesture refers to an object found on a vector that extends along the angle of the finger (13, 14). We will call this view the “arrow hypothesis,” and the vector that extends along the finger the “arrow line” (Fig. 1). If pointing gestures originate in touch, however, then this arrow line should not be a good predictor of reference. When someone reaches out to touch something, the angle of her finger is largely irrelevant—it could be horizontal or even vertical—what matters is that the fingertip can make contact with the object she wishes to touch. If pointing originates in touch, then a better predictor of reference ought to be what we will call the “touch line”—the vector that runs between a person’s eye and fingertip while pointing (Fig. 1). This vector, after all, will pick out the object that the person’s fingertip appears closest to touching from her point of view.
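To make the two candidate vectors concrete, the short sketch below (not part of the original study; the coordinates, names, and posture are purely illustrative) computes how far each line falls from a target for a hypothetical pointing posture in which the fingertip is visually interposed on the target but the finger itself is tilted upward.

```python
import numpy as np

def angular_distance(origin, through, target):
    """Angle in degrees between the ray origin->through and the ray origin->target."""
    v1 = np.asarray(through, dtype=float) - np.asarray(origin, dtype=float)
    v2 = np.asarray(target, dtype=float) - np.asarray(origin, dtype=float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Illustrative side-view coordinates in metres: a seated pointer and a cup 1.5 m away.
eye       = (0.00, 1.20)   # eye position
knuckle   = (0.42, 1.12)   # base of the extended index finger
fingertip = (0.52, 1.18)   # tip of the index finger
target    = (1.50, 1.20)   # cup on the shelf, roughly at eye height

# Arrow line: extends along the finger, so it is measured from the knuckle through the fingertip.
arrow_error = angular_distance(knuckle, fingertip, target)
# Touch line: runs from the eye through the fingertip.
touch_error = angular_distance(eye, fingertip, target)

print(f"arrow line misses the target by {arrow_error:.1f} degrees")
print(f"touch line misses the target by {touch_error:.1f} degrees")
```

With these made-up coordinates, the fingertip sits almost on the eye-target line, so the touch line misses the target by only about 2°, while the arrow line, which follows the tilt of the finger, misses it by roughly 27°; study 1 asks which of these two errors is smaller in real gestures.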

Fig. 1 Reference-fixing experiment, setup.

Superimposed on the left image, we see two lines, one in red and one in green. The arrow line (red) extends along the angle of the finger; the touch line (green) extends from the participant’s eye through the fingertip. Our prediction was that the green line would be a better predictor of what someone is pointing at than the red line. The image on the left illustrates the apparatus and setup for the 3-year-olds to adults; the image on the right illustrates the setup for the 18-month-olds.

To explore which of these two vectors is a better predictor of a pointing gesture’s target, we engaged participants in four age groups, from infants to adults, in a game that elicited pointing gestures. We then measured the touch line and arrow line of these gestures (Fig. 1).

Materials and methods

Participants in this study were aged 18 months (n = 13), 3 years (n = 17), and 6 years (n = 12), along with adults (n = 13) (for details, see the Supplementary Materials). For the adult, 6-year-old, and 3-year-old participants, the following procedure was used. Six cups were fixed on shelves on an apparatus standing on a table 1.5 m from the seated participant (the rows were 20 cm apart vertically, with the lower row matched to the height of each participant’s eyes, and the cups were 20 cm apart; Fig. 1, left).

The sessions began with the experimenter explaining to the participant that they would play a game in which wooden marbles were hidden under cups and the participant had to point out the location of the marble. Before the start of data collection, three practice trials were administered as follows: The experimenter placed the marble under one of the cups as the participant watched and then asked the participant to point to its location. After the participant pointed and withdrew their hand, the experimenter asked them to point to it again. Following the warm-up, 12 test trials were administered, in which the marble was hidden twice under each of the six locations.

For the 18-month-olds, pointing gestures were elicited by introducing children to puppet characters that appeared from behind a screen. The experimenter sat facing the child (who was seated on his or her parent’s lap), with a screen behind the experimenter (Fig. 1, right). A puppet (operated by a second experimenter) appeared above the screen behind the experimenter. The puppet said hello, and the experimenter turned around to see the puppet and introduce it to the child. After the introduction, the puppet disappeared suddenly. The experimenter turned back to the child and said, “Oh! Where did the puppet go? Can you show me where he went?” The puppet then reappeared in one of four positions—over the top of the screen, on the left or the right, or through windows cut into the screen, to the left or right of the experimenter—and said, “Hello!” The experimenter pretended not to see the puppet and continued to ask the child where the puppet was until the child pointed at the puppet; then, the experimenter turned around to “find” the puppet and effusively thanked the child for helping. The puppet appeared eight times in a session.

Analysis and results

We took screenshots of the frames in the video recordings at which the pointing gestures were most fully produced [for details on the criteria that we used to decide this, see the Supplementary Materials (15)]. We then superimposed two vectors on these images using Adobe Photoshop—the arrow line and the touch line (Fig. 1). Then, we measured the angular distance of these vectors from the target using Adobe Photoshop’s “angle” tool.

To test whether the touch line or the arrow line was a better predictor of reference, and whether this differed between ages, we analyzed whether, in a given trial, the touch or arrow line was closer to the target (we excluded trials in which both were equivalent). First, we checked whether the age of the participants affected the difference between the two vectors, but we found that it did not (range, 0.83 to 0.86; likelihood ratio test comparing models with and without age, χ2 = 0.602, df = 3, P = 0.896). Since age had no effect, we pooled all age groups into a single model. We found that the probability of the touch line being closer to the target than the arrow line was significantly different from chance (0.5) across all ages (test of the adjusted intercept: estimate ± SE = 1.625 ± 0.291, z = 5.592, P < 0.001) (Fig. 2). We also measured the difference between the two angles and found an effect of age on this measure (χ2 = 8.488, df = 3, P = 0.037): The extent to which the touch line is a better predictor of reference is greatest for the 18-month-olds and smallest for the 6-year-olds, closely followed by adults. The sample comprised a total of 991 observations from 57 participants (for more details, see the Supplementary Materials).
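We do not reproduce the original statistical models here; purely as a simplified, hypothetical illustration of the two steps just described (a likelihood-ratio comparison of models with and without age, and a test of whether the touch line is closer to the target more often than chance), the sketch below uses ordinary logistic regression and a binomial test on simulated trial-level data, ignoring any random-effects structure the published analysis may have included. All column names and values are invented.

```python
import numpy as np
import pandas as pd
import scipy.stats as stats
import statsmodels.formula.api as smf

# Hypothetical trial-level data: one row per trial, with ties already excluded.
# touch_closer = 1 if the touch line was closer to the target than the arrow line.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age_group": rng.choice(["18m", "3y", "6y", "adult"], size=400),
    "touch_closer": rng.binomial(1, 0.83, size=400),
})

# Likelihood-ratio test: does age group affect the probability that the touch line wins?
full = smf.logit("touch_closer ~ C(age_group)", data=df).fit(disp=False)
null = smf.logit("touch_closer ~ 1", data=df).fit(disp=False)
lr = 2 * (full.llf - null.llf)
p_age = stats.chi2.sf(lr, df=full.df_model - null.df_model)
print(f"age effect: chi2 = {lr:.3f}, p = {p_age:.3f}")

# Pooling across ages: is the touch line closer more often than chance (0.5)?
k, n = int(df["touch_closer"].sum()), len(df)
print(stats.binomtest(k, n, p=0.5, alternative="two-sided"))
```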

Fig. 2 Reference-fixing experiment, results.

The location of the dots on the vertical axis shows the distance of either the touch line or the arrow line in degrees from the target (target at 0°). Dots depict individual means, and lines connect the arrow and touch measures of each individual. Box around 18-month-olds indicates that a modified procedure was used.

Discussion

Our first study shows that, from infancy to adulthood, pointing gestures are not produced as arrows. Rather, they are produced in such a way that it looks to the participant as though the tip of her finger is making contact with the object she is pointing at—as though she is touching that object. These results provide clear support for the touch hypothesis. The degree to which the touch line is more reliable is also greatest for the youngest age groups. As children get older, they are more inclined to fully stretch out their arms as they point, so that the difference between the two vectors is smaller in 6-year-olds and adults. In their earliest appearance, however, pointing gestures are produced more in line with the touch than the arrow hypothesis. (It may be noted that, although the touch line is more reliable than the arrow line in all age groups, the touch line is most accurate in the 6-year-olds rather than in the adults, as one might have expected; however, we suspect that this is simply because the 6-year-olds produced their gestures more carefully, while the adults were more casual in their engagement with the task.)

STUDY 2: WRIST ROTATION IN POINTING GESTURES

Our second prediction concerned the rotation of our wrists while pointing. When someone reaches out to touch something, the rotation of her wrist changes depending on the surface orientation of the object she is attempting to touch. If a person aims to touch the right side of a box that faces her, she rotates her wrist to the right so that the pad of her index finger is oriented toward that side; if she aims to touch the left side of that box, she rotates her wrist to the left, so the pad of her finger is now oriented to that side. We predicted that if pointing originates in touch, then we should also find this pattern of wrist rotations in pointing.

This hypothesis was tested by presenting participants (18 months, 3 years, 6 years, and adults) with targets to point at in two conditions. In a “2D” condition, the targets (magnets) were fixed to the left and right sides of a flat surface facing the participant (Fig. 3, left). In a “3D” condition, the targets were fixed to the left and right sides of a box (Fig. 3, center and right). The targets were magnets of various sorts—animals, stars, colored disks, etc. We predicted that participants would rotate their wrists to the left and right when pointing at the targets on the left and right sides of the apparatus in the 3D condition, but not in the 2D condition.

Fig. 3 Rotation experiment, setup.

On the left, the 2D condition, in which participants typically point with their hand flat; in the center, the participant rotates her wrist to the right to point at the right side of the box in the 3D condition; on the right, the participant rotates the wrist of her right hand to the left to point at the left side of the box.

Materials and methods

The study used a within-subjects design, and the participants were aged 18 months (n = 12), 3 years (n = 16), and 6 years (n = 12), along with adults (n = 12) (most of the subjects appeared in all three studies; see the Supplementary Materials for details). Participants were seated opposite an apparatus and encouraged to point at magnets affixed in various positions to that apparatus (Fig. 3). In the “3D condition,” the apparatus featured a box covered in green felt (height, 30 cm; length, 20 cm; width, 20 cm) set against a blue sheet of plastic (height, 60 cm; width, 1 m). When magnets were affixed to the left side of the box, they were oriented toward the left, and when they were affixed to the right, they were oriented toward the right. In the “2D condition,” the box was removed, and the study was carried out using a sheet of green paper glued to the same blue sheet of plastic (height, 30 cm; width, 20 cm). In this condition, the magnets were oriented forward whether they were fixed on the left or the right side of the piece of paper. The magnets were an assortment of colored objects, including stars, animals, and colored disks.

For the 3-year-olds, 6-year-olds, and adults, the session began with the experimenter “introducing” one of the magnets to the participant in the form of telling a story—this was particularly important for the 3-year-olds to keep them interested. The experimenter began with a magnet of a farmer and introduced the green box as a “hill,” onto which “Farmer Klaus” goes for a walk. Farmer Klaus began in one position (e.g., the left side of the box) and then moved to another position (e.g., the right side of the box). Each time the magnet was moved to a new position, the experimenter asked the participant, “Where is [the character]? Can you show me?” Once the child pointed out the character, the experimenter waited until the child lowered her hand and then asked her to show the experimenter again. Two gestures were elicited per trial.

For the 18-month-olds, the same apparatus was used with a modified procedure. The infant was seated at a table on her parent’s lap. The apparatus was on the other side of the table, while an experimenter sat on one side. The infant was shown an “elephant” (a papier-mâché toy with a hollow trunk), which could be “fed” with yellow magnetic disks. When the elephant was fed, the experimenter pressed a hidden button, and decorative lights wrapped around the elephant lit up.

Having shown the infant the yellow disks, the experimenter placed one before her on the table, in full view of the infant. The experimenter then turned away from the table, saying that she had to tie her shoelace. While the first experimenter was turned away, a second experimenter entered and moved the magnet to a position on the apparatus as the infant watched. The first experimenter turned back to the table and, seeing that the magnet had disappeared, feigned surprise and asked the infant, “Oh! Where has the magnet gone? Do you know?” This was enough to elicit pointing gestures from the infants to the magnets in the different positions. When the child pointed, the experimenter “found” the magnet and effusively thanked the infant.

Analysis and results

We wanted to see whether participants would rotate their hands to the left and right in the 3D condition, where the orientation of the targets changed from left to right, but not in the 2D condition, where the orientation of the targets was the same whether the target was on the left or the right. We first decided on the moment in a video at which the pointing gesture was most fully produced—using the same criteria that we used in the first study. Screenshots were taken of these frames. We then coded the participant’s hand as falling into five positions of rotation: (1) palm facing down, (2) palm facing up, (3) palm facing left, (4) palm facing right, or (0) undecidable when the hand seemed to be between positions.

Our analysis tested whether the rotation of the hand corresponded to the side at which the target was presented more in the 3D than the 2D condition and whether this varied across ages. In the full model, the interaction between age and condition was not significant (χ2 = 5.101, df = 3, P = 0.165), and we therefore removed it from the model. The resulting model retained a clear effect of condition (χ2 = 50.367, df = 1, P < 0.001). Participants across all ages matched the rotation of their wrist to the side on which the target was located significantly more in the 3D than the 2D condition (Fig. 4).
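Again, the sketch below is not the authors' code; it simply illustrates, on simulated data with hypothetical column names and no random effects, the reported logic of first testing the age-by-condition interaction and, once it proves nonsignificant, dropping it and testing the effect of condition through likelihood-ratio comparisons of nested logistic models.

```python
import numpy as np
import pandas as pd
import scipy.stats as stats
import statsmodels.formula.api as smf

# Hypothetical trial-level data for the rotation study.
rng = np.random.default_rng(1)
n_trials = 320
df = pd.DataFrame({
    "condition": rng.choice(["2D", "3D"], size=n_trials),
    "age_group": rng.choice(["18m", "3y", "6y", "adult"], size=n_trials),
})
# rotation_matches_side = 1 when the coded wrist rotation (left/right) corresponds
# to the side of the apparatus on which the target was presented.
p_match = np.where(df["condition"] == "3D", 0.8, 0.3)
df["rotation_matches_side"] = rng.binomial(1, p_match)

def lr_test(full, reduced):
    """Likelihood-ratio test between two nested fitted models (statistic, p value)."""
    lr = 2 * (full.llf - reduced.llf)
    return lr, stats.chi2.sf(lr, df=full.df_model - reduced.df_model)

with_interaction = smf.logit("rotation_matches_side ~ C(condition) * C(age_group)", data=df).fit(disp=False)
no_interaction   = smf.logit("rotation_matches_side ~ C(condition) + C(age_group)", data=df).fit(disp=False)
no_condition     = smf.logit("rotation_matches_side ~ C(age_group)", data=df).fit(disp=False)

print("interaction:", lr_test(with_interaction, no_interaction))  # drop if not significant
print("condition:  ", lr_test(no_interaction, no_condition))      # 2D vs 3D effect
```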

Fig. 4 Rotation experiment, results.

Likelihood of the wrist rotation matching the side of the target is depicted on the y axis. Dots depict individual averages, and connecting lines depict individual performance across conditions. Box around 18-month-olds indicates that a modified procedure was used.

When the magnets were on the 2D surface, the participants tended to point at them with their hands in a “flat” position—palms facing the ground—whether the magnets were on the left or the right (Fig. 3, left). This means that the fingertip was oriented toward the object they pointed at, as though they were reaching forward to touch it. In the 3D condition, the participants tended to rotate their wrists so that the pad of their pointing finger was oriented toward the surface of the target. When the magnet was on the right side of the box and the right hand was used to point, participants rotated their wrist to the right so that the pad of their index finger was oriented toward the target object (Fig. 3, center). When the magnet was on the left side, participants were inclined either to use their left hand rotated to the left or even to rotate the wrist of their right hand through 180° so that the pad of their pointing finger faced right (Fig. 3, right).

Discussion

The results of the second study further support the touch hypothesis. When we point at targets at odd angles, we rotate our wrists just as we would if we were trying to touch those targets. In some instances in this study, the right hand was used to point at the left side of the box or vice versa, and the participants strenuously rotated their wrists through 180° to match the orientation of the surface they pointed at. These cases serve as particularly clear illustrations of the strength of the impulse to orient the hand as though attempting to touch the target. That the effect is again in evidence even in the youngest age groups adds further weight to our interpretation that pointing originates in touch.

STUDY 3: THE INTERPRETATION OF POINTING GESTURES

Last, we considered that if pointing gestures are produced as though they were attempts to touch things, then we might find that they are interpreted in this way too. To test this, we presented participants with images of a figure producing ambiguous pointing gestures, forcing them to choose between a touch and an arrow interpretation of those gestures.

In the “arrow condition,” participants viewed an image of a figure pointing at a cup in such a way that the vector extending along the length of his finger intersects with the object he was looking at (Fig. 5, left). If people interpret pointing gestures as arrows, then it should be clear that, in this condition, the figure in the image is pointing at the yellow cup.

Fig. 5 Interpretation experiment, main conditions.

On the left is the arrow condition: Here, the vector extending through the angle of the pointing finger intersects with the same object the figure is gazing at (the yellow cup); on the right is the touch condition: Here, the object the figure is gazing at is the same one the fingertip is closest to touching (the red cup). The 18-month-olds and 3-year-olds reliably pick the red cup in the condition on the right but are at chance in deciding between the red and yellow cup in the condition on the left (Credits: authors; C. O’Madagain, Max Planck Institute for Evolutionary Anthropology).

In a “touch condition” (Fig. 5, right), the figure’s gaze matches the object that his fingertip is closest to touching (the red cup), while the arrow picks out a different cup (the yellow cup). If participants interpret pointing gestures as attempts to touch things, then they should more reliably pick the cup the figure is looking at in this condition, which is the cup he is closest to touching.

Materials and methods

Participants were aged 18 months (n = 24), 3 years (n = 12), 6 years (n = 12), and 9 years (n = 12), along with adults (n = 12). For the older age groups (3-year-olds through adults), the experimental design was within-subjects. Participants were told that they would play a game in which a ball is hidden under a cup and that they must identify the location of the balls by naming the color of the cup under which they are hidden (participants who could not name the colors were excluded). A slide show then began, in which a character named “Max” appeared and was shown hiding balls under the cups [the complete slide show can be viewed online (15)]. The participants were told that Max would help them to find the balls.

In each slide that appeared once the experiment began, Max was shown pointing at a cup. The experimenter asked the participant, “Where is the ball?” The participant answered by naming the color of the cup; if the participant simply pointed or referred to the cup by location, the experimenter asked for the color of the cup. No matter what choice the participant made, the experimenter said, “Ok!” and then reached behind the screen to retrieve a ball. This ensured that one interpretation of Max’s gestures was not reinforced over the other. The experimenter then handed the ball to the participant, who (in the case of the 3- and 6-year-olds) used it to “feed” a toy elephant. Each participant received 16 test trials in one of four pseudo-randomized sequences.

Along with the arrow and touch conditions, we included two control conditions. In one of these, the figure’s gaze matched the arrow of the pointing finger, and this was also the cup that the fingertip was closest to touching (fig. S1, left). In these slides, touch, arrow, and gaze all picked out the same cup. Participants who did not pick the object the figure gazed at in three of four of these slides were excluded. This control was designed to make sure that participants understood the task and were following the gesture in unambiguous cases. In the second control, the arrow was again aligned with the cup that the fingertip was closest to, but the gaze picked out a different cup (fig. S1, right). This control was designed to ensure that participants’ choices could not be explained by their simply ignoring the hand position and following the gaze alone. No age group was above chance for following the figure’s gaze in this control condition, which shows that the results cannot be explained for any group by following the gaze alone (fig. S2).

The 18-month-olds participated in a modified between-participants design suitable for this age group. In this setup, the pointing gestures were produced by a live experimenter rather than a slide show, and two targets, rather than three, were used. Before the test trials began, the experimenter conducted a warm-up as follows. The infant was seated on her parent’s lap, while the experimenter sat opposite her and held up an interesting toy for her to see. The experimenter let the child play with the toy for some time before taking the toy and closing her hands over it. The experimenter had a second identical toy concealed in one of her hands so that she was now holding two identical toys between her hands. She then separated her hands, with a toy in each hand, and pushed her hands into two cloth-covered boxes, which were fixed to a wooden tray (boxes 10 cm wide and 15 cm apart). She then pointed at one of the boxes while saying, “Look, [child’s name], look!” The experimenter then pushed the tray toward the child, who could pull off the cloth from one of the boxes to find the toy. Because the experimenter had hidden an identical copy of the toy in each box, the child always “found” the toy; this ensured that one interpretation of the gesture was not reinforced over the other.

The pointing gestures produced in the warm-up phase were all “unambiguous” between touch and arrow—for example, where the experimenter used her right hand to point at the box on her right side, with her hand positioned outside of the table [matching the unambiguous gesture in fig. S1 (left)]. The test began when the ambiguous gestures were produced—matching the touch or arrow conditions already described (Fig. 5). Just as in the “on-screen” setup, the experimenter produced gestures in which the arrow of her finger aimed at one of the two boxes, while her hand hovered over the other so that her fingertip was closest to that box. In the “touch condition,” she looked at the box her fingertip was closest to touching, over which her hand was hovering; in the arrow condition, she looked at the box the arrow of her finger was directed at. The live experiment was carried out between subjects so that half of the participants received only arrow trials and half received only touch trials. Infants were given at least 4 and up to 12 trials (infants completing fewer than 4 trials were excluded).

Analysis and results

The main measure coded was which cup or box was chosen, either by naming the color of the cup (for older age groups) or by grabbing the cloth cover of the box (in the case of the 18-month-olds). To analyze whether participants’ choices were more likely to match the gaze in the touch or arrow condition, we ran a model with condition, subject age, and their interaction as the predictors. Overall, the full model was significant (full-null model comparison: χ2 = 25.881, df = 9, P = 0.002), and we found the interaction between condition and age to be significant (χ2 = 11.914, df = 4, P = 0.018). The 18-month-olds and 3-year-olds were more likely to pick the cup the experimenter was looking at in the touch condition than in the arrow condition, whereas for the 9-year-olds, the opposite was the case. The 6-year-olds did not show a clear preference, being at chance in both conditions, while the adults were well above chance in both conditions (Fig. 6).
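The model reported above is likewise not reproduced here; as a complementary and purely illustrative sketch (simulated data, hypothetical names), one can also check descriptively, within each age group and condition, whether the proportion of choices matching the figure's gaze differs from chance, which is the pattern summarized in Fig. 6.

```python
import numpy as np
import pandas as pd
import scipy.stats as stats

# Hypothetical trial-level data: matched_gaze = 1 if the participant's choice
# (named cup or grabbed box) matched the object the figure was gazing at.
rng = np.random.default_rng(2)
rows = []
for age in ["18m", "3y", "6y", "9y", "adult"]:
    for condition in ["touch", "arrow"]:
        for _ in range(40):
            rows.append({"age_group": age, "condition": condition,
                         "matched_gaze": int(rng.binomial(1, 0.7))})
df = pd.DataFrame(rows)

# Within each age group and condition, is choice-matches-gaze different from chance (0.5)?
for (age, condition), grp in df.groupby(["age_group", "condition"]):
    k, n = int(grp["matched_gaze"].sum()), len(grp)
    result = stats.binomtest(k, n, p=0.5)
    print(f"{age:>5} / {condition:<5}: {k}/{n} matched gaze, p = {result.pvalue:.3f}")
```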

Fig. 6 Interpretation experiment, results.

Likelihood of choice matching gaze is depicted on the y axis. When participants follow the arrow, their choices match the gaze in the arrow condition; when participants follow the touch, their choice matches the gaze in the touch condition. Dots depict individual averages, and boxes depict sample means and SE. Box around 18-month-olds indicates that a modified procedure was used.

Discussion

What we can see in these results is that the interpretation of pointing gestures as arrows (in yellow; Fig. 6) follows a clear trajectory: It is almost entirely absent in the earliest age groups tested (18 months and 3 years), who are at chance in their interpretation of the gestures in the arrow condition, but it is reliably available to the older age groups (9 years and adults). The youngest age groups, on the other hand, interpret the touch gestures reliably—overwhelmingly preferring the cup the figure was gazing at in the touch condition. Although the older age groups do not interpret these gestures as reliably, perhaps because of the availability of a competing interpretation (the arrow), the touch interpretation still seems to be available even among adults. These results indicate that children start out interpreting pointing gestures in the same way they produce them—as attempts to touch things and not as arrows. This adds further weight to the hypothesis that pointing originates in touch.

GENERAL DISCUSSION

The pointing gesture sits at the foundation of language acquisition, allowing speaker and hearer to coordinate visual attention on an object for the sake of establishing reference in communication (1, 2, 5). Although its origin has been shrouded in mystery, the results of the present studies provide strong support for the hypothesis that this gesture emerges from touch. From infancy to adulthood, pointing gestures are oriented toward their target not as arrows but instead as though they were attempts to touch that target. From infancy to adulthood, the wrist is rotated in a pointing gesture as it would be were the producer attempting to touch the target. And although adults and older children are able to interpret pointing gestures as arrows, in infancy and early childhood, the dominant interpretation of a pointing gesture is as though it were an attempt to touch a thing.

What is the best explanation for these results? We think that the best explanation is the hypothesis that predicted them: Pointing originates in touch. Some previous work on pointing has indicated that pointing may have an origin in touch (11, 12), but the results reported here are, to our knowledge, the first experimental results based on this hypothesis. How exactly might pointing emerge from touch? A natural suggestion is that pointing emerges from touch exploration through the kind of “ritualization” that has been found to lead to other early gestures. Consider the “hands-up” gesture produced by both human infants and apes (4). Here, an infant will raise her hands to literally try to climb onto her mother. The mother soon realizes what the child wants when the child raises her arms, and she picks her up instead of waiting for the child to try to climb up. The child finds that all she needs to do is raise her arms to get the mother to pick her up, and the hands-up gesture is born. Now, consider how touch exploration could lead to pointing in the same way. We know that infants start out exploring objects in their environment with their extended forefingers, touching what they look at to coordinate visual and haptic information seeking (16). We also know that adults are inclined to automatically visually attend to objects they are touching themselves (17). It is a small step to suppose that adults might also systematically pay attention to what children are touching. Once the child finds that she can get an adult to pay attention to something by touching it, she may begin to make “as if” to touch things that are slightly further away. Parents recognize which object the child is aiming to touch and attend to that object. The action originally designed to allow the infant to explore an object with the fingertip becomes a gesture that functions to coordinate the attention of infant and adult on an object, and pointing is born.

Of course, there are further debates concerning the development of pointing that we have not discussed here, including how much infants understand of the mental states of the adults whose attention they direct with these gestures (18). However, we think that a key piece of the puzzle has been unlocked with the identification of the link between pointing and touch, and it is one that brings us substantially closer to fully understanding this milestone in human ontogeny.

SUPPLEMENTARY MATERIALS

Supplementary material for this article is available at http://advances.sciencemag.org/cgi/content/full/5/7/eaav2558/DC1

Fig. S1. Interpretation experiment, controls.

Fig. S2. Interpretation experiment—results of control 2.

Details on Materials and Methods

References (19–25)

This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial license, which permits use, distribution, and reproduction in any medium, so long as the resultant use is not for commercial advantage and provided the original work is properly cited.

REFERENCES AND NOTES

Acknowledgments: We thank M. Tomasello and the Department of Comparative and Developmental Psychology of the Max Planck Institute for Evolutionary Anthropology, Leipzig for the use of the laboratory facilities, where these studies were carried out. We also thank E. Rossi who ran the studies and R. Mundry for statistical analysis. Funding: This research was supported by funding from the European Research Council under the European Union’s Seventh Framework Programme (FP/2007-2013)/ERC grant agreement N°324115-FRONTSEM (PI: Schlenker). C.O’M. and G.K.’s work was supported by the Max Planck Institute for Evolutionary Anthropology, Leipzig. B.S.’s work was supported by Institut d’Etudes Cognitives, Ecole Normale Supérieure–PSL Research University. Institut d’Etudes Cognitives was supported by grants ANR-10-IDEX-0001-02, ANR-10-LABX-0087, and FrontCog: ANR-17-EURE-0017. Ethics statement: The presented studies were noninvasive and strictly adhered to the legal requirements of the country in which they were conducted. They were approved by the Max Planck Institute for Evolutionary Anthropology Ethics Committee (members of the committee are M. Tomasello, head of the child laboratory Katharina Haberl, and research assistant J. Jurkat). The full procedure of the study was covered by the committee’s approval. Informed written consent was obtained from all the parents of the children who participated in this study. Author contributions: C.O’M.: Conceptualization, methodology, data curation, production of figures, writing (original draft), and project administration. G.K.: Methodology, data curation, data collection, formal analysis, writing (review and editing), and project administration. B.S.: Conceptualization, methodology and writing (review and editing). Competing interests: The authors declare that they have no competing interests. Data and materials availability: All data needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary Materials. The data analysis and sample photographs of material used are available online at the Open Science Foundation website: osf.io/7ruhc. Additional data related to this paper may be requested from the authors.