Research Article | NEUROSCIENCE

Computing hubs in the hippocampus and cortex

Science Advances  26 Jun 2019:
Vol. 5, no. 6, eaax4843
DOI: 10.1126/sciadv.aax4843

Abstract

Neural computation occurs within large neuron networks in the dynamic context of varying brain states. Whether functions are performed by specific subsets of neurons and whether they occur in specific dynamical regimes remain poorly understood. Using high-density recordings in the hippocampus, medial entorhinal cortex, and medial prefrontal cortex of the rat, we identify computing substates where specific computing hub neurons perform well-defined storage and sharing operations in a brain state–dependent manner. We retrieve distinct computing substates within each global brain state, such as REM and nonREM sleep. Half of the recorded neurons act as computing hubs in at least one substate, suggesting that functional roles are not hardwired but reassigned on a timescale of seconds. We identify sequences of substates whose temporal organization is dynamic and stands between order and disorder. We propose that global brain states constrain the language of neuronal computations by regulating the syntactic complexity of substate sequences.

INTRODUCTION

Information processing in the brain can be approached on three different levels: biophysical, algorithmic, and behavioral (1). The algorithmic level, which remains the least understood, describes the way in which emergent functional computations can be decomposed into simpler processing steps, with architectures mixing serial and massively parallel aspects (2). At the lowest level of individual system components, here, in single neurons, these building blocks of distributed information processing can be modeled as primitive operations of storing, transferring, or nonlinearly integrating information streams (3).

In resting state conditions, both blood oxygen level–dependent (BOLD) and electroencephalogram (EEG) signals are characterized by discrete epochs of functional connectivity or topographical stability, defined as resting state networks and microstates, respectively (4, 5). The transitions between these large-scale epochs are neither periodic nor random but occur through a not yet understood syntax, which is fractal and complex (5). Does this organization at the macroscopic scale (whole brain and networks of networks for resting state networks and microstates, respectively) also exist at the microscopic scale? Said differently, is neuronal activity at the microcircuit level organized in discrete epochs associated with different “styles” of information processing? Our first goal is to determine whether information processing at the local neuronal circuit level is structured into discrete sequences of substates and whether these sequences have an observable syntax, whose complexity could be a hallmark of computation. Here, we focus on low-level computing operations performed by individual neurons, such as basic information storage and sharing (3, 6). To reduce external perturbations, such as sensory inputs, and to establish whether primitive processing operations and their temporal sequences are brain state dependent, we study two conditions: anesthesia and natural sleep, which are characterized by alternating stable brain states, theta (THE)/slow oscillations (SO) and rapid eye movement (REM)/nonREM sleep, respectively. We consider the CA1 region of the hippocampus (HPC), the medial entorhinal cortex (mEC), and the medial prefrontal cortex (mPFC) to determine whether algorithmic properties are shared between regions with different cytoarchitectures.

The second goal is to determine whether primitive processing operations are localized or, on the contrary, distributed within the microcircuit, as proposed for attractor neural networks (7) and liquid state machines (8). This raises two key questions: Are certain operations driven by a few key neurons, similar to hub cells in a rich-club architecture (9)? And do neurons have predetermined computing roles, such as “sharer” or “storer” of information, and rigidly prescribed partners in their functional interactions? Said differently, is information routed through a hardwired “neuronal switchboard system,” as in the early days of telephony, or dynamically via different addressable nodes, as in decentralized peer-to-peer services?

Here, we demonstrate the existence of a multiplicity of distinct computing substates at the microcircuit level within each of the probed global brain states in both anesthesia and natural sleep. The low-level algorithmic roles played by individual neurons change from one substate to the other and appear largely independent from the underlying cytoarchitecture, with roughly half of the recorded neurons acting as transient computing hubs. Furthermore, we reveal complexity not only at the level of information processing within each substate but also at the level of how substates are organized into temporal sequences, which are neither regularly predictable nor fully random. Substate sequences display an elaborate syntax in all the probed anatomical regions, whose complexity is systematically modulated by changes in global brain states. Together, our findings suggest a more distributed and less hierarchical style of information processing in neuronal microcircuits, more akin to emergent liquid state computation than to pre-programmed processing pipelines.

RESULTS

Analysis design

Neurons were recorded simultaneously from the CA1 region of the HPC and the mEC under anesthesia (18 recordings from 16 rats) and from the CA1 region and the mPFC during natural sleep (six recordings from three rats; see Fig. 1A and fig. S1 for more details on recordings). We focus on two elementary processing functions: information storage, i.e., how much previously conveyed information a neuron buffers over time, as measured by the active information storage (3), and information sharing, i.e., how much of a neuron’s activity information content is made available to other neurons, as measured by mutual information [see example in (6)]. We use the term feature to discuss the metrics we use, i.e., firing, information storage, or sharing (Fig. 1, B and C). We use the same analysis design for all features. The FeatureVector(ta) contains the values for the descriptive features, as measured in window ta (Fig. 1B). For example, for firing features, if 20 cells are recorded, FeatureVector(ta) contains 20 values, representing the firing density of each neuron during window ta. We first correlate feature vectors for a given window pair (ta, tb). Here, a high correlation value means that the two feature vectors are very similar to one another, i.e., that the features measured at ta are also found at tb. Next, we build a feature similarity matrix, a collection of correlation values between feature vectors for all window pairs organized in time (Fig. 1D). A block along the diagonal indicates a stable state for a given feature, e.g., a period over which units fire, store, or share information in a consistently preserved pattern. The axes of the similarity matrix represent time, and repetitions of a block structure along a horizontal or vertical line mean that a stable state for a given feature is reoccurring over time. We then use a simple clustering technique to extract different stable states, which we call substates, and display their switching behavior during the recording session (Fig. 1D). Last, we define computing hubs as neurons that participate more heavily in the buffering (storage hubs) or the funneling (sharing hubs) of information streams (Fig. 1D; see Materials and Methods). This notion of computing hub generalizes previously introduced notions of “hubness” (10, 11) beyond the ability to synchronize firing toward more general types of influence on information processing.
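
As an illustration of this design, the following is a minimal sketch (Python, toy data, illustrative cluster number; not the published analysis code) of the three steps just described: per-window feature vectors, the Pearson similarity matrix Msim, and clustering into substates. Feeding K-means the rows of the correlation-distance matrix is one common approximation to clustering on a precomputed distance.

```python
# Minimal sketch of the substate-extraction pipeline (toy data, assumed
# parameters; not the authors' code).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
feature_vectors = rng.random((500, 20))   # 500 windows x 20 neurons (toy)

# Feature similarity matrix: Pearson correlation between all window pairs.
msim = np.corrcoef(feature_vectors)       # shape (500, 500)

# Cluster on the correlation distance 1 - Msim; each row of the distance
# matrix serves as the feature vector of its window (a common heuristic).
distance = 1.0 - msim
substates = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(distance)
print(substates[:50])                     # substate label sequence over time
```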

Fig. 1 Unsupervised extraction of states and hubs.

(A) Cartoon representing the approximate recording locations (mEC and CA1; mPFC and CA1) during two experiment types in anesthesia and sleep. (B) Example LFP trace taken from the 32 channels in CA1 (blue) and 32 channels in mEC (orange). Below are examples of isolated unit activity taken from the same recording. For each time window (t), we extract different features represented by the FeatureVector(t), which has a feature value for each channel or single unit recorded. (C) We consider four features: spectral band averaged powers (from LFP channels), single unit firing rates, information storage, and information sharing. (D) Left: Cartoon representation of Msim. To extract substates and their temporal dynamics, we construct a feature similarity matrix Msim in which the entry Msim(ta, tb) measures the Pearson correlation between the vectors FeatureVector(ta) and FeatureVector(tb). Time flows from the top-left corner horizontally to the top-right corner and vertically to the bottom-left corner. A block (square) along the diagonal in the resulting image identifies a period of feature stability, i.e., a substate. A block appearing several times horizontally or vertically indicates that a feature is repeated several times. Middle: Unsupervised clustering identifies the different substates (indicated by a number) and their temporal dynamics (the vertical axis corresponds to that of the similarity matrix). Right: We identify computing hub cells, i.e., neurons that display exceptionally high values for a given feature, associated with given substates. Note that reoccurring states have the same hub cells (state 3 in this example). The asterisks (*) mark the neurons whose values for the examined feature fall within the top 5%.

Identification of brain global states

Unsupervised cluster analysis of the spectral features of the fields recorded in the various brain regions allowed a clear identification of typical global oscillatory patterns (fig. S2), which we call global brain states. In the following, all brain states are identified by the clustering analysis of field recordings performed in the CA1 region [stratum oriens (SOr) to stratum lacunosum moleculare (SLM)]. Unsupervised clustering identified two states for anesthesia corresponding to epochs dominated by slow (SO state) and theta (THE state) oscillations and two states during sleep corresponding to REM versus nonREM episodes.

Brain state–dependent firing substates

As subsets of cells tend to fire spontaneously together in stereotypical patterns (12), we first analyzed neuronal firing assemblies. Figure S3 shows that the firing rate, the burst index, and entrainment by the phase of the ongoing oscillations were brain region and brain state dependent, as previously reported (13). A simple visual inspection of firing behavior revealed the probable existence of different firing sets, as some neurons tended to fire together during certain epochs, with these epochs repeating themselves over time (fig. S4). To quantify this observation, we constructed the feature vectors Firing(ta), whose entries are given by the average firing rate of each neuron within the window of analysis ta. The complex block structure of the similarity matrix revealed a repertoire of state transitions much richer than the one associated with global brain states (Fig. 2). In this example, unsupervised clustering revealed a total of six firing substates in mEC (Fig. 2A) and five in mPFC (Fig. 2D) during THE and REM, respectively, for the two animals. Figure 2B demonstrates that a given brain state was characterized by switching between different firing substates. Figure 2E shows that a subset of firing substates was shared between brain states and, importantly, that the switch from one firing substate to another did not necessarily coincide with a change in the global brain state (and vice versa). Quantification over all recordings revealed that firing substates occurred 87% of the time during either one of the possible global brain states (Fig. 2, C and F). Substates were found in the mEC, CA1, and mPFC, and we found an average of about five substates for all brain regions and brain states (table S1). These results reveal that, although field recordings show stereotyped oscillatory behavior during a given brain state, the firing behavior of neurons displays a richer dynamic repertoire. Their activity is compartmentalized in a small number of firing substates, with discrete switching events from one substate to another. The firing substates are brain state and brain region specific, and they are not strictly entrained by the global oscillatory state.

Fig. 2 Firing substates.

Examples of similarity matrices Msim obtained from Firing(t) at different times in mEC during anesthesia (A) and in mPFC during natural sleep (D), measured in two animals. The bar below Msim indicates the transitions occurring between THE/REM (dark blue) and SO/nonREM (light blue). Although there were only two global brain states, six (A) and five (D) firing substates were identified. (B and E) Examples of the firing density of three neurons (a, b, and c) recorded in mEC and mPFC, respectively, with amplitude normalized for visualization. Neurons tended to fire in specific substates, indicated here with a color code. These examples also illustrate the switching between different firing substates inside a given global oscillatory state and their overlap across different global oscillatory states. The analysis of all recordings revealed that a majority of firing substates tended to occur during a preferred global oscillatory state, as indicated by the bimodal histograms during anesthesia (C) and natural sleep (F), respectively.

Storage of information is dynamic within a brain state

At any given time, neuronal activity conveys an amount of information that can be measured by Shannon entropy. We first focused on active information storage, which measures the fraction of information carried by a neuron i at a time t that was present in the past activity history of i itself. For storage features, we extract several substates (six for the mEC in the animal shown in Fig. 3A and seven for CA1 in the animal shown in Fig. 3D), with an average of about four states across all animals (table S1). As before, there was no strict alignment between brain state transitions and storage substate transitions (Fig. 3, A to C and E). Yet, brain state specificity of storage states was 80% for all regions (Fig. 3, C and F, and table S1).

Fig. 3 Information storage substates.

Examples of similarity matrices Msim obtained from Storage(t) at different times in mEC during anesthesia (A) and CA1 during natural sleep (D). As for firing substates, we identified more storage substates (six and seven, respectively, in the shown examples) than global oscillatory states. We show in (B) and (E) that the participation of three individual neurons in information storage (indicated in arbitrary units for visualization) was substate dependent. The values reported above the plots correspond to the average firing rate of neuron b (green color) during the corresponding epochs within consistent storage substates. The analysis of all recordings showed that storage substates tended to occur during a preferred global oscillatory state, as indicated by the bimodal histograms for anesthesia (C) and for natural sleep (F).

Under anesthesia, the absolute storage values were stronger in mEC than in CA1, particularly in layers 3 and 5 of mEC (fig. S5). During natural sleep, however, storage values for CA1 were two orders of magnitude larger than during anesthesia and were as strong as in mPFC (fig. S5). Storage tended to be weaker for all probed regions and layers in THE with respect to SO during anesthesia but not during natural sleep (fig. S5). Therefore, information storage is dynamically distributed in discrete substates and is brain state and brain region dependent. In particular, the involvement in storage of a neuron could vary substantially along time without being necessarily paralleled by a comparable change in firing rate (Fig. 3, B and E).

Information sharing is dynamic within a brain state

A primitive processing operation complementary to information storage is information sharing, providing a pseudo-directed metric of functional connectivity between any two circuit units (6). For each neuron i, we quantified both “shared-in” (i acts as a sharing target, with information shared from j neurons’ past activity) and “shared-out” information (i acts as a sharing source and information is shared to j neurons’ future activity). We first constructed the feature vector Sharing(ta) containing the total amount of information funneled through each given neuron (integrated in- and out-sharing strengths, represented by big arrows in Fig. 4A), irrespective of whom the information was being shared with. Because in- and out-sharing strengths were strongly correlated (average Pearson correlation of >0.9), we ignored the distinction between in- and out-sharing and speak generically of sharing substates. Representative sharing similarity matrices and state sequences are shown in Fig. 4B (top) for mEC during anesthesia and mPFC during sleep and in fig. S6 for CA1. Here, we studied only information sharing within regions, because the number of pairs of simultaneous units in different regions that showed significant sharing was too small to reach robust conclusions. We found about four sharing substates on average across animals. Sharing states displayed an 86% specificity for a given brain state (Fig. 4D and table S1).

Fig. 4 Information sharing substates.

The cartoon in (A) shows an example of sharing assembly for a given sharing hub neuron across three nonsequential occurrences of the same substate. The total strength of in- and out-going sharing is equal (large, external arrows) during ta, tb, and tc, while the assembly changes (smaller, internal arrows). The changing size of internal arrows represents the sharing strength of that particular functional connection between the sharing hub and its source and target neurons. (B) Similarity matrices Msim for sharing strengths Sharing_S(t) (top) and sharing assemblies Sharing_A(t) (bottom) in mEC during anesthesia (left) and mPFC during natural sleep (right). We identified a multiplicity of substates within each global oscillatory state, as shown by the colored bars below the feature similarity matrices. The similarity matrices for sharing strengths and assemblies have a matching block structure. However, sharing strengths were very stable within a substate (red-hued blocks), while sharing assemblies were highly volatile (light blue–hued blocks). (C) This is quantified for each sharing assembly substate by a liquidity coefficient. For all observed sharing substates across all regions and global oscillatory states in all animals, the liquidity of sharing assemblies was much larger than that of sharing strengths. (D) Most sharing substates occurred preferentially during a preferred global oscillatory state for both anesthesia and natural sleep combined (see fig. S5 for separated histograms for the two conditions).

During anesthesia, we measured stronger absolute sharing values in CA1 than in mEC, a pattern reversed with respect to storage values, particularly in stratum radiatum (SR) and stratum pyramidale (SP) of CA1, although mEC layer 5 had a sharing strength comparable to CA1’s SR and SP (fig. S5). During natural sleep, the participation in information sharing of SOr in CA1 increased by an order of magnitude and was as large as that of mPFC, notably layer 4 (fig. S5). As for storage, the involvement of a neuron in sharing could vary along time even without corresponding variations of its firing rate (fig. S6, B and E).

Sharing assemblies are “liquid”

The previous analysis focused on sharing strengths at the single-cell level. We then determined with which neurons sharing cells were exchanging information, i.e., the detailed network neighborhood of sharing, or sharing assembly (cartoon networks in Fig. 4A). Two striking features were apparent. First, both the block structure of the sharing assemblies and the state transition sequences nearly matched those of the sharing strengths (Fig. 4B), as evidenced by a relative mutual information value of 98% on average. Second, in contrast to sharing strengths, the blocks in the sharing assembly similarity matrix were of a light blue color, indicating a strong variability of sharing assemblies within a given substate. This phenomenon was quantified by liquidity analysis, with liquidity being a measure bounded between 0 and 1, where a value of 0 represents an absence of internal variability within a substate and a value of 1 represents completely random variability (see Materials and Methods). The liquidity values of sharing assemblies for all sharing substates throughout all recordings lay below the diagonal (Fig. 4C). This result can be better understood by considering the toy examples of Fig. 4A. The cartoons represent snapshots at three different nonsequential times of a given hub neuron in its sharing network environment. The three considered time frames all fall within the same substate; therefore, the overall in- and out-sharing strengths, represented by the orange and gray arrows, respectively, are constant (meaning stability). However, the sources and targets of the funneled information can widely vary in time (meaning instability). Although the sum of ingoing and outgoing information remained overall constant within each sharing substate, information was shared over different cell assemblies from one time period to the next. All three brain regions displayed remarkable liquidity in sharing assemblies through all brain states, and liquidity was brain region and brain state specific (Fig. 4C). As reported in table S2, the largest liquidity was observed for mPFC sharing assemblies during natural sleep (~94%). CA1 displayed a substantial reduction in the liquidity of sharing assemblies when moving from anesthesia to sleep (dropping from ~86% in anesthesia to ~57% in sleep). Last, as for the other features, information sharing substates were brain state specific (Fig. 4D).

Loose coordination of substate transitions between brain regions

Single units were recorded simultaneously in two regions (CA1 and mEC; CA1 and mPFC). We thus assessed whether substate transition events in one region matched the transitions in the other region. We computed the relative mutual information between substate sequences of a given type (e.g., firing, storage, or sharing) observed in one region and the other. We did not find significant differences for these measures across the three features (firing, storage, and sharing) and therefore pooled them together. The median relative mutual information between substate transitions in the probed cortical and hippocampal regions was 18% during anesthesia (between mEC and CA1) and 42% during natural sleep (between mPFC and CA1). These levels of coordination between substate sequences denoted a lack of perfect parallelism between transitions in the different regions, but they were still well above chance level. Thus, substate dynamics display some coordination between CA1 and mPFC during sleep (table S3), which is in keeping with the fact that information exchange occurs between the two regions during sleep (14). The weak coordination under anesthesia suggests that circuits may operate more independently from one another in this condition (but still not completely).
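
For illustration, below is a minimal sketch of one way to compute such a relative mutual information between two simultaneously recorded substate label sequences; the normalization by the smaller of the two entropies is our assumption for the example, and the sequences are synthetic (see Materials and Methods for the exact definition).

```python
# Hedged sketch: relative mutual information between two substate sequences.
import numpy as np
from sklearn.metrics import mutual_info_score

def relative_mi(seq_a, seq_b):
    """MI between two label sequences, normalized by the smaller entropy."""
    mi = mutual_info_score(seq_a, seq_b)     # MI in nats
    h_a = mutual_info_score(seq_a, seq_a)    # H(A) = MI(A; A)
    h_b = mutual_info_score(seq_b, seq_b)    # H(B) = MI(B; B)
    return mi / min(h_a, h_b)

rng = np.random.default_rng(1)
ca1 = rng.integers(0, 4, 2000)               # toy CA1 substate labels
mpfc = np.where(rng.random(2000) < 0.4,      # partially coordinated copy
                ca1, rng.integers(0, 4, 2000))
print(f"relative MI: {relative_mi(ca1, mpfc):.2f}")
```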

A large fraction of cells can act as computing hubs

Functional, effective, and anatomical hub neurons [mostly γ-aminobutyric acid–releasing (GABAergic)] have been identified in the brain (15). We complement this concept by introducing storage and sharing hubs, i.e., neurons displaying elevated storage or sharing values, respectively (see Materials and Methods). In contrast to the sparsity of functional, effective, and anatomical hubs, a large fraction of cells acted as a computing hub in at least one substate, as illustrated in Fig. 5A. Computing hubs could be recruited across all probed regions and layer locations (Fig. 5, B and C). As summarized in Fig. 5B, the probability of serving as a computing hub—storage or sharing confounded—was 40% or more on average for almost all layers, apart from the possibly undersampled SLM and SR in CA1. We observed a general tendency for inhibitory interneurons to have a larger probability than excitatory cells of serving as computing hubs. This tendency was particularly strong for cortical regions and was notably significant in layer 5 of mEC (during anesthesia) and layer 3 of mPFC (during sleep), for which the probabilities of inhibitory interneurons serving as a computing hub in at least one substate approached 70%. The probability of serving as a computing hub at least once was relatively similar when evaluated separately for storage or sharing. In particular, 43% of the neurons serving as a storage hub in a substate could serve as a sharing hub in another substate but, in general, not simultaneously, as only 12% of the neurons were “multifunction” hubs.

Fig. 5 A democracy of computing hubs.

(A) Within every computing substate, some neurons exhibited significantly strong values of information storage or sharing (computing hubs). However, these computing hubs generally changed from one substate to another, as shown in this example. Different rows correspond to different single units recorded in mEC during anesthesia, and different columns correspond to different computing substates (left, storage substates 1 to 6; right, sharing substates 1 to 4). An entry is colored in yellow when the neuron is a computing hub within the corresponding substate. In the example shown, while ~9% of neurons on average were simultaneously acting as computing hubs, more than 40% of the recorded units were recruited as hubs for at least one substate when considering all the computing substates together (vertical bar on the right). (B and C) The probability that a neuron acted as a hub depended only loosely on its anatomical localization. Panel (B) shows that, for all regions and layers, the probability that a neuron acts as a computing hub at least once was always larger than 30%. Inhibitory (i) neurons tended to be recruited as hubs more frequently than excitatory (e) neurons. Analogously, panel (C) shows that none of the layers displayed a specialization in either one of the two processing operations of information storage or sharing. Asterisks denote statistically significant comparisons (lack of overlap between 95% confidence intervals for the probability, reported as vertical ranges on top of the histogram bars). In (C), a yellow horizontal line indicates the fraction of computing hub cells that also happen to be simultaneously high firing rate cells. Many computing hubs thus have an average or low firing rate. In (B) and (C), in CA1, light blue represents anesthesia and dark blue represents natural sleep.

Despite this large flexibility in the dynamic assignment of hub roles, the notion of hub continued to make sense within each individual substate. Within a substate, on average, only ~9% of cells acted as hub (storage or sharing pooled), so still a strict “elite” (although not a permanent one but appointed just within the associated state). Thus, the set of recruited hubs constituted, at each time, a characteristic fingerprint of the active substates (with only 4% of the substates being “hubless”).

We also studied the probability that a computing hub emerged in a given layer (Fig. 5C). During anesthesia, all probed layers of CA1 and mEC showed a uniform ~20% probability for a storage or sharing computing hub to emerge. Natural sleep was associated with an enhanced recruitment of computing hubs. The probabilities of hub emergence exceeded ~40% for storage hubs in layer 5 of mPFC and in SP of CA1. The analysis of deep or superficial CA1 SP principal neurons, which are involved in different microcircuits (16), did not reveal an intralayer distribution of computing hubs. These results suggest that the probability that a neuron serves as a computing hub is not correlated with its anatomical region or layer location.

Last, we tested whether computing hubs were characterized by high firing rates. Using the same procedure as for extracting computing hubs, we found that 62% of the cells were high firing in at least one firing substate, with 70% being putative interneurons. There was a poor overlap between computing hubs and high firing rate cells. Table S3 already shows that storage and sharing substate sequences are only loosely coordinated with firing substate sequences (see also firing rate information in Fig. 3, B to E, and fig. S6, B to E). Furthermore, being a high firing rate cell does not guarantee that this cell will also be a computing hub (or the other way around). This is also shown in Fig. 5C, where the yellow levels over the histogram bars indicate the fraction of storage and sharing hubs that also happen to be high firing cells. We conclude that a storage hub can have a normal or even smaller than average firing rate.

The syntax of substate sequences is complex and brain state dependent

Collectively, our results demonstrate the existence of substate sequences in three different brain regions during anesthesia and natural sleep. Using a linguistics analogy, we assign a letter to each identified substate (represented by a color in the figures). The temporal sequence of substates thus translates into a stream of letters. However, if we consider the three features simultaneously, then we obtain a stream of three-letter words. All combinations of possible letters from our three features define the dictionary of words that can be expressed. We represent a stream of words as a switching table (Fig. 6A). This allows us to explore two aspects of the “neuronal language”: the statistics of the words and the statistics of the transitions between the words (the syntax). We found that the words were mostly (85%) brain state specific, as expected, because the substate letters are already brain state specific (see Figs. 3, C and F, and 4D). Although the syntactic rules structuring the production of words are unknown, we can quantify their complexity. Algorithmic information theory or the minimum description length (MDL) framework links complexity to the notion of compressibility (17). As illustrated in Fig. 6B, an ordered, regular switching table requires a short description, as a small list of instructions can be written to reproduce the table (e.g., word D 100 times, followed by word B 88 times, etc.). At the opposite extreme, a completely random switching table would need a lengthy exhaustive description—as many instructions as the length of the table itself. A complex switching table stands between regularity and randomness and requires a description that is compressed, longer than a regular table but shorter than a random table.
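
The compressibility intuition can be demonstrated with a toy computation: a regular word stream compresses to a short description, a random one barely compresses, and a bursty "complex" one falls in between. In the sketch below, zlib stands in for the MDL codelength; this is an illustration of the idea, not the complexity estimator used in the article.

```python
# Toy demonstration of compressibility as a proxy for description length.
import zlib
import numpy as np

rng = np.random.default_rng(2)
n = 5000
regular = "".join(("D" * 100 + "B" * 88) for _ in range(n // 188))
random_ = "".join(rng.choice(list("ABCD"), n))
bursty = []
while len(bursty) < n:                    # "complex": bursty word repeats
    bursty += [rng.choice(list("ABCD"))] * int(rng.integers(1, 60))
bursty = "".join(bursty[:n])

for name, stream in [("regular", regular), ("complex", bursty),
                     ("random", random_)]:
    ratio = len(zlib.compress(stream.encode(), 9)) / len(stream)
    print(f"{name:8s} compressed/raw = {ratio:.2f}")
```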

Fig. 6 Complexity of substate sequences.

State switching found for each feature (firing, storage, and sharing) did not align in time. This can be visualized by state switching tables, whose different rows graphically represent transitions between global brain oscillatory states and firing, storage, and sharing substates. (A) Examples of switching tables for mEC during anesthesia (top) and for mPFC during natural sleep (bottom; note the different time scales). (B) Switching tables were neither perfectly regular (top left) nor random (top right), but they were “complex,” displaying organized patterns escaping simple description (bottom). (C) The complexity of the switching tables was larger for THE/REM than for SO/nonREM for most recordings. We included two recordings from mPFC under anesthesia for comparison. (D) Switching tables were complex in all cases. Complexity values were significantly above the upper threshold for regularity and below the lower threshold for randomness. (E) The increase of complexity was significant for mEC when transitioning from SO to THE and for mPFC from nonREM to REM sleep. This trend in CA1 was not statistically significant [significance assessed in terms of lack of intersection between 95% confidence intervals and threshold values for both (D) and (E)]. Asterisks indicate that the number of recordings in this category was not enough to assess significance, but that the median value lay below or above the considered threshold.

Figure 6C shows that the syntax was complex (with complexity values lying between 0, perfect regularity, and 1, complete randomness) for all brain regions and brain states and that THE/REM states were more complex than SO/nonREM states. We added two recordings from mPFC under anesthesia for comparison. Figure 6D shows that the measured complexity was significantly larger than the upper threshold for regularity and significantly smaller than the lower threshold for randomness (P < 0.05, Bonferroni corrected, direct confidence interval comparison).

Last, we assessed whether switching from SO to THE or from nonREM to REM increased the complexity. As shown in Fig. 6E, the tendency was toward an increase of complexity in all cases, from +30% for mEC during anesthesia and mPFC during anesthesia or sleep to roughly +10% for CA1 during anesthesia or sleep. This relative increase was always significant (P < 0.05, Bonferroni corrected, confidence interval comparison) apart from CA1, for which two recordings displayed increased complexity during nonREM sleep. We conclude that the syntax is complex and brain state dependent.

What determines complexity?

We then investigated which factors contribute to complexity. Different word durations may account for variations in complexity. Although word dwell times differed by one order of magnitude between anesthesia and sleep, with medians of ~18 min (first quartile, ~10 min; third quartile, ~28 min) during anesthesia and ~1.4 min (first quartile, ~1 min; third quartile, ~2.1 min) during sleep, complexity values for anesthesia and natural sleep were similar.

We also evaluated the burstiness coefficient, B (18), of the stream of words. This coefficient is bounded as −1 ≤ B ≤ 1, with B = −1 corresponding to a perfectly periodic stream of words, B = 0 to a Poisson train, and B = 1 to a maximally bursting stream. We found a positive correlation between burstiness and complexity (P < 0.01, bootstrap CI). Burstiness was greater during THE/REM (0.15) than during SO/nonREM (0.09; P = 0.03, Kruskal-Wallis test), which may contribute to the increased complexity found during THE/REM.
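
A minimal sketch of such a coefficient follows, assuming the standard definition B = (σ − μ)/(σ + μ) over inter-event intervals, which matches the bounds quoted above; applying it to word dwell times exactly as written is our illustrative assumption.

```python
# Sketch of a burstiness coefficient, assuming B = (sigma - mu)/(sigma + mu)
# over inter-event intervals (-1 periodic, 0 Poisson, 1 maximally bursty).
import numpy as np

def burstiness(intervals):
    mu, sigma = np.mean(intervals), np.std(intervals)
    return (sigma - mu) / (sigma + mu)

rng = np.random.default_rng(3)
print(burstiness(np.full(1000, 2.0)))            # periodic   -> -1.0
print(burstiness(rng.exponential(2.0, 1000)))    # Poisson    -> ~0
print(burstiness(rng.pareto(1.5, 1000) + 0.01))  # heavy tail -> positive
```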

The richness of the dictionary also affects complexity (19). We therefore evaluated the used dictionary fraction, i.e., the ratio between the number of observed words and the maximum theoretical number of words (the size of the dictionary). We found a significant positive correlation between the used dictionary fraction and complexity (P < 0.05, bootstrap CI; fig. S7B). The richness of the dictionary was greater during THE/REM (21%) than during SO/nonREM (14%; P = 0.032, Kruskal-Wallis test), which may also contribute to the increased complexity found during THE/REM.

A bivariate linear regression of complexity over burstiness and used dictionary fraction revealed a correlation of 0.62 (P < 0.05, bootstrap CI) between predicted and observed complexity, demonstrating that complexity is largely explained by burstiness and the used dictionary fraction.

Last, we verified that our results did not depend on the choice of complexity measure. Redoing the analyses with the commonly used Lempel-Ziv complexity measure (19) led to qualitatively equivalent results. Lempel-Ziv complexity also strongly correlated with our measure of complexity (Pearson correlation, 0.84; P < 0.001, bootstrap confidence interval).
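
As an illustration of this control analysis, below is a simple incremental phrase-counting variant of Lempel-Ziv complexity (LZ78-style parsing, unnormalized); the exact estimator and normalization used in the article may differ.

```python
# Phrase-counting Lempel-Ziv complexity (LZ78-style parsing, unnormalized).
def lz_complexity(sequence):
    phrases, phrase = set(), ""
    for symbol in sequence:
        phrase += symbol
        if phrase not in phrases:   # new phrase: record it and restart
            phrases.add(phrase)
            phrase = ""
    return len(phrases) + (1 if phrase else 0)

print(lz_complexity("DDDDDDDDBBBBBBBB"))  # regular stream: few phrases
print(lz_complexity("ABDCADBCBADCACBD"))  # irregular stream: more phrases
```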

DISCUSSION

Here, we demonstrate two levels of organization of brain activity. At the single-cell level, we find that a large proportion of recorded neurons act as computing hubs during discrete time epochs (substates) within a given stable brain state (e.g., REM and nonREM). At the microcircuit level, we find a rich repertoire of computational substates characterized by temporally structured sequences, whose complexity was modulated by the global brain oscillatory state. This type of organization was shared between three anatomically different brain regions: the HPC, the mEC, and the mPFC.

The hubness of a neuron may be determined by fixed features, e.g., an exceptional extension of axonal arborizations (10), a suitable location in the circuit wiring diagram facilitating the control of synchronization (11), or yet some developmental “droit d’aînesse” (birthright) (15). During natural sleep and anesthesia, however, we find that >40% of the recorded neurons act as a computing hub during at least one substate, meaning that computing hubs form a rather open and not so elitist club. Computing hubness is dynamic—a neuron acting as a hub in a given substate may not be a hub in a different substate or may swap its nature (e.g., converting from a storage to a sharing hub). The stronger tendency for putative inhibitory cells to serve as hubs (~70%) is in keeping with the known role of GABAergic cells in orchestrating network activity (10, 11, 15). Furthermore, because our analysis was limited to a few brain states, the proportion of putative principal and GABAergic neurons acting as computing hubs may even be an underestimate. Perhaps all neurons act as computing hubs during specific brain states (including exploration and quiet wakefulness).

That hubs share information with ever-changing source and target neurons is in apparent contradiction with the existence of sequential firing of cell assemblies in cortical and hippocampal circuits, including during nonREM sleep (12, 15, 20–22). Our information-theoretical analyses require the use of at least 5-s-long sliding windows, which is not sufficient to detect fast sequences of activation, as replay events occur within 500 ms (22). Replay sequences are not strictly stable, as they demonstrate intercycle variability (23), which may reflect liquidity. The liquid nature of information sharing suggests that neuronal activity is not frozen at fixed-point attractors as in classic artificial neural networks but may be sampling the broad basin of attraction of shallow attractors (7) or higher-dimensional manifolds “at the edge of chaos,” as found in reservoir computing schemes (24). In this case, information is shared across extremely volatile assemblies within a given substate. The assembly dynamics are thus liquid—i.e., neither frozen into crystallized patterns nor fully random as in a gas—and are only mildly constrained to robustly maintain the computational role of sharing hubs while preserving the entropy of firing patterns, and therefore bandwidth for local computations. This preservation of hub function in a heterogeneous and reconfiguring circuit can be seen as a form of homeostasis of the computing role, generalizing the concept of homeostasis of circuit behavior evidenced in invertebrate systems (25). While this latter homeostasis preserves the functional level, in our case, homeostasis would extend down to the algorithmic level, referring to the three-level hierarchy proposed by Marr and Poggio (1).

During a “stable” behavior such as resting state, analysis of BOLD and EEG signals consistently revealed the presence of temporal sequences of resting state networks and topographical microstates, respectively (4, 5). Here, we find that an analogous switching between discrete states occurs at a completely different scale of microcircuits. During a stable oscillatory regime (e.g., THE rhythm), neuronal computation is organized in temporal sequences of computational substates. While field oscillations constrain neuronal firing and neuronal firing produces field oscillations, we find only a loose match between the switch from one oscillatory mode to the other and the switch from one substate to the other. Transitions between global states (related to the scale of mesoscale collective dynamics) sometimes anticipate and sometimes follow transitions between firing, storage, or sharing substates (related to the scale of microscopic firing dynamics), as if dynamic changes occurring at either one of the scales had destabilizing effects on the dynamics at the other scale (in both directions, meso- to microscale and micro- to mesoscale). The behavior of CA1 cells may reflect specific internal dynamics not tightly controlled by the CA1 local field, which mostly reflects synaptic inputs originating from outside the CA1 region. The repertoire of computing substates is brain state specific. Beyond proposals that oscillations are central for the routing of information between regions (26), we thus suggest here that global oscillatory states could also organize information processing within local regions by enforcing the use of their own state-specific “languages” (expressed in terms of combinations of alternative intrinsic substates).

Signatures of computation can be identified even if the function and meaning of the computation are unknown and even when system states are sampled partially, as is the case for the present study. This allowed us to extract a symbolic representation of substates (letters) for a given feature, which form words when several features are considered. The syntax of the substate word sequences is complex, standing between order and randomness (as was already the case for the sharing dynamics within each substate). The capacity to generate complex sequences of patterns is a hallmark of self-organizing systems and has been associated with their emergent potential to perform universal computations (17). Moreover, dynamics at the “edge of chaos” confer advantages for information processing (24).

We find that the syntactic complexity of substate sequences is brain state dependent, as was the case for the substate dictionaries, being higher during THE oscillations/REM sleep than during SO/nonREM sleep, which suggests an increased computational load in the former brain state. The temporal complexity of activation sequences was also shown to be modulated by brain states at the macroscale level of whole-brain dynamics (27). In keeping with the view that slow/THE oscillations measured during anesthesia share general properties with nonREM/REM sleep (28, 29), we found similar rules of organization in terms of substate sequences and their complexity, despite the fact that word dwell times in anesthesia are one order of magnitude greater than during natural sleep. We speculate that the nature of the ongoing oscillation (slow versus THE) constrains the repertoire of words used and their syntax, modulating the type of computation performed through the recruitment of varying computing hubs. Sleep, oscillatory patterns, and neuronal firing are altered in numerous neurological disorders, including epilepsy (30); it will therefore be important to assess whether the repertoire of substates and their syntax are likewise affected.

In conclusion, our results reveal a rich algorithmic-level organization of brain computations during natural sleep and anesthesia, which combines a complex combinatorics of discrete states with the flexibility provided by liquidly reconfiguring assemblies. While we cannot yet prove that these substate dynamics are functionally relevant, they have the potential to serve as a substrate for previously undisclosed computations. The next aim will be to perform a similar analysis during specific behavioral tasks, such as goal-driven maze navigation. Words and/or their sequences may be signatures of specific cognitive processes. The fact that the algorithmic instructions and primitive processing operations are similar in three brain regions with different architectural organizations suggests the existence of a basic architecture for low-level computations shared by diverse neuronal circuits.

MATERIALS AND METHODS

Data information

We used in this work a portion of the data (13 of 18 experiments) initially published by Quilichini et al. (29), which includes local field potentials (LFPs) and single-unit recordings obtained from the dorsomedial mEC of anesthetized rats. We also used a portion of the data (2 of 16 experiments) initially published by Ferraris et al. (28), which includes LFPs and single units recorded in the mPFC under anesthesia. Seven recordings are original data in both mEC and dorsal HPC under anesthesia, and 10 recordings in four animals during natural sleep in HPC and mPFC. See fig. S1 for details on recordings, number of cells, and layers recorded.

Animal surgery

We performed all experiments in accordance with experimental guidelines approved by the Rutgers University Institutional Animal Care and Use Committee and the Aix-Marseille University Animal Care and Use Committee. We performed experiments on 13 male Sprague-Dawley rats (250 to 400 g; Hilltop Laboratory Animals), 8 male Wistar Han IGS rats (250 to 400 g; Charles River Laboratories), and 3 male Long-Evans rats (350 to 400 g; Charles River Laboratories). We performed acute (anesthesia) experiments on the Sprague-Dawley and seven of the Wistar rats, which were anesthetized with urethane (1.5 g/kg, intraperitoneally) and ketamine/xylazine (20 and 2 mg/kg, intramuscularly), with additional doses of ketamine/xylazine (2 and 0.2 mg/kg) supplemented during the electrophysiological recordings. We performed chronic (natural sleep) experiments on one Wistar and the three Long-Evans rats, which were anesthetized using 2% isoflurane in 1 liter/min of O2 for the surgical procedure. In both cases, the body temperature was monitored and kept constant with a heating pad. The head was secured in a stereotaxic frame (Kopf), and the skull was exposed and cleaned. Two miniature stainless steel screws, driven into the skull, served as ground and reference electrodes. To reach the mEC, we performed one craniotomy from bregma: −7.0 mm anteroposterior (AP) and +4.0 mm mediolateral (ML); to reach the CA1 area of HPC, we performed one craniotomy from bregma: −3.0 mm AP and +2.5 mm ML in the case of HPC coupled to mEC recordings, and −5.6 mm AP and +4.3 mm ML in the case of HPC coupled to mPFC recordings; to reach the mPFC, we performed one craniotomy from bregma: +3 mm AP and +0.8 mm ML. We chose these coordinates to respect known anatomical and functional connectivity in the cortico-hippocampal circuitry (28, 29). We used different types of silicon probes to record the extracellular signals. In acute experiments, the probes were mounted on a stereotaxic arm. We recorded the dorso-medial portion of the mEC using a NeuroNexus CM32-4x8-5mm-Buzsaki32-200-177 probe (in eight experiments), a 10-mm-long Acreo single-shank silicon probe with 32 sites (50 μm spacing) arranged linearly (in five experiments), or a NeuroNexus H32-10mm-50-177 probe (in five experiments), which was lowered into the EC at 5.0 to 5.2 mm from the brain surface with a 20° angle. We recorded HPC CA1 activity using a H32-4x8-5mm-50-200-177 probe (NeuroNexus Technologies) lowered at 2.5 mm from the brain surface with a 20° angle (in four experiments), a NeuroNexus H16-6mm-50-177 probe lowered at 2.5 mm from the brain surface with a 20° angle (in two experiments), and an E32-1shank-40μm-177 probe (Cambridge NeuroTech) lowered at 2.5 mm from the brain surface with a 20° angle (in one experiment). We recorded mPFC activity using a NeuroNexus H32-6mm-50-177 probe lowered into layer 5 at 3 mm perpendicularly from the brain surface (in two experiments). In chronic experiments, the probes were mounted on a movable micro-drive (Cambridge NeuroTech) fixed on the skull and secured in a copper mesh hat. We recorded HPC CA1 activity (probes lowered perpendicularly at 2.5 mm from the brain surface) using a NeuroNexus H32-Poly2-5mm-50-177 probe (in two experiments), a Cambridge NeuroTech E32-2shanks-40μm-177 probe (in one experiment), and a NeuroNexus H32-4x8-5mm-50-200-177 probe (in one experiment).
We recorded mPFC activity (probes lowered perpendicularly at 3.0 mm from the brain surface) using a NeuroNexus H32-4x8-5mm-50-200-177 probe (in two experiments) and a NeuroNexus H32-Poly2-5mm-50-177 probe (in one experiment). The on-line positioning of the probes was assisted by the presence of unit activity in cell body layers and the reversal of THE ([3 6] Hz in anesthesia, [6 11] Hz in natural sleep) oscillations when passing from L2 to L1 for the mEC probe, by the presence in SP of either unit activity or ripples (80 to 150 Hz) for the HPC probe, and by the dorsoventral (DV) depth value and the presence of intense unit activity for the mPFC probe.

At the end of the recording, the animals were injected with a lethal dose of pentobarbital Na (150 mg/kg, intraperitoneally) and perfused intracardially with a 4% paraformaldehyde solution. We confirmed the position of the electrodes (DiI was applied on the back of the probe before insertion) histologically on Nissl-stained 40-μm sections, as reported previously in detail (28, 29). We used for analysis only experiments with an appropriate probe position. The numbers of recorded single units in different anatomical locations for the different retained recordings are summarized in fig. S2.

Data collection and spike sorting

Extracellular signals recorded from the silicon probes were amplified (1000×), bandpass-filtered (1 Hz to 5 kHz), and acquired continuously at 20 kHz with a 64-channel DataMax System (RC Electronics) or a 258-channel Amplipex, or at 32 kHz with a 64-channel DigitalLynx (NeuraLynx), at 16-bit resolution. We preprocessed raw data using a custom-developed suite of programs (31). After recording, the signals were downsampled to 1250 Hz for the LFP analysis. Spike sorting was performed automatically using KLUSTAKWIK [http://klustakwik.sourceforge.net (32)], followed by manual adjustment of the clusters, with the help of autocorrelograms, cross-correlograms (CCGs), and spike waveform similarity matrices [KLUSTERS software package; https://klusta.readthedocs.io/en/latest/ (33)]. After spike sorting, we plotted the spike features of units as a function of time and discarded units with signs of significant drift over the recording period. Moreover, we included in the analyses only units with clear refractory periods and well-defined clusters. Recording sessions were divided into brain state periods. Epochs of stable theta (THE in anesthesia experiments, REM in natural sleep experiments) or slow oscillations (SO in anesthesia experiments, nonREM in natural sleep experiments) were visually selected from the ratios of the whitened power in the THE band ([3 6] Hz in anesthesia and [6 11] Hz in natural sleep) and the power of the neighboring bands ([1 3] and [7 14] Hz in anesthesia and [12 20] Hz in natural sleep), computed from the EC layer 3 LFP (a layer present in all 18 anesthesia recordings) or from the mPFC layer 5 LFP in natural sleep recordings, and assisted by visual inspection of the raw traces (fig. S2) (28, 29). We then used band-averaged powers over the same frequency ranges of interest as features for the automated extraction of spectral states via unsupervised clustering, which confirmed our manual classification. We determined the layer assignment of the neurons from the approximate location of their somata relative to the recording sites (with the largest-amplitude unit corresponding to the putative location of the soma), the known distances between the recording sites, and the histological reconstruction of the recording electrode tracks.
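
A hedged sketch of this spectral criterion follows: the ratio of THE-band power to the neighboring bands, estimated with Welch's method. Band edges follow the anesthesia values quoted above; the LFP trace is a placeholder, not recorded data.

```python
# Sketch of the THE/neighbor power-ratio criterion (anesthesia band edges;
# placeholder LFP, not the published classification code).
import numpy as np
from scipy.signal import welch

fs = 1250                            # LFP rate after downsampling (Hz)
lfp = np.random.randn(fs * 60)       # placeholder: 60 s of EC layer 3 LFP

def band_power(freqs, psd, lo, hi):
    sel = (freqs >= lo) & (freqs < hi)
    return psd[sel].sum()

freqs, psd = welch(lfp, fs=fs, nperseg=4 * fs)
theta = band_power(freqs, psd, 3, 6)                        # THE band
neigh = band_power(freqs, psd, 1, 3) + band_power(freqs, psd, 7, 14)
print(f"THE ratio: {theta / neigh:.2f}")  # high -> THE epoch, low -> SO
```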

Characterizations of single unit activity

We calculated pairwise CCGs between the spike trains of these cells during each brain state separately (28, 29). We determined the statistical significance of putative inhibition or excitation (trough or peak in the [+2 5] ms interval, respectively) using the nonparametric test and criterion previously used for identifying monosynaptic excitations or inhibitions (28, 29), in which each spike of each neuron was jittered randomly and independently over a uniform interval of [−5 5] ms a thousand times to form 1000 surrogate datasets, from which the global maximum and minimum bands at 99% acceptance levels were constructed. Inspection of CCGs thus allowed us to identify single units as putatively excitatory or inhibitory, information that we used to perform the computing hub characterizations in Fig. 5B.

To perform the analyses of fig. S3, we then computed the burst index and the phase modulation of units. The burst index denotes the propensity of neurons to discharge in bursts. We estimated the burst amplitude of the spike autocorrelogram (1-ms bin size) by subtracting the mean value between 40 and 50 ms (baseline) from the peak measured between 0 and 10 ms. Positive burst amplitudes were normalized to the peak, and negative amplitudes were normalized to the baseline, to obtain indexes ranging from −1 to 1. Neurons displaying a value above 0.6 were considered bursting.
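
A sketch of this burst index computation on a toy autocorrelogram, following the recipe just described (the indexing assumes acg[k] holds the count at lag k ms):

```python
# Burst index from a 1-ms-bin spike autocorrelogram, per the recipe above.
import numpy as np

def burst_index(acg):
    peak = acg[0:10].max()         # peak in the 0-10 ms interval
    baseline = acg[40:50].mean()   # baseline in the 40-50 ms interval
    amp = peak - baseline
    # positive amplitudes normalized to the peak, negative to the baseline
    return amp / peak if amp > 0 else amp / baseline

acg = np.concatenate([np.full(10, 80.0), np.full(40, 20.0)])  # toy bursty ACG
print(burst_index(acg))  # 0.75 > 0.6 -> classified as bursting
```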

To establish the phase modulation of units, we concatenated different epochs of slow or THE oscillations and estimated the instantaneous phase of the ongoing oscillation by Hilbert transform of the filtered signal ([0.5 2] Hz for slow oscillations; [3 6] Hz in anesthesia and [6 11] Hz in natural sleep for THE oscillations). Using linear interpolation, we assigned a phase value to each action potential. We determined the modulation of unit firing by Rayleigh circular statistics; P < 0.05 was considered significant. We first assessed the circular uniformity of the data with a test for symmetry around the median, and we performed group comparison tests of circular variables using circular analysis of variance (ANOVA) for uniformly distributed data and a nonparametric multisample test for equal medians (“CM test,” similar to a Kruskal-Wallis test) for nonuniformly distributed data (https://philippberens.wordpress.com/code/circstats); P < 0.05 was considered significant.
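
The phase-assignment step can be sketched as follows, with an illustrative filter design, synthetic LFP and spikes, and a large-sample Rayleigh approximation standing in for the full circular statistics toolbox:

```python
# Sketch: spike phases from the Hilbert transform of band-passed LFP,
# followed by a large-sample Rayleigh approximation (p ~ exp(-n R^2)).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1250
t = np.arange(0, 60, 1 / fs)
lfp = np.sin(2 * np.pi * 4.5 * t) + 0.3 * np.random.randn(t.size)  # toy THE
spike_times = np.sort(np.random.uniform(0, 60, 500))               # toy spikes

b, a = butter(2, [3 / (fs / 2), 6 / (fs / 2)], btype="band")  # [3 6] Hz
phase = np.angle(hilbert(filtfilt(b, a, lfp)))
# interpolate on the unwrapped phase, then wrap back to [0, 2*pi)
spike_phase = np.interp(spike_times, t, np.unwrap(phase)) % (2 * np.pi)

n = spike_phase.size
R = np.abs(np.mean(np.exp(1j * spike_phase)))   # resultant vector length
p = np.exp(-n * R**2)                           # approximate Rayleigh p value
print(f"R = {R:.3f}, p ~ {p:.3g}")              # p < 0.05 -> phase modulated
```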

Feature-based state extraction

We performed a sliding window analysis of the recorded LFP time series and single unit spike trains, extracting in a time-resolved manner a variety of different descriptive features. For all the considered features (see specific descriptions in later subsections), we used the same window sizes and overlaps for the sake of comparability. For both anesthesia and natural sleep recordings, we adopted a window duration of 10 s—demanded by the estimation needs of the most “data-hungry” information-theoretical features—with an overlap of 9 s.

We computed each set of descriptive features and compiled them into multi-entry vectors FeatureVector(t) for every time window centered on different times t.

We then computed a similarity matrix Msim to visualize the variability over time of the probed feature set. The entries Msim(ta, tb) are given by the Pearson correlation coefficient between the entries in the vectors FeatureVector(ta) and FeatureVector(tb), treated as ordered sequences, and are thus bounded between −1 and +1. Blocks of internally elevated correlation along the similarity matrix diagonal denote epochs of stable feature configurations. Similar configurations are detected by the presence of highly internally correlated off-diagonal blocks, and the existence of multiple possible configurations by the poor correlation between distinct blocks.

We then extracted feature-based states using a standard iterative K-means algorithm to cluster the different vectors FeatureVector(t) based on the correlation distance matrix defined by 1 − Msim. We defined the substates of different types as the different clusters obtained for different feature types. We chose the number of clusters K by clustering using K = 2, 3, … 20 and first guessing K using a maximal silhouette criterion across all Ks. We also inspected dendrograms from single-linkage clustering as a cross-criterion. Using both pieces of information, K was manually adjusted case by case (up to ±2 clusters with respect to the unsupervised silhouette criterion) to best match the visually apparent block structure of the similarity matrix Msim, which results in an optimized K selection for each recording.
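
A sketch of the silhouette-based first guess for K follows (the subsequent manual adjustment is not reproduced; the data are synthetic):

```python
# Silhouette-based first guess of the cluster number K on 1 - Msim.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def best_k(distance, k_range=range(2, 21)):
    scores = {}
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10,
                        random_state=0).fit_predict(distance)
        scores[k] = silhouette_score(distance, labels, metric="precomputed")
    return max(scores, key=scores.get), scores

rng = np.random.default_rng(4)
toy = rng.random((120, 15))            # 120 windows x 15 features (toy)
distance = 1.0 - np.corrcoef(toy)      # correlation distance, zero diagonal
k, _ = best_k(distance, range(2, 8))
print(f"first-guess K = {k}")
```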

Feature robustness

To assess the robustness of the feature computation, the original spiking times were randomly shuffled 1000 times, and the features were recomputed for each instance in two recordings, one under anesthesia and one during natural sleep. For comparability with the originally computed features, K was kept the same for each recording and each feature. The information retained after shuffling was computed by dividing the mutual information between the shuffled and original features by the entropy of the new feature set. We found a significant difference between original and shuffled features, for both anesthesia and natural sleep, across all features.

Global oscillatory states

We defined eight different unequally sized frequency ranges, which were manually adjusted recording by recording to better center them on the recording-specific positions of the slow-wave and THE peaks and of their harmonics (e.g., 0 to 1.5 Hz, 1.5 to 2 Hz, 2 to 3 Hz, 3 to 5 Hz, 5 to 7 Hz, 7 to 10 Hz, 10 to 23 Hz, and 23 to 50 Hz for the anesthesia spectrogram and the similarity matrix of fig. S2A). We averaged the spectrograms over all channels within each of the layers of the simultaneously recorded regions (e.g., EC and CA1 for anesthesia) and then coarse-grained the frequencies by further averaging over the eight ranges above. We finally compiled all these layer- and band-averaged power values into time-dependent vectors Spectra(t), with a number of entries given by eight (number of frequency bands) times the number of layers probed in the considered recording, i.e., up to eight (CA1 SOr, SP, SR, and SLM; EC layers 2, 3, and 5; and PFC layers 1, 2, 3, and 5), yielding at most 64 entries. We then processed these spectral features as described in the previous section to extract global oscillatory states, as for any other substate type, via unsupervised clustering.
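A minimal sketch of the coarse-graining step, using the anesthesia band edges quoted above; `spec` and `freqs` are assumed to be a layer-averaged spectrogram and its frequency axis, not the authors' variable names.

```python
# A minimal sketch, assuming `spec` is an (n_freqs x n_windows) spectrogram
# already averaged over the channels of one layer, with frequency axis
# `freqs` in Hz; band edges are the anesthesia example from the text.
import numpy as np

BANDS = [(0, 1.5), (1.5, 2), (2, 3), (3, 5), (5, 7), (7, 10), (10, 23), (23, 50)]

def band_averaged_power(spec, freqs, bands=BANDS):
    """Coarse-grain a layer-averaged spectrogram into 8 band-power rows."""
    rows = [spec[(freqs >= lo) & (freqs < hi)].mean(axis=0) for lo, hi in bands]
    return np.vstack(rows)   # (8 x n_windows); stack layers for Spectra(t)
```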

Firing sets and firing hubs

Not all neurons are equally active in all temporal windows. To determine typical patterns of single-neuron activation, we binned the spiking data of each unit into 50-ms windows: if a neuron fired within a bin, the result was a "1," and if it did not fire, the result was a "0." We enforced a strictly binary encoding, i.e., we attributed a 1 symbol to a bin even when more than one spike was fired within it. Our bin size was chosen to keep the information lost by ignoring multiple firing events within a bin below 5%. Further, note that for most spike trains, multiple firing events were extremely rare, i.e., apart from a few cases, the information loss was much smaller than 5%. We then averaged this binned spike density over time, separately for each single unit and within each time window, and compiled these averages into time-dependent vectors Firing(t) with N entries, where N is the overall number of single units probed in the considered recording. We constructed separate feature vectors for each of the simultaneously recorded regions. Firing substate prototypes were given by the centroids of the clusters extracted from the similarity matrix Msim resulting from the stream of Firing(t) feature vectors. We then defined a neuron to be a high firing cell in a given state if its firing rate in the state prototype vector was higher than the 95th percentile of all concatenated state prototype vector entries.
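A minimal sketch of the binarization and high-firing labeling, with illustrative names (`spike_times`, `prototypes`) and the 50-ms bin and 95th-percentile threshold from the text:

```python
# A minimal sketch, assuming `spike_times` is a list of arrays of spike
# times in seconds (one per unit) and `prototypes` a list of cluster
# centroid vectors (one firing rate entry per unit).
import numpy as np

def binarize(spike_times, t_start, t_stop, bin_size=0.05):
    """Binary (units x bins) matrix: 1 if a unit fired in a bin, else 0."""
    edges = np.arange(t_start, t_stop + bin_size, bin_size)
    return np.vstack([(np.histogram(st, edges)[0] > 0).astype(int)
                      for st in spike_times])

def high_firing_cells(prototypes, pct=95):
    """Label units above the 95th percentile of all prototype entries."""
    thr = np.percentile(np.concatenate(prototypes), pct)
    return [np.where(p > thr)[0] for p in prototypes]  # hub indices per state
```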

Active information storage

Within each time window, we computed for each single unit an approximation to the active information storage (AIS). AIS quantifies how much the activity of a unit maintains over time information that it already conveyed in the past (3). This information-theoretical notion of storage is distinct from the neurobiological notion of storage in synaptic weights. It is an activity-based metric (hence the adjective "active"), able to detect when temporal patterns in the activity of a single unit can serve the functional role of a "memory buffer." AIS is strictly defined as

$$\mathrm{AIS}_i = \mathrm{MI}[i(t), i^{\mathrm{past}}(t)]$$

i.e., as the information shared between the present activity i(t) of a single unit i and its past history of activity i^past(t). Before computing mutual information, we binned all spike trains with the same method used to determine the Firing(t) descriptive feature vector. The limited amount of data available within each temporal window makes it necessary to introduce approximations. We therefore replaced the full past history i^past(t) with the activity at a single time in the past, i(t − τ), and summed over all possible lags

$$\widehat{\mathrm{AIS}}_i = \sum_{\tau} \mathrm{MI}[i(t), i(t - \tau)]$$

where the lag τ varied in the range 0 ≤ τ ≤ 0.5 T_θ, with T_θ the period of the THE cycle. Note that MI values generally vanished at longer latencies. We evaluated the MI terms using a "plug-in" estimator on binarized spike trains, which takes the binned spike trains of two neurons within a defined time window and computes the mutual information and entropy values of the two variables (3). Concretely, we estimated the probability p that a bin includes a spike, and the complementary probability 1 − p that a bin is silent, for each unit by directly counting the frequencies of occurrence of 1s and 0s in its binned spike train. These counts yielded the probability distributions P(i) and P(j) that two neurons i and j fire or not. Analogously, we sampled directly from the data the histogram P(i,j) of joint spike counts for any pair of units i and j. These histograms were then directly "plugged in" (hence the name of the estimator) into the definition of MI itself

$$\mathrm{MI}(i,j) = \sum_{i,j} P(i,j) \log_2 \frac{P(i,j)}{P(i)\,P(j)}$$
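The following sketch illustrates the plug-in MI estimator and the lag-summed AIS approximation on binarized trains. Variable names are ours, lags here start at one bin (the τ = 0 term would reduce to the train's own entropy), and this is an illustration rather than the authors' implementation.

```python
# A minimal sketch, assuming `train` is a binary (0/1) array from one
# window and `max_lag` plays the role of 0.5*T_theta expressed in bins.
import numpy as np

def plugin_mi(x, y):
    """Plug-in mutual information (bits) between two binary spike trains."""
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            p_ab = np.mean((x == a) & (y == b))   # joint histogram entry
            p_a, p_b = np.mean(x == a), np.mean(y == b)
            if p_ab > 0:
                mi += p_ab * np.log2(p_ab / (p_a * p_b))
    return mi

def ais_hat(train, max_lag):
    """Sum over lags of MI between the present and the lagged train."""
    return sum(plugin_mi(train[lag:], train[:len(train) - lag])
               for lag in range(1, max_lag + 1))
```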

We then subtracted from each MI value a significance threshold (95th percentile of MI estimated on shuffled binarized trains, 1000 replicas), setting to zero the nonsignificant terms (i.e., those that became negative after bias subtraction). Although this corrected plug-in estimator is very rough, it is sufficient for our application, in which we are not interested in quantitatively exact MI values but only in relative comparisons across a large number of observations for state clustering. We compiled the N resulting AÎSi values into time-dependent vectors Storage(t), constructing separate vectors for each of the simultaneously recorded regions. We then constructed storage substates through unsupervised clustering based on the Msim matrices, as previously described. We defined a neuron to be a storage hub in a given state if its AÎSi value in the state prototype vector was higher than the 95th percentile over all entries of the concatenated cluster prototype vectors. This conservative threshold guarantees that only neurons with exceptionally high AIS values are labeled as hubs. While we may incur some false negatives (neurons with values in the right tail of the AIS distribution not labeled as hubs), we are thus protected against false positives.
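A minimal sketch of the shuffle-based bias correction, reusing the `plugin_mi` helper sketched above; the replica count and percentile follow the text, the rest is illustrative.

```python
# A minimal sketch: subtract the 95th-percentile shuffle MI and clip
# negatives to zero, as described in the text.
import numpy as np

def corrected_mi(x, y, n_shuffles=1000, pct=95, seed=0):
    """Bias-corrected plug-in MI between two binary trains."""
    rng = np.random.default_rng(seed)
    null = [plugin_mi(rng.permutation(x), y) for _ in range(n_shuffles)]
    return max(0.0, plugin_mi(x, y) - np.percentile(null, pct))
```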

AIS absolute values varied widely between the different recordings. To compare AIS measures and their relative changes between global oscillatory states across recordings, we first averaged AIS for all the units within a specific anatomic layer. We then normalized these average AIS values by dividing them by the average AIS value in the SO state (in anesthesia) or the nonREM state (in natural sleep) for the specifically considered recording and layer. The results of this analysis are shown in fig. S5, where different lines correspond to different recordings.

Information sharing networks and strengths

Within each time window, we computed the time-lagged mutual information MI[i(t), j(t − τ)] between all pairs of spike density time series for different single units i and j (evaluated with the same binning method used to determine the Firing(t) descriptive feature vector). Although MI is not a directed measure, a pseudo-direction of sharing is introduced by the positive time lag, under the assumption that information cannot be causally shared from the future. Thus, for every directed pair of single units i and j (including autointeractions, with i = j), we defined pseudo-directed information sharing as

$$I_{\mathrm{shared}}(j \to i) = \sum_{\tau} \mathrm{MI}[i(t), j(t - \tau)]$$

where the lag τ varied in the range 0 ≤ τ ≤ 0.5 T_θ, with T_θ the period of the THE cycle. Once again, we estimated the MI terms via direct plug-in estimation on binarized spike trains, as for storage, subtracting a significance threshold (95th percentile of MI estimated on shuffled binarized trains, 400 replicas) and zeroing nonsignificant terms. All these I_shared(j → i) entries were interpreted as weights in the adjacency matrix of a directed information sharing functional network, and we defined the sharing assembly formed by a neuron i as the star subgraph composed of i and all its immediate neighbors in this network. We compiled all N² values of I_shared(j → i) into time-dependent feature vectors Sharing_A(t), describing all possible sharing assemblies at a given time. We then also computed information sharing strengths by integrating the total amount of information that each single unit shared with the past activity of the other units in the network ("sharing-in")

$$I_{\mathrm{shared}}(\to i) = \sum_{j} I_{\mathrm{shared}}(j \to i)$$

or with their future activity ("sharing-out")

$$I_{\mathrm{shared}}(i \to) = \sum_{j} I_{\mathrm{shared}}(i \to j)$$

That is, the integrated amounts of shared information were given by the in-strength and out-strength of a node in the information sharing network with individual link weights I_shared(j → i). We compiled the N incoming I_shared(→ i) and N outgoing I_shared(i →) values into time-dependent vectors Sharing_S(t). We computed separate Sharing_A(t) and Sharing_S(t) vectors for each of the simultaneously recorded regions. We then performed, as before, unsupervised clustering based on the associated Msim matrices to extract sharing substates. Because the block structures displayed by these Msim matrices for sharing assemblies and sharing strengths overlap nearly perfectly, we conducted all substate analyses based on the Sharing_S(t) vectors only. We defined a neuron to be a sharing hub in a given state if its I_shared(→ i) and/or I_shared(i →) value in the state prototype vector was higher than the 95th percentile of all concatenated cluster prototype entries (again protecting against false-positive detection).
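A minimal sketch of the pseudo-directed sharing matrix and the derived strengths, reusing `plugin_mi` from above; `trains` (an N × bins binary matrix) and `max_lag` (0.5 T_θ in bins) are illustrative names, and the shuffle-based bias correction would be applied to each MI term as described.

```python
# A minimal sketch, not the authors' code: `trains` is an (N x bins)
# binary matrix for one window; the significance subtraction is omitted.
import numpy as np

def sharing_matrix(trains, max_lag):
    """shared[j, i] = I_shared(j -> i): lag-summed MI of i(t) with j(t - tau)."""
    n_units, n_bins = trains.shape
    shared = np.zeros((n_units, n_units))
    for i in range(n_units):
        for j in range(n_units):
            shared[j, i] = sum(
                plugin_mi(trains[i, lag:], trains[j, :n_bins - lag])
                for lag in range(0, max_lag + 1))
    return shared

def sharing_strengths(shared):
    """Sharing-in (column sums) and sharing-out (row sums) per unit."""
    return shared.sum(axis=0), shared.sum(axis=1)
```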

The relative comparisons of information sharing between SO and THE (REM and nonREM) epochs across recordings shown in fig. S5 are based, as for AIS, on averaged and scaled values. We first averaged the total Ishared (i.e., sharing-in plus sharing-out) over all units within a specific anatomic layer. We then normalized these average totals by dividing them by the average total Ishared in the SO state (in anesthesia) or the nonREM state (in natural sleep) for the specifically considered recording and layer.

Liquidity of sharing

The Mrec matrices for sharing assemblies display light blue (low internal correlation) blocks, while the Mrec matrices for sharing strengths show similar blocks that are red hued (higher internal correlation). We quantified this visual impression by evaluating the liquidity of sharing strength and sharing assembly substates. For a given recording and its associated Mrec matrix (e.g., the one for the Sharing_A or the Sharing_S features), we defined Tα as the set of times t for which the system is in a given substate α relative to the considered feature of interest. We then evaluated the liquidity Λ(α) of this substate α as

$$\Lambda(\alpha) = \sum_{\substack{t, t' \in T_\alpha \\ t' < t}} \left( 1 - \left| M_{\mathrm{rec}}(t, t') \right| \right) \Big/ \binom{\#T_\alpha}{2}$$

where |·| denotes the absolute value operator and #Tα is the number of elements of the set Tα. Liquidity values are thus bounded in the interval 0 ≤ Λ(α) ≤ 1, with 1 indicating maximum liquidity (i.e., maximum internal variability) of a substate.
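A minimal sketch of the liquidity index for one substate, assuming `M` is the relevant similarity matrix and `idx` the (at least two) window indices assigned to substate α:

```python
# A minimal sketch: mean internal dissimilarity 1 - |M(t, t')| over all
# ordered pairs of windows belonging to one substate.
import numpy as np
from itertools import combinations

def liquidity(M, idx):
    """Liquidity of a substate occupying the window indices in `idx`."""
    pairs = list(combinations(idx, 2))   # all t' < t pairs within T_alpha
    return float(np.mean([1.0 - abs(M[t, tp]) for t, tp in pairs]))
```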

Oscillatory mode specificity and hub distributions

For each substate type (firing, storage, and sharing), we computed the fraction of times that each substate was observed during a SO or THE state (in anesthesia) or a nonREM or REM state (in natural sleep). We defined the largest of these fractions as the oscillatory specificity of the substate. Oscillatory specificities close to 1 indicate that a substate occurs mostly within one of the two possible global states observed in each recording, while specificities close to 0.5 indicate that the substate does not occur preferentially in either global state.
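A minimal sketch of this computation, assuming `substate` and `global_state` are aligned integer label arrays over time windows (illustrative names):

```python
# A minimal sketch of oscillatory specificity for one substate label `s`.
import numpy as np

def oscillatory_specificity(substate, global_state, s):
    """Largest fraction of a substate's occurrences within one global state."""
    occ = global_state[substate == s]
    return max(np.mean(occ == g) for g in np.unique(global_state))
```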

To evaluate the probability that a hub emerges in a given anatomical layer, we computed for every recording the fraction of cells recorded in each layer that were labeled as hubs in at least one computing substate (storage or sharing). We computed these fractions separately, layer by layer, for excitatory and inhibitory cells and for anesthesia and sleep. These fractions were equal to unity when all the excitatory (or inhibitory) cells in a layer happened to be hubs at least once. We then evaluated the general probability that a hub emerges in a layer, which differs from the previous measure because it also takes into account that some cells may be labeled as hubs more often than others. We considered the list of all hubs of a given type (storage or sharing) across all substates, including repetitions (if a neuron was a hub in more than one substate, it appeared multiple times in the list), and evaluated the fraction of entries in this list belonging to a given layer. We computed these fractions separately, layer by layer, for storage and sharing hubs and for anesthesia and sleep. Ninety-five percent confidence intervals for the mean fractions above were evaluated as 2.996 times their sample SD over the different recordings for which they could be computed. We considered two mean fractions to be different when their 95% confidence intervals were fully disjoint.

Coordination between substate transitions

To compare sequences of substates of different types or in different regions, we introduced a symbolic description of substate switching. Each substate was assigned a letter symbol, i.e., a label s^(p), where p can stand for firing, information storage, or sharing and s^(p) is an arbitrary integer label, different for every substate. We could thus describe the temporal sequence of visited substates of each type as an ordered list of integers s^(p)(t). We quantified the degree of coordination between the sequences of substates of different types (e.g., p = "storage" versus q = "sharing") or in different regions (e.g., p = "storage in EC" versus q = "storage in CA1") by evaluating the relative mutual information term

$$\mathrm{MI}[s^{(p)}(t), s^{(q)}(t)] \,/\, \max[H(s^{(p)}(t)), H(s^{(q)}(t))]$$

normalized between 0 (full statistical independence between the two substate sequences) and 1 (complete overlap between the two substate sequences) by dividing by the entropy H of the more entropic of the two symbolic streams. We evaluated these MI and H terms using direct plug-in estimators on the joint histograms of substate labels. We estimated chance expectations for the level of coordination by repeating the same procedure on substate sequences with shuffled substate labels and taking the 99th percentile of the computed MI/H over 1000 permutation replicas.
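A minimal sketch of the coordination measure, assuming `s_p` and `s_q` are equal-length arrays of non-negative integer substate labels (illustrative names):

```python
# A minimal sketch: plug-in entropies on label histograms, then the
# relative mutual information between the two streams.
import numpy as np

def entropy(labels):
    """Plug-in entropy (bits) of an integer label stream."""
    p = np.bincount(labels) / len(labels)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def relative_mi(s_p, s_q):
    """MI between two label streams divided by the larger entropy."""
    joint = entropy(s_p * (s_q.max() + 1) + s_q)   # entropy of label pairs
    mi = entropy(s_p) + entropy(s_q) - joint
    return mi / max(entropy(s_p), entropy(s_q))
```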

Mutual information measure’s dependence on bin size

The original bin size was chosen such that, after discretization, the information lost by counting two or three spikes from the same neuron within one bin as a single 1 was less than 5%. On average, the information lost was less than 1% across all recordings. To analyze the dependence on bin size, we chose one example recording in the PFC during natural sleep, rebinned it at different bin sizes (25, 33, and 66 ms), and computed substates using the same methods described above. To keep the comparison focused on bin size, the number of clusters per feature was kept equal to the original. We then computed the amount of information about the substate sequences obtained with the original bin size that was retained by the corresponding substate sequences derived for each different bin size. To do so, we used the same procedure described in the previous section for quantifying coordination between sequences of different substate types or regions. Specifically, we computed the mutual information between the substate sequences for different bin sizes (normalized by the entropy of the original sequence) and compared this relative mutual information with the chance expectation (obtained by shuffling substate sequences, as above). The mutual information between corresponding sequences for different bin sizes was two orders of magnitude above chance level, denoting the high robustness of our substate extraction procedure. Correspondingly, for matched substates between sequences extracted at different bin sizes, the identification, number, and anatomical localization of hubs were only marginally altered.

Complexity of substate sequences

After converting the sequences of substates into symbolic streams of letters, we defined substate words as the triplets of letters corresponding to the firing, information storage, and information sharing substates simultaneously observed at each time t, i.e.,

$$S(t) = s^{(\mathrm{firing})}(t)\; s^{(\mathrm{storage})}(t)\; s^{(\mathrm{sharing})}(t)$$

We then constructed a switching table T in which the temporally ordered columns provide the sequence of substate words S(t) along time. We compiled separate switching tables for each recording and for each of the simultaneously recorded regions. The total set of substate words effectively found in a specific switching table constitutes its associated dictionary of substate combinations. We then defined the used dictionary fraction as the ratio between the number of observed words and the maximum theoretically possible number of words that could have been composed given the available substate letters (depending on how many firing, storage, or sharing substates have been effectively extracted).
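A minimal sketch of this bookkeeping, assuming aligned integer label streams for the three substate types and the number of substates extracted for each (illustrative names):

```python
# A minimal sketch of the substate-word dictionary and its used fraction.
import numpy as np

def used_dictionary_fraction(s_fir, s_sto, s_sha, n_fir, n_sto, n_sha):
    """Observed words S(t) over the maximum theoretically possible number."""
    words = set(zip(s_fir, s_sto, s_sha))        # observed triplets S(t)
    return len(words) / (n_fir * n_sto * n_sha)  # vs. all possible words
```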

We evaluated the complexity of substate word sequences using a procedure inspired by the notion of Kolmogorov-Chaitin complexity (17) and by minimum description length (MDL) approaches (17). The basic concept is that, for a regular symbolic sequence (and our streams of substate words are symbolic sequences of exactly this kind), it is possible to design a tailored "compression language" such that the sequence admits a much shorter description when reformulated in this language than its original length in number of words. On the contrary, a random symbolic sequence is poorly compressible, i.e., its description in terms of a generative language is nearly as long as the original list of symbols appearing in the sequence. A complex symbolic sequence stands between these two extremes, still admitting a compressed generative description, but not as short a one as for regular sequences. Departing from universal compression approaches, such as the original MDL formulation (17) or the Lempel-Ziv approach (19), we introduce here a "toy language" for generative description, specialized in compressing state transition tables such as the ones in Fig. 6A. Our choice is conceptually compliant with the MDL approach but, for the sake of pedagogy, avoids technical steps such as the use of binary prefix coding.

Let Ω = {S1, S2, …, Sω} be the dictionary of substate words appearing in the switching table T that we want to describe. We first define the exhaustive list description (Dlist) of T as a string of the following form

$$D_{\mathrm{list}} \equiv S_1\; t_{1,1}\; t_{1,2} \ldots t_{1,k_1}\quad S_2\; t_{2,1}\; t_{2,2} \ldots t_{2,k_2}\; \ldots\; S_\omega\; t_{\omega,1}\; t_{\omega,2} \ldots t_{\omega,k_\omega}$$

In such a description, the symbol of each substate word Sq (counting as one description unit) is followed by an exhaustive list of all the kq times tq,1, tq,2, …, tq,kq (each time index counting as an extra description unit) at which the recorded system produced the matching substate word. If the number of analyzed time windows is K = k1 + k2 + … + kω, then the length of the exhaustive list description will be |Dlist| = K + ω description units (K time stamps plus ω substate word symbols).

Let us then define the block-length description (Dblock) of the stream of substate code words as a description of the following form

$$D_{\mathrm{block}} \equiv S_1\; w_{1,1}\, l_{1,1}\; w_{1,2}\, l_{1,2} \ldots w_{1,m_1}\, l_{1,m_1}\quad S_2\; w_{2,1}\, l_{2,1}\; w_{2,2}\, l_{2,2} \ldots w_{2,m_2}\, l_{2,m_2}\; \ldots\; S_\omega\; w_{\omega,1}\, l_{\omega,1}\; w_{\omega,2}\, l_{\omega,2} \ldots w_{\omega,m_\omega}\, l_{\omega,m_\omega}$$

In such a description, the symbol of each word Sq (always counting as one description unit) is followed by a list of stepping instructions for a hypothetical "writing head" moving along discrete positions on an idealized tape, similar to computing automata such as the Turing machine (34). At the beginning, the machine is initialized with the head on the first position of the tape. The integers wq,n (at odd positions, i.e., first, third, etc., after the word symbol) indicate how many steps the machine head must shift rightward along the tape without writing, just skipping positions. The integers lq,n (at even positions, i.e., second, fourth, etc., after the word symbol) indicate instead for how many consecutive positions the machine must write the symbol Sq on the tape before shifting to the next position on the right. Every time a new symbol Sq is met when parsing the block-length description, the position of the writing head is reset to the leftmost starting position on the tape. This parsing grammar is obviously more complex than that of a simpler "parrot machine" designed to parse the exhaustive list descriptions introduced above. The length in symbols of a block-length description is variable and depends on how regular the word sequence to be compressed and regenerated is. The block-length description segment Sq wq,1 lq,1 … wq,mq lq,mq will be shorter than the matching exhaustive list segment Sq tq,1 … tq,kq whenever 2mq < kq, which can happen when transitions of the different types of substate letters are regularly aligned, such that the resulting switching table has long alternating blocks of repeated substate words.

The syntactic complexity of a sequence of substate words can then be evaluated by quantifying how much shorter the program generating the switching table T on a "smart" compressing machine interpreting block-length descriptions is than the program generating the same table T on a "dumb" parrot machine interpreting exhaustive list descriptions. We define the description length complexity of a switching table T as

$$\mathrm{DLC} = |D_{\mathrm{block}}(T)| \,/\, |D_{\mathrm{list}}(T)|$$

To give a toy example, let us consider the sequence T = "AAAAAAA BBBB AAAAA CCCCC DDD BBBBBB," built out of four possible substate words S1 = A, S2 = B, S3 = C, and S4 = D. The exhaustive list description of this sequence is

Dlist(T) = A 1 2 3 4 5 6 7 12 13 14 15 16  B 8 9 10 11 25 26 27 28 29 30  C 17 18 19 20 21  D 22 23 24

with length |Dlist(T)| = 34 description units. Its block-length description is

Dblock(T) = A 0 7 4 5  B 7 4 13 6  C 16 5  D 21 3

with length |Dblock(T)| = 16 description units, i.e., |Dblock(T)| < |Dlist(T)|.
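A minimal sketch of the two description lengths under the run-length grammar defined above; running it on the toy sequence reproduces |Dlist(T)| = 34 and |Dblock(T)| = 16. Function names are ours, not the authors'.

```python
# A minimal sketch: one unit per word symbol, plus one unit per time stamp
# (Dlist) or one (skip, write) pair per homogeneous block (Dblock).
from itertools import groupby

def dlist_length(seq):
    """|Dlist|: one symbol per dictionary word plus one time stamp per entry."""
    return len(seq) + len(set(seq))

def dblock_length(seq):
    """|Dblock|: one symbol per dictionary word plus 2 units per block."""
    n_blocks = sum(1 for _sym, _run in groupby(seq))
    return len(set(seq)) + 2 * n_blocks

seq = list("AAAAAAA" + "BBBB" + "AAAAA" + "CCCCC" + "DDD" + "BBBBBB")
print(dlist_length(seq), dblock_length(seq))    # -> 34 16
print(dblock_length(seq) / dlist_length(seq))   # DLC ~ 0.47
```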

Given the noisiness of the data, we dropped from both the exhaustive list description and the block-length description the segments corresponding to exceedingly rare words Sq. In particular, ranking the code words from the most frequent to the rarest, we dropped all words Sr with r ≥ R such that removing all of their occurrences from the word stream reduced the stream's overall length by no more than 10% (lossy compression).

We computed confidence intervals for the description length complexity (DLC) values via a jackknife construction in which we dropped one word at a random position from the temporal stream S(t) of K symbols, generating up to K jackknife surrogate streams, each with K − 1 symbols. The confidence interval was then given by the 5th and 95th percentiles of the complexities evaluated from these jackknife surrogates.

Appropriate reference criteria were then required to discriminate complex from ordered or random switching tables. We needed to compare the empirically observed DLC values against two thresholds: complex switching tables should have a DLC below a threshold for randomness and above a threshold for regularity. Given a switching table T, we constructed a randomized version Trand by independently permuting the entries of each of its rows. For each recording, we constructed 1000 instances of Trand and evaluated DLC for all of them, identifying the 5th percentile DLCrand = q5%[DLC(Trand)] as the upper threshold for complexity. We then constructed an enhanced-regularity version Tregular of each T by lexicographically sorting its entries row by row (to obtain blocks of homogeneous code words lasting as long as possible, built from exactly the same building bricks). We then arbitrarily defined a lower threshold for complexity as DLCregular = 2 DLC(Tregular). The thresholds DLCregular and DLCrand varied for every switching table. However, the criterion DLCregular < DLC < DLCrand was fulfilled for all considered recordings, whose state transition sequences could thus be certified as complex (in our arbitrary but quantitative and operational sense).

We could also restrict the evaluation of complexity to subtables containing only the words occurring during a selected global oscillatory state. In this way, we could compare the complexity of sequences occurring within different global states, e.g., REM versus nonREM. In Fig. 6E, we plot the relative complexity variation between two global states α and β

$$\Delta(\mathrm{DLC}) = (\mathrm{DLC}_\alpha - \mathrm{DLC}_\beta) / (\mathrm{DLC}_\alpha + \mathrm{DLC}_\beta)$$

We evaluated once again the confidence intervals for relative complexity variations via a leave-one-out jackknife on the columns of the global state–restricted switching tables.

Burstiness of state sequences

We also characterized switching tables in terms of their "style" of transitions, looking at the temporal statistics of intertransition times. We computed all intertransition times from a table T, i.e., the number of time steps from one block (a continuous time interval over which a same substate word is maintained) to the next. Note that these intertransition times are precisely the integers lq,n appearing in the block-length description Dblock(T) of the table T. After computing the mean μl and the SD σl of these intertransition times, we evaluated the burstiness coefficient (18)

$$B = \frac{\sigma_l / \mu_l - 1}{\sigma_l / \mu_l + 1}$$

This coefficient is bounded between −1 < B < 1 and equals 0 when transitions between substate words follow Poisson statistics; it is negative when the train of transitions is more periodic than Poisson and positive when it is more bursty.
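A minimal sketch of this computation, assuming `intertransition_times` holds the block lengths l read off a switching table (illustrative name):

```python
# A minimal sketch of the burstiness coefficient B = (r - 1)/(r + 1)
# with r = sigma/mu of the intertransition times.
import numpy as np

def burstiness(intertransition_times):
    """B in (-1, 1): 0 Poisson-like, < 0 more periodic, > 0 more bursty."""
    mu = np.mean(intertransition_times)
    sigma = np.std(intertransition_times)
    r = sigma / mu
    return (r - 1) / (r + 1)
```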

SUPPLEMENTARY MATERIALS

Supplementary material for this article is available at http://advances.sciencemag.org/cgi/content/full/5/6/eaax4843/DC1

Fig. S1. Recording paradigm.

Fig. S2. Global brain oscillatory states.

Fig. S3. Effects of global states on unit firing.

Fig. S4. Firing substates.

Fig. S5. Information storage is brain state dependent.

Fig. S6. Substates of information sharing: Additional information.

Fig. S7. Burstiness and used dictionary fraction explain complexity.

Table S1. Number of states and their oscillatory mode specificity.

Table S2. Sharing assembly liquidity across regions and conditions.

Table S3. Matching between substate sequences of different types across conditions and regions.

This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial license, which permits use, distribution, and reproduction in any medium, so long as the resultant use is not for commercial advantage and provided the original work is properly cited.

References and Notes

Acknowledgments: We wish to acknowledge A. Ghestem for his technical help in the experiments. We also wish to acknowledge V. Jirsa, L. Barnett, and T. Womelsdorf for constructive comments. We use part of the data collected in G. Buzsáki’s laboratory and originally published by Quilichini et al. (29) for our investigations. We acknowledge G. Buzsáki and A. Sirota for consenting to the use of this database in the present work. We also used part of the data originally published by Ferraris et al. (28). Funding: P.P.Q. acknowledges support from FRM, FFRE, and CURE Epilepsy Taking Flight Award. M.F. acknowledges support from FRM (Fondation Recherche Médicale FDT201805005246). The M-GATE project has received funding from the European Union’s Horizon 2020 research and innovation program under Marie Skłodowska-Curie grant agreement no. 765549. D.B. acknowledges support by the CNRS “Mission pour l’Interdisciplinarité” INFINITI program (BrainTime) and the French Agence Nationale pour la Recherche (ERMUNDY, ANR-18-CE37-0014-02). C.B. acknowledges the European Union’s Seventh Framework Program (FP7/2007-2013) under grant agreement no. 602102 (EPITARGET), ANR-13-NEUC-0005-01 (MOTION), and ANR-17-GRF2-0001-03 (Epigraph). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Author contributions: P.P.Q., A.F.V., and M.F. performed and administered all surgery, implantation, and experimental recordings. P.P.Q., A.F.V., and M.F. performed spike sorting, spectral analysis, and data preprocessing. W.C. and D.B. performed computational analysis. All authors designed the study and wrote the manuscript. Competing interests: The authors declare that they have no competing interests. Data and materials availability: All data needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary Materials. Example data can be found at https://doi.org/10.5281/zenodo.2620137. Additional data related to this paper may be requested from the authors. A current repository with code is available at https://github.com/wesleypclawson/ComCom.