## Abstract

Materials-by-design is a paradigm for developing previously unknown high-performance materials. However, finding materials with superior properties is often computationally or experimentally intractable because of the astronomical number of combinations in the design space. Here we report an AI-based approach, implemented in a game theory–based conditional generative adversarial neural network (cGAN), to bridge the gap between a material’s microstructure—the design space—and its physical performance. Our end-to-end deep learning model predicts physical fields such as stress or strain directly from the material microstructure geometry and reaches astonishing accuracy not only for the predicted field data but also for derivative material property predictions. Furthermore, the proposed approach is extensible, predicting complex materials behavior regardless of component shapes, boundary conditions, and geometrical hierarchy, and offers a new perspective on physical modeling and simulation. The method vastly improves the efficiency of evaluating the physical properties of hierarchical materials directly from the geometry of their structural makeup.

## INTRODUCTION

Because of the high demand for materials with superior mechanical properties and versatile functionalities, tuning composite designs has become a crucial part of materials development (*1*–*4*). The essence of composites lies in introducing heterogeneity through combinations of multiple materials with distinct, often disparate, properties. By optimizing the spatial distribution of composite constituents, vast enhancements of mechanical performance can be realized, as demonstrated in modern ceramic matrix materials (*5*, *6*), fiber-reinforced polymers (*7*), and architected materials, including biologically evolved and bioinspired designs (*8*–*11*). However, traditional manufacturing methods have limited tunable composite designs because of the difficulty of conjoining different base materials and manipulating complex microstructures (*12*). To address these issues, additive manufacturing has recently enabled the production of composites with complex geometries (*13*–*15*).

Calculating the physical properties of composites often relies on multiscale modeling approaches such as finite element method (FEM) (*16*) or molecular dynamics (MD) (*17*, *18*) simulations. However, the design space of composites usually comprises an intractable number of combinations, which prevents searching for optimal designs either via experimental measurements or computational simulations. In earlier work, enabled by advances in AI methods, machine learning (ML) algorithms were implemented, instead of a brute-force approach, to optimize composite designs based on FEM calculations for various loading conditions, such as tensile or shear loading (*19*, *20*). However, these methods do not yet predict physical fields and do not directly connect material microstructure to performance. For many design applications and engineering analyses, we need access to physical fields such as strain or stress tensor distributions. These types of physical data will also improve the physical relevance of AI approaches and provide more mechanistic insights.

In recent years, ML methods, especially deep learning (DL), have revolutionized our perspective on designing materials, modeling physical phenomena, and predicting properties (*21*–*26*). DL algorithms developed for computer vision and natural language processing can be used to segment biomedical images (*27*), design de novo proteins (*28*–*30*), and generate molecular fingerprints (*31*, *32*). Within the field of computational materials science, the abundance of data has enabled a boom of ML applications from the quantum scale up to the macroscale. By incorporating field-based (density functional theory), particle-based (MD), and continuum-based (FEM) modeling, ML sheds light on predictions of quantum interactions (*33*–*35*), molecular force fields (*36*–*38*), and material mechanics (*19*, *20*, *39*–*41*). However, most of these models are limited by the minimal level of information included in the prediction and by difficulty in generalization.

Here, we overcome these challenges and present a general method to translate material composition, provided to the model as a simple image of the geometry and microstructure that fully encodes material composition and boundary conditions, directly into strain or stress fields, physical fields that contain integral information about material behavior, bridging physics and design. We show that the physical relationship between composite geometries and strain or stress fields can be established directly at high levels of accuracy and predictability. While some earlier work has used convolutional neural networks to predict localization linkages for elastic deformation in composites (*42*), the results reported in this paper offer a method that can deal with more general mechanical problems and can more easily incorporate experimental data.

## RESULTS

### Model development and training

A generative adversarial network (GAN) is a type of deep neural network that generates new data based on the statistics of the training set (*43*). GANs consist of two key components, known as the Generator and the Discriminator, which are trained against each other based on game theory. The Generator generates candidates that are evaluated by the Discriminator. Although GANs were originally proposed as a form of unsupervised learning, labels can be incorporated as constraints, which leads to a subtype of GANs known as the conditional GAN (cGAN). In our work, we develop a DL model implemented in a cGAN with a paired image as the constraint (*44*). The cGAN model has two key components, a U-Net Generator and a PatchGAN Discriminator (*44*). The geometric images (labels or constraints), together with random noise, are fed into the Generator to generate the field images of interest. The Discriminator evaluates these generated field images by comparing them with real field images obtained from FEM. The training objective of the Generator is to increase the error rate of the Discriminator, while the Discriminator is trained to maximize its capacity for identifying fake images produced by the Generator. Within the game theory framework used here, the GAN model converges when the Discriminator and the Generator reach a Nash equilibrium, in which each component maintains its strategy regardless of the actions of its opponent.
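The adversarial objective can be sketched numerically. The following NumPy sketch is our own illustration, not the paper's code (which uses TensorFlow); it shows a numerically stable sigmoid cross-entropy form of the Generator and Discriminator losses used in pix2pix-style cGANs, with function names and the `lam` weight chosen to match the loss definitions given later in Materials and Methods.

```python
import numpy as np

def sigmoid_cross_entropy(logits, labels):
    # Numerically stable sigmoid cross-entropy, averaged over elements.
    return np.mean(np.maximum(logits, 0) - logits * labels
                   + np.log1p(np.exp(-np.abs(logits))))

def discriminator_loss(real_logits, fake_logits):
    # D is trained to call real field images "1" and generated ones "0".
    return (sigmoid_cross_entropy(real_logits, np.ones_like(real_logits))
            + sigmoid_cross_entropy(fake_logits, np.zeros_like(fake_logits)))

def generator_loss(fake_logits, generated, target, lam=100.0):
    # G is trained to make D call its outputs "1", plus an L1 term that
    # pulls the generated field toward the FEM ground truth.
    gan_loss = sigmoid_cross_entropy(fake_logits, np.ones_like(fake_logits))
    l1_loss = np.mean(np.abs(target - generated))
    return gan_loss + lam * l1_loss
```

At the Nash equilibrium, the Discriminator's logits hover near zero (it cannot tell real from fake), while the L1 term keeps the Generator's fields pixel-wise close to the FEM targets.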

Once the ML model is trained, new field images can be predicted bypassing conventional numerical simulations. The field predictions can be further used to extract mechanical properties of the composite given its geometry. As we show in the following sections, not only mechanical properties including stiffness and recoverability can be obtained but also local features such as stress concentrations around cracks or sharp inclusions. More details are provided in the Supplementary Materials.

### From geometry to strain or stress field

We focus on two-dimensional (2D) composites with two constituent materials, defined as soft units (red) and brittle units (white). Both units exhibit linear plasticity and strain hardening, characterized by the “crushable foam” model in Abaqus (see Materials and Methods). Brittle units have a relatively larger Young’s modulus but a smaller yield strain compared with soft units. The initial composite considered consists of 8 × 8 block units (see Materials and Methods) (*19*, *20*), keeping the size small so that we can conduct a brute-force validation of the ML method against full-physics simulations. While the generated patterns are at coarse resolutions (8 × 8 in the first example), the overall image resolution is 256 pixels × 256 pixels.

The arrangement of block units is produced by a random geometry generator so that we can explore a broad range of arrangement combinations (Fig. 1). For each geometry, strain or stress fields of the composite under mechanical tests such as compression are obtained using FEM (see Materials and Methods). The FEM results are regarded as the ground truth in this work. Both geometry images and strain or stress field images are collected to compose the dataset. The dataset consists of 2000 samples in total, which is split into a training dataset with 80% of the data and a test dataset with the remaining 20%.
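As a minimal sketch of how such a dataset could be assembled (our own illustration, not the paper's code): the 0/1 labels for brittle/soft units and the mirroring about the vertical axis follow the Y-symmetric 8 × 8 representation described in Materials and Methods, which gives the 2^{32} design space.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_geometry(n=8):
    # Choose the left half (n x n/2) freely, then mirror it about the
    # vertical axis; 8 * 4 = 32 free units give 2**32 combinations.
    half = rng.integers(0, 2, size=(n, n // 2))  # 0 = brittle, 1 = soft
    return np.hstack([half, half[:, ::-1]])

# Build a dataset of 2000 geometries and split it 80/20 at random.
data = [random_geometry() for _ in range(2000)]
idx = rng.permutation(len(data))
train = [data[i] for i in idx[:1600]]
test = [data[i] for i in idx[1600:]]
```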

### Evaluation of model performance

To evaluate model performance, we consider the mechanical behavior of composites under compressive loading and unloading (loading applied in the *x* direction, or horizontal axis, Fig. 2A) as an example (see Materials and Methods and movie S1). The geometry image is the input, while the field image of residual stress after unloading is the target. This represents a complex physical scenario and serves as a test case for the method reported here. Typical strain or stress field predictions on the test dataset are shown in Fig. 2A. Notably, both stress and strain fields as well as displacements (incorporated in the overall deformation of the geometry) are captured in our predictions, as shown in Fig. 3A. To further quantitatively evaluate the similarity between the ground truth and the prediction, the *L2 norm* (see Materials and Methods) is calculated for all 400 samples in the test dataset for the von Mises stress field prediction.

The distributions of the *L2 norm* of predicted contours are plotted against two reference contours, clean contours and random contours (Fig. 3A). According to the figure, the mean *L2 norm* of the predicted contours is much lower than that of the random contours (~1.8, normalized by the predicted contour) and the clean contours (~2.9, normalized by the predicted contour). Furthermore, the narrower peak of the predicted contours indicates that their variation is smaller as well (random contour ~2.8 and clean contour ~2.7, normalized by the predicted contour). These results show that our ML model achieves both high accuracy and broad applicability in predicting strain or stress fields directly from the composite geometry.

With predicted fields, we can further derive secondary mechanical properties of the composites. For instance, the mechanical recoverability of a composite can be computed after it is subjected to compressive loading and unloading. Here, recoverability is defined by the average residual stress of the composite. Using the information obtained from the field images, our model predicts the geometries associated with the top three recoverability levels in the test dataset exactly (Fig. 3C). More broadly, our ML model accurately predicts the ranks and values of recoverability, or residual stress, over all 400 composites in the test dataset (Fig. 3D).

The *R*^{2} value of the linear fit between the ML model prediction and the ground truth is 0.96, and the average relative error of the ML model prediction of the global property recoverability is 7.5%. We emphasize that the ML model was not trained directly to predict recoverability, as this measure is a secondary extracted feature. Nevertheless, the model predicts this secondary information obtained from the field images precisely with only 1600 training samples. The case of recoverability suggests that our model is not only producing field images that look similar to the ground truth but also predicting specific pixel values at high accuracy. As a consequence, the model can be used to optimize the mechanical properties of composites by varying geometries. Furthermore, there is no longer a need to develop different ML models for different mechanical properties (*19*, *20*), as multiple properties can be derived from the predicted fields and deformation already included in our model.
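Extracting recoverability from a predicted field image amounts to decoding the color contour back into stress values and averaging over the composite. A hedged sketch of this step (our own, assuming the white-to-red spectrum described in Materials and Methods and a linear ramp on the green channel; the actual contour is discretized into 24 intervals, and the stress bounds `s_min`/`s_max` vary by dataset):

```python
import numpy as np

def decode_stress(rgb, s_min=0.0, s_max=1.0):
    # Invert the contour spectrum of the field images: white
    # (216, 216, 216) maps to s_min and red (216, 7, 0) to s_max.
    # A linear ramp on the green channel is an approximation.
    g = rgb[..., 1].astype(float)
    frac = np.clip((216.0 - g) / (216.0 - 7.0), 0.0, 1.0)
    return s_min + frac * (s_max - s_min)

def recoverability(rgb_field, background=(0, 0, 0)):
    # Average residual von Mises stress over non-background pixels.
    mask = np.any(rgb_field != np.array(background), axis=-1)
    return decode_stress(rgb_field)[mask].mean()
```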

In addition to mechanical property predictions, we also check the reliability of local, smaller-scale patterns. To assess this capability, an ML model trained with the 8 × 8 representation is used to study finer patterns at an increased 16 × 16 resolution (Fig. 3E). Clear boundaries of the checkerboard pattern are predicted even though the 16 × 16 representation has never been seen by the model. This consistency across scales reveals that the GAN has learned a physical understanding of mechanical phenomena, in this case, stress concentrations and how they emerge from complex microstructural patterns. Although the checkerboard stress field agrees well, it is important to assess the divergence of the stress values, or higher-order information such as the gradient of the strain or stress fields, across different scales. To reduce this divergence across scales in future work, an alternative model could be trained by incorporating multiple representations (e.g., 8 × 8, 16 × 16, and 32 × 32 resolutions, each with 1000 images) into the dataset. With such multiscale information, the model will likely be able to learn how to upscale or downscale the strain or stress fields across different hierarchies.
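The cross-resolution test can be reproduced in outline: the coarse unit grid is rasterized onto the fixed 256 × 256 pixel canvas the model consumes, so an 8 × 8-trained model can be fed a 16 × 16 checkerboard without any architectural change. A sketch (our own, with `np.kron` as an assumed rasterization step):

```python
import numpy as np

def checkerboard(n):
    # n x n alternating pattern of brittle (0) and soft (1) units.
    return np.indices((n, n)).sum(axis=0) % 2

def rasterize(units, px=256):
    # Expand a coarse unit grid to the px x px image resolution the model
    # consumes; each unit becomes a (px/n) x (px/n) block of pixels.
    n = units.shape[0]
    return np.kron(units, np.ones((px // n, px // n), dtype=units.dtype))

img8 = rasterize(checkerboard(8))    # resolution seen in training
img16 = rasterize(checkerboard(16))  # finer pattern, same 256 x 256 canvas
```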

In conclusion, our model’s capacity to recognize scale-independent local patterns can be applied to predict the strain or stress fields of hierarchical structures and is useful for design applications that explore a broad range of de novo microstructures.

### Nonsquare features

In the earlier examples, the composite was composed of square units. Now, we explore whether our model can be extended to composites with nonsquare elementary design units. To demonstrate this possibility, both hexagonal and triangular tessellations are used for designing composite elements. To do this, the same model architecture used for the square representation is trained on datasets in hexagonal and triangular representations, respectively. The von Mises stress fields calculated by FEM and by the trained model show high similarity, as visualized in two example cases (Fig. 4A). The results indicate that the model can be easily generalized to composites of different shapes.

### Multiple loading conditions

The input geometry images used above do not yet include any information about loading conditions, which were instead fixed implicitly by the training set. Now, we extend the model to learn variations in boundary conditions, which are embedded directly in the images. The loading conditions are specified by adding green lines to our geometry images and field images (Fig. 4A). These green lines represent rigid bodies that exert loads during the FEM simulations. The straight green lines are two rigid lines used to compress the composites along the *x* direction. The circular green line is a spherical (2D) rigid indenter used for nanoindentation. Hence, the dataset now consists of two sets of data: one set of images contains two straight green lines showing the results of compression, and the other set includes one circular line exhibiting the results of nanoindentation. The geometries of the composites in the two sets are the same.
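A sketch of how such loading-condition markers might be painted onto the input images (our own illustration; in the paper the green markers are drawn by the Abaqus visualization, and their exact placement and thickness are not set by code like this):

```python
import numpy as np

GREEN = np.array([0, 255, 0], dtype=np.uint8)

def add_compression_plates(img, thickness=4):
    # Two straight green lines at the left/right edges stand in for the
    # rigid plates that compress the composite along x.
    out = img.copy()
    out[:, :thickness] = GREEN
    out[:, -thickness:] = GREEN
    return out

def add_indenter(img, center, radius, thickness=2):
    # A green circle marks the 2D rigid indenter for nanoindentation.
    out = img.copy()
    yy, xx = np.mgrid[:img.shape[0], :img.shape[1]]
    r = np.hypot(yy - center[0], xx - center[1])
    out[np.abs(r - radius) < thickness] = GREEN
    return out
```

Because the markers live in the same pixel space as the geometry, the two conditions can also be stacked in one image, which is how the mixed compression-plus-indentation input is formed.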

After being trained with the mixed dataset, the model is able to recognize different loading conditions and make the relevant predictions (Fig. 4A). More specifically, the model transfers the loading conditions from the input geometry images to the prediction and predicts the von Mises stress field according to the specified loading conditions. More unexpectedly, when the two loading conditions are combined in a geometry image, a case not included in our training dataset, the ML model is still able to predict the stress field accurately. As shown in Fig. 4A, both the stress field caused by compression (a general stress pattern similar to that of single compression) and that caused by nanoindentation (stress concentration at the top) are predicted by our model.

The ability of our model to read two distinct loading conditions from images and predict fields accordingly shows the potential of applying one model to multiple mechanical tests and offers evidence of transferability. In addition, in principle, the model should also be able to predict strain or stress fields for multistage mechanical tests such as multiple loading cycles. One straightforward way to achieve this within the framework reported here is to leverage different color codes for different loading stages. For instance, as the loading cycles increase, we can vary the colors of the green lines in Fig. 4B from light green to dark green to encode the loading stages for training. With our well-trained model, FEM simulations that are usually carried out by conventional codes can be performed at much higher speed and lower computational cost. Last, our model can also be used for complex loading conditions, given its capacity to predict fields under mixed loading conditions.

### High-resolution geometry

For real-world composites, geometries can be quite complicated (*45*, *46*). As a consequence, a relatively simple 8 × 8 representation may not be sufficient for practical applications. To study complex geometries, we trained an even deeper model (see Materials and Methods) with a 32 × 32 representation (Fig. 4C). As before, the loading condition is one cycle of compressive loading and unloading, with von Mises stress field images as the target outputs. To show an example of a more complicated composition and the associated prediction, a geometry image in the shape of the MIT dome is used as the input. Both the ground truth and the field image predicted by the model exhibit a similar stress pattern (Fig. 4C). In other words, the deeper model trained with the 32 × 32 representation is more powerful than that with the 8 × 8 representation because it can better represent nonsquare patterns, such as the sector in Fig. 4C, and cover more situations in which a higher resolution is required. The example reveals the potential of the model to predict high-resolution fields for general inputs.

To quantitatively investigate the accuracy of the high-resolution prediction, we calculated the *L2 norm* as we did for the 8 × 8 representation. However, a 32 × 32 representation contains many more local patterns than an 8 × 8 representation. As a result, a comprehensive comparison of differences across the whole image is more suitable than a single value. Hence, we use an *L2 norm* map instead of a single *L2 norm*. The *L2 norm* map is obtained by calculating the local *L2 norm* region by region when comparing two images (see Materials and Methods). We randomly generated a 32 × 32 geometry image to check the similarity between the ground truth and the prediction using the *L2 norm* map.

Two reference contour images, the clean contour and the random contour, are selected in the same way as for the 8 × 8 representation (Fig. 3A). The *L2 norm* map of the clean contour exhibits the positions of stress concentration, while for the random contour, the map includes mixed information from the two fields. According to the *L2 norm* map, the predicted contour is globally accurate, with low *L2 norm* values across the map. We find that the model is less accurate at points with large stress concentrations. The reason is that the ML model is inclined to smooth out sharp peaks—stress concentrations—to achieve an overall low loss. However, the tendency toward large stress at those points is clearly predicted by our model, as evidenced by comparison with the *L2 norm* map of the clean contour. With the capacity to predict fields for high-resolution geometries, the model can be used to investigate composites with complex patterns, as is necessary in design applications.

## DISCUSSION

We proposed a DL-based approach that provides a direct translation of the composite geometry to strain or stress fields, realized using a game-theoretic approach implemented as a GAN. The neural networks are trained with a relatively small amount of data but reach astonishing accuracy, transferability, and, hence, broad applicability. Multiple mechanical fields are predicted with the same trained model, covering multiple aspects of material behavior (Fig. 2 and fig. S3). By extracting information from the field predictions, the model can identify the top designs for recoverability exactly and capture scale-consistent local patterns. In terms of the computational speedup gained by our method, the predictions of the pretrained model take less than a second on an Intel i7-9800X (3.80 GHz) CPU core, whereas standard FEM simulations usually last minutes, hours, or even days. Therefore, the approach is a promising tool to accelerate the discovery of optimal geometric designs for materials such as metamaterials and hierarchical materials.

By using soft material units to represent crack-like regions, the model can predict analogous stress fields around cracks (see Materials and Methods). The results agree with established knowledge of the patterns of stress fields around cracks, even though the ratio of soft material in these cases lies at the extreme of the data distribution (fig. S4). The predictions are consistent regardless of the shapes and positions of the cracks or sharp inclusions and could even be used to describe the evolution of the fields as cracks propagate (movie S1). As a result, the model could also be used for crack-related design problems such as crack-resistant materials (*47*), providing added evidence for the transferability of the method.

Our model can capture the mechanical behavior of composites with various shapes, different loading conditions, and complicated geometries. It can predict mechanical behavior from random geometries and opens the gate to accelerated searches for optimal composite designs with multiple mechanical functionalities from the bottom up. In that context, one of the extraordinary advantages of using predicted stress and strain fields is that fields contain comprehensive information for various design purposes. Thus, the approach offers a high level of efficiency in predicting physical properties and accelerates the design process based on a transferable ML method. Furthermore, we extended our model to incorporate nonsquare representations, multiple loading conditions, cracks, and high-resolution geometries. These extensions enable us to cover a tremendous range of FEM applications at lower computational cost and to investigate complex systems of interest. Moreover, the approach reported here can also be applied directly to experimental images for training of the model, which underscores the transferability of the approach and provides a previously unidentified way to combine bottom-up modeling with experimental data sources for predictive methods. Future work could also focus more on global mechanical properties derived from local stress and strain fields, as opposed to the local fields focused on in this study. Although we covered a detailed analysis of one global property—recoverability—some global properties may be more sensitive to divergence within the predicted maps. For instance, fracture toughness is strongly correlated with local predictions of the stress field around a crack tip. In that case, the training data may need to be modified accordingly, such as by incorporating sharp edges and dynamic effects.

Combined with optimization algorithms (*20*), the model can speed up the discovery of optimal designs without heavy computational loads. More broadly, the proposed approach is a general method for geometry-to-field translation. A protocol similar to the one developed here could be applied to other areas in the sciences, such as density functional theory fields, fluid mechanical fields, or electromagnetism.

## MATERIALS AND METHODS

### Graphical representation of composites

We use an 8 × 8 graphical representation for the composite profile, which is shown in fig. S1A. Each composite with a given geometry consists of two different block units, labeled as brittle units and soft units. The brittle and soft units are mechanically distinct in elasticity and plasticity. A brute-force algorithm is executed to generate all possible combinations of the two different units without fixing the composition. To simplify the geometry and shorten the calculation time in Abaqus, all composites created by the algorithm are symmetric about the *Y* axis.

### FEM

The dataset is generated by FEM, using the commercial Abaqus/Explicit code (Dassault Systemes Simulia Corp., 2010). In our work, strain or stress fields calculated by FEM are considered the ground truth when comparing with ML results. All simulations are carried out in 2D. To include plastic deformation, the crushable foam model with volumetric hardening embedded in Abaqus is used for the composites (*48*, *49*). Detailed material properties of the two different units in the composites are given in tables S1 and S2. Table S1 displays the overall mechanical behavior of the materials, while the specific hardening curve is defined by table S2. To briefly summarize the difference between the two materials shown by the data, the brittle unit has a relatively larger Young’s modulus but a smaller yield strain compared with the soft unit.

Once the material properties and geometry are defined, the composite is subjected to compressive loading and unloading for one cycle in Abaqus (movie S2). The element type “CPE4R” with a global size of 6 is used for the composites during the calculation. In terms of boundary conditions, the middle line of the composite is held immobile along the *x* direction because of the symmetry. The loading and unloading are realized by moving two symmetric analytical rigid shells. During the loading process, the magnitude of the compressive strain is 10%. The contact between the composite and the 2D shell is surface-to-surface contact using the kinematic contact method. For either loading or unloading, the 2D shell moves over a time period of 500 (units consistent with table S1). Within this time period, the amplitude of the displacement of the rigid shell follows the “Smooth step” type embedded in Abaqus. After loading and unloading, images of strain or stress fields are postprocessed and collected for the dataset.

### Data distribution

According to our graphical representation, there are 2^{32} possible combinations in total. Among all potential geometries, we randomly select 2000 samples to constitute the dataset. The distribution of all 2000 samples is plotted over the ratio of soft units (fig. S1A). As the figure indicates, the distribution generally fits a Gaussian curve with the mean value lying at approximately 0.5. The curve reveals an unbiased dataset without skew. As a consequence, we suggest that our dataset is representative of all possible data in the search space. With 2000 samples randomly generated, we further split them into a training dataset with 80% of the data and a test dataset with the remaining 20%. The split is also random, so the distribution is conserved.
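The claimed shape of the distribution follows from the representation itself: with 32 free units, each soft or brittle with equal probability, the soft-unit ratio of a random geometry follows Binomial(32, 0.5)/32, which is approximately Gaussian with mean 0.5. A quick numerical check (our own sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

# Soft-unit ratio of 2000 random geometries: 32 free units (the left
# half of a Y-symmetric 8 x 8 grid), each soft with probability 0.5.
ratios = rng.binomial(32, 0.5, size=2000) / 32.0
mean, std = ratios.mean(), ratios.std()
# Theory: mean = 0.5, std = sqrt(0.25 / 32) ~ 0.088.
```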

### Image processing

The images of the mechanical fields are generated first in Abaqus Visualization and then postprocessed by Python code. For the generation, the images are plotted with both deformation (to capture the displacement field in the geometry change) and strain or stress field contours. The color spectrum for the field contours uses “white” [RGB = (216, 216, 216)] for the lower bound and “red” [RGB = (216, 7, 0)] for the upper bound. The lower and upper bounds vary from case to case but are consistent within one dataset. The same spectrum is also used for the geometry input, with white [RGB = (216, 216, 216)] for brittle units and red [RGB = (216, 7, 0)] for soft units. The contour style used is “DISCRETE,” and the number of intervals is set to 24. For edges, “FEATURE” is used for visible edges, and “THICK” is used specifically for the green lines, which represent analytical rigid bodies. Any triad, legend, title, state, annotation, compass, and reference point are turned off to exclude extraneous information from the images. In addition, the background of the image is set to solid black [RGB = (0, 0, 0)]. After all settings are applied, an image is generated with the “print” command in Abaqus.

On the basis of the images created by Abaqus, a Python code is executed to cut, resize, and stitch the images. The input image (geometry image) and target image (field image) are cut to keep the proportion of black background similar vertically and horizontally. The purpose of the cutting operation is to ensure that the composites in the input image and target image generally match in shape so that they can be read by the GAN model. Once the images are cut, they are all resized to 512 × 512. After resizing, the two images are stitched together in accordance with the format of the model input. OpenCV (*50*) is used for reading, cutting, plotting, and stitching the images.
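A minimal sketch of the resize-and-stitch step (our own illustration; the paper uses OpenCV, whereas this stand-in uses a nearest-neighbor resize in NumPy so that the snippet stays self-contained):

```python
import numpy as np

def resize_nn(img, size=512):
    # Nearest-neighbor resize to size x size (stand-in for cv2.resize).
    ys = np.arange(size) * img.shape[0] // size
    xs = np.arange(size) * img.shape[1] // size
    return img[ys][:, xs]

def stitch(input_img, target_img):
    # Paired sample in pix2pix format: geometry image on the left,
    # field image on the right, each 512 x 512.
    return np.hstack([resize_nn(input_img), resize_nn(target_img)])
```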

### Image comparison

To compare field images, the *L2 norm* is calculated to quantify the difference between two images. Given two images *P1* and *P2*, the *L2 norm* between these two images is defined as

$$L_{2}\,\mathrm{norm} = \sqrt{\sum_{i,j} \left( P1_{i,j} - P2_{i,j} \right)^{2}}$$

where the sum runs over all pixel positions (*i*, *j*).

In Fig. 4D, the *L2 norm* map is used instead of the *L2 norm* for comparing high-resolution field images. To derive an *L2 norm* map, we first slice the 512 × 512 images into a 128 × 128 grid of patches, each of size 4 × 4. Second, for each pair of corresponding patches in the two images, the *L2 norm* is calculated. As a result, we end up with a 128 × 128 matrix, with each element corresponding to the difference between a pair of patches in the two images. This matrix is the *L2 norm* map.
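The two metrics can be written down directly; this sketch (our own, following the patch sizes given above) computes the global *L2 norm* and the per-patch *L2 norm* map:

```python
import numpy as np

def l2_norm(p1, p2):
    # Pixel-wise L2 norm between two field images.
    return np.sqrt(np.sum((p1.astype(float) - p2.astype(float)) ** 2))

def l2_norm_map(p1, p2, patch=4):
    # Slice 512 x 512 images into a 128 x 128 grid of 4 x 4 patches and
    # compute the L2 norm patch by patch.
    n = p1.shape[0] // patch
    out = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            s = np.s_[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            out[i, j] = l2_norm(p1[s], p2[s])
    return out
```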

When evaluating the performance of the ML models, we calculate the *L2 norm* or the *L2 norm* map between the ground truth and the prediction (“predicted contour”). The ground truth is also compared with two field images for direct reference. These two field images are labeled as “random contour” and “clean contour” (fig. S2). The “random contour” is a contour image randomly selected from the test dataset. The “clean contour” is a contour image without any deformation or field (all values across the image are 0).

In addition, the relative error comparing the residual stress values of the ML predictions and the FEM results is computed simply following the expression

$$\mathrm{relative\ error} = \frac{\left| S_{\mathrm{ML}} - S_{\mathrm{FEM}} \right|}{S_{\mathrm{FEM}}}$$

where *S*_{ML} is the average von Mises stress from the ML predictions and *S*_{FEM} is the average von Mises stress from the FEM simulation results.

### DL approach

The ML calculations are performed using TensorFlow (*51*), a general-purpose ML framework. A GAN is implemented for translating composite geometries to mechanical fields (*44*). A GAN is a type of deep neural network consisting of a Generator and a Discriminator based on game theory (*43*). In our model in particular, a U-Net is the Generator and a PatchGAN is the Discriminator (*44*). U-Net is a deep neural network model that has been used for biomedical image segmentation in earlier work (*27*). Here, we use the U-Net to generate fake field images based on composite geometries. The PatchGAN evaluates the generated field images from the U-Net by comparing them with real field images. The model was trained using a single NVIDIA Quadro RTX 4000 with 8 GB of memory. The number of training epochs ranges from 150 to 400 in different cases to achieve convergence, and the batch size of the training is 1. Although GANs are notorious for training instability, cGANs are relatively more stable because of the constraint of the input labels (*52*). The numbers of training epochs were carefully tuned so that the training process is neither too short for the generator to learn enough information nor so long that the discriminator always evaluates the generated images as fake (*52*). We checked the predictions of our ML model on the test set to determine appropriate training epochs. To show the stability of our training process, we provide two datasets (datasets S1 and S2), each containing predicted images over the 400 geometries of the test set for multiple fields.

The Generator U-Net consists of two components, an encoder and a decoder, with skip connections between mirrored layers (*27*). Both the encoder and the decoder contain eight layers, including the input and output layers. The complete architecture is shown in table S4. The total number of trainable parameters is approximately 5,000,000. The Generator loss function is defined as

$$generator\_loss = gan\_loss + \mathrm{LAMBDA} \times L1\_loss$$

Here, *gan_loss* is a sigmoid cross-entropy loss between the generated images and an array of ones, and *L1_loss* is the mean absolute error between the generated image and the target image. LAMBDA is a proportion coefficient equal to 100.
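The two loss terms can be sketched in plain Python (a hedged illustration only; the paper's implementation uses TensorFlow's built-in losses, and `disc_logits`, `gen_image`, and `target` are hypothetical flattened lists standing in for image tensors):

```python
import math

LAMBDA = 100.0  # proportion coefficient, as stated in the text

def sigmoid_cross_entropy_with_ones(logits):
    """Mean sigmoid cross-entropy against an array of ones.

    Numerically stable form matching tf.nn.sigmoid_cross_entropy_with_logits
    with labels z = 1: max(x, 0) - x*z + log(1 + exp(-|x|)).
    """
    losses = [max(x, 0.0) - x * 1.0 + math.log1p(math.exp(-abs(x))) for x in logits]
    return sum(losses) / len(losses)

def l1_loss(gen_image, target):
    """Mean absolute error between generated and target pixel values."""
    return sum(abs(g - t) for g, t in zip(gen_image, target)) / len(gen_image)

def generator_loss(disc_logits, gen_image, target):
    """gan_loss + LAMBDA * L1_loss, as defined in the text."""
    gan_loss = sigmoid_cross_entropy_with_ones(disc_logits)
    return gan_loss + LAMBDA * l1_loss(gen_image, target)
```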

The Discriminator PatchGAN consists of eight layers with approximately 3,000,000 trainable weights. The complete architecture is shown in table S3. PatchGAN evaluates the generated field images by classifying individual patches of the image (the image is sliced into 30 × 30 patches in our case) as “real” or “fake.” The Discriminator loss function is defined as

$$discriminator\_loss = real\_loss + generated\_loss$$

Here, *real_loss* is a sigmoid cross-entropy loss between the real images and an array of ones, and *generated_loss* is a sigmoid cross-entropy loss between the generated images and an array of zeros.
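A minimal plain-Python sketch of this loss, under the same assumptions as above (flattened lists of per-patch logits stand in for the Discriminator's output tensors; the paper's code uses TensorFlow's built-in cross-entropy):

```python
import math

def sigmoid_cross_entropy(logits, label):
    """Mean sigmoid cross-entropy against a constant label (1.0 or 0.0).

    Stable form of max(x, 0) - x*z + log(1 + exp(-|x|)), as in
    tf.nn.sigmoid_cross_entropy_with_logits.
    """
    losses = [max(x, 0.0) - x * label + math.log1p(math.exp(-abs(x))) for x in logits]
    return sum(losses) / len(losses)

def discriminator_loss(real_logits, generated_logits):
    real_loss = sigmoid_cross_entropy(real_logits, 1.0)            # real patches vs. ones
    generated_loss = sigmoid_cross_entropy(generated_logits, 0.0)  # fake patches vs. zeros
    return real_loss + generated_loss
```

A Discriminator that confidently separates real from fake (large positive logits on real patches, large negative logits on generated ones) drives this loss toward zero.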

### Nonsquare features

Hexagonal and triangular representations are used to test whether our model can be applied to composites with different unit shapes. The sizes of the composites in the hexagonal and triangular representations are 312 × 315 and 320 × 332 (*x* × *y*), respectively. At the boundaries of the composite, both hexagonal and triangular units are cut in half. The field images for both cases were generated after one cycle of compressive loading and unloading. The magnitude of compressive strain is 10% for the hexagonal representation and 5% for the triangular representation; the smaller strain in the triangular case avoids singularities at the edges of the triangles when the FEM simulations are carried out.

### Distinct loading conditions

To explore the capacity of our ML model to predict strain or stress fields under various loads, we use two mechanical tests: nanoindentation and compression. The general FEM setup of these mechanical tests is the same as that described above (see the “FEM” section). However, we do not unload the composite after loading, because the field image after unloading in nanoindentation contains very little information, as most of the composite is stress free. A compressive strain with a magnitude of 5% is used, again to avoid singularities during the nanoindentation simulation. To encode the loading condition in the images, green lines are used to represent the analytical rigid bodies that induce the deformation (Fig. 4D).

For the nanoindentation test, rather than a straight-line rigid body, a spherical indenter is used to apply the load (movie S3). The indenter is placed at the top center of the composite, with a radius of 40 (units consistent with table S1). In addition to the boundary condition mentioned in the “FEM” section, the motion at the bottom of the composite is constrained along the *y* direction for both compression and nanoindentation. With consistent boundary conditions, we are able to combine more than one load in one mechanical test.

### Composite with high-resolution geometry

To capture more complex patterns, additional composite designs with high-resolution geometry (32 × 32) are generated and added to the dataset. Keeping the overall length of the composites consistent, smaller square units with a length of 10 are used. All FEM settings are the same as those for the low-resolution geometry (8 × 8), except for two differences. First, the element size decreases from 6 to 2 for calculation accuracy and convergence. Second, the magnitude of compressive strain is set to 5% instead of 10%; the smaller strain avoids singularities when running the FEM, as the soft/brittle units are smaller.

For the GAN model, we use original images with a size of 512 × 512 as the input, as the geometries are more complicated with smaller block units. As a consequence, one extra convolutional layer is added ahead of all layers in the Generator to downsample images from 512 × 512 to 256 × 256. Similarly, one more layer is added after all layers in the Discriminator to upsample images from 256 × 256 to 512 × 512.
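The shape bookkeeping for such stride-2 resampling layers can be sketched in plain Python (a hedged illustration of the standard “same”-padding convolution arithmetic only, not the paper's TensorFlow code):

```python
import math

def same_padding_conv_size(size: int, stride: int) -> int:
    """Spatial output size of a 'same'-padding strided convolution: ceil(size / stride)."""
    return math.ceil(size / stride)

def transposed_conv_size(size: int, stride: int) -> int:
    """Spatial output size of a 'same'-padding transposed convolution: size * stride."""
    return size * stride

# Extra stride-2 layer ahead of the Generator: 512 -> 256
print(same_padding_conv_size(512, 2))  # → 256
# Extra stride-2 transposed layer after the Discriminator: 256 -> 512
print(transposed_conv_size(256, 2))    # → 512
```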

### Analogous stress fields prediction around cracks

Soft materials act like cracks when surrounded by brittle materials, as they create large stress concentrations and take up the majority of the deformation. On the basis of this idea, we generate composites with soft materials in narrow shapes with sharp edges embedded in brittle materials. We use the model trained on high-resolution geometry to predict the stress field around the soft materials. Two types of cracks with different shapes and positions are investigated (fig. S4).

## SUPPLEMENTARY MATERIALS

Supplementary material for this article is available at http://advances.sciencemag.org/cgi/content/full/7/15/eabd7416/DC1

This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial license, which permits use, distribution, and reproduction in any medium, so long as the resultant use is **not** for commercial advantage and provided the original work is properly cited.

## REFERENCES AND NOTES

**Acknowledgments:** We acknowledge support from the Google Cloud platform and MIT Quest for providing computational resources.

**Funding:** We acknowledge support by the Army Research Office (W911NF1920098), the Office of Naval Research (N000141612333), and AFOSR-MURI (FA9550-15-1-0514).

**Author contributions:** M.J.B., Z.Y., and C.-H.Y. conceived the idea. Z.Y., C.-H.Y., and M.J.B. developed the model and carried out the simulations. Z.Y. and C.-H.Y. curated the training and testing data. M.J.B. supervised the project, analyzed the results, and interpreted them with Z.Y. and C.-H.Y. Z.Y., C.-H.Y., and M.J.B. wrote the manuscript.

**Competing interests:** The authors declare that they have no competing interests.

**Data and materials availability:** All data needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary Materials. Additional data related to this paper may be requested from the corresponding author.

- Copyright © 2021 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works. Distributed under a Creative Commons Attribution NonCommercial License 4.0 (CC BY-NC).