1. Title of the Publication

Rapidly Evolving Soft Robots via Action Inheritance

2. Author Information: the name, complete physical mailing address, e-mail address, and phone number of EACH author of EACH paper(s)

Shulei Liu; School of Artificial Intelligence, Xidian University, Xi’an 710071, China; shuleiliu@126.com; +86-18366185139.
Wen Yao; Defense Innovation Institute, Chinese Academy of Military Science, Beijing 100071, China; wendy0782@126.com; +86-18518169621.
Handing Wang; School of Artificial Intelligence, Xidian University, Xi’an 710071, China; hdwang@xidian.edu.cn; +86-18681895468.
Wei Peng; Defense Innovation Institute, Chinese Academy of Military Science, Beijing 100071, China; weipeng0098@126.com; +86-15874970410.
Yang Yang; Defense Innovation Institute, Chinese Academy of Military Science, Beijing 100071, China; bigyangy@gmail.com; +86-18514592959.

3. Corresponding Authors

Wen Yao; Handing Wang.

4. The abstract of the paper

The automatic design of soft robots is characterized by jointly optimizing structure and control. As reinforcement learning is increasingly used to optimize control, the time-consuming controller training makes soft robot design an expensive optimization problem. Although surrogate-assisted evolutionary algorithms have achieved remarkable results on expensive optimization problems, they typically struggle to construct accurate surrogate models due to the complex mapping among structure, control, and task performance. We therefore propose an action inheritance-based evolutionary algorithm to accelerate the design process. Instead of training a controller, the proposed algorithm uses inherited actions to control a candidate design to complete a task and obtain its approximated performance. Inherited actions are near-optimal control policies that are partially or entirely inherited from the optimized control actions of a real-evaluated robot design.
Action inheritance plays the role of a surrogate model whose input and output are the structure and near-optimal control actions, respectively. We also propose a random perturbation operation to estimate the error introduced by inherited control actions. The effectiveness of the proposed method is validated on a wide range of tasks, including locomotion and manipulation. Experimental results show that our algorithm outperforms three other state-of-the-art algorithms on most tasks when only a limited computational budget is available. Compared with the algorithm without surrogate models, our algorithm saves about half the computing cost.

5. A list containing one or more of the eight letters (A, B, C, D, E, F, G, or H) that correspond to the criteria (see above) that the author claims that the work satisfies

(D) The result is publishable in its own right as a new scientific result — independent of the fact that the result was mechanically created.
(F) The result is equal to or better than a result that was considered an achievement in its field at the time it was first discovered.
(G) The result solves a problem of indisputable difficulty in its field.

6. A statement stating why the result satisfies the criteria that the contestant claims (see examples of statements of human-competitiveness as a guide to aid in constructing this part of the submission)

(D) The co-design of soft robots is commonly defined as a bi-level optimization problem in which the outer loop optimizes morphological structures while the inner loop optimizes the control for a given structure. The integration of evolutionary algorithms for optimizing the outer morphologies and reinforcement learning (RL) for optimizing the inner control has emerged as a prevalent paradigm.
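The bi-level loop and the action-inheritance shortcut described above can be sketched as follows. This is a minimal illustration only, not the authors' implementation: the voxel encoding, the `evaluate`, `optimize_control`, `mutate`, and `inherit_actions` helpers, and all parameters are hypothetical stand-ins (the real inner loop trains an RL controller, e.g. with PPO, and the real evaluation runs a physics simulator).

```python
import random

GRID = 25  # hypothetical 5x5 voxel grid, as in Evolution Gym walking tasks

def evaluate(design, actions):
    """Stub for a simulator rollout: apply an action sequence to a
    design and return a task reward (placeholder arithmetic only)."""
    return sum(a * v for a, v in zip(actions, design)) / GRID

def optimize_control(design, horizon=8):
    """Stand-in for the expensive inner-loop controller training."""
    return [random.uniform(-1.0, 1.0) for _ in range(horizon)]

def inherit_actions(parent_actions, noise=0.05):
    """Action inheritance: reuse a real-evaluated parent's optimized
    actions; a small random perturbation probes the error that the
    inherited (near-optimal) policy introduces on the child design."""
    return [a + random.gauss(0.0, noise) for a in parent_actions]

def mutate(design, rate=0.1):
    """Outer-loop GA mutation: flip voxels of the structure encoding."""
    return [1 - v if random.random() < rate else v for v in design]

random.seed(0)
parent = [random.randint(0, 1) for _ in range(GRID)]
parent_actions = optimize_control(parent)          # expensive, done once
parent_fitness = evaluate(parent, parent_actions)  # real evaluation

# Cheap surrogate evaluation of offspring: no controller retraining,
# each child is scored under actions inherited from its parent.
offspring = [mutate(parent) for _ in range(4)]
approx = [evaluate(c, inherit_actions(parent_actions)) for c in offspring]
best = offspring[approx.index(max(approx))]
```

The design choice this sketch tries to convey is that only the parent pays the controller-training cost; its offspring are ranked under inherited actions, which is what lets the method skip most of the inner-loop optimization.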
With the extensive application of RL, especially deep RL, the significant computational burden imposed by optimizing control has become a major factor limiting the rapid development of automatic robot design. It is therefore necessary to consider both optimization performance and design time when evaluating the effectiveness of different design methods. In this work we include, for the first time, design time as one of the evaluation metrics; the published results can thus be considered a new scientific result.

(F) On a benchmark suite comprising 31 tasks, the combination of a genetic algorithm and the proximal policy optimization algorithm achieved the best performance [1], but this conclusion was reached under the assumption of ample design time and computational resources. To address this limitation, we employ an action inheritance strategy to replace most of the time-consuming control optimization processes. While an inherited action is only near-optimal, its acquisition is remarkably fast; action inheritance thus plays the role of a surrogate model. Within the same design time, our proposed method outperforms the genetic algorithm on 30 of the 31 tasks. The designed robots are publicly available [2].

[1] J. Bhatia, H. Jackson, Y. Tian, J. Xu, and W. Matusik, “Evolution gym: A large-scale benchmark for evolving soft robots,” Advances in Neural Information Processing Systems, vol. 34, pp. 2201–2214, 2021.
[2] https://github.com/HandingWangXDGroup/AIEA/tree/main/animation

(G) Reducing time costs is a major challenge in the field of intelligent robot design. Compared to genetic algorithms, the algorithm proposed in this work (termed AIEA) saves approximately 50% of the design time on simple tasks, and the same holds for the majority of medium and hard tasks. On a few tasks, although the optimal designs obtained by AIEA did not reach the final goal, they all evolved basic features adapted to the task environments.
For example, the optimal robot obtained by AIEA evolved trunk-leg-like features that help it step over obstacles of different heights while maintaining balance, much like how humans move their legs to walk.

7. A full citation of the paper

Shulei Liu, Wen Yao, Handing Wang, Wei Peng and Yang Yang, “Rapidly Evolving Soft Robots via Action Inheritance,” IEEE Transactions on Evolutionary Computation, 2023, doi: 10.1109/TEVC.2023.3327459.

8. A statement either that "any prize money, if any, is to be divided equally among the co-authors" OR a specific percentage breakdown as to how the prize money, if any, is to be divided among the co-authors

Any prize money, if any, is to be divided equally among the co-authors of the cited paper.

9. A statement stating why the authors expect that their entry would be the "best"

This paper uses an action inheritance-based method to accelerate the automatic design of intelligent soft robots. Action inheritance applies the optimized control actions of a real-evaluated robot to an un-evaluated one to obtain its approximated task performance, and can be viewed as a knowledge-driven surrogate model. To our knowledge, this is the first attempt in the pertinent literature to tackle the time-consuming issue in robotics through a knowledge-driven surrogate model. Taking the design of soft robots on different tasks as an example, we illustrate the practicality and research value of this work from the following two aspects. Firstly, this work introduces design time as one of the evaluation metrics for assessing algorithm performance for the first time, aiming to promote the development of efficient and accurate methods in this field. According to the experimental results, compared to current mainstream methods, the proposed algorithm achieves optimal designs in a shorter time, reducing design time by up to 50% on some tasks.
Secondly, while the structures obtained by different methods can all accomplish the task objectives, the structures obtained by the proposed algorithm are more concise. Taking a walking task as an example, where 25 voxels could be used, our designed structure utilized only 11 of them, significantly reducing manufacturing costs. Overall, we hope the introduction of this method will lead robotic design approaches towards a new era of efficiency and convenience.

10. An indication of the general type of genetic or evolutionary computation used

GA (genetic algorithms)

11. The date of publication

25 October 2023