1. The complete title of one (or more) paper(s) published in the open literature describing the work that the author claims describes a human-competitive result.

Title: A User-Guided Generation Framework for Personalized Music Synthesis Using Interactive Evolutionary Computation

2. The name, complete physical mailing address, e-mail address, and phone number of EACH author of EACH paper(s).

Authors:
(a) Yanan Wang, d8232116@u-aizu.ac.jp, +81 09069043022
(b) Yan Pei, peiyan@u-aizu.ac.jp, +81 (242) 37-2765
(c) Zerui Ma, mazerui@emails.bjut.edu.cn, +86 15910915227
(d) Jianqiang Li, lijianqiang@bjut.edu.cn, +86 18813067196

Address for authors (a) and (b): University of Aizu, Tsuruga, Ikki-machi, Aizu-Wakamatsu City, Fukushima, 965-8580, Japan
Address for authors (c) and (d): Beijing University of Technology, PingLeyuan 100, Chaoyang District, Beijing, 100124, China

3. The name of the corresponding author (i.e., the author to whom notices will be sent concerning the competition).

(b) Yan Pei, peiyan@u-aizu.ac.jp, +81 (242) 37-2765

4. The abstract of the paper(s).

The development of generative artificial intelligence (AI) has demonstrated notable advancements in the domain of music synthesis. However, a perceived lack of creativity in the generated content has drawn significant attention from the public. To address this, this paper introduces a novel approach to personalized music synthesis, incorporating human-in-the-loop generation. This method leverages the dual strengths of interactive evolutionary computation, known for its ability to capture user preferences, and the generative adversarial network, renowned for its capacity to autonomously produce high-quality music. The primary objective of this integration is to augment the credibility and diversity of generative AI in music synthesis, fostering computational artistic creativity in humans. Furthermore, a user-friendly interactive music player has been designed to facilitate users in the music synthesis process. The proposed method exemplifies a paradigm wherein users manipulate the latent space through human-machine interaction, underscoring the pivotal role of humans in the synthesis of diverse and creative music.

5. A list containing one or more of the eight letters (A, B, C, D, E, F, G, or H) that correspond to the criteria (see above) that the author claims that the work satisfies.

(D, E, F, G)

6. A statement stating why the result satisfies the criteria that the contestant claims (see examples of statements of human-competitiveness as a guide to aid in constructing this part of the submission).

Statement of Why the Results Satisfy Criteria (D), (E), (F), and (G)

(D) The result is publishable in its own right as a new scientific result independent of the fact that the result was mechanically created.

Statement for Criterion (D): The presented method introduces a novel approach to personalized music synthesis by integrating interactive evolutionary computation (IEC) and generative adversarial networks (GAN). This hybrid approach leverages the strengths of genetic algorithms in IEC, including crossover and mutation operations, to enhance the diversity and creativity of the generated music. Furthermore, this method is pioneering in the field of music synthesis in that it allows the deep learning latent space to be explored through human-machine interaction. By enabling users to directly manipulate the latent vectors, the system significantly enhances the personalization and creativity of the output. The method's ability to produce music that closely aligns with user preferences has been rigorously validated through comprehensive experiments. These results demonstrate significant improvements in creativity and personalization, addressing a critical challenge in generative AI. The innovative combination of these techniques and the substantial empirical validation make this work a valuable scientific contribution worthy of publication as an independent result.
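To make the interaction described in Criterion (D) concrete, the sketch below (Python/NumPy) illustrates one way crossover and mutation can be applied to a GAN's latent vectors under user guidance. It is a minimal illustration only, not the implementation used in the paper: generator stands for any pretrained latent-to-audio decoder, and ask_user_to_rate is a hypothetical placeholder for the interactive music player's rating step.

    import numpy as np

    rng = np.random.default_rng(0)

    def crossover(z_a, z_b):
        # Uniform crossover: each latent dimension is taken from one of the two parents.
        mask = rng.random(z_a.shape) < 0.5
        return np.where(mask, z_a, z_b)

    def mutate(z, rate=0.1, sigma=0.5):
        # Gaussian mutation on a randomly chosen subset of latent dimensions.
        mask = rng.random(z.shape) < rate
        return z + mask * rng.normal(0.0, sigma, z.shape)

    def interactive_generation(generator, ask_user_to_rate, latent_dim=128, pop_size=8, generations=10):
        # One possible user-in-the-loop evolution over a GAN's latent space.
        # `generator` maps a latent vector to an audio clip (assumed pretrained decoder);
        # `ask_user_to_rate` returns one subjective score per clip from the listener.
        population = rng.normal(size=(pop_size, latent_dim))
        for _ in range(generations):
            clips = [generator(z) for z in population]
            scores = np.asarray(ask_user_to_rate(clips))
            order = np.argsort(scores)[::-1]              # best-rated candidates first
            parents = population[order[: pop_size // 2]]  # keep the listener's preferred half
            children = [mutate(crossover(*rng.choice(parents, size=2, replace=False)))
                        for _ in range(pop_size - len(parents))]
            population = np.vstack([parents] + children)
        return population[0]                              # best-rated latent vector from the last round

In this sketch the listener's ratings act as the fitness function, so selection pressure comes entirely from human preference while the GAN generator guarantees audio quality.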
(E) The result is equal to or better than the most recent human-created solution to a long-standing problem for which there has been a succession of increasingly better human-created solutions.

Statement for Criterion (E): Our proposed UIGAN method surpasses state-of-the-art models such as WaveNet, WaveGAN, and MelGAN in critical performance metrics, including Fréchet audio distance (FAD), coverage, precision, recall, and density. By incorporating genetic algorithm operations such as crossover and mutation within the IEC framework, UIGAN enhances the generative process with user-driven diversity and creativity. Moreover, this method is the first to explore the latent space of deep learning models through direct human-machine interaction, enabling users to guide the generative process based on their preferences. These improvements directly address the long-standing challenge of generating highly creative and personalized music through AI. The superior performance of UIGAN, validated by extensive experiments, demonstrates that it not only matches but exceeds the most recent human-created solutions in this domain.

(F) The result is equal to or better than a result that was considered an achievement in its field at the time it was first discovered.

Statement for Criterion (F): At its inception, the conventional MelGAN model was regarded as a significant breakthrough in music synthesis because of its high-quality audio generation capabilities. Our proposed UIGAN builds upon this foundation by incorporating interactive evolutionary manipulation, which includes genetic algorithm-based crossover and mutation to introduce user preferences into the generative process. This enhancement leads to superior results in terms of musical diversity and user satisfaction, as evidenced by higher 5-point mean opinion scores (MOS) and substantial improvements in FAD and other metrics. Crucially, UIGAN is the first music synthesis method to enable users to interactively explore and manipulate the latent space of deep learning models, significantly enhancing the personalization and creativity of the output. Thus, UIGAN not only matches but exceeds the achievements of earlier models such as MelGAN, offering a more personalized and creatively rich music synthesis process.

(G) The result solves a problem of indisputable difficulty in its field.

Statement for Criterion (G): Generating creative and personalized music through AI is an inherently challenging problem because of the subjective nature of creativity and the complexity of capturing individual user preferences. Our proposed UIGAN effectively addresses this challenge by integrating the strengths of genetic algorithms in IEC with GANs, allowing users to interactively guide the generative process. This method is pioneering in enabling users to explore the deep learning latent space through human-machine interaction, which is a significant advancement in the field of music synthesis. The model's ability to synthesize music that resonates with user preferences, validated through rigorous quantitative and qualitative analyses, demonstrates its success in solving this complex problem. The use of genetic algorithm operations such as crossover and mutation within the IEC framework is crucial for exploring diverse musical elements and combinations, marking a significant breakthrough in the field and highlighting UIGAN's capability to tackle this difficult problem effectively.
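As a point of reference for the FAD figures cited in Criteria (E) and (F): FAD is commonly computed as the Fréchet distance between Gaussians fitted to embeddings (for example, VGGish features) of reference and generated audio. The sketch below illustrates this standard computation; the embedding arrays are assumed inputs, and this is not the evaluation pipeline used in the paper.

    import numpy as np
    from scipy.linalg import sqrtm

    def frechet_audio_distance(emb_ref, emb_gen):
        # Frechet distance between Gaussians fitted to reference and generated audio embeddings.
        # emb_ref, emb_gen: arrays of shape (num_clips, embedding_dim).
        mu_r, mu_g = emb_ref.mean(axis=0), emb_gen.mean(axis=0)
        cov_r = np.cov(emb_ref, rowvar=False)
        cov_g = np.cov(emb_gen, rowvar=False)
        covmean = sqrtm(cov_r @ cov_g)
        if np.iscomplexobj(covmean):      # discard tiny imaginary parts from numerical error
            covmean = covmean.real
        diff = mu_r - mu_g
        return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))

Lower FAD indicates that the distribution of generated audio lies closer to the reference distribution.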
7. A full citation of the paper (that is, author names; title, publication date; name of journal, conference, or book in which article appeared; name of editors, if applicable, of the journal or edited book; publisher name; publisher city; page numbers, if applicable).

Yanan Wang, Yan Pei, Zerui Ma, and Jianqiang Li, "A User-Guided Generation Framework for Personalized Music Synthesis Using Interactive Evolutionary Computation," in Proceedings of the Genetic and Evolutionary Computation Conference, 2024.

8. A statement either that "any prize money, if any, is to be divided equally among the co-authors" OR a specific percentage breakdown as to how the prize money, if any, is to be divided among the co-authors.

If any prize money is granted, it shall be split in the following way: all prize money will be allocated to Yanan Wang (first author) to support his PhD studies. This decision was made by mutual agreement among all the authors.

9. A statement stating why the authors expect that their entry would be the "best".

We believe our entry stands out as the "best" because of several groundbreaking innovations and valuable advancements in the field of music synthesis:

Integration of IEC and GAN: Our method uniquely combines interactive evolutionary computation (IEC) with a generative adversarial network (GAN). This hybrid approach leverages the dual strengths of IEC, known for its ability to capture user preferences, and the GAN, renowned for its capacity to autonomously produce high-quality music. The method also incorporates the crossover and mutation operations of genetic algorithms, providing a foundation for enhancing the diversity of the generated music. This combination not only advances the state of the art but also introduces a new paradigm in which users manipulate the latent space through human-machine interaction.

First to Explore the Deep Learning Latent Space of Music Synthesis through Human-Machine Interaction: Our approach is pioneering in allowing users to manipulate the latent vectors within the deep learning model. This interactive process enables more personalized and creative music synthesis, in which user preferences are directly integrated into the generative process. This level of user involvement in exploring the latent space is unprecedented in music synthesis.

Superior Performance in Key Metrics: Extensive experiments have demonstrated that our proposed method surpasses existing state-of-the-art models such as WaveNet, WaveGAN, and MelGAN in critical performance metrics, including Fréchet audio distance (FAD), coverage, precision, recall, and density (a brief sketch of how these distribution-based metrics are commonly computed is given below). Our method also received the highest 5-point mean opinion scores (MOS) in subjective evaluations, indicating a higher level of user satisfaction.

Addressing the Challenge of Lack of Creativity and Personalization: One of the primary challenges faced by generative music models is a perceived lack of creativity and personalization. Our method not only addresses this challenge but also enhances the capabilities of existing approaches, highlighting the pivotal role of human input in creating diverse and creative music. This advancement underscores the importance of human involvement in the future of generative AI.
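The distribution-based metrics named above (precision, recall, density, and coverage) are commonly computed from k-nearest-neighbour manifolds over embedding sets. The sketch below shows one standard formulation; variable names and the choice of k are illustrative assumptions, and this is not the evaluation code used in the paper.

    import numpy as np

    def pairwise_dist(a, b):
        # Euclidean distances between every row of a and every row of b.
        return np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)

    def knn_radius(x, k):
        # Distance from each embedding to its k-th nearest neighbour within the same set.
        d = np.sort(pairwise_dist(x, x), axis=1)
        return d[:, k]                                        # column 0 is the zero self-distance

    def prdc(real, fake, k=5):
        # Precision, recall, density, and coverage from k-NN manifolds over embeddings.
        r_real = knn_radius(real, k)
        r_fake = knn_radius(fake, k)
        d = pairwise_dist(real, fake)                         # shape: (n_real, n_fake)
        precision = (d < r_real[:, None]).any(axis=0).mean()  # fake points inside the real manifold
        recall = (d < r_fake[None, :]).any(axis=1).mean()     # real points inside the fake manifold
        density = (d < r_real[:, None]).sum(axis=0).mean() / k
        coverage = (d.min(axis=1) < r_real).mean()
        return precision, recall, density, coverage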
Given these innovations and achievements, we are confident that our entry represents the best advancement in the field of music synthesis through genetic and evolutionary computation.

10. An indication of the general type of genetic or evolutionary computation used, such as GA (genetic algorithms), GP (genetic programming), ES (evolution strategies), EP (evolutionary programming), LCS (learning classifier systems), GI (genetic improvement), GE (grammatical evolution), GEP (gene expression programming), DE (differential evolution), etc.

GA (genetic algorithms), applied within an interactive evolutionary computation (IEC) framework.

11. The date of publication of each paper. If the date of publication is not on or before the deadline for submission, but instead, the paper has been unconditionally accepted for publication and is "in press" by the deadline for this competition, the entry must include a copy of the documentation establishing that the paper meets the "in press" requirement.

In press for GECCO 2024. Expected publication date: July 18, 2024. See the acceptance email below:

Dear Yan Pei,

Congratulations! Your paper (wksp109s1)

A User-Guided Generation Framework for Personalized Music Synthesis Using Interactive Evolutionary Computation

has been accepted in the GECCO 2024 Workshop Interactive Methods at GECCO.

Reviews are now available in the submission website at https://ssl.linklings.net/conferences/gecco
Please confirm or decline acceptance there. Acceptance is subject to the condition that you consider the comments of the reviewers when preparing the camera-ready version of your manuscript.

For the presentation format, please refer to the specific information for this workshop. In case this is not indicated on the webpage of the workshop, workshop organizers will get back to you shortly about these details.

You will receive a separate email with instructions and information about the copyright process (please check your spam folder if you don't receive it next week). You need to complete the copyright form to get the copyright notice with the final DOI you have to include in the camera-ready version of the manuscript.

Your camera-ready version must be submitted via the submission system (https://ssl.linklings.net/conferences/gecco/) by __Friday, May 10, 2024__. Please prepare your camera-ready version following the instructions and templates at https://gecco-2024.sigevo.org/Paper-Submission-Instructions#Camera-Ready_Instructions. We recommend uploading preliminary versions of your manuscript prior to the deadline to check if there is any problem or formatting issue (e.g., inappropriate or not embedded fonts).

__IMPORTANT__: __Please note that May 10, 2024 is the firm deadline for authors of accepted workshop papers to register__. At least one author for each accepted paper must be registered by then, and at least one author must present the paper at the conference. If an author is presenting more than one paper at the conference, they do not pay any additional registration fees. The registration site is open and can be accessed following the instructions at https://gecco-2024.sigevo.org/Registration

Thank you for submitting your work to GECCO 2024. We look forward to seeing you in the conference! Please visit https://gecco-2024.sigevo.org/ for a list of tutorials, workshops, and competitions at the conference, as well as other events. The list of accepted papers will be made available in due time.
We furthermore like to draw your attention to the GECCO summer school, open to current students and recent graduates: https://gecco-2024.sigevo.org/Summer-School

Best wishes,
GECCO 2024 Workshop Chairs