1. the complete title of one (or more) paper(s) published in the open literature describing the work that the author claims describes a human-competitive result;

An Evolutionary, Gradient-Free, Query-Efficient, Black-Box Algorithm for Generating Adversarial Instances in Deep Convolutional Neural Networks

2. the name, complete physical mailing address, e-mail address, and phone number of EACH author of EACH paper(s);

Raz Lapid, razla@post.bgu.ac.il
Zvika Haramaty, zvikah@post.bgu.ac.il
Moshe Sipper, sipper@bgu.ac.il
Department of Computer Science, Ben-Gurion University, Beer-Sheva 8410501, Israel
(972)-8-6477880

3. the name of the corresponding author (i.e., the author to whom notices will be sent concerning the competition);

Moshe Sipper

4. the abstract of the paper(s);

Deep neural networks (DNNs) are sensitive to adversarial data in a variety of scenarios, including the black-box scenario, where the attacker is only allowed to query the trained model and receive an output. Existing black-box methods for creating adversarial instances are costly, often using gradient estimation or training a replacement network. This paper introduces Query-Efficient Evolutionary Attack—QuEry Attack—an untargeted, score-based, black-box attack. QuEry Attack is based on a novel objective function that can be used in gradient-free optimization problems. The attack only requires access to the output logits of the classifier and is thus not affected by gradient masking. No additional information is needed, rendering our method more suitable to real-life situations. We test its performance with three different, commonly used, pretrained image-classification models—Inception-v3, ResNet-50, and VGG-16-BN—against three benchmark datasets: MNIST, CIFAR-10, and ImageNet. Furthermore, we evaluate QuEry Attack's performance on non-differential transformation defenses and robust models.
Our results demonstrate the superior performance of QuEry Attack, both in terms of accuracy score and query efficiency.

5. a list containing one or more of the eight letters (A, B, C, D, E, F, G, or H) that correspond to the criteria (see above) that the author claims that the work satisfies;

(B) The result is equal to or better than a result that was accepted as a new scientific result at the time when it was published in a peer-reviewed scientific journal.

(D) The result is publishable in its own right as a new scientific result independent of the fact that the result was mechanically created.

(F) The result is equal to or better than a result that was considered an achievement in its field at the time it was first discovered.

6. a statement stating why the result satisfies the criteria that the contestant claims (see examples of statements of human-competitiveness as a guide to aid in constructing this part of the submission);

(B): We presented an evolutionary, score-based, black-box attack, showing its superiority in terms of ASR (attack success rate) and number of queries over previously published work.

(D): The resultant attacked images stand on their own, regardless of their having been evolutionarily created.

(F): Past achievements were either gradient-based (white-box) or of far lesser quality. Our approach is thus superior.

7. a full citation of the paper (that is, author names; title, publication date; name of journal, conference, or book in which article appeared; name of editors, if applicable, of the journal or edited book; publisher name; publisher city; page numbers, if applicable);

R. Lapid, Z. Haramaty, M. Sipper, An Evolutionary, Gradient-Free, Query-Efficient, Black-Box Algorithm for Generating Adversarial Instances in Deep Convolutional Neural Networks, Algorithms, 15(11):407, 2022.

8. a statement either that "any prize money, if any, is to be divided equally among the co-authors" OR a specific percentage breakdown as to how the prize money, if any, is to be divided among the co-authors;

Any prize money, if any, is to be divided equally among the co-authors.

9. a statement stating why the authors expect that their entry would be the "best," and

Despite the success of DNNs, recent studies have shown that they are vulnerable to adversarial attacks. A barely detectable change in an image, for example, can cause a misclassification in a well-trained DNN. Targeted adversarial examples can even evoke a misclassification of a specific class (e.g., misclassify a car as a cat). Researchers have demonstrated that adversarial attacks are successful in the real world and can be produced for data modalities beyond imaging, e.g., natural language and voice recognition. DNNs' vulnerability to adversarial attacks has raised concerns about applying these techniques to safety-critical applications.

To discover effective adversarial instances, most past work on adversarial attacks has employed gradient-based optimization. Gradient computation is possible only if the attacker is fully aware of the model architecture and weights. Thus, these approaches are useful only in a white-box scenario, where an attacker has complete access to and control over a targeted DNN. Attacking real-world AI systems is far more arduous.

In our work we assumed a real-world, black-box attack scenario, wherein a DNN's input and output may be accessed but not its internal configuration. QuEry Attack, our evolutionary method, is a strong and fast attack that employs a gradient-free optimization strategy. We have shown it to be better than previous approaches. The adversarial images we evolve are indistinguishable from the originals to the human eye.
We think that evolutionary algorithms are well-suited for adversarial problems, especially in real-world scenarios involving gradient-free black-box attacks.

10. An indication of the general type of genetic or evolutionary computation used, such as GA (genetic algorithms), GP (genetic programming), ES (evolution strategies), EP (evolutionary programming), LCS (learning classifier systems), GI (genetic improvement), GE (grammatical evolution), GEP (gene expression programming), DE (differential evolution), etc.

GA (genetic algorithms)

11. The date of publication of each paper. If the date of publication is not on or before the deadline for submission, but instead, the paper has been unconditionally accepted for publication and is "in press" by the deadline for this competition, the entry must include a copy of the documentation establishing that the paper meets the "in press" requirement.

Published: 31 October 2022 (https://www.mdpi.com/1999-4893/15/11/407)
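To convey the flavor of the gradient-free, score-based, black-box setting described in item 9, the following is a minimal, self-contained sketch of a generic evolutionary attack loop. The toy identity-logit model, margin fitness, and elitism-plus-Gaussian-mutation scheme are illustrative assumptions chosen for brevity — they are not QuEry Attack's actual objective function or operators, which are detailed in the paper.

```python
import random

def query_model(x):
    """Toy stand-in for a black-box classifier: the attacker may only
    query it and read the output logits (here, identity logits)."""
    return list(x)

def fitness(x_adv, true_label):
    # Untargeted, score-based margin: how far the best competing logit
    # exceeds the true-class logit. Positive => misclassified.
    # Note: no gradients are used anywhere.
    logits = query_model(x_adv)
    other = max(l for i, l in enumerate(logits) if i != true_label)
    return other - logits[true_label]

def evolve_attack(x_orig, true_label, eps=0.3, pop_size=20, gens=50, seed=0):
    """Evolve an L_inf-bounded perturbation until the model misclassifies."""
    rng = random.Random(seed)
    dim = len(x_orig)
    clip = lambda g: max(-eps, min(eps, g))
    # Population of perturbation vectors, each gene in [-eps, eps].
    pop = [[rng.uniform(-eps, eps) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda d: fitness([a + b for a, b in zip(x_orig, d)],
                                       true_label),
                 reverse=True)
        x_adv = [a + b for a, b in zip(x_orig, pop[0])]
        if fitness(x_adv, true_label) > 0:
            return x_adv  # success: true class is no longer the argmax
        # Elitism: keep the top half; refill with mutated copies of parents.
        parents = pop[:pop_size // 2]
        pop = parents + [[clip(g + rng.gauss(0, eps / 5))
                          for g in rng.choice(parents)]
                         for _ in range(pop_size - len(parents))]
    return None  # budget exhausted without success
```

For example, with `x_orig = [0.5, 0.4, 0.3]` and `true_label = 0`, the loop finds a perturbation that lowers the first coordinate and raises another, flipping the predicted class while no coordinate changes by more than `eps`.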