See Dates for all deadlines.
The first international conference on automated machine learning (AutoML) brings together researchers and users, with the goals of developing automated methods for speeding up the development of machine learning applications, obtaining improved performance, and thereby democratizing machine learning. Topics of interest include but are not limited to:
- Neural Architecture Search (NAS)
- Hyperparameter Optimization (HPO)
- Combined Algorithm Selection and Hyperparameter Optimization (CASH)
- Automated Reinforcement Learning (AutoRL)
- Automated Data Mining
- Meta-Learning and Learning to Learn
- Bayesian Optimization for AutoML
- Evolutionary Algorithms for AutoML
- Multi-Objective Optimization for AutoML
- AutoAI (incl. Algorithm Configuration and Selection)
- AutoML Systems
- Trustworthy AutoML (e.g., with respect to fairness, robustness, sustainability, uncertainty quantification, and explainability)
If a submission (or any linked material) violates any of the following rules (in particular those on double-blind reviewing, formatting, reproducibility, and dual submission), it will be desk-rejected.
All submissions will undergo double-blind review: (i) the paper, code, and data submitted for review must be anonymized so that the authors cannot be deduced, and (ii) the reviewers will likewise remain anonymous.
The paper has to be formatted according to the LaTeX template we will make available. The paper can consist of up to 9 pages, excluding references and appendix. These 9 pages should include discussions of the limitations and broader impact of the work; these discussions can be placed freely in the main paper, e.g., in the introduction or future work, or as separate sections. References and appendix are not limited in length.
Accepted papers are allowed to add another page to the main paper to react to reviewer feedback.
We use OpenReview to manage submissions. Shortly after the authors’ notification, the de-anonymized paper and anonymous reviews of all accepted papers and opt-in rejected papers will become public in OpenReview, and open for non-anonymous public commenting. Authors of rejected papers will have until July 14th, 2021 to opt in to make their de-anonymized papers (including anonymous reviews) public in OpenReview. Otherwise, there will be no public record that the rejected paper was submitted.
We ask that authors think about the broader impact and ethical considerations of their work. For example, authors may consider whether there is potential use for the data or methods to create or exacerbate unfair bias. Reviewers cannot directly reject papers based on ethical considerations but can flag papers for ethical concerns by the conference organizers, who may decide to reject papers based on these grounds (e.g., if the primary application directly causes harm or injury).
Submissions must not have been recently accepted at, or be simultaneously under review at, another conference or journal. If this rule is violated, the organizers will reject the submission or remove the paper from the proceedings.
Papers uploaded to arXiv or submitted/accepted at a workshop without formal proceedings do not violate the double-submission policy. To facilitate double-blind reviewing, we ask authors to adhere to an embargo period, starting 2 weeks before and ending 3 months after the submission deadline, during which the paper should not be uploaded to any web server (e.g., arXiv or the authors’ own website) and not be promoted on social media.
We strongly value reproducibility as an integral part of scientific quality assurance. Therefore, we ask that all submissions be accompanied by:
- A link to an open source repository providing an implementation (if empirical results are part of the paper). To abide by double-blind reviewing, for the reviewing period we recommend services such as https://anonymous.4open.science/
- A reproducibility checklist, which is part of the LaTeX template and does not count toward the 9 pages (including details of repeated measurements, tuned hyper(-hyper-)parameters, …). See the ML reproducibility checklist and the NAS best practices checklist for guidance.
We recommend implementing new ideas in existing packages instead of re-implementing the basics from scratch and thereby introducing many confounding factors. For example, a new acquisition function could be implemented in one of the many Bayesian optimization packages. (A side effect of this can be easier usability and thus increased impact.)
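As an illustration of this recommendation, here is a minimal, self-contained sketch of an Expected Improvement acquisition function of the kind one might contribute to an existing Bayesian optimization package (which would supply the surrogate model's posterior mean and standard deviation); the function name and interface are hypothetical, not those of any particular library.

```python
import numpy as np
from scipy.stats import norm


def expected_improvement(mu, sigma, best_f):
    """Expected improvement (minimization) at points with GP posterior
    mean `mu` and standard deviation `sigma`, given incumbent `best_f`.

    EI(x) = (best_f - mu) * Phi(z) + sigma * phi(z),  z = (best_f - mu) / sigma
    """
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    # Guard against division by zero where the surrogate is certain.
    with np.errstate(divide="ignore", invalid="ignore"):
        z = (best_f - mu) / sigma
        ei = (best_f - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    return np.where(sigma > 0, ei, np.maximum(best_f - mu, 0.0))
```

In practice, one would register such a function with an existing package's acquisition-function interface rather than reimplementing the surrogate model and optimization loop around it.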
During the rebuttal phase, authors are allowed to update their papers based on the questions and feedback of the reviewers. Adding more co-authors is not possible.
Publication of accepted submissions
The publication of accepted submissions will be done via OpenReview. Feedback by the reviewers can be incorporated into the final version of the paper; any other changes are not allowed. Furthermore, each paper has to be accompanied by a link to an open-source implementation (if there are any empirical results in the paper) to ensure reproducibility.
As mentioned above, accepted papers will be made available alongside their reviews and meta-reviews.
We further encourage you to provide meta-information about your accepted paper in the Open Research Knowledge Graph.
Attending the Conference
The conference’s main objective is to allow for in-depth discussions and networking. It is not mandatory for authors of accepted papers to attend the conference, but we highly encourage it. Regardless of attendance, we require a short video for every accepted paper.
- Isabelle Guyon
- Marius Lindauer
- Mihaela van der Schaar
If you have any questions, don’t hesitate to contact us: TBA