The first international conference on automated machine learning (AutoML-Conf 2022) brings together researchers and users, with the goals of developing automated methods for speeding up the development of machine learning applications, obtaining improved performance, and thereby democratizing machine learning. Topics of interest include but are not limited to:
- Neural Architecture Search (NAS)
- Hyperparameter Optimization (HPO)
- Combined Algorithm Selection and Hyperparameter Optimization (CASH)
- Automated Data Mining
- Automated Reinforcement Learning (AutoRL)
- Meta-Learning and Learning to learn
- Bayesian Optimization for AutoML
- Evolutionary Algorithms for AutoML
- Multi-Objective Optimization for AutoML
- AutoAI (incl. Algorithm Configuration and Selection)
- Trustworthy AutoML (e.g., with respect to fairness, robustness, sustainability, uncertainty quantification, and explainability)
- Applications of AutoML (with scientific insights, e.g., into required features of AutoML approaches or application knowledge discovered by AutoML)
- See special track for systems, benchmarks and challenges
If a submission violates any of the following rules (especially double-blind review, formatting, reproducibility, or dual submission), it will be desk-rejected.
All submissions will undergo double-blind review, that is, (i) the paper, code, and data submitted for review must be anonymized so that the authors cannot be deduced, and (ii) the reviewers will remain anonymous.
The paper has to be formatted according to the LaTeX template available at https://github.com/automl-conf/LatexTemplate. The paper can consist of up to 9 pages, plus references and appendix. These 9 pages should include discussions of limitations and broader impact of the work; these discussions can be placed freely in the main paper, e.g., in the introduction/future work, or as separate sections. References and appendix are not limited in length.
Accepted papers are allowed to add another page to the main paper to react to reviewer feedback.
We use OpenReview to manage submissions. Shortly after author notification, the de-anonymized paper and anonymous reviews of all accepted papers and opt-in rejected papers will become public on OpenReview and open for non-anonymous public commenting. Authors of rejected papers will have until July 14th, 2022 to opt in to making their de-anonymized papers (including anonymous reviews) public on OpenReview. Otherwise, there will be no public record that the rejected paper was submitted.
We ask that authors think about the broader impact and ethical considerations of their work. For example, authors may consider whether there is potential use for the data or methods to create or exacerbate unfair bias. Reviewers cannot directly reject papers based on ethical considerations but can flag papers for ethical concerns by the conference organizers, who may decide to reject papers based on these grounds (e.g., if the primary application directly causes harm or injury).
Submissions must not have been accepted at, or be simultaneously under review at, another conference or journal. If this rule is violated, the organizers will reject the submission or remove the paper from the proceedings.
ArXiv and Social Media
Papers uploaded to arXiv or submitted/accepted at a workshop without formal proceedings do not violate the dual submission policy.
To facilitate double-blind reviewing, we ask the authors to adhere to a social media silence period (starting 2 weeks before paper submission and ending with the final notification), during which the paper should not be uploaded to any web server (e.g., arXiv or the authors’ own website) and should not be promoted on social media.
We strongly value reproducibility as an integral part of scientific quality assurance. Therefore, we ask that all submissions be accompanied by:
- A link to an open source repository providing an implementation (if empirical results are part of the paper). To abide by double-blind reviewing, for the reviewing period we recommend services such as https://anonymous.4open.science/
- A reproducibility checklist, which is part of the LaTeX template and does not count toward the 9 pages (including details of repeated measurements, tuned hyper(-hyper-)parameters, etc.).
We recommend implementing new ideas in existing packages instead of re-implementing the basics from scratch and thereby introducing many confounding factors. For example, a new acquisition function could be implemented in one of the many Bayesian optimization packages. (A side effect of this can be easier usability and thus increased impact.)
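To illustrate the kind of contribution meant here, the following is a minimal, package-agnostic sketch of an acquisition function (expected improvement, minimization convention) written so it could be plugged into an existing Bayesian optimization framework's acquisition interface; the function name and signature are illustrative assumptions, not the API of any particular package.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_y, xi=0.01):
    """Expected improvement at candidate points, for minimization.

    mu, sigma : posterior mean and std. dev. of the surrogate model
    best_y    : best (lowest) observed objective value so far
    xi        : exploration margin (hypothetical default)
    """
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    improve = best_y - mu - xi
    with np.errstate(divide="ignore", invalid="ignore"):
        z = improve / sigma
        ei = improve * norm.cdf(z) + sigma * norm.pdf(z)
    # Points with zero predictive uncertainty cannot yield improvement.
    ei = np.where(sigma == 0.0, 0.0, ei)
    return ei

# Example: high uncertainty near the incumbent yields positive EI.
ei = expected_improvement(mu=[0.0, 2.0], sigma=[1.0, 0.0], best_y=1.0)
```

Reusing an existing package's surrogate model and optimization loop means only the ~15 lines above are new, so any performance difference can be attributed to the acquisition function itself rather than to confounding implementation details.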
During the rebuttal phase, the authors are allowed to update their papers based on the reviewers’ questions and feedback. Adding more co-authors is not possible.
Publication of accepted submissions
Accepted submissions will be published via OpenReview. Reviewer feedback can be incorporated into the final version of the paper; other major changes are not allowed. Furthermore, each paper must be accompanied by a link to an open-source implementation (if the paper contains any empirical results) to ensure reproducibility.
As mentioned above, accepted papers will be made available alongside their reviews and meta-reviews.
We further encourage you to provide meta-information about your accepted paper in the Open Research Knowledge Graph.
Attending the Conference
The conference is co-located with ICML in Baltimore, and we are currently planning for an in-person event. The conference’s main objective is to allow for in-depth discussions and networking. Authors of accepted papers are not required to attend in person, but we encourage it. Nevertheless, we require a short video for every accepted paper. This allows attendees to watch videos ahead of time at their own pace and to plan ahead, so they can use the conference most effectively for in-depth discussions and networking.
Organizers
- Isabelle Guyon
- Marius Lindauer
- Mihaela van der Schaar
If you have any questions, don’t hesitate to contact us: firstname.lastname@example.org