Author Guidelines

Quality Standard of Submissions

We strive to accept only top-quality papers to AutoML-Conf. Reviewers will be asked to assess whether submissions make clearly defined, novel, and significant contributions and whether they have a high potential impact on future progress in the field. Of course, papers should also be well written, well structured, and sound, both in their theory and in their protocols for empirical evaluation. In addition to the standards of other top-tier conferences, such as ICML, ICLR, and NeurIPS, we value reproducibility and open science more highly; these aspects will therefore influence review and acceptance decisions more than at those conferences. We do not have a set acceptance rate but will rather accept all top-quality papers, be their number large or small.

Open Source Standards

To facilitate a culture of open collaboration and fast progress in the community, we require that code and data be open-sourced and available at the time of submission. This includes a specification of all dependencies and their versions in a machine-readable format (e.g., requirements.txt for Python packages), a README with installation instructions, scripts for running the code and producing plots, and well-documented source code. We also encourage reporting results across several (specified) random seeds, ablation studies, and any other measures that facilitate reproducibility. Authors will also have to fill out a checklist to that effect. Essentially, we encourage authors to ask themselves what information a third party would require to reproduce their results and to make this information available. We recognize that fulfilling these criteria requires additional work; however, it will also lead to much more impactful papers, and it will ensure that we can rely on high-quality available code for papers published at AutoML-Conf. Reviewers will be encouraged to verify these criteria.
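The seed-reporting practice described above can be sketched as follows. This is a hypothetical minimal example, not a required template: the experiment function, seed list, and score range are placeholders standing in for a real training run.

```python
# Hypothetical sketch: running an experiment across several fixed,
# explicitly reported seeds and aggregating the results.
import random
import statistics

def run_experiment(seed: int) -> float:
    """Placeholder for a real experiment; returns a score for one seed."""
    rng = random.Random(seed)  # seed every source of randomness explicitly
    return rng.uniform(0.8, 0.9)  # stand-in for e.g. validation accuracy

SEEDS = [0, 1, 2, 3, 4]  # report the exact seeds used in the paper
scores = [run_experiment(s) for s in SEEDS]
print(f"seeds={SEEDS} "
      f"mean={statistics.mean(scores):.3f} "
      f"std={statistics.stdev(scores):.3f}")
```

Because each run is keyed to an explicit seed, a third party can rerun the script and obtain the same per-seed scores, which is the kind of reproducibility the checklist asks about.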

We nevertheless acknowledge that some parts of the experiments may rely on proprietary software (e.g., for scaling up experiments on company-specific compute infrastructure). Such proprietary software does not have to be made available. However, authors are required to at least provide a script that runs their code on a single machine with limited resources and that can be used to generate a substantial part of the experimental results in their paper.

We also acknowledge that large-scale AutoML runs can require substantial resources to reproduce, especially in fields such as neural architecture search (NAS). We therefore encourage authors to make intermediate results available in addition to the code; in the case of NAS, for example, these could include a trained one-shot model and the discovered architecture.

Finally, we also acknowledge that not all data can be shared, due to privacy concerns and legitimate company interests. To still enable the scientific community to reproduce and build on publications based on such data, we require that such publications also report results on publicly available data and that the code for doing so be made available.

Social Media Silence Period vs Being Scooped

Following CVPR 2022, we enforce a social media silence period, starting two weeks before the submission deadline and ending with the paper notification. During this period, authors should NOT use social media to promote their submissions to AutoML-Conf and should NOT upload their paper to arXiv or any other publicly available server with their names/affiliations. Based on the currently planned schedule, the social media silence period runs from February 17th to May 18th; not abiding by this silence period is considered a policy violation. For the sake of clarity, anything that happens before the social media silence period (uploading to arXiv, tweeting, etc.) is perfectly allowed.

The goal of this social media silence period is to prevent the authors’ names and affiliations from being revealed, which would undermine the double-blind review process. Authors of course have a legitimate interest in having a time stamp on their paper in order to avoid being scooped. However, since we use OpenReview, submitted papers will be available online immediately, including a time stamp for the submission. In case of rejection, authors can still opt in to keep their paper online on OpenReview along with the mentioned time stamp.