Machine Learning Competitions for All
[Workshop proposal submitted to NeurIPS 2019, not yet accepted]

Invited Speakers (confirmed)


Amir Banifatemi (XPRIZE)


Emily M. Bender (University of Washington)

Dina Machuve (Nelson Mandela African Institution of Science and Technology)


Frank Hutter (University of Freiburg)

Yang Yu (Nanjing University)


Challenges in machine learning and data science are open online competitions that address problems by providing datasets or simulated environments. They measure the performance of machine learning algorithms with respect to a given problem, resulting in a leaderboard. The playful nature of challenges naturally attracts students, making challenges a great teaching resource. Beyond their use as educational tools, however, challenges have a role to play in the broader democratization of AI and machine learning: they serve as cost-effective problem-solving tools and encourage the development of reusable problem templates and open-sourced solutions. At present, though, the geographic and sociological distribution of challenge participants and organizers is heavily biased. While recent successes in machine learning have raised high hopes, there is a growing concern that the societal and economic benefits might increasingly fall under the control of a few.

CiML (Challenges in Machine Learning) is a forum that brings together workshop organizers, platform providers, and participants to discuss best practices in challenge organization and new methods and application opportunities to design high impact challenges. Following the success of previous years' workshops, we propose to reconvene and discuss new opportunities for broadening our community.

For this sixth edition of the CiML workshop at NeurIPS our objective is twofold: (1) We aim to enlarge the community, fostering diversity in the community of participants and organizers; (2) We aim to promote the organization of challenges for the benefit of more diverse communities.

The workshop will provide room for discussion of these topics, and aims to bring together potential partners to organize such challenges and stimulate "machine learning for good", i.e. the organization of challenges for the benefit of society. We have invited prominent speakers with experience in this domain.

Workshop Audience

The CiML workshop is targeted at challenge organizers, participants, and anyone with a scientific problem involving machine learning that may be formulated as a challenge. The emphasis of the CiML workshop is on challenge design; hence it nicely complements the workshop of the NeurIPS 2019 competition track and will help pave the way toward next year's competition program.

We want to give ample space to discussion by organizing two discussion sessions moderated by the organizers, who will first give a brief introduction to the selected topics. In addition, the invited speakers will be asked to address the main topics of the workshop in their presentations and to take part in the discussions.

Call for Abstracts

We welcome 2-page extended abstracts on topics relating to challenges in machine learning. Selected papers will be presented primarily as posters, but exceptional contributions will be given oral presentations. Abstracts should be submitted by October 10th, 2019 by email. [You can use the NeurIPS template for your submission; submissions need NOT be anonymized; an extra page can be used for references and acknowledgements.] The best contributions will be invited to contribute a book chapter in the Springer series on Challenges in Machine Learning.

Topics of interest include, but are not limited to:

  • Novel or atypical challenge protocols, particularly relating to research.
  • Novel or atypical challenge protocols to tackle complex tasks with very large datasets, multi-modal data, and data streams.
  • Methods and metrics of entry evaluation, quantitative and qualitative challenges.
  • Methods of data collection, "ground-truthing", and preparation including bifurcation/anonymization, data generating models.
  • Teaching challenge organization.
  • Hackathons and on-site challenges.
  • Challenge indexing and retrieval, challenge recommenders.
  • Experimental design, dataset size, data splits, error bounds, statistical significance, violation of typical assumptions (e.g. i.i.d. data).
  • Game theory applied to the analysis of challenge participation, competition and collaboration among participants.
  • Diagnosis of data sanity, artifacts in data, data leakage.
  • Re-usable challenge platforms, innovative software environments.
  • Linking data and software repositories to challenges.
  • Security/privacy, intellectual property, licenses.
  • Cheating prevention and remedies (e.g. leaderboard climbing).
  • Issues raised by requiring code submission.
  • Challenges requiring user interaction with the platform (active learning, reinforcement learning).
  • Dissemination, fact sheets, proceedings, crowdsourced papers, indexing post-challenge publications.
  • Long term impact, on-going benchmarks, metrics of impact.
  • Participant rewards, stimulation of participation, advertising, sponsors.
  • Profiling participants, improving participant professional and social benefits.
  • Challenges for the benefit of society, as a scientific research tool, for up-skilling, or to solve industry problems.
  • Where to venture next: opportunities for challenge organizers to organize challenges in new domains with high societal impact.
  • Successful challenges leading to significant breakthroughs, improvements over the state of the art, or unexpected interesting results.
  • Rigorous study of the impact of challenges, analyzing topics and tasks lending themselves to high impact machine learning challenges.
  • Challenges organized or supported by Government agencies, funding opportunities.

Related workshops: