CiML 2018

Machine Learning competitions "in the wild":
Playing in the real world or in real time


Invited speakers

Mikhail Burtsev, MIPT Moscow, Russia

Bich-Liên Doan, Centrale-Supelec, France

Esteban Arcaute, Facebook, USA

Laura Seaman, Draper Inc., USA

Larry Jackel, North-C Tech., USA

Daniel Polani, U. Hertfordshire, UK

Antoine Marot, RTE, France

Challenges in machine learning and data science are competitions running over several weeks or months to solve problems using provided datasets or simulated environments. The playful nature of challenges naturally attracts students, making challenges a great teaching resource. For this fifth edition of the CiML workshop at NeurIPS we want to go beyond simple data science challenges using canned data. We will explore the possibilities offered by challenges in which code submitted by participants is evaluated "in the wild", directly interacting in real time with users or with real or simulated systems. Organizing challenges "in the wild" is not new. One of the most impactful such challenges organized relatively recently is the 2005 DARPA Grand Challenge on autonomous navigation, which accelerated research on autonomous vehicles, leading to self-driving cars. Other high-profile challenge series with live competitions include RoboCup, which has been running for the past 22 years. Recently, the machine learning community has taken an interest in such interactive challenges, with last year's NeurIPS Learning to Run challenge, a reinforcement learning challenge in which a human avatar had to be controlled with simulated muscular contractions, and the ChatBot challenge, in which humans and bots had to engage in intelligent conversation. Applications are countless for machine learning and artificial intelligence programs solving problems in real time in the real world by interacting with the environment. But organizing such challenges is far from trivial.

The workshop will give a large part to discussions around two principal axes: (1) Design principles and implementation issues; (2) Opportunities to organize new impactful challenges.
Our objectives include bringing together potential partners to organize such new challenges and stimulating "machine learning for good", i.e. the organization of challenges for the benefit of society.

CiML is a forum that brings together challenge organizers, platform providers, and participants to discuss best practices in challenge organization, and new methods and application opportunities for designing high-impact challenges. Following the success of previous years' workshops, we propose to reconvene and discuss new opportunities for challenges "in the wild", one of the hottest topics in challenge organization. We have invited prominent speakers with experience in this domain.

This workshop targets challenge organizers, participants, and anyone with a scientific problem involving machine learning that may be formulated as a challenge. The emphasis of the workshop is on challenge design. Hence it nicely complements the NeurIPS 2018 competition track workshop and will help pave the way toward next year's competition program.

We want to give ample space to discussion by organizing two discussion sessions moderated by the organizers, who will first give a brief introduction to several selected topics. In addition, the invited speakers will be asked to address the main topics of the workshop in their presentations and to take part in the discussions.

Call for abstracts: OVER
We welcome 2-page extended abstracts on topics relating to challenges in machine learning. Selected papers will be presented primarily as posters, but exceptional contributions will be given oral presentations. Abstracts should be submitted by October 10th, 2018 by email. [You can use the NeurIPS template for your submissions; submissions need NOT be anonymized; an extra page can be used for references and acknowledgements.] The best contributions will be invited to contribute a book chapter in the Springer series on Challenges in Machine Learning.

Topics of interest
- Novel or atypical challenge protocols, particularly relating to research.
- Novel or atypical challenge protocols to tackle complex tasks with very large datasets, multi-modal data, and data streams.
- Methods and metrics of entry evaluation for quantitative and qualitative challenges.
- Methods of data collection, "ground-truthing", and preparation including bifurcation/anonymization, data generating models.
- Teaching challenge organization.
- Hackathons and on-site challenges.
- Challenge indexing and retrieval, challenge recommenders.

- Experimental design, dataset size, data splits, error bounds, statistical significance, violation of typical assumptions (e.g. i.i.d. data).
- Game theory applied to the analysis of challenge participation, competition and collaboration among participants.
- Diagnosis of data sanity, artifacts in data, data leakage.

- Re-usable challenge platforms, innovative software environments.
- Linking data and software repositories to challenges.
- Security/privacy, intellectual property, licenses.
- Cheating prevention and remedies.
- Issues raised by requiring code submission.
- Challenges requiring user interaction with the platform (active learning, reinforcement learning).
- Dissemination, fact sheets, proceedings, crowdsourced papers, indexing post-challenge publications.
- Long term impact, on-going benchmarks, metrics of impact.
- Participant rewards, stimulation of participation, advertising, sponsors.
- Profiling participants, improving participant professional and social benefits.

- Challenges for the benefit of society, as a scientific research tool, for up-skilling, or to solve industry problems.
- Where to venture next: opportunities for challenge organizers to organize challenges in new domains with high societal impact.
- Successful challenges leading to significant breakthroughs, improvements over the state of the art, or unexpected interesting results.
- Rigorous study of the impact of challenges, analyzing topics and tasks lending themselves to high impact machine learning challenges.
- Challenges organized or supported by Government agencies, funding opportunities.

Related workshops: