CiML 2017

Machine Learning Challenges as a Research Tool

Saturday December 9, 2017, Long Beach, California



Challenges in machine learning and data science are competitions running over several weeks or months to resolve problems using provided datasets or simulated environments. The playful nature of challenges naturally attracts students, making challenges a great teaching resource. For this fourth edition of the CiML workshop at NIPS, we want to explore the impact of machine learning challenges as a research tool. The workshop will devote a large part to discussions around several axes: (1) benefits and limitations of challenges as a research tool; (2) methods to induce and train young researchers; (3) experimental design to foster contributions that push the state of the art.
CiML is a forum that brings together workshop organizers, platform providers, and participants to discuss best practices in challenge organization, as well as new methods and application opportunities for designing high-impact challenges. Following the success of last year's workshop, in which a fruitful exchange led to many innovations, we propose to reconvene and discuss new opportunities for challenges as a research tool, one of the hottest topics identified in last year's discussions. We have invited prominent speakers in this field.
We will also reserve time for an open discussion to dig into other topics, including open innovation, collaborative challenges (coopetitions), platform interoperability, and tool mutualisation.
This workshop targets challenge organizers, participants, and anyone with a scientific problem involving machine learning that may be formulated as a challenge. The emphasis of the workshop is on challenge design. Hence it nicely complements the NIPS 2017 competition track workshop and will help pave the way toward next year's competition program.
The workshop is accepted! Please submit papers. DEADLINE OCT. 10.

Invited speakers

Ben Hamner (Kaggle, USA)
Balázs Kégl (CNRS, France)
André Elisseeff (Google, Switzerland)
Katja Hofmann (Microsoft, UK)
Xavier Baró (UAB, Spain)

We want to give ample space to discussion by organizing two discussion sessions moderated by the organizers, who will first give a brief introduction to several selected topics. In addition, the invited speakers will be asked to reflect in their presentations upon the main topics of discussion.

Call for abstracts:
We welcome 2-page extended abstracts on topics relating to challenges in machine learning. Selected papers will be presented primarily as posters, but exceptional contributions will be given oral presentations. Abstracts should be submitted by October 10th, 2017, by sending email to the organizers. [You can use the NIPS template for your submissions; submissions need NOT be anonymized; an extra page can be used for references and acknowledgements.]

Topics of interest
- Novel or atypical challenge protocols, particularly relating to research.
- Novel or atypical challenge protocols to tackle complex tasks with very large datasets, multi-modal data, and data streams.
- Methods and metrics of entry evaluation for quantitative and qualitative challenges.
- Methods of data collection, "ground-truthing", and preparation, including bifurcation/anonymization and data-generating models.
- Teaching challenge organization.
- Hackathons and on-site challenges.
- Challenge indexing and retrieval, challenge recommenders.

- Experimental design, dataset size, data splits, error bounds, statistical significance, violation of typical assumptions (e.g. i.i.d. data).
- Game theory applied to the analysis of challenge participation, and of competition and collaboration among participants.
- Diagnosis of data sanity, artifacts in data, data leakage.

- Re-usable challenge platforms, innovative software environments.
- Linking data and software repositories to challenges.
- Security/privacy, intellectual property, licenses.
- Cheating prevention and remedies.
- Issues raised by requiring code submission.
- Challenges requiring user interaction with the platform (active learning, reinforcement learning).
- Dissemination, fact sheets, proceedings, crowdsourced papers, indexing post-challenge publications.
- Long term impact, on-going benchmarks, metrics of impact.
- Participant rewards, stimulation of participation, advertising, sponsors.
- Profiling participants, improving participant professional and social benefits.

- Challenges as a research tool.
- Where to venture next: opportunities for challenge organizers to organize challenges in new domains with high societal impact.
- Successful challenges leading to significant breakthroughs, improvements over the state of the art, or unexpected interesting results.
- Rigorous study of the impact of challenges, analyzing topics and tasks lending themselves to high impact machine learning challenges.
- Challenges organized or supported by Government agencies, funding opportunities.

Related workshops: