Challenges in Machine Learning: Gaming and Education
NIPS 2016 workshop proposal
Friday, December 9, 2016, Barcelona, Spain



Challenges in machine learning and data science are competitions running over several weeks or months to solve problems using provided datasets or simulated environments. The playful nature of challenges naturally attracts students, making challenges a great teaching resource. For this third edition of the CiML workshop at NIPS, we want to explore in more depth the opportunities that challenges offer as teaching tools. The workshop will devote a large part to discussions around four axes: (1) benefits and limitations of challenges for giving students problem-solving skills and teaching them best practices in machine learning; (2) challenges for continuous education and up-skilling in the enterprise; (3) design issues that make challenges more effective teaching aids; (4) curricula involving students in challenge design as a means of educating them about rigorous experimental design, reproducible research, and project leadership.
CiML is a forum that brings together workshop organizers, platform providers, and participants to discuss best practices in challenge organization, as well as new methods and application opportunities for designing high-impact challenges. Following the success of last year's workshop, in which a fruitful exchange led to many innovations, we propose to reconvene and discuss new opportunities for challenges in education, one of the hottest topics identified in last year's discussions. We have invited prominent speakers in this field.
We will also reserve time for an open discussion to dig into other topics, including open innovation, coopetitions, platform interoperability, and tool sharing.

Our proposal was accepted!

Invited speakers


Emma Brunskill
Learning to improve learning: ML in the classroom

Sebastien Marcel
Reproducible Research: teaching scientific method

Henning Muller
(MediaEval, ImageCLEF)
a serious game

Joaquin Vanschoren 
(TU Eindhoven)
OpenML in research and education

Larry Zitnick 
Gathering common sense knowledge: how to game it?

We want to devote substantial time to discussion by organizing two discussion sessions moderated by the organizers, who will first give a brief introduction to several selected topics: how challenges are used "in class", what makes a good classroom challenge, challenges and MOOCs, grading challenge work, and involving students in challenge work. In addition, the invited speakers will be asked to reflect in their presentations upon the four main topics of discussion.

Call for abstracts:
We welcome 2-page extended abstracts on topics relating to challenges in machine learning and gaming in education at large. Selected papers will be presented primarily as posters, but exceptional contributions will be given oral presentations. Abstracts should be submitted by October 10th, 2016 by sending email to

Topics of interest
- Novel or atypical challenge protocols, particularly relating to gaming and education.
- Novel or atypical challenge protocols to tackle complex tasks with very large datasets, multi-modal data, and data streams.
- Methods and metrics of entry evaluation, quantitative and qualitative challenges.
- Methods of data collection, "ground-truthing", and preparation, including obfuscation/anonymization and data-generating models.
- Teaching challenge organization.
- Hackathons and on-site challenges.
- Challenge indexing and retrieval, challenge recommenders.

- Societal or psychological studies of theories about gaming and education.
- Experimental design, dataset size, data splits, error bounds, statistical significance, violation of typical assumptions (e.g., i.i.d. data).
- Game theory applied to the analysis of challenge participation, competition and collaboration among participants.
- Diagnosis of data sanity, artifacts in data, data leakage.

- Re-usable challenge platforms, innovative software environments.
- Linking data and software repositories to challenges.
- Security/privacy, intellectual property, licenses.
- Cheating prevention and remedies.
- Issues raised by requiring code submission.
- Challenges requiring user interaction with the platform (active learning, reinforcement learning).
- Dissemination, fact sheets, proceedings, crowdsourced papers, indexing post-challenge publications.
- Long-term impact, ongoing benchmarks, metrics of impact.
- Participant rewards, stimulation of participation, advertising, sponsors.
- Profiling participants, improving participant professional and social benefits.

- Challenges as an educational tool.
- Where to venture next: opportunities for challenge organizers to move into new domains with high societal impact.
- Successful challenges leading to significant breakthroughs, improvements over the state of the art, or unexpected interesting results.
- Rigorous study of the impact of challenges, analyzing topics and tasks lending themselves to high impact machine learning challenges.
- Challenges organized or supported by Government agencies, funding opportunities.

Related workshops: