[Dec/2024] We are organizing the next ICBINB workshop at ICLR 2025! We will take a deep dive into the pitfalls and challenges of applied deep learning!
[Jun/2023] The ICBINB workshop will be back at NeurIPS 2023! This time with a focus on failure modes of foundation models.
[Jan/2023] The talks of our NeurIPS 2022 workshop are now online. Watch them here!
[Nov/2022] We have launched the ICBINB Repository of Unexpected Negative Results. Feedback and suggestions are welcome.
The ICBINB initiative is a movement within the ML community advocating well-executed, meaningful research beyond bold numbers. The goals of the initiative are to crack open the research process, re-value unexpected negative results, question well-established default practices, and advance the understanding, elegance, and diversity of the field, rather than focusing solely on outcomes and rewarding only approaches that beat previous work on a given benchmark.
The three pillars of the initiative are:
Here is our wonderful team of volunteers! None of this would be possible without their help.
Aaron Schein
Columbia University
Arno Blaas
Apple
Andreas Kriegler
Technical University of Vienna
David Rohde
Criteo AI Lab
Fan Feng
City University of Hong Kong
Francisco J.R. Ruiz
DeepMind
Ian Mason
Fujitsu Research
Javier Antorán
University of Cambridge
Jessica Forde
Brown University
Luca Zappella
Apple
Kelly Buchanan
Columbia University
Manuel Haussmann
University of Southern Denmark
Melanie F. Pradier
Microsoft Research
Nicola Branchini
University of Edinburgh
Rui Yang
Cornell University
Sahra Ghalebikesabi
University of Oxford
Sonali Parbhoo
Imperial College London
Stephanie Hyland
Microsoft Research
Tobias Uelwer
TU Dortmund
Vincent Fortuin
University of Cambridge
Wenbin Zhang
Carnegie Mellon University
Yubin Xie
Cornell University/MSKCC
David Blei
Columbia University
Max Welling
University of Amsterdam & Microsoft Research
Robert Williamson
University of Tübingen
Tamara Broderick
MIT
Hanna Wallach
Microsoft Research
Isabel Valera
Saarland University