News from the non-profit cTuning foundation about our open-source tools and the Collective Knowledge platform to enable collaborative, reproducible and reusable AI/ML/Quantum/IoT R&D: https://cTuning.org.
This spring, I was kindly invited by Dr. Arnaud Legrand (a CNRS research scientist promoting reproducible research in France) to present our practical experience enabling open and reproducible research at computer systems conferences (the good, the bad and the ugly).
This CNRS webinar took place in Grenoble on March 14, 2017, with a very lively audience.
You can find the following online resources related to this talk:
We would like to congratulate Abdul Memon (a PhD student advised by Dr. Grigori Fursin from the cTuning foundation) on successfully defending his thesis "Crowdtuning: Towards Practical and Reproducible Auto-tuning via Crowdsourcing and Predictive Analytics" at the University of Paris-Saclay.
After organizing many Artifact Evaluations to reproduce and validate experimental results from published papers at various ACM and IEEE computer systems conferences (CGO, PPoPP, PACT, SC), we saw the need for a common reviewing and badging methodology.
In 2016, the cTuning foundation joined the ACM internal workgroup on reproducibility and provided feedback and suggestions toward a common methodology for artifact evaluation. The outcome of this collaborative effort is the common ACM policy on Result and Artifact Review and Badging, published here: