Tuesday 5 September 2017

My CNRS webcast "Enabling open and reproducible research at computer systems conferences: good, bad and ugly"

This spring I was kindly invited by Dr. Arnaud Legrand (a CNRS research scientist promoting reproducible research in France) to present our practical experience of enabling open and reproducible research at computer systems conferences (the good, the bad and the ugly).

This CNRS webinar took place in Grenoble on March 14, 2017, with a very lively audience.

You can find the following online resources related to this talk:

Monday 4 September 2017

Microsoft sponsors non-profit cTuning foundation

We would like to thank Microsoft for providing an Azure sponsorship to our non-profit cTuning foundation to host our public repository of cross-linked artifacts and optimization results in the unified and reusable Collective Knowledge format.

Many thanks to Dr. Aaron Smith for his assistance!

Successful PhD defense at the University of Paris-Saclay (advised by cTuning foundation members)

We would like to congratulate Abdul Memon (a PhD student advised by Dr. Grigori Fursin from the cTuning foundation) for successfully defending his thesis "Crowdtuning: Towards Practical and Reproducible Auto-tuning via Crowdsourcing and Predictive Analytics" at the University of Paris-Saclay.

Most of the software, data sets and experiments were shared in a unified, reproducible and reusable way using the Collective Mind framework and later converted to the new Collective Knowledge framework.

We helped prepare ACM policy on Result and Artifact Review and Badging

After organizing many Artifact Evaluations to reproduce and validate experimental results from papers published at various ACM and IEEE computer systems conferences (CGO, PPoPP, PACT, SC), we saw the need for a common reviewing and badging methodology.

In 2016, the cTuning foundation joined the ACM internal workgroup on reproducibility and provided feedback and suggestions to develop a common methodology for artifact evaluation. The outcome of this collaborative effort is the common ACM policy on Result and Artifact Review and Badging published here:

We have also started aligning artifact submission and reviewing procedures across computer systems conferences:

We expect this document to evolve gradually based on our Artifact Evaluation experience. Please stay tuned for more news!