Saturday 20 January 2018

Public CGO-PPoPP'18 artifact evaluation discussion session on the 26th of February


We have successfully completed the PPoPP'18 artifact evaluation (AE). Just like at CGO'18, we received a record number of artifact submissions: 15. The results are now available at http://ctuning.org/ae/artifacts.html!


For the first time, we used the new ACM Artifact Review and Badging policy, which we co-authored last year. Note that it is now possible to search for papers with specific badges in the ACM Digital Library: go to https://dl.acm.org/advsearch.cfm?coll=DL&dl=ACM, select "Artifact Badge" as the search field, and then choose the badges to search for!

Since we see AE as a cooperative process to improve the reproducibility of experiments, authors, reviewers and chairs worked closely together to improve artifacts and pass evaluation. We would like to thank them all for their hard work: http://cTuning.org/ae/committee.html!


Though there were no major problems, we noticed that the "reusability/customization" criteria in the new guidelines are quite vague and caused ambiguity during the evaluation of several complex artifacts.

Another problem is that every artifact comes with its own ad-hoc format and scripts, while we need to automate the evaluation process as much as possible to make AE sustainable. ACM is now evaluating several technologies to pack, share and evaluate artifacts automatically: https://www.acm.org/publications/artifacts
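To make this more concrete, here is a minimal sketch of what such unified automation could look like via the Python API of our Collective Knowledge (CK) framework; the repository URL and program name below are hypothetical placeholders, not a real artifact:

import ck.kernel as ck

def check(r):
    # Every CK API call returns a dict with 'return' (0 means success)
    # and an 'error' string describing any failure.
    if r['return'] > 0:
        raise RuntimeError(r.get('error', 'unknown CK error'))
    return r

# Pull a shared repository containing the artifact's workflow
# (hypothetical URL).
check(ck.access({'action': 'pull',
                 'module_uoa': 'repo',
                 'url': 'https://github.com/ctuning/ck-artifact-demo'}))

# Compile and then run the artifact's program through the same
# unified actions (hypothetical program name).
check(ck.access({'action': 'compile',
                 'module_uoa': 'program',
                 'data_uoa': 'artifact-demo',
                 'speed': 'yes'}))

check(ck.access({'action': 'run',
                 'module_uoa': 'program',
                 'data_uoa': 'artifact-demo'}))

The point is that every artifact would expose the same pull/compile/run actions regardless of its internal scripts, so evaluators would not have to learn a new ad-hoc setup for each submission.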

We plan to evaluate these technologies further during the 1st open tournament on reproducible and Pareto-efficient co-design of the whole software/hardware/model stack for deep learning and other emerging workloads: http://cKnowledge.org/request

We would like to discuss all these issues with the community to improve the next AE during an open CGO-PPoPP AE discussion session on the 26th of February 2018 at 17:15. Please join us and feel free to share your feedback!