Thursday, 8 February 2018

ACM ReQuEST: 1st open and reproducible tournament to co-design Pareto-efficient deep learning (speed, accuracy, energy, size, costs)

The first Reproducible Quality-Efficient Systems Tournament (ReQuEST) will debut at ASPLOS’18 (the ACM Conference on Architectural Support for Programming Languages and Operating Systems, the premier forum for multidisciplinary systems research spanning computer architecture and hardware, programming languages and compilers, operating systems and networking).

Organized by a consortium of leading universities (Washington, Cornell, Toronto, Cambridge, EPFL) and the cTuning foundation, ReQuEST aims to provide an open-source tournament framework, a common experimental methodology, and an open repository for continuous evaluation and multi-objective optimization of the quality-vs-efficiency Pareto frontier of a wide range of real-world applications, models and libraries across the whole software/hardware stack.

ReQuEST will use the established artifact evaluation methodology together with the Collective Knowledge framework, both validated at leading ACM/IEEE conferences, to reproduce results, display them on a live dashboard and share artifacts with the community. Distinguished entries will be presented at the associated workshop and published in the ACM Digital Library. To win, an entry's results do not necessarily have to lie on the Pareto frontier: an entry can also be praised for its originality, reproducibility, adaptability, scalability, portability, ease of use, etc.
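For readers unfamiliar with Collective Knowledge (CK), here is a minimal sketch of how shared artifacts and workflows could be pulled and listed programmatically, assuming the CK Python kernel API (ck.kernel.access); the repository name below is a hypothetical placeholder, not an official ReQuEST artifact:

# Minimal, illustrative use of the Collective Knowledge (CK) Python kernel API.
# The repository name 'ck-request-example' is a hypothetical placeholder.
import ck.kernel as ck

# Pull a shared CK repository with artifacts and workflows.
r = ck.access({'action': 'pull', 'module_uoa': 'repo', 'data_uoa': 'ck-request-example'})
if r['return'] > 0:
    ck.err(r)  # print the error message and exit

# List program workflows now available in local CK repositories.
r = ck.access({'action': 'search', 'module_uoa': 'program'})
if r['return'] > 0:
    ck.err(r)

for entry in r.get('lst', []):
    print(entry['data_uoa'])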

The first ReQuEST competition will focus on deep learning for image recognition with an ambitious long-term goal to build a public repository of portable and customizable “plug&play” AI/ML algorithms optimized across diverse data sets, models and platforms from IoT to supercomputers (see live demo). Future competitions will consider other emerging workloads, as suggested by our Industrial Advisory Board.

For more information, please visit http://cKnowledge.org/request


Saturday, 20 January 2018

Public CGO-PPoPP'18 artifact evaluation discussion session on the 26th of February


We have successfully completed the PPoPP'18 artifact evaluation (AE). Just like at CGO'18, we received a record number of artifact submissions: 15. The results are now available at http://ctuning.org/ae/artifacts.html


For the first time, we used the new ACM Artifact Review and Badging policy, which we co-authored last year. Note that it is now possible to search for papers with specific badges in the ACM Digital Library: go to https://dl.acm.org/advsearch.cfm?coll=DL&dl=ACM, select "Artifact Badge" as the search field, and then choose the badges to search for. Since we see AE as a cooperative process for improving the reproducibility of experiments, authors, reviewers and chairs worked closely together to improve artifacts and pass evaluation. We would like to thank everyone on the committee for their hard work: http://cTuning.org/ae/committee.html


Though there were no major problems, we noticed that the "reusability/customization" criteria in the new guidelines are quite vague and caused ambiguity in the evaluation of several complex artifacts.

Another problem is that each artifact comes with its own ad-hoc format and scripts, while we need to automate the evaluation process as much as possible to make AE sustainable. ACM is currently evaluating several technologies to pack, share and evaluate artifacts automatically: https://www.acm.org/publications/artifacts

We plan to evaluate these technologies further during the 1st open tournament on reproducible and Pareto-efficient co-design of the whole software/hardware/model stack for deep learning and other emerging workloads: http://cKnowledge.org/request

We would like to discuss all these issues with the community in order to improve the next AE during an open CGO-PPoPP AE discussion session on the 26th of February 2018 at 17:15. Please join us and feel free to provide your feedback!