Monday, 16 February 2015

Artifact Evaluation Experience presentation online

I presented our Artifact Evaluation Experience at CGO'15/PPoPP'15. The presentation is now available online:

Overall, the feedback is positive and we plan to continue AE for CGO'16/PPoPP'16.

Our main task is to improve guidelines for artifact submission and reviewing.

We will continue validating our new publication model for ADAPT'16

For a few years we have been promoting a new publication model where papers and related material are submitted to open-access archives, then publicly discussed via Slashdot and Reddit, and only then validated and selected by the program committee.

Though we had no participants for this publication model at ADAPT'15, it attracted considerable interest, and some colleagues are willing to participate at ADAPT'16.

Furthermore, one of the papers made it to Slashdot, generating considerable feedback and thus supporting our idea. By the way, we just noticed that a similar approach has been proposed in other sciences!

Therefore, we plan to continue validating our new publication model at ADAPT'16. Please stay tuned!

Highest ranked artifacts for CGO'15/PPoPP'15

We would like to congratulate the authors of the following two highest-ranked artifacts from CGO'15/PPoPP'15:

1st place (sponsored by Nvidia)

 "The SprayList: A scalable relaxed priority queue"
Justin Kopinsky, Dan Alistarh, Jerry Li  and Nir Shavi

received an Nvidia Quadro K6000

2nd place (sponsored by cTuning Foundation)
"A graph-based higher-order intermediate representation"
Roland Leißa, Marcel Köster and Sebastian Hack

received an Acer C720P

Sunday, 8 February 2015

ADAPT'15 outcome and new publication model for ADAPT'16

A few words about ADAPT'15 outcome:

* The final program with all PDFs is available online here.

* The following paper received the Nvidia best paper award (a Tesla K40):

A Self-adaptive Auto-scaling Method for Scientific Applications on HPC Environments and Clouds
Kiran Mantripragada (1), Alecio Binotto (1) and Leonardo Tizzei (2)
(1) IBM Research - Brazil
(2) IBM Brazil

* We had a very interesting discussion about our new open publication model. In spite of some possible issues, there seems to be support for trying it at ADAPT'16. Interestingly, we just found out that a very similar model has been proposed for other scientific fields (see this blog article). Furthermore, we found the following public discussion on Slashdot about one of the ADAPT'15 papers, which supports our idea (as a researcher, you normally publish to present your work to a broad community, initiate discussions and get feedback to improve your work, unless it's purely for academic promotion reasons).

Please follow our announcements about ADAPT'16 (it will likely be co-located with HiPEAC'16 in Prague and feature the new publication model).

Anaconda Scientific Python Distribution

I recently discovered the Anaconda Scientific Python Distribution. It contains all the libraries needed by the Collective Knowledge framework that I use for auto-tuning, statistical analysis and predictive analytics, so I wanted to share it with you:
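As an illustration of the kind of statistical analysis these bundled libraries enable, here is a minimal sketch (not part of Collective Knowledge itself; the timings are invented) that uses NumPy, which ships with Anaconda, to summarize repeated benchmark measurements robustly:

```python
import numpy as np

# Hypothetical repeated timings (in seconds) of one benchmark run 10 times;
# the values are made up for illustration.
timings = np.array([1.02, 0.98, 1.01, 1.35, 0.99, 1.00, 1.03, 0.97, 1.02, 1.01])

# Robust summary: median and interquartile range are less sensitive
# to outliers (e.g. OS noise) than mean and standard deviation.
median = np.median(timings)
q1, q3 = np.percentile(timings, [25, 75])
iqr = q3 - q1

# Flag measurements outside the 1.5*IQR fences as potential outliers.
outliers = timings[(timings < q1 - 1.5 * iqr) | (timings > q3 + 1.5 * iqr)]
print(round(float(median), 2), len(outliers))  # -> 1.01 1
```

The median and interquartile range are preferable here because a single noisy run (the 1.35 s measurement) would otherwise skew a mean-based summary.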

Wednesday, 4 February 2015

New year's digest on collaborative & reproducible research

This list is aggregated from public and private messages and from web browsing. Don't hesitate to send me links via our public mailing list or LinkedIn group (so that you can be acknowledged):!forum/collective-mind

=== Misc articles ===

* "Research Wranglers: Initiatives to Improve Reproducibility of Study Findings"

* Dennis McCafferty, "Should Code be Released?",
  Communications of the ACM, October 2010, Vol. 53, No. 10, DOI:10.1145/1831407.1831415

* Chris Drummond, "Replicability is not Reproducibility: Nor is it Good Science"
  Proc. of the Evaluation Methods for Machine Learning Workshop
  at the 26th ICML, Montreal, Canada, 2009.
  Copyright: National Research Council of Canada

* Science is in a reproducibility crisis - how do we resolve it?

* My blog article on "Automatic performance tuning and reproducibility as a side effect"
  for the Software Sustainability Institute:

* Puzzling Measurement of "Big G" Gravitational Constant Ignites Debate

* White House takes notice of reproducibility in science, and wants your opinion

* Problems during performance benchmarking:

We also experienced many similar issues during our work on auto-tuning and machine learning:
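One recurring issue is deciding whether repeated measurements are stable enough to compare across machines at all. A minimal sanity check (standard library only; the timings are invented) might look like this:

```python
import statistics

# Hypothetical wall-clock times (seconds) for the same kernel on two
# machines; the values are invented for illustration.
quiet_machine = [2.00, 2.01, 1.99, 2.00, 2.02]
noisy_machine = [2.00, 2.40, 1.70, 2.90, 1.60]

def cov(samples):
    """Coefficient of variation: sample stdev relative to the mean."""
    return statistics.stdev(samples) / statistics.mean(samples)

# Reject experiments whose run-to-run variation exceeds, say, 5%
# before drawing any cross-machine conclusions.
for name, samples in [("quiet", quiet_machine), ("noisy", noisy_machine)]:
    print(name, "stable" if cov(samples) < 0.05 else "unstable")
```

The 5% threshold is arbitrary; the point is that run-to-run variation has to be checked before speedups smaller than the noise are reported.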

* ACM SIGOPS Operating Systems Review - Special Issue on Repeatability
  and Sharing of Experimental Artifacts:

* Vinton G. Cerf. "Bit Rot: Long-Term Preservation of Digital Information" [Point of View]

* Less related though interesting (about citations):

=== Future events ===

* February 9, 2015, CGO/PPoPP joint session on artifact evaluation experience
  San Francisco, 17:15 - 17:35
  Grigori Fursin and Bruce Childers

* November 1-4, 2015, Dagstuhl Perspective Workshop
  "Artifact Evaluation for Publications"
  Bruce Childers, Shriram Krishnamurthi, Grigori Fursin, Andreas Zeller

* March 13-18, 2016, Dagstuhl Seminar 16111
  "Rethinking Experimental Methods in Computing"

=== Past events ===

* Oct 27-30, 2014, Washington DC, US
  "1st International Workshop on Collaborative methodologies to Accelerate
  Scientific Knowledge discovery in big data (CASK) 2014"

  In conjunction with 2014 IEEE International Conference on Big Data
  (IEEE BigData 2014)

* September 1, 2014:  Special journal issue on reproducible research methodologies
  in IEEE Transactions on Emerging Topics in Computing (TETC).

* January 2015:

  ACM SIGOPS Operating Systems Review
  Special Issue on Repeatability and Sharing
  of Experimental Artifacts

=== Journals/Conferences with reproducible articles ===
* IPOL Journal: Image Processing On Line

=== Tools ===
* NGS pipelines - integrates pipelines and user interfaces
  to help biologists analyse data output by biological
  applications such as RNAseq, sRNAseq, ChipSeq and BS-seq:

* Skoll: A Process and Infrastructure for Distributed, Continuous Quality Assurance

* NEPI: Simplifying network experimentation:

* RR (Mozilla project): records nondeterministic executions and debugs them deterministically

* Burrito: Rethinking the Electronic Lab Notebook

* Collective Knowledge (cTuning v4): our tool and repository to simplify code and data sharing as reusable components (for collaborative and reproducible R&D):

=== Online workflows ===

* RunMyCode:

* AptLab:

=== Projects ===
* OpenLab:

* EU Recode project

* CERN: opendata

* Research Data Alliance:

* Open Data Institute:

=== Online archives/repos ===

* Olive Archive (preserving executable content):


* OpenAire (CERN)

* Zenodo:

* ResearchCompendia:

* Internet Archive:

* The National Archives:

* WikiData:

* The digital preservation network:

* Open datasets:

* DataHub:

* Datacite: citing data as DOI (Germany, has connections with CERN)

* CrossRef:

* International DOI Foundation

* Our new pilot Collective Knowledge repository: