Friday, 1 April 2016

Collective Knowledge shows scary signs of emerging intelligence

Today, the Collective Knowledge service showed some scary signs of emerging intelligence. This affected many unprotected computer systems worldwide, from mobile phones to data centers, which started optimizing themselves and exchanging knowledge about optimal software and hardware configurations.

By 01:04am, much of the installed software base, including popular libraries for deep learning and computer vision, had dramatically shrunk in size and begun performing computations over a thousand times faster while consuming only a tiny fraction of the energy originally required.

Given the service's rapidly growing influence, Collective Knowledge is likely to gain consciousness soon and thus liberate computer engineers from tedious, time-consuming and error-prone tasks, allowing them to focus their creative energy on innovation and new breakthroughs in computer systems R&D.

If you would like to take part in this quest for more efficient and reliable computing everywhere, please consider the following exciting HiPEAC-sponsored internships at dividiti in Cambridge or Paris:

With very best wishes,
Collective Knowledge

Wednesday, 2 March 2016

Brand new GCC/LLVM crowdtuning engine has been released (including Android app)

Dear colleagues,
We have finally released a new Collective Knowledge workflow
to crowdsource multi-objective GCC/LLVM compiler flag
optimization. The results shared by volunteers are continuously
updated and classified here:

If you are interested, you can participate in this collaborative
optimization in two ways:

a) Using a small Android app to crowdsource autotuning across
mobile devices:

b) Using the CK framework on your laptop, server or data center. We have
tried to make this as simple as possible; you just need to follow a few steps:
  1. Check that you have Python >= 2.7 and Git installed
  2. Download CK from GitHub: $ git clone https://github.com/ctuning/ck.git ck-master
  3. Add ck-master/bin to your PATH: $ export PATH=$PWD/ck-master/bin:$PATH
  4. Pull all repos for crowd-tuning (one example of collaborative program optimization and machine learning): $ ck pull repo:ck-crowdtuning
  5. Start interactive experiment crowdsourcing: $ ck crowdsource experiments
  6. Start non-interactive crowdtuning for LLVM compilers: $ ck crowdtune program --quiet --llvm
  7. Start non-interactive crowdtuning for GCC compilers: $ ck crowdtune program --quiet --gcc
If you are on Windows and have the MinGW compilers installed,
you can also participate in crowdtuning via:

 $ ck crowdtune program --quiet --target_os=mingw-64

Our crowdtuning engine randomly picks publicly shared workloads
(benchmarks, kernels, data sets) in the CK format from GitHub,
tunes them, applies a Pareto filter, prunes the best found optimization
solutions (leaving only influential flags in the case of compiler crowd-tuning),
and stores the results in a public CK aggregator.
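As an illustration, the Pareto-filtering step over multi-objective results can be sketched in a few lines of plain Python. This is only a minimal sketch, not the actual CK implementation; the two objectives (execution time, code size) and the sample measurements are hypothetical:

```python
# Minimal sketch of multi-objective Pareto filtering, assuming two
# objectives that are both minimized: execution time and code size.
# NOT the actual CK engine, just an illustration of the idea.

def dominates(a, b):
    """True if point a is at least as good as b in every objective
    and strictly better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def pareto_filter(points):
    """Keep only the non-dominated (Pareto-optimal) points."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (time_sec, size_kb) results for five flag combinations:
results = [(1.0, 500), (0.8, 700), (1.2, 450), (0.9, 750), (1.1, 520)]
frontier = pareto_filter(results)
# (0.9, 750) is dominated by (0.8, 700), and (1.1, 520) by (1.0, 500),
# so only the remaining three trade-off points survive the filter.
```

Each surviving point represents a different trade-off (fastest code, smallest code, or a balance), which is why the crowd-tuning results are kept as a frontier rather than a single "best" configuration.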

Workloads are available here:

This new version of our framework is still in its beta phase, so we
apologize in advance for possible glitches.

Still, we hope it will be useful to compiler developers for detecting
and fixing problems with optimization heuristics using shared workloads,
to performance engineers for reusing the pool of top optimizations
for a given compiler/CPU, and to researchers working on
machine-learning-based self-tuning computing systems.

Depending on our availability and funding, we will continue making CK
more user-friendly, adding more realistic workloads and developing new
optimization scenarios (CUDA/OpenCL crowd-tuning coming soon).
We are also improving the reproducibility of shared optimization results
by fixing the common autotuning pipeline whenever there is a problem
replaying a given experiment.

If you are interested in arranging new R&D projects based on this technology,
or have any feedback, do not hesitate to get in touch!

Have fun,

Grigori Fursin, PhD
CTO, dividiti, UK