# crowd-benchmarking

Here are 8 public repositories matching this topic...


Collective Knowledge extension providing unified, customizable benchmarks (with extensible JSON meta information) that integrate easily into portable Collective Knowledge workflows. These benchmarks can be compiled and run with different compilers, environments, and hardware across Linux, macOS, Windows, and Android.

  • Updated Sep 21, 2021
  • C
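The idea of unified benchmarks driven by JSON meta information can be sketched as follows. This is a minimal illustration, not the actual Collective Knowledge schema: the field names (`sources`, `compile_flags`, `run_cmd`) and the benchmark name are assumptions made for the example.

```python
import json

# Illustrative benchmark meta description.
# Field names are assumptions for this sketch, not the real CK schema.
meta_json = """
{
  "benchmark": "image-corner-detection",
  "language": "c",
  "sources": ["susan.c"],
  "compile_flags": {"gcc": "-O3", "clang": "-O2"},
  "run_cmd": "./a.out input.pgm output.pgm"
}
"""

def build_compile_command(meta: dict, compiler: str) -> str:
    """Assemble a compile command for the requested compiler
    from the benchmark's meta description."""
    flags = meta["compile_flags"].get(compiler)
    if flags is None:
        raise ValueError(f"no flags recorded for compiler {compiler!r}")
    return f"{compiler} {flags} {' '.join(meta['sources'])}"

meta = json.loads(meta_json)
print(build_compile_command(meta, "gcc"))
```

Keeping the compiler- and platform-specific details in data rather than in build scripts is what lets the same benchmark be retargeted across compilers and operating systems.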

Collective Knowledge crowd-tuning extension that lets users crowdsource their experiments (using portable Collective Knowledge workflows), such as performance benchmarking, autotuning, and machine learning, across diverse volunteer-provided platforms running Linux, Windows, macOS, and Android. Includes a demo of DNN crowd-benchmarking and crowd-tuning.

  • Updated Jul 10, 2021
  • Python
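The crowdsourcing model described above — many volunteers submitting results for the same workload on different platforms — can be sketched as a simple aggregation step. The record fields and platform names below are illustrative assumptions, not the real crowd-tuning submission format:

```python
from collections import defaultdict
from statistics import mean

# Illustrative volunteer submissions (field names are assumptions
# for this sketch, not the actual crowd-tuning record format).
submissions = [
    {"platform": "android-arm64", "runtime_s": 1.92},
    {"platform": "android-arm64", "runtime_s": 2.10},
    {"platform": "linux-x86_64", "runtime_s": 0.84},
    {"platform": "linux-x86_64", "runtime_s": 0.79},
]

def aggregate(subs):
    """Group runtimes by platform and report the best and mean
    runtime seen on each platform."""
    by_platform = defaultdict(list)
    for s in subs:
        by_platform[s["platform"]].append(s["runtime_s"])
    return {
        p: {"best_s": min(ts), "mean_s": mean(ts)}
        for p, ts in by_platform.items()
    }

print(aggregate(submissions))
```

Aggregating per platform is what makes volunteer results comparable: a tuning choice that wins on one device class may lose on another, so results are grouped before any ranking.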

Crowdsources video experiments (such as collaborative benchmarking and optimization of DNN algorithms) using the Collective Knowledge Framework across diverse Android devices provided by volunteers. Results are continuously aggregated in an open repository.

  • Updated Dec 20, 2018
  • Java

Cross-platform Python client for the CodeReef.ai portal to manage portable workflows, reusable automation actions, software detection plugins, meta packages, and dashboards for crowd-benchmarking.

  • Updated Mar 27, 2020
  • Python
