An implementation of a distributed ResNet model for classifying CIFAR-10 and MNIST datasets.
Distributed Deep Learning experiments with the BigDL framework over Databricks
MNIST classification using Caffe and OpenMPI
Distributed Tensorflow, Keras and BigDL on Apache Spark
A blockchain-based neural architecture search project.
Yelp review classification using a CNN model with Horovod on an HPC cluster
Simultaneous Multi-Party Learning Framework
Java-based Convolutional Neural Network package running on the Apache Spark framework
Implemented training strategies to mitigate bottlenecks and improve training speed while maintaining the quality of our GANs.
Comparison of distributed machine learning techniques applied to openly available datasets
Horovod tutorial for PyTorch using NVIDIA Docker.
SHUKUN Technology Co., Ltd. algorithm intern (2020/12 to 2021/5): multi-GPU, multi-node training of deep learning models with Horovod and the NVIDIA Clara Train SDK, with a configuration tutorial and performance testing.
Collection of resources for automatic deployment of distributed deep learning jobs on a Kubernetes cluster
PyTorch Examples for Beginners
Distributed Deep Reinforcement Learning for Large Scale Robotic Simulations 👨💻🤖🕸🕹🕷❤️👨🔬
This repository contains implementations of a wide variety of deep learning projects across computer vision, NLP, federated learning, and distributed learning. It includes both university projects and projects pursued out of personal interest in deep learning.
Distributed deep learning framework based on pytorch/numba/nccl and zeromq.
Scalable NLP model fine-tuning and batch inference with Ray and Anyscale
WAGMA-SGD is a decentralized asynchronous SGD based on wait-avoiding group model averaging. The synchronization is relaxed by making the collectives externally triggerable; that is, a collective can be initiated without requiring that all processes enter it. It partially reduces the data within non-overlapping groups of processes, improving the…
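The group-averaging idea behind WAGMA-SGD can be sketched as follows. This is a minimal single-process illustration, not the actual WAGMA-SGD implementation (which relies on externally triggerable collectives over MPI-style processes); `group_average` is a hypothetical helper name.

```python
def group_average(params_per_worker, group_size):
    """Average parameter vectors within non-overlapping groups of workers.

    params_per_worker: list of per-worker parameter lists.
    Returns the post-averaging parameters for every worker: each worker's
    parameters are replaced by the mean over its own group only, rather
    than a global all-reduce over all workers.
    """
    averaged = []
    for start in range(0, len(params_per_worker), group_size):
        group = params_per_worker[start:start + group_size]
        # Element-wise mean across the workers in this group.
        mean = [sum(vals) / len(group) for vals in zip(*group)]
        averaged.extend([list(mean) for _ in group])
    return averaged

# Four workers partitioned into two groups of two: each pair
# converges to its group mean, independently of the other group.
workers = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]
print(group_average(workers, 2))
# → [[2.0, 3.0], [2.0, 3.0], [6.0, 7.0], [6.0, 7.0]]
```

Because each group reduces only among its own members, a slow process delays only its group rather than the whole job, which is the point of relaxing the synchronization.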