A constrained expectation-maximization algorithm for feasible graph inference.
Modified inference engine for quantized convolution using product quantization
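The product-quantization idea behind such an engine can be sketched in a few lines of NumPy: split each weight row into subvectors, run k-means in every subspace, and keep only small codebooks plus uint8 codes. This is an illustrative sketch with made-up names, not the repo's actual API:

```python
import numpy as np

def product_quantize(W, n_sub=2, n_codes=4, n_iter=10, seed=0):
    """Product quantization sketch: split each row of W into n_sub
    subvectors, k-means each subspace, store a codebook + uint8 codes."""
    rng = np.random.default_rng(seed)
    n, d = W.shape
    sub_d = d // n_sub
    codebooks, codes = [], []
    for s in range(n_sub):
        X = W[:, s * sub_d:(s + 1) * sub_d]
        # Lloyd's algorithm in this subspace
        C = X[rng.choice(n, n_codes, replace=False)]
        for _ in range(n_iter):
            assign = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
            for j in range(n_codes):
                if np.any(assign == j):
                    C[j] = X[assign == j].mean(axis=0)
        codebooks.append(C)
        codes.append(assign.astype(np.uint8))
    return codebooks, codes

def reconstruct(codebooks, codes):
    # Approximate W by looking up each subvector's nearest centroid.
    return np.hstack([C[a] for C, a in zip(codebooks, codes)])

W = np.random.default_rng(1).normal(size=(32, 8))
books, codes = product_quantize(W)
W_hat = reconstruct(books, codes)
print(W_hat.shape)
```

At inference time a quantized convolution only needs the codebooks and the per-row codes, which is where the memory and bandwidth savings come from.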
ncnn is a high-performance neural network inference framework optimized for the mobile platform
Batch Partitioning for Multi-PE Inference with TVM (2020)
Improving Natural Language Processing tasks using BERT-based models
OnnxRT based Inference Optimization of Roberta model trained for Sentiment Analysis On Twitter Dataset
PyTorch Mobile: iOS examples
Faster YOLOv8 inference: optimize and export YOLOv8 models for faster inference using OpenVINO and NumPy 🔢
Interface for TensorRT engines inference along with an example of YOLOv4 engine being used.
[WIP] A template for getting started writing code using GGML
A compilation of various ML and DL models and ways to optimize their inference.
YOLOV8 - Object detection
Learn the ins and outs of efficiently serving Large Language Models (LLMs). Dive into optimization techniques, including KV caching and Low Rank Adapters (LoRA), and gain hands-on experience with Predibase’s LoRAX framework inference server.
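The KV-caching technique that course covers can be sketched in plain NumPy: during decoding, each step appends one key/value pair to a cache and attends over the whole cached prefix, instead of re-projecting every past token. A minimal sketch with identity projections standing in for the real ones:

```python
import numpy as np

def attention(q, K, V):
    # Scaled dot-product attention for one query over all cached steps.
    scores = q @ K.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V

class KVCache:
    """Append-only key/value cache: each decode step adds one (k, v)
    pair rather than recomputing projections for the whole prefix."""
    def __init__(self, d):
        self.K = np.empty((0, d))
        self.V = np.empty((0, d))

    def append(self, k, v):
        self.K = np.vstack([self.K, k])
        self.V = np.vstack([self.V, v])

d = 4
cache = KVCache(d)
rng = np.random.default_rng(0)
for step in range(3):
    x = rng.normal(size=d)  # hidden state of the newest token (stand-in)
    k, v, q = x, x, x       # identity projections, for brevity
    cache.append(k, v)
    out = attention(q, cache.K, cache.V)
print(out.shape)
```

In a real transformer the cache holds one K/V pair per layer and head; the per-step cost drops from quadratic in the prefix length to linear.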
Blog posts, reading reports, and code examples for AGI/LLM-related knowledge.
MLP-Rank: A graph theoretical approach to structured pruning of deep neural networks based on weighted Page Rank centrality as introduced by the related thesis.
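The centrality-scoring idea behind that approach can be sketched with power iteration: treat a layer's weight magnitudes as a weighted graph, compute PageRank scores over it, and keep only the highest-scoring units. This is an illustrative sketch, not the thesis's exact graph construction:

```python
import numpy as np

def pagerank_scores(W, damping=0.85, n_iter=100):
    """Weighted PageRank via power iteration over a unit-connectivity
    graph built from |W| (illustrative construction)."""
    A = np.abs(W)
    A = A / A.sum(axis=0, keepdims=True)  # column-stochastic
    n = A.shape[0]
    r = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        r = (1 - damping) / n + damping * (A @ r)
    return r

def prune_mask(scores, keep_frac=0.5):
    # Structured pruning: keep the highest-centrality units.
    k = int(len(scores) * keep_frac)
    mask = np.zeros_like(scores, dtype=bool)
    mask[np.argsort(scores)[-k:]] = True
    return mask

W = np.random.default_rng(2).normal(size=(6, 6))
scores = pagerank_scores(W)
mask = prune_mask(scores)
print(mask.sum())
```

Because the pruning is structured (whole units rather than individual weights), the resulting network stays dense and runs without sparse-kernel support.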
PyTorch Mobile: Android examples of usage in applications
Batch estimation on Lie groups
A simple tool that applies structure-level optimizations (e.g. Quantization) to a TensorFlow model
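The quantization such a tool applies can be illustrated with symmetric per-tensor int8 quantization of a weight array; this is a hand-rolled sketch of the arithmetic, not the tool's actual interface:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: map the float range
    [-max|w|, max|w|] onto [-127, 127] with a single scale factor."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover approximate floats; error is bounded by scale / 2.
    return q.astype(np.float32) * scale

w = np.linspace(-1.0, 1.0, 9, dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
print(np.abs(w - w_hat).max() <= s / 2 + 1e-6)
```

Storing int8 instead of float32 cuts weight memory 4x, and integer kernels are typically faster on mobile and edge hardware.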
🤖️ Optimized CUDA Kernels for Fast MobileNetV2 Inference
Cross-platform modular neural network inference library, small and efficient