Note: This repository was archived by the owner on Dec 19, 2023 and is now read-only.

janhq/model-converter


Introduction

This repository provides a Model Converter tool for converting Hugging Face large language models into the supported formats listed below.

Supported Formats

  • GGUF (Q3_K_M, Q4_K_M, Q5_K_M, Q8_0)
  • AWQ
  • GPTQ
  • TensorRT-LLM

GGUF Conversion Workflow

  • Clone the repository
  • Set up a Python environment
  • Log in to the Hugging Face Hub
  • Set environment variables
  • Install dependencies for llama.cpp
  • Download the Hugging Face model
  • Convert the model to fp16
  • Quantize to GGUF
  • Test the models
  • TODO: Add a model card
  • Upload to the Hugging Face Hub
  • TODO: Run benchmarks
  • Remove downloaded models and cached data
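The steps above can be sketched as a shell session. The model id, directory layout, and llama.cpp script names here are assumptions based on typical llama.cpp usage, not taken from this repository (older llama.cpp checkouts use `convert.py` and `./quantize` instead of the names shown):

```shell
#!/usr/bin/env sh
# Sketch of the GGUF workflow above. MODEL is a hypothetical example;
# substitute any Hugging Face model id.
MODEL=TinyLlama/TinyLlama-1.1B-Chat-v1.0
NAME=${MODEL##*/}                      # strip the org prefix -> file stem

# Clone llama.cpp and install its Python dependencies
git clone https://github.com/ggerganov/llama.cpp
pip install -r llama.cpp/requirements.txt

# Authenticate (needed for gated models) and download the checkpoint
huggingface-cli login
huggingface-cli download "$MODEL" --local-dir "$NAME"

# Convert the Hugging Face checkpoint to an fp16 GGUF file
python llama.cpp/convert_hf_to_gguf.py "$NAME" \
    --outtype f16 --outfile "$NAME.fp16.gguf"

# Quantize to one of the supported formats (here Q4_K_M)
llama.cpp/llama-quantize "$NAME.fp16.gguf" "$NAME.Q4_K_M.gguf" Q4_K_M

# Smoke-test the quantized model, then clean up
llama.cpp/llama-cli -m "$NAME.Q4_K_M.gguf" -p "Hello" -n 16
rm -rf "$NAME" "$NAME.fp16.gguf"
```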

Contributing

We welcome contributions! If you have ideas, please open an issue or a pull request.

License

This project is licensed under the AGPLv3 License - see the LICENSE file for details.

Contact

Join our Discord: Jan Discord

Acknowledgements

llama.cpp
