OpenCompass Website · OpenCompass Toolkit
 


What is OpenCompass? OpenCompass is a platform focused on understanding AGI, covering both Large Language Models and Multi-modality Models.

We aim to:

  • develop high-quality libraries to reduce the difficulty of evaluation
  • provide convincing leaderboards that improve understanding of large models
  • create powerful toolchains targeting a variety of abilities and tasks
  • build solid benchmarks to support large model research
  • conduct research on large model inference (analysis, reasoning, prompt engineering, etc.)

Toolkit

  • OpenCompass
  • VLMEvalKit
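To give a sense of how the OpenCompass toolkit is driven in practice, below is a minimal sketch of an evaluation config in OpenCompass's Python (mmengine-style) config format. The specific dataset and model config paths (`ceval_gen`, `hf_internlm_7b`) are assumptions modeled on the style of the shipped configs, not a definitive recipe:

```python
# Minimal sketch of an OpenCompass evaluation config (mmengine-style).
# The dataset/model module paths below are assumptions; the configs/
# directory of the opencompass repo holds the ones actually shipped.
from mmengine.config import read_base

with read_base():
    # Pull in a ready-made dataset definition and a HuggingFace model definition.
    from .datasets.ceval.ceval_gen import ceval_datasets  # assumed path
    from .models.hf_internlm.hf_internlm_7b import models  # assumed path

# OpenCompass picks up the `datasets` and `models` lists from this config.
datasets = [*ceval_datasets]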

Benchmarks and Methods

| Project     | Topic                          | Paper                                                                                                          |
|-------------|--------------------------------|----------------------------------------------------------------------------------------------------------------|
| DevBench    | Automated Software Development | DevBench: Towards LLMs based Automated Software Development                                                     |
| CriticBench | Critic Reasoning               | CriticBench: Evaluating Large Language Models as Critic                                                         |
| MathBench   | Mathematical Reasoning         | MathBench: Evaluating the Theory and Application Proficiency of LLMs with a Hierarchical Mathematics Benchmark  |
| T-Eval      | Tool Utilization               | T-Eval: Evaluating the Tool Utilization Capability Step by Step                                                 |
| MMBench     | Multi-Modality                 | MMBench: Is Your Multi-modal Model an All-around Player?                                                        |
| BotChat     | Subjective Evaluation          | BotChat: Evaluating LLMs’ Capabilities of Having Multi-Turn Dialogues                                           |
| LawBench    | Domain Evaluation              | LawBench: Benchmarking Legal Knowledge of Large Language Models                                                 |

Pinned

  1. opencompass

    OpenCompass is an LLM evaluation platform supporting a wide range of models (Llama3, Mistral, InternLM2, GPT-4, LLaMA2, Qwen, GLM, Claude, etc.) over 100+ datasets.

    Python · 2.7k stars · 278 forks

  2. MixtralKit

    A toolkit for inference and evaluation of the 'mixtral-8x7b-32kseqlen' model from Mistral AI

    Python · 758 stars · 81 forks

  3. VLMEvalKit

    An open-source evaluation toolkit for large vision-language models (LVLMs), supporting GPT-4V, Gemini, QwenVLPlus, 40+ HF models, and 20+ benchmarks (a minimal inference sketch follows this list).

    Python · 424 stars · 46 forks

  4. LawBench

    Benchmarking Legal Knowledge of Large Language Models

    Python · 179 stars · 23 forks

  5. T-Eval

    T-Eval: Evaluating Tool Utilization Capability of Large Language Models Step by Step

    Python · 160 stars · 8 forks

  6. Ada-LEval

    The official implementation of "Ada-LEval: Evaluating long-context LLMs with length-adaptable benchmarks"

    Python · 38 stars · 2 forks
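To make the VLMEvalKit entry above concrete, here is a minimal single-image inference sketch using its Python API. It assumes the `supported_VLM` registry and the `generate([image, prompt])` interface from the toolkit's quick start; the model key and image path are placeholders:

```python
# Minimal sketch of single-image inference with VLMEvalKit.
# The model key and image path are assumptions; see the VLMEvalKit README
# for the list of supported model identifiers.
from vlmeval.config import supported_VLM

model = supported_VLM['idefics_9b_instruct']()  # instantiate a supported VLM by key
# `generate` takes [image_path, prompt] for a single-image query.
response = model.generate(['assets/apple.jpg', 'What is in this image?'])
print(response)
```

For batch benchmarking, the toolkit also ships a `run.py` driver (e.g. `python run.py --data MMBench_DEV_EN --model qwen_chat`); the exact flags depend on the installed version.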
