# pre-trained-language-models

Here are 68 public repositories matching this topic...

Question and answer generation (QAG) is a natural language processing (NLP) task that generates a question and an answer at the same time from context information. The input context can be structured information in a database or raw text. The outputs of QAG systems can be directly applied to several NLP applications... (a minimal generation sketch follows this entry)

  • Updated Apr 18, 2024
  • Python
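
A QAG system of this kind is typically a seq2seq language model that reads the context and emits a question together with its answer. The sketch below shows one way such a model might be called through the Hugging Face Transformers API; the checkpoint name, prompt prefix, and output format are placeholder assumptions, not details taken from the repository above.

```python
# Minimal QAG sketch, assuming a seq2seq checkpoint fine-tuned for
# question-and-answer generation is available. The checkpoint name below is a
# placeholder, not a specific published model.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "your-org/your-qag-model"  # placeholder: any T5-style QAG checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

context = (
    "William Turner was an English painter who specialised in watercolour "
    "landscapes. He is often known as William Turner of Oxford."
)

# Many QAG models expect a simple task prefix in front of the raw-text context
# (an assumption here; the exact prompt depends on the checkpoint).
inputs = tokenizer("generate question and answer: " + context, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)

# Typical output is a "question <sep> answer" style string that the caller splits.
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same loop can be run over a database record rendered as text, since the model only sees a flat context string.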

Pre-training and fine-tuning transformer models using PyTorch and the Hugging Face Transformers library. Whether you're pre-training on custom datasets or fine-tuning for specific classification tasks, these notebooks provide explanations and implementation code (a minimal fine-tuning sketch follows this entry).

  • Updated Mar 13, 2024
  • Jupyter Notebook
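
For the fine-tuning side, the sketch below adapts a pre-trained model to a binary classification task with the Transformers Trainer API. The base checkpoint, dataset, and hyperparameters are illustrative choices, not values taken from these notebooks.

```python
# Minimal fine-tuning sketch: pre-trained transformer -> binary text classifier.
# Dataset and hyperparameters are illustrative only.
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    TrainingArguments,
    Trainer,
)

checkpoint = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

dataset = load_dataset("imdb")  # illustrative binary sentiment dataset

def tokenize(batch):
    # Truncate/pad reviews to a fixed length so batches have uniform shape.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="finetuned-classifier",
    per_device_train_batch_size=16,
    num_train_epochs=1,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    # Small subsets keep the sketch quick to run; use the full splits in practice.
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
    eval_dataset=tokenized["test"].shuffle(seed=42).select(range(500)),
)

trainer.train()
```

Pre-training with a custom dataset follows the same pattern but swaps the sequence-classification head for a language-modeling objective and trains from scratch or from an existing checkpoint.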
