
Fine-tuning the BERT model #24

Answered by R1j1t
naturecreator asked this question in Q&A
ContextualSpellCheck relies on the 🤗 Transformers library to provide the model. You can therefore follow the Transformers fine-tuning pipeline and then load the updated model by passing its local path (or a hub model name) to contextualSpellCheck, as below:

checker = ContextualSpellCheck(
model_name="cl-tohoku/bert-base-japanese-whole-word-masking",
max_edit_dist=2,
)
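The fine-tuning step mentioned above can be sketched as follows. This is a minimal sketch of the usual 🤗 Transformers masked-LM training loop, not code from contextualSpellCheck itself; the `train_dataset` argument is a placeholder for a tokenized dataset built from your own corpus, and `fine_tuned_bert` is an arbitrary output directory name.

```python
# Minimal sketch: fine-tune a BERT masked-LM with 🤗 Transformers,
# then save it so the local directory can be passed as model_name.

output_dir = "fine_tuned_bert"  # local path to hand to contextualSpellCheck afterwards

def fine_tune(train_dataset, base_model="cl-tohoku/bert-base-japanese-whole-word-masking"):
    # Imports are kept inside the function so the sketch itself can be
    # read and imported without transformers installed.
    from transformers import (
        AutoModelForMaskedLM,
        AutoTokenizer,
        DataCollatorForLanguageModeling,
        Trainer,
        TrainingArguments,
    )

    tokenizer = AutoTokenizer.from_pretrained(base_model)
    model = AutoModelForMaskedLM.from_pretrained(base_model)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir=output_dir, num_train_epochs=1),
        train_dataset=train_dataset,  # a tokenized datasets.Dataset of your text
        data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer),
    )
    trainer.train()

    # Save weights and tokenizer together so the directory is loadable by path.
    trainer.save_model(output_dir)
    tokenizer.save_pretrained(output_dir)
    return output_dir
```

After training, `model_name="fine_tuned_bert"` in the `ContextualSpellCheck(...)` call above would pick up the fine-tuned weights instead of the hub checkpoint.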

For reference, the model-loading snippet inside contextualSpellCheck is:

self.model_name = model_name
self.BertTokenizer = AutoTokenizer.from_pretrained(self.model_name)

Category: Q&A · 2 participants
This discussion was converted from issue #24 on December 21, 2020 17:40.