
TypeError: 'list' object is not callable || Resume from checkpoint #30754

Closed · satpalsr opened this issue May 11, 2024 · 3 comments · Fixed by #30790
satpalsr commented May 11, 2024

System Info

Name: transformers
Version: 4.41.0.dev0

The error occurs when resuming training for a Mistral model with the latest transformers version. Previous versions raise other, different errors.

TypeError                                 Traceback (most recent call last)
Cell In[11], line 86
     75 trainer = CustomTrainer(
     76     model=model,
     77     args=training_args,
   (...)
     82     compute_metrics=compute_metrics
     83 )
     85 # Train the model
---> 86 trainer.train(resume_from_checkpoint='/workspace/regression_model_output/checkpoint-60')
     88 # Save model and tokenizer
     89 model_id = 'regression_mistral_model'

File /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:1857, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
   1855 if resume_from_checkpoint is not None:
   1856     if not is_sagemaker_mp_enabled() and not self.is_deepspeed_enabled and not self.is_fsdp_enabled:
-> 1857         self._load_from_checkpoint(resume_from_checkpoint)
   1858     # In case of repeating the find_executable_batch_size, set `self._train_batch_size` properly
   1859     state = TrainerState.load_from_json(os.path.join(resume_from_checkpoint, TRAINER_STATE_NAME))

File /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:2536, in Trainer._load_from_checkpoint(self, resume_from_checkpoint, model)
   2533 if os.path.exists(resume_from_checkpoint):
   2534     # For BC for older PEFT versions
   2535     if hasattr(model, "active_adapters"):
-> 2536         active_adapters = model.active_adapters()
   2537         if len(active_adapters) > 1:
   2538             logger.warning("Multiple active adapters detected will only consider the first adapter")

TypeError: 'list' object is not callable
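
For context, a minimal sketch of the failure mode (the class below is a hypothetical stand-in, not the real PEFT model): when `active_adapters` is a property, the attribute access already returns the list, so the trailing `()` in trainer.py ends up calling a list.

```python
class PeftLikeModel:
    """Stand-in for a PEFT model where active_adapters is a property."""
    @property
    def active_adapters(self):
        return ["default"]

model = PeftLikeModel()
print(model.active_adapters)    # OK: ['default'] (plain property access)
print(model.active_adapters())  # TypeError: 'list' object is not callable
```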

Who can help?

@Art

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction

\

Expected behavior

Training should resume from the checkpoint as usual.

@pashminacameron (Contributor)
@younesbelkada I am seeing a similar issue in _load_best_model since #30738. I have a proposed fix in this commit; please let me know if it looks good to you and I can open a PR. Thanks!

@younesbelkada (Contributor)
Hi @pashminacameron
Thanks for your message! Sorry for the confusion, that's on me: I overlooked the fact that active_adapters is a property in PEFT: https://github.com/huggingface/peft/blob/2558dd872dbacd29f6b60668bfee54b594bcbd7f/src/peft/tuners/tuners_utils.py#L171
Yes, please feel free to open a PR so that we can merge it ASAP and land the fix before the release.
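
A minimal sketch of the kind of guard that addresses this (the helper name is illustrative, and the actual change landed in #30790 may differ): use the value directly when `active_adapters` is a property, and call it only when it is still a method.

```python
def _get_active_adapters(model):
    # Hypothetical helper, for illustration only. In recent PEFT versions
    # `active_adapters` is a property that already returns a list; in older
    # versions it was a regular method. Call it only if it is callable.
    active_adapters = model.active_adapters
    if callable(active_adapters):
        active_adapters = active_adapters()  # older PEFT: method returning a list
    return active_adapters
```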

@pashminacameron (Contributor)
Thank you. It's up.
