BFloat16 is not supported on MPS #85
I solved this problem by modifying the accelerate package following this link: https://github.com/huggingface/accelerate/pull/2227/files. Hope it helps.
Error messages from my code:
File /opt/homebrew/lib/python3.11/site-packages/lida/components/manager.py:202, in Manager.visualize(self, summary, goal, textgen_config, library, return_error)
File /opt/homebrew/lib/python3.11/site-packages/lida/components/viz/vizgenerator.py:27, in VizGenerator.generate(self, summary, goal, textgen_config, text_gen, library)
File /opt/homebrew/lib/python3.11/site-packages/lida/components/scaffold.py:21, in ChartScaffold.get_template(self, goal, library)
AttributeError: 'list' object has no attribute 'visualization'
Still solving.
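The linked accelerate PR avoids materializing bfloat16 tensors on the MPS backend. As a rough sketch of the same idea (the helper name and the set of unsupported dtypes here are illustrative assumptions, not accelerate's actual API):

```python
# Illustrative sketch only: pick a dtype name the target backend can handle.
# The PyTorch build in the traceback below cannot create bfloat16 tensors on
# MPS (Apple Silicon), so we fall back to float32 there, mirroring the spirit
# of the accelerate fix.
MPS_UNSUPPORTED_DTYPES = {"bfloat16"}

def pick_offload_dtype(dtype_name: str, device: str) -> str:
    """Return a dtype name safe for `device`.

    Assumes dtype names match torch attribute names (e.g. "bfloat16"
    for torch.bfloat16), as in accelerate's offload metadata.
    """
    if device == "mps" and dtype_name in MPS_UNSUPPORTED_DTYPES:
        return "float32"
    return dtype_name
```

With this kind of remapping, the offloaded weight would be restored as `tensor.to(torch.float32)` on MPS instead of failing on `torch.bfloat16`.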
I got this at the step:
goals = lida.goals(summary, n=5, persona="sales")
full error message:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[43], line 2
1 # goals = lida.goals(summary, n=2, textgen_config=textgen_config)
----> 2 goals = lida.goals(summary, n=5, persona="sales")
File /opt/homebrew/lib/python3.11/site-packages/lida/components/manager.py:177, in Manager.goals(self, summary, textgen_config, n, persona)
174 if isinstance(persona, str):
175 persona = Persona(persona=persona, rationale="")
--> 177 return self.goal.generate(summary=summary, text_gen=self.text_gen,
178 textgen_config=textgen_config, n=n, persona=persona)
File /opt/homebrew/lib/python3.11/site-packages/lida/components/goal.py:51, in GoalExplorer.generate(self, summary, textgen_config, text_gen, n, persona)
43 user_prompt += f"""\n The generated goals SHOULD BE FOCUSED ON THE INTERESTS AND PERSPECTIVE of a '{persona.persona} persona, who is insterested in complex, insightful goals about the data. \n"""
45 messages = [
46 {"role": "system", "content": SYSTEM_INSTRUCTIONS},
47 {"role": "assistant",
48 "content":
49 f"{user_prompt}\n\n {FORMAT_INSTRUCTIONS} \n\n. The generated {n} goals are: \n "}]
---> 51 result: list[Goal] = text_gen.generate(messages=messages, config=textgen_config)
53 try:
54 json_string = clean_code_snippet(result.text[0]["content"])
File /opt/homebrew/lib/python3.11/site-packages/llmx/generators/text/hf_textgen.py:213, in HFTextGenerator.generate(self, messages, config, **kwargs)
200 gen_config = GenerationConfig(
201 max_new_tokens=max_new_tokens,
202 temperature=max(config.temperature, 0.01),
(...)
210 repetition_penalty=repetition_penalty,
211 )
212 with torch.no_grad():
--> 213 generated_ids = self.model.generate(**batch, generation_config=gen_config)
215 text_response = self.tokenizer.batch_decode(
216 generated_ids, skip_special_tokens=False
217 )
219 # print(text_response, "*************")
File /opt/homebrew/lib/python3.11/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)
112 @functools.wraps(func)
113 def decorate_context(*args, **kwargs):
114 with ctx_factory():
--> 115 return func(*args, **kwargs)
File /opt/homebrew/lib/python3.11/site-packages/transformers/generation/utils.py:1764, in GenerationMixin.generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, assistant_model, streamer, negative_prompt_ids, negative_prompt_attention_mask, **kwargs)
1756 input_ids, model_kwargs = self._expand_inputs_for_generation(
1757 input_ids=input_ids,
1758 expand_size=generation_config.num_return_sequences,
1759 is_encoder_decoder=self.config.is_encoder_decoder,
1760 **model_kwargs,
1761 )
1763 # 13. run sample
-> 1764 return self.sample(
1765 input_ids,
1766 logits_processor=logits_processor,
1767 logits_warper=logits_warper,
1768 stopping_criteria=stopping_criteria,
1769 pad_token_id=generation_config.pad_token_id,
1770 eos_token_id=generation_config.eos_token_id,
1771 output_scores=generation_config.output_scores,
1772 return_dict_in_generate=generation_config.return_dict_in_generate,
1773 synced_gpus=synced_gpus,
1774 streamer=streamer,
1775 **model_kwargs,
1776 )
1778 elif generation_mode == GenerationMode.BEAM_SEARCH:
1779 # 11. prepare beam search scorer
1780 beam_scorer = BeamSearchScorer(
1781 batch_size=batch_size,
1782 num_beams=generation_config.num_beams,
(...)
1787 max_length=generation_config.max_length,
1788 )
File /opt/homebrew/lib/python3.11/site-packages/transformers/generation/utils.py:2861, in GenerationMixin.sample(self, input_ids, logits_processor, stopping_criteria, logits_warper, max_length, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, synced_gpus, streamer, **model_kwargs)
2858 model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
2860 # forward pass to get next token
-> 2861 outputs = self(
2862 **model_inputs,
2863 return_dict=True,
2864 output_attentions=output_attentions,
2865 output_hidden_states=output_hidden_states,
2866 )
2868 if synced_gpus and this_peer_finished:
2869 continue # don't waste resources running the code we don't need
File /opt/homebrew/lib/python3.11/site-packages/torch/nn/modules/module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs)
1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1517 else:
-> 1518 return self._call_impl(*args, **kwargs)
File /opt/homebrew/lib/python3.11/site-packages/torch/nn/modules/module.py:1527, in Module._call_impl(self, *args, **kwargs)
1522 # If we don't have any hooks, we want to skip the rest of the logic in
1523 # this function, and just call forward.
1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1525 or _global_backward_pre_hooks or _global_backward_hooks
1526 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1527 return forward_call(*args, **kwargs)
1529 try:
1530 result = None
File /opt/homebrew/lib/python3.11/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(module, *args, **kwargs)
163 output = module._old_forward(*args, **kwargs)
164 else:
--> 165 output = module._old_forward(*args, **kwargs)
166 return module._hf_hook.post_forward(module, output)
File /opt/homebrew/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py:1181, in LlamaForCausalLM.forward(self, input_ids, attention_mask, position_ids, past_key_values, inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict)
1178 return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1180 # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
-> 1181 outputs = self.model(
1182 input_ids=input_ids,
1183 attention_mask=attention_mask,
1184 position_ids=position_ids,
1185 past_key_values=past_key_values,
1186 inputs_embeds=inputs_embeds,
1187 use_cache=use_cache,
1188 output_attentions=output_attentions,
1189 output_hidden_states=output_hidden_states,
1190 return_dict=return_dict,
1191 )
1193 hidden_states = outputs[0]
1194 if self.config.pretraining_tp > 1:
File /opt/homebrew/lib/python3.11/site-packages/torch/nn/modules/module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs)
1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1517 else:
-> 1518 return self._call_impl(*args, **kwargs)
File /opt/homebrew/lib/python3.11/site-packages/torch/nn/modules/module.py:1527, in Module._call_impl(self, *args, **kwargs)
1522 # If we don't have any hooks, we want to skip the rest of the logic in
1523 # this function, and just call forward.
1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1525 or _global_backward_pre_hooks or _global_backward_hooks
1526 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1527 return forward_call(*args, **kwargs)
1529 try:
1530 result = None
File /opt/homebrew/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py:1068, in LlamaModel.forward(self, input_ids, attention_mask, position_ids, past_key_values, inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict)
1058 layer_outputs = self._gradient_checkpointing_func(
1059 decoder_layer.__call__,
1060 hidden_states,
(...)
1065 use_cache,
1066 )
1067 else:
-> 1068 layer_outputs = decoder_layer(
1069 hidden_states,
1070 attention_mask=attention_mask,
1071 position_ids=position_ids,
1072 past_key_value=past_key_values,
1073 output_attentions=output_attentions,
1074 use_cache=use_cache,
1075 )
1077 hidden_states = layer_outputs[0]
1079 if use_cache:
File /opt/homebrew/lib/python3.11/site-packages/torch/nn/modules/module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs)
1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1517 else:
-> 1518 return self._call_impl(*args, **kwargs)
File /opt/homebrew/lib/python3.11/site-packages/torch/nn/modules/module.py:1527, in Module._call_impl(self, *args, **kwargs)
1522 # If we don't have any hooks, we want to skip the rest of the logic in
1523 # this function, and just call forward.
1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1525 or _global_backward_pre_hooks or _global_backward_hooks
1526 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1527 return forward_call(*args, **kwargs)
1529 try:
1530 result = None
File /opt/homebrew/lib/python3.11/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(module, *args, **kwargs)
163 output = module._old_forward(*args, **kwargs)
164 else:
--> 165 output = module._old_forward(*args, **kwargs)
166 return module._hf_hook.post_forward(module, output)
File /opt/homebrew/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py:793, in LlamaDecoderLayer.forward(self, hidden_states, attention_mask, position_ids, past_key_value, output_attentions, use_cache, **kwargs)
787 warnings.warn(
788     "Passing `padding_mask` is deprecated and will be removed in v4.37. Please make sure use `attention_mask` instead.`"
789 )
791 residual = hidden_states
--> 793 hidden_states = self.input_layernorm(hidden_states)
795 # Self Attention
796 hidden_states, self_attn_weights, present_key_value = self.self_attn(
797 hidden_states=hidden_states,
798 attention_mask=attention_mask,
(...)
803 **kwargs,
804 )
File /opt/homebrew/lib/python3.11/site-packages/torch/nn/modules/module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs)
1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1517 else:
-> 1518 return self._call_impl(*args, **kwargs)
File /opt/homebrew/lib/python3.11/site-packages/torch/nn/modules/module.py:1527, in Module._call_impl(self, *args, **kwargs)
1522 # If we don't have any hooks, we want to skip the rest of the logic in
1523 # this function, and just call forward.
1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1525 or _global_backward_pre_hooks or _global_backward_hooks
1526 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1527 return forward_call(*args, **kwargs)
1529 try:
1530 result = None
File /opt/homebrew/lib/python3.11/site-packages/accelerate/hooks.py:160, in add_hook_to_module.<locals>.new_forward(module, *args, **kwargs)
159 def new_forward(module, *args, **kwargs):
--> 160 args, kwargs = module._hf_hook.pre_forward(module, *args, **kwargs)
161 if module._hf_hook.no_grad:
162 with torch.no_grad():
File /opt/homebrew/lib/python3.11/site-packages/accelerate/hooks.py:294, in AlignDevicesHook.pre_forward(self, module, *args, **kwargs)
291 if self.weights_map[name].dtype == torch.int8:
292 fp16_statistics = self.weights_map[name.replace("weight", "SCB")]
293 set_module_tensor_to_device(
--> 294 module, name, self.execution_device, value=self.weights_map[name], fp16_statistics=fp16_statistics
295 )
297 return send_to_device(args, self.execution_device), send_to_device(
298 kwargs, self.execution_device, skip_keys=self.skip_keys
299 )
File /opt/homebrew/lib/python3.11/site-packages/accelerate/utils/offload.py:118, in PrefixedDataset.__getitem__(self, key)
117 def __getitem__(self, key):
--> 118 return self.dataset[f"{self.prefix}{key}"]
File /opt/homebrew/lib/python3.11/site-packages/accelerate/utils/offload.py:169, in OffloadedWeightsLoader.__getitem__(self, key)
167 device = "cpu" if self.device is None else self.device
168 with safe_open(weight_info["safetensors_file"], framework="pt", device=device) as f:
--> 169 tensor = f.get_tensor(weight_info.get("weight_name", key))
171 if "dtype" in weight_info:
172 return tensor.to(getattr(torch, weight_info["dtype"]))
TypeError: BFloat16 is not supported on MPS
When I use:
goals = lida.goals(summary, n=2, textgen_config=textgen_config)
I also get this error.
Many thanks!