
ValueError: Op "complex_stft_0_promoted" (op_type: cast) Input x="complex_stft_0" expects tensor or scalar of dtype from type domain ['fp16', 'fp32', 'int32', 'bool'] but got tensor[1,1025,250,complex64] #2212

Open
miku1958 opened this issue May 3, 2024 · 1 comment
Labels
bug Unexpected behaviour that should be corrected (type) PyTorch (traced)

Comments


miku1958 commented May 3, 2024

🐞 Describing the bug


Stack Trace

    mlmodel = ct.convert(
  File "<python>/lib/python3.9/site-packages/coremltools/converters/_converters_entry.py", line 581, in convert
    mlmodel = mil_convert(
  File "<python>/lib/python3.9/site-packages/coremltools/converters/mil/converter.py", line 188, in mil_convert
    return _mil_convert(model, convert_from, convert_to, ConverterRegistry, MLModel, compute_units, **kwargs)
  File "<python>/lib/python3.9/site-packages/coremltools/converters/mil/converter.py", line 212, in _mil_convert
    proto, mil_program = mil_convert_to_proto(
  File "<python>/lib/python3.9/site-packages/coremltools/converters/mil/converter.py", line 288, in mil_convert_to_proto
    prog = frontend_converter(model, **kwargs)
  File "<python>/lib/python3.9/site-packages/coremltools/converters/mil/converter.py", line 108, in __call__
    return load(*args, **kwargs)
  File "<python>/lib/python3.9/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 82, in load
    return _perform_torch_convert(converter, debug)
  File "<python>/lib/python3.9/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 116, in _perform_torch_convert
    prog = converter.convert()
  File "<python>/lib/python3.9/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 581, in convert
    convert_nodes(self.context, self.graph)
  File "<python>/lib/python3.9/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 86, in convert_nodes
    raise e     # re-raise exception
  File "<python>/lib/python3.9/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 81, in convert_nodes
    convert_single_node(context, node)
  File "<python>/lib/python3.9/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 134, in convert_single_node
    add_op(context, node)
  File "<python>/lib/python3.9/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 1549, in pow
    x, y = promote_input_dtypes(inputs)
  File "<python>/lib/python3.9/site-packages/coremltools/converters/mil/mil/ops/defs/_utils.py", line 456, in promote_input_dtypes
    input_vars[i] = _promoted_var(var, promoted_dtype)
  File "<python>/lib/python3.9/site-packages/coremltools/converters/mil/mil/ops/defs/_utils.py", line 441, in _promoted_var
    x = mb.cast(
  File "<python>/lib/python3.9/site-packages/coremltools/converters/mil/mil/ops/registry.py", line 182, in add_op
    return cls._add_op(op_cls_to_add, **kwargs)
  File "<python>/lib/python3.9/site-packages/coremltools/converters/mil/mil/builder.py", line 182, in _add_op
    new_op = op_cls(**kwargs)
  File "<python>/lib/python3.9/site-packages/coremltools/converters/mil/mil/operation.py", line 191, in __init__
    self._validate_and_set_inputs(input_kv)
  File "<python>/lib/python3.9/site-packages/coremltools/converters/mil/mil/operation.py", line 504, in _validate_and_set_inputs
    self.input_spec.validate_inputs(self.name, self.op_type, input_kvs)
  File "<python>/lib/python3.9/site-packages/coremltools/converters/mil/mil/input_type.py", line 163, in validate_inputs
    raise ValueError(msg.format(name, var.name, input_type.type_str,
ValueError: Op "complex_stft_0_promoted" (op_type: cast) Input x="complex_stft_0" expects tensor or scalar of dtype from type domain ['fp16', 'fp32', 'int32', 'bool'] but got tensor[1,1025,250,complex64]

To Reproduce

I am modifying this script to convert the model to a Core ML model:
https://github.com/RVC-Boss/GPT-SoVITS/blob/main/GPT_SoVITS/onnx_export.py
I rewrote

        torch.onnx.export(
            self.vits,
            (text_seq, pred_semantic, ref_audio),
            f"onnx/{project_name}/{project_name}_vits.onnx",
            input_names=["text_seq", "pred_semantic", "ref_audio"],
            output_names=["audio"],
            dynamic_axes={
                "text_seq": {1 : "text_length"},
                "pred_semantic": {2 : "pred_length"},
                "ref_audio": {1 : "audio_length"},
            },
            opset_version=17,
            verbose=False
        )

to

        traced_model = torch.jit.trace(self.vits, (text_seq, pred_semantic, ref_audio))
        mlmodel = ct.convert(
            traced_model,
            inputs=[
                ct.TensorType(name="text_seq", shape=text_seq.shape),
                ct.TensorType(name="pred_semantic", shape=pred_semantic.shape),
                ct.TensorType(name="ref_audio", shape=ref_audio.shape),
            ],
        )
        mlmodel.save(f"onnx/{project_name}/{project_name}_vits.mlpackage")
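As a sanity check before tracing (not part of the original script; the tensors below are placeholders, since the real shapes and dtypes come from the GPT-SoVITS export code), one can print each example input's dtype, since Core ML only accepts fp16, fp32, int32, and bool tensors:

```python
import torch

# Hypothetical stand-ins for the real example inputs.
text_seq = torch.zeros(1, 10, dtype=torch.int64)
pred_semantic = torch.zeros(1, 1, 20, dtype=torch.int64)
ref_audio = torch.randn(1, 16000)

for name, t in [("text_seq", text_seq),
                ("pred_semantic", pred_semantic),
                ("ref_audio", ref_audio)]:
    # is_complex() flags the dtype family the Core ML cast op rejects.
    print(f"{name}: dtype={t.dtype}, is_complex={t.is_complex()}")
```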

System environment (please complete the following information):

  • coremltools version: 7.2
  • OS (e.g. MacOS version or Linux type): macOS 14.4
  • Any other relevant version information (e.g. PyTorch or TensorFlow version):

@miku1958 miku1958 added the bug Unexpected behaviour that should be corrected (type) label May 3, 2024
@TobyRoseman (Collaborator) commented
What are the dtypes of text_seq, pred_semantic, and ref_audio? It looks like at least one of them is complex64.

The Core ML Framework does not support models with complex inputs.
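A sketch of the likely root cause (an assumption based on the failing op being named "complex_stft_0", which suggests a torch.stft call inside the model rather than a complex model input): torch.stft returns a complex64 tensor, which the Core ML cast op rejects. Splitting it into real tensors with torch.view_as_real before any downstream op avoids the complex dtype entirely:

```python
import torch

x = torch.randn(1, 4000)
window = torch.hann_window(2048)

# n_fft=2048 yields 1025 frequency bins, matching the
# tensor[1,1025,250,complex64] shape in the error message.
spec = torch.stft(x, n_fft=2048, window=window, return_complex=True)
print(spec.dtype)  # torch.complex64 -- rejected by the Core ML cast op

# Reinterpret as a (..., 2) float tensor of real/imaginary parts.
real_imag = torch.view_as_real(spec)
print(real_imag.dtype)  # torch.float32 -- convertible
```

If only the magnitude spectrogram is needed downstream, `spec.abs()` likewise produces a plain float32 tensor.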
