tf.random.uniform is optimized out when it shouldn't be #2204

Open
drubinstein opened this issue Apr 24, 2024 · 2 comments
Labels
  • bug: Unexpected behaviour that should be corrected (type)
  • tf2.x / tf.keras: Issue could be related to tf2.x where coremltools isn't supported (component)
  • triaged: Reviewed and examined, release has been assigned if applicable (status)

Comments

@drubinstein commented Apr 24, 2024

🐞 Describing the bug

When converting a test model that wraps tf.random.uniform in a tf.function, the tf.random.uniform op appears to be optimized out of the converted network. It shouldn't be.

Stack Trace

This stack trace appears when exiting the program. It may be related.

Exception ignored in: <function AtomicFunction.__del__ at 0x15752b5b0>
Traceback (most recent call last):
  File "python3.10/site-packages/tensorflow/python/eager/polymorphic_function/atomic_function.py", line 291, in __del__
TypeError: 'NoneType' object is not subscriptable

To Reproduce

import coremltools as ct
import tensorflow as tf
import numpy as np

# Concrete function whose output is pure random noise; the input x is unused
# but pins down the input signature.
dummy = [
    tf.function(
        lambda x: tf.random.uniform((1, 80)),
        input_signature=[tf.TensorSpec(shape=[1, 2, 1290, 513], dtype=tf.float32)],
    ).get_concrete_function()
]
model = ct.convert(
    dummy,
    convert_to="neuralnetwork",
)
# These two calls should print different random tensors, but they print
# identical values.
print(model.predict({"x": np.ones((1, 2, 1290, 513), dtype=np.float32)}))
print(model.predict({"x": np.ones((1, 2, 1290, 513), dtype=np.float32)}))

The graphviz output from debug mode shows that the random op is not preserved, and stdout shows identical outputs for both predict calls.
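
As a sanity check (added for comparison; it reuses the dummy concrete function and imports from the repro above), the TF function itself returns fresh random values on every call, so the converted model should as well:

x = tf.ones((1, 2, 1290, 513), dtype=tf.float32)
tf_out1 = dummy[0](x).numpy()  # calling the concrete function directly
tf_out2 = dummy[0](x).numpy()
print(np.allclose(tf_out1, tf_out2))  # False: TF produces a new random tensor on each call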

System environment (please complete the following information):

  • coremltools version: Both 7.1 and 7.2
  • OS (e.g. MacOS version or Linux type): MacOS
  • Any other relevant version information (e.g. PyTorch or TensorFlow version): I tried this with TF 2.15.0 and 2.16.1
@drubinstein added the bug label on Apr 24, 2024
@drubinstein changed the title from "tf.random.uniform is optimized out" to "tf.random.uniform is optimized out when it shouldn't be" on Apr 24, 2024

@drubinstein (Author) commented:

Small update: if I remove delete_unnecessary_constant_nodes from tfssa_passes, I start seeing different outputs across successive predict calls again.
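
For anyone who wants to try the same experiment without editing the installed package, below is a rough monkey-patch sketch. The module paths and the timing of when the pass list is built are coremltools internals and are assumptions on my part, so the patch may or may not take effect; editing the source as described above is the surer route.

import importlib

def _noop_pass(tfssa):
    # Intentionally do nothing, so the RandomUniform op is not folded away.
    return tfssa

# Assumed internal module paths; they may differ between coremltools versions.
for mod_name in (
    "coremltools.converters.mil.frontend.tensorflow.load",
    "coremltools.converters.mil.frontend.tensorflow2.load",
):
    mod = importlib.import_module(mod_name)
    if hasattr(mod, "delete_unnecessary_constant_nodes"):
        mod.delete_unnecessary_constant_nodes = _noop_pass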

@TobyRoseman added the tf2.x / tf.keras and triaged labels on Apr 24, 2024
@TobyRoseman (Collaborator) commented:

This is indeed a bug. Looking at model.get_spec(), the return values are coming from an identity layer. This is also an issue for convert_to="mlprogram".
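
One quick way to see that (a minimal sketch using the spec protobuf; it assumes the neuralnetwork backend from the repro above):

spec = model.get_spec()
# List every layer in the converted network; with the repro above there is no
# random-uniform layer, just a constant/identity path feeding the output.
for layer in spec.neuralNetwork.layers:
    print(layer.name, layer.WhichOneof("layer"), list(layer.input), "->", list(layer.output))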
