
import tensorflow.compat.v2 could not be resolved #67331

Open
Yummyto opened this issue May 10, 2024 · 8 comments
Labels
comp:model (Model related issues), stat:awaiting response (Status - Awaiting response from author), TF 2.10, type:support (Support issues)

Comments

@Yummyto

Yummyto commented May 10, 2024

Issue type

Build/Install

Have you reproduced the bug with TensorFlow Nightly?

Yes

Source

source

TensorFlow version

2.10.1

Custom code

Yes

OS platform and distribution

Windows

Mobile device

No response

Python version

3.7.3

Bazel version

No response

GCC/compiler version

No response

CUDA/cuDNN version

No response

GPU model and memory

No response

Current behavior?

Traceback (most recent call last):
  File "Tensorflow/models/research/object_detection/model_main_tf2.py", line 31, in <module>
    from object_detection import model_lib_v2
  File "C:\Users\Liam\Anaconda3\lib\site-packages\object_detection\model_lib_v2.py", line 29, in <module>
    from object_detection import eval_util
  File "C:\Users\Liam\Anaconda3\lib\site-packages\object_detection\eval_util.py", line 35, in <module>
    from object_detection.metrics import coco_evaluation
  File "C:\Users\Liam\Anaconda3\lib\site-packages\object_detection\metrics\coco_evaluation.py", line 28, in <module>
    from object_detection.utils import object_detection_evaluation
  File "C:\Users\Liam\Anaconda3\lib\site-packages\object_detection\utils\object_detection_evaluation.py", line 46, in <module>
    from object_detection.utils import label_map_util
  File "C:\Users\Liam\Anaconda3\lib\site-packages\object_detection\utils\label_map_util.py", line 29, in <module>
    from object_detection.protos import string_int_label_map_pb2
  File "C:\Users\Liam\Anaconda3\lib\site-packages\object_detection\protos\string_int_label_map_pb2.py", line 9, in <module>
    from google.protobuf.internal import builder as _builder
ImportError: cannot import name 'builder' from 'google.protobuf.internal' (C:\Users\Liam\Anaconda3\lib\site-packages\google\protobuf\internal\__init__.py)

Standalone code to reproduce the issue

"""Creates and runs TF2 object detection models.

For local training/evaluation run:
PIPELINE_CONFIG_PATH=path/to/pipeline.config
MODEL_DIR=/tmp/model_outputs
NUM_TRAIN_STEPS=10000
SAMPLE_1_OF_N_EVAL_EXAMPLES=1
python model_main_tf2.py -- \
  --model_dir=$MODEL_DIR --num_train_steps=$NUM_TRAIN_STEPS \
  --sample_1_of_n_eval_examples=$SAMPLE_1_OF_N_EVAL_EXAMPLES \
  --pipeline_config_path=$PIPELINE_CONFIG_PATH \
  --alsologtostderr
"""
from absl import flags
import tensorflow.compat.v2 as tf
from object_detection import model_lib_v2

flags.DEFINE_string('pipeline_config_path', None, 'Path to pipeline config '
                    'file.')
flags.DEFINE_integer('num_train_steps', None, 'Number of train steps.')
flags.DEFINE_bool('eval_on_train_data', False, 'Enable evaluating on train '
                  'data (only supported in distributed training).')
flags.DEFINE_integer('sample_1_of_n_eval_examples', None, 'Will sample one of '
                     'every n eval input examples, where n is provided.')
flags.DEFINE_integer('sample_1_of_n_eval_on_train_examples', 5, 'Will sample '
                     'one of every n train input examples for evaluation, '
                     'where n is provided. This is only used if '
                     '`eval_training_data` is True.')
flags.DEFINE_string(
    'model_dir', None, 'Path to output model directory '
                       'where event and checkpoint files will be written.')
flags.DEFINE_string(
    'checkpoint_dir', None, 'Path to directory holding a checkpoint.  If '
    '`checkpoint_dir` is provided, this binary operates in eval-only mode, '
    'writing resulting metrics to `model_dir`.')

flags.DEFINE_integer('eval_timeout', 3600, 'Number of seconds to wait for an '
                     'evaluation checkpoint before exiting.')

flags.DEFINE_bool('use_tpu', False, 'Whether the job is executing on a TPU.')
flags.DEFINE_string(
    'tpu_name',
    default=None,
    help='Name of the Cloud TPU for Cluster Resolvers.')
flags.DEFINE_integer(
    'num_workers', 1, 'When num_workers > 1, training uses '
    'MultiWorkerMirroredStrategy. When num_workers = 1 it uses '
    'MirroredStrategy.')
flags.DEFINE_integer(
    'checkpoint_every_n', 1000, 'Integer defining how often we checkpoint.')
flags.DEFINE_boolean('record_summaries', True,
                     ('Whether or not to record summaries defined by the model'
                      ' or the training pipeline. This does not impact the'
                      ' summaries of the loss values which are always'
                      ' recorded.'))

FLAGS = flags.FLAGS


def main(unused_argv):
  flags.mark_flag_as_required('model_dir')
  flags.mark_flag_as_required('pipeline_config_path')
  tf.config.set_soft_device_placement(True)

  if FLAGS.checkpoint_dir:
    model_lib_v2.eval_continuously(
        pipeline_config_path=FLAGS.pipeline_config_path,
        model_dir=FLAGS.model_dir,
        train_steps=FLAGS.num_train_steps,
        sample_1_of_n_eval_examples=FLAGS.sample_1_of_n_eval_examples,
        sample_1_of_n_eval_on_train_examples=(
            FLAGS.sample_1_of_n_eval_on_train_examples),
        checkpoint_dir=FLAGS.checkpoint_dir,
        wait_interval=300, timeout=FLAGS.eval_timeout)
  else:
    if FLAGS.use_tpu:
      # TPU is automatically inferred if tpu_name is None and
      # we are running under cloud ai-platform.
      resolver = tf.distribute.cluster_resolver.TPUClusterResolver(
          FLAGS.tpu_name)
      tf.config.experimental_connect_to_cluster(resolver)
      tf.tpu.experimental.initialize_tpu_system(resolver)
      strategy = tf.distribute.experimental.TPUStrategy(resolver)
    elif FLAGS.num_workers > 1:
      strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
    else:
      strategy = tf.compat.v2.distribute.MirroredStrategy()

    with strategy.scope():
      model_lib_v2.train_loop(
          pipeline_config_path=FLAGS.pipeline_config_path,
          model_dir=FLAGS.model_dir,
          train_steps=FLAGS.num_train_steps,
          use_tpu=FLAGS.use_tpu,
          checkpoint_every_n=FLAGS.checkpoint_every_n,
          record_summaries=FLAGS.record_summaries)

if __name__ == '__main__':
  tf.compat.v1.app.run()

Relevant log output

No response

@google-ml-butler google-ml-butler bot added the type:build/install Build and install issues label May 10, 2024
@tilakrayal
Contributor

@Yummyto,
Could you please try the steps below to resolve the issue? First, try upgrading the protobuf package:

pip install --upgrade protobuf

- Use protobuf version 3.19.4 when using object detection
- Download builder.py from the [protobuf github repo](https://github.com/protocolbuffers/protobuf/blob/main/python/google/protobuf/internal/builder.py)
- Place the downloaded builder.py inside your protobuf installation, under site-packages\google\protobuf\internal

https://stackoverflow.com/questions/71759248/importerror-cannot-import-name-builder-from-google-protobuf-internal
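
As a quick sanity check after those steps (a minimal sketch, assuming the Anaconda environment shown in your traceback), the failing import should now succeed:

```python
# Sanity check: the ImportError came from this exact import, so it
# should succeed once builder.py is in google/protobuf/internal.
import google.protobuf
from google.protobuf.internal import builder

print(google.protobuf.__version__)  # expect 3.19.4 after the downgrade
print(builder.__file__)             # should point at the copied builder.py
```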

Thank you!

@tilakrayal tilakrayal added the stat:awaiting response Status - Awaiting response from author label May 11, 2024
@Yummyto
Author

Yummyto commented May 11, 2024

I got this error when I tried to install protobuf 3.19.4

ERROR: Exception:
Traceback (most recent call last):
File "C:\Users\Liam\Anaconda3\lib\site-packages\pip_vendor\urllib3\response.py", line 437, in _error_catcher
yield
File "C:\Users\Liam\Anaconda3\lib\site-packages\pip_vendor\urllib3\response.py", line 560, in read
data = self._fp_read(amt) if not fp_closed else b""
File "C:\Users\Liam\Anaconda3\lib\site-packages\pip_vendor\urllib3\response.py", line 526, in _fp_read
return self._fp.read(amt) if amt is not None else self._fp.read()
File "C:\Users\Liam\Anaconda3\lib\site-packages\pip_vendor\cachecontrol\filewrapper.py", line 90, in read
data = self.__fp.read(amt)
File "C:\Users\Liam\Anaconda3\lib\http\client.py", line 447, in read
n = self.readinto(b)
File "C:\Users\Liam\Anaconda3\lib\http\client.py", line 491, in readinto
n = self.fp.readinto(b)
File "C:\Users\Liam\Anaconda3\lib\socket.py", line 589, in readinto
return self._sock.recv_into(b)
File "C:\Users\Liam\Anaconda3\lib\ssl.py", line 1052, in recv_into
return self.read(nbytes, buffer)
File "C:\Users\Liam\Anaconda3\lib\ssl.py", line 911, in read
return self._sslobj.read(len, buffer)
socket.timeout: The read operation timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\Liam\Anaconda3\lib\site-packages\pip_internal\cli\base_command.py", line 160, in exc_logging_wrapper
status = run_func(*args)
File "C:\Users\Liam\Anaconda3\lib\site-packages\pip_internal\cli\req_command.py", line 247, in wrapper
return func(self, options, args)
File "C:\Users\Liam\Anaconda3\lib\site-packages\pip_internal\commands\install.py", line 401, in run
reqs, check_supported_wheels=not options.target_dir
File "C:\Users\Liam\Anaconda3\lib\site-packages\pip_internal\resolution\resolvelib\resolver.py", line 93, in resolve
collected.requirements, max_rounds=try_to_avoid_resolution_too_deep
File "C:\Users\Liam\Anaconda3\lib\site-packages\pip_vendor\resolvelib\resolvers.py", line 481, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
File "C:\Users\Liam\Anaconda3\lib\site-packages\pip_vendor\resolvelib\resolvers.py", line 348, in resolve
self._add_to_criteria(self.state.criteria, r, parent=None)
File "C:\Users\Liam\Anaconda3\lib\site-packages\pip_vendor\resolvelib\resolvers.py", line 172, in _add_to_criteria
if not criterion.candidates:
File "C:\Users\Liam\Anaconda3\lib\site-packages\pip_vendor\resolvelib\structs.py", line 151, in bool
return bool(self._sequence)
File "C:\Users\Liam\Anaconda3\lib\site-packages\pip_internal\resolution\resolvelib\found_candidates.py", line 155, in bool
return any(self)
File "C:\Users\Liam\Anaconda3\lib\site-packages\pip_internal\resolution\resolvelib\found_candidates.py", line 143, in
return (c for c in iterator if id(c) not in self._incompatible_ids)
File "C:\Users\Liam\Anaconda3\lib\site-packages\pip_internal\resolution\resolvelib\found_candidates.py", line 47, in _iter_built
candidate = func()
File "C:\Users\Liam\Anaconda3\lib\site-packages\pip_internal\resolution\resolvelib\factory.py", line 211, in _make_candidate_from_link
version=version,
File "C:\Users\Liam\Anaconda3\lib\site-packages\pip_internal\resolution\resolvelib\candidates.py", line 303, in init
version=version,
File "C:\Users\Liam\Anaconda3\lib\site-packages\pip_internal\resolution\resolvelib\candidates.py", line 162, in init
self.dist = self._prepare()
File "C:\Users\Liam\Anaconda3\lib\site-packages\pip_internal\resolution\resolvelib\candidates.py", line 231, in _prepare
dist = self._prepare_distribution()
File "C:\Users\Liam\Anaconda3\lib\site-packages\pip_internal\resolution\resolvelib\candidates.py", line 308, in _prepare_distribution
return preparer.prepare_linked_requirement(self._ireq, parallel_builds=True)
File "C:\Users\Liam\Anaconda3\lib\site-packages\pip_internal\operations\prepare.py", line 491, in prepare_linked_requirement
return self._prepare_linked_requirement(req, parallel_builds)
File "C:\Users\Liam\Anaconda3\lib\site-packages\pip_internal\operations\prepare.py", line 542, in _prepare_linked_requirement
hashes,
File "C:\Users\Liam\Anaconda3\lib\site-packages\pip_internal\operations\prepare.py", line 170, in unpack_url
hashes=hashes,
File "C:\Users\Liam\Anaconda3\lib\site-packages\pip_internal\operations\prepare.py", line 107, in get_http_url
from_path, content_type = download(link, temp_dir.path)
File "C:\Users\Liam\Anaconda3\lib\site-packages\pip_internal\network\download.py", line 147, in call
for chunk in chunks:
File "C:\Users\Liam\Anaconda3\lib\site-packages\pip_internal\cli\progress_bars.py", line 53, in _rich_progress_bar
for chunk in iterable:
File "C:\Users\Liam\Anaconda3\lib\site-packages\pip_internal\network\utils.py", line 87, in response_chunks
decode_content=False,
File "C:\Users\Liam\Anaconda3\lib\site-packages\pip_vendor\urllib3\response.py", line 621, in stream
data = self.read(amt=amt, decode_content=decode_content)
File "C:\Users\Liam\Anaconda3\lib\site-packages\pip_vendor\urllib3\response.py", line 586, in read
raise IncompleteRead(self._fp_bytes_read, self.length_remaining)
File "C:\Users\Liam\Anaconda3\lib\contextlib.py", line 130, in exit
self.gen.throw(type, value, traceback)
File "C:\Users\Liam\Anaconda3\lib\site-packages\pip_vendor\urllib3\response.py", line 442, in _error_catcher
raise ReadTimeoutError(self._pool, None, "Read timed out.")
pip._vendor.urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out.

Thank you!

@google-ml-butler google-ml-butler bot removed the stat:awaiting response Status - Awaiting response from author label May 11, 2024
@Yummyto
Author

Yummyto commented May 13, 2024

I have now installed protobuf 3.19.4 and added builder.py to this path (C:\Users\Liam\Anaconda3\lib\site-packages\google\protobuf\internal), but I still get the same error: ImportError: cannot import name 'builder' from 'google.protobuf.internal' (C:\Users\Liam\Anaconda3\lib\site-packages\google\protobuf\internal\__init__.py). Should I copy the code of builder.py and paste it into __init__.py instead?

@Yummyto
Author

Yummyto commented May 13, 2024

I got it now and resolved that problem. However, I now get a new error: AttributeError: module 'tensorflow.compat.v2' has no attribute '__internal__'. What should I do?

Thank you!

@Yummyto
Author

Yummyto commented May 13, 2024

This is the error that I got

AttributeError Traceback (most recent call last)
in <module>
      2 from object_detection.utils import label_map_util
      3 from object_detection.utils import visualization_utils as viz_utils
----> 4 from object_detection.builders import model_builder

~\Anaconda3\lib\site-packages\object_detection\builders\model_builder.py in <module>
     35 from object_detection.meta_architectures import center_net_meta_arch
     36 from object_detection.meta_architectures import context_rcnn_meta_arch
---> 37 from object_detection.meta_architectures import deepmac_meta_arch
     38 from object_detection.meta_architectures import faster_rcnn_meta_arch
     39 from object_detection.meta_architectures import rfcn_meta_arch

~\Anaconda3\lib\site-packages\object_detection\meta_architectures\deepmac_meta_arch.py in <module>
     18 from object_detection.meta_architectures import center_net_meta_arch
     19 from object_detection.models.keras_models import hourglass_network
---> 20 from object_detection.models.keras_models import resnet_v1
     21 from object_detection.protos import center_net_pb2
     22 from object_detection.protos import losses_pb2

~\Anaconda3\lib\site-packages\object_detection\models\keras_models\resnet_v1.py in <module>
     26
     27 try:
---> 28   from keras.applications import resnet  # pylint: disable=g-import-not-at-top
     29 except ImportError:
     30   from tf_keras.applications import resnet  # pylint: disable=g-import-not-at-top

~\AppData\Roaming\Python\Python37\site-packages\keras\__init__.py in <module>
     18 keras.io.
     19 """
---> 20 from keras import distribute
     21 from keras import models
     22 from keras.engine.input_layer import Input

~\AppData\Roaming\Python\Python37\site-packages\keras\distribute\__init__.py in <module>
     16
     17
---> 18 from keras.distribute import sidecar_evaluator

~\AppData\Roaming\Python\Python37\site-packages\keras\distribute\sidecar_evaluator.py in <module>
     20 from tensorflow.python.platform import tf_logging as logging
     21 from tensorflow.python.util import deprecation
---> 22 from keras.optimizers.optimizer_experimental import (
     23     optimizer as optimizer_experimental,
     24 )

~\AppData\Roaming\Python\Python37\site-packages\keras\optimizers\__init__.py in <module>
     23
     24 # Imports needed for deserialization.
---> 25 from keras import backend
     26 from keras.optimizers.legacy import adadelta as adadelta_legacy
     27 from keras.optimizers.legacy import adagrad as adagrad_legacy

~\AppData\Roaming\Python\Python37\site-packages\keras\backend.py in <module>
     30 import tensorflow.compat.v2 as tf
     31
---> 32 from keras import backend_config
     33 from keras.distribute import distribute_coordinator_utils as dc
     34 from keras.engine import keras_tensor

~\AppData\Roaming\Python\Python37\site-packages\keras\backend_config.py in <module>
     31
     32 @keras_export("keras.backend.epsilon")
---> 33 @tf.__internal__.dispatch.add_dispatch_support
     34 def epsilon():
     35     """Returns the value of the fuzz factor used in numeric expressions.

AttributeError: module 'tensorflow.compat.v2' has no attribute '__internal__'

@tilakrayal tilakrayal added type:support Support issues comp:model Model related issues and removed type:build/install Build and install issues labels May 16, 2024
@tilakrayal
Contributor

tilakrayal commented May 16, 2024

@Yummyto,
Glad the build issue was resolved. Could you please confirm whether the object detection code you are executing targets TensorFlow 2.x or an older TensorFlow version? I suspect the error indicates an issue with how you are importing TensorFlow, and potentially a version mismatch between TensorFlow and Keras.
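
One quick way to check for such a mismatch (a minimal sketch; the version numbers in the comments are assumptions, not taken from your environment) is to compare the installed TensorFlow and standalone Keras packages:

```python
# Version-mismatch check: the standalone keras package should match the
# installed TensorFlow minor version (e.g. TF 2.10.x with Keras 2.10.x).
import tensorflow as tf
import keras

print("tensorflow:", tf.__version__)
print("keras:", keras.__version__)
# A keras release newer than the installed tensorflow may reference
# tf.__internal__ APIs that the older tensorflow does not expose,
# producing exactly this AttributeError.
```

Thank you!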

@tilakrayal tilakrayal added the stat:awaiting response Status - Awaiting response from author label May 16, 2024
@Yummyto
Author

Yummyto commented May 20, 2024

Thank you so much for your help. I do have one more question: what should I do about this error? AttributeError: module 'tensorflow.keras.optimizers' has no attribute 'experimental'

@google-ml-butler google-ml-butler bot removed the stat:awaiting response Status - Awaiting response from author label May 20, 2024
@tilakrayal
Contributor

tilakrayal commented May 21, 2024

@Yummyto,
I suspect you are using tf.keras.optimizers.experimental.SGD() and tf.keras.optimizers.experimental.Adam() in your code. Those experimental optimizers were available in TensorFlow 2.9. You can now use the regular optimizers instead (such as tf.keras.optimizers.Adam / tf.keras.optimizers.SGD).

https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/SGD
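
For example (a minimal sketch; the learning rate, momentum, and model below are placeholders rather than code from this issue):

```python
import tensorflow as tf

# Before (experimental namespace from TF 2.9-era Keras):
#   optimizer = tf.keras.optimizers.experimental.SGD(learning_rate=0.01)
# After (regular optimizer, available throughout TF 2.x):
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)

# Placeholder model just to show the optimizer being used.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer=optimizer, loss="mse")
```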

Thank you!

@tilakrayal tilakrayal added the stat:awaiting response Status - Awaiting response from author label May 21, 2024