Error running Prompt Design/Ideation in Workbench #115

Open
markbpryan opened this issue Jul 13, 2023 · 1 comment
@markbpryan
I ran https://github.com/GoogleCloudPlatform/generative-ai/blob/main/language/examples/prompt-design/ideation.ipynb in Colab, no issues.

However, when I tried to run https://github.com/GoogleCloudPlatform/generative-ai/blob/main/language/examples/prompt-design/ideation.ipynb in Workbench (starting by clicking the Workbench link https://screenshot.googleplex.com/BkHN3iBbAZEVfUY), I got an error.

Here is what happened after clicking the Open in Workbench link:

prompt = "Generate a marketing campaign for sustainability and fashion"

print(
    generation_model.predict(
        prompt, temperature=0.2, max_output_tokens=1024, top_k=40, top_p=0.8
    ).text
)

Running this cell, I get the following error:

---------------------------------------------------------------------------
_InactiveRpcError                         Traceback (most recent call last)
File /opt/conda/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:65, in _wrap_unary_errors.<locals>.error_remapped_callable(*args, **kwargs)
     64 try:
---> 65     return callable_(*args, **kwargs)
     66 except grpc.RpcError as exc:

File /opt/conda/lib/python3.10/site-packages/grpc/_channel.py:946, in _UnaryUnaryMultiCallable.__call__(self, request, timeout, metadata, credentials, wait_for_ready, compression)
    944 state, call, = self._blocking(request, timeout, metadata, credentials,
    945                               wait_for_ready, compression)
--> 946 return _end_unary_response_blocking(state, call, False, None)

File /opt/conda/lib/python3.10/site-packages/grpc/_channel.py:849, in _end_unary_response_blocking(state, call, with_call, deadline)
    848 else:
--> 849     raise _InactiveRpcError(state)

_InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
	status = StatusCode.PERMISSION_DENIED
	details = "Permission 'aiplatform.endpoints.predict' denied on resource '//aiplatform.googleapis.com/projects/genai-test-project-jun18/locations/us-central1/publishers/google/models/text-bison@001' (or it may not exist)."
	debug_error_string = "UNKNOWN:Error received from peer ipv4:74.125.124.95:443 {grpc_message:"Permission \'aiplatform.endpoints.predict\' denied on resource \'//aiplatform.googleapis.com/projects/genai-test-project-jun18/locations/us-central1/publishers/google/models/text-bison@001\' (or it may not exist).", grpc_status:7, created_time:"2023-07-13T14:46:42.615806389+00:00"}"
>

The above exception was the direct cause of the following exception:

PermissionDenied                          Traceback (most recent call last)
Cell In[6], line 4
      1 prompt = "Generate a marketing campaign for sustainability and fashion"
      3 print(
----> 4     generation_model.predict(
      5         prompt, temperature=0.2, max_output_tokens=1024, top_k=40, top_p=0.8
      6     ).text
      7 )

File /opt/conda/lib/python3.10/site-packages/vertexai/language_models/_language_models.py:251, in TextGenerationModel.predict(self, prompt, max_output_tokens, temperature, top_k, top_p)
    229 def predict(
    230     self,
    231     prompt: str,
   (...)
    236     top_p: float = _DEFAULT_TOP_P,
    237 ) -> "TextGenerationResponse":
    238     """Gets model response for a single prompt.
    239 
    240     Args:
   (...)
    248         A `TextGenerationResponse` object that contains the text produced by the model.
    249     """
--> 251     return self._batch_predict(
    252         prompts=[prompt],
    253         max_output_tokens=max_output_tokens,
    254         temperature=temperature,
    255         top_k=top_k,
    256         top_p=top_p,
    257     )[0]

File /opt/conda/lib/python3.10/site-packages/vertexai/language_models/_language_models.py:287, in TextGenerationModel._batch_predict(self, prompts, max_output_tokens, temperature, top_k, top_p)
    279 instances = [{"content": str(prompt)} for prompt in prompts]
    280 prediction_parameters = {
    281     "temperature": temperature,
    282     "maxDecodeSteps": max_output_tokens,
    283     "topP": top_p,
    284     "topK": top_k,
    285 }
--> 287 prediction_response = self._endpoint.predict(
    288     instances=instances,
    289     parameters=prediction_parameters,
    290 )
    292 return [
    293     TextGenerationResponse(
    294         text=prediction["content"],
   (...)
    297     for prediction in prediction_response.predictions
    298 ]

File /opt/conda/lib/python3.10/site-packages/google/cloud/aiplatform/models.py:1559, in Endpoint.predict(self, instances, parameters, timeout, use_raw_predict)
   1546     return Prediction(
   1547         predictions=json_response["predictions"],
   1548         deployed_model_id=raw_predict_response.headers[
   (...)
   1556         ),
   1557     )
   1558 else:
-> 1559     prediction_response = self._prediction_client.predict(
   1560         endpoint=self._gca_resource.name,
   1561         instances=instances,
   1562         parameters=parameters,
   1563         timeout=timeout,
   1564     )
   1566     return Prediction(
   1567         predictions=[
   1568             json_format.MessageToDict(item)
   (...)
   1573         model_resource_name=prediction_response.model,
   1574     )

File /opt/conda/lib/python3.10/site-packages/google/cloud/aiplatform_v1/services/prediction_service/client.py:602, in PredictionServiceClient.predict(self, request, endpoint, instances, parameters, retry, timeout, metadata)
    597 metadata = tuple(metadata) + (
    598     gapic_v1.routing_header.to_grpc_metadata((("endpoint", request.endpoint),)),
    599 )
    601 # Send the request.
--> 602 response = rpc(
    603     request,
    604     retry=retry,
    605     timeout=timeout,
    606     metadata=metadata,
    607 )
    609 # Done; return the response.
    610 return response

File /opt/conda/lib/python3.10/site-packages/google/api_core/gapic_v1/method.py:113, in _GapicCallable.__call__(self, timeout, retry, *args, **kwargs)
    110     metadata.extend(self._metadata)
    111     kwargs["metadata"] = metadata
--> 113 return wrapped_func(*args, **kwargs)

File /opt/conda/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:67, in _wrap_unary_errors.<locals>.error_remapped_callable(*args, **kwargs)
     65     return callable_(*args, **kwargs)
     66 except grpc.RpcError as exc:
---> 67     raise exceptions.from_grpc_error(exc) from exc

PermissionDenied: 403 Permission 'aiplatform.endpoints.predict' denied on resource '//aiplatform.googleapis.com/projects/genai-test-project-jun18/locations/us-central1/publishers/google/models/text-bison@001' (or it may not exist). [reason: "IAM_PERMISSION_DENIED"
domain: "aiplatform.googleapis.com"
metadata {
  key: "permission"
  value: "aiplatform.endpoints.predict"
}
metadata {
  key: "resource"
  value: "projects/genai-test-project-jun18/locations/us-central1/publishers/google/models/text-bison@001"
}
]
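The 403 `IAM_PERMISSION_DENIED` on `aiplatform.endpoints.predict` usually means the service account the Workbench instance runs as lacks a Vertex AI role in the project. One possible fix (an assumption on my part, not something confirmed in this thread) is to grant the Vertex AI User role, which includes `aiplatform.endpoints.predict`, to that service account:

```shell
# Hypothetical fix: grant the Vertex AI User role to the service account
# that the Workbench instance runs as. PROJECT_ID and SERVICE_ACCOUNT_EMAIL
# are placeholders; substitute your own values.
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:SERVICE_ACCOUNT_EMAIL" \
    --role="roles/aiplatform.user"
```

You can find the instance's service account on the Workbench instance details page, then re-run the notebook cell after the binding propagates.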

@iamthuya
Contributor

Hi. I can't seem to reproduce the error. Could you give it another try? Sometimes this happens due to a network issue, or because you are querying the model too quickly.
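If the failure is transient (network hiccup or rate limiting, as suggested above), wrapping the `predict` call in a retry with exponential backoff is a common workaround. A minimal sketch, using a stub in place of `generation_model.predict` so it runs standalone:

```python
import random
import time

def call_with_backoff(call, max_attempts=4, base_delay=1.0):
    """Retry call() with exponential backoff plus jitter on any exception."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of retries; surface the original error
            # Sleep base, 2*base, 4*base, ... plus jitter to spread retries out.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.25))

# Demo with a stub that fails twice, then succeeds (stands in for
# generation_model.predict(...) from the notebook):
attempts = {"n": 0}

def flaky_predict():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("transient error")
    return "ok"

result = call_with_backoff(flaky_predict, base_delay=0.01)
print(result)  # -> ok
```

Note this only helps with transient errors; a genuine missing IAM permission will keep failing and should be fixed on the project side instead.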

@holtskinner holtskinner changed the title error running https://github.com/GoogleCloudPlatform/generative-ai/blob/main/language/examples/prompt-design/ideation.ipynb in Workbench Error running Prompt Design/Ideation in Workbench Feb 22, 2024