Store span names & types, input names & types as internal trace tag #12015
Conversation
Signed-off-by: Jesse Chan <jesse.chan@databricks.com>
Documentation preview for 96f3683 will be available when this CircleCI job completes.
mlflow/tracing/trace_manager.py (Outdated)

@@ -139,6 +139,15 @@ def get_request_id_from_trace_id(self, trace_id: int) -> Optional[str]:
        """
        return self._trace_id_to_request_id.get(trace_id)

    def get_mlflow_trace_from_trace(self, request_id: int) -> Optional[Trace]:
Can we rename this to get_mlflow_trace? from_trace sounds a bit weird
parsed_span["type"] = span.get_attribute(SpanAttributeKey.SPAN_TYPE)
span_inputs = span.get_attribute(SpanAttributeKey.INPUTS)
if span_inputs and isinstance(span_inputs, dict):
    parsed_span["inputs"] = list(span_inputs.keys())
What if span_inputs is not a dict, do we want to store it or not?
Same for outputs
If span inputs or outputs is not a dict, we don't want to save it
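To make the agreed behavior concrete, here is a minimal sketch of the dict check applied to both inputs and outputs (the helper name `parse_span_field_names` is hypothetical; the real PR does this inline while building `parsed_span`):

```python
def parse_span_field_names(span_inputs, span_outputs):
    """Collect input/output field names only when the payload is a dict,
    per the review agreement above; non-dict payloads are skipped entirely."""
    parsed = {}
    if isinstance(span_inputs, dict):
        parsed["inputs"] = list(span_inputs.keys())
    if isinstance(span_outputs, dict):
        parsed["outputs"] = list(span_outputs.keys())
    return parsed
```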
mlflow/tracing/export/mlflow.py (Outdated)

@@ -80,3 +80,8 @@ def _log_trace(self, trace: Trace):
        self._client._upload_ended_trace_info(trace.info)
    except Exception as e:
        _logger.debug(f"Failed to log trace to MLflow backend: {e}", exc_info=True)

    try:
        self._client._upload_trace_spans_as_tag(trace.info, trace.data)
Can we put this above end_trace? It seems more reasonable that we update tags before ending trace (which includes status).
I think that's fair
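The ordering the reviewer suggests could look like the sketch below, with the tag upload moved before the end-trace call (written here as a standalone function for illustration; `_upload_trace_spans_as_tag` and `_upload_ended_trace_info` mirror the private client helpers in the diff):

```python
import logging

_logger = logging.getLogger(__name__)


def log_trace(client, trace):
    """Upload the span-summary tag first, then end the trace, so tags are
    updated before the trace is ended (which includes the final status).
    Each step swallows its own failures so one cannot block the other."""
    try:
        client._upload_trace_spans_as_tag(trace.info, trace.data)
    except Exception as e:
        _logger.debug(f"Failed to set trace spans tag: {e}", exc_info=True)
    try:
        client._upload_ended_trace_info(trace.info)
    except Exception as e:
        _logger.debug(f"Failed to log trace to MLflow backend: {e}", exc_info=True)
```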
LGTM once comments from @serena-ruan are addressed! Thanks @jessechancy !
Signed-off-by: Jesse Chan <jesse.chan@databricks.com>
🛠 DevTools 🛠
Install mlflow from this PR
Checkout with GitHub CLI
Related Issues/PRs
#xxx

What changes are proposed in this pull request?
When a trace is logged from the MLflow client to Databricks, we should set an mlflow.traceSpans tag (see https://github.com/databricks/universe/pull/559943) via the SetTraceTag API call that includes the following information for each span:
The span name
The span type
The span input names (only if the span inputs are a dict)
The span output names (only if the span outputs are a dict)
We should store this as JSON with keys name (value is string), type (value is string), inputs (value is list of string), outputs (value is list of string). To save space, we should not pretty print the JSON - it should be as compact as possible.
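A sketch of building the compact tag value under these rules (the helper name `build_trace_spans_tag` is hypothetical; `json.dumps` with tight `separators` yields the most compact encoding):

```python
import json


def build_trace_spans_tag(parsed_spans):
    """Serialize the per-span summaries (keys: name, type, inputs, outputs)
    as compact JSON: no pretty printing, no spaces after separators."""
    return json.dumps(parsed_spans, separators=(",", ":"))


# Example: one span with a name, a type, and dict-derived field names.
tag = build_trace_spans_tag(
    [{"name": "predict", "type": "LLM", "inputs": ["prompt"], "outputs": ["text"]}]
)
```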
If the JSON is too large, the backend will throw an INVALID_PARAMETER_EXCEPTION (see https://github.com/databricks/universe/pull/559943). We should try/catch the tag logging; if this exception is encountered, we should just skip logging this tag (the trace should still be logged, the user shouldn't see any exceptions). It's important to use SetTraceTag, not EndTrace, to set the tag because we don't want EndTrace to fail if the tag length is too long. We should not hardcode the tag length check in the client, since the maximum tag length may differ based on the backend.
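The skip-on-failure behavior can be sketched as below, assuming a client exposing a `set_trace_tag(request_id, key, value)` call (the wrapper name is hypothetical; the broad `except` matches the requirement that the user never sees an exception from this tag):

```python
def set_trace_spans_tag_safely(client, trace_info, tag_value):
    """Attempt SetTraceTag; if the backend rejects the value (e.g. the JSON
    exceeds the backend's tag length limit), skip the tag silently so the
    trace itself is still logged. No length check is done client-side,
    since the maximum tag length may differ based on the backend."""
    try:
        client.set_trace_tag(trace_info.request_id, "mlflow.traceSpans", tag_value)
    except Exception:
        # INVALID_PARAMETER_EXCEPTION or any other failure: skip this tag.
        pass
```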
This tag will be used by the UI to populate a dropdown of span fields (inputs, outputs) that users can select, allowing them to extract the fields and view them in a table.
How is this PR tested?
Does this PR require documentation update?
Release Notes
Is this a user-facing change?
What component(s), interfaces, languages, and integrations does this PR affect?
Components
- area/artifacts: Artifact stores and artifact logging
- area/build: Build and test infrastructure for MLflow
- area/deployments: MLflow Deployments client APIs, server, and third-party Deployments integrations
- area/docs: MLflow documentation pages
- area/examples: Example code
- area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
- area/models: MLmodel format, model serialization/deserialization, flavors
- area/recipes: Recipes, Recipe APIs, Recipe configs, Recipe Templates
- area/projects: MLproject format, project running backends
- area/scoring: MLflow Model server, model deployment tools, Spark UDFs
- area/server-infra: MLflow Tracking server backend
- area/tracking: Tracking Service, tracking client APIs, autologging

Interface
- area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- area/docker: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
- area/sqlalchemy: Use of SQLAlchemy in the Tracking Service or Model Registry
- area/windows: Windows support

Language
- language/r: R APIs and clients
- language/java: Java APIs and clients
- language/new: Proposals for new client languages

Integrations
- integrations/azure: Azure and Azure ML integrations
- integrations/sagemaker: SageMaker integrations
- integrations/databricks: Databricks integrations

How should the PR be classified in the release notes? Choose one:
- rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
- rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
- rn/feature - A new user-facing feature worth mentioning in the release notes
- rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
- rn/documentation - A user-facing documentation change worth mentioning in the release notes

Should this PR be included in the next patch release?
- Yes should be selected for bug fixes, documentation updates, and other small changes.
- No should be selected for new features and larger changes. If you're unsure about the release classification of this PR, leave this unchecked to let the maintainers decide.

What is a minor/patch release?
Bug fixes, doc updates and new features usually go into minor releases.
Bug fixes and doc updates usually go into patch releases.