release-23.2: pkg/sql: expand metrics used by the SQL stats activity update job #123960

Merged
merged 4 commits into cockroachdb:release-23.2 on May 13, 2024

Conversation

abarganier
Member

Backport 4/4 commits from #120522.

/cc @cockroachdb/release


pkg/sql: expand metrics used by the SQL stats activity update job

Addresses: #119779

Epic: CRDB-24527

Currently, the SQL activity update job is lacking observability. While
we have a metric for job failures, we've seen instances where the query
run by the job gets caught in a retry loop, meaning the metric is rarely
incremented.

Therefore, additional metrics, such as the count and latency of
successful runs, will be helpful for further inspecting the state of
the job.

This patch adds metrics for both.
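
For illustration, instrumentation along these lines could look roughly as
follows. This is a sketch only, not the code from this patch: the helper
`runActivityUpdate` and the field names `UpdatesSuccessful` and
`UpdateLatency` are invented, and the latency recording follows the release
note's statement that failed attempts are included.

```go
package sqlstats

import (
	"context"

	"github.com/cockroachdb/cockroach/pkg/util/metric"
	"github.com/cockroachdb/cockroach/pkg/util/timeutil"
)

// latencyRecorder stands in for whichever histogram type the metric package
// provides; only RecordValue is needed for this sketch.
type latencyRecorder interface {
	RecordValue(v int64)
}

// activityUpdaterMetrics groups the two new metrics. Field names are invented.
type activityUpdaterMetrics struct {
	UpdatesSuccessful *metric.Counter // e.g. the successful-runs counter
	UpdateLatency     latencyRecorder // e.g. the run-latency histogram
}

// runActivityUpdate wraps a single run of the update: latency is recorded for
// every attempt, while the success counter is bumped only when the run
// completes without error.
func runActivityUpdate(
	ctx context.Context, m *activityUpdaterMetrics, update func(context.Context) error,
) error {
	start := timeutil.Now()
	err := update(ctx)
	m.UpdateLatency.RecordValue(timeutil.Since(start).Nanoseconds())
	if err != nil {
		return err
	}
	m.UpdatesSuccessful.Inc(1)
	return nil
}
```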

We've also had escalations recently involving the SQL activity update job
running for extended periods of time, such that the signal sent to the
job indicating a flush has completed was not received because there was
no listener.

While we've added a default case to prevent this from hanging the flush
job, and some logging to go with it, a counter metric indicating when
this occurs would also be useful to have when debugging.

This patch adds such a counter.
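
A simplified sketch of the non-blocking send described above (not the code
from this patch; `signalFlushDone`, `flushDoneCh`, and the parameter names
are illustrative):

```go
package sqlstats

import (
	"context"

	"github.com/cockroachdb/cockroach/pkg/util/log"
	"github.com/cockroachdb/cockroach/pkg/util/metric"
)

// signalFlushDone sends the flush-completed signal without blocking the flush.
func signalFlushDone(ctx context.Context, flushDoneCh chan<- struct{}, ignored *metric.Counter) {
	select {
	case flushDoneCh <- struct{}{}:
		// The activity update job was listening and received the signal.
	default:
		// No listener: the job is presumably still busy with an earlier run.
		// Drop the signal instead of blocking the flush, and count the drop
		// (the counter backing sql.stats.flush.done_signals_ignored).
		ignored.Inc(1)
		log.Infof(ctx, "sql activity update job was not ready to receive flush-done signal")
	}
}
```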

Finally, we rename the metric counting failures of the job to sql.stats.activity_job.runs.failed,
as the old metric name was not descriptive.

Release note (ops change): Two new metrics have been added to track the
status of the SQL activity update job, which is used to pre-aggregate
top K information within the SQL stats subsystem and write the results to
system.statement_activity and system.transaction_activity.

The new metrics are:

  • sql.stats.activity_job.runs.successful: Number of successful runs made
    by the SQL activity updater job
  • sql.stats.activity_job.latency: The latency of successful runs made by
    the SQL activity updater job

Release note (ops change): A new counter metric,
sql.stats.flush.done_signals_ignored, has been introduced. The metric
tracks the number of times the SQL Stats activity update job ignored
the signal sent to it indicating a flush has completed. This may
indicate that the SQL Activity update job is taking longer than expected
to complete.

Release note (ops change): A new counter metric,
sql.stats.activity_job.runs.failed, has been introduced to measure the
number of runs made by the SQL activity updater job that failed with
errors. The SQL activity update job is used to pre-aggregate top K
information within the SQL stats subsystem and write the results to
system.statement_activity and system.transaction_activity.

Release justification: metrics-only change to improve insights into fingerprint counts and SQL stats flush statistics, which will be useful in our mission to stabilize the feature.

Addresses: cockroachdb#119779

Currently, the SQL activity update job is lacking observability. While
we have a metric for job failures, we've seen instances where the query
run by the job gets caught in a retry loop, meaning the metric is rarely
incremented.

Therefore, additional metrics, such as the count and latency of
successful runs, will be helpful for further inspecting the state of
the job.

This patch adds metrics for both.

Release note (ops change): Two new metrics have been added to track the
status of the SQL activity update job, which is used to pre-aggregate
top K information within the SQL stats subsystem and write the results to
`system.statement_activity` and `system.transaction_activity`.

The new metrics are:
- `sql.stats.activity.updates.successful`: Number of successful updates made
  by the SQL activity updater job.
- `sql.stats.activity.update.latency`: The latency of updates made by
  the SQL activity updater job. Includes failed update attempts.

Addresses: cockroachdb#119779

We've had escalations recently involving the SQL activity update job
running for extended periods of time, such that the signal sent to the
job indicating a flush has completed was not received because there was
no listener.

While we've added a default case to prevent this from hanging the flush
job, and some logging to go with it, a counter metric indicating when
this occurs would also be useful to have when debugging.

This patch adds such a counter.

Release note (ops change): A new counter metric,
`sql.stats.flush.done_signals.ignored`, has been introduced. The metric
tracks the number of times the SQL Stats activity update job ignored
the signal sent to it indicating a flush has completed. This may
indicate that the SQL Activity update job is taking longer than expected
to complete.

The metric used to track failures of the SQL Activity update job
didn't have a descriptive name, and the help text was grammatically
incorrect. Furthermore, the metric name is the same as a metric used
within the job system, meaning one of these metrics is probably
clobbering the other when writing to TSDB or outputting to
`/_status/vars`.

This patch simply updates the metric name to better describe what it
measures, and fixes the help text description.

Release note (ops change): A new counter metric,
`sql.stats.activity.updates.failed`, has been introduced to measure the
number of update attempts made by the SQL activity updater job that failed with
errors. The SQL activity update job is used to pre-aggregate top K
information within the SQL stats subsystem and write the results to
`system.statement_activity` and `system.transaction_activity`.
@abarganier requested review from a team as code owners May 10, 2024 18:36
@abarganier requested review from nkodali and DrewKimball and removed request for a team May 10, 2024 18:36

blathers-crl bot commented May 10, 2024

Thanks for opening a backport.

Please check the backport criteria before merging:

  • Backports should only be created for serious issues or test-only changes.
  • Backports should not break backwards-compatibility.
  • Backports should change as little code as possible.
  • Backports should not change on-disk formats or node communication protocols.
  • Backports should not add new functionality (except as defined here).
  • Backports must not add, edit, or otherwise modify cluster versions; or add version gates.
  • All backports must be reviewed by the owning area's TL and one additional TL.
    For more information on how that review should be conducted, please consult
    the backport policy.
If your backport adds new functionality, please ensure that the following additional criteria are satisfied:
  • There is a high priority need for the functionality that cannot wait until the next release and is difficult to address in another way.
  • The new functionality is additive-only and only runs for clusters which have specifically “opted in” to it (e.g. by a cluster setting).
  • New code is protected by a conditional check that is trivial to verify and ensures that it only runs for opt-in clusters. State changes must be further protected such that nodes running old binaries will not be negatively impacted by the new state (with a mixed version test added).
  • The PM and TL on the team that owns the changed code have signed off that the change obeys the above rules.
  • Your backport must be accompanied by a post to the appropriate Slack
    channel (#db-backports-point-releases or #db-backports-XX-X-release) for awareness and discussion.

Also, please add a brief release justification to the body of your PR to justify this
backport.

@blathers-crl bot added the backport label on May 10, 2024

blathers-crl bot commented May 10, 2024

It looks like your PR touches production code but doesn't add or edit any test code. Did you consider adding tests to your PR?

🦉 Hoot! I am a Blathers, a bot for CockroachDB. My owner is dev-inf.

@abarganier requested review from xinhaoz and removed request for DrewKimball May 10, 2024 18:37
@cockroach-teamcity
Member

This change is Reviewable

}
metrics.UpdateLatency.RecordValue(timeutil.Now().UnixNano() - startTime)
Collaborator

super nit: this could be `timeutil.Since(startTime).Nanoseconds()`
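
For reference, the suggested form assumes `startTime` is captured as a
`time.Time` (e.g. `startTime := timeutil.Now()`); if it is stored as raw
`UnixNano()` nanoseconds, as the subtraction above implies, that capture
would need to change as well:

```go
// Equivalent to the subtraction above when startTime is a time.Time:
//   startTime := timeutil.Now()
metrics.UpdateLatency.RecordValue(timeutil.Since(startTime).Nanoseconds())
```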

Addresses: cockroachdb#119779

The count of unique fingerprints flushed to `system.statement_statistics`
and `system.transaction_statistics` is the core component that
determines data cardinality within the SQL stats subsystem. Today, we
don't have good metrics around this source of cardinality. As we aim to
reduce cardinality by improving our fingerprinting algorithms, creating
a metric to count the number of unique statement and transaction
fingerprints included in each flush of the in-memory SQL stats will be a
helpful measurement to benchmark cardinality reduction.

This patch adds a new metric to track the number of unique fingerprints
(stmt and txn) included in each flush.

Release note (ops change): A new counter metric,
`sql.stats.flush.fingerprint.count`, has been introduced. The metric
tracks the number of unique statement and transaction fingerprints
included in the SQL Stats flush.
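
A minimal sketch of how such a counter could be bumped at flush time (the
helper and parameter names are illustrative, not this patch's code):

```go
package sqlstats

import "github.com/cockroachdb/cockroach/pkg/util/metric"

// recordFlushedFingerprints bumps the fingerprint-count metric
// (sql.stats.flush.fingerprint.count) by the number of unique statement and
// transaction fingerprints written out by one flush.
func recordFlushedFingerprints(
	counter *metric.Counter, stmtFingerprints, txnFingerprints map[uint64]struct{},
) {
	counter.Inc(int64(len(stmtFingerprints) + len(txnFingerprints)))
}
```
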
@abarganier
Member Author

TFTRs!

@abarganier merged commit 57fa5b8 into cockroachdb:release-23.2 May 13, 2024
5 of 6 checks passed
Labels
backport Label PR's that are backports to older release branches
4 participants