vmagent: support multiple groups of remoteWrite.urls to support directing samples to different groups #6212
Comments
@hagen1778 Thanks for your reply. I understand why you are proposing this approach, and it is one I considered internally. But there is a real cost associated with running an additional vmagent tier: these instances consume CPU and RAM resources. Since we don't need more vmagents to handle load, adding more just to handle this distribution work increases our cost to serve. From that perspective, increased vmagent configuration complexity (which I agree will happen) is the preferable option.
For your case, how significant would the resource cost of adding the additional vmagent tier be?
For the clusters that have this architecture (we call them "sharded clusters", but they are a set of storage/insert/select nodes with a shared agent layer), our aggregate throughput is about 50M samples in and 100M samples out. In practice, there are six of these sharded clusters, so we would need to run a separate L2 for each of them, and we would want some amount of physical isolation, so keep in mind that we would not be able to deploy the L2 as densely as theoretically possible. When you combine that with the operational overhead of additional moving parts that we have to monitor and handle failures for, it is not something we would do casually. Keep in mind that the one-cluster-per-fault-domain design will itself already increase operational overhead because of the more complex deployment topology. Thanks.
@plangdale-roblox Do you think something like below will work for your case?
Also, do you plan to use stream aggregation per group?
Thanks @hagen1778. What you've described here should work. Let me describe how we use relabel configs today, to make sure there's nothing there that could be problematic in this scheme. Here is the pattern we use for the agent command line (note the real cluster here has 14 "shards"):
The first global relabel config looks at tags which identify the sample source and adds a tag identifying which shard the sample should go to. We ensure the rules here tag every sample (there's a catch-all rule for any samples that don't match any other rule). Then the second set of relabel config files each correspond to one of the remote write URLs and simply keep the samples tagged for that URL's shard. So, with the syntax you have proposed, each shard's relabel config would map to its own group.
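For concreteness, the command-line pattern described above might look roughly like the sketch below. This is an illustrative reconstruction, not our actual configuration: the hostnames, file names, and shard labels are made up, though `-remoteWrite.relabelConfig` and `-remoteWrite.urlRelabelConfig` are the real vmagent flags for global and per-URL relabeling.

```sh
# Per-URL relabel files each contain a single keep rule (illustrative):
cat > keep_shard1.yml <<'EOF'
# Drop everything not tagged for shard1 by the global relabel config
- action: keep
  source_labels: [shard]
  regex: shard1
EOF

# Global relabel config tags every sample with its destination shard;
# each per-URL relabel config then keeps only its own shard's samples.
vmagent \
  -remoteWrite.relabelConfig=global_relabel.yml \
  -remoteWrite.url=http://vminsert-shard1:8480/insert/0/prometheus/api/v1/write \
  -remoteWrite.urlRelabelConfig=keep_shard1.yml \
  -remoteWrite.url=http://vminsert-shard2:8480/insert/0/prometheus/api/v1/write \
  -remoteWrite.urlRelabelConfig=keep_shard2.yml
```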
If it works that way, then we are good. Thanks!
Thanks! What about stream aggregation? Are you going to use it per-group? |
Currently, streaming aggregation is handled by a second set of agents, so part of the relabel config sends aggregation input samples to these dedicated agents. Those agents then write their output to a single "shard" (i.e. a single group). So everything we've discussed so far should work just fine for those, and if we ever did find ourselves writing aggregation results to multiple groups, it should still work fine unless I'm missing something.
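For reference, the dedicated aggregation agents described above could be sketched as follows. `-remoteWrite.streamAggr.config` is the real vmagent flag for per-URL stream aggregation; the aggregation rule, hostname, and file name here are illustrative assumptions, not our production config.

```sh
# Dedicated aggregation vmagent: aggregates its input samples and writes
# the results to a single shard/group (illustrative values throughout).
cat > aggr.yml <<'EOF'
# Example rule only: sum http_requests_total over 1m windows
- match: '{__name__="http_requests_total"}'
  interval: 1m
  outputs: [total]
EOF

vmagent \
  -remoteWrite.url=http://vminsert-shard1:8480/insert/0/prometheus/api/v1/write \
  -remoteWrite.streamAggr.config=aggr.yml
```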
Is your feature request related to a problem? Please describe
This request builds on top of #6054.
In our current fault-domain-less topology, we configure our vmagents with multiple `remoteWrite.urls` and use relabelling configs to decide which URL to send samples to. This allows us to use a shared set of vmagents in front of multiple storage clusters.
The work done to address #6054 will require us to define one cluster per fault domain where we currently run a single cluster. In these situations we would like to continue using our shared vmagents. Based on my reading of 8f535f9, this is not currently possible, as all the `remoteWrite.urls` are treated as a single group. That means that if we tried to pass the `remoteWrite.urls` for multiple clusters, they would still all be treated as one storage group, with the replication factor applied across all of them, which is not the intended result.
Describe the solution you'd like
To implement this scenario, we need to be able to direct samples to distinct sets of `remoteWrite.urls` (ie: to direct them to identifiable storage groups). At a high level, this means being able to assign `remoteWrite.urls` to identifiable groups, and then map the relabelling configs to specific groups.
Although not strictly required, it would also make sense to specify the replication factor separately for each group (vs having a single global setting applied to all groups).
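To make the request concrete, a configuration under this proposal might look something like the sketch below. The `-remoteWrite.urlGroup` flag is purely hypothetical, invented here for illustration (vmagent has no such flag today, and the final syntax is of course up to the maintainers); `-remoteWrite.shardByURLReplicas` is the existing replication-factor flag, which would become per-group in this scheme.

```sh
# Hypothetical syntax only: two storage groups, each with its own URL
# list, relabel config, and replication factor.
vmagent \
  -remoteWrite.urlGroup=groupA \
  -remoteWrite.url=http://vminsert-a1:8480/insert/0/prometheus/api/v1/write \
  -remoteWrite.url=http://vminsert-a2:8480/insert/0/prometheus/api/v1/write \
  -remoteWrite.urlRelabelConfig=keep_groupA.yml \
  -remoteWrite.shardByURLReplicas=2 \
  -remoteWrite.urlGroup=groupB \
  -remoteWrite.url=http://vminsert-b1:8480/insert/0/prometheus/api/v1/write \
  -remoteWrite.urlRelabelConfig=keep_groupB.yml \
  -remoteWrite.shardByURLReplicas=1
```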
Describe alternatives you've considered
The only alternative we have today is to run dedicated sets of vmagents in front of each of these storage groups, and the collection agents upstream of them (not always vmagent) would need to know which vmagent set to send which samples to. Currently, the information about where samples go is fully encapsulated in the configuration of the shared vmagent tier. If we moved to dedicated vmagent sets, we would need to size them independently, and we would also need to configure certain external services that push samples to us to become aware of which vmagent set serves which metrics. This would significantly complicate our overall configuration.
Additional information
No response