
refactor(clustering/rpc): cache the result of parse_method_name() #12949

Open · wants to merge 1 commit into base: master

Conversation

@chronolaw (Contributor) commented Apr 28, 2024

Summary

Use an LRU cache to improve the performance of parse_method_name().

KAG-4441

Checklist

  • The Pull Request has tests
  • A changelog file has been created under changelog/unreleased/kong, or the skip-changelog label added on the PR if a changelog is unnecessary
  • There is a user-facing docs PR against https://github.com/Kong/docs.konghq.com - PUT DOCS PR HERE

Issue reference

Fix #[issue number]

@github-actions github-actions bot added core/clustering cherry-pick kong-ee schedule this PR for cherry-picking to kong/kong-ee labels Apr 28, 2024
@chronolaw chronolaw changed the title refactor(router/rpc): cache the result of parse_method_name() refactor(clustering/rpc): cache the result of parse_method_name() Apr 28, 2024
function _M.parse_method_name(method)
local cap = cap_names:get(method)
@ADD-SP (Contributor) commented Apr 28, 2024

The method name should not be very long; I'm wondering whether the lrucache is significantly faster than this simple string operation.

I'm trying to avoid meaningless memory consumption.
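For reference, the uncached version amounts to a single pattern match; a minimal sketch in plain Lua (the split rule and names here are my assumption, not necessarily Kong's exact implementation):

```lua
-- Hypothetical sketch of the uncached parse: split an RPC method name
-- such as "kong.rpc.v1.test" at its last dot into (scope, name).
-- The actual Kong implementation may differ.
local function parse_method_name(method)
  local pos = method:find("%.[^%.]*$")  -- index of the last dot
  if not pos then
    return nil, "malformed method name: " .. method
  end
  return method:sub(1, pos - 1), method:sub(pos + 1)
end

local scope, name = parse_method_name("kong.rpc.v1.test")
print(scope, name)
```

This is a cheap operation (one pattern scan, two substrings), which is the basis of the reviewer's question about whether caching pays for its memory.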

@chronolaw (Contributor, Author) commented

Yes, I also noticed that; it needs more discussion.

@chronolaw (Contributor, Author) commented

I did a simple perf test:

-- lru cache
local function parse_method_name1(method) ... end

-- no cache
local function parse_method_name2(method) ... end

local COUNT = 100 * 1000

ngx.update_time()
local t = ngx.now()

for i = 1, COUNT do
  parse_method_name1("kong.rpc.v1.test")
end

ngx.update_time()
print("v1: ", ngx.now() - t)

local t = ngx.now()

for i = 1, COUNT do
  parse_method_name2("kong.rpc.v1.test")
end

ngx.update_time()
print("v2: ", ngx.now() - t)

The result is:

resty ./t.lua
v1: 0.002000093460083
v2: 0.0079998970031738

So the cached version is roughly 4x faster than recomputing the parse on every call.
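The memoization can also be reproduced outside OpenResty with a plain Lua table and os.clock. A sketch (the PR itself uses resty.lrucache, which additionally bounds memory, unlike this unbounded table; the split rule is assumed):

```lua
-- Sketch: memoize parse results in a plain table keyed by the full
-- method string. Unlike the resty.lrucache used in the PR, this
-- cache grows without bound; the split rule is an assumption.
local cache = {}

local function parse(method)
  local pos = method:find("%.[^%.]*$")
  if not pos then return nil end
  return { scope = method:sub(1, pos - 1), name = method:sub(pos + 1) }
end

local function parse_cached(method)
  local hit = cache[method]
  if hit == nil then
    hit = parse(method)
    cache[method] = hit
  end
  return hit
end

-- Crude timing with os.clock (stock Lua; no ngx available here).
local COUNT = 100 * 1000

local t = os.clock()
for _ = 1, COUNT do parse_cached("kong.rpc.v1.test") end
print("cached:   ", os.clock() - t)

t = os.clock()
for _ = 1, COUNT do parse("kong.rpc.v1.test") end
print("uncached: ", os.clock() - t)
```

Note that the cached variant returns the same table on every hit, so callers must treat the result as read-only.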

Labels: cherry-pick kong-ee (schedule this PR for cherry-picking to kong/kong-ee), core/clustering, size/S, skip-changelog