
Changes suggestion to improve performance under production workload #1414

Open
0xGosu opened this issue Dec 28, 2020 · 2 comments
Labels
enhancement Suggested enhancement feature Feature request, suggested feature or feature implementation web server Functionality related to the web server serving HTTP requests and websockets

Comments

@0xGosu
Contributor

0xGosu commented Dec 28, 2020

Hi, I've recently bumped into this framework and I'm very interested in its idea. In the past, I've worked on (and contributed to) a couple of micro-service frameworks. I would like to propose the list of changes below:

Change 1: Allow running multiple HTTP services on the same port.
Currently in the tomodachi framework, I can run multiple services in a single process by importing them in the same file services.py and then running it with tomodachi run services.py --production. This behaviour works great for AMQP services, since there is no port conflict when running more than one service. For HTTP services, however, it makes accessing those services difficult, since each service needs to be configured on a different port. This is especially awkward when running inside a Docker container with the default bridge-isolated network: multiple ports need to be forwarded to the host, or an nginx instance has to be configured to route each service path to its corresponding port. If the framework could somehow merge all routes and corresponding handlers of multiple HTTP services onto a single socket bound to the same (address, port), it would eliminate this problem. Most of the time, different services will have different path routing anyway, so conflicts on the same path are not really a big issue. If routes do conflict, the developer should always have the option of running the services on different ports (the current behaviour).
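To illustrate the merging idea, here is a minimal sketch of how route tables from several HTTP services could be combined onto one socket, with conflicts detected up front. None of these names come from tomodachi itself; `merge_routes` and `RouteConflict` are hypothetical:

```python
class RouteConflict(Exception):
    """Raised when two services register the same (method, path)."""


def merge_routes(*route_tables):
    """Merge several {(method, path): handler} tables into one.

    A collision raises RouteConflict so the developer can fall back to
    running the conflicting services on separate ports (current behaviour).
    """
    merged = {}
    for routes in route_tables:
        for key, handler in routes.items():
            if key in merged:
                raise RouteConflict(f"duplicate route: {key}")
            merged[key] = handler
    return merged


# Two toy services, each exposing its own routes:
service_a = {("GET", "/orders"): lambda request: "orders"}
service_b = {("GET", "/users"): lambda request: "users"}

routes = merge_routes(service_a, service_b)
```

The merged table could then back a single aiohttp server bound to one (address, port) pair.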

Change 2: Automatically set the socket SO_REUSEPORT option to True on the Linux platform.
As of today, all cloud providers offer virtual machines using CPUs with hyper-threading, so you get a minimum of 2 vCPUs by default. To fully utilise these CPUs for a Python application, a standard production setup will always run the application under a process manager such as supervisord, or under a web server such as gunicorn, with the number of processes/workers set to (n+1) or (2n+1). Currently this is not possible without the socket SO_REUSEPORT flag set to True. This change, along with change 1 above, would allow multiple tomodachi HTTP services to be easily deployed in a production setup using supervisord with a simple config inside a Docker container.
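As a sketch of what setting the option looks like at the socket level (the `make_listener` helper is hypothetical, not a tomodachi API):

```python
import socket


def make_listener(port: int, host: str = "127.0.0.1") -> socket.socket:
    """Create a TCP listener, setting SO_REUSEPORT where the platform has it.

    SO_REUSEPORT is available on Linux (3.9+) and most BSDs; the hasattr
    guard keeps the sketch portable to platforms without it.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    if hasattr(socket, "SO_REUSEPORT"):
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    sock.bind((host, port))
    sock.listen(128)
    return sock
```

With the option set, several worker processes (for example, started by supervisord) can each bind the same (address, port) and the kernel load-balances incoming connections across them.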

Change 3: Set a default prefetch_count for AMQP services.
The current implementation of the tomodachi framework doesn't set the prefetch_count on the channel when consuming messages from a queue. If this number is not set (or is set to 0), then whenever a message is successfully published to a queue it will immediately be sent to the tomodachi AMQP service currently consuming the queue, regardless of how busy that service worker is. This behaviour is only optimal for services that are very lightweight (very low memory consumption per request and low CPU consumption overall). When the AMQP service is doing something heavy, this behaviour will instead cause the service to bottleneck itself under high or even average load (and may cause an out-of-memory situation as well). I suggest setting a default prefetch_count of 100 in the framework and allowing the developer to specify this number in the service configuration. (See: https://www.rabbitmq.com/confirms.html#channel-qos-prefetch-throughput)

p/s: I will personally create the PRs for changes 2 and 3. Change 1 requires touching/rewriting part of the HTTP core of this framework, so I feel it's better implemented by the framework owner, if he agrees to the change.

@0xGosu
Contributor Author

0xGosu commented Jan 1, 2021

I've created the two PRs above for changes 2 and 3. Please take a look at them.

@kalaspuff
Owner

@tranvietanh1991 Hi. This is an insanely late response to your suggestions I'm afraid, so I would like to apologize for that. Also – thank you for the great suggestions and the pull requests to the repo.

Using the SO_REUSEPORT socket option seems like a great addition, especially for cases where services are managed by supervisord – the PR for suggested change 2 seems to work as expected as far as I can see. As you mentioned regarding suggested change 1, the ins and outs of the routing would have to be restructured a bit for it to work with multi-class services running on the same port within the same process. It's an interesting suggestion that I haven't really thought about previously, but I'm sure it could be quite useful for having a service with different options, middlewares, etc. while working as one.

Will also merge your third suggested change (and PR) regarding setting a prefetch_count value for AMQP subscribers. Great stuff! 🙇‍♂️

@kalaspuff kalaspuff added enhancement Suggested enhancement feature Feature request, suggested feature or feature implementation web server Functionality related to the web server serving HTTP requests and websockets labels Mar 18, 2021