memory from transport buffer not freed after connection_lost #548

toppk opened this issue Jul 6, 2023 · 2 comments

toppk commented Jul 6, 2023

  • uvloop version: 0.17.0
  • Python version: 3.11.4
  • Platform: fedora 38
  • Can you reproduce the bug with PYTHONASYNCIODEBUG in env?: yes, with PYTHONASYNCIODEBUG=1
  • Does uvloop behave differently from vanilla asyncio? How?: yes, uvloop keeps the memory resident

If I flood transport.write() and then close the connection (e.g. by ending the client telnet session), the memory stays resident.
Under stock asyncio, the memory is released immediately.

To reproduce:

  1. Start this program.
  2. Watch it in top.
  3. Telnet to port 8888.
  4. Watch memory usage spike.
  5. Close the telnet session.
  6. See that memory usage stays high.
  7. Repeat the experiment with the stock asyncio runner; the memory is freed immediately after the telnet session is closed.

This test script is a malloc bomb, so watch memory closely or it will consume all the memory on your machine.

#!/usr/bin/env python3


import asyncio
import uvloop


class EchoServerProtocol(asyncio.Protocol):
    def __init__(self, on_connect, on_disconnect):
        self.on_connect = on_connect
        self.on_disconnect = on_disconnect

    def connection_lost(self, exc):
        # asyncio passes the exception (or None) here, not the transport.
        print('Connection lost')
        self.on_disconnect()
        self.transport = None

    def connection_made(self, transport):
        self.on_connect(transport)
        peername = transport.get_extra_info('peername')
        print('Connection from {}'.format(peername))
        self.transport = transport

    def data_received(self, data):
        message = data.decode()
        print('Data received: {!r}'.format(message))

        print('Send: {!r}'.format(message))
        self.transport.write(data)


async def send_data(transport):
    # Flood the transport's write buffer much faster than the peer can read it.
    message = b'Hello World!' * 1000
    while True:
        transport.write(message)
        await asyncio.sleep(0)


task = None


def on_connect(transport):
    global task
    task = asyncio.create_task(send_data(transport))


def on_disconnect():
    global task
    task.cancel()


async def start():
    # Get a reference to the event loop as we plan to use
    # low-level APIs.
    loop = asyncio.get_running_loop()

    server = await loop.create_server(
        lambda: EchoServerProtocol(on_connect, on_disconnect),
        '127.0.0.1',
        8888,
    )

    async with server:
        await server.serve_forever()


if __name__ == '__main__':
    # with asyncio.Runner() as runner:
    with asyncio.Runner(loop_factory=uvloop.new_event_loop) as runner:
        runner.run(start())

toppk commented Jul 6, 2023

I compiled from master with debug and recorded the debug info (printed in protocol.connection_lost()) after a few runs (you can tell from the closed-handle counts that I'm not posting every capture). I do see resident memory going down from 2 GB to 1 GB, but I don't know if that's just swapping or actually freeing address space. Virtual memory doesn't seem to be going down.
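The printing is just the loop's debug dump invoked from connection_lost, roughly like this (a sketch; print_debug_info() is only compiled in when uvloop is built with debug enabled):

    def connection_lost(self, exc):
        print('Connection lost')
        self.on_disconnect()
        self.transport = None
        # Dump uvloop's handle/stream counters (debug builds only).
        asyncio.get_running_loop().print_debug_info()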

---- Process info: -----
Process memory:            2654928
Number of signals:         0

--- Loop debug info: ---
Loop time:                 886244.171
Errors logged:             1

Callback handles:          5        | 3379071
Timer handles:             0        | 0

                        alive  | closed  |
UVHandles               python | libuv   | total
                        objs   | handles |
-------------------------------+---------+---------
    TCPServer                1 |       0 |       1
    TCPTransport             1 |       0 |       1
    UVAsync                  1 |       0 |       1
    UVCheck                  1 |       0 |       1
    UVIdle                   1 |       0 |       1
    UVPoll                   1 |       0 |       1

uv_handle_t (current: 6; freed: 0; total: 6)

--- Streams debug info: ---
Write errors:              0
Write without poll:        223
Write contexts:            3367837  | 3378839
Write failed callbacks:    0

Read errors:               0
Read callbacks:            1
Read failed callbacks:     1
Read EOFs:                 0
Read EOF failed callbacks: 0

Listen errors:             0
Shutdown errors            0

--- Polls debug info: ---
Read events:               0
Read callbacks failed:     0
Write events:              0
Write callbacks failed:    0

--- Sock ops successful on 1st try: ---
Socket try-writes:         0
---- Process info: -----
Process memory:            2654928
Number of signals:         0

--- Loop debug info: ---
Loop time:                 886381.956
Errors logged:             1

Callback handles:          5        | 7834229
Timer handles:             0        | 0

                        alive  | closed  |
UVHandles               python | libuv   | total
                        objs   | handles |
-------------------------------+---------+---------
    TCPServer                1 |       0 |       1
    TCPTransport             1 |       3 |       4
    UVAsync                  1 |       0 |       1
    UVCheck                  1 |       0 |       1
    UVIdle                   1 |       0 |       1
    UVPoll                   1 |       0 |       1

uv_handle_t (current: 6; freed: 3; total: 9)

--- Streams debug info: ---
Write errors:              5
Write without poll:        890
Write contexts:            1467948  | 7833321
Write failed callbacks:    0

Read errors:               0
Read callbacks:            1
Read failed callbacks:     1
Read EOFs:                 3
Read EOF failed callbacks: 0

Listen errors:             0
Shutdown errors            0

--- Polls debug info: ---
Read events:               0
Read callbacks failed:     0
Write events:              0
Write callbacks failed:    0

--- Sock ops successful on 1st try: ---
Socket try-writes:         0

---- Process info: -----
Process memory:            4007728
Number of signals:         0

--- Loop debug info: ---
Loop time:                 886583.413
Errors logged:             1

Callback handles:          5        | 14216779
Timer handles:             0        | 0

                        alive  | closed  |
UVHandles               python | libuv   | total
                        objs   | handles |
-------------------------------+---------+---------
    TCPServer                1 |       0 |       1
    TCPTransport             1 |       5 |       6
    UVAsync                  1 |       0 |       1
    UVCheck                  1 |       0 |       1
    UVIdle                   1 |       0 |       1
    UVPoll                   1 |       0 |       1

uv_handle_t (current: 6; freed: 5; total: 11)

--- Streams debug info: ---
Write errors:              8
Write without poll:        1334
Write contexts:            1273661  | 14215421
Write failed callbacks:    0

Read errors:               0
Read callbacks:            1
Read failed callbacks:     1
Read EOFs:                 5
Read EOF failed callbacks: 0

Listen errors:             0
Shutdown errors            0

--- Polls debug info: ---
Read events:               0
Read callbacks failed:     0
Write events:              0
Write callbacks failed:    0

--- Sock ops successful on 1st try: ---
Socket try-writes:         0


toppk commented Jul 6, 2023

FWIW, running this after the transport is closed down seems to trigger a cleanup:

import ctypes

def malloc_trim():
    ctypes.CDLL("libc.so.6").malloc_trim(0)

As a workaround, I'm planning on firing a call to malloc_trim a few seconds after connection_lost, at least until I implement the buffer pause protocol. I don't think there's an obvious bug to fix here, but who knows.
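Roughly what I have in mind for the workaround (a sketch only; the five-second delay is arbitrary, the class name is made up, and malloc_trim is glibc/Linux-specific):

import asyncio
import ctypes

_libc = ctypes.CDLL("libc.so.6")


def malloc_trim():
    # glibc-specific: ask the allocator to return free heap pages to the kernel.
    _libc.malloc_trim(0)


class TrimmingEchoProtocol(asyncio.Protocol):
    def connection_made(self, transport):
        self.transport = transport

    def connection_lost(self, exc):
        print('Connection lost')
        self.transport = None
        # Give the loop a few seconds to drop its write contexts, then trim.
        asyncio.get_running_loop().call_later(5, malloc_trim)

The buffer pause protocol is just asyncio's standard flow-control callbacks (pause_writing/resume_writing). A minimal sketch of a sender that respects them, so the write buffer never grows unbounded in the first place (names here are only for illustration):

class PausableSender(asyncio.Protocol):
    def __init__(self):
        self._can_write = asyncio.Event()
        self._can_write.set()

    def connection_made(self, transport):
        self.transport = transport

    def pause_writing(self):
        # Called by the transport when its buffer passes the high-water mark.
        self._can_write.clear()

    def resume_writing(self):
        # Called once the buffer drains below the low-water mark.
        self._can_write.set()

    async def send_data(self):
        message = b'Hello World!' * 1000
        while True:
            await self._can_write.wait()  # block instead of flooding
            self.transport.write(message)
            await asyncio.sleep(0)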
