
System RAM not getting properly released #12704

Open
muhammad-baqir-410 opened this issue May 15, 2024 · 3 comments
Labels
bug Something isn't working

Comments

@muhammad-baqir-410

Search before asking

  • I have searched the YOLOv8 issues and found no similar bug report.

YOLOv8 Component

Predict, Other

Bug

System RAM is not released even after deleting the model instance, explicitly running the garbage collector, and wrapping model creation in a context manager. Here is the system RAM usage before loading the model, after loading it, and after deleting the model instance:

Memory Usage Before Loading the Model:
System RAM - Total: 31.22 GB
System RAM - Used: 14.97 GB
System RAM - Available: 15.41 GB
System RAM - Usage Percentage: 50.6%
GPU - Allocated: 0.00 GB
GPU - Cached: 0.00 GB

WARNING ⚠️ 'source' is missing. Using 'source=/home/user/anaconda3/envs/yolov8/lib/python3.11/site-packages/ultralytics/assets'.

image 1/2 /home/user/anaconda3/envs/yolov8/lib/python3.11/site-packages/ultralytics/assets/bus.jpg: 640x480 4 persons, 1 bus, 1 stop sign, 78.1ms
image 2/2 /home/user/anaconda3/envs/yolov8/lib/python3.11/site-packages/ultralytics/assets/zidane.jpg: 384x640 2 persons, 1 tie, 79.3ms
Speed: 2.0ms preprocess, 78.7ms inference, 1.2ms postprocess per image at shape (1, 3, 384, 640)
Memory Usage After Loading the Model:
System RAM - Total: 31.22 GB
System RAM - Used: 16.72 GB
System RAM - Available: 13.65 GB
System RAM - Usage Percentage: 56.3%
GPU - Allocated: 0.04 GB
GPU - Cached: 0.09 GB

Memory Usage After Deleting the Model:
System RAM - Total: 31.22 GB
System RAM - Used: 16.72 GB
System RAM - Available: 13.65 GB
System RAM - Usage Percentage: 56.3%
GPU - Allocated: 0.04 GB
GPU - Cached: 0.06 GB

Environment

Ultralytics YOLOv8.1.29 🚀 Python-3.11.3 torch-2.0.1+cu117 CUDA:0 (NVIDIA GeForce RTX 3080, 9994MiB)
Setup complete ✅ (16 CPUs, 31.2 GB RAM, 845.1/915.3 GB disk)

Minimal Reproducible Example

import torch
import psutil
from ultralytics import YOLO
import gc

def print_memory_usage(description):
    # System RAM memory
    memory = psutil.virtual_memory()
    print(f"{description}:")
    print(f"System RAM - Total: {memory.total / (1024**3):.2f} GB")
    print(f"System RAM - Used: {memory.used / (1024**3):.2f} GB")
    print(f"System RAM - Available: {memory.available / (1024**3):.2f} GB")
    print(f"System RAM - Usage Percentage: {memory.percent}%")

    # GPU memory
    if torch.cuda.is_available():
        torch.cuda.synchronize()  # Wait for all kernels in all streams on a CUDA device to complete
        gpu_memory = torch.cuda.memory_stats(device=0)  # Get memory stats for the CUDA device
        print(f"GPU - Allocated: {gpu_memory['allocated_bytes.all.current'] / (1024**3):.2f} GB")
        print(f"GPU - Cached: {gpu_memory['reserved_bytes.all.current'] / (1024**3):.2f} GB")
    print()

class ModelManager:
    def __enter__(self):
        self.model = YOLO('yolov8n.pt')
        return self.model

    def __exit__(self, exc_type, exc_value, traceback):
        del self.model
        torch.cuda.synchronize()  # Wait for all CUDA operations to complete
        torch.cuda.empty_cache()
        gc.collect()
        torch.cuda.synchronize()  # Final synchronization after cleanup


# Memory usage before loading the model
print_memory_usage("Memory Usage Before Loading the Model")

with ModelManager() as model:
    model.predict()  # Use the model within the context

    # Memory usage after loading the model
    print_memory_usage("Memory Usage After Loading the Model")

# Memory usage after deleting the model
print_memory_usage("Memory Usage After Deleting the Model")

Additional

No response

Are you willing to submit a PR?

  • Yes I'd like to help by submitting a PR!
muhammad-baqir-410 added the bug label on May 15, 2024

👋 Hello @muhammad-baqir-410, thank you for your interest in Ultralytics YOLOv8 🚀! We recommend a visit to the Docs for new users where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.

If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Join the vibrant Ultralytics Discord 🎧 community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.

Install

Pip install the ultralytics package including all requirements in a Python>=3.8 environment with PyTorch>=1.8.

pip install ultralytics

Environments

YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

Ultralytics CI

If this badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

@Calleobarroso

Hello, I have the same error when I try to run a simple YOLOv8 inference on my PC. I get a segmentation fault even though I have enough memory. Below is my error:

WARNING ⚠️ 'source' is missing. Using 'source=/home/user/.local/lib/python3.10/site-packages/ultralytics/assets'.

image 1/2 /home/user/.local/lib/python3.10/site-packages/ultralytics/assets/bus.jpg: 640x480 21 with helmets, 61 riders, 18 number plates, 9.2ms
[1]    24052 segmentation fault (core dumped)  python3 test.py

Here is the htop view while the script runs:

[htop screenshot]

Can someone help us? I need to run this YOLOv8 model. Thank you, everyone.

@glenn-jocher
Member

Hello!

It seems like you're encountering a segmentation fault which could be due to a variety of reasons — this type of error often relates to accessing memory that isn't valid or incorrectly managing resources. Here are a few suggestions to help resolve this issue:

  1. Update Libraries: Ensure your libraries, especially PyTorch and CUDA, are up to date, as they are critical for YOLOv8's operation.

  2. Valid Source Paths: The warning about the missing 'source' suggests that it might not be locating the images correctly. Ensure the path in your code is specified correctly and that the user running the script has the necessary permissions to access these files.

  3. Resource Utilization: Monitor system and GPU memory usage closely. From your description, it's unclear whether your application is running out of GPU memory. You might consider reducing the batch size or imgsz if memory overflow is the issue (see the sketch after this list).

  4. Debugging: Run your Python script with gdb or another debugger to get more information about what's going wrong at the moment of the crash:

    gdb --args python3 test.py
    run
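
For points 2 and 3, something like the following minimal sketch may help; the source path below is just a placeholder, replace it with your own images:

from ultralytics import YOLO

model = YOLO('yolov8n.pt')

# Explicit source path (placeholder) and a smaller inference size
# to reduce memory pressure during prediction
results = model.predict(source='path/to/your/images', imgsz=320)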

If these steps don't resolve the issue, consider sharing more details such as the code snippet you're using for inference. This might provide more insight into what might be going wrong.

Hope this helps! 🚀
