Logo

MIT License · Code style: black · Tensorflow

Implementation of a Neural Network that can detect whether a video is in-game or not

Video explanation (YouTube)

Authors: Christian C., Moritz M., Luca S.
Related Projects: Twitch Compilation Creator, YouTube Uploader, YouTube Watcher


Ingame Detection

About

This project implements a convolutional neural network architecture that can be trained to detect whether a given video clip is in-game or not. The network is trained using transfer learning with one of the following base architectures: ResNet50 (default), VGG16, or InceptionV3.
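For orientation, a transfer-learning setup of this kind could look roughly like the sketch below. This is only an illustration of the idea, not the project's actual code; the frozen ResNet50 base, global-average-pooling head, and sigmoid output are assumptions based on the description above.

# Minimal transfer-learning sketch (illustration only, not the project's code).
# A pre-trained ResNet50 base is frozen and a small classification head is
# trained on top to separate in-game frames from non-in-game frames.
import tensorflow as tf

base = tf.keras.applications.ResNet50(
    weights="imagenet",
    include_top=False,
    input_shape=(224, 224, 3),
)
base.trainable = False  # keep the pre-trained weights fixed

model = tf.keras.Sequential(
    [
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # in-game vs. not in-game
    ]
)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

Swapping ResNet50 for VGG16 or InceptionV3 would only change the base model constructor; the rest of the setup stays the same.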

Setup

This project requires Poetry to install the required dependencies. Check out this link to install Poetry on your operating system.

Make sure you have installed Python 3.10 or higher! Otherwise, step 3 will let you know that you have no compatible Python version installed.

  1. Clone/Download this repository

    Note: For test models/assets, download Release v1.0

  2. Navigate to the root of the repository

  3. Run poetry install to create a virtual environment with Poetry

  4. Run poetry run python src/filename.py to run the program. Alternatively, you can run poetry shell followed by python src/filename.py

  5. Enjoy :)

Script Explanations

video2images.py

This utility can be used to build the dataset by splitting video files into images.
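At its core, this boils down to reading a video frame by frame and writing a subset of the frames to disk. The sketch below illustrates that idea using OpenCV; the library choice, sampling rate, and output naming are assumptions, not the script's actual behaviour.

# Frame-extraction sketch (illustration only; assumes OpenCV).
import cv2


def video_to_images(
    video_path: str, output_dir: str, every_nth_frame: int = 30
) -> None:
    """Save every n-th frame of the video as a JPEG in output_dir."""
    capture = cv2.VideoCapture(video_path)
    frame_index = 0
    saved = 0
    while True:
        success, frame = capture.read()
        if not success:
            break
        if frame_index % every_nth_frame == 0:
            cv2.imwrite(f"{output_dir}/frame_{saved:05d}.jpg", frame)
            saved += 1
        frame_index += 1
    capture.release()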

predict.py

This script is used to verify the performance of a trained neural network by specifying a path to the trained model and a video clip that should be analyzed.
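Conceptually, such a check loads the saved model, classifies sampled frames of the clip, and aggregates the per-frame scores. The sketch below illustrates that flow; the model path, clip path, preprocessing, and 0.5 threshold are assumptions rather than the script's actual defaults.

# Prediction sketch (illustration only; paths, preprocessing and threshold are assumptions).
import cv2
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("models/dota2")  # hypothetical model path

capture = cv2.VideoCapture("example_clip.mp4")  # hypothetical clip path
scores = []
frame_index = 0
while True:
    success, frame = capture.read()
    if not success:
        break
    if frame_index % 30 == 0:  # only classify a subset of frames
        frame = cv2.resize(frame, (224, 224))
        batch = np.expand_dims(frame / 255.0, axis=0)
        scores.append(float(model.predict(batch, verbose=0)[0][0]))
    frame_index += 1
capture.release()

print("in-game" if np.mean(scores) > 0.5 else "not in-game")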

game_detection.py

This script is used to train a neural network (i.e., create a model) on a given dataset. If enough data is present, the neural network will learn to distinguish in-game clips from clips that are not in-game (e.g., lobby, queue, ...).
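To make the expected dataset layout concrete: a folder with one subfolder per class (the game's name and 'nogame', as described in the next section) can be fed into training through Keras' directory-based generators. The sketch below only illustrates that pattern; the paths, validation split, and parameters are assumptions, not what game_detection.py actually does internally.

# Data-pipeline sketch (illustration only; paths and parameters are assumptions).
import tensorflow as tf

datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1.0 / 255, validation_split=0.2
)
train_data = datagen.flow_from_directory(
    "path/to/anyFolderName",  # one subfolder per class, e.g. dota2/ and nogame/
    target_size=(224, 224),
    batch_size=16,
    class_mode="binary",
    subset="training",
)
val_data = datagen.flow_from_directory(
    "path/to/anyFolderName",
    target_size=(224, 224),
    batch_size=16,
    class_mode="binary",
    subset="validation",
)
# A model such as the one sketched in the About section could then be trained with:
# model.fit(train_data, validation_data=val_data, epochs=2)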

Creating a new model for a game

Let's assume you want to create a new model for the game Dota2. The following steps have to be performed:

  1. Download clips for Dota2 that are both in-game and not in-game (recommended source: Twitch)

        HINT: You can download clips manually or by creating a compilation with TwitchCompilationCreator

  2. Split the clips into images via video2images.py
  3. Create the following folder structure
...
│
└───anyFolderName
    │
    ├───dota2
    └───nogame
  4. Sort the clips from step 1 into those folders depending on whether they are in-game or not
  5. Create a main.py file in ./src/ to initialize a GameDetection object, then run it (see example below)
  6. Test the created model on a few example clips using predict.py to verify its accuracy

        NOTE: The number of images in the game folder (here 'dota2') or the 'nogame' folder has to be greater than or equal to the defined batch size

# For more information about the parameters, check out game_detection.py
from game_detection import GameDetection

m = GameDetection(
    model_name="ResNet50",
    game_name="dota2",
    dataset_path="---PATH TO 'anyFolderName'---",
    input_size=(224, 224),
    batch_size=16,
    save_generated_images=False,
    convert_to_gray=False,
)
m.train(epochs=2)