
Welcome to Text Recovery Project 👋

A python library for distributed training of a Transformer neural network across the Internet to solve the Running Key Cipher, widely known in the field of cryptography.

*(Preview animation)*


🚀 Objective

The main goal of the project is to study the possibility of using a Transformer neural network to “read” meaningful text in the columns that can be compiled for a Running Key Cipher. You can read more about the problem here.

In addition, the second, rather fun 😅 goal is to train a model large enough to handle the case described below. Let there be an original sentence:

Hello, my name is Zendaya Maree Stoermer Coleman but you can just call me Zendaya.

The columns for this sentence are compiled in such a way that the last seven each contain from ten to thirteen letters of the English alphabet, while all the others contain from two to five. Thus, the last seven columns are much harder to “read” than the rest. However, from the meaning of the sentence we can guess that the last word is the name Zendaya. In other words, the goal is also to train a model that can understand the context and correctly “read” that last word.
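To make the task concrete, here is a toy sketch of how such columns could be compiled: each column holds the true letter of the sentence plus a few random "noise" letters, so the reader must pick the right letter from every column using context alone. This is only an illustration of the idea, not the library's actual data pipeline; the `compile_columns` helper and its noise range (giving columns of two to five letters, the "easy" case described above) are assumptions for the example.

```python
import random
import string


def compile_columns(plaintext: str, min_extra: int = 1, max_extra: int = 4) -> list:
    """Build one column of candidate letters per plaintext letter.

    Each column contains the true letter plus a few random extra letters,
    so recovering the text requires choosing the right letter per column.
    """
    columns = []
    for ch in plaintext.lower():
        if ch not in string.ascii_lowercase:
            continue  # keep English letters only, as in the project's data
        others = [c for c in string.ascii_lowercase if c != ch]
        extra = random.sample(others, random.randint(min_extra, max_extra))
        column = [ch] + extra
        random.shuffle(column)
        columns.append(column)
    return columns


columns = compile_columns("Hello, Zendaya!")
print(columns)  # e.g. [['h', 'q'], ['e', 'x', 'b'], ...] -- the true letters are hidden among noise
```

Raising `min_extra`/`max_extra` for the final columns would reproduce the harder "ten to thirteen letters" case from the example above.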

⚙️ Installation

TRecover requires Python 3.8 or higher and supports both Windows and Linux platforms.

1. Clone the repository:

   ```shell
   git clone https://github.com/alex-snd/TRecover.git && cd TRecover
   ```

2. Create a virtual environment:

   - Windows:

     ```shell
     python -m venv venv
     ```

   - Linux:

     ```shell
     python3 -m venv venv
     ```

3. Activate the virtual environment:

   - Windows:

     ```shell
     venv\Scripts\activate.bat
     ```

   - Linux:

     ```shell
     source venv/bin/activate
     ```

4. Install the package inside this virtual environment:

   - Just to run the demo:

     ```shell
     pip install -e ".[demo]"
     ```

   - To train the Transformer:

     ```shell
     pip install -e ".[train]"
     ```

   - For development and training:

     ```shell
     pip install -e ".[dev]"
     ```

5. Initialize the project's environment:

   ```shell
   trecover init
   ```

   For more options use:

   ```shell
   trecover init --help
   ```

👀 Demo

- 🤗 Hugging Face

  You can play with a pre-trained model hosted here.

- 🐳 Docker Compose

  - Pull from Docker Hub:

    ```shell
    docker-compose -f docker/compose/scalable-service.yml up
    ```

  - Build from source:

    ```shell
    trecover download artifacts
    docker-compose -f docker/compose/scalable-service-build.yml up
    ```

- 💻 Local (requires Docker)

  - Download the pretrained model:

    ```shell
    trecover download artifacts
    ```

  - Launch the service:

    ```shell
    trecover up
    ```

πŸ—ƒοΈ Data

The model was trained on the WikiText and WikiQA datasets, from which all characters except English letters were removed.
You can download the cleaned dataset:

```shell
trecover download data
```
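The cleaning rule described above (keeping only English letters) can be sketched in a few lines. This is a minimal illustration, not the project's actual preprocessing code; in particular, lowercasing is an assumption made for the example.

```python
import re


def clean_text(text: str) -> str:
    """Strip every character except English letters, then lowercase."""
    return re.sub(r"[^A-Za-z]", "", text).lower()


print(clean_text("Hello, my name is Zendaya."))  # hellomynameiszendaya
```

Applying such a filter to raw WikiText/WikiQA articles yields the uninterrupted letter stream from which cipher columns can then be compiled.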

💪 Train

To quickly start training the model, open the Jupyter Notebook.

  • πŸ•ΈοΈ Collaborative
    TODO
  • πŸ’» Local
    After the dataset is loaded, you can start training the model:
    trecover train \
    --project-name {project_name} \
    --exp-mark {exp_mark} \
    --train-dataset-size {train_dataset_size} \
    --val-dataset-size {val_dataset_size} \
    --vis-dataset-size {vis_dataset_size} \
    --test-dataset-size {test_dataset_size} \
    --batch-size {batch_size} \
    --n-workers {n_workers} \
    --min-noise {min_noise} \
    --max-noise {max_noise} \
    --lr {lr} \
    --n-epochs {n_epochs} \
    --epoch-seek {epoch_seek} \
    --accumulation-step {accumulation_step} \
    --penalty-coefficient {penalty_coefficient} \
    
    --pe-max-len {pe_max_len} \
    --n-layers {n_layers} \
    --d-model {d_model} \
    --n-heads {n_heads} \
    --d-ff {d_ff} \
    --dropout {dropout}
    
    For more information use trecover train local --help

βœ”οΈ Related work

TODO: what was done, tech stack.

🤝 Contributing

Contributions, issues and feature requests are welcome.
Feel free to check the issues page if you want to contribute.

πŸ‘ Show your support

Please don't hesitate to ⭐️ this repository if you find it cool!

📜 License

Copyright © 2022 Alexander Shulga.
This project is Apache 2.0 licensed.