Semantic Search demo featuring UForm, USearch, UCall, and StreamLit, to visualize and retrieve from image datasets, similar to "CLIP Retrieval"


USearch Images

[Animated demo: text-to-image multi-modal AI search]

Semantic Search Demo with UForm, USearch, and UCall

  • Can run the GUI and the search backend on the same server or on separate ones
  • Comes with pre-constructed indexes for large datasets
  • Supports text-to-image and image-to-image search
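Both search modes boil down to nearest-neighbor lookup over embeddings in a shared vector space. A minimal sketch of that ranking step, using an exhaustive cosine-similarity scan in place of the actual USearch index (the function names here are illustrative, not this project's API):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search(query_vector, image_vectors, top_k=3):
    # Rank every image embedding against the query embedding.
    # A USearch index answers the same query approximately,
    # without scanning every vector.
    scored = [(cosine(query_vector, v), i) for i, v in enumerate(image_vectors)]
    scored.sort(reverse=True)
    return [i for _, i in scored[:top_k]]
```

In the real app the query vector comes from UForm's text or image encoder, and the pre-built `.usearch` index replaces the exhaustive scan above.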

To start the StreamLit demo app locally, you need to download just a couple of files:

mkdir -p /data/unsplash-25k
wget -O /data/unsplash-25k/images.txt https://huggingface.co/datasets/unum-cloud/gallery-unsplash-25k/resolve/main/images.txt
wget -O /data/unsplash-25k/images.uform-vl-multilingual-v2.fbin https://huggingface.co/datasets/unum-cloud/gallery-unsplash-25k/resolve/main/images.uform-vl-multilingual-v2.fbin

pip install -r requirements.txt
streamlit run streamlit_app.py

Datasets

The default dataset, Unsplash, contains fewer than 25,000 images. Still, the demo is easy to extend to other datasets, some of which we have already embedded with UForm and indexed with USearch. All datasets are available on Unum's HuggingFace page and share an identical format:

  • images.txt contains newline-delimited URLs or Base64-encoded data-URIs of images.
  • images.<model>.fbin contains a binary matrix of UForm embeddings, one row per image in images.txt.
  • images.<model>.usearch contains a binary USearch index for fast approximate k-nearest-neighbors (kANN) search.

Additionally, some image-text paired datasets may provide texts.txt, texts.<model>.fbin, texts.<model>.usearch, following the same logic.