multimodal-maestro



πŸ‘‹ hello

Multimodal-Maestro gives you more control over large multimodal models to get the outputs you want. With more effective prompting tactics, you can get multimodal models to do tasks you didn't know (or think!) were possible. Curious how it works? Try our HF space!

πŸ’» install

⚠️ Our package has been renamed to maestro. Install the package in a Python>=3.8,<=3.11 environment.

pip install maestro

πŸ”Œ API

🚧 The project is still under construction. The redesigned API is coming soon.


πŸ§‘β€πŸ³ prompting cookbooks

| Description | Colab |
|---|---|
| Prompt LMMs with Multimodal Maestro | Colab |
| Manually annotate ONE image and let GPT-4V annotate ALL of them | Colab |

πŸš€ example

Find dog.

>>> The dog is prominently featured in the center of the image with the label [9].
πŸ‘‰ read more
  • load image

    import cv2
    
    # path elided in the original; point this at your own image
    image = cv2.imread("...")
  • create and refine marks

    import maestro
    
    generator = maestro.SegmentAnythingMarkGenerator(device='cuda')
    marks = generator.generate(image=image)
    marks = maestro.refine_marks(marks=marks)
  • visualize marks

    mark_visualizer = maestro.MarkVisualizer()
    marked_image = mark_visualizer.visualize(image=image, marks=marks)

    (figure: original image vs. marked image)

  • prompt

    prompt = "Find dog."
    
    # api_key: your OpenAI API key
    response = maestro.prompt_image(api_key=api_key, image=marked_image, prompt=prompt)
    >>> "The dog is prominently featured in the center of the image with the label [9]."
    
  • extract related marks

    masks = maestro.extract_relevant_masks(text=response, detections=marks)
    >>> {'9': array([
    ...     [False, False, False, ..., False, False, False],
    ...     [False, False, False, ..., False, False, False],
    ...     [False, False, False, ..., False, False, False],
    ...     ...,
    ...     [ True,  True,  True, ..., False, False, False],
    ...     [ True,  True,  True, ..., False, False, False],
    ...     [ True,  True,  True, ..., False, False, False]])
    ... }
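
The extracted masks are plain boolean NumPy arrays keyed by mark label, so they compose directly with ordinary NumPy operations. A minimal sketch (using synthetic stand-in data rather than the maestro API) of cutting out and cropping the region a mask selects:

```python
import numpy as np

# Synthetic stand-ins: a 4x4 RGB "image" and a boolean mask shaped
# like the arrays returned above.
image = np.arange(48, dtype=np.uint8).reshape(4, 4, 3)
mask = np.zeros((4, 4), dtype=bool)
mask[2:, :2] = True  # bottom-left 2x2 region selected

# Zero out everything outside the mask (broadcast mask across channels).
cutout = np.where(mask[..., None], image, 0)

# Bounding box of the mask, useful for cropping the selected object.
ys, xs = np.nonzero(mask)
x0, y0, x1, y1 = xs.min(), ys.min(), xs.max() + 1, ys.max() + 1
crop = cutout[y0:y1, x0:x1]
print(crop.shape)  # (2, 2, 3)
```

The same pattern applies unchanged to a mask pulled out of the `masks` dict, e.g. `masks['9']`.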
    


🚧 roadmap

  • Rewriting the maestro API.
  • Update HF space.
  • Documentation page.
  • Add GroundingDINO prompting strategy.
  • CogVLM demo.
  • Qwen-VL demo.

πŸ’œ acknowledgement

🦸 contribution

We would love your help in making this repository even better! If you notice a bug or have a suggestion for improvement, feel free to open an issue or submit a pull request.