I got confused about Table 2: How are the open vocabulary segmentation metrics calculated?
Also, could you please explain how Osprey outputs the mask to calculate these metrics?
Thanks for your help!
Hi, @Glupapa
For open-vocabulary segmentation, all approaches take ground-truth boxes/masks as input, so the evaluation measures region-level recognition capability. We use semantic similarity as the matching measure between predictions and class names to compute these metrics. We will release the evaluation code.
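To make the matching step concrete, here is a minimal sketch (not the authors' released evaluation code) of semantic-similarity matching: the model's free-form description of a region is embedded as text, compared against embeddings of the dataset's class names, and the region is assigned the most similar class. The 3-d embeddings below are purely hypothetical placeholders for real text embeddings.

```python
# Hedged sketch: open-vocabulary region classification via
# semantic-similarity matching. The ground-truth mask is given as input;
# only the region's category text is predicted, then mapped to the closest
# dataset class name before standard metrics (PQ/AP/mIoU) are computed.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def match_to_classes(pred_embedding, class_embeddings):
    """Index of the class whose name embedding is most similar to the
    predicted text embedding."""
    sims = [cosine(pred_embedding, c) for c in class_embeddings]
    return max(range(len(sims)), key=sims.__getitem__)

# Toy example with hypothetical 3-d text embeddings.
class_names = ["car", "road", "building"]
class_embs = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
pred_emb = [0.9, 0.1, 0.0]  # model described the region as, say, "a sedan"
idx = match_to_classes(pred_emb, class_embs)
print(class_names[idx])  # -> car
```

With every region relabelled this way on top of its ground-truth mask, the usual PQ/AP/mIoU computations can proceed as in closed-vocabulary evaluation.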
Note that the current version of Osprey cannot generate output masks itself.
Thanks for your prompt response!
I noticed that the metrics reported on Cityscapes and ADE20K-150 in Table 2 are PQ, AP, and mIoU. I'm curious how these metrics can be calculated if Osprey does not output masks. Could you please shed some light on this?
Thank you once again for your assistance.