
Test contribution/robustness detector #1908

Draft · wants to merge 3 commits into main
Conversation

ChatBear

Description

This PR aims to add a new detector for testing model robustness to perturbations of categorical values. I added a detector and a new transformation function that shuffles all categorical values and reports an issue if the model's accuracy decreases too much.

I created a new base categorical detector which has essentially the same methods as the base text detector; only the run method changes: we take the "category" features instead of text.

Furthermore, I tried to find more information about the taxonomy. Do you have a link or some other resource with more details? For now, the taxonomy is None on the categorical detector class.
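A minimal sketch of the perturbation idea described above. Function names like `shuffle_category_column` and the 5% accuracy threshold are illustrative assumptions, not Giskard's actual API:

```python
import numpy as np
import pandas as pd


def shuffle_category_column(df: pd.DataFrame, column: str, rng=None) -> pd.DataFrame:
    """Return a copy of df with the values of one categorical column shuffled."""
    if rng is None:
        rng = np.random.default_rng(0)
    perturbed = df.copy()
    perturbed[column] = rng.permutation(perturbed[column].to_numpy())
    return perturbed


def accuracy_dropped(predict, df, labels, column, threshold=0.05):
    """Flag an issue if shuffling one categorical column hurts accuracy too much."""
    base_acc = (predict(df) == labels).mean()
    pert_acc = (predict(shuffle_category_column(df, column)) == labels).mean()
    return (base_acc - pert_acc) > threshold
```

The shuffle keeps the column's value distribution identical while breaking the row-level association, so a large accuracy drop indicates the model is fragile with respect to that feature.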

Related Issue

Scan: Add a robustness detector to the scan that perturbs categorial values #1847


Type of Change

  • 📚 Examples / docs / tutorials / dependencies update
  • 🔧 Bug fix (non-breaking change which fixes an issue)
  • 🤝 Improvement (non-breaking change which improves an existing feature)
  • 🚀 New feature (non-breaking change which adds functionality)
  • 💥 Breaking change (fix or feature that would cause existing functionality to change)
  • 🔐 Security fix

Checklist

  • I've read the CODE_OF_CONDUCT.md document.
  • I've read the CONTRIBUTING.md guide.
  • I've written tests for all new methods and classes that I created.
  • I've written the docstring in Google format for all the methods and classes that I used.
  • I've updated pdm.lock by running pdm update-lock (only applicable when pyproject.toml has been
    modified)

@ChatBear
Author

ChatBear commented Apr 21, 2024

Hello,
It is not merge-ready yet; I haven't done the testing part because I still need to check the docs on testing.
But I have a few questions. I created another detector for categorical values, which is basically the same as the base text detector; only the run method differs: the category detector only takes features of type "category". This means a lot of boilerplate code, so I'm not sure my solution is good.
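One way to avoid the duplication described above is a shared base class parameterized by the feature type, so the text and category detectors only differ in one attribute. This is a hypothetical sketch with made-up class names, not Giskard's actual class hierarchy:

```python
# Hypothetical sketch: one shared base class so that only the column
# selection differs between the text and the category detectors.
class BasePerturbationDetector:
    feature_type = None  # overridden by subclasses

    def select_features(self, dataset_dtypes: dict) -> list:
        # Keep only the columns whose declared type matches feature_type.
        return [col for col, dtype in dataset_dtypes.items()
                if dtype == self.feature_type]


class TextPerturbationDetector(BasePerturbationDetector):
    feature_type = "text"


class CategoryPerturbationDetector(BasePerturbationDetector):
    feature_type = "category"
```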

Also, I didn't find any info related to the taxonomy; can you provide a link for that? I don't know which taxonomy is appropriate for categorical perturbation.

Last but not least, could you check whether the code respects your standards? Otherwise I would gladly change it.

Thanks

@mattbit
Member

mattbit commented Apr 23, 2024

Hi @ChatBear !

Also, I didn't find any info related to the taxonomy; can you provide a link for that? I don't know which taxonomy is appropriate for categorical perturbation.

Regarding the taxonomy, we use the AVID standard. I think the correct code for this detector would be:

_taxonomy = ["avid-effect:performance:P0201"]

P0201 corresponds to this taxonomy item:

P0201: Resilience/stability
Ability for outputs to not be affected by small change in inputs.

You can find the taxonomy documentation here: https://docs.avidml.org/taxonomy/effect-sep-view/performance
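Concretely, the attribute would be set as a class-level field on the new detector, like this (the class name is an illustrative placeholder, not the PR's actual class):

```python
class CategoryPerturbationDetector:
    """Illustrative placeholder for the PR's categorical robustness detector."""

    # AVID taxonomy code for resilience/stability (P0201), as suggested above.
    _taxonomy = ["avid-effect:performance:P0201"]
```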

@ChatBear
Author

Thanks, I'll add the taxonomy then.

@ChatBear
Author

I took some time off, which is why there has been no update to the PR. I'll be able to continue it in mid-May!
