
check_output_custom could pass the information about the feature definition to the output validators #809

Open · lucamattiazzi opened this issue Apr 8, 2024 · 4 comments

lucamattiazzi commented Apr 8, 2024

Hello, we're using the check_output_custom decorator to add custom validation to our features. While the validator class is defined only once, each feature has its own validation rules.

The problem is that the validator class does not receive any information about the feature it will be validating, only the resulting pd.Series. We would need at least the name of the feature to validate it correctly without hardcoding that name in the validator.

The DataValidator child class could receive, either in __init__ or in the validate method, the function whose results it should validate.

I'll gladly open a PR with the change.

I'd like it to be possible to do this:

class MyValidator(DataValidator):
    def validate(self, dataset, fn):
        # fn is the decorated feature, so the right rules can be looked up by name
        validator = VALIDATORS[fn.name]
        return validator(dataset)


@check_output_custom(MyValidator("warn"))
def sum_feature(x, y):
    return x + y

instead of needing to do this:

class MyValidator(DataValidator):
    def __init__(self, importance, fn_name):
        super().__init__(importance=importance)
        self.fn_name = fn_name

    def validate(self, dataset):
        # the feature name has to be hardcoded at decoration time
        validator = VALIDATORS[self.fn_name]
        return validator(dataset)


@check_output_custom(MyValidator("warn", "sum_feature"))
def sum_feature(x, y):
    return x + y

skrawcz (Collaborator) commented Apr 8, 2024

@lucamattiazzi yep, seems like a reasonable feature -- there might be a bit of surgery required to pass that information through (haven't scoped it). @elijahbenizzy might also have thoughts.

Some questions to get your input on:

  1. Do you like having this validator become a specific part of the DAG? Would it matter if it wasn't?
  2. Do you capture any results of this? Would you want to?
  3. What drives the validator setup on your end? Some configuration? or is it code? or?

The reason I ask is that there are other implementation options:

  1. Extend the current validators to enable people to register custom ones, versus needing to use @check_output_custom -- but we'd need to expose passing in function metadata information.
  2. Use the lifecycle API functionality (see blog) combined with, say, @tag on features requiring validation (rough sketch below).

So just figuring out which of the three approaches would be better to invest in / look at.
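
For concreteness, here's a minimal sketch of what option 2 could look like -- assuming the NodeExecutionHook from hamilton.lifecycle, and that it receives node_name, node_tags, and result as keyword arguments; the VALIDATORS lookup and the wiring comment at the bottom are illustrative, not prescriptive:

import pandas as pd

from hamilton.function_modifiers import tag
from hamilton.lifecycle import NodeExecutionHook

# feature name -> validation callable (illustrative)
VALIDATORS = {"sum_feature": lambda series: series.notna().all()}


class ValidationHook(NodeExecutionHook):
    """Runs the matching validator after each tagged node executes."""

    def __init__(self, validators):
        self.validators = validators
        self.results = {}  # node name -> validation outcome, inspect after execution

    def run_before_node_execution(self, **kwargs):
        pass  # nothing to do before execution

    def run_after_node_execution(self, *, node_name, node_tags, result, **kwargs):
        if node_tags.get("validate") == "true" and node_name in self.validators:
            self.results[node_name] = self.validators[node_name](result)


@tag(validate="true")
def sum_feature(x: pd.Series, y: pd.Series) -> pd.Series:
    return x + y


# wired in via the driver builder, roughly:
# dr = driver.Builder().with_modules(features).with_adapters(ValidationHook(VALIDATORS)).build()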

lucamattiazzi (Author) commented

Thank you for the quick answer!

  1. It does not matter; it's nice, but it would not change a lot for us.
  2. Yes, we capture all of the validation failures and use the results as warnings for the end user.
  3. We use Great Expectations suites and a single validator class that picks the specific suite for each feature (that's why it would be useful for us to have the name of the feature at init or execute time) -- roughly as in the sketch below.
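
(Illustrative only -- run_suite is a hypothetical helper standing in for the Great Expectations wiring, and the suite names are made up:)

def run_suite(suite_name, series):
    """Hypothetical helper: load the named Great Expectations suite and run it on the series."""
    ...


# feature name -> callable that runs that feature's Great Expectations suite
VALIDATORS = {
    "sum_feature": lambda series: run_suite("sum_feature_suite", series),
    "ratio_feature": lambda series: run_suite("ratio_feature_suite", series),
}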

The lifecycle API seems great, and I think we might make it work with our current implementation, but the results would be stored somewhere other than the driver, which looks a little bit worse in my opinion.
Moreover, we would like in the future to be able to skip invalid rows in the computations further down the DAG, and since this method only allows for side effects it wouldn't work, I'm afraid (that's not an issue for us now).

The current structure of DataValidator does not seem to allow the current_node value to be passed easily without breaking existing child classes.

skrawcz (Collaborator) commented Apr 9, 2024

Thanks for the responses, makes sense. Some follow-ups:

> The lifecycle API seems great, and I think we might make it work with our current implementation, but the results would be stored somewhere other than the driver, which looks a little bit worse in my opinion.

Yes, that lifecycle adapter could house the results that you would then inspect. How are you getting the results now? (or how would like to get them?)

> Moreover, we would like in the future to be able to skip invalid rows in the computations further down the DAG, and since this method only allows for side effects it wouldn't work, I'm afraid (that's not an issue for us now).

None of the approaches would directly enable this, I don't think. My Hamilton philosophy here is that this should then be explicitly part of the DAG -- or, if it's part of a decorator, maybe a different one or a flag to enable it.
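
For example (a minimal sketch -- the filter condition and function names are made up), the filtering step would just be another node that downstream features depend on:

import pandas as pd


def sum_feature(x: pd.Series, y: pd.Series) -> pd.Series:
    return x + y


def valid_sum_feature(sum_feature: pd.Series) -> pd.Series:
    # Explicit filtering node: downstream features depend on valid_sum_feature,
    # so invalid rows never flow further through the DAG.
    return sum_feature[sum_feature.notna() & (sum_feature >= 0)]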

> The current structure of DataValidator does not seem to allow the current_node value to be passed easily without breaking existing child classes.

Yep. But nothing that can't be changed since I think we can do it in a backwards compatible way. :)

elijahbenizzy (Collaborator) commented

Chiming in -- so yes, this does make inherent sense. We could expose HamiltonNode to be the node it decorates; we'd just have to make it backwards compatible. Could be as simple as checking whether the class implements validate_with_metadata, which would have a default implementation that calls validate, or something like that.

It's interesting -- usually we recommend that users put the configuration closer to their code (i.e. in the parameters themselves), but if it's a lot, this can get tricky. So I think this is a reasonable approach. Your workaround is pretty good for now, but it's reasonable to have the name of the node as an input.
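
Very roughly, and with hypothetical names, the backwards-compatible dispatch could look like:

class DataValidator:
    def validate(self, dataset):
        raise NotImplementedError

    # Hypothetical hook: the default implementation ignores the node metadata
    # and just delegates to validate(), so existing subclasses keep working.
    def validate_with_metadata(self, dataset, node):
        return self.validate(dataset)


# Hypothetical framework call site: always go through the metadata-aware entry
# point. Subclasses that need the node (e.g. its name) override
# validate_with_metadata; everyone else keeps overriding validate, as today.
def run_validator(validator, dataset, node):
    return validator.validate_with_metadata(dataset, node)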
