
How to integrate a model into Spark cluster #579

Open
jahidhasanlinix opened this issue Dec 1, 2021 · 12 comments

@jahidhasanlinix

How can I actually integrate a model into a Spark cluster? I have a deep learning model (TensorFlow, Python) that I would like to integrate with a Spark cluster to run some experiments. Can anyone give me some suggestions or steps to follow?

@leewyang
Contributor

leewyang commented Dec 2, 2021

If you already have a trained model (and it fits in memory), then the simplest way to run inference in a Spark job is to use something like this example. Basically, you load the model in your map_fn and run inference on partitions. Note that the TFParallel class is just a convenience wrapper around an RDD.mapPartitions() call, so if it doesn't fit your needs, you can always write a similar class. Also, you may want to cache your model (if it's large) instead of reloading it in every map function call.
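For illustration, here is a minimal sketch of that load-in-map_fn pattern. `load_model` and `MODEL_PATH` are hypothetical placeholders (not part of the TFParallel API), and the "model" is a trivial stand-in so the sketch is self-contained; in a real job you would load a TensorFlow SavedModel instead.

```python
# Hedged sketch: load_model and MODEL_PATH are placeholders. In a real job,
# load_model would be something like tf.saved_model.load(path).
MODEL_PATH = "/path/to/saved_model"

def load_model(path):
    # Stand-in "model" that just doubles its input, so the sketch runs as-is.
    return lambda x: x * 2

def map_fn(partition):
    # Load the model once per partition, then run inference record by record.
    model = load_model(MODEL_PATH)
    for record in partition:
        yield model(record)

# On a cluster this would be: predictions = rdd.mapPartitions(map_fn)
# Locally, a partition is just an iterator:
print(list(map_fn(iter([1, 2, 3]))))  # -> [2, 4, 6]
```

The key point is that the model is loaded once per partition rather than once per record, which amortizes the load cost across the whole partition.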

Note: there is also this example, which tries to emulate the Spark DataFrame API, but may be a little harder to follow how it works.

Finally, some folks wrap the TF model inside a Spark UDF, but I don't have an example of that here.
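One hedged sketch of that UDF-wrapping idea: score a whole batch at a time in a plain Python function, with the model load stubbed out so the example is self-contained. The names here (`predict_batch`, the stand-in model) are illustrative, not from any example in this repo; on Spark you would typically wrap such a function with `pyspark.sql.functions.pandas_udf` and apply it via `df.withColumn(...)`.

```python
# Hedged sketch of wrapping model inference in a UDF-style batch function.
# The model is lazily loaded on first use so each worker pays the load cost
# only once. The lambda is a placeholder for a real TF model.
_model = None

def predict_batch(batch):
    global _model
    if _model is None:
        # Placeholder for tf.saved_model.load(...); runs once per worker.
        _model = lambda x: x + 0.5
    return [_model(x) for x in batch]

print(predict_batch([1.0, 2.0]))  # -> [1.5, 2.5]
```

Batch-oriented UDFs are usually preferred over scalar ones for deep learning models, since they let the framework vectorize inference over many rows per call.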

@jiqiujia

@leewyang How can I cache my model in PySpark? I found that the model gets reloaded for every task. Here's a demonstration of how I predict over the whole dataset:

def _predict_dataset():
    def _input_fn():
        ...
    estimator = build_estimator(...)
    return estimator.predict(_input_fn)

data.mapPartitions(lambda it: _predict_dataset())

@jahidhasanlinix
Author

Can you please give me some code snippets or resources to read so I can understand how to load my trained models in Spark? I'm not sure how to do this part.

@jahidhasanlinix
Author

jahidhasanlinix commented Dec 25, 2021

@jiqiujia could you please help me with how I can load my trained model that is already saved in my project directory? Can you share the steps where you integrated your trained model into Spark? Thank you.

@jiqiujia

@jahidhasanlinix you could follow the examples in this repo: https://github.com/yahoo/TensorFlowOnSpark/tree/master/examples

@jahidhasanlinix
Author

@jiqiujia thank you. I'll check it.

@leewyang
Contributor

leewyang commented Jan 3, 2022

@jiqiujia assuming that your model won't change over the course of the job, you can just cache the model in the Python worker processes via a global variable. Just check if it's None, and if so, load the model from disk; otherwise use the cached model.
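A minimal sketch of that global-variable caching pattern, assuming the model does not change during the job. `load_model` and `MODEL_PATH` are illustrative placeholders, and `LOAD_COUNT` exists only to demonstrate that the load happens once per worker rather than once per task:

```python
# Sketch of per-worker model caching via a module-level global.
# load_model / MODEL_PATH are placeholders for a real loader and path.
MODEL_PATH = "/path/to/saved_model"
LOAD_COUNT = 0  # only here to show the model is loaded once, not per task

def load_model(path):
    global LOAD_COUNT
    LOAD_COUNT += 1
    # Stand-in for a real loader such as tf.saved_model.load(path).
    return lambda x: x * 10

_model = None  # lives for the lifetime of the Python worker process

def get_model():
    global _model
    if _model is None:          # first task on this worker: load from disk
        _model = load_model(MODEL_PATH)
    return _model               # later tasks reuse the cached model

def map_fn(partition):
    model = get_model()
    for record in partition:
        yield model(record)

# Two "tasks" on the same worker trigger only one model load:
list(map_fn(iter([1, 2])))
list(map_fn(iter([3, 4])))
print(LOAD_COUNT)  # -> 1
```

This works because Spark reuses Python worker processes across tasks by default (see `spark.python.worker.reuse`), so module-level state survives between tasks on the same worker.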

@jahidhasanlinix
Author

How can I load the model? I have a code base and a trained model saved as a .pt file. How can I load it into the cluster? Any help?

@leewyang
Contributor

leewyang commented Jan 3, 2022

@jahidhasanlinix Not quite sure what you're doing here... *.pt files are PyTorch models. Have you converted a TensorFlow model to PyTorch (or vice versa)?

@jahidhasanlinix
Author

jahidhasanlinix commented Jan 3, 2022

@leewyang https://github.com/hongzimao/decima-sim
Here is the source code I was trying to integrate into Spark; would you give me some instructions on how to do that? This repo should give you an idea of what I'm trying to say. Would you mind sharing your code so I can see how you integrated your TF code into Spark? (This code is in TensorFlow; there's also a PyTorch version, but understanding the TF one would help.) The problem is that I don't know how to integrate this repo's code base into Spark.

@leewyang
Contributor

leewyang commented Jan 4, 2022

@jahidhasanlinix Unfortunately, I think that code is beyond the scope of what TFoS is trying to do. Decima presumably integrates with (or replaces) the Spark scheduler itself, while TFoS is more about using Spark (and its default scheduler) to launch training/inference jobs on executors.

@jahidhasanlinix
Author

@leewyang thank you so much for your response. Is there any other way to integrate this? Can you help me with it?

3 participants