Tf-serving

Setup

  1. Install kind (Kubernetes in Docker). I am using kind here instead of minikube because it lets me install KServe as a standalone deployment.

  2. Install kubectl

Steps

You need Anaconda (Python) and TensorFlow installed.

  • python model.py

    Trains the model and saves it in SavedModel format for the TensorFlow Serving service (a sketch of this script follows the list).

  • docker build -t tfserving_classifier:v01 .
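
model.py is not reproduced here; below is a minimal sketch of what it might contain, assuming a small Keras classifier on Fashion MNIST (the dataset, architecture, and export path are illustrative; only the SavedModel export matters for TensorFlow Serving):

# model.py (sketch): train a small Keras classifier and export it in
# SavedModel format so TensorFlow Serving can load it (TF 2.x / Keras 2).
import os

import tensorflow as tf

(x_train, y_train), _ = tf.keras.datasets.fashion_mnist.load_data()
x_train = x_train / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5)

# TensorFlow Serving expects a numeric version directory under the model root,
# e.g. saved_model/<model_name>/1/
export_path = os.path.join("saved_model", "tfserving_classifier", "1")
model.save(export_path, save_format="tf")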

API Endpoints in TensorFlow Serving

  • REST is a communication “protocol” used by web applications. It defines a style for how clients communicate with web services: REST clients call the server using standard HTTP methods such as GET, POST, and DELETE, and request payloads are usually encoded as JSON.

  • gRPC, on the other hand, is a communication protocol originally developed at Google. Its standard data format is the protocol buffer. gRPC provides lower-latency communication and smaller payloads than REST, and is preferred when working with very large inputs during inference.

  • docker run -it --rm -p 8501:8501 tfserving_classifier:v01
  • python predict.py

    Sends a prediction request to the REST endpoint (see the sketch after this list):
    http://{HOST}:{PORT}/v1/models/{MODEL_NAME}:{VERB}

    To load the Docker image from your local machine into the cluster and deploy it:

  • kind load docker-image tfserving_classifier:v01
  • kubectl apply -f kubeconfig/deployment.yaml
  • kubectl apply -f kubeconfig/service.yaml
  • kubectl port-forward svc/tfserving-classifier 8080:80
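
predict.py is not reproduced here; this is a minimal sketch of the REST call it makes, assuming the model name tfserving_classifier and a 28x28 grayscale input (both assumptions; adjust to the real model). Use port 8501 against the Docker container, or 8080 through the port-forwarded Service:

# predict.py (sketch): send a request to the TensorFlow Serving REST endpoint
# http://{HOST}:{PORT}/v1/models/{MODEL_NAME}:predict.
import json

import numpy as np
import requests

HOST = "localhost"
PORT = 8501  # 8080 when going through the port-forwarded Kubernetes Service
MODEL_NAME = "tfserving_classifier"

url = f"http://{HOST}:{PORT}/v1/models/{MODEL_NAME}:predict"

# One dummy 28x28 image; replace with real test data.
instances = np.random.rand(1, 28, 28).tolist()
response = requests.post(url, data=json.dumps({"instances": instances}))
response.raise_for_status()
print(response.json()["predictions"])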

KServe

Setup

  1. Install KServe standalone (requires kind and kubectl)

  2. Run your first InferenceService

kubectl get svc istio-ingressgateway -n istio-system
kubectl port-forward -n istio-system svc/istio-ingressgateway 8080:80
INGRESS_HOST="localhost"
INGRESS_PORT="8080"
DOMAIN="example.com"
NAMESPACE="kserve-test"
SERVICE="sklearn-iris"

SERVICE_HOSTNAME="${SERVICE}.${NAMESPACE}.${DOMAIN}"

curl -v -H "Host: ${SERVICE_HOSTNAME}" http://${INGRESS_HOST}:${INGRESS_PORT}/v1/models/sklearn-iris:predict -d @./iris-input.json

Running a TensorFlow model in KServe

Download the model

wget https://github.com/alexeygrigorev/mlbookcamp-code/releases/download/chapter7-model/xception_v4_large_08_0.894.h5

Convert it to SavedModel format:

python convert.py
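
convert.py is not reproduced here; a minimal sketch, assuming it only needs to load the Keras .h5 file and re-export it as a SavedModel under a numbered version directory (the clothing-model/1 path matches the tar commands below):

# convert.py (sketch): convert the downloaded Keras .h5 model into
# SavedModel format under clothing-model/1 (a numeric version directory).
import tensorflow as tf

model = tf.keras.models.load_model("xception_v4_large_08_0.894.h5")
tf.saved_model.save(model, "clothing-model/1")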

Instead of using an S3 bucket or Google Cloud Storage, package the SavedModel as a local archive:

cd clothing-model/
tar -cvf artifacts.tar 1/
gzip < artifacts.tar > artifacts.tgz

Host the archive on your local machine, apply the InferenceService, and send a prediction request:

python -m http.server
kubectl apply -f tensorflow.yaml 
curl -v -H "Host: clothes.default.example.com" http://${INGRESS_HOST}:${INGRESS_PORT}/v1/models/clothes:predict -d $INPUT_PATH

Deploy a transformer with InferenceService

Create a custom image transformer (see the KServe documentation for a full example)

  1. Create a Python script with an ImageTransformer class that inherits from kserve.Model (a sketch follows this list).

  2. Build the transformer Docker image and push it to Docker Hub:

docker build -t nerdward/image-transformer:v02 .
docker build -t <hub-user>/<repo-name>[:<tag>] .

docker push nerdward/image-transformer:v02
docker push <hub-user>/<repo-name>:<tag>
  3. Create the InferenceService and apply it.

kubectl apply -f tensorflow.yaml

  4. Run a prediction.
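
A minimal sketch of what the transformer script from step 1 might look like, modeled on the image-transformer example in the KServe documentation. The method signatures, helper names, and the base64 input format are assumptions and may differ slightly between KServe releases:

# image_transformer.py (sketch): a KServe transformer that decodes
# base64-encoded images into tensors before they reach the predictor.
import argparse
import base64
import io

import numpy as np
from PIL import Image
import kserve


def image_transform(instance):
    # Decode one base64-encoded image into a (299, 299, 3) float list,
    # scaled to [-1, 1] as Xception-style models expect.
    img = Image.open(io.BytesIO(base64.b64decode(instance["image"]["b64"])))
    img = img.convert("RGB").resize((299, 299))
    return (np.array(img, dtype="float32") / 127.5 - 1.0).tolist()


class ImageTransformer(kserve.Model):
    def __init__(self, name, predictor_host):
        super().__init__(name)
        self.predictor_host = predictor_host  # preprocessed requests are forwarded here
        self.ready = True

    def preprocess(self, inputs, headers=None):
        return {"instances": [image_transform(i) for i in inputs["instances"]]}

    def postprocess(self, inputs, headers=None):
        return inputs


if __name__ == "__main__":
    # KServe injects --predictor_host and --model_name into the transformer
    # container; parse_known_args leaves KServe's own flags untouched.
    parser = argparse.ArgumentParser()
    parser.add_argument("--predictor_host", required=True)
    parser.add_argument("--model_name", default="clothes")
    args, _ = parser.parse_known_args()

    transformer = ImageTransformer(args.model_name, predictor_host=args.predictor_host)
    kserve.ModelServer().start([transformer])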

References and Important links

  1. How to Serve Machine Learning Models With TensorFlow Serving and Docker
  2. Machine Learning Bookcamp
  3. KServe official documentation

About

Hands-on labs on deploying machine learning models with tf-serving and KServe
