Batch Serving in Hopsworks

Hi there,

Do you have any feature available for batch serving of models, either in the cloud or on-prem? Could you point me to this in your documentation?

Thanks,
Ham

Hi Ham,

You can find a batch inference example here: https://github.com/logicalclocks/hops-examples/blob/release-1.3/notebooks/ml/Inference/Batch_Inference_Imagenet_Spark.ipynb. Let me know if this works for you.
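
For reference, the core pattern in that notebook is an ordinary Spark job: load a saved model, score a DataFrame, write the results out. Here is a minimal sketch of that pattern; the paths, the Parquet format, and the use of a Spark ML PipelineModel are illustrative assumptions, not details taken from the notebook:

```python
# Minimal Spark batch inference sketch. All paths and the model
# format are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.ml import PipelineModel

spark = SparkSession.builder.appName("batch-inference").getOrCreate()

# Load a previously trained and saved Spark ML pipeline (hypothetical path).
model = PipelineModel.load("hdfs:///Projects/myproject/Models/my_pipeline")

# Read the batch of records to score (hypothetical path and format).
batch_df = spark.read.parquet("hdfs:///Projects/myproject/data/to_score")

# Apply the model; Spark ML appends a "prediction" column.
predictions = model.transform(batch_df)

# Persist the scored batch (hypothetical output path).
predictions.write.mode("overwrite").parquet(
    "hdfs:///Projects/myproject/data/scored"
)
```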

/Davit

Hi Davit,

Thank you. That is a practical example of batch inference.

How about this feature in the “Serving” menu of Hopsworks? How can batch serving be monitored in Hopsworks?

In typical scenarios, the results of batch inference go back to the data's origin storage. Can we do any such integration through the UI?

Thanks in advance.

Regards,
Ham

Hi Ham. Batch applications that use models to make predictions are, for us, just batch applications. We do not have any special UI support for them. As you say, you can write their output wherever you want.
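
To make that concrete, writing the scored output back to the source system is just an ordinary Spark write. A minimal sketch, assuming the origin is reachable over JDBC and the scored batch was saved to Parquet as in the earlier example; the URL, table name, and credentials are placeholders, and any Spark-supported sink (Hive, S3, Kafka, ...) would work the same way:

```python
# Sketch of writing predictions back to the origin store over JDBC.
# Connection details are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("write-back").getOrCreate()

# Re-read the scored batch produced by the inference job (hypothetical path).
predictions = spark.read.parquet("hdfs:///Projects/myproject/data/scored")

(predictions
    .select("id", "prediction")  # keep only the key and the score
    .write
    .format("jdbc")
    .option("url", "jdbc:postgresql://db-host:5432/origin_db")
    .option("dbtable", "scored_records")
    .option("user", "batch_user")
    .option("password", "change_me")
    .mode("append")
    .save())
```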

If you want to build some UI support for your Spark batch applications in Hopsworks, you can log results to Elasticsearch and build a dashboard on top of them in Kibana. When you load the UI for a Spark application that has already run, the Kibana UI will be available. However, I don't think that would be a great solution; you are better off developing your own UI for visualizing the results of your Spark batch apps.
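
As a rough illustration of that Kibana route, you could index a small per-run summary into Elasticsearch and chart it from Kibana. A minimal sketch, assuming the elasticsearch-hadoop connector is on the Spark classpath; the node address and index name are placeholders, not a documented Hopsworks API:

```python
# Sketch of pushing a per-run summary to Elasticsearch for Kibana.
# Assumes the elasticsearch-hadoop connector is available.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("log-metrics").getOrCreate()

# Re-read the scored batch (hypothetical path).
predictions = spark.read.parquet("hdfs:///Projects/myproject/data/scored")

# Aggregate into a small summary that is cheap to index and chart.
summary = predictions.groupBy("prediction").count()

(summary
    .write
    .format("org.elasticsearch.spark.sql")
    .option("es.nodes", "elasticsearch-host")
    .option("es.port", "9200")
    .mode("append")
    .save("batch_inference_metrics"))  # target index (placeholder)
```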

Hi Jim,

Thanks for your response. My question was more about deploying a model for batch inference. It looks like I still have to write the batch inference code myself and take the traditional route to deploy it on the platform, with no options available from the UI.

Thanks and Regards,
Ham