I have a few custom-built Python models that I would like to package and deploy. An easy way is to pickle them and then load them into the services, but pickling comes with a lot of issues. I’m wondering if there is a way to package a model as an executable so that it works even without the source code (essentially as a black box). Does anyone have experience with this?
Just create an API endpoint and let others use it
Yeah, of course, that’s what I’m doing, but I still need to save the model somewhere and load it onto the API boxes
Yeah - I’ve been using pickle for now, but since the source code for the custom models is going to evolve over the long run, the best practice would be to store each model with its full environment. So I’m looking into how to do that most effectively now
Pickle is probably too weak for this. Try HDF5 datasets
I don’t want or need to save datasets; this is to save a model and infer from it in real time through a REST API
I said HDF5 datasets, not your ML dataset. You can save models: https://www.tinymind.com/learn/terms/hdf5 Try googling before you outright dismiss a suggestion.
Like the reply above said... a Docker image with your model, predict code, and all packages with their required versions in the build file. Bonus points for deploying it as a Flask endpoint.
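To make the suggestion concrete, here’s a minimal Dockerfile sketch along those lines. The file names (`model.pkl`, `predict.py`, `requirements.txt`) are placeholders, not something from the thread - adapt to whatever your project actually uses:

```dockerfile
FROM python:3.11-slim
WORKDIR /app

# Pin exact dependency versions in requirements.txt so the image
# reproduces the training environment.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Ship the serialized model together with the predict code.
COPY model.pkl predict.py ./

EXPOSE 5000
CMD ["python", "predict.py"]
```

Build and run with `docker build -t my-model .` and `docker run -p 5000:5000 my-model`; the image then carries the model, the code, and the frozen package versions as one deployable unit.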
Yeah, already using Flask for the REST API for the prediction service. I’m basically designing for a large number of small-ish models, so I don’t want to spin up each model as its own endpoint. My main goal here is to have a few API boxes behind a load balancer that can load up and maintain 100s of models.
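For that "few boxes, hundreds of models" design, one common pattern is to load models lazily on first request and keep only the hot ones in memory with LRU eviction. A stdlib-only sketch (the toy `load_fn` stands in for whatever actually deserializes a model from disk or object storage):

```python
from collections import OrderedDict

class ModelCache:
    """Keep at most max_size models in memory, evicting the
    least-recently-used one when the cap is exceeded."""

    def __init__(self, load_fn, max_size=100):
        self._load = load_fn          # callable: model_id -> model object
        self._max = max_size
        self._cache = OrderedDict()   # model_id -> model, oldest first

    def get(self, model_id):
        if model_id in self._cache:
            self._cache.move_to_end(model_id)   # mark as recently used
            return self._cache[model_id]
        model = self._load(model_id)            # lazy load on first request
        self._cache[model_id] = model
        if len(self._cache) > self._max:
            self._cache.popitem(last=False)     # evict least-recently-used
        return model

# Toy loader in place of real deserialization, just to show the behavior:
cache = ModelCache(load_fn=lambda mid: {"id": mid}, max_size=2)
cache.get("a")
cache.get("b")
cache.get("c")   # cache is full, so "a" gets evicted
```

Each API box behind the load balancer would hold one such cache, so any box can serve any model without every model living in memory at once.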
Use docker
Interesting choice. So you want to save the model as a Docker image?
I assume your model depends on a whole lot of other Python packages: scikit-learn, TF, etc. Some of those version quite badly, and a mismatch might not be immediately noticeable in your predictions
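One cheap guard against silent version drift is to snapshot the package versions at save time and compare them at load time. A stdlib-only sketch (the package names and versions below are made up for illustration; on a real serving box you'd build the `current` dict from `importlib.metadata.version`):

```python
import sys

def env_snapshot(pinned):
    """Record the Python version plus the exact package versions the
    model was trained with, e.g. pinned={"scikit-learn": "1.4.2"}.
    Store this dict alongside the pickled model."""
    return {"python": sys.version.split()[0], "packages": dict(pinned)}

def check_env(snapshot, current):
    """Compare the saved snapshot against the versions installed on the
    serving box; return a list of human-readable mismatches."""
    problems = []
    for pkg, want in snapshot["packages"].items():
        have = current.get(pkg)
        if have != want:
            problems.append(f"{pkg}: trained with {want}, serving has {have}")
    return problems

# Hypothetical versions, hard-coded to show a mismatch being caught:
snap = env_snapshot({"scikit-learn": "1.4.2", "numpy": "1.26.4"})
issues = check_env(snap, {"scikit-learn": "1.4.2", "numpy": "2.0.0"})
```

Refusing to load (or at least logging loudly) when `issues` is non-empty turns a subtle prediction-quality bug into an obvious deploy-time error.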