Tech Industry Jan 2, 2020
GitHub isev0qba

Learn how to deploy ML models as production-quality APIs

So, I've hit a wall in terms of finding organized, end-to-end study material online on this topic. What I'm looking to learn is: once I have built a model in Python, how do I go about "deploying" it as a highly available, fast, secure service on AWS (i.e., an industry-standard API)? How would I go about learning this? PS: I am not looking to learn SageMaker. I am looking to build ML APIs the same way that production-quality APIs are built and architected.

Algorithmia Lhahd Jan 2, 2020

Check out Algorithmia

GitHub isev0qba OP Jan 2, 2020

It looks good, but it's not a transferable skill. Plus, I'm not in a position at work to ask for enterprise tools like this one :(

Juvo OmgDie Jan 2, 2020

Just build a regular API and use it to make predictions from your model
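That advice can be sketched very simply: wrap the model in an HTTP endpoint that accepts features as JSON and returns a prediction. This is a minimal stdlib-only sketch, not a production setup — `predict` is a hypothetical stand-in for a real trained model (which you'd typically load once at startup, e.g. from a pickled scikit-learn artifact), and in practice you'd use a framework like Flask or FastAPI behind a proper WSGI/ASGI server.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(features):
    # Hypothetical stand-in for a trained model:
    # a dummy linear model, i.e. a weighted sum of the inputs.
    weights = [0.5, -0.2, 1.3]
    return sum(w * x for w, x in zip(weights, features))

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body, run the model, return JSON.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps({"prediction": predict(payload["features"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

def run(port=8000):
    # Blocks forever serving POST /; call run() to start the service.
    HTTPServer(("0.0.0.0", port), PredictHandler).serve_forever()
```

A client would then `POST {"features": [1.0, 1.0, 1.0]}` and get back a JSON prediction. Everything after this (TLS, auth, autoscaling, monitoring) is the same work as for any other production API.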

GitHub isev0qba OP Jan 2, 2020

That is a good idea. What would be a good place to start learning how to build good end-to-end architecture (e.g., things like queuing and auto-scaling)?

Microsoft travasty Jan 2, 2020

How did you get into Github without knowing system design?

Flexport fr8five Jan 2, 2020

What interaction model are we talking about here? If there's no async behavior, just build a regular service on EC2/ECS and do model swapping via something like Supervisord, which takes care of hot reloading for you. If it requires async jobs, use a message queue + cache — you can get near-real-time behavior from that. Scales horizontally pretty well too.

Hulu huluswdev Jan 2, 2020

^^ This guy's answer. When things start to grow and require more back-end CPU time, I'd recommend a framework I've been using on AWS Kubernetes (EKS) called Argo. It can help orchestrate pretty complex ML work and integrates well with SQS, etc. Not straight out of the box, but fairly close. K8s is the new thing — you can build your service and workers on the same cluster and manage it all in the same "workspace" rather than on separate infra. Def prod quality/readiness.