I switched from DS to MLE

Nov 22, 2021 11 Comments

Previously, my job as a DS was very easy and comfortable: just use a Jupyter notebook, generate some boilerplate analysis, sometimes build a simple ML model. One time I clicked around in Azure to deploy a model.
One month in as an MLE, the transition has been rough. At my previous job, I rarely used git, I had no sense of system design, and I had zero clue about writing production code or unit tests. I never reviewed a PR or contributed to a large codebase. I didn't know anything about ops or DRI (oncall). I mean, I have a CS degree, but these things weren't taught in the classroom.
There is so much to learn, and I like the product I work on, but sometimes I feel as if I've bitten off more than I can chew. I'm working many hours just to keep my head above water. Many of my colleagues have PhDs, and they seem really solid on the engineering side as well. I feel I can learn a lot from them.

One thing I can really appreciate now is that the ML used in production on my team is much more complex than the sklearn one-liners I was writing. There are entire research teams in the US, India, and China dedicated to building huge state-of-the-art models. It's really cool, and at the same time daunting, to be working alongside these geniuses.

TC 210k.

comments

TOP 11 Comments
  • Facebook / Eng
    MacauMouse
    “ At my previous job, I rarely used git, I had no sense of system design, I had zero clue about writing production code or unit tests. I never reviewed a PR or contributed to a large codebase. I didn't know anything about ops or DRI (oncall).”

    What kinds of unit tests do you write for ML models?
    Nov 24, 2021 3
    • Facebook / Eng
      MacauMouse
      Dev versus prod should be an online A/B test, right? What is payload validation?
      Also, won't score/feature distribution shift only show up post-launch?
      Nov 24, 2021
    • Intuit
      QWERTYSUCK
      Not an A/B test; I mean checking that implementations are consistent. More like making sure the DS-to-MLE translation is consistent and there are no edge cases introduced by productionizing. Payload validation is tied to the above: making sure that when we query the prod env we get exactly the same result as the DS does from notebooks or local envs. Feature/score distributions will absolutely shift, but we expect the shift to be minimal. You can use the population stability index to measure the shift. It's trickier when you have a cold start problem, but that can be a can of worms and you just try your best.
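      For concreteness, here's a minimal sketch of a population stability index (PSI) check as described above. The baseline-quantile bucketing and the common 0.1/0.25 rule-of-thumb thresholds are my assumptions, not something specified in this thread:

```python
# Hypothetical PSI monitoring sketch: compare a baseline (e.g. training-time)
# score distribution against live scores from prod.
import numpy as np

def psi(expected, actual, n_bins=10):
    """PSI between two score samples, bucketed on the baseline's quantiles."""
    edges = np.quantile(expected, np.linspace(0, 1, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range live scores
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the fractions to avoid log(0) / division by zero in empty buckets
    e_frac = np.clip(e_frac, 1e-6, None)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)
same = rng.normal(0.0, 1.0, 10_000)      # same distribution -> PSI near 0
shifted = rng.normal(1.0, 1.0, 10_000)   # shifted mean -> large PSI

assert psi(baseline, same) < 0.1      # common "no significant shift" threshold
assert psi(baseline, shifted) > 0.25  # common "significant shift" threshold
```

      A common convention is to treat PSI below 0.1 as stable and above 0.25 as a significant shift worth investigating, though the thresholds are a heuristic, not a standard.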
      Nov 24, 2021
  • Ford
    SocialDuck
    Can you talk a bit more about this new role? I'm in similar shoes right now and really interested in something like this. I've had some experience in end-to-end ML system design, but I didn't know Microsoft did similar projects.
    Nov 22, 2021 1
    • OP
      Yes, sure. There are many teams at MS doing production ML. Examples: all the AI-related Azure services (AzureML, Azure Cognitive Services, AutoML, etc.), Bing Search/Ads, and Office (Autocomplete, Designer, etc.). The list is practically endless, because every product is now building AI/ML-powered features.
      Nov 23, 2021
  • Intuit
    QWERTYSUCK
    Hold up. You moved from DS to MLE without having any of the required skills? Was this an internal transfer?
    Nov 22, 2021 4