MLOps
Equal Experts Playbooks

Practices

  • Collect performance data
  • Ways of deploying your model
  • How often do you deploy a model?
  • Keep a versioned model repository
  • Measure and proactively evaluate quality of training data
  • Testing through the ML pipeline
  • Business impact is more than just accuracy - understand your baseline
  • Regularly monitor your model in production
  • Monitor data quality
  • Automate the model lifecycle
  • Create a walking skeleton/steel thread
  • Appropriately optimise models for inference

Last updated 3 years ago