Transition to operations

The transition involves serialising and containerising the models trained in the development phase and preparing them for deployment.
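
As a rough illustration, the sketch below serialises a fitted model with joblib so the resulting artifact can be copied into a container image. The scikit-learn model, artifact path, and feature shape are placeholder assumptions for the example, not details from the original pipeline.

```python
# Minimal serialisation sketch (assumptions: scikit-learn model, joblib format,
# artifact path "artifacts/model-v1.joblib").
from pathlib import Path

import joblib
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Stand-in for the model produced during the development phase.
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
model = RandomForestClassifier(n_estimators=100, random_state=42).fit(X, y)

# Serialise the fitted model to a versioned artifact that a container image
# can copy in at build time.
artifact_dir = Path("artifacts")
artifact_dir.mkdir(exist_ok=True)
joblib.dump(model, artifact_dir / "model-v1.joblib")

# Later, inside the container, the artifact is loaded back for inference.
restored = joblib.load(artifact_dir / "model-v1.joblib")
print(restored.predict(X[:5]))
```

Containerisation then typically amounts to a Dockerfile that installs the runtime dependencies and copies this artifact alongside the serving code.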

The models are served either as APIs or as standalone artifacts for batch inference. Once a model is packaged and ready to be served, it is deployed to the production environment through streamlined CI/CD pipelines after passing quality assurance checks.
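
The snippet below is a minimal sketch of the API-serving option, assuming the joblib artifact from the previous example and FastAPI as the serving framework; the endpoint name and payload schema are illustrative, not prescribed by the original pipeline.

```python
# Minimal real-time serving sketch (assumptions: FastAPI, joblib artifact at
# "artifacts/model-v1.joblib", a /predict endpoint taking rows of floats).
from typing import List

import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="model-service")
model = joblib.load("artifacts/model-v1.joblib")  # loaded once at startup


class PredictRequest(BaseModel):
    # One row of feature values per instance to score.
    instances: List[List[float]]


class PredictResponse(BaseModel):
    predictions: List[int]


@app.post("/predict", response_model=PredictResponse)
def predict(request: PredictRequest) -> PredictResponse:
    # Real-time inference: score the incoming instances with the packaged model.
    preds = model.predict(request.instances)
    return PredictResponse(predictions=[int(p) for p in preds])
```

A CI/CD pipeline would typically build this service into a container image, run the quality assurance checks (for example, a smoke test against the /predict route), and promote the image to production only when they pass.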

By the end of this phase, the packaged models are deployed and served in the production environment, performing inference in real time.