
Do custom inference model capabilities differ from vanilla?

Hi, I have been looking at custom model deployments and wanted to know whether their MLOps capabilities differ from standard deployments that use DataRobot's built-in models. Specifically, can custom models use Continuous AI features such as automatic retraining?


12 Replies


Thank you @MLOPS DFW for the answer; I didn't realise how new the system was to DataRobot.

Hi Ira, I am from the former Algorithmia. We were acquired by DataRobot about two months ago. The custom inference capability brought by Algorithmia differs quite a bit from the vanilla offering.

Model catalog, versioning, pipelining, governance, languages, dependencies, compute, orchestration, hardware, and monitoring are features provided across models built in R, Python, Ruby, Rust... and in DataRobot, H2O, Dataiku, AWS SageMaker, etc., using autoscaling capabilities running Kubernetes on Docker, deployed on premises or in the cloud.
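For context, a custom inference model is essentially a code directory with hooks such as `load_model` and `score`. The real hooks receive and return pandas DataFrames; the stand-in below keeps the hook names but uses plain dicts and lists so it runs with the standard library only. It is a sketch of the shape of the interface, not a drop-in `custom.py`.

```python
# Sketch of custom inference model hooks. Real hooks work with pandas
# DataFrames; this simplified version uses plain Python structures.

def load_model(code_dir):
    # Normally you'd load a pickled artifact from code_dir (e.g. model.pkl).
    # Here we return a toy "model": the linear function y = 2*x + 1.
    return {"coef": 2.0, "intercept": 1.0}

def score(data, model, **kwargs):
    # `data` stands in for the scoring DataFrame: one feature "x" per row.
    # Returns one "Predictions" value per input row, as a regression model would.
    return [
        {"Predictions": model["coef"] * row["x"] + model["intercept"]}
        for row in data
    ]

model = load_model(".")
preds = score([{"x": 1.0}, {"x": 3.0}], model)
```

The point of the hook structure is that DataRobot can call your code the same way regardless of which framework or language produced the artifact, which is what enables the monitoring and governance features listed above.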

Hey, thanks, I had not seen the demo!

Hey @IraWatt I'm sure you've looked here already but just in case: See the Continuous AI (public preview) info/demo video.

Great, I only asked because the docs say it can be enabled, and I wanted to know its current state.


Thanks, so I assume its final functionality hasn't been settled on?

I just checked it out. Running fit with Python scikit-learn to train models inside of DataRobot is currently in alpha.

One solution is to register the model in our Model Registry and then have it retrained, which is what's shown in the image above. Still, I will reach out to our MLOps experts and get back to you on Monday.
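To make the register-then-retrain idea concrete, here is a minimal standard-library simulation of the flow. All names here (`ModelRegistry`, `register`, `retrain`, `latest`) are illustrative, not the DataRobot client API; the real workflow goes through DataRobot's Model Registry and MLOps tooling.

```python
# Toy simulation of "register the model, then retrain and register a new
# version". Names are illustrative, not the actual DataRobot API.

class ModelRegistry:
    def __init__(self):
        self.versions = []  # each entry: (version_number, model)

    def register(self, model):
        version = len(self.versions) + 1
        self.versions.append((version, model))
        return version

    def latest(self):
        return self.versions[-1]

def retrain(old_model, new_data):
    # Stand-in for a real training run: refit a mean predictor on new data.
    return {"mean": sum(new_data) / len(new_data)}

registry = ModelRegistry()
v1 = registry.register({"mean": 10.0})                      # initial model
v2 = registry.register(retrain(registry.latest(), [4.0, 6.0]))  # retrained version

version, champion = registry.latest()
```

The design point is that retraining produces a new registered version rather than mutating the deployed model in place, so the deployment can be switched to the new version (and rolled back) deliberately.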