I am following the directions in the "Evaluate a Regression Model" lab and am seeing a mismatch between the variables in my model deployment and the variables in the What-if application I created from it. The application uses the full set of variables, including `ConvertedComp`, which has target leakage, whereas the deployment (as I intended) uses a shorter list of variables without `ConvertedComp`. Any help debugging this discrepancy would be appreciated!
Hi @steveb, can you confirm that your What-if application is accessing the deployment whose model excludes the `ConvertedComp` feature?
AI Apps are tied to a specific deployment, so if I had to guess, you might have multiple deployments from the modeling project, and your What-if app is pointing at a different deployment than you intended.
P.S. The lab currently references our older, deprecated version of the app builder; we need to update it for the new version.
Thanks, James. I deleted all deployments before creating this one, and I confirmed both that the app is linked to the right deployment and that the deployment is linked to the right model. But the variables in the application still don't match the variables in the model.
I am also getting 500 errors when I follow the directions, but the application shows up in the applications section anyway (with the wrong variables). Does that help?
If you are using the now-deprecated version of AI Applications, it's possible that you might see errors and unexpected behavior.
Which version of AI Apps/Applications are you using? And are you working with the managed AI cloud version of DataRobot (app.datarobot.com) or the trial version (app2.datarobot.com)?