I realize in writing this that it is really several questions, but they are related, so I will make it one post.
Everything in this post is asking about the Python API to DataRobot.
1. Can I get DataRobot to ignore columns in the provided data when training the model?
The answer seems to be feature lists, but I am a bit fuzzy on how to use them. If I start a project, I have to give a target, but oddly I cannot give a feature list. The only way I know to hand a feature list to a project is set_target, which complains if I specify the same target that I gave when creating the project. Neither allows me to specify a blank target.
Addendum: part of the answer seems to be: create a project, which does not require a target; set_target, which requires a target and accepts a feature list; and start a project, which requires a target but will accept the previously supplied target specified again. It feels as though you can specify multiple targets, and when you start a project it will create it if it does not exist. This was probably intended as a good thing. [Later] This does not entirely work, as it appears to create two projects when I do it. Trying to create a project with autopilot turned off is part of the issue.
2. Can I get DataRobot to not ignore a column, even if it thinks it is target leakage?
3. Can I get DataRobot to include the independent variables in the output prediction table, so that I do not have to stitch them back together again at the risk of introducing error, or at least doubt, into the situation?
For a more concise response to (1) and (2):
I would caveat your second question by saying that if you want to force the feature to be considered at the algorithm-level (e.g. you don't want the feature to be lasso-ed away), I am not aware of a way to do so. If that is the goal, you're probably better off blending with a model built with only that feature.
For (3), a slightly cumbersome option perhaps: download the JAR Scoring Code, which has a passthrough_columns parameter. Write your table to a temporary CSV, call Java on the command line, and get back your original columns plus predictions in the resulting CSV output. Not the best of options, I suppose, but it works and avoids cluttering the server with prediction datasets.
My overall conclusion, fwiw, is that it is not worth it. I have converted the code to condition the data so that it contains only the target and the columns intended to be used in training. And I add a row number to a static copy of the table, so that I can join on that column with the predictions returned by DataRobot.
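The row-number join I settled on is trivial but worth spelling out. Here is a minimal self-contained sketch in plain Python; the column names and prediction values are made up for illustration, standing in for whatever DataRobot returns:

```python
# A static copy of the original table, with a row_id column added
# before the feature columns are sent off for scoring.
original = [
    {'row_id': 0, 'Type': 'h', 'Distance': 2.5, 'Price': 1480000},
    {'row_id': 1, 'Type': 'u', 'Distance': 4.0, 'Price': 650000},
]

# Hypothetical predictions as they might come back, in arbitrary
# order, carrying only the row_id and the predicted value.
predictions = [
    {'row_id': 1, 'prediction': 640000.0},
    {'row_id': 0, 'prediction': 1500000.0},
]

# Join predictions back onto the static copy via row_id, so the
# independent variables and each prediction line up unambiguously.
pred_by_id = {p['row_id']: p['prediction'] for p in predictions}
joined = [dict(row, prediction=pred_by_id[row['row_id']]) for row in original]
```

The point of the static copy is that the join key is created once, before anything leaves your hands, so there is no doubt about which prediction belongs to which row.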
Not any time soon, as we are trying to get customers to go through the deployment section and use the BatchPrediction capability, as @IraWatt mentioned.
Glad you found some of the information useful. There are a few good articles on the community on Batch Prediction, and also on uploading actual results to measure deployment accuracy; both are worth a look if you haven't seen them. Also, just to let you know, the DR community helpfully lets you accept multiple answers. I think @Eu Jin's answer was more complete than mine, so feel free to tick it as well 😄.
@Bruce Great point, I didn't think of that! Setting autopilot_on to false would allow you to set up a feature list like I did above and then begin modelling.
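That flow might look roughly like this. This is an untested sketch, assuming a configured datarobot client; the file name and feature names are placeholders, not anything from your project:

```python
import datarobot as dr

# Untested sketch: start a project with the target set but
# autopilot_on=False, so no modelling kicks off until a
# feature list has been attached.
project = dr.Project.start(
    'training_data.csv',          # placeholder training file
    target='Price',
    project_name='Housing (manual)',
    autopilot_on=False,
)

# Build the restricted feature list first...
flist = project.create_featurelist(
    'manual features',
    ['Type', 'Distance', 'Bedroom2', 'Bathroom', 'Car'],
)

# ...then begin modelling on that list explicitly.
project.start_autopilot(flist.id)
```

This avoids the two-projects problem, because only one create call ever happens; set_target is done for you by Project.start, and modelling is deferred until you ask for it.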
Thanks @IraWatt -- that gave me a collection of ideas. I don't know yet what roadblocks I will run into with batch predictions, but your suggestions have given me a path to explore.
First of all thanks @IraWatt for the quick response to the questions! Looks like you've covered the feature list question and the ignore target leakage which is awesome! I'll add another variant to creating the feature list:
```python
featurelist = ['Type', 'Price', 'Distance', 'Bedroom2', 'Bathroom', 'Car']
featurelist = project.create_featurelist('EJL features', list(featurelist))
project.set_target(
    target='Price',
    featurelist_id=featurelist.id,
    mode=dr.AUTOPILOT_MODE.QUICK,
    worker_count=-1,
)
```
On the last one @Bruce, you can get DataRobot to pass back all the independent features (or even features that are not used at all), but only if you have deployed the model in MLOps. Currently the only way to do it via the modelling workers is through the GUI; there is no support for it in the API yet. Here's a very similar question that was asked a few months back here
I'm not sure how you are doing your predictions, but the BatchPredictionJob function has a 'passthrough_columns' parameter which may be helpful. I have not used this parameter myself, so I would give it a test first.
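For reference, the call shape would be roughly as below. This is an untested sketch against a deployed model; the deployment ID, file names, and column names are all placeholders:

```python
import datarobot as dr

# Untested sketch: score a local CSV against a deployment and ask
# for selected input columns to be echoed into the output alongside
# the predictions.
job = dr.BatchPredictionJob.score(
    deployment='YOUR_DEPLOYMENT_ID',    # placeholder
    intake_settings={'type': 'localFile', 'file': 'to_score.csv'},
    output_settings={'type': 'localFile', 'path': 'scored.csv'},
    passthrough_columns=['Type', 'Distance', 'Bedroom2'],
)
job.wait_for_completion()
```

If this behaves as documented, scored.csv would contain the named input columns plus the prediction columns, which would remove the need to stitch anything back together by hand.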