What is the prediction and its strength

I asked this recently on the other forum, but maybe this is the more appropriate place.

 

A classification prediction comes with what looks like a number between 0 and 1, call it the certainty, and also a collection of feature-strength-value triples. 

 

But, what exactly is strength? 

 

If one is computing a continuous variable from continuous variables, then this could be the partial derivative (a marginal statistic). And since the certainty is a continuous variable between 0 and 1, that would be a valid measure. But is that what it is? At the very least it should be scaled by some measure of the spread of the input variables. And what if an input variable is an integer, or even categorical?
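To make that scaling idea concrete, here is a minimal sketch of the scaled-partial-derivative reading of "strength". This is only an illustration of the question being asked, not how any particular product actually computes it; `certainty` is a hypothetical function returning the model's 0-1 score for one row, and `X_train` stands in for whatever sample is used to estimate each feature's spread.

```python
import numpy as np

def scaled_sensitivity(certainty, x, X_train, eps=1e-3):
    """Estimate a per-feature 'strength' as a finite-difference partial
    derivative of the certainty, rescaled by the feature's standard
    deviation so the numbers are comparable across features."""
    x = np.asarray(x, dtype=float)
    base = certainty(x)
    strengths = np.zeros_like(x)
    for j in range(x.size):
        spread = X_train[:, j].std()
        step = eps * (spread if spread > 0 else 1.0)
        bumped = x.copy()
        bumped[j] += step
        # finite-difference estimate of d(certainty)/dx_j, times the spread
        strengths[j] = (certainty(bumped) - base) / step * spread
    return strengths
```

Note that this only makes sense for numeric features; for integer or categorical inputs the "derivative" would have to be replaced by something like a swap-the-category what-if, which is part of what the question is probing.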

 

Can someone clarify this for me?

11 Replies

Thanks for accepting my solution, Bruce.

I think it's important to keep in mind that choosing a model explanation method can be quite subjective. I cannot speak for DataRobot as to how they came to decide on their model explanation offerings, but here are some of my personal thoughts: 

  1. I think in general it is useful to think of any model explanation method as depending on surrogate model(s). That creates risks for the fidelity of your explanations, as your explanation now depends on two or more models. From what I understand, LIME depends on multiple surrogate models, whereas XEMP may be easier to understand as a summary from a what-if analysis of sorts (a minimal sketch of the surrogate idea follows this list). If you need to precisely describe how your model behaves at a locality, and depending on multiple models is less of a concern, LIME would be more appealing.
  2. "it suffers from testing the model only where the original data exists. Thus, being inappropriately kind to the model.": I think this critique cuts both ways: one could also prefer XEMP to LIME because it depends only on realistic data samples.

@phi 

 

It is interesting to chat with you. 

Unless you fundamentally support Xemp against Lime, we may well be in basic agreement. My own view is that they are simply instances of the same general approach.

Although - I don't feel that Xemp applies a what-if analysis at all. It only uses data that has already been processed, while Lime thinks up new scenarios and asks how the model would behave under those conditions. That sounds like Lime is the one doing the what-if analysis.

The thrust of my point was to counter the idea that Xemp somehow wins hands-down against Lime. In fact, I see the two as essentially the same approach, differing only in how they select the data used to probe the model.
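To illustrate what "differing only in how they select the data used to probe the model" could mean in practice, here is a hedged sketch of the two probe-selection strategies. Either output could be scored by the model and fed into the same surrogate fit sketched after the list above; this is my own caricature of the two methods, not how either tool is actually implemented.

```python
import numpy as np

def synthetic_probes(x, n=500, scale=1.0, seed=0):
    """Lime-style probing: invent new points by perturbing x with
    Gaussian noise (the 'thinks up new scenarios' step)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    return x + rng.normal(scale=scale, size=(n, x.size))

def resampled_probes(x, X_train, n=500):
    """Xemp-style probing, as discussed in this thread: reuse rows that
    were actually observed, here the n training rows nearest to x."""
    x = np.asarray(x, dtype=float)
    dist = np.linalg.norm(X_train - x, axis=1)
    return X_train[np.argsort(dist)[:n]]

# Everything downstream (scoring the probes, fitting a local surrogate,
# reading off feature effects) can be identical; only this step differs.
```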

In my official work I am now not sticking to one approach or the other, but have explicitly chosen to see the whole thing in a more general light, which I am calling Limesque.

I do not think that selecting data from the actual original sample used to train the model fundamentally uses more realistic data. All samples are biased. Perhaps one could use further sampled data, and perhaps that would be more justified. And if you have an idea of the distribution of the data, you could generate potentially realistic data to test the model on. I don't feel that one should specialize to a normal distribution - which Lime does. And I don't feel that one should specialize to the already-sampled data - which Xemp does.

We are perhaps in some agreement here: when you say that the critique cuts both ways, that was precisely the point I was making. The idea that Xemp uses more realistic data is the standard justification given by Xemp proponents, so I gave the counter-argument.

My own personal interest has moved toward the idea that an explanation is a theory, and that in a very real sense the large and supposedly more accurate numerical and combinatorial models with thousands of terms should not be considered fundamentally better theories. By the very fact of being large and complicated, they already fail. And they typically need recalibrating, which emphasizes that failure.

The real Data Science task seems to cut very deeply into the practice of science itself. It is not something that is solved by throwing computational grunt at it the way we are doing today.

In my opinion, an explanation of a model should fundamentally come from the internals of the model. Models should be built from the ground up to produce explanations. After-market bolt-ons like Xemp and all Limesque approaches have a deep problem there.

Ultimately, an explanation should involve the ability to reverse the decision process: how can I change the data, perhaps to something that has not been seen before, so that I can change the outcome? An explanation that does not involve control is a pretty poor explanation.
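As a toy illustration of what an explanation-as-control could look like, here is a minimal greedy counterfactual search. It is purely a sketch under my own assumptions: `certainty`, the step size, and the stopping rule are all invented for the example, and this is not the method of any particular tool.

```python
import numpy as np

def greedy_counterfactual(certainty, x, X_train, threshold=0.5, max_steps=50):
    """Toy counterfactual search: repeatedly take the single-feature step
    (sized by that feature's spread) that pushes the certainty hardest
    toward the other side of the decision threshold."""
    x = np.asarray(x, dtype=float).copy()
    started_positive = certainty(x) >= threshold
    spreads = X_train.std(axis=0)
    for _ in range(max_steps):
        if (certainty(x) >= threshold) != started_positive:
            return x                                   # decision flipped
        candidates = []
        for j in range(x.size):
            step = 0.25 * (spreads[j] if spreads[j] > 0 else 1.0)
            for direction in (-1.0, 1.0):
                trial = x.copy()
                trial[j] += direction * step
                candidates.append((certainty(trial), trial))
        pick = min if started_positive else max        # drive certainty the other way
        x = pick(candidates, key=lambda c: c[0])[1]
    return None                                        # gave up within the budget
```

Even a crude search like this makes the point above concrete: the returned point tells you which edits to the data would change the outcome, which is the kind of control a bolt-on feature-strength list does not give you.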

I suspect that a good answer to that only comes from an examination of the internals and probably requires the machine learning fitting method to have been designed from the ground up to admit this option. Neither Xemp nor Lime gets anywhere near doing this in their vanilla forms.

 

 

 
