
What is the prediction and its strength


I asked this recently on the other forum, but maybe this is the more appropriate place.

 

A classification prediction comes with what looks like a number between 0 and 1, call it the certainty, and also a collection of feature-strength-value triples. 

 

But, what exactly is strength? 

 

If one is computing a continuous variable from continuous variables, then this could be the partial derivative (a marginal statistic). And since the certainty is a continuous variable between 0 and 1, this is a valid measure. But is that what it is? At the very least it should be scaled by some measure of spread of the input variables. And what if the input variable is an integer, or even categorical?
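
To make the question concrete, here is the kind of quantity I have in mind for a continuous feature - a rough sketch only, where `predict` stands for whatever scoring function the model exposes:

```python
import numpy as np

def scaled_marginal_effect(predict, X, j, eps=1e-3):
    """Finite-difference estimate of d(prediction)/d(feature j),
    scaled by the feature's standard deviation so the result is
    comparable across features measured in different units."""
    X_hi, X_lo = X.copy(), X.copy()
    X_hi[:, j] += eps
    X_lo[:, j] -= eps
    derivative = (predict(X_hi) - predict(X_lo)) / (2 * eps)
    return derivative * X[:, j].std()
```

But this only makes sense for a continuous input, which is exactly my worry about integer and categorical features.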

 

Can someone clarify this for me?


11 Replies

Hi Bruce, in a classification problem the output values are propensity scores, which can then be converted to discrete predicted values by applying a threshold (or thresholds). A propensity score is not really the certainty, and I often caution folks not to (necessarily) think of it as a true real-world probability either, unless the model is very accurate and very well calibrated. It can be more useful to think of propensity scores as relative, since many classification use cases tend to end up being a ranking exercise - for example, ordering by descending propensity score to understand the highest likelihood or risk among the individuals that were scored.
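
As a rough illustration of the thresholding-versus-ranking view (the score values below are made up):

```python
import numpy as np

# Hypothetical propensity scores for five scored individuals
scores = np.array([0.82, 0.15, 0.47, 0.91, 0.33])

# Convert to discrete predicted labels by applying a threshold
labels = (scores >= 0.5).astype(int)   # -> [1, 0, 0, 1, 0]

# Or treat the scores as relative and rank individuals by descending score
ranking = np.argsort(-scores)          # -> [3, 0, 2, 4, 1] (highest risk first)
```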

Re: your question on feature-strength-value triples, I understand this as referring to the prediction explanations which can be provided alongside the predictions. For models which support it, these are derived from SHAP values (SHapley Additive exPlanations), and for models which don't, from XEMP (eXemplar-based Explanations of Model Predictions). Prediction Explanations are documented here:

https://docs.datarobot.com/en/docs/modeling/analyze-models/understand/pred-explain/index.html

To answer your question, integer and categorical features are catered for - the high-level interpretation is:

These are the top feature-values (say the top 3, or whatever was specified), their direction (positive or negative), and a simple granular representation of their magnitude, which contributed to the propensity score for this individual. So the 'strength' is the relative marginal influence on the predicted outcome, according to the feature value's numeric SHAP score for this individual. The SHAP values are ordered by descending magnitude and the top X are shown - and these will be different per row/individual.
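
As a rough sketch of that 'top X by magnitude' idea in plain numpy (the SHAP values and feature names below are invented; in practice they come from the platform or a library such as shap):

```python
import numpy as np

feature_names = ["age", "income", "tenure", "region", "visits"]
# Hypothetical SHAP values for one scored row (one contribution per feature)
shap_row = np.array([0.12, -0.40, 0.03, 0.25, -0.01])

# Order features by descending absolute contribution and keep the top 3
top = np.argsort(-np.abs(shap_row))[:3]
for j in top:
    sign = "+" if shap_row[j] > 0 else "-"
    print(f"{feature_names[j]}: {sign} (magnitude {abs(shap_row[j]):.2f})")
# The ordering, and therefore the explanations shown, differ per row.
```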

This link to the docs goes into some detail on SHAP:

https://docs.datarobot.com/en/docs/modeling/reference/model-detail/shap.html

This general reference may also be useful:

https://shap.readthedocs.io/en/latest/example_notebooks/overviews/An%20introduction%20to%20explainab...

 

Hope this helps.

 

Eu Jin
Data Scientist

hey @Bruce  

 

This one is a bit beyond my depth, but I'm going to leverage our global team of experts to come back to you on this answer. Won't be long... stay tuned!

 

Eu Jin

@TravisB Thanks for the links that I have yet to digest. 

 

I picked the word "certainty" because it has no common technical meaning - unlike probability or likelihood. However, you say propensity. What is intended by the use of that term? I found this link fairly quickly, Propensity Scores: A Primer - KDnuggets, which spoke of it being the result of a broken or incomplete experiment. But is this the same thing to which you refer? If you have 5 classes, should the 5 propensity scores add to exactly 1? Or is that not a thing? Does propensity have any intuitive meaning you can hang a hat on, or is it a mostly meaningless number regarding the way the model has approximated the function - sort of like using logistic regression on a binary step function?

 


@Bruce, it may be model dependent, but for multiclass classification you would typically have a softmax function, in which case the probabilities/propensities of all the classes add up to 1.
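
For reference, softmax maps an arbitrary vector of class scores onto positive values that sum to 1 - a minimal illustration:

```python
import numpy as np

def softmax(z):
    """Map raw class scores to probabilities that sum to 1."""
    e = np.exp(z - z.max())   # subtract the max for numerical stability
    return e / e.sum()

p = softmax(np.array([2.0, 1.0, 0.1]))
print(p)         # approximately [0.659, 0.242, 0.099]
print(p.sum())   # 1.0
```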

@Bruce Cool discussion. I usually associate 'certainty' with determinism and 'uncertainty' with stochastic/probabilistic outcomes. I believe propensity scores approach true probability when models approximate the function in question very well (highly accurate and well calibrated), but no, I don't believe algorithms for multinomial classification guarantee that class propensities sum to 1.

 

Great point re: softmax @IraWatt 

@TravisB Very interesting about propensity, which I was not aware of and will have to read up on - especially that the scores do not necessarily add to 1.

 

Regarding "certainty" I might be getting out of scope but I would like to explain my use of the word. In binary logic we give statements values 0 and 1. In probability values in [0,1]. In this sense probability theory can be seen as a generalization of binary logic, with binary logic reappearing for certainly true or certainly false statements. But, this is just one example of the idea of generalizing truth values. For example, we could use a standard trinary logic, true, false, or unknown. Or we could use fuzzy logic, which is a bit like probability, but the method of combination is different.

 

So, to me, just as a person who has a weight of 0 is light and a person who has a strength of 0 is weak, a statement that has a certainty of 0 would be false. So certainty is being used by me in the sense of determinism, but as a scale. So, yeah, that's the reason I picked it.


I have been doing a bit of looking around.

 

These guys say that propensity is a probability.

https://www.altexsoft.com/blog/propensity-model

 

And these guys specifically say it is the conditional probability given the data.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2907172/

 

These guys seem to mean a rate of change of the probability.

https://www.kdnuggets.com/2017/05/propensity-scores-primer.html

 

A lot of people refer to propensity estimation - which seems to me to imply that there is something that exists that is being estimated. The example of logistic regression comes up several times. Clearly, one can use something like logistic regression to approximate the characteristic function of each class - which is analogous to what DataRobot is doing.

 

I have not done the experiment yet to see whether DataRobot prediction scores add to unity - but since this could be done merely by normalizing them, it feels like something that they would be remiss not to do.
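
The normalization in question is, at least, trivial to check or apply after the fact (the per-class scores below are invented):

```python
import numpy as np

scores = np.array([0.40, 0.35, 0.15, 0.05, 0.02])  # hypothetical per-class scores
print(scores.sum())                # 0.97 - not quite unity
normalized = scores / scores.sum()
print(normalized.sum())            # 1.0 after rescaling
```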

 

I am a bit concerned, though, that some writers are mixing up probability and likelihood.

 

My current position is then that the DataRobot prediction scores, or propensities, are intended in principle to be conditional probabilities based on a model built from the statistical data - but it is unknown to me whether they are guaranteed to add to unity.

 

 


I would like to revisit Bruce's original question, which as I understand it has to do with the concept of strength reported by the XEMP Prediction Explanations.

The docs state as follows:

Each explanation is a feature from the dataset and its corresponding value, accompanied by a qualitative indicator of the explanation’s strength—strong (+++), medium (++), or weak (+) positive or negative (-) influence. If an explanation’s score is trivial and has little or no qualitative effect, the output displays three greyed out symbols (+++ or - - -).

I understand from the whitepaper that the XEMP value for each feature is calculated as the difference between the Feature Effects (partial dependence) value and a weighted average of the partial dependence values for the feature concerned. Therefore the basis for computing strength is the deviation in partial dependence from the 'usual value'. What is less clear, perhaps, is what basis is used for the three qualitative indicators of strength: strong, medium, weak (plus trivial).
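
My reading of that description, in numeric terms - this is my own sketch of the calculation as described, not DataRobot's actual code, and the partial dependence values and weights below are invented:

```python
import numpy as np

# Hypothetical partial dependence of the prediction over the distinct
# values of one feature, and the relative frequency of each value
pd_values = np.array([0.20, 0.35, 0.55, 0.70])
weights   = np.array([0.40, 0.30, 0.20, 0.10])   # frequencies, summing to 1

baseline = np.dot(weights, pd_values)   # weighted-average ("usual") PD value

# Strength for a row whose feature takes the third value:
strength = pd_values[2] - baseline      # deviation from the usual value
print(baseline, strength)               # approximately 0.365, 0.185
```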

Apologies if I have misunderstood your question and hijacked your thread @Bruce .

@phi 

No hijacking, you are on the right track. I am going to accept your answer.

 

In simple terms, the strength is a (local) partial derivative estimation, and the scaling of the number of plus or minus signs is not apparent. In principle they stand for -3, -2, -1, +1, +2, +3. If the value is 0, the item is not mentioned as an explanation. But the details of the scaling elude me and seem complicated and arbitrary.
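
If I had to guess, the qualitative indicators are something like a binning of the numeric strengths. Purely my own speculation - the cut-offs below are invented, since the actual scaling is exactly what is not documented:

```python
def strength_symbol(strength, strong=0.3, medium=0.1, weak=0.01):
    """Hypothetical binning of a numeric strength into the +/- symbols.
    The thresholds are invented for illustration only."""
    sign = "+" if strength > 0 else "-"
    magnitude = abs(strength)
    if magnitude >= strong:
        return sign * 3
    if magnitude >= medium:
        return sign * 2
    if magnitude >= weak:
        return sign
    return ""   # trivial: not shown as an explanation

print(strength_symbol(0.185))   # '++'
print(strength_symbol(-0.42))   # '---'
```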

 

But I also take a moment to warn anyone following this track that I had to do quite a lot of reading to make sense of the XEMP white paper, and that IMHO that white paper is misleading and (naturally) rather biased in favor of DataRobot as a piece of commercial software.

 

As far as I can see, the essence of the distinction between LIME and XEMP is mainly that XEMP uses values from the original data in order to produce a consistent explanation. However, this is essentially a modification of LIME that forces the explanation to be consistent - but does not stop it having an element of arbitrariness. And since this is supposed to explore the model rather than the data, it suffers from testing the model only where the original data exists - thus being inappropriately kind to the model.

 

The core of the LIME method is to find a model that has similar behaviour in a local region. This is the same idea as used in many other contexts - in particular the use of Taylor series, or simply local affine approximants - which are all over the place in theory and practice.
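
The local affine idea can be written down in a few lines: perturb around a point of interest, weight the perturbations by proximity, and fit a weighted linear model to the black-box predictions. This is a generic sketch of that idea, not the actual LIME library:

```python
import numpy as np

def local_affine_explanation(predict, x0, n_samples=500, scale=0.1, seed=0):
    """Fit a locally weighted linear surrogate to `predict` around x0.
    Returns one local coefficient (slope) per feature."""
    rng = np.random.default_rng(seed)
    X = x0 + scale * rng.standard_normal((n_samples, x0.size))  # perturb around x0
    y = predict(X)
    # Weight samples by proximity to x0 (Gaussian kernel)
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * scale ** 2))
    # Weighted least squares with an intercept column
    A = np.column_stack([np.ones(n_samples), X])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[1:]   # drop the intercept; these are the local slopes
```

For a smooth model, the returned coefficients approximate the local partial derivatives - which is exactly the analogy with a first-order Taylor expansion.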

 

I was not convinced that LIME uses a surrogate model while XEMP uses the original. LIME could be said to be providing a simple approximation as a description of the original model. XEMP does not seem to spend any effort on the internals of the model, so it could be said to be using what amounts to an implicit surrogate model.