
As discussed in the previous section, the goal of logistic regression is to model the probability of a given outcome occurring. However, rather than predicting probabilities, researchers sometimes want the output of their model to indicate whether a success or a failure is expected for a given X value. This is called classification. The simplest way to perform classification is to set what is known as a cutoff value. This value is a number between 0 and 1 that serves as the dividing line between what to call a "success" and what to call a "failure". For example, setting the classification cutoff to 0.5 is common (and is the default for simple logistic regression in Prism). With this cutoff, if the model predicts a probability of success greater than or equal to 0.5, that prediction is classified as a "success" (Y=1), while if the model predicts a probability less than 0.5, it is classified as a "failure" (Y=0).
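To make the cutoff rule concrete, here is a minimal sketch in plain Python (not Prism output). The intercept and slope values are hypothetical, chosen only to illustrate how predicted probabilities are converted to classifications with a 0.5 cutoff.

```python
import numpy as np

# Hypothetical fitted coefficients from a simple logistic regression
# (illustrative values only, not from any real data set)
beta0, beta1 = -2.0, 0.8

x = np.array([0.5, 1.5, 2.5, 3.5, 4.5])

# Predicted probability of success: inverse logit of the linear predictor
prob_success = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * x)))

# Classification: probabilities >= cutoff are called "success" (Y=1),
# anything below the cutoff is called "failure" (Y=0)
cutoff = 0.5
predicted_class = (prob_success >= cutoff).astype(int)

for xi, p, c in zip(x, prob_success, predicted_class):
    print(f"X = {xi:4.1f}   P(success) = {p:.3f}   classified as Y = {c}")
```

Note that the choice of cutoff is up to the researcher: raising it above 0.5 makes the model more conservative about calling a prediction a "success", while lowering it does the opposite.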

There are many metrics that researchers derive from this sort of classification, including the sensitivity and specificity of a model, the true positive rate (TPR) and false positive rate (FPR) of classification, the positive and negative predictive values of the model, and more. Read more about classification.
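As a sketch of how these metrics relate to the classification counts, the example below (plain Python, with made-up observed and predicted values for illustration only) tallies the confusion matrix and computes sensitivity, specificity, FPR, and the predictive values from it.

```python
import numpy as np

# Hypothetical observed outcomes and model classifications (illustrative only)
observed  = np.array([1, 1, 0, 1, 0, 0, 1, 0, 1, 0])
predicted = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 0])

tp = np.sum((predicted == 1) & (observed == 1))  # true positives
tn = np.sum((predicted == 0) & (observed == 0))  # true negatives
fp = np.sum((predicted == 1) & (observed == 0))  # false positives
fn = np.sum((predicted == 0) & (observed == 1))  # false negatives

sensitivity = tp / (tp + fn)   # true positive rate (TPR)
specificity = tn / (tn + fp)   # 1 - false positive rate (FPR)
ppv = tp / (tp + fp)           # positive predictive value
npv = tn / (tn + fn)           # negative predictive value

print(f"Sensitivity (TPR): {sensitivity:.2f}")
print(f"Specificity:       {specificity:.2f}")
print(f"FPR:               {1 - specificity:.2f}")
print(f"PPV:               {ppv:.2f}")
print(f"NPV:               {npv:.2f}")
```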

 
