# Scikit learn accuracy_score

In this Python tutorial, we will learn about the scikit learn accuracy_score in Python, and we will also cover different examples related to it. We will cover these topics:

• scikit learn accuracy_score
• scikit learn accuracy_score examples
• How scikit learn accuracy_score works

## Scikit learn accuracy_score

The accuracy_score method is used to calculate either the fraction or the count of correct predictions in Python Scikit learn.

Mathematically, it represents the ratio of the sum of true positives (TP) and true negatives (TN) to the total number of predictions:

``Accuracy Score = (TP+TN)/ (TP+FN+TN+FP)``
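As a quick sanity check of the formula, here is a sketch with hypothetical confusion-matrix counts (TP=3, TN=1, FP=2, FN=1 are made-up numbers for illustration):

```python
# Hypothetical confusion-matrix counts (for illustration only)
tp, tn, fp, fn = 3, 1, 2, 1

# Accuracy Score = (TP + TN) / (TP + FN + TN + FP)
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(accuracy)  # 4 correct out of 7 predictions
```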

We can also calculate accuracy with the help of the accuracy_score method from sklearn.metrics:

``accuracy_score(y_true, y_pred)``

In multilabel classification, the function returns the subset accuracy: if the entire set of predicted labels for a sample exactly matches the true set of labels, the subset accuracy for that sample is 1.0; otherwise, it is 0.0.
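For instance, in the multilabel sketch below each row is one sample's label set, and only rows that match exactly count as correct:

```python
import numpy as np
from sklearn.metrics import accuracy_score

# Two samples, two labels each (rows = samples)
y_true = np.array([[0, 1], [1, 1]])
y_pred = np.array([[1, 1], [1, 1]])

# Only the second row matches exactly, so subset accuracy is 1/2
print(accuracy_score(y_true, y_pred))  # 0.5
```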

Syntax:

``sklearn.metrics.accuracy_score(y_true, y_pred, normalize=True, sample_weight=None)``

Parameters:

• y_true: label indicator array / sparse matrix of correct (ground-truth) labels.
• y_pred: label indicator array / sparse matrix of predicted labels, as returned by a classifier.
• normalize: a boolean value (True/False). If False, return the number of correctly classified samples; otherwise, return the fraction of correctly classified samples.

Returns:

score:float

• If normalize == True, it returns the fraction of correctly classified samples (float); otherwise, it returns the number of correctly classified samples (int).
• The best performance is 1 with normalize == True and the number of samples with normalize == False.

We can also write the accuracy_score signature in the following way:

```python
accuracy_score(
    y_true,
    y_pred,
    normalize=True,
    sample_weight=None,
)
```
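The sample_weight parameter is not demonstrated elsewhere in this tutorial; as a minimal sketch, it lets some samples count more than others when computing the fraction of correct predictions:

```python
from sklearn.metrics import accuracy_score

y_true = [0, 1, 2, 3]
y_pred = [0, 1, 2, 4]  # last prediction is wrong

# Unweighted: 3 of 4 predictions are correct
print(accuracy_score(y_true, y_pred))  # 0.75

# Weight the misclassified sample more heavily: the weighted
# accuracy is sum(weights of correct) / sum(all weights) = 3/6
weights = [1, 1, 1, 3]
print(accuracy_score(y_true, y_pred, sample_weight=weights))  # 0.5
```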

## Scikit learn accuracy_score examples

As we know, the scikit learn library focuses on modeling data rather than on loading and manipulating it. Here we can use scikit learn accuracy_score to calculate the accuracy of a model's predictions.

Example 1:

In this example, we can see:

• y_pred = [0, 5, 2, 4] is used as the predicted values.
• y_true = [0, 1, 2, 3] is used as the true values that are already given.
• accuracy_score(y_true, y_pred) is used to check the accuracy score of the true values against the predicted values.
```python
from sklearn.metrics import accuracy_score

y_pred = [0, 5, 2, 4]
y_true = [0, 1, 2, 3]

# With the default normalize=True, this returns the fraction of
# correct predictions as a float
print(accuracy_score(y_true, y_pred))
```

Output:

After running the above code, we get the following output: because normalize defaults to True, the function returns the fraction of correct predictions as a float. Here 2 of the 4 predictions match, so the accuracy is 0.5.

Example 2:

In this example, we can see:

• If normalize == False, the function returns the number of correctly classified samples (int).
• y_pred = [0, 5, 2, 4] is used as the predicted values.
• y_true = [0, 1, 2, 3] is used as the true values that are already given.
• accuracy_score(y_true, y_pred, normalize=False) is used to check the accuracy score of the true values against the predicted values.
```python
from sklearn.metrics import accuracy_score

y_pred = [0, 5, 2, 4]
y_true = [0, 1, 2, 3]

# Fraction of correct predictions (float)
print(accuracy_score(y_true, y_pred))

# Number of correct predictions (int)
print(accuracy_score(y_true, y_pred, normalize=False))
```

Output:

After running the above code, we get the following output: the fraction of correct predictions is 0.5, and with normalize=False the count of correct predictions is 2.

## How scikit learn accuracy_score works

The scikit learn accuracy_score works with multilabel classification, in which the accuracy_score function calculates subset accuracy.

• The set of labels predicted for a sample must exactly match the corresponding set of labels in y_true.
• Accuracy describes how the model performs across all classes. It is useful when all the classes are equally important.
• The accuracy of the model is calculated as the ratio of the number of correct predictions to the total number of predictions.
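The ratio above can also be computed by hand, without sklearn; a minimal sketch using made-up labels:

```python
y_true = ["positive", "negative", "negative", "positive"]
y_pred = ["positive", "negative", "positive", "positive"]

# Count predictions that match the true label
correct = sum(t == p for t, p in zip(y_true, y_pred))
accuracy = correct / len(y_true)
print(accuracy)  # 3 of 4 predictions are correct
```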

Code:

In the following code, we import two libraries, numpy and sklearn.metrics, to compute the accuracy of the model.

• y_true = ["positive", "negative", "negative", "positive", "positive", "positive", "negative"] is the true value of the model.
• y_pred = ["positive", "negative", "positive", "positive", "negative", "positive", "positive"] is the predicted value of the model.
• accuracy = (r[0][0] + r[-1][-1]) / numpy.sum(r) is used to calculate the accuracy score of the model from the confusion matrix.
• print(accuracy) is used to print the accuracy score on the screen.
```python
import numpy
import sklearn.metrics

y_true = ["positive", "negative", "negative", "positive", "positive", "positive", "negative"]
y_pred = ["positive", "negative", "positive", "positive", "negative", "positive", "positive"]

# Build the confusion matrix (rows: true labels, columns: predicted labels)
r = sklearn.metrics.confusion_matrix(y_true, y_pred)

# Flip the matrix so that "positive" comes first; the diagonal
# (correct predictions) is unchanged by this reordering
r = numpy.flip(r)

# Accuracy = sum of the diagonal / total number of predictions
accuracy = (r[0][0] + r[-1][-1]) / numpy.sum(r)
print(accuracy)
```

Output:

After running the above code, we get the following output: the accuracy score of the model, 4 correct predictions out of 7 (about 0.571), is printed on the screen.

The sklearn.metrics module also provides the accuracy_score() function, which can be used to calculate the accuracy directly:

``accuracy = sklearn.metrics.accuracy_score(y_true, y_pred)``
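Run on the same labels as the confusion-matrix example above, accuracy_score returns the same ratio:

```python
import sklearn.metrics

y_true = ["positive", "negative", "negative", "positive", "positive", "positive", "negative"]
y_pred = ["positive", "negative", "positive", "positive", "negative", "positive", "positive"]

# Same result as the manual confusion-matrix computation: 4 of 7 correct
accuracy = sklearn.metrics.accuracy_score(y_true, y_pred)
print(accuracy)
```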


So, in this tutorial, we discussed scikit learn accuracy_score in Python and covered different examples related to its implementation. Here is the list of topics that we have covered:

• scikit learn accuracy_score
• scikit learn accuracy_score examples
• How scikit learn accuracy_score works