A confusion matrix is an N x N matrix used for evaluating the performance of a classification model, where N is the number of target classes. The matrix compares the actual target values with those predicted by the machine learning model.
The confusion matrix is a fairly common term in machine learning. Today I will try to relate the importance of the confusion matrix to cybercrime.
So the confusion matrix is yet another classification metric that tells us how well our model is performing. Yet it matters in many places that might not actually be using it.
All this suggests that there is more to the confusion matrix than just being another classification metric. So before we dive deep, let's first understand what a confusion matrix is.
It is a performance measurement for machine learning classification problems where the output can be two or more classes. For a binary problem, it is a table with 4 different combinations of predicted and actual values.
True Positive:
Interpretation: You predicted positive and it’s true.
True Negative:
Interpretation: You predicted negative and it’s true.
False Positive: (Type 1 Error)
Interpretation: You predicted positive and it’s false.
False Negative: (Type 2 Error)
Interpretation: You predicted negative and it’s false.
So this gives an idea of what the four boxes in the confusion matrix represent.
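As a quick sketch, the four counts can be computed directly from lists of actual and predicted labels. The data below is hypothetical, and `confusion_counts` is just an illustrative helper, not a library function:

```python
def confusion_counts(actual, predicted, positive=1):
    """Count the four cells of a binary confusion matrix."""
    tp = fp = tn = fn = 0
    for a, p in zip(actual, predicted):
        if p == positive and a == positive:
            tp += 1  # true positive
        elif p == positive and a != positive:
            fp += 1  # false positive (Type 1 error)
        elif p != positive and a != positive:
            tn += 1  # true negative
        else:
            fn += 1  # false negative (Type 2 error)
    return {"TP": tp, "FP": fp, "TN": tn, "FN": fn}

# Example: 1 = "positive", 0 = "negative"
actual    = [1, 0, 1, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 1, 0, 1, 0]
print(confusion_counts(actual, predicted))
# → {'TP': 3, 'FP': 1, 'TN': 3, 'FN': 1}
```

Libraries like scikit-learn provide the same thing via `confusion_matrix`, but the plain loop makes the four boxes explicit.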
What makes the confusion matrix so useful is the presence and distinction of Type 1 and Type 2 errors.
High accuracy is always the goal, be it in machine learning or any other field. But does high accuracy always mean better results? In most cases the answer is yes, but let me give you an example where we have to go beyond the common notion that we can blindly chase higher accuracy.
Let's say an antivirus company released an AI-based antivirus that detects suspicious files, and this model gives 97 percent accuracy. Suppose the model is running on your PC while you are working on the next big thing. You just created an executable script that is crucial to you, but the antivirus, being an AI model, gave a "FALSE POSITIVE" and flagged your file as a virus.
On the other hand, suppose you downloaded a few music videos that contained a malicious package, but the model was unable to detect it and gave a "FALSE NEGATIVE".
So now you have a choice: which type of model would you prefer? The mere existence of a choice here means that accuracy alone doesn't suffice in some cases, because in both scenarios the accuracy remained the same.
So you should now have a sense of the importance of the two types of error in a confusion matrix and what they mean.
Cybercrime can take many forms, such as:
- Stealing personal data
- Identity theft
- Stealing organizational data
- Stealing bank card details
- Hacking emails to gain information
The trade-off between Type 1 and Type 2 errors is very critical in cybersecurity. Let's take another example. Consider a face recognition system installed in front of a data warehouse that holds critical data. The manager arrives and the recognition system fails to recognize him; he tries again and is allowed in.
This seems like a pretty normal scenario. But consider another situation: a new person comes and tries to log himself in. The recognition system makes an error and allows him in. Now this is very dangerous. An unauthorized person has gained entry, and this could be very damaging to the whole company.
In both cases the security system made an error. But treating "intruder" as the positive class, the tolerance for a False Negative (an intruder admitted) here is zero, while we can still bear a False Positive (an employee rejected and asked to retry).
This shows how the critical error type varies from use case to use case, and why we want control over the trade-off between the two types of error.
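One common way to steer this trade-off is a decision threshold. The sketch below uses made-up match scores and a hypothetical `errors_at` helper: the system flags anyone whose face-match score falls below the threshold, so raising the threshold pushes false negatives (intruders admitted) toward zero at the cost of more false positives (staff rejected):

```python
# Hypothetical match scores in [0, 1]; below the threshold = flagged as intruder.
authorized_scores = [0.92, 0.88, 0.75, 0.95]  # genuine employees
intruder_scores   = [0.40, 0.65, 0.72]        # unauthorized visitors

def errors_at(threshold):
    """Return (false positives, false negatives) at a given threshold,
    with 'intruder' as the positive class."""
    false_positives = sum(s < threshold for s in authorized_scores)  # staff rejected
    false_negatives = sum(s >= threshold for s in intruder_scores)   # intruders admitted
    return false_positives, false_negatives

for t in (0.5, 0.7, 0.8):
    fp, fn = errors_at(t)
    print(f"threshold={t}: staff rejected={fp}, intruders admitted={fn}")
# → threshold=0.5: staff rejected=0, intruders admitted=2
# → threshold=0.7: staff rejected=0, intruders admitted=1
# → threshold=0.8: staff rejected=1, intruders admitted=0
```

For the data-warehouse scenario we would pick the strictest threshold, accepting an occasional rejected employee to keep intruders out entirely.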