LabVIEW Analytics and Machine Learning Toolkit API Reference

Evaluate Classification Model VI

Owning Palette: Classification VIs

Requires: Analytics and Machine Learning Toolkit

Evaluates a trained classification model by using new test data with labels.

You must load the new test data by using the Deployment instance of the Load Data (2D Array) VI.


model in specifies the information about the entire workflow of the model.
evaluation configuration specifies the configuration for the evaluation metric.
average method specifies the averaging method this VI uses to calculate metric values for multiclass classification (the sketch below illustrates how the four options differ).

0 Micro (default)—Calculates metric values for each sample and returns the mean of the metric values for all samples.
1 Macro—Calculates the metric values for each label and returns the mean of the metric values for all labels.
2 Weighted—Calculates the metric values for each label and returns the mean of weighted metric values for all labels. The number of true cases in a label determines the weight of the metric value of the label.
3 Binary—Calculates the metric values for the class that positive label specifies.
positive label specifies the label of the class for which this VI calculates metric values. The default is 0. This input is valid only if average method is Binary.
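
The toolkit applies the selected averaging method internally when it computes the metrics below. As a rough, non-authoritative sketch of how the four options typically differ, the following Python snippet computes per-class precision for a small, made-up 3-class example and then derives micro, macro, weighted, and binary averages; the label arrays and variable names are illustrative only, and this is not the toolkit's implementation.

```python
import numpy as np

# Hypothetical ground-truth and predicted labels for a 3-class problem.
y_true = np.array([0, 0, 0, 1, 1, 2, 2, 2, 2, 2])
y_pred = np.array([0, 0, 1, 1, 2, 2, 2, 2, 0, 2])
classes = np.unique(y_true)

# One-vs-rest counts per class.
tp = np.array([np.sum((y_pred == c) & (y_true == c)) for c in classes])
fp = np.array([np.sum((y_pred == c) & (y_true != c)) for c in classes])
support = np.array([np.sum(y_true == c) for c in classes])  # true cases per label

per_class_precision = tp / (tp + fp)

# Micro: aggregate the counts over all classes, then compute the metric once.
micro = tp.sum() / (tp.sum() + fp.sum())
# Macro: unweighted mean of the per-class metric values.
macro = per_class_precision.mean()
# Weighted: per-class values weighted by the number of true cases in each label.
weighted = np.average(per_class_precision, weights=support)
# Binary: the metric value for the single class chosen as the positive label.
positive_label = 2
binary = per_class_precision[list(classes).index(positive_label)]

print(micro, macro, weighted, binary)
```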
error in describes error conditions that occur before this node runs. This input provides standard error in functionality.
model out returns the information about the entire workflow of the model. Wire model out to the reference input of a standard Property Node to get an AML Analytics Property Node.
confusion matrix returns the confusion matrix from the evaluation result. A confusion matrix describes the performance of a classification model by reporting the number of true positive cases, true negative cases, false positive cases, and false negative cases. Each row of a confusion matrix represents the actual class and each column represents the predicted class.

For example, consider 100 samples that each belong to one of two possible classes: positive and negative. The following table is a confusion matrix for the two classes.

                            Predicted Class
                            Positive     Negative
Actual Class   Positive     65           5
               Negative     19           11


The confusion matrix contains 65 true positive cases, 5 false negative cases, 19 false positive cases, and 11 true negative cases.
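
As a rough illustration of this row/column convention, the following Python sketch rebuilds the 2 x 2 matrix for the example above from hypothetical label arrays (1 = positive, 0 = negative) and reads off the four counts; it is not the toolkit's implementation.

```python
import numpy as np

# Hypothetical labels reproducing the example above: 1 = positive, 0 = negative.
# Actual classes: 70 positives followed by 30 negatives.
actual = np.array([1] * 70 + [0] * 30)
# Predictions: 65 of the positives and 19 of the negatives are predicted positive.
predicted = np.array([1] * 65 + [0] * 5 + [1] * 19 + [0] * 11)

# Rows are actual classes, columns are predicted classes; positive class first.
labels = [1, 0]
matrix = np.zeros((2, 2), dtype=int)
for a, p in zip(actual, predicted):
    matrix[labels.index(a), labels.index(p)] += 1

tp = matrix[0, 0]  # actual positive, predicted positive -> 65
fn = matrix[0, 1]  # actual positive, predicted negative -> 5
fp = matrix[1, 0]  # actual negative, predicted positive -> 19
tn = matrix[1, 1]  # actual negative, predicted negative -> 11
print(matrix)
print(tp, fn, fp, tn)
```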
metrics returns metrics from the evaluation result.
accuracy returns the accuracy metric value.

The following equation defines the accuracy metric:

accuracy = (TP + TN) / (P + N)

where
TP is the number of true positive cases in the data
TN is the number of true negative cases in the data
P is the number of real positive cases in the data
N is the number of real negative cases in the data
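
For the example confusion matrix above, P = 65 + 5 = 70 and N = 19 + 11 = 30, so accuracy = (65 + 11) / (70 + 30) = 0.76.
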
precision returns the precision metric value.

The following equation defines the precision metric:

precision = TP / (TP + FP)

where
TP is the number of true positive cases in the data
FP is the number of false positive cases in the data
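
For the example confusion matrix above, precision = 65 / (65 + 19) ≈ 0.77.
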
recall returns the recall metric value.

The following equation defines the recall metric:

recall = TP / (TP + FN)

where
TP is the number of true positive cases in the data
FN is the number of false negative cases in the data
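
For the example confusion matrix above, recall = 65 / (65 + 5) ≈ 0.93.
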
f1 score returns the f1 score metric value.

The following equation defines the f1 score metric:

f1 score = (2 × TP) / (2 × TP + FP + FN)

where
TP is the number of true positive cases in the data
FP is the number of false positive cases in the data
FN is the number of false negative cases in the data
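
For the example confusion matrix above, f1 score = (2 × 65) / (2 × 65 + 19 + 5) = 130 / 154 ≈ 0.84.
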
error out contains error information. This output provides standard error out functionality.

Example

Refer to the Classification (Deployment) VI in the labview\examples\AML\Classification directory for an example of using the Evaluate Classification Model VI.