Train SVM Classifier (G Dataflow)

Last Modified: October 26, 2017

Sets the classifier session to use the SVM classifier engine and configures the SVM parameters to use.

classifier session in

Reference to the classifier session on which the node operates.

kernel options

Cluster of parameters to configure the kernel.

kernel

Kernel that the classifier uses to determine if a sample is a texture or contains a defect.

The SVM classifier is a linear classifier. Use a nonlinear kernel to transform samples with nonlinear feature information to a dimension where the feature information is linearly separable.

Name Description
linear Applies a linear kernel to the sample. Use this kernel if the number of features per sample is high.
polynomial Applies a polynomial kernel to the sample.
Gaussian Applies a Gaussian kernel to the sample.
RBF (Default) Applies a radial basis function (RBF) kernel to the sample.
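
As an illustration of why a nonlinear kernel can help, the following is a minimal scikit-learn sketch, not the G Dataflow node itself; the dataset, parameter values, and accuracy check are illustrative assumptions.

from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Ring-shaped classes: not linearly separable in the original feature space.
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

for kernel in ("linear", "poly", "rbf"):
    # gamma, degree, and coef0 correspond to the gamma, degree, and coefficient
    # inputs described below; values that a kernel does not use are ignored.
    model = SVC(kernel=kernel, gamma=2.0, degree=3, coef0=1.0).fit(X, y)
    print(f"{kernel}: training accuracy {model.score(X, y):.2f}")

On this kind of data the nonlinear kernels typically classify far more accurately than the linear kernel, which mirrors the guidance above.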

degree

Degree of the polynomial kernel.

The polynomial kernel becomes a linear kernel if you specify a degree of 1.

gamma

Gamma value for the polynomial and RBF kernels.

A high value requires more support vectors to classify the sample. Use a high value for samples with regularly distributed feature information, and a low value for samples with irregularly distributed feature information.

coefficient

Coefficient of the polynomial kernel.

sigma

Sigma value for the Gaussian kernel.

A higher value produces a smoother Gaussian function and fewer support vectors. Use a high value for samples with regularly distributed feature information, and a low value for samples with irregularly distributed feature information.

Default: 1
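
With degree, gamma, coefficient, and sigma defined as above, the kernels are conventionally written as follows. This is the standard LIBSVM-style formulation; the exact definitions used by the SVM engine are an assumption inferred from the parameter names.

K_{\mathrm{linear}}(x_i, x_j) = x_i^{\top} x_j
K_{\mathrm{polynomial}}(x_i, x_j) = \left(\gamma \, x_i^{\top} x_j + \mathrm{coefficient}\right)^{\mathrm{degree}}
K_{\mathrm{Gaussian}}(x_i, x_j) = \exp\!\left(-\frac{\lVert x_i - x_j \rVert^2}{2\,\sigma^2}\right)
K_{\mathrm{RBF}}(x_i, x_j) = \exp\!\left(-\gamma \, \lVert x_i - x_j \rVert^2\right)

Under these definitions, a polynomial kernel with degree 1, gamma 1, and coefficient 0 reduces to the linear kernel, which is why the degree description above notes that a degree of 1 behaves like a linear kernel.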

cache (MB)

Cache size, in megabytes, for kernel operations.

model options

Cluster of parameters to configure the SVM model.

SVM model

SVM model that trains the classifier.

Name Value Description
C-SVC 0

The C-SVC model allows the SVM algorithm to clearly separate samples that are separated by a very narrow margin. If the SVM algorithm cannot define a clear margin, it uses the cost parameter to allow some training errors and produce a soft margin. If the cost value is too high, it prohibits training errors, producing a narrow margin and rigid classification.

nu-SVC 1

In the nu-SVC model, the nu parameter controls training errors and the number of support vectors. The nu value specifies both the maximum ratio of training errors and the minimum number of support vectors relative to the number of samples. Nu must be greater than 0 and cannot exceed 1. A higher nu value increases tolerance for variation in the texture, but may also increase tolerance for texture defects. If nu is too high, training produces too many training errors to be useful.

one class 2

(Default) In the one-class model, the SVM algorithm considers the spatial distribution information for each sample to determine whether the sample belongs to the known class.
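
A minimal scikit-learn sketch, not the G Dataflow API, of the three models listed above; the feature data, labels, and parameter values are illustrative assumptions.

import numpy as np
from sklearn.svm import SVC, NuSVC, OneClassSVM

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))              # feature vectors, e.g. texture features
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # two-class labels for C-SVC and nu-SVC

# C-SVC (value 0): the cost parameter (C) penalizes training errors.
c_svc = SVC(kernel="rbf", gamma=0.5, C=10.0, shrinking=True,
            cache_size=200, tol=1e-3, max_iter=1000).fit(X, y)

# nu-SVC (value 1): nu bounds the fraction of training errors from above and
# the fraction of support vectors from below (0 < nu <= 1).
nu_svc = NuSVC(kernel="rbf", gamma=0.5, nu=0.1).fit(X, y)

# one class (value 2, default): trained on samples of a single known class only.
one_class = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.1).fit(X[y == 1])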

tolerance

Maximum gradient of the quadratic function used to compute support vectors.

Default: 0.001

max. iteration

Maximum number of iterations.

nu

Value of nu.

Values range from 0.1 to 1. The default value is 0.1. A higher nu value increases tolerance for variation in the texture, but may also increase tolerance for defects. If the texture classifier does not perform as expected because the trained texture samples do not represent every possible variation of the texture, try increasing the value of nu.
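
A small scikit-learn sketch, not the G Dataflow API, of sweeping nu for a one-class model; the texture features and nu values are illustrative assumptions.

import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
texture_features = rng.normal(size=(200, 8))   # stand-in for texture feature vectors

for nu in (0.1, 0.3, 0.5):
    model = OneClassSVM(kernel="rbf", gamma=0.5, nu=nu).fit(texture_features)
    accepted = (model.predict(texture_features) == 1).mean()
    print(f"nu={nu}: {model.support_vectors_.shape[0]} support vectors, "
          f"{accepted:.0%} of training samples accepted")

Comparing the trained models on representative good and defective samples is one way to decide whether a larger nu helps.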

shrinking

Specifies whether the classifier uses shrinking heuristics to attempt to reduce the number of variables involved in the classification computation.

Shrinking may reduce processing time if the number of iterations is large.

cost

Penalty for training errors.

If the cost value is too high, it prohibits training errors, producing a narrow margin and rigid classification. Decrease the cost value to allow more training errors and produce a softer margin between classes.
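
A short scikit-learn sketch, not the G Dataflow API, of how the cost value trades training errors against margin rigidity; the data and cost values are illustrative assumptions.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 2))
y = (X[:, 0] > 0).astype(int)
X[:6, 0] *= -1     # a few samples end up on the wrong side, so the classes overlap

for cost in (0.1, 1.0, 100.0):
    model = SVC(kernel="linear", C=cost).fit(X, y)
    errors = int((model.predict(X) != y).sum())
    print(f"cost={cost}: {errors} training errors, "
          f"{model.support_vectors_.shape[0]} support vectors")

A low cost usually leaves more training errors and more support vectors (a softer margin); a very high cost drives training errors toward zero and rigidly fits the overlapping samples.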

weighted cost

Cost weight for each label.

label

Label associated with the weight coefficient.

weight

Weight coefficient associated with the label.
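
A minimal scikit-learn sketch, not the G Dataflow API, of per-label cost weighting analogous to the weighted cost cluster above; the labels, weights, and data are illustrative assumptions.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.normal(size=(150, 4))
y = (X[:, 0] > 1.0).astype(int)     # label 1 is deliberately rare

# Training errors on label 1 cost five times as much as errors on label 0.
model = SVC(kernel="rbf", gamma=0.5, C=1.0,
            class_weight={0: 1.0, 1: 5.0}).fit(X, y)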

error in

Error conditions that occur before this node runs.

The node responds to this input according to standard error behavior.

Standard Error Behavior

Many nodes provide an error in input and an error out output so that the node can respond to and communicate errors that occur while code is running. The value of error in specifies whether an error occurred before the node runs. Most nodes respond to values of error in in a standard, predictable way.

error in does not contain an error: If no error occurred before the node runs, the node begins execution normally. If no error occurs while the node runs, it returns no error. If an error does occur while the node runs, it returns that error information as error out.

error in contains an error: If an error occurred before the node runs, the node does not execute. Instead, it returns the error in value as error out.

Default: No error

classifier session out

Reference to the classifier session the node creates.

training results

Array of statistical information for each class in the classifier.

class

Class into which the classifier session categorizes the input sample.

standard deviation

Standard deviation from the mean of all samples in class.

number of samples

Number of samples in class.

number of SVs

Number of support vectors in class.

class distance table

Table giving the mean distance from each class to each other class.
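
A scikit-learn sketch, not the G Dataflow API, of computing per-class statistics comparable to the training results cluster above; the data and class labels are illustrative assumptions.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + X[:, 2] > 0).astype(int)

model = SVC(kernel="rbf", gamma=0.5, C=1.0).fit(X, y)

for label, n_sv in zip(model.classes_, model.n_support_):
    samples = X[y == label]
    print(f"class {label}: {len(samples)} samples, "
          f"standard deviation {samples.std():.3f}, {n_sv} support vectors")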

error out

Error information.

The node produces this output according to standard error behavior.

Standard Error Behavior

Many nodes provide an error in input and an error out output so that the node can respond to and communicate errors that occur while code is running. The value of error in specifies whether an error occurred before the node runs. Most nodes respond to values of error in in a standard, predictable way.

error in does not contain an error: If no error occurred before the node runs, the node begins execution normally. If no error occurs while the node runs, it returns no error. If an error does occur while the node runs, it returns that error information as error out.

error in contains an error: If an error occurred before the node runs, the node does not execute. Instead, it returns the error in value as error out.

Where This Node Can Run:

Desktop OS: Windows

FPGA: Not supported

Web Server: Not supported in VIs that run in a web application

