Last Modified: January 12, 2018

Uses the downhill simplex method to determine a local minimum of a function of *n* independent variables defined by a formula.

The downhill simplex method relies only on function evaluations, so it can find a solution even when the function is not smooth or its derivatives are undefined.
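As an illustrative sketch only (not this node's actual implementation), the downhill simplex method can be written in plain Python with the standard reflection, expansion, contraction, and shrink operations. The step size, iteration cap, and test function below are assumptions chosen for demonstration; the test function |x| + |y| has no derivative at its minimum, which the method tolerates.

```python
def nelder_mead(f, start, ftol=1e-6, max_iter=500):
    """Minimal downhill simplex (Nelder-Mead) sketch with the standard
    reflection, expansion, contraction, and shrink coefficients."""
    n = len(start)
    # Initial simplex: the start point plus one offset vertex per variable.
    simplex = [list(start)]
    for i in range(n):
        vertex = list(start)
        vertex[i] += 0.5  # assumed initial step size
        simplex.append(vertex)
    for _ in range(max_iter):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        # Stop when the function values across the simplex nearly agree.
        if abs(f(worst) - f(best)) / (abs(f(best)) + 1e-12) < ftol:
            break
        # Centroid of every vertex except the worst.
        centroid = [sum(v[i] for v in simplex[:-1]) / n for i in range(n)]
        refl = [2 * centroid[i] - worst[i] for i in range(n)]
        if f(refl) < f(best):
            # Reflection beat the best vertex: try expanding further.
            exp = [3 * centroid[i] - 2 * worst[i] for i in range(n)]
            simplex[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        else:
            # Contract toward the centroid; if that fails, shrink toward best.
            contr = [0.5 * (centroid[i] + worst[i]) for i in range(n)]
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:
                simplex = [best] + [[0.5 * (best[i] + v[i]) for i in range(n)]
                                    for v in simplex[1:]]
    return min(simplex, key=f)

# Works on a non-smooth function whose derivative is undefined at the minimum:
minimum = nelder_mead(lambda p: abs(p[0]) + abs(p[1]), [3.2, 1.0])
```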

Names of the variables.

Variable names must start with a letter or an underscore followed by any number of alphanumeric characters or underscores.
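The naming rule above can be expressed as a regular expression. This is a hedged sketch based only on the rule as stated; the node's exact validation is not specified here.

```python
import re

# Assumed pattern: a letter or underscore, then any number of
# alphanumeric characters or underscores.
VALID_VARIABLE = re.compile(r"[A-Za-z_][A-Za-z0-9_]*")

def is_valid_variable_name(name):
    return VALID_VARIABLE.fullmatch(name) is not None

print(is_valid_variable_name("_x1"))  # True
print(is_valid_variable_name("2nd"))  # False
```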

Formula that defines the objective function. The formula can contain any number of valid variables.

Values of the variables at which the optimization starts.

Error conditions that occur before this node runs.

The node responds to this input according to standard error behavior.

Standard Error Behavior

Many nodes provide an **error in** input and an **error out** output so that the node can respond to and communicate errors that occur while code is running. The value of **error in** specifies whether an error occurred before the node runs. Most nodes respond to values of **error in** in a standard, predictable way.

**Default:** No error

Conditions that terminate the optimization.

This node terminates the optimization when it reaches all of the tolerance thresholds or exceeds any of the maximum thresholds.

Minimum relative change in function values between two internal iterations.

Definition of Relative Change in Function Values

The relative change in function values between two internal iterations is defined as follows:

$\frac{\mathrm{abs}({f}_{n}-{f}_{n-1})}{\mathrm{abs}\left({f}_{n}\right)+\epsilon}$

where

- *f*_{n} is the function value of the current iteration
- *f*_{n - 1} is the function value of the previous iteration
- ε is the machine epsilon

**Default:** 1E-06
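The formula above can be computed directly. This small sketch (function name assumed for illustration) shows how the tolerance check behaves:

```python
import sys

def relative_change(f_n, f_prev):
    # abs(f_n - f_{n-1}) / (abs(f_n) + machine epsilon), as defined above.
    eps = sys.float_info.epsilon
    return abs(f_n - f_prev) / (abs(f_n) + eps)

# The optimization can stop once the change drops below the tolerance (1e-6).
print(relative_change(1.0000005, 1.0) < 1e-6)  # True
print(relative_change(2.0, 1.0) < 1e-6)        # False
```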

Minimum relative change in parameter values between two internal iterations.

Definition of Relative Change in Parameter Values

The relative change in parameter values between two internal iterations is defined as follows:

$\frac{\mathrm{abs}({P}_{n}-{P}_{n-1})}{\mathrm{abs}\left({P}_{n}\right)+\epsilon}$

where

- *P*_{n} is the parameter value of the current iteration
- *P*_{n - 1} is the parameter value of the previous iteration
- ε is the machine epsilon

**Default:** 1E-06

Minimum 2-norm of the gradient.

**Default:** 1E-06

Maximum number of iterations that the node runs in the optimization.

**Default:** 100

Maximum number of calls to the objective function allowed in the optimization.

**Default:** 1000

Maximum amount of time in seconds allowed for the optimization.

**Default:** -1. The optimization never times out.
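Taken together, the stopping rule described above can be sketched as follows. All names and the history-based bookkeeping are assumptions for illustration, and the gradient-norm criterion is omitted because it requires a gradient estimate not shown here.

```python
import sys
import time

EPS = sys.float_info.epsilon

def should_stop(f_hist, p_hist, n_calls, t_start,
                ftol=1e-6, ptol=1e-6,
                max_iter=100, max_calls=1000, max_time=-1):
    """Sketch of the stopping rule: stop when every tolerance threshold
    is reached, or when any maximum threshold is exceeded."""
    # Any maximum threshold exceeded -> stop.
    if len(f_hist) - 1 >= max_iter or n_calls >= max_calls:
        return True
    if max_time >= 0 and time.monotonic() - t_start >= max_time:
        return True
    if len(f_hist) < 2:
        return False
    # All tolerance thresholds reached -> stop.
    df = abs(f_hist[-1] - f_hist[-2]) / (abs(f_hist[-1]) + EPS)
    dp = max(abs(a - b) / (abs(a) + EPS)
             for a, b in zip(p_hist[-1], p_hist[-2]))
    return df < ftol and dp < ptol

t0 = time.monotonic()
# Function and parameter values barely changed -> both tolerances reached.
print(should_stop([4.2, 4.2 + 1e-9], [[1.0, 1.0], [1.0, 1.0]], 10, t0))  # True
```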

Values of the variables where the objective function has the local minimum.

Value of the objective function at **minimum**.

Number of times that this node called the objective function in the optimization.

Error information.

The node produces this output according to standard error behavior.

Standard Error Behavior

Many nodes provide an **error in** input and an **error out** output so that the node can respond to and communicate errors that occur while code is running. The value of **error in** specifies whether an error occurred before the node runs. Most nodes respond to values of **error in** in a standard, predictable way.

For the function defined by the equation *f*(*x*, *y*) = *x*^{2} + *y*^{2}, you must enter two numbers to represent the starting point in 2D space. The method generates a new simplex through elementary operations such as reflections, expansions, and contractions. Eventually, the minimum is enclosed in a very small simplex.

To find the simplex sequence tending toward the minimum (0, 0) of the preceding function, enter the following values on the panel:

| Input | Value |
|---|---|
| objective function | x*x+y*y |
| variables | [x, y] |
| start | [3.2, 1] |

The following illustration shows the simplex sequence tending toward the minimum (0, 0) of the preceding function.
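The same example can be reproduced with SciPy's Nelder-Mead implementation. SciPy is an assumption here, not part of the node; it implements the same downhill simplex algorithm, though its default tolerances may differ from this node's.

```python
from scipy.optimize import minimize

# Objective x*x + y*y with start [3.2, 1], matching the panel values above.
result = minimize(lambda p: p[0] ** 2 + p[1] ** 2,
                  x0=[3.2, 1.0], method="Nelder-Mead")
print(result.x)    # close to [0, 0]
print(result.fun)  # close to 0
```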

**Where This Node Can Run:**

Desktop OS: Windows

FPGA: Not supported

Web Server: Not supported in VIs that run in a web application