Last Modified: January 12, 2016
Converts, displays, and manages variables.
Constructs a complex matrix.
Converts the input elements to double-precision floating-point numbers.
Generates an error and stops execution.
Converts the input elements to 16-bit signed integers.
Converts the input elements to 32-bit signed integers.
Converts the input elements to 64-bit signed integers.
Converts the input elements to 8-bit signed integers.
Computes the length of a numeric object or string.
Converts the input elements to single-precision floating-point numbers.
Converts the input elements to 16-bit unsigned integers.
Converts the input elements to 32-bit unsigned integers.
Converts the input elements to 64-bit unsigned integers.
Converts the input elements to 8-bit unsigned integers.
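The conversions listed above can be sketched with NumPy, which exposes analogous fixed-width numeric types. This is an illustrative sketch, not this product's API: NumPy's `astype` truncates toward zero and wraps on overflow, whereas MATLAB-style integer conversion typically rounds and saturates, so the two can disagree at the edges.

```python
import numpy as np

# Floating-point conversion (cf. the double- and single-precision functions)
x = np.array([1, 2, 3])
d = x.astype(np.float64)   # double precision
s = x.astype(np.float32)   # single precision

# Signed/unsigned integer conversion (cf. the 8/16/32/64-bit functions).
# NumPy's astype truncates toward zero and wraps on overflow; a MATLAB-style
# conversion typically rounds and saturates, so results can differ.
vals = np.array([1.7, -2.3])
trunc = vals.astype(np.int16)             # truncates: [1, -2]
rounded = np.rint(vals).astype(np.int16)  # rounds first: [2, -2]

# Complex construction (cf. the complex-matrix function)
z = np.complex128(3 + 4j)

# Length of a numeric object (cf. the length function): MATLAB-style
# length is the largest dimension, not the total element count
a = np.zeros((2, 5))
n = max(a.shape)  # 5

# Error generation (cf. the error function): raising an exception
# stops execution with a message
# raise RuntimeError("invalid input")
```

Note the rounding difference above: converting `1.7` yields `1` under truncation but `2` under round-then-convert, which is why the sketch applies `np.rint` before the cast when MATLAB-like behavior is wanted.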