Dr. Stan Zurek - Wolfson Centre for Magnetics, School of Engineering, Cardiff University
Prof. Anthony Moses - Wolfson Centre for Magnetics, School of Engineering, Cardiff University
Dr. Michael Packianather - Manufacturing Engineering Centre, School of Engineering, Cardiff University
Predicting Magnetic Properties of Wound Toroidal Cores
The power losses in magnetic cores due to eddy currents under alternating magnetization can be reduced considerably by using laminations instead of bulk magnetic material. However, producing laminated cores is usually costly. One of the simplest and most cost-effective techniques is to wind the core from a long strip of soft magnetic material (for example, cut from electrical steel sheet or from a ribbon of amorphous magnetic alloy) into a tightly wound spiral.
Toroidal magnetic cores are widely used in applications such as low-distortion transformers, switching transformers, magnetic chokes, and inductors. In many of these applications they are magnetized in such a way that their permeability and magnetic losses can be degraded by high flux harmonics introduced by electronically generated or controlled voltage sources. The geometrical shape of a wound core also affects its magnetic properties under sine wave flux conditions, and these effects can be predicted using an artificial neural network (ANN).
The Wolfson Centre for Magnetics received a grant from the Engineering and Physical Sciences Research Council to fund a project to predict power loss and magnetic permeability in these types of wound toroidal cores.
One of the project’s goals is to provide a feasible tool to predict the magnetic properties of cores of various dimensions wound from Co-Fe amorphous ribbon and Fe-Si electrical steel, within the magnetizing frequency spectrum of interest (from 20 Hz to 25 kHz) and under sinusoidal and distorted magnetizing conditions. Although there are other possible methods for determining dependencies and prediction, we chose the ANN approach for this study because of its ability to learn, deal with noisy or incomplete data, and handle complex data.
ANNs are computational models, inspired by biological nervous systems, that can learn from examples. The neurons are arranged in layers, forming an input layer, one or more hidden layers, and an output layer. The neurons in adjacent layers are interconnected, and each connection is associated with a weight. During the training phase, the input parameters and corresponding target outputs are presented to the network, and after every iteration the network output is compared to the target output to produce an error. The back-propagation algorithm then adjusts the connection weights in the direction in which the error gradient declines. Training terminates when the error falls below a user-defined threshold, and the weights are stored for recall when the network is presented with previously unseen data.
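The training cycle described above — forward pass, error calculation, and weight adjustment along the declining error gradient — can be sketched in a few lines of NumPy. This is a minimal illustration only, not the aNETka implementation; the toy XOR data, network size, and learning rate are arbitrary choices for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy training data (arbitrary example): learn the XOR function
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

# Randomized starting weights; each matrix has an extra row for the bias
W1 = rng.uniform(-1, 1, (3, 4))   # 2 inputs + bias -> 4 hidden neurons
W2 = rng.uniform(-1, 1, (5, 1))   # 4 hidden + bias -> 1 output neuron

lr = 0.5          # learning rate (fixed here; aNETka's is adaptive)
errors = []
for _ in range(5000):
    # Forward pass: append bias input, weight, apply activation function
    Xb = np.hstack([X, np.ones((len(X), 1))])
    H = sigmoid(Xb @ W1)
    Hb = np.hstack([H, np.ones((len(H), 1))])
    Y = sigmoid(Hb @ W2)

    # Compare network output to the target output to produce an error
    E = T - Y
    errors.append(float(np.mean(E ** 2)))

    # Back-propagate: error gradient through each layer (sigmoid derivative)
    dY = E * Y * (1 - Y)
    dH = (dY @ W2[:-1].T) * H * (1 - H)

    # Adjust the weights in the direction that decreases the error
    W2 += lr * Hb.T @ dY
    W1 += lr * Xb.T @ dH

print(errors[0], errors[-1])  # the training error falls as the weights adapt
```

In a full implementation, training would stop as soon as the error drops below the user-defined threshold rather than after a fixed number of iterations.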
Our choice of ANN software depended on easy integration with LabVIEW. We needed to develop the tool in LabVIEW so that a compiled version of the code could be distributed to users free of charge.
Implementing the Artificial Neural Network in LabVIEW
We needed a feed-forward, back-propagation, multilayer perceptron ANN with a nonlinear activation function. We configured the ANN with five input neurons, 10 neurons in the first hidden layer, 10 in the second, five in the third, and one output neuron. When more capacity is needed, it is more beneficial (because of the lower computational cost) to increase the number of neurons in one of the existing hidden layers than to add another hidden layer.
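The 5-10-10-5-1 topology amounts to a chain of weight matrices, one per layer, through which an input vector is propagated. A forward-pass sketch (hyperbolic tangent is one of the activation choices listed below; the weight ranges and input values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Layer sizes from the text: 5 inputs, hidden layers of 10, 10, and 5, 1 output
sizes = [5, 10, 10, 5, 1]

# One weight matrix per layer; each has an extra row for the bias of the neurons
weights = [rng.uniform(-0.5, 0.5, (n_in + 1, n_out))
           for n_in, n_out in zip(sizes[:-1], sizes[1:])]

def forward(x, weights):
    """Propagate one input vector through the multilayer perceptron."""
    a = np.asarray(x, dtype=float)
    for W in weights:
        a = np.tanh(np.append(a, 1.0) @ W)  # append bias, weight, activate
    return a

y = forward([0.1, 0.2, 0.3, 0.4, 0.5], weights)
print(y.shape)  # (1,) -- the single output neuron
```

Counting connections layer by layer (including biases) makes the remark about computational cost concrete: widening an existing layer adds a modest number of weights, while inserting a whole new layer multiplies the connection count between two layers.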
The artificial neural network we created in LabVIEW, which we named aNETka, has most of the features available in commercial software, including:
- Choice of activation function (linear, sigmoid, hyperbolic tangent)
- Automatic data reading and saving
- Automatic data normalization from 0.05 to 0.95 (required for correct ANN performance)
- Weights randomization
- Training and recall modes
- Biasing of the neurons
- Momentum term to speed up and stabilize the training process
- Adaptive learning rate
- Automatic termination of training using a specified number of iterations or a user-defined training error (whichever comes first)
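The normalization feature in the list above maps the data into the range 0.05 to 0.95 before training, and its inverse recovers physical units from the network output. The range endpoints come from the text; the per-column linear scaling shown here is an assumption about how such normalization is typically done:

```python
import numpy as np

def normalize(data, lo=0.05, hi=0.95):
    """Linearly rescale each column of `data` into the range [lo, hi]."""
    data = np.asarray(data, dtype=float)
    dmin = data.min(axis=0)
    dmax = data.max(axis=0)
    return lo + (hi - lo) * (data - dmin) / (dmax - dmin)

def denormalize(scaled, dmin, dmax, lo=0.05, hi=0.95):
    """Invert the scaling to recover the original physical quantities."""
    return dmin + (scaled - lo) * (dmax - dmin) / (hi - lo)

# Hypothetical samples: two columns, e.g. frequency in Hz and a loss value
samples = np.array([[20.0, 0.1], [1000.0, 0.5], [25000.0, 2.0]])
scaled = normalize(samples)
print(scaled.min(), scaled.max())  # approximately 0.05 and 0.95
```

Keeping the data away from the exact 0 and 1 saturation limits of the sigmoid is the usual reason for a range such as 0.05 to 0.95.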
We compared aNETka's performance to the commercially available QNET 2000 package. Although the user interfaces are quite different, both packages offer similar features.
Removing unnecessary calculations, data copying, and subVIs reduced the execution time by around 10 percent. However, we achieved the largest improvement by changing all the numeric types from the default double precision to single precision, which reduced the training time by a further 60 percent or so.
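The effect of the precision change can be illustrated outside LabVIEW as well. Single-precision values occupy half the memory of double-precision ones, which halves the memory traffic during the many matrix operations of training; the actual 60 percent figure above is specific to our setup and is not reproduced here. A NumPy sketch with an arbitrary weight-matrix size:

```python
import numpy as np

rng = np.random.default_rng(2)

# A weight matrix in the default double precision (float64)...
w64 = rng.standard_normal((1000, 1000))

# ...and the same values stored in single precision (float32)
w32 = w64.astype(np.float32)

print(w64.dtype, w64.nbytes)  # float64, 8,000,000 bytes
print(w32.dtype, w32.nbytes)  # float32, 4,000,000 bytes -- half the storage

# The rounding introduced is tiny compared with typical training error targets
print(np.max(np.abs(w64 - w32.astype(np.float64))))
```

For ANN training, where the stopping criterion is a user-defined error threshold far coarser than single-precision resolution, the lost precision is normally negligible.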
Of course, execution time depends strongly on the computer configuration and on the number of connections in the ANN topology. aNETka offers much easier integration with our measurement software and can also be used as stand-alone software – both factors that outweigh its limitations.
For more information, contact:
Dr. Stan Zurek, Research Associate
Wolfson Centre for Magnetics
School of Engineering
Cardiff CF24 3AA
Tel: +44 2920 87 5943