Professor Deng’s research examines nuclear power plants (NPPs), which generate electricity using nuclear fission as a source of heat. Her team is interested in how these plants fail – their failure modes – and in how early intervention can prevent a mishap from becoming a catastrophe.
Professor Deng’s work has included predicting failure modes. She believes it is crucial to identify the likely physical manner of failure and to estimate the recovery time – how long before the NPP will be operational again. There are two general methods of detecting failure modes: using simulation models, or monitoring the NPP and analysing the data being recorded. Professor Deng has added a third, exploiting her interest in artificial neural networks, a technology inspired by the workings of the brain.
Loss of Coolant Accidents
NPPs rely upon coolants to keep their temperature within a safe range. They have both primary and secondary cooling systems, which is why ‘loss of coolant accidents’ (LOCAs) are thankfully rare. However, NPP ageing increases the chances of failure, and a LOCA would be a very serious incident.
In such an occurrence, a rapid reactor shutdown would be essential to prevent serious damage or radiation release. LOCAs occur due to a double-ended break in the reactor headers of the primary heat transport system, which transfers heat from the reactor to the steam turbines that generate electricity. Detecting these breaks accurately is challenging because abnormal patterns must be spotted in transient data that changes rapidly, in near real-time. Researchers have recreated this data using a simulation tool called RELAP5, which models the behaviour of the reactor coolant system and core during potential accidents. Professor Deng believes that these data could be used to train artificial neural networks to detect LOCA events as they arise – something that would be valuable to NPP operators.
Nature of Neural Networks
Professor Deng specialises in the use of artificial neural networks, computational models loosely inspired by the human brain. They are excellent at mapping inputs to outputs and can capture non-linear relationships, but they require ‘training’ before they can be used to perform a function.
Her team utilised a well-known design of neural network – a ‘multi-layer perceptron’ – which has an input layer of neurons, one or more hidden layers and an output layer. Each input neuron is connected to the neurons of the first hidden layer, and signals pass through the subsequent connections to generate an output; how inputs map to outputs is determined by the training performed.
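To illustrate the idea, a forward pass through a multi-layer perceptron can be sketched in a few lines of Python using NumPy. This is a minimal sketch with toy layer sizes, not the architecture of Professor Deng’s actual network:

```python
import numpy as np

def mlp_forward(x, weights, biases):
    """Forward pass through a multi-layer perceptron.

    Each layer computes a weighted sum of its inputs plus a bias;
    hidden layers then apply a non-linear activation (tanh here).
    """
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = np.tanh(a @ W + b)           # hidden layers: non-linear
    return a @ weights[-1] + biases[-1]  # output layer: raw scores

# Toy example: 4 inputs -> 5 hidden neurons -> 3 outputs
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 5)), rng.normal(size=(5, 3))]
biases = [np.zeros(5), np.zeros(3)]
out = mlp_forward(rng.normal(size=4), weights, biases)
print(out.shape)  # (3,)
```

Training adjusts the entries of `weights` and `biases` until the outputs match the expected values, which is exactly the tuning described below.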
To succeed, Professor Deng needed accurate training scenarios and had to tune the neural network so that its outputs matched those expected. Tuning involved adjusting the weights and biases of the network until an ‘optimal’ network was achieved that delivered the results Professor Deng wanted.
Detecting LOCAs using a Neural Network
For Professor Deng’s work to be of practical benefit, neural networks must be trained to recognise a LOCA from data received by monitoring the operation of an NPP; it is also imperative that predictions have a high level of accuracy.
Data obtained from simulations of LOCAs were available as time-dependent measurements related to inlet header ‘break-size’. There were 35 analogue sequences (critical NPP operating measurements varying over time) and two digital values representing the state of components, such as reactor-tripped (yes or no) and pump-pressure-high (yes or no).
Six different break-sizes were modelled, ranging from ‘no-break’ to ‘double ended guillotine break’, with severity graduating between the two extremes. Around 541 signal measurements were taken for a transient duration of one minute. Because there were 37 variables available from the data, Professor Deng created a neural network with 37 neurons in the input layer, two hidden layers of a variable number of nodes, and a final layer of three outputs for detecting the break-size and the inlet header location where the failure occurred. The output layer is termed ‘softmax’ because it uses a well-known mathematical formula to convert the network’s raw outputs into probabilities that sum to one, from which a single discrete prediction can be made rather than a variable one.
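The softmax formula itself is short, and a minimal sketch shows how it turns raw scores into probabilities and then a discrete class (the scores below are hypothetical, not taken from the study):

```python
import numpy as np

def softmax(z):
    """Convert raw network outputs into probabilities that sum to one."""
    e = np.exp(z - np.max(z))  # subtract the max for numerical stability
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])       # hypothetical raw network outputs
probs = softmax(scores)                  # probabilities summing to 1.0
predicted_class = int(np.argmax(probs))  # discrete prediction: class 0
```

Picking the class with the highest probability is what gives the discrete output described above.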
Creating the Optimal Neural Network
The simulated transient data set of break-sizes was split randomly into balanced training (50% of the data), validation (25%) and test (25%) sets. The training set was normalised using a standard-score normalisation method, in which each variable is shifted and scaled to have a mean of zero and a standard deviation of one, allowing easier comparison between variables.
The neural network parameters were tuned over repeated passes through the training data, each called an ‘epoch’ (up to 1,000 in total), and the validation set (unseen by the neural network during tuning) was used to check the performance of the network during training. To reduce the risk of overfitting, the training was stopped when the validation error did not improve for six consecutive epochs.
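This early-stopping rule can be sketched as a simple check over the validation error per epoch; the error values below are illustrative, not from the study:

```python
def early_stop(val_errors, patience=6):
    """Return the epoch at which training should stop: when the validation
    error has not improved for `patience` consecutive epochs."""
    best, since_best = float("inf"), 0
    for epoch, err in enumerate(val_errors):
        if err < best:
            best, since_best = err, 0  # improvement: reset the counter
        else:
            since_best += 1
            if since_best >= patience:
                return epoch           # no improvement for `patience` epochs
    return len(val_errors) - 1

# Validation error improves, then stalls for six consecutive epochs
errors = [0.9, 0.7, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5]
print(early_stop(errors))  # 8
```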
The performance of the trained network was evaluated using the test set (again, unseen by the neural network), and the network with the highest accuracy and the smallest number of hidden nodes was chosen as the optimal network. The optimal network could then be used to detect the break-sizes of new, unseen data to test its accuracy. In addition, the trained networks were checked against ‘noisy’ data, where errors were deliberately injected into the inputs.
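One simple way to perform such a robustness check is to add zero-mean Gaussian noise scaled to each signal’s own spread; this is a sketch of the general technique, not necessarily the exact noise model used in the study:

```python
import numpy as np

def add_noise(signals, level=0.05, seed=0):
    """Corrupt input signals with zero-mean Gaussian noise whose standard
    deviation is a fraction (`level`) of each signal's own spread, to test
    a trained network's robustness to measurement error."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, level * signals.std(axis=0), size=signals.shape)
    return signals + noise

rng = np.random.default_rng(42)
clean = rng.normal(size=(100, 5))   # hypothetical stand-in for NPP signals
noisy = add_noise(clean, level=0.05)
```

A network whose test accuracy barely drops on `noisy` inputs can be considered robust to this level of measurement error.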
Linear interpolation was a key step in improving the performance of the optimal network. Linear interpolation constructs new data points within the range of a set of known data points by fitting straight lines between them. The number of training patterns critically affects the predictive performance of neural networks: when the training set contains only a limited number of patterns, prediction performance can often be improved by adding new ones. Linear interpolation was therefore used to generate data for break-sizes missing from the simulations. The optimal network was then trained and tested using the transient dataset augmented with this missing break-sizes data, yielding a network with better performance.
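The augmentation idea can be sketched with NumPy’s `interp`: a synthetic transient for an unsimulated break-size is built, time-step by time-step, by interpolating between the signals of the nearest simulated break-sizes. The toy numbers below are hypothetical, chosen only to make the straight-line behaviour visible:

```python
import numpy as np

def interpolate_break_size(signals_by_size, known_sizes, new_size):
    """Generate a synthetic transient signal for an unsimulated break-size
    by linearly interpolating, at each time step, between the signals of
    the simulated break-sizes bracketing it."""
    return np.array([
        np.interp(new_size, known_sizes, signals_by_size[:, t])
        for t in range(signals_by_size.shape[1])
    ])

# Toy data: one signal over 4 time steps at break-sizes 0.0 and 1.0
signals = np.array([[0.0, 1.0, 2.0, 3.0],   # break-size 0.0
                    [4.0, 5.0, 6.0, 7.0]])  # break-size 1.0
mid = interpolate_break_size(signals, [0.0, 1.0], 0.5)
print(mid)  # [2. 3. 4. 5.]
```

Each interpolated transient is then added to the training set as a new pattern for a break-size that was never simulated directly.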
Was the Neural Network Accurate?
The neural network with 11 hidden nodes was found to be the optimal network, with the highest accuracy of 98.5% on the test set. A network with 19 nodes delivered the same accuracy of 98.5% but used eight more hidden nodes; since networks with fewer hidden nodes tend to generalise better, the smaller network was considered ‘optimal’. For an inlet header with large break-sizes, the optimal network achieved more than 95% accuracy, the highest being 99.2% for the largest break-size. Finally, the optimal network remained robust when the test data was corrupted with ‘noise’.
Professor Deng continues this exciting area of research to further train networks to detect all break-sizes, small and large. Her ultimate aim is to create an effective tool that can be incorporated into the control room, which will significantly enhance the safety of nuclear power plants.
LOCAs are considered to be among the most significant bases for accidents in NPPs, with a potential for release of radioactivity into the environment if not mitigated. However, the occurrence of a LOCA in an NPP is very unlikely, as there are multiple safety barriers.
The training of the neural network was done with simulation data, yet a number of serious NPP incidents have occurred in the real world. How would you leverage data from these events in your research?
If new or real-world data become available for our research, it would benefit hugely.
Your research has also included prediction of ‘failure modes’ of NPPs. As many plants are ageing around the world what would you expect to see in terms of NPP failure in the future, and over what time frame?
Regular prognostic health monitoring (PHM), through periodic in-service inspection of NPP equipment and components, is highly recommended for safe operation and for plant life extension beyond design life.
You mentioned that LOCAs are difficult to detect from the data being monitored. Could you elaborate, explaining why this is?
A large LOCA is a very fast transient, involving sudden changes in the thermal-hydraulic parameters. Hence an operator support system is required for timely detection and mitigation.
Your research has some important implications for NPPs, old and new. Where do you see your research leading, especially for the new generation of NPPs?
This technique can be extended to any safety-critical industry.
Professor Jiamei Deng’s research interests lie in improving efficiency for buildings, renewable energy technologies, safety monitoring for nuclear power plants, the manufacturing industry and transport systems, virtual sensor design and data processing. Her particular interest lies in predictive models, emissions modelling, and data-driven power plant safety monitoring.
- Professor Jiamei Deng would like to thank the Engineering and Physical Sciences Research Council (EPSRC) for financial support (EP/M018717/1).
- Dr Gopika Vinod and Dr Santhosh, Bhabha Atomic Research Centre, Mumbai, India
- Professor Chris Gorse, Dr David Tian, Dr Giuseppe Colantuono, Leeds Beckett University
Professor Jiamei Deng obtained her PhD degree at Reading University in 2005. Her research provided solid knowledge of big data analysis, machine learning, analytics and modelling. She is currently a Professor in Artificial Intelligence and Energy at Leeds Beckett University. Professor Deng holds two European patents. She is sole author of one monograph, author of one book chapter, and has around 70 published papers in prestigious journals and international conferences.
Prof Jiamei Deng
School of Computing,
Creative Technologies & Engineering
Leeds Beckett University
Leeds LS6 3QR
T: +44 (0)1138 127627