
A multimodal stereovision framework to model wildfires


Wildfires propagate in a chaotic way, so finding patterns in their spread would help us understand how best to tackle them. Dr Lucile Rossi from the University of Corsica, France, is a lead scientist in the field of image processing. Dr Rossi proposes a multimodal vision system that uses cameras on unmanned aerial vehicles (UAVs) to measure the fire front. Infrared (IR) and visible information are fused to select fire areas in the visible images, and 3D shapes of the fire are then obtained from the fire areas selected in stereoscopic visible images. This new system has the potential to track the characteristics and energy of a wildfire over long distances, providing a vital tool for firefighters.

Wildfires are a significant threat to our ecosystems, our planet, and our civilisation. Fire propagates chaotically, without control, and it is difficult to predict how it will spread. Recent years have seen devastating wildfires in which many human and animal lives have been lost, expensive assets destroyed, and firefighters’ lives increasingly put at risk.

Defined as unwanted fires that burn forests and wild lands, wildfires consume more than 340 million hectares every year across the planet and cost billions as governments attempt to prevent and curtail them. Wildfires happen regularly in nature and often spark naturally, but a warming climate, changing weather, and the increasing incidence of fires caused by human activity – whether through deliberate acts or as an inevitable consequence of urban spread – mean the risk of wildfires is only going to increase. We urgently need strategies to reduce wildfire risk, improve decision-making for firefighting and land-use planning, and decrease the economic, environmental, and social impacts of fires. And for these we need data obtained during the propagation of a fire, across different fuels and conditions, to better understand the phenomena involved so we can model and predict fire behaviour.

Computer vision for fire prediction

In a new study, Dr Lucile Rossi and colleagues from the University of Corsica, France, have developed a system that uses unmanned aerial vehicles (UAVs) and a multimodal stereovision framework to create a georeferenced three-dimensional (3D) picture of the fire. It enables them to obtain geometrical measurements of the fire – position, rate of spread, height, length, width, surface, and volume – that are essential for predicting its behaviour. The team has also developed a behaviour model that can predict the position and heat flux of a fire during its propagation, taking into account the fire's ignition, weather, fuel, and ground topography.

Recent research has looked at the use of computer vision and image processing to help understand and predict the movement of fires. Most of these efforts focus on merging information from multiple single-camera systems or multiple stereo pairs of cameras on the ground. These have enabled researchers to measure fires spreading across about ten metres. The detection of fire pixels in an image is an important step in measuring fire by vision, as it determines the accuracy with which the fire's characteristics can be estimated.

Work in this field so far has mostly concentrated on the visible light spectrum; however, fire colour and texture, and smoke, can affect the detection of fire pixels. IR imaging overcomes some of these issues (short-wavelength infrared can 'see' through smoke when the fire emits little soot), but it cannot work on its own because fire emits light in several spectral bands at once in a non-uniform way, and fire areas that appear in visible images do not have the same shape as those obtained with IR images.
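
As an illustration of this fusion idea, the sketch below (in Python, with hypothetical threshold values and a deliberately simple flame-colour rule) shows how a registered IR image might be used to confirm candidate fire pixels found in the visible image. It is a minimal sketch under assumed conventions, not the team's actual detection method, which is described in the referenced papers.

```python
# Minimal sketch of IR-guided fire-pixel detection (illustrative only;
# the thresholds and the colour rule are assumptions, not the team's values).
import numpy as np

def detect_fire_pixels(ir, rgb, ir_thresh=0.6, red_thresh=120):
    """Return a boolean mask of probable fire pixels.

    ir  : (H, W) float array, longwave-IR intensity normalised to 0..1,
          registered pixel-to-pixel with the visible image
    rgb : (H, W, 3) uint8 visible image, channels in R, G, B order
    """
    hot = ir > ir_thresh                          # hot regions from IR
    r = rgb[..., 0].astype(np.int16)
    g = rgb[..., 1].astype(np.int16)
    b = rgb[..., 2].astype(np.int16)
    # Simple flame-colour rule in the visible image: bright and reddish.
    flame_colour = (r > red_thresh) & (r >= g) & (g >= b)
    # Fuse: keep visible flame-coloured pixels only where the IR also
    # reports heat, suppressing smoke and sunlit-vegetation false alarms.
    return hot & flame_colour
```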

Modelling the fire.

The most effective approach so far combines visual data, GPS positions, and inertial measurement units (IMUs) to estimate the geometric characteristics of a fire across about 10m; however, even this system has to anticipate the path of the fire to position the cameras optimally, and it loses precision as the fire moves away.

The two main challenges facing researchers in this field are therefore to find a way to keep the cameras close to the fire as it moves, and to compute the equation of the fire's propagation plane and its main direction at the moment each image is acquired. This is particularly important because every forest has distinct and unique characteristics, and conditions within a single forest vary. Even a small change in weather, humidity, or terrain, for example, can have a big impact on how the fire progresses.

“Conditions within a forest vary, so the spread of a fire front changes as the fire progresses.”

UAV multimodal stereovision

Dr Rossi's new system overcomes these challenges because the UAV can change the position and orientation of the onboard cameras as needed. The vision device comprises two cameras operating simultaneously in both the visible and infrared ranges, producing stereoscopic, multimodal images that are georeferenced using an onboard GPS. To determine the direction of the fire and the local propagation plane, the researchers compute the relevant mathematical equations for each image as it is acquired.
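
A flavour of these computations can be given with a short sketch: fitting a local propagation plane to georeferenced ground fire points by least squares, and taking the main spread direction as the dominant principal component of those points. This is a minimal illustration under assumed conventions (z vertical, points already expressed in metres), not the team's exact formulation.

```python
# Minimal sketch: least-squares propagation plane and main fire
# direction from 3D ground fire points (assumptions: z is vertical
# and the points are already expressed in metres).
import numpy as np

def fit_plane(points):
    """Fit the plane z = a*x + b*y + c to an (N, 3) point array.
    Returns the coefficients (a, b, c) and the unit plane normal."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
    normal = np.array([-a, -b, 1.0])
    return (a, b, c), normal / np.linalg.norm(normal)

def main_direction(points):
    """Dominant horizontal spread direction: first principal
    component of the ground fire points in the x-y plane."""
    xy = points[:, :2] - points[:, :2].mean(axis=0)
    _, _, vt = np.linalg.svd(xy, full_matrices=False)
    return vt[0]  # unit 2D vector
```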

Two FLIR cameras, each capturing images in both the visible and longwave-infrared spectra, are attached to a carbon-fibre axis 0.85m apart, and the resulting image pairs are superimposed, compared, and analysed by dedicated computer software. The cameras capture images at a rate of one frame every two seconds, controlled by a micro-computer that keeps them synchronised. The FLIR cameras have an onboard GPS/compass sensor, IMU board, and altitude sensors. Together these enable the researchers to obtain the position and orientation of the vision system at each moment and to generate georeferenced images. The combined weight of the vision system – cameras, batteries, and Raspberry Pi computer for the FLIR cameras – is just 2.278kg. Dr Rossi is currently developing a lighter multimodal stereovision system with separate visible and IR cameras, and with the GPS/compass sensor and IMU board added independently of the cameras.

Framework for measuring the fire front.

In their experiments, the researchers combined information from the infrared image and the visible image to detect fire pixels in the visible range. They then used the coordinates of matched points and the parameters of the stereovision device to triangulate different features of the fire, producing 3D points. These 3D data allowed them to model the geometric characteristics of the fire. They also had to estimate variables such as the local propagation plane so they could model how the fire would spread. The main direction of the fire can vary according to wind and ground slope, so the team used ground fire points to estimate the direction, which then enabled them to calculate further characteristics such as the inclination, width, and length of the fire.
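
In its simplest form, triangulation reduces to the classic disparity-to-depth relation, sketched below. The sketch assumes a rectified stereo pair for clarity; the actual device uses full calibrated stereovision parameters rather than this idealised geometry.

```python
# Minimal sketch of stereo triangulation for a rectified image pair
# (a simplifying assumption; the real system works from calibrated
# stereovision parameters).
import numpy as np

def triangulate(u_l, v_l, u_r, f, cx, cy, baseline=0.85):
    """Back-project matched fire pixels to 3D points.

    u_l, v_l : pixel coordinates of matches in the left image (arrays)
    u_r      : matched column coordinates in the right image
    f        : focal length in pixels; cx, cy: principal point
    baseline : camera separation in metres (0.85m for this device)
    Returns an (N, 3) array of points in the left-camera frame.
    """
    disparity = u_l - u_r               # positive for points in front
    z = f * baseline / disparity        # depth from disparity
    x = (u_l - cx) * z / f
    y = (v_l - cy) * z / f
    return np.column_stack([x, y, z])
```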

The researchers account for the constantly moving position and orientation of the UAV-mounted vision system by projecting all the 3D points into a global reference frame, allowing them to show the evolution of the fire's geometric characteristics over time, including the front line of the fire.
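
This projection is a standard rigid-body transformation. The sketch below assumes the UAV pose is available as a camera-to-world rotation matrix (built from the IMU attitude) and a position (from GPS, converted to local metric coordinates); these conventions are assumptions for illustration.

```python
# Minimal sketch: expressing camera-frame fire points in a global
# frame, assuming the pose is given as a camera-to-world rotation
# (from the IMU) and a world position (from GPS in local metres).
import numpy as np

def to_global(points_cam, R_wc, t_w):
    """points_cam: (N, 3) points in the camera frame;
    R_wc: (3, 3) camera-to-world rotation; t_w: (3,) camera
    position in the world frame. Returns (N, 3) world points."""
    return points_cam @ R_wc.T + t_w
```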

Triangulating the data points enabled the team to produce a 3D reconstruction of the fire, from which they calculated its volume and surface characteristics, allowing them to estimate the heat flux at the fire front and the amount of energy emitted by its surface, regardless of the position of the target. They also calculated the front and back lines of the fire to work out its rate of spread and depth, taking into account the different fire fuels that affect combustion and thus the spread of the fire.
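
As a simple illustration, the rate of spread can be approximated by projecting the front-line points from two acquisitions onto the main spread direction and dividing the mean advance by the elapsed time. The sketch below assumes the front lines have already been extracted and is not the team's exact method.

```python
# Minimal sketch: mean rate of spread from two extracted front lines
# (assumes the fronts and the 2D unit spread direction are given).
import numpy as np

def rate_of_spread(front_t0, front_t1, direction, dt):
    """front_t0, front_t1: (N, 3) and (M, 3) front-line points at two
    times dt seconds apart; direction: unit 2D spread direction.
    Returns the mean front advance in metres per second."""
    s0 = front_t0[:, :2] @ direction
    s1 = front_t1[:, :2] @ direction
    return (s1.mean() - s0.mean()) / dt
```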

Testing in the field

It is not possible to reproduce an actual wildfire, making it very difficult to test – or ground truth – experimental results. So, Dr Rossi and the team first demonstrated the validity of their model using a UAV flying 10–15m around a parked car with known geometrical characteristics. They applied the same triangulation and modelling methods and found that their estimated dimensions and position of the car were accurate.

In a second experiment they set an outdoor fire using wood wool placed over an area of 3 x 5m. The estimated positions of the fire over time were in line with the experiment. Finally, they conducted tests on a controlled fire set over an area of 5 x 10m on flat and inclined ground, using a UAV flying 10m high and 15m from the fire zone. By setting the fire at the short side of the rectangle they could watch it propagate over the full length of the fuel area. Once again, they found their estimated data matched expectations for this type of fire.

The next steps for Dr Rossi and her team will be to use landmarks as position and height references in their tests, for comparison with measured characteristics. However, even at this stage it is clear that this new multimodal stereovision framework has huge potential for accurately measuring the geometric characteristics of wildfires, enabling researchers – and ultimately firefighters – to understand and predict the propagation and behaviour of the fires that are increasingly taking such a toll on our lives and landscapes.


What do you think are the main barriers to widespread implementation of your system by governments and fire-fighting organisations?
What is important for firefighters during a wildfire is to know how the fire will evolve so they can adapt their strategy. Robust predictions can be obtained by combining a fire behaviour model with drone measurements. To do that, the measurements must be obtained and transmitted to the fire simulator in real time. We are working on these next steps. In addition, the framework must be easily transportable and usable. SATT support enables technology transfer and the transition from research products to usable products for non-specialist users.

References

  • Ciullo, V, Rossi, L, and Pieri, A, (2020). Experimental Fire Measurement with UAV Multimodal Stereovision. Remote Sensing, 12. DOI: 10.3390/rs12213546
  • Morandini, F, Toulouse, T, Silvani, X, Pieri, A, and Rossi, L, (2019). Image-Based Diagnostic System for the Measurement of Flame Properties and Radiation. Fire Technology, 55. DOI: 10.1007/s10694-019-00873-1
  • Toulouse, T, Rossi, L, Akhloufi, M, Pieri, A, and Maldague, X, (2017). A multimodal 3D framework for fire characteristics estimation. Measurement Science and Technology, 29. DOI: 10.1088/1361-6501/aa9cf3
  • Rossi, L, Molinier, T, Akhloufi, M, Tison, Y, and Pieri, A, (2012). Estimating the surface and volume of laboratory-scale wildfire fuel using computer vision. IET Image Processing, 6, 1031–1040. DOI: 10.1049/iet-ipr.2012.0056
  • Rossi, L, Akhloufi, M, and Tison, Y, (2011). On the use of stereovision to develop a novel instrumentation system to extract geometric fire front characteristics. Fire Safety Journal, 46, 9–20. DOI: 10.1016/j.firesaf.2010.03.001
  • Rossi, J-L, Chetehouna, K, Collin, A, Moretti, B, and Balbi, J-H, (2010). Simplified flame models and prediction of the thermal radiation emitted by a flame front in an outdoor fire. Combustion Science and Technology, 182, 1457–1477. DOI: 10.1080/00102202.2010.489914
DOI
10.26904/RF-138-1802259291

Research Objectives

The development of metrological systems based on vision for the measurement and prediction of fires.

Funding

The Corsican Region, the French Ministry of Research, the CNRS (Centre National de la Recherche Scientifique), SATT (Société d’Accélération du Transfert de Technologies) Sud Est.

Collaborators

Dr Vito Ciullo, Dr Frédéric Morandini,
M. Antoine Pieri, Dr Margot Provost, Dr Tom Toulouse.

Bio

Lucile Rossi is Associate Professor at UMR SPE 6134, University of Corsica – CNRS. For the last sixteen years her work has been to develop vision-based measurement systems for the study, modelling, prediction, and fighting of wildland fires.

Lucile Rossi

Contact
UMR SPE Università di Corsica – CNRS, Campus Grimaldi, Bat PPDB, 20250 Corte.

E: [email protected]
W: https://feuxdeforet.universita.corsica/?lang=en

Creative Commons Licence

(CC BY-NC-ND 4.0) This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
