Transfer Learning with Convolutional Neural Networks for Rainfall Detection in Single Images
Notarangelo, Nicla Maria; Albano, Raffaele; Sole, Aurelia
2021-01-01
Abstract
Near real-time rainfall monitoring at the local scale is essential for urban flood risk mitigation. Previous research on the visual effects of precipitation supports the idea of vision-based rain sensors, but tends to be device-specific. We aimed to use different, readily available photographing devices to develop a dense network of low-cost sensors. Using Transfer Learning with a Convolutional Neural Network, rainfall detection was performed on single images taken under heterogeneous conditions by static or moving cameras without adjusted parameters. The chosen images encompass unconstrained, realistic settings from three sources: the Image2Weather dataset, dash-cams in the Tokyo Metropolitan area, and experiments in the NIED Large-scale Rainfall Simulator. The model reached a test accuracy of 85.28% and an F1 score of 0.86. Its applicability to real-world scenarios was demonstrated through experimentation with a pre-existing surveillance camera in Matera (Italy), obtaining an accuracy of 85.13% and an F1 score of 0.85. The model can easily be integrated into warning systems to automatically monitor the onset and end of rain-related events, exploiting pre-existing devices with a parsimonious use of economic and computational resources. The main limitation is intrinsic to the output: detection without measurement. Future work concerns the development of a CNN based on the proposed methodology to quantify precipitation intensity.
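To make the described approach more concrete, the following is a minimal sketch of transfer learning for binary rain/no-rain detection on single images. The record does not specify the framework, backbone, or training details, so the PyTorch workflow, ResNet-18 backbone, frozen-feature strategy, preprocessing, and hyperparameters below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical transfer-learning setup for rain detection in single images.
# Backbone, framework and hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn
from torchvision import models, transforms

# Reuse an ImageNet-pretrained CNN as a feature extractor (assumed backbone).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False  # freeze the convolutional features

# Replace the classification head with a binary "rain / no rain" output.
backbone.fc = nn.Linear(backbone.fc.in_features, 2)

# Standard preprocessing for single images from heterogeneous cameras.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimisation step on a batch of (image, rain-label) pairs."""
    optimizer.zero_grad()
    logits = backbone(images)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Only the new classification head is trained here, which keeps the computational cost low and matches the parsimonious-resource goal stated in the abstract; fine-tuning deeper layers is a possible variant.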
File | Access | Type | License | Size | Format
---|---|---|---|---|---
water-13-00588-v3_compressed.pdf | Open access | Publisher's PDF | Creative Commons | 525.61 kB | Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.