Newswise — A team of international researchers has developed a deep learning system that can identify natural disasters in images shared on social media. Using computer vision techniques and a training set of 1.7 million photographs, the researchers built a system that analyzes, filters, and detects real incidents with high reliability. Àgata Lapedriza, a member of the AIWELL research group, which specializes in artificial intelligence for human well-being at the eHealth Center and the Faculty of Computer Science, Multimedia and Telecommunications of the Universitat Oberta de Catalunya (UOC), played a key role in this project led by the Massachusetts Institute of Technology (MIT).

The escalating impact of global warming has led to an alarming rise in the frequency and severity of natural disasters like floods, tornadoes, and forest fires. Since predicting the precise occurrence and location of such calamities remains elusive, it is of utmost importance for emergency services and international aid organizations to respond swiftly and effectively to save lives. "Thankfully, technology can significantly contribute in these critical situations. Social media posts can serve as a valuable source of low-latency data to comprehend the progression and aftermath of a disaster," explained Lapedriza.

While earlier studies primarily focused on analyzing textual content, this research, published in IEEE Transactions on Pattern Analysis and Machine Intelligence, went further. During her time at MIT's Computer Science and Artificial Intelligence Laboratory, Lapedriza contributed to developing the incident taxonomy and the database used to train the deep learning models, and she conducted experiments to validate the efficacy of the technology.

The researchers compiled a comprehensive list of 43 incident categories, including natural disasters such as avalanches, sandstorms, earthquakes, volcanic eruptions and droughts, as well as human-caused incidents such as plane crashes and construction accidents. Together with a list of 49 place categories, it was used to annotate the images employed for training the system.

The team built a large database called Incidents1M, containing the 1,787,154 images used to train the incident detection model. Of these, 977,088 images carry at least one positive label corresponding to an incident category, while the remaining 810,066 carry only class-negative labels. Likewise, for the place categories, 764,124 images have positive class labels and 1,023,030 have negative labels.
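To make the labelling scheme concrete, the sketch below shows one way a single annotation of this kind could be represented, with explicit positive and negative labels over incident and place categories. The category names, field names and layout are illustrative assumptions, not the dataset's actual release format.

```python
# Hypothetical sketch of how a single annotation in an Incidents1M-style
# database could be represented: each image can carry positive labels
# ("this shows a flood") and explicit negative labels ("this is not a
# wildfire") over incident and place categories. Category names, field
# names, and layout are illustrative assumptions, not the released format.

from dataclasses import dataclass, field

# Small illustrative subsets of the 43 incident and 49 place categories.
INCIDENT_CLASSES = ["flood", "wildfire", "earthquake", "plane crash"]
PLACE_CLASSES = ["coast", "forest", "city street", "airport"]

@dataclass
class ImageAnnotation:
    url: str
    # Per-class labels: 1 = class-positive, 0 = class-negative;
    # categories missing from the dict are simply unlabeled.
    incidents: dict = field(default_factory=dict)
    places: dict = field(default_factory=dict)

# An image of a flooded city street, explicitly marked as not a wildfire.
example = ImageAnnotation(
    url="https://example.org/photo.jpg",
    incidents={"flood": 1, "wildfire": 0},
    places={"city street": 1},
)
```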

Avoiding false positives

The negative labels made it possible to train the system to avoid false positives: a photograph of a fireplace, for example, may look visually similar to one of a house fire, but it does not show a burning building. Once the database had been built, the team trained a model to detect incidents using a convolutional neural network (CNN) and a multi-task learning paradigm.
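As a rough illustration of how such a setup might look, the sketch below pairs a shared CNN backbone with one output head per task (incidents and places) and a per-class binary cross-entropy computed only over labeled entries, so the explicit negative labels actively push the corresponding scores down. The backbone choice, loss details and hyperparameters are assumptions for illustration, not the authors' exact configuration.

```python
# A minimal sketch of a multi-task incident model, assuming a shared CNN
# backbone with one head per task (incidents, places) and a per-class binary
# cross-entropy computed only over labeled entries, so explicit negative
# labels actively lower the corresponding scores. The backbone, loss, and
# hyperparameters are illustrative, not the authors' exact configuration.

import torch
import torch.nn as nn
import torchvision.models as models

NUM_INCIDENTS, NUM_PLACES = 43, 49

class IncidentModel(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)   # shared feature extractor
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()                # strip the ImageNet classifier
        self.backbone = backbone
        self.incident_head = nn.Linear(feat_dim, NUM_INCIDENTS)
        self.place_head = nn.Linear(feat_dim, NUM_PLACES)

    def forward(self, x):
        feats = self.backbone(x)
        return self.incident_head(feats), self.place_head(feats)

def masked_bce(logits, targets, mask):
    """Binary cross-entropy over labeled classes only.

    targets: 1 for class-positive, 0 for class-negative.
    mask:    1 where a (positive or negative) label exists, 0 otherwise.
    """
    loss = nn.functional.binary_cross_entropy_with_logits(
        logits, targets, reduction="none")
    return (loss * mask).sum() / mask.sum().clamp(min=1)

# One illustrative training step on a dummy batch.
model = IncidentModel()
images = torch.randn(4, 3, 224, 224)
inc_t, inc_m = torch.zeros(4, NUM_INCIDENTS), torch.zeros(4, NUM_INCIDENTS)
plc_t, plc_m = torch.zeros(4, NUM_PLACES), torch.zeros(4, NUM_PLACES)
inc_t[0, 5], inc_m[0, 5] = 1.0, 1.0   # one positive incident label
inc_m[1, 12] = 1.0                    # one negative label (target stays 0)

inc_logits, plc_logits = model(images)
loss = masked_bce(inc_logits, inc_t, inc_m) + masked_bce(plc_logits, plc_t, plc_m)
loss.backward()
```

Masking the loss in this way means an image only supervises the categories for which it actually carries a label, which is what makes the explicit negatives useful for suppressing look-alike false positives.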

Once the deep learning model had been trained to identify incidents in images, the team ran a series of experiments to evaluate its performance, this time using a large collection of images taken from social media platforms such as Flickr and Twitter. "By employing these images, our model successfully detected incidents, and we verified that they corresponded to specific recorded incidents, such as the 2015 earthquakes in Nepal and Chile," stated Lapedriza.
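In practice, applying such a model to a stream of social media images amounts to scoring each image and keeping those whose incident score clears a threshold. The short sketch below, which reuses the illustrative model above, shows that filtering step; the threshold and helper name are assumptions.

```python
# A hedged sketch of the filtering step: score each social media image with
# the (illustrative) model above and keep those whose best incident score
# passes a threshold. The threshold and helper name are assumptions.

import torch

@torch.no_grad()
def detect_incidents(model, images, class_names, threshold=0.5):
    """Return (class name, score) per image, or None when no incident
    score exceeds the threshold."""
    model.eval()
    inc_logits, _ = model(images)          # ignore the place head here
    probs = torch.sigmoid(inc_logits)
    scores, idx = probs.max(dim=1)
    return [
        (class_names[i.item()], s.item()) if s.item() >= threshold else None
        for s, i in zip(scores, idx)
    ]
```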

Using real data, the researchers demonstrated the potential of a deep learning-based tool for extracting information from social media about natural disasters and humanitarian aid needs. "This advancement will aid humanitarian organizations in gaining more effective insights into ongoing disasters and improving the management of humanitarian aid when it is most needed," she added.

Building upon this accomplishment, the subsequent challenge could involve using the same set of images depicting floods, fires, or other incidents to automatically assess the severity of the situations or enhance long-term monitoring. The authors also proposed that the scientific community could further the research by integrating image analysis with accompanying textual analysis, enabling more precise classification.

Journal Link: IEEE Transactions on Pattern Analysis and Machine Intelligence