A Few Bits of Help: Computing for Disasters

by Michael Keller

As Hurricane Sandy has once again shown, the absence or presence of information can become a matter of life and death after a disaster strikes.

Data becomes a means of fulfilling need. Survivors urgently need food, gas, water, medical treatment or shelter from the elements. First responders, some from outside the hardest hit areas, need maps, real-time situation reports, staging locations, overflight imagery and ways to coordinate and manage efforts. Victims’ families and friends need to know about the fate of loved ones.

And when recovery efforts begin, officials need access to deeper research and analysis to improve their readiness for the next event and to rebuild smarter.

But where do you get the information? How do you put it all together so that it’s useful? Much of the data already exists, and after a major event much more is generated moment to moment: on websites and blogs, in National Oceanic and Atmospheric Administration aerial photography missions, in urban search and rescue finds. Much of it is in different formats, generated and held by different groups and spread far and wide across the Web.

“The challenge is how can computing help in these situations?” asked Dr. Edward Fox, a computer science professor at Virginia Tech, during an online discussion called Computing for Disasters: Saving Lives with Big Data hosted by technology media and research firm GigaOM. “There is an enormous amount of data available but it’s scattered.”

Fox leads a team that has developed the Crisis, Tragedy and Recovery Network, a prototype digital library network that seeks to be an information clearinghouse for disaster-related websites, blogs and other data.

“For more than three years, we’ve been capturing and trying to archive as much information as we can that describes these different events so we don’t forget about them, we don’t lose the information that appears on the Web and then disappears fairly quickly,” Fox said.
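Capturing pages before they vanish is, at its core, a web-archiving step: fetch the content, stamp it with when and where it was found, and store it. A minimal sketch of that step in Python might look like the following; the URL and file layout are hypothetical placeholders for illustration, not details of the CTRnet system itself.

```python
# A minimal sketch of the capture step an archive like CTRnet performs:
# fetch a disaster-related page and store it with a timestamp and source URL
# so the content survives even if the original post disappears.
# The example URL and file layout are hypothetical placeholders.
import json
import urllib.request
from datetime import datetime, timezone
from pathlib import Path

def snapshot(url, archive_dir="archive"):
    """Fetch a page and store its content alongside basic provenance metadata."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        body = resp.read()
    captured_at = datetime.now(timezone.utc).isoformat()
    out = Path(archive_dir)
    out.mkdir(exist_ok=True)
    stem = captured_at.replace(":", "-")
    (out / f"{stem}.html").write_bytes(body)
    (out / f"{stem}.json").write_text(json.dumps({
        "source_url": url,           # where the content came from
        "captured_at": captured_at,  # when it was seen
        "bytes": len(body),
    }, indent=2))

snapshot("https://example.org/storm-update")  # hypothetical page
```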

Many contributors, stakeholders

By their very definition, disasters require resources beyond the capabilities of a single community and demand a multi-agency response, GigaOM research director Matthew Spady said during the seminar.

But with so many potential contributors from city, state and federal government arms, NGOs and the private sector, the authority, authenticity and reliability of data are paramount, said Dr. Paul Miller. His consultancy, Cloud of Data, helps clients understand the implications of using cloud computing. Any system needs to track the provenance and timeliness of datasets with good metadata.
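What such provenance metadata might look like can be sketched in a few lines of Python; the field names below are illustrative assumptions, not any standard schema.

```python
# A minimal sketch of the provenance metadata Miller describes: each dataset a
# responder pulls in carries who published it, when, and where it was obtained.
# Field names are illustrative assumptions, not a standard schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DatasetRecord:
    name: str               # e.g. a shelter-location dataset
    publisher: str          # issuing agency or NGO
    retrieved_from: str     # URL or feed the data was pulled from
    published_at: datetime  # when the source says the data was produced (UTC-aware)
    retrieved_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    def age_hours(self) -> float:
        """How stale the data is; responders may reject anything too old."""
        return (datetime.now(timezone.utc) - self.published_at).total_seconds() / 3600

record = DatasetRecord(
    name="Shelter locations, Monmouth County",      # hypothetical dataset
    publisher="County OEM",
    retrieved_from="https://example.gov/shelters.csv",
    published_at=datetime(2012, 10, 30, 6, 0, tzinfo=timezone.utc),
)
print(f"{record.name}: {record.age_hours():.0f} hours old")
```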

It also needs to contend with the different needs of those accessing the data. Rescuers need access to information in real time. Others, such as civil engineers rebuilding weeks later, might need accurate, highly specialized information that doesn’t have to be served up instantly.

“Things you need at the computational level—just as all of those stakeholders need different aspects of different kinds of access to the data at different times—the kind of computational resources one needs varies depending on the kinds of tasks you’re doing,” said Grant Ingersoll, the chief scientist at LucidWorks, a search, discovery and analytics platform developer.

Digital aid on the way

Other organizations are already working to bring together the wide array of databases that contribute valuable information during and after emergencies. Some use algorithms to collect, synthesize and update that data, while others rely on human analysts to turn it into powerful tools that aid disaster response.

Google.org, the tech company’s charitable arm, manages a crisis mapping service for victims and service groups.



Google’s efforts focus on crisis mapping, which integrates weather, damage assessment, shelter location data and other information needed by the public and first responders; Person Finder, a crowd-sourced registry and message board that allows survivors, friends, family and disaster relief organizations to post and find information about people’s whereabouts and condition; data synthesis and visualization tools; and public alerts, which allow response organizations to disseminate emergency information to citizens.
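Person Finder, for instance, is essentially a registry of small, structured reports about individuals that can be searched and matched. A rough Python sketch of that idea follows; the field names and matching logic are illustrative assumptions, not the service’s actual interchange format.

```python
# A rough sketch of the kind of record a registry like Person Finder collects
# and matches against queries. Field names here are illustrative, not the
# service's actual schema.
from datetime import datetime, timezone

person_report = {
    "full_name": "Jane Example",                 # hypothetical survivor
    "status": "safe",
    "last_known_location": "Rockaway Park, NY",
    "message": "Staying with neighbors, phone is down.",
    "reported_by": "jane.example@example.com",
    "entry_date": datetime.now(timezone.utc).isoformat(),
}

def matches(record, query):
    """Naive name match; a real registry would normalize and fuzzy-match names."""
    return query.lower() in record["full_name"].lower()

print(matches(person_report, "jane"))  # True
```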

ReliefWeb, the specialized information gateway of the United Nations Office for the Coordination of Humanitarian Affairs, is a tool tailored specifically to aid disaster relief and humanitarian organizations. The service’s analysts, editors and cartographers collect information about disaster and conflict zones around the clock, identify the most important information and present it as maps, reports and data.

Another effort, Ushahidi, is an open-source project that uses crowdsourced reports to build up a picture of a crisis. It can incorporate information arriving via text message, email, Twitter and the web, and it makes sense of that real-time data with mapping and visualization tools.
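The common thread in that kind of platform is normalization: reports arrive over many channels and are reduced to one structure that a map or timeline can consume. A minimal sketch of that step, with assumed channel names and fields for illustration:

```python
# A minimal sketch of the multi-channel normalization a crowd-reporting
# platform performs: reports from SMS, email, Twitter or a web form are
# collapsed into one common record. Channel names and fields are assumptions.
from datetime import datetime, timezone

def normalize(channel, text, lat=None, lon=None):
    """Collapse a raw report from any channel into a common record."""
    return {
        "channel": channel,                # "sms", "email", "twitter", "web"
        "text": text.strip(),
        "location": (lat, lon),            # None until geocoded or mapped by hand
        "received_at": datetime.now(timezone.utc).isoformat(),
        "verified": False,                 # flipped once a human confirms the report
    }

reports = [
    normalize("sms", "Flooding on Avenue C, water waist deep"),
    normalize("twitter", "Shelter at PS 188 is full #sandy", 40.72, -73.98),
]
print(len([r for r in reports if r["location"] != (None, None)]))  # 1 geolocated
```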


The bottom line is that the field of disaster-relief computing is just beginning to emerge, and harnessing the scope and depth of big databases holds out hope for helping communities prepare for and recover from human-made and natural disasters.

“From the computing side, there is a lot of new effort going on in order to make this stuff scalable and in-real-time processing,” said LucidWorks’ Ingersoll. “There is a lot of stuff already out there and a lot more to come.”

Top Image: Haiti earthquake: Port-au-Prince—A man in front of the pile of rubble that once was the Université Caraibes. Courtesy Flickr user IFRC.

Michael Keller is the Managing Editor of Txchnologist. His science, technology and international reporting work has appeared online and in newspapers, magazines and books, including the graphic novel Charles Darwin’s On the Origin of Species. Reach him at mkeller@groupsjr.com.
