PBS web series Off Book has produced a short, compelling video called “The Art of Data Visualization,” which showcases powerful presentations of complex data, nuances of which might otherwise be lost without a visual display.
“Humans have a powerful capacity to process visual information, skills that date far back in our evolutionary lineage,” the team behind the video writes. “And since the advent of science, we have employed intricate visual strategies to communicate data, often utilizing design principles that draw on these basic cognitive skills. In a modern world where we have far more data than we can process, the practice of data visualization has gained even more importance.”
Top Image: Hurricanes and Tropical Storms Since 1851, courtesy of data visualization expert John Nelson and IDV Solutions.
There’s a new supercomputer out there. Its massive processing power is capable of solving once intractable problems in healthcare, aviation and other pursuits where daily operations produce oceans of data.
Big companies have only recently begun feeding this data-analyzing behemoth the bottlenecks that have long dragged them down. Drawing on statistical and machine-learning tools, it delivers fixes for daily annoyances as disparate as late plane arrivals and hospital discharges.
But this supercomputer isn’t composed of dozens of servers humming away on racks in air-conditioned rooms. It’s a distributed model, with petaflops-worth of processors sitting in a bedroom in New York, an apartment in Warsaw and an office in Singapore. This supercomputer is made of people.
Global Internet connectivity, an expanding industrial Internet of machine sensors generating data and talking to each other, and a community of data scientists creating refined tools to work with all that information are all merging into this virtual supercomputer.
What you know about science may be wrong.
That’s the premise of the newly founded Metaknowledge Network, a consortium of 25 scientists and scholars, funded largely by the John Templeton Foundation, who hail from fields such as evolutionary biology, physics, history, sociology, medicine and computer science.
These scientists believe our growing ability to harness technology and computational power to analyze research will identify unexamined rules and assumptions, yielding new insights into how we succeed and fail at the scientific process, and how we can improve it.
Reconstructing an ancient language is painstaking work. For hundreds of hours, linguists pore over sounds and words in modern languages to form hypotheses about their earliest ancestral versions.
Until recently, scientists had not successfully automated the process at anything approaching this scale. In a paper published this month in the Proceedings of the National Academy of Sciences, four researchers unveiled a computer program that does just that.
Some have described it as a “time machine” that can spirit a linguist thousands of years into the past. Coauthors Alexandre Bouchard-Côté and Tom Griffiths liken their methods to sequencing an ancient organism’s DNA, which reveals essential clues about an earlier habitat and ecosystem.
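The paper itself relies on probabilistic models of sound change, but a toy sketch can convey the flavor of automating comparative reconstruction. The Python snippet below, which is not the authors’ method, simply guesses a proto-form by majority vote over a set of already-aligned cognate words; the cognate strings and the reconstruct helper are invented for illustration.

```python
from collections import Counter

def reconstruct(cognates):
    """Guess a proto-form by taking the most frequent symbol at each
    position across pre-aligned, equal-length cognate words.
    This is a gross simplification of real comparative reconstruction."""
    columns = zip(*cognates)  # column-wise view of the aligned words
    return "".join(Counter(col).most_common(1)[0][0] for col in columns)

# Hypothetical cognate set from three imaginary daughter languages,
# already aligned to the same length (ties are broken arbitrarily).
print(reconstruct(["pesce", "pesca", "pesco"]))  # -> "pesce"
```

Real systems must also discover the alignments, weigh which sound changes are plausible, and quantify uncertainty, which is why the published approach is probabilistic rather than a simple vote.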