So, About Those Scientists On Trial In Italy…

Important disclaimers – I am not a lawyer, either in this country or Italy.  I also don’t speak the language, so I am relying on secondary sources.

ScienceInsider has reported on the decision to acquit six of the seven people convicted of manslaughter in connection with the 2009 L’Aquila earthquake.  The seven had been convicted over the poor way they communicated the risk of possible earthquakes in the run-up to the 6.2 quake that killed 309 people.

The case was not, and never has been, about the prediction of earthquakes or a misunderstanding of the underlying science.  But that was the easy message, and the one that got through, at least outside of Italy.  I could have been more effective in communicating that in the many posts I made on the topic, and I apologize for that.

Back to the latest developments.  The appellate court which acquitted six of the seven defendants (all of the scientists were acquitted, while the public official remains sentenced to 6 years) ruled that only the public official could be faulted for the reassurances that caused some people to remain indoors.  The scientists, according to the appellate court, should not have been judged by any regulatory responsibilities they had, but by how well they complied with the accepted science of the time.  And because, according to the court, the notion that a cluster of small earthquakes can indicate a larger one to come was not a commonly accepted scientific theory until after the L’Aquila quake, the scientists could not be held at fault.

That last statement seems likely to be debated for years to come.  That debate might play out – at least in part – in the next level of appeals.  The chief prosecutor can appeal this latest decision to the top Italian appeals court.  So this may still not be over.

Counting The Impact Of How A Government Counts

Back in 2010, the Canadian government opted to make the long-form portion of its 2011 census voluntary.  Researchers who use the data in their work and policymakers who rely on it to make decisions were concerned about how a voluntary survey would affect the quality of the resulting data.

As expected, the early analysis suggests that the lower-quality data will lead to higher spending.  Those costs might not be borne by the national government, but by the provinces, local authorities and other parties that have used this data to track changes in their jurisdictions.  Without it, they must pay to replace it, and because of the lower quality, they are paying more for less.  Smaller jurisdictions have been hit harder by this change, as response rates have been lower in rural areas, and smaller governments are less likely to have the resources to fill those data gaps.

The Canadian Chamber of Commerce has called for the long form to be mandatory in the next census (2016), and a bill was introduced in Parliament to that effect last week.  With the Conservatives still holding a majority, it seems likely to fail.  Those in the United States may not be concerned, as we still have 5 years until the next census.  However, the Census Bureau administers the American Community Survey, and there have been efforts to curtail that survey in the past.  It may happen again.

The Last Mile Isn’t Just About Broadband

Noting the upcoming 10th anniversary of the 2004 Indian Ocean tsunami, Nature analyzes the tsunami monitoring system that emerged following the devastation.

The short of it – there are still challenges at the end of the message chain.  The three regional centers were effective in communicating warnings and data to countries, but getting the message to people away from the major cities was still a struggle.  Countries can be strategic in determining which areas may be more susceptible to tsunami effects, and focus their efforts on those areas.  But the investment in infrastructure is still significant, and maintaining these networks is a non-trivial ongoing cost.  Much in the same way that the public health infrastructure in the U.S. made it easier to manage the Ebola cases diagnosed in that country compared to the areas hardest hit in Africa, the communications infrastructure in Hawaii and other more developed coastal areas makes it easier for tsunami warnings to be heard.

What the people do with the message when (or if) they get it is a separate question.  Rational action in the face of natural disaster seems only loosely correlated with level of development, but I’d love to see any studies that address this.

DARPA Wants To Fight A Bug

The Defense Advanced Research Projects Agency (DARPA) often uses prize challenges to stimulate research in difficult areas.  At least some of the current work on self-driving cars can be traced back to several of DARPA’s Grand Challenges in autonomous ground vehicles.

The latest challenge appears to be the first DARPA has issued outside of engineering and information technology.  Last week it announced the CHIKV Challenge, which asks teams to develop methods to track and predict the emergence of a virus (H/T ScienceInsider).  The competition focuses on the Chikungunya virus, which has appeared in the Western Hemisphere for the first time in decades.  It is mosquito-borne, and any challenge solutions proven successful could be applied to other viruses, especially those carried by mosquitoes.

The competition starts on September 1 and runs through February 1 of next year.  The contest involves predictions of disease spread across the Western Hemisphere.  Entrants must submit their methodology, along with an indication of data sources and related models, by September 1.  Over the following months, teams will submit accuracy reports indicating how well (or badly) their predictions match the actual spread of the virus, and describing their predictions for the balance of the competition period.
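The challenge documents define how accuracy will actually be scored, and I won’t pretend to reproduce that here.  But as a rough, hypothetical sketch of what lining up a team’s predictions against reported case counts could look like (the error metric, the weekly granularity and every number below are my own assumptions, not DARPA’s), consider:

```python
# Hypothetical illustration only: the CHIKV Challenge defines its own scoring
# rules. This sketch simply compares a made-up weekly forecast of chikungunya
# cases against equally made-up reported counts.

def mean_absolute_error(predicted, reported):
    """Average absolute difference between predicted and reported weekly counts."""
    if len(predicted) != len(reported):
        raise ValueError("series must cover the same number of weeks")
    return sum(abs(p - r) for p, r in zip(predicted, reported)) / len(predicted)

# Invented case counts for one country over five weeks (not real data).
predicted_cases = [120, 150, 200, 260, 310]
reported_cases  = [110, 160, 240, 250, 330]

error = mean_absolute_error(predicted_cases, reported_cases)
print(f"Mean absolute error: {error:.1f} cases per week")
```

In the real competition the comparison would run across countries and territories throughout the Western Hemisphere and over the full competition window, but the basic exercise of checking forecasts against reported counts is the same.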

The top six teams will receive cash prizes (unless they are part of a federally funded research and development center).  DARPA hopes to follow in the footsteps of the Centers for Disease Control and Prevention, which held a comparable competition on predicting the timing, peak and intensity of influenza during the 2013-2014 season.

Oso Landslide Analysis Leads To Competing Theories And Possible Adaptation

In March a landslide in Oso, Washington destroyed a neighborhood, killing 43 people.  This week two scientific analyses were released (H/T ScienceInsider).  On Tuesday the Geotechnical Extreme Events Reconnaissance team (GEER, sponsored by the National Science Foundation) released its report.  Earlier today Science reported on the unpublished analysis from the U.S. Geological Survey and researchers at the University of Washington.  A notable difference between the two analyses concerns how the slide unfolded.

The GEER team, which is set up to conduct quick analyses of natural disasters, theorizes that the slide happened in two phases: the first slide was then augmented by the collapse of a portion of the mountain when the underlying support gave way.  One of the USGS researchers described their theory of the slide (which was significantly larger than the smaller slides that occur frequently in the area) as more compressed in time.  They believe the second spike in the seismic data does not represent a major event, and that an upper portion of the mountain broke off much sooner.  The disagreement comes down to how the seismic data should be interpreted.

What the GEER report highlights is the absence of systematic assessment of landslide potential when planning construction.  Given what has been achieved for building in areas prone to earthquakes, it’s a little surprising that similar efforts have not taken place for areas with a higher potential for landslides.  The failure to use detection systems and to take advantage of historical data is similarly surprising.  Presumably the USGS report, whenever it’s released, won’t be far from the GEER team’s in terms of recommendations.  We’ll have to wait and see.

What you might not want to wait on is finding out whether your nearest slide-prone area is taking advantage of new detection and monitoring systems.  To have the tools and not use them strikes me as tragic, especially given the catastrophic nature of most slide losses (losing one house is a catastrophe – to that family).

This Futures Game Requires Your Informed Consent

AAAS and several other organizations are partnering with SciCast – a research project run by George Mason University – in an effort to run a crowdsourced experiment in science and technology forecasting.  SciCast is a big prediction market covering a number of topics.  The current recruiting drive appears to be focused on gathering participants interested (though not necessarily trained) in science and technology matters.  That’s because prediction markets seem to work well with informed participants, regardless of formal credentials.  Those in the market whose predictions prove successful gain influence, and their subsequent predictions are given additional weight.  The game play comes in through a leaderboard, which keeps participants amused and interested so the researchers can fine-tune their market algorithms.
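I don’t know the details of SciCast’s actual market mechanism, so treat the following as nothing more than a toy sketch of the feedback loop described above: forecasters who score well on resolved questions gain weight, so their later predictions move the group estimate more.  The names, numbers and weighting rule are all invented for illustration.

```python
# Toy sketch of accuracy-weighted forecasting, not SciCast's actual mechanism:
# participants who have been accurate in the past get more weight, so their
# subsequent predictions move the aggregate estimate more.

def brier_score(forecast, outcome):
    """Squared error of a probability forecast against a 0/1 outcome (lower is better)."""
    return (forecast - outcome) ** 2

def aggregate(forecasts, weights):
    """Weighted average of the group's probability forecasts."""
    total = sum(weights.values())
    return sum(forecasts[name] * weights[name] for name in forecasts) / total

# Hypothetical participants, equal influence to start.
weights = {"ada": 1.0, "ben": 1.0, "cam": 1.0}

# Round 1: a yes/no question that resolves "yes" (outcome = 1).
forecasts = {"ada": 0.8, "ben": 0.4, "cam": 0.6}
outcome = 1
print("round 1 aggregate:", round(aggregate(forecasts, weights), 3))

# Update influence: better (lower) Brier scores earn more weight next time.
for name, p in forecasts.items():
    weights[name] *= 1.0 - brier_score(p, outcome)

# Round 2: the same people forecast a new question; ada now counts for more.
forecasts = {"ada": 0.7, "ben": 0.7, "cam": 0.2}
print("round 2 aggregate:", round(aggregate(forecasts, weights), 3))
```

A real prediction market handles this through prices and trading rather than explicit weights, but the underlying incentive, that accuracy buys influence, is the same.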

If you want to play, you will need to register and sign the informed consent forms.  Since the last futures/forecasting exercise I participated in didn’t have one, I was particularly interested in the details.  The project is funded, at least in part, by the Intelligence Advanced Research Projects Activity (IARPA) – the intelligence community’s high-risk/high-reward research outfit.  That should come as no surprise, considering the value of prediction markets to intelligence agencies.  I do find it suspect that IARPA is only mentioned in the informed consent form.

Also worth noting is that the project seems to presume that intelligence about science and technology developments is only of interest to the intelligence community.  Prediction markets could have utility for identifying undersupported (and oversupported) areas of research, deficiencies in scientifically and technically trained personnel, or other questions of importance to the agencies that fund, support and perform scientific and technical research.  Will IARPA be willing to share?  I have my doubts.

In New Report Bioethics Commission Recommends Expecting Incidental Findings

Today the Presidential Commission for the Study of Bioethical Issues released its report on incidental findings.  This project started at the beginning of the year, and focuses on findings that are beyond the aims or scope of a particular test or procedure.  The ethical consequences of such findings can be most keenly felt when they involve human subjects and/or medical patients (see this article on The Atlantic’s website for a specific medical case).  Such findings may not be information that a subject or patient is seeking, which adds a wrinkle to determining how best to approach the person while respecting both their right to choose and the practitioner’s obligation to provide appropriate and effective treatment.

But incidental findings can arise in many different settings, which is why the report, called Anticipate and Communicate, carries the subtitle Ethical Management of Incidental Findings in the Clinical, Research and Direct-to-Consumer Contexts.  It’s also why this isn’t the first Commission report to address incidental findings.  The topic came up in the Commission’s report on genome sequencing (and was probably not heeded by anyone at 23andMe, based on the FDA action taken against that company).

Not that brief blog posts are the most reliable source of deep insight, but when covering a report that deals so heavily in context, I have to emphasize that you really should read the report itself and not simply this post.  As the report notes (Executive Summary, page 2):

“Discovering an incidental finding can be lifesaving, but it also can lead to uncertainty and distress without any corresponding improvement in health or wellbeing. For incidental findings of unknown significance, conducting additional follow-up tests or procedures can be risky and costly. Moreover, there is tremendous variation among potential recipients about whether, when, and how they would choose to have incidental findings disclosed. Information that one recipient regards as an unnecessary cause of anxiety could lead another recipient to feel empowered in making health-related decisions.”

The tl;dr – I’m not comfortable just summarizing the report findings and leaving it at that.  It’s a complex issue that cannot easily be generalized.  The Commission takes a general approach in its report, outlining how an ethical analysis of incidental findings can be conducted.  It encourages those with expertise in the technology and scientific knowledge relevant to a particular context to do what they can to anticipate possible incidental findings (not all of them can be anticipated) and to communicate those possibilities to the people being tested, as well as to the public that may be tested at some point in the future (or have something revealed about them through the testing of others).