This Futures Game Requires Your Informed Consent

AAAS and several other organizations are partnering with SciCast – a research project run by George Mason University – in an effort to run a crowdsourced experiment in science and technology forecasting.  SciCast is a large prediction market covering a number of topics.  The current recruiting drive appears to be focused on gathering participants interested (though not necessarily trained) in science and technology matters.  That’s because prediction markets seem to work well with informed participants, regardless of any formal experience.  Those in the market whose predictions prove successful gain influence, and their subsequent predictions are given additional weight.  The game play comes in through a leaderboard, which keeps participants amused and interested so the researchers can fine-tune their market algorithms.
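The mechanics vary by platform, but the core idea of rewarding accurate forecasters is easy to sketch.  Here is a minimal, hypothetical illustration in Python – not SciCast’s actual algorithm, just a toy – of weighting participants by their track record (via Brier scores) when combining their current forecasts:

```python
# Toy illustration of track-record weighting, not SciCast's actual market.
# Each forecaster's past Brier score (lower is better) sets the weight
# given to their current probability estimate.

def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

def weighted_consensus(current_estimates, past_records):
    """Combine current probability estimates, weighting accurate forecasters more."""
    weighted = []
    for name, prob in current_estimates.items():
        forecasts, outcomes = past_records[name]
        score = brier_score(forecasts, outcomes)
        weighted.append((prob, 1.0 / (score + 0.01)))  # small constant avoids divide-by-zero
    total = sum(w for _, w in weighted)
    return sum(p * w for p, w in weighted) / total

# Hypothetical participants: Alice has been more accurate than Bob so far,
# so her current estimate pulls the consensus toward her number.
past = {
    "alice": ([0.9, 0.2, 0.8], [1, 0, 1]),
    "bob":   ([0.5, 0.7, 0.3], [1, 0, 1]),
}
current = {"alice": 0.7, "bob": 0.4}
print(f"Consensus probability: {weighted_consensus(current, past):.2f}")
```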

If you want to play, you will need to register and sign the informed consent forms.  Since the last futures/forecasting exercise I participated in didn’t have one, I was particularly interested in the details.  The project is funded, at least in part, by the Intelligence Advanced Research Projects Activity (IARPA) – the intelligence community’s high-risk/high-reward research outfit.  This should not be a surprise, considering the value of prediction markets to intelligence agencies.  I do find it a bit suspect that IARPA is only mentioned in the informed consent form.

Also worth noting is that the project seems to presume that intelligence about science and technology developments is only of interest to the intelligence community.  Prediction markets could have utility for identifying undersupported (and oversupported) areas of research, shortfalls in scientifically and technically trained personnel, or other questions of importance to the agencies that fund, support and perform scientific and technical research.  Will IARPA be willing to share?  I have my doubts.

In New Report, Bioethics Commission Recommends Expecting Incidental Findings

Today the Presidential Commission for the Study of Bioethical Issues released its report on incidental findings.  This project started at the beginning of the year and focuses on findings that fall beyond the aims or scope of a particular test or procedure.  The ethical consequences of such findings can be most keenly felt when they involve human subjects and/or medical patients (see this article on The Atlantic’s website for a specific medical case).  Such findings may not be information that a subject/patient is seeking, which adds a wrinkle to deciding how best to approach the person while respecting their right to choose and meeting one’s obligations to provide appropriate and effective treatment.

But incidental findings can turn up in many different settings, which is why the report, called Anticipate and Communicate, carries the subtitle Ethical Management of Incidental Findings in the Clinical, Research and Direct-to-Consumer Contexts.  It’s also why this isn’t the first Commission report to address incidental findings.  The issue was raised in the Commission’s report on genome sequencing (and probably not heeded by anyone at 23andMe, based on the FDA action taken against that company).

Not that brief blog posts are the most reliable source of deep insight, but since this is a report that deals a lot in context, I have to emphasize that you really should read the report itself and not simply this post.  As the report notes (Executive Summary, page 2):

“Discovering an incidental finding can be lifesaving, but it also can lead to uncertainty and distress without any corresponding improvement in health or wellbeing. For incidental findings of unknown significance, conducting additional follow-up tests or procedures can be risky and costly. Moreover, there is tremendous variation among potential recipients about whether, when, and how they would choose to have incidental findings disclosed. Information that one recipient regards as an unnecessary cause of anxiety could lead another recipient to feel empowered in making health-related decisions.”

The tl;dr – I’m not comfortable just summarizing the report findings and leaving it at that.  It’s a complex issue that cannot easily be generalized.  The Commission takes a general approach in its report, outlining how an ethical analysis of incidental findings can be conducted and encouraging those with expertise in the technology and scientific knowledge relevant to a particular context to do what they can to anticipate possible incidental findings (not all of them can be anticipated) and communicate those possibilities to the people being tested, as well as to the public that may be tested at some point in the future (or have something revealed about them through the testing of others).

Do You Want To Play A Futures Game?

The Industrial Research Institute, along with the Institute for the Future, is hosting a 36-hour conversation/game about the future next week, and it needs as many people as possible to help make things happen.

It’s called Innovate 2038.  On September 25 and 26 (starting at 9 am Pacific time, so in some time zones the days are actually the 26th and 27th), game participants will have a conversation about the future of research and development on the Foresight Engine.  Here’s a not-so-detailed promotional video.

It appears to be an intensive crowdsourced discussion of new ways of encouraging, supporting and performing research and development to address what are seen as the challenges facing society over the mid-to-long term (depending on how you think about 25 years in the future).  This is part of the Industrial Research Institute’s 2038 Futures project, which focuses on the art and science of research and development management.  That project has involved possible future scenarios, retrospective examinations of research management, and scanning of the current environment.  The Foresight Engine, developed by the Institute for the Future, encourages participants to contribute short ideas, with points going to those ideas that get approved and/or built on by other participants.

If you’re interested in participating, you need a good Internet connection and a web browser.  Advance registration is required.

Eating Some Earthquake Crow?

Given how much I’ve written about what I consider the questionable pursuit of earthquake prediction, some of the latest news in Science magazine reminds me that I am not an expert.  In the latest issue (July 12) there’s an article making the case that certain areas are particularly susceptible to remotely triggered earthquakes.

According to the researchers, in areas where there has been seismic activity following human activity, there is an increased likelihood of further quakes.  Much like the initial activity, the subsequent activity would be the result of some triggering event.  In this case that would be seismic waves from large, remote earthquakes.

Researchers found that this increased sensitivity to subsequent seismic activity occurred more often in areas where a long time passed between the human impetus and the induced seismic activity.  Areas that experienced moderate-magnitude earthquakes within 6 to 20 months of the human activity (usually fluid injection) also had a higher incidence of seismic sensitivity.

I suppose it is still a bit of a stretch to suggest these findings indicate predictive power for earthquakes.  But it seems reasonable to be more alert in areas with human-induced quakes whenever large, distant quakes occur.  Not that long ago, I would not have expected even that to be possible.

They Still Might Go To Jail, But Researchers Are Working On Earthquake Prediction

While researchers are split on whether or not earthquakes can be effectively predicted, some are working on improving what tools are available to try.  The Global Earthquake Model Foundation recently announced the public release of a large earthquake dataset (H/T ScienceInsider) that, when coupled with other tools, should make it a bit easier to calculate the hazard of earthquakes (the chance of one happening over a set time period) as well as the associated risks (what could happen in the event of a quake).  Besides this database, a collection of data on nearly a thousand quakes, the model will also take advantage of a map of strain accumulation at plate boundaries.  The information on quakes will be complemented by information on building stocks and other items connected to earthquake impact.
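To make the hazard/risk distinction concrete, here is a deliberately simplified sketch – my own illustration with made-up numbers, not the GEM Foundation’s model – that treats quake occurrence as a Poisson process and treats risk as hazard combined with exposure and vulnerability:

```python
import math

def hazard_probability(annual_rate, years):
    """Chance of at least one quake above a damage threshold within `years`,
    assuming occurrences follow a Poisson process at the given annual rate."""
    return 1 - math.exp(-annual_rate * years)

def expected_loss(annual_rate, years, exposed_value, vulnerability):
    """Toy risk estimate: probability of a damaging quake, times the value
    exposed to it, times the fraction of that value expected to be lost."""
    return hazard_probability(annual_rate, years) * exposed_value * vulnerability

# Hypothetical region: a damaging quake about once every 200 years on average,
# assessed over a 50-year planning horizon.
rate = 1 / 200
print(f"Hazard (50-year probability): {hazard_probability(rate, 50):.1%}")
print(f"Rough risk (expected loss): ${expected_loss(rate, 50, 5e9, 0.3):,.0f}")
```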

A serious challenge in this area is understanding the limitations of the data and of what that data might be able to indicate.  In the case of the Italian researchers who were sentenced to jail based on their assessments, the seriousness of the task was compounded by the state of the infrastructure in L’Aquila, which was far from earthquake-ready.  Having a better model is a good thing, but it doesn’t guarantee certainty.

Earthquake Trial Judge Explains Himself

The judge in the Italian trial that sentenced seven scientists and engineers to six years in prison over their earthquake risk assessment has issued the formal explanation of his verdict.  Judge Marco Billi, over the course of 950 pages, explains how the defendants were guilty – not of failing to predict an earthquake – but of failing to effectively assess and communicate the risk of quakes during a swarm of them in the L’Aquila region in April 2009.  As Judge Billi wrote (per The Guardian):

“The task of the accused … was certainly not to predict the earthquake and indicate the month, day, hour and magnitude, but rather, more realistically, to go ahead … with the ‘prediction and prevention of the risk’”

The judge goes on to link the defendants’ failure to 29 of the 309 fatalities in the April 6 quake, and four of the injuries.  He argues that the defendants failed to consider existing studies on earthquake risks and did not effectively communicate that risk to the public.  Per ScienceInsider:

“The deficient risk analysis was not limited to the omission of a single factor,” he writes, “but to the underestimation of many risk indicators and the correlations between those indicators.”

While the defendants are resistant to the judge’s arguments, there is relative silence from those who vociferously objected to the verdict, if not the entire proceedings.  Since the judge’s reasoning undercuts the dominant narrative that this trial was all about earthquake prediction, it would not surprise me to see the silence continue.  You may not consider this trial an effective means of holding officials accountable for their actions (I’m not sure all those responsible have been so held).  Regrettably, the rhetoric it sparked has not allowed for an effective discussion of the obligations of scientists and engineers in their public statements.  The judge’s official reasoning isn’t likely to change that.

The defendants have 45 days to lodge their appeal (which is expected), and will remain free until that process is completed.  It will likely take years.

The Bigger The Potential Disaster, The Less Prepared We Seem

Wednesday night (depending on your time zone), the Earth had a close encounter with the asteroid Apophis.  The Bad Astronomer, Phil Plait, has the details over at Slate.  The asteroid was discovered in 2004, and there was serious concern that it could hit the Earth in 2036.  Since Apophis was initially estimated to be 270 meters across, such an impact would be catastrophic – it would release roughly 20 times the energy of the largest nuclear weapon ever detonated on the planet.

Thankfully, calculations made during this recent pass have determined that the asteroid will come very close in 2029 but miss us in 2036.  This is extra good news, since we have also determined that Apophis is roughly 75 percent more massive than initially thought.
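Why such a big jump in mass?  Mass scales with the cube of diameter, so (assuming roughly constant density – this is my own back-of-the-envelope check, not a figure from the cited coverage) a diameter estimate only about 20 percent larger is enough to account for it:

$$\frac{m_{\text{new}}}{m_{\text{old}}} = \left(\frac{d_{\text{new}}}{d_{\text{old}}}\right)^{3} \approx (1.2)^{3} \approx 1.73,$$

or roughly 75 percent more mass.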

But there are other asteroids and near-Earth objects out there.  While there are efforts to build asteroid-monitoring missions, we are still struggling to find and track all of the potential impactors.  Getting rid of them or nudging them out of the way is a challenge, but as Phil Plait describes in a TED talk, not an impossible one.

However, with low-probability, high-consequence events (like asteroid impacts), it’s really easy to put off committing resources to these efforts.  Tsunami monitoring is another example.  When centers created in areas hit by these disasters have trouble sustaining a budget, you know that getting money to mitigate a disaster that hasn’t happened yet is not an easy feat.

The Future Is Still Closer Than We Think

We now get New Year’s greetings from other planets.

And more baby steps have been taken toward making medical tricorders a reality.  While the Qualcomm Tricorder X Prize won’t be awarded just yet, that time is getting closer.  But as with the spread of genetic testing, once these devices get into the hands of lots of people without medical training, how many faulty diagnoses will be made?

Sandy Related Miscellany: Infrastructure, Voting, Uncertainty and Information

A few items that attracted my attention in connection with Sandy’s run through the American northeast.  Hopefully readers affected are approaching normal as quickly as possible – to the extent that is possible after a storm like that.

First, a general reminder of the value of infrastructure, and the challenges of replacing and/or repairing it.  Nothing demonstrates how vital infrastructure is to New York better than the current challenges of getting basic supplies and simply getting around.  It seems trivial to cite the cancellation of events like the New York City Marathon, or the postponement of other sporting events, but it is the impact on things normally taken for granted that re-emphasizes the value of infrastructure.  Would that it also emphasized the folly of neglecting it.

Here’s something that will matter and have impacts well into next week:

If this is accurate, and if this is done on any significant scale, there could be a push to make this more common.  (Of course, those who may have the most need for this may also be the most challenged in getting access to reliable email.)

I would love to see how this process handles two important challenges of voting: keeping an individual vote secure (untampered with) and keeping it anonymous.  For all the times voting is compared with banking, the factor most often overlooked is anonymity.  I can certainly track my electronic banking transactions, but there is no expectation that those transactions are anonymous.  My vote damn well better be anonymous.

A lovely summary from the AmericanScience team blog on weather prediction and uncertainty.  It draws heavily on Nate Silver’s recent writings on weather prediction, which come primarily from his new book, The Signal and the Noise.  Silver is currently taking some heat for how he has been defending his predictive models for the upcoming Presidential election (let’s just say some people have a real hard time distinguishing between an electoral outcome and the odds of that outcome happening).  Me, I just like additional reminders that the nation had science and technology policy at least as early as the 19th century (the National Weather Service traces back to the Ulysses S. Grant administration), though I’ll argue it goes back to at least the Lewis and Clark Expedition.
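That outcome-versus-odds confusion is easy to demonstrate with a quick simulation – my own toy example, not Silver’s model.  Even when a forecast gives one candidate a 75 percent chance of winning, the other candidate still wins about a quarter of the time, and that result alone says nothing about whether the forecast was wrong:

```python
import random

def simulate_elections(win_probability, trials=10_000, seed=42):
    """Count how often the favored candidate actually wins when the
    forecast assigns them `win_probability` in each trial."""
    rng = random.Random(seed)
    wins = sum(rng.random() < win_probability for _ in range(trials))
    return wins / trials

# A 75% forecast still "fails" roughly one time in four.
print(f"Favorite wins in {simulate_elections(0.75):.1%} of simulated elections")
```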

I’ll close by reprinting the same ‘Modelers’ Hippocratic Oath’ that AmericanScience does.  They credit Emanuel Derman and Paul Wilmott.

The Modelers’ Hippocratic Oath

~ I will remember that I didn’t make the world, and it doesn’t satisfy my equations.
~ Though I will use models boldly to estimate value, I will not be overly impressed by mathematics.
~ I will never sacrifice reality for elegance without explaining why I have done so.
~ Nor will I give the people who use my model false comfort about its accuracy. Instead, I will make explicit its assumptions and oversights.
~ I understand that my work may have enormous effects on society and the economy, many of them beyond my comprehension.

Irregular Update Saturday: That Italian Earthquake Trial Could Test Risk Models

I recently read this Scientific American update (via Nature) on the manslaughter trial of several Italian scientists.  The case arose after the city of L’Aquila was devastated by a magnitude 6.3 earthquake in 2009.  The manslaughter allegations stem from statements the scientists made at a meeting and a press conference in the days before the quake.  Several tremors had been felt in the region, and the claim is that the scientists did not sufficiently warn the public to take appropriate action.  At least one civil official has been indicted (and will likely soon have company), and I find it plausible that officials may be seeking political cover via the scientists.  Wiretap evidence mentioned in the article could support such a scenario.

The latest update focuses on the testimony of Lalliana Mualchin, the former chief seismologist for the State of California’s Department of Transportation.  Mualchin was a notable exception to the strong scientific outcry against the indictment of the scientists, and he pulled no punches in his assessment of the models used to evaluate earthquake hazards in the region.  He argues that the probabilistic risk models used in many countries systematically underestimate seismic hazards because rare and extreme events are not considered, and he favors a return to the deterministic models previously used in the seismology community.  Differing conceptions of risk could be at play, and the Italians may change how they map seismic risk in the future.  However, Mualchin’s testimony about new building codes (he thinks the changes won’t have much impact) leaves that possibility in doubt.
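To illustrate the kind of underestimation Mualchin is describing – this is my own toy sketch using a standard Gutenberg-Richter recurrence relation with invented parameters, not the models actually debated at trial – consider how capping the magnitude distribution at the largest event in the catalog lowers the computed hazard:

```python
import math

def annual_rate(magnitude, a=4.0, b=1.0):
    """Gutenberg-Richter relation: annual rate of quakes at or above `magnitude`.
    The a and b values here are made up for illustration."""
    return 10 ** (a - b * magnitude)

def fifty_year_hazard(magnitude, m_max=None):
    """Probability of at least one quake >= `magnitude` in 50 years (Poisson model).
    If m_max is set, events larger than m_max are treated as impossible."""
    rate = annual_rate(magnitude)
    if m_max is not None:
        rate -= annual_rate(m_max)  # discard the rate assigned to rarer, larger events
    return 1 - math.exp(-max(rate, 0.0) * 50)

design_magnitude = 6.5
print(f"Hazard with extremes excluded (capped at M7.0): "
      f"{fifty_year_hazard(design_magnitude, m_max=7.0):.1%}")
print(f"Hazard with rare extremes included:             "
      f"{fifty_year_hazard(design_magnitude):.1%}")
```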

As I noted earlier, I’m willing to acknowledge the possibility of scientific negligence here, but I’m not sure manslaughter quite fits the bill.  However, I am not a lawyer, and I am even less familiar with Italian law.  I do expect this trial to continue for a while, so there is more to come.