SpaceX Failure Not As Bad As It Seems (Not By A Long Shot)

SpaceX continues to work on reusable rocket technology that should make space access a lot cheaper.  After sending a Dragon capsule on its latest cargo run to the International Space Station, a Falcon 9 rocket first stage attempted a barge landing in the Atlantic.  It was the first such attempt by anyone.

Unfortunately, it didn’t land on the barge so much as hit it – the rocket stage was traveling too fast.  While SpaceX has tried to land Falcon 9 first stages before, those attempts targeted the open ocean rather than a hard(er) surface.

While the rocket was unable to slow its descent sufficiently, it did connect with its target.  And while not as important as landing in one piece, landing on target is critical to the reusability of the craft.  This mission targeted a landing precision of 10 meters, 1,000 times tighter than the target for the earlier water landings.
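The precision figures above imply a quick sanity check: a 10-meter target that is 1,000 times tighter than the water-landing target means those earlier landings aimed for roughly a 10-kilometer zone.  A minimal sketch of that arithmetic (the 10 km figure is inferred from the post's numbers, not stated directly):

```python
# Back-of-envelope check of the precision claim. The 10 m barge target
# and the 1000x ratio come from the post; the 10 km water-landing
# figure is inferred from them.
barge_target_m = 10
improvement_ratio = 1000
water_target_m = barge_target_m * improvement_ratio

print(f"implied water-landing target: {water_target_m} m "
      f"({water_target_m / 1000:.0f} km)")
```
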

SpaceX will return quickly to testing, as it has several missions over the next year that provide opportunities to return the first stage of a Falcon 9.  While progress may not be smooth, I think by this time next year SpaceX could well be able to soft land on a hard surface on a consistent basis.

The Morning Jolt Could Be Taken Literally

I missed it when the press ran with it earlier this year, but thanks to the Star Trek website, I’m up to speed on Air Force efforts to substitute electricity for caffeine.  The intent is to use mild (and the emphasis should be on mild) electric shocks to keep airmen and women alert and attentive.

Efforts are still quite preliminary, but the research scientists interviewed claim that noninvasive stimulation of the right areas of the brain has enabled subjects to perform better cognitively than a control group that had been awake for a comparable amount of time.  The techniques are adapted from electrical stimulation procedures used for some psychiatric conditions.  (The levels of electricity are much, much lower than what had been used in so-called electroshock therapy.  At 1-2 milliamperes, the shocks are just perceptible.)

One challenge the researchers still have to address is determining which areas of the brain need stimulation in order to achieve the increased alertness and cognitive function.  The research has focused on subjects who work in persistent data monitoring, and researchers are optimistic that a working device may be achievable in five years or so.  Whether or not something like this could (or should) be available commercially is a question for another day.


More Execution Drug News – Florida Gets In On The Action

Last Tuesday Florida executed a man.  While executions there are not unusual, it was the first time the state used midazolam hydrochloride as one of the three drugs in its usual protocol.  Midazolam replaces sodium pentobarbital, the sedative that is the latest drug under scrutiny for its use in executions.  Florida’s supplies of pentobarbital are dwindling, and the state clearly lacked enough for this execution.  With another execution scheduled for later this year, the state felt it had to find an alternative.

It is not clear whether or not midazolam would serve its intended purpose in the execution – to prevent the condemned man from feeling extreme pain as the other drugs are administered to kill him.  Florida state authorities are confident that it would, but the next person scheduled to die has appealed the use of midazolam on the grounds that it might not.  A hearing is scheduled for November 6.

I don’t know how this will eventually turn out.  I think it more likely that drug manufacturers will effectively cut off states than that a court will determine these drugs represent cruel and unusual punishment.  Of course, this is mildly informed speculation, so keep that in mind.

Cycles of Technology Disaster

Edward Tenner teases out on The Atlantic’s science and technology channel an unfortunate trend in failures of large technologies.  He notes a 1977 paper (and follow-up research by Henry Petroski – an engineer worth reading) that considers whether oil rigs may fail on a regular basis.  Part of the analysis involves the fact that bridges have failed with catastrophic loss roughly every thirty years since 1847 (a pattern confirmed by the 2007 interstate bridge collapse in Minneapolis).  That number, combined with the last big Gulf drilling rig failure back in 1979 (near Mexico, which is probably why you may not be familiar with it), led Tenner to examine the idea that oil rigs may also fail periodically.

The key thing with these trends is the nature of the failure, or the causes leading up to them.  From the 1977 New Scientist paper Tenner references:

“Our studies show that in each case, when the first example of a technologically advanced structure was built, great care and research went into its design and construction.  But as the design concept was used again and again, confidence grew to complacency and contempt for possible technical difficulties.  Testing was considered unnecessarily expensive and so it was dispensed with.

“Yet investigation of steel oil rig design strongly suggests that it is going the same way of five generations of metal bridges.”

Perhaps this trend of complacency leading to failure can partially explain the lack of change in cleanup technologies.  An expectation of no failure certainly doesn’t encourage research in cleaning up after them.

I already posted today about mining the past to benefit the future, and here we have a case of assuming the past will be the same as the future, even as changes are being made.  History does appear to be a harsh mistress.

Another Court Rejects fMRI Evidence

The judge in the Tennessee case has excluded the functional MRI (fMRI) scans offered as evidence.  The opinion is not yet available online, but reports indicate that the judge considered the fMRI tests (used to assess the credibility of a witness) to fail several prongs of the Daubert test, which determines whether or not certain evidence is permitted in Federal court.  The Daubert test is different from the Frye standard that was relevant in the New York case concerned with fMRI evidence.  The Daubert test is more explicitly scientific.  Scientific evidence can be admitted if the judge determines that the following hold true:

  • The theory or technique can be, and has been, tested;
  • The theory or technique has been subjected to peer review and publication;
  • There is a known or estimated potential error rate;
  • The theory or technique has been generally accepted by the relevant scientific community.

The judge criticized fMRI testing as not replicating ‘real-world’ situations, where the kinds of consequences at stake in a trial would be on the line.  If the science behind the scans were more developed and accepted, he could see possibly accepting the technology independent of ‘real-world’ testing.  However, his assessment was that the scientific community had not accepted the technology for use in courts.  There were also concerns that the specific methodology used in this case raised doubts about the ability of the scans to produce consistent results.

At the moment, it looks like brain scans for truth-telling will remain a part of science fiction.  That may change in the future, but there’s a lot of work to accomplish before that happens.

Why the N.Y. Judge Tossed fMRI Evidence

This is a further update to my post from Sunday about recent decisions in using fMRI tests in court.  The judge in the New York case has issued his opinion behind why he ruled such tests inadmissible in the case.  Wired Science has the details, including an embedded copy of the opinion.

The functional magnetic resonance imaging (fMRI) tests were offered in order to demonstrate the credibility of a witness.  The technology at issue was subject to a Frye hearing, named after the 1923 case of Frye v. United States.  (Frye is one of the significant tests in U.S. law regarding the use of science and/or technology in legal cases.)  The New York courts have opted to

“permit expert testimony if it is based on scientific principles, procedures or theory only after the principles, procedures or theories have gained general acceptance in the relevant scientific field, proffered by a qualified expert and on a topic beyond the ken of the average juror.”

The judge noted that no court that he or either counsel was aware of had dealt with the admissibility of an fMRI test.  After citing New York precedent that considers credibility to be a decision left to jurors, the judge found the fMRI test comparable to a polygraph, something rejected by New York courts.  He ruled that the fMRI test failed the requirement of addressing a topic beyond the ken of the average juror.  That is sufficient to reject the evidence, though the judge notes in his opinion that

“even a cursory review of the scientific literature demonstrates that the plaintiff is unable to establish that the use of the fMRI test to determine truthfulness or deceit is accepted as reliable in the relevant scientific community.”

This judge has put the admissibility of fMRI back into the scientific court.

fMRI Still Knocking at the Evidentiary Door

It has been over a year since I last posted about using functional MRI (fMRI) scans as evidence in a U.S. court, and they still have not been accepted during a trial.  ScienceInsider has the overview, in advance of a Tennessee court’s decision on whether or not to admit fMRI scans in a criminal fraud case.

The theory is that the scans would speak to whether a person is lying or telling the truth.  Research has indicated some correlation between lying and activity in a particular part of the brain, though that conclusion is not universally accepted.  At least two companies have developed the devices for use in courts, but the scans have seen little courtroom use.  The use of the scans in a San Diego case was avoided after prosecutors lined up expert witnesses to testify to the problems with the technology.  The scans have been used in the sentencing phase of a case in Illinois, though only as evidence of a brain disorder, not to speak to truth-telling.  Depending on the ruling in Tennessee, they may soon see their first use in a trial.

UPDATE: Wired Science has a report of fMRI evidence being excluded in a New York case during pre-trial motions.  In this case the exclusion was based on the argument that determining veracity is the province of juries, not technology.  The validity of the scans for assessing truth-telling was not at issue in the motion to exclude.

More Large Hadron Collider Funny Business

I’m behind in my Comedy Central Hour of Power watching, and just finished seeing Stephen Colbert tangle with astrophysicist Brian Cox.  Stephen spent a segment making fun of the continued troubles at the Large Hadron Collider, ending with the assertion that the problems result from sabotage from the future.  (I’d missed the business with the computer networks.)  Then, in what is arguably out-of-character for the show, Stephen proceeded to have a discussion with Professor Cox that managed to make sense of advanced particle physics.  The good professor even noticed.  It was nice to see him literally call bollocks on some Large Hadron Collider theories.  But dismissing food science as not real science was not so cool.

The thing of it is, Stephen managed all of this prior to the bird messing up the works, so he may well revisit the topic this week.  Whether he does or not, science makes up a good chunk of the October 28 episode, so why not watch the whole thing?

Technology in the Courts

Two items of recent note about how technology supports – or doesn’t – the judicial process in the United States.

San Diego will soon see the first test of functional MRI (fMRI) evidence in a court case.  Defense counsel in the proceeding will introduce evidence from an fMRI scan to indicate that the defendant was telling the truth.  Wired Science has the details.  In short, the idea is that lying is connected to particular activity in a part of the pre-frontal cortex.  This idea has received its share of skepticism from some researchers, and that may determine whether or not such evidence is ruled admissible.  Under California law, such science-based evidence must be generally accepted in the scientific community before the courts will accept it.  At the moment it’s unclear whether the technology will make the move from House to CSI (in TV terms).

In other court-related news, judges are now having to deal (H/T Scientific American’s 60 Second Science Blog) with jurors spreading news about their cases during trial via social networking sites like Twitter.  Arguably this is an old problem – jurors are supposed to keep their deliberations to themselves, and are supposed to consider only what is admitted in court – augmented by recent technology.  Given the added expense of sequestering a jury (and the challenge of prohibiting such broadcasting and research in any setting), it’s unclear how the jury system will react to this trend.