I kept seeing this story everywhere and had a simple question: if you do the math, is the death rate for scientists connected to UAP research actually higher than expected?

The short answer is no. The longer answer is no, but the individual cases are interesting.

Here’s the full write-up on Substack, no paywall or anything.

I ran an exact Poisson test on a group of roughly 300 credentialed researchers involved in UAP studies, drawn from the Scientific Coalition for UAP Studies, the Galileo Project, the Sol Foundation, and UAPx; the group size ranges from 230 to 500 depending on how you count. The Standardized Mortality Ratio (SMR) came out to 1.09, with a p-value of 0.41. In other words, the observed death rate sits about 9% above the expected baseline, which is well within normal chance variation.
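For anyone who wants to check the arithmetic, here's a minimal sketch of a one-sided exact Poisson test using only the standard library. The observed and expected counts below are hypothetical placeholders for illustration, not the actual study numbers.

```python
import math

def poisson_cdf(k, mu):
    """P(X <= k) for X ~ Poisson(mu), summed term by term."""
    return sum(math.exp(-mu) * mu**i / math.factorial(i) for i in range(k + 1))

def exact_poisson_p(observed, expected):
    """One-sided exact Poisson p-value: P(X >= observed) under the null rate."""
    return 1.0 - poisson_cdf(observed - 1, expected)

# Hypothetical counts, chosen only to illustrate the calculation:
observed_deaths = 12
expected_deaths = 11.0

smr = observed_deaths / expected_deaths  # standardized mortality ratio
p_value = exact_poisson_p(observed_deaths, expected_deaths)
print(f"SMR = {smr:.2f}, p = {p_value:.2f}")
```

The same numbers run through `scipy.stats.poisson` should agree; the stdlib version just makes the tail sum explicit.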

When I narrowed the test to only self-identified UAP researchers, the p-value dropped to 0.043. That looks significant at first, but it does not survive a look-elsewhere (multiple comparisons) correction: 0.043 falls above the Bonferroni-adjusted threshold.
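The correction itself is just a division of the significance level by the number of comparisons. A sketch, assuming (hypothetically) that four different ways of slicing the group were tested:

```python
p_subgroup = 0.043  # p-value from the self-identified-researcher slice
alpha = 0.05        # conventional significance level
n_tests = 4         # hypothetical number of group definitions examined

bonferroni_threshold = alpha / n_tests  # 0.05 / 4 = 0.0125
survives = p_subgroup < bonferroni_threshold
print(survives)  # 0.043 does not clear the adjusted bar
```

With any plausible number of comparisons here (more than one), the adjusted threshold drops below 0.043, which is the point of the correction.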

So, based on the statistics, there is nothing unusual happening at the population level.

However, while reviewing individual cases to build the group, I found something the math can't explain. Four specific cases have unusual documentation that stands out:

* There is no public autopsy for Amy Eskridge, even four years after her death.
* The cause of death for an Air Force intelligence officer was never released, and the case stayed with a local medical examiner for two years.
* NASA would not confirm the employment of a principal scientist who worked there for 25 years.
* One month before she died by gunshot, Eskridge wrote a note predicting that suicide would be given as a false explanation.

The statistics question has a clear answer, but the transparency question does not.

For context, everything I know about Poisson tests and epidemiological baselines is self-taught. This started as idle curiosity and turned into a deep dive I couldn't stop. I may have made mistakes in defining the group or in adjusting for the healthy worker effect, or missed something else entirely. I genuinely welcome feedback, especially on my methods.

by AluminumAtlas

2 Comments

  1. Ok-Perspective-1624

    You have already noted it yourself, but my biggest critique is how you built the sample population. There are a thousand different ways to construct the sample depending on the angle you take, so deaths that are statistically insignificant in one sample may not be in another; it also proves nothing either way. These tests shine much brighter in far more controlled environments. There are just too many unknown variables here to truly gain insight from results like these.