Re-updated CDC numbers, and self-critique.

Re: earlier posts:

The apparent November/December discrepancy between higher CovidTracking counts (COVID-19 deaths) and lower CDC official excess-death counts has essentially vanished.

The actual excess deaths amount to +6,500 per week for December. There was just a lot of lag.

I give myself some credit for considering that I might be in a bubble, that my faith in the two reporting systems might be misplaced, and for looking for alternate explanations.

But in the end I was too timid in their defense: I thought only about 5K of the 7K discrepancy would be lag, and that we would see a larger role for harvesting, for example.

[Screenshot: CDC excess deaths plot as of 23-May-2021, covering 1-Jan-2020 to present, highlighting the week ending 26-Dec-2020 with a predicted final death count of 84,715.]

Broniatowski on why I fail. Also, ending the pandemic.

Yesterday I commented on a cousin’s post sharing a claim about 9 reported child vaccine deaths. I looked up each death in VAERS and noted that two were actually gunshot wounds and three to five were special cases, leaving only two to four notably concerning. I suspect this didn’t help: she quickly deleted my comment.

David Broniatowski says I shouldn’t be surprised. He argues that both debunking and censorship are counterproductive. Remember,

Russian Twitter “troll” accounts weaponized demeaning provaccine messages as frequently as vaccine refusal narratives when conducting a broad campaign to promote discord in American society.

What to do instead? The hard work of opening “collaborations with public health partners”, and especially with physicians, who are generally trusted. This is of course harder. And I’m not a physician, so that’s out.

Open letter from David Broniatowski:

The vaccine rollout in the USA has slowed, driven in part by the fact that the most eager and confident citizens have now been vaccinated. The hurdle now is no longer one of vaccine supply, but rather, demand. In two new editorials, and a podcast, all in the American Journal of Public Health, I make the case that:

  1. Debunking misinformation is insufficient to convince hesitant people to vaccinate. Rather, we must listen to their concerns and communicate the gist of vaccination in a manner that accords with their values.
  2. Blanket removal of online content by Facebook, Twitter, and Google/YouTube may be counterproductive, driving hesitant people to seek out information on alternative platforms. On the other hand, social media platforms are excellent tools for microtargeting and can help public health agents to reach people who are the most hesitant. We can use social media, in combination with traditional methods, to build relationships with the most hesitant people and increase their likelihood of vaccinating.

Together, these strategies can help us cross the threshold of herd immunity to end the pandemic.

Podcast is here. [<- Link may not render, but it works. -ct]

Airborne / VAERS

Thanks to Mike Bishop for alerting me to Jimenez’s 100-tweet thread and Lancet paper on the case for COVID-19 aerosols, and the fascinating 100-year history that still shapes the debate.

Because of that history, admitting “airborne” or “aerosol” transmission seems to have been quite a sea change. Some of this is important - “droplets” are supposed to drop, while aerosols remain airborne and so circulate farther.

But some seems definitional - a large enough aerosol is hindered by masks, and a small droplet doesn’t drop.

The point being that, as with measles and other respiratory viruses, “miasma” isn’t a bad concept: contagion can travel, especially indoors.

VAERS Caveat

Please, people: if you use VAERS, check the details. @RealJoeSmalley posts things like “9 child deaths in nearly 4,000 vaccinations”, with the attitude that it’s not his responsibility if the data is wrong - caveat emptor.

With VAERS that’s highly irresponsible - you can’t even use VAERS without reading about its limits.

I do get 9 deaths in VAERS if I set the age limit to “<18”. But the total number of US vaccinations for <18 isn’t 4,000 - it’s 2.2M.
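To see how much the denominator matters, here is a quick back-of-the-envelope rate calculation. The numbers are the ones above (9 VAERS reports, the viral post’s 4,000, and the approximate 2.2M US <18 vaccinations); VAERS reports are unverified and do not establish causation, so this only compares reporting rates:

```python
# Reported-death rate implied by each denominator.
reported_deaths = 9

claimed_vaccinations = 4_000       # figure in the viral post
actual_vaccinations = 2_200_000    # approximate US <18 vaccinations

rate_claimed = reported_deaths / claimed_vaccinations * 1_000_000
rate_actual = reported_deaths / actual_vaccinations * 1_000_000

print(f"claimed denominator: {rate_claimed:.0f} reports per million")  # 2250
print(f"actual denominator:  {rate_actual:.1f} reports per million")   # 4.1
```

A factor of ~550 between the two rates, before even looking at what the individual reports say.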

Also I checked the 9 VAERS deaths for <18:

Two are concerning because the patients had little or no known risk:

  1. 16yo, only risk factor oral contraceptives
  2. 15yo, no known risks

Two+ are concerning but seem experimental. AFAIK the vaccines are not approved for breastfeeding, and are only in clinical trial for young children. Don’t try this at home:

  1. 5mo breastmilk exposure - mom vaccinated. (?!)
  2. 2yo in ¿illicit? trial? Very odd report saying it was a clinical trial but the doctors would deny that, reporter is untraceable, batch info is untraceable. Odd.
  3. 1yo, seizure. (Clinical trial? Else how vaccinated?)

Two were very high risk patients. (Why was this even done?):

  1. 15yo with ~25 severe pre-existing conditions/allergies
  2. 17yo with ~12 severe pre-existing conditions/allergies

Two are clearly unrelated:

  1. Error - gunshot suicide found by family, but age typed as “1.08”.
  2. 17yo, firearm suicide - history of mental illness

For evaluating your own risk, only the two teens seem relevant. Their deaths might not be vaccine-related, but with otherwise no known risk, the vaccine is a very good candidate cause.

VAERS Query

I’m not able to get “saved search” to work, so here are the non-default Query Criteria:

  • Age: < 6 months; 6-11 months; 1-2 years; 3-5 years; 6-17 years
  • Event Category: Death
  • Serious: Yes
  • Vaccine Products: COVID19 VACCINE (COVID19)

Group By: VAERS ID
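Since the saved search won’t share, the same filter can also be reproduced offline against the downloadable VAERS CSV exports (VAERSDATA.CSV and VAERSVAX.CSV). A minimal sketch, using tiny synthetic stand-ins with the real files’ column names (VAERS_ID, AGE_YRS, DIED, VAX_TYPE); the actual exports have many more columns:

```python
import pandas as pd

# Synthetic stand-ins for the real VAERSDATA.CSV / VAERSVAX.CSV exports.
data = pd.DataFrame({
    "VAERS_ID": [1, 2, 3, 4],
    "AGE_YRS": [0.42, 16.0, 17.0, 45.0],
    "DIED": ["Y", "Y", "N", "Y"],
})
vax = pd.DataFrame({
    "VAERS_ID": [1, 2, 3, 4],
    "VAX_TYPE": ["COVID19", "COVID19", "COVID19", "FLU4"],
})

def under18_covid_deaths(data: pd.DataFrame, vax: pd.DataFrame) -> pd.DataFrame:
    """Age < 18, death reported, COVID-19 vaccine - mirrors the query above."""
    merged = data.merge(vax, on="VAERS_ID")
    return merged[(merged["AGE_YRS"] < 18)
                  & (merged["DIED"] == "Y")
                  & (merged["VAX_TYPE"] == "COVID19")]

print(under18_covid_deaths(data, vax)["VAERS_ID"].tolist())  # [1, 2]
```

Working from the raw CSVs also forces you to read each report’s free-text narrative, which is exactly the step the “9 deaths” posts skip.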

…we are left with the problem that… Social scientists think of themselves as explorers and they will continue to sail the world’s oceans shouting “Land!” at every mirage on the horizon even if much of the Earth has already been mapped. ~Jay Greene

Avoiding the High Cost of Peer Review Failures

Wilderness & Environmental Medicine’s editor Neil Pollock frequently writes an Editor’s note about peer review, publishing, and reproducibility. I wish he were more concise, but he often raises good points, and I expect none of my colleagues read WEM, so here’s a digest. This month’s essay is Avoiding the High Cost of Peer Review Failures.
Skipping a review of the problem, and a note about unintended consequences of open access, we get to some clear caveat emptor:

The legitimacy of journals cannot be confirmed by name or impact factor scores, and often not by promises made regarding peer-review standards…. Many predatory journals have credible and even inspiring names. They can also manufacture or manipulate impact factor scores and blatantly mislead regarding peer-review practices. [including ignoring reviews]

Caveat emptor:

Mindfulness, and more than a small degree of cynicism, is necessary to critically evaluate the legitimacy of any journal.

How you will be tempted to fail:

…getting through “peer review” with no more than trivial editorial comments may seem reasonable for the person or team thinking that their words are gold.

Being invited to review may also confer an aura of legitimacy. Such events could result in additional manuscripts being submitted to the same journal.

Stop being cats:

The inherently independent nature of researchers can lead to avoidance of conversations regarding research publication. [Discuss concerns and establish institutional guidelines to avoid being trapped by predatory journals.]

For example: [breaking his sentences into bullets]

  • Did a person or team publish in such a journal inadvertently or to get around research weaknesses?
  • Should full (or any) credit be given for publications in journals found to be predatory?
  • Should job candidates with a history of publication in predatory journals be considered?
  • Should articles published in journals employing predatory practices count in tenure packages?
  • What scrutiny of the effort of flagged authors is warranted?

It’s been a while since I read Shapin, but I’m reminded of early scientific societies and the network of trust built up by personally recommending new members. At this point I can’t see submitting to a journal that isn’t already known to my colleagues and field. But replicability indices (here | here | here…) show even that is not enough - Pollock is right that your department and institution have to ask some hard questions.

Wellerstein on nuclear secrecy:

Interview in the Bulletin of the Atomic Scientists about his new book 📚 Restricted Data. A thoughtful, nuanced discussion. Here are just a few excerpts:

You learn that most of what they’re redacting is really boring.

Explicit information—information you can write down—by itself is rarely sufficient for these kinds of technologies. …That isn’t saying the secrets are worthless, but it is saying that they’re probably much lower value than our system believes them to be.

Once you peel back the layer of secrecy—even in the Eisenhower years—you don’t find a bunch of angry malcontented bureaucrats on the other side. You find rich discussions about what should and shouldn’t be released. You find differences of opinion, …

I was also surprised that so many aspects of the system that we’ve come to take for granted are really determined by a tiny number of people—maybe six or seven people.

HTT Bob Horn

Predicting replicability: scientists are 73% accurate

Congratulations to Michael Gordon et al for their paper in PLoS One!

This paper combines the results of four previous replication market studies. Data & scripts are in the pooledmaRket R package.

Key points:

  • Combined, the studies cover 103 replications in the behavioral & social sciences.
  • Markets were 73% accurate, surveys a bit less.
  • p-values predict original findings, though not at the frequencies you’d expect.

Enough summarizing - it’s open access, go read it! 😀🔬📄

~ ~ ~

Coda

We used this knowledge in the Replication Markets project, but it took a while to get into print, as these things do.

It should be possible to get 80-90% accuracy:

  • These were one-off markets - no feedback and no learning!
  • A simple p-value model does nearly as well, with different predictions.
  • Simple NLP models on the PDF of the paper do nearly as well, with different predictions.
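For a sense of what “a simple p-value model” could look like, here is a toy threshold rule: predict “will replicate” when the original p-value is well below 0.05. The 0.005 cutoff and the study data are my illustration, not the paper’s actual model or dataset:

```python
# Toy baseline: predict replication iff the original p-value < 0.005.
def predict_replicates(p_value: float, threshold: float = 0.005) -> bool:
    return p_value < threshold

# Synthetic (p-value, actually-replicated) pairs - NOT real study data.
studies = [(0.001, True), (0.03, False), (0.0001, True),
           (0.049, False), (0.02, True)]

correct = sum(predict_replicates(p) == replicated for p, replicated in studies)
print(f"accuracy: {correct / len(studies):.0%}")  # 80% on this toy set
```

The point of the bullet above is that even a rule this crude is competitive in accuracy while disagreeing with markets on which papers, so combining signals should beat either alone.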

Replication Markets probably did worse ☹️, but another team may have done better. TBD.

Gelman on Bad Science for Good

Gelman’s recent short post on Relevance of Bad Science for Good Science includes a handy Top10 junk list:

A Ted talkin’ sleep researcher misrepresenting the literature or just plain making things up; a controversial sociologist drawing sexist conclusions from surveys of N=3000 where N=300,000 would be needed; a disgraced primatologist who wouldn’t share his data; a celebrity researcher in eating behavior who published purportedly empirical papers corresponding to no possible empirical data; an Excel error that may have influenced national economic policy; an iffy study that claimed to find that North Korea was more democratic than North Carolina; a claim, unsupported by data, that subliminal smiley faces could massively shift attitudes on immigration; various noise-shuffling statistical methods that just won’t go away—all of these, and more, represent different extremes of junk science.

And the following sobering reminder why we study failures:

None of us do all these things, and many of us try to do none of these things—but I think that most of us do some of these things much of the time.

What is your theory, again?

Just re-found this @ayjay essay in an old tab.

The question I would ask churches that are re-opening without masks or distancing, but with lots of congregational singing, is: How do you think infectious disease works, exactly? How do you think COVID–19 is transmitted? What’s the theory you’re operating on?

I still know people using an incoherent mix of, well, all of these:

  • There is no real pandemic.
  • It’s a Chinese bio-weapon.
  • Masks (etc.) don’t work.
  • There are easy and effective treatments.

Ritchie on Sloppy Pandemic Science

Essay worth reading in its entirety: The Great Reinforcer by Stuart Ritchie.

To be sure, out of the gloom of the pandemic came some incredible advances – the stunning progress made on vaccines chief among them. But these bright spots were something of an exception. For those of us with an interest in where science can go wrong, the pandemic has been the Great Reinforcer: it has underlined, in the brightest possible ink, all the problems we knew we had with the way we practice science.

Acknowledging the stunning successes in the science of COVID-19, he reviews our regrettable and predictable failures. And, hitting a little too close for comfort, he notes how much harm comes from a desire to help.