Listening to the Burnout discussion by authors Emily Nagoski and Amelia Nagoski on Brené Brown’s podcast. 📚💬

In short, emotions are tunnels. If you go all the way through them you get to the light at the end. Exhaustion happens when we get stuck in an emotion.

Explores the idea of “completing the stress response cycle”. Hm!

Andrew Sullivan on Algorithms vs. Democracy

From his essay We Are All Algorithms Now. Don’t get your news from social media.

For Facebook and Google and Instagram and Twitter, the business goal quickly became maximizing and monetizing human attention via addictive dopamine hits. Attention, they meticulously found, is correlated with emotional intensity, outrage, shock and provocation. Give artificial intelligence this simple knowledge about what distracts and compels humans, let the algorithms do their work, and the profits snowball. The cumulative effect — and it’s always in the same incendiary direction — is mass detachment from reality, and immersion in tribal fever.

Alternative views, unpleasant facts, discomforting arguments, contextualizing statistics, are, with ever-greater efficiency, filtered out of what our eyes can see and our minds absorb. And what we therefore believe becomes more fixed, axiomatic, self-reinforcing, and self-affirming. We become siloed into two affective tribes, with dehumanization of each other deepening with every news cycle. And we know what happens when dehumanization through social media is fully exploited. Ask the Rohingya of Burma, whose horrifying persecution was a function almost entirely of a Facebook disinformation campaign, seeded by a few in government and then unleashed by the masses in a spasm of genocidal violence.

Epistemic Humility: COVID-19

A refreshing disclaimer from yesterday’s BMJ: Covid-19’s known unknowns

Competing interests: We have read and understood BMJ policy on declaration of interests and declare that all three authors have been wrong about covid-19. MM and MB initially believed substantial undocumented transmission meant that a large proportion of the UK population was infected during the first wave. Subsequent seroprevalence surveys indicated that this was not the case. GDS thought that SARS-CoV-2 would be amplified through children and substantial mortality displacement would be observed. Neither has been the case.

HT: @ReplicationWatch

Linda Fallacy?

Proposed Constitutional Amendments (VA)

QUESTION 2:

Should an automobile or pickup truck that is owned and used primarily by or for a veteran….

Are pickup trucks no longer automobiles? Or is veterans+pickups more salient?

Vox article on Replication & Replication Markets

Vox’s Kelsey Piper discusses the replication crisis in “Future Perfect” this week, elaborating on Alvaro de Menard’s (probably a nom-de-blog) recent post What’s wrong with social science and how to fix it…, in the context of other Vox coverage of replication and science.

It’s a good piece. If you’ve read Menard you won’t learn a lot, but it’s got the Vox flair and reaches a wider audience. It also adds some more context, including previous Vox articles.

Two and a half corrections:

  • It looks like Piper conflated Replication Markets with the larger DARPA SCORE program that funds us. We are just one part.
  • Piper neglects Menard’s caveats: his post is based on the forecasts from Replication Markets, and actual replication results are only just arriving. Past markets have been about 70% accurate on average, so there is some wiggle room (a quick Bayes sketch follows this list).
  • Even replication itself is a noisy process. With hundreds of replications now, we are pretty sure only about half of claims replicate, but the result for any single claim could be a chance mistake.
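A minimal Bayes sketch of that wiggle room. The 50% base rate and the symmetric 70% accuracy are simplifying assumptions of mine, not official Replication Markets numbers:

```python
# How much does a 70%-accurate market tell you about a single claim?
base_rate = 0.5   # assumed: roughly half of claims replicate
accuracy = 0.7    # assumed symmetric: right 70% of the time either way

# Bayes: P(replicates | market says "replicates")
p_says_yes = accuracy * base_rate + (1 - accuracy) * (1 - base_rate)
posterior = accuracy * base_rate / p_says_yes
print(f"P(replicates | market says yes) = {posterior:.2f}")  # 0.70
```

So even at a 50-50 base rate, about 3 in 10 of the market’s “will replicate” calls should still fail. That is the wiggle room.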

A couple of quotes

The problem (summarizing Menard):

If scientists are pretty good at predicting whether a paper replicates, how can it be the case that they are as likely to cite a bad paper as a good one? Menard theorizes that many scientists don’t thoroughly check — or even read — papers once published, expecting that if they’re peer-reviewed, they’re fine. Bad papers are published by a peer-review process that is not adequate to catch them — and once they’re published, they are not penalized for being bad papers.

The problem (from a 2016 Vox piece):

We heard back from 270 scientists all over the world, including graduate students, senior professors, laboratory heads, and Fields Medalists. They told us that, in a variety of ways, their careers are being hijacked by perverse incentives. The result is bad science.

A few of the articles linked by Piper

Previous Vox pieces

  • Resnick’s 2017 article about p-values and the suggestion to tighten the significance threshold to .005.

The real problem is the culture of science.

Some of the papers on forecasting replications:

  • Camerer et al. 2018 in Nature: markets were about 86% accurate in predicting these replications (and about 70% overall including three previous studies).
  • Altmejd et al. 2019 in PLOS ONE: predicting the previous social science replications using statistical models, about 70% accurate (AUC of 0.77), based on simple features like effect size, and whether the result was an interaction effect.
  • Hoogeveen, Sarafoglou, Wagenmakers 2020 in Advances … in Psych. Sci.: Laypeople can predict social science replications about 59% of the time.

Ode to our forecasters: what a year! 54K surveys, 42K trades, 3K claims; dedicated ’casters, belief shift from priors, SSR looks sound. Replications TBD, but crosscheck hints at good accuracy.

@replicationmarkets #openscience #reproducibility #replication

I reviewed the Watanabe manuscript. I think it’s worth a follow-up. Am I missing something? #reproducibility #openscience #epitwitter

outbreaksci.prereview.org/2007.0947…

This sounds fantastic. Why haven’t I done this?

www.natureindex.com/news-blog…

Karl Popper on Social Media

As for Adler, I was much impressed by a personal experience. Once, in 1919, I reported to him a case which to me did not seem particularly Adlerian, but which he found no difficulty in analysing in terms of his theory of inferiority feelings, although he had not even seen the child. Slightly shocked, I asked him how he could be so sure. “Because of my thousandfold experience,” he replied; whereupon I could not help saying: “And with this new case, I suppose, your experience has become thousand-and-one-fold.”

~ “Conjectures and Refutations”

Saturn. First attempt, just holding my iPhone near the eyepiece and clicking photos until one finally aligned.

Science is getting harder to read. More jargon, more acronyms, worse writing. All those reduce citations.

79% of acronyms are used fewer than 10 times, ever. So cut back.

www.natureindex.com/news-blog…

New PDF compares accuracy of 7 key #COVID-19 models. On mean absolute % error for total deaths (shown), YYG and IHME’s Mortality Spline (both w/SEIR) did well; Imperial’s SEIR way overestimated. On predicted peak timing, IHME’s simple Curve Fit won - huh. See paper for caveats.
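For reference, the headline metric is simple. A tiny sketch with made-up forecast numbers, not values from the paper:

```python
# Mean absolute % error (MAPE) over a set of total-death forecasts.
# These forecast/actual values are invented for illustration only.
def mape(forecasts, actuals):
    return 100 * sum(abs(f - a) / a for f, a in zip(forecasts, actuals)) / len(actuals)

print(mape([110_000, 130_000], [100_000, 140_000]))  # ~8.57 (% error)
```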

Of the various kinds of misinformation during this pandemic, at least I was spared relatives touting the cow urine cure.

Piece in Meedan on coronavirus misinfo in India.

A short, well-written plea to actually read the article.

I’ve been trying Readup, a small community where you can’t comment on an article unless you’ve used Safari’s Reader mode and finished reading it.

Solitude and Leadership, by Deresiewicz

By William Deresiewicz | March 1, 2010

Reminders for my increasingly distracted brain.

If you want others to follow, learn to be alone with your thoughts

I find for myself that my first thought is never my best thought.

Now that’s the third time I’ve used that word, concentrating. Concentrating, focusing. You can just as easily consider this lecture to be about concentration as about solitude. Think about what the word means. It means gathering yourself together into a single point rather than letting yourself be dispersed everywhere into a cloud of electronic and social input. It seems to me that Facebook and Twitter and YouTube—and just so you don’t think this is a generational thing, TV and radio and magazines and even newspapers, too—are all ultimately just an elaborate excuse to run away from yourself. To avoid the difficult and troubling questions that being human throws in your way. Am I doing the right thing with my life? Do I believe the things I was taught as a child? What do the words I live by—words like duty, honor, and country—really mean? Am I happy?

Estrogen, COVID-19, and Rapid Reviews

In my day, philosophers encountered birth control pills at least twice during training. First and easiest, your account of causation could not simply say birth control pills reduce the chance of pregnancy - they have no effect on men’s chance. Second, and trickier, how to handle their effect on blood clots? By simulating pregnancy, they increase the risk of clots. But by preventing pregnancy they decrease it. Once, you could publish papers about that.
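A toy version of the clot puzzle, with entirely hypothetical numbers; the structure, not the values, is the point. The pill adds direct clot risk but removes the pregnancy-mediated risk:

```python
# Hypothetical probabilities for the pill/thrombosis puzzle.
p_preg_no_pill, p_preg_pill = 0.20, 0.01   # the pill prevents pregnancy

p_clot = {                # P(clot | pill, pregnant) -- all made up
    (0, 0): 0.001, (0, 1): 0.010,   # pregnancy raises clot risk
    (1, 0): 0.003, (1, 1): 0.012,   # the pill adds direct risk
}

def net_risk(pill, p_preg):
    return p_preg * p_clot[(pill, 1)] + (1 - p_preg) * p_clot[(pill, 0)]

print(net_risk(0, p_preg_no_pill))  # no pill: 0.00280
print(net_risk(1, p_preg_pill))     # pill:    0.00309
```

With these numbers the direct harm narrowly wins; raise the baseline pregnancy rate and the sign flips. That sign flip, hinging on how you weigh the two causal paths, is what made the pills worth papers.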

Well, here they are again. MIT Press has a new journal devoted entirely to rapid reviews of COVID-19 papers. (Hopkins does too.) And by way of introducing them, here are their most recent reviews.

This study on estrogen got two strong reviews. Women are less susceptible than men to C19.* Estrogen is one possibility. The authors confirm that post-menopausal women had worse symptoms, but then age is an even stronger risk factor.

However, thanks to medicine apparently invented to confound philosophers, it’s possible to separate estrogen from age. Among pre-menopausal women, those on oral contraceptives appear to have fared better. (This was not clear for older women on hormone replacement.) The study has some limits - it’s not a randomized trial.

But once again causal analysis of pills and blood clots is relevant, and here the pill itself provides a quasi-intervention. Causal payback, baby.

——

* It was tempting to write “Women are less susceptible to C19 than men.” The ambiguity is delightful, and one wonders if it is true.

What is a replication?

In a recent Nature essay urging pre-registering replications, Brian Nosek and Tim Errington note:

Conducting a replication demands a theoretical commitment to the features that matter.

That draws on their paper What is a replication? and Nosek’s earlier UQ talk of the same name arguing that a replication is a test with “no prior reason to expect a different outcome.”

Importantly, it’s not about procedure. I wish I’d thought of that, because it’s obvious after it’s pointed out. Unless you are offering a case study, you should want your result to replicate when there are differences in procedure.

But psychology is a complex domain with weak theory. It’s hard to know what will matter. There is no prior expectation that the well-established Weber-Fechner law would fail among the Kalahari – but it would be interesting if it did. The equally well-established Müller-Lyer illusion does seem to fade in some cultures. That requires different explanations.

Back to the Nature essay:

What, then, constitutes a theoretical commitment? Here’s an idea from economists: a theoretical commitment is something you’re willing to bet on. If researchers are willing to bet on a replication with wide variation in experimental details, that indicates their confidence that a phenomenon is generalizable and robust. … If they cannot suggest any design that they would bet on, perhaps they don’t even believe that the original finding is replicable.

This has the added virtue of encouraging dialogue with the original authors rather than drive-by refutations. And by pre-registering, you both declare that before you saw the results, this seemed a reasonable test. Perhaps that will help you revise beliefs given the results, and suggest productive new tests.

What is the purpose of retraction? Clearly it’s appropriate in cases of fraud or negligence. But what of the routine error of novel science? Surely this is defensible:

I agree we were wrong and an unpublished specimen will eventually prove it, but I disagree that a retraction was the best way to handle the situation.

Taken from RetractionWatch.

Retractable masks?

[In a piece about masks](https://somethingstillbugsme.substack.com/p/many-people-say-that-it-is-patriotic), reporter Cat Ferguson blogs at @somethingstillbugsme@substack.com:

To any journalists reading this who cover COVID-19 science: please keep an eye on Retraction Watch’s list of retracted or withdrawn papers. If something seems too good to be true, push on it. Whether it’s premature to say the retraction rate is exceptionally high for COVID-19 papers, it’s worth it to be overly skeptical….

For COVID forecasting, remember the superforecasters at Good Judgment. Currently placing US deaths by March at 200K to 1.1M, with 3:2 odds for above 350K, up from 1:1 on July 11.

“Foolish demon, it did not have to be so.” But Taraka was no more. ~R. Zelazny

Alan Jacobs with a cautionary tale about assuming news is representative of reality, and remembering to sanity-check our answers. blog.ayjay.org/proportio…

Protests and COVID

Worried about #COVID, I did not join #BLM protests. Even if outdoors + masks, marches bunch up, & there are only so many restrooms. It’s been an open question what effect they had. NCRC has reviewed a 1-JUN NBER study: @ county level, seems no. Can’t address individ.

Good news: Despite case rise, excess deaths have been dropping, nearly back to 100% after a high of 142%. Bad news: @epiellie thinks it’s just lag: early test ➛ more lead time. Cases up 3-4 wks ago, ICU 2-3, deaths up last period. Q: why do ensemble models expect steady death rate?

I saw my old and much-loved Monash colleague #ChrisWallace https://en.wikipedia.org/wiki/Chris_Wallace_(computer_scientist) trending on Twitter. Alas, it turns out it’s just some reporter with a 5-second clip.

How about #WallaceTreeMultiplier, #MML, #ArrowOfTime, #SILIAC?

Open access is good, unless you’re a journal?

Bob Horn sent me this news in Nature:

“Open-access Plan S to allow publishing in any journal” (Richard Van Noorden, Nature news, 16 July 2020): Funders will override policies of subscription journals that don’t let scientists share accepted manuscripts under open licence.

This seems good news, unless you’re a journal.

I expect journals to do good quality control, and top journals to do top quality control. At minimum, good review and gatekeeping (they are failing here, but assume that is fixed separately). But also production: most scientists can neither write nor draw, and I want journals to minimize typos and maximize production quality. If I want to struggle with scrawl, I’ll go for preprints: it’s fair game there.

So, if you (the journal) can’t charge me for access, and I still expect high quality, you need to charge up front. The obvious candidates are the authors and funders. The going rate right now seems to be around $2000 per article, which is a non-starter for authors. Authors of course want to fix this by getting the funders to pay, but that money comes from somewhere.

Challenge: How to get up-front costs below $500 per article?

Here’s some uninformed back-of-the-envelope arithmetic saying that will be hard.

Editors. Even Rowling needs editors.

  • Assume paper subscriptions pay for themselves and peer review is free.
  • For simplicity, assume we’re paying one editor $50K to do all the key work.
  • Guess: they take at least 5 hours per 10-page paper on correspondence, editing, typesetting, and production. Double the salary for benefits etc. ($100K), and a ~2,000-hour working year covers 400 papers. That’s $250 per paper.

Looking good so far!

Webslingers: someone has to refill the bit buckets.

  • Suppose webslinger + servers is $64K/year. Magically including benefits.
  • The average journal publishes 64 articles a year.
  • Uh-oh: that’s $1000 per article right there. (Both calculations are sketched below.)
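Making the arithmetic explicit; every input below is a guess from the bullets above, not real cost data:

```python
# Back-of-the-envelope per-article costs for an open-access journal.
editor_cost = 2 * 50_000                  # $50K salary, doubled for benefits
papers_per_editor = 2_000 / 5             # ~2,000-hour year, 5 hours per paper
editing_per_paper = editor_cost / papers_per_editor   # $250 per paper

web_cost = 64_000                         # webslinger + servers, per year
articles_per_journal = 64

for journals in (1, 10):
    web_per_paper = web_cost / (journals * articles_per_journal)
    print(journals, editing_per_paper + web_per_paper)
# 1 journal:   $1250 per article -- way over budget
# 10 journals: $350 per article -- under the $500 target
```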

So… seems one webslinger needs to be able to manage about 10 journal websites. Is that doable? How well do the big publishers scale? Do they get super efficient, or fall prey to Parkinson’s law?

Alternative: societies / funders have to subsidize the journals as necessary road infrastructure. That might amount to half the costs. How much before they effectively insulate the new journals from accountability to quality control… again?