Discover more from Do Your Own Research
The Potemkin Argument, part 3: The Misportrayal of Dr. Flavio Cadegiani
This is a public peer review of Scott Alexander’s essay on ivermectin, of which this is the third part (though it was written before the rest of the series). You can find an index containing all the articles in this series here.
After I published my article on an error in Scott Alexander’s ivermectin piece, I got an email from a friend in Brazil who pointed to a section that hadn’t sat well with me, either, but that I had not dug into much. It was Scott’s portrayal of Dr. Flavio Cadegiani. What he highlighted is egregious, so this is the first study we will look into.
This article will comment on the entire section relating to Dr. Cadegiani, going through each claim, only changing the order of the points Scott makes in order to group them sensibly.
1. Cadegiani’s Background
Scott mentions that Cadegiani…
…the only author of the sole book in Overtraining Syndrome, the prevailing sport-related disease among amateur and professional athletes. He is also responsible for approximately 70% of the articles published in the field in the world in the last 05 years, and reviewer for more than 90% of the manuscripts in the field.
Following the link offered by Scott, we get to an article titled, “Why not chemical castration (to escape COVID-19)?” The content of that article doesn’t get any better than the title, indistinguishable from something written by an angsty teenager. I assume this is where Scott found most of the material he references. The above paragraph on Cadegiani was found on the “Corporate Leadership” page of a company Cadegiani was associated with, and no longer seems to be there. It’s probably the most awkward paragraph in a longer bio that—for all we know—could have been assembled from original Brazilian materials by a bored intern armed with Google Translate. Is it really fair to call that specific paragraph Cadegiani’s self-description? Other pages available on the web fit that definition much better.
Scott also seems to have misstated another part of Cadegiani’s background. From Scott’s article:
Also in Cadegiani news: he apparently has the record for completing one of the fastest PhDs in Brazilian history (7 months)…
From the linked website:
superlative achievements include amongst the fastest PhD obtained in the history of Federal University of São Paulo (7 months) and concurrent gold medals in Mathematics, Chemistry and Physics Brazilian Olympics in his adolescence.
When I read Scott’s article, I got the impression Cadegiani had gone to some third-rate university that rubber-stamped a PhD for him. The original makes it clear that the comparison was within the same university, which happens to be the 9th-highest-ranked university in Latin America. Not exactly your run-of-the-mill diploma mill, then.
Also, perhaps people in the US are not familiar with the concept of a Science Olympiad. These are very prestigious competitions for high-school students, organized in many countries around the world (here's math, for example). In Greece, a medal in a Science Olympiad places you directly into one of the top universities in the country, bypassing the national exams everyone else has to sit. Getting three gold medals in three different national Science Olympiads, as Cadegiani claims, is incredibly unusual.
Speaking of combing through articles in an attempt to figure out what kind of person Cadegiani is, it's perhaps also worth reading the opposing point of view: a story of how Cadegiani helped yet another patient for free, the same price he charges all his COVID patients.
2. The TrateCov App
Scott writes about Cadegiani:
he was involved in a weird scandal where the Brazilian government tried to create a COVID recommendation app but it just recommended ivermectin to everybody regardless of what input it got
Apparently, this is related to an app the Brazilian government made. If you follow the link Scott embedded, it goes to an article subtitled, “Flavio Cadegiani points out a series of errors in the system launched by Eduardo Pazuello.” In other words, not only did Cadegiani have nothing to do with the development of the app, but when asked, he pointed out his concerns with an app whose methodology was based on a paper he published. How this makes him “involved in a weird scandal” is not something I can intuit, unless we use an incredibly tortured definition of the word “involved.”
Also, when Scott wrote “it just recommended ivermectin to everybody regardless of what input it got,” I got the sense it returned the exact same treatment recommendation for every patient. Looking into it, he’s most likely referring to this investigation by the Brazilian magazine Lupa, which tried four patient profiles, selected “early treatment,” and got different doses of the same drugs for each, including ivermectin. If this was indeed the source, first we have to recognize that four reported patients hardly constitute proof that “everybody” had the same drugs recommended. Second, recommending different doses of the same drugs is adapting to the patient profile. The Lupa writers, who seem to hold quite a strong anti-ivermectin position, were outraged that the app does not alter its recommendations sufficiently to adjust to each patient’s background. If they lived in the US, they might have particularly enjoyed our one-size-fits-all “go home and come back if your lips turn blue” prescription.
They also apparently discovered that if you enter a negative weight, you get a negative ivermectin prescription. Excellent software QA work by this outlet, but not exactly a “scandal,” especially since they do mention that the Health Ministry said the app was developed with internal resources, not by contracting an external agency.
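For what it's worth, the negative-weight behavior Lupa found is garden-variety missing input validation rather than anything sinister. Here is a minimal sketch of the guard a weight-based dosing calculator would normally include; this is a hypothetical illustration, not TrateCov's actual code, and the 0.2 mg/kg rate is purely illustrative:

```python
def weight_based_dose_mg(weight_kg: float, mg_per_kg: float = 0.2) -> float:
    """Compute a weight-based dose, rejecting implausible inputs.

    Without the range check, a negative weight produces a nonsensical
    negative dose -- exactly the behavior the Lupa testers observed.
    """
    if not 1.0 <= weight_kg <= 300.0:
        raise ValueError(f"implausible patient weight: {weight_kg} kg")
    return round(weight_kg * mg_per_kg, 1)

print(weight_based_dose_mg(70.0))   # a 70 kg patient gets a positive dose
```

One `if` statement is the entire difference between "app with a missing input check" and "scandal."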
So overall, there appears to have been no real scandal, and Cadegiani wasn’t “involved” in it in any conventional sense of the word. There also does not seem to be any connection between Cadegiani and the app, unless we consider the fact that the app creators referred to one of his published papers as a reference for their algorithm. Being that the creators are the country’s health ministry, surely we will allow them to make their decisions about what literature to consider, even if the FDA objects. Sovereignty is weird like that. While I’m not sure how well the app worked, this seems to be entirely beside the point.
3. Ultra-High-Dose Crimes Against Humanity
Ok, but enough with the appetizers. We’ve been promised Nuremberg-level evil, so let’s see what the goods are. Quoting from Scott’s article again:
And, uh, he’s also studied whether ultra-high-dose antiandrogens treated COVID, and found that they did, cutting mortality by 92% . Which sounds great, except that it looks like most of this is that the control group had a shockingly high mortality rate, much higher than makes sense even in the context of severe COVID.
I think the charitable explanation here is that he made this data up too. But the Brazilian Parliament seems to be going with an uncharitable explanation, seeing as they have recommended that Cadegiani be charged with crimes against humanity.
Here is what the BMJ article covering the trial reports about that supposedly implausible mortality rate:

The mortality rate of 49.4% would not be high for the control group, as hospitals were collapsing under the pressure of the nascent gamma variant, Orellana added.
What about that “crimes against humanity” part? Again from the BMJ:
If the published results were true, the trial should have been stopped and unblinded to ensure better treatment of the control group, said CONEP’s coordinator, Jorge Venâncio, in a statement. If they were not, “they subjected 200 people to die in research that has no scientific value at all.”
That’s right. The accusation that earned him “crimes against humanity” status is not that the drug killed the patients he was testing it on; it is that, if the drug worked this well, he should have stopped his trial early and given it to the control group.
If you look at the BMJ article linked above, it lists not just one but four separate corrections to the original. This seems to me like the story—as it was first propagated—sounded incredibly salacious, whereas what’s now left of the article makes it sound like a politically-flavored conflict between Cadegiani and a specific person in the Brazilian bureaucracy, which got way out of hand and has now found its way to the courts.
In fact, Cadegiani was just one of 69 doctors the Brazilian parliament recommended for similar charges. Mind you, it’s not that the medical association has taken away his license to practice, or that he’s been convicted in court. No, it’s just a free-floating parliamentary procedure. In other words, it’s literally just politics: juicy, memeable politics.
Does this sound a bit far-fetched? Only if you don’t understand Brazilian politics and the fact that doctors who said things that aligned with what Brazilian President Jair Bolsonaro said were demonized. Actually, it’s easy to understand for US folks, because pretty much the same thing happened in the US with another local president and certain drugs he promoted.
OK fine, but proxalutamide doesn’t actually work that well, right? Well, a follow-up study by Cadegiani got high marks from McMaster Plus after they reviewed his data, and a mostly US-based study by Kintor Pharmaceutical Ltd. seems to confirm Cadegiani’s results.
I suppose we’ll find out, but it doesn’t look obvious to me that Cadegiani’s claims were made up.
Given how little “there” there seems to be, I’m inclined to throw out exaggerated claims like “crimes against humanity” and “worst clinical trial in the entire history of Brazil,” and instead treat them as evidence that something non-obvious is going on, something that generates non-informative, obviously absurd claims like these.
4. What About the P-Values?
Speaking of low p-values, some people did fraud-detection tests on another of Cadegiani’s COVID-19 studies and got values like p < 8.24E-11 in favor of it being fraudulent.
I assume “some people” refers to Kyle Sheldrick, who analyzed a different article by Cadegiani.
I’ll say directly that I don’t have a strong response to Sheldrick’s finding here. While there is a real dispute in the literature about the use of similar techniques and their usefulness, for the sake of argument, let’s just accept that the finding is real.
We should note a couple of things though:
Sheldrick was able to do his analysis because the authors made their data available on the web from the moment their preprint was published (unlike the vague future promise of data release in the TOGETHER trial). There’s literally a link where anyone can download the data. This doesn’t seem to me like the behavior of a fraud. It doesn’t rule out that the data were altered somewhere along the chain of custody; it just means that sharing the data publicly, immediately upon publication of the preprint, sits poorly with the hypothesis that Cadegiani is a conscious fraud.
Keep in mind, Scott mentions this paper of Cadegiani as background to his evaluation of a different paper by Cadegiani. Even if we accept Sheldrick’s claims about that paper, what should we do with that knowledge in regard to the paper Scott is actually examining? For any finding that has been found to be suspicious, will we single out anyone associated with it and throw out any other finding they ever made? Because that would leave us in a very awkward position.
You see, Kyle Sheldrick made extreme claims against Dr. Paul Marik, alleging “audacious fraud.” He has since deleted that page from his blog with no explanation and has taken his Twitter profile private. He also still has not published, as I’ve asked him to, the data on which studies were included in the infamous BBC article where he more or less declared any positive ivermectin study fraudulent. This is particularly concerning when we take into account that in the same article, Sheldrick’s group states that not sharing data is “a possible indicator of fraud.”
If we apply the same rule to Sheldrick that Scott applies to Cadegiani, we would have to ignore his findings on Cadegiani. Instead, I suggest we approach each element here separately: we treat Sheldrick’s findings on Cadegiani’s paper as a real concern until further notice, and just as we don’t ignore everything Sheldrick ever did because of (potentially big) mistakes elsewhere, we extend the same courtesy to Cadegiani. I can’t see another way forward that is consistent.
5. The Mockery
The remaining references to Cadegiani are simple mockery. I include them for completeness, but also to highlight that people should exercise more caution when using derogatory terms (e.g. “a crazy person”) about someone who will probably read their article, unless they’re willing to accept similar levels of mockery and vilification in return without complaint. Whatever the rules of engagement, they should at the very least be symmetric.
This is at the start of the section:
Cadegiani et al: A crazy person decided to put his patients on every weird medication he could think of, and 585 subjects ended up on a combination of ivermectin, hydroxychloroquine, azithromycin, and nitazoxanide, with dutasteride and spironolactone "optionally offered" and vitamin D, vitamin C, zinc, apixaban, rivaraxoban, enoxaparin, and glucocorticoids "added according to clinical judgment". There was no control group, but the author helpfully designated some random patients in his area as a sort-of-control, and then synthetically generated a second control group based on “a precise estimative based on a thorough and structured review of articles indexed in PubMed and MEDLINE and statements by official government agencies and specific medical societies”.
Patients in the experimental group were twice as likely to recover (p < 0.0001), had negative PCR after 14 vs. 21 days, and had 0 vs. 27 hospitalizations.
And this is at the end:
Anyway, let’s not base anything important on the results of this study, mmkay?
To Sum Up
In this piece I’ve aimed to examine in depth what Scott Alexander wrote about Cadegiani, not to relitigate everything that has ever been said elsewhere about Cadegiani, which would be a far bigger project given the controversies we’ve hinted at.
Pulling it all together: Scott does seem to raise a reasonable concern with one paper Cadegiani wrote (not the one Scott was examining), and it’s also clear that Cadegiani has been unwillingly dragged into the black hole that is Brazilian politics. None of that is license to mock his study designs or his professional background, or to tar him with a supposed scandal he had no part in. This feels like a case where the combination of politics and tabloid press in a different country produced a set of memes that were too good to check. And as could have been predicted, upon inspection most of the salacious stuff turned out to be different from what was advertised.
Honestly, having lived parts of my life enmeshed in the cultures of three different countries (Greece, UK, US), I feel like a lot of what I see here is a failure of translation across cultures. What comes across as arrogance in one culture may be considered confidence in another. What sounds like some strange degree from somewhere you never heard of may be the best university in the region for that specific field of study. What sounds like a grave condemnation putting someone on the same moral level as Pol Pot or Pinochet may be an artifact of political point-scoring in a battle we don’t understand.
And while a lot of this is understandable, turning a living, breathing human being into a caricature does not help us figure out these complicated questions. Even if the author were to claim they’re happy to accept similar portrayals by others without complaint, that section of the article contains actual falsehoods. It also contributes to the picture the essay paints of an unusually chaotic research field that can’t be taken seriously, a picture supported not by facts but by innuendo and emotive conjugation.
Update 6/13/22: Scott Alexander Makes Substantial Corrections to the Original
Scott Alexander has added the following to his Mistakes page:
First, I noted that he was accused of poorly-fleshed-out “crimes against humanity” by the Brazilian government, and speculated that they might think he was killing his patients (although I said I personally thought this was false). A new source that hadn’t been published at the time I wrote the piece explains that the “crimes against humanity” accusation is because he didn’t stop a trial when it showed the experimental drug was much more effective than the placebo drug (which is a nonsensical accusation, since the accusers don’t believe this is true anyway), although it also describes other aspects of the trial as “an ethical cesspool”. Second, although I said Cadegiani was “involved in a scandal” where the Brazilian government made a defective app, his only “involvement” was that the app used data he produced; he was not responsible for its scandalous defects. I cannot remember why I made this mistake, but I assume I saw someone else say something about this and didn’t dig deep enough to be fair to him. I regret both errors.
Indeed, the article has been corrected to remove the strange passage from the Cadegiani bio, the mention of the TrateCov app, and the discussion of the Proxalutamide study has been truncated to:
And, uh, he’s also studied whether ultra-high-dose antiandrogens treated COVID, and found that they did, cutting mortality by 92% . But the trial is under suspicion, with a BMJ article calling it “[the worst] violations of medical ethics and human rights in Brazil’s history” and “an ethical cesspit of violations”.
I don’t think anyone will be incredibly shocked to hear I’m still not too happy with what’s there, but I must commend Scott for making the edits he did. It’s definitely a step forward.
I wrote the following in response to the correction:
Thanks for the corrections WRT Cadegiani.
I doubt you're looking to make further edits, but I will note the updated version still reads:
"some people did fraud-detection tests on another of Cadegiani’s COVID-19 studies and got values like p < 8.24E-11 in favor of it being fraudulent."
Sheldrick's piece is very careful to write: "I would like to be explicit that I am not making an allegation of fraud against any specific author or their associated entities. Even where irregularities arise in data sets with multiple authors that cannot ultimately be explained, it is not usually reasonable to draw negative inference against all the authors involved. Authors are entitled to trust their collaborators, and researchers their employees."
And also, given that he's using methods very similar to what Carlisle is using, it is probably worth noting that Carlisle himself in his most famous paper writes: "In summary, the distribution of means for baseline variables in randomised, controlled trials was inconsistent with random sampling, due to an excess of very similar means and an excess of very dissimilar means. Fraud, unintentional error, correlation, stratified allocation and poor methodology might have contributed to this distortion."
Basically, what I'm saying is that the test Sheldrick did does not check for fraud but for surprisingness, and of course, extremely surprising values require explanation, but that explanation is most often a typo or other similar issue. In some cases it is fraud indeed but that is a second-level conclusion after using these tests to highlight unusual papers.
As for the other remaining element WRT the statements to the BMJ by Jorge Venâncio (head of the Brazilian regulator), Cadegiani wrote on Twitter in response to my piece:
"The regulator acted illegally by leaking information and fabricating data. The BMJ is aware of it since right after the publication. And they know the information is invalid since the Ministry of Health sent an official communication telling them that the information provided by Jorge Venancio is invalid. They've done nothing."
Now, as I mentioned, this whole mess has made its way to the courts, and it's totally reasonable to doubt statements by Cadegiani in this context, but if there is indeed an official statement by the Brazilian MoH, that is at least worth considering. It very well may be that the MoH is politically motivated of course, I can totally believe that, but the same applies for the regulator, in what is clearly a very politically contentious issue.
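The point made in the comment above, that these tests detect surprisingness rather than fraud per se, can be made concrete with a toy sketch of a Carlisle-style baseline check. This is my own illustration, not Sheldrick's actual code, and the baseline table is invented: from each baseline variable's summary statistics in the two arms, we compute a p-value for the difference in means; under honest randomization these p-values should be roughly uniform on (0, 1), so a cluster near 1 (arms "too similar") is the kind of anomaly the method flags.

```python
import math

def two_sided_p(m1: float, sd1: float, n1: int,
                m2: float, sd2: float, n2: int) -> float:
    """Two-sided p-value for a difference in means, computed from
    summary statistics only, using a normal approximation
    (adequate for arm sizes around 100)."""
    se = math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
    z = abs(m1 - m2) / se
    # Phi(z) via the error function; p = 2 * (1 - Phi(|z|))
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))

# Invented baseline table: (mean, SD) per arm, 100 patients per arm.
baseline = {
    "age":          ((54.1, 9.8),   (54.2, 9.9)),
    "BMI":          ((27.3, 4.1),   (27.3, 4.0)),
    "systolic BP":  ((121.0, 14.2), (121.1, 14.0)),
    "symptom days": ((7.1, 2.2),    (7.1, 2.1)),
}
pvals = {
    name: two_sided_p(a[0], a[1], 100, b[0], b[1], 100)
    for name, (a, b) in baseline.items()
}
for name, p in pvals.items():
    print(f"{name}: p = {p:.3f}")

# All four p-values land near 1: the arms are *suspiciously* well
# matched. As Carlisle himself notes, that can reflect typos,
# stratified allocation, or correlation as well as fraud -- fraud is
# a second-level conclusion, reached only after using tests like this
# to flag unusual papers for closer inspection.
```

Note what the test actually measures: how surprising the baseline table is under the assumption of random allocation, nothing more. The leap from "surprising" to "fraudulent" is exactly the step the comment argues must be made separately.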
I suppose I'm still left with a bittersweet taste: after seeing that the information ecology around Cadegiani was clearly polluted with ludicrous claims, Scott doesn't inquire further to figure out what is going on, but simply excises the false and/or misleading claims, leaving others of similar provenance, and of course the characterizations and conclusions, in place.
At the same time, I really did not expect Scott to make these edits, and they are generous in certain respects. He could have haggled and dragged some of them into semantic trench warfare, and he did not, for which I am grateful.