Listen Now: TOGETHER Trial Deep Dive (Audio & Transcript)


Showing our work and making it all make sense.

This article is part of a series on the TOGETHER trial. More articles from this series here.

This was a “Do Your Own Research” Twitter Space held on Sunday, April 10, 2022, where I did more or less a braindump of all the things I knew about the trial at the time. It should work as a deep dive for people interested in more detail than I put in my posts, or as a searchable archive.

The TOGETHER Trial list of issues discussed can be found here at ivmmeta.com — the audio will make a lot more sense if you listen to it with that list in front of you.

Charts mentioned:

Transcript

ALEX: So, I don't know if you noticed, but if you've been following my increasingly erratic Tweets, I've been delving into the minds of the creators of the TOGETHER trial and starting to try to understand what is going on with it, I guess. What the hell happened? I think I'm starting to formulate a clean idea of the events, but I think what has not been done is to get a bunch of folks that have dug into the material to talk through something like the long list of issues that is presented at ivmmeta.com, to sort of get a sense of what everybody thinks, right?

ALEX: So I always remember this idea of, like, if only we knew what we knew, you know. Maybe I read something on ivmmeta and I disagree with it, or maybe I have more to say about it, and maybe somebody else has another thing to say, and there's two fragments of ideas that both kind of seem like dead ends to each of us. But if we put them together, maybe there's another idea that starts. So I kinda just wanted to at least say my thoughts out loud and see if anybody else has anything to add to them as I try to build an understanding. I think by this point, I'm getting a fairly solid understanding of the depth and breadth of the trial, at least from January to July of 2021. Not the later arms, you know, the IFN lambda and that stuff they tried afterwards. I haven't dug into that.

ALEX: But, yeah, I just wanted to use this space to just kind of think out loud a little bit, and give an opportunity to folks that might have more material about this to just sort of add to the pile, so we can maybe connect some clues that we might not have connected before.

ALEX: So, basically what I'm going to do is literally just go to ivmmeta.com. At the top, they have this TOGETHER trial analysis. I'll add it as a link.

ALEX: Yep. Okay. So you should be seeing now at the top of the space a link that I will be talking through. So, I've invited a number of people to talk. So, I'll just kind of try to walk through some of these issues and sort of just share my thoughts about the particulars.

ALEX: So, you know, up top on the page, there's this statement by Mills in an email to Steve Kirsch and others, which says there's a clear signal that IVM works in COVID patients, and that it would be significant if more patients were added. It says, if you hear my conversation in some other recorded video, you will hear me retract previous statements where I had been previously negative. And in that video he's referring to, this guy called Frank Harrell said that the question of whether this study was stopped too early, in light of the political ramifications of needing to demonstrate that the efficacy is really impressive, really could be raised. And Mills responds, I totally agree with Frank. Ed Mills, by the way, is kind of, let's call him the mastermind—I don't think he would even disagree with that—behind the TOGETHER trial. He's written a lot of the original papers, he's kind of on every paper, and he's doing all the communication on behalf of it. He's kind of the guy.

ALEX: Before we even jump in, this thing about whether the trial was stopped too early really bugs me, because there was a lack of clarity about how big the trial was supposed to be. They mention 800 patients per arm in some places, or 681 in other places. Having dug in deep enough, I'm convinced that they were intending to make it 681 for all the arms, including fluvoxamine and ivermectin, which were running in parallel. But here's the punchline: for fluvoxamine, they actually went all the way up to 741 patients. There's no explanation why their data monitoring group didn't stop at 681, as they had said they would. But with ivermectin, they stopped exactly on the limit—actually slightly short—at 679 patients. Right. So while I don't object that they stopped the ivermectin trial where they said they would, it is kind of weird that they did not stop the fluvoxamine trial where they said they would. And why does this matter? Because we might say, hey, a few more patients, what's the problem? Well, if you get to constantly look at the data over and over again as it's coming in, you get to pick a stopping point that looks better for you, right? Every new data point might help or not help. This is why, when you're doing A/B testing on your website, they tell you very clearly: do not stop your trial the moment it looks positive for one side or the other. You have to set the size in advance—to pre-commit, basically—so that you don't fall victim to that. Now, there is this thing called interim analyses, which is kind of a compromise. They're saying, look, maybe we don't wait the whole way; maybe if we check halfway through, that's okay, because if it's going completely sideways, there's no reason to continue. And that's fine. However, I even found an analysis that says that the more interim points you add, the more chances of a false positive you get. And I guess the false negative follows from that same analysis.

ALEX: I think they said, in that paper, that with four interim analyses you get about a 10% chance of a false positive, right? It's kind of significant. But they did their interim analysis and they reached the completion point. With fluvoxamine, though, they didn't even stop there; they continued to an arbitrary point in the future. How do we know whether that point was important or not? We don't. But for ivermectin, somehow it got cut off sharply at 681. In fact, just for spite… 679, not even 681. So it's kind of weird when he says now, well, maybe we should have continued it. It's like, well, for fluvoxamine you did continue. This isn't a hypothetical: they did do this, in one of the two twin trials that were running mostly in parallel. So this requires explanation.
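To make the peeking problem concrete, here is a minimal simulation sketch. Everything in it is an assumption for illustration (the per-arm size, the 15% event rate, the four-look schedule, the naive unadjusted z-test); it is not the TOGETHER trial's actual monitoring plan. It just shows how re-testing the same accumulating data at several interim looks inflates the false-positive rate well past the nominal 5%.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n_per_arm = 680                      # hypothetical final size per arm
event_rate = 0.15                    # same rate in both arms: the null is true
looks = [170, 340, 510, 680]         # assumed four-look schedule
n_sims = 20_000

false_positives = 0
for _ in range(n_sims):
    treat = rng.random(n_per_arm) < event_rate   # events under no true effect
    ctrl = rng.random(n_per_arm) < event_rate
    for n in looks:
        pooled = (treat[:n].sum() + ctrl[:n].sum()) / (2 * n)
        se = np.sqrt(2 * pooled * (1 - pooled) / n)
        if se == 0:
            continue
        z = (treat[:n].mean() - ctrl[:n].mean()) / se
        if 2 * norm.sf(abs(z)) < 0.05:           # naive, unadjusted test per look
            false_positives += 1
            break                                # stop at the first "significant" look

print(f"false-positive rate: {false_positives / n_sims:.3f}")
# ~0.12 with four unadjusted looks, vs the nominal 0.05 for a single look
```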

ALEX: Anyway. So, scrolling down, there's kind of a summary here: the paper was updated on April 5 with no indication or explanation. That's true. It goes too deep for me to know if something nefarious happened there, but a basic rational analysis says that they should have declared that they made a correction to the article. Right? You've seen how, when somebody makes a correction, especially on the pro side, they plaster it everywhere; it makes the news. But here it was like, nah, whatever, it was just a typo. They just changed it without even mentioning it. And they also mention that the authors have not responded to data requests. I don't know which requests they're referring to, but I'm sure several people have filed requests, some of them probably quite well-credentialed to get the data. But of course, what Mills said to Kirsch in the same email above is that they will make it available through ICODA—the International COVID-19 Data Alliance, some kind of “globally coordinated health data led research response to tackle the pandemic.” The problem with ICODA is that it's funded by the same people that are funding this trial and employing Mills. Well, they didn't fund this particular arm, but they funded the setup of the trial, to be precise. And some of the pharmaceutical-adjacent companies that have members in the trial are also members of this organization: Certara and Cytel are two partners of ICODA, right? So ICODA is not some completely independent organization. It has Bill Gates’ fingerprints all over it, as do Mills and Cytel and Certara. It's highly conflicted. And yet ICODA is being put up front as some kind of neutral third party that can make decisions. For one, in the fluvoxamine paper, it says clearly that they will give the data to ICODA, but ICODA will ask Mills and Reis—the other co-principal investigator, from Brazil—whether the data should be released. And secondly, it's not an independent party. So this whole ICODA business feels very much like an indirection, because in the original registration they said clearly: we will give anonymized patient data upon request, upon completion of the protocol. The protocol was completed in August, so that's when we should have had this data. Now it's seven months later, and we're talking about whether, at some point in the future—because apparently they're quite busy right now—they might give the data to some party that is not really independent, and then we will be able to apply for access to that data. But of course, that application will go back to Mills and Reis. You know, I think this is a little bit of a joke.

ALEX: Anyway, let's get to this, actually, let me check the space and see if anybody's… what’s going on. Oh, we've got Michelle. Hello, Michelle. Approve. Cool.

ALEX: So, again, I'm scrolling down the ivmmeta TOGETHER analysis, which I've linked at the top of the space. You can click there to see it. So I've kind of gone through the preamble. I don't know how long the space is going to go for, but I'm determined to just get everything out of my head as much as I can.

ALEX: So yeah, the first criticism is that publication was delayed for more than six months, which I've mentioned. I also wrote a piece a few months ago about this delay. There's no real explanation. And even if they said, oh, the journal didn't want to publish it: for fluvoxamine, they published a pre-print on August 23rd. They presented their preliminary results for both ivermectin and fluvoxamine on August 11th. By the 23rd, that's 12 days, not even two weeks, they had a pre-print up. Right. They could've put a pre-print up for ivermectin and then negotiated with the journal as much as they wanted. They didn't do that. So, you know, that's definitely cause for concern.

ALEX: No response to data requests, as we've talked about. Okay. Next, the three different death counts. Now, this is one where I get baffled, and honestly, I don't see much depth in this criticism. There are different death counts, right? This is an error in the papers, the suggestion being that he didn't know what he was doing because he got his sums wrong in different tables. Well, these guys did this in a few places. But beyond the sort of, you know, schadenfreude, that's the word, beyond the schadenfreude, I don't fully understand what the implications of this are. Maybe it'll come in handy later, but I'm not a hundred percent sure what to make of it. It's true that they present the deaths as 21 or 20 under ivermectin and 24, 25, or 26 under the placebo, depending on how you count them, whether it's all-cause or whatever. And it definitely seems like Mills has been inconsistent in how he describes it. There's definitely some messiness going on there. The reason I'm not diving into this too much, and maybe this is worse, actually, is this; a quick summary of how I approach this. If you see the list that ivmmeta has, we're looking at what, 40 issues? Some number like that. In the first phase, I think it was extremely important to start gathering up issues, because they're clues. They're telling you that something's gone wrong, but what went wrong depends on what needles you find in this haystack. Right now, where my mind is at, I've actually got a narrative for what probably went wrong. I've got a story that connects, I don't know, 80% of these together in a way that makes it make sense. Because if you see 30 problems in the trial, 40 problems, whatever the number is… by the way, I've got more problems than what is here; at this point, I've stopped looking because there are so many. I've debated with myself whether I should just be posting a problem per day until they release the stupid data, because there are just so many. But the number of errors can also act in reverse, right? People who are supportive of the trial will say, you guys are just throwing whatever you've got at the wall, this is a Gish gallop, whatever. Okay. But the point here is not to say, look how many. You don't put the claimed errors on a scale and say, we've got 60 pounds of errors, therefore you're retracted. I mean, it could be, especially if a lot of these are validated, sure. But that's not the way I think about it. The way I think about it is: where did these come from? What's their source? Are they 60 independent—40, 30, whatever number—independent errors? That's weird. That's even weirder than having a unified source, right? Having one kind of thing that went wrong and caused all of these ripples. So what I'm looking for is a unifying explanation for all these things. The death counts I haven't been able to slot in, in a particularly meaningful way, to the hypothesis I'm working with. So I haven't dug into this one, but there's definitely something here. It should have been noted when they updated it. And I can't say a lot more than that. It's just unfortunate that they seem to be quite flippant with how they present the data, but are also happy to just make changes willy-nilly.

ALEX: Let's move on to the next one. So, the trial was not blind. This is an interesting accusation because, of course, it's a double-blinded, randomized, controlled trial. I mean, these are the things you've got, right? You've got blinded, randomized, and controlled. If you start eating away at those, you've got a real issue with the trial.

ALEX: So, this one says: ivermectin placebo blinding was done by assigning a letter to each group that was only known to the pharmacist. If a patient received the 3-dose treatment, investigators immediately know that the patient is more likely to be in the treatment group than the control group. Yes. So, here's what this means; I'll just explain it. Let's forget everything else about the various allocations. The baseline allocation for most of the trial was that they had three medicines they were testing: fluvoxamine, metformin (which was stopped at one point), and ivermectin. And then there was a placebo arm, and the algorithm was allocating between these arms. However, these treatments have different durations. Metformin was actually also 10-day, as was fluvoxamine: 10 days, morning and night, so like 20 pills to take, basically. Whereas ivermectin, at least after it got restarted, was a 3-dose treatment. So what do you do in this case? How do you cover this divide? Well, when the trial started, if you look at the original protocol, dated December 17th, what they were going to do is give everybody 20 pills: ten days, morning and night, 20 pills. If you were on ivermectin, the three morning pills (morning one, morning two, morning three) were going to be ivermectin, and you were going to get 17 placebos anyway. If you were on the others, on a treatment arm, you would get 20 real pills. I'm not sure if metformin was in fact 20 pills or there was some filling in with placebo, but you get the point. However, when they restarted on high-dose ivermectin, they changed how they approached this. They said: we're going to create placebo that is 3-day and 10-day. So some of the placebo patients are going to be getting three days of placebo, and some of the placebo patients are going to be getting ten days of placebo. And one of the arms is going to be taking three days of treatment, and another one of the arms is going to be taking 10 days of treatment, twice a day, for fluvoxamine. And, again, whatever it is that they did for metformin; I haven't dug into that one as much. Now here's the problem with that. Let's say you've got 200 patients, and you are allocating them across four arms. We're talking about the phase after March 23rd, when they were allocating to the high-dose ivermectin arm. Let's say you put 50 on high-dose ivermectin, 50 on fluvoxamine, 50 on metformin, and 50 on placebo. And within the placebo, you split them: either half and half between 3-day and 10-day, or maybe 2/3 on 10-day, because metformin and fluvoxamine are both 10-day, and 1/3 on 3-day. Let's say you do it half and half. So then you've got 25 patients within the 50 placebo who are getting 3-day placebo and 25 who are getting 10-day placebo, to match the metformin/fluvoxamine regimens. Now the problem is that the whole 3-dose group, whatever the treatment, whether it was placebo or ivermectin, had one letter. Now, if I see which patient has that letter—let's say, I don't know, B—on their grouping that was supposed to be blind, in theory I don't know what that is. But in fact, I do have information. I do know that there are 25 placebo patients and 50 treatment patients behind that letter, which means that, with a two-thirds chance, this person is an ivermectin treatment patient. Right? It's not completely blind. This was already violating the blinding of the trial, because while I'm not completely sure whether a specific patient I'm looking at is taking ivermectin, I am 66% sure. And if I wanted to sabotage them, I could totally do that.
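To spell out the arithmetic with the illustrative split above, 25 three-day placebo patients and 50 high-dose ivermectin patients sharing one letter:

$$
P(\text{ivermectin} \mid \text{3-day letter}) = \frac{50}{50 + 25} = \frac{2}{3} \approx 66\%
$$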

ALEX: I don't think that happened with the data I saw from the protocol analysis. But, in a way, what I think shouldn't matter, right? The point is: did they blind appropriately or not? And it does seem that this was a feature of the revised protocol of March 23rd. If you go to the page togethertrial.com/protocols, this would be version 2.0, which funnily has a parenthesis next to it that says “first version.” So it's kind of confusing, but it's that one. It talks about these splits in the placebo group. Anyway. So, I do agree that the trial was not blind, or at least that its blinding was compromised. And they say: “note that we only know about this blinding failure, because the journal required the authors to restrict to the 3-day placebo group. Also note that this may apply to all arms of the TOGETHER trial and that it would have been trivial to avoid if desired.” This is a very important point.

ALEX: Some of the delay has been attributed by Mills to the journal. What this means, academically, is that there was a lot of back and forth, a lot of corrections, a lot of insistence, and quite a bit of unhappiness, I suppose, on the part of the journal about what they were looking at. So they forced them to make a number of changes to their manuscript. One of the changes that Mills attributes to the journal, in his email to Kirsch, I believe, or maybe in the talk he gave afterwards, was this restriction to the 3-day placebo, because the per-protocol placebo was not described the same way for fluvoxamine. For fluvoxamine, they just said any placebo patients that followed the protocol, and that includes 3-day placebo. Whereas for ivermectin, they said only 3-day placebo patients. But Mills says the journal made them do this. The thing is, this revealed this feature of the trial. Until then, from the metformin papers and the fluvoxamine paper, we didn't know that this was happening. Now, truth be told, it's in the protocol, right? If you read it in detail, you'll see it. But the problem with this trial is that, if you gather all the materials that I've gone through, and some others have gone through too, to understand what's going on, we're talking about like 20 thick documents, thousands of pages. They could have made this a lot cleaner than they did, is the long story short.

MICHELLE: Alex, when you say trivial to avoid, I dunno, my interpretation was just that they didn't have to structure the placebo that way in the first place. Like if they had just used the original protocol.

ALEX: Correct.

MICHELLE: Very unclear why they didn't do that.

ALEX: One interesting thing about this: on the Gates Open Science, I believe it's the Gates Open Research portal, there's this open peer review, which Mills responded to. So that was kind of an interesting document to look into. One of the reviewers, who was very critical of their committee not being independent, actually praised them for doing this. But praised them in an interesting way. He said, this is a very innovative approach to the problem. Actually, I'm not sure if it was the same reviewer, maybe it was the other one, but this actually says something: that this is special. This is not a normal feature of these kinds of trials. Even among adaptive trials, this is not normal. Maybe there is some talk in the background, like we could find research that would present this as an improvement on the placebo procedure. However, that doesn't mean the reviewer fully understood what was happening, or that it was a good idea. All we know is that this is a feature of the trial that is novel, and, naively, it sounds to us like it affected the blinding. If not, it should be explained somehow, somewhere, why that was not the case.

MICHELLE: Well, and you also lose… like, it doesn't make sense, because the whole point is to share placebo patients across all the arms. And if you start giving them different doses, now you can't share them. So you've reduced… like, nothing about it makes sense to me.

ALEX: They do share them. They do share the patients; it's only in the per-protocol analysis that you don't. And only in the ivermectin paper was this done, I think at the insistence of the journal. So their plan was not to do it this way.

MICHELLE: Yeah. But, I dunno, it's not clear why they wouldn't just give everybody the 10 pills and then divide it out, like, oh, well, only one of those is an active pill. Then it's more even for everybody and nobody loses. There's nothing (inaudible). So.

ALEX: No, and this is actually one of the reasons I'm curious to see the early data. One of the things that didn't happen—I guess we'll get to it later—is that they didn't release… and I realize that you need kind of a visual map of the trial to understand what I'm talking about. They had the low-dose ivermectin arm in the beginning, from, I believe, January 20th to somewhere around March 4th. They don't tell us exactly when they stopped it. They had that arm going, and that was with the full 20-dose placebo; even the ivermectin arm was 20 doses, et cetera, even though only three were active. That data is not released. They have that data somewhere. They have not released it, and they have not even told us—which is another interesting feature here—why they did this. They haven't told us, basically, what the decision from the data and safety monitoring committee was about resetting the dose. When was it taken? Because they seem to have had a protocol ready to go, to revise the trial, like three weeks in, with only 19 patients recruited. Did the committee come together at, like, five patients and say, we don't like this? If they could do it at five, wouldn't they do it at the beginning? It's really strange that that reset happened. They stopped that arm, and they haven't given us that data, which I think would elucidate a ton.

ALEX: Anyway, so the next one is: patient counts did not match the previously released enrollment graph. So, on my Substack, doyourownresearch.substack.com, I've got my first piece on TOGETHER, which was kind of just getting some ideas out there, and I've shown how we reverse-engineered the released graph. They released this—I dunno what to call it—stacked chart of enrollments, which was really beautiful, but they didn't release the data with it. It turns out it's quite easy to reverse engineer. I mean, easy in the sense that, while it takes some work, it's not very complicated: you just apply a grid and you can count. And what we saw there is that the numbers they claim they had only match if they took placebo patients from the interim between the low-dose and high-dose ivermectin arms. So, basically, what happened is they had the low-dose ivermectin trial, which went on for, I believe, something like six weeks. Then they paused for two weeks. And then they started again on the high-dose, right? In the meantime, the placebo arm was ongoing, the fluvoxamine arm was ongoing, and the metformin arm was ongoing. Only on the ivermectin side did you have this gap. And everything I've seen of the numbers, everything I add together, says the same story: the patients from those two weeks of placebo after the low-dose IVM arm was stopped were used as placebo for the high-dose ivermectin trial. So they were offset by two weeks, actually a little bit more than two weeks, into this trial. Now, why is this a problem? They're just placebo patients, you know, surely… It gets messier, but we'll talk about it later, when you start to realize that the Gamma variant was surging: those weeks right before and after the high-dose arm started were literally the worst days of the pandemic for Brazil. They were not just any days; they were the absolute worst. They had a number of daily deaths that was like twice any other peak they had. I'm getting images of, like, Italy early in the pandemic, just absolute chaos. So it's conspicuous that that weird dislocation of patients is happening exactly as that wave is first building and then cresting on Brazil, and specifically in Minas Gerais, that state, and for anybody who is from Brazil, I apologize, please don't kill me for butchering that pronunciation. I'm doing my best.
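For anyone curious, here is a toy sketch of the grid-and-count digitization just described. Every pixel coordinate and tick value in it is made up; the point is only the mechanics of calibrating the chart's axis from two known ticks and converting measured band heights into patient counts.

```python
def pixels_to_count(px: float, px_zero: float, px_ref: float, count_ref: int) -> int:
    """Linearly map a measured y-pixel to a patient count using two axis ticks."""
    return round((px - px_zero) / (px_ref - px_zero) * count_ref)

# Hypothetical calibration: y-pixel of the "0" tick and of the "100 patients" tick
PX_ZERO, PX_100 = 400.0, 150.0

# Hypothetical measured heights (pixels) of one arm's band, week by week
band_px = [392.0, 361.0, 330.0, 305.0]

counts = [pixels_to_count(px, PX_ZERO, PX_100, 100) for px in band_px]
print(counts)   # [3, 16, 28, 38] -- estimated cumulative enrollment per week
```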

ALEX: Okay, so: funding conflict. I want to say a little bit about funding. What they're saying here is very narrow. They're saying that the original protocol said the trial was funded by the Bill and Melinda Gates Foundation, whereas the later protocols do not mention this. What I understand happened is that the Gates Foundation funded the framework, the setup of the trial. If it had an office, had materials to buy, computers, whatever they needed to make submissions, domain names, whatever it is that you need, that sort of infrastructure was funded by the Gates Foundation. But the Gates Foundation did not fund this particular trial. Now, they put that in there—maybe they thought the Gates Foundation would fund it and it backed out, whatever it is—but in later protocols and every paper since then, it's very consistent that the funding has come from the Rainwater Foundation and Fast Grants. And from all I know of the people that funded those trials, they're good people. I know it sounds weird; at least for Fast Grants I have some direct knowledge, and very vague knowledge of the Rainwater people. They seem like they're coming from a good place. But, again, later we'll get into the conflict-of-interest stuff and I'll mention why I don't make a big fuss about that. Here, this funding conflict element they're mentioning is just this: they initially said it was funded by the Gates Foundation, then that disappeared, and it has not been explained. And like a lot of these things, some of these are understandable. You could sort of imagine what happened, right? They kind of had some early conversations and said, yeah, yeah, sure, we'll fund it. And then they went back and were like, no, I don't think we can do it, or whatever. And you see these artifacts show up. But a little bit of explanation would go a long way. This is something they could have just said. Now we're stuck asking, are they hiding something? Because again, we're being forced to look through 20 different documents. There isn't one write-up that just makes everything plain. That's the funding conflict.

ALEX: Now, the DSMC not being independent: that's where things start getting really hairy. I've written about this. I think I was maybe the person who first picked this up? I'm not a hundred percent sure, but I definitely was among the folks that raised it, and I did more work on this one, so I have a lot to say. The issue is that this guy, Kristian Thorlund, is the chairman of the Data and Safety Monitoring Committee. Now, on its face, that's fine. He's a professor, whatever, you have a committee. Keep in mind that the chairman both controls how the meetings flow and has a special tiebreaker vote. Usually—and I know this from board meetings—that's what makes a chairman's position special; I don't know if this particular committee has that particular structure. Now, the problem is that Thorlund is not just Mills' friend, or Mills' acquaintance, or Mills' coworker at both Cytel and McMaster University. All these things are true, probably, but they're just the beginning of the issues. The problem is that Thorlund and Mills co-founded a company called MTEK Sciences, and MTEK was where the TOGETHER trial's email was going. If you go to togethertrial.com in the Internet Archive and go back to the first version, the address they give for inquiries is info at mteksciences.com, I believe. And if you look at the FAQ, it says that the co-principal investigators are Mills and Thorlund. So Thorlund is not some independent third-party person who is just wading in, or who they brought in to provide some independent guidance. He is deeply, deeply involved with the design of this trial. And in fact, if you go further back in their shared literature, Mills and Thorlund have written over 100 papers together, 101, I believe, to be precise, just an absurd number of papers on which they're cited together. They have a very deep and long collaboration. Some of those papers are about these kinds of adaptive trials; from the MTEK era, one of their papers is about High Efficiency Clinical Trials—HECT. And they also named another person, Jonas Häggström, I believe, who worked at MTEK. And he's also on this supposedly independent, supposedly unconnected, Data and Safety Monitoring Committee. So you've now got two people on this committee of, I believe, five, who are deeply connected to Mills and to Cytel and to MTEK. And Häggström, before he was working at MTEK, was working at, you guessed it, the Bill and Melinda Gates Foundation. This isn't what you do if you want a committee that is controlling, by the way, when things begin and end: when a trial has completed, when there's futility, all these decisions that are attributed to this quote-unquote independent Data and Safety Monitoring Committee. This committee has one person who is as deep into it as Mills, and one person who worked for them at MTEK, followed them to Cytel, and whose career is very, very closely linked to theirs. This is not what independence looks like.

MICHELLE: So the other thing that rubs me the wrong way on that (and I don't know if it's, like, standard) is that they are not included on the publication. So they're not listed in the conflicts of interest. So you don't know about them unless you dig them up. Would you agree with that?

ALEX: That it should have been done?

MICHELLE: Well, that's it: all the authors on the paper have to list their conflicts of interest. So you can at least see, like, oh, you're funded separately by the Bill and Melinda Gates Foundation. Those guys on the DSMC are not listed on there. So you don't know what their conflicts of interest are.

ALEX: Well, what I would say is that they should be beyond reproach so that if you look at them, there should be nothing to find. So that's fine, that they're not listed.

MICHELLE: I guess.

ALEX: The problem is that there are all these conflicts. If you have a DSMC and you'd have to list a long list of conflicts of interest for it, like, you know, just forget about it! Just say: look, we got together with our buddies, decided whatever we wanted to do, and we did it. And that's what we did. Okay. Just be honest. At least say that. By the way, just to mention something that people might not understand, which I know from the startup world: MTEK was acquired by Cytel in 2019, a very, very short amount of time before this trial. It's actually quite common for startups, when you're acquired, to have performance targets on the technology or the products that you sell to your acquirer. So I don't know this for a fact, but I would definitely not rule out (inaudible) could have not just, you know, Cytel stock that would go up if the trial goes well, or whatever. You could have an explicit target in that contract that this trial should succeed. And again, I don't know this, but I can hypothesize it, because it's very normal, and I shouldn't have to ask this question is what I'm saying. Same for Mills: okay, fine, he's just doing his job, and it's fine for Mills to want to succeed. But the DSMC should ideally be neutral and, you know, uninterested in the result of the trial. They call it a Data and Safety Monitoring Committee: what they should want is that the data and the safety of the patients are being handled to the highest standard, and that's it. Whether it comes back positive or negative, they should not care. And neither Thorlund nor Häggström, I believe, can say that. And of course, there are other people in there who have written many papers with Mills; Sonal Singh has written 26 papers with him. I just don't get it. On some level I'm like, this is just too much. Just find some random people. They could still agree with you, but they don't have to have many years of shared academic career with you. It just baffles me that it's this blatant. Anyway, so that's the DSMC-not-independent piece. And this is where things start to gel, right? Because you're like, okay, so the people who were making the decisions about what to start and what to stop are not independent, and the starting and stopping appears to be kind of odd. We can't rely on the DSMC to have done this correctly; we can't trust them. What's worse is that this was noted in the open peer review at gatesopenresearch dot whatever, and Mills responded and said, you know what, that's a good point, I'm going to take Thorlund's vote away in that committee. First of all, this was in August, so everything we're talking about now had already happened. And secondly, the reviewer comes back and says, well, you're not removing him from the committee, which is what I asked you to do. He's still chairman, he still runs the agenda, he still possibly has a tiebreaker vote, and he's definitely in the room. So he could, you know, intimidate others, whatever; you don't know what those conversations are like. Again, we don't have minutes, we don't have anything. So that reviewer, of the two, actually withheld his full approval. If you go there, this trial has one approval from one reviewer and another that is, I don't know what they call it, “approved with reservations” or something like that. It's not a green tick, it's a green question mark. So for this topic specifically, this reviewer was like, yeah, sorry, man, this isn't okay, and you shouldn't do this. And still the question is: why didn't Mills just concede the point? As the reviewer said, if they wanted to consult a statistician like Thorlund, they could call him into the meeting when they wanted and then ask him to leave; they need to be able to talk alone. And that was not possible with this arrangement of the committee. And then we get into the unequal randomization. By the way, if anybody wants to talk, feel free to ask for the mic and bring your wisdom to the group. Part of this is just to make a document that people who are this interested can listen to, and part of this is to get my thoughts out as well.

ALEX: Anyway: unequal randomization, significant confounding (inaudible). The trial reports 1:1:1:1 randomization; however, independent analysis shows much higher enrollment in the ivermectin treatment arm towards the start of the trial. So this is another version of the problem we talked about before. By my research, they had 75 placebo patients from earlier. Now, what happens if you're an algorithm that's allocating block-randomized patients, stratified by age and site, to these different arms? Here's what I hypothesize happened. I might not know what I'm talking about when it comes to biology, but this is computer science, so this is actually my area. What it feels like happened is that the ivermectin arm was suspended for two weeks, plus maybe a bit longer, which means the algorithm was making three-arm blocks. It was making blocks of patients to be allocated to metformin, fluvoxamine, and placebo, again stratified by age and site, so each block had to have enough patients from the two age cohorts, above 50 and below 50. But the IVM arm was suspended, so the blocks it made were shorter. They were smaller. Now, the IVM arm appears again on the horizon on March 23rd. So what I think happened, and you can see the graph on the website, but it doesn't tell the story, is that the algorithm realized that the arm it thought had gone away had not gone away, and that it therefore had a bunch of blocks that were under-allocated. So it started taking every patient that came in and backfilling the blocks it had created in the previous two weeks with an additional patient. And that patient was always, or almost always, like 75% of the time by the looks of it, an IVM treatment patient. Now, this has two problems. Remember what we said about the 3-dose thing before? If you see a patient with 3 doses, whatever; here it's even more blatant, because in this particular week, 75% of the patients coming in are IVM treatment. You know by the date, and you know by the letter. And also, that particular week was terrible for Brazil. It's one of the worst weeks, probably the worst week in absolute terms, of the whole pandemic, especially in Minas Gerais. Coincidence? I don't know; it looks awfully odd. But regardless of intent, the point is that this matters a lot, because you get these patients who have a super high case fatality rate and you disproportionately allocate them to the ivermectin arm, while the placebo arm was allocated before. This is a complete and total violation of the structure of a clinical trial. And it all comes back to the decisions of this committee. They decide when to start things and they decide when to stop things. If they were independent, you could say, look, the designers of the trial did not know anything about this. But since they are most certainly not independent, it all now congeals, and there are real questions you have to ask. Did they know about this? Was this coordinated in some way? They were on the ground in Brazil; they were running a clinical trial; they were getting data. They basically had a real-time dashboard, not of Brazil generally, but specifically of their own patients, showing how the case fatality rate was evolving exactly in the places where they were operating. The opportunity was there. The motive we can debate, but they had way too many knobs, basically. What's showing up is that they had way too many knobs in their hands, and the results look like those knobs were tweaked, intentionally or not, in a very specific way.
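Here is a toy model of the backfilling hypothesis, to make the mechanism visible. To be clear, this is not the trial's actual randomization code, and every number in it (the 75 reserved slots, the 75% backfill rate, the 1:1:1:1 fallback) is an assumption; it only shows how filling slots reserved during a suspension would skew post-restart allocations toward one arm.

```python
import random
from collections import deque

random.seed(1)
ARMS = ["ivermectin", "fluvoxamine", "metformin", "placebo"]

# Phase 1 (hypothetical): while the IVM arm is suspended, each block still
# reserves an ivermectin slot that never gets filled.
reserved_ivm_slots = deque("ivermectin" for _ in range(75))

# Phase 2 (hypothetical): the arm resumes, and the allocator backfills the
# reserved slots most of the time before falling back to 1:1:1:1.
allocations = []
for _ in range(100):                 # first 100 patients after the restart
    if reserved_ivm_slots and random.random() < 0.75:
        allocations.append(reserved_ivm_slots.popleft())
    else:
        allocations.append(random.choice(ARMS))

share = allocations.count("ivermectin") / len(allocations)
print(f"ivermectin share right after restart: {share:.0%}")
# ~80%, far above the 25% a true 1:1:1:1 allocation would give
```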

ALEX: By the way, there's another implication from this. I resisted this implication for quite a while, but now I'm starting to get more and more convinced of it: not only did this perturbation make the ivermectin paper look worse, it made the fluvoxamine paper look better. And just as a short aside, the fluvoxamine paper is fucking weird. They said that fluvoxamine showed like a 30% improvement in, was it mortality or hospitalization? I'm not sure, but the headline number was like a 31% improvement if you take all the patients. But then they have this secondary result: if you take only the per-protocol numbers, it was 91% effective. 91! Like, 30% or 91%? These numbers are so different as to be in different universes. And many people were describing these results as, wow, fluvoxamine is even better than we thought. And I have to say, I took that conclusion as well, but then I was like: 91%? That basically made the coronavirus go away, no matter what you had, who you were. We're talking about high-BMI patients with comorbidities here. We're talking about people, some of them with, you know, recent cancer; we're talking about kidney problems, and several days into the disease. And taking fluvoxamine, apparently, 91% of them did not hit the (inaudible). It's just too much. People, myself included, should have stood up and paid attention at that point, when we learned that number in September, I think, because it's just outrageous. And especially the difference between 30 and 90 is absurd. Now, what does per-protocol mean, in general, apart from what it means in this trial, which is another story? It means that you don't just look at all the patients; you look at the patients who you made sure took all the medication. And remember, fluvoxamine was given as 20 doses over 10 days, so it's quite a complicated regimen. But at the same time, when they said per-protocol in that paper, they meant 80% adherence: you had to have taken at least 16 of the 20 doses. So it wasn't full adherence, just mostly adherence. And so you can see why they would have some drop-off, and it did have some drop-off; some patients, on both the placebo and the treatment arms, did not go the whole way. That's fine. But that still doesn't explain going from 30 to 91. That's just a whole other level.

ALEX: Anyway. Yeah. I don't actually know what happened with the fluvoxamine paper. As I mentioned, it looks to me like benefit was moved, because of the stratification and randomization issue here with the offsetting of the placebo patients, from ivermectin towards the placebo and fluvoxamine arms. And I know that sounds like a huge accusation, but, I mean, math is math. I don't know what to do; I'm just looking at it and saying what I see. I really did not expect that this would be showing up.

ALEX: Next one, another mystery: missing time-from-onset patients show statistically significant efficacy. So this was, I think, one of Michelle's big findings, or at least one of them. Do you wanna talk this through?

MICHELLE: Me? Yeah, sorry, I'm just approving here. Mathew just jumped on too. Let's see here…

ALEX: It says missing time-from-onset patients show statistically significant efficacy, you know, there's this ghost group of 317.

MICHELLE: Yeah. So I don't have the numbers at my fingertips, but they basically had the full number of placebo patients. And then they broke out the per-protocol patients, which I guess are people who had only the three placebo doses and also a hundred percent adherence, whatever that means. I guess they polled them to make sure they actually followed through with their dosing.

ALEX: And this is for the time-from-onset analysis, where they split at three days, right? I don't think this applies to the per-protocol thing. Does it?

MICHELLE: Yeah. That's the whole reason they're missing data.

ALEX: Really? OK. So go ahead.

MICHELLE: Yeah. So they took the per-protocol patients. I should just pull up the paper.

ALEX: This might be… this is interesting, actually. We may have a different understanding of this, and this is exactly the way to have this conversation. Go ahead.

MICHELLE: Um, so, for whatever reason, they are missing a lot of placebo patients in the subgroup analysis for time from symptom onset, where they break it out: basically, you want to know, did it make a difference if you were in the zero-to-three-day group or the four-to-seven-day group? For all the other categories, they have a lot more patients included; they have a few missing, but not as many. In this one, they're missing like a third or more of the patients. And, oh yeah, you're right, I'm mixing up the per-protocol thing. So the thing that was interesting, though, is that the overall effect of ivermectin was like 0.9 relative risk, which is like 10% effectiveness, I don't know if this is the wrong way to say it, but it's not statistically significant; the confidence interval is too wide, right, based on their frequentist statistics. So you see an effect, but it's not within the statistical range. For all the other subgroups, you would expect that if you had a 0.9 average, then, you know, if you're looking at age, older people might be affected more, younger people might be affected less, but on average we're still getting that 0.9 effect. For time to onset, the average was really high; it was over 1. So whatever patients were missing there, who weren't included in that data, had an overwhelmingly positive effect, and somebody calculated it out. I don't know who did that, but it was on ivmmeta…

ALEX: Yeah, I’ve got the numbers here if you want.

MICHELLE: It's 0.5, right?

ALEX: Yeah. It says relative risk 0.51, p = 0.002.

MICHELLE: Yeah. So if you back-calculate the missing patients, it's extremely positive for ivermectin. So it's just this question of why those patients are missing and why they're so positive. Is it just random chance, or what? And it's still an unsolved mystery in my mind.
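The back-calculation Michelle describes is simple subtraction: if you know events and patient counts per arm overall, and in the reported subgroups, the unreported remainder falls out. The counts below are made up purely to show the mechanics; they are not the trial's numbers (ivmmeta reports the recovered group at RR 0.51, p = 0.002).

```python
def relative_risk(events_t: int, n_t: int, events_c: int, n_c: int) -> float:
    return (events_t / n_t) / (events_c / n_c)

# Hypothetical overall totals per arm: (events, patients)
overall_t, overall_c = (100, 679), (111, 679)
# Hypothetical totals across the *reported* time-from-onset subgroups
reported_t, reported_c = (80, 450), (70, 440)

# Whatever is left over is the missing "ghost" group
missing_t = (overall_t[0] - reported_t[0], overall_t[1] - reported_t[1])
missing_c = (overall_c[0] - reported_c[0], overall_c[1] - reported_c[1])

print("missing treatment group (events, n):", missing_t)   # (20, 229)
print("missing control group (events, n):  ", missing_c)   # (41, 239)
print(f"RR in the missing group: {relative_risk(*missing_t, *missing_c):.2f}")
# 0.51 with these made-up counts
```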

ALEX: Yeah, no, we don't really know. I think when we find whatever it is that happened, this will also slot in there somehow. But again, like the death counts, the sums that are plus or minus, this one is also a mystery that is related to the whole chaos, but I can't connect it directly.

ALEX: Hey Mathew.

MATHEW: Hey, Alex.

ALEX: How's it going? You're following along with the gradual, you know, loss of sanity that I'm undergoing.

MATHEW: Yeah, a little bit late. And I have not done the deep dive that you have on this paper, but I was listening just now, and I thought, you know, we need some kind of terminology that describes a minimum standard of evidence. Meaning, like, we're talking about patients missing, right? And of course, this was true in the Pfizer vaccine trial: the first Pfizer vaccine trial report comes out and there are all these exclusions, and that has frustrated the hell out of me ever since that report came out. And it's not the only trial I've seen that in during the pandemic.

ALEX: Right.

MATHEW: But in particular, it's always the trials that I feel, you know, the worst about. Like, I wouldn't even want to talk about one unless I did a deep dive, right? That's the way you feel every time you see that. There should be a name for this. There should be a minimum standard, a set of things that is included in any trial before you even consider it complete, before you consider it a minimum standard of evidence. And this just should be one of those things that has to be explained, at least to a minimal degree, before you call it a minimum standard of evidence.

MICHELLE: It's crazy, because I think most people, maybe the mainstream researchers, assume that if something's published in the Lancet, NEJM, or JAMA, it has passed those hurdles. So they just take the abstract at face value. But that's obviously not the case.

MATHEW: Yeah. Type equals equals while running clinical trials.

ALEX: Yeah. No, the interesting thing here, Mathew, and I don't know if you've come across these adaptive trials, is that I'm just deeply conflicted, right? Because I actually quite love the design and the statistics they're using. It's cleaner. The Bayesian stats and stuff are beautiful. That's awesome. But like everything new, it doesn't have the safeguards you would need for it to be trustworthy. And not only that, it presents a bunch of knobs to manipulate things. This thing I mentioned before, moving benefit from ivermectin to fluvoxamine: in a standard randomized trial, you don't have that opportunity. You don't have multiple arms to play with.

MICHELLE: If they had followed the protocol, I don't think they would have had these problems. 'Cause even in a regular RCT, if you front-loaded placebo and then back-loaded your patients, that would mess up the randomization. You have to…

ALEX: Well, those are the consecutive trials, right? That's what Merrick did. Not in a bad way: he actually called it out. He was like, this is what I'm doing, I'm going to take the first six months of patients and put them here, then the second six months I'm going to put them there. That's a different kind of trial, and you analyze it differently and expect different statistical patterns, which, as Mathew has demonstrated, is why people got confused.

MICHELLE: If these aren't purposeful mistakes and they're just things that happened because it's, you know, chaotic, they have to put them in the discussion and say: hey, there is this offset, and hey, we do know that there's this variant, and we're going to do our best to at least call it out. And they didn't do any of that. They didn't point out any of their mistakes. It's all hidden.

ALEX: Yep. Next! Side-effect profile consistent with many treatment patients not receiving authentic ivermectin and/or control patients receiving ivermectin. So this argument I'm not too hot about. I get it, but also, I don't know. And this kind of goes to the whole exclusion/inclusion vortex, which is another set of facts there that is murky. What they're saying is basically: if you look at diarrhea, for instance, it's actually lower in the treatment arm, and it's a well-known side effect of ivermectin. So they're like, well, how could this be? And there's a similar pattern in the Lopez-Medina paper, where blurred vision, one of the ivermectin tells, was again quite high in the placebo group. And they're like, what, everybody in Colombia is seeing blurry now? What's the deal? So I get it, and it's kind of an indication, but on its own this wouldn't prove anything. You could say, well, these people have been taking ivermectin forever, so they're adapted to it; there are all sorts of explanations you can come up with. It has some value to me as something that adds up with all the other ones, but on its own, if this were the only thing you had, you wouldn't say, that's it, we caught them. It's a hint, maybe, that something's going sideways, but you don't really know what to make of it without more information. As is this thing: a local Brazilian investigator reports that at the time of the trial, there was only one likely placebo manufacturer, and they reportedly did not receive a request to produce identical placebo tablets. They also report that compounded ivermectin in Brazil is considered unreliable. I think this might be from Flavio Cadegiani; I'm not sure where it's coming from. Again, interesting, if we had that written up properly, with the email from the manufacturer, et cetera. That's the kind of stuff that was used to take down Carvallo, so it has to count as an important thing. If a local placebo manufacturer was not contacted, and there's only one, and they say, no, we have no idea how they made the placebos, then, okay, maybe they imported them. But that now has to be explained.

ALEX: Incorrect inclusion. This is one of my favorite ones, because I think ivmmeta is correct at bringing it up, but incorrect in how they describe it. Or at least partially incorrect in how to describe it. So they say, you know, the conclusion states that ivermectin did not result in a lower incidence of hospitalization or ER observation, over six hours, this is incorrect. Hospitalization was 17% lower. It's just not statistically significant. Now, first of all, the paper does not mention statistical significance anywhere. I challenge you to open it up and you do a search and say, look for statistically significant. These words does not occur. The words P value do not occur anywhere in the paper. It is full Bayesian statistics. Right. So we don’t actually know, if it was statistically significant strictly speaking, we don't actually know if it was statistically significant. We haven't seen those numbers. What we've seen is Bayesian numbers and the Bayesian numbers tell us that it did not reach the 95% confidence interval, but the interpretation of that, and—and I'm sure Mathew can tell us a lot more about this—is not the same as the frequentist intervals. And that's why I shared this paper before, this article before about, credible intervals, which is kind of the Bayesian equivalent. The reason I have that article is because with Bayesian stats, which is, I dunno, a lot, a lot of people are saying are better. Uh, but, uh, whatever. So let me put it the other way with frequentist statistics, right? If you have an interval, you can make no statements basically about where the value is within that interval. Right? So, so that's why they say if it crosses the one line, if there's a chance it could be negative, it could be actually harmful. That is, then you really have to stay away because you can make no statements, even if it's like a little bit, right. You can make no statements about where the actual weight of the evidence is. It could actually be on the negative side and you really should not try to parse that interval. With Bayesian stats, that's not the case. You see a bell curve, and that bell curve tells you where the probability is. So the closer you get to the middle, or wherever the top of the, of the bell is, the more likely you are to get you, can't say that with Bayesian statistics, right? So when they say, basically this is the thing that's just drives me insane. If you go to the supplemental appendix and look at figure, I believe it's S2, like I'm talking about like, it's a reduced—it’s sent to the back of the library, right? You will see this now these patient stats, right. And they do say that if we take all the patients that we had intention to treat, ivermectin comes out 81.4% probability of superior. If we take modified intention to treat, which means that the patient triggered an event before 24 hours of being randomized. I don't know if it comes ahead, 79.4%, probability of superiority, right? And then if we take the per-protocol numbers, which is what Michelle was talking about before, only the 3-day patients, basically, ivermectin comes out ahead like 64% of the time. Let's leave the per-protocol thing out because it's a mess and we don't really understand how that works. But the other ones are like, you know, roughly 80% chance that ivermectin is superior. Now this is not, what did they say here? How did they write this conclusion? Did not result in lower incidents, right? 
Not only do you not have the confidence to make that black-and-white statement, you have confidence to say something positive. Like, if I tell you: here's a treatment, it might help you, but only four out of five times—an 80% chance it helps—would you say that, well, since I don't know about the fifth time, there's no indication it helps? That is the statement they went and made to the press. They said: no indication of clinical usefulness. Not even a hint. And the numbers are showing something different, right? And again, because it's Bayesian stats, we don't have to be limited by the classical frameworks that put extremely tight limits on how you're allowed to interpret things. Bayesian intervals are far more intuitive than what you would normally think these intervals mean. A lot of people mistakenly think that confidence intervals mean what the Bayesian intervals actually mean, because that's the intuitive interpretation.
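(To make the distinction concrete, here's a minimal sketch of what a "probability of superiority" statement means, assuming a simple Beta-Binomial model with made-up event counts—the trial's actual Bayesian model was more elaborate, so this is illustrative only:)

```python
# A minimal sketch of "probability of superiority" under a simple
# Beta-Binomial model. Event counts are HYPOTHETICAL placeholders, not
# the TOGETHER trial's data, and the trial used a more elaborate model;
# this only illustrates what a statement like "81.4% probability of
# superiority" means, and how it differs from an interval crossing 1.
import numpy as np

rng = np.random.default_rng(0)

events_trt, n_trt = 100, 679   # hypothetical treatment arm
events_plc, n_plc = 111, 679   # hypothetical placebo arm

# Posterior for each arm's event risk with a flat Beta(1, 1) prior
risk_trt = rng.beta(1 + events_trt, 1 + n_trt - events_trt, 1_000_000)
risk_plc = rng.beta(1 + events_plc, 1 + n_plc - events_plc, 1_000_000)

rr = risk_trt / risk_plc  # posterior draws of the relative risk

# "Probability of superiority" = posterior mass below RR = 1
print("P(treatment superior):", (rr < 1).mean())

# The 95% credible interval can still include 1 even when most of the
# posterior mass sits on the "superior" side -- exactly the distinction
# being drawn here.
print("95% credible interval:", np.quantile(rr, [0.025, 0.975]))
```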

ALEX: Anyway. Yeah. So, ivmmeta's point here is definitely correct. Yeah. Go ahead. Go ahead, Matt.

MATHEW: Yeah, if I could jump in here. So I'm kind of an applied statistician; we did lots of Bayesian analysis when I was on Wall Street. If we were modeling something and we were curious about whether a variable had an effect on a system, in my experience almost all of these probabilities are pushed away from the extremes in practice when you're doing Bayesian analysis. In other words, if you get a probability of 12%, the truth is probably closer to 0%; if you get a probability of 80%, it's probably closer to 100%, once you begin running your machine in practice. And it's one of those things where there's no mathematics that justifies anything like that, and really and truly, this isn't even the way statistics is supposed to be used, right? Biomedical applications are tenuous at best, because the statistics are not as designed to give you a real number as they're made out to be. I mean, what does 80% mean anyway? Are we in, like, a quantum state or something? So really and truly, it's more up to judgment than anything. But that's my experience with applied statistics.

ALEX: Yeah, yeah. And of course, it's 80% if you're facing the Gamma variant, if you're so many days after symptoms, yada yada yada, right? How you take that and make a statement about the broader world is another issue—probably part of what you're pointing at. So I don't know if you've seen this, Matt, but I've now put it at the top of the space, this figure. If you haven't seen it before, you'll freak out, because this is in the appendix and they literally spell it out that it is, you know, 80-percent-ish probability of superiority—but in the conclusion of the main paper and in the press, they say the exact opposite. When I saw this, I was just baffled. And by the way, this figure is significant for another reason. They do a lot of Bayesian stats in the TOGETHER trial in general, and they're very proud of them. These same diagrams are featured in the metformin paper, the hydroxychloroquine paper, the fluvoxamine paper—always in the main body of the paper. With ivermectin, they pushed it back to the appendix. And again, no explanation why. You start to get the picture of the direction of the decisions, but it's just worth noting.

ALEX: Okay. Next: ivermectin use widespread in the community. So (skimming)… this is weird. This is super weird, and I've dug super deep into this, so I want to get my state of mind out to the world on it. The original presentation on August 11 did not mention anything about exclusion for ivermectin use. And this was highlighted: Steve Kirsch, who's on good terms with Mills, also wrote somewhere—I can't find the article now for the life of me, but I read it at the time—that yes, this was a limitation of the trial, that it did not exclude for ivermectin use. And even Mills has been quoted as saying, yeah, sure, but use in the community wasn't that high anyway, so it washes out. First of all, if it wasn't that high, then why didn't you put in an exclusion criterion? That has to be clarified somehow. Why the hell didn't you just rule it out? It's the obvious thing to do, right? You want to know that your control group is clean. But then when they finally publish the paper, you look at the exclusion criteria: it does not say ivermectin use. You look at the protocol: it does not say ivermectin use. You look at all the normal places you would look, and there's no hint that they excluded for use of ivermectin. But in the discussion section, they have this weird paragraph—a couple of sentences—saying, of course we extensively screened our patients for ivermectin use for COVID. These words were, trust me, workshopped over and over again to find the right phrasing. And of course we excluded them, et cetera, et cetera. Now I've got a number of questions here. First of all, why is this in the discussion and not in the exclusion criteria of the paper? That's where you put these things; this is not complicated. Secondly, what is this "for COVID" thing? If someone used ivermectin because it was, like, a traditional cure for parasites, did they get to go in the control group? In general it's just baffling, and the weird part is that the authors kind of come out swinging, like, "and I'm sure the next thing you're going to say is that we didn't dose them enough, huh?" No, dude, there's a way we do these things: you're supposed to put your exclusion criteria in the exclusion criteria. And how did you know this? Because the forms don't include a checkbox for "used ivermectin." What they do have is a place to note down concomitant medications. So we have to accept that the 12 separate sites running this trial filled in these long forms with the patients, asked them about every medication they took, and filled in everything. And then, in the particular medication box where maybe some patients said, yeah, I'd taken some ivermectin, there's a field for the reason why you used it, and in there they filled in COVID. Right?

ALEX: So let me summarize this for you in a way that'll make sense. These people were not able to get the time from symptom onset for 23% of their patients. They couldn't get an answer—or at least they don't seem to know the answer—to "how long since you've had symptoms?" They're missing ages of patients. And you're telling me that they filled in all of this, went through all the concomitant medications, found whoever was using ivermectin and the reason why, and then excluded those? My sense is—and this is why the "for COVID" thing is there—that even if someone had written down ivermectin, they just didn't fill in the reason, and therefore the authors can claim that, well, it was for parasites, and it was probably a low dose, probably twice a year or whatever, so it doesn't matter. But that does not amount to extensive screening. Somebody suggested that maybe they went back and called everybody after this was raised as an issue. But the number of patients cited in the paper is exactly the same as the number cited on August 11. So if they did that, they didn't exclude anybody. And the other thing I find really weird is that when Mills talks about this, it's, well, you know, use in the community wasn't that high anyway. Here's the problem with that. First of all, I have a publication from the area, from Brazil, from exactly the time the trial was ongoing, saying that ivermectin use shot up nine times. Nine X. And secondly, that publication is not pro-ivermectin—it's actually freaking out about the safety issues. So this is not something where you could say, okay, these guys are clearly biased, they're trying to, like, rebut the trial or something. This is just a normal Brazilian website. You can put it through Google Translate and it'll show you, straight up: a nine-times increase in use. Now, Mills does not know this. You might say, okay, it's not his obligation to know this. Yeah, but if he had actually extensively screened, and had actually excluded a bunch of patients, he would have seen the same thing. He would have seen a big mass of patients taking ivermectin. The fact that he didn't see it means we're now forced to ask: is this website lying for some reason? Can we get to the underlying data—from the market, say, to see how sales were going at that time? And if sales were high, why didn't they see those patients in the trial? There's a lot of questions here.

ALEX: Anyway, I just thought of another explanation: maybe the people who did get ivermectin did so much better that they were fine, and then they didn't go to the trial. So that's confounding in a different way. That may not be insignificant, actually, if ivermectin works for a subset of people.

MATHEW: Can I jump in real quick?

ALEX: Yeah yeah please. Go ahead.

MATHEW: This sounds like informative censoring, given that they would run the trial in the same location. So, okay—informative censoring is what took place in the Dagan study on vaccines in Israel, where they had what they call rolling cohorts: people in the study were sometimes measured as if they were unvaccinated and sometimes as if they were vaccinated. What this does is create a situation where some people are measured more completely—as if the measurement ran to exhaustion—and some people are not. So in the Dagan study, you had all these people who were vaccinated, who died at a point later than was measured, and that death was literally left out of the computations.

ALEX: Oh right. This is kind of the same as the 28-day follow-up stuff, right? Where, if you go down on day 30, it doesn't count.

MATHEW: Exactly. Because once they were no longer matched with a person in the placebo arm who died, their period of observation was cut off. And if what you have is a period of observation that's cut off early, that's the same thing—it's still informative censoring. And when you do that kind of informative censoring, the standard is that you have to run a sensitivity analysis before you make your claims. All these types of issues are exactly why I think that with Bayesian estimates, the reality usually lies further toward the extremes. When you're done with a Bayesian analysis, you should have a list of the things that would move the curve.

ALEX: Right?

MATHEW: And if they're all going to move the curve in the same direction, that's where things look really suspect. I actually suggested a note on this basis on a blog post that Norman Fenton just wrote up. He was looking at the Sheldrick attack on Marik, and Fenton ran a Bayesian analysis on what the probability of fraud is. But all of the things you could list that did not go into the Bayesian analysis moved that probability in the same direction. So I said, hey, just mention that this is a ceiling on the probability. In this case, for this study, it sounds like it should be a floor on the probability.

ALEX: Yeah, exactly. That's a very good way to put it. Especially given some of the other items we're going to talk about.
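(A quick illustration of the censoring mechanism Mathew describes: a hard follow-up window silently drops late events, and if the arms are observed for different lengths of time, the comparison is biased. The numbers below are invented:)

```python
# Illustration of informative censoring via a hard follow-up window.
# Purely invented numbers -- not from any of the trials discussed.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical times-to-event (days); plenty land after day 28
event_day = rng.exponential(scale=25, size=n)

counted = (event_day <= 28).mean()
print(f"Events counted inside a 28-day window: {counted:.1%}")
print(f"Events silently dropped:               {1 - counted:.1%}")

# If one arm is systematically observed for less time than the other
# (rolling cohorts, staggered enrollment), the dropped share differs by
# arm and the comparison is biased -- hence the standard of running a
# sensitivity analysis before making claims.
```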

ALEX: Anyway. And by the way, just to mention: when we say ivermectin sales shot up in the area, this was the officially recommended treatment by the government for COVID, right? In Brazil at that moment in time, there was this thing called the COVID kit—very politically contentious. Not everybody took it; actually, the numbers I've seen say about 25% of people took it. The establishment hated it, but if we're looking at what the Brazilian government recommended as treatment, they gave you the whole panel: ivermectin, hydroxychloroquine, erythromycin I believe, and a few other things. So it's not unthinkable that ivermectin would be getting used, given that the government was saying use it. And the fact that they just added this note in the discussion—yeah, sure, sure, of course we excluded—what it tells me is that they probably did a check after the fact, and constructed a description of the situation that would allow them to justify their prior results without excluding any patients. But in reality, they didn't question patients as they should have, and they did not exclude as they should have. That's my sense. But again, if I could see the data, I would have a much better understanding—and of course we don't get data.

ALEX: Anyway. Okay. The next one—I'll get really worked up here, because I really think this is important separately from everything else, though I do think it ties into the story. Single-dose recruiting continued after change. This is one I dug up. I was looking at their applications to the Brazilian ethics committee for the protocol. The ivermectin part of the trial began on January 20th. The protocol they sent to the Brazilians in order to reset the ivermectin arm from low-dose to high-dose is itself dated February 15th—in fact, this protocol is what's attached to the paper. If you go to the protocol document, you can see at the bottom it says working paper, February 15th. Okay. By February 15th, however, the low-dose arm had only recruited 19 patients, as far as I can tell. The rest of the patients—59 patients—were recruited into the low-dose arm after they were clearly intending to reset the trial, and I assume that was because they thought the dose was too low. So why are you recruiting patients into an arm of a trial you've already decided to terminate, so that the data is going to be thrown away? And presumably you believe that dose is too low, so you're not going to help these patients. 59 patients, right? If we believe the fluvoxamine data—which these patients could have been allocated to—somebody, statistically, died because of this. And that's not okay at all. And there's no explanation. Normally what happens is the DSMC terminates the arm, a new protocol is written up, the requests for authorization are sent, you get the authorization back, and you start over. Here, we don't know what the DSMC said—there's no mention of any decision—but a new protocol appears just a few weeks into the low-dose arm of ivermectin. They continue the low-dose arm for quite a while, and then they terminate it on March 4th. We don't know why that date was chosen. Then the response comes back on March 15th—but they say they got it back on March 21st, which is, like, why lie about that? You can see on the Brazilian website that it clearly says March 15th. Yet they report that they got the approval back on March 21st, and they started on March 23rd. What this looks like to me is that they had complete control over both when the low-dose arm ended and when the high-dose arm started. Not good. We can talk about messing with the data, but these were real people being allocated to an arm that the people running the trial, it seems, did not themselves believe would help them—and even if they did, the data it produced they did not intend to use. Beyond the manipulation element it highlights, I think it's a massive ethical lapse, and I don't know where the board was at that time. Arguably there are others that are worse in their effect on the real world, but this is 59 real people that I can visualize in my head, and it upsets me, I think, a lot more than the other ones.

ALEX: Anyway, next one: per-protocol population different to the compared contemporary fluvoxamine arm. I think this is because the per-protocol definition is different. The per-protocol definition in the fluvoxamine arm is just: did they follow the protocol, whatever that was? Whereas in the ivermectin arm it was: did they take all three days of doses? And therefore the numbers are very different—it's like 92% adherence in the fluvoxamine arm and 42% in the ivermectin arm. But I think this is a matter of per-protocol not meaning the same thing in these two arms. So that's bad, but I don't think it goes deeper than different definitions—which is not a good thing either, by the way, just to be clear.

ALEX: The next one: time of onset required for inclusion missing for 317 patients. This is the same 317 patients we talked about with Michelle before. Not only did these patients do extremely well—weird—and not only are they missing from the subgroup analysis because the time from onset wasn't known, there's a real question here of how the hell you put them in the trial to begin with if you didn't know how long they'd had symptoms. You claim patients have to be at most seven days from symptom onset to be added to the trial. So if you don't know that answer—what might have happened is that they maybe vouched it was less than seven days, but couldn't say exactly how long. So they get past the binary hurdle, but not with enough precision. But again, this requires explanation. And of course there are missing figures for BMI, et cetera, et cetera.

ALEX: Conflicting co-morbidity counts. This is a really fascinating one, and I've got a lot more work to do on it, because I've started digging in and it's baffling. What they're saying is basically this: if you understand how the patients are structured in the trial, the ivermectin placebo arm is almost completely a clean subset of the fluvoxamine placebo arm. Like 99% of the placebo patients in the ivermectin arm must also be patients in the fluvoxamine placebo arm. This is the whole point of the trial, by the way—the reason you run all these arms at the same time and try to spin all these plates is to share the placebo arm. There's an ethical argument for this (you put fewer patients at risk) and a financial argument (you get to use the same money to learn more). Great. I'm just saying that sharing the placebo arm is not strange in principle. However, when they report co-morbidities, the fluvoxamine arm shows 16 patients with asthma, and the ivermectin arm shows 60 patients with asthma. That can't be—the subset can't show more; it has to be less. And there are other numbers like that, where the co-morbidities don't match. So then you're like: okay, was it the same arm or was it not? Because if it wasn't, then how did you randomize it? That opens up a lot more questions. Maybe there's an answer here; I don't know what it is.
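(The subset logic is simple enough to write down as a one-line sanity check, using the asthma counts as cited above:)

```python
# The subset logic as a trivial sanity check, using the asthma counts
# cited above from the two papers.
ivm_placebo_asthma = 60   # reported for the ivermectin placebo arm
flv_placebo_asthma = 16   # reported for the fluvoxamine placebo arm

# If the ivermectin placebo arm is (almost entirely) a subset of the
# fluvoxamine placebo arm, a condition cannot be more common in the
# subset's raw count than in the superset's.
if ivm_placebo_asthma > flv_placebo_asthma:
    print("Inconsistent: a subset can't contain more asthma patients "
          "than its superset -- either the arms weren't shared as "
          "described, or one of the counts is wrong.")
```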

ALEX: Next concern: conflicting placebo arm counts across the ivermectin and fluvoxamine arms. This is basically what we talked about before. This is how I concluded that those 77 and 75 patients were backdated—the only way to make the numbers make sense is to assume that the placebo patients from the time of pausing the ivermectin low-dose arm to the time of starting the ivermectin high-dose arm, these two-weeks-and-something, were counted toward the ivermectin arm. The problem is that the authors swear up and down that that did not happen. They say very clearly: nope, it was all patients after March 23, both for treatment and placebo. They made sure to say that extremely clearly everywhere, and they tie it to the ethics approval they claim they got on March 21st. So if they had moved things around, they're not saying it—they're swearing this didn't happen. But the numbers they've released make it super clear. Math is math; I'm not using complicated statistics here, I'm doing basic addition and subtraction. Well—there is one other explanation that would stand, but it's not good. Which is that they continued recruiting for the ivermectin placebo arm after the termination of the fluvoxamine and ivermectin arms. That is, they went all the way deep into the summer, into August, and continued recruiting placebo patients. But that's still an offset, right? It's not offset before, it's offset after. And so what? That's still a problem, and it still conflicts with everything they've said everywhere about which patients were used when. So it's no better, basically—and possibly worse, because those patients were definitely far removed from the Gamma wave. The backdated patients would have been recruited as Gamma was surging—you could say, well, there's a difference between surging and really being at its worst, and that creates statistical noise. But patients taken seven months later are a completely different population, with different inclusion criteria as well. It's a mess. So honestly, if I were the authors, I wouldn't go with that explanation, because post-dating placebo is worse than pre-dating it.

ALEX: Next: conflicting target enrollment. This is one I feel slightly guilty about, because I think I started it. But truth be told, they are inconsistent. There are multiple versions of the protocol to keep track of, et cetera, et cetera, and some versions say both 800 and 681 patients targeted. Mills, in his interview in June, said point blank: we are planning to have 800 patients. So what's the deal? Were you planning for 800 and then went back to 681? The numbers 800 and 681 come up all over the place, all the time. I'm kind of chalking this one up to confusion, essentially. It's bad, but probably they did intend to have 681 and, I don't know, misspoke. It's still careless. But this still runs into the problem of the fluvoxamine arm continuing past 681. How the hell did you miss this? It went to 681 patients, and then you just let it run on for another—what is it, 60, 61 patients? That's almost a 10% overrun. Everything else I can kind of make sense of, except for the fluvoxamine overrun. And what's more—and this is also kind of a tell—on their website, before this paper came out, they said the ivermectin arm was terminated for futility. That means it was terminated early because it didn't reach the statistical limits they wanted. When we look at the numbers now, this could not have happened: the ivermectin arm was well within the probability bands they had predetermined, so it should not have been terminated for futility. And now they claim: no, no, we didn't terminate it for futility, we terminated it because we completed the trial. Fine. But why did you say you terminated for futility? Why did you continue the fluvoxamine arm? And why did you say on your website that fluvoxamine was terminated for superiority? Because according to what they were saying on their own website, neither the ivermectin arm nor the fluvoxamine arm ran to completion—which would make sense if the plan was 800. But now they're saying no, no, the plan was 681, which also doesn't make sense. So, the target enrollment business: initially I was far more convinced that the 681 number was added later. I'm not convinced of that anymore, but I also can't make sense of all the information we have. There's a lot of conflicting data, and some opportunity for manipulation within that conflict.

ALEX: So the next one is kind of what I mentioned already: reportedly terminated for futility although the futility threshold was not reached.

ALEX: The next one is the screening-to-treatment delay, and I've not dug into this much. I know we've confirmed with the folks I'm working through the numbers and data with—the folks analyzing this trial—that this happened for at least some patients. But I don't know. Sadly, there is this document that the ivmmeta folks are linking to, which they call the original protocol. And indeed, it points to drive.google.com and has "original protocol" in parentheses, but it's not signed, and I don't know where it came from. I have not seen it provided by TOGETHER; it might have been provided somewhere by somebody. It's dated March 11, which is a very interesting date: it's after February 15th, so it must post-date the protocol they sent to the Brazilian authorities, and it's before March 15th, when the approval came back. So I'm wondering if it was maybe a candidate revision of the protocol that they never actually completed, because it doesn't appear anywhere else. I don't know where it came from, and it has some interesting hints, but I don't know what to make of it unless I understand where it's sourced from. So there's that.

ALEX: Mean delay: the reported mean number of days from symptoms to randomization likely only includes the known-onset patients. Right. Well, that's the thing. They say the mean delay is 3.9 days, I believe. Okay—if you don't know how many days of symptoms 23% of your patients had, how do you know what the mean delay was? Clearly it must be only for the patients they had the answer for, which is fine, but then that's not really your mean delay; it's the mean delay for a subset, and we don't know the true number. What's more, they did this statistical—not trick, whatever, tool—called multiple imputation (imputation, not amputation, by the way), which fills in the missing data based on hints you might have from elsewhere. And most of these missing patients were allocated to the late group—the group that had more than four days of symptoms. So if they know something, they know these people are generally late. Does that move your mean? Probably. How much, we don't know.
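(For the curious, here's the general shape of multiple imputation as a sketch—this is not the trial's actual imputation model, and the data below is simulated:)

```python
# The general shape of multiple imputation: fill the gaps several
# times, analyze each completed dataset, pool the results. This is NOT
# the trial's imputation model; the data here is simulated.
import numpy as np

rng = np.random.default_rng(2)

# Simulated delays from onset, with 23% set to missing (np.nan)
delays = rng.gamma(shape=4, scale=1.0, size=1000)
missing = rng.random(1000) < 0.23
delays[missing] = np.nan

observed = delays[~np.isnan(delays)]

pooled_means = []
for _ in range(20):                       # 20 imputed datasets
    filled = delays.copy()
    # Naive hot-deck draw from observed values; a real model would
    # condition on covariates -- and, per the point above, would likely
    # shift the missing patients toward the "late" group.
    filled[missing] = rng.choice(observed, size=missing.sum())
    pooled_means.append(filled.mean())

print(f"Pooled mean delay:  {np.mean(pooled_means):.2f} days")
print(f"Observed-only mean: {observed.mean():.2f} days")
# If missingness is not random (late patients more likely missing),
# both of these understate the true mean delay.
```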

MATHEW: Um, so, fluvoxamine. I was asking myself why fluvoxamine might be in the middle of this mess. I never looked very closely at the fluvoxamine trials, but one of the things I know is that it looked good, yet Steve Kirsch, after paying for trials to be run, could not find a pharmaceutical company that would sponsor it. I don't know the details of the approval process, but apparently, in order to get a medication approved for treating COVID, you have to get a pharmaceutical company to vouch for it. That seems like a weird conflict of interest to begin with—I don't think anything should be written that way—but let's say you have already set up the system to reject fluvoxamine on the basis that no one will vouch for it. Then if you include it in a study like this, there's an opportunity for finagling, whether it's informative censoring or some sort of rolling cohort. I notice that 60 is about half the difference between 800 and 681—I don't know if that means anything. I haven't done the deep dive that you have into this trial data, but it just stands out as weird. And what you said about the per-protocol versus overall efficacy rate stands out in my head too. If there is something there, those two numbers are so disparate…

ALEX: Exactly, you’d expect some improvement, but…

MATHEW: Maybe fluvoxamine was there to create the opportunity for some sort of censorship of the data.

ALEX: You know, okay. In my darker moments—I'm not saying this as fact, I'm saying this as fiction, a little bit—imagine somebody sitting in a smoke-filled room, talking to their buddies: what are we going to do about this thing? What if we set this up in a way that benefits one of the medications or not—we set up a dummy, something that doesn't really work? Okay, well, what are we going to give them? What do you propose? Oh, I know—what about an antidepressant? LOL. And the other guy comes back: dude, those don't even work for depression. I know, right? There's a cosmic-joke element to fluvoxamine showing up there, is what I'm saying. I'm not really saying this is what actually happened; I'm just hypothesizing, in case it indeed does not work. The other studies I've heard about fluvoxamine also look good, but of course I haven't dug into them, so I don't know—it's totally possible that fluvoxamine is a thing that does work. And maybe, if you think about it from a PR-management perspective, you'd rather create a new wave that you can nuke later, while it helps you nuke the current wave, than give more evidence to the current movement you're in trouble with. And also, by the way, if you are a pharmaceutical company that's manipulating this—I don't know what happened, I'm just saying, if a pharmaceutical company was part of this design—they would probably want any early treatments to come to market after Paxlovid and molnupiravir are approved. First, you don't mess with their approvals. Secondly, now you're fighting on the open market: you're fighting on advertising, not on "does it exist or does it not exist." So even if fluvoxamine is the perfect drug and works, you know, 91% of the time, whatever, it has to come to market late; it didn't have the wave of support that ivermectin had, and therefore it will take more time. The argument would be: we need more data. By the way, one fun fact: some of the counter-TOGETHER-trial arguments I've found came from people arguing against it because they don't want approval of fluvoxamine. Again—another cosmic joke. There are the super-pro-establishment people who hate this trial not because it showed that ivermectin does not work, but because it showed that fluvoxamine did. It's a multi-sided conflict that kind of reminds me of the Syrian civil war: you've got the Kurds on one side, Al Qaeda on the other, the government bringing in the Russians and the Iranians. All kinds of players in the game. And it is fascinating to try to navigate, I have to say.

ALEX: Okay, next up: viral load not reported. Yeah, this is one of many things in the protocol that they said they would report and didn't. Why? We don't know; they didn't say. Are we entitled to be suspicious? Yes. It's the same thing with subgroups, by the way—I don't think ivmmeta has noted this, but in the subgroup analysis they have some subgroups they reported on that they did not pre-announce, and some subgroups they pre-announced but did not report on. Some of those are actually reported in the fluvoxamine paper but not the ivermectin paper, by the way. And then some of the groups that they did report, and that were pre-announced, have their boundaries shifted. Remember this whole time-of-onset thing we've been talking about? In the protocol they say very clearly: 120 hours from onset, that's where the line is going to be. 120 hours is five days. So why is the line now essentially at four days? They report zero-to-three and four-to-seven. Why did they move it back to 96 hours? We don't know. So the subgroup analysis is a mess, and it's the same thing with age. I don't remember exactly how it was and how they shifted it, but if they had announced earlier that they'd split at less-than-or-equal-to-50 on one side and above-50 on the other, what they reported was less-than-50 on one side and—sorry—greater-than-or-equal-to-50 on the other. So the people who were exactly 50 years old were moved from one cohort to the other. Why? No explanation.

ALEX: Next up—okay, this is one I picked up, and I'm quite animated about it. Incorrect dose reporting: many patients at high risk due to BMI may have received lower per-kg doses and show lower efficacy. This is quite outrageous. If you read the whole paper top to bottom, you come away with the understanding that they gave people 400 mcg—that's micrograms—per kilogram of body weight. Fine. Then you look in the supplemental appendix, I believe, and they say: we gave them, whatever it was, six to twelve tablets, but scaling only up to 90 kilograms of body weight. 90 kilograms—about 200 pounds—is kind of a normal weight. I'm more than that; lots of us are more than that. In fact, I looked up the average height of a male in Brazil, and I've seen conflicting reports: one said 171 cm, one said 174. If it's 174, the weight at which you hit BMI 30 is 91 kilograms. Why is BMI 30 important? Because the trial was balanced exactly in the middle: half the patients were below 30, half above. What does this tell us? Let's just say roughly half, even though it doesn't have to be exactly in the middle. The men at 30 BMI and above would have been under-dosed, because they would have weighed more than 90 kilos. The ones below would have been dosed appropriately. So you're looking at roughly half the men in this trial being under-dosed, and some of the women—not all of the women, maybe a third, I don't know, we haven't seen the data—being under-dosed. They report 400 mcg per kilogram, but the average dose they provided per kilogram was not 400 mcg. It could not have been. So this is basically false: the paper says "we did X," and you look in the appendix and there's this little asterisk that, if you expand it out, looks different. Now, I've seen (inaudible) saying at the time: well, we used the FLCCC protocol anyway, so that's your problem—they've changed their dosing since, and that's why they're complaining. That's not true. The FLCCC was suggesting 200 mcg for five days; these guys were doing 400 mcg for three days. You might say, well, that's actually still more: 200 for five days is 1000 mcg/kg over the treatment, whereas these guys were giving 1200. Okay, 20% more, that's kind of cool. Yes—but with this under-dosing element, your top end is dosed a lot less. I've actually graphed this out, and indeed, above a hundred kilos or so, the gap with the FLCCC protocol starts to open, and it keeps opening. Now, high-BMI patients aren't just important because they're a cohort that was under-dosed; they're also the cohort at the highest risk. We know this: obesity and COVID do not go well together. So we're looking at a situation where the more at risk you were, the lower the effective dose you got. I asked (inaudible) today, actually—I had a back-and-forth with him on Twitter. I asked: why did they do this? And he kept changing the topic. He talked to me about the COVID-OUT trial, the ACTIV-6 trial, the average here, the average there. He's an author on this paper; he should know this, or at least he should be able to stick to the topic.
He told me something about the average distribution of weights in Brazil. I'm like: no, half your patients are over 30 BMI, and your average male at 30 BMI is around 90 kilos. So your high-BMI cohort—half the men and many of the women—was under-dosed. And he didn't have anything to say to that. If there were some justification, he would have given it to me. So this, to me, stands out as a very strong reason why the study is flawed—and also, again, an ethical lapse. That 90-kilogram number has to have a citation behind it. Why did you add this cap? This is not what the FLCCC did. In fact, let's add a secondary topic here. The FLCCC recommended, even at that time—I've checked the V9 version of their outpatient protocol—taking ivermectin with a meal, which increases the bioavailability. These guys did not do that; they explicitly said the opposite, which reduces it. So even the numbers comparing FLCCC versus the TOGETHER trial—which at the lower weights look better for TOGETHER than for the FLCCC protocol of the time—don't take into account that taking it with a meal improves bioavailability. Now, there's this segment at the bottom of the protocol citing some research that says the improvement you get, for the elderly, is only like 25%. Okay—first of all, that's still something. Secondly, there's this other paper that says the improvement is two and a half times more absorption. I don't know which one is true. I do know that if they thought the effect wasn't as big as I think it is, they could have just given it with a meal. Instead, they're hiding behind the FDA label and other things that just don't sound reasonable to me—and they were more than happy to go off-label for fluvoxamine. So yeah, this is really frustrating, how the dosing was done. And again, no good explanation coming. And potentially real people put at real risk for unclear reasons is not okay.
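(The dose-cap arithmetic is easy to verify. A small sketch, using the 400 mcg/kg target and the 90 kg cap as described, with arbitrary example weights:)

```python
# The dose-cap arithmetic. The 400 mcg/kg target and 90 kg cap are as
# described from the supplementary appendix; example weights arbitrary.
def effective_dose_mcg_per_kg(weight_kg, target=400, cap_kg=90):
    """Actual per-kg dose when tablets are scaled only up to cap_kg."""
    return target * min(weight_kg, cap_kg) / weight_kg

# Weight at which a man of average height (~1.74 m) hits BMI 30:
height_m = 1.74
print(f"BMI-30 weight at 1.74 m: {30 * height_m ** 2:.0f} kg")  # ~91 kg

for w in (70, 90, 100, 120):
    print(f"{w} kg -> {effective_dose_mcg_per_kg(w):.0f} mcg/kg")
# 70 and 90 kg get the full 400 mcg/kg; 100 kg gets 360; 120 kg gets
# 300 -- the heavier (and higher-risk) the patient, the lower the
# per-kg dose actually received.
```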

ALEX: The next one: plasma concentration below known effective value. Since they were roughly in line with the FLCCC protocol, at least at the lower weights, I think this one is okay—except for the issues we just talked about with taking it with a meal and the higher-BMI people. There's this study that showed you need a certain critical concentration of ivermectin to have an effect, and its authors modeled that the trial didn't reach it. I'll mention something here, actually, because it's kind of interesting. The authors include this guy, Craig something—I don't remember his last name, and I'm not even sure Craig is his first name—from Monash University in Australia. Monash University is important because the original work on ivermectin working in vitro against COVID was done at that university, by Kylie Wagstaff and her team. This guy came out of the same university and almost immediately started putting out material saying it's garbage—the whole meme about not being able to reach the effective plasma concentration came out of that group. So this paper has an author who was involved in saying that even if you give a high dose, you can't reach the effective plasma concentration. So far so good—people can have their opinions. But then why the hell did they start the trial with a single dose, which had zero chance of reaching that level? Don't you want to give a high dose to prove your point? Give a high dose, show it doesn't do anything, go back to Australia, and give Kylie Wagstaff the middle finger. Instead, you start with a low dose, and then you move it up, but in a very stingy, kind-of-sorta way, and leave people uncovered, et cetera, et cetera. Why? With ivermectin, as far as we know, there might be disagreements about efficacy—I get that—but there are no serious disagreements about safety. There are people mumbling about safety, but there are cases reported where somebody took like a hundred times the effective dose as a way to commit suicide, and she walked out of the hospital four days later with no lasting effects. It's really hard to harm yourself with this drug, and yet everybody's being hyper-cautious about the precise amounts. I get it—medication is medication, you've got to be cautious—but when people are dying and you've got good indications and a good safety record… I don't know. Again: why under-dose, when your whole argument is that even a high dose won't reach the plasma concentration? Try it. Anyway, otherwise, if they had dosed as they said they had dosed, especially with a meal, I don't think this would have been an issue. It's an issue because of the caveats they put on the dosing.

ALEX: Next up: primary outcome easy to game, selected after the single-dose ivermectin arm. Correct. They initially said they would report separately—this is a whole other story, and I'm realizing how much material I've gone through on this trial and how much there is to talk about; we're probably not even at half of it. They changed the outcome of the trial from over-12-hours observation to over-6-hours observation. And they have some explanation: well, this was the peak of the Gamma wave and it was a mess, people being hospitalized in corridors and whatever. Okay—well, that's a problem, right? You don't really want to take data from a health system that is in panic; I'm visualizing Italy in early COVID. And what you would expect is that the ER numbers would look random. And they do: if you look just at the ER observation numbers in the paper, they look like the inverse of everything else in the paper. Ivermectin apparently makes you do worse in terms of ER observation—I don't know if that makes any sense—while hospitalization does better. So what, you get observed longer in the ER, but you're somehow magically less likely to go to the hospital? This looks like statistical noise to me. And really, just think about it: this is a health system collapsing. People are getting hit by the Gamma variant, one of the deadliest variants we've ever seen. These are the worst days of the pandemic in Brazil. They're setting up field stations everywhere; people are just rolling in sick. Are we saying the doctors followed up on every patient in a timely fashion, that nobody was left in a waiting room for eight hours instead of four? Give me a break; this is not serious. And yet what they did is, they had these, I believe, as separate endpoints, and they merged them. They said: we'll report on ER observation over six hours or hospitalization, as if they're equivalent. So they reduced the number of hours you have to be observed to count as, quote-unquote, hospitalized—it was over 12 hours originally, they made it over six—and their explanation made little sense. And again, these ER field stations they were working with: fair enough, Brazil was in chaos. But the whole point of an RCT is that there's a stable background to work with, not one that's changing rapidly while your trial is underway. So I don't know what to make of it. I wish they had at least reported both their original endpoint and the new one, so we could see. Changing a primary endpoint in the middle of a trial is not a good thing—it's considered a really bad thing. If you listen to (inaudible) and those guys, they constantly rag on trials for violating their preregistration. With this trial, I'm having trouble finding anything they did do according to the preregistration at this point. They changed the primary endpoint, the secondary endpoints, the subgroup boundaries—everything keeps changing: inclusion criteria, exclusion criteria. At this point—and this is the problem—you kind of have to trust the researchers. And the whole point of an RCT is that you don't have to trust the researchers.
So we have weird versions of everything happening with this trial, and again, I don't know what to make of it, but I really don't like what I'm seeing.

ALEX: The next one, I think, is a good point, but probably tangential. They included contraindicated chronic kidney disease patients—and indeed, ivermectin is contraindicated for kidney disease, at least in some places. I wasn't able to track down whether it was on the label or not. So, first of all, you put these people at risk, theoretically. But honestly, this looks like an honest mistake, and from what I've seen, that contraindication is not really that serious either. I doubt anybody was actually harmed by it. It does maybe speak to a lack of attention to detail, which is important, but I don't think this criticism will take down the trial.

ALEX: The antigen test requirement is fascinating. To establish that you're COVID-positive and can be added to the trial, they didn't do a PCR test; they did a rapid antigen test, if I'm understanding this correctly. Rapid tests have a false positive rate, right? So, are we adjusting the expected statistical power of the trial for the number of false positives we're going to induct into it? No. That's a problem, because the number we talked about before, 681, was precisely set to have a certain event rate, to observe a certain size of effect with a certain amount of power. When your variables are moving around—and we're not even talking now about the Gamma variant and the ER observation and all that, just whether you're using a PCR or an antigen test to induct somebody into the trial—all of that should be reducing your confidence in your numbers. And yet that does not seem to have happened.
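(Here's a sketch of the dilution mechanism: a classic two-proportion sample-size calculation, with invented risks, showing how false positives at intake shrink the detectable effect and push the required enrollment up:)

```python
# Sketch of how intake false positives dilute power: a classic
# two-proportion sample-size calculation. All risks here are invented,
# not the trial's design parameters.
from scipy.stats import norm

def n_per_arm(p_plc, p_trt, alpha=0.05, power=0.80):
    """Approximate per-arm n to detect p_plc vs p_trt."""
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    var = p_plc * (1 - p_plc) + p_trt * (1 - p_trt)
    return (za + zb) ** 2 * var / (p_plc - p_trt) ** 2

p_plc, p_trt = 0.16, 0.12            # hypothetical true event risks
print(f"No false positives:  n = {n_per_arm(p_plc, p_trt):.0f} per arm")

# Suppose 10% of "positives" never had COVID. An antiviral can't help
# them, so both arms drift toward a shared background rate and the
# observable difference shrinks.
fp, background = 0.10, 0.02          # invented
p_plc_d = (1 - fp) * p_plc + fp * background
p_trt_d = (1 - fp) * p_trt + fp * background
print(f"With 10% false pos.: n = {n_per_arm(p_plc_d, p_trt_d):.0f} per arm")
# Required n goes up; equivalently, at a fixed n the power drops.
```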

ALEX: They don't provide a time-from-onset analysis for mortality or hospitalization, only for the combined measure. Yes—and they really should have. There are per-protocol and modified intention-to-treat mortality and hospitalization results missing. Health Nerd saw fit to say, well, there are some mortality numbers in the appendix. Yeah, but there's not the full analysis like they did for fluvoxamine—again, there's a matter of parity here as well. And really, the authors ended up in a tricky situation, because in one way, having all these papers is an advantage if you want to obscure something: there are too many pieces of the puzzle to put together. But in another way, there's a lot of data, taken at different snapshots in time, that can be used to infer what happened in the trial—to see differences in treatment, differences in approach, and make inferences about why those differences are there. And it actually is interesting how certain things were provided for fluvoxamine but not for ivermectin, or vice versa.

ALEX: Same for outcomes: many outcomes specified for the trial in the protocol appear to be missing. They had a co-primary endpoint of COVID-19 mortality, which they didn't report on—only all-cause, apparently—and a bunch of other ones. Why are these things missing? They had seven months to do it; surely there's an explanation owed. And the age information that's missing for some patients is important, because age is an inclusion criterion—and a randomization criterion, actually. So if you don't know the age, what's the deal? Again, we can maybe hypothesize that they knew roughly but not precisely. This has to be discussed; it's not up to me to explain what they did.

ALEX: The next one is mid-trial protocol changes—many, many changes throughout the protocol. I kind of get it, because it's an adaptive trial, but that only works if the committee that makes sure these changes are unbiased is independent, and it wasn't. So again, this comes back into our field of vision because we don't trust the committee, and nobody really should. This is not a matter of liking or not liking people: they're just not independent, flat out. Which, by the way, is also a false claim in their papers—they say the DSMC was independent, and I don't know the definition of independent that would make that committee look independent. Maybe you say they're honest, or they're good people, or they have a good record; I could grant you all of that. But the independence requirement is there for a reason. What is it—Caesar's wife must not only be honest, but appear honest? I have my doubts about whether this committee was honest at all, but maybe I'm being paranoid. What we do know is that it does not meet the definitions of independence and impartiality that we want in these trials, precisely so we don't have to have these conversations.

ALEX: And there are specific criteria that were modified late in the protocol: they started adding the criterion of fever over 38 degrees Celsius at baseline. 38 degrees is barely a fever, right? I don't know if you guys speak Celsius—I think I'm at an advantage here—but 36.6 is your baseline temperature, and 38 is a little bit above. You have a fever, but it's a low fever. So if that's an inclusion criterion, someone who is young and has a bit of a fever gets added to this trial, which is supposed to be for high-risk patients—but only towards the end. And this matters, because remember all the fuckery that happened early on—and again, it doesn't have to be intentional—all of that mess has now been cleaned out, and now you're adding patients who could very well be baseline healthy. What does that mean? It means you're watering down the statistical power of your results. If you admit patients who are going to be fine anyway, any drug that does work has less opportunity to show it. That's why "has a low fever" becoming a criterion for being added to a trial that's supposed to be about high-risk patients with comorbidities is, you know, odd.

MATHEW: Alex, can I ask a question?

ALEX: Yep. Go ahead.

MATHEW: So not all these participants jumped into the trial, or exited it for the computations, at the same time—if I understand what you're saying.

ALEX: Yeah, yeah, it was a rolling trial. That's what happened with the high-dose arm. The whole trial, let's say, started on January 15th, 2021, for real, and it ended on August 5th. And they have multiple protocol changes in the middle.

MATHEW: Okay. Uh, okay. Wow. Okay. So, um, something that I found in…

ALEX: I'll give you one thing that will tickle you even better. Vaccination went from an exclusion criterion to—partially—an inclusion criterion: at one point you could be included if you'd been vaccinated up to 14 days prior. They shifted vaccination itself around as a criterion throughout the trial. And they don't mention anything—there are no subgroups, nothing.

MATHEW: Okay. So different arms were started and ended at different periods of time.

ALEX: Correct.

MATHEW: Is there any place in the study where they do a risk adjustment for, you know, different risks on different days? Because the risk curve should look wildly different in January than it does in April.

ALEX: I have not seen that. If anybody has looked it up, let us know, because I myself have not seen that adjustment.

MATHEW: Just to mention, I did a risk adjustment for the Israeli study, and I found that it had a factor of at least 1.9. That's not small at all.

ALEX: Right, right. I'll show you something, because I think you'll get it—just give me one minute. I posted, basically, my "hint, hint, nudge, nudge" explanation for all this mess. There should be a tweet added to the space now, one with four charts. It's very striking. These four charts show the enrollment pace, which shows the discontinuity I'm talking about in ivermectin-high; the deaths in that particular state in Brazil; the cases in that particular state in Brazil; and, for the Gamma variant, the weekly deaths in Brazil as a whole and how Gamma is dominating and pulling them up. According to that chart, it went from something like 5,000-ish to close to 20,000 at the peak—deaths per week for Brazil as a whole. We're talking about a massive jump in mortality happening right in the middle of this trial.
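(For what it's worth, the calendar-time adjustment Mathew is asking about can be sketched as Mantel-Haenszel pooling across enrollment periods—all counts below are invented for illustration:)

```python
# One way to do the calendar-time adjustment being discussed: stratify
# by enrollment period and pool with Mantel-Haenszel. All counts are
# invented for illustration.
def mantel_haenszel_rr(strata):
    """strata: (events_trt, n_trt, events_plc, n_plc) per period."""
    num = sum(a * n0 / (n1 + n0) for a, n1, c, n0 in strata)
    den = sum(c * n1 / (n1 + n0) for a, n1, c, n0 in strata)
    return num / den

# Hypothetical: background risk jumps during the Gamma peak, and the
# arms are unevenly enrolled across the two periods.
strata = [
    (10, 200, 12, 150),   # pre-peak period
    (30, 150, 40, 200),   # Gamma-peak period
]
trt_events = sum(a for a, _, _, _ in strata)
trt_n = sum(n for _, n, _, _ in strata)
plc_events = sum(c for _, _, c, _ in strata)
plc_n = sum(n for _, _, _, n in strata)

print(f"Crude RR:    {(trt_events / trt_n) / (plc_events / plc_n):.2f}")
print(f"MH-adjusted: {mantel_haenszel_rr(strata):.2f}")
# When enrollment timing is confounded with a mortality wave, crude and
# adjusted ratios diverge -- which is why the adjustment matters for a
# trial spanning the Gamma surge.
```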

MATHEW: I've never used Twitter spaces before…

ALEX: So if you have the bar with the space, if you open it up so you can see everybody, at the top of that list there are some tweets attached, and I think the first one is the one with the four diagrams I posted. But anyway, the point is that what you're saying is blatantly true: the CFR moved all over the place throughout the trial. I guess their explanation would be that they are not actually comparing the drugs to each other—they're comparing everybody to placebo, and the placebo was nominally concurrent. Even though I'm pretty sure it wasn't; they were saying it was. But yeah, that kind of adjustment would be really fun to do.

ALEX: Yeah. So the next point they're making is what I said before: the vaccine status criteria are unclear. In some places they say they will exclude you for having been vaccinated; in other places they say they will include you if you've been vaccinated up to 14 days prior. No, sorry—the early protocol made vaccination an inclusion criterion, and then it became an exclusion criterion. And excluding vaccinated patients is fascinating in another way too, because you would expect, especially right after the Gamma wave when everybody's freaking out, the most at-risk patients to go get vaccinated. Remember, this is early 2021: this is when the whole mess with Andrew Hill happens, this is when the vaccines are released. There's a lot happening between January and August 2021 in the pandemic beyond the Gamma stuff specific to Brazil—a ton of stuff in the background, both politically and in terms of vaccine availability. So if you're excluding people who have taken the vaccine, then even without you changing the exclusion criterion, that criterion itself is changing your background population as you go—because you would expect the most at-risk people to have gotten vaccinated gradually through the trial. Your ability to recruit at-risk patients is decreasing. In fact, I've shown that in the early part of the trial they had a bit over 50%—53%, I believe—acceptance rate among the patients they were evaluating. So for every two patients evaluated, they admitted approximately one into the trial. For the latter half of the study, I've shown that this plummets to about a quarter: for every four patients evaluated, they add one. Why did this change? Not explained. But the criteria are shifting, and a bunch of people are getting vaccinated. So you might start to form a picture.

ALEX: Ah, next: late change in results from previously released data. I think this one is actually not an issue. ivmmeta says that they talked in August about a certain number of events and are now reporting a different number, but this has been clarified: they had not finished the 28-day follow-up when they spoke in August, which makes sense, because they closed admissions on August 6th. So they could not have fully evaluated all the events, and you would expect some more events to have accumulated. Now, this doesn't cover the mortality discontinuities, the Table 2 numbers, and all of that stuff that has been stealth-edited. But on the discontinuity with the original results specifically, maybe I'm not seeing something, but I'm unconvinced.

ALEX: Statistical analysis plan dated after trial start. This was another one I picked up. Their statistical analysis plan is dated March 26; they started adding patients on March 23. Also, that plan is not fully signed by everybody that has to sign it until April 8th. Does that mean they were doing different things? You know, I'm told that this is kind of common, but again, we're looking at a situation here where the DSMC is not independent, et cetera, et cetera. Can we conclude that they looked at the data as it was rolling in and made some decisions? I sure hope not, but hope is not a strategy. Again, this is why you sign things and you file things and you pre-register things. It's supposed to be the gold-standard trial, and dates are shifting all over the shop. It was frustrating.

ALEX: We talked about the per-protocol placebo results being very different. Oh yeah, this is a particular feature of that. If you look in appendix figure S2, they have the intention-to-treat cohort, the modified intention-to-treat, and the per-protocol. The placebo group in the per-protocol cohort, the one that actually did the three days, did better than every other arm in every other cohort. It did better than the other placebos, like, a lot; it even did better than ivermectin in the other cohorts. Now, within the per-protocol cohort itself, ivermectin also does better, so it doesn't flip. But the per-protocol placebo does spectacularly well, and you're like, okay, what's the deal? Why is the per-protocol placebo doing super well? Now, could it be that certain people in that group died before they completed their treatment, and are therefore counted out of the placebo group as not per-protocol, and therefore that group looks synthetically better? I dunno. Things that should have been explained. As it looks right now, there's this massive mystery around why the per-protocol placebo looks better than everything else, just substantially better. And what was in that placebo, anyway, and who actually got it?
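
Here's a toy simulation of the survivorship mechanism being hypothesized above, with entirely made-up event rates: if patients who have an event before completing the three-day course are dropped from the per-protocol set, that set looks better than intention-to-treat even though nothing about the pills changed.

```python
# Toy model of per-protocol survivorship bias -- made-up rates, not trial data.
import random

random.seed(0)
N = 100_000
itt_events = pp_events = pp_n = 0
for _ in range(N):
    event_early = random.random() < 0.03  # event before finishing the course
    event_late = random.random() < 0.05   # event after finishing the course
    itt_events += event_early or event_late
    if not event_early:                   # completed treatment -> per-protocol set
        pp_n += 1
        pp_events += event_late

print(f"ITT event rate:          {itt_events / N:.3f}")  # ~0.078
print(f"Per-protocol event rate: {pp_events / pp_n:.3f}")  # ~0.050, looks "better"
```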

ALEX: The imputation protocol violation. Again, I'm not sure if it meets their definition; I need somebody who understands this stuff. Mathew, you're probably the right person. They state that imputation is going to be used for numbers that have a certain statistical analysis done on them, whatever. Long story short, they state an upper limit for imputation: up to 20% missing data can be filled in. And they used it on at least one of their tables to fill in the time from onset. But for time from onset they're missing 23% of the data, more than the maximum they said they would accept to fill in statistically. And they do say they've done it. The one part I'm not sure about is whether it's used on that table in the way they said it would be used in the statistical analysis plan, whether there's some leeway there. But filling in 23% of your data points statistically sounds dodgy to begin with. Having said you won't do it for more than 20% and then doing it at 23%? I don't know. At the very least, it sounds like there was chaos on the ground, which we kind of know there was because of the other parts of the story, and they tried to cover it up here somehow. Not good. Again, I allow slight doubt that maybe the way they said they'd use imputation and the way they actually used it are slightly different, so the limitation doesn't apply. But I'm doubtful that that's true. I'm just noting it.
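
The complaint reduces to a one-line check. The 20% cap and the 23% missingness come from the discussion above; the raw counts in the example call are made up.

```python
# Checking a variable against a stated imputation cap -- hypothetical counts.
MAX_IMPUTABLE_FRACTION = 0.20  # upper limit stated in the analysis plan

def may_impute(n_missing: int, n_total: int) -> bool:
    """True if the missing fraction is within the stated cap."""
    return n_missing / n_total <= MAX_IMPUTABLE_FRACTION

print(may_impute(n_missing=230, n_total=1000))  # 23% missing -> False
```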

ALEX: The next one is, I believe, a typo that was fixed, again with no notification that it was fixed, but whatever.

ALEX: Conflicting adverse event counts. I think there's a lot more to be done here that I haven't dug into much yet.

ALEX: And then we move on to things like conflicts of interest. So I'm just going to read a bunch of this at once and then talk about it. This is possibly the largest financial conflict of interest of any trial to date. Disclosed conflicts include Pfizer, Merck, the Bill and Melinda Gates Foundation, the Australian government, Rainwater, Fast Grants, Medicines Development for Global Health, NovaQuest, Regeneron, AstraZeneca, Daiichi Sankyo, the Commonwealth Science Research Organization, and Card Research. UNITAID is a sponsor, at least if you look at the website. The analysis was done by a company that receives payment from and works closely with Pfizer (that's Cytel), and a co-principal investigator works for Cytel and the Gates Foundation (that's Mills). The Gates Foundation is a founding partner of Gavi, which took out Google ads telling people not to use ivermectin. I don't think there's significant doubt about how Bill Gates feels about ivermectin. And IMS Holdings and Certara are the same kind of situation: companies whose mission includes helping pharmaceutical companies get approval and designing the scientific studies that help them get it. Now, to me, this is important not because they're helping pharmaceutical companies, but because it indicates they know what they're doing. These people are experts. There are several people in this trial who know a lot about running clinical trials. So when we're looking at sophisticated behaviors, you know, causing glitches in algorithms and shifting arms and whatever, you could say, dude, what are these guys, geniuses? Apparently, yes. These are the kinds of people that work for the biggest pharmaceutical companies and run the kinds of trials that we know have a higher likelihood of getting approved. There's this thing called the funding effect, where trials run by pharma are just statistically more likely to get approved than trials that are not. And why would that be? Maybe they're really good at selling trials. Maybe these are the people who know all the ins and outs, and maybe they're being used here for the opposite purpose. Maybe, maybe not. It would be good if the DSMC were independent so we could have some sort of assurance, but that was not done. So now we have all these doubts.

ALEX: Then they talk about the Gamma variant, which shows very different characteristics. And again, we've seen this problem in many different forms. We've seen the mortality jump in the trial. We've seen the CFR jumping in the region. We've seen the Gamma variant taking over, and I think the best estimate I saw for Gamma is that it's something like 50% more likely to kill you. That's not a little bit; it's a lot. In fact, it was probably too aggressive, and that's why it eventually got replaced by Delta. But it's the worst variant we've ever seen, and it taking over in the middle of your trial, and you not taking that into account, does not look very good.

ALEX: Single-dose arm results missing. Yep. Well, that's just what's happening: the single-dose arm results are missing. They haven't given us the data, and we need that data to fill in the rest of the story. We can make some guesses, but it's not good for them to collect data, to get people to sacrifice themselves for science, and then not even put it out. Sorry, not good. Especially when there's, well, not just a doubt but a certainty that you recruited people into that arm after you had declared your intention to shift them. So the least you can do is release the data.

ALEX: Okay, this one is kind of funky, actually: anomalous results from the same region. Apparently the Molnupiravir trials did significantly better in Brazil, in that region, at that time. And if you look at the Molnupiravir results, they're predominantly propelled to significance by those results, specifically from Brazil, from that region, at that time. What that means, nobody knows, but it's kind of funky. What's also funky: remember this guy Kristian Thorlund that we talked about, who is like Mills's sort of soulmate and chairman of the data and safety monitoring committee? He then, I don't know, shows up having written a paper with Gideon Meyerowitz-Katz and Kyle Sheldrick, plus another guy from the same DSMC, who is also a co-author of many papers with Mills (26, I believe), and the fifth member of the group, because, you know, the Beatles always have to have the fifth Beatle: Andrew Hill. How the hell? You've got two people who are, like, dedicated to finding fraud in ivermectin trials (my feelings are known, but okay, sure, they can write a paper), two people from the TOGETHER data and safety monitoring committee, people who haven't collaborated before to my knowledge, and Andrew Hill, showing up together to write a paper arguing against Molnupiravir. What??? This is not how papers normally look, right? You have an academic group, maybe there's a supervisor. You don't see people from, like, five countries, who have all entered the radar of proponents of early treatment for very different reasons. And of all of them, Andrew Hill, for whom I think it's by far beyond doubt that he has engaged in academic misconduct. Writing a paper arguing against Molnupiravir? I don't even know what to make of it. I don't know how these people met, why they wrote this paper, why they chose Molnupiravir. It's all confusing, so I don't want to say anything specific. I have some thoughts, but it's all low-probability events. This one completely throws me off.

ALEX: They mentioned this is being designed by Cytel. Again, we mentioned Cytel before and how they are connected.

ALEX: Placebo unspecified. I don't think it's unspecified. They did mention vitamin C, I believe, in the hydroxychloroquine trials, which is another problem. But everything I've seen from January 2021 onwards says talc. So I don't know how they see that, but I don't think the placebo was unspecified.

ALEX: Yeah, this is the thing about the published protocols. I don't know where ivmmeta got the 1B protocol; I suspect it's basically a failed candidate protocol. There were many, many changes throughout the trial, that's for sure, but I don't know about that particular issue with the published protocols. It would be very good if ivmmeta told us the story of where it came from.

ALEX: The last one is based on probability of superiority. Ah, this is a new one they've added, which is one of mine: the probability-of-superiority figure hidden in the appendix. The probability-of-superiority figure featured in the main paper for fluvoxamine, metformin, and hydroxychloroquine was hidden in the appendix for ivermectin. Separately, patients 50 years old were assigned to different subgroups in different places. These are separate, smaller issues, or not smaller, really; if we don't have the data, we don't know how much they affect the results.
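
For readers unfamiliar with the metric itself: "probability of superiority" in a Bayesian trial readout is the posterior probability that the treatment's event rate is lower than the control's. Here's a minimal beta-binomial sketch with made-up event counts; TOGETHER's actual model was more elaborate.

```python
# Posterior probability of superiority via Monte Carlo -- made-up counts.
import random

random.seed(1)

def prob_superiority(events_t, n_t, events_c, n_c, draws=50_000):
    """P(treatment event rate < control event rate) under Beta(1,1) priors."""
    wins = 0
    for _ in range(draws):
        p_t = random.betavariate(1 + events_t, 1 + n_t - events_t)
        p_c = random.betavariate(1 + events_c, 1 + n_c - events_c)
        wins += p_t < p_c
    return wins / draws

# Illustrative counts only, not the trial's numbers.
print(prob_superiority(events_t=90, n_t=600, events_c=100, n_c=600))
```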

ALEX: There was the weirdness around greater efficacy being seen for older patients than younger patients, though honestly, I don't know where that was, anyway.

ALEX: And then there's a bunch of other comments made earlier that I'm not going to go into, because we've spent enough time already. But yeah, so that's the paper. Oh my God. All right, I'm ready to stand down. If anybody wants to raise their hand to say hello and ask any questions... I think Twitter's doing that thing, by the way, where it's not showing me requests.

ALEX: Oh, Jared. Hello, Jared. Let me approve your request. I’d love to hear your thoughts.

JARED: Let's just say this has been great. Very much enjoyed the analysis.

ALEX: Thank you. Yeah, yeah. As I was talking, I was realizing I've probably dug into this a little bit more than I'd thought.

JARED: I'm actually a trial candidate for the Beat MS trial. I've been studying, reading medical research about multiple sclerosis for years now, and I'm a computer scientist, so it was really satisfying to hear another computer scientist do a deep dive.

ALEX: Yeah, it's fascinating. If what I think happened did happen in this trial, it would have been an issue around an algorithm trying to satisfy preset constraints in the face of adversity, where the adversity is arms showing up and disappearing, and the algorithm trying to keep consistency.

JARED: I feel like that's a charitable analysis and, uh, I'm happy to hear somebody be so charitable.

ALEX: I am not… Look, my less charitable interpretation is that this was done on purpose. That it happened, it happened; as to why it happened, I'm refraining, because I'm not a telepath. And honestly, I consider incompetence to be a worse situation than malice, as people will have heard me say, because you can negotiate with the malicious; you cannot negotiate with the incompetent. So calling them incompetent, and if this happened and it's not on purpose, it's extreme incompetence, is not something I necessarily see as strictly charitable. But whatever it is, I just like to talk about the things I know and not the things I don't know. I'm pretty certain that after it did happen, a number of things were tweaked to make the paper look a lot better, to hide what happened. I don't know that what happened happened on purpose. Again, I have many, many questions that I've articulated here. I could see a world in which maybe this was a fuck-up that happened completely accidentally, and then things were airbrushed to make it look like it didn't happen. But there are just too many elements gone awry, and they fit together under this story. It's just too clear.

JARED: Agreed. And thank you, of course, for the analysis.

ALEX: At least this will be recorded for posterity. Okay, so if nobody has any particular topics to bring up, I'll wrap it up here. Hey, Tom.

TOM: Hi, I'm pretty busy, but I've been listening, and I'm full of awe and wonderment at your analysis. That's all I've got to say.

ALEX: Yeah, I may have gone a little OCD on this and lost a couple of nights of sleep. Having gone so deep, I've developed a fondness for the design. But it also looks like the people who were critical to this design were also critical in compromising it, and that's just sad. If that's what happened, it's just depressing, because this design really is, in principle, how trials should be happening. I love it. It's great. But there should be safeguards around it to make sure it is unimpeachable, and this was... impeachable. Anyway, thanks. So basically, unless anybody else has any questions... I often hear after the fact from a lot of people sending me comments, so I'd love to continue the conversation: send me stuff. Part of the reason I'm doing this is that I want to stir the collective intelligence to point me to things. Somebody had said, you know, isn't commenting about this stuff in public giving the potential enemy your playbook, showing them all your cards? And that is true, of course: assuming somebody was motivated to make the paper look good, knowing all the issues gives them some ability to maneuver. However, my counter-argument is that without talking in public, I would have no concerns at all. My issues are issues because I've talked in public. I've talked to people, I've connected, I've seen ivmmeta gather material, I've seen so many people put together different pieces. And then through interaction I've met people, we've come together and started to analyze stuff and escalated our understanding, and we've chased various dead ends that looked like issues but actually weren't; I mentioned a few of them here. So this is something that lives and dies in public. I understand the potential for it to have some negative effects, I suppose, but I'm not much for subterfuge. And even if I were, I think it's obvious that this work is exciting because it's work that is done by many. It's different people throwing different fragments of understanding into the ring, and it coming together from ideas that maybe not all of us had to begin with. I may throw in something that isn't quite right, somebody else corrects it or adds something, and it's like, ah yes, it's a collaboration. So I think we just have to do it in public. I don't see how else. We can do private stuff, but it's not going to be as powerful. Hey, Jared. I see a hand up.

JARED: I just wanted to note that I'm a big fan of epistemology in general, and the idea of obscuring your tactics in order to defeat your opponent, that is not how science works at all. Oh my gosh. So yeah, having these discussions in public, and the analysis and the tools of the analysis critiqued, is exactly what we're here for. Thank you for doing it.

ALEX: I appreciate that. I think, if I'm being charitable to the people who were saying that: they have concluded that this is not how science is done, right? They have concluded that there is something extremely shady going on, and therefore that we are no longer in a scientific realm; we're in a political realm. That's why they say this. It's not that I don't see where they're coming from. I'm just saying I don't have a choice, because this analysis is not all mine. A lot of the people who added to it are here, and I acknowledge them in my posts as much as I can, but really I can't fully, because there are just so many people who have contributed. We've all done it in public, and I think this is a sign of the future. I think the future looks more like this and less like that. So whatever the downsides, I think we're going to have to mitigate them, because I don't think we can avoid it. I don't think there's a better way to do it. Well, I'm sure there are better ways, but the in-public part is definitely part of the better way. So it is what it is, and thank you all for listening in. I find it kind of hard to believe that you've listened to me ramble about a scientific study for over two hours, but, you know, you did it to yourselves, so you've got no one else to blame. All right. Cool. Thanks.

Thanks to @mmpaquette, @EduEngineer and @MaryKRe for their contributions and great conversation.

This article is part of a series on the TOGETHER trial. More articles from this series here.
