You might have seen the title of this piece and expected to read about how industry funds naughty little researchers who then carry out studies which show how fantastic and healthy that industry’s product is. Nope.
This piece is going to be about other conflicts of interest:
clicks
attention
likes
You might recall a few months ago that a paper came out which purported to show a positive relationship between the artificial sweetener erythritol and cardiovascular disease.
I was so aghast at the discrepancy between the methodology used and the title, and even worse, the headlines, that I wrote a pretty scathing tweetorial:
Now, in my opinion, every single sub-study within this paper had significant flaws (check out the tweetorial for that), but here, I want to talk about the overall approach to the research question, and how I worry that this relates to conflicts of interest that I actually think are growing to be more widespread and worrisome than financial incentives.
The authors have a really important research question: does the consumption of a commonly consumed ingredient within the food supply increase the risk of heart attacks and strokes? And the point I want to make is that if your genuine interest is in answering this question, you would design the most robust study you could to answer this question, right?
Yet, I don't think the case can be remotely made that in this publication the scientists carried out the most robust science they could to answer their very important and very valid question.
Let's review:
the investigators found a positive relationship between serum erythritol* (not intake of the sweetener erythritol) and severe CVD events in humans with CVD in an observational study
then in some mechanistic studies (in cells and in mice) they found that exposure to erythritol increased markers of coagulation (blood clotting)
So at this point a perfectly reasonable hypothesis is that erythritol is increasing the risk of major cardiovascular events because erythritol causes platelet aggregation (which promotes thrombosis).
And very, very testable. In fact, the investigators looked as though they were going to test at least part of this in their trial registration.
But no. I don't know what's happening with that trial; we'll come to that.
Instead, what the investigators did next in this paper was a pharmacokinetic (PK) study (the type of study you normally do to understand more about how a drug or ingredient is metabolised and excreted). The investigators found that following a 30g dose of erythritol, the erythritol concentration in the blood reached the levels that were associated with platelet aggregation in their mechanistic work. Then, taking all of their data together, they concluded that “erythritol is both associated with incident MACE (severe CVD events) and fosters enhanced thrombosis”.
Now, as a body of work, it’s not terrible. I mean, they did a nice triangulation between the observational data, mechanistic data and PK data. But this type of triangulation - using a series of weaker or indirect methods to try and join the dots - is only justified in situations where there simply isn't any other way of getting at an answer.
But……. if you want to find out if erythritol consumption increases platelet aggregation in humans….WHY WOULDN’T YOU JUST GO AHEAD AND DO THE GOLD STANDARD OF A DOUBLE-BLINDED RANDOMISED CONTROLLED TRIAL OF ERYTHRITOL VS PLACEBO ON PLATELET AGGREGATION IN HUMANS?
Oh but wait, I did mention that they are doing a trial….
Yes, but it’s not randomised. It’s also not blinded. And they don’t really have a decent control. I honestly don’t understand.
Randomisation is literally one of the easiest parts of doing a clinical trial. You have to get through ethical approval (time consuming), recruit the participants (the hard part), screen them (also hard), get informed consent, and then you randomise them (EASY). You can get free, super easy software to do randomisation with one click.
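To show just how easy that step is: here is a minimal sketch of permuted-block randomisation in a few lines of Python. The arm names, block size and participant IDs are my own illustration, not taken from the trial registration.

```python
import random

def block_randomise(participant_ids, arms=("erythritol", "placebo"),
                    block_size=4, seed=None):
    """Assign participants to arms using permuted-block randomisation,
    which keeps group sizes balanced within every block."""
    rng = random.Random(seed)
    per_block = block_size // len(arms)
    allocation = {}
    for start in range(0, len(participant_ids), block_size):
        # Each block contains an equal number of each arm, shuffled.
        block = list(arms) * per_block
        rng.shuffle(block)
        for pid, arm in zip(participant_ids[start:start + block_size], block):
            allocation[pid] = arm
    return allocation

# Hypothetical example: 8 participants end up balanced 4/4 across arms.
ids = [f"P{i:02d}" for i in range(1, 9)]
alloc = block_randomise(ids, seed=42)
```

That's the whole job: a seeded shuffle inside fixed-size blocks so neither arm ever drifts more than a couple of participants out of balance. Point-and-click tools do essentially this.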
Then blinding. Blinding is important for removing things like selection bias or the placebo effect. Blinding in diet studies is often notoriously hard because the participants themselves can usually tell when they’ve been assigned the Mediterranean or vegetarian diet, for example. Usually, you can only blind the investigating team, or maybe only the person doing the analysis. But double-blinding is the GOLD STANDARD (neither the participants nor the investigators know whether the participant is getting A or B) and should always be the method used if possible. And you know when it is possible, nay, EASY, to double-blind??? When you’re giving a sweetener in a plain packet. Seriously. I don’t understand why you wouldn't do this.
Then a control - they are giving xylitol as one arm, and erythritol as the other (again, no randomisation). Xylitol is also a polyol, and from the trial registration I think the investigators hypothesise that both may cause platelet aggregation. So for robustness, I would want to see a control like a non-polyol sweetened water, and maybe water on its own (hard to blind this, though), just to look at what platelet aggregation does without polyols, and what happens with no intervention. And again, if you are so worried that erythritol might be causing platelet aggregation, why wouldn’t you compare erythritol to a proper placebo before/alongside looking at xylitol?
And this is where I come to conflicts of interest.
I look at this paper, and I read through the background, and for me the interest of the investigators here should be in finding out: does erythritol cause platelet aggregation? If your interest is in genuinely answering this question, why the jiggins would you design a study that's not randomised, that's not blinded, and has no control??
What about the editors of this journal? Nature Medicine has an impact factor of 40 and is considered one of the most important and impactful scientific journals in the world. So surely the interest of this prestigious scientific journal is to disseminate the most robust, impactful science they can? HAHHAHAHAA.
The scientific research publishing model has many other competing interests.
Clicks, views and shares: The currency of the 21st century.
Online academic journals are not that different from online newspapers. They get a bunch of money from advertisers. And what advertisers want most is to have their ads in places where lots of people are going to see them. So if journals publish a controversial THIS COMMON INGREDIENT MIGHT BE KILLING YOU paper, they get a lot of eyeballs, as Gwyneth Paltrow put it.
Having wide readership probably also increases the likelihood that other researchers cite your journal’s papers - and this is a key metric that contributes to a journal’s precious impact factor.
What’s in it for researchers who publish in this way? Most academic institutions base promotion on the number of publications you have, e.g. in the last 3 years, and on which journals (basically which impact factor) your manuscripts were published in. So Nature Medicine wants sexy, controversial findings? WE SHALL DELIVER.
Researchers might also be pressured by their institutions to publish before they’re ready. And institutions I have been part of have had press departments designed solely to get the institution name in the news. I’ve worked in places where we used to get daily emails of “please provide a comment on this study so we can get it on Science Media Centre asap”.
But might researchers also be compelled to approach science in a way that seems designed to generate the most attention? Interviews on TV? More invites to conferences? More followers on twitter?
Scientific research publishing should not model itself on bloody TikTok.
As a consequence of the lack of rigour in the methodological approach used in this publication, we have no greater certainty about the effect of erythritol on health than we had before it was published. What’s the point? I posit that a big part of the answer is Clicks, Attention and Likes. From journals, from institutions and, yes, from researchers.
High quality research should move a field forward. It should be reproducible. It should be robust enough that - usually in conjunction with other studies and data - it can change clinical practice, laws, or guidelines. That’s what impact is.
Impact isn’t clicks. Scientists really need to stop with “my paper was the most widely read study of 2023!!!” or “50,000 downloads in a month!!!”. It’s meaningless. Brag when someone replicates your findings.
* erythritol is produced by the body, and the rate of production increases in cardiovascular disease. So you would expect more erythritol in the plasma of people with cardiovascular disease, compared to people without. Therefore, if you observe a positive relationship between erythritol and cardiovascular disease, you simply don't know whether it's the cardiovascular disease causing more erythritol to be produced, or erythritol from the diet causing the cardiovascular disease.