RECORDED ON JANUARY 24th 2025.
Dr. Elisabeth Bik is a microbiologist and scientific integrity consultant. She is known for her work detecting photo manipulation in scientific publications. Her work has resulted in 1,069 retractions, 149 expressions of concern, and 1,008 corrections (as of November 2023). Dr. Bik is the founder of Microbiome Digest, a blog with daily updates on microbiome research, and of the Science Integrity Digest blog.
In this episode, we talk about scientific integrity and scientific fraud. We start by defining the terms, and then talk about factors behind scientific fraud, including individual incentives, how science as an institution works, and the publication system. We also discuss plagiarism, fake images and AI, and how Dr. Bik discovers scientific fraud. Finally, we talk about what happens to fraudsters, the correction and retraction of scientific papers, and what would be the best ways of combating fraud.
Time Links:
Intro
Scientific integrity and scientific fraud
Individual incentives
Science as an institution
The publication system
Plagiarism
Discovering scientific fraud
Fake images and AI
What happens to fraudsters?
Correction and retraction of papers
Solutions
Follow Dr. Bik’s work!
Transcripts are automatically generated and may contain errors
Ricardo Lopes: Hello, everyone. Welcome to a new episode of the Dissenter. I'm your host, as always, Ricardo Lopes, and today I'm joined by Dr. Elisabeth Bik. She's a microbiologist and scientific integrity consultant, and today we're going to talk mainly about the topics of scientific integrity and scientific fraud. So, Dr. Bik, welcome to the show. It's a huge pleasure to have you on.
Elisabeth Bik: My pleasure to be here with you, Ricardo.
Ricardo Lopes: So let's start perhaps with some definitions, to explain some basic terms here. What is integrity in the context of science?
Elisabeth Bik: Well, integrity is about honesty, about truthfully reporting what you have found during your experiments. And that, for me, is the heart of what science should be: it's about finding the truth. So we do interviews, we do research, we work in the lab, we try to cure patients, but when we write our scientific papers about it, we need to truthfully report on what we have found. So basically it's about honesty and not lying, about really reporting what you found, not leaving out the results you didn't want, not cutting corners, and also being ethical towards your patients, towards the animals you might use in animal experiments, and towards your co-authors. It depends a little bit on the definition, but for me science integrity is this whole spectrum of being honest towards your data, your colleagues, and the experiments that you do.
Ricardo Lopes: And on the other hand, what is scientific fraud? What counts as scientific fraud?
Elisabeth Bik: Well, it's basically cheating and lying. There are different definitions in different countries. In the United States, where I live, it's usually one of three things, so there are three types of science fraud that you could look at. The first one is plagiarism, where you copy somebody else's text or ideas but don't give credit to the person who was first. That is one type of misconduct. The second type is falsification, where a person obtains results but changes them a little bit. Let's say a particular patient or a particular experiment had an outcome, but it wasn't quite the outcome you had hoped for as a researcher, so you change some of the data and now, oh, suddenly it's a positive or a negative; that would be falsification. And the third type of misconduct is fabrication, and that is completely making up results: typing some numbers into a spreadsheet, making a graph, and so on, without actually going into the lab and doing measurements and things like that. So those are the three types of science misconduct, but those are the more extreme types. There's a lot in between being completely honest and completely fraudulent, a big gray zone. We usually call those questionable research practices, where, let's say, you're a little bit sloppy, you don't label your data very well, and so you make an error when you report on the data. That's a very big middle zone, and it's sometimes very hard to know if a person really had an intention to mislead, which would be misconduct, or was just sloppy. It's very hard to distinguish between those things.
Ricardo Lopes: I also want to understand what some of the factors behind scientific fraud are, what leads to scientific fraud, and so on. So let me start by asking you about the individual incentives that people might have in science. What individual incentives do scientists actually have to commit fraud?
Elisabeth Bik: Well, we love metrics as scientists, and our bosses love metrics. We love to ask people how many papers they published, what the impact factor was, how many times they were cited, what their h-index is, how many of their papers were in Nature or in Science. So scientists, but also the people above them, the institutions, the hiring committees, the people who hire new faculty, for example, they love to look at those numbers, those metrics. And if you don't have very good metrics, if you didn't publish that much, or you weren't cited that much, or your results are just not very nice and so they're hard to publish, it is very tempting to cheat. It's like doping in sports, right? If you see everybody using doping and everybody winning, and you're trying to be honest, but you see everybody around you using doping, or in science cheating, it's very tempting to also cheat, because you see people winning races, you see people in science publishing papers who cheat. As long as there are no consequences for cheating, and you see a lot of cheating around you, other people start to cheat. So it depends on your environment. If you grow up as a scientist in a lab where everybody cheats, you're tempted to do it as well, because otherwise your results are not as great as those of the folks around you. And yeah, I think we put too much emphasis on positive results, on amazing findings, on numbers, on metrics, and that is ultimately what leads to cheating. But it also differs from country to country. In some countries the pressure to publish is higher; there's even money you can earn if you publish a paper, or you might get a promotion, and so it depends per country. That is why we see that in some countries there seems to be a little bit more cheating than in other countries. It's not because those people are better cheaters or bigger cheaters; it's because of these monetary incentives or these other incentive structures that lead people to cheat.
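For readers unfamiliar with the h-index mentioned above, here is a minimal sketch of how that metric is computed; the citation counts in the example are invented purely for illustration.

```python
# Minimal sketch of the h-index: a researcher has index h if h of their
# papers have at least h citations each. Citation counts below are made up.

def h_index(citations: list[int]) -> int:
    """Return the largest h such that h papers have >= h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

if __name__ == "__main__":
    example_citations = [42, 17, 9, 6, 3, 1, 0]  # hypothetical citation counts
    print(h_index(example_citations))  # -> 4: four papers have at least 4 citations
```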
Ricardo Lopes: This is also tied, at least to some extent, to career advancement, right? Because since scientists are sort of forced to publish X number of papers per year, depending on the country, that also has to do with them being able to advance in their career, right?
Elisabeth Bik: Yeah, absolutely. If we really hold scientists accountable for how many papers they publish, then people are going to find ways to make that happen. But in science you don't always get the results you want; your cells don't want to grow, or the results are different than you expected. And I feel it's not the right way to measure whether a scientist is good. We should not be looking too much at how many papers you published and how many were in Science. I don't think that necessarily tells you whether that person is a better scientist than a person who works slowly and only publishes maybe one paper every two years, or even longer. Good science needs time. And if we force people to publish once a year, people are going to find creative ways to make that happen, and that might involve cheating.
Ricardo Lopes: And do you think there's anything about how science as an institution is designed that promotes fraud or not?
Elisabeth Bik: Yeah, there is, because of that focus on numbers. I think most scientists who are successful will have a page listing all their papers, all their citations, and all the prizes they won. And that's just the way it is, but I think we focus a little bit too much on it. Especially when, let's say, people are looking for a new professor at a university, they will look at resumes, as in any other job, obviously, but we put a lot of focus on how many papers somebody published, all these metrics, all these citations, and universities do the same. They love to be number one in their country, and those lists of what the best university is are often based on the number of citations, the number of Nobel Prizes a university has won, the number of Science or Nature publications. So all of us are in this rat race to look at these numbers, and that is just the way that science is organized. There's just not enough focus on things like open science, on rigor, on how well we report on our data, on reproducibility, and those are also very important aspects of science. But if you do them as a scientist, if you, let's say, work on really good methods sections so that other people can reproduce your paper, or you share all your data, those things cost time, and they're not always rewarded by institutions that just focus on the number of publications. Things like mentorship, how you guide your graduate students through the early phases of their career, or how many papers you peer reviewed, or how many times you did social outreach or went to a pub to explain science to people and how great it is, those things are often not rewarded for us as scientists, and they're equally good. I feel that those things also make a really great scientist, but it's really hard to put them on your resume and to make them count.
Ricardo Lopes: And when it comes to the publication system itself, is the fact that negative results tend to not get published or at least not as much as the positive ones also a big problem?
Elisabeth Bik: Yes, that is a big problem, and that ties back to reproducibility. Because sometimes there's a publication with amazing findings, and people try to reproduce it, and they want to build on that work, because this is how science works, we build on each other's work, but they cannot seem to get it to work. I've been in that situation, and probably every scientist has: why doesn't the experiment work? And you start to think of yourself as a failure. It's very rare that you think, well, the other people didn't describe it well. You usually blame it on yourself: I guess I didn't do it correctly, I was just a couple of seconds too late and that's why the experiment failed, things like that. So we tend to be ashamed of those results and not publish them. Most journals will not accept them anyway; journals are looking for novel findings that other people would love to hear about, and it's very hard to publish negative results, to say, well, we couldn't reproduce this, we couldn't make this work. And very often there are many reasons why experiments don't work. Sometimes it's just a little tiny thing that you didn't think would be important. Let's say the temperature in your room is just one degree too high or too low, or the shaker doesn't shake the same way as the other people's shaker, or the ozone concentration in the air is too high and you had no idea that was important. But sometimes it could be fraud. There have been a couple of fraud cases I was involved in where later people said, yeah, we could never reproduce that paper. And then you have to think: how many people tried to reproduce it? How many people spent money and time and tears trying to make it work? It's just such a shame that we don't talk about these things, that we don't have platforms to publish negative results. There are a couple of journals starting to do this, and that's great, but we need more. I think 90% of the things we do don't work and we never publish them, and it's only the 10% that does work that gets published. So there's so much money being wasted on those negative results.
Ricardo Lopes: So I would like to ask you a little bit more about citations now. Are there ways by which researchers can artificially increase their citation metrics? And in this particular case, I would like for you to tell us specifically about the case of citation rings.
Elisabeth Bik: Right. I mean, you could of course always sneak in that one extra paper that is maybe not that relevant to your new paper, sneak in one of your old papers, and, oh, now you have an extra citation. I think most people cite their own work, because, like I said before, you build on each other's work and you build on your own work, so sometimes you will say, oh, we did this experiment exactly the way we did it in that older paper, and that would count towards your citations. I've seen papers where, let's say, people have 50 citations, 50 references, and 20 of them are from their own lab or their own group. That is obviously frowned upon, but there's actually no rule, and the reviewer might not even see it. Citation rings, that's a very creative way. I'm not quite sure how it works, but I think the way it works is that, let's say, a group of people, I don't know, 50 researchers, form a secret group and they say, we're all going to cite each other's papers. It doesn't really get noticed, because these people cite other people's papers and the other folks in the group cite their papers, and in that way you all increase each other's citations. And sometimes we see these papers where there's some general statement, like, I don't know, "DNA sequencing has greatly increased something," and then there are ten references, so between brackets it says references 37 to 46 or so. And all those references are from certain authors, and if you really look, they might actually not be related to the topic they're cited in the context of. So it's a way that people sneak in references, and very often these are citation rings, but they're sometimes very hard to recognize. If a person cites 20 of their own papers, of course we can see that, but if a person cites 20 papers that are not very related and come from that secret pact, from the citation ring, it's very hard to notice. Mhm.
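As a rough illustration of how reciprocal citation patterns like the ones described above can be surfaced, here is a small sketch that flags author pairs who cite each other unusually often. The author labels, counts, and threshold are invented for the example; this is not a description of any specific tool that integrity sleuths actually use.

```python
# Rough sketch: flag suspiciously reciprocal citation pairs among authors.
# Author names and citation counts below are toy data for illustration only.
from itertools import combinations
from collections import defaultdict

# cites[a][b] = number of times author a cites author b (toy data)
cites = defaultdict(lambda: defaultdict(int))
toy_edges = [
    ("A", "B", 12), ("B", "A", 15), ("A", "C", 10), ("C", "A", 11),
    ("B", "C", 14), ("C", "B", 9),  ("A", "D", 1),  ("D", "E", 2),
]
for src, dst, n in toy_edges:
    cites[src][dst] = n

def reciprocity(a: str, b: str) -> int:
    """Citations flowing both ways between two authors (min of the two directions)."""
    return min(cites[a][b], cites[b][a])

def suspicious_pairs(threshold: int = 5):
    """Return author pairs that cite each other at least `threshold` times each way."""
    authors = set(cites) | {b for d in cites.values() for b in d}
    return [(a, b) for a, b in combinations(sorted(authors), 2)
            if reciprocity(a, b) >= threshold]

print(suspicious_pairs())  # -> [('A', 'B'), ('A', 'C'), ('B', 'C')] in this toy example
```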
Ricardo Lopes: How about plagiarism? Is there lots of plagiarism in the scientific literature?
Elisabeth Bik: Not as much as there was, let's say, in the early 2000s. I actually started to become a science detective because somebody had plagiarized my work, and I found out about it and I was very mad. The paper was published around maybe 2010 or so, and I found it in 2013, and I think at that time journals weren't really checking for plagiarism very often. There were starting to be some tools to check for plagiarism; it's basically a glorified Google search where you just search whether somebody else used your text. Later, let's say around 2010 to 2015, journals started more and more to use plagiarism checkers. So I think real plagiarism is pretty rare now, because it gets caught by most journals. And I'm not talking about one or two sentences, or a couple of sentences in the methods where you just describe how you did a particular standard thing; those I don't call plagiarism. I'm talking about complete review articles that were entirely plagiarized, or whole paragraphs that were copy-pasted. That is the really bad plagiarism that you don't find very often anymore. But there are novel techniques to hide plagiarism, and in particular I need to mention, of course, artificial intelligence, because you can take a paragraph that somebody else has written and, if you're lazy, just say "rewrite that paragraph" using ChatGPT or some other generative AI, and it will generate a new text that is exactly the same as the old one, but reworded. So you can use your old plagiarism detector tools and they're not going to flag it as plagiarism, but in reality it still is: it's the same text, just reworded, and especially if you look at the references, you can see it's still using the same references. So it is very hard to recognize that. And you can always say, well, I just rewrote it, so it's not plagiarism, but with AI you can very quickly write an article without doing anything yourself other than copy-pasting text that somebody else has written. So it's increasingly hard to recognize plagiarized text. I'm a bit worried about that, but maybe that's just a new tool we have, and maybe we shouldn't worry too much about how we're going to rewrite the same thing again, because there are only so many ways you can write an introduction, let's say about breast cancer. You're going to use the same terms, you're going to use the same statistics. So maybe, in the end, we shouldn't worry too much about that type of plagiarism. But of course, when it's plagiarism of data, when you just steal somebody else's data or even figures or photos, that is still wrong. But when you describe a particular problem in science, in the introduction for example, you would just give the facts, and so I can sort of see that maybe we shouldn't worry too much about plagiarism we cannot recognize anymore. If you're a smart fraudster, you'll just use AI and rewrite that paragraph. Mhm.
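The plagiarism checkers mentioned above essentially search for stretches of reused text. Below is a minimal sketch of that idea, comparing overlapping word n-grams ("shingles") between two documents; the sample sentences are made up, and, as Dr. Bik notes, AI rewording defeats this kind of literal overlap check.

```python
# Minimal sketch of the text-overlap idea behind plagiarism checkers:
# break each document into overlapping word n-grams ("shingles") and
# measure how much the two sets overlap. Sample texts are invented.

def shingles(text: str, n: int = 5) -> set[tuple[str, ...]]:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(doc_a: str, doc_b: str, n: int = 5) -> float:
    """Jaccard similarity of the two documents' n-gram shingle sets."""
    a, b = shingles(doc_a, n), shingles(doc_b, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

original = "Breast cancer remains one of the most common cancers diagnosed in women worldwide."
copied = "Breast cancer remains one of the most common cancers diagnosed in women worldwide today."
print(round(overlap(original, copied), 2))  # high score -> likely reused text
```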
Ricardo Lopes: So tell us a little bit about how you do your work. I mean, what are the most common ways you discover scientific fraud?
Elisabeth Bik: So I focus on images, on photos. I will look at scientific papers and search for duplicated images. Let's say within a paper there's a bunch of photos, little panels, very often western blots or photos of tissues or cells, and the same photo is used twice. So it's a duplicated photo, but one is experiment A and one is experiment B, so they should not be the same. That could of course be sloppiness, a researcher who just grabbed the wrong photo. Sometimes photos overlap: you see that, let's say, one photo is experiment A, another photo is labeled as a different experiment, experiment B, but there's a little overlap that you can find, so it looks like it was the same sample, maybe moved a little bit under the microscope. So those are overlapping images, or sometimes images are mirrored or rotated, things like that. And the third type of image duplication I look for is photos that contain duplicated elements within the same photo. So it's one photo, and you see the same cell three times, or the same type of tissue copy-pasted a couple of times. And that is of course very bad, because people do that intentionally, maybe to hide a crack, or maybe to make it look like there are more cells growing with this drug, or fewer cells, things like that. So those are the types of things I'm looking for. I'm using my eyes, but I'm also using tools, and the tools I'm using have databases of images from other papers, so I might be able to find papers that have stolen an image from a different paper, or where the authors just reused their own photo again and made it look like a different experiment. I need to look very carefully, because sometimes the duplication is quite OK. Sometimes it's quite OK to use the same control experiment: let's say you test two different drugs, and you have the untreated cells versus drug one, and then the untreated cells versus drug two. The control, the untreated sample, might be the same, and that is quite OK. Those are fine, I'm not going to flag those, but I'm trying to find inappropriate duplications.
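As a very simplified illustration of how duplicated panels can be flagged automatically, here is a sketch using a small perceptual "average hash"; the file names are placeholders, and this is not the specific software Dr. Bik uses. Note that a simple hash like this misses mirrored, rotated, or partially overlapping panels, which is why careful visual inspection still matters.

```python
# Sketch: flag near-identical figure panels with a coarse "average hash".
# File names are placeholders; this is an illustration, not a real sleuthing tool.
from PIL import Image  # pip install Pillow

def average_hash(path: str, size: int = 8) -> int:
    """Downscale to size x size grayscale and encode pixels as above/below the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(h1: int, h2: int) -> int:
    """Number of differing bits; a small distance suggests a possible duplicate."""
    return bin(h1 ^ h2).count("1")

if __name__ == "__main__":
    h_a = average_hash("panel_experiment_A.png")  # placeholder file names
    h_b = average_hash("panel_experiment_B.png")
    if hamming(h_a, h_b) <= 5:  # threshold chosen only for illustration
        print("Panels look suspiciously similar - worth a manual check")
```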
Ricardo Lopes: And what about using fake images? I mean, what kinds of fake images do people use and where do they get them?
Elisabeth Bik: Well, we have seen western blots that look very unrealistic, and we think they were generated using artificial intelligence. Now, AI has of course greatly improved over the last two or three years, but there was a period when AI was not that good yet, I guess, and we saw these weird blots. I wasn't the first one seeing them; many other people had noticed them. All the western blots looked like horizontal stripes, but the backgrounds of these blots were always the same. It was hundreds and hundreds of papers that all had photos with the same backgrounds, and we were like, that is weird, they have to have a common source. Now, of course, the technique of AI is so much better that you can train your artificial intelligence: you can just give it a bunch of real photos and say, generate a new photo for me. And then I can no longer use my software, because it's not a duplicated image, it's a novel, unique image. So these are really, really hard to find. I think all of us have seen photos generated by AI that look very realistic. Sometimes you can still recognize that it's fake, but as the technique gets better, it's very easy to generate photos of tissues and cells, and even of human faces, that are indistinguishable from the real thing. And of course, if you are a science fraudster, you will be very happy with this technique, because you no longer have to work in the lab and you can crank out fake paper after fake paper. So generative AI, especially when it comes to images, when you think about science integrity, allows fraudsters to generate more and more fake papers, and I'm not quite sure how to detect that. I don't think we're ready for that amount of fraud.
Ricardo Lopes: Mhm. So tell us specifically about the case of the rat with the big balls and the enormous penis.
Elisabeth Bik: Yes, I wrote a blog post about it. When was that, last summer or so? That was a very funny image. It was not a photo, but it was clearly generated by AI, because it showed indeed a rat with a giant penis and giant testicles. It was published in a scientific paper, a paper in one of the Frontiers journals, and I think it was supposed to show how stem cells were isolated from the testicles of rats, which, you know, is a scientific technique. But it wasn't very helpful, first of all because the anatomical proportions of the rat were quite wrong. I don't think rats have penises like that; it rose into the skies, it was just so big. And also, that was funny, and obviously we all had a very good laugh with it, but you couldn't read the labels. AI cannot really generate letters very well yet, so you just couldn't read the letters. It was just funny and not very useful from a scientific point of view. That alone would not have been that bad, and actually the authors did disclose that they used Midjourney, which is one of the generative AI tools, to generate the images. But the text of the whole article also appeared to have been generated using AI, and they had not disclosed that, and so the paper was retracted, very quickly actually. It was very funny, we all had a very good laugh about it on Twitter and other social media, and I think most scientists have probably seen that image. I wrote a blog post about it. So it's good that it was retracted, but it generated a lot of joy, I think, in most people's hearts. Yeah.
Ricardo Lopes: There have also been cases of fake images of signaling pathways. What is the problem with such fakes?
Elisabeth Bik: Yeah, there are several science fields that have been infected, if you will, with fake images. In these fields there have been many papers that we think have been mass-produced by what we call paper mills, and these are fake papers. In some of the fields in biology there are, let's say, many different molecules, or many different types of cancers, and so you can just write an article and then change one molecule into another molecule, or change one cancer into another cancer, and rewrite the text a little bit so nobody would think it's the same text. But because you have all these different combinations of molecules or pathways or cancers, it looks like a new paper. We can almost recognize these by the titles, like "inhibition of non-coding RNA 123 by this pathway will decrease cell growth in this cancer through that pathway," or something like that. The title structures were very similar. The tadpole paper mill that I referred to earlier, where all the western blots had exactly the same backgrounds, those papers all had very similar title structures, to the point where you could just see the title and know it was the same group that generated it. And these papers were sold to different authors; they all came from people working in Chinese hospitals. It was basically the same paper written over and over again, text slightly changed, images slightly changed, but yeah, 6400 papers that we found all used the same template.
Ricardo Lopes: And what happens to scientists who are caught committing fraud?
Elisabeth Bik: Well, very often almost nothing, or nothing. Unfortunately, science is sort of like the Tour de France, the big cycling race, where at some point a lot of people used doping and nobody seemed to care. People were riding faster and faster because they used doping, and if you didn't use doping, you would definitely not win the race, and nobody seemed to care. And of course at some point people said, we need to care about the sport if we really care about cycling, about how good an athlete is versus how much doping they use; we need to check for doping. And that changed the sport, I think for the better. But in science, we're not at that stage yet. I think we're starting to realize that there's a lot of fraud, but very little happens. Occasionally a paper gets retracted, but it's not enough. And especially the serial fraudsters, the people who have produced multiple papers with fraud, those folks very often don't seem to get punished in any way; there are very few consequences. Sometimes, especially when it's a senior researcher, they will blame their junior research assistants or grad students, and maybe those folks get fired, but the senior person keeps offloading the blame. That's what we see. Papers often don't get retracted or even corrected. Authors come up with all kinds of stories trying to explain why this cell is visible twice or three times or a hundred times in the same photo, and the editors believe it and just issue a correction. It seems that, at least until a couple of years ago, editors and journals and publishers didn't really realize how big of a problem it had become. Now they are starting to realize it, but there has been a time when people were too naive. And institutions seem to also do very little. Very often senior researchers bring in a lot of money, so an institution might have to decide: OK, are we going to fire the fraudulent professor, or are we going to keep him or her on because they bring in so much money, so many grants? I think a lot of universities will just tell the fraudulent person, oh, don't commit fraud anymore, or at least fraud better so that we cannot catch you, because they really love, of course, the money that some of these professors bring in. These people have high publication records, and that makes the university end up higher in the rankings, and we all love the rankings. There's just so much conflict of interest. You want to tell these folks, you know, please leave science and don't commit fraud anymore, but in reality these people are not held accountable, and they keep on committing fraud, and there are too many examples of that.
Ricardo Lopes: So, let me ask you just a little bit about retraction. When are papers retracted and, I mean, how many among the ones that should be retracted really get retracted?
Elisabeth Bik: Oh yeah. So, taking a step back: once a scientific paper is peer reviewed and published, and somebody finds an error in it, there are several things that can then be done to the paper. The first one is a correction. It can be called an erratum or a corrigendum, but it's basically all the same thing. That is usually used when, let's say, there's a small error: two little photos were switched, or there's a misspelling in the name of one of the authors, which happens a lot, or they forgot to mention the funder, things like that. Small errors that don't really influence the paper. Then there can be an expression of concern, which is very often a temporary thing, and it's good because, if the publisher puts that on the paper, it's usually because there's an ongoing investigation or there are some bigger errors and they're still thinking about it, but at least in the meantime they're warning the readers that there's a problem. And then the most severe step that can happen to a paper is a retraction. The paper will remain online; it's not removed from the internet. It will usually be labeled with a watermark saying "retracted," or a big red sign at the top, and you can still read it, but basically it tells you that the publisher or the journal editor no longer has faith in the paper, and it's very often because of a fatal error. Or it could be because one of the authors found that one of the calculations was completely wrong, completely changing the outcome. So it can be started by an author who says, sorry, we found a big error, we're going to retract the paper, redo the experiments, and maybe republish it. And I think that is great, when the authors actually recognize that there was a big error and retract the paper. But most often it is because of all kinds of suspicions of misconduct or big errors: let's say there are many image panels that overlap, manipulated photos, duplications, to the extent where you no longer have trust in the data. And that is often raised by whistleblowers, by external people, maybe by me: when I see a big problem in a paper, I will write to the editors, and then the editors will perhaps decide to retract the paper. Now, it doesn't happen as often as I would like to see. Initially, when I started to do this work, I had a set of 800 papers in which I found problems. I wrote to the editors, and not all of these should be retracted, obviously, some were small errors, but only a third were either corrected or retracted after waiting five years and sending a reminder. Five years seems a very long time to take a decision on whether a paper should be corrected or retracted. So only a third of these papers had been acted on, and now, about ten years later, only a little more than half of the papers have either a correction or a retraction. So it's very slow, and in half of the cases, even after waiting ten years, nothing happens, and it's so frustrating, because we see a big problem in the paper and the journals look the other way. That is bad. If you know that the airbag of your car is faulty, you would hope that there's a recall, and that you don't have to pressure the car manufacturer to get a new car or to get it fixed.
And it seems that publishers in many cases, and there are good exceptions, there are good publishers who really do the right thing, but there are too many publishers who don't seem to care about quality control. They publish bad papers, and then when people complain about it, the customer service is also very bad. It's like buying a very expensive car, realizing there's a problem, and then the dealer not wanting to take any action. As customers, as people who bought a car, you would not accept that, but in science we all seem to be OK with that, and we should not be OK with it, because we pay the publishers a lot of money. I'm not quite sure what all that money goes to, but it doesn't seem to go to quality control or customer service, so that needs to change. And again, there are good publishers who do the right thing, but there are still too many who are too focused on making money and not on quality control and customer service.
Ricardo Lopes: And in the meantime, while we wait for the paper to get corrected or retracted, it can be cited several times, right?
Elisabeth Bik: Yeah, that's a very good point. The reader might not know that there's a big problem with this paper. If I write to the editor, none of the readers would know there's a big problem, and that is why most people who work like me, who find problems in papers, will post on a website called PubPeer.com, as in publication and peer review, PubPeer, and we will flag papers for all kinds of problems. So if you do a literature search, you can type in the DOI, the unique identifier of the paper, and you can see if somebody else has seen something about the paper. It could be a positive comment or a negative comment. Now, most comments on PubPeer are about image problems, because they're visible, right? It's very easy to see that a photo is duplicated; it's much easier to see than that a bar graph has been made up. If you are a good fraudster, that would not be visible. But we want to warn the readers of these papers that there are particular problems. As soon as I find something, I will post it on PubPeer, so within five minutes it's on PubPeer, while retractions sometimes take, you know, ten years, and that is just too slow. Why would it take so long? It's frustrating, and all this time people might think that the paper is completely fine, they might not realize that there are potential problems with it. So it's a very good point, and that is why we science detectives are very often frustrated with the lack of response.
Ricardo Lopes: So, I have one final question slash topic I would like to explore with you here today. So, what do you think are the best solutions here? How can we best combat fraud?
Elisabeth Bik: Yeah, it's a big question, right? Well, there are multiple things you can think of. Of course, we need to educate young people at the start of their science career: what is acceptable, what is OK, what are things you should not be doing. That is one thing; I think education is important. But I think at the same time there need to be consequences for science fraud. There need to be quicker corrections and retractions. There needs to be a lower threshold for science fraud. If you find a photoshopped image, you should not allow the authors to send in a completely new set of figures; that would be like winning the Tour de France, testing positive for doping, and saying, well, just bring in a clean sample in two weeks and we're all good. No, that paper should be retracted. There should be bigger consequences from institutions and funders for people who have been caught committing fraud; they should not receive any funding for the foreseeable future. Some of these things happen occasionally, but it seems too little, too late, too slow. So consequences and education, those I think are the most important things. And I think we also need to focus more on open science: sharing your data sets, sharing negative results, publishing negative results, and maybe doing science a little bit more slowly, being able to show that experiments are reproducible. Sometimes I dream about a new way of science publishing where we publish smaller experiments, one experiment, one figure, and then other people can reproduce that and say, yes, we were able to get exactly the same results, or no, we could not do it, am I maybe missing something, are there certain experimental conditions you didn't include? As a way maybe to slow down science a little bit, but make it more reproducible and reliable. I think we should just focus more on that. But it's so easy to focus on metrics and publications, and how are we going to change our complete system? It's probably not going to happen anytime soon. But I think we need to think about other ways of publishing our papers, and about rewarding reproducibility and the publishing of negative results.
Ricardo Lopes: Great. So, Doctor Bick, just before we go, where can people find you and your work on the internet?
Elisabeth Bik: Well, I was on Twitter/X, but I've switched to Bluesky, so you should be able to find me there under my name, Elisabeth Bik. I'm on LinkedIn, and Science Integrity Digest is my blog. I'm usually very responsive on Bluesky; that's where you'll have the biggest chance of finding me. And if I don't follow you, just ask me to follow you so you can send me a direct message, and then I'll share my email address and we can talk.
Ricardo Lopes: Great. So thank you so much for taking the time to come on the show. It's been really fun and a real pleasure to talk with you.
Elisabeth Bik: Yeah, same here. Thank you, Ricardo, and I hope everybody will stay very honest.
Ricardo Lopes: Hi guys, thank you for watching this interview until the end. If you liked it, please share it, leave a like and hit the subscription button. The show is brought to you by Nights Learning and Development done differently, check their website at Nights.com and also please consider supporting the show on Patreon or PayPal. I would also like to give a huge thank you to my main patrons and PayPal supporters Perergo Larsson, Jerry Mullerns, Frederick Sundo, Bernard Seyches Olaf, Alex Adam Castle, Matthew Whitting Barno, Wolf, Tim Hollis, Erika Lenny, John Connors, Philip Fors Connolly. Then the Mari Robert Windegaruyasi Zup Mark Nes called in Holbrookfield governor Michael Stormir, Samuel Andre, Francis Forti Agnsergoro and Hal Herzognun Macha Joan Labrant Juan and Samuel Corriere, Heinz, Mark Smith, Jore, Tom Hummel, Sardus France David Sloan Wilson, asilla dearraujoro and Roach Diego London Correa. Yannick Punter Darusmani Charlotte blinikol Barbara Adamhn Pavlostaevsky nale back medicine, Gary Galman Sam of Zallirianeioltonin John Barboza, Julian Price, Edward Hall Edin Bronner, Douglas Fre Francaortolotti Gabrielon Scorteseus Slelitsky, Scott Zacharyishtim Duffyani Smith John Wieman. Daniel Friedman, William Buckner, Paul Georgianneau, Luke Lovai Giorgio Theophanous, Chris Williamson, Peter Wozin, David Williams, Dio Augusta, Anton Eriksson, Charles Murray, Alex Shaw, Marie Martinez, Coralli Chevalier, bungalow atheists, Larry D. Lee Junior, old Erringbo. Sterry Michael Bailey, then Sperber, Robert Grassyigoren, Jeff McMann, Jake Zu, Barnabas radix, Mark Campbell, Thomas Dovner, Luke Neeson, Chris Stor, Kimberly Johnson, Benjamin Galbert, Jessica Nowicki, Linda Brandon, Nicholas Carlsson, Ismael Bensleyman. George Eoriatis, Valentin Steinman, Perkrolis, Kate van Goller, Alexander Aubert, Liam Dunaway, BR Masoud Ali Mohammadi, Perpendicular John Nertner, Ursula Gudinov, Gregory Hastings, David Pinsoff Sean Nelson, Mike Levine, and Jos Net. A special thanks to my producers. These are Webb, Jim, Frank Lucas Steffinik, Tom Venneden, Bernard Curtis Dixon, Benedic Muller, Thomas Trumbull, Catherine and Patrick Tobin, Gian Carlo Montenegroal Ni Cortiz and Nick Golden, and to my executive producers Matthew Levender, Sergio Quadrian, Bogdan Kanivets, and Rosie. Thank you for all.