Leslie McIntosh - Digital Science
https://www.digital-science.com/people/leslie-mcintosh/

The TL;DR on… ERROR
https://www.digital-science.com/blog/2024/09/tldr-error/ (Wed, 25 Sep 2024)

The post The TL;DR on… ERROR appeared first on Digital Science.

We love a good deep dive into the awkward challenges and innovative solutions transforming the world of academia and industry. In this article and in the full video interview, we’re discussing an interesting new initiative that’s been making waves in the research community: ERROR.

Inspired by bug bounty programs in the tech industry, ERROR offers financial rewards to those who identify and report errors in academic research. ERROR has the potential to revolutionize how we approach, among other things, research integrity and open research by incentivizing the thorough scrutiny of published research information and enhancing transparency.

I sat down with two other members of the TL;DR team, VP of Research Integrity Leslie McIntosh and VP of Open Research Mark Hahnel, to shed light on how ERROR can bolster trust and credibility in scientific findings, and explore how this initiative aligns with the principles of open research – and how all these things can drive a culture of collaboration and accountability. We also discussed the impact that ERROR could have on the research community and beyond.

ERROR is a brand new initiative created to tackle errors in research publications through incentivized checking. The TL;DR team sat down for a chat about what this means for the research community through the lenses of research integrity and open research.

Leslie’s perspective on ERROR

Leslie’s initial thoughts about ERROR were cautious, recognizing its potential to strengthen research integrity but also raising concerns about unintended consequences.

She noted that errors are an inherent part of the scientific process, and over-standardization might risk losing the exploratory nature of discovery. Drawing parallels to the food industry’s pursuit of efficiency leading to uniformity and loss of nutrients, Leslie suggested that aiming for perfection in science could overlook the value of learning from mistakes. She warned that emphasizing error correction too rigidly might diminish the broader mission of science – discovery and understanding.

Leslie: “Errors are part of science and part of the discovery… are we going so deep into science and saying that everything has to be perfect, that we’re losing the greater meaning of what it is to search for truth or discovery [or] understand that there’s learning in the errors that we have?”

Leslie also linked this discussion to open research. While open science encourages interpretation and influence from diverse participants, public misunderstanding of scientific errors could see those errors weaponized, undermining trust in research. She stressed that errors are an integral, even exciting, part of the scientific method and should be embraced rather than hidden.

Mark’s perspective on ERROR

Mark’s initial thoughts were more optimistic, especially within the context of open research.

Mark: “…one of the benefits of open research is we can move further faster and remove any barriers to building on top of the research that’s gone beforehand. And the most important thing you need is trust, [which] is more important than speed of publication, or how open it is, [or] the cost-effectiveness of the dissemination of that research.”

Mark also shared his excitement about innovation in the way we do research. He was particularly excited about ERROR’s approach to addressing the problem of peer review, as the initiative offers a new way of tackling longstanding issues in academia by bringing in more participants to scrutinize research.

He thought the introduction of financial incentives to encourage error reporting could lead to a more reliable research landscape.

“I think the payment for the work is the most interesting part for me, because when we look at academia and perverse incentives in general, I’m excited that academics who are often not paid for their work are being paid for their work in academic publishing.”

However, Mark’s optimism was not entirely without wariness. He shared Leslie’s caution about the incentives, warning of potential unintended outcomes. Financial rewards might encourage individuals to prioritize finding errors for profit rather than for the advancement of science, raising ethical concerns.

Ethical concerns with incentivization

Leslie expressed reservations about the terminology of “bounty hunters”, which she felt criminalizes those who make honest mistakes in science. She emphasized that errors are often unintentional.

Leslie: “It just makes me cringe… People who make honest errors are not criminals. That is part of science. So I really think that ethically when we are using a term like bounty hunters, it connotes a feeling of criminalization. And I think there are some ethical concerns there with doing that.”

Leslie’s ethical concerns extended to the global research ecosystem: ERROR could disproportionately benefit well-funded researchers from the Global North, leaving under-resourced researchers at a disadvantage. She called for more inclusive oversight and diversity in the initiative’s leadership to prevent inequities.

She also agreed with Mark about the importance of rewarding researchers for their contributions. Many researchers do unpaid labor in academia, and compensating them for their efforts could be a significant positive change.

Challenges of integrating ERROR with open research

ERROR is a promising initiative, but I wanted to hear about the challenges in integrating a system like this alongside existing open research practices, especially when open research itself is such a broad, global and culturally diverse endeavor.

Both Leslie and Mark emphasized the importance of ensuring that the system includes various research approaches from around the world.

Mark: “I for one think all peer review should be paid and that’s something that is relatively controversial in the conversations I have. What does it mean for financial incentivization in countries where the economics is so disparate?”

Mark extended this concept of inclusion to the application of artificial intelligence (AI), machine learning (ML) and large language models (LLMs) in research, noting that training these technologies requires access to diverse and accurate data. He warned that if certain research communities are excluded, their knowledge may not be reflected in the datasets used to build future AI research tools.

“What about the people who do not have access to this and therefore their content doesn’t get included in the large language models, and doesn’t go on to form new knowledge?”

He also expressed excitement about the potential for ERROR to enhance research integrity in AI and ML development. He highlighted the need for robust and diverse data, emphasizing that machines need both accurate and erroneous data to learn effectively. This approach could ultimately improve the quality of research content, making it more trustworthy for both human and machine use.

Improving research tools and integrity

Given the challenges within research and the current limitations of tools like ERROR, I asked Leslie what she would like to see in the development of these and other research tools, especially within the area of research integrity. She took the opportunity to reflect on the joy of errors and failure in science.

Leslie: “If you go back to Alexander Fleming’s paper on penicillin and read that, it is a story. It is a story of the errors that he had… And those errors were part of or are part of that seminal paper. It’s incredible, so why not celebrate the errors and put those as part of the paper, talk about [how] ‘we tried this, and you know what, the refrigerator went out during this time, and what we learned from the refrigerator going out is that the bug still grew’, or whatever it was.

“You need those errors in order to learn from the errors, meaning you need those captured, so that you can learn what is and what is not contributing to that overall goal and why it isn’t. So we actually need more of the information of how things went wrong.”

I also asked Mark what improvements he would like to see from tools like ERROR from the open research perspective. He emphasized the need for better metadata in research publishing, especially in the context of open data. Drawing parallels to the open-source software world, where detailed documentation helps others build on existing work, he suggested that improving how researchers describe their data could enhance collaboration.

Mark also feels that the development of a tool like ERROR highlights other challenges in the way we are currently publishing research, such as deeper issues with peer review, or incentives for scholarly publishing.

Mark: “…the incentive structure of only publishing novel research in certain journals builds into that idea that you’re not going to publish your null data, because it’s not novel and the incentive structure isn’t there. So as I said, could talk for hours about why I’m excited about it, but I think the ERROR review team have a lot of things to unpack.”

Future of research integrity and open research

What do Leslie and Mark want the research community to take away from this discussion on error reporting and its impact on research integrity and open research?

Leslie wants to shine a light on science communication and its role in helping the public to understand what ERROR represents, and how it fits into the scientific ecosystem.

Leslie: “…one of the ways in which science is being weaponized is to say peer review is dead. You start breaking apart one of the scaffolds of trust that we have within science… So I think that the science communicators here are very important in the narrative of what this is, what it isn’t, and what science is.”

Both Leslie and Mark agreed that while ERROR presents exciting possibilities, scaling the initiative remains a challenge. Mark raised questions about how ERROR could expand beyond its current scope, with only 250 papers reviewed over four years and each successful error detection earning a financial reward. Considering the millions of papers published annually, it is unclear how ERROR can be scaled globally and become a sustainable solution.

Mark: “…my biggest concern about this is, how does it scale? A thousand francs a pop, it’s 250 papers. There [were] two million papers [published] last year. Who’s going to pay for that? How do you make this global? How do you make this all-encompassing?”
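Mark’s scaling worry is easy to make concrete with back-of-envelope arithmetic, using only the rough figures he quotes above (not precise data):

```python
# Rough scaling estimate for an ERROR-style bounty across all of science,
# using the approximate figures Mark quotes above.
reward_chf = 1_000            # roughly "a thousand francs a pop"
error_papers = 250            # papers ERROR plans to review over four years
papers_per_year = 2_000_000   # approximate global publication volume

error_budget = reward_chf * error_papers           # upper bound if every review pays out
global_annual_cost = reward_chf * papers_per_year  # cost to cover one year's literature

print(f"ERROR's scope: ~{error_budget:,} CHF")
print(f"Global coverage, per year: ~{global_annual_cost:,} CHF")
```

Even if only a fraction of papers paid out, covering a single year’s literature at this rate would run to billions of francs, which is the heart of the sustainability question.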

Conclusion

It is clear from our discussion that ERROR represents a significant experiment in enhancing both research integrity and open research through an incentivized bug-hunting system.

Leslie has highlighted how the initiative can act as a robust safeguard, ensuring that research findings are more thoroughly vetted and reliable, while reminding us that the approach must be inclusive. Mark has emphasized the potential for a tool like this to make publication processes more efficient – and even, finally, to reward researchers for all the additional work they do – but he wonders how it can scale up to foster a transparent, collaborative research environment aligned with the ethos of open research.

Leslie and Mark’s comments are certainly timely, given that the theme of Digital Science’s 2024 Catalyst Grant program is innovation for research integrity. You can find out more about how different segments of research can and should be contributing to this space by reading our TL;DR article on it here.

We look forward to exploring more innovations and initiatives that are going to shape – or shatter – the future of academia, so if you’d like to suggest a topic we should be discussing, please let us know.

FoSci – The emerging field of forensic scientometrics
https://www.digital-science.com/blog/2024/05/forensic-scientometrics/ (Wed, 08 May 2024)

Our VP Research Integrity, Dr Leslie McIntosh, explains forensic scientometrics (FoSci) – the emerging field focused on inspecting and upholding the integrity of scientific research.

The post FoSci – The emerging field of forensic scientometrics appeared first on Digital Science.

“In this era of increased scrutiny, defining and embracing forensic scientometrics – FoSci – becomes essential in strengthening trust in and around science.”
Dr Leslie McIntosh, VP, Research Integrity, Digital Science

This work of inspecting and upholding the integrity of scientific research has long been conducted in the background of science and scholarly communication, carried out by passionate individuals driven by ethics and principles to ensure the veracity of the scientific method or record and the downstream impacts on policy, practice, and theory. 

Groups and individuals, such as RetractionWatch, have monitored, collected, classified, and written about retractions for over ten years. Many of those doing the verifying have developed detection specializations – from nefarious networks to image manipulation to tortured phrases. Additionally, many organizations have developed specific offices and infrastructure to support research integrity – such as the US Office of Research Integrity at the Department of Health and Human Services, institutionally based research integrity and security offices, and the newly formed research integrity offices within major publishing organizations.

Despite the growth of investigative research and methods on scientific misconduct, the discipline lacks a common definition and description. So, what do you call the collective work of analyzing publications, data, images, and statistics to uncover errors, misinformation, or manipulations in scientific publications? We propose calling this emerging field forensic scientometrics – FoSci for short.

Research integrity experts call for new forensics discipline: Forensic scientometrics

“By embracing FoSci as a specialized and necessary field, we can galvanize interest, foster the development of a community of practice, and signal the importance of this crucial work.”
Dr Leslie McIntosh, VP, Research Integrity, Digital Science

Why use forensics? Forensics refers to applying scientific knowledge and methods to matters of law, particularly for investigating crimes and providing evidence in legal proceedings.

First, there is an investigative nature to the work we do, even if the results do not end up in a court of law. While fraud investigations and syndicate networks fall within legal realms, the intentional manipulation of the scientific process itself has no special place in law. That doesn’t mean it shouldn’t.

Second, scientific publications are being used in courts of law as evidence, but the papers themselves and their veracity are not scrutinized by expert scientometricians. A common belief is that a peer-reviewed academic paper indicates the research has passed the stress test of the scientific method. Yet peer reviewers vet (or should vet) the scholarly question within the paper, not the weight of the evidence in a societal context.

Scientometrics involves the quantitative analysis of scientific publications and research outputs in this larger context. It encompasses measurement and evaluation related to scientific activities, such as the impact of research, patterns of collaboration among researchers, citation analysis, and the assessment of the productivity and influence of individuals, institutions, or scientific journals. Scientometrics employs statistical and mathematical methods to derive meaningful insights into the structure and dynamics of scientific knowledge, contributing to our understanding of the scientific community’s development and impact over time.

As we navigate the evolving landscape of scientific inquiry, the emergence of forensic scientometrics as a distinct field reflects a collective commitment to upholding research integrity. From the pioneers who have tirelessly exposed misconduct to the institutional changes now taking place, the journey towards a recognized field is well underway. In this era of increased scrutiny, defining and embracing forensic scientometrics – FoSci – becomes essential in strengthening trust in and around science.

Bio

Leslie D. McIntosh, PhD is the VP of Research Integrity at Digital Science and dedicates her work to improving research, reducing disinformation, and increasing trust in science.

As an academic turned entrepreneur, she founded Ripeta in 2017 to improve research quality and integrity. Now part of Digital Science, the Ripeta algorithms lead in detecting Trust Markers of research manuscripts. She works with governments, publishers, institutions, and companies around the globe to improve research and scientific decision-making. She has given hundreds of talks, including to the US-NIH, NASA, and World Congress on Research Integrity, and consulted with the US, Canadian, and European governments. Dr. McIntosh’s work was the most-read RetractionWatch post of 2022. In 2023, her influential ideas on achieving equity in research were highlighted in the Guardian and Science.

Publications and preprints

McIntosh, Leslie D. and Hudson Vitale, Cynthia. 2024. Forensic Scientometrics — An emerging discipline to protect the scholarly record. arXiv. https://doi.org/10.48550/arXiv.2311.11344

Porter, Simon and McIntosh, Leslie D. 2024. Identifying Fabricated Networks within Authorship-for-Sale Enterprises. arXiv. https://doi.org/10.48550/arXiv.2401.04022

McIntosh, Leslie D. and Hudson Vitale, Cynthia. 2023. Safeguarding scientific integrity: A case study in examining manipulation in the peer review process. Accountability in Research, pp. 1–19. https://doi.org/10.1080/08989621.2023.2292043

Blogs

McIntosh, Leslie D. (2024): FoSci – The Emerging Field of Forensic Scientometrics. The Scholarly Kitchen.

McIntosh, Leslie D. (2024): Science Misused in the Law.

Science misused in the law
https://www.digital-science.com/blog/2024/02/science-misused-in-the-law/ (Wed, 07 Feb 2024)

Digital Science has conducted an investigation into 11 papers used in evidence in a prominent abortion drug legal case in the United States. Here are the findings.

The post Science misused in the law appeared first on Digital Science.

A case study of the scientific articles cited in the US Mifepristone court case

The recent retraction of three published papers, two of which were instrumental in a prominent abortion drug legal case in the United States, highlights the implications of potentially biased research influencing critical legal decisions. This post shares details of a Digital Science investigation into 11 papers used in court evidence – including the now-retracted papers – and what that means for both science and law.

Science as evidence

As scientists investigating trust in and of science, we began to explore how science – specifically published research – is used in the law. A notable concern: if the court system utilizes scholarly articles to establish scientific truths, then it relies on a system that may be vulnerable to influence by stakeholders who aim to advance their agendas rather than advance impartial science.

When science is incorrectly used – specifically in the case of law – there may be far-reaching consequences. Science can be misused or manipulated to add gravitas, particularly in cases that are polarizing in society, are emotionally charged and fall along societal and political lines. Yet, ignorance of science is no excuse for misuse of science in the law.

For scholarly information to be considered credible and trustworthy science, it must meet multiple criteria: it must be drawn from a representative sample of papers on a given topic and delivered through trustworthy mechanisms. After those initial checks, the science itself must be sound. Then, the evidence must also be appropriately understood by the courts.

“If the court system utilizes scholarly articles to establish scientific truths, then it relies on a system that may be vulnerable to influence by stakeholders who aim to advance their agendas rather than advancing impartial science.”

Mifepristone legal challenge

Our case study questions whether the scientific evidence presented in a high-profile court case in the United States (Alliance for Hippocratic Medicine, et al. v. the US Food and Drug Administration) provides an unbiased presentation of the science.

Heard in a Texas federal court in 2023, the legal action was taken against the US Food and Drug Administration (FDA) and aimed at overturning FDA approval of the abortion drug mifepristone. The case will be heard before the US Supreme Court on 26 March 2024.

Eleven published research articles were offered as evidence in this court case. (For the full list, see our Methods and Data section below). The federal judge ordered a suspension of mifepristone’s FDA approval, citing scientific evidence presented to the court.

This case was interesting to us as it made US and international news in the wake of the 2022 Dobbs case in the US Supreme Court, which had overturned Roe v. Wade. Because of our interest in research integrity, we felt this court case was worthy of further investigation, in which we asked ourselves: How were scientific papers and science presented and used as evidence?

What we would have expected: The strongest research on mifepristone, published in quality journals, with rigorous peer reviews and any conflicts of interests described. What we found, however, did not meet these standards.

Retracted articles

Three of the 11 articles cited in the court case were published in one journal, Health Services Research and Managerial Epidemiology. This journal is backed by a recognized publisher, Sage, which has the ability to investigate possible manipulation of the scientific process. Six months after an expression of concern was issued, the publisher retracted three of Dr Studnicki’s papers based on undisclosed conflicts of interest and unreliable research methodology. Two of those papers were cited in the Texas court case. In the retraction notice, Sage “confirmed that all but one of the article’s authors had an affiliation with one or more of Charlotte Lozier Institute, Elliot Institute, and American Association of Pro-Life Obstetricians and Gynecologists, all pro-life advocacy organizations, despite having declared they had no conflicts of interest when they submitted the article for publication or in the article itself.”

Expected results

What should we have expected from credible published research in this field?

Representative sample of papers over a given topic

Using Dimensions, the world’s largest linked research database, we queried for (mifepristone OR RU-486) AND “medical abortion” over the years 2019–2022 to assess the experts, organizations, and journals in this field of study. We also queried for (mifepristone OR RU-486) AND “medication abortion” over the same years. The results varied slightly but did not change the conclusions, and are not shown below.
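As an aside for readers who want to run a similar search themselves: a query along these lines can be expressed in the Dimensions Search Language (DSL). The sketch below only composes the query string; the returned fields, the limit clause, and the escaping are illustrative assumptions, not the exact query used in this study.

```python
def build_dsl_query(synonyms, phrase, start_year, end_year, limit=1000):
    """Compose a Dimensions Search Language (DSL) full-text query:
    any of `synonyms` AND the exact `phrase`, restricted to a year range."""
    boolean = " OR ".join(synonyms)
    # Inside a DSL search string, literal double quotes are backslash-escaped
    search_terms = f'({boolean}) AND \\"{phrase}\\"'
    return (
        f'search publications for "{search_terms}" '
        f"where year in [{start_year}:{end_year}] "
        f"return publications[id+title+journal+times_cited] "
        f"limit {limit}"
    )

query = build_dsl_query(["mifepristone", "RU-486"], "medical abortion", 2019, 2022)
print(query)
```

A string like this could then be submitted via the Dimensions API (for example through a client library), with results aggregated by author, organization, and journal as described above.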

Experts

Due to the sensitivity of this research topic, we have not shown the authors’ names but summarize the findings.

Expected authors of the publications

When we query for mifepristone (or RU-486) and “medical abortion” over the years 2019–2022, we find the 25 top researchers (by number of publications, citations, citation mean, and Field Citation Ratio (FCR) for those publications). We note that none of these top 25 researchers – or their papers – were cited in the US court case on mifepristone.

Actual authors of the publications used in court

Using the data from Dimensions, we ascertained that none of the authors of publications cited in the mifepristone court case are among the top 25 in their field worldwide. The authors also do not appear to be connected with those top researchers. Instead, the authors of four of the 11 papers presented in the mifepristone court case are all well-connected co-authors with each other.

Organizations

Expected organizations to have affiliations on the publications

We identified 500 institutions (by number of publications and citations) publishing research in this field. We would have expected to see these institutions and their work cited in the mifepristone court case.

Actual organizations to have affiliations on the publications

The authors of the publications used in the mifepristone court case are primarily affiliated with the Charlotte Lozier Institute (CLI), who have co-authored publications with members of the American Association of Pro Life OB/GYNS (AAPLOG) among other organizations. These authors do not have connections to those highly-cited organizations shown in the previous image. Two of the 11 papers have authors from those institutions in the above figure – those from Planned Parenthood and Princeton University.

Journals

Expected publication journals to have been represented 

We would have expected journal representation from the top 20 journals by mean number of citations. ‘Citations (Mean)’ is the mean citation count for a given group of publications being analyzed. Other metrics can also be used to determine the top journals of the field.

ID Journal Name Citations (mean)
1 Current Opinion in Cell Biology 103
2 Nature 102
3 Frontiers in Immunology 82
4 Endocrine Reviews 74
5 MMWR Surveillance Summaries 58
6 Human Reproduction Update 46
7 Social Science & Medicine 44
8 The Lancet Regional Health – Americas 35
9 Advances in Pediatrics 33
10 Stem Cell Research & Therapy 33
11 Human Reproduction Open 32
12 Journal of Women’s Health 32
13 Reproductive Toxicology 31
14 Journal of Clinical Nursing 30
15 The Lancet Global Health 29
16 JAMA Network Open 28
17 Pharmaceuticals 28
18 Journal of Health Economics 26
19 Drug Delivery and Translational Research 26
20 Social Science Research 24
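To make the ‘Citations (Mean)’ metric concrete, here is a minimal sketch of how such a ranking can be computed; the publication records are made up for illustration, not Dimensions data.

```python
from collections import defaultdict
from statistics import mean

def mean_citations_by_journal(pubs):
    """Group (journal, times_cited) records by journal and rank
    journals by mean citation count, highest first."""
    groups = defaultdict(list)
    for journal, cites in pubs:
        groups[journal].append(cites)
    return sorted(
        ((journal, mean(cites)) for journal, cites in groups.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )

# Hypothetical publication records, not real Dimensions data
publications = [
    ("Nature", 120), ("Nature", 84),
    ("Contraception", 10), ("Contraception", 4), ("Contraception", 7),
]
ranking = mean_citations_by_journal(publications)
print(ranking)
```

In practice the grouping would run over the thousands of records returned by the query, and the same pattern extends to other metrics (citation totals, FCR) by swapping the aggregation function.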

Which journals were used in the case

Source: Titles in the Mifepristone 2023 Texas Case
Journal Citations (mean) Rank
Obstetrics and Gynecology 11 78
Health Communication 11 85
Contraception 7 74
Health Services Research and Managerial Epidemiology 3 225
The Linacre Quarterly 1 310
Human Reproduction 0 436
BMJ Evidence-Based Medicine
Issues in Law & Medicine

Science should be delivered through trustworthy mechanisms

We would have expected that authors, editors, and peer reviewers abide by the guidelines for conflicts of interest

For assessing conflict of interest, we are guided by the International Committee of Medical Journal Editors (ICMJE) definition, wherein “all participants in the peer-review and publication process – not only authors but also peer reviewers, editors, and editorial board members of journals – must consider and disclose their relationships and activities when fulfilling their roles in the process of article review and publication”. (ICMJE, n.d.).

For the journal Health Services Research and Managerial Epidemiology, a Sage journal, its policy on declaring conflicts of interest (COI) can be found here. In summary: All authors must provide a ‘Declaration of Conflicting Interests’ statement to be published with the article, disclosing any financial ties to sponsoring organizations or for-profit interests related to products discussed in the text. If no conflicts exist, the statement should say “None Declared”. Any interests that could appear as conflicts should also be disclosed to the Editor in the cover letter.

COI could exist if those involved in writing, editing, and peer-reviewing the papers have strong political affiliations; are employees, members, leaders, or founders of organizations affiliated with the topic; or write papers for, or act as expert witnesses solely aligned with, such organizations. Multiple peer reviewers with diverse affiliations help balance potential biases.

We would expect a balance of scientific experts on a topic with no conflicts of interest, or with declared ones, across the authors, peer reviewers, and article editors. If they have COI, we would expect them to vary by person and role (e.g., the author, peer reviewers, and/or article editor should not all work for, or have investments in, Company X). Note that the role and control of the editorial board varies by journal and publisher, ranging from some decision authority over articles to none.

In this case, we looked for activities of perceived conflicts of interest at the individual and paper level to assess a ‘risk profile’. An individual may have political affiliations but still be objective; they should still disclose these. The reason for having multiple peer reviewers is to balance potential biases through different reviews. Because none of the journals use open peer review, and we do not know who the article editor was for each paper, we could not assess those roles for COI. However, the retraction notice does state that “Sage became aware that a peer reviewer who evaluated the article for initial publication also was affiliated with Charlotte Lozier Institute at the time of the review.”

For further details on understanding conflicts of interest in scientific papers, see this publication: Safeguarding Scientific Integrity: Examining Conflicts of Interest in the Peer Review Process.

What we found regarding declared and undeclared conflicts of interest

Of the 11 papers used in the case, we found eight with undeclared conflicts of interest – that is, where a potential conflict of interest exists but was not declared. Note that declaring conflicts of interest (also known as ‘competing interests’) in publications has become expected practice across disciplines within the past five years.

| Publication Year | Journal Title | Article Title | Authors’ affiliated advocacy organization | Declared Conflict of Interest | Possible Conflict of Interest |
| --- | --- | --- | --- | --- | --- |
| 2012 | Obstetrics and Gynecology | Extending outpatient medical abortion services through 70 days of gestational age. | Planned Parenthood | No | Yes |
| 2009 | Obstetrics and Gynecology | Immediate Complications After Medical Compared With Surgical Termination of Pregnancy | — | Yes (Pharma) | Yes |
| 2015 | Contraception | Efficacy and safety of medical abortion using mifepristone and buccal misoprostol through 63 days | Planned Parenthood | No | Yes |
| 2011 | Human Reproduction | Immediate adverse events after second trimester medical termination of pregnancy: results of a nationwide registry study | — | Yes (Pharma) | Yes |
| 2020 | Health Communication | #AbortionChangesYou: A Case Study to Understand the Communicative Tensions in Women’s Medication Abortion Narratives | Anti-abortion | No | Yes |
| 2013 | The Linacre Quarterly | The Maternal Mortality Myth in the Context of Legalized Abortion | Anti-abortion | No | Yes |
| 2021 | Health Services Research and Managerial Epidemiology | RETRACTED: A Longitudinal Cohort Study of Emergency Room Utilization Following Mifepristone Chemical and Surgical Abortions, 1999–2015 | Anti-abortion | No | Yes |
| 2021 | Issues in Law & Medicine | Deaths and Severe Adverse Events after the use of Mifepristone as an Abortifacient from September 2000 to February 2019. | Anti-abortion | No | Yes |
| 2021 | Health Services Research and Managerial Epidemiology | Mifepristone Adverse Events Identified by Planned Parenthood in 2009 and 2010 Compared to Those in the FDA Adverse Event Reporting System and Those Obtained Through the Freedom of Information Act | Anti-abortion | No | Yes |
| 2022 | Health Services Research and Managerial Epidemiology | RETRACTED: A Post Hoc Exploratory Analysis: Induced Abortion Complications Mistaken for Miscarriage in the Emergency Room are a Risk Factor for Hospitalization | Anti-abortion | No | Yes |
| 2011 | BMJ Evidence-Based Medicine | Adolescent girls undergoing medical abortion have lower risk of haemorrhage, incomplete evacuation or surgical evacuation than women above 18 years old | — | No | None Found |

Discussion

Logically, scientific articles have been used as exhibits in this Texas court case because mifepristone is a drug. Yet much of the scientific evidence appeared to be authored by the plaintiffs or by organizations affiliated with the plaintiffs.

If the scientists are the authorities on the subject and have no conflicts of interest, there might be a case for using those studies as evidence. However, we would expect extreme rigor in the science and in the ethics of those involved. The organizations listed on the scholarly papers are either plaintiffs themselves or affiliated with the plaintiffs. Two of the papers have evidence of undisclosed conflicts of interest and unreliable methods, according to the retraction notices. Hence, the ‘science’ that has been cited appears to have been compromised at the very least, and potentially manipulated to serve the aims of those organizations.

At this point in time, two of the eleven papers used in evidence have now been retracted – a move towards correcting the scientific record. While it is now too late for the Texas court, which has already considered the retracted papers as part of its proceedings, it should not be too late for some further serious questioning of scientific publications presented in the original case. This issue highlights that more must be done to detect and prevent the manipulation or misuse of science and conflicts of interest in the courts.

Methods

Court Case

Alliance for Hippocratic Medicine, American Association of Pro-Life Obstetricians and Gynecologists, American College of Pediatricians, Christian Medical & Dental Associations[1], Dr. Shaun Jester, Dr. Regina Frost-Clark, Dr. Tyler Johnson[2], and Dr. George Delgado[3] v. the US Food and Drug Administration (2:22-cv-00223-Z).

Data

Eleven peer-reviewed articles were offered as evidence in this court case and used in this study (‘Exhibits’: https://doi.org/10.6084/m9.figshare.25203248). We did not examine the statements or prior court cases mentioned in the exhibits.

| DOI | Title | Source title | Publisher |
| --- | --- | --- | --- |
| 10.1177/23333928221103107 | A Post Hoc Exploratory Analysis: Induced Abortion Complications Mistaken for Miscarriage in the Emergency Room are a Risk Factor for Hospitalization | Health Services Research and Managerial Epidemiology | SAGE |
| 10.1177/23333928211068919 | Mifepristone Adverse Events Identified by Planned Parenthood in 2009 and 2010 Compared to Those in the FDA Adverse Event Reporting System and Those Obtained Through the Freedom of Information Act | Health Services Research and Managerial Epidemiology | SAGE |
| 10.1177/23333928211053965 | A Longitudinal Cohort Study of Emergency Room Utilization Following Mifepristone Chemical and Surgical Abortions, 1999–2015 | Health Services Research and Managerial Epidemiology | SAGE |
| — | Deaths and Severe Adverse Events after the use of Mifepristone as an Abortifacient from September 2000 to February 2019. | Issues in Law & Medicine | — |
| 10.1080/10410236.2020.1770507 | #AbortionChangesYou: A Case Study to Understand the Communicative Tensions in Women’s Medication Abortion Narratives | Health Communication | Taylor & Francis |
| 10.1016/j.contraception.2015.01.005 | Efficacy and safety of medical abortion using Mifepristone and buccal misoprostol through 63 days | Contraception | Elsevier |
| 10.1179/2050854913y.0000000004 | The Maternal Mortality Myth in the Context of Legalized Abortion | The Linacre Quarterly | SAGE |
| 10.1097/aog.0b013e31826c315f | Extending outpatient medical abortion services through 70 days of gestational age. | Obstetrics and Gynecology | Wolters Kluwer |
| 10.1136/ebm.2011.100064 | Adolescent girls undergoing medical abortion have lower risk of haemorrhage, incomplete evacuation or surgical evacuation than women above 18 years old | BMJ Evidence-Based Medicine | BMJ |
| 10.1093/humrep/der016 | Immediate adverse events after second trimester medical termination of pregnancy: results of a nationwide registry study | Human Reproduction | Oxford University Press (OUP) |
| 10.1097/aog.0b013e3181b5ccf9 | Immediate Complications After Medical Compared With Surgical Termination of Pregnancy | Obstetrics and Gynecology | Wolters Kluwer |

[1] Conducts lobbying activities

[2] https://www.indianasenaterepublicans.com/johnson Mentioned on AAPLOG https://aaplog.org/caring-for-both-a-curbside-consult-series/

[3] Associated with CLI and AAPLOG board member

The post Science misused in the law appeared first on Digital Science.

]]>
Mind the trust gap https://www.digital-science.com/blog/2023/07/mind-the-trust-gap/ Sun, 09 Jul 2023 22:32:05 +0000 https://www.digital-science.com/?post_type=tldr_article&p=64205 Trust. Five letters, multiple meanings, immense power. Trust arrives on foot and leaves on horseback. Trust is the basis for society, but foundations are fracturing in a world of growing divides.
Trust in research has never been more important in our lifetimes.
In the vast subway system of the scientific world, we must navigate through research integrity. All who create or consume science are on this journey. How do we safely traverse information and hold on to the sanctity of science? How do we mind the trust gap?

The post Mind the trust gap appeared first on Digital Science.

]]>
Understanding and addressing the trust gap in modern research

Trust. Five letters, multiple meanings, immense power. Trust arrives on foot and leaves on horseback.1 Trust is the basis for society, but foundations are fracturing in a world of growing divides.

Trust in research has never been more important in our lifetimes.

In the vast subway system of the scientific world, we must navigate through research integrity. All who create or consume science are on this journey. How do we safely traverse information and hold on to the sanctity of science? How do we mind the trust gap?

In this TL;DR theme we explore important issues surrounding trust through our blogs, podcasts, social media posts and the events Digital Science will be attending:

  • Perception: What does trust in research look like? How can trust in research be a positive force?
  • Identity: Is AI a force for good in research? Who (and what) can we trust in an AI future? What are the impacts on universities globally?
  • Landscape: What are Trust Markers in a research setting? What are the risks and benefits of using Trust Markers?
  • Communications: How do trust and peer review fit together in scholarly communications? How do we translate trust from scholarship to society?
  • Networks: Is the breakdown of trust becoming a barrier to collaboration and progress? What role do geopolitical forces play? Is there fragmentation in research that is affecting our trust in processes and publications?

Thought-provoking articles

We’ve curated articles from our in-house experts as well as those in our community to get under the skin of trust in research and what we can all do to safeguard future research integrity.

A conflict of interests – Manipulating peer review or research as usual?

When are commonly held interests too overlapping for peer reviewers? Examining a case of undeclared conflicts of interest.

SDGs research outputs per year by country income

SDGs: A level playing field?

A new white paper on the UN SDGs shows more can be done to raise up funding and research recognition for the developing world.

Zooming in on zoonotic diseases

An analysis has revealed disparities in the research effort to combat the growing risk of animal-borne diseases amid climate change.

Laboratory worker in the Rodolphe Mérieux laboratory of Bamako, Mali

Reproducibility and research integrity top UK research agenda

Digital Science reflections on The House of Commons Science, Innovation and Technology Committee report on Reproducibility and Research Integrity.

The lone banana problem in AI

The subtle biases of LLM training are difficult to detect but can manifest themselves in unexpected places. Digital Science CEO Daniel Hook calls this the ‘Lone Banana Problem’ of AI.

Dr Jessica Miles and a science fair exhibitor

A different perspective on responsible AI

How a school science fair inspired a passion for science communication, a PhD in microbiology, and a valuable perspective on the current AI debate.

Building trust in research

At Digital Science our tools and services are used on a daily basis by millions of researchers and students worldwide. Trust and responsibility to our user community have always been at the core of what we do, and as technology continues to evolve we recognize the role we have to play in helping to build global trust in research.

We’ve been supporting this since our founding in 2010, with a specific focus in recent years on building practical applications to help, including the investment in and development of the world’s leading tools for building trust in research.

Throughout 2023 we will have a special focus on showcasing the people across Digital Science whose work has a particular relevance to trust in the community. We’ll add those interviews and insights to this post as we publish them.

You can also find us at events & webinars throughout the year, and if you’d like to know more please get in touch.

Footnotes

1. from the Dutch saying Vertrouwen komt te voet en gaat te paard.

The post Mind the trust gap appeared first on Digital Science.

]]>
Our new avenue for interesting things https://www.digital-science.com/blog/2023/04/our-new-avenue-for-interesting-things/ Thu, 27 Apr 2023 18:25:36 +0000 https://www.digital-science.com/?post_type=tldr_article&p=62313 Welcome to Digital Science TL;DR, our new avenue for interesting things!
We bring you short, sharp insights into what’s going on across the Digital Science group; both through our in-house experts and in conversation with amazing people from the community. And we’ll keep it brief!

The post Our new avenue for interesting things appeared first on Digital Science.

]]>

Why TL;DR? Because we’ve all experienced the “Too long; didn’t read” feeling at times, and by explicitly calling this out we’re making sure we provide a short summary at the top of every article here. 🙂

Introducing our core team

We have a core team of five (at present!) who will be the primary authors of new content on the site, often working in collaboration with our in-house experts and those in the scientific and research community.

You can think of it like our core team acting as the lightning rods ⚡ attracting cool, exciting, and sometimes provocative content from across the Digital Science group and our wider community of partners, end users, customers and friends.

And so without further ado, please say hello to: Briony, John, Leslie, Simon and Suze!

Briony Fane

Briony Fane is Director of Researcher Engagement, Data, at Digital Science. She gained a PhD from City, University of London, and has worked both as a funded researcher and a research manager in the university sector. Briony plays a major role in investigating and contextualising data for clients and stakeholders. She identifies and documents her findings, trends and insights through the curation of customised in-depth reports. Briony has extensive knowledge of the UN Sustainable Development Goals and regularly publishes blogs on the subject, exploring and contextualising data from Dimensions.

John Hammersley

John Hammersley has always been fascinated by science, space, exploration and technology. After completing a PhD in Mathematical Physics at Durham University in 2008, he went on to help launch the world’s first driverless taxi system, now operating at London’s Heathrow Airport.

John and his co-founder John Lees-Miller then created Overleaf, the hugely popular online collaborative writing platform with over eleven million users worldwide. Building on this success, John is now championing researcher and community engagement at Digital Science.

He was named as one of The Bookseller’s Rising Stars of 2015, is a mentor and alumni of the Bethnal Green Ventures start-up accelerator in London, and in his spare time (when not looking after two little ones!) likes to dance West Coast Swing and build things out of wood!

Image credit Alf Eaton. Prompt: “A founder of software company Overleaf, dancing out of an office and into London while fireworks explode. high res photo, slightly emotional.”

Leslie McIntosh

Leslie McIntosh is the VP of Research Integrity at Digital Science and dedicates her work to improving research and investigating and reducing mis- and disinformation in science.

As an academic turned entrepreneur, she founded Ripeta in 2017 to improve research quality and integrity. Now part of Digital Science, the Ripeta algorithms lead in detecting trust markers of research manuscripts. She works around the globe with governments, publishers, institutions, and companies to improve research and scientific decision-making. She has given hundreds of talks including to the US-NIH, NASA, and World Congress on Research Integrity, and consulted with the US, Canadian, and European governments.

Abstract portrait illustration in the style of Frida Kahlo

Simon Porter

Simon explaining a graphical representation of the research from Australian National University

Simon Porter is VP of Research Futures at Digital Science. He has forged a career transforming university practices in how data about research is used, both from administrative and eResearch perspectives. As well as making key contributions to research information visualization, he is well known for his advocacy of Research Profiling Systems and their capability to create new opportunities for researchers.

Simon came to Digital Science from the University of Melbourne, where he worked for 15 years in roles spanning the Library, Research Administration, and Information Technology.

Suze Kundu

Suze Kundu presenting at Disney event

Suze Kundu (pronouns she/her) is a nanochemist and a science communicator. Suze is Director of Researcher and Community Engagement at Digital Science and a Trustee of the Royal Institution. Prior to her move to DS in 2018, Suze was an academic for six years, teaching at Imperial College London and the University of Surrey, having completed her undergraduate degree and PhD in Chemistry at University College London.

Suze is a presenter on many shows on the Discovery Channel, National Geographic and Curiosity Stream, a science expert on TV and radio, and a science writer for Forbes. Suze is also a public speaker, having performed demo lectures and scientific stand-up comedy at events all over the world, on topics ranging from Cocktail Chemistry to the Science of Superheroes.

Suze collects degrees like Pokémon, the latest being a Masters from Imperial College London that focused on outreach initiatives and their impact on the retention of women engineering graduates within the profession.

Suze is a catmamma and in her spare time loves dance and Disney, moshing and musical theatre.

Introducing our core topics

We are focusing our content around a set of core topics which are critical not just to the research community but to the world as a whole. At Digital Science we believe research is the single most powerful transformational force for the long-term improvement of society, and our vision is a future where a trusted, frictionless, collaborative research ecosystem helps to drive progress for all.

With this vision in mind, our five core topics at launch are: Global Challenges, Research Integrity, The Future of Research, Open Research, and Community Engagement.

These topics will no doubt continue to evolve over time, but that gives us a lot to get started with! Here’s the short summary of what those topics mean to us:

Global challenges

Most of the world’s technical and medical innovations begin with a scientific paper. It has been said that the faster science moves, the faster the world moves.

But perhaps more importantly, society increasingly looks to science for solutions to today’s most pressing social and environmental challenges. If we’re going to face up to complex health issues, an ageing population, and the digital transformation of the world, we need science and research that is faster, more trustworthy, and more transparent.

With this in mind, we explore how science and research, and their communication, are evolving to meet the needs of our rapidly changing world.

Research integrity

Research integrity will be a dominant theme in scholarly communications over the next decade. Challenges around ChatGPT, papermills, and fake science will only get thornier and more complex. We expect all stakeholders – research institutions, publishers, journalists, funding agencies, and many others – will need to dedicate more resources to fortify trust in science.

Even faced with these challenges, taking the idea of making research better from infancy to integration is exciting. Past and present, our team has built novel and faster ways to establish trust in research. We are happy to have grown a diverse group that will continue to develop the technical pieces needed to assess trust markers.

The future of research

Since its inception, Digital Science has always concerned itself with the future of research tools and infrastructure, with many of our products playing a transformative role in the way research is collaborated on, organised, described and analysed. Within this topic, we explore how Digital Science capabilities can continue to contribute to research future discussions, as well as highlighting interesting developments and initiatives that capture our imagination.

Open research

At Digital Science, we build tools that help the researchers who will change the world. Information wants to be free, and since the dawn of the web funders have been innovating their policies to ensure that all research will become open.

Digital Science believes that Open Research will help level the playing field and allow anyone, anywhere, to contribute to the advancement of knowledge. It also helps with other areas that pre-web academia struggled with, including reproducibility, transparency, accessibility and inclusivity.

These posts will cover the why and the how of open research, as it becomes just “research”.

Community engagement

One of Digital Science’s founding missions was to invest in and nurture small, fledgling start-ups to transform scholarly research and communication. Those founding teams now form the heart of Digital Science, and the desire to make, build, and change things for the better is at the core of what we do.

But we’ve never done that in isolation; Digital Science is a success because it’s always worked with the community, and most of us came from the world of research in one form or another!

In these community engagement posts we highlight and showcase some of the brilliant new ideas and start-ups in the wider science, research and tech communities.

What’s up next?

That’s all for this welcome post, but stay tuned for a whole batch of launch content being written as we speak! We’ll also have regular weekly posts from the team, and would love to hear from you if you have an idea for a subject we should cover, or simply if you’d like to say hello! 

You can contact us via the button in the top bar or footer, or via the social media links for our individual authors. 

Ciao for now!  

The post Our new avenue for interesting things appeared first on Digital Science.

]]>
Our Jimmy – Thank you for your service https://www.digital-science.com/blog/2023/03/our-jimmy-thank-you-for-your-service/ Wed, 15 Mar 2023 09:20:09 +0000 https://www.digital-science.com/?p=61422 A personal tribute from Leslie McIntosh to Jimmy Carter; get a glimpse of his impactful presence and lifetime of impacts.

The post Our Jimmy – Thank you for your service appeared first on Digital Science.

]]>
Former US President Jimmy Carter, in Austin, Texas, 2014. Photo by the LBJ Library, courtesy of the Carter Center

“There’s our Jimmy,” said an excited Southern woman at a rest stop heading to Plains, Georgia, from Atlanta.

I thought it was odd to think of former US President Jimmy Carter as ours. No one should be possessed by another. Yet, the phrase stuck with me through the past four years. This woman and I were part of a group involved with the Carter Center as donors or volunteers, and we were heading for a weekend in Plains with Jimmy and Rosalynn Carter (and other Carter family members and a few US Secret Service agents).

I do not remember his presidency, but I remember my Texan family’s resentment towards him. And the jokes at his expense, especially the ones about peanuts. But what I know after being more aware of the Carter Center is his commitment to global health and peace that transformed millions of lives worldwide. And he and Rosalynn have touched my heart and helped me find my due North. 

A profoundly religious man, President Jimmy Carter preached through practice, by improving human life. The Carter Center has accomplished so much due to its laser focus on peace and health. His brief four-year presidency offered the springboard to provide a better life for millions. [see below: A lifetime of impacts.]

One of the Center’s initiatives helped reduce the number of Guinea worm cases from about 3.5 million in 1986 to just 13 in 2022. Guinea worm is a parasite that causes debilitating effects when it infects a person, grows within the abdomen, and then emerges from the skin. Preventing the suffering of millions started with the combination of sharp focus and achievable goals – in this case neglected tropical diseases (NTDs). [see below: Guinea worm not infecting millions thanks to the Carter Center.]

In Ghana, Carter Center volunteer Adams Bawa dresses the Guinea worm wound of a six-year-old child, as former US President Jimmy Carter tries to comfort her. Photo courtesy of the Carter Center.

Other crucial factors for success have been building trust with communities and finding sustainable solutions. The Center’s staff and volunteers hail from across the globe, but most significantly from the countries affected by the NTDs. Solutions rooted in community engagement have made filtering of available water sources common and kept people and animals with emerging worms away from open bodies of water. [see below: Carter Center research exemplifies global collaborations.]

Drawn to the Carter Center for its dedication to eliminating disease, I came to respect its work, its people, and its volunteers. And to continuously be amazed by President Jimmy Carter.    

One fond memory is of a smiling Jimmy Carter speaking in 2018 at the ‘Weekend at the Carter Center’. At some point, he asked the audience of a few hundred people sitting in the Atlanta auditorium: When were (US) women given the right to vote? The audience members willing to speak would fumble around the 1920s for the date. Carter, not smiling, would respond with a resounding ‘NO.’ Quiet from the audience. “1965”, he belted, “with the Voting Rights Act – when all women had the right to vote.” We learned the lesson. 

Former US President Jimmy Carter speaking at a Carter Center event. Photo by Leslie McIntosh.

Maybe he lacked the charisma to woo enough US voters to secure a second presidential term. Yet the single-term presidency opened doors to opportunities for positive change. He leveraged the gift selflessly, serving the world population his entire life – tirelessly working for all people, even the ones who did not like or respect him. His contribution to us was to build something that could outlast him. “Waging peace. Fighting disease. Building hope.” 

Many articles on President Carter will illuminate his life in the coming weeks and months. But monumental events offer a time to pause and reflect. So I have seized the moment to celebrate an admirable life. 

If contemporary writing is the first draft of history, here is a small contribution. 

Because this history is his, mine, and ours. Our Jimmy.

A lifetime of impacts

Leslie McIntosh, Liz Smee, Anthony Dona, Shane Jackson, Briony Fane

President Carter leveraged his one term as US President, the ‘most powerful person in the world’, into a lifetime of impacts. This is one glimpse into those impacts, seen through publications with at least one author at the Carter Center.

Figure 1: Graphic demonstrating the impact of the Carter Center’s work (see links):
1. Assisted in the control and elimination of other neglected tropical diseases.
2. Fostered democracy through voting.
3. Promoted human rights around the world.
4. Developed a mental health program.

Carter Center research exemplifies global collaborations

Since 1988, researchers from the Carter Center have accumulated over 450 research publications. The Carter Center developed an international collaborative network, finding solutions with people and countries. Unsurprisingly, over 70% of these publications include researchers from a number of different countries. While we would expect most of the publications to hail from the United States, the Center’s deep commitment to working within local communities across the globe is reflected in a collaboration network that includes Ethiopia, the UK, and Nigeria.

Global map  - Academic publications by authors at the Carter Center 1986-2022
Figure 2: Academic publications by authors at the Carter Center 1986-2022. Source: Dimensions

Carter Center publications and citations by the UN Sustainable Development Goals (SDGs)

| Sustainable Development Goal (SDG) | Publications | Citations |
| --- | --- | --- |
| 1 No Poverty | 2 | 80 |
| 2 Zero Hunger | 1 | 36 |
| 3 Good Health and Well-being | 379 | 8,518 |
| 4 Quality Education | 2 | 32 |
| 5 Gender Equality | 1 | 8 |
| 6 Clean Water and Sanitation | 25 | 663 |
| 8 Decent Work and Economic Growth | 1 | 3 |
| 10 Reduced Inequalities | 1 | 52 |
| 11 Sustainable Cities and Communities | 1 | 12 |
| 14 Life Below Water | 2 | 16 |
| 16 Peace, Justice and Strong Institutions | 10 | 187 |

© 2023 Digital Science & Research Solutions Inc. All rights reserved.

The United Nations adopted 17 Sustainable Development Goals to provide “a shared blueprint for peace and prosperity for people and the planet, now and into the future”. Given the focus of the Carter Center, it is unsurprising to find the vast majority of their publications associated with SDGs. The table above reveals that out of a total of 468 research outputs published by Carter Center researchers from 1986-2022, 425 (91%) are associated with 11 of the SDGs, most notably SDG3 – Good Health and Well-being.
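As a quick check of the share reported above, summing the publication counts in the table reproduces the 425 SDG-linked outputs, and the percentage follows from the 468 total. A minimal sketch in Python; all numbers are taken directly from the table and text above:

```python
# Publication counts per SDG, copied from the table above
sdg_publications = {
    "1 No Poverty": 2,
    "2 Zero Hunger": 1,
    "3 Good Health and Well-being": 379,
    "4 Quality Education": 2,
    "5 Gender Equality": 1,
    "6 Clean Water and Sanitation": 25,
    "8 Decent Work and Economic Growth": 1,
    "10 Reduced Inequalities": 1,
    "11 Sustainable Cities and Communities": 1,
    "14 Life Below Water": 2,
    "16 Peace, Justice and Strong Institutions": 10,
}

total_outputs = 468  # all Carter Center research outputs, 1986-2022
sdg_linked = sum(sdg_publications.values())
share = round(100 * sdg_linked / total_outputs)

print(sdg_linked)   # 425
print(f"{share}%")  # 91%
```

Summing the rows gives 425 outputs across 11 SDGs, i.e. roughly 91% of the Center’s total, matching the figure quoted above.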

Guinea worm not infecting millions thanks to the Carter Center

The World Health Organization (WHO) recognised the need to focus on Guinea worm disease (dracunculiasis) in 1981. The Carter Center has partnered with the WHO and other organisations since 1986, working to eradicate Guinea worm. At first it appears there were a growing number of cases; however, this reflects improved counting. Setting an audacious goal of eradicating a disease starts with determining who, and how many, are affected. By 2021, only 15 cases were counted. Read more at the WHO or Carter Center on Guinea worm eradication.

Figure 3: Changes in Guinea Worm cases in Africa from 1980 to 2021. Guinea worm cases declined from a significant problem (darker red) in the 1980s to reduced cases (lighter red) to eradication (red to grey). Source data from the World Health Organization Dracunculiasis eradication portal. Data animated using R.

Carter Center publications influence policies

From its relatively small number of publications, the Carter Center punches above its weight, with 131 publications cited in 149 publicly available policies. The majority of the policies focus on tropical diseases and inform WHO guidelines, from the overall topic of neglected tropical diseases to subsets within NTDs, such as treating onchocerciasis. Given the co-focus on peace initiatives, publications surrounding elections and governing also influenced policies, such as with the 2012 Ghana election. The Center’s holistic focus on building hope through communities also comes through in policies fostering community engagement. The Carter Center turned goals into work, with outcomes communicated through scholarly publications and then into policies. ‘Waging peace. Fighting disease. Building hope.’ has never been just a tagline.

Figure 4: Carter Center publications cited in policies 1991-2021. Source: Dimensions.

Acknowledgements

Thanks to the work from Digital Science experts on Dimensions data (Liz Smee, Anthony Dona, Simon Porter), data visualization (Shane Jackson), and the UN Sustainable Development Goals (Briony Fane).

The post Our Jimmy – Thank you for your service appeared first on Digital Science.

]]>
A conflict of interests – manipulating peer review or research as usual? https://www.digital-science.com/blog/2023/01/a-conflict-of-interests/ Wed, 11 Jan 2023 08:34:55 +0000 https://www.digital-science.com/?p=60271 When are commonly held interests too overlapping for peer reviewers? Examining a case of undeclared conflicts of interest.

The post A conflict of interests – manipulating peer review or research as usual? appeared first on Digital Science.

]]>
In seeking to define morality and moral actions, the Catechism of the Catholic Church states in paragraph 1753 that, “A good intention (for example, that of helping one’s neighbor) does not make behaviour that is intrinsically disordered, such as lying and calumny, good or just. The end does not justify the means.”
Stephen Sammut, PhD

Science, scientific method, and politics

It is tempting to think of science in the abstract as objective and pure based on rigorous analysis of empirical evidence. Conversely, politics might often appear less structured and more chaotic, based on subjective values and driven by interest groups and compromises. However, both are human endeavours – neither science nor politics functions solely in the abstract. Both are influenced by biases that are often not evident or transparent to the external observer. The scientific method is one mechanism of checks and balances used to curtail undue, inappropriate, or political influence on science. 

The scientific method teaches researchers to be sceptical and revolves around the performance of rigorous experiments, the collection of data, and the unbiased presentation of results in a format with sufficient explanation and transparency that peers may review, question and reproduce the results. In contrast to the platonic ideal of the scientific method, scientific enterprise in practice is more complex and nuanced. It involves many scientists with complex relationships and drivers, research institutions with needs, funding agencies with stakeholders, and publishers with shareholders. All operate according to their incentives and values. And they compete for support and funding within a society shaped by a complex, dynamic, and multi-stakeholder landscape. 

Politics also operates in what often seems like a detached or parallel universe in which decisions are reached via a mix of scientific and economic evidence, the needs of the general population, and sometimes by influential interested individuals, groups, and companies. 

In reality, science and politics have always been intimately connected, and neither works in practice as they do in theory. Science is political, and although politicians and lobbyists may not use the scientific method, they use science. Science may be used politically but what is crucial is to ensure that politics and subjectivity do not interfere with the scientific method.

Peer review is a check within the framework of scientific communication, but it is not the check. It is, however, the one salient to this story.

Existing since the 1700s, peer review provides an opportunity to validate scientific research. Growing to an accepted norm about 50 years ago, peer review ideally operates by having knowledgeable, independent experts review scientific research. Most people reading this article understand the broad workings of peer review. The peer reviewers should be independent of each other and experts in a topic covered in the paper (Fig. 1). The reviewers offer insight into the quality of the subject and the strength of the methods. In theory, all actors should be independent of one another, but in practice, this is rarely the case. ‘Peers’ means there should be some overlap among people and their knowledge – the people taking on the review must have the capacity and capability to form a thoughtful critique of a given piece of work. To that end, the editors, peer reviewers, and authors are often part of the same scientific society or even organisation (Fig. 2). 

Because the peer review process varies and has not been standardised, the difference between optimising and manipulating it may not be clear. The former is a grey area: knowing how the system works and fine-tuning one’s approach for professional gain. The latter means understanding how the system works and stepping over community boundaries of acceptable practice. The Committee on Publication Ethics (COPE) offers guidance on peer review. The International Committee of Medical Journal Editors (ICMJE) states it plainly: “Reviewers should declare their relationships and activities that might bias their evaluation of a manuscript and recuse themselves from the peer-review process if a conflict exists.” 

See what you think in the following actual case.

Manipulation of peer review or research as usual?

We take a controversial 2022 research publication as the subject of this case study. What matters for our discussion, however, is not the nature of the research but the scholarly communications process and its integrity – specifically the character of the peer review. We abstract the crucial elements of the case and highlight the most salient issues, without revealing the topic area, as the topic can be a distraction from the point at hand. 

We identified the current case not via a specific literature search (i.e., a topic-based approach) but by studying variances in trust marker signatures (e.g., hypothesis, conflict of interest, and funding statements) across a range of literature, blind to the subject area. This paper fell outside a specified range of norms for several trust markers. For example, the statement of study purpose did not use the drier language typical of research in this area, which, combined with the lack of a funding statement, raised an initial suspicion. 
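As a rough illustration of this kind of topic-blind screening – a hypothetical sketch with made-up marker patterns, not a description of the actual detection pipeline – one can test a manuscript’s text for the presence of common trust markers and flag papers missing markers that are the norm in their corpus:

```python
import re

# Hypothetical trust-marker patterns; real screening would use trained
# models rather than simple keyword matching.
TRUST_MARKERS = {
    "funding_statement": r"\b(funded by|funding|supported by|grant)\b",
    "coi_statement": r"\b(conflicts? of interest|competing interests|declare)\b",
    "data_availability": r"\b(data (are|is) available|data availability)\b",
}

def trust_marker_signature(text):
    """Map each trust marker to whether it appears in the manuscript text."""
    lowered = text.lower()
    return {name: bool(re.search(pattern, lowered))
            for name, pattern in TRUST_MARKERS.items()}

def flag_for_review(text, required=("funding_statement", "coi_statement")):
    """Flag a paper whose signature lacks markers expected in its field."""
    signature = trust_marker_signature(text)
    return any(not signature[marker] for marker in required)
```

A missing funding statement alone proves nothing; as in the case above, it simply raises an initial suspicion worth a closer human look.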

Our chosen case involves three guest editors, four peer reviewers, and a single author, all of whom appear to be closely affiliated, either within the same community or through their professional affiliations. Three peer reviewers work directly for a single private organisation (“Organisation X”). One of the guest editors, the fourth peer reviewer, and the author are all affiliated with Organisation X, yet only one of the peer reviewers listed an affiliation with it. The other two guest editors are closely aligned with the principles of Organisation X but lead similar organisations. Only one of the peer reviewers came from a traditional academic research institution. Nuances of peer review are described elsewhere.

Generally, for proper peer review we expect reviewers to have varying yet overlapping knowledge and training in related fields. For example, a topic expert and a statistician in economics would have overlapping fields but different areas of expertise. We also expect a balance of knowledge and affiliations across the editors, peer reviewers, and author. Affiliations may overlap in narrow fields with small or cutting-edge communities, but the case in question is not a narrow field. Here, the alignment of interests raised a flag.

In summary, the expertise of guest editors, peer reviewers and the author appears to overlap, as do their perspectives, affiliations, and alignment of interests. (Fig. 3).

Objectively and without specific context, many questions come to mind: When would these overlaps be acceptable while maintaining a robust commitment to research integrity? What other information do you need to know to make that decision? Will the peer reviewers be able to critically and independently evaluate the science within the paper?

The Case: When are commonly held interests too overlapping for peer reviewers? 

The case mentioned above is the recently published (and now retracted) paper in Frontiers in Psychology, “The Turnaway Study: A Case of Self-Correction in Science Upended by Political Motivation and Unvetted Findings” (Coleman, 2022). This paper sought to criticise The Turnaway Study, a landmark study describing “the mental health, physical health, and socioeconomic consequences of receiving an abortion compared to carrying an unwanted pregnancy to term”. The article came to our attention through algorithms that flag irregular trust markers. The alert prompted us to search social media and PubPeer, where we found a corroborating signal, and to look more closely at the trust markers within the article to check whether due diligence of scientific process had been followed. Because Frontiers publishes the names of reviewers and their declared affiliations, this transparency allows researchers to review their associations in the context of the peer review process and assess the potential for insularity. 

Coleman’s article, retracted on 26th December 2022 and described in Retraction Watch, appeared in the journal as part of a research topic (a curated article collection, somewhat like a special issue), Fertility, Pregnancy and Mental Health – a Behavioral and Biomedical Perspective. The research topic was led by three guest editors at Frontiers, and the Coleman article itself had four peer reviewers. All peer reviewers state different affiliations, but three are affiliated with the same anti-abortion organisation, the Charlotte Lozier Institute (CLI), which states on its website that it is “the preeminent organisation for science-based pro-life information and research”. Moreover, the editor charged with reviewing this article is affiliated with CLI. However, most of these associations were not disclosed (see table and Fig. 4).

| Name | Role | Stated Affiliation | Affiliation with Potential for Conflict of Interest | Cited by CLI* |
|---|---|---|---|---|
| Stephen Sammut | Guest Editor | Franciscan University of Steubenville | Charlotte Lozier Institute; former member, WECARE** | 1 |
| Patrick P Yeung | Guest Editor | Saint Louis University | St Louis Guild of the Catholic Medical Association | |
| Denis Larrivee | Guest Editor | Loyola University Chicago | International Association of Catholic Bioethics | |
| Robin Pierucci | Reviewer | Homer Stryker MD School of Medicine, Western Michigan University | Charlotte Lozier Institute | 7 |
| Steven Braatz | Reviewer | American Association of ProLife ObGyns | Charlotte Lozier Institute | 4 |
| Tara Sander Lee | Reviewer | Charlotte Lozier Institute | Charlotte Lozier Institute | 8 |
| John Thorp | Reviewer | Carolina Asia Center, University of North Carolina at Chapel Hill | Crisis Pregnancy Center Director | 7 |
| Priscilla K. Coleman | Author | Human Development and Family Studies, Bowling Green State University | Former Director, WECARE** | 4 |

*Cited by CLI means the author wrote, or was cited in, blog posts or other writings published by the Charlotte Lozier Institute. Note that being cited by CLI does not indicate an endorsement from the person being cited.
**World Expert Consortium for Abortion Research and Education (WECARE).
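The pattern in the table can be checked mechanically. The sketch below is a toy illustration with hypothetical data structures and names (not a description of any production tool): it counts affiliations across the decision-makers – editors and reviewers – and reports any organisation that holds a strict majority among them:

```python
from collections import Counter

def majority_affiliation(people):
    """Return an affiliation shared by a strict majority of the
    decision-makers (editors and reviewers), or None."""
    deciders = [p for p in people if p["role"] in ("editor", "reviewer")]
    counts = Counter(aff for p in deciders for aff in p["affiliations"])
    for affiliation, n in counts.most_common():
        if 2 * n > len(deciders):  # strict majority of decision-makers
            return affiliation
    return None

# Same shape as the case above: three editors, four reviewers,
# four of the seven sharing one organisation ("Org X" is hypothetical).
panel = [
    {"role": "editor",   "affiliations": {"Univ A", "Org X"}},
    {"role": "editor",   "affiliations": {"Univ B"}},
    {"role": "editor",   "affiliations": {"Univ C"}},
    {"role": "reviewer", "affiliations": {"Univ D", "Org X"}},
    {"role": "reviewer", "affiliations": {"Org X"}},
    {"role": "reviewer", "affiliations": {"Org X"}},
    {"role": "reviewer", "affiliations": {"Univ E"}},
]
```

With four of the seven decision-makers sharing “Org X”, the function returns it – exactly the kind of concentration that such a check is meant to surface. The real difficulty, of course, is that undisclosed affiliations never appear in the input data.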

CLI presented an amicus brief (an expert opinion) to the US Supreme Court on 29th July 2021 in support of overturning Roe v. Wade, which for nearly 50 years had asserted that women in the United States have a constitutional right to an abortion. Moreover, one of the peer reviewers for the Coleman article, Robin Pierucci, MD, an associate scholar at CLI, filed a separate amicus brief on 20th July 2020 with the Life Legal Defense Foundation in the Dobbs v. Jackson Women’s Health Organization US Supreme Court case. Priscilla K. Coleman directed the World Expert Consortium for Abortion Research and Education (WECARE), whose members included Stephen Sammut. John Thorp’s legal testimony on abortion has previously come into question, and he has been the medical director of an anti-abortion crisis pregnancy centre for over 40 years.

Giving air to unethical practices

We are passing no comment on the area of research involved here since this is a highly emotive area for many. However, this peer review process is of clear interest in research conduct and integrity viewed independently of the underlying research. Furthermore, our simple example highlights the potential for institutes, peer reviewers, or authors to translate aligned political interests into scientific influence.

A decision-making majority of editors and peer reviewers are members or affiliates of organisations with publicly stated aligned interests; this process does not meet the standard of the independent, unbiased scientific method.

Allowing this paper to be published in the scholarly record provides a sense of unwarranted legitimacy to the arguments. We hope that publishers will learn from this experience and take action.

For those responsible for the paper, including its undeclared conflicts of interest, the end goal of having a ‘peer-reviewed’ article does not justify the means used to get there.

Note: Part of this analysis was presented at the eResearch Australasia conference in Brisbane, Australia, October 2022.

The post A conflict of interests – manipulating peer review or research as usual? appeared first on Digital Science.

]]>
Motivations of bad actors in science: The personal, the professional, the political https://www.digital-science.com/blog/2022/05/motivations-of-bad-actors-in-science/ Thu, 26 May 2022 11:22:19 +0000 https://www.digital-science.com/?p=57999 From lone wolves to science mercenaries, why do charlatans in science exist, what do they stand to gain, and what can be done about them?

The post Motivations of bad actors in science: The personal, the professional, the political appeared first on Digital Science.

]]>
When science meets influence and ambition

Scientific publications can serve as key evidence for policymakers and provide discussion points to inform public debate. For example, comprehensive systematic reviews of the literature regularly shape recommendations such as medical guidelines for public health policy around major issues like the COVID-19 pandemic. The growing number of preprints should, in theory, give researchers a faster, albeit less reviewed, mechanism for sharing their work during the pandemic. It also means, however, that the means to meddle with scientific communication are that much more available. What would motivate a person, group of people, or even an organisation to intentionally game the scientific system? Personal, professional, or political – the motivations exist within people who want fame and fortune to fast-track their ambitions, whether by fair means or foul.

Charlatans in science are sadly not new. Persons making grandiose claims about their knowledge and outrageous cures for diseases have peppered medical history for centuries. With charismatic personalities and opportunities to influence, such individuals have professed false cures in the house of Tsar Nicholas (Rasputin) and misled ailing Londoners during an epidemic (Gustavus). Charmers playing by their own rules – gaslighting others.


Even in the least nefarious circumstances, lone actors can emerge to falsify science. The immense pressure placed on scientists to conduct research, publish results, and have those results cited would tempt anyone to search for shortcuts. Researchers are humans, after all. Impose these requirements in an environment that supports gamifying just about anything, and even the most honest person could buckle under the pressure.

In one case, a researcher who needed citations to his own work created fictitious authors in plagiarized papers to cite it. Dr Yibin Lin posted six such papers to preprint servers and attempted to submit eight more (see one example here). This attempt to accelerate promotion resulted in a 10-year ban from scientific research within the US.

In other cases, the motivations can only be understood from the people themselves, for example those individuals who fake being scientists. There is a long history of people outside of science providing advice as if they were experts. Some amazing citizen scientists exist, but the signal-to-noise ratio favours chaos more than substance.

There are parallels with predatory journals and those who publish their articles in them. In her seminal article on the motivations of authors published in such journals, Tove Faber Frandsen identified two main groups: the unaware and the unethical. The former claim to be ignorant of the existence of predatory journals and innocent in succumbing to the tried and tested tactics of predatory publishers; the latter exploit their existence to secure publications – sometimes to reap the incentives in place for them, sometimes to publish unsubstantiated research. This has occurred most visibly during the pandemic, with everything from 5G conspiracy theories to the promotion of debunked drugs and therapies appearing in fake journals.


As harmless as acting alone may seem in such an expansive scientific ecosystem, the consequences of a lone wolf pale in comparison to targeted attacks. Science mercenaries, well funded by and coordinated with various industries, can intentionally fracture confidence in a topic. Seitz and Singer – trained physicists later hired to cast doubt on the harms of tobacco and climate change – worked on the atomic bomb and had legitimate education and training as physicists. As described in depth in Merchants of Doubt, the two scientists would eventually testify in court as if they were experts in epidemiology, environmental science, virology, and dozens of other areas, undermining confidence in the overwhelming scientific evidence of the harm of tobacco and the impact of climate change. They are not alone. A whole industry exists to profit from undermining science. Worried that second-hand smoke may kill your industry? The answer, it seems, is to kill the research by overwhelming the regulatory agencies and polluting the scientific literature.

“Countering nefarious acts and actors must be coordinated throughout the scholarly community – publishers, institutions, and funding agencies, to name a few. Policies and practice must move from the current defensive, reactive position, to the offensive.”
Dr Leslie McIntosh, CEO, Ripeta

As with other pandemics, we have seen a plethora of charlatans emerge during COVID-19 – from the MBAs who would ‘set the record straight on COVID’ to the self-proclaimed experts on COVID and policy. To illustrate one scam: a scientific piece is written, typically with one author holding credentials in a scientific field. The co-authors either do not exist (e.g. the Yan report) or lack supporting credentials (e.g. research conducted by Walach). In some cases, the ‘papers’ appear in repositories with little evident proofing; in others, the work is published in ostensibly peer-reviewed journals – meaning the peer review, if it happened at all, was not rigorous, as with articles published in predatory journals. Success, in the eyes of the authors, is a scientific outcry on social media in which someone eventually shreds the methodology. But the authors have won. It is a misinformation strategy: i) put out bad-faith information on Topic X; ii) the methodology of Topic X is deeply refuted; iii) Topic X is discussed; iv) the words of Topic X are propagated. A win for the bad-faith actors.

It would be like writing an article titled ‘Squirrels – Wonderful Companions in the Garden’. As everyone knows, evil squirrels steal tomatoes from the garden and throw acorns from trees, maliciously depriving the owners of any peace: cute con artists at best. The authors’ intent would be to spread the lie that squirrels are delightful – intentionally putting the key phrase they want propagated into the article title. When a social media argument ensues and good scientists cite the title as-is, the bad-faith actors’ message sticks: squirrels are lovely garden mates. And the lie spreads, because we as scientists, playing by scientific rules, indulge in critiquing the methodology before deciding on the legitimacy of the source. We legitimize their argument.

An individual infiltrating published science with falsehoods still pollutes the ecosystem. But the motivation to put profit over protecting society causes harm at scale. In the end, the tactics of the few will overpower an ecosystem lacking a robust strategy.

Countering nefarious acts and actors must be coordinated throughout the scholarly community – publishers, institutions, and funding agencies, to name a few. Policies and practice must move from the current defensive, reactive position, to the offensive – taking proactive measures to prevent harmful players entering the ecosystem and promoting automated quality checks that scale with the pace of scholarly communication.

For more information about how Ripeta can help make better science easier – for publishers, funders, researchers and academic institutions – please visit the Ripeta website.

The post Motivations of bad actors in science: The personal, the professional, the political appeared first on Digital Science.

]]>
Provocative paper titles https://www.digital-science.com/blog/2021/08/provocative-paper-titles/ Tue, 24 Aug 2021 15:00:03 +0000 https://www.digital-science.com/?p=55333 Does a disconnect between a paper’s abstract and its title indicate a potential need to inspect the article for possible trust issues?

The post Provocative paper titles appeared first on Digital Science.

]]>
At Ripeta, we develop tools that automatically scan manuscripts for key scientific quality indicators and provide feedback on ways to improve research reporting. We assess, design, and disseminate practices and measures to improve the reproducibility of, and trust in, science with minimal burden on scientists.

In what can often feel like a sea of dry scientific writing, provocative titles in scientific research papers stand out. Occasionally, legitimate scientists conducting good research will attempt more humorous titles. Sometimes, they even land! 

To highlight the joy of a jaunty paper title, our friends at Writefull, providers of AI-based research proofreading services, have developed a fun app that generates scientific paper titles from article abstracts. Pondering paper titles, I wondered whether a disconnect between a paper’s abstract and its title could indicate a potential need to inspect the article for possible trust issues – and what better way to investigate than to use their app?
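One crude way to quantify a title–abstract disconnect – a toy sketch assuming simple bag-of-words overlap, nothing like the language models behind Writefull’s app – is the Jaccard similarity between the content words of the title and those of the abstract:

```python
import re

# A tiny illustrative stopword list; a real implementation would use a
# proper list and stemming or embeddings.
STOPWORDS = {"a", "an", "the", "of", "and", "in", "on", "for",
             "to", "we", "with", "is", "are"}

def content_words(text):
    """Lowercased alphabetic tokens with stopwords removed."""
    return {w for w in re.findall(r"[a-z]+", text.lower())
            if w not in STOPWORDS}

def title_abstract_overlap(title, abstract):
    """Jaccard similarity of title vs abstract content words; unusually
    low values could flag a title disconnected from the paper's content."""
    t, a = content_words(title), content_words(abstract)
    return len(t & a) / len(t | a) if (t | a) else 0.0
```

Titles are short, so absolute values are naturally low; any signal would come from comparing a paper against the norms of its field, much as trust markers are compared against corpus-wide norms.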

And so, without further ado, or indeed statistical significance, I present three articles: 

Article 1: A (very not) scintillating title 

Human Created Title

An analysis of form and function of a research article between and within publishers and journals

Writefull’s Computer-Generated Title 
  • Research Article Heading Organization and Forms for Machine Learning and Natural Language Processing: A Case Study from a Single Institution
  • A Heading Form and Function Analysis for Machine Learning
  • Research Article Heading Form and Function Analysis Using Rhetorical Functions

Picking an enticing article we recently published at Ripeta about research article headings and subheadings, I wanted to see how closely our title compared with those generated by a computer. Judging by the alternatives, the similarity of the titles reassured me that we had framed our paper as dryly as possible, in keeping with scientific naming conventions. It is quite an interesting article if you are training machine learning algorithms to parse and categorize articles – but definitely not click-bait.


Article 2: A title from an author trying to be clever (apologies Dr. Luke)

Human Created Title

Where there’s smoke there’s money: Tobacco industry campaign contributions and U.S. Congressional voting

Writefull’s Computer-Generated Title Possibilities
  • Voting Behaviors of Representatives from the Tobacco Industry Political Action Committees in the United States: A Cross-Sectional Analysis
  • The Effectiveness of Campaign Contributions for Tobacco-Related Legislators in the United States: A Cross-Sectional, Multilevel Model
  • Voting Behavior of Tobacco Industry Political Action Committees

A search in Dimensions shows over 160 articles alluding to the proverb ‘Where there’s smoke’ in the title. Not that uncommon. Maybe even overused? From personal experience, Dr. Doug Luke enjoys using more flavourful titles for his papers and talks to make statistics sound as interesting as it really is. The generated titles compare favourably to the original segment after the academic colon.


Article 3: A provocative title (from a retracted article)

Human Created Title

The Safety of COVID-19 Vaccinations—We Should Rethink the Policy

Writefull’s Computer-Generated Title Possibilities
  • Vaccine Safety and Risk Assessment for mRNA Vaccine COVID-19
  • Vaccination of COVID-19: A Review of the Safety of Vaccines
  • Safety Evaluation of COVID-19 Vaccines: The mRNA Vaccination versus the Number Needed for Vaccination

The problem with this title is that the authors put a recommendation into the title, which pushes the boundaries of scientific cultural norms. In fact, the phrase ‘rethink the policy’ appears in only a handful of article titles. More troublesome is that the recommendation in the title does not logically follow from the paper, as the auto-generated titles from Writefull also reflect. Before even considering the paper’s fraught methods, we know that the title and the substance of the paper do not agree with each other.

Provocative paper titles remind us, first, that scientists are able to laugh at themselves a little, and second, that the title itself could have a bearing on the readership and thus the exposure of the science within. Could there be a relationship between paper titles and trust? We’d love to hear your thoughts. Tweet us @ripetaReview.


Want to try your hand at the title generation app? Go to the Writefull Title Generator and let us know what you found @Writefullapp and @ripetaReview.

At Ripeta we will keep exploring and automating checks to make better science easier. To learn more, head to the Ripeta website or contact us at info@ripeta.com.

The post Provocative paper titles appeared first on Digital Science.

]]>