Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS) | ACN 101 321 555

Research Ethics Monthly

Research Integrity

The inclusion of retracted trials in systematic reviews: implications for patients’ safety

After a paper has been through peer review and been published, it is the obligation of the scientific community to scrutinise the author’s work. If a serious error or misconduct is spotted, the paper should be retracted and the work removed from the evidence base. Over the past ten years there has been exponential growth in the number of retracted papers. Much of the increase may be explained by technology that has made it easier to spot, for example, duplicate publications or fabricated data. Once a paper is retracted, researchers should not cite it in future publications; in practice, however, many papers continue to be cited long after they have been retracted. Retraction Watch maintains a list of the ten most highly cited retracted papers. The paper that currently holds the number one spot has been cited 942 times after retraction. It is plausible that researchers are using retracted work to justify further study. This may be the scientific equivalent of “fruit of the poisonous tree”: if research is based on tainted work, then that work is itself tainted. Authors may also include retracted work in systematic reviews and meta-analyses. In clinical disciplines – such as nursing or medicine – this is particularly worrisome.

Clinical practice should be based on the best available evidence, i.e. from systematic reviews. If a review were to include a retracted paper, the resulting meta-analysis would be contaminated and recommendations for practice emerging from the study would be unsound; ipso facto, patients are put at risk because clinicians are using flawed evidence. To date we have found five examples in the nursing literature where this has happened. We have written to the journal editors to advise them of the error the authors have made. In our minds this is a cut-and-dried issue: the author has clearly made an error, potentially a serious one, and it will need to be resolved. Either the editor will need to issue an erratum or potentially retract the review (and there are examples in the literature where this has happened).

There is a second way in which a systematic review may include retracted research: when the authors of the review cite a paper that is retracted after the review is published. A more nuanced debate is perhaps required here, given that the review author has not made a mistake. Would it not be punitive to the author – potentially damaging their career prospects – to retract a review when they have done nothing wrong? However, the inclusion of a paper that has subsequently been retracted has the potential to affect effect sizes in the meta-analysis and/or the review’s conclusions. My group undertook a study to explore how often retracted clinical trials were included in systematic reviews. The answer: more often than you might think. We followed up the citations of eleven retracted nursing trials and determined that they were included in 23 systematic reviews. Currently there is no mechanism to alert authors (or publishing editors) that their systematic review includes a study that has subsequently been retracted. We suspect, but don’t know for certain, that in medicine and the allied health professions there are many more systematic reviews that include retracted studies. Clinical practice guidelines, such as those produced by the National Institute for Health and Care Excellence (NICE), rely on evidence from systematic reviews. And this is where our observation flips from being an interesting intellectual exercise to one that may affect patient safety. Could it be that patients are being exposed to ineffective treatments because guidelines are based on flawed reviews?

Journal editors, reviewers and researchers need to be mindful that systematic reviews may contain citations that have been retracted. There is a compelling argument that an editor who issues a retraction notice for a paper also has a duty to alert the authors citing that work of the retraction decision. It might also be argued that part of the peer review process should be checking that included references (particularly those included in a meta-analysis) have not been retracted. Finally, not only do review authors need to ensure that they have not cited retracted papers, they also have a responsibility to periodically check the status of included studies (something the Cochrane Collaboration encourages authors to do).
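There is as yet no push service that alerts review authors to a later retraction, but such a periodic check can be partially automated, since retraction notices are recorded in Crossref metadata as editorial updates. The sketch below is illustrative only, not a method from this post: the Crossref query pattern is an assumption to verify against current API documentation, and the DOIs are hypothetical.

```python
# Illustrative sketch only: the 'update-to'/'type' field names and the
# query pattern are assumptions about Crossref's public metadata, and
# every DOI below is hypothetical.

def retraction_notices(message):
    """Return the DOIs of works in a Crossref-style response that are
    editorial updates of type 'retraction'."""
    notices = []
    for work in message.get("items", []):
        for update in work.get("update-to", []):
            if update.get("type") == "retraction":
                notices.append(work["DOI"])
    return notices

# In practice one might query, per included study, something like:
#   https://api.crossref.org/works?filter=updates:<DOI-of-included-study>
# and pass the decoded 'message' object to the function above.
sample = {
    "items": [
        {"DOI": "10.1234/notice.1",
         "update-to": [{"type": "retraction", "DOI": "10.1234/trial.1"}]},
        {"DOI": "10.1234/other.2", "update-to": []},
    ]
}
print(retraction_notices(sample))  # ['10.1234/notice.1']
```

Run against the reference list of a published review at regular intervals, a check of this kind would surface any included study that has since been retracted.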

The inclusion of retracted trials is a threat to the integrity of systematic reviews. Consideration needs to be given to how the scientific community responds to the issue with the ultimate goal of keeping patients safe.

Professor Richard Gray is the editor of the Journal of Psychiatric and Mental Health Nursing. No other conflict of interest declared.

Contributor
Richard Gray PhD
Professor of Clinical Nursing Practice, La Trobe University, Melbourne, Australia
Richard’s University profile |  r.gray@latrobe.edu.au

This post may be cited as:
Gray R. (26 May 2018) The inclusion of retracted trials in systematic reviews: implications for patients’ safety. Research Ethics Monthly. Retrieved from: https://ahrecs.com/research-integrity/the-inclusion-of-retracted-trials-in-systematic-reviews-implications-for-patients-safety

How can we get mentors and trainees talking about ethical challenges?

When it comes to research integrity, the international community often tends to focus on the incidence of research misconduct and the presumption that the remedy is to have more training in responsible conduct of research. Unfortunately, published evidence largely argues that these perceptions are demonstrably wrong. Specifically, formal training in courses and workshops is much less likely to be a factor in researcher behavior than what is observed and learned in the context of the research environment (Whitbeck, 2001; Faden et al., 2002; Kalichman, 2014).

These research findings should not be surprising. Most of an academic or research career is defined by actually conducting research and working with research colleagues. The idea that a single course or workshop will somehow insulate a researcher from unethical or questionable behavior, or arm them with the skills to deal with such behavior, would seem to be a hard case to make. That isn’t to say that there is no value in such training, but the possible impact is likely far less than what is conveyed by the research experience itself. With that in mind, the question is how, if at all, can research mentors be encouraged to integrate ethical discussions and reflections into the context of the day-to-day research experience?

With this as a challenge, we have been testing several approaches at UC San Diego in California to move conversations about RCR out of the classroom and into the research environment. With support from the US National Science Foundation, this project began with a 3-day conference comprised of ~20 leaders in the field of research integrity (Plemmons and Kalichman, 2017). Our goal was to develop a curriculum for a workshop in which participating faculty would acquire tools and resources to incorporate RCR conversations into the fabric of the research environment. Based on consensus from the conference participants, a curriculum was drafted, refined with input from experts and potential users, and finalized for pilot testing. Following two successful workshops for faculty at UC San Diego, the curriculum was rolled out for further testing nationally with interested faculty.

The focus of the workshop curriculum was five strategies participating faculty might use with members of their research groups. These included discussions revolving around (1) a relevant professional code of conduct, (2) creation of a checklist of things to be covered at specified times with all trainees, (3) real or fictional research cases defined by ethical challenges, (4) creation of individual development plans defining roles and responsibilities of the mentor and trainees, and (5) developing a group policy regarding definitions, roles, and responsibilities with respect to some dimension of practice particularly relevant to the research group. In all cases, the goal is to create opportunities that will make conversations about the responsible conduct of research an intentional part of the normal research environment.

The results of this project were encouraging, but still leave much to be done (Kalichman and Plemmons, 2017). Workshops were provided for over 90 faculty, who were strongly complimentary of the program and the approach. In surveys of the faculty and their trainees after the workshops, there were high levels of agreement that the five proposed strategies were feasible, relevant, and effective. However, while use of all five strategies was high post-workshop, we surprisingly found that trainees reported high levels of use pre-workshop as well. In retrospect, this should have been expected. Since workshops were voluntary, it is likely that faculty who attended were largely those already positively disposed to discussing responsible conduct with their trainees. One question worth asking is whether repeating workshops for interested faculty only will have a cascading effect over time, drawing in increasing numbers of faculty and serving to shift the culture. Also, it remains to be tested whether these workshops would be useful if faculty were required to attend.

For those interested in implementing these workshops in their own institutions, the curriculum, template examples and an instructor’s guide are all available on the Resources for Research Ethics Education website at: http://research-ethics.org/educational-settings/research-context.

References

Faden RR, Klag MJ, Kass NE, Krag SS (2002): On the Importance of Research Ethics and Mentoring. American Journal of Bioethics 4(2): 50-51.

Kalichman M (2014): A Modest Proposal to Move RCR Education Out of the Classroom and into Research. J Microbiol Biol Educ. 15(2):93–95.

Kalichman MW, Plemmons DK (2017): Intervention to Promote Responsible Conduct of Research Mentoring. Science and Engineering Ethics. doi: 10.1007/s11948-017-9929-8. [Epub ahead of print]

Plemmons DK, Kalichman MW (2017): Mentoring for Responsible Research: The Creation of a Curriculum for Faculty to Teach RCR in the Research Environment. Science and Engineering Ethics. doi: 10.1007/s11948-017-9897-z. [Epub ahead of print]

Whitbeck C (2001): Group mentoring to foster the responsible conduct of research. Science and Engineering Ethics 7(4):541-58.

Contributors
Michael Kalichman – Director, Research Ethics Program, UC San Diego | University bio | mkalichman@ucsd.edu

Dena Plemmons | University of California, Riverside | University page

This post may be cited as:
Kalichman M. and Plemmons D. (21 December 2017) How can we get mentors and trainees talking about ethical challenges? Research Ethics Monthly. Retrieved from: https://ahrecs.com/research-integrity/can-get-mentors-trainees-talking-ethical-challenges

Dealing with “normal” misbehavior in science: Is gossip enough?

Posted by Admin in Research Integrity on September 20, 2017

As scientists, whether in the natural or social sciences, we tend to be confident in the self-policing abilities of our disciplines to root out unethical behavior. In many countries, we have institutionalized procedures for dealing with egregious forms of misconduct in the forms of fabrication, falsification, and plagiarism (FFP).

But research is increasingly calling attention to more “everyday” forms of misconduct—modes of irresponsible (if not unethical) behavior, pertaining to how we conduct our research as well as our relationships with colleagues. These include, for example:

  • cutting corners and being sloppy in one’s research (which makes future replication difficult)
  • delaying reviews of a colleague’s work in order to beat them to publication
  • exploiting students
  • unfairly claiming authorship credit
  • misusing research funds
  • sabotaging colleagues, and so on.

Such behaviors don’t violate FFP, but nevertheless fall short of the professional standards we aspire to. They begin to shape the implicit norms we internalize about what it takes to become successful in our fields (i.e., the formal script may be that we are to give others their due credit, but “really” we know that winners need to play dirty). Further, such actions can foster experiences of injustice and exploitation that lead some of us to leave our professions altogether. They thus compromise the integrity of scientific research and can create the climate for more serious violations to occur.

Just because such forms of what De Vries, Anderson, and Martinson call “normal misbehavior” can’t be formally sanctioned doesn’t mean they go unnoticed. Rather, in the research that my colleagues and I conducted on scientists in several countries, we found such accounts to be commonplace. Why, then, the confidence in the self-policing abilities of our disciplines? The answer, we were surprised to find, was gossip.

Scientists regularly circulate information in their departments and subfields about those who violate scientific norms. Through such gossip, they try to warn one another about colleagues whose work one ought not to trust, as well as those with whom one should avoid working. The hope here is that the bad reputation generated by such gossip will negatively impact perpetrators and serve as a deterrent to others.

What we found, however, was that the same respondents would admit that many scientists in their fields managed to be quite successful in spite of a negative reputation. Some talked about stars in their disciplines who managed to regularly publish in top journals precisely because they cut corners, or managed to be highly prolific because they exploited students. Others feared that influential perpetrators could retaliate against challengers. Some others complained of “mafias” in their disciplines that controlled access to prestigious journals and grants. Still others didn’t want to develop a reputation as a troublemaker for challenging their colleagues.

Perhaps the strangest case we encountered was of a scientist at a highly reputed institution in India who was notorious for beating students with shoes if they made mistakes in the lab. Former students would try to warn incoming students through posters around campus, but this did little to hinder the flow of new students into the lab.

Our findings overall suggest that such gossip works as an effective deterrent only when targets of gossip are of lower status than perpetrators. For instance, gossip among senior scholars about the irresponsible behavior of a postdoc or junior faculty member can inhibit their hiring and promotion. However, the veracity of such gossip is hard to verify, and false rumors can destroy someone’s career. In one case we encountered, a scientist saw a colleague spread false gossip about a potential hire, but was unable to intervene in a timely manner to correct this rumor. Transgressors may also remain unaware of gossip, and thus may not be able to correct their behaviors. In cases where targets are of higher status, gossip seems little more than a means of venting frustration, with little effect on perpetrators. Overall, as a means of social control in the discipline, gossip is rather ineffective.

So why does all this matter?

The very prevalence of such gossip indicates that scientific communities still need to take more steps to improve the integrity of their organizations and fields, beyond simply sanctions for FFP. The content of such gossip should be important to leaders of scientific institutions because it can provide important access to rampant forms of irresponsible behavior that erode the integrity of scientific institutions. Obviously, such gossip can’t simply be taken at face value; investigation is needed to weed out false rumors. Institutions need to develop better channels to report questionable behavior and need to regularly analyze such reports for patterns that warrant attention.

What’s most crucial is that institutional leaders prioritize creating a climate that fosters prevention and transparency, encourages speaking up about such issues, and provides safety from potential retaliation. These are among the best practices for protecting whistleblowers, as identified by the Whistleblower Protection Advisory Committee (WPAC) of the US Department of Labor. In addition to ethics training on issues related to FFP, the ongoing professionalization of scientists needs to include more overt discussion about

  • the implicit norms of success in the field
  • the prevalence and causes of burnout
  • how to productively address some of the more rampant forms of irresponsible behavior (such as the ones I listed earlier in this post), and
  • systemic issues, such as competitive pressures and structural incentives that enable the rationalization of irresponsible behavior

If such measures are implemented, we can significantly improve the ethical climates of our institutions and disciplines; reduce some of the attrition caused by institutional climates that tolerate (and even reward) such “normal misbehavior”; and help prevent the more egregious scandals that shake the public’s trust in science.

References

Martinson, B. C., Anderson, M. S., & De Vries, R. (2005). Scientists behaving badly. Nature, 435(7043), 737-738.

Shinbrot, T. (1999). Exploitation of junior scientists must end. Nature, 399(6736), 521.

De Vries, R., Anderson, M. S., & Martinson, B. C. (2006). Normal misbehavior: Scientists talk about the ethics of research. Journal of Empirical Research on Human Research Ethics, 1(1), 43-50.

Vaidyanathan, B., Khalsa, S., & Ecklund, E. H. (2016). Gossip as Social Control: Informal Sanctions on Ethical Violations in Scientific Workplaces. Social Problems, 63(4), 554-572.

Whistleblower Protection Advisory Committee (WPAC). (2015). Best Practices for Protecting Whistleblowers and Preventing and Addressing Retaliation. https://www.whistleblowers.gov/wpac/WPAC_BPR_42115.pdf

Contributor
Dr. Brandon Vaidyanathan is Associate Professor of Sociology | The Catholic University of America | CUA staff page | brandonv@cua.edu

This post may be cited as:
Vaidyanathan B. (20 September 2017) Dealing with “normal” misbehavior in science: Is gossip enough? Research Ethics Monthly. Retrieved from: https://ahrecs.com/research-integrity/dealing-normal-misbehavior-science-gossip-enough

In a world of hijacked, clone and zombie publishing, where shouldn’t I publish?

When we talk to research higher degree candidates and early career researchers about publication ethics, one question comes up repeatedly. Indeed, it is a question we are frequently asked by experienced researchers, particularly those who wish to publish in a new field – where should I publish? That’s a difficult question to answer in the abstract so first we would like to remove some distractions from the decisions that need to be made. In this piece, we look at the other side of the coin and explore where researchers should not publish.

Research institutions often provide their staff with incentives to publish in top-ranking journals as determined by impact factor. Publishing in these journals can boost the university’s standing in some international rankings and national research assessment exercises. Consequently, performance indicators, promotion and recruitment criteria, track records for grant assessment and even financial bonuses may be aligned with these outlets.

Good research takes a long time and we should take care where we place our outputs. If we want our papers to be read, we need to look for a journal that reaches our prospective audience. In some fields, this might mean a niche but highly rated journal linked to a particular professional association; in other cases, we seek a journal that is covered by reputable indexes and databases like Medline, PubMed, Scopus or the Web of Science. Only then would a paper be picked up by a subsequent systematic review or meta-analysis, for example.

However, many researchers may find it tough to break into the top 25%, let alone the top 10%, of journals. Even if they can, the process can prove lengthy and frustrating, as these journals use robust peer review processes and may call for repeated, extensive and perhaps even unwarranted revisions. In the face of this, some scholars may come under pressure to publish quickly, particularly if the award of a doctorate or confirmation of a first job is dependent on having something in print. And, for some purposes (including Australian institutional block research funding until quite recently), quantity may trump quality.

There are traps for the unwary who find themselves in this position. Everyone wants to avoid predatory journals and publishers and, yet, not everyone does. Not even some top researchers manage to avoid these outlets according to one study of academic economists (Wallace and Perri, 2016). Researchers, it seems, can be seduced by an invitation from journal editors, an invitation sometimes filled with ‘flattering salutations, claims that they had read the recipient’s papers despite being out of the journals claimed area of study, awkward sentence structure and spelling mistakes, and extremely general topics’ (Moher and Srivastava, 2015).

While many researchers have been duped, publication scammers are not always given a free ride. A few have come under some pressure from legal authorities. In 2016, the Federal Trade Commission (FTC) filed a brief in the US District Court against the OMICS Group and related entities. The brief reveals a little about what is known about these journals. OMICS, for instance, is a Nevada-registered, Hyderabad-based entity that claims to run 700 journals. The FTC alleged that OMICS deliberately misled potential authors by misrepresenting the composition of the editorial board, the process of review, the journal’s impact factor and the fee for publication:

…the academic experts identified by Defendants lack any connection with Defendants’ journals. Further, in many instances, articles submitted for publishing do not undergo standard peer review before publishing. And Defendants’ journals’ impact factors are not calculated by Thomson Reuters, nor are their journals included in PubMed Central. Moreover, Defendants fail to disclose, or disclose adequately, that consumers must pay a publishing fee for each published article. (p.5)

Recently, OMICS has diversified its strategy. In 2016, Canadian journalists reported OMICS had bought at least the trading name of reputable Canadian publishers and appeared to have also picked up their publishing contracts with well-regarded journals. This, it seems, was done so that OMICS could use these names as a front to attract articles to its predatory publishing stable (Puzic, 2016). Some professional associations who found their publishing contracts taken over have declared their intention to break their connection to OMICS.

When assessing which journals to target for your work, you might:

  1. Read recent issues of the journal. Are the papers of a quality you would cite? Can you find evidence of good editorial standards? Would your work fit among the papers published there? What, Macquarie University counsels its staff to consider, is its relevance, reputation, visibility and validity?
  2. Check the standing of the publication’s editors. Are they members of the Committee on Publication Ethics (COPE) or, if their journals are online, the Open Access Scholarly Publishers’ Association (OASPA)? Predatory publishers are less likely to be members. COPE has also helped establish the Think. Check. Submit. campaign to support authors’ decision-making.
  3. Talk to a research librarian, your peers and mentors about the potential publisher. If you know anyone who is on the Editorial Advisory Board, ask them about the journal at the same time that you seek to establish whether the journal might be interested in your work. Some leading academics have found their names on the Editorial Advisory Boards of predatory journals and have discovered that it is easier to join these lists than to have their name removed.
  4. Read the publisher’s policies and editorial review practices. Are they coherent? Do they provide detailed information on submission guidelines and peer review processes? If they guarantee a speedy turnaround, that is often a warning sign. Check whether they impose ‘article processing charges’ (APCs). Then, check again.
  5. Reach out to researchers who have previously published there to discuss their experiences and impressions.

Not every legitimate journal can extract itself from predatory publishers. Where once-respected journals are hijacked by criminal enterprises, they can continue their existence as ‘zombie journals’, trading off the reputation built up in the past but behaving like any other predatory journal. There are other kinds of dishonest practices. Some predatory publishers have established ‘clone journals’ which use the same title as a legitimate journal and reproduce the original journal’s website, or make minor changes to the title, in an attempt to deceive unwary authors (Prasad, 2017). Hijacked, clone and zombie publishing can turn a glowing recommendation into a trap for the unwitting. Analysis of criminal activities in publishing has taken a little time to catch up with offending patterns. In recent work, Moher and Shamseer (2017) argued that the term ‘predatory journal’ should be replaced by ‘illegitimate entities’, rejecting the idea that such entities are entitled to clothe themselves in the language of the legitimate publishing industry.

So, what advice should we give to researchers about being prudent with the treasured fruit of their labours? Until recently, one quick answer might have been to avoid journals on Beall’s List, a ‘blacklist’ of ‘predatory journals’ maintained by Jeffrey Beall, a US-based librarian. The list always had its critics (Neylon, 2017). Variables used by Beall – such as open access, fees to publish, location in low- to middle-income countries, and novel peer review practices – are not automatically predictors of a predatory publisher. Nor does the converse necessarily guarantee that a publisher is a safe choice. However, whatever its longstanding flaws, Beall’s List is rapidly losing its currency. Earlier this year Beall decided to ‘unpublish’ his list; it is no longer updated and is only available on cache sites. Institutions seeking a successor to Beall’s List can look towards a commercial provider, Cabells, which has announced its own Blacklist. Anyone using a blacklist should also check journals against a ‘white list’ like the Directory of Open Access Journals or even the old Excellence in Research Australia journal rankings (removed from the research council websites, but still circulated discreetly like a samizdat newsletter among Australian academics).

Unfortunately, unless black and white lists are updated continuously, they can never keep up with changes in the publication industry. Some publishers once regarded as predatory genuinely improve their practices over time. On the other hand, illegitimate practices have also changed over time. Over the last few years, we have seen the movement of organised and unorganised crime into the industry, attracted by the roughly US$100m in fees that Shamseer et al. (2017) estimated (very roughly) that predatory publishers might be obtaining.

So, the quick answer to the question ‘where shouldn’t I publish?’ is that since the demise of Beall’s List, researchers need to engage in critical enquiry and reflection about a potential publisher. This should not come as a shock – the same advice would have been true long before the end of Beall’s List.

In recent weeks, we’ve been including in the Resource Library discussion pieces, papers and strategies that propose how to assess publishers.

These include:

Not the ‘Beall’ and end-all: the death of the blacklist, AOAG Webinar Series (Dr Andy Pleffer & Susan Shrubb | April 2017)

Beyond Beall’s List: Better understanding predatory publishers, Association of College & Research Libraries (Monica Berger and Jill Cirasella | March 2015)

Black lists, white lists and the evidence: exploring the features of ‘predatory’ journals, BioMed Central Blog (David Moher & Larissa Shamseer | March 2017)

Warning: conmen and shameless scholars operate in this area. Times Higher Education (James McCrostie | January 2017)

Blacklists are technically infeasible, practically unreliable and unethical. Period. – LSE Blog (Cameron Neylon | January 2017)

Beware! Academics are getting reeled in by scam journals – UA/AU (Alex Gillis | January 2017)

References

Moher, D. and Shamseer, L. (2017) Black lists, white lists and the evidence: exploring the features of ‘predatory’ journals. BioMed Central Blog 16 Mar 2017. https://blogs.biomedcentral.com/bmcblog/2017/03/16/black-lists-white-lists-and-the-evidence-exploring-the-features-of-predatory-journals/

Moher, D. and Srivastava, A. (2015) You are invited to submit…. BMC Medicine, 13(1), p.180. https://bmcmedicine.biomedcentral.com/articles/10.1186/s12916-015-0423-3

Neylon C. (2017) Blacklists are technically infeasible, practically unreliable and unethical. Period. LSE Blog. https://cameronneylon.net/blog/blacklists-are-technically-infeasible-practically-unreliable-and-unethical-period/

Prasad, R. (2017) Predatory journal clones of Current Science spring up. The Hindu, 14 July. http://www.thehindu.com/sci-tech/science/predatory-journal-clones-of-current-science-spring-up/article19277858.ece

Puzic, S. (2016) Offshore firm accused of publishing junk science takes over Canadian journals. CTV News. 28 September. http://www.ctvnews.ca/health/health-headlines/offshore-firm-accused-of-publishing-junk-science-takes-over-canadian-journals-1.3093472?hootPostID=00bc7834da5380548a8b2d58e40c8b29

Shamseer, L, Moher, D., Maduekwe, O., Turner, L., Barbour, V., Burch, R., Clark, J., Galipeau, J., Roberts J. and Shea, B.J. (2017) Potential predatory and legitimate biomedical journals: can you tell the difference? A cross-sectional comparison. BMC Medicine 15:28. https://doi.org/10.1186/s12916-017-0785-9

Wallace, F. and Perri, T. (2016) Economists behaving badly: publications in predatory journals. MPRA Paper No. 73075, posted 15 August. https://mpra.ub.uni-muenchen.de/73075/1/MPRA_paper_73075.pdf

Also see

Examining publishing practices: moving beyond the idea of predatory…
Continuing Steps to Ensuring Credibility of NIH Research: Selecting Journals with…
Illegitimate Journals and How to Stop Them: An Interview with Kelly Cobey and…
Open access, power, and privilege

Contributors
Prof. Mark Israel, senior consultant AHRECS, Mark’s AHRECS bio – mark.israel@ahrecs.com
Dr Gary Allen, senior consultant AHRECS, Gary’s AHRECS bio – gary.allen@ahrecs.com

This post may be cited as:
Israel M & Allen G. (2017, 26 July) In a world of hijacked, clone and zombie publishing, where shouldn’t I publish? Research Ethics Monthly. Retrieved from: https://ahrecs.com/research-integrity/world-hijacked-clone-zombie-publishing-shouldnt-publish
