ACN - 101321555 Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS)


Empowering and enabling participation in human research: Reflections from two Queenslanders living with Multiple Sclerosis

 

Dr Gary Allen
MS Qld Ambassador | AHRECS Senior Consultant | Member NS s4 review committee


Natalie Walsh
MS Qld Community Engagement Manager

Participation in ethical human research often provides four positive opportunities for persons living with MS:

(i) A welcome distraction from the sometimes-cruel realities of living with this progressive neurological condition.

(ii) An opportunity to provide insight into the practical challenges of symptoms that may be invisible to observers other than family, close friends and carers, and to give voice to the experiences of persons who are disenfranchised.

(iii) Access to whatever benefits are anticipated as a result of a project.

(iv) An opportunity to make a positive contribution to the body of knowledge and/or other public good.

The exclusion of people living with MS from research is a concern with regard to the ethical values of Justice (e.g. NS 4.5.3) and Beneficence because it denies access to the benefits described above, on the grounds of a disability. It is also a merit and integrity concern because, if a section of the community is excluded from a research project, there is at least the possibility the results might be different for people living with MS.

Prevalence in society
In Australia 1 in 5 people live with a disability. The average age of people diagnosed with MS is just 30 and 3 out of 4 are female.

On average, more than 10 Australians are diagnosed with MS every week. There are over 25,600 people in Australia living with MS, including 4,970 Queenslanders and the condition affects each person differently. The progress, severity and specific symptoms of MS cannot be predicted. MS is a lifelong condition for which a cure is yet to be found. However, doctors and scientists are making discoveries about the treatment and management of MS every day.

MS is one of the most common chronic neurological conditions of the central nervous system. It may affect the brain, spinal cord and optic nerve, and it affects more young people in Australia than any other chronic progressive neurological disease.

Symptoms and research
It is important to note that the symptoms associated with MS differ in both presentation and severity for each individual.

Symptoms of MS will vary and are unpredictable.  No two people will experience the same symptoms to the same degree. Symptoms can come and go, and can also be affected temporarily by other factors such as hot weather or an infection.

Although MS can cause a wide variety of symptoms, most people only experience a small number of these.  For most of the common MS symptoms, there are now many effective forms of symptom management. It is also important to note that the symptoms listed here are not exclusive to MS and can appear in many different neurological conditions.

The symptoms of MS can be both visible and invisible to others and include:

  • Changes in memory, concentration or reasoning
  • Slurring or slowing of speech
  • Extreme tiredness (unusual fatigue): a debilitating kind of general exhaustion and weariness which is unpredictable and disproportionate to the activity
  • Visual disturbance, including blurring of vision, double vision (diplopia), inflammation of the optic nerve (optic neuritis), pain and (rarely) loss of vision
  • Dizziness and vertigo
  • Emotional and mood changes
  • Pain
  • Altered sensation, such as tingling, numbness or pins and needles
  • Altered muscle tone, such as muscle weakness, tremor, stiffness or spasms
  • Difficulties with walking, balance or coordination, including loss of balance, tremors, unstable walking (ataxia), dizziness (vertigo), clumsiness of a limb, lack of coordination, and weakness (particularly affecting the legs)
  • Sexual changes
  • Bladder and bowel changes
  • Sensitivity to heat and/or cold

Exclusion
The exclusion of persons living with MS can typically occur in one of two ways:

(i) Intentionally because of the perceived vulnerability of the population, especially if an individual’s symptoms include impact on executive function, such as cognition and memory.

(ii) Unintentionally:
        a. because the research activities don’t accommodate the limitations imposed by an individual’s symptoms.
        b. because communication is not extended to networks outside of the research community.

Empowering and enabling participation
The exclusion of persons living with MS from research should be limited to circumstances where an individual’s symptoms would confound the collected data (e.g. a person with a severe intention tremor in their lead hand is unlikely to be able to quickly draw a shape they saw) or where they are especially vulnerable to harm (e.g. high-intensity exercise when their symptoms include autonomic impact on their cardiovascular system).

Rather than excluding potential participants who live with MS, researchers and review bodies are encouraged to consider:

(i) Whether the complexity of the research and nature of the risks are such that the competence of potential participants should be established. This might be explored in a simple conversation, as is recommended by paragraph 4.5.10 of the National Statement, e.g.:

        a. in the case of low risk anonymous data collection, accepting consent without establishing competence.

        b. considering strategies to scaffold consent and respecting the wishes of individuals, even if substitute consent is required.

        c. including a support person to provide individual assistance to participants.

(ii) Conducting testing in a cool and bright location and at preferred times, such as mornings.

(iii) Allowing participants to request rest breaks, with refreshments available.

(iv) Supporting screen readers and closed captioning.

(v) Supporting suitable interface controls other than a mouse.

(vi) Reimbursing transport, parking or companion costs if travel is required.

Reference groups
The establishment of a reference group can be a valuable way to explore whether the anticipated benefits of a project are perceived as justifying the risks (as recommended by paragraph 2.1.5 of the National Statement), whether the support strategies are sufficient, and whether the language of the recruitment and consent materials is appropriate.

References:
National Statement on Ethical Conduct in Human Research (2007 updated 2018)

This post may be cited as:
Allen, G. & Walsh N. (1 October 2019) Empowering and enabling participation in human research: Reflections from two Queenslanders living with Multiple Sclerosis. Research Ethics Monthly. Retrieved from: https://ahrecs.com/human-research-ethics/empowering-and-enabling-participation-in-human-research-reflections-from-two-queenslanders-living-with-multiple-sclerosis

Should we Reframe Research Ethics as a Professional Ethics?

 

Dr Nathan Emmerich
Research Fellow in Bioethics at ANUMS

Despite the fact that one of the urtexts of bioethics, Beauchamp and Childress’ Principles of Biomedical Ethics, offers a set of concepts that purport to apply to both research and medical practice, we standardly contrast research ethics with professional ethics. The operating presumption seems to be that a proper grasp of professional ethics requires an understanding of the unique role professionals play, whereas the same cannot be said of research ethics. Here the presumption is that researchers are not unique but interchangeable. Furthermore, their individuality is inimical to good, and therefore ethical, research.

Whilst both healthcare professionals and researchers should be objective, the professional enters into a singular relationship with their patients. The position of the researcher can, however, be occupied by any relevantly qualified individual and their function is to report their scientific observations. Thus, underlying this contrast is an epistemological point. The perceived importance of the relationship between doctors and patients means that whilst the ethics of the preeminent profession, medicine, are predicated on professionalism they are equally predicated on something that is distinctively (inter)personal. In contrast, the notion that there might be an (inter)personal dimension to the relationship between researchers and research participants is inimical to the requirement for objectivity, at least for a certain value of objectivity.

COMMENTARY
Nik Zeps, AHRECS

In this thought-provoking blog, Nathan Emmerich challenges the notion that there is any distinction between research ethics and professional ethics when it comes to social science research. That is, the very nature of the enterprise requires that the researcher be deeply engaged in ethical discourse throughout the conduct of the study and not simply at a point in time to satisfy the regulatory requirements of ethics committees to obtain their approval. Whilst the argument is reserved for the social sciences, and there is some hesitancy to extend it beyond this, it is clear that the arguments made are true for all research, including biomedical. There is a reluctance to challenge notions about the divide between research and clinical practice that have been with us for over 50 years, but perhaps it is time to have a proper discussion about whether this is or is not applicable any longer. Patient-centred research with an emphasis on co-design with consumers upends the notion that this type of research maintains a separation between researchers and research participants. Social science research provides an immediate opportunity for rethinking how we behave ethically, but biomedical research should follow hot on the heels.

Therein, of course, lies the rub. According to Stark, the differentiation between research ethics and professional ethics can be traced to the National Institutes of Health, Bethesda, Maryland, USA, circa 1950. Given the existing competition between the codes of professional ethics promulgated by medicine’s sub-specialties, the nascent idea of a research ethics was conceived pragmatically and in aprofessional terms. When it came to biomedical research, and the epistemology of the natural sciences, this was not an issue. However, consistent with Schrag’s critique of the subsequent development of research ethics as neglecting concerns expressed by social scientists, this is more problematic when it comes to the social sciences, particularly at the more interpretive end of the spectrum.

In qualitative social science the unique perspective, position or standpoint of the researcher is essential to understanding socio-cultural reality and, therefore, to the process of conducting research. Furthermore, it is not something that can be eliminated by the use of (replicable) quantitative measures. This does not mean qualitative research cannot be objective. Rather, it means that the notion of objectivity differs between the natural and social sciences. Doing qualitative social science does not mean embracing subjectivity. Rather, it requires qualitative researchers to embrace epistemological reflexivity and to aim at objectivity as a value, virtue, or standpoint of social research.

When this is coupled with the fact that such research often seeks to give expression to the ‘lived experience’ of research participants, one can see how a concern for the (inter)personal must return to center stage in discussions of social scientific research ethics. One way of doing so would be to rethink the ethics of social scientific research as a form of professional ethics. Thus, rather than simply ‘frontloading’ ethical decision-making as a part of the design of proposed research, which can then be subject to peer review or evaluation by committee, we can more clearly acknowledge that engaging with the ethical dimension of research requires ongoing attention. The range of ethical issues researchers might encounter, both in the field and as a function of their role, are such that we cannot hope to fully address them preemptively. In this context, and consistent with the contemporary concern for the integrity of both research and researchers, we might draw on the idea of researchers as professionals and, in so doing, embrace the view that they ought to be guided by a set of internal professional norms or ethics.

Of course, this is not exactly a solution to the ethical issues social scientists might encounter in the course of research. It does, however, invite further engagement with such questions. Indeed, one can say more than this. Rather than thinking of the ethics of research as something to be addressed and codified by external commentators, such as bioethicists, the idea that research might benefit from a professional ethics invites researchers themselves to lead the discussion. No doubt questions remain, not least on what might constitute a profession or professional group in this context. Nevertheless, this proposal suggests that both professional groups and professional researchers should play a privileged role in creating, interpreting and putting into practice the substantive commitments of their own professional ethics. Furthermore, it is for them to set forth, justify and communicate the stance they adopt to other stakeholders.

This suggestion stands in relatively stark contrast to conceptions of research ethics, where external standards and evaluations are seen as having priority. To me, the difference is akin to the one we find when comparing research ethics committees and clinical ethics committees. The former tends to be rather one-sided; it assesses and offers judgment on research proposals or documents. The latter engages with professional actors and, through a process of mutual dialogue and discussion, facilitates and contributes to the individual’s own ethical formations. Which approach is more likely to promote the ethics and integrity of research, particularly social scientific research, seems self-evident.

Dr Nathan Emmerich is a Research Fellow in Bioethics at ANUMS. The ideas presented in this post stem from a book chapter entitled ‘A Professional Ethics for Researchers?’ (online first) recently published in Iphofen (Ed) Handbook of Research Ethics and Scientific Integrity (Springer), as well as an earlier publication, ‘Reframing Research Ethics’.

References:

Beauchamp, T.L., and J.F. Childress. 2009 [1979]. Principles of Biomedical Ethics. 6th Edition. Oxford, UK: Oxford University Press.

Emmerich, N. 2016 ‘Reframing Research Ethics: Towards a Professional Ethics for the Social Sciences’. Sociological Research Online 21(4):7 http://www.socresonline.org.uk/21/4/7.html

Emmerich, N. 2019. ‘A Professional Ethics for Researchers?’ In Iphofen, R. (Ed) Handbook of Research Ethics and Scientific Integrity. Springer. Online First: https://doi.org/10.1007/978-3-319-76040-7_34-1

Iphofen, R. (Ed) Forthcoming 2020. Handbook of Research Ethics and Scientific Integrity. Springer, https://link.springer.com/referencework/10.1007/978-3-319-76040-7

Stark, L. 2011. Behind Closed Doors: IRBs and the Making of Ethical Research. University of Chicago Press. https://www.press.uchicago.edu/ucp/books/book/chicago/B/bo12182576.html

Schrag, Z.M. 2010. Ethical Imperialism: Institutional Review Boards and the Social Sciences, 1965–2009. The Johns Hopkins University Press. https://jhupbooks.press.jhu.edu/title/ethical-imperialism

This post may be cited as:
Emmerich, N. (1 October 2019) Should we Reframe Research Ethics as a Professional Ethics? Research Ethics Monthly. Retrieved from: https://ahrecs.com/human-research-ethics/should-we-reframe-research-ethics-as-a-professional-ethics

Ethics, Security and Privacy – the Bermuda Triangle of data management?

 

Malcolm Wolski and Andrew Bowness
Griffith University

 

To manage sensitive research data appropriately, ethics, security and privacy requirements need to be considered. Researchers are traditionally familiar with ethics, but often have not considered the privacy and security pieces of the puzzle. Our reasons for making this statement are:

  • IT products used in research change rapidly
  • Legislation changes rapidly and there are jurisdictional issues
  • Most researchers are not legal or IT experts
  • No one teaches them enough basics to know what is risky behaviour

The recent revision to the Australian Code for the Responsible Conduct of Research (2018) on Management of Data and Information in Research highlights that it is not just the responsibility of a university to use best practice, but it is also the responsibility of the researcher. The responsible conduct of research includes within its scope the appropriate generation, collection, access, use, analysis, disclosure, storage, retention, disposal, sharing and re-use of data and information. Researchers have a responsibility to make themselves aware of the requirements of any relevant codes, legislation, regulatory, contractual or consent agreements, and to ensure they comply with them.

It’s a complex world

However, the environment is becoming increasingly complex for researchers. First, privacy legislation depends on the jurisdiction of the participants. For example, a research project involving participants in Queensland is subject not only to the Australian Privacy Act but also to the Queensland Information Privacy Act 2009 (Qld) and, if a participant or collaborator is an EU citizen, the General Data Protection Regulation (EU GDPR).

Secondly, cybersecurity and information security activities in universities have increased dramatically in recent times because of publicised data breaches and the impact of data breach legislation. If your research involves foreign citizens, you may also find foreign legislation impacting the type of response required.

Thirdly, funding agencies, such as government departments, are increasingly specifying security and privacy requirements in tender responses and contracts.

These developments are having an impact on research project governance and practices, particularly for projects where the researcher has identified they are working with sensitive data. While the conversation typically focuses on data identified under the privacy acts as sensitive (e.g. Personally Identifiable Information (Labelled) under the Australian Privacy Act), researchers handle a range of data they may wish to treat as sensitive, whether for contractual reasons (e.g. participant consent, data sharing agreements) or for other reasons (e.g. ethical or cultural).

We have noticed an increasing trend within institutions where researchers are being required to provide more information on how they manage data as specified in a proposal or in a data sharing agreement. This typically revolves around data privacy and security, which is different from the ethics requirements.

What do “security” and “privacy” mean to the practitioner?

IT security is about minimising attack points, through process or by using IT solutions, to prevent or minimise the impacts of hostile acts, or to minimise the impacts of misadventure (e.g. leaving a laptop on a bus). Data security sits more in the sphere of IT than of researchers. This is reflected in which software products, systems and storage are “certified” as safe for handling and managing data classified as sensitive. IT usually also provides the identity management systems used to share data.

We have also noticed that researchers rely on software vendors’ website claims about security and privacy, which is problematic because most cloud software runs from offshore facilities that do not comply with Australian privacy legislation. Unless you are an expert in both Australian legislation and cybersecurity, you need to rely on the expertise of your institutional IT and cybersecurity teams to verify vendors’ claims.

In the current environment, data privacy is more about mandated steps and activities designed to force a minimal set of user behaviours to prevent harm caused through successful attacks or accidental data breaches. It usually involves punishment to force good behaviour (e.g. see Data Breach Legislation for late reporting). Typically, data privacy is more the responsibility of the researcher. It usually involves governance processes (e.g. who has been given access to what data) or practices (e.g. what software products the team actually uses to share and store data).

What we should be worrying about

The Notifiable Data Breaches Statistics Report: 1 April to 30 June 2019 highlighted that only 4% of breaches, out of 254 notifications, were due to system faults, but 34% were due to human error and 62% due to malicious or criminal acts. Based on these statistics, the biggest risk associated with data breaches is where the data is in the hands of the end-user (i.e. the researcher) not with the IT systems themselves.

We argue the risks are also greater in research than the general population because of a number of factors such as the diversity of data held (e.g. data files, images, audio etc), the fluidity of the team membership, teams often being made up of staff across department and institutional boundaries, mobility of staff, data collection activities offsite, and the range of IT products needed in the research process.

For this discussion, the focus is on the governance and practice factors within the research project team, and how these relate back to the ethics requirements when it has been highlighted that the project will involve working with sensitive data.

Help!!

We have worked closely with researcher groups for many years and have noticed a common problem: researchers are confronted with numerous legislative, regulatory, policy and contractual requirements, all written in terminology and language that bears little resemblance to what happens in practice. For example, to comply with legislation:

  • What does sending a data file “securely” over the internet actually look like in practice, and which IT products are “safe”?
  • Is your university-provided laptop with the standard institutional image certified as “safe” for data classified as private? How do you know?
  • Is your mobile phone a “safe” technology to record interviews or images classified as private data? What is a “safe” technology for field work?
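None of these questions has a one-line answer, but one small, concrete piece of sending a file “securely” can be illustrated: verifying that the file that arrived is byte-for-byte the file that was sent. Below is a minimal Python sketch using only the standard library. It is an illustration, not a complete solution: a checksum protects integrity, not confidentiality, so an encrypted transfer channel is still needed.

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 65536) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks
    so that large data files never need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# The sender records the digest before transfer; the recipient
# recomputes it on arrival and compares the two hex strings.
# Any mismatch means the file was corrupted or altered in transit.
```

In practice the digest would travel by a separate channel from the file itself (e.g. quoted in an email while the data moves through an institution-approved file-share service).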

Within the university sector, a range of institutional business units provide support services. For example, IT may provide advice on the security and privacy compliance of software, networked equipment or hardware infrastructure, and the library may provide data management advice covering sensitive data. At our institution, Griffith University, the eResearch Services and Library Research Services teams have been working closely with research groups to navigate this minefield and develop standard practices fit for their purpose.

What we think is the best way forward

Our approach is to follow the Five Safes framework, which has also been adopted by the Office of the National Data Commissioner. For example:

  • Safe People: Is the research team member appropriately authorised to access and use specified data? I.e. do you have a documented data access plan against team roles, and a governance/induction process for gaining access to restricted data?
  • Safe Projects: Is the data to be used for an appropriate purpose? I.e. do you have copies of the underlying data sharing/consent agreements, contracts, and documents outlining ownership and licensing rights?
  • Safe Settings: Does the access environment prevent unauthorised use? I.e. do IT systems and processes support this, and are access levels checked regularly?
  • Safe Data: Has appropriate and sufficient protection been applied to the data? I.e. what is it, and is it commensurate with the level of risk involved?
  • Safe Outputs: Are the statistical results non-disclosive, and have you checked rights/licensing issues?

Expect to see a lot more of the Five Safes approach in the coming years.
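For teams that like working checklists, the five questions above can also be kept as a simple project self-audit structure. The sketch below is hypothetical (it is not an official Five Safes tool, and the keys and question wording are our own paraphrase):

```python
# A minimal, hypothetical sketch of a Five Safes self-assessment
# for a project; the questions paraphrase the list above.
FIVE_SAFES = {
    "safe_people": "Is each team member authorised to access the data?",
    "safe_projects": "Is the data used for an appropriate, documented purpose?",
    "safe_settings": "Does the access environment prevent unauthorised use?",
    "safe_data": "Is protection commensurate with the level of risk?",
    "safe_outputs": "Are outputs non-disclosive and rights-cleared?",
}

def audit(answers: dict[str, bool]) -> list[str]:
    """Return the Five Safes dimensions that still need attention."""
    return [k for k in FIVE_SAFES if not answers.get(k, False)]

gaps = audit({"safe_people": True, "safe_projects": True})
# → ['safe_settings', 'safe_data', 'safe_outputs']
```

Even a lightweight record like this makes it easier to show an ethics committee, or a data provider, which safeguards have been addressed and which remain open.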


Contributors

Malcolm Wolski, Director eResearch Services, Griffith University

Andrew Bowness, Manager, Support Services, eResearch Services, Griffith University

This post may be cited as:
Wolski, M. and Bowness, A. (29 September 2019) Ethics, Security and Privacy – the Bermuda Triangle of data management? Research Ethics Monthly. Retrieved from: https://ahrecs.com/research-integrity/ethics-security-and-privacy-the-bermuda-triangle-of-data-management

The F-word, or how to fight fires in the research literature

 

Professor Jennifer Byrne | University of Sydney Medical School and Children’s Hospital at Westmead

 

At home, I am constantly fighting the F-word. Channelling my mother, I find myself saying things like ‘don’t use that word’, ‘not here’, ‘not in this house’. As you can probably gather, it’s a losing battle.

Research has its own F-words – ‘falsification’, ‘fabrication’, different colours of the overarching F-word, ‘fraud’. Unlike the regular F-word, most researchers assume that there’s not much need to use the research versions. Research fraud is considered comfortably rare, the actions of a few outliers. This is the ‘bad apple’ view of research fraud – that fraudsters are different, and born, not made. These rare individuals produce papers that eventually act as spot fires, damaging their fields, or even burning them to the ground. However, as most researchers are not affected, the research enterprise tends to just shrug its collective shoulders, and carry on.

But, of course, there’s a second explanation for research fraud – the so-called ‘bad barrel’ hypothesis – that research fraud can be provoked by poorly regulated, extreme pressure environments. This is a less comfortable idea, because this implies that regular people might be tempted to cheat if subjected to the right (or wrong) conditions. Such environments could result in more affected papers, about more topics, published in more journals. This would give rise to more fires within the literature, and more scientific casualties. But again, these types of environments are not considered to be common, or widespread.

But what if the pressure to publish becomes more widely and acutely applied? The use of publication quotas has been described in different settings as being associated with an uptick in numbers of questionable publications (Hvistendahl 2013; Djuric 2015; Tian et al. 2016). When publication expectations harden into quotas, more researchers may feel forced to choose between their principles and their (next) positions.

This issue has been recently discussed in the context of China (Hvistendahl 2013; Tian et al. 2016), a population juggernaut with scientific ambitions to match. China’s research output has risen dramatically over recent years, and at the same time, reports of research integrity problems have also filtered into the literature. In biomedicine, these issues again have been linked with publication quotas in both academia and clinical medicine (Tian et al. 2016). A form of contract cheating has been alleged to exist in the form of paper mills, or for-profit organisations that provide research content for publications (Hvistendahl 2013; Liu and Chen 2018). Paper mill services allegedly extend to providing completed manuscripts to which authors or teams can add their names (Hvistendahl 2013; Liu and Chen 2018).

I fell into thinking about paper mills by accident, as a result of comparing five very similar papers that were found to contain serious errors, which called into question whether some of the reported experiments could have been performed (Byrne and Labbé 2017). My colleague Dr Cyril Labbé and I are now knee-deep in analysing papers with similar errors (Byrne and Labbé 2017; Labbé et al. 2019), suggesting that a worrying number of papers may have been produced with some kind of undeclared help.

It is said that to catch a thief, you need to learn to think like one. So if I were running a paper mill, and wanted to hide many questionable papers in the biomedical literature, what would I do? The answer would be to publish papers on many low-profile topics, using many authors, across many low-impact journals, over many years.

In terms of available topics, we believe that the paper mills may have struck gold by mining the contents of the human genome (Byrne et al. 2019). Humans carry 40,000 different genes of two main types, the so-called coding and non-coding genes. Most human genes have not been studied in any detail, so they provide many publication opportunities in fields where there are few experts to pay attention.

Human genes can also be linked to cancer, allowing individual genes to be examined in different cancer types, multiplying the number of papers that can be produced for each gene (Byrne and Labbé 2017). Non-coding genes are known to regulate coding genes, so non-coding and coding genes can also be combined, again in different cancer types.

The resulting repetitive manuscripts can be distributed between many research groups, and then diluted across the many journals that publish papers examining gene function in cancer (Byrne et al. 2019). The lack of content experts for these genes, or poor reviewing standards, may help these manuscripts to pass into the literature (Byrne et al. 2019). And as long as these papers are not detected, and demand continues, such manuscripts can be produced over many years. So rather than having a few isolated fires, we could be witnessing a situation where many parts of the biomedical literature are silently, solidly burning.

When dealing with fires, I have learned a few things from years of mandatory fire training. In the event of a laboratory fire, we are taught to ‘remove’, ‘alert’, ‘contain’, and ‘extinguish’. I believe that these approaches are also needed to fight fires in the research literature.

We can start by ‘alerting’ the research and publishing communities to manuscript and publication features of concern. If manuscripts are produced to a pattern, they should show similarities in terms of formatting, experimental techniques, language and/or figure appearance (Byrne and Labbé 2017). Furthermore, if manuscripts are produced in large numbers, they could appear simplistic, with thin justifications for studying individual genes, and almost non-existent links between genes and diseases (Byrne et al. 2019). But most importantly, manuscripts produced en masse will likely contain mistakes, and these may constitute an Achilles heel enabling their detection (Labbé et al. 2019).
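One of these shared-feature signals can be sketched in a few lines: flagging pairs of texts with unusually high word-trigram overlap (Jaccard similarity). This is a generic illustration of similarity screening only, not the Seek & Blastn tool cited here, and the two example sentences are invented:

```python
def ngrams(text: str, n: int = 3) -> set[tuple[str, ...]]:
    """Return the set of word n-grams in a text (case-folded)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: str, b: str, n: int = 3) -> float:
    """Jaccard similarity of two texts' word n-gram sets (0.0 to 1.0)."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

# Two invented sentences of the template-like kind described above:
doc1 = "knockdown of the gene suppressed proliferation of cancer cells"
doc2 = "knockdown of the gene suppressed migration of cancer cells"
print(round(jaccard(doc1, doc2), 2))  # prints 0.4
```

Independently written papers rarely share long runs of identical phrasing, so pairs scoring far above the background level are candidates for closer manual inspection rather than automatic accusations.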

Acting on reports of unusual shared features and errors will help to ‘contain’ the numbers and influence of these publications. Detailed, effective screening by publishers and journals may detect more problematic manuscripts before they are published. Dedicated funding would encourage active surveillance of the literature by researchers, leading to more reports of publications of concern. Where these concerns are upheld, individual publications can be contained through published expressions of concern, and/or ‘extinguished’ through retraction.

At the same time, we must identify and ‘remove’ the fuels that drive systematic research fraud. Institutions should remove both unrealistic publication requirements, and monetary incentives to publish. Similarly, research communities and funding bodies need to ask whether neglected fields are being targeted for low value, questionable research. Supporting functional studies of under-studied genes could help to remove this particular type of fuel (Byrne et al. 2019).

And while removing, alerting, containing and extinguishing, we should not shy away from thinking about and using any necessary F-words. Thinking that research fraud shouldn’t be discussed will only help this to continue (Byrne 2019).

The alternative could be using the other F-word in ways that I don’t want to think about.

References

Byrne JA (2019). We need to talk about systematic fraud. Nature. 566: 9.

Byrne JA, Grima N, Capes-Davis A, Labbé C (2019). The possibility of systematic research fraud targeting under-studied human genes: causes, consequences and potential solutions. Biomarker Insights. 14: 1-12.

Byrne JA, Labbé C (2017). Striking similarities between publications from China describing single gene knockdown experiments in human cancer cell lines. Scientometrics. 110: 1471-93.

Djuric D (2015). Penetrating the omerta of predatory publishing: The Romanian connection. Sci Engineer Ethics. 21: 183–202.

Hvistendahl M (2013). China’s publication bazaar. Science. 342: 1035–1039.

Labbé C, Grima N, Gautier T, Favier B, Byrne JA (2019). Semi-automated fact-checking of nucleotide sequence reagents in biomedical research publications: the Seek & Blastn tool. PLOS ONE. 14: e0213266.

Liu X, Chen X (2018). Journal retractions: some unique features of research misconduct in China. J Scholar Pub. 49: 305–319.

Tian M, Su Y, Ru X (2016). Perish or publish in China: Pressures on young Chinese scholars to publish in internationally indexed journals. Publications. 4: 9.

This post may be cited as:
Byrne, J. (18 July 2019) The F-word, or how to fight fires in the research literature. Research Ethics Monthly. Retrieved from: https://ahrecs.com/research-integrity/the-f-word-or-how-to-fight-fires-in-the-research-literature
