Research Ethics Monthly: Publication ethics

Australian Code 2018: What institutions should do next

 

Gary Allen, Mark Israel and Colin Thomson

At first glance, there is much to be pleased about in the new version of the Australian Code, released on 14 June. A short, clear document based on principles, with an overt focus on research culture, is a positive move away from the tight rules that threatened researchers and research offices alike for deviations from standards that might not be appropriate, or even workable, in all contexts.

The 2007 Code was rightly criticised on several grounds. First, weighing a system down with detailed rules burdened the vast majority with unneeded compliance because of the recklessness and shady intentions of a very small minority. Second, there was reason to suspect that the detailed rules did not stop the ‘bad apples’. Third, those detailed rules probably did not inspire early career researchers to engage with research integrity or to embrace and embed better practice in their research activity. Finally, the Code did little to create an overall system capable of continuous improvement.

But, before we start to celebrate any improvements, we need to work through what has changed and what institutions and researchers need to do about it. And, then, maybe a quiet celebration might be in order.

Researchers have some fairly basic needs when it comes to research integrity. They need to know what they should do: first, as researchers and research supervisors in order to engage in good practice; second, if they encounter poor practice by another researcher; and, third, if other people complain about their practices.

The 2007 Australian Code offered some help with each of these. In some cases, this ‘help’ was structured as a requirement and over time was found wanting. The 2018 version appreciated that these questions might be basic but that the answers were often complex. The second and third questions are partly answered by the accompanying Guide to Managing and Investigating Potential Breaches of the Code (the Investigation Guide) and we’ll return to this. The answer to the first question is brief.

The Code begins to address responsibilities around research integrity through a set of eight principles that apply to researchers as well as their institutions: honesty; rigour; transparency; fairness; respect; recognition of the rights of Indigenous peoples to be engaged in research; accountability; and promotion of responsible research practices. Explicit recognition of the need to respect the rights of Aboriginal and Torres Strait Islander peoples did not appear in the 2007 version. There are 13 responsibilities specific to institutions. There are 16 responsibilities specific to researchers, which relate to compliance with legal and ethical responsibilities and require researchers to support a responsible culture of research, undertake appropriate training, provide mentoring, use appropriate methodology and reach conclusions that are justified by the results, retain records, disseminate findings, disclose and manage conflicts of interest, acknowledge research contributions appropriately, participate in peer review and report breaches of research integrity.

In only a few cases might a researcher read these parts of the Code and conclude that the requirements are inappropriate. It would be a little like disagreeing with the Singapore Statement (the one on research integrity, not the recent Trump-Kim output). Mostly, the use of words like ‘appropriate’ within the Code (it appears three times in the Principles, twice in the responsibilities of institutions and five times in the responsibilities of researchers) limits the potential for particular responsibilities to be over-generalised from one discipline and inappropriately transferred to others.

There are some exceptions, and some researchers may find it difficult to ‘disseminate research findings responsibly, accurately and broadly’, particularly if they are subject to commercial-in-confidence restrictions or public sector limitations, and we know that there are significant pressures on researchers to shape the list of authors in ways that may have little to do with ‘substantial contribution’.

For researchers, the Code becomes problematic if they go to it seeking advice on how they ought to behave in particular contexts. The answers, whether they were good or bad in the 2007 Code, are no longer there. So, a researcher seeking to discover how to identify and manage a conflict of interest or what criteria ought to determine authorship will need to look elsewhere. And, institutions will need to broker access to this information either by developing it themselves or by pointing to good sectoral advice from professional associations, international bodies such as the Committee on Publication Ethics, or the Guides that the NHMRC has indicated that it will publish.

We are told that the Australian Code Better Practice Guides Working Group will produce guides on authorship and data management towards the end of 2018 (so, hopefully, at least six months before the 1 July 2019 deadline for institutions to implement the updated Australian Code). However, we do not know which other guides will be produced, who will contribute to their development nor, in the end, how useful they will be in informing researcher practice. We would hope that the Working Group is well advanced on the further suite if it is to collect feedback and respond to it before that deadline.

There are at least eight areas where attention will be required. We need:

  1. A national standard data retention period for research data and materials.
  2. Specified requirements about data storage, security, confidentiality and privacy.
  3. Specified requirements about the supervision and mentoring of research trainees.
  4. A national standard on publication ethics, including such matters as republication of a research output.
  5. National criteria to inform whether a contributor to a research project should, or should not, be listed as an author of a research output.
  6. Other national standards on authorship matters.
  7. Specified requirements about a conflicts of interest policy.
  8. Prompts for research collaborations between institutions.

For each of those policy areas the following matters should be considered:

1. Do our researchers need more than the principle that appears in the 2018 Australian Code?

2. If yes, is there existing material upon which an institution’s guidance material can be based?

3. Who will write, consider and endorse the guidance material at a national or institutional level?

Many institutions will conclude that it is prudent to wait until late 2018 to see whether the next two good practice guides are released, and to discover how much they cover. Even if they do so, institutions will still need to transform these materials into resources that can be used in teaching and learning at the level of the discipline, and to do so in a way that builds researchers’ commitment to responsible conduct and their ethical imaginations rather than testing them on their knowledge of compliance matters.

Managing and Investigating Potential Breaches

The Code is accompanied by a Guide to Managing and Investigating Potential Breaches of the Code (the Investigation Guide). The main function of this Guide is to provide a model process for managing and investigating complaints or concerns about research conduct. However, before examining how to adopt that model, institutions need to make several important preliminary decisions.

First, to be consistent with the Code, the Guide states that institutions should promote a culture that fosters and values responsible conduct of research generally and develop, disseminate, implement and review institutional practices that promote adherence to the Code. Both of these will necessitate the identification of existing structures and processes and a thorough assessment to determine any changes that are needed to ensure that they fulfil these responsibilities.

This means that institutions must assess how their processes conform to the principles of procedural fairness and the listed characteristics of such processes. The procedural fairness principles are described as:

  • the hearing rule – the opportunity to be heard
  • the rule against bias – decision-makers have no personal bias in the outcome
  • the evidence rule – decisions are based on evidence

The characteristics require that an institution’s processes are: proportional; fair; impartial; timely; transparent, and confidential. A thorough review and, where necessary, revision of current practices will be necessary to show conformity to the Guide.

Second, when planning how to adopt the model, institutions need to consider the legal context, as the Guide notes that enterprise bargaining agreements (EBAs) and student disciplinary processes may prevail over the Guide.

Third, the model depends on the identification of six key personnel with distinct functions. Some care needs to be taken to match the designated roles with the appropriate personnel in an institution’s research management structure, even if their titles differ from those in the model. The six personnel are:

  • a responsible executive officer, who has final responsibility for receiving reports and deciding on actions;
  • a designated officer, appointed to receive complaints and oversee their management;
  • an assessment officer or officers, who conduct preliminary assessments of complaints;
  • research integrity advisers, who have knowledge of, and promote adherence to, the Code and offer advice to those with concerns or complaints;
  • a research integrity office, whose staff are responsible for managing research integrity;
  • a review officer, who is responsible for receiving requests for procedural review of an investigation.

Last, institutions must decide whether to use the term ‘research misconduct’ at all and, if so, what meaning to give to it. Some guidance is offered in a recommended definition of the term but, as noted above, this will need to be considered in the legal contexts of EBAs and student disciplinary arrangements.

Conclusion

The update to the Code provides a welcome opportunity to reflect on a range of key matters to promote responsible research. The use of principles and responsibilities and the style of the document offers a great deal of flexibility that permits institutions to develop their own thoughtful arrangements. However, this freedom and flexibility comes with a reciprocal obligation on institutions to establish arrangements that are in the public interest rather than ‘just’ complying with a detailed rule. We have traded inflexibility for uncertainty; what comes next is up to all of us.


The Contributors
Gary Allen, Mark Israel and Colin Thomson – senior consultants AHRECS

This post may be cited as:
Allen G., Israel M. and Thomson C. (21 June 2018) Australian Code 2018: What institutions should do next. Research Ethics Monthly. Retrieved from: https://ahrecs.com/research-integrity/australian-code-2018-what-institutions-should-do-next

We invite debate on issues raised by items we publish. However, we will only publish debate about the issues that the items raise and expect that all contributors model ethical and respectful practice.

In a world of hijacked, clone and zombie publishing, where shouldn’t I publish?

 

When we talk to research higher degree candidates and early career researchers about publication ethics, one question comes up repeatedly. Indeed, it is a question we are frequently asked by experienced researchers, particularly those who wish to publish in a new field – where should I publish? That’s a difficult question to answer in the abstract so first we would like to remove some distractions from the decisions that need to be made. In this piece, we look at the other side of the coin and explore where researchers should not publish.

Research institutions often provide their staff with incentives to publish in top-ranking journals as determined by impact factor. Publishing in these journals can boost the university’s standing in some international rankings and national research assessment exercises. Consequently, performance indicators, promotion and recruitment criteria, track records for grant assessment and even financial bonuses may be aligned with these outlets.

Good research takes a long time and we should take care where we place our outputs. If we want our papers to be read, we need to look for a journal that reaches our prospective audience. In some fields, this might mean a niche but highly rated journal linked to a particular professional association; in other cases, we seek a journal that is covered by reputable indexes and databases like Medline, PubMed, Scopus or the Web of Science. Only then is a paper likely to be included in a subsequent systematic review or meta-analysis, for example. A rough programmatic version of this kind of check is sketched below.
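
As one small illustration, the sketch below asks NCBI’s E-utilities service whether a journal title returns any records in PubMed. The esearch endpoint and its db/term/retmode parameters are part of NCBI’s documented API; the journal title is just an example, and a zero count should prompt closer scrutiny rather than prove anything on its own.

```python
# Rough sketch: a quick proxy for 'is this journal indexed in PubMed?'
# using NCBI's E-utilities esearch endpoint. The [journal] field tag
# restricts the search to journal titles.
import json
import urllib.parse
import urllib.request

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_record_count(journal_title: str) -> int:
    """Return how many PubMed records are attributed to a journal title."""
    params = urllib.parse.urlencode({
        "db": "pubmed",
        "term": f"{journal_title}[journal]",
        "retmode": "json",
    })
    with urllib.request.urlopen(f"{EUTILS}?{params}") as resp:
        payload = json.load(resp)
    # esearch reports the hit count as a string inside 'esearchresult'.
    return int(payload["esearchresult"]["count"])

if __name__ == "__main__":
    print(pubmed_record_count("BMC Medicine"))  # a journal cited below
```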

However, many researchers may find it tough to break into the top 25%, let alone the top 10%, of journals. Even if they can, the process can prove lengthy and frustrating, as journals use robust peer review processes and may call for repeated, extensive and perhaps even unwarranted revisions. In the face of this, some scholars may come under pressure to publish quickly, particularly if the award of a doctorate or confirmation of a first job depends on having something in print. And, for some purposes (including Australian institutional block research funding until quite recently), quantity may trump quality.

There are traps for the unwary who find themselves in this position. Everyone wants to avoid predatory journals and publishers and, yet, not everyone does. Not even some top researchers manage to avoid these outlets, according to one study of academic economists (Wallace and Perri, 2016). Researchers, it seems, can be seduced by an invitation from journal editors, an invitation sometimes filled with ‘flattering salutations, claims that they had read the recipient’s papers despite being out of the journal’s claimed area of study, awkward sentence structure and spelling mistakes, and extremely general topics’ (Moher and Srivastava, 2015).

While many researchers have been duped, publication scammers are not always given a free ride. A few have come under some pressure from legal authorities. In 2016, the Federal Trade Commission (FTC) filed a brief in the US District Court against the OMICS Group and related entities. The brief reveals a little about what is known about these journals. OMICS, for instance, is a Nevada-registered, Hyderabad-based entity that claims to run 700 journals. The FTC alleged that OMICS deliberately misled potential authors by misrepresenting the composition of the editorial board, the process of review, the journal’s impact factor and the fee for publication:

…the academic experts identified by Defendants lack any connection with Defendants’ journals. Further, in many instances, articles submitted for publishing do not undergo standard peer review before publishing. And Defendants’ journals’ impact factors are not calculated by Thomson Reuters, nor are their journals included in PubMed Central. Moreover, Defendants fail to disclose, or disclose adequately, that consumers must pay a publishing fee for each published article. (p.5)

Recently, OMICS has diversified its strategy. In 2016, Canadian journalists reported that OMICS had bought at least the trading names of reputable Canadian publishers and appeared also to have picked up their publishing contracts with well-regarded journals. This, it seems, was done so that OMICS could use these names as a front to attract articles to its predatory publishing stable (Puzic, 2016). Some professional associations that found their publishing contracts taken over have declared their intention to break their connection with OMICS.

When assessing which journals to target for your work, you might:

  1. Read recent issues of the journal. Are the papers of a quality you would cite? Can you find evidence of good editorial standards? Would your work fit among the papers published there? Macquarie University, for example, counsels its staff to consider a journal’s relevance, reputation, visibility and validity.
  2. Check the standing of the publication’s editors. Are they members of the Committee on Publication Ethics (COPE) or, if their journals are online, the Open Access Scholarly Publishers’ Association (OASPA)? Predatory publishers are less likely to be members. COPE has also helped create the Think. Check. Submit. campaign to support authors’ decision-making.
  3. Talk to a research librarian, your peers and mentors about the potential publisher. If you know anyone who is on the Editorial Advisory Board, ask them about the journal at the same time that you seek to establish whether the journal might be interested in your work. Some leading academics have found their names on the Editorial Advisory Boards of predatory journals and have discovered that it is easier to join these lists than to have their name removed.
  4. Read the publisher’s policies and editorial review practices. Are they coherent? Do they provide detailed information on submission guidelines and peer review processes? If they guarantee a speedy turnaround, that is often a warning sign. Check whether they impose ‘article processing charges’ (APCs). Then, check again.
  5. Reach out to researchers who have previously published there to discuss their experiences and impressions.

Not every legitimate journal can extract itself from predatory publishers. Where once-respected journals are hijacked by criminal enterprises, they can continue their existence as ‘zombie journals’, trading off the reputation built up in the past but behaving like any other predatory journal. There are other kinds of dishonest practices. Some predatory publishers have established ‘clone journals’ that use the same title as a legitimate journal and reproduce the original journal’s website, or make minor changes to the title, in an attempt to deceive unwary authors (Prasad, 2017). Hijacked, clone and zombie publishing can turn a glowing recommendation into a trap for the unwitting. Analysis of criminal activities in publishing has taken a little time to catch up with offending patterns. In recent work, Moher and Shamseer (2017) argued that the term ‘predatory journal’ should be replaced by ‘illegitimate entities’, rejecting the idea that such entities were entitled to clothe themselves in the language of the legitimate publication industry.

So, what advice should we give to researchers about being prudent with the treasured fruit of their labours? Until recently, one quick answer might have been to avoid journals on Beall’s List, a ‘blacklist’ of ‘predatory journals’ maintained by Jeffrey Beall, a US-based librarian. The list always had its critics (Neylon, 2017). Variables used by Beall, such as open access, fees to publish, locations in low- to medium-income countries, and novel peer review practices, are not automatically predictors of a predatory publisher. Nor does the converse necessarily guarantee that a publisher is a safe choice. However, whatever its longstanding flaws, Beall’s List is rapidly losing its currency. Earlier this year Beall decided to ‘unpublish’ his list; it is no longer updated and is only available on cache sites. Institutions seeking a successor to Beall’s List can look towards a commercial provider, Cabells, which has announced its own Blacklist. Anyone using a blacklist should also check journals against a ‘white list’ like the Directory of Open Access Journals (DOAJ) or even the old Excellence in Research Australia journal rankings (removed from the research council websites, but still circulated discreetly like a samizdat newsletter among Australian academics). A simple white-list lookup is sketched below.
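
By way of illustration, this minimal sketch checks an ISSN against the DOAJ ‘white list’ via its public search API. The endpoint shape shown here is an assumption based on DOAJ’s documented API and may change between versions, so consult https://doaj.org/api/ before relying on it. As with any list, absence from DOAJ is a prompt for further scrutiny rather than a verdict.

```python
# Hedged sketch: does an ISSN appear in the DOAJ white list?
# The /api/search/journals/ route is an assumption -- check the current
# DOAJ API documentation for the exact versioned path.
import json
import urllib.request

def in_doaj(issn: str) -> bool:
    """Return True if DOAJ's journal search returns a match for the ISSN."""
    url = f"https://doaj.org/api/search/journals/issn:{issn}"
    with urllib.request.urlopen(url) as resp:
        payload = json.load(resp)
    # DOAJ search responses list matches in a 'results' array (assumption).
    return bool(payload.get("results"))

if __name__ == "__main__":
    print(in_doaj("1932-6203"))  # PLOS ONE, used as a known-good example
```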

Unfortunately, unless black and white lists are updated continuously, they can never keep up with changes in the publication industry. Some publishers once regarded as predatory genuinely improve their practices over time. On the other hand, illegitimate practices have also changed over time. Over the last few years, we have seen the movement of organised and unorganised crime into the industry, attracted by the US$100m in fees that Shamseer et al. (2017) very roughly estimated predatory publishers might be obtaining.

So, the quick answer to the question ‘where shouldn’t I publish?’ is that since the demise of Beall’s List, researchers need to engage in critical enquiry and reflection about a potential publisher. This should not come as a shock – the same advice would have been true long before the end of Beall’s List.

In recent weeks, we’ve been including in the Resource Library discussion pieces, papers and strategies that propose how to assess publishers.

These include:

Not the ‘Beall’ and end-all: the death of the blacklist – AOAG Webinar Series (Dr Andy Pleffer & Susan Shrubb | April 2017)

Beyond Beall’s List: Better understanding predatory publishers – Association of College & Research Libraries (Monica Berger and Jill Cirasella | March 2015)

Black lists, white lists and the evidence: exploring the features of ‘predatory’ journals – BioMed Central Blog (David Moher & Larissa Shamseer | March 2017)

Warning: conmen and shameless scholars operate in this area – Times Higher Education (James McCrostie | January 2017)

Blacklists are technically infeasible, practically unreliable and unethical. Period. – LSE Blog (Cameron Neylon | January 2017)

Beware! Academics are getting reeled in by scam journals – UA/AU (Alex Gillis | January 2017)

References

Moher, D. and Shamseer, L. (2017) Black lists, white lists and the evidence: exploring the features of ‘predatory’ journals. BioMed Central Blog 16 Mar 2017. https://blogs.biomedcentral.com/bmcblog/2017/03/16/black-lists-white-lists-and-the-evidence-exploring-the-features-of-predatory-journals/

Moher, D. and Srivastava, A. (2015) You are invited to submit…. BMC Medicine, 13(1), p.180. https://bmcmedicine.biomedcentral.com/articles/10.1186/s12916-015-0423-3

Neylon, C. (2017) Blacklists are technically infeasible, practically unreliable and unethical. Period. LSE Blog. https://cameronneylon.net/blog/blacklists-are-technically-infeasible-practically-unreliable-and-unethical-period/

Prasad, R. (2017) Predatory journal clones of Current Science spring up. The Hindu, 14 July. http://www.thehindu.com/sci-tech/science/predatory-journal-clones-of-current-science-spring-up/article19277858.ece

Puzic, S. (2016) Offshore firm accused of publishing junk science takes over Canadian journals. CTV News. 28 September. http://www.ctvnews.ca/health/health-headlines/offshore-firm-accused-of-publishing-junk-science-takes-over-canadian-journals-1.3093472?hootPostID=00bc7834da5380548a8b2d58e40c8b29

Shamseer, L., Moher, D., Maduekwe, O., Turner, L., Barbour, V., Burch, R., Clark, J., Galipeau, J., Roberts, J. and Shea, B.J. (2017) Potential predatory and legitimate biomedical journals: can you tell the difference? A cross-sectional comparison. BMC Medicine 15:28. https://doi.org/10.1186/s12916-017-0785-9

Wallace, F. and Perri, T. (2016) Economists behaving badly: publications in predatory journals. MPRA Paper No. 73075, posted 15 August. https://mpra.ub.uni-muenchen.de/73075/1/MPRA_paper_73075.pdf


Contributors
Prof. Mark Israel, senior consultant AHRECS, Mark’s AHRECS bio – mark.israel@ahrecs.com
Dr Gary Allen, senior consultant AHRECS, Gary’s AHRECS bio – gary.allen@ahrecs.com

This post may be cited as:
Israel M & Allen G. (2017, 26 July) In a world of hijacked, clone and zombie publishing, where shouldn’t I publish? Research Ethics Monthly. Retrieved from: https://ahrecs.com/research-integrity/world-hijacked-clone-zombie-publishing-shouldnt-publish

PID Power: Persistent Identifiers as Part of a Trusted Information Infrastructure

 

We live in a world where fake news and alternative facts are, unfortunately, part of how we share information. Expertise is becoming less valued and, in some cases, is even seen as a liability. In this environment, how do we engender trust in scholarly communications?

Developing a strong and sustainable information infrastructure, which enables reliable connections between researchers, their contributions, and their organizations, is critical to building this trust. Many of the pieces we require are already in place, but work is still needed to ensure that they operate the way we need them to, and that all sectors – funders, publishers, and universities, as well as vendors and other third parties – understand the vital role each plays.

Persistent identifiers (PIDs) play an important part in making the research infrastructure work, and in doing so transparently, which builds trust. Wikipedia describes a persistent identifier as “a long-lasting reference to a document, file, web page, or other object … usually used in the context of digital objects that are accessible over the Internet. Typically, such an identifier is not only persistent but actionable … you can plug it into a web browser and be taken to the identified source.” That actionability is easy to demonstrate, as the sketch below shows.
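
For instance, a DOI can be resolved programmatically through the doi.org resolver; the few lines below simply follow the redirect and print the landing page. The DOI used is one cited elsewhere on this page, chosen purely for illustration (note that some publisher sites may require a browser-like User-Agent).

```python
# Demonstrating that a PID is 'actionable': resolving a DOI via doi.org
# and printing the URL it ultimately redirects to.
import urllib.request

doi = "10.1186/s12916-017-0785-9"  # a DOI cited elsewhere on this page
with urllib.request.urlopen(f"https://doi.org/{doi}") as resp:
    print(resp.url)  # the landing page the identifier points at
```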

In the scholarly communications world, PIDs enable clear identification of and reliable connections between people (researchers), places (their organizations), and things (their research contributions and works). Examples of PIDs in common use in research and scholarship include ORCID iDs, ResearcherID, and Scopus IDs for people; GRID, Ringgold, and Crossref Funder Registry IDs for organizations; and DOIs (Digital Object Identifiers) such as those minted by Crossref and DataCite for publications and datasets.

So, how exactly can PIDs help build trust in the research infrastructure – and the scholarship supported by that infrastructure?

Tackling the problem of fake reviews and reviewers is a good example of the power of persistent identifiers in practice. While the vast majority of reviews and reviewers are legitimate, unfortunately some individuals and organizations deliberately attempt to manipulate the system to their own, or their client’s, advantage. Industry organizations such as COPE – the Committee on Publication Ethics – recognize this as an issue, and it has also found its way into mainstream media, where it is often seen as more ‘evidence’ that science isn’t working.

But imagine a world where all research institutions routinely connect their organization ID to their researchers’ ORCID records and, at the same time, assert their affiliation. That institutional validation makes information about those researchers significantly more trustworthy.

And now imagine a world where researchers routinely use their ORCID iD during the manuscript submission/review process. Where publishers routinely include those iDs in the metadata of the DOIs for the papers and open peer review reports authored by those researchers. And where that information is automatically pushed back into the author’s ORCID record, for example by Crossref or DataCite. Those trusted connections (assertions) between each researcher and her/his publications and reviews could help editors and publishers build up an authoritative picture of each researcher, creating an even higher level of confidence that they are who they say they are. Adding in information from funders about the reviewer’s awards would provide still more certainty. Taken together, the use of PIDs in this way could be a powerful tool in combatting the fake author and reviewer problem.

This scenario clearly shows that tackling the issue of trust in scholarly communication requires a community approach. Each sector plays a role: institutions connect and assert affiliations to ORCID records; publishers connect and assert works; funders connect and assert awards; and PID organizations including Crossref, DataCite, and ORCID provide the “plumbing” that enables those assertions and connections to be made, easily and reliably.

Of course, researchers themselves also need to be involved in improving trust in scholarly communications. Using PIDs is a good (and easy!) first step – the technology is already in place across hundreds of systems that researchers interact with. So, for example, researchers who use their ORCID iD when publishing or reviewing a paper can authorize Crossref or DataCite to automatically update their ORCID record every time a DOI for one of their works is minted (provided that their publisher includes the iD in the metadata). Likewise, some funders are already collecting ORCID iDs during grant application and then connecting information about awards granted back to the applicant’s ORCID record. And, in an exciting new opportunity, it is now possible for researchers to sign into ORCID using their institutional credentials and, at the same time, grant their university permission to update their ORCID record, including asserting their affiliation. Vendor systems across all sectors – grant application, manuscript submission, CRIS systems, and more – are supporting all these efforts. (A sketch of how those publisher-deposited iDs can be read back out of DOI metadata follows.)
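
To make the publisher side of this concrete, here is a minimal sketch that reads a work’s metadata from the public Crossref REST API and lists any ORCID iDs the publisher deposited for its authors. The /works/{doi} route and the author/ORCID fields are part of Crossref’s documented API; the example DOI is one cited elsewhere on this page, and whether any iDs appear depends entirely on what the publisher deposited.

```python
# Minimal sketch: list publisher-deposited ORCID iDs for a DOI using the
# public Crossref REST API (https://api.crossref.org).
import json
import urllib.request

def author_orcids(doi: str) -> list:
    """Return (family name, ORCID iD) pairs found in a DOI's Crossref record."""
    url = f"https://api.crossref.org/works/{doi}"
    with urllib.request.urlopen(url) as resp:
        work = json.load(resp)["message"]
    return [
        (author.get("family", "?"), author["ORCID"])
        for author in work.get("author", [])
        if "ORCID" in author  # only present when the publisher deposited it
    ]

if __name__ == "__main__":
    for family, orcid in author_orcids("10.1186/s12916-017-0785-9"):
        print(family, orcid)
```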

As Simon Porter of Digital Science pointed out in his keynote at PIDapalooza 2016, the challenges of achieving this goal are at least as much social as technical. Understanding why PIDs are important is every bit as critical as implementing them in researcher systems. So, if you’d like to play your own part in making our vision of a trustworthy PID-enabled research infrastructure a reality, please help us spread the word about the power of PIDs in your own organizations!

Contributor
Alice Meadows | Director of Community Engagement & Support, ORCID
Alice’s ORCID staff page and Alice’s LinkedIn page
a.meadows@orcid.org

This post may be cited as:
Meadows A. (2017, 27 July) PID Power: Persistent Identifiers as Part of a Trusted Information Infrastructure. Research Ethics Monthly. Retrieved from: https://ahrecs.com/research-integrity/pid-power-persistent-identifiers-part-trusted-information-infrastructure

 

Review of the Australian Code for the Responsible Conduct of Research

 

The Australian Code for the Responsible Conduct of Research 2007 (the Code) is Australia’s premier research standard. It was developed by the government agencies that fund the majority of research in Australia, namely the National Health and Medical Research Council (NHMRC) and the Australian Research Council (ARC), in collaboration with the peak body representing Australian universities, Universities Australia (UA). The Code guides institutions and researchers in responsible research practices and promotes research integrity. It has broad relevance across all research disciplines.

The Code is currently under review.

A new approach for the Code has been proposed, informed by extensive consultation with the research sector and advice from expert committees. The Code has been streamlined into a principles-based document and will be supported by guides that provide advice about implementation, such as the first Guide to investigating and managing potential breaches of the Code.

NHMRC, ARC and UA recognise the importance of engaging with the Australian community, including research institutions, researchers, other funding bodies, academies and the public, to ensure the principles-based Code and supporting guides are relevant and practical. A public consultation strategy is an important part of any NHMRC recommendation or guideline development process.

As such, NHMRC on behalf of ARC and UA invites all interested persons to provide comments on the review. A webinar was held on 29 November 2016 to explain the new approach to the Code. You are invited to view this webinar (see link below) and can participate in the public consultation process by visiting the NHMRC Public Consultation website. Submissions close on 28 February 2017.

Further information on the review can be found on the NHMRC website.

The contributor:

National Health and Medical Research Council (Australia) – Web | Email

This post may be cited as:
NHMRC (2017, 20 January) Review of the Australian Code for the Responsible Conduct of Research. Research Ethics Monthly. Retrieved from:
https://ahrecs.com/research-integrity/review-australian-code-responsible-conduct-research
