ACN - 101321555 Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS)

Resource Library


Doing the right thing: Psychology researchers retract paper three days after learning of coding error – Retraction Watch (Adam Marcus | August 2019)

Posted by Admin on August 21, 2019

The news that you’ve made a critical error in the analysis of a project’s data can be devastating, particularly given the career-harming consequences that can be associated with retractions. So, like Retraction Watch, we congratulate this psychology team on their prompt and responsible actions.

We always hesitate to call retraction statements “models” of anything, but this one comes pretty close to being a paragon.

Psychology researchers in Germany and Scotland have retracted their 2018 paper in Acta Psychologica after learning of a coding error in their work that proved fatal to the results. That much is routine. Remarkable in this case is how the authors lay out what happened next.

The study, “Auditory (dis-)fluency triggers sequential processing adjustments”:

investigated as to whether the challenge to understand speech signals in normal-hearing subjects would also lead to sequential processing adjustments if the processing fluency of the respective auditory signals changes from trial to trial. To that end, we used spoken number words (one to nine) that were either presented with high (clean speech) or low perceptual fluency (i.e., vocoded speech as used in cochlear implants-Experiment 1; speech embedded in multi-speaker babble noise as typically found in bars-Experiment 2). Participants had to judge the spoken number words as smaller or larger than five. Results show that the fluency effect (performance difference between high and low perceptual fluency) in both experiments was smaller following disfluent words. Thus, if it’s hard to understand, you try harder.

Read the rest of this discussion piece

Does psychology have a conflict-of-interest problem? – Nature (Tom Chivers | July 2019)

Posted by Admin on August 5, 2019

Some star psychologists don’t disclose in research papers the large sums they earn for talking about their work. Is that a concern?

Generation Z has made Jean Twenge a lot of money. As a psychologist at San Diego State University in California, she studies people born after the mid-1990s, the YouTube-obsessed group that spends much of its time on Instagram, Snapchat and other social-media platforms. Thanks to smartphones and sharing apps, Generation Z has grown up to be more narcissistic, anxious and depressed than older cohorts, she argues. Twenge calls them the ‘iGen’ generation, a name she says she coined. And in 2010, she started a business, iGen Consulting, “to advise companies and organizations on generational differences based on her expertise and research on the topic”.

Twenge has “spoken at several large corporations including PepsiCo, McGraw-Hill, nGenera, Nielsen Media, and Bain Consulting”, one of her websites notes. She delivers anything from 20-minute briefings to half-day workshops, and is also available to speak to parents’ groups, non-profit organizations and educational establishments. In e-mail exchanges, she declined to say how much she earns from her advisory work, but fees for star psychologists can easily reach tens of thousands of dollars for a single speech, and possibly much more, several experts told Nature.

Twenge’s academic papers don’t mention her paid speeches and consulting. That stands in stark contrast to the conflict-of-interest (COI) guidelines issued by the International Committee of Medical Journal Editors (ICMJE), an influential organization whose standards have been widely adopted by many medical and some psychology journals. Those guidelines say that such ‘personal fees’ should be declared as potential COIs in research papers, because readers should be made aware of any financial interests that they might perceive as potentially influencing the findings.

Read the rest of this discussion piece

Debriefing for ego threat may require more than we thought – Psychology & More (Dana C. Leighton, Ph.D. | July 2019)

Posted by Admin on July 7, 2019

When social psychologists manipulate a participant’s attitudes or beliefs, we have an ethical obligation to undo that manipulation. I explain it to my students as “putting the participant back the way we found them.” We frequently use a debriefing procedure, in the form of a written and/or (as in the case of my lab) verbal notice, something to the effect of: “yuk yuk, gosh, ya know what? we were just kidding. the thing you (read/did) was fake, we made it up, and it doesn’t mean anything.” Here is an example from the verbal debriefing script I used in a study several years ago that presented participants with a fake newspaper article about vandalism by University of Texas students.

I want to thank you for your participation here today and for your contribution to this project. We really appreciate your help with this work. Let me tell you a little bit about what we are trying to study.

First, we want to assure you that the incident you read about never happened on the campus. We created a fake newspaper article about it in order to better understand how people respond to these kinds of situations. To our knowledge, no University of Texas students have ever been involved in such an incident.

Read the rest of this blog post

The full article is behind a paywall, but here’s the reference:
Miketta, S., & Friese, M. (2019). Debriefed but still troubled? About the (in)effectiveness of postexperimental debriefings after ego threat. Journal of Personality and Social Psychology. Advance online publication. http://dx.doi.org/10.1037/pspa0000155

Reboot undergraduate courses for reproducibility – Nature (Katherine Button | September 2018)

Posted by Admin on December 12, 2018

Collaboration across institutes can train students in open, team science, which better prepares them for challenges to come, says Katherine Button.

Three years ago, as I prepared to start as a lecturer in the University of Bath’s psychology department, I reflected on my own undergraduate training. What should I emulate? What would I like to improve? The ‘reproducibility crisis’ was in full swing. Many of the standard research practices I had been taught had been shown to be flawed, from P-value hacking to ‘HARKing’ — hypothesizing after the results are known — and an over-reliance on underpowered studies (that is, drawing oversized conclusions from undersized samples).

It struck me that the research dissertation students do in their final year is almost a bootcamp for instilling these bad habits. Vast numbers of projects, limited time and resources, small sample sizes, the potential for undisclosed analytic flexibility (P-hacking) and a premium on novelty: together, a recipe for irreproducible results.

Most undergraduate dissertations turn into exercises tallying the limitations of the research design — frustrating for both student and supervisor. However, each year a few students get lucky and publish, securing a huge CV advantage. I wondered what lesson this was teaching. Were we embedding a culture that rewards chance results over robust methods?

Read the rest of this discussion piece
