Dan and James discuss a recent paper that investigated how science journalists evaluate psychology papers. To answer this question, the researchers presented science journalists with fictitious psychology studies and manipulated sample size, sample representativeness, p-values, and institutional prestige.
Citation
Quintana, D. S., & Heathers, J. A. J. (Hosts). (2023, September 30). 173: How do science journalists evaluate psychology papers? [Audio podcast episode]. Everything Hertz. https://doi.org/10.17605/OSF.IO/SG4BM
180: Consortium peer reviews
179: Discovery vs. maintenance
178: Alerting researchers about retractions
177: Plagiarism
176: Tracking academic workloads
175: Defending against the scientific dark arts
174: Smug missionaries with test tubes
172: In defence of the discussion section
171: The easiest person to fool is yourself (with Daniel Simons and Christopher Chabris)
170: Holy sheet
169: Using big data to understand behavior (Live episode with Sandra Matz)
168: Meta-meta-science
167: Diluted effect sizes
166: Is science becoming less disruptive over time?
165: Self-promotion
164: The great migration
163: eLife's new peer review model
162: Status bias in peer review
161: The memo (with Brian Nosek)