Dan and James discuss a recent paper that investigated how science journalists evaluate psychology papers. To answer this question, the researchers presented science journalists with fictitious psychology studies and manipulated sample size, sample representativeness, p-values, and institutional prestige.
Citation
Quintana, D. S., & Heathers, J. A. J. (Hosts). (2023, September 30). 173: How do science journalists evaluate psychology papers? [Audio podcast episode]. Everything Hertz. DOI: 10.17605/OSF.IO/SG4BM