- Office Location: ST 327
- Office Phone: (818) 677-2810
- J.D. 2003, University of Nebraska College of Law
- Ph.D. 2000, Florida International University
- M.S. 1998, Florida International University
- B.A. 1995, Creighton University
- Specialty Areas: Social Psychology, Psychology and Law, Industrial/Organizational Psychology
- PSY 321/L - Experimental Psychology and Lab
- PSY 345/L - Social Psychology and Lab
- PSY 356 - Industrial/Organizational Psychology
- PSY 386 - Psychology and Legal Process
- PSY 640 - Social Cognition
Publications
McAuliff, B. D., Nicholson, E., Amarilio, D., & Ravanshenas, D. (2013). Supporting children in U.S. legal proceedings: Descriptive and attitudinal data from a national survey of victim/witness assistants. Psychology, Public Policy, and Law, 19, 98-113.
McAuliff, B. D., & Kovera, M. B. (2012). Do jurors get what they expect? Traditional versus alternative forms of children’s testimony. Psychology, Crime, and Law, 18, 27-47.
McAuliff, B. D., & Duckworth, T. D. (2010). I spy with my little eye: Jurors' detection of internal validity threats in expert evidence. Law and Human Behavior, 34, 489-500.
McAuliff, B. D., & Bornstein, B. H. (2010). All anchors are not created equal: The effects of per diem versus lump sum requests on pain and suffering awards. Law and Human Behavior, 34, 164-174.
McAuliff, B. D. (2009). Judging the validity of psychological science from the bench: A case in point. Journal of Forensic Psychology Practice, 9, 310-320.
McAuliff, B. D., & Groscup, J. (2009). Daubert and psychological science in court: Judging validity from the bench, bar, and jury box. In J. Skeem, K. Douglas, & S. Lilienfeld (Eds.), Psychological science in the courtroom: Consensus and controversy (pp. 26-52). New York: Guilford.
McAuliff, B. D., Kovera, M. B., & Nunez, G. (2009). Can jurors recognize missing control groups, confounds, and experimenter bias in psychological science? Law and Human Behavior, 33, 247-257.
McAuliff, B. D., & Kovera, M. B. (2008). Juror need for cognition and sensitivity to methodological flaws in expert evidence. Journal of Applied Social Psychology, 38, 385-408.
McAuliff, B. D., & Kovera, M. B. (2007). Estimating the effects of misleading information on witness accuracy: Can experts tell jurors something they don’t already know? Applied Cognitive Psychology, 21, 849-870.
Kovera, M. B., & McAuliff, B. D. (2000). The effects of peer review and evidence quality on judge evaluations of psychological science: Are judges effective gatekeepers? Journal of Applied Psychology, 85, 574-586.
Kovera, M. B., McAuliff, B. D., & Hebert, K. S. (1999). Reasoning about scientific evidence: Effects of juror gender and evidence quality on juror decisions in a hostile work environment case. Journal of Applied Psychology, 84, 362-375.
Phillips, M. R., McAuliff, B. D., Kovera, M. B., & Cutler, B. L. (1999). Double-blind photoarray administration as a safeguard against investigator bias. Journal of Applied Psychology, 84, 940-951.
Perry, N. W., McAuliff, B. D., Tam, P., Claycomb, L., Dostal, C., & Flanagan, C. (1995). When lawyers question children: Is justice served? Law and Human Behavior, 19, 609-629.
Research Interests
My research uses basic social and cognitive psychological theories to understand human behavior in applied settings. I have used multiple methods to examine a variety of empirical questions relating to people's involvement in the legal system. For example, how do jurors and legal professionals evaluate scientific evidence? Do law enforcement personnel who know the suspect's identity administer lineups fairly? What factors influence the accuracy of children's reports? Are jurors sensitive to these factors or could expert testimony increase their understanding?
My major research program examines how laypeople and legal professionals evaluate social scientific evidence. Can jurors, attorneys, and judges differentiate between sound research and research containing methodological flaws? If they cannot, what factors influence their evaluations? With the assistance of my advisor, I conducted a series of studies to answer these questions. Based on prior research by Nisbett and his colleagues, we reasoned that people (especially those without scientific training) would lack the ability to evaluate scientific evidence effectively. We hypothesized that people may instead rely on heuristic cues. To test our hypotheses, we manipulated the validity (e.g., construct or internal validity) of a study that an expert wished to offer at trial. We also varied heuristic cues associated with the expert's study (e.g., general acceptance, publication status, ecological validity) to determine whether these cues influenced people's evaluations of social scientific research.
In our first study (Kovera, McAuliff, & Hebert, 1999), jury-eligible undergraduates viewed a videotaped trial simulation of a hostile work environment case containing expert testimony. We found that undergraduates were insensitive to variations in the construct validity of the expert's research. That is, their decisions did not vary as a function of whether the study contained only one versus multiple measures of sexual harassment. Instead, undergraduates focused on the ecological validity and general acceptance of the research when making decisions about liability and damages. My dissertation expanded on this research by using a more ecologically valid sample and incorporating an individual difference variable (i.e., Need for Cognition) into the experimental design (McAuliff & Kovera, 2008). Jurors read trial transcripts of the hostile work environment case that contained expert testimony. We discovered that high NFC jurors were more likely to find for the plaintiff when the expert's study was internally valid, whereas low NFC jurors' decisions were unaffected by variations in the study's validity. Similarly, high NFC jurors rated the valid study to be of higher quality than the invalid study, whereas low NFC jurors' ratings did not differ as a function of the study's validity. In future research, I will continue to explore the role of NFC in jurors' evaluations of scientific evidence. I will also assess how courts might use expert testimony that addresses methodological issues to improve jurors' reasoning skills.
I also have investigated the ability of judges and attorneys to evaluate scientific evidence. In two separate studies (Kovera & McAuliff, 2000; Kovera & McAuliff, under review), we surveyed judges and attorneys to determine whether scientific training increased their sensitivity to methodological flaws in social scientific research. Our survey of Florida circuit court judges contained a basic fact pattern of a hostile work environment case and a description of the expert's testimony for the plaintiff. Most judges in our sample did not recognize methodological flaws in the expert's research: They were no more likely to admit the internally valid study than to admit the study containing a confound, the study missing a control group, or the study containing experimenter bias. However, scientific training appeared to help judges recognize the merits of valid studies and the problems with studies containing confounds. We also conducted a national survey of attorneys specializing in employment and discrimination law. Attorneys role-played representing a defendant in a hostile work environment case and read a summary of the expert testimony that the plaintiff wished to admit at trial. Ninety-five percent of the attorneys indicated they would file a motion to bar the admission of the expert's testimony, and the general acceptance and internal validity manipulations did not influence their decisions. Attorneys rated the generally accepted evidence as more reliable and valid than the evidence that was not generally accepted; however, the internal validity manipulation did not affect their judgments of the scientific quality of the evidence. Overall, the judgments of scientifically trained attorneys were similar to those of untrained attorneys.
Investigator Bias and Eyewitness Identification
My second line of research examines factors that influence the accuracy of eyewitness identifications. Experts in the field of eyewitness identification have recommended the use of double-blind lineups (i.e., lineups in which neither the administrator nor the eyewitness knows the suspect's identity) to prevent false identifications. Until recently, however, no empirical research had addressed the effectiveness of the double-blind procedure. With the help of my colleagues (Phillips, McAuliff, Kovera, & Cutler, 1999), I conducted the only study to date to examine this issue. We manipulated whether the lineup administrators knew the suspect's identity when administering perpetrator-absent lineups to eyewitnesses. We found that knowledge of the suspect's identity can increase false identification rates in certain circumstances. Moreover, this manipulation did not affect eyewitnesses' or administrators' ratings of lineup fairness, or eyewitnesses' ratings of pressure to make an identification. I have planned future studies to examine whether the double-blind procedure is equally effective in lineups that contain the actual perpetrator and how an investigator intentionally or unintentionally communicates his or her knowledge of the suspect's identity to the eyewitness. Once I have identified lineups containing biased cues, I will show these lineups to jurors, legal professionals, and law enforcement personnel to determine whether these individuals can identify biased lineups.
Child Witness Testimony
I designed my third line of research to better understand children's capabilities as witnesses and juror decision-making in cases involving children's testimony. I collaborated on one study (Perry, McAuliff, Tam, Claycomb, Dostal, & Flanagan, 1995) that examined the effects of question forms typically used by lawyers on the accuracy of child and adult witnesses. In that study, we found that questions phrased in "lawyerese" decreased the accuracy of all witnesses, regardless of age, compared to questions phrased in a simpler, more straightforward manner. Questions containing multiple parts, negatives, double negatives, or difficult vocabulary all posed significant problems for witnesses. In a second study that I conducted with the assistance of my advisor, we examined methodological issues relevant to children's suggestibility research (McAuliff & Kovera, under review). Using meta-analysis, I demonstrated that certain methodological features, such as the use of inappropriate control groups and age-related comparisons, can drastically affect the results researchers obtain. For experimental results to reflect true age-related differences in accuracy, researchers must use appropriate methods when examining witness memory. My master's thesis examined which findings in the suggestibility literature experts generally accept and jurors commonly understand (McAuliff & Kovera, 2007). Experts and jurors completed a survey in which they estimated the effect size of misleading information on witness accuracy in various conditions. Witness memory experts and jurors provided larger effect size estimates for younger children compared to adults, indicating their belief that suggestibility decreases with age. Experts believed certain variables, such as event detail centrality, level of witness participation, and source prestige, moderate the misinformation/accuracy relationship, whereas jurors did not. In future research, I will examine whether expert testimony addressing these moderating variables can overcome the deficits in jurors' understanding of witness memory.
Most recently, I collected data concerning jurors' expectancies for child victims' demeanor at trial. Several innovative procedures have been introduced (e.g., testimony via closed-circuit television) to accommodate children in abuse cases. Although these procedures reduce children's stress and increase their accuracy, they also decrease children's perceived credibility. We reasoned that children's relaxed demeanor might violate jurors' behavioral expectancies and, in turn, decrease children's perceived credibility (McAuliff & Kovera, 2002). Our results demonstrated that jurors hold beliefs about children's behavior (e.g., nervousness, eye contact, rate of speech) that may be violated when courts use alternative testimonial procedures. In future research, I will vary the child's demeanor and form of testimony to determine how these variations affect jurors' perceptions of child witness credibility.