Thursday 25 February 2016

Rationality and Its Rivals

On 11-12 December 2015, The 2nd International Conference on Natural Cognition: Rationality and Its Rivals was held at the University of Macau, organised by Nevia Dolcini. Interesting and exciting talks were given, mostly by philosophers, on a variety of topics including rational norms, reasoning, ecological rationality, cognitive biases, self-deception, religious beliefs, and emotions and moods.


I briefly summarise some talks below. (See here for the full programme with abstracts.)


In some cases, there is no non-circular justification of a particular rational norm or rule. For instance, Hume's famous discussion of inductive inferences seems to show that there is no non-circular justification of inductive inferences. One might try to justify inductive inferences on the basis of their past success but, as Hume pointed out, this justification itself is inductive and hence circular. In his talk "Circularity and Objective Rational Norms", Jonathan Ichikawa (British Columbia) argued that some norms or rules are simply and brutely objectively correct, even if there is no non-circular justification of them, and even if we cannot distinguish the good cases of circular justification from the bad cases. He discussed some interesting implications for the externalism/internalism debate about rationality.


Neil Van Leeuwen's (Georgia State) "Imagination and Sacred Value" was about the peculiar features of religious preferences and beliefs. First he introduced Scott Atran's recent study, which reveals the peculiar features of religious preferences, such as the violation of transitivity and the lack of temporal discounting. Then he argued that, in causing behaviour, the non-standard system of preferences in religious contexts (called "sacred value") should be combined with a non-standard system of beliefs. In particular, he argued that the “beliefs” that interact with sacred values should be more similar to imaginings than to ordinary factual beliefs, which is consistent with his claim in this paper.

Tuesday 23 February 2016

Are Mental Disorders Natural Kinds?


This post is by Şerife Tekin, Assistant Professor of Philosophy at Daemen College. Here she summarises her paper ‘Are Mental Disorders Natural Kinds? A Plea for a New Approach to Intervention in Psychiatry’, forthcoming in Philosophy, Psychiatry, and Psychology.

Thanks to Ema for inviting me to share my work with the Imperfect Cognitions blog readers. What follows is a snapshot of my arguments in the paper mentioned above. In the article I engage with a debate with a long history in philosophy of science: the metaphysical status of mental disorders and their amenability to empirical investigation. I offer an evaluation of what I call a Looping Debate (Tekin 2014), and recommend its replacement with a Trilateral Strategy.

Among philosophers interested in metaphysical and epistemological issues of psychiatric classification, the application of the theory of natural kinds to mental disorders is a particularly contentious topic (e.g. Hacking 1995; Cooper 2004; Zachar 2000; Graham 2010; Kendler, Zachar, and Craver 2010; Samuels 2009). For many, the concept of natural kind is appealing because of its perceived utility in scientific generalizations. Members of a particular natural kind are thought to share a large number of scientifically relevant properties that ground scientific explanations, predictions, and interventions. If mental disorders are natural kinds, as many argue, the discovery of their shared properties can yield explanations, predictions, and interventions (Cooper 2007; Samuels 2009).

In short, the motivation to attribute natural kind status to mental disorders originates from the desire to make them amenable to manipulation. If the scientifically relevant properties of, say, bipolar disorder can be identified in this fashion, we can formulate scientific explanations, offer predictions about its course, and develop interventions.

Thursday 18 February 2016

Virtues and Vices in Evidence-Based Clinical Practice

On January 27th 2016 delegates convened in Green Templeton College at the University of Oxford to discuss virtues and vices in evidence-based medicine. The delegates came from a variety of different fields, including nursing, medicine, sociology, policy-making and philosophy.


The aim of the workshop was to discuss the gap between evidence-based research and clinical practice. A large body of research has been undertaken and developed into guidelines for medical practitioners. However, there are a number of limitations to these guidelines: e.g. they fail to reflect the complex social and moral nature of healthcare, and they fail to reflect the explanatory role of cognitive mechanisms and character traits in clinical practice. These limitations to evidence-based medicine were discussed in the workshop and the talks aimed to outline strategies that could be used to ameliorate the flaws. 

Talks focussed on professional virtues, intellectual virtues and vices, psychological virtues and vices, and professional vices. This blog post focuses on the first two talks, on professional virtues and intellectual virtues and vices.

Iona Heath discussed professional virtues. She argued that there are two forms of thought, with different standards of correctness, which can be used in the healthcare setting. One form of thought involves rational argument, the other involves story-telling. Evidence-based medicine, she argued, focuses too heavily upon rational argument, leading the stories that are crucial to gaining a proper understanding of patients to be ignored. She argued that the neglect of stories and emphasis placed on rational argument is reflected in the words used in discourse surrounding healthcare. Words like “care”, “responsibility”, and “creativity” are now neglected and words like “attentiveness”, “imagination”, “commitment” are being lost. Meanwhile, words like “regulation”, “inspection”, “failure”, “risk” are ubiquitous.

Heath emphasised the importance of addressing these problems via virtues like improvisation, which involves learning to cope with incomplete information and unpredictable settings. She argued that generalists, who are able to be flexible, recognising and accommodating errors, are in the best position to exercise this virtue. She also emphasised the importance of the wisdom involved with recognising the lack of universality of rules. Finally, she emphasised the importance of other-directed virtues, virtues of intersubjectivity, which allow for meaningful interactions with patients and good quality communication. These virtues include character traits like compassion and generosity, which facilitate meaningful exchanges of ideas.

Tuesday 16 February 2016

Epistemic Benefits of Delusions (2)


This is the second in a series of two posts by Phil Corlett (pictured above) and Sarah Fineberg (pictured below). Phil and Sarah are both based in the Department of Psychiatry at Yale University. In a post published last week and in this post, they discuss the adaptive value of delusional beliefs via their predictive coding model of the mind, and the potential delusions have for epistemic innocence (see their recent paper 'The Doxastic Shear Pin: Delusions as Errors of Learning and Memory', in Cognitive Neuropsychiatry). Phil presented a version of the arguments below at the Royal College of Psychiatrists' Annual Meeting in Birmingham in 2015, as part of a session on delusions sponsored by project PERFECT.


Our analysis depends on two distinct systems for instrumental learning, one goal-directed, the other habitual (Daw et al., 2005). The goal-directed system involves learning flexible relationships between actions and outcomes instantiated in the more computationally intensive prefrontal cortices (Daw et al. 2005). On the other hand, habits involve more inflexible representations of the relations between environmental stimuli and behavioural actions; when a particular cue is perceived, a specific action is elicited irrespective of the consequences (Daw et al. 2005). These two systems compete to control behaviour (Hitchott et al. 2007).

Delusions may be sub-served by the striatal habit system because the computationally intensive (but flexible) goal-directed system is impaired. A recent account of reinforcement learning from the point of view of predictive processing actually suggests that the goal-directed system is associated with processing higher up the Bayesian hierarchy, bootstrapped from simpler habitual reflexes lower down (Pezzulo et al., 2015). In this account, behavioural control is a function of the degree of precision of prediction error, and control is ceded to the level of the hierarchy with the highest precision (of priors and prediction error)—there is no need for separate competing systems (Pezzulo et al. 2015).
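The arbitration rule described above can be illustrated with a toy sketch (a hypothetical illustration under simplified assumptions, not Pezzulo et al.'s actual model): each level of the hierarchy carries a precision estimate, and behavioural control is simply ceded to whichever level currently has the highest precision.

```python
# Toy sketch of precision-weighted arbitration between hierarchical
# controllers. Names and numbers are hypothetical, for illustration only.

def choose_controller(precisions):
    """Cede behavioural control to the level of the hierarchy whose
    predictions currently have the highest precision (i.e. the most
    reliable prediction errors)."""
    return max(precisions, key=precisions.get)

# When the flexible goal-directed level is intact, its precision dominates;
# when it is impaired (low precision), control falls to the habitual level.
healthy = {"goal_directed": 4.0, "habitual": 2.5}
impaired = {"goal_directed": 0.5, "habitual": 2.5}

print(choose_controller(healthy))   # goal_directed
print(choose_controller(impaired))  # habitual
```

On this picture there is no dedicated arbitrator competing over behaviour; the "choice" of controller just falls out of the relative precision estimates themselves.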

We propose a single impairment in prediction-error–driven learning (across the hierarchy) that unfolds in three stages: first, a delusional mood; next, the delusion forms in an ‘aha’ moment, in which explanatory insight occurs; finally, the delusion is applied to do further explanatory work. Delusions may be an adaptive product of the shear pin breaking, because they enable patients to remain in connection with their environment and to exploit its regularities to some degree.

Prior to delusions, there is a prodromal delusional mood in which attention is drawn toward irrelevant stimuli, thoughts, and associative connections which are distressing and unpredictable (Kapur 2003). This reflects an impairment in the brain’s predictive learning mechanisms, such that inappropriate prediction errors are registered (Corlett et al. 2010).

Thursday 11 February 2016

Philosophy of Psychiatry Today: Interview with Dominic Murphy

In this post, Reinier Schuur, PhD student at the University of Birmingham, interviews Dominic Murphy (pictured below), Associate Professor at the University of Sydney, on current debates in the philosophy of psychiatry.




RS: Many people have said that over the last 20 years, philosophy of psychiatry has grown, as has the interaction between philosophers and psychiatrists. Do you agree? Do you think this interaction will increase, and what should the role of philosophers be in psychiatry, and vice versa?

DM: I suppose it has grown. When I started thinking about psychiatry in the mid-90s (I started my PhD in 1994), there were very few philosophers of psychiatry. Reznek had just written his book, Jennifer Radden and Stephen Braude had written, and Ian Hacking was about to start writing. The field was very small and it has certainly grown. I think psychiatrists are probably interacting more with philosophy too. There has always been a conceptual literature in psychiatry, and there have always been some psychiatrists who have been interested in these sorts of issues.

So yes, I guess it’s been growing. I think it will probably continue, I think in some sense philosophy of psychiatry has started to become entrenched a little bit more now. I don’t think it really counts as a specialisation exactly, but I guess that more and more people are thinking of it as one of the things that they are interested in, and who knows maybe it will be like philosophy of biology in the 70’s and it will really take hold. I suppose the interaction will continue, I mean at the recent Copenhagen conference there were psychiatrists that had never been to a philosophy conference before. And it was very interesting and I hope more and more psychiatrists will get into it, though I don’t suppose it will be more than a very minor interest in psychiatry.

As far as the role of philosophy in psychiatry goes, I’m not sure. The way that I got into it was as a philosopher of mind and a philosopher of psychology. I was interested in psychiatry because people were talking about mental illness as evidence for certain hypotheses in the philosophy of mind. So people looked at autism, or the contrast between autism and Williams syndrome, and they wondered whether that meant that theory of mind was modular in some way. Dennett and Humphrey found support for Dennett’s view of the self in the multiple personality literature. So I think there is always going to be an interest in looking at psychiatric diagnoses in the light of concerns that philosophers of psychology have, and there is always going to be an interest in looking at philosophy of psychology in the light of some of the mental illnesses. Then I guess you can also see a sort of philosophy of science wing in philosophy of psychiatry, interested in many of the things philosophers of science have always cared about, such as explanation and reduction. And as there has been more nuanced philosophy of neuroscience over the last 10 to 15 years, I think some of that has been lining up with the philosophy of psychiatry, such as the people looking at explanation and reduction. So I guess there will be these two sorts of tracks.


Tuesday 9 February 2016

Epistemic Benefits of Delusions (1)


This is the first in a series of two posts by Phil Corlett (pictured above) and Sarah Fineberg (pictured below). Phil and Sarah are both based in the Department of Psychiatry at Yale University. In this post and the next they discuss the adaptive value of delusional beliefs via their predictive coding model of the mind, and the potential delusions have for epistemic benefits (see their recent paper 'The Doxastic Shear Pin: Delusions as Errors of Learning and Memory', in Cognitive Neuropsychiatry). Phil presented a version of the arguments below at the Royal College of Psychiatrists' Annual Meeting in Birmingham in 2015, as part of a session on delusions sponsored by project PERFECT.


The predictive coding model of mind and brain function and dysfunction seems to be committed to veracity; at its heart is an error correcting, plastic, learning mechanism that ought to maximize future rewards and minimize future punishments like the agents of traditional microeconomics—so-called econs (Padoa-Schioppa forthcoming). This seems at odds with predictive coding models of psychopathology, and in particular psychotic symptoms like hallucinations and delusions (Corlett et al. 2010). Put simply, if delusions result from a noisy maladaptive learning mechanism, why do individuals learn anything at all—let alone the complex and strongly held beliefs that characterize psychotic illness? We know from behavioural economists like Kahneman, Tversky, and Thaler that humans can depart from econ-like responding. Can predictive coding depart likewise? And does it depart in interesting ways that are relevant to delusions?

We think so. Bayesian models of cognition and behaviour need not optimize expected value. For example, Bayesian models of message passing in crowds can recapitulate the rumors and panic that characterize communication after a salient world event (Butts 1998). With regard to delusions, we would like to re-consider Daniel Dennett and Ryan McKay’s assessment of adaptive misbeliefs (McKay and Dennett 2009). McKay and Dennett explored the existence of misbeliefs—incorrect beliefs that, despite being wrong, nevertheless confer some advantage on the adherent. They argued that only positive illusions—beliefs that one is more competent, more attractive, less biased than in reality, etc.—were evolved, adaptive misbeliefs (McKay and Dennett 2009). We (and others) think delusions might confer such a function (Hagen 2008).

Thursday 4 February 2016

Workshop on Belief and Emotion

On Friday 27th November, project PERFECT (Department of Philosophy), together with the Aberrant Experience and Belief research theme (School of Psychology), held a mini-workshop on the topic of Belief and Emotion. In this post I summarise the three talks given by Allan Hazlett, Neil Levy, and Carolyn Price.

Allan Hazlett opened the workshop with his paper ‘On the Special Insult of Refusing Testimony’. He argued that refusing someone’s testimony (i.e. not believing what someone tells you) is insulting, and to express such refusal amounts to a special kind of insult. Understanding telling as an attempt to engage in information sharing, Hazlett suggested that in telling someone that p, I am asking that person to believe that p because I believe it. Refusing my testimony would be to insult me because it constitutes the person's not trusting me. Hazlett concluded by asking why not trusting would be insulting. He canvassed four ideas in answer to this question: lack of trust (1) undermines intimacy, (2) undermines solidarity, (3) implies non-competence, and (4) constitutes non-acceptance. Elaborating on (3), Hazlett argued that trusting requires an attitude about someone’s competence, specifically about their reliability (tendency to believe truths) and their sincerity (tendency to honesty).

Neil Levy continued the workshop with his paper ‘Have I Turned the Stove Off? Explaining Everyday Anxiety’. He was interested in a certain kind of discordancy case, namely, neurotic anxiety. His focus was on the case of Joe, who believes that he turned the stove off, but nevertheless finds himself wondering ‘have I turned the stove off?’ Joe cannot be sure he turned it off, or that the apparent memory of so doing has the correct time stamp (since he cooks every morning). This case is one of discordancy since Joe engages in action which is contrary to his belief (i.e. ruminating on the matter). Levy argued against several interpretations of this case: its being a credence case (Joe has a low, non-negligible credence that he did not turn the stove off), its being an in-between belief case (Joe in-between believes that he did not turn the stove off), its being an alief case (Joe alieves that he did not turn the stove off), and its being a metacognitive error case (Joe imagines that he did not turn the stove off). Levy finished by sketching an alternative account according to which such a case is one in which the relationship between the representation and action is deviant, in its being mediated by anxiety.

Carolyn Price closed the workshop with her paper ‘Emotion, Perception, and Recalcitrance’. Price was interested in the phenomenon of recalcitrant emotion, that is, the fact that sometimes our considered judgements and our emotional responses are in tension with one another. She discussed the comparison of cases of emotional recalcitrance with cases of recalcitrant perception (conflict between judgement and perception), specifically, optical illusions. She noted that this comparison has been used to support the claim that emotion is a form of perception. However, a problem with this view is that recalcitrant emotions are judged as irrational, whereas recalcitrant perceptions are not. Price suggested that though this thought seems right, it raises a puzzle, namely: if emotions are not judgements, why is it that we think recalcitrant emotions are irrational? Price suggested an answer to this question which appealed to emotions and judgements answering to different standards of evidence.

Tuesday 2 February 2016

Cognitive Bias, Philosophy and Public Policy

This post is by Sophie Stammers, PhD student in Philosophy at King’s College London. Here she writes about two policy papers, Unintentional Bias in Court and Unintentional Bias in Forensic Investigation, written as part of a recent research fellowship at the Parliamentary Office of Science and Technology, supported by the Arts and Humanities Research Council.

The Parliamentary Office of Science and Technology (POST) provides accessible overviews of research from the sciences, prepared for general parliamentary use, many of which are also freely available. My papers are part of a recent research stream exploring how advances in science and technology interact with issues in crime and justice: it would seem that if there is one place where unbiased reasoning and fair judgement really matter, then it is in the justice system.

My research focuses on implicit cognition, and in particular, implicit social bias. I am interested in the extent to which implicit biases differ from other cognitions—whether implicit biases are a unified, novel class, or whether they may be accounted for by the cognitive categories to which we are already committed. So I was keen to get a flavour of how the general topic might be applied in a public policy environment, working under the guidance of POST’s seasoned science advisors.

Monday 1 February 2016

Beliefs that Feel Good Workshop


On December 16th and 17th, the Cognitive Irrationality project hosted a workshop on beliefs that feel good, organized by Marie van Loon, Melanie Sarzano and Anne Meylan. This very interesting event dealt with beliefs that feel good but are epistemically problematic in some way, as well as with beliefs for which this is not the case.

While the majority of talks and discussions focused on problematic cases, such as wayward believers, self-deceptive beliefs and unrealistically optimistic beliefs, there was also a discussion of epistemic virtues and the relation between scepticism and beliefs in the world. Below, I summarize the main points made in the talks.

Quassim Cassam probed the question why people hold weird beliefs and theories. Some examples are the theory that the moon landings were faked, or that 9/11 was in reality an inside job. Quassim argued that more often than not, these types of beliefs stem from epistemic vices. Insofar as these vices are stealthy, i.e. not apparent to the person who has them, it can be very hard to persuade people of the falsity of their opinions, as these vices reinforce and insulate the wayward beliefs. While it can happen that these wayward believers suddenly come to realize the inadequacy of their belief system in an act of deep unlearning, this is normally not something we can bring about by means of reasoning and persuasion.

In a talk that straddled the disciplines of philosophy, psychology and neuroscience, Federico Lauria put forward an account of self-deception as affective coping. He argued that we self-deceive in order to avoid painful truths and that a full explanation of self-deception needs to account for affective evaluation of information. He argued that self-deceptive individuals evaluate evidence that their desire is frustrated in the light of three appraisals: (i) ambiguity of the evidence, (ii) negative impact on one's well-being (mediated by somatic markers), and (iii) low coping potential. In addition, self-deceptive people evaluate the state of desire satisfaction positively, which is mediated by increased dopaminergic activity. The affective conflict between truth and happiness is at the heart of self-deception.

In my own talk, I asked whether unrealistic optimism is irrational. I argued that unrealistically optimistic beliefs are frequently epistemically irrational because they result from processes of motivated cognition which privilege self-related desirable information while discounting undesirable information. However, unrealistically optimistic beliefs can be pragmatically rational insofar as they lead individuals to pursue strategies that make a desired outcome more likely. Interestingly, where they are pragmatically rational, they are also less epistemically irrational, as the likelihood of success increases. However, this does not mean that they become realistic or justified. They merely become less unwarranted than they would otherwise have been.

Susanna Rinard proposed a novel response to scepticism, arguing that even if we concede that ordinary beliefs are not well supported by our evidence, it does not follow that we should give up on everyday beliefs. Rather, it may be rational to run the risk of having false beliefs about the external world, because we also incur a reverse cost when suspending belief: that of not having true beliefs. Furthermore, suspending judgment may come with additional costs, such as being effortful and unpleasant.

Finally, Adam Carter gave an account of insight as an epistemic virtue, drawing both on virtue epistemology and the psychology of insight problem solving. Insight experiences are characterized by a certain ‘aha’ moment where relations between states of affairs previously seen as unrelated strike the individual. Adam argued that the virtue of insightfulness lies in both cultivating the anteceding stages of preparation and incubation of insights described in the empirical literature and moving from insight experiences to judgment and/or verification in a considered way, i.e. not blindly endorsing the result of every insight experience but giving it due weight.

Every talk was followed by a response and lively discussion. Thanks to Anne, Melanie and Marie for organizing a workshop which was both intellectually stimulating and friendly.