Monday 31 March 2014

Explaining Delusions (4)

This is a response to Phil Corlett's contribution to the blog, posted on behalf of Max Coltheart.


Max Coltheart
I think it would be helpful if at this stage the distinction between monothematic and polythematic delusions were introduced. A monothematic delusional condition is one where the patient has only a single delusional belief (or a small set of delusional beliefs related to a single theme). A polythematic delusional condition is one where the patient has many different and unrelated delusional beliefs. This distinction between monothematic and polythematic delusion was offered by Davies, Coltheart, Langdon and Breen (2001) and has been discussed by the philosopher Jennifer Radden in her book (Radden, 2010).


As Coltheart (2013) noted “Well-known cases of polythematic delusion include Daniel Schreber, a judge in the German Supreme Court, who believed that he had the plague, that his brain was softening, that divine forces were preparing him for a sexual union with God, and that this would create a new race of humans who would restore the world to a lost state of blessedness (Bell 2003); more details of his case are provided on pp. 50-52 of Radden’s book. Another example was the Nobel laureate John Nash (who was diagnosed with schizophrenia); among the delusional beliefs he held were that he would become Emperor of Antarctica, that he was the left foot of God on Earth, and that his name was really Johann von Nassau (Capps 2004).”

Thursday 27 March 2014

Workshop on Defeat and Religious Epistemology

On 17-18th March, The New Insights and Directions for Religious Epistemology project at the University of Oxford hosted a workshop on defeat and religious epistemology. Papers were given by Charity Anderson (Oxford), J. Adam Carter (Edinburgh), Maria Lasonen-Aarnio (Michigan), John Pittard (Yale Divinity School), Edward Wierenga (Rochester) and Michael Bergmann (Purdue).
The workshop began with a discussion of Anderson’s paper on Defeat, Testimony and Miracles, which considered the rationality of believing a miracle report, as discussed in Hume’s infamous essay ‘Of Miracles’. Anderson examined the role epistemic defeat plays in Hume’s argument, claiming that Hume’s central point is not, as is often thought, that testimony is a weak source of knowledge, but rather that some kinds of testimony, namely testimony to the miraculous, are unreliable.

Wednesday 26 March 2014

Implicit Bias and Epistemic Innocence

In this post I will suggest some reasons for thinking that at least some beliefs based on implicit bias are epistemically innocent. An implicit bias is a bias ‘of which we are not aware […] and can clash with our professed beliefs about members of social groups’, and which can ‘affect our judgments and decisions’ (Crouch 2012: 7). Empirical work has shown that such biases are held by ‘most people’, even those who avow egalitarian positions or are themselves members of the targeted group (Steinpreis et al. 1999).

As Lisa and I have said in previous posts, we understand a cognition to be epistemically innocent when it meets two conditions. Here are the conditions a belief based on implicit bias would have to meet in order to be epistemically innocent:

1. Epistemic Benefit: The belief delivers some significant epistemic benefit to an agent at a time (e.g., it contributes to the acquisition, retention or good use of true beliefs of importance to that agent).

2. No Relevant Alternatives: Alternative beliefs that would deliver the same epistemic benefit are unavailable to the agent at that time.

Tuesday 25 March 2014

Explaining Delusions (3)

Phil Corlett
This is a response to Max Coltheart's contribution to the blog, posted on behalf of Phil Corlett.

Thank you Max. Your responses are enlightening. I do have a number of follow-up questions, if I may.

Follow-up to Q1 – If prediction error is intact in people with delusions, how would we observe the patterns of prediction error disruption in our data? These patterns have been consistent across endogenous (Corlett et al., 2007) and drug-induced delusions (Corlett et al., 2006), as well as in healthy people with delusion-like ideas (beliefs in telekinesis, for example) (Corlett & Fletcher, 2012). Importantly, these neural responses (aberrant prediction errors) correlated with delusion severity across subjects in these studies.
On the other hand, if prediction error must be intact for the 2-factor theory, do our data suggest that the 2-factor theory does not apply to delusions that occur in schizophreniform illnesses?

Sunday 23 March 2014

Explaining Delusions (2)

This is a response to Phil Corlett's contribution to the blog, posted on behalf of Max Coltheart.

Max Coltheart
I thank Phil for his illuminating questions about my post, and will attempt to answer them, in Q&A format (Q is Phil, A is me).

Q: Are you aligning prediction error with Factor 1 or Factor 2? It seems Factor 1, but I wanted to check – particularly since you align Factor 2 with the functioning of right dorsolateral prefrontal cortex, which, as you know, we’ve implicated in prediction error signaling with our functional imaging studies.

A: In our account, what generates prediction error is Factor 1: for example, in Capgras delusion, the prediction is that an autonomic response will occur when the face of one’s spouse is seen, but that prediction is in error, since the predicted response does not occur. But detection of this prediction error would only occur if the system that detects such errors is intact. And the job of this system is to generate hypotheses to account for these errors: a delusional hypothesis would not occur unless this function of the prediction-error system were also intact. Our understanding of your model is that there is something wrong with the prediction-error system in people with delusions. As for right dorsolateral prefrontal cortex, we associate Factor 2 with this region, believing that damage to the region results in impairment of the belief evaluation system.
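A toy sketch of this sequence, in Python, may help: Factor 1 supplies the anomalous prediction error, the intact prediction-error system generates the candidate hypothesis, and Factor 2 is the evaluation step that should reject it. The names, values and threshold below are assumptions made purely for the example, not part of the two-factor theory itself.

```python
# Toy illustration of the two-factor sequence described above.
# The values and the threshold are arbitrary assumptions made for the
# example; they are not part of the theory itself.

def two_factor_capgras(predicted_autonomic, observed_autonomic,
                       belief_evaluation_intact):
    # Factor 1: the predicted autonomic response to the spouse's face
    # fails to occur, so a prediction error is generated.
    prediction_error = predicted_autonomic - observed_autonomic
    if abs(prediction_error) < 0.1:
        return "no anomaly: the belief 'this is my spouse' is retained"

    # An intact prediction-error system generates a candidate hypothesis
    # to account for the error.
    candidate = "this person is an impostor"

    # Factor 2: belief evaluation should reject the candidate, since so much
    # evidence counts against it; when impaired, the candidate is accepted.
    if belief_evaluation_intact:
        return f"candidate '{candidate}' generated but rejected"
    return f"candidate '{candidate}' accepted as belief (Capgras delusion)"

# Both factors present: the delusional belief is adopted.
print(two_factor_capgras(1.0, 0.0, belief_evaluation_intact=False))
# Factor 1 alone: the candidate hypothesis is generated but rejected.
print(two_factor_capgras(1.0, 0.0, belief_evaluation_intact=True))
```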


Thursday 20 March 2014

Mental Illness: Philosophy, Ethics and Society

Matthew Broome and Lisa Bortolotti
On 17 March 2014 Kengo Miyazono organised a public engagement event as part of the Arts and Science Festival at the University of Birmingham. The main theme of the event was a reflection on the importance of psychiatric diagnosis in establishing whether someone is responsible for committing a crime.

The event consisted of several activities: a talk by Matthew Broome (Psychiatry, Oxford) on a case study he had written about, featuring a man with schizophrenia committing a crime; a brief commentary by Lisa Bortolotti (Philosophy, Birmingham) explaining how the considerations made about the case study could enlighten the debate on the recent Breivik case in Norway; a question and answer session with the audience; and a discussion session for which the audience split into two groups. Doctoral students Sarah-Louise Johnson, Rachel Gunn and Ben Costello also contributed to the discussion. Additional activities are planned this week on the event website, Mental Illness: Philosophy, Ethics and Society.

Tuesday 18 March 2014

Delusions as Acceptances

Richard Dub
My name is Richard Dub. I'm currently a postdoctoral fellow at the Swiss Centre for the Affective Sciences, and I recently received my Ph.D. in Philosophy from Rutgers University. In my dissertation, I offered a model of delusions that attempted to answer two questions: What are delusions? How are they formed? Delusions, I argue, are pathological acceptances formed on the basis of pathological cognitive feelings.

Neither 'acceptance' nor 'cognitive feeling' is an entirely mainstream concept.  A concern that motivates a lot of my work is that it is procrustean to try to explain all mental phenomena in terms of a select few propositional attitudes.  There is little reason to insist that belief and desire must take their traditional place of prominence.  The mind is lush, not sparse.  The ordinary concept of belief is likely what Ned Block calls a "mongrel concept": a concept that imperfectly picks out various dissimilar cognitive states.

Saturday 15 March 2014

Explaining Delusions (1)

This is a response to Max Coltheart's post (and comments), posted on behalf of Phil Corlett.


Phil Corlett



As usual, Max, a thorough, interesting and well-written piece.

I am curious about a couple of things.

First, you say that the prediction error signal fails in our model. Are you implying that we believe delusions form in the absence of prediction error? Our data point to the opposite case. Prediction errors are inappropriately engaged in response to events that ought not to be surprising. That is why people with delusions learn about events (stimuli, thoughts, percepts) that those without delusions would ignore.

Second, you claim that prediction error is key to belief updating in your model. Are you aligning prediction error with Factor 1 or Factor 2? It seems Factor 1, but I wanted to check – particularly since you align Factor 2 with the functioning of right dorsolateral prefrontal cortex, which, as you know, we’ve implicated in prediction error signaling with our functional imaging studies.

Third, I am curious about some specific facets of 2-factor theory. It seems to me that your account is based on, and works well for, what I call neuropsychological delusions: delusions that arise from some sort of closed head injury damaging right frontal and parietal cortex, amongst other regions. Your argument is clear: you need perceptual system damage (Factor 1) and belief evaluation damage (Factor 2). In these neuropsychological cases, doesn’t the damage happen simultaneously, as a result of a stroke or accident? If so, why do they update their old belief at all?

According to your model, you would need to wake up from your coma, have the experience of unfamiliarity towards your wife, update your belief appropriately (using the normally functioning prediction error signal), and then hold fast to that belief. To form the delusion based on the odd experience, wouldn’t belief evaluation need to be working normally, and then, once the belief is formed, break?
 
Fourth, why are these monothematic delusions so circumscribed? The belief evaluation deficit seems rather circumscribed too – these patients do not form delusions about numerous themes in the way that patients with schizophrenia do. Wouldn’t a general belief evaluation deficit predict that they would?

Fifth, your model does not allow for any top-down influence of belief on perception. Visual illusions like the rotating hollow mask demonstrate the powerful effect that priors have on how we see the world. Your separate factors seem not to allow for that – would you agree?

Along these lines, spontaneous confabulation, where, after a head injury, people believe for example that they are still engaged in their pre-injury lives (e.g. a lawyer referring to the doctors and nurses in the hospital as judges and barristers), seems to be a more top-down, delusion-like phenomenon. What is the Factor 1 here?

Furthermore, Max asked whether there were any examples of people with pure Factor 1 damage who form delusions (as prediction error theory would demand). In the absence of a clearer definition of what Factor 1 is, I will assume that it would engender a pure sensory deficit. There is a case report of someone with damage to the brachial plexus, a key node of the peripheral nervous system supplying the arm and hand. After damaging his brachial plexus (and, crucially, in the absence of central damage), he began to feel that his hand had been replaced by a mechanical robot hand that did not belong to him, and he began to produce elaborate narratives to explain his experiences.

Finally, I would like to offer a consilient explanation that might unite our models.

It is important to appreciate that there are at least two different types of prediction error. There is the standard, gradient descent or delta-rule-based prediction error that is employed to update weights in particular learning settings. I’ll call this within-model prediction error; we use it to learn about the specific aspects of a model that we have assumed to pertain. There is also a prediction error over models – it tells us that the model we assumed is inadequate and we need to invoke another. Computational learning theorists like Nathaniel Daw have shown separable but interacting neural mechanisms for these errors – crucially, model or state prediction error is located in DLPFC and parietal cortex. Could prediction error be the algorithmic basis (pace David Marr) for Factors 1 and 2?
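To make the distinction concrete, here is a minimal sketch in Python, assuming a simple delta-rule value update for the within-model case and outcome surprisal under an assumed model as a stand-in for the over-model signal; the function names and numbers are illustrative only and are not drawn from the studies cited above.

```python
from math import log

def delta_rule_update(value, outcome, learning_rate=0.1):
    """Within-model prediction error (delta rule): the gap between the
    observed outcome and the current prediction is used to adjust the
    weights of the model we have assumed to apply."""
    prediction_error = outcome - value
    return value + learning_rate * prediction_error, prediction_error

def over_model_surprise(outcome, assumed_model):
    """Toy 'prediction error over models': the surprisal of an outcome under
    the assumed model. Persistently large values suggest the assumed model
    itself is inadequate and another should be invoked."""
    return -log(assumed_model.get(outcome, 1e-6))

# Within-model learning: a value estimate is nudged towards observed outcomes.
value = 0.5
for outcome in [1.0, 1.0, 0.0, 1.0]:
    value, pe = delta_rule_update(value, outcome)
    print(f"value={value:.2f}  within-model PE={pe:+.2f}")

# Over-model signal: an outcome the assumed model treats as near-impossible
# yields a large surprisal, flagging that a different model may be needed.
assumed_model = {"expected": 0.99, "unexpected": 0.01}
print(f"over-model surprisal = {over_model_surprise('unexpected', assumed_model):.2f}")
```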

Perhaps that is something to be developed in another blog post.

Thursday 13 March 2014

Workshop on Belief in Birmingham

Scott Sturgeon, Rae Langton, and Susanna Siegel
On 12th March, the Department of Philosophy at the University of Birmingham held a workshop on Belief, as part of the Birmingham workshops in philosophy series. Papers were given by Scott Sturgeon (Birmingham), Rae Langton (Cambridge), and Susanna Siegel (Birmingham/Harvard). 

Sturgeon opened the workshop with his paper ‘Epistemic Attitudes: the Tale of Bella and Creda’. He was interested in three questions. First: which are our epistemic attitudes? Second: do elements of a given attitudinal space reduce to others in that space? Third: do elements of a given attitudinal space reduce to others in a different space? The focus was on this third question, and more specifically on the Belief-first view (credences reduce to beliefs) versus the Credence-first view (beliefs reduce to credences).

Tuesday 11 March 2014

Explaining Delusional Belief: The Two-Factor Account

Max Coltheart
I'm Max Coltheart, Emeritus Professor of Cognitive Science at the Centre for Cognition and its Disorders and Department of Cognitive Science, at Macquarie University. I work in cognitive neuropsychology (especially developmental dyslexia) and cognitive neuropsychiatry (especially delusional belief). I am also interested in the use of functional neuroimaging to attempt to test cognitive theories. I did a joint undergraduate degree in psychology and philosophy at the University of Sydney and have worked with the philosophers Martin Davies, John Sutton and Peter Menzies.

Since the late 1990s my colleague Robyn Langdon and I, initially in collaboration with Martin Davies and the clinical neuropsychologist Nora Breen, and later in collaboration with various others, especially Ryan McKay, have been developing a cognitive-level theory of the genesis of delusions which we call the two-factor theory. Our view is that scientific understanding of any delusional condition has been achieved once the answers to two questions have been discovered. Question 1 is: what caused the idea that is the content of the delusion to first come to the deluded person’s mind? Question 2 is: what caused this idea – a candidate for belief – to be accepted as a belief, rather than rejected, which is what ought to have happened because there is so much evidence against it?

Thursday 6 March 2014

Workshop - Costs and Benefits of Imperfect Cognitions

University of Birmingham
On May 8th and 9th there will be a two-day workshop at the University of Birmingham, discussing some of the key themes in the Epistemic Innocence project.

The workshop will be one of the means by which the project's interim results are disseminated, and will promote an exchange between philosophers and psychologists on the potential pragmatic and epistemic benefits and costs of beliefs, memories, implicit biases, and explanations. It will also be a venue for Imperfect Cognitions network members to meet, talk about their research, and think about potential areas for future collaboration.

The workshop is funded by the Arts and Humanities Research Council, and has also received the support of the Analysis Trust. This made it possible to subsidise the registration fee, and award workshop bursaries to the graduate students attending. Abstracts for the talks can be found here. If you wish to participate, register by 15th March 2014 or follow live-tweeting by @epistinnocence.

If you want to know more about the project, see the articles featuring research on epistemic innocence published on the OUP Blog and the Philosophy@Birmingham blog in February, and in The Reasoner in March.


Tuesday 4 March 2014

Thought Insertion and the Minimal Self

Caitrin Donovan
I am a student in the Unit for the History and Philosophy of Science at the University of Sydney. In my recently completed honours dissertation, I argued that the delusional phenomenon of thought insertion problematises certain aspects of the 'minimal model' of self.

Many philosophers now believe that the self is in some way constructed by narrative; through socio-linguistically mediated story-telling, we achieve diachronic unity by taking a reflective stance on our experiences. According to the strong formulation of this thesis, conscious beings only develop selves once they acquire the higher-order linguistic and reflective capacities required for autobiographical self-understanding.