Thursday, 28 January 2016

Neuroscience and Responsibility Workshop

Responsibility Project
This post is by Benjamin Matheson, Postdoctoral Fellow at the University of Gothenburg, working on the Gothenburg Responsibility Project. (Photos of workshop participants are by Gunnar Björnsson).

The workshop on ‘Neuroscience and Responsibility’, part of the Gothenburg Responsibility Project, took place on 14 November 2015. The workshop was well attended, the talks were informative, and the discussion was lively and productive.

Michael Moore (Illinois) kicked things off with his talk ‘Nothing But a Pack of Neurons: Responsible Reductionism About the Mental States that Measure Moral Culpability’. Part of Moore’s current project is to show that reductionism (roughly, the view that mental states are just brain states) is not a threat to our responsibility practices – that is to say, we can still be morally and legally responsible even if mental states reduce to brain states. The worry is that if mental states reduce to brain states, then it is not us but rather our brain states that are responsible for our actions – that is, we are ‘nothing but a pack of neurons’ and not the appropriate loci for attributions of responsibility. Moore argued that this worry is ill-founded. He emphasised that reducibility does not imply non-existence, and that mistakenly conflating reductionism with the view that mental states do not exist (also known as eliminativism) is what leads to the worry that the truth of reductionism would imply responsibility scepticism. Moore ended by noting that the same move can assuage worries that neuroscientific discoveries are or might be a threat to our responsibility practices.

Tuesday, 26 January 2016

The Disoriented Self


This post is by Michela Summa (pictured above), who works on the Body Memory project at the University of Heidelberg. Here she summarises her paper ‘The Disoriented Self. Layers and Dynamics of Self-Experience in Dementia and Schizophrenia’, published in Phenomenology and the Cognitive Sciences.

In recent years, several authors have defended a stratified or hierarchical account of the self and self-experience. Some of these accounts have proved to play an important role in the interpretation of psychiatric diseases. In this paper, I addressed the cases of dementia and schizophrenia in light of the hierarchical model of self and self-experience. In doing so, I set myself two main aims: first, to investigate the potential and the limits of applying the hierarchical understanding of the self to dementia and schizophrenia; secondly, to reassess the model itself on the basis of some characteristic traits of both pathologies and possibly to propose some modifications.

The paper begins with a discussion of how the hierarchical model presupposes a formal-ontological law of foundation, which becomes visible, for instance, in many texts on the distinction between the minimal self and the narrative self. Simplifying a rather complex debate, we can say that the concept of the minimal self generally refers to the basic, pre-reflective form of self-awareness, i.e., to the sense of mineness implied in all active and passive experiences.

The concept of the narrative self, instead, designates a higher and richer layer of self-experience, which presupposes language, self-reflection, and the possibility of recognizing oneself throughout one’s own life history. I argued that such a relation is generally understood as a univocal foundational relation, according to which the more complex layers cannot exist if the most basic layer does not exist. I further suggested that, if we go beyond purely formal ontology, the dependence relation is not limited to existence. Rather, in several works we can find the suggestion that modifications of the basic layers inevitably affect the higher ones, whereas the opposite does not seem to be the case.

Applying such a distinction between basic and higher layers of the self to psychopathology has certainly helped to highlight how different diseases affect particular layers of the self, leaving others untouched. However, a closer investigation of both dementia and schizophrenia shows that such a model needs to be theoretically refined, particularly regarding the implicit assumption about the foundational relation between layers of experience.

I carried out this investigation under the assumption that dementia and schizophrenia are diseases that entail two different kinds of disorientation. The two pathologies appear to be paradigmatic insofar as they affect, respectively, the higher and the basic layers of self-experience. Indeed, the experience of disorientation in patients with dementia appears to be connected with a disturbance of the higher faculty of reflective self-distancing. In schizophrenic patients, by contrast, such an experience is grounded in a more basic self-disturbance. However, considering both diseases, I indicated some limits of the model.


Thursday, 21 January 2016

The Problem of Defining Delusion

This post is by Giulia Cavaliere and James Rubert Fletcher, both PhD students in Social Science, Health and Medicine, King's College London.  



On November 12th, the third event organized by King’s College London’s new joint venture Philosophy & Medicine took place. Previous events have featured colloquia on placebo-controlled clinical trials and the challenges of communicating cancer risk. This third colloquium focused instead on mental health, and in particular on the problem of defining delusion.

The first part of the colloquium was led by Dr Abdi Sanati, a consultant psychiatrist and a philosophy scholar from the North East London NHS Foundation Trust. Dr Sanati opened with the description of a clinical case concerning a woman experiencing problems with a prosthesis and reporting her wish to have it removed. Her doctors originally encouraged her to keep the prosthesis, dismissing her discomfort as something “delusional”. Eventually, after many painful examinations, it was discovered that there was indeed something wrong with her prosthesis.

Dr Sanati used this case to illustrate both the theoretical difficulties of clearly identifying something as a delusion and the practical implications in real-life cases of labelling something as a delusion. After this introduction, Dr Sanati briefly presented the work of Karl Jaspers on delusion and the definitions used respectively in the 4th and 5th editions of the DSM (Diagnostic and Statistical Manual of Mental Disorders), in which delusions were first defined as “false beliefs” and then as “fixed beliefs”.

After outlining various critiques of these definitions, proposed especially by philosophy scholars, Dr Sanati shifted to the contribution of influential contemporary authors such as Berrios, Campbell and Bortolotti in framing the concept of delusion. His talk aimed at showing both the theoretical difficulties and controversies concerning the definition of delusion and the importance of the contribution from philosophy to this search for a definition – a search that has not yet come to an end.

On this issue, fellow guest Dr Luis Flores commented that philosophy can help to test the consistency of the definitions and ideas put forward by psychiatrists, but that it can also question the validity of such a heterogeneous concept altogether.

The talk was followed by a very active discussion between Dr Flores, Dr Sanati and the audience. Students and researchers from philosophy, medicine and social science dialogically expressed their views, doubts and concerns.

Tuesday, 19 January 2016

Agency and Ownership in the Case of Thought Insertion and Delusions of Control


This post is by Shaun Gallagher (pictured above). He is Lillian and Morrie Moss Professor of Philosophy at the University of Memphis. In this post he summarizes his recent article 'Relations between Agency and Ownership in the Case of Schizophrenic Thought Insertion', published in the Review of Philosophy and Psychology.

In a recent paper I offer a response to some philosophers who have raised objections to the idea that in schizophrenic delusions of control and thought insertion the problem is primarily with the sense of agency. Instead, they argue, it concerns the sense of ownership. Let me start by clarifying the distinction, because in fact it is a double distinction, or a distinction made on two levels. On the level of first-order, pre-reflective experience the distinction between sense of agency (SA) and sense of ownership (SO) can be seen in the contrast between voluntary and involuntary movement. In the latter case, for example, if someone pushes me from behind, I can say that I am moving – that it is my body that is moving (I have SO for the movement, meaning simply that I have the experience that I am the one moving); but for that initial movement I was not the initiator, and thus I do not have SA for it.

On top of that distinction, so to speak, there is another distinction made at the level of second-order reflective consciousness. Retrospectively I can report that I was the one who moved. This is what Stephens and Graham (2000) call the attribution of subjectivity, or attribution of ownership (call it AO to distinguish it from SO). Likewise, I can say whether I was the agent of that movement. This is the attribution of agency (AA). SA and SO are experiential, whereas AA and AO are judgments made about movement or action.

Prereflective experience | Reflective judgment
SO                       | AO
SA                       | AA

There are various debates about the subpersonal mechanisms (e.g., comparator models) that underlie the phenomenology of agency and ownership. There are even more basic questions about whether there are such experiences of SA and SO at all. I think there are ways to address these concerns, but here let me just say this: whatever the best way to explain the mechanisms that underpin SA or SO may be, to say that there are no such experiences suggests that we only discover what we have done after we have done it, and in fact it implies that the best we can do is make inferences about what we have done based on something like sensory (proprioceptive) evidence. This would apply to thinking and deliberation as well, if we consider thinking and deliberation to be actions engaged in by a subject.

As I deliberate and form an intention, or as I engage in an action or set of actions, if there is no SA, for example, then, in making a judgment about what I have done or about the fact that I have acted (AA), on what do I base my judgment? Stephens and Graham (2000) suggest that I base it on whether the action that I am considering is consistent with my self-narrative, or with the theory that I have about myself. If somehow I judge the action to be inconsistent with my self-narrative, then I would conclude that I did not do the action. And if in fact it had been my action, then, on their account, my mistaken conclusion would be delusional. Accordingly, on that view, delusions of control and thought insertion are simply the result of mistaken inference.

Monday, 18 January 2016

Causal Illusions and the Illusion of Control: Interview with Helena Matute

In this post I interview Helena Matute (pictured below), who is Professor of Psychology and director of the Experimental Psychology laboratory at the University of Deusto in Bilbao, Spain.


AJ: You are a leading expert on causal illusions. Could you explain what causal illusions and illusions of control are?

HM: A causal illusion (or illusion of causality) occurs when people perceive a causal relationship between two events that are actually unrelated. The illusion of control is just a special type of causal illusion in which the potential cause is our own behavior. That is, a causal illusion is often called an illusion of control when people believe that their own behavior is the cause of the unrelated effect, or, in other words, when they believe that they have control over uncontrollable events in their environment.

Illusions of causality and of control occur in most people, particularly under certain conditions. For example, when the potential cause and the potential effect occur frequently and in close temporal contiguity, most people develop the illusion that they are causally related. It becomes very difficult, if not impossible, to detect with our bare eyes that they are not. Indeed, just as we need a measurement tape to counteract optical illusions when we want to measure the size of an apartment, the use of a specialized tool (in this case the scientific method) is absolutely necessary when we need to make sure that a relationship (say between taking a certain pill and some health benefit) is causal rather than incidental.

For instance, imagine that someone tells people to take drug A when they suffer from syndrome X “because 80% of 100 patients who took drug A recovered from syndrome X”. Of course this information seems to suggest that there is a causal relationship between drug A and recovery, and if it comes from people we know and trust, we could develop the illusion. Note, however, that there is an important piece of information that we are not given in this example. Imagine that it is also true that “80% of 100 patients who took a sugar pill instead of drug A recovered from syndrome X just as well and just as fast”. Ummm. This new information is now telling us that drug A is totally ineffective; it is just like taking candy. That is, we humans are not prepared to detect these illusions unless we run a controlled experiment, or a controlled clinical trial, so that we get information both on what happens when we take the drug and on what happens when we do not take it.

The problem is that in our daily life, most of us tend to be content with the information that 80% of people taking the pill felt better afterwards, and therefore we assume a causal relationship and take the pill without even asking what the recovery rate for those not taking the pill was. This illusion of causality can have serious consequences (as when people refuse to go to the doctor just because they feel they can intuitively know what works for them). Without the aid of a careful and controlled methodology we humans are victims of causal illusions very often.
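The comparison Matute describes can be made concrete with a small sketch (ours, not from the interview). The numbers follow her hypothetical drug A / syndrome X example, and `delta_p` is the standard contingency index, P(outcome | cause) minus P(outcome | no cause):

```python
def delta_p(recovered_with, total_with, recovered_without, total_without):
    """DeltaP contingency: P(outcome | cause) - P(outcome | no cause)."""
    return recovered_with / total_with - recovered_without / total_without

# 80 of 100 patients recovered after taking drug A...
# ...but 80 of 100 also recovered after taking a sugar pill.
print(delta_p(80, 100, 80, 100))  # 0.0: no contingency, the drug does nothing

# Only a control condition can reveal a genuine contingency:
print(round(delta_p(80, 100, 20, 100), 2))  # 0.6
```

The point of the sketch is that the second argument pair, the untreated (or placebo) group, is exactly the information that everyday experience fails to supply, which is why a controlled comparison is needed.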


Thursday, 14 January 2016

Loebel Lectures 2015: Steven Hyman

In this post Reinier Schuur (University of Birmingham) reports from the Loebel Lectures in Psychiatry and Philosophy held on the 3rd, 4th, and 5th of November 2015. The lectures were delivered by Professor Steven E. Hyman, former director of the NIMH (National Institute of Mental Health), and currently at the Stanley Center for Psychiatric Research at the Broad Institute of MIT and Harvard.




Steven Hyman gave his lectures on ‘The theoretical challenge of modern psychiatry: no easy cure’, which dealt with the future of psychiatry and the potential ‘collision’ between patients’ lived experience and our neurobiological understanding of mental disorders. A small conference on Hyman’s lectures was also held on the 5th of November, at which several philosophers of psychiatry spoke, such as Derek Bolton, Tim Thornton, Jonathan Glover, and Julian Savulescu.

The title of the first lecture by Hyman was ‘The problem of modern psychiatry: the collision of neurobiological materialism with the experience of being human’. Hyman argues that there are two perspectives of the patient in psychiatry. The first is of the patient as subject, which concerns the patient’s lived experience of mental illness from a first-person perspective and self-narratives. The second is of the patient as object, as a ‘thing’ where something goes ‘physically wrong’, either in terms of their brain, or their brain in relation to the environment in some ‘mechanistic’ way.

Hyman is skeptical that a smooth ‘conceptual integration’ between these two perspectives will be possible, but leaves his arguments for this for the next two lectures. For the first lecture, Hyman sets out the history and future for the ‘mechanistic picture’ of mental illness.

The history of 20th century psychiatry is a mixture of breakthroughs and letdowns. Accidental findings of efficacious drugs were a clinical blessing but an intellectual curse, because they made us focus on particular targets for treatment based on prior success rather than looking for fundamentally new ways of treating mental disorders. Because of this, the concept of predictive validity led psychiatry into a ‘cul-de-sac’, rediscovering the same mechanisms. And while the DSM-III greatly improved construct validity and diagnostic reliability, it eventually led to a ‘reification’ of diagnostic categories, an overly reductionistic approach, and an impoverishment in models of psychopathology.

Moreover, the DSM as a framework hinders findings from the basic research sciences from properly informing psychiatry, because DSM categories do not converge on valid disease entities, as is evident in their high levels of co-morbidity. And categorical approaches might have been motivated by a desire to counter the anti-psychiatry movement by showing that what psychiatrists were dealing with were ‘real’ diseases, thereby biasing the field against promising dimensional approaches which might have had better research and clinical utility. So what’s the upshot of the ‘mechanistic picture’?

While Hyman gives us a sobering account of the progress of psychiatry, there is still room for ‘cautious’ optimism. One of the things we have learned over the last decades is that aggregate genetic factors correlate highly with the prevalence of mental disorders. The real challenge is that in order to have a proper genetic understanding of mental disorders, we need better observations of their effects, which requires both new technologies and large-scale ‘big-data’ genetic analysis. Statistical power matters, as Hyman says.

And the good news is that we have begun to ditch the old ‘Mendelian’ way of thinking about genetic analysis, which hindered previous research programmes; we are innovating new statistical approaches and factoring environmental influences into genetic analysis in the form of epigenetics; and the overall cost of technological analysis has greatly decreased. Moreover, dimensional approaches are increasingly being accepted over categorical ones for research purposes. While it may take decades to reap the fruits of these innovations and changes, there is some room for optimism. The next question is: given this potential trajectory for the future of psychiatry, how will future findings relate to patients’ lived experience of their mental illnesses? This is the question addressed in the second lecture.

Tuesday, 12 January 2016

Response to McKay's 'Bayesian Accounts and Black Swans'

This post is by Matthew Parrott (pictured below) and Philipp Koralus (pictured below). In a previous post they summarised their recent paper ‘The Erotetic Theory of Delusional Thinking’, published in Cognitive Neuropsychiatry. Ryan McKay, in a post the following week, summarised his recent paper responding to Matthew and Philipp, 'Bayesian Accounts and Black Swans: Questioning the Erotetic Theory of Delusional Thinking'. In this post, Matthew and Philipp respond to Ryan. 



We are very grateful to Ryan McKay for taking the time to read our paper, and for formulating helpful questions in response to it. We hope that by addressing these questions, we can clarify any aspects of our theory that might be puzzling.


1) How can we reliably distinguish exogenous and endogenous question raising?

As we said in our original post, we propose that patients ‘entertain roughly the same default questions that most people strongly associate with various external stimuli, but that they either envisage fewer alternative possible answers to these questions or raise fewer follow up questions’. Thus, according to our theory, there is a difference between exogenously raising a question, which is when the process of raising a question is the result of some external stimulus, and endogenously raising a question, which is when a patient raises a question independently of external stimuli. This sort of distinction is familiar in cognitive science. We find it, for example, in discussions of attention (i.e., endogenous attention is internally controlled and exogenous attention is externally controlled). One would presumably differentiate these experimentally in roughly the same way that one would for attention, by controlling for external stimulus.

2) If deluded individuals are selectively deficient in raising their own questions, why are they unable to fully utilize and retain questions that others raise?

In our paper, we propose that one virtue of the Erotetic Theory is that it can actually explain why many delusional individuals come to momentarily doubt their delusion in response to external questioning but then quickly fall back into it. As we note in our paper, an individual would have to take the right questions on board while evaluating their perceptual experience. Just posing more questions externally after that wrong conclusion has already been drawn may not be enough (and does not seem to be enough in practice). The study by Breen and collaborators that McKay mentions is completely consistent with this picture. One might also note that MF appears to acknowledge that his delusion is ‘strange’ during the course of a clinical interview, which we think shows that he is not ‘completely impervious to external questioning’ as McKay suggests. In any case, it is not terribly surprising that MF’s delusion is relatively insensitive to counterevidence or to general considerations of plausibility, which both seem to be general features of delusional cognition. But it is worth emphasizing that raising a question in the technical sense of the Erotetic Theory is not the same as presenting a person with some evidential considerations against something he or she believes, after a delusional conclusion has already been drawn.

3) If I encounter a person lacking the C-feature, what could lead me to speculate that the person is an imposter?

McKay seems to think that representing one’s spouse as lacking what we call a C-feature is not sufficient to speculate that the person is an imposter. We disagree. As we claim in the paper, one’s spouse is normally represented as a set of features, one of which is something like ‘intimacy’ (represented as a property of the person), or what we discuss as the ‘C-feature’ in our paper. If one’s representation lacks this, then one will answer the question: ‘who is this among the people I know?’ with ‘no one that I know’, i.e., a stranger. Notice that this means that one will have a conscious visual experience of someone who looks exactly like one’s spouse but one will also think the person is a stranger (no one that I know). That is almost a definition of an imposter.

4) If the absence of the C-feature prompts imposter speculations when I encounter someone I know, why would the absence of other relevant features (e.g., a moustache that has been shaved off) not do the same?

We suspect that people process some properties as essential and some as superficial. We suspect that there is only a serious possibility of robust misidentification if we mistakenly think that an essential property is lacking. We proposed that the C-feature, as we called it, is taken as an essential property of a close loved one, unlike, say, having a moustache. A key difference may be that you can reconcile lack of a superficial property with identity without having to envisage the possibility of misperception on your own part.

5) Why would the suppositional reasoning strategy require a capacity for endogenous-question-raising, whereas the correct logical strategy wouldn’t?

According to the Erotetic Theory, suppositional reasoning is formally defined as an inquisitive process. Nothing about being presented with a conditional reasoning task requires someone to adopt a suppositional strategy; this would require one to endogenously raise a question. As the paper demonstrates, an extremely interesting consequence of this is that suppositional reasoning generates errors in certain reasoning tasks.

6) Is the Erotetic Theory a theory of delusions, or schizophrenia, or both?

As we tried to explain in our original post, the Erotetic Theory is a theory of reasoning. In our paper, we demonstrate how this theory can both explain and predict different patterns of reasoning that we see exhibited by delusional subjects, namely the tendency to ‘jump to conclusions’ on probabilistic reasoning tasks and the adoption of the Capgras delusion in response to highly anomalous experiences. The theory also explains a pattern of reasoning exhibited by schizophrenic subjects (Mellet et al. 2006).

At this stage, it is probably best not to think of our theory as a general account of delusion or schizophrenia, but rather as a proposal about one cognitive deficit that can lead to the patterns of reasoning observed in psychiatric subjects. It may be that this deficit is strongly associated with delusion (schizophrenic subjects are highly prone to delusion), or with schizophrenia (schizophrenics exhibit the JTC bias and it is also an aetiology of Capgras delusion). We think this is an interesting question that is worth pursuing.

7) What is the equivalence between raising fewer questions and raising lesser questions (questions with fewer alternatives)?

On our regimented account, questions are sets of alternatives. Aggregating multiple sets of alternatives (i.e., multiple questions) into one larger set of alternatives, and just taking on board a question with more alternatives largely come to the same thing.
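That equivalence can be rendered in a toy form (our illustration, not the authors' formal system): if a question is just a set of alternative answers, then aggregating several questions is set union, and the result is simply one question with more alternatives.

```python
# Toy illustration only: questions represented as sets of alternative answers.
q1 = {"it is my spouse", "it is a stranger"}
q2 = {"I am misperceiving", "I am perceiving veridically"}

# Aggregating the two questions = taking the union of their alternatives,
# which yields a single larger question.
aggregated = q1 | q2

print(len(aggregated))  # 4
```

The snippet is only meant to show why "raising fewer questions" and "raising questions with fewer alternatives" come to much the same thing on this regimented account: either way, the reasoner takes on board a smaller total set of alternatives.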

8) Does the suggestion that individuals are less inquisitive add to what we know (which is that certain subjects seek less evidence when forming beliefs and making decisions)?

The Erotetic Theory gives us an explanation of why subjects seek less evidence. It also makes sense of phenomena that are not obviously captured by bare notions of “seeking less evidence”, for example the results on reasoning with conditionals.

Thursday, 7 January 2016

Report on the 2015 University of Sydney Winter School

Caitrin Donovan and Reinier Schuur report from the 2015 University of Sydney Winter School. The programme was entitled "Cross-cultural psychological differences and their philosophical implications" and seminars were led by Stephen Stich (Rutgers) and Dominic Murphy (University of Sydney).


From the 19th till the 23rd of October the unit for History and Philosophy of Science at the University of Sydney held its second winter school programme, this year on the topic of “Cross-cultural psychological differences and their philosophical implications”.

Intuitions have long been the bread and butter of various philosophical projects, which use them to evaluate semantic, epistemic, ethical and ontological theories and concepts. This can be seen in ‘Gettier cases’ in epistemology, where accounts of knowledge are tested against intuitions about what counts as knowledge, and in ‘trolley cases’ in ethics, where claims about which actions are morally justified are tested against our intuitions about those actions. Appeal to intuitions might indicate an older tradition in philosophy in which intuition is a kind of perception that ‘intuits’ certain philosophical truths. Appeal to intuitions may also reflect the project of ‘conceptual analysis’, where the goal is to capture, via our intuitions, a common-sense notion of some philosophical concept like ‘knowledge’.

The assumption behind both these projects is that our intuitions are generally shared and stable enough to count as solid evidence. But what if this assumption is incorrect? This is what Stich argues in his experimental philosophy work, where he shows that there are differences in epistemic and moral intuitions across various cultures and demographics. If such findings are verified, they could have serious implications for our appeal to intuitions as evidence in traditional philosophical projects and conceptual analysis as a methodology. It is worth asking what effect Stich’s intuition-scepticism has on the debate over the nature of mental disorder, where intuitions are often invoked as a priori constraints in determining what should count as a mental disorder.

In addition to intuition-scepticism, we also considered the extent to which empirical research challenges widely held assumptions concerning the universality of psychological processes. Though subjects from ‘WEIRD’ (Western, Educated, Industrialized, Rich, and Democratic) nations are frequently used (tacitly or otherwise) as a model for human nature, their representativeness is questionable. Henrich et al.’s critical metastudy is a notable example of this. Interestingly, the authors present evidence suggesting that the Müller-Lyer illusion is not, as previously assumed, a universal phenomenon. If basic perceptual processes are not encapsulated, but sensitive to cultural influence, culture may have a more powerful and diversifying influence on our psychological make-up than is commonly assumed.

Tuesday, 5 January 2016

Individual Differences in Cognitive Biases



This post is by Predrag Teovanović (pictured above), graduate student at the University of Belgrade. In this post he summarises his recent paper ‘Individual Differences in Cognitive Biases: Evidence Against One-Factor Theory of Rationality’, co-authored with Goran Knežević and Lazar Stankov, published in Intelligence.


If there is a minimal definition of rational behavior, it can be found here. From the normative standpoint, rational behaviour is hard (if not impossible) to maintain all the time. Hence, we satisfice by trying to optimize the boundaries of bounded rationality at the intersection of our own resources (time, information, money, and cognitive capacities) and environmental demands. Cognitive biases (CBs) emerge at that junction.

Since what defines rational behaviour depends on both environment and organism, and since specific CBs arise in different environments, it is reasonable not to expect CBs to be highly related to individual differences in organisms’ capacities and habits. In my article, I present the most forthright empirical demonstration of this expectation that I could obtain, with the great assistance of my dear colleagues Goran Knežević and Lazar Stankov.
                                   
I took an approach initiated by Keith E. Stanovich and Richard F. West (1998, 2000) and pursued it in a more psychometric way. First, I developed new measurement procedures for the assessment of individual differences in seven CBs (anchoring effect, belief bias, overconfidence bias, hindsight bias, base rate neglect, outcome bias, and sunk cost effect). These procedures were devised from conceptual definitions of the aforementioned CB phenomena and/or seminal examples of CB tasks (which are their operationalisations). On average, dozens of items were used for each CB phenomenon.
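To give a flavour of what such a measurement procedure might compute, here is a hypothetical scoring sketch (the function name and numbers are ours, not the paper's actual procedure): a subject's anchoring-effect index can be taken as the mean difference between estimates given after high anchors and after low anchors across matched items.

```python
def anchoring_index(high_anchor_estimates, low_anchor_estimates):
    """Mean high-minus-low difference across matched items; larger
    positive values indicate a stronger pull toward the anchors."""
    diffs = [h - l for h, l in zip(high_anchor_estimates, low_anchor_estimates)]
    return sum(diffs) / len(diffs)

# A subject whose estimates track the anchors scores well above zero;
# a subject unaffected by the anchors scores near zero.
print(round(anchoring_index([120, 90, 300], [60, 40, 150]), 1))  # 86.7
```

Scoring each subject on many such items, for each of the seven CBs, is what makes it possible to ask the paper's central psychometric question: whether the resulting individual-difference scores correlate enough to form a single factor.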