Thursday 30 June 2016

Mental Health and the Criminal Justice System: A Social Work Perspective


In this post, Ian Cummins (pictured below) introduces his new book Mental Health and the Criminal Justice System: A Social Work Perspective. The book was published in March 2016 by Critical Publishing.

I am a Senior Lecturer in Social Work based at Salford University. I trained as a probation officer and worked as a mental health social worker in Manchester before taking up academic posts. My main research revolves around the experiences of people with mental health problems in the criminal justice system. This includes all areas of the criminal justice system, but I have focused on policing and mental illness. I argue that the criminal justice system has become, in many instances, the default provider of mental health care. These issues are examined in my new book.



The book essentially explores the interaction between two of the most significant social policy developments of the past forty years - deinstitutionalisation (or the closure of large psychiatric institutions) and mass incarceration (or the expansion of the use of imprisonment). The failure to develop properly resourced community mental health systems has led to the de facto criminalisation of the severely mentally ill and the abandonment of the social state's commitment to support some of the most vulnerable members of society. The book explores the way that mental illness can be a factor in decision making across the criminal justice system, from interactions between police officers and citizens on the street to decisions about the appropriate sentences for mentally ill people convicted of offences.


My approach is influenced by Wacquant's analysis of processes of advanced marginality. In particular, I see both the failure of community care and the development of mass incarceration as part of the wider neo-liberal political project's shift towards more punitive responses to social problems. I argue that these developments are tied to the trope of individualism that is at the heart of neo-liberal political ideas.


Tuesday 28 June 2016

Failing to Self-ascribe Thought and Motion - Part II


This post is by David Miguel Gray (pictured above), currently Assistant Professor of Philosophy at Colgate University; in the Spring of 2017 he will become Assistant Professor of Philosophy at the University of Memphis. David's research interests are in the philosophy of cognitive psychology (in particular cognitive psychopathology), as well as philosophy of mind and philosophy of race and racism.

In this post David will address some theoretical issues with n-factor accounts of monothematic delusions. This post, and his previous one, will draw on his recent paper ‘Failing to self-ascribe thought and motion: towards a three-factor account of passivity symptoms in schizophrenia’, published in Schizophrenia Research.


Cognitive-level theories of monothematic delusions have become heavily discussed, in significant part due to the work of Max Coltheart, Robyn Langdon, and Martin Davies (e.g. see Davies and Coltheart 2000, Davies et al. 2001, Coltheart et al. 2007, Coltheart 2013). Theirs is a 'two-factor' theory in that it claims there are two impairments that must be explained for all monothematic delusions. The first factor that must be explained is why a delusional hypothesis is a 'prima facie reasonable response to the subject’s experience', and the second factor to be explained concerns how one can adopt and maintain a delusional hypothesis given its 'utter implausibility and the uniform skepticism with which other people greet it' (Davies and Coltheart 2000).

Leaving the second factor aside, I argue that the explanatory project involving the first factor is ill defined, in that it combines the requirement to explain cognitive abnormalities (viz. abnormal experience) with the requirement to explain the inferential process that results in a delusional hypothesis. Whereas the first of these explanatory demands counts as a ‘factor’, as defined above, the second may or may not, depending on whether the inferences that lead to the delusional hypothesis could be described as abnormal. Other n-factor theorists (e.g. one-factor theorists like Phil Corlett and two-factor theorists like Coltheart) think these inferential processes are normal and rational. I agree. So why is explaining these processes important?


Thursday 23 June 2016

Mind, Madness and Melancholia

On 10th May 2016 the Royal Society of Medicine hosted a one-day conference on Mind, Madness and Melancholia: Ideas and institutions in psychiatry from classical antiquity to the present. I attended the two morning sessions, which focused on the concepts of madness and melancholy in ancient Greece, ancient Rome, and in the golden era of Arab medicine.



The first speaker was Glenn Most (pictured below), Professor of Greek Philology at the Scuola Normale, Pisa, Italy. He started by challenging the view that there was a smooth transition from mythos to logos in Greek thought, that is, that phenomena previously regarded as mysterious and supernatural came to be explained by human reason. When it comes to mental phenomena concerning insanity, a multiplicity of methods and approaches co-existed for a long time. People who manifested strange behaviour could be 'treated' in phases: confined first, asked to try physical remedies second, conceived of as possessed by the gods and subjected to religious rituals third, and then left to simple prayer. Thus, madness could be approached medically by the ancient Greeks, but it was also given some religious and moral significance, as if being mad were a punishment by the gods.



In his dialogues Plato distinguished between illness of the body and illness of the soul, further dividing the latter into mania and melancholia. A gradual shift towards a medical approach to illnesses of the soul followed, with Hippocrates developing a humoral theory of them (according to which illnesses are caused by imbalances among elements such as yellow bile, black bile, phlegm, and blood), and Galen identifying the brain as the organ where problems arise. Although Galen studied the anatomy of the brain, his remedies were not based on such investigations, but were more akin to modern 'talking therapies'.

Tuesday 21 June 2016

The Intrasubjectivity of Self, Voices, and Delusions


This post is by Cherise Rosen (pictured above). Cherise is an Assistant Professor in the Departments of Psychiatry and Public Health at the University of Illinois at Chicago. She has conducted extensive research on issues involving the symptoms and longitudinal course of psychosis. 

Her research has focused on the phenomenological aspects of psychosis, hallucinations, delusions, metacognition, and self-disturbances. Much of her research follows mixed-methods research designs to elucidate findings that include the subjective experience. 

Additionally, her research investigates the underlying epigenetic mechanisms of psychosis. In this post, Cherise summarises her recent paper (co-authored with Nev Jones, Kayla A. Chase, Hannah Gin, Linda S. Grossman, and Rajiv P. Sharma) 'The Intrasubjectivity of Self, Voices, and Delusions: A Phenomenological Analysis', published in Psychosis. 

In our recent study, we focused on the phenomenologically complex and nuanced interrelatedness of self, voices, and delusions. We investigated the prevalence of co-occurring Auditory Verbal Hallucinations (AVHs) and delusions in schizophrenia compared to bipolar disorder with psychosis; the correlations between AVHs and forms of delusions; and whether there are distinct and identifiable sub-categories/clusters of AVHs and forms of delusions, and if so, what their symptom presentation is.

Thursday 16 June 2016

Implicit Bias and Philosophy


Today Michael Brownstein and Jennifer Saul introduce Implicit Bias and Philosophy, Volumes 1&2.

We’re Michael Brownstein and Jennifer Saul.  

Michael is Assistant Professor of Philosophy at John Jay College/City University of New York.  He works in philosophy of psychology, with emphasis on the nature of the implicit mind, and on related topics in the philosophy of action and ethics.  

Jenny is Professor of Philosophy at the University of Sheffield and Director of the Society for Women in Philosophy UK.  Her research is primarily in philosophy of language, feminist philosophy, and philosophy of race.






Back in 2009, very few philosophers were working on implicit bias. (You could probably count them on the fingers of one hand, perhaps two.) Jenny thought there was a lot of potential for philosophical work on the topic, and decided to apply for a research network grant to bring philosophers and psychologists together to work through the issues. She thought it would be great if maybe 5 or 6 more philosophers got interested in the topic.

In 2011-2012 the Leverhulme Implicit Bias and Philosophy Research Network held a series of workshops at the University of Sheffield, bringing together a fast-expanding network of philosophers and others working on implicit bias and related issues. Almost overnight, this had become one of the fastest growing areas of philosophical research.

Implicit Bias and Philosophy, Volumes 1&2 emerged from these workshops. The first volume focuses on the metaphysics and epistemology of implicit bias and stereotype threat. After an introduction by Jenny and Michael, which briefly describes what implicit biases are, how they’re measured, and so on, five chapters—by Keith Frankish, Bryce Huebner, Jules Holroyd and Joseph Sweetman, Edouard Machery, and Ron Mallon—investigate the nature of implicit attitudes and the cognitive processes underlying stereotype threat.

Then, in the second half of the first volume, seven chapters—by Louise Antony, Alex Madva, Stacey Goguen, Catherine Hundleby, Carole Lee, and Laura Di Bella, Eleanor Miles, and Jenny—address a variety of epistemological questions. These include whether implicit biases should cause us to be skeptics about our own ability to be fair and just; whether implicit biases cause us to face a dilemma between being rational and being just; whether stereotype threat causes a form of epistemic injustice; how implicit biases can create reasoning fallacies; how implicit gender biases may affect hiring and publishing in STEM fields; and how implicit and explicit stereotypes might be affecting philosophy as a profession.

Tuesday 14 June 2016

Failing to Self-ascribe Thought and Motion - Part I


This post is by David Miguel Gray (pictured above), currently Assistant Professor of Philosophy at Colgate University; in the Spring of 2017 he will become Assistant Professor of Philosophy at the University of Memphis. David's research interests are in the philosophy of cognitive psychology (in particular cognitive psychopathology), as well as philosophy of mind and philosophy of race and racism.

In this post David provides an account of the abnormal experiences and the inferential processes involved in delusions of thought insertion and alien control. Next week, he will address some theoretical issues with n-factor accounts of monothematic delusions. Both posts draw on his recent paper ‘Failing to self-ascribe thought and motion: towards a three-factor account of passivity symptoms in schizophrenia’, published in Schizophrenia Research.

In my article I focus on two well-known passivity symptoms of schizophrenia: thought insertion and alien control. Leaving aside the question of why delusional hypotheses are maintained in light of conflicting evidence (and come to be endorsed or believed by the subject) (Davies and Coltheart 2000), I focus just on the sort of explanation we need of how a delusional hypothesis is formed. I take the source of such a hypothesis to be an abnormal experience, and I take it that thought insertion and alien control involve the same kind of abnormal experience (at least in one central aspect). However, much work has to be done to provide an account of how an abnormal experience could even give rise to a delusional hypothesis such as ‘Bob is putting thoughts into my head’ or ‘I wanted to raise my arm but then Marissa did it’.

Thursday 9 June 2016

Early Career Mind Network Event, Warwick


The Early Career Mind Network (ECMN) is a new initiative that aims to establish a strong network of early career researchers in the philosophy of mind who do not yet have permanent positions in academic philosophy. On 27th and 28th April 2016 the University of Warwick hosted the first ECMN research forum, organised by Alisa Mandrigin. This blog post provides an overview of the research ideas explored by the researchers at the event.

Tom McClelland spoke first on the Grand Illusion Hypothesis (GIH). According to the GIH, we overestimate how rich visual experience is outside of focal attention. McClelland described empirical findings showing that our visual systems compensate for the limits of attention by encoding information about the average features of groups or crowds. He outlined precisely how findings relating to this ‘ensemble perception’ should be viewed as important to the GIH. He denied that the findings show that there is no Grand Illusion, but argued that they do suggest that visual phenomenology is less impoverished than the GIH might seem to imply, because our experience of things outside focal attention can be aided by ensemble perception.


 
Next Vivan Joseph asked how we should address the question of whether we can attend to an object while remaining unaware of it. He presented reasons for rejecting approaches to the question that involve a significant departure from the folk conception of attention. He argued, for example, that philosophers who appeal to experimental data about blindsight (the ability to respond to visual stimuli without being conscious of them) when trying to answer whether we can attend to an object without being aware of it beg the question against those, such as William James, who argue that attention is the concentration of consciousness.



Max Jones discussed a debate between representationalists and enactivists. Representationalists explain perception by appealing to perceptual representations, with many arguing that we perceive by reconstructing representations of our local environment. Meanwhile, enactivists argue that perception does not use representations. Jones argued for a third option: that our perception of reality is facilitated by the representation of possibilities.



Tuesday 7 June 2016

Is it Irrational to Believe in Conspiracy Theories?

Karen Douglas



This post is by Karen Douglas (pictured above), Professor of Social Psychology at the University of Kent. Karen studies the psychology of conspiracy theories and the consequences of conspiracist thought. Here she asks, is it irrational to believe in conspiracy theories?

Conspiracy theories explain the ultimate causes of events as secret plots by powerful, malicious groups. For example, popular conspiracy theories suppose that the 9/11 attacks were planned by the US government to justify the war on terror, and that climate change is a hoax coordinated by climate scientists to gain research funding. Some conspiracy theories seem outlandish to most people. For example, very few people would agree that world leaders such as Barack Obama and David Cameron are reptilian humanoids in disguise. However, many other conspiracy theories give people pause for thought. Indeed, one recent investigation showed that around 50% of Americans believe at least one conspiracy theory (Oliver and Wood 2014). Nevertheless, conspiracy theories have a bad reputation. Many view them as irrational beliefs held only by paranoid, disenfranchised members of society.

But if conspiracy theories are so popular, can belief in them really be irrational? Recent research on the psychology of conspiracy theories has attempted to answer this question. Some research suggests that beliefs in conspiracy theories can be characterised as irrational because conspiracy beliefs correlate with various psychopathological measures. For example, conspiracy belief has been linked to paranoid thinking and schizotypy (Darwin and Neave 2011), delusional ideation (Dagnall et al. 2015), and paranormal beliefs (Lobato et al. 2014).

Other research, however, has found that believing in conspiracy theories may simply be a natural by-product of the way we perceive the world around us. Specifically, people have a tendency to attribute agency and intentionality to things around them (Douglas et al. 2016), to overestimate the likelihood of events occurring together (Brotherton and French 2014), and to assume that “big” events must have had a “big” cause (Leman and Cinnirella 2007). All of these common cognitive tendencies predict the extent to which people believe in conspiracy theories. People also believe in conspiracy theories to the extent that they believe that they, personally, would be willing to conspire in similar situations (Douglas and Sutton 2011). These findings therefore suggest that believing in conspiracy theories may be a natural consequence of the way that human beings think and process information.

Other evidence suggests that belief in conspiracy theories may be a response to people’s individual circumstances. For example, conspiracy theories appear to resonate more with people who lack power (Whitson and Galinsky 2008) or feel uncertain about things that happen to them (van Prooijen and Jostmann 2013). Perhaps, if people feel that the world is stacked against them, conspiracy theories offer an explanation for their predicament. Even believing in contradictory conspiracy theories may not be completely irrational if the conspiracy theories together support an overall worldview that conspiracies are possible (Wood, Douglas, and Sutton 2012).

And conspiracies are, of course, possible. In 1972, Republican operatives broke into and bugged the Democratic National Committee headquarters at the Watergate complex. Whilst there was much suspicion about underhanded dealings taking place at the time, Nixon's involvement was not confirmed until 1974, when White House tape recordings linked him to the cover-up of the break-in. Or consider the Tuskegee Syphilis Study, in which the US Public Health Service carried out a clinical study on around 400 poor African American syphilis sufferers between 1932 and 1972. Adequate treatment was intentionally withheld so that the agency could study the course of the disease.

If we know that events like these are possible, is it really irrational to give conspiracy theories a second thought?

Monday 6 June 2016

Is Unrealistic Optimism an Adaptation?




We humans have a well-established tendency to be overly optimistic about our future: we think that the risk of bad things happening to us is lower than it is likely to be, while we think that the chance of good things happening to us is higher than it is likely to be. Why is this the case? What drives these positive illusions?

There are two possible ways in which we can understand and try to answer these questions. We can either look at the causal mechanisms underlying unrealistic optimism, or we can ask why this feature has survived and spread through human populations. Evolutionary psychology aims to answer the second question, in essence claiming that we are unrealistically optimistic because this has had benefits in terms of survival and reproduction.

So why should it be adaptive to have systematically skewed beliefs, which are frequently unwarranted and/or false? Martie Haselton and Daniel Nettle have argued that unrealistic optimism is a form of error management: it helps us make the least costly error in situations of decision making under uncertainty.

Error management theory holds that when making decisions in contexts of uncertainty, we should err on the side of making low cost, high benefit errors, and that this strategy can at times outperform unbiased decision making (cf. Haselton and Nettle 2006). This is nicely illustrated by the now well-known fire alarm analogy. If a fire alarm is set to a slightly too sensitive setting, we will have the inconvenience of having to turn it off every once in a while when the toast has burnt. If it is set to a less sensitive setting, we run the risk of burning alive in our beds because the alarm was activated too late. The over-sensitivity of the fire alarm brings only low costs (annoyance) but high rewards (reducing the risk of death).
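To make the arithmetic behind the analogy concrete, here is a minimal sketch in Python. The cost figures, the prior probability of fire, and the two detectors' error rates are all illustrative assumptions chosen for the example, not values from Haselton and Nettle; the point is simply that, when one error is far more costly than the other, a biased (over-sensitive) threshold can have a lower expected cost than an unbiased one even though it makes more errors overall.

```python
# Illustrative sketch of error management: with asymmetric error costs,
# an over-sensitive detector can beat an unbiased one on expected cost.
# All numbers below are assumptions made up for the example.

COST_FALSE_ALARM = 1      # annoyance of switching off a needless alarm
COST_MISS = 1000          # cost of failing to detect a real fire
P_FIRE = 0.01             # prior probability that smoke means a real fire

def expected_cost(p_false_alarm, p_miss):
    """Expected cost per smoke event for a detector with these error rates."""
    return ((1 - P_FIRE) * p_false_alarm * COST_FALSE_ALARM
            + P_FIRE * p_miss * COST_MISS)

# An 'unbiased' detector trades the two error rates off evenly;
# an 'over-sensitive' one accepts many false alarms to almost never miss.
unbiased = expected_cost(p_false_alarm=0.05, p_miss=0.05)
oversensitive = expected_cost(p_false_alarm=0.30, p_miss=0.001)

print(f"Unbiased detector:       expected cost = {unbiased:.3f}")    # ~0.55
print(f"Over-sensitive detector: expected cost = {oversensitive:.3f}")  # ~0.31
```

With these made-up numbers the over-sensitive detector is cheaper on average, despite producing six times as many false alarms, because the rare missed fire dominates the expected cost. That is the structure error management theory attributes to unrealistically optimistic (or, here, pessimistic-about-fire) biases.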

This model of the selectional benefits of unrealistic optimism is committed to the claim that we should only be unrealistically optimistic in situations where potential payoffs for action are high and costs of failed action are low. If individuals were unrealistically optimistic in high cost/low benefit scenarios, this would decrease their chances of survival and reproduction. Does unrealistic optimism conform to this pattern?


Thursday 2 June 2016

Mental Time Travel

In this blog post, Kourken Michaelian introduces his new book Mental Time Travel: Episodic Memory and Our Knowledge of the Personal Past.



I'm a lecturer in the philosophy department at the University of Otago. Before moving to New Zealand last year, I worked for several years at Bilkent University in Turkey and at the Institut Jean-Nicod in France. My work is mostly on memory.

Everyone -- including philosophers -- knows that memory doesn't work like a tape recorder, but philosophers have a way of forgetting this fact. When working on other topics, they often assume that it's an acceptable idealization to treat memory as if it did work that way. If they then happen to read research by psychologists or others that reminds them that remembering really doesn't work like a tape recorder -- that it's a thoroughly reconstructive process, rather than a simple reproductive one -- they react with surprise.

If remembering isn't a reproductive process, then what's the difference between remembering and imagining? If remembering is reconstructive rather than reproductive, how can it be a source of knowledge? At least, this was more or less my reaction when I began to look at the psychology of memory.