Thursday, 21 May 2015

Believing against the Evidence

by Miriam McCormick
This post is by Miriam McCormick, Associate Professor of Philosophy at the University of Richmond. Miriam presents her new book, Believing Against the Evidence: Agency and the Ethics of Belief (Routledge, 2015).


When I first had a student tell me that she doesn’t believe in evolution, I was at a loss for how to respond. To me, that sounded like someone telling me that she didn’t believe in gravity. It seemed both irrational and wrong. Experiences like this are common; we think that one’s actual beliefs can deviate from how one ought to believe. The dominant view among contemporary philosophers is that any belief formed against the evidence is impermissible. On such a view, which I call “evidentialism,” it is easy to diagnose what is wrong with my student’s belief. I use the term “pragmatism” to refer to the view that some non-evidentially based beliefs are permissible. A central aim of this book is to defend pragmatism. One challenge to the pragmatist view I defend is to show how we can distinguish pernicious non-evidentially based beliefs from those that are permissible.

Wednesday, 20 May 2015

Semantic Dementia and the Organization of Conceptual Knowledge

Joseph McCaffrey
In honour of Dementia Awareness Week 2015 (17th-23rd May), we have a post by Joseph McCaffrey, a graduate student in the University of Pittsburgh's Department of History and Philosophy of Science. Here Joseph summarises his recent article 'Reconceiving Conceptual Vehicles: Lessons from Semantic Dementia', published in Philosophical Psychology.


We take our concepts for granted. When you explore the world, you automatically categorize the objects around you, tapping into a bewildering array of information. You see (or hear) a sheep and instantaneously know it is a mammal, an animal, a provider-of-wool, a white fluffy thing that bleats, and much more. As a philosopher of cognitive science, I am interested in how the mind stores, accesses, and manipulates this conceptual knowledge.

In semantic dementia, a rare variant of frontotemporal dementia, patients lose conceptual knowledge in a progressive and debilitating fashion. Early on, damage to a brain region called the anterior temporal lobes causes striking semantic deficits (i.e. problems with word and object meaning), while other cognitive abilities, including speech production and autobiographical memory, remain fairly intact. At first, a patient with semantic dementia may be unable to recognize a picture of a duck, saying 'it is some kind of bird'. Later, the same patient may know only that the picture depicts some sort of animal.

My paper explores what semantic dementia means for debates about the 'vehicles' of conceptual knowledge. An old philosophical debate concerns whether concepts are reactivated sensory experiences. The British empiricists of the 17th and 18th centuries, such as John Locke and David Hume, thought of concepts as simulations of past perceptual experiences. Thinking about a hammer might involve simulating what a hammer looks like, how to swing one, and so on. On the other hand, some philosophers believe that concepts are distinct from percepts. Descartes famously argued that you can know what a chiliagon (a geometric figure with 1,000 sides) is even though it is impossible to picture one. That knowledge must be something other than a perceptual simulation.

Tuesday, 19 May 2015

Refining our Understanding of Choice Blindness

Robert Davies
This post is by Robert Davies, a PhD student at the University of York. Robert is interested in self-knowledge and memory, and particularly how the study of memory can shed light on philosophical problems in self-knowledge. 

Here is one variety of introspective failure: I make a choice but, when providing reasons, I offer reasons that could not be my reasons for that choice. Choice Blindness research by Lars Hall, Petter Johansson, and their colleagues (2005–) suggests this kind of failure is surprisingly prevalent (see e.g. Johansson et al. 2008), showing a low rate of manipulation detection and a high degree of willingness, in non-clinical participants, to offer confabulatory explanations for manipulated choices across a range of modalities and environments (see e.g. Hall et al. 2006; Hall et al. 2010).

We see ourselves as introspectively competent, rational decision-makers—capable of knowing our reasons, weighing them as reasons, and self-regulating when required—but since widespread confabulation seems at odds with this, some reconciliation with the data is required.

Some preferences are subject to shifting attention or mood, so caprice does not always elicit criticism. I can feel pulled now to tarte aux fraises and now to cheesecake, and I can attest to the virtues of both. Since liking and preferring are related attitudes, we might borrow factors in favour of one when answering questions about the other, especially if a preference is marginal, a choice is forced, or we are unexpectedly asked to articulate deciding factors in our selection. But moral choices are not like dessert choices—I do not think lying is fine because I fancy a change—and Choice Blindness has been detected in those too (see Hall et al. 2012).

Thursday, 14 May 2015

Self-knowledge for Humans

In today's post Quassim Cassam presents his recent book, Self-Knowledge for Humans (Oxford University Press, 2014). Quassim is Professor of Philosophy at the University of Warwick, UK.

Quassim Cassam

What is it about self-knowledge that makes it philosophically interesting? One familiar answer to this question is that the epistemological privileges and peculiarities of self-knowledge are what justify all the attention paid by philosophers to this topic. There is a presumption that our beliefs about our own thoughts aren’t mistaken, and knowledge of our own thoughts is neither inferential nor observational. A different answer sees the elusiveness and human importance of self-knowledge as the key. On this account, our aim as philosophers should be to understand why self-knowledge matters and explain why it is so hard to get.

These motivations for being interested in self-knowledge point in different directions. The standard examples of epistemologically distinctive self-knowledge are examples of what I call trivial self-knowledge. They include such things as knowing that you believe you are wearing socks or that London is your favourite city. Trivial self-knowledge is neither elusive nor, at least on the face of it, especially important. The contrast is with what I call substantial self-knowledge, that is, knowledge of such things as your character, emotions, aptitudes, values, and why you have the attitudes you have. Substantial self-knowledge appears to matter in ways that trivial self-knowledge does not, but it lacks the epistemological privileges and peculiarities of trivial self-knowledge. The difference between substantial and trivial self-knowledge is one of degree rather than kind, but it seems obvious that the former is relatively hard to come by whereas the latter is relatively easy. Self-ignorance at the substantial end of the spectrum is always on the cards.

Tuesday, 12 May 2015

The Crisis of Psychiatry and the Promised Neurocognitive Revolution

This post is by Massimiliano Aragona, a philosopher at the University of Rome.

The DSM-5 (American Psychiatric Association 2013) has been published in the midst of unusual controversy. Criticisms had always been advanced, but in the past the DSM system was the dominant paradigm (the 'Bible of Psychiatry'), so they were considered ‘marginal’ complaints coming from psychoanalysts, antipsychiatrists, experts in various fields worried about an excessive medicalization of human suffering, and psychopathologists concerned with the progressive abandonment of deep qualitative phenomenological analysis in favour of superficial quantitative diagnostic criteria. Such criticisms are important per se, but were largely neglected at the time.

Today it is different, because it is the credibility of the DSM itself that is in question. And, along with the DSM, it is a general way to conceive psychiatry which is in crisis: 'the neo-Kraepelinian paradigm established by Robins and Guze and institutionalised in the DSM has resulted in so many problems and inconsistencies that a crisis of confidence has become widespread' (Zachar and Jablensky 2014: 9).

Some years ago I worked on the hypothesis that the DSM nosology could be conceived as a Kuhnian paradigm (Aragona 2006; in English see partial accounts in Aragona 2009a, 2009b). I was aware that psychiatry and psychology were not, as a whole, Kuhnian paradigms, but my assumption was that such an epistemological model could apply to the subsystem of psychiatric nosology. The consequence was that several ‘concrete’ problems in psychiatric research (internal heterogeneity of mental disorders and lack of prognostic and therapeutic specificity, excessive comorbidity rates, and so on) could be modelled not as merely empirical problems but as Kuhnian anomalies. In this model, ‘anomalies’ are outputs that appear empirical but in fact depend largely upon the way the system is internally structured. In short, the general idea was that such anomalies suggested the system was entering a state of crisis, while also revealing the main intra-paradigmatic reasons for the crisis and allowing a comparison of possible revolutionary solutions.

A recent paper (Aragona 2014) compared the predictions made at the time with what is happening today.

Here I will focus on just the part of that paper concerning a possible revolutionary approach which was unpredicted at the time, but which today represents the most likely scenario for future research. This is the neurocognitive model proposed by the U.S. National Institute of Mental Health (NIMH) and called the RDoC (Research Domain Criteria) Project (Insel et al. 2010; Cuthbert and Insel 2013).