Wednesday, September 24, 2014

Some thoughts about the PGR and Brian Leiter

In academic year 2002/03, I was finishing my undergraduate degree at Rice University, and I decided I was interested in applying to grad school in philosophy. Like many undergraduate philosophy majors, I knew next to nothing about the discipline of philosophy—I just knew that I'd enjoyed my philosophy courses, and done well in them, and I wanted more. The ideal circumstance, of course, would have been if someone with intimate knowledge of a wide variety of philosophy departments sat down with me for many hours and helped me to select a number of possible good fits. That was impossible, in my case and in most cases, for many reasons. I was exactly the kind of person the Philosophical Gourmet Report was meant to help. One of my professors pointed me to it, and I used it as a starting point for my research into grad school. It was an extremely useful resource, and I would have been worse off without it. So I agree with the people who have recently written to Brian Leiter, thanking him for creating what is a useful professional service.

Since then, as I have gotten to know the profession more intimately, I have become aware of many concerns about the PGR. Some of them, I think, like the weirdly strategic way in which some departments make hires in an attempt to raise themselves in the rankings, are an accidental result of the PGR's enormous success and influence. I also recognise that there are appropriate concerns about the PGR's methodology, and that it has a tendency to amplify problematic biases about who is and isn't a good philosopher, and what is and isn't a 'core' area of philosophy. I understand why some philosophers think that the PGR does more harm than good. But I do think that it fills what continues to be a genuine need in the profession. I don't really have better advice for a student taking the first steps toward deciding where to apply to grad school than to look at the PGR. Unless and until there is a better source of information available, the PGR remains useful and important.

But the other thing that I have come to realise, as I have gotten to understand the workings of professional philosophy better, is that Brian Leiter has a tremendous influence in the profession, in significant part because of his role as founder and editor of the PGR. And while he often channels his influence in what I consider to be positive directions, he also has engaged in a harmful pattern of bullying and silencing of those who disagree with him. If he were 'just any' philosopher saying mean things about people, this would be rude (and, in my view, unacceptable) but only marginally harmful. But in a culture in which philosophers are afraid to voice dissent against such a powerful individual, the harm is magnified tremendously. I do not think that Leiter himself understands the stifling and silencing effect that his words have on the less powerful people in the profession. In the most recent high-profile instance I have in mind, as most readers will already know, the target was my wife, Carrie Jenkins. Carrie wrote a widely celebrated statement, in wholly general terms, about the importance of philosophers treating each other respectfully. Brian Leiter—who had not previously been in correspondence with Carrie—interpreted this as a criticism of him personally, and wrote Carrie an insulting email, which had significant stifling and intimidating effects. In my opinion, this is not only unacceptable behaviour, but an abuse of the powerful position that Leiter finds himself in. And although the situation with Carrie is the one I am the most familiar with, it seems clear from discussions with others that this kind of bullying, silencing behaviour represents a pattern. That is why I have signed on to this statement (update: here), publicly declaring that I will not assist in the production of the PGR while it is under Brian Leiter's control. 
I am an untenured junior member of the profession, and have never been asked to contribute to the PGR, but I consider public statements like this important, especially in this context where fear of becoming the object of a negative Leiter campaign is so prevalent. It is important that other philosophers see that if they take a stand, they will not be alone. I am happy to see that many much more prominent philosophers than I—including at least one person who was on the PGR advisory board last week—have also signed.

I remain ambivalent about the PGR itself. As indicated above, I think it plays an important role. Perhaps something else could play that role in a better way, but unless and until such something exists, I think that the PGR itself does good. But in the status quo, where it makes everyone afraid of Brian Leiter, there is serious harm that comes along with that good. It is time for that harm to stop. The best solution for now would be for the PGR to proceed without its founder.

Saturday, August 30, 2014

Pritchard on pragmatics of knowledge ascriptions

I'm working on a review of Duncan Pritchard's book Epistemological Disjunctivism. I'll probably try out a few ideas here over the next couple of months. I want to start out by focusing on something from near the end of the book—§8 of Part III. Here, Duncan is trying to deal with what he considers to be a challenge to the particular form of neo-Moorean disjunctivist response to the skeptical paradox he's been developing. The salient element of the view is that, contrary to skeptical intuitions, one does typically know that e.g. one is related in the normal way to the world, rather than being a brain in a vat. This, even though one lacks the ability to discriminate perceptually between being related in the normal way to the world and being a brain in a vat.

The challenge Duncan considers in this section is that Moorean assertions like "I know I'm not a brain in a vat" seem conversationally inappropriate. As he puts it earlier in the book,
[T]here appears to be something conversationally very odd about asserting that one knows the denial of a specific radical sceptical hypothesis. That is, even if one is willing to grant with the neo-Moorean that one can indeed know that one is not, say, a BIV, it still needs to be explained why any explicit claim to know that one is not a BIV (i.e., 'I know that I am not a BIV') sounds so conversationally inappropriate. Call this the conversational impropriety objection. (115)
The answer Duncan gives to this challenge in §8 ("Knowing and Saying That One Knows") is that the Moorean claims in question, in the contexts under consideration, generate false conversational implicatures to the effect that one has the relevant discriminatory abilities:
[I]n entering an explicit knowledge claim in response to a challenge involving a specific error-possibility one is not only representing oneself as having stronger reflective accessible grounds in support of that assertion than would (normally) be required in order to simply assert the target proposition, but also usually representing oneself as being in possession of reflectively accessible grounds which speak specifically to the error-possibility raised. (142)
I tend to be suspicious of pragmatic explanations for infelicity that don't come along with systematic explanations. Grice tells nice stories about how his maxims predict particular implicatures, given various contents asserted. What is Duncan's explanation for why first-person knowledge assertions implicate that one has the perceptual capacity to discriminate the state of affairs claimed to be known from alternatives that have been mentioned? Let's take an example, adapted from one of Duncan's (p. 146 -- one of his "unmotivated specific challenge" cases):

  • Zula: [looking at some zebras in the zoo] There are some zebras over there.
  • Asshole: They look like zebras. But who knows? Maybe they're cleverly disguised mules.
  • Zula: I know that they're zebras.

Duncan's view is that Zula's last utterance is true but unassertable—unassertable because it implicates falsely that Zula can discriminate perceptually between zebras and cleverly disguised mules. But why does it implicate that, if it doesn't entail it? I can't see how any of Grice's maxims would generate the implicature in this case. Without some kind of story about where the implicature comes from, the suggestion that any impropriety comes down to pragmatics looks suspiciously ad hoc.

Notice also that certain predictions of the pragmatic explanation do not seem to be borne out. Since Duncan's story depends essentially on the implicatures involved in Zula's assertion, it does not extend to knowledge attributions that Zula doesn't assert. For example, it does not extend to Zula's unasserted thought in this case:
  • Zula: [looking at some zebras in the zoo] There are some zebras over there.
  • Asshole: They look like zebras. But who knows? Maybe they're cleverly disguised mules.
  • Zula: [thinking to herself] What an asshole. I know that they're zebras.
Zula's thought won't mislead Asshole or anybody else, so Duncan's story can't show why it's inappropriate. But it seems intuitively problematic in the same way her original assertion is. Similarly, there seems to be impropriety about Moorean assertions in third-personal contexts where one won't mislead. Suppose that you and I know full well that Zula can't tell the difference between a real zebra and a fake zebra; we also know full well that she is looking at a real zebra right now. Consider this:
  • Zula: [looking at some zebras in the zoo] There are some zebras over there.
  • Asshole: They look like zebras. But who knows? Maybe they're cleverly disguised mules.
  • Me: [to you, out of earshot of Z and A] Zula knows that they're zebras.
My assertion seems problematic in the same way Zula's original one does; but I do not mislead anyone. (We could also consider, for this point, a version of the first-personal case where it is stipulated to be common knowledge that Zula lacks the discriminatory ability in question.)

Here is one more observation about the case. Suppose nobody says anything about knowledge, as in this variant:
  • Zula: [looking at some zebras in the zoo] There are some zebras over there.
  • Asshole: They look like zebras. But maybe they're cleverly disguised mules.
  • Zula: They are zebras.
Insofar as I can feel the force of Duncan's suggestion that Zula's original final utterance—'I know that they're zebras'—implicates that she has special abilities to rule out fakes, I think the same applies here. But if so, I think that this may show that even if Duncan has identified something wrong with the knowledge assertion, he hasn't identified everything wrong with it. For we have no inclination whatsoever to think that Zula speaks falsely in asserting, even in the face of the skeptical challenge, that there are zebras. The case is very different for her self-ascription of knowledge. The intuition is not merely that she shouldn't say she has knowledge; it's that she doesn't. (Indeed, I think the intuition is that it'd be fine for her to assert that she doesn't have knowledge.) Since there seems to be a special phenomenon about knowledge ascriptions, the pragmatic story will only work if it is particular to knowledge ascriptions. But I don't think it is; once the challenge has been made, an outright assertion of the proposition that was challenged does—so far as I can tell, in exactly the same way a bare knowledge ascription does—in some sense convey that one has the ability to answer the challenge.

More thoughts on more central elements of Duncan's very interesting book to follow. I started here for the simple reason that it was freshest in my mind when I finished the book today.

Tuesday, April 29, 2014

More on the well of knowledge norms

Dustin Locke has published a response to my Thought article, "Knowledge Norms and Acting Well". My paper (draft here) argued that lots of counterexample-based arguments against knowledge norms of practical reasoning take a problematic form: generating a case where it seems like S knows that p, but where it seems like S is not in a strong enough epistemic position to phi. These verdicts together tell us nothing interesting unless we assume some story about the relationship between p and phi; but defenders of knowledge norms needn't and shouldn't accept many such relationships.

For example, in Jessica Brown's widely-discussed surgeon case, it is thought to be intuitive that before double-checking the charts, (a) the surgeon knows that the left kidney is the one to remove; but (b) the surgeon ought not to operate before double-checking the charts. This is only a problem for the idea that one's reasons are all and only what one knows if the proposition the left kidney is the one to remove would, if held as a reason, be a sufficient reason to justify operating before double-checking the charts. But why should one think that?

Dustin resists my argument at several points. I'm not sure what to say in response to many of them; I think they're helpfully clarifying the sources of disagreement, but they don't make me feel any worse about my point of view. For example, Dustin seems to be happy to rest on certain kinds of very theoretical intuitions, like the intuition that the surgeon isn't justified in using the proposition that the left kidney is the bad one as a reason that counts in favour of removing the left kidney immediately. I don't have this intuition, and I wouldn't want to trust it if I did. I feel pretty good about intuitions about what actions are ok in what circumstances, but deeply theoretical claims like these don't seem to me to be acceptable dialectical starting places.

In what I found to be the most interesting part of his paper, Dustin also constructs a version of Brown's surgeon case where, if one assumes that (a) a Bayesian picture of practical rationality is correct and (b) practical reasons talk translates into the Bayesian talk by letting one conditionalize on one's reasons, we can derive the intuition mentioned above. I think that both of these assumptions are very debatable, but I also think that the case Dustin tries to stipulate is more problematic than he assumes. He offers the following stipulations:
  1. The surgeon cares about, and only about, whether the patient lives.
  2. The surgeon has credence 1 that exactly one of the patient's kidneys is diseased, and a .99 degree of credence that it is the left kidney.
  3. If the surgeon performs the surgery without first checking the chart, she will begin it immediately; if she first checks the patient's chart, she will begin the surgery in one minute.
  4. The surgeon has credence 1 that were she to check the chart, she would then remove the correct kidney.
  5. If the patient has the correct kidney removed during the operation, then there are the following probabilities that he will live, depending on how soon the surgery begins: (5a) If the surgery begins immediately and the correct kidney is removed, there is a probability of 1 that the patient will live; (5b) If the surgery begins in one minute and the correct kidney is removed, there is a probability of .999 that the patient will live.
  6. If the patient has the wrong kidney removed during the operation, then the probability that he will live is 0.
(This list is quoted directly.) I have two worries. First, Dustin also says of the case that "it's quite plausible that the surgeon knows that the left kidney is diseased", and assumes that she does. But this requires a very substantive epistemological and psychological assumption about the relationship between credence and knowledge. It is not at all innocent to assume that knowledge is consistent with non-maximal credence like this. For lottery-related reasons, Dustin is probably committing himself to the denial of multi-premise closure here. (Indeed, for reasons like the ones Maria Lasonen-Aarnio has emphasized, he may very well commit himself to denying single-premise closure.) That's not a completely crazy thing to end up being committed to, but I think it substantially mitigates the rhetorical force of an argument against me here. Similarly, there are probably good reasons to deny that the surgeon outright believes that the left kidney is diseased under these circumstances, either for conceptual/metaphysical reasons (see e.g. Brian Weatherson's "Can we do without pragmatic encroachment" or Roger Clarke's "Belief is credence one (in context)") or for psychological reasons (e.g. Jennifer Nagel's "Epistemic anxiety and adaptive invariantism"). If any of these views is right, then Dustin is committed to knowledge without outright belief.
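To make the utility calculus behind Dustin's version of the case concrete, here is a minimal sketch of the expected-value comparison his stipulations generate. (The function names, and the assignment of utility 1 to the patient's living and 0 otherwise, are my assumptions for illustration; they are not in Dustin's text.)

```python
# Expected utility of the surgeon's two options, under Dustin's stipulations,
# with utility 1 if the patient lives and 0 otherwise (my assumption).

def eu_operate_now(cr_left_diseased):
    # Stipulations 5a and 6: if surgery begins immediately, the patient
    # lives with probability 1 if the correct (left) kidney is removed,
    # and probability 0 if the wrong one is.
    return cr_left_diseased * 1.0 + (1 - cr_left_diseased) * 0.0

def eu_check_first():
    # Stipulation 4: checking the chart guarantees removing the correct
    # kidney; stipulation 5b: the one-minute delay drops the survival
    # probability to .999.
    return 1.0 * 0.999

# Before conditionalizing: stipulation 2 gives credence .99 that the
# left kidney is the diseased one.
print(eu_operate_now(0.99))   # 0.99 -- checking first (0.999) comes out ahead
print(eu_check_first())       # 0.999

# After conditionalizing on the content of her putative knowledge that
# the left kidney is diseased, the comparison flips:
print(eu_operate_now(1.0))    # 1.0 -- operating immediately comes out ahead
```

So on Dustin's two assumptions, the surgeon who treats her knowledge as a reason should operate immediately, while the unconditionalized credences favour checking first; this is the derivation of the intuition I mention above.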

My second worry concerns stipulation number 1: this is a surgeon who cares only about the life of the patient. From a realistic point of view, this is a very strange surgeon. According to Dustin's stipulations, the surgeon cares nothing at all about any of the following: whether she follows hospital procedure; whether she sets a good example for the students observing; whether she acts only on propositions that she knows; whether she is proceeding rationally. These strong assumptions are not idle; if we allow that she cares about any of these things, the utility calculus will not require her to go without checking, even when she conditionalizes on the content of her knowledge that the left kidney is diseased. (Suppose she cares about whether she acts only on that which she knows, and that she doesn't know whether she knows; then there is a substantial risk of the negative outcome of acting on something she doesn't know.) But these very strange assumptions will make our intuitions harder to trust. When we try to imagine ourselves in her position, we naturally assume she cares about the ordinary things people might care about. Stipulating that she only cares about one thing—not even mentioning the many other things we have to remember to disregard—makes it very hard to get into her mindset. So I'm inclined to mistrust intuitions about so heavily-stipulated a case.

Tuesday, January 14, 2014

Diary of a Narcissist

This is a recent diary entry by Reginald, a confused narcissist. 
Dear Diary,
I am perturbed. As you know, I've long thought that, if I'm not perfection itself, I must at least be the next best thing to it. I thank Providence every day for so far elevating me above the common man. It is no exaggeration to say that hitherto, I have counted myself among the very most beautiful and significant people in the world. But today I received a terrible shock. While searching the internet for further discussions of me, I happened across a paper by a philosopher called David Kaplan. What I found there shook my deepest convictions to the core. Kaplan argues that certain words—'demonstratives' or 'indexicals', he calls them—are context sensitive; that is to say, the referent of these terms can vary according to the conversational context in which they're used. My first thought, on reading this, was that it seemed like an interesting and plausible semantic claim. The referent of the word 'that', for example, is simply whatever it is at which my flawless finger happens to be pointing when I speak.
But that isn't all.[*] It's one thing to recognise the general semantic framework—it's quite another to make particular entries in the list of context-dependent terms. Among Kaplan's list of context-dependent terms are the very dearest and most important to me! He includes on his list, for example, such touchstones as 'I' and 'me'! Can you imagine, diary? I—Reginald the all-right—dependent on such contingencies as conversational contexts? Never in my wildest dreams would I have imagined that anyone would so trivialise me. Needless to say, I am deeply shaken. Can I really accept that I am so unimportant? That there is nothing special about me, but rather that I'm just whoever happens to be speaking in a given conversation? The thought terrifies me. Tomorrow I shall read works by Gareth Evans and Christopher Peacocke to see if they might restore me to the glory I thought I deserved.

Friday, December 27, 2013

New Paper: "Hybrid Virtue Epistemology and the A Priori"

Ben Jarvis and I have completed a draft of a new paper, "Hybrid Virtue Epistemology and the A Priori". Abstract is below, pdf is here, and comments are welcome!
Abstract. How should we understand good philosophical inquiry? Ernest Sosa has argued that the key to answering this question lies with virtue-based epistemology. According to virtue-based epistemology, competences are prior to epistemic justification. More precisely, a subject is justified in having some type of belief only because she could have a belief of that type by exercising her competences. Virtue epistemology is well positioned to explain why, in forming false philosophical beliefs, agents are often less rational than it is possible to be. These false philosophical beliefs are unjustified—and the agent is thereby less rational for having them—precisely because these beliefs could not be formed by exercising competences. But, virtue epistemology is not well positioned to explain why, in failing to form some true philosophical beliefs, agents are less rational than it is possible to be. In cases where agents fall short by failing to believe philosophical truths, the problem is not that they have unjustified beliefs, but that they lack justified ones. We argue that Timothy Williamson's recent critique of the a priori/a posteriori distinction falls prey to similar problem cases. Williamson fails to see that a type of belief might be a priori justified if and only if, even without any special confirming experiences, agents fall short by failing to have this type of belief. We conclude that there are types of beliefs that are deeply a priori justified for any agent regardless of what epistemic competences the agent has. However, we also point out that this view has a problem of its own: it appears to make the acquisition of a priori knowledge too easy. We end by suggesting that a move back towards virtue-based epistemology is necessary. But in order for this move to be effective, epistemic competences will have to be understood very differently than in the reliabilist tradition.

Saturday, October 05, 2013

Jessica Brown on evidence and luminosity

In "Thought Experiments, Intuitions, and Philosophical Evidence," Jessica Brown introduces a problem for "evidence neutrality" deriving from Williamson's anti-luminosity arguments: evidence neutrality implies that if S has E as evidence, it is always possible for S's community to know that E is evidence, which entails the false claim that evidence is luminous. Sounds ok. Then she writes this puzzling passage:
We might wonder whether we could overcome this first problem by weakening the content element of evidence neutrality. Instead of claiming that if p is part of a subject’s evidence, then her community can agree that p is evidence, the relevant condition could be weakened to the claim that her community can agree that p is true. Although this revised version of the evidence-neutrality principle avoids Williamson’s objection that one is not always in a position to know what one’s evidence is, it faces an objection from Williamson’s anti-luminosity argument. Williamson claims to have established that no nontrivial condition is luminous, where a condition is luminous if and only if for every case a, if in a C obtains, then in a one is in a position to know that C obtains (2000, 95). There is not space here to assess the success of Williamson’s anti-luminosity argument. However, assuming that it is successful, it seems that no mere tinkering with the content element of evidence neutrality will suffice to defend it.
I'm just not seeing the problem here. The proposal we're considering is this: any time S has E as evidence, S (and/or S's community) is in a position to know that E is true. But this does not imply that any non-trivial condition is luminous. The claim that evidence is luminous would need knowledge that E is evidence on the right-hand side; the claim that truth is luminous would need no restriction to evidence on the left-hand side. Saying that evidence requires being in a position to know truth looks wholly consistent with Williamson's anti-luminosity argument. Indeed, setting aside the role of the community -- which as far as I can tell is idle in the argument Brown is considering -- it follows trivially from Williamson's own view, E=K. Notice that S's knowing that p entails that S is in a position to know that p is true; this is no violation of anti-luminosity.
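The structural point can be put schematically. (This formalization is mine, not Brown's or Williamson's; I write 'Kpos' for 'is in a position to know' and 'Ev(S)' for S's evidence.)

```latex
% Luminosity of a condition C (Williamson 2000, 95): in every case a,
% if C obtains in a, then in a one is in a position to know that C obtains.
\forall a \, \bigl( C(a) \rightarrow \mathrm{Kpos}_a(C \text{ obtains}) \bigr)

% The weakened neutrality principle under discussion:
p \in \mathrm{Ev}(S) \rightarrow \mathrm{Kpos}_S(p)

% Luminosity of evidence would instead put 'p is evidence' in the scope
% of Kpos on the right-hand side:
p \in \mathrm{Ev}(S) \rightarrow \mathrm{Kpos}_S\bigl(p \in \mathrm{Ev}(S)\bigr)

% Luminosity of truth would drop the evidence restriction on the left:
p \rightarrow \mathrm{Kpos}_S(p)
```

The weakened principle matches neither of the last two schemata, which is why I don't see how the anti-luminosity argument gets a grip on it.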
Anybody see what I'm missing?

Monday, July 08, 2013

The Rules of Thought: Fregean mental content

I posted a couple of days ago about one of the three main hooks into The Rules of Thought -- an explanation and theory of the a priori. Today I'll write about another -- a theory of mental content. Again, I'm just being completely shameless here and talking about why you might be interested in our book. Please skip if you find that sort of thing distasteful.

In our book, Ben Jarvis and I defend a Fregean theory of mental content. We hope that it does three things: it provides the best resolution to (the most interesting version of) Frege's puzzle; it has a plausible story to tell about the relationship between Fregean senses and the psychological states that constitute propositional attitudes; and it is able to underwrite the epistemology of the a priori. We came to our work on mental content via the epistemology, but we consider this latter project independently motivated and foundational. Our treatment of mental content comprises Part I of the book, and we hope that the book is as much a contribution to mental content as it is to epistemology.

Consider these two propositions:

  1. Some roses are red.
  2. Some roses have a colour.
Here's a very natural idea: it's part of the essence of these two propositions that (1) entails (2). There are lots of ways one might fill this out, but it's very natural to say that part of what makes proposition (1) the proposition that it is is that any time it is true, (2) is also true. We take this natural idea and carry it a step further. Not only do propositions have truth conditions necessarily and essentially; they also have rational acceptance conditions necessarily and essentially. Part of what makes (1) and (2) the propositions that they are is that they stand in a particular rational relationship to one another. In particular, (1) rationally entails (2), in addition to metaphysically entailing it.

We call these rational entailment relations Fregean senses. You can think of Fregean senses in our sense as a kind of truth conditions. If you're comfortable thinking this way, they're equivalent to sets of 'rationally possible worlds' (where there are some of the latter in which, e.g., Hesperus is not Phosphorus). Fregean senses encode what a content rationally commits one to.

Our unstructured Fregean senses constitute a departure from Fregean orthodoxy, which would have structured senses. This is motivated in significant part by the kinds of considerations I discussed in this post last week. We think there is an important theoretical role to be played by such unstructured entities, because the notion of rational commitment is fundamental to our story about mental content. (Of course, we also believe in more structured counterparts -- these, we call 'propositions'. Naturally, there are many ways to apply labels in this neighborhood; we try to justify our terminological choices, but the possibility for superficial disagreement here is significant.)

You need Fregean senses, we think, for basically the same reason Frege thought: to account for Frege cases. On our view, however, the most fundamental category of Frege cases isn't about the possibility of informativeness, or the explanation for certain kinds of behaviour. Frege's puzzle is ultimately a puzzle about rationality. If I believe that Hesperus is a star, and then I learn that Hesperus is a planet, I face rational pressure to revise my previous belief. This wouldn't be so if I learned instead that Phosphorus is a planet. None of the neo-Russellian views out there, we argue, can explain this fact. We explain it very straightforwardly: HESPERUS and PHOSPHORUS are different contents, which carry different rational relations.

(This is a view about the metaphysics of attitudes, not about the semantics of attitude reports. As we explain in the book, our view is consistent with a lot of views -- including neo-Russellian ones -- about the latter.)

Rational commitments, on our story, are primitive and fundamental. Chapter 5 of our book draws an analogy between our way of thinking about senses and Timothy Williamson's suggestion to put knowledge 'first'. We think it is a mistake to seek substantive explanations for why certain rational entailments obtain between certain contents. This move might lead some to suspect us of shrugging off the most fundamental questions, but this isn't necessarily the case. True enough, calling senses fundamental is in some sense a way of moving the bump in the carpet somewhere else. But we have a lot to say about its new location: the psychological realisation of Fregean sense.

If you spot us the suggestion that there are some abstract entities called 'propositions' that have inherent and essential rational relations with one another, a major open question becomes: how is it that we humans manage to stand in any kind of significant relations to these obscure entities? This is among the most central questions in Part I of our book. A nice and convenient answer, were it true, would be the familiar conceptual role theorist's answer: contents can be characterised by particular inferential roles, and a subject thinks thoughts with those contents by virtue of dispositions to infer according to those special roles. (This should remind you of Christopher Peacocke.) Unfortunately, as people like Quine and Williamson have shown, this nice and convenient answer isn't true. We need a more complicated story.

Ben and I agree with Peacocke that there are certain privileged inferential roles that play a special, content-fixing role. The inference from "is red" to "is coloured" is special in a way that the inference from "is red" to "looks at least a bit like sriracha" is not. But we don't think that this special inference need be encoded at all directly in the dispositions of any subject who possesses the concept RED. Instead, we suggest that these special inferences have a privileged teleo-normative, rather than dispositional, status. Part of what it is to possess the concept RED is to be such that the inference to COLOURED is proper or correct. Part of what makes a football player a goalie is that she is supposed to prevent the ball from going into the net; it is partly in virtue of her behaviour that she is subject to this norm. But it's not a requirement that she be very good at her job.

In a closely analogous way, we think that there are rules of thought. Part of what it is to think is to be subject to certain rational norms; for example, the norm that one should infer (2) from (1). Subjects constitute thinkers partly in virtue of their behaviour and dispositions, but in a way that doesn't guarantee a particularly high level of compliance. According to the story of the book, subscription to particular rules emerges in virtue of the best systematisation of the myriad first-order dispositions to apply concepts in various ways. I can't go into much more detail in this blog post, but a different kind of analogy might help get the approach in mind. Imagine a wooded area, with various significant locations along the perimeter. People need to get from place to place, via the woods, and at first, it's pretty arbitrary what route they take. They don't all just go in a straight line, because some parts of the woods are easier to walk through than others. Over time, paths emerge. Lots of factors influence which paths come to exist -- which destinations are most important, the natural lay of the land, which routes already exist, etc. But once there are paths, there are, in some sense, correct ways to get through the woods. This path is the way you're supposed to go. This, even though nobody ever laid down the law; the path emerged over time as the product of lots of other more arbitrary activity. There's lots more to say about how this could work -- and there are many respects in which the analogy is imperfect -- but I hope that this gives at least a rough idea of the teleo-normative inferential roles that we discuss in the book.

(It is worth noting that an implication of the approach is that we need not construe contents individualistically. We're entirely open to the idea that contents are public, and the best systematisation of first-order dispositions occurs at a broader social level. If this is right, our view implies that rationality, like meaning, ain't in the head. That's fine with us.)

I'll write one more post about the third hook into the book -- consideration of the role of intuitions in epistemology -- soon.