Monday, October 20, 2014

High School Popularity: A Modest Proposal

One of the difficult things about high school is figuring out which people to be friends with. After all, it can make a big difference in your life! Being friends with popular kids is a good way to become more popular yourself. Plus, your teachers might treat you better, you'll get more cool stuff (from being friends with the popular kids, who are also often the richer ones), etc. In the status quo, however, freshmen often enter high school without a very clear idea of who the popular kids are, so they're aiming their friendship aspirations pretty haphazardly.
But here's an idea for an enterprising popular kid to provide an invaluable service to everybody. He gets a group of his friends together, and they rate everybody in the school (or at least everyone they think is at least minimally popular) for popularity. Then he can make the results known to the whole school, free of charge! Now everyone worth thinking about trying to be friends with comes along with a numerical popularity rating. Sure, everybody's going to try to be friends with that one girl who was already a 4.8, and she doesn't have the time or energy to be friends with everyone, but since she's the most popular, it makes sense for her to be able to be the most selective about choosing her friends. And aren't the best candidates for friends the ones who deserve to have access to the most popular kid's friendship?
Now I'm the first to admit this system won't be perfect. The popular kids are likely to get ever more popular, since everyone will know that they're the people to try to be friends with. And of course everyone will have some incentive to be friends with that one kid who started the rating system, and with the raters who make up his circle of friends. (The latter can be mitigated somewhat if that first kid occasionally makes changes to the roster of kids who do the ratings.) So yeah, maybe there are better possible systems. But in the status quo, people are just trying to guess who's most popular by asking a couple of people or -- even more unfairly -- by judging by superficial cues like race and attractiveness and athletic ability. How is that fair?

Friday, October 10, 2014

Introspective and Reflective Distinguishability

Mooreans, including neo-Mooreans, think that we know lots of ordinary stuff, and that we also—maybe on this basis—know the denials of extraordinary skeptical scenarios. Duncan Pritchard defends a particular disjunctivist brand of neo-Mooreanism, according to which, in cases of successful perception, one has reflective access to factive reasons of the form I see that p, and perceptual knowledge based on such reasons. So for instance, when one looks at a red wall under ordinary circumstances:

  • One sees that the wall is red.
  • One has reflective access to the fact that one sees that the wall is red.
  • One knows that the wall is red on the basis of the fact that one sees that the wall is red.
Since Duncan also accepts a closure principle on knowledge, he accepts:
  • One knows that the wall isn't a white wall illuminated by red light.
Like all forms of Mooreanism, Duncan's view is in tension with certain skeptical intuitions. For example, it is in tension with this intuition:
(S) One can't tell by introspection that one is faced with a red wall rather than a white wall with red light.
As Duncan puts it,
If, in the non-deceived case, one has reflective access to the relevant factive reason as epistemological disjunctivism maintains, then why doesn't it follow that one can introspectively distinguish between the non-deceived and deceived cases after all, contrary to intuition? ... In short, the problem is that it is difficult to see how epistemological disjunctivism can square its claim that the reflectively accessible reasons in support of one's perceptual knowledge can nonetheless be factive with the undeniable truth that there can be pairs of cases like that just described [ordinary perceptual cases and corresponding deceptions] which are introspectively indistinguishable. (21)
(Duncan defines 'introspective indistinguishability' as the inability to know by introspection alone that the cases are distinct. (p. 53))

If I wanted to be a neo-Moorean of broadly Duncan's style (something I might well want to do), I'd just deny S, along with the many other skeptical intuitions that come out false on this view. But Duncan doesn't want to go that way; as this passage indicates, he considers S and claims like it to be 'undeniable truths'. (On p. 92 he even says that disjunctivists in particular are "unavoidably committed to denying that agents can introspectively distinguish" between the relevant cases.) I confess I don't see why it's so important to hold on to this particular skeptical intuition while happily rejecting others, such as the intuition that an ordinary person at the zoo doesn't know that she isn't looking at a cleverly disguised mule.

How does Duncan go about resolving the tension between his disjunctivism and S? By leaning on the 'by introspection' qualifier. He does think that, if one is in the good case, one can reason thus, resulting in knowledge of the conclusion: "I have factive reason R. Only in the good case would I have factive reason R. Therefore, I'm in the good case." But, he says, this is consistent with intuitions like S, which are about introspective abilities. And while one may be able to tell by introspection what reasons one has, one cannot tell by introspection that factive reasons obtain only in the good cases. This is something one can come to know by a priori reflection, but not by introspection. (And maybe the same goes for the epistemic standing of the inference from the two premises to the conclusion.)

This is ultimately a much milder concession to skeptical intuitions than it at first appeared. Although he preserves the letter of his interpretation of the claim that we can't introspectively distinguish the good cases from the bad cases, he does so by pointing out that "introspectively" is a stronger qualifier than one might have realised. He does think (p. 95) that one can reflectively distinguish between good and bad cases, where reflective distinguishability is the ability to know the cases to be distinct based on a combination of introspection and a priori reasoning.

So two thoughts. First the smaller one: is it really right to exclude a priori reasoning from the considerations that establish 'introspective distinguishability'? It's very hard for me to even make sense of just what that constraint is. (In The Rules of Thought, Ben and I argue that we can't divorce any kind of thought from a priori reasoning.) Consider these two cases: (1) I am presented in ordinary circumstances with a blue ball. (2) I am presented in ordinary circumstances with a black ball. Given the way my perceptual faculties work, we should consider these cases to be distinguishable in the relevant sense if any are. But is it clear that I can know them to be distinct without using a priori reasoning? It's not like the proposition that they're distinct is made available to me directly via introspection. Instead, I have introspective access to how one case looks, and to how another case looks, and I observe that they're different. From this I infer, using something like Leibniz's law, that they're distinct.

Second, supposing Duncan is right about introspective distinguishability: maybe this just shows that the worry wasn't properly articulated in the first place. I submit that someone motivated by the kinds of skeptical pressures that would drive someone to say that you can't tell good cases and bad cases apart by introspection isn't going to feel better if you allow a priori reasoning along with introspection. The key skeptical intuition in the first place was just that it shouldn't be that easy to tell the good cases and the bad cases apart. And there's no getting around it: that's just an intuition that disjunctivists need to deny. Once we come to appreciate this fact, I'm not sure how important it is to conform to the letter of certain idiosyncratic statements of the intuition.

Wednesday, September 24, 2014

Some thoughts about the PGR and Brian Leiter

In academic year 2002/03, I was finishing my undergraduate degree at Rice University, and I decided I was interested in applying to grad school in philosophy. Like many undergraduate philosophy majors, I knew next to nothing about the discipline of philosophy—I just knew that I'd enjoyed my philosophy courses, and done well in them, and I wanted more. The ideal circumstance, of course, would have been if someone with intimate knowledge of a wide variety of philosophy departments sat down with me for many hours and helped me to select a number of possible good fits. That was impossible, in my case and in most cases, for many reasons. I was exactly the kind of person the Philosophical Gourmet Report was meant to help. One of my professors pointed me to it, and I used it as a starting point for my research into grad school. It was an extremely useful resource, and I would have been worse off without it. So I agree with the people who have recently written to Brian Leiter, thanking him for creating what is a useful professional service.

Since then, as I have gotten to know the profession more intimately, I have become aware of many concerns about the PGR. Some of them, I think, like the weirdly strategic aspect with which some departments make hires in an attempt to raise themselves in the rankings, are an accidental result of the PGR's large success and influence. I also recognise that there are appropriate concerns about the PGR's methodology, and that it has a tendency to amplify problematic biases about who is and isn't a good philosopher, and what is and isn't a 'core' area of philosophy. I understand why some philosophers think that the PGR does more harm than good. But I do think that it fills what continues to be a genuine need in the profession. I don't really have better advice for a student trying to take the first steps to think about where to apply to grad school than to look at the PGR. Unless and until there is a better source of information available, the PGR remains useful and important.

But the other thing that I have come to realise, as I have gotten to understand the workings of professional philosophy better, is that Brian Leiter has a tremendous influence in the profession, in significant part because of his role as founder and editor of the PGR. And while he often channels his influence in what I consider to be positive directions, he also has engaged in a harmful pattern of bullying and silencing of those who disagree with him. If he were 'just any' philosopher saying mean things about people, this would be rude (and, in my view, unacceptable) but only marginally harmful. But in a culture in which philosophers are afraid to voice dissent against such a powerful individual, the harm is magnified tremendously. I do not think that Leiter himself understands the stifling and silencing effect that his words have on the less powerful people in the profession. In the most recent high-profile instance I have in mind, as most readers will already know, the target was my wife, Carrie Jenkins. Carrie wrote a widely celebrated statement, in wholly general terms, about the importance of philosophers treating each other respectfully. Brian Leiter—who had not previously been in correspondence with Carrie—interpreted this as a criticism of him personally, and wrote Carrie an insulting email, which had significant stifling and intimidating effects. In my opinion, this is not only unacceptable behaviour, but an abuse of the powerful position that Leiter finds himself in. And although the situation with Carrie is the one I am the most familiar with, it seems clear from discussions with others that this kind of bullying, silencing behaviour represents a pattern. That is why I have signed on to this statement (update: here), publicly declaring that I will not assist in the production of the PGR while it is under Brian Leiter's control. 
I am an untenured junior member of the profession, and have never been asked to contribute to the PGR, but I consider public statements like this important, especially in this context where fear of becoming the object of a negative Leiter campaign is so prevalent. It is important that other philosophers see that if they take a stand, they will not be alone. I am happy to see that many much more prominent philosophers than I—including at least one person who was on the PGR advisory board last week—have also signed.

I remain ambivalent about the PGR itself. As indicated above, I think it plays an important role. Perhaps something else could play that role in a better way, but unless and until such something exists, I think that the PGR itself does good. But in the status quo, where it makes everyone afraid of Brian Leiter, there is serious harm that comes along with that good. It is time for that harm to stop. The best solution for now would be for the PGR to proceed without its founder.

Saturday, August 30, 2014

Pritchard on pragmatics of knowledge ascriptions

I'm working on a review of Duncan Pritchard's book Epistemological Disjunctivism. I'll probably try out a few ideas here over the next couple of months. I want to start out by focusing on something from near the end of the book—§8 of Part III. Here, Duncan is trying to deal with what he considers to be a challenge to the particular form of neo-Moorean disjunctivist response to the skeptical paradox he's been developing. The salient element of the view is that, contrary to skeptical intuitions, one does typically know that e.g. one is related in the normal way to the world, rather than being a brain in a vat. This, even though one lacks the ability to discriminate perceptually between being related in the normal way to the world and being a brain in a vat.

The challenge Duncan considers in this section is that Moorean assertions like "I know I'm not a brain in a vat" seem conversationally inappropriate. As he puts it earlier in the book,
[T]here appears to be something conversationally very odd about asserting that one knows the denial of a specific radical sceptical hypothesis. That is, even if one is willing to grant with the neo-Moorean that one can indeed know that one is not, say, a BIV, it still needs to be explained why any explicit claim to know that one is not a BIV (i.e., 'I know that I am not a BIV') sounds so conversationally inappropriate. Call this the conversational impropriety objection. (115)
The answer Duncan gives to this challenge in §8 ("Knowing and Saying That One Knows") is that the Moorean claims in question, in the contexts under consideration, generate false conversational implicatures to the effect that one has the relevant discriminatory abilities:
[I]n entering an explicit knowledge claim in response to a challenge involving a specific error-possibility one is not only representing oneself as having stronger reflective accessible grounds in support of that assertion than would (normally) be required in order to simply assert the target proposition, but also usually representing oneself as being in possession of reflectively accessible grounds which speak specifically to the error-possibility raised. (142)
I tend to be suspicious of pragmatic explanations for infelicity that don't come along with systematic explanations. Grice tells nice stories about how his maxims predict particular implicatures, given various contents asserted. What is Duncan's explanation for why first-person knowledge assertions implicate that one has the perceptual capacity to discriminate the state of affairs claimed to be known from alternatives that have been mentioned? Let's take an example, adapted from one of Duncan's (p. 146 -- one of his "unmotivated specific challenge" cases):

  • Zula: [looking at some zebras in the zoo] There are some zebras over there.
  • Asshole: They look like zebras. But who knows? Maybe they're cleverly disguised mules.
  • Zula: I know that they're zebras.

Duncan's view is that Zula's last utterance is true but unassertable—unassertable because it implicates falsely that Zula can discriminate perceptually between zebras and cleverly disguised mules. But why does it implicate that, if it doesn't entail it? I can't see how any of Grice's maxims would generate the implicature in this case. Without some kind of story about where the implicature comes from, the suggestion that any impropriety comes down to pragmatics looks suspiciously ad hoc.

Notice also that certain predictions of the pragmatic explanation do not seem to be borne out. Since Duncan's story depends essentially on the implicatures involved in Zula's assertion, it does not extend to knowledge attributions that Zula doesn't assert. For example, it does not extend to Zula's unasserted thought in this case:
  • Zula: [looking at some zebras in the zoo] There are some zebras over there.
  • Asshole: They look like zebras. But who knows? Maybe they're cleverly disguised mules.
  • Zula: [thinking to herself] What an asshole. I know that they're zebras.
Zula's thought won't mislead Asshole or anybody else, so Duncan's story can't show why it's inappropriate. But it seems intuitively problematic in the same way her original assertion is. Similarly, there seems to be impropriety about Moorean assertions in third-personal contexts where one won't mislead. Suppose that you and I know full well that Zula can't tell the difference between a real zebra and a fake zebra; we also know full well that she is looking at a real zebra right now. Consider this:
  • Zula: [looking at some zebras in the zoo] There are some zebras over there.
  • Asshole: They look like zebras. But who knows? Maybe they're cleverly disguised mules.
  • Me: [to you, out of earshot of Z and A] Zula knows that they're zebras.
My assertion seems problematic in the same way Zula's original one does; but I do not mislead anyone. (We could also consider, for this point, a version of the first-personal case where it is stipulated to be common knowledge that Zula lacks the discriminatory ability in question.)

Here is one more observation about the case. Suppose nobody says anything about knowledge, as in this variant:
  • Zula: [looking at some zebras in the zoo] There are some zebras over there.
  • Asshole: They look like zebras. But maybe they're cleverly disguised mules.
  • Zula: They are zebras.
Insofar as I can feel the force of Duncan's suggestion that Zula's original final utterance—'I know that they're zebras'—implicates that she has special abilities to rule out fakes, I think the same applies here. But if so, I think that this may show that even if Duncan has identified something wrong with the knowledge assertion, he hasn't identified everything wrong with it. For we have no inclination whatsoever to think that Zula speaks falsely in asserting, even in the face of the skeptical challenge, that there are zebras. The case is very different for her self-ascription of knowledge. The intuition is not merely that she shouldn't say she has knowledge; it's that she doesn't. (Indeed, I think the intuition is that it'd be fine for her to assert that she doesn't have knowledge.) Since there seems to be a special phenomenon about knowledge ascriptions, the pragmatic story will only work if it is particular to knowledge ascriptions. But I don't think it is; once the challenge has been made, an outright assertion of the proposition that was challenged does—so far as I can tell, in exactly the same way a bare knowledge ascription does—in some sense convey that one has the ability to answer the challenge.

More thoughts on more central elements of Duncan's very interesting book to follow. I started here for the simple reason that it was freshest in my mind when I finished the book today.

Tuesday, April 29, 2014

More on the well of knowledge norms

Dustin Locke has published a response to my Thought article, "Knowledge Norms and Acting Well". My paper (draft here) argued that lots of counterexample-based arguments against knowledge norms of practical reasoning take a problematic form: generating a case where it seems like S knows that p, but where it seems like S is not in a strong enough epistemic position to phi. These verdicts together tell us nothing interesting unless we assume some story about the relationship between p and phi; but defenders of knowledge norms needn't and shouldn't accept many such relationships.

For example, in Jessica Brown's widely-discussed surgeon case, it is thought to be intuitive that before double-checking the charts, (a) the surgeon knows that the left kidney is the one to remove; but (b) the surgeon ought not to operate before double-checking the charts. This is only a problem for the idea that one's reasons are all and only what one knows if the proposition the left kidney is the one to remove would, if held as a reason, be a sufficient reason to justify operating before double-checking the charts. But why should one think that?

Dustin resists my argument at several points. I'm not sure what to say in response to many of them; I think they're helpfully clarifying the sources of disagreement, but they don't make me feel any worse about my point of view. For example, Dustin seems to be happy to rest on certain kinds of very theoretical intuitions, like the intuition that the surgeon isn't justified in using the proposition that the left kidney is the bad one as a reason that counts in favour of removing the left kidney immediately. I don't have this intuition, and I wouldn't want to trust it if I did. I feel pretty good about intuitions about what actions are ok in what circumstances, but deeply theoretical claims like these don't seem to me to be acceptable dialectical starting places.

In what I found to be the most interesting part of his paper, Dustin also constructs a version of Brown's surgeon case where, if one assumes that (a) a Bayesian picture of practical rationality is correct and (b) practical reasons talk translates into the Bayesian talk by letting one conditionalize on one's reasons, we can derive the intuition mentioned above. I think that both of these assumptions are very debatable, but I also think that the case Dustin tries to stipulate is more problematic than he assumes. He offers the following stipulations:
  1. The surgeon cares about, and only about, whether the patient lives.
  2. The surgeon has credence 1 that exactly one of the patient's kidneys is diseased, and a .99 degree of credence that it is the left kidney.
  3. If the surgeon performs the surgery without first checking the chart, she will begin it immediately; if she first checks the patient's chart, she will begin the surgery in one minute.
  4. The surgeon has credence 1 that were she to check the chart, she would then remove the correct kidney.
  5. If the patient has the correct kidney removed during the operation, then there are the following probabilities that he will live, depending on how soon the surgery begins: (5a) If the surgery begins immediately and the correct kidney is removed, there is a probability of 1 that the patient will live; (5b) If the surgery begins in one minute and the correct kidney is removed, there is a probability of .999 that the patient will live.
  6. If the patient has the wrong kidney removed during the operation, then the probability that he will live is 0.
(This list is quoted directly.) I have two worries. First, Dustin also says of the case that "it's quite plausible that the surgeon knows that the left kidney is diseased", and assumes that she does. But this requires a very substantive epistemological and psychological assumption about the relationship between credence and knowledge. It is not at all innocent to assume that knowledge is consistent with non-maximal credence like this. For lottery-related reasons, Dustin is probably committing himself to the denial of multi-premise closure here. (Indeed, for reasons like the ones Maria Lasonen-Aarnio has emphasized, he may very well commit himself to denying single-premise closure.) That's not a completely crazy thing to end up being committed to, but I think it substantially mitigates the rhetorical force of an argument against me here. Similarly, there are probably good reasons to deny that the surgeon outright believes that the left kidney is diseased under these circumstances, either for conceptual/metaphysical reasons (see e.g. Brian Weatherson's "Can we do without pragmatic encroachment" or Roger Clarke's "Belief is credence one (in context)") or for psychological reasons (e.g. Jennifer Nagel's "Epistemic anxiety and adaptive invariantism"). If any of these views are right, then Dustin is committed to knowledge without outright belief.

My second worry concerns stipulation number 1: this is a surgeon who cares only about the life of the patient. From a realistic point of view, this is a very strange surgeon. According to Dustin's stipulations, the surgeon cares nothing at all about any of the following: whether she follows hospital procedure; whether she sets a good example for the students observing; whether she acts only on propositions that she knows; whether she is proceeding rationally. These strong assumptions are not idle; if we allow that she cares about any of these things, the utility calculus will not require her to go without checking, even when she conditionalizes on the content of her knowledge that the left kidney is diseased. (Suppose she cares about whether she acts only on that which she knows, and that she doesn't know whether she knows; then there is a substantial risk of the negative outcome of acting on something she doesn't know.) But these very strange assumptions will make our intuitions harder to trust. When we try to imagine ourselves in her position, we naturally assume she cares about the ordinary things people might care about. Stipulating that she only cares about one thing—not even mentioning the many other things we have to remember to disregard—makes it very hard to get into her mindset. So I'm inclined to mistrust intuitions about so heavily-stipulated a case.
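For readers who want to see how Dustin's stipulations are supposed to generate the verdict that the surgeon ought to operate immediately, here is the expected-utility arithmetic worked through. This sketch is my own illustration, not code from the paper; the function name and structure are hypothetical, and the numbers come straight from the quoted stipulations.

```python
# Expected utility = probability the patient lives (stipulation 1: that's
# all the surgeon cares about). Removing the wrong kidney means the
# patient dies (stipulation 6), so only the "correct kidney" branch counts.

def p_live(p_correct_kidney, p_live_given_correct):
    """Expected utility of an option: P(correct kidney) * P(live | correct)."""
    return p_correct_kidney * p_live_given_correct

# Before conditionalizing on any reason: credence .99 that the left
# kidney is the diseased one (stipulation 2).
eu_operate_now = p_live(0.99, 1.0)    # immediate surgery (5a): 0.99
eu_check_first = p_live(1.0, 0.999)   # checking guarantees the right kidney (4),
                                      # but delays surgery one minute (5b): 0.999
assert eu_check_first > eu_operate_now  # so she ought to check first

# After conditionalizing on her putative knowledge that the left kidney
# is diseased (assumption (b)): credence in the left kidney goes to 1.
eu_operate_now_k = p_live(1.0, 1.0)   # 1.0
eu_check_first_k = p_live(1.0, 0.999) # 0.999
assert eu_operate_now_k > eu_check_first_k  # now immediate surgery wins
```

Before conditionalizing, checking maximizes expected utility; after conditionalizing on the content of her putative knowledge, operating immediately does. That reversal is what is supposed to put pressure on the knowledge norm, and it is exactly this calculus that the extra cares canvassed above (hospital procedure, acting only on what one knows, etc.) would upset.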

Tuesday, January 14, 2014

Diary of a Narcissist

This is a recent diary entry by Reginald, a confused narcissist. 
Dear Diary,
I am perturbed. As you know, I've long thought that, if I'm not perfection itself, I must at least be the next best thing to it. I thank Providence every day for so far elevating me above the common man. It is no exaggeration to say that hitherto, I have counted myself among the very most beautiful and significant people in the world. But today I received a terrible shock. While searching the internet for further discussions of me, I happened across a paper by a philosopher called David Kaplan. What I found there shook my deepest convictions to the core. Kaplan argues that certain words—'demonstratives' or 'indexicals', he calls them—are context sensitive; that is to say, the referent of these terms can vary according to the conversational context in which they're used. My first thought, on reading this, was that it seemed like an interesting and plausible semantic claim. The referent of the word 'that', for example, is simply whatever it is at which my flawless finger happens to be pointing when I speak.
But that isn't all.[*] It's one thing to recognise the general semantic framework—it's quite another to make particular entries in the list of context-dependent terms. Among Kaplan's list of context-dependent terms are the very dearest and most important to me! He includes on his list, for example, such touchstones as 'I' and 'me'! Can you imagine, diary? I—Reginald the all-right—dependent on such contingencies as conversational contexts? Never in my wildest dreams would I have imagined that anyone would so trivialise me. Needless to say, I am deeply shaken. Can I really accept that I am so unimportant? That there is nothing special about me, but rather that I'm just whoever happens to be speaking in a given conversation? The thought terrifies me. Tomorrow I shall read works by Gareth Evans and Christopher Peacocke to see if they might restore me to the glory I thought I deserved.
Fondly,
Reginald

Friday, December 27, 2013

New Paper: "Hybrid Virtue Epistemology and the A Priori"


Ben Jarvis and I have completed a draft of a new paper, "Hybrid Virtue Epistemology and the A Priori". Abstract is below, pdf is here, and comments are welcome!
Abstract. How should we understand good philosophical inquiry? Ernest Sosa has argued that the key to answering this question lies with virtue-based epistemology. According to virtue-based epistemology, competences are prior to epistemic justification. More precisely, a subject is justified in having some type of belief only because she could have a belief of that type by exercising her competences. Virtue epistemology is well positioned to explain why, in forming false philosophical beliefs, agents are often less rational than it is possible to be. These false philosophical beliefs are unjustified—and the agent is thereby less rational for having them—precisely because these beliefs could not be formed by exercising competences. But, virtue epistemology is not well positioned to explain why, in failing to form some true philosophical beliefs, agents are less rational than it is possible to be. In cases where agents fall short by failing to believe philosophical truths, the problem is not that they have unjustified beliefs, but that they lack justified ones. We argue that Timothy Williamson's recent critique of the a priori/a posteriori distinction falls prey to similar problem cases. Williamson fails to see that a type of belief might be a priori justified if and only if, even without any special confirming experiences, agents fall short by failing to have this type of belief. We conclude that there are types of beliefs that are deeply a priori justified for any agent regardless of what epistemic competences the agent has. However, we also point out that this view has a problem of its own: it appears to make the acquisition of a priori knowledge too easy. We end by suggesting that a move back towards virtue-based epistemology is necessary.  But in order for this move to be effective, epistemic competences will have to be understood very differently than in the reliabilist tradition.