Saturday, August 30, 2014

Pritchard on pragmatics of knowledge ascriptions

I'm working on a review of Duncan Pritchard's book Epistemological Disjunctivism. I'll probably try out a few ideas here over the next couple of months. I want to start out by focusing on something from near the end of the book—§8 of Part III. Here, Duncan is trying to deal with what he considers to be a challenge to the particular form of neo-Moorean disjunctivist response to the skeptical paradox he's been developing. The salient element of the view is that, contrary to skeptical intuitions, one does typically know that e.g. one is related in the normal way to the world, rather than being a brain in a vat. This, even though one lacks the ability to discriminate perceptually between being related in the normal way to the world and being a brain in a vat.

The challenge Duncan considers in this section is that Moorean assertions like "I know I'm not a brain in a vat" seem conversationally inappropriate. As he puts it earlier in the book,
[T]here appears to be something conversationally very odd about asserting that one knows the denial of a specific radical sceptical hypothesis. That is, even if one is willing to grant with the neo-Moorean that one can indeed know that one is not, say, a BIV, it still needs to be explained why any explicit claim to know that one is not a BIV (i.e., 'I know that I am not a BIV') sounds so conversationally inappropriate. Call this the conversational impropriety objection. (115)
The answer Duncan gives to this challenge in §8 ("Knowing and Saying That One Knows") is that the Moorean claims in question, in the contexts under consideration, generate false conversational implicatures to the effect that one has the relevant discriminatory abilities:
[I]n entering an explicit knowledge claim in response to a challenge involving a specific error-possibility one is not only representing oneself as having stronger reflective accessible grounds in support of that assertion than would (normally) be required in order to simply assert the target proposition, but also usually representing oneself as being in possession of reflectively accessible grounds which speak specifically to the error-possibility raised. (142)
I tend to be suspicious of pragmatic explanations for infelicity that don't come along with systematic explanations. Grice tells nice stories about how his maxims predict particular implicatures, given various contents asserted. What is Duncan's explanation for why first-person knowledge assertions implicate that one has the perceptual capacity to discriminate the state of affairs claimed to be known from alternatives that have been mentioned? Let's take an example, adapted from one of Duncan's (p. 146 -- one of his "unmotivated specific challenge" cases):

  • Zula: [looking at some zebras in the zoo] There are some zebras over there.
  • Asshole: They look like zebras. But who knows? Maybe they're cleverly disguised mules.
  • Zula: I know that they're zebras.

Duncan's view is that Zula's last utterance is true but unassertable—unassertable because it implicates falsely that Zula can discriminate perceptually between zebras and cleverly disguised mules. But why does it implicate that, if it doesn't entail it? I can't see how any of Grice's maxims would generate the implicature in this case. Without some kind of story about where the implicature comes from, the suggestion that any impropriety comes down to pragmatics looks suspiciously ad hoc.

Notice also that certain predictions of the pragmatic explanation do not seem to be borne out. Since Duncan's story depends essentially on the implicatures involved in Zula's assertion, it does not extend to knowledge attributions that Zula doesn't assert. For example, it does not extend to Zula's unasserted thought in this case:
  • Zula: [looking at some zebras in the zoo] There are some zebras over there.
  • Asshole: They look like zebras. But who knows? Maybe they're cleverly disguised mules.
  • Zula: [thinking to herself] What an asshole. I know that they're zebras.
Zula's thought won't mislead Asshole or anybody else, so Duncan's story can't show why it's inappropriate. But it seems intuitively problematic in the same way her original assertion is. Similarly, there seems to be impropriety about Moorean assertions in third-personal contexts where one won't mislead. Suppose that you and I know full well that Zula can't tell the difference between a real zebra and a fake zebra; we also know full well that she is looking at a real zebra right now. Consider this:
  • Zula: [looking at some zebras in the zoo] There are some zebras over there.
  • Asshole: They look like zebras. But who knows? Maybe they're cleverly disguised mules.
  • Me: [to you, out of earshot of Z and A] Zula knows that they're zebras.
My assertion seems problematic in the same way Zula's original one does; but I do not mislead anyone. (We could also consider, for this point, a version of the first-personal case where it is stipulated to be common knowledge that Zula lacks the discriminatory ability in question.)

Here is one more observation about the case. Suppose nobody says anything about knowledge, as in this variant:
  • Zula: [looking at some zebras in the zoo] There are some zebras over there.
  • Asshole: They look like zebras. But maybe they're cleverly disguised mules.
  • Zula: They are zebras.
Insofar as I can feel the force of Duncan's suggestion that Zula's original final utterance—'I know that they're zebras'—implicates that she has special abilities to rule out fakes, I think the same applies here. But if so, I think that this may show that even if Duncan has identified something wrong with the knowledge assertion, he hasn't identified everything wrong with it. For we have no inclination whatsoever to think that Zula speaks falsely in asserting, even in the face of the skeptical challenge, that there are zebras. The case is very different for her self-ascription of knowledge. The intuition is not merely that she shouldn't say she has knowledge; it's that she doesn't. (Indeed, I think the intuition is that it'd be fine for her to assert that she doesn't have knowledge.) Since there seems to be a special phenomenon about knowledge ascriptions, the pragmatic story will only work if it is particular to knowledge ascriptions. But I don't think it is; once the challenge has been made, an outright assertion of the proposition that was challenged does—so far as I can tell, in exactly the same way a bare knowledge ascription does—in some sense convey that one has the ability to answer the challenge.

More thoughts on more central elements of Duncan's very interesting book to follow. I started here for the simple reason that it was freshest in my mind when I finished the book today.

Tuesday, April 29, 2014

More on the well of knowledge norms

Dustin Locke has published a response to my Thought article, "Knowledge Norms and Acting Well". My paper (draft here) argued that lots of counterexample-based arguments against knowledge norms of practical reasoning take a problematic form: generating a case where it seems like S knows that p, but where it seems like S is not in a strong enough epistemic position to phi. These verdicts together tell us nothing interesting unless we assume some story about the relationship between p and phi; but defenders of knowledge norms needn't and shouldn't accept many such relationships.

For example, in Jessica Brown's widely-discussed surgeon case, it is thought to be intuitive that before double-checking the charts, (a) the surgeon knows that the left kidney is the one to remove; but (b) the surgeon ought not to operate before double-checking the charts. This is only a problem for the idea that one's reasons are all and only what one knows if the proposition the left kidney is the one to remove would, if held as a reason, be a sufficient reason to justify operating before double-checking the charts. But why should one think that?

Dustin resists my argument at several points. I'm not sure what to say in response to many of them; I think they're helpfully clarifying the sources of disagreement, but they don't make me feel any worse about my point of view. For example, Dustin seems to be happy to rest on certain kinds of very theoretical intuitions, like the intuition that the surgeon isn't justified in using the proposition that the left kidney is the bad one as a reason that counts in favour of removing the left kidney immediately. I don't have this intuition, and I wouldn't want to trust it if I did. I feel pretty good about intuitions about what actions are ok in what circumstances, but deeply theoretical claims like these don't seem to me to be acceptable dialectical starting places.

In what I found to be the most interesting part of his paper, Dustin also constructs a version of Brown's surgeon case where, if one assumes that (a) a Bayesian picture of practical rationality is correct and (b) practical reasons talk translates into the Bayesian talk by letting one conditionalize on one's reasons, we can derive the intuition mentioned above. I think that both of these assumptions are very debatable, but I also think that the case Dustin tries to stipulate is more problematic than he assumes. He offers the following stipulations:
  1. The surgeon cares about, and only about, whether the patient lives.
  2. The surgeon has credence 1 that exactly one of the patient's kidneys is diseased, and a .99 degree of credence that it is the left kidney.
  3. If the surgeon performs the surgery without first checking the chart, she will begin it immediately; if she first checks the patient's chart, she will begin the surgery in one minute.
  4. The surgeon has credence 1 that were she to check the chart, she would then remove the correct kidney.
  5. If the patient has the correct kidney removed during the operation, then there are the following probabilities that he will live, depending on how soon the surgery begins: (5a) If the surgery begins immediately and the correct kidney is removed, there is a probability of 1 that the patient will live; (5b) If the surgery begins in one minute and the correct kidney is removed, there is a probability of .999 that the patient will live.
  6. If the patient has the wrong kidney removed during the operation, then the probability that he will live is 0.
(This list is quoted directly.) I have two worries. First, Dustin also says of the case that "it's quite plausible that the surgeon knows that the left kidney is diseased", and assumes that she does. But this requires a very substantive epistemological and psychological assumption about the relationship between credence and knowledge. It is not at all innocent to assume that knowledge is consistent with non-maximal credence like this. For lottery-related reasons, Dustin is probably committing himself to the denial of multi-premise closure here. (Indeed, for reasons like the ones Maria Lasonen-Aarnio has emphasized, he may very well commit himself to denying single-premise closure.) That's not a completely crazy thing to end up being committed to, but I think it substantially mitigates the rhetorical force of an argument against me here. Similarly, there are probably good reasons to deny that the surgeon outright believes that the left kidney is diseased under these circumstances, either for conceptual/metaphysical reasons (see e.g. Brian Weatherson's "Can we do without pragmatic encroachment" or Roger Clarke's "Belief is credence one (in context)") or for psychological reasons (e.g. Jennifer Nagel's "Epistemic anxiety and adaptive invariantism"). If any of these views is right, then Dustin is committed to knowledge without outright belief.
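For what it's worth, the arithmetic behind the derivation is easy to make explicit. Here is a quick sketch (my own reconstruction, not Dustin's code; the numbers come straight from stipulations 1–6 above, with utility 1 if the patient lives and 0 otherwise):

```python
# Expected utilities in Dustin's version of the surgeon case.
# Utility: 1 if the patient lives, 0 otherwise (stipulation 1).

def eu_operate_now(cred_left_diseased):
    # Stipulation 5a: immediate surgery + correct kidney -> patient lives (prob 1).
    # Stipulation 6: wrong kidney -> patient dies (prob 0 of living).
    return cred_left_diseased * 1.0 + (1 - cred_left_diseased) * 0.0

def eu_check_first():
    # Stipulation 4: after checking the chart, the correct kidney is removed for sure.
    # Stipulation 5b: the one-minute delay means the patient lives with prob .999.
    return 1.0 * 0.999

# On her actual credence (.99, stipulation 2), checking wins:
print(eu_operate_now(0.99), eu_check_first())   # 0.99 vs 0.999

# But if she conditionalizes on "the left kidney is diseased" -- i.e. treats
# her putative knowledge as a reason, pushing that credence to 1 -- operating
# immediately comes out better:
print(eu_operate_now(1.0), eu_check_first())    # 1.0 vs 0.999
```

Since it seems clear she should check first, the Bayesian translation of reasons-talk yields the verdict that she isn't entitled to conditionalize on the proposition, which is how the "theoretical intuition" gets derived, given assumptions (a) and (b).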

My second worry concerns stipulation number 1: this is a surgeon who cares only about the life of the patient. From a realistic point of view, this is a very strange surgeon. According to Dustin's stipulations, the surgeon cares nothing at all about any of the following: whether she follows hospital procedure; whether she sets a good example for the students observing; whether she acts only on propositions that she knows; whether she is proceeding rationally. These strong assumptions are not idle; if we allow that she cares about any of these things, the utility calculus will not require her to go without checking, even when she conditionalizes on the content of her knowledge that the left kidney is diseased. (Suppose she cares about whether she acts only on that which she knows, and that she doesn't know whether she knows; then there is a substantial risk of the negative outcome of acting on something she doesn't know.) But these very strange assumptions will make our intuitions harder to trust. When we try to imagine ourselves in her position, we naturally assume she cares about the ordinary things people might care about. Stipulating that she only cares about one thing—not even mentioning the many other things we have to remember to disregard—makes it very hard to get into her mindset. So I'm inclined to mistrust intuitions about so heavily-stipulated a case.

Tuesday, January 14, 2014

Diary of a Narcissist

This is a recent diary entry by Reginald, a confused narcissist. 
Dear Diary,
I am perturbed. As you know, I've long thought that, if I'm not perfection itself, I must at least be the next best thing to it. I thank Providence every day for so far elevating me above the common man. It is no exaggeration to say that hitherto, I have counted myself among the very most beautiful and significant people in the world. But today I received a terrible shock. While searching the internet for further discussions of me, I happened across a paper by a philosopher called David Kaplan. What I found there shook my deepest convictions to the core. Kaplan argues that certain words—'demonstratives' or 'indexicals', he calls them—are context sensitive; that is to say, the referent of these terms can vary according to the conversational context in which they're used. My first thought, on reading this, was that it seemed like an interesting and plausible semantic claim. The referent of the word 'that', for example, is simply whatever it is at which my flawless finger happens to be pointing when I speak.
But that isn't all.[*] It's one thing to recognise the general semantic framework—it's quite another to make particular entries in the list of context-dependent terms. Among Kaplan's list of context-dependent terms are the very dearest and most important to me! He includes on his list, for example, such touchstones as 'I' and 'me'! Can you imagine, diary? I—Reginald the all-right—dependent on such contingencies as conversational contexts? Never in my wildest dreams would I have imagined that anyone would so trivialise me. Needless to say, I am deeply shaken. Can I really accept that I am so unimportant? That there is nothing special about me, but rather that I'm just whoever happens to be speaking in a given conversation? The thought terrifies me. Tomorrow I shall read works by Gareth Evans and Christopher Peacocke to see if they might restore me to the glory I thought I deserved.

Friday, December 27, 2013

New Paper: "Hybrid Virtue Epistemology and the A Priori"

Ben Jarvis and I have completed a draft of a new paper, "Hybrid Virtue Epistemology and the A Priori". Abstract is below, pdf is here, and comments are welcome!
Abstract. How should we understand good philosophical inquiry? Ernest Sosa has argued that the key to answering this question lies with virtue-based epistemology. According to virtue-based epistemology, competences are prior to epistemic justification. More precisely, a subject is justified in having some type of belief only because she could have a belief of that type by exercising her competences. Virtue epistemology is well positioned to explain why, in forming false philosophical beliefs, agents are often less rational than it is possible to be. These false philosophical beliefs are unjustified—and the agent is thereby less rational for having them—precisely because these beliefs could not be formed by exercising competences. But, virtue epistemology is not well positioned to explain why, in failing to form some true philosophical beliefs, agents are less rational than it is possible to be. In cases where agents fall short by failing to believe philosophical truths, the problem is not that they have unjustified beliefs, but that they lack justified ones. We argue that Timothy Williamson's recent critique of the a priori/a posteriori distinction falls prey to similar problem cases. Williamson fails to see that a type of belief might be a priori justified if and only if, even without any special confirming experiences, agents fall short by failing to have this type of belief. We conclude that there are types of beliefs that are deeply a priori justified for any agent regardless of what epistemic competences the agent has. However, we also point out that this view has a problem of its own: it appears to make the acquisition of a priori knowledge too easy. We end by suggesting that a move back towards virtue-based epistemology is necessary.  But in order for this move to be effective, epistemic competences will have to be understood very differently than in the reliabilist tradition.

Saturday, October 05, 2013

Jessica Brown on evidence and luminosity

In "Thought Experiments, Intuitions, and Philosophical Evidence," Jessica Brown introduces a problem for "evidence neutrality" deriving from Williamson's anti-luminosity arguments: evidence neutrality implies that if S has E as evidence, it is always possible for S's community to know that E is evidence, which entails the false claim that evidence is luminous. Sounds ok. Then she writes this puzzling passage:
We might wonder whether we could overcome this first problem by weakening the content element of evidence neutrality. Instead of claiming that if p is part of a subject’s evidence, then her community can agree that p is evidence, the relevant condition could be weakened to the claim that her community can agree that p is true. Although this revised version of the evidence-neutrality principle avoids Williamson’s objection that one is not always in a position to know what one’s evidence is, it faces an objection from Williamson’s anti-luminosity argument. Williamson claims to have established that no nontrivial condition is luminous, where a condition is luminous if and only if for every case a, if in a C obtains, then in a one is in a position to know that C obtains (2000, 95). There is not space here to assess the success of Williamson’s anti-luminosity argument. However, assuming that it is successful, it seems that no mere tinkering with the content element of evidence neutrality will suffice to defend it.
I'm just not seeing the problem here. The proposal we're considering is this: any time S has E as evidence, S (and/or S's community) is in a position to know that E is true. But this does not imply that any non-trivial condition is luminous. The claim that evidence is luminous would need knowledge that E is evidence on the right-hand side; the claim that truth is luminous would need no restriction to evidence on the left-hand side. Saying that evidence requires being in a position to know truth looks wholly consistent with Williamson's anti-luminosity argument. Indeed, setting aside the role of the community -- which as far as I can tell is idle in the argument Brown is considering -- it follows trivially from Williamson's own view, E=K. Notice that S's knowing that p entails that S is in a position to know that p is true; this is no violation of anti-luminosity.
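Schematically (this formalisation is mine, not Brown's; read $K_S$ as 'S is in a position to know that', and $\mathrm{Ev}_S$ as S's evidence):

```latex
\begin{aligned}
\textbf{Luminosity of a condition } C:\quad & C \;\rightarrow\; K_S(C)\\[4pt]
\textbf{Evidence luminous (rejected):}\quad & E \in \mathrm{Ev}_S \;\rightarrow\; K_S(E \in \mathrm{Ev}_S)\\[4pt]
\textbf{Weakened proposal:}\quad & E \in \mathrm{Ev}_S \;\rightarrow\; K_S(E \text{ is true})
\end{aligned}
```

The weakened proposal has different conditions in antecedent and consequent -- being evidence on the left, being true on the right -- so it is not an instance of the luminosity schema $C \rightarrow K_S(C)$ for any single condition $C$, which is why I can't see how anti-luminosity touches it.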
Anybody see what I'm missing?

Monday, July 08, 2013

The Rules of Thought: Fregean mental content

I posted a couple of days ago about one of the three main hooks into The Rules of Thought -- an explanation and theory of the a priori. Today I'll write about another -- a theory of mental content. Again, I'm just being completely shameless here and talking about why you might be interested in our book. Please skip if you find that sort of thing distasteful.

In our book, Ben Jarvis and I defend a Fregean theory of mental content. We hope that it does three things: it provides the best resolution to (the most interesting version of) Frege's puzzle; it has a plausible story to tell about the relationship between Fregean senses and the psychological states that constitute propositional attitudes; and it is able to underwrite the epistemology of the a priori. We came to our work on mental content via the epistemology, but we consider this latter project independently motivated and foundational. Our treatment of mental content comprises Part I of the book, and we hope that the book is as much a contribution to mental content as it is to epistemology.

Consider these two propositions:

  1. Some roses are red.
  2. Some roses have a colour.
Here's a very natural idea: it's part of the essence of these two propositions that (1) entails (2). There are lots of ways one might fill this out, but it's very natural to say that part of what makes proposition (1) the proposition that it is is that any time it is true, (2) is also true. We take this natural idea and carry it a step further. Not only do propositions have truth conditions necessarily and essentially; they also have rational acceptance conditions necessarily and essentially. Part of what makes (1) and (2) the propositions that they are is that they stand in a particular rational relationship to one another. In particular, (1) rationally entails (2), in addition to metaphysically entailing it.

We call these rational entailment relations Fregean senses. You can think of Fregean senses in our sense as a kind of truth condition. If you're comfortable thinking this way, they're equivalent to sets of 'rationally possible worlds' (where there are some of the latter in which, e.g., Hesperus is not Phosphorus). Fregean senses encode what a content rationally commits one to.

Our unstructured Fregean senses constitute a departure from Fregean orthodoxy, which would have structured senses. This is motivated in significant part by the kinds of considerations I discussed in this post last week. We think there is an important theoretical role to be played by such unstructured entities, because the notion of rational commitment is fundamental to our story about mental content. (Of course, we also believe in more structured counterparts -- these, we call 'propositions'. Naturally, there are many ways to apply labels in this neighborhood; we try to justify our terminological choices, but the possibility for superficial disagreement here is significant.)

You need Fregean senses, we think, for basically the same reason Frege thought: to account for Frege cases. On our view, however, the most fundamental category of Frege cases isn't about the possibility of informativeness, or the explanation for certain kinds of behaviour. Frege's puzzle is ultimately a puzzle about rationality. If I believe that Hesperus is a star, and then I learn that Hesperus is a planet, I face rational pressure to revise my previous belief. This wouldn't be so if I learned instead that Phosphorus is a planet. None of the neo-Russellian views out there, we argue, can explain this fact. We explain it very straightforwardly: HESPERUS and PHOSPHORUS are different contents, which carry different rational relations.

(This is a view about the metaphysics of attitudes, not about the semantics of attitude reports. As we explain in the book, our view is consistent with a lot of views -- including neo-Russellian ones -- about the latter.)

Rational commitments, on our story, are primitive and fundamental. Chapter 5 of our book draws an analogy between our way of thinking about senses and Timothy Williamson's suggestion to put knowledge 'first'. We think it is a mistake to seek substantive explanations for why certain rational entailments obtain between certain contents. This move might motivate some to suspect us of shrugging off the most fundamental questions, but this isn't necessarily the case. True enough, calling senses fundamental is in some sense a way of moving the bump in the carpet somewhere else. But we have a lot to say about its new location: the psychological realisation of Fregean sense.

If you spot us the suggestion that there are some abstract entities called 'propositions' that have inherent and essential rational relations with one another, a major open question becomes: how is it that we humans manage to stand in any kind of significant relations to these obscure entities? This is among the most central questions in Part I of our book. A nice and convenient answer, were it true, would be the familiar conceptual role theorist's answer: contents can be characterised by particular inferential roles, and a subject thinks thoughts with those contents by virtue of dispositions to infer according to those special roles. (This should remind you of Christopher Peacocke.) Unfortunately, as people like Quine and Williamson have shown, this nice and convenient answer isn't true. We need a more complicated story.

Ben and I agree with Peacocke that there are certain privileged inferential roles that play a special, content-fixing role. The inference from "is red" to "is coloured" is special in a way that the inference from "is red" to "looks at least a bit like sriracha" is not. But we don't think that this special inference need be encoded at all directly in the dispositions of any subject who possesses the concept RED. Instead, we suggest that these special inferences have a privileged teleo-normative, rather than dispositional, status. Part of what it is to possess the concept RED is to be such that inference to COLOURED is proper or correct. Part of what makes a football player a goalie is that she is supposed to prevent the ball from going into the net; it is partly in virtue of her behaviour that she is subject to this norm. But it's not a requirement that she be very good at her job.

In a closely analogous way, we think that there are rules of thought. Part of what it is to think is to be subject to certain rational norms; for example, the norm that one should infer (2) from (1). Subjects constitute thinkers partly in virtue of their behaviour and dispositions, but in a way that doesn't guarantee a particularly high level of compliance. According to the story of the book, subscription to particular rules emerges in virtue of the best systematisation of the myriad first-order dispositions to apply concepts in various ways. I can't go into much more detail in this blog post, but a different kind of analogy might help get the approach into mind. Imagine a wooded area, with various significant locations along the perimeter. People need to get from place to place, via the woods, and at first, it's pretty arbitrary what route they take. They don't all just go in a straight line, because some parts of the woods are easier to walk through than others. Over time, paths emerge. Lots of factors influence which paths come to exist -- which destinations are most important, the natural lay of the land, which routes already exist, etc. But once there are paths, there are, in some sense, correct ways to get through the woods. This path is the way you're supposed to go. This, even though nobody ever laid down the law; the path emerged over time as the product of lots of other more arbitrary activity. There's lots more to say about how this could work -- and there are many respects in which the analogy is imperfect -- but I hope that this gives at least a rough idea of the teleo-normative inferential roles that we discuss in the book.

(It is worth noting that an implication of the approach is that we need not construe contents individualistically. We're entirely open to the idea that contents are public, and the best systematisation of first-order dispositions occurs at a broader social level. If this is right, our view implies that rationality, like meaning, ain't in the head. That's fine with us.)

I'll write one more post about the third hook into the book -- consideration of the role of intuitions in epistemology -- soon.

Saturday, July 06, 2013

The Rules of Thought: Philosophy and the a priori

I'm going to live up to the blogger stereotype and set a few posts on autofocus. The shameless project is to make the case that you might have good reason to read The Rules of Thought, the book that Benjamin Jarvis and I recently wrote. (OUP catalogue page) (my webpage)

I think that there are three possible hooks into our project. One of them -- the one that represented our own way into the project -- concerns the epistemology of the a priori in general, and the epistemology of philosophy in particular. Ben and I trace this interest pretty specifically to 2005, when, while PhD students at Brown, we took Joshua Schechter's seminar on the a priori, and also attended Timothy Williamson's Blackwell-Brown lectures, which eventually became The Philosophy of Philosophy. We were attracted by the traditional idea that in many paradigmatic instances, philosophical investigation proceeded in some important sense independently from experience, but came to appreciate that (a) there were deep mysteries concerning the explanation for how this could be, and (b) there were strong challenges that suggested that the traditional idea couldn't be right. For example, the traditional idea has it that judgments about thought experiments constitute appreciation of facts that are both a priori and necessary; but Williamson gave what is now a somewhat famous argument that this can't be so: thought experiments don't include enough detail to entail the typical judgments. So the best they can support is something like a contingent, empirical counterfactual: if someone were in such-and-such circumstances, he would have JTB but no K, etc.

We wrote a defensive paper in response to Williamson's argument, explaining how one can understand the content of thought-experiment judgments in a way that renders them more plausibly necessary and a priori, invoking the notion of truth in fiction. ("Thought-Experiment Intuitions and Truth in Fiction" -- (draft) (published)) That paper did two useful things: it gave an objection to Williamson's treatment, and it defended a traditional aprioristic picture from Williamson's particular critique. But on the latter score, it was purely defensive; it did little to explain how a priori justification or knowledge was possible, or to articulate just what apriority could consist in. Another paper, "Rational Imagination and Modal Knowledge," (d) (p) gave a bit more epistemological background, and a focus on modal epistemology in particular. By the time of that paper, we were underway on the book.

What we needed, we realized, was a much fuller story about apriority, including detailed engagement with extant critiques of the notion. We give this in Part II of The Rules of Thought. Some of the critiques -- in particular, some of those from Williamson and Hawthorne, as well as some similar challenges from Yablo and Papineau -- show that a characterisation of apriority in terms of more psychological states like knowledge and justified belief is extremely difficult, perhaps impossible. (Here's a related blog post from last year.) Our general characterisation of the a priori is a negative one, given in terms of propositional justification. A subject has a priori propositional justification for p just in case she has justification for p, and this isn't due in constitutive part to any of the subject's experiences. We explain how this approach avoids the challenges to the a priori that are in the literature, and argue that there is strong reason to think that philosophical investigation is often a priori in our sense. The focus on propositional justification requires a fairly strong version of the traditional distinction between warranting and enabling roles for experience, which we attempt to explicate.

The negative characterisation is thin by design. We are explicitly open to a kind of pluralism about apriority, according to which various positive epistemic states can realise apriority. The state we focus on most is what we call 'rational necessity' -- certain contents are, we think, by their nature such that there is always conclusive reason to accept them. (Much more on this idea in another post on another motivation for the project.) But we allow that other states may realise apriority as well; we are open, for example, to the idea that it is a priori that perception is generally reliable, even though this isn't rationally necessary. Perhaps some kind of pragmatic explanation for these a priori propositions may be found.

In the context of our theory of the a priori, and our more detailed positive story about rational necessity, we rehearse the main ideas from our two previous papers on philosophical methodology: thought-experiment judgments, properly understood, often have contents that are rationally necessary, hence a priori; so likewise for many judgments in modal epistemology concerning what is metaphysically possible. This all happens in Part II of the book.

So that's the first hook for our book: understanding the a priori and the epistemology of philosophy. We tell a story that is able to vindicate a number of pretty traditional ideas about how philosophy works (but without problematic focus on words or concepts). The other two hooks will each get another post -- one concerning Fregean ideas about mental content, and one about the role of intuitions.