Tuesday, August 18, 2015

The Certainty Norm of Assertion

In a well-known paper, Jason Stanley argues against the knowledge norm of assertion, in favour of a certainty norm of assertion. As Jason notices, knowledge-denying Moore-paradoxes like (1) aren't the only Moore-paradoxes in town; certainty-denying conjunctions like (2) seem similarly paradoxical:

  1. Jason works at Yale but I don't know that Jason works at Yale
  2. Jason works at Yale but it's not certain that Jason works at Yale

Jason thinks that knowledge doesn't entail certainty, and so a knowledge norm of assertion can't explain what's wrong with assertions of (2). Instead, he opts for a certainty norm of assertion, which is meant to explain both.

The argument I thought I remembered from the paper was that the certainty norm was strictly stronger than the knowledge norm. The certainty norm explains (2) in the obvious way (just like the knowledge norm explained (1)); and it explains (1) by invoking the entailment of knowledge by certainty. If I don't know that Jason works at Yale, it's not certain, so I can't assert the first conjunct. But today I read the paper again, and that's definitely not the argument. Certainty, in the sense Jason discusses, does not entail knowledge.

The relevant notion of certainty here is 'epistemic certainty', according to which 'one is certain of a proposition p if and only if one knows that p (or is in a position to know that p) on the basis of evidence that gives one the highest degree of justification for one's belief that p'. (p. 35) Since certainty does not entail knowledge, on Jason's view, it is not quite clear to me how a certainty norm explains the infelicity of (1). Of course one can't know both conjuncts in (1), but I don't see why one couldn't have certainty in both conjuncts. Here is Jason's attempt to deal with the issue:
Consider the proposition that there are no large Jewish elephants in my bedroom. This may have been an epistemic certainty for me five minutes ago, even though I did not know that there were no large Jewish elephants in my bedroom. I did not know that there were no large Jewish elephants in my bedroom, because I did not believe it, and I did not believe it simply because it didn't occur to me ever to entertain that possibility. Nevertheless, in this case, if I had entertained the proposition that there are no large Jewish elephants in my bedroom, I would have known it. The reason this counterfactual is true is because it is an epistemic certainty for me that there are no large Jewish elephants in my bedroom. So the fact that a proposition is an epistemic certainty for a person does not entail that the person knows that proposition. If a proposition is an epistemic certainty for a person at a time, then it does follow that the person is in a *position to know* that proposition. Being in a position to know a proposition is to be disposed to acquire the knowledge that the proposition is true, when one entertains it on the right evidential basis. Since epistemic certainty entails possession of this dispositional property, utterances [like (1)] are odd. (p. 49)
The thought seems to be that if something is certain, then if one asserts it, one must know it. But it doesn't follow from his explanations of these notions that this must be so—couldn't something be asserted without being entertained on the right evidential basis? Suppose for instance that p is certain for me, but I don't know p because I am ignoring my overwhelming evidence for p, and basing my belief that p on some bad evidence. Couldn't it be, in this case, that it's also certain for me that I don't know p? It seems plausible to me that it could be: my evidence (which, suppose, includes some sort of introspective access to the source of my belief) might overwhelmingly establish that I don't know that p. But if so, the certainty norm predicts that 'p but I don't know that p' should be assertable.

Thursday, August 06, 2015

Assertability without Assertion

I think there are cases where one doesn't assert something, but one wouldn't be in violation of any norms of assertion if one did. Probably you can think of lots of cases like that. For instance, suppose that Helen is having a conversation about sloths and their habits. Suppose that she has whatever arbitrarily high epistemic access you like with respect to the fact that sloths urinate only once a week, but chooses not to mention this fact, preferring in this instance to listen to the other people speaking instead. If she had asserted it, this would have amounted to an expression of her knowledge or better (certainty, maybe), and it would have given knowledge to her interlocutors, who would have celebrated this fact as relevant and interesting. But she keeps it to herself instead.

I think this is a pretty mundane kind of case—it happens all the time. (There are other kinds of cases with the relevant feature too—imagine a case where one does the wrong thing by refraining from asserting. One may—indeed, ought to—assert, but doesn't.) But I also think it's a counterexample to Rachel McKinnon's 'Supportive Reasons Norm', which she suggests is the central norm governing assertion.

Here is the Supportive Reasons Norm (given on p. 52 of Rachel's recent assertion book):
One may assert that p only if
(i) One has supportive reasons for p,
(ii) The relevant conventional and pragmatic elements of the context are present, and
(iii) One asserts that p at least in part because the assertion that p satisfies (i) and (ii).
My case of Helen is a counterexample because Helen may assert that sloths urinate only once a week, but she fails to satisfy condition (iii), since she doesn't make the relevant assertion at all, let alone for a particular reason. In general, that condition ensures that the permissibility of an assertion entails that the assertion is made. Since it's not true that we're only permitted to assert the things we do assert, I don't think condition (iii) is part of a proper characterization of what one may assert. (It is much easier to think that something like it may have a role to play in a characterization of when a given assertion is a proper one. Perhaps that's what Rachel had in mind.)

Sunday, July 19, 2015

External Factors and Evidential Symmetry

I'm thinking about the relationship between factive reasons and internalism.

A certain gumball machine has two possible modes. In mode A, it delivers blue gumballs with 90% probability, and red gumballs with 10% probability; in mode B, those proportions are reversed. (The probability for each gumball is independent.) Every morning, a fair coin is flipped to determine in which mode it will remain for the duration of the day. Vibhuti knows all of this. She begins our story with an epistemic probability of .5 for proposition h.
h: the machine is in mode A.
Now two of Vibhuti's friends who have been to the gumball machine today come along. Tunc tells her that he bought a gumball, and it was blue. (This is evidence in favour of h.) Eric tells her that he bought a gumball, and it was red. (This is evidence against h.) Tunc and Eric are equally (and highly) honest and reliable (and Vibhuti knows this). The evidential situation looks entirely symmetric, so Vibhuti's evidential probability for h looks still to be .5.

But certain approaches to evidence might disrupt this apparent symmetry. Suppose it turns out that Eric is lying, but Tunc is telling the truth, and indeed, reporting something he knows. (We've stipulated that this is unlikely, but not that it's impossible.) The lie is skillful, and Vibhuti isn't suspicious; she very reasonably takes both of their assertions at face value. Let's also take on board the following epistemic assumptions (if only to see where they lead):

  1. Testimony almost always puts one in possession of knowledge of the fact that the testimony occurred.
  2. Testimony at least sometimes puts one in possession of knowledge of the fact testified.
  3. E=K.
(Note that I am not assuming a reductivist approach to testimony; there's no claim that the knowledge from 2 typically or ever is based on the knowledge from 1.)

Given these assumptions, it looks like Vibhuti's case may not be symmetrical after all. Although she has some evidence in favour of h and some against it, it isn't all symmetrical. For it looks like her relevant evidence is the following:
  • Tunc says he got a blue one.
  • Tunc got a blue one.
  • Eric says he got a red one.
The first and third on this list look to be symmetrical for and against h. But the strongest item here counts unambiguously in favour of it. You might think that the second swamps the first in evidential relevance (that sort of seems right); if so, we could just look at this list:

  • Tunc got a blue one.
  • Eric says he got a red one.
Here we have one piece of evidence in each direction, but the first item, which counts in favour of h, looks stronger than the second. So it looks like there's going to be some pressure against the idea that Vibhuti's evidential probability for h is .5; it seems like it should be higher than .5.
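
Here, for concreteness, is a little Bayesian sketch of the case (a sketch only; nothing turns on the details). The 90/10 proportions and the fair coin are from the setup above, but the 0.99 reliability figure, and the assumption that an unreliable friend simply reports the wrong colour, are stipulations of mine, made just to illustrate the asymmetry. On the first evidence set (the two reports alone) the posterior for h stays at .5; swapping in Tunc's actual draw pushes it a little above .5.

# A toy model of the gumball case. The reliability figure and the lying model
# are my stipulations, not part of the original case.

P_BLUE = {'A': 0.9, 'B': 0.1}   # chance of a blue gumball in each mode
PRIOR_ODDS_A = 1.0              # fair coin: 1:1 odds that the machine is in mode A
RELIABILITY = 0.99              # hypothetical chance that a friend reports truly


def p_says(colour, mode):
    """Probability that a friend reports `colour`, given the machine's mode."""
    p_draw = P_BLUE[mode] if colour == 'blue' else 1 - P_BLUE[mode]
    # Either he draws that colour and reports it truly, or he draws the other
    # colour and misreports it.
    return RELIABILITY * p_draw + (1 - RELIABILITY) * (1 - p_draw)


def posterior_h(likelihood_ratios):
    """Posterior probability of h, given independent likelihood ratios P(e|A)/P(e|B)."""
    odds = PRIOR_ODDS_A
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)


# Evidence set 1: just the two reports. They are exactly symmetric, so the
# posterior for h stays at .5.
reports_only = [
    p_says('blue', 'A') / p_says('blue', 'B'),   # Tunc says he got a blue one
    p_says('red', 'A') / p_says('red', 'B'),     # Eric says he got a red one
]
print(round(posterior_h(reports_only), 3))       # 0.5

# Evidence set 2: Tunc's actual draw (which E=K lets in as evidence) plus
# Eric's report. The draw favours h by 9 to 1; Eric's report favours not-h by
# only about 8.3 to 1, so h comes out slightly ahead.
draw_plus_report = [
    P_BLUE['A'] / P_BLUE['B'],                   # Tunc got a blue one
    p_says('red', 'A') / p_says('red', 'B'),     # Eric says he got a red one
]
print(round(posterior_h(draw_plus_report), 3))   # about 0.52

The exact numbers depend on the reliability figure, of course; the point is just that once Tunc's draw itself is among the evidence, the two sides no longer cancel.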

So how, if at all, could E=K (and really, the challenge applies to a broader range of views: anyone strict enough to demand true evidence, but lax enough to allow testimonial contents sometimes to be evidence) accommodate the apparent evidential symmetry in cases like this? I see four options.
  1. Deny that one can ever get the contents of testimony as evidence, because we don't really know the things we're told, even when we're told by people who know. (Skepticism about testimony.) This might be more palatable than it seems if accompanied with some kind of contextualism about both 'knows' and 'evidence'.
  2. Deny that one can ever get the contents of testimony as evidence, because not all knowledge is evidence—maybe only direct or basic knowledge counts as evidence. (E=BK)
  3. Deny that in particular cases like this one can get knowledge via testimony. If one friend is lying to you, then you're in a skeptical situation where testimony is unreliable. (But will this solution be general enough?)
  4. Admit everything I've said about what evidence Vibhuti has, but argue that, for purposes of evidential probability, the situation is symmetrical after all. (The relationship between evidence and evidential probability is complex; I'm really working with something of a 'black box' for the latter—must we suppose that the black box delivers the asymmetrical verdict in a case like this?)
Maybe there are more, I'm not sure.

Thursday, July 16, 2015

Philosophical Assertions

Chapter 9 of Sandy Goldberg's recent book Assertion argues, on the basis of peer disagreement in philosophy, that it is characteristic of typical, apparently appropriate, philosophical discussions that they include many assertions of contents for which the speakers are not justified. This is part of a bigger case that the epistemic norms governing assertion do not always make for strong epistemic constraints.

I am not convinced that the cases of disagreement he's looking at are cases in which people are typically making outright assertions of controversial contents. I am attracted to something like the idea he recognizes in this passage:
I have sometimes heard it said (in conversation) that there really are no straight (first-order) assertions of controversial matters in philosophy, only speculations and conditional (or otherwise hedged) claims. A characteristic claim in this vicinity is that philosophers do not flat-out assert their claims, but instead suggest them tentatively, or with something like a "we have some reason to think that" operator in front of them. (p. 247)
I would weaken this point slightly: perhaps philosophers do sometimes assert things that, given the peer disagreement, they can't justifiably believe; I would just hold that those assertions are inappropriate norm violations. If most of the time, including all of the times that are clearly appropriate, philosophers are doing something weaker than flat-out asserting, then his larger argument for a weaker assertion norm doesn't really get off the ground. (While it may be unacceptable to suppose that philosophy is absolutely full of unwarranted assertions, it seems to me not at all implausible to suppose that some philosophers sometimes make unwarranted assertions.) But I do think that philosophers often exaggerate the strength with which philosophers typically put forward their philosophical ideas.

In response to this move, Sandy writes the following in defence of 'PASD'—the claim that it is common for philosophers in cases of systematic peer disagreement to assert controversial claims. This is a continuation of the passage quoted above:
I agree that this is sometimes the case. But I find it dubious in the extreme to think that all cases of apparent assertions made in philosophy under conditions of systematic peer disagreement are like this. Surely there are some cases in which a philosopher continues to assert that p, despite the systematic p-relevant peer disagreement. (fn: Indeed, some philosophers have even conceded as much in their own case.) Here, two points of support can be made. First, it should be obvious to anyone who has participated in or observed philosophical practice that there are (some, and arguably many) occasions on which a claim is advanced under conditions of systematic peer disagreement without any explicit hedge or "there are reasons to think" operator in play. For this reason, if the hedging proposal is to work, it must postulate an implicit (linguistically unmarked) hedge or "there are reasons to think" operator in play in all such cases. But such a global postulation would appear to be fully theory-driven, and so ad hoc. What is more (and this is my second point), there are independent reasons to think that such a postulation is not warranted. In particular, the suggestion—that philosophical practice under conditions of systematic peer disagreement always involves hedged rather than straight assertion—appears to be belied by other aspects of our practice. Why the vehemence with which some (apparently first-order, categorical) philosophical claims are made, even under conditions of systematic peer disagreement? Why so much heat, if all we are doing is entering hedged claims? Why do we go to such great lengths to try to defend our claims in the face of challenge? Why not shrug off such challenges to our claim that p, with the remark that, after all, we were merely claiming that there are reasons supporting that p? Relatedly: why is it that the typical response to challenges is to try to defend the claim that p, not the (weaker) claim that there are reasons to believe that p? Finally, if all we are doing in philosophy is entering hedged claims, why is talk of our philosophical "commitments" so prevalent? Reflecting on this practice, I conclude that assertions are made in philosophy, even in the face of systematic peer disagreement. PASD is true.
I want to make three observations about this argument. First, for the reasons mentioned above, I think that Sandy is wrong to focus on the question of whether philosophers ever make assertions of controversial claims. For his argument to work, he needs this to be common enough that the verdict that such assertions are unwarranted would be undermining of philosophical practice. Second, he seems to be focused primarily on the idea that philosophers are asserting something weaker, like an existential claim about reasons; a more promising version of the idea seems to me to be that we are often making weaker commitments to categorical philosophical contents—that we're often speculating that p, for instance, rather than outright asserting it. (Sandy recognizes that there is an important distinction here elsewhere in the book.)

Third, I think that when we really get down to what philosophers actually say and write, outright assertions of contentious claims are much rarer than we sometimes suppose. We very often use hedges like 'it seems to me that'. I think the passage I've just quoted from Sandy is reasonably representative in terms of philosophical force and style—but how much of it consists of actual assertions of contentious claims? Let's look in detail:

  • I agree that this is sometimes the case. May or may not be an assertion, but if it is one, it's an uncontroversial one Sandy makes about himself.
  • But I find it dubious in the extreme to think that all cases of apparent assertions made in philosophy under conditions of systematic peer disagreement are like this. Ditto.
  • Surely there are some cases in which a philosopher continues to assert that p, despite the systematic p-relevant peer disagreement. To my ear, the 'surely' makes this an invitation to notice for oneself, not an outright assertion. (I can almost hear question-marks on 'surely' claims.) But if it is an assertion it's a very weak one, and not one I'd expect to see systematic disagreement about.
  • Indeed, some philosophers have even conceded as much in their own case. An assertion, but not a contentious one.
  • Here, two points of support can be made. Ditto.
  • First, it should be obvious to anyone who has participated in or observed philosophical practice that there are (some, and arguably many) occasions on which a claim is advanced under conditions of systematic peer disagreement without any explicit hedge or "there are reasons to think" operator in play. Plausibly an assertion, but not controversial.
  • For this reason, if the hedging proposal is to work, it must postulate an implicit (linguistically unmarked) hedge or "there are reasons to think" operator in play in all such cases. An assertion. If 'the hedging proposal' is the idea that philosophers never make assertions in these cases, it looks like an uncontroversial one; if it's the more general idea that one can avoid his argument by invoking hedging moves, I think it's an unwarranted assertion for the reasons mentioned above.
  • But such a global postulation would appear to be fully theory-driven, and so ad hoc. Exhibits the kind of hedges he's talking about.
  • What is more (and this is my second point), there are independent reasons to think that such a postulation is not warranted. Exhibits the kind of hedges he's talking about.
  • In particular, the suggestion—that philosophical practice under conditions of systematic peer disagreement always involves hedged rather than straight assertion—appears to be belied by other aspects of our practice. Exhibits the kind of hedges he's talking about.
  • Why the vehemence with which some (apparently first-order, categorical) philosophical claims are made, even under conditions of systematic peer disagreement? Why so much heat, if all we are doing is entering hedged claims? Why do we go to such great lengths to try to defend our claims in the face of challenge? Why not shrug off such challenges to our claim that p, with the remark that, after all, we were merely claiming that there are reasons supporting that p? Relatedly: why is it that the typical response to challenges is to try to defend the claim that p, not the (weaker) claim that there are reasons to believe that p? Finally, if all we are doing in philosophy is entering hedged claims, why is talk of our philosophical "commitments" so prevalent? Six rhetorical questions provided to invite the reader to share in the appearance mentioned in the previous point. No assertions here.
  • Reflecting on this practice, I conclude that assertions are made in philosophy, even in the face of systematic peer disagreement. Not obviously a contentious assertion. It could be an uncontentious assertion about Sandy. It could be the very weak assertion that assertions are sometimes made in philosophy under systematic disagreement.
  • PASD is true. Looks like a contentious assertion; I'd be willing to call it unwarranted, because of the objections mentioned above. (PASD is a claim about what is normal; the existential doesn't justify it.) Note also that this could be interpreted as embedded within the previous sentence's 'I conclude that' operator, in which case it would not be a contentious assertion.

Again, I don't think Sandy's writing here is idiosyncratic; this is what lots of analytic philosophy looks like. If this is representative, it seems that quite a small proportion of philosophical writing constitutes outright assertion of contentious claims. So the idea that such assertions are unwarranted does not imply that warranted philosophical dialogue and debate is impossible. Sandy exaggerates the role of contentious assertions in philosophical discussion.

Sunday, June 28, 2015

Internalism and the Meditations

Here's a Cartesian idea: there is special epistemic access to facts about our own subjective, internal experiences. Other knowledge we may have, like knowledge of the external world, must be derived from the more basic knowledge, which concerns the internal.

This is clearly something Descartes thinks, but is there an argument to that effect? I'd always thought there was; the Meditations offers something like this:
  1. There are possible skeptical scenarios for beliefs about the external world
  2. There are no possible skeptical scenarios for beliefs about the internal
  3. Being such that there's no possible skeptical scenario for it is the marker of the kind of epistemic fundamentality in question; so
  4. The internal, not the external, is what has the kind of epistemic fundamentality in question
Premise (3) is no doubt dubitable, but I'll refrain from dubiting it at present. I want to get clearer about (1) and (2). What does it take to be a skeptical scenario with respect to some belief? It seems like maybe Descartes treats a skeptical scenario with respect to a given belief as a possible case where one is wrong about that belief, even though things seem exactly the same. But if that's the working understanding of a skeptical scenario, then it looks like we're just assuming the kind of internalism I'm looking for justification for. Why should we think the key question, for whether a given scenario has skeptical implications, is whether things seem the same? It seems that one would only sign up to that criterion if one were already convinced that seemings are really epistemically important.

Note that a more neutral characterization of skeptical scenarios might have it that a skeptical scenario with respect to p is a possible case where one is wrong about p, even though one has all the same basic evidence. But putting things this way, premises (1) and (2) become much less obvious.

So I'm tempted to think there's not actually any pressure in favour of internalism in Descartes's reflections on skeptical scenarios; reflection on which kinds of deception are and aren't possible might just amount to teasing out the internalist commitments one initially finds oneself with.

Monday, June 01, 2015

Factoring Views about Having Reasons

I have been thinking about Mark Schroeder’s very interesting paper, “Having Reasons”. He argues against a ‘factoring account’ of having a reason for action, and he also argues that epistemologists have been misled by assuming a parallel factoring account of evidence.

I have three reactions.

  1. Schroeder is unclear about what exactly the commitments of the factoring account are; I think he may slide between a stronger and a weaker reading of it. This isn’t disastrous for his own project, because he wants to reject both readings, but I think it’s important to keep them separate (in part because of (2) below).
  2. The stronger reading is pretty plausibly false (though maybe not just for the reasons Schroeder says), but the weaker reading is pretty plausibly true (despite his arguments).
  3. Epistemologists have not been misled by assuming (a strong form of) the factoring account.
I’ll try to defend (1) in this post.

What is the factoring account? Schroeder first introduces it via an analogy:
When someone has a ticket to the opera, that is because there is a ticket to the opera, and it is in her possession—she has it. Similarly, if one has a golf partner, this can only be because there is someone who is a golf partner, and one has him. But here, it is not like there are people out there who have the property of being golf partners, and one is in your possession. Rather, being a golf partner is simply a relational property, and the golf partner you have—your golf partner—is simply the one who stands in the golf partner of relation to you. 
A factoring account of having opera tickets is true. There is an opera ticket, and moreover, one has it. A factoring account of having golf partners, however, is to be rejected. What exactly is wrong with this view? Schroeder says it’s a commitment to the implausible claim that “there are people out there who have the property of being golf partners, and one is in your possession.” But of course, strictly speaking, there are people out there who are golf partners, and one of them is mine. I agree with Schroeder that there’s an important contrast between these cases, but I don’t think he’s quite articulated what it is. I think it has to do with grounding. What makes it the case that I have an opera ticket is the existence of this thing the opera ticket, combined with me standing in a suitable relationship to it. But the existence of the golf partner, combined with my relationship to her, doesn’t make it the case that I have a golf partner. On the contrary, it is my having her as a golf partner that makes it the case that she is a golf partner. The relationship, not the object, is relatively fundamental here; the existence of the golf partner—though genuine—is derivative.

So distinguish these claims:

  • Weak Factoring: Any time S has R as a reason, there exists a reason R, and S stands in a suitable having relation to R.
  • Strong Factoring: What it is for S to have R as a reason is for there to exist a reason R, and for S to stand in a suitable having relation to R.
As the names imply, Strong Factoring implies Weak Factoring, but not vice versa. If what I said about golf partners is correct, Weak Factoring does not get at the intuitive contrast between opera tickets and golf partners. The analogue of Weak Factoring is true of golf partners. (Contra the letter of Schroeder's text, any time one has a golf partner, there really is someone who is a golf partner that one has.) I don’t think Schroeder is at all clear about this; he writes at times as if ‘the Factoring Account’ is just Weak Factoring. (i.e., “[T]he Factoring Account has two major commitments. In any case in which it seems that there is a reason someone has to do something, whatever is the reason that she has must be just that: (1) a reason for her to do it, and (2) one that she has.” p. 58)

The distinction makes an important difference when it comes to thinking about the views one might have about reasons. For example, here is a possible view one might have about reasons: R=K. (A proposition is among a subject’s reasons if and only if the subject knows that proposition.) This view counts as a Weak Factoring view—any time you have knowledge, there is some knowledge, and moreover, you have it. But it is not a Strong Factoring view; the existence of the knowledge ontologically depends on your having it. It is more like golf partners than opera tickets.

“Weak Factoring” is probably a misnomer, really—the view in question isn’t a kind of factoring at all. It’s a mere entailment claim. So when Schroeder’s argument against what he calls ‘the Factoring Account’ takes the form of counterexamples to Weak Factoring, he’s really making a much more radical claim than anything we should call the rejection of a factoring treatment of having reasons. He's rejecting the mere entailment from having a reason to there being a reason.

(His counterexamples are cases where a subject acts on a reasonable but mistaken belief—like Bernard Williams’s subject who takes a sip of the liquid in his glass because he falsely believes it’s a martini. I don’t think these are counterexamples, for reasons I won’t go into right now.)

Thursday, March 19, 2015

Perceptual Justification and the Logic of 'Because'

Here's an invalid argument form:

  1. If x is F, then that's because x is G.
  2. If x is G, then x is H. Therefore,
  3. If x is F, then that's because x is H.
This instance should make it obvious that this form is invalid, if it's not already obvious:

  1. If Laila got an A, then that's because she received a total score of 80 or higher.
  2. If Laila received a total score of 80 or higher, then she passed the course. Therefore,
  3. If Laila got an A, then that's because she passed the course.
I'm not sure just what inferences are valid in the logic of this sort of 'because', but this one isn't. If there were an appropriate 'because' in premise (2), then the transitivity of 'because' would establish the validity of the inference. I'm not sure whether I think 'because' is transitive. But it's not closed under the material conditional, or even under entailment.
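
Here is one (admittedly toy) way to see the invalidity. The sketch below is mine; it just stipulates the facts of the Laila case and an explicit 'because' relation, and checks that both premises come out true while the conclusion comes out false.

# A toy countermodel for the argument form above. The three propositions and
# the hand-listed 'because' relation are stipulations of mine, meant only to
# exhibit the invalidity.

facts = {"got_an_A", "scored_80_or_more", "passed"}   # all true of Laila

# The explanatory relation, listed by hand: her A and her passing each hold
# because of her score; nothing holds because she passed.
because = {("got_an_A", "scored_80_or_more"),
           ("passed", "scored_80_or_more")}


def material_conditional(p, q):
    """'If p then q', read materially, over this little model."""
    return (p not in facts) or (q in facts)


def because_conditional(p, q):
    """'If p, then that's because q': trivially true if p fails, otherwise
    true just in case (p, q) is in the stipulated explanatory relation."""
    return (p not in facts) or ((p, q) in because)


premise_1 = because_conditional("got_an_A", "scored_80_or_more")   # True
premise_2 = material_conditional("scored_80_or_more", "passed")    # True
conclusion = because_conditional("got_an_A", "passed")             # False

print(premise_1, premise_2, conclusion)   # True True False

The model just encodes the obvious point: the explanatory relation needn't be preserved when the explanans of a 'because' claim is traded for something it merely materially implies.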

So I think that Eli Chudnoff is mistaken in supposing that these two arguments support the idea that perceptual and intuitive justification obtains in virtue of phenomenology:
  1. If your perceptual experience representing that p justifies you in believing that p, then it does so because in having this experience it is for you just like having a perceptual experience that puts you in a position to know that p.
  2. If in having an experience it is for you just like having a perceptual experience that puts you in a position to know that p, then it has presentational phenomenology with respect to p.
  3. So if your perceptual experience representing that p justifies you in believing that p, then it does so because it has presentational phenomenology with respect to p. (Intuition, p. 92)
  1. If your intuition experience representing that p justifies you in believing that p, then it does so because in having that experience it is for you just like having an intuition experience that puts you in a position to know that p.
  2. If in having an experience it is for you just like having an intuition experience that puts you in a position to know that p, then it has presentational phenomenology with respect to p.
  3. So if your intuition experience representing that p justifies you in believing that p, then it does so because it has presentational phenomenology with respect to p. (Intuition, p. 97)
These arguments are invalid. The validity would, I think, be debatable if each premise (2) were strengthened into a 'because' claim. Maybe that is the most charitable interpretation of Eli here?