Author Topic: On Evidence [Potential Blog Post]  (Read 7591 times)


Offline Sigmaleph

  • Ungodlike
  • Administrator
  • The Beast
  • *****
  • Posts: 3615
    • sigmaleph on tumblr
On Evidence [Potential Blog Post]
« on: September 07, 2013, 10:46:23 pm »
My first drafts for two posts I intend to put up on the blog. The first discusses the probabilistic interpretation of evidence. The second is an analysis of a particular bit of fallacious reasoning, which serves to illustrate the advantage of the interpretation of evidence I prefer over informal or intuitive notions.

Putting them here for feedback/criticism. Be as harsh as you have to; you can't hurt my feelings (though it'd be preferable to say "This sucks because of X, Y and Z" rather than just "this sucks").

The first one:


Sceptics always like to go on about how “extraordinary claims require extraordinary evidence” and “you need evidence to justify your beliefs” and “that which is asserted without evidence can be dismissed without evidence”, and many other variations on that theme. One gathers that this 'evidence' thing is quite a big deal, which it is. I'm not here today to argue about the importance of evidence, though, but rather its nature.

What is evidence? How do we know whether something is evidence, whether it is of any use to justify a belief? In particular cases, the answer can be trivial: “I wanted to know at what temperature water boils, so I stuck a thermometer in a pot and put it over a flame. When I saw the water boil, I checked the reading, and it said 100°C, so that's evidence that the boiling point of water is 100°C, at least with all other things being equal to my experimental set-up.” And no doubt this is evidence, but what is the general pattern at work? Informal definitions abound, as do intuitive notions, but as I intend to show, one can do better. The idea I'm proposing is not original, but rather a rigorous version of the concept that isn't sufficiently common, even amongst those who should care (like those aforementioned sceptics). This is not pure intellectual wankery about semantics: people make serious mistakes in their epistemology that can sound reasonable under an intuitive notion of evidence, but are shown to be deeply flawed if one takes a more rigorous approach.

To start with, we shall define evidence as follows: we say that some observation is evidence for a hypothesis if making the observation increases the probability that the hypothesis is true. There are elegant ways of putting this in the language of probability theory (such as P(H|E) > P(H)), but let's stick to English sentences for now.
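
To make the definition concrete, here is a minimal sketch in Python of the update behind that inequality, via Bayes' theorem. Every number in it is invented, purely for illustration:

[code]
def posterior(prior, p_e_given_h, p_e_given_not_h):
    # Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
    p_e = p_e_given_h * prior + p_e_given_not_h * (1.0 - prior)
    return p_e_given_h * prior / p_e

prior = 0.30                                 # P(H), belief before seeing E (invented)
p_h_given_e = posterior(prior, 0.80, 0.20)   # E is four times likelier if H is true
print(p_h_given_e)                           # ~0.632
print(p_h_given_e > prior)                   # True: by this definition, E is evidence for H
[/code]

The observation ends up counting as evidence exactly when it is more likely under the hypothesis than under its negation.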

There are some caveats to make. When we talk about evidence, we usually exclude very weak evidence from consideration, and more so when we talk about legal or scientific evidence. For good reason, too: human brains don't really keep track of exact numbers for how much or how little we believe in something. If you can't feel, intuitively, the difference between assigning 79% and 80% probability, then there's no point in trying to add 0.003%; it'll just get lost in the noise. So calling it 'evidence', the sort of thing that should change our minds at least a little bit, would cause us to overestimate its power and try to change our minds by 2%, because that's the smallest real difference we can feel (the numbers are completely made up and only for illustrative purposes; they don't reflect the complexities of the situation).
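
To see how weak evidence gets lost in the noise, the same update can be run in odds form; with a likelihood ratio barely above 1, the probability barely moves (invented numbers again):

[code]
def update_odds(prior_prob, likelihood_ratio):
    # Bayes in odds form: posterior odds = prior odds * P(E|H)/P(E|not-H)
    prior_odds = prior_prob / (1.0 - prior_prob)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

print(update_odds(0.79, 1.01))   # ~0.792: a shift no human can feel
print(update_odds(0.79, 4.0))    # ~0.938: strong evidence, a difference you can feel
[/code]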

But, appropriate warnings for human use aside, I do think the probabilistic view is the right one. It keeps all the desirable properties of evidence as informally understood: it comes in degrees; it's cumulative (though not trivially so, a subject which would require a further post to elaborate on); a large amount of it should make us more certain of our beliefs; and so on. It also has the advantage that there is already a well-developed mathematics of probability, and many of its results can guide us in how to deal with evidence (of this I could speak at length).
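
One such result, sketched under a strong assumption: pieces of evidence that are independent given the hypothesis combine by multiplying their likelihood ratios, which is one precise sense in which evidence is cumulative (the independence requirement is exactly why it's not trivially so). Illustrative numbers only:

[code]
from functools import reduce

def combine(prior_prob, likelihood_ratios):
    # Fold several independent pieces of evidence into one posterior, in odds form.
    odds = prior_prob / (1.0 - prior_prob)
    odds = reduce(lambda o, lr: o * lr, likelihood_ratios, odds)
    return odds / (1.0 + odds)

# Three individually modest pieces of evidence (invented ratios):
print(combine(0.50, [2.0, 2.0, 2.0]))   # ~0.889: together, they add up
[/code]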

To anticipate likely objections:

“Probability talks about the relative frequency of random events; what does that have to do with the truth or falsity of a belief?”

As it turns out, probability is a mathematical model of subjective uncertainty, which includes but is not at all limited to relative frequencies of random events. If I shuffle a standard deck of cards, the probability that the top card is the queen of hearts is 1/52; surely we all agree on this*. Now, if I take a peek and see that the top card is the four of spades, then I would no longer assign the same odds, but you don't have this knowledge. Even if some might protest that the card is fixed and will not change, you cannot shift your subjective probability based on information you don't have, so you would have to continue to assign 1/52.

And this is right and proper: if you were to bet against a third party (who hasn't peeked at the cards), you should bet on the card being the queen of hearts only if the payout is better than 51:1. If you were to shuffle the deck many times, take the card at the top, and record the result, you should expect to obtain a queen of hearts, on average, once per fifty-two draws, and this frequency to become more and more accurate the more draws you make. Crucially, this does not depend at all on whether or not I looked at the card before you did, on whether I just shuffled the deck, or used the decay of a radioactive isotope to generate an ordering, or whichever other random or pseudo-random method you prefer. The results of probability theory apply equally well to “A random process might generate results A, B, or C, with frequencies f_A, f_B, and f_C” and to “One of A, B, or C is already true, and I don't know which with perfect certainty, but I have an expectation of e_A, e_B, and e_C for each of them”.
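
A quick simulation makes the frequency claim concrete; a sketch assuming a fair shuffle, with the 51:1 payout from above:

[code]
import random

TRIALS = 100_000
deck = list(range(52))            # let card 0 stand for the queen of hearts
hits = 0
for _ in range(TRIALS):
    random.shuffle(deck)
    if deck[0] == 0:              # queen of hearts on top
        hits += 1

print(hits / TRIALS)              # hovers around 1/52, i.e. ~0.0192
# Average profit per unit staked at exactly 51:1 is ~0: the bet is fair,
# so any payout better than that is worth taking.
print((hits * 51 - (TRIALS - hits)) / TRIALS)
[/code]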


“A hypothesis should be highly informative, not highly probable. The more content your hypothesis has, the less probable it is, for the probability of “A, and also B” is never greater than the probability of A alone, or of B alone.”

Ah, but the idea is not to find a set of beliefs with maximum probability. The point is to assign exactly the correct probability to every hypothesis, given the information available, and then seek new information. Over time, new information should drive up the probability of the actually correct hypothesis. The one who tries to maximise probability by saying “A” rather than “A and B” ignores that B has a probability as well; they are simply refusing to speak of it. You can try to maximise the odds of never being found in error by saying as little as possible, but this is as much of a victory as claiming to be undefeated at chess because you only ever played once, against your six-year-old niece.
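
The conjunction rule the objection leans on is perfectly real, and easy to check on a toy case; here is one with a fair die, in exact fractions:

[code]
from fractions import Fraction

outcomes = range(1, 7)                  # a fair six-sided die
def p(event):
    # Probability of an event (a predicate over outcomes) under a uniform die.
    return Fraction(sum(1 for x in outcomes if event(x)), 6)

a = lambda x: x % 2 == 0                # A: the roll is even
b = lambda x: x <= 4                    # B: the roll is at most 4
both = lambda x: a(x) and b(x)          # A and B: the roll is 2 or 4

print(p(a), p(b), p(both))              # 1/2 2/3 1/3
assert p(both) <= min(p(a), p(b))       # P(A and B) never exceeds either conjunct
[/code]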


“But what is the point of all this, anyway? You say not being rigorous causes mistakes, but as far as I can see, people have been doing fine dealing with evidence without trying to complicate it with maths.”

If only. If you start taking a rigorous look at evidence, informed by probability theory, you'll see how often the intuitive treatment of the notion yields flawed results. More on that, up next.


* Assuming I didn't cheat in some way, and by 'standard deck of cards' we mean the French deck. You'd be hard-pressed to find a queen of hearts in what my compatriots would call the standard deck.
Σא

Offline Sigmaleph

Re: On Evidence [Potential Blog Post]
« Reply #1 on: September 07, 2013, 10:54:53 pm »
The second one:


Previously, I said:

This is not pure intellectual wankery about semantics: people make serious mistakes in their epistemology that can sound reasonable under an intuitive notion of evidence, but are shown to be deeply flawed if one takes a more rigorous approach.

Time to pay up on that.

Not long ago someone in the sceptic community made a remark to the effect of “All claims, extraordinary or not, require evidence. And by definition, a claim is not evidence in itself”*. I would not be surprised if a substantial number of the audience thought the notion to be perfectly reasonable, as opposed to entirely absurd.

“What could possibly be absurd about that?”, you ask. “You're a sceptic, and surely the notion that you should have evidence for your beliefs is a fundamental tenet of scepticism.” Indeed it is! That's not what I object to. “Then you believe a claim counts as evidence for itself? That completely nullifies the idea you just agreed to. What's the point of asking for evidence for beliefs if you're just going to take the belief as evidence?” Well, that is why I wrote that whole other thing on the nature of evidence.

If you read my previous post, you might notice that I was careful to speak of evidence for 'hypotheses', rather than 'claims'. Why? Well, setting aside that 'claim' doesn't sound quite as pretentious as 'hypothesis', a claim carries the inherent implication that it's something someone is saying. A hypothesis exists independently of whether someone proposes it; a claim does not. Which means that, when you encounter a claim, you have already made an observation. Specifically, you have observed that someone is making that claim.

“Big deal”, you say. “People say all sorts of weird shit; that doesn't make it true.” But the point is not that evidence makes beliefs true, remember? It's that it makes them more probable. “And? Why does someone saying something make it more probable?” I'm glad you asked, imaginary person whose dialogue I'm writing to move this piece along.

In the general case, there is no guarantee that someone saying something should cause you to assign it higher probability, that much is true. For example, if you suspect someone is secretly trying to kill you and they are giving you advice about which types of berries are poisonous, you'd probably do better by doing the opposite of what they say (unless they expect you to do that).

But that is not every case, and there are plenty of situations where an assertion being made should make you judge it more probable. If you think the person knows the answer, and you don't have any strong expectation for them to lie, for example. If they are a respected authority in a field relevant to the claim. If it's a question you never considered before, but your interlocutor has done so at length, and you don't think them an idiot.

People's beliefs, people's assertions, are facts about the world. They are correlated with aspects of the world**, much in the same way as the reading of a thermometer is correlated with temperature. To specifically exclude a particular kind of fact about the world from qualifying as evidence is completely arbitrary; but if you think in terms of “claims” and “justifying beliefs”, this might not be apparent. If you think in terms of updating the probability of a hypothesis, it becomes rather more so.

“Which is nice and all, but isn't there a problem here? You argue that your notion of evidence is correct, because it gives correct answers here where intuitive notions can fail. And you argue intuitive notions fail here, based on your previous definition of evidence! Circular, innit?”

Not quite. The probabilistic view of evidence provides a good illustration of the fallacious nature of “claims cannot be evidence”, but it is not at all required to expose the fallacy. Consider: I meet a guy and he tells me his name is Paul. If I refuse to accept this claim as evidence for itself and demand he show some ID, not only am I being rather rude, but I'm also being stupid. It is a fact that, very nearly every time, if you meet someone and they tell you their name, they are telling the truth. I have anecdotal evidence that corroborates this: every time someone has told me their name and I've later found out independently, from looking at ID or school records or something, the names have matched. I don't think that's a coincidence.
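
To put rough numbers on the Paul case (every rate below is invented; only the shape of the update matters):

[code]
p_paul = 0.01            # prior: a random stranger is named Paul
p_say_if_true = 0.99     # a Paul introduces himself as "Paul"
p_say_if_false = 0.0002  # a non-Paul falsely introduces himself as "Paul"

p_say = p_say_if_true * p_paul + p_say_if_false * (1.0 - p_paul)
posterior = p_say_if_true * p_paul / p_say
print(posterior)         # ~0.98: the bare claim makes "his name is Paul" near-certain
[/code]

A 1% hypothesis jumps to roughly 98% on nothing more than the claim itself, precisely because false “Paul” claims are so rare.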

“But scepticism is not about navigating basic social interactions! It's about other sorts of claims, like UFOs and alternative medicine and stuff.”

Reality does not draw a sharp boundary between “stuff sceptics usually apply scepticism to” and “all that other stuff”. If you have a theory of how to obtain reliable knowledge, there's no particular reason it should work in a fundamentally different manner for homoeopathy and for Paul's name. The techniques you use might be different (I am certainly not suggesting that you take homoeopaths at their word!), but the deep-level foundations should be the same, and should account for why you treat the two cases differently. In this case, it's obvious: Paul knows his own name, much like most people know theirs, and has no reason to lie to me. Homoeopaths, on the other hand, cannot say how they would know homoeopathy works, and they have an economic incentive to sell it even if they know it doesn't.

If your epistemology doesn't work properly in the simple case of finding out people's names, that hints that it's not quite right. Certainly, sceptics usually deal in cases where the claim itself is very weak evidence, or maybe no evidence at all, but that's not a fundamental property of evidence; it's a selection effect: if it's a simple enough question that we can just take people's word for it, we probably don't need the systematic approach of scepticism. Evidence is still the same thing, and treating it differently can lead to error.



* The specific person, and the claim they were responding to, are mostly irrelevant to the point I wish to make.


** Not in the general case, again, but it is so in specific cases. My belief that the sky is blue is correlated with the colour of the sky, because I formed it by observing the sky. Consider that to say people's beliefs are uncorrelated with the world is to say that everything you believe, you believe for no good reason, and it could just as well be false. And if you think you are the only one whose beliefs are correlated with reality, well... what evidence do you have for that?
Σא