Thanks. (I could tell that first pic was of a woman, though.)

As long as we’re quibbling, the picture of Alice and the queen is of the wrong queen. You want the White Queen and you have the Red Queen. That would be fine if you were running as fast as you could to stay in the same place, but not if you wanted to believe as many as six impossible things before breakfast.

John Harshman:
As long as we’re quibbling, the picture of Alice and the queen is of the wrong queen. You want the White Queen and you have the Red Queen. That would be fine if you were running as fast as you could to stay in the same place, but not if you wanted to believe as many as six impossible things before breakfast.

Ha. Good catch!

Also, FWIW, the two quotes (that by Twain and that by Carroll) didn’t seem to me to jibe very well. Doesn’t Rohrer tell us both to exclude and not to exclude the impossible from our probability estimates in two successive breaths?

ETA: I guess it’s not really the quotes themselves so much as Rohrer’s takes on them that produce that inconsistency.

To be fair, that search only produces 676,000 results. Obviously not mainstream thinking.

FWIW, anyone who wants more detail on the history of Bayesian Reasoning can read “The Theory That Would Not Die: How Bayes’ Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy” by Sharon Bertsch McGrayne

I really enjoyed the book, and used copies are pretty cheap.

petrushka: To be fair, that search only produces 676,000 results. Obviously not mainstream thinking.

Wow. That many. And yet there’s only one theory of evolution.

Fair Witness:
FWIW, anyone who wants more detail on the history of Bayesian Reasoning can read “The Theory That Would Not Die: How Bayes’ Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy” by Sharon Bertsch McGrayne

I really enjoyed the book, and used copies are pretty cheap.

I read that a few years ago. And I remember liking this Amazon review which I think gives a nice picture of both its merits and flaws:

271 of 278 people found the following review helpful
4.0 out of 5 stars, “An enjoyable popular science book that needs more depth”, May 29, 2011, by Sitting in Seattle (Verified Purchase)
This review is from: The Theory That Would Not Die: How Bayes’ Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy (Kindle Edition)
“The Theory That Would Not Die” is an enjoyable account of the history of Bayesian statistics from Thomas Bayes’s first idea to the ultimate (near-)triumph of Bayesian methods in modern statistics. As a statistically-oriented researcher and avowed Bayesian myself, I found that the book fills in details about the personalities, battles, and tempestuous history of the concepts.

If you are generally familiar with the concept of Bayes’ rule and the fundamental technical debate with frequentist theory, then I can wholeheartedly recommend the book because it will deepen your understanding of the history. The main limitation occurs if you are *not* familiar with the statistical side of the debate but are a general popular science reader: the book refers obliquely to the fundamental problems but does not delve into enough technical depth to communicate the central elements of the debate.

I think McGrayne should have used a chapter very early in the book to illustrate the technical difference between the two theories — not in terms of mathematics or detailed equations, but in terms of a practical question that would show how the Bayesian approach can answer questions that traditional statistics cannot. In many cases in McGrayne’s book, we find assertions that the Bayesian methods yielded better answers in one situation or another, but the underlying intuition about *why* or *how* is missing. The Bayesian literature is full of such examples that could be easily explained.

A good example occurs on p. 1 of ET Jaynes’s Probability Theory: I observe someone climbing out a window in the middle of the night carrying a bag over the shoulder and running away. Question: is it likely that this person is a burglar? A traditional statistical analysis can give no answer, because no hypothesis can be rejected with observation of only one case. A Bayesian analysis, however, can use prior information (e.g., the prior knowledge that people rarely climb out windows in the middle of the night) to yield both a technically correct answer and one that obviously is in better, common-sense alignment with the kinds of judgments we all make.

If the present book included a bit more detail to show exactly how this occurs and why the difference arises, I think it would be substantially more powerful for a general audience.

In conclusion: a good and entertaining book, although if you know nothing about the underlying debate, it may leave you wishing for more detail and concrete examples. If you already understand the technical side in some depth and can fill in the missing detail, then it will be purely enjoyable and you will learn much about the back history of the competing approaches to statistics.
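The burglar example quoted in the review can be turned into arithmetic. Here is a minimal sketch in Python; Jaynes gives no numbers at this point in his book, so every probability below is invented purely for illustration:

```python
# Hypothetical illustration of the Jaynes burglar example.
# All numbers are made up for the sake of the arithmetic.

# Prior: burglars are rare among people out at night.
p_burglar = 0.001

# Likelihoods (assumed): how probable is "climbing out a window at night
# with a bag" under each hypothesis?
p_obs_given_burglar = 0.5      # burglars often exit this way
p_obs_given_innocent = 0.0001  # ordinary people almost never do

# Bayes' rule: P(B | obs) = P(obs | B) P(B) / P(obs)
p_obs = (p_obs_given_burglar * p_burglar
         + p_obs_given_innocent * (1 - p_burglar))
posterior = p_obs_given_burglar * p_burglar / p_obs

print(round(posterior, 3))
```

Even with a tiny prior on “burglar”, the huge likelihood ratio pushes the posterior above 0.8, which is the common-sense answer the reviewer describes.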

Bayes’ Theorem is used when we want to make inferences. It tells us how to convert prior beliefs about parameters of a process into posterior beliefs, after observing the outcome, both being expressed as probabilities.

When we predict the outcome of evolutionary processes, generally we are not using Bayes’ Theorem; we are assuming a starting point and working out the probabilities of various outcomes. The stochastic processes of, say, population genetics compute one of the terms in the Bayesian calculation, but they are not using the whole calculation.

For example, if I am uncertain whether the selection coefficient favoring allele B over allele b is +s or -s, and I observe a change in gene frequencies, I can calculate the probability of that change under the assumption of +s and also under the assumption of -s. That part is where evolutionary theory comes in. That can be used to do Bayesian inference if we add in prior beliefs about +s versus -s.
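The ±s example above can be sketched numerically. The model and all numbers here are invented for illustration: a haploid population, one generation of selection followed by binomial (Wright–Fisher) sampling, and a 50/50 prior on the sign of s. Evolutionary theory supplies the likelihood function; Bayes’ theorem only enters when the prior is added:

```python
from math import comb

# Invented setup: population size, starting frequency of allele B,
# observed count of B in the next generation, and |s|.
N, p0, k = 100, 0.5, 60
s = 0.1

def likelihood(s_val):
    """P(observing k copies of B | selection coefficient s_val).
    One generation of haploid selection, then binomial sampling."""
    p_sel = p0 * (1 + s_val) / (1 + p0 * s_val)  # frequency after selection
    return comb(N, k) * p_sel**k * (1 - p_sel)**(N - k)

# This part is the evolutionary theory; the prior below is the Bayesian add-on.
prior_plus = prior_minus = 0.5
post_plus = likelihood(+s) * prior_plus / (
    likelihood(+s) * prior_plus + likelihood(-s) * prior_minus)

print(round(post_plus, 3))
```

An observed rise from 50 to 60 copies makes +s considerably more probable than −s, but does not prove it: drift alone could produce the same change.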

I’ve been reading a lot about theories that use Bayes’ Theorem as a model of cognition generally. The video was helpful, so thank you!

Wow. That many. And yet there’s only one theory of evolution.

Wow. That many. And yet pickled cabbage is so cheap. And other non sequiturs.

Why on earth do we start with P(w) = uniform?

What justifies that assumption?

If we believe something that’s not true, it can make it harder or impossible to learn from our data. (23:36)

Fred.
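The quoted warning, that believing something untrue can make it impossible to learn from data, has a sharp Bayesian form (sometimes called Cromwell’s rule): a prior of exactly zero on the true hypothesis can never be revised upward. A toy demonstration, with all numbers invented:

```python
# Invented example: a coin is actually biased (P(heads) = 0.9) and we
# observe 100 heads in a row. A small-but-nonzero prior on "biased"
# converges to the truth; a prior of exactly zero never moves.

def update(prior, like_h, like_not_h):
    """One Bayesian update of P(H) after one heads observation."""
    num = like_h * prior
    return num / (num + like_not_h * (1 - prior))

p_heads_if_biased, p_heads_if_fair = 0.9, 0.5

open_mind = 0.01    # small but nonzero prior on "biased"
closed_mind = 0.0   # dogmatic prior: "biased" ruled out in advance

for _ in range(100):
    open_mind = update(open_mind, p_heads_if_biased, p_heads_if_fair)
    closed_mind = update(closed_mind, p_heads_if_biased, p_heads_if_fair)

print(round(open_mind, 6), closed_mind)
```

The open-minded prior ends up essentially certain the coin is biased, while the dogmatic prior stays at exactly zero forever, no matter how much data arrives.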

Mung:
Why on earth do we start with P(w) = uniform?

What justifies that assumption?

There’s an infinity of possible distributions, but we only have limited time and computational power. A uniform prior might be wrong, but where we don’t have any data we don’t know any better, so it’s at least a fair distribution (IOW we don’t assume that any particular outcome is more probable than another, which would bias the calculation).
Empirical data, when we can get it, can then update that prior.
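The point that data overrides a uniform starting point can be sketched with the standard Beta–Binomial update for a coin’s bias w. A uniform prior is just Beta(1, 1), and the counts below are invented:

```python
# Uniform prior over a coin's bias w: Beta(alpha=1, beta=1).
# No outcome is favored a priori. (Counts below are invented.)
alpha, beta = 1, 1
prior_mean = alpha / (alpha + beta)

# Observe some data, then do the conjugate update:
# Beta(alpha, beta) + (h heads, t tails) -> Beta(alpha + h, beta + t).
heads, tails = 8, 2
alpha += heads
beta += tails
posterior_mean = alpha / (alpha + beta)

print(prior_mean, posterior_mean)  # prior mean 0.5 -> posterior mean 0.75
```

Ten tosses are already enough to drag the estimate well away from the flat starting point, which is why the choice of uniform prior rarely matters once there is a reasonable amount of data.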

Kantian Naturalist:
I’ve been reading a lot about theories that use Bayes’ Theorem as a model of cognition generally. The video was helpful, so thank you!

I also found the video very clear. And a handy reference for the next time Bayes theorem crops up and I can watch it again to refresh my memory. I’m going to re-read some of the stuff Lizzie and others wrote a while ago on probability. It might make more sense to me. The eleP(T|H)ant in the room for instance.

There are many ways to use the term “Bayesian.” But mainly it denotes a particular interpretation of probability. In modest terms, Bayesian inference is no more than counting the number of ways things can happen, according to our assumptions. Things that can happen more ways are more plausible. And since probability theory is just a calculus for counting, this means that we can use probability theory as a general way to represent plausibility, whether in reference to countable events in the world or rather theoretical constructs like parameters. Once you accept this gambit, the rest follows logically. Once we have defined our assumptions, Bayesian inference forces a purely logical way of processing that information to produce inference.

– Richard McElreath
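The counting interpretation can be made concrete with a toy in the spirit of McElreath’s “garden of forking data” (the marble setup below is a hypothetical illustration, not taken from this thread): a bag holds 4 marbles, each blue or white; we draw three times with replacement and see blue, white, blue. Each conjecture about the bag is scored by the number of ways it could have produced exactly that data:

```python
from itertools import product

observed = ("blue", "white", "blue")

# Five conjectures about the bag: 0, 1, 2, 3, or 4 blue marbles out of 4.
conjectures = {n_blue: ["blue"] * n_blue + ["white"] * (4 - n_blue)
               for n_blue in range(5)}

# Count every sequence of three draws from each bag that matches the data.
ways = {n_blue: sum(seq == observed for seq in product(bag, repeat=3))
        for n_blue, bag in conjectures.items()}

# Normalizing the counts turns "number of ways" into plausibility.
total = sum(ways.values())
plausibility = {n: w / total for n, w in ways.items()}

print(ways)
```

Conjectures that can produce the data in more ways (here, 3 blue out of 4) come out most plausible, and dividing by the total ways gives exactly a posterior distribution: the “calculus for counting” in the quote.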

I can see how counting would appeal to a retail store clerk. 🙂

Alan Fox: I also found the video very clear. And a handy reference for the next time Bayes theorem crops up and I can watch it again to refresh my memory. I’m going to re-read some of the stuff Lizzie and others wrote a while ago on probability. It might make more sense to me. The eleP(T|H)ant in the room for instance.

The real eleP(T|H)ant in the room is your position’s failure to find a way to test its claims.

Whoopsie- but thanks for admitting that you don’t have a clue.

Hey all,

I’m conducting a trial experiment next week and I’d like to use Bayes’ theorem to come up with an expected result in advance. I’m not sure how to make it all work and would like some help.

It involves human-robot interaction. I can post more details if anyone has some time and wants to give it a go.

How is modern evolutionary theory Bayesian?

Mung, https://en.wikipedia.org/wiki/Bayesian_inference_in_phylogeny

Bayesian versus Frequentist

check it out

peace

What is the probability that a woman with a tattoo of Bayes Theorem has long hair?

I like the reference to having half of a belief and one-sixth of a belief. That was funny.

Not so unusual among gamblers …

Mung, If only there were some means of discovering answers to such questions.

Here is how.

Welcome to TSZ, Ido Pen.

But why Bayes’ Theorem?

Probability Theory: The Logic of Science

Belated Merry Christmas, Richard Hughes.

Statistical Rethinking – Lecture 01 – YouTube

Frankie, Waaaah waaaah, more ID-only anti-evolutionary flailings.

stcordova, Thanks Sal. You too.

Except ID is not anti-evolution, Richie.

http://crantastic.org/packages/LearnBayes