Monday, June 21, 2010

A potential explanation for heterogeneity of hereditary propensity for religiosity/credulity?

I was making a comment over at Bruce Hood's blog and a thought occurred to me that I think is worthy of its own blog post.

Pardon me for failing to dig up a citation for this, but there appears to be growing evidence that propensity for religiosity and other o'er-credulous beliefs has a hereditary component. In other words, there could be genes that make you more likely to believe without evidence (or to state it conversely, there could be genes that make you more skeptical).

One problem with this is that it takes fairly specific conditions to maintain genotypic heterogeneity in a population. If one phenotype enjoys a clear selective advantage over the other, then under normal conditions the corresponding genotype should eventually come to dominate the entire population.

Now, I'm totally a layperson in this regard, and I won't pretend that I understand the mathematics behind what enables heterogeneity, nor that I could list more than a couple of conditions that allow it. But I happen to know offhand that one way you can get heterogeneity is via a host/parasite relationship.

The idea goes like this: Host phenotype A is resistant to parasite phenotype X, but vulnerable to parasite phenotype Y. Host phenotype B is the reverse, i.e. resistant to Y but vulnerable to X. All other things being equal, the prevalence of each phenotype in each population will tend to oscillate, with the parasite's cycle lagging a bit behind the host's. If the parasite population is dominated by X, then natural selection will cause the host population to tilt towards A... after which natural selection causes the parasite population to tilt towards Y, which in turn causes the host population to shift to B, etcetera ad infinitum.
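If it helps to see those dynamics spelled out, here's a minimal toy simulation of that kind of host/parasite cycle. Every number in it (the fitness penalties, the starting frequencies, the functional form) is something I made up purely for illustration, not anything drawn from a real model:

```python
# Toy host/parasite frequency oscillation. All parameters are invented.

def step(host_a, par_x, s=0.3, t=0.4):
    """One generation; returns the new frequencies of host A and parasite X."""
    host_b = 1 - host_a
    par_y = 1 - par_x
    # Host A is resistant to X but vulnerable to Y; host B is the reverse,
    # so each host type is penalized in proportion to the parasite type that can infect it.
    fit_a = 1 - s * par_y
    fit_b = 1 - s * par_x
    # Each parasite type is rewarded in proportion to the host type it can infect.
    fit_x = 1 + t * host_b
    fit_y = 1 + t * host_a
    new_host_a = host_a * fit_a / (host_a * fit_a + host_b * fit_b)
    new_par_x = par_x * fit_x / (par_x * fit_x + par_y * fit_y)
    return new_host_a, new_par_x

host_a, par_x = 0.6, 0.9   # arbitrary starting mix
for gen in range(201):
    if gen % 20 == 0:
        print(f"gen {gen:3d}: host A = {host_a:.2f}, parasite X = {par_x:.2f}")
    host_a, par_x = step(host_a, par_x)
```

Run it and the two frequencies chase each other up and down rather than settling on a single dominant type in either population, which is the whole point.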

I'm sure there's an evolutionary biologist reading this right now screaming about the horrible state of evolution education in America's schools, but I think I at least got the gist of it... maybe?

Okay, so now on to inherited credulity... one account for why humans are so damn credulous, given by Michael Shermer, is that Type I errors (false positives) tend to be far less costly than Type II errors (false negatives), so natural selection will favor individuals who are biased towards Type I errors. Shermer's favored example is the rustle in the grass that might be a tiger. If you assume it's a tiger and you're wrong, then you waste a little time and energy fleeing from a predator that wasn't there. If you assume it's the wind and you're wrong, then you are dinner.

One thing that's always bothered me about Shermer's account is that he rarely, if ever, explicitly brings probability into it. It's not enough that the cost of a Type I error is less than that of a Type II error -- what matters is that the cost of a Type I error, multiplied by the probability of making one, is less than the cost of a Type II error multiplied by the probability of making that one. Presumably, most rustles in the grass are in fact not tigers, so truly showing that natural selection would favor "patternicity" requires a bit more than what Shermer has explicitly stated.
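To make that explicit with some numbers, here's the comparison I mean. The costs and the tiger probability below are pulled entirely out of thin air, so treat them as placeholders rather than anything Shermer (or anyone else) has measured:

```python
# Expected-cost comparison for the rustle-in-the-grass decision.
# All three numbers are invented for illustration.

p_tiger = 0.01            # suppose only 1 in 100 rustles is actually a tiger
cost_false_alarm = 1      # energy wasted fleeing from nothing (a Type I error)
cost_missed_tiger = 1000  # being eaten (a Type II error)

# "Always flee": you pay the false-alarm cost every time there's no tiger.
expected_cost_flee = (1 - p_tiger) * cost_false_alarm

# "Always assume it's the wind": you pay the catastrophic cost every time there is one.
expected_cost_ignore = p_tiger * cost_missed_tiger

print(f"always flee:   expected cost = {expected_cost_flee:.2f}")
print(f"always ignore: expected cost = {expected_cost_ignore:.2f}")
# The jumpy bias only pays off when
# (1 - p_tiger) * cost_false_alarm < p_tiger * cost_missed_tiger.
```

With those made-up numbers the jumpy strategy wins easily (an expected cost of 0.99 versus 10), but drop the tiger probability low enough, or raise the cost of a false alarm high enough, and it stops winning -- which is exactly the dependence on probability that I wish Shermer would spell out.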

It was while complaining about Shermer's omission on Bruce Hood's blog that it occurred to me that we might have a solution to the problem of heterogeneity in inherited credulity. I now copy-paste from my comment on Bruce's blog, with minor edits:

It's just conceivable that there could be an interaction between patternicity and certain environmental conditions (e.g. prevalence of food, prevalence of predation, etc.) that mirrors the interaction between host and parasite — an interaction we already know to mathematically support heterogeneity in a population.

If you'll pardon a "Just So" flight of fancy on my part: We could imagine a population of prey organisms where some individuals are "strongly" biased towards Type I errors, and others are "weakly" biased (whatever that would mean, but run with me here for a second). If conditions of predation were static, we'd expect one of the two phenotypes to become dominant. But if the local density of predators fluctuates cyclically, then the ratio of the two phenotypes in the prey population could conceivably track that cycle. In years with an unusually dense predator population, the "weakly"-biased individuals tend to get eaten before they can reproduce, favoring the "strongly"-biased individuals; while in years with few predators, the "weakly"-biased individuals waste less time and energy fleeing false alarms, gather more food than their jumpy cousins, and therefore out-reproduce them.

Furthermore, this very shift could itself exert a selective influence on the predator population. As the prey population becomes dominated by "weakly"-biased individuals, it's boom time for the predators. Those dumb prey animals just stand there like they didn't even notice that rustling in the grass. The predator population grows, which, as we already postulated (remember, this is all wild speculation), will cause the "strongly"-biased phenotype to be favored in the prey population. Suddenly it seems like the prey have smartened up, as they flee before they could possibly have seen the predator. More predators starve, their population wanes, and the cycle begins again with the "weakly"-biased prey being favored.

There's boatloads of math required before we could say this is genuinely plausible, but it at least seems possible in principle. IANAEvolutionary Biologist, nor do I have any prayer of working out the mathematical model myself, so for now this will have to stand as merely another "Just So" story by an armchair biologist.
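For anyone who feels like poking at the idea anyway, here's roughly the kind of cartoon I have in mind, in code. Every parameter is invented and the update rules are the crudest ones I could think of, so this is a sketch of the feedback loop and nothing more:

```python
# Toy coupled model: fraction of "strongly"-biased prey vs. predator density.
# Every number below is invented; this is a cartoon, not a population-genetics model.

def simulate(generations=400, report_every=40):
    strong = 0.5      # fraction of prey with the "strong" Type I bias
    predators = 1.0   # predator density, arbitrary units
    for gen in range(generations):
        weak = 1.0 - strong

        # Strongly-biased prey pay a constant foraging tax (all those false alarms)
        # but suffer little predation; weakly-biased prey forage better but predation
        # hits them much harder as predators become denser.
        fit_strong = max(1.00 - 0.10 * predators, 0.01)
        fit_weak = max(1.20 - 0.40 * predators, 0.01)

        # Next generation's prey mix, weighted by relative fitness.
        total = strong * fit_strong + weak * fit_weak
        strong = strong * fit_strong / total

        # Predators thrive when gullible prey are common and starve when they're rare.
        predators *= 1.0 + 0.10 * (weak - 0.5)
        predators = min(max(predators, 0.05), 5.0)

        if gen % report_every == 0:
            print(f"gen {gen:3d}: strong-bias fraction = {strong:.2f}, "
                  f"predators = {predators:.2f}")

simulate()
```

With those particular made-up numbers the strong-bias fraction and the predator density swing back and forth rather than one phenotype simply taking over, which is all the "Just So" story above needs -- but again, none of this says anything about whether real populations behave this way.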

Hey, wait a minute, postulating a superficially-plausible-sounding evolutionary explanation for human behavior, and then throwing it out there like it's fact without even bothering to test it for plausibility against a mathematical model? I think I've found a second career in evolutionary psychology!
