January 11, 2015 at 11:13 am
· Filed under Uncategorized
In describing a new HIV evolution paper in Science, Dan Graur (aka “Judge Starling”) writes:
“The only thing “novel” about the analysis was the use of a Bayesian method for phylogeographic inference. Interestingly, as in all examples of its use that I have seen so far, the method tells you nothing you do not know or cannot infer by much simpler means.”
I haven’t looked at the paper in question, but I have noticed this common infatuation with pointlessly (even recklessly) complex statistical methods.
Permalink
January 11, 2013 at 6:23 pm
· Filed under Uncategorized
From the world of PoliSci comes this discussion about the use of preregistration of studies and mock reports. It's on my ever-lengthening "to read" list. My impression of this strategy (without having read the articles) is that research can be more informative if we openly specify our theory and predictions before collecting the data to test them. This would counter the bias toward statistically significant results and the implicitly post-hoc nature of current scientific publication practices.
Permalink
September 11, 2012 at 4:51 am
· Filed under Uncategorized
I just learned of the web service "RANDOM.ORG – True Random Number Service" via a Python module (http://pypi.python.org/pypi/randomdotorg/).
It's clever, but I have to wonder about this distinction between "true randomness" and pseudorandomness. I understand the non-randomness of pseudorandom algorithms; I'm just not sure I buy that a natural process can be truly random. I don't know whether they are relying on the complexity of the process or on quantum theory.
Either way, I think I'd prefer a pseudorandom algorithm on my own machine over a supposedly random value sent to me over the network. Even if my intention is to have a neutral arbiter in some game of chance, I don't see the benefit of "true" randomness from some public server over local pseudorandomness.
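Just to make my preference concrete, here is a minimal sketch in Python of the two options: drawing pseudorandom integers locally from the standard library versus fetching "true" random integers from RANDOM.ORG over HTTP. The query string below reflects my understanding of the site's plain-text integer interface, so treat it (and the dice-roll framing) as an assumption rather than a recipe.

    # A hedged sketch: local pseudorandom "dice rolls" versus integers fetched
    # from RANDOM.ORG. The /integers/ query string is my assumption about the
    # site's plain-text HTTP interface; check its documentation before use.
    import random
    import urllib.request

    # Option 1: local PRNG (CPython's Mersenne Twister).
    local_rolls = [random.randint(1, 6) for _ in range(10)]

    # Option 2: "true" random numbers from a public server over the network.
    url = ("https://www.random.org/integers/"
           "?num=10&min=1&max=6&col=1&base=10&format=plain&rnd=new")
    with urllib.request.urlopen(url) as response:
        remote_rolls = [int(n) for n in response.read().decode("ascii").split()]

    print("local :", local_rolls)
    print("remote:", remote_rolls)

Either way the ten numbers look the same to me; the difference is only in where the unpredictability comes from.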
=======
update: There is a good discussion of this issue on the SuperUser site. I love the StackExchange Network. Two important points stand out:
1) Pseudo-random number generators can be made less predictable by continually incorporating additional external information (entropy) into their state. I assume this is what Random.org is doing.
2) For some purposes, pseudo-random numbers are more appropriate than truly random numbers. For instance, a stochastic simulation requires frequent bug-hunting, which would be nearly impossible if its "random" actions were not generated purely from the internal state of the system (see the sketch below).
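To make that second point concrete, here is a small sketch of why deterministic seeding matters, using a toy random-walk simulation (the function name and parameters are made up for illustration): a fixed seed replays the run exactly for debugging, while seeding from external entropy gives a different run each time, which loosely echoes point 1.

    # A toy stochastic simulation (a 1-D random walk); the function name and
    # parameters are hypothetical, invented for this example.
    import os
    import random

    def random_walk(steps, seed):
        """Return the final position of a random walk driven entirely by `seed`."""
        rng = random.Random(seed)       # all "randomness" flows from this seed
        position = 0
        for _ in range(steps):
            position += rng.choice((-1, 1))
        return position

    # Debugging: a fixed seed replays the exact same trajectory every time.
    assert random_walk(1000, seed=42) == random_walk(1000, seed=42)

    # Ordinary runs: seed from external entropy so each run differs (point 1).
    print(random_walk(1000, seed=os.urandom(16)))

The point is that everything "random" in the simulation flows from the seed, so any buggy run can be reproduced on demand simply by reusing it.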
Permalink
August 16, 2012 at 2:27 pm
· Filed under research, Uncategorized
Here is the abstract from PubMed. Right now, I have no comment except to say that this does not change my previously published opinions about the importance of recombination in the evolution of E. coli. More later.
Evidence of non-random mutation rates suggests an evolutionary risk management strategy.
Abstract
A central tenet in evolutionary theory is that mutations occur randomly with respect to their value to an organism; selection then governs whether they are fixed in a population. This principle has been challenged by long-standing theoretical models predicting that selection could modulate the rate of mutation itself. However, our understanding of how the mutation rate varies between different sites within a genome has been hindered by technical difficulties in measuring it. Here we present a study that overcomes previous limitations by combining phylogenetic and population genetic techniques. Upon comparing 34 Escherichia coli genomes, we observe that the neutral mutation rate varies by more than an order of magnitude across 2,659 genes, with mutational hot and cold spots spanning several kilobases. Importantly, the variation is not random: we detect a lower rate in highly expressed genes and in those undergoing stronger purifying selection. Our observations suggest that the mutation rate has been evolutionarily optimized to reduce the risk of deleterious mutations. Current knowledge of factors influencing the mutation rate—including transcription-coupled repair and context-dependent mutagenesis—does not explain these observations, indicating that additional mechanisms must be involved. The findings have important implications for our understanding of evolution and the control of mutations.
Permalink