Monday, September 03, 2007

Dembski/Marks - Article Number One

Please everyone, could we just get back to science, thanks. This is article number one of three. Judging from the critical response(?), these three articles are indeed a "slam dunk." Sharpen your pencils. Roll up your sleeves. Let me know what you think about these articles and their implications for science. Thank you all in advance for your attention to these papers.

8 comments:

William Brookfield said...

Jeff Shallit, in "Bethell the Buffoon" (September 2007, http://recursed.blogspot.com/2007_09_01_archive.html), says:

"Bethell shows a profound misunderstanding of information theory when he claims, "Francis Crick, the co-discoverer of the structure of DNA was asked how the all-important coding information found its way into the DNA in the first place. It's so complex that a reliance on random events will never get us there." Bethell apparently doesn't understand that in the Kolmogorov theory of information, complexity is the same as randomness. It's easy to get complexity; all you need is a source of random events."

OK, so now it is "easy" to get complex coded DNA information. No mystery here. Jeff has figured it all out for us! Randomness = Information... NOT.

Clearly Bethell is talking about the DNA information being complex (specified complexity), not just the string being algorithmically incompressible (K-complex).
I discussed this on the “Darwinism is a Hoax” thread: http://icon-rids.blogspot.com/2006_10_16_archive.html

“When one is talking about random gene duplications and random mutations, the complexity being produced here is the wrong type of complexity, i.e., (m)K-complexity, not (i)K-complexity. The "K" stands for the information theorist "Kolmogorov" and refers to the algorithmic incompressibility often used to identify randomness in digitized/quantized systems. The (m) refers to the word "mock," in that digitization is "mocking" the uniform (and subsequently non-complex) probability distribution of randomness by forcing randomness to appear complex (when it is not). This type of complexity ((m)K-complexity) is produced by the collision of randomness and any digitized system that is attempting to accurately express it. Digitization is a problem for randomness because every digit (ATGC) is itself a manifestation of order (the opposite of randomness). DNA (ATGC) is a digitized system.”

Informational K-complexity is specific in that it contains specific instructions that map to biological function and form. This is what is needed for functional DNA.

Randomness and digitization provide only unspecified (m)K-complexity (noise).
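To make the incompressibility point concrete, here is a small sketch of mine (not from the original discussion) in Python. It uses zlib compression as a rough, computable stand-in for K-complexity; true Kolmogorov complexity is uncomputable, and the seed, string lengths, and ATGC alphabet are arbitrary choices for illustration:

```python
# Sketch: zlib compression as a rough, computable proxy for
# Kolmogorov complexity (the real quantity is uncomputable).
import random
import zlib

random.seed(1)  # fixed seed so the sketch is reproducible

# A "random" ATGC string: each base drawn uniformly and independently.
random_dna = "".join(random.choice("ATGC") for _ in range(10_000))

# A highly ordered ATGC string of the same length: one motif repeated.
ordered_dna = "ATGC" * 2_500

def compressed_size(s: str) -> int:
    """Length in bytes of the zlib-compressed string."""
    return len(zlib.compress(s.encode("ascii"), 9))

# The random string resists compression (it looks K-complex);
# the ordered string compresses to almost nothing.
print(compressed_size(random_dna))
print(compressed_size(ordered_dna))
```

In this sense Shallit's point holds for plain K-complexity: a random source maximizes it. The dispute in the thread is over whether that is the kind of complexity functional DNA requires.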

To Summarize:

Informational K-complexity is specific and requires error correction (i.e., protection from random errors in order to maintain functionality). See, for example, “Error Correction Runs Deep” (http://www.iscid.org/boards/ubb-get_topic-f-6-t-000170.html) and “Error Correction Runs Yet Deeper” (http://telicthoughts.com/error-correction-runs-yet-deeper/), both by Mike Gene.

Non-informational (m)K-complexity is unspecified. Being random and non-functional, it requires no protection from random errors (for it is but a string of random errors).

The word “in-FORM-ation” is based on the root word “form.” The word “form” refers to “order” (the opposite of randomness) just as the word “formlessness” refers to “randomness.”

In an earlier thread...

http://icon-rids.blogspot.com/2007/07/new-dembskimarks-papers-at-ei-lab.html

...I mentioned Jeff Shallit's name as a possible critic of Dembski and Marks's new papers on information. Unfortunately, Jeff seems to have a profound lack of understanding of what "information" actually is. Without a good understanding of information, a coherent response is not possible.

Anonymous said...

William said: Unfortunately Jeff seems to have a profound lack of understanding of what "information" actually is. Without a good understanding of information, a coherent response is not possible.

William, Jeffrey Shallit is a professor of computer science at a major Canadian university and an author of the textbook Algorithmic Number Theory. From that I can reasonably conclude that Shallit understands the concept of information. What's your expertise in this area?

William Brookfield said...

Olegt,

As I said in the first thread on this blog, I don't have any formal qualifications. Dembski, however, has two PhDs. Are you going to address any of these arguments? BTW, my post was about algorithmic information theory (K-complexity), not "algorithmic number theory." Do you actually support Shallit's answer?

"Bethell shows a profound misunderstanding of information theory when he claims, "Francis Crick, the co-discoverer of the structure of DNA was asked how the all-important coding information found its way into the DNA in the first place. It's so complex that a reliance on random events will never get us there." Bethell apparently doesn't understand that in the Kolmogorov theory of information, complexity is the same as randomness. It's easy to get complexity; all you need is a source of random events."

William Brookfield said...

The link at the top appears to be getting old. Go here for "Bethell the Buffoon."

William Brookfield said...

I think I should say a few (hopefully clarifying) words about (m)K-complexity.

(m)K-complexity arises as a reflection of randomness in a digitized/quantized system. While this type of non-informational complexity expresses the “haphazard” or “lawless” nature of randomness, it fails to reflect the uniform nature of randomness. I am defining “randomness” here as “a uniform probability distribution over a set of possible outcomes.” If your jackpot (three lemons) appears more (or less) often than that given by chance, then the internal probability distribution of the slot machine is biased, non-uniform, non-random. It is non-randomly "weighted" in the direction of more (or fewer) jackpots. In the case of (m)K-complexity, the simple (non-complex) uniformity of randomness is being blocked by the fact that digitized/quantized systems are necessarily composed of non-uniform digits or units.

Nonetheless, the uniformity of randomness reappears whenever an (m)K-complex string is long enough to permit a statistical analysis of the Frequency Of Occurrence (FOO) for every available outcome. In these cases, the uniformity is reflected in the fact that no outcome is preferred (weighted) over any other outcome. When the jackpot configuration (to use the example) is not occurring (over the long haul) with a frequency greater or less than any of the other available configurations then the machine is deemed to be “random” or “fair.”
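The frequency-of-occurrence check described above can be sketched in a few lines of Python. This is a minimal illustration of mine, assuming a uniform pseudo-random source over the four-letter ATGC alphabet; the string length and the 5% tolerance are arbitrary choices:

```python
# Sketch: a frequency-of-occurrence (FOO) uniformity check.
# Counts each outcome in a long "random" string and reports how far
# each count deviates from the uniform expectation N / 4.
import random
from collections import Counter

random.seed(1)  # fixed seed so the sketch is reproducible

N = 100_000
outcomes = "ATGC"
string = "".join(random.choice(outcomes) for _ in range(N))

counts = Counter(string)
expected = N / len(outcomes)

for symbol in outcomes:
    deviation = abs(counts[symbol] - expected) / expected
    print(symbol, counts[symbol], f"{deviation:.2%}")
# For a fair (uniform) source, no outcome is preferred: every count
# lands near N/4, and the deviations shrink as the string grows.
```

A biased "slot machine" would fail this check: one symbol's count would sit persistently above or below N/4 as N grows.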

Thus, while “a source of random events” projected into a digitized system will indeed result in (m)K-complexity, it will not produce specified informational (i)K-complexity (specified complexity).

Anonymous said...

William,

I dealt directly with one of the three articles by Dembski and Marks, "Active Information in Evolutionary Search," at Panda's Thumb a while ago. Here is the link.

The article you are interested in discussing provides definitions and an overall strategy for criticizing evolutionary algorithms, but it does not itself analyze any specific evolutionary algorithm. For the actual critique, that paper refers to a third article by Dembski and Marks, "Unacknowledged Information Costs in Evolutionary Computing: A Case Study on the Evolution of Nucleotide Binding Sites."

Ironically, the third article has been taken down by Marks. The reason? The calculation in that article was based on a program that turned out to contain a bug. Their numbers were off by 13 orders of magnitude. You can read all about it here.

To sum it up, two of the three articles by Dembski and Marks do not contain any specific criticisms of evolutionary algorithms. The third article, which does contain the critique, has been withdrawn. Let's wait until something concrete resurfaces.

William Brookfield said...

Hi Olegt,

Thanks for the info. Haggstrom has now replied to the second article here.

William Brookfield said...

The problem I have with Haggstrom's article is that he hasn't answered the crucial question: "Is environmental change instructive (such that new instructions are being added to the genome), or is the environment concussive (such that organisms are taking a hit and sustaining damage)?" Did environmental change build new instructions into the peppered moth's genome, or did light-colored moths just get killed (hit)?