Friday, August 29, 2008

My "Correlational Improbability" proposal

"Correlational Improbability"- July 31/08 is my way of mathematically quantifying the second Dembskian notion of "an independently given, detachable pattern."

From ResearchID.org, re "Specified Complexity": "The first component is the criterion of complexity or improbability. The second is the criterion of specificity, which is an independently given, detachable pattern."

The (often critiqued) second component, "specificity," is mathematically quantifiable as a correlational improbability. Multiplying this improbability by the initial improbability (the first component) gives the pertinent "hyper-hyperfinite" or "trans-hyperfinite" internal information/improbability figure for any given complex, functional, pseudo-platonic (maintained) system. Note that the first component, "improbability," is itself a hyperfinite number below the universal probability bound. See "Specified_Complexity" at ISCID.
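To make the arithmetic concrete, here is a minimal Python sketch of the proposed multiplication, assuming (purely for illustration) that the two component improbabilities are independent; the function name and the toy numbers are mine, not part of the original proposal:

```python
import math

def specified_complexity_bits(p_complexity, p_specificity):
    """Combine the first component (the improbability of the
    configuration arising by chance) with the proposed second
    component (the "correlational improbability" of matching an
    independently given pattern) by multiplying, then convert
    the combined improbability to bits."""
    p_combined = p_complexity * p_specificity
    return -math.log2(p_combined)

# Toy figures: a 1-in-2^150 configuration that also matches a
# 1-in-2^150 independent pattern gives 300 bits of combined
# information/improbability.
print(specified_complexity_bits(2**-150, 2**-150))  # 300.0
```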

Note added Sept 24/08: I have initiated a separate thread at ISCID in order to discuss this topic.

6 comments:

William Brookfield said...

I should perhaps just add that (as I see it) the "order" or "form" in "in-FORM-ation" is found on the Y axis, whereas the compressed K-complexity that appears random (but is not random) is found on the X axis. The X axis and Y axis could also be reversed. The important thing is that, from a one-dimensional perspective (X axis or Y axis only), the correlational order/form is hidden in the present Shannon and Kolmogorov formulations.
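A small Python illustration of this "hidden on one axis" point: the Shannon measure below depends only on symbol frequencies, so a highly ordered string and a shuffled copy of it score identically (the example strings are my own toy data):

```python
import math
import random
from collections import Counter

def shannon_entropy_bits(s):
    """Per-symbol Shannon entropy from the empirical symbol
    frequencies. It depends only on the distribution of symbols,
    not on their order."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

structured = "AB" * 10                 # highly correlated/ordered
scrambled = list(structured)
random.shuffle(scrambled)              # same symbols, order destroyed
scrambled = "".join(scrambled)

print(shannon_entropy_bits(structured))  # 1.0 bit/symbol
print(shannon_entropy_bits(scrambled))   # 1.0 bit/symbol -- identical
```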

William Brookfield said...

In "Algorithmic Information Theory" (Grünwald/Vitányi, 2007), Section 5.2, we read: "in Shannon's Theory 'information' is fully determined by the probability distribution on the set of possible messages, and unrelated to the meaning, structure or content of individual messages."

Thus, in Shannon's theory, 'information' is "unrelated" to information as it is commonly understood: the "meaning, structure and content" of the message (the actual information).

While "meaning" and "message content" are mathematically slippery, "structure" is not and it is the amount/amplitude of structure that my "correlational improbability" is intended to quantify.

I would not say that Shannon's and Kolmogorov's formulations are unrelated to information, but rather that these formulations quantify only the X axis of information while missing the Y axis: the inherent specifiedness/structure/correlatedness of information.
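One computable stand-in for this missing Y axis (my illustration, not a formulation from Shannon or Kolmogorov) is the mutual information between adjacent symbols, which is near zero for an uncorrelated arrangement but large for a correlated/structured one:

```python
import math
import random
from collections import Counter

def adjacent_mutual_information(s):
    """Mutual information (in bits) between adjacent symbols: a
    simple proxy for the correlational structure that symbol-
    frequency entropy alone cannot see."""
    pairs = list(zip(s, s[1:]))
    n = len(pairs)
    joint = Counter(pairs)
    left = Counter(a for a, _ in pairs)
    right = Counter(b for _, b in pairs)
    return sum(
        (c / n) * math.log2((c / n) / ((left[a] / n) * (right[b] / n)))
        for (a, b), c in joint.items()
    )

ordered = "AB" * 50
scrambled = "".join(random.sample(ordered, len(ordered)))

print(adjacent_mutual_information(ordered))    # ~1.0 bit: strongly correlated
print(adjacent_mutual_information(scrambled))  # near 0: correlation destroyed
```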

William Brookfield said...

One reason that I am presenting my ideas is that there exists a significant amount of confusion in the scientific community regarding "randomness" and "information."

Jeff Shallit, in "Bethell the Buffoon" (2007), says...

"Bethell shows a profound misunderstanding of information theory when he claims, 'Francis Crick, the co-discoverer of the structure of DNA was asked how the all-important coding information found its way into the DNA in the first place. It's so complex that a reliance on random events will never get us there.' Bethell apparently doesn't understand that in the Kolmogorov theory of information, complexity is the same as randomness. It's easy to get complexity; all you need is a source of random events."

If you were to smash a window, you would produce a random, K-complex mess of broken glass, but this is not structural complexity. "Information" is a form of coordinated, correlated structural complexity. Random (uncoordinated, unspecified, and uncorrelated) events cannot produce this type of (informational/structural) K-complexity.
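The smashed-window contrast can be made visible with compressed length as a rough, computable proxy for K-complexity (Kolmogorov complexity itself is uncomputable); the byte strings here are my own toy data. Random bytes are complex in Shallit's sense (incompressible) yet carry no correlational structure, while the ordered string is structured and highly compressible:

```python
import os
import zlib

def compressed_size(data: bytes) -> int:
    """Length of the zlib-compressed data: a crude, computable
    proxy for Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

random_mess = os.urandom(1000)   # the "smashed window": random bytes
ordered = b"AB" * 500            # same length, but correlated/structured

print(compressed_size(random_mess))  # ~1000 bytes: incompressible, K-complex
print(compressed_size(ordered))      # a few dozen bytes: highly compressible
```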

William Brookfield said...

Re "Bethel the Buffoon" I should perhaps mention that I do not support the practice of name calling. Some day in some future civiliation (that is in fact "a civilization") people will be treated with respect. Till then such barbarism is "par for the course."

William Brookfield said...

Just to be clear, my proposal here is that for "information" to be properly quantified (so as to include its non-random specificity and the subsequent correlational implications), "information" must be proportional to the negative log (to base 2) of the probability squared.
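In other words, the proposal is I = -log2(p^2) = -2*log2(p): exactly double the standard Shannon self-information -log2(p). A quick numerical check in Python (the toy probability is mine):

```python
import math

def correlational_information_bits(p):
    """Proposed measure: -log2(p**2), which works out to twice
    the standard Shannon self-information -log2(p)."""
    return -math.log2(p ** 2)

p = 2 ** -10                              # a 10-bit-improbable event
print(-math.log2(p))                      # 10.0 -- Shannon self-information
print(correlational_information_bits(p))  # 20.0 -- the proposed measure
```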

It should also be noted that "randomness/chance" is devoid of correlations. The reason that the combined probability is so low is that, in a perfect system (with infinite resolution), the probability of order/information appearing "by chance" would be zero.

That there is a non-zero probability at all is the result of the residual order (endogenous information) already in the system itself. This is true for all internally free but nonetheless finite/constrained/ordered systems.

William Brookfield said...

I have been reviewing Dembski's approach to design detection and I think I can see the essential difference...

"It's the combination of a simple pattern (macroscopic?) and a low probability (microscopic/bits&bytes?) that should arouse the suspicion of design, according to Dembski"

Unlike Dembski's approach, my correlational analysis remains at the microscopic level and can subsequently be seamlessly connected to the existing information theories (Shannon & Kolmogorov).