
Peer-Reviewed Paper Successfully Measures Specified Complexity in Computer Images

A new peer-reviewed article in the journal IET Computer Vision, “Measuring meaningful information in images: algorithmic specified complexity,” by Winston Ewert, William A. Dembski, and Robert J. Marks II, again applies the concept of algorithmic specified complexity (ASC) as a measure of meaning versus randomness in a dataset. In a previous article I noted that the team at the Evolutionary Informatics Lab used ASC to distinguish random patterns in Conway’s Game of Life from those that were constructed by a programmer. In this paper the authors try to distinguish “images which contain content from those which are simply redundancies, meaningless or random noise.” They begin by asking:

Is information being created when we snap a picture of Niagara Falls? Would a generic picture of Niagara Falls on a post card contain less information than the first published image of a bona fide extraterrestrial being?

They attempt to answer these questions by stating:

For an image to be meaningfully distinguishable, it must relate to some external independent pattern or specification. The image of the sunset is meaningful because the viewer experientially relates it to other sunsets in their experience. Any image containing content rather than random noise fits some contextual pattern. Naturally, any image looks like itself, but the requirement is that the pattern must be independent of the observation and therefore the image cannot be self-referential in establishing meaning. External context is required. If an object is both improbable and specified, we say that it exhibits “specified complexity.”

So how can we detect whether there is such a complex and specified pattern?

The more the image can be described in terms of a pattern, the more compressible it is, and the more specified. For example, a black square is entirely described by a simple pattern, and a very short computer programme suffices to recreate it. As a result, we conclude that it is highly specified. In contrast, an image of randomly selected pixels cannot be compressed much if at all, and thus we conclude that the image is not specified at all. Images with content such as sunsets take more space to describe than the black square, but are more specified than random noise. Redundancy in some images is evidenced by the ability to approximately restore groups of missing pixels from those remaining.

The black square might be compressible and specified, but that does not mean it is complex. As they note, “The random image is significantly more complex, whereas the solid square is much less complex.”
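To make the contrast concrete, here is a minimal sketch of the compression test, assuming the Pillow and NumPy libraries as stand-ins for the paper’s tooling. The paper reports PNG encoding sizes; this particular code is my illustration, and exact byte counts will vary with the encoder:

```python
# Minimal sketch of the compressibility contrast, assuming Pillow and
# NumPy; byte counts depend on the encoder, so treat them as illustrative.
import io

import numpy as np
from PIL import Image

def png_size_bytes(pixels: np.ndarray) -> int:
    """Size in bytes of the PNG encoding of an 8-bit grayscale array."""
    buf = io.BytesIO()
    Image.fromarray(pixels, mode="L").save(buf, format="PNG")
    return buf.getbuffer().nbytes

side = 256
black_square = np.zeros((side, side), dtype=np.uint8)  # pure pattern
rng = np.random.default_rng(0)
random_noise = rng.integers(0, 256, size=(side, side), dtype=np.uint8)

print(png_size_bytes(black_square))  # tiny: the pattern compresses away
print(png_size_bytes(random_noise))  # close to the raw 256 * 256 bytes
```

The solid square compresses to a tiny file while the noise stays near its raw size, matching the paper’s point that the square is highly specified but not complex, and the noise is complex but not specified.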

But these are relatively simple cases. The authors then tackle harder ones, such as a photograph of Louis Pasteur with increasing amounts of random noise added. As their model predicts, they find that the more noise is added to the image, the lower its ASC. Similarly, when an image of Einstein is resized so that it loses some of its clarity, it too loses ASC.
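Why does noise lower the measure? Noise makes the image less compressible, and the compressed length is the descriptive side of ASC. Continuing the sketch above (the replace-a-fraction-of-pixels noise model here is my own illustration, not the paper’s exact procedure):

```python
# Hedged illustration: corrupt a structured image with increasing noise
# and watch the PNG encoding grow; since the ASC estimate subtracts the
# compressed length, more noise means lower ASC. Continues the sketch
# above (png_size_bytes, side, rng).
def add_pixel_noise(pixels: np.ndarray, fraction: float,
                    rng: np.random.Generator) -> np.ndarray:
    """Replace a given fraction of pixels with uniformly random values."""
    noisy = pixels.copy()
    mask = rng.random(pixels.shape) < fraction
    noisy[mask] = rng.integers(0, 256, size=int(mask.sum()), dtype=np.uint8)
    return noisy

# A smooth sinusoidal pattern stands in for a photograph.
t = np.linspace(0.0, 4.0 * np.pi, side)
structured = (127.5 * (1.0 + np.outer(np.sin(t), np.sin(t)))).astype(np.uint8)

for fraction in (0.0, 0.1, 0.3, 0.5):
    print(fraction, png_size_bytes(add_pixel_noise(structured, fraction, rng)))
# the compressed size rises with the noise fraction, so the ASC estimate falls
```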

But what about the case of a picture of “stick men on a sea of noise”? They found that ASC was still able to detect the presence of a complex and specified feature even when it was surrounded by noise. They conclude that ASC is an effective methodology for distinguishing random image data from meaningful images:

We have estimated the probability of various images by using the number of bits required for the PNG encoding. This allows us to approximate the ASC of the various images. We have shown hundreds of thousands of bits of ASC in various circumstances. Given the bound established on producing high levels of ASC, we conclude that the images containing meaningful information are not simply noise. Additionally, the simplicity of an image such as the solid square also does not exhibit ASC. Thus, we have demonstrated the theoretical applicability of ASC to the problem of distinguishing information from noise and have outlined a methodology where sizes of compressed files can be used to estimate the meaningful information content of images.
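Schematically, ASC is the image’s surprisal under a chance hypothesis minus a bound on its conditional description length, with the PNG encoding supplying that bound. The sketch below, continuing from the code above, uses a uniform 8-bits-per-pixel null model purely for illustration; as the passage just quoted notes, the authors estimate the probability itself from PNG encoding sizes, so their numbers are computed differently:

```python
# Schematic ASC estimate: surprisal under a chance hypothesis minus a
# compression-based bound on description length. The uniform 8-bits-per-
# pixel null model is an assumption for illustration only; the paper
# derives its probability estimates from PNG encoding sizes instead.
def asc_estimate_bits(pixels: np.ndarray) -> int:
    """ASC(x) ~ -log2 P(x) minus bits in a compressed description of x."""
    surprisal = 8 * pixels.size               # uniform null: 8 bits per pixel
    description = 8 * png_size_bytes(pixels)  # PNG bound, in bits
    return surprisal - description

print(asc_estimate_bits(structured))    # compressible: large estimate
print(asc_estimate_bits(random_noise))  # incompressible: near zero
```

The choice of chance hypothesis matters: under the uniform null sketched here even the solid square would score high, whereas estimating the probability from compressed sizes, as the authors describe, is what keeps a merely simple image from exhibiting ASC.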

The relevance to intelligent design is clear: if ASC is a useful tool for distinguishing designed images from random ones, then perhaps it can be applied to biological systems or other natural structures to detect design there as well.

 

Casey Luskin

Associate Director and Senior Fellow, Center for Science and Culture
Casey Luskin is a geologist and an attorney with graduate degrees in science and law, giving him expertise in both the scientific and legal dimensions of the debate over evolution. He earned his PhD in Geology from the University of Johannesburg, and BS and MS degrees in Earth Sciences from the University of California, San Diego, where he studied evolution extensively at both the graduate and undergraduate levels. His law degree is from the University of San Diego, where he focused his studies on First Amendment law, education law, and environmental law.


Tags

Computational Sciences, Evolutionary Informatics Lab, peer-review, Robert Marks, science, William Dembski