Whenever you harness a random phenomenon for a function, you are doing intelligent design. For instance, raindrops falling on the ground are unpredictable, but the moment you dig a ditch to channel them to run a waterwheel, you have used your goal-directed intelligence for a pre-determined purpose, even if the inputs were random. Evolutionists routinely miss this distinction. Maybe it’s because they just hope their bottom-up theory is true.
The authors of a paper in Nature Nanotechnology, for example, commit the fallacy right in the title: “Evolution of a designless nanoparticle network into reconfigurable Boolean logic.” If it’s reconfigurable, it’s not designless. If they “evolved” it to do logic, it’s illogical to call it Darwinian, which they say inspired their approach. If they applied their minds to exploit the physical properties of particles for a purpose, then they circumvented the purposelessness of natural selection.
Natural computers exploit the emergent properties and massive parallelism of interconnected networks of locally active components. Evolution has resulted in systems that compute quickly and that use energy efficiently, utilizing whatever physical properties are exploitable. Man-made computers, on the other hand, are based on circuits of functional units that follow given design rules. Hence, potentially exploitable physical processes, such as capacitive crosstalk, to solve a problem are left out. Until now, designless nanoscale networks of inanimate matter that exhibit robust computational functionality had not been realized. Here we artificially evolve the electrical properties of a disordered nanomaterials system (by optimizing the values of control voltages using a genetic algorithm) to perform computational tasks reconfigurably. We exploit the rich behaviour that emerges from interconnected metal nanoparticles, which act as strongly nonlinear single-electron transistors, and find that this nanoscale architecture can be configured in situ into any Boolean logic gate. This universal, reconfigurable gate would require about ten transistors in a conventional circuit. Our system meets the criteria for the physical realization of (cellular) neural networks: universality (arbitrary Boolean functions), compactness, robustness and evolvability, which implies scalability to perform more advanced tasks. Our evolutionary approach works around device-to-device variations and the accompanying uncertainties in performance. Moreover, it bears a great potential for more energy-efficient computation, and for solving problems that are very hard to tackle in conventional architectures. [Emphasis added.]
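The abstract's central claim, that one physical system can be configured "in situ into any Boolean logic gate," can be sketched in software. In this toy analogue (not the paper's method), a 4-bit control word plays the role the six control voltages play in the hardware: it selects which of the sixteen two-input Boolean functions a single element computes.

```python
# A software analogue of one reconfigurable element: the 4-bit
# control word is the element's truth table, so changing the
# control word changes which logic gate the element implements.
GATES = {
    "AND":  0b1000,   # output 1 only for inputs (1, 1)
    "OR":   0b1110,   # output 1 for all inputs except (0, 0)
    "XOR":  0b0110,   # output 1 for (0, 1) and (1, 0)
    "NAND": 0b0111,   # negation of AND
}

def gate(control, a, b):
    # Bit position 2*a + b of the control word is the truth-table
    # entry for inputs (a, b).
    return (control >> (2 * a + b)) & 1
```

For example, `gate(GATES["XOR"], 1, 0)` returns 1, while `gate(GATES["AND"], 1, 0)` returns 0: the same function, reconfigured by its control input rather than rewired.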
On a Continuum
To their credit, the authors do identify their work as “artificial” selection, but they see it on a continuum with natural selection, never making the distinction between unguided natural processes and intelligent processes. They merely assume that intelligence was, at some point in natural selection, an emergent property that allowed their physical brains (which presumably emerged millions of years ago) to “optimize” the properties of “disordered” elements (like the raindrops) into Boolean logic computers (like the waterwheel).
Design is written all over their materials and methods. Yet they persist in claiming there is no design involved. “That our system is truly designless and reconfigurable makes our approach fundamentally different from the designed circuits” of previous attempts, they say. New Scientist fell headlong into the fallacy, comparing what the programmers did with what Darwinian evolution does:
Traditional computers rely on ordered circuits that follow preprogrammed rules, but this limits their efficiency. “The best microprocessors you can buy in a store now can do 10¹¹ operations per second, and use a few hundred watts,” says Wilfred van der Wiel of the University of Twente in the Netherlands. “The human brain can do orders of magnitude more and uses only 10 to 20 watts. That’s a huge gap.”
To close that gap, researchers have tried building “brain-like” computers that can do calculations even though their circuitry was not specifically designed to do so. But no one had made one that could reliably perform calculations.
Van der Wiel and his colleagues have hit the jackpot, using gold particles about 20 nanometres across. They laid a few tens of these grains in a rough heap, with each one about 1 nanometre from its nearest neighbours, and placed eight electrodes around them.
When they applied just the right voltages to the cluster at six specific locations, the gold behaved like a network of transistors — but without the strict sequence of connections in a regular microchip. The system not only performed calculations, but also used less energy than conventional circuitry.
Nothing about the particles told the researchers what voltages to try, however. They started with random values and learned which were the most useful using a genetic algorithm, a procedure that borrows ideas from Darwinian evolution to home in on the “fittest” ones.
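The procedure the quote describes can be sketched in a few lines. The sketch below is illustrative only: the target vector and every parameter are made up, standing in for fitness measurements that, in the experiment, came from the physical device itself.

```python
import random

random.seed(0)

N_VOLTAGES = 6     # the team tuned voltages at six locations
POP_SIZE = 20
GENERATIONS = 50

# Hypothetical stand-in for the real measurement: in the experiment,
# fitness reflected how closely the device output matched the desired
# logic gate; here we simply score distance to an arbitrary target.
TARGET = [0.5, -0.3, 0.1, 0.8, -0.6, 0.2]

def fitness(genome):
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def crossover(a, b):
    # Splice two parent voltage sets at a random cut point.
    cut = random.randrange(1, N_VOLTAGES)
    return a[:cut] + b[cut:]

def mutate(genome, sigma=0.05):
    # Small random perturbation of each voltage.
    return [g + random.gauss(0, sigma) for g in genome]

# Start with random voltage sets, as the researchers did.
population = [[random.uniform(-1, 1) for _ in range(N_VOLTAGES)]
              for _ in range(POP_SIZE)]
initial_error = -max(fitness(g) for g in population)

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]          # keep the fittest
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

final_error = -max(fitness(g) for g in population)
```

Note what the algorithm does not supply on its own: the goal, the fitness criterion, and the selection procedure are all chosen in advance by the programmer.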
All the Rage
It’s interesting that the authors compare their disordered electrical circuits to neural networks. These are all the rage, as intelligent designers seek to improve computers by mimicking the networked architecture of biological brains. Traditional computers are predominantly linear in operation: one calculation’s output is input for the next. Neural networks, being nonlinear, give the advantage of simultaneous operations.
Deep neural networks mimic the brain by creating hundreds of millions of connections between “artificial neurons” organized in layers. “These types of networks can be trained to perform hard classification tasks over huge datasets,” Phys.org says, “with the remarkable property of extracting information from examples and generalizing them to unseen items.” The article explains the advantages:
The way neural networks learn is by tuning their multitude of connections, or synaptic weights, following the signal provided by a learning algorithm that reacts to the input data. This process is in some aspects similar to what happens throughout the nervous system, in which plastic modifications of synapses are considered to be responsible for the formation and stabilization of memories. The problem of devising efficient and scalable learning algorithms for realistic synapses is crucial for both technological and biological applications.
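The weight tuning the quote describes can be illustrated with the smallest possible case: a single artificial neuron learning the OR function by gradient descent. All values here are illustrative, not drawn from the quoted work.

```python
import math
import random

random.seed(1)

# Training data for the OR function: inputs -> target output.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

# One artificial neuron: two synaptic weights plus a bias.
w = [random.uniform(-1, 1) for _ in range(2)]
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

lr = 1.0
for _ in range(2000):
    for (x1, x2), target in data:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)
        grad = (out - target) * out * (1 - out)   # sigmoid derivative
        w[0] -= lr * grad * x1                     # adjust the weights
        w[1] -= lr * grad * x2                     # toward lower error
        b -= lr * grad

# After training, the neuron reproduces the OR truth table.
predictions = [round(sigmoid(w[0] * x1 + w[1] * x2 + b))
               for (x1, x2), _ in data]
```

Again, the "learning" here is steered throughout by an externally supplied error signal: the target outputs are given to the algorithm in advance.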
From Worm to Human
But are biological neural networks “emergent” properties of cells that were not designed for learning? A primer in Current Biology examines neural nets from the simplest worm to the human brain and tries to see if Darwinian evolution connected the dots.
With the aim of discussing the evolution of neural nets, we focus here mainly on animals in which nerve nets form a major part of the nervous system and that have positions in the animal tree of life that are informative for considerations of how nervous systems have evolved (Figures 2 and 3).
In the article, we learn about the simplest of animals, like jellyfish and hydra (phylum Cnidaria), which possess “simple” nerve nets connected to muscle sheaths that allow them to respond to stimuli. Figure 2 compares nerve nets in various animals. We find in Figure 3 a phylogenetic diagram showing the distribution of nerve nets in the animal kingdom. The authors show a sequence of increasing complexity, from earthworms that use nerve nets to perform rhythmic movements like peristalsis, to fruit flies and vertebrates, whose nerve nets are organized into more complex structures like nerve cords, onward and upward to central nervous systems.
At a gross scale, it seems reasonable to connect the dots between hydra and Hyracotherium. “We see that nerve nets are good for many things and are quite versatile systems that can be integrated in varied ways into animal bodies,” the authors say. “…Assuming the nerve net is the earliest neural tissue in which interwoven neurons connect with epithelial sensory cells and internal muscle cells, we might be able to postulate a pathway leading to derived nerve condensations, such as neurite bundles, medullary cords and brains.”
Problems for locating a Darwinian pathway, however, mount as we consider the details:
- Convergence. The deciphered genome of a comb jelly (phylum Ctenophora) has led to a “robustly debated” notion that it is the basal metazoan. “Sponges (Porifera) and placozoans lack neurons, so if comb jellies are a sister taxon to all other metazoans, then either those two taxa have lost neurons during evolution or neurons (and nerve nets) evolved twice independently (see Figure 3).” Either way, how did the first neurons appear?
- Parallel emergence. “In some animals with prominent subepidermal longitudinal nerve cords — for example, vertebrates, the fruit fly Drosophila melanogaster and the annelid Platynereis dumerilii — the molecular and functional organization of the nerve cord show very striking similarities, which has been argued to reflect an ancient origin of the nerve cord. In contrast, comparative morphology and recent advances in solving animal relationships with molecular tools suggest that internalizations from a basiepidermal to a subepidermal condensation happened multiple times independently, for example inside the ribbon worms (Nemertea) and segmented worms (Annelida).”
- Cell complexity. Neurons are not simple. They have specialized ion channels, genes, and enzymes. Moreover, they have to know how to connect to one another and understand each other’s signals. “Recent studies indicate that differentially expressed molecular markers — transcription factors as well as neuropeptides and neurotransmitters — assign specific neurons to different identities and functions.” How did neurotransmitters emerge to carry the electrical signals across synapses? Additionally, the muscles they connect to have to interpret the signals and respond appropriately.
- Development. Nerve nets do not just appear in the adult fully formed. They have to develop in the embryo: meaning, specialized neurons have to diversify from stem cells then migrate into position and make connections. “Thus, the formation of nerve nets presents specific challenges at several levels and it appears that different organisms employ different developmental mechanisms to overcome these challenges and eventually end up with a nerve net-like nervous system.” The authors imply that a simple progression is lacking.
- Unknowns. “It will also be important to acquire a better understanding of the functional properties of different nerve nets. What types of behaviour do they allow, and what advantages might this confer for life in a particular environment? Only by such a multiplicity of studies on a broad range of species will it be possible to understand how nerve nets can be transformed during evolution into more complex architectures, and whether there might be a common mechanism that can explain how similar-looking central nerve cords evolved independently several times.” Clearly this is not understood today, despite a century of study since G. H. Parker proposed in 1919, “The nerve-net of the lower animals contains the germ out of which has grown the central nervous systems of the higher forms.”
- Promissory notes. “The biology of nerve nets remains a fascinating and poorly understood topic and it is clear that comparative studies of neural development and physiology of non-model systems embedded in an ecological context are paramount to finally understand nervous system evolution.”
In other words, someday evolutionists might connect the dots. Right now, even simple nerve nets in jellyfish and hydra are remarkably well designed for what they do.
This paper could not find an evolutionary pathway in biological neural networks. The other paper had to impose intelligent design on an artificial network to claim its process was like biological evolution. Perceptive readers detect intelligent design through it all.
This article was originally published in 2015.