In the 1940s and '50s, computers were designed as 'artificial brains'. The challenge early computer scientists faced was to grasp the key set of essential scientific problems and reduce them to subsets of achievable engineering tasks. Needless to say, their efforts have been an overwhelming success. Computers have become essential to almost every aspect of modern life.
Since the turn of the millennium, the nature of the scientific challenge has reversed. Now that we understand computer design (hardware and software) better, we can bring these insights to bear on the nature of our cognition (brains and minds). Our brains are now being reconceived as 'biological computers'. Our minds (our selves) are the 'software' that runs on the brain's underlying 'wetware'. It is entirely conceivable that the historico-cultural* purpose of building computers is to understand the biological basis of our own behaviour.
There are some problems whose perceived difficulty is so great that we call them 'mysteries'. These are the complementary issues of:
1. consciousness (subjective experience)
2. meaning (semantic grounding)
In 2008, I initiated a theoretical research project at Flinders University in South Australia, with the express aim of shedding light on these 'intractable' topics. Under the guidance of Professor Richard Clark, then Head of the School of Psychology, and some other key staff members, I set up a multi-disciplinary research program which consisted of several stages:-
2. Computer Science
4. Integration (2011 Honours thesis)
5. Modelling (planned PhD thesis)
In spite of several unplanned personnel changes, inevitable in a large university over a period of years, Project Integration (step 4) was provisionally completed in 2011. For the first time ever, anywhere, it is now theoretically possible to construct an artificial person. All of the philosophical and scientific problems have been overcome. What remains is the engineering task, which should not be minimised.
Many of the 'problems' that I overcame were chimeras, paper tigers, 'own goals' - call them what you will, they were not real technical difficulties, or genuine areas of ignorance, but time-consuming artefacts brought about mainly by poor use of language, and a certain lack of regard for history. If language is your main tool, keep it sharp. If history is your main resource, plug any leaks.
The overall aim of the process, expressed in engineering terms, is to construct a significantly better cognitive architecture than those currently available (eg ACT-R [5]). Cognitive Architectures (CAs) differ from Cognitive Models in one key respect - they seek not just to emulate the behaviour produced by a mind, but also to reflect, in a non-trivial manner, the neural structure of the brain underneath. They are much harder to design and construct because two sets of interconnected constraints must be satisfied simultaneously. They can then be tested against details of real human brains and minds.
In 2011, after almost a decade of dedicated study, the Tricyclic Differential Engine (TDE) was conceived. Like the Turing Machine (TM), the TDE is a model of an idealised computer. Unlike the TM, however, the TDE is made from functional components which are a 1:1 match for the three main lobes of the vertebrate brain: the Parietal, the Frontal, and the Temporal. Also, the TDE can be recursively compounded into a realistic brain design called the TDE-R. Physical models of the TM are useful only as laboratory curiosities, or for 'geek' programming exercises. The TDE-R is a model of your brain!
The TDE is a model of one cerebral hemisphere of a vertebrate brain. It consists of very many long-range neural homeostatic loops, or cybercircuits, each of which uses a type of feedback control called 'predictor-corrector'. In robotics, this same configuration is called a 'Kalman filter'**. Each circuit consists of loops which pass through the three lobes. Each loop exerts contralateral control over the conscious and subconscious dynamics of one semantic feature. If the feature is a permanent object, each loop is matched to one neural column in the visual regions of the cortex.
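The predictor-corrector scheme mentioned above can be sketched as a one-dimensional (scalar) Kalman-style filter. This is a generic textbook illustration only: the constant-state model, the noise variances `q` and `r`, and the sample measurements are my own illustrative assumptions, not parameters of the TDE.

```python
# Minimal 1-D predictor-corrector (scalar Kalman filter) sketch.
# All constants here are illustrative assumptions, not TDE parameters.

def kalman_1d(measurements, q=0.01, r=0.25, x0=0.0, p0=1.0):
    """Track a scalar state from noisy measurements.

    q -- assumed process-noise variance (how much the state drifts)
    r -- assumed measurement-noise variance
    """
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Predictor ('leading'/feedforward) phase: project the state ahead.
        x_pred = x              # constant-state model
        p_pred = p + q
        # Corrector ('lagging'/feedback) phase: blend in the measurement.
        k = p_pred / (p_pred + r)       # Kalman gain
        x = x_pred + k * (z - x_pred)
        p = (1 - k) * p_pred
        estimates.append(x)
    return estimates

# Noisy readings of a quantity whose true value is 1.0:
est = kalman_1d([1.1, 0.9, 1.05, 0.95, 1.0])
```

Each pass through the loop alternates a fast prediction with a slower measurement-driven correction, which is the division of labour the TDE footnote assigns to its feedforward and feedback phases.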
Semantic features are recursively defined sets. Those features which prove to recur often are stored in the Temporal and Limbic lobes as space and energy state hierarchies ('archives') respectively. The Parietal and Frontal lobes act as spatial and temporal short-term memory areas ('buffers') respectively. Each buffer interacts with the contents of the Temporal and Limbic memory areas ('archives') to either recognise existing feature sets or record novel ones. The interaction between buffers and archives is a characteristic of all (discrete***) computation. There are two such buffer-archive pairs (BAPs) in every TDE: a BAP for cospatial features (Temporal-Parietal), and a BAP for synchronous features (Limbic-Frontal). There is support for this view: for example, Borst & Kosslyn [6] state that “mental images arise from perceptual representations that are created from stored information, not information that is currently being registered by the senses”.
All biological learning is 'match-based' learning, all biocomputers are based on Buffer-Archive pairs
It should be noted that almost all current ANN designs, with the notable exception of Grossberg's ART, violate the BAP principle. In the ART (Adaptive Resonance Theory) machine, F2 is the archive and F1 is the buffer. Both the TDE and ART machines use 'match-based' learning, in which an 'alphabet' must exist before meaning-based learning can occur (ie a 'lexicon' is created) - this is the point that Treisman & Gelade make in [9]. In the Turing Machine, Alan Turing divides the tape into E-squares (buffer; likely e-for-erasable) and F-squares (archive; f-for-fixed data), a design feature which seems to confirm the universality of Buffer-Archive pairing in general models of computation.
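The buffer-archive interaction described above - recognise an existing feature set, or record a novel one - can be sketched in a few lines. The bit-vector representation, the overlap-based match score and the 0.8 threshold are illustrative assumptions of mine (the threshold is loosely analogous to ART's vigilance parameter), not part of the TDE specification.

```python
# Toy buffer-archive pair (BAP): a buffered pattern is matched against
# archived templates; a close-enough match is recognised, otherwise the
# pattern is recorded as a new template. Match rule and threshold are
# illustrative assumptions (loosely analogous to ART's vigilance).

def match_score(a, b):
    """Fraction of positions at which two equal-length bit patterns agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def recognise_or_record(buffer_pattern, archive, vigilance=0.8):
    """Return (template_index, is_novel) for the buffered pattern."""
    for i, template in enumerate(archive):
        if match_score(buffer_pattern, template) >= vigilance:
            return i, False            # recognised: resonates with template i
    archive.append(list(buffer_pattern))
    return len(archive) - 1, True      # novel: recorded as a new template

archive = []
r1 = recognise_or_record([1, 0, 1, 1, 0], archive)  # novel: stored at index 0
r2 = recognise_or_record([1, 0, 1, 0, 0], archive)  # 4/5 match: recognised as 0
r3 = recognise_or_record([0, 1, 0, 0, 1], archive)  # novel: stored at index 1
```

The 'alphabet-before-meaning' point then falls out naturally: nothing can be recognised until at least one template has first been recorded.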
Equivalence of process and structure - they are handled by the same structures
The cerebral permanent memory archive, one in each temporal lobe, is topologically similar to a tesseract, in which time and space are equivalent. This economises on data capacity. It also permits the use of an in-built metaphor mechanism during memory access - ie spatial patterns can potentially be recognised by temporal templates, and vice-versa. This also hints at a deeper process-structure equivalence. The TDE is not only a feature recogniser, it is a motion analyser too! Consider the following 'toy' example, which nonetheless exhibits all the characteristics of real cases-
The gross feature is a square, a macro-structure. However, it has a finer grain consisting of four processes (rotations through 90 degrees) and four micro-structures (90-degree corners or intersections). These diagrams should be familiar to students of Fukushima's Neocognitron [10]. Note that the original Neocognitron is only capable of recognition with translation invariance. Later versions have experimented with rotational invariance, with varying degrees of success. The TDE is based upon rotation as the key invariance class, since higher-order rotations (blades, rotors) include translation and shear as fully contained sub-classes.
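The square example can be made concrete in a few lines: one 90-degree corner micro-structure, put through the same 90-degree rotation process four times, generates the square macro-structure and returns to its start. The unit-vector representation is my own illustrative choice, not TDE notation.

```python
# Toy version of the square example: one corner micro-structure plus four
# applications of one rotation process yield the square macro-structure.
# The integer unit-vector representation is an illustrative choice.

def rot90(v):
    """Rotate an integer 2-D vector 90 degrees counter-clockwise."""
    x, y = v
    return (-y, x)

# One micro-structure: a corner = two edge directions meeting at 90 degrees.
corner = ((1, 0), (0, 1))

# Four applications of the same process generate the four corners.
corners = []
c = corner
for _ in range(4):
    corners.append(c)
    c = tuple(rot90(v) for v in c)

# After four rotations the corner returns to its starting orientation:
# the process (rotation) and the structure (square) close on each other.
```

The four `corners` are all distinct, and the fifth application of the process reproduces the first corner, which is the process-structure closure the paragraph above points at.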
Computational processes: All computations are executed as (physical or virtual, eg deferred or imagined) motions. The literature on mental motions (eg rotations - see [7]) demonstrates that they are dynamically equivalent to their real-time counterparts. The current set of active input and output features in the Parietal lobe produces a net force reaction. Thus, the TDE is a real-time robotics engine with built-in feature-binding mechanisms. The actual movement mechanism is just a servo loop, one for each feature, as in the following diagram. This fact has been overlooked by scientists, partly because many of the lessons learned by the cyberneticists of the 1940s, '50s and '60s have been forgotten or devalued.
Any electrical engineer will recognise the diagram at left - it consists of a servo controller and a (controlled) system. The diagram at right depicts the neuro-anatomical mapping of the functional parts of the servo loop onto the parts of one half of a human cerebrum. The Limbic-Frontal lobes form the temporal (=pertaining to time) BAP, while the Parietal-Temporal (=pertaining to the temples) lobes form the spatial BAP. Each BAP consists of a pairing between a sensory buffer (short-term memory) and a template archive (long-term memory).
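The controller-plus-system loop in the diagram reduces to a few lines of code. This is a minimal proportional servo driving a simple first-order system toward a setpoint; the gain and the system dynamics are illustrative assumptions, not values from the TDE model.

```python
# Minimal servo loop sketch: a proportional controller drives a simple
# first-order 'controlled system' toward a setpoint.
# The gain and the system dynamics are illustrative assumptions.

def run_servo(setpoint, steps=50, gain=0.5):
    position = 0.0
    history = []
    for _ in range(steps):
        error = setpoint - position   # comparator: desired minus actual
        position += gain * error      # system responds to controller command
        history.append(position)
    return history

trace = run_servo(setpoint=1.0)       # position converges toward 1.0
```

One such loop per feature, as described above, gives an array of independent error-reducing channels; the error signal, not the raw command, is what circulates.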
Data representation: Conscious features are grouped into current 'snapshots' called Situation Images (or SIs - see left side of diagram below). SI theory is essentially the same as Perceptual Control Theory [11], whose catch-phrase is "behaviour is the control of perception". The realisation behind SI theory and PCT is also the one that underpins Common Coding theory in psychology. That is, this is the same true idea that keeps popping up in history, but that keeps getting ignored. Conscious mental processes consist of global goal-seeking feedback loops, which use SIs as their indivisible semantic atoms. At the conscious level, the computational processes are serial. Unconscious features (declarative data which subsumes procedural servo-loop dynamics) are organised as re-learnable instincts, as per Tinbergen and Albus. Each time we use language, we reiterate this compound data structure. While each sentence we construct is a novel semantic object, we build it from re-useable parts: the words and phrases of our lexicon and phrase books. Within each of our spoken sentences is a classic Tinbergen hierarchy, as neat a demonstration of the underlying theory as you could want.
Recursive layering: The TDE can be recursively combined with itself. The result is the TDE-R. They are related by the mathematical symbolism TDE-R = (TDE)^TDE. Recursion is a word that puts many people off. This is unfortunate, since the underlying idea is easy - recursion is simply placing computations in nested brackets, as in x = 4 × (6 + 3). In computer software, the same idea occurs when nested function calls create activation stacks. The technique is referred to as 'structured programming', and the whole software industry would be lost without it.
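The nested-brackets idea can be shown directly, using the paragraph's own example x = 4 × (6 + 3): the inner call must finish (its activation record must pop off the stack) before the outer one can complete. The helper names are mine, for illustration only.

```python
# Nested function calls mirror nested brackets: the inner computation's
# stack frame is pushed, evaluated and popped before the outer one finishes.

def add(a, b):
    return a + b

def times(a, b):
    return a * b

x = times(4, add(6, 3))    # x = 4 * (6 + 3) = 36

# A directly recursive version of the same nesting: each call opens
# another bracketed sub-computation until the base case is reached.
def fact(n):
    return 1 if n == 0 else n * fact(n - 1)   # fact(4) = 4*(3*(2*(1*1)))
```

In both forms the bracket structure of the expression and the call structure on the stack are the same object, which is the point of the paragraph above.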
The TDE-R has some very interesting properties. It explains much of the lateral functional specialisation observed between the left and right cerebral hemispheres. The simple TDE processes information, while the compound TDE processes semantic values, or predicates (='facts'). It can support non-linear editing of time, as occurs in narrative data representations (language, memory). The TDE-R requires a much larger role for the cerebellum and basal ganglia in overall cognition than is allowed by current opinion in neuroscience.
An isometric sketch of the recursive arrangement of the parts of the TDE-R appears below. The LCH is the global 'frontal' lobe, while the RCH is the global 'temporal' lobe. The LCH corresponds to the upper part of the Tinbergen hierarchy (consciously attended items) while the RCH corresponds to the lower part (automatically learned lexicon of template-forms). This explains the dominance of the LCH in linguistic processing. It is not that the RCH is not involved, it is just that its contribution is a latent one.
The TDE-R is associated with a Peircean language model
Language is important in any theory of human cognition which claims to be complete (like TDE-R), since it epitomises so much of what we consider to be uniquely human about our cognitive function. The TDE-R language model is 100% compatible with C.S. Peirce's semiotics [8], even though it was derived separately, from first principles. This fact alone reinforces both theories, even though they are separated by more than 100 years of scientific research and history.
Language entities such as words are acoustic and textual objects in their own right. That is, they are perceived as objects, just like tables and giraffes. Objects (and processes, too, as it turns out) are constructed from hierarchies of affine 'world' features. Each feature gains its semantic grounding by hierarchical association with a set of lower-level non-affine 'self' predicate values (eg the colour 'blue', the spatial concept 'above'). This explains the importance of the world-self 'dyad' within each 'snapshot' of reality, or Situation Image (SI).
However, words and higher-order linguistic objects have one further, necessary feature - their generality: they are all class descriptors, without exception. The word 'table' means all tables, until indexed (ie referenced deictically), eg by anaphora or by an article such as 'the'. In Peirce's semiotics, they are not just signs (conveyors of meaning), they are universal signs called 'symbols'. In the wholly equivalent TDE-R framework, words and other syntactic constructs are universal feature classes. This characteristic is what gives them the power to describe all situations and, as it happens, thoughts, which are just virtual situations. This is one reason why TDE-R can be classified as a 'language of thought' (LOT) theory.
Future Research - 'G' - the coding language used by the genes
It seems premature to talk about future research when the software engineering for the full TDE-R model is incomplete, but plans are like promises- they must be made, if only to be broken later. The TDE-R as depicted in the diagram above is not the only viewpoint. There is another way of looking at the brain and body, another teleological (goal-oriented) viewpoint, but one which yields the following alternative layered view-
In the right-hand column, the biological drives (ends, or goals) at various levels of organisation appear. In the left-hand column, the matching mechanisms (means, or methods) appear. There is clearly a pattern or 'invariant' which emerges at all levels of description. This invariant, logically, must form the basis of a frequently executed DNA module in the hypothetical gene code 'G'.
Author's Note - Since I first made this website, some interesting things have happened. I proved that the TDE is Turing Complete, ie it can do anything that a TM can do, and vice versa. Also, some simple googling reveals that the language P'' (and its semantic equivalent, Brainfuck) is also most likely to be equivalent to G. Like the gene code, P'' has exactly four (4) instructions. The P'' code for finding the predecessor of an integer, ie (n-1), is given by R ( R ) L ( r' ( L ( L ) ) r' L ) R r, while the Brainfuck equivalent is > [ > ] < [ - [ < [ < ] ] - < ] > +. The four DNA bases occur in triplets (codons), giving an 'alphabet' of 4 x 4 x 4 = 64 possibilities. I am no molecular biologist, so watch this space. However, none of this changes my original research findings about our brain/mind - it really is a TDE-R. Believe!
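Brainfuck programs like the one quoted above can be run on a very small interpreter. The sketch below is a generic implementation of the six non-I/O commands (> < + - [ ]); it is an illustration of the language, not part of the TDE model, and it is demonstrated here on a trivial cell-copy program rather than on the predecessor code, whose tape convention I have not verified.

```python
# Minimal interpreter sketch for the six non-I/O Brainfuck commands.
# Generic illustration of the language only; not part of the TDE model.

def run_bf(code, tape_len=10):
    tape = [0] * tape_len
    ptr = 0            # data pointer
    pc = 0             # program counter
    # Pre-match the brackets so loops can jump in O(1).
    jump, stack = {}, []
    for i, ch in enumerate(code):
        if ch == '[':
            stack.append(i)
        elif ch == ']':
            j = stack.pop()
            jump[i], jump[j] = j, i
    while pc < len(code):
        ch = code[pc]
        if ch == '>':
            ptr += 1
        elif ch == '<':
            ptr -= 1
        elif ch == '+':
            tape[ptr] += 1
        elif ch == '-':
            tape[ptr] -= 1
        elif ch == '[' and tape[ptr] == 0:
            pc = jump[pc]        # skip the loop body
        elif ch == ']' and tape[ptr] != 0:
            pc = jump[pc]        # repeat the loop body
        pc += 1
    return tape

# '++[>+<-]' moves the value 2 from cell 0 into cell 1.
tape = run_bf('++[>+<-]')
```

The whole machine is a pointer, a tape and four data-moving operations plus bracketed loops, which is what makes the comparison with a four-letter gene code tempting in the first place.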
The brain is based upon Pose Cell Arrays, roughly equivalent to Grossberg's ART machine. These cast the basic data representation in vector-rotor form. That is, the DNA code represents molecular rotations/morphs in (probably) 3D space.
* pertaining to the history of civilisation
** incredibly, no one (in the literature, that is) seems to have realised that the Kalman Filter (KF) is a key factor in conscious processing, in that it permits the construction of zero-wait command and control hierarchies. To be fair, in the TDE model, a different interpretation is given to the predictor and corrector stages. The predictor stage ('leading' or feedforward phase) is the low-latency 'involuntary' part, and the corrector stage ('lagging' or feedback phase) is the high-latency part. The mechanism of consciousness consists of the piecewise replacement of conscious, novel inputs with unconscious, construction-kit output units obtained from the cerebellum's cross-bar perceptron. Think about exactly what this means - it is one thing to automate a well-learned activity (ie replace its feedback-loop 'gaps' with pre-formed building blocks), but to be able to automate each and every completely new, novel movement as it occurs is an astounding trick. There are two net benefits. The first is to dramatically reduce the 'effect' or motor latency (the time delay between drive states and driven actions), and the second is to dramatically reduce the 'affect' or reaction latency (the time delay between experienced situations and emotional reactions to those situations). The result is that an organism equipped with this 'algorithm' is much more engaged in the world's interplay of cause-and-effect (ie sensorimotor 'immediacy'). When external things happen, they are experienced (almost) immediately; but more importantly, when the organism 'free-wills' a given behaviour, its body responds (almost) immediately to its inner commands. This mechanism debunks Libet's Paradox completely, because it explains why there appears to be a backwards referral of time, even with an action loop of which the organism has had no prior experience.
*** the phrase 'discrete computation' is a tautology. All computation is discrete - see my paper, 'There is no such thing as sub-symbolic computation'.
1. Fodor, J. (2001) 'The Mind doesn't work that way'
2. Chomsky, N. (2006) 'Language & Mind'
3. Prof. Leon Lack, Dr. Julie Mattiske (Flinders University of South Australia- School of Psychology)
4. Dyer-TDE-WCCI-2014.pdf (this is a version of my Honours thesis modified for prospective inclusion in WCCI-2014, the World Conference on Computational Intelligence)
5. Anderson, J. R., Bothell, D., Byrne, M. D., Douglass, S., Lebiere, C., & Qin, Y. (2004) ACT-R: An integrated theory of the mind. Psychological Review 111(4), 1036-1060.
6. Borst, G. & Kosslyn, S.M. (2008) Visual mental imagery and visual perception: structural equivalence revealed by scanning processes. Memory & Cognition 36(4), 849-86.
7. Shepard, R.N. & Metzler, J. (1971) Mental rotation of three-dimensional objects. Science, 171, 701-703
8. Peirce, C.S. (1868) On a New List of Categories. Proc. Amer. Acad. Arts and Sciences 7, 287-298.
9. Treisman, A.M., Gelade, G. (1980) A Feature-Integration Theory of Attention. Cog. Psychol. 12, 97-136
10. Fukushima, K. (1980) Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics 36(4), 193-202.
11. Powers, William T. (1973, 2005). Behavior: The control of perception. Chicago: Aldine de Gruyter. ISBN 0-202-25113-6. [2nd exp. ed. = Powers (2005)].
------------------------------ Copyright 2013 Charles Dyer------------------------------