Circularity and Infinite Regress in Understanding

Editor’s Note: Take everything that I say in this blogpost with a grain of salt. I am not an expert on computer science theory, so it is possible that I have misrepresented some concepts, particularly the metacircular interpreter and reflective programming.

“There is only the experiencing of experience.” – Rupert Spira

A pair of brief dialogues

Dialogue 1 (taken from here):

First person: “This is the truth.”
Second person: “What is ‘truth’?”
First person: “What do you mean ‘What is truth’?”
Second person: “What do you mean ‘What do I mean’?”
First person: “What are we talking about anyway?”

Dialogue 2 (original):

Zen master: “This is.”
Novice: “What is it?”
Zen master: “This.”
Novice: “What’s that?”
Zen master: “No, it’s this.”
Novice: “What’s this?”
Zen master: “This is what I’m experiencing.”
Novice: “What are you experiencing?”
Zen master: “This.”
Novice: “That sounds circular. I don’t get what this is.”
Zen master: “There’s nothing to get. You already got it.”
Novice: “What have I got?”
Zen master: “Awareness that this is.”
Novice: “Which is what?”
Zen master: “It’s what you got.”
Novice: “Hmm. It seems that the harder I try to look for it, the more it eludes me.”
Zen master: “Then you should stop looking.”
Novice: “Because I’ve got it.”
Zen master: “Right.”
Novice: “Oh.”


Novice: “What were we talking about anyway?”

Human understanding

There is something about circularity that we find deeply uncomfortable.

An epistemological argument known as the Münchhausen trilemma claims that it is impossible to know anything for certain, because any proof must ultimately rest on one of three unsatisfying foundations: circular reasoning, an infinite regress of justifications, or axiomatic assumptions accepted without proof. Circular reasoning, in particular, is one that we consider a logical fallacy. If x is true because of y, yet y is true because of x, then we haven’t offered a valid proof of statement x. The premises for proving an argument must not be contingent on the original argument itself.

Unfortunately, we can’t seem to escape circularity; it seems to be a central feature of any language. Dictionaries define words in relation to other words, so at some point, a pair of words must be defined, circularly, in terms of each other. For instance, Webster’s 1828 dictionary defines the word “regain” as “to recover, as what has been escaped or been lost” and the word “recover” as “to regain, to get or obtain that which was lost” (see Fig 1). Hence, we can’t understand the meaning of either “regain” or “recover” from the dictionary alone. In general, as this logician states, “we will not from the dictionary be able to learn the meaning of any of the words, unless we know the meaning of some of them in advance.” We have to look beyond the dictionary to gain true understanding, and indeed, as the philosopher C.H. Whiteley noted, we grasp most, if not all, of the elementary expressions in language through ostensive definitions, which define words with respect to our experience. For example, an ostensive definition of the word “apple” would be given by pointing out an actual apple that we perceive in the real world, or an image of it. In other words, an agent must bear some connection to the outer world – a certain level of awareness or cognition that extends beyond words – in order to truly comprehend their meaning.


Fig 1. Indirect self-reference in the definitions of “regain” and “recover.” Diagram taken from here.
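To make this concrete, the dictionary can be modeled as a directed graph in which each word points to the words used in its definition. The sketch below is a toy illustration in Python; the two-entry dictionary and the `grounded_words` helper are my own inventions for this post, with definitions abbreviated to single words.

```python
# Toy model: each word maps to the words used in its definition.
# The two entries mirror Fig 1's "regain"/"recover" circle.
definitions = {
    "regain": ["recover"],
    "recover": ["regain"],
}

def grounded_words(known, definitions):
    """Compute, by fixpoint iteration, the set of words whose meaning can
    be traced back to words known in advance (e.g. ostensively). A word
    becomes grounded once every word in its definition is grounded;
    words trapped in a circle never qualify."""
    grounded = set(known)
    changed = True
    while changed:
        changed = False
        for word, defn in definitions.items():
            if word not in grounded and all(w in grounded for w in defn):
                grounded.add(word)
                changed = True
    return grounded

# From the dictionary alone, neither word can ever be understood:
print(grounded_words(set(), definitions))                    # set()
# But knowing one word in advance grounds the other:
print(sorted(grounded_words({"recover"}, definitions)))      # ['recover', 'regain']
```

This mirrors the logician’s remark quoted above: nothing is learnable from the dictionary alone unless some words are known in advance.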

Artificial (computer) understanding

I use the word “agent” above because I’m not referring just to humans, but also to artificial intelligence (AI). AI cognition operates within the constraints of a language, one that involves physical symbols rather than the kinds of words that humans typically use. In their influential 1976 paper, the computer scientists Allen Newell and Herbert Simon proposed the physical symbol system hypothesis (PSSH), which states that “a physical symbol system [such as a digital computer, for example] has the necessary and sufficient means for general intelligent action.” The PSSH is more generally predicated on the notion that AI systems are instantiations of mathematical and logical constructs known as formal systems. Without diving too deep (the famed cognitive and computer scientist Douglas Hofstadter has essentially written a whole book on this topic), these constructs attempt to systematize mathematics by formulating a stylized vocabulary that uses symbols to express statements about numbers. Formal systems accept certain axioms, which are statements consisting of various symbols, and contain rules of inference that dictate the ways that expressions can be manipulated to produce more statements. (0) The appeal of formal systems lies in the fact that every valid expression can be derived from the original axioms and the rules of inference. Therefore, we can prove whether a computational machine, such as an AI, is capable of reaching a certain state from the characteristics of the pertinent formal system, which include its symbolic language. (It should come as no surprise that there is a one-to-one correspondence between computer programs and mathematical proofs, known as the Curry–Howard correspondence.)
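A sketch of Hofstadter’s MIU-system (the formal system referenced in footnote 0) shows how little machinery is needed: a single axiom, “MI”, and four rules of inference generate every theorem. The Python rendering below is my own loose transcription of the rules, not code from the book.

```python
# Hofstadter's MIU-system: axiom "MI", four rules of inference.
def successors(s):
    """All strings derivable from s by one application of one rule."""
    out = set()
    if s.endswith("I"):               # Rule 1: xI -> xIU
        out.add(s + "U")
    if s.startswith("M"):             # Rule 2: Mx -> Mxx
        out.add("M" + s[1:] * 2)
    for i in range(len(s) - 2):       # Rule 3: replace any "III" with "U"
        if s[i:i+3] == "III":
            out.add(s[:i] + "U" + s[i+3:])
    for i in range(len(s) - 1):       # Rule 4: delete any "UU"
        if s[i:i+2] == "UU":
            out.add(s[:i] + s[i+2:])
    return out

# Every theorem is derivable from the axiom by the rules alone:
theorems = {"MI"}
for _ in range(3):                    # three rounds of derivation
    theorems |= {t for s in theorems for t in successors(s)}
print(sorted(theorems))
print("MU" in theorems)               # "MU" is famously never derivable
```

Note that the system never consults anything outside its own symbols: whether a string is a theorem is settled entirely by the axiom and the rules.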

The limitations of AI cognition do not lie within the language itself. Though it is, at its most fundamental, composed of just two symbols (see footnote 1), the logical consistency of the language gives rise to remarkable computational power, to algorithms that far exceed our human capacity to solve certain problems. Rather, the problem is that AI systems do not appear to have any recognition of a world beyond their language. Therefore, a computer program doesn’t seem to have any ability to relate the symbols in the language to a corresponding referent outside of itself. For instance, a human working on an assembly line knows what a set of instructions means because he is able to match them with a series of actions and objects in his local environment. He understands what the word “lever” corresponds to in the phrase “pull the lever,” because he perceives, right in front of him, the object that it signifies. On the other hand, a program that is instructed to perform precisely the same task has no idea what the word “lever” (more accurately, its machine code translation) represents. It only ever knows “lever” as a symbol, not as the lever in and of itself. To use Whiteley’s terminology, it can’t ground the symbols that it manipulates in any sort of ostensive definition. As the philosopher John Searle put it, “What a [computer] does is manipulate formal symbols. The fact that the programmer and the interpreter of the computer output use the symbols to stand for objects in the world is totally beyond the scope of the computer.”


Fig 2. 20th-century Belgian artist René Magritte’s “The Treachery of Images.” The caption of the painting translates to “this is not a pipe.”

Because an AI does not appear to have any awareness of the outer world, its “understanding,” if it can be said to have any at all, must be limited to the sphere of its own inference processes. (2) And indeed, a computer program does have to possess precisely this sort of understanding in order to execute the tasks that are encoded within it; to perform its own procedures, the machine must “comprehend” the language in which it is programmed. Understanding, in this case, involves translating the higher-level language in which a program is written into lower-level machine code (i.e. the symbolic formal language). This sort of translation is the role of a compiler or an interpreter. (I’ll be using these two words interchangeably, but there are some crucial differences; see footnote 3. Additionally, “executing” and “processing” a program are synonymous with interpreting it.) Very early programs were written in assembly language, which has a very strong, often one-to-one correspondence with machine code, denoting operations with mnemonics like “add” or “jump” rather than cryptic bit strings like “00111001”. However, as computation became more complex and grew to encompass myriad combinations of different operations, writing an interpreter in assembly language became infeasible. Lisp, one of the first modern programming languages and also the focus of a great deal of AI research in the 20th century, featured an interpreter that wasn’t written in assembly language, or machine code, or any other lower-level language. In fact, the interpreter for Lisp was written in Lisp itself. This might sound circular, and indeed it is. Lisp’s interpreter is known as a metacircular evaluator, not only because it is expressed in the same language that it evaluates, but also because, as this website explains, the foundational constructs of the language interpret or evaluate themselves.

But how is it possible that a program can interpret itself by referring to itself? We saw earlier that defining a pair of words in relation to each other, like “regain” and “recover,” doesn’t communicate any meaningful information about either one of them. Similarly, it seems like describing a computational expression by referring back to the expression itself doesn’t tell us anything we didn’t already know. And in fact, the aforementioned website makes a direct comparison between metacircular evaluators and circular definitions in dictionaries, noting that “[self-evaluation is] exactly like looking up a word in a dictionary and finding that the dictionary uses the original word.” While perhaps puzzling, the circular nature of interpretation in some programming languages should not be surprising to us. Just like the dictionary, a computer has to define the expressions of its language circularly; it isn’t aware of the objects that they represent in the real world, so the meanings that it assigns to the symbols of the language must ultimately point back to the original symbols themselves.

So, how can an interpreter function properly if it is circular in nature? Has it really achieved anything meaningful if the expressions it interprets just evaluate to themselves? To answer these questions, it’s important to explain the crucial differences between computational evaluation and defining words in a human language. Explaining the meaning of a word in English always requires you to refer to some other word; evaluating a computational expression does not. For instance, the expression “3” always evaluates to “3,” and the boolean “true” similarly evaluates to “true.” These may seem like trivial examples, but primitive expressions like these – i.e. numbers and true/false conditions, as well as arithmetical operations like “+” – constitute the fundamental constructs of the language Lisp. Any computation in Lisp, no matter how complex, will eventually be simplified to these primitive expressions. For example, the Lisp interpreter will evaluate the computation (eval '(+ a 5) '((a 10) (b 7))), which in English means “evaluate the expression ‘a + 5’ with the variables a = 10, b = 7,” by looking up the number corresponding to the variable “a” and then interpreting the original expression as “10 + 5” (4). The number 10 evaluates to itself, as does the number 5, so the only remaining step is to apply the desired computation by executing the addition of the two numbers through a lower-level language. Therefore, Lisp evaluates a given computation by reducing it to its primitive sub-expressions and then applies (executes) the resulting procedure. Sometimes, the applied expression contains computations that themselves need to be evaluated; this “nested evaluation” could occur, for example, if we changed the Lisp code above to (eval '(+ a 5) '((a (+ 10 3)) (b 7))), which signifies the command “evaluate the expression ‘a + 5’ with the variables a = 10 + 3, b = 7”. Thus, the interpreter first evaluates “10 + 3”, which it then applies to the expression “a + 5”, which is subsequently evaluated as “13 + 5.” Based on this recurring sequence of evaluating and applying, we see that the essence of the interpretation process is, as the authoritative textbook Structure and Interpretation of Computer Programs (SICP) states, “a basic cycle in which expressions are reduced to procedures to be applied to arguments, which in turn are reduced to new expressions to be evaluated, and so on, until we get down to symbols, whose values are looked up, and to primitive procedures, which are applied directly.”
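The eval/apply cycle just described can be sketched in miniature. The evaluator below is a toy written in Python rather than Lisp, supporting only numbers, variables, and “+” – just enough to run the two examples above; the function names are my own, not SICP’s.

```python
# A minimal sketch of the eval/apply cycle. Expressions are nested
# lists; an environment is a dict of variable bindings.
def evaluate(expr, env):
    if isinstance(expr, (int, float)):        # primitives evaluate to themselves
        return expr
    if isinstance(expr, str):                 # symbols: look up their value,
        return evaluate(env[expr], env)       # which may itself need evaluating
    op, *args = expr                          # compound: evaluate sub-expressions...
    values = [evaluate(a, env) for a in args]
    return apply_op(op, values)               # ...then apply the operator

def apply_op(op, values):
    if op == "+":                             # the only primitive procedure here
        return sum(values)
    raise ValueError(f"unknown operator: {op}")

# (eval '(+ a 5) '((a 10) (b 7)))  =>  15
print(evaluate(["+", "a", 5], {"a": 10, "b": 7}))            # 15
# (eval '(+ a 5) '((a (+ 10 3)) (b 7)))  =>  18 (nested evaluation)
print(evaluate(["+", "a", 5], {"a": ["+", 10, 3], "b": 7}))  # 18
```

Notice that the recursion bottoms out exactly where the text says it should: at numbers, which evaluate to themselves, and at primitive procedures, which are applied directly.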


Fig 3. The eval-apply cycle. Note that this diagram mentions “environments,” which are features of metacircular interpreters that I’ve neglected to discuss. (Honestly, I don’t fully understand them yet.)

In summary, the metacircular interpreter operates through an eval-apply cycle, and the cycle terminates when an expression has been reduced to a primitive construct. Due to this termination condition, the interpreter is able to evaluate computations perfectly well in spite of its circular structure. However, in order to understand how the metacircular interpreter works (as we have just done), we need to decode the processes that are encoded in the algorithm of the interpreter. Critically, the interpreter is itself written in a formal, symbolic language, and we have to decipher those symbols in order to make sense of the interpreter. (This notion that “the evaluator, which determines the meaning of the expressions in a programming language, is itself another program” is so important that SICP, the textbook mentioned above, regards it as the most fundamental idea in programming.) In other words, the interpreter must itself be interpreted through another, higher-level interpreter. Furthermore, this latter interpreter, known as the meta-interpreter, could in turn require a meta-meta-interpreter, and so on and so forth. When the interpreter is written in a different, lower-level language than the interpreted code, the original program can be translated to machine code in a finite number of steps. But when the two are expressed in the same language, as is the case with the metacircular interpreter, the interpreter is executed by running a copy of itself. This copy is interpreted through another copy, ad infinitum. This never-ending line of interpretation is rather similar to defining words in human languages without any base understanding; the meaning of one word is given in terms of another, which is given in terms of another, and so on and so forth, but no real comprehension is ultimately achieved.
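The regress of interpreters interpreting interpreters can be exhibited, loosely, with Python’s built-in eval standing in for an interpreter:

```python
# Each level is a program whose only job is to hand its text to an
# interpreter one level down.
program = "2 + 3"
level1 = f"eval({program!r})"    # an interpreter interpreting the program
level2 = f"eval({level1!r})"     # a meta-interpreter interpreting level 1
print(eval(level2))              # 5 -- every level merely defers downward
```

Nothing stops us from wrapping level2 in a level3, a level4, and so on; each added level contributes no new meaning, much as each link in a circular chain of dictionary definitions contributes none.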

In 1983, the computer scientist Brian Cantwell Smith, seeking to construct an AI system that could “reason” about its own computational methods, introduced a programming architecture that consists of an infinite tower of metacircular Lisp interpreters. At the base level of the tower, the original code written by the programmer, or the “user” program, is processed. At the next level, the meta-interpreter of the base interpreter is executed, and so forth. Each level of the tower sends along a representation of its computation to the next level. To be more precise, every level, aside from the base, interprets the program running beneath it and then passes the state of that interpretation (i.e. its data structures) as arguments to the functions of the meta-interpreter at the level above (5). Furthermore, an interpreter can also manipulate the code that it receives from the level directly underneath it. In Smith’s words, the interpreter gains access to structural descriptions (i.e. representations) of its own operations, thereby empowering it to direct the course of its own computation. Thus, the infinite tower has enabled the interpreter to reflect upon itself. It may be surprising that a computational system is capable of self-reflection, given that the level at which code is executed is always separate from the “meta-level” where the behavior of the system executing the code is analyzed. Hence, an interpreter usually only includes operators for evaluating the user program, and not for inspecting the internal structures that enable the act of interpretation itself. However, because the interpreters in the infinite tower can receive information about the data structures of lower levels, each one can effectively have meta-level awareness about the nature of its interpretation while also behaving as a “user-level” program that merely carries out computations, which are passed on to higher segments of the tower. Thus, an interpreter can be extended with a self-representation of its own implementation; in other words, a reflective system embeds a model of a programming language within the language itself.

Furthermore, every level of the infinite tower is capable of self-reflection, which means that every interpreter contains a model of the language within itself. Given that the processor at one level is simply processing the processor immediately beneath it, which is in turn processing a lower processor, the model at each level will also consist of all the models below it in the tower. As such, the tower contains a model of the language within a model within a model within a model, etc. As the computer scientist Bas Steunebrink puts it, the tower can be described as an “infinite nesting of interpreters,” in which the interpreters aren’t stacked upon one another but instead are embedded within each other in a linear hierarchy.
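Under heavy simplification, the tower can be simulated as a chain of functions, each receiving a reified representation of the level below. The names (`base_level`, `meta_level`) and the dict-of-state representation here are my own illustrative assumptions, not Smith’s actual 3-Lisp machinery.

```python
# A toy simulation of a reflective tower. Each level receives a
# representation of the computation below it and may inspect or modify
# it before passing it upward.
def base_level(program):
    """Level 0: run the user program and reify its state."""
    result = program()
    return {"level": 0, "result": result, "code_modified": False}

def meta_level(state_below, modify=None):
    """Level n+1: interpret the representation passed up from below."""
    state = dict(state_below)
    state["level"] += 1
    if modify:                       # a reflective operation on lower levels
        state = modify(state)
        state["code_modified"] = True
    return state

# User program at the base of the tower:
state = base_level(lambda: 2 + 3)
# One level reflects (here: doubling the result)...
state = meta_level(state, modify=lambda s: {**s, "result": s["result"] * 2})
for _ in range(3):                   # ...and the higher levels change nothing,
    state = meta_level(state)        # so they all do identical work
print(state["result"], state["level"])   # 10 4
```

The loop at the end illustrates the redundancy discussed below: once no level modifies what it receives, every further level is just a processor processing an identical processor.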


Fig 4. A non-reflective interpreter (left) juxtaposed with the infinite reflective tower (right). The diagram of the non-reflective case represents Smith’s model of programming, in which every computational system consists of a structural field (which contains all the entities that comprise the language’s implementation) and a processor. The real world is composed of the objects and features that the symbols in the structural field, such as numbers, are meant to stand for. The reflective tower is visualized here as an infinite nesting of interpreters. This image is taken from here.

The reflective tower is reminiscent of the Droste effect (depicted below), in which an image contains a copy of itself, which contains a copy of itself, ad infinitum. In artworks that exhibit the Droste effect, it’s clear that there is only one image, repeated over and over again within itself. In other words, since all the copies of the artwork (embedded within the artwork itself) are the same, they are essentially instantiations of one standard image. Similarly, once the interpreters within the infinite tower stop making modifications to the code that the lower levels pass on to them, the higher levels of the tower will henceforth consist of processors redundantly processing identical processors. (6) Thus, the models of the programming language after this point in the tower are identical to one another, so they are all manifestations of one standard evaluator. As the computer scientist John C.G. Sturdy notes, this requires a final meta-evaluator or ultimate machine “to make it all work, and to create the levels of interpretation as needed for inspection and modification by the reflective operators.” The ultimate machine must be external to the tower, yet it also mimics the properties of the standard evaluator, which is internal to the tower. (7)

If the ultimate machine behaves as though it were the standard evaluator, then it seems as though it should also take on the form of a tower. Hence, it’s possible to structure the ultimate machine as a meta-tower, but of course, the meta-tower could be evaluated in turn by another tower, and so on. It is therefore possible to imagine an infinite tower of meta-towers that successively evaluate each other, much like the metacircular interpreters that we discussed earlier. In this stupendous architecture, the ultimate machine – the evaluator of all evaluators, the final interpreter – keeps hiding from view. According to Sturdy, “as you approach it by following data structures, it keeps handing you newly-created substitutes for itself.”

True understanding conceals itself from us, the further that we try to pursue it.


Fig 5. Visualization of the Droste effect, which is, as Wikipedia describes it, “the effect of a picture recursively appearing within itself.” The namesake of the effect is a Dutch cocoa tin designed in 1904, pictured above. The nurse inside the tin is shown to be carrying a tray that is holding the tin itself.

…and back again

Perhaps full self-reflection is so elusive for computational systems because they don’t have any self-awareness. Sure, there is a singular system within which a machine carries out operations and processes its internal data structures, but the machine has no qualitative conception of that system as a self. This sense of an “I” is predicated on having some level of subjective experience, and the computer, despite all that it does, lacks the raw feeling or sensation of doing. As a human, I have an irreducible, fundamental consciousness of my self, a sense of what it is like to be “me,” which is totally independent of my personality, my past history, and any other features. Reflection will enable a machine to access its own structure and behavior, but without this sort of consciousness, it only ever “understands” itself in relation to its properties. It bears no relation to itself as a self, as a witness to its own experience.

A machine’s inability to reflect on itself is inextricably linked to its inability to achieve true understanding. Both are rooted in the fact that the machine cannot seem to experience anything. Consequently, it isn’t aware of itself as a system that is manipulating symbols in a programming language, nor is it aware of the entities in the real world that those symbols denote.

Understanding is not merely related to self-reflection; it is an act of self-reflection. Your sensory perception receives data about the outside world, but it is only by reflecting on your perception that you become consciously aware of it. Understanding, therefore, is a process of shifting your attention to the raw feeling of whatever you’re perceiving, of tapping into the felt presence of direct experience. Put another way, it involves becoming aware of your own awareness, or experiencing your own experience. If you have a backache, for example, you reflect on it, and thereby become conscious of it, by noticing the subjective sensation of your pain. You’re experiencing what it is like for you to feel the pang in your muscles.


Fig 6. A brain that doesn’t understand (diagram A) vs. a brain that does understand (diagram B). Image taken from an essay in this book.

This qualitative experience is something that only you can access, through introspection. You may try to convey the nature of your pain to other people by labeling its features or discussing its causes. And you may be effective at capturing a number of the meaningful properties of the pain, but the pain in and of itself, independent of the features that we ascribe to it, remains utterly incommunicable. No language, whether formal or informal, can immediately relay the actual sensation of pain; it can only do so indirectly by invoking past memories of the experience. In fact, someone could construct a formal theory about pain that, just like a programming language, is defined with an unambiguous, fully self-consistent semantics, in which all the various types of pain are mapped to their physical correlates in the body. Yet learning this theory, no matter how thoroughly, won’t give you any knowledge of what it’s like to experience pain. No amount of information about the chemical neurotransmitters that trigger the perception of soreness will tell you what soreness really feels like. The physical substance is a symbol for your awareness of the feeling, and as we discovered with computational systems, interpreting symbols in relation to other symbols will not result in any understanding of the real entities they stand for.

As it turns out, attempting to understand sensation through rational abstraction is quite comparable to the eval-apply cycle of the metacircular interpreter. You “evaluate” the immediate sensation of pain by assessing what sort of pain it is (a sting, a burn, etc.) and then you “apply” your assessment by receiving treatment for it. Having applied it in this manner, you might be asked to further evaluate your pain; for instance, your doctor may ask you to rate your pain on a scale from 1 to 10. In the computational example, further rounds of evaluation and application will eventually boil down to a primitive construct, which evaluates to itself. In the human case, the primitive construct is irreducible, raw sensation. If your doctor wants to know why you rated your pain as a “5,” you might reply, “I don’t know; it just feels that way.” You’ve evaluated the sense of pain in terms of itself; it is as it is, in a way that you cannot elaborate on. You understand that you are in pain because you experience it, but what does it mean to be experiencing pain? There is no deeper significance. To be aware of your own experience means to be aware of it. It’s self-referentially circular; you can’t rationalize it.

A computational interpreter is usually a finite program, but a system that seeks to interpret its own interpretation (through reflection) is theoretically infinite. Likewise, understanding your experience – simply by being aware of it – is deeply intuitive, but understanding the act of understanding – i.e. seeking to resolve the question “what does it mean to experience my experience?” in a non-circular fashion – will result in an infinite regress. The latter assumes that meaning is external to our experience, even though everything we know stems from what we experience. In other words, it requires a meta-interpreter for understanding the meaning of what we intuitively interpret through experience. The sensations of qualitative experience therefore become objects of interpretation; they are reduced from entities that are evident in and of themselves to symbols that stand for and evaluate to something else in the domain of the meta-interpreter. However, as we saw with computational systems, the language of the meta-interpreter is itself symbolic, such that the symbols of the original algorithm stand for further symbols, which subsequently stand for … more symbols. Similarly, unless it is self-justifying, the meta-interpreter of our experience is itself subject to interpretation, thereby requiring a higher-level interpreter that may also, in turn, need to be interpreted. Thus, it’s possible to model the meta-understanding of experience as something like the infinite reflective tower, where the ultimate interpreter, much like the ultimate machine in Sturdy’s theory of computation, hides ever further from view.

For example, in order to believe that logical reasoning is the ultimate benchmark for making sense of our experience – that logic does not require interpretation because there is no truth that lies beyond it – it must be possible to prove the rules of logic using logic itself. However, verifying logic by means of itself presupposes the validity of logic. So, there must be some other standard for establishing the laws of logic, which then replaces logic as the principle of proof. This standard is your pre-rational intuition. As these philosophers have pointed out, the statement that ‘A is A’, along with other axioms of logic like non-contradiction and cause-and-effect, is not a “logical principle” so much as it is an expression of tautological intuition, which you know to be true from your experience. You may cast doubt on the validity of your intuition by claiming that you cannot know whether your intuition is correct, only that you possess it. But the very ability to question your own intuition is predicated on the fact of your experience. Challenging your experience implicitly presumes that you already have it, since the act of challenging is itself a form of experience.

So, the more that you try to model your experience in terms of something besides experience, the more you end up describing experience in terms of itself. You appear to be farther removed from raw experience as you attempt to understand it through logic and other tools of rational abstraction. But underlying your reasoning is nothing other than your experience. Just as the girl on the Morton Salt can (see Fig 7) conceals, with her elbow, the image of herself on the can she carries, we seem to obscure our experience by examining it. But when we look closer, we find that the act of examining is just an instantiation of our own experience. This phenomenon is precisely like the Droste effect; there is only the experiencing of experience, at every level of understanding. Experience is modeled within experience, which is modeled within experience, all the way down.


Fig 7. A can of Morton Salt. Notice how the girl on the can is herself carrying a can of Morton Salt, although her elbow obscures her image on the carried can.

Toward the beginning of this article, I claimed that we understand everything in relation to experience. How do you understand experience? By experiencing it through your awareness. In other words, you can’t explain it except in relation to itself. Understanding your experience at a higher level than your perceptual awareness is essentially metacircular. You conceive of experience as a symbol in some “meta-language” used for analyzing symbols as a whole; for example, you may think that experience means whatever you can logically reason about it. But the fundamental axioms of logic are established relative to experience, so experience is defined, circularly, in terms of itself.

The more that you try to rationally understand your experience, the less successful you will be. The moment that you stop trying, by simply shifting your attention to your awareness, is the moment that you finally understand.

In the words of the sage Mooji, you are looking for something, but you are already looking from there.

(0) Here’s an example of a formal system, one that Hofstadter introduces at the beginning of his seminal book Gödel, Escher, Bach.

(1) The precise details of the formal systems that AI function within are not particularly relevant to this blogpost, but they are nonetheless worth reviewing for the purpose of offering some context. Computers operate within a formal, symbolic language whose alphabet consists solely of bits, or 1’s and 0’s. According to the grammar of the language, any string of eight bits will form a valid symbol (or technically, a well-formed formula). Broadly speaking, the program running on the machine contains the rules of inference, and its inputs serve as axioms.

(2) Some philosophers (perhaps Hegel) may claim that self-awareness can only be developed through an awareness of the outer world, which would imply that an AI has no consciousness of itself.

(3) According to Hofstadter’s Gödel, Escher, Bach, compilers translate all the expressions in a program and then execute the machine code, whereas interpreters translate one line of a program and immediately carry out the corresponding procedure.

(4) This example is taken from a presentation by Mark Boady on metacircular evaluation.

(5) By “argument,” I mean “parameter” or “input.” For example, a function for squaring numbers might accept a list of integers as its argument.

(6) Because the levels of the reflective tower become redundant after a certain point, the tower can be reduced to a finite version of itself. This makes the execution of the tower practical.

(7) Thus, the ultimate machine, curiously enough, can be said to be within and without the tower.

