
Of Lattices, Grids, Strips and Tape: Alan Turing’s Formal Machines

Interview with Jean Lassègue

What are the links between information and computing? How and why was computing invented? Are computers capable of representing data and creating forms? To learn more about these questions, the Back Office team interviewed Jean Lassègue,[3] philosopher, epistemologist, and author of a biography of Alan Turing.[4]

[3] Researcher at the CNRS (Institut Marcel Mauss-EHESS) and Director of the LIAS (Linguistics, Anthropology, and Sociolinguistics) Research Team.
[4] Jean Lassègue, Turing (Paris: Les Belles Lettres, 1998).

Back Office What exactly was information for Alan Turing?

Jean Lassègue As far as I know, the word ‘information’ was rarely used by Turing; he preferred the technical notion of “weight of evidence.” There are two competing definitions of the concept of information. The first is Claude Shannon’s[5] (1948), which defines information as the transmission of a signal within the context of a statistical theory of communication (the rarer a signal, the more informative it is). The second was proffered by the Russian mathematician Andreï Kolmogorov in the 1960s, where information is defined as programming within the context of the theory of computability created by Turing: information there measures the complexity of a program, the description of an object by a program being as complex as it is rich in information. Those are the theoretical aspects. In common parlance, however (and this is the sense of the word that Turing preferred), ‘information’ is not understood in this way: we relate it to the notion of meaning. In order for a piece of ‘information’ to make sense to a human, it is necessary to spatialize it, i.e. to represent it within a space (a two-dimensional one in the case of graphic interfaces). This relation to space necessarily implies the construction of a form distinguishable from its content: this is how meaning is constructed, a meaning that is above all geometric and non-linguistic, and consequently presupposes no signal, code, or message, as the mathematician René Thom stated with great insight.[6] In summary, to signify is, at its most fundamental, to lend form to space.

[5] Claude Shannon (1921–2001) was an American electrical engineer and mathematician. He is one of the founders of information theory, which concerns the quantification of information in terms of probabilities.
[6] René Thom (1923–2002) was a French mathematician and epistemologist, a founder of catastrophe theory.
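Shannon’s probabilistic definition mentioned above can be made concrete with a few lines of code. This is a minimal sketch of self-information in bits (the function name and example probabilities are illustrative, not from the interview):

```python
import math

def self_information(p: float) -> float:
    """Shannon's self-information of an event with probability p,
    measured in bits: the rarer the event, the more informative it is."""
    if not 0 < p <= 1:
        raise ValueError("p must be in (0, 1]")
    return -math.log2(p)

# A coin flip carries 1 bit; a 1-in-8 event carries 3 bits.
print(self_information(0.5))    # 1.0
print(self_information(0.125))  # 3.0
```

The logarithm makes information additive: observing two independent rare events yields the sum of their individual information contents.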

If one reflects along these lines, one might consider the linear tape of the machine described by Turing in 1935. He called it a “paper machine” because it was an abstract mathematical machine which, initially, was not physically realized. It is like the extreme reduction of space to a graphic interface of only one dimension, that of the paper upon which one inscribes marks: a box is either empty or checked, and that is all. It could not be simpler. Yet one can always achieve more complexity by adding empty boxes and checked boxes. Turing went as far as to postulate that it would be possible to simulate human ingenuity by adding new boxes to the tape.
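The one-dimensional tape of empty and checked boxes described above can be sketched as a tiny simulator. The transition table here (a machine that skips checked boxes and checks the first empty one, i.e. a unary incrementer) is an illustrative example, not one of Turing’s own:

```python
# Minimal sketch of a Turing machine on a one-dimensional tape where
# each box is either empty (0) or checked (1). The rule table maps
# (state, symbol) -> (symbol to write, head move, next state).

def run(tape, rules, state="start", head=0, max_steps=1000):
    """Run a transition table until the machine halts; return the tape."""
    cells = dict(enumerate(tape))  # sparse tape; unvisited boxes are empty
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, 0)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += {"R": 1, "L": -1}[move]
    return [cells.get(i, 0) for i in range(max(cells) + 1)]

# Unary increment: walk right over checked boxes, check the first empty one.
rules = {
    ("start", 1): (1, "R", "start"),  # leave checked boxes as they are
    ("start", 0): (1, "R", "halt"),   # check the first empty box, then stop
}

print(run([1, 1, 1], rules))  # [1, 1, 1, 1] — unary 3 becomes unary 4
```

The point of the reduction is visible here: nothing exists but a line of boxes and a finite rule table, yet adding boxes and rules lets the same scheme express arbitrarily complex computations.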

In fact, between information as a linguistic concept and meaning as a geometric concept, there is an abyss: that of the intelligibility of space. The concept of information is not a spatial concept but a linguistic one, and that is why it is possible for Turing to make a radical separation between the lev…