History of type theory facts for kids
Type theory was initially created to avoid paradoxes in a variety of formal logics and rewrite systems. Later, "type theory" came to refer to a class of formal systems, some of which can serve as alternatives to naive set theory as a foundation for all mathematics.
It has been tied to formal mathematics ever since, from Principia Mathematica to today's proof assistants.
1900–1927
Origin of Russell's theory of types
In a letter to Gottlob Frege (1902), Bertrand Russell announced his discovery of the paradox in Frege's Begriffsschrift. Frege promptly responded, acknowledging the problem and proposing a solution in a technical discussion of "levels". To quote Frege:
Incidentally, it seems to me that the expression "a predicate is predicated of itself" is not exact. A predicate is as a rule a first-level function, and this function requires an object as argument and cannot have itself as argument (subject). Therefore I would prefer to say "a concept is predicated of its own extension".
He goes about showing how this might work but seems to pull back from it. As a consequence of what has become known as Russell's paradox both Frege and Russell had to quickly amend works that they had at the printers. In an Appendix B that Russell tacked onto his The Principles of Mathematics (1903) one finds his "tentative" theory of types. The matter plagued Russell for about five years.
Willard Quine presents a historical synopsis of the origin of the theory of types and the "ramified" theory of types: after considering abandoning the theory of types (1905), Russell proposed in turn three theories:
- the zigzag theory,
- the theory of limitation of size,
- the no-class theory (1905–1906), and then
- readopting the theory of types (1908ff)
Quine observes that Russell's introduction of the notion of "apparent variable" had the following result:
the distinction between 'all' and 'any': 'all' is expressed by the bound ('apparent') variable of universal quantification, which ranges over a type, and 'any' is expressed by the free ('real') variable which refers schematically to any unspecified thing irrespective of type.
Quine dismisses this notion of "bound variable" as "pointless apart from a certain aspect of the theory of types".
The 1908 "ramified" theory of types
Quine explains the ramified theory as follows: "It has been so called because the type of a function depends both on the types of its arguments and on the types of the apparent variables contained in it (or in its expression), in case these exceed the types of the arguments". Stephen Kleene in his 1952 Introduction to Metamathematics describes the ramified theory of types this way:
- The primary objects or individuals (i.e. the given things not being subjected to logical analysis) are assigned to one type (say type 0), the properties of individuals to type 1, properties of properties of individuals to type 2, etc.; and no properties are admitted which do not fall into one of these logical types (e.g. this puts the properties 'predicable' and 'impredicable' ... outside the pale of logic). A more detailed account would describe the admitted types for other objects as relations and classes. Then to exclude impredicative definitions within a type, the types above type 0 are further separated into orders. Thus for type 1, properties defined without mentioning any totality belong to order 0, and properties defined using the totality of properties of a given order belong to the next higher order. ... But this separation into orders makes it impossible to construct the familiar analysis, which we saw above contains impredicative definitions. To escape this outcome, Russell postulated his axiom of reducibility, which asserts that to any property belonging to an order above the lowest, there is a coextensive property (i.e. one possessed by exactly the same objects) of order 0. If only definable properties are considered to exist, then the axiom means that to every impredicative definition within a given type there is an equivalent predicative one (Kleene 1952:44–45).
The axiom of reducibility and the notion of "matrix"
But because the stipulations of the ramified theory would prove (to quote Quine) "onerous", Russell in his 1908 Mathematical logic as based on the theory of types also would propose his axiom of reducibility. By 1910 Whitehead and Russell in their Principia Mathematica would further augment this axiom with the notion of a matrix — a fully extensional specification of a function. From its matrix a function could be derived by the process of "generalization" and vice versa, i.e. the two processes are reversible — (i) generalization from a matrix to a function (by using apparent variables) and (ii) the reverse process of reduction of type by courses-of-values substitution of arguments for the apparent variable. By this method impredicativity could be avoided.
Truth tables
In 1921, Emil Post would develop a theory of "truth functions" and their truth tables, which replace the notion of apparent versus real variables. From his "Introduction" (1921): "Whereas the complete theory [of Whitehead and Russell (1910, 1912, 1913)] requires for the enunciation of its propositions real and apparent variables, which represent both individuals and propositional functions of different kinds, and as a result necessitates the cumbersome theory of types, this subtheory uses only real variables, and these real variables represent but one kind of entity, which the authors have chosen to call elementary propositions".
At about the same time Ludwig Wittgenstein developed similar ideas in his 1922 work Tractatus Logico-Philosophicus:
3.331 From this observation we get a further view – into Russell's Theory of Types. Russell's error is shown by the fact that in drawing up his symbolic rules he has to speak of the meanings of his signs.
3.332 No proposition can say anything about itself, because the propositional sign cannot be contained in itself (that is the whole "theory of types").
3.333 A function cannot be its own argument, because the functional sign already contains the prototype of its own argument and it cannot contain itself...
Wittgenstein proposed the truth-table method as well. In propositions 4.3 through 5.101 of the Tractatus, he adopts an unbounded Sheffer stroke as his fundamental logical entity and then lists all 16 truth functions of two variables (5.101).
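To make the role of the stroke concrete, the following Lean sketch (illustrative only, not drawn from Post or Wittgenstein) defines the Sheffer stroke (NAND) on Booleans, derives the usual connectives from it, and evaluates its truth table:

```lean
-- The Sheffer stroke: true except when both arguments are true.
def sheffer (p q : Bool) : Bool := !(p && q)

-- The familiar connectives, each defined from the stroke alone.
def notS (p : Bool)   : Bool := sheffer p p
def andS (p q : Bool) : Bool := sheffer (sheffer p q) (sheffer p q)
def orS  (p q : Bool) : Bool := sheffer (sheffer p p) (sheffer q q)

-- The stroke's truth table over the four possible input pairs.
#eval [(false, false), (false, true), (true, false), (true, true)].map
  (fun (p, q) => sheffer p q)   -- [true, true, true, false]
```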
The notion of matrix-as-truth-table appears as late as the 1940s–1950s in the work of Tarski; for example, the index of his 1946 work contains the entry "Matrix, see: Truth table".
Russell's doubts
Russell in his 1920 Introduction to Mathematical Philosophy devotes an entire chapter to "The axiom of Infinity and logical types" wherein he states his concerns: "Now the theory of types emphatically does not belong to the finished and certain part of our subject: much of this theory is still inchoate, confused, and obscure. But the need of some doctrine of types is less doubtful than the precise form the doctrine should take; and in connection with the axiom of infinity it is particularly easy to see the necessity of some such doctrine".
Russell abandons the axiom of reducibility: In the second edition of Principia Mathematica (1927) he acknowledges Wittgenstein's argument. At the outset of his Introduction he declares "there can be no doubt ... that there is no need of the distinction between real and apparent variables...". Now he fully embraces the matrix notion and declares "A function can only appear in a matrix through its values" (but demurs in a footnote: "It takes the place (not quite adequately) of the axiom of reducibility"). Furthermore, he introduces a new (abbreviated, generalized) notion of "matrix", that of a "logical matrix . . . one that contains no constants. Thus p|q is a logical matrix". Thus Russell has virtually abandoned the axiom of reducibility, but in his last paragraphs he states that from "our present primitive propositions" he cannot derive "Dedekindian relations and well-ordered relations" and observes that if there is a new axiom to replace the axiom of reducibility "it remains to be discovered".
Theory of simple types
In the 1920s, Leon Chwistek and Frank P. Ramsey noticed that, if one is willing to give up the vicious circle principle, the hierarchy of levels of types in the "ramified theory of types" can be collapsed.
The resulting restricted logic is called the theory of simple types or, perhaps more commonly, simple type theory. Detailed formulations of simple type theory were published in the late 1920s and early 1930s by R. Carnap, F. Ramsey, W.V.O. Quine, and A. Tarski. In 1940 Alonzo Church (re)formulated it as the simply typed lambda calculus, and it was examined by Gödel in 1944. A survey of these developments is found in Collins (2012).
1940s–present
Gödel 1944
Kurt Gödel in his 1944 Russell's mathematical logic gave the following definition of the "theory of simple types" in a footnote:
- By the theory of simple types I mean the doctrine which says that the objects of thought (or, in another interpretation, the symbolic expressions) are divided into types, namely: individuals, properties of individuals, relations between individuals, properties of such relations, etc. (with a similar hierarchy for extensions), and that sentences of the form: "a has the property φ", "b bears the relation R to c", etc. are meaningless, if a, b, c, R, φ are not of types fitting together. Mixed types (such as classes containing individuals and classes as elements) and therefore also transfinite types (such as the class of all classes of finite types) are excluded. That the theory of simple types suffices for avoiding also the epistemological paradoxes is shown by a closer analysis of these. (Cf. Ramsey 1926 and Tarski 1935, p. 399).
He concluded that (1) the theory of simple types and (2) axiomatic set theory "permit the derivation of modern mathematics and at the same time avoid all known paradoxes" (Gödel 1944:126); furthermore, the theory of simple types "is the system of the first Principia [Principia Mathematica] in an appropriate interpretation. . . . [M]any symptoms show only too clearly, however, that the primitive concepts need further elucidation" (Gödel 1944:126).
Curry–Howard correspondence, 1934–1969
The Curry–Howard correspondence is the interpretation of proofs-as-programs and formulae-as-types. The idea started in 1934 with Haskell Curry and was completed in 1969 by William Alvin Howard. It connects the "computational component" of many type theories to derivations in logic.
Howard showed that the typed lambda calculus corresponded to intuitionistic natural deduction (that is, natural deduction without the law of excluded middle). The connection between types and logic led to much subsequent research to find new type theories for existing logics and new logics for existing type theories.
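As a minimal illustration (a sketch in Lean, with made-up names rather than Howard's own notation), the same lambda term can be read both as a program and as a proof of the formula that its type expresses:

```lean
-- Read as a program: a function that takes a : A, discards b : B,
-- and returns a. Its type is A → B → A.
def k {A B : Type} (a : A) (_ : B) : A := a

-- Read as a proof: the very same lambda term is accepted as a proof of
-- the intuitionistic theorem A → (B → A); no excluded middle is needed.
theorem weaken (A B : Prop) : A → (B → A) :=
  fun a _ => a
```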
de Bruijn's AUTOMATH, 1967–2003
Nicolaas Govert de Bruijn created the type theory Automath as a mathematical foundation for the Automath system, which could verify the correctness of proofs. The system gained features over time as type theory itself developed.
Martin-Löf's Intuitionistic type theory, 1971–1984
Per Martin-Löf found a type theory that corresponded to predicate logic by introducing dependent types, which became known as intuitionistic type theory or Martin-Löf type theory.
Martin-Löf's theory uses inductive types to represent unbounded data structures, such as natural numbers.
Martin-Löf's presentation of his theory using rules of inference and judgments became the standard for presenting later type theories.
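As an illustration of these two ideas, here is a minimal Lean sketch (the names MyNat and Vec are hypothetical, not Martin-Löf's notation): an inductive type of natural numbers, and a dependent type whose natural-number index appears in the type itself:

```lean
-- An inductive type: natural numbers built from zero and successor,
-- an unbounded data structure of the kind mentioned above.
inductive MyNat where
  | zero : MyNat
  | succ : MyNat → MyNat

-- A dependent type: vectors over A whose length is part of the type,
-- so that, as in predicate logic, types can talk about particular values.
inductive Vec (A : Type) : MyNat → Type where
  | nil  : Vec A MyNat.zero
  | cons : {n : MyNat} → A → Vec A n → Vec A (MyNat.succ n)
```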
Coquand and Huet's Calculus of Constructions, 1986
Thierry Coquand and Gérard Huet created the Calculus of Constructions, a dependent type theory for functions. With inductive types, it would be called "the Calculus of Inductive Constructions" and become the basis for Coq and Lean.
Barendregt's lambda cube, 1991
The lambda cube was not a new type theory but a categorization of existing type theories. The eight corners of the cube included some existing theories with simply typed lambda calculus at the lowest corner and the calculus of constructions at the highest.
Identity Proofs are Not Unique, 1994
Prior to 1994, many type theorists thought all terms of the same identity type were the same; that is, that every identity proof was just reflexivity. But Martin Hofmann and Thomas Streicher showed that this is not forced by the rules of the identity type. In their paper, "The Groupoid Model Refutes Uniqueness of Identity Proofs", they showed that identity proofs can be modeled as a groupoid, in which reflexivity plays the role of the identity element, transitivity the role of composition, and symmetry the role of inverses.
This opened up a new area of research, homotopy type theory, where category theory was applied to the identity type.
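In today's proof assistants this structure on the identity type is easy to exhibit; the following Lean sketch (illustrative only, not taken from Hofmann and Streicher) shows reflexivity as the canonical identity proof, with symmetry and transitivity acting as the inverse and composition operations on identity proofs:

```lean
-- Reflexivity: the canonical proof that a equals itself.
example (a : Nat) : a = a := Eq.refl a

-- Symmetry: turns a proof of a = b into a proof of b = a (an "inverse").
example (a b : Nat) (p : a = b) : b = a := Eq.symm p

-- Transitivity: combines proofs of a = b and b = c (a "composition").
example (a b c : Nat) (p : a = b) (q : b = c) : a = c := Eq.trans p q
```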