Syntactic metaprogramming I
This is the first in a series of posts in which I am going to explain how to use the metaprogramming features of meta-cedille. Before we discuss any specifics, I want to give a little background on the system. Meta-cedille is, of course, based on cedille, a minimalistic, dependently typed programming language without a built-in datatype system. I won’t explain here why cedille is an interesting theory, but you can, for example, read this post by Victor Maia, or have a look at the cedille documentation. The main reason cedille was chosen as the host language for this metaprogramming system is its simple implementation and consistency proof, but this is not its only advantage over other type theories.
Meta-cedille is an extension of cedille (technically, of cedille-core, implemented here) that adds full syntactic metaprogramming without breaking the consistency of the underlying type theory. This means that not only can terms and other code be read and written within the language itself, but the syntax of the language can also be fully modified by the programmer. A user of the language can specify their own LL(1) grammar (a special kind of context-free grammar, restricted so that the parser can always pick the unique applicable rule for a nonterminal by reading just one character of lookahead) to be used for parsing. This feature seems to have lots of potential as a substitute for built-in features, as well as for enabling features that other languages lack. For a somewhat complete list of what I believe to be possible, see here. In particular, it seems possible to implement syntactic sugar for inductive datatypes, which is something Victor Maia talks about in the above-mentioned post. His issue with cedille-core is that defining an inductive datatype requires a lot of boilerplate code, making it difficult to read and write. Furthermore, if syntactic sugar for inductive datatypes were added, it would either be part of the language implementation, making it more complicated (and adding potential for breaking something), or be something expanded to regular cedille-core code before distribution, making the result difficult to read. With meta-cedille, a third option seems possible: adding the syntactic sugar via metaprogramming, combining the advantages of both previously mentioned approaches.
In this first entry, we will talk about how to add a notation for natural numbers to the language. Having the ability to do this was one of the initial design goals of the language. In fact, I always found it annoying that languages like Coq or Agda have no built-in datatype for natural numbers, yet the syntax for writing their elements is built in. Thus, the initial syntax of the language consists of just the terms of cedille-core, plus some necessary extensions for metaprogramming. This syntax was designed with simplicity of implementation in mind, so it is very bare-bones and sometimes hard to read. But the whole point of the language is that this doesn’t matter, as everything can be changed.
This will mostly be an introduction by example. If you want a full explanation of how to modify the grammar of the language, see here. You should have some familiarity with how parsing with a context-free grammar works.
Let’s get started with the parsing rules. For simplicity, we’ll only define notation in base 2, but this generalizes easily:
let init$bindigit$0 := f : Bool.
let init$bindigit$1 := t : Bool.
Note that these are definitions of values of type Bool. These two lines just define alternative names for the two inhabitants of the type. The type annotation on the right is always optional, but it provides a good additional safety check that we really have defined the right thing (it can also be used to replace an inferred type with a nicer, equivalent type). The important part happens to the left of the := sign: the names of values may be used as parsing rules. Parsing rules consist of three parts, separated by $ characters: first a prefix (or namespace, whichever you prefer), second a nonterminal symbol, and third a replacement. The prefix is later passed to a special command that applies the changes to the grammar; all rules with that prefix will be considered. As we want to extend the previous grammar, we have to use the prefix init. The nonterminal and replacement are fairly simple: we introduce two rules for a nonterminal bindigit (which didn’t exist before), which can be parsed either as 0 or as 1. Depending on which digit was parsed, the rule returns either the term f or the term t. Next:
let init$binnat'$_bindigit__binnat'_ :=
<cons Bool> : Π _ : Bool Π _ : [List Bool] [List Bool].
-- with better syntax, this would be Bool -> List Bool -> List Bool
let init$binnat'$ := <nil Bool> : [List Bool].
Here, we’re defining a nonterminal binnat', which can be parsed either by first parsing a bindigit and then another binnat' (i.e. a digit followed by possibly more digits), or by parsing the empty string. Note that the square brackets denote application of a term to a term, and the angle brackets denote erased application of terms (so where you would write f x y in Haskell, you have to write [[f x] y] here). If you don’t know cedille, you can regard the two forms of application as the same thing for now. This syntax is currently used because it gives a simple LL(1) grammar, but I hope to migrate to a better syntax soon.
In this piece of code, you can also see how the parsed data is collected: parsing a binnat' will give us a list of booleans. You can also see that the parsing rules essentially dictate the types of their values (this isn’t enforced: you can put arbitrary code on the right-hand side, but you will get type errors after the parsing step if you don’t assign your types consistently). If you were wondering why the nonterminal is called binnat', here is the reason:
let init$binnat$_bindigit__binnat'_ :=
<cons Bool> : Π _ : Bool Π _ : [List Bool] [List Bool].
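To see how these rules behave as a grammar, here is a small sketch of the corresponding predictive parser in Python (all names here are my own illustration, not part of meta-cedille). It implements binnat ::= bindigit binnat' and binnat' ::= bindigit binnat' | ε, collecting the digits as booleans just as the cedille rules above do:

```python
# Predictive (LL(1)) parser for:
#   binnat  ::= bindigit binnat'
#   binnat' ::= bindigit binnat' | ε
# One character of lookahead is always enough to pick the rule.
def parse_binnat(s, i=0):
    if i >= len(s) or s[i] not in "01":
        raise ValueError("a binnat needs at least one digit")
    digit = s[i] == "1"                 # bindigit: '0' gives f, '1' gives t
    rest, j = parse_binnat_prime(s, i + 1)
    return [digit] + rest, j

def parse_binnat_prime(s, i):
    if i < len(s) and s[i] in "01":     # lookahead chooses the first rule
        digit = s[i] == "1"
        rest, j = parse_binnat_prime(s, i + 1)
        return [digit] + rest, j
    return [], i                        # otherwise: the ε-rule

print(parse_binnat("101"))  # ([True, False, True], 3)
```

Note how the ε-rule for binnat' is selected exactly when the lookahead character is not a digit; this is the LL(1) property at work.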
A binnat should contain at least one digit, so we added an intermediate symbol to ensure this. The final parsing rule is this:
let init$term$=nu=_binnat_ :=
λ bs : [List Bool] [[quote$square quoteBoolListToNat] [quoteListBool bs]]
: Π _ : [List Bool] init$term.
So, we can parse a term by parsing =nu= followed by a binnat. =nu= is an escape code for the Greek character ν. If you are wondering how to write parsing rules that use the $, _ or = characters: you can use =dollar=, =underscore= and =equals= respectively.
The reason we prefix a number with this character is that meta-cedille currently supports de Bruijn indices, which are written as unprefixed numbers. This will probably be changed in some form later, but for now we have to put up with an extra prefix. The more interesting part of this definition is to the right of the := sign. First, note that the type shows us that this should be a function that takes a list of booleans and returns an init$term. This is just the type of terms; it contains only a single $ character, so it is not used as a parsing rule. It was simply convenient to name it like this, because all its constructors are parsing rules.
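The naming scheme can be summed up in a short sketch (my own illustration in Python, not part of meta-cedille): splitting a name at the $ characters recovers the three parts of a rule, escape codes allow the separator characters to appear literally, and a name with fewer than two $ characters, like init$term, is just a name:

```python
# Escape codes let rule names contain special characters literally.
ESCAPES = {"=nu=": "ν", "=dollar=": "$", "=underscore=": "_", "=equals=": "="}

def decode(part):
    for code, ch in ESCAPES.items():
        part = part.replace(code, ch)
    return part

def rule_parts(name):
    parts = name.split("$")   # a literal '$' would be written =dollar=
    if len(parts) != 3:       # a parsing rule has exactly three parts
        return None
    return [decode(p) for p in parts]

print(rule_parts("init$term$=nu=_binnat_"))  # ['init', 'term', 'ν_binnat_']
print(rule_parts("init$term"))               # None: only one '$', not a rule
```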
The more interesting part of this rule is how it is defined. First, the function quote$square (again, not a parsing rule) represents a term constructor that applies the first term to the second, with square brackets. Thus, if a and b are terms, [[quote$square a] b] represents the term [a b]. The term quoteBoolListToNat is a reference to the function boolListToNat, so [[quote$square quoteBoolListToNat] x] represents the term [boolListToNat x]. Finally, quoteListBool is a function that turns a list of booleans into a term by recursively writing down the constructors of that list and applying them properly. So, in the end, this function takes the list of bools you provided and replaces it with an application of boolListToNat to that list. The implementation of boolListToNat is more or less standard, so I’m not going to talk about it here. quoteListBool is implemented as follows:
let quoteListBool :=
[[<<inductionList Bool> λ _ : [List Bool] init$term> quoteNilBool]
λ b : Bool λ bs : [List Bool] λ rec : init$term
[[quote$square [[quote$square quoteConsBool] [quoteBoolFun b]]] rec]]
: Π _ : [List Bool] init$term.
This needs some more explanation: <<inductionList Bool> λ _ : [List Bool] init$term> is a function that takes as arguments a base case for the induction and an inductive step, and returns a function that turns a list of bools into a term. In case the list is empty, we want to write <nil Bool>, which is what the term quoteNilBool does. In the inductive step, the list consists of a first element and a rest, bound as b and bs respectively. We also get a term rec, which is essentially the term [quoteListBool bs] (but we cannot write this literally: we cannot refer to a name in its own definition). We want to write [[<cons Bool> b] bs] here, which is precisely what the term after the lambdas does.
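The induction above can be rendered as a short Python sketch (my own, with terms modeled as plain strings; the real quoteListBool builds actual cedille syntax trees). It shows how the quoted output term is assembled from the base case and the inductive step:

```python
# square(a, b) builds the syntax of the application [a b].
def square(a, b):
    return f"[{a} {b}]"

def quote_bool(b):
    return "t" if b else "f"

def quote_list_bool(bs):
    if not bs:                      # base case: what quoteNilBool writes
        return "<nil Bool>"
    b, rest = bs[0], bs[1:]
    rec = quote_list_bool(rest)     # plays the role of the 'rec' argument
    return square(square("<cons Bool>", quote_bool(b)), rec)

print(quote_list_bool([True, False]))
# [[<cons Bool> t] [[<cons Bool> f] <nil Bool>]]
```

The output is the *syntax* of a list of booleans, not the list itself; that syntax is then wrapped in an application of boolListToNat by the parsing rule.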
If you are wondering why I used this function and a quoted list of bools instead of just writing [suc [suc ... [suc zero] ... ]], this was done for performance reasons. The generated code needs to be typechecked afterwards, and writing the number down in unary notation would make the typechecking time exponential in the number of digits, whereas this solution is linear. The environment meta-cedille currently runs in is already quite slow, and needing ten minutes to typecheck the number 1000 is just bad.
After all these definitions are finished, we can apply them:
seteval eval init stmt.
The second argument to seteval tells the environment to use the init prefix, which adds our new parsing rules to the already existing ones. The third argument specifies the initial nonterminal symbol to be used by the grammar (something we haven’t interacted with so far; we could add a new top-level statement to the language by adding a parsing rule for stmt). The first argument specifies the evaluator, which is used to execute the parsed data and potentially modify the environment. This is just a regular function with a special result type that can be defined by the user. I will write about evaluators in future posts.
You can find the full source code here, but to run it you will need to import some other files beforehand (there is no module system yet, and as of now redundant imports will throw errors and abort execution). You can have a look at this file, which imports all the necessary files.