I’ll develop a new language in which that mismatch doesn’t exist: the Symmetric Interaction Calculus (SIC). That means every Symmetric Interaction Net can be interpreted as an SIC term, and vice versa; and each graph rewrite corresponds to one rule of the calculus.
I’ll briefly explain how all terms of that language can be optimally reduced with the Abstract Algorithm, and how every intermediate step of that reduction corresponds to an SIC term. For example, here is one step of such a reduction, where a copied function turns into a superposition:

    ----------------------------- lambda copy
    term = λt. ((f0 & f1) (x0 & x1))

Note that, since there is no rule to duplicate a superposed value, SIC corresponds to the abstract algorithm with only one “fan” node.
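To make the “no rule to duplicate a superposed value” point concrete, here is a minimal sketch in Python. The tuple encoding and the `dup` helper are my own illustration, not part of SIC’s specification: copying a superposition is undone rather than nested.

```python
# Illustrative sketch only: terms are plain tuples, and ("sup", a, b)
# stands for the superposition a & b.

def dup(term):
    """Return two copies of `term`.
    Copying a superposition does not produce a super-superposition:
    it is undone, sending each superposed value to one of the copies."""
    if term[0] == "sup":
        return term[1], term[2]
    # Any other term is just shared in this sketch (a real
    # implementation copies it lazily, through fan nodes).
    return term, term

# Copying the superposition a & b yields a and b, not (a & b) & (a & b):
print(dup(("sup", ("var", "a"), ("var", "b"))))  # → (('var', 'a'), ('var', 'b'))
```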
It is not only asymptotically faster than JavaScript, Scheme, Haskell or any other language that has lambdas, as explained in my previous post, but it is also actually efficient: we’re reaching 150 million rewrites per second on a GTX 1080 with a massively parallel OpenCL implementation, which makes it faster than GHC even for “normal” programs. If you want to hear more about this, feel free to follow me on Twitter. It is quite dead right now, but I’ll be using it whenever I have something new to show off.

Finally, it has a clear cost model, which makes it well suited as the foundation of a decentralized VM: a functional counterpart of the EVM.

We’ll completely ignore the abstract algorithm and develop the calculus by itself. Let’s start by defining a syntax with functions, variables and function application:

    term ::=
      λx. term      -- function
      (term term)   -- application
      x             -- variable

And, well, that’s it. Recall our definition of application: as soon as a function is applied to an argument, the occurrence of its variable is replaced by that argument, wherever that occurrence is, even outside the function itself. Does that break the world? No: it is still a computing model with unambiguous reduction rules. Obviously, we can’t actually do that, as we already agreed copying isn’t a constant-time operation: first, because the substitution procedure must traverse over the whole term; if implemented naively, that’s a quadratic amount of work.

And what should happen when a superposed value must itself be copied? There are two reasonable answers: either we copy the superposition, which may lead to a super-superposition, or we undo the superposition, moving each superposed value to each target location.
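The substitution-based application described above can be sketched as follows. This is a minimal Python illustration of the naive approach; the tuple encoding is my own, and it uses ordinary scoped substitution for simplicity.

```python
# Terms: ("var", name) | ("lam", name, body) | ("app", fun, arg)

def subst(term, name, value):
    """Replace free occurrences of `name` in `term` by `value`.
    This traverses the whole term, which is why doing it once per
    application step is, naively, a quadratic amount of work."""
    tag = term[0]
    if tag == "var":
        return value if term[1] == name else term
    if tag == "lam":
        x, body = term[1], term[2]
        # the binder shadows `name`: stop substituting underneath it
        return term if x == name else ("lam", x, subst(body, name, value))
    return ("app", subst(term[1], name, value), subst(term[2], name, value))

def step(term):
    """One step: ((λx. body) arg) → body[x := arg]."""
    if term[0] == "app" and term[1][0] == "lam":
        _, (_, x, body), arg = term
        return subst(body, x, arg)
    return term  # no top-level redex

# ((λx. x) y) reduces to y:
print(step(("app", ("lam", "x", ("var", "x")), ("var", "y"))))  # → ('var', 'y')
```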
The proposal is to use the algorithm without an oracle (i.e., incomplete, but efficient), and restrict our term language so that it can only express terms compatible with it.
The most promising way to do that is to shift to a different underlying logic: Elementary Affine Logic, which essentially stratifies terms in “layers”, preventing a term from duplicating another term at the same level, making it Absal-compatible. This proposal is particularly interesting because of its nice normalization properties.
This suggests a mismatch between the Abstract Algorithm and the λ-calculus. For example, here is a superposed application step, where a superposition of functions is applied to an argument:

    --------------------------------------- superposed application
    K = x0 & x1
    term = λt. ((f0 K) & (f1 K))
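A superposed application step can be sketched in code under the same tuple encoding (mine, purely illustrative). Note that sharing `k` between both branches is a simplification: a complete implementation would duplicate the argument through a fan node.

```python
# ("app", f, a) is an application; ("sup", u, v) is the superposition u & v.

def superposed_application(term):
    """((f0 & f1) K)  →  ((f0 K) & (f1 K))"""
    assert term[0] == "app" and term[1][0] == "sup"
    _, (_, f0, f1), k = term
    # Each superposed function receives the argument; in a full
    # implementation K would itself be copied via a fan node.
    return ("sup", ("app", f0, k), ("app", f1, k))

print(superposed_application(("app", ("sup", "f0", "f1"), "K")))
# → ('sup', ('app', 'f0', 'K'), ('app', 'f1', 'K'))
```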
But there are workarounds: If the abstract algorithm doesn’t cover all terms, can’t we change it so that it does?