β-normalization and partially-applied built-ins

I’ve been working on a Java implementation, and in general it’s been pretty straightforward to write directly from the spec, but I’ve run into something I don’t understand.

From the spec:

β-normalization does not simplify partially applied built-in functions

…and the reduction rules later on that page for the individual folds only apply in the fully-saturated case.
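
As I read it, a conforming normalizer should leave a partial application untouched and only fire the reduction rule once all four arguments are present. For instance (the outputs below are just what I'd expect from a spec-compliant normalizer, not something I've actually run):

$ dhall normalize <<< "Natural/fold 0"
Natural/fold 0

$ dhall normalize <<< "Natural/fold 2 Natural (λ(x : Natural) → x + 1) 0"
2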

The Haskell implementation does simplify if any arguments are provided, though:

$ dhall normalize <<< "Natural/fold 0"
λ(natural : Type) → λ(succ : natural → natural) → λ(zero : natural) → zero

I can just follow the Haskell implementation here, but I’m wondering whether the spec or my reading is wrong.

Your reading is correct: the Haskell implementation is not standard-compliant here. FWIW, dhall-rust does respect the spec on that point.
So that leaves us to figure out which approach is best.

I personally prefer what the current standard says, mostly because for any number that isn't tiny, the term obtained by expanding Natural/fold n would be unnecessarily large, and I don't really see the benefit of that.
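
For example, extrapolating from the output above, Natural/fold 3 would already expand to something like

λ(natural : Type) → λ(succ : natural → natural) → λ(zero : natural) → succ (succ (succ zero))

and the term keeps growing linearly with n.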

Thanks for bringing this up!

I wonder whether the Haskell implementation's behaviour is, in some sense, a leftover from the fusion support?

I also wonder whether there’s a performance benefit to β-normalizing partial applications…

In any case, we should add some acceptance tests to clarify what’s right.
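
Roughly, I'm picturing an A/B normalization test pair where the partial application is its own normal form; the file names below are just a sketch, not a final layout:

-- partiallyAppliedNaturalFoldA.dhall (input)
Natural/fold 0

-- partiallyAppliedNaturalFoldB.dhall (expected β-normal form: the partial application is left alone)
Natural/fold 0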

Yeah, this is a bug in the Haskell implementation, which was introduced when switching to a faster evaluator in:

… which introduced the non-standard behavior for Natural/fold

I have a pull request up to add a regression test to the standard: https://github.com/dhall-lang/dhall-lang/pull/950
