Smartly.io DevTalks - Lambda Calculus post-talk Q&A

Lambda Calculus Q&A

These are my responses to some of the questions asked at the end of my Lambda Calculus talk, held on 2019-10-15 at Smartly.io's DevTalks in Helsinki. I've grouped similar questions together.

Books and Resources

  • What's the one encompassing book you would recommend to read on this very interesting topic?
  • Can you recommend a book (or books) that cover some of the history you shared in more detail?
  • So what IS the book to read to get into understanding all of this?

Greg Michaelson's "An Introduction to Functional Programming Through Lambda Calculus" is more approachable than most mathematical treatments of the subject (such as Henk Barendregt's "Lambda Calculus with Types"), but more thorough than most introductory articles. It relates the subject to programming and includes helpful exercises; I recommend it.

The portion of my talk which covers history is largely drawn from Felice Cardone and J. Roger Hindley's "History of Lambda-calculus and Combinatory Logic". Ron Pressler also cites interesting primary sources in his article "Finite of Sense and Infinite of Thought: A History of Computation, Logic and Algebra, Part III".

You can see a selection of other material I found helpful at https://github.com/glebec/lambda-talk#additional-resources.

Lambda Calculus in Practice?

  • Do you use lambda calculus in your role at Google?
  • Where to find good examples of using lambda calculus in practice?

The extent to which lambda calculus is practical depends on what you decide counts as LC!

The mathematical formalism – that is, the specific grammar featuring 𝜆 symbols, jargon like "eta reduction," and so on – is mostly used to communicate ideas and derive proofs about computation. I've not used that much in my professional roles, but occasionally it serves as a convenient lingua franca in discussion with functional programmers.

Many of the concepts associated with LC – computation via evaluation, first-class / higher-order functions, functional representations, combinators, closures, and currying – are not strictly unique to LC, but show up in combinatory logic and functional programming. If those features count as "practical LC," then many programmers use some LC regularly, consciously or not.
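
To make one of those concepts concrete, here is a minimal Haskell sketch of a "functional representation": Church booleans, where a truth value is encoded directly as the function that consumes it. (This is a typed sketch; in the untyped LC these are simply λt.λf.t and λt.λf.f.)

```haskell
-- Church booleans: truth values encoded as two-argument functions.
true :: a -> a -> a
true t _ = t

false :: a -> a -> a
false _ f = f

-- "if/then/else" is just applying the boolean to both branches:
--   true  "yes" "no"  evaluates to "yes"
--   false "yes" "no"  evaluates to "no"
```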

I think the spirit of the question can best be answered as follows: studying lambda calculus has helped me become more fluent in thinking functionally, and even led me to a few specific techniques I might not have learned otherwise. For example, the Y combinator (or its JavaScript-compatible version, the Z combinator) can be used to decouple a recursive algorithm from the recursion step itself. That decoupling allows for stackable recursion decorators such as automatic logging and memoization.
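
The linked demonstrations below are in JavaScript; here is a rough Haskell analogue of the same idea, using Haskell's lazy `fix` in place of Y/Z (the names `factF` and `logging` are my own, for illustration):

```haskell
import Debug.Trace (trace)

-- The algorithm, with the recursive call abstracted out as `rec`:
factF :: (Integer -> Integer) -> Integer -> Integer
factF rec n = if n <= 1 then 1 else n * rec (n - 1)

-- Haskell's native fixed-point combinator plays the role of Y/Z
-- (also available as Data.Function.fix):
fix :: (a -> a) -> a
fix f = f (fix f)

-- A stackable decorator that logs each call before delegating:
logging :: Show a => ((a -> b) -> a -> b) -> ((a -> b) -> a -> b)
logging f rec n = trace ("called with " ++ show n) (f rec n)

fact, factLogged :: Integer -> Integer
fact       = fix factF            -- plain recursion
factLogged = fix (logging factF)  -- same algorithm, now with logging
```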

I've written up some demonstrations at https://github.com/glebec/lambda-talk-practical, but please keep in mind that these are minimalist examples; in a shared codebase I would wrap them in an expressive API.

Static Typing

  • How would you contrast statically typed functional languages like Haskell with dynamically typed ones like Lisp?

I don't have enough experience with dynamically-typed primarily-functional languages like Lisp to contrast them with Haskell, Elm, and PureScript. I will say that Haskell etc. let me more confidently refactor code or model problem domains than a mixed-paradigm dynamic language like JavaScript. And while Java's type system can feel to me like a chore or obstacle, Haskell's type system usually feels more like a force multiplier or creative aid.

To do this topic justice would require a much longer article, but I will try to summarize a few relevant points.

Algebraic Data Types are a remarkably direct and expressive way to model information – they practically fuse conceptualization and implementation into a single artifact. I doubt I could do better than Scott Wlaschin in explaining this, so I recommend everyone watch his excellent talk, Domain Modeling Made Functional.
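
For instance, here is a small hypothetical sketch of ADT-based modeling, loosely in the spirit of Wlaschin's examples:

```haskell
-- A hypothetical payments domain: the type *is* the specification.
data PaymentMethod
  = Cash
  | Check CheckNumber
  | Card CardType CardNumber

newtype CheckNumber = CheckNumber Int
newtype CardNumber  = CardNumber String
data CardType       = Visa | Mastercard

-- Pattern matching encourages handling every case:
describe :: PaymentMethod -> String
describe Cash                      = "cash"
describe (Check (CheckNumber n))   = "check #" ++ show n
describe (Card _ (CardNumber num)) = "card " ++ num
```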

Hindley-Milner type inference means you often don't need to explicitly annotate your types. It is still considered good practice to annotate top-level or particularly "interesting" values for the sake of human readers, but the compiler is often smart enough to tell you what those types should be. The signal-to-noise ratio of types in a Haskell program is generally good.
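
A small illustration:

```haskell
-- No annotation needed; GHC infers the most general type.
-- Asking GHCi with :type pairUp reports:  pairUp :: a -> b -> (a, b)
pairUp x y = (x, y)

-- Annotating top-level definitions is still good practice for readers:
swap :: (a, b) -> (b, a)
swap (x, y) = (y, x)
```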

Haskell's type syntax is concise and decoupled from values; function signatures can be written on their own line. That separation helps you think explicitly in terms of value-level code implementing type-level code. Types are the interface, where architecture is determined. Values are interchangeable and refactorable dependencies, which satisfy the types. You can sketch out an entire program (with static verification) before writing any implementations.
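
A sketch of that types-first workflow (all names here are hypothetical); the bodies are stubbed with `undefined` so the whole design typechecks before any implementation exists:

```haskell
data Config = Config { port :: Int }
data Report = Report String

parseConfig :: String -> Maybe Config
parseConfig = undefined  -- to be implemented

runJob :: Config -> IO Report
runJob = undefined       -- to be implemented

render :: Report -> String
render = undefined       -- to be implemented
```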

Another advantage of this decoupling is that you can communicate a lot about pure functions from just their type signatures; they make for great documentation. You can even search the package registries of Haskell, Elm, and PureScript by standalone function signature (e.g. via Haskell's Hoogle)!

I don't want to give the wrong impression – nothing is perfect. But I miss Haskell's type system when working in other language families, which I think is fairly telling.

Purely Functional State

  • Real-world applications might still need state, like a database, instead of being purely functional?

Haskell, Elm, and PureScript are all purely functional, yet are used to write web servers, interactive command line tools, client-side UIs and other highly stateful applications that interact with the real world. How can this be? The short answer is that they "cheat" in clever ways, which preserve many benefits of pure FP while still enabling the developer to express imperative concepts.

There have been various models for how to make purely functional languages "useful," as IO is inherently an impure activity. One early model for Haskell (somewhat similar to Elm's current model) is to make the language a tool for implementing a pure handler function of type Event -> Message, bridging a hypothetical stream of input events to a hypothetical stream of output messages. When coupled to a stateful runtime system, this functional reactive programming model means the code you write is pure, but subsequently embedded by the compiler into an impure pre-designed wrapper.
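
A toy sketch of that idea, with hypothetical Event and Message types – the handler is one pure function, and an impure runtime feeds it events and executes its messages:

```haskell
data Event   = KeyPress Char | Tick
data Message = Render String | Beep

-- The entire "program" the developer writes is pure:
handle :: Event -> Message
handle (KeyPress c) = Render [c]
handle Tick         = Beep
```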

The current model used in Haskell and PureScript is slightly different. They express impure concepts through an IO or Effect datatype. IO values are program descriptions rather than traditional subroutines. You can build and combine them (with static verification), but never "invoke" them directly. Merely creating an IO value does not immediately trigger any actual I/O, just as writing down a recipe does not turn on your oven.
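
A small illustration in Haskell:

```haskell
-- `greet` is a *description* of an effect, not a performed effect:
greet :: IO ()
greet = putStrLn "hello"

-- Descriptions can be manipulated like any other value.
-- Nothing is printed by this definition:
greetings :: [IO ()]
greetings = replicate 3 greet
```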

After combining compatible smaller IO values into a single aggregate IO value, assigning it to Haskell's main variable tells the compiler to turn it into an executable binary. In other words, Haskell and PureScript are pure languages for metaprogramming an impure program in a disciplined way.
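
For example, do-notation glues small IO values into one larger description, and binding the result to `main` is what makes it the program GHC runs:

```haskell
main :: IO ()
main = do
  putStrLn "What is your name?"
  name <- getLine
  putStrLn ("Hello, " ++ name ++ "!")
```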

This approach has several pros and cons. Merely importing a module which defines some IO value never "fires the missiles," because an IO value is always a program-in-waiting, not an already-commenced procedure. This layer of indirection preserves various technical benefits of pure FP, such as referential transparency and reusability.

However, in some ways this system departs from the "spirit" of FP, because the syntax and techniques used to define an IO value are superficially imperative and stateful. In other words, when writing IO values in Haskell, the developer is often thinking imperatively.

Thankfully, even if we accept that portions of a Haskell program return to being conceptually imperative, Haskell's type system explicitly demarcates which functions are pure FP in both the moral and technical senses, versus which are only technically pure. It is an antipattern to stuff all of your logic inside IO values; factoring out the pure bits and minimizing the IO footprint of your program is a core Haskell skillset. That skillset then translates well to other languages, even ones without an explicit IO type.
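
A minimal sketch of that factoring (hypothetical names):

```haskell
-- Pure core: all the logic, trivially testable.
summarize :: [Integer] -> String
summarize xs = "count = " ++ show (length xs)
            ++ ", total = " ++ show (sum xs)

-- Thin IO shell: only the unavoidable input/output.
main :: IO ()
main = do
  input <- getContents
  putStrLn (summarize (map read (lines input)))
```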

FP/LC Popularity

  • Yes, but Lisp (Scheme) was invented before C, and C is more popular these days (don't want to be offensive, I am an FP fan). Q: why is C(ada) more popular?
  • When do you reckon lambda calculus will become a viable alternative for the 'average' developer?
  • Perhaps functional programming will win the world by other languages becoming functional step by step with new functional features?

Why FP remains relatively niche is a perpetually-debated and vexatious question for fans of the subject. Just a few weeks ago, Richard Feldman gave a talk, "Why Isn't Functional Programming the Norm?". That talk and others cover some reasons FP might not be as popular as its proponents think it should be.

Early FP wasn't very performant, and modern FP (especially a lazy language like Haskell) can still be difficult to reason about with respect to memory usage, lazy evaluation being famously prone to space leaks. By its very nature, FP sits at the extreme end of the abstraction spectrum, with little to no concept of the physical machine that actually performs computations. Perhaps that means FP is doomed to never be direct enough for the needs of most "real world" developers, who require more predictable performance characteristics.

On the other hand, modern functional language compilers like GHC feature remarkable optimizations, and with profiling and hand-tuning, many functional programs can be tweaked to have excellent performance. And of course, different problem domains require different levels of performance; plenty of mainstream apps today seem to be totally cavalier with memory.

Perhaps the market and education systems are too inefficient or unfair to overcome the systemic entrenchment of languages with large enterprise support, marketing budgets, and platform exclusivity. There may be a hiring pipeline problem, or a lack of experienced CTOs willing to try a less-established approach. The most obvious counterpoint would be that if a small, nimble startup could gain a truly significant competitive edge by using a functional language, then surely we would see more companies using such languages.

Maybe FP is fun for a small group of academics and geeks, but isn't approachable enough for "normal" developers looking to Get Things Done. Functional programmers often respond by saying that if learning FP were the norm, then other paradigms like imperative / OOP would be seen as difficult and confusing; that it is a matter of familiarity. Or, perhaps FP has a steeper initial learning curve, but is easier in the long run as a codebase grows and changes.

I personally believe that anyone can learn to write effective, simple, daily-grind functional programs – that FP is not (just) an ivory tower. Sometimes the excitement FPers have for bleeding-edge "type astronaut" features can make FP seem more impenetrable than it really is.

Ultimately though, we are indeed seeing a growing trend of FP adoption, not just pure FP but also the importation of functional concepts into mixed-paradigm languages. Will we ever reach a tipping point where FP becomes the "norm"? I really couldn't say, but I hope we will at least get to the point where there are enough pure FP jobs to satisfy the community.

In the meantime, learning a functional language can make someone more adept at programming in general. Actually trying a variety of approaches is the best way to learn the strengths and weaknesses of each!
