What I Didn't Know about Functional Programming until 2020

  1. Programming using a series of transformations and aggregations, something I've been doing for years, is known as programming in the map/reduce style.
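
    A hypothetical sketch of that style (the Order type and its fields are made up): first transform each element, then aggregate the results.

    case class Order(item: String, quantity: Int, unitPrice: Double)

    def total(orders: List[Order]): Double =
      orders
        .map(order => order.quantity * order.unitPrice) // transformation (map)
        .sum                                            // aggregation (reduce/fold)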

  2. The more abstract a type is, the greater its cardinality, and the smaller the set of operations it supports. So make use of universal quantifiers, particularly by implementing fully parametric functions. Their types guide you towards their term-level definitions by narrowing down the number of possible implementations. In other words, the type system of Scala (or Haskell, for that matter) is not only great for catching errors at compile time, but is also capable of leading you to the correct solution.
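
    For example, parametricity alone pins down the possible (total) implementations of signatures like these (pick is a made-up name):

    // there is essentially only one total implementation: return the argument
    def identity[A](a: A): A = a

    // there are exactly two total implementations: return the first, or return the second
    def pick[A](first: A, second: A): A = first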

  3. You can encode union types by combining different Scala features such as type constructors, subtyping and implicits, and by taking advantage of the Curry-Howard Isomorphism and De Morgan's Laws for negation. Miles Sabin has a well-written blog post on how to implement this in Scala.
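
    A rough sketch of that encoding, reproduced from memory (so the details may differ slightly from Sabin's post):

    type ¬[A] = A => Nothing                 // negation via Curry-Howard: A implies falsehood
    type ¬¬[A] = ¬[¬[A]]                     // double negation
    type ∨[T, U] = ¬[¬[T] with ¬[U]]         // De Morgan: T or U = not (not T and not U)
    type |∨|[T, U] = { type λ[X] = ¬¬[X] <:< (T ∨ U) }

    def size[T: (Int |∨| String)#λ](t: T): Int = t match {
      case i: Int    => i
      case s: String => s.length
    }

    size(23)      // compiles
    size("foo")   // compiles
    // size(1.0)  // does not compile: Double is not a member of the union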

  4. According to the Curry-Howard Isomorphism --- the correspondence between logic and computation --- conjunction corresponds to product types, disjunction to sum types, negation to the type A => Nothing (for all A), and implication to arrows/morphisms. Scala encodes universal quantification via parametric polymorphism. So def identity[A]: A => A, for instance, can be read as "For all A, it is the case that A implies A". Existential quantification, on the other hand, can be encoded using abstract type members. The correspondence is shown below:

    Scala                                         Logic
    -----                                         -----
    (Int, String)                                 Conjunction
    case class Person(name: String, age: Int)     Conjunction (since it's just a labelled tuple)
    Either[Int, String]                           Disjunction
    P => Q                                        P implies Q (i.e., if P, then Q)
    P => Nothing                                  Logical negation (i.e., not P)
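
    Universal quantification is shown above with identity; here's a sketch of the existential side, encoded with an abstract type member (Showable and its members are made-up names):

    // "there exists some T such that we have a value of T and a way to show it"
    trait Showable {
      type T
      def value: T
      def show: T => String
    }

    val intShowable: Showable = new Showable {
      type T = Int
      val value = 42
      val show = (n: Int) => n.toString
    }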
  5. The cardinality of a product type is equal to the product of the cardinalities of each of its components, hence the name. For instance, a pair of boolean values has a cardinality of four (i.e., (true, true), (true, false), (false, true), and (false, false)).

    The cardinality of a sum type is equal to the sum of the cardinalities of its constructors. For instance, Either[Boolean, Int] is a set that contains true, false, and all the possible integers, so its cardinality is the cardinality of Boolean plus the cardinality of Int.

    The cardinality of a function type is equal to the cardinality of the codomain (or logical consequent) raised to the power of the cardinality of the domain (or logical antecedent). In other words, A => B has the cardinality |B| ^ |A|.
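
    A few small examples of this arithmetic, with the counts as comments:

    type PairOfBooleans = (Boolean, Boolean)    // product:     2 * 2 = 4 inhabitants
    type BoolOrUnit     = Either[Boolean, Unit] // sum:         2 + 1 = 3 inhabitants
    type BoolEndo       = Boolean => Boolean    // exponential: 2 ^ 2 = 4 inhabitants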

  6. The category of sets, Set, is a monoidal category with respect to both the product and the coproduct. Unit, being a singleton set, is the neutral (unit) object for products, while Nothing (or Void in Haskell), being the empty set, is the neutral object for coproducts.
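
    The neutrality can be witnessed with a pair of isomorphisms (the helper names below are made up):

    // (A, Unit) is isomorphic to A: pairing with Unit adds no information
    def toPair[A](a: A): (A, Unit) = (a, ())
    def fromPair[A](p: (A, Unit)): A = p._1

    // Either[A, Nothing] is isomorphic to A: the Right case can never occur
    def toEither[A](a: A): Either[A, Nothing] = Left(a)
    def fromEither[A](e: Either[A, Nothing]): A = e match {
      case Left(a)  => a
      case Right(n) => n // unreachable: there are no values of type Nothing
    }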

  7. The basic building blocks of algebraic data types are the constant of unit, const(()), and the identity on any type A.
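
    One way to read this (my interpretation, not necessarily the author's) is in terms of a constant functor and an identity functor, from which other ADT shapes can be assembled:

    final case class Const[C, A](value: C) // ignores A; Const with C = Unit plays the role of "1"
    final case class Id[A](value: A)       // simply wraps an A

    // e.g. Option[A] has the same shape as 1 + A
    type OptionShape[A] = Either[Const[Unit, A], Id[A]]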

  8. The functors I've been using the most are covariant functors, particularly covariant endofunctors.

    A different kind of functor, coming from the opposite category, also exists. It's called a contravariant endofunctor, and its map has the following signature:

    def contramap[A, B](f: B => A): F[A] => F[B]

    It maps the same objects as the covariant functor, but the morphism of the domain category (in the case of fmap, the function A => B) is reversed. This is because we are really mapping a morphism B => A coming from the dual (opposite) category.

    To refresh our knowledge of functors, an endofunctor is a functor whose domain and codomain categories are the same. Since Scala always deals with the category of types/sets and functions, all of its functors are endofunctors. So we can drop the endo prefix for brevity.
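
    Minimal sketches of both typeclasses (the trait definitions here are my own, but the signatures match how fmap and contramap are used later in this list):

    trait Functor[F[_]] {
      def fmap[A, B](fa: F[A])(f: A => B): F[B]
    }

    trait Contravariant[F[_]] {
      def contramap[A, B](fa: F[A])(f: B => A): F[B]
    }

    // a covariant instance: Option maps the values it contains
    implicit val optionFunctor: Functor[Option] = new Functor[Option] {
      def fmap[A, B](fa: Option[A])(f: A => B): Option[B] = fa match {
        case Some(a) => Some(f(a))
        case None    => None
      }
    }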

  9. The partial application of const to the value of type Unit, const(()), is isomorphic to the unit function unit[A]: A => Unit. In the study of exponentials and algebraic data types, both represent the rule that one raised to any power is one.
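
    A quick sketch, with const and unit written out explicitly:

    def const[A, B](b: B): A => B = _ => b
    def unit[A]: A => Unit = _ => ()

    // const(()) and unit[A] describe the same function: the only pure inhabitant of
    // A => Unit, mirroring the fact that 1 ^ a = 1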

  10. Exponentials can explain why curried functions and their corresponding non-curried forms are isomorphic. We know that functions taking n arguments are essentially just functions taking a single n-tuple argument. So we need to see that (A, B) => C is isomorphic to A => B => C, for all A, B, and C. By categorical exponentials, we have:

    // Note: this isn't valid Scala code
    (C ^ B) ^ A <=> C ^ (B * A)

    I'm borrowing the equivalence symbol <=> from propositional and first-order logic. Mathematicians sometimes refer to this as the bi-conditional symbol if they are explicit about the absence of causation (which is part of mathematical implication).
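
    The isomorphism can be witnessed directly in Scala (the standard library offers similar helpers, e.g. Function2#curried):

    def curry[A, B, C](f: (A, B) => C): A => B => C = a => b => f(a, b)
    def uncurry[A, B, C](f: A => B => C): (A, B) => C = (a, b) => f(a)(b)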

  11. Exponentials and algebraic data types also formally explain why a function of a sum type can be broken down into multiple functions, each taking in a member of the sum type. The following equation generalizes this idea:

    A ^ (B + C) = (A ^ B)(A ^ C)

    This means Either[B, C] => A can be broken down into a pair of functions B => A and C => A, though in Scala, it's usually just implemented using pattern matching:

    def handleTransactionResult(result: Either[ErrorCode, User]) = result match {
      case Left(error) => ???
      case Right(user) => ???
    }

    In Haskell, the syntax of equation-style pattern matching (as opposed to a case ... of expression) puts more emphasis on the idea of separating the implementation into smaller functions.
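
    The decomposition itself can be sketched as a pair of inverse functions (split and join are made-up names):

    def split[A, B, C](f: Either[B, C] => A): (B => A, C => A) =
      (b => f(Left(b)), c => f(Right(c)))

    def join[A, B, C](fs: (B => A, C => A)): Either[B, C] => A = {
      case Left(b)  => fs._1(b)
      case Right(c) => fs._2(c)
    }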

  12. The relationship between monads and function composition can be observed in the fish operator >=>. The signature of >=> closely resembles the signature of function composition and in fact differs only in that the codomains in >=> are effectful computations:

    def >=>[M[_], A, B, C]: (A => M[B]) => (B => M[C]) => (A => M[C])

    A default implementation, which eliminates the output's domain A by applying the first function to it, gives rise to the need for the bind operator >>= (or flatMap in Scala):

    def >=>[M[_], A, B, C](f: A => M[B])(g: B => M[C])(implicit M: Monad[M]): A => M[C] = 
      a => M.flatMap(f(a))(g)
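
    For comparison, here's ordinary function composition written in the same shape (a sketch; Function1 already provides andThen/compose):

    def compose[A, B, C](f: A => B)(g: B => C): A => C = a => g(f(a))
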
  13. The relationship between monads and Kleisli's >=> helps explain why monads are so prevalent in functional programming. Imagine solving a complex problem. The functional way of doing it is by recursively decomposing it into smaller sub-problems, the solution of each implemented as a simple function, and then composing all these functions back to solve the whole problem. In practice, however, things might not be easy to compose directly because some, if not all, of these functions might return effectful computations (i.e. values wrapped in some contexts).

    To glue together these effectful computations (or "embellished types" according to Bartosz Milewski), you would need the fish operator and/or monads (or even monad transformers, if the embellishments aren't the same).

  14. The bind operator >>= (or flatMap in Scala) is not strictly required to define a monad. All you need are two basic operators: join and return (or unit). Remember that a monad lives in the category of endofunctors, so it is in particular an endofunctor and therefore inherits fmap (or map in Scala). You can define >>= in terms of fmap and join. Here's a minimal (alternative) definition of a monad:

    trait Monad[F[_]] extends Functor[F] {
      def >>=[A, B](monad: F[A])(f: A => F[B]): F[B] = join(fmap(monad)(f))
    
      def join[A](monad: F[F[A]]): F[A]
    
      def unit[A](value: A): F[A]
    }

    The two basic operators join and unit correspond to a monoid's binary operation and identity element, respectively, an observation that gave birth to (or at least demystifies) the infamous quote: "A monad is a monoid in the category of endofunctors".
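
    As a sanity check, here's a sketch of an Option instance written only in terms of join, unit, and fmap (assuming a Functor trait whose fmap has the signature used above):

    implicit val optionMonad: Monad[Option] = new Monad[Option] {
      def fmap[A, B](fa: Option[A])(f: A => B): Option[B] = fa.map(f)
      def join[A](monad: Option[Option[A]]): Option[A] = monad.getOrElse(None)
      def unit[A](value: A): Option[A] = Some(value)
    }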

  15. A Profunctor is just a bifunctor that is contravariant in its first argument. The mapping of morphisms dimap, therefore, is the same as bifunctor's bimap, except the domain and codomain of the first argument are switched:

    trait Profunctor[F[_, _]] {
      def dimap[A, B, C, D]: (B => A) => (C => D) => F[A, C] => F[B, D]
    }
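
    The canonical instance is the plain function type, which consumes its first type parameter and produces its second:

    implicit val function1Profunctor: Profunctor[Function1] = new Profunctor[Function1] {
      def dimap[A, B, C, D]: (B => A) => (C => D) => (A => C) => (B => D) =
        f => g => h => g compose h compose f
    }
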
  16. A type variable is said to be in negative position if it appears in the domain of a function, and in positive position if it appears in the codomain. A type is covariant in a type variable that sits in positive position, and contravariant in one that sits in negative position. In the following example, Runnable is contravariant in A and covariant in B:

    final case class Runnable[A, B](run: A => B)

    Note that A is in negative position and B is in positive position. A trickier example involves a double negative position, which ultimately results in a positive one, as follows:

    final case class Runnable[A, B](run: (A => B) => Unit)

    Note that while A still appears in negative position in A => B, the whole function A => B has become the domain of the run function, which means A is actually in positive position (the two negatives cancel out).

    Knowing which type variables are in negative or positive position is extremely helpful in figuring out, ahead of implementation, which types can be made instances of Functor, Contravariant, Bifunctor, Profunctor and other related typeclasses. For instance, you can't implement a covariant functor for A => String because A appears in negative position, but you can implement a contravariant functor for it:

    type Stringify[A] = A => String
    
    implicit val stringifyContravariant: Contravariant[Stringify] = new Contravariant[Stringify] {
      def contramap[A, B](stringify: Stringify[A])(f: B => A) = stringify compose f
    }
  17. Free monads and their signatures can be derived by simply observing the constructors of the free monoid. After all, a monad is just a monoid (in the category of endofunctors).

    Loosely speaking, a free monoid is a monoid that is free of undesired information loss (or collapsing of elements). The traditional instance would be the (List[A], ++, Nil) monoid for all A. The binary operator ++ would not unnecessarily collapse values from lists, and the empty list is the perfect neutral element that follows the identity element axiom.

    By the same token, a free monad is free of unneeded collapsing of functors. You can therefore model the free monad from the List[A] type. The correspondence is shown below:

    Free Monoid                           Free Monad
    -----------                           ----------
    List[A]                               Free[F[_], A]
    Nil                                   Return[F[_], A](a: A)
    Cons[A](head: A, tail: List[A])       Bind[F[_], A](fa: F[Free[F, A]])

    This should also remind you of a monad's return (or unit) and join operators. Return contains just a, without the functor application, while Bind applies the functor F to the free monad, making it a recursive data structure, like a list.
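
    Putting the right-hand column of the table into code gives a sketch of the data type:

    sealed trait Free[F[_], A]
    final case class Return[F[_], A](a: A) extends Free[F, A]
    final case class Bind[F[_], A](fa: F[Free[F, A]]) extends Free[F, A]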

  18. For any monoid M, there exists a monoid homomorphism (otherwise known as a monoid morphism) from the free monoid on A (here, List[A]) to M, provided we are given a function A => M:

    def hom[A, M](ma: List[A])(f: A => M)(implicit M: Monoid[M]): M

    For instance, given the free monoid (List[String], ++, Nil), the monoid (Int, +, 0) would be the codomain of the homomorphism induced by _.length.
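
    A sketch of the homomorphism itself, assuming a minimal Monoid typeclass (the empty/combine names are illustrative):

    trait Monoid[M] {
      def empty: M
      def combine(x: M, y: M): M
    }

    // fold the free monoid into M using f and M's own operations
    def hom[A, M](ma: List[A])(f: A => M)(implicit M: Monoid[M]): M =
      ma.foldLeft(M.empty)((acc, a) => M.combine(acc, f(a)))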

  19. For every object a in a category C, there exists a hom-functor Hom(a, -): C -> Set that maps each object x to the hom-set Hom(a, x). Hom(a, -) is essentially a shorthand for x -> Hom(a, x) (which reminds me of Scala's _ notation in simple lambdas).
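
    In the category of Scala types and functions, Hom(A, -) sends a type X to the function type A => X, and its action on morphisms is just post-composition (homFmap is a made-up name):

    type Hom[A, X] = A => X

    def homFmap[A, X, Y](hx: Hom[A, X])(f: X => Y): Hom[A, Y] = f compose hx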

  20. A representable functor is any functor F that is isomorphic to the hom-functor Hom(a, -) for some object a of the category C. Since F and Hom(a, -) are functors, the isomorphism between them and its inverse are natural transformations (that is, these morphisms satisfy the naturality conditions).

    The typeclass definition allows for a form of memoization as well as lookups:

    trait Representable[F[_]] {
      type Repr
    
      def tabulate[A]: (Repr => A) => F[A]
    
      def index[A]: F[A] => Repr => A
    }

    We wrote Repr => A for the hom-set Hom(a, x), the result of applying Hom(a, -) to x. tabulate turns a function into a functor/data structure (like memoizing a function), while index takes a functor and a "key" and yields a value.
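
    For instance, the pair functor (a container with exactly two "slots") is representable, with Boolean as the representing object:

    type Pair[A] = (A, A)

    val pairRepresentable: Representable[Pair] = new Representable[Pair] {
      type Repr = Boolean

      def tabulate[A]: (Boolean => A) => Pair[A] = f => (f(false), f(true))

      def index[A]: Pair[A] => Boolean => A = {
        case (first, second) => b => if (b) second else first
      }
    }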

  21. You can't have a representable functor for a sum type. The reason for this can be explained using the algebra of types. That is, for every representable functor F and hom-functor Hom(a, -), we can equate their types using exponentials:

    Hom(a, -) <=> F
    Hom(a, x) <=> F x
    x ^ a = F x  // remember exponentials
    
    // generalizing...
    (-) ^ a = F
    
    // with simple algebraic manipulation...
    a = log F
    

    With the equations above, we can tell that if a is a sum type, then F is a product type (we've already seen something like this before):

    let a := c + d
    (-) ^ a <=> ((-) ^ c)((-) ^ d)
    

    This means we can have a product for F x. However, if F x were a sum type, things would be different. This is easier to show using code:

    def repEither: Representable[Either[String, *]] = new Representable[Either[String, *]] {
      type Repr = Int
    
      def tabulate[A] = f => Right(f(0))
    
      def index[A] = {
        case Right(value) => _ => value
        case Left(err) => ??? // what to do here????
      }
    }

    Apparently, the issue happens in the implementation of index. We can't ensure, at the type level, that the sum-type constructor we'd get would provide us with the value we need. The same can be said of other sum types, such as List.


Next List: What I Didn't Know about Functional Programming until 2021
