Franco Battaglia

Functional Program Design in Scala ​course notes


Cheat Sheet (TODO)

Recap on functions and pattern matching

Case classes

Scala's preferred way to define complex data, e.g. to represent JSON

abstract class JSON
case class JSeq (elems: List[JSON]) extends JSON
case class JObj (bindings: Map[String, JSON]) extends JSON
case class JNum (num: Double) extends JSON
case class JStr (str: String) extends JSON
case class JBool (b: Boolean) extends JSON
case object JNull extends JSON

val data = JObj(Map(
  "firstName" -> JStr("John"),
  "phoneNumbers" -> JSeq(List(JStr("1"), JStr("2")))
))

// Let's pattern match on JSON
def show(json: JSON): String = json match {
  case JSeq(elems) => "[" + (elems map show mkString ", ") + "]"
  ...
  case JStr(str) => str
  ...
  case JNull => "null"
}
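The elided cases can be filled in. A self-contained sketch (repeating the class definitions; the exact output formatting, such as quoting strings, is my choice, not fixed by the notes):

```scala
abstract class JSON
case class JSeq(elems: List[JSON]) extends JSON
case class JObj(bindings: Map[String, JSON]) extends JSON
case class JNum(num: Double) extends JSON
case class JStr(str: String) extends JSON
case class JBool(b: Boolean) extends JSON
case object JNull extends JSON

def show(json: JSON): String = json match {
  case JSeq(elems) => "[" + (elems map show mkString ", ") + "]"
  case JObj(bindings) =>
    val assocs = bindings map { case (key, value) => "\"" + key + "\": " + show(value) }
    "{" + (assocs mkString ", ") + "}"
  case JNum(num)  => num.toString
  case JStr(str)  => "\"" + str + "\""
  case JBool(b)   => b.toString
  case JNull      => "null"
}

show(JObj(Map("name" -> JStr("John"), "ok" -> JBool(true))))
// {"name": "John", "ok": true}
```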
Functions are objects

A function type like JSON => String is, in fact, just the class Function1[JSON, String], which has an apply method. An anonymous function value expands to new Function1[JSON, String] { def apply ... }, which is evaluated to create the function object

Because functions are traits, we can subclass the function type:

trait Map[Key, Value] extends (Key => Value) ...
// remember: Scala maps are functions, here is the wrapper

trait Seq[Elem] extends (Int => Elem)
elems(i) ~ elems apply i // since Seq[Elem] extends Int => Elem, indexing a sequence is just function application
The type of blocks

What is the type of the block { case "ping" => "pong" }? Based on what we've seen, we could say String => String, and that would be partially correct. Let's see:

val f: String => String = { case "ping" => "pong" } // compiles correctly

f("ping") // pong
f("another string") // MatchError on "another string"

It turns out f's type is indeed String => String, but it is only defined for the argument "ping": anything else throws a MatchError, and the compiler did not warn us about this

How can we tell beforehand if f is going to fail? Partial functions!

Partial functions
trait PartialFunction[-A, +R] extends Function1[A, R] {
  def apply(x: A): R
  def isDefinedAt(x: A): Boolean
}

val f: PartialFunction[String, String] = { case "ping" => "pong" }

PartialFunction is a subtype of Function1, but you can query whether the function is defined for a given argument. When the expected type is a partial function, Scala will expand the block as follows:

f.isDefinedAt("ping") // true
f.isDefinedAt("another string") // false

val f: ... // this will get expanded to:
new PartialFunction[String, String] {
  def apply(x: String) = x match {
    case "ping" => "pong"
  }
  def isDefinedAt(x: String) = x match {
    case "ping" => true
    case _ => false
  }
}

Note: it is possible to get a MatchError even when using partial functions: isDefinedAt only checks the outermost pattern, so a nested pattern match (e.g. on a list's elements) can still fail
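The caveat can be demonstrated (names and example are mine): isDefinedAt checks only the outermost pattern, so a nested match can still fail:

```scala
val g: PartialFunction[List[Int], String] = {
  case Nil => "empty"
  case x :: rest =>
    rest match { // this nested match is NOT covered by isDefinedAt
      case Nil => s"$x is the only element"
    }
}

g.isDefinedAt(List(1, 2)) // true: the outer pattern x :: rest matches
// g(List(1, 2)) still throws a MatchError, raised by the nested match
```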

Recap on collections

Note: see notes of the first course

Scala has a hierarchy of collection classes with a set of common methods: map, flatMap, filter, foldLeft

For expressions are syntactic sugar for combinations of map, flatMap and withFilter. Translation rules rewrite generators and filters into (lazily applied) calls of these methods

Bonus: the left-hand side of a generator may be a pattern, which acts as an implicit filter:

val data: List[JSON] = ...
for {
  JObj(bindings) <- data
  JSeq(phones) = bindings("phoneNumbers")
  JObj(phone) <- phones
  JStr(digits) = phone("number")
  if digits startsWith "212"
} yield (bindings("firstName"), bindings("lastName"))

If a pattern is used, an additional implicit filter keeps only the values that match the pattern

1.1 - Queries with For

The for notation is somewhat equivalent to query languages for databases

Let's see an example:

case class Book(title: String, authors: List[String])
val books: List[Book] = Set("title", List("author1", ...), ...)

// find the titles of books whose author's name is "Bird":
for (b <- books; a <- b.authors if a startsWith "Bird") yield b.title
// find titles which have the word "Program":
for (b <- books if b.title indexOf "Program" >= 0) yield b.title
// find names of all authors who have written at least 2 books:
for {
  b1 <- books
  b2 <- books
  if b1.title < b2.title // lexicographical order guarantees unique pairs of different books
  a1 <- b1.authors
  a2 <- b2.authors
  if a1 == a2
} yield a1 // note: an author may appear multiple times; use distinct to deduplicate

1.2 - Translation of for

map, flatMap and filter can be expressed as for expressions:

def map[T, U](xs: List[T], f: T => U): List[U] = for (x <- xs) yield f(x)

def flatMap[T, U](xs: List[T], f: T => Iterable[U]): List[U] = for (x <- xs; y <- f(x)) yield y

def filter[T](xs: List[T], p: T => Boolean): List[T] = for (x <- xs if p(x)) yield x

In reality, the conversion goes the other way around: the compiler translates for expressions into calls of map, flatMap and withFilter:

// base rule:
for (x <- e1) yield e2 // maps to:
e1.map(x => e2)

// additional rules:
for (x <- e1 if f; s) yield e2 // where f is a filter and s is a sequence of gens and filters, maps to:
for (x <- e1.withFilter(x => f); s) yield e2

for (x <- e1; y <- e2; s) yield e3 // two generators map to a flatMap, and the inner generator varies faster:
e1.flatMap(x => for (y <- e2; s) yield e3)

// translation exercise:
for (b <- books; a <- b.authors if a startsWith "Bird") yield b.title
// solution (using rule 3):
books.flatMap(b => for (a <- b.authors if a startsWith "Bird") yield b.title)
// using rule 2:
books.flatMap(b => for (a <- b.authors.withFilter(a => a startsWith "Bird")) yield b.title)
// finally, using rule 1:
books.flatMap(b => b.authors.withFilter(a => a startsWith "Bird").map(a => b.title))

withFilter is a lazy variant of filter which does not produce an intermediate list, but filters the following map or flatMap

1.3 - Functional random generators

For expressions are not tied to collections. For any domain we only need to define map, flatMap and withFilter, and then for expressions can be used on it

Random value generators

Can we get random values for booleans, strings, pairs, tuples, lists, sets, trees? Let's use an integers generator as a base case

trait Generator[+T] {
  def generate: T
}

val integers = new Generator[Int] {
  val rand = new java.util.Random
  def generate = rand.nextInt()
}

val booleans = new Generator[Boolean] {
  def generate = integers.generate > 0
}
...

This can be cumbersome, because of the boilerplate for every type. Ideally, we would write:

val booleans = for (x <- integers) yield x > 0
// which would then map to:
val booleans = integers map (x => x > 0)
//
def pairs[T, U](t: Generator[T], u: Generator[U]) = for {
  x <- t
  y <- u
} yield (x, y)
// which maps to:
t flatMap(x => u map (y => (x, y)))

Now we need an implementation of flatMap and map on our type for this to work!

trait Generator[+T] {
  self => // self is an alias of outer this

  def generate: T

  def map[S](f: T=> S): Generator[S] = new Generator[S] {
    def generate = f(self.generate)
  }

  def flatMap[S](f: T => Generator[S]): Generator[S] = new Generator[S] {
    def generate = f(self.generate).generate
  }
}

Here map returns a new generator which, when asked to generate, first produces a random value of type T and then applies the transforming function to obtain an S, e.g. generate an integer and turn it into a boolean using x => x > 0

Using self as an alias for the outer this allows us to call the outer generate definition (the inner anonymous class defines its own generate). This is the same as writing Generator.this.generate

Let's translate the new definitions:

val booleans = for (x <- integers) yield x > 0
// according to our def of map, this means:
val booleans = new Generator[Boolean] {
  def generate = integers.generate > 0
}

def pairs[T, U](t: Generator[T], u: Generator[U]) = t flatMap { x => u map { y => (x, y) } }

// which, after expanding flatMap and map, behaves like:
def pairs[T, U](t: Generator[T], u: Generator[U]) = new Generator[(T, U)] { def generate = (t.generate, u.generate) }

And look at some more examples:

def choose(lo: Int, hi: Int): Generator[Int] = for (x <- integers) yield lo + x % (hi - lo) // a number between lo and hi (a real implementation should handle negative x)

def oneOf[T](xs: T*): Generator[T] = for (idx <- choose(0, xs.length)) yield xs(idx) // pick a random element. T* means varargs: pass any number of values

def lists: Generator[List[Int]] = for {
  isEmpty <- booleans
  list <- if (isEmpty) emptyLists else nonEmptyLists
} yield list

def emptyLists = single(Nil)
def nonEmptyLists = for {
  head <- integers
  tail <- lists
} yield head :: tail
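single, used by emptyLists above (and later named as the generator monad's unit), is never defined in these notes; a minimal sketch:

```scala
trait Generator[+T] { def generate: T } // as defined earlier

def single[T](x: T): Generator[T] = new Generator[T] {
  def generate: T = x // a constant generator: always produces x
}

single(Nil).generate // the empty list, every time
```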
// exercise: implement Tree generator

def trees: Generator[Tree[Int]] = for {
  isLeaf <- booleans
  tree <- if (isLeaf) leafs else inners
} yield tree

def leafs = for {
  x <- integers
} yield Leaf(x)
def inners = for {
  left <- trees
  right <- trees
} yield Inner(left, right)

Bonus: let's create a generic test function:

def test[T](g: Generator[T], numTimes: Int = 100)(test: T => Boolean): Unit = {
  for (i <- 0 until numTimes) {
    val value = g.generate
    assert(test(value))
  }
}

This function runs the test predicate on numTimes values produced by the generator g; the outcome is not deterministic because generators are random
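A self-contained usage sketch, restating trimmed versions of the earlier definitions; the property, doubling always yields an even number, holds for every generated value:

```scala
trait Generator[+T] { self =>
  def generate: T
  def map[S](f: T => S): Generator[S] = new Generator[S] {
    def generate = f(self.generate)
  }
}

val integers = new Generator[Int] {
  val rand = new java.util.Random
  def generate = rand.nextInt()
}

def test[T](g: Generator[T], numTimes: Int = 100)(p: T => Boolean): Unit =
  for (_ <- 0 until numTimes)
    assert(p(g.generate)) // fails loudly on the first counterexample

test(integers map (_ * 2))(x => x % 2 == 0) // passes: 2 * n is even, even after overflow
```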

Instead of writing tests, we can write properties which are assumed to hold using the ScalaCheck tool, which generates random test cases:

forAll { (l1: List[Int], l2: List[Int]) => l1.size + l2.size == (l1 ++ l2).size }

1.4 - Monads

Data structures with map and flatMap are common. The name that describes this class of data structures with some algebraic laws is monad.

A monad is a parametric type M[T] with two operations: flatMap and unit. flatMap is commonly called bind

trait M[T] {
  def flatMap[U](f: T => M[U]): M[U]
}

def unit[T](x: T): M[T]
  • List is a monad with unit(x) = List(x)
  • Set is a monad with unit(x) = Set(x)
  • Option is a monad with unit(x) = Some(x)
  • Generator is a monad with unit(x) = single(x)

flatMap is an operation on each of these types

map can be defined for every monad as a combination of flatMap and unit (apply f, wrap each result with unit, and let flatMap flatten the extra layer):

m map f ~ m flatMap (x => unit(f(x)))
m map f ~ m flatMap (f andThen unit)

To qualify as a monad, a type has to satisfy three laws:

  1. Associativity of flatMap
  2. Left unit: wrapping x and flatmapping f equals f(x) <=> unit(x) flatMap f == f(x)
  3. Right unit: flatmapping a monad m to unit equals m <=> m flatMap unit == m
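The three laws can be checked concretely, for instance on List (my example, not from the notes):

```scala
val m = List(1, 2)
val f = (x: Int) => List(x, x + 10)
val g = (x: Int) => List(x, -x)

// 1. Associativity of flatMap
assert((m flatMap f flatMap g) == (m flatMap (x => f(x) flatMap g)))
// 2. Left unit, with unit(x) = List(x)
assert((List(5) flatMap f) == f(5))
// 3. Right unit
assert((m flatMap (x => List(x))) == m)
```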
Checking monad laws on Option
abstract class Option[+T] {
  def flatMap[U](f: T => Option[U]): Option[U] = this match {
    case Some(x) => f(x)
    case None => None
  }
}

// show left unit law
Some(x) flatMap f == f(x)
Some(x) match {
  case Some(x) => f(x) // the law holds
  ...
}
// show right unit law
opt flatMap Some == opt
opt match {
  case Some(x) => Some(x)
  ...
} == opt
// check associativity: left to reader...

We should care about these laws because they guarantee that for expressions on monadic types behave predictably and can be refactored safely

Checking monad laws on Try

Try resembles Option, but instead of using Some/None there is a Success case with a value and a Failure that contains an exception:

abstract class Try[+T]
case class Success[T](x: T) extends Try[T]
case class Failure(ex: Exception) extends Try[Nothing]

Try is used to pass results of computations that can fail with an exception between threads and computers

Usage of try:

// Try uses a Java try and wraps it into the Try box. expr is CBN in case it fails
import scala.util.control.NonFatal

object Try {
  def apply[T](expr: => T): Try[T] = try Success(expr) catch { case NonFatal(ex) => Failure(ex) }
}

Try(expr)

Very interesting: just like Option, Try computations can be composed in for expressions:

for {
  x <- computeX
  y <- computeY
} yield f(x, y)

If computeX and computeY succeed with Success, this will return Success(f(x, y)). If either computation fails, this will return Failure(ex)

But Try is not a monad, because the left unit law fails! With unit = Try, the law would require:

Try(expr) flatMap f == f(expr)

The two sides are not equivalent: the left-hand side can never throw a non-fatal exception, because Try absorbs it into a Failure, while the right-hand side throws whatever exception expr or f throws

Try trades this monad law for another, more useful law:

An expression composed from Try, map and flatMap will never throw a non-fatal exception
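This trade-off is easy to observe with the real scala.util.Try:

```scala
import scala.util.{Try, Success, Failure}

val r = for {
  x <- Try("42".toInt)   // Success(42)
  y <- Try("oops".toInt) // Failure(NumberFormatException)
} yield x + y

// no exception escaped: the failure is reified as a value
r match {
  case Success(v)  => println(s"got $v")
  case Failure(ex) => println(s"failed with ${ex.getClass.getSimpleName}")
}
// prints: failed with NumberFormatException
```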

Week 1 assignment

Github repo (private)

2.1 - Structural induction on Trees

Prove a property P(t) for all trees t of a certain type:

  1. Show that P(l) holds for all leaves l of a tree
  2. For each type of internal node t with subtrees Sn, show that P(Sn) implies P(t) (if all subtrees satisfy => the tree does too)
// our class for today's example:
abstract class IntSet {
  def incl(x: Int): IntSet
  def contains(x: Int): Boolean
}

object Empty extends IntSet {
  def incl(x: Int): IntSet = NonEmpty(x, Empty, Empty)
  def contains(x: Int): Boolean = false
}

case class NonEmpty(elem: Int, left: IntSet, right: IntSet) extends IntSet {
  def incl(x: Int): IntSet =
    if (x < elem) NonEmpty(elem, left incl x, right)
    else if (x > elem) NonEmpty(elem, left, right incl x)
    else this // x is already in the set

  def contains(x: Int): Boolean =
    if (x < elem) left contains x
    else if (x > elem) right contains x
    else true
}

How can we prove the correctness of this implementation? By proving the laws it respects. For any set s:

  1. Empty contains x = false
  2. (s incl x) contains x = true
  3. (s incl x) contains y = s contains y if x != y
Let's prove (1)

According to the definition of the object Empty, this is true

Let's prove (2)

Proof by structural induction:

// Base case: `s = Empty`
(Empty incl x) contains x
~ NonEmpty(x, Empty, Empty) contains x // by definition of Empty.incl
~ true // by definition of NonEmpty.contains

// Induction step: `s = NonEmpty(z, l, r)`
// case z = x:
(NonEmpty(x, l, r) incl x) contains x
~ NonEmpty(x, l, r) contains x // by definition of NonEmpty.incl (x is already the root)
~ true // by definition of NonEmpty.contains
// case z = y with y < x:
(NonEmpty(y, l, r) incl x) contains x
~ NonEmpty(y, l, r incl x) contains x // incl: x > y, so x goes into the right subtree
~ (r incl x) contains x // contains: x > y, so look right
~ true // induction hypothesis
// case z = y with y > x: analogous to the previous case
Let's prove (3)

Proof by structural induction (let y < x):

// Base case: `s = Empty`
(Empty incl y) contains x == Empty contains x?
~ NonEmpty(y, Empty, Empty) contains x
~ Empty contains x // we get the equality

// Induction step: `s = NonEmpty(z, l, r)`
// let z = x
(NonEmpty(x, l, r) incl y) contains x == NonEmpty(x, l, r) contains x
~ NonEmpty(x, l incl y, r) contains x
~ true
// let z = y
(NonEmpty(y, l, r) incl y) contains x == NonEmpty(y, l, r) contains x
~ NonEmpty(y, l, r) contains x // incl returns the tree unchanged, so both sides are equal
// 3 cases left: left to the reader

2.2 - Streams

Streams are like Lists, but evaluated only on demand (lazy)

If we want to get the second prime between 1000 and 10000, this is a valid solution:

((1000 to 10000) filter isPrime)(1)

But this solution is pretty bad, because it will first compute all primes in the provided range and only then take the second element of the list

We need to avoid computing the tail of a sequence until it is needed

Streams are similar to lists, but their tail is evaluated on demand

val xs = Stream.cons(1, Stream.cons(2, Stream.empty))

Stream(1, 2, 3)

(1 to 1000).toStream // Stream(1, ?)

Streams cannot be built with ::, because :: always produces a List

There is the #:: operator, which produces a Stream

Concrete implementations of Streams are defined in the Stream companion object. This is why Nil turns into Stream.empty and :: turns into Stream.cons

The tail parameter of a Stream is a by-name (=>) parameter instead of by-value, in order to defer evaluation of the tail
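A minimal sketch of such a cons cell (MyStream is a toy name, not the standard library; the lazy val anticipates the memoization discussed in 2.3):

```scala
trait MyStream[+T] {
  def isEmpty: Boolean
  def head: T
  def tail: MyStream[T]
}

object MyStream {
  // tl is call-by-name (=>): not evaluated at construction time
  def cons[T](hd: T, tl: => MyStream[T]): MyStream[T] = new MyStream[T] {
    def isEmpty = false
    def head = hd
    lazy val tail = tl // evaluated at most once, on first demand
  }

  val empty: MyStream[Nothing] = new MyStream[Nothing] {
    def isEmpty = true
    def head = throw new NoSuchElementException("empty.head")
    def tail = throw new NoSuchElementException("empty.tail")
  }
}

var tailEvals = 0
val s = MyStream.cons(1, { tailEvals += 1; MyStream.empty })
// nothing evaluated yet: tailEvals == 0
s.tail; s.tail
// tailEvals == 1: the by-name tail was forced exactly once
```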

2.3 - Lazy evaluation

  • Do things as late as possible
  • Never do things twice

In the previous implementation of Stream (2.2), if tail is called several times, the stream will be recomputed each time

This can be solved by storing the result of the first evaluation of tail and reusing the stored result instead of recomputing

  • By-name evaluation: re-evaluated every time it is used (def)
  • Lazy evaluation: evaluated at most once, on first use (lazy val)
  • Strict evaluation: evaluated immediately (val)
def expr = {
  val x = { print("x"); 1 }
  lazy val y = { print("y"); 2 }
  def z = { print("z"); 3 }
  z + y + x + z + y + x
}
expr // xzyz
// x is evaluated immediately first, then z is always recomputed, y is only computed once

2.4 - Computing with infinite sequences

Infinite stream of all integers starting from a given number:

def from(n: Int): Stream[Int] = n #:: from(n + 1)
// stream of all natural numbers:
val nats = from(0)
// stream of multiples of 4
nats map (_ * 4)
The sieve of Eratosthenes

Technique to calculate prime numbers:

  1. Start with all integers from 2
  2. Eliminate all multiples of 2
  3. The first element is now 3, a prime number
  4. Eliminate all multiples of 3
  5. Iterate forever. The first number is always a prime
def sieve(s: Stream[Int]): Stream[Int] =
  s.head #:: sieve(s.tail filter (_ % s.head != 0))
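The same code written with LazyList (the current standard-library name for Stream) shows that only what is demanded gets computed:

```scala
def from(n: Int): LazyList[Int] = n #:: from(n + 1)

def sieve(s: LazyList[Int]): LazyList[Int] =
  s.head #:: sieve(s.tail filter (_ % s.head != 0))

val primes = sieve(from(2))
primes.take(5).toList // List(2, 3, 5, 7, 11)
```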
Square roots
def sqrtStream(x: Double): Stream[Double] = {
  def improve(guess: Double) = (guess + x / guess ) / 2
  lazy val guesses: Stream[Double] = 1 #:: (guesses map improve)
  guesses // won't compute until necessary
}

sqrtStream(4) take 10 // at most 10 elements will ever be evaluated

2.5 - Case Study: the Water Pouring Problem

Given an arbitrary number of glasses with arbitrary capacities, find a sequence of moves (fill, empty, pour) that produces a given target amount in some glass


Week 2 assignment

Github repo (private)

3.1 && 3.2 - Implicits (changed in Scala 3)

The Scala compiler is able to infer types from values and expressions

val y = x + 1

Here, the compiler is able to infer that y has type Int because the + operation is between two Ints, and thus returns an Int value (inferring types from values)

The compiler is also able to infer values from types. Let's see this example of sorting lists:

def sort[A](xs: List[A]): List[A] = { ... }

The generic type parameter A is not enough, because the < operation is not defined for the arbitrary type A

One solution is to pass, as an additional parameter, a comparator function. In the Scala standard library, there is a type to represent Orderings

// first approach: use a `lessThan` function
def sort[A](xs: List[A])(lessThan: (A, A) => Boolean): List[A] = ???
...
sort(xs)((x, y) => x < y)
// second refactor: using Ordering
def sort[A](xs: List[A])(ord: Ordering[A]): List[A] = ???
...
sort(strings)(Ordering.String)

Can we avoid passing the second parameter, which is really verbose and can be inferred?

def sort[A](xs: List[A])(implicit ord: Ordering[A]): List[A] = ???

Using the keyword implicit, the compiler will try to infer the argument based on its expected type

  1. A method can have only one implicit parameter list, and it must be the last parameter list
  2. When calling the method, the implicits are optionally left out
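A runnable sketch of this signature (delegating to the standard library's sorted for brevity):

```scala
def sort[A](xs: List[A])(implicit ord: Ordering[A]): List[A] =
  xs.sorted(ord) // delegating to the stdlib; a real implementation could be merge sort

sort(List(3, 1, 2))                       // List(1, 2, 3): the compiler supplies Ordering.Int
sort(List("banana", "apple"))             // List(apple, banana): Ordering.String
sort(List(3, 1, 2))(Ordering.Int.reverse) // List(3, 2, 1): an explicit argument is still allowed
```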
Searching for implicits

How does the compiler choose the correct definition when trying to infer an implicit parameter? It searches for definitions which:

  • Have type T
  • Are marked implicit
  • Are visible at the point of function call, or defined in a companion object
object Ordering {
  implicit val Int: Ordering[Int] = ...
}

implicit def orderingPair[A, B](implicit
  orderingA: Ordering[A],
  orderingB: Ordering[B]
): Ordering[(A, B)]

If there isn't any matching definition, an error is reported

If more than one implicit definition is found, an ambiguity is reported, unless one is more specific than the other

A is more specific than B if:

  • A has more "fixed" (non-generic) parts, or
  • A is a subclass of B
Implicit search scope

The search for an implicit value of type T first looks for all visible implicit definitions (inherited, imported, defined in scope)

If nothing is found at lexical scope, it continues searching in companion objects associated with T

A companion object is an object that has the same name as a type (i.e. object Int)

Types associated with T are:

  • Its parents
  • The traits in which it is defined, i.e. trait Some[T]
  • T itself
Syntactic sugar for context bounds

You may omit the implicit parameter list, if it is context bound

def printSorted[A: Ordering](as: List[A]): Unit = ??? // this is equivalent to:
def printSorted[A](as: List[A])(implicit ev1: Ordering[A]): Unit = ???
Implicit query

By calling the implicitly operation, one can query the implicit value of a type

def implicitly[A](implicit value: A): A = value
implicitly[Ordering[Int]] // scala.math.Ordering$Int$@73564ab0

3.3 - Type-directed programming quiz

Github repo (private)

3.4 - Type classes

Ordering is a type class

Type classes provide another form of polymorphism. For every implicit value of type Ordering[A], the compiler resolves at compile time the implementation that matches the passed A, defined inside the Ordering object

Consider the rational numbers example and Ordering:

case class Rational(num: Int, denom: Int)

object RationalOrdering {
  implicit val orderingRational: Ordering[Rational] =
    new Ordering[Rational] {
      def compare(q: Rational, r: Rational): Int =
        q.num * r.denom - r.num * q.denom
    }
}
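With the instance in implicit scope, standard operations such as sorted pick it up automatically (restating the definitions so the snippet runs on its own):

```scala
case class Rational(num: Int, denom: Int)

implicit val orderingRational: Ordering[Rational] =
  new Ordering[Rational] {
    def compare(q: Rational, r: Rational): Int =
      q.num * r.denom - r.num * q.denom
  }

List(Rational(3, 4), Rational(1, 2)).sorted // List(Rational(1,2), Rational(3,4))
```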
Type classes laws

Type classes are often accompanied by laws, which describe properties that instances of the type class must satisfy

This is very useful to users, because the laws tell them exactly what they can rely on when extending a type class or using its instances implicitly

For example, instances of Ordering[A] must satisfy:

  • Inverse: sign(compare(x, y)) == -sign(compare(y, x))
  • Transitive: if x < y and y < z, then x < z
  • Consistent: if x == y, then sign(compare(x, z)) == sign(compare(y, z))

Each type class has its laws, and should provide ways for instance implementers to check the laws

Example of type class: Ring

A ring is an algebraic structure which defines addition (+) and multiplication (*)

Rings are useful to abstract over various domains (arithmetic, polynomials, series, matrices, functions) and come with laws which should always hold

trait Ring[A] { // define the type class trait
  def plus(x: A, y: A): A
  def mult(x: A, y: A): A
  def inverse(x: A): A
  def zero: A
  def one: A
}

object Ring { // define implementations in companion objects
  implicit val ringInt: Ring[Int] = new Ring[Int] {
    def plus(x: Int, y: Int): Int = x + y
    def mult(x: Int, y: Int): Int = x * y
    def inverse(x: Int): Int = -x
    def zero: Int = 0
    def one: Int = 1
  }
}

// a rule check example:
def plusAssociativity[A](x: A, y: A, z: A)(implicit ring: Ring[A]): Boolean =
  ring.plus(ring.plus(x, y), z) == ring.plus(x, ring.plus(y, z))

In Scala, the Numeric type class represents a ring structure
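For example, a function generic over any Numeric type (my example, not from the notes):

```scala
// Numeric[A] provides zero, plus, times, negate: a ring-like interface
def sumAll[A](xs: List[A])(implicit num: Numeric[A]): A =
  xs.foldLeft(num.zero)(num.plus)

sumAll(List(1, 2, 3))  // 6
sumAll(List(1.5, 2.5)) // 4.0
```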

3.5 - Conditional implicit definitions

What happens if the want to sort a List of type A?

implicit def orderingList[A](implicit ord: Ordering[A]): Ordering[List[A]] =
  new Ordering[List[A]] {
    def compare(xs: List[A], ys: List[A]) =
      (xs, ys) match {
        case (x :: xsTail, y :: ysTail) =>
          val c = ord.compare(x, y)
          if (c != 0) c else compare(xsTail, ysTail)
        case (Nil, Nil) => 0
        case (_, Nil)   => 1
        case (Nil, _)   => -1
      }
  } 

Here, we take advantage of the implicit Ordering parameter, and we generalize it to compare Lists of any type

val xss = List(List(1, 2, 3), List(1), List(1, 1, 3)) // List[List[Int]]
sort(xss)
// this is equal to writing:
sort[List[Int]](xss)(orderingList(Ordering.Int))

Here the compiler will combine two implicit definitions (orderingList and Ordering.Int), first matching on type List[A] with the defined object and then looking up for implicits of type A for Ordering

3.6 - Implicit conversions

Here is a Scala definition of a Json AST:

sealed trait Json
case class JNumber(value: BigDecimal) extends Json
case class JString(value: String) extends Json
case class JBoolean(value: Boolean) extends Json
case class JArray(elems: List[Json]) extends Json
case class JObject(fields: (String, Json)*) extends Json

With this definition, a JSON document can be constructed like the following:

JObject("name" -> JString("Paul"), "age" -> JNumber(42))

This works fine, but it is a bit more verbose than the plain JSON syntax. Would it be possible to design an API providing a syntax closer to plain JSON, like obj("name" -> "Paul", "age" -> 42)?

Let's use implicit conversions:

object Json {
  import scala.language.implicitConversions
  implicit def stringToJson(s: String): Json = JString(s)
  implicit def intToJson(n: Int): Json = JNumber(n)
}

obj("name" -> "Paul", "age" -> 42) // will implicitly be obj("name" -> Json.stringToJson("Paul"), "age" -> Json.intToJson(42))
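The obj helper used above is not defined in the notes; one possible sketch, restating the relevant pieces of the AST:

```scala
import scala.language.implicitConversions

sealed trait Json
case class JNumber(value: BigDecimal) extends Json
case class JString(value: String) extends Json
case class JObject(fields: (String, Json)*) extends Json

implicit def stringToJson(s: String): Json = JString(s)
implicit def intToJson(n: Int): Json = JNumber(n)

def obj(fields: (String, Json)*): Json = JObject(fields: _*)

obj("name" -> "Paul", "age" -> 42)
// == JObject("name" -> JString("Paul"), "age" -> JNumber(42))
```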
Extension methods via implicit classes (superseded by extension syntax in Scala 3)
case class Duration(value: Int, unit: TimeUnit)
val delay = Duration(15, TimeUnit.Second)
// would it be possible to write val delay = 15.seconds ? We need to extend Int with a method `seconds`
object Duration {

  object Syntax {
    import scala.language.implicitConversions
    implicit class HasSeconds(n: Int) {
      def seconds: Duration = Duration(n, TimeUnit.Second)
    }
  }

}

Implicit conversions should be avoided when possible, because they change the type of expressions. Some brief examples:

import scala.language.implicitConversions

case class Rational(numerator: Int, denominator: Int)

object Rational {
  implicit def fromInt(n: Int): Rational = Rational(n, 1)
}
val r: Rational = 42
//////////////
implicit class HasIsEven(n: Int) {
  def isEven: Boolean = n % 2 == 0
}

42.isEven

4.1 - Functions and state

Until now, our programs have been side-effect free. This means time wasn't important, because the substitution model of computation does not care about order if everything is stateless: rewriting can be done anywhere, and all correct rewritings lead to the same solution

This is called confluence, and is also known as the Church–Rosser theorem in lambda calculus

Introducing state

An object has state if its behavior is influenced by its history (i.e. a bank account with a balance)

In Scala, the keyword var is used to define mutable state

4.2 - Identity and change

val x = E; val y = E;

val x = E; val y = x

These two expressions are the same (referential transparency)

But what does "the same" mean? Operational equivalence: x and y are equivalent if no possible test can distinguish between them, i.e. f(x, y) == f(x, x) for every function f

Let's see this with an example:

val x = new BankAccount
val y = new BankAccount

x deposit 30
y withdraw 20 // error: insufficient funds
/////
x deposit 30
x withdraw 20 // balance: 10

Conclusion: x and y are not the same

But what if we had used the substitution model?

val x = new BankAccount
val y = x

The substitution model would let us replace the x in the definition of y by new BankAccount, but we have just seen that these are not the same

When we add assignments in a program, the substitution model stops being valid

4.3 - Loops

Let's define a while loop in Scala using blocks and tail recursion!

@tailrec
def WHILE(condition: => Boolean)(command: => Unit): Unit =
  if (condition) {
    command
    WHILE(condition)(command)
  }
  else ()
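A quick self-contained check of WHILE (restating the definition): the by-name parameters make condition and command re-evaluate on every iteration:

```scala
import scala.annotation.tailrec

@tailrec
def WHILE(condition: => Boolean)(command: => Unit): Unit =
  if (condition) {
    command
    WHILE(condition)(command)
  }

var i = 0
WHILE(i < 3) { i += 1 }
// i is now 3. With by-value parameters the condition would be
// evaluated only once, so the loop would either never run or never stop
```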
For-loops

For-loops translate similarly to for-expressions, but using the foreach combinator instead of map and flatMap

for (i <- 1 until 3; j <- "abc") println(...)
// translates to:
(1 until 3) foreach (i => "abc" foreach (j => println(...)))

4.4 - DSLs and state (Digital circuits example)

Let's use HOF, DSLs and state to model digital circuits

  • A digital circuit is composed of wires and functional components
  • A Wire can carry a true or false signal
  • The base components are NOT, AND, OR
  • The components have a delay, i.e. the outputs don't change immediately

Let's model a half adder, which returns a sum bit S and a carry bit C. Then let's model a full adder, which also takes an input carry (cin)

  • S = (a || b) && NOT(a && b)
  • C = a && b
val a, b, c = new Wire

def inverter(input: Wire, output: Wire): Unit
def andGate(a1: Wire, a2: Wire, output: Wire): Unit
def orGate(a1: Wire, a2: Wire, output: Wire): Unit

def halfAdder(a: Wire, b: Wire, s: Wire, c: Wire): Unit = {
  val d = new Wire
  val e = new Wire
  orGate(a, b, d)
  andGate(a, b, c)
  inverter(c, e)
  andGate(d, e, s)
}

def fullAdder(a: Wire, b: Wire, cin: Wire, sum: Wire, cout: Wire): Unit = {
  val s = new Wire
  val c1 = new Wire
  val c2 = new Wire
  halfAdder(b, cin, s, c1)
  halfAdder(a, s, sum, c2)
  orGate(c1, c2, cout)
}
Implementation of the DSL
type Action = () => Unit // a function which takes no parameters and performs an effect

trait Simulation {
  def currentTime: Int = ???
  def afterDelay(delay: Int)(block: => Unit): Unit = ???
  def run(): Unit = ???
}

class Wire {
  private var signal = false
  private var actions: List[Action] = List()
  def getSignal: Boolean = signal
  def setSignal(s: Boolean): Unit = if (s != signal) {
    signal = s
    actions foreach (_())
  }
  def addAction(a: Action): Unit = {
    actions = a :: actions
    a()
  }
}

def inverter(input: Wire, output: Wire): Unit = {
  def invertAction(): Unit = {
    val inputSig = input.getSignal
    afterDelay(inverterDelay) { output setSignal !inputSig }
  }
  input addAction invertAction
}
...

A system is represented by a mutable list of actions, and those actions change the state of objects and can imply other future actions

5.1 - Imperative event handling (observer pattern)

The observer pattern is used when views need to react to changes in a model

Variants of this pattern are called

  • Publish/subscribe (pub/sub)
  • Model/view/controller (MVC)
    • The model publishes
    • The view subscribes
    • The controller is an optional handler
trait Publisher { // observable
  private var subscribers: Set[Subscriber] = Set() // observers

  def subscribe(subscriber: Subscriber): Unit = subscribers += subscriber

  def unsubscribe(subscriber: Subscriber): Unit = subscribers -= subscriber

  def publish(): Unit = subscribers.foreach(_.handler(this)) // i.e. notify
}

trait Subscriber { // observer
  def handler(pub: Publisher): Unit
}
Pros and cons

Pros

  • Views decoupled from the state
  • n views can have the same state
  • Simple

Cons

  • Imperative style. Handlers are Unit
  • Many moving parts
  • Concurrency makes things complicated (model race conditions)
  • Views are tightly bound to one state
  • Bug-prone

5.2 - Functional reactive programming

What is Functional reactive programming (FRP)?

FRP (Flapjax, Elm, Bacon.js, React4J) started in 1997 with the paper Functional Reactive Animation.

Event-streaming dataflow programming systems such as Rx are related to FRP, but not strictly FRP

Reactive programming is about reacting to sequences of events that happen in time

Functional view: aggregate an event sequence into a signal

  • A signal is a value that changes over time
  • It is a function from time to the value domain
  • Instead of mutating updates, we define new signals in terms of existing ones
Observers vs FRP example

Let's say we want to track the position of the mouse in the screen.

Imperative style:

MouseMoved(toPos: Position)

is fired each time the mouse moves, and should be handled by the subscribers


FRP view:

mousePosition: Signal[Position]

represents the current mouse position at any point in time. A function from time to positions

Fundamental signal operations

There are two fundamental operations over signals

  1. Obtain the value of the signal at the current time. This is expressed as ()
mousePosition() // current mouse position
  2. Define a signal in terms of other signals
def isMouseInRectangle(lowerLeft: Position, upperRight: Position): Signal[Boolean] = Signal {
  val pos = mousePosition()
  lowerLeft <= pos && pos <= upperRight
}

This is a discrete signal with two states (true or false)

Constant signals
val sig = Signal(3) // a signal that is always 3
Signals over time
  • We can use externally defined signals and map over them
  • We can use a Var, which is a subclass of Signal

Var provides an update operation, which allows us to redefine the value of a signal

val sig = Var(3)
sig.update(5) // from now on, sig returns 5
sig() = 5 // same thing, assignment syntax sugar

Note: In Scala, calls to update can be written as assignments. This works on many types

Advantage of signals

Even if we use Var for mutable signals, defining other signals in terms of them maintains a strong relation between them

Each such dependency is maintained automatically at all future points in time; no such mechanism exists for plain mutable variables

A bank account using signals

This is a classic example of a bank account with deposit and withdraw methods, using FRP and signals

class BankAccount {
  val balance = Var(0)
  def deposit(amount: Int): Unit = if (amount > 0) {
    val b = balance()
    balance() = b + amount
  }

  def withdraw(amount: Int): Unit = if(0 < amount && amount <= balance()) {
    val b = balance() // value must be pulled out before reassigning
    balance() = b - amount
  } else throw new Error("insufficient funds")
}

Now let's listen to the signals:

def consolidated(accts: List[BankAccount]): Signal[Int] = Signal(accts.map(_.balance()).sum) // create a signal out of the sum of balance signals

val a = new BankAccount()
val b = new BankAccount()
val c = consolidated(List(a, b))
c() // 0
a deposit 20
b deposit 30
c() // 50

// now let's define a third, independent signal
val exchangeRate = Signal(170)
val inDollar = Signal(c() / exchangeRate())
b withdraw 10
inDollar() // c() has changed because of the mutation of b, thus inDollar() has changed automatically too
Signal update vs. variable assignment
val num = Var(1)
val twice = Signal(num() * 2)
num() = 2
// here, twice would now evaluate to 4
// compare this to:
var num = Var(1) // declared as a var so it can be reassigned below
val twice = Signal(num() * 2)
num = Var(2)
// here, because of reassignment and no signal update, twice still depends on the former num signal, thus twice will evaluate to 2

5.3 - a simple FRP implementation

TODO

5.4 - Week 5 assignment

Github repo (private)