Lazily Composable Combinatorics (fuzzing in Scala)

Fuzzing is a brute-force technique used to find bugs in software. Network protocols, files, web servers, and many other kinds of software can be fuzzed. In this post, I'll show a few reasons why Scala is particularly well suited to writing a smart fuzzer. Smart fuzzers (or generational fuzzers) understand the semantics of the target well enough to generate both structurally valid and invalid data.

For the sake of argument, let's invent a very simple byte/bit-oriented protocol to fuzz. Let's say it contains only 2 bytes:

Byte 1 – the first two bits each have independent meaning, while the remaining 6 bits represent a sequence number modulo 64

Byte 2 – a one-byte function code with a set of specific valid values


Let's also assume that we have a function defined to assemble byte 1 from its pieces:

// assemble into a valid byte
def byte1(b1: Boolean, b2: Boolean, seq: Byte): Byte
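
Purely for illustration, here is one way such an assembly function might be implemented. The exact bit positions are an assumption, since the post doesn't pin down the layout:

// A sketch, assuming b1 occupies the most significant bit, b2 the next bit,
// and the low 6 bits hold the sequence number (mod 64)
def byte1(b1: Boolean, b2: Boolean, seq: Byte): Byte = {
  val first  = if (b1) 0x80 else 0x00
  val second = if (b2) 0x40 else 0x00
  (first | second | (seq & 0x3F)).toByte
}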

Now we define the important values that we want to fuzz:

val bit1 = List(true, false)
val bit2 = List(true, false)
val seq = List[Byte](0, 1, 63)
val funcs = List[Byte](0, 5, 7, 0xFF.toByte) // 255, the unsigned-byte boundary

Because exhaustive fuzzing is a cross product over every field, the number of test frames grows exponentially with the number of fields, and more complex protocols will have many fields to fuzz. We therefore try to limit our test cases above to the values most likely to find bugs; integer boundaries, for example, are always interesting. Even with the short lists above, that's already 2 × 2 × 3 × 4 = 48 frames.

With our imperative hats on, we could then write a fuzzer with the following structure:

for(b1 <- bit1) {
  for(b2 <- bit2) {
    for(s <- seq) {
      for(f <- funcs) {
        attack(List(byte1(b1, b2, s), f))
      }
    }
  }
}

We could easily write a program like this in any imperative language; it would look much the same in C#, Java, or C/C++. We can improve it a little by leveraging the syntactic sugar of Scala's for-comprehensions:

for {
  b1 <- bit1
  b2 <- bit2
  s <- seq
  f <- funcs
} attack(List(byte1(b1, b2, s), f))

That reads nicely, but fundamentally it still has the same problem: we have to define all of the degrees of freedom of a protocol/file in a single place. This blows up quickly for large protocols; the structure lacks composability. It's also annoying that we can't separate attack-frame generation from the actual fuzzing of the device (the 'attack' function). This structure does, however, possess one very desirable property: it's lazy. Each frame is not generated until it's actually needed. In an ideal world we'd have our laziness and compose it too!

Lazy functional programming for the win

Scala can do this tersely and lazily using iterators and for-comprehensions.

object Byte1 extends Iterable[Byte] {

  // implementation not important
  def byte1(b1: Boolean, b2: Boolean, seq: Byte): Byte = ???

  val bit1 = List(true, false)
  val bit2 = List(true, false)
  val seq = List[Byte](0, 1, 63)

  def iterator: Iterator[Byte] = for {
    b1 <- bit1.iterator
    b2 <- bit2.iterator
    s <- seq.iterator
  } yield byte1(b1, b2, s)
}

Calling iterator does not generate all permutations of Byte1 immediately. It returns a new iterator that represents all permutations of the sub-fields. To understand what this is doing, it may be best to give a simple example:

def cross: Iterator[Int] = for {
  a <- List(1, 2).iterator
  b <- List(3, 4).iterator
} yield a * b

cross.foreach(println)

What do you suppose that prints?

3
4
6
8

The key is to realize that the iteration is lazy. It doesn't actually compute each value until it is needed. To truly appreciate how cool this is, you have to see the kinds of hoops you'd have to jump through to implement the same thing in Java.
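
To see the laziness directly, here is a small sketch (the println is just instrumentation) showing that values are only computed as they are pulled from the iterator:

val lazyCross = for {
  a <- Iterator(1, 2)
  b <- Iterator(3, 4)
} yield { println(s"computing $a * $b"); a * b }

lazyCross.next() // prints "computing 1 * 3" only; no other product has been evaluated yet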

Can we do better?

If we're going to be doing this lazy permutation thing all the time, we should define some helper functions and create a small domain-specific language (DSL) to make our code even easier to read.

// Lazy permutation transformations over iterables
object Cross {

  def apply[A, B, Z](a: Iterable[A], b: Iterable[B])
    (convert: (A, B) => Z): Iterator[Z] =
      for (ai <- a.iterator; bi <- b.iterator)
        yield convert(ai, bi)

  // higher arities follow the same pattern
  def apply[A, B, C, Z](a: Iterable[A], b: Iterable[B], c: Iterable[C])
    (convert: (A, B, C) => Z): Iterator[Z] =
      for (ai <- a.iterator; bi <- b.iterator; ci <- c.iterator)
        yield convert(ai, bi, ci)
}
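
As a quick sanity check, the earlier cross example can be rewritten with the helper and produces the same output:

Cross(List(1, 2), List(3, 4))(_ * _).foreach(println) // prints 3, 4, 6, 8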

We define the transformations in a general way above, which gives us some nice syntactic sugar. Our code for permuting byte 1 becomes:

object Byte1 extends Iterable[Byte] {

  // implementation not important
  def byte1(b1: Boolean, b2: Boolean, seq: Byte): Byte = ???

  val bit1 = List(true, false)
  val bit2 = List(true, false)
  val seq = List[Byte](0, 1, 63)

  def iterator: Iterator[Byte] = Cross(bit1, bit2, seq)(byte1)

}

Now that we can write pieces whose only concern is defining the valid permutations of their sub-fields, it's easy to modularize our fuzzer. Higher-level pieces can be composed from lower-level ones without sacrificing laziness:

object Fuzzer extends App {

  // Maybe we send this over a socket to the target application
  def attack(bytes: List[Byte]): Unit = ???

  val byte2 = List[Byte](0, 1, 7, 0xFF.toByte)

  def frames = Cross(Byte1, byte2)((b1,b2) => List(b1,b2))

  frames.foreach(attack)

}

It's not uncommon for the total attack set to represent millions of trials, so it's a bad idea to generate them all up-front.
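
Because frames is just an Iterator, a run can be sliced or resumed without ever materializing the whole set. The numbers below are arbitrary, purely for illustration:

// Resume a long fuzzing run at trial 1,000,000 and stop after 10,000 more frames,
// holding only one frame in memory at a time
frames.drop(1000000).take(10000).foreach(attack)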

This type of smart fuzzer architecture has many desirable properties:

  1. It is modular and easily composable.
  2. It cleanly separates attack generation from the fuzzing process itself.
  3. It preserves laziness: memory use and start-up time stay bounded.
  4. Attack generation is deterministic and repeatable.
  5. The code is easy to read and understand.

Smart fuzzing frameworks like Peach or Sulley may have properties 1–4, but even if they do, I seriously doubt they accomplish 5. Defining protocols in XML is also tedious compared to what a well-designed Scala DSL can provide. Scala's collections are themselves a powerful DSL for describing the attack space.
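
For instance (an illustrative sketch, not from the original post), a richer set of interesting sequence numbers can be described with ordinary collection operations instead of XML:

// Boundary values plus a sweep of the low sequence numbers, built from plain collection ops
val interestingSeqs: List[Byte] = ((0 to 7) ++ List(31, 32, 62, 63)).map(_.toByte).toList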
