
Friday, February 22, 2013

Dynamo: Preventing Unnecessary Evaluation (Part 2)

In my last blog post, I spoke about how we could use graph analysis in Dynamo to avoid duplicating code when compiling Dynamo workflows down to the FScheme engine. I am pleased to announce that this has now been implemented and is live on GitHub!

Of course, nothing ever goes right the first time. Hours after pushing all the new code to GitHub, during a moment of quiet reflection, I discovered why the approach I originally outlined was flawed. As with most issues in programming--I find--the problem has to do with mutation.

So what's wrong?

Let's use the following workflow as an example:



If we were to optimize this using the original approach, the resulting FScheme code would look like this:

(λ (path)
  (let ([a (read-file path)])               ; read-file is evaluated here, up front
    (list (begin (write-file path "text")
                 a
                 path)
          a)))

(Note that this is a definition for a new node, so it gets compiled down to a function. Hence the λ at the beginning.)

The problem may not be immediately obvious. The Read File node's output is connected to two inputs: one on Perform All and one on List. We use the lowest single ancestor algorithm to find where we can place a let binding: in this case, it's around the List node. We evaluate the Read File node once, store its result in a new identifier a, and then in both places where it's used, we refer to a.

...so what's wrong?

FScheme--like most Schemes--is an eager language. This means that it will evaluate all that it can as soon as it can. When evaluating the above code, FScheme will first evaluate (read-file path) and store the result in a. Only then will it evaluate the body of the let binding, where (write-file path "text") is evaluated. The problem is that Perform All (which compiles to the begin expression) must evaluate its inputs in the order they are listed, otherwise side effects could occur in the wrong order. Here the let binding forces the read to happen before the write, and since the order in which side effects occur matters a lot, this is a big problem.
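In C# terms, the let-bound version above effectively does the following (readFile and writeFile stand in for the file primitives, just as in the snippet a little further down):

delegate (string path)
{
  string a = readFile(path);              // the read happens up front, because the let already evaluated it
  writeFile(path, "text");                // ...so the write now comes too late
  return new List<string>() { path, a };
};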

What we really want is to not evaluate (read-file path) until we normally would, after (write-file path "text"). Then, we want to store that value somewhere and reference it later when we need it. In more traditional imperative programming languages, this is familiar. Take a look at the following snippet of C#:

delegate (string path)
{
  string a;                               // declared, but not yet assigned
  writeFile(path, "text");                // the side effect happens first
  a = readFile(path);                     // now read the file and store the result
  return new List<string>() { path, a };  // reuse the stored value
};

Notice how we store a after we perform writeFile, and then later we just refer to a again. Well, we can do something similar in Scheme:

(λ (path)
  (let ([a (begin)])                      ; a starts out uninitialized
    (list (begin (write-file path "text")
                 (begin (set! a (read-file path))  ; evaluate and store, after the write
                        a)
                 path)
          a)))                            ; reuse the stored value

This may look strange, but we're following the same pattern as in C#: we declare a new identifier a that starts out uninitialized (the empty (begin) returns a value that produces an error if it is ever used directly), then, where a would normally first be referenced, we evaluate (set! a (read-file path)), which stores an actual value in a. Everywhere else, we can simply refer to a, which now contains that value.

Putting it all together

The full algorithm is as follows (a rough code sketch follows the list):
  1. For each node X whose output is connected to multiple inputs:
    1. Assign it a unique string ID to be used as a storage variable
    2. Look up its lowest single ancestor, LSA(X)
  2. Compile the Dynamo graph starting from the entry point (the node with no connected output):
    1. When reaching LSA(X), insert a let binding, binding ID to an empty (begin)
    2. The first time X is reached, insert a begin that first stores the compiled form of X in ID (via set!) and then returns ID
    3. All subsequent times X is reached, simply insert ID
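To make these steps concrete, here is a minimal sketch of the traversal in C#. This is hypothetical illustration code rather than Dynamo's actual compiler: nodes are boiled down to a symbol plus a list of inputs, the generated FScheme expression is built up as a string, and the LSA lookup is passed in as a delegate.

using System;
using System.Collections.Generic;
using System.Linq;

public class Node
{
    public string Symbol;                        // the FScheme symbol this node compiles to
    public List<Node> Inputs = new List<Node>();
}

public class SharedNodeCompiler
{
    // Step 1.1: storage variable IDs for nodes whose output is connected to multiple inputs.
    private readonly Dictionary<Node, string> ids = new Dictionary<Node, string>();
    // Step 1.2: LSA(X) -> the shared nodes whose let bindings belong there.
    private readonly Dictionary<Node, List<Node>> bindingsAt = new Dictionary<Node, List<Node>>();
    private readonly HashSet<Node> initialized = new HashSet<Node>();

    public SharedNodeCompiler(IEnumerable<Node> sharedNodes, Func<Node, Node> lsaLookup)
    {
        int counter = 0;
        foreach (Node shared in sharedNodes)
        {
            ids[shared] = "__cached" + counter++;    // step 1.1: unique string ID
            Node lsa = lsaLookup(shared);            // step 1.2: lowest single ancestor
            if (!bindingsAt.ContainsKey(lsa))
                bindingsAt[lsa] = new List<Node>();
            bindingsAt[lsa].Add(shared);
        }
    }

    // Step 2: compile the graph starting from the entry point.
    public string Compile(Node node)
    {
        string body;
        if (ids.ContainsKey(node) && initialized.Contains(node))
        {
            body = ids[node];                        // step 2.3: just reference the ID
        }
        else if (ids.ContainsKey(node))
        {
            initialized.Add(node);                   // step 2.2: first use stores the value, then returns it
            body = string.Format("(begin (set! {0} {1}) {0})", ids[node], CompileCall(node));
        }
        else
        {
            body = CompileCall(node);
        }

        List<Node> sharedHere;
        if (bindingsAt.TryGetValue(node, out sharedHere))
        {
            // Step 2.1: at LSA(X), declare the storage IDs as uninitialized (begin)s.
            string decls = string.Join(
                " ", sharedHere.Select(s => string.Format("[{0} (begin)]", ids[s])));
            body = string.Format("(let ({0}) {1})", decls, body);
        }
        return body;
    }

    private string CompileCall(Node node)
    {
        if (node.Inputs.Count == 0)
            return node.Symbol;
        return string.Format(
            "({0} {1})", node.Symbol, string.Join(" ", node.Inputs.Select(Compile)));
    }
}

Calling Compile on the entry point of the workflow above should produce the same shape of expression as the corrected FScheme code earlier in this post.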

Sunday, February 17, 2013

Dynamo: Preventing Unnecessary Evaluation

Hello again! Today I'm pleased to present some progress with automatic removal of duplicate code, an issue that has plagued Dynamo workflows for a long time.

What's the problem?

Take a look at the following workflow:


Notice that several of the nodes in the workflow have their outputs connected to multiple places. Therein lies the heart of the problem: as a user, one would expect the output of a node to be passed to all inputs connected to it. Dynamo does this, but up until now, it did this by duplicating code.

You can see this in the compiled form of the above workflow:

(list (+ (square 3) 3)
      (+ (square 3) 3) 
      (* (square 3) (square 4)) 
      (+ (square 4) (square 4)))

When Dynamo evaluates this expression, it will run all of the duplicated code: (square 3), for example, will be run 3 times. This is two more times than necessary.


That doesn't seem like a big deal

It's true that for basic calculations like the ones in this example, the duplication doesn't make a big difference. However, if the duplicated code takes a long time to run--or worse, performs a mutation such as adding geometry to a Revit document--then the duplication turns out to be a very big deal.

Fortunately, there was a way to work around this: the only nodes where duplication never really mattered were ones that had no inputs themselves, such as numbers or variables, since it's just as fast to "calculate" the output of these nodes as it is to look up a cached result. So what you could do is create a new user-defined node and pass in the results you want to cache as its inputs. The results are then cached as the node's input variables, which you can access as many times as you like.

To remove duplication from the above workflow using this method, the compiled expression would look like this:

((lambda (a)
  ((lambda (b)
     ((lambda (c)
        ((lambda (d)
           (list c c (* b d) (+ d d)))
         (square 4))) 
      (+ b a))) 
   (square a))) 
 3)

In the actual Dynamo UI, this would require four separate user-defined nodes, which is obviously too painful to do by hand. Ideally, all of this would be done under the hood for us.

There is a better way

Scheme has a built-in construct called let for caching values without having to use intermediate helper functions. Using let* (shorthand for nested lets), the above can be rewritten as:

(let* ([a 3]
       [b (square a)]
       [c (+ b a)]
       [d (square 4)])
  (list c c (* b d) (+ d d)))

It's actually significantly easier to have Dynamo generate a let* than to create all of the intermediate functions. When the expression is passed to the underlying FScheme engine, it will eventually all be converted to these helper functions anyway.

So how does this happen?

Lots of graph analysis. When compiling the Expression used for evaluation, Dynamo will now check for duplicated code by looking for nodes whose output is used in more than one place. It figures out the dependencies among these nodes (so that they are ordered properly in the let*) and calculates the extent of the graph over which their outputs are used (to determine the "entry-point" of the let*).
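As a rough illustration of the dependency-ordering step (a hypothetical helper, not Dynamo's actual code), putting the shared nodes in order so that each let* binding appears after the bindings it uses is just a depth-first topological sort:

using System.Collections.Generic;

public static class BindingOrder
{
    // dependsOn maps each shared node to the other shared nodes its expression uses.
    public static List<string> Sort(Dictionary<string, List<string>> dependsOn)
    {
        var ordered = new List<string>();
        var visited = new HashSet<string>();
        foreach (string node in dependsOn.Keys)
            Visit(node, dependsOn, visited, ordered);
        return ordered;
    }

    private static void Visit(string node, Dictionary<string, List<string>> dependsOn,
                              HashSet<string> visited, List<string> ordered)
    {
        if (!visited.Add(node))
            return;
        List<string> deps;
        if (dependsOn.TryGetValue(node, out deps))
            foreach (string dep in deps)
                Visit(dep, dependsOn, visited, ordered);
        ordered.Add(node);    // a node is emitted only after everything it depends on
    }
}

For the workflow above, where b depends on a and c depends on both a and b, any order this produces (a, b, c, d, for instance) is a valid binding order for the let*.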

For any node whose output is connected to more than one input, the lowest single ancestor (LSA) of that node must be calculated in order to determine where the let* will be placed. In graph theory, the LSA of a node v is another node l that lies on all paths from the root of a directed acyclic graph (which is what a Dynamo workflow is) to v, such that none of l's descendants also has this property (otherwise the root node would trivially qualify). I found a great paper by Fischer and Huson outlining how to implement an LSA lookup.
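To give a sense of what an LSA lookup involves, here is a naive sketch; this is hypothetical illustration code, not the Fischer and Huson algorithm (which is far more efficient) and not Dynamo's implementation. It relies on the observation that, viewing the workflow as a flow graph rooted at the entry point, a node lies on every root-to-v path exactly when it dominates v, so the LSA of v is its deepest strict dominator; the dominator sets are computed by a straightforward fixpoint iteration.

using System.Collections.Generic;
using System.Linq;

public static class LsaLookup
{
    // children maps each node to its children; edges point from the root towards the
    // leaves (in Dynamo terms, from each node to the nodes plugged into its inputs).
    // Assumes a single root from which every node is reachable.
    public static Dictionary<string, string> ComputeAll(
        Dictionary<string, List<string>> children, string root)
    {
        List<string> nodes = children.Keys
            .Union(children.Values.SelectMany(c => c))
            .ToList();

        var parents = nodes.ToDictionary(n => n, n => new List<string>());
        foreach (var entry in children)
            foreach (string child in entry.Value)
                parents[child].Add(entry.Key);

        // dom[v] = the nodes lying on every path from the root to v; start with everything.
        var dom = nodes.ToDictionary(n => n, n => new HashSet<string>(nodes));
        dom[root] = new HashSet<string> { root };

        bool changed = true;
        while (changed)
        {
            changed = false;
            foreach (string v in nodes)
            {
                if (v == root) continue;
                var updated = new HashSet<string>(nodes);
                foreach (string p in parents[v])
                    updated.IntersectWith(dom[p]);
                updated.Add(v);
                if (!updated.SetEquals(dom[v]))
                {
                    dom[v] = updated;
                    changed = true;
                }
            }
        }

        // The LSA of v is its deepest strict dominator; since a node's dominators
        // form a chain, that is the one with the largest dominator set of its own.
        var lsa = new Dictionary<string, string>();
        foreach (string v in nodes)
        {
            if (v == root) continue;
            lsa[v] = dom[v].Where(d => d != v)
                           .OrderByDescending(d => dom[d].Count)
                           .First();
        }
        return lsa;
    }
}

The returned dictionary maps every non-root node to its LSA; for a node whose output is used in more than one place, that is the spot where its cached value can safely be introduced.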

Tuesday, January 8, 2013

Dynamo - FScheme Refactor Progress

I've been spending the past few weeks working on various improvements to FScheme, the core scripting engine of Dynamo. Originally I was focusing on performance, but I ended up adding a bunch of things to bring the language closer to actual Scheme. The core language is getting there, but the library is still missing a lot of what's in the R5RS Scheme standard.

What's FScheme?

Way back when Ian first created Dynamo, it used an event structure to notify nodes that their inputs had been evaluated. This was already slow, and the fact that it couldn't work inside a single Revit API transaction meant that each node had its own transaction (something that present-day Dynamo does in Debug mode), which meant that updates to Dynamo could take forever to propagate to Revit.

Starting in March of 2012, I began working on modifying the Dynamo engine so that the node interface could be converted to a Scheme-like expression which could then be evaluated, thus making Dynamo itself as full-featured a programming language as Scheme (in theory). In May, I committed these changes to GitHub.

FScheme began as a prototype in Python that I made in order to demonstrate that you can imperatively create Scheme expressions and then execute them. The idea was that the imperative interface could be wrapped by a GUI. I then set out to port the prototype to .NET so that it would be compatible with the Revit API and the existing Dynamo code. Originally, I was going to write the engine in C#, but after doing some research on tail call optimization in the language, I realized this wouldn't work. After doing a little more research, I came to the conclusion that F# would be the best candidate: it has full tail call optimization (even for x86) and it compiles to CLR code, meaning that it is not interpreted. The fact that F# code is callable from C# meant that I could access this engine directly from the existing Dynamo code.

The problem with writing it in F# was that I didn't know anything about the language; the closest language to F# that I had used in the past was Haskell, and I had only used it briefly. Fortunately, I found this great tutorial about writing a Scheme interpreter in F#, and--even better--the code was open source on GitHub. After making some basic modifications to facilitate features that Dynamo should support (with respect to performance and dynamic updating), I had a basic Scheme language that would be hidden from the end user behind Dynamo's UI.

So What's New?

It turns out that there were a ton of problems with the FScheme language outlined by the tutorial. The first problem was that lexical scope was broken, which prevented tail calls from being optimized properly. Macros were also first-class and evaluated at runtime, which on the one hand was cool because you could then pass macros as arguments just like functions, but it made writing library functions in F# a total headache. It also greatly complicated evaluating lists, since the evaluator made no distinction between a list data structure and a syntax list used for function calls.

After attempting to fix some of these things, I quickly realized that it would be simpler to rewrite the language from the ground up. So that's what I did. FScheme is now much closer to normal Scheme, except it retains the features I need for interoperability with Dynamo.

Unfortunately, a lot of these changes are not user-facing. Macros are now evaluated at compile-time, and so are no longer first class. Fortunately people don't usually pass if statements as arguments to functions, so this won't be missed. I also had to remove call/cc since nobody uses it and it was bogging down performance. Finally, I added a (simple) compiler that performs a bunch of optimizations and produces quick F# code.

For a basic benchmark, I ran the following expression in both the old FScheme implementation and the new one, and compared the execution times:

(begin
  (define counter
    (lambda (x)
      (if (<= x 0) x (counter (- x 1)))))
  (counter 1000000))

Avg. Old FScheme Execution Time: 5130ms
Avg. New FScheme Execution Time: 2539ms

Admittedly, this test only really benchmarks function call speed, but since that's the most common operation in FScheme, the fact that it runs over twice as fast is a significant accomplishment.

You can see all the new changes to the FScheme language in the fscheme-improvements branch on GitHub.