Friday, March 28, 2014

Tutorial: Functional Reactive Programming in F# and WPF (Part 1)

When it comes to programming, my least favorite thing to work on is GUI back-ends. Traditional GUI programming involves a lot of mutable variables which represent the current state of your program; if at any point you want to modify part of the program, very often you have to take all of the potential states of your program into account. Things really start to get fun when there is more than one developer working on the project; the majority of the bugs I fix for Dynamo involve the user interface, and most often it's because of actions occurring in an incorrect state.

Reactive Programming aims to solve a lot of these issues. In a nutshell, all user-interaction with the UI can be represented as events; you can write code that responds to events by updating some state and UI. This isn't a new concept: anyone who has written applications in .NET is probably familiar with events. Reactive Programming takes events to the next level by allowing you to generate new events from existing ones: you can take an event that's fired when a Button is clicked, and make a new event that gets fired only when the Button was clicked with the left mouse button. You can take two events and combine them into a single event that's fired when either original event is fired. You can turn an event into a new event that returns the last two results of the original event. This alone can cut down on the complexity of a GUI implementation, but eventually you will still be manipulating state. Enter: Functional Reactive Programming.

As people who follow this blog already know, I'm a pretty big Functional Programming evangelist. Programming in a state-less style makes programs easier to reason about, easier to abstract and make modular, easier to test, and--once you get the hang of it--easier to write. Unfortunately, when it comes to GUIs, Functional Programming usually falls short; eventually you will have to handle the state of your application as users interact with it. By combining Reactive Programming with Functional Programming, however, you can minimize the amount of stateful code in your application, and potentially limit it only to updating the interface itself.

In this tutorial, we will create a small application with somewhat complex user interaction and several moving parts. We will limit all mutative code to small modular commands which will be used to initialize and update the UI. Other than mutating fields on UI Elements, we will not be directly changing any application state. We will be writing this application in F# in order to take advantage of the facilities it provides for functional programming. Arguably we could implement this in C#, but if we're writing functional-style stateless code then we might as well use a language that facilitates it. Lastly, we will be using Windows Presentation Foundation for the UI framework, which will make it easy to set up the UI and expose events to the application.

Part 1 of this tutorial will give us a very basic interactive application that we will expand on in later parts. For now we just want to get something working before worrying about things like architecture, MVVM, modularity, extensibility, etc.

Goal: Window with a rectangle that can be dragged around.

You can access the completed code on GitHub.

Step 1: Create our Project

To simplify things, we can download a project template from the template browser in Visual Studio that is set up for an empty F# WPF application. The name of the template we want is F# Empty Windows App (WPF), but searching for "f# wpf" should make it appear as one of the top results.


Using this template may cause a new Type Provider security warning window to appear in Visual Studio. This is just to allow static, strongly-typed access to the XAML file inside of the F# code, which is a nice feature to have.

The template will put a few files in the Project by default. The ones we're interested in are MainWindow.xaml where we will define the View of the application, and App.fs where we will program the behavior of the application.

Step 2: Define our UI View

Inside of MainWindow.xaml we will define the UI Components of our application. Since the goal is to have a Rectangle that we can drag around, all we really need is a Canvas that contains a Rectangle. I slightly increased the size of the window, and then added a Canvas with a 100x100 black Rectangle inside of it.


The two components we're really concerned with manipulating are the Canvas and the Rectangle, so I gave them both names so that they can be accessed via F#. The Canvas will be accessible through the Canvas property of the MainWindow instance, and the Rectangle through the Rectangle property.

Step 3: Setup our Application Logic

Looking at the code the template gave us, we see that there is a loadWindow() function where a MainWindow is created and its Root property is returned. Like the comment says, this is where we can write awesome code that can access our XAML elements.

The eventual goal of the exercise is to be able to drag the rectangle around and have its position update as the mouse pointer moves. To keep things simple, let's try to get a basic prototype working first.

Step 3a: Clicking anywhere on the Canvas will move the rectangle to that position


This is pretty simple to do, especially if you're already familiar with events. The idea is: when the Canvas.MouseDown event is triggered, get the position of the mouse and move the rectangle to that position.

The Observable module gives us access to several functions that will be the basic building blocks of a reactive workflow. The key type here is the IObservable<'a> interface, which represents a stream of data. Events in F# implement the IObservable<'a> interface; events are technically a stream of the data they produce when they are triggered. We can use the Observable.subscribe function to connect a handler to an event.
let mouse_down_handler (args : Input.MouseButtonEventArgs) : unit =
    let mouse_position = args.GetPosition window.Canvas
    Canvas.SetLeft(window.Rectangle, mouse_position.X)
    Canvas.SetTop(window.Rectangle, mouse_position.Y)
let subscription = Observable.subscribe mouse_down_handler window.Canvas.MouseDown
This is pretty straightforward: the bottom line is where we subscribe to the Canvas.MouseDown event with our handler function, mouse_down_handler. This function first gets the position of the mouse relative to the Canvas, and then sets the Rectangle's position to match the mouse position. The Observable.subscribe call returns an IDisposable which we can use to terminate the subscription, thus unregistering the handler from the event. For the purposes of this tutorial we will not be cancelling any subscriptions in the middle of the program, but it's good to know that it's there.

At this point you can compile and run the program! The UI you designed in the XAML file will appear, and clicking anywhere on the Canvas will move the Rectangle such that the top-left corner is where the click occurred.

What we have currently is a good start; we have a handler that reacts to user interaction. It doesn't give us a great idea about how the data flows, however. To demonstrate, first let's break apart mouse_down_handler:
let get_canvas_position (args : Input.MouseButtonEventArgs) : Point =
    args.GetPosition window.Canvas

let set_rect_position (position : Point) : unit =
    Canvas.SetLeft(window.Rectangle, position.X)
    Canvas.SetTop(window.Rectangle, position.Y)
The functions in the Observable module give the ability to transform the data stream. I like to think of these transformations as pipe-lining the data, since you can compose them in the order the data will pass through them. Basically, the Observable functions allow you to create new IObservable<'a> instances from existing ones, where they take the data from the existing one and then transform it.

The first function we'll look at is Observable.map. If you're familiar with mapping over lists with List.map, this should look familiar. The function takes a transformation function of type ('a -> 'b) that is used to transform the data coming through the stream and an IObservable<'a>, and returns a new IObservable<'b> representing a stream of the transformed data.

With this in mind, we can re-write the subscription portion of the application:
let subscription =
    Observable.subscribe 
        set_rect_position 
        (Observable.map get_canvas_position window.Canvas.MouseDown)
When I work with the Observable module, I like to use the |> forward function application operator to represent the order that the data flows through the stream:
let subscription =
    window.Canvas.MouseDown
    |> Observable.map get_canvas_position 
    |> Observable.subscribe set_rect_position
So now you can try to compile and run the program again, and see that it's working exactly the way it was before we changed the code, which is a good thing. You may be wondering what the point of changing the code was. Besides demonstrating what the data flow looks like, it also makes the code a lot more modular, which is useful when you're working on UI applications.

Step 3b: Only allow left-clicks


Right now the rectangle will move to where the cursor is when the user clicks any of the mouse buttons. Let's change that so that it only moves if the user clicks the left mouse button. We can quickly define a function that can check that from a MouseButtonEventArgs instance:
let is_left_click (args : Input.MouseButtonEventArgs) : bool =
    args.ChangedButton = Input.MouseButton.Left
And then we can use Observable.filter to filter the stream so that only data coming from a left mouse click progresses:
let subscription =
    window.Canvas.MouseDown
    |> Observable.filter is_left_click
    |> Observable.map get_canvas_position
    |> Observable.subscribe set_rect_position
If you compile and run at this point, you should see that the rectangle only moves when clicking the left mouse button in the Canvas. Using any of the other buttons will result in nothing happening. This is due to how Observable.filter works: all data coming into the filter is passed through the given function, in this case is_left_click. If the function returns true, the data is passed along the stream. If it returns false, it is stopped and the rest of the stream will not receive the data.

Step 3c: Modify prototype to allow for dragging


Now that we have our prototype working and understand how we can express data flow from events into handlers, we can implement actual dragging. A drag operation actually requires some state management: we'll need a flag that indicates whether or not the mouse is being held down, and we'll use this flag to determine whether or not we should be updating the rectangle's position as the mouse moves. The question is, if we're avoiding mutation, how do we properly handle this kind of stateful interaction?

First, let's represent what a drag operation looks like:
type DragState = { dragging : bool; position: Point }
We have a data structure that keeps track of whether or not we're dragging, and where the current position of the drag is. Now, let's write some utility functions that can update this structure:
let update_dragging (dragging : bool) (state : DragState) : DragState =
    { state with dragging=dragging }

let update_drag_pos (position : Point) (state : DragState) : DragState =
    if state.dragging
    then { state with position=position }
    else state
With these in hand, we can now work on defining the behavior of the drag operation within the application. When the Rectangle.MouseDown event is triggered, we want to set the dragging flag to true. When the Canvas.MouseUp event is triggered, we want to set the dragging flag to false. Finally, when the Canvas.MouseMove event is triggered, we want to update the position.

The trick to managing the state between these three events is the Observable.scan function. This function takes in a state manipulation function of type ('a -> 'b -> 'a), an initial state of type 'a, and an IObservable<'b>. It returns an IObservable<'a>. From the types alone it's simple to deduce what Observable.scan does:  'a represents the type of the state, and 'b represents the type of data flowing in. When data comes in via the given IObservable<'b>, it is passed to the state manipulation function along with the current state. This function returns a new state which will then be used when the next piece of data comes in. The first time data comes in, it will use the given initial state. Each time the state updates, the new state comes out of the result IObservable<'a>.

Of course, the Observable.scan function only accepts one IObservable as an input. We have three events that will need to modify this state. We can combine the three events into a single IObservable by using the Observable.merge function. This function takes two IObservable<'a> instances, and returns one new IObservable<'a> that receives data from each of the original ones. This is the last piece of the puzzle; we have all the tools we need to create our drag operation.

One nice thing about reactive programs is that they lend themselves to flow charts. This flow chart represents the entire drag operation:
We can use this flow chart to write the code piece by piece. First, we need to represent what a change to the DragState is:
type DragChange = StartDrag | StopDrag | UpdatePosition of Point
Next, we need to transform our event streams so that they produce the appropriate change commands:
let start_stream =
    window.Rectangle.MouseDown 
    |> Observable.filter is_left_click
    |> Observable.map (fun _ -> StartDrag)
let stop_stream =
    window.Canvas.MouseUp 
    |> Observable.filter is_left_click
    |> Observable.map (fun _ -> StopDrag)
let move_stream =
    window.Canvas.MouseMove
    |> Observable.map (get_canvas_position >> UpdatePosition)
We also need to adjust the signature of get_canvas_position to match the data coming in from the MouseMove event:
let get_canvas_position (args : Input.MouseEventArgs) : Point =
    ...
We can then write a function that takes in a DragState and a DragChange and produces a new DragState:
let update_drag_state (state : DragState) (change : DragChange) : DragState =
    match change with
    | StartDrag -> update_dragging true state
    | StopDrag -> update_dragging false state
    | UpdatePosition(pos) -> update_drag_pos pos state
We also need to define our initial state:
let initial_state = { dragging=false; position=new Point() }
Finally, we need a function that checks if a drag is active, and a function that gets the new drag position:
let currently_dragging (state : DragState) : bool = state.dragging
let get_drag_position (state : DragState) : Point = state.position
And now we just construct the flow chart using all of our defined functions as building blocks:
let subscription =
    Observable.merge start_stream stop_stream |> Observable.merge move_stream
    |> Observable.scan update_drag_state initial_state
    |> Observable.filter currently_dragging
    |> Observable.map get_drag_position
    |> Observable.subscribe set_rect_position
At this point, you should be able to compile and run the application. When you left-click on the rectangle and move the mouse without releasing the button, the Rectangle will follow the cursor!

Step 3d: Perform drag while keeping the rectangle in the same position relative to the cursor


Currently, no matter where you click on the Rectangle, as soon as you move the mouse it moves so that the top-left corner is under the cursor. To demonstrate how straightforward it is to modify our workflow, let's fix this.

First, we need to remember what position we started the drag from, relative to the Rectangle. We can add another entry in our DragState structure to track this:
type DragState = { dragging : bool; position : Point; offset : Point }
With that added, we can change get_drag_position to take the offset into account:
let get_drag_position (state : DragState) : Point = 
    let diff = state.position - state.offset
    new Point(diff.X, diff.Y)
Next, we need to make sure that the offset is recorded when the drag starts. To do this, we must first update DragChange.StartDrag to track the offset:
type DragChange = StartDrag of Point | StopDrag | UpdatePosition of Point
And then modify update_drag_state to pass the offset to the state:
let update_drag_state (state : DragState) (change : DragChange) : DragState =
    match change with
    | StartDrag(offset) -> { state with dragging=true; offset=offset }
    ...
Finally, we need to update start_stream so that the offset is recorded from the Rectangle.MouseDown event. First, let's refactor get_canvas_position so that we can get the position relative to any UIElement, as opposed to just the Canvas:
let get_mouse_position (rel_to : IInputElement) (args : Input.MouseEventArgs) : Point =
    args.GetPosition rel_to

let get_canvas_position = get_mouse_position window.Canvas
let get_rectangle_position = get_mouse_position window.Rectangle
With that done, we can easily update start_stream to get the dragging offset:
let start_stream =
    window.Rectangle.MouseDown
    |> Observable.filter is_left_click
    |> Observable.map (get_rectangle_position >> StartDrag)
And that's it! If you compile and run the program, now when you drag the Rectangle, it doesn't snap to the top-left corner, but stays in the same position relative to the cursor.

Summary

This tutorial has (hopefully) demonstrated the following key concepts:
  • By using the Observable module, events can be treated as data streams, which allows for easy, composable manipulation.
  • Stateful UI actions can be represented using a structure to represent the state, and another structure to represent how that state can be manipulated by the application. The Observable.scan function can be used to connect these two structures.
  • Breaking your program down into functional building blocks makes it a lot simpler to modify the behavior of your application.
Of course, this is a pretty contrived and basic example. What happens when you have a lot more moving pieces and need to link a lot of events together? In later parts of the tutorial, we will attempt to scale these concepts up in a way that they can apply to any production-quality application. One important thing to take away now is that even though you can do these things in a traditional imperative style, strictly following the Functional Reactive Programming methodology forces you to write this kind of code in a way that is inherently safe(er) and more maintainable in the long run.

Sunday, October 6, 2013

Dynamo Fractals Part II: L-Systems and Turtle Graphics

Link to Part I: http://steellworks.blogspot.com/2013/08/dynamo-fractals.html

What's an L-System?

An L-System, short for Lindenmayer System, is a string re-writing algorithm that can be used to produce strings that can then be processed to generate geometry. In a nutshell, you take a starting string, called an axiom, and a list of rewrite rules, called productions, which you use to rewrite the axiom into a new string, which can then be rewritten again and again, ad infinitum.

An example:

Axiom: A
Productions:
A → AB
B → A

0: A
1: AB
2: ABA
3: ABAAB
4: ABAABABA
5: ABAABABAABAAB
6: ABAABABAABAABABAABABA
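The rewriting loop itself is tiny. As a sketch (in Python, since no particular language is assumed here; the names are my own), using the example above:

```python
def rewrite(axiom, rules, iterations):
    """Apply the production rules to the axiom the given number of times.

    Characters with no matching rule are copied through unchanged.
    """
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# The example above: axiom "A" with productions A -> AB and B -> A.
rules = {"A": "AB", "B": "A"}
print(rewrite("A", rules, 6))  # -> ABAABABAABAABABAABABA
```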

What's Turtle Graphics?

As mentioned above, you can take a string generated from an L-System and use it to generate geometry. The most popular method of doing this is by using a cursor whose state is manipulated by the symbols in the string produced by the L-System. For fun, we can imagine that this cursor is a turtle.

The turtle has three attributes:  a location (initialized to the origin), an orientation (initialized to the X unit vector), and a pen. It can be commanded relative to the current state of these attributes:  rather than saying "draw a line from (0, 0) to (3, 0)", instead you would say "move forward 3 units while drawing". If you wanted to make a square with end points (0, 0), (3, 0), (3, 3), and (0, 3), you would say "move forward 3 units while drawing, turn 90 degrees, move forward 3 units while drawing, turn 90 degrees, move forward 3 units while drawing, turn 90 degrees, move forward 3 units while drawing".

Obviously that's a lot of text, so each command is represented by a single character:
  • + = Turn left
  • - = Turn right
  • ^ = Pitch up
  • & = Pitch down
  • \ = Roll counter-clockwise
  • / = Roll clockwise
  • | = Turn around
  • F = Move forward while drawing
  • f = Move forward without drawing
  • [ = Save current position and orientation
  • ] = Restore saved position and orientation
The rotation commands all rotate the same amount, which is determined when the turtle is initialized. The same goes for the distance the movement commands travel.

If we were to create the same square as before using actual Turtle Graphics commands:
Rotation: 90 degrees
Movement: 3 units
Commands: F+F+F+F
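An interpreter for a 2D subset of these commands can be sketched as follows; this is a Python illustration with my own naming, not the actual package code:

```python
import math

def run_turtle(commands, turn_deg, move_len):
    """Interpret a 2D subset of the turtle commands, returning visited points."""
    x, y, heading = 0.0, 0.0, 0.0   # start at the origin, facing +X
    points = [(x, y)]
    for cmd in commands:
        if cmd == "+":                # turn left
            heading += math.radians(turn_deg)
        elif cmd == "-":              # turn right
            heading -= math.radians(turn_deg)
        elif cmd in "Ff":             # move forward (F draws, f just moves)
            x += move_len * math.cos(heading)
            y += move_len * math.sin(heading)
            points.append((x, y))
        # any other character is ignored
    return points

# The square example: rotation 90 degrees, movement 3 units.
square = run_turtle("F+F+F+F", 90, 3)
```

The commands string would typically come straight out of an L-System expansion, which is exactly the connection made in the next section.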

How does this relate to L-Systems?

The turtle receives its commands in the form of a string of characters. Any characters which are not one of the commands are simply ignored. This means that we can use L-Systems to generate strings containing Turtle Graphics commands!

Sounds cool, I want to use these in Dynamo!

Great, because I created a Turtle Graphics package that's available for download via the Dynamo Package Manager!

The main interface that the package provides is the Turtle Graphics node:


  • start: The axiom of the L-System
  • rules: Productions for the L-System
  • iterations: Number of times to apply the productions to the L-System string
  • initial transform: Starting transformation for the turtle
  • move amt: Amount the turtle will move during the F and f commands.
  • turn amt: Amount the turtle will turn during the +, -, &, ^, \, and / commands.
  • line maker: Function that consumes the two end points of a line drawn with the F command. This is used to generate the output of the Turtle Graphics node.
The Turtle Graphics node returns a list of all the lines it has drawn. In order to give the user some control over how lines are represented, I've provided the line maker input, which takes a function that will receive the two end points of a line, and what it produces will be stored in the final lines list that the node will output.

For simplicity, the rules port takes the productions as a string, consisting of a list of rules separated by newlines, each composed in the following format:

[x]->[rewrite string]

(Where x is a character, and rewrite string is the string of characters to replace x with.)
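As an illustration, parsing that format might look roughly like this (a hypothetical Python sketch; the package's actual parser may differ):

```python
def parse_rules(text):
    """Parse newline-separated productions of the form 'x->rewrite' into a dict."""
    rules = {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue                       # skip blank lines
        symbol, replacement = line.split("->", 1)
        rules[symbol.strip()] = replacement.strip()
    return rules

parse_rules("X->YF+XF+Y\nY->XF-YF-X")
# -> {'X': 'YF+XF+Y', 'Y': 'XF-YF-X'}
```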

Example: Sierpiński Arrowhead



Axiom: XF
Productions:
X → YF+XF+Y
Y → XF-YF-X

If you read the previous blog post, you may already understand how this is set up. We plug the axiom and productions (note the format) in as defined, and iterations is driven by the node's order input. The starting transform is the identity transform if order is even, or a 60 degree rotation if it's odd (this keeps the triangle upright). The move amount is calculated so that the triangle's size is proportional only to the scale input (it won't scale with order), and the turn amount is set to 60 degrees.

We use the Line node as the line maker; for each line the turtle draws, Line will receive two end points (in the form of XYZs) and make a line object, which will be returned in the output list of the Turtle Graphics node.

(Note that you could use any node that can receive two XYZs in the place of the Line node, and it will change the data being output. You could, for example, plug in a List node with two input ports, and then the lines output of the Turtle Graphics node will return a list of lists that each contain two XYZs.)

"Sierpiński" example, order 8

Other Examples


Included in the Turtle Graphics package are examples of various L-Systems that you can use for reference.

"Plant" example, order 5
"Hilbert 3D" example, order 3

Thursday, August 22, 2013

Dynamo Fractals

We've had a very productive and busy few days over here in the factory, but that hasn't stopped me from implementing something very dear to my heart; something I've been toying around with doing for a while now.

What's a Fractal?

Fractals are geometric constructs consisting of self-similar patterns. Most, if not all, fractals are generated using simple recursive rules. It is the recursive property of a fractal's definition that gives it its unique self-similar structure. Wikipedia has a great article, as usual.

For my Dynamo exercise, I looked for fractals which lend themselves to a simpler geometric definition rather than a pixel-by-pixel definition (such as the Mandelbrot or Julia sets). The two I decided to implement are the Sierpiński Arrowhead and the Dragon Curve. Both of these fractals start as a simple geometric figure (a line) and then on each iteration of its recursive creation, all occurrences of the original geometric figure are replaced with a new figure that contains multiple occurrences of the original figure (which can then all be replaced in the next iteration).

These fractals can both be drawn by emulating a pen: we keep track of the direction the pen is facing, and we can make the pen draw lines of a certain length in its current direction. We can turn the pen, and we can draw with the pen. These can be emulated with Transformations: turning applies a rotation to the pen's transformation, and drawing adds a translation (and the end point of the draw operation can be stored as an XYZ for creating line geometry later).
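As a rough sketch of that pen (in Python, with the "transformation" reduced to a heading plus a position; the names are mine, not the Dynamo nodes):

```python
import math

# A minimal pen: turning composes a rotation into the pen's transformation,
# drawing composes a translation along the current heading and records the
# resulting end point for later line creation.
def turn(state, degrees):
    heading, pos = state
    return (heading + math.radians(degrees), pos)

def draw(state, length, points):
    heading, (x, y) = state
    end = (x + length * math.cos(heading), y + length * math.sin(heading))
    points.append(end)
    return (heading, end)

points = [(0.0, 0.0)]               # the starting point is the origin
state = (0.0, (0.0, 0.0))           # facing along +X, at the origin
state = draw(turn(state, 90), 2.0, points)   # turn 90 degrees, then draw 2 units
```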

Sierpiński Arrowhead

One of the most famous fractals is the Sierpiński Triangle (also known as the Sierpiński Gasket). Its starting geometry is a solid equilateral triangle, and on each iteration an inverted equilateral triangle is removed from the center of the solid triangle, creating three solid equilateral triangles around the new hole.

The Sierpiński Arrowhead is used to approximate the whole triangle, and is useful for this exercise because it forms one complete curve (one can draw the entire Arrowhead without lifting up a pen). Its starting geometry is a line segment, which is replaced on each iteration with three line segments, each half the length of the original, with 60 degrees between them. This replacement is inverted on every other application: the "bump" will alternate between being right-side-up and upside-down.

Based on this information, the algorithm looks like this:

Let:

  • direction = direction of replacement
  • length = length of each segment
  • order = number of iterations to draw. Input by the user.
  • scale = size of the total image. Input by the user.
In:
  1. direction is initialized to upside-down. length is initialized to scale/(2^order). If order is odd, then turn 60 degrees (this will keep the triangle upright).
  2. If order = 0: draw a line of length length.
  3. Recur with order - 1 and direction flipped.
  4. Turn 60 degrees in the current direction.
  5. Recur with order - 1 and current direction.
  6. Turn 60 degrees in the current direction.
  7. Recur with order - 1 and direction flipped.
Step 1 is the initialization of a recursive loop, so in Dynamo all of this will be done in a node that wraps a recursive node.

The length initialization is calculated by dividing the starting scale by 2 for each iteration that will occur.

Step 2 is the start of a recursive function. All recursive calls will start there. Step 2 also represents the base case: if we started this algorithm with order as 0, then we would immediately draw a line of length scale along the X axis, which is the correct result.

Steps 3, 5, and 7 draw each of the three sub-sections of a single iteration, separated by turns in the current direction. This is exactly the definition of a single iteration of the arrowhead:  three sub-sections with 60 degrees between them. If order is 1, then the recursive calls will draw lines, and the result will be the one "bump", which is the correct result.
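The whole algorithm can be sketched recursively like so (a Python stand-in for the Dynamo node graph; the ±1 direction flag and the naming are my own):

```python
import math

def arrowhead(order, direction, state, points, length):
    """Steps 2-7: draw one arrowhead section; direction is +1 or -1 (flipped)."""
    if order == 0:                                   # step 2: draw a segment
        heading, (x, y) = state
        end = (x + length * math.cos(heading), y + length * math.sin(heading))
        points.append(end)
        return (heading, end)
    turn = direction * math.radians(60)
    state = arrowhead(order - 1, -direction, state, points, length)  # step 3
    state = (state[0] + turn, state[1])                              # step 4
    state = arrowhead(order - 1, direction, state, points, length)   # step 5
    state = (state[0] + turn, state[1])                              # step 6
    return arrowhead(order - 1, -direction, state, points, length)   # step 7

def sierpinski_arrowhead(order, scale):
    """Step 1: length = scale/2^order; turn 60 degrees first when order is odd."""
    heading = math.radians(60) if order % 2 == 1 else 0.0
    points = [(0.0, 0.0)]
    arrowhead(order, -1, (heading, (0.0, 0.0)), points, scale / 2 ** order)
    return points
```

An order-n curve produces 3^n segments, and with this initialization its end point lands at (scale, 0), i.e. the far corner of the triangle's base.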

With that in mind, let's look at the Dynamo implementation.



At the top, we calculate the length of all drawn segments and create a translation transformation out of it. Rotation is initialized to -60 degrees. The starting transformation is set to either the identity transformation if the starting order is even, or a 60 degree rotation if the order is odd. We also initialize an accumulator that will store all of the results; since we will be storing the end-points of all the translations (emulating the draw function), we need to supply the starting point, which is the origin. All of this is passed to arrowhead curve, which contains the recursive loop.



Some notes on the inputs:
  • order is the current order we are drawing
  • result is the accumulated XYZs representing the end points of the drawn lines
  • transform is the accumulated transformation that contains all of the translations and rotations corresponding to the draws and turns used to generate the XYZs
  • translate is the pre-calculated translation used to draw the segments
  • rotation is the rotation amount to be applied for turns in this iteration
If the order is 0, then we perform a draw by applying the translate to the current transform, and then applying this to an XYZ at the origin to get the appropriate XYZ. The XYZ is added to the accumulator, and the updated accumulator is returned. The updated transform with the translate applied is also returned.

If the order is not 0, then we draw the three sub-sections. We draw the first sub-section with the rotation flipped (which flips the direction). We take the result transform from the first sub-section and rotate it by the current rotation, which emulates the first turn. We pass this turned transform to the second sub-section, along with the current (non-flipped) rotation, and the results of the first sub-section. We turn the result transform again before drawing the third sub-section, which again uses the flipped rotation and the results from the second (and first) sub-section. For all three sub-sections, the order is subtracted by 1, which allows the recursion to approach the base case of order = 0. The results of the third sub-section drawing are then used as the results of the recursive branch.

Sierpinski Arrowhead in Revit, order of 7.

Dragon Curve

The Dragon Curve (also known as the Jurassic Park Curve) is another popular fractal. It is created using 45 and 90 degree angles: its starting geometry is a single line segment, and on each iteration, the single line segment is replaced by two smaller line segments of equal length joined at a 90 degree angle and oriented such that the first new segment forms a 45 degree angle with the original segment. Like the Sierpiński Arrowhead, these replacements alternate between right-side-up and upside-down.

The algorithm looks like this:

Let:
  • direction = direction of replacement
  • length = length of each segment
  • order = number of iterations to draw. Input by the user.
  • scale = size of the total image. Input by the user.
In:
  1. direction is initialized to right-side-up. length is initialized to scale/2^(.5*order).
  2. If order = 0: draw a line of length length.
  3. Turn 45 degrees in the current direction.
  4. Recur with order - 1 and direction = right-side-up.
  5. Turn 90 degrees in the opposite direction.
  6. Recur with order - 1 and direction = upside-down.
  7. Turn 45 degrees in the current direction.
Step 1 is the initialization of a recursive loop, so in Dynamo all of this will be done in a node that wraps a recursive node.

The length initialization is calculated by dividing the starting scale by the square root of 2 for each iteration that will occur. (The square root of 2 is used because a 45-45-90 right-triangle with a hypotenuse of 1 has legs of length 1/√2, in accordance with the Pythagorean theorem; each replacement segment is the original length divided by √2.)

Step 2 is the start of a recursive function. All recursive calls will start there. Step 2 also represents the base case: if we started this algorithm with order as 0, then we would immediately draw a line of length scale along the X axis, which is the correct result.

Steps 4 and 6 draw the two sub-sections of a single iteration, separated by a 90 degree turn in the opposite direction. Before and after the sub-sections are drawn, there is a 45 degree turn in the current direction. This is exactly the definition of a single iteration of the Dragon Curve: two sub-sections that form a 45-45-90 right-triangle with the original segment. The first sub-section is drawn right-side up, and the second is drawn upside-down. If order is 1, then the recursive calls will draw lines, and the result will be the one "bump", which is the correct result.
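The steps above translate directly into a short recursive sketch. The Python below is illustrative only (not the Dynamo implementation): it encodes right-side-up as +1 and upside-down as -1, with the leaf length pre-computed as in step 1.

```python
import cmath
import math

def dragon(order, sign, pos, heading, length, segments):
    """Emit the segments of a dragon curve; sign is +1 for right-side-up
    replacements and -1 for upside-down ones."""
    if order == 0:  # base case: draw a single segment
        end = pos + heading * length
        segments.append((pos, end))
        return end, heading
    heading *= cmath.exp(1j * sign * math.pi / 4)   # turn 45 in current direction
    pos, heading = dragon(order - 1, +1, pos, heading, length, segments)
    heading *= cmath.exp(-1j * sign * math.pi / 2)  # turn 90 in opposite direction
    pos, heading = dragon(order - 1, -1, pos, heading, length, segments)
    heading *= cmath.exp(1j * sign * math.pi / 4)   # final 45 degree turn
    return pos, heading

order, scale = 6, 1.0
length = scale / math.sqrt(2) ** order  # step 1: length initialization
segments = []
end, _ = dragon(order, +1, 0 + 0j, 1 + 0j, length, segments)
# 2^6 = 64 segments; the curve still spans from the origin to (scale, 0)
```

Because each replacement preserves the segment's endpoints, the finished curve always spans the original scale along the starting direction, no matter the order.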

Now let's look at the Dynamo implementation.



This should look familiar if you've already seen the Sierpiński Arrowhead. At the top, we generate the translation transformation to be used for emulating draw. The transformation is initialized to the identity transform, the direction starts right-side-up, and the accumulator is initialized to contain the starting point.



Again, this should be familiar. The base case is set up identically: we perform a draw the same way as before. The recursive case is different but should be straightforward after seeing the Arrowhead version: first we turn 45 degrees in the current direction, then we draw the first sub-section right-side up, then we turn 90 degrees in the opposite direction, then we draw the second sub-section, and finally we turn 45 degrees in the current direction. Like last time, the accumulator and the transformation are passed along to the results.

Dragon Curve in Revit, order of 12

Friday, February 22, 2013

Dynamo: Preventing Unnecessary Evaluation (Part 2)

In my last blog post, I spoke about how we could use graph analysis in Dynamo to avoid duplicating code when compiling Dynamo workflows down to the FScheme engine. I am pleased to announce that this has now been implemented and is live on GitHub!

Of course, nothing ever goes right the first time. Hours after pushing all the new code to GitHub, during a moment of quiet reflection, I discovered why the approach I originally outlined was flawed. As with most issues in programming--I find--the problem has to do with mutation.

So what's wrong?

Let's use the following workflow as an example:



If we were to optimize this using the original approach, the resulting FScheme code would look like this:

(λ (path)
  (let ([a (read-file path)])
    (list (begin (write-file path "text")
                a
                path)
          a)))

(Note that this is a definition for a new node, so it gets compiled down to a function. Hence the λ at the beginning.)

At first glance, the problem may not seem immediately obvious. The Read File node is connected to two inputs, Perform All and List. We use the lowest single ancestor algorithm to find where we can place a let binding: in this case, it's around the List node. We evaluate the Read File node, store it in a new identifier a, and then in both places that it's used, we refer to a.

...so what's wrong?

FScheme--like most Schemes--is an eager language. This means that it will evaluate all that it can as soon as it can. When traversing the above code, FScheme will first evaluate (read-file path) and store it in a. Then it will evaluate the body of the let binding, where (write-file path "text") will be evaluated. The problem here is that Perform All (which compiles to the begin expression) must evaluate its inputs in the order they are listed, otherwise side-effects could occur in the wrong order. Since the order in which side-effects occur matters a lot, this is a big problem.

What we really want is to not evaluate (read-file path) until we normally would, after (write-file path "text"). Then, we want to store that value somewhere and reference it later when we need it. In more traditional imperative programming languages, this is familiar. Take a look at the following snippet of C#:

delegate (string path)
{
  string a;
  writeFile(path, "text");
  a = readFile(path);
  return new List<string>() { path, a };
};

Notice how we store a after we perform writeFile, and then later we just refer to a again. Well, we can do something similar in Scheme:

(λ (path)
  (let ([a (begin)])
    (list (begin (write-file path "text")
                 (begin (set! a (read-file path))
                        a)
                 path)
          a)))

This may look strange, but we're following the same pattern as in C#:  we declare a new identifier a that's uninitialized (the (begin) code returns a value that, when used, will result in an error), then where a would normally first be referenced, we evaluate (set! a (read-file path)) which stores an actual value in a. In all other places, we can just refer to a which now contains an actual value.
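The same ordering argument can be checked with a small, self-contained sketch. The Python below is purely illustrative (read_file and write_file are stand-ins, not Dynamo nodes): it mirrors the assign-at-first-use pattern, so the read happens after the write, runs exactly once, and later uses just reference the cached value.

```python
log = []

def read_file(path):         # stand-in for the Read File node
    log.append(("read", path))
    return "contents"

def write_file(path, text):  # stand-in for the Write File node
    log.append(("write", path))

def compiled(path):
    a = None                  # declared but uninitialized, like (begin)
    write_file(path, "text")  # side effect runs first, as required
    a = read_file(path)       # bound at the point of first use, like set!
    branch = [a, path]        # the begin branch; returns its last value
    return [branch[-1], a]    # elsewhere we simply reference a

result = compiled("f.txt")
# log records the write before the read, and the read happens only once
```

If the read were instead bound eagerly at the top of the function (as in the let version), the log would show the read before the write, which is exactly the bug described above.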

Putting it all together

The full algorithm is as follows:
  1. For each node X that has multiple outputs connected:
    1. Assign a unique string ID to be used as a storage variable
    2. Look up the lowest single ancestor LSA(X)
  2. Compile the dynamo graph starting from the entry point (node with no connected output).
    1. When reaching LSA(X), insert a let binding, binding ID to a (begin)
    2. The first time X is reached, insert a begin that first binds the compiled form of X to ID, and then returns ID.
    3. All subsequent times X is reached, simply insert ID

Sunday, February 17, 2013

Dynamo: Preventing Unnecessary Evaluation

Hello again! Today I'm pleased to present some progress with automatic removal of duplicate code, an issue that has plagued Dynamo workflows for a long time.

What's the problem?

Take a look at the following workflow:


Notice that several of the nodes in the workflow have their outputs connected to multiple places. Therein lies the heart of the problem: as a user, one would expect the output of a node to be passed to all inputs connected to it. Dynamo does this, but up until now, it did this by duplicating code.

You can see this in the compiled form of the above workflow:

(list (+ (square 3) 3)
      (+ (square 3) 3) 
      (* (square 3) (square 4)) 
      (+ (square 4) (square 4)))

When Dynamo evaluates this expression, it will run all of the duplicated code: (square 3), for example, will be run 3 times. This is two more times than necessary.


That doesn't seem like a big deal

It's true that for basic calculations such as in this example, the code that's duplicated doesn't make a big difference. However, if the code that's duplicated takes a long time to run--or worse, performs a mutation such as adding geometry to a Revit document--then the duplication turns out to be a very big deal.

Fortunately there was a way to work around this: the only nodes where duplication never really mattered were ones that had no inputs themselves, such as numbers or variables. This is because it's just as fast to "calculate" the output of these nodes as it would be to look up a cached result. So what you could do is create a new node and pass in the results you want to cache as inputs. The results are then cached as the node's variables, and you can access them as many times as you like.

To remove duplication from the above workflow using this method, the compiled expression would look like this:

((lambda (a)
  ((lambda (b)
     ((lambda (c)
        ((lambda (d)
           (list c c (* b d) (+ d d)))
         (square 4))) 
      (+ b a))) 
   (square a))) 
 3)

In the actual Dynamo UI, this would require four separate user-defined nodes, and is obviously too painful to have to do. Ideally, all of this would be done under the hood for us.

There is a better way

Scheme has a built-in construct called let for dealing with caching values, without having to use intermediate helper functions. Using let* (shorthand for nested lets), the above can be rewritten as:

(let* ([a 3]
       [b (square a)]
       [c (+ b a)]
       [d (square 4)])
  (list c c (* b d) (+ d d)))

It's actually significantly easier to have Dynamo generate a let* than to create all of the intermediate functions. When passed to the underlying FScheme engine, it will all eventually be converted to these helper functions anyway.

So how does this happen?

Lots of graph analysis. When compiling the Expression used for evaluation, Dynamo will now check for duplicated code by looking for nodes with an output being used in more than one place. It figures out dependencies among these nodes (so that they are ordered properly in the let*) and calculates the extent of where the code will be used (to determine the "entry-point" of the let*).

For any node with more than one output connection, in order to determine where a let* will be placed, the lowest single ancestor for the node must be calculated. In graph theory, the LSA of a node v is another node l that lies on all paths from the root of a directed acyclic graph (which is what a Dynamo workflow is) to v, where none of l's descendants also have this property (otherwise the root node would always trivially satisfy the definition). I found a great paper by Fischer and Huson outlining how to implement an LSA lookup.
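Fischer and Huson's algorithm is the efficient way to compute this. As a rough illustration of the concept only (not their method), the LSA of v coincides with v's immediate dominator: the lowest node that lies on every root-to-v path. The Python below is a naive fixpoint sketch under that assumption, with edges pointing from the root (the entry-point node) down toward its inputs; all node names are illustrative.

```python
def dominators(succs, root):
    """Naive fixpoint: dom[v] = {v} union the intersection of dom
    over all of v's parents."""
    nodes = set(succs)
    preds = {n: set() for n in nodes}
    for n in nodes:
        for m in succs[n]:
            preds[m].add(n)
    dom = {n: set(nodes) for n in nodes}
    dom[root] = {root}
    changed = True
    while changed:
        changed = False
        for n in nodes - {root}:
            parents = preds[n]
            shared = set.intersection(*(dom[p] for p in parents)) if parents else set()
            new = {n} | shared
            if new != dom[n]:
                dom[n], changed = new, True
    return dom

def lsa(succs, root, v):
    """Lowest single ancestor: the lowest strict dominator of v.
    Strict dominators form a chain, so the lowest one is dominated by
    all the others and therefore has the largest dominator set."""
    dom = dominators(succs, root)
    return max((d for d in dom[v] if d != v), key=lambda d: len(dom[d]))

# Example: List is the root; Square's output feeds two consumers,
# so its let binding belongs at their lowest single ancestor.
graph = {"List": ["Plus", "Times"], "Plus": ["Square"],
         "Times": ["Square"], "Square": []}
shared_ancestor = lsa(graph, "List", "Square")  # "List"
```

This quadratic sketch is only meant to make the definition concrete; the paper's algorithm avoids recomputing these sets for every node.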

Tuesday, January 8, 2013

Dynamo - FScheme Refactor Progress

I've been spending the past few weeks working on various improvements to FScheme, the core scripting engine of Dynamo. Originally I was focusing on performance, but I ended up adding a bunch of things to bring the language closer to actual Scheme. The core language is getting there, but the library is still missing a lot of what's in the R5RS Scheme standard.

What's FScheme?

Way back when Ian first created Dynamo, it used an event structure to notify nodes that their inputs had been evaluated. This was already slow, and the fact that it couldn't work inside of a single Revit API transaction meant that each node had its own transaction (something that present-day Dynamo does in Debug mode), which in turn meant that updates to Dynamo could take forever to propagate to Revit.

Starting in March of 2012, I began working on modifying the Dynamo engine so that the node interface could be converted to a Scheme-like expression which could then be evaluated, thus making Dynamo itself as full-featured a programming language as Scheme (in theory). In May, I committed these changes to GitHub.

FScheme began as a prototype in Python that I made in order to demonstrate that you can imperatively create Scheme expressions and then execute them. The idea is that the imperative interface could be wrapped by a GUI. I then set out to port the prototype to .NET so that it would be compatible with the Revit API and the existing Dynamo code. Originally, I was going to write the engine in C#, but after doing some research on tail call optimization in the language, I realized this wouldn't work. After doing a little more research, I came to the conclusion that F# would be the best candidate: it has full tail call optimization (even for x86) and it compiles to CLR code, meaning that it is not interpreted. The fact that F# code is callable from C# means that I could access this engine directly from the existing Dynamo code.

The problem with writing it in F# was that I didn't know anything about the language; the closest language to F# that I had used in the past was Haskell, and I had only used it briefly. Fortunately, I found this great tutorial about writing a Scheme interpreter in F#, and--even better--the code was open source on GitHub. After making some basic modifications to facilitate some features that Dynamo should support (with respect to performance and dynamic updating), I had a basic Scheme language that would be hidden from the end user behind Dynamo's UI.

So What's New?

It turns out that there were a ton of problems with the FScheme language outlined by the tutorial. The first problem was that lexical scope was broken and would prevent tail calls from being optimized properly. Macros were also first-class and evaluated at runtime, which on the one hand was cool because you can then pass macros as arguments just like you can pass functions, but it made writing library functions in F# a total headache. It also greatly messed up evaluating lists, since the evaluator made no distinction between a list data-structure and a syntax-list used for function calls.

After attempting to fix some of these things, I quickly realized that it would be simpler to rewrite the language from the ground up. So that's what I did. FScheme is now much closer to normal Scheme, except it retains the features I need for interoperability with Dynamo.

Unfortunately, a lot of these changes are not user-facing. Macros are now evaluated at compile-time, and so are no longer first class. Fortunately people don't usually pass if statements as arguments to functions, so this won't be missed. I also had to remove call/cc since nobody uses it and it was bogging down performance. Finally, I added a (simple) compiler that performs a bunch of optimizations and produces quick F# code.

For a basic benchmark, I ran the following expression in both the old FScheme implementation and the new one, and compared the execution times:

(begin (define counter (lambda (x) (if (<= x 0) x (counter (- x 1))))) (counter 1000000))

Avg. Old FScheme Execution Time: 5130ms
Avg. New FScheme Execution Time: 2539ms

Admittedly, this test only really benchmarks function call speed, but since that's the most common operation in FScheme, the fact that it runs over twice as fast is a significant accomplishment.

You can see all the new changes to the FScheme language in the fscheme-improvements branch on GitHub.

Tuesday, December 18, 2012

Dynamo - Manual Transactions

Hey there, kids! I hope you're all getting ready for the holiday festivities, but maybe some of you are using some of your downtime to tinker with Dynamo on the side. Today, I'll be talking about the Transaction node in Dynamo, and what it can do for your holiday experimentation.

How Dynamo evaluates nodes

I mentioned in my previous blog post that one of Dynamo's internal modes of interacting with the Revit API via Transactions is Manual Transaction mode, and Dynamo only evaluates in this mode when there is a Transaction node in the execution workflow. To summarize again: Dynamo starts evaluating at the right-most nodes (nodes with nothing connected to their output). If a node has any inputs, it first evaluates those inputs, then passes the evaluated values to the node, which uses them to produce its own result.

Take a look at the following image, in which I've labeled the order nodes are evaluated:


First, Dynamo tries to evaluate the Reference Point node. It sees that the node has an input, so it then tries to evaluate the input, which is the GraphFunction node. That also has inputs, so it starts evaluating those, starting with the first (top-most). The multiplication node also has inputs, so it evaluates those. This brings us to the first Number node. It has no inputs, so it can be evaluated as-is. The number -1 is evaluated. Now the other input to the multiplication node needs to be evaluated. The pi node has no inputs, so it just returns 3.14159... Now, the inputs to the multiplication node are satisfied, and we can evaluate the node, which multiplies them. Dynamo will now start evaluating the second input to the GraphFunction node, and the same process will occur.
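That traversal is just a post-order depth-first walk of the graph. A minimal sketch in Python (node names and the graph shape are illustrative stand-ins, not Dynamo's actual data structures):

```python
import math

def evaluate(node, inputs, compute, trace):
    """Evaluate a node by first evaluating its inputs (top-most first),
    then applying the node's own function to the results."""
    args = [evaluate(i, inputs, compute, trace) for i in inputs[node]]
    trace.append(node)
    return compute[node](*args)

# A toy version of the workflow above: -1 times pi feeding a function
# node, which feeds the reference point.
inputs = {"RefPoint": ["Graph"], "Graph": ["Mult"],
          "Mult": ["Num", "Pi"], "Num": [], "Pi": []}
compute = {"Num": lambda: -1, "Pi": lambda: math.pi,
           "Mult": lambda a, b: a * b,
           "Graph": lambda x: ("pt", x), "RefPoint": lambda p: p}

trace = []
result = evaluate("RefPoint", inputs, compute, trace)
# trace: Num, Pi, Mult, Graph, RefPoint -- leaves first, outputs last
```

The trace makes the ordering in the paragraph above explicit: input-less nodes evaluate immediately, and every node evaluates only after all of its inputs have.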

What the Transaction node does

The Transaction node hooks into this evaluation tree-traversal in order to wrap some of the evaluation in a Revit API transaction. Let's say we had a Transaction node in between the division node and the GraphFunction node. When it would normally come time to attempt to evaluate the division node, it would first run into the new Transaction node. The Transaction node inserts some extra code in the process that initializes a new Revit API transaction, and then continues with the evaluation process. This means that the division node and its inputs will be evaluated with a transaction active. Once the division node has finished being evaluated, it's time for the Transaction node to be evaluated. For its evaluation, it ends the Transaction it created, and then passes its only input along as the result of the evaluation.
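As a sketch of that control flow (not the Revit API; FakeTransaction and the node functions below are hypothetical stand-ins), a Transaction node opens the transaction before recursing into its input and closes it afterwards, passing the input's value through:

```python
events = []

class FakeTransaction:
    """Hypothetical stand-in for a Revit API transaction."""
    def start(self):
        events.append("start")
    def end(self):
        events.append("end")

def transaction_node(child_thunk):
    """Wrap the evaluation of the node's single input in a transaction."""
    t = FakeTransaction()
    t.start()               # extra code inserted before evaluating the input
    result = child_thunk()  # the input subtree runs with a transaction active
    t.end()                 # the Transaction node's own evaluation ends it
    return result           # the only input is passed along as the result

def division_node():        # stand-in for the wrapped division node
    events.append("divide")
    return 10 / 4

value = transaction_node(division_node)
# events: start, divide, end -- the division ran inside the transaction
```

The thunk is what makes this work: the input subtree is evaluated inside the call, so everything beneath the Transaction node runs between start and end.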

Caveats of the Transaction node

When a Transaction node is in the evaluation tree, Dynamo can no longer automatically manage the transactions of its nodes, since the user has explicitly determined when a transaction begins and--more importantly--ends. This means that any node which requires a transaction must be manually connected to a Transaction node by the user. This will prove to be rather inconvenient if you're using a third-party node that uses a transaction internally. However, it mirrors how writing any code for the Revit API involving transactions works, so if you're used to working with transactions in code, working with them in Dynamo should feel familiar.

So when should I use a Transaction node?

Any time you need explicit control of a transaction in a Dynamo workflow. Without a Transaction node in the workflow, Dynamo will automatically detect if any of the nodes require a transaction, and if they do, it will run the entire workflow inside of one. If this functionality cannot accomplish what you need, that's when you use a Transaction node.

As mentioned in my previous blog post, a great example is the Solar Radiation Optimizer sample. It first adjusts the value of a family instance parameter, then needs to recalculate the amount of incident solar radiation with the new value applied. The Solar Radiation plugin cannot recalculate until the parameter change has been applied, which requires the transaction the parameter change occurred in to have ended. When I wrote the Solar Radiation Optimizer sample, I used a Transaction node in order to set the new family instance parameter value. After the value was set, the transaction would end, the solar radiation plugin would update and spit out its results to a file, Dynamo would pick up on the change to the file and read in the new data, then compare the data with what it has stored, and then repeat the process until the end of the loop.