Towards a Workable INumeric, Part 1

December 2nd, 2012 matt Posted in Programming

Over the past couple of years (ok, decade? Since the days of .NET 1.1, I suppose) I've increasingly found myself wanting to write high-performance numerical code in C#. For example, while I was getting my Master's in electrical engineering at Ohio State I worked on computational electromagnetics software, most of which was written in C or FORTRAN, but I was drawn to the productivity boost of managed languages like C# and F#.

In scenarios like this, performance is king – the codes typically took days or weeks to run, so a 10% performance improvement could save quite a bit of time. It was worth the effort to micro-optimize. Algorithms originally written for double-precision numbers could be sped up by using single-precision under certain circumstances. Some pieces of code could be improved by using integer arithmetic. Inevitably, this means that common code like sums and averages needs to be written and maintained for each numeric type.

For example, just for two types (int and float) and one method (Sum), the code starts to add up:

    public static class Utilities
    {

        public static int Sum(int[] items)
        {
            int sum = 0;
            foreach (int item in items)
            {
                sum += item;
            }
            return sum;
        }

        public static float Sum(float[] items)
        {
            float sum = 0;
            foreach (float item in items)
            {
                sum += item;
            }
            return sum;
        }
    }

If the algorithm is more complicated, it becomes difficult to keep things in sync – you have to remember to make the same changes in multiple places at the same time. Typically when you encounter code like this it's a good indicator that some refactoring is necessary, but because the code is performance-critical, that's not an option.

Once .NET 2.0 came out and I discovered generics, I thought that would certainly solve the problem – I could just write the method once in terms of a type T, and then use it for int, float, double, etc. Wrong. :)

This is some code that I wish I could have written, but alas, that is not the case. Warning, unrealistic code ahead.

        public static T Sum<T>(T[] items)
            where T : int, float, double
        {
            T sum = 0;
            foreach (T item in items)
            {
                sum += item;
            }
            return sum;
        }

Generic type constraints can't name individual numeric types (int, float, double) – they can only require that T be a struct (value type) or a class (reference type). Even with a "where T : struct" clause, it can't be assumed that 0 (an int) is assignable to type T. "default(T)" helps, but it doesn't always express the concept of "zero" for every possible value type (even user-defined ones) that could be specified. And even past that, not all value types define the "+" operator – int, float, and double are special-cased by the C# compiler.
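
To make the dead end concrete, here's a sketch of the closest you can legally get with the constraints that do exist – it compiles right up until the moment you touch an operator:

    public static T Sum<T>(T[] items)
        where T : struct
    {
        T sum = default(T);  // zero-initialized bits, but not a semantic "zero"
        foreach (T item in items)
        {
            // sum += item;  // error CS0019: Operator '+=' cannot be applied
            //               // to operands of type 'T' and 'T'
        }
        return sum;
    }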

Generic type constraints can specify that the type T must implement a specific interface, but there is no interface that the built-in numeric value types (int, float, double, uint, etc.) all implement that exposes methods for the Add, Subtract, Multiply, and Divide operators. The BCL simply doesn't contain such an interface. INumeric would be a good name for one.

That seemed to be the end of the road for that idea, and I reluctantly chose to continue maintaining multiple copies of the same code for different data types.

Since then I've had the nagging feeling that there has to be a better way, and I've tried a few different avenues to get past the roadblock – F# inline functions, the C# 'dynamic' keyword, and a custom INumeric implementation (the topic of this post and a few more).

F# inline functions

A few years after this I stumbled on F#, back when it was an early beta that installed into Visual Studio 2005. I was simply looking for an interactive scripting environment like MATLAB with good .NET interop, not necessarily something geared towards science/engineering. I'd never heard of functional programming, but picked it up quickly since F# was just so amazing.

F# took a much more elegant approach to this problem, allowing inline functions where the type inference engine can express constraints over individual static members (and since F# is a functional language, operators like +, -, *, and / are just functions, so they are included in this too). For example:

let inline Sum (items : 'a array) (zero : 'a) =
    items |> Array.fold (fun acc item -> acc + item) zero

The type signature of Sum (from F# interactive) is:

val inline Sum :
  ^a array -> ^a -> ^a when ^a : (static member ( + ) : ^a * ^a -> ^a)

That’s great, and I love F#, but there were some practical reasons why I couldn’t just migrate all of my code into F#. C#/F# interop is fine, but sometimes leads to strange APIs, not to mention that all of the numerical code would need to be refactored out into a separate project/DLL. It’s a good solution, but switching languages kinda evades the original problem of “high-performance numerical code in C# without having to maintain multiple copies”.

(Note: This Sum function is just for illustration and parity with the C# examples above. Obviously there is a built-in sum function in the core libraries that you should use instead :) )

C# dynamic keyword

Another interesting development happened more recently with .NET 4 and C# 4’s ‘dynamic’ keyword.

Luca Bolognese has a good post about the approach.
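
For reference, here's a minimal sketch of what the approach looks like (my own illustration in the spirit of that post, not code taken from it):

    public static T Sum<T>(T[] items)
    {
        dynamic sum = default(T);
        foreach (T item in items)
        {
            sum += item;  // the operator is bound at runtime by the DLR
        }
        return sum;       // implicit dynamic-to-T conversion
    }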

The downside is that the method binding happens at runtime and carries quite a bit of overhead, which really shoots down the "high performance" part, so I didn't explore this option much more.

Custom INumeric implementation

And that sets the stage to talk about the next idea – one I've been pondering for awhile, and the topic of this post.

The main idea here is that:

  • C# allows user-defined structs
  • Typically a user-defined struct contains more than one private value-type field, but it's perfectly legal to create a struct with only one
  • If a user-defined struct has only one value-type field, then – since a struct is stored inline, with no object header or reference indirection – from a memory/bytes/bits perspective there is no difference between an instance of the user-defined struct and an instance of the underlying value type
  • The user-defined struct is, however, a completely different C# type – and unlike the built-in numeric value types, which are effectively sealed, it can implement whatever interfaces we need
  • We can define an INumeric interface to represent a numeric value type
  • We can then create "numeric wrapper types" for each built-in numeric value type that implement this INumeric interface

Ok, I know this seems pretty bizarre, so let me explain with some code. (Caveat – this is just a first pass at the code to get the idea across. There are some fairly large usability problems with it at the moment, but hopefully they can be fine-tuned later.)

Here’s INumeric:

    public interface INumeric<T>
        where T : struct
    {
        T Add(T item);
        T Subtract(T item);
        T Multiply(T item);
        T Divide(T item);
    }

And here are some implementations for int and float:

    public struct IntNumeric : INumeric<IntNumeric>
    {
        private int _value;

        public IntNumeric(int value) { _value = value; }

        public IntNumeric Add(IntNumeric item) 
        {
            return new IntNumeric(this._value + item._value);
        }

        public IntNumeric Subtract(IntNumeric item)
        {
            return new IntNumeric(this._value - item._value);
        }

        public IntNumeric Multiply(IntNumeric item)
        {
            return new IntNumeric(this._value * item._value);
        }

        public IntNumeric Divide(IntNumeric item)
        {
            return new IntNumeric(this._value / item._value);
        }

        // ...
    }

    public struct FloatNumeric : INumeric<FloatNumeric>
    {
        private float _value;

        public FloatNumeric(float value)
        {
            _value = value;
        }

        public FloatNumeric Zero()
        {
            return new FloatNumeric(0.0f);
        }

        public FloatNumeric Add(FloatNumeric item)
        {
            return new FloatNumeric(this._value + item._value);
        }

        public FloatNumeric Subtract(FloatNumeric item)
        {
            return new FloatNumeric(this._value - item._value);
        }

        public FloatNumeric Multiply(FloatNumeric item)
        {
            return new FloatNumeric(this._value * item._value);
        }

        public FloatNumeric Divide(FloatNumeric item)
        {
            return new FloatNumeric(this._value / item._value);
        }


        // ...
    }

IntNumeric and FloatNumeric basically just provide method wrappers around the +, -, *, and / operators.
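
Since the whole idea hinges on the wrapper being bit-for-bit identical to the wrapped type, it's easy to sanity-check the size claim from the bullet list above (a quick check of my own, not part of the original design):

    using System;
    using System.Runtime.InteropServices;

    class SizeCheck
    {
        static void Main()
        {
            // Both lines should print 4 - the wrapper adds no storage of its own.
            Console.WriteLine(Marshal.SizeOf(typeof(int)));
            Console.WriteLine(Marshal.SizeOf(typeof(IntNumeric)));
        }
    }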

One thing that always comes up with numerical computations is the need for array and matrix data structures. It would be nice to have a generic Array<T> or Matrix<T> where T is an INumeric. This allows common algorithms to be written once without having to maintain completely separate codebases for MatrixOfInt, MatrixOfFloat, etc.

Here’s a very simple implementation of a NumericArray<T>, just as an example:

    public class NumericArray<T>
        where T : struct, INumeric<T>
    {
        private T[] _array;

        public NumericArray(params T[] items)
        {            
            int size = items.Length;
            _array = new T[size];
            Array.Copy(items, _array, size);
        }

        public T this[int index]
        {
            get { return _array[index]; }
            set { _array[index] = value; }
        }

        public T Sum()
        {
            T sum = new T();
            foreach (T item in _array)
            {
                sum = sum.Add(item);
            }
            return sum;
        }
    }

Note that the Sum method is written entirely in terms of the generic type T. The call to the default constructor "new T()" is understood (an implicit interface requirement, I suppose) to mean "zero" – for a struct, new T() produces the zero-initialized value. And instead of the + operator, the .Add method on the INumeric type T is used.
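
Here's a quick sketch of what using it looks like. (One caveat of my own: getting the underlying int back out of the result would need an accessor – say, a Value property or an implicit conversion – that the wrapper above doesn't define yet. That's one of the usability wrinkles discussed below.)

    NumericArray<IntNumeric> numbers = new NumericArray<IntNumeric>(
        new IntNumeric(1), new IntNumeric(2), new IntNumeric(3), new IntNumeric(4));

    IntNumeric total = numbers.Sum();  // wraps the value 10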

It certainly seems like a lot of work since there will need to be conversions from regular value types to “wrapped” value types. Hopefully that can be addressed.

We also don't know what the performance implications of this are – is there overhead for the .Add method invocation? As it turns out, there isn't! The CLR JITter is incredibly smart when it comes to generating x86 code for these wrapped value types – it treats IntNumeric exactly the same as an int, simply inlining the appropriate opcode for each value type (add for int, faddp for float). I'll dig into this in much more detail in an upcoming post. So I think the performance of this approach is very promising. :)

This is just a first pass at the approach. I'm still trying to perfect it and iron out some of the wrinkles that make it hard to use – for example, having to convert every float to a FloatNumeric. To be fully-featured numeric data types in .NET, the wrappers also need to implement all of the interfaces that the built-in numeric data types implement (IEquatable<T>, IComparable<T>, etc.). I think it's worth investigating, and I'll try to get to the bottom of its feasibility in the next few posts.
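
As a rough sketch of where that leads (this is just my guess at the member list, mirroring what System.Int32 itself provides – not a final design), a fuller IntNumeric might start out like this:

    public struct IntNumeric : INumeric<IntNumeric>,
        IEquatable<IntNumeric>, IComparable<IntNumeric>
    {
        private int _value;

        public IntNumeric(int value) { _value = value; }

        // The INumeric<T> members, as before
        public IntNumeric Add(IntNumeric item) { return new IntNumeric(_value + item._value); }
        public IntNumeric Subtract(IntNumeric item) { return new IntNumeric(_value - item._value); }
        public IntNumeric Multiply(IntNumeric item) { return new IntNumeric(_value * item._value); }
        public IntNumeric Divide(IntNumeric item) { return new IntNumeric(_value / item._value); }

        // The extra members needed to behave like a good .NET citizen
        public bool Equals(IntNumeric other) { return _value == other._value; }
        public int CompareTo(IntNumeric other) { return _value.CompareTo(other._value); }
        public override bool Equals(object obj) { return obj is IntNumeric && Equals((IntNumeric)obj); }
        public override int GetHashCode() { return _value.GetHashCode(); }
        public override string ToString() { return _value.ToString(); }
    }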

Keep in mind that a lot of this depends on particular implementation details of the JITter in the .NET CLR, so things that work now could change in the future. Also keep in mind that there are actually 4 different CLRs – one for each combination of {.NET 2.0, .NET 4.0} and {x86, x64}. (Ok, actually 6, since there is also .NET 1.1, but it's ancient.)

Also, one more note to give credit where credit is due – this approach to INumeric stems from a number of conversations with two friends (and all-around brilliant guys), Tom Jackson and Stuart Bowers.

Stay tuned!


Finding the Size of SQL Azure Database Tables

November 23rd, 2012 matt Posted in Programming

Recently I needed to know exactly how much storage a SQL Azure database was consuming. I was also interested in seeing the usage on a per-table basis, to see which tables were contributing most to the size.

I came up with the following query:

WITH 
[TableSize] AS 
(
   SELECT 
          sys.objects.name AS [TableName]
          ,SUM(reserved_page_count) * 8.0 / 1024 AS [SizeMB]
     FROM sys.dm_db_partition_stats, sys.objects
    WHERE sys.dm_db_partition_stats.object_id = sys.objects.object_id
      AND reserved_page_count > 0
      AND sys.objects.is_ms_shipped = 0
 GROUP BY sys.objects.name
),
[Total] AS
(
   SELECT SUM([SizeMB]) AS [TotalMB] FROM [TableSize]
)
SELECT [TableName]
       ,[SizeMB]
       ,([SizeMB] / (SELECT [TotalMB] FROM [Total])) * 100.0 AS [Percent]
       FROM [TableSize]
UNION ALL
  SELECT 'Total', (SELECT [TotalMB] FROM [Total]), 100.0

On a sidenote, CTEs (Common Table Expressions) are my new favorite tool when writing SQL statements. They allow one to "refactor" common segments of a query into a named expression that can be re-used later on – in the query above, [TableSize] is defined once and then used by both the [Total] CTE and the final SELECT. Using CTEs makes for much more readable (and maintainable) SQL code. Just like in C#, where the urge to copy/paste a piece of code tells me to refactor it into a method, the urge to copy/paste segments of a SQL query tells me to refactor them into a CTE.


Run Tracking Apps for WP7

November 22nd, 2012 matt Posted in Phone, Running

Over the last year or so I’ve tried to get back into running. I did Cross Country back in school, but unfortunately gave it up when I got to college. Last summer I finished my first “competitive” race – the Seattle Rock ‘N’ Roll Half Marathon.

One of the things that really kept me motivated was the ability to obtain very detailed data about my runs and track it online. It was really helpful to be able to see my pace slow down as I was going uphill, to compare split times between runs, etc. Beyond just making improvements to my running (well, I had better split times when I tied my shoelaces a particular way, or when I had 2 granola bars for breakfast instead of one), it was also psychologically helpful since other people could see my progress. As silly as it sounds, the "peer pressure" of knowing that skipping the gym meant no new workout update got posted was a great motivation for me.

At first I used RunKeeper.com. It was great – it had a fairly intuitive user interface, good mapping functionality, and they had a WP7 phone application. To be honest, the app was incredibly flaky and crashed all the time. I'd used it enough to know how to avoid crashing it (waiting 3 seconds after every button press, etc), but one day it wouldn't boot and I tried to reinstall it. Turns out, RunKeeper had removed the app from the Marketplace a few months earlier, and I couldn't reinstall it. ARRRRGGHH. I'd even been enticed to upgrade my RunKeeper account to Elite status for $30, which gave me the ability to stream my position live so that my family could watch me online during the half marathon. That sent me on a hunt to try out as many other run tracking apps for Windows Phone 7 as I could, in search of the best one.

So far I've tried RunKeeper, Endomondo, and MapMyRun. Here are the pros and cons as far as I can tell, for anyone looking for an in-depth comparison.

RunKeeper

Pros – The web interface is clean and intuitive. I like the ability to plan ahead and create routes of a certain length. You can view your workout route and pace/elevation charts. I also like the fitness reports so that you can compare your totals for the week/month/etc and see how you are doing. The fitness alerts are also very encouraging, so if you beat a personal best in any number of categories you get a notification email. It has the ability to import/export to TCX and GPX formats.

Cons – The JavaScript map functions can be flaky sometimes, though, so you have to learn the tricks. Also, it doesn’t seem like they have enough classifications for activities (for example, there is no designation for a stair-climbing machine, only an elliptical). As mentioned before, there is no RunKeeper app for WP7 anymore, which is a huge bummer.

Endomondo

The Endomondo web interface has a lot of the same functionality as RunKeeper, though it isn't as complete.

Pros – There is a WP7 app. I love the app’s design – it’s nice and clean, easy to use, and not buggy at all. I like it a lot. The app will audibly tell you your split times every mile so you can keep your phone in your pocket and still know how you’re doing. The web interface has the ability to view your workouts and see split times and pace/elevation charts. It has the ability to import/export to TCX and GPX formats.

Cons – The Endomondo web interface is…I'm not really sure how to describe it. "In your face"? It's frenetic. It has much of the same functionality as RunKeeper, but the buttons are small, all the menus shout at you, and there are just too many things crammed onto one page. I don't understand the organization of the site. Color is an overused designator – it would be very hard to use if you were color blind. The site layout and functionality also change all the time (sometimes weekly).

MapMyRun

Pros – There is a WP7 app. The web interface lets you see the map of your workout.

Cons – There aren’t a lot of features. The WP7 app is very basic – just plots your GPS location on a map and provides minimal statistics.

Conclusion

In comparison, I like the RunKeeper web interface the best and I like the Endomondo WP7 app the best. So, I use the Endomondo app during my workout, then log into the Endomondo web interface, export a TCX file, then import the TCX file into the RunKeeper website. It’s kludgy, but it works. It’s the best of both worlds, I guess.

I suppose there is also a bit of "data stickiness" at play, too – since RunKeeper was the first site I started using and already holds a considerable amount of my data, it is harder to migrate between sites. None of the sites have bulk export (or import) functionality.

Also – I just found another one called Tracks, but haven’t tried it out yet. Looks interesting.


Installing WP LaTeX

August 17th, 2010 matt Posted in Server

When I first started blogging I was really interested in having LaTeX equation support. After all, software development hasn’t always been my vocation – I did go to school for electrical engineering and have had my fair share of math classes :)

2 years ago getting LaTeX working within WordPress was incredibly difficult. It required installing ImageMagick, dvips, and a myriad of other tools on your hosting server. I asked Hostgator support if this was possible, only to be told an emphatic ‘no’ – the Linux VMs used to host my website were significantly sandboxed and this was not possible, so I gave up.

Fast forward to today – not only has WordPress made significant advances in update functionality, it is now trivial to add new plugins. What used to take hours at an SSH command line is now possible with only a few clicks.

Here are the steps to follow:

  1. Log into the admin console and click on the Plugins tab.
  2. Click on Add New.
  3. Search for 'latex'.
  4. Find "WP Latex" in the list and click on 'Install Now'.
  5. Go to the Plugins tab, find 'WP LaTeX', and click 'Activate'.

Now it's just a matter of composing a blog post with equations embedded between $latex … $ tags.

I like using Windows Live Writer for composing blog posts, and this works perfectly. All that I need to do is create a new paragraph, enter the equation text in LaTeX, and then center align it.

Here’s the rendered equation:

e^{i\pi}+1=0

Very cool. :)


On “Preinstalled Software” and “One Button Restore”

August 14th, 2010 matt Posted in Personal

Most laptop computers these days come with a number of “features” that drive me completely insane. Bear with me – I’ll try to stay calm :)

In particular – “preinstalled software” and “one button restore” are particularly egregious. My new Lenovo G555 is no exception.

The amount of preinstalled software was staggering. No, I don’t want to have facial recognition software protecting my computer from unauthorized logins. No, I don’t want identity theft protection management utilities from some software company I’ve never heard of. No, I don’t want silly wireless network control software that looks like it was written for a target audience of 6 year olds. No, I don’t want McAfee virus protection that only works for 30 days before it mercilessly hounds me with unending popups to pay for a subscription.

If the preinstalled pieces of software are not totally useless, they’re predatory. What’s the value in this from the consumer’s point of view? As a purchaser of a computer, I feel intense frustration with Lenovo for lessening the value of my purchase by installing a lot of things I don’t want, forcing me to spend my time cleaning things up to make my computer tolerable again. As for the predatory software, I feel “sold out” by Lenovo – they were probably able to “subsidize” the cost of my laptop hardware by letting 3rd-parties pay to install their programs on my machine. This is ridiculous!

As if this wasn't bad enough, there is literally no way to remove all of the crapware and return to the original unmarred operating system. Sure, I could try to remove everything in the "Add/Remove Programs" section of the Control Panel, but how could I be sure that the uninstallers actually removed everything? What if there are files and registry keys left behind? I don't trust any of these software vendors (many of whom I've never even heard of) to do what I requested and not just leave backdoors behind.

After being foiled in that attempt, I tried to reinstall Windows 7. Oh wait, I can’t. I did not receive a Windows 7 DVD. My only option is to use the “one key recovery” button. What does this do? Oh, it just reverts the computer back to its original state, invasive crapware and all. No thanks.

My solution? Without any other options, the only workable solution is to go out and buy a brand new copy of Windows 7. This really irks me, seeing that I just paid for a perfectly good copy of Windows 7, but I can’t use it because Lenovo’s crappy “solutions” are getting in the way. Honestly? This is criminal, especially since there is no way to get a refund for the preinstalled copy of Windows 7.

So, after purchasing a new copy of Windows 7 (which wasn't a total waste, since I really wanted Ultimate instead of Home Premium), was it easy to install? No, of course not. The hard drive has special partitions to hold the one-key backup (in lieu of shipping the installation media). Really? Nothing on the website ever said "500GB* hard drive, * where you can't use 25GB+ of the space because we're too lazy to ship you installation media for the software you just purchased". When did I give Lenovo permission to take some of the hard drive space that I just paid for and use it for their misguided attempts at software distribution?

After blowing away these partitions (Shift+F10 during installation and diskpart overrides to the rescue) I was able to pave the machine and successfully install Windows 7 Ultimate.

What happened to the days where customers could buy hardware without all of this extra crap? Extra things (e.g. O/S, software, etc) should cost extra, and consumers should have a choice to purchase as little or as much as they need.

Sadly, this approach is not limited to Lenovo – I’ve had a similar experience with Dell and have heard horror stories from other consumers regarding other laptop vendors.

Has anyone else had a similarly frustrating experience?


The Blog is Back

August 14th, 2010 matt Posted in Personal

Well, it's been awhile, but (hopefully) the blog is back. Why has the blog been dark for over a year?

I thought about it for awhile, trying to get to the real reason. The "no time" excuse got dismissed pretty quickly, since I had various small pockets of time that I could have used. Similarly, the "nothing to say" argument fades away quickly, since I've continued learning and exploring the new advances in technology that have happened over the last year.

What’s the real reason? It might sound lame, but…the barrier to writing a post wasn’t low anymore. Why? The general pace of technology advancement simply outstripped my current computing capacity.  More specifically, new advances with Windows 7 and Visual Studio 2010 raised the hardware requirement bar so high that my 3-year-old Windows XP dual-core laptop couldn’t keep up.

Trust me, I tried. I waited 16+ hours for the installation(s) to finish, but the end result was a way-too-sluggish machine. 2GB of RAM just doesn't cut it, and virtual memory off of a 5400rpm hard drive doesn't help much.

I guess this is to be expected – advances in software typically push the limits of available hardware (or is it the other way around?). This has always been evident in the high-end graphics card market.  I just found that the hardware requirement bar got pushed significantly faster these last few years due to Vista/Win7 and Visual Studio 2010.

So, the blog is back basically because I just got a new laptop :)


Great IoC and DI Articles

December 10th, 2009 matt Posted in Programming

I’ve been interested in learning more about Inversion of Control (IoC) and Dependency Injection (DI) containers for awhile now, so I decided to take a look.

My interest was piqued by an article by Mark Seemann from his upcoming book “DI in .NET”. This was a good introduction, but not very in-depth – I’m certainly looking forward to his book now. :)

After digging in more, I discovered a few more good articles (also on .NET Slackers) that explain IoC and DI using the Castle Project – "Inversion of Control and Dependency Injection with Castle Windsor Container" – check them out.

Very cool stuff.


F# Jobs on the Rise

June 19th, 2009 matt Posted in Programming

I’ve received 3 inquiries in the last 2 weeks as to my “job availability status”, all surrounding my experience with F# on my resume. I’m no longer looking for a job (thankfully) and thought I had removed all references to my resume from my website, Dice, Monster, etc.  I suppose there are still a few floating around somewhere.

I don’t know a lot of details about the positions these recruiters are trying to fill, but I did find out that one is in Columbus, OH and one is in Redmond, WA. In an effort to help out the F# community I figured I’d post about these jobs.  There’s nothing in it for me.

If you are interested, shoot me an email and I’ll send you the contact information for the folks who called/emailed/talked to me.

Here’s my info:

let email =
    [(5,"gmail"); (2,"."); (3,"valerio"); (6,"."); (1,"matt");
     (7,"com"); (4,"@")]
    |> List.sortBy fst
    |> List.map snd
    |> List.reduce (^)

:)


Hunting the Elusive ‘tail’ Opcode in F#

June 18th, 2009 matt Posted in Programming

Awhile back I wrote a post about the tail-call optimizations that the F# compiler uses to eliminate stack overflows. Brian McNamara commented about another optimization that I didn't illustrate – the 'tail' opcode that appears when mutually-recursive and indirectly-recursive functions are encountered. Tail-call optimization is one of the really powerful features of F#, so I really wanted to see how this works under the hood.

The first thing we need is a pair of mutually-recursive functions. The easiest (laziest? :)) way to get one is to write a function (sum1 below) and duplicate its implementation under another name (sum2):

// Hunting for tail calls
// 6.17.09

open System

let rec sum1 n acc =
    match n with
    | 0 -> acc
    | _ -> sum2 (n-1) (acc+n)

and sum2 n acc =
    match n with
    | 0 -> acc
    | _ -> sum1 (n-1) (acc+n)

let sum n = sum1 n 0

let main () =
    Console.WriteLine("Hello")
    printfn "%A" (sum 100000)
    Console.WriteLine("Press Enter to continue...")
    Console.ReadLine() |> ignore

main ()

After compiling this, I popped open the resulting executable in Reflector. Of course I wasn't going to find the 'tail' opcode by looking at the C# view – I needed to disassemble the IL. I hunted and hunted for the 'tail' opcode, but couldn't find it! Every call to sum2 from sum1 (and sum1 from sum2) used the stack to pass around n and acc. Even stranger – this was the first F# program I'd written using Visual Studio 2010, and I could have sworn that I'd done the same thing with F# in Visual Studio 2008 a couple of months earlier.

After building the project again, I noticed the command-line arguments passed to fsc:

------ Build started: Project: HuntingTailcalls, Configuration: Debug Any CPU ------

C:\Program Files\Microsoft F#\v4.0\fsc.exe -o:obj\Debug\HuntingTailcalls.exe -g --debug:full --noframework --define:DEBUG --define:TRACE --optimize- --tailcalls- -r:"C:\Program Files\Microsoft F#\v4.0\FSharp.Core.dll" -r:"C:\Program Files\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.0\mscorlib.dll" -r:"C:\Program Files\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.0\System.Core.dll" -r:"C:\Program Files\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.0\System.dll" --target:exe --warn:3 --warnaserror:76 --vserrors --utf8output --fullpaths --flaterrors Program.fs

"--tailcalls-"? Somehow tail calls are being turned off. Maybe there's something in the project settings that is disabling them? Ah-ha! The checkbox was unchecked :)

(screenshot: the "Generate tail calls" checkbox in the F# project properties)

After poking around a bit more, I discovered that the "Generate tail calls" box is unchecked by default for Debug mode, and checked by default for Release mode. Hmmm, interesting.

After switching to Release mode, I rebuilt the project and opened the .exe in Reflector. Here we go! There's the elusive 'tail' opcode:

.method public static int32 sum1(int32 n, int32 acc) cil managed
{
    .maxstack 5
    L_0000: ldarg.0
    L_0001: switch (L_0019)
    L_000a: nop
    L_000b: ldarg.0
    L_000c: ldc.i4.1
    L_000d: sub
    L_000e: ldarg.1
    L_000f: ldarg.0
    L_0010: add
    L_0011: tail
    L_0013: call int32 Program::sum2(int32, int32)
    L_0018: ret
    L_0019: ldarg.1
    L_001a: ret
}

(The IL code for sum2 looks identical.) Interestingly enough, the C# view in Reflector looks exactly the same between Debug and Release modes (with and without tail calls) – C# has no way to express tail calls.

Well, there you have it! We finally found the elusive ‘tail’ opcode.

That being said, be sure to keep this in mind – the default settings of Visual Studio 2010 for F# development are drastically different between Debug and Release mode. Bugs might crop up in Debug mode (e.g. StackOverflowExceptions) that don't rear their heads in Release mode.

I think the motivation for this is that tail calls severely limit the usefulness of the Visual Studio debugger, which relies on traversing the stack frames (which the tail opcode destroys) to display debugging information.

For example, without tailcall optimization, setting a breakpoint in sum1 looks like this:

(screenshots: the breakpoint in sum1, and the call stack showing every recursive frame)

The call stack shows some useful debugging information – specifically, the values of n and the accumulator at each level of the recursion.

However, if we enable tailcall optimization, this breaks down – after running through the breakpoint 10 times, the call stack shows only one line, with different information each time:

(screenshots: the call stack collapsed to a single frame, its contents changing on each hit)

… You get the idea.

Hope that sheds some light on tailcall optimization, as well as some of the new features of F# in Visual Studio 2010!


Hosting Subversion In the Cloud with Live Mesh

February 22nd, 2009 matt Posted in Utilities

This afternoon I was going back through some of the code I’d written for various blog posts that I’d kept in a Subversion repository.  During the move things have been in limbo and I haven’t had time to set up the SVN server again. I thought “Hmm, I wonder if I could host my Subversion repository in the cloud”.

Enter Live Mesh. It lets you add multiple devices to your mesh network and automatically synchronize your files between devices.  Pretty cool stuff.

At first I thought it would be a great place to put all of my source code — then I could have the code on every computer. However, what if you have working code on your desktop, then open up the code on your laptop and introduce a few bugs? When you next open the project on the desktop, those bugs are there automatically. The synchronization is great, but there's no way to keep version information in case you want to revert to a previous snapshot of the files.

Anyone familiar with Subversion will remember that there are two main parts to an installation — the Subversion repository (which could be local or remote) and another folder with the checked-out files. A not-uncommon setup for personal development work is to have a local Subversion repository as a directory on the local file system. What would happen if I slapped a local repository into a Live Mesh folder? Well, it would get automatically synchronized between machines. Every device in the network would have an SVN client installed (e.g. TortoiseSVN) and pointed at the Mesh-synchronized folder. I think this just might work :)

For anyone interested, here are the steps that I followed to set this up.

Open up your Live Mesh Folders from My Computer.

Set up a new folder named "Subversion". Make sure that all of the devices in your Live Mesh network are set to "When files are added or modified" in the synchronization options.

Browse over to the location (Subversion folder on my desktop), open it up, and create another folder inside called “Repository”.

Then, right-click on the Repository folder -> TortoiseSVN -> Create repository here. Note that the extra "Repository" directory matters: you can't create the repository directly in the "Subversion" folder one level up, because some interaction between TortoiseSVN and Live Mesh keeps it from working.

Then, go back to the desktop (or any other place) and make a folder called “Checkout”.  Right-click on the Checkout folder and select “SVN Checkout…”


Make sure that the “URL of repository” field is pointed towards the “Subversion/Repository” directory and the “Checkout directory” field is pointed towards the “Checkout\Repository” directory and click OK.

There you go — use the checked-out working copy as you wish, and just point the SVN client on each device at the Mesh-synchronized folder.
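
For anyone who prefers a command line to the TortoiseSVN menus, the same setup with the stock Subversion tools looks roughly like this (the paths are hypothetical stand-ins for your own Mesh and checkout folders – I've kept them free of spaces, since spaces in file:// URLs need escaping):

    svnadmin create C:\Mesh\Subversion\Repository
    svn checkout file:///C:/Mesh/Subversion/Repository C:\Checkout\Repository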

Hope someone finds this useful! :)

UPDATE: Looks like I’m not the first to think of doing this :)
