Calling MySQL Stored Procedures on AppHarbor MySQL Sequelizer

Yesterday I moved a web application from my home machine to AppHarbor. I’ve been meaning to do this for a long time now, but finally got around to it on a rainy Fourth of July.

The application runs on ASP .NET Web Forms with MySQL as the database. Since there's a nice free MySQL add-on for AppHarbor (which is itself free), I figured all I'd need to do was move the app and data to the server and be done.

Well, I moved the app and got it working, and then I moved all the data – everything was pretty easy up to this point. However, I ran into one snag with running stored procedures. On my machine it works fine, because I gave the correct permissions to the database user the application uses. Within the AppHarbor environment, however, I kept getting the following error:

SELECT command denied to user 'username' for table 'proc'

It turns out that the MySQL .NET Connector selects from the mysql.proc table in order to determine the stored procedure’s parameters. I don’t exactly know why it needs to do this, but apparently there’s no way around it. I trust that this is a “feature” of the library.

I could get rid of all the stored procedures, which would mean moving to an ORM or inline SQL and rewriting all the data access code. Fine, but I'm wary of moving to an ORM. Who knows what permissions that thing needs to run its jumbled SQL hairballs? It might work; I don't know. And inline SQL is just a pain, so I'd rather get those stored procedures working. I noodled on this problem for a bit.

I came up with another option. This involves building a little bit of inline SQL which calls a stored procedure with the correct parameters and returns the result. This allows the command to be a “Text” type command rather than a “Stored Procedure” type command, effectively skipping the query to the ‘proc’ table.

Since all my stored procedure calls were going through one or two methods in my data access layer, all I had to do was write a quick shim that those methods call which helps convert a stored procedure command to the equivalent SQL. Something like this:

private String GetSPCallText(String spName, MySqlParameter[] parameters) {
  // Put the stored procedure name in a CALL statement. This opens a paren.
  String spCall = "CALL " + spName + "(";

  // The command should be passed the same parameters given to this method;
  // the SQL generated here references those parameters by name, passing
  // them through to the stored procedure. Parameter order matters!
  foreach (MySqlParameter param in parameters) {
    // Reference each parameter in the statement, making sure the reference
    // begins with exactly one "@". Throw a comma after it (the trailing
    // comma will be trimmed later).
    spCall += "@" + param.ParameterName.TrimStart('@') + ",";
  }

  // Trim that trailing comma.
  spCall = spCall.TrimEnd(',');

  // Close the paren of the CALL statement.
  spCall += ");";

  // Return the command text.
  return spCall;
}

If I want to run a stored procedure, my data access layer doesn’t create a stored procedure command any more. It creates a text command by calling GetSPCallText and passes the parameters to the command as it normally would.

It works like a charm! I expected to run into some issues, but have run into zero so far. The only thing to note with this is that the order of the parameters passed to this method matters. Unfortunately, MySQL doesn’t support named parameter passing to stored procedures using the CALL statement, otherwise the GetSPCallText method could be written such that parameter order doesn’t matter. Oh well, this works.
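To make the generated text concrete, here's the same CALL-building logic sketched in JavaScript (the function and procedure names here are hypothetical, purely for illustration):

```javascript
// Build a "CALL spName(@p1,@p2,...);" statement from a procedure name and a
// list of parameter names. As with the C# shim, the parameters themselves are
// still bound to the command separately; only their order matters here.
function getSpCallText(spName, paramNames) {
  var placeholders = paramNames.map(function(name) {
    // Ensure each reference starts with exactly one "@".
    return "@" + name.replace(/^@/, "");
  });
  return "CALL " + spName + "(" + placeholders.join(",") + ");";
}

// Example: getSpCallText("GetUserOrders", ["@userId", "status"])
// produces "CALL GetUserOrders(@userId,@status);"
```

Given a procedure name and the parameter names, it emits exactly the kind of text command the shim above produces.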

Seems a bit like a hack, but isn’t that what programming is?

How to Love Your Job

[Photo: Brandon at the office]

Where I work is special. I love it. It’s challenging, and there are definitely moments when I don’t love it. Overall though, I love it. Here’s why.

I get to build software every day all day long. This is my passion. It’s what I do on the weekends when nobody is even paying me to do it. The fact that I get paid to do this is pretty cool.

I get to wear jeans and a t-shirt if I want. We aren't into regulations and restrictions here. We provide just enough structure to give the organization a backbone and keep everyone rowing in the same direction. We use Agile methodology, for instance. But we know that too much structure hinders creativity and individuality, and ultimately, great ideas. It also drives away the people who have those good, but different, ideas. We need those people helping us build the next thing. That's why we have a casual atmosphere.

We have dedicated “hack” days where we get to work on fun projects outside of our regular plans. Spontaneous Nerf gun wars have been known to break out in the office. A remote-controlled, Airsoft-gun-equipped tank rolls around every once in a while, sending people ducking for cover. These things don't happen every day – that would be entirely too distracting. But they happen enough to keep the feel of the place light and fun.

I could go on and on about the work environment and how cool it is. But I think true satisfaction in what you do is not necessarily about liking the work you do. I think if you’re helping people, you will be truly satisfied with what you do. You have to do something which makes society better in some way.

It just so happens that I'm deeply involved in building software which makes people's lives better. Many of our customers are executives at universities, and most of them work in IT. All of them are constantly battling to show that they and their IT departments provide value to the university. This is a hard thing to do: most people see IT as a cost center, not a value-producing one. Our software helps those IT leaders justify what they and their departments do, and ultimately that helps people become more educated – something everyone agrees moves society forward.

I've developed relationships with several of our customers in IT leadership positions, and we've helped them better organize their projects and teams, align themselves with the goals of the university, and show their true value to the rest of the institution. It's rewarding to realize you're building software that helps people who do good work do better work – and prove to everyone around them, every day, that the work they do is good.

The other thing that’s important is the people you work with. I’m very fortunate to work with some of the smartest people I know. We pride ourselves on hiring only the best people, so I know that as we grow, the team will always be good to work with. I spend half my life with these people, and I’m OK with that. That’s saying something.

I felt like sharing this especially because we are growing right now and we need good software developers! If you are interested, send me a DM on Twitter – my handle is @bmonty.

Making some simple jQuery code more efficient

Here’s a little snippet I see all the time in our code, and it bugs me.

$(document).ready(function() {
  var offset = $("#tblReportCriteria").height() + 2;
  $("#divContent").height($(window).height() - offset);
  $(window).resize(function() {
    $("#divContent").height($(window).height() - offset);
  });
});

What’s wrong, you say? Lots!

1. We typed too much.

When you pass a function to the jQuery $ function, it automatically passes it to $(document).ready(). Do this instead:

$(function() {
  /* body */
});

It’s less typing, and more readable.

2. Repeated code.

We are setting the height of divContent the same way from two different spots. Refactor.

$(function() {
  var offset = $("#tblReportCriteria").height() + 2;
  var resize = function() {
    $("#divContent").height($(window).height() - offset);
  };

  resize();
  $(window).resize(resize);
});

Don’t repeat yourself (DRY).

3. Repeated running of jQuery selector.

We are finding divContent in the DOM every time a resize event fires. If you’ve ever watched how often the resize event is called while a user resizes the window, this will alarm you. Running a jQuery selector that many times is sure to be sluggish. Cache the selector instead:

$(function() {
  var offset = $("#tblReportCriteria").height() + 2;
  var divContent = $("#divContent");

  var resize = function() {
    divContent.height($(window).height() - offset);
  };

  resize();
  $(window).resize(resize);
});

Granted, an ID selector is the fastest selector you can run, since it essentially calls through to document.getElementById – but caching the reference still performs better than looking it up every time the event fires.

Ahhhhhh, that’s better.

That’s all. Simple changes, but now we have clean, DRY, efficient code.
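One optional step further, since resize fires so rapidly: you can debounce the handler so the height calculation only runs once the user pauses resizing. This is a generic debounce helper, not part of the original snippet:

```javascript
// Run fn only after `wait` ms have passed with no new calls. Each call
// cancels the previously scheduled run, so a burst of events collapses
// into a single invocation after the burst ends.
function debounce(fn, wait) {
  var timer = null;
  return function() {
    var args = arguments, self = this;
    clearTimeout(timer);
    timer = setTimeout(function() {
      fn.apply(self, args);
    }, wait);
  };
}
```

Hooking it up would look like $(window).resize(debounce(resize, 100)); – the trade-off is that the content height lags slightly behind the drag.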

Add some Syntactical Sugar to your CoffeeScript

I was reading a blog post yesterday which had some bits of CoffeeScript in it which I didn’t recognize. I [unfortunately] don’t write CoffeeScript every day, so I was delightfully surprised to discover a couple of things that the CoffeeScript syntax provides which can make my JavaScript writing life a whole lot easier.

I recently built a little templating engine in CoffeeScript, and I built a separate class to help me step through a string while I parse the contents of a template. It's just a little StringReader which provides some convenience methods for stepping through a string. I'll step through how these features I just discovered help me write better, more concise code for my StringReader class. You'll see CoffeeScript compared to the equivalent JavaScript.

@

If you're familiar with Ruby, an @ at the beginning of a variable name makes it a member of the instance rather than just a variable scoped to the method. CoffeeScript also supports this syntax: @foo compiles to this.foo. You can even use it in method signatures, which saves even more typing. In fact, when I came across this in the blog post I was reading, I thought I was looking at Ruby code at first because of the presence of @ characters all over the place.

CS

StringReader =
  createStringReader: (@input) ->
    @currentIndex = 0
    @

The @ looks a little lonely on the last line, and seems pretty strange as far as syntax goes. But this is here because CoffeeScript will return the result of the last statement within a function, and in this case (no pun intended), I want to return this.

JS

var StringReader;
StringReader = {
  createStringReader: function(input) {
    this.input = input;
    this.currentIndex = 0;
    return this;
  }
};

String Interpolation

The next thing I noticed which is really, really nice is the fact that CoffeeScript supports string interpolation. If you use double-quotes for a string literal, you can put #{variable} constructs inside the string, and CoffeeScript will replace those references with the contents of the variables referenced. Again, this looks like Ruby.

CS

currentToken = reader.readUntil ' '
console.log "current token is: '#{currentToken}'"

JS

var currentToken = reader.readUntil(' ');
console.log("current token is: '" + currentToken + "'");

I love this one because it’s such a time saver. Typing all those single and double quotes, and making sure to escape them and concatenate them correctly makes me think a little too hard for just building a plain old string. I’d rather focus on more important problems.
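As an aside, readUntil never actually appears in these snippets. Here's a minimal plain-JavaScript sketch of what my StringReader's readUntil might look like – the exact semantics (read up to, but not including, the delimiter, then skip past it) are my assumption for illustration:

```javascript
// A tiny StringReader: steps through `input`, tracking currentIndex.
// readUntil returns everything up to (not including) the delimiter and
// leaves currentIndex positioned just past it.
function createStringReader(input) {
  return {
    input: input,
    currentIndex: 0,
    readUntil: function(delimiter) {
      var end = this.input.indexOf(delimiter, this.currentIndex);
      if (end === -1) end = this.input.length;       // no delimiter: read to end
      var token = this.input.slice(this.currentIndex, end);
      this.currentIndex = end + delimiter.length;    // skip past the delimiter
      return token;
    }
  };
}
```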

Solving the Browser Caching Problem in ASP .NET

Last week we rolled out a major release of our web app. We’ve had to put out fires quite a bit, but all in all, it hasn’t been too bad. One of the things we had to deal with at the last minute before the release was the fact that users’ browsers were caching the old versions of Javascript files and CSS style sheets. This was frustrating, because users don’t necessarily know that they need to clear their temporary internet files in order for our application to work correctly.

This is so frustrating to me! I cannot believe that there isn’t better handling for this built into ASP .NET. I also can’t believe that after a few Google searches, I came up short on ASP .NET solutions to what must be a very common problem.

Rails Goodness

Since I’ve dabbled in Rails, I know that this code in a Rails view will write out the appropriate style sheet include HTML:

<%= stylesheet_link_tag "main" %>

Results in the following HTML:

<link href="/stylesheets/main.css?1230601161" media="screen" rel="stylesheet" />

What’s that number in the URL? That’s how Rails handles the browser caching situation.

Since browsers cache static resources based on the URL used to retrieve them, Rails appends the file’s modified date as a query string on the file’s URL. This “tricks” the browser into thinking it needs a totally different file, which it kind of does (sometimes). Useful! And what’s great is, Rails developers don’t have to worry about this at all, or configure it to work that way – this is OOB Rails behavior.

FYI, this is Rails’s old strategy. Newer versions of Rails do something called “Fingerprinting,” and it’s pretty cool. Fingerprinting actually changes the name of the file in the URL, rather than simply appending a query string to the URL. It only changes the name if the contents change, rather than if the modified date on the file changes. It’s definitely a better strategy. The resulting HTML looks more like this:

<link href="/stylesheets/main-908e25f4bf641868d8683022a5b62f54.css" media="screen" rel="stylesheet" />

This way, when you roll out the application and the modified date changes on all the files, only the files whose content has changed will have their URLs changed by Rails when the stylesheet_link_tag helper is used.

A Solution for ASP .NET

I decided to replicate the “modified date” strategy in ASP .NET, since it’s simpler to do, and still works pretty well.

Our ASPX pages reference these files like this:

<link rel="stylesheet" href="../Styles/main.min.css" />
<script src="../Scripts/script.min.js"></script>

Since I’m trying to make ASP .NET work like Rails here, I want to make it so that developers can use this code:

<%=IncludeStylesheet("../Styles/main.min.css") %>
<%=IncludeJavaScript("../Scripts/script.min.js") %>

This will then output the correct HTML with the right URLs to make sure we adjust for browser caching. Here's what I came up with (excuse my language…cough…VB):

  Public Shared Function IncludeStylesheet(filePath As String) As String

    Return String.Format(
      "<link href=""{0}"" rel=""stylesheet"" type=""text/css"" />", 
      AppendLastModifiedDateString(filePath))

  End Function

  Public Shared Function IncludeJavaScript(filePath As String) As String

    Return String.Format(
      "<script src=""{0}"" type=""text/javascript""></script>", 
      AppendLastModifiedDateString(filePath))

  End Function

  Public Shared Function AppendLastModifiedDateString(filePath As String) As String

    Dim lastModified As String = GetLastModifiedDateString(filePath)
    If Not String.IsNullOrWhiteSpace(lastModified) Then
      filePath = filePath + "?v=" + lastModified
    End If

    Return filePath

  End Function

  Private Shared Function GetLastModifiedDateString(filePath As String) As String

    Dim fileInfo As New IO.FileInfo(System.Web.HttpContext.Current.Server.MapPath(filePath))
    If fileInfo.Exists Then
      Return String.Format("{0:yyyyMMddHHmmss}", fileInfo.LastWriteTimeUtc)
    End If

    Return String.Empty

  End Function

The IncludeJavaScript and IncludeStylesheet methods simply call through to AppendLastModifiedDateString, then format the URL into the proper HTML string. AppendLastModifiedDateString calls GetLastModifiedDateString to get a formatted modified date for the file in question and appends it to the file's URL as a query string. GetLastModifiedDateString uses Server.MapPath to find the file on disk based on the URL, then reads the file's modified date.

All this is very fast, and gives us the results we need in our HTML responses. I also added this nice little method, which makes this even easier on developers.

  Public Shared Function Include(ParamArray filePaths As String()) As String

    Dim includes As New StringBuilder()

    For Each filePath As String In filePaths

      If Not String.IsNullOrWhiteSpace(filePath) Then

        'determine if it's JS or CSS
        Dim extension As String = IO.Path.GetExtension(filePath)

        If String.Equals(extension, ".css", StringComparison.OrdinalIgnoreCase) Then
          includes.AppendLine(IncludeStylesheet(filePath))
        ElseIf String.Equals(extension, ".js", StringComparison.OrdinalIgnoreCase) Then
          includes.AppendLine(IncludeJavaScript(filePath))
        End If

      End If

    Next

    Return includes.ToString()

  End Function

Now developers can write this code in their ASPX pages, and the helper will automatically detect the file type, and they don’t need multiple server tags – they can do it in one fell swoop:

<%=Include("../Styles/main.min.css", "../Scripts/script.min.js") %>

It’s all wicked fast, too. Now I just need to run a Regex replace on the entire directory to get those link and script tags out of our ASPX pages. :)

Next is to figure out how to do the MD5 hashing version…

CoffeeScript in ASP .NET Revisited

In my previous post about CoffeeScript in ASP .NET, I basically concluded that I was ambivalent about whether it's actually useful in the ASP .NET world. I focused on the fact that CS lets you type less code and get [mostly] the same results.

In actuality, CoffeeScript does a lot for you behind the scenes to make sure you’re a good JavaScript developer. Josh Harrison pointed this out to me a few weeks ago, and since then I’ve formed a…more educated opinion.

If you've ever looked at JavaScript best practices, there are a lot of them, and there are a lot of nuances you must understand about the language to really make sure your code is clean, safe, and just plain rainbows-and-sunshine good. That includes not only a good understanding of variable scoping, value comparisons, type checking, etc., but also knowing where all the pitfalls are.

Well, if you’re using CoffeeScript, it takes care of a lot of those things for you. Try CoffeeScript, write some quick bits of code in CoffeeScript, and see what it does.

What is CoffeeScript?

I won’t spend too much time on this. CoffeeScript is a language which converts to JavaScript when compiled. Its syntax is more concise and Ruby-like, and I think that’s pretty cool. But the compiler adds the value I will talk about in this post.

Variable Scope

The first thing I want to point out is that when you’re writing plain old JavaScript, the variables you declare are public and global by default. This means that if you have the following code:

var foo = 'bar';

Anyone can access this variable outside of the context of your JavaScript file, because foo is scoped globally:

alert(window.foo); // alerts 'bar'

Sometimes you want this, but most of the time it's a good idea to keep your variables within your own closure. Closures let you control access to variables, and remove the possibility that some other JavaScript file might use the same variable name by accident. Good news, though: CoffeeScript does this for you! Check it out:

foo = 'bar'

The above CoffeeScript code compiles to this bit of JavaScript:

(function() {
  var foo;
  foo = 'bar';
}).call(this);

That right there is whatcha call a good ole self-executing anonymous function – an immediately-invoked function expression, or IIFE. (If you don't know what that is, you should look it up.)

No More Var!

You may have noticed that there is not a var keyword in the CoffeeScript snippet above. Isn’t that brilliant?! I think so. CoffeeScript is just smart enough to figure out what variable you are using. Let’s run through a basic scenario.

(function() {
  var foo = 'bar';
  var fn2 = function() {
    var foo = 'baz';
    alert(foo);
  };
  fn2();
  alert(foo);
})();

This JS will first alert the value ‘baz’, then it will alert the value ‘bar’. This is because the first var foo is scoped to the outermost closure. The next var foo statement actually creates a new variable which is scoped specifically to the fn2 function, therefore using the foo variable within that function does not affect the outer closure’s foo variable at all. CoffeeScript doesn’t really let this happen. Here’s what the CS would look like if we tried to do this:

foo = 'bar'
fn2 = ->
  foo = 'baz'
  alert foo
fn2()
alert foo

And this compiles to the following JS:

(function() {
  var fn2, foo;
  foo = 'bar';
  fn2 = function() {
    foo = 'baz';
    return alert(foo);
  };
  fn2();
  alert(foo);
}).call(this);

The key thing to notice here is that there is only one declaration of the variable foo, and it is in the outermost closure. This means that the fn2 function will reference that variable, rather than its own foo variable. So this script will alert ‘baz’ twice.

Strict Comparisons

One of the nuances of JavaScript is the difference between == and ===. Examples of these differences are all over the ‘net, but to sum it up, == will try to coerce variables to be of the same type to perform comparison, while === will not. So, alert(1 == '1'); will alert ‘true’, but alert(1 === '1'); will alert ‘false’. The safest way to do comparison is to always use the === operator (or !==). In CoffeeScript, you use is and isnt to do comparisons. Here’s some CS:

foo = '1'
bar = 1
alert foo is bar
alert foo isnt bar
alert foo == bar

This compiles to the following JS:

(function() {
  var bar, foo;
  foo = '1';
  bar = 1;
  alert(foo === bar);
  alert(foo !== bar);
  alert(foo === bar);
}).call(this);

Whoa, CoffeeScript even takes my == and turns it into a === operator! Pretty cool. I like to use the more readable is and isnt operators though.

Now, if you're not used to using === and !==, you'll hit a little frustration at first, since you're used to letting JavaScript figure this stuff out for you. But let me tell you, it is much better to get into the habit of using === and !== because it will save you much pain in the long run. To quote Jeremy Ashkenas here,

If you want to compare numbers as strings, convert them to strings … and if you want to compare strings as numbers, parse them as numbers.

JSLint Compliance

If you're like me, whenever you write JavaScript files, you run your script through JSLint before letting it go into production. CoffeeScript automatically compiles to JSLint-compliant JavaScript, which is wonderful. You can trust that the generated JavaScript is clean and adheres to best practices.

ASP .NET Application

Since I've shown some pretty strong business cases for using CoffeeScript to write and compile JavaScript, I suppose I should tell you how to use CoffeeScript from within Visual Studio. Ideally, I'd want built-in support for CoffeeScript in Visual Studio, including syntax highlighting, and I'd want it to compile to JavaScript and minify it automatically, without a lot of manual steps on my part. Scott Hanselman has an excellent post about Sass and CoffeeScript, and he links to the (free) Mindscape Web Workbench, which adds CoffeeScript (and Sass) support to Visual Studio. It gives you syntax highlighting. Pretty awesome. The only sucky thing about it is that the free version does not minify your JavaScript (or CSS files) automatically. I understand they're trying to make money off this product, but come on, we're developers – we'll find a way around it. Either way, it's a pretty cool addition to Visual Studio.

Another Business Case for DVCS

Outsourced Development

We work with a client that usually outsources its development work to consulting companies (us, or otherwise). They have existing code bases which they dole out to consultants when they need some changes done. What I’ve noticed is that the source control workflow looks like this (from the consultants’ perspective):

  1. Get a copy of the source and the database for the application.
  2. Add that source to your own source control server.
  3. Spin up a new database on your own development servers.
  4. Work. Make commits to your own source control server.
  5. Finish. Demo, acceptance test, etc. Get the client to pay up.
  6. Deliver the finished product in some way to the client.

The client then takes the finished product and adds it to their own source control. This makes sense. It’s a good idea to own your own source code.

The Problem

Here's the problem, and you've probably already spotted it. Typically, when the code is delivered to the client, the new code is put into the client's source control server in a single commit. That commit is a large, sweeping change touching hundreds of files – renaming, removing, and adding files all over the place. It may represent hundreds of commits by the consultants, all mashed into one horrendous commit.

If you looked at the diff of a particular file for that commit, you'd probably see the left pane highlighted entirely in red and the right pane entirely in green. Which basically means that so many changes were made to the file that the diff utility can't describe them more efficiently than “this person deleted everything in version N, and added this text and committed version N+1.” Which is not really what happened. What really happened was that there were about 20 commits to the file, but those got collapsed into one commit when the client took delivery of the source and committed it to their own source control system.

The result is that going forward, using the client's code base, nobody (not even the consultants who did the work) can figure out why certain changes were made. This ends up costing both the client and the consultancy time, and therefore money. Both maintenance and enhancement of the product become very difficult. I've realized this first-hand when we've tried to fix issues with their software, and all we can see is that massive commit, which gives us no context whatsoever.

DVCS Can Help

If a DVCS were used by both the client's and the consultants' teams, the consulting team could just push the changes from their repository into the client's repository when everything was finished, and all the changes would be there in the client's repository, giving them rich historical context for the changes the consultants made to their code. This is a much better way of doing things from the client's perspective, and aside from learning to use a DVCS (which is pretty fun anyway), there is no downside to this approach from the consulting side either.
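Sketched with Git (repository and commit names here are hypothetical), the handoff is just a push that carries the full history. This toy script stands up a “client” bare repository, makes consultant commits against a clone, and pushes them back:

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"

# The client's repository (bare, as if hosted on their server).
git init -q --bare client.git

# The consultants clone it and make several small, well-described commits.
git clone -q "$workdir/client.git" consultant
cd consultant
git -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "Add login feature"
git -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "Fix report totals"

# Delivery is just a push: every individual commit, with its message and
# diff, lands in the client's repository intact.
git push -q origin HEAD:refs/heads/main
git --git-dir=../client.git log --oneline main
```

The client ends up with the consultants' full commit history instead of one monolithic delivery commit.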

I haven’t tried this, but I have used DVCS’s, and this workflow seems like a good idea. I wonder if there are any organizations out there using this type of workflow?

CoffeeScript in ASP .NET – Useful?

I like JavaScript. Actually, I like scripting languages in general, but I’m most comfortable in JavaScript. I suppose this is natural since I’m mainly a web programmer. JavaScript is my go-to language for whipping up something simple quickly, like if I’m at my parents’ house during a holiday and I’m bored. I fire up Notepad and hack up some HTML+JS+jQuery code until I come up with something mildly entertaining to show my little brother.

Anyway, in my JavaScript researching, I came across CoffeeScript. If you’re not familiar, CoffeeScript is just JavaScript, but uses a different syntax. Yay! A new toy to try! I like new toys.

After taking a look at the very helpful CoffeeScript home page (http://coffeescript.org/), CoffeeScript reminds me a little of Ruby. Not a lot of curly braces, and no semicolons to speak of. Operators like &&, ||, and === are more readable and less distracting because they're “and”, “or” and “is” in CoffeeScript. A nice, clean look. Cool.

From what I gather, CoffeeScript is generally used in the context of Node.js, which, in itself, is quite fun to play with. But I’m a web programmer. How could I use this new stuff in my every day life? Since, when compiled, CoffeeScript just turns into JavaScript code, I realize that it’s just a different way of writing JavaScript. But to really appreciate any efficiency improvements that could possibly be gained, I decided to cut over one of my simple jQuery plugins to CoffeeScript.

I'm mainly an ASP .NET guy, and I was wondering if I could use CoffeeScript in that context. Is there a module that will generate JavaScript from a CoffeeScript file for use within the browser? My colleague Travis Smith pointed out that Paul Betts has a project on GitHub called SassAndCoffee, built for the specific purpose of equipping an ASP .NET application to compile CoffeeScript and Sass files into their browser-understandable equivalents, JavaScript and CSS. After a few minutes, I had a VS 2010 WebForms project serving me a JavaScript file compiled from a CoffeeScript file I'd added to the project. Piece of cake. Mr. Betts has done a nice job of making this easy with NuGet.

All in all, it was a fun little project. I had objects, functions, anonymous functions, variables, loops, and conditionals all down after a little while. The syntax is nice and straightforward, without a lot of symbols getting in the way (like &&, ||, a === b ? ‘yes’ : ‘no’, etc.). Again, it reminded me of Ruby. It was pleasant to have something that was a bit more concise and easier to read, and still get the same result. The original JavaScript file was 117 lines with 2,806 characters. The equivalent CoffeeScript file turned out to be 108 lines with 2,624 characters. I saved 182 characters. That’s a reduction of 6.5% in the number of characters, producing the same result. Pretty cool.

It’s also much easier to read and understand from the outside. Things read like plain English, in most cases. Well, not like plain English, but more like plain English.

I did notice a few gotchas, especially with the fact that white space and indentation actually mean something in CoffeeScript, but I imagine you just get used to that, like when you code in Python (you do code in Python sometimes, right?). It did feel a little wrong when I had to put a comma on the next line down because the compiler wasn't spitting out the JavaScript I intended. Overall, I felt like the code was cleaner and easier to write and read. The only other thing was that Visual Studio doesn't support syntax highlighting for CoffeeScript files. Easily solved with a little digging around – I threw the code into Vim and configured it to support CoffeeScript syntax highlighting (you do code in Vim sometimes, right?).

I successfully created an ASP .NET application that is able to compile CoffeeScript files to serve JavaScript to client browsers, and the SassAndCoffee module will even compress the JavaScript for me! Well, that was fun. Now I have to get serious and consider this from a business standpoint:

Overall Analysis

Benefits of CoffeeScript:

  • Fewer lines and characters of code
  • Fewer symbols within the code
  • Easier to read and understand
  • Prettier

Drawbacks of CoffeeScript:

  • Whitespace is meaningful, causing some frustrations in certain cases
  • No native support for syntax highlighting in Visual Studio
  • Initial compile time required for the first time the file is served

It’s not going to perform better. Actually, if anything, it will reduce performance a bit.

It’s not going to reduce maintenance cost. People will have to do some learning of the CoffeeScript syntax in order to maintain the code. Yes, the code might be easier to read and understand, and might be easier on the eyes, but any benefit from that will be lost in learning time. Plus, most of the syntax is very much the same.

It’s not going to [significantly] increase development productivity. We have so much JavaScript currently that developers will be switching in and out of CoffeeScript and JavaScript modes, and they’re so similar that they will type those semicolons and other symbols anyway until they get used to the CoffeeScript syntax.

Summary

The benefits of CoffeeScript are purely cosmetic. If it doesn’t get us any real benefit, why would we adopt a technology that requires another third-party dependency?

Well, it’s fun. I’m not going to push this at our organization. In fact, I’d probably lean toward us NOT using it unless it can be proven that it actually gets us something. I wouldn’t push too hard against it, though, because the impact is small either way. For personal projects, I’ll use it, because I like playing with new toys.

For the Sake of Argument

I dislike confrontation. Some people love it and cause it on purpose, just because they love it. Maybe they do it just because they want to feel something…anything. Just like in the movies.

It’s not that I’m afraid of confrontation, but it just takes a lot of effort to explain the thoughts in my head in a way that others will understand. It’s just frustrating sometimes, that’s all.

I’ve learned that arguments often produce the best results. My colleagues and I have discussions (which turn into arguments) all the time about the best way to design software. Sometimes we’re coming from completely different viewpoints altogether, and sometimes our ideas are similar, but different enough to cause heated arguments. We’re all smart, analytical, opinionated, and semi-narcissistic. Sometimes the argument ends for the day and nothing is decided. But when each of us acknowledges the others’ perspectives and gets some time to think about it on our own, we often come up with the best answer to the problem at hand. I sometimes get so angry that I can’t think any more – it distracts me from being “actually productive”, and sometimes I just feel like quitting and fleeing with my laptop and my ideals, never returning, happily making software the way I want to, where nobody else can screw it up any more.

I just have to remember that the argument, though it can be frustrating, is healthy, and usually produces the best results.

The grass is greener on the DVCS side of the fence

A situation in which a DVCS could help us out a lot: Adam’s work on re-architecting and reorganizing our domain layer.

First, a little background. We use Subversion, a CVCS (centralized version control system), at work. I geek out a little (a lot) on source control. I use DVCS’s (distributed version control systems – both Git and Mercurial) for personal projects, but I’m still using Subversion at work.

Back to the re-architecture project. As you can tell, Adam’s making some seriously disruptive code changes. They’re necessary, but they’re disruptive. He’s been checking in all his changes to our [Subversion] development branch, where everyone else is also doing development on their own features. We’re three weeks into the project.

I was thinking today: what would happen if we said, “Hey, this re-architecture project is not worth the risk. Let’s drop it for now and come back to it later.” Or maybe, “We’re going about this the wrong way. We need to basically undo what we did and start over.”

The entire development team would probably simultaneously crap our pants and look at each other with embarrassed looks on our faces. We didn’t think of that. Now what do we do? Go through and pick out Adam’s re-architecture changes one by one and undo them? Or maybe start from the version before he began committing his re-architecting changes and apply all the non-re-architecting changes back in one by one? Ugh. Every possible approach sounds awfully tedious and extremely error-prone. I don’t trust him to be able to do that safely – do you? Of course not; you don’t even know him. We’ve comprehensively backed ourselves into a corner. We can get out, yes, but not without a lot of work and pain.

Hindsight is 20/20

What would’ve been better is for him to create a “feature branch” for the re-architecture work, and work with that branch until his re-architecting work was complete. Then – and only then – he would merge his changes into the main development branch. This way, if we decided to abort that re-architecture project, he can just throw away the branch (or just stop working on it for now), and go back to using the main development branch, which has been kept clean from all those risky re-architecting changes.

Let’s Think this Through

Let’s say we created that nice feature branch for him to work with. He goes along, quietly humming to himself, happily pulling out the proverbial rug from beneath the application code that relies on it, and fixing all the build errors (or not) and committing changes…all in a nice, isolated feature branch. But then how does he ensure a successful merge if and when we complete that re-architecture project? He’s now made major changes to most of our core objects, and others are referencing those objects and even making changes to the very same objects. If after 6 weeks he decides to try to check in all his changes, it’s going to be a Merge From Hell, as you can imagine.

The ideal case is that Adam works in his feature branch, and continually (every morning perhaps) merges changes from the development branch into his feature branch. This way he can make sure he resolves any conflicts soon after they are committed, keeping his feature branch in a state that allows him to merge his branch into the main development branch at just about any time with minimal effort. This ensures that he makes the right decisions when resolving those sometimes-nasty manual merges. He can confer with the necessary developers to help resolve any conflicts while he and that developer both have those pieces of code fresh in their minds. Not to mention, they have time to fix the merge conflict right because they’re not up against a deadline.
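In Mercurial, that daily routine might look something like the sketch below (the branch name `re-architect` and the use of Mercurial named branches are my assumptions for illustration; commands are shown as a transcript, not a runnable script):

```
hg update re-architect                        # switch to the feature branch
hg pull                                       # fetch the latest changesets from the central repo
hg merge default                              # merge the main development branch in
                                              # ...resolve any conflicts here...
hg commit -m "Merge default into re-architect"
```

Doing this every morning keeps each merge small, so conflicts are resolved while the relevant code is still fresh in everyone’s minds.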

This method also avoids the aforementioned Merge From Hell which could easily take several long, tedious, mind-boggling 12-hour days to complete. And any time you have a tired, frustrated developer merging changes which involve others’ work he’s not sure about, you’re bound to run into problems. You might not end up with build errors, mind you, but you will almost certainly have issues with behavior of the system. Sometimes that button that says “Yes, overwrite the other developer’s changes with my current working copy” looks tempting when you’ve been staring at diffs and merges all day, it’s 11pm, and your sole source of nourishment today (or lack thereof) has been coffee.

So, About the Greener Grass on the Other Side?

What does this have to do with DVCS vs. CVCS, you say? Well, it hasn’t so far – until now. In order to work in a feature branch in Subversion, Adam would need to make a branch of the entire code base, which itself takes a while. Then, those daily merges need to happen. Those merges will incur about the same overhead in both version control systems (VCS’s), I would say. That Final Merge – you know, the one where the feature is complete and ready to be merged into the development branch – is not handled so well by Subversion. Subversion “remembers” which changes were already merged into a branch using svn:mergeinfo property settings on folders. It’s cluttered and messy, and sometimes just plain doesn’t work. I’ve had problems with these properties in the past where Subversion won’t even let me commit the changes it made to the svn:mergeinfo properties due to some sort of corruption issue, so I regrettably had to remove them by hand. In any case, it’s weird and unreliable, and causes much weeping and gnashing of teeth.
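You can inspect that bookkeeping yourself from the root of any Subversion branch that has received merges (the output shown is just a typical example, not from a real repository):

```
svn propget svn:mergeinfo .
# typical output:
#   /trunk:1234-1502
# i.e. revision ranges Subversion believes are already merged in
```

When those recorded ranges get out of sync with reality, Subversion starts re-merging or skipping changes – hence the weeping and gnashing of teeth.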

DVCS’s handle continuous merging much better because every commit is given a unique hash, which makes it easy to determine whether a change has already been merged into a particular branch. This means that when a DVCS merges two branches, it doesn’t duplicate merges, and it’s generally better about automatically resolving merge conflicts because of the way changes are tracked.
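You can see those unique identifiers for yourself with a minimal, self-contained sketch using Git (Mercurial behaves the same way); the temp directory, file name, and commit message here are all made up for illustration:

```shell
# create a throwaway repository in a temporary directory
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

# make a single commit
echo "hello" > file.txt
git add file.txt
git commit -qm "first commit"

# every commit gets a unique hash; this is the identifier a DVCS
# uses to decide whether a change is already present in a branch
hash=$(git rev-parse HEAD)
echo "$hash"
```

Because that hash travels with the commit through every clone and merge, no branch can ever "forget" that it already contains the change.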

Good Habits

Being good with version control systems in general is partly about forming good habits and avoiding bad habits: always add meaningful, concise comments when committing changes; make a tag whenever you do a deployment; learn how to handle merge conflicts (no, pressing “Resolve Conflict” does not resolve the conflict!!!), etc. The good habit to form here is to always create a feature branch when developing new features.

This keeps feature-related commits isolated until they are ready for general consumption. It also gives you the flexibility to abort or pause development on a feature without the risk of deploying unfinished code.

Good Habits Must Be Convenient

It’s arguable that this whole “Always Create a Feature Branch” thing is a good habit no matter what version control system (VCS) you use. However, DVCS’s make this an extremely inexpensive operation, while Subversion (and other CVCS’s) present a hurdle when creating feature branches. The right way to do things has to be convenient.

Have you ever signed up for a gym membership that wasn’t on your way to and from work? If you have, you probably went a couple of times, but it soon became too inconvenient to go to the gym. It was too out-of-the-way, and you just didn’t have time to work out, and so your abs retreated further into your gut (you’re convinced they’re still there somewhere). You knew in your mind that going to the gym was the right thing to do, but you just didn’t have time. The real problem is that it was too inconvenient.

Subversion is like that gym across town. It allows you to do the right thing (create feature branches), sure – but it’s not convenient. In Subversion, you have to go through a whole process of creating a branch, which happens on the server. Then, you have to either switch your working copy over to that new branch, or check out the branch somewhere on your disk. It’s a process which is seemingly reserved for version control purists.
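For the record, here’s roughly what that process looks like (the repository URL, branch name, and local paths are hypothetical):

```
# create the branch on the server – a round-trip to the repository
svn copy http://svn.example.com/repo/trunk \
         http://svn.example.com/repo/branches/re-architect \
         -m "Create feature branch for re-architecture work"

# then either switch your working copy over to the new branch...
svn switch http://svn.example.com/repo/branches/re-architect

# ...or check the branch out somewhere else on disk
svn checkout http://svn.example.com/repo/branches/re-architect ~/work/re-architect
```

None of these steps is hard, exactly, but each one talks to the server, and the checkout in particular can take a long time on a big code base.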

In Mercurial, here’s what you would do from your working copy directory to create a feature branch:

hg branch re-architect

That’s it! And it’s INSTANT. You won’t have time to even toggle back to your browser, let alone read that latest Onion article, as you would while waiting for that new branch to check out on your disk with Subversion. Even projects with thousands of files take no time at all for a DVCS to branch. The point is that DVCS’s make feature branches convenient, and that’s one of the biggest reasons why I like them. I know that once I get my team over the hurdle that is the switch from Subversion to Mercurial (the time is coming), I can train them to make feature branches because it’s convenient. This habit will make us more agile and more adaptable to a constantly changing, increasingly competitive environment.
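And finishing the feature is just as lightweight. A sketch of the rest of the lifecycle, assuming Mercurial named branches and its default branch name (`default`):

```
hg branch re-architect             # create and switch to the feature branch
hg commit -m "Start re-architecture work"
# ...weeks of commits, with regular merges from default...

hg update default                  # when the feature is done, switch back
hg merge re-architect              # fold the feature branch in
hg commit -m "Merge re-architect into default"
```

And if the project gets aborted instead? Just never merge the branch. The main line stays clean, exactly as if the work had never happened.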
