spec#: a framework I can get behind

I’m generally not in favour of frameworks. I think that what the world needs is fewer frameworks to help programmers be productive. A competing API is one thing, but so many people offering competing paradigms makes things hard. And no community traction means that things only ever get to “alpha” quality. And let’s be honest, most of them are unnecessary; replacing tricky-to-understand code with tricky-to-understand config files to set up your abstract factory-factory? That isn’t progress, that is being an architecture astronaut.

But Spec# from Microsoft is an exception.

For one, you need something like it, even if you don’t think you do. Programming by contract is good stuff. If you roll your own by having a coding standard that checks the validity of parameters and throws known exceptions, or some sort of attribute decoration on the properties of parameter objects, then that is great. But you could get the same effect with less effort using a framework like Spec#.
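To make the roll-your-own approach concrete, here is a minimal sketch, in Java rather than the C# this blog lives in, of the coding-standard style: preconditions throw known exceptions, and a postcondition is checked at the end. The `Account` class and its members are hypothetical; Spec# would let you declare the same conditions as `requires`/`ensures` clauses checked at compile time rather than hand-written checks run at call time.

```java
// Hand-rolled design-by-contract: preconditions throw known exceptions,
// postconditions are asserted. All names here are illustrative only.
public class Account {
    private long balanceCents;

    public Account(long openingBalanceCents) {
        // Precondition on construction.
        if (openingBalanceCents < 0)
            throw new IllegalArgumentException("opening balance must be >= 0");
        this.balanceCents = openingBalanceCents;
    }

    public void withdraw(long amountCents) {
        // Preconditions: reject bad input with a known exception type.
        if (amountCents <= 0)
            throw new IllegalArgumentException("amount must be positive");
        if (amountCents > balanceCents)
            throw new IllegalStateException("insufficient funds");

        long before = balanceCents;
        balanceCents -= amountCents;

        // Postcondition: the balance dropped by exactly the amount.
        assert balanceCents == before - amountCents;
    }

    public long balanceCents() {
        return balanceCents;
    }
}
```

The point is that the contract is explicit and mechanical, which is exactly what makes it a good candidate for a framework to take over.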

For two, this is real framework territory. This stuff can be used on maybe 50% of all the code you write. OK, not every method on every class needs a contract, but anything you start to reuse would benefit, and anything you start thinking of as a library or a common class needs it.

For three, this isn’t some fly-by-night on SourceForge where some CS student has checked in their year-end project thinking “when I apply to Google I can show them this and they’ll love me”; this is Microsoft. And looking at the dates on the Spec# page, they have been at this a long time. Since 2004? My goodness, whole OSes have come and gone in that time!

Quality is a feature, not a constant

I can hear you objecting already! Quality, says the Agile handbook, is non-negotiable. If you want feature X, then you have to have it with 100% quality as determined by the developers. No user or customer is allowed to say “Oh, I want features X AND Y this iteration... and I know you can’t do them properly, but I’m sure you can think up some quick fix”. What they are asking for is a hack. And we don’t do hacks, do we? So the response is to say: well, you can have features X1 and Y1 this month, which is half the work, and features X2 and Y2 next month, which will complete the other half.

That is not the sort of quality I’m talking about.

The sort of thing I’m talking about is application quality, not code quality. So the user can ask for things like “the server can never be down for more than 5 minutes in a year”. This kind of user story is called a constraint story, which is different from the usual feature-led type. It is an odd kind of user story because even though it has the user experience at its core, it still knows about “the system”, which is generally not the way user stories work. In any case, this story will clearly require work, so it will need to be costed and included in an iteration at some point. Although we might prefer to put this kind of story near the end, if the constraint story was “The server must never lose an order request” then we might want to know about that up front. In fact, given the word never, we might want to start looking for another job!

In any case, that kind of request is not just a feature, it is a very expensive feature. To get something like “..never lose..” working you will not only need to implement some serious enterprise code but also some serious test capability that goes way beyond what you can get out of unit testing (although, of course, unit testing will provide the “atomic” tests and the safety net to refactor towards this incredible reliability). So this is a lot of work. In fact, I would go so far as to say that this kind of unconstrained constraint story will provide a near-infinite 😉 amount of work. It will be difficult to time-box, because until you can break the system you can’t know its limits, and when was the last time you broke SQL Server?

So lesson #1 is: don’t accept constraint stories that don’t have some sort of limit, even if the limit is crazy (like “the system will not lose more than one order in 100 million”).

The other interesting thing is: where does this quality come from? Clearly, this kind of “enterprise” strength quality won’t come from having a full set of unit tests. That might be necessary, but it is not sufficient. Code quality will provide the basis for refactoring towards our mega-reliable, ultimately scalable solution, but it won’t do it alone. However, can the building blocks of those “enterprise” applications be developed in an agile way? I’m not sure, but it worked for Linux. I know, it is a special type of programming, and you wouldn’t expect an electrical engineer who can build a power grid for a country to be able to design an audio amplifier for a TV, but I can’t even see where to start when it comes to creating a middleware application like MSMQ or SQL Server. Still a long way to go before I get that journeyman badge. Sigh.

Why team development is a must

It should be obvious why working in a team is good. When you start measuring the amount of work in an interesting project and come up with a figure in man-years, that tells you that you might need a little help on this one. Of course, there are instances – mostly back in the grand old days of personal computer and minicomputer development – where wonderful things were made by one or two developers: the C compiler, the Altair BASIC interpreter, VisiCalc and so on.

So why shouldn’t we all develop that way? Well, the people who developed those things were incredible geniuses whose passion and ambition for developing unbelievably good software was matched only by their skill at actually doing it. They had incredibly deep and broad knowledge that went all the way down to the hardware, and now, with applications involving graphics, the web, databases, messaging middleware and so on, the body of knowledge is so vast that it is hard to imagine one person knowing it all in depth and keeping it current. You’ll also notice that the list of one-person miracles is rather short. No doubt there were millions of young, keen developers out there writing language compilers and dBase variants, and most of their work is long gone; most of it never even compiled. Have you ever browsed around SourceForge and seen the huge number of projects that have never produced a single working version of their product, or even checked in anything that compiles? That is the normal fate of the work of the lone developer: oblivion.

When we allow lone developers to work for a long period of time in a company, the result is almost as bad. Eventually we will want to integrate the efforts of all these lone developers, and when we do we end up with a system that is only as good as the worst developer. In terms of stability, maintainability and usability, having each new developer learn lessons in their own time rather than being taught by an expert is a waste of time and money.

With a small amount of review or pair programming we can spread knowledge through the team in such a way that the less expert developers learn from the more expert ones and everyone ends up getting a little better, at only a small cost in extra work. Also, the system that results is as good as the best developer. The sad truth is that there is only one way to really learn the hard bits of programming, and that is by doing hard things. The hard thing might be super-high performance, it might be backwards compatibility, it might be working with 25-year-old legacy code, or it might be, for the business programmer, coping with code complexity while under pressure to do things as fast as possible. For the latter case we have all kinds of books and patterns and practices, but you’ll never understand patterns by looking at example code; you have to look at things like 20,000-line classes or enterprise applications with 200 different screens or databases with 600 stored procedures and 400 tables. You have to look at code that has years of complexity hammered into every line. You have to look at code that has been written by people who have never told themselves “no more hacks”.

When code becomes a database

I saw some code recently that made me think “that person has lost their mind”; it was a class with a public enumeration with 15,000 (yes, fifteen thousand) members. I noticed it when using ReSharper; it usually takes a few seconds to open a solution while it works through all the files, but in this case it hung for minutes. My first reaction was that this was mad, but I did at least try to understand why they had done it.

The enumerations were clearly imported in bulk from some external file (they were Bloomberg terminal commands). My first thought was that this was not code, it was a database, and should be stored in a database. Any claim that the enumeration helps with IntelliSense is clearly rubbish: there are so many members that you already need to know the thing you are looking for, which defeats the purpose of having the enumeration prompt you with the right options. I’m not really sure what it is good for. Of course, as soon as things go anywhere but in code, we are in some sort of late-bound, text-based nightmare where we are just tossing strings around, hoping we have the right string, and not knowing until runtime whether things will work. So, damned if you do, and damned if you don’t.

Obviously not all the members are used, but maybe having them all in one place is useful, so where should they be? In a relational database? That seems a little excessive because a) the data isn’t really relational, it is just a list, and b) all the other features of a relational database – shared access, a powerful query and update language, centralised management – don’t really seem that useful here. The key feature of a database is that it stores data. What is data? Well, one way to look at it is that it isn’t logic: if the logic is the same for all the inputs, then the inputs are data. By that test these commands aren’t really data, as they all do different things and produce different results.

So if it is not data, what is it, and where else could it live? In an XML file? Is that list of commands “config” for my program? I have some logic and I want to issue a certain command, so I store the command in a config file. Of course, normally one would store only the commands you actually need, and tightly bind the config file to the logic, so we’d be more hopeful of avoiding runtime errors when logic and config don’t connect.
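One middle ground between a 15,000-member enumeration and raw late-bound strings is to keep the full command list as data, but validate the handful of commands your code actually uses once, at startup, so a typo fails fast rather than at the point of use. A sketch in Java (the class and command names are hypothetical; the real list would be loaded from the Bloomberg file):

```java
import java.util.Set;

// Keep the bulk list as data, but fail fast at startup on unknown commands.
// CommandSet and the command names below are illustrative only.
public final class CommandSet {
    private final Set<String> known;

    public CommandSet(Set<String> known) {  // e.g. loaded from an external file
        this.known = Set.copyOf(known);
    }

    /** Returns the command name unchanged, or throws immediately if unknown. */
    public String require(String name) {
        if (!known.contains(name))
            throw new IllegalArgumentException("unknown command: " + name);
        return name;
    }
}
```

Usage would be something like `static final String PX_LAST = commands.require("PX_LAST");` in a class initialiser, so the program refuses to start with a bad command name instead of failing mid-run.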

In general, I’d say that we have a hierarchy that goes from code to config to database, and we expect that order to track both size and changeability. If the enumeration of 15,000 members is huge but never changes, is it code or database? I don’t know, but it is definitely a hack.

Today was a good day

So, today was a good day.

I’ve had an application in production for a couple of years. It’s a server-side framework providing a rich context for server-side work “atoms”, of a type that has been written, no doubt, hundreds of times (and made obsolete by things like IIS 7 and WAS). Of course, it is heavily multithreaded and has all kinds of stuff going on: AppDomains, MSMQ, external libraries, and contributions from all kinds of developers in the company, some of whom understand unmanaged resources better than others. So you can see where this is going: over the years more and more functionality has accumulated in this beast, it now runs 24×6, responding to requests every minute of every day, and the bugs are coming to the surface. The worst of these is that, under very heavy load, the occasional request just goes missing.

Well, why didn’t you test it? I hear you cry. Well, I did. Two years ago I had the unit tests, the automated builds and automated functional command-line test clients and so on. Of course, once it was working I stopped testing. Tweak after tweak accumulated, and when I went into the code to investigate the problem I found not only that I could reproduce the bug with an embarrassingly small number of concurrent requests but, even worse, that over time I’d commented out all the unit tests on the critical section where all the threading issues were centred! And when I uncommented them, none of them worked.

So, I tried a little free-solo refactoring, making some changes to clean the code up using just my command-line client to load-test the server. Obviously, that didn’t work, so I bit the bullet and started on a new set of unit tests.

After I started, I realised how much I’ve learned in the last two years: how much better I understand testing, and how much better my code is. I saw so many things that now made me cringe. But I stayed true and wrote my tests first, using plain old stubs that removed all the connections to MSMQ and so on. And today, with the help of NUnit, TestDriven.Net and some old-school logging, I found the damn thing and bingo! Concurrency up in the hundreds.
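The shape of the load test that cornered the bug can be sketched like this, in Java rather than the C#/NUnit of the original (all names hypothetical): fire a few hundred concurrent requests and check that every single one is accounted for, because a lost request shows up as a shortfall in the count.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Load-test sketch: submit N concurrent requests and count completions.
// handleRequest() is a stand-in for the SUT called through a stub that
// cuts out MSMQ and the other external dependencies.
public class LoadTest {
    static final AtomicInteger handled = new AtomicInteger();

    static void handleRequest() {
        handled.incrementAndGet();  // the real SUT would do the work here
    }

    /** Runs the given number of requests concurrently; returns how many completed. */
    public static int run(int requests) {
        handled.set(0);
        ExecutorService pool = Executors.newFixedThreadPool(32);
        for (int i = 0; i < requests; i++) {
            pool.submit(LoadTest::handleRequest);
        }
        pool.shutdown();
        try {
            pool.awaitTermination(30, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return handled.get();  // a lost request shows up as a shortfall here
    }
}
```

The assertion is simply `run(n) == n`; against the broken critical section, the count comes up short once the concurrency gets high enough.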

It feels even better because I spent all day getting an ear-bashing about how writing unit tests isn’t very useful if you can’t tell whether the tests are failing because of a problem with the multi-threaded test context. Actually, chaps, there is nothing wrong with the test context. It is the SUT code that is broken, as always.

So, bug fixed. Ready to refactor on Monday!

Welcome, Welcome

So welcome to my new blog blah blah blah.

I’m doing this as I have some ideas about programming popping up with no outlet in my work environment.

In my programming I feel I’m approaching a phase similar to the end of my physics PhD. When I started my PhD, I could answer textbook problems in a textbook manner; but if you have ever seen real physics research, it is nothing like a textbook. I could solve technical problems in a “fiddle with it until it works” way, which is the only way graduate students work; analytic solutions are not possible when you are working with 15 independent parameters. As I got to the end of my three years of physics research, I found that I entered a great new intellectual state: I was able to combine my textbook learning with the experience gained from soldering wires and spending hours trying to align laser beams. At the end of my PhD I could balance the two to solve real problems, and my knowledge of the specific domain of the research meant that new ideas just came bubbling out. I could do physics just by talking to other physicists.

Then, of course, I left physics and started all over again in programming. And of course, I started with the experimental “fiddling” style that is the recourse of those with no formal training. Now, after 5 years, I’m starting to put the pieces together and trying to view problems in a context that isn’t academic but, like academic learning, tries to extract the essential “truth” from repeated similar experiences. Of course, many people have done this before; this is just my 2 cents. There isn’t “one truth”, but I think there is a “body of knowledge” that can be reused: parts of it already exist, parts still need to evolve, and parts can be taken from other disciplines. There are similarities with other human activities – say, mathematics and mechanical engineering – and differences, of course.

I also think that it’s time for me personally to move from being a guy who makes programs to being a software engineer, with all that is implied by that. I’m not there, but I’m on my way. To do that, though, there needs to be a breadth and depth of learning that only time can produce, but also a commitment to the craft. Hence the title. I’ve served part of my apprenticeship and now it’s time to become a journeyman and learn from the masters of the programming craft, and maybe contribute a little too.