Project thoughts braindump

I went on a project management course recently. It was pretty 101-level, but good enough for a n00b like me. It fired off a few thoughts that I want to dump here; there's not much insight in them yet, but I want to write more about it in another post.

What was interesting is that I feel Agile is already accepted as a great thing for business software, but this course basically ignored it on the grounds that:
a) MOST people are still doing waterfall, in business or not;
b) project planners sort of dislike it because it is hard to get the predictability of waterfall.

a) is, I think, undeniable. b) is wrong but interesting. The problem (as I note below) is that people like waterfall because it feels precise and ordered, but when it breaks you only find out late, and there is no way for it to degrade gracefully.

The problem with Agile is, of course, that it puts little emphasis on documentation of design, user guides, etc. The emphasis is on producing working software of known, high quality in a fixed time, with all the flexibility coming from precisely what functionality is delivered. To some people on the course (e.g., people writing firmware for Tomahawk cruise missiles – no, really!) flexible functionality and no documentation is not an acceptable compromise, no matter how fast and cheap you can write the software!

* you must succeed and be seen to succeed: reporting is not an afterthought

* the project manager’s role is to look ahead to the next phase, not to work on the current set of tasks.

* checklists and templates are a kind of “project process lite”: not a substitute for real thought, but something to react to. Most of project management is based on reusing what you already did that was similar. Good checklists/templates to have:
* project stakeholders
* task estimation spreadsheet with 3-point estimate logic
* project milestone/gate reports
* risk checklist
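The 3-point logic that estimation spreadsheet encodes can be sketched in a few lines of Python (the PERT weighting is the standard one; the task names and numbers here are invented):

```python
# Three-point (PERT) estimation: weight the most-likely case 4x,
# then derive a mean and a rough spread measure per task.
def pert_estimate(optimistic, likely, pessimistic):
    mean = (optimistic + 4 * likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return mean, std_dev

tasks = {
    "capture requirements": (2, 4, 9),   # days: best, likely, worst
    "build data layer": (5, 8, 20),
}

# Summing the per-task means gives the expected total duration.
total_mean = sum(pert_estimate(*t)[0] for t in tasks.values())
print(round(total_mean, 2))  # 14.0 days for these made-up figures
```

The wide best/worst spread is doing the real work here: a task whose pessimistic figure is far from its optimistic one is shouting "risky" at you before the plan is even drawn.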

* project owners are the people who have the right to judge whether the project was a success. Project owners measure success in terms of things that were delivered 100%, not in terms of effort and tasks completed.

* a project goal statement is the definition of success; it is not a high-level requirements list, nor a brief definition of the project owner’s problem, nor a description of the proposed solution.

* project stakeholders are any people who are impacted by the project – not necessarily people on the project team doing the work.

* by creating and publishing a project goal you may flush out unhappy – and previously unknown – stakeholders. We can then correct the project goal at very low cost in effort and political standing. It won’t be easy, but it will be easier than changing course later.

* it is better to have one project goal that everyone knows about. For projects with stakeholders outside the organization – or stakeholders with disparate opinions and viewpoints – multiple goal statements may be needed. Consider your honesty and what you do if you are “found out”.

* the project management lifecycle is separate from – but interlaced with – the software/system development lifecycle. The initial gathering of high-level project objectives is correlated with early requirements capture for development, but project goals are NOT the same as the user requirements that will be needed to create even a high-level architecture of the system/software.

* the advantage of the waterfall model is that the project can proceed with many specialists, coordinated by the project manager, working on different phases. Only a few project “masters” are needed who understand all phases and work on the project for the whole cycle.
… the disadvantage of this is that the different specialists will have little ownership and will not feel involved once their part is “done”, and that can lead to different parts of the project team being at war (e.g., developers and testers).

* project plan estimates are often nonsense: project managers pad time estimates for tasks in the plan and then senior managers cut these estimates arbitrarily because they know that they have been padded! How do we avoid these games and keep the plan a real tool for planning our projects? The answer: keep estimates ACCURATE but IMPRECISE by using a RANGE. The more imprecise the estimate is, the higher the risk. In some cases, highly imprecise or otherwise highly uncertain/risky tasks should be pushed towards the upper end of the estimate range. This means that only a few tasks are padded, and if senior management want to cut estimates for risky items then they can take responsibility for these specific cases!

* Building good, accurate plans is only possible where estimates are accurate. We must estimate then measure, re-estimate the remaining tasks and re-plan the project tasks to ensure that the project plan is still relevant. We must ensure that the estimate ranges are compared with the measured duration of the tasks so people can improve their estimation skills.
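Closing that feedback loop can be as simple as recording each task's estimate range next to its measured duration and flagging the misses. A sketch, with invented data:

```python
# Each record: (task, low estimate, high estimate, actual duration in days).
history = [
    ("write import script", 2, 5, 4),
    ("migrate schema", 3, 6, 9),
    ("update reports", 1, 3, 2),
]

# A well-calibrated estimator's actuals should land inside the range
# most of the time; repeated misses mean the ranges need widening.
misses = [task for task, lo, hi, actual in history if not lo <= actual <= hi]
hit_rate = 1 - len(misses) / len(history)
print(misses, round(hit_rate, 2))  # ['migrate schema'] 0.67
```

Even a crude hit-rate like this, reviewed at each re-plan, tells you whose ranges to trust and whose to stretch.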

* Project planning and software design are inherently iterative; you must revisit earlier stages as you discover changes in the scope and deliverables of the project. The advantage of agile processes is that they recognise this explicitly; the disadvantage of waterfall is that it maintains the illusion of precision for too long – the iterative corrections are a footnote that we only discover late in the project.

* The balance between agile and waterfall comes down to how many corrections we have to make. If the scope doesn’t change significantly then we can get away with waterfall and get the advantages of simplicity, predictability, and resourcing staff with a single skill-set. If changes are constant or large then we need to recognise that and use agile, which is unpredictable (you don’t know what you will get, you only know when you will get it: when the iteration ends) and requires highly-skilled, multi-talented developers who are highly motivated.

* Reducing the length of the critical path by overlapping dependent activities is possible, but entails increased risk (e.g., the financial risk of sunk costs in a cancelled project).
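The critical path itself is just the longest path through the task dependency graph, which is easy to compute. A sketch in Python (tasks, durations and dependencies all invented):

```python
from functools import lru_cache

# Duration of each task (days) and which tasks must finish before it starts.
durations = {"design": 3, "build": 5, "test": 2, "docs": 4}
depends_on = {"design": [], "build": ["design"], "test": ["build"], "docs": ["design"]}

@lru_cache(maxsize=None)
def earliest_finish(task):
    # Earliest finish = the task's own duration plus the latest
    # earliest-finish among its prerequisites (0 if it has none).
    return durations[task] + max(
        (earliest_finish(d) for d in depends_on[task]), default=0
    )

critical_length = max(earliest_finish(t) for t in durations)
print(critical_length)  # 10: design -> build -> test is the critical path
```

Overlapping "build" with the tail of "design" shortens that path, but it means starting work against a design that may still change – which is exactly the extra risk being bought.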

* Risk control activities: prevention, reduction (of impact or probability), acceptance (absorbing risk), transfer (of control or the risk itself), contingency (what we do when the risk comes to pass).

* Research to try and evaluate probability or impact of risk is a TASK that should be on the project plan.

* If we find a risk but don’t want to carry out the risk control work, and don’t want to simply accept the risk, then we can use risk monitoring TRIPWIRES: regularly monitor some metric that will indicate increased probability or impact of a given risk. Monitoring the metric is a small repeated task that lets us defer the FULL risk control effort unless it is needed.

* Project reporting for milestones should be deliverable-based, not task-based. No project owner cares what effort you have put in, only what you have delivered 100% complete. A 90% complete feature is a missing feature. For reporting inside the team you can be activity-based, as long as people know what those activities are. For informal, regular (say daily or weekly) meetings you can report what people are doing now.

* remember the project manager’s job is to be ahead of the team, preparing the ground, so you should be able to report what you will be doing NEXT. You should also be sweeping up behind the team, ensuring that milestones/gates are truly delivered.



If you can’t name it, you can’t write it

Some people in my company have recently decided that they need some more layers in their architecture. They are right, as it happens, but they are going about it all wrong. They are attempting to create a framework, and we all know what happens when you put on your architecture hat and start building frameworks: architecture astronauts. Rather than solving specific problems and rolling the solutions up into a framework, they have started with the idea that they want a framework that wraps the database up and produces objects. And it should use LINQ. And it should do all sorts of cool stuff.

Unfortunately, since they don’t know what cool stuff it will do, they didn’t know what to call it, so they held a little competition for people to suggest a name. And that is the problem: if you don’t even know what a thing should be called, how do you know what it is? And if you don’t know what it is, you cannot sit down and write the code.

Of course, I pointed this out and made them very unhappy. A quick browse through the source control history revealed that they had started coding without any requirements. Of course, there were lots of empty interfaces, each with one object implementing it with only the method signatures! And the really funny thing is that they have already changed the name. When they started they called it ClientAPI (the “client” refers to the customers of the business, not client as in “client-server”). Then later they changed it to ClientBusinessObjects. Great name.

And the really, really funny thing is that they aren’t business objects at all: they are database objects, plus the factory classes that build them from the database! It will be really fun when they start reading pattern books too!

XP since the beginning…

I had an interesting conversation with someone the other day on the subject of XP. I told him that I’d been involved in an “early attempt at doing XP” back in 2001. We chatted a little about how it was only a couple of years after some of the books on XP were published but that, of course, there were many things about the attempt that were not perfect. He then casually mentioned that he used to work for ThoughtWorks and asked if I’d heard of it. I said, of course – because of Martin Fowler – he then told me that he was lucky enough, while at ThoughtWorks, to have been taught XP by Kent Beck. I asked him how and he said that he was lucky enough to go on the second ever ObjectMentor “total immersion” XP course.

Wow. Pretty impeccable agile credentials.


Extension methods are interesting

Not my idea, but it amused me. Thanks Dave.

using System;

namespace ExtensionMethodsAreEvil
{
    class Program
    {
        public static void Main()
        {
            string myNullString = null;
            // No NullReferenceException here: an extension method is really a
            // static method call, so invoking it on a null reference is legal.
            Console.WriteLine("Is the string empty? " + myNullString.IsNullOrEmpty());
        }
    }

    public static class ExtendString
    {
        public static bool IsNullOrEmpty(this string s)
        {
            return string.IsNullOrEmpty(s);
        }
    }
}

spec#: a framework I can get behind

I’m generally not in favour of frameworks. I think that what the world needs is fewer frameworks to help programmers be productive. A competing API is one thing, but so many people offering competing paradigms makes things hard. And no community traction means that things only ever get to “alpha” quality. And let’s be honest, most of them are unnecessary; replacing tricky-to-understand code with tricky-to-understand config files to set up your abstract factory-factory? That isn’t progress, that is being an architecture astronaut.

But Spec# from Microsoft is an exception.

For one, you need something like it, even if you don’t think you do. Programming by contract is good stuff. If you roll your own – a coding standard that checks the validity of parameters and throws known exceptions, or some sort of attribute decoration on the properties of parameter objects – then that is great. But you could do it with less effort using a framework like Spec#.
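The roll-your-own version of the idea can be sketched in a few lines – here in Python rather than Spec#'s own syntax, and the `requires` decorator and names are mine, not Spec#'s:

```python
import functools

def requires(predicate, message):
    """Precondition check: raise a known exception type on violation."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if not predicate(*args, **kwargs):
                raise ValueError("precondition failed: " + message)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@requires(lambda s: s is not None and len(s) > 0, "s must be non-empty")
def shout(s):
    return s.upper()

print(shout("hello"))  # HELLO; shout(None) raises ValueError instead
```

The point of something like Spec# is that the contract is part of the signature and checked by the compiler, rather than bolted on at runtime as here.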

For two, this is real framework territory. This stuff can be used on maybe 50% of all the code that you write. OK, not every method on every class needs a contract, but anything that you start to reuse would benefit, and anything that you start thinking of as a library or a common class needs it.

For three, this isn’t some fly-by-night on SourceForge where some CS student has checked in their year-end project thinking “when I apply to Google I can show them this and they’ll love me” – this is Microsoft. And looking at the dates on the Spec# page, they have been at this a long time. Since 2004? My goodness, whole OSs have come and gone in that time!

Quality is a feature, not a constant

I can hear you objecting already! Quality, says the Agile handbook, is non-negotiable. If you want feature X, then you have to have it with 100% quality as determined by the developers. No user/customer is allowed to say “Oh, I want features x AND y this iteration… and I know you can’t do them properly, but I’m sure you can think up some quick fix”. What they are asking for is a hack. And we don’t do hacks, do we? So the response is to say: well, you can have features x1 and y1 this month, which is half the work, and features x2 and y2 next month, which will complete the other half.

That is not the sort of quality I’m talking about.

The sort of thing I’m talking about is application quality, not code quality. The user can ask for things like “the server can never be down for more than 5 minutes in a year”. This kind of user story is called a constraint story, which is different from the usual feature-led type. It is an odd kind of user story: even though it has the user experience at its core, it still knows about “the system”, which is generally not the way user stories work. In any case, this story will clearly require work, so it will need to be costed and included in an iteration at some point. Although we might prefer to put this kind of story near the end, if the constraint story were “The server must never lose an order request” then we might want to know about that up front. In fact, given the word never, we might want to start looking for another job!

In any case, that kind of request is not only a feature, it is a very expensive feature. To get something like “…never lose…” working you will not only need to implement some serious enterprise code, you will need some serious test capability that goes way beyond what you can get out of unit testing (although, of course, unit testing will provide the “atomic” tests and give the safety net to refactor towards this incredible reliability). So this is a lot of work. In fact, I would go so far as to say that this kind of unconstrained constraint story will provide a near-infinite 😉 amount of work. It will be difficult to time-box the work because until you can break it you can’t know the limits – and when was the last time you broke SQL Server?

So lesson #1 is: don’t accept constraint stories that don’t have some sort of limit, even if it is crazy (like “the system will not lose more than one order in 100 million”).
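Even a crazy limit gives you something to plan against. The statistical "rule of three" says that observing zero failures in n trials bounds the true failure rate below roughly 3/n at 95% confidence, so a quick back-of-envelope sketch shows what testing such a story would cost:

```python
# Rule of three: to demonstrate a failure rate below max_failure_rate
# with ~95% confidence, you need about 3 / max_failure_rate
# failure-free trials.
def trials_needed(max_failure_rate):
    return round(3 / max_failure_rate)

# "no more than one lost order in 100 million"
print(trials_needed(1 / 100_000_000))  # 300000000 failure-free orders
```

Three hundred million failure-free test orders just to *demonstrate* the limit – which is exactly why an unconstrained constraint story is a near-infinite amount of work.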

The other interesting thing is: where is this quality coming from? Clearly, this kind of “enterprise” strength quality won’t come from having a full set of unit tests. That might be necessary, but it is not sufficient. Code quality will provide the basis for refactoring towards our mega-reliable, ultimately scalable solution, but it won’t do it alone. So can the building blocks of those “enterprise” applications be developed in an agile way? I’m not sure, but it worked for Linux. I know, it is a special type of programming, and you wouldn’t expect an electrical engineer who can build a power grid for a country to be able to design an audio amplifier for a TV, but I can’t even see where to start when it comes to creating a middleware application like MSMQ or SQL Server. Still a long way to go before I get that journeyman badge. Sigh.

Why team development is a must

It should be obvious why working in a team is good. When you start measuring the amount of work in an interesting project and come up with a figure in man-years, that tells you that you might need a little help on this one. Of course, there are instances – mostly back in the grand old days of personal computer and minicomputer development – where wonderful things were made by one or two developers: the C compiler, the Altair BASIC interpreter, VisiCalc and so on.

So why shouldn’t we all develop that way? Well, those people were incredible geniuses whose passion and ambition for developing unbelievably good software was matched only by their skill at actually doing it. They had incredibly deep and broad knowledge that went all the way down to the hardware. Now, with applications involving graphics, the web, databases, messaging middleware and so on, the body of knowledge is so vast that it is hard to imagine one person knowing it all in depth and keeping it current. You’ll also notice that the list of one-person miracles is rather short. No doubt there were millions of young, keen developers out there writing language compilers and dBase variants, and most of their work is long gone; most of it never even compiled. Have you ever browsed around SourceForge and seen the huge number of projects that have never produced a single working version of their product, or even checked in anything that compiles? That is the normal fate of the work of the lone developer: oblivion.

When we allow lone developers to work for a long period of time in a company, the result is almost as bad. Eventually we will want to integrate the efforts of all these lone developers, and when we do we end up with a system that is only as good as the worst developer. In terms of stability, maintainability and usability, having each new developer learn lessons in their own time rather than being taught by an expert is a waste of time and money.

With a small amount of review or pair-programming we can spread knowledge through the team in such a way that the less-expert developers learn from the more expert ones, and everyone ends up getting a little better, at only a small cost in extra work. Also, the system that results is as good as the best developer. The sad truth is that there is only one way to really learn the hard bits of programming, and that is by doing hard things. The hard things might be super-high performance, backwards compatibility, working with 25-year-old legacy code, or, for the business programmer, coping with code complexity under pressure to do things as fast as possible. For the latter case we have all kinds of books and patterns and practices, but you’ll never understand patterns by looking at example code; you have to look at things like 20,000-line classes, or enterprise applications with 200 different screens, or databases with 600 stored procedures and 400 tables. You have to look at code that has years of complexity hammered into every line. You have to look at code written by people who have never told themselves “no more hacks”.