Conway’s law revisited

You may have heard of Conway’s law: the hypothesis that when an organisation makes a piece of software, the architecture of the software mirrors the organisation’s communication structure.

However, I recently read the original paper rather than the Wikipedia article, and I found it well worth the time. He presents not only the main thesis about structure, but also other observations about how we break a large problem down, and why we break it down at all. There are also some great points on how managers avoiding risk and blame make these kinds of consequences inevitable. And trend number 1: people aren’t getting any smarter. Basically, whichever comes first, the software or the organisation, this kind of mirroring is almost inevitable.

You have to spend a little time getting over the constant military-industrial complex references (you can just feel the Cold War hanging over you as you read it). Remember, this is 1968; people scarcely even recognised programming as an activity. I quote:

    “The term interface, which is becoming popular among systems people, refers to the inter-subsystem communication path or branch represented by a line in Fig. 1. Alternatively, the interface is the plug or flange by which the path coming out of one node couples to the path coming out of another node.”

It’s been a while since I heard people explain what an interface is. Flange.

De-pivot to pivot

If you’ve ever wanted to plot some data that isn’t pivoted in quite the right way, but you can’t figure out how to transform it…

Hadley Wickham (who writes a lot of great R libraries) wrote an article called Tidy Data. It’s the source of many great five-dollar words. Even if you don’t use R, it’s worth a read.

One idea I particularly like covers what to do when you have data that isn’t pivoted in the right way, so you need to de-pivot it in order to pivot it again.

This data:

| Year | Joiners | Leavers | Transfer in | Transfer out |
|------|---------|---------|-------------|--------------|
| 2012 | 10      | 4       | 0           | 6            |
| 2013 | 15      | 7       | 1           | 6            |
| 2014 | 6       | 6       | 5           | 3            |
| 2015 | 10      | 8       | 5           | 4            |

is already in summary form. If I were working in SQL I might do:

-- assuming the summary table is called headcount
select Year, Joiners as value, 'Joiners' as type from headcount
    union all
select Year, Leavers, 'Leavers' from headcount
    union all
select Year, TransferIn, 'TransferIn' from headcount
    union all
select Year, TransferOut, 'TransferOut' from headcount
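If you want to see the union trick run end to end, here’s a sketch using Python’s built-in sqlite3, with the table name (headcount) assumed for illustration:

```python
import sqlite3

# Build the summary table from the article in an in-memory database
conn = sqlite3.connect(":memory:")
conn.execute(
    "create table headcount "
    "(Year int, Joiners int, Leavers int, TransferIn int, TransferOut int)"
)
conn.executemany(
    "insert into headcount values (?, ?, ?, ?, ?)",
    [(2012, 10, 4, 0, 6), (2013, 15, 7, 1, 6),
     (2014, 6, 6, 5, 3), (2015, 10, 8, 5, 4)],
)

# De-pivot: one select per category column, glued together with union all
rows = conn.execute("""
    select Year, Joiners as value, 'Joiners' as type from headcount
    union all
    select Year, Leavers, 'Leavers' from headcount
    union all
    select Year, TransferIn, 'TransferIn' from headcount
    union all
    select Year, TransferOut, 'TransferOut' from headcount
""").fetchall()
```

Each of the four category columns becomes four rows, giving one (year, value, type) row per year per category.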


Wickham’s word for this is “melt” and he has a function for it in the reshape package.

In R (I save my data as CSV, yuck!):

library(reshape)

before <- read.csv("C:\\temp\\head-before.csv")

melt(before, id = c("Year"))


and the result:

  Year variable value
1 2012  Joiners  10.0
2 2013  Joiners  15.0
3 2014  Joiners   6.0
4 2015  Joiners  10.0
5 2012  Leavers   4.0
6 2013  Leavers   7.0
7 2014  Leavers   6.0
8 2015  Leavers   8.0
...

Damn hard to do in Excel.
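For what it’s worth, the same operation exists in Python’s pandas under the same name. A sketch, assuming the summary table above has been loaded into a DataFrame:

```python
import pandas as pd

# The summary table from above, one column per category
before = pd.DataFrame({
    "Year": [2012, 2013, 2014, 2015],
    "Joiners": [10, 15, 6, 10],
    "Leavers": [4, 7, 6, 8],
    "TransferIn": [0, 1, 5, 5],
    "TransferOut": [6, 6, 3, 4],
})

# Melt: keep Year as the id column and turn every other column
# into (variable, value) pairs -- one row per year per category
after = pd.melt(before, id_vars=["Year"])
print(after)
```

As with the R version, you get a long-format table with columns Year, variable, and value, ready to pivot however you like.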

Too busy to change: Experiment to find upsides, limit the downsides

OK so you’ve learned something new; you want to start doing it for real. How do you find the time?

Orson Scott Card said that

“The essence of training is to allow error without consequence.”

Learning stuff is great, but it’s only useful if we can put it into action. Unfortunately, we can’t rehearse the relevant skills in isolation. There is no training arena where we can test out project management systems, no sandbox for Agile vs. Waterfall, so we have to blend our training with our work.

We have to find a way to learn and try things out while doing “real work”. We must look for options that have an upside, but find a way to control the downsides. If we can’t control the consequences, then we will be too afraid of making errors to try anything new. And the longer that goes on, the more risk-averse we become, and the easier it is to say “it’s worked up until now, why change?”

Also: manage upwards; get them to buy you some time and help you control downsides; the payoffs will come.

Here are a few suggestions, all rather obvious by now:

  • Book some time. Pick a spot in your diary and block some time every week.
  • Start small. Pick something isolated, where you can take a risk. And deliberately take a risk and do something new.
  • Don’t delay. Waiting for the right moment is probably a bad idea; it’s unlikely to ever come along. OK yes, don’t pick an absolute crisis either, but everything is relative; most of our work crises are “first-world problems”.
  • Learn with others; learning can be a social experience. If you are learning together, you can reinforce each other’s commitment to try new things.
  • Learn from others on the same path. If you have to do all the experiments yourself, you’ll age to death before you figure it out. Share experiences. It’s hard to know how much their experiences apply, of course.
  • Don’t stop. Many small experiments are better than one big change. Don’t settle for just trying something new once.

Finally

Be patient. Maybe the new thing won’t work the first time; that doesn’t mean it won’t ever work, or that doing new things is a waste of time. Share your experiences with your peers, and be honest with your manager and your team about the fact that you tried something and it didn’t quite work. Ask for forgiveness, rinse and repeat.

Moral Map of the Web

Enjoy this.

https://archive.org/stream/TheWebIsAgreement/web#page/n0/mode/1up

I liked this part, where MS is a dark, satanic mill, vomiting poison into the pastures of the web. Harsh.

And I also learned a new phrase: Paving the cow path

Mostly I think of technology as a job. Then when I read something like the UK government digital task force blog, it seems like a noble quest. Making the world a better place, one crappy government department at a time.

PowerShell: quick custom object

TL;DR: I parse some filenames and pack the results into a quick custom object.

I found that I had a lot of files to go through, and they were all dated very nicely in the filename.

Sadly, some of the dates in the filenames didn’t match the date modified, so I can’t use that to sort them.

And I can’t sort on the filename either, because the date format (dd.mm.yyyy) doesn’t sort correctly as a string. If only they had formatted it as 2014-02-28, then all would be good.

PoSh it is then:

#get the files into some data
$files = gci "\\folder"

# grab the names, then pick out the date with a regex
# remember the Out-Null, to get rid of the True, True, True, True... that -imatch will send down the pipeline
# then read the groups back out of the $Matches object, packing them into a little hash table; the hash will work as a hobo object
$result = $files.name | % {
    $_ -imatch ".+-(\d\d)\.(\d\d)\.(\d\d\d\d)\..+" | Out-Null
    @{ "date" = (Get-Date -Day $Matches[1] -Month $Matches[2] -Year $Matches[3]); "filename" = $_ }
}

#select that out, like a boss, find the file you've been dreaming of
$result | select @{e={$_.filename}; l="filename"}, @{e={$_.date}; l="date"} | sort date
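The same technique translates to other languages; here is a sketch in Python, with some hypothetical filenames standing in for the real ones (the regex is the one from the PoSh version):

```python
import re
from datetime import date

# Hypothetical filenames in the awkward dd.mm.yyyy format
filenames = [
    "report-28.02.2014.txt",
    "report-03.01.2014.txt",
    "report-15.12.2013.txt",
]

# Pull the date out with a regex and pack (date, filename) pairs,
# so we can sort on the real date rather than the string
pattern = re.compile(r".+-(\d\d)\.(\d\d)\.(\d\d\d\d)\..+")
result = []
for name in filenames:
    m = pattern.match(name)
    if m:
        day, month, year = (int(g) for g in m.groups())
        result.append({"date": date(year, month, day), "filename": name})

# Sort oldest first, like the sort date at the end of the PoSh pipeline
result.sort(key=lambda r: r["date"])
```

A plain dict plays the same “hobo object” role the hash table plays in PowerShell.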

PowerShell: slow data in a file

Say you have a script that gets data from a lot of different places. It’s slow to run while you are developing, but you probably want to connect to them all when you run it for real. If that’s you, you can save the results of your slow connections in a file with just one line of script, and keep using that file until you need to re-get fresh data.

I’ve found PoSh’s Export-CliXml and Import-CliXml nice and easy for storing the results of a slow operation, and they can serialise just about any PoSh object.

Param
(
    $resultFile = (Join-Path $PSScriptRoot "resultsFile.xml"),
    $resultsFileRedo = $false
)

# refresh if asked to, or if there is no cached file yet
if ($resultsFileRedo -or -not (Test-Path $resultFile))
{
    $bigListOfThings = Get-ListOfThings
    Write-Host "about to get the slow data from $($bigListOfThings.count) things"
    $slowData = @{}  # hash table to collect the results in
    foreach ($eachThing in $bigListOfThings)
    {
        Write-Host "doing $eachThing"
        $eachThingData = Get-SlowDataThatTakesAges $eachThing
        $slowData.Add($eachThing, $eachThingData)
    }
    # Export-Clixml overwrites any old file for us
    $slowData | Export-Clixml $resultFile
}
else
{
    $slowData = Import-Clixml $resultFile
}
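The same cache-to-file pattern is easy in other languages too. A sketch in Python using pickle in place of CliXml, where fetch_all_the_things() is a hypothetical stand-in for the slow connections:

```python
import os
import pickle

def fetch_all_the_things():
    # Stand-in for the slow connections; replace with the real work
    return {"thing1": 42, "thing2": 99}

def get_slow_data(result_file="results.pickle", redo=False):
    """Fetch expensive data, or reuse a cached copy on disk.

    Pass redo=True to refresh the cache.
    """
    if redo or not os.path.exists(result_file):
        slow_data = fetch_all_the_things()
        with open(result_file, "wb") as f:
            pickle.dump(slow_data, f)
    else:
        with open(result_file, "rb") as f:
            slow_data = pickle.load(f)
    return slow_data
```

The first call pays the slow price and writes the file; every call after that reads the file in milliseconds, until you pass redo=True.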