The Toyota Way

is an excellent book by Jeffrey Liker (isbn 978-0-07-139231-0). As usual I'm going to quote from a few pages:
One day a Ford Taurus mysteriously disappeared. It had been in the factory so they could try fitting it with some prototype mirrors. When it vanished, they even filed a police report. Then it turned up months later. Guess where it was. In the back of the plant, surrounded by inventory.
Extra inventory hides problems... Ohno considered the fundamental waste to be overproduction, since it causes most of the other wastes… big buffers (inventory between processes) lead to other suboptimal behaviour, like reducing your motivation to continuously improve your operation.
…was that data was one step removed from the process, merely "indicators" of what was going on.
Building a culture takes years of applying a consistent approach with consistent principles.
It seems the typical U.S. company regularly alternates between the extremes of stunningly successful and borderline bankrupt.
Flow where you can, pull where you must.
When I interviewed [Fujio] Cho for this book, I asked him about differences in cultures between what he experienced starting up the Georgetown, Kentucky plant and managing Toyota plants in Japan. He did not hesitate to note that his number-one problem was getting group leaders and team members to stop the assembly line.
Every repair situation is unique.
The more inventory a company has,… the less likely they will have what they need [Taiichi Ohno]
I posit here that Toyota has evolved the most effective form of industrial organisation ever devised. At the heart of that organisation is a focus on its own survival. [John Shook]
You cannot measure an engineer's value-added productivity by looking at what he or she is doing. You have to follow the progress of an actual product the engineer is working on as it is being transformed into a final product (or service).
Everyone should tackle some great project at least once in their life. [Sakichi Toyoda]

Sense and respond

is an excellent book by Susan Barlow, Stephen Parry, and Mike Faulkner (isbn 1-4039-4573-X). As usual I'm going to quote from a few pages:
Continuous improvement… is not enough… what is needed also is continuous value creation.
…they continue to create products 'just in case' rather than 'just in time'.
The intelligent business therefore embraces voluntary evolution, designing its own fitness to survive and thrive.
...measure the value creation to value restoration ratio.
Most fast-food burger chains follow, to a large extent, the batch-and-queue principle… Contrast this kind of flow with the one-piece flow achieved by another fast-food company that makes sandwiches and 'subs'… What has been standardised therefore, is not the product but the production method...
Very often the traditional organisation passes work from one department to another in a batch-and-queue system, and with this approach it is not atypical to discover that a task that could be done in ten minutes may actually take ten days to complete. The reason for this is simple: the process is designed that way.
Two options exist for businesses: to make offers to customers or to respond to customers' needs.
Is the customer purchasing an electric drill or holes in the wall?
Adaptiveness… cannot just be added on to an organisation's existing capabilities: the organisation itself must become adaptive.
Working together in a cross-functional way actually joins up the company, as well as reinforcing and strengthening the value chain. The result is a critical mass of value creation around flow instead of around functions.

Agilis Deliberate Practice

Here's the slide-deck I presented at the Agilis conference in Iceland. It contains numerous examples of the kind of improvements a group of developers typically work through in just a few facilitated CyberDojo iterations.

Intention revealing #include ?

In a previous post I described how C and C++ have a third #include mechanism. It occurs to me that this idea has possibilities beyond simply using LOCAL(header) as a synonym for "header" and SYSTEM(header) as a synonym for <header> and then using the resulting seam to gain some leverage for testing. You could also add intention revealing names. For example, something like this:

#include "dice_thrower.hpp"
#include <vector>

class stub_dice_thrower : public dice_thrower
{
...
private:
    std::vector<int> stubbed;
};

could be written like this:

#include REALIZES(dice_thrower.hpp)
#include COMPOSES(vector)

class stub_dice_thrower : public dice_thrower
{
...
private:
    std::vector<int> stubbed;
};
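
For this to compile, REALIZES and COMPOSES would need definitions. A minimal sketch, simply mirroring the LOCAL/SYSTEM macros from the previous post (and, like them, switchable to nothing.h when unit testing):

#if defined(UNIT_TEST)
#  define REALIZES(header) "nothing.h"
#  define COMPOSES(header) "nothing.h"
#else
#  define REALIZES(header) #header   /* a local header this class realizes */
#  define COMPOSES(header) <header>  /* a system header this class composes from */
#endif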

Caveat emptor: I don't have any actual examples of this in real code. It's just an idea. It feels a bit like a solution looking for a problem. But I thought I would mention the idea here to see if anyone thinks it has any legs...

Responsibility

I had the pleasure of attending the Agilis conference in Iceland recently.

Christopher Avery gave an excellent keynote and spoke about the difference between accountability and responsibility; you are accountable to someone else but responsibility is personal.

He presented his six-step ladder of responsibility:

  • Responsibility
  • Obligation
  • Shame
  • Justify
  • Lay Blame
  • Denial
For example, I'm typing this in TextEdit on my MacBook whilst on a train to XP Day. The font is small and my eyesight is fading. For a moment I struggled to discern the tif in the word Justify. I could almost hear a tiny voice inside my head starting to blame. But then I jumped to Responsibility because I realised the fault was not with TextEdit but with me. I simply enlarged the font size.

Christopher handed out a sheet expanding a little on the six-step ladder above.
  • Responsibility is owning your ability and power to create, choose, and attract. It's about your ability to make a response - to respond.
  • Obligation is doing what you have to do instead of what you want to. As always, if you listen carefully, you can hear this distinction in patterns of speech "… but I have to …".
  • Shame is laying blame on oneself (often felt as guilt).
  • Justify is where we attempt to rationalise the blame, to use excuses for things being the way they are; we make things just in our mind.
  • Lay Blame (which is not a French verb ;-) is holding others at fault for causing something.
My son Patrick has Asperger's Syndrome and he has a strong tendency to blame. For example, if he bumps his elbow on the door he gets angry and blames the door. He finds it very difficult to move past this blame, to get inside a positive feedback loop which helps him become less clumsy. So I think laying blame is more than holding other people at fault; it can take the form of blaming anything - anything except oneself.

There is no best practice

I had the pleasure of speaking at the Agilis conference in Iceland recently. While preparing my slides on Deliberate Practice I was naturally thinking about the word practice. As far as I can tell, "Best practice" is the most common phrase with the word practice in it. I searched for "Best practice" on Google and got over 270 million hits. I searched for "Better practices" on Google and got a paltry 20 million hits.

Best practice…
  • focuses on achieving someone else's perfect future state
  • assumes there is only one best practice
  • implies improvement beyond the best practice is impossible
  • emphasises the noun practice
  • fits with the waterfall-defined-fixed mindset
  • doesn't start from where you are now
Better practices…
  • focuses on improving your own imperfect present state
  • assumes there are many possible better practices
  • implies improvement beyond the best practice is always possible
  • emphasises the verb - to practise
  • fits with the agile-empirical-growth mindset
  • starts from where you are now


Float fishing rivers

is an excellent (out of print) book by Ken Giles and Dave Harrell (isbn 0-947674-23-3). As usual I'm going to quote from a few pages:
I always tend to feed the line off the [closed face] spool by hand.
It is also important, regardless of which brand of line you use, to use them in conjunction with a silicone spray. … it does make a big difference. I always spray it on my spool at the start and even if the wind gets up later on, I find I can still sink the line when I have to and then leave it on the surface again if the wind drops.
If you are fishing a very slow moving river such as the Nene or the Welland, where there is a strong wind, then you need the float to be loaded, because when you cast, it goes into the water like an arrow, completely burying itself and helps you to sink your line without it being pulled away from the far shelf.
Holding back is generally only used after the first frost of winter.
The most important point that must be covered on stick float fishing is the need to keep the line behind the float at all times. This is a must. It just does not work if the line is allowed to go in front of the float.
As a line gets older, it has a greater tendency to sink, so by always having fresh line on your reels this problem is easily overcome.
When I hold back, I do not hold back really hard.
Regardless of the method, be it stick float or waggler, you must keep changing your depth around between being a couple of feet over depth to a couple of feet below the surface. Also, it is important to keep altering your shotting to find and keep in touch with the fish.
You do not select the float you want, you select the amount of weight you want to reach where you intend to fish and then pick a float to suit that.
I also feel that it pays to feed two lines like this anyway, to allow you to rest one against the other.
When waggler fishing I am ringing the changes far more often than I need to with the stick float.
The worst thing you can do… is to feed out of habit, as opposed to in response to the fish.
I think that work rate is the key to it all.

10,000 warnings

Suppose you are working on a codebase that has 10,000 warnings. Perhaps you have discussed how you can get rid of the warnings. Perhaps you have discussed it more than once. But you still have 10,000 warnings. What can you do? There is no magic bullet. You are not going to be able to buy a magic tool, or wave a magic wand, and with almost no effort, get rid of them all. If there were, you would already have waved the wand.

Even if there were such a silver bullet, it would create a shallow improvement rather than a deep one. It would create no dynamic whatsoever to encourage the developers to learn how not to introduce warnings in the first place. It would do the opposite.

Naturally new warnings keep appearing. Tomorrow there will be 10,001 warnings. But the 1 new warning gets lost in the sea of 10,000 others. It's not even noticed. The number of warnings is trending relentlessly upwards.

The only way you can get rid of 10,000 warnings is the same way you got them in the first place. A little bit at a time. With effort. Effort that shows you care.

The first thing to do is put a finger in the hole. Cap the number of warnings. Make new warnings count as errors from now on. Don't just say "new warnings count as errors from now on". You said that last time and it didn't work. Alter the way you build your software so that new warnings cause a build failure. If module X has 3,663 warnings, then 3,663 is the stake in the ground. If it gets more than 3,663, make that a build failure.
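
One way to enforce the cap is a tiny checker that runs after compilation and fails the build when the count exceeds the stake in the ground. This is only a sketch of my own; it assumes the build output is captured to a log file and that diagnostics contain the text "warning:":

/* warning_cap.c : fails the build when the warning count exceeds the cap */
/* usage: warning_cap <build-log> <cap>   e.g. warning_cap build.log 3663  */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char * argv[])
{
    if (argc != 3)
    {
        fprintf(stderr, "usage: %s <build-log> <cap>\n", argv[0]);
        return 2;
    }
    FILE * log = fopen(argv[1], "r");
    if (log == NULL)
    {
        perror(argv[1]);
        return 2;
    }
    long cap = strtol(argv[2], NULL, 10);
    long count = 0;
    char line[4096];
    while (fgets(line, sizeof line, log) != NULL)
    {
        if (strstr(line, "warning:") != NULL)
        {
            count++;
        }
    }
    fclose(log);
    printf("%ld warnings (cap is %ld)\n", count, cap);
    return count > cap ? 1 : 0; /* non-zero exit status fails the build */
}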

This creates pressure to remove warnings on the code actually being worked on. In a culture where warnings are ignored to the extent that they've grown to be 10,000 strong, developers are not likely to see the merits of removing warnings from old code that seems to work and no one has touched for a long time.

Once the number of warnings has stabilized, or even started to trend slightly downwards, you can work on reducing the number of warnings. If the maximum number of warnings in the build is currently 10,000, then set a target. Agree that in a week's time the number will have dropped to 9,900. Or agree to look at the classes of warnings and target the worst one.
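
If you target a class of warnings, most compilers will let you promote just that class to an error so it cannot creep back. A sketch assuming gcc; the specific warning class and file name are only illustrative:

gcc -Wall -Werror=unused-variable -c module_x.c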

You don't have to get to zero. Getting to zero would mean removing warnings from old code that hasn't been touched for ages. That's not as important as changing the dynamics of how people act so that the number of warnings is going down rather than going up.

Once you get to a level you're happy with, don't turn the build-checks off. Don't take your finger out of the hole. If you do that, warnings can easily start to rise again. Leave them in place.

Once you get to a level you're happy with, look at ways you can improve further. Buy new tools that detect new warnings. And start again. Focus on improvement, not perfection.

#include - there is a third way

Isolating legacy code from external dependencies can be awkward. Code naturally resists being isolated if it isn't designed to be isolatable. In C and C++ the transitive nature of #includes is the most obvious and direct reflection of the high coupling such code exhibits. There is a technique that you can use to isolate a source file by cutting all its #includes. It relies on a little-known third way of writing a #include. From the C standard:

6.10.2 Source file inclusion
...
A preprocessing directive of the form:
  #include pp-tokens 
(that does not match one of the two previous forms) is permitted. The preprocessing tokens after include in the directive are processed just as in normal text. ... The directive resulting after all replacements shall match one of the two previous forms.


Suppose you have a legacy source file that you want to write some unit tests for. For example:
/*  legacy.c  */
#include "wibble.h"
#include <stdio.h>

int legacy(void)
{
    ...
    info = external_dependency(stdout);
    ...
}


First create a file called nothing.h as follows:
/* nothing! */
nothing.h is a file containing nothing and is an example of the Null Object Pattern. Then refactor legacy.c to this:
/* legacy.c */
#if defined(UNIT_TEST)
#  define LOCAL(header) "nothing.h"
#  define SYSTEM(header) "nothing.h"
#else
#  define LOCAL(header) #header
#  define SYSTEM(header) <header>
#endif

#include LOCAL(wibble.h)  /* <--- */
#include SYSTEM(stdio.h)  /* <--- */

int legacy(void)
{
    ...
    info = external_dependency(stdout);
    ...
}


Now structure your unit-tests for legacy.c as follows:
First you write the fake implementations of the external dependencies. Note that the type of stdout is not FILE*.
/* legacy.test.c: Part 1 */

int stdout;

int external_dependency(int stream)
{   
    ...
    return 42;
}
Then #include the source file. Note carefully that we're #including legacy.c here and not legacy.h.
/* legacy.test.c: Part 2 */
#include "legacy.c" 
Then write your tests:
/* legacy.test.c: Part 3 */

#include <assert.h>

void first_unit_test_for_legacy(void)
{
    ...
    assert(legacy() == expected);
    ...
}

int main(void)
{
    first_unit_test_for_legacy();
    return 0;
}


Then compile legacy.test.c with the -D UNIT_TEST option.
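
For example, assuming gcc (the output name legacy_tests is just my choice):

gcc -D UNIT_TEST -o legacy_tests legacy.test.c
./legacy_tests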

This is pretty brutal, but it might just allow you to create an initial seam which you can then gradually prise open. If nothing else it provides a way to create characterisation tests to familiarise yourself with legacy code.

The -include compiler option might also prove useful.

-include file
    Process file as if #include "file" appeared as the first line of the primary source file.


Using this you can create the following file:
/* include_seam.h */
#ifndef INCLUDE_SEAM
#define INCLUDE_SEAM

#if defined(UNIT_TEST)
#  define LOCAL(header) "nothing.h"
#  define SYSTEM(header) "nothing.h"
#else
#  define LOCAL(header) #header
#  define SYSTEM(header) <header>
#endif

#endif

and then compile with the -include include_seam.h option.
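
With this approach legacy.c only needs its #include lines changing; the conditional block lives in include_seam.h. A possible command line, again assuming gcc:

gcc -D UNIT_TEST -include include_seam.h -o legacy_tests legacy.test.c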

Fred Foster's Swing Tipping

is an excellent (out of print) book by Fred Foster (isbn 0-304-29467-5). As usual I'm going to quote from a few pages:
Imagine you are just lifting the hook gently but firmly into the fish and you've got the idea.
As soon as the bomb has settled, I tighten up to it in the normal way and set the swing tip. That's the starting position. I leave the bait there for one minute and then twitch it forward for the first time. ... I give my bait a twitch once every 30 seconds after leaving it for the opening minute.
On hard fished waters, my aim is always to fish as far out from the bank as the prevailing conditions will permit.
As I see it, the hook is always far more visible to the fish when the bait is suspended (as it mostly is with the float) than it is when the bait is on the bottom as it always is when swing tipping.
I prefer a 3 foot hooklength.
In my experience, accurate casting is more vital when swing tipping than in any other form of fishing I have known.
There isn't such a thing as a magic bait and if those who suspected it spent their time practising their tipping instead of wasting it on wild goose chases like this they'd start to give the likes of me a closer run for our money.
Accurate synchronization in the placing of the hookbait in relationship to the feed is one of the most important requirements if you are to achieve any real success.
It's a case of practice makes perfect until you can drop that bomb on the same spot every time.

Quality Software Management
Vol 2. First-Order Measurement

is the title of an excellent book by Jerry Weinberg (isbn 0-932633-24-2). This is the second snippet review for this book (here's the first). As usual I'm going to quote from a few pages:
The update cycle on the project control panel should be scaled to something less than the longest period of time the project can afford to be late.
Large projects always fail when their communication systems fail.
The slowdown of fault removal is a major reason why project times are underestimated.
In the end, it's not the observation that counts, it's the response to the observation. That's why Zen masters teach patience in response.
Culture makes its presence known through patterns that persist over time.
What power corrupts most thoroughly is the ability to make meaning of observations.
Incongruent behaviour is the number one enemy of quality, because it disguises what people truly value.
If you can see it you can review it.
The switch from cost observation to value observation is the strongest indication that an organization has made the transition from Pattern 2 [Routine] to Pattern 3 [Steering].
In my consulting, I frequently talk to managers who seem obsessed with cutting the cost of software or reducing development time, but I seldom find a manager obsessed with improving value.
No other observational skill may be more important to software engineering than precision listening.

Ian Heaps on Fishing

is an excellent (out of print) book by Ian Heaps (isbn 0-907675-02-6). As usual I'm going to quote from a few pages:
The best way to improve is to practice.
You must feed according to how many fish there are and how they are feeding.
There was a method though, and that was slowing the bait down… the caster needed fishing in a very different way to the maggot.
You must keep it as simple as possible: fish as efficiently as you can.
You must learn to feed on a regular basis so that the fish are charged up, and they get to know exactly when the next amount of food is going to arrive.
Especially on a river which is flowing, the most important thing is to get a feeding pattern going. Then, there are a thousand and one shotting patterns which will catch them.
If you see the float lifting up a bit when it should be bullying its way through the swim, it is not big enough to do the job.
For me, far too many people are obsessed nowadays with fishing light. All too often they do not experiment and put on enough lead.
Generally speaking, I have found that the more lead you can get on the line and still present the bait properly the more, and the bigger, fish you will catch.
Practice makes perfect, and there is no substitute for experimenting yourself.
If you put on a number of smaller shot and bunch them together, you can soon space them out if the feeding pattern changes.
John [Dean] did the same thing as everyone else - only he did everything just a little bit better.

The lady tasting tea

is an excellent book by David Salsburg (isbn 0-8050-7143-2). As usual I'm going to quote from a few pages:
It was a summer afternoon in Cambridge, England, in the late 1920s. A group of university dons, their wives, and some guests were sitting around an outdoor table for afternoon tea. One of the women was insisting that tea tasted different depending upon whether the tea was poured into the milk or whether the milk was poured into the tea.
What I discovered working at Pfizer was that very little scientific research can be done alone. It usually requires a combination of minds. This is because it is so easy to make mistakes.
Galton discovered a phenomenon he called "regression to the mean."
The numbers that identify the distribution are not the same type of "numbers" as the measurements. These numbers can never be observed but can be inferred from the way in which the measurements scatter. These numbers were later to be called parameters - from the Greek for "almost measurements."
Bliss invented a procedure he called "probit analysis." … The most important parameter his model generated is called the "50 percent lethal dose," usually referred to as the "LD-50." … The further you get from the 50 percent point, the more massive the experiment that is needed to get a good estimate.
If you are willing to settle for knowing the two parameters of a normal distribution within two significant figures, you need collect only about 50 measurements.
It is better to do mathematics on a chalkboard than on a piece of paper because chalk is easier to erase, and mathematical research is always filled with mistakes. Very few mathematicians work alone. If you are a mathematician, you need to talk about what you are doing. You need to expose your new ideas to the criticism of others.
In the deterministic approach, there is a fixed number, the gravitational constant, that describes how things fall to the Earth. In the statistical approach, our measurements of the gravitational constant will always differ from one another, and the scatter of their distribution is what we wish to establish in order to "understand" falling bodies.
No test can be powerful against all possible alternatives.
In the near disaster of the American nuclear power plant at Three Mile Island in Pennsylvania in the 1980s, the operators of the reactor had a large board of dials and indicators to follow the progress of the reactor. Among these were warning lights, some of which had been faulty and presented false alarms in the past. The prior beliefs of the operators were such that any new pattern of warning lights would be viewed as a false alarm. Even as the pattern of warning lights and associated dials produced a consistent picture of low water in the reactor, they continued to dismiss the evidence.
Kolmogorov called a sequence of numbers collected over time with successive values related to previous ones a "stochastic process."