The Method Grid

Hero is one of several methods we have of solving problems.  And, as you can tell, gentle reader, it’s not a particularly Agile method of solving problems.  Indeed, the term was originally coined as a foil, to differentiate carefully among a few standard problem-solving methods.

However, in our exploration, we found a few more methods, and discovered that you could categorize them neatly.  Let’s begin with hero.  When we explored the characteristics of a heroic approach, we identified two factors as the primary signifiers.

First, hero is about independent actors, each doing what they do well.  Perhaps they’re in a group, and perhaps not, but they remain independent.  Dirty Harry and John McClane (Die Hard) are iconic individual movie heroes.  The more recent Avengers movie shows us a group heroic effort.  Each hero does his or her own thing, brilliantly and independently (except for a few touching scenes in which they help one another for a few seconds).  But the independence is central to the heroics.  Each participant helps, but they each work solo.

Second, the heroic approach relies on expert intuition to solve problems.  Each participant is required to be an expert in order to solve the problem, and it is by virtue of their expertise that the problem gets solved.  In the case of the two cop-hero movies, it is their grit, determination, individual excellence, and long careers as cops that make them succeed.  In the Avengers movie, it’s their individual superpowers (a very impressive form of expertise).

If we were drawing a diagram, we might see hero sitting at the intersection of independent and intuition:

Optimizing / Grouping   Independent
Intuition               Hero

With hero as the first method, the second method we named is now generally referred to as Director.  Director is modeled after the approach of any legendary movie director.  Steven Spielberg is among the best of the modern examples.  How does Spielberg develop a movie?  He does it very differently than a hero would, both on the grouping axis and on the optimizing axis.

On the optimizing axis, Spielberg makes a plan.  Not only does he make a plan, he takes years to make a plan.  Get a thousand scripts.  Read them all.  Throw them all away, except the best two.  Rewrite and improve those scripts 30 times each.  Pick the better of the two.  Rewrite it another 30 times.  Nail down every shot location, every line, and every look in the movie.  And then?  Hire some actors to make it work.  Spielberg is a planner.  A darn good planner, but his fundamental method for excellence is nonetheless to plan exceptionally well.

On the grouping axis, Spielberg manages his group.  Not only does he have a plan, and share his plan with the team, but when the team doesn’t follow his plan to a tee, he sometimes cajoles, sometimes yells, sometimes even fires someone, and then gets back to the business of managing the details of all 7,000 participants in the movie-making process.

A director, to be successful, plans and manages his team.  A heroic group operates with intuition and independence.  These are hugely different.  Putting these two into the grid, we now have this:

Optimizing / Grouping   Independent   Managed
Intuition               Hero
Planning                              Director

Once we see this relationship, it becomes pretty easy to fill in the other two quadrants.  Intuitive-Managed is Supernanny or Gordon Ramsay or any other Managing Expert approach, including essentially all management consulting.

Planned-Independent is a lone caterer, prepping a meal for 500.  Alternatively, an individual carefully planning his or her stops on a cross-country driving vacation.

Our grid has grown:

Optimizing / Grouping   Independent   Managed
Intuition               Hero          Supernanny
Planning                Caterer       Director

This exhausts most normal methods for solving problems.  However, we assert there are more, and that they fit nicely into our grid.

Let’s take another method, and see if we can place it.

The notion of a high-performing team (HPT) has been discussed in management literature for the last 60 years.  What are the characteristics?  The three primary ones are group decision-making, highly competent individuals, and members who flow into one another’s roles as needed.  Rather clearly, such teams optimize their decisions based on intuition and expertise, but they organize in a completely different fashion from the independent and managed groups.  The simplest way to describe their method of organization is tribal.  The team forms a common bond with a common purpose and, in a very egalitarian fashion (while recognizing individual expertise), moves toward a solution as a tribe.

This expands the grid to look like this:

Optimizing / Grouping   Independent   Managed      Tribal
Intuition               Hero          Supernanny   HPT
Planning                Caterer       Director

But wait, there’s more (and it’s not sold in any store).  We know of another method for solving problems besides intuition and planning.  What does a scientist do?  A scientist proposes a hypothesis, and then runs an experiment to determine whether he or she is right.  Rather than planning a correct result, or relying on years of expertise and hard-won intuition, a scientist gets to the truth via feedback.  Michelson and Morley figured they’d measure the directional flow of the ether.  It turns out they couldn’t find one.  Indeed, that experiment, in which they attempted to measure something and failed, became one of the foundations of Einstein’s theory of relativity.  Run an experiment.  Measure the results.  Decide what to test next.  Run another experiment.  Eventually we converge upon the truth, but we get there only via feedback systems.  Expert intuition, and even perfect planning, are simply insufficient.
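
To make the feedback method concrete, here is a minimal sketch in Python (illustrative only; the “experiment” is just a stand-in function).  Each pass through the loop runs an experiment, measures the result, and uses only that measurement to decide what to test next:

    def converge_by_feedback(experiment, lo=0.0, hi=100.0, tolerance=1e-6):
        """Home in on an unknown threshold purely by running experiments.

        experiment(x) returns True when x is at or past the hidden
        threshold. We never plan the answer in advance; each measurement
        only tells us what to test next.
        """
        while hi - lo > tolerance:
            guess = (lo + hi) / 2        # decide what to test next
            if experiment(guess):        # run the experiment and measure
                hi = guess               # threshold is at or below the guess
            else:
                lo = guess               # threshold is above the guess
        return (lo + hi) / 2

    # A stand-in "world" holding a truth we pretend not to know:
    hidden_threshold = 41.7
    print(converge_by_feedback(lambda x: x >= hidden_threshold))  # ~41.7

No expertise about the hidden value is required, and no up-front plan would reveal it; the feedback loop finds it anyway.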

Moore’s Law

Note: I am posting this in the off chance you haven’t read something like it before.

Gordon Moore1 observed in 1965 that the surface area of a transistor was being reduced by 50% each year. The press called this observation Moore’s Law. It is especially significant to us because the transistor is the fundamental building block of the integrated circuit, the foundation of computation technology.

He predicted in 1975 that, for the foreseeable future, computer chip density would double every two years and with it computer power. At the same time, Moore observed that the cost to manufacture the computer chips was remaining relatively constant. If you bought your first new microcomputer in 1975, according to Moore’s Law you have observed the following: 

1977 – New computers are 2 times faster than mine in 1975
1979 – New computers are 4 times faster than mine in 1975
1981 – New computers are 8 times faster than mine in 1975
1983 – New computers are 16 times faster than mine in 1975
1985 – New computers are 32 times faster than mine in 1975
1987 – New computers are 64 times faster than mine in 1975
1989 – New computers are 128 times faster than mine in 1975
1991 – New computers are 256 times faster than mine in 1975
1993 – New computers are 512 times faster than mine in 1975
1995 – New computers are 1024 times faster than mine in 1975
1997 – New computers are 2048 times faster than mine in 1975
1999 – New computers are 4096 times faster than mine in 1975
2001 – New computers are 8192 times faster than mine in 1975
2003 – New computers are 16,384 times faster than mine in 1975
2005 – New computers are 32,768 times faster than mine in 1975
2007 – New computers are 65,536 times faster than mine in 1975
2009 – New computers are 131,072 times faster than mine in 1975
2011 – New computers are 262,144 times faster than mine in 1975
2013 – New computers are 524,288 times faster than mine in 1975

Do you have enough computational power to solve your business problems yet? 
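
If you want to check the arithmetic yourself, a few lines of Python reproduce the table above (a sketch; the only assumption is Moore’s two-year doubling period):

    # Moore's Law as arithmetic: chip density, and with it speed,
    # doubles every two years, so after n doublings a new machine
    # is 2**n times faster than the 1975 baseline.
    base_year, last_year = 1975, 2013

    for year in range(base_year + 2, last_year + 1, 2):
        doublings = (year - base_year) // 2
        print(f"{year} - new computers are {2**doublings:,} times faster than mine in {base_year}")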

Practically speaking, Moore’s Law has been in operation throughout the entire careers of most people in the software development industry. Our computer hardware is threatened with obsolescence every few years.

You can also think of Moore’s Law another way. In 1975 you would have paid $67,000 for computational ability you can buy today for a few cents. In the world of computers the golden triangle of faster, smaller, and cheaper is actually true.

Moore’s Law impacts us in ways we often don’t first realize. For example, answer the following question: How many computers do you own? Before you jump to a quick answer think for a moment. How many computers are in your car? In your home? In your kitchen alone (the microwave, stove, refrigerator, toaster oven, etc. all likely have computers inside)?  How many computers are you carrying on you right now?

Moore’s Law affects a lot more than just the “personal computer.” It has created a demand for customized computers and software for just about every electronic device imaginable: more devices, more computers, more software, more complexity in the technology that runs our society.

Are you developing software to run on your customers’ current computers? How many changes of operating systems and hardware should your software be designed to survive? What will it look like on the new iPad, on a larger cell phone, or in Google Glass?

To survive in this environment you need to make, and continually revisit, strategic business decisions, including which new technology to adopt, how much, and when. Yesterday’s answers may not be at all appropriate today, for the simple reason that you can no longer even acquire yesterday’s technology. This is a game you cannot get out of; some of us have tried.

1 Dr. Gordon Moore co-founded Intel Corporation in 1968 and served as CEO from 1979 to 1987.

Moore, Metcalfe, and Disruptive Technology

Context matters. The context for developing software products is a hyper-accelerated society. Technology advances at an ever-accelerating rate, and business is a direct beneficiary, and sometimes a casualty.  The world has seen more technological innovations in the past 50 years than in all previous human history combined. And there is no sign that the rate of change is letting up. In my lifetime, I’ve watched our society embrace and adopt the following significant new technologies at an ever-accelerating pace:

  • Personal Computers: 16 years to adoption by ¼ of U.S. population
  • Cellular Telephones: 14 years to adoption by ¼ of U.S. population
  • The Internet Browser: 7 years to adoption by ¼ of U.S. population
  • Tablet Computers: 3.5 years to adoption by ¼ of U.S. population
Do you see the trend? It is important to understand the context of the society we find ourselves in: an accelerated society. The following three principles help capture this context:

  • Moore’s Law
  • Metcalfe’s Useful Equation
  • Disruptive vs. Sustaining Technology

I will post on them separately, or if you are impatient you can Google them yourself. If you are going to do agile development, these are worth studying and understanding.
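
As a small preview of the second item: Metcalfe’s observation is that a network’s value grows with the number of unique connections it can support, n(n-1)/2 for n users, which grows roughly as the square of the user count. In code:

    def metcalfe_connections(n: int) -> int:
        """Unique pairwise connections among n users: n * (n - 1) / 2."""
        return n * (n - 1) // 2

    print(metcalfe_connections(10))    # 45
    print(metcalfe_connections(1000))  # 499500 -- value grows roughly as n**2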

Security Crisis

The Department of Homeland Security, US-CERT, and private organizations continue to raise concerns about the significant vulnerabilities that exist in U.S. Information Technology (IT) infrastructure (e.g., computers, operating systems, phones, software, servers, databases, and networks).

Our economy has become significantly dependent on IT infrastructure to conduct almost all business, and this trend continues to expand. Unfortunately, there is reason to believe that a highly coordinated and sophisticated attempt to disrupt the operations of this infrastructure could succeed. Clearly, our networks and computers are vulnerable to attacks in which even unsophisticated high school students can inflict more economic damage than a Florida hurricane.

Attacks come in many forms. Hacking is when an attacker gains direct control over a computer, usually by thwarting the log-in and firewall mechanisms. Viruses are self-replicating code fragments that infect computers automatically. Trojan horses are social tricks: hostile code that the user is fooled into executing because it appears to be something else.

The current response to these security threats is a reactive one, typified by updating virus software and downloading various application and operating system patches. The weakness of a reactive response is that it typically occurs only after an attack has been successful. It takes time to identify new attacks, it takes time to update the virus filters and patch the security holes in operating systems and other software, and it takes time to distribute these updates to all of the computers on the network. During all of this time, significant damage is being done to the economy.

The reactive response does provide increased security, but at great risk. Reactive responses require that the filters be continually updated, and because each new attack requires a customized response, even hundreds of people can’t properly keep up with all of the attacks. These reactive programs, in an attempt to combat the attacks, continue to grow in complexity and size, causing a significant reduction in machine performance; they also tend to introduce new defects and new vulnerabilities, and they negatively impact worker productivity.

We have been fortunate that no enemy has yet released a truly morphing virus, one that continually changes its form and method of attack. Such a virus resists all standard filter attempts. We have been fortunate that no enemy has tried more subtle attacks, such as changing just a few of the numbers in every spreadsheet on a machine and then deleting itself. These types of attacks could bring the information economy to a halt as companies spend trillions of dollars trying to sort good data from compromised data.

So, what conditions have most led to our current crisis of vulnerability? Interestingly enough, it is our historic strengths in mass production and uniformity that cause these vulnerabilities. Currently, all of our machines are fundamentally the same. If you can successfully infect one machine, you can successfully infect most of the machines. Hundreds of millions of machines in government, business, and private homes all have the same software installed, the exact same version; they look exactly alike. If you break one machine, you have successfully broken a hundred million machines.

Note that these vulnerabilities are almost impossible to eliminate simply through better programming.

We need a new solution: one that takes the hundreds of millions of existing machines that are all exactly alike and makes them all dramatically different, automatically. We need a proactive strategy that protects against infection before a new attack is even conceived. A proactive solution does not wait to see how computers are compromised and then add a new filter to stop that attack. Instead, the system leverages the best of encryption, advanced pattern detection, and proprietary polymorphic behavior (i.e., continually changing forms) to ensure that a virus, hacker, or Trojan has no place to go.

A proactive solution continues to work even if a machine is successfully attacked: that machine automatically identifies the attack and actively attempts to remove it and restore operations.

A proactive security world has all of the machines expressing themselves in different forms. If any individual machine is compromised, it is highly unlikely that the same technique can be used to attack any other machine. Such a system acts more like a human body, responding automatically to infection by changing its defensive forms until the hostile code can no longer survive. Our machines must actively fight infection.
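
As a toy sketch of the diversity idea only, not of the proactive system described above (the seed file name here is invented for illustration): give every machine its own locally generated secret and use it to encode local data, so a byte pattern that compromises one machine means nothing on any other.

    import os
    import secrets

    SEED_FILE = "machine_seed.bin"  # hypothetical per-machine secret location

    def machine_seed() -> bytes:
        """Create (once) and return this machine's unique random seed."""
        if not os.path.exists(SEED_FILE):
            with open(SEED_FILE, "wb") as f:
                f.write(secrets.token_bytes(32))
        with open(SEED_FILE, "rb") as f:
            return f.read()

    def diversify(data: bytes) -> bytes:
        """XOR data with a per-machine keystream; applying it twice decodes."""
        seed = machine_seed()
        return bytes(b ^ seed[i % len(seed)] for i, b in enumerate(data))

    secret = b"the same bytes look different on every machine"
    encoded = diversify(secret)
    assert diversify(encoded) == secret  # self-inverse on the same machine
    print(encoded.hex())                 # unique to this machine's seed

Real polymorphic defenses would go far beyond this, but the property to notice is that the encoded form differs on every machine running the same code.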

This will only be accomplished by taking a fundamentally different approach to the problem. We must leverage the processors already in the computer to full advantage as security processors. Security must become a fundamental property of the machine, not an afterthought downloaded from a virus software vendor. No virus updates and search patterns to download. No large teams attempting to respond to the latest attack. No single points of failure.

In a way, the machine becomes more self-aware, using the resources of the computer and its processors to build, monitor, and continually change defenses that are totally unique to that machine. In this way, almost all of the security holes and vulnerabilities that currently exist in our IT infrastructure can be closed.

Of course, these techniques must run quickly and compactly and must not require a lot of additional resources. They must scale both up and down the spectrum, allowing this style of security to be applied to all computational devices: mainframes, servers, desktops, laptops, hand-held devices, and cell phones.

Our economy has become too dependent on IT infrastructure to allow security to be handled haphazardly.

It is time for drastic change.

Mathematics, not Programming

The coming software productivity revolution will be based in the rigorous application of mathematics, not in clever programming tricks, “enterprise” Java, or language-du-jour “integrated” development tools.

MIT’s Technology Review reported that IT teams spend up to 80% of their budgets removing defects they themselves introduced into the code.1 Imagine the possible savings if a software product could be produced defect-free the very first time: at 80% rework, the same product could in principle be delivered for roughly one-fifth of the cost. The only way to achieve this is to have a mathematically rigorous process of creating software, a mathematically rigorous process of turning business needs into executable systems.

Although loath to admit it, most software developers will confess that the internals of their software systems have much more in common with a Rube Goldberg cartoon than a mathematical equation. This is unfortunate, for only the rigorous application of mathematics enables the rapid production of error-free software systems.

I’ve seen it done, repeatedly.

The day is coming, burning like a furnace, when traditional development will be chaff; that day will set it ablaze, leaving neither root nor branch.2

I look forward to that day.

-Tom

1 MIT Technology Review, “Why Is Software So Bad,” August 2003.

2 My homage to Malachi 4. Still waiting for the arrogant and evildoers to be chaff.

Don’t Offshore, Automate

Automation changes the nature of work. It improves productivity and significantly reduces defects by reducing opportunities for human error. Automation improves quality while decreasing costs.

This is true for manufacturing, and it is also true for software. New software products can be produced for significantly less money, in dramatically less time, with few or no defects, through extremely aggressive automation.

Automation is the future of software products, as surely as it has been the path for every other commodity industry. If you are not actively engaged in discovering how to make your products a part of the software productivity revolution, then it is definitely time to begin.

I’ve watched large corporate client after large corporate client offshore software development to reduce costs. May I make a suggestion?

Don’t offshore, automate. The answer to building software products faster, better, and less expensively is not cheap labor. The answer is eliminating most of the costly and error-prone manual labor altogether.

A study of over 30,000 software development projects reported that two-thirds experience major problems and over one-quarter fail outright. In one recent year alone, over 30,000 projects failed, wasting over $56 billion in investment.1 The rate of failure is so high, and the variation so great, that the success or failure of any given project is, to most managers, essentially random.

It is not surprising that sponsors are reluctant to support software development initiatives. It is not surprising that so many companies are eager to send software projects overseas, where at least they diminish the costs of failure.

Currently, market forces are acting on the belief that the future of software development is offshore cheap labor.2 The emphasis on the single characteristic of unit cost per programmer hour is tragically flawed and outdated.

Cheap labor diminishes costs, but it does not improve productivity or quality.

I am making a bold pronouncement: it is possible to eliminate 90% of the programming labor of most projects entirely, and I have the case studies to prove it.

Although cutting unit costs per programmer hour is a reasonable goal, the benefits gained from this approach are insignificant when compared to automating most of the programming and testing tasks and eliminating most manual labor entirely.

If you were going to dig a tunnel from England to France, would you seek to hire 5,000 Indian laborers and arm them with picks and shovels? They are really cheap per day!

No – it is an insane way to dig a tunnel. It’s an insane way to build software. Eventually the industry will wake up. But it hasn’t yet. So if you learn the secrets of automation you can be way ahead of your competition.

Cheap labor WAS NOT the most efficient way to build the Chunnel.

Cheap labor IS NOT the most efficient way to build software.

Automation is.

Automation makes the current trends in off shoring software development irrelevant.

This is my notice to the software industry: it is time to seriously raise your game.

1 “Standish Group Chaos Report,” 2003.

2 Wired, “The New Face of the Silicon Age,” February 2004.

Experiments in Standup

As Agilists, we all do standups.  Three questions.  What did you do since the last standup?  What will you do before the next one?  Do you have any impediments?  And maybe a fourth: Do I need to have a parking lot conversation after the standup about anything?

Or at least, we all try to do them.  Oftentimes, we run into difficulties.  Sometimes the scrum master tries to lead the standup, and take reports.  Sometimes the standup turns into a set of conversations.  Sometimes the team drones on.  Sometimes even the scrum master is unable to prevent herself from asking questions of the folks reporting.

Is there anything that can be done about these problems?  Funny you should ask.

In the last few months we’ve seen a few interesting experiments tried in our standups.  The simplest standard experiment is to choose a talking token, and to require that the only person who gets to talk is the one holding the token.  In general, this is a good practice, and if the standup has any tendency to revert to conversation rather than short reporting to the team, the talking token is a generically good idea.

A step in that direction that went very well for some of our teams over the last year occurred when we were trying to get remote workers to report and participate during the standup.  We had difficulties getting the Polycom system to work, so we ended up using a cell phone and calling the shared number.  The person reporting would hold the cell phone and speak into it, allowing the remote workers to hear what was being said.  As a side effect, it was among the most well-obeyed talking tokens I’ve seen.

A different step in that direction came up when one of our most conscientious scrum masters found that she couldn’t prevent herself from asking questions during standup, and also determined that not everyone was following the rules.  Her response was to print up a set of four index cards, each with one of the 3+1 questions on it.  The new rule on that team: not only can you not talk unless you’re holding the deck of cards, but you can only talk about the card you’re showing.  It makes standups extremely focused.

Also, because one of our standup quality metrics is that we do four days a week of correct standup behavior, that same team has Cowboy Fridays, in which they do a 90%-correct standup but relax a little and allow themselves to not follow the rules perfectly.

But the last experiment may be the best.  An awful lot of scrum teams have difficulty with members just yammering on during the standup, rather than talking about which tasks they accomplished and which tasks they will be working on.  The trick we’ve recently tried to address this concern (besides the normal bribery and threats) is to attempt silent standups.

The rules: do your standup correctly, but with no spoken words (or sign language).  Move tasks from WIP to complete or from not started to WIP, and update their hours.  Also, write up impediments and parking lot items in the right locations.  We’ve tried a few of these, and they are incredibly good at forcing the team back to the task-focused nature of standups.  It might be the best experiment we’ve tried related to standups.  Also, silent standups end up being really short.  A modification would be to do just Today/Tomorrow silently, and allow impediments and parking lot items to be vocalized.

While I wouldn’t suggest that silent become the norm, doing it once every week or two really reminds teams to focus on talking to tasks, to skip talking if they have nothing to say, and to put up tasks that are being missed in normal discussion.

What else have you seen work to make standups better?

The Process Trap

It is an easy trap to fall into: a project somewhere in the company has a few struggles, or even outright fails, and management sends in a team to “fix the process.”

After studying the failure, the team suggests a new document or checklist so that other teams will not have this problem in the future.

Problem solved.

Unfortunately, this solution likely exacerbates the problem. The solution was yet another checklist, yet another way to remind people there is something they need to be thinking about, yet another form to fill out so the team is reminded: “do not forget this important thing.”

The Agilist understands there is another way, a better way, to help teams be successful.

First, keep the tribe together, dedicated and co-located, so important knowledge about the system is maintained in the group automatically over time. Keeping the tribe together eliminates the need for a lot of documents and checklists.

Second, pair everybody. Ensure key knowledge is spread across the entire tribe by pairing and eliminating specialists, so everybody has an opportunity to be exposed to core issues.

Third, if something is so important that you are inclined to put it on a checklist for the team to constantly review, then AUTOMATE the test for it. If you are worried you need to remember something, eliminate the worry by automating a test for it, as in the sketch below.
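
For instance, suppose a post-mortem produced the checklist item “every outbound HTTP call must set a timeout.” Here is a minimal sketch of automating it as a test in Python’s pytest style (the src/ layout and the rule itself are hypothetical, and the string scan is deliberately simplistic):

    import pathlib
    import re

    # Matches simple single-line requests.get(...)/requests.post(...) calls.
    HTTP_CALL = re.compile(r"requests\.(get|post)\([^)]*\)")

    def test_all_http_calls_have_timeouts():
        """Fail the build whenever the checklist rule is broken."""
        for path in pathlib.Path("src").rglob("*.py"):
            for match in HTTP_CALL.finditer(path.read_text()):
                assert "timeout=" in match.group(0), (
                    f"{path}: HTTP call without a timeout: {match.group(0)}"
                )

Once the rule runs on every build, nobody has to remember it, and the checklist item can be deleted.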

UNTIL YOU HAVE DONE ALL THREE OF THE ABOVE, DON’T CREATE NEW CHECKLISTS!

There is a really important reason why; care to guess what it is?


Hero

An extremely popular approach to business, almost iconic in the United States, is one we’ll call Hero. Hero is what most people are actually doing when they say they are being Agile. The two sometimes look alike, but they are fundamentally different.

Companies that take a Hero approach while thinking they are doing Agile will truly struggle to be Agile.

So what is Hero?

To illustrate Hero we use the 1971 Clint Eastwood motion picture, Dirty Harry.

In the movie, a killer threatens to randomly kill a person each day unless the city of San Francisco pays him a ransom. The Chief of Police and the Mayor of San Francisco assign “Dirty” Harry Callahan the task of leading the investigation.

Why Dirty Harry?

Because he gets things done… he is a hero you can rely on. He doesn’t care about rights, doesn’t follow rules, breaks laws… in order to get the job done. A more recent example of this type of character in popular fiction is terrorist fighter “Jack Bauer” in the TV series 24.

The Hero Approach

The Hero is a maverick. Independent. Confident. Highly skilled. He is driven by his own internal moral compass and doesn’t let rules get in his way.

Americans like Heroes, and our biggest-grossing films almost always feature “Super Heroes.” A business philosophy built around heroes says: we make our own rules to get the work done. We do not constrain our teams with bureaucracy and paperwork, but leave them to themselves to discover the best way to succeed.

Laws don’t really apply to super heroes. Or to software heroes.

It is a compelling way to work, and some of the most interesting American companies and products were birthed in a heroic fashion. Very significant companies such as Apple, Google, and Facebook followed a hero launch pattern.

What does Hero look like in business?

A small group of people working together in a small room or garage dedicated to a specific task. Apple and Google started in garages, Facebook in a dorm room.

Smart, dedicated, passionate people working closely on a project they love. Send pizza and Mt. Dew into the room and hope something great comes out.

Large businesses actually ask for this type of thing all the time. “Give me a war room and get out of my way,” executives are known to say when something they think is really important needs to be done.

Of course, the executives also steal the best people from every other project to put in their “war” room. Heroic methods require, no, demand heroic staff.

However, Hero has some downsides. Care to guess what they may be?


Uncertainty

An attribute of Agile software development I have been pondering lately is uncertainty.

We might argue that uncertainty is just one of many sources of friction, but because it is such a pervasive trait of software development, I will treat it singly.

All actions in our software development life cycle take place in an atmosphere of uncertainty. Uncertainty pervades our operations in the form of unknowns about the competition, about the environment, and even about our own business. While we try to reduce these unknowns by gathering information, we must realize that we cannot eliminate them—or even come close. The very nature of business makes certainty impossible; all actions in business will be based on incomplete, inaccurate, or even contradictory information.

Business is intrinsically unpredictable. At best, we can hope to determine possibilities and probabilities. This implies a certain standard of executive judgment:

What is possible and what is not?

What is probable and what is not?

By judging probability, we make an estimate of our competitor’s designs and act accordingly. Having said this, we realize that it is precisely those actions that seem improbable that often have the greatest impact on our outcomes.

Because we can never eliminate uncertainty, we must learn to act effectively despite it. We can do this by developing simple, flexible plans; planning for likely contingencies; developing standing operating procedures; and fostering initiative among subordinates.

One important source of uncertainty is a property known as nonlinearity. Here the term describes systems in which causes and effects are disproportionate. Minor incidents or actions can have decisive effects. Outcomes of battles can hinge on the actions of a few individuals, and as Clausewitz observed, “issues can be decided by chances and incidents so minute as to figure in histories simply as anecdotes.”

By its nature, uncertainty invariably involves the estimation and acceptance of risk. Risk is inherent in business and is involved in every project. Risk is equally common to action and inaction. Risk may be related to gain; greater potential gain often requires greater risk. The practice of concentrating business resources toward the main effort necessitates the willingness to accept prudent risk elsewhere. However, we should clearly understand that the acceptance of risk does not equate to the imprudent willingness to gamble the entire likelihood of success on a single improbable event.

Part of uncertainty is the ungovernable element of chance. Chance is a universal characteristic of business and a continuous source of friction. Chance consists of turns of events that cannot reasonably be foreseen and over which we and our competitors have no control.

The constant potential for chance to influence outcomes in our business initiatives, combined with the inability to prevent chance from impacting plans and actions, creates psychological friction. However, we should remember that chance favors no one exclusively. Consequently, we must view chance not only as a threat but also as an opportunity, which we must be ever ready to exploit.

(Note: This is an exercise in rewriting existing text created for another purpose. Any guess as to the source material for this post?)