This blog has moved. Go to SoftwareDevelopmentToday.com for the latest posts.

Tuesday, March 25, 2008

Proximity in the office is a sure way to increase communication (and the quality of it)

I get this question often: "How should we organize our office?" or the variation: "How can I prove to the people in my team that we should work in the same room?"

The reasons for working in the same room are many, but there are also some trade-offs. First, let's look at some indications of the effect of proximity (working in the same space, or co-location) on communication (and therefore on productivity - my assumption).

There’s some work done on "proximity" and its impact on communication and knowledge sharing.
Link to a Google paper.
This is related to "how people behave, and how that is affected by proximity". It is only tangentially about "knowledge sharing in an office environment", but it does point out that if you want people to behave "consistently" you need to put them close together (see table 13, the correlation between behaviour and proximity).

Book by Alistair Cockburn that mentions the cost of communication (and Osmotic Communication)
In the first edition of this book, chapter 3 is entirely about communication, its cost, and the impact of office layout on communication.

Again, there are no hard figures on productivity, but since we transform “knowledge/information” into customer value, it is hard to make the case that an office layout that makes communication harder can be good for business.

There are some caveats to this. For example, you need to consider the need for “silent hours”: hours in which no one is allowed to interrupt the rest of the room (if you have a team room, for example).

The company where I'm currently working has opted for a mixed layout. We have team rooms, but we don’t have a full open space where all teams would be located. This is seen as a way to balance the problem of interruptions against the need for communication.

How did you solve the trade-off between communication and avoiding too many interruptions?

at 10:19 | 0 comments

Monday, March 24, 2008

Separation of design and requirements may have destroyed more software projects than waterfall...

The people who convinced us to never mix design with requirements may have wrecked more software projects and destroyed more value in the software industry than anyone else!

Just think what happens when you separate requirements from design:
  1. You don't validate assumptions to more than a very superficial level (the "I don't care about the technology, just give me the business needs"-level)
  2. You never evaluate the system's constraints and how those may make the system possible or not (hardware, technology, etc.)
  3. You never think of how testable the system will be (as in "is it possible to ever prove that this requirement is implemented?")
  4. You don't write down the tests that will prove that the system works (until it is too late and you have to hire a gazillion testers to test the whole system using a script that was developed by people that never really participated in developing/defining the requirements)
  5. You never visualize how the system will be used by real users
  6. You probably - I'm being generous here - did not involve the testers/coders/designers in the requirements definition
  7. You spend way too much time disconnected from the reality of your business. Remember that our business is software development (as in coding/testing and getting feedback from real users), not requirements management (even though some tool vendors would have you believe otherwise).
Now, having been in the software industry for longer than I care to admit, I know why we had requirements documents back in the old days. The short story is: we did not know better. The long story is way too long to tell in one blog post, and involves some famous academic people and a junior process guy called Winston Royce.

During those requirement-writing sessions we tried our best to stay away from design discussions, because we knew that if we did go into design (which we wanted to do, and felt we should) the requirements would never be ready. So we decided to do the wrong thing, because we knew there was no way we would ever get the requirements written and the Requirements Gate passed if we did not.

Now that I look back, I find it amazing that we ever got the requirements right! Think about it: if you validate the assumptions you make only at a superficial level, you are saying to yourself "I know that there's no way I can prove that I'm writing the right requirements for my system to succeed". How's that for a smart way to do software development? But that's what we were taught at university (and still are today, to a great extent), and that's what our process trainers brainwash us to believe in.

This separation of Requirements and Design is, in my view, the single biggest reason for software projects to fail, and it is still happening today!

Define the most important requirements first, then worry about the rest

This is why it is so important to delay writing the low-priority requirements (as in "the ones you are not sure you need") until the last responsible moment (aka "delay commitment"). You should take an appropriate set of requirements that fits in one iteration (using the highest-risk, highest-value heuristic) and develop those. Get a slice of the system working; don't get caught in the useless (and never-ending) discussions that do not relate to the requirements being implemented (note that these requirements can also be non-functional, e.g. performance or security).
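To make the heuristic concrete, here is a minimal sketch of one way to pick an iteration's worth of requirements (my illustration only: the Requirement records, the 1-5 scores, and the capacity number are all hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    name: str
    risk: int   # 1 (low) to 5 (high): how uncertain are our assumptions?
    value: int  # 1 (low) to 5 (high): how much does the customer care?
    cost: int   # rough effort estimate in points

def next_iteration(backlog, capacity):
    """Pick requirements for one iteration, highest risk and value first."""
    # Sort so the requirements that teach us the most come first.
    ranked = sorted(backlog, key=lambda r: (r.risk, r.value), reverse=True)
    chosen, used = [], 0
    for req in ranked:
        if used + req.cost <= capacity:
            chosen.append(req)
            used += req.cost
    return chosen

backlog = [
    Requirement("login", risk=2, value=4, cost=3),
    Requirement("payment gateway", risk=5, value=5, cost=5),
    Requirement("report export", risk=1, value=2, cost=2),
]
for req in next_iteration(backlog, capacity=8):
    print(req.name)  # payment gateway, then login: highest risk/value first
```

Everything that doesn't fit stays in the backlog, unwritten in detail, until a later iteration proves it is actually needed.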

Once you have a set of requirements implemented you have validated (or disproved) the most important assumptions about your system because those are the ones that relate to the highest-risk items of your requirements list.

Since you have avoided thinking about all the requirements up front, you have also not lost any time discussing requirements that make no sense to implement, and that should therefore carry little (or no) weight in the major decisions for the system, like architecture, technology, etc.

Another important thing to consider is that when you think about design, you are in effect creating feedback for the requirements cycle. If you separate requirements from design, you are also removing all possibility of early (and much-needed) feedback into your requirements writing!

If there are times when I think I did stupid things in the past, this is one of them. Boy, we were dumb back then! (and many still are today...)


at 21:48 | 2 comments

Wednesday, March 19, 2008

Busy vs. Productive, which one are you?

An eye-opening piece about the biggest cliché in the work world: work smarter, not harder (or longer).

Do you see yourself in this?
Busy-ness is impressive. It puts you in the heat of the action. It gives you an elevated sense of importance. You’re always late for social engagements, barely have enough time for family get-togethers, and hardly get a moment’s sleep. Emails get exchanged, meetings fill up your schedule, worldwide teleconferences become the norm–there’s even the occasional hope of revenue exceeding expenses. You’re like a rock star without the music.
The problem is that working smarter takes a lot of mental effort: constant prioritizing; answering e-mails not when they arrive, but when you are ready to answer them; stopping to think about the long term and planning your present based on that.

In the spirit of gaining control over my work-week I've started using a process based on Scrum; I call this process Personal Scrum. I'll post some details of it over the next few weeks. It would be really good to get your input on what techniques you are using to manage your personal work-time. Let the community know by adding a comment to this post.

As for working smarter... try it out, it may even work for you. It certainly does for me.

at 20:01 | 0 comments

Admitting mistakes is the first step to learning, not just for you, but also for your team and company

Here is an excellent piece from the blog Evolving Excellence about how a worker at Toyota battled his fear of admitting a mistake and was rewarded by his peers and supervisor for disclosing, rather than hiding, the mistake he had made.

Admitting you made a mistake is a very important part of continuous improvement. The andon cord (a sort of error alarm) should be pulled as soon as a mistake/error/defect is created or found.

Finding mistakes is not a blame game in Lean thinking; it is a key part of finding ways to avoid mistakes altogether through poka-yoke, or mistake-proofing, of our work methods!
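Mistake proofing applies just as well to code as to factory work. As a small illustrative sketch (my example, not from the piece linked above), an interface can make the wrong call impossible instead of relying on people to be careful:

```python
from enum import Enum

class Temperature(Enum):
    """Poka-yoke: callers can only pass one of these values."""
    COLD = "cold"
    WARM = "warm"
    HOT = "hot"

def start_wash(temperature: Temperature) -> None:
    # A typo like "hott" can no longer slip through silently; anything
    # that is not a Temperature member fails immediately and visibly.
    if not isinstance(temperature, Temperature):
        raise TypeError("start_wash() requires a Temperature member")
    print(f"Washing at {temperature.value} temperature")

start_wash(Temperature.WARM)   # fine
# start_wash("hott")           # rejected: the mistake cannot hide
```

The point is the same as the andon cord: errors surface at the moment they are made, not downstream.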

Behind this willingness to show and learn from the mistakes we make are some concepts from the Toyota Production System (TPS):
  1. If the student has not learned, the professor has not taught.
  2. Most mistakes are caused by the situation or the system and not by people's incompetence or willingness to do their best.
  3. Respect for people (one of the key pillars of the TPS)
These concepts, together with other key concepts in TPS, allow people to concentrate and focus on continuous improvement instead of playing the very inefficient and unproductive blame game that mostly impedes learning.


Updated: with a link to the Respect for people principle on Toyota's website.

at 19:37 | 0 comments

Monday, March 17, 2008

Testing to script is waste, creative testing is extremely valuable

Testing is a hard job. Imagine this: you have to make sure an application of more than 2 million lines of code is ready to ship. It all depends on you and your team of 2 testers.

How do you do it? Well, one way is to make sure you cover all the possible use cases and test for those. But those can run into the thousands. There's no hope you can test all of those cases in a short period of time (let's say 3 months or so...). Well, now we just made it even more difficult: we have to release every 4 weeks. Oh, and did we tell you that we are changing the architecture as we go? Incrementally, of course, but nevertheless.

How would you cope with a situation like this? Yes, the answer is that you would not (just setting up my answer... wait for it). The real answer is that you must make sure you are never in this position!

How do you avoid being in the position of having to test a large piece of code, with large code changes ongoing, and still release every 4 weeks? Test automation. All tests that can be automated should be, and at all levels: unit, integration, system, performance, reliability, you name it.
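As a minimal sketch of what the lowest of those levels looks like (using Python's standard unittest module; price_with_tax is a hypothetical function, not from any real project), here is an automated unit test a build server can run on every check-in:

```python
import unittest

def price_with_tax(net: float, rate: float = 0.22) -> float:
    """Hypothetical production function under test."""
    if net < 0:
        raise ValueError("net price cannot be negative")
    return round(net * (1 + rate), 2)

class PriceWithTaxTest(unittest.TestCase):
    def test_adds_default_tax(self):
        self.assertEqual(price_with_tax(100.0), 122.0)

    def test_rejects_negative_price(self):
        with self.assertRaises(ValueError):
            price_with_tax(-1.0)

if __name__ == "__main__":
    unittest.main()  # run automatically on every build, not by hand
```

Tests like these run in seconds, every time the code changes, which is exactly what a 4-week release cycle with ongoing architectural change demands.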

The point is this: testers' brain power is wasted if all they can do (or are allowed to do) is test against a static specification, with a few tests added every 4 weeks. That's not the best way to get the most out of the smart people you have in your company. If you are not a tester, just imagine yourself having to go over the same 40-50 pages of tests every single iteration, month-in, month-out. How long would it take you to quit? I suspect not too long...

Additionally, if you consider the effect of familiarity (reading the same test cases 2-3 times a month for several months) on the quality of the testing, you quickly realize that manual testing against a script, over and over again, is the best way to get problems to escape even the most dedicated tester's eyes.

So, what next? Well, test automation is one part of the solution. The next step is to train your testers to be expert "breakers": their goal should be to find more and more ways to break your software. Specifically, ways you have not thought about!
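One cheap way to hunt for the cases nobody thought of is to throw randomized input at the code and check only the properties that must always hold. A sketch of the idea (normalize_username is a hypothetical function; real fuzzing tools go much further):

```python
import random
import string

def normalize_username(raw: str) -> str:
    """Hypothetical function under attack: trim, lowercase, collapse spaces."""
    return " ".join(raw.split()).lower()

# A "breaker" loop: random input, invariant checks, no fixed script.
for _ in range(10000):
    length = random.randint(0, 40)
    raw = "".join(random.choice(string.printable) for _ in range(length))
    result = normalize_username(raw)
    assert result == result.lower(), raw              # never returns uppercase
    assert not result.startswith(" "), raw            # never leading whitespace
    assert normalize_username(result) == result, raw  # normalizing twice changes nothing
```

Ten thousand inputs nobody wrote by hand will wander into corners a 50-page script never visits.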

The message is: testers are way too valuable and way too smart to have them spend their work-hours going over a brainless set of tests-to-spec. You will get a lot more from your test team if you automate the repetitive tasks and let them loose on your code.

This is, BTW, what Richard Feynman advocated when he reviewed the Challenger disaster in the 80's:
"(...) take an adversary attitude to the software development group, and tests and verifies the software as if it were a customer of the delivered product."


at 21:42 | 0 comments

Thursday, March 13, 2008

Adaptive Path discovers Apple's Mojo, but Toyota got there first

In a post called "Apple’s Design Process Through a Keyhole", the blog over at Adaptive Path mentions one technique used at Apple when designing products. The basic idea is that at Apple, designers come up with 10 possible designs for a new feature (and I bet more than 10 for a new product). Then they diligently choose the best 3 and continue to iteratively improve all 3 chosen options for some set period of time. Once they have worked for a while on all 3 options, they finally decide on 1 and perfect it.

Even though this seems "amazing" and "innovative" to the folks at Adaptive Path (and I bet they are not the only ones thinking that way), this is actually a very old technique called Set-Based Concurrent Engineering (SBCE, also used in software).

This technique is similar to techniques used in brainstorming sessions, where participants are encouraged to generate many ideas (broaden the horizon), improve on them incrementally by building on other people's ideas and enhancing them (improve on others' ideas), and finally to select the most appropriate idea for implementation (narrow and select).
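The shape of that funnel is easy to see in code. A minimal sketch (my illustration only: generate, improve, and score are stand-ins for real design work, here reduced to numbers):

```python
import random

def generate(n):
    """Stand-in for sketching n candidate designs."""
    return [random.uniform(0, 10) for _ in range(n)]

def improve(design):
    """Stand-in for one round of refinement on a design."""
    return design + random.uniform(0, 1)

def score(design):
    """Stand-in for evaluating how good a design is."""
    return design

# Broaden: start with 10 candidate designs.
candidates = generate(10)
# Narrow: keep the best 3 and keep refining all of them in parallel.
shortlist = sorted(candidates, key=score, reverse=True)[:3]
for _ in range(5):  # a set period of refinement
    shortlist = [improve(d) for d in shortlist]
# Select: commit to the single best design only at the end.
final = max(shortlist, key=score)
print(final)
```

The late commitment is the point: you pay for three parallel refinements, and in exchange you never bet the product on your first idea.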

Set-Based Concurrent Engineering is also used to ensure quality when a team (or set of teams) must meet a hard deadline (as in a deadline that cannot be changed), yielding a solution that is much better than if you just went with your first impulse/idea and tried to improve on that.

One of the key advantages for Apple in using this technique is that, by the time they get to the 3 mid-step ideas, they have actually synthesized the best points of the other 7 ideas into those select 3. And then they still improve on those!

Good to see that Adaptive Path picked up on this technique. I hope that many other UI/UX people start paying attention to this old but proven technique!


at 21:42 | 2 comments

Friday, March 07, 2008

Communications of the ACM does not have a proper review process for articles about Agile

I was quite stunned (to say the least) when I read an article published in Communications of the ACM (October 2006 issue). The article contains some fundamental misunderstandings about Agile Software Development and methodologies in general. Here are a few of the excerpts that caught my attention, followed by my comments on them:
"(...) agile development relies on ongoing negotiations between the developer and the customer for determining the acceptable levels of quality at various stages of development. How can we achieve a balance between fixed and evolving quality requirements?"

  • In this paragraph alone (page 42) there are several misconceptions:
  • First, many Agile methodologies (XP, for example) have a very strict focus on quality
  • Second, Lean (which is an inspiration for some Agile methods, like Scrum) holds that quality is non-negotiable
  • Third, quality is very much linked to the concept of DONE in Scrum. In Scrum, the team defines what DONE means, not the customer... Of course they talk to the customer, but quality is non-negotiable because we know that bad quality (even if the customer wants it) will lead to a very slow pace of development.
  • This paragraph is a clear testament to the authors' lack of understanding of Agile Software Development and its underlying (experimental) background.

"Agile environments, on the other hand, are more people-oriented and control is established through informal processes. What would be the appropriate balance between people- and process¬-oriented control in agile distributed environment?"
  • In this paragraph (page 42) they mix the communication-barriers problem (which is typical of distributed environments) with the level of ceremony (to use a Cockburn term) of a certain process. This is another basic mistake made by people who are not Agile experts. Agile development processes are "empirical" but "strict"; they are not "informal", even if an experienced team will hold the "strict" process as tacit knowledge, in which case the process will _appear_ informal.
"The practices that may be characterized as agile but disciplined have evolved in the three organization after repeated experimentation"
  • Experimentation is at the core of Agile. So from this conclusion (which they don't back up with evidence - they just state it) we could assume that the process these organizations are using is indeed an Inspect & Adapt type of process, which would fit the Agile set of values. However, they also use the phrase "agile but disciplined" to describe the practices, which in turn shows that the authors do not understand Agile at all. Look at XP: XP is an Agile methodology with a high level of discipline. There is no contradiction between Agile and discipline... Basic stuff, but the authors don't get it.

"Skeptical of agile development that does not include adequate upfront design, both Manco and Consult devoted the first two or three iteration of a project to finalize critical requirements and develop a high-level architecture."
  • (page 42) Having iterations for design only is not Agile. FDD (another Agile method) does advocate doing some upfront design, but it also states that you should move to coding quite soon after that and refine the design as you go. The biggest problem with this paragraph is that the authors don't define "adequate upfront design": this is a basic mistake and should never have been published...
"At Telco, the offshore team felt that such minimally documented requirements were more helpful than just informal communication."
  • Again, in this paragraph (page 43) they confuse communication with methodological philosophy. Documentation (as described in the article) is needed because of distribution, not per se. The Agile manifesto states that documents are also needed, but that you should put your effort into making communication work (face-to-face if possible) rather than having _everything_ carefully documented before coding. Another basic confusion and misconception is patent in this paragraph.

"Short-cycle but not time-boxed development"
  • (page 43) Time-boxing is not a requirement of Agile, but recent knowledge does advocate quite strongly that time-boxes are the right way to tackle software development. On the other hand, if the teams were not time-boxing, that means they were scope-boxing. I'd suggest going with FDD for those cases where the scope is fixed (BTW: FDD also uses time-boxes, but they are used to establish a rhythm for the development, not to define the final scope).

"Project leads and champions at Consult were on call almost round-the-clock via their Blackberries"
  • This phrase (page 44) suggests that even though some people on the project may have been using some Agile methods (or, more likely, practices), they were not following a key Agile principle (number 8): "Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely."
  • If we take the stand that the principles must be followed for the project to be considered Agile, then the conclusion in the article is just plain wrong. Note that it is possible to use some XP practices even in a waterfall project...
"The practices presented here demonstrate how a balance between agile and distributed approaches can help meet these challenges."
  • (page 46) Balance between agile and distributed? Jeff Sutherland has an article in which he describes how a fully Agile project (not a "balanced" one) can happen in a distributed environment. It is foolish to "balance" distributed and Agile: you cannot "do" agile, you have to "be" agile. If you are not following the principles and values (the practices are not mandatory), then you "are not" agile. This last phrase, in the conclusion section of the article, further reflects the authors' lack of understanding.
  • Jeff Sutherland's article on distributed agile: "Distributed Scrum: Agile Project Management with Outsourced Development Teams", Sutherland et al., Proceedings of the 40th Annual Hawaii International Conference on System Sciences (HICSS'07)

at 11:32 | 0 comments

Sunday, March 02, 2008

On the subject of problem solving, and problem root causes

Problem solving is key to the discipline of software development. What we do when we code/test/release a software product is essentially problem solving: we learn about a problem, we analyze the root causes, and then we come up with a solution.

This post by Brett Schuchert reminds us that very often the apparent causes of problems are not the real ones. In his example, a developer is blamed for "doing the wrong thing" when in fact the policies and rules in place force developers to do the "wrong thing". Often, problems in software development come from policies and rules, not from people wanting to do harm.

If you are analyzing a problem in your team right now, ask yourself: are there any policies or rules that may cause this behavior?

W. Edwards Deming suggested that in 94% of cases the deep root causes of a problem lie within the web of rules and policies put up to "regulate" the work (he called this the system), as opposed to being caused by people, whether on purpose or through incompetence.


at 09:41 | 0 comments

Why failing is learning, and learning is needed

This week we had a couple of world-class trainers and speakers at our company, and boy did we learn!

Dyson, the guy who re-invented the vacuum cleaner, said that failure is an essential part of innovation and learning. On his company's site he states that "15 years (...) and 5000 failed experiments" were required for the new Dyson vacuum cleaner to come to light. Think about it: 5000 failed experiments!

Would you say he is a failure because he failed so many times? No, failure was needed for the re-invention to happen!

Very often we hear in the software world: "Don't try it, do it!", or "Plan it better so that you don't fail!". These phrases are bad for us, and they are bad for business. Failure is the key part of learning, which in turn means that if you never fail you will never learn, and in the software business not learning is as good as being dead.

So this week was both a humbling and learning experience for me and I suspect for many other people at our company. Being face to face with world-class people and listening to their experiences made me (and I hope others) understand that even though we do know a lot, we are still learning. The future is bright with the lights of knowledge waiting to be discovered! Bring it on!


at 09:38 | 2 comments

 