The following is a very useful slide show which was presented in a QA Lead forum by a QA Manager. Please contact me if you want to contact the original author.
Realities
•Most “successful” projects were deliberately over-estimated at the start (Standish – 2001)
•64% of features in products are rarely or never used (Standish – 2002)
•The average project exceeds its schedule by 63% (Standish – 2001)
•50% of project failures are due to missing or misunderstood requirements (Ravenflow – 2006)
Estimation accuracy vs. effort
Why are we so bad at estimating?
•We plan by activity rather than by feature
– Think about typical project charts – do they show activities based around the value of features, or do they show activities to create the whole thing?
•Lateness gets passed down the schedule
– We need everything to go right to stay on schedule
– But only one thing late makes the rest late
•Activities aren’t independent
– We talk about the “project critical path” rather than the project critical feature set
•Partially done work has no value!
– 50% of the way into a project we normally have all features 50% done, rather than having 50% of the features completed.
If estimates are so bad, why get them?
•People in charge want to make sure we have a plan where we don’t look too foolish
•Estimates convey some information even if they are wrong
•We reduce risk by having a starting point rather than being foggy about everything
“The Problem”
We need estimates
BUT
we aren’t very good at estimating!
Sizing
What is “sizing?” How does it differ from estimating?
Definition of “estimate”
Key differences
Some questions for you …
•Do our sizes/estimates have to be accurate? Why or why not?
•What is an acceptable percentage of accuracy?
•Which is better, being consistent or being accurate?
•Which is easier to obtain, consistency or accuracy?
Consistency vs. Accuracy
•If a team says the size of a specific story is a 3, but it turns out it should have been a size 2, was the team accurate?
•The big question is: the next time a similar story comes up, will they learn and get it right, or will they over-react and get it wrong again?
– What does our personal experience tell us?
– Why don’t we get better at estimating as we learn?
•If they get it wrong again, when will they get it completely accurate?
•Wouldn’t we rather know that their 3 really just translated to a 2 and have them be consistent?
The verdict is in…
Many studies have proven that being consistent is better in the long run than continuing to try to get more accurate.
We also have a trump card to play…
What is velocity?
The amount of work a team can accomplish within a given time period
How does velocity help us?
• If we are consistent in our estimates, then velocity will be consistent as well.
• If we keep trying to get more accurate and become less consistent, then we won’t have a consistent velocity.
• If we have sizes, we can derive duration once we know velocity:
Total Size / Velocity = Duration
50 points / (20 points velocity) = 2.5 time periods, or 2.5 iterations
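The duration arithmetic above can be sketched in a few lines of Python; the story sizes and the velocity of 20 are made-up numbers chosen to match the slide's example:

```python
# Illustrative backlog: relative size points for each story (invented values).
story_sizes = [3, 5, 2, 8, 5, 3, 1, 8, 5, 5, 3, 2]
velocity = 20  # points the team completes per iteration (assumed)

total_size = sum(story_sizes)     # 50 points
duration = total_size / velocity  # Total Size / Velocity = Duration
print(f"{duration} iterations")   # 2.5 iterations
```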
Things to remember
•Eliminate waste
– The simplest solution that works
– Be relentless about eliminating any unneeded MRF, US, or Task
•Sizing needs to take into account everything required to get to “done”
– Coding
– Testing
– Documentation
– Other…
•Sizes are always relative to each other, not specific durations!
Performing Sizing
•Don’t allow things to be too fine-grained
– Causes people to agonize over size differences that are meaningless
– Tends to make people think more about time than relative size
•The best sizing sequence to use is 1, 2, 3, 5, 8, 13, 21, 40, 70, 100, 200, 300, 500, 800, 1500
– User stories are in the range 1–13
– Larger numbers are used for Epics or MRFs
– We size almost exclusively at the story level
•Don’t struggle – if it’s bigger than a 5, it’s an 8!
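One way to read the “if it’s bigger than a 5, it’s an 8” rule is as rounding any disputed size up to the next value in the sequence. A minimal Python sketch (the function name is ours, not from the slides):

```python
# The sizing sequence from the slides.
SIZES = [1, 2, 3, 5, 8, 13, 21, 40, 70, 100, 200, 300, 500, 800, 1500]

def snap_up(raw):
    """Round a proposed size up to the next allowed value
    ('if it's bigger than a 5, it's an 8')."""
    for s in SIZES:
        if raw <= s:
            return s
    return SIZES[-1]  # nothing is sized beyond the top of the scale

print(snap_up(6))  # 8
print(snap_up(3))  # 3
```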
Some goals
Should be done quickly (agile principles – eliminate waste, optimize the whole)
Accuracy should be “good enough” based on accuracy-vs-effort data
Size should represent the consensus of the entire team
Sizes are all relative, not absolute
Introducing Planning Poker
•Each participant gets a deck of estimation cards (A, 2, 3, 5, 8, K)
•The moderator (usually the Product Champion or Scrum Master) presents one user story at a time.
•The Product Champion answers any questions the team might have.
•Each participant privately selects a card representing his or her estimate.
•When everybody is ready with an estimate, all cards are presented simultaneously.
•In the (very likely) event that the estimates differ, the high and low estimators defend their estimates.
•The group briefly debates the arguments.
•A new round of estimation is made.
•Continue until consensus has been reached.
•The moderator notes the estimate, and the group continues with the next user story.
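The reveal-and-repeat loop above can be sketched as a toy simulation; the card values are invented and this is not a real planning-poker tool:

```python
def poker_round(estimates):
    """One planning-poker reveal: consensus if all cards match,
    otherwise report the high and low estimators who must defend."""
    if len(set(estimates)) == 1:
        return estimates[0], None, None
    return None, max(estimates), min(estimates)

# Round 1: cards revealed simultaneously; no consensus, so the
# 8 and one of the 3s defend their estimates.
consensus, high, low = poker_round([3, 5, 3, 8, 3])
assert consensus is None and (high, low) == (8, 3)

# Round 2, after the debate: everyone shows a 5, so 5 is recorded.
consensus, _, _ = poker_round([5, 5, 5, 5, 5])
print(consensus)  # 5
```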
How to get started
•Question: Everything is relative, so what are we relating to with the first size?
•Answer: It doesn’t matter!
– If you pick a size for the first item, everything afterward becomes relative to that item and it will all just work out
– You can quickly pick the smallest story out of a group, call it a 1, and everything after will work out again
– You can pick something that seems average, call it a 3 or a 5, and again everything after will just work out
Why do it this way?
•It’s easy
•It’s fast
•We get input from the entire team
•It’s fun (and can cause some good-natured laughter)
Now we can plan!
• The Product Champion takes sizes and business value to determine if a feature is worth doing
• The team and Product Champion can make an initial guess about what can fit in a release or in an iteration
Iterations
After the initial guess about scope is created, we still have some work to do.
The process
For each story in an iteration the team will:
•Ask any clarifying questions
•Break the work to complete the story into tasks
– Ideally tasks are things different people could do
•Estimate each task for duration using a very simple scale – .5, 1, 1.5, or 2 days are the allowable task durations
•The task list must be everything required to make the story be “done”
•Everyone on the team checks the tasks and durations to make sure they are in agreement
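The restricted duration scale is easy to enforce mechanically; a small sketch, with an invented task breakdown for one story:

```python
# The only allowable task durations, in days, per the process above.
ALLOWED = {0.5, 1.0, 1.5, 2.0}

# Hypothetical task list for one story (names and durations invented).
tasks = {"write code": 1.5, "unit tests": 1.0, "update docs": 0.5}

# Any task outside the scale should be re-split before planning continues.
assert all(d in ALLOWED for d in tasks.values())

total_days = sum(tasks.values())
print(total_days)  # 3.0 task days for this story
```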
Checking the work
•Once all the stories are broken into tasks, the total task duration for each story is calculated
•For each story, a number representing task days per size point is calculated
– For example, if a story is a size 3 and the task estimates add to 4 days, the number calculated would be 4/3, or 1.33
•After all stories have this number calculated, any story with a number that is not near the average is re-examined
– Does the story need to be re-sized? (do it)
– Did we forget some tasks? (add them)
– Is it just rounding error, and it is OK?
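The cross-check above is simple enough to automate. In this sketch the stories and their numbers are invented, and the 50% cutoff for “not near the average” is our arbitrary choice:

```python
# Made-up stories: name -> (size points, summed task-day estimates).
stories = {"A": (3, 4.0), "B": (5, 6.5), "C": (2, 6.0)}

# Task days per size point for each story.
ratios = {name: days / size for name, (size, days) in stories.items()}
avg = sum(ratios.values()) / len(ratios)

# Re-examine any story whose ratio strays more than 50% from the average.
outliers = [n for n, r in ratios.items() if abs(r - avg) > 0.5 * avg]
print(outliers)  # ['C'] — re-size it, or look for forgotten tasks
```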
The final step
Changing the size of a story may affect its priority
• The Product Champion needs to decide
• Asking questions is allowed (both ways)
Once the sizes and tasks appear correct to the team, they make their final commitment
When to re-size
•When a previously sized Epic, MRF, or User Story changes, it MUST be re-sized.
•When the team learns they have incorrectly sized a specific type of user story, they have the right to re-size stories of that type to be consistent with their new knowledge.
•When a story “grows” and is being split, the two pieces need to be sized separately to match the reality of what the team knows.
Why not just estimate task duration?
• It takes longer
• Sizing is usually accurate enough
• It generates extra work for items that may not end up being in scope
• It’s useful to be able to double-check the work!
The takeaway phrase