Tuesday, January 7, 2025

Features vs Goods, or Estimations vs Real Effort, and the iterative nuances of software development

Here are the classic forms of sentences circulating in the corridors of (almost) every single team in the world:

- "When will feature Y be ready?"'

 - "When will this feature X be delivered?"

- "How much effort will feature Z require?"

One would think there is nothing abnormal about wanting to (effectively) "plan ahead" and know the future. Well, in my experience, these questions have at least one (inconclusive) valid answer: "When it's ready". And this answer defeats the question, of course :).

In my experience, also, providing good estimations (note: _good_ estimations, not *exact* estimations) is a hard hard-problem (yes, it's twice as hard). There are in fact two effort values at play: first, a tentative / speculative one, issued _before_ the implementation; and second, the *real time spent*, which accrues throughout the realization / implementation of a given feature X.
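To make that distinction concrete, here is a minimal sketch (all names and numbers are invented for illustration): one value is written down once, up front; the other only exists after the work has been done.

```python
from dataclasses import dataclass, field

@dataclass
class FeatureEffort:
    estimate_hours: float  # speculative, issued before any work starts
    logged_hours: list = field(default_factory=list)  # accrues while implementing

    @property
    def actual_hours(self) -> float:
        # The "real time spent" only exists after the fact.
        return sum(self.logged_hours)

fx = FeatureEffort(estimate_hours=16)
fx.logged_hours += [4, 6, 3, 5]            # real time spent, day by day
print(fx.estimate_hours, fx.actual_hours)  # 16 18 - the two rarely match
```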

And estimating is a science of its own, really. In fact, there are different types of estimations: some exist both in waterfall and in agile / scrum (hard estimations), others only in agile (guesstimations). A guesstimate is exactly what the name says: a guess, at best. It is usually the _first_ value you get out of a developer who just got caught in the "How long will this feature take to implement?" type of trap question. Given that it's provided out of the blue, by a developer who is most likely focusing on something else and knows close to zero of the details (which are all required to get a true idea of how long it might actually take), a guess is all it can be.

How are estimations born? If you ask a carpenter:

"How long will it take you to build me a house? " he will ask : "Well what kind of a house?". So it seems that in order to get more precise estimations, we need a certain level of detail. This is "easy" in the non-software world, in a way that things are standardized. Materials are known, tools are known and usually simple to use; processes are clear and straightforward; tasks are solid a well-defined so they can be easily allocated. The carpenter will even know how many bricks will be required to build the house you want. Simple math. Done. But what are the bricks in software development?

In software, things are not as straightforward, mostly due to its very nature: software is abstract, and more importantly, software development is an *iterative* process. Even if you need to "just build another house (software)", the house (software) you will build is (usually) a totally different beast from the last one, starting with the materials, which might be something unknown and new, and the tools required to implement it. Also, software is not concrete in the sense that a house is - made of bricks that you can actually touch, carry around, and count.
For instance, every team member will have to shape their own understanding of a given feature. N team members? N perspectives on the specification of feature X; N _versions_ of feature X... Communication can mitigate this by getting everyone on "the same page", but that is challenging, especially when you are under time constraints (which is usually always the case). And how many features will you need in order to deliver the user stories in question?

So if it's already challenging to agree, within the team, on what feature X actually is, how can we even think about agreeing on an effort value for it?

Let's consider the waterfall and agile approaches.
Agile solves this with the help of User Stories and by slicing work into smaller items. Waterfall, by relentlessly ironing out every. single. detail. of the feature itself, of the _implementation_ of the feature, and (lastly, as it's waterfall) of the *testing* of the feature. Well then, planning helps, no? As we have seen, it does perhaps help in delivering (i.e. having something to ship); but it fails at making sure that what you deliver is actually something that can be used by the rest of the world - or, by the users.

Waterfall comes from the old factory lines, and as such it is suited for a production line, where the outcome is a good, not a product. We create _the same_ good, X times, _in the same way_ (which, by the way, must be the most efficient in terms of cost and time). Take cars, for example: every car _has_ to be the same. Same features: same number of windows, same number of wheels (thank you!), same number of pistons in the engine, and so on.

But features are _not_ the same as cars. Every. Feature. Is. Different. Let me write this again, in caps: *EVERY FEATURE IS DIFFERENT _AND UNIQUE_*! Thinking that one can find _one_ process to implement _any_ feature in _the same_ way is simply limiting, counterproductive and silly. Features are _not_ goods. An individual feature is more like a *model* of a car, which as such requires a specifically tailored *assembly process*. You see where this is going, right? And we haven't even started talking about the fact that the users might not really *want* to drive the models of cars the factory produces.

So as you can see, the space of possible outcomes for implementing feature X explodes in the software world even before we start implementing it (and we haven't even talked about the variety of tools / technologies yet!), which makes effort estimation even more challenging.

Now, how would a programmer know how much time it takes to implement feature X? Yes, there might be a spec, and predefined tools to be used (programming language, development environment, testing environment, etc.). But that's merely the starting point, as we now enter the _implementation process_.

A common implementation process can be described as a list of "activities" to be completed (in agile, referred to as the "Definition of Done"). It can look like this:

- The software has a clear and well-understood specification

- The software is implemented

- Relevant unit-tests are implemented as well, with a given coverage (say 80% if you like the number), and no unit-test fails, ever.

- The code for the given feature X is reviewed and merged into a main repository

- QA is run on the defined user stories.

The question now is: can this list of activities help us achieve a better estimation?

In fact, it provides some detail at a very generic level, which can be applied to any feature we might want to implement; and as such, it segments the effort nicely towards a - potentially realistic - estimation value: each developer can simply tell how much time it will take her to understand the spec, implement the feature, write the test cases, etc. This is the clear, straightforward side of the estimation process, based on a concrete list of facts (the "bricks", as it were).
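A minimal sketch of what such an activity-based estimation could look like, with the activities mirroring the list above (all hour values are invented placeholders):

```python
# Per-activity estimates for a hypothetical feature X (hours are placeholders).
activities = {
    "understand the spec": 4,
    "implement the feature": 16,
    "write unit tests (~80% coverage)": 8,
    "code review and merge": 2,
    "QA on the user stories": 6,
}

# The segmentation makes the total a simple sum of smaller, graspable parts.
estimate_hours = sum(activities.values())
print(f"Estimated effort for feature X: {estimate_hours}h")  # 36h
```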

But of course there are also some nuances here. These are a bit more subtle, as they show up only deep within the process of software development, and they are of an _iterative_ nature, unfortunately.
Four relevant insights can be clarified in question form:

- How much effort will it take for a developer to prepare the implementation of feature X?

- How much effort will it take to get all the tests to _pass_? 

- How much effort will each QA round take?

- How many rounds of QA will be required *before* the feature can finally be deployed?


These are the big unknowns that put the first "hard" into the "hard hard-problem", and these questions are the _bricks_ of the software development (house).
When you find answers to these, you answer the question "how many bricks do I need to build the (software) house?"
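One hedged way to put numbers on these unknowns is to not pick a single value at all, but to simulate a range. Here is a toy Monte Carlo sketch - every distribution and probability below is invented purely for illustration:

```python
import random

def simulate_feature_effort() -> float:
    prep = random.uniform(2, 6)        # preparing the implementation
    implement = random.uniform(8, 24)  # implementing + getting all tests to pass
    qa_round = random.uniform(2, 5)    # effort per QA round
    rounds = 1
    while random.random() < 0.4:       # assume a 40% chance each round finds issues
        rounds += 1                    # ... forcing yet another QA round
    return prep + implement + rounds * qa_round

samples = sorted(simulate_feature_effort() for _ in range(10_000))
median = samples[len(samples) // 2]
p90 = samples[int(len(samples) * 0.9)]
print(f"median: {median:.1f}h, p90: {p90:.1f}h")
```

The spread between the median and the 90th percentile is exactly where the iterative nature bites: the honest answer to "how much effort?" is a distribution, not a number.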

