More insights about stories
There are also other measures for effective research and development stories. While these will be the focus of a future article, here are two worth considering already:
- Thorough discussion and estimation within the development team before stories are taken into work.
- Collaboration among the developers, and with the product owner, during the implementation phase.
Takeaways
- The boundary between "how" and "what" moves up in the hierarchy, making stories and even epics more technical.
- New stories and epics are, more often than not, initiated by developers, while initiatives and some epics come from product stakeholders.
- Not all backlog items should be stories.
- There are bug tickets. From the product owner's perspective, the main difference between fixing a bug and other activities is that bug fixing doesn't really add new value to the product.
- Other types of backlog items can also be introduced. For instance, some maintenance activities, such as refactoring, are not really stories; they could be called tasks (see the sketch below).
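To make this taxonomy concrete, here is a minimal sketch of how the backlog item kinds could be modelled as data. The type and field names are my own illustration, not part of any particular issue tracker:

```python
from dataclasses import dataclass
from enum import Enum, auto

class ItemKind(Enum):
    STORY = auto()  # delivers new value: knowledge or a feature
    BUG = auto()    # restores intended behaviour; adds no new value
    TASK = auto()   # maintenance work such as refactoring

@dataclass
class BacklogItem:
    title: str
    kind: ItemKind
    why: str  # the motivation behind the item

# Example: a maintenance activity that is a task, not a story.
refactoring = BacklogItem(
    title="Refactor the training data loader",
    kind=ItemKind.TASK,
    why="Reduce maintenance cost; no direct product value",
)
```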
Stories: knowledge and feature
Stories can be further classified into "new knowledge" and "new feature" stories.
Let me explain.
The high uncertainty of whether a machine learning idea will work often makes it risky to combine the feasibility investigation (and other kinds of research activities) with the implementation of that idea within the same iteration. Thus, epics contain multiple backlog items of different kinds.
If the team has an idea of how to improve its machine learning product, it often makes sense to create a "new knowledge" story, which has knowledge as its result. One important benefit of this approach is that many new ideas turn out to be ineffective, so it is good to carry out some investigation and experimentation, report on what has been discovered, present the results to the team and then iterate further.
As I mentioned before, an epic could end up with a negative result (i.e. the idea didn't work out). By separating the "new knowledge" and "new feature" stories, we make sure this is discovered as early as possible. And if the idea does turn out to be effective and feasible for the product, the distinction still helps to achieve the desired result sooner.
The "new knowledge" idea is somewhat contradictory to the notion of a user story, as it implies that every iteration of work should have a direct impact on the customer. However, I believe this is the "lesser evil" given the benefits that approach gives.
In the case of "knowledge" stories, it is especially important for the team to properly manage that knowledge in a clear, consistent and persistent way. In the next article, I'll reveal how we manage knowledge in TomTom Autonomous Driving machine learning teams.
An example of a "new knowledge" story would be:
- What: run a set of experiments for the "deep pothole" model optimization
- Why: prove that the pothole location accuracy can be further improved
- Who will benefit: developers
Acceptance criteria:
- The key internal metrics for pothole localization should improve on the evaluation dataset
- The evaluation should take different road surface types and geographies into account (a sketch of such a check follows below)
- The results should be reported in written form
Yes, it's the developers who benefit most of all from that work.
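As an illustration of the evaluation criteria above, here is a minimal sketch of how such a check could look. The dataframe columns and the error metric are hypothetical and stand in for whatever internal metrics the team actually tracks:

```python
# Minimal sketch only: column and metric names are hypothetical.
import pandas as pd

def improved_everywhere(baseline: pd.DataFrame, candidate: pd.DataFrame) -> bool:
    """Check that the mean localization error drops for every road surface
    type and geography, not merely on average over the whole dataset.

    Assumes both frames cover the same (surface_type, geography) groups.
    """
    key = ["surface_type", "geography"]
    base = baseline.groupby(key)["localization_error_m"].mean()
    cand = candidate.groupby(key)["localization_error_m"].mean()
    # The two Series align on the (surface_type, geography) index.
    return bool((cand < base).all())
```

Breaking the comparison down by surface type and geography guards against a model that, say, improves on asphalt but regresses on gravel roads.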
Then the "new feature" story would look like this:
- What: implement the "deep pothole" model optimization
- Why: to improve the pothole location accuracy
- Who will benefit: HD Map users
Acceptance criteria:
So far, I haven't mentioned the definition of done (DoD). Unlike the acceptance criteria, the DoD is not specific to one story; rather, it is a generic checklist of everything that must be done before a story of a certain kind can be considered complete. That checklist is created and maintained by the team and is usually project-specific. For "knowledge" and "feature" stories, the DoDs are also expected to differ, for obvious reasons.
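To illustrate how the two DoDs might diverge, here is a hypothetical pair of checklists expressed as plain data. The individual items are my own examples, not an actual team's DoDs:

```python
# Hypothetical examples of differing DoD checklists for the two story kinds.
DOD_KNOWLEDGE_STORY = [
    "Experiments are reproducible (code, data and parameters are versioned)",
    "Findings are written up and linked from the ticket",
    "Results were presented to the team",
]

DOD_FEATURE_STORY = [
    "Code is reviewed and merged to the main branch",
    "Unit and integration tests pass in CI",
    "Key model metrics are monitored in production",
]
```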
Wrap-up
In the first article of the series, I explained the "why", the conceptual part of the "Agile machine learning" mindset. In this article, we focused on the "static" side of the story, namely, how to structure, divide and formulate the work for machine learning research and development teams.
Next time, I will unveil some "dynamic" aspects. I will tell you how to apply specific Agile methodologies like Scrum and XP in a machine learning setup. I will also share some home-grown best practices for gathering, growing and managing knowledge in order to apply research ideas in real software projects.
Stay tuned!