Talking to your Team Lead about Stepsize

Stepsize works with organisations like yours to unlock unprecedented improvements in velocity, quality and employee retention.

Proving impact with Stepsize

Getting the most out of Stepsize requires both the tool and a process.

In this guide, we'll describe a tried and tested experiment that you can run with your team to prove the impact of Stepsize.

When your team lead understands the full impact of managing issues within the codebase, you're taking an important step towards unlocking a radically better way of working.

This will allow them to prioritise the technical issues that your team thinks are most important, while aligning them with company objectives for maximum impact.

Expected results from your experiment

Creating issues in your PM tool (like Jira) interrupts work. At the same time, conversations about issues often lack context.

When you use Stepsize...

  1. Engineers track more issues, because issue tracking happens natively in the codebase.

  2. Engineers are more aware of existing issues, for the same reason.

  3. Issues are much easier to prioritise, because they carry their full context and are directly linked to code.

On average, we expect that over the course of an experiment, engineering teams will...

The experiment

Track one, fix one.

  1. Select a time period for your experiment. This could be 1-2 weeks or a sprint, for example.

  2. Get on the same page. Hold a meeting to kick off the experiment.

  3. Track one. Each engineer documents at least one new issue over the time period.

  4. Fix one. Collectively agree on an issue to resolve as a team. We suggest holding a short meeting to do this.

If you can, record anything you observe that might help make your case, such as behavioural changes in how engineers use Jira.

Reviewing the experiment

  • Visibility - How many issues were created? How does this number compare to your normal number when using your standard issue tracker alone?

  • Awareness - How many issues were viewed? How does this number compare to your normal number when using your standard issue tracker alone?

  • Actionability - How many issues did you resolve? How easy were they to prioritise (e.g. using the codebase visualisation; see our guide for more information)?

What qualitative data do you have? For example:

  • What feedback did the engineers have?

  • What changes did you observe in the quality of issues?

Talking to your Team Lead

Before you discuss running this experiment with your team lead, ask yourself...

  1. How might running an experiment like this empower your team to allocate resources better?

  2. What labels might be useful for your team? e.g. morale, velocity, security...

We can help

Stepsize works with organisations like yours to make the most of your engineering resources and manage tech risk better.
