
Technical Leadership

This article describes the infrastructure and the methodologies we put in place for managing the technical leadership of a project with

  • a team of 25 people
  • a stakeholder audience of 270 people

Initially, SpazioCodice’s team was an independent development cell within the project. Gradually, we were asked to manage the entire development cycle in terms of

  • Planning
  • Monitoring
  • Software Repository Management
  • Release and Version Management
  • Automated Testing

Here’s a detailed description of how we organized the above.

What did the customer expect from us?

The customer wanted us to provide a mix of technical and management skills:

  • team management
  • release management
  • planning
  • monitoring and reporting
  • observable and reliable environments where each feature or fix could be easily verified after it is implemented 
  • automatic (re)installation of the environments after a feature is merged
  • automated, centralized, historicized, and observable builds on all branches (develop, release, work) 
  • notifications through a Slack channel

How did we fulfil those requirements?

The following sections summarise the main elements of our approach. Each of them is a vast topic in itself, so we wrote only an essential summary, which should still give a good idea of each one.

In any case, we would love to hear questions, doubts, and feedback about them. Feel free to contact us.

Sprints

Sprints are an agile and flexible way to place small, verifiable milestones in a medium- or long-term project.

In our experience, a Sprint should have a reasonable duration in terms of weeks, where “reasonable” means

  • not too short; otherwise, meetings (see below) are too close to each other to be useful (e.g., people have little or nothing to say in a weekly retrospective)
  • not too long; a long Sprint necessarily includes a lot of work, which is hard to complete and verify

Although the exact meaning of “short” and “long” depends on the concrete project context, we usually set the Sprint duration to three weeks.

Plan, Track, and Release: JIRA

Atlassian JIRA is a very powerful tool for managing a software project. It offers many features (more than we use, at any rate) and integrations with external tools like software repositories and build systems.

Every relevant stakeholder (developers included) has an account in JIRA. The project is a Scrum project, organized in three-week Sprints.

The development workflow is simple. There’s a backlog, where any user can create tickets (bugs, ideas, requests) at any time, and a board that contains the amount of work planned for the active Sprint (i.e., moved into the Sprint before or during the planning meeting).
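As an aside, tickets don’t have to be created through the UI. Here is a minimal sketch of creating a backlog ticket programmatically, assuming JIRA’s REST API v2; the host, credentials, and “PROJ” project key are placeholders, not the project’s real ones:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

// Minimal sketch (not our actual tooling): create a backlog ticket through
// JIRA's REST API v2. Host, credentials, and project key are placeholders.
public class CreateTicket {
    public static void main(String[] args) throws Exception {
        String auth = Base64.getEncoder()
                .encodeToString("user@example.com:API_TOKEN".getBytes());
        String body = """
                {"fields": {
                  "project":   {"key": "PROJ"},
                  "summary":   "Search results are not paginated",
                  "issuetype": {"name": "Bug"}
                }}""";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.atlassian.net/rest/api/2/issue"))
                .header("Authorization", "Basic " + auth)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```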

The Sprint board includes five different ticket states:

  • TODO: tickets waiting for development
  • IN PROGRESS: tickets being worked on
  • REVIEW: the ticket output (i.e., code, analysis, findings) is verified by another person. The reviewer checks the code and its formal correctness and ensures there is good test coverage.
  • VERIFY: the feature/fix has been deployed in the verifiable environment; one person manually verifies its correctness. Before this phase, the original assignee writes detailed verification steps.
  • DONE: the feature/fix has been implemented, reviewed, and verified successfully. The task is completed.

For analysis tickets, the verification phase is skipped. For code tickets, the transition from one step to the next can happen only after the so-called “green build” (i.e., the build system executes the automated test suite and all tests pass).
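As a minimal sketch (illustrative only, not the actual project code), the board states and those two rules could be modeled like this:

```java
// Illustrative model of the board workflow described above:
// TODO -> IN PROGRESS -> REVIEW -> VERIFY -> DONE, with the "green build"
// gate for code tickets and the skipped VERIFY phase for analysis tickets.
public class Workflow {

    public enum State { TODO, IN_PROGRESS, REVIEW, VERIFY, DONE }

    public static class Ticket {
        private final boolean codeTicket; // analysis tickets skip VERIFY
        private State state = State.TODO;

        public Ticket(boolean codeTicket) { this.codeTicket = codeTicket; }

        /** Moves the ticket one step forward in the workflow. */
        public void advance(boolean greenBuild) {
            if (codeTicket && !greenBuild) {
                throw new IllegalStateException("Code tickets move only on a green build");
            }
            state = switch (state) {
                case TODO -> State.IN_PROGRESS;
                case IN_PROGRESS -> State.REVIEW;
                // analysis output is reviewed but not deployed/verified
                case REVIEW -> codeTicket ? State.VERIFY : State.DONE;
                case VERIFY -> State.DONE;
                case DONE -> throw new IllegalStateException("Already done");
            };
        }

        public State state() { return state; }
    }
}
```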

Project Roadmap

A Sprint is an excellent way to organize short-term commitments. However, projects are usually longer than a Sprint, and customers want to have a high-level overview of the overall amount of work in terms of 

  • What 
  • When 
  • How much (effort and cost)

Therefore, once the functional requirements have been collected, the first thing we usually do is a technical analysis in which we analyze, discuss, and explore the requirements one by one and create the corresponding “technical” JIRA tickets. That process happens at two different levels.

First, we define the functional and technical high-level areas; for each, we create a specific kind of JIRA ticket called an Epic: any other ticket we create must belong to an Epic (that is, to a functional or technical requirements area). Tickets can capture the following types of work:

  • Analysis: usually created as “Story” (another type of JIRA ticket), they require the assignee to investigate a specific area and, in the end, to create the corresponding implementation tasks.
  • Implementation: the kind of work developers love; an implementation task is something the assignee must implement (a feature, for example)
  • Fixes: bug tickets are usually not created in this first phase (we do not have a testable system yet), and even later they are very hard to predict and plan. Therefore, they are not part of the roadmap.

So far, so good: we have the epics, and they are populated with the tickets created during the technical analysis. Now, it’s time to spread them across a timeline.

JIRA comes to the rescue again with a feature called (not surprisingly) Timeline. It allows the creation of a roadmap that includes the epics and the corresponding tasks.

The following picture is an example.

The left column lists the epics. For each epic, a colored bar indicates its duration across the timeline. Below the timeline, you can see several gray blocks representing the Sprints and the blue bullets representing the releases (0.9 and 1.0 in the example).

This perspective offers a good high-level overview of the project, but we can also expand the epics and see the concrete tasks (analysis and/or implementation) together with their dependencies.

See the following example where an epic has been expanded. 

As you can imagine, if we open all the epics, the resulting view is almost unreadable. However, the perspective is very helpful because dependencies between tasks are included, and if something is wrong, it is highlighted in red (as in the example).

Meetings

We usually have regular and on-demand meetings. While the second category includes, as the name suggests, meetings called for a specific need, the regular meetings are the following:

Daily Stand-Up (15 minutes)

One at a time, team members briefly describe what they are doing, together with potential challenges or blockers. In case of any challenge/blocker, a dedicated meeting is scheduled between the reporter and the people who can help.    

Sizing (1 hour)

When a new ticket is created, regardless of its type, it goes to the tail of the backlog. At this point, it is unestimated and has no priority.

During the sizing meeting, one by one, tickets are discussed, improved in their details (e.g., description, fix versions, acceptance criteria), estimated, and prioritized.

Planning (1 hour)

Before a new Sprint starts, its content, initially defined by the roadmap, is checked and adjusted if necessary, considering any new priorities that have been raised. Those changes may imply a (possibly small) re-iteration on the roadmap.

Kick-off and Retrospective (1 hour)

At the beginning of each Sprint, usually on the first day, before pressing the “Start” button, a short kick-off meeting is helpful to ensure everyone is aligned and happy with the Sprint content.

Demo time: if there’s a feature, a fix, or an analysis output worth sharing with the team, the assignee runs a short demo/presentation.

Finally, there are five minutes in which each team member can fill out a Google document covering

  • Thanks and congratulations
  • What went well
  • What can be improved (+ action points)

Things written in the document are then read and discussed together.

Verifiable Environments

As said above, someone runs a verification in a realistic environment before a task can be marked as completed.

That environment must include the whole set of software modules/components. The only difference from a production setup is in the underlying data, which should be

  • small: the environment is rebuilt/reinstalled multiple times daily, so that process should be as quick as possible.
  • controlled and deterministic: the data is created by domain experts, so when the system starts, we know precisely its state (see the sketch after this list).
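To make the “controlled and deterministic” point concrete, here is a hypothetical sketch of seeding an environment from a small, hand-curated fixture at startup; the file name, format, and product domain are invented for illustration:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Hypothetical sketch: load a small, hand-curated dataset at startup so the
// environment always boots into a precisely known state.
public class SeedData {
    public record Product(String sku, String name, double price) {}

    /** Reads "sku;name;price" lines from a small CSV curated by domain experts. */
    public static List<Product> load(Path fixture) throws Exception {
        return Files.readAllLines(fixture).stream()
                .filter(line -> !line.isBlank())
                .map(line -> line.split(";"))
                .map(f -> new Product(f[0], f[1], Double.parseDouble(f[2])))
                .toList();
    }

    public static void main(String[] args) throws Exception {
        // The fixture is versioned with the code, so every rebuild is deterministic.
        load(Path.of("fixtures/products.csv"))
                .forEach(p -> System.out.println("Seeding " + p));
    }
}
```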

There are two types of environments where that verification happens: 

  • System Integration Test (SIT): this is the environment directly connected to development. Every time a feature/fix is merged into the main, service pack, or hotfix branch, the corresponding SIT environment is rebuilt to include that change. SIT environments contain the latest changes but are very unstable because they are frequently restarted/reinstalled.
  • User Acceptance Test (UAT): as the name suggests, this environment is primarily for the end users/testers (on the customer side). It is a place where they can try the latest stable and verified features implemented in the project. The environment is built periodically (e.g., weekly), so it is stable during that period.

As a side note, there is not just one SIT and one UAT; there is one of each per release. That is because if a feature is implemented only in a specific version, there must be a verifiable environment pair (SIT and UAT) for that specific branch.

Code Repository: Bitbucket

Is there a software project without code? Code, documentation, and related artifacts must be stored, versioned, managed, and manipulated.

We chose Bitbucket as a software repository for the project we’re discussing. Being an Atlassian product, it integrates deeply with JIRA.

Changes are managed, organized, and applied in branches, and there are several kinds of them (a small classification sketch follows the list):

  • Working branches are the temporary branches where features or fixes are implemented. 
  • Main branch: the branch where the next major version will be released. Every feature or fix goes here first. 
  • Service Pack/Hotfix branches: immediately after the first release (e.g., 1.0.0), the main branch alone can no longer capture the versions installed in the different environments, so we need two additional types of branches in the software repository. Service Pack branches capture changes, features and even fixes, that are not relevant enough for a major version but introduce blocking, non-backward-compatible changes. Hotfix branches, on the other side, hold changes we can immediately apply to an existing release, mostly simple bug fixes.
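As promised above, here is a small hypothetical sketch that classifies a branch name into one of these kinds; the naming conventions (sp/…, hotfix/…, feature/…) are assumptions, not necessarily the project’s:

```java
import java.util.regex.Pattern;

// Hypothetical sketch: classify a branch name into the kinds described above.
// The naming conventions (feature/..., sp/..., hotfix/...) are assumptions.
public enum BranchKind {
    WORKING, MAIN, SERVICE_PACK, HOTFIX;

    private static final Pattern SERVICE_PACK_NAME = Pattern.compile("sp/\\d+\\.\\d+");
    private static final Pattern HOTFIX_NAME = Pattern.compile("hotfix/\\d+\\.\\d+\\.\\d+");

    public static BranchKind of(String branch) {
        if (branch.equals("main")) return MAIN;
        if (SERVICE_PACK_NAME.matcher(branch).matches()) return SERVICE_PACK;
        if (HOTFIX_NAME.matcher(branch).matches()) return HOTFIX;
        return WORKING; // e.g., feature/PROJ-123-add-pagination
    }
}
```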

Package & Image Registry

Software artifacts must be stored and versioned. Unfortunately, Bitbucket does not provide those features, which are available instead in GitHub and GitLab. We chose GitLab for no particular reason (the artifact registries they offer are similar).

Project artifacts are:

  • Java libraries: stored in the GitLab Maven Package Registry, they are used within the project as dependencies
  • Docker images: held in the GitLab Container Registry, they are used for tests and deployments

Continuous Integration: Jenkins

Every time someone pushes a change to the code, a dedicated Continuous Integration server, Jenkins, is in charge of ensuring everything is OK by running the integration suite.

Specifically (see the sketch after this list):

  • If the change is related to a working branch, the code is pulled, and the integration test suite is executed.
  • If the change is merged into the main branch, the code is pulled, the integration test suite is executed, and the System Integration Test (SIT) environment is reinstalled from scratch.
  • If the change is merged into a service pack or hotfix branch, the code is pulled, the integration test suite is executed, and the System Integration Test of the corresponding environment (SIT-SP, SIT-HOTFIX) is reinstalled from scratch.
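Reusing the hypothetical BranchKind sketch from the repository section, that dispatch logic boils down to something like this (again an illustration, not the actual pipeline code):

```java
// Hypothetical sketch of the CI dispatch described above: every push runs the
// integration suite; merges into main/service pack/hotfix branches also
// trigger a from-scratch reinstall of the corresponding SIT environment.
public class CiDispatch {
    public static void onPush(String branch) {
        checkout(branch);
        runIntegrationTests();
        switch (BranchKind.of(branch)) {
            case MAIN -> reinstallEnvironment("SIT");
            case SERVICE_PACK -> reinstallEnvironment("SIT-SP");
            case HOTFIX -> reinstallEnvironment("SIT-HOTFIX");
            case WORKING -> { /* tests only, no environment rebuild */ }
        }
    }

    private static void checkout(String branch) { /* fetch & check out the branch */ }
    private static void runIntegrationTests() { /* e.g., run the Maven verify phase */ }
    private static void reinstallEnvironment(String name) { /* redeploy from scratch */ }
}
```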

Everything Jenkins does should be notified to the interested users. For this kind of notification, we replaced emails with a dedicated Slack channel.
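For example, a notification step can post to Slack through an incoming webhook; in this minimal sketch, the webhook URL and the message are placeholders:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Minimal sketch: post a build notification to Slack through an incoming
// webhook. The webhook URL below is a placeholder.
public class SlackNotifier {
    private static final String WEBHOOK_URL =
            "https://hooks.slack.com/services/T000/B000/XXXX"; // placeholder

    public static void notify(String message) throws Exception {
        String payload = "{\"text\": \"" + message + "\"}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(WEBHOOK_URL))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();
        HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
    }

    public static void main(String[] args) throws Exception {
        notify("Green build on main: SIT environment reinstalled");
    }
}
```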

Major Versions, Service Packs, Hotfixes

We decided to use a simple approach for dealing with versions and releases. We use semantic versioning, which identifies a given version using three numbers, X.Y.Z (sketched in code after the list):

  • X, the leftmost number, denotes a major version, which is a set of relevant features (most probably breaking the existing API).
  • Y is the minor version. We increase that number when there are isolated, blocking features that we need to release before a major version. 
  • Z is the patch version. We increase that number when applying fixes and non-relevant, non-blocking changes to an existing release.
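As a small illustration (not project code), the scheme maps naturally onto a comparable value type:

```java
// Minimal sketch of the X.Y.Z versioning scheme described above.
public record Version(int major, int minor, int patch)
        implements Comparable<Version> {

    public static Version parse(String v) {
        String[] parts = v.split("\\.");
        return new Version(Integer.parseInt(parts[0]),
                           Integer.parseInt(parts[1]),
                           Integer.parseInt(parts[2]));
    }

    public Version nextMajor() { return new Version(major + 1, 0, 0); } // relevant, breaking features
    public Version nextMinor() { return new Version(major, minor + 1, 0); } // isolated, blocking features
    public Version nextPatch() { return new Version(major, minor, patch + 1); } // fixes

    @Override
    public int compareTo(Version o) {
        int c = Integer.compare(major, o.major);
        if (c == 0) c = Integer.compare(minor, o.minor);
        if (c == 0) c = Integer.compare(patch, o.patch);
        return c;
    }

    @Override
    public String toString() { return major + "." + minor + "." + patch; }
}
```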

Sprint Report

At the end of each Sprint, the team leader sends an informal summary report that includes 

  • general considerations
  • implementation areas that have been covered  
  • challenges, blockers
  • main goal of the next Sprint

Conclusions

Technical Leadership is a very high-level label for a group of services that help drive a project in the right direction. It can be implemented in several ways; what we described in this case study is one of them, the one we chose for managing this project.

As said above, we would love to hear your questions, doubts, and feedback. Feel free to contact us.
