Questions from my Continuous Delivery Talk

My short talk on how we do Continuous Delivery at 7digital generated many questions from the audiences at both Devs in the ‘ditch and London ACCU, and a couple more were asked on Twitter after the events.  Here are the ones I can remember and my answers.  If anyone has any more questions, please add them to the comments.

Can you choose which release version to deploy?

As we deliver web-based services, not products, we are always aiming to release the latest version, which is almost always the HEAD of the Master/main integration branch merged into a Release branch.

We rely heavily on TeamCity to trigger our deployments as well as our continuous integration.  It includes a feature called ‘pinning a build’, which prevents the build and its artifacts from being deleted by the clean-up policy.  It also allows us to reference these artifacts in another build, such as a deployment build.

Once the Release branch has been updated with the changes in the HEAD of the Master branch, and all of the tests have passed and we are happy, the build is ‘pinned’ in TeamCity and we kick off the Deploy to Live build, which picks up the pinned artifacts and deploys them to the production servers.

We can choose what build should be pinned and therefore what artifacts are released to Live.  We don’t necessarily version our releases because we never refer back to the versions and only a single version is ever live at one time.
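
For illustration, the same pinning can be scripted against TeamCity’s REST API rather than done through the UI.  The sketch below is a minimal example in Python; the server URL, credentials and build id are all placeholders, and the endpoint details may differ between TeamCity versions:

    import requests

    TEAMCITY = "https://teamcity.example.com"  # placeholder server URL
    AUTH = ("deploy-user", "secret")           # placeholder credentials

    def pin_build(build_id: int, comment: str) -> None:
        """Pin a build so it and its artifacts survive the clean-up policy."""
        response = requests.put(
            f"{TEAMCITY}/app/rest/builds/id:{build_id}/pin/",
            data=comment,  # the pin comment, sent as plain text
            headers={"Content-Type": "text/plain"},
            auth=AUTH,
        )
        response.raise_for_status()

    # e.g. pin the build whose artifacts we intend to release
    pin_build(12345, "Release candidate approved for Live")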

How do you do a rollback?

We ‘un-pin’ the build and artifacts of the ‘bad’ release, ‘re-pin’ the build and artifacts of the previously known ‘good’ release and run the Deploy to Live step once again.  This effectively does a ‘roll forward’ with known good artifacts.
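
Sketched against the same placeholder TeamCity setup as above, the whole roll forward is three REST calls; the ‘DeployToLive’ build configuration id is invented for the example:

    import requests

    TEAMCITY = "https://teamcity.example.com"  # placeholder server URL
    AUTH = ("deploy-user", "secret")           # placeholder credentials

    def roll_forward(bad_build_id: int, good_build_id: int) -> None:
        """Swap the pins and redeploy: a rollback done as a roll forward."""
        # Un-pin the bad release so its artifacts are no longer the release candidates...
        requests.delete(f"{TEAMCITY}/app/rest/builds/id:{bad_build_id}/pin/",
                        auth=AUTH).raise_for_status()
        # ...re-pin the previously known good release...
        requests.put(f"{TEAMCITY}/app/rest/builds/id:{good_build_id}/pin/",
                     data="Known good build - rolling forward",
                     headers={"Content-Type": "text/plain"},
                     auth=AUTH).raise_for_status()
        # ...and queue the Deploy to Live build, which picks up the pinned artifacts.
        requests.post(f"{TEAMCITY}/app/rest/buildQueue",
                      data='<build><buildType id="DeployToLive"/></build>',
                      headers={"Content-Type": "application/xml"},
                      auth=AUTH).raise_for_status()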

What role do QA have in the process and do they trust the automation?

QA are generally involved throughout the process.  Together with the developers, they fulfill the traditional role of a BA: analysing a piece of work and creating Acceptance Criteria, which normally form the basis of the Automated Acceptance Tests.  This also means that QA are fully versed in the feature or change when it comes to UAT and exploratory testing, and together we can make a judgement call as to whether a change actually needs manual QA testing or is sufficiently covered by the automated tests.  Being involved all the way through gives them confidence in the process.

A point to make is that we don’t have a QA Team as such: each development team includes a QA person and a Product Manager.  We all sit together and attend the daily stand-up, so everyone is aware of what is taking place and the state of each change, and can jump in at any point.

How do you handle large features/pieces of work?

We hold an analysis session within the team, including the developers, QA and Product Manager, to break the work down into user stories as small as possible, aiming for under a day each.  Each story needs to be a single, contained piece of functionality which can be released on its own.  This is not always possible, and in those cases we employ Feature Toggles, which hide a feature until it is ready.
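
As a rough sketch of the idea (the toggle name and config file here are invented for illustration), a Feature Toggle can be as simple as a flag read from configuration that the code checks before exposing the new path:

    import json

    def load_toggles(path: str = "feature_toggles.json") -> dict:
        """Read toggle states from a config file deployed alongside the app."""
        try:
            with open(path) as f:
                return json.load(f)
        except FileNotFoundError:
            return {}  # no config file means every toggled feature stays hidden

    TOGGLES = load_toggles()

    def search_page() -> str:
        # The half-finished feature ships to Live, but stays dark
        # until someone flips the toggle in configuration.
        if TOGGLES.get("new_search_page", False):
            return "new search page"
        return "old search page"

    print(search_page())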

What we don’t do is have feature branches.  This is something that must be avoided to ensure that we are always integrating all changes and any problems are highlighted as early as possible in the development cycle.

What about database schema changes?

We use a tool we developed internally, but have since Open Sourced: DBMigraine.  There are a couple of blog posts on the 7digital Dev Blog here and here which explain it in more detail, but in essence it builds databases from a set of creation scripts, applies migration scripts, and performs consistency checks between databases.
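
To give a feel for the shape of that process, here is a minimal sketch of the build-and-migrate step.  It is not DBMigraine’s actual API; it uses sqlite3 and made-up directory names purely for brevity:

    import sqlite3
    from pathlib import Path

    def build_database(db_path: str, creation_dir: str, migration_dir: str) -> None:
        """Build a fresh database from creation scripts, then apply migrations in order."""
        conn = sqlite3.connect(db_path)
        try:
            # Creation scripts establish the baseline schema...
            for script in sorted(Path(creation_dir).glob("*.sql")):
                conn.executescript(script.read_text())
            # ...then migration scripts run in filename order, exactly as
            # they will later run against the shared downstream databases.
            for script in sorted(Path(migration_dir).glob("*.sql")):
                conn.executescript(script.read_text())
            conn.commit()
        finally:
            conn.close()

    # Build a throwaway database for an Integration test run.
    build_database("integration_tests.db", "schema/create", "schema/migrations")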

Using this tool we build a database from the scripts and migrations at the beginning of each Integration test suite and run the tests against the new schema.  This should flag up any major problems before the migrations are also applied to the Systest and UAT databases, which are integration points for all of our apps sharing the same database.

It’s worth noting that we try to avoid destructive migrations, but this process has allowed us to gradually delete unused tables in a tested manner.

————————————–
Edit – new question from @AgileSteveSmith

What cycle time reductions have you seen?

In my reply, I linked Steve to two posts on the 7digital Developers’ Blog about our productivity: “Productivity = Throughput and Cycle Time” and “Development Team Productivity at 7digital”.

The posts illustrate, with data tracked from our own work items, an incredible reduction in Cycle Time over the course of 2009 to 2011 – you can even see the massive spike where things got worse before they got better, as I mentioned in my presentation!

A full report was put together, with even more charts and graphs, which can be downloaded from the second blog post.

2 Comments

  1. To expand on the question I posted through Twitter about code quality metrics:

    You know how, if you reward people for finding bugs, the number of bugs reported is likely to increase?  It's the idea that focusing on one objective might cause other 'undesirables' to appear.  An example might be concentrating on delivery so much that coding standards drop, or shipping worse code just to get the delivery out.

    We're only human.

    The question is: do you have measures in place, as part of the build or a separate review, to check that this is not happening?  That focusing on delivery isn't causing shortcuts?


  2. When I joined 7digital over 2 years ago there was a big focus on code quality metrics. These were generated as automated builds in TeamCity and run overnight. The metrics would not 'fail' the build; they were purely informational, and included stats such as cyclomatic complexity, class coupling and lack of cohesion in methods (LCOM).

    The whole Development team, which was about 15 people at the time, would gather around a projector once a week to discuss these metrics: what each meant, what improving them would mean for the code and whether they'd improved or not in the last week. At this point our codebase was very much 'legacy' code by Michael Feathers' definition and we were very much focused on refactoring to reduce complexity and add tests.

    In the last year these Metrics builds have become largely ignored. The codebase is far more distributed, composed of multiple smaller projects each maintained by a small focused team of developers. With this level of focus we've drifted away from relying on the metrics as we feel that any 'bad' code would be noticed and fixed.

    The Metrics builds are still running on some of our longer standing projects, which are still being actively developed and refactored, and I'm now very interested in taking the time to see if the code quality has decreased or possibly improved in the last year…
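
    As a rough illustration of what 'informational only' means, a metrics step might look something like this (sketched in Python with the radon library, rather than the tooling we actually used; the file name is a placeholder):

        from radon.complexity import cc_visit  # radon is a Python code-metrics library

        def report_complexity(source_path: str) -> None:
            """Print cyclomatic complexity per function. Informational only:
            nothing here raises or exits non-zero, so it can never fail a build."""
            with open(source_path) as f:
                source = f.read()
            for block in cc_visit(source):
                print(f"{block.name}: complexity {block.complexity}")

        # Point it at any Python file, e.g. this script itself:
        report_complexity(__file__)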

