Questions from my Continuous Delivery Talk

My short talk on how we do Continuous Delivery at 7digital generated many questions from the audiences at both Devs in the ‘ditch and London ACCU.  A couple more were asked on Twitter after the events.  Here are the ones I can remember, along with my answers.  If anyone has any more questions, please add them to the comments.

Can you choose which release version to deploy?

As we deliver web-based services, not products, we are always aiming to release the latest version, which is almost always the HEAD of the Master/main integration branch merged into a Release branch.

We rely heavily on TeamCity to trigger our deployments as well as our continuous integration.  It includes a feature called ‘pinning a build’, which prevents the build or its artifacts from being deleted by a clean-up policy.  It also allows us to reference those artifacts in another build, such as a deployment build.

Once the Release branch has been updated with the changes in the HEAD of the Master branch, all of the tests have passed and we are happy, the build is ‘pinned’ in TeamCity and we kick off the Deploy to Live build, which picks up the pinned artifacts and deploys them to the production servers.

We can choose which build should be pinned and therefore which artifacts are released to Live.  We don’t version our releases because we never refer back to the versions and only a single version is ever live at any one time.
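As a rough illustration, the pin-and-deploy step could be scripted against TeamCity’s REST API along these lines.  This is a minimal sketch, not our actual setup; the server URL, credentials and build IDs are made up:

    import requests

    TEAMCITY = "https://teamcity.example.com"   # made-up server URL
    AUTH = ("ci-user", "secret")                # made-up credentials

    def pin_build(build_id, comment):
        """Pin a build so the clean-up policy keeps it and its artifacts."""
        r = requests.put(
            f"{TEAMCITY}/httpAuth/app/rest/builds/id:{build_id}/pin/",
            data=comment,
            headers={"Content-Type": "text/plain"},
            auth=AUTH,
        )
        r.raise_for_status()

    def trigger_build(build_type_id):
        """Queue a build configuration, e.g. the Deploy to Live build."""
        r = requests.post(
            f"{TEAMCITY}/httpAuth/app/rest/buildQueue",
            data=f'<build><buildType id="{build_type_id}"/></build>',
            headers={"Content-Type": "application/xml"},
            auth=AUTH,
        )
        r.raise_for_status()

    # Pin the release candidate, then deploy its artifacts to production.
    pin_build(12345, "Release candidate: all tests green")
    trigger_build("DeployToLive")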

How do you do a rollback?

We ‘un-pin’ the build and artifacts of the ‘bad’ release, ‘re-pin’ the build and artifacts of the last known ‘good’ release and run the Deploy to Live build once again.  This is effectively a ‘roll forward’ with known good artifacts.
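In script form, the ‘roll forward’ is just a re-ordering of the same hypothetical REST calls as in the sketch above:

    import requests

    TEAMCITY = "https://teamcity.example.com"   # made-up, as before
    AUTH = ("ci-user", "secret")

    def roll_forward(bad_build_id, good_build_id):
        """Un-pin the bad release, re-pin the last good one, then
        re-run Deploy to Live so it picks up the good artifacts."""
        base = f"{TEAMCITY}/httpAuth/app/rest"
        requests.delete(f"{base}/builds/id:{bad_build_id}/pin/",
                        auth=AUTH).raise_for_status()
        requests.put(f"{base}/builds/id:{good_build_id}/pin/",
                     data="Known good release",
                     headers={"Content-Type": "text/plain"},
                     auth=AUTH).raise_for_status()
        requests.post(f"{base}/buildQueue",
                      data='<build><buildType id="DeployToLive"/></build>',
                      headers={"Content-Type": "application/xml"},
                      auth=AUTH).raise_for_status()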

What role do QA have in the process and do they trust the automation?

QA are generally involved throughout the process.  Together with the Developers, they fulfill the traditional role of a BA: analysing a piece of work and creating Acceptance Criteria, which normally form the basis of the Automated Acceptance Tests.  This also means that QA are fully versed in the feature or change when it comes to UAT and exploratory testing, and together we can make a judgement call as to whether a change actually needs manual QA testing or is sufficiently covered by the automated tests.  Being involved all the way through gives them confidence in the process.

A point to make is that we don’t have a QA Team as such; each development team includes a QA person and a Product Manager.  We all sit together and attend the daily stand-up, so everyone is aware of what is taking place and the mood of a change, and can jump in at any point.

How do you handle large features/pieces of work?

We hold an analysis session within the team, including the developers, QA and Product Manager, to break the work down into the smallest user stories possible, aiming for under a day each.  Each story needs to be a single, contained piece of functionality which can be released on its own.  This is not always possible, and in those cases we employ Feature Toggles, which hide a feature until it is ready.
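A toggle doesn’t need to be anything clever; a flag read from configuration is usually enough.  Here’s a minimal sketch (the names and file format are illustrative, not our actual implementation):

    import json

    def load_toggles(path="feature_toggles.json"):
        """Load toggle flags from config, so a half-finished feature ships
        'dark' and is switched on later without a special release."""
        with open(path) as f:
            return json.load(f)

    def legacy_search(query):
        return [f"legacy result for {query}"]    # current live behaviour

    def new_search(query):
        return [f"new result for {query}"]       # in-progress feature

    def search(query, toggles):
        # The toggle hides the new code path until the feature is complete.
        if toggles.get("new_search", False):
            return new_search(query)
        return legacy_search(query)

    toggles = load_toggles()   # e.g. the file contains {"new_search": false}
    print(search("radiohead", toggles))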

What we don’t do is have feature branches.  We avoid them to ensure that we are always integrating all changes, so that any problems are highlighted as early as possible in the development cycle.

What about database schema changes?

We use a tool we developed internally but have since open-sourced: DBMigraine.  There are a couple of blog posts on the 7digital Dev Blog here and here which explain it in more detail, but in essence it builds databases from a set of creation scripts, applies migration scripts, and performs consistency checks between databases.
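The underlying pattern is easy to sketch.  The following uses sqlite3 purely for illustration; it is not DBMigraine’s actual interface, and the file layout is made up:

    import sqlite3
    from pathlib import Path

    def build_database(db_path, scripts_dir):
        """Build a database by running every creation script, then every
        migration script, in filename order (001_*.sql, 002_*.sql, ...)."""
        conn = sqlite3.connect(db_path)
        for folder in ("create", "migrations"):
            for script in sorted(Path(scripts_dir, folder).glob("*.sql")):
                conn.executescript(script.read_text())
        conn.commit()
        conn.close()

    def schema_of(db_path):
        """Capture the schema as a comparable set of object definitions."""
        conn = sqlite3.connect(db_path)
        rows = conn.execute(
            "SELECT type, name, sql FROM sqlite_master ORDER BY name").fetchall()
        conn.close()
        return set(rows)

    # Consistency check: a database built fresh from creation scripts plus
    # migrations should match one migrated in place from an older build.
    if schema_of("fresh.db") != schema_of("migrated.db"):
        raise RuntimeError("schema drift between fresh and migrated databases")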

Using this tool, we build a database from the scripts and migrations at the beginning of each Integration test suite and run the tests against the new schema.  This should flag up any major problems before the migrations are applied to the Systest and UAT databases, which are integration points for all of our apps sharing the same database.
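In test-suite terms that amounts to rebuilding the database once per run.  A sketch of the shape using pytest, illustrative only, reusing the hypothetical build_database helper from the sketch above:

    import sqlite3
    import pytest

    @pytest.fixture(scope="session")
    def test_db(tmp_path_factory):
        """Build a throwaway database before the integration suite runs,
        so a bad migration fails here rather than on the shared Systest
        or UAT databases."""
        db_path = tmp_path_factory.mktemp("db") / "integration.db"
        build_database(str(db_path), "database/scripts")  # helper from the sketch above
        return sqlite3.connect(str(db_path))

    def test_tracks_table_exists(test_db):
        row = test_db.execute(
            "SELECT name FROM sqlite_master WHERE type='table' AND name='tracks'"
        ).fetchone()
        assert row is not None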

It’s worth noting that we try to avoid destructive migrations, but this process has allowed us to gradually delete unused tables in a tested manner.

————————————–
Edit – new Question from @AgileSteveSmith

What cycle time reductions have you seen?

In my reply, I linked Steve to two posts on the 7digital Developers’ Blog about productivity at 7digital: “Productivity = Throughput and Cycle Time” and “Development Team Productivity at 7digital“.

The posts illustrate, with data tracked from our own work items, an incredible reduction in Cycle Time over the course of 2009 to 2011 – you can even see the massive spike where things got worse before they got better, as I mentioned in my presentation!
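For anyone unfamiliar with the metrics: cycle time is the elapsed time from starting a work item to releasing it, and throughput is the number of items finished per period.  A trivial sketch of how they fall out of tracked work items (the dates are made up):

    from datetime import date

    # Hypothetical tracked work items: (started, released)
    items = [
        (date(2011, 3, 1), date(2011, 3, 3)),
        (date(2011, 3, 2), date(2011, 3, 4)),
        (date(2011, 3, 7), date(2011, 3, 8)),
    ]

    # Cycle time: elapsed days from starting an item to releasing it.
    cycle_times = [(done - start).days for start, done in items]
    avg_cycle_time = sum(cycle_times) / len(cycle_times)

    # Throughput: items released within a given period.
    week = (date(2011, 3, 1), date(2011, 3, 7))
    throughput = sum(1 for _, done in items if week[0] <= done <= week[1])

    print(f"average cycle time: {avg_cycle_time:.1f} days; "
          f"throughput: {throughput} items/week")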

A full report was put together, with even more charts and graphs, which can be downloaded from the second blog post.


Continuous Delivery at 7digital

It began with an off-hand comment on the ACCU General mailing list that at 7digital we release on average 50 times per week, across all of our products.  I thought nothing of it; virtually all of our products are web-based, which makes it relatively easy to deploy on a regular basis.  But it seemed that others were interested in how we do it, and so I was cajoled into giving my first presentation.

I began by explaining what we understand as Continuous Delivery – a set of repeatable, reliable steps which a change must go through before being released.  In our case most of these steps are automated.

I described where we came from and how we approached the change, in both a technical and business manner, and where we would like to see ourselves going.  I then included a flowchart of A Day in the Life of a Change at 7digital, which Prezi allows me to ‘bounce’ around as we hit different stages in the pipeline.
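The flowchart itself is in the slides, but the shape of the pipeline can be summarised as an ordered series of gates a change must pass.  The stage names below are paraphrased from the process described above, not an exact copy of the flowchart:

    # Stages a change passes through, in order; each gate must succeed
    # before the next runs, and a failure stops the change progressing.
    PIPELINE = [
        "commit to Master",
        "CI build + unit tests",
        "automated acceptance tests",
        "merge to Release branch",
        "pin build artifacts",
        "deploy to Systest / UAT",
        "deploy to Live",
    ]

    def run_pipeline(change, run_stage):
        for stage in PIPELINE:
            if not run_stage(change, stage):
                print(f"{change}: stopped at '{stage}'")
                return False
        print(f"{change}: released to Live")
        return True

    # e.g. run_pipeline("add album preview", lambda change, stage: True)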

I answered many questions clarifying how we handle rolling back a bad release (we actually ‘roll-forward’ with artifacts of the previous ‘good’ release), whether our QA team are comfortable with the process (yes, they help write the criteria), and how large pieces of work are tackled (we try to break them down into deployable pieces no bigger than a day).

Here are the slides:

[Prezi slide embed]

Group Feedback

Cross-posted from the 7digital Dev Blog
[Image: whiteboard with Group Feedback cards grouped, showing Pairing and Communication as the biggest clusters]

The API Team decided to trial a Group Feedback exercise in our last Retrospective, influenced by this post from Mark Needham.  We were hoping to promote a safe atmosphere for everyone to talk about each other in a frank manner.  Initially the idea was received with trepidation and a fear of public humiliation, but we were willing to try a new approach.

Hibri, our Team Lead, suggested that we should present our feedback as three points, in a similar vein to the “Start, Stop, Continue, More of, Less of Wheel” but sticking to just Stop, Start and Continue.  This would focus the feedback and prevent the retrospective from spiralling off course.  The team took a vote on whether to try this approach before starting.

The ‘Subject’ sat on a chair, called the Hot Seat, facing the team whilst everyone else had 5 minutes to jot down on three cards an item for that person to Stop, Start or Continue.  The cards were then tacked onto the whiteboard behind the Subject.  Everyone took their turn as the Subject in this manner, and afterwards we each got a chance to read the points suggested and comment if we wished.  The original plan was for the team to read out the cards to the Subject, but we decided against that and reviewed the feedback as a group.

We found that there were overall themes for each person, and it was felt that nothing on the board was a complete surprise to anyone.  This may have been because it was the first time we had tried the exercise; it was noted that it was harder to write the cards than to be the Subject being assessed.  As expected, everyone had areas they could focus on more and areas where they were strong.  Each person was encouraged to create their own personal actions from the feedback, which were not divulged to the team.

We also found, whilst reviewing the cards, that we had a couple of common themes across the team, so we rearranged the cards and grouped them up.  This showed that the team felt we were focussing really well on Refactoring, but that we needed to address Pairing and Communication (as well as being late for stand-up…).  We discussed the cards further, noting the overall themes, and devised a couple of actions: the Pomodoro technique whilst pairing, and bi-monthly team “Show-and-Tell” sessions for anything that needs communicating, such as major refactorings undertaken and any lessons learned.

The exercise turned out to be very useful and was a different way to find areas within the team as a whole which needed attention.  Also, we were focussing on ourselves rather than on our output, our processes or our environment, as in most retrospective exercises, and everyone received positive feedback along with any negative.