Posts by Chris

C# .Net Web Developer

2012 Retrospective

As all Agile methodologies teach us, the act of looking back and reviewing your progress is an essential part of improvement. As such, I’m going to try to run a simple retrospective for my 2012.

Review

Let’s start by reviewing the last year on a timeline. As it’s such a long period, I’ll stick with monthly increments and pick out notable events.

January: 
 – Discovered WebGL and my messing about got picked up by LearningWebGL.com

February: 
 – Nothing of note 😦

March:
 – Attended first GDC Meet a Mentor event
 – Started planning the “Devs in the ‘ditch” events

April:
 – Organised a successful first “Devs in the ‘ditch” event
 – Organised the 7digital stand at the “Find your Ninja” event.
 – Attended GeekGirl Meetup in London

May:
 – Organised and helped man the 7digital stand at Silicon Milk Roundabout
 – Attended the Progressive.Net Tutorials

June:
 – Organised second “Devs in the ‘ditch” event

July: 
 – Became a STEM Ambassador

August:
 – Nothing of note 😦

September:
 – Organised third “Devs in the ‘ditch” event
 – Stepped up to take over the ACCU Mentored Developers from Paul Grenyer, and was thereby co-opted onto the Committee.
 – Kicked off an ACCU Mentored Devs book group on JavaScript: The Good Parts
 – Attended DDD10

October:
 – Organised and presented at the fourth “Devs in the ‘ditch” event.
 – Presented the same talk at ACCU London
 – Became the API Team Lead Developer

November:
 – Took part in a Year 9 Careers Speed Networking event as part of being a STEM Ambassador
 – Presented to undergraduates at the Middlesex University IT Careers Forum
 – Passed Grade 1 of Cello

December:
 – Presented at APIDays 2012 in Paris

Other Stuff:
 – Adopted a lovely Cat
 – Attended at least 18 events, and spoke at 3.
 – Attended 4 Weddings (one of which was a surprise)
 – Lost just over 10lbs in weight and am no longer officially overweight
 – Visited Portugal, twice, and Paris.
 – Finished reading 5 books, and started at least twice as many.
 – Wrote 12 blog posts
 – Created 13 Github Repos

Good / Bad / Change

Now is the part where I reflect on the year, using items that I brought up in my review as an aid and write down what I felt went well, went badly and what I’d like to change going forward.

Good

– Finally started speaking at community events; I’d been wanting to do so for so long, but was afraid
– Attended many events and conferences
– Improved my overall health 🙂
– Began organising events and learned a lot from this
– Began putting myself forward as role model to younger people as part of the STEM Ambassador work

Bad

– Didn’t read or finish as many books as I’d have liked
– Didn’t really travel or visit many places
– Only wrote a few blog posts
– Didn’t finish any programming projects I started
– I wanted to have done more with WebGL
– I seem to have coasted through the first quarter of the year

Change

– Must commit to projects and finish them before starting new ones
– Must commit to finishing reading books that I start before starting new ones
– Try to read more books
– Blog more, it improves writing and communication skills
– Stop procrastinating!

Goals & Actions

Normally the team would vote on which topics they’d like to promote to goals and actions but as it’s just me I’ll pick three that I feel are the most important.

Goal: Make 2013 The Year of Reading
Action: Read and finish at least one book a month.  Preferably a technical book but any kind counts. I’ve put the GoodReads Challenge in my sidebar as a reminder.

Goal: Present and speak more.
Action: Volunteer for more events. I did three last year, and I’ll aim to improve on that.  I’ve already put myself forward for an event with the London Continuous Delivery group in March.

Goal: Visit a new country
Action: I’ll make a point this year to take holiday somewhere that I haven’t been before.  It doesn’t have to be far, just somewhere new to me.

Retrospective Close

I feel that this was indeed a useful exercise.  It took a couple of hours to do and write up, mostly because a year is a long time period and I wanted to go back through my calendar and emails to ensure I picked up as many events as I could.

Looking back, I can say that I’ve both done a lot of things, but also not enough things.  I picked up new items, such as speaking and becoming more involved in the ACCU, but I also let things slip, such as blogging and reading.  It’s so easy to let things slip by you when they happen one day at a time.

I’m hoping that 2013 will continue along the trend that I started as there is so much that I want to see.  At the end of the year I will run another retrospective and compare it with the actions above, which should be interesting!

APIdays 2012

On Monday 3rd and Tuesday 4th December, Paris played host to the first international API-focused event in Europe – APIdays.io.  My colleague Hibri and I eagerly took part, and we gave a short presentation on how 7digital grew their public API, the lessons we learned and the effect it had on the way we work.  You can view the slides at the end of this post.

We received great feedback on our talk – it felt as if many people are just getting started in the world of APIs, whilst 7digital have had their public API for many years, so they were very interested in hearing our real-world story.

The format of the event was a little odd, with talks in slots of less than 30 minutes.  On the plus side this meant that we got to see a lot of different viewpoints and experiences, but there wasn’t enough time for anyone to get deep into a topic.  I’d like to suggest that a technical track offers one-hour slots for anyone who wants to host a full-on technical presentation and debate – it felt like we barely scratched the surface in less than 30 minutes.

It was a great couple of days, and the first time I’d ever been to Paris.  I’m hoping that this event becomes a regular conference and that next time we can get far more technical with the content and swap the really gritty stories of lessons learned.


Questions from my Continuous Delivery Talk

My short talk on how we do Continuous Delivery at 7digital generated many questions from the audiences at both Devs in the ‘ditch and London ACCU.  A couple more were asked on Twitter after the events.  Here are the ones I can remember, and my answers.  If anyone has any more questions please add them to the comments.

Can you choose which release version to deploy?

As we deliver web-based services, not products, we are always aiming to release the latest version, which is almost always the HEAD of the Master/main integration branch merged into a Release branch.

We rely heavily on TeamCity to trigger our deployments as well as our continuous integration.  It includes a feature called ‘pinning a build’, which prevents it or its artifacts from being deleted by a clean-up policy.  It also allows us to reference these artifacts in another build, such as a deployment build.

Once the Release branch has been updated with the changes in the HEAD of the Master branch, and all of the tests have passed and we are happy, the build is ‘pinned’ in TeamCity and we kick off the ‘Deploy to Live’ build, which picks up the pinned artifacts and deploys them to the production servers.

We can choose what build should be pinned and therefore what artifacts are released to Live.  We don’t necessarily version our releases because we never refer back to the versions and only a single version is ever live at one time.

How do you do a rollback?

We ‘un-pin’ the build and artifacts of the ‘bad’ release, ‘re-pin’ the build and artifacts of the previously known ‘good’ release and run the Deploy to Live step once again.  This effectively does a ‘roll forward’ with known good artifacts.
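The re-pinning dance can be sketched roughly as follows.  This is an illustrative Python sketch with a hypothetical data shape and function name, not our actual TeamCity setup (which manages pinning through its own interface):

```python
# Illustrative sketch of a 'roll forward' rollback: un-pin the bad build,
# re-pin the last known-good one, and redeploy its artifacts.
# The builds dictionary and function name are hypothetical, not TeamCity's API.

def rollback(builds, bad_build_id, good_build_id):
    """Return the artifacts the 'Deploy to Live' step would now pick up."""
    builds[bad_build_id]["pinned"] = False   # bad artifacts may now be cleaned up
    builds[good_build_id]["pinned"] = True   # protect the known-good artifacts
    # The deploy step takes the most recent pinned build's artifacts.
    pinned = [b for b in builds.values() if b["pinned"]]
    return max(pinned, key=lambda b: b["number"])["artifacts"]
```

Because the deploy step simply redeploys whatever is pinned, a rollback is mechanically identical to a release – which is exactly why it is really a ‘roll forward’.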

What role do QA have in the process and do they trust the automation?

QA are generally involved throughout the process.  Together with the Developers, they fulfil the traditional role of a BA: analysing a piece of work and creating Acceptance Criteria, which normally form the basis of the Automated Acceptance Tests.  This also means that QA are fully versed in the feature or change when it comes to UAT and exploratory testing, and together we can make a judgement call as to whether a change actually needs manual QA testing or is sufficiently covered by the automated tests.  Being involved all the way through gives them confidence in the process.

A point to make is that we don’t have a QA Team as such, each development team includes a QA person and a Product Manager.  We all sit together and attend the daily stand-up so everyone is aware of what is taking place, the mood of a change and can jump in at any point.

How do you handle large features/pieces of work?

We hold an analysis session within the team, including the developers, QA and Product Manager, to break down the work into the smallest possible user stories, aiming for under a day each.  Each story needs to be a single contained piece of functionality which can be released on its own.  This is not always possible, and in those cases we employ Feature Toggles, which hide a feature until it is ready.

What we don’t do is have feature branches.  This is something that must be avoided to ensure that we are always integrating all changes and any problems are highlighted as early as possible in the development cycle.
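A Feature Toggle can be as simple as a guard around the new code path.  Here is a minimal, illustrative sketch – not 7digital’s actual implementation; the flag store and function names are hypothetical, and in practice the flags would be read from configuration:

```python
# Minimal feature-toggle sketch (illustrative only).
# Unfinished work is integrated and deployed, but hidden behind a flag.

TOGGLES = {"new_checkout": False}  # in practice read from configuration

def is_enabled(feature):
    return TOGGLES.get(feature, False)

def legacy_checkout_flow(basket):
    return f"legacy:{len(basket)}"

def new_checkout_flow(basket):
    # In-progress feature: integrated into Master, invisible until toggled on.
    return f"new:{len(basket)}"

def checkout(basket):
    if is_enabled("new_checkout"):
        return new_checkout_flow(basket)
    return legacy_checkout_flow(basket)
```

Flipping the flag in configuration then releases the feature, without a feature branch ever having existed.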

What about database schema changes?

We use a tool we developed internally, but have since Open Sourced: DBMigraine.  There are a couple of blog posts on the 7digital Dev Blog here and here which explain it in more detail, but in essence it builds databases from a set of creation scripts, applies migration scripts, and performs consistency checks between databases.

Using this tool we build a database from the scripts and migrations at the beginning of each Integration test suite and run the tests against the new schema.  This should hopefully flag up any major problems before these migrations are also applied to the Systest and UAT databases, which are integration points for all of our apps sharing the same database.

It’s worth noting that we try to avoid destructive migrations, but this process has allowed us to gradually delete unused tables in a tested manner.
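The core idea – build a fresh database from creation scripts, then apply migrations in a fixed order – can be sketched as below.  This is an illustrative sketch using in-memory SQLite for brevity; DBMigraine itself is more involved and adds the consistency checks between databases:

```python
import sqlite3

def build_database(creation_scripts, migration_scripts):
    """Build a throwaway database: run every creation script, then apply
    migration scripts in name order.  Illustrative sketch, not DBMigraine."""
    db = sqlite3.connect(":memory:")
    for script in creation_scripts:
        db.executescript(script)
    for name in sorted(migration_scripts):   # migrations run in a fixed order
        db.executescript(migration_scripts[name])
    return db
```

Running something like this at the start of the Integration suite means the tests always exercise the post-migration schema.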

————————————–
Edit – new Question from @AgileSteveSmith

What cycle time reductions have you seen?

In my reply, I linked Steve to the following two posts on the 7digital Developers’ Blog related to productivity at 7digital: “Productivity = Throughput and Cycle Time” and “Development Team Productivity at 7digital”.

The posts illustrate, with data tracked from our own work items, that there was an incredible reduction in Cycle Time over the course of 2009 to 2011 – you can even see the massive spike at one point where things got worse before they got better, as I mentioned in my presentation!

A full report was put together, with even more charts and graphs, which can be downloaded from the second blog post.

Continuous Delivery at 7digital

It began with an off-hand comment on the ACCU General mailing list that at 7digital we release on average 50 times per week, across all of our products.  I thought nothing of it, virtually all of our products are web-based, which makes it relatively easy to deploy on a regular basis, but it seemed that others were interested in how we do it and so I was cajoled into giving my first presentation.

I began by explaining what we understand as Continuous Delivery – a set of repeatable, reliable steps which a change must go through before being released.  In our case most of these steps are automated.

I described where we came from and how we approached the change, in both a technical and business manner, and where we would like to see ourselves going.  I then included a flowchart of ‘A Day in the Life of a Change at 7digital’, which Prezi allows me to ‘bounce’ around as we hit different stages in the pipeline.

I answered many questions clarifying how we handle rolling back a bad release (we actually ‘roll-forward’ with artifacts of the previous ‘good’ release), whether our QA team are comfortable with the process (yes, they help write the criteria),  and how large pieces of work are tackled (we try to break them down into deployable pieces no bigger than a day).

Here are the slides:


DDD10 – The day of REST

I was on holiday when the sign-up for DDD10 went live and I missed the window by about 5 minutes.  Still, I added myself to the Waiting List in hopes that it would work and on Monday I got an email letting me know I had a place – Excellent!

Reading through the schedule, I decided to take advantage of the sessions which relate to my current role as a Lead Developer in 7digital’s API Team, which meant the three REST talks.

The first session, Jacob Reimers’ “Taking REST beyond the pretty URL”, failed to meet my lofty expectations.  It was more of an introductory talk, which did not go much beyond level 2 of the Richardson Maturity Model.  Judging from the questions of other attendees, this talk did get some people thinking, with questions wondering how interactions could be reproduced using only the HTTP 1.1 verbs, but I was not one of them.  It is at times like these that I realise just how forward-thinking London is and that there are many places where this kind of approach is not a given.

The second REST-focussed session I attended was Dan Haywood’s “Restful Objects – a hypermedia API for domain object models” and, simply put, it boggled my mind, but not in a good way.  It was obvious that Dan had done his research, but when it came to his implementation, something had gone terribly wrong, and it was like entering the Twilight Zone.  Restful Objects is a specification which aims to standardise and genericise REST.  I’m not a fan of “generic”.  In my experience anything that aims to be generic ends up being complex, difficult to use and maintain, and incredibly inflexible; as such I mentally flinch whenever someone uses that word to describe a system.

The biggest thing which stands out in the spec is the inclusion of an “action” in the URI and a requirement for ~/{resource}/invoke endpoints – this misses the point of using the verbs on the resources themselves!  It is littered with URIs such as http://~/objects/ORD/123/actions/placeOrder/invoke.  It also advocates exposing properties on the URI, which I believe pulls a property out of the context of its resource.  Furthermore, it seems to advocate exposing your domain as the resources, thereby creating systems which end up tightly coupled and knowing too much about each other.

Dan does caveat his design by stating he believes there is no need to hinder and complicate your system too much if it is to be entirely internal.  I disagree, to some extent: all of these systems will become “public” in that they will eventually be integrated with other systems and teams that you do not have direct control over, and therefore you need to minimise the coupling.  At 7digital we design all of our internal APIs with the same approaches and philosophies as if they were publicly exposed – and at some future date, that may even become the case.

Finally, there was Mark Rendle’s session “Simple.Web 101” – not a session focussing on REST itself, but on a REST framework.  I had seen Mark unveil Simple.Web back in June at the Progressive .Net Tutorials and I was curious to see how far it has come along.  At 7digital we mostly use OpenRasta, but we have a few APIs written in NancyFX too, and we’re always open to finding new tools which may be better for the job.  As this was only a 101 talk it didn’t go into enough depth for me to see what the improvements have been, but Mark is giving a free in-depth talk at SkillsMatter this Monday.  Unfortunately I cannot make Mondays as I have my Cello lessons, but I will catch the podcast later.

Outside of my mini REST-fest, I attended Neil Barnwell’s talk on “CQRS and Event Sourcing… how do I actually DO it?”.  I have unwittingly managed to avoid learning anything about this area and was hoping Neil could educate me fully in only an hour, which I feel he succeeded in doing.  It was an overview with some code examples thrown in, which removed the mystery for me.

Out of curiosity, I attended Jimmy Skowronski’s talk “C# + PowerShell = happy couple” and learned a little about writing your own commandlets.  Finally, Garry Shutler took us through “10 practices that make me the developer I am today”, which he described as a retrospective of his career thus far, hoping to impart some useful lessons.

I had a great day and I wish to thank all of the organisers and everyone who helped and I’m already looking forward to next year.

Progressive .Net Tutorials

This week I attended the Progressive .Net Tutorials at Skillsmatter.  It’s a 3-day conference of hands-on tutorials focussing on the .Net Platform.

Day One

Don Syme talking about F# (Photo from @AnaisatSM)

On the morning of the first day I attended “Practical Functional-First Programming”, starting with Don Syme giving a presentation on the benefits of using FSharp.  To me the presentation felt far too “sales-y” for my liking, containing statements along the lines of “F# will make you write robust code faster and with fewer bugs” – I’m sure I’ve heard that said before about almost every language out there…  We then got our hands on some F# with Phil Trelford introducing the F# Koans.  I’ve completed these before, and feel that they are a fantastic practical introduction; I recommend looking for Koans in any language you plan to learn.

After lunch I went to Liam Westley’s “Async and C# 5”.  This was very much the same talk I had seen at DDD9 in January last year, but as it was a tutorial we got hands-on trying out async ourselves – once I’d finally downloaded the VS2011 Beta, of course.

Day Two

Look! It’s me in the audience 😀  (Photo from @AnaisatSM)  

For the first talk of the second day, I took a chance and saw Mark Rendle‘s “Introduction to Simple.Web“.  As Mark himself said, he’d not talked about Simple.Web before, so he was pleased that so many people came to see him talk about something no-one had heard of, but I had heard of Mark and his tool Simple.Data.  It turned out to be the beginnings of a REST framework, but unlike OpenRasta, which we use heavily at 7digital for our APIs, it has HTML representations as a main focus and works very nicely with Razor.

It’s still very much a fledgling project which doesn’t handle errors or logging very well, and I daren’t ask which IoC containers it integrates with, but it was only created at the end of Feb!  I’ve already taken my own fork of the repo and will see if there’s anything I can contribute.  You can read more about it on Mark’s Blog.

In the afternoon I chose Gary Short’s “End-to-End Javascript”.  I’m afraid that I’m one of those developers who shies away from the front-end and anything related to it.  As such, I’m quite perturbed by Javascript and its craziness, and felt this talk would hopefully give me a kick up the backside to get it all figured out.  Sadly, it did not.  Instead, I felt inundated and lost track rather quickly.  It was a tall order – MongoDB as the data store, NodeJS for the middle tier and JQuery on the front end – all to be introduced and linked together in 3 hours.  I didn’t have MongoDB installed and soon got left behind trying to get it downloaded.  NodeJS installed quickly, but I couldn’t do anything without a data store.  By this point Gary was already talking about JQuery and I gave up trying to catch up.

Day Three

I could not decide between Dan Thomas’ talk on “HTML5” or Dylan Beattie‘s tutorial on “Security and Identity in the .Net World“.  I went with Dylan as I wished to know more about OAuth as we use this for our APIs at 7digital, also with Skillsmatter recording all the talks I could catch up on the HTML5 one.

Dylan’s tutorial was probably the most organised.  He came fully prepared with printed-out tutorial worksheets, also available online, which meant that everyone could move at their own pace and not get lost trying to keep up with the presenter’s typing on the projector.  He gave a quick presentation on why it is important to consider identity, and why it is probably best not to write your own solution when the built-in tools from ASP.Net are so easy to set up – in fact he did just that whilst holding his breath, to show how quickly it can be done.

A third alternative is to get someone else to do the identity checks for you, such as Google or Facebook, and use OpenID or OAuth to manage this.  The talk focused on OAuth 2.0, which seems far easier to grasp and more fully featured than the OAuth 1.0 we are using at 7digital.

After lunch I decided to attend Ashic Mahtab’s tutorial on “Messaging – It’s not just about Large Scale Integration”.  I had missed Ian Cooper’s talk on “Messaging 101”, plus the description included things such as dependency injection and aspect orientation – it promised to be a fun talk.  Unfortunately, it was impossible to keep up with Ashic as he typed away on the projector, and many others also got lost along the way.

Coding makes us smile! (photo from @AnaisatSM)

He dived straight into coding with generics and actions, which can take a few moments to decipher if you’re not sure what the intent is.  It seemed as though he wanted to show the steps you would go through to grow a codebase implementing dependency injection and aspect orientation from the ground up – therefore not needing any other frameworks.  He really seemed to know what he was talking about, but it was presented in a hardcore lecture style at a very fast pace.  I got so lost that I decided to bail after the break and hang out in the breakout area, getting to know the ‘enemy’ – the guys at Huddle.

Thank you

It was a great conference, with great presenters, great talks and great attendees.  I have a lot to think about and play with now.  I also want to thank Skillsmatter for being great hosts (the food was wonderful!) and for recording the sessions I missed and making them available online.

Thoughts on the Friday of Flossie 2012

I did not know what to expect from Flossie 2012.  I’d been to the About page on their website to find out more (the programme wasn’t available at the time) and discovered that it’s open to all women, whether they code or tinker – a loose definition that I fit – but the bullet points list digital arts, social innovation, open data and knowledge management, research, education, open access and open culture.  So I knew I wasn’t in store for something heavy on the code, but that’s fine; I wanted to expand my horizons.

I must admit that I was also apprehensive about it being a Women-Only event.  I’m not particularly keen on Women-Only events – I feel very similarly to Trisha Gee about them – but as this one was about Open Source, I felt it would be good to go along, as it’s something I want to get more involved in.

Yay! Stickers 😀

Upon registration I was presented with a goodie bag stuffed with stickers from Open Source symbols like Ubuntu, GPL3 and the GNU gnu plus a Google pen and a Google crystal keyring that was so heavy it could double up as a murder weapon.

The introduction welcomed everyone in, followed by a greeting from Professor Elaine Chew of the hosts, Queen Mary University.  A panel of speakers was brought together which included Laura Czajkowski, who is a prominent figure in the Ubuntu community and introduced us to Ubuntu Women.  Alexandra Haché then showed us a short preview video of an interesting research project she is conducting on Women Hackers.

The panel was then opened to questions and it wasn’t long before the subject turned to gender politics and sexism.  Sweeping statements containing phrases such as “women do this” and “men do that” were being thrown around.   “Men give up on problems” was one of my favourites, with the implication that women don’t, ever.  We were no longer discussing Open Source, but gender.  I say “we”, but I was sitting there wishing I had decided not to come.

There was a break in which I grabbed a cup of coffee and talked with a like-minded woman who also felt that the earlier discussion had gotten way off topic.  I guess we were wrong though, because the talk after the break was “Gender issues in Free/Libre Open Source Software Communities”, in which I got a lesson in all the different kinds of Feminism.  This was actually fascinating, as I had no idea such strongly opposing views existed under a single banner – a shame, though, as I was there to learn about Open Source.

There was a talk on Linux User Groups which, I believe, stated that these places can feel unwelcoming to complete beginners, but that it’s not necessarily a gender thing – the banter exists for everyone, and both genders can find it uncomfortable.  This too devolved into a discussion about how “men discuss things differently to women”, e.g. “in a more detached and impersonal manner, while women involve their feelings, resulting in them feeling personally attacked”.  I also heard the view that one of the reasons women don’t like technology is because they are more “connected to their bodies than men and feel a need to be moving and not be sedentary in a chair all day”.  I’ll have to tell my male colleagues who go running and swimming at lunchtime that they are doing it wrong.

More of those sweeping statements made me feel sick, but I saw nodding faces all around the room, so sadly I must have been alone in this.

After lunch the “Arts Strand” began, and I sat through talks about Collaboration in Art, Wearable Technology and sound projects using SuperCollider and Audacity.  These talks were fun and interesting, but were too much into the Art side of things for me, for example a teapot that changed colour when tweeted to and a necklace that shows your heart rate.

The Startup experiences of the “What Size am I?” team

The people behind “What Size Am I?” told us about their experiences as a Startup using purely Open Source technologies and how it affected their development timescales – getting up and running with Django was amazingly easy, but customisation later proved more difficult.  This talk felt more like what I would have considered “on track”, but I guess my expectations were completely out of whack with the event’s aims, which seemed to be to discuss sexism in tech.

I couldn’t attend the Saturday part as I had a prior BBQ engagement, which turned out to be a Surprise Wedding, but I don’t think I would have gone anyway – as interesting as some of the talks were, I simply don’t believe I am the target audience as I would rather focus on the tech, irrespective of the speaker’s gender.