Agile, Continuous Delivery

Why do DevOps?

Background:

According to Puppet’s 2016 State of DevOps Report, DevOps is no longer a mere fad or buzzword but an understood set of practices and cultural patterns.

The term DevOps was originally coined by Patrick Debois and Andrew Shafer in 2008. The Velocity conference community made DevOps a widely known term with the famous “10+ Deploys per Day: Dev and Ops Cooperation at Flickr” talk in 2009.

What is DevOps?

Daniel Greene, writing in TechCrunch, defines DevOps as a set of practices, tools and policies that lead to improved quality and automated delivery. But I really like Gene Kim’s definition in his book The Phoenix Project, where he refers to DevOps as the outcome of applying Lean principles to the IT value stream. These principles are age old but are now used to accelerate the flow of work in software development through product management, development, test, operations and information security. Adopting Lean principles and shifting left follows Toyota’s mantra: stop and fix quality issues before they manifest at the end, and deliver just-in-time parts and support.

Source: Toyota website

Imagine applying these principles to the software development lifecycle: while coding, running automated tests, checking for known vulnerabilities, testing for security, continuously integrating code and deploying several times a day to various environments, including production. This shifts left the activities that used to be performed at the very end, where problems were discovered late and teams could never build in quality. Shifting left means developers spend less time reworking their code and Ops spend fewer late nights and weekends deploying code into production. In fact, according to The Phoenix Project, DevOps has benefited from an astounding convergence of philosophical movements such as Lean Startup, innovation culture, Toyota Kata, Rugged Computing and the Velocity community.

What do Development and Operations typically see?

What Operations sees:

  • Fragile applications are prone to failure
  • Long time required to figure out “which bit got flipped”
  • Detective control is a salesperson
  • Too much time required to restore service
  • Too much firefighting and unplanned work
  • Planned project work cannot complete
  • Frustrated customers leave
  • Market share goes down

What Development sees:

  • More urgent, date-driven projects put into the queue
  • Even more fragile code put into production
  • More releases have increasingly “turbulent installs”
  • Release cycles lengthen to amortize the “cost of deployments”
  • Failing bigger deployments are more difficult to diagnose
  • The most senior and constrained IT Ops resources have less time to fix underlying process problems
  • An ever-increasing backlog of infrastructure projects that could fix root causes and reduce costs
  • Ever-increasing tension between IT Ops and Development

 

Source: “From Visible Ops to DevOps”, Velocity conference, June 2011

The above leads to a mindset that eventually breeds mistrust and a not-so-healthy competition between these teams.

Let’s see how DevOps changes this:

Statistics from high-performing companies show faster time to market, increased customer satisfaction and market share, and greater team productivity and employee happiness, all of which help these organizations win in the marketplace.

In 2011, Amazon deployed code every 11.6 seconds, Netflix several times a day and Google several times a week. Gary Gruver describes the transformation of HP’s LaserJet division, where 400 people distributed across three continents integrate around 150 changes, or 100,000 lines of code, into the trunk of their 10-million-line code base every day.


“We know our quality within 24 hours of any fix going into the system, and we can broadly test even last-minute fixes to ensure that a bug fix does not cause unexpected failures.”

Check out Gary Gruver’s 2013 talk on how they did it.

Summary:

DevOps transformation at scale is hard. It requires changes in both organization and technology, bringing together teams that have historically operated separately and introducing innovation that automates the end-to-end software delivery pipeline.

Continuous Delivery

Catch your continuous integration and deployment pipeline today

My review of the Bitbucket Pipelines beta


So you would like to build, test and deploy your code continuously, but don’t want to install and configure your own instances of Jenkins or Bamboo (check out our thoughts on both tools)? You don’t have to anymore. Atlassian is replacing its Bamboo Cloud offering with Bitbucket Pipelines, aimed at small and medium-sized businesses.

I recently signed up for the Bitbucket Pipelines beta program at bitbucket.org and was accepted a month ago. Read on for my summary of this new Atlassian offering.

Key features:

  1. A continuous integration (CI) and continuous deployment (CD) engine is ready for use on any of your Git or Mercurial repositories simply by adding a bitbucket-pipelines.yml file to your project root. Your pipeline is triggered as soon as a change is pushed to any branch, or to selected branches. See the simple example bitbucket-pipelines.yml below.
    image: node:5.11.0
    pipelines:
      default:
        - step:
            script:
              - echo "This script runs on all branches that don't have any specific pipeline assigned in 'branches'."
      branches:
        master:
          - step:
              script:
                - echo "This script runs only on commit to the master branch."
        feature/*:
          - step:
              image: java:openjdk-9 # This step uses its own image
              script:
                - echo "This script runs only on commit to branches with names that match the feature/* pattern."
  2. Employs Docker images to build, test and deploy your code. Out of the box, images are available for several popular languages.
  3. Support for builds in any Docker image. You can build your own Docker image with support for your language and use it in the pipeline; Docker images for additional languages are available on Docker Hub.

    I needed a Docker image containing the Atlassian SDK (which is built on top of Maven) to build my sample Atlassian plugin. I used a Docker image built by Translucent, available on Docker Hub. Take a look at my bitbucket-pipelines.yml file below, or get the entire source at GitHub.

    image: translucent/atlassian-plugin-sdk
    
    pipelines:
      default:
        - step:
            script: # Modify the commands below to build your repository.
              - cd adminUI
              - pwd
              - atlas-version
              - atlas-mvn clean install
  4. Environment variables allow passing secure parameters and other configurable values. You can write scripts directly in the yml file or invoke nested scripts from your project folder; see the sketch after this list.
  5. Community support is growing and the documentation is good.
  6. Several integration images are available to test, monitor and deploy your code.
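
To illustrate point 4, here is a minimal sketch of referencing a secured environment variable and a nested script from bitbucket-pipelines.yml. The variable name DEPLOY_TOKEN and the script scripts/deploy.sh are made up for illustration; the variable itself would be defined in the repository’s Pipelines settings.

image: node:5.11.0

pipelines:
  branches:
    master:
      - step:
          script:
            - npm install
            - npm test
            # DEPLOY_TOKEN is a hypothetical secured variable configured for the repository;
            # scripts/deploy.sh is a hypothetical deployment script kept in the project folder.
            - ./scripts/deploy.sh --token "$DEPLOY_TOKEN"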

The appeal:

  • Simple to configure and you can start building with a flip of a switch
  • A simple architecture, with the ability to invoke parallel scripts and trigger different Docker images, delivers great scalability
  • Industry-supported Docker images, and the ability to create your own, make this offering flexible

Limitations:

  • No email or HipChat notifications
  • No Jira integration
  • Build speed and concurrency are limited

The product is still in beta, but it will appeal to many thanks to its simplicity and its seamless integration into the Bitbucket UI.

Feel free to share your experience

 

Continuous Delivery

5 tips on developing your first Atlassian server plugin

We at Nimble love to develop and experiment with different technologies and share our experiences with our community. This time around, developing an Atlassian plugin caught our fancy. Atlassian, of course, offers tools for development, collaboration and continuous delivery such as Jira, Confluence, Bitbucket and Bamboo. Atlassian also offers a rich REST API for building integrations and plugins that extend the basic functionality of these tools.

So I started on my journey and found a few gotchas that I think are interesting if you happen to go down that path.

1. The basics

a. Sign up for an account at http://developers.atlassian.com. Atlassian offers two plugin development SDKs: one for their cloud-based versions and one for their in-house, server-based versions.

b. Consider signing up for an account at bitbucket.org if you don’t already have an SCM, or if you want to try out a nice cloud-based code collaboration tool that provides Git repositories to host your code and offers integration with several tool suites, including a cloud-based continuous integration and delivery tool. More on this later…

2. Issues with the tutorial

I started exploring server-side development and followed the tutorial at https://developer.atlassian.com/docs/getting-started/set-up-the-atlassian-plugin-sdk-and-build-a-project. The tutorial does a great job with the setup and gets you going, but it is dated.

Most of my time really went into troubleshooting issues with dependencies in pom.xml and conflicting directives in the plugin config file, atlassian-plugin.xml.

Following a few sub-links in the tutorial:

You are directed to convert the plugin component to a servlet module at https://developer.atlassian.com/docs/getting-started/learn-the-development-platform-by-example/convert-component-to-servlet-module by adding the following lines to atlassian-plugin.xml:

<component key="myPluginComponent" class="com.atlassian.plugins.tutorial.refapp.MyPluginComponent" public="true">
 <interface>com.atlassian.plugins.tutorial.refapp.MyPluginServlet</interface>
</component>
<servlet name="adminUI" class="com.atlassian.plugins.tutorial.refapp.MyPluginServlet" key="test">
 <url-pattern>/test</url-pattern>
</servlet>

Next, there are SAL (Shared Access Layer) components that need to be imported into the project. Look at https://developer.atlassian.com/docs/getting-started/learn-the-development-platform-by-example/control-access-with-sal, which directs you to add the following lines to your atlassian-plugin.xml:

<component-import key="templateRenderer" interface="com.atlassian.templaterenderer.TemplateRenderer" filter=""/>
<component-import key="userManager" interface="com.atlassian.sal.api.user.UserManager" filter=""/>
<component-import key="loginUriProvider" interface="com.atlassian.sal.api.auth.LoginUriProvider" filter=""/>
<component-import key="pluginSettingsFactory" interface="com.atlassian.sal.api.pluginsettings.PluginSettingsFactory" filter=""/>

Compiling this code promptly produces an error message when using the latest version of the Atlassian Spring Scanner and JDK 1.8: the servlet directive and the component-import tags are not compatible.

After a lot of time troubleshooting, I hit upon the reason: the latest version of Spring Scanner supports annotations instead of tags. Take a look at the revamp suggested at https://bitbucket.org/atlassian/atlassian-spring-scanner

The atlassian-plugin.xml file should not contain any of the <component-import> tags; instead, add annotations such as the following to your servlet code:

@ComponentImport
private final UserManager userManager;                      // SAL user lookup

@ComponentImport
private final LoginUriProvider loginUriProvider;            // SAL login redirects

@ComponentImport
private final TemplateRenderer templateRenderer;            // renders Velocity templates

@ComponentImport
private final PluginSettingsFactory pluginSettingsFactory;  // plugin settings storage

This needs to be updated in the tutorial.

3. Use a RefApp

Instead of developing for a particular tool like Jira or Confluence, developing against the RefApp gives you the flexibility to complete your plugin development independently of the tool and then generate a plugin version for the target tool. The RefApp provides the framework shared by all Atlassian applications. This means you can develop your plugin without accidentally relying on dependencies or features specific to one application, or encountering an application-specific bug later on. Developing against the RefApp eliminates guesswork: since all Atlassian applications share at least the framework present in the RefApp, you can rest assured that your plugin will work as expected.
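
Assuming the standard Atlassian SDK setup (the same SDK that provides the atlas-version and atlas-mvn commands used later in this post), atlas-create-refapp-plugin scaffolds a RefApp-based plugin project and atlas-run starts a local RefApp instance to exercise the plugin against.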

4. Leverage Bitbucket Pipelines

One of the other advantages of hosting your code at bitbucket.org is that you can use Bitbucket Pipelines. Bitbucket Pipelines is a new offering that is still in beta. I happen to be part of the beta program and got an opportunity to kick the tires. In short, it is a continuous integration and continuous delivery tool that relies on the presence of a bitbucket-pipelines.yml file at the top level of your project. The yml file with the directives below helped me build my plugin in the cloud. The pipeline runs builds, tests and deployments using pre-configured Docker images. Any time you update your code, the pipeline runs according to the directives in the yml file below.

# You can use a Docker image from Docker Hub or your own container
# registry for your build environment.

image: translucent/atlassian-plugin-sdk

pipelines:
  default:
    - step:
        script: # Modify the commands below to build your repository.
          - cd adminUI
          - pwd
          - atlas-version
          - atlas-mvn clean install

 

5. Develop an Atlassian plugin like it’s 2016

Some interesting ideas to explore once you have the basic tutorial up and running: take your plugin to the next level by retrieving data only through the REST API and rendering the UI on the client side with JavaScript tooling such as Babel and Webpack.

Take a look at the details in the following two write-ups:

https://developer.atlassian.com/blog/2016/06/jira-add-on-dev-2016-part-1/

https://developer.atlassian.com/blog/2016/06/jira-add-on-dev-2016-part-2/

You can find my latest code, which you can use to kick-start your own plugin development, at https://github.com/aniljaising/atlassian-plugin-tutorial

I would love to get your feedback on this article, and feel free to ask any questions…

 

Agile, Continuous Delivery

Developer activity metrics Part 1: Gitinspector

Measuring developer productivity is a topic destined to cause heated discussions. Leaving all sentiments aside, here is a short list of tools that aim to provide analytical insight into developer activity.

  1. Gitinspector – a free, down-to-earth statistical tool for Git repositories
  2. Hello2morrow – builds code dependency graphs and identifies violations of software architecture rules
  3. Semmle – a commercial tool focused on viewing static code analysis results from different angles
  4. BlueOptima Developer Efficiency Analyzer – a commercial tool whose proprietary algorithm translates the number of committed lines into hours of developer effort

What we can measure with Gitinspector:

1. The size of a pull request

Historical commit information by author actually has an interesting twist:

   (insertions + deletions) / number of commits 
                         = the size of each commit

This is an objective indicator of the cries of pain you might have let escape while facing overwhelming pull requests.
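
For example, with purely hypothetical numbers: an author with 4,000 insertions and 1,000 deletions spread over 50 commits averages (4,000 + 1,000) / 50 = 100 changed lines per commit, a far more reviewable size than an author averaging 1,000 lines per commit.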


2. Efficiency of the change

Gitinspector’s “number of rows from each author that have survived and are still intact in the current revision” section is the key here. Generate the report for the time frame of a sprint or a release development cycle, and the tool will display the number of surviving lines for each author. So if, during the sprint, you add or modify 100 lines but only 10 of them survive at the end of the sprint, it’s time to focus on the quality and pertinence of your work.

3. Cadence of the change

Ignore most of the History timeline info and focus on Modified Rows for each month:

Modified Rows / Total Code Base Rows = % of monthly change
% of monthly change * coefficient of the average test time =
estimated time you need to test your changes

Yes, this is an approximate metric, but after a few months on the project you will be able to determine the coefficient and hence estimate the testing time.
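
As a made-up illustration: 5,000 modified rows against a 100,000-row code base is a 5% monthly change; if the coefficient (taken here, purely for illustration, as a 40-hour full test pass) holds, the estimate is 0.05 * 40 = 2 hours of testing for that month’s changes.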


4. Areas of expertise

The “is mostly responsible for” section could easily be called “authors’ areas of expertise” – we always tend to spend more time coding what we like to code, or we end up becoming experts in what we have to code. File extensions indicate language preferences, and file locations show the level of comfort with the code base – after all, making a change to the framework core carries more responsibility than implementing a new handler or a view.


Continuous Delivery

A look at Bamboo through a Jenkins fan’s eyes

For the past 10 years, my projects’ continuous integration was done on Hudson/Jenkins. I can’t say I’d ever been a Jenkins fan, but I hadn’t had a chance to work with alternatives either. Until I stumbled upon Atlassian Bamboo.
At the end of the day, both Bamboo and Jenkins are continuous integration (CI) tools, but while Jenkins is doing its best to support continuous delivery (CD) via newly developed plugins, Bamboo is built with CD in its backbone.

Native support for a Continuous Delivery pipeline

A Bamboo CD pipeline is a chain of stages. Each stage contains one or more jobs, and each job consists of one or more tasks.
For example, a project’s continuous delivery pipeline might consist of the following stages: package, deploy to QA, run tests in QA, deploy to UAT, smoke tests in UAT.
The “Package” stage contains one “Build and Package” job that has “compile”, “unit test”, and “install into Artifactory” tasks.
The “Run Tests in QA” stage contains two jobs that are executed in parallel: “Run Regression tests” and “Run Functional tests”.
All jobs in one stage have to complete successfully before moving on to the next stage.
Stages can be configured to pause for manual input.
As part of a pipeline, a release candidate can be manually approved and promoted to higher environments. For example, a build that successfully passed all stages, including the smoke test in UAT, can be marked as approved by the UAT team and promoted to Prod by the Operations team. Bamboo group permissions ensure correct access to these functions.
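
To make the hierarchy concrete, here is a rough sketch of the example pipeline above. This is not Bamboo configuration syntax (plans are configured through the Bamboo UI); it simply spells out the stage/job/task structure, with the per-stage tasks filled in as plausible examples:

Plan: Example CD pipeline
  Stage: Package
    Job: Build and Package
      Tasks: compile, unit test, install into Artifactory
  Stage: Deploy to QA
    Job: Deploy to QA
      Task: deploy the packaged build to the QA environment
  Stage: Run Tests in QA (jobs run in parallel)
    Job: Run Regression tests
    Job: Run Functional tests
  Stage: Deploy to UAT
    Job: Deploy to UAT
      Task: deploy the QA-verified build to UAT
  Stage: Smoke Tests in UAT
    Job: Smoke tests
      Task: run the UAT smoke test suite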


Many daily CI features are put on auto-pilot:

  1. Choose one of two merge models to avoid manual merging: to trigger a feature build, Bamboo will use either the “Branch Updater” to update a feature branch from the integration branch or the “Gatekeeper” to update the integration branch from the feature branch.
  2. Enable “push on success” to merge a pull request into the feature branch via the Branch Updater or into the integration branch via the Gatekeeper.
  3. Configure Bamboo to automatically delete plan configurations for branches that are deleted in Stash.

Tight integration with JIRA:

  1. Generate release notes for each build and show the difference between the current build and any previous build.
  2. Link issues to dev branches to see real-time build status inside the issue.
  3. Get related build results inside issues, and the issue’s vital stats on the build result in Bamboo.
  4. Track each issue’s deployment status throughout your environments.
  5. See a roll-up of all the JIRA issues and commits included in each release candidate.
  6. Create an issue from a failed Bamboo job so it can be tracked and resolved.


Tight support for Docker build agents and tasks:

  1. Build a Docker image, run a Docker container, and push the image to a Docker registry
  2. Run a Bamboo agent inside a Docker container to ease distribution of changes to Bamboo instances and avoid conflicts between remote agents on the same host
Continuous Delivery

A month with Git or life after SVN

Ours is a typical old-fashioned, large-scale project with a bloated codebase that reached 1,000 lines of code in about 3 years and keeps moving at a velocity of 300 lines a day. Yes, we do have design and code quality issues, but that’s a topic for another day.

One sunny weekend we moved to Git with the GitFlow workflow, covered in detail by the revered “A successful Git branching model” post and heavily supported by Atlassian.

The first dust had settled in a month. We still couldn’t offer “The” answer to life, the universe and everything better than 42, but we surely learned a few things about life in the Git era.


“Two heads are not necessarily better than one… there is always someone to disagree with you.”


Mistake #1: Pick a single Git client to begin with.

We gave the team the option to choose from one of three Git clients: TortoiseGit, Eclipse EGit, or the command-line client. And we ran into a classic example of “The Paradox of Choice – Why More Is Less”.

Every time someone was stuck, they had to either find a buddy from their preaching camp or revise their tool allegiance. That is, if there was someone around to help… If no one who knew how was available, the guys would attempt to find a solution on their own, with any tool, ending up juggling all three Git clients. You can imagine the slope of the learning curve, the level of productivity degradation, and the volume of pain cries.

The majority of folks chose TortoiseGit. It is a breeze to get up and running with, and it feels like home for those coming from TortoiseSVN. And that was the very disaster: people felt they had got the hang of Git without understanding the fundamental differences. The lion’s share of bad merges, lost files, and delayed pushes came from this group.

The curious ones stuck with Eclipse EGit. It can do a lot of what the command-line client can. One limitation we found out quite early was that EGit could not prune dead branches, but we could live with that for a while. The real problem was that the tool was very, very buggy. We had to watch numerous tutorials on YouTube to find workarounds to merge files, resolve conflicts, reset branches, and much more.
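
(From the plain command line, git fetch --prune or git remote prune origin takes care of cleaning up those stale remote-tracking branches.)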

The old school used the command-line client, the all-purpose tool, until it came to conflict resolution, where a graphical client was much needed. At that point, it was a split of schools again: Eclipse, TortoiseGit Diff, or any other stand-alone Git compare tool.


“It may disturb you. It scares the willies out of me.”


Mistake #2: Do not try to stop people from making mistakes.

Barely overcoming the ancient fear of cutting a branch, we couldn’t bring ourselves to easily dispose of them. “What if we need to come back in a week to fix a feature bug?” or “My bugfix hasn’t passed QA yet!” were cried out all too often. Yes, we did study theory and conduct hands-on sessions, but they were largely useless. What worked was to step back, let people keep their original branches for as long as they liked, and then watch them cut a new branch anyway when a bug in that feature was discovered or the fix failed QA.


“Not entirely unlike”


Mistake #3: Don’t count on a tool upgrade to upgrade mindsets

The habit of isolating one’s changes from the rest of the team by delaying syncing with the SVN trunk until the “you’ve got to complete this now” moment meant we updated our local code bases once every week or two. The pain of conflict resolution was significant and regressions were introduced. Yes, there are ways to monitor and discourage such behavior, but once again, we did have serious continuous delivery challenges. Indeed, many of us had accepted that to avoid that pain we had to sync our code bases daily. Feature branches at first glance looked like a fallback to “isolated development”. The point of daily syncing features with the develop branch and bugfixes with the release branch had to be hammered into each coding brain.


“If you want to survive out here, you’ve got to know where your towel is.”


So to get “The answer to life, the universe and everything”, we still have a few mountains to climb:

We don’t see how we can ease the burden of merging bugfix branches into both the release and develop branches. In theory, yes: we push the bugfix into release, and with auto-merge enabled, release is merged into develop. But by then our develop is way ahead of release, and conflicts are a fact of life.

Another unanswered question is competing pull requests. Say I created a perfectly valid pull request, but my peers haven’t reviewed it in time and someone else’s request was merged before mine, turning mine into a conflicting request. The next thing I know, after I have moved on to another super-urgent task, I have to stop my current work, switch to the old feature branch, withdraw the old request, merge the conflicts, and submit the pull request again.

No questions about Git’s benefits, but porting a fast-paced, large-scoped and organically grown project is definitely not a task for the faint-hearted.