CI / CD / DevOps Archives - Quality Spectrum

CI / CD / DevOps

Introducing testers to Docker

June 12th, 2018 | CI / CD / DevOps

In case you are wondering about the blue whale up there, it's the Docker logo, signifying containers being delivered. Before we get into all the techie stuff, let's start with a short and sweet history lesson.

 

Evolution towards Docker

Briefcase computers

Once upon a time we had desktop machines, and everything resided on that one dreaded machine. With laptops we were happy that we could pick up that otherwise big suitcase and take it wherever we wanted. To duplicate our environment, though, we had all sorts of shortcuts, but all of them merely sped up an installation process where EVERYTHING had to be installed on a clean slate.

VMs

Then some clever chaps found a way to 'virtualize' the whole thing and called the result 'virtual machines', VMs for short. Now, on top of our main OS, we could install a whole new 'environment'. The interesting part was saving and transporting that VM. We could save these environments, copy them, share them, maintain histories of them, and life was merry.

VMs in the cloud

Then the cloud came along, and another group of smarty pants had the idea of hosting these VMs as a cloud service. That meant, instead of maintaining a machine with everything I want, I could just 'rent' an environment for as long as I wanted and then give it back. That made creating environments, and especially sharing them, a lot easier and, in some cases, very cost effective.

Docker comes along

Then some folks who probably loved whales (or maybe it was just the designer) came up with another great concept. Instead of creating VMs, which carry their own OS and mean moving around large files, they could run the desired application on the host OS, but in "isolation". That meant the host does not know about our running app, nor does anything else on the host affect our app, which was one of the main reasons for using VMs in the first place. Interestingly, all of this was done through existing concepts in the Linux kernel and file system (namespaces, control groups, and union file systems).

While almost all the aforementioned concepts are still in fashion today in one shape or form, the point here is to shed some light on how we evolved towards Docker.

 

About Docker

Essentially, the following are a few fundamental concepts behind Docker. First is how Docker compares with VMs in how it functions. This commonly referenced diagram illustrates it well:

In summary, Docker environments, known as containers, use the host operating system to run applications instead of creating a new guest operating system. At the same time, all our 3 apps in the image are running in isolation from each other and from the host. This apparently insignificant detail has made a big difference.

Running Docker

Walking through the basic steps of how a container is created gives a better idea of what Docker does.

Docker Engine

As the image above shows, the Docker engine runs on our work machine's OS like any other software or VM player. The Docker engine does all the magic of connecting our app with the required OS resources and bridging communication between the container and the host.

Docker Image

The analogy of a class and an object fits well here. A Docker image is like an object-oriented class, or a template. The image itself has the required programs and can run on any host OS (since the Docker engine sits in between, providing universal connectivity).

The image is used to create the actual containers which run; from our analogy, that would be like instantiating an object from a class.

Docker Hub

An image can be placed on your local machine / network or, more commonly, on Docker Hub (https://hub.docker.com/). Most common software is published by its vendor on Docker Hub. From your machine's command line, you can download an image directly from Docker Hub and create a container from it.
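As a quick sketch (the image and container names here are just illustrative), pulling the official nginx image and running a container from it takes two commands:

docker pull nginx
docker run -d --name my-nginx -p 8080:80 nginx

The second command starts the container in the background (-d) and maps port 8080 on the host to port 80 inside the container.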

Docker File

The configuration of a docker image is placed in what is called a docker file. To keep it simple, the file has details about the image and its configuration. To create an image, one has to write a docker file specifying those details.
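To make this concrete, here is a minimal sketch of a docker file for a hypothetical Node.js app; the base image, paths and start command are assumptions for illustration only:

# Minimal illustrative docker file for a hypothetical Node.js app
# Base image with Node.js pre-installed
FROM node:18
# Working directory inside the image
WORKDIR /app
# Copy the app's source into the image and install its dependencies
COPY . .
RUN npm install
# Command the container runs when it starts
CMD ["node", "server.js"]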

Docker Container

Now, with the docker image (which has its own docker file), we copy the image onto our own machine. To create a container, we can either create one directly from the image, or we can adapt the image to our own needs by updating the docker file and building a new image from it.

From our class analogy, we can create an object from the class directly, or create a subclass inherited from the original class to add / edit any attributes we want and then create an object from the subclass.
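Continuing the sketch above, building our own image from an edited docker file and then 'instantiating' a container from it might look like this (the names are illustrative):

docker build -t my-custom-app .
docker run -d --name app1 my-custom-app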

Volume

An important and powerful concept in Docker is volumes. Every program running in a container has some data to save. By default, a container keeps that data to itself, and once the container is destroyed, its data goes with it. Data meant to outlive a container is kept separately in what are called 'volumes'.

Things get interesting from here. Volumes can be 'shared', meaning we can create volumes which persist even after the container is destroyed. This way, the same dataset can be used by many containers.
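For illustration, a named volume that outlives its containers can be created and attached like this (the volume name, mount path and image name are assumptions from the earlier sketch):

docker volume create test-data
docker run -d --name app1 -v test-data:/var/lib/app/data my-custom-app
docker run -d --name app2 -v test-data:/var/lib/app/data my-custom-app

Both containers now see the same dataset, and 'test-data' survives even if both are removed.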

Benefits for automation engineers and testers

The benefits of using Docker for testers are great. I'm not going into detail about them, just hinting at a few here.

If your AUT already has a micro-service architecture, your team is most probably already using Docker, and it only makes sense for you to do so as well.

Creating new environments

Creating new test environments and tearing them down becomes child's play once things are in place. It's just a matter of running a simple Docker command from the CLI to create a new container from a saved image of your AUT.
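Assuming your AUT image is already saved (the image name here is hypothetical), spinning an environment up and tearing it down is a one-liner each way:

docker run -d --name aut-env-1 -p 8080:80 my-aut-image
docker rm -f aut-env-1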

Saving app state

Saving the state of the AUT becomes a lot easier with containers as well. We can save volumes of the AUT with the desired test data to start from the same AUT state every time. For applications where reproducing issues from the field is a pain, this can be a blessing.

If your AUT does not have a micro-service architecture, you can still run certain portions of it, like the DB, in a Docker container and snapshot your data at required states. How to implement this varies a lot from case to case.
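As one hedged example, the official PostgreSQL image can host the DB portion of an AUT with its data on a named volume, so the state can be preserved and reused across containers (the names and password are placeholders):

docker volume create db-state
docker run -d --name test-db -e POSTGRES_PASSWORD=example -v db-state:/var/lib/postgresql/data -p 5432:5432 postgres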

Multiple execution environments

For automation folks, the same way we can create multiple AUT environments, we can create multiple automation execution environments as well. If your automation tool has a Docker image (most open source tools have one), it becomes easy to spin up multiple execution environments in real time and tear them down once the tests are done.
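As an illustration, the Selenium project publishes standalone browser images, so two isolated execution environments can be spun up side by side and discarded after the run (the container names and host ports are arbitrary):

docker run -d --name chrome-node -p 4444:4444 selenium/standalone-chrome
docker run -d --name firefox-node -p 4445:4444 selenium/standalone-firefox
docker rm -f chrome-node firefox-node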

Jenkins – Installation Background

March 7th, 2018 | CI / CD / DevOps

In the previous article, a brief history, the main features and a very basic control flow were discussed. Moving on, I'd like to go over some important points about installing Jenkins and running it for the first time. There are quite a few tutorials covering this part, but I feel some lack an introduction to the new terms involved, which is what this article is all about. Also, most tutorials start directly with the technical details. I've always been more comfortable knowing the 'what' and 'why' before going into the 'how'. Keeping that tradition, let me go over the 'what' and 'why' first.

Pre-Requisites – Java and Chocolatey

Jenkins is written in Java and therefore requires the JDK (Java Development Kit), including the compiler and runtime. There are a lot of tools out there which help install Java; the one I like most is Chocolatey. They describe it as 'the package manager for Windows'. Essentially, it makes installing different packages (software / libraries) a lot easier. Once you get the hang of it (which is very simple), it saves a lot of grief and time.

To install Java you can go to its website, download and execute the files, and then add the system variables. Or you can install Chocolatey and skip a few steps. For the Chocolatey installation, you don't download a file. Instead you just run a batch command and Chocolatey installs itself! The installation page has all the details. Open the command prompt in administrator mode, run the command, and you're all set.

Now installing Java is a piece of cake. Guess what you have to do? Just run a command in cmd again. You can search for 'java' to see all the variants; here we would run the command:

choco install jdk8

'choco' makes it a Chocolatey command; 'install jdk8' asks it to install version 8 of the JDK package. How easy was that! The best part is, any future installations are going to be just as easy. For instance, installing Maven, Angular or Node.js is just one command away.
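To illustrate (the package names are as listed on the Chocolatey repository and may change; check chocolatey.org for the current ones):

choco install maven
choco install nodejs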

Installation methods

If you go to the Jenkins website's download page you'll see many different installation types. Jenkins can run on a lot of platforms, including Docker. For Windows users, there are two main options: 1) use the Windows installer, or 2) get the 'WAR' file.

The 'WAR' file is Java's 'Web Application Archive' format, a collection of different Java files including JARs, JSPs, XML files, web pages and other resources. In my experience, Jenkins works pretty much the same either way operationally; however, the method of running it and its portability are slightly different.

With the Windows setup file, run the installer and follow the wizard; that's all there is to it as far as installation goes.

Using command prompt

When I started learning new technologies, I noticed the savvier users were using the command line versions of most applications. At first it felt difficult and less intuitive. However, with time I realized that once you get the hang of it, things get quite easy and fast. With Jenkins there isn't much of a difference except how you start the application, but doing it this way makes things a lot easier.

To run the WAR file, download it and open the command prompt. CD into the directory where the file is placed and type:

java -jar jenkins.war

That's it. Jenkins initializes itself and starts running; that's all there is as far as 'installation' goes. If you are wondering, you're right: Jenkins does not need to be 'installed' here, it runs as an executable. To close the Jenkins server, press 'CTRL+C' in the command prompt running Jenkins and the server will shut down. To restart, run the same command on the WAR file as before.

Local server

Jenkins deploys a local server which is accessed through 'localhost:8080'. A localhost 'server' is used to 'serve' web pages / an application when requested by a client, pretty much the same way a web server does. The only difference for the client is the way we access it. A web server requires entering a URL which translates to the IP of a server machine. For our localhost, no IP is needed; it's just 'localhost', with 8080 being the port used for access.
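One handy variation: if port 8080 is already taken on your machine, the WAR file accepts an alternative port at launch (9090 here is just an example):

java -jar jenkins.war --httpPort=9090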

Back in the day, installing a local server was not a behind-the-scenes process and used to take a while. With layers of abstraction and advancements in development environments, this now happens behind the scenes without the user knowing what it all entails.

Initial Setup

From here on, things are quite straightforward. One first creates an admin user, which requires entering a generated password to access the console. To obtain the password, follow the instructions on the page (in this case, copy the password from the given file):

Jenkins 2.0 installs quite a few plugins as part of the default installation. They are very helpful, like some of those highlighted below.

Jenkins uses XML files for configuration. By default these files are placed at C:\Users\<logged in user>\.jenkins\ and can be viewed (and edited) to inspect and update Jenkins' configuration.

Basic Security

Jenkins being deployed on a local 'server' means the application can be accessed from anywhere on the network, which leads to the need for 'security'.

You can have a lot of users logging in to Jenkins with different privileges (and even access without logging in). Once the plugins are installed, the first Admin User creation dialog appears. Pretty straightforward.

That's not all for security. Jenkins also has a Global Security configuration page. Here are a few recommended settings:

Read only access

For teams triggering automated tests through Jenkins this is a very powerful feature. You would want wide access to the reports from your automated runs, which gets a lot easier with this feature. If checked as shown below, anyone on the network can access the server and 'view' all jobs and test results, which in my opinion is the way it should be.

If you would like to have intermediate user accounts (between the admin and read-only users), you can add / manage user accounts here:

Docker friendly

Before I end this, I must mention that Jenkins runs on Docker too. That means you can have multiple Jenkins versions running, or one version with multiple backups of it, which takes your environment management to a whole new level.
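For instance, the official Jenkins image on Docker Hub can be started with a couple of port mappings (the LTS tag shown is one common choice; 50000 is the agent port):

docker run -d --name jenkins -p 8080:8080 -p 50000:50000 jenkins/jenkins:lts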

 

We talked about the pre-requisites of Jenkins, ways to install and run it, using the command prompt, plugin installation and default security. This guide might not serve well as a stand-alone piece, but it should be great reference material while going through the process.

Jenkins – A quick introduction

February 21st, 2018 | CI / CD / DevOps

Most testers have heard about Jenkins, and an increasing number of people are using it, or looking to use it, in their automation and continuous integration efforts. The popularity of Jenkins is unquestionable: it is still the most widely used CI tool, thanks to being open source and to the ecosystem built around it providing integrations with lots of platforms and libraries.

In this post, I thought to create a quick read on what Jenkins is and its main features. I'm not using any technical terms from the tool, keeping it simple to make it easier for someone new to Jenkins.

Why Jenkins?

But before going into it, why did we need Jenkins? The process of deploying software (on multiple environments) can be tedious. Doing it automatically frees a lot of time and effort which can be used elsewhere. The main problem solved, therefore, is building and deploying software on multiple environments throughout the different SDLC stages, and running tests where needed.

History

I naively thought CI was a more recent phenomenon. Jenkins, however, was first developed in 2004 at Sun Microsystems under the name 'Hudson'. Over the years, and after legal battles, the name changed to Jenkins, along with many changes in ownership, until the community became an independent project. Surely this has been a vibrant and flourishing community, resulting in the most widely accepted CI tool for many.

What is Jenkins

It's a CI tool. That sentence provides no value in explaining what the tool does, so let's try again. It's a tool which can build and deploy software and run post-deployment tasks. A picture speaks a thousand words, so let me 'draw' a thousand words to elaborate:

This process can start from the different types of triggers provided. The AUT's code is fetched, built and deployed, tests are run, and the cycle either moves on to the next stage to repeat, or waits for a decision maker to confirm the next transition. With time there is a lot more you can do with Jenkins through its many plugins; however, the main functionality revolves around these few main stages.

Jenkins does not bind the user to concrete stages (at least in Jenkins 2), so these headings do not represent exact phases you must use in Jenkins. Rather, they provide an easy understanding of how it's generally done.

Triggers

Back in the day, someone would confirm with stakeholders that we were ready to move to the next stage of the SDLC, then the release manager / team would deploy the next version from source control onto the test environment. Once this stage was completed, the next stage would require another round of correspondence and confirmation from a bunch of people, followed by hours-long activities to patch the new environment.

Jenkins provides a few basic triggers used to start rolling our 'Continuous Integration' snowball down the hill. Triggers can be defined against new code changes / a new version in our source control. As soon as a new version is found by Jenkins in the SCM (Source Control Management), it triggers the process. Other common triggers are time / date specific, or other configurations the tool can poll for. Mostly, though, it's based on new versions in the SCM.

Build Process

The first thing once the process initiates is to build the code. For larger applications this can mean a lot of small steps. To keep it simple: you check out the latest queries for building the database and the latest code repo for your project, and build (compile and package) the code. How the processes are designed varies from team to team, so there is no single best practice IMHO (there certainly are bad practices, which are beyond the scope of this article).

In many cases this is not a complex activity; a few configurations and you'd be good to go, especially if you are using build automation / package manager tools like Maven or npm. Checking out the code is one config line, another compiles and builds that code, and you're done for this stage.
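As a rough sketch (the repository URL is hypothetical, and Maven is just one example of a build tool), the whole stage can boil down to:

git clone https://github.com/example/my-aut.git
cd my-aut
mvn clean package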

Deploy

This is going to vary a lot; it might not even be considered a separate stage, and you'd just add another step after building the code. Or, if you are creating a new environment at runtime for your tests (perhaps using Docker), this would be a separate stage. The purpose here is simply to deploy your AUT in the environment you want it tested in.

An important point to mention: if you have automation scripts to run (which should be the case), checking those out and 'deploying' them is also part of this step. These tests might not be installed on a server, but certainly, before you run them, they will be fetched from source control. Therefore, if you are building your environment, any pre-requisite tool deployments and the test code / tools needed are part of the process here as well.

Test and Package

Once you have your environment ready and tests in hand, it's time to run them and find out if we have any problems. Jenkins has its own reporting process which can be customized using plugins. This allows seeing the results of all the tests executed, and they can be consumed in different forms, including being sent as email notifications. You can also archive your code base at the end of this stage if needed for revisioning purposes.

Agents (Slave nodes)

I too dislike the slave terminology, but it makes things easier to understand. Jenkins can interact with a lot of tools sitting on different machines / environments. Typically, a decent Jenkins setup might be interacting with quite a few environments at the same time, giving instructions and getting feedback. This is how you can have parallel test runs using Jenkins.

Environments can be created using different methods, ranging from physical machines to VMs to containerized environments (e.g. Docker), or by using SaaS tools like Sauce Labs / BrowserStack, or a Selenium grid. The pros and cons of each, and how to decide what to use, would be a separate topic to discuss.

This example might not be practical, but it illustrates the different ways these environments can be created and tests executed.

Plugins and integrations

The thing I like best about Jenkins is the number of plugins available to support many tools and libraries. All commonly used platforms provide plugins for Jenkins, including test libraries and tools. Other plugins can be used to further configure Jenkins and to perform other complex tasks, including customization and export of test results.

I must mention here: it's a good idea to make sure whatever tool you are using around the CI stages supports Jenkins. Most tools do, but if you get stuck with one that does not, it is not far-fetched that you'd have to look for a replacement which does.

Jenkins 2.0

There has been a lot of advancement in Jenkins since its earlier versions, especially as CI becomes a more common practice across the industry. One major difference is the 'pipeline' method of creating 'jobs', which was not available in earlier versions. The traditional 'freestyle' job has specified stages in a specific sequence; the pipeline job available in Jenkins 2.0, however, is script based, written in Groovy. This allows a lot more flexibility and customization of the process.
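As a rough, hedged sketch (Maven is just an example build tool here, and the stage names and report path are illustrative), a declarative pipeline script looks something like this:

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // compile and package the AUT
                sh 'mvn clean package'
            }
        }
        stage('Test') {
            steps {
                sh 'mvn test'
            }
        }
    }
    post {
        always {
            // publish JUnit-style results so Jenkins can report on them
            junit '**/target/surefire-reports/*.xml'
        }
    }
}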

Other considerable changes in version 2.0 include a greater set of plugins installed by default (making first-time use of the tool easier) and many supported plugins, including the Blue Ocean plugin (which creates a different GUI for the tool). Anyone starting off now should use version 2.0 to take advantage of all the features.

The Magic of Continuous Testing

September 11th, 2017 | CI / CD / DevOps

 

Have you ever reached release time for your product only to find major bugs coming out of testing? Does this sound like what you face on a routine basis? Trying to drill into the root cause of this often ends up inconclusive; there is no one practice or person to be held responsible in the end. This is where continuous testing comes in to save the day.

This is a huge topic and one could easily write a book on it. This post, along with this video, attempts to highlight the very basics I feel are most important.

The bug bottleneck

I call the last minute barrage of bugs 'the bug bottleneck'. To some it seems the only obstacle standing between them and releasing a new product version is these testers coming up with last minute bugs. Why were these issues not found earlier, and why did they have to be reported a day before release?

The bug bottleneck develops not over a few days; it's a culmination of issues which could have been prevented from the very first sprint cycle. The more complex and large the product is, the more difficult it becomes to test it adequately, causing more undetected bugs. Here is a typical sprint cycle with manual testing at the end:

The dev team implements the new feature for the sprint, along with some fixes. This introduces new bugs, some directly related to the implementation of the new feature and some ripple effects of the new feature / fixes. The testers should now ideally test all the new features, with regression around each one of them, in detail. However, there is usually not enough time to test the features AND do regression around them during a sprint, especially in a complex application. Bugs slip through this stage and keep piling up. A few sprints down the line, nearing release dates, there is an army of undetected bugs which start getting caught as soon as the regression cycle hits, which is then too late.

What is continuous testing?

In our example, testers were not able to cover enough ground from the very first sprint cycle, causing the meltdown at the end. Adding more testers is not a sustainable solution either, especially for large applications. The solution is automating as many tests as possible and running them as part of the sprint process.

Continuous testing is defined in many ways. I understand it to be a process with testing ingrained into every step of the sprint, including development, while continuously evolving the tests based on feedback about what works and what does not. Testing is therefore a 'continuous' process, not just an activity done when code is pushed to QA.

It starts with running automation during development and testing. As soon as a developer checks in his / her code, tests for at least the changed areas of the application are executed, giving the developer immediate feedback if anything fundamental is not working. Similarly, when testing kicks in, automation should run a regression on the application while manual testers focus purely on understanding and testing the new features.
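As a rough illustration (not a prescription), a CI job that polls source control and runs a fast suite on every change could look like this in a Jenkins-style pipeline; the polling schedule and the 'smoke' Maven profile are hypothetical:

pipeline {
    agent any
    triggers {
        // poll source control for new commits roughly every five minutes
        pollSCM('H/5 * * * *')
    }
    stages {
        stage('Fast feedback') {
            steps {
                // run only the quick, high-value checks on each check-in
                sh 'mvn test -Psmoke'
            }
        }
    }
}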

Why Continuous Testing

This is the next step in the evolution of software development processes. The last big change was Agile, which replaced waterfall. There were some forms of agile practices already in place, like iterative models; however, the agile methodology was classified as a different species.

We have reached the next step of that evolution, and it exists for the same purpose agile came into being: reduce time to market and increase quality. Like the iterative model, the fundamentals of this next step were hidden in agile, but they were not prominent and were being neglected, necessitating the next step. This next step is known by many names, including continuous integration, continuous delivery, DevOps and continuous testing. All of them have different meanings and implementations, but they work towards the same purpose: reduce time to market and increase product quality. To truly have a DevOps model, one needs continuous delivery, which needs continuous testing, which needs continuous integration, in the end creating one interconnected system serving the same purpose.

Back to our example, with continuous testing using automation in place, our sprint would look something like this:

With issues being captured during development, coupled with a higher catch ratio during the testing cycle, a smaller number of bugs slip past our net. Come regression week(s), we find far fewer issues during that time, and hopefully no major ones.

Requirements for Continuous Testing

There can be a very long list here; the few I found important for one of the products I worked on are discussed briefly.

Increase test coverage

Unless we have decent coverage of the application, the purpose of capturing bugs from code changes will not be accomplished. Today's applications are far more complex and cannot be automated from the UI alone. Careful planning and ingenuity are needed to achieve decent coverage.

Make results meaningful

Tests a developer would like to see when adding a feature / updating code should be there. Yes, unit tests reflect that, but API and some UI level tests are helpful too. Similarly, for the tester, an automation batch run should augment the tester's wish list of areas to test. Therefore, feedback and contributions from both these groups are vital.

Secondly, the results should come in time. If we send the developer a report of 2 major bugs in a feature he / she implemented a week later, that's of no use. The results should be there within hours, while the implementation details are fresh in mind. The tester, too, should base their tests on the automation report. For instance: "Areas A, B and C passed, I can test a slight variation there and that would be enough. D and E failed, probably there are more issues around those." Unless the report is delivered BEFORE the manual tester starts planning their tests, it's going to be late.

Test before check-in

Before a piece of code is checked in, it should be tested through the automated scripts and added to the branch only when passing. This is easier said than done, since it requires a lot of pre-requisites: the continuous integration pieces implemented, automated tests running efficiently, and processes institutionalizing the branching workflow. However, once done, the benefit is far beyond the cost.

Test quickly

That was the whole idea, right? To test quickly, with quality. If a tester is able to utilize the automation results as mentioned, it helps reduce the testing time per issue, allowing them to cover more areas quickly. This freed-up time should be used to shrink the test cycle and reduce time to market.

A word of caution here: by no measure does this mean cutting regression in half (or the team, for that matter); rather, it means increasing the number of releases per year and having a quick turnaround.

Collaboration

To get the desired results, all departments need to coordinate with one another, sharing what worked and what can be improved. This involves not only development and QA, but also support, sales and other departments: giving information on how updates went over with customers, what to prioritize next, which areas need more automated tests, what should be included in manual tests, and so on.

Continuous testing is going to be an ongoing journey, not a destination. For it to be continuous, it necessitates evolution and continuous improvement. With this being an emerging concept, and with mostly early adopters having it fully implemented, I feel a lot of evolution is in store around implementing this idea and how to do it well. One thing is certain: the teams, products and companies that adopt this early on will be the most successful.
