Ali Khalid, Author at Quality Spectrum


BDD Workshop Main Points

By Ali Khalid | March 3rd, 2020 | daily post

A few important things I stressed while delivering a BDD / Three Amigos session workshop today:

Cycle of BDD
1. Collaboration to clarify system behavior
2. Formulation of behavior in business terminology
3. AT THE END, Automate documented behavior – WHERE POSSIBLE!

BDD != Cucumber !!

Write stories in the format: As a &lt;role&gt;, if I do &lt;action&gt;, I should see &lt;outcome&gt;

Slice your stories vertically: each story should have an action and a corresponding behavior

Write stories, rules & examples clearly, so someone can understand them even 6 months later
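To make the story format concrete, here is a tiny sketch of my own (the helper and the example story are invented for illustration, not part of the workshop materials):

```python
def format_story(role: str, action: str, outcome: str) -> str:
    """Render a user story in the 'As a <role>, if I do <action>,
    I should see <outcome>' format from the workshop."""
    return f"As a {role}, if I do {action}, I should see {outcome}"

# A vertically-sliced story: one action, one observable behavior
story = format_story("registered user", "tap 'Logout'", "the login screen")
print(story)
```

The point of keeping the template this rigid is that every story forces you to name both an action and the behavior it produces.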

#RSQ #BDD #ThreeAmigoSession #Agile #TestAutomation

Don’t shoehorn DevOps everywhere

By Ali Khalid | February 29th, 2020 | daily post

It might seem common wisdom that every software product should be able to deliver at a daily / weekly cadence.

However, Gartner's Pace-Layered Application Strategy would beg to differ.

Some products require a lot of experimentation and rapid adaptation according to the changing surroundings.

However, for other products, being precise and efficient is far more important.

And that's, in a nutshell, what the Pace-Layered strategy talks about; it divides software products into three types:
– Systems of innovation (revolutionary products)
– Systems of differentiation (improved products)
– Systems of record (efficient & legacy products)

#RSQ #ProductStrategy #DevOps

http://quality-spectrum.com/intro-to-pace-layered-application-strategy/

KPIs & Stats

By Ali Khalid | February 26th, 2020 | daily post

Engineers usually dislike talking about KPIs and stats, and you hardly see any useful ones.

Managers mostly want to talk about just stats and KPIs, and feel they are all valuable.

IMHO KPIs & stats are required to make decisions based on objective data, so they are needed.

However, understanding that stats don't give you the whole story is vital.

To engineers I say: better that you come up with a stat, or someone with no clue of what you do will come up with one.

To management I say: use KPIs & stats, but do factor in qualitative measures and listen to your engineers; not everything can fit on a spreadsheet.

#RSQ #RedefiningSoftwareQuality #KPIs #Transformation

API Automation Learning Path Summary

By Ali Khalid | February 19th, 2020 | daily post

A summary of the automation learning paths I designed over the past few months.

API Learning path – Beginner level:

> Testing fundamentals
– BDD
– Risk based testing
– Test strategy fundamentals

> Programming
– Basic Java

> Technical knowledge
– How do APIs work
– JSON & XML structures
– Swagger fundamentals

> How and why of automation
– How to do automation the right way

> Automation tools
– Rest Assured
– Basic API framework

Each course has online content or a workshop designed along with assessment material.

#RSQ #Automation #Training

Automation Guild 2020 – Big data 101 & Importance of Automation

By Ali Khalid | February 15th, 2020 | Uncategorized

Among all the online conferences, Automation Guild is the best automation conference I know, and this year it was a pleasure to speak at it again. If you are in automation, I think this is a must-attend conference. In this post I'll give a brief overview of my talk at the conference.

The subject of big data is exciting, but I've felt there is a general lack of testing maturity in the space, I guess because the industry itself is comparatively new and still evolving. The talk was to share some basics about big data and how testing & automation work in this field.

About big data

The evolution into big data has been fueled by technologies which have made processing lots of data at high speeds easy, and most importantly the ability to react to the insights very quickly. We discussed all these factors quickly, summarized in this image:

What are big data projects all about

The objective of big data projects is to gather insights / analytics to understand and solve problems. For that to happen, data from a few or many sources may be needed to run analytics on. Acquiring the data is usually not a big problem; getting it into a structure where it all makes sense collectively is the challenge.

That's where the concept of a data pipeline comes in. The data is passed through different stages of 'transformation' / ETL (Extract, Transform, Load) to make it more usable for the problem at hand.
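As a minimal sketch of one such ETL step (the data and the quality rule are invented for illustration; real pipelines read from and write to actual data stores):

```python
import csv
import io

# Extract: read raw records from a CSV source
# (an in-memory string here, standing in for a real file or table)
raw = "id,amount\n1,10.5\n2,\n3,7.0\n"
rows = list(csv.DictReader(io.StringIO(raw)))

# Transform: apply a simple data quality rule (amount must be present)
# and convert string fields into proper types
clean = [
    {"id": int(r["id"]), "amount": float(r["amount"])}
    for r in rows
    if r["amount"]
]

# Load: in a real pipeline this would write to a warehouse table;
# here we just collect the transformed records
warehouse = {row["id"]: row["amount"] for row in clean}
print(warehouse)  # → {1: 10.5, 3: 7.0}
```

Each stage is a natural place to hang a data quality check, which is exactly where most big data testing effort goes.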

Testing in Big data

Just as web applications have some standard tests, big data has some common tests too. However, they are nothing like the ones we run against web applications.

In data projects, all we are dealing with is ‘data’, data in and data out. The challenge is transforming the data as expected and building models which actually solve our problems. Therefore, most testing in this industry revolves around ‘Data Quality’.

Within the three stages of the data pipeline, there can be many ETL activities happening. For each ETL, it is important to decide what type of data quality checks are needed. In the talk we walked through a basic process for determining that.

Automating tests

Because of the kind of tests we have in the big data space, automation also works quite differently. It's more about fetching sets of data and checking whether the right logic / business rules were applied. Some data platforms provide the capability to do that easily; if not, the technologies used to build the ETL flows are also used to test them.

Talking about languages, Python is used widely because of its data processing capabilities. These scripts are used within workflows to do the required validations. The most common validation is checking whether all data has been copied from point A to point B: while moving data from one place to another, files or records sometimes get missed or truncated. This is just one of the six data quality dimensions.
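A minimal sketch of that completeness check (the record ids are invented; in practice they would be fetched from the source and target systems):

```python
# Completeness check: every record id extracted at point A
# should arrive at point B after the copy.
source_ids = {101, 102, 103, 104}   # ids read from the source system
target_ids = {101, 102, 104}        # ids found at the target after the load

missing = source_ids - target_ids   # records lost in transit
extra = target_ids - source_ids     # records that appeared unexpectedly

if missing or extra:
    print(f"Completeness check FAILED: "
          f"missing={sorted(missing)}, extra={sorted(extra)}")
else:
    print("Completeness check passed")
```

Comparing id sets rather than raw counts is a deliberate choice: counts can match even when the wrong records were copied.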

Data quality across the pipeline

In the talk we walked through a sample pipeline and explained the kind of tests that can be done and how these tests would be executed. The image below summarizes all the checks discussed. The data pipeline was also expanded to show activities happening within the three stages and how they are tested.

If you were not part of Automation Guild, you can still get access, since all the talks and Q&A sessions are recorded. This talk would serve well for those looking to get into, or expand within, the big data field from a testing perspective.

Be ready to learn and be Hands on

By Ali Khalid | February 13th, 2020 | daily post

If you're in IT, you have to be open to learning ALL THE TIME.

I was reminded of this while developing an app in PowerApps today.

For a massive training program, I was trying to figure out a more intuitive way to do registrations.

After a few days of searching I couldn't find anything in-house I could use, so I had three options:

1) Use whatever mechanism we have, even if it's shitty, 2) wait for someone to get free and do it, or 3) do it myself.

I ended up learning PowerApps and building a crude version of the app in a day.

I could have easily said we don't have a solution, let's just call it quits.

Instead I learned something new, enjoyed it, and made others' lives easier when registering for the training.

Bottom line: if you're in IT, always be ready to learn and work hands-on.

#QsDaily #Learning #Coding

Testing synergies across the customer journey

By Ali Khalid | February 10th, 2020 | daily post

A major shortcoming in large software enterprises is:

Lack of a holistic test strategy across the customer journey

Scrum teams are mostly thinking within their small box,

Worried about getting just the next story out the door, and I don’t blame them. That’s what they are judged on.

But stitching all that together and building synergies between teams should be a prime objective of testers.

The customer does not care about what happened with one story

The customer deals with the complete end product, which is a conglomerate of multiple products with many stories

Let’s not be penny wise pound foolish and ignore investing in the big picture.

#QsDaily #Testing #TestStrategy #TestingAcumen #RSQ #RedefiningSoftwareQuality

Designing Test Automation Training

By Ali Khalid | February 9th, 2020 | daily post

Busy designing an automation training program, here are the six main areas:

1. Testing Acumen
2. Programming fundamentals
3. Learning the tech stack
4. Building automation enablers
5. CI / DevOps basics
6. Automation tools and framework design

Drop any of these, and you wouldn’t have people who can build useful automation practices.

I know that’s a lot to do and to prepare, but that’s just me. Mr. Perfectionist.

I want anyone who goes through the training to be self-sufficient.

Test data for Big Data projects

By Ali Khalid | February 6th, 2020 | daily post

Got a great question in my talk’s Q&A at the #AutomationGuild 2020 :

How to create test data for #BigData projects?

Generally there are three types of 'test data management' you want to focus on:

1. Mocks / stubs
2. Generate synthetic test data
3. Masked production data

For big data, the most important one is masked production data.

You would also need to create synthetic data, but that alone will not be enough to see whether the model is working properly.

So make an effort to get masked production data to have greater confidence in your data pipeline and data models.
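As a minimal illustration of masking (the record and the hashing scheme are my own invention here; real masking tools handle far more fields and formats):

```python
import hashlib

def mask_email(email: str) -> str:
    """Replace an email with a deterministic, irreversible token.

    Deterministic hashing keeps referential integrity: the same
    customer masks to the same token in every table, so joins in
    the data pipeline still work after masking."""
    digest = hashlib.sha256(email.lower().encode()).hexdigest()[:12]
    return f"user_{digest}@example.com"

record = {"order_id": 42, "email": "jane.doe@corp.com", "total": 99.9}
masked = {**record, "email": mask_email(record["email"])}
print(masked["email"])
```

The key property is that the masked data keeps the shape and relationships of production data, which is exactly why it exercises the models better than purely synthetic records.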

#QsDaily #BigData #TestData #Testing

Disliking long written test cases

By Ali Khalid | February 1st, 2020 | daily post

Over the years I’ve started to dislike writing formal and long test cases

We need test cases, but written in a better way..

Traditionally we write them detailing each step along the way,

The idea was that even someone who is not a domain expert could use them, and that we wouldn't miss a step and would know exactly what was tested.

These monster documents become shackles: time consuming to write, a nightmare to update

Instead I prefer writing the 'test scenario' in brief, with the most important check / validation, over a full-blown detailed list of steps.

Also, importantly, using mind maps instead of test case management tools / documents

One does sacrifice the details this way, but it makes things much quicker.

Also provides a great holistic view of the many test types / scenarios
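To show what "scenario plus the most important check" looks like in practice, here is a sketch of my own (the discount rule and function are invented for illustration):

```python
# Scenario (one line): "Checkout applies the 10% loyalty discount"
# Instead of scripting every click, the test records only the
# scenario name and its single most important validation.

def apply_loyalty_discount(total: float, is_loyal: bool) -> float:
    """Stand-in for the system under test: 10% off for loyal customers."""
    return round(total * 0.9, 2) if is_loyal else total

def test_checkout_applies_loyalty_discount():
    # The one check that matters: the discounted total is correct
    assert apply_loyalty_discount(100.0, is_loyal=True) == 90.0

test_checkout_applies_loyalty_discount()
print("scenario check passed")
```

Everything else (navigation, setup, intermediate screens) is left to the tester's judgment, which is where the time savings come from.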

#QsDaily #Testing #TestCaseWriting #LeanThinking
