Anyone who has written automation scripts has faced flakiness, and we all hate it to the bone.
But everyone defines a flaky test a bit differently.
For me, a flaky test is one that, run multiple times under the same conditions, gives different results.
How would you define flakiness?
#QsDaily #automation #flakytests
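That definition can be sketched in a few lines. This is a minimal, hypothetical illustration (the `flaky_test` function and its 70% pass rate are invented stand-ins for a real test with, say, a timing or ordering dependency):

```python
import random

def flaky_test():
    # Hypothetical test that passes ~70% of the time, standing in for
    # a real test with a hidden timing or ordering dependency.
    return random.random() < 0.7

def is_flaky(test, runs=20):
    # Run the same test repeatedly under the same conditions;
    # mixed results mean the test is flaky by the definition above.
    results = {test() for _ in range(runs)}
    return len(results) > 1
```

Rerunning a suspect test in a loop like this (some CI systems offer it as a built-in "retry/repeat" option) is a common first step to confirm flakiness before digging into the root cause.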
A test prone to give false negatives. If the test gives false positives 100% of the time, you wouldn’t know it (you’d assume it’s always passing and working). If it gives false negatives, it’s telling you the feature being tested is broken when in reality it’s not; it may just be that the test needs to be tweaked to make it more reliable (assuming it’s worth the cost of doing so).
True, having false positives is the worst, no doubt about that. However, if we have too many false negatives, AKA flaky tests, that dilutes the value of automation too.
And as you said, it needs to be investigated whether fixing the issue is worth the cost.