Tuesday, 9 July 2013

Software Testing continues on a healthy growth path!

          

I was reading an assessment and forecast report which states that spending on Software Testing has been growing at an estimated 7-8% annually, more than three times the growth of overall IT Services spending across the world.

The driving factors appear to be largely the centralization of the testing function by clients and the adoption of factory-based delivery, with India as the core delivery hub.

The "new offerings" catching on in the industry, as per the report, are data-related offerings, Mobile Application Testing, ERP and COTS Testing Services, QA & Consulting Services, and support services such as Test Environment Management and Test Data Management.

With business demand increasing in niche areas, it is imperative to keep competency and knowledge levels high in order to compete. The coming days look even more challenging, so let us watch and see...
 
 
- By,
Anil Balan
Last Mile Consultants

Thursday, 6 September 2012

Realising the Benefits of Automation through Effective Tool & Framework Selection

With Test Automation implementation gaining momentum and the myriad of Tools (COTS & Open Source) available in the market today, the choice of tool and the corresponding framework is of prime importance. This decision plays a key role in determining the ROI one can expect to see from Automation. Blindly automating a test pack will not necessarily produce returns. How does one go about deciding the “right-fit”?
In general, there is an innate tendency to fall back on existing frameworks – It’s easy, requires minimal technical knowledge and it has worked for ‘n’ applications - so it will work for this one. This thinking might hold good more often than not – however, with this logic, the behaviour of the application tends to take the back seat and this can prove to be very counter-productive. Application behaviour should act as the foundation for building an Automation Framework, failing which you might find the entire automation structure falling apart.

For instance, consider a standalone application built on Java. Assume one chooses to adopt a hybrid framework encompassing the modular and data-driven approaches to automate the application. A few months down the line, the customer decides to change the underlying technology from Java to Siebel. Although the two applications are functionally alike, the existing automation scripts will be rendered redundant, owing to differences in object identification and hierarchy.
Anticipation of inevitable changes to the application and its behaviour will greatly help in creating a robust framework that accommodates application (and client) temperament. Current trends lean towards a more generic framework that will cater to all your automation needs. While this offers greater speed and flexibility in developing automated test cases, care must be taken to design the framework in such a way that
  1. Modifying any aspect of the core script layer will not have an impact on the developed test cases;
  2. There is flexibility to plug enhancements into the framework; and
  3. Maintenance overhead is minimal if the underlying technology of an application changes.
Ideally, this sort of framework should be built by dissociating the core scripting layer from application technology and business functionality. Hybrid frameworks encompassing the Keyword and Data driven aspects (5th gen frameworks) - those that have the capacity to emulate Business Process Testing behaviour - work best in maintaining an Automation suite in turbulent conditions. This also allows one to keep Test Case design in Excel sheets and/or test management tools - where it rightfully belongs.
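The layering described above can be sketched in a few lines. This is a bare-bones illustration rather than any particular tool's design, and every name in it (`WebAdapter`, `KEYWORDS`, `run_test_case`) is hypothetical: test cases live as pure data rows, keywords map business actions to adapter calls, and only the adapter knows the underlying technology - so a Java-to-Siebel switch would mean writing a new adapter, not new test cases.

```python
class WebAdapter:
    """Technology layer: knows how to drive one kind of application.
    Swapping the underlying technology means swapping this adapter only."""
    def login(self, user, password):
        return f"logged in as {user}"
    def search(self, term):
        return f"results for {term}"

KEYWORDS = {
    # Keyword layer: maps business-readable actions to adapter calls.
    "Login":  lambda app, data: app.login(data["user"], data["password"]),
    "Search": lambda app, data: app.search(data["term"]),
}

def run_test_case(app, steps):
    """Core engine: executes data-driven steps. The (keyword, data) rows
    could come straight from a spreadsheet or a test management tool."""
    return [KEYWORDS[kw](app, data) for kw, data in steps]

# A test case expressed purely as data, independent of technology:
steps = [("Login", {"user": "tester", "password": "secret"}),
         ("Search", {"term": "invoices"})]
results = run_test_case(WebAdapter(), steps)
```

Because the test case is just data, modifying the core engine or plugging in a new keyword leaves the developed test cases untouched - which is exactly the design goal listed above.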
Tool selection is slightly more straightforward when compared to frameworks. For starters, a thorough assessment of what the tool has to offer the current test environment is required. Detachment from the tool is vital in this process - assess a tool for what it can and has to do rather than what it doesn't need to do. A higher level of exposure to one tool will only magnify the "shortcomings", so to speak, of the other. In most scenarios the license cost associated with a tool ranks highest in the decision tree; pinning the entire decision on cost, however, without giving tool suitability and extensibility their due credit, is not advisable.
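One simple way to keep license cost from dominating the decision is a weighted scoring matrix. The criteria, weights and ratings below are entirely hypothetical - the point is only to show the mechanics of weighing suitability and extensibility against cost:

```python
CRITERIA = {            # weights reflect importance to *this* project
    "suitability":   0.4,
    "extensibility": 0.3,
    "license_cost":  0.3,   # cost matters, but it does not dominate
}

def weighted_score(scores):
    """scores: criterion -> rating on a 1-5 scale (5 = best)."""
    return sum(CRITERIA[c] * s for c, s in scores.items())

tool_a = {"suitability": 5, "extensibility": 4, "license_cost": 2}  # pricey but capable
tool_b = {"suitability": 3, "extensibility": 2, "license_cost": 5}  # cheap but limited

a, b = weighted_score(tool_a), weighted_score(tool_b)
# Despite the cheaper license, tool A comes out ahead on overall fit.
```

A cost-only decision would have picked tool B; the matrix makes the trade-off explicit and auditable.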
Unlike wrist watches, Tools and Frameworks do not always adhere to the concept of "One Size Fits All" - unless there is some very fine print attached! Think of this more along the lines of buying a pair of shoes: what are you going to use them for? Do they serve that purpose well? Are they expensive? If you know you are going trekking and you buy dancing shoes, you will see the ill effects only after the trek begins - and the nasty fall that follows. Similarly, the tangible and intangible benefits of automation can be realised gradually through a carefully thought-out and well-planned implementation.

Sunday, 15 July 2012

Metrics


What’s the score today?

It goes back to ancient times.  The dreaded question, which you wish was never asked, but at the end of the day, you feel the breathing down your neck, followed by the sound, “How many <defects> have ye?” 

Of course, a variant of this question has been asked of all of us, asking us to provide an update on the progress we made in the day, week or month.  It is a perfectly legitimate question; it is a form of providing information, and without adequate information, any form of management can fail.

Yes, I’m talking about metrics here, considered evil by many and as important as oxygen by others.  One of the famous quotes on metrics is ‘If it cannot be measured, it cannot be managed’.  I have somewhat ambivalent feelings about this quote; it implies that all activities need to be measured and that subsequent activities are decided by these measurements.  In real-world scenarios, activities are not run by numbers alone.  Imagine a scenario where someone prepares a plan based only on a set of measurements, without considering the people involved or their abilities and capabilities.  It is a recipe for disaster.  You can argue that capabilities can also be measured; however, measuring people’s capabilities by numbers is one of the most controversial metrics in our industry.

Don’t get me wrong, metrics by themselves are not bad or evil; some of them are really useful.  Take, for example, the burn-down chart; it is the number one tool for a project manager to assess the progress of development.  At the same time, poorly defined or documented metrics can be misleading.  The statement ‘95% of the test cases passed’ doesn’t convey any useful information to the stakeholder.  Without knowing the context, it can convey a very misleading message to the recipient.
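The ‘95% passed’ point can be made concrete with a small sketch. The numbers and field names below are made up for illustration: two runs share the same headline pass rate, yet carry very different risk once the severity of the failures is considered.

```python
def pass_rate(run):
    """Headline number a status report typically leads with."""
    return run["passed"] / run["total"] * 100

run_a = {"total": 100, "passed": 95, "failed_critical": 0, "failed_minor": 5}
run_b = {"total": 100, "passed": 95, "failed_critical": 5, "failed_minor": 0}

# Both runs report the same "95% passed" headline...
same_headline = pass_rate(run_a) == pass_rate(run_b) == 95.0

# ...but run B's failures are all critical. A stakeholder seeing only
# the headline would judge both runs equally healthy.
def release_ready(run):
    return run["failed_critical"] == 0
```

The metric is not wrong; it is simply incomplete without the context that gives it meaning.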

Metrics can be evil in the following circumstances.

1.      They are misinterpreted without knowing the context.

Imagine a scenario where someone is trying to assess the performance of test team members by analysing the number of defects logged by each person per day.  This is one of the classic scenarios where metrics are misinterpreted.  If person A is logging more defects than person B, it doesn’t necessarily mean that person A is doing a better job than person B.  It may be that person A is working on a really buggy piece of code or on complex scenarios, whereas person B could be working on less complex areas, on already-tested areas, or on a product from a remarkably structured and efficient development effort.

Without knowing the context, results can be gravely misinterpreted.

2.      Poorly defined metrics.

Sometimes metrics are defined without analysing the consequences.  Suppose a task is assigned to the development team requiring them to fix a certain number of defects per day.  Whatever measures you take to ensure code quality, the primary aim of the developers will be to meet the numbers, resulting in inadequate quality.

Dilbert has got it right.

3.      Metrics are generated just for creating beautiful charts, not really adding any value.

I have seen reports with any number of sheets, crammed with data, detailing each and every activity each and every person has done for every hour!  If it requires a data mining team to make any sense of the report you are sending, the metrics are irrelevant.

4.      When they are used for micro-management of individuals.

Sadly this happens, albeit rarely. 

Not all metrics are bad, and they are really useful for making informed business decisions, with the caveat that both the sender and the receiver know what they want and all the parameters are clearly understood.

The following article lists a number of useful metrics for agile management.


The next article emphasises the importance of proper analysis of the data you send.

Well, how many comments have ye?





By -
Finny Mathews, TCoE, RM ESI


Sunday, 1 July 2012

Escaping The Commoditization War


A recent article in the Economic Times by the CEO of a leading IT Services provider claimed that Indian IT services organisations did not innovate and relied very heavily on cost arbitrage. What he implied, and which I believe too, was that IT services offerings were headed towards, if not already at, a commoditised set of services.
To understand commoditisation, let us ask ourselves how we, as providers of services, are perceived in the eyes of our customers. Are they able to distinguish our service from our competitors’? Are we seen as an enabler of solutions, or do we merely respond to proposals? Do we get challenged on price as the bottom line of all discussions?

Having seen the continued growth of the Software Testing industry, I can see how some of our solutions are now way out of tune with customer demands and expectations. In our hunger for volume, we have broken the offerings down into development or test factories and standard processes, and relied heavily on the “my-price-is-lower-than-yours” kind of distinction. More importantly, we have driven innovation inwards - into execution and delivery - and not outwards, into solving the customer’s problem. So, begging to differ from the CEO, we have innovated all right: in being able to deliver with a wide base at the pyramid (thus managing costs), in standardising our offerings so that teams of varied skills can deliver them, in projecting our management of the global delivery model and its governance, and in implementing processes like CMMI, TMMI, etc.
What we notice is that the rest of the providers are doing exactly the same. Take a look at some of the websites of people offering testing services - they more or less say the same thing: build your Center of Excellence; create automation labs; cutting-edge processes - with the same mantra repeated for cloud test labs, mobility test labs, etc. Analyst reports suggest that given the over-crowdedness, vendor service offerings are more or less similar, with size and reach being the key differentiators!

Just another indicator of the fact that we have moved towards a commoditisation of our services, or have at least got into the zone of moving towards commoditised services.

In economic terms, there is no distinguishing factor for the service other than PRICE. There is a certain sameness of service; minimalistic differentiation based on how well you have optimised internally; and a focus on price – and most importantly – the inability to articulate true value.
Don’t get me wrong here, commoditisation isn’t such a bad thing. There is a service being provided at a certain price point which one is able to sustain - however, that price point, or the (lack of) distinguishing characteristics behind it, will soon be under threat.

So how does one escape that?
One of the most difficult routes, but a game changer, is articulating the value of the service. Articulating true value means showing that the service you (and only you) can provide will allow the customer to meet his business targets - whether that means meeting the launch date, gaining market share, gaining brand share, etc. Hypothetically, if you had been a vendor to Apple when it was preparing to launch the iPhone and was desperate to meet a launch date, and you were asked to test it, what would you price that service at? If you were sure of the value the service provided and backed your execution capabilities, would you price it as a Time & Material service, a Fixed Price service, or base it on sales/rejects of iPhones and the cost incurred by Apple to fix them - thus unlocking the value of your service to Apple?

Another way of looking at it is the way we project software testing as a service. We continue to see it as an end-of-development activity. If we projected it instead as a start-of-deployment activity, where the stakes are higher still, we would be projecting its value differently! If so, would you re-strategise your testing when viewed from this perspective? Would you be able to enhance the service to include other business elements - like training for BPO?

A big black hole in software testing services is the absence of a credence service provider. As an ex-CEO explained to me the other day, you could be an eye surgeon specialising only in operations on the left eye, only in retinal surgeries, and only for certain types of conditions (for which there is obviously a market) - the value of the surgeon for these operations would be immense. A bit like Underwriters Laboratories, which puts its stamp on, say, electrical equipment. You trust the equipment, which means you trust that UL has done its job. Unfortunately, there are no clear credence service providers in IT. Will one arise? Not sure - but it is worth aiming for. Indian IT services organisations have a depth of domain knowledge that they highly under-utilise. This depth could be put to good use to become THE credence provider in this space.

Another aspect, going down a well-trodden path, is customer centricity. This cannot be emphasised enough. Even if you are not a credence provider for goods worldwide, if your customer thinks of you that way, then that’s the way to go! Which means aligning your sales team, your delivery management and your test teams to the customer’s business need. If you don’t talk your customer’s language, you lose. If you understand the problem, and are keen to solve it, you win.

Lastly, be the standard, be the brand. We need to evolve to the standard that says, “if you talk mobile testing - our 5-step process will certify you against mobile fraud”. These could be areas that are unexplored at the moment (from a testing perspective, and in the Indian IT context) - Sustainability, Mobility and Accessibility are a few.

To implement these, one can summarise these strategies into 3 bundles –
Extending your services to different segments or markets (so Accessibility testing as applied to the Banking sector – will probably have its own rules & regulations);

Enhancing your offerings, by bundling. 2+2 is no longer 4, the customer expects it to be 5. So if you are talking Performance testing, it means bundling it with Usability. I am not sure there are many people out there who would appreciate a fantastically responsive site that is a bummer to navigate!
Lastly, innovation from a business perspective. Innovation is probably about seeing your testing services in a different light - getting out of the engineering mindset and viewing them from the business mindset. The ability to look at testing as a start-of-deployment activity can lead to possibilities such as: can I then manage your business processes? Can I then train your staff? Can I then take on the entire IT services, now that I know them, and help you reduce your costs?


So there are ways to beat commoditisation; it’s a matter of which end of the innovation cycle we choose to be in - the business end or the delivery end!