
Sunday, January 24, 2016

My Quora answer: Is software quality assurance a good career? Why or why not?

A few weeks back, I answered a query on Quora about the software quality assurance profession.
It became one of the most-viewed answers in the SQA topic and has been getting a lot of upvotes recently.
Hence, I thought I'd share it here with the readers of my blog.

Is software quality assurance a good career? Why or why not?

Let me answer based on my 12 years of experience as a software quality assurance engineer.

I was unable to pursue a bachelor's degree with computer science as a major. But I badly wanted to join the software industry in whatever position someone could offer me. Back in 2003, I, along with a group of my university batch mates, was interviewed by a software services provider for a position called "Associate software quality assurance engineer". I had no idea what software quality assurance meant. I read two chapters in the software engineering books by Ian Sommerville and Roger Pressman; that was the only knowledge I had about software quality assurance when I faced my first interview. I got through the interview and landed my first job.
Within the first few weeks of my career, I realized that an end-to-end understanding of the software is the key to testing it effectively. I spent long hours learning the software that I was asked to test. Eventually, I was able to uncover many critical bugs based on my understanding of the software as well as its ecosystem. That helped me build trust with fellow teammates, including PMs, SDEs, and others.
I advanced in my career simply due to my approach towards testing. In a few years, I switched companies and led a QA team at the open-source enterprise middleware provider WSO2. The very testability of middleware was a challenge, but I was able to research and find interesting approaches to make untestable software testable and to fill bug tracking systems with hundreds of real bugs.
With this, I earned the trust of fellow engineers and felt that I was adding value to the team. I was flexible and context-driven in meeting project needs. I never adhered to the process standards taught in some software QA certifications, but instead spent time learning new trends and technologies specific to my domain.
I still spend time reading whatever is relevant to QA and share my learnings through my blog. With my passion for and attitude towards software quality assurance, I realized that I should move forward and explore how software is built and tested at top tech companies. My attempt at exploring was successful.
Nearly 1.5 years back, I was contacted by Amazon for a software quality assurance engineer position. I decided to take the opportunity and proceeded with Amazon. I was offered the software quality assurance engineer position after an extremely tough interview process. I quickly adapted to the environment but did not change my approach towards testing. I started at Amazon by spending long hours learning the applications that I was assigned to test. I mastered the applications as well as all the services that made up the system. That way, I was able to uncover critical bugs and help deliver software quickly, which eventually made me a key contributor to the team.
Software quality assurance has never been a boring job for me. In some web applications that involve rich UI interactions with repetitive clicks through multiple pages, I made testing interesting by covering all business logic testing at the web services level. When feature designs are complete, I start on the test strategy and plan. I never act as quality police or a gatekeeper who merely controls software releases at the end of the cycle. I collaborate with developers, understand the changes/implementation and build my test strategy accordingly. This way, I reduce the time spent unnecessarily on regression. I trust the test automation pyramid and focus on automating more at the service layer instead of against the brittle, hard-to-maintain UI. I admire great testers/authors of our era such as Elisabeth Hendrickson, James Bach and Michael Bolton, and adjust my approaches based on their great suggestions.
Thus, software quality assurance has been an AWESOME career for me. I have been appreciated, recognized, paid well and promoted based on my contributions as a software quality assurance engineer.
My advice is: software quality assurance is a great career. But you should follow a simple set of principles to become successful as a QAE. In my experience, those are:

  • Enthusiasm for learning new business domains and technologies
  • Passion and commitment
  • Flexibility
  • Teamwork
  • Ownership

On the other hand, QA can become a boring, mundane, easily replaceable job if you build your career on the following:

  • Enforcing old-school policies on engineering teams that make faster deliveries impossible. This will make you the sole "approver" of software and the job will become hectic.
  • Not investing time in reducing repetitive manual testing overhead
  • Considering software as a complete black box in testing and not focusing on how everything is connected together
  • Adhering to principles that you learned in software testing certifications without adjusting your approach based on project context
  • Treating testing as checking (following a list of test cases, marking them pass/fail and announcing that testing is done)

Sunday, September 27, 2015

Testing in microservice architecture

Microservice architecture is no longer a strange term. There are many discussions about testing approaches in microservice-based systems too. For example, this article explains a very nice strategy to adopt when testing a solution based on microservice architecture.
I thought I'd share some of my recent experiences with microservices testing and how microservices improve the testability of complex distributed systems. Note that this post does not suggest yet another strategy for testing microservice architectures; it captures some key learnings from testing microservices.

Microservices have become a widely discussed topic during the last few years. In September 2011, well before the microservice term was introduced, I did a session on SOA testing at the open source software conference WSO2Con. I introduced a component-based testing approach for SOA during my presentation and discussed the various levels of testing in a service-oriented solution.
I highlighted the importance of component-level testing in detail during that presentation and suggested it as one of the key pieces in any SOA test strategy.

Two to three years later, I see the same methodology being suggested for microservices testing by many industry leaders and technical advocates. The strategy that I presented back in 2011 is made practical mostly by microservice architecture, since it enables independent development and deployment of services.

Amazon has been a pioneer in service-oriented architecture since its inception, and each functionality/feature at Amazon is built as a web service. You can find more information about this in some public references such as this and this. Amazon has been indirectly adopting microservice architecture as explained in those references. Since I joined Amazon nearly 1 year back, I have had the opportunity to observe and then adopt many service-oriented testing strategies that I discussed during the early days of SOA. This post is intended to summarize two key points related to testing in microservice architecture.

Individual services testing

Microservice architecture reveals a new set of boundaries between the individual components of a software system. This allows a higher level of decomposition of the software, and the decomposed pieces can be tested quite independently.
For example, feature X is decomposed into RESTful web services (or APIs) X1, X2, ... Xn. A good microservice should be independently developed and deployed, so some of these individual web services may just provide an interface to CRUD (Create, Read, Update, Delete) operations on a database. Testing such a service should be trivial through automated tests (e.g. HTTP clients). In case the service depends on other services which are not available at the time of testing, stubbed versions (mocks) of those services should let individual service testing proceed.
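To make this concrete, here is a minimal sketch of such a test, assuming JUnit 4 and Apache HttpClient 4.x (the endpoint, port and payload are hypothetical):

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.ContentType;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class UserServiceCrudTest {

    // Hypothetical base URL of the service under test (or its stub)
    private static final String BASE_URL = "http://localhost:8080/users";

    @Test
    public void createUserReturns201() throws Exception {
        try (CloseableHttpClient client = HttpClients.createDefault()) {
            HttpPost post = new HttpPost(BASE_URL);
            post.setEntity(new StringEntity("{\"name\": \"alice\"}", ContentType.APPLICATION_JSON));
            try (CloseableHttpResponse response = client.execute(post)) {
                assertEquals(201, response.getStatusLine().getStatusCode());
            }
        }
    }

    @Test
    public void readingMissingUserReturns404() throws Exception {
        try (CloseableHttpClient client = HttpClients.createDefault()) {
            HttpGet get = new HttpGet(BASE_URL + "/no-such-user");
            try (CloseableHttpResponse response = client.execute(get)) {
                assertEquals(404, response.getStatusLine().getStatusCode());
            }
        }
    }
}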

Microservices enable continuous deployment

One of the major advantages of adopting microservice architecture is that it facilitates fast-paced deployment of web services in a fully automated fashion. Testing teams can start working on service-level automated tests well before the code is pushed into the testing environment. Service stubs and API specs allow the testing team to build basic test utilities in advance. Once the code is ready in the test environment, the automated tests exercise all resources of the RESTful API (microservice).
When the service is integrated with other dependent services, the same set of tests, with minimal or no modifications, can be run as integration tests.
With these automated tests, microservices are deployed to production with minimal human intervention.
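One simple way to achieve this reuse (a sketch; the property name is hypothetical) is to resolve the target endpoint at runtime, so that the identical suite can run against a stub, the test environment or the fully integrated system. For example, the BASE_URL constant in the earlier sketch could be defined as:

// Defaults to a local stub; pass -Dservice.base.url=https://test-env.example.com/users
// on the command line to point the same tests at another environment.
private static final String BASE_URL =
        System.getProperty("service.base.url", "http://localhost:8080/users");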

Testing in microservice architecture closely resembles the well-known test pyramid. I will relate the layers of the test pyramid to microservices testing and discuss further in my next blog post. Stay tuned!

Saturday, August 1, 2015

Fuzz testing web service APIs

Fuzzing is a mechanism to exercise software with random inputs. Fuzz testing is an integral component of API verification and it helps to uncover potential failures due to incorrect input handling.
While you can find more information about fuzz testing from various web references, this post intends to summarize some key principles/best practices associated with fuzzing.

Fuzz test planning

Your AUT (Application Under Test) may consist of hundreds of APIs. However, it does not make sense to exercise all of them with fuzz testing. For example, I usually pick the APIs which are directly called by consumer applications for fuzz testing. Similarly, study your APIs and choose those which are most exposed to user interactions.

Execute Fuzz testing

Once you identify the APIs that are important to fuzz, figure out an approach to executing the fuzz testing. Manual fuzzing should be out of scope; you should plan for an automated fuzzing mechanism.
You may try fuzzing APIs with a web services testing tool such as soapUI. soapUI NG Pro provides a fuzz testing facility as part of its security testing component.

You may also consider building a custom fuzz testing framework instead of using a separate tool. A custom framework can analyze the API model (a WADL for a REST service, or a WSDL in the case of SOAP-based services) and generate random inputs. Building such a framework is not a complex effort, and you should be able to plug it into your continuous integration system so that fuzzing is done in a seamless manner without any human interaction. Due to the flexibility and ease of maintenance, I prefer this second approach of having a custom in-house fuzzing framework.
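As a rough sketch of that idea (the target endpoint is hypothetical, and a real framework would derive inputs from the WADL/WSDL rather than a fixed alphabet), a minimal fuzzer only needs a payload generator and a check for unhandled failures. This one assumes Apache HttpClient 4.x:

import java.util.Random;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.ContentType;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class SimpleApiFuzzer {

    private static final String TARGET = "http://localhost:8080/api/orders"; // hypothetical
    private static final Random RANDOM = new Random();

    public static void main(String[] args) throws Exception {
        try (CloseableHttpClient client = HttpClients.createDefault()) {
            for (int i = 0; i < 1000; i++) {
                String payload = randomPayload(1 + RANDOM.nextInt(256));
                HttpPost post = new HttpPost(TARGET);
                post.setEntity(new StringEntity(payload, ContentType.APPLICATION_JSON));
                try (CloseableHttpResponse response = client.execute(post)) {
                    int status = response.getStatusLine().getStatusCode();
                    // A 4xx means the input was rejected gracefully; a 5xx indicates
                    // the service failed to handle the malformed input.
                    if (status >= 500) {
                        System.out.printf("Potential bug: payload %s -> HTTP %d%n", payload, status);
                    }
                }
            }
        }
    }

    // Random strings including characters that commonly break naive parsers
    private static String randomPayload(int length) {
        String alphabet = "abcXYZ0123456789{}[]\"'\\<>&%$;,:";
        StringBuilder sb = new StringBuilder(length);
        for (int i = 0; i < length; i++) {
            sb.append(alphabet.charAt(RANDOM.nextInt(alphabet.length())));
        }
        return sb.toString();
    }
}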

Analyze Results

Regardless of the tool/framework used for fuzz testing, it is important to analyze the results, through either an automated or a manual approach. You can automatically assert expected exceptions using the built-in facilities provided by testing frameworks.
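For example (a sketch; OrderApiClient and InvalidRequestException are hypothetical stand-ins for your own test utilities), JUnit's expected-exception facility can encode the pass/fail rule directly:

import org.junit.Test;

public class FuzzResultTest {

    // OrderApiClient and InvalidRequestException are hypothetical test utilities
    // wrapping the HTTP call and its error mapping.
    @Test(expected = InvalidRequestException.class)
    public void malformedPayloadIsRejectedWithValidationError() throws Exception {
        OrderApiClient client = new OrderApiClient("http://localhost:8080");
        client.createOrder("{ not-valid-json");
    }
}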

Tuesday, April 28, 2015

Exploratory Testing 3.0

James Bach and crew have re-defined software testing as follows.

“Testing is the process of evaluating a product by learning about it through exploration and experimentation, which includes: questioning, study, modeling, observation and inference, output checking, etc.”

I believe this makes perfect sense and is a very important step towards building "responsible testers". You can read more about this at http://www.satisfice.com/blog/archives/1509

Sunday, December 21, 2014

Myth of 100% test automation

Can ALL tests be automated? Needless to say, for some percentage of scenarios, the effort of automation is simply not worthwhile. I have identified some obvious examples:

  • Scenarios which involve time-consuming workflow executions
  • Unstable features
  • Some end-to-end scenarios where human assessment of multiple outputs is a must
  • Usability assessments
  • Tricky timing-related/inconsistent bugs

Why do some engineering teams praise 100% automated coverage even while knowing these well-known facts?
My understanding is, they do not know what QA is! When quality is just an afterthought and the presence of critical bugs in products is not an important matter, praising 100% automation and living in that dream is the best way forward, just for the sake of temporary satisfaction. First, understanding the difference between Testing and Checking is very important, since that can prevent setting unrealistic goals and expecting something impossible from test automation.
Michael Bolton once defined these two terms concisely here, and the definitions were later refined by James Bach here. I would recommend these two articles to anyone who needs more understanding of what professional testing is.

--Open for further discussion

Sunday, November 2, 2014

Testing Service-Oriented Solutions

I have been discussing testing service-oriented solutions in a lot of blog posts as well as in this book. I just want to summarize some key points, especially since I switched from SOA middleware testing to service-oriented solutions testing.

Automation is the key

You should not even think about a QA approach without automation when it comes to service-oriented solutions. I observe three distinct entities involved in service-oriented solution testing approaches:

  • Services testing
  • End-to-end testing
  • UI testing

Individual services can be tested as they become available for testing. If they are not ready or still under implementation, and QA is blocked from verifying integration flows, you can use service simulation mechanisms; soapUI mock services will be handy here. Testing the functionality of individual services should be 100% automated and continuously deployed to the release/testing branches. Since SOAP web services are already dead, there is no need to spend time experimenting with various testing tools. RESTful services can be tested using simple HTTP clients and can easily be integrated into main code branches without much effort.
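If a full-blown tool is not required, even the JDK's built-in HTTP server is enough to simulate an unavailable dependency. Here is a minimal sketch (the path and canned payload are hypothetical):

import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import com.sun.net.httpserver.HttpServer;

public class InventoryServiceStub {

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8081), 0);
        // Hypothetical dependency endpoint that always returns a canned response
        server.createContext("/inventory/items", exchange -> {
            byte[] body = "{\"itemId\": \"123\", \"inStock\": true}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        System.out.println("Stub listening on http://localhost:8081/inventory/items");
    }
}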

End-to-end testing should also be automated as much as possible; an 80% goal may be realistic and achievable. End-to-end testing should also be done by bypassing the front-end and combining invocations of various services.

UI testing will be equally important depending on the nature of the solution. UI regressions are tedious and must be automated using a web test framework such as Selenium WebDriver. However, UI automation should be the last focus out of all the automation approaches, since the maintenance burden of UI automation is high compared to the rest.

Even if the majority of your testing is automated, you cannot replace the human brain when releasing software. With a large percentage of automated tests, testers can spend a lot of time analyzing user behavior to derive exploratory tests.

QA should not be an after-thought

This is not about TDD (test-driven development). Everyone in the team should be equally responsible for the quality of the final deliverable. Therefore, each service developer must work with QA teams to design high-quality test plans and review the test cases from the initial phases of release cycles.

Study your users

In my understanding, the best way to derive test scenarios is to study your users closely. See how they use the previous versions of your service-oriented solution. If it is brand new, study similar/competitive products. Then derive test scenarios which mimic the real-world use cases.

Saturday, July 12, 2014

How to enforce a default HTTP Content-Type for requests in WSO2 ESB

Occasionally, you will get requests from legacy client applications that do not include the HTTP Content-Type header. WSO2 ESB proceeds with the mediation flow only after building the relevant SOAP infoset using the message builders registered against the Content-Type of the incoming request. When the Content-Type is blank, you will see an error similar to the following in the log.

ERROR - RelayUtils Error while building Passthrough stream
java.lang.StringIndexOutOfBoundsException: String index out of range: -1 

If modifying the client applications to include the mandatory Content-Type HTTP header in POST requests is out of your control, you can set a default content type for such incoming requests. In WSO2 ESB, the following property sets a default Content-Type for incoming requests that do not have a Content-Type header.


<parameter name="DEFAULT_REQUEST_CONTENT_TYPE" locked="false">application/json</parameter>

Set this property in ESB_HOME/repository/conf/axis2/axis2.xml and restart the server. Here, we have configured application/json as the default content type, so that when a request similar to the following reaches the ESB, the default JSON builder registered for the application/json content type (org.apache.synapse.commons.json.JsonStreamBuilder in WSO2 ESB 4.8.1) takes care of building the message and hands it over to the mediation engine to proceed with the mediation flow.

POST http://host:8280/services/ContentTest HTTP/1.1
Accept-Encoding: gzip,deflate
Content-Length: 30
Host: host:8280
Connection: Keep-Alive
User-Agent: Apache-HttpClient/4.1.1 (java 1.5)

  "in": { "test": "wso2" }

Sunday, May 25, 2014

Handling JSON responses in Apache JMeter

There are various types of post-processor elements that we can use out of the box when handling responses in a JMeter test plan. For example, the Regular Expression Extractor can be used to capture a specific value or set of values from an XML, HTML or JSON response.
However, handling JSON responses through the default Regular Expression Extractor becomes a daunting task as the JSON response grows complex. When handling complex XML responses, the XPath Extractor is the obvious choice in JMeter; similarly, it is quite handy to have a comparable post-processor element for complex JSON responses.
JSONPath is a way to extract parts of a given JSON document, and implementations are now available in many programming languages. In this simple post, we will look at how we can use JSONPath expressions to extract values from JSON responses in JMeter.
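As a quick illustration (the document is made up), given the JSON response

{ "store": { "book": [ { "title": "Moby Dick" } ] } }

the JSONPath expression $.store.book[0].title evaluates to "Moby Dick". We will use a similarly simple expression, $.ip, in the steps below.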


Download and install JMeter version 2.8 or later

Step 1

Download this JMeter plugin library and unzip JMeterPlugins-ExtrasLibs-1.1.3.zip.
Copy the content under its lib/ directory to JMETER_HOME/lib (this will merge with the existing content).

Step 2

Restart JMeter and start creating a new test plan. Add a new thread group, then add an HTTP Request sampler and enter the following values.
Server name : ip.jsontest.com
Method : GET

Leave the other attribute values as they are.

Step 3

Right-click on the above HTTP Request Sampler and select Add --> Post Processors --> jp@gc - JSON Path Extractor
Enter the following values.

Variable name: ip_address (this is just a reference)
JSON Path: $.ip
Default value: NO_VALUE

Step 4

Add a Debug Sampler so that we can check the extracted value. You can add it by right-clicking on the thread group and selecting Add --> Sampler --> Debug Sampler

Step 5

Add a View Results Tree listener and save the test plan. Then run the test. You will see a response similar to the following, with your public IP address as the value.

{"ip": ""}

Step 6

The above simple JSONPath expression ($.ip) extracts the IP address value from the JSON response. Check the Debug Sampler output; it will show the extracted value assigned to the ip_address variable which we specified in the JSON Path Extractor definition.

We can now use this variable further down in our JMeter test plan in various other samplers.
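For example (the endpoint is hypothetical), a subsequent HTTP Request sampler could reference the extracted value in its path using JMeter's variable syntax:

/lookup?address=${ip_address}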

Thursday, March 27, 2014

Common mistakes to avoid in WSO2 API Manager - "ERROR - APIAuthenticationHandler API authentication failure" for an API call with a valid access token

In the third post of the common mistakes to avoid in WSO2 Carbon platform blog series, I'm going to look at another frequently raised question. I was struggling to get rid of this issue for a few hours recently and figured out the fix by consulting one of my colleagues, Nuwan.
Let's look at the problem in detail.


Suppose I'm calling a REST API hosted in WSO2 API Manager with a set of query parameters, for example (hypothetical values):

GET http://gateway-host:8280/qa/1.0/GetData?symbol=IBM

In order to match the GET URL/query parameters, I define the url-pattern of the resource in the API Publisher as /GetData/*. (We cannot define a uri-template in the API Publisher UI in the latest version of API Manager; at the time of writing, that is API Manager 1.6.0.)

However, the specified url-pattern, /GetData/*, does not match the request URL since my API call contains a set of query parameters.

Thus, I open the API configuration file, which is stored in the repository/deployment/server/synapse-configs/default/api directory of the API Manager distribution, and modify the resource definition as follows.

 <resource methods="GET" uri-template="/GetData?*">

I simply changed url-mapping to uri-template and set its value to "/GetData?*" so that my request will be accepted by the API resource definition.

After saving the configuration in the file system, I subscribe to the API in the store and send a GET request.
I expect everything to work since the API definition matches the request perfectly. But I get an HTTP 403 response with the following error in the log!

ERROR - APIAuthenticationHandler API authentication failure
org.wso2.carbon.apimgt.gateway.handlers.security.APISecurityException: Access failure for API: /qa, version: 1.0 with key: _Zj7PHU1pvw16lWGq0JHCDSFoE8a
    at org.wso2.carbon.apimgt.gateway.handlers.security.oauth.OAuthAuthenticator.authenticate(OAuthAuthenticator.java:139)
    at org.wso2.carbon.apimgt.gateway.handlers.security.APIAuthenticationHandler.handleRequest(APIAuthenticationHandler.java:92)
    at org.apache.synapse.rest.API.process(API.java:285)
    at org.apache.synapse.rest.RESTRequestHandler.dispatchToAPI(RESTRequestHandler.java:76)
    at org.apache.synapse.rest.RESTRequestHandler.process(RESTRequestHandler.java:63)

This confuses me a lot. I double-check my access token. It is valid. I try generating a new token. No luck. Still the same authentication failure. Why??

What is wrong here?

When you publish an API through the API Publisher UI, the corresponding API config artifact is created in the file system. But that is not the only reference. The AM_API_URL_MAPPING table in the API Manager database (WSO2AM_DB, by default in H2) is also updated with the specified url-pattern value. So, in our example, "/GetData/*" is written to that table (this is what we specified when publishing the API in the API Publisher UI).
Even though we have changed the API definition in the file system as mentioned above, this value in the database is not changed. When matching a particular url-pattern, API Manager does it at two levels: it validates the auth type of the resource against the token using the url-mapping (in the above table), and then proceeds with the url-pattern validation using the corresponding API definition (file system).

In our example, though we have changed the API resource definition in the API configuration file, the database still contains "/GetData/*" as the url-mapping value. Hence, the first level of validation of the request (matching the auth type of the resource against the token using the url-mapping in the AM_API_URL_MAPPING table) fails and returns the above error.
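To see the stale value for yourself, you can query the mapping table from the database console (a sketch; the column names are as of API Manager 1.6.0 and may differ in other versions):

SELECT URL_PATTERN, HTTP_METHOD, AUTH_SCHEME FROM AM_API_URL_MAPPING;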

How can we fix this?

The fix is simple. Go to the API Publisher UI and select the published API. Then click Edit to update the API.
Modify the URL pattern to "/*".

Next, you have to make the same resource modification in the API configuration file in repository/deployment/server/synapse-configs/default/api

 <resource methods="GET" uri-template="/GetData?*">

Save everything and send a request again. It will be successful.

Friday, February 28, 2014

Common mistakes to avoid in WSO2 Carbon - 2 - "java.sql.SQLException: Total number of available connections are less than the total number of closed connections"

This is the second post of the common mistakes blog series which I'm planning to share with you. In this post, we are looking into another common mistake made when working with the WSO2 Carbon platform.

Registry mounting is a way of federating the registry space across multiple servers in a product cluster. For example, if you have a WSO2 ESB cluster, you can use a single registry space to store all configuration data common to cluster nodes.
There are 3 different registry spaces provided by each WSO2 Carbon product: local, configuration and governance. You can find more details about these spaces here.

We have to keep a few important concepts in mind when building a shared registry setup. You cannot share the local registry space among multiple cluster nodes; the local registry space is used to store node-specific data, hence it should not be shared with the other nodes in the cluster. However, we mistakenly do this when configuring shared registry setups and experience many unexpected issues. The following weird startup error is one such occurrence caused by incorrect mounting configuration (I removed part of the complete stack trace for clarity).

ERROR - RegistryCoreServiceComponent Failed to activate Registry Core bundle
org.wso2.carbon.registry.core.exceptions.RegistryException: Failed to close transaction.
    at org.wso2.carbon.registry.core.jdbc.dataaccess.JDBCTransactionManager.endTransaction(JDBCTransactionManager.java:183)

Caused by: java.sql.SQLException: Total number of available connections are less than the total number of closed connections
    at org.wso2.carbon.registry.core.jdbc.dataaccess.JDBCDatabaseTransaction$ManagedRegistryConnection.close(JDBCDatabaseTransaction.java:1349)
    at org.wso2.carbon.registry.core.jdbc.dataaccess.JDBCTransactionManager.endTransaction(JDBCTransactionManager.java:178)

This error does not give any clue about a problem related to mounting. You may spend many hours unnecessarily tuning your DBMS, since the error talks about DB connections!

Let's explore this error in detail.

Step 1


We are going to have a shared registry database (which is used for the configuration and governance registry spaces in an ESB cluster). I will use MySQL and create a database first.

mysql> create database sharedreg_db;

Next, create the registry DB schema using the MySQL database scripts available in the CARBON_HOME/dbscripts directory.

mysql> use sharedreg_db;
mysql> source /home/charitha/products/esb/tmp/wso2esb-4.8.1/dbscripts/mysql.sql;

Step 2


We will register this new database in master-datasources.xml, which can be found in the CARBON_HOME/repository/conf/datasources directory, with a definition similar to the following (adjust the URL and credentials for your environment):

<datasource>
    <name>WSO2SharedRegDB</name>
    <description>The datasource used for shared registry</description>
    <jndiConfig>
        <name>jdbc/WSO2SharedRegDB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://localhost:3306/sharedreg_db</url>
            <username>root</username>
            <password>root</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
        </configuration>
    </definition>
</datasource>

Step 3


Now we have a shared registry database. We need to mount the registry collections to this remote database. There are 3 mounting mechanisms: JDBC, Atom and WS. The commonly used pattern is JDBC mounting, hence I will use the same.
The mounting configuration is done in CARBON_HOME/repository/conf/registry.xml, similar to the following.

<remoteInstance url="https://localhost:9443/registry">







    <mount path="/_system/config" overwrite="true">



    <mount path="/_system/governance" overwrite="true">




Make a note of the dbConfig parameter (wso2registry) in the remoteInstance configuration above; we will come back to it shortly.

Now, the database configuration referred to by the mount is defined at the top of registry.xml as follows. We simply change the JNDI name of the default db config, jdbc/WSO2CarbonDB, to the JNDI name of our shared registry database, jdbc/WSO2SharedRegDB.

<dbConfig name="wso2registry">
    <dataSource>jdbc/WSO2SharedRegDB</dataSource>
</dbConfig>

OK. Assuming everything is configured correctly, we start the Carbon server. Unfortunately, you will get the above meaningless error at server startup.

What is wrong here?


By defining a common database configuration for the mount configuration as well as the local registry definition (under the currentDBConfig element), we have made a big mistake. This eventually leads to sharing the local registry space among heterogeneous product cluster nodes, which is incorrect.

How can we fix this?


Simple. You can define a separate, unique database configuration for the shared registry db.

<dbConfig name="sharedregistry">



Then, that will be referenced by the remote mounting configuration.

<remoteInstance url="https://localhost:9443/registry">







Finally, make sure to change the local registry definition back to its default so that it will use the WSO2 Carbon DB (usually H2).

<dbConfig name="wso2registry">



Restart the server. The error will disappear!