Sunday, September 27, 2015

Testing in microservice architecture

Microservice architecture is no longer a strange term, and there are many discussions about testing approaches in microservice-based systems too. For example, this article explains a very good strategy to adopt when testing a solution based on microservice architecture.
I thought I would share some of my recent experiences with microservices testing and how microservices enable the testability of complex distributed systems. Note that this post does not suggest yet another strategy for testing microservice architectures; it captures some key learnings from testing microservices.

Microservices have become a widely discussed topic during the last few years. In September 2011, well before the term microservice was introduced, I did a session on SOA testing at the open source software conference WSO2Con. In that presentation I introduced a component-based testing approach for SOA and discussed the various levels of testing in a service-oriented solution.
I highlighted the importance of component-level testing in detail and suggested it as one of the key pieces of any SOA test strategy.

Two to three years later, I see the same methodology being suggested for microservices testing by many industry leaders and technical advocates. The strategy I presented back in 2011 is made practical largely by microservices architecture, since it enables independent development and deployment of services.

Amazon has been a pioneer in service-oriented architecture since its inception, and each piece of functionality at Amazon is built as a web service. You can find more information about this in some public references such as this and this. As explained in those references, Amazon has been indirectly adopting microservices architecture. Since joining Amazon nearly a year ago, I have had the opportunity to observe, and then adopt, many of the service-oriented testing strategies that I discussed during the early days of SOA. This post summarizes two key points related to testing in microservice architecture.

Individual services testing


Microservices architecture reveals a new set of boundaries between the individual components of a software system. This allows a higher level of decomposition, with components that can be tested quite independently.
For example, feature X is decomposed into RESTful web services (or APIs) X1, X2... Xn. A good microservice should be independently developed and deployed, so some of these individual web services may just provide an interface to CRUD (Create, Read, Update, Delete) operations on a database. Testing such a service should be trivial with automated tests (e.g. HTTP clients). If the service depends on other services that are not available at the time of testing, stubbed versions (mocks) of those services allow testing of the individual service to proceed.
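As a minimal sketch, assuming a hypothetical /users CRUD resource and the Python requests library, an automated test for such a service could look like this:

```python
import requests

BASE_URL = "http://localhost:8080"  # hypothetical test-environment endpoint

def test_user_crud_roundtrip():
    # Create: POST a new resource and capture its generated id.
    created = requests.post(f"{BASE_URL}/users", json={"name": "alice"})
    assert created.status_code == 201
    user_id = created.json()["id"]

    # Read: the resource should be retrievable by id.
    fetched = requests.get(f"{BASE_URL}/users/{user_id}")
    assert fetched.status_code == 200
    assert fetched.json()["name"] == "alice"

    # Update: modify the resource and verify the change persisted.
    requests.put(f"{BASE_URL}/users/{user_id}", json={"name": "bob"})
    assert requests.get(f"{BASE_URL}/users/{user_id}").json()["name"] == "bob"

    # Delete: subsequent reads should return 404.
    requests.delete(f"{BASE_URL}/users/{user_id}")
    assert requests.get(f"{BASE_URL}/users/{user_id}").status_code == 404
```

The same test runs unchanged whether the services behind /users are real or stubbed, which is exactly what makes the service independently testable.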

Microservices enable continuous deployment

One of the major advantages of adopting microservices architecture is that it facilitates fast-paced deployment of web services in a fully automated fashion. Testing teams can start working on service-level automated tests well before the code is pushed to the test environment: service stubs and API specs allow the testing team to build basic test utilities in advance. Once the code is ready in the test environment, the automated tests exercise all resources of the RESTful API (microservice).
When the service is integrated with other dependent services, the same set of tests can be run as integration tests with minimal or no modification.
With these automated tests in place, microservices can be deployed to production with minimal or no human intervention.
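As an illustration, here is a minimal service stub, assuming a hypothetical dependent inventory service and using only the Python standard library. The service-level tests can point at the stub's URL until the real service is deployed; switching to the integrated environment is then just a base-URL change:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Canned responses matching the (hypothetical) API spec of a dependent service.
STUB_RESPONSES = {
    "/inventory/42": {"sku": 42, "in_stock": True},
}

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = STUB_RESPONSES.get(self.path)
        if body is None:
            self.send_response(404)
            self.end_headers()
            return
        payload = json.dumps(body).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    # Tests target http://localhost:9090 until the real service exists.
    HTTPServer(("localhost", 9090), StubHandler).serve_forever()
```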

Testing in microservices architecture closely resembles the well-known test pyramid. I will relate the layers of the test pyramid to microservices testing in my next blog post. Stay tuned!




Saturday, August 1, 2015

Fuzz testing web service APIs

Fuzzing is a mechanism for exercising software with random inputs. Fuzz testing is an integral component of API verification, and it helps to uncover potential failures due to incorrect input handling.
While you can find more information about fuzz testing in various web references, this post summarizes some key principles and best practices associated with fuzzing.


Fuzz test planning

Your AUT (Application Under Test) may consist of hundreds of APIs. However, it does not make sense to exercise all of them with fuzz testing. For example, I usually pick the APIs that are called directly by consumer applications. Similarly, study your APIs and choose those that are most exposed to user input.


Execute Fuzz testing

Once you identify the APIs that are important to fuzz, figure out an approach to executing the fuzz tests. Manual fuzzing should be out of scope; plan for an automated fuzzing mechanism.
You can try fuzzing APIs with a web services testing tool such as soapUI. soapUI NG Pro provides a fuzz testing facility as part of its security testing component.

You might also consider building a custom fuzz testing framework instead of using a separate tool. A custom framework can analyze the API model (a WADL for a REST service or a WSDL for a SOAP-based service) and generate random inputs. Building such a framework is not a complex effort, and you should be able to plug it into your continuous integration system so that fuzzing runs seamlessly without any human interaction. Because of its flexibility and ease of maintenance, I prefer this second approach of a custom in-house fuzzing framework.
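As a minimal sketch of the input-generation side of such a framework, assuming a hypothetical POST /users resource (in a real framework the resource list and parameter types would be derived from the WADL or WSDL):

```python
import random
import string
import requests

BASE_URL = "http://localhost:8080"  # hypothetical API under test

def random_payload(max_len=256):
    # Mix printable junk, control characters, and boundary-style numbers.
    candidates = [
        "".join(random.choices(string.printable, k=random.randint(0, max_len))),
        "\x00" * random.randint(1, 16),
        str(random.choice([0, -1, 2**31, 2**63, 10**100])),
    ]
    return random.choice(candidates)

def fuzz_create_user(iterations=1000):
    failures = []
    for _ in range(iterations):
        name = random_payload()
        resp = requests.post(f"{BASE_URL}/users", json={"name": name})
        # 4xx means the service rejected the bad input gracefully;
        # 5xx means it failed to handle it -- the kind of defect fuzzing hunts.
        if resp.status_code >= 500:
            failures.append((name, resp.status_code))
    return failures

if __name__ == "__main__":
    for payload, status in fuzz_create_user():
        print(f"{status}: {payload!r}")
```

Run from a CI job, a non-empty failure list fails the build, so fuzzing happens on every integration without anyone driving it.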


Analyze Results

Regardless of the tool or framework used for fuzz testing, it is important to analyze the results, whether through an automated or a manual approach. Expected exceptions can be asserted automatically using the built-in facilities of testing frameworks.
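For example, with pytest and the same hypothetical /users resource, inputs that previously exposed failures can be promoted to regression tests that assert the expected error behaviour automatically:

```python
import pytest
import requests

BASE_URL = "http://localhost:8080"  # hypothetical API under test

@pytest.mark.parametrize("bad_name", ["", "\x00", "a" * 10_000])
def test_malformed_input_is_rejected_cleanly(bad_name):
    resp = requests.post(f"{BASE_URL}/users", json={"name": bad_name})
    # The expected outcome is a controlled client error, never a 5xx crash.
    assert 400 <= resp.status_code < 500
```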

Tuesday, April 28, 2015

Exploratory Testing 3.0

James Bach and his colleagues have redefined software testing as follows.

“Testing is the process of evaluating a product by learning about it through exploration and experimentation, which includes: questioning, study, modeling, observation and inference, output checking, etc.”

I believe this makes perfect sense and is a very important step towards building "responsible testers". You can read more about this at http://www.satisfice.com/blog/archives/1509