Setting up a minimum deployment of a WSO2 product cluster with management/worker separation

The latest release of the WSO2 Carbon platform (Carbon 4.0.0, on which WSO2 ESB 4.5.0, WSO2 Application Server 5.0.0, WSO2 Governance Registry 4.5.0, etc. are based) supports a new deployment model which allows you to set up product clusters with separated worker and management nodes. In a worker/manager separated cluster setup, the management node(s) are used to deploy and configure the deployment artifacts, whereas the worker nodes serve the requests received by clients.
I'm not going to spend much time on a detailed explanation of this new deployment approach, since my objective is to take you through the minimum worker/manager deployment of a WSO2 Application Server cluster as quickly as possible. You can find more about the theoretical aspects of this deployment model in Afkham Azeez's blog.

Deployment diagram

[Diagram: the WSO2 Elastic Load Balancer fronting the mgt and worker sub domains of the Application Server cluster]
We are going to set up the minimum cluster on our local machine. Thus, we will use three product instances as follows (a summary of the resulting hosts and ports appears after the list).

Load Balancer - used to load balance the requests between Application Server cluster nodes.
WSO2 Application Server management node - all deployments and configurations are performed through this node.
WSO2 Application Server worker node - Application server requests are served by this node.
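
For reference, here is a summary of the host names and ports we will end up with; all of these values are derived from the configuration steps below (the Carbon port offsets shift the default HTTP/HTTPS ports 9763/9443).

WSO2 ELB           - HTTP 8280 / HTTPS 8243 (pass-through transports), clustering port 4000
AS management node - port offset 1, HTTP 9764 / HTTPS 9444, clustering port 4250, host mgt.charitha.appserver.wso2.com
AS worker node     - port offset 2, HTTP 9765 / HTTPS 9445, clustering port 4251, host charitha.appserver.wso2.com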

Prerequisites:

Download the latest version (5.0.0) of WSO2 Application Server from here.
Download the latest version (2.0.0) of WSO2 Elastic Load Balancer from here.

Setting up WSO2 Elastic Load Balancer


Step 1:
Extract wso2elb-2.0.0.zip into a directory in your local file system. Let the extracted directory be WSO2_LB_HOME. For example (the target directory here is just an illustration; use any location you like):
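
unzip wso2elb-2.0.0.zip -d ~/wso2
export WSO2_LB_HOME=~/wso2/wso2elb-2.0.0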

Step 2:
Go to WSO2_LB_HOME/repository/conf and edit the loadbalancer.conf file as follows.

appserver {
    domains {
        CharithaASdomain {
            hosts           mgt.charitha.appserver.wso2.com;
            sub_domain      mgt;
            tenant_range    *;
        }
        CharithaASdomain {
            hosts           charitha.appserver.wso2.com;
            sub_domain      worker;
            tenant_range    *;
        }
    }
}

As you can see above, we have defined two cluster sub domains, mgt and worker, under a single clustering domain named "CharithaASdomain".
Each of these sub domains consists of one WSO2 Application Server instance: the mgt sub domain includes the management node of our cluster, and the worker sub domain includes the worker node (see the deployment diagram above).
The two WSO2 Application Server instances (in our case, the two sub domains, because each sub domain has only one instance) can be set up on two physical servers, on two VM instances, or on a single machine. In our example, we will set everything up on our local machine. In order to identify the two nodes uniquely, we will define host names for both: the mgt sub domain will be identified by mgt.charitha.appserver.wso2.com, and the worker sub domain by charitha.appserver.wso2.com.
Since we are running everything on the local machine, let's update the /etc/hosts file to match the host names we use.

127.0.0.1 mgt.charitha.appserver.wso2.com
127.0.0.1 charitha.appserver.wso2.com

Step 3: 
Uncomment the localMemberHost element in WSO2_LB_HOME/repository/conf/axis2/axis2.xml and specify the IP address (or host name) which you are going to advertise to the members of the cluster.

<parameter name="localMemberHost">127.0.0.1</parameter>

Save the loadbalancer.conf, axis2.xml and /etc/hosts files.

These are the only configurations needed for WSO2 ELB; you do not have to change any other configuration files in the minimum deployment setup.
Now, start WSO2 Load Balancer by running the wso2server.sh script found in WSO2_LB_HOME/bin:
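
cd WSO2_LB_HOME/bin
sh wso2server.sh

You will see the following logs at server startup.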

[2012-09-13 20:04:08,407]  INFO - TribesClusteringAgent Managing group application domain:CharithaASdomain, sub-domain:mgt using agent class org.wso2.carbon.lb.endpoint.SubDomainAwareGroupManagementAgent
[2012-09-13 20:04:08,408]  INFO - TribesClusteringAgent Managing group application domain:CharithaASdomain, sub-domain:worker using agent class org.wso2.carbon.lb.endpoint.SubDomainAwareGroupManagementAgent
[2012-09-13 20:04:08,410]  INFO - ServerManager Server ready for processing...
[2012-09-13 20:04:08,483]  INFO - AutoscalerTaskServiceComponent Autoscaling is disabled.
[2012-09-13 20:04:08,537]  INFO - PassThroughHttpSSLListener Starting Pass-through HTTPS Listener...
[2012-09-13 20:04:08,550]  INFO - PassThroughHttpSSLListener Pass-through HTTPS Listener started on port : 8243
[2012-09-13 20:04:08,550]  INFO - PassThroughHttpListener Starting Pass-through HTTP Listener...
[2012-09-13 20:04:08,555]  INFO - PassThroughHttpListener Pass-through HTTP Listener started on port : 8280
[2012-09-13 20:04:08,557]  INFO - TribesClusteringAgent Initializing cluster...
[2012-09-13 20:04:08,583]  INFO - TribesClusteringAgent Cluster domain: wso2.as.lb.domain
[2012-09-13 20:04:08,587]  INFO - TribesClusteringAgent Using wka based membership management scheme
[2012-09-13 20:04:08,599]  INFO - WkaBasedMembershipScheme Receiver Server Socket bound to:/127.0.0.1:4000
[2012-09-13 20:04:08,699]  INFO - WkaBasedMembershipScheme Receiver Server Socket bound to:/127.0.0.1:4000
[2012-09-13 20:04:08,846]  INFO - TribesClusteringAgent Local Member 127.0.0.1:4000(wso2.as.lb.domain)
[2012-09-13 20:04:08,847]  INFO - TribesUtil No members in current cluster
[2012-09-13 20:04:08,849]  INFO - TribesClusteringAgent Cluster initialization completed.
[2012-09-13 20:04:09,412]  INFO - StartupFinalizerServiceComponent WSO2 Carbon started in 14 sec

Let's proceed with configuring the WSO2 Application Server management node.

Setting up WSO2 Application Server Management Node


Step 1:

Extract wso2as-5.0.0.zip into a directory in your local file system. Let the extracted directory be WSO2_AS_MGR_HOME.

Step 2 - axis2.xml configuration:

First, we need to enable clustering at the axis2 level in order for the management node to communicate with the load balancer and the worker nodes. Open WSO2_AS_MGR_HOME/repository/conf/axis2/axis2.xml and update the clustering configuration as shown below.

<clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">

<parameter name="membershipScheme">wka</parameter>

Specify the cluster domain. Note that this must be the same domain we defined in loadbalancer.conf.

<parameter name="domain">CharithaASdomain</parameter>

<parameter name="localMemberHost">mgt.charitha.appserver.wso2.com</parameter>

<parameter name="localMemberPort">4250</parameter>

We will add a new property, "subDomain", and set it to "mgt" to denote that this node belongs to the mgt sub domain of the cluster, as we defined in loadbalancer.conf.
  <parameter name="properties">
            <property name="backendServerURL" value="https://${hostName}:${httpsPort}/services/"/>
            <property name="mgtConsoleURL" value="https://${hostName}:${httpsPort}/"/>
            <property name="subDomain" value="mgt"/>

        </parameter>

Now, add the load balancer's IP address or host name as a well-known member. In our case, we refer to the load balancer using 127.0.0.1 and the local member port 4000, as defined in the axis2.xml of WSO2 ELB.
<members>
    <member>
        <hostName>127.0.0.1</hostName>
        <port>4000</port>
    </member>
</members>
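
Putting these pieces together, the relevant clustering section of the management node's axis2.xml looks roughly as follows (a sketch; the remaining default parameters in the file are left unchanged and omitted here):

<clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
    <parameter name="membershipScheme">wka</parameter>
    <parameter name="domain">CharithaASdomain</parameter>
    <parameter name="localMemberHost">mgt.charitha.appserver.wso2.com</parameter>
    <parameter name="localMemberPort">4250</parameter>
    <parameter name="properties">
        <property name="backendServerURL" value="https://${hostName}:${httpsPort}/services/"/>
        <property name="mgtConsoleURL" value="https://${hostName}:${httpsPort}/"/>
        <property name="subDomain" value="mgt"/>
    </parameter>
    <members>
        <member>
            <hostName>127.0.0.1</hostName>
            <port>4000</port>
        </member>
    </members>
</clustering>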

Step 3 - catalina-server.xml configuration:
The WSO2 Application Server management node is fronted by the load balancer, so we need to configure the proxy ports associated with the HTTP and HTTPS connectors. These proxy ports are the corresponding transport receiver ports opened by WSO2 ELB (configured in the transport listeners section of its axis2.xml).
Open WSO2_AS_MGR_HOME/repository/conf/tomcat/catalina-server.xml and add the proxyPort attribute to both the HTTP and HTTPS connectors as shown below.

<Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
           port="9763"
           proxyPort="8280"
           ...

<Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
           port="9443"
           proxyPort="8243"
           ...

Step 4 - carbon.xml configuration:

Since we run multiple WSO2 Carbon based products on the same host, we must avoid port conflicts. Therefore, edit the following element in WSO2_AS_MGR_HOME/repository/conf/carbon.xml to assign port offset 1 to the Application Server management node (with offset 1, the default HTTP/HTTPS ports 9763/9443 become 9764/9444).

<Offset>1</Offset>

Update the HostName and MgtHostName elements in carbon.xml as shown below.

<HostName>charitha.appserver.wso2.com</HostName>

<MgtHostName>mgt.charitha.appserver.wso2.com</MgtHostName>

We must make one more configuration change in carbon.xml before moving to the next step. As we discussed at the beginning, the Application Server management node is used for deploying artifacts such as web applications and web services, and the artifacts deployed on the management node must be synchronized automatically to the worker nodes in the cluster. This deployment synchronization mechanism is straightforward in WSO2 Carbon based products: the default SVN based deployment synchronizer auto-commits the deployment artifacts to a pre-configured SVN repository, and the worker nodes are configured to automatically check out the artifacts from the same SVN location.

Let's configure SVN based deployment synchronizer in carbon.xml of management node.
<DeploymentSynchronizer>
    <Enabled>true</Enabled>
    <AutoCommit>true</AutoCommit>
    <AutoCheckout>true</AutoCheckout>
    <RepositoryType>svn</RepositoryType>
    <SvnUrl>http://10.100.3.115/svn/repos/as</SvnUrl>
    <SvnUser>wso2</SvnUser>
    <SvnPassword>wso2123</SvnPassword>
    <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
</DeploymentSynchronizer>

Make sure to replace the SvnUrl, SvnUser and SvnPassword values to match your own SVN repository.
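
Before starting the node, it is worth confirming that the repository URL and credentials actually work. A quick sanity check from the command line (using the example values from the configuration above):

svn checkout http://10.100.3.115/svn/repos/as /tmp/as-depsync-check --username wso2 --password wso2123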

Step 5: Starting the management node of WSO2 Application Server cluster

Go to WSO2_AS_MGR_HOME/bin and run wso2server.sh to start the server:
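
cd WSO2_AS_MGR_HOME/bin
sh wso2server.sh

During server startup, you should see cluster initialization messages similar to the following.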

[2012-09-13 23:04:25,820]  INFO {org.apache.axis2.clustering.tribes.TribesClusteringAgent} -  Initializing cluster...
[2012-09-13 23:04:25,840]  INFO {org.apache.axis2.clustering.tribes.TribesClusteringAgent} -  Cluster domain: CharithaASdomain
[2012-09-13 23:04:25,843]  INFO {org.apache.axis2.clustering.tribes.TribesClusteringAgent} -  Using wka based membership management scheme
[2012-09-13 23:04:25,852]  INFO {org.apache.axis2.clustering.tribes.WkaBasedMembershipScheme} -  Receiver Server Socket bound to:/127.0.0.1:4250
[2012-09-13 23:04:25,960]  INFO {org.apache.axis2.clustering.tribes.WkaBasedMembershipScheme} -  Added static member 127.0.0.1:4000(CharithaASdomain)


At the same time, watch the logs of the load balancer.

[2012-09-13 23:04:41,012]  INFO - RpcMembershipRequestHandler Received JOIN message from 127.0.0.1:4250(CharithaASdomain)
[2012-09-13 23:04:41,012]  INFO - MembershipManager Application member 127.0.0.1:4250(CharithaASdomain) joined group CharithaASdomain
[2012-09-13 23:04:41,013]  INFO - ClusterManagementMode Member 127.0.0.1:4250(CharithaASdomain) joined cluster
[2012-09-13 23:04:52,016]  INFO - DefaultGroupManagementAgent Application member Host:127.0.0.1, Port: 4250, HTTP:9764, HTTPS:9444, Domain: CharithaASdomain, Sub-domain:mgt, Active:true joined application cluster

From the above logs, we can conclude that the Application Server management node has successfully joined the cluster and is ready to receive requests through the load balancer.
Now, we can log in to the management console of the Application Server management node. Access https://mgt.charitha.appserver.wso2.com:8243/carbon/ and log in with the default administrator credentials (admin/admin). Go to the "Deployed Services" page and click on the WSDL1.1 link of the HelloService. You will get an HTTP 500 error, because the server looks for the URL http://charitha.appserver.wso2.com:8280/services/HelloService?wsdl, which is not yet available. As you can see from this URL, although the service is deployed on the management node, all requests are served by the worker nodes. charitha.appserver.wso2.com is the host name we assigned to the worker node, and we have not started the worker node yet.


Setting up WSO2 Application Server Worker node


Since we have completed setting up the management node, the worker node configuration is quite straightforward. Let's configure the worker node on our local machine.

Step 1:
We are not going to extract another fresh copy of wso2as-5.0.0.zip. Instead, make a copy of the management node's directory, so that we can reuse most of the settings from above, and rename it wso2as-5.0.0-worker. Let this directory be WSO2_AS_WORKER_HOME.
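
For example, assuming both copies live side by side in the same parent directory:

cp -r wso2as-5.0.0 wso2as-5.0.0-worker
export WSO2_AS_WORKER_HOME=$(pwd)/wso2as-5.0.0-worker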

Step 2 - axis2.xml configuration:
In addition to the changes we made in the management node's axis2.xml, a few modifications are needed for the worker. First, we update the localMemberPort element, since we launch both cluster nodes on the same machine. Open WSO2_AS_WORKER_HOME/repository/conf/axis2/axis2.xml and edit the following element.

<parameter name="localMemberPort">4251</parameter>

The Application Server worker node belongs to the "worker" sub domain of the cluster domain, as we configured in loadbalancer.conf of WSO2 Load Balancer. As we did with the management node, we add the "subDomain" property as follows to represent this.

<parameter name="properties">
            <property name="backendServerURL" value="https://${hostName}:${httpsPort}/services/"/>
            <property name="mgtConsoleURL" value="https://${hostName}:${httpsPort}/"/>
            <property name="subDomain" value="worker"/>
</parameter>

Step 3 - carbon.xml configuration:

First, we should specify a new port offset for the worker node. Edit WSO2_AS_WORKER_HOME/repository/conf/carbon.xml and update the Offset value as follows.

 <Offset>2</Offset>

Also, update the HostName element as shown below. We do not need to specify the MgtHostName element, since this node is designated as a worker node in the Application Server cluster.

<HostName>charitha.appserver.wso2.com</HostName>

Now, we need to configure the SVN based deployment synchronizer to automatically check out deployment artifacts from the common SVN repository. The worker nodes of a cluster SHOULD NOT commit (write) artifacts, hence we must disable the AutoCommit property in the deployment synchronizer configuration as shown below.

<DeploymentSynchronizer>
    <Enabled>true</Enabled>
    <AutoCommit>false</AutoCommit>
    <AutoCheckout>true</AutoCheckout>
    <RepositoryType>svn</RepositoryType>
    <SvnUrl>http://10.100.3.115/svn/repos/as</SvnUrl>
    <SvnUser>wso2</SvnUser>
    <SvnPassword>wso2123</SvnPassword>
    <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
</DeploymentSynchronizer>

Step 4: Start WSO2 Application Server Worker node

We are ready to start the worker node of our cluster. Go to WSO2_AS_WORKER_HOME/bin and start the server as follows. Note that the workerNode system property must be set to true when starting the workers in a cluster.

sh wso2server.sh -DworkerNode=true

Once the worker node joins the cluster, you will see the following messages in the WSO2 Load Balancer logs.

[2012-09-14 06:32:41,050]  INFO - RpcMembershipRequestHandler Received JOIN message from 127.0.0.1:4251(CharithaASdomain)
[2012-09-14 06:32:41,051]  INFO - MembershipManager Application member 127.0.0.1:4251(CharithaASdomain) joined group CharithaASdomain
[2012-09-14 06:32:44,387]  INFO - ClusterManagementMode Member 127.0.0.1:4251(CharithaASdomain) joined cluster
[2012-09-14 06:32:52,052]  INFO - DefaultGroupManagementAgent Application member Host:127.0.0.1, Port: 4251, HTTP:9765, HTTPS:9445, Domain: CharithaASdomain, Sub-domain:worker, Active:true joined application cluster

Testing the cluster

There are many ways to test our cluster deployment. Let's follow the simplest path. 

- Log in to the management console of Application Server management node 
- Deploy a new Axis2 Web service (Go to Manage --> Axis2 Services --> Add --> AAR service)
- Once the service is deployed in the management node, go to the services list and click on Tryit
- Invoke the service

You will notice that the requests are served by the worker node of the cluster. If you click on the WSDL1.1 link of any service, the WSDL will be served by the worker (http://charitha.appserver.wso2.com:8280/services/HelloService?wsdl).
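
You can also verify this from the command line, for example by fetching the WSDL through the load balancer (HelloService is the sample service used earlier; substitute the name of the service you deployed):

curl -v "http://charitha.appserver.wso2.com:8280/services/HelloService?wsdl"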

Let me know if you come across any issues when setting up the cluster. 
