Sunday, March 29, 2015

How to configure an ESB proxy service as a consumer listening to two message broker queues

In this blog post, we will look at how to configure multiple transport receivers and senders in WSO2 ESB, and how to configure a proxy service with multiple transport receivers.

In order to test our scenario, we need to start two message broker instances.

Let's configure ActiveMQ to run as two instances.

1. Download ActiveMQ and extract it.
2. Run the following commands from the ActiveMQ bin directory to create two instances.

$ ./activemq create instanceA
$ ./activemq create instanceB


Running these two commands creates two directories inside the ActiveMQ bin directory, each with its own copy of the configuration files and start-up scripts. We can now modify the configuration files to use different ports so that the two MQ instances can be started without port conflicts.

Open the instanceB/conf/activemq.xml file and modify the ports under transportConnectors.

<transportConnectors>
    <transportConnector name="openwire" uri="tcp://0.0.0.0:61636?maximumConnections=1000&amp;wireformat.maxFrameSize=104857600"/>
    <transportConnector name="amqp" uri="amqp://0.0.0.0:5682?maximumConnections=1000&amp;wireformat.maxFrameSize=104857600"/>
</transportConnectors>


Now open jetty.xml in the same directory and change the web console port from the default 8161 to a different port.
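The exact structure of jetty.xml varies between ActiveMQ releases, so treat this as a sketch; in recent 5.x versions the port appears on a bean like the following (8162 is an assumed choice for instanceB):

```xml
<!-- instanceB web console: moved from the default 8161 to 8162 -->
<bean id="jettyPort" class="org.apache.activemq.web.WebConsolePort" init-method="start">
    <property name="host" value="0.0.0.0"/>
    <property name="port" value="8162"/>
</bean>
```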

Now we are ready to start the two ActiveMQ instances.

cd instanceA/bin
./instanceA console

cd instanceB/bin
./instanceB console

Now we have two ActiveMQ instances running in console mode.

Log into the ActiveMQ instanceA UI and create a queue named MyJMSQueue.
Similarly, log into the ActiveMQ instanceB UI and create a queue with the same name.

The console for instanceA is available at http://localhost:8161/admin (for instanceB, use the port you configured in jetty.xml); the default username and password are both admin.

That completes the configuration of the ActiveMQ brokers.

Now copy the following jar files to the repository/components/lib directory of the ESB.

activemq-broker-5.8.0.jar
activemq-client-5.8.0.jar
geronimo-j2ee-management_1.1_spec-1.0.1.jar
geronimo-jms_1.1_spec-1.1.1.jar
hawtbuf-1.9.jar


Configuring axis2.xml

Now open repository/conf/axis2/axis2.xml, uncomment the JMS transport section for ActiveMQ, and duplicate it as a transport named jms1. Make sure to update the provider URL port in the duplicate with the value you specified in instanceB's activemq.xml. My configuration looks like the following.

<transportReceiver name="jms" class="org.apache.axis2.transport.jms.JMSListener">
        <parameter name="myTopicConnectionFactory" locked="false">
                <parameter name="java.naming.factory.initial" locked="false">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>
                <parameter name="java.naming.provider.url" locked="false">tcp://localhost:61616</parameter>
                <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">TopicConnectionFactory</parameter>
                    <parameter name="transport.jms.ConnectionFactoryType" locked="false">topic</parameter>
        </parameter>

        <parameter name="myQueueConnectionFactory" locked="false">
                <parameter name="java.naming.factory.initial" locked="false">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>
                <parameter name="java.naming.provider.url" locked="false">tcp://localhost:61616</parameter>
                <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">QueueConnectionFactory</parameter>
                    <parameter name="transport.jms.ConnectionFactoryType" locked="false">queue</parameter>
        </parameter>

        <parameter name="default" locked="false">
                <parameter name="java.naming.factory.initial" locked="false">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>
                <parameter name="java.naming.provider.url" locked="false">tcp://localhost:61616</parameter>
                <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">QueueConnectionFactory</parameter>
                    <parameter name="transport.jms.ConnectionFactoryType" locked="false">queue</parameter>
        </parameter>
    </transportReceiver>

    <transportReceiver name="jms1" class="org.apache.axis2.transport.jms.JMSListener">
        <parameter name="myTopicConnectionFactory1" locked="false">
                <parameter name="java.naming.factory.initial" locked="false">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>
                <parameter name="java.naming.provider.url" locked="false">tcp://localhost:61636</parameter>
                <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">TopicConnectionFactory</parameter>
                    <parameter name="transport.jms.ConnectionFactoryType" locked="false">topic</parameter>
        </parameter>

        <parameter name="myQueueConnectionFactory1" locked="false">
                <parameter name="java.naming.factory.initial" locked="false">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>
                <parameter name="java.naming.provider.url" locked="false">tcp://localhost:61636</parameter>
                <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">QueueConnectionFactory</parameter>
                    <parameter name="transport.jms.ConnectionFactoryType" locked="false">queue</parameter>
        </parameter>

        <parameter name="default" locked="false">
                <parameter name="java.naming.factory.initial" locked="false">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>
                <parameter name="java.naming.provider.url" locked="false">tcp://localhost:61636</parameter>
                <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">QueueConnectionFactory</parameter>
                    <parameter name="transport.jms.ConnectionFactoryType" locked="false">queue</parameter>
        </parameter>
    </transportReceiver>

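The post's title also mentions senders: if you need to send messages over both transports as well, matching sender entries can be declared alongside the receivers. A minimal sketch (the JMSSender takes no inline parameters here; connection details come from the endpoint or proxy):

```xml
<!-- one sender per transport name, mirroring the receivers above -->
<transportSender name="jms" class="org.apache.axis2.transport.jms.JMSSender"/>
<transportSender name="jms1" class="org.apache.axis2.transport.jms.JMSSender"/>
```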

Now start the ESB and deploy the following proxy service.

<proxy xmlns="http://ws.apache.org/ns/synapse"
       name="JMSListenerProxy"
       transports="jms jms1"
       startOnLoad="true"
       trace="disable">
   <description/>
   <target>
      <inSequence>
         <log level="full"/>
         <drop/>
      </inSequence>
   </target>
   <parameter name="transport.jms.Destination">MyJMSQueue</parameter>
</proxy>

Now, if you publish a message to the MyJMSQueue queue of either ActiveMQ instance, you will notice that the message is consumed by our proxy service and logged.

How does it work?

In our scenario, since transports jms and jms1 require different configurations, we cannot specify the connection factory details in the proxy service itself. Hence we resort to the default configurations specified in axis2.xml.

However, we can still specify the JMS destination name in the proxy service. This makes sense because this kind of setup is typically needed only in an MQ high-availability scenario, where we can afford to use the same queue name on both message broker instances.
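For contrast, when a proxy listens on a single JMS transport, the connection factory can be selected per proxy with the transport.jms.ConnectionFactory parameter, which references one of the factory names defined in axis2.xml. A sketch (the proxy name SingleBrokerProxy is hypothetical):

```xml
<!-- hypothetical single-broker proxy: picks a named factory from axis2.xml -->
<proxy xmlns="http://ws.apache.org/ns/synapse"
       name="SingleBrokerProxy"
       transports="jms"
       startOnLoad="true">
   <target>
      <inSequence>
         <log level="full"/>
         <drop/>
      </inSequence>
   </target>
   <parameter name="transport.jms.ConnectionFactory">myQueueConnectionFactory</parameter>
   <parameter name="transport.jms.Destination">MyJMSQueue</parameter>
</proxy>
```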

Saturday, March 28, 2015

How to configure IBM MQ 8 With WSO2 ESB

In this blog post, we will look at how to configure IBM MQ version 8 with WSO2 ESB and implement a proxy service to consume messages from a queue in IBM MQ.

Following are the steps we need to follow in order to configure ESB and implement our proxy service. 


1. Create the relevant JMS Administrative objects in IBM MQ.
2. Generate the JNDI binding file from IBM MQ
3. Configure WSO2 ESB JMS transport with the generated binding file and connection factory information.
4. Implement the proxy service and deploy it.
5. Publish a message to MQ and observe how it is consumed by ESB.

Create Queue Manager and Queue and Server Connection Channel in MQ

Step1.

Start the WebSphere MQ Explorer. If you are not running on an administrator account, right-click on the icon and select the Run as Administrator option.


Step 2.

Click on Queue Managers and select New => Queue Manager to create a new queue manager.

We will name the queue manager ESBQManager. Select the create server connection channel option as you go through the wizard. You will get the option to specify the port this queue manager will use; since we do not have any other queue managers at the moment, we can use the default port 1414.







Now we have created a queue manager object. Next we need to create a local queue, which we will use to publish messages and consume them from the ESB. Let's name this queue LocalQueue1.

Expand newly created ESBQManager and click on Queues and select New => Local Queue.





We will use default options for our local queue.

Next we need to create a server connection channel which will be used to connect to the queue manager.

Select Channels => New => Server-connection Channel and give the channel the name mychannel. Accept the default options when creating the channel.




Now we have created our queue manager, queue and server connection channel.

Generating the binding file

Next we need to generate the binding file, which will be used by the IBM MQ client libraries for JNDI look-up. For that, we first need to create a directory where this binding file will be stored. I have created a directory named G:\jndidirectory for this purpose.

Now go to MQ Explorer, click on JMS Administered Objects and select Add Initial Context.



In the connection details wizard, select File System option and browse to our newly created directory and click next and click finish.


Now, under the JMS Administered objects, we should be able to see our file initial context.



Expand it and click on Connection Factories to create a new connection factory.



We will name our connection factory MyQueueConnectionFactory. For the connection factory type, select Queue Connection Factory.




Click next and then finish. Now click on the newly created connection factory and select properties. Click on the connections option, then browse and select our queue manager. You can also configure the port and the host name for the connection factory; since we used default values, no changes are needed here.






For the other options, go with the defaults. Next, we need to create a JMS destination. We will use the queue name LocalQueue1 as the destination name and map it to our queue LocalQueue1. Click on Destinations, select New => Destination, and provide the name LocalQueue1. When you get the option to select the queue manager and queue, browse and select ESBQManager and LocalQueue1.





Now we are done with creating the initial context. If you browse to the directory we specified, you should see the newly generated binding file.



In order to connect to the queue, we need to configure channel authentication. For ease of use, let's disable channel authentication for our scenario. To do that, run runmqsc from the command line and execute the following two commands. Note that you have to start the command prompt as an administrator.

runmqsc ESBQManager

ALTER QMGR CHLAUTH(DISABLED)

REFRESH SECURITY TYPE(CONNAUTH)


Now we are done with configuring the IBM MQ.  

Configuring WSO2 ESB JMS Transport. 


Open axis2.xml, found in the wso2esb-4.8.1\repository\conf\axis2 directory, and add the following entries near the commented-out JMS transport receiver section.

<transportReceiver name="jms" class="org.apache.axis2.transport.jms.JMSListener">
  <parameter name="default" locked="false">
    <parameter name="java.naming.factory.initial" locked="false">com.sun.jndi.fscontext.RefFSContextFactory</parameter>
    <parameter name="java.naming.provider.url" locked="false">file:/G:/jndidirectory</parameter>
    <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">MyQueueConnectionFactory</parameter>
    <parameter name="transport.jms.ConnectionFactoryType" locked="false">queue</parameter>
    <parameter name="transport.jms.UserName" locked="false">nandika</parameter>
    <parameter name="transport.jms.Password" locked="false">password</parameter>
  </parameter>

  <parameter name="myQueueConnectionFactory1" locked="false">
    <parameter name="java.naming.factory.initial" locked="false">com.sun.jndi.fscontext.RefFSContextFactory</parameter>
    <parameter name="java.naming.provider.url" locked="false">file:/G:/jndidirectory</parameter>
    <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">MyQueueConnectionFactory</parameter>
    <parameter name="transport.jms.ConnectionFactoryType" locked="false">queue</parameter>
    <parameter name="transport.jms.UserName" locked="false">nandika</parameter>
    <parameter name="transport.jms.Password" locked="false">password</parameter>
  </parameter>
</transportReceiver>

Similarly add jms transport sender section as follows.

<transportSender name="jms" class="org.apache.axis2.transport.jms.JMSSender">
  <parameter name="default" locked="false">
    <parameter name="java.naming.factory.initial" locked="false">com.sun.jndi.fscontext.RefFSContextFactory</parameter>
    <parameter name="java.naming.provider.url" locked="false">file:/G:/jndidirectory</parameter>
    <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">MyQueueConnectionFactory</parameter>
    <parameter name="transport.jms.ConnectionFactoryType" locked="false">queue</parameter>
    <parameter name="transport.jms.UserName" locked="false">nandika</parameter>
    <parameter name="transport.jms.Password" locked="false">password</parameter>
  </parameter>

  <parameter name="myQueueConnectionFactory1" locked="false">
    <parameter name="java.naming.factory.initial" locked="false">com.sun.jndi.fscontext.RefFSContextFactory</parameter>
    <parameter name="java.naming.provider.url" locked="false">file:/G:/jndidirectory</parameter>
    <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">MyQueueConnectionFactory</parameter>
    <parameter name="transport.jms.ConnectionFactoryType" locked="false">queue</parameter>
    <parameter name="transport.jms.UserName" locked="false">nandika</parameter>
    <parameter name="transport.jms.Password" locked="false">password</parameter>
  </parameter>
</transportSender>

Since we are using the IBM MQ queue manager's default configuration, it expects username/password client authentication. Here, the username and password are the credentials of your logged-in operating system account.
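With the sender configured, the ESB can also publish to MQ by targeting the queue with a jms: endpoint URL that carries the same JNDI details. A sketch using the names from this post (the endpoint name MQSendEndpoint is illustrative):

```xml
<!-- sends to LocalQueue1 via the binding-file JNDI context configured above -->
<endpoint name="MQSendEndpoint" xmlns="http://ws.apache.org/ns/synapse">
   <address uri="jms:/LocalQueue1?transport.jms.ConnectionFactoryJNDIName=MyQueueConnectionFactory&amp;java.naming.factory.initial=com.sun.jndi.fscontext.RefFSContextFactory&amp;java.naming.provider.url=file:/G:/jndidirectory&amp;transport.jms.DestinationType=queue"/>
</endpoint>
```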


Copy MQ client libraries to respective directories.


Copy jta.jar and jms.jar to the repository/components/lib directory.
Copy com.ibm.mq_2.0.0.jar and fscontext_1.0.0.jar to the repository/components/dropins directory. Download the jar files from here.

Deploy JMSListener Proxy Service.

Now start the ESB and deploy the following simple proxy service. This proxy service acts as a listener on our queue LocalQueue1; whenever we put a message into the queue, the proxy service will pull it out and log it.

<proxy xmlns="http://ws.apache.org/ns/synapse"
       name="MyJMSProxy"
       transports="jms"
       startOnLoad="true"
       trace="disable">
   <description/>
   <target>
      <inSequence>
         <log level="full"/>
         <drop/>
      </inSequence>
   </target>
   <parameter name="transport.jms.Destination">LocalQueue1</parameter>
</proxy>

Testing our proxy service

Go to MQ Explorer and add a message to local queue. 



Now you will be able to see the message logged in ESB console as well as in the log file.

Enjoy JMS with IBM MQ

Sunday, December 21, 2014

Packt's $5 ebook Bonanza

Following the success of last year's offer, Packt Publishing will be celebrating the holiday season with a bigger $5 offer. Check it out here: http://bit.ly/1DQhFk6. From Thursday 18th December, every eBook and video will be available on the publisher's website for just $5. Customers are invited to purchase as many as they like before the offer ends on Tuesday 6th January, making it the perfect opportunity to try something new or to take your skills to the next level as 2015 begins.

Sunday, November 16, 2014

WS-BPEL 2.0 Beginner's Guide Book Review



I had the opportunity to read the WS-BPEL 2.0 Beginner's Guide from Packt Publishing. The authors of this book have done a very good job of explaining the concepts in a simple and concise manner.

It is a very descriptive and practical guide for beginners in BPEL. Writing an executable BPEL process is a very different task from writing code in a general-purpose programming language, because you need background knowledge of several technologies in order to properly understand and implement a BPEL process. The minimum set of those technologies includes SOAP/HTTP web services, WSDL, XML, XML Schema and XPath.

Hence, WS-BPEL 2.0 Beginner's Guide takes an ideal approach for a beginner. It starts by introducing the basic concepts and goes straight into a practical example. It chooses Oracle SOA Suite as the target technology stack and JDeveloper as its BPEL development environment, and provides step-by-step screenshots showing how to implement a process. Next it explains every step taken in implementing the sample process, and how to deploy and test it. I find this approach very useful, simply because when learning a complex technology like BPEL, the best approach is to start with simple exercises to get a feel for the technology and then dive into the more complex topics step by step.

This pattern is followed in all the chapters. Each new chapter introduces a concept from BPEL, goes on to a practical example explaining the details, and finally tests the process. Hence, when you finish reading the book, not only will you understand the concepts in BPEL, but you will also have mastered the BPEL development tool. As BPEL is developed mostly using graphical tools, mastering the development environment is an essential skill for becoming a skilled BPEL developer.

The book explains the concepts in words as well as with diagrams. It covers all the concepts from the BPEL specification, including topics such as synchronous processes, asynchronous processes, message correlation, fault handling, compensation handling, etc.

In addition to BPEL concepts, the book also covers the WS-HumanTask space. The human task tooling capabilities of JDeveloper, as well as the underlying concepts, are explained in a concise manner. Many practical process implementations in industry involve both BPEL and human tasks, so for a beginner this book is an ideal guide to mastering BPEL-based workflow technologies. It can also be useful for an experienced BPEL developer migrating from another tool to JDeveloper.

Finally, I would recommend this book to anyone who is new to BPEL and is looking for a practical guide to learning BPEL-related workflow technologies.

Monday, February 24, 2014

Proxy Service Version Management with WSO2 ESB

Versioning of proxy services in an SOA environment is a common requirement. Versioning is needed when you want to add, update or change the functionality of a proxy service without affecting its existing consumers.




















The above diagram shows a typical versioning scenario. If the change in Service X 2.0 is compatible with Service X 1.0, then we can simply point to Service X version 2.0 and consumers will not be affected by the change. However, if the change is incompatible, we will have to introduce a new proxy service version.

General Principles of versioning

1. Clients should not be forced to use the new version immediately
  • Gradual client migration
  • Retire services gracefully
2. Support multiple versions concurrently
  • Limit the number of versions through governance
  • Only the latest version is discoverable

Solution 1. 

Create two versions of the proxy service. Consumer A accesses version 1.0 of the service and Consumer B accesses version 2.0. Gradually migrate Consumer A to proxy service version 2.0. This way, Consumer A can stay on version 1.0 and plan the upgrade to version 2.0. Both versions of the proxy service will exist until version 1.0 is deprecated.
























Versioning with WSO2 ESB


The easiest way to version proxy services is to create a new version of the proxy service and its related artifacts by appending the version information to the proxy service name. As a best practice, add version information to all artifacts.

For example, suppose we have to proxy a web service named StockQuote. We can name the proxy service StockQuoteProxyV1. All artifacts associated with the proxy service should be named accordingly; for example, our endpoint pointing to the StockQuote service can be named StockQuoteEndpointV1.

Creating and deploying a new version of the proxy service then becomes a simple task: we just update all the related artifacts with the new version number.
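The naming convention can be sketched as follows (the backend address is a placeholder, not a real service; a V2 release would copy this with the V2 suffix throughout):

```xml
<!-- version 1 of the proxy; version 2 renames both the proxy and its endpoint -->
<proxy xmlns="http://ws.apache.org/ns/synapse"
       name="StockQuoteProxyV1"
       transports="https http"
       startOnLoad="true">
   <target>
      <inSequence>
         <send>
            <endpoint name="StockQuoteEndpointV1">
               <!-- placeholder backend address -->
               <address uri="http://localhost:9000/services/StockQuote"/>
            </endpoint>
         </send>
      </inSequence>
      <outSequence>
         <send/>
      </outSequence>
   </target>
</proxy>
```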


Future Improvements

Another approach to proxy service versioning, with the version as an attribute, has been tried in the parent Apache Synapse project, and there is a GSoC project on the same topic. These improvements are planned for future releases of WSO2 ESB.


Saturday, February 22, 2014

How to configure a BPEL process to consume JMS Queue

Since BPS is based on Axis2, all Axis2 transports are available to BPEL-published services as well. I will describe the steps required to consume a message from a JMS queue in order to invoke a BPEL process.

We will use ActiveMQ for this sample.

Following is the step wise guide to do it.

Step 1. 
Download and extract Apache ActiveMQ 5.6.

Step 2.
Download and extract WSO2 BPS 3.2.0.

Step 3.
Uncomment the TransportReceiver and TransportSender sections of axis2.xml corresponding to ActiveMQ. You can find axis2.xml in the repository/conf/axis2 directory.


    <transportReceiver name="jms" class="org.apache.axis2.transport.jms.JMSListener">
        <parameter name="myTopicConnectionFactory">
                <parameter name="java.naming.factory.initial">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>
                <parameter name="java.naming.provider.url">tcp://localhost:61616</parameter>
                <parameter name="transport.jms.ConnectionFactoryJNDIName">TopicConnectionFactory</parameter>
        </parameter>

        <parameter name="myQueueConnectionFactory">
                <parameter name="java.naming.factory.initial">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>
                <parameter name="java.naming.provider.url">tcp://localhost:61616</parameter>
                <parameter name="transport.jms.ConnectionFactoryJNDIName">QueueConnectionFactory</parameter>
        </parameter>

        <parameter name="default">
                <parameter name="java.naming.factory.initial">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>
                <parameter name="java.naming.provider.url">tcp://localhost:61616</parameter>
                <parameter name="transport.jms.ConnectionFactoryJNDIName">QueueConnectionFactory</parameter>
        </parameter>
    </transportReceiver>


The corresponding JMS transport sender entry is a single self-closing element:

<transportSender name="jms" class="org.apache.axis2.transport.jms.JMSSender"/>

Step 4.

Copy the following jar files from the ActiveMQ lib directory to the repository/components/lib directory.

activemq-core-5.6.0.jar
geronimo-j2ee-management_1.1_spec-1.0.1.jar
geronimo-jms_1.1_spec-1.1.1.jar


Step 5.
Start ActiveMQ in console mode.

apache-activemq-5.6.0/bin $ ./activemq console

Use the ActiveMQ management console to view the queues and topics. It is available at http://localhost:8161/admin.



Step 6.
Start BPS from console.

wso2bps-3.2.0/bin $ sh wso2server.sh

From the management console, deploy the HelloWorld2.zip file that is available in the repository/samples/bpel directory of BPS.

From the services list view, select the HelloWorld service. 



As you can see, the JMS endpoint is also available for the newly deployed process.

Step 7.

Now go to the queues section of the ActiveMQ management console. You will find that there is a queue named HelloService.






Use the send-to section to add a message to the HelloService queue. You can generate a sample message for the HelloService WSDL using SoapUI.
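For reference, the request for the stock ODE/BPS HelloWorld sample looks roughly like the following; the element and namespace names are assumptions based on the sample's WSDL and may differ in your version, so verify against the WSDL SoapUI loads:

```xml
<!-- assumed payload shape for the HelloWorld sample; verify against the WSDL -->
<typ:hello xmlns:typ="http://ode/bpel/unit-test.wsdl">
   <TestPart>Hello</TestPart>
</typ:hello>
```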



How to Cluster WSO2 BPS 3.2.0

Cluster Architecture


Server clustering is done mainly to achieve high availability and scalability.


High Availability


High availability means there is redundancy in the system such that the service is available to the outside world irrespective of individual component failures. For example, in a two-node cluster, even if one node fails, the other node continues to serve requests until the failed node is restored.


Scalability


Scalability means increasing the processing capacity by adding more server nodes.


Load Balancer


Load balancing is the method of distributing workload across multiple server nodes. In order to achieve proper cluster behavior you require a load balancer. Its function is to monitor the availability of the server nodes in the cluster and route requests to all available nodes in a fair manner. The load balancer is the external-facing interface of the cluster: it receives all requests coming to the cluster and distributes the load to all available nodes. If a node has failed, the load balancer will not route requests to it until it is back online.


WSO2 Business Process Server Cluster Architecture


In order to build a WSO2 Business Process Server cluster you require the following:

1. Load balancer
2. Hardware / VM nodes for the BPS nodes
3. Database server

The following diagram depicts the deployment of a two-node WSO2 BPS cluster.





The load balancer will receive all the requests and distribute the load to the two BPS nodes. BPS nodes can be configured as a master node and slave nodes; a BPS cluster can have one master node and multiple slave nodes.


BPS Master Nodes / Slave Nodes


The master node is where the workflow artifacts (business processes / human tasks) are first deployed. The slave nodes look at the configuration generated by the master node for a given deployment artifact and then deploy those artifacts in their own runtimes.

WSO2 BPS requires this method of deployment because it automatically versions the deployed BPEL / human task artifacts. Hence, in order to have the same version number for a given deployment artifact across all the nodes, the versioning must be done at one node (the master node).

A BPS server decides whether it is a master node or a slave node by looking at its configuration registry mounting configuration. We will look at that configuration in detail later.


BPS and Registry


In the simplest terms, the registry is an abstraction over a database schema. It provides an API with which you can store data in and retrieve data from a database. WSO2 BPS embeds the registry component and hence has a built-in registry. The registry is divided into three spaces.

Local Registry


Local registry is used to store information local to a server node.

Configuration Registry


The configuration registry is used to store information that needs to be shared across server nodes of the same type. For example, the configuration registry is shared across BPS server nodes, but it would not be shared with nodes of a different server type.

Governance Registry 


The governance registry is used to store information that can be shared across multiple clusters of different server types. For example, the governance registry can be shared across a BPS cluster and an ESB cluster. In the above diagram, these different registry configurations are depicted as individual databases.

Note: The BPS master node refers to the configuration registry using a read/write link, while the BPS slave nodes refer to it using a read-only link.


BPS and User Store and Authorization


The BPS management console requires a user to log in to the system in order to perform management activities. Additionally, various permission levels can be configured for access management. In human tasks, the operations available on a task depend on the logged-in user.

All these access control / authentication / authorization functions are inherited by the BPS server from the Carbon kernel. You can also configure an external LDAP / Active Directory to grant users access to the server. All this user and permission information is kept in the user store database, referred to as UM DB in the above diagram. This database is also shared across all the cluster nodes.


BPS Persistence DB


BPS handles long-running processes and human tasks. This means the runtime state of process instances / human task instances has to be persisted to a database. The BPS persistence database is where we store this process / task configuration data and process / task instance state.


Configuring the BPS Cluster


Now that we have understood the individual components depicted in the above diagram, we can proceed to implement our BPS cluster. I will break the configuration down into the following steps. The only major difference between the master node and a slave node is in the registry.xml configuration.
If you are using two machines (hardware or VM), all other configurations are identical for the master node and slave nodes except IP addresses, ports and the deployment synchronizer entry. However, if you are configuring the cluster on the same machine for testing purposes, you will need to change multiple files, as port conflicts can occur.

  1. Create the database schemas.
  2. Configure master-datasources.xml (registry and user manager databases)
  3. Configure datasources.properties (BPS persistence database)
  4. Configure registry.xml (different for master node and slave nodes)
  5. Configure user-mgt.xml
  6. Configure axis2.xml
  7. Configure tasks-config.xml
  8. Configure bps.xml
  9. Configure carbon.xml
  10. Configure the server start-up script
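Since registry.xml (step 4) carries the key master/slave difference, here is a sketch of the usual WSO2 mount configuration; the instance id, cacheId and target paths are illustrative, and a slave node would set readOnly to true:

```xml
<!-- illustrative registry.xml mount for the shared config/governance registry -->
<dbConfig name="wso2bpsregistry">
    <dataSource>jdbc/WSO2RegistryDB</dataSource>
</dbConfig>
<remoteInstance url="https://localhost:9443/registry">
    <id>instanceid</id>
    <dbConfig>wso2bpsregistry</dbConfig>
    <readOnly>false</readOnly> <!-- true on slave nodes -->
    <enableCache>true</enableCache>
    <registryRoot>/</registryRoot>
    <cacheId>root@jdbc:mysql://localhost:3306/REGISTRY_DB</cacheId>
</remoteInstance>
<mount path="/_system/config" overwrite="true">
    <instanceId>instanceid</instanceId>
    <targetPath>/_system/bpsConfig</targetPath>
</mount>
<mount path="/_system/governance" overwrite="true">
    <instanceId>instanceid</instanceId>
    <targetPath>/_system/governance</targetPath>
</mount>
```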


Creating Database Schemas


WSO2 BPS supports the following major databases:

1. Oracle
2. MySQL
3. MSSQL
4. PostgreSQL

In the above diagram, we depicted five databases. We can use H2 as the local registry for each BPS node. We can create one schema for the registry and configure registry mounting for the configuration registry and governance registry. Hence we have to create three more databases: for the registry, the user store and the BPS persistence DB.

Database Schema Requirement


Configuration/Governance Registry: REGISTRY_DB
User store database: UM_DB
BPS persistence database: BPS_DB

You can find the SQL scripts for creating the registry databases in the wso2bps-3.2.0/dbscripts directory. The SQL script for the BPS persistence database can be found in the wso2bps-3.2.0/dbscripts/bps directory.

As an example, we will show the steps for creating a database using MySQL.

mysql> create database REGISTRY_DB;
mysql> use REGISTRY_DB;
mysql> source /dbscripts/mysql.sql;
mysql> grant all on REGISTRY_DB.* TO username@localhost identified by "password";

Download the MySQL connector and copy it to the repository/components/lib directory.

Configuring master-datasources.xml


You can configure the data sources for the registry and user store in the master-datasources.xml file found in the repository/conf/datasources directory.

<datasources-configuration xmlns:svns="http://org.wso2.securevault/configuration">
  <providers>
    <provider>org.wso2.carbon.ndatasource.rdbms.RDBMSDataSourceReader</provider>
  </providers>

  <datasources>
    <datasource>
      <name>WSO2_CARBON_DB</name>
      <description>The datasource used for registry and user manager</description>
      <jndiConfig>
        <name>jdbc/WSO2CarbonDB</name>
      </jndiConfig>
      <definition type="RDBMS">
        <configuration>
          <url>jdbc:h2:repository/database/WSO2CARBON_DB;DB_CLOSE_ON_EXIT=FALSE;LOCK_TIMEOUT=60000</url>
          <username>wso2carbon</username>
          <password>wso2carbon</password>
          <driverClassName>org.h2.Driver</driverClassName>
          <maxActive>50</maxActive>
          <maxWait>60000</maxWait>
          <testOnBorrow>true</testOnBorrow>
          <validationQuery>SELECT 1</validationQuery>
          <validationInterval>30000</validationInterval>
        </configuration>
      </definition>
    </datasource>

    <datasource>
      <name>WSO2_REGISTRY_DB</name>
      <description>The datasource used for the config/governance registry</description>
      <jndiConfig>
        <name>jdbc/WSO2RegistryDB</name>
      </jndiConfig>
      <definition type="RDBMS">
        <configuration>
          <url>jdbc:mysql://localhost:3306/REGISTRY_DB?autoReconnect=true</url>
          <username>root</username>
          <password>root</password>
          <driverClassName>com.mysql.jdbc.Driver</driverClassName>
          <maxActive>50</maxActive>
          <maxWait>60000</maxWait>
          <testOnBorrow>true</testOnBorrow>
          <validationQuery>SELECT 1</validationQuery>
          <validationInterval>30000</validationInterval>
        </configuration>
      </definition>
    </datasource>

    <datasource>
      <name>WSO2_UM_DB</name>
      <description>The datasource used for the user store</description>
      <jndiConfig>
        <name>jdbc/WSO2UMDB</name>
      </jndiConfig>
      <definition type="RDBMS">
        <configuration>
          <url>jdbc:mysql://localhost:3306/UM_DB?autoReconnect=true</url>
          <username>root</username>
          <password>root</password>
          <driverClassName>com.mysql.jdbc.Driver</driverClassName>
          <maxActive>50</maxActive>
          <maxWait>60000</maxWait>
          <testOnBorrow>true</testOnBorrow>
          <validationQuery>SELECT 1</validationQuery>
          <validationInterval>30000</validationInterval>
        </configuration>
      </definition>
    </datasource>
  </datasources>
</datasources-configuration>

Most of the entries are self-explanatory.

Configure datasources.properties  ( BPS Persistence database )


Open /repository/conf/datasources.properties and add the relevant entries, such as the driver class, database connection URL and credentials. The following is the matching configuration for MySQL.

synapse.datasources=bpsds
synapse.datasources.icFactory=com.sun.jndi.rmi.registry.RegistryContextFactory
synapse.datasources.providerPort=2199
synapse.datasources.bpsds.registry=JNDI
synapse.datasources.bpsds.type=BasicDataSource
synapse.datasources.bpsds.driverClassName=com.mysql.jdbc.Driver
synapse.datasources.bpsds.url=jdbc:mysql://localhost:3306/BPS_DB?autoReconnect=true
synapse.datasources.bpsds.username=root
synapse.datasources.bpsds.password=root
synapse.datasources.bpsds.validationQuery=SELECT 1
synapse.datasources.bpsds.dsName=bpsds
synapse.datasources.bpsds.maxActive=100
synapse.datasources.bpsds.maxIdle=20
synapse.datasources.bpsds.maxWait=10000

You need to do this for each node in the cluster.

Configure registry.xml


The registry mount path is used to identify the type of registry. For example, "/_system/config" refers to the configuration registry and "/_system/governance" refers to the governance registry. The following is an example configuration for the BPS mount; each section is described below.
Only the additions to the registry.xml file are described here. Leave the configuration for the local registry as it is and add the following new entries.

Registry configuration for BPS master node


<dbConfig name="wso2bpsregistry">
  <dataSource>jdbc/WSO2RegistryDB</dataSource>
</dbConfig>

<remoteInstance url="https://localhost:9443/registry">
  <id>instanceid</id>
  <dbConfig>wso2bpsregistry</dbConfig>
  <readOnly>false</readOnly>
  <enableCache>true</enableCache>
  <registryRoot>/</registryRoot>
  <cacheId>root@jdbc:mysql://localhost:3306/REGISTRY_DB</cacheId>
</remoteInstance>

<mount path="/_system/config" overwrite="virtual">
  <instanceId>instanceid</instanceId>
  <targetPath>/_system/bpsConfig</targetPath>
</mount>

<mount path="/_system/governance" overwrite="virtual">
  <instanceId>instanceid</instanceId>
  <targetPath>/_system/governance</targetPath>
</mount>



Let’s look at the above configuration in detail. The dbConfig entry identifies the datasource we configured in master-datasources.xml and gives it a unique name, "wso2bpsregistry", which is referred to later.
The remoteInstance section refers to an external registry mount. Here we can specify the read-only/read-write nature of the instance, as well as the caching configuration and the registry root location. Additionally, we need to specify a cacheId for caching to function properly in a clustered environment. Note that the cacheId is the same as the JDBC connection URL of our registry database.
We define a unique id for each remote instance, which is then referred to from the mount configurations. In the above example, the unique id for the remote instance is "instanceid". In each mount configuration, we specify the actual mount path and the target mount path.


Registry configuration for BPS slave node



<dbConfig name="wso2bpsregistry">
  <dataSource>jdbc/WSO2RegistryDB</dataSource>
</dbConfig>

<remoteInstance url="https://localhost:9443/registry">
  <id>instanceid</id>
  <dbConfig>wso2bpsregistry</dbConfig>
  <readOnly>true</readOnly>
  <enableCache>true</enableCache>
  <registryRoot>/</registryRoot>
  <cacheId>root@jdbc:mysql://localhost:3306/REGISTRY_DB</cacheId>
</remoteInstance>

<mount path="/_system/config" overwrite="virtual">
  <instanceId>instanceid</instanceId>
  <targetPath>/_system/bpsConfig</targetPath>
</mount>

<mount path="/_system/governance" overwrite="virtual">
  <instanceId>instanceid</instanceId>
  <targetPath>/_system/governance</targetPath>
</mount>

This configuration is the same as for the master node, except that the readOnly property of the remote instance is set to true.

Configure user-mgt.xml

    
In user-mgt.xml, enter the datasource information for the user store which we configured previously in the master-datasources.xml file. You can change the admin username and password as well; however, you should do this before starting the server.

<Configuration>
  <AddAdmin>true</AddAdmin>
  <AdminRole>admin</AdminRole>
  <AdminUser>
    <UserName>admin</UserName>
    <Password>admin</Password>
  </AdminUser>
  <EveryOneRoleName>everyone</EveryOneRoleName>
  <Property name="dataSource">jdbc/WSO2UMDB</Property>
</Configuration>


Configure axis2.xml


We use axis2.xml to enable clustering, using the well-known address (WKA) based membership scheme. In WKA-based clustering, a subset of cluster members is configured in every member of the cluster, and at least one well-known member has to be operational at all times.
In axis2.xml, find the clustering section.

<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent"  enable="true">
  <parameter name="membershipScheme">wka</parameter>
  <parameter name="localMemberHost">127.0.0.1</parameter>
  <parameter name="localMemberPort">4000</parameter>
  <members>
    <member>
      <hostName>10.100.1.1</hostName>
      <port>4000</port>
    </member>
    <member>
      <hostName>10.100.1.2</hostName>
      <port>4010</port>
    </member>
  </members>
</clustering>


Change the enable attribute to true. Find the membershipScheme parameter and set it to wka. Then configure the localMemberHost and localMemberPort entries. Under the members section, add the host name and port of each WKA member. As we have only two nodes in our sample cluster, we configure both nodes as WKA nodes.


Configure tasks-config.xml


BPS packages the task server component as well. By default, when clustering is enabled, this component waits for two task server nodes, so we need to change this entry in order to start the BPS server. Open tasks-config.xml and change the task server count to 1.
<taskServerCount>1</taskServerCount>

Configure bps.xml


In bps.xml, you need to configure the following entries.
Enable distributed lock

<tns:UseDistributedLock>true</tns:UseDistributedLock>
This entry enables the Hazelcast-based synchronization mechanism that prevents concurrent modification of instance state by cluster members.

Configure scheduler thread pool size

<tns:ODESchedulerThreadPoolSize>0</tns:ODESchedulerThreadPoolSize>

The thread pool size should always be smaller than the maxActive database connection count configured in the datasources.properties file. When configuring the thread pool size, allocate 10-15 threads per core depending on your setup. Also leave some additional database connections spare, since BPS uses database connections for the management API as well.

Example settings for a two-node cluster:
                MySQL server max connections: 250
                maxActive entry in datasources.properties for each node: 100
                Scheduler thread pool size for each node: 50
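
A quick sanity check of the sizing arithmetic above, using the example values from the text (the node count and limits are illustrative, not prescriptive):

```python
# Sizing guideline: each node's scheduler pool must stay below its maxActive
# connection pool, and the pools across all nodes must fit within the MySQL
# server's connection limit.
nodes = 2
mysql_max_connections = 250   # MySQL server limit
max_active_per_node = 100     # maxActive in datasources.properties
scheduler_pool_per_node = 50  # scheduler thread pool size

assert scheduler_pool_per_node < max_active_per_node
assert nodes * max_active_per_node <= mysql_max_connections
print("sizing OK")
```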

Configure carbon.xml


If you want automatic deployment of artifacts across the cluster nodes, you can enable deployment synchronizer feature from carbon.xml.

<DeploymentSynchronizer>
  <Enabled>true</Enabled>
  <AutoCommit>true</AutoCommit>
  <AutoCheckout>true</AutoCheckout>
  <RepositoryType>svn</RepositoryType>
  <SvnUrl>http://10.100.3.115/svn/repos/as</SvnUrl>
  <SvnUser>wso2</SvnUser>
  <SvnPassword>wso2123</SvnPassword>
  <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
</DeploymentSynchronizer>

The deployment synchronizer functions by committing the artifacts to the configured SVN location from one node (the node with the AutoCommit option set to true) and sending cluster messages to all other nodes about the addition or change of the artifact. When the cluster message is received, all other nodes do an SVN update, obtaining the changes into the relevant deployment directories, and the server then automatically deploys these artifacts.
For the master node, keep the AutoCommit and AutoCheckout entries as true. For all other nodes, change the AutoCommit entry to false.
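
For instance, a slave node's configuration would differ only in the AutoCommit flag (a sketch based on the block above; the SVN URL and credentials are the same example values used for the master):

```xml
<DeploymentSynchronizer>
  <Enabled>true</Enabled>
  <AutoCommit>false</AutoCommit>
  <AutoCheckout>true</AutoCheckout>
  <RepositoryType>svn</RepositoryType>
  <SvnUrl>http://10.100.3.115/svn/repos/as</SvnUrl>
  <SvnUser>wso2</SvnUser>
  <SvnPassword>wso2123</SvnPassword>
  <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
</DeploymentSynchronizer>
```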


Configure the server start-up script


In the server start-up script, you can configure the memory allocation for the server node as well as JVM tuning parameters. If you open the wso2server.sh or wso2server.bat file located in the /bin directory and go to the bottom of the file, you will find these parameters. Change them according to the expected server load.

Following is the default memory allocation for a wso2 server.

-Xms256m -Xmx1024m -XX:MaxPermSize=256m
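
A node expected to carry a heavier load might, for example, be given a larger heap (illustrative values only; size these against your own load tests):

```
-Xms512m -Xmx2048m -XX:MaxPermSize=512m
```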


Cluster artifact deployment best practices

  1. Always deploy the artifact on the master node first, and on slave nodes after some delay.
  2. Use the deployment synchronizer if a protected SVN repository is available in the network.
  3. Otherwise, you can use simple file copying to deploy artifacts.