In this knowledge article, you will learn how to use two Paxata Servers to test a failover and load-balancing configuration. Note that this configuration is for testing purposes only, not for production use. Please refer to the Reference Configuration Guide for the recommended production deployment diagram, or contact servicedesk@paxata.com for details.
Server #1 is the Core Server that handles Web/Frontend/REST API/Automation requests, while Server #2 handles Data Ingestion/Publish work only. Since you are not using Automation, you do not need to configure it specifically for the time being. The load balancer always points to Server #1 unless it is down, so end users always reach Server #1 for Web/REST API requests. Server #2 should not be reachable by end users.
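For illustration, below is a minimal sketch of the kind of health probe a load balancer can use for this active/passive routing. The web frontend port (8080) is an assumption; check jetty.properties for the port your installation actually uses.

#!/bin/bash
# Probe Server #1's web frontend; only fail over to Server #2 when it stops answering.
# Port 8080 is an assumption -- confirm the real port in jetty.properties.
if curl -sf --connect-timeout 5 -o /dev/null "http://server1:8080/"; then
    echo "server1 healthy: keep routing all Web/REST API traffic to server1"
else
    echo "server1 down: fail traffic over to server2 (follow section D)"
fi

Any equivalent health check in your load balancer works; the point is that Server #2 only receives end-user traffic when Server #1 is unavailable.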
A. Configuration
1. Server #1: Web/Frontend Server + Active Pipeline Server.
2. Server #2: Pipelet Server (Data Ingestion/Publish work only) + Backup/Passive Pipeline Server.
In the listings below, server1 and server2 are the fully qualified domain names (FQDNs) of the respective servers.
px.properties in Server #1:
px.clientId=server1
px.pipeline.url=http://server1:8090
px.library.url=http://server2:9080/library
px.messaging.mode=Embedded
px.messaging.remote.hosts=server2
px.messaging.local.host=server1
px.messaging.port=5445
px.properties in Server #2:
px.clientId=server2
px.pipeline.url=http://server1:8090
px.library.url=http://server2:9080/library
px.messaging.mode=Embedded
px.messaging.remote.hosts=server1
px.messaging.local.host=server2
px.messaging.port=5445
The other properties files, namely pes.properties, filesystem.properties, database.properties, jetty.properties, and jdbc-driver.properties, should be identical on Server #1 and Server #2.
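As a quick sanity check, you can diff these files between the two servers. The sketch below assumes passwordless ssh from an admin host and a config directory of /usr/local/paxata/server/config; adjust the path to your installation.

#!/bin/bash
# Compare the shared property files on server1 and server2; any diff output
# indicates a mismatch that should be reconciled before testing failover.
CONF=/usr/local/paxata/server/config   # assumed install path
for f in pes.properties filesystem.properties database.properties jetty.properties jdbc-driver.properties; do
    echo "== $f =="
    diff <(ssh server1 "cat $CONF/$f") <(ssh server2 "cat $CONF/$f") && echo "identical"
done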
B. Startup procedure:
1. Start server1. You will see warning messages in frontend.log about waiting for the remote HornetQ server to start up.
2. Start server2. The warning messages on server1 should now go away, and both servers are up, pointing to the same MongoDB cluster and the same HDFS for the data library.
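The startup order can be run roughly as follows. This is a sketch only: it assumes the service is named paxata-server (as in section C), is managed with the service command, and that logs live under /usr/local/paxata/server/logs; adjust these to your installation.

# On server1:
sudo service paxata-server start
tail -f /usr/local/paxata/server/logs/frontend.log    # expect warnings about waiting for the remote HornetQ server

# On server2, once server1 is up:
sudo service paxata-server start                      # the HornetQ warnings on server1 should now stop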
C. Core Server version upgrade procedure:
0. Stop the paxata-server service on both server1 and server2. Then run "yum localupdate" to bring both servers to the target Core Server version.
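A sketch of step 0 is shown below; the RPM file name is illustrative only, so substitute the actual package for your target Core Server version.

# Run on both server1 and server2:
sudo service paxata-server stop
sudo yum localupdate paxata-server-<target-version>.rpm   # illustrative file name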
1. Trigger the MongoDB schema update from only one server: use only Server #1 to start the service for the first time after the yum update.
Comment out the px.messaging lines in px.properties on Server #1 (a sed sketch follows this listing):
px.clientId=server1
px.pipeline.url=http://server1:8090
px.library.url=http://server2:9080/library
#px.messaging.mode=Embedded
#px.messaging.remote.hosts=server2
#px.messaging.local.host=server1
#px.messaging.port=5445
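One way to comment out those lines in place is with sed, as sketched below; the config path is an assumption, so adjust it to your installation. The -i.bak option keeps a backup copy you can restore in step 4.

sudo sed -i.bak -E 's/^(px\.messaging)/#\1/' /usr/local/paxata/server/config/px.properties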
2. Restart the paxata-server service on server1 only. Do not touch server2.
3. After server1 starts up successfully, test the UI login.
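Before testing an interactive login, you can confirm the web frontend is answering at all with a quick curl, as sketched below; port 8080 is an assumption (see jetty.properties).

curl -s -o /dev/null -w 'HTTP %{http_code}\n' http://server1:8080/   # expect a 200 or a redirect to the login page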
4. Restore the px.properties settings on server1 as before (uncomment the px.messaging lines).
5. Restart server1 and then server2, following the normal startup procedure in section B.
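If you used the sed sketch above, steps 4-5 can look like the following (paths and service name assumed as before).

# On server1: restore the original px.properties saved by sed -i.bak, then restart.
sudo cp /usr/local/paxata/server/config/px.properties.bak /usr/local/paxata/server/config/px.properties
sudo service paxata-server restart

# Then on server2:
sudo service paxata-server restart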
D. Failover procedure:
Server #1 goes down: the load balancer redirects business traffic to Server #2.
Update px.properties in Server #2 as follows:
NEW px.properties in Server #2:
px.clientId=server2
px.pipeline.url=http://server2:8090
px.library.url=http://server2:9080/library
#px.messaging.mode=Embedded
#px.messaging.remote.hosts=server1
#px.messaging.local.host=server2
#px.messaging.port=5445
Make sure the backup/passive pipeline service on Server #2 is running (px.pipeline.url now points to it), then restart the paxata-server service on Server #2 to make the change effective.
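The edits above can be applied on Server #2 roughly as follows; this is a sketch that reproduces the new listing shown above, and the config path is an assumption.

CONF=/usr/local/paxata/server/config/px.properties
sudo sed -i.failover \
    -e 's|^px\.pipeline\.url=.*|px.pipeline.url=http://server2:8090|' \
    -e 's/^px\.messaging/#px.messaging/' "$CONF"
sudo service paxata-server restart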
Server #2 goes down: no load balancer update needed.
Update px.properties in Server #1 as follows:
NEW px.properties in Server #1:
px.clientId=server1
px.pipeline.url=http://server1:8090
px.library.url=http://server1:9080/library
#px.messaging.mode=Embedded
#px.messaging.remote.hosts=server2
#px.messaging.local.host=server1
#px.messaging.port=5445
Make sure the service that serves the data library (port 9080) is running on Server #1, since px.library.url now points there, and restart the paxata-server service on Server #1 to make the change effective.
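The corresponding edits on Server #1 can be applied roughly as follows; again a sketch with an assumed config path, reproducing the new listing shown above.

CONF=/usr/local/paxata/server/config/px.properties
sudo sed -i.failover \
    -e 's|^px\.library\.url=.*|px.library.url=http://server1:9080/library|' \
    -e 's/^px\.messaging/#px.messaging/' "$CONF"
sudo service paxata-server restart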