Configure High Availability
The Incorta Direct Data Platform stores data on disk in a compressed format, with compression ratios typically between 1x and 10x (around 2x for EBS data). A High Availability Incorta cluster configuration requires double the disk space.
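As a rough illustration of that sizing rule, the sketch below estimates the on-disk footprint for a given source volume and compression ratio and doubles it for an HA deployment. The source volume and compression ratio used here are illustrative assumptions, not Incorta sizing guidance.

```python
# Rough disk-sizing sketch for an HA Incorta cluster.
# The source volume and compression ratio below are illustrative assumptions.

def required_disk_gb(source_gb: float, compression_ratio: float, ha: bool = True) -> float:
    """Estimate on-disk footprint; HA doubles it because data is kept on two nodes."""
    on_disk = source_gb / compression_ratio
    return on_disk * 2 if ha else on_disk

# Example: 1 TB of source data at an assumed ~2x compression ratio.
print(required_disk_gb(1024, compression_ratio=2.0))            # ~1024 GB with HA
print(required_disk_gb(1024, compression_ratio=2.0, ha=False))  # ~512 GB single node
```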
This document discusses High Availability and Disaster Recovery architectures at a high level.
Audience: Infrastructure and operations teams who are responsible for installing and maintaining Incorta.
Architecture Goals
- Keep the Primary system functioning in case of node failures
- Switch to the DR system in case of site failures
Architecture Considerations
The following architecture principles apply to the Incorta DR architecture:
- The DR site will be in a different data center than the Primary site.
- There will be at least one DR site for the Incorta solution.
- The Primary and DR sites can be in different time zones.
- Data located on the Primary site is replicated to the DR site asynchronously.
- Some manual steps are expected to switch to the DR servers.
Incorta HA Architecture
A typical Incorta High Availability architecture consists of:
- Incorta cluster with at least 2 nodes
- Zookeeper Ensemble with 3 nodes
- Database Cluster
- Shared Storage
- Spark Cluster (Optional)
The High Availability architecture deals with individual node failures and does not cover disasters where the whole site fails. The following figure illustrates the various components within a High Availability architecture in detail.
HA Architecture Details
A sample HA Architecture for the primary site may consist of the following:
- Incorta cluster with 2 nodes - Both nodes are kept in sync.
- Zookeeper cluster with 3 nodes - Zookeeper coordinates the Incorta and Spark nodes.
- Shared Storage - Stores the extracted data.
- DB Cluster - Stores key metadata.
- Spark cluster with 2 nodes - Spark is optional and is used for complex transformations.
One half of the cluster consists of Incorta Node-1, Spark Node-1, and Zookeeper Node-1 and resides on Server 1. The other half consists of Incorta Node-2, Spark Node-2, and Zookeeper Node-2 and resides on Server 2.
Since a Zookeeper ensemble requires at least three nodes, the third Zookeeper node can be placed on any small VM.
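A minimal sketch for verifying the ensemble is shown below. It assumes the standard ZooKeeper four-letter admin commands (`ruok`, `srvr`) are enabled on each node (newer ZooKeeper versions require them to be whitelisted via `4lw.commands.whitelist`); the host names and ports are placeholders.

```python
# Probe each ZooKeeper node with the "ruok"/"srvr" four-letter commands to
# confirm the three-node ensemble is up and has a leader plus followers.
import socket

ZK_NODES = ["zk-node-1:2181", "zk-node-2:2181", "zk-node-3:2181"]  # placeholder hosts

def four_letter_word(host: str, port: int, cmd: bytes) -> str:
    """Send a four-letter command and return the server's full response."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(cmd)
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

for node in ZK_NODES:
    host, port = node.split(":")
    alive = four_letter_word(host, int(port), b"ruok").strip() == "imok"
    # "srvr" output includes a "Mode: leader|follower" line in a healthy ensemble.
    mode = [line for line in four_letter_word(host, int(port), b"srvr").splitlines()
            if line.startswith("Mode:")]
    print(node, "alive" if alive else "DOWN", mode[0] if mode else "")
```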
The metadata database should also be highly available; it can be a MySQL cluster, for example.
If an individual node fails on either server, the corresponding nodes on the other server remain available to keep Incorta functioning.
Disaster Recovery Solution
There are various solutions to enable Disaster Recovery. The following architecture duplicates the primary site's High Availability architecture at a Disaster Recovery site.
High Level DR Architecture
The DR architecture involves replicating two key components from the Primary site to the DR site:
- Incorta Tenant data stored in Parquet and Snapshot locations
- Incorta Metadata database (MySQL)
Replication
Replication of the metadata database and the contents of shared storage from the primary site to the disaster recovery site can be configured.
The metadata database is a lightweight database that holds dictionary information related to Incorta; it can be MySQL.
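A minimal sketch of monitoring that replication is shown below, assuming standard MySQL asynchronous replication to the DR site and the PyMySQL driver; the host name, credentials, and the choice of `SHOW REPLICA STATUS` (MySQL 8.0.22+; older versions use `SHOW SLAVE STATUS`) are assumptions for illustration.

```python
# Check that the DR-site MySQL replica of the Incorta metadata database is
# applying changes from the primary. Host and credentials are placeholders.
import pymysql

conn = pymysql.connect(host="dr-metadata-db", user="monitor", password="***",
                       cursorclass=pymysql.cursors.DictCursor)
try:
    with conn.cursor() as cur:
        cur.execute("SHOW REPLICA STATUS")   # "SHOW SLAVE STATUS" before MySQL 8.0.22
        status = cur.fetchone() or {}
    io_ok = status.get("Replica_IO_Running") == "Yes"
    sql_ok = status.get("Replica_SQL_Running") == "Yes"
    lag = status.get("Seconds_Behind_Source")
    print(f"IO thread: {io_ok}, SQL thread: {sql_ok}, lag: {lag}s")
finally:
    conn.close()
```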
Shared storage is used to store the actual user data extracted from source systems.
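Shared storage replication can be handled at the storage or filesystem level; as one illustrative option, the sketch below mirrors the tenant data directory to the DR site with rsync over SSH. The paths and host name are placeholders, not Incorta defaults, and the sync would typically be scheduled to run periodically.

```python
# One-way sync of the shared-storage tenant directory (parquet and snapshot
# data) from the primary site to the DR site using rsync over SSH.
import subprocess

SRC = "/incorta/shared/Tenants/"                   # placeholder primary-site path
DST = "dr-storage-host:/incorta/shared/Tenants/"   # placeholder DR-site target

subprocess.run(
    ["rsync", "-az", "--delete", SRC, DST],  # -a archive, -z compress, --delete mirror removals
    check=True,
)
```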
In case of a total primary site failure, Incorta on the Disaster Recovery site should be started. Since the actual data and the metadata are replicated from the primary site to the DR site, Incorta will be up and running. If the replication process is near real time, there will be no loss of data.
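As an illustrative aid for those manual switchover steps, the sketch below runs a few pre-flight checks before starting Incorta on the DR site. The host names, ports, and paths are placeholders, and the actual switchover procedure is deployment specific.

```python
# Simple pre-flight checks before starting Incorta on the DR site: verify the
# replicated metadata database and ZooKeeper answer connections and that the
# replicated tenant data exists on DR shared storage.
import os
import socket

def tcp_open(host: str, port: int) -> bool:
    try:
        with socket.create_connection((host, port), timeout=5):
            return True
    except OSError:
        return False

checks = {
    "DR metadata database reachable": tcp_open("dr-metadata-db", 3306),
    "DR ZooKeeper reachable": tcp_open("dr-zk-node-1", 2181),
    "Tenant data present on DR storage": os.path.isdir("/incorta/shared/Tenants"),
}

for name, ok in checks.items():
    print(f"{'OK ' if ok else 'FAIL'} {name}")
```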