Guides → Upgrade from 5.1.x or 5.2.x to 6.0
Prior to upgrade, please review the Release Model. The Release Model describes the release statuses: Preview and Generally Available. Refer to the Release Model to identify the release version and status that is most suitable for upgrading your Sandbox, Developer, User Acceptance Testing (UAT), or Production environment.
Upgrade Considerations
All hosts must have adequate disk space for shared storage to support the migration to a new shared storage directory structure. In addition, if your Incorta cluster has a separate Apache Zookeeper cluster, you must also upgrade Apache Zookeeper.
You must have a valid license key before you start the upgrade process. To obtain your license key, please contact Incorta Support.
With the introduction of the new-generation loader, Incorta automatically detects inter-object dependencies within a load plan during the Planning phase of a load job. The Loader Service uses these detected dependencies and the user-defined load order within the schema to create an execution plan for loading objects. However, using both the MVs' user-defined load order and the automatically detected dependencies may result in an execution plan with cyclic dependencies, leading to load job failures. To avoid such failures, delete the MVs' user-defined load order before upgrading to the 6.0 release.
Upgrade from Incorta 5.1.x or 5.2.x to 6.0
This guide details how to upgrade a standalone Incorta cluster. Upgrading your Incorta cluster to Release 6.0 requires the following team resources:
- a System Administrator with root access to the host or hosts running Incorta Nodes, the host running the Cluster Management Console (CMC), and the host or hosts running Apache Spark
- a CMC Administrator
- a Database Administrator
- a Super User that can access each tenant in the Incorta environment
- an Incorta Developer to resolve identified issues with formula expressions, schema alias, joins between tables, and dependencies between objects such as dashboards and business schemas
It also requires time as these general timelines for various procedures and processes indicate:
Stage | Estimated Time |
---|---|
Prepare for Upgrade Readiness | 15 minutes to 3 hours |
Achieve Upgrade Readiness | 2 hours to 3 days |
Stop the Incorta cluster | 5 minutes to 30 minutes |
Create backups | 15 minutes to 3 hours |
Upgrade Apache Zookeeper | 15 minutes to 1 hour |
Upgrade the Incorta cluster | 15 minutes to 3 hours |
Start the Incorta CMC | 5 minutes to 5 hours |
Upgrade the Incorta Metadata database | 5 minutes to 3 hours |
Start the Incorta cluster | 5 minutes to 5 hours |
Verify the successful upgrade | 15 minutes to 1 day |
If you have a cloud file system, make sure to run the update core site script after the upgrade and before starting your cluster, and do not paste the `core-site.xml` file manually.
Prepare for Upgrade Readiness
Preparing for Upgrade Readiness requires:
- running the Spark 3.3 validation tool
- a System Administrator with root access to the host or hosts running Incorta Nodes as well as the host running the Cluster Management Console (CMC)
- a CMC Administrator
- a Database Administrator
The estimated time to complete the following is from 15 minutes to 3 hours:
- Back up the `core-site.xml` file if you have a cloud file system
- Pause all scheduled jobs
- Export all tenants
- Add a Create View database grant
Spark 3.3 Upgrade Readiness
Incorta upgraded to Spark 3.3 in the 6.0 release. Before you start upgrading to this release, make sure to run the Spark compatibility check tool `spark-upgrade-issues-detector.sh` located in the 6.0 extracted folder, and to fulfill the Spark upgrade prerequisites.
To run the tool, use the following command:
sh <EXTRACTION_DIR>/spark-upgrade-issues-detector.sh
The tool will create two files:
- `potential-upgrade-issues.csv`, which lists the potential compatibility issues in your environment.
- `spark-issues-references.xlsx`, which contains examples and references for the compatibility concerns and how you can fix them.
Make sure to resolve the compatibility concerns reported in the `.csv` file before you proceed with the upgrade.
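If you script this readiness check, it can be reduced to a small guard. The function below is only an illustrative sketch: its name is ours, and it assumes the `.csv` report has a single header row, which you should confirm against your generated file.

```shell
# Hypothetical helper: succeeds only when the issues report exists and contains
# nothing beyond its header row. The one-header-row layout is an assumption;
# inspect your generated .csv before relying on it.
upgrade_ready() {
  report="$1"
  [ -f "$report" ] || return 1            # the detector has not been run yet
  [ "$(wc -l < "$report")" -le 1 ]        # only the header row remains
}

# Example: upgrade_ready potential-upgrade-issues.csv && echo "No issues left"
```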
Spark Upgrade Prerequisites
- Customers who are using external Spark must upgrade it to 3.3.
- Customers who are upgrading from previous Incorta releases and are using ADLS storage must create a copy of the `IncortaNode/hadoop/etc/hadoop/core-site.xml` file under `IncortaNode/spark/conf/`.
Important notes
- While Spark 3.3 supports Python versions from 3.7 to 3.10, Incorta supports Python 3.8 and 3.9 only.
If you have multiple Python binaries on the same machine, set the following attributes:
- Set `python.path` to the Python 3 path in the `IncortaAnalytics/IncortaNode/node.properties` file.
- Set `export PYSPARK_PYTHON=/usr/bin/python3` in `IncortaNode/notebooks/services/<SERVICE-ID>/conf/zeppelin-env.sh`.
- PySpark requires Pandas 1.0.5 or later.
- PySpark requires PyArrow 0.12.1 or later.
Contact Incorta Support if you require further assistance.
The Spark workers, SQLi, and MVs might fail with `java.lang.UnsatisfiedLinkError`. To solve this issue, do the following:
- Create a new `tmp` directory with read, write, and execute permissions, or use the `<InstallationPath>/IncortaNode/spark/tmp` directory.
- Add the following configurations to the `<InstallationPath>/IncortaNode/spark/conf/spark-env.sh` file:
SPARK_WORKER_OPTS="-Djava.io.tmpdir=<DirWithPermissions>"
SPARK_LOCAL_DIRS=<DirWithPermissions>
- In the CMC > Server Configurations > Spark Integration > Extra options for Materialized views and notebooks, add the following options:
spark.driver.extraJavaOptions=-Djava.io.tmpdir=<DirWithPermissions>;spark.executor.extraJavaOptions=-Djava.io.tmpdir=<DirWithPermissions>
- In the CMC > Server Configurations > Spark Integration > SQL App Extra options, add the following options:
spark.driver.extraJavaOptions=-Djava.io.tmpdir=<DirWithPermissions> -Dorg.xerial.snappy.tempdir=<DirWithPermissions>;spark.executor.extraJavaOptions=-Djava.io.tmpdir=<DirWithPermissions> -Dorg.xerial.snappy.tempdir=<DirWithPermissions>

Make sure to replace `<DirWithPermissions>` with the `tmp` folder that Spark uses, whether a directory you create with the required permissions or `<InstallationPath>/IncortaNode/spark/tmp`.
For more details, refer to Troubleshoot Spark > Spark applications failing.
Pause all scheduled jobs in the CMC
Enable this setting to pause active scheduled schema loads, dashboards, and data alerts. This is helpful when importing or exporting an existing tenant. Here are the steps to enable this option as default tenant configuration:
- In the Navigation bar, select Clusters.
- In the cluster list, select a Cluster name.
- In the canvas tabs, select Cluster Configurations.
- In the panel tabs, select Default Tenant Configurations.
- In the left pane, select Data Loading.
- Enable the Pause Scheduled Jobs setting.
- Select Save.
Export all tenants with the Tenant Management Tool
A System Administrator with root access to the host running the Cluster Management Console (CMC) is able to run the Tenant Management Tool (TMT). Here are the steps:
- Secure shell in to the CMC host.
- As the incorta user, navigate to the installation path of the TMT. The default installation path for the TMT is:
<CMC_INSTALLATION_PATH>/IncortaAnalytics/cmc/tmt
- Export ALL tenants
./exportAlltenants.sh -c <CLUSTER_NAME> -f False /tmp/<TENANT_EXPORT>.zip
Add a Create View database grant
A Database Administrator with root access to the MySQL or Oracle database server that runs the Incorta Metadata database is able to add the Create View database grant. The estimated time to complete the following is about 5 minutes.
MySQL
Here are the steps for MySQL:
- Sign in to the MySQL Incorta metadata database as the root user.
mysql -h0 -uroot -proot_password incorta_metadata
- `-h` = host, where `0` is a shorthand reference for `localhost`
- `-u` = user, where `root` is the user
- `-p` = password, where `root_password` is the password
- `incorta_metadata` is the database
- Verify the `incorta` database user for the `incorta_metadata` database.
SELECT User, Host FROM mysql.user WHERE user = 'incorta';
- Verify the current grants for all users.
SHOW GRANTS FOR 'incorta'@'localhost';
SHOW GRANTS FOR 'incorta'@'127.0.0.1';
SHOW GRANTS FOR 'incorta'@'192.168.128.101';
- If needed, add the `CREATE VIEW` grant to all `incorta` users.
GRANT CREATE VIEW ON `incorta_metadata`.* TO 'incorta'@'localhost';
GRANT CREATE VIEW ON `incorta_metadata`.* TO 'incorta'@'127.0.0.1';
GRANT CREATE VIEW ON `incorta_metadata`.* TO 'incorta'@'192.168.128.101';
Oracle
Supported starting with the 6.0.1 release.
To add grants for a user in an Oracle database, please refer to the Oracle Database SQL Language Reference.
Achieve Upgrade Readiness
Please review Concepts → Upgrade Readiness. Achieving Upgrade Readiness requires:
- a System Administrator with root access to the host or hosts running Incorta Nodes as well as the host running the Cluster Management Console (CMC)
- a CMC Administrator
- a Super User that can access each tenant in the Incorta environment
- an Incorta Developer to resolve identified issues with formula expressions
The estimated time to complete the following is from 2 hours to 3 days:
- Resolve alias issues with the Alias Sync Tool
- Resolve Severity-1 issues that the Inspector Tool identifies
Resolve alias issues with the Alias Sync Tool
Here are the resources required to run the Alias Sync Tool:
- A System Administrator with root access to the host running an Incorta Node.
To resolve issues with Alias tables, you must download the `alias_sync.py` file, secure copy it to the `IncortaNode/bin` directory, and then run the script for each tenant in your cluster.
To learn more, please review Tools → Alias Sync Tool.
Resolve Severity-1 issues that the Inspector Tool identifies
Here are the resources required to run the Inspector Tool:
- For a given tenant, a CMC Administrator enables the Inspector Tool Scheduler and schedules an Inspector Tools job. A CMC Administrator also downloads the Inspector Tool related schema, business schema, and dashboards files for all tenants.
- A Super User that can access each tenant in the Incorta cluster.
- An Incorta Developer to resolve the identified issues in the 1- Validation UseCases dashboard.
For a given tenant, the Inspector Tool checks the lineage references of Incorta metadata objects including tables, schemas, business schemas, business schema views, dashboards, and session variables. It also checks for inconsistencies and validation errors in joins, tables, views, formulas, and dashboards.
Prior to upgrading Incorta, you must enable and configure the Inspector Tool for all tenants. In addition, you must resolve all Severity-1 issues.
To learn more, please review Tools → Inspector Tool.
Stop the Incorta cluster
Here are the resources required to stop all the services in the Incorta cluster:
- a System Administrator with root access to the host or hosts running Incorta Nodes, the host running the Cluster Management Console (CMC), and the host or hosts running Apache Spark
The estimated time to stop the Incorta cluster and all related services is from 5 minutes to 30 minutes. Here are the steps involved in stopping the Incorta cluster:
- Stop the Notebook Add-On Service
- Stop the Analytics Service
- Stop the Loader Service
- Stop Apache Spark
- Stop the CMC
- Stop the Node Agent
- Stop Apache Zookeeper
Stop the Notebook Add-on Service
Your Incorta cluster may not have enabled and configured the Notebook Add-on Service for a given tenant. You enable the Notebook Add-on as an Incorta Labs feature.
In order to stop the Notebook Add-on Service, you need to know the name of the service. You can read the `services.index` file to find the name of the Notebook Add-on Service running on an Incorta Node that runs the Analytics Service.
cat <INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/notebooks/services/services.index
Once you know the name of the Notebook Add-on Service, then execute the following:
NOTEBOOK_ADD_ON=<SERVICE_NAME>
<INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/stopNotebook.sh ${NOTEBOOK_ADD_ON}
Stop the Analytics Service
In order to stop the Analytics Service, you need to know the name of the service. You can read the `services.index` file to find the names of the services running on an Incorta Node.
cat <INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/services/services.index
Once you know the name of the Analytics Service, you can then execute the following:
ANALYTICS_SERVICE=<SERVICE_NAME>
<INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/stopService.sh ${ANALYTICS_SERVICE}
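If you script these stop steps, the service name can be captured from `services.index` instead of being typed by hand. This is a sketch only: the one-name-per-line layout of `services.index` is an assumption to verify on your own cluster, and the sample content below is invented for illustration.

```shell
# Sketch only: assumes services.index lists one service name per line; verify
# the real format before scripting against it. Sample content stands in for
# the actual file.
SERVICES_INDEX=$(mktemp)
printf 'analyticsService-1\nloaderService-1\n' > "$SERVICES_INDEX"

# Capture the Analytics Service name, then hand it to the stop script.
ANALYTICS_SERVICE=$(grep -i analytics "$SERVICES_INDEX")
echo "Would run: stopService.sh ${ANALYTICS_SERVICE}"
```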
Stop the Loader Service
In order to stop the Loader Service, you need to know the name of the service. You can read the `services.index` file to find the names of the services running on an Incorta Node.
cat <INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/services/services.index
Once you know the name of the Loader Service, you can then execute the following:
LOADER_SERVICE=<SERVICE_NAME>
<INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/stopService.sh ${LOADER_SERVICE}
Stop Apache Spark
You can stop Apache Spark using the `stopSpark.sh` shell script:
<INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/stopSpark.sh
Stop the CMC
The default directory for the CMC is `~/IncortaAnalytics/cmc`. Stop the CMC with the `stop-cmc.sh` shell script:
<CMC_INSTALLATION_PATH>/cmc/stop-cmc.sh
Stop the Node Agent
For each Incorta Node, run the following:
<INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/nodeAgent/agent.sh stop
Stop Apache Zookeeper
To stop Apache Zookeeper, run the following:
<INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/stop-zookeeper.sh
Create backups
Here are the resources required to create various backups:
- A Database Administrator with root access to the MySQL database server that runs the Incorta Metadata database.
- A System Administrator with root access to the host or hosts running Incorta Nodes, the Cluster Management Console (CMC), and Apache Spark.
The estimated time to complete the following is from 30 minutes to 3 hours:
- Create a backup of the Incorta Metadata database
- Create a backup of the IncortaAnalytics directory
- Create a backup of the Apache Spark configuration files
Create a backup of the Incorta Metadata database
Here are the resources required to create a backup of the Incorta Metadata database:
- A Database Administrator with root access to the MySQL database server that runs the Incorta Metadata database.
MySQL
To create a backup of the Incorta metadata database, use the `mysqldump` command-line utility:
mysqldump -u [user] -p [database_name] > [filename].sql
Example
Here is an example with the MySQL user `root` and the password `incorta_root`:
mysqldump -uroot -pincorta_root incorta_metadata > /tmp/incorta_metadata.sql
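A hedged variation on the example above: building a timestamped file name keeps repeated backup runs from overwriting each other. The path and credentials are placeholders, and the `mysqldump` invocation is printed rather than executed in this sketch.

```shell
# Sketch: compose a timestamped dump path so reruns never clobber an earlier
# backup. Replace the credentials and path with your own before running.
STAMP=$(date +%Y%m%d_%H%M%S)
DUMP_FILE="/tmp/incorta_metadata_${STAMP}.sql"
echo "mysqldump -uroot -pincorta_root incorta_metadata > ${DUMP_FILE}"
```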
Create a backup of the Incorta installation directory
To create a backup of the Incorta installation directory, use the following command:
zip -r IncortaAnalytics_Backup.zip <INCORTA_NODE_INSTALLATION_PATH>
Create a backup of the Apache Spark configuration files
Create a backup of the following Spark configuration files in the `$SPARK_HOME/conf` directory:
spark-defaults.conf
spark-env.sh
SPARK_HOME=<INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/spark
cd $SPARK_HOME/conf
zip -r Spark_Conf_Backup.zip spark-defaults.conf spark-env.sh
Upgrade Apache Zookeeper
This release requires Apache Zookeeper v3.6.1 to support SSL. To enable SSL for Zookeeper, please review Security → Enable Zookeeper SSL.
If you are using an external version of Zookeeper that is not bundled with Incorta, you must upgrade your Zookeeper instance manually with the following steps:
- Replace the existing `zookeeper` folder with the one from `<INCORTA_INSTALLATION_PATH>/IncortaNode`, with the exception of the `zookeeper/conf/zoo.cfg` file.
- Add the `admin.enableServer=false` property to `zoo.cfg`.
- Delete any files inside the `<INCORTA_INSTALLATION_PATH>/IncortaNode/zookeeper_data` folder.
- Restart Zookeeper.
If you have multiple nodes, repeat the above steps for each Zookeeper node.
The Zookeeper upgrade to v3.6.1 is backward compatible with all Incorta versions.
Upgrade the Incorta cluster
Here are the resources required to upgrade the Incorta cluster:
- a System Administrator with root access to the host or hosts running Incorta Nodes, the host running the Cluster Management Console (CMC), and the host or hosts running Apache Spark
- Start only the Incorta CMC after the upgrade; do not start the other Incorta cluster services.
To begin, run the `incorta-installer.jar` file from the shell:
java -jar incorta-installer.jar -i console
In the Incorta Installer console, enter these values for a standalone (Typical) upgrade:
Welcome : Enter
License Agreement/Copyright : Enter
License Agreement/Copyright : Y
Installation Type : 2- Upgrade
Installation Set : 1- Typical
Choose Installation Folder : Enter- Default
Installation Status : Enter
Start CMC : 3- Finish without starting CMC
Kill unwanted processes
After upgrading, you will want to kill any processes related to Incorta as you will start Incorta manually. To kill any unwanted processes, run the following commands:
sudo kill -9 $(ps -aux | grep '[n]odeAgent.jar' | awk '{print $2}')
sudo kill -9 $(ps -aux | grep '[d]erby' | awk '{print $2}')
sudo kill -9 $(ps -aux | grep '[e]xportServer' | awk '{print $2}')
sudo kill -9 $(ps -aux | grep '[z]ookeeper' | awk '{print $2}')
sudo kill -9 $(ps -aux | grep '[c]mc' | awk '{print $2}')
sudo kill -9 $(ps -aux | grep '[s]park' | awk '{print $2}')
sudo kill -9 $(ps -aux | grep '[h]adoop' | awk '{print $2}')
sudo kill -9 $(ps -aux | grep '[p]ostgres' | awk '{print $2}')
sudo kill -9 $(ps -aux | grep '[I]ncortaNode' | awk '{print $2}')
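The kill commands above all follow one pattern, so they can be generated in a loop; the snippet below is a sketch of the same commands, not an Incorta-provided script. The bracketed first character (for example `[s]park`) keeps `grep` from matching its own process entry.

```shell
# Build the self-excluding grep pattern for a process name: spark -> [s]park.
kill_pattern() { printf '[%.1s]%s' "$1" "${1#?}"; }

for proc in nodeAgent.jar derby exportServer zookeeper cmc spark hadoop postgres IncortaNode; do
  # Print each kill command for review; execute them only after inspecting.
  echo "sudo kill -9 \$(ps -aux | grep '$(kill_pattern "$proc")' | awk '{print \$2}')"
done
```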
Upgrade an external Apache Spark environment
If the Incorta cluster is using an external Apache Spark environment, you must also upgrade the Apache Spark environment by following these steps:
- Zip the bundled `spark` directory under IncortaNode:
zip -r Incorta-Bundled-Spark.zip <INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/spark
- Zip the bundled `hadoop` directory under IncortaNode:
zip -r Incorta-Bundled-Hadoop.zip <INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/hadoop
- Copy `Incorta-Bundled-Spark.zip` and `Incorta-Bundled-Hadoop.zip` to the external Apache Spark environment.
- In the external Apache Spark environment, remove the `spark` directory.
- Unzip `Incorta-Bundled-Spark.zip` to recreate the Spark environment.
- Unzip `Incorta-Bundled-Hadoop.zip` to recreate the Hadoop environment.
Review the Upgrade logs
Check to see if there are any critical errors with the upgrade in the following log files and directories:
- Installer log
cat /tmp/DebuggingLog.log
- Incorta Node upgrade logs
cd <INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/logs/
- CMC logs
ls -l <CMC>/logs/
Configure SSO
If you already have an SSO configuration and are upgrading from Incorta version 5.1.2 or above, your SSO configurations will automatically carry over in the CMC during the upgrade. As a result, you do not need to perform the following SSO configuration steps.
If you are upgrading from Incorta version 5.1.1 or earlier, you will need to perform the following SSO configurations as well as the configurations in Configure SSO using CMC.
- Paste the contents of the SSO properties file that you previously configured in Tenant Configuration > Security > Provider configurations.
Note: This property appears after changing the Authentication Type property to SSO.
- Configure the other Security properties as required.
- Ensure that the SSO `<Valve ../>` tag is commented out in the Analytics Service `server.xml` file:
<!-- <Valve .. /> -->
SSO Valve tag example:
<Valve className="com.incorta.sso.valves.OneLoginValve" confFilesMap="Tenant_Name=SSO_properties_file_absolute_path" LoggingEnabled="true" />
There might be other Valve tags in the file, but you need to comment out the SSO Valve tag only.
Run the update core site script
This step is applicable only if you have a cloud file system.
- Place the backup `core-site.xml` file under `cmc/bin` and `IncortaNode/bin`.
- Run the following script files using the python command (for example, `python3 update_core_site_incortaNode.py`):
  - `<INCORTA_HOME>/cmc/bin/update_core_site_cmc.py`
  - `<INCORTA_HOME>/IncortaNode/bin/update_core_site_incortaNode.py`
The `core-site.xml` file is automatically copied to the following locations:
Under `IncortaNode`:
- `INCORTA_HOME/IncortaNode/spark/conf/`
- `INCORTA_HOME/IncortaNode/runtime/lib/`
- `INCORTA_HOME/IncortaNode/runtime/webapps/incorta/WEB-INF/lib/`
- `INCORTA_HOME/IncortaNode/sqli/runtime/lib/`
Under `cmc`:
- `INCORTA_HOME/cmc/lib/`
- `INCORTA_HOME/cmc/tmt/lib/`
- `INCORTA_HOME/cmc/inspector/`
Start the Incorta cluster
Here are the resources required to start all the services in the Incorta cluster:
- a System Administrator with root access to the host or hosts running Incorta Nodes, the host running the Cluster Management Console (CMC), and the host or hosts running Apache Spark
The estimated time to start the Incorta cluster and all related services is from 5 minutes to 5 hours. Depending on schema data size and various tenant configurations, it may take the Incorta Analytics Service several hours to load schemas into memory.
After performing an upgrade, if you use a Single Sign-On (SSO) provider to secure access to the Incorta Direct Data Platform™, you need to manually remove one of the following jar files from the `<INCORTA_NODE_INSTALLATION_PATH>/runtime/lib` directory, depending on your SSO provider, before starting the Analytics Service.
Here are the steps to start the Incorta cluster:
- Start the CMC
- Start Apache Zookeeper
- Start Apache Spark
- Start the Node Agent
- Start the Loader Service
- Start the Analytics Service
- Start the Notebook Add-On Service
Start the CMC
The default directory for the CMC is `~/IncortaAnalytics/cmc`. Start the CMC with the `start-cmc.sh` shell script:
<CMC_INSTALLATION_PATH>/cmc/start-cmc.sh
Start Apache Zookeeper
To start Apache Zookeeper, run the following:
<INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/start-zookeeper.sh
Start Apache Spark
You can start Apache Spark using the `startSpark.sh` shell script:
<INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/startSpark.sh
Start the Node Agent
For each Incorta Node, run the following to start the node agent:
<INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/nodeAgent/agent.sh start
Start the Loader Service
In order to start the Loader Service, you need to know the name of the service. You can read the `services.index` file to find the names of the services running on an Incorta Node.
cat <INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/services/services.index
Once you know the name of the Loader Service, you can then execute the following:
LOADER_SERVICE=<SERVICE_NAME>
<INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/startService.sh ${LOADER_SERVICE}
Start the Analytics Service
In order to start the Analytics Service, you need to know the name of the service. You can read the `services.index` file to find the names of the services running on an Incorta Node.
cat <INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/services/services.index
Once you know the name of the Analytics Service, you can then execute the following:
ANALYTICS_SERVICE=<SERVICE_NAME>
<INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/startService.sh ${ANALYTICS_SERVICE}
Start the Notebook Add-on Service
Your Incorta cluster may not have enabled and configured the Notebook Add-on Service for a given tenant. You enable the Notebook Add-on as an Incorta Labs feature.
In order to start the Notebook Add-on Service, you need to know the name of the service. You can read the `services.index` file to find the name of the Notebook Add-on Service running on an Incorta Node that runs the Analytics Service.
cat <INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/notebooks/services/services.index
Once you know the name of the Notebook Add-on Service, then execute the following:
NOTEBOOK_ADD_ON=<SERVICE_NAME>
<INCORTA_NODE_INSTALLATION_PATH>/IncortaNode/startNotebook.sh ${NOTEBOOK_ADD_ON}
Upgrade the Incorta metadata database
A CMC Administrator is able to upgrade the Incorta metadata database. Depending on the number of tenants and schemas in your Incorta cluster, the process can take between 5 minutes and 3 hours.
To sign in to the Cluster Management Console (CMC), visit your CMC host at one of the following:
http://<Public_IP>:6060/cmc
http://<Public_DNS>:6060/cmc
http://<Private_IP>:6060/cmc
http://<Private_DNS>:6060/cmc
The default port for the CMC is 6060. Sign in to the CMC using your CMC administrator username and password.
To upgrade the Cluster Metadata database, follow these steps:
- In the Navigation bar, select Clusters.
- For each cluster name in the Cluster list, in the Actions column, select Upgrade Cluster Metadata.
A dialog prompts you to restart the Incorta services. In the dialog, select OK.
What to expect after upgrading to 6.0
If you are upgrading from 5.1.x to 6.0:
- A new directory structure where the `source` and `ddm` directories replace the old `parquet` and `snapshots` directories, respectively.
- A new setting in the Cluster Configurations to set the Cleanup job time interval.
- Enabling the Sync In Background feature will not cause data inconsistency issues as experienced in some cases before.
- Some files will no longer be available or used, such as `.zxt`, `.zxs`, and `load-time.log` files.
To learn more about the new implementation, review References → Data Consistency and Availability.
Verify the successful upgrade
Next, verify the successful upgrade. Here are the resources required:
- a System Administrator with root access to the host or hosts running Incorta Nodes, the host running the Cluster Management Console (CMC), and the host or hosts running Apache Spark
- a CMC Administrator
- a Super User that can access each tenant in the Incorta environment
- an Incorta Developer to resolve identified issues with formula expressions, schema alias, joins between tables, and dependencies between objects such as dashboards and business schemas
Unpause all scheduled jobs
A CMC Administrator is able to unpause scheduled jobs. Here are the steps to disable this option as a default tenant configuration:
- In the Navigation bar, select Clusters.
- In the cluster list, select a Cluster name.
- In the canvas tabs, select Cluster Configurations.
- In the panel tabs, select Default Tenant Configurations.
- In the left pane, select Data Loading.
- Disable the Pause Scheduled Jobs setting.
- Select Save.
Review and Monitor scheduled jobs
As the tenant Super User, sign in to each tenant and review the scheduled jobs.