Guides → Upgrade to 2024.x On-Premises Releases

Prior to upgrade, please review the Release Model. The Release Model describes the various release statuses: Preview and Generally Available. Refer to the Release Model to identify the release version and status that is most suitable for upgrading your Sandbox, Developer, User Acceptance Testing (UAT), or Production environment.

Upgrade Considerations

Important

If you are upgrading from an earlier release, make sure to also review the upgrade considerations of the releases that follow your current release.

Considerations for MySQL 8

Before upgrading clusters that use a MySQL 8 metadata database from a 6.0.x or earlier release to 2024.1.x or 2024.7.1, execute the following against the Incorta metadata database:

ALTER TABLE `NOTIFICATION` MODIFY COLUMN `EXPIRATION_DATE` TIMESTAMP NULL DEFAULT NULL;
UPDATE `NOTIFICATION` SET `EXPIRATION_DATE` = NULL WHERE CAST(`EXPIRATION_DATE` AS CHAR(19)) = '0000-00-00 00:00:00';
COMMIT;
Note

This issue has been resolved starting with the 2024.7.2 release.

Spark 3.x

Spark 3.x requires a tmp directory with read, write, and execute permissions. You must also specify this directory in the CMC and in the spark-env.sh file.

  1. Create a new tmp directory with the required permissions, or use the <InstallationPath>/IncortaNode/spark/tmp directory (see the shell sketch after the Important note below).
  2. Add the following configurations to the <InstallationPath>/IncortaNode/spark/conf/spark-env.sh file:
    SPARK_WORKER_OPTS="-Djava.io.tmpdir=<DirWithPermissions>"
    SPARK_LOCAL_DIRS=<DirWithPermissions>
  3. In the CMC > Server Configurations > Spark Integration > Extra options for Materialized views and notebooks, add the following options:
    spark.driver.extraJavaOptions=-Djava.io.tmpdir=<DirWithPermissions>;spark.executor.extraJavaOptions=-Djava.io.tmpdir=<DirWithPermissions>
  4. In the CMC > Server Configurations > Spark Integration > SQL App extra options, add the following options:
    spark.driver.extraJavaOptions=-Djava.io.tmpdir=<DirWithPermissions> -Dorg.xerial.snappy.tempdir=<DirWithPermissions>;spark.executor.extraJavaOptions=-Djava.io.tmpdir=<DirWithPermissions> -Dorg.xerial.snappy.tempdir=<DirWithPermissions>
Important

Make sure to replace <DirWithPermissions> with the tmp directory that Spark uses, whether it is a directory you created with the required permissions or <InstallationPath>/IncortaNode/spark/tmp.
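The following is a minimal shell sketch of steps 1 and 2, assuming you create a dedicated directory at /opt/incorta/spark-tmp on a Linux host; adjust the path, permissions, and installation path to match your environment and security policy.

# Create a dedicated tmp directory (assumed location) with read, write, and execute permissions
mkdir -p /opt/incorta/spark-tmp
chmod 777 /opt/incorta/spark-tmp   # tighten ownership and permissions as your policy requires
# Append the required variables to spark-env.sh (replace <InstallationPath> with your actual path)
cat >> <InstallationPath>/IncortaNode/spark/conf/spark-env.sh <<'EOF'
SPARK_WORKER_OPTS="-Djava.io.tmpdir=/opt/incorta/spark-tmp"
SPARK_LOCAL_DIRS=/opt/incorta/spark-tmp
EOF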

Next-generation loader

With the introduction of the next-generation loader, Incorta automatically detects inter-object dependencies within a load plan during the Planning phase of a load job. The Loader Service uses these detected dependencies, together with the user-defined load order within the schema, to create an execution plan for loading objects. However, combining the MVs' user-defined load order with automatically detected dependencies may result in an execution plan with cyclic dependencies, leading to load job failures. To avoid such failures, it is recommended to delete the MVs' user-defined load order before upgrading from a release before 6.0.

Connectors Marketplace

Starting with the 2024.1.3 On-Premises release, Incorta has introduced the Connectors Marketplace, allowing administrators to install, upgrade, and downgrade connectors independently of any Incorta release.

  • After upgrading to 2024.1.3, the connectors’ files must be placed under the /marketplace/connectors/ directory. If this directory does not exist, create it under the system tenant path, which may vary according to the Incorta installation. Afterward, unzip the ext_connectors.zip file included in the Incorta installation package and copy all the connector folders from the unzipped directory to /marketplace/connectors/ (see the shell sketch after this list).
  • Incorta moved the custom CData connectors from <IncortaNode>/extensions/connectors/customCData/ to <IncortaNode>/extensions/connectors/shared-libs/.
  • For custom SQL connectors, you must move your custom SQL JDBC driver jars from <IncortaNode>/runtime/lib/ to <IncortaNode>/extensions/connectors/shared-libs/. If you use a Data Agent, you must also copy these jars to incorta.dataagent/extensions/connectors/shared-libs/ on the Data Agent host.
  • If you need additional jars for the Oracle connector to support advanced options, such as XML, add or move the necessary libraries from <IncortaNode>/runtime/lib/ to <IncortaNode>/extensions/connectors/shared-libs/sql-oracle/.
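The following shell sketch illustrates the marketplace setup and the JDBC driver move described above; <SystemTenantPath>, <PackagePath>, <IncortaNode>, and the driver jar name are placeholders you must replace with the values for your installation, and you may need to adjust the copy step if the archive extracts into a top-level folder.

# Create the marketplace connectors directory under the system tenant path
mkdir -p <SystemTenantPath>/marketplace/connectors
# Unzip ext_connectors.zip from the Incorta installation package and copy the connector folders
unzip <PackagePath>/ext_connectors.zip -d /tmp/ext_connectors
cp -r /tmp/ext_connectors/* <SystemTenantPath>/marketplace/connectors/
# Move custom SQL JDBC driver jars to the shared-libs directory
mv <IncortaNode>/runtime/lib/<custom-sql-driver>.jar <IncortaNode>/extensions/connectors/shared-libs/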
Note

For Cloud installations, contact the Support team to help you move your custom SQL JDBC driver jars. If you use a Data Agent, you must also copy these jars to incorta.dataagent/extensions/connectors/shared-libs/ on the Data Agent host.

Metadata database check

Upgrading to this release requires upgrading the metadata database to support multi-group load plans and to migrate existing schema load jobs or load plans. Before upgrading to 2024.1.3, contact Incorta Support to perform a database check that inspects and detects any issues with your metadata database that might cause the metadata database upgrade to fail. If issues are detected, Incorta Support will run scripts against the metadata database to delete the records causing these issues.

Note

If the records that should be deleted are related to a multi-schema scheduled load job, the scheduled load job will be deleted altogether, and you will need to recreate the load plan manually if required.

SAP connector JCo files

Starting with 2024.1.3, the SAP ERP Connector no longer bundles the JCo 3 libraries. For more information, refer to License Terms for SAP Connectors. Accordingly, you must manually obtain the needed libraries from SAP Java Connector and place them under <INCORTA_HOME>/IncortaNode/extensions/connectors/shared-libs/sapjco3.
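As a minimal sketch, assuming you downloaded the SAP JCo 3 archive and extracted it to /tmp/sapjco3 on a Linux host, placing the libraries might look like the following; the exact file names depend on your platform and JCo version.

# Create the target directory if it does not exist
mkdir -p <INCORTA_HOME>/IncortaNode/extensions/connectors/shared-libs/sapjco3
# Copy the JCo jar and the platform-specific native library (Linux example)
cp /tmp/sapjco3/sapjco3.jar <INCORTA_HOME>/IncortaNode/extensions/connectors/shared-libs/sapjco3/
cp /tmp/sapjco3/libsapjco3.so <INCORTA_HOME>/IncortaNode/extensions/connectors/shared-libs/sapjco3/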

Public API v2

The response of the /schema/{schemaName}/list endpoint has been updated. Some parameters have been renamed, and additional parameters are now available to display error messages, the source column of runtime business view columns, and their data types. For more details, refer to Public API v2 → List Schema Objects.
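For reference, a call to the endpoint might look like the following curl sketch; the base path, tenant segment, and authentication header shown here are assumptions, so check the Public API v2 documentation for the exact URL pattern and token handling for your cluster.

# Hypothetical request; replace the host, tenant, schema name, and access token
curl -H "Authorization: Bearer <access-token>" \
  "https://myCluster.cloud.incorta.com/incorta/api/v2/<tenantName>/schema/<schemaName>/list"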

Columns with Integer data type

In previous releases, Incorta wrote Integer columns in Parquet as Long. Starting 2024.1.3, Incorta writes Integer columns in Parquet as Integer for all newly created tables. For previously created tables, Incorta converts Integer columns written as Long in Parquet files to Integer during full load jobs, while Incorta keeps these columns unchanged during incremental loads.

As a result, after performing a full load of a table with Integer columns, it is recommended that you perform a full load of its dependent schema objects to ensure data consistency.

To migrate all Parquet files to have Integer data without fully loading your objects, the administrator can turn on the Enable Parquet Migration at Staging option in the Cluster Management Console (CMC) > Server Configurations > Tuning and perform a load from staging for all your objects with Integer columns.

Notes:

  • Turning the Enable Parquet Migration at Staging option on adds a new step that runs on Spark to the staging load jobs. Ensure Spark has sufficient resources to migrate the required Parquet files.
  • The migration of Parquet files during staging load occurs only once per object.
  • Load tables before loading MVs that read from these tables.
  • After turning the Enable Parquet Migration at Staging option on, when you load an object fully, you must also load its dependent objects or load all objects from staging.
  • The new behavior might affect existing scripts referencing Integer or Long data, whether columns, values, or variables (see the sketch below for one way to check how a column is currently stored).
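If you need to verify how a column is currently stored, one option is to inspect the Parquet footer of a sample file for the table. This is a sketch assuming the parquet-tools utility (or an equivalent) is available on the host and that you have located a Parquet file for the table under your tenant’s shared storage; the path shown is illustrative. Columns written as Integer appear as int32, while columns still written as Long appear as int64.

# Inspect the schema of one Parquet file that belongs to the table (path is illustrative)
parquet-tools schema <TenantPath>/parquet/<SchemaName>/<TableName>/<part-file>.parquet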

Schema names

Starting with 2024.1.3, the names of newly created schemas and business schemas are not case-sensitive. Therefore, you cannot create a new schema or business schema that shares the same name with an existing schema or business schema, even if the letter case differs. To maintain backward compatibility, the names of existing schemas and business schemas are not affected.

Caching mechanism enhancements

The caching mechanism in Analyzer views for dashboards has been enhanced by caching all columns in the view to prevent query inconsistencies. To optimize performance and reduce off-heap memory usage, it is recommended to create views with only the essential columns used in your dashboards.

Like function fix

As a result of fixing an issue with the like() function, you must load all tables that use this function from staging if you are upgrading from releases before 6.0.2.

External Notebooks

If you use external Notebooks, you must force reinstall the Python library after upgrading from a release before 6.0.

SDK Component installation

For now, SDK Component installation is only supported on clusters that use a MySQL metadata database.

SQLi service consideration

Starting with 2024.1.4, the SQLi service must be started before creating or processing SQL views over non-optimized tables or Incorta-over-Incorta tables via the Spark port; otherwise, Incorta throws an error.

Single Logout for SAML2 SSO providers

Starting 2024.1.5, Incorta supports Single Logout (SLO) for SAML2 SSO identity providers, including Azure Active Directory (AD) and OneLogin. When users sign out from Incorta, they are automatically signed out from their SAML2 SSO identity providers.

The Single Logout URL on both Incorta and the identity provider must be set to Incorta’s login page URL, for example, https://myCluster.cloud.incorta.com/incorta/!myTenant/ or https://10.10.1.5:8080/incorta/!myTenant/.

In the case of OneLogin, set the SAML initiator option to Service Provider.

Update the SSO configurations on Incorta and add the following settings (an example follows this list):

  • onelogin.saml2.idp.single_logout_service.url: The logout URL provided in your SSO provider's metadata.xml
  • onelogin.saml2.sp.single_logout_service.url: Incorta’s login URL, for example, https://myCluster.cloud.incorta.com/incorta/!myTenant/ or https://10.10.1.5:8080/incorta/!myTenant/
  • onelogin.saml2.sp.single_logout_service.response.url: Incorta’s login URL
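For illustration, the three settings might look like the following once filled in; the IdP logout URL is a placeholder that you take from your provider’s metadata.xml, and the service provider URLs use the example login URL above.

onelogin.saml2.idp.single_logout_service.url = https://<your-idp>/saml2/logout
onelogin.saml2.sp.single_logout_service.url = https://myCluster.cloud.incorta.com/incorta/!myTenant/
onelogin.saml2.sp.single_logout_service.response.url = https://myCluster.cloud.incorta.com/incorta/!myTenant/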

Important: Considerations for the MySQL Driver

Starting with the 2024.1.3 release, the MySQL driver is no longer included in the Incorta installation package or the Data Agent package. You must provide your own MySQL driver.

Refer to the guides detailing how to upgrade to the 2024.1.3 On-Premises (2024.1.3 OP) release.

Upgrades to Later 2024.x On-Premises Releases

For upgrades to later 2024.x On-Premises releases, such as 2024.1.4 and 2024.7.x, follow the same steps as the upgrade to 2024.1.3. However, there are additional considerations for upgrading to 2024.7.x:

  • JDK 8u144 is no longer supported, and you must upgrade to JDK 8u171 or later.
  • Starting this release, Incorta uses Spark 3.4.1. All Incorta applications, including the Advanced SQL Interface, will use one unified Spark instance (<incorta_home>/IncortaNode/spark/) to handle the different requests. For more details about the differences between the Spark versions, refer to the 3.4.1 Migration Guide.
    • As a result of using a unified Spark instance, you must consider the following when upgrading On-Premises clusters that have the Advanced SQLi previously enabled and configured:
      • Take into consideration the resources required by the Advanced SQLi when configuring the Spark worker resources in the CMC.
      • After the upgrade, make sure that the CMC is started and running before starting the services related to the Advanced SQLi, such as the Advanced SQL service and Kyuubi, so that the configuration values are updated to refer to the unified Spark instance instead of SparkX.
  • Before starting the upgrade process, do the following:
    • For each tenant in your cluster, pause the load plan schedulers: In the CMC > Tenant Configurations > Data Management, turn on the Pause Load Plans toggle.
    • Stop all load jobs, including in-progress and in-queue jobs: In the Schema Manager, select Load Jobs, and then select Stop for each job you want to stop.
  • During the upgrade, existing load filters for a table will be migrated as partial loading filters. After the upgrade, tables with load filters will require a load from staging.
  • Users who created business Notebooks in previous releases of Incorta will need the newly released Advanced Analyze User role for continued access in 2024.7.x.
  • Starting with 2024.7.x, the Incorta Loader accounts for null values in the following contexts, whether or not you enable Null Handling in the CMC:
    • Deduplication and PK-index calculations: The Loader Service will consider null values as undefined values. Thus, null values are no longer considered zeros or empty values.
    • Join calculations and join filters: The Loader Service will account for records with null values individually. During join calculations, a null value does not equal another null value, zero, or an empty value (see the generic SQL illustration at the end of this section). Additionally, the Loader Service will account for null values when evaluating the join filter.
  • Starting this release, Incorta has deprecated its custom Delta Lake reader and now uses only the Spark Delta Lake reader to read Delta Lake files.
  • After upgrading to 2024.7.2 and with the introduction of Incorta Premium, the following features will be impacted until Incorta Premium and its related options are enabled:
    • Notebook for business users
    • Spark SQL Views
    • Data Studio
    • Copilot and generative AI capabilities
  • Semantic search within the Business Schemas list won’t be supported in 2024.7.2, even after enabling Incorta Premium and Copilot.
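The null handling change for join calculations matches standard SQL join semantics. The following generic SQL sketch (not Incorta loader code; the orders and customers tables are hypothetical) illustrates that rows with null keys do not match each other and therefore drop out of an inner join.

-- Generic illustration: rows with NULL join keys do not match each other
SELECT o.id, c.id
FROM orders o
JOIN customers c ON o.customer_id = c.id;
-- An order with customer_id = NULL is not matched, even if a customer row has id = NULL,
-- because NULL = NULL evaluates to unknown rather than true.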