Upgrade to Release 6.0

Before upgrading, review the Release Model, which describes the release statuses: Preview and Generally Available. Use it to identify the release version and status most suitable for upgrading your Sandbox, Developer, User Acceptance Testing (UAT), or Production environment.

Important
  • All hosts must have adequate disk space for shared storage to support the migration to the new shared storage directory structure.

  • If your Incorta cluster has a separate Apache ZooKeeper cluster, you must also upgrade Apache ZooKeeper.

  • If you are using Oracle as your metadata database, you must migrate to MySQL. For more information, refer to Metadata Database Migration. Starting with the 6.0.1 release, you can use your Oracle metadata database.

  • If you are using external Notebooks, you must force reinstall the Python library after upgrading to 6.0 (see the sketch after this list).

  • With the introduction of the new-generation loader, Incorta automatically detects inter-object dependencies within a load plan during the Planning phase of a load job. The Loader Service combines these detected dependencies with the user-defined load order within the schema to create an execution plan for loading objects. However, combining the MVs' user-defined load order with automatically detected dependencies may produce an execution plan with cyclic dependencies, leading to load job failures. For example, if the user-defined order loads MV_A before MV_B while dependency detection finds that MV_A reads from MV_B (so MV_B must load first), the plan contains a cycle. To avoid such failures, delete the MVs' user-defined load order before upgrading to the 6.0 release.
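
For the external Notebooks item above, a forced reinstall typically looks like the following. This is a minimal sketch assuming the library is installed with pip; <incorta-python-library> is a placeholder for the package name your installation uses, and the command must run against the Python environment that your Notebook service uses.

    # Force a clean reinstall of the Incorta Python library after the upgrade.
    # <incorta-python-library> is a placeholder; substitute the actual package name
    # and invoke the pip that belongs to the Notebook service's Python environment.
    pip install --force-reinstall <incorta-python-library>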

Spark applications failing after upgrading from 5.1.x or 5.2.x

The Spark workers, SQLi, and MVs might fail with java.lang.UnsatisfiedLinkError. To resolve this issue, do the following:

  1. Create a new tmp directory with read, write, and execute permissions, or use the <InstallationPath>/IncortaNode/spark/tmp directory.
  2. Add the following configurations to the <InstallationPath>/IncortaNode/spark/conf/spark-env.sh file (see the sketch after these steps):
    SPARK_WORKER_OPTS="-Djava.io.tmpdir=<DirWithPermissions>"
    SPARK_LOCAL_DIRS=<DirWithPermissions>
  3. In the CMC > Server Configurations > Spark Integration > Extra options for Materialized views and notebooks, add the following options:
    spark.driver.extraJavaOptions=-Djava.io.tmpdir=<DirWithPermissions>;spark.executor.extraJavaOptions=-Djava.io.tmpdir=<DirWithPermissions>
  4. In the CMC > Server Configurations > Spark Integration > SQL App Extra options, add the following options:
    spark.driver.extraJavaOptions=-Djava.io.tmpdir=<DirWithPermissions> -Dorg.xerial.snappy.tempdir=<DirWithPermissions>;spark.executor.extraJavaOptions=-Djava.io.tmpdir=<DirWithPermissions> -Dorg.xerial.snappy.tempdir=<DirWithPermissions>

Make sure to replace <DirWithPermissions> with the tmp directory that Spark uses, whether a directory you create with the required permissions or <InstallationPath>/IncortaNode/spark/tmp.
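
As a concrete illustration of steps 1 and 2, the following sketch creates the tmp directory and appends both settings to spark-env.sh. It is a minimal sketch assuming a Linux host; the installation path shown is an example, so adjust it, along with the directory ownership and permissions, to match your environment.

    # Adjust this to your actual <InstallationPath>.
    INCORTA_HOME=/opt/incorta

    # Step 1: create a tmp directory with read, write, and execute permissions
    # for the OS user that runs the Spark services.
    mkdir -p "$INCORTA_HOME/IncortaNode/spark/tmp"
    chmod u+rwx "$INCORTA_HOME/IncortaNode/spark/tmp"

    # Step 2: point the Spark worker JVMs and Spark's local scratch space
    # at that directory.
    {
      echo "SPARK_WORKER_OPTS=\"-Djava.io.tmpdir=$INCORTA_HOME/IncortaNode/spark/tmp\""
      echo "SPARK_LOCAL_DIRS=$INCORTA_HOME/IncortaNode/spark/tmp"
    } >> "$INCORTA_HOME/IncortaNode/spark/conf/spark-env.sh"

After updating spark-env.sh, restart the Spark services so the new settings take effect.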

For more details, refer to Troubleshoot Spark > Spark applications failing.

There are four guides detailing how to upgrade to Release 6.0: