Known Issues in Latest Releases

The following are the known issues in Incorta releases as of 2024.1.x, along with workarounds and fix versions, if any. This list may include issues from earlier releases that have not yet been resolved.

Each entry below lists the known issue, its status, the affected versions, and the workaround and fix version, if any.
Known issue: The load plan won’t appear on the Load Jobs list if the latest execution does not include the load group with the schema that the login user has access to, for example, when a load plan execution is aborted before the related load group starts.
Status: Open
Affected versions: 2023.7.0, 2024.1.3 On-Premises, and later

Known issue: For specific versions of the Data Lake connectors (Azure Gen2, Data Lake Local, FTP, Google Cloud Storage, Apache Hadoop (HDFS), Amazon S3, and SFTP), users who use Wildcard Union on directories containing a large number of files might encounter load failures or experience longer load times.
Status: Resolved
Affected versions: Data Lake connector versions 2.0.1.0 through 2.0.1.7
Workaround: Upgrade to connector version 2.0.1.8.

Known issue: Using internal session variables with the SQLi service might cause an out-of-memory error.
Status: Open
Affected versions: 6.0 and later
Workaround: Set an off-heap limit by adding -Dengine.max_off_heap_memory=<value in Bytes> to <installationPath>/IncortaNode/sqli_extra_jvm.properties on the node where the SQLi service is installed. For example, -Dengine.max_off_heap_memory=10737418240 sets the off-heap memory limit to 10 GB.

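Because the property value must be expressed in bytes, a quick conversion helper can avoid mistakes. The following is only an illustrative sketch (the helper function is hypothetical, not an Incorta utility; only the property name comes from the workaround above):

```python
# Build the SQLi off-heap JVM option line from a size in gigabytes.
# The property name (-Dengine.max_off_heap_memory) is from the workaround above;
# the helper itself is illustrative only.
def off_heap_option(gigabytes: int) -> str:
    limit_bytes = gigabytes * 1024 ** 3  # the value must be given in bytes
    return f"-Dengine.max_off_heap_memory={limit_bytes}"

print(off_heap_option(10))  # -Dengine.max_off_heap_memory=10737418240
```

The 10 GB result matches the example value given in the workaround.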
Known issue: Load from staging jobs get stuck and keep running endlessly when the following CMC configurations are set:
  ●  The Enable automatic merging of parquet while loading from staging toggle is turned on.
  ●  The Enable dynamic allocation in MVs toggle is turned off.
  ●  The value of Materialized view application cores is less than the value of spark.executor.cores in the Extra options for Parquet merge option.
Status: Open
Affected versions: 2024.1.3 On-Premises
Workaround: Do one of the following in the CMC > Server Configurations > Spark Integration:
  ●  Make sure that Materialized view application cores is greater than or equal to the value of spark.executor.cores in the Extra options for Parquet merge option.
  ●  Turn on the Enable dynamic allocation in MVs toggle.

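The workaround reduces to a simple predicate over two of the settings. As a sketch (the function and parameter names are hypothetical; the real settings are CMC options, not code), a configuration avoids the hang when either dynamic allocation is on or the MV application cores cover the parquet-merge executor cores:

```python
# Sketch of the safe-configuration rule described in the workaround above.
# Parameter names are illustrative; the actual settings live in the CMC.
def staging_merge_config_ok(mv_app_cores: int,
                            merge_executor_cores: int,
                            dynamic_allocation_enabled: bool) -> bool:
    # Safe when dynamic allocation is enabled, or when
    # "Materialized view application cores" >= spark.executor.cores
    # in the Extra options for Parquet merge.
    return dynamic_allocation_enabled or mv_app_cores >= merge_executor_cores

print(staging_merge_config_ok(2, 4, False))  # False: jobs can get stuck
print(staging_merge_config_ok(4, 4, False))  # True
```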
Known issue: An issue with the unique index sequential calculation might cause table loads and join calculations to fail during Post-load.
Status: Resolved
Affected versions: 2024.1.3 On-Premises
Workaround: Make sure that the unique index parallel calculation (the default behavior) is NOT disabled. Open the engine.properties file on each Loader Service node and either remove the engine.parallel_index_creation=false entry or set its value to true.
Fix version: 2024.1.4

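To illustrate the engine.properties edit described in the workaround, here is a hypothetical helper that rewrites the offending entry in the file's text (only the property name engine.parallel_index_creation comes from the workaround; the function is not an Incorta tool, and the edit must be applied on each Loader Service node):

```python
# Force parallel unique-index calculation back on in engine.properties content.
# engine.parallel_index_creation is the property named in the workaround above;
# this helper is only an illustration of the edit.
def ensure_parallel_index(properties_text: str) -> str:
    fixed = []
    for line in properties_text.splitlines():
        if line.strip().startswith("engine.parallel_index_creation"):
            fixed.append("engine.parallel_index_creation=true")
        else:
            fixed.append(line)  # leave all other properties untouched
    return "\n".join(fixed)

print(ensure_parallel_index("engine.parallel_index_creation=false"))
# engine.parallel_index_creation=true
```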
Known issue: Uploading custom-built visualization components (.inc files) to the marketplace of a cluster that uses an Oracle metadata database results in an Internal SQL exception error.
Status: Open
Affected versions: 2024.1.3 On-Premises

Known issue: Sending multiple schemas concurrently to the same target schema (dataset) in a Google BigQuery data destination may fail the first time if the target schema (the BigQuery dataset) does not yet exist.
Status: Open
Affected versions: 2024.1.x
Workaround: Do one of the following:
  ●  Send (load) one schema first, and then send all other schemas. You can create a load plan with one schema in one group and all other schemas in another group.
  ●  Create the dataset in the BigQuery project before sending the schemas concurrently.
  ●  Execute the load plan again or manually load the failed schemas.

Known issue: When filtering the load jobs list by the In Queue status, single-group load plans that are in queue won’t appear on the list; however, they appear when filtering by the In Progress status.
Status: Open
Affected versions: 2023.7.0, 6.0, and later

Known issue: When Spark-based extraction is enabled on a data source while running Spark 3.3, the extraction of the respective table fails.
Status: Open
Affected versions: Spark 3.3
Workaround: Turn off Enable Spark Based Extraction in the Table Data Source or Edit Dataset dialog.

Known issue: A formula column that only references a physical column with null values shows wrong results because the Loader Service does not respect the null values when materializing the formula column.
Status: Open
Affected versions: 2022.12.0, 6.0, and later

Known issue: The count and distinct functions return null values instead of zeros when used in a flat table or a session variable.
Status: Open
Affected versions: 2022.12.0, 6.0, and later

Known issue: When the Pause Scheduled Jobs option is enabled in the Cluster Management Console (CMC), the emails of manually activated scheduled jobs are not sent.
Status: Open
Affected versions: 2022.9.0, 6.0, and later

Known issue: Scheduled jobs of overwritten dashboards and physical schemas are not deleted.
Status: Open
Affected versions: 2022.9.0, 6.0, and later

Known issue: Access rights to a parent folder are not propagated to child folders and dashboards if the same access rights were revoked before.
Status: Open
Affected versions: 2022.9.0, 6.0, and later

Known issue: Aggregated tables with hierarchical data do not support conditional formatting based on other measures when adding multiple attributes.
Status: Open
Affected versions: 2022.9.0, 6.0, and later

Known issue: Exporting a tenant with a global variable that uses the newly supported syntax (query or queryDistinct expressions) from a cluster that runs a release prior to 2022.5.1, and then importing it to a cluster that runs the same or a later release, causes the Edit Global Variable dialog to show an empty Value.
Status: Open
Affected versions: 2022.4.0 - 2022.5.0
Workaround: Manually edit the value to enter the query expression again, or upgrade both the source and target clusters to the 2022.5.1 release before exporting the tenant.
Note: This issue does not apply to exporting a tenant from a cluster that runs 2022.5.1 or a later release and then importing it to the same or another cluster that runs 2022.5.1 or later.

Known issue: No validation of functions is triggered in the Formula Builder when creating global variables.
Status: Open
Affected versions: 2022.4.0, 6.0, and later

Known issue: Global variables that return a list of values do not function properly when referenced in materialized views (MVs) or individual filters.
Status: Open
Affected versions: 2022.4.0, 6.0, and later

Known issue: Global variables will not appear on the list of variables when referenced in a column individual filter, that is, when you type $ in the Search Values box.
Status: Open
Affected versions: 2022.4.0, 6.0, and later
Workaround: You can add the global variable manually, if applicable. In Query, Contains, and Starts With are examples of functions that accept a manually added global variable as a filter value.

Known issue: Using the DISTINCT operator on a measure in a Pivot table that is built over an Incorta SQL View throws errors.
Status: Open
Affected versions: 2022.4.0, 6.0, and later

Known issue: In an imported physical schema or tenant, the Disable Full Load option is automatically enabled after changing the data source of an object to an Incorta SQL Table or vice versa, which leads to skipping these updated objects in subsequent schema full load jobs.
Status: Open
Affected versions: 2022.3.0, 6.0, and later
Workaround: Either disable the option manually for objects that support disabling full loads (physical schema tables and MVs with incremental load enabled) or recreate the objects.
Note: This issue does not apply to newly created Incorta SQL Tables.

Known issue: Incrementally loading a table without a key for the first time results in an error.
Status: Open
Affected versions: 2022.3.0, 6.0, and later
Workaround: Attempt to load the table incrementally again.

Known issue: Insights might not render when you run the same query multiple times for the same dashboard.
Status: Open
Affected versions: 2022.3.0, 6.0, and later

Known issue: Email addresses added as CC or BCC in a scheduled data notification are not saved.
Status: Open
Affected versions: 2021.4.3, 5.2.1, and later

Known issue: In the CMC under Nodes, the Restart button beside each service does not function properly.
Status: Open
Workaround: For Cloud environments, use the Cloud Admin Portal to restart the services from the Configurations tab.

Known issue: The Show Empty Groups option in Listing and Aggregated tables does not work properly when enabled for a formula in the Grouping Dimension tray.
Status: Open

Known issue: The dashboard Search box won’t return any matching results when you search by the values of a formula column in a business schema view, an insight, or a result set. However, using the Find all records containing… link filters the related insights correctly.
Status: Open

Known issue: Searching the values of a formula column in the Filters dialog does not return any matching values for formula columns from business schema views or formula columns added as prompts.
Status: Resolved
Fix version: 2024.1.0 Cloud and 2024.1.3 On-Premises