Release Notes Incorta On-Premises 2024.1.3

Release Highlights

In the 2024.1.3 On-Premises release, Incorta is introducing multiple new features that aim to enhance and ease the user experience:

  • Introducing a new advanced SQL interface as an alternative to the existing SQL Interface; it is fully Spark SQL compliant, more performant, and scalable, and it provides improved integration with external analytical tools
  • A new Connectors marketplace is now available. With the connectors marketplace, you can install, upgrade, or downgrade connectors without waiting for releases.
  • More enhancements to dashboard free-form layouts to help build visually appealing dashboards.
  • Schema managers can now create load plans with multiple schemas that can be executed in parallel or sequentially via load groups. To make load plans easier to manage, there is also a preview feature that shows the execution dependencies within a load group.
  • Incorta has recently introduced dynamic fields. This feature enables dashboard developers to specify a range of measures, which dashboard viewers can then utilize to control the display of charts in the dashboard.
  • Incorta has expanded its data delivery capabilities by including Google BigQuery and Microsoft Azure Synapse Analytics as data destinations. As a result, the integration between complex data sources and the new cloud data destinations is seamless.
  • A new version of the Data Lineage Viewer now tracks and identifies both the upstream and downstream lineage of most system entities, including tables, views, variables, dashboards, and insights.
  • An enhanced version of the Microsoft SharePoint connector can now connect to Excel and CSV files.
Important

This release uses Data Agent version 8.2.0. Please upgrade to this version.

The Data Agent package no longer includes the MySQL driver, and you will need to provide your own driver (you can download the MySQL jar, version 5.1.48, from the Maven repository). After unzipping the Data Agent package, copy the jar file to <UNZIPPED_DATA_AGENT_PATH>/lib before starting the Data Agent.
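
For example, the following is a minimal sketch of staging the driver (the download URL follows the standard Maven Central layout; adjust paths to your environment):

# Download the MySQL Connector/J 5.1.48 jar from Maven Central
curl -O https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.48/mysql-connector-java-5.1.48.jar

# Copy the jar into the Data Agent's lib directory before starting the agent
cp mysql-connector-java-5.1.48.jar <UNZIPPED_DATA_AGENT_PATH>/lib/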

New customers and new installation considerations

New installations do not include the MySQL driver as part of the installation package, so you will need to provide your own MySQL driver. Existing customers upgrading existing instances to the 2024.1.3 On-Premises release will not experience any change in behavior.

For details about how to install your MySQL driver, refer to Guides → Install MySQL Driver.

Upgrade considerations

SDK Component installation

For now, SDK Component installation is only supported on clusters that use a MySQL metadata database.

Connectors

  • For the offline Marketplace, the connectors’ files must be placed under the /marketplace/connectors/ directory. If this directory does not exist, create it under the system tenant path. Afterward, you must unzip the ext_connectors.zip file that exists in the Incorta installation package and copy all the connectors’ folders from the unzipped directory to /marketplace/connectors/.
  • During the upgrade, Incorta moves the custom CData connectors from <IncortaNode>/extensions/connectors/customCData/ to <IncortaNode>/extensions/connectors/shared-libs/.
  • For custom SQL connectors, you must move your files from <IncortaNode>/runtime/lib/ to <IncortaNode>/extensions/connectors/shared-libs/.
  • If you need additional jars for the Oracle connector to support advanced options like XML, you must add or move the necessary libraries from <IncortaNode>/runtime/lib/ to <IncortaNode>/extensions/connectors/shared-libs/sql-oracle/, as sketched below.
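
The following is a minimal sketch of these file moves (the placeholders follow the paths above; the jar names are illustrative examples, not required files):

# Custom SQL connector drivers
mv <IncortaNode>/runtime/lib/my-custom-driver.jar <IncortaNode>/extensions/connectors/shared-libs/

# Additional Oracle libraries for advanced options such as XML
mv <IncortaNode>/runtime/lib/xdb6.jar <IncortaNode>/runtime/lib/xmlparserv2.jar <IncortaNode>/extensions/connectors/shared-libs/sql-oracle/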

Metadata database check

Upgrading to this release requires upgrading the metadata database to support multi-group load plans and migrate existing schema load jobs or load plans. Before upgrading to a 2024.1.3 release, contact Incorta Support to perform a database check to inspect and detect any issues with your metadata database that might cause the metadata database upgrade to fail. If issues are detected, Incorta Support will run scripts against the metadata database to delete the records causing these issues.

Note

If the records that should be deleted are related to a multi-schema scheduled load job, the scheduled load job will be deleted altogether, and you will need to recreate the load plan manually if required.

SAP connector JCo files

Starting with this release, the SAP ERP Connector no longer bundles the JCo 3 libraries. For more information, refer to License Terms for SAP Connectors. Accordingly, you must manually obtain the needed libraries from SAP Java Connector and place them under <INCORTA_HOME>/IncortaNode/extensions/connectors/shared-libs/sapjco3.
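
For illustration, a minimal sketch of staging the libraries on Linux (the file names are the typical JCo 3 artifacts; the native library name differs on other platforms):

# After downloading the SAP Java Connector distribution from SAP:
mkdir -p <INCORTA_HOME>/IncortaNode/extensions/connectors/shared-libs/sapjco3
cp sapjco3.jar libsapjco3.so <INCORTA_HOME>/IncortaNode/extensions/connectors/shared-libs/sapjco3/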

Public API v2

The response of the /schema/{schemaName}/list endpoint has been updated. Some parameters have been renamed while additional parameters are now available to display error messages, the source column of runtime business view columns, and their data types. For more details, refer to Public API v2 → List Schema Objects.

Columns with Integer data type

In this release, Incorta can write columns with the Integer data type as Integer (instead of Long) in Parquet files. Loading objects with Integer columns from the source (full load) creates new Parquet files with Integer data and requires a full load of all their dependent objects. You can instruct the Loader Service to create new Parquet files and migrate these columns to Integer during loading from staging. For more details, see Maintaining Integer data during data loading.

Caching mechanism enhancements

The caching mechanism in Analyzer views for dashboards has been enhanced by caching all columns in the view to prevent query inconsistencies. To optimize performance and reduce off-heap memory usage, creating views with only the essential columns used in your dashboards is recommended.

Schema names

Starting with this release, the names of newly created schemas and business schemas are not case-sensitive. Therefore, you cannot create a new schema or business schema that shares the same name with existing schemas or business schemas using different letter cases. To maintain backward compatibility, the names of existing schemas and business schemas will not be affected.

Like function fix

As a result of fixing an issue with the like() function, you must load all tables that use this function from staging if you are upgrading from releases before 6.0.2.

New Features

Dashboards, Visualizations, and Analytics

Data Management Layer

Architecture and Application Layer

Cluster Management Console

SQLi

Features Details

Dashboards, Visualizations, and Analytics

Dashboard comments collaboration

Introducing the new dashboard comments feature, where you can collaborate and communicate with other users by commenting on available dashboards.

Using this feature, you will be able to:

  • Post a comment.
  • Reply to a comment.
  • Edit a comment.
  • Delete a comment.
  • Mention other users.
  • Get notified via email in the following cases:
    • Another user mentions you in a comment.
    • A comment you are mentioned in is edited.
    • A reply is posted to a thread you are following.
    • Users reply to your comments.

The notification emails contain the following information:

  • Who posted the comment.
  • The type of comment: a mention, a reply, or an edit.
  • The posted comment/reply.
  • The dashboard link.

Even with comments collaboration between multiple users, Incorta still maintains dashboard privacy according to each dashboard's sharing scheme, which means:

  • A user who is mentioned in a comment but does not have access to the dashboard cannot view the dashboard.
  • Such a user must request access from the dashboard owner to view the dashboard.
  • Users with view access to the dashboard can still leave comments.
  • Comments from disabled or inactive users remain visible, with their icons grayed out.

If a dashboard has comments, Incorta displays the number of comments next to the chat bubble icon.

For more information, refer to Tools → Dashboard Manager.

Group insights in free-form layout

In the free-form layout, you now have the ability to add and organize multiple insights in a group, which enhances the user experience in handling and manipulating these insights. Whether you want to highlight them or relegate them, managing insights becomes more seamless. Additionally, you can effortlessly incorporate new insights into an existing group by simply dragging and dropping them.

You can also do the following:

  • Name a group.
  • Delete a group.
  • Copy a group.
  • Ungroup insights.

When dealing with a group, you deal with the insights included within it as one entity: selecting one insight and moving it around within the layout moves the rest of the insights as well.

Note

You cannot create a group within a group.

In addition to grouping insights and naming groups, you can rename your insights without editing their titles. Just double-click the insight type name and type in a new name; this action will not affect the original insight titles in the dashboard.

For more information, refer to Tools → Dashboard Manager.

Summary Component

In this release, Incorta is introducing a new smart Summary component that, when added to a dashboard tab, summarizes all insights within that tab.

You can find the Summary component under the Others category in the Add Insight panel. The Summary component is a native component that does not require any AI integration.

Usually, the Summary component summarizes the available insights in the form of titles and brief bullet points, where each title corresponds to an insight in the dashboard.

If an insight does not have a title, the component displays that insight's identification number instead.

Important

For the time being, the Summary component only supports the briefing of the following: Line, Sankey, Scatter, Waterfall, Column, Pie, and Donut visualizations and their variations.

Shapes Components

With the 2024.1.3 release, Incorta is enabling you to elevate dashboard designs with the new shapes and icons feature.

You can now integrate lines as delimiters between insights, creating a clear and organized visual hierarchy. Use rectangles strategically to differentiate between various insights.

With this addition, you can incorporate lines, arrows, rectangles, circles, and a variety of other shapes and icons into your dashboard creations.

You can also embed text within your added shapes, resize them, add shadows, and change icon styles.

You can find the new Shapes category in the Add Insight panel with the following available:

  • Rectangle
  • Circle
  • Line
  • Arrow
  • Icon
  • Text

Under Appearance, you can customize the appearance of the shapes using various settings and controls, such as shadows and text.

For more information, refer to Visualizations → Shapes.

Dashboard presentation mode

Now you can present your dashboard directly through the new dashboard presentation mode introduced in this release. A new presentation icon is added in the action bar in the Dashboard Manager that enables you to show your dashboard tabs in fullscreen mode.

When selecting the Presentation icon, Incorta displays the currently selected tab in full screen.

During the presentation, you can switch between tabs using the available arrow icons and keyboard arrows; or simply select the Play button in the Control bar.

Using the Control bar, you can also adjust the display time for each tab, as well as choose how to fit the content of your tab within the screen.

For more information, refer to Tools → Dashboard Manager.

Send a single insight

You can now schedule the sending of a single insight from a dashboard without the need to send the entire dashboard, utilizing the Send/Schedule a Report option. This feature is introduced to facilitate the sending of tabular insights that include pagination.

You can access this capability in the More Options menu of an insight, replacing the previous Send to Data Destination option. Upon choosing the Send/Schedule a Report option, Incorta opens the Scheduler, allowing you to configure the delivery of the insight.

Recommendation

When sending a tabular insight with multiple pages, it is recommended to send it in XLSX format.

Sharing dashboard to Slack and Microsoft Teams

In this release, Incorta is enabling you to send dashboards using messages to Slack and Microsoft Teams (MS Teams). You can send dashboards to both private and public channels.

As a CMC admin, you must first configure Slack, MS Teams, or both to enable Incorta users to share dashboards using these communication platforms.

Important

If you enable or disable the integrations while having Analytics open, you must ask your users to refresh the browser for the change to take effect.

After configuring either Slack or MS Teams, do the following to send or schedule a dashboard:

  1. Log in to Incorta Analytics.
  2. Select your dashboard.
  3. Select the Share icon from the Action Bar, then Send/Schedule a Report.
  4. Select Communication Platform as your sharing option.
  5. Select the platform and channel:
     For Microsoft Teams:
       • Select the Platform as Microsoft Teams.
       • The first time you use this option:
         1. Enter a Channel Name.
         2. Paste the webhook URL you configured for this channel.
         3. Select whether to show it to all users or just yourself.
         4. Select Add Channel.
       • For already existing channels, select the needed channel.
     For Slack:
       • Select the Platform as Slack.
       • Select the channel you want to send the report to, noting that:
         • Incorta lists all public channels in your Slack workspace whether you have invited the bot or not, so make sure you have invited the bot to be able to send the report.
         • For private channels, Incorta displays only the channels that have the bot invited.
  6. (Optionally) Enter the file name and select its format.
  7. Choose whether to append a timestamp to the file name.
  8. (Optionally) Enter a message to associate with the shared dashboard.
  9. Select Done.

For now, Incorta supports sharing only the dashboard URL to Microsoft Teams, while it supports the following formats for sharing to Slack:

  • PDF
  • CSV
  • XLSX

You can also search the available channels using a channel name.

For more information, refer to Slack Integration and Microsoft Teams Integration documents.

Copy dashboard link

Incorta is introducing a new method to share a dashboard with other users. Now you can share a dashboard link via copy and paste.

You can find the Copy link option in the following places:

  • The More options (⋮ vertical ellipsis) icon in the Content manager page beside a dashboard.
  • Share icon in a dashboard’s Action bar.
  • Dashboard’s tab More options (⋮ vertical ellipsis) icon.

By default, the shared dashboard link does not include dashboard filters and bookmarks. By selecting the “Include current filters in the URL” option, the applied filters will be added to the URL string. This can be especially useful when leveraging URLs from third-party tools.

Note

Use the “Include current filters in the URL” option if you are drilling down to Incorta or embedding the dashboard.

Dynamic fields

A dynamic field is a group of dynamically interchangeable fields. While building insights and dashboards, the analyze user adds multiple measures to a dynamic field from the Manage Dashboard Filters within the dashboard scope. The analyze user can then add the dynamic field to the measure tray of any insight created within the same dashboard scope. The dashboard viewer can then dynamically switch between the different measures of the dynamic field.

Note

This feature is only available in interactive dashboards. Currently, sending and downloading a dashboard as PDF respects the selected dynamic field.

For more information, refer to Concepts → Dynamic field and Tools → Dashboard Filters Manager.

List of values in presentation variables

Presentation variables now accept a user-defined list of values. This feature lets you easily filter data based on manual user input at runtime instead of pre-existing data columns.

When you create a presentation variable, you can choose to manually enter a set of defined values as the source of your presentation variable. The set of values can be a comma or line-separated list entered into the bulk edit of the filter values. Make sure to always select the default value.

For more information, refer to Concepts → Presentation variable.

Hide prompts from the filter bar

This release introduces a new option that will hide prompts from the dashboard filter bar. The Hide from filter bar option is now available for prompts. This option can be enabled while setting up the prompt in the Dashboard Filters Manager.

Incorta will apply the prompt to your dashboard whether or not this option is enabled.

Note

The Clear All option clears all shown and hidden prompts.

For more information refer to Concepts → Prompts and Tools → Dashboard Filters Manager.

Grouping measure color palette for Pie and Donut visualizations

Additional configurations are available to control the colors of Donut and Pie charts to align them with dashboard themes. The Format Color Palette option is now available for grouping dimensions in pie and donut visualizations.

Limit prompt filters dashboard selection

In this release, Incorta enables Cluster Management Console (CMC) admins to limit, via a new CMC configuration, the number of selections a dashboard consumer can apply in a dashboard prompt filter. This limitation applies only to the following operators:

  • In
  • Not In
  • Contains
  • Does Not Contain
  • Starts With
  • Does Not Start With
  • Ends With

You can find the new configuration in the CMC under Default Tenant Configuration > Incorta Labs > Max no. of selections for contains filter. The default value for the configuration is -1, which indicates an unlimited number of selections; 0 also indicates an unlimited number of selections. Any number greater than 0 indicates the maximum number of selections a dashboard consumer can make in a prompt filter.

Multiple default groups by dimensions in Aggregated Tables

The dynamic group-by is a powerful feature in Incorta's aggregated tables. In the latest update, users can override the default grouping behavior, in which the first dimension in the grouping dimension tray is displayed. Instead, users can select which grouping dimension(s) to show by default for dashboard consumers.

Dynamic Pivot Table Analysis

In this release, pivot tables have added two powerful enhancements:

  • The ability to define a dynamic group by logic (previously only available in aggregated tables)
  • The ability to define dynamic columns

As a result, users can interactively select and de-select rows and columns to display in their pivot table. As a bonus, these two new features also allow users to set a default view.

Wrap the label text in the Dual X-axis chart

There is now a new lab format option to wrap the text label for the Dual X-axis chart. This format option applies to the upper X-axis. You can keep the original behavior, where the text exceeding the reserved area of the label is trimmed, or toggle label wrapping in the pill settings.

Export current dashboard state

In this release, Incorta can export dashboards to PDF and HTML in their current view state instead of reverting to their default state. The following end-user interactions are reflected in the export:

  • Sorting
  • Dynamic Group By (in tables)
  • Dynamic Measures (in tables)
  • Dynamic Fields (used in measures)

If a dashboard consumer has used the above actions before exporting, the exported dashboard, whether from a direct or scheduled export, will reflect their changes. When exporting the dashboard, under the Choose Bookmark option, you can select existing bookmarks or “Current Dashboard State”. Note that dashboard states can also be saved as bookmarks.

Data Management Layer

Connectors Marketplace

Incorta is introducing the new Connectors Marketplace to On-Premises users. With the new marketplace, you can install, upgrade, and downgrade your connector version independently from any Incorta release.

By default, the On-Premises Connectors Marketplace operates in offline mode. In this mode, you must request connector files from the Incorta Support team. Afterwards, you need to extract the connector's .zip file and place the extracted folder under the system tenant path in the following directory: /marketplace/connectors/.

Notes
  • For the offline Marketplace, the connectors’ files must be placed under the /marketplace/connectors/ directory. If this directory does not exist, create it under the system tenant path. Afterward, you must unzip the ext_connectors.zip file that exists in the Incorta installation package and copy all the connectors’ folders from the unzipped directory to /marketplace/connectors/.
  • The system tenant path may vary according to Incorta installation.
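
For illustration, a minimal sketch of preparing the offline Marketplace directory (<SYSTEM_TENANT_PATH> is a placeholder for your system tenant path):

# Create the marketplace directory under the system tenant path if it does not exist
mkdir -p <SYSTEM_TENANT_PATH>/marketplace/connectors

# Unzip the connectors bundled with the Incorta installation package and copy them over
unzip ext_connectors.zip -d ext_connectors
cp -r ext_connectors/* <SYSTEM_TENANT_PATH>/marketplace/connectors/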

An online Marketplace option is also available for On-Premises installations to enable schema managers to instantly access available connectors and updates in Incorta.

To activate the online Connectors Marketplace:

  • Log into the CMC.
  • Go to Cluster Configurations > Clustering > Connectors Marketplace Mode.
  • Change the mode from Offline to Online, and then restart both Loader and Analytics services.

Incorta configures the Marketplace information by default in the CMC under Cluster Configurations > Integration > Incorta CMS URL and Incorta CMS Token. For further assistance, please contact Incorta Support.

Upgrade Considerations
  • You must move any drivers or files required by the custom SQL connectors to <IncortaNode>/extensions/connectors/shared-libs/.
  • In case you need additional jars to use with the Oracle connector to support advanced options like XML, you must add/move the necessary libraries from <IncortaNode>/runtime/lib/ to <IncortaNode>/extensions/connectors/shared-libs/sql-oracle/.
  • During the upgrade, Incorta moves the CData drivers that you use before the upgrade from <IncortaNode>/extensions/connectors/customCData/ to <IncortaNode>/extensions/connectors/shared-libs/.
Important

You must contact your CMC administrator to enable CData connectors so you can install them.

In the marketplace, the connectors are categorized according to their type and functionality. The connectors are displayed as cards within the marketplace; each card contains the following information:

  • Connector name
  • Connector version
  • Connector category
  • Green tag if it is installed
  • Yellow tag if it is new or has an available update

On a connector details page, you can view a brief description of the connector, a link to the connector’s full documentation, and a list of available updates (if they exist).

For fresh installations, the following connectors are installed by default:

  • MySQL
  • Oracle
  • Microsoft SQL Server
  • Custom CData
  • Custom SQL
  • Local Files

For upgrades from previous versions, the connectors that you were using with physical schemas will remain available, in addition to the above connectors. You can install any other connector that you might need from the Marketplace.

For more information, refer to References → Connectors.

Microsoft SharePoint Connector

The SharePoint connector has received significant improvements to its options. Now, you can not only select SharePoint lists but also connect to Excel and CSV files.

For more information, refer to Connectors → Microsoft SharePoint.

Oracle Cloud Applications (BICC) connector

Incorta is introducing an enhanced and improved version of the Oracle Cloud Applications (BICC) connector, where you can control who triggers the BICC jobs (BICC or Incorta) and avoid any synchronization issues that may occur.

In addition, you can control the list of BICC jobs that you need to run by defining it in the connector. You can add the BICC IDs in a list separated by commas, spaces, or new lines.

For more information, refer to Connectors → Oracle Cloud Applications (BICC).

GraphQL connector

Incorta is introducing the new GraphQL connector in this release. GraphQL is a query language for APIs and a runtime for fulfilling queries with existing data. The GraphQL connector uses the cdata.jdbc.graphql.jar driver to connect to a GraphQL resource and get data.

Note

The GraphQL connector is a preview connector.

For more information, refer to Connectors → GraphQL.

NetSuite Searches connector

Incorta added a new connector for NetSuite Saved Searches in this release. NetSuite enables users to search for any record type in the system and save this search in the form of a Saved Search. Incorta connects to retrieve this data to be able to process it and build insights.

Note

The NetSuite Searches connector is a preview connector.

For more information, refer to Connectors → NetSuite Searches.

OneDrive connector

The OneDrive connector enables you to connect to your Microsoft OneDrive. The OneDrive cloud service connects you to all your files stored in the cloud. It lets you store and protect your files, share them with others, and access them from anywhere on all your devices.

Note

The OneDrive connector is a preview connector.

For more information refer to Connectors → Microsoft OneDrive.

Kyuubi connector

The new connector is a JDBC connector that enables you to connect to Apache Kyuubi. Apache Kyuubi is a distributed and multi-tenant gateway to provide serverless SQL on Data Warehouses and Lakehouses. Kyuubi builds distributed SQL query engines on top of various kinds of modern computing frameworks.

Note

The Kyuubi connector is a preview connector.

For more information refer to Connectors → Kyuubi.

MariaDB connector

In this release, Incorta is introducing the new MariaDB connector. MariaDB is one of the popular open-source relational databases. It is also the default database in most Linux distributions.

Note

The MariaDB connector is a preview connector.

For more information refer to Connectors → MariaDB.

Oracle Configure, Price, and Quote connector

With the addition of the Oracle CPQ REST API connector, Incorta now allows you to connect to your Oracle CPQ system seamlessly. By utilizing this connector, you can collect valuable data and gain meaningful insights that will help you optimize your opportunity-to-quote-to-order process.

Note

This connector is available for preview only.

For more information, refer to Connectors → Oracle CPQ.

Oracle Transportation and Global Trade Management connector

With Incorta, you can now easily connect to your Oracle Transportation and Global Trade Management (OTM/GTM) system using the Oracle OTM/GTM connector, which is based on REST APIs. This connector allows you to collect data on all your transportation activities across your global supply chain and trading.

Note

This connector only supports Oracle OTM/GTM version 23A and is available for preview.

For more information, refer to Connectors → Oracle OTM/GTM.

Autoline connector

With the new Autoline connector, you can easily access your Autoline suite and directly query your data. This connector uses the cdata.jdbc.jdbcodbc.jar driver to establish a seamless connection to Autoline.

Note

The Incorta Autoline connector is available for preview only.

For more information, refer to Connectors → Autoline.

SAP ERP connector callback extraction mode

Incorta is introducing a new callback extraction mode in the existing SAP ERP connector. The new extraction mode is faster and easier to set up.

Note

It is recommended to use the callback mode instead of the asynchronous mode for better performance.

For more information, refer to Connectors → SAP ERP.

Log-based incremental load

Incorta now supports log-based incremental loads using change data capture (CDC).

CDC is the process of identifying and capturing changes made to data in a database using logs and then delivering those changes in real time to a downstream process or system.

Note

Currently, the log-based incremental load is a preview feature.

Prerequisites

To use the log-based incremental load, you need to be aware of and apply the following:

  1. Install and configure Apache Kafka and Kafka Connect.
  2. Configure the Debezium connector; Incorta recommends using Debezium version 2.4.1.
  3. Disable snapshots while configuring Debezium.
  4. Make sure the Debezium connector is configured to send data types to Incorta by adding the propagate property.
  5. Note that the log-based incremental load supports database physical tables only.
  6. Ensure that tables have primary keys.

The log-based incremental load is currently supported for the following SQL-based connectors:

  • MySQL
  • Microsoft SQL Server
  • Oracle
  • PostgreSQL

While creating a dataset for a physical schema, you can choose the log-based incremental load method to load this schema incrementally.

Data source connector configuration example using Debezium

curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json" localhost:8083/connectors/ -d '{
  "name": "inventory-connector1",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "127.0.0.1",
    "database.port": "3306",
    "database.user": "debezium",
    "database.password": "debezium_1234",
    "database.server.id": "184054",
    "topic.prefix": "kafka_mysql",
    "database.include.list": "inventory",
    "schema.history.internal.kafka.bootstrap.servers": "127.0.0.1:9092",
    "schema.history.internal.kafka.topic": "schemahistory.kafka_mysql",
    "include.schema.changes": "true",
    "column.propagate.source.type": ".*",
    "snapshot.mode": "schema_only"
  }
}'
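
After registering the connector, you can verify its status through the standard Kafka Connect REST API, for example:

# Check that the Debezium connector is registered and running
curl -s localhost:8083/connectors/inventory-connector1/status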

Known limitations

  • For the time being, Incorta does not track deletion updates through the log-based incremental load (CDC).
  • There may be minimal mismatches in columns with INTERVAL data types.
  • This feature supports Kafka topics that use a single partition only.

Multi-schema load plan enhancements

In this release, Incorta introduces several enhancements to multi-schema load plans:

  • Multi-schema load plan as a general availability (GA) feature
  • Orchestrating load plans via sequential groups
  • Visualizing the execution of load plans using Directed Acyclic Graphs (DAGs)

Multi-schema load plan: a general availability (GA) feature

The Multi-schema load plan introduced previously as a preview feature is now a GA feature that is always enabled.

Orchestrating load plans via sequential groups

You can now organize schemas into load groups, simplifying the management of complex data applications that involve multiple schemas. Schemas can be loaded sequentially by ordering them by group. Within each group, schemas load simultaneously (as resources allow), ensuring faster data refresh cycles while maintaining dependencies between schemas.

The Loader Service simultaneously loads all schemas in the first group, then starts loading the next group, and so on.

For more details, refer to Tools → Scheduler.

Visualizing the execution of load plans using DAGs

You can now preview the execution plan of each load plan visualized as a directed acyclic graph (DAG), which shows the load plan dependencies and order of execution.

To preview the DAG of a load plan:

  1. Select Scheduler > Load Plans.
  2. For the load plan you want, select Show DAG.
  3. Expand the nodes to show the included objects and processes.

The DAG of a load plan comes with the following capabilities:

  • Hide and show task groupings. Grouped nodes show the number of tasks included.
  • Search and filter the group panel (which now shows the included schemas and their objects) by the schema or object name. Selecting a schema or object filters the diagram to show only the tasks and nodes for the selected schema or object and also filters the diagram legend to show the related task or node types only.
  • Search the diagram by the object name. The search box shows matching objects categorized by task type. The search is limited to the current diagram; therefore, it is affected by the filters applied using the group panel. Selecting an object from the search list highlights the object in the respective node and also highlights it along with its upstream and downstream tasks in all expanded nodes across the diagram, showing the full path of the selected object.

The nodes represent the tasks or stages of the load plan while node order represents the dependency between objects. The DAG shows nodes for extraction, transformation (enrichment), PK-index creation, deduplication (compaction), locking, loading, and Post-Load calculations.

For more details, refer to Tools → Load Plan DAG Viewer.

Data lineage enhancements

A new version of the Data Lineage Viewer is now available as a preview feature. It displays an entity’s (column, variable, object…) upstream and downstream lineage. Upstream lineage lists the entities referenced in the current entity while downstream lineage lists entities where the current entity is referenced. The tool has an enhanced diagram that shows the hierarchy of dependencies where you can track upstream and downstream lineage.

In addition to tracking columns and formula columns, you can also track the lineage of other entities like:

  • Physical schema tables
  • Business runtime views
  • Session and global variables
  • Dashboards and insights

Incorta has also improved the accessibility of data lineage in the platform. You can now:

  • Quickly identify lineage object types through improved color-coding and icon-coding.
  • Access dashboards from within the lineage diagram.

For more details, refer to Tools → Data Lineage Viewer v2.

Maintaining Integer data during data loading

In previous releases, Incorta wrote Integer columns in Parquet as Long. However, starting this release, Incorta writes Integer columns in Parquet as Integer for all newly created tables. For previously created tables, Incorta converts Integer columns written as Long in Parquet files to Integer during full load jobs, while Incorta keeps these columns unchanged during incremental loads.

As a result, after performing a full load of a table with Integer columns, it is recommended that you perform a full load of its dependent schema objects to ensure data consistency.

To migrate all Parquet files to have Integer data without fully loading your objects, the administrator can turn on the Enable Parquet Migration at Staging option in the Cluster Management Console (CMC) > Server Configurations > Tuning and perform a load from staging for all your objects with Integer columns.

Notes:

  • Turning the Enable Parquet Migration at Staging option on adds a new step that runs on Spark to the staging load jobs. Ensure Spark has sufficient resources to migrate the required Parquet files.
  • The migration of Parquet files during staging load occurs only once per object.
  • Load tables before loading MVs that read from these tables.
  • When loading an object fully, you must load dependent objects or load all objects from staging after turning the Enable Parquet Migration at Staging option on.
  • The new behavior might affect existing scripts referencing Integer or Long data, whether columns, values, or variables.

Detecting duplicates during unique index calculations

In previous releases, unique index calculations would not fail when the Enforce Primary Key Constraint option was disabled for a data source that does not enforce the primary key constraint (such as CSV files) or when the primary keys of a table were not properly set.

Starting with this release, the unique index calculation will fail in such cases, and the load job will finish with errors. You must do one of the following to have the unique index correctly calculated for physical tables or MVs:

  • Enable the Enforce Primary Key Constraint option and load tables from staging.
  • Select the right key columns that ensure row uniqueness and fully load the tables.

In the case of derived tables, select the right key columns that ensure row uniqueness, and the unique index will be correctly calculated during the schema update job.

Null handling enhancements

The Analytics Service has extended its support for null value handling in the following areas:

  • Functions:
    • Arithmetic Functions
    • Boolean Functions
    • Conversion Functions
    • Date Functions
    • Miscellaneous Functions
    • Analytics Functions
  • Filtering data based on formula columns.
  • Sorting based on formula columns.
  • Formulas involving logical and comparison operators.
  • Arithmetic operations.

This expansion of support for null value handling in these areas signifies an important improvement in the capabilities of the Analytics Service, allowing for more comprehensive and accurate data analysis and manipulation.

In addition, sorting and filtering data based on physical columns now account for null values.

  • When sorting data in an insight, Analyzer table, or Analyzer view in ascending order, null values will be first.
  • Filters respect null values when applying filters to insights, dashboards, Analyzer tables, or Analyzer views.

Sorting and filtering based on formula columns do not account for null values for now.

Note

The Null Handling option has been moved to CMC > Server Configurations > Incorta Labs.

For more information, refer to References → Null Handling.

Expanded data delivery capabilities

In this release, Incorta introduces Google BigQuery and Microsoft Azure Synapse Analytics destinations for data delivery. With data destinations, Incorta will not only ingest and enrich data but also push full and incremental loads of data to another cloud analytics platform. The new data delivery capabilities streamline the integration between complex data sources and Microsoft Azure Synapse Analytics and Google BigQuery. They also accelerate data mart deployment and automate data model design and source schema mapping with Incorta data applications.

You can choose which tables to send to your data destination. While setting up a data destination for a schema, you find a new section that is collapsed by default called Tables. Using this section, you can configure which tables you want to send to the data destination during the load process.

Through the Tables section, you can filter tables by entering a keyword in the search bar. You will also find the Show only selected toggle that shows only selected tables when enabled.

For Microsoft Azure Synapse, Incorta now discovers and stores a column's string length in the metadata. Hence, when you load your data and send it to the destination, the data is sent based on the discovered column length if it is available; otherwise, Incorta uses the Default String Length you have previously configured.

After upgrading to this release, Incorta can capture the column string length with the next full ingest you perform, where it is discoverable.

Known limitation

There are cases where column length cannot be discovered, such as calculated columns or columns of data sources based on text files such as Excel and CSV files.

For more information, refer to the Incorta Data Delivery configuration documents.

Default date and number format in a business view

Now, you can set the date and number formats per column or formula column in a business schema view. Once configured, this format will be the column’s default format whenever the column or formula column is added to an insight. However, you can still apply a different format to the insight column. You can also update columns in existing insights to inherit the view column format.

Notes:

  • After upgrading to 2024.1.3, updating the view column format will not impact existing insight columns unless you configure insight columns to inherit the view column format.
  • Changing the view column format affects all insights where the insight column inherits the view column format.
  • Format updates at the view column level will not override the format specified at the insight column level.
  • If no format is specified at the insight or view levels, the format will be as follows:
    • Date columns: Short Date
    • Timestamp columns: no format
    • Numeric columns: no format, except for decimal columns
  • You cannot set the default format for columns in Incorta Analyzer or SQL views.

SQL-compliant verified business schema views

The Analytics Engine can now validate business views for compatibility with external BI tools and SQL compliance. This enhancement will empower you to make more informed decisions when selecting business views for your analytics and reporting needs. You can easily identify a verified business view by the green icon displayed next to it in the Business Schema Designer.

A Verified view is a business schema view that:

  • Does not contain formulas referencing columns from Analyzer Views or SQL Views.
  • Does not contain aggregation functions.
  • Has a valid query plan, which means any of the following:
    • A base table is explicitly defined and is valid for all the view columns (constitutes a valid join path).
    • If there is no defined explicit base table, there must be a valid implicit base table connecting all columns in the view.

If the view does not meet all of the above criteria, it will not be considered verified. You can view the reason why it is not verified by hovering over the gray icon and selecting Show Details.

Note

External BI tools connecting via the Advanced SQL Interface and Notebook for Analyzer users will have access only to the verified views in a business schema. However, external BI tools connecting via the classic SQL Interface will have access to both verified and unverified views.

Current limitations and known issues

  • This feature applies only to business schema views.
  • Analyzer and SQL views are excluded from the verification process.
  • If a view is considered unverified due to multiple reasons, only the first encountered one will be displayed.
  • If a business schema view contains only session or global variables, it is considered unverified.

New Public API endpoints

This release introduces the following new endpoints:

Content Manager (Catalog) endpoints
  • Search Catalog (catalog/search): searches for dashboards and folders that a user owns or has access to in the Content Manager (Catalog) and its subfolders by a defined keyword.
  • List Catalog Content (catalog): lists folders and dashboards that a user owns or has access to in the root directory of the Content Manager (Catalog).
  • List Catalog Folder Content (catalog/folder/{folderId}): lists folders and dashboards that a user owns or has access to in a specific folder in the Content Manager (Catalog).
Load plan execution and status endpoints
  • Execute Load Plan (/load-plan/execute): executes a specific load plan and returns the load plan execution ID.
  • Load Plan Execution Status (load-plan/status/{execution-id}): returns the status of a load plan execution, with or without the details per load group, schema, or object.
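
For illustration, a hedged sketch of chaining the two load plan endpoints (the host, tenant path, authentication header, and parameter names are assumptions; refer to the Public API v2 documentation for the exact request format):

# Execute a load plan and capture the returned execution ID
curl -X POST "https://<incorta-host>/incorta/api/v2/<tenant>/load-plan/execute?loadPlanId=<load-plan-id>" -H "Authorization: Bearer <access-token>"

# Poll the status of the execution using the returned ID
curl "https://<incorta-host>/incorta/api/v2/<tenant>/load-plan/status/<execution-id>" -H "Authorization: Bearer <access-token>"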

New Date functions

This release introduces the following new Date functions:

  • yearQuarter(date_timestamp exp): Returns an integer that represents the year and quarter of a given date or timestamp column or expression. In the returned value, the first four digits represent the year while the last two digits represent the quarter. For example, yearQuarter(date("2023-10-25")) returns 202304.
  • yearQuarter(): Returns an integer that represents the current year and quarter. In the returned value, the first four digits represent the year while the last two digits represent the quarter. Example: 202401.
  • yearMonth(date_timestamp exp): Returns an integer that represents the year and month of a given date or timestamp column or expression. In the returned value, the first four digits represent the year while the last two digits represent the month. For example, yearMonth(date("2023-10-25")) returns 202310.
  • yearMonth(): Returns an integer that represents the current year and month. In the returned value, the first four digits represent the year while the last two digits represent the month. Example: 202401.

Architecture and Application Layer

Enhancements to data load notifications

This release introduces significant enhancements to data load notifications, including notifications on load plans instead of schemas and a new notification type for jobs that take more time than expected.

Load plan notifications

Now, you can create data load email notifications at the load plan level rather than the schema level. The Schema Notifications tab is no longer available; it has been replaced by the Notifications tab.

  • The notifications list will show only notifications that the logged-in user has created. However, the Super User and users with the SuperRole will have full access to all notifications.
  • Deleting a load plan deletes its data load notifications.

Upgrade considerations

Existing schema load notifications will be migrated to load plan notifications during the release upgrade process. Each schema load notification will be automatically assigned to all single-schema load plans for this schema. The migration process will not migrate schema notifications to load plans having multiple schemas. Additionally, schemas with no single-schema load plans will not have their notification settings migrated.

  • The Last Modified Date of a notification will reflect the migration date.
  • Migrating a notification will not change its owner.
  • Migrated notifications will follow the following naming convention: [notification name]_[load plan name].
Notifications of jobs taking longer than expected

You can also create email notifications for load jobs that take longer than expected, based on the load time of recent load jobs for the same load plan. This feature helps you detect delays in data refresh cycles as early as possible and act accordingly.

Note

As this feature depends on the load plan job history, the Scheduler will skip this type of notification for load plans with no job history.

Retention of load job tracking data

With high-frequency refresh cycles, the size of load job tracking data, including load job history, load plan executions, and schema update jobs, tends to increase over time, which might impact the system's performance and the metadata database's volume. Therefore, Incorta is introducing the Retention period of load job tracking data (In months) option, which enables Cluster Management Console (CMC) admins to control the period for which Incorta retains load job tracking data. This option exists under Server Configurations > Tuning, and admins can set the retention period in months. The default is Never, which means that the feature is disabled.

If you enable this feature, a cleanup job runs whenever the Analytics Service starts and every 24 hours afterward and deletes tracking data that exceeds the specified retention period. However, the tracking data of the latest successful load job or schema update job will not be deleted.

Notes and recommendations

  • When the cleanup job runs for the first time, it locks the metadata database during the deletion process. The locking duration depends on the number of records that the job will delete.
  • It is recommended that you suspend the Scheduler before enabling the feature. Then, start the Analytics Service only and wait for a few minutes before you start the Loader Service.
  • It is also recommended that you first configure the feature to start with a long retention period, then change the configuration afterward to a shorter period, and so on until you reach the required retention period. This will reduce the database lock time when the cleanup job runs for the first time.

Exporting and importing load plans

To facilitate migrating load plans from one environment to another or one cluster to another, you can now export and import one or more load plans, along with their scheduler details. When importing load plans, you can overwrite existing load plans that share the same name.

Note: Importing load plans will fail in the following cases:

  • The imported load plan has one or more schemas that do not exist in the target tenant.
  • You do not have edit access rights to all schemas in the load plan.
  • The imported load plan shares the same name with an existing load plan while the Overwrite existing load plans option is not selected.

Support for manual load plan execution

You can now manually execute a load plan from Scheduler > Load Plans regardless of its schedule status: Active, Suspended, Completed, or Not Scheduled. Only users who own or have Edit access to all schemas in the load plan can execute it manually. Manually executing a load plan from the Load Plans list will not impact the next scheduled run if one exists.

Default schema load type at the load plan level

You can now specify the default load type for schemas that you will add to a load plan. Whenever you add a new schema to the load plan, it inherits the default load type. If you change the default load type, it will not affect the schemas you have already added.

Load Job Details Viewer enhancements

  • Displaying the name of the executed load plan if available.
  • Displaying an indicator of the rejected rows per load group.
  • Table details enhancements:
    • Displaying the deduplication phase for tables, which tracks the duration and status of the PK-index creation and Parquet compaction processes.
    • The ability to collapse and expand schema objects.
    • The ability to filter by the table load status and preserve the filter during navigation where applicable.
    • The ability to change the column width.

Support for merging Parquet segments during loading from staging

A new Spark job can run as part of the load from staging jobs to merge Parquet segments, which are the result of incremental load jobs. This new step increases the system resilience and enhances the performance in clusters with an increased number of small Parquet files.

To enable this feature, turn on the Enable automatic merging of parquet while loading from staging option in the CMC under Server Configurations > Tuning. In addition to enabling the feature, the following conditions must be met during a load from staging job to start the merging process for an object:

  • The Parquet Long to Int Migration option is turned off or the object doesn’t require migrating Long data to Integer.
  • The object’s most recent compacted version matches the most recently extracted version; otherwise, the Loader Service performs a compaction recovery process before merging. If the recovery fails, the merge process will not start.
  • The table has over 1000 eligible Parquet segments for merging.
  • The estimated reduction in the number of files exceeds 50%.

To change these thresholds, edit the Loader Service’s engine.properties file and set different values for the loader.parquet.merge.min.file.count and loader.parquet.merge.compression.ratio.threshold properties. Setting these properties to 0 or 1 causes merging to occur regardless of the file count or the gain percentage.
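
For example, a minimal sketch of tuning these thresholds (the engine.properties location is an assumption, and the values mirror the conditions above: 1000 segments and a 50% reduction):

# Append to the Loader Service's engine.properties, then restart the service
cat >> <IncortaNode>/services/<loader_service_id>/incorta/engine.properties <<'EOF'
loader.parquet.merge.min.file.count=1000
loader.parquet.merge.compression.ratio.threshold=0.5
EOF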

Notes:

  • The Parquet merge step creates a new Parquet version of the object on the disk (similar to the version created as a result of a full load job).
  • The Parquet merge does not change the merged data. However, it changes the order of the records in the output Parquet files. After the merge, dashboards will show the same data, but in a different order (if no sorting is applied).

Enhanced the responsiveness of Post-load interruption

This release enhances the responsiveness of interrupting load jobs during Post-load calculations. Previously, the Loader Service would wait for running calculations to complete before stopping the load job.

Enhanced performance with parallel snapshot reading

Now, Incorta can read the snapshot DMM files of joins and formula columns in parallel using multiple threads to enhance and speed up reading these files. The performance enhancement may vary according to the number of columns and joins to read concurrently and the available resources.

This feature is disabled by default. To enable it, add the following two parameters to the engine.properties file in each Loader and Analytics node on your cluster, set them to true, and restart the services.

  • store.parallel_column_snapshot_read
  • store.compressed_dictionary_rehash_lock_striped_enabled
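
A minimal sketch of enabling the feature (the engine.properties path is an assumption; repeat on every Loader and Analytics node, then restart the services):

# Append the two parameters to each node's engine.properties file
cat >> <IncortaNode>/services/<service_id>/incorta/engine.properties <<'EOF'
store.parallel_column_snapshot_read=true
store.compressed_dictionary_rehash_lock_striped_enabled=true
EOF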

Improved handling of cyclic dependencies in variables

This enhancement focuses on improving system resilience and stability by resolving issues related to infinite loops and overflow exceptions caused by cyclic dependencies in variables. However, it's essential to acknowledge that a standardized error message or expected output has not yet been established. Users may encounter varied outputs, including #Error, 0 rows, empty strings, or variable name displays, depending on the context. Ongoing efforts are dedicated to achieving uniform error handling and output consistency.

Cluster Management Console

Limit downloading insights enhancement

In this release, Incorta renamed the Users with “User” or “Individual Analyzer” roles can download insights option previously introduced in the 6.x release to be “Download insights”. Disable this option to prevent users with “User” or “Individual Analyzer” roles from downloading insights.

Enhanced logging framework

Incorta has made several improvements to its logging framework, including the handling of file naming, file size, and archiving. In this framework, a log file is automatically zipped to incorta-<date>-<file_number>.log.zip when it reaches 1GB or at the end of the day.

The following is an example of the logging process on February 3rd, 2023:

  1. Incorta started logging events in incorta.log file with the start of the day.
  2. When the log file reached 1GB, Incorta zipped the current log file into incorta-2023-02-03-01.log.zip file.
  3. Incorta recreated another incorta.log file to continue logging events for the same day.
  4. By the end of the day, the second log file is zipped under the name: incorta-2023-02-03-02.log.zip.

By default, Incorta retains the log files for a maximum of 6 months; afterwards, the log files are deleted. If you wish to retain the files for a longer period, you can simply take a copy of the log files or modify the /IncortaNode/services/<service_id>/incorta/incorta-log4j2.xml file, adjusting the value of each mention of the IfLastModified attribute. The default value is age="P183D", which is roughly 6 months.

For example:

"PT20S" > parses as "20 seconds"
"PT15M" > parses as "15 minutes" (where a minute is 60 seconds)
"PT10H" > parses as "10 hours" (where an hour is 3600 seconds)
"P2D" > parses as "2 days" (where a day is 24 hours or 86400 seconds)
"P2DT3H4M" > parses as "2 days, 3 hours and 4 minutes"
Note

Existing log files created before the upgrade to this release will remain as they are and are not managed by the new framework.

Performance enhancement for rendering and downloading insights

  • Queueing of download and render requests: The Analytics Service now queues requests for the same insight with the same format, preventing simultaneous execution of duplicate requests.
  • In-memory caching of downloaded insights: the Cluster Management Console (CMC) admin can configure the Analytics Service to maintain an in-memory cached version of insights downloaded, sent to data destinations, saved in a download folder, or shared via email in XLSX and CSV formats.

The Analytics Service checks for a cached version before executing the insight query, regardless of the requesting user. The new Export to CSV/XLSX caching limit (In megabytes) option, located in CMC > Tenant Configurations > Tuning, controls the maximum data size of the query result that the Analytics Service caches for downloaded insights. By default, this feature is disabled. To enable it, set the caching limit to a value greater than 0. Insights with a query result size exceeding the defined limit will not be cached. Additionally, if any of the Maximum Cached Memory (%) or Maximum Cached Entries options are disabled or exceeded, the Analytics Service won’t cache downloaded insights.

These improvements collectively enhance performance, optimize resource utilization, and improve the overall user experience within the Analytics Service.

For more details, refer to Tenant Configurations > Tuning.

Announcements banner

Incorta can now broadcast announcements to users within the Incorta platform via an announcement banner. The CMC admin can set up and send the banner announcement through the Notifications feature found in the Server Configurations section of the CMC.

The banner will display to all tenant users. When users dismiss the notification banner, it will not appear to them again until the admin creates a new announcement.

For more information on how to configure the notifications, refer to Guides → Configure Server.

Pause scheduled jobs enhancements

Starting with this release, you can choose which scheduled jobs to pause. Incorta replaced the "Pause Scheduled Jobs" toggle in the CMC with three toggles to give you more flexibility over which scheduler jobs to pause.

In the CMC > Clusters > cluster-name > Cluster Configurations > Default Tenant Configurations > Data Management (known as Data Loading in previous releases), you can find the three new toggles:

  • Pause Load Plans
  • Pause Scheduled Dashboards
  • Pause Data Notifications

These options are disabled by default. Enabling an option displays a message in the Analytics service to indicate that the related scheduled jobs are paused.

For more information, refer to Guides → Configure Tenant.

Incorta has also applied the same change to tenant import. During the import process, you can choose which scheduled jobs to pause for the tenant you are importing.

For more information, refer to Tools → CMC Tenant Manager.

Support OAuth for JDBC connection

Incorta now supports OAuth for JDBC connections. You can enable this option from the CMC: log in to the CMC, navigate to Clusters > cluster-name > Cluster Configurations > Default Tenant Configurations > Integration, and enable the OAuth 2.0-based authentication for JDBC connection option. Note that when this option is enabled, you cannot use personal access tokens (PATs) for JDBC authentication.

Enabling this option shows the OAuth 2.0 authorization server base URL field, where you can enter your authorization server URL. Any URL change requires restarting the Analytics Service to take effect.
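
For illustration, a client connection that authenticates with an OAuth 2.0 access token might look like the following minimal sketch. The JDBC URL scheme, host, port, and the convention of passing the access token in the password field are assumptions to verify against your deployment; obtain the token from your authorization server first:

  import java.sql.Connection;
  import java.sql.DriverManager;

  public class OAuthJdbcSketch {
      public static void main(String[] args) throws Exception {
          // Hypothetical JDBC URL; replace with your cluster's actual endpoint.
          String url = "jdbc:incorta://incorta-host:5436/demo_tenant";
          String user = "analyst";
          // Assumption: the OAuth 2.0 access token is supplied as the password.
          String accessToken = System.getenv("OAUTH_ACCESS_TOKEN");
          try (Connection conn = DriverManager.getConnection(url, user, accessToken)) {
              System.out.println("Connected to " + conn.getMetaData().getDatabaseProductName());
          }
      }
  }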

Monitoring the file system usage

This release introduces a new feature to monitor how Incorta services use the file system and to collect file system metrics. For now, this feature supports Google Cloud Storage (GCS) only.

This feature is disabled by default. Contact Incorta Support to enable the feature for each service whose requests (method calls) you want to monitor and to set the interval for logging the file system metrics in the service log file and the newly introduced file system audit file. Additionally, the feature can be configured to log detailed metrics in the tenant log file; however, it is not recommended to keep this property enabled, as it might cause performance degradation.

Note

Enabling or disabling this feature does not require restarting the respective service; however, updating any other property, including changing the time interval or enabling or disabling the detailed metrics logging, requires restarting the related service.

SQLi

Advanced SQL Interface

In this release, Incorta is introducing a new advanced SQLi that is fully Spark SQL compliant. This compliance brings enhanced performance and compatibility with more external tools. You can also connect to the advanced SQLi using the Kyuubi connector.

Note

The Advanced SQL Interface is a preview feature.

The Advanced SQL interface is disabled by default. To enable it, the CMC admin must log in and turn on the Server Configurations > Incorta Labs > Enable Advanced SQL Interface toggle.

When enabling and working with the advanced SQLi, you must be aware of the following:

  1. Enabling the advanced SQLi will automatically enable the Null Handling feature, regardless of any previous configurations.
  2. You must restart both the Analytics and Loader services.
  3. The username is always written in the format <username>%<tenant-name>.
  4. To authenticate with Incorta, you must generate a personal access token (PAT) to use as the password when connecting to the advanced SQLi (see the connection sketch after this list).
  5. If you want to use OAuth 2.0 for authentication instead of a PAT, you must enable the OAuth 2.0-based authentication for JDBC connection option in the CMC under Default Tenant Configurations > Integrations.
  6. The advanced SQLi retrieves verified views only. Querying a non-verified view results in a "not found" error.
  7. Using the advanced SQLi requires data to be synced with the Spark Metastore.
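
The following is a minimal connection sketch for points 3 and 4 above. Because the advanced SQLi is reachable through the Kyuubi connector, a Hive Thrift-compatible JDBC endpoint is assumed here; the driver choice, host, and port are illustrative and should be verified against your environment:

  import java.sql.Connection;
  import java.sql.DriverManager;
  import java.sql.ResultSet;
  import java.sql.Statement;

  public class AdvancedSqliSketch {
      public static void main(String[] args) throws Exception {
          // Assumption: a Hive Thrift-compatible endpoint; host and port are illustrative.
          String url = "jdbc:hive2://incorta-host:10000/";
          String user = "analyst%demo_tenant";        // <username>%<tenant-name>
          String pat = System.getenv("INCORTA_PAT");  // personal access token as the password
          try (Connection conn = DriverManager.getConnection(url, user, pat);
               Statement stmt = conn.createStatement();
               ResultSet rs = stmt.executeQuery("SELECT 1")) {
              while (rs.next()) {
                  System.out.println(rs.getInt(1));
              }
          }
      }
  }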

Known limitations

  • Only a single analytics service is supported.
  • Geo data type is not supported.
  • Spark Metastore does not support synchronizing SQL views and Analyzer views.
  • If multiple schemas exist with the same name (regardless of case), only one of them is synchronized.

For more information, refer to References → Advanced SQL Interface.

Querying non-optimized tables via Advanced SQL Interface

Non-optimized tables are now accessible via the Advanced SQL Interface. As a result, external tools, such as Tableau and Power BI, can now discover and query non-optimized tables that do not have any of the following:

  • Security filters
  • Formula columns
  • Encrypted columns

Contact Incorta Support to help you configure your cluster.

Allowing direct communication between SQLi and Data Agent

After introducing the stand-alone SQLi Service in previous releases, this release introduces an enhancement to allow direct communication between the Data Agent and SQLi services so that the SQLi service can successfully:

  • Discover and query tables and views that contain external session variables based on a data source that uses the data agent.
  • Resolve external session variables based on a data source that uses the data agent.
Important

This release uses the Data Agent version 8.2.1. You must upgrade to this version to avoid data loading failures due to communication issues between the SQLi service and the data agent. Contact Incorta Support to get the installation files.

After upgrading to the 2024.1.3 On-Premises release from a release earlier than 6.0.2, follow these steps:

  1. Stop the old data agent version.
  2. Go to the installation path of the SQLi service: /<InstallationPath>/IncortaNode/sqli/services/<service_guid>/incorta/
  3. Open the service.properties file, add the following line, and save your changes:
     service.type = sqli
  4. Under the Cluster Management Console (CMC) → Server Configurations → Data Agent, set the following new properties:
    • SQLi Data Agent Port: The port where the SQLi service listens for data agent connection requests.
    • SQLi Public Hosts and Ports: The host and port used by the data agent to establish a connection with the SQLi service.
  5. Restart the SQLi service.
  6. Sign in to the Incorta Analytics platform, go to the Data Manager, select the Data Agents tab, and for the data agent you want, select Regenerate Authentication File.
  7. Deploy the new data agent.
  8. Copy the newly generated authentication file to the /conf directory under the new data agent installation folder.
  9. Start the data agent.

Fixes and Enhancements

In addition to the new features and significant enhancements mentioned above, this release introduces some additional enhancements and fixes that help make Incorta more stable, engaging, and reliable.

Enhancements

Analytics
  • Displaying the exact reason for query interruption, whether it is due to an API call, exceeding the query timeout limit, or updating the underlying data.

CMC
  • The default value of the Insight Max Groups UI Default option in the CMC > Tenant Configurations > Advanced has been reduced from 1 billion to 500 million to improve overall system performance and prevent excessive resource usage during the calculation of a large number of group dimension values.
    Notes:
      ●  If you have changed the value of this property before, the set value will be maintained; otherwise, the default value will be adjusted automatically.
      ●  This adjustment might affect insights that exceed the specified group count and do not have a configuration set at the insight level.

Component SDK
  • SDK components now support building insights over result sets.
  • Several enhancements in the slicer, visual table, and image components.
  • Sending a dashboard now ensures all insights are rendered before sending.

Connectors
  • The Oracle Fusion and SQL-based connectors now support decimal data types.
  • The Salesforce connector bulk query now supports PK chunking.
  • Extracting Fusion tables is now more resilient.

Dashboards
  • When filtering a numeric column on a dashboard runtime filter, all the values of the numeric column are displayed instead of limiting the number of returned values to 500.
  • When adding multiple hierarchical prompts or hierarchical grouping dimensions from the same table, the Dashboard Filters dialog narrows down the available values of the child levels according to the values selected at the higher levels.
  • Multiple enhancements in the dashboard’s edit mode to improve the user experience.
  • Better dashboard performance for insights with more than 1000 pages.
  • Enhanced the mechanism of fetching prompt values to allow using the caching capabilities and the query timeout feature.
  • Dashboards can now have more than 10 presentation variables.
  • Enhanced the performance when filtering data using the Not In operator.

Engine
  • The Engine Concurrent Service Thread Throttling feature is now enabled by default to allow running more queries concurrently. Additionally, Incorta log files now include a record each time thread throttling is initiated for a query.

Functions
  • The validation on the lookup() function introduced in releases 6.0.1 and 5.2.9 is now relaxed to allow:
      ●  Using columns from business schema views with columns from the source physical schema object in the same expression
      ●  Limiting the validation to creating new or updating existing expressions
      ●  Existing expressions where the result lookup field and primary key field parameters are not from the same object or its business schema views not to return an #Error value

Marketplace Components
  • Multiple enhancements in the Dual KPI component.

Notebook
  • Upgraded the Apache Zeppelin notebook to version 0.10.1.

Scheduler
  • Enhanced the Scheduler to respect the start date when scheduling jobs to run every defined number of months.
  • Schema load jobs now handle daylight saving time more resiliently.

Visualizations
  • Show the date on the insight y-axis when adding a measure of type date.
  • Enhanced the calculation of the year function when used in an insight dimension.
  • Enhanced performance and reduced memory usage when rendering flat listing tables (tables that include only measures) with no filtering or sorting applied to the insight itself, the dashboard, or the source table.

Fixes

Built-in Functions
  • The like() function did not handle trailing patterns properly; it returned true when the field contained the pattern regardless of the position of the matching characters.
    Note: Fixing this issue requires loading all tables that use this function from staging after upgrading to this release from releases before 6.0.2.

CMC
  • Tenant backup failed as a result of stuck threads in the inspector tool, which caused the inspector tool to fail as well.

Dashboards
  • Selecting text in a cell did not filter data.
  • The total row in an aggregated table displayed twice if a filter did not retrieve any values.
  • Search results for prompts were not ordered properly according to relevance, with the exact value on top.
  • Issues when downloading and sharing dashboards in PDF and HTML formats.
  • Couldn’t set the value of prompts based on formulas that referenced date columns.
  • Formatting and color options applied to insights were not maintained when sharing dashboards in PDF format via email.
  • Searching the values of a formula column in the Filters dialog did not return any matching values in the case of formula columns from business schema views or formula columns added as prompts.

Data Lineage
  • Issues with the overall system performance, especially when rendering dashboards, due to excessive logging operations related to the Data Lineage feature.
  • An intermittent Could not get downstream dependency error when trying to view dependencies for materialized views.

Materialized Views
  • MVs intermittently failed with the error Exception in thread "main" java.lang.IllegalArgumentException: requirement failed: Did not get the first delta.
  • MVs got stuck due to an exception produced by Spark.

Physical Schemas
  • The Search bar in the Table Manager searched only by the column name and ignored the column label.
  • The Save button was not available when trying to save changes to alias tables.
  • Compaction failed with an ArrayIndexOutOfBoundsException error.
  • Load jobs might get stuck in the queue if some load jobs were still in the Zookeeper queue before upgrading to 2022.11.0 or later.

Scheduler
  • Monthly jobs with a start date in December were scheduled one year ahead.

Visualizations
  • When selecting the end date of a between operator or editing a selected period, the calendar didn’t respect the selected dates as it did in earlier releases and showed the current date instead.
  • Rich Text and Advanced Map insights did not resolve variables.

Known Issues

  • Known issue: Sending multiple schemas concurrently to the same target schema (dataset) in a Google BigQuery data destination may fail the first time if the target schema (the BigQuery dataset) does not yet exist.
    Workaround: Do one of the following:
      ●  Send (load) one schema first, and then send all other schemas. You can create a load plan with one schema in a group and all other schemas in another group.
      ●  Create the dataset in the BigQuery project before sending the schemas concurrently.
      ●  Execute the load plan again or manually load the failed schemas.

  • Known issue: Uploading custom-built visualization components (.inc files) to the marketplace of a cluster that uses an Oracle metadata database results in an Internal SQL exception error.

  • Known issue: An issue with the unique index sequential calculation might cause the failure of loading tables and join calculations during Post-load.
    Workaround: Make sure that the unique index parallel calculation (the default behavior) is NOT disabled. Open the engine.properties file on each Loader Service node, and either remove the engine.parallel_index_creation=false entry or set its value to true.

  • Known issue: Load from staging jobs will get stuck and keep running endlessly when the following CMC configurations are set as follows:
      ●  The Enable automatic merging of parquet while loading from staging toggle is turned on.
      ●  The Enable dynamic allocation in MVs toggle is turned off.
      ●  The value of Materialized view application cores is less than the value of spark.executor.cores in the Extra options for Parquet merge option.
    Workaround: Do one of the following in the CMC > Server Configurations > Spark Integration:
      ●  Make sure that the Materialized view application cores value is greater than or equal to the value of spark.executor.cores in the Extra options for Parquet merge option.
      ●  Turn on the Enable dynamic allocation in MVs toggle.

  • Known issue: For versions 2.0.1.0 to 2.0.1.7 of the Data Lake connectors (Azure Gen2, Data Lake Local, FTP, Google Cloud Storage, Apache Hadoop (HDFS), Amazon S3, and SFTP), users who use Wildcard Union on directories containing a large number of files might encounter load failures or experience longer load times.
    Workaround: Upgrade to connector version 2.0.1.8.

For all known issues and workarounds in the latest Incorta releases, refer to Known Issues.