Release Notes 2021.3.1

Release Highlights

The goal of the Incorta 2021.3.1 release is to enhance data management and analytics capabilities. This release introduces several major improvements to the Incorta Loader Service and the Incorta Analytics Service, in addition to other services. It also introduces a new major feature, the Data Wizard, which guides you step by step from importing your data from an external data source to creating insights. The release also offers a new Cosmos DB connector and other enhancements, such as a configurable chart legend box.


New features

There are several new features in this release:

Data Wizard

This release introduces a new interactive Data Wizard. Use the Data Wizard to create an external data source, local data file, or local data folder. It also allows you to create a physical schema, load data, and start building insights through easy sequential steps.

Note

The Data Wizard is available only in new Incorta Cloud clusters. If you update an existing Incorta Cloud cluster to 2021.3.1, the Data Wizard is not available.

Open the Data Wizard

To open the Data Wizard, follow these steps:

  • In the Navigation bar, select the Home tab.
  • In Home, select the Data Wizard card.

Connect with your data

This step allows you to select a connector or to upload a data file or a folder of data files.

  • Select one of the connectors.
  • Specify the values for various properties, or upload one or more data files or a folder of files.
  • Consult the administrator of the data source to help specify properties.
  • Select Test and Create Connection.

Define Schema

After you create an external data source connection or upload one or more files, you can then select the objects that you want to create as physical schema tables.

  • Select the data source from the All Tables panel.
  • Select all tables to review and edit their details, such as the Incorta label, type, and function.
  • Select Proceed.
  • Enter a name and description for your physical schema.
  • Review the physical schema summary and edit if needed.

Note

You can also create Custom SQL tables. For more information, review the Schema Wizard document.

Select Load Data

Load Data represents an Extraction, Transformation, and Load process. When you load data with the Data Wizard, you extract data from the data source, create Parquet files in Shared Storage, enrich your data, create a Direct Data Map for related physical schema tables, and then load into memory the Direct Data Map and the data for the physical schema.

If applicable, the Data Wizard will prompt you to create joins between physical schema tables for the physical schema.

Note

Depending on the connector, the Data Wizard may create circular joins that result in errors.

After the load process is finished, select Explore Data.

Explore Data

You can use the Analyzer to explore and analyze your physical schema.

  • To create insights for a new or existing dashboard, select Explore Data.

Cosmos DB Connector

Azure Cosmos DB is a fully managed NoSQL database for modern application development. It is Microsoft's proprietary, globally distributed, multi-model database service for managing data on a global scale. It is also schema-agnostic and horizontally scalable.

To learn more, review Cosmos DB connector.

Oracle B2C Service Connector

Oracle B2C Service delivers comprehensive customer experience applications that drive revenue, increase efficiency, and build loyalty.

You can easily extract data from the Oracle B2C Service REST API into Incorta. Using the Data Wizard or Schema Wizard, the connector supports extracting data from the Connect Common Object Model (CCOM).

In addition, the connector supports read-only extraction from managed tables using the RightNow Object Query Language (ROQL). After creating your initial physical schema with the Data Wizard, you can create additional physical schema tables for your Oracle B2C physical schema that specify both a Discovery Query and a normal Query in ROQL in the table data source properties, as in the sketch below.
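The following is a rough illustration of what such a query pair might look like. The Incident object, the column names, and the query shapes are hypothetical assumptions; the exact ROQL you enter in the Discovery Query and Query properties depends on your Oracle B2C Service data model, so treat this as an illustration rather than connector documentation.

```python
# Hypothetical ROQL strings for an additional physical schema table.
# Object and column names are illustrative; adjust them to your own
# Oracle B2C Service data model.

# A Discovery Query might return a small sample that Incorta can use to
# discover the table's columns and data types.
discovery_query = (
    "SELECT Incident.ID, Incident.Subject, Incident.UpdatedTime "
    "FROM Incident LIMIT 10"
)

# The normal Query can then extract the full, read-only result set,
# optionally restricted by a filter such as an update timestamp.
query = (
    "SELECT Incident.ID, Incident.Subject, Incident.UpdatedTime "
    "FROM Incident "
    "WHERE Incident.UpdatedTime > '2021-01-01T00:00:00Z'"
)
```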

To learn more, review Oracle B2C Service Connector.


Additional improvements and enhancements

This release has the following improvements and enhancements:

Scalable PK index calculation

This release supports the scalability of the PK index calculation process, especially during an incremental load job. The implemented enhancements are as follows:

  • Parallel and scalable handling of Parquet files during an incremental load job to enhance CPU utilization
  • Using temporary files when comparing existing and new records to reduce the required disk space
  • Loading only new Parquet files into memory instead of loading new and old files to reduce memory overhead

Important

This is the default configuration for an Incorta Cloud cluster.

Enforce Primary Key constraint

A user who belongs to a group with the Schema Manager role often defines one or more key columns for a physical schema entity object, such as a physical schema table or materialized view. The table data source for this object usually contains unique values for the key column, or at least unique records.

In this release, you now have the option to either enforce the calculation of the primary key at the object level or skip this calculation to optimize data load time and performance. This scenario applies to full load jobs only. In incremental load jobs, the Loader Service must compare existing and new data to avoid data duplication when key columns are defined.

When the physical schema object (that is, table or materialized view) has at least one key column, the Table Editor for this object shows the Enforce Primary Key Constraint option to enable or disable this feature.

Important

This feature requires enabling the scalable PK index calculation at the engine level. This is the default configuration for an Incorta Cloud cluster.

When the Enforce Primary Key Constraint option is enabled, which is the default state, the Loader Service evaluates each value in the key column(s) and enforces record uniqueness. This process requires heavy resource utilization, especially for huge datasets.

If you disable this option for an object, the Loader Service loads the source data without checking record uniqueness, which improves load time and performance.
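To make the trade-off concrete, here is a minimal PySpark sketch of the two behaviors. It is a conceptual analogy only, not Incorta's internal implementation, and the storage path and key column name are assumptions.

```python
# Conceptual sketch only -- not Incorta's internal implementation.
# It contrasts enforcing record uniqueness on a key column with
# loading the data as-is during a full load.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pk-constraint-sketch").getOrCreate()

# Hypothetical extracted data with "order_id" as the key column.
orders = spark.read.parquet("/shared-storage/sales/orders")

# Enforce Primary Key Constraint enabled: every key value is checked and
# duplicate records are collapsed, which adds a full shuffle of the dataset.
enforced = orders.dropDuplicates(["order_id"])

# Option disabled: the data is loaded without the uniqueness check,
# avoiding the shuffle and shortening the load.
as_is = orders
```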

Automatic load ordering for materialized views

This release introduces the automation of load ordering of materialized views within a load group. The automated approach includes detecting the materialized view dependencies within the load group and ordering the load accordingly. This should both relieve the burden of manually defining the materialized view load ordering and allow for better performance and optimal resource utilization.

The new implementation detects the materialized view dependencies and virtually divides the materialized views within a load group into sub-groups, where independent materialized views load first, then their dependent materialized views, and so on. You can include the materialized views you want in one load group and leave the load ordering decision to the load job.

For materialized views with cyclic dependencies, ordering the materialized view load depends mainly on the alphabetical order of the materialized view names.
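The following is a minimal sketch of this kind of layered ordering with an alphabetical fallback for cycles. It assumes the load group is represented as a simple dependency map; it illustrates the approach, not Incorta's actual code.

```python
# Minimal sketch of layered load ordering with alphabetical tie-breaking.
# This is an assumption about the approach, not Incorta's actual code.

def order_load(dependencies):
    """dependencies maps each materialized view name to the set of
    materialized views it reads from within the same load group."""
    remaining = {mv: set(deps) for mv, deps in dependencies.items()}
    layers = []
    while remaining:
        # Independent materialized views load in the next sub-group.
        ready = sorted(mv for mv, deps in remaining.items() if not deps)
        if not ready:
            # Cyclic dependencies: fall back to alphabetical order.
            ready = sorted(remaining)
        layers.append(ready)
        for mv in ready:
            del remaining[mv]
        for deps in remaining.values():
            deps.difference_update(ready)
    return layers

# Example: mv_c depends on mv_a and mv_b, so it loads in a later sub-group.
print(order_load({"mv_a": set(), "mv_b": set(), "mv_c": {"mv_a", "mv_b"}}))
# [['mv_a', 'mv_b'], ['mv_c']]
```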

Note

The release still maintains backward compatibility with the existing load ordering implementation through user-defined materialized view load groups. You can still enforce the materialized view load order by defining and ordering separate groups for loading materialized views.

Materialized view column encryption

This release introduces support for encrypting and decrypting a column in a materialized view. Using the Table Editor, you can set the encryption property for a materialized view column.

Materialized view column support for integer value dates

By default, Spark 2.4.3 does not handle direct subtraction and addition operations between dates and integers. This release uses the date_add() function to resolve operations related to adding and subtracting dates with integer values.
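For example, a materialized view expression that previously failed when adding an integer directly to a date column can now rely on the equivalent date_add() arithmetic. The following PySpark sketch illustrates the idea; the column name and values are hypothetical.

```python
# Illustrative PySpark sketch with a hypothetical "invoice_date" column.
# In Spark 2.4.3, an expression such as invoice_date + 30 is not handled
# directly, but date_add() covers both addition and subtraction.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, date_add

spark = SparkSession.builder.appName("date-arithmetic-sketch").getOrCreate()

invoices = spark.createDataFrame(
    [("2021-03-01",)], ["invoice_date"]
).withColumn("invoice_date", col("invoice_date").cast("date"))

result = invoices.select(
    date_add(col("invoice_date"), 30).alias("due_date"),       # date + integer
    date_add(col("invoice_date"), -7).alias("reminder_date"),  # date - integer
)
result.show()
```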

Dashboard Filters Manager

This release introduces a new user interface for the Dashboard Filters Manager that is similar to the Formula Builder and Analyzer. To learn more, review Dashboard Filters Manager.

Sort within Groups insight settings property

This release introduces a new Sort within Groups insight settings property. A dashboard developer can configure this property in the Settings panel using the Analyzer. When enabled, a dashboard consumer can sort columns for the insight within the grouping dimension. When disabled, a dashboard consumer can sort the insight without respect to the grouping dimension.

Note

The Sort within Groups property is only available when the Merge Rows property or the Subtotals property is disabled.

Legend position insight settings property

A dashboard developer can now configure the legend position for insights with supported visualizations. Here are the steps to configure the Legend Position insight settings property:

  • In the Action bar, select Settings (gear icon).
  • In the Layout section, enable the Legend toggle.
  • For Legend Position, select Top, Left, Bottom, Right, or Auto (default).

Format insight title

A new Edit insight title dialog is now available in the Visualization canvas in the Analyzer. You can change the font style, size, position, color, and alignment of the insight title.

Manage datasets with the Manage Data Set panel

In the Data panel of the Analyzer, the Formula Builder, and the Dashboard Filters Manager, you can now manage the related datasets as follows:

  • For the Analyzer, the related datasets pertain to an insight, an Incorta Analyzer Table, or an Incorta Analyzer View.
  • For the Dashboard Filters Manager, the related datasets pertain to the dashboard for prompts, filter options, and applied filters, as well as for new insights.
  • For the Formula Builder, the related datasets pertain to the formula expression or filter expression.

The Manage Data Set panel contains two tabs: Views and Tables. The Views tab contains one or more business schemas in a tree. A business schema can contain a business schema view or an Incorta Analyzer View. The Tables tab contains one or more physical schemas in a tree. A physical schema can contain an alias, physical schema table, materialized view, Incorta Analyzer Table, or Incorta SQL Table.

Use the Manage Data Set panel to select one or more items for the dataset.

Note: Incorta Analyzer View limitations

For a dashboard dataset, avoid combining an Incorta Analyzer View with another business schema or physical schema. An Incorta Analyzer View has runtime limitations that may negate dashboard filters and dashboard runtime filters for insights with query plans that differ from the Incorta Analyzer View.

Here are some of the options available in the Manage Data Set panel:

  • Search for a specific physical schema, business schema, table, or column
  • Select a parent or child item in the tree

Remove unused tables or views

This release introduces a new option in the Data panel of the Analyzer to remove unused tables or views. Here is how:

  • In the Data panel, select More Options (⋮ vertical ellipsis icon), and then select Remove Unused Tables/Views.

Drill down indicator on a column

A new dashboard user interface enhancement is available in this release. When a column has a drilldown option in an insight, an indicator appears in the top right corner of the column. Hover over the drilldown indicator to view the tooltip: “Drilldown is enabled for this column”.

Count and Distinct functions default format

In this release, in the Analyzer, a measure pill with the Aggregation property set to Count or Distinct has the following Format property default values:

Format property         Default Value
Number Format           Rounded
Decimal Places          0
Thousands Separator     true (enabled)

User Manager role improvements

In this release, a user who belongs to a group with the User Manager role, or with any role other than the SuperRole, can no longer manage the assignment of the SuperRole. Now, only a Super User or a user who belongs to a group with the SuperRole can manage the assignment of the SuperRole and perform the following actions:

  • Assign the SuperRole to a group
  • Add a user to a group with the SuperRole
  • Roll back or unassign the SuperRole from a group
  • Remove a user from a group with the SuperRole
  • Delete a user that belongs to a group with the SuperRole
  • Delete a group with the SuperRole

Note

The new implementation will not roll back the SuperRole assignment for existing users or groups to which a User Manager previously assigned the SuperRole.