Release Notes 5.1.2
The goal of the 5.1.2 release is to enhance the analytical capabilities, data management, security, automation, resource utilization, integration, and performance of the Incorta Direct Data Platform™. To that end, this release introduces major improvements in the lifecycle management of the physical schema update process, Incorta Analyzer and SQL tables, and materialized views (MVs). There are also several enhancements related to dashboard interactivity, data visualizations, and advanced geospatial capabilities.
This release also comes with new built-in functions, multiple new data source connectors, and data APIs for external machine learning tools. Also in this release, Microsoft OneDrive is available as a data destination. In addition, the release improves security management and simplifies configuration. The release alleviates the impact of daylight saving time on job schedules through the use of time zones instead of GMT offsets. This release is also compatible with the new version of the Excel Add-in.
The Oracle 11g database version is no longer supported for the Incorta Metadata database. For more details, refer to the Additional Upgrade Information section below.
- Dashboard Filters Manager new user interface
- Analyzer enhancements
- Advanced Map visualization enhancements
- Multiple drill-down configurations for a single column
- Y-axes minimum and maximum values for the Combo Dual-Axis visualization
- Time Series Analytic, Conversion, and Boolean built-in functions
- Support for additional parameters in boolean functions
- Formula Builder comments
- Oracle B2C Service connector
- Cosmos DB connector
- SFTP and FTP Data connectors
- Oracle Cloud ERP connector enhancements
- Microsoft OneDrive as a Data Destination
- LDAP synchronization from the Security Manager
- User Manager role improvements
- Cluster Management Console (CMC) configurations for Single Sign-On (SSO), MV memory, Python version, and Machine Learning (ML) library
- Scheduler support for time zones and for running jobs within daily time windows
- Tenant Management Tool (TMT) command to migrate scheduled jobs for daylight saving time (DST)
- Azure Active Directory Authentication Support for SQLi Connections
- Inspector Tool lineage insight and support for MVs based on Scala and R
- Enhanced lifecycle management of the physical schema update process
- Interrupt long-running dashboards that block sync processes
- Materializing Incorta Analyzer and SQL tables to parquet
- Scalable PK index calculation
- Skip or enforce Primary Key constraint
- Parquet file read optimization
- Disable chunking for incremental loads
- Materialized view enhancements
- Reduced I/O operations when reading columns from the same object
This release also includes other enhancements and fixes.
This release introduces a new user interface for the Dashboard Filters Manager that is similar to the Formula Builder and Analyzer. With this new user interface, you are able to drag and drop multiple columns as dashboard filters. To learn more, refer to Dashboard Filters Manager.
This release introduces multiple enhancements to the Analyzer including the following:
- A new information icon is now available for columns in the Attribute field of a Hierarchy table. To view the information, click > to the right of a column pill, and then click Column Details (i icon) at the top of the Attribute panel. This feature is available for all columns in the Attribute field except the first column.
- A new Base table indicator is now available to show which column pills in the Measure tray of an insight have the Base Field set. To learn about defining a base table, refer to Concepts → Base Table.
- The Analyzer’s color palette is now consistent with the Dashboard’s preset color palette in Configure Settings. Additionally, the hex code of a customized color is now visible in the Properties panel of a column pill.
- Sort By option for formula columns in the Grouping Dimension tray in Aggregated and Pivot Table visualizations
- The ability to configure the legend position for insights with supported visualizations
- The ability to format the insight title in the Analyzer. You can change the font style, size, position, color, and alignment position of the insight title.
- Support for removing unused tables or views in the Analyzer Data panel. In the Data panel, select More Options (⋮ vertical ellipsis icon) → Remove Unused Tables/Views.
- A new default format for a measure pill with the aggregated function set to Count or Distinct: rounded number with thousands separator and zero decimal places
- Support for sorting within groups. Enable this feature in the insight Settings panel. When enabled, a dashboard consumer can sort columns for the insight within the grouping dimension. When disabled, a dashboard consumer can sort the insight regardless of the grouping dimension.
- Enhanced dataset management where you can manage the related datasets as follows:
- For the Analyzer, the related datasets pertain to insight, Incorta Analyzer Table, or Incorta View.
- For Dashboard Filters Manager, the related datasets pertain to the dashboard for prompts, filter options, and applied filters, as well as for new insights.
- For the Formula Builder, the related datasets pertain to the formula expression or filter expression.
The Advanced Map visualization now includes the following:
- Clustering, which allows you to group large numbers of data points together on a map to improve performance and presentation.
- A new Custom Shape option that is available as a Geo Role in the Geo Attribute properties panel of the Layer tray.
- Improved query performance as the result of separate queries run for each individual map layer.
For more information, refer to Visualizations → Advanced Map.
Starting with this release, you can drill down to different dashboard tabs from a single insight column. If you add multiple configurations for a given column to drill down to the same dashboard tab, only one option for all of them will be available when you select this column on the insight.
In addition, when a column has a drill-down option in an insight, an indicator appears in the top right corner of the column. Hover over the drill-down indicator to view the tooltip: Drilldown is enabled for this column.
You can now set minimum and maximum values for the y-axes in the Combo Dual-Axis visualization settings. For more information, refer to Visualizations → Combo Dual-Axis.
The following new built-in functions are now available:
- ago() time series analytic function
- toDate() time series analytic function
- toDate() with ago() combined time series analytic function
- dayOfYear() conversion function
- isAlpha() boolean function
- isNumeric() boolean function
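As a rough illustration of what these new built-in functions compute, the following Python sketch mirrors their likely semantics. This is an assumption-laden analogue, not Incorta's implementation: the exact behavior of `isAlpha()`, `isNumeric()`, and `dayOfYear()` (for example, null handling or locale rules) may differ.

```python
from datetime import date

def is_alpha(s: str) -> bool:
    # Analogue of isAlpha(): True when the string is non-empty and alphabetic.
    return s.isalpha()

def is_numeric(s: str) -> bool:
    # Analogue of isNumeric(): True when the string parses as a number.
    try:
        float(s)
        return True
    except ValueError:
        return False

def day_of_year(d: date) -> int:
    # Analogue of dayOfYear(): 1-based ordinal day within the year.
    return d.timetuple().tm_yday

print(is_alpha("Incorta"))           # True
print(is_numeric("5.1.2"))           # False: not a single numeric literal
print(day_of_year(date(2021, 2, 1))) # 32
```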
The Formula Builder now supports multi-line comments as follows:
/* Your multi-line comment here */
Oracle B2C Service delivers comprehensive customer experience applications that drive revenue, increase efficiency, and build loyalty. You can easily extract data from the Oracle B2C Service REST API into Incorta. Using the Schema Wizard, the connector supports extracting data from the Connect Common Object Model (CCOM). In addition, the connector supports read-only extraction from the RightNow Object Query Language (ROQL) managed tables. You can create physical schema tables for your Oracle B2C physical schema that specify both a Discovery Query and normal Query using ROQL in the table data source properties. To learn more, refer to Oracle B2C Service Connector.
Azure Cosmos DB is a fully managed NoSQL database for modern application development. It is Microsoft’s proprietary, globally distributed, multi-model database service for managing data on a global scale. It is a schema-agnostic and horizontally scalable database service as well.
To learn more, refer to Cosmos DB connector.
New Data Connectors are available for SFTP and FTP to enable you to connect to data lakes that use secure and non-secure file transfer protocols. For more information, refer to Connectors → SFTP and Connectors → FTP.
This release enhances the Oracle Cloud ERP connector as follows:
- The File Name Pattern property now accepts a custom value for an Oracle Universal Content Management (UCM) extract in addition to the default.
- A File Extension property is now available to support the .pkcsv file format from the Oracle UCM. This file contains only primary keys and is generally used for row deletions.
This release introduces Microsoft OneDrive as a data destination to which you can export one or more supported insights. To learn more, refer to Concepts → Data Destination.
This release introduces an enhanced process to import and synchronize domain users and groups with Incorta using the LDAP protocol. As a CMC administrator or a Super User, you can access the Security Manager and synchronize domain users, groups, and their relations using a .properties configuration file that maps the LDAP attributes to user and group details in the Incorta metadata database. You can also download a template of this file to help you provide the required information. You can then upload this file to Incorta and have users and groups imported to or updated in the metadata database.
For more information, refer to Tools → Security Manager → Import and synchronize users and groups.
In this release, a user that belongs to a group with the User Manager role, or with any role other than the SuperRole, can no longer manage the assignment of the SuperRole. Only a SuperUser, or a user that belongs to a group with the SuperRole, can manage the assignment of the SuperRole. For more information, refer to Tools → Security Manager → User Manager role improvements.
You can now configure the following in the CMC:
- Single Sign-On (SSO). Note that additional configurations in the Default Tenant Configurations are required whether you are upgrading your Incorta instance or configuring SSO for the first time. For more information, refer to Secure Login Access → Configure SSO using CMC.
- Materialized View driver memory. The default value is 1GB. You can edit this property in Cluster Configurations → Server Configurations → Spark Integration.
- Python Path, which contains the default binary executable to use for Python materialized views in both drivers and executors. An example value is ‘python2’. This setting is used when the Python binary is not specified in the Python materialized view. You can edit this setting for a specific tenant or in Default Tenant Configurations → Advanced.
- Automatic download of the latest Machine Learning (ML) library that is compatible with your Incorta environment. Enable the Enable automatic ML library download setting in Cluster Configurations → Server Configurations → Spark Integrations. The machines that host your Analytics and Loader services must be connected to the Internet.
- Enable SQL App without restarting the Analytics service. You can enable this setting in Cluster Configurations → Server Configurations → Spark Integrations.
The Scheduler now supports the following:
- Time zones instead of GMT offsets, which alleviates the impact of daylight saving time (DST). You have the option to select a time zone if your scheduled job is not automatically converted from a GMT offset to a time zone.
- Scheduling daily jobs between a specific start time and end time, such as 11 AM to 5 PM, and selecting the recurrence of the load job in minutes or hours within the specified time window. This feature is available for a tenant user with the Schema Manager role.
For more information, refer to Tools → Scheduler.
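The arithmetic behind a daily time window with a fixed recurrence can be sketched as follows. This is a minimal illustration with a hypothetical function name; the actual Scheduler exposes the start time, end time, and recurrence through its UI.

```python
from datetime import datetime, timedelta

def runs_in_window(start: str, end: str, every_minutes: int) -> list:
    """Return the HH:MM run times for a daily window with a fixed recurrence.

    start/end are "HH:MM" strings; runs repeat every `every_minutes` minutes
    until the end of the window (inclusive).
    """
    fmt = "%H:%M"
    t = datetime.strptime(start, fmt)
    stop = datetime.strptime(end, fmt)
    times = []
    while t <= stop:
        times.append(t.strftime("%H:%M"))
        t += timedelta(minutes=every_minutes)
    return times

# A job scheduled between 11 AM and 5 PM, recurring every 2 hours:
print(runs_in_window("11:00", "17:00", 120))
# ['11:00', '13:00', '15:00', '17:00']
```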
A new TMT command is now available to migrate scheduled jobs created after DST began so that their time zones are adjusted when DST ends. As a result, these jobs run at the same local time before and after DST ends.
For more information, refer to Tools → Tenant Management Tool.
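The difference between a fixed GMT offset and a time zone can be illustrated with Python's standard zoneinfo module. The dates and zone below are illustrative, not taken from the TMT command itself: a schedule pinned to a named time zone keeps its local wall-clock time across the DST transition, while a fixed GMT offset would drift by an hour.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# A 09:00 local run time in New York, once during DST and once after it ends.
tz = ZoneInfo("America/New_York")
summer_run = datetime(2021, 7, 1, 9, 0, tzinfo=tz)   # EDT, UTC-4
winter_run = datetime(2021, 12, 1, 9, 0, tzinfo=tz)  # EST, UTC-5

# Same 09:00 wall-clock time, but the UTC offset shifts with DST; a schedule
# stored as a fixed GMT-4 offset would instead run at 08:00 local in winter.
print(summer_run.utcoffset())  # -1 day, 20:00:00  (UTC-4)
print(winter_run.utcoffset())  # -1 day, 19:00:00  (UTC-5)
```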
Azure Active Directory (AD) authentication is now available when you connect to Incorta via SQLi from third-party tools, such as DBVisualizer and Tableau.
- To enable Azure AD authentication at the default tenant level, go to Cluster Configurations → Default Tenant Configurations → Security → Authentication Type, and then select Azure AD.
- To enable Azure AD authentication at the tenant level, go to Tenant Configurations → Security → Authentication Type, and then select Azure AD.
In either case, enter your Azure AD Client ID and Client Secret. Then, save and restart the CMC, Analytics Service, and Loader Service.
For more information on how to obtain the Azure AD Client ID and Client Secret, refer to Quickstart: Register an app in the Microsoft identity platform.
The Inspector tool in the Cluster Management Console (CMC) now supports the following:
- A new minified lineage insight that detects dependencies between dashboards, columns, and physical schemas
- A tenant that contains materialized views based on Scala and R. To inspect a tenant in the CMC, navigate to Clusters → Tenants, and for a given tenant row, select ⋮ (vertical ellipsis icon) → Execute inspector now.
The enhanced lifecycle management of the physical schema update process includes the following:
- The ability to make multiple changes and then push them in one schema update job
- The ability to save changes to a draft or directly publish them into a saved version
- Indicators for updates that require loading or validating data
- The ability to list running load or update jobs that block other update jobs
- The ability to follow up with the synchronization status related to an update job
In previous releases, whenever you added, modified, or deleted a physical schema object, join relationship, formula column, runtime security filter, or load filter, a schema update job was triggered, which caused overhead on the Loader Service and the available resources.
In this release, as a Schema Manager user, you can make multiple changes to the physical schema and then apply all of these changes in one schema update job. Alternatively, you can save these changes to a draft version of the physical schema and apply them later as appropriate.
In this release, you can immediately apply the changes you have made to the physical schema using the Save Changes option, or you can keep these changes in a draft version. A draft is created automatically whenever you add, modify, or delete a physical schema object, a formula column, a join relationship, a runtime security filter, or a load filter.
For more information, refer to Tools → Schema Designer → Schema Designer modes.
This release uses indicators to highlight the need for specific user actions, such as loading data after specific updates or validating updates for materialized views and physical schema tables.
Some updates you make to physical schema objects require loading data to ensure data consistency, either a load from source (full load) or a load from staging. When you save these updates to a saved version, the Schema Designer and Schema Manager show multiple indicators for the objects that you must load. These indicators remain visible until you or another user performs the required load, or until a scheduled load job loads the related objects.
For more information, refer to Tools → Schema Designer → Saving changes that require data load.
As a Schema Manager user, you can update a physical schema table or a materialized view without validating the updates. Whether you save the updates to a draft or a saved version, the object Data Source property in both the Schema Designer and the Table Editor shows that the object has updates that are not validated.
When an update job is in the commit phase, or a load job is in the load or post-load phase, it blocks update and load jobs on dependent physical schemas. Blocked update jobs stay in the In Queue state until all blocking jobs are completed, while blocked load jobs stay in the Load phase. In this release, as a Schema Manager user, you can view a list of jobs that block an update job in the Schema Manager and the Model Update Viewer. This list shows the blocking physical schema, the type of the blocking job (load or update), and the current status of each blocking job. You can also access the details of a blocking job in the Load Job Viewer or the Model Update Viewer, according to its type.
A Sync Status section has been added to the Model Update Details page. The Sync Status section provides the following information for the selected model update service:
- The name of the node to sync
- The sync update status for the node
For more information, refer to Tools → Model Update Viewer.
This release introduces a solution for long-running dashboard queries that block synchronization processes, and accordingly other queries, without a major effect on dashboard performance. The solution interrupts a long-running query that blocks a synchronization operation after a configured time period, measured from when the query acquires the read lock.
Although multiple operations acquire a read lock on resources (physical schema objects and joins), such as searching in a dashboard, query plan discovery, and background synchronization, the solution handles only read locks acquired by dashboard queries that the engine runs. This includes the following:
- Rendering a dashboard
- Exporting a dashboard
- Downloading a dashboard tab or an insight
- Sending a dashboard, dashboard tab, or insight to a data destination
- Sending a dashboard via email
- Running a scheduled job to send a dashboard via email or to a data destination
- Rendering a SQLi query that uses the engine port
Depending on the interrupted process, whenever a query is interrupted, a message is displayed in the Analyzer, sent to the user via email, or displayed in the SQLi audit files. The message denotes that the query is interrupted because the underlying data is being updated.
This feature is disabled by default. To enable this feature or change the configured time, you need to add two keys to the engine.properties file that exists in the Analytics Service directory (<installation_path>/IncortaNode/services/<analytics_service_directory>/incorta). The two keys that control this feature are as follows:
|Description|Data type|Value|
|---|---|---|
|Enable or disable the interrupting long-running queries feature|Boolean||
|Set the time (in minutes) to wait, after the running query that blocks a sync process acquires the read lock, before interrupting the query|Integer|Number of minutes|
The minimum value is
Contact Incorta Support if you need help to configure these keys.
- Due to the Java implementation for interrupting running processes and to avoid major performance degradation, the interrupted query does not release the lock immediately. It may take some time until the query reaches an interruption check.
- This solution does not apply to synchronization that runs in the background.
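The cooperative interruption pattern described above, where the lock is released only when the running query reaches its next interruption check, can be sketched with standard Python threading. This is a conceptual analogue, not Incorta's Java implementation; all names here are hypothetical.

```python
import threading
import time

stop = threading.Event()  # stands in for the interrupt request from the sync process

def long_running_query(lock: threading.Lock) -> None:
    """Simulated dashboard query holding the read lock while doing work."""
    with lock:  # the read lock held by the dashboard query
        for _ in range(1000):
            if stop.is_set():   # the interruption check
                return          # leaving the `with` block releases the lock
            time.sleep(0.001)   # a slice of query work between checks

lock = threading.Lock()
worker = threading.Thread(target=long_running_query, args=(lock,))
worker.start()

stop.set()     # the sync process requests an interrupt...
worker.join()  # ...but the lock is only released at the next check

# The sync process can now acquire the lock and proceed.
assert lock.acquire(blocking=False)
lock.release()
print("lock released after interruption check")
```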
In this release, the Loader Service writes full parquet files, instead of snapshot ddm files, for Incorta Analyzer and SQL tables to allow the following:
- Better support for external tools access to Incorta Analyzer tables and Incorta SQL tables through SQLi
- Materialized views to reference columns in Incorta Analyzer and SQL tables (in other physical schemas only)
- Formula columns to be stored in parquet files, by materializing formula columns in Incorta Analyzer or SQL tables instead of physical schema tables, which enhances formula column performance and availability to external tools
- Using flattened result sets in many use cases, such as data destinations and analytics and machine learning workflows
This release supports the scalability of the PK index calculation process, especially during an incremental load job. The aim of this feature is to reduce memory and disk requirements and improve CPU utilization.
This feature requires enabling the scalable PK index calculation at the engine level, which is the default configuration.
For more information, refer to References → Data Ingestion and Loading → Scalable PK index calculation.
In this release, you have the option to either enforce the calculation of the primary key (PK) index at the object level or skip this calculation to optimize data load time and performance. This applies to full load jobs only. When a physical schema table or materialized view has at least one key column, the Table Editor for this object shows the Enforce Primary Key Constraint option to enforce or skip the PK index calculation.
This feature requires enabling the scalable PK index calculation at the engine level, which is the default configuration.
For more information, refer to References → Data Ingestion and Loading → Skip or enforce PK constraint.
This release introduces an enhanced mechanism for reading parquet files with duplicate rows. This leads to optimized performance, resource utilization, and memory usage when performing incremental loads and rendering dashboards.
Data chunking allows for parallel extraction of large tables. Incorta supports data chunking during the extraction phase for some connectors, such as SQL Server, Oracle DB, and Amazon Web Services (AWS) S3. Starting with this release, Incorta no longer supports data chunking with incremental loads. When you enable the Chunking setting for a physical schema table, this applies only to full loads.
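The idea behind chunked extraction is to partition a keyed table into ranges that can be fetched by separate parallel queries. The following is a minimal sketch of that partitioning arithmetic under the assumption of a numeric key; the function name and signature are hypothetical, not part of Incorta's connectors.

```python
def chunk_ranges(min_id: int, max_id: int, chunk_size: int):
    """Yield inclusive (low, high) key ranges that partition a table so each
    range can be extracted by a separate parallel query."""
    low = min_id
    while low <= max_id:
        high = min(low + chunk_size - 1, max_id)
        yield (low, high)
        low = high + 1

# A table with keys 1..10 split into chunks of at most 4 rows:
print(list(chunk_ranges(1, 10, 4)))  # [(1, 4), (5, 8), (9, 10)]
```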
This release introduces the following enhancements to MVs:
- Automatic load ordering for MVs
The release still maintains backward compatibility with the existing load ordering implementation through user-defined materialized view load groups. You can still enforce the materialized view load order by defining and ordering separate groups for loading materialized views.
For more information, refer to References → Data Ingestion and Loading → Load order and queueing.
- Support for encrypting and decrypting a column in an MV using the Table Editor
- A date_add() function to resolve operations related to adding and subtracting dates with integer values
- Support for using question marks (?) within MV scripts. For example, SELECT * FROM example WHERE example.QUESTION='How are you\?' selects all rows from the example table where QUESTION has the exact value How are you?. Similarly, SELECT * FROM age WHERE age.Inquiry='How old are you\\?' selects all rows from the age table where Inquiry has the exact value How old are you\?.
In previous versions, a question mark (?) was replaced with the from_unixtime SQL function.
This release introduces a new mechanism for reading multiple columns that exist in the same physical schema object from shared storage to reduce the number of threads used and accordingly the number of I/O operations.
In the previous implementation, the engine forked a separate thread for each column in the read request, whether in the same physical schema object or not. Each thread opened the object parquet file(s) to read the respective column. Accordingly, multiple threads could open the same parquet file, which led to unnecessary heavy I/O operations.
In the new mechanism, the engine forks only one thread for all columns required from a single physical schema object in the read request, whether for a load job or a query. This feature is enabled by default for both types of read requests. However, you can disable it separately for each request type by adding a key to the engine.properties file in the Loader Service directory or the Analytics Service directory, respectively. The path of this file is <installation_path>/IncortaNode/services/<service_directory>/incorta. The key that controls this feature is engine.read_parquet_columns_in_groups; set its value to false to disable the feature.
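The grouping described above can be sketched in a few lines: instead of one thread (and one parquet file open) per requested column, columns are first grouped by their parent object so one thread serves each object. The request data below is illustrative.

```python
from itertools import groupby
from operator import itemgetter

# A read request as (object, column) pairs; the old behavior used one thread
# per pair, each opening the object's parquet file(s) independently.
request = [
    ("sales.orders", "order_id"),
    ("sales.orders", "amount"),
    ("sales.customers", "name"),
    ("sales.orders", "status"),
]

# Group the requested columns by physical schema object.
grouped = {
    obj: [col for _, col in cols]
    for obj, cols in groupby(sorted(request), key=itemgetter(0))
}
print(grouped)
# One thread per object instead of one per column:
print(len(grouped), "threads instead of", len(request))
```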
This release is compatible with the new version of the Excel Add-in. Here are some key features of the new version:
- It is bundled with Incorta to minimize manual configuration.
- It is cross-platform, supporting both Excel for Windows and Excel for Mac.
- It supports Centralized Deployment, simplifying the deployment of the Excel Add-in to local user machines.
- It supports automatic updates.
To learn more, review External Integrations → Excel Add-in.
New Incorta Data APIs allow you to access data stored in Incorta, run queries on the data, and save data back to Incorta from the machine learning tools you prefer, including external notebooks such as Jupyter or Zeppelin. These RESTful APIs are accompanied by a Python library to allow you to seamlessly perform read, query, and save operations.
For more information, refer to Tools → Data APIs for External Notebooks.
Incorta now supports PostgreSQL interval operations. The interval data type enables you to store and manage a period of time in years, months, days, hours, minutes, seconds, and so on. The syntax for the interval type is:
@ interval [ fields ] [ (p) ]
For example:
interval '2 months ago';
interval '3 hours 20 minutes';
Refer to the PostgreSQL Interval Data Type documentation for more information.
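For fixed-length intervals such as hours and minutes, the arithmetic corresponds to ordinary duration subtraction, as this rough Python analogue shows. (Python's timedelta cannot represent variable-length units like months, so this only mirrors the hours/minutes case; the timestamp is illustrative.)

```python
from datetime import datetime, timedelta

# Rough analogue of: SELECT some_timestamp - interval '3 hours 20 minutes';
delta = timedelta(hours=3, minutes=20)
t = datetime(2021, 6, 1, 12, 0)
print(t - delta)  # 2021-06-01 08:40:00
```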
In addition to the new features and major enhancements mentioned above, this release introduces some additional enhancements and fixes that help to make Incorta more stable, engaging, and reliable.
- Fixed an issue with physical schema load from ADLS and GCS, in which incremental schema loads caused the load process to be stuck at the writing tables stage.
- Fixed an issue with interrupting load jobs in which the schema was stuck in the waiting queue and reloading the schema skipped creating a snapshot file
- Fixed an issue with Preview data in the Columns section of the Table Editor in which timestamp values are displayed in epoch format
- Fixed an issue with the Runtime security filter where the value disappears after setting it in a physical schema table.
- Enhanced the Schema Wizard to match the data types of non-measure Excel formula columns with the right data type
- Fixed an issue in the Advanced Map visualization in which adding a Color By column to only one layer of a multi-layer insight caused unexpected behavior
- Fixed an issue in the Area visualization in which the average line did not appear in the insight when + Average Line was selected for a measure
- Fixed an issue with the Pivot Table visualization in which disabling the Merge Rows setting resulted in the repetition of the column header value
- Fixed an issue with the Sankey visualization in which the aggregate filter was not applied and no data was displayed
- Optimized memory usage in Aggregated Tables by running a full garbage collection (GC) when the Aggregation property of a measure pill is set to Distinct
This release introduces the following enhancements and fixes to the Dashboard:
- Filter changes, such as adding a new filter or deleting one, can now be saved to existing bookmarks.
- Enhanced the style of tables and charts, such as font size, weight, and color
- Enhanced the font color and size in dashboard tables.
- Fixed an issue with emailing dashboards in the case of headless browser rendering
- Fixed an issue in which an error message did not display when setting the Prompts filter as a Default filter failed.
- Fixed an issue in which runtime security filter variables were not evaluated at runtime
- Fixed an issue with the
- Fixed an issue with the default value of presentation variables used in scheduled dashboard delivery that caused the displayed date to be incorrect
- Fixed an issue with session variables while rendering dashboards, in which a session variable error persisted even after the error was resolved
This release introduces the following new features, enhancements, and fixes to the Analyzer:
- The ability to update filters in an existing bookmark
- Improved style of tables and charts on Dashboards including enhanced font color and size in dashboard tables
- Fixed an issue in which the Aggregate Filter did not work for business view formula columns. The Aggregate property no longer displays for these types of columns.
- Fixed an issue that caused building header titles for a Pivot or an Aggregated Table to throw errors when the data set or the group array was empty
- Fixed an issue with the Inspector tool in which parsing failed if a session variable had an ampersand (&) in its description
- Fixed an issue in which the Inspector generated empty files when used with OpenJDK 11
- Enhanced Tableau queries to execute in parallel to improve performance for SQLi
- Fixed an issue with the Tableau external integration in which extracting large datasets (greater than four million rows) threw errors
- Resolved the connection error when using SQLi to connect to Incorta from Power BI on the SQL interface port or data store port
- Support for the KML file format in the File System connector to display geographical data.
- Fixed an issue where testing the connection with external file systems (Box, Dropbox, and Google Drive) from the Data Source dialog caused current related load jobs to fail
- Fixed an issue that caused loading from Box files not to start due to a previously stuck load job for the same physical schema object. In this release, the request sent to Box times out after a specific time interval, and the stuck load job fails, releasing the resources required by the next load jobs
- Fixed multiple issues when editing the NetSuite Web Services data source connection
- Incorta SQL engine support for OR in join conditions, which results in optimal execution time and enhanced memory utilization.
- Fixed an issue that caused creating a materialized view from a remote data source to fail
- Fixed an issue that caused SQLi to not show data for an alias when the user did not have access to the underlying object
- Fixed an issue where SQLi failed to extract large amounts of data or render long-running queries due to a problem with the SQLi shared load balancer
- Fixed an issue with SQLi queries that referenced a business schema view formula column that used the lookup function when SQLi was configured to use the Engine port
- Fixed an issue with upgrading or installing Incorta from Master in which an error occurred during the upgrade of metadata or cluster creation based on derby metadata
- To configure Incorta to send emails using Microsoft Office365, you must ensure that your Office365 email account does not use multi-factor authentication
- Fixed an issue with the Include Prompt Selections in Excel Export feature in the CMC, where enabling the feature and adding a system variable as a Prompt filter, and then downloading the insight as Excel, resulted in rendering of the variable’s name instead of value
- Fixed a case where the Analytics Service did not release the read locks after rendering a dashboard
- Fixed an issue in which a trailing space or hidden character in a value in the engine.properties file caused the wrong value to be read and, eventually, the default value to be used
- Fixed an incorrect directory path in the dirExport.sh script
- Fixed an issue with the Data Agent that caused sending a dashboard via email to fail
- Enhanced the logging of the PK-index calculation process to decrease the size of its logs
Due to the changes applied to the Incorta Analyzer and SQL tables in this release, after upgrading to release 5.1.2, you must review Incorta Analyzer tables and Incorta SQL tables in all physical schemas to disable or remove unsupported configurations. Incorta Analyzer tables and Incorta SQL tables no longer support key columns, encrypted columns, load filters, disabled performance optimization, or self joins.
You must also perform a load from staging for all physical schemas with Incorta Analyzer tables or Incorta SQL tables to materialize them to parquet files rather than snapshot Direct Data Mapping (ddm) files.
The 5.1.2 release does not support using a version of Oracle prior to 12c for the Incorta metadata database. In addition, you need to increase the maximum size of string columns in the metadata database by setting the MAX_STRING_SIZE property to EXTENDED. Contact your database administrator or Incorta Support for help configuring this property depending on your database type. Note that once you change this property to EXTENDED, the change affects all objects in the database and cannot be reverted.
For more information, refer to Oracle Documentation.
Starting with this release, Incorta uses Apache Spark 2.4.7 instead of 2.4.3 to solve the vulnerability of remote code execution (RCE). No compatibility issues or limitations with old MVs or SQLi tables should arise.
Users who are using external Spark should upgrade to Spark 2.4.7 or later.
It is also recommended to limit or restrict the network access to cluster machines to trusted hosts only.
For changes from Spark 2.4.3 through Spark 2.4.7, refer to the Spark Migration Guide.
It is recommended to allocate 3 GB of on-heap memory to the Analytics Service to avoid issues when you have multiple connections to Incorta using SQLi.
The following table illustrates the known issues and workarounds in this release:
|Known issue|Workaround|
|---|---|
|After importing a schema that has an Incorta Analyzer or SQL table with encrypted columns, the load job succeeds; however, dashboards using the Incorta Analyzer or SQL table fail.|Edit the Schema Definition and disable the Encryption flag on encrypted columns.|
|After importing a schema that has an Incorta Analyzer or SQL table with a disabled Performance Optimized flag, you cannot enable the flag from the Table Editor.|Edit the Schema Definition, enable the Performance Optimized flag for that Incorta Analyzer or SQL table, and then load the physical schema from Staging. You can also use the Schema Designer to enable this feature.|
|An SSO application cannot log in to an Incorta tenant when the tenant name contains an uppercase letter.|Use a lowercase tenant name when configuring your SSO application. For example, if your Incorta tenant is called Demo or DEMO, use demo (all lowercase).|
If you have a problem with editing the schema definition, contact Incorta Support.