Guides → Configure Server
You can manage your server through the Cluster Management Console (CMC) by configuring settings such as the SQL interface port and the proxy read timeout. Here are the steps to configure your server:
- Sign in to the CMC.
- In the Navigation bar, select Clusters > Cluster-name.
- In the Canvas tabs, select Cluster Configurations > Server Configurations.
The following server configurations are available in the CMC:
- Clustering
- SQL Interface
- Spark Integration
- Tuning
- API Security
- Customizations
- Diagnostics
- Data Agent
- Email
- Security
- Notifications
- Analytics Workload Management - available starting the 2024.7.x release
- Incorta Premium - available starting the 2024.7.2 release
- Incorta Labs - available starting the 2024.1.x release
- Incorta Data Studio - available starting the 2024.7.2 release
- Incorta Copilot - available starting the 2024.1.0 release on Cloud installations only. Contact Incorta Support to get access to Copilot.
To avoid losing your changes after configuring settings on a page, select Save before navigating to another page.
While all the configurations mentioned in this document are available to the CMC admin on On-Premises environments, some configurations are not available when using the cloud admin account to sign in to the CMC on a Cloud environment. These configurations are marked in this document with an asterisk (*). Contact Incorta Support to have any of these configurations set or altered.
Starting 2024.7.2, a Premium tag marks Premium features. To access Incorta Premium features, you must enable Incorta Premium and the related options as well. For details, refer to Incorta Premium.
Clustering
The following table illustrates the cluster configuration property:
Configuration Property | Control | Requires Restart | Description |
---|---|---|---|
Kafka Consumer Service Name* | text box | Yes, Loader service only | Enter the name of the Loader service that acts as a Kafka consumer in the following format: <NODE_NAME>.<SERVICE_NAME> Note the following: ● If you do not configure the name of the Kafka Loader service, a random Loader service will be automatically assigned, leading to unexpected results. ● If you change the Kafka Loader service consumer from one Loader service (A) to another Loader service (B), you must restart the current Loader service (A) first, and then restart the Loader service (B). |
Connectors Marketplace Mode* | dropdown list | Yes, All services | This option is available for On-Premises clusters only. Select the mode of the Connectors Marketplace: ● Offline (default): The Marketplace reads the available connector packages from /marketplace/connectors . ● Online: The Marketplace reads from Incorta connectors CMS. |
You must specify the Online Marketplace URL (Incorta CMS URL) and token (Incorta CMS Token) under Tenant Configurations > Integrations. To obtain the CMS URL and token, contact Incorta Support.
Additionally, if your on-premises cluster resides behind a firewall, you must allow connections to the following two domains to access the online Connectors Marketplace:
incorta-cms-production.incortaops.com
storage.googleapis.com/ic-production-bucket
SQL Interface
Incorta can act as a PostgreSQL database using the Structured Query Language Interface (SQLi). Thus, you can utilize Incorta’s powerful engine performance and features through other BI tools, such as Tableau.
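Because SQLi speaks the PostgreSQL wire protocol, any PostgreSQL client can connect to it. Here is a minimal sketch using Python and psycopg2; the host, port, tenant name, credentials, and queried table are all placeholders, and the port must match the SQL interface port configured below:

```python
# Minimal sketch: querying Incorta through the SQL interface (PostgreSQL protocol).
# All connection values are placeholders -- use your cluster's actual values.
import psycopg2  # pip install psycopg2-binary

conn = psycopg2.connect(
    host="incorta.example.com",  # Analytics node host (placeholder)
    port=5436,                   # the configured SQL interface port (placeholder)
    dbname="mytenant",           # tenant name (assumption)
    user="admin",
    password="********",
)
with conn.cursor() as cur:
    cur.execute('SELECT * FROM "SALES"."ORDERS" LIMIT 10')  # hypothetical schema.table
    for row in cur.fetchall():
        print(row)
conn.close()
```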
The following table illustrates the configuration properties of the SQL interface:
Configuration Property | Control | Requires Restart | Description |
---|---|---|---|
SQL interface port* | spin box | Yes, Analytics service only | Enter a port number to connect to the Incorta Engine. The Engine mode uses this port and runs the SQL interface queries against the data loaded in memory. |
Data Store (DS) port* | spin box | Yes, Analytics service only | Enter a port number to run queries directly using Spark. The Spark mode (Parquet mode) runs the SQL interface queries against the data loaded in the staging area using Incorta compacted parquet files. |
New Metadata handler port* | spin box | Yes, Analytics service only | Enter a port number for an auxiliary process to resolve metadata for SQLI clients. |
Enable SSL for SQL interface ports | toggle | No | Enable this property to use Secure Sockets Layer (SSL) for SQLI connections. You must provide the SSL certificate details (path and passphrase). |
SSL certificate (JKS) Path used for SQL interface | text box | No | Enter the path to the SQLi SSL certificate. This property is available when Enable SSL for SQL interface ports is toggled on. |
SSL certificate (JKS) passphrase used for SQL interface | text box | No | Enter the passphrase for the SQLi SSL certificate. This property is available when Enable SSL for SQL interface ports is toggled on. |
Enforce Accepting Only SSL Connections | toggle | No | Enable this property to allow SSL connections only. This property is available when Enable SSL for SQL interface ports is toggled on. |
External Tables CSV file path* | text box | Yes, Analytics service only | Enter the CSV file path of your external tables. |
Enable Cache* | toggle | No | Enable this property to cache recurring SQL operations and enhance the performance of query execution, if there is an available cache size. |
Cache size (In gigabytes)* | spin box | No | Enter the maximum caching size (in GBs) of the data returned by the SQLi queries. When the cache size exceeds the limit, the least recently used (LRU) data is evicted to free space for newer cache entries. You must set the cache size based on the available memory on the Incorta host server and the size of the common query results. |
Cached query result max size* | spin box | No | Enter the maximum threshold size for each query result, which is the table cell count (rows multiplied by columns). If the size of a query result exceeds the threshold value, it will not be cached. |
Enable cache auto refresh* | toggle | No | Enable this property to automatically refresh the cache at specified intervals. |
Refresh cache interval (In minutes)* | spin box | No | Enter the interval in minutes to automatically refresh the cache. This property is available when Enable cache auto refresh is toggled on. |
Use column labels instead of column names | toggle | No | Enable this property to use table column labels instead of column names during SQL execution. This affects the column list specified in the query and the returned column names in the result. |
SQLi Spark Connection Timeout (In minutes)* | spin box | No | Enter the time in minutes after which the SQLi connection to Spark times out. The default is 720 minutes (12 hours). |
Max retries to connect to running SQLApp* | spin box | No | Enter the maximum number of retries to connect to SQLApp. The default is 10 retries. |
Timeout interval with no connections before SQLApp terminates (In seconds)* | spin box | No | Enter the timeout interval after which SQLApp terminates when no connections are made to it from SQLi or the Analytics service. The default is 30 seconds. |
Max number of concurrent connections from SQLi to SQLApp | spin box | No | Enter the maximum allowed number of concurrent connections from SQLi to SQLApp. The default is 100. When the limit is reached, the SQLi service waits for a configurable interval (5 minutes by default) before trying to acquire a connection to SQLApp. If it fails to acquire the connection after this interval, it throws a timeout exception. This option is available starting 2024.1.4 and 2024.7.x. |
Timeout to allocate memory before SQLi rejects connection (In seconds)* | spin box | No | Enter the maximum time to keep retrying when the whole allocated memory is consumed. For example, if 30 connections consume approximately 1 GB, connection number 31 will wait 300 seconds, retrying to connect, before it times out. The default is 300 seconds. |
% of memory before throttling the new connections* | spin box | No | Enter the maximum percentage of memory to consume. If exceeded, SQLi rejects new connections. The default is 75. |
SQL Engine cache | toggle | No | Enable this property to cache SQL engine result sets. This helps speed up repetitive operations performed during query processing and execution. The default cache size is 5% of off-heap memory. |
Spark Integration
Incorta utilizes Spark to execute complex queries that the engine does not currently support and to perform queries on data residing in the staging area without loading that data into Incorta’s memory.
Starting 2024.7.x, all Incorta applications, including the Advanced SQL Interface, will use one unified Spark instance to handle the different requests. Therefore, you must consider the following when upgrading On-Premises clusters that have the Advanced SQLi previously enabled and configured:
- Take into consideration the resources required by the Advanced SQLi when configuring the Spark worker resources in the CMC.
- Post upgrade, make sure that the CMC is started and running before starting the services related to Advanced SQLi, such as the Advanced SQL service and Kyuubi, to have the configuration values updated to refer to the unified Spark instance instead of SparkX.
The following table illustrates the Spark integration configuration properties:
Configuration Property | Control | Requires Restart | Description |
---|---|---|---|
Spark master URL* | text box | Yes, all services | Enter the Spark Master URL for the Apache Spark instance to execute materialized views (MVs) or SQL queries. Here are the steps to acquire the Spark Master information: ● Navigate to the Spark host server UI (from any browser), using the following format: <SPARK_HOST_SERVER>:<SPARK_PORT_NO> ● Copy the Spark Master connection string in the format: spark://<CONNECTION_STRING>:<SPARK_PORT_NO> ● The default port number for installed Spark on Incorta is 7077 . |
Enable a custom Delta Lake Reader* | toggle | No | Enable this property to instruct Spark to use the Incorta Custom Delta Lake Reader to read data from the Delta Lake files instead of using the Spark native Delta Lake reader. The Incorta Custom Delta Lake Reader is optimized for queries on relatively small data sets. Switching between these two readers may cause the first SQLi query on the Spark port to consume more time because the SQLi engine will be restarted. Starting 2024.7.x, Incorta has deprecated its custom Delta Lake reader and uses only the Spark Delta Lake reader to read Delta Lake files. |
Enable SQL App* | toggle | No | Enable this property to start the SQL App, and then use Engine and Parquet modes to execute incoming SQL queries. |
Enable spark fallback* | toggle | Yes, SQLi services only | This property is enabled by default. Disabling it prevents the In-Memory engine from falling back to the Spark or Parquet mode when a query fails to execute; instead, an SQL error is returned. This property is available when Enable SQL App is toggled on. |
SQL App driver memory | spin box | No | Enter the app driver memory (in GBs) to be used by the SQL interface Spark. This memory is only used for constructing (not calculating) the final results. Consult with the Spark admin to set this value. |
SQL App cores | spin box | No | Enter the number of dedicated Central Processing Unit (CPU) cores reserved for the SQL interface Spark application. Ensure that there are enough cores for the OS, other services, and applications. |
SQL App memory | spin box | No | Enter the maximum memory (in GBs) used by SQL interface Spark queries. Note the following: ● You must set an adequate memory size when using materialized views. ● The memory required for both applications combined cannot exceed the Worker Memory. |
SQL App executors | spin box | No | Enter the maximum number of executors that can be spawned on a single worker. Each of the executors will allocate some of the cores defined in sql.spark.application.cores , and will use part of the memory defined in sql.spark.application.memory . The cores and memory assigned per executor are equal. Thus, the number of SQL App cores and SQL App memory is divided by the number of executors. If the number of executors is greater than the SQL App memory, the executors will consume the assigned memory at 1 GB/executor. Examples (see the sketch after this table): ● If the SQL App cores = 7 and the SQL App executors = 3, each executor will take 2 cores, and 1 of the cores will not be utilized. ● If the SQL App memory = 5 and the SQL App executors = 7, then only 5 executors will be created, with 1 GB each. |
SQL App shuffle partitions | spin box | No | Enter the number of SQL interface shuffle partitions. A single shuffle represents a block of data being processed to perform joins and/or aggregations. Note the following: ● The size of a shuffle partition increases as the size of processed data increases. ● The optimum shuffle partition size is around 128 MBs. ● Increase the shuffle partitions as the processed data size increases. However, an increase in shuffle partitions can lead to an increase in CPU utilization. ● If a query operates on a trivial amount of data, an increased number of partitions will lead to a small partition size, which may increase the query execution time due to the overhead of managing the unnecessary number of partitions. An inadequate partition size may cause the query to fail. |
SQL App extra options | text box | No | Enter the extra Spark options that you want to pass to the SQL interface Spark bridge application. These options can be used to override default configurations. Here are sample values: spark.sql.shuffle.partitions=8; spark.executor.memory=4g; spark.driver.memory=4g |
Enable SQL App Dynamic Allocation | toggle | Yes, SQLi services only | Enable this property to dynamically allocate the Data Hub Spark application by adjusting the number of application executors based on the workload. This property optimizes resource utilization as it removes idle executors, and claims them again if the workload increases. |
Spark App port* | spin box | Yes, Analytics service | Enter the port number to connect Incorta to Spark and access the Data Hub. |
Spark App control channel port* | spin box | No | Enter the control channel port number to send a shutdown signal to the Spark SQL app when the Incorta server requires Spark to shut down. |
Materialized view application cores | spin box | No | Enter the number of CPU cores for MVs. The default value is 1 . The allocated cores for all running Spark applications cannot exceed the dedicated cores for the cluster unless Dynamic Allocation is enabled. Thus, the value will be used to compute the CPU cores for the initial executors. |
Materialized view application memory | spin box | No | Enter the maximum memory size (in GBs) to use for MVs. The default is 1 GB. The memory for all Spark applications combined cannot exceed the cluster memory. |
Materialized view application executors | spin box | No | Enter the maximum number of executors that can be spawned by a single materialized view application. Each of the executors will allocate some of the cores defined in sql.spark.mv.cores , and will consume part of the memory defined in sql.spark.mv.memory . The cores and memory assigned per executor are equal. For example, configuring an application with cores=4, memory=8, executors=2, will result in spawning 2 executors, in which each executor consumes 2cores/4GB from the cluster. |
Materialized view driver memory | spin box | No | Enter the memory size (in GBs) to use in the MV driver process. The default value is 1 GB. |
Spark Submit Deployment Mode* | drop down list | No | This property is available on Cloud installations only. Specify the Spark deploy mode: client or cluster. Before this feature, all Spark jobs were executed in client mode. ● In cluster deploy mode, when executing Spark jobs (loading an MV, for example), the driver application will be created in the Spark cluster instead of the Incorta cluster where the Loader Service exists. ● In the client deploy mode, the driver application will be created in the Incorta cluster when submitting the job to Spark. A schema developer can specify the Spark deploy mode at the MV level. If the deploy mode selected at the MV level differs from the one specified at the server level, the MV configuration overrides the server configuration. |
Extra options for Materialized views and notebooks | text box | No | Enter the extra Spark options that you want to pass to MVs and notebooks. These options can be used to override default configurations. Here are sample values: spark.driver.extraJavaOptions=-Duser.timezone=America/Los_Angeles;spark.sql.caseSensitive=true |
Enable dynamic allocation in MVs | toggle | No | Enable this property to dynamically allocate resources in MVs by scaling the number of application executors up and down based on the workload. This property requires running the Spark shuffle service. |
Extra options for Data purge / Parquet merge | text box | No | Enter the Spark configurations utilized by the Data Purge and Parquet Merge Spark applications. They override default configurations, allowing users to tailor the behavior of the Spark application to meet specific requirements. For instance, users can adjust parameters such as application cores, application memory, executor memory, and driver memory. This option is available starting 2024.1.x releases for Parquet merge jobs only while it is available for both Parquet merge and data purge starting 2024.7.x. |
Enable dynamic allocation in Data purge / Parquet merge | toggle | No | Enable this property to use dynamic resource allocation, which scales the number of executors registered with the Spark applications up and down based on the workload. This option requires Spark shuffle service to be running. This option is available starting 2024.1.x releases for Parquet merge only while it is available for both Parquet merge and data purge starting 2024.7.x. |
Enable automatic ML library download | toggle | No | Enable this property to automatically download the Machine Learning (ML) library or use the bundled one. |
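To make the executor sizing above concrete, here is a small sketch of the per-executor division of cores and memory described for SQL App executors and Materialized view application executors. The integer-division and memory-bound behavior below is inferred from the table's own examples:

```python
def per_executor_allocation(app_cores: int, app_memory_gb: int, executors: int) -> dict:
    """Divide application cores and memory evenly across executors,
    mirroring the documented examples. Leftover cores go unused."""
    if executors > app_memory_gb:
        # Memory-bound case from the docs: only `app_memory_gb` executors
        # are spawned, at 1 GB each. (The core split here is not specified.)
        return {"executors": app_memory_gb, "memory_gb_each": 1}
    return {
        "executors": executors,
        "cores_each": app_cores // executors,        # remainder is unused
        "memory_gb_each": app_memory_gb // executors,
    }

print(per_executor_allocation(7, 8, 3))  # 2 cores each; 1 core left unused
print(per_executor_allocation(4, 5, 7))  # memory-bound: only 5 executors, 1 GB each
print(per_executor_allocation(4, 8, 2))  # MV example: 2 executors, 2 cores / 4 GB each
```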
Starting 2024.7.x:
- The maximum number of executors is now limited to 1000 by default.
- You can enable or disable the resource dynamic allocation at the MV level and also change the maximum executors limit.
Before 2024.7.x and 2024.1.4, both the Analytics and the SQLi services could start SQLApp, which resulted in resilience issues and scattered logs. The new enhancements aim to improve resilience and address recent issues related to SQLApp.
- Ownership and startup
- Only the SQLi service can start SQLApp. As a result, the SQLi service must be started and the Spark port must be enabled when creating or running Incorta-over-Incorta tables or SQL views based on non-optimized tables.
- Starting a SQLi service will start its associated SQLApp and terminate any existing SQLApp.
- Shutting down a SQLi service will shut down the associated SQLApp.
- Logging improvements
- Logs are now appropriately managed and not cluttered within the Analytics services.
- Enhanced SQLApp logging for easier troubleshooting.
- Connection throttling
- Introducing a new CMC option (SQL Interface > Max number of concurrent connections from SQLi to SQLApp) to control the maximum number of connections from SQLi to SQLApp.
- When the limit is reached, the SQLi service waits for a configurable interval (5 minutes by default) before trying to acquire a connection to SQLApp. If it fails to acquire a connection after this interval, it throws a timeout exception.
Spark 3.x requires a `tmp` directory with read, write, and execute permissions. You also need to specify this directory in the CMC because the SQLi reads the `spark.executor.extraJavaOptions` and `spark.driver.extraJavaOptions` configurations from SQL App extra options.
- Create a new `tmp` directory with the required permissions or use the `<InstallationPath>/IncortaNode/spark/tmp` directory.
- In the CMC > Server Configurations > Spark Integration > SQL App extra options, add the following options: `spark.driver.extraJavaOptions=-Djava.io.tmpdir=<DirWithPermissions> -Dorg.xerial.snappy.tempdir=<DirWithPermissions>;spark.executor.extraJavaOptions=-Djava.io.tmpdir=<DirWithPermissions> -Dorg.xerial.snappy.tempdir=<DirWithPermissions>` Make sure to replace `<DirWithPermissions>` with the `tmp` folder that Spark uses, whether a directory you create with the required permissions or `<InstallationPath>/IncortaNode/spark/tmp`.
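As a minimal sketch (assuming a self-managed host where you create the directory yourself; the path is a placeholder), the `tmp` directory can be created with the required permissions like this:

```python
# Sketch: create a tmp directory for Spark 3.x with read/write/execute permissions.
import os

tmp_dir = "/opt/incorta/IncortaNode/spark/tmp"  # placeholder path

os.makedirs(tmp_dir, exist_ok=True)
os.chmod(tmp_dir, 0o777)  # read, write, and execute for owner, group, and others
```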
Tuning
The following table illustrates the tuning configuration properties:
Configuration Property | Control | Requires Restart | Description |
---|---|---|---|
Enable Parquet Migration at Staging | toggle | No | Enable this property to instruct the Loader Service to create new parquet files for objects with Integer columns during load from staging jobs instead of performing a full load of these objects. This occurs only once per object. Writing new parquet files allows the Loader Service to save Integer columns as Integer instead of Long as it used to do in previous releases. |
Enable automatic merging of parquet while loading from staging | toggle | No | Available starting with 2024.1.x. Enable this property to allow Incorta to merge Parquet files of a table during load from staging jobs if required. |
Enable parquet file Compression* | toggle | No | Enable this property to compress your parquet files. It is necessary to compact parquet files when using Materialized views. |
Proxy read timeout* | spin box | Yes, Analytics service only | Enter the required read timeout for rendering a UI request in seconds. You can adjust the value based on the proxy server configurations. |
Cleanup Job Refresh Time in Minutes* | spin box | Yes, Loader service only | Enter the time interval in minutes for a cleanup job schedule to remove unnecessary files from the shared storage. The interval value cannot be less than 10 minutes. |
Retention period of load job tracking data (In months)* | drop down list | No | Available starting with 2024.1.x. Select the period for which Incorta retains load job tracking data, including load job history, load plan executions, and schema update jobs. The default is Never, which means that the feature is disabled. When you enable this feature, a cleanup job runs whenever the Analytics Service starts and every 24 hours afterward and deletes tracking data that exceeds the specified retention period. However, the tracking data of the latest successful load job or schema update job will not be deleted. |
Scheduler Pool Size* | spin box | Yes, Analytics services only | This option is available starting 2024.7.x. Specify the number of threads available to the Scheduler for the concurrent execution of scheduled jobs. A valid value is between 1 and 200 inclusive. A recommended value is between 1 and 100 . The default value is 35 . |
Schedule Misfire Threshold* | spin box | Yes, Analytics services only | This option is available starting 2024.7.x. Specify the time (in milliseconds) allowed for a trigger to execute after its scheduled fire time before the Scheduler considers it misfired. The default value is 590000 milliseconds. |
Scheduler Batch Trigger Acquisition Count* | spin box | Yes, Analytics services only | This option is available starting 2024.7.x. Specify the maximum number of triggers that a scheduler node is allowed to acquire (for firing) at once. A valid value is between 1 and 1000 inclusive. The default value is 1 . |
Extra quartz configs* | text box | Yes, Analytics services only | This option is available starting 2024.7.x. Enter additional Quartz configurations in the form of key=value pairs, each pair in a separate line. |
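As an illustration of the key=value format expected by Extra quartz configs, the following shows two standard Quartz scheduler properties, one per line; these particular keys and values are examples of the format only, not recommendations:

```
org.quartz.scheduler.idleWaitTime=30000
org.quartz.jobStore.clusterCheckinInterval=20000
```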
The following considerations apply to the Retention period of load job tracking data feature:
- When the cleanup job runs for the first time, it locks the metadata database during the deletion process. The locking duration depends on the number of records that the job will delete. Load jobs that start during the locking period will fail.
- To avoid failed load jobs while the cleanup job is running for the first time, make sure that you suspend schedules before enabling the feature. Then, start the Analytics Service only and wait for a few minutes before you start the Loader Service.
- It is recommended that you first configure the feature to start with a long retention period, then change the configuration afterward to a shorter period, and so on until you reach the required retention period. This will reduce the database lock time and the number of failed jobs when the cleanup job runs for the first time.
API Security
The following table illustrates the available configurations for API security:
Configuration Property | Control | Requires Restart | Description |
---|---|---|---|
Allow OAuth2.0 for Public APIs Authentication | toggle | No | This configuration is disabled by default. Note: You must have your own authorization server. When enabled, OAuth 2.0 works with public APIs v2 only; you cannot use an Incorta personal access token (PAT) for public API authentication, and you must update any running scripts accordingly. When disabled, authentication reverts to any previously configured PAT. |
OAuth 2.0 authorization server base URL | text box | No | Available when you enable the Allow OAuth2.0 for Public APIs Authentication option. Enter your authorization server base URL. For example, https://auth0-tenant.com/ . |
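As a hedged sketch of how a script might authenticate against public APIs v2 once this option is enabled: the token endpoint, grant type, and API path below are placeholders that depend on your own authorization server and cluster, so consult the Incorta public API documentation for the actual endpoints.

```python
# Sketch: obtain an OAuth 2.0 token from your own authorization server,
# then call an Incorta public API v2 endpoint with it as a Bearer token.
# All URLs, credentials, and the API path are placeholders.
import requests

TOKEN_URL = "https://auth0-tenant.com/oauth/token"            # your auth server (placeholder)
API_BASE = "https://your-cluster.example.com/incorta/api/v2"  # hypothetical base path

token_resp = requests.post(TOKEN_URL, data={
    "grant_type": "client_credentials",  # grant type depends on your auth server
    "client_id": "YOUR_CLIENT_ID",
    "client_secret": "YOUR_CLIENT_SECRET",
})
access_token = token_resp.json()["access_token"]

resp = requests.get(
    f"{API_BASE}/schemas",  # hypothetical endpoint
    headers={"Authorization": f"Bearer {access_token}"},
)
print(resp.status_code)
```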
Customizations
The following table illustrates the configuration properties:
Configuration Property | Control | Requires Restart | Description |
---|---|---|---|
Color Palette Mode | drop down list | No | Choose a color palette for your insights and dashboards. The options are: ● Sophisticated ● Classic ● Contemporary ● Bright & Bold ● 90s Retro ● Custom |
Custom Palette Color | text box | No | Enter the HEX codes for 12 colors to be used by the custom palette as follows: #255888, #D1481A, #285F6C,... This property is available when you set Color Palette Mode to Custom. |
Fiscal Year | drop down list | Yes (all services) | Choose the beginning month of your fiscal year. The default is January. You must restart all services for the change to take effect. |
Diagnostics
The following table illustrates the diagnostics configuration property:
Configuration Property | Control | Requires Restart | Description |
---|---|---|---|
Logging* | text box | No | Enter the logging level to specify the required information in your log files. Here are the available logging levels in descending order: ● OFF - disables logging ● SEVERE (highest value) - this level logs severe events that can cause termination. ● WARNING ● INFO ● CONFIG ● FINE ● FINER ● FINEST (lowest value) ● ALL - logs all messages The expression must be as follows: component1=level1:component2=level2 . For example, you can reduce the level of logging on your production system for the Loader service by setting the logging level to SEVERE . Enter: com.incorta.engine=SEVERE:com.incorta.server=SEVERE:com.incorta.loader=SEVERE |
Engine Tasks Lifecycle Structured Logs* | toggle | No | Turn this toggle on to enable adding extra logs in a JSON format to track the state updates of the Engine tasks, such as search and query operations. Enabling this option may result in a slight increase in the size of log files. This option is available starting 2024.7.x. |
Allow Cloud Disk Space Monitoring | toggle | No | Disabled by default. This option applies to cloud storage only. Enabling it monitors the disk space every 5 minutes, which is not recommended as it might increase the number of I/O operations. |
Data Agent
The following table illustrates the data agent configuration properties:
- In the 2024.1.x releases, enabling or disabling the Data Agent feature or changing the service ports does not require restarting the cluster or any service.
- In 2024.7.x, Incorta has introduced the Data Agent Controller to allow managing data agents from within the Incorta Analytics UI.
For details, refer to Tools → Data Agent and Tools → Data Manager → Data Agents.
Configuration Property | Control | Requires Restart | Description |
---|---|---|---|
Enable Data Agent | toggle | No | Enable this property to allow Incorta to connect and extract data from one or more data sources behind a firewall. For more information, refer to the Concepts → Data Agent document. |
Analytics Data Agent Port | spin box | No | Enter the port number of your Analytics data agent. The value cannot be less than 1 or greater than 65535 . This property is available when Enable Data Agent is toggled on. |
Analytics Data Agent Controller Port | spin box | No | Enter the port of the Data Agent Controller. The Analytics Service listens to a data agent controller service on this local port. The value cannot be less than 1 or greater than 65535 . This property is available starting 2024.7.x when Enable Data Agent is toggled on. |
Loader Data Agent Port | spin box | No | Enter the port number of your Loader data agent. The value cannot be less than 1 or greater than 65535 . This property is available when Enable Data Agent is toggled on. |
SQLi Data Agent Port* | spin box | Yes, SQLi service only | This property is available on On-Premises installations only. Enter the port where the SQLi service listens to the data agent connection requests. The value cannot be less than 1 or greater than 65535 . This property is available when Enable Data Agent is toggled on. |
Analytics Public Hosts and Ports | text box | No | Enter your Analytics public hosts and ports. Note the following: ● Changing the Analytics public hosts and ports requires re-generating the .auth files ● This property is available when Enable Data Agent is toggled on. |
Analytics Public Controller Hosts and Ports | text box | No | Enter the public DNS names or IP addresses and ports of the Data Agent Controller hosts. The data agent controller service connects to the Analytics Service using this HOST:PORT . The connection is forwarded to the specified Analytics Data Agent Controller Port. Note the following: ● Changing the public hosts and ports requires re-generating the .auth files ● This property is available starting 2024.7.x when Enable Data Agent is toggled on. |
Loader Public Hosts and Ports | text box | No | Enter your Loader public hosts and ports. Note the following: ● Changing the Loader public hosts and ports requires re-generating the .auth files ● This property is available when Enable Data Agent is toggled on. |
SQLi Public Hosts and Ports* | text box | No | This property is available on On-Premises installations only. Enter the public host and port used by the data agent to establish a connection with the SQLi service. Note the following: ● Changing the SQLi public host and port requires re-generating the .auth file. ● This property is available when Enable Data Agent is toggled on. |
Email
The following table illustrates the email configuration properties:
Configuration Property | Control | Requires Restart | Description |
---|---|---|---|
Local Rendering URL Protocol* | drop down list | No | Select the local rendering URL protocol: ● http ● https |
Local Rendering Host* | text box | No | Enter the local rendering host. For example, 127.0.0.1 . |
Local Rendering Port* | spin box | No | Enter the local rendering port number. For example, 8080 . |
Headless Browser Rendering (in milliseconds)* | spin box | No | Enter the number of milliseconds after which rendering a dashboard times out. For example, 9000 . |
Security
The following table illustrates the security configuration properties:
Configuration Property | Control | Requires Restart | Description |
---|---|---|---|
Enable Iframe inclusion | toggle | No | Enable external websites to host Incorta dashboard(s) in an iFrame. Enabling this option mandates that Incorta is accessed through SSL connections (HTTPS). |
Same Site | drop down list | No | Available only when you turn the Enable Iframe inclusion toggle off. Restrict sending session cookies to a first-party (same-site) context only and prevent the browser from sending cookies along with cross-site requests. Available options are: ● Lax (default): To prevent sending cookies on normal cross-site requests but allow sending them when navigating to your Incorta cluster. This option provides a reasonable balance between security and usability. It allows maintaining the user’s session information when accessing the Incorta cluster using an external link, for example, while blocking it in Cross-Site Request Forgery (CSRF)-prone request methods (such as POST ). ● Strict: To send cookies in a first-party context and not to send them along with requests initiated by third-party websites. This will prevent the browser from sending cookies to the target site in all cross-site browsing contexts, even when following a regular link. |
Secure | toggle | No | Available only when you turn the Enable Iframe inclusion toggle off. Allow sending cookies to the server only when a request is made using an HTTPS scheme. |
- To put the security changes into action, users have to sign out and sign in again to reset their sessions.
- Turning the Enable Iframe inclusion toggle on is similar to setting the `SameSite` attribute to `None`: the browser attaches the session cookies in all cross-site browsing contexts.
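For illustration, the three configurations roughly correspond to the following Set-Cookie header shapes (the cookie name is illustrative; note that browsers require the Secure flag with SameSite=None, which is why iFrame inclusion mandates HTTPS):

```
Set-Cookie: JSESSIONID=<id>; Path=/; HttpOnly; SameSite=Lax           (iFrame off, Same Site = Lax)
Set-Cookie: JSESSIONID=<id>; Path=/; HttpOnly; SameSite=Strict        (iFrame off, Same Site = Strict)
Set-Cookie: JSESSIONID=<id>; Path=/; HttpOnly; Secure; SameSite=None  (iFrame inclusion on)
```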
Notifications
The following table illustrates the notifications configuration properties:
Configuration Property | Control | Requires Restart | Description |
---|---|---|---|
User Announcements | toggle | No | Enable this property to configure notification messages to be displayed to all users in all tenants. |
Announcement Type | drop down list | No | Available only when you turn the User Announcements toggle on. Select the notification type to set the banner color. Available values: ● Alert (default): yellow banner ● Warning: red banner |
Announcement Title | text box | No | Available only when you turn the User Announcements toggle on. Enter the title to display in the announcement banner. The maximum number of characters is 50. |
Announcement Body | text box | No | Available only when you turn the User Announcements toggle on. Enter the notification details to display in the announcement banner. The maximum number of characters is 250. |
Enable In-App Notifications* | toggle | No | Turn on this toggle to enable in-app notifications. This option is available starting 2024.7.x. |
You will need to disable the Enable In-App Notifications option for Cloud installations if you face a `Loading chunk failed` error. Contact Incorta Support to disable this feature on your Cloud clusters.
Analytics Workload Management
This tab is available starting the 2024.7.x release. The aim of these new configurations is to enhance the monitoring, management, and resilience of the Analytics infrastructure, ensuring proper system utilization and stability.
The Analytics Workload Management feature provides:
- Enhanced Auditing: Provides comprehensive visibility into the system activities and usage patterns, enabling better troubleshooting and performance analysis.
- Memory warning and starvation thresholds: Logs alerts when off-heap memory usage exceeds the warning or starvation threshold, facilitating early detection of potential issues.
- Support for automatic recovery (Beta feature): Attempts to recover the Analytics platform when a starvation threshold is exceeded, ensuring continuity and stability by managing memory resources effectively.
You can find additional configurations for the Analytics Workload Management in the Tenant Configurations.
The following table illustrates the Analytics Workload Management configuration properties:
Configuration Property | Control | Requires Restart | Description |
---|---|---|---|
Analytics Auditing | dropdown list | Yes, Analytics services only | Specify the detail level of the Analytics audit logs. Available options are: ● Standard: Maintains the existing CSV audit files in the audit folder, which provide only essential audit information. ● Enhanced: Generates new CSV audit files in the engine_audit.2.0 folder with additional columns and detailed tracking information, such as request source, used off-heap memory, and task state time. This is the recommended option. ● Both: Generates both Standard and Enhanced audit logs. This is the default option. For details, refer to References → Engine Audit. |
System Monitoring and Workload Management | toggle | Yes, Analytics services only | Enable this option to monitor and collect system performance metrics, such as CPU load, off-heap memory usage, etc. This option is enabled by default. |
System Metrics Auditing | toggle | Yes, Analytics services only | This option is available when the System Monitoring and Workload Management toggle is turned on. Enable this option to save the collected system performance metrics, such as CPU load and off-heap memory usage, to the system metrics audit file. This option is disabled by default. |
System Audit Tenant | text box | Yes, Analytics services only | This option is available when the System Monitoring and Workload Management toggle is turned on. Specify the tenant where the system metrics audit file will be stored. To easily load the captured system metrics to Incorta Analytics and create dashboards on top of them, make sure to enter an active tenant that you can access. Entering an invalid tenant saves the audit file under the system tenant. |
Off-heap Memory Warning Percentage Threshold | text box | Yes, Analytics services only | This option is available when the System Monitoring and Workload Management toggle is turned on. Specify the usage percentage of the off-heap memory that logs a warning alert in the system metrics audit file. When the off-heap memory usage readings over 15 minutes consistently reach this threshold, a warning is logged. The default value is 90 . Setting the value to 100 disables this functionality. |
Off-heap Memory Starvation Percentage Threshold | text box | Yes, Analytics services only | This option is available when the System Monitoring and Workload Management toggle is turned on. Specify the usage percentage of the off-heap memory that logs a potential starvation alert in the system metrics audit file. When off-heap memory usage readings over 15 minutes consistently reach this threshold, a starvation alert is logged. The default value is 95 . Setting the value to 100 disables this functionality. |
Automatic Recovery | toggle | Yes, Analytics services only | This option is available when the System Monitoring and Workload Management toggle is turned on. Enable this option to allow the system to automatically attempt to recover when a starvation alert is logged. The system evicts unused columns from memory and terminates the top off-heap consuming queries until the usage percentage is less than the starvation threshold. This option is disabled by default. |
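As a minimal sketch (assuming you can read the enhanced audit CSV files from shared storage; the path is a placeholder and the column set varies by release), you could inspect the Enhanced audit output with pandas before building dashboards on top of it:

```python
# Sketch: load the enhanced engine audit CSVs for ad hoc inspection.
# The folder path is a placeholder -- point it at your tenant's
# engine_audit.2.0 folder on shared storage.
import glob
import pandas as pd

audit_files = glob.glob("/incorta/tenants/mytenant/engine_audit.2.0/*.csv")

audit = pd.concat((pd.read_csv(f) for f in audit_files), ignore_index=True)

# Inspect the available columns (e.g., request source, used off-heap memory,
# task state time) and a sample of rows.
print(audit.columns.tolist())
print(audit.head())
```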
Incorta Premium
This tab is available starting the 2024.7.2 release for CMC admins only.
Configuration Property | Control | Requires Restart | Description |
---|---|---|---|
Incorta Premium | toggle | No | Enabling this setting activates the Incorta Premium offering, providing access to advanced features such as Incorta Copilot, Business User Notebook, and Data Studio. Note: The Incorta Premium package has licensing implications. Contact your account executive to confirm that you are licensed to use these features in production. |
Incorta Labs
An Incorta Labs feature is experimental and can be either promoted to a product feature or deprecated without notice.
The following table illustrates the Incorta Labs configuration properties:
Configuration Property | Control | Requires Restart | Description |
---|---|---|---|
Advanced SQL Interface* | toggle | Yes (all services) | Turn this toggle on to enable a new advanced SQL interface that supports Spark SQL queries, leveraging the Advanced SQL Service. Enabling this feature will automatically enable the Null Handling feature in releases before 2024.7.3. Starting 2024.7.3, Null Handling is an independent feature, separate from the Advanced SQL Interface, and you can enable or disable it as needed. For more details, refer to Advanced SQLi and Null Handling. This option is available starting with 2024.1.3 for On-Premises installations only. Note: To enable this feature on Cloud environments, sign in to the Cloud Admin Portal (CAP) and turn on the Enable Advanced SQL Interface toggle on the Configurations tab. |
Spark SQL View (Premium) | toggle | No | Turn on this toggle to enable SQL views with fully Spark SQL-compliant queries using the Advanced SQL Interface engine. Dashboards can leverage these views to query and analyze data. Notes: ● Enabling the Advanced SQL Interface is required. ● This option is available starting 2024.7.2, and you must enable Incorta Premium to be able to turn it on. ● In 2024.7.1, enabling Advanced SQLi automatically enables Spark SQL views. |
Null Handling | toggle | Yes (all services) | Enable this setting to differentiate between null values, zeros, empty strings, and empty dates during joins, filters, and formula evaluations. This option was first introduced under Customizations and was moved to Incorta Labs in 2024.1.x. |
- Null Handling is available as a preview feature.
- After enabling or disabling this option, data loading is necessary for physical schemas containing functions, formulas, aggregations, load filters, or security filters to have nulls handled accordingly.
- In releases before 2024.7.x, you must perform a load from staging. Starting 2024.7.x, a load from staging is only recommended, to avoid delays in subsequent load jobs.
- In 2024.7.1 and 2024.7.2, joins and Primary Key calculations consistently account for null values, regardless of this setting.
- Starting 2024.7.3, the Loader Service will handle null values during join calculations based on the Null Handling CMC setting:
- Disabled: The Loader Service treats Null values as zeros for numeric columns, empty strings for text columns, and empty dates for date columns.
- Enabled: The Loader Service treats Null values as distinct values, not equivalent to zeros, empty strings, or other null values.
For details on supported areas and limitations, refer to References → Null Handling.
Incorta Data Studio
This tab is available starting the 2024.7.2 release. You must enable Incorta Premium to be able to enable and configure Data Studio. In previous releases, Incorta Data Studio was a Preview feature. For details, refer to Enabling the Data Studio.
The Enable Data Studio and Data Studio max allowed dataflows options were initially introduced under Tenant Configurations > Incorta Labs, and the Data Studio LLM Recipe Models option was under Server Configurations > Incorta Copilot.
The following table illustrates the Incorta Data Studio configuration properties:
Configuration Property | Control | Requires Restart | Description |
---|---|---|---|
Enable Data Studio (Premium) | toggle | No | Turn on this toggle to enable the automatic generation of PySpark code using a simple drag-and-drop interface. Users can apply transformations to their data using predefined recipes and seamlessly save the transformed data as a new Materialized View (MV). You must enable Incorta Premium to be able to enable this feature. |
Data Studio max allowed dataflows | text box | Yes, Analytics services only | This option is available when you turn on Enable Data Studio. Enter the maximum number of active Dataflows that can be run concurrently. The oldest running Dataflows will be shut down to accommodate new Dataflows when the maximum number is reached. |
Data Studio LLM Recipe Models | text box | Yes, Analytics services only | This option is available when you turn on Enable Data Studio. Enter a JSON array containing all LLM models that will be accessed through the Dataflow LLM recipe. |
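The exact JSON schema for the Data Studio LLM Recipe Models value is not documented here. As a purely hypothetical illustration of "a JSON array containing all LLM models" (field names invented for illustration; the actual schema may differ), the value might look like:

```json
[
  { "name": "gpt-4", "provider": "openai" },
  { "name": "llama-3.1", "provider": "incorta-nexus" }
]
```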
Incorta Copilot
- Incorta Copilot is available starting the 2024.1.0 release on Cloud installations only. Contact Incorta Support to get access to Copilot.
- Starting 2024.7.2, you must enable Incorta Premium to be able to enable Incorta Copilot.
Incorta Nexus is a new option available for the primary and embedding model AI providers starting 2024.7.2. Incorta Nexus is based on the Llama 3.1 LLM model, providing a trained model and advanced prompts that help better answer your questions in Incorta.
Incorta Nexus doesn’t require any extra configurations by the cloud admin, such as a Vector database or tokens.
For now, Incorta Nexus is supported in 2024.7.2 only.
The following table illustrates the Incorta Copilot configuration properties:
Configuration Property | Control | Requires Restart | Description |
---|---|---|---|
Enable Incorta Copilot | toggle | Yes | Turn on this toggle to enable using Copilot in different areas of Incorta. Note that you must fill in all required configurations. |
Primary Model AI Provider | drop-down | Yes | Choose your AI provider from the available values: ● OpenAI ● Azure OpenAI ● Incorta Nexus (Available starting 2024.7.2) |
OpenAI Text Completion Model Name | text box | Yes | Enter the OpenAI model you want to use. Available values: ● gpt-4 (Recommended) ● gpt-3.5-turbo Note: This option is available when you select OpenAI as your AI provider. |
OpenAI Text Completion Token | text box | Yes | Enter your OpenAI-generated token. Note: This option is available when you select OpenAI as your AI provider. |
OpenAI Text Completion Organization | text box | Yes | Enter the organization associated with the OpenAI text completion service. Note: This option is available when you select OpenAI as your AI provider. |
OpenAI Embedding Token | text box | Yes | Enter the OpenAI token you have obtained through your OpenAI subscription. Note: This option is available when you select OpenAI as your AI provider. |
OpenAI Embedding Model Name | text box | Yes | Enter the OpenAI model name you will be using. Note: This option is available when you select OpenAI as your AI provider. |
Azure OpenAI Text Completion Endpoint | text box | Yes | Enter the Azure OpenAI completion URL, which will help auto-complete and generate insights. Note: This option is available when you select Azure OpenAI as your AI provider. |
Azure OpenAI Text Completion Token | text box | Yes | Enter your Azure OpenAI completion generated token. Note: This option is available when you select Azure OpenAI as your AI provider. |
Azure OpenAI Text Completion Deployment Name | text box | Yes | Enter your Azure OpenAI deployment name. Recommended value: gpt4-model. Note: This option is available when you select Azure OpenAI as your AI provider. |
Azure OpenAI Embedding Endpoint | text box | Yes | Enter your Azure OpenAI embedding URL, which will help index metadata in the Pinecone Vector database. Note: This option is available when you select Azure OpenAI as your AI provider. |
Azure OpenAI Embedding Token | text box | Yes | Enter your Azure OpenAI embedding generated token. Note: This option is available when you select Azure OpenAI as your AI provider. |
Azure OpenAI Embedding Deployment Name | text box | Yes | Enter your Azure OpenAI embedding deployment name. Note: This option is available when you select Azure OpenAI as your AI provider. |
Use Secondary Model | toggle | Yes | Enable this option to configure an optional secondary AI model to use for tasks that do not require high accuracy. |
Secondary Model AI Provider | drop-down | Yes | Choose your secondary AI provider from the available values: ● OpenAI ● Azure OpenAI |
OpenAI Text Completion Model Name | text box | Yes | Enter the OpenAI model you want to use, which should be different from the primary one. For example, gpt-3.5-turbo. Note: This option is available when you select OpenAI as your secondary AI provider. |
OpenAI Text Completion Token | text box | Yes | Enter the OpenAI-generated token for your chosen secondary model. Note: This option is available when you select OpenAI as your secondary AI provider. |
OpenAI Text Completion Organization | text box | Yes | Enter the organization associated with the OpenAI text completion service when using the secondary model. Note: This option is available when you select OpenAI as your secondary AI provider. |
Azure OpenAI Text Completion Endpoint | text box | Yes | Enter the Azure OpenAI completion URL, which will help auto-complete and generate insights. Note: This option is available when you select Azure OpenAI as your secondary AI provider. |
Azure OpenAI Text Completion Token | text box | Yes | Enter your Azure OpenAI completion-generated token. Note: This option is available when you select Azure OpenAI as your secondary AI provider. |
Azure OpenAI Text Completion Deployment Name | text box | Yes | Enter your Azure OpenAI deployment name. Note: This option is available when you select Azure OpenAI as your secondary AI provider. |
Vector Database Connector | drop-down | Yes | The database connector is automatically set to Pinecone. Make sure to set up the Pinecone Vector database before proceeding, and fetch the environment name, API key, and project name. |
Pinecone Environment Name | text box | Yes | Enter the Pinecone environment name you have created. |
Pinecone API Key | text box | Yes | Enter the Pinecone API key you have generated. |
Pinecone Project Name | text box | Yes | Enter the Pinecone project name you have created. |
Pinecone Index Name | text box | Yes | Enter the Pinecone index name you have created. |
Embedding Model AI Provider | drop-down | Yes | Choose your embedding model AI provider from the available values: ● OpenAI ● Azure OpenAI ● Incorta Nexus (Available starting 2024.7.2) |
DataFlow LLM Models | text box | No | Enter a JSON array containing all LLM models to be accessed from the Dataflow LLM recipe. This option is available starting 2024.7.1; however, it has been moved to the Incorta Data Studio tab starting 2024.7.2. |
Plugins Configs | text box | No | Enter the configuration settings for Copilot plugins. This option is available starting 2024.7.2. |