Configure Server

You can manage your server through the Cluster Management Console (CMC) by configuring settings such as the SQL interface port and the proxy read timeout. Here are the steps to configure your server:

  • Sign in to the CMC.
  • In the Navigation bar, select Clusters > Cluster-name.
  • In the Canvas tabs, select Cluster Configurations > Server Configurations.

The following server configurations are available in the CMC:

Warning: Save Changes

To avoid losing data after configuring your settings on a page, select Save before navigating to another page.

Important: Cloud Admin Limited Access

Some configurations are not available when using the cloud admin account to sign in to the CMC. These configurations are marked in this document with an asterisk (*). Contact Incorta Support to have any of these configurations set or altered.

Clustering

The following cluster configuration property is available:

Kafka Consumer Service Name* (text box; requires restart of the Loader service only)
Enter the name of the Loader service that acts as a Kafka consumer, in the following format: <NODE_NAME>.<SERVICE_NAME>

Note the following:
  ●   If you do not configure the name of the Kafka Loader service, a random Loader service is automatically assigned, which can lead to unexpected results.
  ●   If you change the Kafka consumer from one Loader service (A) to another Loader service (B), you must restart the current Loader service (A) first, and then restart Loader service (B).
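
For example, if the Loader service that should consume from Kafka runs on a node named node1 and the service is named loaderService (hypothetical names), you would enter node1.loaderService.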

SQL Interface

Incorta can act as a PostgreSQL database through the Structured Query Language Interface (SQLi). This lets you use Incorta’s engine performance and features from other BI tools, such as Tableau.
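
Because SQLi presents itself as a PostgreSQL database, any PostgreSQL-compatible client can connect to it. The following is a minimal sketch using Python and psycopg2; the host, port, tenant name, credentials, and queried table are placeholders that depend on your deployment.

```python
# Minimal sketch: query Incorta through the SQL interface (SQLi) using a
# standard PostgreSQL client. Host, port, tenant, credentials, and the
# schema.table below are placeholders for your own deployment.
import psycopg2

conn = psycopg2.connect(
    host="incorta-analytics.example.com",  # Analytics host exposing the SQLi port
    port=5436,                             # the configured SQL interface port
    dbname="my_tenant",                    # tenant name
    user="admin",
    password="********",
)

with conn.cursor() as cur:
    cur.execute("SELECT * FROM sales.orders LIMIT 10")
    for row in cur.fetchall():
        print(row)

conn.close()
```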

The following SQL interface configuration properties are available:

SQL interface port* (spin box; requires restart of the Analytics service only)
Enter a port number to connect to the Incorta Engine. The Engine mode uses this port and runs the SQL interface queries against the data loaded in memory.

Data Store (DS) port* (spin box; requires restart of the Analytics service only)
Enter a port number to run queries directly using Spark. The Spark mode (Parquet mode) runs the SQL interface queries against the data loaded in the staging area using Incorta compacted parquet files.

New Metadata handler port* (spin box; requires restart of the Analytics service only)
Enter a port number for an auxiliary process that resolves metadata for SQLi clients.

Enable SSL for SQL interface ports (toggle; no restart required)
Enable this property to use Secure Sockets Layer (SSL) for SQLi connections. You must provide the SSL certificate details (path and passphrase).

SSL certificate (JKS) path used for SQL interface (text box; no restart required)
Enter the path to the SQLi SSL certificate.
This property is only available when Enable SSL for SQL interface ports is toggled on.

SSL certificate (JKS) passphrase used for SQL interface (text box; no restart required)
Enter the passphrase of the SQLi SSL certificate.
This property is only available when Enable SSL for SQL interface ports is toggled on.

Enforce Accepting Only SSL Connections (toggle; no restart required)
Enable this property to allow SSL connections only.
This property is only available when Enable SSL for SQL interface ports is toggled on.

External Tables CSV file path* (text box; requires restart of the Analytics service only)
Enter the CSV file path of your external tables.

Enable Cache* (toggle; no restart required)
Enable this property to cache recurring SQL operations and enhance the performance of query execution, if there is an available cache size.

Cache size (In gigabytes)* (spin box; no restart required)
Enter the maximum caching size (in GBs) of the data returned by the SQLi queries. When the cache size exceeds the limit, the least recently used (LRU) data is evicted to make space for newer cache entries.
You must set the cache size based on the available memory on the Incorta host server and the size of the common query results.

Cached query result max size* (spin box; no restart required)
Enter the maximum threshold size for each query result, which is the table cell count (rows multiplied by columns).
If the size of a query result exceeds the threshold value, it will not be cached.

Enable cache auto refresh* (toggle; no restart required)
Enable this property to automatically refresh the cache at specified intervals.

Refresh cache interval (In minutes) (spin box; no restart required)
Enter the interval in minutes to automatically refresh the cache.
This property is only available when Enable cache auto refresh is toggled on.

Use column labels instead of column names (toggle; no restart required)
Enable this property to use table column labels instead of column names during SQL execution. This affects the column list specified in the query and the returned column names in the result.

SQL Engine cache (toggle; no restart required)
Enable this property to cache SQL engine result sets. This helps speed up repetitive operations performed during query processing and execution. The default cache size is 5% of off-heap memory.

Spark Integration

Incorta utilizes Spark to execute complex queries that the engine does not currently support and to perform queries on data residing in the staging area without loading that data into Incorta’s memory.

The following Spark integration configuration properties are available:

Spark master URL* (text box; requires restart of all services)
Enter the Spark Master URL for the Apache Spark instance that executes materialized views (MVs) or SQL queries.
Here are the steps to acquire the Spark Master information:
  ●   Navigate to the Spark host server UI (from any browser), using the following format: <SPARK_HOST_SERVER>:<SPARK_PORT_NO>
  ●   Copy the Spark Master connection string in the format: spark://<CONNECTION_STRING>:<SPARK_PORT_NO>
  ●   The default port number for the Spark instance installed with Incorta is 7077.

Enable a custom Delta Lake Reader* (toggle; no restart required)
Enable this property to instruct Spark to use the Incorta custom Delta Lake reader to read data from Delta Lake files instead of using the Spark native Delta Lake reader. The Incorta custom Delta Lake reader is optimized for queries on relatively small data sets.
Switching between these two readers may cause the first SQLi query on the Spark port to take more time because the SQLi engine is restarted.

Enable SQL App* (toggle; no restart required)
Enable this property to start the SQL App, and then use the Engine and Parquet modes to execute incoming SQL queries.

Enable spark fallback* (toggle; requires restart of the Analytics service)
This property is enabled by default. Disabling it prevents the in-memory engine from falling back to the Spark (Parquet) mode when a query cannot be executed; instead, an SQL error is returned.
This property is only available when Enable SQL App is toggled on.

SQL App driver memory (spin box; no restart required)
Enter the app driver memory (in GBs) to be used by the SQL interface Spark application. This memory is only used for constructing (not calculating) the final results. Consult with the Spark admin to set this value.

SQL App cores (spin box; no restart required)
Enter the number of dedicated Central Processing Unit (CPU) cores reserved for the SQL interface Spark application.
Ensure that there are enough cores left for the OS, other services, and applications.

SQL App memory (spin box; no restart required)
Enter the maximum memory (in GBs) used by SQL interface Spark queries.
Note the following:
  ●   You must set an adequate memory size when using materialized views.
  ●   The memory required for both applications combined cannot exceed the Worker Memory.

SQL App executors (spin box; no restart required)
Enter the maximum number of executors that can be spawned on a single worker.
Each executor allocates some of the cores defined in sql.spark.application.cores and uses part of the memory defined in sql.spark.application.memory.
The cores and memory assigned per executor are equal: the number of SQL App cores and the SQL App memory are each divided by the number of executors. If the number of executors is greater than the SQL App memory, the executors consume the assigned memory at 1 GB per executor.
Examples:
  ●   If the SQL App cores = 7 and the SQL App executors = 3, each executor takes 2 cores, and 1 of the cores is not utilized.
  ●   If the SQL App memory = 5 and the SQL App executors = 7, then 5 executors are created, with 1 GB each.
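
The following is a minimal sketch, not Incorta code, that reproduces the sizing arithmetic described above under the stated 1 GB-per-executor rule; the memory value in the first usage example is an assumption, since the original example specifies only cores and executors.

```python
# Illustrative sketch of the executor sizing rule described above.
# Not Incorta code: it only reproduces the arithmetic from the examples
# (whole cores and whole GBs per executor; any remainder stays unused).
def executor_allocation(app_cores: int, app_memory_gb: int, requested_executors: int):
    # If there is not enough memory for 1 GB per requested executor,
    # only app_memory_gb executors can be spawned (1 GB each).
    executors = min(requested_executors, app_memory_gb)
    return {
        "executors": executors,
        "cores_per_executor": app_cores // executors,
        "memory_gb_per_executor": app_memory_gb // executors,
    }

print(executor_allocation(7, 3, 3))  # 3 executors with 2 cores and 1 GB each; 1 core unused
print(executor_allocation(7, 5, 7))  # 5 executors with 1 core and 1 GB each
```
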
SQL App shuffle partitions (spin box; no restart required)
Enter the number of SQL interface shuffle partitions. A single shuffle represents a block of data being processed to perform joins and/or aggregations.
Note the following:
  ●   The size of a shuffle partition increases as the size of the processed data increases.
  ●   The optimum shuffle partition size is around 128 MB.
  ●   Increase the shuffle partitions as the processed data size increases. However, an increase in shuffle partitions can lead to an increase in CPU utilization.
  ●   If a query operates on a trivial amount of data, an increased number of partitions leads to a small partition size, which may increase the query execution time due to the overhead of managing the unnecessary partitions. An inadequate partition size may cause the query to fail.

SQL App extra options (text box; no restart required)
Enter the extra Spark options that you want to pass to the SQL interface Spark bridge application. These options can be used to override default configurations.
Here are sample values: spark.sql.shuffle.partitions=8; spark.executor.memory=4g; spark.driver.memory=4g

Enable SQL App Dynamic Allocation (toggle; requires restart of the Analytics service)
Enable this property to dynamically allocate the Data Hub Spark application by adjusting the number of application executors based on the workload.
This property optimizes resource utilization: it removes idle executors and claims them again if the workload increases.

Spark App port* (spin box; requires restart of the Analytics service)
Enter the port number to connect Incorta to Spark and access the Data Hub.

Spark App control channel port* (spin box; no restart required)
Enter the control channel port number used to send a shutdown signal to the Spark SQL App when the Incorta server requires Spark to shut down.

Materialized view application cores (spin box; no restart required)
Enter the number of CPU cores for MVs. The default value is 1.
The allocated cores for all running Spark applications cannot exceed the dedicated cores for the cluster unless Dynamic Allocation is enabled. Thus, this value is used to compute the CPU cores for the initial executors.

Materialized view application memory (spin box; no restart required)
Enter the maximum memory size (in GBs) to use for MVs. The default is 1 GB.
The memory for all Spark applications combined cannot exceed the cluster memory.

Materialized view application executors (spin box; no restart required)
Enter the maximum number of executors that can be spawned by a single materialized view application.
Each executor allocates some of the cores defined in sql.spark.mv.cores and consumes part of the memory defined in sql.spark.mv.memory.
The cores and memory assigned per executor are equal. For example, configuring an application with cores=4, memory=8, and executors=2 results in spawning 2 executors, each consuming 2 cores and 4 GB from the cluster.

Materialized view driver memory (spin box; no restart required)
Enter the memory size (in GBs) to use in the MV driver process. The default value is 1 GB.

Spark Submit Deployment Mode* (drop-down list; no restart required)
Specify the Spark deploy mode: client or cluster.
This property is available starting with the 2022.1.0 release. Before this release, all Spark jobs were executed in client mode.
  ●   In the cluster deploy mode, when executing Spark jobs (loading an MV, for example), the driver application is created in the Spark cluster instead of the Incorta cluster where the Loader Service exists.
  ●   In the client deploy mode, the driver application is created in the Incorta cluster when submitting the job to Spark.
A schema developer can specify the Spark deploy mode at the MV level. If the deploy mode selected at the MV level differs from the one specified at the server level, the MV configuration overrides the server configuration.

Enable dynamic allocation in MVs (toggle; no restart required)
Enable this property to dynamically allocate resources in MVs by scaling the number of application executors up and down based on the workload.
This property requires running the Spark shuffle service.

Enable automatic ML library download (toggle; no restart required)
Enable this property to automatically download the Machine Learning (ML) library instead of using the bundled one.

API Security

The following API security configuration properties are available:

Allow OAuth2.0 for Public APIs Authentication (toggle; no restart required)
This configuration is disabled by default.
Note: You must have your own authorization server.
When enabled, this option works with public APIs v2 only; you cannot use Incorta personal access tokens (PATs) for public API authentication, and you must update any running scripts accordingly.
When disabled, authentication reverts back to any previously configured PAT.

OAuth 2.0 authorization server base URL (text box; no restart required)
Available when you enable the Allow OAuth2.0 for Public APIs Authentication option.
Enter your authorization server base URL. For example, https://auth0-tenant.com/.
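
As an illustration of the kind of script update this switch implies, the following is a minimal sketch that obtains an OAuth 2.0 bearer token through the client-credentials flow and attaches it to a public API v2 request; the token URL, endpoint path, credentials, and response fields are placeholders, not documented Incorta values.

```python
# Hedged sketch: call a public API v2 endpoint with an OAuth 2.0 bearer
# token instead of a PAT. The token URL, endpoint path, credentials, and
# response fields below are placeholders for illustration only.
import requests

token_response = requests.post(
    "https://auth0-tenant.com/oauth/token",        # your authorization server
    data={
        "grant_type": "client_credentials",
        "client_id": "YOUR_CLIENT_ID",
        "client_secret": "YOUR_CLIENT_SECRET",
        "audience": "YOUR_API_AUDIENCE",
    },
    timeout=30,
)
access_token = token_response.json()["access_token"]

api_response = requests.get(
    "https://your-cluster.example.com/api/v2/<endpoint>",  # placeholder endpoint
    headers={"Authorization": f"Bearer {access_token}"},
    timeout=30,
)
print(api_response.status_code)
```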

Tuning

The following tuning configuration properties are available:

Enable Parquet Migration at Staging (toggle; no restart required)
Available starting with 2023.7.0.
Enable this property to instruct the Loader Service to create new parquet files for objects with Integer columns during load from staging jobs instead of performing a full load of these objects. This occurs only once per object after upgrading to 2023.7.0.
Writing new parquet files allows the Loader Service to save Integer columns as Integer rather than as Long, as it used to do in previous releases.

Enable automatic merging of parquet while loading from staging (toggle; no restart required)
Available starting with 2024.1.0.
Enable this property to allow Incorta to merge the Parquet files of a table during load from staging jobs if required.

Enable parquet file Compression* (toggle; no restart required)
Enable this property to compress your parquet files.
It is necessary to compact parquet files when using materialized views.

Proxy read timeout* (spin box; requires restart of the Analytics service only)
Enter the required read timeout, in seconds, for rendering a UI request. You can adjust the value based on the proxy server configurations.

Cleanup Job Refresh Time in Minutes* (spin box; requires restart of the Loader service only)
Enter the time interval, in minutes, for a cleanup job schedule to remove unnecessary files from the shared storage.
The interval value cannot be less than 10 minutes.

Retention period of load job tracking data (In months)* (drop-down list; no restart required)
Enter the period for which Incorta retains load job tracking data, including load job history, load plan executions, and schema update jobs.
The default is Never, which means that the feature is disabled.
When you enable this feature, a cleanup job runs whenever the Analytics Service starts and every 24 hours afterward, and deletes tracking data that exceeds the specified retention period. However, the tracking data of the latest successful load job or schema update job is not deleted.
Retention period: Notes and recommendations
  • When the cleanup job runs for the first time, it locks the metadata database during the deletion process. The locking duration depends on the number of records that the job will delete. Load jobs that start during the locking period will fail.
  • To avoid failed load jobs while the cleanup job is running for the first time, make sure that you suspend schedules before enabling the feature. Then, start the Analytics Service only and wait for a few minutes before you start the Loader Service.
  • It is recommended that you first configure the feature with a long retention period, then shorten it gradually until you reach the required retention period. This reduces the database lock time and the number of failed jobs when the cleanup job runs for the first time.

Customizations

The following customization configuration properties are available:

Color Palette Mode (drop-down list; no restart required)
Choose a color palette for your insights and dashboards. The options are:
  ●   Sophisticated
  ●   Classic
  ●   Contemporary
  ●   Bright & Bold
  ●   90s Retro
  ●   Custom

Custom Palette Color (text box; no restart required)
Enter the HEX codes of the 12 colors to be used by the custom palette, as follows: #255888, #D1481A, #285F6C,...
This property is only available when you set Color Palette Mode to Custom.

Fiscal Year (drop-down list; requires restart of all services)
Choose the beginning month of your fiscal year. The default is January. You must restart all services for the change to take effect.

Null Handling (toggle; requires restart of all services)
Enable this setting so that Incorta differentiates between null and real values, such as an empty string and zero.
This option has been moved to Incorta Labs starting with 2024.1.0.
Note
Null Handling is available as a preview feature starting with 2022.12.0. You must perform a load from staging for all physical schemas referenced in supported functions, formulas, aggregations, and filters. For details on supported areas and limitations, refer to Handling null values.

Diagnostics

The following diagnostics configuration properties are available:

Logging* (text box; no restart required)
Enter the logging level to specify the required information in your log files.
Here are the available logging levels in descending order:
  ●   OFF - disables logging
  ●   SEVERE (highest value) - logs severe events that can cause termination
  ●   WARNING
  ●   INFO
  ●   CONFIG
  ●   FINE
  ●   FINER
  ●   FINEST (lowest value)
  ●   ALL - logs all messages
The expression must be as follows: component1=level1:component2=level2.
For example, you can reduce the level of logging on your production system for the Loader service by setting the logging level to SEVERE. Enter: com.incorta.engine=SEVERE:com.incorta.server=SEVERE:com.incorta.loader=SEVERE

Allow Cloud Disk Space Monitoring (toggle; no restart required)
Disabled by default. This option applies to cloud storage only. Enabling it monitors the disk space every 5 minutes, which is not recommended because it might increase the number of I/O operations.

Data Agent

The following data agent configuration properties are available:

Note

Starting with the 2024.1.0 release, enabling or disabling the Data Agent feature or changing the service ports does not require restarting the cluster or any service.

Enable Data Agent (toggle; requires restart of all services in releases before 2024.1.0)
Enable this property to allow Incorta to connect to and extract data from one or more data sources behind a firewall.
For more information, refer to the Concepts → Data Agent document.

Analytics Data Agent Port (spin box; requires restart of the Analytics service only in releases before 2024.1.0)
Enter the port number of your Analytics data agent. The value cannot be less than 1 or greater than 65535.
This property is only available when Enable Data Agent is toggled on.

Loader Data Agent Port (spin box; requires restart of the Loader service only in releases before 2024.1.0)
Enter the port number of your Loader data agent. The value cannot be less than 1 or greater than 65535.
This property is only available when Enable Data Agent is toggled on.

Analytics Public Hosts and Ports (text box; no restart required)
Enter your Analytics public hosts and ports.
Note the following:
  ●   Changing the Analytics public hosts and ports requires re-generating the .auth files.
  ●   This property is only available when Enable Data Agent is toggled on.

Loader Public Hosts and Ports (text box; no restart required)
Enter your Loader public hosts and ports.
Note the following:
  ●   Changing the Loader public hosts and ports requires re-generating the .auth files.
  ●   This property is only available when Enable Data Agent is toggled on.

Email

The following email configuration properties are available:

Local Rendering URL Protocol* (drop-down list; no restart required)
Select the local rendering URL protocol:
  ●   http
  ●   https

Local Rendering Host* (text box; no restart required)
Enter the local rendering host. For example, 127.0.0.1.

Local Rendering Port* (spin box; no restart required)
Enter the local rendering port number. For example, 8080.

Headless Browser Rendering (in milliseconds)* (spin box; no restart required)
Enter the number of milliseconds after which rendering a dashboard times out. For example, 9000.

Security

The following security configuration properties are available:

Enable Iframe inclusion (toggle; no restart required)
Enable external websites to host Incorta dashboards in an iFrame. Enabling this option mandates that Incorta is accessed through SSL connections (HTTPS).

Same Site (drop-down list; no restart required)
Available only when you turn the Enable Iframe inclusion toggle off.
Restrict sending session cookies to a first-party (same-site) context only and prevent the browser from sending cookies along with cross-site requests.
Available options are:
  ●   Lax (default): Prevents sending cookies on normal cross-site requests but allows sending them when navigating to your Incorta cluster. This option provides a reasonable balance between security and usability. It maintains the user’s session information when accessing the Incorta cluster through an external link, for example, while blocking it in Cross-Site Request Forgery (CSRF)-prone request methods (such as POST).
  ●   Strict: Sends cookies in a first-party context only and does not send them along with requests initiated by third-party websites. This prevents the browser from sending cookies to the target site in all cross-site browsing contexts, even when following a regular link.

Secure (toggle; no restart required)
Available only when you turn the Enable Iframe inclusion toggle off.
Allow sending cookies to the server only when a request is made using the HTTPS scheme.

Note
  • The Same Site and Secure properties are available starting with the 2022.10.0 release.
  • To put the security changes into action, users have to sign out and sign in again to reset their sessions.
  • Turning the Enable Iframe inclusion toggle on is similar to setting the SameSite attribute to None. The browser attaches the session cookies in all cross-site browsing contexts.
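
These settings map to the standard SameSite and Secure cookie attributes: for example, with Same Site set to Lax and Secure enabled, the session cookie is issued with SameSite=Lax; Secure, whereas turning Enable Iframe inclusion on corresponds to SameSite=None; Secure.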

Notifications

The following notifications configuration properties are available:

User Announcements (toggle; no restart required)
Enable this property to start configuring notification messages to be displayed for all users in all tenants.

Announcement Type (drop-down list; no restart required)
Available only when you turn the User Announcements toggle on.
Select the notification type to set the banner color.
Available values:
  ●   Alert (default) - for a yellow banner
  ●   Warning - for a red banner

Announcement Title (text box; no restart required)
Available only when you turn the User Announcements toggle on.
Enter the title to display in the announcement banner. The maximum number of characters that you can enter is 50.

Announcement Body (text box; no restart required)
Available only when you turn the User Announcements toggle on.
Enter the notification details to display in the announcement banner. The maximum number of characters that you can enter is 250.

Incorta Labs

The following Incorta Labs configuration property is available:

Null Handling (toggle; requires restart of all services)
Enable this property so that Incorta differentiates between nulls and real values, such as empty strings and zeros. This option was first introduced under Customizations in 2022.12.0 and was moved to Incorta Labs in 2024.1.0.
For more information, refer to References → Null Handling.

Incorta Copilot

The following Incorta Copilot configuration properties are available:

Enable Incorta Copilot (toggle; requires restart)
Turn on this toggle to enable using the Copilot in different areas of Incorta. You must fill in all required configurations.

Primary Model AI Provider (drop-down list; requires restart)
Choose your AI provider from the available values:
  ●   OpenAI
  ●   Azure OpenAI

OpenAI Text Completion Model Name (text box; requires restart)
Select the OpenAI model you want to use from the available values:
  ●   gpt-4 (Recommended)
  ●   gpt-3.5-turbo
Note: This option is available when you select OpenAI as your AI provider.

OpenAI Text Completion Token (text box; requires restart)
Enter your OpenAI-generated token.
Note: This option is available when you select OpenAI as your AI provider.

OpenAI Text Completion Organization (text box; requires restart)
Enter the organization associated with the OpenAI text completion service.
Note: This option is available when you select OpenAI as your AI provider.

OpenAI Embedding Token (text box; requires restart)
Enter the OpenAI token you have obtained through your OpenAI subscription.
Note: This option is available when you select OpenAI as your AI provider.

OpenAI Embedding Model Name (text box; requires restart)
Enter the OpenAI embedding model name you will be using.
Note: This option is available when you select OpenAI as your AI provider.

Azure OpenAI Text Completion Endpoint (text box; requires restart)
Enter the Azure OpenAI completion URL, which helps auto-complete and generate insights.
Note: This option is available when you select Azure OpenAI as your AI provider.

Azure OpenAI Text Completion Token (text box; requires restart)
Enter your Azure OpenAI completion-generated token.
Note: This option is available when you select Azure OpenAI as your AI provider.

Azure OpenAI Text Completion Deployment Name (text box; requires restart)
Enter your Azure OpenAI deployment name.
Recommended value: gpt4-model.
Note: This option is available when you select Azure OpenAI as your AI provider.

Azure OpenAI Embedding Endpoint (text box; requires restart)
Enter your Azure OpenAI embedding URL, which helps index metadata in the Pinecone vector database.
Note: This option is available when you select Azure OpenAI as your AI provider.

Azure OpenAI Embedding Token (text box; requires restart)
Enter your Azure OpenAI embedding-generated token.
Note: This option is available when you select Azure OpenAI as your AI provider.

Azure OpenAI Embedding Deployment Name (text box; requires restart)
Enter your Azure OpenAI embedding deployment name.
Note: This option is available when you select Azure OpenAI as your AI provider.

Use Secondary Model (toggle; requires restart)
Enable this option to configure an optional secondary AI model to use for tasks that do not require high accuracy.

Secondary Model AI Provider (drop-down list; requires restart)
Choose your secondary AI provider from the available values:
  ●   OpenAI
  ●   Azure OpenAI

OpenAI Text Completion Model Name (text box; requires restart)
Enter the OpenAI model you want to use, which should be different from the primary one. For example, gpt-3.5-turbo.
Note: This option is available when you select OpenAI as your secondary AI provider.

OpenAI Text Completion Token (text box; requires restart)
Enter the OpenAI-generated token for your chosen secondary model.
Note: This option is available when you select OpenAI as your secondary AI provider.

OpenAI Text Completion Organization (text box; requires restart)
Enter the organization associated with the OpenAI text completion service when using the secondary model.
Note: This option is available when you select OpenAI as your secondary AI provider.

Azure OpenAI Text Completion Endpoint (text box; requires restart)
Enter the Azure OpenAI completion URL, which helps auto-complete and generate insights.
Note: This option is available when you select Azure OpenAI as your secondary AI provider.

Azure OpenAI Text Completion Token (text box; requires restart)
Enter your Azure OpenAI completion-generated token.
Note: This option is available when you select Azure OpenAI as your secondary AI provider.

Azure OpenAI Text Completion Deployment Name (text box; requires restart)
Enter your Azure OpenAI deployment name.
Note: This option is available when you select Azure OpenAI as your secondary AI provider.

Vector Database Connector (drop-down list; requires restart)
The database connector is automatically set to Pinecone.
You must ensure that the Pinecone vector database is set up before proceeding. Fetch the environment name, API key, and project name.

Pinecone Environment Name (text box; requires restart)
Enter the Pinecone environment name you have created.

Pinecone API Key (text box; requires restart)
Enter the Pinecone API key you have generated.

Pinecone Project Name (text box; requires restart)
Enter the Pinecone project name you have created.