Connectors → Custom SQL
About the Custom SQL Connector
The Custom SQL Connector enables Incorta to access data stored in any SQL database. Use this connector when a dedicated Incorta connector for your database does not already exist. You can access the data you want with a standard SQL query. The Custom SQL connector supports the following Incorta-specific functionality:
Feature | Supported |
---|---|
Chunking | ✔ |
Data Agent | ✔ |
Encryption at Ingest | ✔ |
Incremental Load | ✔ |
Multi-Source | ✔ |
OAuth | |
Performance Optimized | ✔ |
Remote | |
Single-Source | ✔ |
Spark Extraction | ✔ |
Webhook Callbacks | ✔ |
Custom SQL Connector Updates
This section lists the updates introduced in newer versions of the Custom SQL connector available on the Incorta connectors marketplace.
To get a newer version of the connector, update the connector using the marketplace.
Version | Updates |
---|---|
2.2.0.2 | A new extra option, sql.chunks.limit, is now available to specify the maximum number of chunks that can be created by a data set when chunking is enabled. The default is 100, and the maximum is 300. If you specify a limit that exceeds 300, the connector resets it to 300. |
Keep your connector up to date with the latest released version to get all fixes and enhancements.
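For example, to allow a data set to create more chunks than the default when chunking is enabled, add the option to the Extra Options field of the data source, one `key=value` pair per line (the value shown is illustrative):

```
sql.chunks.limit=150
```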
Deploy the JAR file
The Custom SQL connector requires the JDBC driver JAR file from your database vendor. The examples in this section use the generic file name `company.jdbc.jar`.
Please contact Incorta Support to assist you with deploying the JAR on an Incorta Cloud cluster.
For an On-Premises cluster, a System Administrator with root access will need to copy the JDBC driver JAR file of the database to each Incorta Node in an Incorta cluster. A CMC Administrator will need to restart the Analytics and Loader Services in the cluster.
Here are the steps to copy the JAR file to a standalone Incorta cluster:
- Download the JDBC driver JAR file from the database vendor.
- Secure copy the `company.jdbc.jar` file to the host. Here is an example using scp:

```shell
INCORTA_NODE_HOST=100.101.102.103
INCORTA_NODE_HOST_PEM_FILE="host_key.pem"
INCORTA_NODE_HOST_USER="incorta"
CUSTOM_JDBC_JAR_FILE="company.jdbc.jar"
cd ~/Downloads
scp -i ~/.ssh/${INCORTA_NODE_HOST_PEM_FILE} ${CUSTOM_JDBC_JAR_FILE} ${INCORTA_NODE_HOST_USER}@${INCORTA_NODE_HOST}:/tmp/
```
- Secure shell into the host:

```shell
ssh -i ~/.ssh/${INCORTA_NODE_HOST_PEM_FILE} ${INCORTA_NODE_HOST_USER}@${INCORTA_NODE_HOST}
```
- In a bash shell, copy `company.jdbc.jar` to the `IncortaNode/extensions/connectors/shared-libs` directory:

```shell
INCORTA_INSTALLATION_PATH=/home/incorta/IncortaAnalytics
CUSTOM_JDBC_JAR_FILE="company.jdbc.jar"
cp /tmp/${CUSTOM_JDBC_JAR_FILE} $INCORTA_INSTALLATION_PATH/IncortaNode/extensions/connectors/shared-libs/${CUSTOM_JDBC_JAR_FILE}
```
- In releases before 2024.1.x, custom SQL connector driver JARs were added to `<IncortaNode>/runtime/lib/`.
- After upgrading to 2024.1.x, you must move the driver JARs of any deployed custom SQL connector from `<IncortaNode>/runtime/lib/` to `<IncortaNode>/extensions/connectors/shared-libs/`.
- If you use a Data Agent, you must also copy these JARs to `incorta.dataagent/extensions/connectors/shared-libs/` on the Data Agent host.
- For Cloud installations, contact the Support team to help you move these connectors.
Restart the Analytics and Loader Services
Here are the steps to restart the Analytics and Loader Services in an Incorta Cluster from the Cluster Management Console (CMC).
- As the CMC Administrator, sign in to the CMC.
- In the Navigation bar, select Clusters.
- In the cluster list, select a Cluster name.
- Select the Details tab, if not already selected.
- In the footer, select Restart.
Steps to connect a Custom SQL Database and Incorta
To connect a Custom SQL database and Incorta, here are the high-level steps, tools, and procedures:
- Create an external data source
- Create a physical schema with the Schema Wizard
- or, Create a physical schema with the Schema Designer
- Load the physical schema
- Explore the physical schema
Create an external data source
Here are the steps to create an external data source with the Custom SQL connector:
- Sign in to the Incorta Direct Data Platform™.
- In the Navigation bar, select Data.
- In the Action bar, select + New → Add Data Source.
- In the Choose a Data Source dialog, in Custom, select Custom SQL.
- In the New Data Source dialog, specify the applicable connector properties.
- To test, select Test Connection.
- Select Ok to save your changes.
Custom SQL connector properties
Here are the properties for the Custom SQL connector:
Property | Control | Description |
---|---|---|
Data Source Name | text box | Enter the name of the data source |
Username | text box | Enter the database username |
Password | text box | Enter the database password |
Connection Pool | text box | Enter the connection pool. The default is 30. |
Driver Class | text box | Enter the driver class for the database. For example, the Informix driver class is com.informix.jdbc.IfxDriver . |
Connection String | text box | Enter the database connection string. For example, the Informix database connection string is: jdbc:informix-sqli://<host>:<port>/<database>:informixserver=<dbservername> |
Connection Properties | text box | Optionally enter connector properties for a custom connection to the database in the format: propertyName=propertyValue , where each connector property is on a new line.The available connector properties are specified by the database JDBC driver. |
Validation Query | text box | Optional. Enter the database-specific validation query. The Validation query is a SQL query that can be used by the pool to validate connections before they are returned to the application. This query must be a SQL SELECT statement that returns at least one row. Examples of validation queries: ● IBM Informix: SELECT COUNT(*) FROM SYSTABLES ● Apache Derby: VALUES 1 or SELECT 1 FROM SYSIBM.SYSDUMMY1 ● HSQLDB: SELECT 1 FROM INFORMATION_SCHEMA.SYSTEM_USERS ● Firebird: SELECT 1 FROM rdb$database |
Current Time Query | text box | Optional. Enter the database-specific Current Time query. The Current Time query is a SQL query that Incorta can use to get the database server current time. Examples of Current Time queries: ● IBM Informix: SELECT CURRENT or SELECT SYSDATE ● Apache Derby: VALUES CURRENT TIMESTAMP ● HSQLDB: VALUES (current_timestamp) or CALL current_timestamp ● Firebird: Select timestamp 'NOW' from rdb$database or Select CURRENT_TIMESTAMP From rdb$database |
Set Fetch Direction | toggle | Enable to have the fetch direction set to forward. This property is only compatible with data sources that support this feature. |
Extra Options | text box | Enter supported extra options in the form of key=value . |
Use Data Agent | toggle | Enable using a data agent to securely ingest data from an external data source that is behind a firewall. For more information, please review Tools → Data Agent and Tools → Data Manager. |
Data Agent | drop down list | Enable Use Data Agent to configure this property. Select from the data agents created in the tenant, if any. |
A data agent is a service that runs on a remote host. It is also a data agent object in the Data Manager for a given tenant. An authentication file shared between the data agent object and the data agent service enables an authorized connection without using a VPN or SSH tunnel. With a data agent, you can securely extract data from one or more databases behind a firewall to an Incorta cluster. Your Incorta cluster can reside on-premises or in the cloud.
You must have the Incorta cluster enabled and configured to support the use of Data Agents. To learn more, see Concepts → Data Agent and Tools → Data Agent.
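As an illustration, a connection to an Informix database (the example vendor used in the table above) might be configured as follows. The host, port, database name, and server name are hypothetical, and the connection properties shown are examples of Informix JDBC driver properties; consult your driver's documentation for the properties it actually supports:

```
Driver Class:       com.informix.jdbc.IfxDriver
Connection String:  jdbc:informix-sqli://ifxhost:9088/sales_db:informixserver=ol_informix
Connection Properties:
IFX_LOCK_MODE_WAIT=30
LOGINTIMEOUT=20
```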
Create a physical schema with the Schema Wizard
Here are the steps to create a Custom SQL physical schema with the Schema Wizard:
- Sign in to the Incorta Direct Data Platform™.
- In the Navigation bar, select Schema.
- In the Action bar, select + New → Schema Wizard
- In (1) Choose a Source, specify the following:
- For Enter a name, enter the physical schema name.
- For Select a Datasource, select the Custom SQL external data source.
- Optionally create a description.
- In the Schema Wizard footer, select Next.
- In (2) Manage Tables, in the Data Panel, navigate the directory tree as necessary to select the Custom SQL tables. You can either check the Select All checkbox or select individual tables.
- In the Schema Wizard footer, select Next.
- In (3) Finalize, in the Schema Wizard footer, select Create Schema.
Create a physical schema with the Schema Designer
Here are the steps to create a custom SQL physical schema using the Schema Designer:
- Sign in to the Incorta Direct Data Platform™.
- In the Navigation bar, select Schema.
- In the Action bar, select + New → Create Schema.
- In Name, specify the physical schema name, and select Save.
- In Start adding tables to your schema, select SQL Database.
- In the Data Source dialog, specify the Custom SQL table data source properties.
- Select Add.
- In the Table Editor, in the Table Summary section, enter the table name.
- To save your changes, select Done in the Action bar.
Custom SQL table data source properties
For a physical schema table in Incorta, you can define the following Custom SQL-specific data source properties:
Property | Control | Description |
---|---|---|
Type | drop down list | Default is SQL Database |
Data Source | drop down list | Select the Custom SQL external data source |
Incremental | toggle | Enable the incremental load configuration for the physical schema table |
Fetch Size | text box | Used for performance improvement, fetch size defines the number of records that will be retrieved from the database in each batch until all records are retrieved. The default is 5000. |
Query | text box | Enter the SQL query to retrieve data from the Custom SQL database table |
Update Query | text box | Enable Incremental to configure this property. Enter the SQL query to retrieve data updates from the Custom SQL database table. |
Incremental Field Type | drop down list | Enable Incremental to configure this property. Select the format of the table date column: ● Timestamp ● Unix Epoch (seconds) ● Unix Epoch (milliseconds) |
Chunking Method | drop down list | Chunking methods allow for parallel extraction of large tables. The default is No Chunking. There are two chunking methods: ● By Size of Chunking (Single Table) ● By Date/Timestamp |
Chunk Size | text box | Select By Size of Chunking for the Chunking Method to set this property. Enter the number of records to extract in each chunk in relation to the Fetch Size. The default is 3 times the Fetch Size. |
Order Column | drop down list | Select By Size of Chunking for the Chunking Method to set this property. Select a column in the source table you want to order by before chunking. It's typically an ID column and it must be numeric. |
Upper Bound for Order Column | text box | Optional. Enter the maximum value for the order column. |
Lower Bound for Order Column | text box | Optional. Enter the minimum value for the order column. |
Order Column [Date/Timestamp] | drop down list | Select By Date/Timestamp for the Chunking Method to set this property. Select a column in the source table you want to order by before chunking. It should be a Date/Timestamp column. |
Chunk Period | drop down list | Select the chunk period that will be used in dividing chunks: ● Daily ● Weekly (default) ● Monthly ● Yearly ● Custom |
Number of days | text box | Select Custom for the Chunk Period to set this property. Enter the chunking period in days |
Enable Spark Based Extraction (Deprecated) | toggle | Enable a Spark job to parallelize the data ingest. This feature is deprecated and will be removed in future connector versions. |
Max Number of Parallel Queries | text box | Enable Spark Based Extraction to configure this property. Enter the maximum number of parallel queries to run at a time |
Column to Parallelize Queries on | drop down list | Enable Spark Based Extraction to configure this property. Select a numerical column in the source table that you want Spark to parallelize the extraction queries on. |
Memory per Extractor | text box | Enable Spark Based Extraction to configure this property. Enter the numerical amount of memory to use per extractor in gigabytes (GB). |
Callback | toggle | Enable this option to call back on the source data set |
Callback URL | text box | Enable Callback to configure this property. Specify the URL. |
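For example, an incremental configuration for a hypothetical `sales_orders` table with a `modified_date` timestamp column might use the following Query and Update Query. Incorta's SQL connectors typically replace a `?` placeholder in the update query with the timestamp of the last successful load; verify this behavior for your connector version:

```sql
-- Query (full load): extract all rows
SELECT order_id, customer_id, amount, modified_date
FROM sales_orders;

-- Update Query (incremental load): only rows changed since the last load
SELECT order_id, customer_id, amount, modified_date
FROM sales_orders
WHERE modified_date > ?;
```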
View the physical schema diagram with the Schema Diagram Viewer
Here are the steps to view the physical schema diagram using the Schema Diagram Viewer:
- Sign in to the Incorta Direct Data Platform™.
- In the Navigation bar, select Schema.
- In the list of schemas, select the Custom SQL schema.
- In the Schema Designer, in the Action bar, select Diagram.
Load the physical schema
Here are the steps to perform a Full Load of the Custom SQL physical schema using the Schema Designer:
- Sign in to the Incorta Direct Data Platform™.
- In the Navigation bar, select Schema.
- In the list of schemas, select the Custom SQL schema.
- In the Schema Designer, in the Action bar, select Load → Full Load.
- To review the load status, in Last Load Status, select the date.
Explore the physical schema
With the full load of the Custom SQL physical schema complete, you can use the Analyzer to explore the schema, create your first insight, and save the insight to a new dashboard.
To open the Analyzer from the schema, follow these steps:
- In the Navigation bar, select Schema.
- In the Schema Manager, in the List view, select the Custom SQL schema.
- In the Schema Designer, in the Action bar, select Explore Data.