Connectors → Oracle

About Oracle Database

Oracle Database is one of the most popular relational database management systems (RDBMS) in the world. Oracle Database has an extensive history as an RDBMS, starting in 1979. It can be configured and managed as an on-premises, cloud, or hybrid installation. Oracle Database has two computing versions: grid and cloud.

As an RDBMS, the logical data structure of an Oracle Database, or schema, is independent of the physical data storage. The schema is a collection of user-defined schema objects, or logical data structures. Schema objects have many types in an Oracle Database, but the two most common are tables and indexes. A table represents an entity. Attributes of an entity are defined by columns in the table, and rows in the table identify instances of those entities. An index can be added to the columns of a table to increase data retrieval efficiency. Indexes are logically and physically independent of the data and, as a result, can be added or removed without affecting the corresponding table.
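
For illustration, a hypothetical employees table and an index on one of its columns might be defined with SQL such as the following (the table, column, and index names are examples only, not anything required by Oracle or Incorta):

    CREATE TABLE employees (
      employee_id NUMBER PRIMARY KEY,
      last_name   VARCHAR2(50),
      salary      NUMBER(8,2),
      hire_date   DATE
    );

    -- The index speeds up lookups by last name and can later be
    -- dropped or rebuilt without affecting the employees table.
    CREATE INDEX employees_last_name_idx ON employees (last_name);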

All operations on data in an Oracle Database use SQL statements. Oracle Database provides a procedural extension of SQL, PL/SQL, which enables storing application logic in the database. Java procedures can also be published as SQL and stored in the database, and Java and PL/SQL procedures can call each other interchangeably.
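
As a minimal sketch, application logic could be stored in the database as a PL/SQL procedure such as the following, which assumes the hypothetical employees table shown above:

    -- Hypothetical example: apply a raise inside the database
    -- rather than in the client application.
    CREATE OR REPLACE PROCEDURE raise_salary (
      p_employee_id IN NUMBER,
      p_amount      IN NUMBER
    ) AS
    BEGIN
      UPDATE employees
      SET salary = salary + p_amount
      WHERE employee_id = p_employee_id;
    END raise_salary;
    /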

Oracle Database grid computing, denoted by a “g” in the version number, uses a distributed architecture over a common network. The grid assigns individual managers to its nodes, while a single system centralizes all resource control of the grid.

Oracle Database Cloud, denoted by a “c” in the version number, utilizes remote servers to store and manage data. The cloud implementation of Oracle Database centralizes management and resources of the entire cloud network.

Client applications connect to an Oracle Database through the same client/server connection process.

Oracle Connector Updates

This section explores the updates introduced in the newer versions of the Oracle connector available on the Incorta connectors marketplace.

To get the newer version of the connector, update the connector using the marketplace.

Version | Updates
2.0.1.3 | You can use additional Oracle libraries that you might need by contacting Incorta Support to add them to the shared-libs/sql-oracle folder.
2.0.2.0 | Introduced log-based incremental load support, with a known limitation: if you delete records and then perform a full load, you must reconfigure the Debezium connector to avoid retrieving the deleted records from the snapshot.
2.0.2.2 | Fixed the known limitation in 2.0.2.0.
Recommendation

Keep your connector up-to-date with the latest connector version released to get all introduced fixes and enhancements.

About the Oracle Database Connector

Incorta loads data from an Oracle database using the Oracle connector. The Incorta Oracle connector uses the ojdbc8.jar driver. The Java Database Connectivity (JDBC) Application Programming Interface (API) is the industry standard for connectivity between Java programs and databases. The Java Development Kit (JDK) is part of the core technology package for Java programming, alongside the other two major packages, the Java Virtual Machine (JVM) and the Java Runtime Environment (JRE). The Oracle JDBC driver is always compliant with the current JDK version at the time of release, so you should review Oracle database, JDBC, and JDK version compatibility when preparing to make BI connections. Incorta connects to an Oracle Database using the thin driver type, which is a 100% Java driver for client-side use. This configuration is specified in the Connection String.
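
For example, assuming a hypothetical host dbhost.example.com, the default port 1521, and a database SID of ORCL (all three values are placeholders), a thin-driver connection string would take this form:

    jdbc:oracle:thin:@dbhost.example.com:1521:ORCL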

The Oracle connector supports the following Incorta-specific functionality:

  • Chunking
  • Data Agent
  • Encryption at Ingest
  • Incremental Load
  • Multi-Source
  • OAuth
  • Performance Optimized
  • Remote
  • Single-Source
  • Spark Extraction
  • Webhook Callbacks
  • Pre-SQL support
Note

The Oracle connector supports two types of incremental load, including support for using a numeric column. To learn more, see Types of Incremental Load.

Note

The Pre-SQL support feature is available starting with the 2021.4.2 release. To learn more, see Pre-SQL support.

Steps to connect Oracle Database and Incorta

To connect Oracle and Incorta, here are the high-level steps, tools, and procedures:

Create an external data source

Here are the steps to create an external data source with the Oracle connector:

  • Sign in to the Incorta Direct Data Platform.
  • In the Navigation bar, select Data.
  • In the Action bar, select + New → Add Data Source.
  • In the Choose a Data Source dialog, in Database, select Oracle.
  • In the New Data Source dialog, specify the applicable connector properties.
  • To test, select Test Connection.
  • Select Ok to save your changes.

Oracle connector properties

Here are the properties for the Oracle connector:

Property | Control | Description
Data Source Name | text box | Enter the name of the data source.
Username | text box | Enter the database username.
Password | text box | Enter the database password.
Connection Pool | text box | Enter the connection pool size. The default is 30.
Connection String | text box | Enter the connection string for the Oracle JDBC driver. The format is jdbc:oracle:thin:@<HOST>:<PORT>:<DATABASE_NAME>. The default JDBC port for Oracle Database is 1521.
Connection Properties | text box | Enter customized key=value properties, as applicable. See Connection Properties below for details.
Use Data Agent | toggle | Enable using a data agent to securely ingest data from an external data source that is behind a firewall. For more information, review Tools → Data Agent and Tools → Data Manager.
Data Agent | drop down list | Enable Use Data Agent to configure this property. Select from the data agents created in the tenant, if any.
Important: Data Agent

A data agent is a service that runs on a remote host. It is also a data agent object in the Data Manager for a given tenant. An authentication file shared between the data agent object and the data agent service enables an authorized connection without using a VPN or SSH tunnel. With a data agent, you can securely extract data from one or more databases behind a firewall to an Incorta cluster. Your Incorta cluster can reside on-premises or in the cloud. A CMC Administrator must enable and configure an Incorta cluster to support the use of Data Agents. Only a Tenant Administrator (Super User) or a user that belongs to a group with the SuperRole role for a given tenant can create a data agent that connects to a data agent service. To learn more, see Concepts → Data Agent and Tools → Data Agent.

Connection Properties

The connection properties allow for a customized connection to your Oracle database. Properties are accepted in a key=value format and can range from username and password settings to SSL settings. The ojdbc8.jar driver determines the available connection properties. For a list of connection properties available with Incorta and an Oracle database, refer to Configuration Properties for Connector.
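
For illustration only, a few standard ojdbc connection properties in key=value form are shown below; whether you need any of them, and how multiple properties are separated in the text box, depends on your environment:

    oracle.net.CONNECT_TIMEOUT=10000
    oracle.jdbc.ReadTimeout=60000
    defaultRowPrefetch=5000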

Create a schema with the Schema Wizard

Here are the steps to create an Oracle schema with the Schema Wizard:

  • Sign in to the Incorta Direct Data Platform.
  • In the Navigation bar, select Schema.
  • In the Action bar, select + New → Schema Wizard.
  • In (1) Choose a Source, specify the following:
    • For Enter a name, enter the schema name.
    • For Select a Datasource, select the Oracle external data source.
    • Optionally create a description.
  • In the Schema Wizard footer, select Next.
  • In (2) Manage Tables, in the Data Panel, first select the name of the Data Source, and then check the Select All checkbox.
  • In the Schema Wizard footer, select Next.
  • In (3) Finalize, in the Schema Wizard footer, select Create Schema.

Create a schema with the Schema Designer

Here are the steps to create an Oracle schema using the Schema Designer:

  • Sign in to the Incorta Direct Data Platform.
  • In the Navigation bar, select Schema.
  • In the Action bar, select + New → Create Schema.
  • In Name, specify the schema name, and select Save.
  • In Start adding tables to your schema, select SQL Database.
  • In the Data Source dialog, specify the Oracle table data source properties.
  • Select Add.
  • In the Table Editor, in the Table Summary section, enter the table name.
  • To save your changes, select Done in the Action bar.

Oracle table data source properties

For a schema table in Incorta, you can define the following Oracle-specific data source properties:

Property | Control | Description
Type | drop down list | The default is SQL Database.
Data Source | drop down list | Select the Oracle external data source.
Incremental | toggle | Enable the incremental load configuration for the schema table. See Types of Incremental Load.
Incremental Extract Using | drop down list | Enable Incremental to configure this property. Select between Last Successful Extract Time and Maximum Value of a Column. See Types of Incremental Load.
Incremental Column | drop down list | Enable Incremental to configure this property. Select the column to be used for Maximum Value of a Column. The loader tracks and uses the greatest value (for example, the most recent timestamp) for each load operation.
Query | text box | Enter the SQL query to retrieve data from the Oracle database.
Update Query | text box | Enable Incremental to configure this property. Enter the SQL update query to use during an incremental load.
Incremental Field Type | drop down list | Enable Incremental to configure this property. Select the format of the incremental field: Timestamp, Unix Epoch (seconds), or Unix Epoch (milliseconds).
Pre-SQL | toggle | Enable running SQL statements or calling stored procedures before executing the original extraction query or incremental query during a load job.
Pre-SQL statement | text box | Enter the SQL statements or stored procedure calls that you want to execute before the query or incremental query during a load job.
Fetch Size | text box | Used for performance improvement, the fetch size defines the number of records retrieved from the database in each batch until all records are retrieved. The default is 5000.
Chunking Method | drop down list | Chunking methods allow for parallel extraction of large tables. The default is No Chunking. There are two chunking methods: By Size of Chunking (Single Table) and By Date/Timestamp.
Chunk Size | text box | Select By Size of Chunking for the Chunking Method to set this property. Enter the number of records to extract in each chunk in relation to the Fetch Size. The default is 3 times the Fetch Size.
Order Column | drop down list | Select By Size of Chunking for the Chunking Method to set this property. Select a column in the source table to order by before chunking. It is typically an ID column and must be numeric.
Upper Bound for Order Column | text box | Optional. Enter the maximum value for the order column.
Lower Bound for Order Column | text box | Optional. Enter the minimum value for the order column.
Order Column [Date/Timestamp] | drop down list | Select By Date/Timestamp for the Chunking Method to set this property. Select a column in the source table to order by before chunking. It should be a Date/Timestamp column.
Chunk Period | drop down list | Select the chunk period used to divide chunks: Daily, Weekly (default), Monthly, Yearly, or Custom.
Number of days | text box | Select Custom for the Chunk Period to set this property. Enter the chunking period in days.
Set Extraction Timeout | toggle | Enable the termination of extraction (loading) jobs for the SQL query of this table if the extraction time exceeds the configured timeout.
Stop after (Minutes) | text box | Enable Set Extraction Timeout to configure this property. Enter the maximum number of minutes allowed for the loading job to complete before terminating it.
Enable Spark Based Extraction | toggle | Enable the configuration of a Spark job to parallelize the data ingest.
Max Number of Parallel Queries | text box | Enable Spark Based Extraction to configure this property. Specify the maximum number of parallel queries for the Spark job.
Column to Parallelize Queries on | drop down list | Enable Spark Based Extraction to configure this property. Select the numeric column in the source table that Spark will use to parallelize the extraction queries.
Memory Per Extractor | text box | Enable Spark Based Extraction to configure this property. Allocate the amount of memory per extractor in the Spark job, in gigabytes.
Callback | toggle | Enable post-extraction callback, that is, callback on the data source data set(s) by invoking a callback URL with parameters containing details about the load job.
Callback URL | text box | Enable Callback to configure this property. Specify the callback URL.

View the schema diagram with the Schema Diagram Viewer

Here are the steps to view the schema diagram using the Schema Diagram Viewer:

  • Sign in to the Incorta Direct Data Platform.
  • In the Navigation bar, select Schema.
  • In the list of schemas, select the Oracle schema.
  • In the Schema Designer, in the Action bar, select Diagram.

Load the schema

Here are the steps to perform a Full Load of the Oracle schema using the Schema Designer:

  • Sign in to the Incorta Direct Data Platform.
  • In the Navigation bar, select Schema.
  • In the list of schemas, select the Oracle schema.
  • In the Schema Designer, in the Action bar, select Load → Full Load.
  • To review the load status, in Last Load Status, select the date.

Explore the schema

With the full load of the Oracle schema complete, you can use the Analyzer to explore the schema, create your first insight, and save the insight to a new dashboard.

To open the Analyzer from the schema, follow these steps:

  • In the Navigation bar, select Schema.
  • In the Schema Manager, in the List view, select the Oracle schema.
  • In the Schema Designer, in the Action bar, select Explore Data.

Additional Considerations

Types of Incremental Load

You can enable Incremental Load for an Oracle table data source. There are two types of incremental extracts:

Last Successful Extract Time

Fetches updates since the last time the tables were loaded, determined by the difference between the current time and the database timestamp.

Maximum Value of a Column

The column-based strategy depends on an extra column called "Incremental Column" in each table. The Oracle connector supports both timestamp and numeric columns. A timestamp column is of the type date or timestamp. A numeric column is of the type int or long.
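
As a sketch of the numeric variant, if a hypothetical orders table tracks changes with an increasing numeric column named last_update_id, the incremental update query could compare that column against the ? placeholder described in the Incremental Load Example below:

    -- Hypothetical table and column; ? is replaced with the last
    -- extracted maximum value of the incremental column.
    SELECT * FROM orders WHERE last_update_id > ?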

Note

Changing the incremental load strategy requires a full load to ensure data integrity.

Log-based incremental load

Starting with connector version 2.0.2.0, Incorta supports log-based incremental load using change data capture (CDC).

CDC is the process of identifying and capturing changes made to data in a database using logs and then delivering those changes in real time to a downstream process or system.

Note

Currently, the log-based incremental load is a preview feature.

Prerequisites

To use the log-based incremental load, you need to be aware of and apply the following:

  1. Install and configure Apache Kafka and Kafka Connect.
  2. Configure the Debezium connector. Incorta recommends using Debezium version 2.4.1.
  3. Disable the snapshot while configuring Debezium.
  4. Make sure the Debezium connector is configured to send data types to Incorta by adding the propagate property.
  5. Log-based incremental load supports only database physical tables.
  6. Tables must have primary keys.

Known limitations

  • For the time being, Incorta does not track deletion updates through the log-based incremental load (CDC).
  • There may be a minimal mismatch in INTERVAL column data types.
  • This feature supports only Kafka topics that use a single partition.

Incremental Load Example

In this example, the invoices table must contain a column of type Date or Timestamp in order to load the table incrementally with the Last Successful Extract Time strategy. In this case, the name of the date column is ModifiedDate and the format of the column is Timestamp.

Here are the data source property values for this example:

Incremental is enabled

Query contains SELECT * FROM invoices

Update Query contains SELECT * FROM invoices WHERE ModifiedDate > ?

Note

? is a variable in the update query that contains the last schema refresh date.

Incremental Field Type = Timestamp

Note

When running an update query for an incremental load, you can use the ? reference character. The ? character will be replaced with the last incremental reference to construct a valid query to the database. The ? reference character is not valid in a standard query.

Valid Query Types

When creating a query for your Oracle connector, only SELECT statements are valid. You cannot use INSERT or UPDATE statements as a query in your Oracle connector.

Pre-SQL support

This feature allows executing SQL statements before executing the original extraction query or incremental query during a load job. This helps in different scenarios, for example, when you need to do one of the following (see the example after this list):

  • Set the security context of the query before extracting data
  • Run stored procedures
  • Update the data source before extraction
  • Create a temporary table before executing the extraction query
  • Delete old records in the source so that the source includes only the latest records
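
For example, a Pre-SQL statement that clears old records from a hypothetical staging table before extraction might look like the following (the table and column names are illustrative, and your source database must allow the statement):

    -- Hypothetical staging table: keep only the last 30 days of records
    DELETE FROM staging_invoices WHERE modified_date < SYSDATE - 30
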
Note

The data source database management system determines the accepted statements and supported syntax. If the Pre-SQL statement fails during a load job, the object data extraction and loading fail and an error message appears. The logs will also contain the failure reason.