Release Notes 5.1
Release Highlights
The goal of the Incorta 5.1 release is to enhance data management and analytics capabilities.
This release introduces several major improvements to the Cluster Management Console (CMC), Incorta Loader Service, and Incorta Analytics Service, in addition to the other services.
This release improves physical schema synchronization and enhances both data consistency and availability. The release also offers a new Snowflake connector, a new Waterfall visualization, a new Stacked Column and Line visualization, a new search bar in the Insight panel, support for remote table queries using the SQLi interface, user authentication for the Apache Spark web user interfaces, the ability to configure a default schema in the CMC, the ability to control column width in Table insights, the ability to copy and paste formula pill(s) to other trays in the Analyzer, and much more.
When migrating shared storage files from one Incorta cluster to another, for example, from a User Acceptance Testing (UAT) environment to a Production environment, you must first copy the source directory that contains the parquet files, and then perform a load from staging. Copying only the directories that contain the snapshot ddm files and the source parquet files between cluster environments does not produce the same result.
To migrate a single object in a physical schema, you need to copy the entire object directory (which contains all of the object's parquet files) that exists under the physical schema in the source directory. The path to the object directory is as follows: /home/incorta/IncortaAnalytics/Tenants/<tenant_name>/source/<schema_name>/<object_name>.
Both environments must run an Incorta release that supports file versioning, and the copied files must not have records in the FILES_VERSIONS or VERSION_LOCK metadata database tables.
Important new features and enhancements
There are several important features in this release:
- Data consistency and availability
- Physical Schema synchronization
- Waterfall visualization
- Stacked Column and Line visualization
- Dashboard enhancements
- Snowflake Connector
- Business Schema Manager enhancements
- User Authentication for the Apache Spark web user interfaces
- Chromium headless browser requirement
Additional improvements and enhancements
- Zookeeper Upgrade to v3.6.1
- Treemap visualization enhancements
- Control column width in Table insights
- Visualizations menu search bar
- A new default color palette
- Support for the to_char PostgreSQL format function
- Accessibility enhancements
- Dashboard search improvements
- Performance improvements
Chromium headless browser requirement
This release requires the installation of various operating system libraries for a Chromium headless browser. The Chromium headless browser supports the generation of a dashboard in either HTML or PDF format.
CentOS and RedHat operating systems
A Linux system administrator with root access to the host or hosts in the Incorta cluster must install the following packages prior to upgrade or for a new installation:
sudo yum install pango.x86_64 libXcomposite.x86_64 libXcursor.x86_64 libXdamage.x86_64 libXext.x86_64 libXi.x86_64 libXtst.x86_64 cups-libs.x86_64 libXScrnSaver.x86_64 libXrandr.x86_64 GConf2.x86_64 alsa-lib.x86_64 atk.x86_64 gtk3.x86_64 -y
sudo yum install ipa-gothic-fonts xorg-x11-fonts-100dpi xorg-x11-fonts-75dpi xorg-x11-utils xorg-x11-fonts-cyrillic xorg-x11-fonts-Type1 xorg-x11-fonts-misc -y
To install Chinese, Japanese, and Korean fonts, install the following packages:
sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
sudo yum install -y google-noto-cjk-fonts
Ubuntu operating systems
A Linux system administrator with root access to the host or hosts in the Incorta cluster must install the following packages prior to upgrade or for a new installation:
sudo apt install libxkbcommon-x11-0 libgbm1 libgtk-3-0
To install Chinese, Japanese, and Korean fonts, install the following packages:
sudo apt-get install fonts-noto
sudo apt-get install fonts-noto-cjk
Chromium headless browser dependencies
Here is a list of libraries and packages that a Chromium headless browser requires:
libpthread.so.0, libdl.so.2, librt.so.1, libm.so.6, libc.so.6, ld-linux-x86-64.so.2, libX11.so.6, libX11-xcb.so.1, libxcb.so.1, libXcomposite.so.1, libXcursor.so.1, libXdamage.so.1, libXext.so.6, libXfixes.so.3, libXi.so.6, libXrender.so.1, libXtst.so.6, libgobject-2.0.so.0, libglib-2.0.so.0, libnss3.so, libnssutil3.so, libsmime3.so, libnspr4.so, libcups.so.2, libdbus-1.so.3, libexpat.so.1, libXss.so.1, libXrandr.so.2, libgio-2.0.so.0, libasound.so.2, libpangocairo-1.0.so.0, libpango-1.0.so.0, libcairo.so.2, libatk-bridge-2.0.so.0, libgtk-3.so.0, libgdk-3.so.0, libgdk_pixbuf-2.0.so.0, libgcc_s.so.1
Cluster Management Console
In this release, there are new features and enhancements in the CMC:
- Zookeeper Upgrade to v3.6.1
- A new default color palette
- Configure a default physical schema in the CMC
Zookeeper Upgrade to v3.6.1
In this release, the version of Zookeeper used with Incorta has been upgraded to v3.6.1 to support SSL. To enable SSL for Zookeeper, please review Security → Enable Zookeeper SSL.
If you are using an external version of Zookeeper that is not bundled with Incorta, you must upgrade your Zookeeper instance manually with the following steps:
- Replace the existing zookeeper folder with the one from <INCORTA_INSTALLATION_PATH>/IncortaNode, with the exception of the zookeeper/conf/zoo.cfg file.
- Add the admin.enableServer=false property to zoo.cfg.
- Delete any files inside the <INCORTA_INSTALLATION_PATH>/IncortaNode/zookeeper_data folder.
- Restart Zookeeper.
If you have multiple nodes, repeat the above steps for each Zookeeper node.
The Zookeeper upgrade to v3.6.1 is backward compatible with all Incorta versions.
A new default color palette
In this release, a new default color palette for the cluster tenants is available: Sophisticated.
- After upgrade, if the cluster already had a configured palette, the previous palette remains selected.
- If there is no configured palette, the default color palette will be Sophisticated.
- If you select a specific palette for a dashboard, exporting or downloading this dashboard, a tab, or an insight on it uses the selected palette.
- If there is no configured dashboard palette, exports and downloads use the cluster default palette.
Configure a default physical schema in the CMC
If you specify a table name in a query, but do not precede the table name with a physical schema name, Incorta will use the default schema when the table name exists in multiple schemas.
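For illustration, assuming a Default Schemas list of HR,SALES and an EMPLOYEES table that exists in both the HR and SALES physical schemas (hypothetical names), an unqualified query from an external tool would resolve as sketched below:

-- Hypothetical sketch: Default Schemas = HR,SALES
SELECT EMPLOYEE_ID, FULL_NAME
FROM EMPLOYEES          -- unqualified name; resolves to HR.EMPLOYEES because HR is listed first

SELECT EMPLOYEE_ID, FULL_NAME
FROM SALES.EMPLOYEES    -- fully qualified name; no default schema lookup is needed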
Configure a default schema for the Default Tenant Configuration
Here are the steps to configure a default physical schema for the Default Tenant Configuration:
- Sign in to the CMC as the CMC Administrator
- In the Navigation bar, select Clusters
- In the cluster list, select a Cluster name
- In the canvas tabs, select Cluster Configurations
- In the panel tabs, select Default Tenant Configurations
- In the left pane, select External Visualization Tools
- In the right pane, in Default Schemas, enter a comma-delimited list of default physical schemas. These schemas are considered in order whenever a table name that is not fully qualified is encountered in the processed SQL query from an external tool. Here is an example of a Default Schemas list:
HR,SALES
- Select Save
Configure a default schema for a Tenant Configuration
Here are the steps to configure a default physical schema for a specific tenant:
- Sign in to the CMC as the CMC Administrator
- In the Navigation bar, select Clusters
- In the cluster list, select a Cluster name
- In the canvas tabs, select Tenants
- For the given tenant, select Configure
- In the left pane, select External Visualization Tools
- In the right pane, in Default Schemas, enter a comma-delimited list of default physical schemas.
These schemas are considered in order whenever a table name that is not fully qualified is encountered in the processed SQL query from an external tool. Here is an example of a Default Schemas list:
HR,SALES
- Select Save
Incorta Analytics and Loader Service
The 5.1 release introduces several key improvements to the Incorta Analytics and Loader Services such as:
- Data consistency and availability
- Analyzer enhancements
- New Visualizations and enhancements
- Dashboard enhancements
- Content Manager user interface enhancements
- Business Schema Manager enhancements
- Snowflake Connector
- Connector enhancements
- Physical Schema Synchronization
- Incorta SQL Table
Data consistency and availability
This release introduces enhancements that help maintain a high level of data consistency and availability. These enhancements ensure the following:
- Data consistency across the system at any point in time
- Reduced input/output (I/O) operations
- Minimal query time
The updated mechanism adopts the following approach to achieve the previous goals:
- Creating new versions of files created during a load job or a schema update job instead of overwriting the existing file versions
- A new directory structure for files created during a load job or a schema update job to support multiple file versions
- A locking mechanism to mark file versions that are in use so that the cleanup job does not delete them
- A cleanup job to delete unneeded file versions
- A change in the Incorta metadata database to store locking and versioning information
In this release, load and schema update jobs no longer use the parquet and snapshots directories. These jobs save their files to the source and ddm directories as appropriate.
In the case of upgrading from a previous release to release 5.1, you must either run the Versioning Migration Tool after upgrading the cluster metadata database or perform a full load for all physical schemas in all tenants.
What to expect after upgrading to 5.1
- A new directory structure where the source and ddm directories replace the old parquet and snapshots directories respectively.
- A new setting in the Cluster Configurations to set the Cleanup job time interval.
- Enabling the Sync In Background feature will not cause data inconsistency issues as experienced in some cases before.
- Some files will not be available or used anymore, such as .zxt, .zxs, and load-time.log files.
When upgrading to release 5.1, you must ensure that you have adequate disk space in shared storage. If you create a backup prior to the migration, you need to account for the backup size as part of your disk space calculations. The default backup directory is the tenant directory. You can specify a different directory.
To learn more about the new implementation, review References → Data Consistency and Availability.
Analyzer enhancements
There are several enhancements and improvements to the Analyzer in this release:
For a measure pill, the Properties panel now supports a custom number format. You can add a prefix and/or a suffix that does not contain single quotes. In addition, the URL box is now vertically expandable and selecting a URL does not automatically close the panel.
You can now copy and paste formula pill(s) to other trays.
- Select multiple formula pills:
  - On Mac, press Command
  - On Windows, press Ctrl
- Drag the duplicated pill(s) to the desired tray:
  - On Mac, press Option
  - On Windows, press Alt
New Visualizations and enhancements
In this release, there are two new visualizations: the Waterfall visualization and the Stacked Column and Line visualization. In addition, this release includes several enhancements and improvements to existing visualizations:
Waterfall Visualization
This release introduces the new Waterfall Chart visualization. A Waterfall visualization allows you to visualize the effect of positive and negative values applied to a starting value. A typical waterfall insight shows how an initial value is increased and decreased by a series of intermediate time-based or category-based values, leading to a final value. To learn more, review Visualizations → Waterfall.
Stacked Column and Line Visualization
This release introduces the new Stacked Column with Line visualization. A stacked column and line visualization allows you to compare grouped data against one or more series of data points. The stacked column and line visualization plots the data using stacked rectangular columns that are color grouped and whose lengths are proportional to the values they represent; and the series of data points connected by straight line segments. To learn more, review Visualizations → Stacked Column with Line.
Treemap enhancements
A Treemap visualization now supports more than one measure pill. To learn more, please review Visualizations → Treemap.
Control column width in Table insights
This release introduces the new “Table Width” setting in the Settings panel of the Listing Table, Aggregated Table, and Pivot Table insights, with two possible values:
- Dynamic - where the table width adjusts automatically according to the length of the table content
- Customized - where you can control the width of the table
You can control the width of the Table insights in both the Analyzer and in an insight on a dashboard. In the Analyzer or a Dashboard, select the border of the column header and drag to the left or right to change its width.
Only dashboard owners or users with edit rights can modify the column width using the Analyzer. Modifying the column width through the Analyzer will affect how a table looks for the rest of the Dashboard users.
Users with view or share access rights can modify the column width in the dashboard view only; such modifications will not affect other users, nor will they be saved.
Visualizations menu search bar
This release introduces a new search bar for the visualizations in the Insight panel. You can now start typing the name of the visualization you want to create, and the list of visualizations is filtered accordingly. You can type part of or the whole visualization name.
For example, when you start typing sta or stacked, the following search results will appear:
- Stacked Column
- Stacked Bar
- Stacked Area
- Stacked Line
- Stacked Column and Line
Dashboard enhancements
This release introduces several enhancements for dashboards:
Enhancements for compact mode
This release includes the following enhancements for a dashboard in compact mode:
- The sandpaper icon no longer overlaps the title of an insight.
- A KPI insight has a reduced height to minimize white space.
Dashboard user interface enhancements
There are several user interface enhancements for a dashboard:
- All dashboard tabs appear as individual tabs in a Tab bar.
- The Action bar contains the Bookmark, Search, and Filter controls.
- When the width of a browser window narrows, the Action bar makes available various controls and menus.
- The Filter bar no longer appears by default unless there is a default prompt, presentation variable, or dashboard runtime filter.
- When applicable, the Filter bar is sticky; that is, it stays at the top of the page as you scroll.
- When you scroll down, both the Navigation bar and Action bar disappear.
- When you start scrolling up, the Action bar appears again. The Navigation bar appears only when you scroll back to the top of a dashboard.
- The More Options (⋮ vertical ellipsis icon) menu contains Set as Default or Remove as Default. There is no longer a Pin to set a dashboard as your default dashboard.
- The Go To menu displays the Dashboard and Tab name.
Here are the new dashboard behaviors related to saving an insight:
- When you save your changes to a new insight or an existing insight, the dashboard will scroll to the new insight. A notification message now displays a Back to Top link. Use this link to scroll up to the first insight in the dashboard.
- When you save an insight to a tab on another dashboard, you automatically navigate to the insight on that dashboard tab.
Content Manager user interface enhancements
This release introduces several new user interface enhancements for the Content Manager:
- The Search bar is removed.
- The Search control and the toggle between List View (bullets and lines icon) and Card View (four squares icon) are in the Action bar.
- The Action bar is sticky; that is, it stays at the top of the page as you scroll.
- When you scroll down, the Navigation bar disappears.
Business Schema Manager enhancements
This release enhanced the pagination in the Business Schema Manager. The List View in the Business Schema Manager now supports a page size of 40 rows.
Snowflake Connector
Snowflake is a cloud data warehouse with a SQL interface. Snowflake stores both structured and semi-structured data, including JSON and XML. To learn more, please review Connectors → Snowflake.
Connector enhancements
This release includes the following enhancements and improvements to existing connectors:
- Support limiting the size of cached files for Oracle Cloud Application connector
- Enhancements to the Apache Kafka connector
- Enhanced warning messages
Support limiting the size of cached files for Oracle Cloud Application connector
In addition to the retention period that you can define for keeping the cached files, you can now control the maximum size of cached files so that they do not exceed a preset limit. This protects Incorta from running out of disk space when a long retention period is selected.
Enhancements to the Apache Kafka connector
With this release, the Apache Kafka connector now supports the following for a given external data source:
- defining the security protocol options
- uploading a consumer properties file
To learn more, please review Connectors → Apache Kafka.
Enhanced warning messages
An enhanced warning message now emphasizes the need to define a key column for physical schema tables with the Incremental option enabled. This change applies to physical schema tables with a data source type of SQL Database, that is, any table with a data source for Microsoft SQL Server, MySQL, Presto, or any other SQL-based data source.
Physical Schema synchronization
A check has been added to ensure that the metadata for a physical schema in Loader Service memory synchronizes with the version that is committed to the Incorta Metadata. When differences exist, the physical schema in the memory of the Loader Service will update to the version in the Incorta Metadata database. The synchronization check is performed as follows:
- At the start of a load job
- At the start of an update job for a physical schema
- Every five minutes in the Analytics Service
In the physical schema list view, the Schema Manager now shows a new column: Model Update Status. This column indicates the status of the physical schema synchronization. The column also appears in the Summary section of the Schema Designer. To learn more, please review Tools → Model Update Viewer.
Incorta SQL Table
Here are the new features and enhancements for the Incorta SQL Table in this release:
Support Window Functions
This release extends the SQL engine to support window functions.
A window function performs a calculation across a set of table rows that are somehow related to the current row. This is comparable to the type of calculation that can be done with an aggregate function. Unlike regular aggregate functions, using a window function does not cause rows to be grouped into a single output row — the rows retain their separate identities.
With this new capability you can do the following:
- Use window functions to create complex queries in Incorta SQL Tables and SQLi
- Create advanced analytics expressions
This release currently supports the following window functions:
- AVG
- SUM
- MAX
- MIN
- LAG
- LEAD
- ROW_NUMBER
In addition, this release currently supports the following keywords:
- PARTITION BY
- ORDER BY
For example purposes, consider running the following window function query on the SALES schema:
SELECT PROD_ID,
  SUM(AMOUNT_SOLD) OVER (PARTITION BY CALENDAR_YEAR ORDER BY CALENDAR_MONTH)
FROM SALES
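For a slightly broader sketch that reuses the same columns (adjust the names to match your schema), the supported functions can be combined in a single query:

-- Sketch only: running total, prior-month value, and row position per calendar year
SELECT PROD_ID,
  CALENDAR_YEAR,
  CALENDAR_MONTH,
  SUM(AMOUNT_SOLD) OVER (PARTITION BY CALENDAR_YEAR ORDER BY CALENDAR_MONTH) AS RUNNING_TOTAL,
  LAG(AMOUNT_SOLD) OVER (PARTITION BY CALENDAR_YEAR ORDER BY CALENDAR_MONTH) AS PRIOR_MONTH_AMOUNT,
  ROW_NUMBER() OVER (PARTITION BY CALENDAR_YEAR ORDER BY CALENDAR_MONTH) AS MONTH_POSITION
FROM SALES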
Capture the full SQL for an Incorta SQL Table in sqli_audit
In addition to the full SQL of an Incorta SQL Table, the sqli_audit file contains the query execution time and the memory used by the SQL engine. The sqli_audit folder that contains the sqli_audit files is located in Incorta Analytics under Data → Local Data Files.
Here are the steps to create a CSV of a sqli_audit file:
- Create a physical schema with sqli_audit as the data source.
- Load the sqli_audit schema.
- Create a dashboard listing table insight from the sqli_audit schema.
- Download the table as a CSV file.
Apache Spark, Materialized Views, and Incorta ML
In this release, there are several improvements related to Apache Spark:
- Resource allocation enhancements for materialized views
- User Authentication for the Apache Spark web user interfaces
- Support for the to_char PostgreSQL format function
- SQLi enhancements
- Additional Considerations for PySpark and Materialized Views
Resource allocation enhancements for materialized views
This release includes enhancements for resource allocation for a materialized view. For an existing materialized view in a physical schema, during a load job, the Load Job Viewer now includes a new status, Allocating Resources, in the Load Job Details.
The Allocating Resources status differentiates between a materialized view that is running versus one that is waiting for resource allocation. A materialized view waiting to be run will move from a status of In Queue to Allocating Resources. A materialized view that is running will show a status of Enriching.
When adding or saving the data source properties of a new or existing materialized view, there are now several messages that help indicate how Apache Spark is validating the job. Validating or saving a materialized view submits a temporary job to Apache Spark. The job executes the code of the materialized view. These messages are:
Message | Description
---|---
Starting | The Spark job is starting.
Waiting for Resources | The job is in the Spark application queue and will remain in the queue until resources are available.
Submitted | Spark has resources for the job to run. The job is now running and is no longer in a queue.
Failing | There is an error or exception with the job.
User Authentication for the Apache Spark web user interfaces
In this release, you can now enforce both Incorta and LDAP user authentication for the Apache Spark web user interfaces: Master, Worker, and Job Details.
User authentication consists of the following:
- Username: USERNAME@TENANT_NAME
- Password: PASSWORD
A username is the Login Name in the profile of a user.
Example
Here is an example for a user with the username test@1.com and the tenant name default:
- Username: test@1.com@default
- Password: password123
This feature requires the installation of a separate JAR file, SparkUIAuth-1.0-SNAPSHOT.jar.
Here is an overview of the steps to install the SparkUIAuth-1.0-SNAPSHOT.jar for an on-premises, standalone Incorta Cluster:
- Download the JAR file
- Secure copy the JAR file
- Stop Apache Spark
- Modify the Spark Default configuration file
- Start Apache Spark
- Restart the Analytics and Loader Services
Installation of the JAR requires:
- a Linux system administrator with root access to the host running Apache Spark
Download the JAR file
You can contact Incorta Support to download the JAR file or use this link:
Secure copy the JAR file
As the Linux system administrator, use a secure copy (scp) to access the Incorta host.
- In bash shell, specify the host, pem key, user, and JAR file.
INCORTA_NODE_HOST=100.101.102.103
INCORTA_NODE_HOST_PEM_FILE="host_key.pem"
INCORTA_NODE_HOST_USER="incorta"
SPARK_JAR_FILE="SparkUIAuth-1.0-SNAPSHOT.jar"
- In bash shell, from the Downloads directory or similar, use scp to securely copy the JAR file.
cd ~/Downloads
scp -i ~/.ssh/${INCORTA_NODE_HOST_PEM_FILE} ${SPARK_JAR_FILE} ${INCORTA_NODE_HOST_USER}@${INCORTA_NODE_HOST}:/tmp/
- Secure shell into the host.
ssh -i ~/.ssh/${INCORTA_NODE_HOST_PEM_FILE} ${INCORTA_NODE_HOST_USER}@${INCORTA_NODE_HOST}
- Change the ownership and file bits of the JAR file to the incorta user or other user that runs the Apache Spark related processes.
sudo su incorta
SPARK_JAR_FILE="SparkUIAuth-1.0-SNAPSHOT.jar"
cd /tmp
sudo chown incorta:incorta ${SPARK_JAR_FILE}
sudo chmod 777 ${SPARK_JAR_FILE}
- Copy the JAR file to the JARs directory for Apache Spark.
INCORTA_NODE_INSTALLATION_PATH=/home/incorta/IncortaAnalytics/IncortaNode
INCORTA_SPARK_INSTALLATION_PATH=${INCORTA_NODE_INSTALLATION_PATH}/spark
cp /tmp/${SPARK_JAR_FILE} ${INCORTA_SPARK_INSTALLATION_PATH}/jars/${SPARK_JAR_FILE}
- Confirm the existence of the JAR file.
ls -l ${INCORTA_SPARK_INSTALLATION_PATH}/jars/ | grep ${SPARK_JAR_FILE}
Stop Apache Spark
You can stop Apache Spark using the stopSpark.sh shell script.
${INCORTA_NODE_INSTALLATION_PATH}/stopSpark.sh
Modify the Spark Default configuration file
Next, you must modify the spark-defaults.conf file using vim or a similar editor.
- Modify the Spark Default configuration file using vim.
vim ${INCORTA_SPARK_INSTALLATION_PATH}/conf/spark-defaults.conf
- To switch to insert mode, use the i keystroke.
- Add the following configurations, replacing the following:
  - AUTHENTICATION_TYPE is either incorta or ldap
  - ANALYTICS_SERVICE_URL is the URL for the Analytics Service
  - PORT is the port for the Analytics Service (the default is 8080)

spark.ui.filters com.incorta.BasicAuthFilter
spark.com.incorta.BasicAuthFilter.param.realm AUTHENTICATION_TYPE
spark.com.incorta.BasicAuthFilter.param.url http://ANALYTICS_SERVICE_URL:PORT

- To save your changes, first switch back to read mode using the ESC keystroke. Next, use the :wq! keystroke to save your changes.
Start Apache Spark
You can start Apache Spark using the startSpark.sh shell script.
${INCORTA_NODE_INSTALLATION_PATH}/startSpark.sh
Verify your changes
Here are the steps to verify your changes.
- Open the Spark Master Web UI that typically runs on port 9091
- Enter your username as a combination of your username and the tenant name: USERNAME@TENANT_NAME
- Enter the password for the user.
- Verify that you are able to see the Spark Master Web UI.
Support for the to_char PostgreSQL format function
This release supports the to_char() PostgreSQL format function to convert various data types such as date/time, integer, floating point, and numeric to formatted strings.
The function is available for a PostgreSQL script in a materialized view. The function is also available for a SQL query that uses the SQL interface (SQLi) and Apache Spark.
Template Patterns for Date/Time formatting
The following table details the supported patterns for Date/Time formatting:
Pattern | Description |
---|---|
HH | hour of day (01–12) |
HH12 | hour of day (01–12) |
HH24 | hour of day (00–23) |
MI | minute (00–59) |
SS | second (00–59) |
MS | millisecond (000–999) |
US | microsecond (000000–999999) |
MM | month number (01–12) |
YYYY | year (4 or more digits) |
Y,YYY | year (4 or more digits) with comma |
YYY | last 3 digits of year |
YY | last 2 digits of year |
Y | last digit of year |
IYYY | ISO 8601 week-numbering year (4 or more digits) |
IYY | last 3 digits of ISO 8601 week-numbering year |
IY | last 2 digits of ISO 8601 week-numbering year |
I | last digit of ISO 8601 week-numbering year |
MONTH | full uppercase month name (blank-padded to 9 chars) |
month | full lowercase month name (blank-padded to 9 chars) |
Month | full capitalized month name (blank-padded to 9 chars) |
MON | abbreviated uppercase month name (3 chars in English, localized lengths vary) |
mon | abbreviated lowercase month name (3 chars in English, localized lengths vary) |
Mon | abbreviated capitalized month name (3 chars in English, localized lengths vary) |
DAY | full uppercase day name (blank-padded to 9 chars) |
day | full lowercase day name (blank-padded to 9 chars) |
Day | full capitalized day name (blank-padded to 9 chars) |
DY | abbreviated uppercase day name (3 chars in English, localized lengths vary) |
dy | abbreviated lowercase day name (3 chars in English, localized lengths vary) |
Dy | abbreviated capitalized day name (3 chars in English, localized lengths vary) |
W | week of month (1–5) (the first week starts on the first day of the month) |
WW | week number of year (1–53) (the first week starts on the first day of the year) |
IW | week number of ISO 8601 week-numbering year (01–53; the first Thursday of the year is in week 1) |
DDD | day of year (001–366) |
DD | day of month (01–31) |
D | day of the week, Sunday (1) to Saturday (7) |
ID | ISO 8601 day of the week, Monday (1) to Sunday (7) |
AM , am , PM or pm | meridiem indicator (without periods) |
A.M. , a.m. , P.M. or p.m. | meridiem indicator (with periods) |
BC , bc , AD or ad | era indicator (without periods) |
B.C. , b.c. , A.D. or a.d. | era indicator (with periods) |
SSSS , SSSSS | seconds past midnight (0–86399) |
RM | month in uppercase Roman numerals (I–XII; I=January) |
J | Julian Date (integer days since November 24, 4714 BC at local midnight) |
CC | century (2 digits) (the twenty-first century starts on 2001-01-01) |
Q | quarter |
TZ | uppercase time-zone abbreviation |
TZH | time-zone hours |
TZM | time-zone minutes |
OF | time-zone offset from UTC |
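As a hedged sketch of how these patterns combine (the ORDERS table and ORDER_DATE column are hypothetical), a to_char call in a materialized view or SQLi query could look like the following; example output is shown in the comments:

SELECT
  to_char(ORDER_DATE, 'YYYY-MM-DD HH24:MI:SS') AS ORDER_TIMESTAMP,  -- 2021-03-05 14:30:00
  to_char(ORDER_DATE, 'Day, DD Month YYYY') AS ORDER_DAY,           -- Friday   , 05 March     2021 (blank-padded names)
  to_char(ORDER_DATE, 'Q') AS ORDER_QUARTER                         -- 1
FROM ORDERS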
Template Pattern Modifiers for Date/Time Formatting
You can apply a modifier to any template pattern to alter its behavior. Here are the supported modifiers:
Pattern | Description | Example |
---|---|---|
FM | fill mode (suppress leading zeroes and padding blanks) | FMMonth |
FM suppresses leading zeros and trailing blanks that would otherwise be added to make the output of a pattern be fixed-width. In PostgreSQL, FM modifies only the next specification, while in Oracle, FM affects all subsequent specifications, and repeated FM modifiers toggle fill mode on and off.
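For example, a short sketch contrasting the default padded output with fill mode (hypothetical ORDERS table, with a March 5 example date):

SELECT
  to_char(ORDER_DATE, 'Month DD, YYYY') AS PADDED,        -- 'March     05, 2021'
  to_char(ORDER_DATE, 'FMMonth FMDD, YYYY') AS FILL_MODE  -- 'March 5, 2021'
FROM ORDERS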
Template Patterns for Numeric formatting
The following table details the supported patterns for Numeric formatting:
Pattern | Description |
---|---|
9 | digit position (can be dropped if insignificant) |
0 | digit position (will not be dropped, even if insignificant) |
. | (period) decimal point |
, | (comma) group (thousands) separator |
PR | negative value in angle brackets |
S | sign anchored to number (uses locale) |
L | currency symbol (uses locale) |
D | decimal point (uses locale) |
G | group separator (uses locale) |
MI | minus sign in specified position (if number < 0) |
PL | plus sign in specified position (if number > 0) |
SG | plus/minus sign in specified position |
V | shift specified number of digits |
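For illustration, a sketch of to_char with some of the numeric patterns above (the column names are hypothetical; example output is shown in the comments):

SELECT
  to_char(ORDER_TOTAL, '9,999,999.99') AS GROUPED_TOTAL,    -- ' 1,234,567.89'
  to_char(DISCOUNT_RATE, 'FM0.00') AS FIXED_DIGITS,         -- '0.45'
  to_char(BALANCE, '9,999PR') AS NEGATIVE_IN_BRACKETS       -- '<1,250>' for -1250
FROM ORDERS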
SQLi Enhancements
Here are the SQLi enhancements for this release:
- Support for PostgreSQL left and right string functions
- Improved PostgreSQL compliance for built-in functions
- Support for remote table queries using the SQLi interface
- Configure a default physical schema in the CMC for SQLi
Support for PostgreSQL left and right string functions
The left(str text, n int) function returns the first n characters in a string, and the right(str text, n int) function returns the last n characters in a string. Refer to the PostgreSQL 10 documentation for more information.
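A brief sketch of both functions (the CUSTOMERS table and PHONE_NUMBER column are hypothetical):

SELECT
  left(PHONE_NUMBER, 3) AS AREA_CODE,    -- first 3 characters
  right(PHONE_NUMBER, 4) AS LINE_NUMBER  -- last 4 characters
FROM CUSTOMERS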
Improved PostgreSQL compliance for built-in functions
To display the full SQL of a formula column for a dashboard, follow these steps:
- Open the Analyzer
- Create a Listing Table or Aggregated Table visualization with at least one measure and a formula column
- In the Action bar, select SQL to display the corresponding SQL of the formula column
Support for remote table queries using the SQLi interface
A remote table is a physical schema table that uses a Data Lake connector as a data source and has the Remote property enabled. A full load for a remote table does not extract data. In this release, you can query a remote table using PostgreSQL and the SQLi interface.
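Querying a remote table through the SQLi interface looks the same as querying any other physical schema table; here is a sketch with hypothetical schema and table names:

SELECT ORDER_ID, ORDER_STATUS
FROM DATALAKE_SCHEMA.REMOTE_ORDERS  -- a physical schema table with the Remote property enabled
LIMIT 100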
Additional Considerations for PySpark and Materialized Views
A PySpark materialized view utilizes an internal method for read(). To avoid conflict, do not define a method with the same name, even if the definition exists within a custom Python namespace.
It is possible to utilize a read() method from an external module that is available in the PYTHON_PATH.

import my_package
# Calling read() from an imported module does not conflict with the internal read()
my_package.read("someValue")
Additional Features and Enhancements
Here are the new additional features and enhancements for this release:
- Formula column creation after save
- Key column changes require a full load for the physical schema table
- Accessibility enhancements
- Support locking on the level of Tables and Joins
- Dashboard search improvements
- Performance improvements
Formula column creation after save
This release requires that you first save a new physical schema table or new materialized view prior to creating a formula column.
Key column changes require a full load for the physical schema table
Adding or deleting a key column in a physical schema table requires a full load for that table.
Accessibility enhancements
This release introduces enhanced accessibility for the following:
- Sign-in page
- Catalog (Content Manager)
- Dashboard Manager
The applied accessibility options include, but are not limited to, the following:
- For all tabular visualizations (Listing Table, Aggregated Table, and Pivot Table), allowing users to use the keyboard and the screen reader (VoiceOver, for example) to go through the table header and cells and read out the table data, including the total and subtotals, #Error and Null values, formula columns, etc.
- Using colors for text, items, and controls that have more contrast with the background
- Adding a border around items, links and options when they have focus
- Allowing users to show or hide the password using the keyboard
- Allowing users to navigate all dropdown menus using the keyboard arrows
- Reading links with their respective labels
- Announcing the following:
- in-progress processes, such as the sign-in process
- menus and panels status: opened or closed
- in-line error messages
- the title of a new page
- Using suitable icon labels and descriptions
- Removing submenus from the tab and insight More Options menus and moving their items either to modals or to the main menu for better accessibility
- Adding visual labels to unlabeled controls
- Allowing using the keyboard to access tooltips
- Supporting accessibility options on different browsers
Support locking on the level of Tables and Joins
Incorta now supports locking at the level of tables and joins instead of locking the whole schema, to permit and facilitate the query, load, update, and synchronization processes.
Dashboard search improvements
There are some improvements applied to the filter Prompt search. When you enter a keyword, the Prompt search lists all results that contain this keyword at any position in the string. The search results are now sorted based on the keyword you entered.
Performance improvements
There is a performance improvement in the processing time of multiple dimension formulas in an insight when a filter is applied. They are now processed in parallel instead of sequentially, as in prior releases.