Data Studio Manager → Data Flow Editor (starting 2026.3.0)
The Data Flow Editor enables you to easily create data flows within Data Studio using a simple drag-and-drop interface. To access the Data Flow Editor, open a specific data flow from the Data Studio Manager.
Data Flow Editor Anatomy
The interface consists of the following key components:
Action bar
The Action bar provides key actions for managing your data flow. It includes the following controls:
| Control | Description |
|---|---|
| Settings | Configure data sampling preferences and Spark properties. |
| More Options (⋮) | The menu includes the following options: ● Re-validate Dataflow ● Deploy All MVs ● Refresh Schemas ● Share Access |
| Close (X) | Exit the current data flow editor. |
View toolbar
The View toolbar provides navigation and layout controls for interacting with the data flow within the Canvas:
| Control | Description |
|---|---|
| Search bar | Search for specific recipes. Matching recipes are highlighted and navigated to on the Canvas. |
| + Zoom in / - Zoom out | Adjust the zoom level within the Canvas |
| Maximize Canvas | Expand the data flow within the Canvas for a full view |
| Layout dropdown menu | Toggle between Default and Compact layouts of the data flow |
Canvas
The Canvas serves as the central area where you build your data flow and connect data recipes. It displays all data flow components, including recipes and joins. Add recipes by dragging and dropping them from the Recipes panel.
Overview panel
The Overview panel displays a zoomed-out view of your entire data flow. Select any area within the Overview panel to instantly zoom into that section on the Canvas.
The Overview panel is located in the upper-left corner of the Canvas. Select the arrow icon to expand or collapse the panel as needed.
Recipes panel
The Recipes panel appears on the left side of the Data Flow Editor. It is expanded by default, scrollable, and includes a search field for quick recipe discovery.
The panel contains the following recipe categories:
- Input & Output, including the Input Table and Save MV recipes
- Content Transformation
- Structure Transformation
- Data Quality and Validation
- Advanced Querying
- Gen AI Operations
Edit panel
The Edit panel appears on the right side of the Data Flow Editor when you select a recipe, allowing you to configure its properties. For more information, refer to Add a new Recipe.
Results pane
The Results pane provides a detailed view of the recipe’s output. It opens at the bottom of the Canvas, and includes:
- Result Set — Displays output records in a paginated table.
- Profiling View — Offers insights into the result set through statistics, histograms, frequencies, and patterns, depending on the dataset.
- Filter — Enables filtering data by selecting specific columns.
- Columns — Displays the schema definition, listing the recipe's columns and their data types.
- Alerts — Shows warnings and alerts when required recipe configurations are missing or become invalid.
- Code — Displays an auto-generated script that shows the transformation logic of the selected recipe.
Starting 2026.3.0, Data Studio supports Oracle metadata.
Data Flow Editor Actions
Using the Data Flow Editor, you can perform the following actions to build and manage your data flow:
- Add a new recipe
- Add a dataset as a recipe
- Create a Join recipe
- Configure data flow settings
- View Recipe information with a pop-up
- Preview code for a recipe
- Re-validate a data flow
- Re-validate a recipe
- Deploy all MVs
- Refresh schemas
- Share access
Add a new Recipe
- Select a recipe by navigating to the Recipes panel on the left side of the Data Flow Editor.
- Choose a recipe type from the categories below, then configure its settings:
The recipe name links to its configuration guide.
| Category | Recipe | Description |
|---|---|---|
| Content Transformation | Filter | Remove records from a dataset based on a condition. |
| | Change Type | Change the data type of one or more columns. |
| | Select | Select which columns to keep or remove from a dataset. |
| | Unpivot | Transpose your dataset into columns and values. |
| | Sort | Sort data within a dataset. |
| | Formula | Add custom logic to create a new calculated field. |
| | Sample | Select a subset of records within your dataset. |
| | Aggregation | Aggregate your dataset and set granularity through 'group by' logic. |
| | Split | Split the dataset into two datasets. |
| | Rename | Rename column labels in your dataset. |
| Structure Transformation | Join | Join two datasets based on a set of join logic. |
| | Union | Union two datasets together. |
| Data Quality and Validation | Fuzzy Join | Cleanse data by providing a lookup table. |
| | Data Quality | Unleash the power of AI in your data flow. |
| Advanced Querying | Python | Inject custom PySpark code into your data flow. |
| | SQL | Inject custom SQL into your data flow. |
| Gen AI | LLM | Unleash the power of AI in your data flow. |
| Deploy and Eject Operations | Save MV | Save your data flow output to a Materialized View. |
- Select Save.
A confirmation message appears: Recipe added successfully!
Add a dataset as a recipe
- Drag and drop Input Table from the Recipes panel.
- Enter a name for the recipe.
- Select the Schema and Table from the drop-down lists.
- Type a custom SparkSQL incremental query that matches your criteria.
This creates a recipe on the canvas based on the selected dataset.
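An incremental query typically limits the source table to rows changed since the last run. The sketch below is a hypothetical example, not Data Studio output; the `sales` table and the `updated_at` watermark column are assumptions you would replace with your own schema.

```sql
-- Hypothetical SparkSQL incremental query for an Input Table recipe.
-- Assumes the source table has an `updated_at` timestamp column
-- that can act as a watermark; adjust to your own table and column names.
SELECT *
FROM sales
WHERE updated_at > date_sub(current_date(), 1)
```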
Create a Join recipe
- Select and drag from one recipe to another.
  This action automatically creates a Join recipe between the two datasets.
- Configure the Join recipe settings, including:
- Recipe Name
- Join Type
- Left and Right Input
- Match On
- Join Condition
For more information on configuring the Join Recipe, refer to References → Join Recipe.
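Conceptually, the Join recipe settings map onto a standard SQL join. A hedged sketch, where `orders` and `customers` are hypothetical Left and Right inputs, the Join Type is a left join, and the Match On / Join Condition is equality on `customer_id`:

```sql
-- Illustrative only: table and column names are assumptions,
-- not part of Data Studio.
SELECT o.*, c.customer_name
FROM orders AS o
LEFT JOIN customers AS c
  ON o.customer_id = c.customer_id
```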
View Recipe information with a pop-up
Hover over a recipe on the Canvas to view its information.
A pop-up displays key details, including status, last run time, duration, and row count.
For recipes with multiple outputs, such as the Split Recipe, row counts are shown for each output.
Preview code for a recipe
- Select a recipe on the Canvas.
- In the Results pane, select the Code tab.
- View the auto-generated script that represents the transformation logic of the selected recipe.
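The exact script depends on the recipe type. For a Filter recipe, the generated code might resemble the following PySpark fragment; this is an illustrative sketch only, and the DataFrame name `input_df` and the filter condition are assumptions, not actual Data Studio output.

```python
# Illustrative fragment of what a Filter recipe's generated
# PySpark code might look like (hypothetical names):
filtered_df = input_df.filter("amount > 100")  # keep only matching records
```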
Configure data flow settings
In the Action bar, select Settings.
In the Settings dialog, Enable Sampling is toggled on by default with a sample size of 1000.
Edit the Sample Size based on your performance and profiling needs.
Note: Disabling sampling or increasing the sample size may slow down execution.
Under Spark Properties, select Add Property to define key–value pairs. You can add, edit, or delete configurations as needed.
[[note | Note]]
| By default, the Spark application uses the following configuration values:
| spark.driver.memory: 1g
| spark.executor.memory: 1g
| spark.executor.cores: 1
| You can modify these default values using the new Spark configuration option.
Select Save or Save & Restart to apply the changes.
A confirmation message appears: Sampling size was changed successfully. Results will be updated after re-initializing the dataflow.
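For example, you might raise the memory defaults and tune shuffle parallelism. The keys below are standard Spark configuration properties; the values are illustrative only, not recommended settings.

```properties
# Example key-value pairs for the Spark Properties dialog (illustrative values)
spark.driver.memory=2g
spark.executor.memory=2g
spark.sql.shuffle.partitions=8
```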
Re-validate a data flow
- In the Action bar, select More Options (⋮) → Re-validate Dataflow.
- A confirmation prompt appears. Select Yes to revalidate your entire data flow.
Re-validate a recipe
- Select a recipe on the Canvas.
- In the View toolbar, select More Options (⋮) → Re-validate.
- A confirmation prompt appears. Select Yes to revalidate your selected recipe.
Deploy all MVs
- In the Action bar, select More Options (⋮) → Deploy All MVs.
- A confirmation prompt appears. Select Yes to deploy all MV recipes in the data flow.
Refresh schemas
- In the Action bar, select More Options (⋮) → Refresh Schemas.
- A confirmation prompt appears. Select Yes to refresh the schemas.
Share access
In the Action bar, select More Options (⋮) → Share Access.
In the Share dialog, type the name of the user in the With: field.
Select the eye icon to set the access level:
Can View
- Allows users to open and review the data flow, preview cached data, and view recipe nodes, code, and information.
- Users with View access cannot edit, revalidate, delete, disconnect, or share the flow.
Can Share
- Includes all view permissions and also allows users to share the data flow with others, granting either view or share access.
Can Edit
- Allows users to modify the data flow, including adding or editing recipes, validating data, deleting or disconnecting flows, and managing configurations.
- To edit a data flow, the user must also have access to the associated schema(s).
Select Share.
A confirmation message appears, indicating that access has been shared.
Best Practices
- Materialized View (MV) deployment strategies
  Choose between:
  - Updates via data flow.
  - Static data flow, where updates are managed by editing in the Notebook.
- Data sampling
- By default, data sampling is limited to 1,000 records for profiling to ensure optimal performance.
- Disabling sampling can impact performance.
- Naming conventions
- Materialized Views (MVs) can be traced back to their data flows.
- Use descriptive and consistent names for data flows and MVs.