Workflow Nodes Overview
This page lists all the node types currently available in the workflow, along with a brief description of their functionality.
For better navigation, the nodes are divided into logical sections and listed in the same order as they appear in the Workflow palette. You can also find a description of each node by clicking on the node in the palette or by clicking the (How-To) icon in the node modal.
The Assign node is essential in workflows for managing data transformations and storage. It allows users to assign fixed values or the results of expressions to outputs or variables, streamlining data flow and facilitating dynamic decision-making within workflows.
Assign Fixed Values: Directly set specific values to variables or output fields.
Map Data Between Variables: Transfer or transform data from one variable to another using simple assignments or complex mappings.
Expression Evaluation: Evaluate expressions and assign the result to variables, enabling dynamic and conditional data handling.
Data Initialization: Initialize variables with specific values at the start of a workflow to ensure they are ready for use.
Dynamic Calculations: Use expressions to perform calculations and assign the result to a variable for later use in the workflow.
Data Synchronization: Map data from inputs to internal workflow variables to maintain consistency and integrity across different workflow stages.
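For illustration, a few typical Assign mappings might look like this when written out as plain JavaScript (a sketch only; the variable and field names are invented, and the node itself is configured visually rather than in code):

```javascript
// Hypothetical Assign node mappings, expressed as plain JavaScript:
const input = { customer: { id: 42 }, order: { price: 10, quantity: 3 } };

const customerStatus = "active";                             // assign a fixed value
const orderTotal = input.order.price * input.order.quantity; // evaluate an expression
const customerId = input.customer.id;                        // map input data to a variable
```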
The Switch node is designed to direct the workflow based on conditional logic, allowing for branching based on case evaluations. This node evaluates a series of cases, each defined by a specific set of conditions, and branches the workflow accordingly.
Case Evaluation: Each case within the node has a set of conditions. The workflow branches off when these conditions are met.
Default Case: If no conditions are met for any case, the workflow defaults to a predefined path, ensuring that the workflow continues smoothly.
Visibility of Case Fulfillment: It is possible to track whether each case's conditions have been met, providing clarity and transparency in workflow execution.
Two solving strategies determine how cases are evaluated:
All Winning Cases: This strategy evaluates and executes every case whose conditions are met. If no case wins, the default case is executed.
First Winning Case: This strategy identifies and executes the first case that meets its conditions. If no cases qualify, the default case is executed.
Complex Decision Making: Ideal for scenarios requiring multiple conditions to determine the workflow path.
Error Handling: Can be used to manage different error conditions and direct to appropriate recovery paths.
User Input Handling: Direct workflow based on various user inputs or actions within a system.
This node is crucial for implementing complex decision-making capabilities within workflows, allowing for dynamic and condition-based workflow branching.
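The difference between the two solving strategies can be sketched in plain JavaScript (the case names and conditions below are invented for illustration; in the product, cases are configured in the node, not in code):

```javascript
// Invented cases, each with a boolean condition:
const cases = [
  { name: "highValue", test: (ctx) => ctx.order.total > 1000 },
  { name: "newUser",   test: (ctx) => ctx.customer.ordersCount === 0 },
];

// First Winning Case: only the first matching case is executed.
function firstWinningCase(ctx) {
  const winner = cases.find((c) => c.test(ctx));
  return winner ? [winner.name] : ["default"];
}

// All Winning Cases: every matching case is executed.
function allWinningCases(ctx) {
  const winners = cases.filter((c) => c.test(ctx)).map((c) => c.name);
  return winners.length > 0 ? winners : ["default"];
}

const ctx = { order: { total: 1500 }, customer: { ordersCount: 0 } };
console.log(firstWinningCase(ctx)); // ["highValue"]
console.log(allWinningCases(ctx));  // ["highValue", "newUser"]
```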
The Declare node plays a critical role in managing data within workflows. It allows for the creation of runtime variables that can be modified during the execution of a workflow. All variables established by this node are accessible in the data dictionary, providing a centralized view of workflow data.
Runtime Variable Creation: Enables the dynamic creation of variables that can be adjusted as the workflow progresses.
Data Dictionary Integration: All variables are listed in a data dictionary, making them easily accessible and manageable throughout the workflow.
Encapsulation: Variables are grouped under the name of the node, which helps in organizing and isolating data within complex workflows.
Flexible Variable Definition: Supports defining both entities (complex data structures) and inline variables (simple data types), providing versatility in data handling.
Data Handling: Ideal for scenarios where data needs to be initialized and potentially modified based on workflow conditions or user inputs.
State Management: Useful in long-running workflows where state needs to be tracked and manipulated across different stages of the workflow.
This node is essential for workflows that require a high degree of data flexibility and accessibility, ensuring efficient management and manipulation of variables during runtime.
The Global Variables node is essential for establishing a set of global variables accessible throughout the entire workflow. This node is typically configured at the start of the workflow to ensure that all variables are defined and available for use from the beginning.
Global Scope: Variables defined in this node are available throughout the entire duration of the workflow, across all nodes.
Initial Configuration: Placed right at the start of the workflow, this node sets up essential variables that are critical for the entire process.
Easy Access: All global variables are readily accessible and modifiable by any part of the workflow, enhancing data sharing and manipulation.
Consistency: Ensures that all parts of the workflow have consistent access to the same set of variables.
Simplicity: Simplifies the management of data that needs to be shared across multiple parts of the workflow.
Efficiency: Reduces the need to repeatedly define the same variables in different nodes, thus optimizing workflow performance.
Configuration Data: Ideal for storing configuration settings that need to be accessed by multiple nodes throughout the workflow.
Shared Resources: Useful for scenarios where data needs to be shared or visible across various stages of the workflow, such as user permissions, environmental settings, or operational parameters.
This node provides foundational support for workflows that require global data accessibility, ensuring that critical information is uniformly available across all stages of the process.
The Foreach node is used for iterating over arrays within a workflow, making it essential for processing collections of data. This node utilizes a "Loop Connector" to iterate through each item in the input array, providing powerful functionality for data manipulation and analysis.
Loop Connector: Iterates over every item in the specified input array, ensuring that each element is processed sequentially.
Access to Current Item: During each iteration, the current item is accessible via the [foreach node name].currentItem attribute, allowing for direct manipulation of each array element as an object.
Iteration Context: Provides additional iteration parameters (see the sketch after this list):
index: Indicates the number of the current iteration.
length: Indicates the total length of the input array.
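The iteration context can be pictured in plain JavaScript (the orders array and its fields are invented for illustration):

```javascript
// What the Loop Connector does for each element, expressed in JavaScript:
const orders = [
  { id: 1, status: "open" },
  { id: 2, status: "shipped" },
];

orders.forEach((currentItem, index) => {
  // currentItem corresponds to [foreach node name].currentItem,
  // index to the iteration counter, and orders.length to length.
  console.log(`${index + 1} of ${orders.length}: order ${currentItem.id} is ${currentItem.status}`);
});
```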
For nested arrays, multiple Foreach nodes can be nested within each other. This allows for the handling of complex data structures and multi-dimensional arrays with ease.
Data Processing: Ideal for scenarios requiring detailed processing or transformation of each item within an array.
Batch Operations: Efficient for performing operations on batches of data, such as calculations, filtering, or aggregation.
Complex Data Structures: Useful in workflows that need to navigate and manipulate nested or hierarchical data structures.
This node is crucial for workflows that involve array processing, providing robust tools for iterating over data and applying specific operations to each element effectively.
The Sticky Note node is used to create notes within the workflow. This node serves as a simple yet effective way to add annotations, comments, or reminders directly in the workflow, enhancing clarity and communication among team members.
Annotation: Add textual notes to any part of the workflow to explain the purpose of nodes, document decisions, or provide additional information.
Visibility: Notes are visible within the workflow interface, making it easy for anyone reviewing the workflow to understand the context and reasoning behind specific nodes.
Non-Functional: This node does not affect the workflow's execution; it is purely for documentation and informational purposes.
Documentation: Use Sticky Notes to document the logic and reasoning behind complex workflow sections.
Collaboration: Facilitate better communication among team members by adding notes that explain specific parts of the workflow.
Reminders: Add reminders for future modifications or considerations that need to be addressed.
This node is essential for maintaining clear and comprehensive documentation within your workflow, ensuring that all stakeholders understand the workflow's structure and intent.
The End node serves as a visual indicator for the termination of a workflow branch and provides insight into the final data state of that branch. While not mandatory, since output mapping can be achieved with the Assign node, the End node adds value by enhancing workflow clarity and inspection.
Workflow Inspection: Displays the data associated with the relevant workflow path in the inspector, offering a clear view of the output data and making it easier to debug and verify the workflow's final state.
Optional Use: The End node is optional and is primarily used to provide a clear endpoint for workflow branches. This helps in enhancing organization and readability but does not replace nodes like the Assign node for output mapping.
The End node is beneficial for workflows that require distinct termination points and a way to easily inspect final data states, ensuring both ease of use and enhanced workflow clarity.
The Business Rule node allows you to connect and execute various types of business rules such as decision tables, decision trees, rule flows, or even entire workflows. This node is versatile and can operate in two distinct modes to accommodate different use cases.
Static Mode: Select a specific business rule and, optionally, a particular version of that rule.
Dynamic Mode: Select a business rule and its version based on workflow variables. If no version is explicitly selected, the latest published version of the rule is used.
You can choose from different solving strategies for evaluating business rules and determining the format in which the results are returned:
Standard
Always returns an array of results from the solved business rule.
Automatically attaches a Loop connector to the node, allowing iteration over the results.
During iteration, the current item is accessible via [node name].currentItem, and the iteration index is available via index.
All returned data in the array is accessible through [node name].output.
Evaluates All
Similar to the Standard strategy but specifically evaluates all rows in a decision table.
Returns an array of results and allows for iteration as in the Standard strategy.
First Match
Returns a single output object containing the result data.
This strategy does not return an array but a single object representing the first matching result.
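The shape of the returned data under each strategy can be illustrated as follows (the result fields are invented; actual results depend on the business rule being solved):

```javascript
// Standard / Evaluates All: an array of results, iterated via the Loop connector
// and accessible as [node name].output:
const output = [
  { discount: 5 },
  { discount: 10 },
];

// First Match: a single object containing the first matching result:
const firstMatch = { discount: 5 };
```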
Decision Automation: Ideal for scenarios where complex business logic needs to be automated through decision tables or trees.
Dynamic Rule Selection: Useful when the specific business rule or its version needs to be selected based on runtime conditions.
Batch Processing: Employ the Standard or Evaluates All strategies to handle multiple results from business rules, enabling batch processing and comprehensive evaluations.
This node is pivotal for integrating complex business logic into your workflows, ensuring that decisions are made based on the latest rules and data.
Array append involves adding elements to an existing array. Unlike merging, where elements are combined from two different arrays, appending adds new elements to the end of an array.
Let's say you have an existing JSON array (the element names below are invented for illustration):
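```javascript
// Hypothetical starting array (the names are invented for illustration):
const people = [
  { "id": 1, "name": "Alice" },
  { "id": 2, "name": "John" }
];
```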
Now, you want to append a new element to this array:
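```javascript
// The new element to append:
const newElement = { "id": 3, "name": "Bob" };
```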
After appending, the resulting array would be:
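```javascript
// people.concat(newElement): the original elements are unchanged,
// and the new element appears at the end:
[
  { "id": 1, "name": "Alice" },
  { "id": 2, "name": "John" },
  { "id": 3, "name": "Bob" }
]
```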
In this appended array:
The original elements remain unchanged.
The new element { "id": 3, "name": "Bob" } is added to the end of the array.
Joining two JSON arrays by a key involves combining the arrays based on a common identifier or key within the JSON objects.
Let's say you have two JSON arrays (the fields other than id are invented for illustration):
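```javascript
// Two hypothetical input arrays sharing the "id" key:
const first = [
  { "id": 1, "name": "Alice" },
  { "id": 2, "name": "John" }
];
const second = [
  { "id": 1, "age": 30 },
  { "id": 3, "name": "Bob" }
];
```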
You want to join these arrays based on the "id" key. After merging, the resulting array should contain all unique elements from both arrays, with matching elements combined based on the "id" key. If an element exists in one array but not the other, it should still be included in the merged array.
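```javascript
// Joined on "id": matching objects are merged; unmatched ones are kept as-is.
[
  { "id": 1, "name": "Alice", "age": 30 },
  { "id": 2, "name": "John" },
  { "id": 3, "name": "Bob" }
]
```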
In this joined array:
The object with id equal to 1 exists in both arrays, so the properties of both objects are joined into one object.
The object with id equal to 2 only exists in the first array, so it's included as is.
The object with id equal to 3 only exists in the second array, so it's included as is.
The Array Combine node is used to combine multiple values and nodes into a single array. This node consolidates various data inputs and returns the combined array, making it a valuable tool for data aggregation within workflows.
Combining Values: Allows the inclusion of multiple values from different nodes or direct inputs into a single array.
Output: The combined data is returned as [node name].value, providing a straightforward way to access the aggregated array.
Data Aggregation: Useful for scenarios where data from multiple sources or nodes needs to be combined into one array for further processing.
Batch Processing: Ideal for workflows that require the consolidation of data points before applying batch operations or transformations.
Simplifying Outputs: Helps in organizing and simplifying outputs by combining related data into a single array structure.
This node is essential for workflows that involve combining and organizing data from various sources into a single array, facilitating efficient data handling and processing.
If you have several nodes generating different pieces of data, you can use the Array Combine node to gather all these pieces into one array:
Input Nodes:
Node A: Generates value 1
Node B: Generates value 2
Node C: Generates value 3
Array Combine Node Configuration:
Combines values from Node A, Node B, and Node C
Output:
[node name].value will be [1, 2, 3]
The Collect node is used to create an object by collecting values from multiple nodes into a single object. This node aggregates data inputs and adds them to a specified target, making it a valuable tool for data consolidation within workflows. This node is designed to be used at the end of a loop only.
Creating Objects: Allows the inclusion of multiple values from different nodes or direct inputs into a single object.
Target Addition: Adds the collected data to a specified target object, providing an organized way to manage aggregated data.
Post-Loop Data Consolidation: Useful for scenarios where data from multiple sources or nodes needs to be combined into one object for further processing.
Final Structured Outputs: Ideal for workflows that require the aggregation of data points into a structured object before applying further operations or transformations.
Organizing Final Data: Helps in organizing and simplifying outputs by combining related data into a single object structure.
This node is essential for workflows that involve combining and organizing data from various sources into a single object, facilitating efficient data handling and processing.
If you have several nodes generating different pieces of data, you can use the Collect node to gather all these pieces into one object, newTarget:
Input Nodes:
Node A: Generates value 1 with key a
Node B: Generates value 2 with key b
Node C: Generates value 3 with key c
Configuration
Select or Create Target: Choose or create the target object where the collected data will be added.
Object Mapping: Map the attributes from each input node to the corresponding keys in the target object.
Output
newTarget will contain {a: 1, b: 2, c: 3}. This reflects the newly created or updated object with the collected data in the specified target.
The REST API Client node is used to request data from an external API. This node facilitates communication with external services, making it a valuable tool for integrating third-party data into your workflows.
Request Type: Supports various HTTP request types such as GET, POST, PUT, DELETE, etc.
Enter URL: Allows the user to specify the URL of the external API endpoint.
Headers: Enables the inclusion of custom headers for authentication and other purposes.
Body: Allows the user to define the body of the request.
Data Retrieval: Useful for scenarios where data needs to be fetched from external APIs for processing within the workflow.
Data Submission: Ideal for workflows that require sending data to external services.
Integration: Helps in integrating third-party services by facilitating API communication.
This node is essential for workflows that involve requesting data from or sending data to external APIs, facilitating seamless integration with third-party services.
If you need to request data from an external API, you can use the REST API Client node to set up and send the request:
Configuration
Request Type: Select the type of HTTP request (e.g., GET, POST).
Enter URL: Specify the URL of the API endpoint (e.g., https://api.example.com/data).
Headers: Add any necessary headers (e.g., Authorization: Bearer <token>, Content-Type: application/json).
Body: Define the body of the request if needed, with the option to use formats like JSON, plain text, or HTML (e.g., { "key1": "value1", "key2": "value2" } for a JSON POST request).
Output
The response from the API will be accessible via the following properties:
[node name].data: The main data returned from the API.
[node name].status: The HTTP status code of the response.
[node name].responseOk: A boolean indicating whether the request was successful (status in the 200-299 range).
[node name].responseHeaders: The headers returned by the API response.
[node name].state: The state of the request, which can be used for handling different stages of the API call.
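As a rough sketch, the node's request and output mapping correspond to a plain fetch call like this (illustrative only; the node is configured through its fields, not through code):

```javascript
// Approximate equivalent of the node's request and output mapping
// (URL, headers, and body reuse the example values above):
const response = await fetch("https://api.example.com/data", {
  method: "POST",
  headers: {
    "Authorization": "Bearer <token>",
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ key1: "value1", key2: "value2" }),
});

const output = {
  data: await response.json(),       // [node name].data
  status: response.status,           // [node name].status
  responseOk: response.ok,           // [node name].responseOk (200-299)
  responseHeaders: response.headers, // [node name].responseHeaders
};
```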
The Custom Code node is used to execute custom JavaScript code within your workflow. This node provides a flexible way to perform operations or transformations that are not covered by standard nodes.
JavaScript Execution: Allows you to write and execute custom JavaScript code.
Custom Logic: Enables the implementation of bespoke logic and algorithms tailored to your specific needs.
Input Access: Provides access to inputs from other nodes, allowing for dynamic and context-aware code execution.
Output: Returns the result of the JavaScript code execution, which can be used for further processing in the workflow.
Custom Transformations: Ideal for scenarios where standard nodes do not meet your requirements, and custom logic is needed.
Data Processing: Useful for performing complex data manipulations or calculations.
Integration: Helps in integrating with other systems or APIs that require specific JavaScript code execution.
If you need to perform a custom calculation or data transformation, you can use the Custom Code node to run JavaScript code:
Configuration
JavaScript Code: Write the JavaScript code to be executed. For example, you can perform a calculation or transform input data (a minimal sketch follows; the input and output names are assumptions, not a documented API):
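```javascript
// A minimal sketch (the input and output names are assumptions,
// not a documented API):
const price = input.price;      // value produced by an earlier node
const quantity = input.quantity;

const total = price * quantity; // the custom calculation

return { total };               // result available to downstream nodes
```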
With the new workflow feature, you have access to a variety of nodes that allow you to create powerful, automated processes without needing extensive coding knowledge. Each node type serves a specific purpose, and by combining them, you can easily build workflows tailored to your needs.
Our intuitive interface is designed to make it easy to get started: experiment with different nodes, explore their functionalities, and see how they can simplify your tasks and processes.