Setting Up Destinations
Each destination on the dataflow is represented as a destination object. You can have any number of destinations on the dataflow. Each destination can only receive data from a single source. To feed multiple sources into a destination, you need to connect them through a transformation object, for example Merge or Union. For more information on transformations, see the Creating Transformations article.
The following destination types are supported by the dataflow engine:
Flat File Destinations: Delimited File, Fixed-Length File, Excel File
Tree File Destinations: XML/JSON File
Database Destinations: Database Table, SQL Statement
All destinations can be added to the dataflow by grabbing a destination type from the Toolbox and dropping it on the dataflow. File destinations can also be added by dragging and dropping a file from an Explorer window while pressing the ‘Shift’ key. Database destinations can be dragged and dropped from the Data Source Browser while holding down the ‘Shift’ key. For more details on adding destinations, see the Introducing Dataflows article.
Flat File Destinations
Delimited File
Adding a Delimited File Destination object allows you to write to a delimited file. An example of what a Delimited File Destination object looks like is shown below.
To configure the properties of a Delimited File Destination object after it was added to the dataflow, right-click on its header and select Properties from the context menu.
Fixed-Length File
Adding a Fixed-Length File Destination object allows you to write to a fixed-length file. An example of what a Fixed-Length File Destination object looks like is shown below.
To configure the properties of a Fixed-Length File Destination object after it was added to the dataflow, right-click on its header and select Properties from the context menu.
Excel File
Adding an Excel Workbook Destination object allows you to write to an Excel file. An example of what an Excel Workbook Destination object looks like is shown below.
To configure the properties of an Excel Workbook Destination object after it was added to the dataflow, right-click on its header and select Properties from the context menu.
Tree File Destinations
XML/JSON File
Adding an XML/JSON File Destination object allows you to write to an XML or JSON file. An example of what an XML/JSON File Destination object looks like is shown below.
To configure the properties of an XML/JSON File Destination Object after it was added to the dataflow, right-click on its header and select Properties from the context menu. The following properties are available:
General Properties window:
File Path – Specifies the location of the destination XML file. Using UNC paths is recommended if running the dataflow on a server.
Note: To open an existing destination file for editing in a new tab, click the icon next to the File Path input, and select Edit File.
File Options:
Using the Encoding dropdown, select the appropriate encoding scheme for your destination file.
Check the Format XML Output checkbox to have line breaks inserted into the destination XML file for improved readability.
Schema Options:
Read From Schema File Specified Below – Specifies the location of the XSD file controlling the layout of the XML destination file.
Note: You can generate the schema based on the content of the destination XML file if the file already exists. The data types will be assigned based on the destination file’s content. Note that the existing destination file will be overwritten when the dataflow runs.
To generate the schema, click the icon next to the Schema File input, and select Generate.
To edit an existing schema, click the icon next to the Schema File input, and select Edit File. The schema will open for editing in a new tab.
Using the Root Element dropdown, select the node that should serve as the root of your destination schema. Any nodes above it in the tree will be excluded.
Note: To ensure that your dataflow is runnable on a remote server, please avoid using local paths for the destination. Using UNC paths is recommended.
Database Destinations
Database Table
Adding a Database Table Destination object allows you to write to a database table. An example of what a Database Table Destination object looks like is shown below.
Destination Connection screen – Allows you to enter the connection information for your destination, such as Server Name, Database and Schema, as well as credentials for connecting to the selected destination.
Pick Table window:
Database Transaction Management: Enable Transaction Management if you want to wrap your transfer inside a transaction. Depending on your database settings, this can improve performance during the transfer. When Transaction Management is enabled, choose between always committing the transaction at the end of the transfer, or committing it only if there were no errors. In the latter case, any error results in the entire transaction being rolled back.
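The two commit strategies described above can be sketched with a generic database API. The snippet below is an illustration only, using Python's sqlite3 module as a stand-in for the destination database; it is not how Astera implements transaction management internally.

```python
import sqlite3

def transfer(conn, records):
    """Insert records inside one transaction; any error rolls the whole
    transaction back (the 'commit only if no errors' strategy)."""
    try:
        conn.executemany("INSERT INTO Orders VALUES (?, ?)", records)
        conn.commit()        # commit at the end of a clean transfer
    except sqlite3.Error:
        conn.rollback()      # any error undoes the entire transaction
    return conn.execute("SELECT COUNT(*) FROM Orders").fetchone()[0]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Orders (OrderId INTEGER, OrderName TEXT)")

print(transfer(conn, [(1, "Books"), (2, "Tools")]))   # -> 2
print(transfer(conn, [(3, "Pens"), ("bad",)]))        # error -> rollback, still 2
```

With the alternative strategy (always commit at the end), the rollback branch would commit instead, keeping whatever rows were written before the error.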
Preserve System Generated Key Values: This option is only available if you have assigned at least one field in your destination layout as a System Generated field. If enabled, Astera will pass the incoming value from the source to the system generated field. Otherwise, the incoming source value will be ignored, and the system will write auto-increasing values to the destination System Generated field.
Data Load Options: Specifies how your records are inserted into the destination database. The available types are Use Single Record Insert, Bulk Insert with Batch Size, and Bulk Insert with All Records in One Batch.
These types allow you to customize your transfer to balance performance against logging needs. Bulk inserts typically result in better performance (faster transfer for a given number of records), but they also come with less logging and less ability to undo unwanted inserts should you need to.
Use Single Record Insert: Records are inserted into a destination table one-by-one. Performance is the slowest among the three insert types. However, any errors or warnings during the transfer are displayed to you immediately as the transfer progresses.
Bulk Insert with All Records in One Batch: Typically a quick method of transferring large amounts of data. Keep in mind, however, that any database-specific errors in your transfer will not surface until the end of the transfer, when the entire batch is written to the destination database.
Note: Not all database providers support this type of insert.
Bulk Insert with Batch Size: A good tradeoff between performance and logging needs. Records are inserted in batches of the specified size. Larger batch sizes typically result in better transfer speeds; however, the performance gains diminish as batch sizes grow very large.
Note: Not all database providers support this type of insert.
Note: Bulk insert may not be available if certain data types are present in a destination table. In that case, the transfer falls back to single record insert.
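The three insert types differ only in how many records are grouped per round trip to the database. A minimal sketch of that idea, again using Python's sqlite3 as a stand-in for the destination database (the function name and table are illustrative, not part of the product):

```python
import sqlite3

def insert_in_batches(conn, records, batch_size):
    """Insert records in batches of batch_size.
    batch_size=1            -> Use Single Record Insert
    batch_size=len(records) -> Bulk Insert with All Records in One Batch
    anything in between     -> Bulk Insert with Batch Size"""
    for start in range(0, len(records), batch_size):
        batch = records[start:start + batch_size]
        conn.executemany("INSERT INTO Orders VALUES (?, ?)", batch)
        conn.commit()  # errors in a batch surface only when that batch is sent

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Orders (OrderId INTEGER, OrderName TEXT)")
rows = [(i, "Order %d" % i) for i in range(10)]

insert_in_batches(conn, rows, batch_size=1)    # slowest, per-record error reporting
insert_in_batches(conn, rows, batch_size=5)    # tradeoff between speed and logging
insert_in_batches(conn, rows, batch_size=10)   # fastest, errors reported last
```

This is why a single-record insert reports errors immediately as the transfer progresses, while a one-batch bulk insert reports them only at the very end.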
SQL Statement
The SQL Statement Destination object offers extra flexibility over the Database Table Destination object by applying custom INSERT or UPDATE SQL code that controls what will be written to the destination table. An example of what an SQL Statement Destination object looks like is shown below.
To configure the properties of an SQL Statement Destination object after it was added to the dataflow, right-click on its header and select Properties from the context menu. The following properties are available:
Database Connection window – Allows you to enter the connection information for your SQL Statement, such as Server Name, Database, and Schema, as well as credentials for connecting to the selected database.
SQL Query window: In the SQL Query window, you can enter an SQL expression controlling which fields and records should be written to the destination. The SQL expression should follow standard SQL syntax conventions for the chosen database provider.
For example,
Insert into Orders values (@OrderId, “@OrderName”, “@CreatedDtTm”)
Notice the @ symbol in front of a field name. It makes the field appear in the field list inside the object box so that the field can be mapped. Fields without the @ symbol will not appear in the list of fields, but they can still receive values according to the logic of the SQL statement itself.
For example,
Insert into Orders (OrderId, OrderName, CreatedDtTm) values (@OrderId, “@OrderName”, “2010/01/01”)
Note: You can optionally use $ parameters inside your SQL expression.
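Conceptually, the @ placeholders behave like named parameters in an ordinary parameterized query: mapped fields supply a value per record, while unmarked columns take whatever the statement itself specifies. A rough analogue, sketched with Python's sqlite3 named-parameter syntax rather than Astera's engine:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Orders (OrderId INTEGER, OrderName TEXT, CreatedDtTm TEXT)"
)

# OrderId and OrderName act like mapped (@-prefixed) fields; CreatedDtTm
# receives a constant from the statement itself, as in the second example above.
sql = ("INSERT INTO Orders (OrderId, OrderName, CreatedDtTm) "
       "VALUES (:OrderId, :OrderName, '2010/01/01')")

for record in [{"OrderId": 1, "OrderName": "Books"},
               {"OrderId": 2, "OrderName": "Tools"}]:
    conn.execute(sql, record)
conn.commit()
```

Each incoming record fills only the parameterized columns, while the literal value is written unchanged for every row.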