
Reusing The Extraction Template for Similar Layout Files


Astera gives users the ability to reuse a report model, i.e., an extraction template, for files of a similar layout. In this article, we will learn how to orchestrate the whole process.

A report model contains the extraction logic used to mine data from unstructured documents, and the extraction process can be customized via the properties and options available in Astera. To learn more, refer to the Report Model section of the documentation.

Sample Use-Case

Astera offers several ways to reuse a report model. These reusability methods let you obtain meaningful data from several unstructured documents of a similar layout using the same report model, i.e., the same extraction template. In this article, we will look at two techniques for achieving this goal:

  • By using a workflow with a File System Items Source object in Loop mode: In this technique, we will create a workflow that uses a File System Items Source object in Loop mode as the source.

  • By using Context Information and the Job Scheduler: In this technique, we will create a workflow that uses the Context Info object as the source and apply scheduling to the workflow.

In the following sections, let’s go over how each of these techniques can be implemented.

Reusing the Report Model for Similar Layout Files

It is standard practice to create a report model containing all the extraction logic and settings so that it can be applied to multiple files of a similar layout.

We are using an example extraction template. To learn more about how to extract data from an unstructured document, refer to the Report Model Tutorial.

The example template we are using contains extraction logic to obtain details related to Accounts, the number of Orders placed by each account, and a description of each Order. This is what the extraction template looks like:

To ensure better accessibility and manageability of the files, let’s create a project and add all the relevant documents to it.

Creating a Project

  1. To create a project, go to Project > New > Integration Project.

  2. The File Explorer will open. Navigate to the path where you want to save the project and enter a name for it.

  3. Right-click on the project in the Project Explorer window and select the Add New Folder option.

Here, we have created a folder named Files to store the flows, a folder named Sources to store the unstructured source documents, and a folder named Output to store the output files.

  4. Right-click on the Files folder and select the Add New Item option from the context menu. Add a dataflow and a workflow to the Files folder.

Similarly, use Add Existing Items to add the unstructured source files to the Sources folder and ‘SampleOrders.Rmd’, the extraction template, to the Files folder.

We have successfully created a project. Now, let’s move on to designing the dataflow.

Creating a Dataflow

  1. Double-click on the dataflow to open the empty designer.

  2. Drag and drop the Report Source object from Toolbox > Sources > Report Source onto the dataflow designer. Right-click on the object’s header and select Properties from the context menu.

  3. The Report Model Properties window will open. Here, you have to provide file paths for the Report Location and the Report Model Location.

    • Report Location: File path of the unstructured document.

    • Report Model Location: File path of the extraction template.

Click OK to proceed.

Note: You can also export the report model to a dataflow. To learn more about how to do that, refer to Exporting Report Model to a Dataflow.

  4. To store the extracted data, drag and drop a destination object onto the dataflow designer. Here, we are using a Delimited Destination object. Right-click on the destination object’s header and select Properties from the context menu. This is the Configuration window.

Here, specify the destination File Path and configure the relevant options according to your needs.

Now, click OK.

  5. Map the data fields from the Report Source object to the Delimited Destination object.

Now, for the dataflow to use multiple source files, you have to parameterize the source file path. Similarly, to provide a destination path for each source file, parameterize the destination file path.

Parameterization will allow the source files and their respective destination files to be replaced at runtime.

Parameterizing the Dataflow

  1. Drag and drop the Variables object from Toolbox > Resources > Variables onto the dataflow designer.

  2. Right-click on the Variables object’s header and select Properties from the context menu.

The Properties window will open.

You have to create two fields: one for the source file path and one for the destination file path. Set the Variable Type of both to Input and provide their file paths in the Default Value column.

Note: The Default Value is optional and can be used to verify that the dataflow is configured correctly. At runtime, parameters passed as blank are replaced by the Default Value entry; in other words, the Default Value only takes effect when no other value is available.
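To make that fallback behavior concrete, here is a minimal Python sketch of the logic described above; the function name and file paths are hypothetical and purely illustrative:

```python
def resolve_parameter(runtime_value, default_value):
    """Return the runtime value when one is passed; otherwise fall back to the default.

    Mirrors how a parameter passed as blank is replaced by the Default Value entry.
    """
    if runtime_value is None or runtime_value == "":
        return default_value
    return runtime_value

# A concrete path is passed at runtime, so the default is ignored:
print(resolve_parameter("C:\\Sources\\Orders1.txt", "C:\\Sources\\Sample.txt"))
# A blank parameter falls back to the Default Value entry:
print(resolve_parameter("", "C:\\Sources\\Sample.txt"))
```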

Now, click OK.

  3. Double-click on the Report Source object’s header and click Next until you reach the Config Parameters window. Set the FilePath variable as the Value of the FilePath field.

  4. Double-click on the destination object’s header and click Next until you reach the Config Parameters window. Set the FilePathDest variable as the Value of the DestinationFile field. Click OK.

We have now created the final dataflow. Let’s proceed to designing the workflow.

Designing a Workflow

  1. Drag and drop the Run Dataflow object from Toolbox > Workflow Tasks > Run Dataflow onto the workflow designer.

  2. Double-click on the Run Dataflow object’s header to open the Start Dataflow Job Properties window. Specify the path to the dataflow that you want to execute in the Job Info. To learn more about the Run Dataflow object, refer to the Run Dataflow Task article.

Now, click OK.

  3. Drag and drop the File System Items Source object from Toolbox > Sources > File System onto the workflow designer.

  4. Double-click on the object’s header. The File System Items Source Properties window will open.

Here, specify the path to the directory where the source files reside. You can add an entry to the Filter textbox if you want to read files of a specific format only. Additionally, you may choose to include all items in the subdirectories and/or include entries for the subdirectories themselves by checking the options at the bottom. To learn more, refer to the File Systems Item Source article.

Click OK to proceed.

Note: Here, the unstructured files in the Sources folder have the “.txt” extension. Hence, only “.txt” files are being processed.

  5. Map the FilePath data field of the File System Items Source object to the FilePath data field of the Run Dataflow object.

Now, let’s construct the dynamic destination path.

  6. Drag and drop the Constant Value object from Toolbox > Transformation > Constant Value onto the workflow designer.

  7. Double-click on the Constant Value object’s header. The Constant Value Map Properties window will open. Provide the path of the directory or folder where you want to store the output files. Click OK.

  8. Go to Toolbox > Transformation > Expression, and drag and drop the Expression transformation object onto the workflow designer.

  9. Right-click on the Expression object’s header and select Properties from the context menu.

This will open the Layout Builder window. Here, you must create three data fields:

  • FileDirectory, set as Input.

  • FileName, set as Input.

  • FilePathDest, set as Output.

Click OK.

Note: Write the expression for the FilePathDest field as FileDirectory + FileName + “.csv”. At runtime, this expression will create the destination file path for each source file. To learn more about how to utilize the Expression transformation object, refer to the Expression Transformation article.
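As a worked example of how the expression resolves at runtime, consider this small Python sketch; the sample directory and file name are hypothetical:

```python
# Inputs mapped into the Expression object (sample values are hypothetical):
file_directory = "C:\\Project\\Output\\"  # Value field of the Constant Value object
file_name = "Orders_March"                # FileNameWithoutExtension from the File System Items Source

# FilePathDest = FileDirectory + FileName + ".csv"
file_path_dest = file_directory + file_name + ".csv"
print(file_path_dest)  # C:\Project\Output\Orders_March.csv
```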

  10. To construct the dynamic destination path, the fields of the objects must be mapped appropriately: the directory specified in the Constant Value object is combined with the file name provided by the File System Items Source object, and the resulting file path is mapped to the Run Dataflow object. To achieve this, define the field mappings as follows:

    • Map the Value field of the Constant Value object to the FileDirectory field of the Expression transformation object.

    • Map the FileNameWithoutExtension field of the File System Items Source object to the FileName field of the Expression transformation object.

    • Map the FilePathDest field of the Expression transformation object to the FilePathDest field of the Run Dataflow object.

After mapping the objects, this is what the final workflow looks like:

We have now created the final workflow. In the next section, let’s discuss the reusability techniques of the extraction template.

Reusing Methods

Let’s summarize what we have done so far. We have created a dataflow using a Report Source object, a Variables object, and a Delimited Destination object. This dataflow applies the extraction logic to an unstructured document, parameterizes the file paths using variables, and writes the extracted data to a destination file.

We have also created a workflow that includes a File System Items Source object, a Constant Value object, an Expression transformation object, and a Run Dataflow object. This workflow executes the dataflow created earlier and constructs a dynamic destination path for the output of each source file at runtime.

Now, we will see how we can use the workflow to apply the extraction process to multiple unstructured files of a similar format.

Applying Looping on File System Items Source Object

  1. Right-click on the File System Items Source object’s header and select Loop from the context menu.

Note: Selecting the Loop option ensures that the File System Items Source object iterates through the entire folder, which lets us provide multiple source files to the Run Dataflow object. By default, the Singleton option is selected, which only picks the first file in the folder.
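Conceptually, Loop mode behaves like the following Python sketch: every file in the folder that passes the filter is handed to the dataflow in turn. The run_dataflow function and the paths are hypothetical stand-ins for illustration, not Astera APIs:

```python
from pathlib import Path

def run_dataflow(file_path: str, file_path_dest: str) -> None:
    """Hypothetical stand-in for the Run Dataflow task."""
    print(f"Extracting {file_path} -> {file_path_dest}")

source_dir = Path("C:/Sources")    # folder of unstructured source files (hypothetical path)
output_dir = "C:/Project/Output/"  # directory from the Constant Value object (hypothetical path)

# Loop mode: every file matching the filter is handed to the dataflow.
for source_file in sorted(source_dir.glob("*.txt")):
    file_path_dest = output_dir + source_file.stem + ".csv"
    run_dataflow(str(source_file), file_path_dest)

# Singleton mode, by contrast, would only pick the first file in the folder.
```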

  2. Link the File System Items Source object to the Run Dataflow object.

Note: You can use Job Schedules on this workflow by following the steps defined under the Scheduling heading.

Using Context Info and Applying Scheduling

  1. Instead of a File System Items Source object, drag and drop the Context Info object from Toolbox > Resources > Context Info onto the workflow designer. This object lets you process a file whenever it is dropped at the path specified in the scheduled task.

  2. Map the DroppedFilePath data field from the Context Info object to the FilePath data field in the Run Dataflow object.

Scheduling

  1. Go to Server > Job Schedules.

  2. To add a new scheduled task, click on Add Scheduler Task.

Specify the Name of the task. After that, select the Schedule Type; in our case, we want to schedule a workflow, so we will select the File type. Provide the path of the workflow in the File Location.

There are some other options for Server, Dataflow, Job, and Frequency, which you can select according to your requirements. Click on the Frequency drop-down menu and select When File is Dropped. To learn more about each Frequency type, and about scheduling jobs on the server in general, refer to Scheduling Jobs on the Server.

Here, provide the path of the directory you want the scheduler to watch for a file drop. You can use the File Filter option to process only a specific file format.

Other options, including Watch Subdirectories, Process Existing Files on Startup, Rename File, and Use Polling, can be set according to your requirements.
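In effect, the When File is Dropped frequency with Use Polling periodically re-scans the watched directory and triggers the workflow for each new file. Here is a rough Python sketch of that pattern; trigger_workflow and the paths are hypothetical stand-ins, not Astera APIs:

```python
import time
from pathlib import Path

def trigger_workflow(dropped_file: Path) -> None:
    """Hypothetical stand-in for starting the scheduled workflow."""
    print(f"File dropped, starting workflow for: {dropped_file}")

watch_dir = Path("C:/Sources")  # directory the scheduler watches (hypothetical path)
seen = set()                    # files already processed

while True:
    for f in watch_dir.glob("*.txt"):  # File Filter: only react to .txt files
        if f not in seen:
            seen.add(f)
            trigger_workflow(f)
    time.sleep(10)  # Use Polling: re-scan the folder at a fixed interval
```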

Save the task by clicking on the Save Selected Task icon in the top left corner.

This is how you can automate the whole process of extracting data from multiple files with similar layouts using the same report model in Astera.
