Fixed Length File Source


The Fixed Length File Source object in Astera provides a high-speed reader for files containing fixed-length records. It supports files both with and without record delimiters.
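
To illustrate the idea, here is a minimal Python sketch (a generic illustration only, not Astera functionality; the field widths and sample values are made up): in a fixed-length file every field occupies a fixed number of characters, so a reader can split records either on a delimiter or purely by record length.

```python
# Assumed layout for illustration: OrderID = 5 chars, CustomerID = 7 chars, Amount = 6 chars.
delimited_data = "10248VINET  320.00\n10249TOMSP  118.60\n"   # records separated by a newline
undelimited_data = "10248VINET  320.0010249TOMSP  118.60"      # records identified purely by length

RECORD_LENGTH = 18  # total width of one record when no delimiter is present

def split_records(text: str, delimited: bool) -> list[str]:
    """Split raw text into records, either on the delimiter or by fixed record length."""
    if delimited:
        return [line for line in text.splitlines() if line]
    return [text[i:i + RECORD_LENGTH] for i in range(0, len(text), RECORD_LENGTH)]

print(split_records(delimited_data, delimited=True))
print(split_records(undelimited_data, delimited=False))
```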

Getting Fixed Length Source Object

In this section, we will cover how to get the Fixed Length File Source object onto the dataflow designer from the Toolbox.

  1. To get a Fixed Length File Source object from the Toolbox, go to Toolbox > Sources > Fixed Length File Source. If you are unable to see the Toolbox, go to View > Toolbox or press Ctrl + Alt + X.

  2. Drag and drop the Fixed Length File Source object onto the designer.

You can see that the dragged source object is empty right now. This is because we have not configured the object yet.

Configuring the Fixed Length File Source Object

  1. To configure the Fixed Length File Source object, right-click on its header and select Properties from the context menu.

When you select the Properties option from the context menu, a dialog box will open.

This is where you configure the properties for the Fixed Length File Source object.

  2. The first step is to provide the File Path for the Fixed Length File Source object. The File Path establishes connectivity to the source dataset.

Note: In this case, we are using a fixed length file that contains sample Orders data. This file works with the following options:

  • File Contains Headers

  • Record Delimiter is specified as

  3. The dialog box has some other configuration options (a brief sketch illustrating how several of these options behave follows this list):

  • Check the File Contains Header option if the source file contains headers and you want Astera to read them into the source layout.

  • If you want the file to be read in portions, select Partition File for Reading and Astera will read your file according to the specified Partition Count. For example, a file with 1000 rows and a Partition Count of 2 will be read in two partitions of 500 rows each. This is a back-end process that makes data reading more efficient and helps process data faster; it has no effect on your output.

  • The Record Delimiter field allows you to select the delimiter for the records in the source file. The available choices are the carriage-return line-feed combination (CRLF), carriage-return (CR), and line-feed (LF). You can also type a record delimiter of your choice instead of choosing from the available options.

  • If the records do not have a delimiter and you instead rely on knowing the size of a record, use the Record Length field to specify the character length of a single record.

  • The Encoding field allows you to choose the encoding scheme for the source file from a list of choices. The default value is Unicode (UTF-8).

  • To define a hierarchical file layout and process the data file as a hierarchical file, check the This is a Hierarchical File option. Astera IDE provides extensive user interface capabilities for processing hierarchical structures.

  • Advanced File Options

  • In the Header spans over field, specify the number of rows that your header occupies. Use this option when your header spans multiple rows.

  • Check the Enforce exact header match option if you want the header to be read exactly as it is.

  • Check the Column order in file may be different from the layout option if the field order in your source file may differ from the field order in Astera’s layout.

  • Check the Column headers in file may be different from the layout option if you want to use alternate header values for your fields. The Layout Builder lets you specify alternate header values for the fields in the layout.

  • To skip any unwanted rows at the beginning of your file, specify the number of records to omit in the Skip initial records option.

  • Raw text filter

  • If you do not want to apply any filter and want to process all records, check the No filter. Process all records option.

  • If you only want to process records that begin with a specific value, check the Process if begins with option and specify that value in the provided field.

  • If you only want to process records that match a specific regular expression, check the Process if matches this regular expression option and provide that expression in the provided field.

  • String Processing

    String processing options apply when you are reading data from a file system and writing it to a database destination.

  • Check the Treat empty string as null value option when you have empty cells in the source file and want those treated as null values in the database destination that you are writing to; otherwise, Astera will omit them accordingly in the output.

  • Check the Trim strings option when you want to remove any extra spaces from the field values.
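
For a more concrete picture, here is a minimal Python sketch (an illustration of the concepts only, not Astera’s implementation; all function names and values are hypothetical) of how a few of these options behave: splitting records by delimiter versus by Record Length, the partition arithmetic, and the two string processing options.

```python
def read_records(text: str, record_delimiter: str | None, record_length: int) -> list[str]:
    """Split by the record delimiter if one is set; otherwise fall back to Record Length."""
    if record_delimiter:
        return [r for r in text.split(record_delimiter) if r]
    return [text[i:i + record_length] for i in range(0, len(text), record_length)]

def partition_ranges(total_rows: int, partition_count: int) -> list[range]:
    """Partition File for Reading: e.g. 1000 rows with a Partition Count of 2 -> two ranges of 500."""
    size = -(-total_rows // partition_count)          # ceiling division
    return [range(start, min(start + size, total_rows)) for start in range(0, total_rows, size)]

def process_value(value: str, treat_empty_as_null: bool, trim: bool) -> str | None:
    """String Processing options: optional trimming and empty-string-to-null conversion."""
    if trim:
        value = value.strip()
    if treat_empty_as_null and value == "":
        return None
    return value

print(partition_ranges(1000, 2))                                         # [range(0, 500), range(500, 1000)]
print(process_value("  ACME  ", treat_empty_as_null=True, trim=True))    # 'ACME'
```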

  4. Once you have specified the data reading options on this window, click Next.

The next window is the Length Markers window, where you place markers to specify the column boundaries in your data.

Using the Length Markers window, you can create the layout of your fixed-length file. To insert a field length marker, click anywhere in the window. For example, if a field starts at position five and you want it to contain five characters, click at marker position nine.
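
As a quick sanity check for marker positions, the tiny sketch below (an illustration only, assuming 1-based, inclusive column numbering as in the example above) computes where a field ends given its start position and length.

```python
def last_column(start: int, length: int) -> int:
    # A 5-character field starting at column 5 occupies columns 5..9,
    # so its marker falls at position 9 (assuming 1-based, inclusive columns).
    return start + length - 1

print(last_column(5, 5))  # 9
```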

Note: In this case we are using a fixed length file with Orders sample data.

  • If you point your cursor to where the data starts (in this case, next to OrderID) and double-click, Astera will automatically detect the columns and place markers in your data. Blue lines will appear as markers on the detected columns.

You can modify the markers manually. To delete a marker, double-click on the marked column.

In this case, we removed the second marker and instead added markers after CustomerID and EmployeeID.

In this way, you can add as many markers as there are columns/fields in the dataset.

  • You can also use the Build from Specs feature to build the fields based on an existing file instead of manually specifying the layout.

  5. After you have built the layout by inserting the field markers, click Next.

The next window is the Layout Builder. On this window, you can modify the layout of your fixed length source file.

  • If you want to add a new field to your layout, go to the last, blank row of the layout (in the Name column) and double-click on it; a blinking text cursor will appear. Type in the name of the field you want to add and select the relevant properties for it. A new field will be added to the source layout.

Note: Make sure to specify the length of the field that you have added in the properties of the field.

  • If you want to delete a field from your dataset, click on the serial column of the row that you want to delete. The selected row will be highlighted in blue.

Right-click on the highlighted row; a context menu will open with the option to Delete.

Selecting Delete will delete the entire row.

The field is now deleted from the layout and will not appear in the output.

Note: Modifying the layout (adding or deleting fields) from the Layout Builder in Astera will not make any changes to the actual source file. The layout is specific to Astera only.

  • Other options that the Layout Builder provides are:

    Data Type: Specifies the data type of a field, such as Integer, Real, String, Date, or Boolean.

    Start Position: Specifies the position from where a column/field starts.

    Length: Defines the length of a column/field.

    Alignment: Specifies the alignment of the values in a column/field. The options provided are right, left, and center.

    Allows Null: Controls whether the field allows blank or NULL values in it.

    Expressions: Defines functions through expressions for any field in your data.
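
Conceptually, these properties describe how each field is cut out of a record and typed. The following hypothetical Python sketch (the layout, field names, and sample record are made up for illustration and do not come from Astera) shows how such a layout could be applied to one fixed-length record.

```python
from datetime import datetime

# A made-up layout: (name, data type, 1-based start position, length)
layout = [
    ("OrderID",    int,    1,  5),
    ("CustomerID", str,    6,  7),
    ("OrderDate",  "date", 13, 10),
]

def apply_layout(record: str) -> dict:
    """Slice the record by Start Position and Length, then convert each value to its Data Type."""
    row = {}
    for name, data_type, start, length in layout:
        raw = record[start - 1:start - 1 + length].strip()
        if data_type == "date":
            row[name] = datetime.strptime(raw, "%Y-%m-%d").date()
        elif raw == "":
            row[name] = None           # Allows Null: blank values become NULL
        else:
            row[name] = data_type(raw)
    return row

print(apply_layout("10248VINET  2024-07-04"))
# {'OrderID': 10248, 'CustomerID': 'VINET', 'OrderDate': datetime.date(2024, 7, 4)}
```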

  6. After you are done customizing the layout in the Layout Builder window, click Next. You will be taken to a new window, Config Parameters. Here, you can define parameters for the Fixed Length File Source.

Parameters simplify the deployment of flows by eliminating hardcoded values and provide an easier way of changing multiple configurations with a simple value change.

Note: Parameters left blank will use their default values assigned on the properties page.
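
The following short sketch (hypothetical names and values, not Astera’s implementation) illustrates the idea behind config parameters: a deployment overrides only what it needs, and blank or missing values fall back to the defaults from the properties page.

```python
# Defaults as configured on the properties page (hypothetical values).
defaults = {"FilePath": r"C:\Data\Orders.txt", "Encoding": "utf-8"}

def resolve_parameters(overrides: dict) -> dict:
    """Blank or missing overrides fall back to their default values."""
    return {key: overrides.get(key) or default for key, default in defaults.items()}

# A deployment that only changes the file path keeps the default encoding.
print(resolve_parameters({"FilePath": r"D:\Incoming\Orders_2024.txt"}))
```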

  7. Once you have been through all configuration options, click OK.

The FixedLengthFileSource object is now configured.

The Fixed Length File Source object has now been modified from its previous configuration. The new object has all the modifications that we specified in the Layout Builder.

In this case, the modifications that we made were:

  • Separated the EmployeeID column from the OrderDate column.

  • Added the CustomerName column.

You have successfully configured your Fixed Length File Source object. The fields from the source object can now be mapped to other objects in a dataflow.

Check the This is a COBOL data file option if you are working with COBOL files and do not have COBOL copybooks; you can still import this data by visually marking fields in the Layout Builder and specifying field data types. For more advanced parsing of COBOL files, you can use Astera’s COBOL File Source.

Check the Use SmartMatch with Synonym Dictionary option when the header values vary between the source layout and Astera’s layout. You can create a Synonym Dictionary file to store the values for alternate headers. You can also use the Synonym Dictionary file to facilitate automapping between objects on the flow diagram that use alternate names in field layouts.
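
As a conceptual illustration (hypothetical names only, not Astera’s file format), a synonym dictionary can be thought of as a mapping from alternate header names to the canonical field names in the layout, which is what makes automatic mapping possible.

```python
# Hypothetical synonym dictionary: alternate headers -> canonical layout field names.
synonyms = {
    "Order No":   "OrderID",
    "Cust ID":    "CustomerID",
    "Employee #": "EmployeeID",
}

def resolve_header(header: str) -> str:
    """Return the canonical field name for a header, falling back to the header itself."""
    return synonyms.get(header.strip(), header.strip())

print([resolve_header(h) for h in ["Order No", "Cust ID", "OrderDate"]])
# ['OrderID', 'CustomerID', 'OrderDate']
```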