Astera Data Stack
Version 9

Normalize Transformation


The Normalize transformation object in Astera is used to create one-to-many mappings. It allows users to create multiple records from a single record by transposing the columns in a dataset into rows.

In other words, you can take a dataset that has many columns and turn it into one that has many rows.

Use Case

In this use case, we have a sample Taxpayers Excel dataset that contains information on the types and amounts of taxes paid by taxpayers. It includes each taxpayer's Social Security Number (SSN) and the different types of taxes they have paid. The tax types are split across separate fields, such as City, County, State, and Federal, with each column containing the amount that taxpayer paid for that particular tax type.

Our goal is to reduce the number of fields and increase the number of records by specifying the tax type in one consolidated field. To do this, we will use the Normalize object in the Astera Data Stack.
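The reshaping described above is a standard wide-to-long (unpivot) operation. Purely as an outside point of reference, and not as part of the Astera workflow itself, the short pandas sketch below expresses the same transformation; the field names follow this use case, while the sample values are made up:

import pandas as pd

# Hypothetical sample of the wide Taxpayers layout (values are invented).
wide = pd.DataFrame({
    "SSN":     ["123-45-6789", "987-65-4321"],
    "City":    [500, 320],
    "County":  [150, 90],
    "State":   [700, 410],
    "Federal": [2100, 1600],
})

# Unpivot the four tax columns into TaxType/TaxAmount pairs, keeping SSN.
long = wide.melt(
    id_vars="SSN",
    value_vars=["City", "County", "State", "Federal"],
    var_name="TaxType",
    value_name="TaxAmount",
)

print(long)  # 8 rows: one record per taxpayer per tax type

This long shape, one record per taxpayer per tax type, is what we will configure the Normalize object to produce.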

Using the Normalize Transformation

  1. Drag the relevant source object from the Toolbox and drop it onto the designer. In this case, we will select the object from Toolbox > Sources > Excel Workbook Source and configure it so that it reads data from the Taxpayers dataset.

To preview the data, right-click on the object header and select Preview Output from the context menu. Here is a look at the dataset:

  2. Drag the Normalize object from Toolbox > Transformations > Normalize and drop it onto the designer.

You will notice that, by default, the object contains one Output node and two Input nodes.

Any field mapped as a new member to one Input node will appear in all of the Input nodes as well as the Output node. In this case, we will map the SSN field from the source object to an Input node.

  3. Right-click on the header of the Normalize object and select Properties from the context menu.

A dialog box will appear. This is where you configure the Normalize object.

  4. In the Layout Builder window, create the layout of your normalized dataset by providing field names. In this case, we have already mapped SSN from the source and will create two new fields, one for TaxAmount and the other for TaxType.

Once you have created the layout, click Next.

  5. In the Normalize (One to Many) Transformation Properties window, make appropriate selections for the following options:

    • Number of Mapping Groups: This option specifies how many mapping groups are required. Increasing it beyond the default of 2 also increases the number of Input nodes in the object. In this case, there are four tax types, so we will set the number to 4.

    • Omit Record If this Element is Null: From this drop-down menu, you can select a field from your layout. If the value of that field is null, the entire record containing the null value will be omitted from the output (a short sketch of this behavior follows this step). In this case, we will keep the default selection, which means the option is not applied to any field.

Once you have made the required selections, click Next.
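To make the omit-if-null option concrete, here is a small assumed sketch of its behavior (not product code), using the field names from this use case: had TaxAmount been selected in the drop-down, any generated record with a null TaxAmount would be dropped.

# Assumed behavior sketch only; values are invented.
records = [
    {"SSN": "123-45-6789", "TaxType": "City",   "TaxAmount": 500},
    {"SSN": "123-45-6789", "TaxType": "County", "TaxAmount": None},  # null amount
]

# With "Omit Record If this Element is Null" set to TaxAmount,
# the County record above would be omitted from the output.
kept = [r for r in records if r["TaxAmount"] is not None]
print(kept)  # only the City record remains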

  6. On the last window, the General Options window, you will find a text box for Comments. You can also select a few options here that are common to most objects in Astera.

    • Clear Incoming Record Messages: When this option is checked, any messages coming in from the preceding object will be cleared.

    • Do Not Overwrite Default Values with Nulls: When this option is checked, default values are not overwritten with null values in the output.

In this case, we will leave the options unchecked. Once you are done, click OK.

Now that you have configured the Normalize object, you will notice that new input nodes have been added to the object based on our selection for the Number of Mapping Groups option. Each node contains the layout we specified in the Layout Builder window.

  7. The next step is to make the required mappings from the source object to the Normalize object. These are the mappings needed for this particular use case (a short conceptual sketch of what they produce follows the list):

    • Map SSN from the Excel Workbook Source object to SSN in all four Input nodes of the Normalize object.

    • Map City to TaxAmount in the first Input node, County to TaxAmount in the second, State to TaxAmount in the third, and Federal to TaxAmount in the fourth.

    • Map the City Field Name to TaxType in the first Input node, the County Field Name to TaxType in the second, the State Field Name to TaxType in the third, and the Federal Field Name to TaxType in the fourth.

    • To map a field name, right-click on the mapping link, hover over Change Map Type, and select Field Name.
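Conceptually, the four mapping groups expand every source record into four output records. The sketch below illustrates this with a hypothetical source record (it is not product code): a regular map carries a column's value, whereas a Field Name map carries the column's name.

# Illustration of the mappings listed above; the sample values are invented.
source_record = {"SSN": "123-45-6789", "City": 500, "County": 150,
                 "State": 700, "Federal": 2100}

tax_columns = ["City", "County", "State", "Federal"]  # one column per mapping group

normalized = [
    {
        "SSN": source_record["SSN"],       # SSN is mapped into every Input node
        "TaxAmount": source_record[col],   # regular map: the column's value
        "TaxType": col,                    # Field Name map: the column's name
    }
    for col in tax_columns
]

print(normalized)  # 4 records: City, County, State, and Federal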

Here is what the final dataflow should look like:

  8. Preview the output to have a look at the normalized dataset.

You can map these fields further to other objects in the dataflow using the output node of the Normalize object.

This concludes using the Normalize object in the Astera Data Stack.
