Overview of Cardinality in Data Modeling

In data modeling, cardinality refers to the relationship of data in one database table with respect to another table. Two tables can be related as “one-to-one”, “one-to-many”, or “many-to-many”:

  • 1:1 - One row in Table A relates to one row in Table B. Using an entity-relationship (ER) model, 1:1 means that one occurrence of an entity relates to only one occurrence in another entity.

  • 1:Many - One row in Table A relates to many rows in Table B. In ER modeling, 1:Many means that one occurrence in an entity relates to many occurrences in another entity.

  • Many:Many (M:N) - Many rows in Table A relate to many rows in Table B. In ER terms, many occurrences in one entity relate to many occurrences in another entity. For example, a student (Table A) may sign up for many classes (Table B), and a class may have several students in it. Many-to-many relationships normally require a cross-reference table AB, with two one-to-many relationships A:AB and B:AB.

A relationship may also be optional, meaning that a row in one table doesn’t have to relate to any row in the other table. For example, not all associates are provided with business credit cards; put differently, an associate may have zero or more credit cards issued by the company.
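
For readers who think in code, the relationships above can be sketched with plain data structures. Everything below is hypothetical Python, not Astera code; the table and field names are made up to illustrate the three cardinality types and the optional relationship.

```python
# Many:Many - students and classes, linked through a cross-reference
# table (two one-to-many relationships: Student:Enrollment and
# Class:Enrollment).
students = [
    {"student_id": 1, "name": "Ada"},
    {"student_id": 2, "name": "Lin"},
]
classes = [
    {"class_id": 10, "title": "Algebra"},
    {"class_id": 20, "title": "History"},
]
enrollments = [  # cross-reference table: one row per (student, class) pair
    {"student_id": 1, "class_id": 10},
    {"student_id": 1, "class_id": 20},
    {"student_id": 2, "class_id": 10},
]

# Optional 1:Many - an associate may have zero or more company cards.
associates = [{"associate_id": 7, "name": "Sam"},
              {"associate_id": 8, "name": "Kim"}]
credit_cards = [{"card_id": 501, "associate_id": 7}]  # Kim has none

# Resolving the Many:Many relationship: classes taken by each student.
for s in students:
    titles = [c["title"]
              for e in enrollments if e["student_id"] == s["student_id"]
              for c in classes if c["class_id"] == e["class_id"]]
    print(s["name"], titles)
# Ada ['Algebra', 'History']
# Lin ['Algebra']
```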

Data Cardinality in Astera

Astera supports the cardinality types listed above, and it expands on this concept by applying cardinality rules to any data source it supports, not just database sources. A data source is similar to an entity in the ER model and may be a database table or view, a flat or hierarchical file, or even email, FTP, or web service data. A non-database source is most commonly a flat file, such as a comma-separated values (CSV) file or an Excel workbook. In an Astera dataflow, a flat file source acts similarly to a single database table source. Astera also supports non-flat, or hierarchical, sources representing more complex data structures, such as trees containing a root node and a combination of descendant (child) nodes, which may themselves be parents to their own child nodes.

Astera implements the 1:1 and 1:Many cardinality types in non-flat data sources by using tree layouts and designating every node in the tree as either a ‘Single Instance’ node or a ‘Collection’ node, each marked with its own icon in the tree. The figure below shows an example of an object with a complex tree layout.

A single instance child node has a 1:1 relationship with its parent node.

A collection child node has a Many:1 cardinality to its parent node. In other words, each parent record may have zero or more child records, which are collectively referred to as a collection.
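
A minimal sketch of these two node types, assuming a hypothetical hierarchical record expressed as a Python dictionary (the field names are illustrative, not an Astera layout):

```python
order = {
    "order_id": 1001,
    # Single Instance node: exactly one shipping address per order (1:1).
    "shipping_address": {"city": "Austin", "zip": "78701"},
    # Collection node: each order may have zero or more line items
    # (Many:1 from the items to the order).
    "items": [
        {"sku": "A-1", "qty": 2},
        {"sku": "B-9", "qty": 1},
    ],
}

assert isinstance(order["shipping_address"], dict)  # one per parent
assert isinstance(order["items"], list)             # zero or more per parent
```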

By mapping tree nodes in a particular way, and by using transformations such as Sort, Join, Union, Flower, or Aggregate, to name a few, Astera allows you to effectively denormalize the required portions of tree data structures and save that new representation of your data in a flat file or table.

Note: The reverse of creating flat output is joining denormalized or disparate data sources using the Tree Join transformation, which allows you to create data trees by joining multiple flat or hierarchical data sources.
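
As a rough illustration of what denormalizing a tree amounts to, the sketch below repeats a hypothetical parent’s scalar fields once for every record in a child collection. Astera accomplishes this through node mappings and transformations; this only shows the underlying idea.

```python
def denormalize(parent, collection_key):
    # Copy the parent's scalar fields into one flat row per child record.
    scalars = {k: v for k, v in parent.items()
               if not isinstance(v, (list, dict))}
    return [{**scalars, **child} for child in parent[collection_key]]

order = {
    "order_id": 1001,
    "items": [{"sku": "A-1", "qty": 2}, {"sku": "B-9", "qty": 1}],
}
print(denormalize(order, "items"))
# [{'order_id': 1001, 'sku': 'A-1', 'qty': 2},
#  {'order_id': 1001, 'sku': 'B-9', 'qty': 1}]
```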

Dealing with Data Cardinality while Transforming Your Data in Astera

Understanding Astera Transformation Types and How They Impact Cardinality

With Astera dataflows, you can merge data from multiple disparate sources, split data from a single source into multiple destinations, and perform a series of relatively simple to highly complex transformations. Astera's built-in transformations include field-level transformations such as expressions, lookups, and functions, as well as recordset-level transformations such as sort, join, union, merge, filter, route, normalize, denormalize, and many others.

Single vs. Set Transformations

Astera's transformations are divided into two categories: single (or record-level) transformations and set transformations:

  • Single transformations create derived values by applying a lookup, function, or expression to fields from a single record. You can view the results of single-record transformations as appending more values to the preceding layout. A name-parse function, for instance, takes a full name and breaks it into individual name components. These components can be mapped to succeeding transformations (including set transformations) or written to a destination.

  • Set transformations, on the other hand, operate on a group of records and may result in the joining, reordering, elimination, or aggregation of records. Examples of set transformations include join, sort, route, distinct, etc. Data sources and destinations are also considered set transformations. Generally, set transformations operate on the entire data set. For instance, a sort transformation sorts the entire data set before passing records along to the next object. Similarly, aggregate transformations use the entire data set to construct aggregated records. The sketch below illustrates the distinction.
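
Here is a minimal sketch of the distinction, assuming hypothetical record layouts; it is not Astera code, just the shape of the two transformation types.

```python
# Single (record-level) transformation: derives values one record at a
# time; its output appends to the incoming layout.
def parse_name(record):
    first, last = record["full_name"].split(" ", 1)
    return {**record, "first_name": first, "last_name": last}

# Set transformation: needs the whole recordset before it can emit
# anything (a sort cannot release its first output row until it has
# seen the last input row).
def sort_by_last_name(records):
    return sorted(records, key=lambda r: r["last_name"])

data = [{"full_name": "Grace Hopper"}, {"full_name": "Alan Kay"}]
print(sort_by_last_name([parse_name(r) for r in data]))
```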

In Astera, data flows between set transformations. Single (or record-level) transformations are used to transform or augment individual fields during these data movements. The transformations in your flow need to be connected (i.e., mapped) in a certain way to avoid cardinality issues while transforming your data. A cardinality issue in your dataflow will prevent it from running successfully and will throw a cardinality error during flow verification or at run-time.

Avoiding Cardinality Issues while Designing Your Dataflows

Keep the following rules in mind when designing your flows. They will help you avoid the most common cardinality issues.

  • A record-level transformation can be connected to only one set transformation. For instance, a lookup or expression cannot receive input from two different set transformations.

  • Other than transformations that enable combining multiple data sources, such as Join, Merge, and Union, transformations can receive input from only one set transformation.

  • Set transformations can receive input from any number of record-level transformations as long as all these record-level transformations receive input from the same set transformation. In other words, you can map any number of single transformations between two set transformations.

  • When working with non-flat layouts, such as XML or COBOL sources, you can avoid many cardinality issues by aggregating your data or scoping it with a Flower transformation. You can build Aggregate or Flower output on an entire dataset, or drill down to any level of granularity using a scope. A scope transformation differs from other set transformations in that it works on a particular granularity level (such as a node in the tree) rather than on the entire data set. By using a scope, for instance, a specific tree collection can be sorted, filtered, or aggregated; a sketch of the idea follows this list. For more information on Scope and Flower transformations, see Scope Transformations.
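
Here is a rough sketch of scoping, assuming a hypothetical parent/child recordset: instead of one aggregate over the entire dataset, the aggregate is computed within each group defined by the chosen granularity level.

```python
from itertools import groupby

# Hypothetical child records; parent_id plays the role of the tree node
# that defines the scope.
children = [
    {"parent_id": 1, "amount": 10},
    {"parent_id": 1, "amount": 30},
    {"parent_id": 2, "amount": 5},
]

# Aggregate scoped at the parent level: one output record per parent,
# rather than a single aggregate over the whole dataset.
by_parent = sorted(children, key=lambda c: c["parent_id"])
scoped = [{"parent_id": pid, "max_amount": max(c["amount"] for c in grp)}
          for pid, grp in groupby(by_parent, key=lambda c: c["parent_id"])]
print(scoped)
# [{'parent_id': 1, 'max_amount': 30}, {'parent_id': 2, 'max_amount': 5}]
```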

Example of a Set Transformation Working on Different Input Sets

The Union transformation is an example of a set transformation that accepts more than one input set. Union combines incoming data from two or more input sets into a single output. Each input node receives data from a different source or set transformation. Note that a single input node in a Union transformation cannot receive data from two different set transformations. The figure below shows an example of a flow using a Union transformation fed by three set transformations.
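
In code terms, a union amounts to concatenating recordsets. The sketch below uses hypothetical recordsets and ignores the field-layout mapping a real Union transformation performs.

```python
# Hypothetical input sets, each standing in for a different source or
# preceding set transformation.
east = [{"region": "East", "sales": 100}]
west = [{"region": "West", "sales": 80}]
north = [{"region": "North", "sales": 120}]

combined = [*east, *west, *north]  # one output set with every input record
print(len(combined))               # 3
```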

Commonly Asked Questions about Cardinality Errors

“While using the Route component I received the following error. Why am I getting this error?”

Message: Cardinality of element paths inconsistent with other source paths

If you look at the image below, there is a map going from ExcelSource1 directly to DatabaseDestination1. There is also a map going from the same source to a Router and then to the database destination. This is not allowed.

The Route transformation is for routing entire records, not individual values. You will also run into this error when trying to do this with any other set transformation. For example, suppose you had a Filter transformation instead of a Router, and it filtered out all but 1 of the 10 records coming from the Excel workbook.

From the point of view of the destination table, how many records should it write: 1 or 10? The counts don’t match up, and this is why you get a cardinality error. To avoid this error, a record must flow entirely through the set transformation; you cannot go around it.
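
The mismatch is easy to see in a sketch; the records below are hypothetical, chosen to mirror the 10-record example above.

```python
# A 10-record source mapped both directly and through a filter into the
# same destination; the field names are made up.
source = [{"id": i, "amount": i * 10} for i in range(1, 11)]  # 10 records
filtered = [r for r in source if r["amount"] > 90]            # 1 record

# The destination would have to pair each source record with a filtered
# record, but no consistent pairing exists between 10 rows and 1 row.
print(len(source), len(filtered))  # 10 1 -> inconsistent cardinality
```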

Mapping Examples

Below are 15 examples showing correct maps that avoid data cardinality issues, as well as some incorrect uses of maps that lead to cardinality issues or other dataflow errors.

Example 1:

Incorrect input mapping of a Set transformation. The following dataflow has invalid maps into Sort1. The Sort transformation object is a set transformation, and it only allows input maps from one Source/Set transformation. This flow will throw an error at run-time, and will also display a preview error if you try to preview the output of Sort1.

Example 2:

In this example, the destination file is a Set transformation, and as such it does not allow maps from two preceding Set transformations at the same time. In addition, because Filter1 may return a subset of the input set, there is a cardinality issue when mapping that subset together with the full data set from DelimitedSource1.

Example 3:

The following dataflow has correct maps. Each Set transformation in this flow receives input from only one preceding Set transformation. Note that the number of records in DelimitedDest1 may differ from the number of records in DelimitedDest2 because of the different filters applied. There is no cardinality issue here because the output is written to two separate destination files.

Example 4:

The following dataflow has incorrect maps in the destination file. In this example, the destination Set transformation receives input from two preceding Set transformations, which is not allowed. In addition, only a subset of records will go through the Route1 path of the router, which would create a cardinality issue when trying to combine that subset with the full dataset out of the delimited source.

Example 5:

This flow has incorrect mapping feeding into the destination file. The mapping creates a cardinality issue with the Router’s output because the different paths out of the Router could each return a different number of records.

Example 6:

The mapping shown below is correct because the Merge transformation object accepts input from two preceding Set transformations.

Example 7:

The following mapping is correct. It does not create a cardinality issue since it denormalizes the input set by iterating the child collection WS_TF_R1 through each instance of the parent node WS_TF_D1.

Example 8:

The following mapping is correct. WS_TF_R2 is a grandchild node of WS_TF_D1. It has a 1:1 relationship with its parent node WS_TF_R1, which is a collection child node to its own parent node WS_TF_D1. So in effect, this mapping creates a denormalized record set with each instance of the root node WS_TF_D1 repeated for each occurrence of the grandchild node WS_TF_R2 in the collection.

Example 9:

This mapping is correct. There are two sibling children WS_TF_R2 and WS_TF_R3 mapped together. Normally, this could create a cardinality issue, but in this case, each of the sibling children has a 1:1 relationship with its parent node WS_TF_R1. As a result, both sibling nodes will return the same number of records, outputting a record for each occurrence of the WS_TF_R1 node in the collection.

Example 10:

The mapping shown below is correct. In this example, the WS_TF_R2 node is a single instance node, creating a 1:1 relationship with its parent node WS_TF_R1. WS_TF_R7 is a collection node to its parent node WS_TF_R1. Both of the mapped nodes, WS_TF_R2 and WS_TF_R7, share the same parent node WS_TF_R1. As a result, there is no cardinality issue here, and this mapping will create a denormalized view of the tree, repeating each instance of WS_TF_R2 for each occurrence of WS_TF_R7 in the collection.

Example 11:

The mapping below is correct. WS_TF_R7 is the child of WS_TF_R1. Both are collection nodes. This mapping will create a denormalized view of the tree, repeating each occurrence of the WS_TF_R1 node for each occurrence of its child node WS_TF_R7 in the collection.

Example 12:

This mapping is incorrect. This flow is trying to map two sibling collections, WS_TF_R7 and WS_TF_R10, which are children of the parent node WS_TF_R1. Because each of the sibling collections can return a different number of records than the other, this kind of mapping would create an N:M relationship and so will return a cardinality error.

Example 13:

The mapping shown below is incorrect. It will return a cardinality error when the flow runs, or if you preview the Destination object’s output. This mapping causes a cardinality issue because it attempts to combine the output of two single instance nodes, WS_TF_R8 and WS_TF_R11, which are the respective children of the sibling collections WS_TF_R7 and WS_TF_R10.

Because each of the sibling collections can return a different number of records compared to the other collection, this mapping would create an N:M relationship and a cardinality error.

Example 14:

This mapping is correct. It eliminates the N:M cardinality that would normally result from joining two sibling collections by applying a Flower transformation to one of the collections. The Flower transformation acts as a filter, returning the first record from the WS_TF_R7 collection in each instance of the WS_TF_D1 node. There is a scope map into the Flower object, shown as a blue line connecting the WS_TF_D1 node. This tells the dataflow engine to ‘scope’ the collection at the WS_TF_D1 node level. Because the Flower object returns one record from WS_TF_R7 for each record in the root node, this effectively creates a 1:N relationship between WS_TF_R7 and WS_TF_R10.
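
A sketch of the Example 12 problem and this Example 14 fix, using hypothetical collections in place of the claims data:

```python
# Two sibling collections under one parent can have different lengths,
# so pairing them record-by-record is N:M (Example 12's error).
parent = {
    "claims":   [{"claim_id": 1}, {"claim_id": 2}, {"claim_id": 3}],
    "payments": [{"payment_id": "a"}, {"payment_id": "b"}],
}
# len 3 vs. len 2: no record-by-record pairing exists.

# Scoping one collection down to a single record per parent (what the
# Flower object does above by returning the first record) restores 1:N.
first_claim = parent["claims"][0]
rows = [{**first_claim, **p} for p in parent["payments"]]
print(rows)  # one row per payment, each carrying the first claim
```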

Example 15:

Another example of correct mapping. The mapping here is similar to the one in the previous example, but instead of a Flower transformation, this dataflow uses an Aggregate transformation scoped at the root level. The Aggregate returns the maximum disbursement date among all claims in the WS_TF_D1 record, effectively returning one record from each WS_TF_R7 collection and, as a result, creating a 1:N relationship between WS_TF_R7 and WS_TF_R10. For more details on this type of mapping, see Example 14.