JSON/XML File – For this type of API source, you only need to provide the path to the OpenAPI Specification file in JSON or XML format.
Specify the File Path and click OK.
This API will be populated in the API Browser panel, where you can simply expand the nodes and drag-and-drop methods onto the designer window.
An API (Application Programming Interface) is an interface or medium through which one software application communicates with another. In other words, it is a set of contracts that allows different software systems to share information with each other. The greatest advantage of an API is that different programs and devices can communicate with each other securely, without exposing their internal workings.
APIs are messengers that conform to the technical contract between two parties. They are language- and platform-independent, which means C# can talk to Java, and Unix can communicate with Mac without any difficulty. An API is not the same as a remote server; rather, it is the part of a remote server that receives requests and sends responses. More precisely, an API defines a structured request and response.
The API Browser in Astera reduces the steps needed to make HTTP calls to a single authentication step. Once you have imported an API into Astera, all endpoint operations in that API are populated at once. An API definition describes which requests are available and what the responses will look like.
So, once you load an API definition, all supported methods are populated in the API Browser, unlike the legacy approach, where each supported method had to be configured separately in its own object.
There are two ways to configure APIs in Astera: by importing an API definition into the API Browser, or by configuring the API Connection and API Client objects directly in a flow. For open APIs, you only need to provide the API Import Source and File Path or Base URL to configure the connection with a specific API. Once this standardized information is provided, any API that you have imported will populate in the API Browser, along with its methods, for example, GET, PUT, POST, PATCH, and DELETE, and they will remain accessible until their authentication period expires. From the API Browser in Astera, you can simply drag and drop operations and use them in your flows.
It is important to note that a project must be created before importing APIs to work with the API Browser. However, you can access the API without a project when it’s an API Connection contained in the flow.
The API Browser, along with all its features and functionalities, works only within the scope of a project. Otherwise, it will give you the following error,
When a user imports an API, a shared connection (shared action) file is created within the project automatically. This file contains the Base URL of the imported API.
Astera supports the following HTTP request methods (a minimal request sketch follows the list):
PUT: To update or replace a specified resource on an API.
GET: To retrieve data from a specified resource on an API.
POST: To create a new resource, or submit data to be processed, on an API.
DELETE: To delete a specified resource on an API.
PATCH: To apply partial modifications to an existing resource.
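For orientation, here is a minimal sketch of what these methods look like as raw HTTP calls, written with Python's requests library. The base URL and the /accounts resource are hypothetical placeholders; within Astera, the API Client object issues these calls for you.

```python
# Minimal sketch of the five HTTP methods using Python's requests library.
# The base URL and the /accounts resource are hypothetical placeholders.
import requests

BASE_URL = "https://api.example.com/v1"

r = requests.get(f"{BASE_URL}/accounts/42")                          # GET: retrieve a resource
r = requests.post(f"{BASE_URL}/accounts", json={"name": "Acme"})     # POST: create a new resource
r = requests.put(f"{BASE_URL}/accounts/42", json={"name": "Acme"})   # PUT: replace/update a resource
r = requests.patch(f"{BASE_URL}/accounts/42", json={"name": "New"})  # PATCH: apply a partial update
r = requests.delete(f"{BASE_URL}/accounts/42")                       # DELETE: remove a resource
print(r.status_code)
```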
To work with the API Browser in Astera, you must first create an API Client Project.
Follow the steps below to create an API Client Project in Astera:
Go to Menu Bar > Project > New > API Client Project.
Provide a name for the API Client Project and specify the location and directory where you want to save it.
Note: It is best practice to always create a new project in a new folder to avoid errors.
Now, open the API Browser panel on your Astera client from Menu Bar > View > Data Service > API Browser.
Once selected, an API Browser panel will open on the left side of your Astera client window.
Here, you can see several icons in the toolbar of the API Browser:
Import API: By clicking this option, you can import different APIs with various available options.
Remove API from Browser: This option removes the selected API from the API Browser.
Refresh API Tree: This option allows you to redraw the browser tree after you have deleted some operations.
Expand/Collapse all: These options show/hide all the requests in the CAPI file.
Add Request: This option allows you to add a new HTTP request to the CAPI file by specifying the request name, resource, and HTTP method.
Edit Properties: You can use this option to change the shared connection or the API name of the CAPI.
Open API Connection: This option allows you to directly open the shared API Connection from the project for the API opened in the API Browser.
Save CAPI file: Any changes made to the CAPI file are saved when you click on this option.
To import an API in Astera, click the Import API icon. An Import API screen will open.
Here, first, you need to select the API Import Source type from the drop-down menu. Astera offers several ways to import APIs.
JSON/XML URL – For this type of API source, you will need to provide the URL in JSON or XML format.
Specify the URL and click OK.
This API will be populated in the API Browser panel.
To make an API call, an API Connection object needs to be configured first. This object stores all the common information that can be shared across multiple API requests.
Drag-and-drop the API Connection object from the Toolbox onto a dataflow.
Note: It can also be stored as a shared action file.
Right-click on the API Connection object and select Properties from the context menu.
A configuration window will appear on your screen with the following options:
Base URL: Here, you can specify the base URL of the API, which will be prepended as a common path to all API endpoints sharing this connection. A base URL usually consists of the scheme, hostname, and port of the API web address.
Timeout (msec): Specify the duration, in milliseconds, to wait for the API server to respond before giving a timeout error.
Include Client SSL Certificate: Check this box to include an imported client certificate for the specified base URL.
Enable Authentication Logs: Select this checkbox to enable authentication logging for APIs.
Authentication – Security Type: Specify the authentication type for the API.
Astera supports the following authentication types.
Authentication is the identification and verification of a user. It allows an application to determine whether a user's identity is valid and authorized; based on the outcome, the user is granted access to the application.
For APIs, authentication plays a key role in authorizing requests to the API platform’s resources. The following authentication types are available within the API Connection object.
No Authentication
OAuth 2
API Key
Basic Authentication
Bearer Token
AWS Signature
NTLM
No Authentication
With this security type, the user can send API requests without including any authentication parameters.
OAuth 2
This type is used when one application needs permission to access your data in another application on your behalf. Instead of giving your password to that application, OAuth 2 enables delegated authorization through a third-party Authorization Server.
In response to a valid authorization, the Auth Server issues an Access Token with a restricted scope and validity that authenticates the user with specific permissions. When the Access Token expires, its Refresh Token is used to obtain another valid Access Token.
Configure an OAuth 2 request to generate Access and Refresh tokens. The tokens will be implicitly added to the request and auto-refreshed if expired.
The OAuth 2 authentication supports different flows for various scenarios. You can select any of the following Grant Types:
Implicit
Authorization Code
Authorization Code (with PKCE)
Password
Client Credentials
Implicit
In this Grant Type, you only need to provide an Authentication URL and Client ID to request a token, without an intermediate code exchange. It was built for apps such as native JavaScript clients and mobile or browser-based applications, where client secrets cannot be kept hidden.
Because the token is returned directly in the URL, this flow is considered less secure for web applications.
Authentication URL: This is the login page, where the API user authorizes itself to the Authentication Server.
Client ID: This is the public identifier for accessing the registered API Server application.
Authorization Code
This flow type is popular for mobile and web server-side applications.
In this Grant Type, you need to provide an Authentication URL, Access Token URL, Client ID, and, optionally, a Client Secret to authorize.
The flow first requests a one-time authorization code from the Authorization Server. The client is then redirected to the API Server, where it exchanges the code, along with its client secret, for an Access Token that authenticates the user to the API's resources.
Authentication URL: This is the login page, where the API user authorizes itself to the Auth Server.
Access Token URL: This URL is provided to generate an Access Token for authentication after the user has been authorized successfully.
Client ID: The public identifier for accessing the registered API Server application.
Client Secret: It is provided alongside the Client ID, as a secret credential to access the registered application from the Auth Server.
After providing the authentication details, click on the Request Token option to sign in and fetch the token(s).
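For context, the sketch below shows the token exchange that happens behind the Request Token button in the Authorization Code flow, written with Python's requests library. The URLs, client ID, client secret, and authorization code are hypothetical placeholders; Astera performs this exchange for you.

```python
# Hedged sketch of the second leg of the Authorization Code flow: exchanging the
# one-time code for an Access Token. All values below are placeholders.
import requests

token_response = requests.post(
    "https://auth.example.com/oauth2/token",        # Access Token URL
    data={
        "grant_type": "authorization_code",
        "code": "AUTH_CODE_FROM_LOGIN_REDIRECT",    # returned to the Callback URL after login
        "redirect_uri": "http://localhost:8050/",
        "client_id": "YOUR_CLIENT_ID",
        "client_secret": "YOUR_CLIENT_SECRET",      # optional for public clients
    },
)
tokens = token_response.json()  # typically contains access_token and, often, refresh_token
```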
Authorization Code with PKCE
The Proof Key for Code Exchange (PKCE) flow has replaced the Implicit flow as the more secure option for single-page, native, mobile, and browser-based apps. Since apps that live in the browser cannot securely store a client secret, this variant of the Authorization Code flow works without exposing one.
Instead, the client generates a random code_verifier, hashes it into a code_challenge, and sends the code_challenge to the Auth Server. The Auth Server stores it for verifying the client during the OAuth 2 exchange.
The client app then makes an authorization request and receives the Auth Code as a result. It then requests an Access Token by sending the Auth Code together with the code_verifier, which the Authorization Server hashes and compares to its saved copy for verification.
In this Grant Type, you need to provide an Authentication URL, Access Token URL, and the Client ID to authorize.
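To illustrate the mechanism, here is a minimal sketch of how a PKCE code_verifier and code_challenge can be derived (the S256 method from RFC 7636). Astera generates these values internally; the snippet only shows the idea.

```python
# Sketch of deriving a PKCE code_verifier and code_challenge (RFC 7636, S256 method).
import base64
import hashlib
import secrets

code_verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
code_challenge = (
    base64.urlsafe_b64encode(hashlib.sha256(code_verifier.encode()).digest())
    .rstrip(b"=")
    .decode()
)
# The code_challenge is sent with the authorization request; the code_verifier is sent
# later with the token request so the Auth Server can verify it is the same client.
```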
Password
In this Grant Type, you need an Access Token URL, Username, Password, Client ID, and Client Secret to authorize. It is intended for internal services and not recommended for third-party applications, as it authenticates the given credentials in a single step.
Since user credentials are exposed to the client application, this flow type violates OAuth 2 principles and is now deprecated.
Access Token URL: The URL through which the Access token is going to be generated for authentication.
Username: The application login name of the user for authentication.
Password: The application user password is provided for authentication.
Client ID: The public identifier for accessing the registered API Server Application.
Client Secret: It is provided alongside the Client ID, as a secret credential to access the registered application from the Auth Server.
After providing the authentication details, click on Request Token to fetch the token(s).
Client Credentials
In this Grant Type, you need the Access Token URL, Client ID, and Client Secret to authorize. It is used when the client application authenticates itself to access its own resources, without any user context.
Access Token URL: This URL is provided to generate an access token for authentication.
Client ID: The public identifier for accessing the registered API Server application.
Client Secret: It is provided alongside the Client ID, as a secret credential to access the registered application from the Auth Server.
After providing the authentication details, click on Request Token to fetch the token(s).
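As a point of reference, the sketch below shows what a Client Credentials token request looks like at the HTTP level, using Python's requests library. The token URL and credentials are hypothetical placeholders; in Astera, clicking Request Token performs this call for you.

```python
# Sketch of a Client Credentials token request. No user context is involved:
# the client authenticates itself with its own ID and secret. All values are placeholders.
import requests

resp = requests.post(
    "https://auth.example.com/oauth2/token",   # Access Token URL
    data={
        "grant_type": "client_credentials",
        "client_id": "YOUR_CLIENT_ID",
        "client_secret": "YOUR_CLIENT_SECRET",
    },
)
access_token = resp.json()["access_token"]
```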
Additional OAuth 2 Info
An OAuth 2 authentication flow requires some additional parameters to specify resources and scope permissions associated with the given Access Token.
To provide additional information required by an API provider for an OAuth2 request, click on the Additional Info button.
Resource: Use this to identify the URL of the web API intended for user access.
Scope: Use this to specify what the authenticating application can do on behalf of a user by imposing a limit on which resources it can access and with what rights.
State: This parameter helps protect against XSRF: the client generates and sends a random string, and the Auth Server returns the same string after authenticating so the client can verify the response.
Response Type: This parameter is used to specify the expected type to be received from the authorization server on valid authorization. The most common inputs are “code” and “token”. Code is used for the Authorization Code grant type where it is exchanged in the follow-up request for the token. A token is used for implicit grant type where the Access Token is returned directly.
Callback URL: Redirected URL after the authentication request at which the token/code will be returned. For Astera, use “http://localhost:8050/” or “https://localhost:8050/”
Include SSL Certificate: To include the client certificate in the OAuth2 token generation request.
Ignore Certificate Errors: Check to ignore any certificate errors while authenticating.
Additional Parameters: Any additional parameters apart from the above list that are required to be sent in the authentication request can be added here as key-value pairs, separated by a comma.
Token Caching and Auto-Refresh
Following the security policy of authenticating an API call, clients are required to obtain Access/Refresh tokens for authenticating an API request. These tokens may have a defined validity and must be regenerated once they expire.
Once authentication details are fully configured, users need to manually click Request Token in the API Connection.
Handling token expiry and Automation
For the OAuth2 grant flow which requires users to authenticate when requesting a token, the refresh token can be used to obtain a new access token. While other grant flows directly make the call to request an access token, Astera can automatically obtain a new token in the background so your flows can be automated.
You can make use of the auto-generation and caching of these tokens to automate API requests: new tokens are generated as needed, without having to update them manually each time.
Using ‘Client Credentials’ or ‘Password’ OAuth2 Grant Types
These grant flows work by making a single call requesting an Access Token along with the provided client application credentials. Since the flow is not dependent on any user input for authentication, it can be automated for the regeneration of a new token when the existing token expires.
Here, we have a pre-configured connection whose token has expired. Let's see what happens when the flow is executed.
The job trace shows that an expired token was found, and a new token has been generated for this connection and saved to the server cache for future reuse.
On the next run, the server is bound to check the cache for a valid token before opting to generate a new one. The cache stores a token for each unique connection used across all jobs running on the server.
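The caching behavior described above can be pictured with the following rough sketch, which assumes a simple in-memory cache keyed by connection. It is illustrative only; Astera's server manages its token cache internally.

```python
# Rough sketch of the cache-then-refresh behavior: reuse a cached token while valid,
# otherwise request a new one and cache it. Purely illustrative.
import time

token_cache = {}  # connection id -> {"access_token": ..., "expires_at": ...}

def get_token(connection_id, request_new_token):
    cached = token_cache.get(connection_id)
    if cached and cached["expires_at"] > time.time():
        return cached["access_token"]               # reuse the valid cached token
    token, lifetime = request_new_token()           # e.g., a client_credentials call
    token_cache[connection_id] = {"access_token": token, "expires_at": time.time() + lifetime}
    return token
```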
Use of a Refresh Token
For other OAuth2 grant flows that require the user to authenticate first, the refresh token is used to regenerate the access token automatically.
Using Default User Browser for User Authentication
Some API providers restrict the use of an embedded browser for authenticating via the OAuth 2 code exchange. An alternative is to request the token through a browser-based OAuth authentication using the system's default browser, which these providers consider more secure.
In this article, we'll discuss how to run an OAuth 2.0 flow for the Google Calendar API using the user's default browser. Users will first need to create an OAuth 2 application in their Google Developers account and obtain the Client ID and Client Secret.
Authenticating the Client Application
For this example, we will be authenticating Google APIs which do not allow the use of an Embedded Browser for an OAuth2 exchange.
Open the API Connection to configure authentication information.
As Google Calendar API works with OAuth2.0 security with Authorization Code grant type, we can select and configure it accordingly.
We must enter parameters such as Authentication URL, Access Token URL, Client ID, Client Secret, and Additional Information according to the authentication and authorization information provided by Google. Now, let’s click on the Request Token button to generate the access and refresh tokens.
This opens the Embedded Browser of the Astera Client which will result in an error as Google does not allow authentication via an embedded browser. For such platforms, it is necessary to use a more secure user-default browser for OAuth2 authentication exchange.
Close the embedded browser window. Now, check the option to Use System’s Default Browser and click on the Request Token button again.
This opens the user system’s default browser for authentication, and this allows us to successfully retrieve the access token on logging in. In our case, the default Microsoft Edge web browser has opened.
Note: Whether embedded or system browsers are allowed for authentication depends entirely on the API provider.
Click on Continue.
The generated Access Token along with the Refresh Token (if supported by the API provider) are displayed on the REST Connection window with their respective expiry date and time.
Tested System Browsers
The following browsers have been successfully tested for the Astera Client,
Google Chrome
Microsoft Edge
Firefox
API Key
An API Key is a key-value pair that a client provides when it makes an API request. It can be sent in the query string or as a request header.
It requires two parameters for authentication:
Key
Value
API Key as a Query
API Key as a Header
Note: API Key is sent in as a key-value pair in the header such as “apikey: cZRcTZt7R3gnTt9l2C9YHXke0SNDAPJK”
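The difference between the two placements can be sketched as follows, using Python's requests library. The URL is a hypothetical placeholder, and the key-value pair is the one from the note above.

```python
# Sketch of sending an API Key either as a query parameter or as a request header.
import requests

url = "https://api.example.com/v1/orders"          # placeholder endpoint
key, value = "apikey", "cZRcTZt7R3gnTt9l2C9YHXke0SNDAPJK"

requests.get(url, params={key: value})    # API Key as a Query:  ...?apikey=<value>
requests.get(url, headers={key: value})   # API Key as a Header: apikey: <value>
```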
Basic Authentication
Basic Authentication follows the HTTP protocol for providing a Username and Password when making an API request.
In basic HTTP authentication, a request header of the form "Authorization: Basic <credentials>" is included, where <credentials> is the Base64 encoding of "username:password".
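As a quick illustration, the sketch below builds the Basic Authentication header by hand with Python. The username, password, and URL are placeholders.

```python
# Sketch of constructing the Basic Authentication header: "Basic " + Base64("username:password").
import base64
import requests

username, password = "demo_user", "demo_pass"     # placeholders
encoded = base64.b64encode(f"{username}:{password}".encode()).decode()

requests.get("https://api.example.com/v1/me",
             headers={"Authorization": f"Basic {encoded}"})
# Equivalent shortcut: requests.get(url, auth=(username, password))
```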
Bearer Token
Bearer Token is an HTTP-based authentication scheme. The access token generated by the server in response to a login request is then included in the header of subsequent requests.
To generate a Bearer Token, you need:
User Name
Password
Token URL
Note: This authentication type is needed to access Astera APIs, and the request is sent as "application/json".
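The sketch below illustrates the general Bearer Token pattern with Python's requests library: a login call returns a token, which is then sent in the Authorization header. The URLs, credentials, and the "token" field name are hypothetical placeholders, not Astera's actual endpoints.

```python
# Sketch of the Bearer Token pattern: obtain a token from a login endpoint,
# then send it as "Authorization: Bearer <token>" on later requests.
import requests

login = requests.post(
    "https://server.example.com/api/account/login",   # Token URL (placeholder)
    json={"UserName": "admin", "Password": "secret"},  # sent as application/json
)
token = login.json().get("token")                      # placeholder response field

requests.get(
    "https://server.example.com/api/resource",
    headers={"Authorization": f"Bearer {token}"},
)
```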
API clients can enable the use of a private signed certificate to authenticate themselves when accessing APIs through mutual TLS. You can configure APIs to use a .pem or a .pfx certificate paired with a certificate key or password.
A client certificate contains information used to identify the client, including a digital signature, and it is imported for a specific domain. All SSL-enabled HTTPS requests matching the domain URL will authenticate using the installed client certificate.
All certificates used in authenticating API requests from the client are imported to Astera's Server and are included when an API request is sent. To import a client certificate for authenticating API requests:
Navigate to the Server tab on the main menu bar.
Right-click on the cluster node and select Client Certificates.
This opens the wizard to manage client SSL certificates.
Click on the import icon at the top left to add a certificate authenticating to a domain.
Importing a .pem certificate
Define the requested domain which will include this certificate.
Browse the .pem client certificate file obtained as a counterpart to the authenticating server certificate present on the API provider.
Provide the matching key file for the given client certificate.
Click Import.
Now, this certificate can be used with SSL-enabled authentication for API requests sent to the given domain.
Importing a .pfx certificate
Define the requested domain which will include this certificate.
Browse to the .pfx client certificate file obtained as a counterpart to the authenticating server certificate present on the API provider.
Enter the password for the certificate.
Click Import.
Now this certificate can be used with SSL-enabled authentication for API requests sent to the given domain.
Enabling SSL Certificate Authentication
Once the certificate has been imported for the respective domain, let’s see how to make an API request with SSL enabled.
You need to enable SSL verification to include the certificate when making an API call. To enable SSL, open the API Connection object which has the Base URL domain, and the authentication configured. To include the SSL certificate, check the option to Include Client SSL Certificate.
Click OK and preview the API Client to send a request.
This request now includes the certificate to validate the client on the mutual TLS authentication.
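For comparison, this is roughly what a mutual TLS request looks like outside Astera, using Python's requests library. The URL and certificate file paths are hypothetical placeholders.

```python
# Sketch of an HTTPS request that presents a client certificate for mutual TLS,
# analogous to checking Include Client SSL Certificate. Paths are placeholders.
import requests

requests.get(
    "https://api.example.com/v1/accounts",
    cert=("client_certificate.pem", "client_certificate.key"),  # PEM certificate + key
)
# A .pfx/.p12 bundle would first need to be converted to PEM (or loaded via a helper
# library such as requests-pkcs12), since requests itself expects PEM files.
```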
Note: To include the client certificate in the Oauth2 request from the API Connection, check the Include SSL Certificate option in the Additional Information window.
The Shared Parameters screen is where you can define query or header parameters to be shared across all clients using the same connection.
Name: The name of a Query or Header parameter can be defined here.
Parameter Location: This option defines whether the parameter has a Query location or a Header location.
Data Type: This option defines the data type of the parameter from a list of options.
The parameter values defined here will be inherited by all API clients using this connection unless overridden individually.
Once done, click Next and you will be led to the Config Parameters screen.
Here, config parameter values can be changed according to your application. Parameters not changed will use their default values.
Click Next, and you will be led to the General Options screen.
Here, you can add any Comments you wish. The remaining options for this object are disabled.
Click OK to close the window.
You have successfully configured the API Connection object.
Click on File in the main toolbar, hover over New, and select Dataflow from the drop-down menu.
Once the Dataflow is open, drag-and-drop the API Connection and API Client objects from the Toolbox onto the dataflow.
Note: The API Connection here can only be accessed within the scope of this dataflow.
Configure the API Connection object for the Base URL and Authentication.
Right-click on the API Client object and select Properties from the context menu.
A new API Client Properties window will open.
The Shared Connection dropdown list shows us the API Connection object present in the same dataflow.
You can now use this API Client object to make API calls within Astera.
Navigate to the main toolbar, click Project, hover over New, and select a project type.
Note: You can also open a previously existing project.
Locate the Project Explorer on the right, right-click on the project or one of its folders and select Add New Item from the context menu.
This will open a new window where a new SharedAction can be added to the project.
Within the SharedAction file, drag-and-drop the API Connection object from the Toolbox.
Note: The SharedAction file should only contain a single API Connection object.
Configure the API Connection object with the Base URL, Authentication, and Shared Parameters, and save the SharedAction file.
This API Connection can be used in any flow document contained in the same project.
Next, open a new dataflow within the project.
Drag-and-drop the API Client object onto the dataflow, right-click on it, and select Properties from the context menu.
A new window will open.
Here, you can see the name of the Shared Connection within the drop-down menu of the Properties option.
Note: Within the project, the shared API Connection can be accessed within any flow.
If shared connections with duplicate names exist in the project, only one will be shown and used.
If duplicate connections exist in the flow and the project, the flow connection will be given preference.
This concludes our discussion on the configuration and use of the API Connection object in Astera Data Stack.
Users can create and maintain custom API collections in case the API provider does not offer existing documentation for its APIs.
From the API Browser, open the import wizard and select Custom API as the API Import Source.
Next, provide a Name for your custom API and the Base URL of the API provider. Upon import, a new API shared connection (.sact) and a Custom-API (.capi) file will be created in the project.
Alternatively, the custom API can also point to an existing pre-configured connection from the project.
You can configure the API connection object in the shared connection file by providing valid authentication and defining parameters, if needed.
Once you are done configuring the connection object, the CAPI file will open in the API browser.
To add API requests to your custom CAPI file, click the Add Request icon from the top toolbar menu of the API browser.
Here, define the following request properties:
Request Name: This is used as the request name and description.
Resources: The unique request resource path, including any URI or path parameters, which is appended after the server's Base URL.
HTTP Method: Select the standard HTTP method to be used for this request.
The request will be added to the CAPI file in the API Browser. Repeat this process to add all the required requests in your CAPI file.
Once you have populated the requests in your CAPI file, it may look something like this in the API browser.
Note: You may have to include a URI parameter in the resource for some requests. Some API documentation displays URI parameters after a colon ( : ). However, you will have to replace the colon with curly brackets ( {} ) for the parameter to be treated as a URI parameter.
To configure the parameters, input/output layout, or pagination options for any request, right-click on it and choose the Edit Request option.
You can also configure and save the request properties by dragging and dropping.
Drag the request from the API browser to a flow designer.
Right-click on the API Client and select Properties. Make changes to the properties of the API client object.
To save the changes, simply drag and drop the client object back to the API browser from the flow designer.
Once you are done populating your CAPI file by configuring all request properties and authentication, click on the Save CAPI file icon on the top of the API browser to save your changes.
This will save all the configurations you have made including parameters, input/output body, and pagination settings to the request.
Sharing and adding the CAPI file to a new project
Fully configured CAPI files act as a connector for your API provider. If you want to add the CAPI file to another project, right-click on the CAPI file from the project explorer and click on copy full path.
Then open the other project, right-click on the folder you want to add the CAPI file to, and click on Add Existing Items.
A box will open. Paste the file path in the box next to the File name and click Open.
The CAPI file will be added to the project along with its corresponding .sact file.
This concludes the basic concepts of working with the API Browser in Astera.
Note: When a user imports an API definition, a shared connection file containing the Base URL and authentication type is automatically created within the project. To learn more about importing APIs in Astera Data Stack, refer to the article on importing APIs.
An eTag, also called an entity tag, is an HTTP response header field containing an identifier for the specific version or state of the resource at the time the request was served. This identifier helps differentiate between different versions of the resource and lets the client check whether its cached representation of the resource is still up to date.
Let’s try to understand what is meant by eTags and how these options work.
If the client wants to check whether its cached copy of a resource is still fresh and usable, it can send the eTag in the If-None-Match header field of the request. The server compares the client's eTag with the one it holds for the current version of the resource. If the eTags match, the server does not resend a representation of the resource in the response, implying that the client's cache is fresh and usable.
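A minimal sketch of this conditional GET pattern, using Python's requests library with a hypothetical URL, looks like this:

```python
# Sketch of eTag-based caching with a conditional GET: store the ETag from the first
# response, send it back in If-None-Match, and reuse the cache on 304 Not Modified.
import requests

url = "https://api.example.com/v1/files/42"        # placeholder endpoint

first = requests.get(url)
etag, cached_body = first.headers.get("ETag"), first.json()

second = requests.get(url, headers={"If-None-Match": etag})
if second.status_code == 304:
    body = cached_body        # resource unchanged; reuse the cached representation
else:
    body = second.json()      # resource changed; refresh the cache
```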
There are two major uses of eTags in API requests:
Data Caching
Concurrency Control.
Let’s investigate these uses one by one, starting with a use case for data caching.
So, we will make an API call to one of the endpoint operations of Box APIs. Here, we have a dataflow in which we are making an API call to fetch file details from one of the files on our Box account.
We will send a GET request to the /file/{fileid} resource with the help of the API Client and API Connection objects. We have configured the API Connection object in a shared action file. From the API documentation site, we can see that Box supports OAuth 2 authentication with the Authorization Code grant type. Hence, we have already generated our access token after providing the client credentials from our Box app.
Coming back to our flow, let’s open the properties of the API Client object. Here, we are using the shared action API connection object that is providing the base URL. We have specified the HTTP method of GET and provided the name of the resource. Here, the curly brackets specify that the path parameter of file id will be passed along the request to fetch the information on the file related to that file id. In our flow, we are providing the file id through the constant object.
From the API documentation of Box APIs, we can see that the If-None-Match header is supported for the endpoint we are calling in our dataflow.
Now, if we go to the Service Options screen of the API Client object, we can see a checkbox to enable eTags, which in turn reveals two further checkboxes:
Retrieve if None Match Header
Update Using If-Match Header.
We need to enable the eTag and the If-None-Match header checkbox.
When the request is sent to the server to fetch the file information for the first time via the GET API request, the response will be returned with an eTag value. This eTag value along with the response will be stored in the response caches at the client side.
The field of “Is cached response” in the response info node will be returned True because we are making the call for the first time and receiving the response from the server that will be cached.
In the future, if the client makes an API call again to fetch the file information, the eTag in response caches will be first compared with the latest eTag from the Server. So, for the consecutive API calls having the same cached eTag, we will see the “Is Cached Response” field as true.
In case the file has not been changed or updated and the cache is reusable, the server will send a Not Modified response, as we can see in our job trace. This means that the server does not have to send the requested information again in the response; instead, the client can use its cached response.
So, this is how eTags help prevent unnecessary downloading and retrieval of information, in turn saving the server's bandwidth and request processing time.
Let’s look at another use case of eTag related to concurrency control.
It is possible that more than one client is sending requests to update the same resource on the server. To prevent lost changes and to detect simultaneous updates, the client can send an eTag in the If-Match header field of the request. If another client updates the resource in the meantime, or the file is otherwise modified, the server compares the client's eTag with its own current one; if they don't match, the server prevents the client from overwriting the changes, ensuring that only the latest version of the resource gets updated.
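Sketched with Python's requests library and a hypothetical URL, this optimistic concurrency pattern looks like the following; a mismatch typically comes back as 412 Precondition Failed.

```python
# Sketch of optimistic concurrency control with If-Match: the update succeeds only if the
# client's ETag still matches the server's current version of the resource.
import requests

url = "https://api.example.com/v1/files/42"        # placeholder endpoint

current = requests.get(url)
etag = current.headers.get("ETag")

update = requests.put(url, json={"name": "renamed-file"}, headers={"If-Match": etag})
if update.status_code == 412:
    print("Resource was modified by someone else; fetch the latest version and retry.")
```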
Let’s try to understand it with the help of a use case.
We will make two update API calls to change the name of a file uploaded at our Box account. But, in between the two calls we will make some changes to the file at the server side and see how the eTags play their part in ensuring consistency.
Here, we have a dataflow in which we are making a PUT request to update the name of the file associated with the file ID “983656708053” on our Box account. From the API documentation, we can see that the If-Match header field is supported by the PUT /file/{fileid} endpoint of Box APIs, so in the API Client Service Options, we will enable the If-Match Header checkbox.
The API client first checks if an eTag and response corresponding to this endpoint URL already exists in the cache. In the job trace, we can see that there is no eTag in the response caches because we are making the update request for this file for the first time.
Behind the scenes, the client first makes a Get call to the same endpoint URL and stores the eTag and response in its cache. Next, a PUT request is sent to the same endpoint URL including the eTag received earlier as the value of the If-Match header.
The server processes the update request, and the returned eTag and response are cached, since the eTag received matches the most recent version of the resource on the server.
This concludes our discussion on the eTag request service options and how they help with response caching and maintaining concurrency control in Astera.
Let’s see what steps are required to import a Postman Collection to the API Browser.
Open an Integration Project.
Open the API Browser through View > Data Service > API Browser.
Click on the Import API option on the API Browser.
This will open the Import API window.
Select Postman collection from the API Import Source drop-down.
Browse and provide the path to the Postman Collection and click OK.
If there is already a Shared Connection available, then we can re-utilize it, instead of auto-generating a new one, by checking the Use Existing Connection checkbox.
Once the Postman Collection is successfully imported, it will populate the API Browser with the available endpoints.
Note: Postman recommends exporting collections in the v2.1 format. Therefore, Astera restricts imports to v2.1 Postman Collections only.
The Centerprise API file (.capi) and Shared Connection files will automatically generate and be saved in their respective folders.
Now, drag-and-drop any endpoint onto a flow designing artifact, i.e., a Dataflow, to consume it.
To import a Postman Collection to the API Browser successfully, we must follow certain conventions:
The Postman collection must include a variable, namely baseUrl. (This variable is not case sensitive)
Note: A collection in which the baseUrl variable contains a special character(s) will not be imported.
All other variables, except for the baseUrl, will be discarded.
During the import, the baseUrl variable defined in all the endpoints will be replaced with the Base Url text box value in the Shared Connection.
This means that the Shared Connection’s Base Url will be populated with the baseUrl variable’s Current Value that is defined under the Variable section in the collection.
All valid Postman Collections will be imported with pre-configured Shared Connections. These Shared Connections will have the same Authentication Type selected as in the collections i.e., API Key, Auth Code, Client Credentials, etc.
Note: Confidential data such as credentials are not imported, for security and protection.
Upon importing a Postman Collection, each endpoint’s configuration i.e., methods, resources, parameters, and request/response payloads will also be preserved.
All parameters with their respective default values are populated in the API Client’s Parameter window.
Note: Sensitive data such as the URI parameter value is not preserved for security.
The input and output layouts/payload are structured in the respective Input and Output Layout windows. Additionally, the sample text bodies used to generate the layouts are preserved in the Sample JSON Text window.
The API Browser provides a convenient option to import pre-built and pre-tested CAPI connectors directly from Astera's GitHub repository.
These connectors are carefully curated and include a comprehensive list of endpoints that have been thoroughly tested and configured for seamless consumer use. This option allows users to easily access and integrate these connectors into their projects, ensuring reliable and efficient connectivity with the associated APIs.
Additionally, Astera offers users an option to build their own custom API connectors. Please visit the documentation here.
To start, open the API Browser from View > Data Service > API Browser.
Select Import API.
This will open a new window.
From the API Import Source dropdown menu, select Custom Connectors.
Selecting this option will bring up a new interface on the same screen.
If we open the Connector drop-down menu, we can see a list of available CAPI Connectors.
For our use case, we will select the AgileCRM connector.
Astera will automatically create a shared action file and CAPI file in the project, as well as populate the API Browser with all the possible endpoints.
Once the connection is authorized, the endpoints can be used in various flows in accordance with the application. Just drag-and-drop any endpoint to a Dataflow and map any required inputs to use it.
This concludes the working of Custom CAPI Connectors in Astera.
HTTP redirection, also known as URL forwarding, allows an API to provide more than one URL location for a resource in the response. HTTP redirects usually happen due to temporary or permanent unavailability of the application, website, or pages, for example, because of server maintenance or re-organization of the URL links.
Redirect responses from the server have a 3xx series HTTP status code along with a Location header parameter that provides the URL to the resource’s new address.
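The sketch below follows such a redirect chain manually with Python's requests library, mirroring the Follow Redirect Calls and Redirect Limit options discussed later in this article. The URL is a hypothetical placeholder, and absolute Location URLs are assumed.

```python
# Sketch of following 3xx redirects manually, with an explicit redirect limit.
import requests

resp = requests.get("https://api.example.com/v1/account?id=42", allow_redirects=False)
redirect_limit = 3

while resp.is_redirect and redirect_limit > 0:
    new_location = resp.headers["Location"]            # the resource's new address
    resp = requests.get(new_location, allow_redirects=False)
    redirect_limit -= 1

print(resp.status_code)   # ideally 200 OK once the chain is exhausted
```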
In this use case, we have a GET API resource account that returns account details based on the provided ID Query parameter.
Upon previewing the API Client, a request is sent to fetch the account for the given ID. The response shows that the request was redirected, returning a 302 Found status code, which indicates that the resource is temporarily available at a different location.
This may be due to server maintenance or some other unforeseeable reason. We can see that the Location header parameter is received with the response too. The value of this header is the address of the alternative location that must be accessed to retrieve the required account details.
Let’s see how we can configure the API client properties to automatically follow any redirect responses to the new URL Location.
Right-click on the API Client and select properties. Next, navigate to the Service Options window. Here, there are multiple options available to configure the redirect call(s).
Follow Redirect Calls From 3xx Code Responses – This option allows auto-redirecting a 3xx HTTP response to the redirected location URL.
Redirect Authentication Information – This option allows forwarding all the authentication details along with the redirected call.
Redirect Limit – This option allows us to specify a limit to the number of redirected calls followed.
Let’s enable the redirect and authentication options while keeping the redirect limit as 1.
Note: By default, the Redirect Authentication Information and Redirect Limit options are disabled. Only on checking the Follow Redirect Calls From 3xx Code Responses option are they enabled for configuration.
Now, upon previewing the output, we can see that a 200 OK status response is received instead of a 302 Found. The Request URL field shows that the request was successfully auto-redirected to the new location.
Now, let’s execute the Dataflow.
Here, we can see the job traces show all steps of the redirected calls including how the authentication information was forwarded along with the request, what was the redirect limit, where the request was redirected to, and if the job executed successfully.
Let’s consider a scenario where the redirected API requires authentication, and we don’t send the authentication information along with the redirect call by unchecking the Redirect Authentication Information option from the Service Options window.
Upon executing the job, we can see that the request is redirected without the authentication information, and as a result, the server sends back a 401 Unauthorized error response.
Now, let’s consider a scenario where an API request hops through more than one redirected call.
The first redirect request returns a 3xx series response. We can see in the Job Trace that, on redirecting the request, we received a 307 Temporary Redirect response, indicating that the resource has temporarily moved again. As the Redirect Limit was set to 1, only one redirect call was sent by the API Client.
In such a situation, we need to follow all the redirect requests until a 200 OK response is received. For that, we can increase the Redirect Limit count.
For example, we will set the limit to 2 and send the request.
In the Job Progress window, we can see that two redirect calls have been exhausted, but we still received a 307 status response.
Let’s increase the limit to 3 and send the request.
Finally, a 200 OK response is received on the third redirect call.
This concludes how HTTP redirect calls are automated by the API Client in Astera.
In this article, we will be discussing various HTTP methods. We will see how HTTP requests can be made through the API Client object in Astera.
For our use cases, we have made use of the Petstore Open-API definition. We can import the API to the API Browser using its import URL.
Once done, it automatically establishes various pre-defined endpoints as API Client objects. They can then be dragged and dropped onto a dataflow for further configurations and transformations.
Note: When imported, a shared connection object will also be created containing the base URL and authentication details.
To learn more about importing a URL to the API Browser, click here.
First, drag-and-drop the Get a file’s metadata or Content by ID endpoint from the browser onto the Dataflow.
In this scenario, we want to get metadata for a file with fileID,
“184Gi7q9iPQyiR6lkG3bdSi5z3-9eeT-d”.
For this, we will pass the relevant fileID using a ConstantValue object.
To explore the API Client object for this method, right-click on the object’s header and select Properties.
This will open the API Client screen where the connection info of your API is defined.
The Shared Connection, Method, and Resource here are already configured. Notice that Resource consists of ‘files’ along with the ‘fileid’ URL parameter.
Click Next.
Here, the ‘fileid’ URL parameter follows from the defined resource.
For our use case, we will use this parameter to get the details of a file.
Click Next to proceed to the Output Layout screen, where you can view the Response Layout of your API. There are two ways in which you can generate the output layout if required.
The first one is by providing sample text by clicking the Generate Layout by providing Sample Text option.
The other way to do this is by running a request by clicking the Generate Layout by running Request option.
Click Next to proceed to the Pagination Options screen.
For our use case, we have selected None.
Click OK.
You can preview the data by right-clicking on the object and selecting Preview Output from the context menu.
As seen below, the GET request that was made, has fetched data according to the user application.
Now, let’s try creating a new file.
Drag-and-drop the POST method as an API Client object and open its properties.
We will pass the required parameters to the POST request object using a Variables object.
Now, right-click on the API Client object and select Preview Output.
You can see that the HTTPStatusCode is “200”, which means that the API has successfully carried out the action requested by the client.
Let’s verify it by making a GET request for the same FileId that we had posted earlier.
You can see that a GET request for FileID has returned the same information that we had posted.
Now, let’s try making a DELETE request.
For this, we will first make a GET request to check whether that file exists in the records before we try to delete this record.
We will pass a fileId using a ConstantValue object.
Right-click on the API Client object and select Preview Output.
It has fetched the details of the file with the given fileId, and the status shows that the file is available.
To delete this file record, we will drag and drop another DELETE API Client object onto the flow and configure its Properties according to the DELETE method.
Pass the fileId to the DELETE request object using a ConstantValue object.
Right-click on the API Client object and select Preview Output. You can see that it has returned HTTPStatusCode, “204”, which indicates successful execution.
Let’s verify it by making a GET request again, and check if the fileId, has been deleted.
Right-click on the API Client object and select Preview Output.
You can see that Astera has returned a 404 error, which means that there is no file found with the given fileId, and the file record has been successfully deleted via the Google Drive API.
Let us now look at the PUT HTTP Method.
Drag-and-drop the GET endpoint from the API Browser onto the Dataflow.
Right-click on the object and select Properties from the context menu.
Click Next, and the Parameters screen will appear.
For this use case, we will update the file with a fileId. Let’s define this ID in the Default Value field.
Click OK and preview the output by right-clicking on the object and selecting Preview Output
As you can see in the preview screen below, the GET method has retrieved the file Metadata by ID.
Now, drag-and-drop the relevant endpoint from the API Browser onto the Dataflow.
For our use case, we will be using this Patch object for the PUT method so we can update the ID.
Right-click on the object and select Properties from the context menu.
Our Shared Connection has already been defined. The HTTP Method is Put, and the Resource to update is a file.
Click Next, and you will be led to the Output Layout screen
We have defined the fileId here that we wish the resource to be updated to,
If required, an output can be generated by running a request using the available option.
Click OK, right-click on the object, and select Preview Output.
As you can see here, the fileId has been updated,
We will now preview the output of the GET object we have configured to verify whether the file has been updated.
As you can see, the value has been updated.
Let’s make a GET request to see what information is there in the File ID where we want to update something.
To make a GET request, drag-and-drop the GET API Client object onto the Dataflow.
Pass fileId ‘1pGLAWbY7zu1nYFjMFB5GmTjVK2kXGHP1’ to the id under the Parameters node in the API Client object using the Variables object.
Right-click the API Client object’s header and select Preview Output.
Here is what the output looks like:
Drag-and-drop the Update a file’s metadata API Client object to use the PATCH method.
Pass fileId ‘1pGLAWbY7zu1nYFjMFB5GmTjVK2kXGHP1’ and name, “Astera”, using a Variables object.
Right-click on the object’s header, and select Preview Output.
You can see that the HTTPStatusCode is 200, which means that the API has successfully carried out the PATCH request. Let’s verify it by making a GET request for the same fileId which we altered.
Right-click the APIClient object’s header and select Preview Output.
As you can see, the request has been successfully carried out and the file name has been updated.
This concludes our discussion on the HTTP method operations in Astera.
API logging is the process of keeping track of how an application programming interface (API) is being used.
It helps in understanding how often the API is being used, how long each request takes, and any errors that occur. API logging can be used for troubleshooting, monitoring performance, and identifying security threats.
Astera allows the user to enable API Logging. There are three types of logging that are offered in the tool:
Authentication Logs
Incoming Logs
Outgoing Logs
To view API Logs in Astera, right-click on the Server node in the Server Explorer.
Select API Logs.
This will open a new window.
On the left-hand side of the window, a list of the API calls that have been logged will appear.
Logs can be filtered based on a date range as well as type.
There are three types of logs that can be filtered.
Authentication Logs: These are logs that are created during authentication.
This option can be enabled in the API Connection object by selecting the Enable Authentication Logs checkbox.
Incoming Logs: These are the API calls that are made to all APIs deployed on the Astera Server from either a third-party application such as Postman or the Astera Client itself.
Note: Incoming API logs are enabled by default.
Outgoing Logs: These are the API calls that have been made from an API Client object present in the flows.
To enable Outgoing API call logging, open the Service Options: Request Options screen in the API Client object.
Select the checkbox at the bottom of the screen that says Enable API Logging.
It is unchecked by default.
Note: After enabling API logging for an API client, whenever an HTTP request is sent to the server through that specific client, the logs for that request will be saved in the API logs.
Note: If authentication logging is turned on but client logging is turned off, the API request itself will not be logged; only the fetching of a new access token will be logged.
Selecting this option is going to enable all outgoing API call logging.
We can see all outgoing API calls in the log, alongside whether they have ended in success or failure.
On the right-hand side of the screen, two tabs can be seen.
The Overview tab gives a list of details regarding the selected API in the log.
It gives various details including Client IP, Remote IP, Content-Type, Status, and much more of both the Request and the Response.
The Inspectors tab gives the raw information regarding the Request and Response of the selected API call within the log.
The purge frequency of API logs can be set in the Cluster Settings, accessible through the Server Explorer.
This concludes API Logging in Astera Data Stack.
The API Client now supports the text/xml content type, allowing seamless integration of XML-based data into API requests and responses. This enhancement enables users to interact effectively with APIs that require XML-formatted data or return XML responses.
With the text/xml content type in the Astera Client, you can send and retrieve data in XML format, exchanging data using XML-based representations.
XML, being a widely adopted format for hierarchical data structuring, provides a standardized approach for representing and exchanging information between various systems. With the text/xml content type in the API Client, users can make SOAP API requests in integration flows.
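For orientation, the sketch below shows what such a raw text/xml (SOAP) call looks like outside Astera, using Python's requests library. The endpoint, SOAP operation, and envelope body are hypothetical placeholders.

```python
# Sketch of a raw SOAP call with the text/xml content type. The endpoint and the
# envelope contents are hypothetical placeholders.
import requests

soap_envelope = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetEmployee xmlns="http://example.com/hr">
      <EmployeeId>1001</EmployeeId>
    </GetEmployee>
  </soap:Body>
</soap:Envelope>"""

resp = requests.post(
    "https://api.example.com/soap/EmployeeService",
    data=soap_envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8"},
)
print(resp.text)   # the XML response body
```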
To start, let’s make an API request using text/xml content type in the API Client, which allows you to send and retrieve data in XML format.
To start, drag-and-drop the API Connection and API Client objects onto the Dataflow.
Configure the API Connection with the Base URL and Authentication.
We are calling an SAP SuccessFactors API as an example.
Next, open the properties of the API Client object and select the configured shared connection from the drop-down.
Choose the POST method and define the resource to be appended after the base URL.
To define XML input and output, set the input and output content types to text/xml.
Proceed to the next screen to define any parameters.
Note: Since a SOAP (Simple Object Access Protocol) request usually does not have any parameters, we will skip this screen.
Click Next to navigate to the Input Layout screen.
On the Input Layout screen, click Generate Layout by Providing Sample Text and provide the raw XML content to be sent in the request.
Click Generate to automatically create the layout for the request, allowing you to provide default values or map fields from the preceding objects in the flow.
Click Next and you will be led to the Output Layout screen.
Here, there are two options to create the layout: by providing sample text or by running a request.
Right-click on the API Client object and select Preview Output.
In the Data Preview window, we can see the following result:
This concludes how to integrate XML-based SOAP API requests and responses for smooth data exchange with such APIs.
The Avaza API follows REST protocol with ‘OAuth2’ authentication. It allows you to access contacts, projects, tasks, invoices and taxes. In Astera, you can configure an Avaza API through a swagger definition using the Import API option in API Browser.
Let’s go over how we can authenticate an Avaza API in Astera.
Create an integration project by going to Project > New > Integration Project.
To import Avaza API in your Astera client, click on the following icon.
An Import API window will open. Here you will need to select your relevant import source. In this case, we will import using the Json/Yml Url source.
Base URL: https://api.avaza.com/swagger/docs/v1
You will see that all the APIs present on Avaza’s URL have been populated in the API Browser.
Now, you need to authenticate the Avaza APIs to be able to use them in your dataflow. Without authentication, you will get an error. To authenticate an API, go to the Project Explorer and double click on the API’s .sact file under the Shared Connection node.
The Avaza .sact file will open on the designer. Now, right-click the shared action file’s header and select Properties.
This will open the API Connection window where you can configure settings to authenticate Avaza API.
Avaza uses ‘OAuth 2’ authentication. In the ‘OAuth 2’ Security Type, select one from the following Grant Type options:
Authorization Code
Implicit
In this case, we will be using the ‘Authorization Code’.
Note: Login to your Avaza account and go to Settings > Developer Apps > Add OAuth App to generate the ClientID and Client Secret.
Auth Url: https://any.avaza.com/oauth2/authorize
Access Token Url: https://any.avaza.com/oauth2/token
Now, click Request token to generate an access token and refresh token for Avaza.
Note: As you click on Request Token, Avaza’s authorization app will open where you will be required to provide your credentials to be able to generate access token and refresh token to access Avaza.
After authentication, save the shared action file, and you are ready to use Avaza APIs in Astera.
This concludes authenticating the Avaza APIs in Astera.
Follow the steps below to learn how to authenticate Astera’s Server APIs.
Right-click on the server name in Server Explorer > Server Connections > DEFAULT > HTTPS://(ServerName):9260.
A wizard will appear with the Centerprise Server API Path. Click on the copy icon located at the bottom-left of the wizard to copy it.
A message will appear to confirm that the text has been copied successfully. Click OK.
Click the Import API option in the API Browser and paste the Astera Server API path in the URL box. Then click OK.
Note: Check the “Ignore certificate errors over HTTP/SSL” option to avoid any certification barriers.
A wizard will appear, notifying you about the created shared action file. Click Yes to set it up.
You can also click on the .sact file in Project Explorer to configure the authentication settings.
The API Browser will be populated with Astera’s Server APIs, which you can use in your dataflow.
Right-click on the Centerprise_Server object and select Properties.
This will open the API Connection screen. Select the Security Type as Bearer Token, as Astera Server APIs use Bearer Token authentication.
Provide the User Name, Password, and Token URL for Bearer Token. Then click Request Token to generate a token, and click OK. Press Ctrl+S to save changes in the shared action file.
Note: You will have to regenerate the token if the validity period has expired.
Now, drag-and-drop the /api/ServerInfo from the API Browser to make a GET request.
Right-click on the object’s header and select Preview Output.
This is what your output will look like:
This concludes working with Astera’s Server API.
Facebook provides HTTP-based APIs that can be used to extract data from, or load data to, Facebook. You can configure Facebook APIs for use in Astera using the ‘Custom API’ source in the REST API Browser (Beta).
To authorize a Facebook API in Astera, follow the steps below.
Go to this Url: https://developers.facebook.com/ and log in.
Note: If you have not created a developer account yet, you will need to register as a developer after signing in with your Facebook credentials.
Enter your Facebook account credentials to log in.
Go to My Apps > Create App to create an application.
Provide the Display Name for your application, and click Create App ID.
Once your application is created, it will show under the My Apps tab.
Click on the application you created (here, Centerprise) to open its dashboard.
Reference Url: https://developers.facebook.com/apps/217423066002800/dashboard/
Click on Settings > Basic to get the relevant credentials.
Reference Url: https://developers.facebook.com/apps/217423066002800/settings/basic/
Here you can see the App ID and App Secret. Save this information to use later for authentication.
To use Bearer Token authentication, go to Tools > Graph API Explorer.
Reference Url: https://developers.facebook.com/tools/explorer/
Click Generate Access Token and copy the token.
To access and try out different APIs, go to Tools > Graph API Explorer.
Reference Url: https://developers.facebook.com/tools/explorer/
Select an endpoint from the drop-down list.
Click Submit to see the results.
Import the API in Astera using the Import API option in the REST API Browser (Beta). Select Custom API as the API Import Source and provide a Name and Base Url.
Base Url: https://graph.facebook.com/
Now, you need to authenticate the Facebook APIs to use them in your dataflow. Without authentication, you will get an error. To authenticate an API, go to the Project Explorer panel and double click on the API’s .sact file under the Shared Connection node.
Facebook’s .sact file will open on the designer. Now, right-click on the shared action file’s header and select Properties. This will open the REST API Connection window, where you can configure the settings to authenticate Facebook’s API.
Facebook uses ‘OAuth 2’ authentication with Grant Type, ‘Authorization Code’.
Auth Url: https://www.facebook.com/dialog/oauth
Access Token Url: https://graph.facebook.com/oauth/access_token
Provide the ClientID and Client Secret that you saved earlier (the App ID and App Secret), then click Request Token to generate the access token for Facebook.
Note: As you click on Request Token, Facebook’s login window will open where you will have to provide your credentials to generate the access token to access Facebook API.
Save the shared action file after authentication and you are ready to use Facebook APIs in Astera.
This concludes authenticating the Facebook APIs in Astera.
To make an API call in Astera, an API Client object, along with its API Connection, needs to be configured.
First, drag-and-drop an API Connection object from the Toolbox and configure it in the dataflow. Alternatively, you can use an API Connection object in a shared action file within the scope of the project you are working with.
The API Connection object contains the Base URL, authentication details, and shared parameters for the API endpoint.
Next, let’s configure the API Client object.
First, drag-and-drop an API Client object from the Toolbox onto the dataflow.
Right-click on the API Client object’s header and select Properties.
The API Client screen will now open. Here, you will have to specify the following:
Shared Connection: Establish your API Client’s connection from this drop-down that lists all shared connections from within the flow as well as from the project.
HTTP Method: The HTTP request verb defines the operation you want to make on the API resource.
Resource: The resource of the API from which you want to make a request. This will be appended after the Base URL from the selected shared connection to form the complete endpoint. Any URI or path parameters must be included in the resource text enclosed in curly brackets, {} (see the composition sketch after the parameter descriptions below).
Input Content Type: This is the content-type header for the request payload, which defaults to application/json. The actual request payload layout can be defined in the input layout screen.
Output Content Type: This is the content type of the response payload, which defaults to application/json. The actual response payload layout can be defined in the output layout screen.
Note: For an unsupported type, a relevant pop-up notification will appear on-screen.
Click Next. A Parameters screen will appear.
Here you will have to specify the following:
Override Inherited Parameter: Check this to override any parameters previously defined and inherited from the shared connection.
Name: The name of your parameter.
Parameter Key: Since the Name column does not allow any special characters, the Parameter Key can be used to define an alternate name, including special characters, that replaces the Name in the API request.
Parameter Location: The parameter type such as Query, URI, and Header.
Data Type: Specify the data type of your parameter.
Format: Define the datatype format of the parameter’s value sent in the API request.
Plaintext: Check this box to disable URL encoding of the parameters when the request is sent. The parameters will be sent in plain text, or you can optionally encode parameter values manually using the URLEncode function from the Toolbox.
Default Value: The parameter’s value for which you want to make a request.
Note: Any values mapped to the input node of the object will take preference.
Click Next. An API Client Output Layout screen will now open.
Here, we will select Generate Layout by Running Request to build the response layout. Alternatively, you can build the layout manually or use a sample text.
Next, click OK.
Note: Prior to this screen, there will be an additional screen to configure an API Client input layout for the following methods: POST, PUT, and PATCH.
Once done, click Next, and you will be led to the Pagination Options screen.
Here, you can select the type of pagination that has been specified by the API providers. Astera offers the following pagination types.
When done, click Next, and you will be taken to the Service Options screen.
Request Options:
Request Delay: Delay time (in milliseconds) before sending a request.
Retry Count: Number of retry attempts to be made in case of a time-out error.
Retry Delay: The duration (in milliseconds) between each consecutive retry attempt.
Continue on Retry Failure: Check this to let the flow succeed even after all retries have failed.
Use Parallelism: Check this option to send requests in parallel, and specify the number of requests to be sent in parallel (maximum of 10).
Follow Redirect: Check to allow forwarding a 3xx response to the redirected URL.
Include Authentication: Check to include authentication in the redirected API call.
Redirect Limit: Number of allowed redirect calls from a request. -1 indicates no limit.
Keep Connection Alive: Check to keep the TCP connection open to reuse for all subsequent requests to the same server.
Enable E-Tags: To learn about E-Tags, click here.
Retrieval: Check this to enable e-tag-based caching for GET requests.
Updates: Check this to enable request concurrency control using e-tags for PUT, PATCH, or DELETE requests.
Response Options:
Ignore HTTP Status Codes: Selecting this option will show, and allow processing of, responses other than 2xx in the flow, which are otherwise treated as errors.
Include Content as String: Adds a field for serialized response content string in the Response-Info output node.
Include Response Headers: Adds all response headers as a collection in the Response-Info output node.
Include Raw Bytes: Adds a field for response content in the form of raw bytes in the Response-Info output node.
Click Next, and the Config Parameters screen will appear.
Config Parameters can enable the deployment of flows by eliminating hardcoded values and provide a dynamic way of changing multiple configurations with a simple value change.
Click OK, and the API Client object will be configured.
Now, right-click on the API Client object’s header, and select Preview Output.
The request has executed successfully: the HTTP status code is 200, which means the API Client has successfully carried out the GET request.
This concludes our discussion on making API calls with the API Client object in Astera Data Stack.
Pagination refers to managing the volume of records coming from a source. It divides the records into a discrete number of pages so that they are easier to process and consume.
Pagination is not supported by all APIs. For those that do support it, Astera offers four types of pagination.
This type of pagination requires two parameters to be specified by the user: a Limit and an Offset. The Limit specifies the number of records that you want to fetch in a one-page request, and the Offset specifies the number of records to skip before selecting records. (A minimal client-side sketch follows the option descriptions below.)
Offset Parameter: Select the offset parameter of the API that you are working with, as specified on the Parameters screen.
Initial Offset: The record index from which you want to start your pagination.
Limit Parameter: Select the limit parameter of the API that you are working with, as specified on the Parameters screen.
Limit: Number of records on a one-page request.
Number of Pages: The number of pages indicates the number of request iterations that you want to process. Each iterative request increments the offset by the limit value to fetch the next page of records.
Read Till End: Check this option if you want to fetch all the records. Selecting this will disable the ‘Number of Pages’ option, and all the records will be returned as requests are sent in a loop until no more data is found.
Repeating Item: This option is only enabled when you check the Read till end box. You can choose a repeating item or the collection node of the data from the output layout of the API client object. The repeating item helps the API client recognize the end of records, as whenever an empty response node is returned, the client stops sending further requests, and pagination ends.
This type of pagination uses a token returned by the server as a pointer to the next page of records. You can set a limit on the number of pages you want to process. (A minimal client-side sketch follows the option descriptions below.)
Cursor Field: Here, you can specify the field from the output layout which contains the cursor from the server response.
Cursor Parameter: Here, you can select the parameter to be used to send the cursor value received in the previous request of the API that you are working with, as specified on the Parameters screen. Alternatively, you can choose to send the cursor as an input body layout field by selecting the ‘Use Input Body Parameters’ checkbox.
Number of Pages: Here, you can specify the number of pages or the number of requests to be made iterating over the data set. Additionally, you can simply check the Read till End option if you want to fetch all records without specifying the number of pages.
This type is the same as Cursor pagination, except that it generates a URL instead of a token for every subsequent page.
Next URL Field: Here, you can specify the field from the response layout that contains the URL to fetch the next set of records.
Number of Pages: Here, you can specify the number of pages or requests you want to fetch, or you can simply check the Read Till End option if you want to fetch all records without specifying a page number limit.
In this type of pagination, you specify the range of page numbers that you would like to fetch. (A minimal client-side sketch follows the option descriptions below.)
Page Number Parameter: Here, you can specify the page number parameter of the API that you are working with, as specified on the Parameters screen.
Start Page Number: The page number from where you want to start fetching your output, or the lower limit.
End Page Number: The page number where you want to end.
Read till end: Check this option if you want to fetch all the available records. Selecting this will disable the End Page Number option and make requests until no data is returned.
Repeating item: This option is only enabled when you check the Read till end box. You will be required to choose a repeating item, which can be one of the collection nodes from the output layout of the API client object. The repeating item helps the API client recognize the end of records, as whenever an empty response node is returned, the client stops reading the response and the pagination ends.
This concludes our discussion of pagination for APIs in Astera.
The raw request and response preview features allow API developers to view the exact request and response payloads being exchanged between clients and servers in their APIs.
This feature provides a detailed look at the headers, body, parameters, and metadata of the HTTP request and response, which can help API developers debug issues, test APIs, and optimize performance. By using raw preview request and response capabilities, API developers can gain a deeper understanding of how their APIs are being used and troubleshoot issues quickly and efficiently.
Astera lets the user preview both the raw request and the raw response from the API Client object.
Drag-and-drop an API Client object and configure it.
For our use case, we have used an API Client making a GET Call to a resource.
Right-Click on the object and select Preview Raw Request.
This will show the raw request in the Raw Data Preview window.
As you can see, it has shown the HTTP method as well as the resource, host server details, and the Content-Type of the Request.
It even shows us tabs on the Request, Parameters, and Body.
To preview the raw response, right-click on the API Client object and select Preview Raw Response from the context menu.
This will generate a raw response in the Raw Data Preview window.
As you can see above, the raw response has been generated, which shows us the entire HTTP response in raw form. It even has tabs that show us the Parameters, body, and response info.
Curl is a command-line utility that can be used to send HTTP requests to APIs and retrieve the respective responses.
It allows API developers and testers to easily interact with APIs and perform tasks such as testing, debugging, and troubleshooting. Curl supports various HTTP methods such as GET, POST, PUT, and DELETE, and can handle HTTP headers, cookies, and authentication.
It is a simple, yet powerful tool that is widely used in API development and management.
Astera lets the user copy and view the CURL command from the Raw Data Preview window to help in comparing and debugging results from any external clients such as Windows command prompt or Postman.
Note: The Copy CURL Command option is available in the raw request preview.
This concludes Raw Preview and Copy CURL in Astera.
The ActiveCampaign API is structured around REST, HTTP, and JSON. You can make requests by using URL endpoints particular to a specific resource. The resources in ActiveCampaign are represented in JSON following a conventional schema. In Astera, you can configure an ActiveCampaign API using the Import API option present in the REST API Browser.
ActiveCampaign does not provide an Open API definition, so we will add requests manually using a Custom API in Astera.
To authorize an ActiveCampaign API in Astera, follow these steps:
Create an integration project in Astera.
Create a Custom API and provide Base Url.
Reference link for Base Url: https://developers.activecampaign.com/reference#url
Now, you need to authenticate the ActiveCampaign APIs to use them in your dataflow. Without authentication, you will get an error. To authenticate an API, go to the Project Explorer and double-click on the API’s .sact file under the Shared Connection node.
The ActiveCampaign .sact file will open in the designer. Now, right-click the shared action file’s header and select Properties.
ActiveCampaign uses an API Key as Security Type. Specify your Key and Value.
Key: API-Token
Value: {Token}
Click OK, and save the shared action file (.sact).
Add the methods that you want to use to the REST API Browser panel by adding requests, and you are ready to use the ActiveCampaign API in Astera.
This concludes authorizing the ActiveCampaign API.
The QuickBooks API is a RESTful API which allows you to read or write data to and from QuickBooks. It uses ‘OAuth 2’ authentication type. You can configure a QuickBooks API in Astera by using the Import API option present in the API Browser.
QuickBooks does not provide an Open API definition, so we will add requests manually using a Custom API in Astera.
We only need to follow steps from Development > Create and Configure an App from the following link:
Authentication steps: https://developer.intuit.com/app/developer/qbo/docs/build-your-first-app
The Redirect Url used in step 7 of the above link would, for Astera, be:
Redirect Url for Astera Server: http://{Server_Name}:8050/
Note: Save the ClientID and Client Secret to use later for authentication in Astera Data Stack.
Create an integration project in Astera.
Create a Custom API and provide a Name and Base Url.
Base Url (Sandbox): https://sandbox-quickbooks.api.intuit.com
Base Url (Production):
Now, you need to authenticate QuickBooks APIs to be able to use them in your dataflow. Without authentication, you will get an error. To authenticate an API, go to the Project Explorer and double click on the API’s .sact file under the Shared Connection node.
The QuickBooks .sact file will open in the designer. Now, right click on the Shared Action file’s header and select Properties.
QuickBooks uses the ‘OAuth 2’ Security Type with Grant Type ‘Authorization Code’.
Auth Url: https://appcenter.intuit.com/connect/oauth2
Token Url: https://oauth.platform.intuit.com/oauth2/v1/tokens/bearer
ClientID: {ClientID}
Client Secret: {Client_Secret}
Scope: {Scope}
State: {State}
Additional Info: You can limit the authorization by listing only those permissions that you want to access from QuickBooks in Astera.
Note: While working with QuickBooks APIs, it is necessary to specify Scope and State to generate the access token.
Click OK, and save the Shared Action file (.sact).
Add the methods that you want to access to the REST API Browser by adding requests, and you are ready to use QuickBooks APIs in Astera.
This concludes authorizing the QuickBooks API.
Astera’s Server APIs use Bearer Token authentication.
Login
Method: POST
Endpoint: https://{servername}:{portno}/api/account/login
In this case: https://LOCALHOST:9261/api/account/login
Resource: /api/account/login
Request Body
Note: The request body is in JSON format.
Status
Method: GET
Endpoint: https://LOCALHOST:9261/api/Job/{jobID}/Status
Resource: /api/Job/{jobID}/Status
Required Parameter
Description: This method fetches the status of a job for the given job ID. A few of the possible response statuses are listed below, followed by a minimal client sketch:
Unknown
Invalid
NotStarted
Queued
Initializing
Running
Completed
Square API is an HTTP-based API that follows REST standards. It allows you to manage the resources of your Square account by making requests to URLs representing those resources. You can configure Square API for use in Astera by providing its swagger definition using the Import API option in the API Browser.
After you have created the application in Square, go to Manage Properties.
Now, go to the OAuth properties in the Production tab. Here, you have to provide the Redirect URL for the authorization callback.
Note: Save the Application ID and secret to use later for authentication in Astera.
Reference Link: https://developer.squareup.com/docs/oauth-api/overview
Now create an integration project in Astera. Also, import the following swagger definition in the API Browser:
Base Url: https://raw.githubusercontent.com/square/connect-api-specification/master/api.json
Go to the Square’s shared action file’s (.sact) properties to authenticate it in Astera.
You can authorize Square API by using Security Type OAuth 2 or Bearer Token. In this example, we will be authorizing using OAuth 2.
Set its Security Type as ‘OAuth 2’ and Grant Type as ‘Authorization Code’. Provide the Application ID and secret that you saved in step 2.
Click on Request Token to get the access token to Square API.
Access Token Url: https://connect.squareup.com/oauth2/token
Additional Info: You can modify your authorization by mentioning the names of only those permissions that you want to access from your Square account in Astera. In case you want to access all of them, leave the settings at default.
Once you get the access token, save the Shared Action file and you are ready to use Square API in Astera.
This concludes authenticating the Square API in Astera.
You can learn all about the configuration and usage of the API Connection object.
Auth Url:
Fields | Field Location | Data Type | Description | JSON Format |
User | Body | String | Username of the Astera user account. | { "user": "admin", "password": "Admin123", "rememberMe": true } |
Password | Body | String | Password of the Astera user account. |
RememberMe | Body | Boolean | Binary value. Pass “1” for yes and “0” for no. |
Parameter | Parameter Location | Data Type | Description |
JobID | URI | Integer | Job ID of the flow that has already been executed in Astera. |
Note: Client Secret, Access Token and API Key are to be generated by the user, and will be unique for every application. The values specified below are just for example.
Authentication Type: API Key
Import API: https://raw.githubusercontent.com/adafruit/io-api/gh-pages/v2.json
Authentication: API-KEY
Key: X-AIO-Key
Value: aio_UTqF73klycqdLWpbp0wLl7RHKV25
UserName: [Enter your user name]
FeedKey: [Enter your feed key]
Adafruit Login Page: https://accounts.adafruit.com/users/sign_in
Email: [Enter your login email]
Password: [Enter your password]
Authentication Type: OAuth 2, Authorization Code
Import API: https://api.avaza.com/swagger/docs/v1
Authentication: oauth2 (Access token will be valid for 1 day)
Token URL: https://any.avaza.com/oauth2/token
Auth URL: https://any.avaza.com/oauth2/authorize
ClientId: [Enter client ID]
Client Secret: c1d4b723790f0e24d0b2df68ebde613e9533
Avaza Login Page: https://any.avaza.com/account/login
Email: [Enter your email]
Password: [Enter your password]
Authentication Type: Bearer Token
Base URL: https://api.box.com/2.0
Authentication: Bearer Token (Access token will be valid for 1 hr)
Token: 1IVYyDgfDPyWpoXe9c4RMOt7tmtiB75q
Steps to generate access token:
Page: https://app.box.com/developers/console/app/984015/configuration
Email: [Enter your login email]
Password: [Enter password]
Click Generate Developer Token to generate access token
API Reference: https://developer.box.com/en/reference
Authentication Type: OAuth 2, Authorization Code
Base URL: https://graph.facebook.com/
Auth URL: https://www.facebook.com/dialog/oauth
Access Token URL: https://graph.facebook.com/oauth/access_token
Client ID: 217423066002
Client Secret: d7d8969c6ea31bf117f04768b63bb
Credentials to use when using ‘Request Token’
Email address: [Enter your email]
Password: [Enter your password]
Authentication Type: Bearer Token
Base URL: https://www.googleapis.com/drive/v3
Authentication: Bearer Token (Token will be valid for an hour)
Token: ya29.Il_AB6CICAcAQD6lKoQCW3K2DO_enBd3be5G2Vvd0hZ3Q8US4eHL-PEOS1qRD7zzSEN3t_qb_eNqWzZS3zsXP_FcAHA9TSoy-tDpsWv0RnWRledPhZqRt79f9X
API Reference: https://developers.google.com/drive/api/v3/reference
Steps to generate access token:
Go to https://developers.google.com/oauthplayground/
Select the APIs you want to authorize and click Authorize APIs.
On the next screen, provide your credentials.
Email: [Enter your login email]
Password: [Enter your password]
Now click Exchange authorization code for tokens to generate access token.
Authentication Type: API Key
Import API: https://api.doc.nextauth.com/api/swagger.json
Authentication: API-KEY
KEY: [Enter API Key]
VALUE: J5znqilK_qUt65iQyy9W2Q
Help link: https://api.doc.nextauth.com/
Authentication Type: API Key
API key to be passed as a query parameter
JSON File: http://www.omdbapi.com/swagger.json
Steps to generate API Key:
Open
Select Account Type, ‘FREE.’
Enter your email address.
Enter your first name and last name.
Describe in a few words your purpose of using this service.
Click Submit.
You will get the API Key in your email with a link to activate it. Click on this link and the key will be activated.
Authentication Type: Bearer Token
Import API: https://raw.githubusercontent.com/square/connect-api-specification/master/api.json
Authentication: Bearer Token
Token: EAAAEPXVtza2Utrx-GJ90Az4sCQ_NLbLYOKANVFmJiPGJ1Z6B-eJgZ-2V1
Use this API to import: https://raw.githubusercontent.com/
Note: This looks like an issue with Square Connect’s documentation because the ‘Import API’ option does not work.
Authentication Type: Basic Authentication
Username: [Enter username or login email]
Password: [Enter password]
This article is intended to provide a brief user guide to Astera users on how they can use Astera Server APIs to perform some commonly used actions without using the Astera Client.
The document covers the following operations:
Configure Astera Server properties
Configure Astera Server license
Deploy user projects on Astera Server
Schedule/execute the jobs
For the use cases below, Astera Integration Server is installed on a Virtual Machine and no Astera Client is going to be used to perform operations. However, some images of the Astera Client have been used for description purposes. We will use the Postman client as a third-party tool to send API requests to the Astera Server to perform several tasks.
‘POST /api/account/login’
This API returns the bearer token that can be used to make calls to the Astera Server. It also returns some more information about the user.
In this section, we are going to use the ‘POST /api/Server/Config’ API to change the Server Profile of the Astera Server.
Here, we can see that a Server Profile named DEFAULT has been selected in our Server Properties.
To change the profile shown in the image above, we must provide a JSON body in the POST request containing the Repository Database information along with the name of the desired Server Profile.
Note: Server Profile 2 was created in advance to be used in this request.
Once we have provided the relevant information, we can send the request.
We can see that a 200 OK success response is received in the image above.
We can verify using the Astera Client that we have successfully configured our server with the desired profile ServerProfile2.
Note: To see how the JSON request body is structured or what fields are required for a successful POST request, send a request to the GET /api/Server/Config API. This API will return the configured server’s properties in the response.
In this section, we are using the POST /api/License/Key API that allows the user to change the server license. We will also use the POST /api/License/Activate API to activate the license.
Go to Server > Configure > Enter License Key. Here, we can see a user ‘Nisha’ of the organization ‘G’ is registered with an active license.
To change the license key and register the user, we must provide the User, Organization, and License Key in the JSON request body. Refer to the image below.
Note: The license key was taken in advance for this demo.
Once we have provided the relevant information, we can send the request.
In response, we can see that a 200 OK success status is received indicating that the license key has been changed.
Now, go to Server > Configure > Enter License Key, and notice the license properties. We can see a user Nisha of the organization Astera with a different/new license key has been registered.
3.3.3. Activate license API Example (/api/License/Activate)
However, the license is not activated yet. To activate the license, we can simply send a request to the /api/License/Activate endpoint. After sending the request, we can see that a 200 OK response is received.
Note: To receive a 200 OK response, we must send an empty body; otherwise, the request will result in an error. Also, this API only activates the license online.
Go to Server > Configure > Enter License Key again. Here, we can see the status of the license is activated now.
No parameters are required; send an empty body here.
Configuration of cluster settings is important before proceeding with the project deployment.
To successfully deploy an archive file (User project .car file) using the APIs, a user must perform the following prerequisites:
Upload the archive file (User project .car file) to the deployment directory.
Upload the config file to the deployment directory. (optional)
There are two methods to do this; let’s see each in action.
a. Example of using ‘POST /api/UploadFile’
The user can upload the config and .car files to the deployment directory using APIs. There is a possibility that a user might delete or move the config or .car file from their local machine. To avoid any issues, it is recommended to first upload these files to the deployment directory.
To upload the file, use the ‘POST /api/UploadFile’ API. In this API, we must provide two query parameters:
FileTypes: The extension of the target file, e.g., cfg for config files, car for archive files.
TargetFileName: Here, we define the target file’s name e.g., Testing.
Next, we need to configure Request Body for this API. For the request body, select the form-data content type, select the Key type as File and provide the desired archive file (.car file) in the Value. Click Send.
Note: The archive project file (.car file) was created in advance for this demo.
Here, we can see that a 200 OK response has been received, along with the file path of the archive file in the deployment directory. Please copy this path down, as we will need it when creating the deployment.
Similarly, we can upload the config file to the deployment directory using this api/UploadFile endpoint.
Parameter description of /api/UploadFile
b. Using the ‘POST api/UploadCarFile’
An archive file (.car file) can also be uploaded to the deployment directory via another API i.e., ‘POST api/UploadCarFile’.
In this API, we do not need to specify the query parameters. Simply select the archive file (.car file) in the body and click Send. Here, we receive the archive file path in the response.
The uploaded files can also be seen in the deployment directory.
Parameter description of api/UploadCarFile
3.5.2. Creating the Project Deployment
3.5.2.1. Using POST /api/Deployment example
Now, let’s proceed to the deployment creation. To create a deployment, we must use the ‘POST /api/Deployment’ API.
In this API’s request body, we must provide information such as:
Relevant archive (.car) and config (optional) file paths (both local and deployment directory file paths)
The deployment’s name, its ID, and its activation state, etc.
After defining the body, we can click Send.
Here, we can see a 200 OK response has been received indicating that a deployment has been created.
Open the client and go to Server > Deployment Settings > Deployment. In the deployment window, we can see the archive file has successfully been deployed.
Please note the following:
To create a new deployment, we must provide the field “Id” as 0. We should also provide a unique deployment “Name” i.e., not the same name as an already existing deployment. Otherwise, the request will result in a 400 Bad Request error.
If we provide a non-zero “Id” field e.g., Id = 7, the server will consider this request as an update request, and if a deployment with ID 7 already exists on the server it will be modified/updated.
3.5.2.2. Parameter description of /api/Deployment
3.5.3. Post Deployment Modification
The ‘POST /api/Deployment’ API can also be used to modify an existing deployment. In this API’s Request Body, details of an existing deployment are required.
In this scenario, we want to update the name of the above-created DeploymentTesting. However, we do not have its details available.
So, to gather the details, we first use the ‘GET api/Deployments’ API to fetch the info of the existing deployment. Then, we copy the deployment Id, UpdateDtTm, and CreatedDtTm fields from the response.
Note: This GET API returns information for all the deployments on the server. Since we desired to modify only one deployment DeploymentTesting, we copied these highlighted fields only.
Now, in the POST Request’s Body, let’s change the deployment Name to DeploymentTesting_Modified and replace the values of Id, UpdateDtTm, and CreatedDtTm fields with the values copied from the GET response.
Let’s send the request.
A 200 OK response is received. Now, go to Server > Deployment Settings > Deployment, in the deployment window, we can see the modified deployment name.
Note: Each time we update an existing deployment, the UpdateDtTm field is modified as well. Therefore, we must always send a GET api/Deployments request first to fetch the details of the deployment, and then use the received details as the body of the POST request to successfully modify the deployment. Using an invalid (past time) UpdateDtTm value will give a 400 Bad Request error.
Let’s proceed to learn how we can schedule jobs on the server using APIs.
In this section, we are scheduling the previously created deployment using the ‘POST api/Schedule’ API.
This API’s Request Body requires the schedule configuration information, i.e., Schedule Name, Schedule Type, Frequency, Activate State, Server Info, etc.
Let’s create the schedule called Schedule_Testing, which runs daily, with schedule type deployment, an active state as True, etc.
Sending the request shows a 200 OK status response.
If we go to Server > Job Schedules, we can see in the scheduler window that a schedule called Schedule_Testing has been created, with Schedule Job Id 4, Schedule Type Deployment, and a daily Frequency.
Note: Like the deployment POST API, this POST api/Schedule endpoint can also be used for modification of existing schedules.
Each time we update an existing schedule the UpdateDtTm field is modified as well. Therefore, we always have to send a GET api/Schedules or a GET /api/Schedule/:scheduleId request first to fetch the details and then use the received details as the body for the POST request to successfully modify a schedule.
Note: Using an invalid (past time) UpdateDtTm value will give a 400 Bad Request error.
AWS Signature authentication is the process of verifying the authenticity of requests made to Amazon Web Services (AWS) using the AWS Signature method.
This authentication process involves calculating a digital signature for each request using the requester’s access key and secret access key, along with details about the request being made. AWS verifies the signature against the user’s access credentials and grants access to the requested resources if the signature is valid.
The AWS Signature authentication method ensures that requests are securely transmitted and that only authorized users can access AWS resources.
Astera lets the user configure an API Connection with AWS Signature as an authentication type.
Drag-and-drop an API Connection object from the Toolbox onto a Dataflow.
Right-Click on the object and select Properties from the context menu.
This will open a new window:
Base URL: Here, you can specify the base URL of the API, which will be prepended as a common path to all API endpoints sharing this connection. A Base URL usually consists of the scheme, hostname, and port of the API web address.
Timeout (msec): Specify the duration, in milliseconds, to wait for the API server to respond before giving a timeout error.
Include Client SSL Certificate: Selecting this option includes any client SSL certificate that is needed for authentication.
Enable Authentication Logs: Selecting this checkbox will allow the client to generate authentication logs when the API connection has been configured.
Define the Base URL and select AWS Signature from the security type.
Selecting it will make the following options available.
Access Key: The unique access key provided to the AWS user for authentication.
Secret Key: The corresponding unique secret key provided to the AWS user for authentication.
AWS Region: The region from where the API connection is being made, set by the admin.
Service Name: The name of the AWS service being used in the API Connection.
Note: While the Access Key and Secret Key are unique to each user, the AWS Region and Service Name are common among a group of users.
Once the fields have been filled, click OK, and the API Connection will be configured.
This API Connection can then be used in an API Client object to make API Calls to the resource.
Drag-and-drop an API Client object and configure it with the API Connection.
Preview the output of the API Client object.
As you can see, the response has returned a ‘200 OK’ status.
This concludes the configuration and testing of the AWS Signature Authentication in Astera.
NTLM (NT LAN Manager) authentication is a Microsoft proprietary authentication protocol used to authenticate users in a Windows-based network.
It provides secure authentication by using a challenge-response mechanism, where the server sends a challenge to the client, and the client sends a response that is encrypted using a hash of the user’s password.
NTLM authentication is used in various Microsoft products, including Windows, Internet Explorer, and Microsoft Office.
Astera also offers the ability to use NTLM authentication when establishing an API connection.
To start, drag-and-drop the API Connection object from the Toolbox onto a Dataflow.
Right-click on the object and select Properties from the context menu.
This will open a new window,
Base URL: Here, you can specify the base URL of the API, which will be prepended as a common path to all API endpoints sharing this connection. A Base URL usually consists of the scheme, hostname, and port of the API web address.
Timeout (msec): Specify the duration, in milliseconds, to wait for the API server to respond before giving a timeout error.
Include Client SSL Certificate: Selecting this option includes any client SSL certificate that is needed for authentication.
Enable Authentication Logs: Selecting this checkbox will allow the client to generate authentication logs when the API connection has been configured.
Fill in the Base URL and open the Security Type drop-down menu,
For our use case, we have deployed an API on IIS Manager on another machine, and we will send a request to access that API.
Select NTLM as the authentication type.
This will give us the following options,
Username: The same username that is used to log in to Windows.
Password: The password associated with Windows login credentials.
Note: NTLM authentication establishes API connections using a challenge-response mechanism. When sending an API request, Astera sends a hashed version of the user’s credentials (username and password) to the server, which sends back a random challenge. Astera then mixes this challenge with the user’s password and sends back a hashed value for verification. Access is granted if the validation is successful.
Click OK and the API Connection object will be configured with NTLM Authentication.
This API Connection can then be used in API Client objects to make API calls to the server and receive appropriate responses in return.
Drag-and-drop an API Client object onto the dataflow and select the shared connection that was defined.
Note: The Resource will be ‘/’ since our entire address has been defined in the Base URL.
Click OK and preview the output of the API Client object.
As we can see in our data preview window, the request has been sent successfully and the response has returned as ‘200 OK’.
This concludes working with and configuring the NTLM Authentication in Astera.
The Astera Data Stack provides you with the flexibility to execute your jobs through a third-party tool, without using the Astera client. Let’s learn how to achieve this in the article below.
In this use case, we have our Astera client on a local machine and a server installed on a virtual machine. Instead of using the Astera client, we will use Postman as a third-party tool to send REST requests to the server in order to execute the job.
The workflow document in Astera consists of a Variables object, a FileTransferTask object and a RunDataflow object.
We will pass the name of the file that we want to download and process to the FileTransferTask from the Variables object. The Variables object takes an input from the REST call sent through Postman, and passes it to FTP to download the file with that name. We then pass the file path of the downloaded file to the RunDataflow object.
In the following section, we will cover a step-by-step overview of how you can achieve this.
We will make the first API call for logging into the Astera server to generate an access token. Provide the following credentials in the request body and click on Send.
User: admin
Password: Admin123
RememberMe: 1
The Astera server will provide you with an access token in response.
In the second step, we will send the path of the file that we want to download from FTP, in the form of a string, to the Variables object.
In the parameters:
ActionName: Variables
Name of the object present inside the workflow to which the name of the file will be passed
Parameters: sourceFilePath
The name of the input variable field inside the workflow
Value: [file path of the file that you want to download]
The value of the input variable field inside the workflow
As soon as you send this API request, Astera will provide you with a jobID that you can use to get the job status.
In the third step, we will make a GET call to fetch the job’s status by providing the job ID.
This is what Astera’s response would look like.
This concludes accessing Astera’s server APIs through a third-party tool.
A sample server config JSON body is provided for reference.
The JSON payload sample for the cluster configuration is attached.
A sample deployment JSON body is attached for reference.
A sample JSON is attached for reference.
A Postman collection containing all the server APIs discussed in this article is attached.
| https://localhost:9261/api/account/login | Description |
| --- | --- |
| { |  |
| "User":"admin", | Username of the user trying to login (default is "admin") |
| "Password":"Admin123", | Password of the user trying to login (default is "Admin123") |
| "RememberMe":1 | This parameter takes 1 or 0, indicating True or False. |
| } |  |
|  | Description |
| --- | --- |
| { |  |
| "configParameters": { | This section is recommended to be left at its defaults. |
| "instrumentationOn": false, | Instrumentation slows down a server's processing capacity, but adds more logging for visibility. Only set to "true" when you want to debug an issue with the server or a job running on that server. |
| "purgeChunkSize": 10000, | This is the number of records the server will try to delete at once when purging old job and event history from the cluster database. |
| "purgeCommandTimeoutSeconds": 600, | This is how long the server should wait for the SQL command to complete before giving up on a purge operation. |
| "purgeWindowStartHour": 0, | This is the beginning of the 24-hour window in which the server can attempt to purge old job and event history from the cluster database. |
| "purgeWindowEndHour": 24 | This is the end of the 24-hour window in which the server can attempt to purge old job and event history from the cluster database. |
| }, |  |
| "serverDbInfo": { |  |
| "port": 1433, | SQL Server database port on which the instance is running |
| "protocol": "http", | Protocol for connecting |
| "serviceName": "RepositoryDWB", | Name of the database |
| "authenticationType": "SqlServerAuth", | Type of authentication: SQL Server Authentication or Windows logon |
| "connectionTimeOut": 15, | Timeout when trying to connect to the instance |
| "commandTimeOut": 90, | Timeout when executing a command on the SQL Server instance |
| "dataProvider": "SqlServer", | Database provider name. Either SqlServer or PostgreSQL |
| "server": "localhost", | Database instance host server |
| "database": "RepositoryDWB", | Name of the database |
| "isRepository": true, | This option is used to tell the server that this connection is pointing to an Astera repository. Always set this to true. |
| "schema": "", | Schema of the database; by default it is 'dbo' |
| "user": "sa", | Username for logging in to the SQL Server |
| "password": "Astera123" | Password for logging in to the SQL Server |
| }, |  |
| "port": 9261, | The port the Astera Integration Server is running on. By default it is 9261, unless specified during installation. |
| "serverProfile": "DEFAULT", | Astera Integration Server profile configuration. Users can create different profiles for different servers; in a profile, the user can set the max job count and other administrative properties. |
| } |  |
|  | Description |
| --- | --- |
| { |  |
| "LicenseKeyRegistrationModel": |  |
| { |  |
| "user":"TEST", | Username you want to register the product with. |
| "organization":"TEST", | Organization name you want to register the product with. |
| "key":"TEST" | License Key provided by Astera Software. |
| } |  |
| } |  |
|  | Description |
| --- | --- |
| { |  |
| "id": 1, | This is the identifier for the settings object in the repository database. Should always be 1. |
| "name": "DEFAULT", | This is the name we want to give to our cluster (server group); by default it is set to 'DEFAULT'. |
| "sendErrorInfoToAstera": true, | Allow sending anonymous usage and error data to Astera. |
| "purgeJobInfoAfter": 7, | The number of days before job info is purged (removed from the repository database). It will no longer be available in the Job Monitor after it is purged. |
| "purgeEventInfoAfter": 7, | The number of days before server event info is purged (removed from the repository database). It will no longer be available in the Server Monitor after it is purged. |
| "stagingDirectory": { | Sets the staging directory in this section |
| "path": "C:\\Staging" | The staging directory path; this is where the Astera Integration Server keeps files related to a deployment. |
| }, |  |
| "deploymentDirectory": { | Sets the deployment directory in this section |
| "path": "C:\\Deployment" | The deployment directory path; this is where the Astera Integration Server keeps a local copy of deployment archive files (.car) and their configuration files (.cfg). |
| }, |  |
| "pauseAllServers": false, | This parameter takes a Boolean value to turn the feature for pausing Astera Integration Servers on or off. |
| "pauseServersFrom": null, | Start time when servers pause. Applies only if pauseAllServers is true. |
| "pauseServersTo": null, | End time when servers pause. Applies only if pauseAllServers is true. |
| "clientAndServersShareTheSameFileSystem": false | False if the client and the server do not share the same file system. This applies to any scenario where the client and server do not exist on the same machine or network. |
| } |  |
| Query Parameters | Description |
| --- | --- |
| FileTypes: Cfg | The extension of the target file, e.g., Car or Cfg. There are two files related to a deployment: the .car file and the .cfg file. .car files are archive files that contain a snapshot of the project when it was generated, and .cfg files are configuration files containing the values of the parameters used in the project. |
| TargetFileName: Testing | The name with which the uploaded file will be saved |
| Request Body (content-type form-data) | Description |
| --- | --- |
| ArchiveFile | In this parameter, attach the archive or configuration file you wish to upload to the deployment directory. |
| Request Body (content-type form-data) | Description |
| --- | --- |
| Archive | In this parameter, attach the archive or configuration file you wish to upload to the deployment directory. |
|  | Description |
| --- | --- |
| { |  |
| "userArchiveFilePath": "C:\\Project_ArchiveFile.car", | The path of the source .car file used for deployment. It is not used anywhere at runtime. |
| "clusterArchiveFilePath": "C:\\Testing.Car", | The archive file's (.car file's) deployment directory path, i.e., the path to the archive file that was uploaded to the deployment directory. |
| "userConfigFilePath": "", | The path of the source .cfg file used for deployment. It is not used anywhere at runtime. |
| "clusterConfigFilePath": "", | The config file's (.cfg file's) deployment directory path, i.e., the path to the configuration file that was uploaded to the deployment directory. The config file is optional. |
| "encryptFiles": true, | Set to true or false to turn encryption of the configuration (.cfg) file on or off. |
| "comment": "Modifying by reducing the body", | User comment (i.e., description) attached to a deployment. |
| "id": 0, | Deployment ID of the deployment in case of modification. Use zero for new deployments. |
| "name": "DeploymentTesting" | Name of the deployment; this should be a unique name. |
| } |  |
|  | Description |
| --- | --- |
| { |  |
| "archiveStartItem": "C:\\DataflowSample.Df", | Initial artifact to run from the deployment. Omit if not using an archive. |
| "schedule": { |  |
| "dailyScheduleBase": { |  |
| "startTime": "2022-09-06T04:53:46.2573662-07:00", | Date and time when this scheduled job should first start. Since this is a daily schedule, the job will repeat every day at the time given here. |
| "typeName": "Astera.Core.DailyScheduleEveryDay" | Type of the schedule, i.e., daily, weekly, etc. |
| }, |  |
| "typeName": "Astera.Core.DailySchedule" | Type of the schedule, i.e., daily, weekly, etc. |
| }, |  |
| "traceLevel": "Job", | Set this parameter to 'Job' if you want to track the job in the Job Monitor; otherwise, this job's progress will not be tracked. |
| "skipIfRunning": false, | Set this to true if you want to skip the run when the same schedule's last run is still queued or running. |
| "isActive": true, | Activates/deactivates the schedule. |
| "jobOptions": { |  |
| "usePushdownOptimization": false, | Set this to true if you want to run the scheduled job in 'Pushdown mode'. |
| }, |  |
| "filePathResolved": { | This section sets the file path of the archive or flow file. |
| "path": "C:\\SampleConfigFile\\test.car" | Path to the .car, .df, or .wf file, depending on the 'isFile' parameter. |
| }, |  |
| "deploymentId": 1, | Deployment ID when using an archive deployment, i.e., when 'isFile' is set to false. This can be found under Server > Deployment. |
| "isFile": false, | Set this to true if pointing to a dataflow or workflow file directly; if using a deployment, set this to false. |
| "name": "test" | Name of the scheduled job. |
| } |  |