
Automate replication of relational sources into a transactional data lake with Apache Iceberg and AWS Glue



Organizations have chosen to build data lakes on top of Amazon Simple Storage Service (Amazon S3) for many years. A data lake is the most popular choice for organizations to store all their organizational data generated by different teams, across business domains, from all different formats, and even over history. According to a study, the average company is seeing the volume of their data grow at a rate that exceeds 50% per year, usually managing an average of 33 unique data sources for analysis.

Teams often try to replicate thousands of jobs from relational databases with the same extract, transform, and load (ETL) pattern, which requires a lot of effort to maintain the job states and schedule the individual jobs. The approach described in this post helps teams add tables with few changes and also maintains the job status with minimal effort. This can lead to a huge improvement in the development timeline and make jobs easy to track.

In this post, we show you how to easily replicate all your relational data stores into a transactional data lake in an automated fashion with a single ETL job using Apache Iceberg and AWS Glue.

Solution architecture

Data lakes are usually organized using separate S3 buckets for three layers of data: the raw layer containing data in its original form, the stage layer containing intermediate processed data optimized for consumption, and the analytics layer containing aggregated data for specific use cases. In the raw layer, tables are usually organized based on their data sources, whereas tables in the stage layer are organized based on the business domains they belong to.

This post provides an AWS CloudFormation template that deploys an AWS Glue job that reads an Amazon S3 path for one data source of the data lake raw layer, and ingests the data into Apache Iceberg tables on the stage layer using AWS Glue support for data lake frameworks. The job expects tables in the raw layer to be structured the way AWS Database Migration Service (AWS DMS) ingests them: schema, then table, then data files.

This solution uses AWS Systems Manager Parameter Store for table configuration. You should modify this parameter to specify the tables you want to process and how, including information such as the primary key, partitions, and the associated business domain. The job uses this information to automatically create a database (if it doesn't already exist) for every business domain, create the Iceberg tables, and perform the data loading.

Finally, we can use Amazon Athena to query the data in the Iceberg tables.

The following diagram illustrates this architecture.

Solution architecture

This implementation has the following considerations:

  • All tables from the data source must have a primary key to be replicated using this solution. The primary key can be a single column or a composite key with more than one column.
  • If the data lake contains tables that don't need upserts or don't have a primary key, you can exclude them from the parameter configuration and implement traditional ETL processes to ingest them into the data lake. That's outside the scope of this post.
  • If there are additional data sources that need to be ingested, you can deploy multiple CloudFormation stacks, one to handle each data source.
  • The AWS Glue job is designed to process data in two phases: the initial load that runs after AWS DMS finishes the full load task, and the incremental load that runs on a schedule and applies change data capture (CDC) files captured by AWS DMS. Incremental processing is performed using an AWS Glue job bookmark.

There are nine steps to complete this tutorial:

  1. Set up a source endpoint for AWS DMS.
  2. Deploy the solution using AWS CloudFormation.
  3. Review the AWS DMS replication task.
  4. Optionally, add permissions for encryption and decryption or AWS Lake Formation.
  5. Review the table configuration on Parameter Store.
  6. Perform initial data loading.
  7. Perform incremental data loading.
  8. Monitor table ingestion.
  9. Schedule incremental batch data loading.

Prerequisites

Before starting this tutorial, you should already be familiar with Iceberg. If you're not, you can get started by replicating a single table following the instructions in Implement a CDC-based UPSERT in a data lake using Apache Iceberg and AWS Glue. Additionally, set up the following:

Set up a source endpoint for AWS DMS

Before we create our AWS DMS task, we need to set up a source endpoint to connect to the source database:

  1. On the AWS DMS console, choose Endpoints in the navigation pane.
  2. Choose Create endpoint.
  3. If your database is running on Amazon RDS, choose Select RDS DB instance, then choose the instance from the list. Otherwise, choose the source engine and provide the connection information either through AWS Secrets Manager or manually.
  4. For Endpoint identifier, enter a name for the endpoint; for example, source-postgresql.
  5. Choose Create endpoint.

Deploy the solution using AWS CloudFormation

Create a CloudFormation stack using the provided template. Complete the following steps:

  1. Choose Launch Stack:
  2. Choose Next.
  3. Provide a stack name, such as transactionaldl-postgresql.
  4. Enter the required parameters:
    1. DMSS3EndpointIAMRoleARN – The IAM role ARN for AWS DMS to write data into Amazon S3.
    2. ReplicationInstanceArn – The AWS DMS replication instance ARN.
    3. S3BucketStage – The name of the existing bucket used for the stage layer of the data lake.
    4. S3BucketGlue – The name of the existing S3 bucket for storing AWS Glue scripts.
    5. S3BucketRaw – The name of the existing bucket used for the raw layer of the data lake.
    6. SourceEndpointArn – The AWS DMS endpoint ARN that you created earlier.
    7. SourceName – The arbitrary identifier of the data source to replicate (for example, postgres). This is used to define the S3 path of the data lake (raw layer) where data will be stored.
  5. Don't modify the following parameters:
    1. SourceS3BucketBlog – The bucket name where the provided AWS Glue script is stored.
    2. SourceS3BucketPrefix – The bucket prefix name where the provided AWS Glue script is stored.
  6. Choose Next twice.
  7. Select I acknowledge that AWS CloudFormation might create IAM resources with custom names.
  8. Choose Create stack.

After approximately 5 minutes, the CloudFormation stack is deployed.

Review the AWS DMS replication task

The AWS CloudFormation deployment created an AWS DMS target endpoint for you. Because of two specific endpoint settings, the data will be ingested as we need it on Amazon S3.

  1. On the AWS DMS console, choose Endpoints in the navigation pane.
  2. Search for and choose the endpoint that begins with dmsIcebergs3endpoint.
  3. Review the endpoint settings:
    1. DataFormat is specified as parquet.
    2. TimestampColumnName will add the column last_update_time with the date of creation of the records on Amazon S3.

AWS DMS endpoint settings

The deployment also creates an AWS DMS replication task that begins with dmsicebergtask.

  1. Choose Replication tasks in the navigation pane and search for the task.

You will see that the Task Type is marked as Full load, ongoing replication. AWS DMS will perform an initial full load of existing data, and then create incremental files with changes performed to the source database.

On the Mapping rules tab, there are two types of rules:

  • A selection rule with the name of the source schema and tables that will be ingested from the source database. By default, it uses the sample database provided in the prerequisites, dms_sample, and all tables with the keyword %.
  • Two transformation rules that include in the target files on Amazon S3 the schema name and table name as columns. This is used by our AWS Glue job to know to which tables the files in the data lake correspond.

To learn more about how to customize this for your own data sources, refer to Selection rules and actions.

AWS mapping rules

Let's change some configurations to finish our task preparation.

  1. On the Actions menu, choose Modify.
  2. In the Task Settings section, under Stop task after full load completes, choose Stop after applying cached changes.

This way, we can control the initial load and incremental file generation as two different steps. We use this two-step approach to run the AWS Glue job once for each step.

  1. Under Task logs, choose Turn on CloudWatch logs.
  2. Choose Save.
  3. Wait about 1 minute for the database migration task status to show as Ready.

Add permissions for encryption and decryption or Lake Formation

Optionally, you can add permissions for encryption and decryption or Lake Formation.

Add encryption and decryption permissions

If the S3 buckets used for the raw and stage layers are encrypted using AWS Key Management Service (AWS KMS) customer managed keys, you need to add permissions to allow the AWS Glue job to access the data:

Add Lake Formation permissions

If you're managing permissions using Lake Formation, you need to allow your AWS Glue job to create your domain's databases and tables through the IAM role GlueJobRole.

  1. Grant permissions to create databases (for instructions, refer to Creating a Database).
  2. Grant SUPER permissions to the default database.
  3. Grant data location permissions.
  4. If you create databases manually, grant permissions on all databases to create tables. Refer to Granting table permissions using the Lake Formation console and the named resource method or Granting Data Catalog permissions using the LF-TBAC method according to your use case.

After you complete the later step of performing the initial data load, make sure to also add permissions for consumers to query the tables. The job role becomes the owner of all the tables created, and the data lake admin can then perform grants to additional users.

Review table configuration in Parameter Store

The AWS Glue job that performs the data ingestion into Iceberg tables uses the table specification provided in Parameter Store. Complete the following steps to review the parameter store that was configured automatically for you. If needed, modify it according to your own needs.

  1. On the Parameter Store console, choose My parameters in the navigation pane.

The CloudFormation stack created two parameters:

  • iceberg-config for job configurations
  • iceberg-tables for table configuration
  1. Choose the parameter iceberg-tables.

The JSON structure contains information that AWS Glue uses to read data and write the Iceberg tables on the target domain:

  • One object per table – The name of the object is created using the schema name, a period, and the table name; for example, schema.table.
  • primaryKey – This should be specified for every source table. You can provide a single column or a comma-separated list of columns (without spaces).
  • partitionCols – This optionally specifies partition columns for target tables. If you don't want to create partitioned tables, provide an empty string. Otherwise, provide a single column or a comma-separated list of columns to be used (without spaces).
  1. If you want to use your own data source, use the following JSON code and replace the text in CAPS from the template provided. If you're using the sample data source provided, keep the default settings:
{
    "SCHEMA_NAME.TABLE_NAME_1": {
        "primaryKey": "ONLY_PRIMARY_KEY",
        "domain": "TARGET_DOMAIN",
        "partitionCols": ""
    },
    "SCHEMA_NAME.TABLE_NAME_2": {
        "primaryKey": "FIRST_PRIMARY_KEY,SECOND_PRIMARY_KEY",
        "domain": "TARGET_DOMAIN",
        "partitionCols": "PARTITION_COLUMN_ONE,PARTITION_COLUMN_TWO"
    }
}
  1. Choose Save changes.
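Before saving a custom configuration, it can help to validate the JSON locally. The following is a minimal sketch (not part of the deployed solution) that checks the structure the job expects: one schema.table key per object, a non-empty primaryKey, and comma-separated column lists without spaces.

```python
import json

def validate_table_config(param_value: str) -> list:
    """Return a list of problems found in an iceberg-tables parameter value."""
    problems = []
    config = json.loads(param_value)
    for key, spec in config.items():
        if key.count(".") != 1:
            problems.append(f"{key}: expected schema.table format")
        if not spec.get("primaryKey"):
            problems.append(f"{key}: primaryKey is required")
        for field in ("primaryKey", "partitionCols"):
            if " " in spec.get(field, ""):
                problems.append(f"{key}: {field} must not contain spaces")
    return problems

sample = '{"dms_sample.mlb_data": {"primaryKey": "mlb_id", "domain": "sports", "partitionCols": ""}}'
print(validate_table_config(sample))  # an empty list means the configuration looks valid
```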

Perform initial data loading

Now that the required configuration is complete, we ingest the initial data. This step includes three parts: ingesting the data from the source relational database into the raw layer of the data lake, creating the Iceberg tables on the stage layer of the data lake, and verifying the results using Athena.

Ingest data into the raw layer of the data lake

To ingest data from the relational data source (PostgreSQL if you're using the sample provided) into our transactional data lake using Iceberg, complete the following steps:

  1. On the AWS DMS console, choose Database migration tasks in the navigation pane.
  2. Select the replication task you created, and on the Actions menu, choose Restart/Resume.
  3. Wait about 5 minutes for the replication task to complete. You can monitor the tables ingested on the Statistics tab of the replication task.

AWS DMS full load statistics

After a few minutes, the task finishes with the message Full load complete.

  1. On the Amazon S3 console, choose the bucket you defined as the raw layer.

Under the S3 prefix defined on AWS DMS (for example, postgres), you should see a hierarchy of folders with the following structure:

  • Schema
    • Table name
      • LOAD00000001.parquet
      • LOAD0000000N.parquet

AWS DMS full load objects created on S3

If your S3 bucket is empty, review Troubleshooting migration tasks in AWS Database Migration Service before running the AWS Glue job.

Create and ingest data into Iceberg tables

Before running the job, let's navigate the script of the AWS Glue job provided as part of the CloudFormation stack to understand its behavior.

  1. On the AWS Glue Studio console, choose Jobs in the navigation pane.
  2. Search for the job that starts with IcebergJob- and a suffix of your CloudFormation stack name (for example, IcebergJob-transactionaldl-postgresql).
  3. Choose the job.

AWS Glue ETL job review

The job script gets the configuration it needs from Parameter Store. The function getConfigFromSSM() returns job-related configurations such as the source and target buckets from which the data needs to be read and written. The variable ssm_param_table_values contains table-related information like the data domain, table name, partition columns, and primary key of the tables that need to be ingested. See the following Python code:

# Main application
args = getResolvedOptions(sys.argv, ['JOB_NAME', 'stackName'])
SSM_PARAMETER_NAME = f"{args['stackName']}-iceberg-config"
SSM_TABLE_PARAMETER_NAME = f"{args['stackName']}-iceberg-tables"

# Parameters for job
rawS3BucketName, rawBucketPrefix, stageS3BucketName, warehouse_path = getConfigFromSSM(SSM_PARAMETER_NAME)
ssm_param_table_values = json.loads(ssmClient.get_parameter(Name=SSM_TABLE_PARAMETER_NAME)['Parameter']['Value'])
dropColumnList = ['db', 'table_name', 'schema_name', 'Op', 'last_update_time', 'max_op_date']
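The body of getConfigFromSSM() is not shown in the post. Assuming the iceberg-config parameter stores the bucket names and warehouse path as a flat JSON object (the key names below are assumptions for illustration), the parsing half could look roughly like this sketch; the actual script would first fetch the value with ssmClient.get_parameter(Name=SSM_PARAMETER_NAME)['Parameter']['Value']:

```python
import json

def parseJobConfig(param_value):
    """Parse the job-level configuration stored in Parameter Store.

    Key names are assumptions for illustration; the deployed script
    may use different ones.
    """
    config = json.loads(param_value)
    return (
        config["rawS3BucketName"],
        config["rawBucketPrefix"],
        config["stageS3BucketName"],
        config["warehousePath"],
    )

# Hypothetical parameter value for demonstration
example = ('{"rawS3BucketName": "my-raw-bucket", "rawBucketPrefix": "postgres", '
           '"stageS3BucketName": "my-stage-bucket", "warehousePath": "s3://my-stage-bucket/warehouse"}')
print(parseJobConfig(example))
```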

The script uses an arbitrary catalog name for Iceberg that is defined as my_catalog. This is implemented on the AWS Glue Data Catalog using Spark configurations, so a SQL operation pointing to my_catalog will be applied on the Data Catalog. See the following code:

catalog_name = "my_catalog"
errored_table_list = []

# Iceberg configuration
spark = SparkSession.builder \
    .config('spark.sql.warehouse.dir', warehouse_path) \
    .config(f'spark.sql.catalog.{catalog_name}', 'org.apache.iceberg.spark.SparkCatalog') \
    .config(f'spark.sql.catalog.{catalog_name}.warehouse', warehouse_path) \
    .config(f'spark.sql.catalog.{catalog_name}.catalog-impl', 'org.apache.iceberg.aws.glue.GlueCatalog') \
    .config(f'spark.sql.catalog.{catalog_name}.io-impl', 'org.apache.iceberg.aws.s3.S3FileIO') \
    .config('spark.sql.extensions', 'org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions') \
    .getOrCreate()

The script iterates over the tables defined in Parameter Store and performs the logic for detecting whether the table exists and whether the incoming data is an initial load or an upsert:

# Iteration over tables stored on Parameter Store
for key in ssm_param_table_values:
    # Get table data
    isTableExists = False
    schemaName, tableName = key.split('.')
    logger.info(f'Processing table : {tableName}')

The initialLoadRecordsSparkSQL() function loads initial data when no operation column is present in the S3 data files. AWS DMS adds this column only to Parquet data files produced by the continuous replication (CDC). The data loading is performed using the INSERT INTO command with SparkSQL. See the following code:

sqltemp = Template("""
    INSERT INTO $catalog_name.$dbName.$tableName ($insertTableColumnList)
    SELECT $insertTableColumnList FROM insertTable $partitionStrSQL
""")
SQLQUERY = sqltemp.substitute(
    catalog_name=catalog_name,
    dbName=dbName,
    tableName=tableName,
    insertTableColumnList=insertTableColumnList[:-1],
    partitionStrSQL=partitionStrSQL)

logger.info(f'****SQL QUERY IS : {SQLQUERY}')
spark.sql(SQLQUERY)
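The post doesn't show how insertTableColumnList is built. A plausible reconstruction (for illustration only; the helper name is an assumption) concatenates every DataFrame column not in dropColumnList with a trailing comma, which would explain the insertTableColumnList[:-1] slice in the template substitution above:

```python
def buildColumnList(df_columns, drop_columns):
    """Build the comma-terminated column list used by the INSERT template.

    Reconstruction for illustration, not the exact code from the job
    script: each kept column is appended with a trailing comma, which
    the caller strips with [:-1].
    """
    insertTableColumnList = ""
    for col in df_columns:
        if col not in drop_columns:
            insertTableColumnList += f"{col},"
    return insertTableColumnList

cols = ["id", "name", "Op", "last_update_time"]
dropColumnList = ["db", "table_name", "schema_name", "Op", "last_update_time", "max_op_date"]
print(buildColumnList(cols, dropColumnList)[:-1])  # id,name
```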

Now we run the AWS Glue job to ingest the initial data into the Iceberg tables. The CloudFormation stack adds the --datalake-formats parameter, adding the required Iceberg libraries to the job.

  1. Choose Run job.
  2. Choose Job Runs to monitor the status. Wait until the status is Run Succeeded.

Verify the data loaded

To confirm that the job processed the data as expected, complete the following steps:

  1. On the Athena console, choose Query Editor in the navigation pane.
  2. Verify that AwsDataCatalog is selected as the data source.
  3. Under Database, choose the data domain that you want to explore, based on the configuration you defined in the parameter store. If using the sample database provided, use sports.

Under Tables and views, we can see the list of tables that were created by the AWS Glue job.

  1. Choose the options menu (three dots) next to the first table name, then choose Preview Data.

You can see the data loaded into the Iceberg tables.

Amazon Athena review initial data loaded

Perform incremental data loading

Now we start capturing changes from our relational database and applying them to the transactional data lake. This step is also divided into three parts: capturing the changes, applying them to the Iceberg tables, and verifying the results.

Capture changes from the relational database

Because of the configuration we specified, the replication task stopped after running the full load phase. Now we restart the task to add incremental files with changes into the raw layer of the data lake.

  1. On the AWS DMS console, select the task we created and ran before.
  2. On the Actions menu, choose Resume.
  3. Choose Start task to start capturing changes.
  4. To trigger new file creation on the data lake, perform inserts, updates, or deletes on the tables of your source database using your preferred database administration tool. If using the sample database provided, you could run the following SQL commands:
UPDATE dms_sample.nfl_stadium_data_upd
SET seating_capacity = 93703
WHERE team = 'Los Angeles Rams' AND sport_location_id = '31';

UPDATE dms_sample.mlb_data
SET bats = 'R'
WHERE mlb_id = 506560 AND bats = 'L';

UPDATE dms_sample.sporting_event
SET start_date = current_date
WHERE id = 11 AND sold_out = 0;
  1. On the AWS DMS task details page, choose the Table statistics tab to see the changes captured.
    AWS DMS CDC statistics
  2. Open the raw layer of the data lake to find a new file holding the incremental changes inside every table's prefix, for example under the sporting_event prefix.

The file with changes for the sporting_event table looks like the following screenshot.

AWS DMS objects migrated into S3 with CDC

Notice the Op column at the beginning, identified with an update (U). Also, the second date/time value is the control column added by AWS DMS with the time the change was captured.

CDC file schema on Amazon S3

Apply changes on the Iceberg tables using AWS Glue

Now we run the AWS Glue job again, and it will automatically process only the new incremental data files because the job bookmark is enabled. Let's review how it works.

The dedupCDCRecords() function performs deduplication of records, because multiple changes to a single record ID could be captured within the same data file on Amazon S3. Deduplication is performed based on the last_update_time column added by AWS DMS, which indicates the timestamp of when the change was captured. See the following Python code:

def dedupCDCRecords(inputDf, keylist):
    IDWindowDF = Window.partitionBy(*keylist).orderBy(inputDf.last_update_time).rangeBetween(-sys.maxsize, sys.maxsize)
    inputDFWithTS = inputDf.withColumn('max_op_date', max(inputDf.last_update_time).over(IDWindowDF))
    
    NewInsertsDF = inputDFWithTS.filter('last_update_time=max_op_date').filter("op='I'")
    UpdateDeleteDf = inputDFWithTS.filter('last_update_time=max_op_date').filter("op IN ('U','D')")
    finalInputDF = NewInsertsDF.unionAll(UpdateDeleteDf)

    return finalInputDF
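To see what dedupCDCRecords() keeps, here is the same logic expressed in plain Python on a list of dicts (an illustration only; the job uses the Spark version above): for each key, retain the row carrying the latest last_update_time, then keep it if its operation is an insert, update, or delete.

```python
def dedup_cdc_records(records, key_cols):
    """Keep only the latest change per key, mirroring dedupCDCRecords().

    Plain-Python illustration of the Spark logic: track the record with
    the max last_update_time per key, then keep rows whose operation is
    I, U, or D.
    """
    latest = {}
    for rec in records:
        key = tuple(rec[c] for c in key_cols)
        if key not in latest or rec["last_update_time"] > latest[key]["last_update_time"]:
            latest[key] = rec
    return [rec for rec in latest.values() if rec["op"] in ("I", "U", "D")]

changes = [
    {"id": 1, "op": "I", "last_update_time": "2023-01-01 10:00:00", "bats": "L"},
    {"id": 1, "op": "U", "last_update_time": "2023-01-01 10:05:00", "bats": "R"},
]
print(dedup_cdc_records(changes, ["id"]))  # only the later update (bats='R') survives
```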

On line 99 of the job script, the upsertRecordsSparkSQL() function performs the upsert in a similar way to the initial load, but this time with a SQL MERGE command.
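The exact MERGE statement is not shown in the post, but based on the INSERT template used for the initial load, upsertRecordsSparkSQL() plausibly builds something like the following (the alias names, join condition, and delete handling are assumptions for illustration; Iceberg's Spark extensions support MERGE INTO with UPDATE SET * and INSERT *):

```python
from string import Template

# Illustrative reconstruction of the MERGE built by upsertRecordsSparkSQL();
# the real job script may differ in details such as delete handling.
sqltemp = Template("""
    MERGE INTO $catalog_name.$dbName.$tableName AS t
    USING updateTable AS s
    ON $joinCondition
    WHEN MATCHED AND s.Op = 'D' THEN DELETE
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")
SQLQUERY = sqltemp.substitute(
    catalog_name="my_catalog",
    dbName="sports",
    tableName="mlb_data",
    joinCondition="t.mlb_id = s.mlb_id")
print(SQLQUERY)
```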

Review the applied changes

Open the Athena console and run a query that selects the changed records on the source database. If using the provided sample database, use one of the following SQL queries:

SELECT * FROM "sports"."nfl_stadium_data_upd"
WHERE team = 'Los Angeles Rams' AND sport_location_id = 31
LIMIT 1;

Amazon Athena review cdc data loaded

Monitor table ingestion

The AWS Glue job script is coded with simple Python exception handling to catch errors while processing a specific table. The job bookmark is saved after each table finishes processing successfully, to avoid reprocessing tables if the job run is retried for the tables with errors.

The AWS Command Line Interface (AWS CLI) provides a get-job-bookmark command for AWS Glue that gives insight into the status of the bookmark for each table processed.

  1. On the AWS Glue Studio console, choose the ETL job.
  2. Choose the Job Runs tab and copy the job run ID.
  3. Run the following command on a terminal authenticated for the AWS CLI, replacing <GLUE_JOB_RUN_ID> on line 1 with the value you copied. If your CloudFormation stack is not named transactionaldl-postgresql, provide the name of your job on line 2 of the script:
jobrun=<GLUE_JOB_RUN_ID>
jobname=IcebergJob-transactionaldl-postgresql
aws glue get-job-bookmark --job-name $jobname --run-id $jobrun

In this solution, when a table's processing causes an exception, the AWS Glue job doesn't fail immediately. Instead, the table is added to an array that is printed after the job is complete. In such a scenario, the job is marked as failed after it tries to process the rest of the tables detected on the raw data source. This way, tables without errors don't have to wait until the user identifies and solves the problem on the conflicting tables. The user can quickly detect job runs that had issues using the AWS Glue job run status, and identify which specific tables are causing the problem using the CloudWatch logs for the job run.

  1. The job script implements this feature with the following Python code:
# Performed for every table
        try:
            # Table processing logic
        except Exception as e:
            logger.info(f'There is an issue with table: {tableName}')
            logger.info(f'The exception is: {e}')
            errored_table_list.append(tableName)
            continue
        job.commit()
if len(errored_table_list):
    logger.info(f'Total number of errored tables: {len(errored_table_list)}')
    logger.info(f'Tables that failed during processing: {", ".join(errored_table_list)}')
    raise Exception('***** Some tables failed to process.')

The following screenshot shows how the CloudWatch logs look for tables that cause errors during processing.

AWS Glue job monitoring with logs

Aligned with the AWS Well-Architected Framework Data Analytics Lens practices, you can adopt more sophisticated control mechanisms to identify and notify stakeholders when errors appear in the data pipelines. For example, you can use an Amazon DynamoDB control table to store all tables and job runs with errors, or use Amazon Simple Notification Service (Amazon SNS) to send alerts to operators when certain criteria are met.
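As one possible sketch of the SNS option (the topic ARN, subject line, and message format are assumptions, not part of the provided solution), a small helper could publish the errored table list at the end of the job. The client is passed in as a parameter, which keeps the helper easy to test without AWS access:

```python
def notify_failed_tables(sns_client, topic_arn, errored_table_list):
    """Publish an alert listing the tables that failed, if any.

    sns_client is expected to be a boto3 SNS client, for example
    boto3.client('sns'); it is injected so the helper can be exercised
    with a stub. Returns None when there is nothing to report.
    """
    if not errored_table_list:
        return None
    message = (
        f"{len(errored_table_list)} table(s) failed during Iceberg ingestion: "
        + ", ".join(errored_table_list)
    )
    return sns_client.publish(
        TopicArn=topic_arn,
        Subject="AWS Glue Iceberg ingestion errors",  # assumed subject line
        Message=message,
    )
```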

Schedule incremental batch data loading

The CloudFormation stack deploys an Amazon EventBridge rule (disabled by default) that can trigger the AWS Glue job to run on a schedule. To provide your own schedule and enable the rule, complete the following steps:

  1. On the EventBridge console, choose Rules in the navigation pane.
  2. Search for the rule prefixed with the name of your CloudFormation stack followed by JobTrigger (for example, transactionaldl-postgresql-JobTrigger-randomvalue).
  3. Choose the rule.
  4. Under Event Schedule, choose Edit.

The default schedule is configured to trigger every hour.

  1. Provide the schedule you want to run the job on.
  2. Additionally, you can use an EventBridge cron expression by selecting A fine-grained schedule.
    Amazon EventBridge schedule ETL job
  3. When you finish setting up the cron expression, choose Next three times, and finally choose Update Rule to save the changes.

The rule is created disabled by default to allow you to run the initial data load first.

  1. Activate the rule by choosing Enable.

You can use the Monitoring tab to view rule invocations, or check directly on the AWS Glue Job Run details.

Conclusion

After deploying this solution, you have automated the ingestion of your tables on a single relational data source. Organizations using a data lake as their central data platform usually need to handle multiple, sometimes even tens of, data sources. Also, more and more use cases require organizations to implement transactional capabilities on the data lake. You can use this solution to accelerate the adoption of such capabilities across all your relational data sources to enable new business use cases, automating the implementation process to derive more value from your data.


About the Authors

Luis Gerardo Baeza is a Big Data Architect in the Amazon Web Services (AWS) Data Lab. He has 12 years of experience helping organizations in the healthcare, financial, and education sectors adopt enterprise architecture programs, cloud computing, and data analytics capabilities. Luis currently helps organizations across Latin America accelerate strategic data initiatives.

SaiKiran Reddy Aenugu is a Data Architect in the Amazon Web Services (AWS) Data Lab. He has 10 years of experience implementing data loading, transformation, and visualization processes. SaiKiran currently helps organizations in North America adopt modern data architectures such as data lakes and data mesh. He has experience in the retail, airline, and finance sectors.

Narendra Merla is a Data Architect in the Amazon Web Services (AWS) Data Lab. He has 12 years of experience designing and productionalizing both real-time and batch-oriented data pipelines and building data lakes in both cloud and on-premises environments. Narendra currently helps organizations in North America build and design robust data architectures, and has experience in the telecom and finance sectors.
