Data Engineer (Pyspark, AWS)

NBNCo, Melbourne

Implemented Spark jobs with multiple transformations using AWS services including S3, EMR, Kinesis, Glue, and Athena.
Ensured timely generation of service health reports and their availability in the customer portal.
Analysed production loads to tune parameters for scaling of Lambda functions and DynamoDB writes.
Improved job scalability and robustness through more efficient utilisation of resources.
Deployed prediction models as endpoints in Docker containers, invocable via API calls.
Assisted support and deployment teams with data lake infrastructure issues and documentation in Confluence.
Accelerated deployment timeframes and promoted the smooth flow of information between teams.

Hadoop Engineer (Spark, Kafka, Hive, Java)

Subex, Bangalore

Took complete ownership of the Reconciliation feature of the Revenue Assurance (RA) product.
Developed, tested, and deployed one of the key features of the product for the client.
Developed Common Data Model APIs for data access to Hive for multiple product teams through the platform.
Provided reusable APIs and abstractions to other teams, improving the efficiency of data querying.
Fixed major and critical issues in the ETL layer of the product involving Spark Streaming and Kafka.
Prevented data loss from failed micro-batches and increased the resilience of ETL jobs.
Deployed the product onsite for the client on their on-prem production cluster of 20 nodes.
Played the role of a delivery engineer and assisted the client in successfully deploying their use cases.

Full Stack Developer (PHP, JS)

Usku Tech, Melbourne

Designed and developed webpages and the backend database schema in MySQL.
Built an adaptable schema for the product, enabling a meaningful experience for the user.
Integrated cookies management and session management into the website.
Improved the user experience for customers and provided easy access to their action history.