Software Engineer

People 2.0 Australia (est) Pty Ltd

Eveleigh, NSW 2015

About

Our client is a digital consulting company that harnesses digital disruption to develop products and services that make life simple.

They are seeking an experienced professional to join their team!

The ideal candidate will have a strong background in both data engineering and DevOps, with a deep understanding of cloud-native technologies, particularly within the AWS ecosystem. You will be responsible for the end-to-end lifecycle of their data processing and analytics applications, from development and optimization through deployment and maintenance. Your expertise will be critical in keeping their data pipelines efficient, cost-effective, and highly available.

Key Responsibilities
Data Engineering & Development:
o Design, develop, and maintain proprietary data processing applications using Scala and Apache Spark to handle large-scale datasets.
o Tune and optimize Spark application performance on AWS EMR, including selecting appropriate instance types for memory-intensive workloads and improving ETL runtimes.
o Write and maintain high-quality, production-ready code that adheres to system requirements, designs, and quality standards.
o Develop and implement unit tests and data generators to ensure the accuracy and reliability of data applications during early development stages.

DevOps & Infrastructure:
o Maintain and manage containerized services orchestrated using Kubernetes (EKS) on AWS.
o Manage application deployments using Helm charts, ensuring smooth and repeatable releases.
o Develop and maintain Infrastructure as Code (IaC) using AWS CloudFormation.
o Use tools such as AWS Step Functions, EMR, and OpenSearch to automate the orchestration of ETL jobs.
o Improve and automate the CI/CD pipeline for ETL jobs, streamlining the deployment of stable application versions.
o Implement and manage EMR clusters to optimize big data processing costs and resource utilization.

Data Store Management & Operations:
o Work with AWS OpenSearch (Elasticsearch) as a data store, optimizing its performance for complex full-text search and analytical queries.
o Monitor and troubleshoot application performance and operational metrics using tools like Splunk and AWS EMR dashboards.
o Respond to and resolve alerts generated by the monitoring systems to ensure application stability.
Security & Documentation:
o Implement client-side encryption with AWS KMS, via the AWS SDKs, for data files transferred between on-premises systems and AWS S3.
o Create and maintain comprehensive technical documentation for all applications, processes, and infrastructure.

Required Skills and Qualifications
• Extensive experience with big data processing technologies, specifically Scala and Apache Spark.
• Proven expertise in cloud infrastructure, particularly with AWS services (EKS, EMR, Step Functions, S3, OpenSearch).
• Strong understanding of containerization and orchestration using Docker and Kubernetes.
• Hands-on experience with Infrastructure as Code (IaC) principles and tools.
• Proficiency in CI/CD pipeline development and automation.
• Solid experience with monitoring and logging tools (e.g., Splunk, Observe, Grafana).
• Familiarity with data formats like Parquet.
• Experience with data security practices, including encryption/decryption.
• Excellent problem-solving skills and the ability to conduct research and tune system configurations for optimal performance and cost.
• Strong communication skills and the ability to create clear, concise technical documentation.

Nice to Have
• Experience with other AWS data services (e.g., Redshift, Glue, etc.).
• Knowledge of other programming languages (e.g., Python).
• Experience in the financial or financial-crime (fin-crime) sector.