Database Programmer_261312- Developer Programmer
Tech Mahindra Limited
NORTH SYDNEY 2060 - NSW
About
Role: Database Programmer_261312- Developer Programmer
Location: Sydney
Role type: Hybrid
Working Hours: 9 AM - 5 PM (Mon-Fri)
Role description/Main duties:
Will be responsible for developing and integrating Operations Support Systems within the Telecom practice. This will involve development, testing, documentation, and support services for Telstra. The role requires strong programming and database skills to deliver tailored financial solutions and exceptional service, and may also require other administrative skills to complete tasks for the project.
Responsibilities:
· Design, architect, code, test, and implement scalable Big Data solutions to handle large volumes of data, and build a migration framework using the Big Data stack and Azure Cloud.
· Build and optimize data ingestion, transformation, and loading pipelines using Azure Data Factory, Spark, and Delta Lake.
· Design and implement real-time streaming solutions using Kafka and batch processing with Databricks and Azure Synapse.
· Write Spark scripts for data cleaning, analyzing large datasets, and storing them as Delta tables in Parquet format.
· Design and develop ML models for prediction and classification using TensorFlow and Scikit-learn.
· Implement MLOps pipelines using MLflow for model versioning, monitoring, and automated retraining.
· Optimize ML models through distributed training, hyperparameter tuning, and feature engineering.
· Optimize Spark jobs by improving data partitioning, indexing, and caching for better performance and cost efficiency.
· Deploy ML models as scalable APIs using FastAPI, Flask, and Azure ML Endpoints.
· Implement security measures, including RBAC, encryption, and data masking, to ensure compliance with GDPR and HIPAA.
· Collaborate with cross-functional teams, including data scientists, data engineers, and business analysts, to deliver data-driven solutions.
· Lead code reviews, design discussions, and the resolution of project issues for developers.
· Implement CI/CD pipelines using Jenkins, GitHub Actions, and Azure DevOps to automate build, test, and deployment processes.
· Mentor junior developers and provide guidance on best practices in Big Data and machine learning.
· Execute projects using agile methodologies to ensure efficient development and timely delivery.
· Participate in Agile ceremonies such as sprint planning, daily stand-ups, and retrospectives.
Essential Skills required to perform the above job role:
BIGDATA Framework (Map Reduce, HIVE, PIG, HDFS, HBASE, SQOOP)
Databricks, Solution Design, Cloud (Azure, AWS)
KAFKA, SPARK (SQL, STREAMING)
SCALA and CASSANDRA
Special Skills required to perform the job role:
Machine Learning Algorithms
AZURE STACK (ADF, ADL, DATABRICKS, Synapse, Event Hub)
AWS Stack (S3, Redshift, Glue, Lambda, Hadoop/EMR, Hive, Kinesis), R, Python, Java
Trade/Professional qualification/Training (including on-the job training) if any:
Qualification: Bachelor of Technology in Computer Science
Trainings: Generative AI, AZURE Stack, AWS Components.
Years of experience required for the job: 7+ years
“Tech Mahindra is an Equal Employment Opportunity employer. We promote and support a diverse workforce at all levels of the company. All qualified applicants will receive consideration for employment without regard to race, religion, color, sex, age, national origin or disability. All applicants will be evaluated solely on the basis of their ability, competence, and performance of the essential functions of their positions.
We are also committed to making reasonable adjustments to support candidates so they can perform at their best throughout the recruitment process.”