Kafka Engineer
Remote
About the Amivero Team
Amivero’s team of IT professionals delivers digital services that elevate the federal government, whether in national security or improved government services. Our human-centered, data-driven approach is focused on truly understanding the environment and the challenge, and reimagining with our customer how outcomes can be achieved.
Our team of technologists leverages modern, agile methods to design and develop equitable, accessible, and innovative data and software services that impact hundreds of millions of people.
As a member of the Amivero team, you will use your empathy for a customer’s situation, your passion for service, your energy for solving problems, and your bias toward action to bring modernization to mission-critical, public-service government IT systems.
Special Requirements
- US Citizenship Required to obtain Public Trust
- DHS Public Trust preferred
- Bachelor’s Degree required + 7 years of relevant experience
The Gist…
Our Kafka Engineer will work on a large, enterprise-level team in an Agile environment. You will support cloud initiatives and the development of custom software and database applications.
What Your Day Might Include…
- Design, develop, and deploy high-performance Kafka producers, consumers, and stream processing applications (using Kafka Streams, ksqlDB, Flink, or Spark Streaming) in Java.
- Collaborate with architects and other engineering teams to define and evolve our event-driven architecture, ensuring best practices for Kafka topic design, partitioning, replication, and data retention.
- Implement and manage components of the Kafka ecosystem, including Kafka Connect (source and sink connectors), Schema Registry (Avro, Protobuf), and Kafka security features.
- Monitor, troubleshoot, and optimize Kafka clusters and Kafka-dependent applications for throughput, latency, reliability, and resource utilization.
- Build and maintain robust and resilient data pipelines for real-time ingestion, transformation, and distribution of data across various systems.
- Provide operational support for Kafka-based systems, including incident response, root cause analysis, and proactive maintenance to ensure high availability and reliability.
- Enforce data contract definitions and schema evolution strategies using Schema Registry to maintain data quality and compatibility across services.
- Implement comprehensive testing strategies for Kafka applications, including unit, integration, and end-to-end tests, ensuring data integrity and system reliability.
- Create and maintain detailed technical documentation, architectural diagrams, and operational runbooks for Kafka-related components and processes.
- Act as a subject matter expert, sharing knowledge, mentoring junior engineers, and championing Kafka best practices across the organization.
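By way of illustration only (these are not the team’s actual settings), the reliability and throughput concerns above often translate into producer configuration along these lines; the broker addresses and serializer classes below are assumptions:

```properties
# Illustrative Kafka producer settings for strong delivery guarantees.
bootstrap.servers=broker1:9092,broker2:9092
key.serializer=org.apache.kafka.common.serialization.StringSerializer
value.serializer=io.confluent.kafka.serializers.KafkaAvroSerializer

# Durability: wait for the full in-sync replica set to acknowledge each write.
acks=all
# Idempotent producer: prevents duplicates on retry (exactly-once per partition).
enable.idempotence=true
max.in.flight.requests.per.connection=5

# Throughput/latency trade-offs.
compression.type=lz4
delivery.timeout.ms=120000
```

Real deployments would tune these against measured throughput, latency, and retention requirements rather than taking defaults like these at face value.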
You’ll Bring These Qualifications…
- US Citizenship required to obtain Public Trust.
- Active DHS Public Trust preferred.
- Bachelor’s Degree + 7 years of relevant experience.
- Extensive hands-on experience designing, developing, and deploying applications using Apache Kafka (producers, consumers, topic management, consumer groups).
- Deep understanding of Kafka's internal architecture, guarantees (at-least-once, exactly-once), offset management, and delivery semantics.
- Experience with Kafka Streams API or other stream processing frameworks (e.g., Flink, Spark Streaming with Kafka).
- Programming Proficiency: High-level proficiency in at least one modern backend programming language suitable for Kafka development (Java strongly preferred).
- Strong understanding of distributed systems principles, concurrency, fault tolerance, and resilience patterns.
- Experience with data serialization formats such as Avro, Protobuf, or JSON Schema, and their use with Kafka Schema Registry.
- Solid understanding of relational and/or NoSQL databases, and experience integrating them with Kafka.
- Excellent analytical, debugging, and problem-solving skills in complex distributed environments.
- Strong verbal and written communication skills, with the ability to clearly articulate technical concepts to diverse audiences.
- Knowledge of monitoring and observability tools for Kafka and streaming applications (e.g., Prometheus, Grafana, ELK stack, Datadog).
- Working knowledge of Git and collaborative development workflows.
- Understanding of all elements of the software development life cycle, including planning, development, requirements management, CM, quality assurance, and release management.
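As a sketch of the schema-evolution skills called out above: under Schema Registry’s default BACKWARD compatibility mode, adding a field with a default value is a compatible change. The record and field names here are hypothetical:

```json
{
  "type": "record",
  "name": "OrderEvent",
  "fields": [
    {"name": "orderId", "type": "string"},
    {"name": "amount", "type": "double"},
    {"name": "currency", "type": "string", "default": "USD"}
  ]
}
```

Because `currency` carries a default, consumers using this newer schema can still read records written before the field existed, which is the essence of backward-compatible evolution.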
These Qualifications Would Be Nice to Have…
- Hands-on experience with Confluent Platform components (Control Center, ksqlDB, REST Proxy, Tiered Storage).
- Experience with Kafka Connect for building data integration pipelines (developing custom connectors is a plus).
- Familiarity with cloud platforms (AWS, Azure, GCP) and managed Kafka services (e.g., AWS MSK, Confluent Cloud, Azure Event Hubs).
- Experience with containerization (Docker) and orchestration (Kubernetes) for deploying Kafka-dependent applications.
- Experience with CI/CD pipelines for automated testing and deployment of Kafka-based services.
- Familiarity with performance testing and benchmarking tools for Kafka and related applications.
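For the containerization experience mentioned above, a local single-broker Kafka for development and testing can be sketched with the official Apache image; the image tag and port mapping are illustrative assumptions:

```yaml
# Minimal single-node Kafka (KRaft mode) for local development.
services:
  kafka:
    image: apache/kafka:3.7.0   # official Apache image; starts a default single-node KRaft broker
    ports:
      - "9092:9092"             # expose the default client listener to the host
```

A setup like this is suitable only for local experimentation; production clusters would add replication, authentication, and externally reachable listeners.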
EOE/M/F/VET/DISABLED
All qualified applicants will receive consideration without regard to race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, disability, genetic information, marital status, amnesty, or status as a covered veteran in accordance with applicable federal, state and local laws. Amivero complies with applicable state and local laws governing non-discrimination in employment in every location in which the company has facilities.
