Join Our Team as a Data Infrastructure Engineer
At Hyperion360, we are dedicated to building and managing remote teams that drive innovation and success. We are seeking a skilled Data Infrastructure Engineer specializing in Python and Kafka Streaming to join our client’s engineering team. This role offers the opportunity to work remotely and contribute to groundbreaking solutions that enhance the reliability and safety of the electrical grid.
About Our Client
Our client is a leader in safeguarding critical infrastructure by advancing grid monitoring and analysis systems. Their innovative technology employs high-precision sensor arrays to continuously assess grid assets’ electrical and mechanical behavior, enabling early detection and mitigation of potential faults. This proven system has significantly reduced customer outage durations while strengthening safety measures for major utilities.
What You’ll Do
- Transform the data infrastructure into an event-driven architecture using Kafka and related technologies.
- Develop a producer to generate and stream synthetic sensor data to Kafka topics.
- Create a consumer that reads events, enriches them using a cache or database, and re-injects the enriched data into another Kafka topic.
- Build an end-of-pipeline consumer to process and analyze streamed events.
- Collaborate with cross-functional teams to integrate event-streaming solutions into existing infrastructure.
- Contribute to the setup and maintenance of infrastructure as code (IaC) and CI/CD pipelines using ArgoCD and Kubernetes.
- Optimize and ensure the reliability and efficiency of data workflows.
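To picture the pipeline described above concretely, here is a minimal sketch in plain Python of the event flow: a synthetic sensor reading, an enrichment step backed by a lookup cache, and an end-of-pipeline check. All names (`make_reading`, `enrich`, `ASSET_CACHE`, field names) are illustrative, and the actual Kafka wiring (producer/consumer clients, topics) is omitted; in the real system each hand-off would go through a Kafka topic.

```python
import json
import random
import time

# Illustrative asset metadata that an enrichment consumer might fetch
# from a cache or database (values are made up for this sketch).
ASSET_CACHE = {
    "sensor-001": {"substation": "North-A", "asset_type": "transformer"},
    "sensor-002": {"substation": "East-B", "asset_type": "breaker"},
}

def make_reading(sensor_id: str) -> dict:
    """Produce one synthetic sensor event (the producer's role)."""
    return {
        "sensor_id": sensor_id,
        "timestamp": time.time(),
        "voltage": round(random.gauss(120.0, 1.5), 2),
        "current": round(random.gauss(15.0, 0.5), 2),
    }

def enrich(event: dict) -> dict:
    """Join an event with cached asset metadata (the enrichment consumer's role)."""
    meta = ASSET_CACHE.get(event["sensor_id"], {})
    return {**event, **meta}

def analyze(event: dict) -> bool:
    """End-of-pipeline check: flag readings outside a nominal voltage band."""
    return not (114.0 <= event["voltage"] <= 126.0)

# In production, each step would publish to and consume from a Kafka topic;
# here the hand-off is a direct function call.
raw = make_reading("sensor-001")
enriched = enrich(raw)
print(json.dumps(enriched, indent=2))
print("anomalous:", analyze(enriched))
```

The three functions map one-to-one onto the producer, enrichment consumer, and end-of-pipeline consumer listed above.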
Your Background and Skills
- Experience: Minimum of 5 years as a Software Engineer, with a focus on Python and event streaming.
- Education: Bachelor’s degree in Computer Science, Engineering, or related fields, or equivalent work experience.
- Technical Proficiency:
  - Proven experience with Kafka for building event streaming applications.
  - Familiarity with Jupyter Notebook for data analysis and development.
  - Experience with CI/CD tools and infrastructure management, particularly with ArgoCD and Kubernetes.
- Problem-Solving Skills: Strong analytical and troubleshooting abilities, with the capacity to work collaboratively in fast-paced environments.
- Communication Skills: Excellent verbal and written communication in English.
- Additional Skills (Pluses):
  - Experience with Apache Flink.
  - Familiarity with Elasticsearch.
  - Knowledge of Databricks or Snowflake.
Join us at Hyperion360 and play a vital role in enhancing grid safety and reliability through innovative data infrastructure solutions. Apply today to be part of a team that values your expertise and fosters growth.