Senior Software Engineer – Data Foundations

Join Our Team as Senior Software Engineer – Data Foundations

At Hyperion360, we believe in empowering our engineers to shape the future of technology from the comfort of their own homes. We are a premier software outsourcing company, partnering with some of the world’s most successful businesses to build and manage dedicated, remote teams of top-tier software engineers and other technical talent.

We are looking for a talented Senior Software Engineer – Data Foundations to join our global team.

About This Role

This role emphasizes observability, automation, and optimization—cleaning up unused tables, streamlining data pipelines, and ensuring data engineers and analysts can move quickly with confidence. You will also play a key role in supporting the lifecycle of ML and GenAI data workflows.

Job Description

As a Senior Software Engineer in Riot Data Foundations on the Data Experiences and Automation team, you will help shape how Riot builds and maintains scalable, observable, and reliable data pipelines. You will work closely with product partners and technical leads to expand data processing, monitoring, and reporting capabilities, while ensuring best practices for data quality, security, and system efficiency.

You will bring your experience in data engineering and DevOps, with a strong focus on building and maintaining reliable pipelines using tools like DBT and Airflow. You will report to the team's Engineering Manager.

Key Responsibilities:

  • Design, build, and enhance pipelines that make Riot’s data ecosystem more reliable, observable, and efficient.
  • Implement monitoring and observability frameworks to ensure pipelines are performant and resilient.
  • Work with APIs to build integrations, handle errors (HTTP status codes, retries), and enforce secure credential management.
  • Identify and clean up unused or redundant data tables to streamline system performance.
  • Participate in code reviews, team rituals, and contribute to best practices in pipeline development.
  • Collaborate with non-technical stakeholders to understand data requirements and deliver actionable solutions.
  • Support live products via on-call rotations, ensuring uptime and reliability.
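As a rough illustration of the API-integration work described above (handling transient HTTP status codes with retries and keeping credentials out of code), here is a minimal Python sketch. The function names and the `DATA_API_TOKEN` environment variable are hypothetical, not part of any actual Hyperion360 or Riot codebase:

```python
import os
import time

# Status codes commonly treated as transient and worth retrying
# (assumption: rate limiting and server-side errors).
RETRYABLE_STATUSES = {429, 500, 502, 503, 504}


def call_with_retries(request_fn, max_attempts=3, base_delay=0.5):
    """Call request_fn(), retrying on transient HTTP statuses.

    request_fn returns an object with an integer `.status` attribute.
    Waits with exponential backoff between attempts and returns the
    last response if all attempts hit a retryable status.
    """
    response = None
    for attempt in range(max_attempts):
        response = request_fn()
        if response.status not in RETRYABLE_STATUSES:
            return response
        if attempt < max_attempts - 1:
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff
    return response


def api_token():
    """Read the API token from the environment instead of hard-coding it."""
    token = os.environ.get("DATA_API_TOKEN")  # hypothetical variable name
    if not token:
        raise RuntimeError("DATA_API_TOKEN is not set")
    return token
```

In practice the retry logic would live in a shared client library so every integration handles rate limits and outages consistently.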

Experience:

  • Bachelor’s degree in Computer Science or related field (or equivalent work experience).
  • 5+ years of experience with Python and SQL for data engineering.
  • Strong hands-on experience with DBT and Airflow (must have).
  • Experience in data warehousing and distributed data ecosystems (Databricks, Snowflake, or similar).
  • Solid understanding of observability practices (logging, metrics, tracing) for data pipelines.
  • Familiarity with DevOps practices (CI/CD, GitHub Actions, infrastructure as code).
  • Experience building secure and resilient API integrations.

Desired Qualifications

  • Exposure to Monte Carlo, Tableau, or similar tools for data quality and visualization.
  • Experience with MLOps or GenAI pipelines and infrastructure.
  • Working knowledge of Golang for tooling and automation (nice to have).
  • Hands-on experience with open-source ETL frameworks and modern orchestration platforms.
  • Passion for improving developer experience through better tooling and automation.

Why Choose Hyperion360?

  • Remote-First Culture: Work from anywhere with flexible hours
  • Top-Tier Clients: Partner with Fortune 500 companies and top startups
  • Professional Growth: Continuous learning and development opportunities
  • Competitive Compensation: Market-leading salaries and benefits
  • Global Team: Collaborate with talented professionals worldwide

Ready to take your career to the next level? Apply today and become part of Hyperion360’s elite team!