Impact Engineer - Data Science (Remote)

Join Our Team as Impact Engineer - Data Science (Remote)

At Hyperion360, we believe in empowering our engineers to shape the future of technology from the comfort of their own homes. We are a premier software outsourcing company, partnering with some of the world’s most successful businesses to build and manage dedicated, remote teams of top-tier software engineers and other technical talent.

We are looking for a talented Impact Engineer - Data Science (Remote) to join our global team.

Job Description

We’re looking for a data scientist/analyst who thrives in a fast-paced, high-ownership environment. At our company, data is at the core of our decision engine, risk models, and product strategy. You’ll design, implement, and productionize data-driven systems that directly shape how we underwrite advances, optimize user experience, and manage risk.

This role is not limited to notebooks — you’ll write production-ready Python code, deploy models and pipelines into live systems, and collaborate with backend engineers to build scalable data products. You’ll also leverage modern AI/LLM-powered workflows to accelerate analysis, automate reporting, and improve decision-making.

Your work will go beyond models: you’ll generate actionable insights into user behavior, product funnels, and financial performance, helping drive both day-to-day decisions and long-term strategy.

If you’re excited about blending hands-on engineering, business analytics, and AI-driven workflows — and about seeing your work have direct impact on thousands of users — you’ll feel right at home here.

What you’ll do

  • Build and maintain services that power Gerald’s decision engine — the core of our product
  • Design and develop statistical and machine learning models to support underwriting, fraud detection, and user engagement
  • Write production-ready Python code (FastAPI, Pandas, SQLAlchemy) to deploy models and pipelines into live systems
  • Develop and optimize ETL jobs and data pipelines to transform raw data into actionable insights
  • Write and maintain SQL queries to analyze user behavior, product performance, and financial metrics
  • Partner with product and business teams to analyze funnels, unit economics, and risk metrics, translating insights into strategic decisions
  • Design and run data-driven experiments (A/B tests, causal analysis) to validate product improvements and policy changes
  • Leverage LLMs and AI-powered tools to accelerate data exploration, automate reporting, and support SQL/analysis workflows
  • Prototype and integrate AI-driven workflows (agents, RAG pipelines, anomaly detection, automated monitoring) into Gerald’s data stack
  • Implement and maintain unit and integration tests to ensure model and pipeline reliability
  • Communicate findings and recommendations clearly to both technical and non-technical stakeholders

What we’re looking for

  • 4–6+ years of experience in a data science, data analytics, or ML engineering role
  • Strong Python programming skills (comfortable writing production-ready code, not just notebooks)
  • Experience with SQL and relational databases (Postgres, MySQL, or similar)
  • Hands-on experience with Python data libraries (Pandas, NumPy, SciPy, scikit-learn, etc.)
  • Background in applied statistics and machine learning (classification, regression, clustering, etc.)
  • Experience deploying models, pipelines, or analytics workflows into production environments
  • Familiarity with data-driven experimentation (A/B testing, causal inference, or uplift modeling)
  • Experience leveraging LLMs and AI-powered tools to improve analysis and reporting productivity
  • Strong problem-solving and communication skills, with the ability to collaborate in a remote-first environment
  • A responsible, detail-oriented mindset with a strong sense of ownership
  • Bachelor’s degree in Computer Science, Statistics, Mathematics, Engineering, or a related field

Nice to have

  • Experience with ETL/data engineering frameworks (Airflow, dbt, or similar)
  • Familiarity with cloud platforms (AWS preferred — RDS, S3, ECS, Lambda)
  • Practical experience with advanced LLM applications — e.g., multi-agent workflows, automated feature engineering, anomaly detection agents
  • Familiarity with retrieval-augmented generation (RAG) and vector databases (e.g., Pinecone, Weaviate, FAISS)
  • Experience optimizing AI/data pipelines for latency, cost efficiency, and reliability
  • Experience with Docker and containerized workflows
  • Previous experience in fintech, risk modeling, or credit underwriting
  • Startup experience and a varied range of roles across your career

Why Choose Hyperion360?

  • Remote-First Culture: Work from anywhere with flexible hours
  • Top-Tier Clients: Partner with Fortune 500 companies and top startups
  • Professional Growth: Continuous learning and development opportunities
  • Competitive Compensation: Market-leading salaries and benefits
  • Global Team: Collaborate with talented professionals worldwide

Ready to take your career to the next level? Apply today and become part of Hyperion360’s elite team!