
Staff Data Engineer - Immediate Joiners

FinOpsly

Marketing & Communications, Data Science
Bengaluru, Karnataka, India
Posted on Apr 6, 2025

About Us

At FinOpsly, we’re reinventing cloud cost management—empowering businesses to track, manage, and optimize their cloud spend so they can stretch resources further. Our AI-driven product provides deep insights and a unique customer experience, and our distributed team across India and the USA collaborates to protect customers from cost overruns. We operate in a massive, growing market, yet it’s still early enough for you to make a significant impact. You’ll have the freedom to think creatively, dream big, and use your full range of skills to drive our product and company forward.

We are seeking a Staff Data Engineer to join the Data Stewardship team, with deep expertise in data modeling, ETL/ELT processes, and large-scale data engineering. In this role, you’ll design and implement the backbone of our data platform, enabling customers to gain insights into cloud spend across multiple providers, identify optimization opportunities, and proactively alert on spend spikes. You’ll build efficient data models and transformation pipelines that are well-suited for front-end consumption. As a Staff Data Engineer, you will work closely with our data engineers, full-stack developers, and AI/ML engineers to build strategies, models, and tools that improve the quality, efficiency, and safety of FinOpsly’s data.

As a Data Steward, you are passionate about efficient data management, privacy techniques, and helping teams create high-quality data. This role is critical to delivering insightful, real-time analytics for cloud spend optimization. You’ll shape how our platform ingests, models, and serves data, ensuring the front end can quickly access and visualize the information our customers need. If you’re excited about scalable architectures, best-in-class data practices, and driving product impact, we’d love to meet you!

What You’ll Do

  • Analyze existing databases and warehouse architectures, recommending the most efficient methods for integrating, consolidating, and aggregating large datasets.
  • Translate business needs into robust schemas for online reporting and ad-hoc analysis, focusing on scalability and multi-tenant data access.
  • Build logical abstraction layers for high-volume and multi-tenant data, ensuring smooth downstream consumption by applications and analytics tools.
  • Create end-to-end data pipelines (ETL/ELT) using Snowflake or other modern technologies, ensuring high availability, speed, and accuracy of ingested data (a brief illustrative sketch follows this list).
  • Engage with internal stakeholders to capture requirements, then design repeatable, maintainable data engineering processes that align with customer-facing objectives.
  • Propose and maintain technical design documentation (specifications, data flows, future ETL functionality) to guide and inform the broader team.
  • Design, test, document, and operate large-scale data structures for business intelligence and real-time analytics.
  • Monitor, tune, and troubleshoot complex queries, continuously improving query performance, throughput, and cost-efficiency.
  • Evaluate and provide feedback on dataset implementations designed and proposed by peer data engineers.
  • Translate internal customer business requirements into repeatable, maintainable, automated data engineering processes.
  • Participate from design to delivery, including testing, documentation, support, and maintenance.
  • Produce and update dataset documentation, metadata, and operational runbooks to ensure consistency and ease of knowledge sharing.
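
To make the pipeline and alerting responsibilities above concrete, here is a minimal sketch in plain Python (standard library only) of a normalization layer for multi-provider billing data. The record shape, the column names, and the spike rule are illustrative assumptions for this posting, not FinOpsly’s actual schema or method:

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class SpendRecord:
        """One normalized line of cloud spend, keyed for multi-tenant access."""
        tenant_id: str   # isolation key for multi-tenant queries
        provider: str    # "aws", "azure", ...
        service: str     # provider service name, normalized
        usage_date: date
        cost_usd: float

    def from_aws(row: dict, tenant_id: str) -> SpendRecord:
        # Column names follow AWS Cost and Usage Report conventions;
        # treat them as illustrative here.
        return SpendRecord(
            tenant_id=tenant_id,
            provider="aws",
            service=row["product/ProductName"],
            usage_date=date.fromisoformat(row["lineItem/UsageStartDate"][:10]),
            cost_usd=float(row["lineItem/UnblendedCost"]),
        )

    def from_azure(row: dict, tenant_id: str) -> SpendRecord:
        # Column names follow Azure cost-export conventions; also illustrative.
        return SpendRecord(
            tenant_id=tenant_id,
            provider="azure",
            service=row["MeterCategory"],
            usage_date=date.fromisoformat(row["Date"]),
            cost_usd=float(row["CostInBillingCurrency"]),
        )

    def is_spend_spike(today_usd: float, trailing_avg_usd: float,
                       ratio: float = 1.5) -> bool:
        # A deliberately simple rule: flag when today's spend exceeds the
        # trailing average by 50%. Production alerting would be more nuanced.
        return trailing_avg_usd > 0 and today_usd > ratio * trailing_avg_usd

Downstream, records like these would land in warehouse tables (e.g., in Snowflake) whose schemas are designed around the reporting and multi-tenant access patterns described above.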

What You’ll Bring

  • Bachelor’s (or higher) in Computer Science, Engineering, Mathematics, or related field, with 5+ years of industry experience.
  • 4+ years of building and operating large-scale ETL/ELT processes, working with database technologies (e.g., Snowflake, Redshift, Postgres), and advanced data modeling.
  • 4+ years of coding experience in a language such as Python, Scala, Java, C#, or PowerShell.
  • Advanced SQL knowledge with a proven track record in query performance tuning.
  • Familiarity with at least one MPP environment (e.g., Spark, Hadoop-based solutions, Snowflake) for large-scale data processing.
  • Experience with Python-based libraries like Pandas or Polars for data manipulation and prototyping (see the short example after this list).
  • Hands-on experience building highly available, distributed systems for data extraction, ingestion, and processing of large datasets.
  • A passion for creating data models that efficiently support front-end dashboards and complex analytics.
  • Strong collaboration skills, with the ability to evaluate and provide feedback on peer implementations and design proposals.
  • Excellent communication skills to translate internal customer requirements into robust, repeatable data engineering solutions.
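
As a tiny, concrete example of the dataframe experience mentioned above, the following Pandas sketch rolls raw spend records up to daily spend per tenant and service, the kind of shape a front-end dashboard might consume. All column names and values are made up for illustration:

    import pandas as pd

    # Toy records standing in for rows already ingested from providers.
    df = pd.DataFrame({
        "tenant_id":  ["t1", "t1", "t2", "t1"],
        "service":    ["compute", "storage", "compute", "compute"],
        "usage_date": pd.to_datetime(["2025-04-01", "2025-04-01",
                                      "2025-04-01", "2025-04-02"]),
        "cost_usd":   [120.0, 30.5, 80.0, 150.0],
    })

    # Daily spend per tenant and service, sorted for display.
    daily = (
        df.groupby(["tenant_id", "service", "usage_date"],
                   as_index=False)["cost_usd"]
          .sum()
          .sort_values(["tenant_id", "usage_date"])
    )
    print(daily)

The same rollup could equally be expressed in Polars or pushed down into SQL; what matters for this role is comfort moving between dataframe prototyping and production-grade queries.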

Why You’ll Love Working Here

You’ll discover an environment where your voice truly matters. We champion experimentation, learning, and mentorship so every team member thrives professionally. Working alongside a supportive, diverse group, you’ll feel the excitement of tackling fresh challenges, celebrating shared wins, and pushing boundaries. As we evolve to meet dynamic customer needs, you’ll help define a product that aims to simplify and transform how businesses manage cloud spend—enjoying the camaraderie, creativity, and energy of an early-stage startup along the way.