Data Engineer at Rhythm Energy
Houston, Texas, United States
Full Time


Start Date

Immediate

Expiry Date

01 Jul, 26

Salary

0.0

Posted On

02 Apr, 26

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Data Engineering, Python, SQL, Spark, Databricks, Data Modeling, Data Transformation, Cloud-Based Environments, Stakeholder Communication, Operational Efficiency, Automation, Data Pipelines, Distributed Data Systems, Business Intelligence, Machine Learning, Advanced Analytics

Industry

Services for Renewable Energy

Description
About the Role

We’re hiring a Data Engineer to help build and scale the data platform that powers Rhythm’s core business operations across energy markets, customer lifecycle, and internal decision-making. You’ll work across the full data stack — from ingestion and pipeline reliability to data modeling and delivery — ensuring teams across Finance, Retail, Wholesale, Marketing, and Operations have accurate, timely, and scalable data.

This role sits at the intersection of Data, Engineering, and the business. You’ll partner directly with stakeholders to understand how data is used across the company and design systems that support those needs reliably and at scale. In the near term, a key focus will be integrating data systems across regions following a recent acquisition, helping unify platforms and improve data consistency across the business.

This is a broad, hands-on role in a startup environment — ideal for someone who enjoys working across systems, owning problems end-to-end, and operating with a high degree of autonomy.

You Might Be a Great Fit If

- You’ve built and maintained data pipelines in production and enjoy owning systems end-to-end, not just parts of them
- You’re comfortable working across the data stack — ingestion, transformation, modeling, and supporting downstream use cases
- You enjoy working closely with stakeholders and can translate business needs into scalable data solutions
- You’re proactive about communication — you don’t stay blocked, and you reach out early to unblock yourself or others
- You’ve worked in startup or high-growth environments where priorities shift and breadth matters more than narrow specialization
- You care about building systems that multiple people can use and maintain, not just solutions optimized for yourself

What You’ll Do

- Build and maintain scalable data pipelines using Python, SQL, and Spark-based frameworks (e.g., Databricks), ensuring reliability and performance at scale
- Work across the full data lifecycle, including ingestion, transformation, modeling, and enabling downstream analytics and BI use cases
- Partner with stakeholders across Finance, Operations, Marketing, and Product to understand data needs and deliver solutions that drive business decisions
- Own projects end-to-end — from problem definition and data exploration to implementation, deployment, and iteration
- Improve performance and scalability of distributed data systems, including optimization of pipelines and transformations
- Help define and evolve data models that support multiple use cases across the business
- Collaborate closely with other engineers through code reviews, shared ownership, and system design discussions
- Contribute to a transparent engineering culture where problems are surfaced early and solved collaboratively, focusing on improving systems rather than assigning blame

What You’ll Bring

- 3–6+ years of experience in Data Engineering or similar roles, with hands-on ownership of data pipelines in production
- Strong experience with Python and SQL
- Experience working with large datasets in distributed environments (e.g., Spark, Databricks, or similar)
- Solid understanding of data modeling and transformation best practices
- Experience working in cloud-based environments and familiarity with cloud data concepts
- Ability to work cross-functionally and communicate effectively with both technical and non-technical stakeholders
- Proven ability to operate independently without close supervision in a fast-paced environment

Nice to Have

- Experience with Databricks and Delta Lake
- Familiarity with AWS and cloud-based data architectures
- Exposure to BI tools and analytics delivery
- Experience working with operational, financial, or customer lifecycle data
- Interest in machine learning or advanced analytics

What Success Looks Like

Within your first 12–18 months, success in this role will look like:

- Reliable data pipelines: Core pipelines run consistently with minimal failures, and issues are identified and resolved quickly
- Improved data consistency: Systems across regions are better integrated, with clearer and more unified data models
- Strong stakeholder trust: Business teams rely on your data products for decision-making and see you as a trusted partner
- High-quality delivery: You consistently deliver well-structured, scalable solutions that are easy for others to use and maintain
- Better system observability: Data issues are easier to detect, debug, and resolve through improved monitoring and transparency
- Operational efficiency: Manual data work is reduced through automation and better-designed pipelines

What You’ll Love

- Our culture: We’re friendly, transparent, and love to innovate together.
- Flexible work-life balance: We embrace a mix of working remote and from the office.
- Professional development opportunities: We support your growth across technical and business domains.
- A chance to make a difference: We’re a sustainably driven company rethinking what’s possible in energy.
Responsibilities
Build and maintain scalable data pipelines and work across the full data lifecycle, ensuring data reliability and performance. Partner with stakeholders to understand data needs and deliver solutions that drive business decisions.