Software Engineer, Data (Core Engineering) at Dime Line Trading
Chicago, Illinois, United States - Full Time


Start Date

Immediate

Expiry Date

07 Jun, 26

Salary

0.0

Posted On

09 Mar, 26

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Python, Java, Go, Scala, SQL, System Design, AWS, GCP, Azure, NoSQL, Kafka, Kinesis, RabbitMQ, Spark, Flink, CI/CD

Industry

Financial Services

Description
We are looking for a strong Software Engineer who specializes in data systems. In this role, you won't just be writing scripts; you will be building the core backend services, distributed systems, and robust infrastructure that power our data platform. If you approach data challenges with a software engineering mindset and have a deep understanding of how data flows through complex systems, this role is for you.

What you'll do:
Build Core Systems: Design, develop, and deploy highly scalable backend services, APIs, and distributed systems that support our data infrastructure.
Manage Access Patterns: Architect scalable systems capable of handling diverse data access patterns (e.g., high-throughput writes, low-latency reads, heavy analytical scans) and optimize our existing data access layers.
Construct Data Pipelines: Build and maintain fault-tolerant pipelines, leveraging industry best practices for data storage, retrieval, and processing.
Event-Driven Architecture: Design and implement robust event-driven platforms that ensure reliable data delivery and real-time processing capabilities.
Champion Engineering Standards: Apply rigorous software engineering practices to data, including CI/CD, comprehensive testing (unit, integration, and end-to-end), and version control.

Skills you need:
Core Engineering Background: 4+ years of overall experience in backend software engineering, with a strong grasp of computer science fundamentals, data structures, and algorithms.
Data Expertise: 2+ years of dedicated experience in a data engineering capacity.
Language Proficiency: Expert-level coding skills in Python, Java, Go, or Scala, along with advanced SQL capabilities.
System Design: Proven experience building scalable systems and optimizing data access patterns for various downstream use cases.
Data Ecosystem Knowledge: Strong familiarity with industry best-practice solutions for:
Cloud-based architecture (AWS, GCP, or Azure).
Data storage and retrieval (e.g., relational, NoSQL, columnar databases).
Event-driven platforms (e.g., Kafka, Kinesis, RabbitMQ).
Batch and stream data processing (e.g., Spark, Flink).
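To give a flavor of the event-driven pipeline work described above, here is a minimal, self-contained sketch in Python. It uses an in-memory queue standing in for a broker such as Kafka, a hypothetical TradeEvent type, and a list standing in for a durable sink; it is an illustrative sketch of fault-tolerant consumption with retries, not a description of the team's actual stack.

```python
import json
import queue
import time
from dataclasses import dataclass


# Hypothetical event type; in a real system this would be deserialized off a broker topic.
@dataclass
class TradeEvent:
    symbol: str
    price: float
    ts: float


def consume(events: "queue.Queue[TradeEvent]", sink: list, max_retries: int = 3) -> None:
    """Drain the in-memory event bus, retrying transient sink failures with backoff."""
    while True:
        try:
            event = events.get(timeout=0.5)
        except queue.Empty:
            return  # no more events to process
        payload = json.dumps(event.__dict__)
        for attempt in range(max_retries):
            try:
                sink.append(payload)  # stand-in for a durable write (database, object store, ...)
                break
            except OSError:
                time.sleep(2 ** attempt)  # exponential backoff before retrying the write


if __name__ == "__main__":
    bus: "queue.Queue[TradeEvent]" = queue.Queue()
    bus.put(TradeEvent("SPY", 512.31, time.time()))
    bus.put(TradeEvent("QQQ", 440.05, time.time()))
    store: list = []
    consume(bus, store)
    print(store)  # two JSON records, delivered in order
```

In a production pipeline the queue would be replaced by a broker client, the sink by a transactional or idempotent write, and offsets would be committed only after the write succeeds, which is the reliability property the retry loop gestures at here.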
Responsibilities
The role involves designing, developing, and deploying highly scalable backend services, APIs, and distributed systems to power the core data infrastructure. Responsibilities also include architecting systems for diverse data access patterns and building fault-tolerant, event-driven data pipelines.