Python Developer at Weekday AI
Bengaluru, Karnataka, India
Full-Time


Start Date

Immediate

Expiry Date

24 Apr, 26

Salary

0.0

Posted On

24 Jan, 26

Experience

5 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Python, AWS Lambda, AWS S3, AWS RDS, ETL, Data Pipeline Design, Async Programming, Concurrent API Handling, Multithreading, Multiprocessing, Caching Strategies, Memory Management, File Processing, Code Review, Logging, Problem-Solving

Industry

Technology; Information and Internet

Description
This role is for one of Weekday's clients.
Min Experience: 5 years
Location: Bengaluru
Job Type: Full-time

As a Python Data Engineer / Backend Engineer, you will be responsible for designing efficient data workflows that process large volumes of data stored in cloud environments. The role involves working extensively with AWS services such as S3, Lambda, and RDS, while applying Python techniques such as async programming, multithreading, and multiprocessing to resolve performance bottlenecks. You will also contribute to building reliable APIs, implementing caching strategies, and ensuring data quality through validation and error handling. This role assesses not just implementation skills, but also architectural thinking and the ability to evaluate and improve existing code.

Key Responsibilities
- Design memory-efficient, high-performance data processing solutions for large datasets (1GB+ CSV and similar files) using Python
- Build cloud-native data pipelines that read and write data in AWS S3 using AWS Lambda
- Implement data validation and cleansing logic to ensure accuracy and consistency
- Develop scalable ETL workflows that extract data from S3, transform it according to business logic, and load it into RDS
- Optimize API data fetching with concurrent and asynchronous processing using async/await
- Design and implement caching mechanisms for cloud-based, database-backed APIs to improve response times and reduce load
- Choose and implement the appropriate concurrency model (multithreading or multiprocessing) based on workload characteristics
- Identify performance bottlenecks and recommend architectural or code-level improvements
- Review Python code to identify errors, inefficiencies, and missing best practices
- Apply logging, exception handling, and monitoring best practices for production-grade systems
- Ensure solutions are scalable, cost-effective, and aligned with cloud-native design principles

What Makes You a Great Fit
- Strong experience with Python for backend or data engineering use cases
- Hands-on experience with AWS Lambda, S3, and RDS
- Solid understanding of ETL concepts and data pipeline design
- Experience implementing async programming and concurrent API handling
- Ability to reason about performance trade-offs between multithreading and multiprocessing
- Familiarity with caching strategies for APIs and cloud databases
- Strong understanding of memory management and efficient file processing
- Ability to review existing Python code and identify bugs, anti-patterns, and missing best practices
- Knowledge of cloud scalability, cost optimization, and serverless architectures
- Strong problem-solving skills with an architectural mindset
- Clear communication skills to explain technical decisions and improvements
Responsibilities
Design efficient data workflows for processing large datasets and build cloud-native data pipelines using AWS services. Implement data validation, develop ETL workflows, and optimize API performance.
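The async/await API optimization mentioned above can be sketched as follows: fan out many requests concurrently with `asyncio.gather` while a semaphore caps in-flight calls. The `fetch_record` body simulates network latency with `asyncio.sleep`; a real pipeline would call an async HTTP client there. All names here are illustrative, not from the posting.

```python
import asyncio

async def fetch_record(record_id: int, sem: asyncio.Semaphore) -> dict:
    """Stand-in for an API call; replace the sleep with a real async HTTP request."""
    async with sem:                  # cap the number of in-flight requests
        await asyncio.sleep(0.01)    # simulated network latency
        return {"id": record_id, "status": "ok"}

async def fetch_all(ids: list[int], max_concurrency: int = 10) -> list[dict]:
    """Fetch all ids concurrently; results come back in input order."""
    sem = asyncio.Semaphore(max_concurrency)
    return await asyncio.gather(*(fetch_record(i, sem) for i in ids))

# Example: asyncio.run(fetch_all(list(range(100)), max_concurrency=10))
```

Because the work is I/O-bound, this concurrency wins over sequential fetching without threads or processes; CPU-bound transforms would instead favor multiprocessing, which is the trade-off the role asks candidates to reason about.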