Software Engineer II - Python, GenAI, AWS at JPMC
Hyderabad, Telangana, India - Full Time


Start Date

Immediate

Expiry Date

16 Jun, 26


Posted On

18 Mar, 26

Experience

2 year(s) or above

Remote Job

Yes

Telecommute

Yes

Sponsor Visa

No

Skills

Python, GenAI, AWS, Agentic Frameworks, LLM, Data Processing Pipelines, Databricks, Data Lakes, S3, Lambda, Redshift, Athena, Step Functions, MSK, EKS, PySpark

Industry

Financial Services

Description
Job Overview

We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. As a Software Engineer III at JPMorgan Chase within Consumer and Community Banking, you serve as a seasoned member of an agile team, designing and delivering trusted, market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm's business objectives.

Job Responsibilities

* Collaborate with cross-functional teams to identify business requirements and develop data-driven solutions using Agentic/GenAI frameworks in a fast-paced environment.
* Conduct research on prompt and context engineering techniques to enhance the performance of LLM-based solutions.
* Design and implement scalable, reliable data processing pipelines, performing analysis and deriving insights to optimize business outcomes.
* Build and maintain data lakes and data processing workflows using Databricks to support machine learning operations.
* Communicate technical concepts and results effectively to both technical and non-technical stakeholders.
* Utilize AWS services including S3, Lambda, Redshift, Athena, Step Functions, MSK, EKS, and Data Lake architectures.
* Collaborate with data scientists, engineers, and business stakeholders to deliver high-quality data solutions.
* Act as a self-starter, independently driving assignments to completion and solving problems without escalation.

Required Qualifications, Capabilities, and Skills

* Advanced degree in Computer Science, Data Science, Mathematics, or a related field.
* 3+ years of applied experience in data science, machine learning, or related areas.
* Strong Python skills with PySpark, Spark SQL, and DataFrames for large-scale data processing.
* Proficiency with GenAI models (e.g., OpenAI) to solve business problems, including RAG/fine-tuning when appropriate.
* Experience with LLM orchestration, building AI agents, agentic frameworks, and MCP servers.
* Databricks expertise building and managing data lakes and end-to-end data processing workflows.
* Strong problem-solving, troubleshooting, clear stakeholder communication, rapid POC-to-production delivery, and mentoring abilities.

Preferred Qualifications, Capabilities, and Skills

* Proficiency with other AWS components; AWS certification preferred.
* Experience integrating AI/ML models into data pipelines is a plus.
* Experience with version control (Git) and CI/CD pipelines.
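To illustrate the retrieval-augmented generation (RAG) pattern the qualifications mention, here is a minimal, dependency-free sketch. It is not JPMC's stack: the keyword-overlap scorer stands in for real embedding similarity, the in-memory document list stands in for a vector store over a data lake, and `build_prompt` is a hypothetical helper.

```python
# Minimal RAG sketch: retrieve relevant context, then ground an LLM prompt in it.
# Keyword overlap is a toy stand-in for embedding similarity; a production
# system would query a vector store and call an LLM API with the prompt.

def score(query: str, doc: str) -> int:
    """Count query terms that also appear in the document (toy relevance)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most relevant to the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Assemble a prompt that grounds the answer in retrieved context."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

docs = [
    "Step Functions orchestrate Lambda-based pipeline stages.",
    "Redshift and Athena serve analytical queries over the data lake.",
    "EKS hosts long-running model-serving workloads.",
]
query = "Which services orchestrate pipeline stages?"
prompt = build_prompt(query, retrieve(query, docs))
print(prompt)
```

In a real pipeline, the retrieval step would run against indexed embeddings and the assembled prompt would be sent to a GenAI model; the grounding structure, however, is the same.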
Responsibilities
The role involves collaborating with cross-functional teams to develop data-driven solutions using Agentic/GenAI frameworks and conducting research on prompt engineering for LLM-based solutions. Responsibilities also include designing and implementing scalable data processing pipelines using Databricks and utilizing various AWS services.