Senior PySpark Data Engineer
at Luxoft
Romania, Romania
| Start Date | Expiry Date | Salary | Posted On | Experience | Skills | Telecommute | Sponsor Visa |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Immediate | 15 Feb, 2025 | Not Specified | 18 Nov, 2024 | 5 year(s) or above | Code, Apache Kafka, Project Documentation, Data Engineering, Version Control, Neural Networks, SQL, Adherence, Data Quality, Computer Science, Training, Data Modeling, Analytics, Data Processing, Confluence, Git, Data Driven Decision Making, Algebra, Mathematics | No | No |
Description:
PROJECT DESCRIPTION
Join our dynamic team working on exciting projects in the thriving Middle East region. We offer a multitude of opportunities in various domains. Our diverse team comprises skilled professionals, including front-end and back-end developers, data analysts, data scientists, architects, analysts, and project managers. Currently, we are actively seeking a talented Data Engineer with proficiency in Python programming.
SKILLS
Must have
Technical skills:
5+ years of relevant experience in a Senior Data Engineer role
Big Data Technologies: Familiarity with big data technologies such as Hadoop, Apache Spark, or other distributed computing frameworks.
Data Security and Governance: Solid understanding of data security principles and practices for protecting the confidentiality and integrity of sensitive information, along with knowledge of data governance frameworks that ensure data quality, compliance, and proper data management.
Python and PySpark: Demonstrated strong expertise in both Python and PySpark for efficient data processing and analytics.
Advanced SQL Knowledge: Proficient in SQL with the ability to handle complex queries and database operations.
ETL Experience: Prior experience working with Extract, Transform, Load (ETL) processes.
Data Pipelines: Familiarity with data cleansing, data profiling, data lineage, and adherence to best practices in data engineering.
Familiarity with Data Analysis Approaches: Some experience with various data analysis methodologies.
Python Libraries: Familiarity with building reusable Python libraries.
API Integration: Knowledge of integrating data pipelines with various APIs for seamless data exchange between systems.
Version Control: Proficiency in version control systems, such as Git, for tracking changes in code and collaborative development.
Cloud Technology Experience: Prior exposure to cloud technologies, particularly Azure or any leading cloud platform.
Data Visualization: Some exposure to data visualization tools like Tableau, Power BI, or others to create meaningful insights from data.
Collaboration Tools: Familiarity with collaboration tools such as Azure DevOps, Jira, Confluence, or others to enhance teamwork and project documentation.
Educational Background: A degree in computer science, mathematics, statistics, or a related technical discipline.
Financial Markets Knowledge: Familiarity with financial markets, portfolio theory, and risk management is a plus.
Non-technical skills:
Problem-Solving: Strong problem-solving skills to tackle complex data engineering challenges.
Data Storytelling: Ability to convey insights effectively through compelling data storytelling.
Quality Focus: Keen attention to delivering high-quality solutions within specified timelines.
Team Collaboration: Proven ability to work collaboratively within a team, taking a proactive approach to problem resolution and process improvement.
Communication Skills: Excellent communication skills to articulate technical concepts clearly and concisely.
Nice to have
Streaming Data Processing: Exposure to streaming data processing technologies like Apache Kafka for real-time data ingestion and processing.
Containerization: Knowledge of containerization technologies like Docker for creating, deploying, and running applications consistently across various environments.
Data Modeling and Evaluation: Extensive experience in data modeling and the evaluation of large datasets.
Model Training, Deployment, and Maintenance: Background in training, deploying, and maintaining models for effective data-driven decision-making.
Machine Learning: Experience developing and implementing machine learning algorithms, Natural Language Processing (NLP), and neural networks.
Applied Mathematics: Proficiency in applied mathematics, including but not limited to linear algebra, probability, statistics, and distributions.
Responsibilities:
Actively engage in requirements clarification and contribute to sprint planning sessions.
Design and architect technical solutions that align with project objectives.
Develop comprehensive unit and integration tests to ensure the robustness and reliability of the codebase.
Provide valuable support to QA teammates during the acceptance process, addressing and resolving issues promptly.
Continuously assess and refine best practices to optimize development processes and code quality.
Collaborate with cross-functional teams to ensure seamless integration of components and efficient project delivery.
Stay abreast of industry trends, emerging technologies, and best practices to contribute to ongoing process improvement initiatives.
Contribute to documentation efforts, ensuring clear and comprehensive records of technical solutions and best practices.
Actively participate in code reviews, providing constructive feedback and facilitating knowledge sharing within the team.
REQUIREMENT SUMMARY
Experience: Min 5.0, Max 10.0 year(s)
Industry: Information Technology/IT
Category: IT Software - DBA / Datawarehousing
Specialization: Software Engineering
Education: Graduate (Computer Science, Mathematics, Statistics)
Proficiency: Proficient
Vacancies: 1
Location: Romania, Romania