Looking for a Data Engineer to build scalable data pipelines with PySpark, perform impact analysis, and ensure data quality in collaboration with the Group Operations Team.
We are looking for a highly skilled Data Engineer to join the Data Engineering Chapter supporting the Group Operations Team. The ideal candidate will work closely with business and technical stakeholders to understand data requirements, perform impact analysis, and build scalable data pipelines using modern technologies like PySpark.
Key Responsibilities
- Collaborate with the Group Operations Team to gather and analyze data requirements
- Perform impact assessment, technical data mapping, and data profiling
- Design and develop data extraction, transformation, and loading (ETL) pipelines
- Build and optimize data pipelines using PySpark as part of the bank’s modern tech stack
- Develop data solutions aligned with AECB application data models
- Ensure data quality, integrity, and consistency across systems
- Participate in unit testing, deployment, and production support
- Leverage modern AI tools (e.g., Claude) to improve development efficiency and reduce operational errors
- Work in an agile environment and contribute to continuous improvement initiatives
Requirements
- Strong hands-on experience with PySpark and big data processing
- Expertise in Informatica BDM Development
- Solid understanding of ETL/ELT concepts and data warehousing
- Experience in data mapping, profiling, and impact analysis
- Knowledge of SQL, data modeling, and performance tuning
- Familiarity with banking/financial data systems is a plus
- Exposure to AI-assisted development tools is an added advantage
- Strong problem-solving and analytical skills
- Experience working in the banking or financial services domain
- Familiarity with AECB reporting/data standards
- Experience with cloud platforms (AWS/Azure/GCP) is a plus

