We are looking for a skilled Data Engineer to join the Data Engineering Chapter supporting the Group Operations team at ENBD. The ideal candidate will be responsible for building scalable data pipelines, performing data analysis, and delivering high-quality data solutions aligned with enterprise data models.
Key Responsibilities
- Collaborate daily with the Group Operations team to understand business and data requirements
- Perform Impact Assessment for new and existing data changes
- Conduct Technical Data Mapping and Data Profiling activities
- Design, develop, and maintain ETL pipelines for data extraction, transformation, and loading
- Build data solutions that feed the AECB application in line with prescribed data models
- Develop and optimize data pipelines using PySpark on modern data platforms
- Ensure data quality, consistency, and integrity across systems
- Perform unit testing, debugging, and deployment of data solutions
- Leverage modern tools and AI technologies (e.g., Claude) to enhance development efficiency and reduce operational errors
- Work closely with cross-functional teams including business analysts, architects, and QA
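The pipeline responsibilities above follow the standard extract-transform-load pattern. A minimal sketch of that flow, written in plain Python for portability (in practice these stages would be PySpark DataFrame operations, and the field names here are hypothetical since the actual AECB data model is not given):

```python
# Toy ETL sketch: extract records, apply a data-quality check,
# transform to a target schema, and load. Field names are illustrative.

def extract(rows):
    """Extract: in practice this would read from a source system."""
    return list(rows)

def transform(rows):
    """Transform: enforce data quality (drop rows missing required
    fields), then map source fields onto the target model."""
    required = ("customer_id", "balance")
    clean = [r for r in rows if all(r.get(f) is not None for f in required)]
    return [
        {"cust_id": r["customer_id"], "balance_aed": round(float(r["balance"]), 2)}
        for r in clean
    ]

def load(rows, target):
    """Load: append transformed rows to the target store."""
    target.extend(rows)

source = [
    {"customer_id": "C1", "balance": "1500.504"},
    {"customer_id": None, "balance": "99.0"},   # fails the quality check
    {"customer_id": "C2", "balance": "250"},
]
warehouse = []
load(transform(extract(source)), warehouse)
```

Keeping extract, transform, and load as separate functions mirrors how such pipelines are unit-tested and deployed stage by stage.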
Requirements
- Strong hands-on experience with PySpark and distributed data processing
- Experience in Informatica BDM Development (Big Data Management)
- Solid understanding of ETL/ELT concepts and data pipeline architecture
- Expertise in data mapping, data profiling, and impact analysis
- Experience working with large-scale data systems and cloud/data platforms
- Strong SQL skills and understanding of data warehousing concepts
- Familiarity with banking/financial domain (preferred but not mandatory)
- Knowledge of AI-assisted development tools (e.g., Claude) is a plus
- Good problem-solving and analytical skills
- Experience with AECB data/reporting systems
- Exposure to big data ecosystems (Hadoop/Spark clusters)
- Understanding of data governance and compliance standards
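Data profiling, one of the skills listed above, typically starts with per-column row counts, null counts, and distinct counts. A small sketch using SQLite from Python's standard library (table and column names are invented for illustration):

```python
import sqlite3

# Minimal data-profiling sketch: total rows, null count, and distinct
# counts per column -- the kind of checks run before data mapping.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (customer_id TEXT, segment TEXT)")
conn.executemany(
    "INSERT INTO accounts VALUES (?, ?)",
    [("C1", "retail"), ("C2", "retail"), ("C3", None), ("C3", "sme")],
)

profile = conn.execute(
    """
    SELECT COUNT(*)                    AS total_rows,
           SUM(segment IS NULL)        AS segment_nulls,
           COUNT(DISTINCT segment)     AS segment_distinct,
           COUNT(DISTINCT customer_id) AS customer_id_distinct
    FROM accounts
    """
).fetchone()
print(profile)  # (4, 1, 2, 3)
```

Note that `COUNT(DISTINCT …)` ignores NULLs, so null counts must be measured separately, as the `SUM(segment IS NULL)` term does here.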
Top Skills
AI Technologies
Informatica BDM
PySpark
SQL
