Role Overview
We are looking for a Senior Data Engineer who will play a key role in designing, building, and maintaining data ingestion frameworks and scalable data pipelines. The ideal candidate should have strong expertise in platform architecture, data modeling, and cloud-based data solutions to support real-time and batch processing needs.
What you'll be doing:
- Design, develop, and optimize DBT models to support scalable data transformations.
- Architect and implement modern ELT pipelines using DBT and orchestration tools like Apache Airflow and Prefect.
- Lead performance tuning and query optimization for DBT models running on Snowflake, Redshift, or Databricks.
- Integrate DBT workflows and pipelines with AWS services (S3, Lambda, Step Functions, RDS, Glue) and event-driven architectures.
- Implement robust data ingestion processes from multiple sources, including manufacturing execution systems (MES), manufacturing stations, and web applications.
- Manage and monitor orchestration tools (Airflow, Prefect) for automated DBT model execution.
- Implement CI/CD best practices for DBT, ensuring version control, automated testing, and deployment workflows.
- Troubleshoot data pipeline issues and provide solutions for optimizing cost and performance.
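For candidates unfamiliar with the orchestration work described above, the core idea is running DBT models in dependency order. A minimal Python sketch of that scheduling (the model names, dependency graph, and `run_model` placeholder are hypothetical illustrations, not part of this role's actual stack):

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical DBT-style dependency graph: each model maps to the set of
# upstream models it selects from (what dbt derives from ref() calls).
MODELS = {
    "stg_orders": set(),
    "stg_customers": set(),
    "int_order_items": {"stg_orders"},
    "fct_daily_revenue": {"int_order_items", "stg_customers"},
}

def run_model(name: str) -> None:
    # Placeholder for a real invocation such as `dbt run --select <name>`,
    # or for an Airflow/Prefect task wrapping that command.
    print(f"running {name}")

def run_pipeline(models: dict[str, set[str]]) -> list[str]:
    """Execute models in topological (dependency) order and return the order."""
    order = list(TopologicalSorter(models).static_order())
    for name in order:
        run_model(name)
    return order
```

In practice, tools like Airflow, Prefect, and dbt itself resolve and execute this graph for you; the sketch only shows the scheduling invariant they enforce.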
What you'll have:
- 5+ years of hands-on experience with DBT, including model design, testing, and performance tuning.
- 5+ years of strong SQL expertise, with experience in analytical query optimization and database performance tuning.
- 5+ years of programming experience, including building custom DBT macros, scripts, and APIs, and working with AWS services using boto3.
- 3+ years of experience with orchestration tools such as Apache Airflow and Prefect for scheduling DBT jobs.
- Hands-on experience with modern cloud data platforms such as Snowflake, Redshift, Databricks, or BigQuery.
- Experience with AWS data services (S3, Lambda, Step Functions, RDS, SQS, CloudWatch).
- Familiarity with serverless architectures and infrastructure as code (CloudFormation/Terraform).
- Ability to effectively communicate timelines and deliver the MVPs set for the sprint.
- Strong analytical and problem-solving skills, with the ability to work across cross-functional teams.
Nice to haves:
- Experience in hardware manufacturing data processing.
- Contributions to open-source data engineering tools.
- Knowledge of Tableau or other BI tools for data visualization.
- Understanding of front-end development (React, JavaScript, or similar) to collaborate effectively with UI teams or build internal tools for data visualization.
What you need to know about the Bengaluru Tech Scene
Dubbed the "Silicon Valley of India," Bengaluru has emerged as the nation's leading hub for information technology and a go-to destination for startups. Home to tech giants like ISRO, Infosys, Wipro and HAL, the city attracts and cultivates a rich pool of tech talent, supported by numerous educational and research institutions including the Indian Institute of Science, Bangalore Institute of Technology, and the International Institute of Information Technology.