Develop and optimize data pipelines, ETL/ELT workflows, and data models for cloud platforms. Collaborate with cross-functional teams to deliver data solutions and ensure data quality.
About Us
Zuru Tech is digitalizing the construction process of buildings all around the world. We have a multi-national team developing the world’s first digital building fabrication platform: you design it, we build it!
At ZURU we develop the Zuru Home app, BIM software for the general public, architects, and engineers. With it, anyone can buy, design, and send to manufacturing any type of building with complete design freedom. Welcome to the future!
What Are You Going to Do?
📌Architect and develop complex data pipelines, ETL/ELT workflows, and data models on platforms such as Snowflake, Databricks, Azure Synapse, Redshift, and BigQuery.
📌Build scalable data transformation pipelines using the Medallion Architecture (Bronze → Silver → Gold layers).
📌Develop, manage, and optimize Airflow DAGs for orchestration and scheduling (see the orchestration sketch after this list).
📌Implement transformation logic and semantic models using DBT, enforcing analytics engineering best practices.
📌Write, optimize, and maintain advanced SQL queries, stored procedures, and performance-tuned transformations.
📌Design and maintain reusable data ingestion and transformation frameworks, including support for geospatial data.
📌Build connectors and integrate streaming/event-driven architectures using Kafka for near real-time data pipelines (see the ingestion sketch after this list).
📌Enable downstream analytics by preparing curated datasets and data models for BI consumption, including Power BI dashboards.
📌Collaborate with Architects, Senior Engineers, API teams, and Visualization teams to deliver end-to-end data solutions.
📌Conduct PoCs/PoVs to evaluate cloud data integration tools and modern data engineering technologies.
📌Ensure strong data quality, lineage, governance, metadata management, and cloud security standards.
📌Work within Agile/DevOps methodologies to deliver iterative, high-quality solutions.
📌Troubleshoot and proactively resolve complex pipeline and performance issues.
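
For illustration only, here is a minimal sketch of the kind of orchestration described above: an Airflow DAG that runs a Medallion-style DBT build in layer order (Bronze → Silver → Gold). The DAG id, schedule, tag selectors, and project path are hypothetical placeholders, not details of the actual stack.

# Minimal Airflow DAG sketch (Airflow 2.x API): run DBT models layer by layer.
# Assumes models are tagged "bronze", "silver", and "gold" in the DBT project;
# dag_id, schedule, and /opt/dbt are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="medallion_dbt_build",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    bronze = BashOperator(
        task_id="dbt_build_bronze",
        bash_command="dbt build --select tag:bronze --project-dir /opt/dbt",
    )
    silver = BashOperator(
        task_id="dbt_build_silver",
        bash_command="dbt build --select tag:silver --project-dir /opt/dbt",
    )
    gold = BashOperator(
        task_id="dbt_build_gold",
        bash_command="dbt build --select tag:gold --project-dir /opt/dbt",
    )

    # Enforce layer ordering: Bronze feeds Silver, Silver feeds Gold.
    bronze >> silver >> gold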
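
Likewise, a minimal ingestion sketch for the Kafka bullet, assuming the kafka-python client and a local JSONL landing path; the topic, broker address, and paths are hypothetical, and a production pipeline would normally batch events into cloud object storage (S3/ADLS/GCS) rather than local files.

# Minimal Kafka-to-Bronze ingestion sketch (kafka-python).
# Topic, broker, and landing path are hypothetical placeholders.
import json
from datetime import datetime, timezone

from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "building-events",
    bootstrap_servers="localhost:9092",
    group_id="bronze-loader",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

# Append each event, untransformed, to a date-partitioned Bronze file.
for message in consumer:
    record = {
        "ingested_at": datetime.now(timezone.utc).isoformat(),
        "topic": message.topic,
        "offset": message.offset,
        "payload": message.value,
    }
    path = f"bronze/building_events/{datetime.now(timezone.utc):%Y-%m-%d}.jsonl"
    with open(path, "a", encoding="utf-8") as sink:
        sink.write(json.dumps(record) + "\n")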
What Are We Looking For?
✔5+ years of hands-on experience as a Data Engineer on cloud-based data transformation and platform modernization projects.
✔Strong experience with at least one full lifecycle implementation of a cloud data lake/data warehouse using Snowflake, Databricks, Redshift, Synapse, or BigQuery.
✔Hands-on experience with Medallion Architecture or other layered data modelling approaches.
✔Proficient in Airflow for workflow orchestration and DBT for SQL-based transformations and modelling.
✔Strong skills in advanced SQL, dimensional data modelling (star and snowflake schemas), ETL/ELT, and performance tuning.
✔Experience supporting BI teams by creating curated datasets, semantic models, and optimized schemas for tools like Power BI.
✔Proficient in Python and PySpark for building scalable data processing pipelines.
✔Experience with cloud object storage (S3, ADLS, GCS, MinIO) and cloud security (IAM/RBAC, networking, resource monitoring).
✔Familiarity with relational and NoSQL databases, distributed frameworks (Spark, Hadoop), and modern data integration patterns.
✔Strong analytical and problem-solving skills with the ability to handle complex data challenges independently.
Required Skills
✔Cloud Platforms: AWS / Azure / GCP (any one).
✔Data Platforms: Snowflake, Databricks, Redshift, Synapse, BigQuery.
✔Orchestration & Transformation: Airflow, DBT, CI/CD, DevOps.
✔Streaming & Monitoring: Kafka, ELK, Grafana.
✔Distributed Processing: Spark, Hadoop ecosystem.
✔Programming: Python, PySpark.
✔BI & Visualization Support: Power BI, DAX basics (optional but beneficial).
✔Data Modelling: Dimensional modelling, Medallion Architecture, SQL optimization.
What Do We Offer?
💰 Competitive compensation
💰 Annual Performance Bonus
⌛️ 5 Working Days with Flexible Working Hours
🌎 Annual trips & Team outings
🚑 Medical Insurance for self & family
🚩 Training & skill development programs
🤘🏼 Work with a global team and make the most of its diverse knowledge
🍕 Several discussions over Multiple Pizza Parties
A lot more! Come and discover us!
Top Skills
Airflow
Azure Synapse
BigQuery
Databricks
DBT
Kafka
Power BI
PySpark
Python
Redshift
Snowflake
SQL