
MerQube

Data Engineer

Posted 19 Days Ago
In-Office
Bangalore, Bengaluru Urban, Karnataka
Mid level
About MerQube

MerQube is a cutting-edge fintech firm specializing in the development of advanced technology for indexing and rules-based investing. Founded in 2019 by industry veterans and technology experts, MerQube provides a tech-focused alternative in the indexing space, with offices in New York, San Francisco, and London.

We design and calculate a wide variety of indices, including thematic, ESG, QIS, and delta one strategies, spanning multiple asset classes such as equities, futures, and options. Powered by modern cloud architecture and advanced index-tracking technology, our platform helps clients bring sophisticated ideas to market quickly, securely, and at scale.

Summary

Are you passionate about building robust, scalable data systems that power mission-critical financial platforms? Do you enjoy working hands-on with complex financial datasets, modern data pipelines, and governed data lakes?

We are looking for a Data Engineer to join our growing platform engineering team in Bangalore. You will play a key role in modernizing and scaling MerQube’s core market data and index computation platforms by transforming legacy ETL pipelines into a standardized, cloud-native AWS data platform.

What you’ll work on

As part of the Platform Engineering team, you will design, build, and operate scalable AWS-based data pipelines and a resilient lakehouse platform serving both transactional and analytical workloads. Your work will directly support index construction, analytics, research, and reporting used by global clients.

Core responsibilities:

  • Design, build, and maintain large-scale ETL/ELT pipelines to ingest, normalize, and curate market, reference, and vendor data
  • Modernize legacy ETL frameworks into standardized, cloud-native AWS pipelines
  • Build and manage data lakes and analytics-ready datasets using AWS-native services
  • Clean, standardize, and govern financial instrument identifiers, mappings, corporate actions, and historical data across vendors
  • Design canonical financial data models (facts, dimensions, hierarchies, and mappings)
  • Implement data quality checks, lineage, observability, and validation frameworks to ensure accurate index calculations (illustrated in the sketch after this list)
  • Develop data catalogs and inventory systems to improve data discoverability and governance
  • Collaborate closely with Product, Index Operations, Research, and Engineering teams to translate financial logic into scalable data pipelines
  • Monitor production data systems, troubleshoot issues, and support on-call rotations as needed
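To make the responsibilities above more concrete, here is a minimal, purely illustrative sketch (not MerQube’s actual code or stack) of how an ingest → normalize → validate pipeline of this kind might be expressed with Airflow’s TaskFlow API and pandas, assuming a recent Airflow 2.x. All names in it (the DAG, staging paths, and columns such as isin and close_price) are assumptions made for the example; a real pipeline would pull from S3 and publish to the lakehouse using the AWS services named in this posting.

```python
# Hypothetical sketch of an ingest -> normalize -> validate pipeline.
# Paths, bucket layout, and column names are illustrative assumptions.
from datetime import datetime
from pathlib import Path

import pandas as pd
from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def vendor_prices_pipeline():
    @task
    def extract() -> str:
        # In a real pipeline this step would download the day's raw vendor
        # file from S3 (e.g. with boto3); here it just returns a staged path.
        return "/tmp/staging/vendor_prices_raw.csv"

    @task
    def normalize(raw_path: str) -> str:
        # Standardize column names and persist a curated Parquet copy.
        df = pd.read_csv(raw_path)
        df.columns = [c.strip().lower() for c in df.columns]
        curated_dir = Path("/tmp/curated")
        curated_dir.mkdir(parents=True, exist_ok=True)
        curated_path = curated_dir / "vendor_prices.parquet"
        df.to_parquet(curated_path, index=False)
        return str(curated_path)

    @task
    def validate(curated_path: str) -> str:
        # Simple data-quality gates: unique identifiers, positive prices.
        df = pd.read_parquet(curated_path)
        if df["isin"].duplicated().any():
            raise ValueError("duplicate ISINs in vendor file")
        if not (df["close_price"] > 0).all():
            raise ValueError("non-positive prices detected")
        return curated_path

    validate(normalize(extract()))


vendor_prices_pipeline()
```

In production, the same shape would typically be realized with the AWS services listed in this posting (S3 for storage, Glue or Step Functions for heavier jobs, Athena/Redshift for querying), with lineage and observability layered on top.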
What the position requires
  • Bachelor’s Degree in Computer Science, Engineering, Mathematics, or equivalent experience
  • 4–7 years of experience as a Data Engineer, preferably in fintech, trading, market data, or financial analytics domains
  • Strong programming skills in Python and solid SQL expertise
  • Hands-on experience building batch and/or streaming ETL pipelines
  • Experience working with large, messy, heterogeneous datasets
  • Strong understanding of data modeling concepts (fact/dimension models, canonical models, historical versioning)
  • Experience with cloud platforms, preferably AWS (S3, Glue, Lambda, Step Functions, Athena/Redshift, CloudWatch, IAM)
  • Familiarity with data orchestration tools such as Airflow or similar
Preferred Qualifications
  • Experience with Spark or PySpark for large-scale data transformations
  • Experience with streaming technologies such as Kafka or similar systems
  • Experience working with financial market data providers (e.g., Bloomberg, Refinitiv, Morningstar, Nasdaq)
  • Knowledge of financial instruments, corporate actions, and identifiers (ISIN, CUSIP, RIC, etc.)
  • Experience building data catalogs, governance layers, or migrating legacy pipelines to cloud-native architectures
  • Exposure to containerized environments (Docker, Kubernetes) and modern data warehouses (Redshift, Snowflake, BigQuery, etc.)

Why join MerQube?

We believe in creating an environment where engineers thrive, innovate, and grow. We are proud to offer:

  • Competitive compensation packages and comprehensive benefits
  • Flexible working arrangements
  • Collaborative, community-first culture
  • Strong focus on health, wellness, and work-life balance
  • Opportunities to make a real impact on a fast-scaling fintech platform
Equal Opportunity Employer

MerQube is committed to building a diverse and inclusive workplace. All qualified applicants will be considered without regard to race, color, religion, sex, sexual orientation, gender identity, age, national origin, disability, protected veteran status, or any other legally protected characteristic.

If you’re passionate about building world-class data systems and want to shape the future of indexing technology, we’d love to have you onboard.

Top Skills

Airflow
Athena
AWS
CloudWatch
Glue
IAM
Kafka
Lambda
PySpark
Python
Redshift
S3
Spark
SQL
Step Functions


