MerQube

Data Engineer

Reposted 13 Days Ago
In-Office
Bangalore, Bengaluru Urban, Karnataka
Mid level
As a Data Engineer at MerQube, you will design and maintain AWS-based data pipelines and a lakehouse platform, modernizing ETL processes and supporting index calculations for global clients.
About MerQube

MerQube is a cutting-edge fintech firm specializing in the development of advanced technology for indexing and rules-based investing. Founded in 2019 by industry veterans and technology experts, MerQube provides a tech-focused alternative in the indexing space, with offices in New York, San Francisco, and London.

We design and calculate a wide variety of indices, including thematic, ESG, QIS, and delta one strategies, spanning multiple asset classes such as equities, futures, and options. Powered by modern cloud architecture and advanced index-tracking technology, our platform helps clients bring sophisticated ideas to market quickly, securely, and at scale.

Summary

Are you passionate about building robust, scalable data systems that power mission-critical financial platforms? Do you enjoy working hands-on with complex financial datasets, modern data pipelines, and governed data lakes?

We are looking for a Data Engineer to join our growing platform engineering team in Bangalore. You will play a key role in modernizing and scaling MerQube’s core market data and index computation platforms by transforming legacy ETL pipelines into a standardized, cloud-native AWS data platform.

What you’ll work on

As part of the Platform Engineering team, you will design, build, and operate scalable AWS-based data pipelines and a resilient lakehouse platform serving both transactional and analytical workloads. Your work will directly support index construction, analytics, research, and reporting used by global clients.

Core responsibilities:

  • Design, build, and maintain large-scale ETL/ELT pipelines to ingest, normalize, and curate market, reference, and vendor data
  • Modernize legacy ETL frameworks into standardized, cloud-native AWS pipelines
  • Build and manage data lakes and analytics-ready datasets using AWS-native services
  • Clean, standardize, and govern financial instrument identifiers, mappings, corporate actions, and historical data across vendors
  • Design canonical financial data models (facts, dimensions, hierarchies, and mappings)
  • Implement data quality checks, lineage, observability, and validation frameworks to ensure accurate index calculations
  • Develop data catalogs and inventory systems to improve data discoverability and governance
  • Collaborate closely with Product, Index Operations, Research, and Engineering teams to translate financial logic into scalable data pipelines
  • Monitor production data systems, troubleshoot issues, and support on-call rotations as needed

What the position requires

  • Bachelor’s Degree in Computer Science, Engineering, Mathematics, or equivalent experience
  • 4–7 years of experience as a Data Engineer, preferably in fintech, trading, market data, or financial analytics domains
  • Strong programming skills in Python and solid SQL expertise
  • Hands-on experience building batch and/or streaming ETL pipelines
  • Experience working with large, messy, heterogeneous datasets
  • Strong understanding of data modeling concepts (fact/dimension models, canonical models, historical versioning)
  • Experience with cloud platforms, preferably AWS (S3, Glue, Lambda, Step Functions, Athena/Redshift, CloudWatch, IAM)
  • Familiarity with data orchestration tools such as Airflow or similar

Preferred Qualifications

  • Experience with Spark or PySpark for large-scale data transformations
  • Experience with streaming technologies such as Kafka or similar systems
  • Experience working with financial market data providers (e.g., Bloomberg, Refinitiv, Morningstar, Nasdaq)
  • Knowledge of financial instruments, corporate actions, and identifiers (ISIN, CUSIP, RIC, etc.)
  • Experience building data catalogs, governance layers, or migrating legacy pipelines to cloud-native architectures
  • Exposure to containerized environments (Docker, Kubernetes) and modern data warehouses (Redshift, Snowflake, BigQuery, etc.)

Why join MerQube?

We believe in creating an environment where engineers thrive, innovate, and grow. We are proud to offer:

  • Competitive compensation packages and comprehensive benefits
  • Flexible working arrangements
  • Collaborative, community-first culture
  • Strong focus on health, wellness, and work-life balance
  • Opportunities to make a real impact on a fast-scaling fintech platform

Equal Opportunity Employer

MerQube is committed to building a diverse and inclusive workplace. All qualified applicants will be considered without regard to race, color, religion, sex, sexual orientation, gender identity, age, national origin, disability, protected veteran status, or any other legally protected characteristic.

If you’re passionate about building world-class data systems and want to shape the future of indexing technology, we’d love to have you onboard.

Top Skills

Airflow
Athena
AWS
CloudWatch
Glue
IAM
Kafka
Lambda
PySpark
Python
Redshift
S3
Spark
SQL
Step Functions


