Synechron

PySpark Data Engineer with Cloudera and Cloud Expertise

Posted 6 Days Ago
In-Office
Thanisandra Nagavara, Bangalore, Karnataka, IND
Senior level

Job Summary
Synechron is seeking a highly experienced PySpark Data Engineer to design, develop, and maintain scalable, high-quality data pipelines within the Cloudera Data Platform (CDP). This role is critical in ensuring reliable data ingestion, transformation, and availability for advanced business analytics, reporting, and data science initiatives. The successful candidate will bring a strong background in big data processing, data architecture, and cloud integration, contributing to data-driven decision-making and operational excellence across the organization.

Software Requirements

  • Required:

    • Advanced proficiency in PySpark, including handling RDDs, DataFrames, Spark SQL, and optimization techniques

    • Hands-on experience with Cloudera Data Platform (CDP) components such as Cloudera Manager, Hive, Impala, HDFS, and HBase

    • Working knowledge of Hadoop ecosystem, Kafka, and distributed data processing tools

    • Experience with SQL-based data warehousing tools like Hive and Impala

    • Scripting skills in Linux (Bash, Python) for automation and operational tasks

    • Familiarity with orchestration and scheduling tools such as Apache Airflow or Oozie

  • Preferred:

    • Knowledge of cloud-native data services (AWS Glue, EMR, Azure Data Factory)

    • Experience with version control systems (Git) and CI/CD pipelines (Jenkins, GitLab CI)

    • Experience with data modeling, data governance, and metadata management tools

Overall Responsibilities

  • Design, develop, and optimize scalable data pipelines using PySpark within the Cloudera Data Platform.

  • Manage end-to-end data ingestion processes from multiple sources (relational databases, APIs, file systems) into data lakes or warehouses.

  • Execute data transformation, cleansing, and aggregation processes supporting analytical and reporting requirements.

  • Conduct performance tuning of Spark jobs and related CDP components to ensure efficient resource utilization.

  • Implement data validation and quality checks, ensuring data accuracy and consistency through monitoring and alerting.

  • Automate data workflows using orchestration tools like Airflow or Oozie to reduce manual intervention.

  • Monitor pipeline performance, troubleshoot failures, and implement corrective actions for operational stability.

  • Collaborate with data architects, analysts, and data scientists to support large-scale analytics initiatives.

  • Document data architecture, pipeline configurations, and operational procedures for ongoing maintenance and governance.

  • Lead data architecture discussions supporting data privacy, security, and compliance standards.

Technical Skills (By Category)

  • Programming Languages (Essential):

    • Python (especially PySpark)

    • SQL for data extraction, validation, and analysis

  • Big Data & Data Management (Essential):

    • Spark (PySpark), Hadoop ecosystem, HDFS, Hive, Impala, HBase

    • Data ingestion and transformation in large distributed environments

  • Cloud & Platform Technologies (Preferred):

    • Cloud-native data processing (AWS EMR, Azure HDInsight, GCP Dataproc)

  • Frameworks & Libraries (Essential):

    • Spark SQL, Spark Streaming

    • Data modeling and governance tools (preferred: Apache Atlas or Collibra)

  • Orchestration & Automation (Preferred):

    • Airflow, Oozie, Jenkins

  • Security & Data Governance (Preferred):

    • Data masking, encryption, access control in distributed systems

Experience Requirements

  • 5+ years of experience as a Data Engineer with deep expertise in PySpark and big data processing

  • Proven experience designing, implementing, and maintaining scalable data pipelines in enterprise environments

  • Strong background with Cloudera Data Platform (CDP) components such as Hive, Impala, HDFS, and HBase

  • Demonstrated ability to optimize Spark jobs and manage high-volume data workflows

  • Experience supporting data processing in cloud environments (AWS, Azure, or GCP) is advantageous

  • Industry experience supporting financial services, banking, or highly regulated sectors is a plus

  • As an alternative pathway, extensive hands-on big data processing experience in data-centric roles, with demonstrated expertise in performance tuning and operational stability, will also be considered

Day-to-Day Activities

  • Develop and optimize Spark (PySpark) data pipelines for ingesting, transforming, and publishing data in large distributed systems.

  • Monitor data workflows and troubleshoot issues proactively to maintain pipeline health.

  • Collaborate with data scientists, analysts, and platform teams to meet data quality, security, and governance standards.

  • Automate operational workflows, including job scheduling, alerting, and resource management.

  • Perform performance tuning of Spark jobs and related components to optimize runtime and resource efficiency.

  • Conduct data validation, anomaly detection, and data quality assessments.

  • Document architecture, data flows, and operational procedures for compliance and knowledge sharing.

  • Support ongoing system upgrades, data privacy initiatives, and cloud migration efforts.
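One common lever for the Spark performance tuning described above is sizing `spark.sql.shuffle.partitions` to the input volume. The helper below is a minimal heuristic sketch: it assumes a target of roughly 128 MB per partition (mirroring the common HDFS block size), which is an illustrative default rather than a fixed Spark recommendation; real tuning also depends on executor memory, core counts, and data skew.

```python
import math

def target_partitions(input_bytes, partition_mb=128, min_parts=2):
    """Suggest a shuffle-partition count so each partition handles
    roughly `partition_mb` of input. The 128 MB default mirrors the
    common HDFS block size; treat it as a starting point, not a rule."""
    parts = math.ceil(input_bytes / (partition_mb * 1024 * 1024))
    return max(parts, min_parts)

# Example: a 10 GiB shuffle at ~128 MB per partition.
suggested = target_partitions(10 * 1024 ** 3)
# The result would then feed:
#   spark.conf.set("spark.sql.shuffle.partitions", str(suggested))
```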

Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Data Engineering, Information Systems, or equivalent

  • 5+ years of hands-on experience in data engineering, with an emphasis on PySpark and big data systems

  • Proven expertise in designing scalable, high-performance data pipelines in enterprise environments

  • Hands-on experience with Cloudera Data Platform (CDP), Hadoop, Hive, Impala, and HBase

  • Strong SQL and data modeling skills within distributed data architectures

  • Experience with cloud data services is a plus

  • Relevant certifications (e.g., AWS Data Analytics Specialty, GCP Professional Data Engineer) are advantageous

  • Strong analytical, troubleshooting, and communication skills

Professional Competencies

  • Critical thinking and analytical mindset for complex data workflows and problem resolution

  • Ability to manage multiple priorities and deliver results in a fast-paced environment

  • Effective collaboration skills for cross-team data initiatives and stakeholder engagement

  • Innovation-driven approach for optimizing and automating data processes

  • Ownership mindset to ensure operational stability and data quality standards

  • Adaptability and continuous learning to keep pace with evolving big data and cloud technologies

SYNECHRON'S DIVERSITY & INCLUSION STATEMENT

Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and is an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative ‘Same Difference’ is committed to fostering an inclusive culture – promoting equality, diversity and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, race, ethnicities, religion, age, marital status, gender, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more.

All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant’s gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.


Top Skills

Apache Airflow
AWS Glue
Azure Data Factory
Bash
Cloudera Data Platform
EMR
Hadoop
HBase
HDFS
Hive
Impala
Oozie
PySpark
Python
SQL


