• Create and manage scalable data pipelines to collect, process, and store large volumes of data from multiple sources
• Integrate data from diverse systems while ensuring consistency, quality, and reliability
• Design, implement, and optimize database schemas and data structures to support efficient storage and retrieval
• Develop, maintain, and enhance ETL (Extract, Transform, Load) processes to move data accurately and efficiently between systems
• Bachelor’s or Master’s degree in Computer Science, Information Technology, Engineering, Mathematics, or a related field
• 5+ years of hands-on experience with SQL, Python, .NET, SSIS, and SSAS
• 2+ years of experience working with Azure cloud services, including SQL Server, Azure Data Factory (ADF), Azure Databricks, Azure Data Lake Storage (ADLS), Key Vault, Azure Functions, and Logic Apps, with a strong focus on Databricks
• 2+ years of experience using Git for version control and deploying code through CI/CD pipelines
