Data Engineering Lead (Azure / Databricks / Spark) - 50-60k

Job description

Responsibilities:

  • Engineering Support & Technical Management: Oversee engineering support and manage the technical team to ensure quality outcomes.
  • Data Strategy Development: Develop and implement a data strategy leveraging Azure Data, Delta Lake, Azure Databricks, and Azure Data Factory for advanced data processing, storage, and analytics.
  • Databricks Platform Leadership: Lead the development of data pipelines and big data solutions using Azure Databricks, focusing on components such as Databricks Runtime, cluster setup, workflows, Delta Live Tables, and Unity Catalog. Optimize cluster performance and resource utilization.
  • ETL/ELT Process Management: Design, develop, and maintain robust ETL/ELT processes using Azure Data Factory and Azure Databricks for seamless data integration.
  • Data Governance & Security: Implement data governance best practices using Databricks Unity Catalog and Azure Purview, ensuring compliance and data integrity. Manage data access frameworks and security measures using Azure Key Vault and role-based access controls.
  • Collaboration & Stakeholder Communication: Collaborate with cross-functional teams to understand data needs and provide tailored solutions. Communicate technical specifications clearly to technical and non-technical stakeholders.
  • People Management: Train colleagues on data and analytics best practices, and influence IT teams to optimize data pipeline processes.
  • Monitoring & Optimization: Continuously monitor and optimize the cost and performance of Databricks environments, implementing techniques for fast and reliable data access.

Requirements:

  • University/College graduate in Computing/IT or a related discipline.
  • Minimum 5 years of relevant experience, with at least 3 years of hands-on experience using Databricks.
  • In-depth knowledge of Azure Databricks, including proficiency in Apache Spark, PySpark, and Delta Lake.
  • Familiarity with Azure Data Factory for building complex data pipelines.
  • Proficiency in SQL, Python, and PySpark for optimizing data workflows.
  • Understanding of the insurance industry is preferred.
  • Experience with Azure Purview for data governance and familiarity with security tools like Azure Key Vault and Azure Active Directory.
  • Strong skills in performance optimization techniques for Databricks and Azure SQL.

Please send your CV to Alexandra via alexandra.leung@ambition.com.hk or APPLY NOW
Data provided is for recruitment purposes only.

If this job isn't quite right for you, but you know someone who would be great at this role, why not take advantage of our referral scheme? We offer HKD1000 in Apple gift cards for every referred candidate who we place in a role. Terms & Conditions Apply. https://www.ambition.com.hk/refer-a-friend