Databricks Developer

Job Type: Full Time

Location: Sydney

Delivery Centric is seeking an experienced Databricks Developer to engineer scalable, high-performance data solutions across enterprise platforms. This role drives end-to-end data pipeline development, optimizes Lakehouse frameworks, and ensures secure, reliable, and efficient data operations. You’ll collaborate closely with cross-functional teams to translate business requirements into resilient data architectures that accelerate decision-making and digital transformation.

Key Responsibilities

  • Design, develop, and maintain robust data pipelines and Lakehouse solutions using Databricks and the Delta Lake architecture.
  • Build scalable ETL workflows integrating Databricks with orchestration tools and enterprise data ecosystems.
  • Develop high-performance Spark-based transformations using DataFrame APIs, Spark SQL, and Python/Scala.
  • Leverage Databricks-native capabilities including Workflows, Delta Live Tables, and Unity Catalog for governance and automation.
  • Configure and optimize Databricks workspaces, clusters, and jobs for performance, cost efficiency, and operational reliability.
  • Implement data security, lineage, audit, and monitoring practices aligned with enterprise governance standards.
  • Develop and maintain streaming solutions using Spark Structured Streaming and Auto Loader across diverse ingestion platforms.
  • Contribute to the design of conceptual, logical, and physical data models supporting analytical and operational workloads.
  • Stay current with Databricks best practices, emerging capabilities, and modern data engineering trends.

Qualifications

  • 5+ years in IT, including 3+ years of hands-on Databricks design, development, and operational support.
  • Proven expertise in Lakehouse architecture, including Delta tables, schema evolution, ACID transactions, table versioning, and history tracking.
  • Strong command of the Spark ecosystem, including the DataFrame API, Spark SQL, and Python/Scala programming, plus strong SQL proficiency.
  • Experience integrating Databricks with ETL/orchestration tools for enterprise-scale data operations.
  • Hands-on exposure to Databricks Workflows, Delta Live Tables, Unity Catalog, and platform-native automation.
  • Strong knowledge of cluster configuration, performance tuning, cost management, data security, and monitoring.
  • Experience delivering streaming pipelines using Structured Streaming and Auto Loader integrations.
  • Understanding of data modeling frameworks across conceptual, logical, and physical layers.
  • Bonus: Exposure to AI/GenAI workloads and Databricks/Cloud certifications (associate or professional level).

Join a high-performing data and cloud engineering practice where innovation, automation, and enterprise-scale delivery drive measurable business impact. You’ll work with modern platforms, proven frameworks, and a team committed to excellence in data engineering, analytics, and cloud transformation.