As a Data Engineer, you will have the opportunity to shape the future big data solution landscape for leading Fortune 500 organizations.
This is a senior-level, customer-facing role that requires deep expertise in Apache Spark along with broad experience in big data solution architecture.
On a weekly basis, you will guide customers through architecture, design, and implementation activities while aligning their technical roadmap to expand their use of the Databricks platform.
At Databricks we work on some of the most complex distributed processing systems and our customers challenge us with interesting new big data and AI requirements.
Data Engineers at Databricks are self-motivated, top-tier practitioners with a history of delivering strong business results in technology and consulting.
Because teamwork is a fundamental value at Databricks, a Data Engineer works within a cross-functional team that includes Account Executives and Customer Success Engineers, all while having a direct channel to the original creators of Apache Spark.
Guide strategic customers as they design and implement big data projects, from data transformations to data science and AI, through on-site and remote engagements
Provide technical leadership in pre-sales and post-sales capacities to support customers' successful understanding, evaluation, and adoption of Databricks
Identify and drive new initiatives that enable customers to succeed in turning their data into value
Build reference architectures, frameworks, solutions, how-to’s, and prototypes for customers
Provide escalated level of support for critical customer operational issues
Architect, implement, and/or validate migration of workloads from third-party databases and data platforms to Apache Spark
Evangelize Spark and Databricks across the developer community through meetups and conferences
Plan and coordinate weekly with Account Executives, Customer Success Engineers, and Solution Architects to expand the use of the Databricks platform within strategic enterprise customers
Deep hands-on technical expertise with Apache Spark
5+ years of design and implementation experience with big data technologies (Hadoop ecosystem, Kafka, NoSQL databases)
3-5 years in a customer-facing pre-sales, technical architecture, or consulting role
Willingness to travel up to 30% of the time
Familiarity with data architecture patterns (data warehouse, data lake, streaming, Lambda / Kappa architecture)
Outstanding verbal and written communication skills; comfortable communicating up and down the IT chain of command, including directors, managers, architects, and developers
Passionate about learning new technologies and making customers successful
Excellent presentation and whiteboarding skills
Comfortable coding in Python, Scala, or Java
Familiarity with AWS / EC2 cloud deployment models (Public vs. VPC)
BS / MS in Computer Science or equivalent
Proven track record within a data platform software vendor in a consulting / services function
Experience working as or with Data Scientists
Experienced with performance tuning, troubleshooting, and debugging Spark and / or other big data solutions
Familiarity with industry database and analytics technologies, including data warehousing/ETL, relational databases, or MPP systems