Remote
Python Data Engineer
The Data Engineer will support the Worldwide Sales and Marketing data lakehouse by developing data integration workflows using Python on a Databricks data platform. Operating within a cross-functional agile team, the Engineer will create new data integration code and improve existing code to leverage ADP's vast data resources in our next-generation sales enablement tools. The Engineer will help define and formalize code frameworks that conform to industry best practices.
The Engineer may also perform data wrangling (ingesting, cleansing, and combining data from many sources), enabling additional segmentation, measurement, and activation capabilities across the client experience spectrum. The Engineer may also produce advanced data analysis and data quality metrics as needed.
Main responsibilities:
- Create new data integration code, helping to define best practice design patterns.
- Analyze existing data integration workflows, understand them, and recommend improvements.
- Create and improve processes and standards to automate and optimize data delivery and scalability.
- Provide operational support for existing data integration workflows.
- Produce data analysis and data quality metrics.
Qualifications:
- Python expert with 5+ years of data engineering experience, ideally using Databricks with Amazon Web Services (AWS) and Redshift.
- Extensive experience writing complex SQL queries to extract, transform, and load data from large, heterogeneous data sources to meet business requirements.
- Knowledge of database architecture and schema design to create tables and load data into them.
- Basic knowledge of API configuration to extract data and load it into databases.
- Bachelor's degree in Computer Science or equivalent.