How can we change the world to make marketing both relevant and impactful? With your help! At Schwarz Media Platform, we are on a mission to build Europe's largest and most advanced ad network for retail - a real-life AdTech application with a big impact on consumers, stores, and advertisers. It is based on Europe's largest retail data pool from Europe's No. 1 retailer, Schwarz Group, and cutting-edge technology that understands individual consumer behavior at scale. If you are interested in this vision and are excited about how data and engineering excellence can help us get there, you will love Schwarz Media Platform.
What you'll do
Work in a cross-functional product team to design and implement data-centric features for Europe's largest Ad Network
Help scale our data stores, data pipelines, and ETLs, handling terabytes of data from one of Europe's largest retail companies
Design and implement efficient data processing workflows
Continue to develop our custom data processing pipeline and continuously look for ways to improve our technology stack as we scale
Develop and standardize a product for measuring the incremental impact of advertising campaigns
Design and deliver market-leading reporting solutions
Leverage Business Intelligence tools to provide internal business insights, supporting strategic decision-making and driving product development initiatives
Extend our reporting platform for external customers and internal stakeholders to measure advertising performance
You will work in a fully remote setup, but you will meet your colleagues in person at company-wide and engineering-specific onsite events
What you’ll bring along
5+ years of professional experience working on data-intensive applications
Fluency in Python and proficiency in SQL
Experience developing scalable data pipelines with Apache Spark
Experience with data visualization tools (e.g., Looker, Tableau, MicroStrategy)
Familiarity with statistical techniques and A/B testing methodologies
Good understanding of efficient algorithms and how to analyze them
Curiosity about how databases and other data processing tools work internally
Ability to write testable and maintainable code that scales
Ability to present findings in a clear, concise manner to both technical and non-technical stakeholders
Familiarity with git
Excellent communication skills and a team-player attitude
Great if you also have
Experience with Kubernetes
Experience with Google Cloud Platform
Experience with Snowflake, BigQuery, Databricks, and Dataproc
Knowledge of columnar databases and file formats like Apache Parquet
Knowledge of "Big Data" technologies like Delta Lake
Experience with workflow management solutions like Apache Airflow
Knowledge of Dataflow / Apache Beam
What happens once you've applied?
1. Application: We love simplicity. Apply quickly and easily online, no registration required.
2. Checkup: Together with the department, we take a look at your documents.
3. Interview: In this first meeting, the focus is on both your personality and your professional fit.
4. Contract offer: Have we convinced each other? Then your employment contract is on its way to you digitally by email.
5. Welcome: Your first day of work is approaching, and your individual onboarding in the team begins. We look forward to seeing you!