
Ab Initio Engineer - Data Platform

New York, New York


Description

Xandr powers the real-time sale and purchase of digital advertising. We simplify the most sophisticated machine learning and data science capabilities to help our customers unlock their greatest potential. Our data platform team handles 15 billion transacted ad requests per day and up to 5 terabytes of data per hour. We are looking for a senior engineer to elevate our data ingestion and pipeline processing, with an emphasis on high-performance, optimized data processing designs spanning streaming, MapReduce, and parallelized ETL, to support faster, higher-volume delivery of actionable data for our stakeholders.

About the job:

• The Ab Initio Engineer will partner with the Transaction Logics (TL) Team, which is responsible for data transformation, integration, and enrichment within the data platform. The tool used for this is Ab Initio.

• Most of our Data Platform is implemented in Java, and TL processing is being migrated from Java MapReduce jobs to Ab Initio.

• There is still much work required to refactor these migrated jobs in Ab Initio to optimize and streamline their processing.

• It will be vital for the Ab Initio Engineer to have an appreciation of the Java language and the origins of these migrated jobs in order to successfully refactor and optimize them.

• There are additional data pipeline jobs in MapReduce and Scala that need to be migrated to Ab Initio as well.

• Take ownership of core components of data ingestion & integration processing that ingest ~20 billion events per hour, with data volumes in excess of 2 TB per hour.

• Design and implement new features and enhancements to our data platform in an intelligent fashion that advances component architecture toward strategic data governance/management objectives

• Work with world class high frequency/low latency design experts to perform in-depth analysis and optimization of data pipeline components to ensure smooth execution within strict time and resource limitations

• Work closely with product stakeholders and users to understand data and reporting requirements

• Prioritize bug fixes to ensure critical up-time

• Serve as a mentor and guide for other team members


Qualifications

• BA/BS/MS degree and 5-15 years of experience; we’ll designate the level of the role based on depth of expertise. (Degree in Computer Science or a related field preferred)

• Must be a capable developer in a compiled language (Java, C#, C++, or C), with a deep understanding of automated, distributed, and highly scalable design patterns.

• Experience with Big Data, ETL and distributed systems using technologies such as Hadoop, Spark, Python, Java, MapReduce, Scala and Ab Initio.

• Experience and achievements with Big Data deployments and/or extremely high-frequency/low-latency projects.

• Familiarity with Enterprise Data Management & Governance practices or Data Architecture methodologies is a plus.


More about you:


• You are passionate about a culture of learning and teaching. You love challenging yourself to constantly improve, and sharing your knowledge to empower others

• You like to take risks when looking for novel solutions to complex problems. If faced with roadblocks, you continue to reach higher to make greatness happen

• You care about solving big, systemic problems. You look beyond the surface to understand root causes so that you can build long-term solutions for the whole ecosystem

• You believe in not only serving customers, but also empowering them by providing knowledge and tools


Job ID 1930260 Date posted 06/11/2019

#XandrLife

#XandrLife means we’re creating an incredible experience for our people, too. Let our employees show you what it’s really like to work here.
