MTS 1 Project Lead – Big Data

Urgent
Posted 5 months ago
Chennai, Tamil Nadu
Application deadline closed.

Job Description

MTS 1 Project Lead – Big Data

Paypal India Pvt Ltd

Chennai, Tamil Nadu

Fueled by a fundamental belief that having access to financial services creates opportunity, PayPal (NASDAQ: PYPL) is committed to democratizing financial services and empowering people and businesses to join and thrive in the global economy. Our open digital payments platform gives PayPal's 305 million active account holders the confidence to connect and transact in new and powerful ways, whether they are online, on a mobile device, in an app, or in person. Through a combination of technological innovation and strategic partnerships, PayPal creates better ways to manage and move money, and offers choice and flexibility when sending payments, paying or getting paid. Available in more than 200 markets around the world, the PayPal platform, including Braintree, Venmo and Xoom, enables consumers and merchants to receive money in more than 100 currencies, withdraw funds in 56 currencies and hold balances in their PayPal accounts in 25 currencies.

Job Description Summary

The … Data Central Team is looking for a highly talented, self-driven, proven engineer/lead to help design and build a core data-movement platform that deals with big data (>90 PB) at low latency (i.e. sub-second) for data movement from on-premise to GCS/GCP cloud.

The Data Platform Engineering Team is looking for a highly talented, self-driven, proven software engineer to help design and build core data services that deal with big data (>90 PB) at low latency (i.e. sub-second) for a variety of use cases spanning near-real-time analytics and machine intelligence.

Do you consider yourself a software engineer, coder, hacker, inventor, hackerpreneur, etc.? Are you excited about not just learning new things (languages, frameworks, and technology) but mastering them? Do you care about building clean APIs, solving tough problems, inventing solutions when needed, and building resilient systems? Do you believe that your work should impact not only PayPal but the world beyond our four walls? If yes, we would like to talk with you!

Responsibilities

- Gather and process raw data at scale.
- Design and develop data applications using selected tools and frameworks as required and requested.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources, utilising GCP big data technologies.
- Work with stakeholders, including the Executive, Product, Data and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
- Keep our data separated and secure across national boundaries through multiple data centers and geographic regions.
- Work with data and analytics experts to strive for greater functionality in our data systems.

Requirements

- Programming experience, ideally in Java/Scala, and a willingness to learn new programming languages to meet goals and objectives.
- Experience with Hadoop and streaming-ecosystem technologies such as Spark and Kafka is a must.
- Outstanding implementation skills, with 12 years of experience developing and enhancing scalable server-side components.
- Proficiency in API development.
- Good OOP skills combined with solid SDLC practices such as code reviews, unit testing, integration testing, and continuous integration.
- Strong debugging and problem-solving skills across the full tech stack: language, databases, web servers, and system environment.
- Willingness to take the lead and mentor junior team members.
- Experience with relational databases (Oracle, MySQL) as well as NoSQL databases is a must.
- Experience with Gobblin and GCS/GCP is preferable.