Posted: Jul 20, 2020
At Apple, new ideas have a way of becoming extraordinary products, services, and customer experiences very quickly. Bring passion and dedication to your job and there’s no telling what you could accomplish.
Apple’s Manufacturing Systems and Infrastructure (MSI) team, part of Apple’s Operations organization, is responsible for capturing, consolidating, and supervising all manufacturing data for Apple’s products and modules worldwide. This data is stored and used throughout the entire product lifecycle, from prototypes to mass production through warranty support for customers. Our environment fosters product innovation, rapid iteration, and a liberating amount of autonomy. As an expert in developing software to manage large, multifaceted data sets, you’ll build a platform for data ingestion, cleaning, transformation, and evaluation to support a rapidly scaling organization.
Key Qualifications
• 5+ years of professional experience with Big Data systems, pipelines, and data processing
• Hands-on experience with Big Data ingestion and processing using Spark, Spark Streaming, Flink, Hive, Kafka, Hadoop, HDFS, and S3
• Hands-on experience designing and developing with NoSQL technologies such as Cassandra, HBase, or similar scalable key-value stores, and with time-series data stores such as Druid, InfluxDB, or similar
• Understanding of distributed file formats such as Apache Avro and Apache Parquet, and of common data transformation methods
• Proven understanding of the design and development of large-scale, high-efficiency, low-latency applications is a plus
• Understanding of and experience with microservices is desired
• Excellent problem-solving and programming skills
• Experience with containerization technologies such as Kubernetes, Docker, Mesos, and Marathon is desirable
• Experience with CI/CD and with debugging and monitoring applications and Big Data jobs is desirable
Description
– Develop solutions to answer sophisticated analytical and real-time operational questions
– Help to design, architect and build the data platform using a variety of Big Data technologies
– Design and develop applications involving data processing, hygiene, augmentation and transformation for distributed systems
– Identify Data Validation rules and alerts based on data publishing specifications for data integrity and anomaly detection
– Innovate by exploring, recommending, benchmarking, and implementing data centric platform technologies
– Ensure operational and business metric health by monitoring production decision points
– Provide hardware architecture guidance, estimate cluster capacity, and build roadmaps for Hadoop cluster deployment
Education & Experience
B.S., M.S., or Ph.D. in Computer Science, Computer Engineering, or equivalent practical experience.