Big Data Developer
Noida, Uttar Pradesh
What You’ll Do:
• Develop and deliver large-scale business applications on both scale-up systems and scale-out distributed systems.
• Handle Hadoop administration, including deploying and maintaining Hadoop clusters and adding or removing nodes using cluster monitoring tools.
• Implement, manage, and administer the overall Hadoop infrastructure.
• Take care of the day-to-day running of Hadoop clusters.
• Design and develop applications on the Big Data platform.
• Implement complex algorithms in a scalable fashion; core data processing skills with tools like Hive/Impala are highly important.
• Write MapReduce or Spark jobs for implementation.
• Write Java-based middle-layer orchestration between the various components of the Hadoop/Spark stack.
• Work closely with product and analytics managers, user interaction designers, and other software engineers to develop new offerings and improve existing ones.
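As an illustration of the map/shuffle/reduce pattern that the MapReduce and Spark jobs above follow, here is a minimal word-count sketch in plain Python. The data and function names are hypothetical; a real job would use the Hadoop or Spark APIs rather than local built-ins.

```python
from functools import reduce
from itertools import chain

def mapper(line):
    # Map phase: emit a (word, 1) pair for each word in a line.
    return [(word.lower(), 1) for word in line.split()]

def reducer(counts, pair):
    # Reduce phase: sum the emitted counts per key.
    word, n = pair
    counts[word] = counts.get(word, 0) + n
    return counts

# Hypothetical input standing in for lines read from HDFS.
lines = ["Hadoop powers big data", "Spark powers big data too"]
pairs = chain.from_iterable(mapper(line) for line in lines)
word_counts = reduce(reducer, pairs, {})
print(word_counts["big"])  # "big" appears once in each input line -> 2
```

On a cluster, the framework handles the shuffle between the two phases and distributes both functions across nodes; the local sketch only shows the programming model.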
What Makes You A Great Fit:
• Experience in building software or web applications with object-oriented or functional programming languages. The language doesn't matter; what counts is a focus on writing clean, well-designed, and scalable MapReduce code.
• Experience with Big Data technologies such as Hadoop, Hive, Spark, or Storm
• Experience with streaming technologies like Kafka, Spark, and Flink
• Experience with scalable systems, large-scale data processing, and ETL pipelines
• Experience with SQL and relational databases such as Postgres or MySQL
• Experience with NoSQL databases like DynamoDB and CloudSearch, or open-source alternatives such as Cassandra, HBase, Solr, or Elasticsearch
• Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test-Driven Development)
• Experience building and deploying applications on both on-premises and AWS cloud-based infrastructure
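To illustrate the kind of SQL and ETL-style aggregation listed above, here is a minimal sketch using Python's built-in sqlite3 module as a stand-in for Postgres or MySQL. The table, columns, and data are hypothetical, chosen only to demonstrate a grouped query.

```python
import sqlite3

# In-memory database standing in for a Postgres/MySQL instance.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, action TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [(1, "click"), (1, "view"), (2, "click"), (2, "click")],
)

# Aggregate clicks per user: a simple ETL-style transformation.
rows = conn.execute(
    "SELECT user_id, COUNT(*) AS clicks FROM events "
    "WHERE action = 'click' GROUP BY user_id ORDER BY user_id"
).fetchall()
print(rows)  # [(1, 1), (2, 2)]
conn.close()
```

The same SQL runs largely unchanged on Postgres or MySQL; only the connection setup differs.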