We are looking for an engineer with experience building analysis systems using Map/Reduce, Hadoop, Pig/Hive, Redshift, and similar distributed technologies. You will own our data processing flows: the collection, storage, and processing of the raw data that drives our matching algorithms. The work involves developing systems and tools for data collection, processing, and correction while overcoming the challenges of handling massive amounts of data. The ideal candidate is an autonomous engineer, experienced across a range of technologies, who learns fast and has a "get things done" mentality.
Responsibilities:
- Design data processing architecture for analyzing massive amounts of data in scalable ways
- Implement data processing pipelines with attention to performance, scale, availability, accuracy, and monitoring
- Work closely with the research team to implement scalable matching algorithms
Requirements:
- At least 2 years of industry experience in a similar role
- B.Sc. in Computer Science, or an 8200/Kehiliya alumnus
- Hands-on experience with Big Data technologies such as Hadoop, Map/Reduce, Pig/Hive, Cascading/Scalding, Spark, Giraph, and GraphLab
- Experience with Java/Scala and the surrounding ecosystem (Gradle/Maven etc.)
- Thorough knowledge of and experience with dynamically typed languages such as Python or Ruby
The following will be considered an advantage:
- Experience with AWS
- Proficiency with the Linux shell and Bash scripting
- Online advertising experience
We are looking for top-notch data scientists and software developers interested in furthering their careers at a cutting-edge tech startup. If solving huge, data-centric challenges excites you, send your CV to firstname.lastname@example.org.