Big Data Engineer

Ideal Candidate:

The ideal candidate is flexible, eager to learn, and committed to providing the highest level of professional service.

  • 3-9 years of overall experience
  • 3+ years of experience with the Hadoop stack and utilities: HDFS, HBase/NoSQL databases, Hive/Spark/Impala/Sqoop, and Hadoop Streaming
  • Experience in Linux/UNIX scripting and one or more programming languages: Java, Scala, Python, or Perl
  • Experience with query languages and tools such as SQL, HiveQL, Spark SQL, and Sqoop
  • Experience with storage and processing optimization techniques in Hadoop/Hive/Spark
  • Experience with non-functional requirements of Hadoop: performance tuning, scalability, and monitoring
  • Familiarity with Big Data security (Kerberos/Ranger/SSL/Knox) and data governance (Atlas/Studio)
  • Familiarity with containerization: Docker, Kubernetes/OpenShift
  • Familiarity with tools such as Jenkins for CI and Git for version control
  • Nice to have: experience with other Hadoop-ecosystem technologies such as Solr, Storm, and NiFi
  • Demonstrable aptitude, analytical ability, and problem-solving skills; comfortable working with data
  • A self-starter who works well within a team and can mentor team members

How You Will Grow:

xfactrs believes in supporting you and your career. We will encourage you to grow by providing professional development opportunities across multiple business functions. Please visit our company website to learn more about our DNA. We look forward to you joining our Growth Chapter.

Share your resume

Send us your details and our team will get back to you shortly to schedule a conversation.