Strong knowledge of Python and Spark.
Minimum 3 years of extensive experience in designing, building, and deploying PySpark-based applications.
Hands-on experience writing complex SQL queries and exporting/importing large volumes of data using database utilities.
Strong knowledge of SQL and PL/SQL (especially stored procedures).
Hands-on experience generating and parsing XML and JSON documents and REST API requests/responses.
Ability to build abstract, modular, reusable code components.
Expertise in handling complex, large-scale Big Data environments (preferably 20 TB+).
Understand the current application infrastructure and suggest changes to it.
Define and document best practices and strategies regarding application deployment and infrastructure maintenance.
Should be able to understand the complete SDLC process.
Should be able to estimate and manage their own tasks and work independently.
Experience with Hadoop ecosystem components (e.g., Hadoop, Storm, Kafka, Cassandra, Spark, Hive, Pig) is a plus.
Excellent communication skills and strong customer focus.