- Strong knowledge of Python and Spark
- Minimum 3 years of extensive experience in the design, build, and deployment of PySpark-based applications
- Hands-on experience writing complex SQL queries and exporting/importing large amounts of data using utilities
- Strong knowledge of SQL and PL/SQL (especially stored procedures)
- Hands-on experience generating/parsing XML and JSON documents and REST API requests/responses
- Ability to build abstracted, modularized, reusable code components
- Expertise in handling complex, large-scale Big Data environments (preferably 20 TB+)
- Ability to understand the current application infrastructure and suggest changes to it
- Ability to define and document best practices and strategies for application deployment and infrastructure maintenance
- Understanding of the complete SDLC process
- Able to estimate and manage his/her own tasks and to work independently
- Any experience with Hadoop ecosystem components (e.g. Hadoop, Storm, Kafka, Cassandra, Spark, Hive, Pig, etc.) is a plus
- Excellent communication skills and good customer centricity