Data Engineer with experience in SQL, ETL, and Big Data technologies, working with relational databases such as PostgreSQL and SQL Server. Skilled in building scalable data pipelines, performing data transformations, and deploying data applications on AWS or Azure cloud platforms using technologies like Spark and Scala.
Job Description:
- Implement end-to-end features and user stories, from analysis and design through feature build, validation, deployment, and post-deployment support
- In-depth understanding of and expertise in relational databases (PostgreSQL / SQL Server)
- Proficiency in writing advanced SQL queries (CTEs and Window Functions)
- Proficiency in writing stored procedures, triggers, and procedural SQL blocks (e.g., PL/pgSQL or T-SQL)
- Experience with query optimization techniques
- Experience with data processing, storage, and retrieval services on the AWS cloud platform
- Proficiency in creating data transformations and aggregations
- Experience building data pipelines on AWS or Azure cloud platforms
- Expertise in programming languages such as Scala
- Deploy data applications on cloud platforms (AWS / Azure)
- Perform code review and provide meaningful feedback to improve code quality
- Possess (or acquire) strong troubleshooting skills and an interest in troubleshooting issues across disparate technologies and environments
- Help teams with complex and unusual bugs and troubleshooting scenarios
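The SQL proficiencies listed above (CTEs, window functions, aggregations) can be illustrated with a minimal sketch. The `orders` table, its columns, and the use of SQLite as the backend are illustrative assumptions only, not part of the role; the same query pattern applies in PostgreSQL or SQL Server.

```python
import sqlite3

# In-memory SQLite database with a small, hypothetical orders table.
# Table and column names are invented for this example.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, order_date TEXT, amount REAL);
    INSERT INTO orders VALUES
        ('alice', '2024-01-05', 120.0),
        ('alice', '2024-02-10',  80.0),
        ('bob',   '2024-01-20', 200.0),
        ('bob',   '2024-03-02',  50.0);
""")

# A CTE aggregates per-customer totals; a window function then ranks
# customers by total spend. (Window functions need SQLite >= 3.25.)
rows = conn.execute("""
    WITH customer_totals AS (
        SELECT customer, SUM(amount) AS total
        FROM orders
        GROUP BY customer
    )
    SELECT customer,
           total,
           RANK() OVER (ORDER BY total DESC) AS spend_rank
    FROM customer_totals
    ORDER BY spend_rank
""").fetchall()

for customer, total, rank in rows:
    print(customer, total, rank)
# bob 250.0 1
# alice 200.0 2
```

The same separation of concerns (aggregation in a CTE, ranking in a window function) keeps the query readable and is a common building block in the data transformations the role describes.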
Qualification:
- Bachelor’s degree in Computer Science, Information Technology, or equivalent experience
- 4 to 6 years of working experience with relational databases (PostgreSQL / SQL Server), Spark, Scala, AWS data services, Kubernetes, and CI/CD
- Strong knowledge of SQL, ETL, OLAP, and Big Data technologies
- Understanding of agile methodology
- Strong written and verbal communication skills, including explaining complex concepts effectively to technical and non-technical audiences