Big Data Engineer with experience using Apache Spark and Scala to build scalable data processing pipelines. Skilled in working with large datasets, implementing ETL workflows, and deploying data solutions on modern big data platforms and cloud environments.
Job Description:
- Design and implement scalable data processing solutions using Spark and Scala.
- Develop and maintain ETL pipelines to process large volumes of structured and unstructured data.
- Collaborate with data engineers, analysts, and product teams to build data-driven solutions.
- Perform data transformation, aggregation, and optimization for analytical workloads.
- Ensure data quality, reliability, and performance across big data environments.
- Participate in code reviews and maintain coding standards.
- Troubleshoot production issues and optimize data processing workflows.
- Deploy and manage data applications on cloud platforms.
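As a rough illustration of the transformation and aggregation work described above, the sketch below expresses a typical aggregation step in plain Scala collections rather than on a live Spark cluster; in a real pipeline the same logic would run over a Spark DataFrame or Dataset. The `Event` type and its field names are hypothetical, chosen only for the example.

```scala
// Minimal sketch of an aggregation step, using plain Scala collections.
// A production Spark job would express the same groupBy/aggregate logic
// over a DataFrame/Dataset; Event and its fields are hypothetical.
case class Event(userId: String, category: String, amount: Double)

object AggregationSketch {
  // Group events by category and compute (total, average) amount per group.
  def totalsByCategory(events: Seq[Event]): Map[String, (Double, Double)] =
    events.groupBy(_.category).map { case (cat, evts) =>
      val total = evts.map(_.amount).sum
      cat -> (total, total / evts.size)
    }

  def main(args: Array[String]): Unit = {
    val events = Seq(
      Event("u1", "books", 10.0),
      Event("u2", "books", 30.0),
      Event("u1", "games", 5.0)
    )
    val result = totalsByCategory(events)
    println(result("books")) // prints (40.0,20.0)
  }
}
```

The collection-level `groupBy`/`map` structure mirrors the shape of a Spark `groupBy(...).agg(...)` call, which is why candidates comfortable with Scala collections typically transition quickly to DataFrame aggregations.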
Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- 4–6 years of experience working with Big Data technologies.
- Strong experience with Apache Spark and Scala programming.
- Experience working with distributed data processing frameworks.
- Knowledge of data warehousing, ETL processes, and data modeling.
- Experience working in Agile development environments.
- Strong analytical and problem-solving skills.