Data Engineer
Talent Genie
- Pretoria, Gauteng
- Permanent
- Full-time
Responsibilities:
- Design, build, and maintain CI/CD pipelines for automated code deployment.
- Manage and optimize our PostgreSQL databases, ensuring high availability and performance.
- Develop and maintain scalable and efficient data processing pipelines using Apache Spark.
- Write robust, efficient, and maintainable code in Python for various data and DevOps tasks.
- Deploy, manage, and scale applications using Kubernetes, ensuring seamless deployment and operation.
- Utilize Git/GitLab for version control and collaboration with the development team.
- Monitor system performance, troubleshoot issues, and implement solutions to ensure optimal operation and security.
- Stay up to date with emerging trends in DevOps, big data technologies, and cloud computing to drive continuous improvement and innovation within the team.
Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- At least 5 years of experience in a DevOps role, with a proven track record of developing and maintaining scalable infrastructure.
- Strong experience with PostgreSQL, Apache Spark, and Python is mandatory.
- In-depth knowledge of Kubernetes (K8s), including deployment, scaling, and management of containerized applications.
- Proficient in Git/GitLab for version control and collaborative development.
- Solid understanding of CI/CD pipelines and automation tools.
- Excellent problem-solving skills and the ability to work independently or as part of a team.
- Strong communication and collaboration skills, with the ability to interact effectively with different stakeholders.
Advantageous:
- Experience with cloud services (AWS, GCP, Azure) and their managed services.
- Certifications in Kubernetes, cloud technologies, or DevOps methodologies.
- Experience working with big data technologies beyond Apache Spark.
- A passion for learning and adapting to new technologies and challenges.
JobPlacements.com