At Cleeng, we work every day to enhance the way people consume online TV and video. Cleeng empowers content owners, broadcasters, and publishers to fully embrace the potential of video! Since our founding in 2011, we’ve helped global brands such as FIFA, Final Fantasy, One Championship, Foxtel, and the Tennis Channel monetize their videos through PPV Live and Subscription VOD. We are growing fast and need exceptionally talented, bright, and driven people ready to take up the challenge with us. If you’d like to help us have an impact, this is your chance to make history.
Working at Cleeng is rewarding, fun, and challenging. We thrive on innovation. If you want to join a fast-growing business and make a real contribution, come join our team and help shape the future of video consumption.
To strengthen our Data Team in Poznan, we’re looking for a talented software engineer with a passion for writing clean code and an understanding of the nuances of data processing.
Your mission: manipulate existing platform data to feed our data lake, and leverage it to build cutting-edge analytics and predictive visualizations.
- Building data manipulation tools to feed the data lake. You will improve, operate, and maintain functional domains of data processing using the AWS stack.
- Creating modules according to best practices.
- Smartly leveraging other tools, frameworks, partners, and people to deliver your tasks effectively and to enhance the interoperability of all data lake features you are accountable for.
- Creating data consistency and quality tests across all our services.
Our requirements:
- Knowledge of event sourcing architecture and similar design patterns.
- Knowledge of Scala and/or Spark
- Experience designing and developing big data and distributed applications.
- At least 2 years of proven commercial coding experience.
- Abstract thinking for understanding, modeling and documenting complex processes.
- A practical, can-do mentality, with a strong ability to get things done.
- Good attitude and good sense of humor. 😉
Nice to have:
- Knowledge of Hadoop stack (knowledge of Amazon EMR will be an advantage)
- Experience in developing and monitoring Spark applications
- Experience in optimizing big data applications (Hadoop, Spark)
- Knowledge of AWS tools
- Experience in building, improving, and training AI & machine learning models.
Our tech stack and organization:
- Our data lake is built using the AWS stack (buzzwords):
  - EMR (Hadoop)
- Our core backend is API-centric, written mainly in PHP 7.3
- Infrastructure is hosted in AWS.
- We organize our work with Scrum.
We love sharing knowledge and improving our skills at internal workshops and training sessions!
How can you apply?