Big Data Development Engineer – Hybrid (Madrid)
We are looking for a skilled Big Data Engineer to join our team. If you have experience in Scala, Spark, Python, SQL, and GCP, this could be the perfect opportunity for you.
Location & Work Setup:
* Hybrid role – 50% onsite in Madrid
Responsibilities:
* Design and develop software solutions for big data processing
* Work with Scala, Spark, Python, SQL, and GCP (Cloud Storage, Dataproc, Cloud Functions)
* Implement data processing workflows and ETL pipelines
* Collaborate with cross-functional teams including Finance, IT, and Engineering
* Ensure high performance, security, and scalability of data systems
* Work with Datahub (CrushFTP), Appian, Django, PostgreSQL, MySQL, PowerBI, NiFi, and JavaScript on Linux (CentOS)
* Participate in testing, documentation, and continuous improvement of data processes
Requirements:
* Minimum 3 years of experience in Big Data Engineering
* Strong programming skills in Python, Scala, and SQL
* Experience with GCP (Cloud Storage, Dataproc, Cloud Functions)
* Hands-on experience with ETL pipelines, MySQL, and PowerBI
* Excellent analytical and problem-solving skills
* Strong communication and teamwork abilities
No sponsorship available.
Apply now to be part of an exciting journey in Big Data Engineering.