Location: Madrid
Other locations: Primary Location Only
Requisition ID: 1559530
About Us
At EY wavespace Madrid - Data & AI Hub, we are a diverse, multicultural team at the forefront of technological innovation, working with cutting-edge technologies such as Gen AI, data analytics, and robotics. Our center is dedicated to exploring the future of AI and data.
What We Offer
Join our Data & AI Hub, where you will have the chance to work in a vibrant and collaborative environment. You will engage directly in advanced data engineering, leveraging cutting-edge technologies to drive innovative data solutions and transform business insights. Our team supports your growth and development, providing access to the latest tools and resources.
Tasks & Responsibilities:
1. Lead the design and execution of intelligence transformation projects, ensuring alignment with business goals and objectives.
2. Drive transformation processes towards a data-centric culture, promoting best practices and innovative solutions.
3. Collaborate with cross-functional teams, including risk management, business optimization, and customer intelligence, to deliver impactful data solutions.
4. Architect, design, and implement robust technical solutions across various business domains, ensuring scalability and performance.
5. Mentor and coach junior data engineers, fostering a culture of continuous learning and development within the team.
6. Apply your extensive knowledge and experience to deliver key projects, providing strategic insights and technical guidance.
Key Requirements:
1. Bachelor’s or Master’s degree in Computer Engineering (or related fields), Physics, Statistics, Applied Mathematics, Computer Science, Data Science, Applied Sciences, etc.
2. Fluent in English; proficiency in Spanish or other languages is an asset.
3. Expertise in one or more object-oriented languages, including Python, Scala, or C++.
4. Deep understanding of data-modeling principles and best practices.
5. Extensive experience with relational databases and excellent SQL fluency.
6. Proven experience working with big data technologies (Hadoop, Spark, Hive).
7. Experience working with MS Fabric, Databricks, and/or Snowflake.
Preferred:
1. Hands-on experience with ETL products.
2. Strong background in developing microservices-based architectures and the technologies that enable them (containers, REST APIs, messaging queues, etc.).
3. Experience with team collaboration tools such as Git, Bitbucket, Jira, Confluence, etc.
4. Solid experience with unit testing and continuous integration practices.
5. Familiarity with Scrum/Agile development methodologies and the ability to lead agile teams.
6. Strong critical thinking capabilities, with the ability to see the ‘big picture’ while also diving into the details when necessary.