Ebury is a hyper-growth FinTech firm, named one of the top 15 European FinTechs to work for by AltFi. We offer a range of products including FX risk management, trade finance, currency accounts, international payments, and API integration.
Senior Data Platform Engineer - Fintech
Málaga Office - Hybrid: 4 days in the office, 1 day working from home
Join our Technology Team at Ebury's Málaga office.
Ebury’s strategic growth plan would not be possible without our Data team, and we are seeking a Senior Data Engineer to join our Data Platform Engineering team!
Our data mission is to develop and maintain Ebury's Data Warehouse and serve it to the whole company, where Data Scientists, Data Engineers, Analytics Engineers, and Data Analysts collaborate to:
1. Build ETL jobs and data pipelines that serve data to our platform.
2. Provide clean, transformed data that is ready for analysis and consumed by our BI tools.
3. Develop department and project-specific data models and serve these to teams across the company to drive decision-making.
4. Automate end solutions so we can all spend time on high-value analysis rather than running data extracts.
We are looking for a skilled Senior Data Engineer with a strong focus on building and optimizing data platforms to join our growing team.
In this role, you will be responsible for developing, enhancing, and maintaining robust frameworks and best practices to support our analytics and ML initiatives. You will work closely with data analysts and other engineering teams to ensure our data platform is scalable, secure, and efficient.
Why should you join Ebury?
Want to work in a high-growth environment? We are always growing. Want to build a better world? We believe in inclusion and stand against discrimination in all forms, and we have no tolerance for intolerance of the differences that make us a modern and successful organization.
At Ebury, you will find an internal group dedicated to discussing how we can build a more diverse and inclusive workplace for all people in the Technology Team. So if you're excited about this job opportunity but your background doesn't exactly match the requirements in the job description, we strongly encourage you to apply anyway: you may be just the right candidate for this or other positions we have.
About our technology and data stack
* Google Cloud Platform as our main cloud provider.
* Apache Airflow as our orchestration tool.
* Docker to package and deliver software in containers.
* dbt for data transformation.
* Looker and Looker Studio as BI tools.
* GitHub for code management.
* Jira for project management.
* Synq as our data observability tool.
Along with other third-party tools such as Hevodata and MonteCarlo; a short illustrative sketch of how Airflow and dbt fit together follows below.
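For illustration only (this is not Ebury code): a minimal sketch of how a stack like the one above typically wires Airflow to dbt, with a daily DAG triggering a dbt build. The DAG id, schedule, and project path are hypothetical placeholders.

```python
# Illustrative sketch only: a daily Airflow DAG that triggers a dbt build.
# The DAG id, schedule, and dbt project path are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dwh_daily_build",         # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # Airflow 2.4+ schedule argument
    catchup=False,
) as dag:
    # Run dbt models and tests against the warehouse; the path is a placeholder.
    dbt_build = BashOperator(
        task_id="dbt_build",
        bash_command="dbt build --project-dir /opt/dbt/warehouse",
    )
```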
What we offer:
* A variety of meaningful and competitive benefits to meet your needs.
* Competitive salary.
* Continuous professional growth through regular performance reviews.
* Equity through a performance bonus scheme.
* Paid annual leave in addition to local public holidays.
* Continued personal development through training and certification.
* Being part of a diverse technology team that cares deeply about culture and best practices, and believes in agile principles.
* We are Open Source friendly, following Open Source principles in our internal projects and encouraging contributions to external projects.
Responsibilities:
1. Establish performance monitoring to track the speed and efficiency of data processing and analysis, and address bottlenecks or slowdowns as needed.
2. Participate in data modelling reviews and discussions to validate the model's accuracy, completeness, and alignment with business objectives.
3. Work on reducing technical debt by addressing code that is outdated, inefficient, or no longer aligned with best practices or business needs.
4. Help to implement data governance policies, including data quality standards, data access control, and data classification.
5. Collaborate with data scientists, analysts, and stakeholders to understand data requirements and translate them into platform capabilities.
6. Automate data ingestion, transformation, testing, and integration processes to enhance data accessibility and data quality.
7. Evaluate and integrate new data tools and technologies to continuously improve the platform’s capabilities.
8. Create and maintain detailed documentation on platform architecture, data flows, and operational processes.
9. Collaborate with team members to reinforce best practices across the platform, encouraging a shared commitment to quality.
Experience and qualifications:
* 3+ years of experience as a Data Engineer or in a similar role.
* Proficiency in SQL and Python.