Data Engineer - Madrid, Spain - Ebury

Description

Ebury is a hyper-growth FinTech firm, named one of the top FinTechs to work for by Glassdoor and AltFi. We offer a range of products including FX risk management, trade finance, currency accounts, international payments and API integration.

Data Engineer - Data Platform Engineering
Madrid - 4 days in the office

Ebury is a FinTech success story, positioned among the fastest-growing international companies in its sector. Headquartered in London, we have more than 1, staff covering over 50 nationalities (and counting) working across more than 27 offices worldwide and serving more than 45, clients every day.

Ebury's strategic growth plan would not be possible without our Data team, and we are seeking a Data Engineer to join our Data Platform Engineering team.

Our data mission is to develop and maintain Ebury's Data Warehouse and serve it to the whole company, where Data Scientists, Data Engineers, Analytics Engineers and Data Analysts work collaboratively to:
- Build ETLs and data pipelines to serve data on our platform
- Provide clean, transformed data ready for analysis and use by our BI tool
- Develop department- and project-specific data models and serve these to teams across the company to drive decision making
- Automate end solutions so we can all spend time on high-value analysis rather than running data extracts

Our Data Analytics stack:
- Google Cloud Platform for all of our analytics infrastructure
- dbt for our data modelling and warehousing
- Looker and Looker Studio for Business Intelligence
- Monte Carlo for our monitoring and optimisation infrastructure

As a Data Engineer, you will be an integral part of our data team, working closely with the rest of the team to help model and maintain the Data Platform. This role also encompasses important responsibilities in quality assurance, data governance, and promoting best practices to ensure that data-related processes are efficient, secure, and adhere to industry standards.

As part of the team you will:
- Design, develop, deploy and maintain ELT/ETL data pipelines from a variety of data sources (transactional databases, REST APIs, file-based endpoints).
- Manage overall pipeline orchestration using Airflow, as well as execution using GCP-hosted services such as Cloud Run, Cloud Functions, and GKE.
- Serve hands-on delivery of data models using solid software engineering practices (e.g. version control, testing, CI/CD).
- Help to implement and maintain a robust testing and validation framework that aligns with industry best practices and organisational requirements.
- Establish performance monitoring to track the speed and efficiency of data processing and analysis, and address bottlenecks or slowdowns as needed.
- Participate in data modelling reviews and discussions to validate the model's accuracy, completeness, and alignment with business objectives.
- Work on reducing technical debt by addressing code that is outdated, inefficient, or no longer aligned with best practices or business needs.
- Collaborate with team members to reinforce best practices across the platform, encouraging a shared commitment to quality.
- Help to implement data governance policies, including data quality standards, data access control, and data classification.
- Identify opportunities to optimise and refine existing processes.

What we're looking for:
- 1+ years of data/analytics engineering experience building, maintaining and optimising data pipelines and ETL processes in big data environments
- Proficiency in SQL and Python
- Experience with our modern data stack tools (a plus): we use dbt, Google Cloud Platform (BigQuery, Data Studio) and Looker
- Familiarity with dimensional modelling and data warehousing concepts
- Basic understanding of data governance practices
- Experience with software engineering practices in data
- Attention to detail and commitment to data quality
- A drive to stay informed about the latest developments and industry standards in analytics engineering
- Fluency in English (Spanish a plus)