About MIGx
MIGx is a global consulting company focused exclusively on the healthcare and life science industries, with their particularly demanding quality and regulatory requirements. We manage challenges and solve problems for our clients in areas such as compliance and business processes.
MIGx's interdisciplinary teams in Switzerland, Spain and Georgia deliver projects in the fields of M&A, integration, applications, data platforms, processes, IT management, digital transformation, managed services and compliance.
About the profile
We are looking for a data enthusiast who enjoys transforming and organizing structured and unstructured data, and who wants to work on state-of-the-art data fabric and data mesh projects.
Project Description
In this role you will work as a Data Engineer on complex projects involving multiple data sources and formats. You will be part of a larger team at MIGx responsible for Data Services and for building Data Products for our customers (mid-size to large enterprises). You will have the opportunity to keep growing in all things data related. You will participate in building the overall Data Mesh architecture for the customer while focusing on one specific visualization project, with more to come.
Responsibilities:
* Develop ETL pipelines in Python and Azure Data Factory as well as their DevOps CI/CD pipelines.
* Perform software engineering and systems integration via REST APIs and other standard interfaces.
* Work together with a team of professional engineers to:
* develop data pipelines and automate processes,
* build and deploy infrastructure as code,
* manage the solutions designed across multi-cloud systems.
* Participate in agile ceremonies, including weekly demos.
* Communicate your daily commitments.
* Configure and connect different data sources, in particular SQL databases.
Requirements:
* Must have
* Studies in Computer Science (BSc and/or MSc desired).
* 3+ years of practical experience working in similar roles.
* Proficient with ETL products (Spark, Databricks, Snowflake, Azure Data Factory, etc.)
* Proficient with Azure Data Factory.
* Proficient with Databricks/Snowflake and PySpark.
* Proficient in developing DevOps CI/CD pipelines.
* Proficient with Azure DevOps Classic/YAML Pipelines.
* Proficient with Azure cloud services: ARM templates, API management, App Service, VMs, AKS, Gateways.
* Advanced SQL knowledge and background in relational databases such as MS SQL Server, Oracle, MySQL, and PostgreSQL
* Understanding of landing, staging area, data cleansing, data profiling, data security and data architecture concepts (DWH, Data Lake, Delta Lake/Lakehouse, Datamart)
* Data Modeling skills and knowledge of modeling tools.
* Advanced programming skills in Python.
* Ability to work in an agile development environment (SCRUM, Kanban)
* Understanding of CI/CD principles and best practices
* Nice to have
* Proficient with .NET C#
* Terraform.
* Bash/Powershell
* Data Vault Modeling
* Familiar with GxP.
* Programming skills in other languages.
What we offer:
* Hybrid work model and a flexible working schedule that suits both night owls and early birds
* 25 holiday days per year
* Free English classes
* Possibilities for career development and the opportunity to shape the company's future
* An employee-centric culture directly inspired by employee feedback - your voice is heard, and your perspective encouraged
* Different training programs to support your personal and professional development
* Work in a fast-growing, international company
* Friendly atmosphere and a supportive management team