**Job Title:** Data Engineer
**Company:** MIGx
**Location:** Batumi
**Job Description:**
MIGx is seeking a Data Engineer to join its growing Data and AI Engineering team. The role involves building and managing ETL/ELT pipelines, developing scalable data platforms in cloud environments (Azure, AWS, GCP), and implementing DataOps practices. The Data Engineer will be responsible for integrating automated data quality checks, ensuring data observability, and collaborating with agile teams.
**Responsibilities:**
* Build and manage ETL/ELT pipelines using tools like Databricks, dbt, PySpark, and SQL.
* Contribute to scalable data platforms across cloud environments (Azure, AWS, GCP).
* Implement and maintain CI/CD workflows using tools such as GitHub Actions and Azure DevOps.
* Apply DataOps principles: pipeline versioning, testing, lineage, deployment automation, and monitoring.
* Integrate automated data quality checks, profiling, and validation into pipelines.
* Ensure strong data observability via logging, metrics, and alerting tools.
* Collaborate on infrastructure as code for data environments using Terraform or similar tools.
* Connect and orchestrate ingestion from APIs, relational databases, and file systems.
* Work in agile teams, contributing to standups, retrospectives, and continuous improvement.
**Requirements – Must have:**
* Experience with cloud-native data engineering using tools such as Databricks, dbt, PySpark, and SQL.
* Comfort working with at least one major cloud platform (Azure, AWS, GCP) — and openness to others.
* Hands-on experience with CI/CD automation, especially with GitHub Actions or Azure Pipelines.
* Strong Python programming skills for transformation, scripting, and automation.
* Working knowledge of data quality, validation frameworks, and test-driven data development.
* Familiarity with observability practices including metrics, logging, and data lineage tools.
* Understanding of DataOps concepts, including reproducibility, automation, and collaboration.
* Team-first mindset and experience in agile environments (Scrum or Kanban).
* Professional working proficiency in English.
**Requirements – Nice to have:**
* Experience with Snowflake or similar cloud data warehouses.
* Knowledge of data lineage tools and frameworks.
* Infrastructure automation using Terraform, Bash, or PowerShell.
* Exposure to data modeling techniques like Data Vault or dimensional modeling.
* Familiarity with data testing tools.
* Understanding of GxP or other healthcare data regulations.
* Experience with non-relational data systems (e.g., MongoDB, CosmosDB).
**What we offer:**
* Excellent compensation package
* Family insurance package
* Modern office in a convenient location
* Opportunities for career development and the chance to shape the company's future
* An employee-centric culture
* A variety of training programs
* Work in a fast-growing, international company
* Friendly atmosphere and a supportive management team
