Bank of Georgia is hiring a Beginner Data Engineer

Position Purpose: Bank of Georgia's Customer Tribe is looking for passionate, curious, and talented Beginner Data Engineers to join its growing Data Team. The role offers an opportunity to make a significant impact on the design, architecture, and implementation of innovative products. Team members will be challenged to innovate using the latest big data techniques and will be valued for their ability to think big, move fast, and uncover business insights. Resourceful, creative, and team-oriented individuals who can collaborate to deliver solutions are encouraged to apply.

Key Responsibilities:
* Develop large-scale data structures and pipelines that collect, organize, transform, and standardize data from multiple fragmented sources and formats to generate insights (a minimal sketch of such a pipeline follows this list).
* Build and operate highly available, distributed systems for extracting and processing large datasets.
* Architect and develop streaming applications that feed downstream systems.
* Build high-volume data processing applications that provide robust data interoperability.
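
To make the day-to-day concrete, here is a minimal PySpark sketch of the kind of batch pipeline described above: collecting data from two fragmented sources, standardizing their schemas, and writing a unified dataset. The paths, column names, and aggregation are hypothetical illustrations, not the bank's actual pipeline.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("customer-pipeline").getOrCreate()

# Source 1: a CSV export with a header row (hypothetical path and schema).
transactions = spark.read.option("header", True).csv("/data/raw/transactions.csv")

# Source 2: JSON events in a different format (hypothetical path and schema).
events = spark.read.json("/data/raw/events.json")

# Standardize: align column names and types before combining the sources.
transactions = transactions.select(
    F.col("txn_id").alias("id"),
    F.col("amount").cast("double"),
    F.to_date("txn_date").alias("event_date"),
)
events = events.select(
    F.col("event_id").alias("id"),
    F.col("value").cast("double").alias("amount"),
    F.to_date("ts").alias("event_date"),
)

# Combine the standardized sources and aggregate for downstream insight.
daily_totals = (
    transactions.unionByName(events)
    .groupBy("event_date")
    .agg(F.sum("amount").alias("total_amount"))
)

daily_totals.write.mode("overwrite").parquet("/data/curated/daily_totals")
```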

Requirements:
* Education & Experience: Bachelor’s degree in Computer Science, Mathematics, or a related field; 1+ years of professional experience in IT or Data Engineering.
* Core Skills (Mandatory):
  * Processing: Basic to intermediate knowledge of Apache Spark.
  * Orchestration: Basic understanding of Airflow concepts (DAGs, tasks, scheduling); hands-on experience is a plus (see the DAG sketch after this list).
  * Languages: Advanced knowledge of Python and SQL.
  * Frameworks: Solid grasp of distributed computing (Hadoop, Spark, Hive) as well as non-distributed computing environments.
  * Databases: Proficiency in both relational (RDBMS) and NoSQL database systems.
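
The Airflow concepts named above (DAGs, tasks, scheduling) fit in a few lines. Below is a minimal sketch assuming Airflow 2.x; the DAG id, schedule, and task logic are hypothetical placeholders.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from source systems")

def transform():
    print("standardize and enrich the extracted data")

with DAG(
    dag_id="daily_customer_load",   # hypothetical DAG id
    start_date=datetime(2026, 1, 1),
    schedule="@daily",              # run once per day (Airflow 2.4+ syntax)
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    # Dependency: extract must finish before transform starts.
    extract_task >> transform_task
```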

Preferred Qualifications:
* Infrastructure & DevOps: Experience with Kubernetes (K8s). Basic understanding of CI/CD pipelines and Infrastructure as Code (IaC).
* Cloud: Hands-on experience with AWS.
* Streaming: Proficiency in Spark Structured Streaming (a minimal example follows this list).
* Modern Architecture: Familiarity with Data Lakehouse (e.g., Delta Lake, Iceberg) and Data Lake architectures.
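
For the streaming qualification, here is a minimal Spark Structured Streaming sketch: a windowed count over a Kafka topic with a watermark for late data. The broker address and topic are hypothetical, and the example assumes the spark-sql-kafka connector package is available on the classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

# Read a stream of events from Kafka (hypothetical broker and topic).
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "customer-events")
    .load()
)

# Kafka delivers bytes; cast the payload and keep the event timestamp.
events = raw.select(
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp"),
)

# Count events per 5-minute window, tolerating 10 minutes of late data.
counts = (
    events.withWatermark("timestamp", "10 minutes")
    .groupBy(F.window("timestamp", "5 minutes"))
    .count()
)

# Stream incremental results to the console (swap in a durable sink for production).
query = counts.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```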

Knowledge, Skills, and Abilities:
* Knowledge of relational database concepts, data modeling, ETL development, and data warehousing.
* Knowledge of data management fundamentals and data storage principles.
* Knowledge of distributed systems as they pertain to data storage and computing.
* Strong analytical and problem-solving skills.
* Enjoyment in learning new technologies and applying them.
* Strong communication skills, flexibility, teamwork, quick learning, a can-do attitude, independent initiative, and creative, analytical thinking.
* Ability to solve problems and troubleshoot issues quickly and effectively.
* Ability to write clearly and concisely, and to communicate effectively in Georgian and English.

How To Apply:
Interested candidates should fill in the required information, attach their CV, and submit by clicking “apply for position now”. The deadline for applications is 05/02/2026.
