Data Engineer (Lead)

Description:

Synodus is a software service provider specializing in designing, developing, and maintaining on-premises Data Lakehouse and Enterprise BI/Data Warehouse systems for both domestic and international clients.
We collaborate with businesses across various industries, especially those with high demands for performance, security, and scalability of their data systems.
In addition to on-premises projects (where Oracle Data Warehouse expertise is a particular strength), Synodus also delivers hybrid cloud solutions to international clients in English-speaking markets.

Key Responsibilities:

  • Design, build, and optimize data pipeline architectures for large-scale on-premises projects, including data collection, processing, cleansing, transformation, storage, and visualization, while ensuring data integrity, security, and high performance.
  • Set up and manage data infrastructure for efficient ETL/ELT operations from multiple sources using technologies such as Apache Spark, Flink, NiFi, Kafka, Hadoop, MinIO, Trino, Iceberg, K8s/Docker, etc.
  • Design and oversee overall system data architecture to ensure scalability, flexibility, and alignment with business requirements.
  • Provide technical direction and code reviews for project teams, design data models and ETL workflows, and maintain consistent implementation quality across the system.
  • Automate manual data processes, enhance performance, reduce pipeline latency, and optimize system resources.
  • Collaborate closely with clients to analyze requirements, provide technical consultancy, and propose appropriate data solutions in on-premises or hybrid environments.
  • Work with Data Analysts, Data Scientists, and DevOps teams to ensure seamless data flow and stable, secure system operations.
  • As a Lead, take ownership of technical decisions, mentorship, and team guidance, supporting junior and mid-level Data Engineers while contributing to project planning, resource estimation, and timeline management.

Requirements:

  • At least 5 years of experience as a Data Engineer, including a minimum of 1 year at Senior or Lead level, preferably in large-scale on-premises data system implementation.
  • Proficiency in at least one programming language: Java, Scala, or Python; strong understanding of multithreading, synchronization, file processing, database manipulation, logging, data testing, and data processing optimization.
  • Hands-on experience with RDBMS (SQL Server, MySQL, PostgreSQL, Oracle), Linux (bash scripting, firewall configuration), Git, Kafka, Spark, Hadoop, Airflow, or similar tools.
  • Solid understanding of data modeling, Data Warehouse / Data Lakehouse architecture, and the ability to design systems that meet performance and scalability requirements.
  • Experience in Data Governance, Data Security, Documentation, and Testing in on-premises environments.
  • Strong communication skills with the ability to work independently and lead small technical teams (3–5 members).
  • Proven experience participating in or leading on-premises Data Lakehouse / Data Warehouse projects is a significant advantage.
  • Able to advise clients and propose technical solutions tailored to their needs.
  • Logical thinking, strong ownership mindset, mentoring ability, and effective team collaboration.
  • Basic English proficiency, with the ability to read and understand technical documents.

Benefits:

  • Salary: competitive and negotiable, plus project bonuses
  • 13th-month salary and annual performance review
  • Holiday bonuses for four major occasions: April 30 (Reunification Day), September 2 (National Day), Lunar New Year, and New Year
  • Company trips, quarterly/monthly team-building activities, and internal cultural events
  • Social and health insurance in compliance with Vietnamese law
  • BSH health insurance package covering medical expenses
  • Annual health check-up
  • Union benefits: Birthday gifts, special occasion allowances, etc.
  • Dynamic and open technical environment encouraging knowledge sharing, internal seminars, and sustainable career development
  • Opportunity to work on large-scale projects using cutting-edge technologies: Apache Spark, Flink, NiFi, Kafka, Hadoop, MinIO, K8s/Docker, Trino, Iceberg, and more
