Data Engineer

About Healint

Healint is a leading maker of healthcare technology used all over the world. Healint leverages innovative techniques in software, data science, and user experience design to empower people to manage their chronic conditions and diseases.

Healint’s first global program - the Migraine Buddy platform and its apps - helps a thriving community of users manage and track their migraines. To date, Migraine Buddy has recorded terabytes of data that help patients, doctors, and researchers better understand the real-world causes and effects of neurological disorders.

We're committed to revolutionizing healthcare technology, and are continually looking to add talented people to the Healint team. We promise challenging problems, an opportunity to have real impact on people's lives, and an environment where you'll learn rapidly from one of the best teams in Singapore.

As a Data Engineer, you’ll be collecting, storing, processing, and analysing the 250GB (and growing!) of data we receive every week. Your primary goal is to help us turn all this data into insights. This also involves supporting our machine learning work by preparing and processing training and test datasets.

We’re looking for a well-rounded candidate for this position: someone comfortable taking responsibility for our analytics infrastructure.

Our current data stack: Redshift/PostgreSQL, Airflow, Python & Tableau


Responsibilities

  • Maintain and improve our data warehousing systems: databases, ETL/ELT, and data streaming

  • Monitor data integrity and performance, and advise on and implement necessary infrastructure changes

  • Select and integrate the big data tools and frameworks required to deliver requested capabilities

  • Participate in data product development, with a focus on:

    • Implementing practical machine learning solutions

    • Bringing data solutions into production (REST APIs)


Requirements

  • 5+ years of experience in software engineering, data engineering, or operations

  • Hands-on working experience with large-scale datasets

  • Databases: practical knowledge of SQL and NoSQL databases; you’re comfortable querying and writing to both

  • Proficient in Python.

  • Linux system administration skills

  • Self-starter and natural planner who looks ahead, raises issues, resolves them, and meets deadlines


Nice to have

  • Hands-on experience with machine learning (classification, clustering)

  • Proficiency in a compiled language

  • Familiarity with AWS (DynamoDB, Redshift, S3, EC2, RDS)

  • Working knowledge of BI tools (Tableau, QlikView, etc.)

  • Experience building and deploying a REST API that can handle production load