Engineering · Data Science
How we seamlessly migrated high volume real-time streaming traffic from one service to another with zero data loss or duplication
In the world of high-volume data processing, migrating services without disruption is a formidable challenge. At Grab, we recently undertook this task by splitting one of our backend service's stream read and write functionalities into two separate services. Discover how we conducted this transition with zero data loss or duplication, using a simple switchover strategy along with rigorous validation mechanisms.
Engineering · Data Science
Supercharging LLM Application Development with LLM-Kit
Discover how Grab's LLM-Kit enhances AI app development by addressing scalability, security, and integration challenges. This article discusses the challenges faced in building LLM apps, our solution, the architecture of the LLM-Kit, and our future plans for it.
Engineering
How we reduced initialisation time of Product Configuration Management SDK
Discover how we revolutionised our product configuration management SDK, reducing initialisation time by up to 90%. Learn about the challenges we faced with cold starts and the phased approach we took to optimise the SDK's performance.
Engineering · Data Science
Metasense V2: Enhancing, improving, and productionising LLM-powered data governance
In the initial article, we explored the integration of Large Language Models (LLMs) to automate metadata generation, addressing challenges like limited customisation and resource constraints. This integration enabled efficient column-level tag classifications and data sensitivity tiering. After the model's initial scan of over 20,000 entries, we identified areas for improvement post-rollout. The resulting enhancements have significantly reduced manual workloads, increased accuracy, and bolstered trust in our data governance processes.
Engineering
How we reduced peak memory and CPU usage of the product configuration management SDK
Learn about GrabX, Grab's central platform for product configuration management. This article discusses the steps taken to optimise the SDK, aiming to improve resource utilisation, reduce costs, and accelerate internal adoption.
Engineering · Data Science
LLM-assisted vector similarity search
Vector similarity search has revolutionised data retrieval, particularly in the context of Retrieval-Augmented Generation in conjunction with advanced Large Language Models (LLMs). However, it sometimes falls short when dealing with complex or nuanced queries. In this post, we explore our experimentation with a simple yet effective approach to mitigate this shortcoming by combining the efficiency of vector similarity search with the contextual understanding of LLMs.
Engineering · Analytics · Data Science
Leveraging RAG-powered LLMs for Analytical Tasks
The emergence of Retrieval-Augmented Generation (RAG) has significantly expanded the capabilities of Large Language Models (LLMs), propelling them to unprecedented heights. This development prompts us to consider its integration into the field of Analytics. Explore how Grab harnesses this technology to optimise our analytics processes.
Engineering · Product
Turbocharging GrabUnlimited with Temporal
Discover how Grab tackled the challenges of scaling its flagship membership program, GrabUnlimited. In this deep dive, we explore the migration from a legacy system to Temporal, reducing production incidents by 80%, improving scalability, and transforming the architecture for millions of users.