Step-by-Step Breakdown: What Happens When You Tap “Request”
Client Sends Location & Ride Request to Backend
Frontend (Mobile App) Behavior:
Your mobile device uses its GPS module to retrieve your current coordinates (latitude and longitude).
This data, together with your destination and preferences (e.g., payment method, ride type), is bundled into a JSON payload.
The payload is sent via HTTPS to the inDrive backend's ride-request API endpoint:
{
  "rider_id": "user_2839",
  "pickup_location": {
    "lat": -17.8292,
    "lng": 31.0522
  },
  "destination": {
    "lat": -17.8210,
    "lng": 31.0409
  },
  "timestamp": 1713191002,
  "payment_method": "cash"
}
Authentication: The app attaches an access token (JWT or OAuth2) in the request header to verify the user.
Transport Security: Request uses TLS/SSL to encrypt communication.
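As a rough illustration, the client-side call might look like the Python sketch below; the endpoint URL, token value, and response fields are placeholders, not inDrive's actual API.

```python
# Illustrative client-side ride request over HTTPS with a bearer token.
import time
import requests

ACCESS_TOKEN = "eyJ..."  # JWT issued at login (placeholder)

payload = {
    "rider_id": "user_2839",
    "pickup_location": {"lat": -17.8292, "lng": 31.0522},
    "destination": {"lat": -17.8210, "lng": 31.0409},
    "timestamp": int(time.time()),
    "payment_method": "cash",
}

resp = requests.post(
    "https://api.example-ridehailing.com/v1/ride-requests",  # hypothetical endpoint
    json=payload,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=10,  # the https:// scheme means the request is encrypted with TLS
)
resp.raise_for_status()
print(resp.json())  # e.g. a request ID the app can subscribe on for match updates
```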
Backend Receives & Validates Request
An API gateway (e.g., Kong, NGINX) routes the request to the correct internal microservice, typically a RideRequestService. This service:
Validates the request structure.
Checks if the user is authenticated and allowed to request rides.
Sanitizes location coordinates (rounding, boundary check).
Writes an entry in a ride_request database table or collection.
Publishes a downstream event, such as NewRideRequested, to a message broker (e.g., Kafka, RabbitMQ). A simplified sketch of this flow follows below.
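Put together, the validation-persist-publish flow described above might look roughly like the sketch below; the function name, table name, topic name, and the db/broker interfaces are illustrative assumptions.

```python
# Illustrative RideRequestService handler: validate, sanitize, persist, publish.
import json
import time

COORD_PRECISION = 6  # decimal places (~0.1 m), plenty for pickup matching

def handle_ride_request(req: dict, db, broker) -> str:
    # 1. Validate the request structure.
    for field in ("rider_id", "pickup_location", "destination", "payment_method"):
        if field not in req:
            raise ValueError(f"missing field: {field}")

    # 2. Sanitize coordinates: round and boundary-check.
    for key in ("pickup_location", "destination"):
        lat = round(float(req[key]["lat"]), COORD_PRECISION)
        lng = round(float(req[key]["lng"]), COORD_PRECISION)
        if not (-90.0 <= lat <= 90.0 and -180.0 <= lng <= 180.0):
            raise ValueError(f"{key} out of bounds")
        req[key] = {"lat": lat, "lng": lng}

    # 3. Write an entry to the ride_request table/collection.
    request_id = db.insert("ride_request", {**req, "status": "PENDING",
                                            "created_at": int(time.time())})

    # 4. Publish a downstream event for the matching pipeline.
    broker.publish("NewRideRequested", json.dumps({"request_id": request_id, **req}))
    return request_id
```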
Geospatial Query to Find Nearby Drivers
Now, the MatchingService kicks in. This service queries nearby active drivers using real-time location data.
Driver locations are stored in an in-memory geospatial cache, usually Redis with the GEOADD and GEORADIUS commands, or indexed with Uber's H3 hexagonal grid (we will break down H3 in a bit). Example Redis query:
GEORADIUS drivers:active 31.0522 -17.8292 3 km WITHDIST ASC
This retrieves all driver IDs within 3 km of the rider's pickup point, sorted by distance (ASC returns the nearest drivers first).
If Redis is used, this cache is updated every 2–5 seconds by drivers’ devices pinging the server with their latest GPS coordinates.
Fallback Mechanism: If Redis fails or the match pool is too small, a fallback PostgreSQL/PostGIS or MongoDB geospatial query with a larger radius is used.
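As a rough illustration, the same pattern in Python with the redis-py client might look like the sketch below; driver IDs, coordinates, and the connection details are placeholders, not inDrive's actual data.

```python
# Illustrative sketch of the Redis geo index (redis-py >= 4.x API).
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Driver apps ping every few seconds; each ping upserts (lng, lat, driver_id).
r.geoadd("drivers:active", (31.0530, -17.8300, "driver_114"))
r.geoadd("drivers:active", (31.0490, -17.8275, "driver_207"))

# Rider requests a ride: fetch drivers within 3 km of the pickup, nearest first.
nearby = r.georadius(
    "drivers:active",
    31.0522, -17.8292,   # longitude, latitude (note the argument order)
    3, unit="km",
    withdist=True, sort="ASC",
)
print(nearby)            # [[driver_id, distance_km], ...], nearest first
```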
How H3 Works
H3 divides the globe into hexagonal cells at 16 resolutions, from large regions (on the order of 4 million km² per cell at the coarsest resolution) to very small zones (~0.9 m² per cell at the finest). Each point (lat/lng) is assigned a unique H3 index representing the hexagon it falls into.
Example:
Rider's GPS → H3 index: 8a2a1072b59ffff
Find all driver H3 indexes within a k-ring radius (say, 2 hops).
Perform a set intersection:
rider's neighbors ∩ active driver indexes
This drastically reduces the search space.
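A minimal sketch of this lookup, using the open-source h3-py bindings (v3 function names; in v4 the equivalents are latlng_to_cell and grid_disk). The resolution choice and driver data are illustrative.

```python
# Illustrative H3 cell lookup and k-ring intersection (h3-py v3 API).
import h3

RESOLUTION = 9  # ~0.1 km^2 cells; a plausible choice for city-scale matching

# Rider's GPS -> H3 cell index.
rider_cell = h3.geo_to_h3(-17.8292, 31.0522, RESOLUTION)

# All cells within 2 hops of the rider's cell (the k-ring).
candidate_cells = h3.k_ring(rider_cell, 2)

# Active drivers keyed by the cell they last reported from (toy data).
active_driver_cells = {
    "driver_114": h3.geo_to_h3(-17.8300, 31.0530, RESOLUTION),
    "driver_207": h3.geo_to_h3(-17.8275, 31.0490, RESOLUTION),
}

# Set intersection: keep only drivers whose cell falls inside the k-ring.
nearby_drivers = [driver for driver, cell in active_driver_cells.items()
                  if cell in candidate_cells]
print(nearby_drivers)
```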
Technical Highlights
Fast Proximity Queries: H3's hierarchical design allows efficient k-ring searches (get all neighbors within K steps).
Compact Representation: H3 indexes are 64-bit integers, so they are small, cache-friendly, and ideal for Redis or Kafka pipelines.
Geo-Aggregation: Easily group metrics (e.g., ride demand) by cell. Great for heatmaps and surge pricing logic.
Scalable: Works well with distributed systems — you can partition workload by H3 regions.
Use Case in inDrive
Convert all active drivers to H3 indexes and store in Redis.
When a rider requests, convert their location to H3.
Perform a k-ring query (e.g., within 2 hexagons) to get nearby driver cells.
Retrieve drivers in those cells, sort by distance/ETA, then apply matching logic.
This lookup can be done in under 10ms using in-memory data, enabling real-time responsiveness even at scale.
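One plausible way to wire these steps together is a Redis set of driver IDs per H3 cell, as sketched below; the key naming, TTL, and resolution are assumptions for illustration, not inDrive's actual schema.

```python
# Illustrative pattern: one Redis set of driver IDs per H3 cell.
import h3
import redis

r = redis.Redis(decode_responses=True)
RESOLUTION = 9

def report_driver_location(driver_id: str, lat: float, lng: float) -> None:
    """Called on each driver ping: add the driver to its current cell's set."""
    cell = h3.geo_to_h3(lat, lng, RESOLUTION)
    r.sadd(f"cell:{cell}:drivers", driver_id)
    r.expire(f"cell:{cell}:drivers", 10)  # stale cells age out if pings stop

def drivers_near(lat: float, lng: float, k: int = 2) -> set:
    """Union the driver sets of every cell within k hops of the rider."""
    cells = h3.k_ring(h3.geo_to_h3(lat, lng, RESOLUTION), k)
    return r.sunion([f"cell:{cell}:drivers" for cell in cells])

# The returned candidates are then sorted by distance/ETA in the matching logic.
```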
Filtering & Ranking: Matching Algorithm Executes
At this point, inDrive likely processes hundreds of potential drivers within that geofence.
Here’s what happens:
Filtering Phase
Apply logic to narrow the pool:
Remove busy or inactive drivers.
Filter out drivers with low ratings or poor response times.
Filter drivers who have disabled the auto-accept feature or are on break.
Scoring Phase
A scoring algorithm runs to rank the remaining drivers. This is likely done via a weighted decision model, like:
score = (
    (1 / distance_km) * 0.4 +
    (driver_rating / 5) * 0.3 +
    (acceptance_rate) * 0.2 +
    (1 / estimated_arrival_time_min) * 0.1
)
This is computed in real time for every candidate, often in a dedicated MatchEngine microservice deployed regionally to keep latency low; a sketch of the filtering and scoring pass follows below.
Advanced platforms may integrate:
Traffic congestion data from the Google Maps or HERE Maps APIs.
Surge zones — giving priority to drivers in zones with higher demand.
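Putting the filtering and scoring phases together, a sketch of that pass might look like the code below, using the weights from the formula above; the candidate fields, rating threshold, and clamping are assumptions for illustration.

```python
# Illustrative filtering + weighted scoring pass for ranking candidate drivers.
from dataclasses import dataclass

@dataclass
class Candidate:
    driver_id: str
    distance_km: float
    driver_rating: float      # 0..5
    acceptance_rate: float    # 0..1
    eta_min: float
    is_busy: bool = False
    auto_accept_enabled: bool = True

def score(c: Candidate) -> float:
    # Same weights as the formula above; clamp denominators to avoid divide-by-zero.
    return ((1 / max(c.distance_km, 0.1)) * 0.4 +
            (c.driver_rating / 5) * 0.3 +
            c.acceptance_rate * 0.2 +
            (1 / max(c.eta_min, 1.0)) * 0.1)

def rank(candidates: list[Candidate], min_rating: float = 4.5) -> list[Candidate]:
    # Filtering phase: drop busy drivers, low ratings, and auto-accept opt-outs.
    eligible = [c for c in candidates
                if not c.is_busy
                and c.driver_rating >= min_rating
                and c.auto_accept_enabled]
    # Scoring phase: highest score first.
    return sorted(eligible, key=score, reverse=True)
```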
Real-Time Notifications via Persistent Connections
Once the top n drivers (e.g., top 3) are selected, a real-time notification event is triggered.
Driver apps maintain a persistent WebSocket or MQTT connection with inDrive’s Notification Gateway.
The MatchingService emits an event like:
{
  "event": "ride_offer",
  "data": {
    "rider_id": "user_2839",
    "pickup_location": "...",
    "fare_offer": "$5",
    "expires_in": 15
  }
}
The top driver receives the ride request. A 15-second timeout begins.
If they don’t accept, the offer goes to the next best driver.
If all fail, the radius expands and the match cycle restarts (sketched below).
To optimize this:
Many systems use priority queues with TTL for each offer.
A/B testing can run in the background to measure timeout strategies (e.g., 10s vs 15s vs 20s).
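A minimal sketch of that timed offer cascade, assuming an async service in which send_offer and wait_for_response are hypothetical helpers wrapping the WebSocket/MQTT layer:

```python
# Illustrative sequential offer cascade with a per-offer timeout.
import asyncio

OFFER_TTL_SECONDS = 15  # the 15-second window mentioned above

async def cascade_offers(ranked_driver_ids, send_offer, wait_for_response):
    """Offer the ride to each driver in turn; return the first acceptor, else None."""
    for driver_id in ranked_driver_ids:
        await send_offer(driver_id, expires_in=OFFER_TTL_SECONDS)
        try:
            accepted = await asyncio.wait_for(
                wait_for_response(driver_id), timeout=OFFER_TTL_SECONDS)
        except asyncio.TimeoutError:
            continue  # offer expired; move on to the next-best driver
        if accepted:
            return driver_id
    return None  # caller expands the search radius and restarts the match cycle
```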
Optional Enhancements Behind the Scenes
Pre-Fetching Driver Pools: Some systems compute and cache likely driver candidates in advance using background jobs, so when the rider requests, the response is near-instant.
Graph-based Distance Calculation: Instead of raw Euclidean distance, use Dijkstra or A* pathfinding on road graphs for accurate ETAs (see the sketch after this list).
Regional Service Partitioning: Each city or zone runs its own isolated instance of MatchEngine to reduce cross-region latency and limit the service blast radius.
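To make the graph-based ETA idea concrete, here is a toy sketch using networkx's A* search in place of a real routing engine; the road graph, travel times, and zero heuristic are illustrative.

```python
# Toy A* shortest-path ETA over a hand-built road graph.
import networkx as nx

G = nx.DiGraph()
# Edge weights are travel times in seconds (in practice derived from map + traffic data).
G.add_edge("pickup", "junction_a", travel_time=90)
G.add_edge("junction_a", "junction_b", travel_time=120)
G.add_edge("junction_b", "destination", travel_time=60)
G.add_edge("pickup", "junction_c", travel_time=200)
G.add_edge("junction_c", "destination", travel_time=200)

path = nx.astar_path(G, "pickup", "destination",
                     heuristic=lambda u, v: 0,  # zero heuristic reduces A* to Dijkstra
                     weight="travel_time")
eta_seconds = nx.path_weight(G, path, weight="travel_time")
print(path, eta_seconds)  # ['pickup', 'junction_a', 'junction_b', 'destination'] 270
```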
Summary of the Stack in Action
Request intake: HTTPS over TLS, JWT/OAuth2 authentication, API gateway (Kong, NGINX).
Request handling: RideRequestService, ride_request storage, NewRideRequested events on Kafka/RabbitMQ.
Driver lookup: Redis GEO commands or H3 cells, with a PostGIS/MongoDB fallback.
Matching: filtering plus weighted scoring in a regionally deployed MatchEngine.
Dispatch: WebSocket/MQTT push through the Notification Gateway, with a timed offer cascade.
Conclusion
inDrive’s ability to match you with a nearby driver in seconds is powered by geospatial intelligence, real-time infrastructure, scalable microservices, and smart algorithms. It’s a blend of precision engineering and large-scale system design.
The next time you request a ride, just remember: behind that quick match is a brilliant system crunching thousands of requests per second — all to get you where you need to be, faster.