Global search in Mochi has always been a little rough around the edges. It worked, but it never felt quite right — sometimes slow, sometimes inconsistent. I’ve spent the past few months reworking it from the ground up, and I wanted to share a bit about how it’s evolved.
The original in-memory search
The first version of Mochi’s global search used a lightweight, in-memory index. It was fast and fairly accurate, but came with two big drawbacks.
1. Memory usage.
Keeping an index of every card in memory nearly doubled the app’s overall memory footprint. This was especially problematic for memory-constrained devices like mobile phones.
2. Limited language and word matching.
The fuzzy search algorithm didn’t do any kind of stemming or lemmatization (recognizing that “garden,” “gardens,” and “gardening” all come from the same root word). That meant search results often missed related forms of a term. On top of that, the system only supported English, making it a poor fit for Mochi’s multilingual community.
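To make the stemming idea concrete, here is a minimal sketch of the kind of normalization the old matcher lacked. The suffix list is deliberately naive; a real engine would use something like the Porter algorithm:

```python
def naive_stem(word: str) -> str:
    """Strip a few common English suffixes so related word forms
    map to the same root. A toy stand-in for a real stemmer."""
    for suffix in ("ing", "ed", "s"):
        # Only strip when a reasonable root remains.
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word

# "garden", "gardens", and "gardening" all reduce to "garden",
# so a search for any one of them can match the others.
print(naive_stem("gardening"))  # garden
print(naive_stem("gardens"))   # garden
```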
Switching to full-text search
To address those problems, I switched to a proper full-text search engine with on-disk indexes and built-in multilingual support. This fixed the memory issue right away and also gave much better linguistic coverage — it could automatically normalize word forms, handle multiple languages, and generally produce more consistent results.
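The post doesn't name the engine, but as one concrete possibility, SQLite's FTS5 extension offers exactly this combination of on-disk indexes and built-in stemming (here via its Porter tokenizer):

```python
import sqlite3

# In practice the database would live on disk; :memory: keeps the sketch short.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE VIRTUAL TABLE cards USING fts5(title, content, tokenize='porter')"
)
con.execute(
    "INSERT INTO cards VALUES (?, ?)",
    ("Gardening basics", "How gardens grow in small spaces"),
)
con.commit()

# The Porter tokenizer stems both the indexed text and the query,
# so 'garden' matches 'Gardening' and 'gardens'.
rows = con.execute("SELECT title FROM cards WHERE cards MATCH 'garden'").fetchall()
print(rows)
```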
However, it came with its own trade-offs. Full-text matching was often more accurate for normalized word forms, but the old system's fuzzy matching was more forgiving of typos and partial words. And since the index lived on disk, search results took longer to appear.
The current approach: hybrid search
The latest version combines both systems. Mochi now uses an in-memory fuzzy search for instant results, layered with an on-disk full-text search for more complete matches.
Here’s what that means in practice:
- The in-memory index (which now only includes card titles) returns instant results as you type.
- The full-text engine then fills in richer, more comprehensive matches from disk — including results from the full card content and in multiple languages.
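The merge between the two layers can be sketched roughly as follows, assuming hypothetical `fuzzy_titles` and `fulltext` helpers (not Mochi's actual code) that each return cards with an `id`:

```python
def hybrid_search(query, fuzzy_titles, fulltext):
    """Combine instant in-memory title matches with fuller on-disk
    results, deduplicating by card id and keeping fast results first."""
    seen = set()
    results = []
    for card in fuzzy_titles(query):  # instant: in-memory, titles only
        if card["id"] not in seen:
            seen.add(card["id"])
            results.append(card)
    for card in fulltext(query):  # slower: on-disk, full card content
        if card["id"] not in seen:
            seen.add(card["id"])
            results.append(card)
    return results

# Example with stubbed-in search layers:
fuzzy = lambda q: [{"id": 1, "title": "Gardening"}]
full = lambda q: [{"id": 1, "title": "Gardening"}, {"id": 2, "title": "Garden soil"}]
print([c["id"] for c in hybrid_search("garden", fuzzy, full)])  # [1, 2]
```

In a real UI the two passes would run asynchronously, with the full-text results appended (or merged in) once they arrive.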
This hybrid approach delivers the speed of the original in-memory search, the linguistic accuracy of full-text indexing, and a much smaller memory footprint overall.
Summary
- Instant fuzzy results from memory
- Deeper, language-aware matches from disk
- Much smaller memory footprint
Search in Mochi now feels smoother and more reliable, without eating up resources.