
2025-03-30

Bringing clarity to >1M documents with LLM structured output

With powerful LLMs widely available, processing millions of documents is becoming a common scenario, even for smaller projects. Individual API requests simply don't scale to this volume: they are slow, rate-limited, and error-prone, which makes batch processing the obvious choice. For my recent project, I picked Gemini 2.0 Flash via Vertex AI.
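To make the batch route concrete, here is a minimal sketch of preparing a batch input file. It assumes the JSONL format Vertex AI batch prediction accepts for Gemini models, where each line wraps a standard `generateContent`-style request under a `request` key; the prompt template and file names are illustrative, not from the original project.

```python
import json

def build_batch_requests(documents, prompt_template, out_path):
    """Write one request per line (JSONL), the input format Vertex AI
    batch prediction jobs consume for Gemini models (assumed here)."""
    with open(out_path, "w", encoding="utf-8") as f:
        for doc in documents:
            request = {
                "request": {
                    "contents": [
                        {
                            "role": "user",
                            "parts": [{"text": prompt_template.format(document=doc)}],
                        }
                    ]
                }
            }
            f.write(json.dumps(request) + "\n")

# Illustrative usage: two toy documents, one prompt template.
docs = ["First document text.", "Second document text."]
build_batch_requests(docs, "Summarize: {document}", "batch_input.jsonl")
```

The resulting file would then be uploaded to Cloud Storage and referenced when submitting the batch job, so each document becomes one independent request rather than one synchronous API call.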