LiteParse Explained

Fast, local document parsing for AI agents

By LlamaIndex · TypeScript-Native · Apache-2.0 · Released March 2026

Your documents are locked. LiteParse gets the text out.

LiteParse is an open-source document parsing library by LlamaIndex. It extracts text from PDFs, Office documents, and images while preserving where that text sat on the page — the columns, the spacing, the layout. It runs entirely on your machine. No cloud calls, no API keys, no LLMs involved in the parsing itself.

The problem it solves is specific: AI agents need to read documents, but most parsers either scramble the layout trying to convert everything to Markdown, or they require expensive cloud APIs that add latency and cost. LiteParse skips both traps. It projects text onto a spatial grid, keeps the whitespace intact, and trusts that modern LLMs are smart enough to read a table that looks like a table.

The Problem

Parsing breaks layout

Traditional parsers detect document structure and try to convert it to Markdown. Multi-column layouts, nested tables, and merged cells routinely break in translation — columns shift, rows merge, numbers attach to the wrong labels.

The Solution

Spatial text, not Markdown

LiteParse preserves the original page layout using precise indentation and whitespace. Instead of guessing structure, it keeps the spatial relationships intact and lets the LLM interpret what it sees — the way a human would read a printed page.

The Result

Fast local parsing for agents

Agents get reliable text extraction in milliseconds. No cloud round-trips, no Python dependency headaches, no conversion artefacts. Parse once, reason immediately. Screenshot pages for visual fallback when needed.

📄 Document In (PDF, DOCX, XLSX, image) → 📐 Spatial Parse (layout-aware extraction) → 🔍 OCR if needed (Tesseract.js built-in) → 🤖 LLM-Ready Text (+ optional screenshots)

The parsing bottleneck is real.

In most RAG and agentic AI pipelines, the bottleneck isn't the LLM — it's getting documents into a format the LLM can actually work with. Agents routinely spend more compute parsing documents than reasoning about them. LiteParse tackles this by being fast enough to disappear from the workflow.

0 Python dependencies
0 API keys required
2.8k+ GitHub stars (first 2 weeks)
50+ supported file formats

Why not just use pypdf or pdfplumber?

You can. They're solid tools. But they're Python-only, and they strip layout context — which means your LLM gets a wall of text with no spatial awareness. LiteParse is TypeScript-native (with a Python wrapper available), preserves spatial layout by default, and includes built-in OCR for scanned documents. If your agent is already in a JS/TS environment, the setup friction drops to near zero.

Four concepts. No magic.

LiteParse is deliberately simple. The design philosophy is that preserving layout is more reliable than detecting structure. Here are the core ideas.

Concept 01

Spatial Text Parsing

Instead of converting tables and columns to Markdown (which breaks constantly), LiteParse projects text onto a spatial grid. Whitespace and indentation preserve the original layout. LLMs, trained on ASCII tables and code indentation, read this natively.
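The idea can be sketched in a few lines. The field names below are illustrative, not the LiteParse API: each text span carries page coordinates, and rendering is just projecting those spans onto a whitespace grid.

```typescript
// Each span is a piece of text with page coordinates (already quantized
// to character cells here; a real parser maps point coordinates first).
interface Span {
  row: number;  // vertical position on the page
  col: number;  // horizontal position on the page
  text: string;
}

// Project spans onto a whitespace grid: layout survives as indentation.
function renderSpatial(spans: Span[]): string[] {
  if (spans.length === 0) return [];
  const rows = Math.max(...spans.map((s) => s.row)) + 1;
  const grid: string[][] = Array.from({ length: rows }, () => []);
  for (const s of spans) {
    for (let i = 0; i < s.text.length; i++) {
      grid[s.row][s.col + i] = s.text[i];
    }
  }
  // Fill gaps with spaces so columns line up, then join each row.
  return grid.map((r) => Array.from(r, (ch) => ch ?? " ").join(""));
}

// Example:
// renderSpatial([
//   { row: 0, col: 0, text: "Item" }, { row: 0, col: 10, text: "Price" },
//   { row: 1, col: 0, text: "Tea" },  { row: 1, col: 10, text: "4.50" },
// ])
// → ["Item      Price", "Tea       4.50"]
```

Two columns stay two columns, and the LLM reads the result the way it reads any ASCII table.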

Concept 02

Bounding Boxes

Every line of text comes back with precise coordinate data — where it sat on the page, how wide it was. This is useful for downstream processing, visualization, or building region-specific extraction pipelines.
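A region-specific extraction pipeline built on those coordinates might look like this. The `bbox` shape is an assumption for illustration, not LiteParse's actual output schema:

```typescript
// Hypothetical line shape: text plus a bounding box in page units.
interface Line {
  text: string;
  bbox: { x: number; y: number; width: number; height: number };
}

type Region = { x: number; y: number; width: number; height: number };

// Keep only lines whose boxes fall entirely inside a target region,
// e.g. extracting just the header band or a single column.
function linesInRegion(lines: Line[], region: Region): Line[] {
  return lines.filter(
    (l) =>
      l.bbox.x >= region.x &&
      l.bbox.y >= region.y &&
      l.bbox.x + l.bbox.width <= region.x + region.width &&
      l.bbox.y + l.bbox.height <= region.y + region.height,
  );
}
```

The same coordinates also drive visualization: draw the boxes over a page screenshot and you can see exactly what the parser extracted, and from where.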

Concept 03

Built-in OCR

Scanned PDFs and images are handled automatically via Tesseract.js. OCR parallelises across CPU cores by default (num_workers = cores - 1). You can also plug in an external OCR server (PaddleOCR, EasyOCR) for higher accuracy on difficult documents.
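The parallelisation scheme is simple to sketch. This is an illustration of the idea, not LiteParse's internals: pick a worker count from the core count, then deal page indices out across workers.

```typescript
import * as os from "node:os";

// Default worker count: one per core, minus one to keep the main thread free.
function defaultWorkers(cores: number = os.cpus().length): number {
  return Math.max(1, cores - 1);
}

// Deal page indices out round-robin, one chunk per OCR worker.
function chunkPages(pageCount: number, workers: number): number[][] {
  const chunks: number[][] = Array.from({ length: workers }, () => []);
  for (let p = 0; p < pageCount; p++) chunks[p % workers].push(p);
  return chunks;
}
```

On an 8-core machine this yields 7 workers, each OCRing its own share of pages concurrently.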

Concept 04

Multimodal Screenshots

LiteParse can generate page-level screenshots alongside text output. This enables a powerful agent pattern: parse text for fast understanding, fall back to screenshots when the agent needs to visually inspect charts, diagrams, or complex formatting.

# Screenshot specific pages
lit screenshot document.pdf --target-pages "1,3,5"

Getting started in seconds

Install globally via npm and parse immediately from the command line:

# Install globally
npm i -g @llamaindex/liteparse

# Parse a document
lit parse your-document.pdf

# Or use programmatically
import { LiteParse } from '@llamaindex/liteparse';

const parser = new LiteParse({ ocrEnabled: true });
const result = await parser.parse('document.pdf');
console.log(result.text);

Also available via Homebrew (brew install llamaindex-liteparse) and pip (pip install liteparse) for the Python wrapper.

Where LiteParse fits in the stack.

LiteParse is one piece of LlamaIndex's document intelligence stack. Understanding where it sits — and what it deliberately doesn't do — helps you pick the right tool.

Local Parsing

LiteParse

Open-source, local-first. Spatial text + bounding boxes + screenshots. Fast, simple, no cloud. Best for agents and real-time pipelines where speed matters more than perfect structure detection.

Cloud Parsing

LlamaParse

Paid cloud service with proprietary models. Agentic OCR, structured outputs (Markdown, JSON schemas), premium accuracy on dense tables, charts, and handwritten text. Built for production document intelligence.

Framework

LlamaIndex

The broader Python/TS framework for building LLM applications. LiteParse slots in as the document loading stage — a drop-in component for VectorStoreIndex and IngestionPipeline workflows.

When to use what

Need | Use This | Why
Quick text extraction, agent reads a PDF | LiteParse | Fast, local, zero config. Agent can parse and move on immediately.
Dense tables, charts, handwritten text | LlamaParse | Cloud-powered models handle complex layouts that spatial parsing can't resolve.
Structured output (JSON schema, Markdown tables) | LlamaParse | LiteParse outputs spatial text only. LlamaParse converts to structured formats.
Privacy-sensitive documents, air-gapped environments | LiteParse | Everything stays on your machine. No data leaves the local security perimeter.
Scanned PDFs with basic OCR needs | LiteParse | Built-in Tesseract.js handles standard scans. Plug in PaddleOCR for harder cases.

What people actually build with it.

LiteParse is a building block. These are the patterns we're seeing developers apply it to.

Agent Tooling

Two-Step Document Reading

Agents parse text first for fast understanding, then generate page screenshots for visual follow-up on charts or complex layouts. LiteParse ships as an agent skill for this exact pattern.
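The fallback decision can be sketched as a small pure function. The per-page summary shape and the threshold are illustrative assumptions, not part of the LiteParse API:

```typescript
// Per-page parse summary (illustrative shape, not the LiteParse schema).
interface PageSummary {
  page: number;
  charCount: number;   // characters of extracted text on the page
  imageCount: number;  // embedded images detected on the page
}

// Step 1 returns text; step 2 screenshots only the pages where text alone
// is unlikely to be enough: sparse text, or images the agent should see.
function pagesNeedingScreenshot(pages: PageSummary[], minChars = 200): number[] {
  return pages
    .filter((p) => p.charCount < minChars || p.imageCount > 0)
    .map((p) => p.page);
}
```

The resulting page list maps directly onto the CLI's `--target-pages` flag, so the agent screenshots only what it actually needs to look at.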

RAG Pipelines

Local Document Ingestion

Feed documents into a vector store without cloud round-trips. LiteParse handles the parsing stage of RAG pipelines where latency and privacy matter — internal docs, legal files, financial reports.
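After parsing, the usual next step is splitting the text into overlapping chunks before embedding. A minimal sketch of that stage, with illustrative chunk sizes (LiteParse itself stops at parsing; chunking is your pipeline's job):

```typescript
// Split parsed text into fixed-size chunks with overlap, the typical
// pre-embedding step in a RAG pipeline. Sizes here are illustrative.
function chunkText(text: string, size = 800, overlap = 100): string[] {
  const chunks: string[] = [];
  const step = size - overlap;
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // last chunk reached the end
  }
  return chunks;
}
```

The overlap keeps sentences that straddle a chunk boundary retrievable from either side.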

Dev Tooling

CLI Document Processing

Pipe remote PDFs directly through LiteParse from the command line. Batch-parse entire directories. Integrate into CI/CD or automation scripts without standing up a service.

Enterprise

Air-Gapped Environments

Regulated industries (finance, healthcare, government) where documents cannot leave the network. LiteParse runs fully offline with no external calls — OCR included.

Multimodal AI

Text + Vision Workflows

Combine spatial text extraction with page screenshots. Feed both into multimodal models for richer document understanding — particularly useful for reports with embedded charts and diagrams.

Edge & Embedded

Browser & Edge Parsing

TypeScript-native means LiteParse fits into web-based and edge-computing environments without a Python runtime. Parse documents closer to the user, closer to the data.

From cloud service to open-source core.

LiteParse didn't appear from nowhere. It's the distilled result of years spent building production document parsing at LlamaIndex.

2022

LlamaIndex launches

Originally called GPT Index, the framework establishes itself as the go-to toolkit for connecting LLMs to external data sources. Document loading is a core concern from day one.

2024

LlamaParse goes to production

LlamaIndex launches LlamaParse as a managed cloud parsing service. Agentic OCR, structured outputs, and premium accuracy — built specifically for enterprise document intelligence pipelines.

MAR 2026

LiteParse open-sourced

LlamaIndex extracts the lightweight, fast-mode core of LlamaParse's parsing engine and releases it as LiteParse — a standalone open-source tool under the Apache-2.0 licence. TypeScript-native, zero cloud dependencies, designed specifically for AI agents.

MAR 2026

2.8k GitHub stars in two weeks

Rapid community adoption. LiteParse reaches version 1.3.1 with Python wrapper, Homebrew formula, and agent skill packaging. The repo includes benchmarking code and evaluation datasets on HuggingFace.

Is LiteParse right for your project?

LiteParse is deliberately limited in scope. That's a feature. Here's an honest breakdown of when it makes sense and when it doesn't.

✓ Use LiteParse when

You're building AI agents that need to read documents quickly and move on. Speed matters more than perfect structural conversion.

You want local-first execution. Your documents are sensitive, your environment is air-gapped, or you simply don't want to pay for cloud parsing on straightforward documents.

Your stack is JavaScript/TypeScript. LiteParse is native to this environment — no Python runtime overhead, installs via npm in seconds.

You need a two-step parse-then-screenshot workflow. LiteParse was built around exactly this agent pattern: fast text first, visual fallback second.

✗ Skip LiteParse when

Your documents have dense, complex tables with merged cells, multi-level headers, or columns that don't snap to a clean grid. Spatial parsing alone won't resolve these reliably.

You need structured output — Markdown tables, JSON schemas, strict key-value extraction. LiteParse outputs spatial text and bounding boxes. That's it.

You're processing handwritten text or heavily degraded scans. Built-in Tesseract.js is decent for standard scans but isn't state-of-the-art OCR. You'll want LlamaParse or a dedicated OCR model.

You need chart or diagram parsing — extracting data from visual elements. This requires multimodal LLM reasoning on screenshots, which LiteParse can enable but doesn't do itself.

How we see it. What we recommend.

Our take

LiteParse is the kind of tool the AI ecosystem needs more of — scoped, honest about its limits, and immediately useful. The spatial parsing approach is the right call for agent workflows: instead of building an elaborate structure-detection pipeline that breaks on edge cases, it trusts the LLM to do what LLMs are already good at. The fact that it's TypeScript-native matters more than it seems — it removes an entire class of environment-setup friction that blocks non-Python developers from building with AI. We're using it and recommending it where it fits.

Enterprise

Document Ingestion Layer

For enterprise clients with privacy constraints, LiteParse handles the first-pass parsing of internal documents — contracts, reports, correspondence — without data leaving the network. We pair it with LlamaParse for documents that need deeper structure extraction.

Studio

Agent Skill Integration

In our agent builds, LiteParse is the default document reading skill. The two-step pattern (text first, screenshot fallback) maps cleanly to how we design agent tool use. It's fast enough that parsing never becomes a noticeable bottleneck.

Dojo

Teaching Document Pipelines

LiteParse is how we introduce document parsing in our AI workshops. The npm-install-and-go simplicity means participants are parsing real documents within minutes, not fighting Python virtual environments. It makes the concept of spatial parsing tangible.

Go deeper. Start building.

Official

LiteParse Resources

GitHub Repository — Source code, issues, contributing guide ↗
Documentation — Getting started, library usage, CLI reference ↗
Launch Blog Post — Design philosophy and benchmarks ↗
npm Package — @llamaindex/liteparse ↗

Imbila.AI

Get Started With Us

Evaluating document parsing for your AI pipeline? We run AI Clarity Audits to assess your use case, Co-Design Labs to prototype solutions, and AI Mastery Workshops for hands-on learning.

Get in touch · Read the blog ↗

Sources & References

LiteParse GitHub · LlamaIndex Blog — LiteParse Launch · LiteParse Documentation · MarkTechPost Coverage

Content validated March 2026. LiteParse and LlamaParse are trademarks of LlamaIndex. This is an independent educational explainer by Imbila.AI.