Solving “Database or Disk is Full” Error in SQLite When Disk Space is Available


Understanding the "Database or Disk is Full" Error in Large SQLite Databases

Issue Overview: SQLite Database Size Limits and Filesystem Constraints

The "database or disk is full" error in SQLite is a common but misleading message that does not always indicate a lack of storage space. In this case, the user encountered the error while working with a 4TB database on an ext4 filesystem with 1.5TB of available space. The strace logs and SQLite version (3.38.1) pointed to a transaction attempting to write to the database file at offsets exceeding 4TB (e.g., pwrite64(3, ..., 4096, 4316495052800)). However, the filesystem had sufficient free space, and the OS reported no disk-related errors.

Key observations from the discussion:

  1. Database Page Size: The strace showed 4KB I/O operations, strongly suggesting a 4KB SQLite page size (default is 4KB for most systems).
  2. SQLITE_MAX_PAGE_COUNT: The default maximum page count in SQLite is 1,073,741,823 pages. With a 4KB page size, this results in a maximum database size of roughly 4TB (1,073,741,823 × 4096 ≈ 4.4 × 10^12 bytes).
  3. Filesystem Overhead: The ext4 filesystem metadata (e.g., inode count, reserved blocks) was not the bottleneck. The tune2fs output confirmed ample free inodes and blocks.

The error occurred because the database reached the SQLITE_MAX_PAGE_COUNT limit, not because the disk was full. SQLite enforces this limit to prevent runaway database growth, but it becomes a problem for very large datasets.
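
As a rough sanity check, the failing write offset quoted from the strace output can be converted into a page number; with a 4KB page size it lands within about two percent of the default page limit, which is consistent with the page-count ceiling being the real problem. A minimal shell sketch, using the offset quoted above:

OFFSET=4316495052800   # byte offset from the pwrite64 call in the strace log
PAGE_SIZE=4096         # inferred SQLite page size
echo "page being written:  $(( OFFSET / PAGE_SIZE ))"   # 1053831800
echo "default page limit:  1073741823"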


Diagnosing the Root Cause: Page Count Limits vs. Disk Space

1. SQLITE_MAX_PAGE_COUNT and Page Size

Every SQLite database is divided into fixed-size pages. The maximum number of pages is controlled by the max_page_count pragma, which defaults to 1,073,741,823. When this limit is reached, SQLite returns "database or disk is full," even if the filesystem has space. The relationship between page size, max page count, and maximum database size is:
Max Database Size = page_size × max_page_count
For a 4KB page size, the theoretical maximum is ~4TB. The user’s database hit this ceiling.
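
To see how the ceiling moves with the page size, the same formula can be evaluated for every valid SQLite page size at the default limit of 1,073,741,823 pages. The short awk sketch below is pure arithmetic and needs no database:

awk 'BEGIN {
  # Max database size = page_size × default max_page_count (1073741823)
  n = split("512 1024 2048 4096 8192 16384 32768 65536", ps, " ");
  for (i = 1; i <= n; i++)
    printf "page_size=%-5d -> max database size = %.2f TB\n", ps[i], ps[i] * 1073741823 / 1e12;
}'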

2. Filesystem-Specific Limitations

While ext4 supports files up to 16TB (with 4KB blocks), some other filesystems impose far lower per-file limits; FAT32, for example, caps individual files at 4GB. The df -h output ruled out filesystem capacity issues here. Fragmentation or filesystem metadata exhaustion (e.g., running out of inodes) can also produce similar errors, though neither applied in this case.

3. Transaction Logs and Write Operations

The strace revealed writes to a journal file (database.db-journal), which SQLite uses to make transactions atomic. If the journal file cannot be created or grown (for example, because of exhausted inodes, missing permissions, or disk quotas), the same error can appear. Here, however, the journal was successfully unlinked, indicating no issues with temporary files.
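
If there is any doubt about the journal, two quick checks (the path below is a placeholder) show which journal mode the database uses and whether a rollback journal is currently sitting on disk:

sqlite3 /path/to/database.db "PRAGMA journal_mode;"
ls -lh /path/to/database.db-journal 2>/dev/null || echo "no rollback journal on disk"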


Resolving the Error: Adjusting Limits and Optimizing Storage

Step 1: Confirm Current SQLite Configuration

Run these commands in the SQLite CLI:

PRAGMA page_size;      -- Verify current page size (e.g., 4096)
PRAGMA page_count;     -- Check current number of pages
PRAGMA max_page_count; -- Confirm the enforced limit (default: 1073741823)

If page_count is close to max_page_count, the limit is the culprit.
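
The same checks can be scripted from a shell; all three pragmas run on a single connection in one sqlite3 invocation (the path is a placeholder):

sqlite3 /path/to/database.db "PRAGMA page_size; PRAGMA page_count; PRAGMA max_page_count;"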

Step 2: Increase max_page_count

Raise the limit at run-time; the setting applies only to the current database connection and is not persisted in the file, so it must be re-issued each time the database is opened:

PRAGMA max_page_count = 2147483646; -- Double the default limit

This allows the database to grow to roughly 8TB with a 4KB page size. To raise the built-in default permanently, SQLite can be recompiled with a higher SQLITE_MAX_PAGE_COUNT (e.g., by passing -DSQLITE_MAX_PAGE_COUNT=... when building the amalgamation), but this is unnecessary for most users since the pragma already works at run-time.
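
Because the setting is not stored in the database file, it has to be issued on the same connection that performs the large writes. A sketch using the sqlite3 CLI (path and limit are examples):

sqlite3 /path/to/database.db <<'SQL'
PRAGMA max_page_count = 2147483646;
PRAGMA max_page_count;   -- echoes the value actually in effect for this connection
-- ... run the large INSERT/UPDATE workload here, on this same connection ...
SQL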

Step 3: Increase Page Size

A larger page size reduces the number of pages needed for the same data. To change the page size:

PRAGMA page_size = 65536; -- Set to 64KB (maximum allowed)
VACUUM;                   -- Rebuilds the database with the new page size

After this, the same 4TB database would use only 67,108,864 pages (4TB / 64KB), well below the default max_page_count. Be aware that VACUUM rewrites the entire database: it needs temporary free space roughly equal to the database size (up to about twice the original size in total) and can run for a long time on a multi-terabyte file, so schedule it accordingly.
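
A shell version of the rebuild, with a check afterwards that the new page size actually took effect (the path is a placeholder; note that the page size cannot be changed while the database is in WAL mode):

sqlite3 /path/to/database.db "PRAGMA page_size = 65536; VACUUM;"
sqlite3 /path/to/database.db "PRAGMA page_size;"   # should now report 65536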

Step 4: Monitor Database Growth

Use automated tools to track database size:

#!/bin/bash
# Monitor SQLite database size and page usage
DB=/path/to/database.db
DB_SIZE=$(stat -c %s "$DB")
PAGE_SIZE=$(sqlite3 "$DB" "PRAGMA page_size;")
PAGE_COUNT=$(sqlite3 "$DB" "PRAGMA page_count;")
MAX_PAGES=$(sqlite3 "$DB" "PRAGMA max_page_count;")

echo "Current size: $(( DB_SIZE / 1024**3 )) GiB (page size: $PAGE_SIZE bytes)"
echo "Pages used: $PAGE_COUNT / $MAX_PAGES"

Step 5: Filesystem and OS Considerations

  • Preallocate Space: Use fallocate to reserve space for the database file ahead of time, reducing fragmentation (a combined sketch follows this list).
  • Disable Journaling: For non-critical databases, disable the rollback journal with PRAGMA journal_mode = OFF;. This eliminates journal file growth but sacrifices rollback and crash recovery: a crash mid-transaction can corrupt the database.
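
A combined sketch of both tips, using placeholder paths and sizes. Preallocating a live database file with fallocate is an interpretation of the tip above rather than a documented SQLite workflow, so test it on a copy first; --keep-size reserves the blocks without changing the apparent file size:

# Reserve 5TB of contiguous blocks for the database file without growing its apparent size
fallocate --keep-size -l 5T /path/to/database.db
# Trade rollback/crash safety for less journal churn (only for non-critical data)
sqlite3 /path/to/database.db "PRAGMA journal_mode = OFF;"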

Step 6: Long-Term Strategies

  • Sharding: Split the database into smaller shards (e.g., by date or region); see the sketch after this list.
  • Archiving: Move historical data to cold storage.
  • Compression: Compress large BLOBs at the application level, or use a zlib-based SQLite extension (such as the compress extension) for transparent compression.
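
As a sketch of date-based sharding (table and file names are hypothetical), separate yearly database files can still be queried together by attaching them to one connection:

sqlite3 events_2024.db <<'SQL'
ATTACH DATABASE 'events_2023.db' AS archive;
-- Cross-shard reads remain possible on this connection, e.g.:
--   SELECT * FROM main.events UNION ALL SELECT * FROM archive.events;
SQL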

Conclusion: Balancing SQLite Limits and Storage Requirements

The "database or disk is full" error in SQLite often masks deeper issues like page count limits or configuration mismatches. For databases approaching 4TB, proactively adjusting max_page_count or increasing the page size prevents interruptions. Regular monitoring and architectural adjustments (e.g., sharding) ensure scalability. Always validate the filesystem’s capabilities and SQLite’s pragmas when working with large datasets to avoid hitting invisible ceilings.
