Unexpected Data Loss When Altering Strict Tables in SQLite Expert
Issue Overview: Data Wipeout During Schema Modification with STRICT Tables
The core problem revolves around catastrophic data loss occurring during a schema modification operation on SQLite STRICT tables using SQLite Expert 5.4.6. The user’s database retained its structural integrity (tables, views, triggers, constraints) but lost all data across multiple unrelated tables after attempting to add a column to an existing table. This incident occurred in an environment using SQLite 3.37.0 with strict table enforcement, foreign key constraints, check constraints, and triggers. Critically, the database file size remained unchanged, ruling out accidental file truncation or deletion. The user’s workflow involved frequent schema changes and data validation, with strict tables ensuring type enforcement and constraint validation. Despite daily backups, hours of unsaved work were lost due to the unexpected data purge.
Key environmental factors include:
- SQLite Expert 5.4.6: A third-party GUI tool interfacing with SQLite.
- STRICT Tables: All tables were defined with STRICT mode, enforcing rigid column typing and constraint validation.
- Complex Constraints: Foreign keys, unique constraints, and check clauses (e.g., [modified] >= [created]) were pervasive.
- Foreign Key Cascades: While cascading deletes were not explicitly configured, foreign key relationships spanned multiple tables (e.g., addresses ↔ streets, persons ↔ births).
The data loss was not isolated to the modified table but affected unrelated tables (e.g., continents, genders), eliminating cascading deletes as the sole culprit. The absence of transactional rollback or error logs complicates root cause analysis.
Potential Culprits: STRICT Mode Interactions, Tool Behavior, and Constraint Cascades
Three primary vectors explain the data loss:
1. SQLite Expert’s Schema Modification Logic
GUI tools like SQLite Expert often implement ALTER TABLE operations by creating a new table, copying data, dropping the original, and renaming the new table. This process is necessitated by SQLite’s limited ALTER TABLE support (e.g., no direct way to change a column’s type or constraints). If interrupted or mishandled, this workflow can truncate data. STRICT tables exacerbate this risk:
- Type Enforcement Conflicts: If the GUI tool fails to handle STRICT type checks during data migration (e.g., invalid type conversions), the copy operation might abort, leaving an empty table (see the sketch after this list).
- Constraint Validation Failures: Check constraints (e.g., [modified] >= [created]) or unique indexes could reject copied data, causing the tool to discard records.
- Transaction Misuse: If the tool does not wrap operations in a transaction, partial failures leave the database in an inconsistent state.
2. SQLite STRICT Table Edge Cases
While SQLite’s STRICT mode is designed to enforce type rigidity, undocumented interactions with schema changes could theoretically corrupt data. For example:
- Implicit Column Type Coercion: STRICT tables reject type conversions that non-strict tables permit. A tool that copies data without accounting for strict typing can have rows rejected mid-migration, silently shrinking the replacement table.
- Metadata Corruption: Rare bugs in SQLite’s page cache or schema parser could misrepresent table definitions, though this is highly unlikely given SQLite’s robustness.
3. Constraint-Triggered Data Purging
Foreign key constraints with ON DELETE CASCADE or triggers could propagate deletions across tables. However, the user explicitly denied executing DELETE or DROP commands. A misconfigured trigger (e.g., an AFTER UPDATE trigger that deletes rows) might explain the data loss, but the schema’s triggers (e.g., contacts_update_modified) only update timestamps.
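For context, a timestamp-maintenance trigger of the kind described typically has the following shape (a reconstruction under assumed table and column names, not the user’s actual DDL); a body like this can only rewrite timestamps, so cross-table purging would require a trigger whose body contains DELETE statements:
CREATE TRIGGER contacts_update_modified
AFTER UPDATE ON contacts
BEGIN
  -- only touches the updated row's timestamp; cannot remove rows
  UPDATE contacts SET modified = datetime('now') WHERE contact_id = NEW.contact_id;
END;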
Troubleshooting Steps, Solutions & Fixes
Step 1: Immediate Data Recovery Attempts
1.1 Check Write-Ahead Log (WAL) Files
If the database was in WAL mode (journal_mode = WAL), the -wal and -shm files may contain uncommitted transactions or recent data:
cp original.db recovered.db
cp original.db-wal recovered.db-wal
cp original.db-shm recovered.db-shm
Open recovered.db using the SQLite CLI. If WAL entries exist, finalize them with:
PRAGMA wal_checkpoint(TRUNCATE);
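After the checkpoint completes, a quick sanity check in the same CLI session shows whether anything came back (continents is one of the affected tables named above; substitute any table of interest):
PRAGMA integrity_check;
SELECT COUNT(*) FROM continents;  -- a non-zero count means the WAL still held committed rows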
1.2 Use Forensic Tools
Tools like sqlite3_undel or undark can scan database pages for orphaned data:
undark -i corrupted.db > recovered.sql
This extracts raw data from free pages or slack space, which may include deleted records.
1.3 Restore from Backups
If backups exist (e.g., .bak files, versioned copies), validate their integrity:
.open backup.db
PRAGMA integrity_check;
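If the backup passes, rows can be copied back into the damaged database table by table; a minimal sketch, assuming the backup file is named backup.db and using continents as the example table:
ATTACH DATABASE 'backup.db' AS bak;
PRAGMA foreign_keys = OFF;  -- avoid ordering problems while repopulating
BEGIN;
INSERT INTO main.continents SELECT * FROM bak.continents;
-- repeat for each emptied table
COMMIT;
PRAGMA foreign_keys = ON;
DETACH DATABASE bak;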
Step 2: Diagnose Schema Modification Workflows
2.1 Reproduce the Issue in a Controlled Environment
- Create a clone of the schema using the provided structure.
- Populate tables with test data.
- Use SQLite Expert to add a column to a STRICT table and monitor the tool’s SQL output (if available); a minimal seed script is sketched below.
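A minimal seed sketch for the clone; the STRICT definition mirrors the kind of schema described above, but the names, columns, and values are assumptions rather than the user’s exact DDL:
CREATE TABLE continents (
  continent_id INTEGER PRIMARY KEY NOT NULL,
  name TEXT NOT NULL UNIQUE,
  created INTEGER NOT NULL,
  modified INTEGER NOT NULL,
  CHECK (modified >= created)
) STRICT;
INSERT INTO continents VALUES
  (1, 'Europe', 1714600000, 1714600000),
  (2, 'Asia',   1714600000, 1714600500);
-- add a column with SQLite Expert, then confirm the rows survived:
SELECT COUNT(*) FROM continents;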
2.2 Audit SQLite Expert’s Generated SQL
Enable logging in SQLite Expert if the tool offers it, or capture executed statements with SQLite’s sqlite3_trace_v2() C API when driving the database from your own code:
sqlite3_trace_v2(db, SQLITE_TRACE_STMT, trace_callback, NULL);
Look for statements like:
CREATE TABLE new_table (...);
INSERT INTO new_table SELECT ... FROM old_table;
DROP TABLE old_table;
ALTER TABLE new_table RENAME TO old_table;
If the INSERT fails (e.g., due to STRICT type checks) but the tool proceeds with the DROP TABLE and rename anyway, the schema survives while the table’s data is gone.
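If SQLite Expert exposes no log and building a C harness is impractical, the closest substitute is to replay the same change by hand (as in Step 2.3 below) with the sqlite3 CLI’s .trace dot-command enabled, which records every statement that the CLI session executes:
.open test.db
.trace trace.log
-- replay the suspected schema-change statements here; each executed
-- statement is appended to trace.log for later review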
2.3 Test with SQLite CLI
Perform the same schema change using raw SQL:
-- Create new table
CREATE TABLE addresses_new (
  address_id INTEGER PRIMARY KEY NOT NULL,
  street_id INTEGER NOT NULL,
  housenr TEXT NOT NULL,
  ...
) STRICT;
-- Copy data
INSERT INTO addresses_new SELECT * FROM addresses;
-- Verify row count
SELECT COUNT(*) FROM addresses_new;
-- Drop old table
DROP TABLE addresses;
-- Rename new table
ALTER TABLE addresses_new RENAME TO addresses;
If this succeeds, the issue lies with SQLite Expert’s implementation.
Step 3: Prevent Future Data Loss
3.1 Enable Atomic Transactions
Ensure all schema changes are wrapped in transactions:
BEGIN;
-- Schema modification steps
COMMIT;
If using a GUI tool, verify that it wraps the entire create-copy-drop-rename sequence in a single transaction.
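For manual changes, the sketch below condenses the copy-and-rename procedure recommended in the SQLite documentation so that a failure at any step can be rolled back before the original table is dropped; the column list is illustrative, and region stands in for whatever column is being added:
PRAGMA foreign_keys = OFF;  -- must be changed outside a transaction
BEGIN;
CREATE TABLE addresses_new (
  address_id INTEGER PRIMARY KEY NOT NULL,
  street_id INTEGER NOT NULL,
  housenr TEXT NOT NULL,
  region TEXT  -- hypothetical new column
) STRICT;
INSERT INTO addresses_new (address_id, street_id, housenr)
  SELECT address_id, street_id, housenr FROM addresses;
DROP TABLE addresses;
ALTER TABLE addresses_new RENAME TO addresses;
PRAGMA foreign_key_check;  -- should return no rows
COMMIT;  -- issue ROLLBACK instead if any step above failed
PRAGMA foreign_keys = ON;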
3.2 Use Versioned Backups
Automate backups before schema changes:
sqlite3 original.db ".backup backup-$(date +%s).db"
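Because the reported environment runs SQLite 3.37.0, the SQL-level VACUUM INTO command (available since 3.27) is another option that works from any connected client, including a GUI session; the target filename here is a placeholder:
VACUUM INTO 'backup-before-schema-change.db';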
3.3 Avoid GUI Tools for Critical Operations
Perform schema changes via scripts or the SQLite CLI to maintain control.
3.4 Monitor Constraint Interactions
Audit triggers and foreign keys for cascading actions:
SELECT * FROM pragma_foreign_key_list('table_name');
SELECT * FROM sqlite_master WHERE type = 'trigger';
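To narrow that audit to the genuinely risky cases, the following queries flag cascading deletes and triggers whose bodies contain a DELETE (addresses is an example table name from the schema above):
SELECT "table", "from", "to", on_delete
  FROM pragma_foreign_key_list('addresses')
 WHERE on_delete = 'CASCADE';
SELECT name, tbl_name
  FROM sqlite_master
 WHERE type = 'trigger' AND sql LIKE '%DELETE%';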
3.5 File Bug Reports
If SQLite Expert is implicated, report the issue to its developers with reproduction steps. For SQLite itself, submit a minimal test case to the SQLite forum.
By methodically isolating the fault domain (tool vs. SQLite vs. user error) and implementing safeguards, users can mitigate the risks of catastrophic data loss while leveraging STRICT tables’ benefits.