Trigger-Based Row Backup in SQLite: Resolving “no such table: main.NEW” Error


Understanding the Trigger Context and the Nature of NEW

The core challenge involves creating a SQLite trigger that automatically inserts all columns of a newly inserted row into a backup table. The initial attempt uses INSERT INTO backup SELECT * FROM NEW, which fails with the error "no such table: main.NEW." This error stems from a misunderstanding of how the NEW alias operates in SQLite triggers. Unlike PostgreSQL, where NEW represents a row type that can be directly queried, SQLite treats NEW as a transient reference to the column values of the row being inserted. It does not expose NEW as a virtual table or any other queryable object. Consequently, attempting to SELECT * FROM NEW is syntactically invalid because NEW is not a table, view, or common table expression. This limitation forces developers to adopt alternative strategies for bulk-copying row data between tables.
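
To make the distinction concrete, here is a minimal sketch using a hypothetical two-column table t: column-by-column references to NEW are legal inside the trigger body, but NEW can never appear in a FROM clause.

CREATE TABLE t (a INTEGER, b TEXT);
CREATE TABLE t_backup (a INTEGER, b TEXT);

-- Invalid: NEW is not a table, so this body fails with "no such table: main.NEW"
--   INSERT INTO t_backup SELECT * FROM NEW;

-- Valid: reference the inserted row's values column by column
CREATE TRIGGER t_backup_trigger AFTER INSERT ON t
BEGIN
  INSERT INTO t_backup (a, b) VALUES (NEW.a, NEW.b);
END;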

The backup table must structurally mirror the source table to accept all column values. Any mismatch in column count, order, or data type would cause the trigger's INSERT to fail even if a SELECT *-style copy were possible. Schema synchronization between the source and backup tables is therefore a critical dependency: adding a column to the source table without updating the backup table will break the trigger. This interdependency complicates maintenance, as schema changes require coordinated updates to both tables and any associated triggers.


Schema Synchronization, Dynamic SQL Limitations, and Row Identification

Three primary factors contribute to the problem:

  1. NEW as a Non-Queryable Row Reference:
    SQLite triggers provide NEW.column_name to access individual column values but do not support treating NEW as a row-shaped object. This design simplifies trigger logic for common use cases but complicates bulk operations. Developers accustomed to PostgreSQL’s composite types or Oracle’s :NEW pseudo-record may find this behavior unintuitive. The error "no such table: main.NEW" directly results from attempting to query NEW as if it were a table.

  2. Schema Synchronization Requirements:
    Even if SELECT * FROM NEW worked, the backup table’s schema must exactly match the source table’s structure. Adding, removing, or reordering columns in the source table without propagating these changes to the backup table would cause runtime errors. For example, if the source table gains a new non-nullable column without a default value, the backup table must also include this column; otherwise, the trigger’s INSERT will fail due to a column count mismatch.

  3. Absence of Dynamic SQL in Triggers:
    SQLite triggers cannot dynamically generate SQL statements based on runtime schema metadata. A trigger's body is static and parsed at creation time. This limitation rules out workarounds such as querying PRAGMA table_info() inside the trigger to programmatically construct a column list; the pragma is only useful to external code, as sketched below. Consequently, triggers cannot adapt to schema changes without being manually redefined.
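
For reference when writing such external generators, PRAGMA table_info() returns one row per column with the fields cid, name, type, notnull, dflt_value, and pk. A trigger body cannot consume this output, but a script can, using either form below (the table-valued variant requires SQLite 3.16+):

PRAGMA table_info(source_table);
SELECT name, type FROM pragma_table_info('source_table');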

A secondary challenge involves efficiently identifying the newly inserted row in the source table for backup. Using WHERE rowid = NEW.rowid requires the source table to have a rowid, so this approach fails for WITHOUT ROWID tables, which have no rowid and frequently use composite primary keys; those tables require explicit primary key comparisons instead. Additionally, re-querying the source table within the trigger introduces an extra lookup per insert, which adds overhead for high-throughput applications.


Step-by-Step Solutions for Row Backup Triggers

1. Explicit Column Listing with Schema Synchronization
The most reliable method is to explicitly list all columns in the trigger’s INSERT statement. While this requires manual updates when the schema changes, it ensures compatibility and clarity. For example:

CREATE TRIGGER backup_trigger AFTER INSERT ON source_table
BEGIN
  INSERT INTO backup_table (col1, col2, ..., colN)
  VALUES (NEW.col1, NEW.col2, ..., NEW.colN);
END;

To automate schema synchronization, use a script or application code to generate the trigger definition by querying PRAGMA table_info(source_table). This approach extracts column names dynamically and constructs the INSERT statement. For instance, in Python:

import sqlite3

def create_backup_trigger(conn, source_table, backup_table):
    # Read the source table's column names from its metadata
    # (identifiers cannot be bound as parameters, so table names
    # must come from trusted input).
    cursor = conn.execute(f"PRAGMA table_info({source_table})")
    columns = [row[1] for row in cursor.fetchall()]
    # Build a quoted column list and the matching NEW.* references.
    cols_quoted = ', '.join(f'"{col}"' for col in columns)
    values = ', '.join(f'NEW."{col}"' for col in columns)
    # Assemble and install the trigger.
    trigger_sql = f"""
    CREATE TRIGGER backup_trigger AFTER INSERT ON {source_table}
    BEGIN
      INSERT INTO {backup_table} ({cols_quoted})
      VALUES ({values});
    END;
    """
    conn.execute(trigger_sql)

This script must be rerun whenever the source table’s schema changes. It eliminates manual column listing but requires external automation.
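
For a hypothetical source table with columns id, name, and created_at, the script above would emit a trigger equivalent to the following:

CREATE TRIGGER backup_trigger AFTER INSERT ON source_table
BEGIN
  INSERT INTO backup_table ("id", "name", "created_at")
  VALUES (NEW."id", NEW."name", NEW."created_at");
END;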

2. Copying via Source Table Query with Rowid
If the source table uses an implicit rowid, the trigger can re-query the source table after insertion:

CREATE TRIGGER backup_trigger AFTER INSERT ON source_table
BEGIN
  INSERT INTO backup_table SELECT * FROM source_table WHERE rowid = NEW.rowid;
END;

This method works for any ordinary rowid table; a single-column INTEGER PRIMARY KEY is simply an alias for the rowid. For WITHOUT ROWID tables, which have no rowid, modify the WHERE clause to match all primary key columns:

CREATE TRIGGER backup_trigger AFTER INSERT ON source_table
BEGIN
  INSERT INTO backup_table
  SELECT * FROM source_table
  WHERE pk_col1 = NEW.pk_col1 AND pk_col2 = NEW.pk_col2;
END;

The downside is the extra read of source_table performed inside the trigger. With a rowid or primary-key predicate this is an indexed search rather than a full table scan, but it still adds work to every insert, so weigh the overhead if insert throughput matters.
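
To confirm that the predicate resolves to a keyed search, inspect the query plan; the exact wording varies across SQLite versions, but it should report a search on the integer primary key rather than a scan:

EXPLAIN QUERY PLAN
SELECT * FROM source_table WHERE rowid = 1;
-- typical output: SEARCH source_table USING INTEGER PRIMARY KEY (rowid=?)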

3. Schema Duplication with Shadow Tables
Create the backup table as an exact structural copy of the source table. This ensures column consistency:

CREATE TABLE backup_table AS SELECT * FROM source_table WHERE 0;

The WHERE 0 clause creates an empty table with the same column names, but CREATE TABLE ... AS SELECT does not preserve constraints, indexes, defaults, or exact declared types. Note that SQLite has no CREATE TABLE ... LIKE statement (that syntax belongs to MySQL). To produce a faithful structural copy, retrieve the source table's original DDL from sqlite_master and re-execute it with the table name changed:

SELECT sql FROM sqlite_master WHERE type = 'table' AND name = 'source_table';

Edit the returned CREATE TABLE statement to name the copy backup_table, then execute it.

After creating the backup table, use one of the trigger strategies above. Periodically check schema parity; run the following query, then repeat it with the two table names swapped to catch columns present only in the backup table:

SELECT name, type FROM pragma_table_info('source_table')
EXCEPT
SELECT name, type FROM pragma_table_info('backup_table');

4. Leveraging SQLite’s C API for Dynamic Triggers (Advanced)
For embedded systems or applications with access to SQLite's C API, custom virtual tables or user-defined functions can be used to construct INSERT statements dynamically. This approach is beyond typical SQL usage but allows genuinely dynamic behavior. One sketch would involve passing the NEW column values from the trigger to a user-defined function that serializes them (for example as JSON), so the host application can parse the payload and write it to the backup table. However, this requires significant programming effort and tight integration with the host language.
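
Short of the C API, a related compromise is possible in plain SQL, assuming the built-in JSON1 functions are available: store each inserted row as a JSON payload in a single-column backup table. The column list still has to be spelled out in the trigger, but the backup table itself no longer needs to mirror the source schema. A minimal sketch:

CREATE TABLE backup_log (payload TEXT NOT NULL);

CREATE TRIGGER backup_trigger AFTER INSERT ON source_table
BEGIN
  -- json_object() pairs each column name with its NEW value
  INSERT INTO backup_log (payload)
  VALUES (json_object('col1', NEW.col1, 'col2', NEW.col2));
END;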

5. Embracing Schema Change Management
Accept that schema changes necessitate trigger updates and formalize a change management process. Use migration scripts to modify the source table, the backup table, and the trigger atomically. For example:

-- Migration: Add column 'new_col' to source_table and backup_table
BEGIN TRANSACTION;
ALTER TABLE source_table ADD COLUMN new_col TEXT;
ALTER TABLE backup_table ADD COLUMN new_col TEXT;
DROP TRIGGER backup_trigger;
-- Recreate trigger with new_col included
CREATE TRIGGER backup_trigger AFTER INSERT ON source_table ...;
COMMIT;

Tools like SQLAlchemy Alembic or Flyway can automate this process.


Final Recommendations

For most applications, explicit column listing with automated trigger generation offers the best balance of reliability and maintainability. Combine this with schema migration tools to ensure the backup table and trigger stay synchronized with the source table. Avoid SELECT * in production triggers due to its fragility under schema changes. If the re-query method is used, prefer the rowid predicate (or an exact primary-key match on WITHOUT ROWID tables) so that the lookup resolves to a keyed search rather than a scan.
