Error Handling and Transactions

A database transaction is a logical unit of work that consists of one or more SQL operations that must be executed together as a single, indivisible unit. The fundamental principle behind transactions is that they either complete entirely (all operations succeed) or fail entirely (no operations take effect), ensuring data consistency and integrity even when errors occur or systems fail unexpectedly.

Think of a transaction like a contract where all terms must be fulfilled, or the entire contract becomes void. In banking systems, for example, when you transfer money from one account to another, the system must both debit the source account and credit the destination account. If either operation fails, both must be reversed to maintain the integrity of the financial data.

Transactions provide the foundation for reliable database operations by guaranteeing what database professionals call the ACID properties. Atomicity ensures that transactions are all-or-nothing operations. Consistency ensures that each transaction moves the database from one valid state to another. Isolation ensures that concurrent transactions don’t interfere with each other inappropriately. Durability ensures that committed transactions persist even if the system crashes immediately afterward.

The transaction lifecycle begins when the first SQL statement in a transaction executes, continues through subsequent operations, and ends when the transaction is either committed (making all changes permanent) or rolled back (undoing all changes). During this lifecycle, the database system maintains logs and locks to ensure ACID compliance while providing reasonable performance for concurrent users.
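The lifecycle described above can be sketched in its simplest form. The table and column names here are illustrative, not taken from a specific schema:

-- Transaction lifecycle: begin, do the work, then commit or roll back.
BEGIN TRANSACTION;

UPDATE accounts SET account_balance = account_balance - 100 WHERE account_id = 1;
UPDATE accounts SET account_balance = account_balance + 100 WHERE account_id = 2;

-- If both statements succeeded, make the changes permanent...
COMMIT TRANSACTION;
-- ...or, on failure, undo everything since BEGIN TRANSACTION:
-- ROLLBACK TRANSACTION;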

Transaction Control Statements

Database systems provide explicit transaction control through several key statements that allow you to mark transaction boundaries and control their outcomes. Understanding these statements is crucial for implementing robust database applications that can handle errors gracefully while maintaining data integrity.

The BEGIN TRANSACTION statement explicitly starts a new transaction, giving you precise control over when transactional boundaries begin. While many database systems automatically start transactions for individual statements, explicit transaction control becomes essential when multiple related operations must succeed or fail together.

BEGIN TRANSACTION CustomerOrderProcessing;

-- Insert new customer if they don't exist
IF NOT EXISTS (SELECT 1 FROM customers WHERE email = 'john@example.com')
BEGIN
    INSERT INTO customers (customer_name, email, registration_date)
    VALUES ('John Smith', 'john@example.com', GETDATE());
END

-- Get the customer ID for order processing
DECLARE @CustomerID INT;
SELECT @CustomerID = customer_id FROM customers WHERE email = 'john@example.com';

-- Create the order header
INSERT INTO orders (customer_id, order_date, order_status, order_total)
VALUES (@CustomerID, GETDATE(), 'Processing', 0);

-- Get the newly created order ID
DECLARE @OrderID INT = SCOPE_IDENTITY();

-- Add order line items
INSERT INTO order_details (order_id, product_id, quantity, unit_price)
VALUES 
    (@OrderID, 101, 2, 29.99),
    (@OrderID, 205, 1, 149.99),
    (@OrderID, 310, 3, 15.50);

-- Update the order total based on line items
UPDATE orders 
SET order_total = (
    SELECT SUM(quantity * unit_price) 
    FROM order_details 
    WHERE order_id = @OrderID
)
WHERE order_id = @OrderID;

-- Update inventory for purchased items
UPDATE p
SET p.stock_quantity = p.stock_quantity - od.quantity
FROM products p
INNER JOIN order_details od ON p.product_id = od.product_id
WHERE od.order_id = @OrderID;

COMMIT TRANSACTION CustomerOrderProcessing;

This comprehensive example demonstrates how multiple related operations can be grouped into a single transaction to ensure data consistency across customer creation, order processing, and inventory management.

Rollback Operations

The ROLLBACK statement undoes all changes made within the current transaction, returning the database to its state before the transaction began. Rollbacks can be triggered explicitly by application logic or automatically by the database system when errors occur.

BEGIN TRANSACTION ProductInventoryUpdate;

DECLARE @ProductID INT = 150;
DECLARE @OrderQuantity INT = 5;
DECLARE @CurrentStock INT;

-- Check current inventory level
SELECT @CurrentStock = stock_quantity 
FROM products 
WHERE product_id = @ProductID;

-- Validate sufficient inventory exists
IF @CurrentStock < @OrderQuantity
BEGIN
    PRINT 'Insufficient inventory. Current stock: ' + CAST(@CurrentStock AS VARCHAR);
    PRINT 'Required quantity: ' + CAST(@OrderQuantity AS VARCHAR);
    ROLLBACK TRANSACTION ProductInventoryUpdate;
    RETURN; -- Exit the batch
END

-- Proceed with inventory reduction
UPDATE products 
SET stock_quantity = stock_quantity - @OrderQuantity,
    last_updated = GETDATE()
WHERE product_id = @ProductID;

-- Verify the update was successful (@@ROWCOUNT reflects the most recent statement)
IF @@ROWCOUNT = 0
BEGIN
    PRINT 'Failed to update product inventory';
    ROLLBACK TRANSACTION ProductInventoryUpdate;
    RETURN;
END

-- Log the inventory change
INSERT INTO inventory_log (product_id, change_quantity, change_date, change_reason)
VALUES (@ProductID, -@OrderQuantity, GETDATE(), 'Sale');

COMMIT TRANSACTION ProductInventoryUpdate;
PRINT 'Inventory successfully updated';

This example shows how rollback logic can be integrated with business validation to ensure that transactions only complete when all business rules are satisfied.
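Rollbacks can also be made automatic. In SQL Server, SET XACT_ABORT ON causes most run-time errors to terminate the batch and roll back the entire open transaction without any explicit ROLLBACK logic. The table names below follow the examples above and are illustrative:

-- With XACT_ABORT ON, a run-time error aborts the batch and
-- rolls back the open transaction automatically.
SET XACT_ABORT ON;

BEGIN TRANSACTION;

UPDATE products SET stock_quantity = stock_quantity - 5 WHERE product_id = 150;

-- If this statement fails (for example, with a constraint violation),
-- the whole transaction is rolled back and the batch stops here.
INSERT INTO inventory_log (product_id, change_quantity, change_date, change_reason)
VALUES (150, -5, GETDATE(), 'Sale');

COMMIT TRANSACTION;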

Savepoints and Nested Transactions

Savepoints provide a mechanism for creating intermediate checkpoints within a transaction, allowing you to roll back to a specific point rather than abandoning the entire transaction. This capability is particularly valuable in complex transactions where you want to handle errors for individual operations while preserving the work done in earlier parts of the transaction.

BEGIN TRANSACTION ComplexDataMigration;

-- First phase: Clean up existing data
DELETE FROM temporary_staging_table WHERE import_date < DATEADD(DAY, -30, GETDATE());
SAVE TRANSACTION AfterCleanup;

-- Second phase: Import new customer data
BEGIN TRY
    INSERT INTO customers (customer_name, email, phone)
    SELECT name, email_address, phone_number
    FROM external_customer_data
    WHERE import_status = 'Pending';
    
    SAVE TRANSACTION AfterCustomerImport;
END TRY
BEGIN CATCH
    PRINT 'Customer import failed: ' + ERROR_MESSAGE();
    ROLLBACK TRANSACTION AfterCleanup;
    -- Re-establish the savepoint so later phases can still roll back to this point
    SAVE TRANSACTION AfterCustomerImport;
    -- Continue with next phase despite customer import failure
END CATCH

-- Third phase: Import order data
BEGIN TRY
    INSERT INTO orders (customer_id, order_date, order_total)
    SELECT 
        c.customer_id,
        eod.order_date,
        eod.total_amount
    FROM external_order_data eod
    INNER JOIN customers c ON eod.customer_email = c.email
    WHERE eod.import_status = 'Pending';
    
    SAVE TRANSACTION AfterOrderImport;
END TRY
BEGIN CATCH
    PRINT 'Order import failed: ' + ERROR_MESSAGE();
    ROLLBACK TRANSACTION AfterCustomerImport;
    -- Preserve customer data but skip order import
END CATCH

-- Final phase: Update import status
UPDATE external_customer_data SET import_status = 'Completed' 
WHERE import_status = 'Pending';

UPDATE external_order_data SET import_status = 'Completed'
WHERE import_status = 'Pending';

COMMIT TRANSACTION ComplexDataMigration;

This pattern demonstrates how savepoints enable granular error recovery in complex multi-phase operations, preserving the work of successful phases while recovering from failures in later ones.

Structured Error Handling

Modern database systems provide structured error handling mechanisms that allow applications to detect, analyze, and respond to errors in a controlled manner. Structured error handling replaces the older pattern of checking return codes and global variables with a more robust exception-based approach.

The TRY/CATCH construct provides the foundation for structured error handling, allowing you to separate normal processing logic from error handling logic. When an error occurs within a TRY block, control immediately transfers to the associated CATCH block, where you can examine error details and determine the appropriate response.

CREATE PROCEDURE ProcessCustomerPayment
    @CustomerID INT,
    @PaymentAmount DECIMAL(10,2),
    @PaymentMethod VARCHAR(20),
    @ProcessingResult VARCHAR(100) OUTPUT
AS
BEGIN
    BEGIN TRANSACTION PaymentProcessing;
    
    BEGIN TRY
        -- Validate customer exists and is active
        DECLARE @CustomerStatus VARCHAR(20);
        SELECT @CustomerStatus = customer_status 
        FROM customers 
        WHERE customer_id = @CustomerID;
        
        IF @CustomerStatus IS NULL
        BEGIN
            RAISERROR('Customer ID %d not found', 16, 1, @CustomerID);
        END
        
        IF @CustomerStatus != 'Active'
        BEGIN
            RAISERROR('Customer account is not active. Current status: %s', 16, 1, @CustomerStatus);
        END
        
        -- Validate payment amount (RAISERROR arguments must be variables or
        -- constants, not expressions such as CAST)
        IF @PaymentAmount <= 0
        BEGIN
            DECLARE @AmountText VARCHAR(20) = CAST(@PaymentAmount AS VARCHAR(20));
            RAISERROR('Payment amount must be positive. Received: %s', 16, 1, @AmountText);
        END
        
        -- Record the payment
        INSERT INTO payments (customer_id, payment_amount, payment_method, payment_date, payment_status)
        VALUES (@CustomerID, @PaymentAmount, @PaymentMethod, GETDATE(), 'Completed');
        
        -- Update customer balance
        UPDATE customers 
        SET account_balance = account_balance + @PaymentAmount,
            last_payment_date = GETDATE()
        WHERE customer_id = @CustomerID;
        
        -- Log successful transaction
        INSERT INTO payment_audit (customer_id, payment_amount, processing_date, status, notes)
        VALUES (@CustomerID, @PaymentAmount, GETDATE(), 'Success', 'Payment processed successfully');
        
        COMMIT TRANSACTION PaymentProcessing;
        SET @ProcessingResult = 'Payment of ' + CAST(@PaymentAmount AS VARCHAR) + ' processed successfully';
        
    END TRY
    BEGIN CATCH
        -- Error occurred; roll back if a transaction is still open
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION PaymentProcessing;
        
        -- Capture error information
        DECLARE @ErrorMessage VARCHAR(4000) = ERROR_MESSAGE();
        DECLARE @ErrorSeverity INT = ERROR_SEVERITY();
        DECLARE @ErrorState INT = ERROR_STATE();
        DECLARE @ErrorProcedure NVARCHAR(128) = ERROR_PROCEDURE();
        DECLARE @ErrorLine INT = ERROR_LINE();
        
        -- Log the error
        INSERT INTO error_log (
            error_date, procedure_name, error_line, error_message, 
            error_severity, customer_id, attempted_amount
        )
        VALUES (
            GETDATE(), @ErrorProcedure, @ErrorLine, @ErrorMessage,
            @ErrorSeverity, @CustomerID, @PaymentAmount
        );
        
        -- Return error information to caller
        SET @ProcessingResult = 'Payment processing failed: ' + @ErrorMessage;
        
        -- Optionally re-raise the error for calling application
        -- RAISERROR(@ErrorMessage, @ErrorSeverity, @ErrorState);
        
    END CATCH
END

This comprehensive error handling example demonstrates validation, business logic implementation, structured error capture, audit logging, and graceful error recovery, all within a transactional context.
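When the calling application should also see the error, the CATCH block can re-raise it. In SQL Server 2012 and later, a parameterless THROW re-raises the original error with its number, severity, and state intact. This sketch assumes a payments table like the one in the procedure above:

BEGIN TRY
    BEGIN TRANSACTION;

    INSERT INTO payments (customer_id, payment_amount, payment_method, payment_date, payment_status)
    VALUES (9999, 50.00, 'Card', GETDATE(), 'Completed');

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;

    -- Re-raise the original error (number, severity, and state preserved)
    -- so the calling application can handle it as well.
    THROW;
END CATCH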

Error Information Functions

Database systems provide several built-in functions that allow you to examine details about errors that occur within CATCH blocks. These functions provide comprehensive information about what went wrong, where it happened, and how severe the error is, enabling sophisticated error handling and logging strategies.

ERROR_NUMBER() returns the number of the error, which is useful for handling specific error conditions programmatically, such as detecting deadlock error 1205.

ERROR_MESSAGE() returns the complete text of the error message, including any parameter substitutions. This is often the most useful piece of information for debugging and user communication, as it contains the specific details about what caused the error.

ERROR_SEVERITY() returns the severity level of the error, which indicates how serious the database system considers the error to be. Severity levels range from informational messages to critical system errors, helping you determine appropriate response strategies.

ERROR_STATE() returns the error state number, which can provide additional context about the specific circumstances that triggered the error. While less commonly used than message and severity, the state can be valuable for distinguishing between different instances of the same error type.

ERROR_PROCEDURE() returns the name of the stored procedure or function where the error occurred, which is invaluable for debugging complex applications with multiple layers of database objects.

ERROR_LINE() returns the line number within the procedure where the error occurred, providing precise location information for debugging purposes.

CREATE PROCEDURE DemonstrateErrorHandling
AS
BEGIN
    BEGIN TRY
        -- Intentionally cause a division by zero error
        DECLARE @Result INT = 10 / 0;
    END TRY
    BEGIN CATCH
        SELECT 
            'Error occurred in procedure execution' as error_context,
            ERROR_MESSAGE() as error_message,
            ERROR_SEVERITY() as error_severity,
            ERROR_STATE() as error_state,
            ERROR_PROCEDURE() as error_procedure,
            ERROR_LINE() as error_line,
            GETDATE() as error_timestamp;
            
        -- Demonstrate different handling based on severity
        IF ERROR_SEVERITY() >= 16
        BEGIN
            PRINT 'Critical error detected - notifying administrators';
            -- In real applications, this might send alerts or notifications
        END
        ELSE
        BEGIN
            PRINT 'Non-critical error - logging for review';
        END
    END CATCH
END

Transaction Isolation Levels

Transaction isolation levels control how database systems handle concurrent access to data, balancing data consistency with system performance. Different isolation levels provide different guarantees about what concurrent transactions can see and how they interact with each other’s changes.

READ UNCOMMITTED is the lowest isolation level, allowing transactions to read data that has been modified by other transactions but not yet committed. This level provides maximum concurrency but can result in dirty reads, where your transaction sees changes that might be rolled back by other transactions.

READ COMMITTED ensures that transactions can only read data that has been committed by other transactions, eliminating dirty reads. However, this level still allows non-repeatable reads, where the same query executed twice within a transaction might return different results if another transaction commits changes between the executions.

REPEATABLE READ guarantees that if a transaction reads data, subsequent reads of the same data within the same transaction will return identical results, even if other transactions modify that data. This prevents non-repeatable reads but can still allow phantom reads, where new rows matching your query criteria appear between executions.

SERIALIZABLE is the highest isolation level, providing complete isolation from other transactions. Transactions executing at this level appear to execute sequentially, eliminating all concurrency anomalies but potentially reducing system throughput due to increased locking and blocking.

-- Demonstrate different isolation levels
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

BEGIN TRANSACTION InventoryAnalysis;

-- First read of inventory data
SELECT product_id, product_name, stock_quantity 
FROM products 
WHERE category_id = 10;

-- Simulate some processing time during which other transactions might modify data
WAITFOR DELAY '00:00:05';

-- Second read of the same data
-- Under READ COMMITTED, this might return different results
-- if other transactions committed changes to the inventory
SELECT product_id, product_name, stock_quantity 
FROM products 
WHERE category_id = 10;

COMMIT TRANSACTION InventoryAnalysis;

-- Now demonstrate REPEATABLE READ
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

BEGIN TRANSACTION ConsistentInventoryAnalysis;

-- First read
SELECT product_id, product_name, stock_quantity 
FROM products 
WHERE category_id = 10;

WAITFOR DELAY '00:00:05';

-- Second read will return identical results to the first read
-- even if other transactions have modified the data
SELECT product_id, product_name, stock_quantity 
FROM products 
WHERE category_id = 10;

COMMIT TRANSACTION ConsistentInventoryAnalysis;
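For comparison, READ UNCOMMITTED permits dirty reads. Run in a second session while another session holds an uncommitted change, the query below can see values that may later be rolled back. The two-session setup here is a sketch reusing the products table from the examples above:

-- Session 1: modify data but do not commit yet
BEGIN TRANSACTION;
UPDATE products SET stock_quantity = 0 WHERE category_id = 10;
-- (transaction left open; no COMMIT or ROLLBACK yet)

-- Session 2: READ UNCOMMITTED sees the uncommitted change (a dirty read)
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT product_id, product_name, stock_quantity
FROM products
WHERE category_id = 10;

-- If Session 1 now issues ROLLBACK, Session 2 has read data
-- that never logically existed.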

Deadlock Detection and Prevention

Deadlocks occur when two or more transactions are blocked indefinitely, each waiting for resources held by the other transactions. Database systems automatically detect deadlock situations and resolve them by choosing one transaction as a deadlock victim and rolling it back, allowing the other transactions to proceed.

Understanding common deadlock scenarios helps you design applications that minimize deadlock occurrence. The most common pattern involves two transactions accessing the same resources in different orders, creating a circular dependency where neither can proceed.

-- Transaction A (running in one session)
BEGIN TRANSACTION TransactionA;

UPDATE customers SET customer_status = 'VIP' WHERE customer_id = 100;
-- Transaction A now holds a lock on customer 100

WAITFOR DELAY '00:00:02'; -- Simulate processing time

-- Transaction A tries to update customer 200, but Transaction B holds this lock
UPDATE customers SET credit_limit = 5000 WHERE customer_id = 200;

COMMIT TRANSACTION TransactionA;

-- Transaction B (running in another session simultaneously)
BEGIN TRANSACTION TransactionB;

UPDATE customers SET customer_status = 'Premium' WHERE customer_id = 200;
-- Transaction B now holds a lock on customer 200

WAITFOR DELAY '00:00:02'; -- Simulate processing time

-- Transaction B tries to update customer 100, but Transaction A holds this lock
UPDATE customers SET credit_limit = 10000 WHERE customer_id = 100;

COMMIT TRANSACTION TransactionB;

This scenario creates a classic deadlock where Transaction A waits for Transaction B to release customer 200, while Transaction B waits for Transaction A to release customer 100. The database system will detect this circular dependency and roll back one of the transactions.

Deadlock Prevention Strategies

Preventing deadlocks involves designing your transactions to access resources in consistent orders, keeping transaction duration short, and using appropriate locking hints when necessary.

-- Deadlock prevention through consistent resource ordering
CREATE PROCEDURE TransferFunds
    @FromAccountID INT,
    @ToAccountID INT,
    @TransferAmount DECIMAL(10,2)
AS
BEGIN
    -- Always process accounts in ID order to prevent deadlocks
    DECLARE @FirstAccount INT = CASE WHEN @FromAccountID < @ToAccountID THEN @FromAccountID ELSE @ToAccountID END;
    DECLARE @SecondAccount INT = CASE WHEN @FromAccountID < @ToAccountID THEN @ToAccountID ELSE @FromAccountID END;
    
    BEGIN TRANSACTION FundTransfer;
    
    BEGIN TRY
        -- Lock accounts in consistent order
        UPDATE accounts SET last_accessed = GETDATE() WHERE account_id = @FirstAccount;
        UPDATE accounts SET last_accessed = GETDATE() WHERE account_id = @SecondAccount;
        
        -- Validate sufficient funds
        DECLARE @FromBalance DECIMAL(10,2);
        SELECT @FromBalance = account_balance FROM accounts WHERE account_id = @FromAccountID;
        
        IF @FromBalance < @TransferAmount
        BEGIN
            RAISERROR('Insufficient funds for transfer', 16, 1);
        END
        
        -- Perform the transfer
        UPDATE accounts SET account_balance = account_balance - @TransferAmount 
        WHERE account_id = @FromAccountID;
        
        UPDATE accounts SET account_balance = account_balance + @TransferAmount 
        WHERE account_id = @ToAccountID;
        
        -- Log the transaction
        INSERT INTO transfer_log (from_account, to_account, amount, transfer_date)
        VALUES (@FromAccountID, @ToAccountID, @TransferAmount, GETDATE());
        
        COMMIT TRANSACTION FundTransfer;
        
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION FundTransfer;
        
        -- Check if this was a deadlock victim
        IF ERROR_NUMBER() = 1205 -- Deadlock error number
        BEGIN
            PRINT 'Transaction was chosen as deadlock victim and will be retried';
            -- In production, you might implement retry logic here
        END
        ELSE
        BEGIN
            -- Handle other types of errors (RAISERROR requires variables
            -- or constants as arguments, not function calls)
            DECLARE @ErrMsg NVARCHAR(4000) = ERROR_MESSAGE();
            DECLARE @ErrSeverity INT = ERROR_SEVERITY();
            DECLARE @ErrState INT = ERROR_STATE();
            RAISERROR(@ErrMsg, @ErrSeverity, @ErrState);
        END
    END CATCH
END
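The retry logic mentioned in the CATCH block can be sketched as a loop on the calling side. The attempt count and delay below are arbitrary choices; production systems often randomize or exponentially increase the delay between attempts:

-- Retry a deadlock-prone call up to three times.
DECLARE @Attempts INT = 0;
DECLARE @MaxAttempts INT = 3;
DECLARE @Succeeded BIT = 0;

WHILE @Attempts < @MaxAttempts AND @Succeeded = 0
BEGIN
    SET @Attempts = @Attempts + 1;
    BEGIN TRY
        EXEC TransferFunds @FromAccountID = 100, @ToAccountID = 200, @TransferAmount = 250.00;
        SET @Succeeded = 1;
    END TRY
    BEGIN CATCH
        IF ERROR_NUMBER() = 1205 AND @Attempts < @MaxAttempts
        BEGIN
            -- Deadlock victim: wait briefly, then retry
            WAITFOR DELAY '00:00:01';
        END
        ELSE
        BEGIN
            THROW; -- Not a deadlock, or retries exhausted: propagate the error
        END
    END CATCH
END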

Understanding error handling and transactions is fundamental to building reliable database applications. These concepts work together to ensure that your data remains consistent and accurate, even when errors occur or multiple users access the system simultaneously. The key is implementing appropriate error handling strategies while designing transactions that minimize conflicts and maintain data integrity across all possible execution scenarios.
