Always try to affect the maximum number of rows with the minimum number of transactions.
For any SQL database to be trustworthy, it must protect both the integrity and the consistency of its data. That's where atomicity and unique indexes play a critical role. Atomicity means that a group of database operations succeeds or fails as a single unit. If one insert in a batch fails, the entire transaction is rolled back, preventing partial or inconsistent data from being stored.
This is especially important in ETL processes, financial systems, or any application where incomplete writes could cause downstream errors. Unique indexes, on the other hand, enforce business rules directly at the database level by preventing duplicate values in key columns.
Whether defined as a PRIMARY KEY or UNIQUE constraint, they guarantee that identifiers remain truly unique and that your data model cannot silently drift into inconsistency. Together, atomic transactions and unique indexes form a foundational safeguard; one that ensures changes are applied safely, and the other ensures only valid, non-duplicated data can exist in the first place.
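As a minimal sketch of both ideas working together (using Python's built-in sqlite3 rather than the MySQL dialect used below), an explicit transaction rolls back every statement in a batch when one insert violates the unique index:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE phonetic (
        phonetic_id INTEGER NOT NULL PRIMARY KEY,
        phonetic    VARCHAR(10) NOT NULL UNIQUE
    )
""")

# The third row violates the UNIQUE constraint on phonetic.
rows = [(1, "Alfa"), (2, "Bravo"), (3, "Alfa")]

try:
    with conn:  # opens a transaction: commits on success, rolls back on error
        conn.executemany(
            "INSERT INTO phonetic (phonetic_id, phonetic) VALUES (?, ?)", rows
        )
except sqlite3.IntegrityError as e:
    print("rolled back:", e)

# The first two valid inserts are undone as well: the table is still empty.
print(conn.execute("SELECT COUNT(*) FROM phonetic").fetchone()[0])  # 0
```

The unique index stops the bad row, and atomicity ensures the two good rows that preceded it are not left behind as a partial load.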
Let's say you want to create and populate a simple table containing the phonetic alphabet. First, create a table to hold this information (in practice, phonetic_id would be an auto-increment column):
- create table phonetic (
- phonetic_id int not null primary key,
- phonetic varchar(10) not null unique
- );
You could produce 26 insert statements like this:
- insert phonetic(phonetic_id, phonetic) values (1, 'Alfa');
- insert phonetic(phonetic_id, phonetic) values (2, 'Bravo');
- insert phonetic(phonetic_id, phonetic) values (3, 'Charlie');
- insert phonetic(phonetic_id, phonetic) values (4, 'Delta');
- insert phonetic(phonetic_id, phonetic) values (5, 'Echo');
- insert phonetic(phonetic_id, phonetic) values (6, 'Foxtrot');
- insert phonetic(phonetic_id, phonetic) values (7, 'Golf');
- insert phonetic(phonetic_id, phonetic) values (8, 'Hotel');
- insert phonetic(phonetic_id, phonetic) values (9, 'India');
- insert phonetic(phonetic_id, phonetic) values (10, 'Juliet');
- insert phonetic(phonetic_id, phonetic) values (11, 'Kilo');
- insert phonetic(phonetic_id, phonetic) values (12, 'Lima');
- insert phonetic(phonetic_id, phonetic) values (13, 'Mike');
- insert phonetic(phonetic_id, phonetic) values (14, 'November');
- insert phonetic(phonetic_id, phonetic) values (15, 'Oscar');
- insert phonetic(phonetic_id, phonetic) values (16, 'Papa');
- insert phonetic(phonetic_id, phonetic) values (17, 'Quebec');
- insert phonetic(phonetic_id, phonetic) values (18, 'Romeo');
- insert phonetic(phonetic_id, phonetic) values (19, 'Sierra');
- insert phonetic(phonetic_id, phonetic) values (20, 'Tango');
- insert phonetic(phonetic_id, phonetic) values (21, 'Uniform');
- insert phonetic(phonetic_id, phonetic) values (22, 'Victor');
- insert phonetic(phonetic_id, phonetic) values (23, 'Whisky');
- insert phonetic(phonetic_id, phonetic) values (24, 'X-ray');
- insert phonetic(phonetic_id, phonetic) values (25, 'Yankee');
- insert phonetic(phonetic_id, phonetic) values (26, 'Zulu');
This runs as 26 separate transactions and took 0.383 seconds to execute. If one of them fails partway through, the table is left in an inconsistent, partially loaded state. A better approach is to specify all the values in a single statement:
- insert phonetic(phonetic_id, phonetic)
- values
- (1, 'Alfa'),
- (2, 'Bravo'),
- (3, 'Charlie'),
- (4, 'Delta'),
- (5, 'Echo'),
- (6, 'Foxtrot'),
- (7, 'Golf'),
- (8, 'Hotel'),
- (9, 'India'),
- (10, 'Juliet'),
- (11, 'Kilo'),
- (12, 'Lima'),
- (13, 'Mike'),
- (14, 'November'),
- (15, 'Oscar'),
- (16, 'Papa'),
- (17, 'Quebec'),
- (18, 'Romeo'),
- (19, 'Sierra'),
- (20, 'Tango'),
- (21, 'Uniform'),
- (22, 'Victor'),
- (23, 'Whisky'),
- (24, 'X-ray'),
- (25, 'Yankee'),
- (26, 'Zulu');
This took 0.026 seconds, roughly fifteen times faster than the individual insert statements. And because it is a single statement, it is atomic: either all 26 rows are inserted or none are.
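The same pattern applies from application code: committing once per row pays per-transaction overhead 26 times, while batching the rows into one transaction pays it once. A sketch with sqlite3 (the timing harness and numbers here are illustrative, not the measurements quoted above):

```python
import sqlite3
import time

NATO = ["Alfa", "Bravo", "Charlie", "Delta", "Echo", "Foxtrot", "Golf",
        "Hotel", "India", "Juliet", "Kilo", "Lima", "Mike", "November",
        "Oscar", "Papa", "Quebec", "Romeo", "Sierra", "Tango", "Uniform",
        "Victor", "Whisky", "X-ray", "Yankee", "Zulu"]
rows = list(enumerate(NATO, start=1))

def one_commit_per_row(conn):
    for row in rows:
        conn.execute("INSERT INTO phonetic (phonetic_id, phonetic) VALUES (?, ?)", row)
        conn.commit()          # 26 separate transactions

def single_transaction(conn):
    with conn:                 # one transaction for the whole batch
        conn.executemany(
            "INSERT INTO phonetic (phonetic_id, phonetic) VALUES (?, ?)", rows
        )

for label, loader in [("per-row commits", one_commit_per_row),
                      ("single transaction", single_transaction)]:
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE phonetic ("
                 "phonetic_id INTEGER NOT NULL PRIMARY KEY, "
                 "phonetic TEXT NOT NULL UNIQUE)")
    start = time.perf_counter()
    loader(conn)
    elapsed = time.perf_counter() - start
    count = conn.execute("SELECT COUNT(*) FROM phonetic").fetchone()[0]
    print(f"{label}: {count} rows in {elapsed:.6f}s")
```

Both loaders end with 26 rows, but only the single-transaction version guarantees that a mid-batch failure leaves zero rows behind.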
What would happen if I tried to insert a duplicate value?
- insert phonetic(phonetic_id, phonetic) values (100, 'Alfa');
The unique index on phonetic rejects the row: the statement fails with a duplicate-key error, and the table is left unchanged.
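In application code, that rejection surfaces as a driver error you can catch and handle. Sketched again with sqlite3, where the violation raises IntegrityError and the existing data is untouched:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE phonetic ("
             "phonetic_id INTEGER NOT NULL PRIMARY KEY, "
             "phonetic TEXT NOT NULL UNIQUE)")
conn.execute("INSERT INTO phonetic (phonetic_id, phonetic) VALUES (1, 'Alfa')")
conn.commit()

try:
    # New primary key, but a duplicate value in the unique column.
    conn.execute("INSERT INTO phonetic (phonetic_id, phonetic) VALUES (100, 'Alfa')")
except sqlite3.IntegrityError as e:
    print("rejected:", e)

# The original row survives; the duplicate was never stored.
print(conn.execute("SELECT COUNT(*) FROM phonetic").fetchone()[0])  # 1
```

Note that the duplicate is caught even though the primary key (100) is new: the unique index on the phonetic column is what enforces the rule.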