I was reading Steve Souza's performance FAQ on this site, and it says there can be a performance difference between normalised and denormalised database tables. That could be very interesting to me, if I knew what it meant!
Can anyone help me with a succinct explanation?
Database normalization is the process of structuring your data according to relational theory. Wikipedia has a nice explanation of normalization and the steps to normalize a database.
Normalization focuses on the structure of your database. It may be that this optimally structured database does not meet your performance requirements. You can then take a step back and trade some of that optimal structure for better performance.
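To make the structural difference concrete, here is a minimal sketch using Python's built-in sqlite3 module; the employee/department tables are invented purely for illustration:

    import sqlite3

    conn = sqlite3.connect(":memory:")

    # Denormalized: the department name is repeated on every employee row
    conn.execute("""CREATE TABLE employee_denorm (
                        emp_id    INTEGER PRIMARY KEY,
                        emp_name  TEXT,
                        dept_name TEXT)""")

    # Normalized: the department name is stored exactly once and referenced by key
    conn.execute("""CREATE TABLE department (
                        dept_id   INTEGER PRIMARY KEY,
                        dept_name TEXT)""")
    conn.execute("""CREATE TABLE employee (
                        emp_id    INTEGER PRIMARY KEY,
                        emp_name  TEXT,
                        dept_id   INTEGER REFERENCES department(dept_id))""")

Renaming a department in the first layout means touching every employee row that carries the old name; in the second it is a one-row update, at the cost of a join whenever you want the name back.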
Regards, Jan
The simple answer from a programming perspective is that data exists in only one place in a fully normalized design and is duplicated in an un-normalized one. So you can think of it in terms of unique versus duplicated data. Unique data has the organizational convenience that an update only needs to be made in one table, whereas with duplicated data you may have to update several places.
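A small sketch of that update difference, again with invented table names and Python's sqlite3 module (here the customer's city is the duplicated data):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE customer (customer_id INTEGER PRIMARY KEY, name TEXT, city TEXT);
        CREATE TABLE orders (order_id INTEGER PRIMARY KEY,
                             customer_id INTEGER REFERENCES customer(customer_id),
                             amount REAL);
        -- denormalized variant: customer details are copied onto every order row
        CREATE TABLE orders_denorm (order_id INTEGER PRIMARY KEY, customer_name TEXT,
                                    customer_city TEXT, amount REAL);
    """)

    # Normalized: the city is stored in one place, so one row changes
    conn.execute("UPDATE customer SET city = 'Bedrock' WHERE customer_id = 1")

    # Un-normalized: the same fact is duplicated, so every copy must be changed
    conn.execute("""UPDATE orders_denorm SET customer_city = 'Bedrock'
                    WHERE customer_name = 'Fred'""")

Miss one of the copies and the database quietly contradicts itself, which is the main correctness argument for keeping each fact in only one place.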
Think of a column that keeps a count of related records. You could update that column every time you add or update a record in the table, or you could drop the column and recompute the count every time someone asks for it. There's no clear advantage either way. Performance-wise, fully normalized can be awful, often joining dozens of tables per query. The goal is usually to get as close to fully normalized as possible while still allowing quick access to derived values like that count. Often such data can be maintained by triggers and/or what are called "materialized views".
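As a sketch of the trigger approach (SQLite has no materialized views, so this keeps a denormalized count column in step by hand; the table and trigger names are made up for illustration):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE category (category_id INTEGER PRIMARY KEY,
                               name        TEXT,
                               item_count  INTEGER DEFAULT 0);  -- duplicated, precomputed count
        CREATE TABLE item (item_id     INTEGER PRIMARY KEY,
                           category_id INTEGER REFERENCES category(category_id));

        -- keep the duplicated count in step whenever an item is inserted
        CREATE TRIGGER item_added AFTER INSERT ON item
        BEGIN
            UPDATE category SET item_count = item_count + 1
            WHERE category_id = NEW.category_id;
        END;
    """)

    conn.execute("INSERT INTO category (category_id, name) VALUES (1, 'books')")
    conn.execute("INSERT INTO item (category_id) VALUES (1)")
    conn.execute("INSERT INTO item (category_id) VALUES (1)")
    print(conn.execute("SELECT item_count FROM category WHERE category_id = 1").fetchone())  # (2,)

Whether a maintained count like this beats a COUNT(*) at query time depends entirely on how often the data changes versus how often it is read, which is why there is no clear winner in general.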