What is Denormalization of a Database?

What is denormalization of a database? Can the system achieve acceptable performance without denormalizing? What are the reasons for denormalizing a database?



Tags: Database Normalization, SQL Server, Oracle





Community Opinion/Answers
 

Denormalization is the process of attempting to optimize the performance of a database by adding redundant data. It is sometimes necessary because current DBMSs implement the relational model poorly.

A true relational DBMS would allow for a fully normalized database at the logical level while providing physical storage of data that is tuned for high performance. Denormalization is a technique to move from higher to lower normal forms of database modeling in order to speed up database access.
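
As a minimal sketch of what "adding redundant data" can look like in SQL (the Customers/Orders tables and all column names here are hypothetical, not part of the original answer):

-- Normalized design: customer_name lives only in Customers, so reading
-- an order's customer name requires a join.
CREATE TABLE Customers (
    customer_id   INT PRIMARY KEY,
    customer_name VARCHAR(100) NOT NULL
);

CREATE TABLE Orders (
    order_id    INT PRIMARY KEY,
    customer_id INT NOT NULL REFERENCES Customers(customer_id),
    order_date  DATE NOT NULL,
    amount      DECIMAL(12,2) NOT NULL
);

-- Denormalized variant: customer_name is copied into Orders, trading
-- redundancy (and the duty to keep the copy in sync) for join-free reads.
ALTER TABLE Orders ADD customer_name VARCHAR(100);

UPDATE Orders
SET customer_name = (SELECT c.customer_name
                     FROM Customers c
                     WHERE c.customer_id = Orders.customer_id);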


Denormalization of a Database: Why?

Only one valid reason exists for denormalizing a relational design: to enhance performance. However, there are several indicators that help to identify systems and tables that are potential denormalization candidates. These are:

* Many critical queries and reports exist that rely upon data from more than one table. Often, these requests need to be processed in an online environment.

* Repeating groups exist that need to be processed as a group instead of individually.

* Many calculations need to be applied to one or many columns before queries can be successfully answered.

* Tables need to be accessed in different ways by different users during the same timeframe.

* Many large primary keys exist that are clumsy to query and consume a large amount of DASD (disk storage) when carried as foreign key columns in related tables.

* Certain columns are queried a large percentage of the time. Consider 60% or greater to be a cautionary number flagging denormalization as an option.


Be aware that each new RDBMS release usually brings enhanced performance and improved access options that may reduce the need for denormalization. However, most of the popular RDBMS products will, on occasion, require denormalized data structures. There are many different types of denormalized tables that can resolve the performance problems caused when accessing fully normalized data, and advice exists on when to implement each of the denormalization types.
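
One such denormalized structure, answering the "many calculations" indicator in the list above, is a pre-computed summary table. A hedged sketch, reusing the hypothetical Orders table from the earlier example:

-- Hypothetical summary table: per-customer totals are computed once,
-- in batch, instead of being re-aggregated on every report query.
CREATE TABLE CustomerOrderSummary (
    customer_id  INT PRIMARY KEY,
    order_count  INT NOT NULL,
    total_amount DECIMAL(12,2) NOT NULL
);

INSERT INTO CustomerOrderSummary (customer_id, order_count, total_amount)
SELECT customer_id, COUNT(*), SUM(amount)
FROM Orders
GROUP BY customer_id;

-- Reports now read one narrow table instead of scanning Orders; the
-- price is that the summary must be refreshed whenever Orders changes.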




gnanendra Said..

It's good.




kihumuro timothy Said..

Since performance is also a much-needed factor, I believe denormalisation must be put in place, creating some data redundancy. Thanks.




Ganesh Tangale Said..

Yes, I am satisfied here; this is a proper explanation of denormalization. I always talk about normalization but was not familiar with denormalization. After reading the above explanation, I will be able to give a proper answer if an interviewer asks me about both in the future. Thanks, it's really good.




Piyush Chandra Said..

The intentional introduction of redundancy in a table in order to improve performance is called “Denormalization”. Denormalization is a technique to move from higher to lower normal forms of database modeling in order to speed up database access. Denormalization is usually done to decrease the time required to execute complex queries. The drawbacks of a normalized database are mostly in performance: in a normalized database, more joins are required to gather all the information, as data is divided and stored in multiple entities rather than in one large table. Queries that have a lot of complex joins will require more CPU usage and will adversely affect performance. Sometimes, it is good to denormalize parts of the database. Examples of design changes to denormalize the database and improve performance are...
For more details on this post please check out this link...
http://mindstick.com/Articles/2168d768-017c-41f4-a7a3-ac76faabb48d/?Denormalization%20in%20SQL%20Server
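
To make the join cost described above concrete, a hedged sketch (the OrderItems, Products, and denormalized OrderReport tables are hypothetical names, not taken from the linked article):

-- Against the normalized design, this report needs three joins.
SELECT c.customer_name, p.product_name, oi.quantity
FROM Customers  c
JOIN Orders     o  ON o.customer_id = c.customer_id
JOIN OrderItems oi ON oi.order_id   = o.order_id
JOIN Products   p  ON p.product_id  = oi.product_id;

-- Against a denormalized reporting table, the same data is read
-- with no joins at all.
SELECT customer_name, product_name, quantity
FROM OrderReport;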




Bapun. Said..

The answer is quite simple.




master Said..

The answer above is not complete; in more detail:

In computing, denormalisation is the process of attempting to optimise the read performance of a database by adding redundant data or by grouping data.[1][2] In some cases, denormalisation helps cover up the inefficiencies inherent in relational database software. A normalised relational database imposes a heavy access load on the physical storage of data even if it is well tuned for high performance.

A normalised design will often store different but related pieces of information in separate logical tables (called relations). If these relations are stored physically as separate disk files, completing a database query that draws information from several relations (a join operation) can be slow. If many relations are joined, it may be prohibitively slow. There are two strategies for dealing with this. The preferred method is to keep the logical design normalised, but allow the database management system (DBMS) to store additional redundant information on disk to optimise query response. In this case it is the DBMS software's responsibility to ensure that any redundant copies are kept consistent. This method is often implemented in SQL as indexed views (Microsoft SQL Server) or materialised views (Oracle). A view represents information in a format convenient for querying, and the index ensures that queries against the view are optimised.
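
For instance, a hedged sketch of a SQL Server indexed view over the hypothetical Orders table from earlier (SCHEMABINDING and COUNT_BIG(*) are required before a view can be indexed, and the aggregated amount column is assumed NOT NULL); Oracle's CREATE MATERIALIZED VIEW plays the analogous role:

-- The view defines the redundant, query-friendly representation.
CREATE VIEW dbo.SalesByCustomer
WITH SCHEMABINDING
AS
SELECT customer_id,
       SUM(amount)  AS total_amount,
       COUNT_BIG(*) AS order_count
FROM dbo.Orders
GROUP BY customer_id;
GO

-- The unique clustered index materialises the view on disk; from here
-- on, the DBMS itself keeps this redundant copy consistent.
CREATE UNIQUE CLUSTERED INDEX IX_SalesByCustomer
ON dbo.SalesByCustomer (customer_id);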

The more usual approach is to denormalise the logical data design. With care this can achieve a similar improvement in query response, but at a cost: it is now the database designer's responsibility to ensure that the denormalised database does not become inconsistent. This is done by creating rules in the database, called constraints, that specify how the redundant copies of information must be kept synchronised. The increase in the logical complexity of the database design and the added complexity of the additional constraints make this approach hazardous. Moreover, constraints introduce a trade-off, speeding up reads (SELECT in SQL) while slowing down writes (INSERT, UPDATE, and DELETE). This means a denormalised database under heavy write load may actually offer worse performance than its functionally equivalent normalised counterpart.
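
Where such a rule is expressed procedurally, a trigger is one common mechanism. A hedged T-SQL sketch (hypothetical names) that keeps the redundant customer_name copy in Orders synchronised with its master row:

CREATE TRIGGER trg_SyncCustomerName
ON Customers
AFTER UPDATE
AS
BEGIN
    -- Push the new name into every order carrying the redundant copy.
    UPDATE o
    SET o.customer_name = i.customer_name
    FROM Orders o
    JOIN inserted i ON i.customer_id = o.customer_id;
END;

The extra work this trigger performs on every update is exactly the write-side cost the paragraph above describes.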

A denormalised data model is not the same as a data model that has not been normalised. Denormalisation should take place only after a satisfactory level of normalisation has been achieved, and after any required constraints and/or rules have been created to deal with the inherent anomalies in the design. For example, all the relations should be in third normal form, and any relations with join and multi-valued dependencies should be handled appropriately.

Examples of denormalisation techniques include:

* Materialised views, which may implement the following:
  * Storing the count of the "many" objects in a one-to-many relationship as an attribute of the "one" relation
  * Adding attributes to a relation from another relation with which it will be joined
* Star schemas, which are also known as fact-dimension models and have been extended to snowflake schemas
* Prebuilt summarisation or OLAP cubes

Denormalisation techniques are often used to improve the scalability of Web applications.[3]
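
As a hedged sketch of one technique in the list above (storing the count of the "many" objects as an attribute of the "one" relation), again using the hypothetical Customers/Orders tables:

-- The "one" relation carries a redundant count of its "many" rows.
ALTER TABLE Customers ADD order_count INT NOT NULL DEFAULT 0;

-- A trigger keeps the count in step as orders are inserted.
CREATE TRIGGER trg_OrderCount
ON Orders
AFTER INSERT
AS
BEGIN
    UPDATE c
    SET c.order_count = c.order_count + i.cnt
    FROM Customers c
    JOIN (SELECT customer_id, COUNT(*) AS cnt
          FROM inserted
          GROUP BY customer_id) i
      ON i.customer_id = c.customer_id;
END;

-- A matching AFTER DELETE trigger would be needed to decrement the count.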




Anurag Dake Said..

Normalisation:
Normalisation saves disc space by removing redundancy, but queries generally take more time to compute, since more joins are needed and database sizes are very large.

Denormalisation:
Denormalisation performs the computation in less time but consumes more disc space.

If we are running out of space, normalisation is better; but if we want to answer queries faster, we need to denormalise the database.




Vinay Pandey Said..

The intentional introduction of redundancy in a table in order to improve performance is called “Denormalization”. Denormalization is a technique to move from higher to lower normal forms of database modeling in order to speed up database access. Denormalization is usually done to decrease the time required to execute complex queries.




ISMITA ROY Said..

It's good, but quite simple.




adilkhalil Said..

It's a good approach to denormalization.




Nilesh Chaudhar Said..

I am satisfied with Mr Piyush's answer.
Thank you, sir, for making denormalization clear to me.






balu Said..

It's good for denormalization.




Navin Said..

Normalisation and denormalisation are both required, to control redundancy and to speed up database processing, but sometimes, depending on the situation of the project, both sets of rules fail, and a third set of rules arises from our own intuition that fits the database being developed.





