This portal is to share my acquired knowledge related to SQL Server. I hope it will be helpful for novices and professionals alike.

Wednesday, December 29, 2010

Server Roles in Sql Server 2005

Server Role is used to grant server-wide security privileges to a user.

SQL Server 2005 comes with the following fixed server roles:
1. bulkadmin
2. dbcreator
3. diskadmin
4. processadmin
5. securityadmin
6. serveradmin
7. setupadmin
8. sysadmin

Each role grants a different set of powers over the SQL Server instance. For example, the dbcreator role can create, alter, drop, and restore any database. It is not easy to remember everything each role can do.
SQL Server provides a system stored procedure (sp_srvrolepermission) to list the permissions of each server role.

Try this statement on any database:
EXEC sp_srvrolepermission dbcreator

This will list all permissions associated with the dbcreator server role in tabular form.
To get the list of permissions for all roles, run the procedure without a parameter.

Using this procedure we can easily understand all the server roles and make a better decision when implementing security.
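The procedures above pair naturally with the role-membership procedures; a short sketch (the login name is hypothetical):

```sql
-- List the permissions of one fixed server role
EXEC sp_srvrolepermission 'dbcreator';

-- List the permissions of every fixed server role
EXEC sp_srvrolepermission;

-- Add a login to a fixed server role (hypothetical login name)
EXEC sp_addsrvrolemember 'SomeLogin', 'dbcreator';

-- Review current membership of a role
EXEC sp_helpsrvrolemember 'dbcreator';
```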

Tuesday, December 21, 2010

Truncate Parent Table

If a foreign key reference is established between tables, then
we cannot issue a TRUNCATE statement on the parent table, whether or not records exist in it.

We need to drop the foreign key reference first; then we can truncate the table.
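A minimal sketch of the behavior, with hypothetical table and constraint names:

```sql
-- TRUNCATE fails on a table referenced by a foreign key,
-- even when both tables are empty.
CREATE TABLE dbo.Parent (Id INT PRIMARY KEY);
CREATE TABLE dbo.Child
(
    Id INT PRIMARY KEY,
    ParentId INT CONSTRAINT FK_Child_Parent REFERENCES dbo.Parent (Id)
);

TRUNCATE TABLE dbo.Parent;  -- error: referenced by a FOREIGN KEY constraint

-- Drop the reference first, then truncate
ALTER TABLE dbo.Child DROP CONSTRAINT FK_Child_Parent;
TRUNCATE TABLE dbo.Parent;  -- succeeds
```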

Monday, December 20, 2010

Execution Plan Of Encrypted Stored Procedure

If a stored procedure is encrypted, then we cannot see its execution plan.

In SQL Server 2000, however, we could still see it.

Wednesday, December 8, 2010

Configure TFS 2010 On Visual Studio 2005

Can we use TFS 2010 with Visual Studio 2005??

Yes we can. Microsoft provides a forward compatibility update that lets Visual Studio 2005 work with TFS 2010.
Install the following components in the order below:

1. Visual Studio 2005
2. TFS Team Explorer 2005
3. Visual Studio 2005 Team Suite SP1
4. Visual Studio Team Explorer 2005 forward compatibility update to support TFS 2010

The order is mandatory here.

If you face an application error while installing the Visual Studio 2005 Team Suite SP1 component,
try extracting the EXE and running setup from the extracted files.

Some important points:
1. Close Visual Studio before installation.
2. If you already have Visual Studio 2005 installed with Service Pack 1, then I think you need to uninstall Visual Studio first and then install all the components in order.
3. The installation order is really important to make it work.


How to Install SQL Server 2005 Client only?

1. Browse your SQL Server setup CD/folder.
2. Go to the Tools folder.
3. Run Setup.exe.

The following components will be installed:
1. Visual Studio Integrated Development Environment (IDE) 2005
2. SQL Server Books Online
3. SQL Setup Support Files
4. SQL Native Client
5. OWC11 (Office Web Components)
6. SQL Server Backward Compatibility Files
7. Workstation Components, Books Online, Development Tools (SSMS, VS)


Wednesday, November 24, 2010

Capital first letter of every word in string Reporting Services

Today I found a very useful function in Reporting Services that converts the first letter of every word in a string to uppercase.
Neither SQL Server nor Reporting Services provides a straightforward built-in way to do this in T-SQL; we would otherwise need to write a user-defined function to produce proper case.

But using the "StrConv" function we can achieve it simply.

Here is the syntax:

=StrConv(Fields!Name.Value, vbProperCase)

This converts the first letter of every word in the string to uppercase.
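When the proper casing must happen in the database rather than in the report, a user-defined function is needed, as mentioned above. A minimal T-SQL sketch (the function name is my own choice), kept compatible with SQL Server 2005:

```sql
CREATE FUNCTION dbo.ProperCase (@Input VARCHAR(4000))
RETURNS VARCHAR(4000)
AS
BEGIN
    DECLARE @Output VARCHAR(4000)
    DECLARE @i INT
    SET @Output = LOWER(@Input)
    SET @i = 1
    WHILE @i <= LEN(@Output)
    BEGIN
        -- Uppercase the first character and any character that follows a space
        IF @i = 1 OR SUBSTRING(@Output, @i - 1, 1) = ' '
            SET @Output = STUFF(@Output, @i, 1, UPPER(SUBSTRING(@Output, @i, 1)))
        SET @i = @i + 1
    END
    RETURN @Output
END
```

Usage: SELECT dbo.ProperCase('john smith') returns 'John Smith'.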


Monday, November 22, 2010

Difference between the XType and Type columns in the sysobjects view in SQL Server 2005?

Both serve the same purpose: identifying the type of an object.

Object type. Can be one of the following object types:

C = CHECK constraint
D = Default or DEFAULT constraint
F = FOREIGN KEY constraint
L = Log
FN = Scalar function
IF = Inlined table function
P = Stored procedure
PK = PRIMARY KEY constraint (type is K)
RF = Replication filter stored procedure
S = System table
TF = Table function
TR = Trigger
U = User table
UQ = UNIQUE constraint (type is K)
V = View
X = Extended stored procedure

The Type column is retained for backward compatibility.
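A quick way to see the two columns side by side (any user database will do):

```sql
-- sysobjects is kept for backward compatibility; compare its two
-- type columns for a few common object types.
SELECT name, xtype, type
FROM sysobjects
WHERE xtype IN ('U', 'P', 'V')
ORDER BY xtype, name;
```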


Inline queries in Dataset Reporting Services

Suppose we are planning to change the database schema. We know we can use the syscomments table to find out which stored procedures might break after the change.
But what about reports whose datasets use inline queries?

Today I found a very good article on this topic.

We can find all dataset queries in the dbo.Catalog table of the ReportServer database.
Here is the query (reconstructed into a complete form; the RDL namespace is for SQL Server 2005 reports, and the search text is a placeholder):

;WITH XMLNAMESPACES (
    'http://schemas.microsoft.com/sqlserver/reporting/2005/01/reportdefinition' AS REP
)
SELECT c.Name AS ReportName,
       DS.DataSetXML.value('@Name','varchar(MAX)') DataSourceName,
       DS.DataSetXML.value('REP:Query[1]/REP:CommandText[1]','varchar(MAX)') CommandText
FROM (SELECT ItemID,
             CAST(CAST(Content AS varbinary(max)) AS xml) ReportXML
      FROM dbo.Catalog
      WHERE Type = 2) Reports
CROSS APPLY Reports.ReportXML.nodes('//REP:DataSet') DS (DataSetXML)
INNER JOIN dbo.Catalog c
ON Reports.ItemID = c.ItemID
-- Search by part of the query text
WHERE DS.DataSetXML.value('REP:Query[1]/REP:CommandText[1]','varchar(MAX)') LIKE '%...enter text here...%'

After running this query, we can check which reports might break.


Friday, November 19, 2010

Apply Filter on XML column

If Object_ID ('tempdb..#tblXML') Is Not Null Drop Table #tblXML
Create Table #tblXML (Id Int, colXML XML)

Declare @XML XML
Set @XML = '<Record id="D3AB51B3-B3F8-4D5F-B928-F48EA2996EA6" />' -- sample document; any XML carrying an id attribute works
Insert Into #tblXML
Select 1, @XML

Select *, colXML.value('(//@id)[1]','varchar(200)') As [XMLData]
From #tblXML
Where colXML.value('(//@id)[1]','varchar(200)') ='D3AB51B3-B3F8-4D5F-B928-F48EA2996EA6'

Thursday, November 4, 2010

Drop Multiple Objects using single statement

Sometimes we need to drop multiple database objects, and we usually write a separate DROP statement for each one.

But SQL Server provides a facility to drop multiple objects using single DROP statement.

DROP ObjectType ObjectName1, ObjectName2, ObjectNameN, ...

For Example :
Drop Table Employee, Address, Contact ;
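The same single-statement form works for other object types too; a sketch with hypothetical procedure names:

```sql
-- Drop several stored procedures in one statement
DROP PROCEDURE dbo.GetOrders, dbo.GetCustomers, dbo.GetInvoices;
```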

Wednesday, November 3, 2010

Add multiple projects to a BIDS solution

It is not a good habit to create a separate solution for each report or project; if we do, it is not easy to manage all the reports and projects.

I learned a good technique at my last company and today implemented the same at my new company. While implementing it I faced some problems,
but ultimately found the solution after some time.
So I want to share that solution with everyone else who is looking to implement the same.

After implementing this, BIDS shows the projects in a hierarchy like:

   o Project1
         Shared Data Sources
   o Project2
         Shared Data Sources
   o Project3
         Shared Data Sources

Here are the steps:

Personalize the Visual Studio:

1. Open Visual Studio
2. Go to Tool > Option
3. Choose Projects And Solutions
4. Check “Always show solution”
5. Uncheck “Warn user when the project location is not trusted”.

Create a new Blank Solution:
1. On the File menu, select New and then click Project.
2. In the Project type’s pane, select Other Project Types and then select Visual Studio Solutions.
3. In the Templates pane, select Blank Solution.
4. Enter a name for the project.
5. To change the location of the solution directory, choose Browse and specify a new location.
6. Select Add to Source Control if we want to add the solution to a source control database or repository. (This will be disabled if we don't have any source control installed.)
7. Click OK.
After creating an empty solution, we can add new or existing projects and items to the empty solution by using the Add New Item or Add Existing Item command from the Project menu.

Add new Project to solution:
1. On the File menu, select New and then click Project.
2. In the New Project dialog box, select one of the Project types from the left pane.
3. Select one of the project Templates from the right pane.
4. Enter a Name for the new project.
5. In Solution: choose “Add to solution”.
Or simply right-click on the Solution > Add > New Project > choose a template > enter a name > OK.

So in this way we can manage all our reports and projects under one solution, and we can also add the complete solution to any source control.

It’s quite simple; I hope you enjoy it!

Friday, October 29, 2010

Fix Row Height for a Row Reporting Services 2005

You might have faced the problem where we set a row height in Visual Studio, but when we export the report to Excel, the row expands back to its original size.

Here is the solution in Reporting Services 2005:
1. Click on each cell of the row and set the CanGrow property to False.
2. Put an underscore in each cell. (This is mandatory; otherwise the CanGrow setting will not work.)

Grant permission on a Store Procedure To Login

The EXECUTE permission was denied on the object 'SPName', database 'Database', schema 'dbo'.

This error occurs when we try to execute a stored procedure using a login that does not have EXECUTE permission in that database.

Here is the solution:

Note: Only a sysadmin or db_owner can grant the permission.

1. Log in to the server with a sysadmin or db_owner login.
2. Go to the properties of the login to which you want to grant permission.
3. Click on User Mapping and map the login to the database.
4. Then execute a GRANT statement, for example (using the placeholder names from the error message):
   GRANT EXECUTE ON dbo.SPName TO [LoginName]
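The steps above can also be scripted end to end; a minimal T-SQL sketch using the placeholder names from the error message (replace them with your own):

```sql
USE [Database];

-- Map the login to the database (creates a database user)
CREATE USER [LoginName] FOR LOGIN [LoginName];

-- Grant EXECUTE on the stored procedure
GRANT EXECUTE ON dbo.SPName TO [LoginName];
```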

Now try again to execute the stored procedure with the login that we just permitted.

Friday, October 22, 2010

Primary Key Without Clustered Index

We have read/heard many times that a clustered index is created when we create a primary key on a column.

But is it possible to create a PK without a clustered index?

Yes, it is possible, but an index will still be created, either clustered or nonclustered.

See the SQL (the column name is just an example):

Create Table dbo.PKWithoutCLIdx
(
    Id Int Not Null Constraint PK_PKWithoutCLIdx Primary Key NonClustered
)

In this way we can force SQL Server to create a nonclustered index on the primary key.
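We can verify which kind of index was actually created (this assumes the dbo.PKWithoutCLIdx table above exists):

```sql
-- type_desc shows NONCLUSTERED for the primary key index
SELECT i.name, i.type_desc, i.is_primary_key
FROM sys.indexes i
WHERE i.object_id = OBJECT_ID('dbo.PKWithoutCLIdx');
```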


Get size of all tables

Here is the stored procedure to get the size of all tables (reconstructed into a complete, runnable form):

Create PROCEDURE dbo.GetAllTableSizes
AS
BEGIN
    Print 'Read the complete article in case of a collation conflict error'

    IF Object_ID('tempdb..#TempTable') Is Not Null DROP TABLE #TempTable

    DECLARE @TableName VARCHAR(200)

    ----Declare a cursor over all tables
    DECLARE tableCursor CURSOR FOR
        Select Schema_Name(Schema_ID) + '.' + T.Name ----Concatenate the schema also
        From Sys.Tables T
        For Read Only

    --Temp table to store the results
    CREATE TABLE #TempTable
    (
        tableName varchar(100),
        numberofRows varchar(100),
        reservedSize varchar(50),
        dataSize varchar(50),
        indexSize varchar(50),
        unusedSize varchar(50)
    )

    --Open the cursor
    OPEN tableCursor

    --Get the first table name from the cursor
    FETCH NEXT FROM tableCursor INTO @TableName

    --Loop until the cursor can no longer fetch
    WHILE (@@Fetch_Status = 0)
    BEGIN
        --Dump the results of the sp_spaceused query to the temp table
        INSERT #TempTable
        EXEC sp_spaceused @TableName

        --Get the next table name
        FETCH NEXT FROM tableCursor INTO @TableName
    END

    --Close the cursor
    CLOSE tableCursor
    DEALLOCATE tableCursor

    --Select all records so we can use the results
    SELECT Schema_Name(ST.Schema_ID) AS [Schema], T.*
    FROM #TempTable T
    Inner Join sys.tables ST On T.tableName = ST.Name ----To display the schema
    Order By T.numberofRows Desc
END

Exec Dbo.GetAllTableSizes

/* If you face a collation conflict error like "Cannot resolve the collation conflict between "SQL_Latin1_General_CP1_CI_AS" and "Latin1_General_CI_AS" in the equal to operation.",
simply do the following steps:
1. Execute sp_help 'sys.tables'
2. Copy the collation of the NAME column
3. Paste that collation into the temp table part of the SP,
so the temp table in the SP will look like:
tableName varchar(100) COLLATE SQL_Latin1_General_CP1_CI_AS,
numberofRows varchar(100),
reservedSize varchar(50),
dataSize varchar(50),
indexSize varchar(50),
unusedSize varchar(50)

4. Alter the SP
5. Execute it again.
*/


Sunday, July 18, 2010

7 Differences Between SP and FN

1. Functions must return a value (scalar, inline table, or multi-statement table), whereas a stored procedure may or may not return a value.
2. Functions can return a table, whereas stored procedures can create a table but cannot return one.
3. Stored procedures can be called independently using the EXEC keyword, whereas functions are called from SELECT statements.
4. Stored procedures can be used to change server configuration (like setting permissions), whereas functions cannot.
5. XML and OUTPUT parameters cannot be passed to functions, whereas they can be to stored procedures.
6. Transaction-related statements can be handled in a stored procedure, whereas they cannot be in a function.
7. Stored procedures can call a function or another stored procedure; similarly, a function can call another function. The catch with functions is that no user-defined stored procedure can be called from them; only extended/system-defined procedures can be.
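Point 3 above is the one people trip over most often; a small sketch with hypothetical objects:

```sql
-- A scalar function: invoked inside a SELECT
CREATE FUNCTION dbo.fnSquare (@n INT)
RETURNS INT
AS
BEGIN
    RETURN @n * @n
END
GO

-- A stored procedure doing the same work: invoked with EXEC
CREATE PROCEDURE dbo.uspSquare @n INT
AS
    SELECT @n * @n
GO

SELECT dbo.fnSquare(4)  -- functions are called from SELECT statements
EXEC dbo.uspSquare 4    -- procedures are called independently with EXEC
```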


Find nth highest salary..

How do we find the nth highest salary from a salary table?
I have faced this question many times in interviews; I think
it is one of the most popular questions asked in interviews.

I read many articles on the Internet and found that the following is the best way to find the Nth highest salary using the TOP clause.

Suppose we want to find 3rd highest salary, then:-

Create Table Salary
(
    EmpID Int,
    Salary Int
)

Insert Into Salary
Select 101, 25000 Union All
Select 102, 45000 Union All
Select 103, 75000 Union All
Select 104, 5000 Union All
Select 105, 3000 Union All
Select 106, 2000 Union All
Select 106, 97000

Select * From Salary
Order By Salary Desc

Select Top 1 Salary
From
(
    Select Top 3 Salary
    From Salary
    Order By Salary Desc
) aa
Order By Salary

OUTPUT : 45000

So simply by changing the second TOP clause in this SQL, we can find the Nth highest salary.
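Since SQL Server 2005, an alternative sketch uses DENSE_RANK, which also behaves well when salaries are duplicated; with the sample data above and N = 3:

```sql
-- Rank distinct salary values from highest to lowest, then pick rank N
SELECT DISTINCT Salary
FROM (
    SELECT Salary,
           DENSE_RANK() OVER (ORDER BY Salary DESC) AS rnk
    FROM Salary
) ranked
WHERE rnk = 3;   -- returns 45000 for the sample data
```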


Delete Duplicate Record using CTE?

Hi All,
I know there are many ways to delete duplicate records from a table, but sometimes we find there is no column, such as an identity column or a PK, to identify each record.
To handle this requirement, I think a CTE is the best option.
Try this SQL:

----Delete Duplicate
Create Table Product
(
    Product Varchar(100),
    Price Int
)

Insert Into Product
Select 'Sugar','100' Union All
Select 'Salt','50' Union All
Select 'Refined','300' Union All
Select 'Sugar','100' Union All
Select 'Salt','50' Union All
Select 'Juice','75' Union All
Select 'Juice','75'

Select *,'Before' from Product;

WITH CTE (ID,Product,Price)
AS
(
    Select ROW_Number() OVER(Partition By Product Order By Product) AS ID,*
    from Product
)
Delete
From CTE
Where ID > 1;

Here the ROW_NUMBER function generates a unique number within each partition, so we can simply delete the records with an ID value greater than 1 and solve the problem.


Thursday, June 24, 2010

How to show total No of rows return on report (SSRS)

1. Add a text box to the report.
2. Write the expression below (the dataset name is a placeholder; use your own):
   =CountRows("DataSetName")

Tuesday, June 22, 2010

Types of sub-queries

SQL Server allows three types of subqueries:

1. Single-row subquery: the subquery returns only one row.
2. Multiple-row subquery: the subquery returns multiple rows.
3. Multiple-column subquery: the subquery returns multiple columns.

---1. Example
Select *
From Table1
Where Col1 > = (Select Sum(Col1) From Table2)

Here in the first example the inner query will always return a single value.

---2. Example
Select *
From Table1
Where Col1 IN (Select Col1 From Table2)

Here in the second example the inner query can return multiple rows.

---3. Example
Select *
From Table1 t1
Inner Join (Select Col1, Sum(Col2) As Col2
From Table2
Group By Col1
) t2
On t1.Col1 = t2.Col1

Here in the third example the subquery used in the JOIN returns two columns.

There are another two terms used in Subquery called Correlated and NonCorrelated.

A correlated subquery is an inner query that references columns of the outer query, so the inner query is conceptually executed once for each row of the outer query.

----Example of Correlated Subqueries

SELECT e.EmployeeID
FROM Employee e
WHERE e.ContactID IN
(
    SELECT c.ContactID
    FROM Person.Contact c
    WHERE MONTH(c.ModifiedDate) = MONTH(e.ModifiedDate)
)

A noncorrelated subquery is a subquery that is independent of the outer query and can be executed on its own, without relying on the main outer query.

----Example of Noncorrelated Subqueries

SELECT e.EmployeeID
FROM Employee e
WHERE e.ContactID IN
(
    SELECT c.ContactID
    FROM Person.Contact c
    WHERE c.Title = 'Mr.'
)


ISNULL: This function takes 2 arguments; if the first argument is NULL it returns the second argument, otherwise it returns the first. It works exactly like:

Case When FirstArgument Is Null Then SecondArgument Else FirstArgument End As WithCaseWhen

Such a statement can be handled simply with the ISNULL function:

Select ISNULL (FirstArgument, SecondArgument)

COALESCE: Used when we have to check multiple arguments. Suppose
a contact table contains 3 types of contact numbers (HomePhone, WorkPhone, CellPhone), and all three columns allow NULL values,
and now we want to fetch any one non-NULL phone number. This requirement could be met with a CASE WHEN statement, but there is no need to write a complex
CASE WHEN; SQL Server provides a function that checks multiple arguments and selects the first non-NULL value.

So COALESCE function returns the first non-NULL expression among its arguments.

Select COALESCE ( HomePhone, WorkPhone, CellPhone)

NULLIF: The ISNULL and COALESCE functions replace NULL with a non-NULL value, but sometimes we need to replace a non-NULL value with NULL, like:

Select Case When FirstAgument = SecondArgument Then Null Else FirstAgument End

The NULLIF function takes two arguments. If the two arguments are equal, then NULL is returned. Otherwise, the first argument is returned.

Try this sql:

Create Table #Names
(
    FirstName Varchar(20),
    LastName Varchar(10)
)

Insert Into #Names
Select 'Randhir','Randhir' Union All
Select 'Satish','Verma' Union All
Select 'Sumit','Gure'

Select * From #Names

--Now show FirstName + LastName, but if the first and last names are the same, show only FirstName

---With Case When
Select Case When FirstName = LastName Then FirstName Else FirstName + ' ' + LastName End Name
From #Names

---With ISNULL and NULLIF
Select ISNULL(NULLIF(FirstName,LastName),FirstName) Name
From #Names

Here the NULLIF(FirstName,LastName) expression returns NULL for the first record because its first and last names are the same,
and ISNULL then replaces that NULL with FirstName.

Thursday, June 17, 2010

Cannot assign a default value to a local variable.

This error occurs when we try to assign a value to a variable at the time of declaration.

Declare @FareType Char(1) = NULL

Here SQL Server will throw that error with severity level 15.
Declaration and assignment are actually two distinct operations.
We can only assign a value to a variable using a SET or SELECT statement.
Declare @FareType Char(1)
Set @FareType = 'F'


Declare @FareType Char(1)
Select @FareType = 'F'

But fortunately, SQL Server 2008 allows assigning a value to a variable at the time of declaration.
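For completeness, the 2008 form looks like this:

```sql
-- Valid in SQL Server 2008 and later; raises the error above in 2005
DECLARE @FareType CHAR(1) = 'F';
SELECT @FareType;
```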

Friday, June 4, 2010

What we can do using SSIS..

Integration Services is a platform for building high performance data integration and workflow solutions, including extraction, transformation, and loading (ETL) operations for data warehousing.

Integration Services includes:
1. Graphical tools and wizards for building and debugging packages.
2. Tasks for performing workflow functions such as FTP operations, SQL statement execution, and e-mail messaging.
3. Data sources and destinations for extracting and loading data.
4. Transformations for cleaning, aggregating, merging, and copying data.
5. A management service (like backup & restore using a Maintenance Plan).
6. Tools for administering Integration Services packages.

Typical uses of SSIS packages:

Merging Data from Heterogeneous Data Stores: Data is typically stored in many different data storage systems, and extracting data from all sources and merging it into a single, consistent dataset is challenging. This situation can occur for a number of reasons. For example:

Many organizations archive information that is stored in legacy data storage systems. This data may not be important to daily operations, but it may be valuable for trend analysis that requires data collected over a long period of time.
Branches of an organization may use different data storage technologies to store the operational data. The package may need to extract data from spreadsheets as well as relational databases before it can merge the data.

Data may be stored in databases that use different schemas for the same data. The package may need to change the data type of a column or combine data from multiple columns into one column before it can merge the data.

Integration Services can connect to a wide variety of data sources, including multiple sources in a single package. A package can connect to relational databases by using .NET and OLE DB providers, and to many legacy databases by using ODBC drivers. It can also connect to flat files, Excel files, and Analysis Services projects.

Integration Services includes source components that perform the work of extracting data from flat files, Excel spreadsheets, XML documents, and tables and views in relational databases from the data source to which the package connects.

Next, the data is typically transformed by using the transformations that Integration Services includes. After the data is transformed to compatible formats, it can be merged physically into one dataset.

After the data is merged successfully and transformations are applied to data, the data is usually loaded into one or more destinations. Integration Services includes destination for loading data into flat files, raw files, and relational databases. The data can also be loaded into an in-memory recordset and accessed by other package elements.

Populating Data Warehouses and Data Marts: The data in data warehouses and data marts is usually updated frequently, and the data loads are typically very large.

SSIS includes a task that bulk loads data directly from a flat file into SQL Server tables and views, and a destination component that bulk loads data into a SQL Server database as the last step in a data transformation process.

An SSIS package can be configured to be restartable. This means you can rerun the package from a predetermined checkpoint, either a task or container in the package. The ability to restart a package can save a lot of time, especially if the package processes data from a large number of sources.

You can use SSIS packages to load the dimension and fact tables in the database. If the source data for a dimension table is stored in multiple data sources, the package can merge the data into one dataset and load the dimension table in a single process, instead of using a separate process for each data source.

Updating data in data warehouses and data marts can be complex, because both types of data stores typically include slowly changing dimensions that can be difficult to manage through a data transformation process. The Slowly Changing Dimension Wizard automates support for slowly changing dimensions by dynamically creating the SQL statements that insert and update records, update related records, and add new columns to tables.

Additionally, tasks and transformations in Integration Services packages can process Analysis Services cubes and dimensions. When the package updates tables in the database that a cube is built on, you can use Integration Services tasks and transformations to automatically process the cube and to process dimensions as well. Processing the cubes and dimensions automatically helps keep the data current for users in both environments; users who access information in the cubes and dimensions, and users who access data in a relational database.

Integration Services can also compute functions before the data is loaded into its destination. If your data warehouses and data marts store aggregated information, the SSIS package can compute functions such as SUM, AVERAGE, and COUNT. An SSIS transformation can also pivot relational data and transform it into a less-normalized format that is more compatible with the table structure in the data warehouse.

Cleaning and Standardizing Data: Whether data is loaded into an online transaction processing (OLTP) or online analytical processing (OLAP) database, an Excel spreadsheet, or a file, it needs to be cleaned and standardized before it is loaded. Data may need to be updated for the following reasons:

Data is contributed from multiple branches of an organization, each using different conventions and standards. Before the data can be used, it may need to be formatted differently. For example, you may need to combine the first name and the last name into one column.

Data is rented or purchased. Before it can be used, the data may need to be standardized and cleaned to meet business standards. For example, an organization wants to verify that all the records use the same set of state abbreviations or the same set of product names.

Data is locale-specific. For example, the data may use varied date/time and numeric formats. If data from different locales is merged, it must be converted to one locale before it is loaded to avoid corruption of data.

Integration Services includes built-in transformations that you can add to packages to clean and standardize data, change the case of data, convert data to a different type or format, or create new column values based on expressions. For example, the package could concatenate first and last name columns into a single full name column, and then change the characters to uppercase.

An Integration Services package can also clean data by replacing the values in columns with values from a reference table, using either an exact lookup or fuzzy lookup to locate values in a reference table. Frequently, a package applies the exact lookup first, and if the lookup fails, it applies the fuzzy lookup. For example, the package first attempts to look up a product name in the reference table by using the primary key value of the product. When this search fails to return the product name, the package attempts the search again, this time using fuzzy matching on the product name.

Another transformation cleans data by grouping values in a dataset that are similar. This is useful for identifying records that may be duplicates and therefore should not be inserted into your database without further evaluation. For example, by comparing addresses in customer records you may identify a number of duplicate customers.

Building Business Intelligence into a Data Transformation Process:
A data transformation process requires built-in logic to respond dynamically to the data it accesses and processes.

The data may need to be summarized, converted, and distributed based on data values. The process may even need to reject data, based on an assessment of column values.

To address this requirement, the logic in the SSIS package may need to perform the following types of tasks:

>>>Merging data from multiple data sources.
>>>Evaluating data and applying data conversions.
>>>Splitting a dataset into multiple datasets based on data values.
>>>Applying different aggregations to different subsets of a dataset.
>>>Loading subsets of the data into different or multiple destinations.

Integration Services provides containers, tasks, and transformations for building business intelligence into SSIS packages.

Containers support the repetition of workflows by enumerating across files or objects and by evaluating expressions. A package can evaluate data and repeat workflows based on results. For example, if the date is in the current month, the package performs one set of tasks; if not, the package performs an alternative set of tasks.

Tasks that use input parameters can also build business intelligence into packages. For example, the value of an input parameter can filter the data that a task retrieves.

Transformations can evaluate expressions and then, based on the results, send rows in a dataset to different destinations. After the data is divided, the package can apply different transformations to each subset of the dataset. For example, an expression can evaluate a date column, add the sales data for the appropriate period, and then store only the summary information.

It is also possible to send a data set to multiple destinations, and then apply different sets of transformation to the same data. For example, one set of transformations can summarize the data, while another set of transformations expands the data by looking up values in reference tables and adding data from other sources.

Automating Administrative Functions and Data Loading:

Administrators frequently want to automate administrative functions such as backing up and restoring databases, copying SQL Server databases and the objects they contain, copying SQL Server objects, and loading data. Integration Services packages can perform these functions.

Integration Services includes tasks that are specifically designed to copy SQL Server database objects such as tables, views, and stored procedures; copy SQL Server objects such as databases, logins, and statistics; and add, change, and delete SQL Server objects and data by using Transact-SQL statements.

Administration of an OLTP or OLAP database environment frequently includes the loading of data. Integration Services includes several tasks that facilitate the bulk loading of data. You can use a task to load data from text files directly into SQL Server tables and views, or you can use a destination component to load data into SQL Server tables and views after applying transformations to the column data.

An Integration Services package can run other packages. A data transformation solution that includes many administrative functions can be separated into multiple packages so that managing and reusing the packages is easier.

If you need to perform the same administrative functions on different servers, you can use packages. A package can use looping to enumerate across the servers and perform the same functions on multiple computers. To support administration of SQL Server, Integration Services provides an enumerator that iterates across SQL Management Objects (SMO) objects. For example, a package can use the SMO enumerator to perform the same administrative functions on every job in the Jobs collection of a SQL Server installation.

SSIS packages can also be scheduled using SQL Server Agent Jobs.

Sunday, May 30, 2010

Database Normalization

It is Sunday; I had dinner and was thinking about what to do today that could enhance my database management skills. Great, a very basic topic of relational databases, "Normalization", is here, one that I need to explore more because I get confused every time I face a question on it.

Normalization is the systematic way of organizing data in a database. It includes creating tables and establishing relationships between those tables according to rules designed both to protect the existing data (against accidental deletions or amendments) and to make the database more flexible by eliminating redundancy and inconsistent dependency.

Redundant data wastes disk space and creates maintenance problems. If data that exists in more than one place must be changed, the data must be changed in exactly the same way in all locations. A customer address change is much easier to implement if that data is stored only in the Customers table and nowhere else in the database.

There are a few rules for database normalization. Each rule is called a "normal form." If the first rule is observed, the database is said to be in "first normal form." If the first three rules are observed, the database is considered to be in "third normal form." Although other levels of normalization are possible, third normal form is considered the highest level necessary for most applications.

First Normal Form

* Eliminate repeating groups in individual tables.
* Create a separate table for each set of related data.
* Identify each set of related data with a primary key.

Basically, there are 4 properties of a relational table that can be considered as being in 1NF:
1. Entries in columns are single-valued.
2. Entries in columns are of the same kind.
3. Each row is unique.
4. Each column has a unique name.

Example 1

Table 1 (not in 1NF):
Item_ID | Item_Number            | Item_Amount
100     | ITM001, ITM002, ITM003 | 500, 450, 300

Table 1 (in 1NF):
Item_ID | Item_Number | Item_Amount
100     | ITM001      | 500
101     | ITM002      | 450
103     | ITM003      | 300
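The 1NF version of Table 1 could be created like this (the data types are my own assumption):

```sql
CREATE TABLE dbo.Item
(
    Item_ID INT PRIMARY KEY,          -- each row is unique
    Item_Number VARCHAR(10) NOT NULL, -- entries are single-valued
    Item_Amount INT NOT NULL          -- entries are of the same kind
);
```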

Second Normal Form

* Create separate tables for sets of values that apply to multiple records.
* Relate these tables with a foreign key.

Records should not depend on anything other than a table's primary key (a compound key, if necessary). For example, consider a customer's address in an accounting system. The address is needed by the Customers table, but also by the Orders, Shipping, Invoices, Accounts Receivable, and Collections tables. Instead of storing the customer's address as a separate entry in each of these tables, store it in one place, either in the Customers table or in a separate Addresses table.

Unnormalized table:

Student# Advisor Adv-Room Class1 Class2 Class3
1022 Jones 412 101-07 143-01 159-02
4123 Smith 216 201-01 211-02 214-01

Note the multiple Class# values for each Student# value in the above table. Class# is not functionally dependent on Student# (primary key), so this relationship is not in second normal form.

The following two tables demonstrate second normal form:

Student# Advisor Adv-Room
1022 Jones 412
4123 Smith 216

Student# Class#
1022 101-07
1022 143-01
1022 159-02
4123 201-01
4123 211-02
4123 214-01
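The two 2NF tables above could be sketched as DDL like this (the names and types are my own assumption):

```sql
CREATE TABLE dbo.Student
(
    StudentNo INT PRIMARY KEY,
    Advisor VARCHAR(50),
    AdvRoom VARCHAR(10)
);

-- Class numbers move to their own table, keyed by student and class,
-- so each non-key value depends on the whole key.
CREATE TABLE dbo.StudentClass
(
    StudentNo INT REFERENCES dbo.Student (StudentNo),
    ClassNo VARCHAR(10),
    PRIMARY KEY (StudentNo, ClassNo)
);
```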

Third Normal Form

* Eliminate fields that do not depend on the key.

Values in a record that are not part of that record's key do not belong in the table. In general, any time the contents of a group of fields may apply to more than a single record in the table, consider placing those fields in a separate table.

For example, in an Employee Recruitment table, a candidate's university name and address may be included. But you need a complete list of universities for group mailings. If university information is stored in the Candidates table, there is no way to list universities with no current candidates. Create a separate Universities table and link it to the Candidates table with a university code key.

Why We Need Database Normalization:
Normalization is part of successful database design; without normalization, database systems can be inaccurate, slow, and inefficient, and they might not produce the data you expect.

Third Normal Form: Eliminate Data Not Dependent On Key

In the last example, Adv-Room (the advisor's office number) is functionally dependent on the Advisor attribute. The solution is to move that attribute from the Students table to the Faculty table, as shown below:


Student# Advisor
1022 Jones
4123 Smith


Name Room Dept
Jones 412 42
Smith 216 42

This article is based on one posted on a Microsoft site; I just understood it, simplified it, and put it on my blog.
I think this is a good starting point for normalization basics.