SQL Tips by Namwar Rizvi

January 15, 2009

How to use more than 4GB RAM on SQL Server?

SQL Server is a highly optimized and well-tuned piece of code. It tries to utilise operating system resources as efficiently as possible. Because of this, most DBAs assume that if they increase the RAM on the server, SQL Server will automatically start using it, right? Wrong. SQL Server can only use more than 4GB of RAM if, and only if:

  • You are running the Enterprise Edition of Windows Server 2003 or the Advanced/Datacenter edition of Windows 2000.
  • You have enabled the operating system to use the extra RAM you have installed. Please check here for the details and how to do it.
  • You are not running the Express or Workgroup edition of SQL Server. Please check here for further details about the limitations of the different editions of SQL Server.
  • You have enabled the awe enabled option and set max server memory to the maximum memory you can allocate to SQL Server. This applies to 32-bit versions of the operating system and is not required on 64-bit versions of Windows Server. Please check here for further details on how to configure memory for SQL Server.
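
On a 32-bit server that meets the conditions above, the two sp_configure options can be set as follows. This is a sketch only: the 6144 MB value for max server memory is an example figure you should adjust to your own hardware, and it assumes the SQL Server service account holds the "Lock Pages in Memory" Windows privilege.

```sql
--Enable AWE and cap SQL Server memory (32-bit only)
sp_configure 'show advanced options', 1
RECONFIGURE
GO

sp_configure 'awe enabled', 1
RECONFIGURE
GO

--Value is in MB; 6144 (6GB) is just an example figure
sp_configure 'max server memory', 6144
RECONFIGURE
GO
```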

August 28, 2008

Resource Governor in SQL Server 2008 – Beware of catches

Resource Governor is one of the best additions in SQL Server 2008. It gives you real control over your environment as a DBA and can be very useful for better utilization of your database server resources. So far so good, but don’t just start using it without knowing how it actually restricts resource utilization. Following are some points to remember for better utilization of this feature:

  1. Normally, most of us assume that if we restrict memory usage for user A to 10%, then he will never be able to utilize more than 10% of the server in any case. Right? Wrong. Actually, Resource Governor only enforces the 10% limit when it sees that the remaining memory is not available; if memory is available and there is no pending workload, it will allow the user to exceed the allowed quota of 10%. This behavior optimizes memory utilization and avoids wasting resources. But it can also have adverse effects: if user A fires his query before other users, the server will start utilizing all the available memory, and all the users who come afterwards will suffer the consequences.
  2. Please note that Resource Governor is for the database engine only, not for the other services in SQL Server 2008. This means you cannot control the resource usage of Reporting Services, Analysis Services, or Integration Services.
  3. If you have multiple instances running on the same machine, you cannot use Resource Governor to manage load between those instances.

Keeping these points in mind will help you better understand how to use Resource Governor and what to expect. It is one of the best tools to manage your load and comes highly recommended, but make sure you know its pros and cons.
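
As a sketch of how such a 10% memory cap is declared, the pieces fit together as a resource pool, a workload group, and a classifier function. All object names here, including the login UserA, are hypothetical:

```sql
--Pool that caps memory grants at 10% of server memory
CREATE RESOURCE POOL LimitedPool
    WITH (MAX_MEMORY_PERCENT = 10);
GO

--Workload group bound to the limited pool
CREATE WORKLOAD GROUP LimitedGroup
    USING LimitedPool;
GO

--Classifier function (created in master) routes UserA
--into the limited group; everyone else uses default
CREATE FUNCTION dbo.fn_Classifier()
RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    IF SUSER_SNAME() = 'UserA'
        RETURN 'LimitedGroup';
    RETURN 'default';
END;
GO

ALTER RESOURCE GOVERNOR
    WITH (CLASSIFIER_FUNCTION = dbo.fn_Classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```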

July 17, 2008

How SQL Server Index Statistics works – Part 1

In my last article,

Selectivity of a Query and how it affects indexing

I discussed the selectivity of a query. Today I will extend that discussion and drill down into how index statistics work. I will continue in my next article, where we will see some real-world examples, but first we need to understand the concept.

Index statistics are one of the most fundamental parts of query plan optimization in SQL Server. A better understanding of index statistics helps you optimize and fine-tune your queries. So if you have 1,000,000 rows and you are searching for a range of values in an indexed column, how does SQL Server forecast how many rows will match your criteria before even searching for a single row? Here is the actual process:

As we all know, when we index a column, SQL Server does two things:

  1. Sorts the values of the column in an internal data structure called an “index”. This data structure contains the sorted values and a pointer to each value’s respective row in the table.
  2. Generates a histogram of values.

The histogram shows the distribution of values for the sorted column. For example:

If we have the following values in an integer column: 1, 2, 2, 3, 4, 1, 4, 5, 4, then a typical histogram will be similar to this:

Column value    Rows matched
1               2
2               2
3               1
4               3
5               1

So suppose you are searching for all rows with column values between 3 and 5. By looking at the histogram table above, you can estimate that you will get a total of 1+3+1=5 rows. If your table contains just 9 rows in total, then 5 rows means approximately 55% of the rows will match the criteria. Based on this calculation, SQL Server may elect to scan the table directly instead of first reading the index, fetching the pointers of the matching rows, and then reading the actual rows. Although this is a very simple example and there are other factors involved in deciding whether or not to use an index, it demonstrates the concept in very simple terms.
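
You can inspect the histogram SQL Server actually stores with DBCC SHOW_STATISTICS. The table and index names below are from the AdventureWorks sample database and are just an example; substitute your own:

```sql
--Show header, density, and histogram for an index's statistics
DBCC SHOW_STATISTICS ('Production.Product', 'AK_Product_Name');
```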

In my next article, I will further explain this concept with the help of real world examples.

August 27, 2007

MERGE Statement of SQL Server 2008

Filed under: Information,New Features,Performance,SQL Server 2008,TSQL — namwar @ 10:28 PM

SQL Server 2008 has introduced a new statement called MERGE, which combines the logic of INSERT-if-not-exists and UPDATE-if-exists.
The MERGE statement gives you the ability to write one single statement which will update a row if it already exists and otherwise insert a new row.
Additionally, the MERGE statement also enables you to combine delete logic if you want.
The following example, adapted from Microsoft SQL Server 2008 Books Online,
will give you a better idea:

Use AdventureWorks

MERGE Departments AS d
USING Departments_delta AS dd
ON (d.DeptID = dd.DeptID)
WHEN MATCHED AND d.Manager <> dd.Manager
	OR d.DeptName <> dd.DeptName
	THEN UPDATE SET d.Manager = dd.Manager,
	                d.DeptName = dd.DeptName
WHEN NOT MATCHED THEN
	INSERT (DeptID, DeptName, Manager)
	VALUES (dd.DeptID, dd.DeptName, dd.Manager);


August 21, 2007

Caching and Recompilation in SQL Server 2005

Filed under: Information,New Features,Performance,Query,SQL Server 2005 — namwar @ 8:11 PM

I was browsing through the newsgroups and found a very interesting link to a white paper about SQL Server 2005 plan caching and batch recompilation. This white paper is a must-read for anyone who really wants to master query tuning and optimization strategies.
You can read this white paper here

August 17, 2007

TRUNCATE command can be rolled back

Filed under: Information,Performance,TSQL — namwar @ 4:05 PM

Some people wonder why TRUNCATE is so much faster than a DELETE operation. Most of the time, the answer given is that DELETE is a logged operation and TRUNCATE is not, and therefore TRUNCATE is faster than DELETE.
This understanding is not fully correct. TRUNCATE is also a logged operation, and it will also be rolled back if you roll back your transaction, exactly like the rollback of a DELETE operation.
But there is a difference in the logging of DELETE and TRUNCATE. A DELETE operation is logged on a per-row basis; therefore, the time to delete a full table is nearly proportional to the number of rows it has. In a TRUNCATE operation, on the other hand, SQL Server logs only the data pages that get deallocated by the TRUNCATE command. You can take the analogy of a file system here. Your operating system does not take more time to delete a large file than a smaller one, because for both types of file the OS needs to do the same set of operations, which is actually deallocating the sectors used by the target file, and this can be achieved very quickly regardless of the size of the file. The same logic applies to the TRUNCATE command; the only difference is that since an RDBMS guarantees you the ACID properties, it provides you the facility to roll back a TRUNCATE if you performed it in a transaction.
The following example will help you see it in action:

Use tempdb

--Create a test table
Create table testTable (id int)

--Insert three sample rows
Insert into testTable values (1)
Insert into testTable values (2)
Insert into testTable values (3)

--Display all rows of the table
Select * from testTable

--Perform transactional truncate
Begin Transaction
Truncate table testTable

Rollback transaction

--Verify that rows are still there
Select * from testTable

August 15, 2007

Difference between INNER JOIN and INTERSECT

Suppose you have two tables and you want the distinct rows that are common to both tables; what will you do to get them? Most of us will answer that we will use an INNER JOIN to get the common rows of the two tables, right? Wrong!
It is a big misconception that INNER JOIN will always return all the common rows between two tables. In reality, INNER JOIN treats two NULLs as two different values rather than the same value, so if you are joining rows on a NULLable column and both tables have NULLs in that column, INNER JOIN will ignore those rows, because

Two NULLs are not equal in TSQL

To correctly retrieve all common rows between two tables, SQL Server 2005 has introduced the INTERSECT keyword. INTERSECT treats two NULLs as the same value, and it returns all rows that are common to both tables.
Run the following example and you will see the result for yourself:

--Create two table variables for testing
Declare @m_table1 table (id int, firstName varchar(50))
Declare @m_table2 table (id int, firstName varchar(50))

--Fill the tables with NULLs
Insert into @m_table1 values (1,NULL)
Insert into @m_table2 values (1,NULL)

--Getting all common rows by INNER JOIN
--(returns no rows because NULL does not equal NULL)
Select t1.*
from @m_table1 t1
inner join @m_table2 t2
on t1.id=t2.id and t1.firstName=t2.firstName

--Getting all common rows by INTERSECT
Select * from @m_table1
INTERSECT
Select * from @m_table2

August 10, 2007

Optimized solution of Paging by using Count(*) OVER() functionality

Paging through a dataset is always a requirement of any data oriented application. There are several solutions to implement paging. Some of them are client based and some are server based.
Every paging solution needs:

  1. Records of only selected page
  2. Total no. of records to calculate total of pages available

Whatever solution you implement, you will end up running at least two queries: first to find the total number of records, and second to find the records of your selected page.

In one of my previous posts, I showed you a way of implementing paging by using the Row_Number() function of SQL Server 2005. Today, I am showing you an excellent trick to get the total number of records and the list of selected records in just one query. Following is the sample code to implement this logic. It demonstrates how you can implement paging very effectively without hitting the server twice. Note the use of the Count(*) OVER() functionality.

Use AdventureWorks

--Change these two variables to parameters of your
--actual stored procedure

Declare @PageNumber int
Declare @PageSize int

--Assume we need page 6 i.e. records from 51-60
Select @PageNumber=6
Select @PageSize=10

--Select only those records which
--fit into a single page

Select Top(@PageSize) * from
(
	Select RowId=Row_Number() OVER (Order By ProductID),
	       TotalRows=Count(*) OVER(), --Count all records
	       Name, ProductNumber
	from Production.Product
) A
Where A.RowId > ((@PageNumber-1)*@PageSize)

August 9, 2007

Calculate Product of the numeric values by TSQL

Filed under: Information,Performance,Query,tips,TSQL — namwar @ 9:55 PM

TSQL provides you mathematical aggregate functions like SUM(), AVG(), etc., but it does not provide a way to calculate the product of a list of values. Most of us know from high-school mathematics that a multiplication can be converted into a series of additions. If you remember basic logarithms, then you can easily calculate the product of the values. To refresh your concepts of logarithms, please visit Wikipedia here.
Now, following is sample code to demonstrate how you can calculate the product of the values in TSQL.

Please be aware that since logarithms involve decimal numbers, the result will not be 100% accurate, but it will serve the purpose in most cases.

--Declare a sample table
Declare @m_TestTable table (sampleValue int)

--Fill the sample table with the values 1 to 9
Declare @m_value int
Select @m_value=1
While (@m_value < 10)
Begin
	Insert into @m_TestTable values(@m_value)
	Select @m_value=@m_value+1
End

--Calculate the product
Select Power(10,Sum(Log10(sampleValue))) from @m_TestTable

August 5, 2007

Utilize multiple processors’ power by using just a single SQL Server license

Filed under: Information,Performance,SQL Server 2005 — namwar @ 8:40 PM

Everyone knows multiple processors mean more power for SQL Server, but they also mean multiple licenses, i.e., one for each processor. So is there any way we can reduce the cost but not the SQL Server computing power?
Yes, there is a way: buy a multi-core processor.
SQL Server is licensed per socket, no matter how many cores you have. A quad-core processor means you can have the power of 4 processors packed into one socket, so you need just 1 license of SQL Server.

Don’t confuse hyper-threading with multi-core systems, because a hyper-threaded core is just a virtual core, nothing more. In fact, a hyper-threaded processor may sometimes slow down your query because of the lack of actual parallelism.
On the other hand, multi-core processors have a totally different architecture and behave like real multiple processors.
So if you are about to suggest multiple processors for a new SQL Server, think again, research multi-core processors, and save your company money.
For a detailed discussion of dual-processor and multi-core architecture, please visit here.
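
If you want to check what SQL Server itself sees on a box, the sys.dm_os_sys_info view (available from SQL Server 2005) reports the logical CPU count and the hyper-threading ratio; dividing one by the other is a rough sketch for estimating the number of sockets:

```sql
--Logical CPUs, logical CPUs per socket, and estimated sockets
SELECT cpu_count,
       hyperthread_ratio,
       cpu_count / hyperthread_ratio AS socket_count
FROM sys.dm_os_sys_info;
```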
