Friday, September 9, 2011

Locking in SQL Server

In a multi-user environment, SQL Server uses locks to prevent conflicting modifications. Locking is purely logical; no special hardware is needed for it.

There are several types of locks available in SQL Server (a short example after this list shows how to observe them at runtime).

1) Shared Lock :- While a shared lock is held, other processes can read the data but cannot modify it. Shared locks are used for operations that do not change the data; a SELECT statement is a typical example. Shared locks are held on data under pessimistic concurrency, and two shared locks are compatible with each other.

2) Update Lock :- SQL Server takes an update lock when it intends to modify data, and later promotes it to an exclusive lock when it actually makes the change. Update locks are compatible with shared locks; they exist mainly to avoid a common form of deadlock in read-then-update operations.

3) Exclusive (X) :- Used for data-modification operations, such as INSERT, UPDATE, or DELETE. Ensures that multiple updates cannot be made to the same resource at the same time.

4) Intent :- Used to establish a lock hierarchy. The types of intent locks are: intent shared (IS), intent exclusive (IX), and shared with intent exclusive (SIX).

5) Schema :- Used when an operation dependent on the schema of a table is executing. The types of schema locks are schema modification (Sch-M) and schema stability (Sch-S).

6) Bulk Update (BU) :- Used when bulk-copying data into a table and the TABLOCK hint is specified.
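
The sketch below shows one way to watch these lock types appear at runtime. It assumes SQL Server 2005 or later (for the sys.dm_tran_locks view) and a hypothetical table dbo.Accounts; it is an illustration, not a prescription.

-- Open a transaction and modify one row; SQL Server takes an exclusive (X)
-- lock on the row and intent exclusive (IX) locks on the page and table.
BEGIN TRANSACTION;

UPDATE dbo.Accounts              -- dbo.Accounts is a made-up example table
SET    Balance = Balance + 100
WHERE  AccountId = 1;

-- List the locks held by the current session.
SELECT resource_type, request_mode, request_status
FROM   sys.dm_tran_locks
WHERE  request_session_id = @@SPID;

ROLLBACK TRANSACTION;            -- release the locks again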




SQL Server Locking Mechanism

Acquiring and releasing a lock requires a fixed amount of memory, and the memory available determines how many concurrent users SQL Server can support. Memory is managed dynamically by SQL Server 7.0 and later, although you can set a maximum amount of memory for SQL Server. This memory limit plays a major role in determining lock granularity.

Before discussing granularity, we need to look at how SQL Server stores data. Data is stored in pages; each page holds 8 KB of data, and a set of eight contiguous pages is called an extent.

Lock granularity starts at the row and goes up to the database: the row is the lowest level and the database the highest level in SQL Server 2000. SQL Server can lock a row, a page, an extent, an entire table, or the whole database. If you write an update query that changes a single row, SQL Server will lock that row so that no one else can modify it until your query finishes; other rows can still be updated by other users.

If your update statement affects 1,000 rows, SQL Server can lock each row individually, which means acquiring 1,000 locks and using correspondingly more lock memory. On the other hand, if the rows sit in contiguous pages, SQL Server may use page locks, extent locks, or a lock on the entire table until the transaction completes. If SQL Server locks several pages, other users can still update rows that do not live in the affected pages; if SQL Server locks the entire table, everyone else has to wait until the transaction completes.
  
SQL Server chooses the lock that is cheaper in terms of memory. If row-level locking would take too much memory, SQL Server uses page or extent locking instead, and if extent locking is still more costly than a table lock, it locks the entire table rather than individual pages.

SQL Server controls locking itself, but you can also specify explicitly which kind of lock your statement should use. For example, the ROWLOCK hint asks for row-level locks. SQL Server will honour the ROWLOCK hint, but if your statement updates 95% of the rows it needs a lot of memory to acquire that many locks: with 100 rows and a statement that updates 95 of them, you are asking for 95 individual locks. So choose your lock hints carefully and know why you are using them.
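
As a rough sketch (dbo.Accounts and its columns are hypothetical), lock hints are written in the WITH clause of the statement; keep in mind they are requests, and SQL Server may still choose a coarser lock if memory pressure demands it.

-- Ask for row-level locks: one lock per affected row.
UPDATE dbo.Accounts WITH (ROWLOCK)
SET    Status = 'Inactive'
WHERE  LastLogin < '2010-01-01';

-- Ask for a single table-level lock instead; far less lock memory
-- when the statement touches most of the rows.
UPDATE dbo.Accounts WITH (TABLOCK)
SET    Status = 'Inactive'
WHERE  LastLogin < '2010-01-01';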

1.1    Transaction Isolation Levels

The transaction isolation level affects how locks are acquired and released. We try to optimize concurrency because many users are constantly running INSERT, DELETE, and UPDATE statements, so we avoid table-level locks that would hold everyone else up. Many concurrency issues can be avoided with good application design or a better physical/logical data model, yet even well-designed applications face concurrency problems. In that case you can change the default TRANSACTION ISOLATION LEVEL; by default SQL Server uses the READ COMMITTED isolation level. Depending on the isolation level in use, you can face the following issues:

Lost Updates
As the name indicates, some updates are lost. It occurs when two transactions modify the same data at the same time: the update made by the transaction that completes first is overwritten, because the two transactions are not aware of each other.
Example: I have a balance of Rs 2000 in my account, I deposit Rs 1000 at 11:00 am, and my brother withdraws Rs 500 from an ATM at the same time. Ideally the balance should be 2000 + 1000 - 500 = Rs 2500. But if the isolation level is READ UNCOMMITTED and my brother's transaction commits after mine, both transactions started from the old balance of 2000, so at 11:01 the balance will be 2000 - 500 = Rs 1500. The deposit is lost, which is not good!
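
A minimal timeline of that lost update, written as one script with the other session's actions shown as comments; the table dbo.Accounts, the AccountId, and the amounts are all hypothetical.

-- Step 1: session 1 (my deposit) reads the balance and sees 2000.
DECLARE @bal MONEY;
SELECT @bal = Balance FROM dbo.Accounts WHERE AccountId = 1;

-- Step 2: session 2 (the withdrawal) runs the same read on its own
-- connection and also sees 2000, since neither session has written yet.

-- Step 3: session 1 writes 2000 + 1000 = 3000 and commits.
UPDATE dbo.Accounts SET Balance = @bal + 1000 WHERE AccountId = 1;

-- Step 4: session 2 writes 2000 - 500 = 1500 and commits, silently
-- overwriting the deposit made in step 3.

Reading the balance with an UPDLOCK hint (see the update lock section further down) is one common way to make the second reader wait until the first writer has finished.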

Non-Repeatable Read
A non-repeatable read means reading the same data several times within one transaction and getting a different value each time. It can happen under the READ UNCOMMITTED and READ COMMITTED isolation levels, because another transaction can update the data between your reads, so each read returns a different result.

Example: I have a balance of Rs 2000 in my account and I am depositing Rs 1000 more. Meanwhile my brother is discussing a loan with the bank manager, who runs a stored procedure that reads my account balance twice. The first query returns 2000, the balance before my deposit; the second query reads the same table but runs after my deposit, so it returns 3000. The same query returns different results within one transaction.
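
Sketched in T-SQL under the default READ COMMITTED level (dbo.Accounts is again hypothetical):

SET TRANSACTION ISOLATION LEVEL READ COMMITTED;        -- the default
BEGIN TRANSACTION;

SELECT Balance FROM dbo.Accounts WHERE AccountId = 1;  -- returns 2000

-- ... the deposit transaction commits on another connection here ...

SELECT Balance FROM dbo.Accounts WHERE AccountId = 1;  -- now returns 3000

COMMIT TRANSACTION;

Under REPEATABLE READ the shared lock taken by the first SELECT is kept until COMMIT, so the depositing connection has to wait and both reads return the same value.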

Dirty Reads
This is a special case of the non-repeatable read. If you run a report while other transactions are modifying the same data, you may see changes that have not been committed yet, and that may never be committed at all.

Example: Suppose I want to withdraw Rs 500 from my account and my balance is Rs 3000. I punch in my card and type the PIN; the ATM reduces the balance to 2500, but due to some issue I cancel the transaction. If a banker checks my balance at that moment, he gets 2500, which is wrong because my transaction was cancelled and the balance is still 3000.
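
A rough sketch of that dirty read, again with a hypothetical dbo.Accounts table:

-- Session 1: the ATM withdrawal that is later cancelled.
BEGIN TRANSACTION;
UPDATE dbo.Accounts SET Balance = Balance - 500 WHERE AccountId = 1;  -- 3000 becomes 2500
-- ... the transaction is still open ...
ROLLBACK TRANSACTION;  -- the withdrawal is cancelled, the balance is 3000 again

-- Session 2: the banker's query, run while session 1 was still open.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT Balance FROM dbo.Accounts WHERE AccountId = 1;  -- may return 2500: a dirty read

-- The same effect can be produced per table with the NOLOCK hint:
SELECT Balance FROM dbo.Accounts WITH (NOLOCK) WHERE AccountId = 1;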

Phantom Reads
A phantom read occurs when a transaction reads a set of rows, and a later read within the same transaction sees a different set because another transaction has inserted or deleted rows that match the query.

Example: A banker runs a stored procedure that returns the names of account holders with a balance of less than 3000 and gets a list of 10 people. One person on that list then withdraws his money and closes his account. When the banker runs a second procedure to get the addresses of everyone with a balance of less than 3000, he gets only 9 people.

Phantom reads can occur even with the READ COMMITTED isolation level. The SERIALIZABLE isolation level prevents phantom reads: each transaction is fully isolated from the others, so no other transaction can acquire locks on the affected range of rows while it is being read or modified.
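
A sketch of the banker's query under SERIALIZABLE (table and column names are hypothetical): SQL Server takes key-range locks on the qualifying rows and the gaps between them, so no other connection can insert or delete rows in the Balance < 3000 range until the transaction ends.

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;

SELECT AccountHolder
FROM   dbo.Accounts
WHERE  Balance < 3000;

-- Any follow-up query in this transaction sees exactly the same set of rows.

COMMIT TRANSACTION;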

Transaction Isolation Levels
Read Committed: Only committed data can be read. Shared locks are held while the data is being read and released when the statement completes, so other transactions cannot change the data at the moment it is being read; with row-level locks, other transactions can still insert and modify other rows in the same table.

Read Uncommitted: No shared locks are held when reading under this level. Anyone can read or update the data at any time with almost no restrictions, which gives the best concurrency but the least data integrity (dirty reads are possible).

Repeatable Read: This setting prevents dirty and non-repeatable reads. Locks are held on the data that has been read until the transaction ends, but new rows can still be inserted, so phantom reads remain possible.

Serializable: This is the most restrictive setting. No other transaction can insert or modify data in the range that is locked, so it prevents phantom reads too.

To change the transaction isolation level, run the following command:

Set Transaction Isolation Level Serializable

The default transaction isolation level in SQL Server is READ COMMITTED.

Be ready to face all the consequences before you change the default transaction isolation level.
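
A small sketch of changing the level for the current session and checking it afterwards; DBCC USEROPTIONS lists the active session settings, including a row named "isolation level".

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

DBCC USEROPTIONS;                                 -- confirm the new setting

SET TRANSACTION ISOLATION LEVEL READ COMMITTED;   -- back to the default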

Lock Types and Compatibility
Your application's performance depends on whether the various locks being held are compatible with each other.

Intent: 

As the name indicates, an intent lock signals the future intention of SQL Server's lock manager to acquire locks on data for a specific transaction. There are three kinds:

1. Intent Shared (IS)
2. Intent Exclusive (IX)
3. Shared with Intent Exclusive (SIX)



Intent Shared (IS) :- Indicates that the transaction will read some resources lower in the hierarchy by placing shared locks on them.

Intent Exclusive (IX) :- Indicates that the transaction will modify some resources lower in the hierarchy by placing exclusive locks on them.

Shared with Intent Exclusive (SIX) :- Indicates that the transaction will read all resources and modify some of them, so a shared lock is placed on the resources being read and exclusive locks on the ones being modified. Only one SIX lock is allowed on a resource at a time, so while it is held other connections can read the data but cannot modify it.
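
To see the hierarchy in action, here is a sketch along the lines of the earlier lock-listing example (hypothetical dbo.Accounts with a clustered index on AccountId, SQL Server 2005 or later): a read held under REPEATABLE READ keeps its shared lock on the key, and SQL Server adds intent shared locks on the page and the table above it.

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION;

SELECT Balance FROM dbo.Accounts WHERE AccountId = 1;

-- Typically shows S on the KEY plus IS on the PAGE and OBJECT resources.
SELECT resource_type, request_mode
FROM   sys.dm_tran_locks
WHERE  request_session_id = @@SPID;

ROLLBACK TRANSACTION;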

Shared: While a shared lock is held, no transaction is allowed to modify the data; it can only be read.
Example: a SELECT statement.

Update: Before modifying a row, an update lock is acquired. At the time of the actual update it is converted into an exclusive lock; if no modification happens it reverts to a shared lock. Only one transaction at a time can hold an update lock on a resource, which prevents other connections from acquiring an update or exclusive lock on it, although shared locks remain compatible with an update lock, as noted earlier.
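
A sketch of the read-then-update pattern with an explicit UPDLOCK hint (hypothetical table and values):

BEGIN TRANSACTION;

-- Read the row with an update (U) lock so no other connection can take
-- a U or X lock on it before this transaction performs its UPDATE.
SELECT Balance
FROM   dbo.Accounts WITH (UPDLOCK)
WHERE  AccountId = 1;

-- Plain readers (shared locks) are still allowed at this point, but a
-- second writer using the same pattern has to wait.

UPDATE dbo.Accounts
SET    Balance = Balance - 500
WHERE  AccountId = 1;   -- the U lock is converted to an exclusive (X) lock here

COMMIT TRANSACTION;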

Exclusive: While an exclusive lock is held, other transactions cannot acquire locks on the resource; even ordinary reads are blocked. It is used while executing INSERT, UPDATE, and DELETE statements.

Schema Modification (Sch-M): Schema modification locks are used for DDL operations such as CREATE TABLE, CREATE INDEX, and ALTER TABLE.

Schema Stability (Sch-S): Schema stability locks are taken while queries or stored procedures are being compiled. Other transactions can still read and modify the data in the table while a Sch-S lock is held, but DDL against the table cannot be performed.

Bulk Update: Bulk update locks are acquired when you bulk-copy data into a table with the TABLOCK hint. They improve load performance but reduce concurrency, since other users cannot read or update the table during the load.
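
A minimal example of a bulk load that requests the BU lock; the file path, terminators, and table name are placeholders for illustration only.

BULK INSERT dbo.Accounts
FROM 'C:\data\accounts.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '\n',
    TABLOCK          -- requests a bulk update (BU) lock on dbo.Accounts
);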
