SQL Server 2012 : Latch Contention Examples - Queuing


Queuing

Another typical scenario that can exhibit large amounts of latch contention is a system designed for queuing. The causes are similar to those in the last example, although the contention appears in a slightly different way, and it is certainly resolved with a different method.

Most queues are handled using a table: numerous inserts push items onto the queue, and deletes use TOP to quickly locate and remove the earliest row in the table. Techniques such as the OUTPUT clause can help with concurrency, but as the load increases, this kind of design can still end up showing latch contention issues.
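
For reference, a conventional insert/delete queue often looks something like the following sketch; the table and column names here (dbo.ClassicQueue, QueueID) are illustrative rather than taken from the text:

-- Hypothetical conventional queue table: every push is an insert, every pop a delete
CREATE TABLE dbo.ClassicQueue
    (QueueID INT IDENTITY PRIMARY KEY CLUSTERED,
     Message VARCHAR(7000));

-- Push: a new row at the right-hand edge of the clustered index
INSERT dbo.ClassicQueue (Message) VALUES ('hello');

-- Pop: locate the earliest row with TOP, delete it, and return it via OUTPUT
WITH oldest AS
    (SELECT TOP (1) QueueID, Message
     FROM dbo.ClassicQueue WITH (ROWLOCK, READPAST)
     ORDER BY QueueID)
DELETE FROM oldest
OUTPUT deleted.Message;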

Certainly there would be PAGELATCH_EX waits in the leaf levels, as in the last example; but from time to time, activity in the leaf levels would cause similar activity through the higher levels of the b-tree, even up to the root. This means there is potential for contention between the inserts and deletes, even if they are at opposite sides of the b-tree. A representation of this can be seen in Figure 1.

FIGURE 1

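If you suspect this kind of contention, the accumulated page latch waits can be checked with a quick query; a minimal sketch using the standard wait-statistics DMV:

SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'PAGELATCH%'  -- PAGELATCH_EX and friends accumulate here
ORDER BY wait_time_ms DESC;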

It’s interesting to note at this point that some of the changes required at the higher levels of a b-tree when performing inserts and deletes are simply not required when performing updates. Unless the update causes a page split because the new version of the row is larger than the old one and no longer fits on its page, and provided the clustered index key values for the row being updated don’t change, an update command should not need to affect the higher levels of the clustered index at all. A book’s table of contents need not change if only the information in a particular paragraph is being updated and no extra pages are being introduced.

To that end, one method to avoid this kind of latch contention is to pre-populate a table of fixed-length rows and then cycle through those rows with updates, using two sequences to help the queuing stored procedures know which position is at the front of the queue and which is at the back. It is important to gauge the maximum length of the queue, because the impact on the b-tree of having to perform inserts is significant and should be avoided with a little planning.

An approach such as this can work nicely:

CREATE SEQUENCE dbo.seqQueuePush START WITH 1 CACHE 1000; -- next position to push into
CREATE SEQUENCE dbo.seqQueuePop START WITH 1 CACHE 1000;  -- next position to pop from

Unless specified otherwise, sequences are created using the bigint type, starting at the lowest possible value (a very large negative number). Because the range of bigint is so large, there is no harm in starting at 1 and working up, which is a little nicer to read. Either way, it’s important that your queue starts empty, with both sequences at the same number. A cache is used to avoid a bottleneck when generating the next number; you should experiment to see what cache size suits your particular queuing system.
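
If you want to confirm the type the sequences were created with, or experiment with a different cache size, something like the following could be used; the value of 5,000 is just an arbitrary example:

-- Check the data type, current value, and cache size of the two sequences
SELECT name, TYPE_NAME(system_type_id) AS type_name, current_value, cache_size
FROM sys.sequences
WHERE name IN ('seqQueuePush', 'seqQueuePop');

-- Try a different cache size while tuning
ALTER SEQUENCE dbo.seqQueuePush CACHE 5000;
ALTER SEQUENCE dbo.seqQueuePop CACHE 5000;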

As well as markers to indicate the locations of the beginning and end of your queue, you need a table structure to hold it. For example, if you anticipate needing to be able to handle 10,000 messages in the queue, you should create 10,000 positions using placeholder messages. This enables the b-tree to grow to the appropriate size before the system is under load.

The following code will create the queue, and populate it with the 10,000 placeholder items.

CREATE TABLE dbo.MyQueue
    (ID INT PRIMARY KEY CLUSTERED, -- slot number, 0 to 9,999
     Available BIT,                -- 1 = slot is free, 0 = slot holds a message
     Message CHAR(7000));          -- fixed length, so updates never grow the row
-- Populate all 10,000 slots up front so the b-tree reaches full size before load arrives
INSERT dbo.MyQueue (ID, Available, Message)
SELECT TOP (10000) ROW_NUMBER() OVER (ORDER BY (SELECT 1)) - 1, 1, ''
FROM sys.all_columns t1, sys.all_columns t2;

The message length has been chosen as 7,000 characters so that each row fits comfortably within a single page. Note that the column is CHAR(7000), not VARCHAR(7000), as the row should be fixed-length, and you do not want to implement compression at this point either. A bit column is used to indicate whether or not each position in the queue is taken, in case the queue fills up completely.

These 10,000 slots are numbered from 0 to 9,999. Your ever-increasing sequences will far exceed this range, but the modulo function provides a mapping, enabling the sequence numbers to wrap around to the start every 10,000 entries.

When message 3,549,232 arrives, it would be pushed into slot 9,232. If message 3,549,019 is being popped out at the time, it would be found in slot 9,019. After these two operations, the sequences would be ready to tell the system that the next push will use sequence number 3,549,233 (slot 9,233), and the next pop will use 3,549,020 (slot 9,020). Any delay in processing the messages that are being popped off the queue would be fine as long as the size of the queue doesn’t stretch beyond 10,000.
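
You can verify the mapping for these examples with a trivial calculation:

SELECT 3549232 % 10000 AS PushSlot, -- 9232
       3549019 % 10000 AS PopSlot;  -- 9019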

Pushing a message onto the queue is therefore as simple as fetching the next value from the sequence, performing a modulo 10,000 on it to discover into which slot the message should be pushed, and running an UPDATE command to put the message into the appropriate slot:

-- @msg is assumed to be a parameter supplied by the pushing stored procedure
DECLARE @pushpos INT = (NEXT VALUE FOR dbo.seqQueuePush) % 10000;
UPDATE dbo.MyQueue SET Message = @msg, Available = 0
WHERE ID = @pushpos;
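
In practice the push code would typically live in a stored procedure, with @msg arriving as a parameter. A minimal sketch, using a hypothetical procedure name:

CREATE PROCEDURE dbo.PushMessage  -- hypothetical name
    @msg CHAR(7000)
AS
BEGIN
    SET NOCOUNT ON;

    -- Claim the next slot and write the message into it
    DECLARE @pushpos INT = (NEXT VALUE FOR dbo.seqQueuePush) % 10000;

    UPDATE dbo.MyQueue
    SET Message = @msg, Available = 0
    WHERE ID = @pushpos;
END;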

To pop a message from the queue, code such as this could be used:

DECLARE @poppos INT = (NEXT VALUE FOR dbo.seqQueuePop) % 10000;
UPDATE dbo.MyQueue SET Message = '', Available = 1
OUTPUT deleted.Message
WHERE ID = @poppos;

Some testing could be performed to ensure that the queue is not empty (a sketch of one such check follows), but this technique can certainly allow up to 10,000 messages in the queue at any one time and spread a heavy load across a large number of pages. Most importantly, the negative impact on the higher levels of the b-tree caused by performing inserts and deletes is avoided.
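
As a sketch of such a check (assuming the handling of the consumed sequence number is left to the caller), the pop can filter on the Available flag and verify that a row was actually updated:

DECLARE @poppos INT = (NEXT VALUE FOR dbo.seqQueuePop) % 10000;
DECLARE @popped TABLE (Message CHAR(7000));

UPDATE dbo.MyQueue
SET Message = '', Available = 1
OUTPUT deleted.Message INTO @popped (Message)
WHERE ID = @poppos
  AND Available = 0;  -- only pop a slot that actually holds a message

IF NOT EXISTS (SELECT 1 FROM @popped)
    RAISERROR('Slot %d was empty; the queue appears to have been drained.', 16, 1, @poppos);
ELSE
    SELECT Message FROM @popped;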

For an example of data that needs to be updated very quickly, and where updates are therefore used rather than inserts, consider the DMV sys.dm_os_latch_stats, shown in Figure 2:

FIGURE 2


It does not contain any kind of ID field. The only fields are latch_class, waiting_requests_count, wait_time_ms, and max_wait_time_ms; and yet the data is always returned in order, and the order is meaningful. The BUFFER class is always row 28. ACCESS_METHODS_HOBT_VIRTUAL_ROOT is always row 5 (this is a non-buffer latch that exhibits waits when root splits are needed, which would occur if a traditional delete/insert queue had been implemented).
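
A quick query against the DMV, filtering on the two latch classes just mentioned, uses only those four columns:

SELECT latch_class, waiting_requests_count, wait_time_ms, max_wait_time_ms
FROM sys.dm_os_latch_stats
WHERE latch_class IN ('BUFFER', 'ACCESS_METHODS_HOBT_VIRTUAL_ROOT');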

You may have noticed when querying this DMV that many of the entries are zero, but the rows are still there. This is different from, say, sys.dm_db_index_usage_stats, which includes a row only once an index has been used for a scan, seek, lookup, or update operation.

The sys.dm_os_latch_stats DMV is like your queue structure. It needs to be able to respond extremely quickly, as do many of the internal mechanisms within SQL Server. To that end, it is much quicker to set bits that are already in place than to squeeze new ones in. Incrementing a counter that already exists is a significantly better option than trying to conserve space until it is needed, if the speed of recording the data is to be maintained.
