
Part 3: Cursor disadvantages

Cursors have been called memory hogs, poor performers and various other unflattering names. Contributor Hilary Cotter explains the cons of cursors.

Resources consumed by cursors

As I mentioned earlier, a cursor is a memory-resident set of pointers -- meaning it occupies memory that could otherwise be available to other processes. Poorly written cursors can completely deplete available memory.

If you are using AWE (Address Windowing Extensions) on 32-bit SQL Server 2000, the cursor occupies memory from the pool used for locks, cached procedure plans and user connections, which can increase memory pressure in that space. This is not a problem in SQL Server 2005 or in 64-bit SQL Server 2000.

Speed and performance issues

Cursors can be faster than a WHILE loop, but they carry more overhead. If your cursor will not be updating the base tables, use a FAST_FORWARD cursor for optimal performance. The real problem with cursor speed is that, in many cases, the operation can be written more efficiently as a set operation -- or perhaps as a WHILE loop. It's these cursor rewrites that lead to the impression that cursors are evil -- or cursed.
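To make the comparison concrete, here is a sketch of the same row-by-row price update written both ways. The table and column names (dbo.Products, ProductID, UnitPrice, Discontinued) are hypothetical, used only for illustration:

```sql
-- Row-by-row with a cursor: one UPDATE round trip per row.
DECLARE @ID int;

DECLARE price_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT ProductID FROM dbo.Products WHERE Discontinued = 0;

OPEN price_cur;
FETCH NEXT FROM price_cur INTO @ID;

WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.Products
    SET UnitPrice = UnitPrice * 1.10
    WHERE ProductID = @ID;

    FETCH NEXT FROM price_cur INTO @ID;
END

CLOSE price_cur;
DEALLOCATE price_cur;

-- The equivalent set operation: one statement, one pass.
UPDATE dbo.Products
SET UnitPrice = UnitPrice * 1.10
WHERE Discontinued = 0;
```

The set-based form lets the optimizer choose the access path and touch each row once, which is typically far cheaper than looping.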

Another factor affecting cursor speed is the number of rows and columns brought into the cursor. Time how long your OPEN and FETCH statements take. If they are lengthy, look carefully at your cursor logic: see if you can remove columns from the DECLARE statement, and tighten the WHERE clause in the DECLARE statement so it returns only the rows the cursor needs. If the FETCH statements themselves are slow or consume too much I/O or CPU, revisit the cursor's DECLARE statement and ensure you have optimal indexes in place on your base tables or temporary tables.
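A sketch of what a tightened DECLARE looks like, using a hypothetical dbo.Orders table -- the column list carries only what the loop uses, and the filter runs up front rather than inside the loop:

```sql
-- Narrow column list (not SELECT *) and a filtering WHERE clause
-- keep the cursor's working set small.
DECLARE order_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT OrderID               -- only the column the loop references
    FROM dbo.Orders
    WHERE Status = 'Pending';    -- only the rows the cursor needs

-- An index supporting the predicate keeps OPEN and FETCH cheap, e.g.:
-- CREATE INDEX IX_Orders_Status ON dbo.Orders (Status) INCLUDE (OrderID);
```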

Wrong tool for the wrong task

Cursors are frequently the wrong tool for the task at hand. They're reached for as quick-and-dirty programming when a developer lacks a good understanding of set operations -- or they're applied to work that doesn't belong on the server at all.

For example, an operation is sometimes best done client side rather than server side. ADO supported server-side cursors; code that required a read-only view of data used static server-side cursors. ADO.NET instead uses the DataReader or the DataAdapter, which operate client side. A DataAdapter grabs a result set, brings the data client side and then typically disconnects from the server immediately, resulting in greater performance and scalability. Similarly, if your requirements call for a snapshot of the data and you don't need a window into real-time updates, use ADO.NET's DataReader to blast the data back to your client and cache it there. That way the client can page through the result set on the Web page or Web server, as opposed to paging through it on the SQL Server and consuming server resources.

Before you use a cursor, evaluate how the data will be consumed. Sometimes business cases can be made to have the data manipulated on the middle tier as opposed to the data tier.

Subtle errors

Cursors sometimes introduce subtle errors. We already looked at a few:

  • Failing to check the value of @@FETCH_STATUS
  • Improper indexes on the base tables referenced in your result set or FETCH statement
  • Too many columns dragged around in memory that are never referenced in the subsequent cursor operations (probably the result of legacy code)
  • A WHERE clause that brings too many rows into the cursor, only to have them filtered out by the cursor logic
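The first pitfall is the most common, so here is the canonical loop shape for reference -- a minimal sketch with a hypothetical dbo.Orders table, showing @@FETCH_STATUS checked after every fetch:

```sql
DECLARE @OrderID int;

DECLARE order_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT OrderID FROM dbo.Orders;

OPEN order_cur;
FETCH NEXT FROM order_cur INTO @OrderID;    -- prime the loop

WHILE @@FETCH_STATUS = 0    -- 0 = success; -1 = past last row; -2 = row missing
BEGIN
    -- per-row work with @OrderID goes here

    FETCH NEXT FROM order_cur INTO @OrderID;
END

CLOSE order_cur;
DEALLOCATE order_cur;
```

Skipping the @@FETCH_STATUS check (or testing it before the first fetch) processes the last row twice or works with a stale variable value.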

However, there are also subtle errors that cursors introduce themselves. For example, the Halloween problem illustrated how a cursor update operation can change the order of rows in the underlying base table, so the same rows are retrieved into the cursor result set repeatedly and updated multiple times.


Hilary Cotter has been involved in IT for more than 20 years as a Web and database consultant. Microsoft first awarded Cotter the Microsoft SQL Server MVP award in 2001. Cotter received his bachelor of applied science degree in mechanical engineering from the University of Toronto and studied economics at the University of Calgary and computer science at UC Berkeley. He is the author of a book on SQL Server transactional replication and is currently working on books on merge replication and Microsoft search technologies.
