
Using SQL Profiler for Analysis Services 2005

Baya Pavliashvili, Contributor

 
Those of us who have used SQL Server Profiler acknowledge it as one of the best tools for troubleshooting and tuning relational database applications. If you need to find out what is really happening "under the hood," Profiler is by far the best place to start.


If you want similar functionality with OLAP Services (part of SQL Server 7) or Analysis Services 2000, you're simply out of luck. Fortunately, we're blessed with the ability to use Profiler with Microsoft Analysis Services (MSAS) 2005. In this tip, I'll introduce you to using SQL Profiler to monitor, troubleshoot and optimize your analytical applications.

An introduction to using SQL Profiler with Analysis Services 2005

   Running SQL Profiler
   Templates in SQL Profiler
   SQL Profiler for performance tuning
   Profiler for auditing and server management

 
Running SQL Profiler

You can start a Profiler trace by choosing Start > Programs > Microsoft SQL Server 2005 > Performance Tools > SQL Server Profiler. Then choose File > New Trace and select "Analysis Services" from the server type drop-down box in the resulting dialog box. As you're probably aware, Analysis Services relies on Windows authentication; since this is the only way you can connect to the server, the authentication options in the connection dialog are grayed out.

Since MSAS 2005 supports multiple instances, you need to specify both the server name and the instance name, unless you're connecting to the default instance. Alternatively, you could specify a server name and port number, as in MyServer:MSASPortNumber.
 
Templates in SQL Profiler

Each Profiler trace is defined by the collection of events it will record. The total data set collected by Profiler on a busy server can be voluminous. To focus on a particular subset of the data, you can limit the events and columns to be collected. MSAS 2005 comes with three Profiler templates: pre-defined combinations of events, columns and filters to help you get started.

You don't have to use all the events supplied with a template, and you can add your own filters to further reduce the output. But templates are great learning tools for those who haven't had much experience with the tool and need a quick start. If you're already familiar with Analysis Services events and know which ones you'll typically want to monitor, you can either edit the existing templates or create new templates for future use.

Trace templates have the extension TDF; the default trace definitions are stored in the SQL Server installation folder under Microsoft SQL Server\90\Tools\Profiler\Templates\Microsoft Analysis Services\90. Templates you create yourself, however, are stored under the Documents and Settings\<user name>\Application Data\Microsoft\SQL Profiler\9.0\Templates\Microsoft Analysis Services\90 folder. These files aren't meant to be modified with any tool other than Profiler. However, if you find your trace template particularly useful, you could copy it to other servers that you or one of your colleagues manage.

I won't discuss every single column you can monitor with MSAS 2005 because that would make this tip too lengthy; besides, Profiler gives a brief description of each column, so you can learn by reading the definitions. I will, however, discuss the event categories available within Profiler as appropriate.
 
SQL Profiler for performance tuning

Cube performance is generally evaluated from two perspectives:

  • The time it takes to execute a common set of MDX queries
  • The time it takes to process dimensions and partitions

As an Analysis Services architect, you're probably most concerned with MDX query performance. However, while the cube is being processed, MDX query performance will suffer and fresh data won't be available for querying. When an MDX query is submitted, Analysis Services can resolve it in one of three ways:

  1. Obtain query results from cache. MSAS 2005 has a fairly sophisticated caching mechanism that allows it to reuse calculation results within the same query, across multiple queries on the same connection or across multiple connections. You can learn more about caching options from this performance tuning white paper.
  2. Obtain query results by querying aggregations. Of course, to exploit this option MSAS must have a set of aggregations useful for the current query.
  3. Obtain query results from the appropriate measure group(s).

As you might guess, the final option is the most expensive operation. You should examine the Profiler output to ensure that the majority of your queries are resolved from cache or from aggregations. To do so, you need to examine the following events under the Query Processing category (in fact, the majority of events developers should be concerned with appear under this category):

  • Get Data From Aggregation
  • Get Data From Cache

Unfortunately, there are no hard and fast rules that would let you predict when MSAS will or won't use aggregations, or whether it will obtain query results from cache. However, if you observe the Profiler output and see that aggregations are not being used, you should examine your aggregation strategy.
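
To see this in practice, you might capture a trace with the Get Data From Aggregation and Get Data From Cache events selected and run a simple query such as the sketch below; the cube, dimension and measure names here are purely illustrative assumptions and will differ on your server.

    -- Illustrative MDX query; [Adventure Works] and the object names are assumptions
    SELECT
        { [Measures].[Sales Amount] } ON COLUMNS,
        [Date].[Calendar Year].Members ON ROWS
    FROM [Adventure Works];

Run the query twice on the same connection: on the first execution you would expect either Get Data From Aggregation events (if a suitable aggregation exists) or reads from the measure group partitions, while the second execution should typically be satisfied from cache and show Get Data From Cache instead.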

Query Begin and Query End events tell you, of course, when a particular query starts and ends. Command Begin and Command End do the same thing, except they are triggered for any XMLA command, not just MDX queries.

Another very useful set of events related to MDX query performance shows how MSAS obtains results for the NON EMPTY construct. Cube data is naturally very sparse because not every customer shops in every store and buys every product every day. Therefore, a typical query is likely to return empty members unless you limit the output by using the NON EMPTY keyword or the NONEMPTY function (the NONEMPTYCROSSJOIN function is still supported but is deprecated in MSAS 2005). Unfortunately, removing empty tuples from a query is an expensive operation and can cause significant performance issues. If you suspect there are issues related to NON EMPTY constructs, you should monitor the following events:

  • Calculate Non Empty Current
  • Calculate Non Empty Begin
  • Calculate Non Empty End

Of course, simply monitoring these events won't help by itself. There are a few options for optimizing queries that contain NON EMPTY. First, CROSSJOIN operations are inherently slow, even if you don't fish out empty tuples; always use the NONEMPTY function as opposed to the NONEMPTYCROSSJOIN construct. Second, if you have calculated members, be sure to define the non empty behavior property. This property advises MSAS to examine the values of a measure (or set of measures) instead of deriving the calculated members and then checking whether each one has a non-empty value.
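
As a rough sketch of both techniques (all cube, dimension and measure names here are illustrative assumptions, not objects referenced in this tip), the first statement defines a calculated member whose non empty behavior is tied to an underlying measure, and the query that follows uses the NON EMPTY keyword and the NONEMPTY function instead of the deprecated NONEMPTYCROSSJOIN:

    -- Calculated member whose emptiness is tied to a regular measure
    CREATE MEMBER CURRENTCUBE.[Measures].[Profit Margin] AS
        ( [Measures].[Sales Amount] - [Measures].[Total Product Cost] )
            / [Measures].[Sales Amount],
        FORMAT_STRING = "Percent",
        NON_EMPTY_BEHAVIOR = { [Measures].[Sales Amount] };

    -- Remove empty tuples with NON EMPTY and NONEMPTY rather than NONEMPTYCROSSJOIN
    SELECT
        NON EMPTY { [Measures].[Sales Amount], [Measures].[Profit Margin] } ON COLUMNS,
        NONEMPTY( [Product].[Category].[Category].Members,
                  [Measures].[Sales Amount] ) ON ROWS
    FROM [Adventure Works];

With the NON_EMPTY_BEHAVIOR property in place, MSAS can treat [Profit Margin] as empty wherever [Sales Amount] is empty, instead of evaluating the formula for every cell only to discover that the result is empty.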

Calculated members offer a lot of flexibility for specifying a formula that will be applied to each row in your fact table. However, applying heavy calculation logic to a large number of cells can get quite expensive, depending on the number of affected cells. In some cases, you can tell Analysis Services that the calculated member should be considered empty if one of the underlying regular measures is empty. Other calculated members are complex and you cannot define non empty behavior for them. It is often more beneficial to implement such calculations as regular measures.

Other useful events under Query Processing category include:

  • Execute MDX Script Current
  • Execute MDX Script Begin
  • Execute MDX Script End

Not surprisingly, these events allow you to monitor execution of the MDX script during cube processing. As with other programming languages, the same operation can often be implemented through multiple different expressions. If you find a poorly performing query (or MDX script), you can attempt to rewrite it using other constructs for speed, as in the sketch that follows.
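
As a minimal illustration (the customer dimension and measure names are assumptions), the following two queries return the same rows, but the NONEMPTY form can typically be resolved in bulk by the storage engine, whereas the FILTER form evaluates its condition cell by cell and is often noticeably slower on large dimensions:

    -- Slower form: FILTER checks every customer individually
    SELECT { [Measures].[Sales Amount] } ON COLUMNS,
        FILTER( [Customer].[Customer].[Customer].Members,
                NOT ISEMPTY( [Measures].[Sales Amount] ) ) ON ROWS
    FROM [Adventure Works];

    -- Usually faster form: NONEMPTY removes empty tuples in one pass
    SELECT { [Measures].[Sales Amount] } ON COLUMNS,
        NONEMPTY( [Customer].[Customer].[Customer].Members,
                  [Measures].[Sales Amount] ) ON ROWS
    FROM [Adventure Works];

Comparing the Query Begin and Query End timestamps (and the events fired in between) for each version in Profiler is a quick way to confirm which rewrite actually helps.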

Events in the Locks category allow you to monitor deadlock and lock timeout events. Since MSAS data is predominantly read-only, you might think you should never encounter a deadlock. However, deadlocks can occur if a user is querying cube metadata (read lock) while a developer is trying to deploy structural changes to the cube (write lock). The Error category displays errors in MDX queries and can be useful for troubleshooting failed queries.

The Query Subcube and Query Subcube Verbose events are triggered when MDX statements are recorded in the query log. The query log is subsequently used for usage-based optimization (and, in rare cases, for auditing).
 
Profiler for auditing and server management

Analysis Services administrators might want to know who is (and perhaps who isn't) using their databases. Profiler has the following events to show you who is currently connected to your server:

  • Existing Connection
  • Existing Session
  • Session Initialize

Another set of events, called "security audit," allows a finer level of auditing. It shows you when a database is archived or restored, when security settings are modified and when the server is started or stopped.

Notification events are triggered by activity that isn't explicitly initiated by any user. These events are only useful if you're using the proactive caching feature.

Discover events get executed each time a user queries server metadata. If you choose events under the Discover category and try to browse any cube using SQL Server Management Studio, you'll see a multitude of these events being recorded. Before you can browse the data, SQL Server must retrieve data about cubes, dimensions, hierarchies, measure groups and so forth. Discover Server State events collect data about current connections, sessions, transactions, locks, and processing and querying jobs.

Events in the Progress Report category track the progress of processing and querying jobs. Reviewing these events can be helpful in understanding the steps that the server must take internally when you execute different sorts of processing requests.

Flight Recorder is a Profiler trace that is turned on by default and tracks server activity in the background. This functionality is awesome for troubleshooting problems after they take place, because you can often reproduce the problem by replaying the Flight Recorder trace. There is a slight catch with this utility, though. By default, it only keeps 10MB of data before "rolling over": when the file reaches 10MB, the current file is renamed to FlightRecorderBack.trc and a new trace file is created. At that time, the previous historical file is deleted, so you can only review 20MB of historical data at any time. As you might imagine, on a busy server you could accumulate 20MB of trace events within just a few minutes. Fortunately, you can alter Analysis Services properties to retain more historical trace events. You can also alter the trace template used to collect the Flight Recorder trace as needed. Flight Recorder trace files are stored in the MSAS installation directory under the \LOG folder.
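
As a rough pointer (these property names are recalled from the server's advanced properties rather than taken from this tip, so verify them on your own instance), the relevant settings appear in SQL Server Management Studio under Analysis Server Properties when advanced properties are shown, and include entries such as:

    Log \ FlightRecorder \ Enabled
    Log \ FlightRecorder \ FileSizeMb
    Log \ FlightRecorder \ LogDurationSec
    Log \ FlightRecorder \ TraceDefinitionFile

Raising FileSizeMb (and, if present, LogDurationSec) lets the server keep more history before rolling over, while TraceDefinitionFile points to the template that controls which events Flight Recorder captures.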

In summary, I discussed how you can use Profiler to monitor, troubleshoot and optimize your analytical applications. Of course, a tool as rich as Profiler cannot be easily covered in a single tip. If you're serious about utilizing Analysis Services, be sure to play with this tool and learn its functionality inside and out.

ABOUT THE AUTHOR
Baya Pavliashvili is a DBA manager overseeing database operations that support more than one million users. Pavliashvili's primary areas of expertise are performance tuning, replication and data warehousing.

This was first published in May 2007
