
Analysis Services Partition Creation TABULAR and MOLAP

Something I’ve noticed over the past few years is that many people are unsure how many partitions to create within Analysis Services. That in itself is a long conversation, but it should be a balance between:

  • What is a logical sub division of the data
  • How much parallelism you would like when loading the cube (of course not yet an option in the case of TABULAR cubes)
  • How many partitions a query is likely to hit
  • How long it actually takes to create/alter/delete them.

I believe people often take the first point into account, but give less consideration to the others as the list goes on. To this end, I’ve run a few tests against MOLAP and Tabular Adventure Works cubes to record the timings for creating partitions through AMO and also through XMLA, and have a few interesting observations.

UPDATE: For the purposes of this post I reused some existing code I had, so the single partitions were created in AMO and the batches in XMLA.


  1. Create up to 1000 partitions using AMO
  2. Create 50, 100, 200, 400, 600, 800, 1000 partitions in batches using XMLA

Test 1

As mentioned already, tests were carried out on the TABULAR and MOLAP versions of the Adventure Works cube. This test was coded as a C# script running in a loop. A connection was opened and maintained, and then up to 1,000 individual partitions were created.

Between each test all partitions were removed so the starting point was always the same.
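For reference, the single-partition loop looked roughly like the sketch below. This is a minimal AMO sketch, not the exact test code: the connection string, database/cube/measure group IDs, and the `PartitionKey` query filter are placeholders, and the real test also logged the elapsed time of each creation.

```csharp
using System.Diagnostics;
using Microsoft.AnalysisServices; // AMO

Server svr = new Server();
svr.Connect("Data Source=localhost"); // placeholder connection string

// placeholder IDs from the Adventure Works sample
MeasureGroup mg = svr.Databases["Adventure Works DW"]
                     .Cubes["Adventure Works"]
                     .MeasureGroups["Fact Internet Sales"];

Stopwatch sw = new Stopwatch();
for (int i = 1; i <= 1000; i++)
{
    Partition p = mg.Partitions.Add("Partition_" + i);
    p.Source = new QueryBinding("Adventure Works DW",
        "SELECT * FROM FactInternetSales WHERE PartitionKey = " + i);

    sw.Restart();
    p.Update(); // sends the Create to the server and commits immediately
    sw.Stop();
    // record sw.ElapsedMilliseconds against partition i
}
```

Because each `.Update()` is its own command and commit, the per-partition cost includes the full commit phase every time, which is exactly what this test measures.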

Below are the results of the test plotted to compare the TABULAR creation times versus the MOLAP creation times.


As can be seen, the 1000th TABULAR partition took 3101ms to create whereas the 1000th MOLAP partition took 297ms to create.  It is very clear that creating the partitions in TABULAR is far more expensive than in MOLAP.


Another perspective on these results is to look at the cumulative time.


So, the cumulative time to create 1000 TABULAR partitions is 1643 seconds (27 minutes and 23 seconds) and in MOLAP it took 221 seconds (3 minutes and 41 seconds).

The TABULAR timing is obviously a very long time, but given that a major advantage MOLAP partitioning has over TABULAR is parallel processing, we probably wouldn’t consider creating so many to begin with.  Having said that, the MOLAP time isn’t particularly good either.

Test 2

For this test, I again used a C# script task in SSIS, this time to build up various-sized XMLA strings to batch-create a number of partitions.  Again, all partitions that were created were removed before the next batch.
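For reference, a batch of partition creations in XMLA looks roughly like the skeleton below. This is an abbreviated sketch: the database, cube, and measure group IDs follow the Adventure Works sample, and the partition IDs and query definition are placeholders. The point of the `Batch` element is that all the `Create` commands share a single transaction and commit.

```xml
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine"
       Transaction="true">
  <Create AllowOverwrite="true">
    <ParentObject>
      <DatabaseID>Adventure Works DW</DatabaseID>
      <CubeID>Adventure Works</CubeID>
      <MeasureGroupID>Fact Internet Sales</MeasureGroupID>
    </ParentObject>
    <ObjectDefinition>
      <Partition xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
        <ID>Partition_001</ID>
        <Name>Partition_001</Name>
        <Source xsi:type="QueryBinding">
          <DataSourceID>Adventure Works DW</DataSourceID>
          <QueryDefinition>
            SELECT * FROM FactInternetSales WHERE PartitionKey = 1
          </QueryDefinition>
        </Source>
      </Partition>
    </ObjectDefinition>
  </Create>
  <!-- repeat one Create element per partition: Partition_002, ... -->
</Batch>
```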

Below is a chart representing the timings for each of the batches in seconds.



For alternative consumption, here is a table of the data behind the chart above.



So even using XMLA batches, MOLAP is considerably faster.  For those of you who suffered the pain of having to create many partitions back in SSAS 2005, where the algorithm’s timing was considerably worse, the TABULAR partition creation times will seem all too familiar.  It seems that we aren’t yet seeing in 2012 TABULAR the benefits of those changes that we currently see in MOLAP.

Taking a closer look at the difference in time between creating the TABULAR and MOLAP partitions, it seems that when creating a TABULAR partition there is an additional event, “Metadata Load”, which consumes some time; but the main difference is during the commit phase, where adjustments are made to every existing partition in that measure group (table).


UPDATE: Thanks to Gerhard for the reminder that it is also possible to do a batch transaction in AMO.  This is just as fast as creating the XMLA, but I think a lot easier to read. I won’t paste all of the AMO code to do this, but a snippet is below.

Server svr = new Server();
svr.Connect("Data Source=localhost"); // connect to the server as usual
svr.CaptureXml = true; // capture the XMLA rather than executing it immediately

// now create all partitions as usual, including the .Update() calls;
// with CaptureXml = true these are logged instead of sent to the server

// when all done we execute the captured log against the server connection;
// the second parameter MUST be false, i.e. not parallel, for creating partitions
XmlaResultCollection executeResults = svr.ExecuteCaptureLog(true, false);

foreach (XmlaResult result in executeResults)
{
    foreach (XmlaMessage message in result.Messages)
    {
        if (message is XmlaError)
        {
            // do something with this error, probably flag the failure
        }
    }
}



If you need to create multiple partitions, do it in a batch, whether in XMLA or via an AMO capture log.

TABULAR is considerably slower than MOLAP at present, but I expect this will change when partitioning becomes more useful in TABULAR.

There are many advantages to partitioning, but there are also costs, and the metadata cost with both TABULAR and MOLAP is a significant one that can certainly cause things to slow down, especially in the COMMIT phase!



  1. February 7, 2014 at 11:23 am

    using AMO and setting/using server.CaptureXml = true; and respectively server.ExecuteCaptureLog(true, true);
    should also create all partitions within one batch thereby resulting in only one COMMIT being executed
    have you tested this also? – results should be similar to your XMLA results
    from my point of view AMO is still way much more convenient to handle (and automatize) than XMLA

    • February 7, 2014 at 11:51 am

      Thanks very much for the comment. You are completely correct. It is possible to execute as a batch in AMO, and it is just as fast as an XMLA batch. I will update the post to clarify this now, with some of the code for doing it in AMO. You are also correct in AMO being much easier 🙂
      I had actually just reused some existing code I already had for the testing hence using AMO for single partitions vs XMLA for the batch.

