Oct 31 2009

This year I’ll be volunteering at PASS Summit. The Birds-of-a-Feather (BOF) Lunch is one of the events you can find me at. Whether you want to discuss MDX or just meet me in person, search among the 50 tables and join me for a chat. The event takes place on Tuesday, 3rd of November, between 11:45 and 12:45 (Expo Hall 4B).

Additionally, the company I work for, SoftPro Tetral, is one of the exhibitors. If you’re interested in seeing our CubePlayer in action, or just want to have a chat with whoever is there at that moment (me included), feel free to drop by, either during breaks or during the Exhibitors Reception (Tuesday, 3rd of November, 6:00pm-8:00pm, Exhibit Hall 4A). We’re located in the bottom-right corner of the map.

Oct 29 2009

On 2nd of September, I watched a marathon of SQL Server sessions named “24 Hours of PASS” – at least one small part of it. One session that made me think was Steve Jones’ Building a Better Blog. I wanted to learn a few tricks about blogging in order to make mine a better place. What surprised me was Steve’s advice to blog about everything you do, in order to show others what topics you have covered so far, regardless of whether someone else has covered them before. Now, I simply couldn’t agree with that. For me, writing a blog article, and by that I don’t mean a short announcement or similar, means writing about something others haven’t done so far or, at least, not in that way. It’s about creating something new, a challenge to put ideas into practice and demonstrate them in public. Otherwise, what’s the point of duplicate content? Then again, maybe I just don’t understand the point of having a blog.

And so I thought to myself, surely there must be several types of bloggers. Some, for instance, act as messengers, concentrating mostly on what’s new and what’s considered best practice. News and knowledge aggregators, we might call them. Then there are occasional practitioners, people who don’t blog often and rarely about news. When they blog, it’s mostly about something they’ve done recently and found interesting enough to share. Next, there are regular bloggers, who post weekly if not more often, regardless of topic, just to keep the pace. And so on. Which makes me think. Could we perform a real-life analysis on this topic? Could we analyze bloggers writing about Business Intelligence? Why not? I know a good place where this kind of information can be obtained. Our precious SSAS-Info.com portal, hosted by Vidas Matelis.

STARTING THE PROJECT

Although the strategic decision to go ahead with this project was made the moment the idea came to me, the tactical decision followed a month later.

I mean, I saw this right from the start as a community project, this being merely its initial phase. A project where we could all participate, explore possibilities, share the findings and the project’s know-how, and have fun at the same time. Because this time it would not be about Northwinds, Foodmarts or AdventureWorks; this time it would be about us. Just imagine, us being inside a cube. Not as analysts, but as the ones being analyzed for a change. Feels weird, yet tempting.

Considering all the circumstances, it had to be done before PASS Summit 2009, because that’s a great place to gather (at least some of us will be there) and discuss the findings and further possibilities. Therefore, upon receiving an emergency note from tactical management, the operational team had to act fast. And so I did.

I contacted Vidas, who kindly sent me a CSV dump of his site: one big table containing all the records – articles, news, FAQ, etc. Here’s a high five for cooperativeness!

CSV - 1st export

Since my virtual environment wasn’t ready at the moment, I started in Excel 2007 and managed to parse, clean, distill and consolidate the data using Excel functions, formulas and the copy-paste technique. Like a real BA :-). I also transposed some data (tags) into an M2M model using Query. Details from that experience in some other article, perhaps. I’m glad I did it in the end. Does that say enough?

Before going any further, I sent it back to Vidas. And I recommend this as good practice for future users of PowerPivot – once they get what they want, loop back to IT so the data gets cleaned/consolidated at the very source. Make no mistake, data will always be dirty. Yes, I know, it’s an extra step, but that way the data becomes better and better for analysis. Love thy brother thou Gemini.

He gladly cleaned it following the findings I included. By then my virtual environment was ready, so I moved to Excel 2010 and the Gemini add-in.

BI bloggers - source

I couldn’t build a model directly from the same Excel workbook holding the tables for analysis; therefore, I saved that workbook and used it from a new one as an ODBC data source.

Connecting went smoothly. I couldn’t see the data, just headers, but I assumed it had something to do with this still being a CTP (and perhaps something else). I just ignored that, returned to Excel, added a new pivot, filled it with some fields, and the data was there. It works!

PERFORMING THE ANALYSIS

I made a Four Charts view (under the Pivot dropdown) and started to search for interesting and meaningful results (and also the ones where I’m in the chart yet it doesn’t look suspicious :-) ). Finally, I settled on this. What do you think?

Gemini analysis of BI bloggers

I connected the orange Section slicer to the charts on the left (colored orange too). That slicer separates the different parts of the SSAS-Info site, articles being the most fun part of it for analysis (hence selected). The upper radar graph shows the quantity of articles per category. The three most blogged-about categories (as seen in the radar graph) are: MDX, Management and Design. The bottom bar graph shows the Top 10 bloggers.

The blue slicer, Category, represents the topics being blogged about. The top 3 are selected manually so that the blue stacked bar graph shows their total as well as their share for each of the Top 10 bloggers in those topics. As you can see, almost all of the Top 10 bloggers are among the Top 10 bloggers in these categories too. Vidas Matelis and Teo Lachev focus their attention primarily on other subjects, while Microsoft and I blog above average in the top 3 categories. The blue graph is linked to both the orange and blue slicers and supports multi-select quite beautifully, thanks to the stacked bar graph being used here.

The green pie graph is linked to the green Author slicer (and the orange slicer too). Here multi-select is not recommended, because this graph is not designed for that. However, it shows the distribution of categories for an author. Chris Webb, seen in this beautiful graph, appears as a true multi-vitamin for consultancy because of the wide variety of interests he blogs about and their nice distribution.

Which brings us to my initial dilemma. Using this graph it is relatively easy to analyze how versatile one chooses to be in one’s blog (judging by the categories one decides to write about). Mosha Pasumansky, I, and many other BI bloggers are inclined toward one or two areas. Partially because most of us have under 10 posts, which is a relatively small number compared to the number of categories.

The same analysis can be done in the radar graph, either by hooking the orange radar graph to the Author slicer, or by changing the bottom-right graph type from pie to radar, since it is already connected as required. That shows us that some BI bloggers, like Carl Rabeler and Dave Fackler, have a square in the radar graph (a focus on 4 categories). Many Microsoft employees have a more or less wide wedge, with SSAS 2008 being the longest side. Other bloggers have triangles, etc.

Chris Webb, Vidas Matelis, Thomas Ivarsson, Dan English and others show a more balanced pie covering many categories. And so on. And so on.

Btw, having so many celebrities in my article (without them even knowing it), I feel like Michael Jackson shooting the Liberian Girl video. And I apologize in advance for not mentioning all of you (~150). Don’t worry, I have plenty of ideas for the sequel 😉

DRAWING THE CONCLUSION

Having seen many of these charts, I conclude that my assumption from the beginning of this article is true. Patterns do exist. Therefore, they should be analyzed.

My comparison with vitamins might not be bad at all. I mean, we both help society. Next, each of us targets one or more areas. Finally, each of us, like vitamins from different vendors, gets absorbed and perceived more or less well – that is, what we write about and how we write about it. Yet, unlike vitamins, we have the option to evolve.

My (not first, but second) impression regarding PowerPivot is that it is envisioned as a very pleasant analytical environment, with slicers as its most impressive component.

Although the same analysis could have been done in Excel 2007 or in SSAS 200*, I chose Gemini because I participate in the CTP program, hadn’t posted anything about it so far, and wanted to learn. And what better way than practicing by setting up a task and then fulfilling it. Incrementally, this being the first step.

As far as in-memory cube generation is concerned, that part is still to come. I made some tests, but that’s not it yet. A careful reader will remember that my standards prevent me from posting unless it is something valuable or unique. In CTP3, perhaps.

WHERE TO NEXT?

This experiment showed interesting results, at least to me. And it pointed out one important thing – data is everywhere! Yet it’s not being analyzed, just collected. Opportunities are numerous. Business Intelligence, at least its technology part, enables everyone to gain fast insight into data, encouraging the creative process which then leads to better decision-making and actions.

Opportunities are numerous here too. This was only a small portion of what can be done. I see this project as a potentially flourishing area for the whole community, where each of us could cover our own area of interest. Btw, this is just one of the ideas I have in mind, and I intend to start the others when the time comes.

For example, this data is static (captured a week ago). One possible project would make it come alive, enabling real-time (or daily) analysis (although weekly could be fine too).

The next one is Time Analysis. Records in the database behind the SSAS-Info portal don’t carry the original article’s date, but rather the date/time of when the item was manually entered into the database. I’ve omitted that in this example; I might make another one soon. Now, that difference means we have an uncertainty of a couple of days for more recent items, and an unpredictable offset for older items. The original dates could be collected and merged into the DW.

Then there are numerous data-mining possibilities. For example, clustering authors in order to obtain groups based on categories or tags. We could also classify authors as nighthawks or daylight writers. Or by the regularity of their posts (regular, occasional), day-of-week affinity, etc.

This could also be a nice case for data-cleaning experts to show their strength and ideas in that field, in Excel as well as on the SQL Server platform. And perhaps by comparing the good and bad sides of each path.

Finally, there are M2M challenges not to be neglected. All together, a nice playground for the upcoming CTP3.

Yes, I know what you’re thinking. OK, fine, no problem. Feedback would, however, be very much appreciated.

If, on the other hand, you are interested in participating in this project, Vidas and I have no objection to you using the raw or prepared data, which btw can be found below.

Feel free to contact us regarding cooperation, advice, anything.

And again, don’t be shy, give a thumb up or down, feedback, anything. Let me know what you’re thinking. That way I can make another analysis. Just kidding :-)

PS: A small notice to the Redmond team – the same idea could be applied to the MSDN forums, extending the text-mining of tags and posts recently introduced there, if I’m not mistaken.

Oct 28 2009

Tomorrow, on 29th of October, I’ll be holding an unplugged session named “OLAP” for the local SQL&DEV user group. The term “unplugged” is literal – there will be no laptops, slides or demos. Just a (hopefully) good intro and discussion, with a few key points to take away at the end. All together, an hour on a specific subject, following the practice of last month’s gathering.

The event will take place in Microsoft’s building in Zagreb (Croatia) at 6 PM.

Besides the regular monthly newsletter, there’s also an announcement on BUG Online (the web portal of a local IT magazine) that I’ve managed to find regarding this event.

Tomorrow, 29th of October, I’ll hold an unplugged lecture on the topic of OLAP for the SQL&DEV user group. The “unplugged” part is literal – no laptops, no slides, no demos. Just, I hope, a good introduction, a discussion and, of course, a few highlighted points to remember. So, one hour on a chosen topic, following the good practice of last month’s gathering.

The event starts at 6 PM at the offices of Microsoft Croatia, Turinina 3, Zagreb.

Besides the regular monthly e-newsletter, there is also an announcement about this event on the Bug Online portal.

Oct 25 2009

Creating a bit array (also known as a bitmap) is a useful technique when analyzing data, especially (but not necessarily) in the case of low cardinality (a small number of distinct values in a column compared to the number of records in the table). For example, a column having two distinct values (male/female or yes/no status) could be optimized in storage and represented using the values 0 and 1 only (bits).

A collection of such bits forms a bit array, or a bitmap. Bitmap indexes use those arrays, performing bitwise operations on them.

Almost all DSS and analytical systems use bitmap indexes, and so does Analysis Services. One place where this can be seen is SQL Server Profiler. The Query Subcube event represents, as a bitmap, the hierarchies and attributes used in a query. Yet what’s fine for computers might not be for humans, hence there’s also the Query Subcube Verbose event, which explains the same in plain English. Kind of.

Although the SSAS engine uses them intensively, bitwise operations in MDX are in short supply. In spite of that, the same idea can be borrowed and turned into action using string functions. After all, an alternate name for a bit array is bitstring.
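
To illustrate before diving into the case study, here’s a minimal sketch against the Adventure Works sample cube (the hierarchy names are only an example): a bitstring is assembled with plain string concatenation, one character per hierarchy.

WITH
-- one character per hierarchy: 1 when the current member is the root
-- (All) member, 0 otherwise; the unary minus turns True (-1) into 1
MEMBER [Measures].[Bitstring] AS
    CStr( -( [Product].[Product Model Lines]
             .CurrentMember.Level.Ordinal = 0 ) ) +
    CStr( -( [Date].[Calendar]
             .CurrentMember.Level.Ordinal = 0 ) )
SELECT
    { [Measures].[Bitstring] } ON 0,
    { [Product].[Product Model Lines].DefaultMember } +
    Head( [Product].[Product Model Lines].[Model].MEMBERS, 2 ) ON 1
FROM
    [Adventure Works]

The All Products row returns “11” (both hierarchies on their root member), while the model rows return “01” – the same encoding the case study below builds on.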

Recently I participated in a case where a measure was expected to return the percentage of parent no matter which hierarchy is placed on an axis and no matter how many of them there are. Actually, a similar problem appeared on the MSDN SSAS forum a year ago too. While the initial problem in that latter case was solved with a special-purpose solution, general solutions emerged as well. The offers ranged from heavy string manipulation to recursion and a stored procedure. Looking at it now, from the perspective of this case, I believe bitmap indexes might have been an alternate MDX solution: a relatively fast approach that doesn’t require anything but a query and a few calculated members aside.

Here’s a case study featuring the bitmap index as the key point in obtaining the general solution.

SITUATION

The idea was to have a general solution for percentage (ratio to parent): one calculated measure covering all scenarios, meaning whatever combination of hierarchies appears in OWC (the cube browser).
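
For contrast, here’s the classic single-hierarchy ratio-to-parent measure, sketched against Adventure Works (the names are only an example). It works only when the hierarchy is known in advance – precisely the limitation the general solution below removes.

WITH
-- hard-wired to one known hierarchy: this is what stops working the
-- moment a different combination of hierarchies is dropped into OWC
MEMBER [Measures].[Ratio To Parent] AS
    iif( ( [Product].[Product Model Lines].CurrentMember.Parent,
           [Measures].[Sales Amount] ) = 0,
         null,
         [Measures].[Sales Amount] /
         ( [Product].[Product Model Lines].CurrentMember.Parent,
           [Measures].[Sales Amount] ) ),
    FORMAT_STRING = "percent"
SELECT
    { [Measures].[Sales Amount], [Measures].[Ratio To Parent] } ON 0,
    { [Product].[Product Model Lines].[Model].MEMBERS } ON 1
FROM
    [Adventure Works]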

The problem that soon appeared was this – no matter how much I improved the calculation in each iteration I posted on the forum (see a previous thread and my blog article on that subject), there was always a case where it didn’t fit. Therefore, I started focusing on getting as detailed an explanation as possible. When I finally got it (as an Excel sheet) and analyzed it, I deduced the algorithm and described it like this (as in integrams):

  • if there is a non-root member in the rightmost hierarchy, show the ratio to parent
  • a root member in the rightmost hierarchy combined with all non-root members from the other hierarchies to the left should result in 100%
  • if all hierarchies are on their root member, the result should be 100%
  • in all other situations, the result should be the ratio to parent for the non-root member

The idea in integrams (I couldn’t find a link about them in English; it’s a type of logic puzzle) is to mark certain events that hold true and others that can never be true. By doing so in iterations (reading the initial text over and over again, every time in a new context), one should be able to deduce the final combination of entities in the puzzle and thereby solve it. The idea here was to list all possible combinations using a bitmap, where 0 represents a non-root member, 1 a root member, and their sequence, well, …the sequence of hierarchies on the axis. Translated, that looks like this (asterisk being any value):

CASE   BITMAP VALUE                   REQUIRED RESULT
**0    0, 2, 4, 6, … (even numbers)   parent of 3rd
001    1                              parent of 3rd (self), special case for 100%, per request
011    3                              parent of 1st
101    5                              parent of 2nd
111    7                              parent of 1st, special case for 100%

Now, see how this looks more like a solvable case? And so it was. The only thing necessary at this stage was to code it using MDX expressions in the most optimized way.

SOLUTION

A general MDX solution for this could be far simpler than the one below. However, the requirement was that it should work in OWC, and OWC has its specifics when it comes to MDX, about which I blogged here. Therefore, I had to handle a lot of additional things, like testing where OWC puts measures, and everything related to that in the later calculations. Nevertheless, I managed to make it work as required.

Create Member CurrentCube.[Measures].[Where are measures in OWC] AS
    case
        when NOT IsError( Extract( Axis(0), Measures ).Count ) then 0
        when NOT IsError( Extract( Axis(1), Measures ).Count ) then 1
        when NOT IsError( Extract( Axis(2), Measures ).Count ) then 2
        else -1 -- should not be possible
    end
    , Visible = 0
;

Create Member CurrentCube.[Measures].[Ratio measure] AS
    -- specify a measure you want to have a ratio for, or
    -- use the default measure, or
    -- use a dynamic expression that selects either
    -- the first measure on an axis or
    -- the default measure in case no measure is selected

    /* -- option 1 */
    [Measures].[Sales Amount]
    /* -- option 2
    [Measures].DefaultMember
    */
    /* -- option 3
    iif( Extract( Axis( [Measures].[Where are measures in OWC] ),
                  Measures ).Item(0).Item(0).UniqueName =
         '[Measures].[Universal ratio %]',
         [Measures].DefaultMember,
         Extract( Axis( [Measures].[Where are measures in OWC] ),
                  Measures ).Item(0)
       )
    */
    , Visible = 0
;

Create Member CurrentCube.[Measures].[Test Empty] AS
    IsEmpty( [Measures].[Ratio measure] )
    , Visible = 0
;

Create Member CurrentCube.[Measures].[Count of columns] AS
    iif( IsError( Axis(1).Count ),
         0,
         iif( [Measures].[Where are measures in OWC] = 1,
              Axis(0).Item(0).Count,
              Axis(1).Item(0).Count )
       )
    , Visible = 0
;

Create Member CurrentCube.[Measures].[Bitmap Index as String] AS
    iif( [Measures].[Where are measures in OWC] = 1,
         Generate( Head( Measures.AllMembers,
                         [Measures].[Count of columns] ) AS L,
                   -( Axis(0).Item(0).Item( L.CurrentOrdinal - 1 )
                      .Hierarchy.CurrentMember.Level.Ordinal = 0 )
                 ),
         Generate( Head( Measures.AllMembers,
                         [Measures].[Count of columns] ) AS L,
                   -( Axis(1).Item(0).Item( L.CurrentOrdinal - 1 )
                      .Hierarchy.CurrentMember.Level.Ordinal = 0 )
                 )
       )
    , Visible = 0
;

Create Member CurrentCube.[Measures].[Reversed Bitmap Index as String] AS
    iif( [Measures].[Where are measures in OWC] = 1,
         Generate( Head( Measures.AllMembers,
                         [Measures].[Count of columns] ) AS L,
                   -( Axis(0).Item(0).Item( [Measures].[Count of columns] -
                                            L.CurrentOrdinal )
                      .Hierarchy.CurrentMember.Level.Ordinal = 0 )
                 ),
         Generate( Head( Measures.AllMembers,
                         [Measures].[Count of columns] ) AS L,
                   -( Axis(1).Item(0).Item( [Measures].[Count of columns] -
                                            L.CurrentOrdinal )
                      .Hierarchy.CurrentMember.Level.Ordinal = 0 )
                 )
       )
    , Visible = 0
;

Create Member CurrentCube.[Measures].[Rightmost non root position] AS
    InStr([Measures].[Reversed Bitmap Index as String], '0')
    , Visible = 0
;

Create Member CurrentCube.[Measures].[Inner 100%] AS
    CInt([Measures].[Bitmap Index as String]) = 1
    , Visible = 0
;

Create Member CurrentCube.[Measures].[Universal ratio %] AS
    iif( [Measures].[Test Empty],
         null,
         iif( [Measures].[Rightmost non root position] = 0 OR
              [Measures].[Inner 100%],
              1,
              iif( [Measures].[Where are measures in OWC] = 1,
                   -- check division by zero
                   iif( ( Axis(0).Item(0)
                          .Item( [Measures].[Count of columns] -
                                 [Measures].[Rightmost non root position] )
                          .Hierarchy.CurrentMember.Parent,
                          [Measures].[Ratio measure] ) = 0,
                        0,
                        [Measures].[Ratio measure]
                        /
                        ( Axis(0).Item(0)
                          .Item( [Measures].[Count of columns] -
                                 [Measures].[Rightmost non root position] )
                          .Hierarchy.CurrentMember.Parent,
                          [Measures].[Ratio measure] ) ),
                   -- check division by zero
                   iif( ( Axis(1).Item(0)
                          .Item( [Measures].[Count of columns] -
                                 [Measures].[Rightmost non root position] )
                          .Hierarchy.CurrentMember.Parent,
                          [Measures].[Ratio measure] ) = 0,
                        0,
                        [Measures].[Ratio measure]
                        /
                        ( Axis(1).Item(0)
                          .Item( [Measures].[Count of columns] -
                                 [Measures].[Rightmost non root position] )
                          .Hierarchy.CurrentMember.Parent,
                          [Measures].[Ratio measure] ) )
                 )
            )
       ),
    FORMAT_STRING = "percent"
;

The first measure detects which axis holds the measures when OWC is used.

The second measure is our ratio measure, that is, the measure we want to calculate the ratio for. I provided the syntax for several scenarios. The first one, of course, is the obvious and easiest one – using any measure we want and hard-coding it there. The second option is a more general one – using the cube’s default measure, meaning it could work for any cube. Finally, the third and most complex option is to use the first measure on the axis, whatever it might be. That’s an even more general solution. In that scenario, it is up to the end user to place a measure of his choice as the first measure and see ratios based on that measure. It should work in any cube/report/query. The calculation also handles the case of no measures at all (except the ratio measure itself), in which case the default measure is taken.

Next, we test whether the original ratio measure is empty. We’ll use the result of this in the following calculations, in order to preserve empty rows.

Count of columns gives us the count of hierarchies placed on the axis of interest (that is, rows or columns, depending on where the measures are not).

The next thing we do is generate a bitmap, and right next to it, a reversed bitmap. The innermost logical test returns either -1 or 0 (True or False), while the minus in front of it turns -1 into 1. Looping is performed over a set of measures (which might be a weak point – it is up to the BI designer to provide something better in case there are only a few measures). Which axis is the bitmap formed from? That depends on the measures’ position in OWC. Additionally, in the reversed bitmap, the reversed order is obtained as it is usually done in MDX – using a combination of two MDX functions: Count and CurrentOrdinal.
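
In isolation, the Count/CurrentOrdinal trick looks like this (a standalone sketch against Adventure Works; the set is arbitrary): Generate walks the aliased set forward while Item( Count - CurrentOrdinal ) reads it backward.

WITH
SET [S] AS { [Date].[Calendar Year].[Calendar Year].MEMBERS }
-- the first iteration (CurrentOrdinal = 1) fetches Item( Count - 1 ),
-- i.e. the last member; the last iteration fetches Item( 0 )
MEMBER [Measures].[Reversed names] AS
    Generate( [S] AS L,
              [S].Item( [S].Count - L.CurrentOrdinal ).Item(0).Name,
              ", " )
SELECT
    { [Measures].[Reversed names] } ON 0
FROM
    [Adventure Works]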

Why do we need the reversed bitmap? It becomes obvious once we come to the next calculated measure – Rightmost non root position. The analysis should be performed backwards, from right to left. That is the order in which we search for patterns in the bitmaps, and since string search is left-based, we need the reversed bitmap. This new measure returns the position of the first zero value. In the table featuring the 5 distinct cases, those positions are: 1, 2, 3, 2, 0.
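
The mechanics can be tried out in a standalone sketch (the values are hypothetical; StrReverse(), a VBA function available in MDX, merely stands in for the Generate-based reversal above to keep the example short):

WITH
MEMBER [Measures].[Bitmap] AS '011'
MEMBER [Measures].[Reversed] AS StrReverse( [Measures].[Bitmap] )
-- '011' reversed is '110'; the first '0' sits at position 3,
-- matching the third position (3) listed above
MEMBER [Measures].[Rightmost position] AS
    InStr( [Measures].[Reversed], '0' )
SELECT
    { [Measures].[Bitmap],
      [Measures].[Reversed],
      [Measures].[Rightmost position] } ON 0
FROM
    [Adventure Works]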

The next measure is a Boolean-type measure that detects a special case (see the table above), namely the 001 combination, where, per request, the value should be 100%.

Finally, we come to the last and the only visible measure – Universal Ratio %. There, the first thing we do is test for emptiness of the original ratio measure. Only then do we handle the cases in our table. After covering the cases with a 100% result, based on where the measures are, we enter one branch and there test the denominator for zero. Finally, we perform the ratio calculation. Only by now it is in its utmost relative form, covering a wide variety of cases with only one expression (notice the use of the Rightmost non root position measure inside the innermost calculation). That’s the power and beauty of having and using bitmap indexes.

UniversalPercentageInOWC

BENEFITS

It’s worth repeating that the final formula looks complicated for only two reasons: OWC is being used as a front-end, and there were special cases in the bitmap table. Without them, the calculation would come down to the always-necessary checks for emptiness and division by zero, followed by the core ratio expression. In case you’re using a client that keeps measures on the axis you’ve dropped them on in your pivot, you can consider stripping out the unnecessary calculations, or parts of them, as mentioned at the beginning of this article.

I also made tests with real bitmaps. Meaning, I tried to use numbers (powers of 2, with N, the number of hierarchies on the axis, as the exponent). Yet that turned out to be not only a dead end (because the reverse operation is not easy), but also time consuming, prolonging the query time significantly. String operations, on the other hand, were very fast, and manipulations with them very easy. Relatively speaking, of course.

All in all, bitmaps, with their power as flexible pointers, proved to be an interesting approach for providing universal MDX solutions – calculations that, did I mention it before, work in any cube.

Oct 25 2009

“Life is like a box of chocolate: You never know what you’re gonna get.”

It is not the exact memorable quote from the Forrest Gump film, but simply the form of it that I’ve memorized. Moreover, being excited about the package with three blue letters on it I’ve just received, it’s also the one I find most appropriate for the series of articles I intend to write.

What’s the topic this time? Randomness in MDX.

Having thought about it for a while, I’ve recognized three variants of it: random values, random members and random objects. Each a story by itself. Therefore, I’ve decided to cover them separately, starting with chocolate flavors representing, well, you’ve guessed it by now – random values.

INTRODUCTION

Imagine a box full of chocolates, each chocolate filled with some delicious ingredient. How many different ingredients are there in total? What do they taste like? Which shape would you try first? You feel tempted. The hand moves. A mixture of tastes overwhelms your senses as you reach for them one by one, experiencing a short glimpse of sensation as their sweetness melts in your mouth. You repeat the process and the next thing you know, they’re gone. All of them.

Now, if that made you take a walk around the place you’re at, be sure to come back soon and see how to simulate the same in an MDX query.

RANDOM VALUES

There will be times when you’ll need unpredictable results. Possibly inside a range of values, yet random in nature. The solution is to create a calculated measure that uses the VBA Rnd() function. Yet it’s not as easy as it looks. Why? Because that function will return the same value in each row unless prepared properly. And here’s how.

WITH
-- initial settings
MEMBER [Measures].[Min Value] AS 10
MEMBER [Measures].[Max Value] AS 99
MEMBER [Measures].[Value Range] AS
       [Measures].[Max Value] - [Measures].[Min Value]

-- A) non-working approach, same number for all rows
MEMBER [Measures].[Fixed Random Value] AS
       Int( [Measures].[Min Value] +
            [Measures].[Value Range] * Rnd( ) ),
       NON_EMPTY_BEHAVIOR = '[Measures].[Sales Amount]'

-- B) working approach
MEMBER [Measures].[Random Value 1] AS
       Int( [Measures].[Min Value] +
            [Measures].[Value Range] *
            Rnd( Rank( [Product].[Product Model Lines]
                       .CurrentMember,
                       [Product].[Product Model Lines]
                       .CurrentMember.Level.MEMBERS ) ) ),
       NON_EMPTY_BEHAVIOR = '[Measures].[Sales Amount]'

-- C) working approach
MEMBER [Measures].[Random number per row] AS
       Rnd( Rank( [Product].[Product Model Lines]
                  .CurrentMember,
                  [Product].[Product Model Lines]
                  .CurrentMember.Level.MEMBERS ) )
MEMBER [Measures].[Random Value 2] AS
       Int( [Measures].[Min Value] +
            [Measures].[Value Range] *
            [Measures].[Random number per row] ),
       NON_EMPTY_BEHAVIOR = '[Measures].[Sales Amount]'
SELECT
    {
        [Measures].[Sales Amount],
        [Measures].[Fixed Random Value],
        [Measures].[Random Value 1],
        [Measures].[Random Value 2]
    }
    ON 0,
    NON EMPTY
    {
        [Product].[Product Model Lines].[Model].MEMBERS
    }
    ON 1
FROM
    [Adventure Works]

The first calculated measure (Fixed Random Value) will not generate random values, while the other two measures will (Random Value 1 and 2). The reason is that the SSAS engine tends to determine which expressions can be considered deterministic, and it recognizes the Fixed Random Value measure’s definition as one of them. In order to prevent that, we need to insert a non-deterministic component into the expression, either directly (Random Value 1) or by referring to a helper measure that contains it (Random Value 2).

Non-determinism is usually obtained by referring to the current member or similar. Generally, unnecessary non-determinism is something one should avoid whenever possible, because it slows down the query in most cases (causing additional loops and preventing block mode). Here, however, it was a requirement in order to achieve dynamic results. The Rank() function used here is probably not the only way to achieve this.

Screenshot – Random values

Min and Max values are used in order to show how to apply an offset – the range doesn’t have to start from zero, as in this example.

Non_Empty_Behavior is used to eliminate rows that have no sales (notice NON EMPTY on rows). Without it, we would get rows with a null value in sales despite NON EMPTY, because the random values wouldn’t be null. Hence we had to take care of it. Alternatively, that can be done using iif() inside the calculations, which is also the way to go when you need a more complex condition (i.e. an expression instead of a measure).
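
For completeness, here’s a sketch of that iif() variant (with the Min/Max values inlined and Sales Amount as the guard measure): the random value is returned only when the guard is not empty, so NON EMPTY works without Non_Empty_Behavior.

WITH
-- the same random expression as Random Value 1 above, wrapped in an
-- emptiness guard instead of relying on NON_EMPTY_BEHAVIOR
MEMBER [Measures].[Random Value Guarded] AS
    iif( IsEmpty( [Measures].[Sales Amount] ),
         null,
         Int( 10 + 89 *
              Rnd( Rank( [Product].[Product Model Lines].CurrentMember,
                         [Product].[Product Model Lines]
                         .CurrentMember.Level.MEMBERS ) ) ) )
SELECT
    { [Measures].[Sales Amount],
      [Measures].[Random Value Guarded] } ON 0,
    NON EMPTY
    { [Product].[Product Model Lines].[Model].MEMBERS } ON 1
FROM
    [Adventure Works]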

CONCLUSION

It is possible to have random values inside a calculated measure. The definition of that measure should include the VBA Rnd() function and should also be non-deterministic in nature. In short, that means we inject a part of the calculation that generates different values for each member/row on the axis. In the above example, that was achieved using the Rank() function, because rank is different for each member.

The other thing we should take care of is providing a null value when required. For example, if some other measure in the query is null, our calculation should return null as well, to prevent additional rows from appearing when NON EMPTY is used on axes.

NEXT STEPS

That brings us to the end of part one. In the next part, we’ll cover taking random samples of a hierarchy, something Mosha already covered here and which is included in the ASSP project on CodePlex. Nevertheless, I’ll try to provide additional examples of usability.

Finally, here’s a link on the same topic, yet in T-SQL, which I managed to discover while searching for related articles.

Oct 01 2009

Today I’ve been awarded Microsoft MVP status.

I’m very happy about it and want to express my gratitude towards:

  • the company I work for, SoftPro Tetral, for providing a nice climate to learn and evolve in from day one, and for supporting me in my wishes and activities
  • my colleagues, for pushing each other to our limits
  • Chris Webb, for his kind words about me when I was starting out, and everything involving me being at two SQLBits conferences
  • Edward Melomed, Raymond-Lee and the other forum moderators, for regularly following my posts in the SSAS forum
  • Alessandro Teglia, for bringing good news as my new MVP lead
  • Dean Vitner, for giving me a chance at the Kulendayz conference
  • all members of the SSAS forum, some for sharing their problems, others for sharing their knowledge and ideas with the rest of us
  • the SSAS bloggers, for the valuable information in their articles, and Vidas Matelis, for his SSAS-Info hub
  • the readers of my humble blog, for their small but encouraging feedback
  • everyone else I met, virtually or in the real world, during this year or so, for sharing a moment and enriching me as a person
  • and last but not least, my family, for having patience with me during my moments of inspiration

Thank you all.

PS: I could just drop those bullets, leave the last sentence and hope that you’d all recognize yourselves in it. But then, you know me, I find it hard to write a small post :-)