Tomislav Piasevoli

Business Intelligence Specialist

Oct 31, 2009
 

 

This year I’ll be volunteering at the PASS Summit. The Birds-of-a-Feather (BOF) Lunch is one of the events you can find me at. Whether you want to discuss MDX or just meet me in person, search among the 50 tables and join me for a chat. The event will take place on Tuesday, 3rd of November, between 11:45 and 12:45 (Expo Hall 4B).

Additionally, the company I work for, SoftPro Tetral, is one of the exhibitors. If you’re interested in seeing our CubePlayer in action or just want to have a chat with whoever is there at that moment (me included), feel free to drop by, either during breaks or during the Exhibitors Reception (Tuesday, 3rd of November, 6:00pm-8:00pm, Exhibit Hall 4A). We’re located in the bottom-right corner on the map.

Oct 29, 2009
 

 

On the 2nd of September, I watched a marathon of SQL Server sessions named “24 Hours of PASS”. At least a small part of it. One session that made me think was Steve Jones’ Building a better blog. I wanted to learn a few tricks about blogging in order to make mine a better place. What surprised me was Steve’s advice to blog about everything you do, in order to show others what topics you have covered so far, regardless of whether someone else has covered them before or not. Now, I simply couldn’t agree with that. For me, writing a blog article, and by that I don’t mean a short announcement or similar, is writing about something others haven’t done so far, or at least not in that way. It’s about creating something new, a challenge to convert ideas into practice and demonstrate them in public. Otherwise, what’s the point of duplicate content? Then again, maybe I just don’t understand the point of having a blog.

And so I thought to myself, surely there must be several types of bloggers. Some, for instance, act as messengers, concentrating mostly on what’s new and what’s considered to be best practice. News and knowledge aggregators, we might call them. Then there are occasional practitioners, people who don’t blog often and rarely do so regarding news. When they blog, it’s mostly about something they’ve done recently and found interesting to share. Next, there are regular bloggers, who post weekly if not more often, regardless of topic, just to keep the pace. And so on. Which makes me think. Could we perform a real-life analysis on this topic? Could we analyze bloggers writing about Business Intelligence? Why not, I know a good place where this kind of information can be obtained. Our precious – the SSAS-Info.com portal, hosted by Vidas Matelis.

STARTING THE PROJECT

Although the strategic decision to go ahead with this project was made the moment the idea came to me, the tactical decision followed a month later.

I mean, I saw this right from the start as a community project, this being merely its initial phase. A project where we could all participate, explore possibilities, share the findings and the project’s know-how, and have fun at the same time. Because this time it would not be about Northwinds, Foodmarts or AdventureWorks; this time it would be about us. Just imagine, us being inside a cube. Not as analysts, but as the ones being analyzed, for a change. Feels weird, yet tempting.

Considering all the circumstances, it had to be done before PASS Summit 2009, because that’s a great place to gather (at least some of us will be there) and discuss the findings and further possibilities. Therefore, upon receiving an emergency note from tactical management, the operational team had to act fast. And so I did.

I contacted Vidas, who kindly sent me a CSV dump of his site. One big table containing all the records: articles, news, FAQ, etc. High five for cooperativeness!

CSV - 1st export

Since my virtual environment wasn’t ready at the moment, I started in Excel 2007 and managed to parse, clean, distill and consolidate the data using Excel functions, formulas and the copy-paste technique. Like a real BA :-). I also transposed some data (tags) into an M2M model using Query. Details from that experience in some other article, perhaps. I’m glad I did it in the end. Says enough?

Before going any further, I sent it back to Vidas. And I recommend this as a good practice for future users of PowerPivot – once they get what they want, loop back to IT in order to clean/consolidate the data at the very source. Don’t be mistaken, data will always be dirty. Yes, I know, it’s an extra step, but that way the data becomes better and better for analysis. Love thy brother thou Gemini.

He gladly cleaned it following the findings I sent along. At the same time my virtual environment became ready, so I moved to Excel 2010 and the Gemini add-in.

BI bloggers - source

I couldn’t build a model directly from the same Excel workbook holding the tables for analysis, therefore I saved that workbook and used it in a new one as an ODBC data source.

Connecting went smoothly. I couldn’t see the data, just the headers, but I assumed it had something to do with this still being a CTP (and perhaps something else). I just ignored that, returned to Excel, added a new pivot, filled it with some fields, and the data was there. It works!

PERFORMING THE ANALYSIS

I made a Four Charts view (under the Pivot dropdown) and started to search for interesting and meaningful results (and also the ones where I’m in the chart yet it doesn’t look suspicious :-) ). Finally, I settled on this. What do you think?

Gemini analysis of BI bloggers

I connected the orange Section slicer to the charts on the left (colored orange too). That slicer separates the different parts of the SSAS-Info site, articles being the most fun part of it for analysis (hence selected). The upper radar graph shows the quantity of articles per category. The three most blogged-about categories (as seen in the radar graph) are: MDX, Management and Design. The bottom bar graph shows the Top 10 bloggers.

The blue slicer, Category, represents the topics being blogged about. The top 3 are selected manually in order for the blue stacked bar graph to show their total as well as their share for each of the Top 10 bloggers in those topics. As you can see, almost all of the Top 10 bloggers are among the Top 10 bloggers in these categories too. Vidas Matelis and Teo Lachev focus their attention primarily on other subjects, while Microsoft and I blog above average in the top 3 categories. The blue graph is linked to both the orange and the blue slicer and supports multi-select quite beautifully, because a stacked bar graph is used here.

The green pie graph is linked to the green Author slicer (and the orange slicer too). Here multi-select is not recommended, because this graph is not designed for that. However, it shows the distribution of categories for an author. Chris Webb, seen in this beautiful graph, appears as a true multi-vitamin for consultancy because of the wide variety of interests he blogs about and their nice distribution.

Which brings us to my initial dilemma. Using this graph it is relatively easy to analyze how versatile one chooses to be in one’s blog (based on the categories one decides to write about). Mosha Pasumansky, I, and many other BI bloggers are inclined toward one or two areas. Partially because most of us are under 10 posts, which is a relatively small number compared to the number of categories.

The same analysis can be done in the radar graph, either by hooking the orange radar graph to the Author slicer, or by changing the bottom-right graph type from pie to radar, since it is already connected as required. That shows us that some BI bloggers, like Carl Rabeler and Dave Fackler, have a square in the radar graph (a focus on 4 categories). Many Microsoft employees have a more or less wide wedge, with SSAS 2008 being the longest side. Other bloggers have triangles, etc.

Chris Webb, Vidas Matelis, Thomas Ivarsson, Dan English and others show more balanced pie covering many categories. And so on. And so on.

Btw, having so many celebrities in my article (without them even knowing it), I feel like Michael Jackson shooting the Liberian Girl video. And I apologize in advance for not mentioning all of you (~150). Don’t worry, I have plenty of ideas for the sequel 😉

DRAWING THE CONCLUSION

Having seen many of these charts, I conclude that my assumption from the beginning of this article is true. Patterns do exist. Therefore, they should be analyzed.

My comparison with vitamins might not be bad at all. I mean, we both help society. Next, each of us targets one or more areas. Finally, each of us, like vitamins from different vendors, is perceived/absorbed better or worse. That is, what we write about and how we write about it. Yet, unlike vitamins, we have the option to evolve.

My (not first, but second) impression regarding PowerPivot is that it is envisioned as a very pleasant analytical environment, featuring slicers as the most impressive component of it.

Although the same analysis could have been done in Excel 2007 or in SSAS 200*, I chose Gemini because I participate in the CTP program, hadn’t posted anything about it so far, and wanted to learn. And what better way than practicing by setting up a task and then fulfilling it. Incrementally, this being the first step.

As far as in-memory cube generation is concerned, that part is still to come. I made some tests, but that’s not it. A careful reader will remember that my standards prevent me from posting unless it is something valuable or unique. In CTP3, perhaps.

  

WHERE TO NEXT?

This experiment showed interesting results, at least to me. And it pointed out one important thing – data is everywhere! Yet it’s not being analyzed, just collected. Opportunities are numerous. Business Intelligence, at least its technology part, enables everyone to gain fast insight into data, encouraging the creative process which then leads to better decision-making and actions.

Opportunities are numerous here too. This was only a small portion of what can be done. I see this project as a potentially flourishing area for the whole community, where each of us could cover his or her area of interest. Btw, this is just one of the projects I have in mind, and I intend to start them when the time comes.

For example, this data is static (captured a week ago). One possible project would make it come alive, enabling real-time (or daily) analysis (although weekly could be fine too).

The next one is Time Analysis. Records in the database behind the SSAS-Info portal don’t have the original article’s date, but rather the date/time of when it was manually entered into the database. I’ve omitted that in this example; I might make another one soon. Now, that difference means we have an uncertainty of a couple of days for more recent items, and an unpredictable offset for older items. Those dates could be collected and merged into the DW.

Then there are numerous data-mining possibilities. For example, clustering of authors in order to obtain clustered groups based on categories or tags. We could also classify authors as nighthawks and daylight writers. Or by regularity in their posts (regular, occasional), day in week affinity, etc, etc.

This could also be a nice case for data-cleaning experts, to show their strength and ideas in that field, in Excel as well as on SQL Server platform. And perhaps by comparing good and bad sides of each path.

Finally, there are M2M challenges not to be neglected. All together, a nice playground for the upcoming CTP3.

Yes, I know what you’re thinking. OK, fine, no problem. Feedback would however be very appreciated.

If, on the other hand, you are interested in participating in this project, Vidas and I have no objection to you using the raw or prepared data, which btw can be found below.

 

Feel free to contact us regarding cooperation, advice, anything.

And again, don’t be shy, give a thumbs up or down, feedback, anything. Let me know what you’re thinking. That way I can make another analysis. Just kidding :-)

PS: A small note to the Redmond team – the same idea could be applied to the MSDN forums, extending the text-mining of tags and posts recently introduced there, if I’m not mistaken.

Oct 28, 2009
 

Tomorrow, on the 29th of October, I’ll be holding an unplugged session named “OLAP” for the local SQL&DEV user group. The term “unplugged” is literal – there will be no laptops, slides or demos. Just a (hopefully) good intro and discussion, with a few key points to take away at the end. All together, an hour on a specific subject, following the practice of last month’s gathering.

The event will take place in Microsoft’s building in Zagreb (Croatia) at 6 PM.

Besides the regular monthly newsletter, there’s also an announcement on BUG Online (the web portal of a local IT magazine) that I managed to find regarding this event.

Tomorrow, on the 29th of October, I will hold an unplugged lecture on the topic of OLAP for the SQL&DEV user group. The “unplugged” part is literal – no laptops, no slides, no demos. Just, I hope, a good introduction, a discussion and, of course, a few highlighted points to remember. So, one hour on a chosen topic, following the good practice of last month’s gathering.

The event will start at 6 PM at the offices of Microsoft Croatia, Turinina 3, Zagreb.

Besides the regular monthly e-newsletter, there is also an announcement about this event on the Bug Online portal.

Oct 25, 2009
 

 

Creating a bit array (also known as a bitmap) is a useful technique when analyzing data. Especially (but not necessarily) in the case of low cardinality (a small number of distinct values in a column compared to the number of records in a table). For example, a column having two distinct values (male/female or a yes/no status) could be optimized in storage and represented using only the values 0 and 1 (bits).

A collection of such bits forms a bit array or a bitmap. Bitmap indexes use those arrays performing bitwise operations on them.

Almost all DSS and analytical systems use bitmap indexes, and so does Analysis Services. One place where this can be seen is SQL Server Profiler. The Query Subcube event represents the bitmap of hierarchies and attributes being used in a query. Yet, what’s fine for computers might not be for humans, hence there’s also the Query Subcube Verbose event, which explains the same in plain English. Kind of.

Although the SSAS engine uses them intensively, bitwise operations in MDX are in short supply. In spite of that, the same idea can be borrowed and turned into action using string functions. After all, an alternate name for a bit array is bitstring.

Recently I participated in a case where a measure was expected to return the percentage of its parent, no matter which hierarchy is placed on an axis and no matter how many of them there are. Actually, a similar problem appeared on the MSDN SSAS forum a year ago too. While the initial problem in this latter case was solved using a special solution, general solutions also emerged. The offers ranged from heavy string manipulation to recursion and a stored procedure. Taking a look at it now, from the perspective of this case, I believe a bitmap index might have been an alternate MDX solution – a relatively fast approach that doesn’t require anything but a query and a few calculated members aside.

Here’s a case study featuring the bitmap index as the key point in obtaining the general solution.

SITUATION

The idea was to have a general solution for percentage (ratio to parent). One calculated measure covering all scenarios, meaning, whatever combination of hierarchies appears in OWC (cube browser).

The problem that soon appeared was this – no matter how much I improved the calculation in each iteration I posted to the forum (a previous thread and my blog article on that subject), there was always a case where it didn’t fit. Therefore I started focusing on getting as detailed an explanation as possible. When I finally got it (as an Excel sheet) and analyzed it, I deduced the algorithm and described it like this (as in integrams):

  • if there was a non-root member in the rightmost hierarchy, show ratio to parent
  • root member in the rightmost hierarchy combined with all non-root members from other hierarchies to the left should result in 100%
  • if all hierarchies were on root member, the result should be 100%
  • in all other situations, ratio to parent for non-root member should be the result

The idea in integrams (I couldn’t find a link for them in English, it’s an enigmatic puzzle game) is to mark certain events that hold true and others that can never be such. By doing so in iterations (reading the initial text over and over again, every time in a new context), one should be able to deduce the final combination of entities in a puzzle and thereby solve it. The idea here was to list all possible combinations using a bitmap, where 0 represents a non-root member, 1 a root member, and their sequence, well, …the sequence of hierarchies on the axis. Translated, that would look like this (asterisk being any value):

CASE   BITMAP VALUE                   REQUIRED RESULT
**0    0, 2, 4, 6, … (even numbers)   parent of 3rd
001    1                              parent of 3rd (self), special case for 100%, per request
011    3                              parent of 1st
101    5                              parent of 2nd
111    7                              parent of 1st, special case for 100%

 

Now, see how this looks more like a solvable case? And so it was. The only thing necessary at this stage is to code it using MDX expressions in the most optimized way.

SOLUTION

A general MDX solution for this could be far simpler than the one below. However, the requirement was that it should work in OWC, and OWC has its specifics when it comes to MDX, about which I blogged here. Therefore, I had to do a lot of additional things, like testing where OWC puts the measures, and everything related to that in later calculations. Nevertheless, I managed to make it work as required.

Create Member CurrentCube.[Measures].[Where are measures in OWC] AS
    case
        when NOT IsError(Extract( Axis(0), Measures ).Count) then 0
        when NOT IsError(Extract( Axis(1), Measures ).Count) then 1
        when NOT IsError(Extract( Axis(2), Measures ).Count) then 2
        else -1  -- should not be possible
    end
    , Visible = 0
;

Create Member CurrentCube.[Measures].[Ratio measure] AS
    -- specify a measure you want to have a ratio for, or
    -- use default measure, or
    -- use dynamic expression, that selects either first measure on an axis or
    -- the default measure in case no measure is selected

    /* -- option 1 */
    [Measures].[Sales Amount]
    /* -- option 2
    [Measures].DefaultMember
    */
    /* -- option 3
    iif( Extract( Axis( [Measures].[Where are measures in OWC] ),
                  Measures ).Item(0).Item(0).UniqueName =
         '[Measures].[Universal ratio %]',
         [Measures].DefaultMember,
         Extract( Axis( [Measures].[Where are measures in OWC] ),
                  Measures ).Item(0)
       )
    */
    , Visible = 0
;

Create Member CurrentCube.[Measures].[Test Empty] AS
    IsEmpty( [Measures].[Ratio measure] )
    , Visible = 0
;

Create Member CurrentCube.[Measures].[Count of columns] AS
    iif( IsError( Axis(1).Count ),
         0,
         iif( [Measures].[Where are measures in OWC] = 1,
              Axis(0).Item(0).Count,
              Axis(1).Item(0).Count )
       )
    , Visible = 0
;

Create Member CurrentCube.[Measures].[Bitmap Index as String] AS
    iif( [Measures].[Where are measures in OWC] = 1,
         Generate( Head( Measures.AllMembers,
                         [Measures].[Count of columns] ) AS L,
                   -( Axis(0).Item(0).Item( L.CurrentOrdinal - 1 )
                      .Hierarchy.CurrentMember.Level.Ordinal = 0 )
                 ),
         Generate( Head( Measures.AllMembers,
                         [Measures].[Count of columns] ) AS L,
                   -( Axis(1).Item(0).Item( L.CurrentOrdinal - 1 )
                      .Hierarchy.CurrentMember.Level.Ordinal = 0 )
                 )
       )
    , Visible = 0
;

Create Member CurrentCube.[Measures].[Reversed Bitmap Index as String] AS
    iif( [Measures].[Where are measures in OWC] = 1,
         Generate( Head( Measures.AllMembers,
                         [Measures].[Count of columns] ) AS L,
                   -( Axis(0).Item(0).Item( [Measures].[Count of columns] -
                                            L.CurrentOrdinal )
                      .Hierarchy.CurrentMember.Level.Ordinal = 0 )
                 ),
         Generate( Head( Measures.AllMembers,
                         [Measures].[Count of columns] ) AS L,
                   -( Axis(1).Item(0).Item( [Measures].[Count of columns] -
                                            L.CurrentOrdinal )
                      .Hierarchy.CurrentMember.Level.Ordinal = 0 )
                 )
       )
    , Visible = 0
;

Create Member CurrentCube.[Measures].[Rightmost non root position] AS
    InStr( [Measures].[Reversed Bitmap Index as String], '0' )
    , Visible = 0
;

Create Member CurrentCube.[Measures].[Inner 100%] AS
    CInt( [Measures].[Bitmap Index as String] ) = 1
    , Visible = 0
;

Create Member CurrentCube.[Measures].[Universal ratio %] AS
    iif( [Measures].[Test Empty],
         null,
         iif( [Measures].[Rightmost non root position] = 0 OR
              [Measures].[Inner 100%],
              1,
              iif( [Measures].[Where are measures in OWC] = 1,
                   -- check division by zero
                   iif( ( Axis(0).Item(0)
                          .Item( [Measures].[Count of columns] -
                                 [Measures].[Rightmost non root position] )
                          .Hierarchy.CurrentMember.Parent,
                          [Measures].[Ratio measure] ) = 0,
                        0,
                        [Measures].[Ratio measure]
                        /
                        ( Axis(0).Item(0)
                          .Item( [Measures].[Count of columns] -
                                 [Measures].[Rightmost non root position] )
                          .Hierarchy.CurrentMember.Parent,
                          [Measures].[Ratio measure] ) ),
                   -- check division by zero
                   iif( ( Axis(1).Item(0)
                          .Item( [Measures].[Count of columns] -
                                 [Measures].[Rightmost non root position] )
                          .Hierarchy.CurrentMember.Parent,
                          [Measures].[Ratio measure] ) = 0,
                        0,
                        [Measures].[Ratio measure]
                        /
                        ( Axis(1).Item(0)
                          .Item( [Measures].[Count of columns] -
                                 [Measures].[Rightmost non root position] )
                          .Hierarchy.CurrentMember.Parent,
                          [Measures].[Ratio measure] ) )
                 )
            )
       ),
    FORMAT_STRING = "percent"
;

 

The first measure detects the axis that holds the measures when OWC is used.

The second measure is our ratio measure. That is, the measure we want to calculate the ratio for. I provided the syntax for several scenarios. The first one, of course, is the obvious and easiest one – using any measure we want and hard-coding it there. The second option is a more general one – the use of the cube’s default measure. Meaning, it could work for any cube. Finally, the third and most complex option is to use the first measure on the axis, whatever it might be. That’s an even more general solution. In that scenario, it is up to the end user to put a measure of his choice as the first measure and to see ratios based on that measure. It should work in any cube/report/query. The calculation features a handler for the case of no measures at all (except the ratio measure itself), in which case the default measure is taken.

Next we test whether the original ratio measure is empty. We’ll use the result of this in the following calculations, in order to preserve empty rows.

Count of columns gives us the number of hierarchies placed on the axis of interest (that is, rows or columns, depending on where the measures are not).

The next thing we do is generate a bitmap and, right next to it, a reversed bitmap. The innermost logical test returns either -1 or 0 (True or False), while the minus in front of it turns -1 into 1. Looping is performed on a set of measures (which might be a weak point – it is up to the BI designer to provide something better in case there are only a few measures). Which axis is the bitmap formed from? That depends on the measures’ position in OWC. Additionally, in the reversed bitmap, the reversed order is obtained as it is usually done in MDX – using a combination of two MDX functions: Count and CurrentOrdinal.

Why did we need the reversed bitmap? It becomes obvious once we come to the next calculated measure – Rightmost non root position. The analysis should be performed backwards, from right to left. That is the order in which we search for patterns in bitmaps, and therefore we needed the reversed bitmap (string search is left-based). This new measure returns the position of the first zero value. For example, the bitmap 011 reversed becomes 110, and InStr() finds its first '0' at position 3. In the table featuring the 5 distinct cases, those positions are: 1, 2, 3, 2, 0.

The next measure is a Boolean-type measure that detects a special case (see the table above), and that special case is the 001 combination where, per request, the value should be 100%.

Finally, we come to the last and only visible measure – Universal Ratio %. There, the first thing we do is test for emptiness of the original ratio measure. Only then do we handle the cases in our table. After covering the cases with a 100% result, based on where the measures are, we enter one of the branches and there test the denominator for zero. Finally, we perform the ratio calculation. By now it is in its utmost relative form, covering a wide variety of cases with only one expression (notice the use of the Rightmost non root position measure inside the innermost calculation). That’s the power and beauty of having and using bitmap indexes.

UniversalPercentageInOWC

BENEFITS

It’s worth repeating that the final formula looks complicated only for two reasons: OWC is being used as a front-end, and there were special cases in the bitmap table. Without them, the calculation would come down to the always-necessary checks for emptiness and division by zero, followed by the core ratio expression. In case you’re using a client that keeps measures on the axis you’ve dropped them on in your pivot, you can consider stripping down all the unnecessary calculations, or parts of them, as mentioned in the beginning of this article.
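For illustration, here is a minimal sketch of that stripped-down variant, assuming measures stay on columns, the hierarchies of interest are on rows (Axis(1)) and [Measures].[Sales Amount] is the ratio measure; the member name is hypothetical and this is not the full OWC-proof version above:

Create Member CurrentCube.[Measures].[Simple ratio to parent %] AS
    -- ratio of the rightmost row hierarchy's current member to its parent
    iif( IsEmpty( [Measures].[Sales Amount] ),
         null,
         iif( Axis(1).Item(0).Item( Axis(1).Item(0).Count - 1 )
              .Hierarchy.CurrentMember.Level.Ordinal = 0,
              1,
              -- check division by zero
              iif( ( Axis(1).Item(0).Item( Axis(1).Item(0).Count - 1 )
                     .Hierarchy.CurrentMember.Parent,
                     [Measures].[Sales Amount] ) = 0,
                   0,
                   [Measures].[Sales Amount]
                   /
                   ( Axis(1).Item(0).Item( Axis(1).Item(0).Count - 1 )
                     .Hierarchy.CurrentMember.Parent,
                     [Measures].[Sales Amount] ) ) ) ),
    FORMAT_STRING = "percent"
;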

I also made tests with real bitmaps. Meaning, I tried to use numbers, 2 to the power of N, where N was the number of hierarchies on the axis. Yet that turned out to be not only a dead end (because the reverse operation is not easy), but also time consuming, prolonging the query time significantly. String operations, on the other hand, were very fast and manipulations with them very easy. Relatively speaking, of course.

All in all, bitmaps and their power as flexible pointers proved to be an interesting approach to providing universal MDX solutions – calculations that, did I mention it before, work in any cube.

Oct 25, 2009
 

 

“Life is like a box of chocolate: You never know what you’re gonna get.”

It is not the exact memorable quote from the Forrest Gump film, rather simply a form of it which I’ve memorized. Moreover, being excited about the package with three blue letters on it I’ve just received, it’s also the one I find the most appropriate for the series of articles I intend to write.

What’s the topic this time? Randomness in MDX.

Having thought about it for a while, I’ve recognized three variants of it: random values, random members and random objects. Each a story by itself. Therefore, I’ve decided to cover them separately, starting with chocolate flavors representing, well, you’ve guessed it by now – random values.

INTRODUCTION

Imagine a box full of chocolates, each chocolate filled with some delicious ingredient. How many different ingredients are there in total? What do they taste like? Which shape would you try first? You feel tempted. The hand moves. A mixture of tastes overwhelms your senses as you reach for them one by one, experiencing a short glimpse of sensation as their sweetness melts in your mouth. You repeat the process and the next thing you know, they’re gone. All of them.

Now, if that made you take a walk around a place you’re at, be sure to come back soon and see how to simulate the same in MDX query.

RANDOM VALUES

There will be times when you’ll need unpredictable results. Possibly inside a range of values, yet random in nature. The solution is to create a calculated measure that uses the VBA Rnd() function. Yet it’s not as easy as it looks. Why? Because that function will return repeating values in each row, that is, unless prepared properly. And here’s how.

WITH
-- initial settings
MEMBER [Measures].[Min Value] AS 10
MEMBER [Measures].[Max Value] AS 99
MEMBER [Measures].[Value Range] AS
       [Measures].[Max Value] - [Measures].[Min Value]

-- A) non-working approach, same number for all rows
MEMBER [Measures].[Fixed Random Value] AS
       Int( [Measures].[Min Value] +
            [Measures].[Value Range] * Rnd( ) ),
       NON_EMPTY_BEHAVIOR = '[Measures].[Sales Amount]'
-- B) working approach
MEMBER [Measures].[Random Value 1] AS
       Int( [Measures].[Min Value] +
            [Measures].[Value Range] *
            Rnd( Rank( [Product].[Product Model Lines].CurrentMember,
                       [Product].[Product Model Lines].CurrentMember.Level.MEMBERS ) )
          ),
       NON_EMPTY_BEHAVIOR = '[Measures].[Sales Amount]'
-- C) working approach
MEMBER [Random number per row] AS
       Rnd( Rank( [Product].[Product Model Lines].CurrentMember,
                  [Product].[Product Model Lines].CurrentMember.Level.MEMBERS ) )
MEMBER [Measures].[Random Value 2] AS
       Int( [Measures].[Min Value] +
            [Measures].[Value Range] * [Random number per row] ),
       NON_EMPTY_BEHAVIOR = '[Measures].[Sales Amount]'
SELECT
    {
        [Measures].[Sales Amount],
        [Measures].[Fixed Random Value],
        [Measures].[Random Value 1],
        [Measures].[Random Value 2]
    }
    ON 0,
    NON EMPTY
    {
        [Product].[Product Model Lines].[Model].MEMBERS
    }
    ON 1
FROM
    [Adventure Works]

 

The first calculated measure (Fixed Random Value) will not generate random values, while the other two measures will (Random Value 1 and 2). The reason is that the SSAS engine tends to determine which expressions can be considered deterministic, and it recognizes the Fixed Random Value measure’s definition as one of them. In order to prevent that, we need to insert a non-deterministic component as part of the expression, either by inserting it directly (Random Value 1) or by referring to a helper measure that contains it (Random Value 2).

Non-determinism is usually obtained by referring to current member or similar. Generally, that (unnecessary non-determinism) is something one should avoid if possible, because it slows down the query in most cases (causing additional loops and preventing block mode). However, here it was a requirement in order to achieve dynamic results. The Rank() function we used here is probably not the only way to achieve this.

Screenshot – Random values

The Min and Max values are used in order to show how to make an offset – the range doesn’t have to start from zero, as this example shows.

Non_Empty_Behavior is used to eliminate rows that have no sales (notice NON EMPTY on rows). Without it, we would have rows with a null value in sales regardless of NON EMPTY, because the random values wouldn’t be null. Hence we had to take care of it. Alternatively, that can be done using iif() inside the calculations, which is also the way to go when you need to provide a more complex condition (i.e. an expression instead of a single measure).
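As an illustration of that iif() alternative, a hypothetical fourth calculated member (not part of the query above) could look like this, assuming [Measures].[Sales Amount] is the measure whose emptiness we mirror:

MEMBER [Measures].[Random Value 3] AS
       -- return null when there are no sales, so NON EMPTY can remove the row
       iif( IsEmpty( [Measures].[Sales Amount] ),
            null,
            Int( [Measures].[Min Value] +
                 [Measures].[Value Range] *
                 Rnd( Rank( [Product].[Product Model Lines].CurrentMember,
                            [Product].[Product Model Lines].CurrentMember.Level.MEMBERS ) ) ) )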

CONCLUSION

It is possible to have random values inside a calculated measure. The definition of that measure should include VBA Rnd() function and should also be non-deterministic in nature. In short, that means we inject a part of calculation that generates different values per each member/row on axis. In the above example that was achieved using Rank() function, because rank is different for each member.

The other thing we should take care of is providing a null value when it is required. For example, if some other measure in the query is null, our calculation should return null as well, to prevent additional rows from appearing when NON EMPTY is used on the axes.

NEXT STEPS

That brings us to the end of part one. In the next part, we’ll cover random samples from a hierarchy, something which Mosha has already covered here, and which is included in the ASSP project on CodePlex. Nevertheless, I’ll try to provide additional examples of usability.

Finally, here’s a link for the same topic, yet in T-SQL, which I managed to discover while searching for related articles.

Oct 1, 2009
 

Today I’ve been awarded the Microsoft MVP status.

I’m very happy about it and want to express my gratitude towards:

  • the company I work for, SoftPro Tetral, for providing a nice climate to learn and evolve from day one, and for supporting me in my wishes and activities
  • my colleagues, for pushing each other to the limit
  • Chris Webb, for his kind words about me when I was starting and everything involving me being at two SQLBits conferences
  • Edward Melomed, Raymond-Lee and other forum moderators, for regularly following my posts in SSAS forum
  • Alessandro Teglia, for bringing good news as my new MVP lead
  • Dean Vitner, for giving me a chance at Kulendayz conference
  • all members of SSAS forum, some for sharing their problems, others for sharing their knowledge and ideas with the rest of us
  • SSAS bloggers, for valuable information in their articles and Vidas Matelis, for his SSAS-Info hub
  • readers of my humble blog, for their small but encouraging feedback
  • everyone else I met, virtually or in the real world, during this year or so, for sharing a moment and enriching me as a person
  • and last but not least, my family, for having patience with me during my moments of inspiration

Thank you all.

PS: I could just drop those bullets, leave the last sentence and hope that you’ll all recognize yourself in it. But then, you know me, I find it hard to write a small post :-)

Sep 23, 2009
 

Recently, I was asked to write a calculation for the ratio to parent on rows that doesn’t depend on the cube structure and that also works in OWC. The first part I managed to dig up somewhere from the past; the second proved to be a much more demanding task. Having bypassed several obstacles and spent a significant portion of time on this subject, I decided to pass on the knowledge in the form of a blog article. Here’s the calculation:

CREATE MEMBER CURRENTCUBE.[Measures].[Universal share percent for OWC in BIDS %] AS

    iif( IsError( Extract( Axis(0), Measures ).Item(0) ),
         iif( IsError( Extract( Axis(1), Measures ).Item(0) ),
              iif( Axis(1).Item(0).Item( Axis(1).Item(0).Count - 1 )
                   .Hierarchy.CurrentMember.Level.Ordinal = 0,
                   1,
                   Extract( Axis(2), Measures ).Item(0)
                   /
                   ( Axis(1).Item(0).Item( Axis(1).Item(0).Count - 1 )
                     .Hierarchy.CurrentMember.Parent,
                     Extract( Axis(2), Measures ).Item(0)
                   )
                 ),
              iif( Axis(0).Item(0).Item( Axis(0).Item(0).Count - 1 )
                   .Hierarchy.CurrentMember.Level.Ordinal = 0,
                   1,
                   Extract( Axis(1), Measures ).Item(0)
                   /
                   ( Axis(0).Item(0).Item( Axis(0).Item(0).Count - 1 )
                     .Hierarchy.CurrentMember.Parent,
                     Extract( Axis(1), Measures ).Item(0)
                   )
                 )
            ),
         iif( IsError( Axis(1).Item(0) ),
              1,
              iif( Axis(1).Item(0).Item( Axis(1).Item(0).Count - 1 )
                   .Hierarchy.CurrentMember.Level.Ordinal = 0,
                   1,
                   Extract( Axis(0), Measures ).Item(0)
                   /
                   ( Axis(1).Item(0).Item( Axis(1).Item(0).Count - 1 )
                     .Hierarchy.CurrentMember.Parent,
                     Extract( Axis(0), Measures ).Item(0)
                   )
                 )
            )
       ),
FORMAT_STRING = "Percent",
VISIBLE = 1;

What’s good about it is that it can be used in any project.

I also enhanced it a bit, to support other clients/controls as well (SSMS included).

Finally, it can be used as a template for other calculations that use Axis() function and are intended to work in Cube Browser.
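For illustration, a hypothetical test from SSMS might look like the query below, assuming the Adventure Works cube, [Measures].[Sales Amount] as the first measure on columns, and the calculated member above already added to the cube (depending on the client, the enhanced variant mentioned above may be required):

SELECT
    { [Measures].[Sales Amount],
      [Measures].[Universal share percent for OWC in BIDS %] } ON 0,
    [Product].[Category].[Category].MEMBERS *
    [Product].[Subcategory].[Subcategory].MEMBERS ON 1
FROM
    [Adventure Works]

Each subcategory row should then show its share of the parent category’s sales.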

ATTENTION: The improved variant of the same calculation can be found in a more recent blog article here.

Sep 23, 2009
 

 

The Cube Browser is often referred to by BI developers when working with SSAS cubes. Actually, that’s just another name for OWC (Office Web Components), a control integrated into BIDS that generates MDX in the background for each pivot table an end user makes. If you’re a BI developer, there are a few things I believe you should know about OWC:

  • It never mixes (crossjoins) measures with other dimensions.
  • It puts measures on columns only when there is no other dimension in pivot.
  • It puts measures on rows whenever columns or rows are occupied.
  • It moves measures to pages (Axis 2) when both columns (Axis 0) and rows (Axis 1) are occupied.
  • It creates subcube, doesn’t use subselect.
  • It always creates session sets, one per hierarchy and axis.
  • It cascades them (includes previously defined set in a new set’s definition by crossjoining it with another hierarchy).
  • It only refers to session sets, besides measures, in its final query.

For a simple preview of a cube, which is, I believe, why OWC is incorporated into BIDS in the first place, it makes no difference what the MDX behind it looks like. For advanced stuff, like the use of the Axis() function or similar, it matters a lot. If you run into problems, consider adjusting your calculations to the OWC specifics mentioned above. When done, the calculations should work as normal.
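To make the session-set behavior above more concrete, here is a rough, illustrative sketch of the cascading pattern. It is not verbatim OWC output: the set names are hypothetical, the Adventure Works hierarchies are assumptions, and the placement of measures is simplified.

CREATE SESSION SET [Adventure Works].[RowSet1] AS
    -- first hierarchy dropped on rows
    { [Product].[Category].[Category].MEMBERS };

CREATE SESSION SET [Adventure Works].[RowSet2] AS
    -- cascaded: the previously defined set crossjoined with the next hierarchy
    CrossJoin( [RowSet1], { [Date].[Calendar Year].[Calendar Year].MEMBERS } );

SELECT
    { [Measures].[Sales Amount] } ON COLUMNS,
    [RowSet2] ON ROWS
FROM
    [Adventure Works]

As the list above states, the final query refers only to such session sets (plus measures), with any filtering done through a subcube rather than a subselect.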

Aug 24, 2009
 
Last month I blogged about histograms, types of histograms and how to implement them in SSAS cubes. Since the entity-based ones are enabled in OLAP per se, I’ve focused on the event-based histograms and explained how to achieve them using a step-by-step approach. The inspiration for that article originated from this thread.
 
At the same time, another thread was describing a similar problem – how to make a dimension based on a calculated measure’s values? At first, I thought it could be solved with a slight modification of the Histogram solution. The only difference should have been the use of a real utility dimension (instead of a dummy one). In other words, scoping the members of that dimension with a proper calculation instead of defining a new calculated measure. However, not everything went smoothly. Finally, I decided to make an example in Adventure Works, solve this interesting problem and elaborate the distilled solution here in detail. What’s the topic this time? Buckets!
 
 
 
INTRODUCTION
 
If you ask me, Wikipedia doesn’t have a good explanation of what buckets are. In order to present this solution in a proper manner, first I’ll try to explain buckets, then provide a solution on how to implement them in OLAP cubes.
 
Bucketization is a common process when dealing with data. Normally, you want to be able to perform analysis on every possible value of an entity, for example a product, store or date, and on their attributes as well. However, there are times when such a detailed analysis is either not advisable or not required. Loans, for example, are usually a huge dimension, and so are customers in some industries. Implementing an attribute based on the primary key of the respective dimension table can have serious consequences for performance.
 
Continuous values (or a significant number of discrete values) are also sometimes bucketized. Values are split into discrete regions, bands, groups or ranges, which are nothing but various phrases for the same thing. Since the number of buckets is smaller than the number of distinct values, manipulation is easier and the performance vs. storage ratio potentially better. Yet these buckets still represent the data in a satisfactory way, meaning we don’t lose information about it; we just reduce the resolution of the analysis on that entity, since it suits us that way.
 
Typical examples can be found in Customer dimension (Yearly Income, Commute Distance) and Employee dimension (Base rate, Sick Leave Hours, Vacation Hours). Adventure Works, of course.
 
Some of the attributes mentioned above are implemented directly in dimension tables, as ranges of values with their own unique IDs. The others use the DiscretizationMethod / DiscretizationBucketCount properties of an attribute. The latter are wonderfully explained in a series of articles by William Pearson (Parts 79-83, see the list below the article). A search on the web shows us that Chris Webb also blogged about buckets and how to build them dynamically a few years ago.
 
The principle with histograms was this – we could see the frequency of occurrence of the key attribute of a dimension per any of its attributes, as long as there was a distinct count measure defined on that key attribute. Those were entity-based histograms. Event-based histograms were also possible, but we had to introduce a dummy dimension with a certain number of members and a new calculated measure that calculates the frequency of events in a fact per each of them. The question is – are there entity- and event-based buckets too? And the answer is – yes!
 
Buckets are a reduced set of values of an attribute. Besides that, they are an attribute like any other. Any measure can be analyzed across them. Such buckets are entity-based. They are built using the previously mentioned techniques. What’s also true is that they are static; they don’t change in the fact table. But that’s not what we’re after. We want our buckets to be dynamic, calculated from a fact. And for that we need a utility dimension. Only this time the calculations must be placed on it, using the scope command, so that we’re able to analyze all measures, whether regular or calculated, across these new virtual buckets. Here’s how.
 
 
 
BUILDING BUCKETS
 
1. Open a copy of Adventure Works 2008 database in BIDS. Adjust Deployment part of Project properties in order not to overwrite your original database. Specify Buckets (or anything you like) for the name of the database.
 
2. Open Adventure Works DW.dsv. Add New Named Query.
 
 

 
Specify Bucket for name and then add this T-SQL script:
 
SELECT 0 AS KeyColumn, 'Bucket 0' AS NameColumn
UNION ALL
SELECT 1 AS KeyColumn, 'Bucket 1' AS NameColumn
UNION ALL
SELECT 2 AS KeyColumn, 'Bucket 2' AS NameColumn
UNION ALL
SELECT 3 AS KeyColumn, 'Bucket 3' AS NameColumn
UNION ALL
SELECT 4 AS KeyColumn, 'Bucket 4' AS NameColumn
UNION ALL
SELECT 5 AS KeyColumn, 'Bucket 5' AS NameColumn
UNION ALL
SELECT 6 AS KeyColumn, 'Bucket 6' AS NameColumn
UNION ALL
SELECT 7 AS KeyColumn, 'Bucket 7' AS NameColumn
UNION ALL
SELECT 8 AS KeyColumn, 'Bucket 8' AS NameColumn
UNION ALL
SELECT 9 AS KeyColumn, 'Bucket 9' AS NameColumn
UNION ALL
SELECT 10 AS KeyColumn, 'Bucket 10' AS NameColumn
 
Test, execute, save.
 
3. Build a new dimension from that query. Name the dimension Bucket. Use KeyColumn and NameColumn for the same attribute (one and only) and provide MemberValue also (it should be KeyColumn as well). Order by key.
 
 

 
4. Add that dimension to the cube. Don’t link to any measure group.
 
 

 
5. Go to the Calculations tab. Position the cursor just below the Calculate; command. Paste this script:
 

Create Member CurrentCube.[Measures].[Weighted Discount Buckets] As
    Int( [Measures].[Discount Amount] /
         [Measures].[Reseller Sales Amount] * 10 )
    , Format_String = '#,##0'
    , Associated_Measure_Group = 'Reseller Sales'
    , Display_Folder = 'WA';

Scope
    (
      [Bucket].[Bucket].[Bucket].MEMBERS
    );
    This = Aggregate( Existing [Promotion].[Promotion].[Promotion].MEMBERS,
                      iif( [Bucket].[Bucket].CurrentMember.MemberValue =
                           (
                             [Measures].[Weighted Discount Buckets],
                             [Bucket].[Bucket].[All]
                           ),
                           (
                             [Measures].CurrentMember,
                             [Bucket].[Bucket].[All]
                           ),
                           null
                         )
                    );
End Scope;

6. Build and deploy.
 
7. Jump to Cube Browser tab.
 
 

 
Put the Bucket dimension on rows. Put the Weighted Discount Buckets measure from the WA folder first, then a couple of measures from the Reseller Sales measure group. I selected some having Sum aggregation and others being calculated, in order to show how buckets behave. You can also try to slice per country in the Sales Territory dimension or any other related dimension – the results adjust accordingly.
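The same check can also be run as a plain query, for instance from SSMS. A hypothetical sketch, assuming the copied cube is still named Adventure Works and uses the names from the steps above:

SELECT
    { [Measures].[Weighted Discount Buckets],
      [Measures].[Reseller Sales Amount] } ON 0,
    [Bucket].[Bucket].[Bucket].MEMBERS ON 1
FROM
    [Adventure Works]

Each bucket row should return the Reseller Sales Amount aggregated over the promotions whose weighted discount falls into that bucket.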
 
That’s it. Buckets are implemented!
 
 
 
EXPLANATION
 
First we need a calculation for the buckets. In this example, we used the weighted average of the discount percentage (which is Discount Amount over Reseller Sales Amount – we already have those two in our cube :-)). The discount percentage is related to the DimPromotion table. These two queries show us that discounts can be found only in the reseller sales process – there are 12 distinct promotions with various discount percentages ranging from 0 to 0.5 (0% – 50%).
 
 

select PromotionKey, UnitPriceDiscountPct
from dbo.FactResellerSales
group by PromotionKey, UnitPriceDiscountPct
order by PromotionKey, UnitPriceDiscountPct

select PromotionKey, UnitPriceDiscountPct
from dbo.FactInternetSales
group by PromotionKey, UnitPriceDiscountPct
order by PromotionKey, UnitPriceDiscountPct


Our buckets will be discount percentages, grouped in bands of ten (0-9% = Bucket 0, 10-19% = Bucket 1, … 91-100% = Bucket 10). For example, a weighted discount of 37% gives Int(0.37 * 10) = 3, which lands in Bucket 3. All together, we need 11 members in our utility dimension (Bucket 0 through Bucket 10), and that’s what the initial named query provides.

Since that dimension is not linked to any measure group, by bringing it into the cube browser we only see the same values for each member – the total values. In order to have correct values per member, we need to reach for the MDX script and write proper expressions. The important thing to notice is that these expressions should be written before any other statement in the MDX script, in order to allow all calculated measures (that follow) to be aggregated the way they’re supposed to (after our scope!). What once was solve order is now the order of statements in the MDX script.

The first thing we do is define the calculated measure that will be used to determine the buckets. It’s the same definition as the [Measures].[Discount Percentage] calculated measure, except we rounded the values and grouped them into bands of ten percentage points. That measure can later be made hidden, if so desired.

The following scope handles how each bucket is to be calculated for any measure – as an aggregate of values across existing promotions where the bucket dimension key is equal to the bucket measure. That expression is actually our transformation of the data, which allocates values to buckets. It’s sort of a projection of the calculated bucket measure onto the bucket dimension. That’s how it can be visualized and understood. Inside, it is important to refer to the root member of the bucket dimension, because it’s the only member that has data before the transformations are applied. That’s why we used tuples.

CONCLUSION

We reached the end of this article. Let’s summarize it one more time.

It is possible to analyze cube data based on dynamically calculated measures/buckets. The solution is to build a utility dimension with a predetermined number of members (buckets) and then assign a value to each bucket. Values are assigned using a scope statement where the key of the bucket is compared to the value of the calculation (the calculated bucket measure). Effectively, that is a transformation of data from a measure to a dimension. Speaking of which, what better way to finish this article than to provide a vivid example.

We all love music and have our favorite media player, either a device or an application on a computer. Small and simple ones provide only basic operations like Play, Pause, etc. The others, more advanced, feature a multimedia database (a music catalog, tracking the number of times a song has been played) and visualizations (a spectrum analyzer), among other things.

Last time we focused on the PlayedCount field, that is, on the frequency of occurrence of an event (a song being played). A histogram of that data would show how many songs (the height of a vertical bar) have been played each number of times (points on the x axis). Not a very interesting thing, right? A drill-down/drill-through on top of that would be more interesting (which songs are played N times). But there wouldn’t be a drill if it weren’t for the histogram in the first place, to separate the data. Therefore, histograms are important, although not always used as the final result.

In this article, we’ve focused on the visualization. The spectrum analyzer, to be precise. Those small panels where several vertical bands show how loud the bass, middle and treble are. Or more bands, depending on how many it displays (i.e. 100 Hz, 300 Hz, 1 kHz, 3 kHz, 10 kHz). Bands are ranges of frequency which display the amplitude for any given part of a song.

Well, did you know that a song can also be considered a small fact? The amplitude in that case is the measure. Dimensions? Well, there’s a Time dimension for sure. Others? Not likely. What about a Frequency dimension? It doesn’t exist there. It is, however, required to be present in the end result. How come? Well, bands are nothing but buckets of frequency ranges, where their number is predetermined in advance. The only thing that’s missing is the calculation that projects the amplitude across the buckets in the spectrum analyzer. Ring a bell?

FFT (Fast Fourier Transform). How about it? ;-)

 
Aug 14, 2009
 
Chris Webb recently blogged about a strange phenomenon regarding sets in the where clause and auto-exists. Today it occurred to me that I had experienced a similar issue while doing some testing a year ago. A quick search on the right words (Tomislav Darren set slicer) and the exact thread popped up first in the SSAS forum search engine.
 
After I had re-read that thread, I was very much intrigued by Chris’ post, so I read and tested it too, and then decided to spend a few hours trying to find out what’s going on in the examples he provided, hoping to discover a general rule for this “weird” behavior and, most of all, the answer to “why”.
 
Whether I succeeded or not, I’ll leave up to you. The explanation follows.
 
 
 
First, let’s make the summary of facts.
 
Here is the list of queries: what the MDX looks like, what they return (not values – we focus on members), as well as a short description of the behavior/mechanism behind each.
 
 
 
Query 1 – returns the value of 01-07-2001 (member in slicer wins in advance by forcing fixed and explicit coordinate on 20010701)
select [Measures].[Internet Sales Amount] on 0,
[Date].[Fiscal].[All Periods] on 1
from [Adventure Works]
where( [Date].[Calendar].[Date].&[20010701] )
 
Query 2 – returns the value of All Periods (axis wins because there’s a set in slicer)
select [Measures].[Internet Sales Amount] on 0,
[Date].[Fiscal].[All Periods] on 1
from [Adventure Works]
where( { [Date].[Calendar].[Date].&[20010701] } )
 
Query 3 – returns the value of 2001 (“nobody” wins, a tuple is formed from axis and slicer members, which in the end manifests as 2001 has won)
select [Measures].[Internet Sales Amount] on 0,
[Date].[Fiscal].[All Periods] on 1
from [Adventure Works]
where( { [Date].[Calendar Year].&[2001] } )
 
Query 4 – returns the value of 01-07-2001 (“nobody” wins, a tuple is formed from axis and slicer members, which in the end manifests as 20010701 has won)
select [Measures].[Internet Sales Amount] on 0,
[Date].[Fiscal Year].&[2002] on 1
from [Adventure Works]
where( { [Date].[Calendar].[Date].&[20010701] } )
 
Query 5 – returns the value of 01-07-2001 (again, axis wins because there’s a set in slicer)
select [Measures].[Internet Sales Amount] on 0,
[Date].[Calendar].[Date].&[20010701] on 1
from [Adventure Works]
where( { [Date].[Fiscal].[All Periods] } )
 
Query 6 – returns the value of All Periods (again, axis wins because there’s a set in slicer, similar to Query 2)
select [Measures].[Internet Sales Amount] on 0,
[Date].[Fiscal].[All Periods] on 1
from [Adventure Works]
where( { [Date].[Calendar].[Date].&[20010701], [Date].[Calendar].[Date].&[20010702] } )
 
 
OK, now something about colors, which will help us explain this faster and easier.
 
All queries except the first one have { } in the slicer. All hierarchies preserve their color throughout the queries. Fiscal hierarchies are warm-colored (orange, maroon), calendar hierarchies are cold-colored (green, blue). Each hierarchy has its own color, each member a shade of that color.
 
Queries 1, 2 and 5 are similar. Let’s call them core queries. Query 6 is a modified core query; it has another member in the slicer. Queries 3 and 4 are special: they have one of the members present in the core queries, and another one from a separate hierarchy. An attribute hierarchy, to be precise (one level only). The core queries, on the other hand, use user hierarchies (many levels).
 
 
 
Two things should be separated: autoexists and evaluation of cells.
 
The first of those two mechanisms triggers in all queries because the hierarchies in the slicer and on the axis belong to the same dimension. Therefore, only “existing” members on the axis survive. In these examples it is not so obvious, but if you replace the [All Periods] member with .MEMBERS (on the axis only, not in the slicer), you’ll see what I mean. Besides the root member, you’ll also get all the members that are related. For example, this query:

select [Measures].[Internet Sales Amount] on 0,
[Date].[Fiscal].members on 1
from [Adventure Works]
where( { [Date].[Calendar].[Date].&[20010701] } )

gives you this result:
Date.Fiscal       Internet Sales Amount
All Periods              $29,358,677.22
· FY 2002                 $7,072,084.24
· · H1 FY 2002            $3,266,373.66
· · · Q1 FY 2002          $1,453,522.89
· · · · July 2001           $473,388.16
· · · · · July 1, 2001       $14,477.34
 
 
The issue in this weird case is not about autoexists. It performs perfectly. Therefore we will not focus on that. We will focus on something else – the evaluation of cells.
 
The second mechanism is the evaluation of cells. It means that for each cell of the resulting table a tuple is formed and then evaluated. Normally, a tuple consists of members. All hierarchies’ members, one from each. Some of them are explicitly there, some are not (but are taken into account during tuple construction). If they are there, it also depends on where they come from (calculated members, the slicer and the axes are the typical areas). In that same order they win over each other, forcing the weaker to implicitly adjust to them. Members on axes adjust to the slicer (autoexists), and the slicer might get overwritten by a calculated member (if the formula says so). Except when we have sets in the slicer. Then the situation is different.
 
From now on it’s all about my speculation.
 
 
 
They do participate in autoexist (as we stated before, no problem there), but I believe they do not force the coordinate the way a single member in a tuple in slicer does.
 
I believe a set in a tuple is made weaker than an explicit member from an axis on purpose. Because of that, if a member that is related to the hierarchy of that set comes into the tuple, the set is implicitly replaced with the proper, related, single member in that hierarchy. If there is nothing related below, then with the root member.
 
And why is it made so?
 
Sets can have more than one member. In SSAS 2000, you would have to construct a calculated member and use that in the slicer. Since there were no attribute relations in SSAS 2000, you could combine it freely. In SSAS 2005+ it’s different. Attribute relations exist. A potential calculated member (an aggregate of the set, maybe that is what is being done internally), just like all calculated members, is never related to any real member on any hierarchy. When selected, it forces the rest of the hierarchies (from the same dimension only, to be precise) to shift their current members to their root.
 
If you only have that hierarchy in context, then fine, all other shift to All, there’s no conflict.
 
But what if another hierarchy is forcing its member into the tuple as an explicit member (i.e. a member coming from an axis)? That means we would have a conflict and would have to choose how to solve it, who is to win. In SSAS 2005 and 2008, attributes are related. Inside a tuple they must all be aligned. As I stated earlier, there are rules about who wins: calculated members, then the slicer, then the axes. In this case, I guess it has been decided that a member is stronger than a set. Somebody had to step back.
 
Why? Because there might have been problems otherwise. Members in a set may have different related members on other hierarchies, more than one per hierarchy. If the set were stronger, we would end up with more than one “current member” on other hierarchies, which a tuple doesn’t support. We would end up with a subcube. Therefore, sets must be weaker and adjust.
 
 
 
How about they intersect, find a common area in multidimensional space and then evaluate? Well, it’s not that easy.
 
Some combinations change before they have a chance to intersect (form a tuple) because the hierarchies are related (Queries 1, 2, 5, 6). Others do intersect. In Query 3, namely, there are two fiscal years that match the 2001 calendar year. It can’t be so; therefore, that hierarchy is shifted to the root member, so as not to interfere. Likewise, Query 4 is OK: the members form a tuple, intersect and get evaluated.
 
 
 
And if they would intersect before the attribute relations mechanism? OK, but how? Who should adjust to whom when you have an intersection that is not a tuple but a subcube (set of tuples)?
 
Here’s an example. Construct a set in slicer that causes SSAS to puke. As I witnessed a year ago, but failed to understand why.
 
SELECT
    { [Measures].[Sales Amount] } on 0
FROM
    [Adventure Works]
WHERE
    ( { [Date].[Calendar].[Calendar Quarter].&[2003]&[4],
        [Date].[Calendar].[Calendar Quarter].&[2002]&[4] },
      [Date].[Calendar Year].&[2003] )

 

Attribute relations are triggered in order to get the rest of the tuple. The error was about a fiscal hierarchy (not present in the query!). Of course, it couldn’t be evaluated differently. Hierarchies in the slicer are equally good; none of them should be preferred to win. One of them holds a set that implies 2 implicit current members on another hierarchy. The result? An error:
 

Executing the query …

The MDX function failed because the coordinate for the 'Fiscal Year' attribute contains a set.

Execution complete

 
 
 
Oops, I haven’t explained why the queries evaluate the way they do.
 
Query 1 is understandable, a coordinate is set and is kept throughout iteration over axes. Because a member in slicer is also the intersection of itself with axis members, the result is its value.
 
Query 2 is evaluated as All Periods because there was a set in the slicer (even though it has only one member, it is still a set to SSAS). There were no calculated members and no single members in the slicer, so there was nobody to stop the axis members from forcing the coordinate and making the others (the set from the slicer) adjust to it this time. It’s not the whole story, but I don’t want to bring in all the details yet. Read on.
 
In Query 3 there was a member, year 2001, from another hierarchy. An attribute hierarchy. Again, there was a set in the slicer. However, it won. How come? The same thing happened as in Query 2. But after the end of the tuple formation process (the attribute relations mechanism in action), by some miracle 2001 survived. Or was it not such a miracle?
 
If you take a look at the user hierarchies in the Adventure Works database, you’ll see that Date and Month are common attributes in both the Calendar and the Fiscal user hierarchy. That means that in Query 2 the date in July was overwritten because it is used in the Fiscal hierarchy as well as in the Calendar hierarchy. The current member of the Fiscal hierarchy was All Periods. That is what we had on the axis. Having established one and only one current member for that hierarchy caused the Date hierarchy (as well as the Date.Calendar hierarchy and many others) to shift their current members to the All member (or a related one, if any).
 
Now, in Query 3 we have another situation. We have a member from a hierarchy that is not directly related to All Periods on the Fiscal hierarchy. Therefore, there was no overwriting. Both current members, All Periods from the Fiscal hierarchy and 2001 from the Year attribute hierarchy, came into the tuple, and the result was evaluated as the result for 2001, since the axis contributed nothing but the root member.
 
Query 4 is a similar case. Both members end up in tuple, tuple is evaluated as intersection of them, meaning – 1st of July.
 
Query 5 is the opposite of Query 2. The members have switched places. Since both can be found in the same hierarchy (the Fiscal hierarchy), the member on the axis wins. One more time: the Date attribute is the lowest level in both the Fiscal and the Calendar hierarchy. By selecting a date, we select the date in both hierarchies. Now, it only matters who comes into the tuple first as an explicit member (the axis member, in the case of a set in the slicer), and who gets adjusted (the set from the slicer) because that hierarchy’s current member slot is already taken.
 
Query 6? Same thing. No matter the number of members in set in slicer, axis member wins.
 
 
 
Is there a solution? Is it a bug afterall? Is there a workaround?
 
From what I analyzed and concluded, it was a decision. Which means – by design. However, a big part of this analysis is based on assumptions.
 
Can it be improved?
 
I believe it can. Sets in the where block could be tested to see whether they contain a single member. MDX is full of shortcuts, different ways of writing this and that; one more wouldn’t hurt – a single-member set in the slicer converted to a member. After all, the reverse path is present. A member placed on an axis without {} around it, a member in lots of functions and so on, is converted to a set. A down-conversion from a set to a single member would make Query 2 evaluate the same way Query 1 did.
 
 
 
But what if sets have multiple members in slicer? What coordinate should be set up?
 
Can’t tell. That’s the problem I guess.
 
Attribute relations mechanism is so strong and eager to chew everything first, that it breaks some tooths along the way, in situation like these. From time to time it might be better to consider letting autoexists be the first in line. In my last example, the one with an error, autoexisting the slicer (intersecting, so that what survives – survives) would leave only one quarter and the year, not causing the error. However, in more complex examples it wouldn’t help.