Tag Archives: Dimensional Modelling

Oracle BI EE 10.1.3.4.1 – Sub-Totals during Drills – Conforming Dimensions

I saw a peculiar requirement being discussed in a Pentaho forum this week (I was actually browsing for some unique Pentaho features and stumbled upon it) wherein end users wanted to get sub-totals automatically while drilling on a dimension. In normal circumstances I would have left it as a one-off requirement and would not have tried to replicate it in BI EE. But when I thought about it further, it seemed like a very common requirement. Whenever I look at my phone or internet bills, I always look for sub-totals; without them a reporting tool would be incomplete. BI EE does offer some good sub-totalling features out of the box, but what is not available is an automated sub-totalling feature after drilling down on a dimension in a dashboard. Many might argue that when we drill we are actually drilling from a total to its individual values. But end users sometimes would still like to have this feature (at least I would love to have it in BI EE).

There are a couple of options to enable this. Both of them require a good amount of work (for a small feature) in the repository, so I would recommend using either solution only for one or two dimensions with not more than 4 to 5 levels. It is possible to use this for more than 3 dimensions as well, but the repository can grow in size pretty quickly. We shall be looking at a technique that leverages the conforming dimensions concept of BI EE. Conforming dimensions can be explained easily by the diagram below.

image

As you see, whenever 2 different dimension tables form 2 logical table sources for the same logical dimension and we have 2 different fact tables, the dimension and fact combination produces a union of the dimension members. The basic premise behind conforming dimensions is that there is no linkage between the dimension tables or between the fact tables themselves. That is, Dim1 should be related to Fact1 alone and Dim2 should be related to Fact2 alone. If there is any relation between Dim1 and Fact2, or vice-versa, the BI Server will not treat them as conforming dimensions.
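Conceptually (the physical SQL the BI Server actually generates will differ), the conforming-dimension result is equivalent to a full outer join of the two dimension/fact aggregates on the shared attribute, which is what yields the union of the dimension members. The table and column names below are just placeholders matching the diagram.

SELECT NVL(A.MEMBER_DSC, B.MEMBER_DSC) MEMBER_DSC, A.MEASURE_1, B.MEASURE_2
FROM
(SELECT D.MEMBER_DSC, SUM(F.AMOUNT) MEASURE_1
FROM DIM1 D, FACT1 F
WHERE D.MEMBER_ID = F.MEMBER_ID
GROUP BY D.MEMBER_DSC) A
FULL OUTER JOIN
(SELECT D.MEMBER_DSC, SUM(F.AMOUNT) MEASURE_2
FROM DIM2 D, FACT2 F
WHERE D.MEMBER_ID = F.MEMBER_ID
GROUP BY D.MEMBER_DSC) B
ON (A.MEMBER_DSC = B.MEMBER_DSC)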

With that background, let's move on to the actual requirement: producing dynamic sub-totals. The idea is very straightforward. We make every member that we drill on appear as a member in the column that comes up after the drill (in addition to the child members of the parent member we drilled on). For example, consider the report below.

image

As you see, when we drill on the Channel Total member we get the children of that member in addition to the member itself, which simulates a sub-total. If a dimension has n levels, we have to model it as having n+1 levels. For example, the Channel dimension here has 3 levels: Channel Total –> Channel Class –> Channel Desc. So, when we model it in our repository, we basically need 4 levels as shown below.

image

In line with the conforming dimensions concept explained above, in order to ensure that BI EE fires a different query for each fact table, we need to create 3 aliases for the Channels dimension table and 3 aliases for the fact table.

image

Include all 3 channel aliases as logical table sources for the Channel dimension. Now ensure that the mappings of the sources follow this table.

image

Also create 3 separate logical fact tables with the same aggregation for the amount measure. Each logical fact table will be sourced by one of the fact aliases.

image

Now, for normal reports that do not require this capability, just pull in the dimension and one of the fact alias tables. Wherever the sub-total drill is required, ensure that the fact measures from all the fact tables are chosen. This makes the conforming-dimension join kick in, as shown below.

image

image

From here on, it's just a matter of combining all the facts into a single measure using the formula below.

IFNULL(SALES.AMOUNT_SOLD,IFNULL("SALES1".AMOUNT_SOLD,0)+IFNULL("SALES2".AMOUNT_SOLD,0))

image

image

The position of the sub-total can be easily controlled by adding one more sort-order column to all the logical table sources.

Oracle BI EE 10.1.3.4.1 – Reporting on Non-Transactional Dimension values – Equivalence of Outer Joins

As with any reporting tool, BI EE works on the premise of converting the front-end column selection into the corresponding physical queries. But one of the advantages that makes BI EE stand apart from other reporting toolsets is its ability to do certain joins, sorts etc. in memory without pushing them down to the database. This feature has multi-fold advantages; in the BI Applications, for example, it is put to good use by having a single repository that can report across multiple ERP modules using conforming dimensions. Another advantage is that it can help in generating reports that return all dimension members even if some of those members do not have a transaction.

For example, whenever a dimension attribute and a fact measure are pulled into a report, BI EE converts them into a physical join between the dimension and the fact table. So, in effect, only those dimension members that have a corresponding fact transaction would come out in the report. In many cases, though, end users want to see all the dimension members. This is illustrated better by the screenshots below.

image

image

As you see, even though we have 5 different channels, since our fact table only has transactions for 3 of them, our report produces data for only 3 channel members. But what if the end users want to see all 5 members in the report (non-transactional channel members would be shown as null in the measure)? The most common resolution is to use outer joins in the physical layer. But this does not work in all cases: as soon as a filter is applied on the fact measures, the outer-join effect is negated. There are also possible performance impacts in using outer joins. So, let's look at a modeling technique that avoids the use of outer joins.

The basic idea is to simulate a Cartesian join across dimensions and then make the BI Server do the conforming-dimensions join using a separate dummy fact table. So, to start with, create a dummy fact table in the Physical layer which will be based on a Select view (SELECT 1 FROM DUAL, or something like that).
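For reference, a minimal sketch of the Select view behind the dummy fact table; the column alias DUMMY_FACT is just an assumed name.

-- a single constant row; the column becomes the hidden dummy measure
SELECT 1 AS DUMMY_FACT
FROM DUAL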

image

Now make every dimension in the warehouse have a Cartesian join with this dummy fact table.

image 

image

Now create a new logical fact table with this dummy table as its logical table source, containing the dummy fact column. Ensure that the column has an aggregation set (the same aggregation as the other measures in the original fact table).

image

Now include this dummy measure along with the other fact measures whenever you want an outer-join effect together with filters on the outer-joined fact table. Just hide the dummy measure from display.

image

Sometimes end users might also want to browse multiple dimensions simultaneously. Since there is no join across the dimensions, this would not produce all the dimension member combinations. This is illustrated below.

image

As you see, we only get those channels for which we have transactions in the fact table. In many cases we do not want dimension member values to be dependent on the fact table. To get around this, we use the same dummy fact table as above and set it as the implicit fact table, as shown below.

image

Now, if you view the same report again, you get all the dimension member combinations.

image

This is because the implicit fact column makes the BI Server choose the dummy fact as the fact table for joining the dimensions, which results in a Cartesian product across the dimensions.
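The exact physical SQL will vary, but conceptually the dummy fact turns the request into something like the sketch below; DIM_A and DIM_B (and their MEMBER_DSC columns) are hypothetical stand-ins for the two dimensions in the report.

-- the dummy fact joins to every dimension row, so the result is effectively
-- a Cartesian product of the two dimensions' members
SELECT A.MEMBER_DSC, B.MEMBER_DSC
FROM DIM_A A,
(SELECT 1 AS DUMMY_FACT FROM DUAL) F,
DIM_B B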

Oracle BI EE 10.1.3.4.1 – Modeling Parent-Child Hierarchies – Using Federation

One common issue that gets highlighted when someone uses BI EE is its inability to leverage parent-child hierarchies out of the box. The 11g release, scheduled sometime later next year, is supposed to address this seamlessly. Having said that, being consultants ourselves, most of the time we find ourselves in situations where we have to use parent-child hierarchies today. I will be covering a technique that can address hierarchical reporting on parent-child hierarchies in the current release itself.

It is possible to mimic value-based hierarchical reporting in BI EE in a number of ways. I will provide 3 high-level options here. Though there can be other possibilities, they would only be slight variations of the 3 listed below.

1. The first option is to flatten the hierarchies and convert the parent-child structure into a level-based hierarchy. This is the approach that Oracle’s BI Applications product called CPM Analytics uses. It is a straightforward approach wherein a PL/SQL or SQL program converts the hierarchy into a level-based one, with the number of levels pre-determined.

2. The second option is to mimic everything using a combination of Answers UI navigation and Repository based drills.

3. The third option is to provide out of the box drills from the repository using a combination of BI Server Federation and Oracle database hierarchical operators. Here there is no need for setting up custom navigation in Answers.

I will be covering the 3rd option, which requires a fair bit of understanding of federation, how the BI Server uses logical table sources, and the issues inherent in a parent-child hierarchy. I will try to address them in a simplified manner here. Let's take the Global schema. I will be using a parent-child hierarchy table for the Products dimension (shown below).

tmp6A

As you see, it is a simple parent-child hierarchy table. The idea behind the technique is to have one logical table source for every parent-child level pair. This is best illustrated by the diagram below.

tmp6F

So effectively we would have n-1 logical table sources for n levels. For drilling from Level 1 to Level 2, we use the parent-to-child relation. From Level 2 to Level 3, we make the BI Server switch to another logical table source wherein Level 2 is now the parent and Level 3 the child. But when we drill, the values of Level 1 have to be propagated across all the table sources, and this is where the crux of the technique lies. So the first step is to alter the above parent-child hierarchy table to contain the level information as well as the Level-0 product information, i.e. every record in the parent-child table should carry the level number and all the Level-0 products that roll up into that parent.

tmp74

In Oracle 10g this can be done easily through the SQL below. We are doing this to ensure that we get the rolled-up numbers for every parent. There can be multiple variations of the script below, but all we need is a level number and all the Level-0 children of every record in the parent-child table. Not all the sub-queries below are strictly necessary.

SELECT LEVEL_1, LEVEL_2, PRODUCT_NAME, PRODUCT_ID, LEVEL_NUM
FROM
(
SELECT LEVEL_1,
CASE WHEN LEVEL_1 = LEVEL_2 THEN LEVEL_3 ELSE LEVEL_2 END LEVEL_2,
LEVEL_3 PRODUCT_NAME, LEVEL_3_ID PRODUCT_ID
FROM
(SELECT LEVEL_1,
CASE WHEN INSTR(PATH,'|',1,2) = 0 THEN LEVEL_1
WHEN INSTR(PATH,'|',1,3) = 0
THEN SUBSTR(PATH,INSTR(PATH,'|',1,2) + 1,LENGTH(PATH))
ELSE SUBSTR(PATH,INSTR(PATH,'|',1,2)+1,INSTR(PATH,'|',1,3)-INSTR(PATH,'|',1,2)-1)
END LEVEL_2,
LEVEL_3 LEVEL_0,
PATH LEVEL_1_ID, LEVEL_3_ID, LEVEL_3, LEAF
FROM
(
SELECT SYS_CONNECT_BY_PATH(PARENT_DSC,'|') PATH,
CONNECT_BY_ROOT PARENT_ID LEVEL_1_ID,
PRODUCT_ID LEVEL_3_ID,
CONNECT_BY_ROOT PARENT_DSC LEVEL_1,
PRODUCT_DSC LEVEL_3,
CONNECT_BY_ISLEAF LEAF,
LEVEL
FROM
(SELECT TO_NUMBER(A.PRODUCT_ID) PRODUCT_ID,
TO_NUMBER(A.PARENT_ID) PARENT_ID,
A.PRODUCT_DSC PRODUCT_DSC,
B.PRODUCT_DSC PARENT_DSC
FROM PRODUCT_CHILD_PARENT A,
PRODUCT_CHILD_PARENT B WHERE B.PRODUCT_ID(+) = A.PARENT_ID)
CONNECT BY PRIOR PRODUCT_ID = PARENT_ID) A
WHERE LEAF = 1)) A,
(SELECT DISTINCT PARENT_DSC, LEVEL LEVEL_NUM FROM
(SELECT TO_NUMBER(A.PRODUCT_ID) PRODUCT_ID,
TO_NUMBER(A.PARENT_ID) PARENT_ID,
A.PRODUCT_DSC PRODUCT_DSC,
B.PRODUCT_DSC PARENT_DSC
FROM PRODUCT_CHILD_PARENT A,
PRODUCT_CHILD_PARENT B WHERE B.PRODUCT_ID(+) = A.PARENT_ID)
CONNECT BY PRIOR PRODUCT_ID = PARENT_ID
START WITH PARENT_DSC = 'Total Product') B
WHERE A.LEVEL_1 = B.PARENT_DSC

The idea is to create a table to hold the output of the above script. Now import this table into the physical layer. Since PRODUCT_ID will no longer be unique, make a complex join between the PRODUCT_ID in this table and the fact table.
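A minimal sketch of that step, assuming the table name PRODUCT_PARENT_CHILD_TBL1 (the name referenced by the select views later in this post):

CREATE TABLE PRODUCT_PARENT_CHILD_TBL1 AS
SELECT LEVEL_1, LEVEL_2, PRODUCT_NAME, PRODUCT_ID, LEVEL_NUM
FROM (
-- paste the full hierarchical query from the script above here
)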

image

All this gives us is the information required for the first logical table source. Remember that when we jump from one logical table source to another we would still be carrying the filters on Level 1, Level 2 etc. If we leave those columns mapped to the parent or child columns, the drills will not work. So, for each additional logical table source we need an additional Select view in the physical layer. This select view is a Cartesian product of the filterable columns. Do not worry too much about performance: the Cartesian product still produces only valid combinations because we filter them while drilling (although this filter in effect does not filter anything from a reporting standpoint). The Cartesian-product select view exists only to disable the filters while making the jump. So, for the 2nd logical table source, create a select view in the physical layer using the SQL below.

SELECT
B.LEVEL_1 LEVEL_1,
A.LEVEL_1 LEVEL_2,
A.LEVEL_2 LEVEL_3,
A.PRODUCT_ID PRODUCT_ID,
A.PRODUCT_NAME PRODUCT_NAME,
A.LEVEL_NUM LEVEL_NUM
FROM
(SELECT DISTINCT LEVEL_1 FROM PRODUCT_PARENT_CHILD_TBL1) B,
PRODUCT_PARENT_CHILD_TBL1 A

image

In the same way, for the 3rd logical table source we need a Cartesian product of 2 tables: one with the Level 1 and Level 2 columns, and the other being the source of the first logical table source.

SELECT
B.LEVEL_1 LEVEL_1,
B.LEVEL_2 LEVEL_2,
A.LEVEL_1 LEVEL_3,
A.LEVEL_2 LEVEL_4,
A.PRODUCT_ID PRODUCT_ID,
A.PRODUCT_NAME PRODUCT_NAME,
A.LEVEL_NUM LEVEL_NUM
FROM
PRODUCT_PARENT_CHILD_TBL1 A,
(SELECT DISTINCT LEVEL_1, LEVEL_2 FROM PRODUCT_PARENT_CHILD_TBL1) B

Now make a physical complex join between these 2 select views and the fact table. In the example above, I have stopped at 4 levels. Depending on your requirement and the depth of the hierarchy, the same procedure can be repeated for more levels.

image

Once we have the physical sources ready, the next step is to design the Business Model layer. As discussed above, we would have 3 logical table sources contributing to the Product dimension, and each source would have a mapping as shown below.

image

image

image

image

The ordering of the logical table sources is very important. Generally, whenever there are 2 or more dimensional logical table sources that contribute to the same set of columns, the first logical table source is chosen for querying (not always true – it also depends on the measure). So, when we create a report containing LEVEL1 and a fact measure we would be hitting the parent-child table (and not the Cartesian-product views). The other important aspect is the filtering on the level number. When we move from one logical table source to another, since we are negating the effect of the drill filter using the Cartesian product, we have to explicitly apply the level filters for each logical table source as shown below (for the Level 1 LTS the filter equates LEVEL_NUM to 1, for Level 2 to 2, and so on).

image

Once this is done, just create the hierarchy as shown below.

image

This will ensure that our BMM is parent-child-hierarchy aware for the Product dimension. The drills will be seamless for an end user.

tmp96

There can be many more improvements to the above technique. For example, instead of the Cartesian-product select views we could just have another table with repeating Level 1/Level 2 values. But the idea would remain the same.

Hybrid SCDs using OWB and OBIEE

If you attended my ODTUG Kaleidoscope presentation on Oracle Warehouse Builder (feel free to view the presentation), then you know that certain aspects of the product leave me scratching my head, especially as a follower of the dimensional modeling approach popularized by Ralph Kimball. One of the subjects covered was dimensional operators and the lack of support for hybrid slowly-changing dimensions (SCDs). For an enterprise data warehouse, this is a real deal-breaker. When you look at the Customer dimension, for instance, why would we ever want to track historical changes to attributes such as Birthdate or Ethnicity? Changes to these attributes can only be seen as corrections, and these corrections would need to be made to ALL rows for that particular customer: the current row, and all other rows inserted as a result of Type 2 changes through the life of that customer. There are other examples that might not be so black and white: sometimes, the end user simply needs to see both Type 1 and Type 2 changes in the same table.

So what are we left with? We have two choices really. First, we can choose an alternative for our SCD processing. This could entail custom coding our SCD handling so that we can represent both Type 1 and Type 2 changes, or possibly using a third-party add-in, such as the Transcend Framework, which I developed to handle situations such as these (Transcend will hopefully be available as an option from Rittman Mead, so watch the blog for news). The other more interesting option would be to go ahead and use OWB for our SCD processing and then attempt to represent some of our Type 2 changes as Type 1’s in the reporting layer. With the flexibility that OBIEE provides, I should be able to make a go of it.

First, I’ll use OWB to create a fact table based on SH.SALES called SALES_FACT, a dimension table based on SH.PRODUCTS called PRODUCT_DIM, and the necessary operators and mappings to load the two. For the PRODUCT_DIM table, I used a standard dimension operator with two levels.

Product Levels.png

You can see that I configured all my attributes as Type 2 attributes, including the VALID column. For the purposes of this example, imagine this to be a kind of Discontinued Flag, set to (A)vailable or (N)ot Available. Currently, all the products in my warehouse are available for purchase… at least they are right now. The business informed me that they would like this attribute represented as a Type 1 column. When forecasting sales for the coming year, they’d like to see how their discontinued products performed in the last year or two to make sure adjustments for these products are made.

I created a mapping to load the data, pulling all the rows from SH.PRODUCTS initially… using the PROD_EFF_FROM and PROD_EFF_TO dates to populate the effective dates in our dimension.

Product Mapping.png

After running this mapping, we can see a small data set from the PRODUCT_DIM table, specifically, the Photo and Hardware categories.

Product Data 1.png

Suppose that our client decides to get out of the Hardware business (did you look at the specs for the PCs they are selling… no wonder!), discontinuing all the products in the Hardware category. So this change from the inventory system makes its way to our source table in the form of the following two rows:

Source Data 1.png

Now, I run the PRODUCT_DIM mapping again, and have the following rows in the PRODUCT_DIM table for the Hardware category:

Product Data 2.png

If we are triggering history for all the attributes in the PRODUCT_DIM table, what value will these two new rows provide? Currently, no rows from the SALES_FACT table reference them, and seeing as this change designates that these products are no longer available, it’s unlikely that any new sales are going to be associated with them either. Furthermore, we want to see how our discontinued products stack up with our current product line… and the only way to address this kind of reporting is with Type 1 attributes.

So now I have to try and address this issue using OBIEE. When I create the Physical Layer, I bring the SALES_FACT table in verbatim from the database, but I adjust the PRODUCT_DIM slightly, using a Table Type of “Select” instead of “Physical”. Basically, this means that the table in the Physical Model will actually be the results of a SELECT statement from the database:

Table Type Select.png
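The screenshot above shows the Table Type set to Select; below is a minimal sketch of the kind of SELECT it contains. Column names such as PRODUCT_KEY and PRODUCT_NAME are assumptions about the generated PRODUCT_DIM table, while PRODUCT_SOURCE_ID, PRODUCT_EFFECTIVE_DATE and VALID are the columns discussed here.

SELECT P.PRODUCT_KEY,
P.PRODUCT_SOURCE_ID,
P.PRODUCT_NAME,
P.PRODUCT_EFFECTIVE_DATE,
P.VALID,
-- present the Type 2 VALID column as a Type 1 attribute by repeating the most
-- recent value across all rows of each natural key
LAST_VALUE(P.VALID) OVER
(PARTITION BY P.PRODUCT_SOURCE_ID
ORDER BY P.PRODUCT_EFFECTIVE_DATE
ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) CURRENT_VALID
FROM PRODUCT_DIM P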

I’m using the Oracle analytic function LAST_VALUE to represent all values for the VALID attribute across each natural key (PRODUCT_SOURCE_ID) according to the most recent value, using PRODUCT_EFFECTIVE_DATE as the ordering mechanism. Now, whenever we update our product table to discontinue a particular product, we can use this attribute to provide impact reports on how those products affected our sales in the past. Below is a quick Answers report that demonstrates the behavior of this new Type 1 attribute, starting first with the criteria:

Build Answer.png

and then the results:

Display Answer.png

When you consider the modeling effort that goes into assigning Type 1 and Type 2 attributes, you might say that this approach is not merely a replacement for a feature that OWB lacks, but it might actually be the preferred approach. Consider the case when the business decides, after analyzing their reports for several months, that they would like to change the SCD Type of a particular column. While it’s relatively easy to go from a Type 2 to a Type 1, the reverse is nearly impossible.

Depending on the environment, there would be rows and rows of historical changes never inserted. We would have to tell the business that we can effect that change going forward, but the historical data for all those Type 2 changes simply doesn't exist. But with our new OBIEE approach… it's as simple as making a few changes to the Physical Model in our repository. You can also see that I brought the VALID column in as well, and mapped it in the Business Model to an attribute called Historical Valid. This gives me the ability to report on it as a Type 1 when necessary, and in the standard Type 2 manner in other reports.

Visualizing relational data as Essbase/OLAP cubes – Partition Outer Joins and MODEL Clauses – Part 3

In a prior blog entry here, I showed the advantages of using the MODEL clause, and how it can be used to visualize relational data in a multi-dimensional format. In most cases, using the MODEL clause alone can be sufficient. Just to recap, the MODEL clause divides your result set into 3 parts:

1. PARTITION BY set of dimensions – similar to index entries in Essbase (sparse dimensions)
2. DIMENSION BY set of dimensions – similar to blocks in Essbase (dense dimensions)
3. MEASURES – the measure values themselves

So, whenever you use MODEL, Oracle constructs a multi-dimensional array for each combination of PARTITION BY dimension values. The array is dimensioned by the dimensions specified in the DIMENSION BY clause. Though a multi-dimensional array is created, it contains only the intersections present in the fact table and not every possible intersection. For example, consider the SQL query below.

SELECT FISCAL_YEAR_DSC, CHANNEL_DSC, UNITS
FROM
(select sum(T10976.UNITS) as UNITS, T10939.FISCAL_YEAR_DSC as FISCAL_YEAR_DSC, T10881.CHANNEL_DSC as CHANNEL_DSC
from CHANNEL_DIM T10881, TIME_DIM T10939, UNITS_FACT T10976
where (T10881.CHANNEL_ID = T10976.CHANNEL_ID and T10939.MONTH_ID = T10976.MONTH_ID)
group by T10881.CHANNEL_DSC, T10939.FISCAL_YEAR_DSC)
MODEL
PARTITION BY (FISCAL_YEAR_DSC)
DIMENSION BY (CHANNEL_DSC)
MEASURES (UNITS)
RULES (UNITS['Total'] = SUM(UNITS)[ANY])
ORDER BY 1, 2

image

As you see, the above query produces sub-totals for every fiscal year. But if you look at the output, the year FY-06, which has no transactions in the fact table, is missing. Similarly, the Television channel is missing from the output since it has no fact-level transactions. In some cases, even though there are no transactions, we need the capability to show these non-existent (in the fact table) dimension values in the output (data densification). This is where partition outer joins come in very handy. One can visualize a partition outer join as a Cartesian product of dimensions; the major difference is that a partition outer join gives you control over which combinations are produced. Stewart has already covered partition outer joins in a prior blog entry here.

For example, let's say we have a requirement to show all the dimension values of Channel but only the transactional values of Fiscal Year in the report above, i.e. we need to bring the Television channel into the report for each year as well.

image

Usually we would approach this using a normal outer join with the Channel Dimension as shown below.

SELECT A.FISCAL_YEAR_DSC FISCAL_YEAR_DSC, B.CHANNEL_DSC CHANNEL_DSC, NVL(A.UNITS,0) UNITS
FROM
(select sum(T10976.UNITS) as UNITS, T10939.FISCAL_YEAR_DSC as FISCAL_YEAR_DSC, T10881.CHANNEL_DSC as CHANNEL_DSC
from CHANNEL_DIM T10881, TIME_DIM T10939, UNITS_FACT T10976
where (T10881.CHANNEL_ID = T10976.CHANNEL_ID and T10939.MONTH_ID = T10976.MONTH_ID)
group by T10881.CHANNEL_DSC, T10939.FISCAL_YEAR_DSC) A
RIGHT OUTER JOIN
(SELECT DISTINCT CHANNEL_DSC FROM CHANNEL_DIM) B
ON (A.CHANNEL_DSC = B.CHANNEL_DSC)

image

But the above outer join only provides a single row for the Television channel, whereas our requirement is to show the Television channel for every fiscal year that has transactions. This is where a partition outer join is of great help. Let's modify the above query to the one shown below.

SELECT A.FISCAL_YEAR_DSC FISCAL_YEAR_DSC, B.CHANNEL_DSC CHANNEL_DSC, NVL(A.UNITS,0) UNITS
FROM
(select sum(T10976.UNITS) as UNITS, T10939.FISCAL_YEAR_DSC as FISCAL_YEAR_DSC, T10881.CHANNEL_DSC as CHANNEL_DSC
from CHANNEL_DIM T10881, TIME_DIM T10939, UNITS_FACT T10976
where (T10881.CHANNEL_ID = T10976.CHANNEL_ID and T10939.MONTH_ID = T10976.MONTH_ID)
group by T10881.CHANNEL_DSC, T10939.FISCAL_YEAR_DSC) A
PARTITION BY (A.FISCAL_YEAR_DSC)
RIGHT OUTER JOIN
(SELECT DISTINCT CHANNEL_DSC FROM CHANNEL_DIM) B
ON (A.CHANNEL_DSC = B.CHANNEL_DSC)

image

Now, if you notice, we have all the channel dimension values, including the one non-existent in the fact table, for every transactional year. We can extend the same concept to show both the non-existent year (FY-06) and the non-existent channel dimension values using the query below.

SELECT A.FISCAL_YEAR_DSC, B.CHANNEL_DSC, NVL(A.UNITS,0)
FROM
(SELECT B.FISCAL_YEAR_DSC FISCAL_YEAR_DSC, A.CHANNEL_DSC CHANNEL_DSC, NVL(A.UNITS,0) UNITS
FROM
(select sum(T10976.UNITS) as UNITS, T10939.FISCAL_YEAR_DSC as FISCAL_YEAR_DSC, T10881.CHANNEL_DSC as CHANNEL_DSC
from CHANNEL_DIM T10881, TIME_DIM T10939, UNITS_FACT T10976
where (T10881.CHANNEL_ID = T10976.CHANNEL_ID and T10939.MONTH_ID = T10976.MONTH_ID)
group by T10881.CHANNEL_DSC, T10939.FISCAL_YEAR_DSC) A
PARTITION BY (A.CHANNEL_DSC)
RIGHT OUTER JOIN
(SELECT DISTINCT FISCAL_YEAR_DSC FROM TIME_DIM) B
ON (A.FISCAL_YEAR_DSC = B.FISCAL_YEAR_DSC)) A
PARTITION BY (A.FISCAL_YEAR_DSC)
RIGHT OUTER JOIN
(SELECT DISTINCT CHANNEL_DSC FROM CHANNEL_DIM) B
ON (A.CHANNEL_DSC = B.CHANNEL_DSC)

image

This can further be modified to include the sub-totals using the MODEL clause.

SELECT FISCAL_YEAR_DSC, CHANNELS, NVL(UNITS,0)
FROM
(SELECT B.FISCAL_YEAR_DSC FISCAL_YEAR_DSC, A.CHANNEL_DSC CHANNEL_DSC, NVL(A.UNITS,0) UNITS
FROM
(select sum(T10976.UNITS) as UNITS, T10939.FISCAL_YEAR_DSC as FISCAL_YEAR_DSC, T10881.CHANNEL_DSC as CHANNEL_DSC
from CHANNEL_DIM T10881, TIME_DIM T10939, UNITS_FACT T10976
where (T10881.CHANNEL_ID = T10976.CHANNEL_ID and T10939.MONTH_ID = T10976.MONTH_ID)
group by T10881.CHANNEL_DSC, T10939.FISCAL_YEAR_DSC) A
PARTITION BY (A.CHANNEL_DSC)
RIGHT OUTER JOIN
(SELECT DISTINCT FISCAL_YEAR_DSC FROM TIME_DIM) B
ON (A.FISCAL_YEAR_DSC = B.FISCAL_YEAR_DSC)) A
PARTITION BY (A.FISCAL_YEAR_DSC)
RIGHT OUTER JOIN
(SELECT DISTINCT CHANNEL_DSC FROM CHANNEL_DIM) B
ON (A.CHANNEL_DSC = B.CHANNEL_DSC)
MODEL
PARTITION BY (FISCAL_YEAR_DSC)
DIMENSION BY (B.CHANNEL_DSC CHANNELS)
MEASURES (UNITS)
RULES (UNITS['Total'] = SUM(UNITS)[ANY])
ORDER BY 1, 2

image

The primary advantage of using partition outer joins and the MODEL clause together is that one can do custom member-based allocations, calculations etc. directly on a relational data source. If you do not have Essbase or Oracle OLAP and would still like to do complex data manipulation on a relational source, MODEL and partition outer joins can come in very handy.
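As a final illustration, here is a hedged sketch of a custom member calculation on the same CHANNEL_DIM/TIME_DIM/UNITS_FACT tables; the 'Total' rule mirrors the queries above, while the 'Online (plan)' member and its 10% allocation are made-up examples of the kind of rule you could add.

select FISCAL_YEAR_DSC, CHANNEL_DSC, UNITS
from
(select sum(T.UNITS) as UNITS, Y.FISCAL_YEAR_DSC, C.CHANNEL_DSC
from CHANNEL_DIM C, TIME_DIM Y, UNITS_FACT T
where C.CHANNEL_ID = T.CHANNEL_ID and Y.MONTH_ID = T.MONTH_ID
group by C.CHANNEL_DSC, Y.FISCAL_YEAR_DSC)
MODEL
PARTITION BY (FISCAL_YEAR_DSC)
DIMENSION BY (CHANNEL_DSC)
MEASURES (UNITS)
RULES (
UNITS['Total'] = SUM(UNITS)[ANY],
-- allocate 10% of the total to a hypothetical planning member
UNITS['Online (plan)'] = 0.1 * UNITS['Total']
)
ORDER BY 1, 2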