Tag Archives: Oracle BI Suite EE

Incremental refresh of Exalytics aggregates using native BI Server capabilities

One of the key design features of the Exalytics In-Memory Machine is the use of aggregates (pre-calculated summary data), held in the TimesTen In-Memory database. Out of the box (“OotB”) these aggregates are built through the OBIEE tool, and when the underlying data changes they must be rebuilt from scratch.

For OBIEE (Exalytics or not) to make use of aggregate tables in a manner invisible to the user, they must be mapped into the RPD as additional Logical Table Sources for the respective Logical Table in the Business Model and Mapping (BMM) layer. OBIEE will then choose the Logical Table Source that it thinks will give the fastest response time for a query, based on the dimension level at which the query is written.

OBIEE’s capability to load aggregates is provided by the Aggregate Persistence function, scripts for which are generated by the Exalytics Summary Advisor, or the standard tool’s Aggregate Persistence Wizard. The scripts can also be written by hand.
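
To give an idea of the scripts involved, below is a minimal sketch of a hand-written create aggregates statement for the example used later in this article. The dimension hierarchy and level names ("Sales"."Times"."Month") are illustrative only; the connection pool and target schema are the ones used in the examples below:

create aggregates
"ag_sales_month"
for "Sales"."Fact Sales" at levels ("Sales"."Times"."Month")
using connection pool "TimesTen aggregates"."TT_CP"
in "TimesTen Aggregates".."EXALYTICS";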

Aggregate Persistence has two great benefits:

  1. It uses the existing metadata model of the RPD to understand where to get the source data for the aggregate from, and how to aggregate it. Because it uses standard RPD metadata, it also means that any data source that is valid for reporting against in OBIEE can be used as a source for the aggregates, and OBIEE will generate the extract SQL automagically. The aggregate creation process becomes source-agnostic. OBIEE will also handle any federation required in creating the aggregates. For example, if there are two source systems (such as Sales, and Stock) but one target aggregate, OBIEE will manage the federation of the aggregated data, just as it would in any query through the front-end.
  2. All of the required RPD work for mapping the aggregate as a new Logical Table Source is done automagically. There is no work on the RPD required by the developer.

However, there are two particular limitations to ‘vanilla’ Aggregate Persistence:

  1. It cannot do incremental refresh of aggregates. Whenever the underlying data changes, the aggregate must be dropped and rebuilt in its entirety. This can be extremely inefficient if only a small proportion of the source data has changed, and can ultimately lead to scalability and batch SLA issues.
  2. Each time that the aggregate is updated, the RPD is modified online. This can mean that batch times take longer than they need to, and is also undesirable in a Production environment.

I have written about alternatives and variations to the OotB approach for refreshing Exalytics aggregates previously here and here, namely:

  1. Loading TimesTen aggregates through bespoke ETL, in tools such as GoldenGate and ODI. TimesTen supports a variety of interfaces – including ODBC and JDBC – and therefore can be loaded by any standard ETL tool. A tool such as GoldenGate can be a good way of implementing a light-touch CDC solution against a source database.
  2. Loading TimesTen aggregates directly using TimesTen’s Load from Oracle functionality, taking advantage of Aggregate Persistence to do the aggregate mapping work in the RPD

Both of these methods have downsides. Using bespoke ETL is ultimately very powerful and flexible, but it has the overhead of writing the ETL along with requiring manual mapping of the aggregates into the RPD. The TimesTen Load from Oracle method avoids this mapping work (Aggregate Persistence does it for you), but it can only be used against an Oracle source database, and only where a single physical SQL statement is sufficient to load the aggregate.

Refreshing aggregates using native OBIEE functionality alone

Here I present another alternative method for refreshing Exalytics aggregates, but using OBIEE functionality alone and remaining close to the OotB method. It is based on Aggregate Persistence but varies in two significant ways:

  1. Incremental refresh of the aggregate is possible
  2. No changes are made to the RPD when the aggregate is refreshed

The method still uses the fundamentals of Aggregate Persistence since, as I mentioned above, it has some very significant benefits:

  • BI Server uses (dare I say, leverages) your existing metadata modelling work, which is necessary – regardless of your aggregates – for users to report from the unaggregated data.
  • BI Server generates your aggregate refresh ETL code
  • If your source systems change, your aggregate refresh code doesn’t need to – just as reports are decoupled from the source system through the RPD metadata layers, so are your target aggregates

For us to understand the new method, a bit of background and explanation of the technology is required.

Background, part 1: Aggregate Persistence – under the covers

When Aggregate Persistence runs, it does several things:

  1. Remove aggregates from physical database and RPD mappings
  2. Create the physical aggregate tables and indexes on the target database, for the fact aggregate and supporting dimensions
  3. Update the RPD Physical and Logical (BMM) layers to include the newly built aggregates
  4. Populate the aggregate tables, from source via the BI Server to the aggregate target (TimesTen)

What we are going to do here is pick apart Aggregate Persistence and invoke just part of it. We don’t need to rebuild the physical tables each time we refresh the data, and we don’t need to touch the RPD. We can actually just tell the BI Server to load the aggregate table, using the results of a Logical SQL query. That is, pretty much the same SQL that would be executed if we ran the aggregate query from an analysis in the OBIEE front end.

The command to tell the BI Server to do this is the populate command, which can be found from close inspection of the nqquery.log during execution of normal Aggregate Persistence:

populate "ag_sales_month" mode ( append table connection pool "TimesTen aggregates"."TT_CP") as
select_business_model "Sales"."Fact Sales"."Sale Amount" as "Sale_Amoun000000AD","Sales"."Dim Times"."Month YYYYMM" as "Month_YYYY000000D0" 
from "Sales";

This populate <table> command can be sent directly to the BI Server (exactly in the way that a standard create aggregate Aggregate Persistence script would be – with nqcmd etc) and causes it to load the specified table, using the specified connection pool, with the results of the logical SQL given. The re-creation of the aggregate tables, and the RPD mapping, doesn't get run.

The syntax of the populate command is undocumented, but from observing the nqquery.log file it follows this pattern:
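
In outline (the placeholders here are mine, inferred only from the statements captured in nqquery.log; the optional where clause is covered later in this article):

populate "<target table>" mode ( append table connection pool "<database>"."<connection pool>") as
select_business_model <logical column list>
from "<business model>"
[ where <predicate> ];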

The populate statement captured from nqquery.log above is itself a very simple example, showing how an aggregate with a measure summarised by month is populated.

SELECT_BUSINESS_MODEL was written about by Venkat here, and is BI Server syntax allowing a query directly against the BMM, rather than the Presentation Layer which Logical SQL usually queries. You can build and test the SELECT_BUSINESS_MODEL clause in OBIEE directly (from Administration -> Issue SQL), in nqcmd, or just by extracting it from the nqquery.log.
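
For example, the logical query from the populate statement above can be run on its own to check the data it returns (the column aliases are simply the auto-generated names from the original script):

select_business_model "Sales"."Fact Sales"."Sale Amount" as "Sale_Amoun000000AD","Sales"."Dim Times"."Month YYYYMM" as "Month_YYYY000000D0" 
from "Sales";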

Background, part 2: Secret Sauce – INACTIVE_SCHEMAS

So, we have seen how we can take advantage of Aggregate Persistence to tell the BI Server to load an aggregate, from any source we’ve modelled in the RPD, without requiring it to delete the aggregate to start with or modify the RPD in any way.

Now, we need a bit of secret sauce to complete the picture and make this method a viable one.

In side-stepping the full Aggregate Persistence sequence, we have one problem. The Logical SQL that we use in the populate statement is going to be parsed by the BI Server to generate the select statement(s) against the source database, using its standard query parsing and the metadata defined in the RPD. Because the aggregates we are loading are already mapped into the RPD, by default the BI Server will probably try to use the aggregate itself to satisfy the populate request (because it will judge it the most efficient LTS) – thus loading data straight from the table that we are trying to populate!

The answer is the magical INACTIVE_SCHEMAS variable. What this does is tell OBIEE to ignore one or more Physical schemas in the RPD and, importantly, any associated Logical Table Sources. INACTIVE_SCHEMAS is documented as part of the Double Buffering functionality. It can be used in any logical SQL statement, so is easily demonstrated in an analysis (using Advanced SQL Clauses -> Prefix):
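
For this example, with the aggregates held in the EXALYTICS schema of the TimesTen Aggregates physical database, the prefix is simply:

SET VARIABLE INACTIVE_SCHEMAS='"TimesTen Aggregates".."EXALYTICS"';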


Forcing an OBIEE query to avoid an LTS, using INACTIVE_SCHEMAS.


So when we specify the populate command to update the aggregate, we just include the necessary INACTIVE_SCHEMAS prefix:

SET VARIABLE INACTIVE_SCHEMAS='"TimesTen Aggregates".."EXALYTICS"';
populate "ag_sales_month" mode ( append table connection pool 
"TimesTen aggregates"."TT_CP") as  
select_business_model "Sales"."Fact Sales"."Sale Amount" as "Sale_Amoun000000AD","Sales"."Dim Times"."Month YYYYMM" as "Month_YYYY000000D0" 
from "Sales";

Why, you could reasonably ask, is this not necessary in a normal OotB aggregate refresh? For the simple reason that in “vanilla” Aggregate Persistence usage the whole aggregate gets deleted from the RPD before it is rebuilt, and therefore when the aggregate query is executed only the base LTS is enabled in the RPD at that point in time.

The final part of the puzzle – Incremental refresh

So, we have a way of telling the BI Server to populate a target aggregate without rebuilding it, and we have the workaround necessary to stop it trying to populate the aggregate from itself. The last bit is making sure that we only load the data we want to. If we execute the populate statement as it stands, straight from the nqquery.log of the initial Aggregate Persistence run, then we will end up with duplicate data in the target aggregate. So we need to do one of the following:

  1. Truncate the table contents before the populate
  2. Use a predicate in the populate Logical SQL so that only selected data gets loaded

To issue a truncate command, you can use the logical SQL command execute physical to get the BI Server to run a command against the target database, for example:

execute physical connection pool "TimesTen Aggregates"."TT_CP" truncate table ag_sales_month

This truncate/load method is appropriate for refreshing dimension aggregate tables, since there won’t usually be an update key as such. However, when refreshing a fact aggregate it is better for performance to use an incremental update and only load data that has changed. This assumes that you can identify the data and have an update key for it. In this example, I have an aggregate table at Month level, and each time I refresh the aggregate I want to load just data for the current month. In my repository I have a dynamic repository variable called THIS_MONTH. To implement the incremental refresh, I just add the appropriate predicate to the SELECT_BUSINESS_MODEL clause of the populate statement:

select_business_model "Sales"."Fact Sales"."Sale Amount" as "Sale_Amoun000000AD","Sales"."Dim Times"."Month YYYYMM" as "Month_YYYY000000D0" 
from "Sales" 
where "Dim Times"."Month YYYYMM" =  VALUEOF("THIS_MONTH")

Making the completed aggregate refresh command to send to the BI Server:

SET VARIABLE DISABLE_CACHE_HIT=1, DISABLE_CACHE_SEED=1, DISABLE_SUMMARY_STATS_LOGGING=1, 
INACTIVE_SCHEMAS='"TimesTen Aggregates".."EXALYTICS"'; 
populate "ag_sales_month" mode ( append table connection pool 
"TimesTen aggregates"."TT_CP") as  
select_business_model "Sales"."Fact Sales"."Sale Amount" as "Sale_Amoun000000AD","Sales"."Dim Times"."Month YYYYMM" as "Month_YYYY000000D0" 
from "Sales" 
where "Dim Times"."Month YYYYMM" =  VALUEOF("THIS_MONTH");

Since there will be data in the table for the current month, I delete this out first, using execute physical:

execute physical connection pool "TimesTen Aggregates"."TT_CP" delete from ag_sales_month where Month_YYYY000000D0 = VALUEOF(THIS_MONTH);

Step-by-step

The method I have described above is implemented in two parts:

  1. Initial build – only needs doing once
    1. Create Aggregate Persistence scripts as normal (for example, with Summary Advisor)
    2. Execute the Aggregate Persistence script to:
      1. Build the aggregate tables in TimesTen
      2. Map the aggregates in the RPD
    3. Create custom populate scripts:
      1. From nqquery.log, extract the full populate statement for each aggregate (fact and associated dimensions)
      2. Add the INACTIVE_SCHEMAS setting to the populate script, specifying the target TimesTen database and schema.
      3. For incremental refresh, add a WHERE clause to the populate logical SQL so that it only fetches the data that will have changed. Repository variables are useful here for holding date values such as current date, week, etc.
      4. If necessary, build an execute physical script to clear down all or part of the aggregate table. This is run prior to the populate script to ensure you do not load duplicate data
  2. Aggregate refresh – run whenever the base data changes
    1. Optionally, execute the execute physical script to prepare the aggregate table (by deleting whatever data is about to be loaded)
    2. Execute the custom populate script from above.
      Because the aggregates are being built directly from the base data (as enforced by INACTIVE_SCHEMAS) the refresh scripts for multiple aggregates could potentially be run in parallel (eg using xargs). A corollary of this is that this method could put additional load on the source database, because it will be hitting it for every aggregate, whereas vanilla Aggregate Persistence will build aggregates from existing lower-level aggregates if it can.

Summary

This method is completely valid for use outside of Exalytics too, since only the Summary Advisor is licensed separately. Aggregate Persistence itself is standard OBIEE functionality. For an Exalytics deployment in an environment where aggregate definitions and requirements change rapidly, this method would be less appropriate, because of the additional work required to modify the scripts. However, for an Exalytics deployment where aggregates change less frequently, it could be very useful.

The approach is not without drawbacks. Maintaining a set of custom populate commands has an overhead (although arguably no more so than a set of Aggregate Persistence scripts), and the flexibility comes at the cost of putting the onus of data validity on the developer. If an aggregate table is omitted from the refresh (for example, a supporting aggregate dimension table) then reports will show erroneous data.

The benefit of this approach is that aggregates can be rapidly built and maintained in a sensible manner. The RPD is modified only in the first step, the initial build. It is then left entirely untouched. This makes refreshes faster, and safer; if it fails there is just the data to tidy up, not the RPD too.

Looking forward to the BI Forum 2013, Brighton UK

It’s around a month to go now until the Rittman Mead BI Forum runs in Brighton, UK, with the Atlanta event running the week after. Full details on both events are on the Rittman Mead BI Forum 2013 homepage, but I thought it’d be worth taking a look at the Brighton event in more detail, along with the speaker and session line-up.

Our first ever BI Forum ran at the Hilton Metropole in Brighton back in 2009, with around fifty attendees from around the world getting together to talk about OBIEE, Essbase, ODI and the Oracle Database. Tony Heljula won the inaugural “Best Speaker” award, Venkat Janakiraman came over all the way from India (and joined us at Rittman Mead shortly afterwards, to become Managing Director for Rittman Mead India), Edward Roske slept through the days and was awake during the night, and we took the decision shortly afterwards to make it an annual event, moving down the road to the Hotel Seattle the following year where we’ve been ever since.


This year is the fifth anniversary of the BI Forum, and whilst we now run the event in Atlanta as well we’ve kept to the same basic principles; keep the attendee numbers restricted, assume a basic level of knowledge with the tools, maximise networking and keep the focus on OBIEE and its related technologies. This year our speaker line-up includes familiar faces and some new speakers from around the world, along with Oracle’s product management team and a guest speaker from outside our immediate industry:

  • Uli Bethke, Tony Heljula, Michael Wilcke, Edelweiss Kammermann and Adam Seed from the developer/partner community will be talking about OBIEE performance tuning, ODI development best practices, going beyond the basics with Endeca, and getting started with BI-related technologies such as Oracle Business Activity Monitoring
  • Oracle’s Philippe Lions, Adam Bloom, Alan Lee, Mike Durran and Nick Tuson will update everyone on what’s new with OBIEE post-11.1.1.7, including news on the replacement to the BI Administration Tool and the updated 11.1.1.7 version of SampleApp, as well as answering questions around OBIEE multi-tenancy, Exalytics and virtualisation

On the day before the main conference, myself, Stewart Bryson and Michael Rainey will be delivering a masterclass around Oracle’s data integration tools; in this optional one-day session we’ll be taking a closer look at the ODI product roadmap, integration with other tools such as GoldenGate, Hadoop and Enterprise Data Quality, and also sharing ODI development best practices around topics such as ETL high-availability and scripted deployment.


Finally – if all of this sounds a bit heavy going – the secret about the BI Forum is that we try to make it fun as well. On the night before the main event opens we hold a drinks reception followed by an Oracle keynote, and on both nights we host gala dinners with open bars and lots of opportunities to meet everyone at the event. The vast majority of people who come to one BI Forum event then come back every following year (if not, we come and get you anyway, Christian…), and if you’ve not been before, make sure you sign-up soon before all of the places go. Details of the event including links to the registration form are on the BI Forum 2013 homepage, and I’ll be back in a couple of days time to talk about what we’ve got planned for Atlanta this year.

Upgrading OBIEE to 11.1.1.7

OBIEE 11.1.1.7 was released earlier this week, and brings with it the usual delights and challenges of the Fusion Middleware (FMW) platform. Because OBIEE is part of Fusion Middleware, and because it is made up of OBIEE-specific components (such as the BI Server) and FMW-generic components (such as Web Logic domains), the upgrade documentation isn’t quite so simple as it could be in a standalone product.

There are two options for upgrading to OBIEE 11.1.1.7 from an earlier release of 11g:

  1. “Out-of-Place”, which in the olden days of software was called “reinstallation”. A brand new copy of OBIEE is installed, and the relevant configuration files (RPD, web cat, and so on) are migrated to the new installation, and then the old one decommissioned.
  2. “In-Place” – OBIEE is upgraded where it resides, and what was an 11.1.1.6 (or .3 or .5) OBIEE then becomes 11.1.1.7

There are pros and cons to each approach, including:

  1. In-place upgrades are big bang, whereas out-of-place lets you migrate in a phased manner if you want to. Depending on the scale of the changes in the upgrade, or the amount of regression testing you want to do, this may be a factor
  2. In the 11.1.1.3 -> 11.1.1.5 -> 11.1.1.6 days, the in-place upgrade was complex and infamous for being buggy. From what I’ve seen of two separate in-place 11.1.1.6 -> 11.1.1.7 upgrades, it is simple to do, with no problems discovered yet.
  3. An out-of-place upgrade is ‘safer’ because you are just doing a reinstallation. When you do an in-place upgrade you are trusting the tools not to fail and/or damage your current installation. The counter argument to this is that you should always have backups, and in doing an out-of-place upgrade you may make mistakes yourself or have trouble migrating content

Christian Screen wrote about the advantages of an out of place upgrade, but I think some of these could be less relevant now. Given the simplicity of the in-place upgrade, out-of-place strikes me as more complex and risky.

Getting started

Take a full backup of your OBIEE installation. You do take backups, right? If not, now is a very good time to start, upgrade or not. See the backup documentation, here

Next, regardless of upgrade method, you need to download the complete OBIEE 11.1.1.7 installers for your platform, from here

Out-of-place upgrade

The installation process for 11.1.1.7 is identical to previous versions (except you now have the option to install Essbase too).

Once you have installed 11.1.1.7, you migrate your RPD, web catalog, security configuration and component configuration over from your existing installation. You also need to consider whether you want to keep any of your RCU table content (Usage Tracking, Event Polling, etc), and manually migrate this, watching out for any changes to the RCU table definitions.

In-place upgrade

An in-place upgrade will upgrade your existing 11g (whether 11.1.1.3, 11.1.1.5, or 11.1.1.6) to 11.1.1.7. It is documented here: Moving from 11.1.1.3, 11.1.1.5, or 11.1.1.6 to 11.1.1.7.

I have tested it on Linux and Windows, from both 11.1.1.6.2 BP1 and 11.1.1.6.9, to 11.1.1.7.
Be aware that previous in-place upgrades (11.1.1.3 -> 11.1.1.5 -> 11.1.1.6) had issues, and I have not tested in-place upgrading 11.1.1.3/11.1.1.5 -> 11.1.1.7.

The outline of an in-place upgrade is as follows:
1. Do a Software Only Install of 11.1.1.7, into the existing installation
2. Run the Patch Set Assistant to update the RCU schemas for 11.1.1.7
3. Run the Configuration Assistant to update the BI domain

This can be done using the GUI tools, or scripted using response files.

In-place upgrade using the GUI, step by step

  1. Shutdown the BI stack (WLS Admin Server, Managed Server, Node Manager, OPMN)
  2. Run 11.1.1.7 installer (runInstaller on Linux or setup.exe on Windows)
    1. Select Software Only Install
    2. Specify the existing FMW_HOME (this should be pre-populated in the OUI installer screen)
    3. Click Next through the following screens, until the installation begins.
  3. Run the Patch Set Assistant (PSA). This is located in oracle_common/bin/ and is called psa on Linux or psa.bat on Windows
    1. Under Available upgrades select Oracle Business Intelligence
    2. Before running the PSA you should make sure you’ve got a valid database backup (just as you should for the whole BI stack). You are given a checkbox to tick to confirm you’ve taken the backup.
    3. Enter the details of the database where your existing RCU tables reside, along with a DBA-privileged user.
      When specifying DB credentials, note that it is different from the RCU screens – if connecting as SYS you must enter SYS AS SYSDBA (as you would if connecting from sqlplus etc). Click Connect. The PSA then brings back a list of RCU schemas; select the MDS one

    4. Complete the equivalent details for the BIPLATFORM schema, and continue through the PSA. Click Upgrade.
  4. Start Node Manager
    1. On Windows, this is a service
    2. On Linux, go to your FMW home and run ./wlserver_10.3/server/bin/startNodeManager.sh
  5. Start the Web Logic Administration Server (AdminServer)
    1. On Windows, go to your FMW home folder and then double click on user_projects\domains\bifoundation_domain\bin\startWebLogic.cmd
    2. On Linux, go to your FMW home and run ./user_projects/domains/bifoundation_domain/bin/startWebLogic.sh

    Wait until the Admin Server is running – you should be able to login to the WLS Console at http://<yourhost>:7001/console, and the Admin Server command window will show Server started in RUNNING mode

  6. Run the Configuration Assistant
    1. The script is in the Oracle_BI1\bin folder
      1. On Windows, go to your FMW home folder and then double click on Oracle_BI1\bin\config.bat
      2. On Linux, go to your FMW home and run ./Oracle_BI1/bin/config.sh
    2. Select Update BI Domain and then enter the details of your domain.
      Note on Windows, if you are using a loopback adaptor (which you should on a DHCP host, such as is often found on a VM) make sure the hostname that you specify is the one you have in your hosts file for the loopback IP.

  7. The Configuration Assistant will [re]start AdminServer, the managed server, and OPMN components.
  8. Flush your browser cache, and then login to OBIEE and enjoy the new version number and the sloping tabs of Fusion skin

Command-line / “Silent” in-place upgrade

The three utilities that are run to do an in-place upgrade can all be run from the commandline only, using response files to specify the values and actions.

To create a response file, you can run the utility in GUI mode and then choose the Save option on the summary screen. You can also take an existing response file and modify it to suit your target environment.

Passwords in response files

When you create a response file from the GUI, passwords are not saved in clear text.

  • In the Patch Set Assistant response file passwords are saved encrypted, so if you don’t need to change them you can leave this file unmodified. Alternatively, you can swap the encrypted entries for cleartext ones:
    MDS.encryptedSchemaPassword = 0562A61E90282BD7CE8DDA04D618E53960D533DC3903301F63
    MDS.encryptedDbaPassword = 05115EE31A473F46DFCB9FB23C99C3A348DD8D3D2C4FBA68BF
    

    would become

    MDS.cleartextSchemaPassword = Password01
    MDS.cleartextDbaPassword = Password01
    

    And do the same for the BIPLATFORM schema and DBA passwords in the file

  • For the Configuration Assistant response file you need to replace all instances of <SECURE VALUE> with the actual password

Step by step silent in-place upgrade

  1. Shutdown the BI stack (WLS Admin Server, Managed Server, Node Manager, OPMN)
  2. Run the Software Only install of 11.1.1.7
    cd bishiphome\Disk1\
    setup.exe -silent -responsefile c:\response_files\1_swonly.rsp
    
  3. Run the Patch Set Assistant
    cd oracle_common\bin\
    psa.bat -response c:\response_files\2_psa.rsp
    
  4. Start Node Manager
  5. Start AdminServer
    cd user_projects\domains\bifoundation_domain\bin\
    startWebLogic.cmd 
    

    Wait until the Admin Server is running – you should be able to login to the WLS Console at http://<yourhost>:7001/console, and the Admin Server command window will show Server started in RUNNING mode

  6. Run the Configuration Assistant script.
    cd Oracle_BI1\bin\
    config.bat -waitforcompletion -silent -responseFile c:\response_files\3_config_ass.rsp
    

    The -waitforcompletion is optional but useful on Windows, as it stops the cmd window from which you launch the script from disappearing

You can download the response files used from here, or generate them yourself using the utilities in GUI mode.

In-place upgrade of a scaled-out BI domain

If you have a scaled-out (horizontal cluster) BI domain, the in-place upgrade is simple for the additional nodes. Since the domain and RCU upgrades are handled by the Configuration Assistant and Patch Set Assistant respectively, you just need to update the binaries, using the Software Only Install

On each additional node:

  1. Shutdown all WLS, OPMN
  2. Run 11.1.1.7 installer
    1. Select Software Only Install
    2. Specify existing FMW_HOME (should be pre-populated in OUI)
    3. Takes c.10 minutes
  3. Start up Managed Server, and OPMN
  4. That’s it

Known issues

Common Header no longer visible

If you don’t flush your browser cache then you can expect to see the header missing from OBIEE.

Flush your browser cache and then refresh / relogin, and it should display correctly.


INST-08112

Error: INST-08112: The Admin Server is listening on multiple network interfaces and the default listening address 192.168.248.160 cannot be accessed from the scale out host

Diagnostics: Windows machine has loopback (10.10.10.10) plus a NAT NIC (IP 192.168.248.160)

Resolution: In WLS console, set the Listen Address for the Admin Server to 10.10.10.10

Known issues from the release notes

A number of further known issues related to in-place upgrades are currently listed in the release notes.

OBIEE 11.1.1.7 Now Available for Download

OBIEE 11.1.1.7 became generally available (“G.A.”) over the long Easter weekend, for Microsoft Windows 32/64-bit and all the usual Unix and Linux platforms. We’ll have more on this blog over the next few weeks on the details of some of the most important new features, but at a high-level here’s what’s new:

The OBIEE dashboard and analyses have a new look-and-feel to match the Oracle Fusion Apps, though you can switch back to the old look if you want to (both are user/developer-selectable themes). There are a number of new visualisation types including performance tiles (as shown in the screenshot below), and 100% stacked bar and waterfall charts.


There’s also a new view suggestion feature, that recommends the best visualisation based on the type of analysis or comparison you’re trying to put together.


There are also improvements to the Trellis Chart view, including the ability to associate action links with individual trellis cells, an ability to freeze headers in table and pivot views, and a number of other “fit and finish” changes to improve overall product quality.

With the installer, Essbase can now be installed alongside the main OBIEE server products, and as we saw in a preview with OBIEE 11.1.1.6.2 BP1, Essbase then appears within the main Oracle BI Domain and can be monitored, and stopped and started, from within Enterprise Manager Fusion Middleware Control.


There’s also a number of improvements and new features to the Administration tool and BI Server, including support for Apache Hadoop sources (via Hive and MapReduce), some incremental improvements to the Model Checker and statistics gathering, and some changes and improvements around MUD (it wouldn’t be an OBIEE patch release without some changes to how MUD works). Full details of everything are in the updated product docs, and we’ll cover some of these new features, particularly the Apache Hadoop integration, in some postings in the near future.

Using OBIEE against Transactional Schemas Part 5: Complex Facts

I’ve finally gotten around to finishing this series… I believe it’s actually been a year in the making. I’m planning on being more proactive with getting content on the blog. Actually… the main reason I’m putting this one to bed is I have some exciting posts planned in the coming months, and I feel guilty writing those without closing this one off. For the recap, here are the previous entries:

Using OBIEE against Transactional Schemas Part 1: Introduction

Using OBIEE against Transactional Schemas Part 2: Aliases

Using OBIEE against Transactional Schemas Part 3: Simple Dimensions and Facts

Using OBIEE against Transactional Schemas Part 4: Complex Dimensions

In this post, I want to talk about complex facts. The OBIEE metadata layer is a very powerful force indeed, and I can’t demonstrate all the possibilities for shifting a normalized schema into a dimensional one. Instead, I thought I would show one really interesting example… something extremely powerful, in hopes that others reading this series might find inspiration to try something daring themselves.

OLTP developers design their systems with one task in mind: facilitating the functions of the transactional system. Even when standard reporting is delivered as part of an OLTP system, the flexibility of the schema for these purposes is usually an afterthought because the stated goal of these systems is (and should be): delivering transactions with as much efficiency as possible. It should come as no surprise to BI developers when these systems store unrelated activities (at least from a source-system perspective) in completely separate tables. We often want to combine these activities in a coherent structure — usually an activity-based fact table — to be able to GROUP BY the Activity Type in a single report. It’s usually these advanced reporting requirements that cause us to pull out SQL Developer Data Modeler and start designing a data warehouse, which is probably a good idea. But for the reasons mentioned in the introductory blog post in this series, a data warehouse isn’t always in the cards. Desperate for their data, business analysts pull out that old Swiss Army knife — Excel — and start combining data sets and pivoting them, generating the reports that the business needs. We have to understand the need to deliver content to the business quickly, and the business can’t always wait for mounds of ETL code before making very important decisions.

The Business Model and Mapping layer available to us in the OBIEE Semantic layer provides us the ability to present a logical fact table, such as Fact – Customer Activity, as a shell for many types of activities, and even combine those activities in intelligent and performant ways. The example we’ll construct from the Customer Tracking application (which has been the basis for all the previous entries) involves reporting against the activities stored in the EBA_CUST_CUST_ACTIVITY table. As mentioned earlier, this table tracks explicit CRM activities related to particular Customers and Contacts, including meetings, phone calls, etc. We have several activities that we would find useful to combine with these explicit events, such as Customer creation dates, Customer inactivity dates, etc. These implicit activities would look great combined in our Fact – Activity Fact table so we could include the type of activity in our GROUP BY list, and return the results when we drill down to the detail level for a particular customer. We could try to build this integration in the dashboard itself to show the account creation date from Dim – Contact on the same dashboard with dates of CRM activities. But we should all admit that it’s a better solution to build this functionality in the Business Model if it’s possible. But is this feasible without ETL? Would we have to stitch this together using perhaps presentation variables, or worse: have business analysts dump the results of two separate analyses into an Excel spreadsheet and produce the report they need outside of OBIEE?

So we want to add an implicit event to our Fact – Customer Activity logical table: the creation of a Customer account, which is represented in Customer Tracker with the CREATED_ON column in the EBA_CUST_CUSTOMERS table. We’ll start by adding another logical table source using this source table  and provide the same hard-coded value of 1 to the Activity Count measure:

Complex Fact LTS

Remember: this is still a factless fact table, and we have to provide a measure to the OBIEE semantic layer which allows aggregate queries. We have a little bit more to do with these logical table sources, but we need to make similar changes to several of our logical dimension tables as well before we complete this task. I’ll preview two logical table sources below (both called Customers), and then explain them further down. The one on the left is a new logical table source for the Dim – Customer Activity table, while the second is for the Dim – Activity Date table:

Dimension Table LTS for Complex Facts

In the Dim – Customer Activity logical dimension table, we’ll create a new logical table source also based on the EBA_CUST_CUSTOMERS table. As this is a dimension table and requires that we map a value for the primary key, we simply use the physical primary key from the EBA_CUST_CUSTOMERS table. Notice that we have constructed a new logical column called Activity Source Type. This attribute will allow us to differentiate our explicit activities, such as those sourced directly from the EBA_CUST_CUST_ACTIVITY table, from this new implicit activity that we are constructing from the EBA_CUST_CUSTOMERS table. We also provide several hard-coded values to other attributes in this dimension table to compensate for the lack of values for those attributes for our implicit activities.
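
As an illustration only (the logical column names other than Activity Source Type, the physical ID column, and the literal values are all hypothetical; use whatever labels make sense in your model), the column mappings in the new EBA_CUST_CUSTOMERS logical table source might look something like this:

Activity ID          : "orcl".""."CUST_TRACK"."EBA_CUST_CUSTOMERS"."ID"
Activity Source Type : 'Customer'
Activity Type        : 'Customer Created'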

We also need to provide an additional logical table source for our date dimension Dim – Activity Date. This is where the magic starts to happen. The creation of the Customer account is actually the event that we are interested in reporting on, so it’s this date that ties all the activities together. We’ll map the CREATED_ON date from EBA_CUST_CUSTOMERS to the single Activity Date column that we have defined in the logical table source and let the other calculated measures provide the remaining attributes necessary in the logical dimension table. However, since the CREATED_ON column in the EBA_CUST_CUSTOMERS table is defined as a DATETIME attribute in the physical model (and we want it to remain that way when we view it as a dimensional attribute), we need to modify the expression slightly in the logical table source to remove the time element. As the calculation is not visible in the image above, I’ve listed it here:

Cast("orcl".""."CUST_TRACK"."EBA_CUST_CUSTOMERS"."CREATED_ON" AS DATE)

The only remaining dimension table is Dim – Contact, but we don’t need to make any changes here, as the EBA_CUST_CUSTOMERS table is already adequately represented in the logical table source. Because we are bringing a logical table source associated with this logical dimension table into our logical fact table, the BI Server already understands how to construct the join (or more correctly, the lack of a join) with this logical dimension.

Now, we can return to the logical table sources for Fact – Customer Activity to exploit one final piece of sheer magic from the BI Server. For each logical table source, we select the Content tab and make the following changes:

Fragmentation Content for Complex Facts

There are a few really important bits that we are unlocking here. First: we need to check the option next to This source should be combined with other sources at this level. Ordinarily, logical table sources are selected by the BI Server using an OR evaluation: one LTS per combination of fact and dimension joins. (There are exceptions to this, but I’m distilling the content down a bit to hopefully make this easier to follow.) This setting dictates that the LTSs should be evaluated with an AND instead. We are instructing the BI Server to combine these two logical table sources as if they existed in the same table. This is done using a logical union, which manifests itself as an actual UNION statement in the physical SQL when both sources exist in the same database. We can see this behavior by examining the physical SQL generated by an analysis using Fact – Activity Fact:

Complex Fact Analysis

Complex Fact Analysis SQL

The more impressive functionality comes when we make use of the expression specified in Fragmentation content. This logic instructs the BI Server to do a kind of partition pruning that is similar to the behavior of the Oracle Database optimizer when dealing with partitioned tables. What we have constructed here is really a form of logical partitioning, with the source for the Fact – Customer Activity logical fact table existing in two physical sources, or partitions. So far, our query hasn’t been precise enough to allow the BI Server to prune down to a single logical table source. However, when we choose to filter on the Activity Source Type logical column either directly or by drilling down, which is the same column we defined in our Fragmentation content section, the BI Server removes the UNION statement and generates a query against a single logical table source:

Complex Fact Analysis SQL 2
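
For reference, the Fragmentation content expressions for the two logical table sources would be along these lines; the business model name and the literal values are hypothetical, and must match whatever is mapped to Activity Source Type in each source:

"Customer Tracking"."Dim - Customer Activity"."Activity Source Type" = 'CRM Activity'
"Customer Tracking"."Dim - Customer Activity"."Activity Source Type" = 'Customer'

The first expression would belong to the EBA_CUST_CUST_ACTIVITY logical table source and the second to the EBA_CUST_CUSTOMERS one, so that once an analysis filters on Activity Source Type the BI Server can prune down to the single relevant source.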

It’s still a best practice to build conformed data warehouses, and transactional reporting should be seen as a means to an end. Regardless, there will always be reasons to do transactional reporting, and the power of the Business Model and Mapping layer provides us with capabilities to deliver analyses and dashboards to replace the Excel spreadsheets that often form the cornerstone of transactional reporting. I hope you’ve enjoyed this series… and perhaps the long delays between each entry kept you on the edge of your seat for each new installment. Perhaps not. Regardless… drop me some comments and let me know what you think.