
MDS XML versus MUDE Part 1: Introduction

This is my first blog post in quite a while… mostly because of all the planning and preparation that went into the Rittman Mead BI Forum, which is now sadly behind us. There’s been a lot of other activity around Oracle BI as well. Of course, we had the OBIA 11.1.1.7.1 PS1 release recently, and this has us very busy internally, preparing for our first implementation and thinking about what the training course will look like. Mark covered the subject very well… but still expect something from me on the new OBIA in the not-so-distant future. I’ve also been busy with my upcoming presentations at Kscope in New Orleans. Edward Roske and I have a 2-hour, double-room presentation on Essbase and OBIEE integration; I have an OBIEE and Data Vault presentation that I’m presenting with Kent Graziano; and then I have my only solo presentation, concerning MDS XML versus MUDE as a way of doing multi-user RPD development.

I presented on the MDS XML topic already at Collaborate 13 this year, and what I noticed from discussions with the attendees is that most users don’t understand where this feature fits in. Honestly… I wasn’t too sure myself when the feature was first released, so I thought I would take a look and see how organizations might use it. I was planning on addressing Aggregation next on the blog (a subject I recently spoke on at Collaborate as well), but the MDS XML subject seems to have more momentum… so here we go.

If you aren’t aware of what MDS XML is, or if you have an idea but are still throwing your hands in the air, then let me first explain what it is. The OBIEE (and Siebel Analytics, and nQuire before that) metadata repository file has always been binary… that single RPD file that we deploy to the BI Server. But a single, monolithic binary file is a problematic solution for the OBIEE metadata layer… just as it is problematic for almost any deployment, large or small. Microsoft faced a similar crossroads not so long ago with Office file formats: the world wanted non-binary, and at that time, XML was king. So Microsoft launched a file-format conversion project to produce an Office file format based on XML and open standards (or at least… as open as Microsoft can allow itself to be).

The paramount issue with binary files in any technology project is their difficulty integrating with version control systems (VCS). Most of the efficiency capabilities of these systems, from merging functionality to cheap delta copies, revolve around the ability to do basic text diff’ing. We immediately lose all of this functionality when working with a binary file. But even a monolithic text file would be difficult to manage, because it becomes impossible to track the granular changes made to individual objects. So what we’ve needed in OBIEE is the ability to store our repository as a collection of granular text documents, using a recognized format (such as XML), with them all acting together to form our metadata definition.

We have that capability now with the MDS XML feature in the Admin Tool. Whenever we “Create” a repository, “Open” a repository, or “Copy|Save As” a repository, we have the option to work instead with a directory of XML files using Oracle’s standard MDS format.


MDS XML Open

MDS XML Toolbar

The Admin Tool will ask us for a directory to serve as the “core” directory for the RPD… a container to hold all the subsequent subdirectories and individual XML files. (I say “core” instead of “base” because there is actually a “base” subdirectory in the directory tree.) The core directory serves as the pointer to the RPD… we browse to this high-level directory when opening, creating or copying an RPD file stored in this way. In the screenshot below, the “gcbc” directory is the core directory I specified in the Admin Tool during metadata repository development.

MDS XML Directory Contents

In the next few posts, I’ll examine this new file format and see what we can actually do with it. In all honesty, I put my first MDS XML abstracts forward at conferences before I even knew what was possible. I wanted to put this new feature through its paces and see what (if any) holes it filled in the current project delivery paradigm. In the next post, I’ll take a brief look at MUDE (I know, I’m sorry…) to see where the bar is currently set with multi-user repository development for OBIEE. After that, I’ll take MDS XML for a stroll along with the Git version control system… arguably the most powerful VCS to date. Finally, I’m planning on taking a look at what a delivery methodology might look like using Git… including RPD migration and rollout.

Photos and Presentation Downloads from the Rittman Mead BI Forum 2013, Brighton & Atlanta

Well, we’re all back home now after two very successful Rittman Mead BI Forum events in Brighton and then Atlanta, earlier this month in May 2013. Around 70 OBIEE, ODI, Endeca and Essbase developers from around Europe got together in the first week in Brighton, followed by around 60 in Atlanta, and we were joined by Cary Millsap (Method R Corporation), Alex Gorbachev (Pythian) and Toby Potter (Data Sift) as special guest speakers over the two events. Thank you again to everyone who came along and supported the event, and a special thanks to the speakers, without whom, of course, the BI Forum couldn’t take place. In addition, sincere thanks to Mike, Adam, Philippe, Alan, Marty, Jack and Florian from Oracle for coming along and sharing plans and insights around the Oracle product roadmap; and finally, congratulations to Antony Heljula (Peak Indicators Ltd) and Jeremy Harms (CD Group), who won the “Best Speaker” award for Brighton and Atlanta respectively.

[Photos from Brighton]

Photos from the two events (a selection from Brighton above, some from Atlanta below this paragraph) are available in our Flickr photo sets.

[Photos from Atlanta]

As we always do, we’re also making the slides (where allowed by the speaker, and not under NDA) available for download using the links below, including the one-day Oracle Data Integration Masterclass provided by Stewart Bryson, Michael Rainey and myself. Note that Christian Screen’s and Jeremy Harms’ slides are hosted online, so you may not be able to download them from whatever service is hosting them – sorry.

Oracle Data Integration Masterclass (Stewart Bryson, Michael Rainey, Mark Rittman, Rittman Mead)

Brighton RM BI Forum, May 8th – 10th 2013

Atlanta RM BI Forum, May 15th – 17th 2013

So once again – thank you to everyone who came along, especially the speakers but also everyone from our Brighton and Atlanta offices who helped set the event up, and made sure it all ran so smoothly. See some of you again in Brighton and Atlanta next year, and our next outing is to ODTUG KScope’13 in New Orleans – another great event with the BI Track organised by Kevin McGinley – make sure you’re there!

Testing aggregate navigation on OBIEE and Exalytics

One of OBIEE’s many great strengths is aggregate navigation: the ability to choose, from a list of possible tables, the one most likely to give optimal performance for a given user query. Users are blissfully unaware of which particular table their query is being satisfied from, since aggregate navigation happens on the BI Server once the user’s request comes through from an Analysis or Dashboard.

This seamless nature of aggregate navigation means that testing that specific aggregates are working can be fiddly. We want to ensure that the aggregates we’ve built are (i) being used when appropriate and (ii) showing the correct data. This is particularly the case on Exalytics, where aggregates are put into the in-memory database (TimesTen) by the Summary Advisor and we need to validate them.

Whilst the log file nqquery.log (or the Usage Tracking table S_NQ_DB_ACCT) tells us pretty easily which table a query used, it is nice to be able to switch a query between possible aggregate sources so as to compare the data. This blog post demonstrates how we can use the INACTIVE_SCHEMAS variable (as described in my previous blog post on loading Exalytics incrementally) to do this.

INACTIVE_SCHEMAS is a Logical SQL variable that tells the BI Server to exclude the specified physical schema(s) from consideration when resolving an inbound query. Normally, the BI Server will parse each incoming query through the RPD, and where a Logical Table has multiple Logical Table Sources it will evaluate each one to determine (a) whether it can satisfy the query and (b) whether it will be the most efficient one to use. By using INACTIVE_SCHEMAS we can force the BI Server to ignore certain Logical Table Sources (those associated with the physical schema specified), ensuring that it queries only the source(s) we want it to.
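To make this concrete, here’s a minimal Logical SQL sketch. EXALYTICS is the TimesTen aggregate schema from my example below; the “Sales” subject area and column names are hypothetical stand-ins for whatever is in your own RPD:

SET VARIABLE INACTIVE_SCHEMAS='"EXALYTICS"';
SELECT "Sales"."Month YYYYMM" saw_0, "Sales"."Sale Amount" saw_1
FROM "Sales"
ORDER BY saw_0

With the variable set, any Logical Table Source mapped to the EXALYTICS schema is ignored and the query falls back to the Oracle source; remove the SET VARIABLE prefix (or give it an empty value) and the aggregate is back in consideration.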

In the following example, the data exists in both an Oracle database and TimesTen (in-memory). Whilst the example here is based on an Exalytics architecture, the principle should be exactly the same regardless of where the aggregates reside. This is how the RPD is set up for the Fact table in my example:

The GCBC_SALES schema on Oracle holds the unaggregated sales data, whilst the EXALYTICS schema on TimesTen has an aggregate of this data in it. The very simple report in my example shows sales by month, and additionally uses a Logical SQL view to show the contents of the query being sent to the BI Server.

Looking at nqquery.log we can see the query by default hits the TimesTen source:

[...]
------------- Sending query to database named TimesTen aggregates
WITH
SAWITH0 AS (select distinct T1528.Sale_Amoun000000AD as c1,
     T1514.Month_YYYY000000D0 as c2
from
     SA_Month0000011E T1514,
     ag_sales_month T1528
[...]

Now, for thoroughness, let’s compare this to what’s in the TimesTen database, using a Direct Database Request.

OK, all looks good. But does what we’ve aggregated into TimesTen match what we’ve got in the source data on Oracle? Here we can use INACTIVE_SCHEMAS to force the BI Server to ignore TimesTen entirely. We can see from nqquery.log that OBI has now gone back to the Oracle source of the data:

[...]
------------- Sending query to database named orcl
WITH
SAWITH0 AS (select sum(T117.FCAST_SAL_AMT) as c1,
     T127.MONTH_YYYYMM as c2
from
     GCBC_SALES.TIMES T127 /* Dim_TIMES */ ,
     GCBC_SALES.SALES T117 /* Fact_SALES */
[...]

and the report shows that we actually have a problem in our data: what’s on the source doesn’t match the aggregate.

A Direct Database Request against Oracle confirms the data we’re seeing – we have a mismatch between our source and our aggregate.

This is the kind of testing that it is crucial to perform. Without proper testing, problems may only come to light in specific reports or scenarios, because by its very nature aggregate navigation works silently, hidden from the user.

So this is the feature we can use to perform the testing, but below I demonstrate a much more flexible approach than having to build multiple reports.

Implementing INACTIVE_SCHEMAS

Using INACTIVE_SCHEMAS is very simple, and doesn’t require modification to your reports. Simply use a Variable Prompt to populate INACTIVE_SCHEMAS as a Request Variable, and disable the Apply button for instantaneous switching when the value is changed.

A Request Variable will be prepended to any logical SQL sent to the BI Server. Save this prompt in your web catalog, and add it to any dashboard on which you want to test the aggregate.
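Since the prompt is just populating the variable, you can flip the test both ways. As a sketch, using the two physical schema names from my example:

SET VARIABLE INACTIVE_SCHEMAS='"EXALYTICS"';
SET VARIABLE INACTIVE_SCHEMAS='"GCBC_SALES"';

The first prefix forces the query to the Oracle source, the second to the TimesTen aggregate, so the same analysis can be run against each source in turn and the numbers compared.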

Even better, if you set the security on the dashboard prompt such that only your admins have access to it, then you could put it on all of your dashboards as a diagnostic tool, and only those users with the correct privilege will even see it.

Displaying the aggregate source name in the report

So far this is all negative, in that we are specifying the data source not to use. We can examine nqquery.log etc. to confirm which source was used, but it’s hardly convenient to wade through log files each time we execute the report. Inspired by (OK, ripped off from) SampleApp is this trick:

  1. Add a logical column to the fact table
  2. Hard-code the expression for the column in each Logical Table Source (see the sketch after this list)
  3. Bring the column through to the relevant subject area
  4. Incorporate it in reports as required, for example using a Narrative View.
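As a sketch of steps 1 and 2 (names hypothetical): add a logical column called “Data Source” to the fact table, and in each Logical Table Source override its expression with a hard-coded literal, for example 'ORCL' in the Oracle LTS and 'TIMESTEN' in the TimesTen LTS. Any analysis that then includes the column reports its own source:

SELECT "Sales"."Month YYYYMM" saw_0, "Sales"."Sale Amount" saw_1, "Sales"."Data Source" saw_2
FROM "Sales"

If the aggregate satisfied the query, the column shows TIMESTEN on every row; force the query back to Oracle with INACTIVE_SCHEMAS and it shows ORCL instead.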

Bringing it all together gives us this type of diagnostic view of our reports.

Summary

There’s a variety of ways to write bespoke test reports in OBI, but what I’ve demonstrated here is a very minimal way of overlaying a test capability on top of all existing dashboards. Simply create the Request Variable dashboard prompt, set the security so that only admins etc. can see it, and then add it to each dashboard page as required.

In addition, the use of a ‘data source’ logical column in a fact table tied to each LTS can help indicate further where the data seen is coming from.

Introduction to the BI Apps 11.1.1.7.1 – Product Architecture & New Configuration Tools

In my previous posting in this series, I looked at the new 11.1.1.7.1 release of the Oracle BI Applications at a high level, and talked about how this new release uses ODI as the embedded ETL tool instead of Informatica PowerCenter. Support for Informatica will come with patch set 2 (PS2) of BI Apps 11.1.1.7.x, giving customers the choice of which ETL tool to use (with the caveat that customers upgrading from 7.9.x will typically have to stick with Informatica unless they want to completely re-implement using ODI), but for this initial release at least, ODI and some new Fusion Middleware tools take over from Informatica and the DAC, giving us what could well be a much simpler architecture for supplying the underlying data for the BI Apps dashboards.

In this posting, then, I’m going to take a closer look at this new product architecture, and I’ll follow it with a more detailed look at how the various bits of ODI functionality replace the workflows, mappings, transformation operators and execution plans provided in earlier releases by Informatica and the DAC. For anyone familiar with the previous, 7.9.x versions of the BI Applications, the architecture diagram below shows the five tiers that this product typically implemented: tiers for the source data and data warehouse/repository databases, an ETL tier for Informatica and the DAC server, then two more tiers for the OBIEE application server and the client web browser.

[Diagram: the five-tier BI Apps 7.9.x architecture]

Communication between the tiers was – to put it politely – “loosely coupled”, with DAC task names corresponding to Informatica workflow names, each workflow containing a single mapping, and all of the connections and sources having to be named “just so”, so that every part of the stack could communicate with all the others. It worked, but it was a lot of work to implement and configure, and once it was up and running, in most cases customers were scared to change it, in case a name or a connection got out of sync and everything stopped working. Plus, Informatica skills are scarce in the Oracle world, and the DAC is an extra piece of technology that few DBAs really understood properly.

The 11.1.1.7.1 release of the BI Apps simplifies this architecture by removing the separate ETL tier, and instead using Oracle Data Integrator as the embedded ETL tool, with its server functions running as JEE applications within the same WebLogic domain as OBIEE 11g, giving us the overall architecture in the diagram below.

[Diagram: the simplified BI Apps 11.1.1.7.1 architecture]

Now anyone who read my series of posts back in 2009 on the 7.9.5.2 release of the BI Apps, which also used ODI as the embedded ETL tool, will know that whilst ODI 10g could do the job of loading data into the BI Apps data warehouse, it lacked the load orchestration capabilities of Informatica and the DAC, and wasn’t really set up to dynamically generate what have become, in ODI 11g, load plans. BI Apps 7.9.5.2 turned out to be a one-off release, and in the intervening years Oracle have added the aforementioned load plans and other functionality aimed at better supporting the BI Apps, together with two new JEE applications that run in WebLogic to replace the old DAC. These new applications, along with the ODI JEE agent, ODI Console and the ODI SDK, are shown in the more detailed BI Applications 11.1.1.7.1 logical architecture diagram below.

[Diagram: the BI Apps 11.1.1.7.1 logical architecture]

Oracle BI Applications 11.1.1.7.1 has two main product tiers to it, made up of the following components:

  • The Middleware (BI and ETL) tier: a WebLogic domain and associated system components, comprising the BI components delivered as part of OBIEE 11.1.1.7 (including Essbase and related applications) as one managed server, and another managed server containing the ODI Java components, including three new BI Apps-related ones: Configuration Manager, Functional Setup Manager, and the ODI Load Plan Generator
  • The Database (DW and Repositories) tier: for the time being Oracle only, comprising a data warehouse schema (staging + performance layer), and a repository database containing the OBIEE repository schemas plus new ones to hold the ODI repository and other ETL/configuration metadata used for configuring your system.

Essbase at this stage is installed but not used for the main BI applications, and the whole platform uses Fusion Middleware security (application roles and policies) along with the WebLogic embedded LDAP server to hold users and groups. A special version of RCU is used to set up the new BI Apps-related schemas and import data into them using Oracle database export files, so that the ODI repository, metadata tables and so forth are all populated before the first load takes place. Enterprise Manager Fusion Middleware Control is still used to manage and monitor the overall platform, but there’s now an entry for ODI along with Essbase, the latter of course being delivered as part of the 11.1.1.7 OBIEE platform release.


In the next posting in the series we’ll take a closer look at how ODI uses its JEE agent and mappings imported into its repository to load the BI Apps data warehouse, but what about the two new web-based configuration tools, Oracle BI Applications Configuration Manager (BIACM) and Oracle BI Applications Functional Setup Manager (FSM) – what do they do?

After you install OBIEE 11.1.1.7 and then the BI Applications 11.1.1.7.1, the BI Apps installer extends the BI domain to include FSM, BIACM and the ODI Load Plan Generator, along with some other supporting applications and libraries required for the full product. Load Plan Generator works behind the scenes to build new load plans in a similar way to the Execution Plan “Build” feature in the DAC, and the two web-based tools perform the following functions:

  • Oracle BI Applications Configuration Manager performs system-wide setup tasks such as defining sources, selecting BI Apps modules and performing other “once-only” tasks, similar to the Setup feature in the DAC Console.
  • Oracle BI Applications Functional Setup Manager is then used to list out, and track progress against, the various tasks required to configure the BI Applications modules, or “Offerings”, that you selected in the Configuration Manager.

Most importantly though, these tools connect directly to the ODI repository, so data sources you set up here get pushed down to ODI as data servers in the ODI master repository; and load plans you set up (such as the one in the screenshot below, which loads configuration tables) are ODI load plans, whose progress you can track either from within ODI or from within these applications themselves.

[Screenshot: a load plan created from Configuration Manager]

I haven’t had a chance to properly “diff” the RPD used in BI Apps 11.1.1.7.1 against the previous 7.9.x ones, or do a similar exercise for the underlying database data model, but at first glance the new RPD is at least recognisable, albeit with new sources and subject areas for the Fusion Apps, Oracle Transactional BI (OTBI), Real-Time Decisions and the like. The web catalog also looks familiar, but has new content around the new applications along with additional content for the existing ones.


So, we’re at the point now where we can start to think about loading data into the BI Apps data warehouse, and in tomorrow’s post we’ll take a look at what’s involved in a BI Apps 11.1.1.7.1 ETL load, and also look into how GoldenGate can now be used to extract and stage data prior to loading via ODI. Back tomorrow…