Tag Archives: Hyperion Essbase
Essbase and EPM Integration Improvements in OBIEE 11.1.1.7
One of the major new feature areas in OBIEE 11.1.1.7, but one which has so far received very little attention, is the significant improvement in integration between Essbase, the Hyperion EPM Suite, and OBIEE 11g. The integration between EPM Workspace and OBIEE's Presentation Services, which disappeared when 11g came along, is now back, along with installation and security integration, a new version of SmartView that (properly) supports OBIEE as a data source, and the ability to spin off aggregates from the RPD into Essbase ASO cubes.
Now some of these features of course made an appearance in the earlier, 11.1.1.6.2 BP1 release, and integration between OBIEE 11g and EPM Suite has been happening on-and-off right back from the OBIEE 10g days, but where we’re at now with OBIEE 11.1.1.7 is the delivery of a number of things that customers have long been asking for, including:
- The ability to run OBIEE from within EPM Workspace, with single sign-on between the two
- Shared security provisioning and organisation between Essbase and OBIEE, through application roles and policies
- The ability to install Essbase and the other EPM tools into the same WebLogic domain as OBIEE, using a single installer
- A proper Excel (and Word, PowerPoint, Outlook) add-in for OBIEE, with the ability to author reports as well as run existing Answers-authored ones
This is actually one of a number of new feature areas in 11.1.1.7 that have had little publicity; as well as better Essbase integration, there's now support for multi-tenancy in the RPD and catalog, Hadoop integration (which we covered in a blog post last week), the View Suggestion Engine, the inevitable changes to MUD, and quite a few others, some of which I'll try and cover over the next few days and weeks. For now though, let's look at these new Essbase/EPM integration improvements, starting with the installation of Essbase and its related tools into the OBIEE WebLogic domain.
As I mentioned back in my OBIEE 11.1.1.7 New Features posting a few weeks ago, the OBIEE product installer now offers Essbase as an installation option alongside OBIEE, Real-Time Decisions (RTD) and BI Publisher. As with RTD, Essbase isn't included in the base OBIEE+ license, but it is included in Oracle BI Foundation Suite, the product package that Oracle encourage new customers to take out, and which includes OBIEE, Scorecard & Strategy Management, Essbase and BI Mobile. Selecting Essbase during the install process installs it, and the other EPM Suite tools, in the same WebLogic domain as OBIEE, and you can see Essbase within Fusion Middleware Control as a product – separate from OBIEE – that you can manage and monitor.
Essbase Server, and Essbase Studio (the client/server tool used to design and build Essbase cubes) are also now controlled and monitored through OPMN, something that’s been a feature of EPM Suite for several releases now but which is, of course, new for OBIEE.
[oracle@obiee11117 ~]$ cd /home/oracle/obiee/instances/instance1/bin
[oracle@obiee11117 bin]$ ./opmnctl status
Processes in Instance: instance1
---------------------------------+--------------------+---------+---------
ias-component | process-type | pid | status
---------------------------------+--------------------+---------+---------
essbasestudio1 | EssbaseStudio | 12682 | Alive
essbaseserver1 | Essbase | 12685 | Alive
coreapplication_obiccs1 | OracleBIClusterCo~ | 12686 | Alive
coreapplication_obisch1 | OracleBIScheduler~ | 12687 | Alive
coreapplication_obijh1 | OracleBIJavaHostC~ | 12683 | Alive
coreapplication_obips1 | OracleBIPresentat~ | 12684 | Alive
coreapplication_obis1 | OracleBIServerCom~ | 12689 | Alive
[oracle@obiee11117 bin]$
So something that’s been an issue for EPM customers upgrading from OBIEE 10g to 11g was the removal, at the time, of the ability to integrate OBIEE’s Presentation Services within EPM Workspace, and the SSO link between the two products. Back with OBIEE 10.1.3.4 there was an admittedly complicated but supported and working process to integrate the two products together, allowing EPM Workspace customers to “skin” OBIEE to look like Workspace and run the two products together, albeit with separate report catalogs, security models and so forth.
This, coupled with the removal of OBIEE's Hyperion custom authenticator for the RPD, left many EPM Suite customers upgrading to OBIEE 11g in the lurch, leading to workarounds such as this one that we put together recently for one of our customers. Well, this integration is (mostly…) back with OBIEE 11.1.1.7, so let's see what it does, and what functionality is still missing compared to OBIEE 10g.
First off, Essbase and EPM Suite as installed as part of an OBIEE installation aren't quite the same as EPM Suite installed standalone; most importantly, Essbase in this OBIEE incarnation has a different security model to "standalone" EPM Suite, in that it uses the same system of application roles and policies as the Fusion Middleware 11g-centric OBIEE 11g, rather than the Shared Services users and groups that standalone EPM Suite uses. Also, the OBIEE 11.1.1.7 install includes just the following EPM Suite products:
- Essbase Server, including Essbase Agent, Essbase Studio, Essbase Administration Services, Provider Services
- Financial Reporting
- Calculation Manager
Therefore you don’t get Planning, Web Analysis and so forth, and you can’t subsequently install them into the domain and Fusion Middleware Control afterwards – so think of Essbase and the EPM Suite tools in this context as an add-on and complement to OBIEE, not a full installation of EPM Suite in their own right. Moreover, the majority of Essbase administration tasks which for standalone EPM Suite installs are performed through MaxL, Shared Services and EAS are performed through Fusion Middleware Control, and Essbase high-availability and clustering works different within this context, for example. The standard product architecture diagram for OBIEE and Essbase combined within the 11.1.1.7 release therefore gets updated, with a number of products added to the Java components, and System components part of the diagram, like this:
Now, when installed as part of OBIEE 11.1.1.7's WebLogic domain, EPM Workspace is available at http://[machine_name:port]/workspace, and when you launch it you're presented with a view into the BI Catalog, and menu options to administer the various EPM and BI tools from one place.
Within this catalog are both OBIEE objects such as analyses, dashboards and agents, and EPM objects such as Financial Reporting and SmartView reports.
There are limits to this EPM/BI Catalog integration though – FR reports, for example, can only be opened using the File > Open dialog in EPM Workspace, with an error message showing if you just click on the report itself in the BI Catalog view within EPM Workspace. But SSO between Workspace and OBIEE seems to work (as in, you don't need to re-enter your BI password when clicking on an analysis in the Workspace Catalog view) as both OBIEE and EPM are working off the same Fusion Middleware security model, the lack of which explains why the feature disappeared for so long after OBIEE 11g was introduced.
Now that OBIEE and Essbase share the same security, the need for the old HSS Custom Authenticator has now gone away, though of course this will only be of use if a customer has moved their Essbase installation into the OBIEE domain, with standalone EPM Suite installations still needing the security workaround mentioned earlier in this article. There’s no upgrade path from standalone EPM Suite installations to this integrated arrangement, so most probably any users of Essbase within this new 11.1.1.7 context will be installing it “net-new”, with the main objective being to enhance their existing BI setup rather than merging their separate BI and EPM platforms into one.
As you've probably picked up by now, much of this new integration ability is down to security being harmonised across both Essbase and OBIEE; or, more accurately, Essbase now having an option to use Fusion Middleware 11g security rather than Hyperion Shared Services. So what do Essbase and FMW11g security look like in practice? Let's head over to Fusion Middleware Control, in particular the Application Policies administration screen, to take a look.
The big difference when Essbase runs as part of an Oracle BI domain is that authentication and authorization for Essbase use Fusion Middleware security rather than Shared Services or native Essbase security. Although Essbase Administration Services ships with OBIEE 11.1.1.7, you should use Fusion Middleware Control to enable access to particular Essbase databases, and to grant permission to access tools such as Financial Reporting or Administration Services; the only security role for EAS and MaxL in this setup is to create the Essbase data and metadata filters. These filters are then assigned to users through FMW security resource permissions and application policies, which are then granted to application roles and thereby to users.
Whilst this probably seems like an over-complicated nightmare to traditional Essbase users, it does have the major advantage that one set of application roles granted to users within a Fusion Middleware system can cover both OBIEE and Essbase permissions, and there’s no need to link to Shared Services or support Native Essbase security. We’ll cover the implications of this more in some future blog posts, but this is the enabling technology that makes the rest of this integration make sense.
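To make this concrete, policy grants of this kind can be scripted through the OPSS WLST commands rather than clicked through in Fusion Middleware Control. The sketch below is purely illustrative – the stripe, resource type, resource name and application role names are invented for the example, and the actual Essbase resource types in a given install may differ – but it shows the general shape of granting an Essbase resource permission to an application role:

```python
# Hypothetical sketch, run from wlst.sh after connect()-ing to the
# OBIEE Admin Server. The appStripe, principalName and permTarget
# values below are made-up examples, not taken from a real install;
# check your own policy store for the real names.
grantPermission(appStripe="obi",
    principalClass="oracle.security.jps.service.policystore.ApplicationRole",
    principalName="BIAuthor",
    permClass="oracle.security.jps.ResourcePermission",
    permTarget="resourceType=oracle.essbase.application,resourceName=Sales",
    permActions="use")
```

The point being that once the grant exists against an application role, every user holding that role picks up the Essbase access alongside their OBIEE permissions, with no Shared Services provisioning involved.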
With Essbase integrated into the OBIEE BI Domain, you can also now use Essbase as an aggregate persistence target, though this feature comes with the same (slightly strange) approach and limitations that we first encountered when it was introduced with OBIEE 11.1.1.6.2 BP1; although there's no longer the requirement for the Essbase server to be used only for aggregate persistence, you still have to name the Essbase database in a particular way, it's ASO-only, and the Aggregate Persistence Wizard still creates a separate ASO database for each aggregation (similar to Oracle Database materialised views) rather than one single cube covering all aggregations. In practical terms, I'm not sure how much you'd use this versus creating your own Essbase cube in Studio against the whole RPD business area, but it might be useful for OBIEE developers who otherwise don't know Essbase.
So finally, the other major Essbase-related new feature in OBIEE 11.1.1.7 is SmartView, the successor to Oracle BI Office. But that's a topic in itself, so I'll cover it in the next posting.
Can Endeca Provide an Alternative to OBIEE for Fast, Complex Ad-hoc Analysis?
Towards the end of last week I was pulling together an article for Oracle Magazine on Oracle Endeca Information Discovery, and how it can now use OBIEE's BI Server and repository as an additional data source. I won't go into the details of that connectivity now as it'll be covered in the article itself, but one of the reasons I wanted to write this column was because of the dilemma I outlined in this blog post earlier in the month – is it possible to provide an ad-hoc query tool that gives users ultimate flexibility in how they create the report, but always guarantees sub-second response times? In that blog post, and in the Kevin McGinley blog post that inspired it, I speculated that Endeca Information Discovery 2.x might be able to provide that combination of features, and I was keen to try this out in a "real-world" reporting scenario. If you're new to Endeca and want to do a bit of background reading first, check out our Endeca focus area and these particular postings on the Endeca technology stack:
- Oracle Endeca Week : So Just What Is Endeca?
- Oracle Endeca Week : What is the Endeca MDEX Engine?
- Oracle Endeca Week : Where Does Endeca Fit with Oracle BI and DW?
- Oracle Endeca Week : How Do You Develop Endeca Latitude Applications?
- Creating an Oracle Endeca Information Discovery 2.3 Application Part 1 : Scoping and Design
- Creating an Oracle Endeca Information Discovery 2.3 Application Part 2 : Preparing and Loading Data
- Creating an Oracle Endeca Information Discovery 2.3 Application Part 3 : Creating the User Interface
Update 25-Mar-2013: Endeca Information Discovery 3.0 is now out, and you can read about my first impressions of this updated release here.
So in the article, I used the OBIEE 11.1.1.6 v207 SampleApp as my data source, in particular the Airline Flight Delays dataset that exists in cut-down form in this VirtualBox VM, and in expanded form on our Exalytics demo server. The screenshot below shows this dataset in an OBIEE dashboard, which then allows the end-user to analyze flight statistics and associated delays over a number of years using graphs, pivot tables, maps and other interactive visualisations.
All of this works well, and a set of pre-built summary tables that also come with the VM ensure that the reports in the demo run fairly fast – as you'd expect from a demo system provided by Oracle to showcase the tool. But if you go outside of the dashboard and use the analysis editor to create a new report, particularly one that involves lots of subtotals, time-series functions, outer joins across multiple fact tables and so forth, then it can sometimes be many seconds, maybe minutes, before it returns data – basically, if you try hard enough you'll construct a query that returns results too slowly for a demanding user. I had this issue a couple of weeks ago when I visited a client looking to upgrade from OBIEE 10g to 11g, who were trying to reconcile what appeared to be two contradictory aims: provide a reporting environment where the user could request just about anything, however complex or expensive the query, with the expectation that the answer would come back straightaway.
Of course in reality it's a tough call to do this consistently with OBIEE, but could Endeca Information Discovery meet this requirement, with its in-memory, column-store search analytic database back-end and its web-based dashboard interface? Looking at the screenshot below from the Quickstart demo application that comes as part of the Endeca Information Discovery 2.4 download, it's not a million miles away from an OBIEE dashboard, and if it provides guaranteed split-second response times, could it be a viable replacement for Answers, or at least an option that we could turn to for a particular type of user?
It’s worth saying at this point that Oracle, quite clearly, don’t position Endeca Information Discovery as a replacement or upgrade for OBIEE; instead, as I explain in this presentation from last year’s Oracle Openworld, Oracle position Endeca Information Discovery in the following way:
- Primarily, as a BI tool that enables analysis of unstructured and semi-structured data, as well as more traditional structured (measures, dimensions etc) datasets
- As a “data discovery” tool, through its ability to bring together loosely-related datasets and analyse them using search and lexical analysis tools
- As an agile, rapid application development tool for BI, because the key/value-pair record-orientated database provided by the Endeca Server requires little upfront data modelling, using a “schema on read” approach rather than the traditional “schema on write” used by relational databases and OBIEE’s semantic layer.
But … Endeca has a number of properties that might be of interest to more traditional BI users looking for in-memory ad-hoc analysis:
- The in-memory key-value store database it uses doesn’t have the same costs around data manipulation, table joins and disk access that traditional databases have, and the column-based storage it uses is particularly suited to selecting from sets of dimension members
- The search-and-click-orientated interface might actually suit users better now, given that everyone uses Google and tools such as QlikView have taken off
- It’s got a good range of data visualisation components, and a web-based dashboard that appears to do a similar job to OBIEE’s interactive dashboard.
So it seemed a good opportunity to take some data from an OBIEE 11g repository, load it up into an Endeca Server datastore, and see how well it actually worked as a replacement for Answers, even though this isn’t what the product was intended for.
The article goes through the steps for loading up the Endeca Server datastore through a connection to OBIEE's BI Server, and it works pretty well, although I did have to create a special version of the SampleApp Flight Delays business model that removes the numbering from the business model tables, as Endeca doesn't allow numbers as prefixes for its datastore attribute names. I took most of the tables from the Flight Delays subject area, and all of the rows, with the new BI Server integration feature creating a skeleton project for me in Endeca Information Discovery Integrator.
The article then goes on to create a basic dashboard, using a few search and guided navigation components on the left-hand side of the dashboard page, and a selection of basic visualisations on the right, looking something like this:
So far so good, and for an article whose focus is mainly around the Endeca-OBIEE integration piece, it illustrates the feature well. But how well does this work as an ad-hoc query tool, how fast are the queries, and how might a typical end-user get on trying to create their own reports, graphs and other visuals?
Initial results were encouraging; response times were consistently fast, albeit within the limited, controlled environment provided by the dashboard. All interactions were really in the form of filtering and searching of data, but as you can see from the performance stats below provided by the Endeca Information Discovery Studio control panel, all response times on all dashboards were under a second.
Compare this with similar stats for the OBIEE dashboards and analyses, and whilst most queries also came in at around a second or so, some of them, particularly the more complex ones (and, ironically, the ones using data from TimesTen) were ten, fifteen or more seconds on average, and this is for a system with predefined queries, aggregate tables and, most probably, caching enabled as well. Loading the SampleApp flight delays data into Endeca took around 20 minutes or so and took up around 600MB of memory on the VM, so based on this initial test, Endeca looks an interesting alternative to Answers and a regular RDBMS back-end database.
But it’s when you come to use Endeca Information Discovery’s dashboard as an ad-hoc reporting tool that you begin to see the limitations, at least with Studio, the current web-based front-end for the Endeca Server. Going back to the Quickstart dashboard for a moment, the basic analysis process that an end-user would use with this tool would be as follows:
- Use the search and guided navigation tools to identify, and then focus in-on, the records that are of interest to you
- Then, using the graphing, tabular, tag cloud and other visualisations, aggregate and analyse the measures within that subset of records
To take an example, you might use the search and guided navigation tools in Quickstart to focus-in on mountain bike sales, in the UK, made in the second quarter of calendar year 2007, and once you’ve done this you would see a dashboard page looking like the one below.
The Endeca Server engine is good at aggregating data on the fly, and as the whole dataset should be held in-memory, results should return fast. In fact, this dashboard doesn’t look a million-miles away from an OBIEE dashboard, with the Breadcrumbs component on the left looking just like a set of dashboard prompts on an OBIEE dashboard. And this works pretty well, with the main issue then coming from how suitable a tool Studio is for users creating their own reports.
For example, going back to the dashboard I created for the Oracle Magazine article, suppose I wanted to add a cross tab (pivot table) component to the dashboard. Selecting it from the list of new components is easy enough, but then the first dialog you’re presented with when you come to configure the rows and columns is this one:
errrrr … what's an EQL query? In fact, EQL is much like OBIEE's logical SQL and therefore fairly easy to write, but you still need to know the names of attributes, the format that aggregation functions need to take, and of course the particular syntax EQL uses. In this example, to create a cross tab that breaks down flights by the percentage that were cancelled, over region and quarter, the EQL would look something like this:
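A sketch of that EQL follows; note that the attribute names used here are hypothetical stand-ins, as the real names depend on how the Flight Delays datastore was loaded from the BI Server:

```sql
/* Illustrative only: "Flights_Cancelled", "Flights_Total", "Dest_Region"
   and "Dep_Quarter" are invented attribute names standing in for whatever
   the actual datastore attributes are called */
RETURN CancelRates AS
SELECT 100 * SUM("Flights_Cancelled") / SUM("Flights_Total") AS "Pct_Cancelled"
GROUP BY "Dest_Region", "Dep_Quarter"
```

The RETURN … AS SELECT … GROUP BY shape is the basic pattern for aggregating EQL statements, with the GROUP BY attributes becoming the row and column edges of the crosstab.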
Once you know EQL, and particularly if you've got a copy of Endeca Information Discovery Integrator open in front of you with the attribute names listed, it's not exactly rocket science, but it's still a process more like creating ADF Data Visualization Components than working with OBIEE's Analysis Editor. According to reports around the upcoming 3.0 release of Endeca Information Discovery, the tabular and crosstab components are being reworked, presumably to make them easier for end-users to set up, but where we are now is a long way from the usability of tools like the OBIEE Analysis Editor or even Microsoft Excel.
Once the crosstab is configured and on the screen, options are limited for hierarchy drilling, pivoting, subtotalling and all the other pivot table-style operations that "power users" would expect to see. Attributes, equivalent to columns in OBIEE's subject areas, can be nested within each other, but there's no equivalent to OBIEE 11g's row and column-swapping, selection steps, calculated items and groups, and all the other pivot table features that power users require, and that are typically the cause of the expensive and complex SQL you sometimes see associated with an OBIEE analysis. The screenshot on the left below shows a typical Endeca Information Discovery Studio crosstab, whilst the one on the right shows a pivot table from OBIEE 11.1.1.6, featuring right-click interactions to further manipulate the report dataset after it's returned from the underlying database.
It's a similar story around charts in Endeca Studio. Adding a chart component to a dashboard page and selecting the Properties dialog this time requires that you select from a list of "views", rather than type in an EQL query, with the average end-user probably not knowing what a view is or how to get hold of one.
Views are in fact metadata objects within Studio that let you define the equivalent of a database view over records and attributes in the datastore, designating some of the attributes as "dimensions" and others as "measures". Except the dimensions are more like member lists, with no obvious place to define hierarchies or any of the other structures associated with OLAP-style reporting. Views are actually quite cool and, once you get the hang of them, a great addition when building Studio applications, but it's another technique to learn compared to creating crosstab reports, and with the creation of views we're starting to get into RPD and catalog-style territory, which probably isn't where you want end-users spending their time.
Update 24-March-2013: Oracle Endeca Information Discovery 3.0 has now been released, and the Crosstab component (now renamed Pivot Table component) now uses the same “views” datasource as Chart components, taking away the need to type in raw EQL when defining a crosstab component’s underlying query.
Another issue is a bit more subtle and concerns how the Endeca Server arranges data in the record-based datastore. When you load data into an Endeca Server datastore, you effectively denormalize all the input tables and files into a single large “table” of records, with each record being made up of a number of attributes (used for data selection) and measures (also attributes, but typically aggregated and displayed in numeric form on reports). This works reasonably well if you’re just loading the equivalent of a single fact table and its dimensions into a datastore, with the datastore records taking on the granularity of fact table rows or transactions in your source system. But typically, OBIEE users want to run queries that span multiple fact tables, and once you start trying to mix in fact and transaction sources of differing granularity, you start having to use workarounds and partner solutions such as this in order to have it make some sort of sense to the end user.
None of this is meant to imply any criticism of Endeca Information Discovery, or indeed to let OBIEE off the hook; Endeca was designed first and foremost as a data discovery tool, and in a way the visualisations and analytics are a bonus. Oracle have clearly stated that the graphing and analysis elements are one of the major areas of the product they're looking to improve, probably making more use of Oracle's own data visualisation components and presumably moving away from the Liferay portal and Tomcat on which Endeca Studio is built. But it's no good comparing the relatively simplistic Endeca data analysis components with the more complex ones that OBIEE uses and saying they are faster, because it's precisely those extra features in OBIEE's components that make its queries run slower; who knows how well Endeca queries would run given a query of similar complexity?
I think what this exercise showed me is that, whilst the Endeca Server engine is most probably an excellent in-memory back-end for an ad-hoc query tool, the current generation of end-user reporting tools that come with the product just aren't up to the standard that OBIEE power-users would require; instead, they're clearly aimed at developers, who will build a dashboard and then either use it themselves as part of a data discovery exercise, or give it to users in a "locked-down" form. And this is the key thing – if we're considering Endeca as an alternative to OBIEE for these power users, they're going to want to create their own reports, manipulate their own data and so forth, and not having this ability more or less makes it unusable for them. The issue is compounded with Endeca in that the only interface into the Endeca Server datastore is via web service calls, rather than ODBC or JDBC, so Studio is more or less the only game in town until Oracle either make the interface more standard, or start to use OBIEE technologies as a way of analysing and visualising Endeca data.
So … close, but realistically a non-runner due to usability issues. If the requirement really is high-speed ad-hoc analysis of data, your best bet is still probably Essbase or OBIEE, ideally in conjunction with Oracle Exalytics, but one wonders what Oracle have planned in terms of products to meet this need – some sort of combination of Endeca (for in-memory filtering and navigating across all types of data), Essbase (for aggregation and calculation) and OBIEE (for scalability, metadata management and data federation), combined with the best of each tool's front-end capabilities, would be very interesting – a kind of "Project Fusion" for BI, if you will.
Improvements to Essbase Integration in OBIEE 11.1.1.6.2 BP1 : ASO Aggregate Persistence, Combined Install & Systems Management
The recent 11.1.1.6.2 BP1 patchset for OBIEE 11g provided a bunch of new features including trellis charts, Oracle BI Mobile and the repository Model Checker, but it also sneaked in another set of new features that will be of particular interest to Essbase users: closer integration between Essbase Server and OBIEE, and the ability to persist OBIEE repository aggregates in an Essbase database. So how does this work?
As far as I can tell, nothing's been officially announced, but Oracle Support Doc ID 1471661.1 details how this feature is set up and enabled. Whilst the ability to persist repository aggregates in an Essbase database is of course interesting, in my opinion it's how Oracle have enabled this feature, and the integration it's required between OBIEE and Essbase, that's even more interesting. For anyone who's new to OBIEE, or to OBIEE's Aggregate Persistence Wizard feature, the idea is that the BI Administration tool provides a wizard that lets you select one or more measures from the logical business dimensional model, select a level from each hierarchy that the measures are dimensioned by, and then specify a physical database that the aggregates will be created in, like this:
The output of the process is a script for use with the nqcmd utility, which acts as a command-line interface into the BI Server's ODBC interface. The script contains logical SQL statements that create the required aggregate tables and indexes, and then populate them. Finally, the script registers these aggregates back in the BI Server repository, mapped in as new logical table sources and registered at the correct levels of granularity.
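Running the generated script is then just a case of passing it to nqcmd; a sketch of the sort of call involved is below, though note that the DSN name, credentials and script path here are illustrative examples, not values from this particular install:

```
[oracle@obiee11117 bin]$ ./nqcmd -d AnalyticsWeb -u weblogic -p Password01 -s /home/oracle/create_essbase_aggs.sql
```

Here -d names the BI Server ODBC DSN, -u and -p are the credentials of a user with administration rights, and -s points at the script the Aggregate Persistence Wizard produced.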
Then, when users query the business model from the dashboard, if a query comes through that could make use of the aggregate tables created in the previous steps, the BI Server automatically redirects the query to these new aggregates, in most cases significantly speeding up query response time as the results have already been pre-computed and stored ready for use.
As the feature previously stood, you could create these aggregate tables in either Oracle Database, Microsoft SQL Server or IBM DB/2, with Oracle TimesTen for Exalytics also an option if you’re running on the Exalytics platform. What this new feature does though is make it possible to store these aggregates in an Essbase ASO (Aggregate Storage Option) database, a multi-dimensional data store that potentially offers faster response times than relationally-stored aggregates.
So far so good; the really interesting thing though is how this new feature has been enabled. To make it possible to quickly spin up Essbase databases to hold these aggregates, the installer for OBIEE 11.1.1.6 has a "hidden" option to install various Essbase components as well, something that's only available when you initially install OBIEE, and which is enabled through using a response file with some additional options selected. Once you've performed the install, you have to immediately patch OBIEE up to the 11.1.1.6.2 BP1 release and enable Essbase Studio integration through the NQSConfig.INI file, but once you do so and then log into Fusion Middleware Control, you'll see Essbase added to the list of targets, and Essbase elements in the coreapplication module as well.
What you’ve got now as part of this setup is the following Essbase products, installed along with OBIEE as part of a single Oracle BI Domain:
- Essbase Server 11.1.2.2.0
- Essbase Administration Services 11.1.2.2.0
- Essbase Studio 11.1.2.2.0
Essbase and EAS are now stopped, started and monitored through the same Fusion Middleware Control installation as OBIEE, with OPMN starting up Essbase and the WebLogic Admin Server controlling EAS. Essbase Studio is also part of the install, but it’s managed separately with no integration (yet) with Fusion Middleware Control. There are some significant limitations and restrictions with this setup though:
- It’s only available for new OBIEE 11.1.1.6.2 BP1 installations – if OBIEE is already installed you’ll need to de-install it, re-install OBIEE 11.1.1.6 and then patch it all up again to 11.1.1.6.2 BP1, enabling the Essbase features en-route
- It only works with the Essbase binaries shipped as part of this release, with no other versions supported
- Scale-out of Essbase isn’t permitted, and you can only use the Essbase server for aggregate persistence, nothing else
- There’s no upgrade path for Essbase databases created using this feature to later versions of Essbase, and
- Releases beyond 11.1.1.6.2 BP1 may require you to re-install the whole stack to get the upgrade.
In other words, this is for early adopters only and it's just there to support aggregate persistence into Essbase ASO databases. With all of that in mind, let's go through the aggregate persistence process now, using the SampleAppLite repository and data set as the source. Here's a quick look at SampleAppLite's repository before we do any aggregate persistence, and in the Physical layer you can see the stub entry for the Essbase database that's going to hold our aggregates, which corresponds to some new entries that go into the NQSConfig.INI file to enable integration with Essbase Studio.
Let’s step through the Aggregate Persistence Wizard now, and then we’ll look at how the aggregates get created in the Essbase database. We’ll start by checking that the repository is open online, then we’ll select Tools > Utilities > Aggregate Persistence. After specifying a name and location for the script that the process is going to create, I then select the measures that I want to aggregate using the wizard. These measures will then be aggregated using the default aggregation method specified for them in the repository’s business model and mapping layer.
I then select the dimension hierarchies and levels that I want to aggregate these measures by. All measures have to be aggregated by the same levels, and only a single aggregation can be specified by each run of the wizard (I’ll come back to the implications of this later on).
Next I select the connection pool, and therefore target physical database, for the aggregates. In this case I can now select an Essbase database as my target, rather than the usual Oracle Database, SQL Server or IBM DB/2 targets.
Finally, to close the process I press Next, Next and then Finish to exit the wizard and generate the script. Let’s take a look at what it’s produced.
The output from the Aggregate Persistence Wizard is a script that contains a logical SQL statement to create a specific aggregation, like this:
create aggregates "ag_F0_Re"
for "SampleApp Lite"."F0 Revenue Base Measures"("Revenue","Billed Quantity","Discount Amount")
at levels ("SampleApp Lite"."H0 Time"."Quarter", "SampleApp Lite"."H1 Products"."Products Brand", "SampleApp Lite"."H3 Orders"."Order Type", "SampleApp Lite"."H2 Offices"."Offices Organization")
using connection pool "DMA_DB"."localhost"
in "DMA
This script is obviously (a kind of) SQL script rather than an MDX or MaxL script, so how does it produce an Essbase outline and a populated Essbase database? The script itself is executed using the nqcmd BI Server ODBC command-line utility, with a syntax like this:
cd c:\oracle\Middleware\Oracle_BI1\bifoundation\server\bin
nqcmd -d coreapplication_OH1272047220 -u weblogic -p welcome1 -s c:\TEMP\create_ess_aggs.txt
Under the covers, the BI Server uses a special version of Essbase Studio to create the Essbase objects. With the 11.1.1.6.2 BP1 release of OBIEE, assuming you've performed the special install, three Essbase components get installed along with the regular OBIEE 11g ones, giving you the updated architecture diagram below (with the new Essbase components highlighted).
As you might have spotted, Essbase Studio sits within the architecture and is managed via OPMN, but by default it's not in the standard Oracle Instance that contains the BI Server, BI Presentation Server and now in this case, Essbase Server. Essbase Studio in this version has a slightly modified catalog that doesn't persist metadata in a database schema, and is called on an "as-needed" basis from a servlet specially created to provide datamart automation for the BI Server. Requests come in from the BI Server in XML format via HTTP, and Essbase Studio then takes the aggregate definition, checks back with the BI Server repository to get the details, creates an internal "cube model" and then uses it to generate an outline for the Essbase database. Once this is done, it then generates an Essbase rules file containing SQL statements against the BI Repository's database model, and then uses it to populate the Essbase database, like this:
Once the Essbase database is created, you can switch over to the Essbase Administration Services console that also comes as part of this setup, where you’ll see that it’s an Aggregate Storage Option database that’s been created, with an outline and rules files generated by Essbase Studio from the script definition.
You can also see the Essbase database within Enterprise Manager, listed as an application under the main Essbase server entry.
Whilst back over in the BI Administration Tool, you can see the new Essbase database mapped in to the Physical layer, and as logical table sources in the Business Model and Mapping Layer.
Making use of the new Essbase aggregates is fairly automatic; you just run queries through the dashboard or analysis editor as normal, and in the background where appropriate the BI Server will direct queries to the Essbase database, rather than the detail-level relational data source, something you can check by turning on query logging and then checking the nqquery.log file.
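If you want to script that check rather than eyeball the log, a few lines are enough. This is only a sketch: the sample log excerpt and the exact "Sending query to database named …" phrasing are assumptions based on typical BI Server query logs, so adjust the pattern to match your own nqquery.log.

```python
import re

# Illustrative log excerpt only -- the real nqquery.log format varies by
# version, and these lines are an assumption, not an exact transcript.
sample_log = """
-------------------- SQL Request: SELECT Revenue FROM "SampleApp Lite"
-------------------- Sending query to database named DMA_DB (id: <<1001>>):
"""

def routed_databases(log_text):
    """Return the physical database names the BI Server routed queries to."""
    return re.findall(r"Sending query to database named (\S+)", log_text)

# A hit on the Essbase connection pool's database (here assumed to be
# DMA_DB) means the aggregate, not the relational source, answered the query.
print(routed_databases(sample_log))
```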
So, overall, what’s the verdict? Well, it all works as advertised, but … it’s kind-of missing the point when it comes to using Essbase as a data source. The Aggregate Persistence Wizard only creates single aggregations whereas of course an Essbase cube contains potentially many, many aggregation points, so if you use the Aggregate Persistence Wizard to create your Essbase data, you’ll (a) spend ages doing it as you’ll need to create every aggregation separately, and (b) you’ll end up with lots and lots of separate Essbase databases. That said, for the task in-hand (aggregating specific slices of a logical business model) it does the job and potentially stores the data more efficiently, and more effectively, than in a relational data store.
For me though what’s more interesting is the increased integration between Essbase and OBIEE. Essbase Server along with Essbase Administration Services and a cut-down version of Essbase Studio all get installed alongside OBIEE, and are managed using a single instance of Fusion Middleware Control. What’s not there yet is integrated security, and of course there’s no EPM Workspace, SSO between EPM and OBIEE applications and so on, but hopefully it’s one more step towards complete integration between the EPM and OBIEE toolsets.
Hyperion Profitability & Cost Management – Overview
Recently I have been doing a lot more work on the Oracle EPM stack than on the Oracle BI stack. So, I will be writing more on the various Oracle EPM products like HFM, Planning, FDQM etc in the forthcoming weeks. To kick start the series of postings, I thought I would begin with an article on Hyperion Profitability & Cost Management, also popularly known as HPCM. It is one of those products that is often overlooked, due to the overlap of features it has with other products like Essbase & Planning. It is a targeted product with a solid technical foundation and uses Hyperion Essbase as its backend. At the outset, HPCM primarily provides functional users with the ability to automatically allocate costs & revenue to various departments and accounts, thereby giving the ability to do proper & complete profitability reporting.
HPCM has 3 main sweet spots:
1. Every company will have indirect costs. For example, a consulting company where revenue is obtained through delivering projects will have a lot of indirect costs like HR costs, admin costs etc. HPCM provides the ability to allocate those costs back to the projects so that proper project profitability is derived. How the costs are allocated is defined through HPCM itself. For example, if a company is running say 3 consulting projects with 20, 30 & 40 resources each, then the indirect costs are allocated back to the projects based on the number of people (or time logged etc) in each project.
2. Every company will be storing the incoming revenue & costs in a ledger. For various reasons, even the direct costs & revenue might not actually be tied back to a project (in the consulting company example above). So, there might be a need to allocate the direct costs & revenue to the projects as well (allocation possibly by head count etc).
3. Allocation of costs is pretty dynamic in nature depending on the type of business, and it can vary quite frequently. So, the key is to ensure that the allocation logic can be changed frequently and easily. One more key point is being able to trace the lineage back to the source, to see how the costs were derived.
HPCM provides all 3 of the above. If you are an Essbase or a Planning person, you could argue that you can do the same thing using those 2 products themselves. Though true, in many cases cost & revenue allocation rules are defined by functional users, and it is not practical for functional users to create Business Rules/Calculation Scripts every time there is a change. In addition, though Essbase is very good, it is very difficult to derive data lineage from a calculation script to find out how the costs were allocated. That's the main reason why HPCM is positioned primarily at business/functional users for providing that cost & revenue breakdown.
Though I have mentioned that HPCM is a functional tool, its underlying technology is very interesting. It has a relational repository that stores the metadata related to HPCM. In addition, each HPCM application has 2 Essbase databases: one is a Block Storage cube which is used for the allocations & calculations, and the other is a reporting Aggregate Storage cube. Pushing data from the BSO cube to the ASO cube is available out of the box from HPCM. One important point to note: changes to dimensions, changes to metadata, and pushing data from BSO to ASO are all achieved within HPCM without writing any external code/scripts. Everything is done out of the box. This architecture is shown below.
In addition, the most interesting aspect of HPCM is the way it handles dimensions, which is what we will be covering in this article today. HPCM uses EPMA for managing its dimensions & attributes, and as an application it has 3 types of dimensions:
1. System Dimensions – There are 2 system dimensions, Measures & AllocationType. Generally, while creating an HPCM application through the wizard, we can pre-create these 2 dimensions. AllocationType is used by HPCM internally for doing allocations, and it is generally not necessary to make any changes to it. The Measures dimension, however, is the most important dimension that HPCM uses for pushing costs & allocating them; we can create custom members in it if needed.
2. POV Dimensions – HPCM supports up to 4 point of view dimensions. These dimensions are generally for storing Year, Period, Scenario & Version. True to their name, they are used as POVs and are not used directly in any allocation (the POVs are always fixed in the calculation scripts).
3. Business Dimensions – Business Dimensions are those dimensions where allocations happen. These dimensions drive the allocation logic.
In addition, HPCM also supports alias and attribute dimensions. For this article, I will use a simple case demonstrating how to go about allocating HR costs in a consulting company recognising its revenue through projects. Let's assume that on a monthly basis we record the HR costs (including salary paid to HR, other misc costs etc). Let's also assume that we have 3 projects running in the company with the following break-up:
a. Project A – 300 people full time
b. Project B – 500 people full time
c. Project C – 200 people full time
We will start off with creating the application through the Application Wizard (pre-create System Dimensions) and then we shall define the necessary dimensions.
We basically have 2 business dimensions – one for Accounts which will hold the HR Costs. Then we have the Project dimension which will record the revenue and costs specific to the project.
Let's deploy this application and then log in to the application through Workspace.
In HPCM, all allocation happens through stages, and HPCM supports up to 9 of them. Each stage also supports intra-stage allocation. Let's try to understand what this means from a multi-dimensional Essbase database standpoint. In our example, we will have 2 stages. The first stage will have just the Accounts dimension – basically HRCosts in the Accounts dimension will flow from Stage 1 to Stage 2 and will get allocated in Stage 2. So, Stage 2 will contain both the Accounts and Project dimensions.
After creating these 2 stages (ensure you also have a POV defined), let's go ahead and deploy this to Essbase from the Manage Database screen (both the Calculation & Reporting databases). You will notice 3 things:
1. There will be 2 Essbase databases, one suffixed with the letter C and the other suffixed with the letter R. The C database is the Block Storage database that is used for allocation; the R database is the Aggregate Storage database that will be used for reporting.
2. You will notice that for each stage there will be a corresponding set of dimensions prefixed by the Stage prefix given at the time of creation. So effectively, if there are 2 stages with 2 dimensions each, then Essbase will have 4 dimensions (though the 2 dimensions might be the same in EPMA).
3. You will also notice that each dimension will have a dummy member called NoMember. This is one of the most important members that controls the grain of the data. This member is the key in loading multi-grain data for allocation into HPCM.
Now that we have the Essbase cubes deployed as well, let's try to understand how the allocation logic works. To begin with, let's assume that for the month of Jan 2011 the HR costs for the consulting company are, say, 1000 USD as shown below
This is the input data into Stage 1. To load this in, we will create a text file and load it directly into Essbase. There are 3 options to load data into HPCM:
1. Manual Data Entry – HPCM provides a screen where we can update data manually.
2. Staging Tables – We can load the data temporarily into a set of staging tables, and then from within HPCM we can push the data from Staging tables into Essbase.
3. Directly loading data into Essbase
In our case, we will load the data directly into Essbase as that will give more clarity on how HPCM works. To load data for Stage 1, remember we have a total of 9 dimensions in Essbase (Measures, AllocationType, Year, Scenario, Period, Version, ACAccount, PRAccount, PRProject). But our input data of HRCosts comes at a grain of only 7 dimensions (Measures, AllocationType, Year, Scenario, Period, Version, ACAccount). So, to load this in we will have to use the NoMember intersection of the remaining 2 dimensions (PRAccount & PRProject). The input data file to Essbase is shown below
CostInput,DirectAllocation,2011,Jan,Actual,Working,HRCosts,[PRAccounts].[NoMember],[PRProgram].[NoMember],1000
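This padding of below-grain dimensions with NoMember is mechanical enough to sketch in a few lines of Python. Note this is just an illustration of the principle: the dimension names and field order follow the example record above, and the real names depend on the stage prefixes in your own application.

```python
# Full dimensionality of the calculation cube, in the field order of the
# example load record above (names are illustrative, not fixed by HPCM).
ALL_DIMS = ["Measures", "AllocationType", "Year", "Period", "Scenario",
            "Version", "ACAccount", "PRAccount", "PRProject"]

def pad_record(values, data_value):
    """Build a full 9-field load record, filling every dimension below the
    data's grain with its NoMember placeholder."""
    fields = [values.get(d, f"[{d}].[NoMember]") for d in ALL_DIMS]
    return ",".join(fields + [str(data_value)])

# The 7-dimension HRCosts input, padded out to the cube's 9 dimensions
rec = pad_record({"Measures": "CostInput", "AllocationType": "DirectAllocation",
                  "Year": "2011", "Period": "Jan", "Scenario": "Actual",
                  "Version": "Working", "ACAccount": "HRCosts"}, 1000)
print(rec)
# CostInput,DirectAllocation,2011,Jan,Actual,Working,HRCosts,[PRAccount].[NoMember],[PRProject].[NoMember],1000
```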
Our idea is to allocate the 1000 USD down to the 3 projects for the month of January. So, our end result should look like this
If you notice, the 1000 USD is split across the 3 projects based on the overall number of resources (Resource Count for each project / Total Resources * HRCost) in each project. So basically, for the allocation to happen, we need to load the Resource Count data. The Resource Count data for all projects and the individual projects has to be loaded as shown below
FixedDriverValue,DirectAllocation,2011,Jan,Working,Actual,HRCosts,HRCosts,Project A,300
FixedDriverValue,DirectAllocation,2011,Jan,Working,Actual,HRCosts,HRCosts,Project B,500
FixedDriverValue,DirectAllocation,2011,Jan,Working,Actual,HRCosts,HRCosts,Project C,200
Weight,DirectAllocation,2011,Jan,Working,Actual,HRCosts,[PRAccounts].[NoMember],[PRProgram].[NoMember],1000
In the above data file there are 2 things we can notice:
1. We have used two measures, FixedDriverValue and Weight. We will see their significance a little later. For now, think of them as intersections that will hold the Resource Count data.
2. The first 3 records above have HRCosts repeated twice in order to load into both of the Accounts dimensions we have in the Essbase cube. Again, we will see the significance of why we are doing this below.
So far we have loaded all the necessary data into the Essbase cubes and have also set up the stages. The next step is defining the allocation logic. This is done through a concept called Drivers. From the perspective of HPCM, drivers define how the allocation values get pushed from source to target. HPCM supports different allocation methods, like Activity Based Costing etc, through the concept of drivers. In our case, the driver for allocation is the Resource Count. Just to recap, within Essbase we now have dimensions catering to two stages – Stage 1 and Stage 2 – as illustrated below. So basically we have 3 separate sub-cubes, each having its own intersections.
In the above diagram, whenever we want to do an allocation there can be different ways of doing it. The 3 most common ways are:
1. Allocation Based on Source – Here all the driver values are obtained from the source stage and data from source stage is then assigned to the Destination Stage based on the Source driver values.
2. Allocation Based on Destination – Here all the driver values are obtained from Destination stage and data from source stage is then assigned to the Destination Stage based on the Destination driver values.
3. Allocation Based on Source & Destination – Also known as Assignments – Here all the driver values are obtained from the intersection of Source & Destination and data from source stage is then assigned to the Destination Stage based on the Assignment driver values.
If you look at our driver data above (3 records containing FixedDriverValue & 1 containing Weight), you can see that our driver (Resource Count) is loaded in 2 ways. First, FixedDriverValue is loaded at the intersection of the Source & Destination stages. The second driver, Weight, is loaded at the Source stage intersection. We have chosen these 2 measures (FixedDriverValue & Weight), but we could have chosen any other measure like Rate, Volume etc. Each measure has a logical meaning, though, and it is better to stick to the ones we think are logically closest to what we are trying to do. If we are not able to use the existing measures, and they don't relate to our driver names, then we can create custom measures. This allocation handling through drivers is defined through the Driver Definition screen in HPCM.
As you can see in our formula, we are basically dividing each Resource Count from Stage 2 by the total Resource Count in Stage 1. HPCM will automatically multiply the resulting driver values by the CostInput measure (the input data), thereby allocating it.
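Numerically, the driver calculation in our example works out as follows. HPCM of course performs this inside Essbase; the sketch below is just the same arithmetic in Python, using the figures loaded earlier, to make the mechanics explicit:

```python
# Head-count driver values per project, from the driver data loaded earlier
resource_count = {"Project A": 300, "Project B": 500, "Project C": 200}
hr_cost = 1000  # the Stage 1 CostInput (HRCosts) value for Jan 2011, in USD

total = sum(resource_count.values())  # 1000 resources in total
# Each project's share = (its resource count / total resources) * HRCost
allocated = {p: hr_cost * n / total for p, n in resource_count.items()}
print(allocated)
# {'Project A': 300.0, 'Project B': 500.0, 'Project C': 200.0}
```

Note that the shares always sum back to the source cost, which is exactly what the stage balancing check relies on.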
Once we have defined the driver, the next step is to assign it to both stages. HPCM allows us to have multiple drivers for each stage, and even within a single dimension in a stage we can define exceptions so that multiple drivers can be assigned within the same dimension. This way we can have allocation logic based not only on Resource Count but also on, say, the clocked time of a resource in certain cases. Driver assignment is done through the Driver Selection screen in HPCM.
After the assignments, one other important feature of HPCM is its ability to assign allocations on a cell-by-cell basis. So, what we can do is, for each cell in the Source stage, assign the destination cell(s) in the Destination stage. That way we can clearly find out which intersections will get affected by which allocation logic. If you had to do this through, say, Essbase calculation scripts alone, it would be a laborious task, not only while doing the allocations but also in maintaining them. In our example, let's assign the data intersection of HRCosts in Stage 1 to the 3 destination projects in Stage 2 as shown below
Now let's run the calculation & then immediately transfer the data to the ASO cube from the Manage Calculation screen.
If you now look at the data, you can clearly see that each project now has the corresponding costs allocated.
To validate how the costs have flowed through the stages, HPCM provides an option to run a stage balancing report. This will show us how the costs have moved from stage to stage. In addition, one big advantage of HPCM is that if there are any unassigned costs in the source, those will automatically move into Idle Costs (or we can configure it to throw an error to ensure all costs are always allocated).
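To make the idle-cost behaviour concrete, here is a small sketch. The assumption here is mine, not taken from HPCM documentation: that idle cost is simply the residual portion of the source cost that no assignment picked up.

```python
def stage_balance(source_cost, allocated):
    """Return (allocated_total, idle_cost) for a stage. Idle cost is taken
    here (an assumption for illustration) to be the portion of the source
    cost that no assignment picked up."""
    allocated_total = sum(allocated.values())
    return allocated_total, source_cost - allocated_total

# Suppose Project C's assignment was forgotten: its 200 USD becomes idle cost
partial = {"Project A": 300.0, "Project B": 500.0}
print(stage_balance(1000, partial))
# (800.0, 200.0)
```

A non-zero second value is exactly the condition you would either surface as Idle Costs or, with the stricter configuration, raise as an error.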
In terms of reporting, the ASO cube that HPCM provides will be more than sufficient. But based on what I have seen, we will have a lot of unnecessary dimensions, especially when we are not concerned about the inter-stage Essbase dimensions. In such cases, we can build our own ASO cube and extract only the necessary intersections that we need. For now, though, I will showcase how BI EE 11g can be used for reporting against the native HPCM ASO cube. I will quickly show the lineage breakup reports as well.
We start off with importing the ASO cube into BI EE which is pretty straight-forward as shown below.
Then let's start with building a report just for looking at our Stage 1 data. Remember, the Stage 1 input HRCosts data is loaded against the NoMember intersections of the other dimensions, so we will have to explicitly filter to arrive at the right data.
Similarly, to get the final allocated data, we will have to apply filters as shown below
In the same way, we can get the driver values directly from the cube using a simple BI EE report. All of these are shown in the form of a dashboard – showing how the costs flow from stage to stage using the drivers.
Announcing Rittman Mead Training Days 2009, London
I’m very pleased to announce details of our second-ever Training Days event, to be held at the Bloomsbury Hotel in London on October 27th – 29th 2009. This year I’m especially pleased to be co-hosting the event with Venkat Janakiraman and Christian Berg, two experts in the Oracle BI EE world and star speakers at the recent Rittman Mead BI Forum in Brighton.
Like last year’s Training Days event, the focus is on in-depth technical knowledge around Oracle BI EE, Essbase, Oracle Data Integrator and the BI Applications, with each of us running several sessions over the three days. Unlike other “beginners-level” courses this event focuses on real project issues and we’ll be going through the reality of designing, architecting, performance-tuning and integrating products across the range of Oracle’s BI and EPM product stack. I’m particularly pleased to have Venkat on board because of his deep knowledge of OBIEE and Essbase integration, and it’s great to have Christian presenting as well as he’s worked on so many successful OBIEE and Oracle BI Applications projects. Anyway, here’s the agenda for the three days:
- Day 1
- Oracle BI and EPM architecture overview – Mark Rittman
- Oracle BI EE Data Modeling against DW and 3NF sources – Mark Rittman
- Oracle BI Delivers + Integration with Java and BI Publisher – Venkat Janakiraman
- What’s new in Oracle BI, DW and EPM from Oracle Open World – Mark Rittman
- Day 2
- Oracle BI EE Data Modeling against Essbase – Venkat Janakiraman
- Leveraging MDX functions and calculations in OBIEE – Christian Berg
- Integrating Security across OBIEE and EPM – Venkat Janakiraman
- I can do this in Hyperion – how do I do it in OBIEE? – Christian Berg and Venkat Janakiraman
- Day 3
- OBIEE Systems Management with OEM BI Mgmt Pack – Mark Rittman
- OBIEE Configuration Management Best Practices – Christian Berg
- ODI functionality in Oracle BI Applications – Mark Rittman
- ODI Integration with Essbase, Planning and Oracle EPM Suite – Venkat Janakiraman
As with our BI Forum in Brighton earlier in the year, this is a great opportunity to meet and train with real-world Oracle BI and EPM consultants who have real project experience to back up their training materials. With lots of discussions and the opportunity for you to bring your own laptops and try out some of the techniques we discuss, this is a rare opportunity to take your OBIEE, Oracle BI and EPM knowledge up to the next level. Unlike the BI Forum though, this is an intensive three days with just three trainers, where we’ll look at topics in much greater detail and follow a common theme from start to finish.
Like last year, you can either book for just one day, or you can book for all three in a “three for the price of two” deal (though last year, everyone ended up staying for all three days). There is also an “early-bird” offer where we are discounting both rates by 10% up to the end of September, so if you’re interested in coming take a look at the event website for more details. The rate includes lunch on all three days and a special meal on the second night where we can kick back and discuss the world of Oracle BI in one of London’s top restaurants.
I’m really excited about this event and I know Christian and Venkat are as well. We only have twenty spaces available, so if you’re interested make sure you register now and we’ll see you in London in October.