Tag Archives: Hyperion

Oracle BI EE 11g – Hyperion Financial Management Analytics (HFM Analytics)

If you were watching the announcements at Open World this year, one of the products that was quietly released was HFM Analytics. Essentially, this is an analytics suite under the umbrella of the BI Applications, but without any ETL. The suite provides a comprehensive set of pre-built reports/dashboards for customers who have Hyperion Financial Management (and Close Management) and Oracle Business Intelligence 11g. This is a very interesting analytics application in many ways.

1. This does not have an ETL layer – this is only the second time I have seen a BI Apps suite without an ETL layer (the last being CPM Analytics, which is no longer available). The reports/dashboards go directly against HFM & Close Management. Interestingly, there is also Single Sign-On support. So essentially we can start using this as an alternative to Hyperion Financial Reporting (the functionality still lags HFR, but it covers at least some of the same ground), which has so far been the standard & only reporting tool available on HFM.

2. This has support not only for data but also for HFM Process Management. Essentially we can report not only on the data but, to an extent, on the process management as well. This has been a sore point in HFM, considering there are no built-in tools outside of HFR that can leverage it (though of course there is always the API). Even ODI or Hyperion DIM (based on Informatica) cannot extract Process Management-related details out of HFM, so we have always had to rely on external adapters or direct API calls. That is not the case anymore.

3. This has support for the newly released Close Management suite. Considering that HFM & Close Management go together, it makes sense to have these two supported together out of the box.

4. The entire integration is based on the connector that I blogged about earlier here. So, no extra configuration/setup is required in terms of binaries for this to work – though there is an install process, which just sets up the reports/dashboards.

The architecture of this suite is given below

As mentioned in the earlier linked post, this suite uses the ADM driver to connect to and extract data out of HFM. So, there are two things to keep in mind:

1. This suite requires you to be on version 11.1.2.1.102 of EPM (earlier releases – 9.3.3 and 11.1.1.3 – are supported as well). That is, you need to apply one more patch on the HFM side to add a new feature (process management support) to the HFM driver.

2. You need to be on the BI EE 11.1.1.5.1 MLR patch set for this to work without any issues. Even more than the EPM certification above, the 11.1.1.5.1 patch is critical for the connectivity to work. Interestingly, though, this MLR patch is not recommended for all customers. I am not sure why, but looking at the Readme of this patch, it appears that 11.1.1.5.1 is meant only for customers who require HFM connectivity and Fusion integration. For other customers who don't need this, the patch recommends not upgrading at all – which is surprising, as you would expect any latest patch set to be valid for all customer environments.

Once you have both of the above, download the HFM Analytics suite from eDelivery. The install itself is straightforward.

Ensure that you use a Custom install. There looks to be a bug in the installer: if you do a full install, it does not seem to install the web catalog.

The installer basically installs an RPD, a Web Catalog and a configuration utility. One of the advantages of using HFM is that we will always have a constant set of dimensions (8 system dimensions + 4 custom dimensions). So Oracle has fixed the RPD, but the web catalog structure will vary based on the number of custom dimensions used – and that is where the configuration utility comes into play. The default password of the repository is welcome1. Let's open this RPD and take a look at the contents.

Basically, FCM is modelled using the relational metadata directly, but the HFM part uses the ADM Driver. To complete the configuration, we need to run the configuration utility that gets installed along with the software.

We start by specifying the connection details of both the BI instance and the HFM instance. In addition, we need to provide the location details of the RPD and Web Catalog that come along with the software. Interestingly, the configuration utility injects new content into the web catalog dynamically based on our selections. We start by selecting the Entity values that we are interested in for reporting.

Then we need to provide the following account grouping details

1. Balance Sheet Accounts

2. P & L Accounts

3. Cash Flow Accounts

4. Performance Indicator Accounts

There are actually more groupings, but at a high level the above are the basic classifications required.

If we have any Custom dimensions configured, then those values need to be configured as well. Basically, the values chosen for the custom dimensions (along with default values) will be used for the web catalog prompts and filters. Initially it might be a bit confusing, but once we have the web catalog it starts to make sense how the grouping actually works.

Issues:

Though it works and does give a complete high-level picture of the HFM financials, there are still certain grey areas in this suite (at least for me). I will list them below; if anyone has any other insights into this, I would be interested in hearing them.

1. License – This is one area of the product that I am not able to understand. Based on what I heard at Open World, this requires an extra license. But if you look at the underpinnings of how it works, there is nothing stopping a customer from building the same set of reports and dashboards with just an HFM and BI EE 11g license. In fact, to make this work (on my demo data sets), I myself had to understand the reports and make the requisite changes to the filters. So anyone can build the same set of reports and dashboards by themselves without having to license this (i.e. just use the BI EE 11g–HFM connector). Or I hope I am just wrong in this case.

2. Journal Entries – As far as I can see, the ADM driver is pretty limited in its functionality. There is no way to create a report containing just the journal entries; for that we still have to rely on Hyperion Financial Reports.

3. Active/Inactive Entities – IC Transactions etc. – Same comment as above. The connectivity does not support reporting on the Active/Inactive entities of a period, or creating a report containing just the Inter-Company transactions, etc.

4. Custom Dimension names – Somehow the connector is hard-coded to use the Custom1, Custom2 etc. dimension names. If we have used a different name in the Aliases field, the connector throws an error. This looks more like a bug in the connector, so if you have custom-named dimensions, revert them to Custom1, Custom2 etc. until this is fixed.

5. BI EE 11.1.1.5 – If you are using this without the MLR patch, the reports will work, but none of the custom attribute columns will. I guess that is due to the ADM client driver bundled as part of BI EE 11.1.1.5; hopefully once you upgrade, this issue should go away.

A Quik Look at QlikView

Outside the world of Oracle, one of the rising stars of the BI world has been QlikTech, the Swedish software company behind QlikView. QlikView is one of a new breed of desktop, in-memory BI tools (see also the forthcoming Gemini tool from Microsoft, an in-memory, column-store BI tool that'll be delivered as part of Microsoft Office 2010) that are being evaluated by customers as alternatives to traditional relational query tools such as Oracle BI Enterprise Edition and OLAP servers such as Essbase or Oracle OLAP. So what is QlikView, and how does it compare to Oracle's BI tools?

QlikView, compared to Oracle's relational and OLAP query tools, takes a slightly different approach to storing and querying data. Whilst it can load data from relational and file-based data sources, it then stores the data in-memory in a compressed form, with associations defined between data items rather than the traditional joins that we see in relational query tools. These associations are derived by QlikView during the data load, based on matching up columns across tables with the same name; when you query the resulting dataset, you can then analyze your data using these associations.

To take an example, the following QlikView demonstration shows data for a consulting company in a Windows thick-client application. To analyze your data, you click on a data item; the numbers on the charts are then filtered by this value, and other data items that are associated with the selected value are then also made available for selection. The screenshot below shows this filtering in action: you can see the selected fields in green, while the ones that are greyed out are no longer available for selection.

Qv1-1
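To make the selection model concrete, here is a toy Python sketch of associative filtering: selecting one value repeatedly shrinks every table to rows whose shared-column values still exist in every other table sharing that column. This is purely illustrative, it has nothing to do with QlikView's actual compressed in-memory engine.

```python
# Toy model of QlikView-style associative filtering (illustrative only).
# Tables associate on identically-named columns; rows within a table are
# assumed to share the same set of columns.

def select(tables, column, value):
    # Apply the selection; tables without the column keep all rows for now.
    state = {name: [r for r in rows if r.get(column, value) == value]
             for name, rows in tables.items()}
    changed = True
    while changed:  # propagate until no table shrinks any further
        changed = False
        for name, rows in state.items():
            kept = []
            for row in rows:
                ok = True
                for col, val in row.items():
                    for other, orows in state.items():
                        if (other != name and orows and col in orows[0]
                                and val not in {o[col] for o in orows}):
                            ok = False  # value no longer "associated"
                if ok:
                    kept.append(row)
            if len(kept) != len(rows):
                state[name], changed = kept, True
    return state

tables = {
    "sales":     [{"CUST_ID": 1, "PROD": "Laptop"},
                  {"CUST_ID": 2, "PROD": "Phone"}],
    "customers": [{"CUST_ID": 1, "STATE": "CA"},
                  {"CUST_ID": 2, "STATE": "NY"}],
}
# Selecting PROD = "Laptop" leaves only CA "available"; NY is greyed out.
result = select(tables, "PROD", "Laptop")
```

Note how the NY customer drops out even though the customers table has no PROD column: the filter propagates through the shared CUST_ID column, which is the "grey vs. green" behaviour described above.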

Now obviously this way of selecting data items, and this method of "association", is fairly alien to traditional Oracle BI users, and so I put together a simple example using the Sales History (SH) dataset available on recent Oracle databases. The first step in creating some QlikView reports is to define a new document, which in the end will hold the report layout, the data loading queries and the actual reporting data itself, which gets loaded into memory when you open the QlikView document. Once you've defined the new document, you create a script to load data in from, in this case, an Oracle database.

Qv2-1

This is in fact a similar process to putting load scripts together for Essbase and Oracle Express, with connections being made through ODBC and OLE DB for databases and direct connections being made to file and HTTP data sources. I get the impression that the most regular use case for data loading in QlikView is loading from files, and this would correspond to the target market for this tool which is desktop analysts who want to report on their data separately from the infrastructure around their enterprise BI tools and data warehouse.

Once you've brought your data in, you can view it either as a database (source) data model, or as the logical model that QlikView generates when it loads in the data. In fact, for the SH schema these are both the same, as the joins in the Oracle schema are turned into the associations that QlikView uses, so what you see looks just like the regular sort of data model you'd see in any relational query tool.

Qv3-1

The joins (or associations) are defined by QlikView during the import process by matching up columns of the same name – each match creates an association. This means that to create a join, you need to alias column names as they come in (to change ID to CUSTOMERID for example, if this is required to join to another CUSTOMERID), and you need to create aliases to break associations so that the STATUS column from the orders table isn’t linked to the STATUS column from the customer table when this isn’t really appropriate.
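The name-matching rule above can be sketched in a few lines of Python (a hypothetical helper for illustration, not a QlikView API): any column name appearing in two or more tables becomes an association, so renaming columns on load is how you make the joins you want and break the ones you don't.

```python
# Toy sketch: associations derived purely from column names.

def derive_associations(schemas):
    """schemas: {table: [column, ...]} -> {shared column: sorted tables}."""
    seen = {}
    for table, cols in schemas.items():
        for col in cols:
            seen.setdefault(col, set()).add(table)
    return {c: sorted(t) for c, t in seen.items() if len(t) > 1}

schemas = {
    "orders":    ["ID", "CUSTOMERID", "STATUS"],
    "customers": ["CUSTOMERID", "STATUS", "NAME"],
}
assoc_before = derive_associations(schemas)   # STATUS wrongly associates too

# Break the accidental STATUS association by aliasing on load:
schemas["orders"]    = ["ID", "CUSTOMERID", "ORDER_STATUS"]
schemas["customers"] = ["CUSTOMERID", "CUST_STATUS", "NAME"]
assoc_after = derive_associations(schemas)    # only CUSTOMERID remains
```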

Qv4-1

For an Oracle dataset with well defined primary and foreign keys, the logical table structure that QlikView creates in memory is usually pretty much the same as the Oracle schema, but for files or with data sets with lots of cross-related data, QlikView will create synthetic keys and synthetic tables to record the associations and assist with analysis. Internally QlikView tokenizes and compresses the data as well which meant that, for the Sales History data set that I worked with earlier, the total size of the QlikView document (including the data) was only 4MB and took about 1 minute to load and prepare, whereas the corresponding Essbase data set (as built in this OTN article) took up several gigabytes of space and took over an hour to load and prepare. Impressive stuff.
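The "tokenizes and compresses" step can be illustrated with a dictionary (symbol-table) encoding: each distinct column value is stored once, and rows hold small integer tokens. This is a generic column-store technique sketched for illustration, not QlikView's actual storage format.

```python
# Dictionary encoding sketch: distinct values go in a symbol table,
# rows store compact integer tokens instead of the full values.

def encode_column(values):
    symbols, tokens, index = [], [], {}
    for v in values:
        if v not in index:            # first sighting: add to symbol table
            index[v] = len(symbols)
            symbols.append(v)
        tokens.append(index[v])       # row stores only the token
    return symbols, tokens

def decode_column(symbols, tokens):
    return [symbols[t] for t in tokens]

symbols, tokens = encode_column(["CA", "NY", "CA", "CA", "NY"])
```

With low-cardinality columns (states, product categories and so on), the token stream compresses far better than the raw values, which is one plausible reason the in-memory document stays so small.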

Loading the normalized data set that I used for this article on OBIEE data modeling produced the following logical QlikView table diagram, which as you can see is a direct reflection of the normalized Oracle data model.

Qv5-1

This ability to report against normalized data obviously makes setting up the data model easier compared to OBIEE (notwithstanding the need to rename columns to make and break associations), but it then doesn’t make any distinction between columns that are measures and ones that are dimension attributes, which makes working with data in reports a bit more complicated.

So once you’ve got the data in, how do you create a report? And more importantly, how do these reports compare to Essbase or OBIEE reports? The actual report creation process is a bit like working with tools such as Hyperion Interactive Reporting or Web Analysis in that you construct an application made up of crosstabs, graphs, gauges and the like, which are then run either in the thick-client Windows application or online using a variety of thick and thin web clients. If I wanted to create a bar chart that allows users to select the product category, month and state of purchase and then graph product sales broken down by customer satisfaction rating, I’d add three field list boxes to my document and then create a bar graph chart, which would be automatically filtered by whatever I select in the lists.

Qv6

In the screenshot above, I've first selected the "Computers" product category, which filters the graph and highlights the states and months that have sales for this category. If I then selected the CA and NY states, only the months associated with sales (quantity) for Computers in CA/NY would be highlighted, and so on. As such, the "Qlik" part of QlikView seems to translate to progressive filtering of your data based on the items you have selected, with the filtered-down dataset being made up of the data that will join (or "associate") with the data items you have selected.

If you're looking to produce Essbase-like reports where data is shown in hierarchies, this is a bit trickier. There isn't really a concept of hierarchies in QlikView; you can create column groups that group together related columns, but I can't see a simple way to select column members by, for example, their position in a hierarchy, which is pretty standard in OLAP tools such as Essbase.

Displaying data in hierarchies is a bit simpler, especially as QlikView automatically calculates the relationship between columns and their values so that, for example, if you display product categories and products in a pivot table chart, the products are automatically shown under the categories they relate to.

Qv7

What I found trickier (as in, I couldn't do it) was to create a pivot table with, for example, regions across the top of the pivot, or to add page items or other controls. I've only used QlikView for a few days and so I may well have missed this feature, but none of the example documents that came with the product used either of these features, and therefore I suspect it's not something that the product does, for the moment.

Working through the examples that ship with QlikView 9, visualizations and fast access to data seem to be the main strengths of the tool. For example, the Sparkline graph type that features strongly in Stephen Few’s “Information Dashboard Design” book is present and correct in QlikView:

Qv8

As are butterfly-style financial tables, used for displaying P&L reports and other finance data.

Qv9

One of the examples uses Google Maps to illustrate data, in this case around houses for sale in London.

Qv10

Where QlikView is weaker in my view, compared to any of the Essbase-backed query tools, is around number-intensive and hierarchy-intensive applications. It's not easy to select dimension members via a hierarchy; the scripting and calculation language that comes with the tool obviously doesn't support the richness of languages like MDX and Essbase calculation scripts; and of course there are no applications such as Planning and Financial Management that you'd expect to be able to make use of in the Finance Department.

Qv11

That said, for what it does (provide fast access to star and normalized data sets, including data sourced from files and from the web), it does the job very well, with excellent load and preparation times and sub-second response times, even with large data sets. You probably wouldn't use QlikView instead of Essbase or Oracle BI EE, as they address different requirements, but as a sales analysis tool or a tool to analyze OLTP datasets, it takes some beating.

It wouldn't surprise me to find that Oracle or another of the big vendors acquires QlikTech at some point, as QlikView would make a nice complement to the more enterprise-focused, heavy-duty query tools, especially when tools such as this and the forthcoming Gemini from Microsoft threaten to create desktop BI tool "bridgeheads" on users' desktops, as they are just so easy to use and to set up (can you imagine getting Business Objects and all its technology stack up and running over a couple of evenings, for example, let alone learning how to use it?). It also wouldn't surprise me if vendors such as Oracle do more to bring their database and query-tool processing in-memory, given the amounts of RAM available on users' desktops these days and some of the advances being made in data compression technologies.

Announcing Rittman Mead Training Days 2009, London

I’m very pleased to announce details of our second-ever Training Days event, to be held at the Bloomsbury Hotel in London on October 27th – 29th 2009. This year I’m especially pleased to be co-hosting the event with Venkat Janakiraman and Christian Berg, two experts in the Oracle BI EE world and star speakers at the recent Rittman Mead BI Forum in Brighton.

Like last year’s Training Days event, the focus is on in-depth technical knowledge around Oracle BI EE, Essbase, Oracle Data Integrator and the BI Applications, with each of us running several sessions over the three days. Unlike other “beginners-level” courses this event focuses on real project issues and we’ll be going through the reality of designing, architecting, performance-tuning and integrating products across the range of Oracle’s BI and EPM product stack. I’m particularly pleased to have Venkat on board because of his deep knowledge of OBIEE and Essbase integration, and it’s great to have Christian presenting as well as he’s worked on so many successful OBIEE and Oracle BI Applications projects. Anyway, here’s the agenda for the three days:

  • Day 1
    • Oracle BI and EPM architecture overview – Mark Rittman
    • Oracle BI EE Data Modeling against DW and 3NF sources – Mark Rittman
    • Oracle BI Delivers + Integration with Java and BI Publisher – Venkat Janakiraman
    • What’s new in Oracle BI, DW and EPM from Oracle Open World – Mark Rittman
  • Day 2
    • Oracle BI EE Data Modeling against Essbase – Venkat Janakiraman
    • Leveraging MDX functions and calculations in OBIEE – Christian Berg
    • Integrating Security across OBIEE and EPM – Venkat Janakiraman
    • I can do this in Hyperion – how do I do it in OBIEE? – Christian Berg and Venkat Janakiraman
  • Day 3
    • OBIEE Systems Management with OEM BI Mgmt Pack – Mark Rittman
    • OBIEE Configuration Management Best Practices – Christian Berg
    • ODI functionality in Oracle BI Applications – Mark Rittman
    • ODI Integration with Essbase, Planning and Oracle EPM Suite – Venkat Janakiraman

As with our BI Forum in Brighton earlier in the year, this is a great opportunity to meet and train with real-world Oracle BI and EPM consultants who have real project experience to back up their training materials. With lots of discussions and the opportunity for you to bring your own laptops and try out some of the techniques we discuss, this is a rare opportunity to take your OBIEE, Oracle BI and EPM knowledge up to the next level. Unlike the BI Forum though, this is an intensive three days with just three trainers, where we’ll look at topics in much greater detail and follow a common theme from start to finish.

Like last year, you can either book for just one day, or you can book for all three in a “three for the price of two” deal (though last year, everyone ended up staying for all three days). There is also an “early-bird” offer where we are discounting both rates by 10% up to the end of September, so if you’re interested in coming take a look at the event website for more details. The rate includes lunch on all three days and a special meal on the second night where we can kick back and discuss the world of Oracle BI in one of London’s top restaurants.

I’m really excited about this event and I know Christian and Venkat are as well. We only have twenty spaces available, so if you’re interested make sure you register now and we’ll see you in London in October.

Hyperion Planning 11.1.1.2 – New Smart View OracleSV Protocol – Integration into Smart View and Other Desktop Applications – Part 1

In the last few blog entries, I have been covering some of the new features of the EPM 11 release. Continuing along the same lines, let's look at another new feature that was introduced for integrating Smart View with Hyperion Planning. The most common way of integrating multiple web-based applications/reporting tools is by passing parameters through the URL. These parameters can be passed using either the GET or the POST method, and most applications support both. I have covered URL-based integration between Hyperion Financial Reporting and Oracle BI EE here and here. But if you are a Hyperion Planning user, the most common requirement is to open up Smart View automatically from within Hyperion Planning. Not only should Smart View open up automatically, it should automatically log in using SSO and also open up the form for editing. In the same way, one might want to integrate Smart View with BI EE and other applications as well. In order to achieve this, a new protocol called the OracleSV protocol was introduced in the EPM 11 release (I believe the integration was there to an extent in the 9.3 release, but the protocol was introduced in EPM 11 for the SSO integration).

So what is the OracleSV protocol? It is a protocol similar to the mailto protocol. For example, in most cases we might have a requirement to open up a local email client with a given email address. In such cases, we would use mailto:venkat@rittmanmead.com in the URL bar of the browser (or in any HTML code).

tmp17

This will automatically open up the local email client. Along similar lines, one can use the oraclesv protocol to open the Smart View Excel client directly from the browser. The syntax for using it is:

oraclesv: <parameter list>
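By analogy with mailto:, an oraclesv: link is just a URI scheme plus a parameter list. As a sketch of how such a link could be assembled and embedded in a page, note that the parameter names below (server, app, form) are made-up placeholders; the real parameter list is covered in the follow-up post.

```python
from urllib.parse import urlencode

# Build an oraclesv: link the same way you would a mailto: link.
# NOTE: the parameter names used here are hypothetical placeholders.
def oraclesv_link(params):
    return "oraclesv:?" + urlencode(params)

link = oraclesv_link({"server": "epmhost", "app": "PlanApp", "form": "Budget"})
# Embed it in a page exactly as you would a mailto: link:
html = '<a href="{0}">Open in Smart View</a>'.format(link)
```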

tmp18

This protocol implementation has been done at the EPM layer. One can find the code for this implementation in the {HYPERION_HOME}\products\Essbase\SmartView\Bin\chrome\content directory.

tmp19

One of the important points to note is that the same approach can be used to implement custom integration between EPM 11 products, BI EE etc. and any desktop gadgets. In older releases of browsers like IE and Firefox, this was considered a security risk, but with the latest releases I believe this is no longer an issue. Also, be aware that this protocol is browser-specific: in my case, it worked only in IE and Firefox. For Firefox, you have to enable it from the about:config screen, and also ensure that the Smart View client add-on is installed.

The other aspect of this protocol is that it provides SSO from Hyperion Planning to Smart View.

tmp1C

In the coming blog entries, I will cover what parameters this protocol can take and how one can call it directly from BI EE.

Hyperion Essbase 11.1.1.2 – XOLAP – MDX to SQL Conversion – Part 1

One of the interesting aspects of XOLAP is that it exposes Essbase as a dynamic relational aggregation engine. XOLAP is, to an extent, pretty good at determining what kind of SQL should be pushed back to the relational sources and what requires custom in-memory calculations. For example, let's first start with the simple XOLAP cube that we built the other day using the Global schema.

tmp1A

Let's import this XOLAP cube into BI EE without changing any default aggregation settings.

tmp1B

After this import, let's create a very simple report containing Generation 1 of the Channel and Product dimensions.

tmp1D

tmp1E

To generate the above report, BI EE generates an MDX query that is fired against Essbase.

With
  set [Channel Hierarchy1]  as '[Channel Hierarchy].Generations(1).members'
  set [Product Hierarchy1]  as '[Product Hierarchy].Generations(1).members'
select
  { [Units Hierarchy].[UNITS]
  } on columns,
  NON EMPTY {crossjoin ({[Channel Hierarchy1]},{[Product Hierarchy1]})} properties ANCESTOR_NAMES, GEN_NUMBER on rows
from [GlobXola.GlobXola]

The MDX tells Essbase what is needed in the report; Essbase then converts the MDX to physical SQL. Unfortunately, XOLAP does not give you an automated way of finding the physical SQL fired against the relational source, the main reason being that for some kinds of reports it does not fire SQL alone. So for the above report, to determine the SQL query, I had to use the v$sql views to find out what was being fired by XOLAP (I would recommend the DBMS_MONITOR package if you are on 10g). The SQL converted from the MDX looked like the one shown below.

SELECT
  cp_1108."TOTAL_CHANNEL_DSC",
  cp_2106."TOTAL_PRODUCT_DSC",
  SUM(cp_107."UNITS"),
  COUNT(*)
FROM
  (("GLOBAL"."CHANNEL_DIM" cp_1108
    JOIN "GLOBAL"."PRODUCT_DIM" cp_2106 ON (1=1))
   JOIN "GLOBAL"."UNITS_FACT" cp_107
     ON (cp_1108."CHANNEL_ID" = cp_107."CHANNEL_ID")
    AND (cp_2106."ITEM_ID" = cp_107."ITEM_ID"))
WHERE
  (cp_1108."TOTAL_CHANNEL_DSC" IN ('All Channels'))
  AND (cp_2106."TOTAL_PRODUCT_DSC" IN ('Total Product'))
GROUP BY
  cp_1108."TOTAL_CHANNEL_DSC", cp_2106."TOTAL_PRODUCT_DSC"

The SQL above is pretty straightforward. One interesting point to note, though, is the filters on the topmost node of the outline (even though I have not explicitly applied any filters). So XOLAP is not simply a conversion engine; it is an outline-aware SQL conversion/aggregation engine. To put this to further test, let's now include one more column in the report. This column will contain an EVALUATE MDX expression to pull in all the Time dimension members.

EVALUATE('%1.dimension.members',"Time Hierarchy"."Gen1,Time Hierarchy")

tmp20

tmp21

Again, XOLAP has converted our BI EE report into its corresponding MDX, given below.

With
  set [Channel Hierarchy2]  as '[Channel Hierarchy].Generations(2).members'
  set [Product Hierarchy2]  as '[Product Hierarchy].Generations(2).members'
  set [Evaluate0]  as '{[Time Hierarchy].Generations(1).dimension.members }'
select
  { [Units Hierarchy].[UNITS]
  } on columns,
  NON EMPTY {crossjoin ({[Channel Hierarchy2]},crossjoin ({[Product Hierarchy2]},{[Evaluate0]}))} properties ANCESTOR_NAMES, GEN_NUMBER on rows
from [GlobXola.GlobXola]

But the corresponding SQL is what is of interest to us, as the above report cannot be produced using a normal SQL statement: it would have to be either a UNION ALL request or use the MODEL clause. In fact, it is neither of them.

SELECT
  cp_1108."TOTAL_CHANNEL_DSC",
  cp_2106."TOTAL_PRODUCT_DSC",
  cp_3109."MONTH_DSC",
  SUM(cp_107."UNITS"),
  COUNT(*)
FROM
  ((("GLOBAL"."CHANNEL_DIM" cp_1108
     JOIN "GLOBAL"."PRODUCT_DIM" cp_2106 ON (1=1))
    JOIN "GLOBAL"."TIME_DIM" cp_3109 ON (1=1))
   JOIN "GLOBAL"."UNITS_FACT" cp_107
     ON (cp_1108."CHANNEL_ID" = cp_107."CHANNEL_ID")
    AND (cp_2106."ITEM_ID" = cp_107."ITEM_ID")
    AND (cp_3109."MONTH_ID" = cp_107."MONTH_ID"))
WHERE
  (cp_1108."TOTAL_CHANNEL_DSC" IN ('All Channels'))
  AND (cp_2106."TOTAL_PRODUCT_DSC" IN ('Total Product'))
GROUP BY
  cp_1108."TOTAL_CHANNEL_DSC", cp_2106."TOTAL_PRODUCT_DSC", cp_3109."MONTH_DSC"

All the SQL does is produce level-0 data at the month level; every other outline-based calculation/aggregation is done by Essbase at run time. The next question would be: how are we sure that Essbase is doing the aggregation rather than the BI Server? In order to eliminate this possibility, let's fire the MDX directly against Essbase.

tmp22

As you can see, the MDX itself has produced the year- and quarter-level data even though the SQL fired only got the month level. So every outline-based operation is done at the Essbase layer using the Essbase aggregation engine. This opens up a lot of possibilities for reporting: we now have three levels of aggregation for relational sources.

1. Relational Database itself.

2. Essbase

3. BI Server

The major advantage of having this flexibility is that we can determine what works best depending on system scalability and similar considerations. It also provides Essbase-specific calculation capabilities on top of relational sources. I will cover more advanced reporting use cases in the coming blog entries.
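To make the division of labour concrete, here is a toy sketch of the run-time rollup: the SQL supplies only level-0 (month) figures, and upper-level members are summed by walking an outline hierarchy in memory. The hierarchy and numbers are made up for illustration; this is the shape of the work Essbase does, not its actual algorithm.

```python
# Toy sketch of outline-driven aggregation: the relational query returns
# only month-level (level-0) data; quarter and year values are computed
# at run time by recursing down a made-up outline.

month_units = {"Jan": 100, "Feb": 120, "Mar": 90, "Apr": 110}
outline = {"Qtr1": ["Jan", "Feb", "Mar"],
           "Qtr2": ["Apr"],
           "Year": ["Qtr1", "Qtr2"]}

def rollup(member, leaf_data, outline):
    """Compute a member's value by recursively summing its outline children."""
    if member in leaf_data:           # level-0: comes straight from the SQL
        return leaf_data[member]
    return sum(rollup(child, leaf_data, outline) for child in outline[member])
```

Here rollup("Qtr1", ...) and rollup("Year", ...) are derived entirely in memory from the month rows, mirroring how the quarter and year members appeared in the MDX result even though the generated SQL only grouped by month.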