Tag Archives: Hyperion Essbase

Optimizing TimesTen for Exalytics Queries using TimesTen’s Index Advisor

Late last week I posted an article on our blog on loading detail-level data into Exalytics; whilst the article was really intended as an introduction to Essbase Aggregate Storage Option, and how it could offer an alternative to TimesTen when storing large amounts of detail-level data, to make the case for ASO I needed to show how using TimesTen in this type of scenario might be an issue. Within the context of Exalytics, TimesTen is really positioned as an aggregate cache, with the Summary Advisor automatically recommending and storing aggregates in TimesTen, and this is a great use-case for it; but as I showed in the article (and as others have also evidenced), TimesTen can start to slow down when a query returns large amounts of data, as it doesn't currently use common VLDB features such as partitioning and parallel query to speed up queries.

But there was a crucial step I missed out when preparing the TimesTen datasets for querying. Whilst the ttImportFromOracle utility creates indexes for the TimesTen tables it's creating, it bases them on the ones on the corresponding Oracle tables, which isn't a bad place to start from, but might not suit either TimesTen (which has its own, memory-optimised index types) or the particular queries you're then firing at it. To take an example: in the query that I ran against the Sales History data in TimesTen in the previous article, the execution plan showed two TmpHashScan operations, like this:

[Image: TimesTen execution plan showing two TmpHashScan operations]

Now TmpHashScan operations actually involve TimesTen creating temporary indexes in the background to speed up the query, which of course adds to its execution time. If we'd created those indexes beforehand though, we wouldn't incur that hit during the actual query, and the response time would go down. To be fair to the Oracle database source and the Essbase ASO source, there are query optimisation steps we could perform for those too, but let's look at what happens if we run an index optimisation process on the TimesTen datasets – Sales History, with around 900k rows of fact table data, and Flight Delays, with around 130m rows of data – as analysed in the two dashboards below (and notice how the Flight Delays one even has an analysis that times out, such is the performance issue with the data "as is"):

[Image: the Sales History and Flight Delays dashboards, with one Flight Delays analysis timing out]

Looking at the current response times of the analyses on these two dashboards, you can see that the Sales History queries return data instantaneously, whereas the Airline Delays ones have much longer response times – anything from 15 seconds on average up to a minute – and that minute is the query timeout.

[Image: query response times for the two dashboards before index optimisation]

Now there’s actually a utility within TimesTen that helps with index optimisation called the “Index Advisor”. How this works is that you enable it via the ttisql command-line utility, then run a bunch of queries to generate a workload, then run the utility again to generate recommendations based on table scans, joins, sorts and grouping operations within the queries. When capturing the workload, you can either do so for just that ttisql session, or for the whole database, which would make more sense in an Exalytics-type environment. So let’s enable this utility, switch over to the dashboard and run some queries, and see what comes out.

C:\Users\Administrator>ttisql "DSN=tt_exalytics_3"
Copyright (c) 1996, 2013, Oracle and/or its affiliates. All rights reserved.
Type ? or "help" for help, type "exit" to quit ttIsql.
connect "DSN=tt_exalytics_3";
Connection successful: DSN=tt_exalytics_3;UID=Administrator;DataStore=C:\TimesTen\tt_data_3;DatabaseCharacterSet=AL32UTF8;ConnectionCharacterSet=US7ASCII;DRIVER=C:\TimesTen\TT1122~2\bin\ttdv1122.dll;LogDir=C:\TimesTen\tt_data_3;PermSize=9000;TempSize=2000;LockWait=600010.0;SQLQueryTimeout=600000;TypeMode=0;QueryThreshold=600000;PLSCOPE_SETTINGS=IDENTIFIERS:NONE;RangeIndexType=1;
(Default setting AutoCommit=1)
Command> call ttindexAdviceCaptureStart(1,0);

I then run some queries on the TimesTen Sales History dashboard, filtering on year and channel class, and after a while go back to ttIsql to stop the capture process and output the index advice.

Command> call ttindexAdviceCaptureEnd(1);
Command> call ttindexAdviceCaptureOutput(1); 
< 33, create hash index SALES_i6 on SH.SALES(CHANNEL_ID); >
< 2, create hash index SALES_i10 on SH.SALES(PROD_ID); >
< 1, create unique hash index SALES_i12 on SH.SALES(TIME_ID); >
< 1, create hash index SALES_i12 on SH.SALES(TIME_ID); >
< 16, create hash index PRODUCTS_i7 on SH.PRODUCTS(PROD_ID,PROD_CATEGORY); >
< 1, create hash index PRODUCTS_i13 on SH.PRODUCTS(PROD_ID,PROD_SUBCATEGORY); >
< 35, create hash index TIMES_i8 on SH.TIMES(TIME_ID,CALENDAR_YEAR); >
< 1, create index TIMES_i11 on SH.TIMES(CALENDAR_YEAR); >
< 8, create unique hash index CUSTOMERS_i9 on SH.CUSTOMERS(CUST_ID); >
< 1, create hash index CUSTOMERS_i9 on SH.CUSTOMERS(CUST_ID); >
10 rows found.
Command>

I then do the same whilst running some queries against the Airline Delays (130m+ rows) dataset, and get the following output:

Command> call ttindexAdviceCaptureStart(1,0);
Command> call ttindexAdviceCaptureEnd(1);
Command> call ttindexAdviceCaptureOutput(1); 
< 20, create index OBIEE_GEO_ORIG_i14 on BI_AIRLINES.OBIEE_GEO_ORIG(AIRPORT); >
< 12, create hash index OBIEE_GEO_ORIG_i19 on BI_AIRLINES.OBIEE_GEO_ORIG(AIRPORT); >
< 8, create unique index PERFORMANCE_VIEW_i15 on BI_AIRLINES.PERFORMANCE_VIEW(DEST); >
< 1, create index PERFORMANCE_VIEW_i15 on BI_AIRLINES.PERFORMANCE_VIEW(DEST); >
< 20, create unique hash index PERFORMANCE_VIEW_i16 on BI_AIRLINES.PERFORMANCE_VIEW(DEST); >
< 1, create hash index PERFORMANCE_VIEW_i16 on BI_AIRLINES.PERFORMANCE_VIEW(DEST); >
< 13, create unique hash index UNIQUE_CARRIERS_i17 on BI_AIRLINES.UNIQUE_CARRIERS(CODE); >
< 1, create hash index UNIQUE_CARRIERS_i17 on BI_AIRLINES.UNIQUE_CARRIERS(CODE); >
< 4, create unique hash index OBIEE_TIME_DAY_D_i18 on BI_AIRLINES.OBIEE_TIME_DAY_D(CALENDAR_DATE,CAL_YEAR); >
< 10, create hash index OBIEE_TIME_DAY_D_i18 on BI_AIRLINES.OBIEE_TIME_DAY_D(CALENDAR_DATE,CAL_YEAR); >
< 5, create unique hash index OBIEE_TIME_DAY_D_i20 on BI_AIRLINES.OBIEE_TIME_DAY_D(CALENDAR_DATE); >
< 1, create hash index OBIEE_TIME_DAY_D_i20 on BI_AIRLINES.OBIEE_TIME_DAY_D(CALENDAR_DATE); >
12 rows found.

So that's 10 indexes recommended for the Sales History dataset, and 12 for the Flight Delays one. I therefore go back to the ttIsql command line, execute each of the index creation commands, and then run the two dashboards again. The performance improvement is significant:
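Since the advisor can emit duplicate or near-duplicate recommendations (note the repeated index names in the output above), a small throwaway script can turn the captured output into a ranked list of DDL statements to execute. This is a hypothetical helper of my own, not a TimesTen utility – the `< count, ddl; >` line format is taken from the ttIndexAdviceCaptureOutput listings shown above.

```python
import re

def parse_index_advice(output_lines):
    """Parse ttIndexAdviceCaptureOutput-style lines such as
       < 33, create hash index SALES_i6 on SH.SALES(CHANNEL_ID); >
    into (count, ddl) pairs, highest count first."""
    advice = []
    for line in output_lines:
        m = re.match(r"<\s*(\d+),\s*(create .+?);\s*>", line.strip())
        if m:
            advice.append((int(m.group(1)), m.group(2)))
    return sorted(advice, key=lambda pair: pair[0], reverse=True)

sample = [
    '< 33, create hash index SALES_i6 on SH.SALES(CHANNEL_ID); >',
    '< 2, create hash index SALES_i10 on SH.SALES(PROD_ID); >',
]
for count, ddl in parse_index_advice(sample):
    print(count, ddl + ";")
```

Running the statements in that order means the indexes the advisor saw most often get created first.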

[Image: dashboard response times after creating the recommended indexes]

Oracle's recommendation is to run this process repeatedly, as more recommendations can come out of subsequent workloads, so I repeat the process to see if this is the case.

Command> call ttindexAdviceCaptureStart(1,0); 
Command> call ttindexAdviceCaptureEnd(1);
Command> call ttindexAdviceCaptureOutput(0);
0 rows found.

So no more recommendations then, and a much-improved query response time from TimesTen. But how does this stack up against the plain Oracle database source, and Essbase ASO? For the Sales History queries, all sources now perform well, with nothing really in it between the three, except that the Essbase source is now actually the slowest (if only by half a second or so).

[Image: Sales History response times across the Oracle, TimesTen and Essbase ASO sources]

With the Airline Delays dashboards though, TimesTen is now the clear winner, with sub-second response times even across 130m+ rows, compared to many seconds for the other sources.

[Image: Airline Delays response times, with TimesTen now sub-second]

I therefore stand corrected on TimesTen not being suitable for querying large, sparse datasets – as long as you run the Index Advisor after a representative workload (and I even forgot to re-gather stats after creating the new indexes, which would probably have improved things even more). So, on that basis, do I take back my recommendation to consider Essbase ASO for these large, sparse datasets? Well, to be honest, I've not done any optimisation on Essbase yet, or used the Summary Advisor on the plain Oracle source, so it looks like there's one more round of testing to do before coming to any final conclusions. Check back tomorrow for the final post in this series.

Storing Detail-Level Data in Oracle Exalytics

Although the primary use-case for TimesTen within Oracle Exalytics is to store relatively-small aggregate tables created using the Summary Advisor, many customers have asked us whether it’s possible to store their entire reporting dataset in TimesTen. With 1TB of RAM in the original version of the Exalytics hardware, and now 2TB of RAM in the new X3-4 version, even with the requirement to set aside space in RAM for TimesTen’s temporary objects, many data warehouses or data marts would fit in 500GB or 1TB of RAM, which makes the idea pretty interesting.

If you've read blogs from us in the past though, you may have picked up on comments about TimesTen not being a great place to store large tables, as it lacks features such as table partitioning, parallel query and the other VLDB essentials we're used to with the Oracle database. Even though TimesTen runs in-memory, there's still a cost to table scans, joins, row lookups and so on, and in some cases this can make querying a TimesTen database actually slower than querying an equivalent Oracle one. So where's the tipping point, and is it possible to put an entire data warehouse in-memory, such as the BI Apps data warehouse that Peter Scott talked about in a blog post series a few months ago?

This is actually something we've been looking at fairly intensively over the past few weeks, as we're getting this request from customers looking to maximise the return on their investment in Exalytics. Where it gets particularly interesting is that there's also a second option for storing these large, detail-level datasets in Exalytics: the Essbase Aggregate Storage Option (ASO), an alternative to the more traditional Block Storage Option (BSO) that's capable of storing and analysing very large, highly-dimensional sparse datasets, and which, with the 11.1.1.7 release of OBIEE 11g, comes pre-installed and pre-integrated with OBIEE. So, if a customer wants to store their entire data warehouse or data mart on an Exalytics server, ideally in-memory, taking advantage of the high-performance Exalytics hardware and its close integration with Oracle's BI and OLAP software, how do the options work out on a sample dataset?

Let’s start off with the SH (Sales History) sample dataset that comes with all recent Oracle databases, made up of a couple of fact tables and a set of related dimension tables. To take one of the fact tables, SALES, the table contains just over 900k rows, is partitioned by year, and out-of-the-box has a bunch of materialised views and indexes to support it and speed up user queries. Disabling the MVs for the time being and importing the tables into the BI Repository, my initial physical model looks like this:

[Image: initial physical model for the SH schema in the BI Repository]

Running a sample set of reports with caching turned off, response times against this Oracle source were pretty good even with no MVs, with no noticeable delay – of course this is only a single user on an otherwise-unused Exalytics server, but at this size of data, response times are perfectly acceptable.

[Image: sample reports running against the Oracle source]

Looking at the physical SQL for a typical one of the analyses, you can see hash joins and partition elimination taking place – standard features within an Oracle data warehouse with partitioned tables.

SQL> explain plan for
2 select sum(T43903.QUANTITY_SOLD) as c1,
3 sum(T43903.AMOUNT_SOLD) as c2,
4 T43814.CHANNEL_DESC as c3,
5 T43841.CUST_STATE_PROVINCE as c4,
6 T43866.PROD_CATEGORY_DESC as c5
7 from
8 SH.CHANNELS T43814,
9 SH.PRODUCTS T43866,
10 SH.TIMES T43911,
11 SH.CUSTOMERS T43841,
12 SH.SALES T43903
13 where ( T43814.CHANNEL_ID = T43903.CHANNEL_ID and T43841.CUST_ID = T43903.CUST_ID and T43866.PROD_ID = T43903.PROD_ID and T43903.TIME_ID = T43911.TIME_ID and T43903.TIME_ID = TO_DATE('1998-05-03 00:00:00' , 'YYYY-MM-DD HH24:MI:SS') and T43911.TIME_ID = TO_DATE('1998-05-03 00:00:00' , 'YYYY-MM-DD HH24:MI:SS') )
14 group by T43814.CHANNEL_DESC, T43841.CUST_STATE_PROVINCE, T43866.PROD_CATEGORY_DESC;
Explained.
SQL> set lines 130
SQL> set pages 0
SQL> select * from table(dbms_xplan.display);
Plan hash value: 3773586640
--------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
----------------------------------------------------------------------------------------
| 1 | HASH GROUP BY | | 2 | 172 | 21 (10)| 00:00:01 | | |
| 2 | NESTED LOOPS | | | | | | | |
| 3 | NESTED LOOPS | | 2 | 172 | 20 (5)| 00:00:01 | | |
| 4 | NESTED LOOPS | | 2 | 140 | 18 (6)| 00:00:01 | | |
| 5 | NESTED LOOPS | | 2 | 98 | 16 (7)| 00:00:01 | | |
|* 6 | HASH JOIN | | 2 | 72 | 14 (8)| 00:00:01 | | |
|* 7 | INDEX UNIQUE SCAN | TIMES_PK | 1 | 8 | 1 (0)| 00:00:01 | | |
| 8 | PARTITION RANGE SINGLE | | 98 | 2744 | 12 (0)| 00:00:01 | 6 | 6 |
| 9 | TABLE ACCESS BY LOCAL INDEX ROWID| SALES | 98 | 2744 | 12 (0)| 00:00:01 | 6 | 6 |
| 10 | BITMAP CONVERSION TO ROWIDS | | | | | | | |
|* 11 | BITMAP INDEX SINGLE VALUE | SALES_TIME_BIX | | | | | 6 | 6 |
| 12 | TABLE ACCESS BY INDEX ROWID | CHANNELS | 1 | 13 | 1 (0)| 00:00:01 | | |
|* 13 | INDEX UNIQUE SCAN | CHANNELS_PK | 1 | | 0 (0)| 00:00:01 | | |
| 14 | TABLE ACCESS BY INDEX ROWID | PRODUCTS | 1 | 21 | 1 (0)| 00:00:01 | | |
|* 15 | INDEX UNIQUE SCAN | PRODUCTS_PK | 1 | | 0 (0)| 00:00:01 | | |
|* 16 | INDEX UNIQUE SCAN | CUSTOMERS_PK | 1 | | 0 (0)| 00:00:01 | | |
| 17 | TABLE ACCESS BY INDEX ROWID | CUSTOMERS | 1 | 16 | 1 (0)| 00:00:01 | | |
-----------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
6 - access("T43903"."TIME_ID"="T43911"."TIME_ID")
7 - access("T43911"."TIME_ID"=TO_DATE(' 1998-05-03 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
11 - access("T43903"."TIME_ID"=TO_DATE(' 1998-05-03 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
13 - access("T43814"."CHANNEL_ID"="T43903"."CHANNEL_ID")
15 - access("T43866"."PROD_ID"="T43903"."PROD_ID")
16 - access("T43841"."CUST_ID"="T43903"."CUST_ID")

Let's try now moving those tables, in their entirety, into TimesTen for Exalytics 11.1.2.2.5.0, the version of TimesTen that ships with Exalytics 1.0.0.3. I could use a tool like ODI or GoldenGate to transport the data into the TimesTen database, but instead I'll use a new utility that comes with this release of TimesTen called ttImportFromOracle, which connects TimesTen to an Oracle database, creates TimesTen tables to reflect the Oracle table structures (but using more optimal TimesTen datatypes), and can take advantage of TimesTen for Exalytics' column-based compression, potentially reducing the space used when storing table data by up to a factor of five.
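The exact datatype-mapping rules are internal to ttImportFromOracle, but conceptually it replaces generic Oracle types with fixed-width, memory-optimised TimesTen equivalents where the declared precision allows it. A rough Python sketch of the idea – the boundary values and TT_* type choices here are illustrative assumptions, not the utility's actual rules:

```python
def suggest_tt_type(oracle_type, precision=None, scale=None):
    """Illustrative only: map an Oracle column type to a plausible
    memory-optimised TimesTen native type. Boundaries are assumptions."""
    if oracle_type == "NUMBER" and scale == 0 and precision is not None:
        # Integer-only NUMBERs can become fixed-width TT_* integers
        if precision <= 4:
            return "TT_SMALLINT"
        if precision <= 9:
            return "TT_INTEGER"
        if precision <= 18:
            return "TT_BIGINT"
    if oracle_type == "VARCHAR2":
        return "TT_VARCHAR(%d)" % (precision or 4000)
    return oracle_type  # fall back to the Oracle-compatible type

print(suggest_tt_type("NUMBER", precision=6, scale=0))   # prints TT_INTEGER
print(suggest_tt_type("VARCHAR2", precision=30))         # prints TT_VARCHAR(30)
```

Fixed-width native types avoid the per-value interpretation overhead of Oracle's variable-length NUMBER, which matters when every lookup is a RAM access.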

Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation. All rights reserved.
C:\Users\Administrator>cd c:\temp\tt
c:\TEMP\tt>mkdir tt_sh
c:\TEMP\tt>cd tt_sh
c:\TEMP\tt\tt_sh>C:\TimesTen\tt1122_64_3\support\ttImportFromOracle.exe 
-oraConn sh/password@orcl -compression 1 -tables sales promotions products customers channels times
Beginning processing
Resolving any tablename wildcards
Eliminating any duplicate tables
Getting metadata from source
Generating database user list
Assigning TimesTen datatypes
Analyzing source tables (this may take some time)
Analyzing table 'SH.SALES' ...
Analyzing table 'SH.PROMOTIONS' ...
Analyzing table 'SH.PRODUCTS' ...
Analyzing table 'SH.CUSTOMERS' ...
Analyzing table 'SH.CHANNELS' ...
Analyzing table 'SH.TIMES' ...
Estimating compression ratios
Generating output files
Finished processing

The utility gives me a set of scripts: one to create a corresponding TimesTen user for the Oracle database user; scripts to create the tables and indexes; one to load data from Oracle into the TimesTen tables; and one to gather stats on the resulting TimesTen database (note that earlier releases of TimesTen don't come with this utility as standard, and the 11.1.2.2.4.x version's utility can't load into compressed TimesTen tables). Using the ttIsql command-line interface to the TimesTen server, I then run the scripts, and check out the resulting tables using SQL*Developer.

[Image: the imported tables viewed in SQL*Developer]

So that’s 900K+ rows loaded into TimesTen, using compression, and easily fitting into my 1TB of RAM. Next step is to model the new tables in the RPD, like this:

[Image: the TimesTen tables modelled in the RPD]

and then create the same set of reports. Running them feels roughly the same as the Oracle ones – maybe ever-so-slightly slower, but nothing really noticeable. And again, this is a single-user system, whereas TimesTen really comes into its own with lots of concurrent users. Looking at the TimesTen SQL query for the analysis equivalent to the Oracle one we explain-planned earlier on, though, you can see that there's no partition elimination going on – something that's going to hurt TimesTen if the tables get noticeably bigger.

[Image: the TimesTen query plan, with no partition elimination]

But overall, it's on a par: the Oracle source performs fine, and so does TimesTen. This is only just under a million rows of data, though – what if we loaded a larger dataset, for example the Airline Delays dataset used for Exalytics demos, which has around 130 million rows in the main fact table? Loading the main fact table plus a handful of dimensions into TimesTen using compression takes up just under 8GB of RAM, so we're OK there, but what about response times? Let's start by trying out a few queries against the source Oracle database on its own. The report below, summarising flights from SFO in 1994 by carrier and quarter, took around 4 seconds to run against my Oracle database source – again, not bad considering it had to sum up 130m rows of data.

[Image: flights from SFO in 1994 by carrier and quarter, against the Oracle source]

Running the equivalent report against the TimesTen source took about double the time though – around 6 to 8 seconds; not bad considering there's no partitioning or parallel query, but there are still indexes, and the dataset is in-memory of course. But it's still 8GB of RAM to store it, and it's not "speed of thought", if you know what I mean. So how about storing it in an Essbase ASO database then – how would that work? And how do you go about creating an Essbase ASO database?
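As a quick back-of-envelope check on that 8GB figure, working only from the numbers in this post (around 130m fact rows compressing to just under 8GB in TimesTen):

```python
rows = 130_000_000            # Airline Delays fact table rows
size_bytes = 8 * 1024**3      # just under 8GB in TimesTen, compressed

# Implied storage cost per compressed fact row
bytes_per_row = size_bytes / rows
print(round(bytes_per_row))   # prints 66

# Scaling linearly (a rough assumption - index and temp space aren't
# included), a 1TB PermSize could hold this many rows of this shape:
print(1024**4 * rows // size_bytes)   # prints 16640000000
```

So at roughly 66 bytes per compressed row, capacity isn't the problem on a 1TB or 2TB Exalytics box – the issue discussed here is purely scan and join speed.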

UPDATE 1st August 2013: I’ve since discovered that there was a crucial, post-load step that I didn’t perform with these TimesTen tables – running the Index Advisor. See this follow-up blog post on how the Index Advisor brought the TimesTen query response times down dramatically, and the final, concluding post where I discuss my overall findings.

There are actually a number of ways you could turn a relational star schema into an Essbase ASO database, including the new cube spin-off feature I blogged about the other week, but building the cube automatically assumes that your data is all lined up correctly, and with Essbase there are lots of ways your source data can trip you up. For example, by default every dimension member ID and alias (description) needs to be unique – not just within a level, or within a particular dimension, but across all dimensions in the cube (database). Because of this, you often need to pre-process your incoming data to make dimension members unique in this way, and my preferred way to do this is through Essbase Studio, bundled and pre-integrated with Exalytics 1.0.0.3 and OBIEE 11.1.1.7.

[Image: Essbase Studio used to build the ASO database]
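The kind of pre-processing described above can be sketched in a few lines: qualify any incoming member name that has already appeared in another dimension, so every member ends up unique across the whole cube. Prefixing with the dimension name is one common convention; the dimension and member data below are made up purely for illustration.

```python
def qualify_members(dimensions):
    """Rename members that clash with a member already seen in another
    dimension, so names are unique across the cube (an Essbase default)."""
    seen = {}        # member name -> dimension it first appeared in
    qualified = {}
    for dim, members in dimensions.items():
        out = []
        for member in members:
            name = member
            if member in seen and seen[member] != dim:
                name = f"{dim}_{member}"   # disambiguate the clash
            seen.setdefault(member, dim)
            out.append(name)
        qualified[dim] = out
    return qualified

cube = {
    "Products": ["Electronics", "Other"],
    "Channels": ["Direct", "Other"],       # "Other" clashes with Products
}
print(qualify_members(cube))
```

Essbase Studio handles this kind of thing for you, but when feeding a cube from arbitrary relational sources it's worth knowing this is what has to happen to your data.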

The resulting ASO database was only about 24MB in size (just the level-0 members, no aggregate views), and I then brought it into the BI Repository in a similar way to the TimesTen database.

[Image: the ASO database mapped into the BI Repository]

Running reports and analyses, though, was a revelation. Whatever query I threw at it, however deep I drilled into the dataset, response times were instant, and that was even without adding any aggregate views (the equivalent to materialised views with Oracle). Not a bad start, but what about doing the same with the Airlines dataset – 130m+ rows of flight data across many years and US states?

[Image: reports running against the Airline Delays ASO database]

Well surprisingly, the 130m rows loaded in about a minute or so, and the resulting Essbase ASO database was only around 250MB in size, as opposed to the 8GB required by TimesTen for the same dataset. Even more surprisingly, all the data went in, and response times were either instant or a second or so – noticeably faster than the TimesTen dataset, or even the Oracle one. So why is this, and how can Essbase ASO (which resides on disk) be faster than TimesTen (which runs in-memory)?

There are actually a couple of reasons, and some important caveats too. As to why ASO is so fast: it's designed to be fast, with Hyperion reacting to competition back in the early 2000s from the likes of Microsoft, as well as Oracle with Oracle OLAP's compressed composites, and it's designed to store lots of detail-level data, across large numbers of dimensions, and aggregate it quickly. Contrast this with TimesTen, which started off as an OLTP database and is only now getting the sorts of analytic and VLDB features you'd expect from databases of this type.

But there are some important limitations you need to be aware of if you're thinking about moving an entire EBS-style database into Essbase ASO. First of all, you need to know Essbase, and you need to know Essbase's data rule limitations; you also need to consider what happens when users want to query across multiple facts using conformed dimensions. You can do all of that using Essbase and techniques such as cube partitioning, dummy dimensions and so on, but it's not as simple as creating the equivalent TimesTen database tables or, even better, using the Summary Advisor to recommend aggregates based on usage patterns.

But if you're set on caching your entire detail-level dataset in Exalytics, think about using Essbase ASO as your storage engine, as it's designed from the ground up to quickly analyse and aggregate large, sparse datasets. For source data around the size of the SH Sales History star schema there's not much in it, but for datasets the size of Exalytics' Airline Delays dataset, you'll most probably notice the difference immediately.

Update 29th July 2013: The statement about Exalytics' memory-to-CPU bandwidth being less than its disk-to-CPU bandwidth was incorrect and has been removed. Exalytics' memory bandwidth is actually around 90GB/sec (max) compared to the disk controller's 16-32GB/sec, and should therefore not be an impediment to TimesTen's performance. Apologies for any confusion caused.

Update 1st August 2013: See this blog post where I followed up the initial tests with some further optimisation of the TimesTen database, and this final post where I further optimised the Oracle and Essbase ASO data sources and came up with some final conclusions on how to store detail-level data in Exalytics' memory.

New With OBIEE 11.1.1.7 and SampleApp v305 – Essbase Cube Spin-Off

Now that the new v305 OBIEE SampleApp that we previewed in May is out, I thought it'd be a good time to take a look at an experimental feature added to OBIEE 11.1.1.7's logical SQL that gives you the ability to create, or "spin off", an entire Essbase cube from an OBIEE dashboard or RPD.

If you read my coverage of the new Essbase integration features in OBIEE 11.1.1.6.2 just over a year ago, you'll remember that this release gave OBIEE the ability to persist aggregates to an Essbase ASO cube as well as regular Oracle, SQL Server or IBM DB2 databases. This was impressive stuff, but it came with one significant limitation: the Essbase ASO databases that the aggregate persistence wizard created contained just a single aggregation, for example sales by product category, month and customer type, rather than all levels and aggregations as you'd expect from a multi-dimensional database. The reason for this was obvious – this is how the aggregate persistence wizard worked with relational databases, and Oracle just took the same paradigm but allowed it to persist to Essbase as well as the various relational stores.

So fast-forward now to OBIEE 11.1.1.7, and the new v305 SampleApp has an intriguing dashboard page within the "8.21 Oracle Essbase Interaction" dashboard called "Cube Spin Off". What this page demonstrates is a kind of preview of where OBIEE is going in the future, where it'll be easy for users to take data within a subject area, spin it off as an Essbase cube and then automatically report against it, enabling faster reporting and access to MDX functions, forecasts and so on. In this first iteration, put together by the SampleApp team to show off the new logical SQL "CREATE CUBE" command, we've got an example of a command that will spin off the Essbase cube, along with an analysis created against the result of that command.

[Image: the Cube Spin Off dashboard page]

To create, or “spin-off” an Essbase cube using this command, cut and paste the logical SQL displayed in the analysis description, then select Administration > Issue SQL, and then paste the logical SQL into the text box provided, like this:

[Image: pasting the logical SQL into the Issue SQL page]

Note how I’ve bumped-up the logging level to 3, so I can take a closer look at what the BI Server does when I press the Issue SQL button.

Looking at the logical SQL command itself, what's interesting is that it references logical dimension levels from the RPD, rather than logical tables as would have been the case with the aggregate persistence wizard. Looking at the SampleApp v305 RPD in question, you can see that there are more logical dimensions in the business model than the command references, so the command can pick an arbitrary subset of the dimensions within the model, an arbitrary level to aggregate up from, and one or more of the measures to create the cube definition.

Pressing the Issue SQL button, and then switching over to Essbase Administration Services (http://obieesample:9704/easconsole), you can see the Essbase database and application created by the CREATE CUBE logical SQL command.

[Image: the new application and database in Essbase Administration Services]

Looking at the database (cube) definition itself, you can see that it's a BSO (Block Storage Option) database, rather than the ASO type that the BI Administration tool's Aggregate Persistence Wizard creates, and it's also set up to allow duplicate member names across dimensions and levels – something that's usually required when building cubes against relational sources, which don't have this member-name restriction.

[Image: the BSO database outline, with duplicate member names enabled]

Looking back at the NQQuery.log file entries created by the logical SQL command, all you can see is the CREATE CUBE statement itself, like this:


[2013-06-12T05:07:09.000-04:00] [OracleBIServerComponent] [TRACE:3] [USER-0] [] [ecid: 274d73ee2c4a9d00:42d39d4a:13f2e44ecb4:-8000-00000000000018c7] [tid: 2e497940] [requestid: 37b70014] [sessionid: 37b70000] [username: weblogic] ############################################## [[
-------------------- SQL Request, logical request hash:
c88166e2
set variable LOGLEVEL = 3;CREATE CUBE "ESSCUBE2" for "B - Sample Sales Exa".."Base Facts"
("2- Billed Quantity","11- Fixed Costs")
AT LEVELS (
"B - Sample Sales Exa".."Products"."Products Hierarchy"."Product",
"B - Sample Sales Exa".."Time"."Time Hierarchy"."Month",
"B - Sample Sales Exa".."Offices"."Offices Hierarchy"."Offices");
/* QUERY_SRC_CD='rawSQL' */

]]

All of the complexity of the cube creation is abstracted away by the BI Server, with (presumably) the embedded Essbase Studio servlet used by the Aggregate Persistence Wizard also being used to define the Essbase database and load data into it.

When you open up the BI Administration tool and view the repository that provided the source for the CREATE CUBE statement, what's particularly impressive is that the new Essbase database has been mapped into the Physical layer of the repository, and then into the Business Model and Mapping layer as a logical table source (LTS) for the source logical tables, just as the aggregate persistence wizard would do with regular, single-aggregation aggregate tables.

[Image: the new Essbase database mapped into the repository as an LTS]

Very interesting stuff, and presumably at some point it'll be incorporated formally into the Aggregate Persistence Wizard UI, rather than having to be run from a logical SQL script. I was also warned that this feature is "experimental", so expect issues, limitations and so on – but it's an interesting glimpse into what appears to be Oracle's original vision for data mart automation in OBIEE: just press a button and your collection of mapped-in sources becomes the design for an Essbase cube load, with the cube then taking the place of the original sources within the RPD model.

Finally and on a similar topic, you can read about the wider changes and improvements to Essbase integration in OBIEE 11.1.1.7 in my earlier post on this blog, as well as full details on the new SmartView integration in OBIEE 11.1.1.7 that also came along with this new release.

An Oracle BI “Blast from the Past”: Ten Products from Oracle’s BI History You May Not Have Heard Of…

With Oracle BI Enterprise Edition, the BI Apps and the Hyperion EPM Suite, Oracle have a set of reporting and analysis tools that can be considered "best of breed" and compete with any other tool on the market. Coupled with the Oracle database, the engineered systems strategy and the Fusion Apps/Apps Unlimited ERP suites, as BI developers it's pretty clear we "bet on the right horse" in terms of which company we backed. But it's not always been plain sailing, and like every other software company, Oracle have released a few products over the years that didn't take off, never really came together, or just got eclipsed by other acquisitions Oracle made over time. So let's take a light-hearted count-down through the Oracle BI, DW and ETL products released over the years that you may not have heard of – but that at least some of us bear the scars from trying to implement…

10. Oracle Warehouse Builder 10gR2's Data Quality and Enterprise ETL Options

Back in the days before Oracle Data Integrator, Oracle Warehouse Builder was Oracle’s strategic data integration tool, initially sold on a per-developer basis but in time, bundled in with the Oracle database to match similar product packaging from Microsoft.

[Image: Oracle Warehouse Builder]

Like the old saying about technology reaching the point of perfection just before becoming obsolete, in retrospect OWB 10gR2 pretty much got it right in terms of Oracle-centric data integration, and the Data Quality option in particular has yet (to my mind) to be surpassed in terms of data quality integration with an Oracle ETL tool. Of course, with the acquisition of Sunopsis, Oracle went with ODI as their strategic ETL tool, driven mostly by the fact that it was cross-platform and had a bit more of a SOA/middleware angle than OWB, but it's still a shame to see OWB being de-emphasised over the years, and we still miss its dimensional data modelling capabilities, integration with Oracle OLAP, and of course the data quality features that were introduced with OWB 10gR2's Data Quality Option.

9. Oracle Browser, Oracle Data Query, and Oracle Discoverer

Again, these were products made obsolete by newer and better ones coming through, rather than failures in themselves. Oracle Browser was Oracle's first proper ad-hoc query tool, which in turn begat Oracle Data Query, which then begat Oracle Discoverer – still in use across many EBS sites, and still with some features yet to be incorporated into Oracle BI Enterprise Edition.

[Image: Oracle Discoverer]

But it's easy to get rose-tinted spectacles about Discoverer: having to dig out and maintain ancient Oracle JInitiator Java browser plug-ins to get Discoverer Plus to work; Discoverer Viewer only having a fraction of the functionality of Discoverer Plus; the web-based version of Discoverer first appearing with Oracle 9iAS, possibly the worst piece of Oracle software ever released; and so on. But for getting the job done with minimal fuss, Discoverer is still hard to beat as an ad-hoc query tool for the Oracle database; it's still available and runs now as part of the Fusion Middleware 11g setup, and it'll still be maintained and developed for as long as there are EBS customers out there wanting to do some simple exploration of their ERP data.

8. Oracle Data Mart Suite

Here’s one for an Oracle BI Trivial Pursuit quiz – what was the name of Oracle’s first combined ETL, data warehousing and reporting product suite, based around Oracle 8i and using ETL code licensed from Sagent? The answer is Oracle Data Mart Suite, a largely-forgotten precursor to Oracle Warehouse Builder that combined Oracle Discoverer, Oracle Designer and a tool called Oracle Data Mart Builder along with Oracle 8i to create Oracle’s first end-to-end BI & data warehousing tool.

NewImage

Some of the concepts are very familiar to us now – a central repository, agents to collect metadata and run code, graphical tools to create data models and define ETL processes – but Data Mart Builder was Windows-only just as Oracle were about to move wholesale onto Linux with Oracle 9i, and of course the ETL functionality pre-dates the in-built SQL ETL that came with Oracle 9i. Oracle Warehouse Builder 2.1 came along towards the start of the 2000s and replaced Data Mart Builder and Data Mart Designer, but Discoverer lived on and Oracle still really haven’t got a single-install, single-metadata-store solution to replace it. Data Mart Suite even pre-dates my involvement with Oracle, but I’d be interested if anyone reading this (my guess – Kent Graziano ;-)) has any first-hand experience of working with it.

7. Oracle Darwin

Just around the same time as Oracle 9i was released, Oracle made the acquisition of Thinking Machines, a data-mining and supercomputer company based out in the States who sold a product called Darwin, a Clementine-like GUI analytical workbench that Oracle later rebranded as “Oracle Data Mining”.

NewImage

Darwin never really saw the light of day with Oracle but the internal algorithms and technologies went on to form the core of the Data Mining Option for the Oracle Database (now part of the wider database Advanced Analytics Option), which now has GUI elements of its own but does all of the processing in the Oracle database. Technology derived from Darwin can also be found today in products like Oracle Spend Classification, working under the covers to classify corporate spend using data mining classification algorithms.

6. Oracle BI Applications 7.9.5.2

Now we’re getting into the interesting ones. Four weeks of my life I’ll never get back were spent back in 2009 getting to grips with the first version of the Oracle BI Apps that used ODI, rather than Informatica, as the ETL tool. Sources and targets in this initial release were limited to just EBS 11.5.10 on Oracle Database 10gR2, but other than that it was fully-functional, with a method for doing customisations, new configuration tools that did away with the need for the DAC, and all of the relevant SDE and SIL mappings re-implemented as ODI interfaces and packages.

NewImage

But this was back in the days of ODI10g, and there were no load plans or any other features since introduced to ease the move to ODI with the BI Apps, and the customisation approach was slightly scary and complex, to say the least. In the end, only one release of BI Apps on ODI10g ever came out, but of course we’re now just nearing the point where BI Apps 11g gets full support for ODI as an alternative to Informatica as the ETL tool, and seeing the various workarounds Oracle had to do with ODI to get it to work as an Informatica alternative back in 2009 made it obvious to me why features such as load plans were introduced over the past few years.

5. The Oracle BI Spreadsheet Add-in for Excel

A spreadsheet add-in with less useful functionality than Oracle BI Office, the GUI performance of Warehouse Builder and an OLAP server back-end that nobody used. One of a number of entries in this chart based around Oracle 9i OLAP.

NewImage

This, and No.1 on our list, were probably the biggest reasons Oracle ended up buying Hyperion – the chances of an accountant actually using this Excel add-in, as opposed to, say, Hyperion’s Essbase add-in, were about as close to zero as you could get, assuming you could find anyone still using Oracle OLAP after the bodged migration from Express Server. But Oracle 9i OLAP going so badly paved the way, several years later, for the Hyperion acquisition, and now that Oracle OLAP sensibly focuses on the Simba MDX Provider for Oracle OLAP along with Microsoft Excel pivot tables, the descendants of this product release are actually pretty darn good.

4. Oracle Business Intelligence Essbase Integrator

Another one for Oracle BI Trivial Pursuit – which product from Oracle integrates OBIEE, Essbase and the BI Apps, so that you can click on links on your BI Apps dashboard and launch SmartView, preserving the POV context from the related analysis on the dashboard?

NewImage

The answer is the Oracle Business Intelligence Essbase Integrator, a product launched by Oracle back in 2010 which appeared to me, at the time, as wildly ambitious but which solved a real problem – how do you combine the capabilities of Essbase and OBIEE whilst, in the background, keeping their metadata in sync?

NewImage

Rather predictably, we didn’t really hear much about this product again, which was a shame, as the concept was superb (albeit a bit “Heath Robinson”, or “Rube Goldberg” as the Americans would say). I suspect we’ll see something around this same concept going into the 12c timeline as it’s such an obvious “win” for Oracle, but for now, it’s a product best known as the challenge we set Stewart Bryson back at the Brighton BI Forum 2011 when Venkat couldn’t make it over – present Venkat’s OBI Essbase Integrator session as if it were his own, without seeing the content of each slide until it came up on the projector.

3. Oracle Daily Business Intelligence

Prior to the BI Apps and around the time of Oracle E-Business Suite 11i, reporting against EBS data was done either through Discoverer, or through Oracle Reports and FSGs. Oracle Discoverer could integrate with Oracle Portal, but it was fairly primitive integration and Portal wasn’t really cut-out to be a BI Portal, more being used for intranet-style applications like data entry and staff directories.

Oracle Daily Business Intelligence (DBI) improved on this situation in two ways: first, it was based on Oracle Database materialised views, speeding up report response times and, in theory, taking the load off your underlying ERP system; second, it had its own custom portal and dashboard framework that was faster, more responsive and “tighter” than Oracle Portal and its Discoverer portlets.

NewImage

DBI may well have got more traction over time, but it was probably eclipsed by the major acquisition spree that Oracle went on in the early 2000s, buying PeopleSoft (and thereby JD Edwards) and Siebel, and then using Siebel Analytics along with BEA’s WebLogic tooling to create the successor BI-on-ERP platform, OBIEE and the BI Apps. Which was probably a good thing in the end, as I never met anybody actually able to customise Daily Business Intelligence, a task that makes customising the BI Apps seem like plugging your iPod into iTunes running on a Mac.

2. Oracle 9i OLAP, and Oracle Business Intelligence Beans

On to the final two, and they’re both based around Oracle 9i OLAP, Oracle’s (in retrospect) disastrous replacement for Oracle Express Server, which lost them the OLAP market they had owned back in the late 90s. I’ve got mixed feelings on 9i OLAP as I know many of the people involved in its development and marketing, and nowadays, in its database OLAP Option form, it’s an excellent product – I think technically and architecturally better than Essbase. But the introduction of Oracle 9i OLAP was a masterclass in suicidal product marketing; first, there was no backward compatibility with Express Server tools, so all of the users of Oracle Financial Analyzer and Oracle Sales Analyzer had to wait years for the product at #1 in our list, with of course most of them decamping to Microsoft OLAP Services or Arbor Essbase instead. The first version of Oracle 9i OLAP was ROLAP-only, with terrible, buggy performance, and there was a wait of a year or so before the MOLAP version came out, again without backwards compatibility with Express Server tools (due to the removal of the key SNAPI and XCA communications APIs that Express Server applications used).

NewImage


All of this made great technical sense, and if the MOLAP version of 9i OLAP had come out at the same time as the ROLAP version, and if Oracle had somehow managed to keep SNAPI support so that OFA and OSA customers could migrate their OLAP cubes to Oracle 9i without losing their tool access, it might all have worked out. But the “icing on the cake” was the lack of any proper ad-hoc or OLAP query tool support right at the start (Discoverer “Drake” came a few years later), with customers expected to – get this – write their own BI tool using a Java component technology called Oracle Business Intelligence Beans.

NewImage

A few partners, myself included, gamely tried to generate interest around 9i OLAP and BI Beans around ten years ago, but combined with Microsoft’s entry into the OLAP market and Arbor’s (and then Hyperion’s) focus on the finance department, rather than the DBAs and IT departments who never actually buy OLAP servers, Oracle OLAP never regained the market share that Express Server had, even though, as I said earlier, it’s arguably a better, more scalable and easier-to-manage OLAP server than Essbase.

The last laugh is on the BI Beans product development team though, as the BI Beans query builder became the inspiration for OBIEE 11g’s “Selection Steps” feature, whilst its data visualisation components found their spiritual successor in ADF’s Data Visualization Tools (DVT) feature, which provides the BI visuals behind OBIEE, the latest version of Endeca Information Discovery, and of course the Oracle Fusion Apps.

1. Oracle Enterprise Planning & Budgeting

Number one in our list of Oracle’s slightly crazy BI tools from the past was Enterprise Planning & Budgeting, the long-awaited replacement for Oracle Financial Analyzer and Oracle Sales Analyzer based around the Oracle 9i OLAP platform. More akin to the Stone Roses’ “Second Coming”, and about as well critically received, EPB was the “aircraft carrier” to OFA’s “motor torpedo boat”: it had a list as long as your arm of critical patches to apply, and required installation alongside EBS (and knowledge of a set of arcane setup steps) before you could even use it.

NewImage

Coupled with a painfully slow user interface for users typically used to split-second OFA response times, EPB was long in coming but quickly despatched when Oracle first adopted Siebel Analytics and the BI Apps as their new BI platform, and then bought Hyperion and made Essbase and Hyperion Planning the centrepiece of their performance management strategy, something that carries on to this day.

So there we have it – a light-hearted look through some of the Oracle BI products that didn’t make it to the big time, and a bit of history to explain why OBIEE and EPM Suite are the tools we use today. Most of the Oracle PMs who looked after these tools are still with us, working on OBIEE and its related technologies, so apologies if I’ve inadvertently offended anyone by picking on one of the products they looked after – it was all fun at the time, and most of the products would have stayed with us, and gone on to be successes, were it not for the massive strategic shift Oracle made back at the turn of the century towards cross-platform, and away from the Oracle database being the centre of everything. Let me know if you’ve had any experiences with these tools, or if you’ve got any corrections or additions to their stories.

SmartView as the Replacement for BI Office with OBIEE 11.1.1.7

Apart from system-wide improvements to Essbase integration across OBIEE 11.1.1.7, the other Essbase-related improvement that came with this latest release was the (re-)introduction of SmartView as the replacement for Oracle BI Add-in for Microsoft Office (“BI Office”), OBIEE’s previous MS Office solution. As a reminder, BI Office appeared with OBIEE 10.1.3.3 back in 2007/8 and supported integration with Microsoft Excel and Powerpoint, allowing you to download analysis views from the BI Catalog and then view them within Excel and Powerpoint. 

NewImage

What you couldn’t do with BI Office though was use it to create new analyses, or upload what you’d created back to the BI Catalog. There was also no integration with Microsoft Word or Outlook, which meant it was a feature meant more for viewing and copying analyses into Excel and Powerpoint rather than as a “first class” report authoring environment.

Then when OBIEE 10.1.3.4 was released, a number of EPM Suite products were integrated with OBIEE, including Workspace (now resurrected with OBIEE 11.1.1.7), SmartSpace (where did that go?) and SmartView, the long-term replacement for Essbase’s somewhat minimalist Excel Add-in. This was all good stuff except that, in terms of OBIEE support, this version of SmartView was essentially unusable, rendering OBIEE data in an Essbase-like way that made little sense for an OBIEE user.

NewImage

“The UI takes a bit of getting used to” was my comment at the time, which in retrospect was a bit of an understatement and this version of SmartView had little to no take-up within the OBIEE world, with BI Office carrying on until now as the only viable MS Office integration approach. Now though, the new improved version of SmartView is with us, so how well does it work with OBIEE data?

SmartView can be downloaded from the BI Presentation Services homepage, but note that this is the 32-bit version and you’ll need to go to My Oracle Support for the 64-bit version, available using patch ID 16238382 (at the time of writing, for SmartView version 11.1.2.2.310). Once it’s installed, select SmartView > Options > Advanced and enter your general EPM Suite SmartView Provider Services URL into the Shared Connections URL setting (in the format http://[machine_name:port]/workspace/SmartViewProviders), like this:

Sshot 5

This setting only covers SmartView connecting to Essbase and Financial Reporting, so to connect to OBIEE's Presentation Services Catalog you'll need to create what's called a Private Connection (or define a shared connection for OBIEE within an XML file, as detailed in the SmartView 11.1.2.2.310 docs), by pressing the Panel button in the menu ribbon, selecting Private Connections from the Smart View menu, then clicking on the Create new connection button.

NewImage

Then, when prompted for the SmartView connection type, select Oracle BI EE, then type in the OBIEE SmartView URL in the format http://[machine_name:port]/analytics/jbips, and press Finish to complete this part of the process.

NewImage
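Both the Shared Connections URL and the private OBIEE connection follow the same http://host:port/path pattern, and if SmartView refuses to connect it’s worth double-checking the URLs are well-formed before digging any deeper. As a rough sketch (the hostname and port below are placeholders for your own environment, not real values):

```shell
# Placeholder host and port - substitute your own BI server details
HOST=biserver.example.com
PORT=9704

# The two SmartView endpoints referenced above
EPM_URL="http://${HOST}:${PORT}/workspace/SmartViewProviders"   # EPM/Essbase shared connections
OBIEE_URL="http://${HOST}:${PORT}/analytics/jbips"              # OBIEE private connection

echo "$EPM_URL"
echo "$OBIEE_URL"

# Optionally, check each endpoint responds before configuring SmartView, e.g.:
#   curl -s -o /dev/null -w "%{http_code}\n" "$OBIEE_URL"
```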

Then, when prompted, enter the username and password for your OBIEE system, and save the connection as a private connection to your workstation.

NewImage

Now you should be able to browse the BI Catalog and select a SmartView report, for example, to view within Excel.

NewImage

Or you can select any view from a regular analysis, and add that to Excel just as you did with BI Office.

Sshot 14

More importantly though, the View Designer feature allows you to create a new report from scratch, selecting from any subject area in the BI Catalog and creating a report from right within Excel.

Sshot 15

This report can then be manipulated either as an Excel pivot table (pictured below) or an OBIEE pivot table, giving you an OBIEE-within-Excel experience far more intuitive and usable than the earlier incarnation of SmartView.

Sshot 16

Additional calculated fields can be added, in what is arguably a more obvious way than you’d do so in the Analysis Editor…

Sshot 17

… and charts can be developed as well, using a similar set of chart types to the ones provided by the Analysis Editor.

Sshot 18

Then, once you’re done, you can either save the Excel (or Word, or Powerpoint, or whatever) document to your workstation’s filesystem, or you can upload to the BI Presentation Catalog using the Publish View button…

Savesmartviewreport

… and then – get this – open the report in the Analysis Editor, just like any other analysis in the catalog. Impressive stuff (although the calculation defined in Excel didn’t make it through to OBIEE, and the upload feature only seems to bring across a single view at a time – but this is version 1.0).

NewImage

There’s tons more to SmartView and, in reality, presumably some of the new OBIEE functionality won’t work properly in this first release, but it’s a huge improvement over the old OBIEE MS Office plug-in, and it’s also useful being able to use the same MS Office plug-in for all of Oracle’s BI & EPM tools, with full 32- and 64-bit support for all the modern MS Office versions.