
Oracle Openworld 2013 Day 2: Exalytics, TimesTen and Essbase Futures

Monday’s almost over at Oracle Openworld 2013 in San Francisco, and it started for me today with a presentation on Enterprise Manager 12c and the BI Management Pack, alongside Henrik Blixt (PM for the BI Management Pack) and Dhananjay Papde, author of a book on EM12c. I’ve covered EM12c and the BI Management Pack extensions quite a bit on the blog over the past few months, so it was good to exchange a few ideas and observations with Henrik, and it was also good to meet Dhananjay, who’s been working with EM for a long time and has particularly specialized in the configuration management and SLA-monitoring parts of the app.

Similarly, I finished up the day with another joint session, this time on TimesTen for Exalytics, with Peak Indicators’ Tony Heljula and Chris Jenkins, one of the TimesTen development PMs. As with all these sessions, it’s the audience interaction that makes them interesting, and we had a number of excellent questions, particularly at the TimesTen one given the very interesting product announcements during the day – more on which in a moment.


Before I get onto those though, here are the links to today’s RM presentation downloads, with presentations today given by myself, Jérôme Françoisse (with Gurcan Orhan) and Venkat:

So, onto the product roadmap sessions:

1) Oracle Exalytics In-Memory Machine

The first set of announcements was around Oracle Exalytics In-Memory Machine, which started off as a Sun x86_64 server with 1TB RAM and 40 CPU cores, then recently went to 2TB and SSD disks, and is now available in a new configuration called Oracle Exalytics T5-8. This new version comes with 4TB RAM and is based on Sun SPARC T5 processors with, in this configuration, a total of 128 CPU cores. It’s aimed at the BI consolidation market – customers who want to consolidate several BI applications, or BI environments, onto a single server – and priced in this case around the $350k mark excluding the software.

What’s also interesting is that the T5-8 will use Sun Solaris on SPARC as the OS, giving it access to Solaris’ virtualisation and resource isolation technologies, again positioning it as a consolidation play rather than a host for a single, huge BI application. Given the price I can’t quite see us getting one yet, but it’s an obvious upgrade path from the X2-3 and X2-4 servers and something you’d want to seriously consider if you’re looking at setting up a “private cloud”-type server infrastructure.

The Exalytics roadmap session also previewed other potential upcoming features for OBIEE – earmarked for Exalytics, I would imagine, given the background computation work that’d be needed to support them – including:

  • A “Google Search”/“Siri”-type feature called BI Ask, that presents the user with a Google-style search box into which you can type phrases such as “revenue for May 2010 for Widget A”, with the feature then dynamically throwing up a table or graph based on what you’ve requested. Rather than attempting natural language parsing, BI Ask appears to work with a structured dictionary of words based on objects in the Presentation Services catalog, with choices available to the user (for example, lists of measures or dimensions) appearing under the search box in a similar manner to the “Google Suggest” feature. Although the demo was done using a desktop browser, where I think this could be particularly useful is in a mobile context, especially given most browsers’ and mobile platforms’ built-in ability to receive speech input and automatically pass it as text to the calling application. Imagine Siri for mobile analytics: you hold your iPhone up, say “revenue for southern region for past three months, compared to last year”, and a graph of revenue over this period automatically appears on your iPhone screen – that’s what I think BI Ask is trying to get towards.
  • A user-driven data mashup feature that allowed the user to browse to a spreadsheet file on their local desktop, upload it to the OBIEE server (maybe to an Oracle Cloud database?), and then automatically join it to the main corporate dataset so that they could add their own data to that provided by the BI system. Any setup like this needs to clearly differentiate between metrics and attributes uploaded by the user and the “gold standard” ones provided as part of the RPD and Presentation Services Catalog, but this is potentially a very useful feature for users who’d otherwise export their OBIEE data to Excel and do the data combining there.
  • Probably more for “Exalytics v2”, but a totally revamped aggregate refresh and reload framework, probably based around DAC technology, that would leverage the DAC’s own data loading capabilities and tools such as GoldenGate to perform incremental refreshes of the Exalytics adaptive in-memory cache. No specific details yet, but it’s pretty obvious how this could improve on the current Exalytics v1 setup.
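As a thought experiment on how BI Ask’s dictionary-driven suggestions might work – Oracle hasn’t published implementation details, so treat this purely as a sketch, with all the catalog terms below made up – prefix matching against a dictionary built from Presentation Catalog objects could look like this:

```python
# Sketch of a dictionary-driven suggestion lookup, in the spirit of "BI Ask".
# The catalog terms below are invented; a real implementation would build its
# dictionary from the Presentation Services catalog (measures, dimensions, etc.).
CATALOG_TERMS = {
    "measure": ["revenue", "units sold", "profit"],
    "dimension": ["region", "product", "month"],
}

def suggest(prefix):
    """Return catalog terms matching the typed prefix, Google Suggest-style."""
    prefix = prefix.lower()
    matches = []
    for term_type, terms in CATALOG_TERMS.items():
        for term in terms:
            if term.startswith(prefix):
                matches.append((term_type, term))
    return matches

print(suggest("re"))  # -> [('measure', 'revenue'), ('dimension', 'region')]
```

The key point is that matching is against a closed vocabulary of catalog objects, which sidesteps natural language parsing entirely.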

2) Oracle TimesTen for Exalytics

Yesterday of course had the big announcement about the new In-Memory Option for Oracle Database 12c, and this then led to the obvious questions – what about TimesTen, which up until now was Oracle’s in-memory database, and what about Exalytics, where TimesTen was the central in-memory part of the core proposition? And so, given that I was on the TimesTen Birds of a Feather panel this evening and would no doubt need to field exactly those questions, I was obviously quite keen to get along to one of the TimesTen roadmap sessions earlier in the day to hear Oracle’s story around this.

And – it actually does make sense. What’s happening is this:

  • TimesTen’s development team has now been brought under the same management as Oracle Database 12c’s In-Memory option, with (over time) the same core libraries and the same performance features
  • TimesTen will get the ability to store its tables (which are already held in memory) in columnar format as well as the existing row format. The difference from the Oracle Database in-memory feature is that this is not done through on-the-fly data replication – a table is stored either row-store or column-store, something you decide when you create it, and disk is only used for checkpointing and data persistence between reboots
  • TimesTen will also gain the ability to be set up as a grid of servers that provide a single database instance – a bit like RAC and its single instance/cache fusion – with support for replication so that you can copy data across the nodes to protect against machine failure. Currently you can link TimesTen servers together, but each one is its own database instance, and you’d typically do this for high availability and failover rather than to create one large database. What this grid setup also gives us is the ability to do parallel query – Oracle didn’t say whether this would be one slave per grid node, or whether it’d support more than one slave per node – but coupled with the in-memory column store feature, this presumably means bigger TimesTen databases and much faster queries (and it’s fast already).
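The row-store versus column-store distinction above can be illustrated generically – this is not TimesTen syntax (the columnar DDL hadn’t been published at the time of writing), just a Python sketch of why a single-column aggregate favours the columnar layout:

```python
# Generic illustration of row-store vs column-store layouts (not TimesTen
# code). The same three rows, stored two ways:
row_store = [
    {"region": "North", "month": "May", "revenue": 100},
    {"region": "South", "month": "May", "revenue": 150},
    {"region": "North", "month": "Jun", "revenue": 120},
]

column_store = {
    "region":  ["North", "South", "North"],
    "month":   ["May", "May", "Jun"],
    "revenue": [100, 150, 120],
}

# An aggregate over one column only needs to touch that column's values in
# the columnar layout, rather than reading every full row:
total_row = sum(r["revenue"] for r in row_store)
total_col = sum(column_store["revenue"])
assert total_row == total_col == 370
```

In a real column store the win comes from scanning contiguous, compressible column values; the point here is simply that an analytic query touches a fraction of the data it would in row format.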

So what about the positioning of TimesTen vs. Oracle Database In-Memory Option – does one replace the other, or do you use the two together? Oracle’s ideas on this were as follows:

  • Clearly the in-memory Oracle Database option is going to be a great query accelerator for large-scale data warehouses, but there’s still a lot of value in having a mid-tier in-memory data layer that’s under the control of the BI system owner, rather than the DBAs. You’ll have control over the data model, you can implement it quicker than you’d be able to upgrade the whole data warehouse database, and it’s physically located closer to the BI Server, so you’ll have less of an issue with network latency.
  • TimesTen’s in-memory columnar storage technology will be based on a similar approach to the one being taken by the database, and developed by the same overall team. But TimesTen will most probably have shorter development cycles, so new features might appear in TimesTen first, and it’s also lower risk for customers to test out new in-memory approaches in TimesTen than to reconfigure the whole warehouse to try out a new approach.

And I think this makes sense. Of course, until we actually get hold of the two products and test them out, and see how the pace of development works out over time, we’re not going to fully know which product to deploy where – and of course pricing and packaging have yet to be announced. For example, I’d strongly predict that columnar storage for TimesTen will be an Exalytics-only feature, whilst the In-Memory Option for the database might be priced more like RAC than Partitioning, or even packaged up with Partitioning and OLAP as some sort of “data warehousing option”. We’ll just have to wait and see.

3) Oracle Essbase

The Essbase roadmap was the last session I managed to attend today, and again there were some pretty exciting new features announced or trailed (it was made clear that the features at the end of this list are more at the planning or conceptual stage at the moment, and may well not make it into the product). Anyway, here’s what was talked about in the session, for BI and Exalytics-type use cases:

  • Improved MDX query creation when working with the BI Server, including support for sub-selects – something that might help to reduce the number of separate MDX queries that OBIEE has to generate to work out all the subtotals required for hierarchical column queries
  • Improvements to the MDX AGGREGATE function and a revamped cube spin-off feature for OBIEE, including a prototype new web-based MOLAP Acceleration Wizard for auto-generating Essbase cubes for OBIEE aggregate persistence
  • A new Cube Deployment Service private API, that’s used by the MOLAP Aggregation Wizard (amongst others) to generate and deploy an Essbase cube within a cloud-type environment
  • A “renegade member” feature for collecting all the data load records whose members can’t be located – aimed at avoiding the situation where totals in an Essbase cube don’t match the totals in the source system because records got dropped during the data load
  • Very speculatively – a potential hybrid BSO/ASO storage mode, combining BSO’s calculation capabilities with ASO’s dynamic aggregation.
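The “renegade member” idea above can be sketched in a few lines – the member names and catch-all name below are made up, and this is only a model of the reconciliation behaviour, not Essbase code:

```python
# Sketch of the "renegade member" idea: load records whose member can't be
# matched against the outline are collected under a catch-all member rather
# than dropped, so cube totals still reconcile with the source system.
OUTLINE_MEMBERS = {"East", "West", "North"}
RENEGADE = "Renegade"

def load(records):
    cube = {}
    for member, value in records:
        target = member if member in OUTLINE_MEMBERS else RENEGADE
        cube[target] = cube.get(target, 0) + value
    return cube

cube = load([("East", 100), ("West", 50), ("Norht", 25)])  # note the typo'd member
print(cube)                # -> {'East': 100, 'West': 50, 'Renegade': 25}
print(sum(cube.values()))  # -> 175, matching the source total
```

Without the catch-all, the mistyped record would simply be rejected and the cube total would come out 25 short of the source.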

So – lots of potential new features and a peek into what could be in the roadmap for three key OBIEE and Exalytics technologies. More tomorrow as we get to attend roadmap sessions for OBIEE in the Cloud, and ODI 12c.

Installing Essbase Analytics Link for HFM 11.1.2.3

If your requirement is to install EAL for HFM, then this post is for you. Oracle has not yet released a new version of EAL that is certified with HFM 11.1.2.3, and hence you will not be able to find one on eDelivery. The question is – which version of EAL can one use with EPM 11.1.2.3? A few days ago, MOS published Doc ID 1570187.1, which said we could use the EAL 11.1.2.2.301 PSU for this. This PSU is a full installation, which is available for download from the MOS site. Also, any previous installation of EAL must be uninstalled before installing this release.

The pre-requisites from the documentation include (these are applicable only if you have an existing EAL instance, not for fresh installations):

  • Clear the existing Analytics Link repository (this is bad since you need to redefine the connections, regions, bridges etc.)
  • Unregister previous instance of Analytics Link Server from Foundation Services
  • Reset Data Sync Server

This post will demonstrate a fresh installation on a Windows 2008 64-bit server. Also, this post will not give you a step-by-step approach but will highlight the key steps in the installation.

Unlike other EPM products, EAL cannot be installed with the EPM System Installer. To install EAL 11.1.2.2.301, you should use the 32-bit version of Oracle Universal Installer 11.2, even if the installation is going to be on a 64-bit machine. The OUI will install the Analytics Link version that matches the bitness of the operating system. OUI comes with the EAL download, so no separate download is required. Alternatively, you can use the OUI that comes with the Oracle database, if you have one installed on the same server.

Installation:

Run the OUI installer; select the installation type and destination folder.


To let OUI know what product we’re going to install, we need to browse to the path where products.xml exists in the unzipped EAL distribution.


Specify a path where you want to install EAL and kick off the installation. Make sure the installer displays version 11.1.2.2.301 during the install.


Once the installation is finished, the configuration tool starts up. You must enter the WebLogic server details and Analytics Link repository details when prompted.


The Doc ID mentioned earlier also suggests not using the default EPM Instance Home location at the Foundation Configuration step.


Give a suitable username and password for the Data Sync Server, and use an account that is an Administrator to configure the Analytics Link services. Unfortunately, the configuration tool doesn’t show the progress of configuration, so you’ll have to wait until you see the ‘success’ message on the window. As specified earlier, this is a full installation and cannot be rolled back.

Verifying the installation:

After a successful installation, to verify that EAL is able to connect to HFM, we have to log on to Essbase Administration Services Console (use the client installers to install EAS Console) and import the EAL plug-in, as shown in the screenshots below. Go to Tools > Configure Components and click Add to import the EAL plug-in.


Navigate to the directory where the Analytics Link server is installed (HFS_HOME, which is C:\EAL_Home in our case) and import the jar file eas-plugin_wl.jar.


After the import is finished, you may need to exit and restart EAS Console to see the ‘Analytics Link Servers’ node.


Add a new Analytics Link Server, specifying the username and password given at the time of configuration.


Now that we have successfully imported the EAL plug-in and added our First Analytics Link Server, to verify HFM connectivity we need to define which HFM application EAL should connect to, and which Essbase database it should write the outline and data to, based on the HFM application. Basically, we configure all the objects under the ‘First’ Analytics Link Server.

HFM Server and Application:


Essbase Application and Database:


After you define the Data Sync Server and Data Store, create a bridge that acts as a link between the HFM application and the Essbase database, refreshing the outline and data.


Open the bridge, create a bridge application and check if the outline is created.


Now we can conclude that there are no configuration-related issues, since we’re able to refresh the metadata to Essbase without any problems. This should give you a good walkthrough of installing and configuring Essbase Analytics Link for HFM.

New Feature in OEM Cloud Control 12cR3 – Exalytics Server Management

A few months ago I wrote a series of posts about the new 12cR2 release of Oracle Enterprise Manager Cloud Control, which re-instated the BI Management Pack used for monitoring OBIEE 11g systems. Since then, the 12cR3 update for Cloud Control has been released, and one of the new features it added to the BI Management Pack was support for adding Exalytics systems and monitoring them as a whole. So how does this new feature work, and what’s involved in setting it up?

If you read my previous post on registering host systems for use with the BI Management Pack, the process for registering an Exalytics system is more or less the same, except that there’s now a specific target type of “Exalytics System” that can be used with the “Add Targets using Guided Process” registration step. In this instance, I’m using EM12cR3 to register an Exalytics server that’s running OBIEE 11.1.1.7 and the rest of the Exalytics v1.0.0.5 stack, including Essbase and TimesTen.


Then, assuming the Exalytics server doesn’t currently have the EM Management Agent already installed, one is installed on the Exalytics server automatically by the EM Management Server, so that the Exalytics server can then be monitored at a hardware and software level from the main Enterprise Manager web console.


Then, you can either use the auto-discovery feature in EM to “sniff-out” the various middleware components on the Exalytics server, or you can register them manually, this time registering the WebLogic domain on the Exalytics server and automatically picking-up all of the OBIEE components on the server.


Essbase Server gets picked up as part of the discovery process for the OBIEE Fusion Middleware components, but TimesTen has to be separately registered through a plug-in that you download into your EM environment. I covered the TimesTen plug-in download, registration and deployment process back in my earlier post on EM12cR2, and it’s the same with 12cR3: the TimesTen plugin is deployed to the Oracle Management Server in the EM environment, and then to the Management Agent on the Exalytics server. Once you’ve registered both the OBIEE elements and the TimesTen elements for your Exalytics server, you can see them together as part of the overall Exalytics “system”, as shown in the screenshot below.


The key addition here in 12cR3 is that Exalytics is now a first-class EM target type, and each Exalytics server is registered as an EM “system”, grouping together all of the hardware and software elements into a single management group.


Some basic host (server) monitoring comes out of the box with the BI Management Pack, and there are additional plug-ins and management packs you can use to get further insights into the hardware side, ILOM and so on.


Then, for each of the OBIEE11g and TimesTen elements, you can drill further into their various monitoring and management pages, just as you would with the 12cR2 version of Cloud Control and the BI Management Pack.


As I mentioned before, registering Exalytics Server targets is new in the 12cR3 release of Enterprise Manager Cloud Control and is part of the pay-extra BI Management Pack, which also gives you additional insights and management features for OBIEE and Essbase. All versions of Exalytics can be registered and managed by EM using this feature, but earlier (pre-v1.0.0.3) releases will need a configuration file added to their servers to enable the EM discovery process to recognise them as proper Exalytics servers. All in all, not a bad extra feature for the BI Management Pack, and it brings Exalytics into line with the other Oracle Engineered Systems in terms of EM target registration – it’s also a pre-requisite if you’re planning to virtualise Exalytics using OVM and the “trusted partitions” licensing scheme.

OBIEE 11.1.1.7 + Essbase : Essbase Filters within Fusion Middleware 11g Security

Data and metadata filters are used with Essbase to limit access to data in a cube. When you use data filters to restrict access to a certain set of dimension members in the cube, the end-user can see that the member exists, but can’t see any data associated with it. Metadata filters go one step further, stopping the end-user even seeing the dimension member if they’ve not got permission to view it. In the past, Essbase filters were defined using MaxL or Essbase Administration Services, and then assigned to individual users or groups using Hyperion Shared Services; however, if you install Essbase as part of OBIEE 11.1.1.7, these filters are assigned to users and groups using Fusion Middleware Security’s application roles and policies. So how does this process work?
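The data-versus-metadata distinction can be modelled in a few lines of Python – the members and values below are made up, and this is only an illustration of the visibility behaviour, not Essbase code:

```python
# Illustration of data filters vs metadata filters in Essbase (made-up data):
# a data filter hides the values for restricted members but leaves the member
# visible; a metadata filter removes the member from view altogether.
cube = {"East": 100, "West": 50, "Secret": 999}
restricted = {"Secret"}

def apply_data_filter(cube):
    # member still visible, but its data is blanked out
    return {m: (None if m in restricted else v) for m, v in cube.items()}

def apply_metadata_filter(cube):
    # member disappears from the outline the user sees
    return {m: v for m, v in cube.items() if m not in restricted}

print(apply_data_filter(cube))      # member visible, value hidden
print(apply_metadata_filter(cube))  # member gone entirely
```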

There are three main steps required to set up an Essbase filter under this new Fusion Middleware security model:

  1. Identify the user or group to which the filter is to be applied (or, as in the case of this note, create a new FMW user and add to an existing default group)
  2. Create an Essbase Filter, typically using Essbase Administration Services
  3. Set up the required policies within Fusion Middleware security, using either WLST scripting or more usually, Oracle Enterprise Manager Fusion Middleware Control. 
Let’s go through these steps now, by creating a new user in the embedded WLS LDAP server, then assigning a filter to it.

Creating a new Embedded LDAP Server User

To start by creating a new user, log in to the WebLogic Administration Console (e.g. http://localhost:7001/console/login/LoginForm.jsp), and click on Domain Structure > Security Realms, like this:


On the subsequent screen, click on the Users and Groups tab.  Then, click on New to add a user.  On the resultant form, enter a user name (eg markcann) and a password.  Confirm the password, and then click OK to save those details.


 Back in the main users screen, click on the newly-added user, like this, so that we can then assign it to a group.


Then, in the resultant screen, click on the Groups tab, then select just the BIConsumers group, and press Save. This user is now part of the basic group that can access OBIEE, and will be able to run reports and other catalog content.


Create the Essbase Filter

Now we’re going to create an Essbase filter to apply to this user. In this example, the Essbase test database EXA102_2.EXA102_2 contains the following dimensions:


The filter to be created needs to hide the bottom levels of the Product hierarchy from view. For example, PT1 within PC12 contains PI3, PI2, etc. as its lowest-level values; all such leaf nodes need to be hidden from user view for this dimension.


Although filters can be written and uploaded using MaxL, they can still be created in Essbase Administration Services (EAS), even though we are using Fusion Middleware security with Essbase. Once logged into EAS, navigate through the Enterprise View tree to locate the required database, then right-click on the database and choose Edit Filters.


In the Filter Editor screen, click New to create a new filter. Assign the new filter a name (for example, “DropLeaves”), and add a MetaRead filter – the more restrictive of the two filter types – with the following member specification:

@REMOVE(@IDESCENDANTS("Product"),@LEVMBRS("Product",0))
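The member-set expression takes Product and all its descendants, then removes the level-0 (leaf) members. A small Python model of the same set logic, using a made-up product hierarchy (this only mimics what @IDESCENDANTS, @LEVMBRS and @REMOVE do, it isn’t Essbase code):

```python
# A model of the member-set logic in the filter above: @IDESCENDANTS returns
# a member plus all its descendants, @LEVMBRS("Product", 0) returns the leaf
# (level-0) members, and @REMOVE subtracts the second set from the first.
HIERARCHY = {
    "Product": ["PT1"],
    "PT1": ["PC12"],
    "PC12": ["PI2", "PI3"],   # PI2/PI3 are leaf (level-0) members
}

def idescendants(member):
    members = [member]
    for child in HIERARCHY.get(member, []):
        members.extend(idescendants(child))
    return members

def levmbrs_0(member):
    # leaf members are the ones with no children in the hierarchy
    return [m for m in idescendants(member) if m not in HIERARCHY]

def remove(members, to_remove):
    return [m for m in members if m not in set(to_remove)]

visible = remove(idescendants("Product"), levmbrs_0("Product"))
print(visible)  # -> ['Product', 'PT1', 'PC12'] - the leaves are filtered out
```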


 Click Verify to check the content of the filter code, then click Save to return to the main filter screen: 


Then finally, click Close.  The filter is now created and ready for assignment. 

Assigning the Filter using Application Policy Permission Objects

According to the OBIEE 11.1.1.7 Systems Administrator Guide’s section on Configuring Data-Level Security using Essbase Filters:

“An application role requires at least two policy store permission grants to access to a specific filter. You must give an application role permission to use filters within a specific scope eg

• oracle.essbase.application, /EssbaseCluster-1/Demo, use_filter

and

• oracle.essbase.filter, /EssbaseCluster-1/Demo/Basic/read_filter, apply”

This is true – an application role does indeed require at least two policy store permission grants to access a specific filter. It requires a whole lot more to work if you’re starting from scratch, though.
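For reference, the two grants quoted above could alternatively be scripted with WLST rather than clicked through Fusion Middleware Control. The sketch below stubs out the OPSS grantPermission command so the call shapes can be shown and checked as plain Python; the ResourcePermission class and the resourceType/resourceName target format are my reading of how OPSS expresses resource-style grants, so verify the exact strings against the documentation before relying on them:

```python
# Hedged WLST-style sketch of the two policy grants. grantPermission here is
# a recording stand-in for the real OPSS WLST command of the same name; the
# permission class and target format follow the OPSS resource-permission
# convention and should be checked against the docs before use.
grants = []

def grantPermission(appStripe, principalClass, principalName,
                    permClass, permTarget, permActions):
    grants.append((appStripe, principalName, permTarget, permActions))

# Grant 1: allow the user to use filters within the application's scope.
grantPermission(
    appStripe="obi",
    principalClass="weblogic.security.principal.WLSUserImpl",
    principalName="markcann",
    permClass="oracle.security.jps.ResourcePermission",
    permTarget="resourceType=oracle.essbase.application,"
               "resourceName=/EssbaseCluster-1/EXA102_2",
    permActions="use_filter")

# Grant 2: allow the specific DropLeaves filter to be applied.
grantPermission(
    appStripe="obi",
    principalClass="weblogic.security.principal.WLSUserImpl",
    principalName="markcann",
    permClass="oracle.security.jps.ResourcePermission",
    permTarget="resourceType=oracle.essbase.filter,"
               "resourceName=/EssbaseCluster-1/EXA102_2/EXA102_2/DropLeaves",
    permActions="apply")

assert len(grants) == 2
```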

It is important to recognise that the default groups set up by the install contain a lot of policies by default, and that they are nested and hence inherit policies from their ‘subordinate’ members. The three groups set up for the BI elements of FMW are nested thus:

  • BIAdministrators
    • BIAuthors
      • BIConsumers
Therefore, for example, any users assigned to the BIConsumers group will automatically be granted access to all applications in the main Essbase cluster by this policy lodged against the BIConsumers group:


By adding our new user, markcann, to the BIConsumers group, it automatically assumes all of the rights of that group. We can, however, overlay additional policies on just the markcann user that will not affect the other members of the group. It is here that the two additional policies needed to effect the filter for this user must be added.

To do this, log in now to Enterprise Manager Fusion Middleware Control, and in the left-hand pane navigate to Business Intelligence > Core Application and left-click on it. Once the right-hand pane has refreshed, click on the dropdown next to Business Intelligence, then select Security / Application Policies, like this:


In the resultant screen, select obi as the Application Stripe (if not already defaulted to this value), select User as the Principal Type, and then press the Create button. 

In the resultant screen, click the top Add:


Change the Type field value to User, then locate the new user (click on the blue arrow on the right-hand side of the form after entering the first letter of the name in the search box, if required). Highlight the user, then click OK.


Back in the main screen, the selected user will now be showing.  Click the bottom Add to start adding the required policy permission.


 In the resultant dialog, click the Resource Types radio button.  Choose oracle.essbase.application as the Resource Type, then click Continue.


Type in the cluster / app name (/EssbaseCluster-1/EXA102_2) as the resource.  Check only the use_filter permission action, then press the Select button.


Back in the main screen, the newly added application policy permission should now be showing. Click the bottom Add button again to add a second policy permission, following these steps:

  1. Check the ‘Resource Types’ radio button
  2. Choose oracle.essbase.filter as the Resource Type
  3. Click ‘Continue’ to access the second screen

Enter the Cluster / App / DB / Filter (eg /EssbaseCluster-1/EXA102_2/EXA102_2/DropLeaves) as the Resource Name. Check the apply check box, then press Select to return to the main application policy screen for the user.


Back in the main screen, with the two application policy permissions now showing, click OK.


 The previously created filter is now effective for the markcann user. 

Testing the New Filter

To test the filter, I now log into Smart View, initially as the main administrator user ‘weblogic’. To test access without the filter applied, I access the relevant database (in this case, EXA102_2.EXA102_2), place the Product dimension in the ‘row’ position, and perform a Zoom In / All Levels operation. Note that the level-0 values of Product ARE visible:


Now I re-connect to the database via Smart View as the new markcann user and repeat the orientation and zoom actions. Note that the level-0 values are no longer visible:


When actually drilling (zooming) from Smartview, double-clicking on (e.g.) Bread Clubs results in no change to the report, exactly as if it was the bottom level of the hierarchy.

Obviously, in a real system you would not look to set up access like this on a user-by-user basis. One approach may be to add access-based groups (UserGroup1, UserGroup2, etc.) to the BIConsumers (or indeed, BIAuthors) group, and assign the relevant filter (or filters – Essbase under FMW recognises multiple filters for a user, which is new) to those groups. Provisioning (access to functions) would then be controlled at the ‘standard’ group level, and scoping (preventing access to data) controlled by the custom groups.


So – a bit of a different process than we’re used to with Shared Services, but it has the advantage of using the same users, groups, application roles and application policies as the rest of OBIEE. Remember also to ensure that any connection from OBIEE’s repository to the Essbase database uses the actual user’s credentials (i.e. :USER and :PASSWORD, or the CSS Token feature), otherwise the filters won’t get applied to the user’s analyses and reports.


Conclusions on Storing Detail-Level Data in Oracle Exalytics

Last week I wrote a blog post on the topic of storing lots of detail-level data in Oracle Exalytics – not the primary use-case for the product, but something we’re increasingly asked about. Most implementations of Exalytics use OBIEE’s Summary Advisor to recommend aggregations against large data warehouse datasets, with those aggregates then getting stored in TimesTen, but some customers are looking at their dataset, seeing that it’s well under 1 or 2TB in size, and wondering whether it’s possible to put the whole dataset in-memory, rather than just specific aggregations.

My initial conclusion on writing the article was that TimesTen, although fast for small datasets, wasn’t particularly suited to querying large, sparse datasets because it lacked common VLDB features such as parallel query and partitioning. Instead, I recommended readers look at the other database engine that comes with Exalytics – Essbase, and in particular the Aggregate Storage Option – as an alternative way of storing this volume of data.

And Essbase ASO performed very well at the task, rapidly loading 130m source rows of data into an ASO database and then responding to queries typically within a couple of seconds. The source Oracle database took up to a minute to return queries (having had any materialised views disabled in the background), and the TimesTen database performed as expected, typically taking 10–20 seconds to return answers. Except that, having discussed the results with the TimesTen development team over the weekend, it turned out I’d missed a crucial step, by not running TimesTen’s Index Advisor after some queries had run, to create indexes suited to the dashboards I was analysing. In yesterday’s post I ran the Index Advisor on the two TimesTen databases and, as Oracle advised, the results were much, much better – in fact, faster than the Essbase ASO database that I’d been singing the praises of in the previous post.

But of course, now things aren’t fair on Essbase, as there’s a similar step you can perform on Essbase ASO databases: running an advisor that recommends what are called “aggregate views” – similar to Oracle database materialised views – based on the structure of your cube and the workload on your system. And, of course, there’s the Summary Advisor that I can run against the original Oracle source database, which should make certain queries against that source run faster. So which one is best?

Before we get into any more testing though, it’s worth thinking about why we’re looking at this scenario, and what Exalytics is designed for – at least, the OBIEE parts of Exalytics. When Exalytics was put together, Oracle’s vision for the product was as a complement to technologies such as Exadata and the Big Data Appliance, where the reporting dataset could typically be many, many terabytes in size. As such, you’re never going to get all of the data into the 1TB (now 2TB) of RAM that’s in the Exalytics server, so you’ve got two choices: either you take a detail-level subset of the entire dataset and put it into RAM (the SAP HANA approach, perhaps), or you take a slice of it – say, just the aggregates – and put those into memory instead. Going down the latter route means that Exalytics can work with source datasets many times the size of Exalytics’ memory, and the approach fits with the general way that data warehouse systems are optimised, by pre-aggregating and pre-caching data.


But some customers don’t fit this model, yet still want to take advantage of the power of Exalytics. Some might buy Exalytics to give them their own departmental analytics server, in the same way that companies used to buy Hyperion Essbase or Oracle Express to be able to “own” their own analysis server. In this case, it’s reasonable that they might want to put all the data required for analysis “in the Exalytics server”, but this isn’t a scenario catered for well by the Summary Advisor. Others might just have a reporting dataset that fits within the 1 or 2TB of RAM in the Exalytics server, and just say – let’s put it all in there. Another scenario might involve copying just this month’s data into TimesTen (“hot data”), and then using a feature such as OBIEE’s LTS fragmentation to get the hot data from TimesTen and the older data from Oracle. Whichever way, Exalytics has to be able to handle both summary-level queries and retrievals of individual sets of rows, right down to the grain of the dataset.
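The “hot data” pattern mentioned above amounts to simple routing by fragment: in OBIEE this is declared with fragmentation content predicates on the logical table sources rather than coded by hand, but the logic (with an illustrative cut-off month) boils down to:

```python
# Sketch of the hot/cold routing idea behind OBIEE LTS fragmentation: queries
# for the current month go to the in-memory (TimesTen) fragment, older months
# to the Oracle warehouse. The cut-off month below is illustrative; in OBIEE
# this routing is declarative, via fragmentation content on each LTS.
HOT_MONTH = "2013-09"  # illustrative "current month" boundary

def route(query_month):
    return "TimesTen (hot)" if query_month >= HOT_MONTH else "Oracle (cold)"

print(route("2013-09"))  # -> TimesTen (hot)
print(route("2013-03"))  # -> Oracle (cold)
```

The design point is that the BI Server picks the fragment per query, so users see one logical table regardless of where each month physically lives.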

NewImage

So the way that TimesTen, with the Index Advisor, speeds up queries against large datasets actually serves this well – index access suits retrieval of small sets of rows, and in fact the new indexes recommended by the Index Advisor in the example in yesterday’s post even seemed to speed up aggregations as well, which was a bonus. The equivalent advisor within Essbase concentrates on aggregations though, not single “row” retrievals, so how does that work?
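As a reminder of the TimesTen side, the Index Advisor workflow from yesterday’s post is driven by built-in procedures called from ttIsql – roughly as below (the capture-level and mode values are the ones from the 11.2.2 documentation examples, so check the reference for your release):

```
-- start a database-level (1) capture of index advice
call ttIndexAdviceCaptureStart(1, 31);

-- ... run the dashboard / query workload here ...

call ttIndexAdviceCaptureEnd(1);

-- list the recommended CREATE INDEX statements
call ttIndexAdviceCaptureOutput(1);
```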

Whilst you can create ASO aggregate views manually, probably the easiest place to create them is within Essbase Administration Services Console, where there’s a menu option against ASO databases for designing these aggregate views (remember, ASO databases automatically aggregate when queries run against non level-0 members, but this process is about pre-calculating more aggregates in order to speed up those queries).

Sshot 4

Selecting this menu option brings up a wizard that either picks and materialises the aggregations for you, or lets you select and materialise them individually, based either on the structure of the ASO database or on a query workload captured earlier.
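Outside EAS Console, the same process can be scripted in MaxL – the statements below are a sketch against a hypothetical Airlines.Delays database (check the MaxL reference for the exact clause order in your Essbase release):

```
/* capture a query workload to base the aggregate selection on */
alter database Airlines.Delays enable query_tracking;

/* ... run the dashboard queries, then ... */

/* build aggregate views from the captured workload,
   capping the growth of the database */
execute aggregate process on database Airlines.Delays
  stopping when total_size exceeds 1.2
  based on query_data;
```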

NewImage

Creating the aggregates, for both the Sales History and the Airline Delays datasets, was surprisingly fast, and going back to the dashboards and running the queries again, response times, as predicted, dropped.

Sshot 7

So in fact, from a purely response-time perspective, there’s not much in it between TimesTen (plus the Index Advisor) and Essbase ASO (plus the Aggregation Design Wizard). For good measure I also created some in-memory aggregates for the Oracle source using the Summary Advisor, which recommended a couple for each, albeit with the TimesTen one coming in at a whopping 10GB.

NewImage

So, to answer the question: if I want to put my entire dataset into Exalytics’ RAM – either because in total it’s less than 1TB in size, or because I want all my reporting data on my own server, in-memory – which option comes out best? In truth, there’s no straight answer, with both TimesTen and Essbase ASO capable of doing the job, but each with its own limitations.

During the Essbase testing I’d been corresponding with Dan Pressman, author of the “How ASO Works and How to Design for Performance” chapter in the “Developing Essbase Applications” book, who was very bullish about Essbase Aggregate Storage Option in this type of scenario. In his opinion (and to paraphrase), Essbase ASO creates such compact databases (250MB vs. TimesTen’s 10GB) and returns both detail-level and summary data so quickly because:

  • “It effectively “compresses” the metadata when it creates the bitmap (which could be described as containing an efficient surrogate key for each piece of metadata).
    • The bitmap not only identifies each piece of data at the lowest level but includes the full ancestry within the hierarchy.
    • The full alternate hierarchy ancestry (for all alternates) of each data item is additionally represented in alternate bitmaps, which, while not physically in the data file, are available to the query evaluator.
  • This compressed metadata is very much like a column-store index. Since the full identity all the way up the hierarchy (and on alternate hierarchies) is included, all possible “additive” queries can be answered from this one compact representation.
    • ANY aggregate can be generated dynamically by querying the compressed bitmap, and if it fits in RAM then ANY aggregate can be calculated at RAM speeds. There is no need to use a summary advisor and hope that you got the right aggregates.
  • Even though there is no need for a summary advisor, one exists: the aggregation wizard. It creates summary aggregations which have all the advantages of the original metadata compression – simply limited to the upper portions of the hierarchies not summarised in the aggregation process.
    • This means that an aggregation can serve many queries.”

In addition, Dan observed (correctly) that the queries I ran from the various dashboards were pretty simple, and in his words “if instead you ran a more complicated grid type query that required results from all query categories, I believe you would find Essbase would win on the UNAGGREGATED cube – as TT would need results that would have to come from a number of indexed queries that would then have to be correlated.  Essbase would be able to do all the levels in a single pass of the full 900k bitmapped rows.”
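Dan’s bitmap point can be illustrated with a toy sketch – my own illustration, not Essbase’s actual implementation: give each member at each level of a hierarchy a small surrogate key, and pack a cell’s full dimensional “address” into a single integer, so the complete ancestry of a detail-level value fits in a handful of bytes.

```python
import math

def bits_needed(n_members):
    # bits required to give every member of a level its own surrogate key
    return max(1, math.ceil(math.log2(n_members)))

def pack(keys, level_sizes):
    """Pack one surrogate key per level into a single integer 'bitmap'."""
    bitmap = 0
    for key, size in zip(keys, level_sizes):
        bitmap = (bitmap << bits_needed(size)) | key
    return bitmap

def unpack(bitmap, level_sizes):
    """Recover the per-level surrogate keys from the packed integer."""
    keys = []
    for size in reversed(level_sizes):
        width = bits_needed(size)
        keys.append(bitmap & ((1 << width) - 1))
        bitmap >>= width
    return list(reversed(keys))

# A 3-level hierarchy: 4 regions, 12 months, 1000 products
level_sizes = [4, 12, 1000]
address = [2, 7, 513]                # surrogate keys at each level
packed = pack(address, level_sizes)  # whole address in 16 bits
assert unpack(packed, level_sizes) == address
```

Because the packed key encodes the full ancestry, any additive rollup is just a scan of these compact keys, which is why ASO can answer aggregate queries from an unaggregated cube at RAM speeds.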

But one thing that’s worth also bearing in mind is that it was a lot harder to create the Essbase ASO dataset, even using Essbase Studio, than it was to replicate the tables into TimesTen using ttImportFromOracle. It’s certainly possible to model a multiple-fact scenario in Essbase, but in practice reporting across multiple Essbase databases and modelling multiple “facts” is trickier in Essbase (within the OBIEE context), and I had to do lots of data-munging, such as prefixing member names to ensure uniqueness, that I didn’t have to do with TimesTen. That said, the out-of-the-box performance with the Essbase ASO source was a lot better than the TimesTen one, and the RPD modelling part was easier as the import process creates all the RPD metadata from the Essbase database outline.
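For comparison, the TimesTen side of that load was little more than a single ttImportFromOracle run, which generates the table DDL, load and statistics scripts for you – something like the command below (credentials and table names are hypothetical, and flag spellings should be checked against the Exalytics/TimesTen documentation for your release):

```
ttImportFromOracle -oraConn sh/password@orcl \
  -tables SALES CUSTOMERS PRODUCTS TIMES \
  -typeMap 2
```

The generated scripts are then run through ttIsql to create, load, index and analyse the tables in TimesTen.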

And all of this assumes that you can get your full dataset into Exalytics’ 1 or 2TB of RAM. With the Essbase ASO database, even with the additional aggregate views, the dataset was tiny, whereas with TimesTen the tables I imported in came to around 10GB (with compression), fine for an Exalytics server but a slight warning that we’d have to carefully manage space when loading data into the server. And this is where the regular Summary Advisor comes into its own, taking just the aggregated data and storing that, leaving the detail in the source database with Exalytics just holding the aggregates.

Unfortunately, therefore, there’s no black-and-white answer as to which technology is best when you want to store all of your reporting data, not just the aggregates, in Exalytics. The good news is that they both work (in the end), but each approach has its trade-offs – Essbase ASO is easier to model in the RPD and gives the best out-of-the-box results, but building the database requires Essbase modelling skills and may not scale up to the point where you could reasonably model the entire BI Apps dataset in Essbase ASO, for example. TimesTen is a bit more fiddly and takes a bit of post-load tuning, but has the benefit of closely matching the structure and layout of the source Oracle database, making it perhaps a more realistic option when replicating a whole schema into RAM – as long as it all fits in.

For me though, in the short term, I’ll be presenting this along with Stewart Bryson at next week’s Enkitec Extreme Exadata Expo (E4) Conference in Dallas, Texas, as part of a wider “tips and tricks from the field” session on Monday afternoon. Beyond that, my next set of tests will be to revisit the work that Pete Scott and I did on accelerating the Oracle BI Apps dataset using Exalytics and TimesTen, this time with the Index Advisor and with the hope that we’ll finally hit that goal of a data warehouse in-memory, with split-second response times. Watch this space, and maybe see one or two of you in Texas next week.