Tag Archives: Hyperion Essbase

Financial Reports – which tool to use? Part 2

Financials in BI Publisher


I find it interesting that BI Publisher is mostly known for creating pixel-perfect repeating forms (invoices, labels, checks, etc.) and for its ability to burst them. To me, BI Publisher is the best kept secret for the most challenging reports known to mankind.

In my last blog post (https://www.rittmanmead.com/blog/2017/02/financial-reports-which-tool-to-use-part-1/), I discussed some of the challenges of getting precisely formatted financial reports in OBIEE, as well as some pros and cons of using Essbase/HFR. Although we can work through difficult solutions and sometimes get the job done, BI Publisher is the tool that easily allows you to handle the strangest requirements out there!

If you have OBIEE, then you already have BI Publisher, so there is no need to purchase another tool. BI Publisher comes integrated with OBIEE, and both can be used from the same interface. The transition between BI Publisher and OBIEE is often seamless to the user, so you don't need to worry about training report consumers in another tool, or even transitioning to another URL.

The BIP version embedded with OBIEE 12c is loaded with many more useful features, like encryption and delivery of documents to Oracle Document Cloud Service. Check out the detailed new features here: http://www.oracle.com/technetwork/middleware/bi-publisher/new-features-guide-for-12-2-1-1-3074557.pdf

In BI Publisher, you can leverage data from flat files, from different databases, from an Essbase cube, from the OBIEE RPD, from one (or multiple) OBIEE analyses, from web services and more:

[Image: BI Publisher data source options]

So, if you already have very complex OBIEE analyses that you could not format properly, you can use these analyses, and all the logic in them, as sources for your perfectly formatted BI Publisher reports.

Every BI Publisher report consists of three main components:

  1. Data Model - the data source(s) that you will use across one or more reports

  2. Layout(s) - which will define how your data is presented

  3. Properties - which control how the report is generated, displayed, and more

You start a BI Publisher project by creating a data model that contains the different data sets that you would like to use in your report (or across multiple reports). These data sets, which reside inside your data model, can come from a single source or from multiple sources and formats. If you regularly use OBIEE, you can think of a data model as the metadata for one or more reports. It is like a very small, but extremely flexible and powerful, RPD.

[Image: a BI Publisher data model]

Inside the data model you can connect your data sets using bind variables (which creates a hierarchical relationship between data sets), or you can leave them completely disconnected. You can also connect some of your data sets while leaving others disconnected.
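As a quick hedged sketch of the bind-variable link (the table and column names below are invented for illustration), a child data set can reference a column returned by its parent data set directly in its own query:

    -- child data set: runs once per DEPARTMENT_ID value returned by the parent data set
    SELECT inv.invoice_number,
           inv.invoice_amount
    FROM   ap_invoices inv
    WHERE  inv.department_id = :DEPARTMENT_ID

Linked this way, BI Publisher executes the child query for each parent row and nests the resulting XML hierarchically.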

The most impressive capability of this tool is that it allows you to do math across the results of disconnected data sets, without requiring any ETL behind the scenes. This may be one of the requirements of a very complex financial report, and one that is very difficult to accomplish with most tools. The data model can extract and transform data within a data set, or extract only, so that the data can later be transformed during your report template design!

For example, within a data set, you can create new columns to suit most requirements - they can be filtered, concatenated, or have mathematical functions applied to them, if they come from the same data source.

[Image: creating custom columns within a data set]

If they do not come from the same source, you can transform your data during template creation instead, using a tool such as Microsoft Word. Using an RTF template, for example, you can apply math and other functions to any result that comes from any of your data sets.
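For instance, here is a minimal sketch of the kind of expression you could place in a single RTF template cell, assuming (hypothetically) two data sets whose XML groups are named G_ACTUALS and G_BUDGET, each containing an AMOUNT element:

    <?sum(//G_ACTUALS/AMOUNT) - sum(//G_BUDGET/AMOUNT)?>

Because all data sets in a data model are serialized into one XML document, an XPath expression like this can do math across data sets that are not joined at all.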

[Image: example report]

The example above was mentioned in Part 1 of this blog. It was created using BI Publisher and represents what I would call a "challenging report" to get done in OBIEE. The data model in this example consisted of several OBIEE analyses and their results were added/subtracted/multiplied as needed in each cell.

[Image: second example report]

This second example was another easy transition into BI Publisher: the entire report contained 10 pages, each formatted entirely differently from the others. Totals from all pages needed to be added into some specific cells. Better yet, the user entered some measures at the prompt, and these measures needed to be accounted for in every sub-total and grand total. You may be asking: why prompt for a measure? Very good question indeed. In this case, there were very few measures coming from a disconnected system. They changed daily, and the preferred way for my client to deal with them was to enter them at the prompt.

So, do you always have to add apples to apples? Not necessarily! Adding apples and bananas may be meaningful to you.


And you can add what is meaningful with BI Publisher!

For example, here is a sample data model using sources from Excel, OBIEE and a database. As you see, two of these data sets have been joined, while the other two are disconnected:

[Image: data model with joined and disconnected data sets]

A data model such as this one would allow you to issue simultaneous queries across these heterogeneous sources and combine their results in the report template. Meaning, you can add anything that you would like in a single cell - even if it involves that measure coming from the prompt! It goes without saying that you should have a clear purpose and sound logic behind this machination.

Once your data model is complete (your data sets are in place, relationships between them created where applicable, custom columns, parameters and filters defined), you generate some sample data (XML) and choose how you will create your actual report.

As I mentioned, there are additional functionalities that may be added when creating the report, depending on the format that you choose for your template:

[Image: template type options]

One very simple option is to choose the online editor, which has somewhat limited formatting capabilities, but allows you to interact with your results online.

In my experience, when I had to cross the bridge away from OBIEE and into BI Publisher, it was because I needed to do a lot of customization within my templates. For those customizations, I found that working with RTF templates gave me all the additional power that I could possibly be missing everywhere else. Even when my financial report had to be read by a machine, BI Publisher/RTF was able to handle it.

The power of the BI Publisher data model, combined with the unlimited flexibility of the RTF templates, was finally the answer to eliminating the worst Excel monsters. With these two, you can recreate the most complex reports, and do it just ONCE - not every month. You can use your existing format - one that you either love, or are forced to use for some reason - and reuse it within the RTF. Inside each RTF cell, you define (once!) what that cell is supposed to be. That specific cell, and all others, will be tested and validated to produce accurate results every month.

Once this work is done, you are done forever. Well, at least until the requirements change… So, if you are battling any of these monsters on a monthly basis, I highly encourage you to take a step forward and give BI Publisher a try. Once you are done with the development of your new report, you may find that you have hours per month back in your hands - over time, many more hours than you spent creating the report. Time worth spending.


Financial Reports – which tool to use? Part 1


One of the treats of working in the Business Intelligence world is that we are asked to analyze different aspects of a business. In fact, we are asked to analyze many different types of businesses, too. Most of us using BI tools have come from some previous background. Be it Marketing, Finance, Supply Chain or any other, we most likely had work experience before we got here. Maybe one of our jobs even led to Business Intelligence. The fact is, we are not experts in all areas. It would take several lives to make such a claim, because each area can be very complex and take years to master. The truth, for most of us, is that we have our favorite areas. They are often related to what we are most familiar with.


Over time, I came to really appreciate how simple numbers can be, and developed a hard-to-understand favoritism towards financial reports. While some business areas can be artistic and even vague, numbers are never vague. I have a great appreciation for that. Working with numbers is always precise. In the end, they have to match. No matter how great your report looks, if the numbers don't add up, the report is always wrong. Plus, financial layouts are generally very well defined going in, so there is little room for error.

Financials in OBIEE

So, the endeavor begins when you are a BI consultant and everything is supposed to add up properly and look very nice. OBIEE is an extremely powerful tool, and this gives users the impression that it can solve all problems. While it can solve most problems, it falls short on some key features needed for easy financial reporting. That is not to say that Financials can’t be handled in OBIEE - but it is definitely to say that it is not easy.

So, if financial reports are not easy to create in OBIEE, then we are left with two very simple options:

  1. Struggle through it and make it happen

  2. Choose another tool

I have made the mistake of choosing option 1 a few times, but quickly realized that option 2 couldn't be as bad. Countless times, I have been asked to create financial reports in OBIEE. Of course, they needed to tie out and match a specific format: they needed blank lines inserted between one section and another, and the alignment of the categories was very important. They often required very detailed variance calculations, so that a company could see where it stood in terms of change over time. Variance percentages are key on these types of reports, and if you have dealt with them in OBIEE, you know that different types of variances and their grand totals can often pose challenges for report writers.

So, in order to accomplish the formatting needed, you end up adding extra code here and there, in essence trying to make OBIEE do something that it's not supposed to do. Soon, you are experiencing performance issues and a new array of considerations is in place. You start removing your "special code", and then you lose your formatting. The numbers on your financial statement are still correct, but your report looks something like this:

[Image: unformatted financial report in OBIEE]

While, in reality, you were trying to get here:

[Image: properly formatted Balance Sheet]

** The Balance Sheet above was created using HFR for illustration of formatting only.

Looking at a different OBIEE financial report (below), you will see that a lot of formatting can be done in these reports, but they will always look like OBIEE reports, if you know what I mean.

[Image: an OBIEE Income Statement report]

In this example, the first column is out of order - as far as Income Statements go. This was left alone on purpose to display one of the issues with creating these statements in OBIEE. The tool does not easily allow you to choose which items will go in each row. So, in the criteria tab, in Answers, you choose the order of the columns, but if you need the rows in order, you will need to either:

  1. Use a hidden column created just for sorting purposes

  2. Leverage selection steps, or

  3. Create a measure column for each row that you will need, use a pivot table, and add the Measure Labels as rows on your pivot

I will illustrate the third option, as it is my preferred way of ordering rows. Suppose that you have a very simple criteria tab such as this:

[Image: criteria tab with measure columns]

Naturally, your results would default like this on a table:

[Image: default table view]

If you use a Pivot table instead, you can drag your Measure Labels onto the Rows:

[Image: pivot table layout with Measure Labels on Rows]

And now, you will be able to see your measures as rows. You can easily reorder them as needed by just moving the order of the columns in the Measures section of your Layout editor.

[Image: measures displayed as rows]

This seems like a simple solution if you know precisely what all your rows should be, and even better, if you don’t have a huge amount of measures on the report. In real life, this type of row ordering is high maintenance:

  1. You must label each measure to match the account category name for each row

  2. You must filter each measure by its account category (or account number)

  3. If the account category name changes in your DB, you must manually rename your columns to match the new naming convention

  4. If you add or delete account categories, you must manually add and delete columns from your report
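To make steps 1 and 2 above concrete, here is a hedged sketch of the kind of column formula each measure would carry, using OBIEE's FILTER syntax (the subject area and column names are invented for illustration):

    FILTER("Financials"."Amount" USING ("Financials"."Account Category" = 'Total Revenue'))

Every row of the statement needs its own copy of this formula with a different literal, which is exactly why this approach becomes high maintenance.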

OBIEE 12c offers a great improvement in this area: the ability to “save columns” is described very well by Jason Baer on this blog: https://www.rittmanmead.com/blog/2016/01/my-favorite-obiee-12c-feature-that-almost-no-one-is-talking-about/

With the new release of the product you can save as many financial columns as you would like in the web catalog, which allows you to reuse them. As a consequence, you will streamline report maintenance by updating the columns’ format and formula directly from the catalog (instead of inside every report). In fact, if you are spending too much time maintaining your existing reports out of OBIEE 11g, you will automatically benefit from an upgrade to 12c just based on this single feature. Check here for more info: https://www.rittmanmead.com/obiee-12c-upgrade/

This is a great improvement, but you will still need to deal with an overall lack of flexibility for dynamically adding and deleting columns, setting orders, adding blank space, indenting and calculating variances along with proper grand totals.

After spending more time than you should in order to create a simple report, you really start considering other tools. If you are already working in the Oracle stack, the obvious choices will be BI Publisher and Hyperion Financial Reporting (HFR).

Financials in Essbase/HFR

Hyperion Financial Reporting (HFR) brings a powerful solution to financial statements, because it allows you to create pixel-perfect reports sourced from pre-aggregated Essbase cubes. Just with that, two big problems are solved: formatting and performance.

In the example below, you see that HFR allows you to place metrics on both sides of the Account Category (butterfly layout - difficult to accomplish in OBIEE):

[Image: HFR butterfly layout]

In addition to formatting and performance, there are some definite pros to consider when choosing HFR:

  1. The calculations in HFR dynamically reference cells, as in Excel. So, if a cell changes, the cells that reference the original cell will automatically be updated

  2. HFR has the ability to create financial books and batches, and also has a powerful bursting feature

HFR is a great solution for Income Statements, Balance Sheets and other reports that come from Essbase cubes. In a simplistic way, an Essbase cube is a combination of tables that have been joined and pre-aggregated. Since most tables coming out of a system's financial module can often be joined, you should be able to create Essbase cubes to use as a source for your HFR reports. You will rarely have a requirement that cannot be handled by HFR and Essbase, but some situations may be problematic: for example, if your report requires a measure to be entered at run-time, if results from multiple cubes need to be added, or if your layout is very complex. Here is why:

In an HFR report, you start by inserting a grid onto your report and then you associate that grid with a specific Essbase cube. If you need data from two cubes on the report, you can insert another grid and associate that with the second cube. You can also create a report that leverages calculations between existing grids (for the purpose of doing math with two or more separate cubes):

[Image: calculations across multiple HFR grids]

Many thanks to my colleague, Mark Cann (https://www.rittmanmead.com/blog/author/mark-cann/), for working through this solution with me.

The challenge here is that you may end up with multiple layout grids on your HFR report, which will complicate the report creation and maintenance going forward. It is important to know that if your requirements call for strange off-setting of cells and multiple different looking blocks, then HFR may not be the best tool for the job. If you choose HFR for this purpose, you will spend too much time trying to make things right.

[Image]

*This is a simple Essbase implementation with 2 cubes (or databases): a Balance Sheet and an Income Statement cube.

The fact is, some financial reports are very tricky and do not come solely from a financial module. For example, if your company is evaluated monthly for a line of credit, your bank may need to look at several components of your business in order to determine the amount that you can borrow. They will base their decision not only on your monthly revenue, but also on your liabilities, such as accounts payable, and some of your assets, such as inventory. What they ask for really depends on their internal lending requirements, and also on the type of business that you have. These are, therefore, highly customized reports that never come out-of-the-box anywhere. For this reason, most companies spend many man-hours creating these reports as one huge Excel file, after employees have managed to pull together information from many different modules.

These Excel "monsters" do the job. They are accepted by the banks, and will get you that loan. On the downside, they need to be redone every month and will drag resource hours away from profitable projects. The flat Excel files are also prone to mistakes, as the values are manually keyed in each time. If you make a mistake favorable to the company, your bank will view it very negatively. If you make an unfavorable mistake, you will not be able to borrow as much as you qualify for. This is a no-win situation, so the reports must be accurate every time.

To check for accuracy, there is nothing like testing over time. But, since you must rework the report each month, you don't have that opportunity.

The solution is to create a template that pulls from all of these different modules, calculates the numbers, and adds the results automatically to a pixel-perfect formatted report. Over the development cycle, these mappings and calculations will be thoroughly tested, and from then on they are simply reused.

While you may spend some time pulling this logic together, you will only have to click a few buttons after you are done, for months or years to come. In fact, I have clients that have been running reports such as this one for years. They have been saving a couple of weeks in report creation every month.

Let’s look at an example of what I am talking about:

[Image: sample lending report]

On this report, each number (disguised as $1234) has been mapped to a calculation that will be pulled dynamically, according to the date entered on the prompt. The inventory amounts are adjusted according to banking requirements, and a rate is allocated depending on the row. This amount is later added/subtracted from receivables and existing contracts. Most of these numbers were created as separate OBIEE analyses. Some amounts could even be tied into web services to get the daily futures prices to estimate the value of contracts when the report runs. All lines are considered in the final equation before the total borrowing amount can be calculated. Per this bank’s requirement, this form needed to be printed and signed, then submitted monthly.

Lending/financing reports may be the trickiest, and the most time consuming, for companies to generate every month. The reports may be required by the bank, or by a company that is leasing or financing valuable equipment to yours. These reports need to show your prospective lender everything about your business, and they will often need to be done in a format specified by your lender. These formats are not negotiable; in fact, some lenders still use old forms that used to be read by a machine.

Here is another small snippet of a financing report that I had to create recently. Now, which tool would you use for 10 different pages of something like this, which required some of the amounts to be entered in the prompt? *Note: the report had to look “exactly” like this:

[Image: financing report snippet]

Well, as I mentioned at the beginning of this article, OBIEE would not be your partner in this type of endeavor. I can guarantee that this relationship would fail: strange formatting with black boxes, line numbers, the need for headers (and footers too, not shown here), indenting, etc.

You may consider Essbase/HFR combo, for formatting and performance, but you will soon realize that:

  1. Performance does not tend to be an issue with these reports, as they are generally submitted to lenders on a monthly basis, and therefore can be scheduled to run automatically in the middle of the night.

  2. As mentioned earlier, HFR requires a layout grid to be inserted before the report can be designed. Here, you would end up with multiple grids to handle the calculation of different cells from multiple cubes - which can be cumbersome to create and maintain.

  3. The measures in an HFR report should come from the pre-aggregated cube. In this example, some of the measures were entered as part of the prompt and are calculated at run time. At this point, you must scratch the Essbase/HFR option for this one!

So, now you are still stuck with your monster Excel spreadsheet, retyping the numbers onto the required form.

[Image]

Before you marry this solution, let me present you with the tool that can do everything: BI Publisher.

Stay tuned for the second part of this blog, when I will share why I believe that BIP can solve the most challenging reporting requirements out there!

ASO Slice Clears – How Many Members?

Essbase developers have had the ability to (comparatively) easily clear portions of our ASO cubes since version 11.1.1, getting away from fiddly methods involving manually contra-ing existing data via reports and rules files, making incremental loads substantially easier.

Along with the official documentation in the TechRef and DBAG, there are a number of excellent posts already out there that explain this process and how to effect "slice clears" in detail (here and here are just two I've come across that I think are clear and helpful). However, I had a requirement recently where the incremental load was a bit more complex than this. I am sure people must have fulfilled the same or a very similar requirement before, but I could not find any documentation or articles relating to it, so I thought it might be worth recording.

For the most part, the requirements I've had in this area have been relatively straightforward - (mostly) financial systems where the volatile/incremental slice is typically a month's worth (or quarter's worth) of data. The load script will follow this sort of sequence:

  • [prepare source data, if required]
  • Perform a logical clear
  • Load data to buffer(s)
  • Load buffer(s) to new database slice(s)
  • [Merge slices]

The last stage is run here if processing time allows (this operation precludes access to the cube), or in a separate "out of hours" routine if not.

The “logical clear” element of the script will comprise a line like (note: the lack of a “clear mode” argument means a logical clear; only a physical clear needs to be specified explicitly):

alter database 'Appname'.'DBName' clear data in region '{[Jan16]}';

or more probably

alter database 'Appname'.'DBName' clear data in region '{[&CurrMonth]}';

i.e., using a variable to get away from actually hard coding the member values to clear. For separate year/period dimensions, the slice would need to be referenced with a CrossJoin:

alter database 'Appname'.'DBName' clear data in region 'CrossJoin({[Jan]},{[FY16]})';
alter database '${Appname}'.'${DBName}' clear data in region 'CrossJoin({[&CurrMonth]},{[&CurrYear]})';

which would, of course, fully nullify all data in that slice prior to the load. Most load scripts will already be formatted so that variables would be used to represent the current period that will potentially be used to scope the source data (or in a BSO context, provide a FIX for post-load calculations), so using the same to control the clear is an easy addition.

Taking this forward a step, I’ve had other systems whereby the load could comprise any number of (monthly) periods from the current year. A little bit more fiddly, but achievable: as part of the prepare source data stage above, it is relatively straightforward to run a select distinct period query on the source data, spool the results to a file, and then use this file to construct that portion of the clear command (or, for a relatively small number, prepare a sequence of clear commands).
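A minimal SQL*Plus sketch of that approach (the staging table and column names are assumptions for illustration):

    set pagesize 0 heading off feedback off
    spool clear_periods.txt
    -- emit each distinct period as an Essbase member reference
    select distinct '[' || period_name || ']' from stg_cube_load;
    spool off

The spooled member references can then be concatenated into the set used in the region specification of the clear statement.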

The requirement I had recently falls into the latter category, in that the volatile dimension ("Period" being the volatile dimension in the examples above) was a "product" dimension of sorts, and contained a lot of changed values in each load. Several thousand, in fact. Far too many to loop around and build into a single command, and far too many to run as individual commands - whilst on test the individual "clears" themselves ran satisfyingly quickly, doing so obviously generated an undesirably large number of slices.

So the problem was this: how to identify and clear data associated with several thousand members of a volatile dimension, the values of which could change totally from load to load.

In short, the answer I arrived at is with a UDA.

The TechRef does not explicitly say so or give examples, but because the Uda function can be used within a CrossJoin reference, it can be used to effect a clear: assume the Product dimension had a UDA of CLEAR against certain members…

alter database 'Appname'.'DBName' clear data in region 'CrossJoin({Uda([Product], "CLEAR")})';

…would then clear all data for all of those members. If data for, say, just the ACTUAL scenario is to be cleared, this can be added to the CrossJoin:

alter database 'Appname'.'DBName' clear data in region 'CrossJoin({Uda([Product], "CLEAR")}, {[ACTUAL]})';

But we first need to set this UDA in order to take advantage of it. In the load script steps above, the first step is prepare source data, if required. At this point, a SQL*Plus call was inserted to run a new procedure that:

  1. examines the source load table for distinct occurrences of the “volatile” dimension
  2. populates a table (after initially truncating it) with a list of these members (and parents), and a third column containing the text “CLEAR”:

[Image: the populated UDA staging table]
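The SQL behind those two steps might look something like this (a sketch only; the staging and UDA table names are assumptions):

    TRUNCATE TABLE uda_clear_members;

    INSERT INTO uda_clear_members (parent, member, uda_set, uda_clear)
    SELECT DISTINCT product_parent, product, 'CLEAR', NULL
    FROM stg_cube_load;

    COMMIT;

The fourth, deliberately empty column comes into play after the load, as described below.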

A “rules” file then needs to be built to load the attribute. Because the outline has already been maintained, this is simply a case of loading the UDA itself:

[Image: rules file loading the UDA]

In the “Essbase Client” portion of the load script, prior to running the “clear” command, the temporary UDA table needs to be loaded using the rules file to populate the UDA for those members of the volatile dimension to be cleared:

import database 'AppName'.'DBName' dimensions connect as 'SQLUsername' identified by 'SQLPassword' using server rules_file 'PrSetUDA' on error write to 'LogPath/ASOCurrDataLoad_SetAttr.err';

[Image]

With the relevant slices cleared, the load can proceed as normal.

After the actual data load has run, the UDA settings need to be cleared. Note that the prepared table above also contains an empty column, UDACLEAR. A second rules file, PrClrUDA, was prepared that loads this (4th) column as the UDA value—loading a blank value to a UDA has the same effect as clearing it.

The broad steps of the load script therefore become these:

  • [prepare source data, if required]
  • ascertain members of volatile dimension to clear from load source
  • update table containing current load members / CLEAR attribute
  • Load CLEAR attribute table
  • Perform a logical clear
  • Load data to buffers
  • Load buffer(s) to new database slice(s)
  • [Merge slices]
  • Remove CLEAR attributes

So it is not without limitations - if the data was volatile over two dimensions (e.g., Product A for Period 1, Product B for Period 2, etc.) the approach would not work (at least, not exactly as described, although in this instance you could possibly iterate around the smaller Period dimension) - but overall, I think it's a reasonable and flexible solution.

Clear / Load Order

While not strictly part of this solution, another little wrinkle to bear in mind here is the resource taken up by the logical clear. When initializing the buffer prior to loading data into it, you have the ability to determine how much of the total available resource is used for that particular buffer—from a total of 1.0, you can allocate (e.g.) 0.25 to each of 4 buffers that can then be used for a parallel load operation, each loaded buffer subsequently writing to a new database slice. Importing a loaded buffer to the database then clears the “share” of the utilization afforded to that buffer.
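For reference, that allocation is made when each buffer is initialized; a hedged MaxL sketch (application and database names assumed) for four parallel buffers might look like this:

    alter database 'ASOApp'.'ASODb' initialize load_buffer with buffer_id 1 resource_usage 0.25;
    alter database 'ASOApp'.'ASODb' initialize load_buffer with buffer_id 2 resource_usage 0.25;
    alter database 'ASOApp'.'ASODb' initialize load_buffer with buffer_id 3 resource_usage 0.25;
    alter database 'ASOApp'.'ASODb' initialize load_buffer with buffer_id 4 resource_usage 0.25;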

Although not a “buffer initialization” activity per se, a (slice-generating) logical clear seems to occupy all of this resource—if you have any uncommitted buffers created, even with the lowest possible resource utilization of 0.01 assigned, the logical clear will fail:

[Image: logical clear failing while an uncommitted load buffer exists]

The Essbase Technical Reference states at "Loading Data Using Buffers":

While the data load buffer exists in memory, you cannot build aggregations or merge slices, as these operations are resource-intensive.

It could perhaps be argued that as we are creating a "clear slice", not merging slices (nor building an aggregation), the logical clear falls outside of this definition, but a similar restriction certainly appears to apply here too.

This is significant as, arguably, the ideal incremental load would be along the lines of:

  • Initialize buffer(s)
  • Load buffer(s) with data
  • Effect partial logical clear (to new database slice)
  • Load buffers to new database slices
  • Merge slices into database

This would both minimize the time that the cube is inaccessible (during the merge), and avoid presenting the cube with zeroes in the current load area. However, as noted above, this does not seem to be possible - there does not seem to be a way to change the resource usage (RNUM) of the "clear" - meaning that this sequence has to be followed:

  • Effect partial logical clear (to new database slice)
  • Initialize buffer(s)
  • Load buffer(s) with data
  • Load buffers to new database slices
  • Merge slices into database

I.e., the ‘clear’ has to be fully effected before the initialization of the buffers. This works as you would expect, but there is a brief period—after the completion of the “clear” but before the load buffer(s) have been committed to new slices—where the cube is accessible and the load slice will show as “0” in the cube.


Oracle OpenWorld 2015 Roundup Part 3 : Oracle 12cR2 Database Sharding, Analytic Views and Essbase 12c

With the UKOUG conference starting tomorrow I thought it about time I finished off my three-part post-OOW 2015 blog series, with a final post on some interesting announcements around Oracle Database and Essbase. As a reminder the other two posts were on OBIEE12c and the new Data Visualisation Cloud Service, and Data Integration and Big Data. For now though let’s look first at two very significant announcements about future 12cR2 functionality – database sharding and Analytic Views.

Anyone who's been involved in Oracle data warehousing over the years will probably be aware of the shared-everything vs. shared-nothing architecture debate. Databases like Oracle Database were originally designed for OLTP workloads, with the optimal way to increase capacity being to buy a bigger server. When RAC (Real Application Clusters) came along, the big selling point was a single shared database instance spread over multiple nodes, making application development easy (no real changes), but with practical limits as to how big that cluster can get - due to the need to synchronise shared memory across all nodes in the cluster, and the network bottleneck caused by compute and storage being spread across the whole cluster rather than co-located, as we get with Hadoop and HDFS, for example.


Shared-nothing databases such as Netezza, for example, take a different approach and "shard" the database instance over multiple nodes in the cluster, so that processing and storage are co-located on the same node for particular ranges of data. This gives the advantage of much greater scalability than a shared-everything approach (again, this is why Hadoop uses a similar approach for its massively-clustered distributed compute), but with the drawback of having to consider data locality when writing ETL and other code; at worst it means data loading and processing need to be rewritten when you add more nodes and re-shard the database, and it also generally precludes OLTP work, and consequently mixed workloads, on the same platform.


But if it's just data warehousing you want to do, you don't really care about mixed workloads, and it's generally considered that shared-nothing and sharding are what you need if you want to get to very-large-scale data warehousing - such that Oracle went partly down the shared-nothing route with Exadata and the push-down of filtering, projection and other operations to storage nodes, thereby adding an element of data locality and reducing the network throughput between storage and compute.


But both types of database are losing out to Hadoop for very, very large datasets, with Hadoop's distributed compute approach designed right from the start for large distributed workloads, at the expense of not supporting OLTP at all and, at least initially, all intermediate resultsets being written to disk. For those types of workloads and database sizes Oracle just wasn't an option, but a certain top tier of Oracle's data warehousing customers wanted to be able to scale to hundreds or thousands of nodes, and most of them have ULAs, so cost isn't really a limiting factor; for those customers, Oracle announced that the 12c Release 2 version of Oracle Database would support sharding … but with warnings that it's for sophisticated and experienced customers only.


Oracle are positioning what they're referring to as "Oracle Elastic Sharding" as being for both scaling and fault-tolerance, with up to 1,000 nodes supported and with data routed to particular shards through use of a "sharding key" passed to the connection pool.


Sharding in 12c Release 2 was described to me as a feature aimed at the "top 5%" of Oracle customers, where price isn't the issue but they want Oracle to scale to the size of cluster supported by Hadoop and NoSQL. Time will tell how well it'll work and what it'll cost, but it certainly completes Oracle's journey from strict shared-everything for data warehousing to more-or-less shared-nothing, if you want to go down that extreme-scalability route.

The other announcement from the Oracle Database side was the even-more-unexpected "Analytic Views". A clue came from who was running the session - Bud Endress, of Oracle Express / Oracle OLAP fame and, more recently, the Vector Group By feature in the In-Memory Option - but what we got was a lot more than Oracle OLAP re-imagined for in-memory; instead, what Oracle are looking to do is bring the business metadata and calculation layers that BI tools use right into the database, provide an MDX query interface over it, simplify SQL so that you just select measures, attributes and hierarchies - and then optimise the whole thing so it runs in-memory if you have that option licensed.


It's certainly an "interesting" goal with considerable overlap with OBIEE's BI Server and Essbase Server, but the aim of bringing all this functionality closer to the data and making it available to all tools is certainly ambitious; if it gets traction it should bring business metadata layers and simpler queries to a wider audience - but the fact that it seems to be being developed separately from Oracle's BI and Essbase teams means it probably won't be subsuming Essbase or the BI Server's functionality.

The last area I wanted to look at was Essbase. Essbase Cloud Service was launched at this event, positioned as a return to Essbase's roots as a tool you could use in the finance department without requiring IT's help - except this time it's because Essbase is running as a service in the cloud rather than on an old PC under your desk. What was particularly interesting, though, is that the version of Essbase being used in the cloud is the new 12c version, which replaces some of the server components (the Essbase Agent, but not the core Essbase Server part) with new Java components that presumably fit better with Oracle's cloud infrastructure and also support greater levels of concurrency.


Apart from the announcement of a future ability to link to R libraries, the other really interesting part of Essbase 12c is that, for now, the only on-premise version of it comes as part of OBIEE12c, and it'll have a very fixed role there as a pure query accelerator for OBIEE's BI Server - perhaps the answer to Qlikview's and Tableau's in-memory column-store caches. Essbase as part of an OBIEE12c install doesn't work with Essbase Studio or any of the other standard Essbase tools, but instead has a new Essbase Business Intelligence Acceleration Wizard that deploys hybrid ASO/BSO Essbase cubes directly from the OBIEE BI Server and RPD.


Coupled with the changes to Essbase announced a couple of years ago at OpenWorld 2013, designed to improve compatibility with OBIEE, this co-located version of Essbase seems to have completed its transformation into the BI Server mid-tier aggregate cache layer of choice that started back with the 11.1.1.6.2 BP1 version of OBIEE - but it does mean this version can't be used for anything else, even custom Essbase cubes you load and design yourself. Interesting developments across both database server products though, and that wraps up my overview of OOW2015 announcements. Next stop - UKOUG Tech'15 in Birmingham, where I've just arrived ready for my masterclass session in tomorrow's Super Sunday event, on data reservoirs and Customer 360 on Oracle Big Data Appliance.


Essbase Studio checkModelInSyncExternal Error

This week I am back at one of my favourite clients to assist with some issues since the latest upgrade to 11.1.1.9. These guys are seriously technical and know their stuff, so when issues pop up I’m always ready for an interesting and challenging day in the office.

As with all the recent 11g versions, they had to decide between in-place and out-of-place upgrades, and here they opted for an in-place upgrade because they have a fairly settled configuration using Answers, Publisher and Essbase. They were trying to avoid the need to reconfigure components like:

  • SQL Group Providers
  • Customisations
  • Application Roles used in OBIEE and Essbase Filters

Plus, when reconfiguring the above you also run the risk of missing something and it could take a long time to track down where it went wrong.

The Problem

In the end, this was not a very complicated problem or solution but, seeing as we couldn't find anything on the web or Oracle Support regarding this error, I thought it might be useful to share in case others run into this same issue.

After performing the upgrade from OBIEE 11.1.1.7.140114 to 11.1.1.9 the client was unable to view or edit the Essbase Model properties in Essbase Studio. In fact, they couldn’t even see their Essbase Model at all. Only the Cube Schema was visible.

[Image: only the Cube Schema visible in Essbase Studio]

When we tried to select or edit the Schema the following message appeared:

Error in the external procedure 'checkModelInSyncExternal'. Line = 136.

[Image: checkModelInSyncExternal error dialog]

Oracle Support Came To The Party

After trying several different options to fix it, none of which made any difference, the client raised a P1 query with Oracle Support. After going through the initial standard questions and a few messages between the client and Oracle Support, they came back with a solution. All of this within 24 hours…

The Reason

After applying the patch, the catalog version is no longer synchronised with the new version of Essbase Studio Server.

The Solution

Even though we couldn't find any reference to this in the post-patching section of the documentation, the Oracle Essbase Studio 11.1.2.4.000 Readme does contain a section describing the problem and solution.

To fix the problem we followed these simple steps:

  1. Navigate to
    <ORACLE_BI_HOME>\products\Essbase\EssbaseStudio\Server\scripts.template
  2. Copy startCommandLineClient.bat.template to
    <ORACLE_BI_HOME>\products\Essbase\EssbaseStudio\Server
  3. Rename the new startCommandLineClient.bat.template to startCommandLineClient.bat
  4. Edit startCommandLineClient.bat so it looks like this
    NOTE – Update the paths according to your environment AND use full paths
    @echo off

    REM Launch the Essbase Studio command-line client
    set JAVA_HOME=C:\oracle\Oracle_BI1\jdk\jre
    "%JAVA_HOME%\bin\java" -Xmx128m %JAVA_OPTIONS% -jar "C:\oracle\Oracle_BI1\products\Essbase\EssbaseStudio\Server\client.jar"

    if %errorlevel% == 0 goto finish

    echo .
    echo Press any key to finish the client session
    pause > nul

    :finish
  5. Open a CMD window and start startCommandLineClient.bat
  6. Enter the requested information (Host, Username and Password)
  7. Enter the following command and wait for it to complete
    reinit
  8. Type Exit to leave the script and close the CMD window

You should now be able to open the Model Properties in Essbase Studio without any issues.

[Image: Cube Schema and Essbase model properties visible again]