The BI Survey 14 – Have Your Voice Heard!

Long-term readers of this blog will know that we’ve supported the BI Survey for many years; it’s an independent survey of BI tool customers and implementers. Rittman Mead have no interest (financial or otherwise) in the BI Survey or its organisers, but we like the way it gathers detailed data on which tools work best and when, and it’s been a useful dataset for companies such as Oracle when they prioritise their investment in tools such as OBIEE, Essbase and the BI Applications.

Here’s the invite text and link to the survey:

“We would like to invite you to participate in The BI Survey 14, the world’s largest annual survey of business intelligence (BI) users.

To take part in this year’s survey, visit: https://digiumenterprise.com/answer/?link=1906-PHB5RT7V

As a participant, you will:

  • Receive a summary of the results from the full survey
  • Be entered into a draw to win one of ten $50 Amazon vouchers
  • Ensure that your experiences are included in the final analyses

BARC’s annual survey gathers input from thousands of organizations to analyze their buying decisions, implementation cycles and the benefits they achieve from using BI software.

The BI Survey 14 is strictly vendor-independent: It is not sponsored by any vendor and the results are analyzed and published independently.

You will be asked to answer questions on your usage of a BI product from any vendor. Your answers will be used anonymously and your personal details will not be passed on to software vendors or other third parties.

Business and technical users, as well as vendors and consultants, are all encouraged to participate.

The BI Survey 14 should take about 20 minutes to complete. For further information, please contact Jevgeni Vitsenko at BARC (jvitsenko@barc.de). 

Click below to take part in The BI Survey 14: https://digiumenterprise.com/answer/?link=1906-PHB5RT7V

How We Deliver Agile OBIEE Projects – Introducing ExtremeBI

Most OBIEE projects that we see are delivered through some sort of “waterfall” method: requirements are defined up-front, there are several stages of development, one or more major releases at the end, and any revision to requirements takes the form of a change request. This works well where requirements can be defined upfront, and can be reassuring to customers who want to agree a fixed price up-front, with every subsequent change clearly costed. But, as in the development world in general, some customers are starting to look at “agile” methods for delivering BI projects, where requirements emerge over the course of the project and there isn’t a fixed design or specification at the start; instead, the project adds features or capabilities in response to “user stories”, making it more likely that what ends up getting delivered is in line with what users want, and changes and additions to requirements are welcomed rather than treated as extra-cost change requests.

OBIEE naturally lends itself to working in an agile manner, through the three-layer nature of the repository (RPD); by separating the physical representation of the source data from how it is then presented to end-users, you can start from the off with the dimensional model that’s your end goal, and then over time evolve the back-end physical layer from pointing directly at the source system to pointing instead at a data warehouse or OLAP cube. In fact, I covered this approach back in 2008 in a blog post called “A Future Oracle OBIEE Architecture”, where I positioned OBIEE’s BI Server as a “business logic layer” and speculated that at some point in the future, OBIEE might be able to turn the logical-to-physical mappings in the RPD into actual ODI mappings and transformations.

In the end, although OBIEE’s aggregate persistence feature gave us the ability to spin-off aggregate tables and cubes from the RPD logical model, full ETL “push-down” never came, although you can see traces of it if you have a good poke around the DLLs and directories under the BI Server component. What did happen, though, was Exadata; features such as Smart Scan, and Exadata’s ability to do joins between normalised tables much faster than regular databases, meant that it became possible to report directly against an OLTP schema, or an ODS-like foundation layer, only adding ETL to build a performance star-schema layer if it was absolutely necessary. We covered this in a series of posts on Agile Data Warehousing with Exadata, and the focus of this method was performance: by combining Exadata with the metadata flexibility of OBIEE’s RPD, we could deliver agile projects where Exadata gave us the performance even when we reported directly against a third-normal-form data source.

And this approach worked well for our customers; if they’d invested in Exadata, and were open to the idea of agile, iterative development, we could typically deliver a working system in just a few months, and at all times what the users got was what they’d requested in their user story backlog. But there were still ways in which we could improve this method; not everyone has access to Exadata, for example, and reporting directly against a source system makes it tricky to add DW features like history and surrogate keys. So recently we introduced the successor to this approach, in the form of an OBIEE development method we call “ExtremeBI”. Building on our previous agile work, ExtremeBI introduced an integration element, using GoldenGate and ODI to replicate in real time any source systems we were interested in to the DW foundation layer, add the table metadata that DW systems expect, and then provide a means to transform the logical-to-physical RPD mappings into ODI ETL specifications.

But in a way, all the technical stuff is by the by; what this means in practice for customers is that we deliver working systems from the first iteration: initially by reporting directly against a replicated copy of their source system (with replication and metadata enhancement by GoldenGate, and optionally ODI), and then over subsequent iterations adding more end-user functionality, or hardened ODI ETL code, all the while driven by end-user stories and not some technical design signed-off months ago which no longer reflects what users actually want.

What we’ve found from several ExtremeBI customer engagements, though, is that success isn’t just down to the technology and how well ODI, OBIEE and GoldenGate work. The first major factor in successful projects is having the project properly pre-qualified at the start; not every project, and not every client, suits agile working, and agile works best if you’re “all in”, as opposed to just agreeing to work in sprints but still having a set-in-stone set of requirements which have to be met at a certain time. The second important success factor is proper project organisation; we’ve grown from just a couple of guys with laptops back in 2007 to a fully-fledged, end-to-end development organisation, with full-time delivery managers, a managed services desk and tools such as JIRA, and you need to have this sort of thing in place, particularly a project management structure that’s agile-friendly and a good relationship with a customer who is fully signed-up to the agile approach. As such, we’ve found the most success where we’ve used ExtremeBI with fairly technically-savvy customers, for example an MIS department, who’ve been tasked with delivering something for a reasonable price over a short number of months, who understand that not all requirements can be delivered, but really want their system to get adopted, delight their customers and focus its features on what’s important to end-users.

As well as processes and a method, we’ve also developed utilities and accelerators to help speed-up the initial setup, and ensure the initial foundation and staging layers are built consistently, with GoldenGate mappings already put in place, and ready for our developers to start delivering reports against the foundation layer, or use these foundation-layer tables as the basis of a data mart or warehouse build-out. The screenshot below shows this particular tool, built using Groovy and run from within the ODI Studio user interface, where the developer selects a set of source tables from an ODI model, and then the utility builds out the staging and foundation layers automatically, typically saving days over the manual method.

We’ve also built custom KMs for ExtremeBI, including one that uses Oracle Database’s flashback query feature to pull historical transactions from the UNDO log, as an alternative to Oracle Streams or Oracle GoldenGate when these aren’t available on the project.

All together, using Rittman Mead’s ExtremeBI method along with OBIEE, ODI and optionally GoldenGate has meant we’ve been able to deliver working OBIEE systems for customers in just a few months, typically for a budget of less than £50k. Coupled with cloud hosting, where we can get the customer up-and-running immediately rather than having to wait for their IT department to provision servers, we think this is the best way for most OBIEE 11g projects to be delivered in the future. If you’re interested, we’ve got more details on our “ExtremeBI in the Cloud” web page, or you can contact me via email – mark.rittman@rittmanmead.com – if you’d like to discuss it further.

Visual Regression Testing of OBIEE with PhantomCSS

Earlier this year I wrote a couple of blog posts (here and here) discussing the topic of automated Regression Testing and OBIEE. One of the points that I was keen to make was that OBIEE is a stack of elements and, depending on the change being tested, it may be sensible to focus on certain elements in the stack instead of all of it. For example, if you are changing the RPD, there is little value in doing a web-based test when you can actually test for the vast majority of regressions using the nqcmd tool alone.

I also argued that testing the front end of OBIEE using tools such as Selenium is difficult to do comprehensively; it can be inflexible, time-consuming and in some cases just not a sensible use of effort. These tools work around the idea of parsing the web page that is served up and checking for the presence (or absence) of a particular piece of text or an element on the page. So, for example, you could run a test and tell it to fail if it finds the text “Error” on the page, or you could say only pass the test if some known content is present, such as a report title or data figure. This type of testing is prone to a great deal of false negatives, because to efficiently build any kind of test case you must focus on something specific to check for in the page, but you cannot code for every possible error or failure. It is also usually based heavily on the internal IDs of elements on the page in locating the ‘something’ to check for. As the OBIEE Document Object Model (DOM) is undocumented, Oracle are presumably at liberty to change it whenever they feel like it, and thus any tests written based on it may fail. Finally, OBIEE 11g still defaults to serving up graphs as Flash objects, which Selenium et al just cannot handle, and so these cannot be tested.

So, what do we do about regression testing the OBIEE front end?

What do we need to test in the front end?

There is still a strong case for regression testing the OBIEE front end. Analyses get changed, Dashboards break, permissions are updated – all these things can cause errors or problems for the end user, but testing further down the OBIEE stack (using something like nqcmd) will not catch them.

Consider a simple dashboard:

If one of the dashboard pages that are linked to in the central section get moved in the Presentation Catalog, then this happens:

OK, so Invalid Link Path: is pretty easy to code in as an error check into Selenium. But, what about if the permissions on an analysis used in the dashboard get changed and the user can no longer access it when running the dashboard?

This is a different problem altogether. We need to check for the absence of something. There’s no error; there just isn’t the analysis that ought to be present. One way around this would be to code for the presence of the analysis title text or content – but that is not going to scale, nor be maintainable, across every dashboard being tested.

Another thing that is important to check in the front end is that authorisations are enforced as they should be. That is, a user can see the dashboards that they should be able to, and that they cannot see the ones they’re not. Changes made in the LDAP directory holding users and their groups, or a configuration change in the Application Roles, could easily mean that a user can no longer see the dashboards they should be able to. We could code for this specific issue using something like Web Services to programmatically check each and every actual permission – but that could well be overkill.

What I would like to introduce here is the idea of testing OBIEE for regressions visually – but automated, of course.

Visual Regression Testing

Driven by the huge number of applications that are accessed solely on the web (sorry, “Cloud”), a new set of tools has been developed to support the idea of testing web pages for regressions visually. Instead of ‘explaining’ to the computer specifically what to look for in a page (no error text, etc.), visual regression testing compares images of a web page, checking a baseline against a sample taken afterwards. This means that the number of false negatives (missing genuine errors because the test didn’t detect them) drops drastically, because instead of relying on a test program that parses the Document Object Model (DOM) of an OBIEE web page (which is extremely complex), it simply considers whether two snapshots of the resulting rendered page look the same.
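
To make the idea concrete, here’s a toy sketch of what an image comparison boils down to – plain JavaScript, and emphatically not PhantomCSS’s actual algorithm: treat each snapshot as a flat array of pixel values, count how many differ beyond a small tolerance, and fail the test if the mismatch ratio is too high.

```javascript
// Toy illustration of image-based comparison -- NOT PhantomCSS's real
// algorithm. Each "image" is a flat array of greyscale values (0-255).
function diffRatio(baseline, sample) {
  if (baseline.length !== sample.length) return 1; // different sizes: total mismatch
  var mismatched = 0;
  for (var i = 0; i < baseline.length; i++) {
    // small tolerance so anti-aliasing noise doesn't trigger failures
    if (Math.abs(baseline[i] - sample[i]) > 16) mismatched++;
  }
  return mismatched / baseline.length;
}

// Pass the test if fewer than 5% of pixels differ
function imagesMatch(baseline, sample) {
  return diffRatio(baseline, sample) < 0.05;
}
```

Real tools operate on PNG data and generate highlighted diff images, but the pass/fail decision is essentially this kind of ratio check, with a configurable mismatch tolerance.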

The second real advantage of this method is that typically the tools (including the one I have been working with and will demonstrate below, PhantomCSS) are based on the actual engine that drives the web browsers in use by real end-users. So it’s not a case of parsing the HTML and CSS that the web server sends us and trying to determine if there’s a problem or not – it is actually rendering it the same as Chrome etc and taking a snapshot of it. PhantomCSS uses PhantomJS, which uses the engine that Safari is built on, WebKit.

Let’s Pretend…

So, we’ve got a tool – that I’ll demonstrate shortly – that can programmatically fetch and snapshot OBIEE pages, and compare the snapshots to check for any changes. But what about graphs rendered in Flash? These are usually a blind spot. Well, here we can be a bit cheeky. If you pretend (in the User-Agent HTTP request header) to be an iPhone or iPad (devices that don’t support Flash) then OBIEE obligingly serves up PNG graphs, plus some JavaScript to do the hover tooltips. Because it’s a PNG image, it will be rendered correctly in our “browser”, and so included in the snapshot for comparison.
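
As a sketch of how this can be configured, PhantomJS’s pageSettings let you set the spoofed header when creating the Casper instance – the User-Agent value below is an illustrative iOS-style string, not an exact device string:

```javascript
// Pretend to be an iPad so that OBIEE serves up PNG charts instead of
// Flash. The User-Agent value is an illustrative iOS-style string; any
// string that OBIEE's device detection treats as iOS should do.
var casper = require('casper').create({
    pageSettings: {
        userAgent: 'Mozilla/5.0 (iPad; CPU OS 7_0 like Mac OS X) ' +
                   'AppleWebKit/537.51.1 (KHTML, like Gecko) ' +
                   'Version/7.0 Mobile/11A465 Safari/9537.53'
    }
});
```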

CasperJS

Let’s see this scripting in action. Some clarification of the programs we’re going to use first:

  • PhantomJS is the core functionality we’re using, a headless browser sporting Javascript (JS) APIs
  • CasperJS provides a set of APIs on top of PhantomJS that make working with web page forms, navigation etc much easier
  • PhantomCSS provides the regression testing bit, taking snapshots and running code to compare them and report differences.

We’ll consider a simple CasperJS example first, and come on to PhantomCSS after. Because PhantomCSS uses CasperJS for its core interactions, it makes sense to start with the basics.

Here is a bare-bones script. It loads the login page for OBIEE, echoes the page title to the console, takes a snapshot, and exits:

var casper = require('casper').create();

casper.start('http://rnm-ol6-2:9704/analytics', function() {
  this.echo(this.getTitle());
  this.capture('casper_screenshots/login.png');
});

casper.run();

I run it from the command line:

$ casperjs casper_example_01.js
Oracle Business Intelligence Sign In
$

As you can see, it outputs the title of the page, and then in the screenshots folder I have this:

I want to emphasise again why this is so useful: I ran this from the command line only. I didn’t open a web browser, I didn’t take any snapshots by hand – it was all automatic.

Now, let’s build a bit of a bigger example, where we login to OBIEE and see what dashboards are available to us:

// Set the size of the browser window as part of the 
// Casper instantiation
var casper = require('casper').create({viewportSize: {
        width: 800,
        height: 600
    }});

// Load the login page
casper.start('http://rnm-ol6-2:9704/analytics', function() {
  this.echo(this.getTitle());
  this.capture('casper_screenshots/login.png');
});

// Do login
casper.then(function(){
  this.fill('form#logonForm', { NQUser: 'weblogic' ,
                                NQPassword: 'Password01'
                              }, true);
}).
waitForUrl('http://rnm-ol6-2:9704/analytics/saw.dll?bieehome',function(){
  this.echo('Logged into OBIEE','INFO')
  this.capture('casper_screenshots/afterlogin.png');
  });

// Now "click" the Dashboards menu
casper.then(function() {
  this.echo('Clicking Dashboard menu','INFO')
  casper.click('#dashboard');
  this.waitUntilVisible('div.HeaderPopupWindow', function() {
    this.capture('casper_screenshots/dashboards.png');
  });
});

casper.run();

So I now get a screenshot of after logging in:

and after “clicking” the Dashboard menu:

The only bit of the script above that isn’t self-explanatory is where I reference page elements. The references are CSS3 selectors, easily found using something like Chrome Developer Tools. Where the click on Dashboards is simulated, there is a waitUntilVisible function, which is crucial for making sure that the page has rendered fully before the snapshot is taken. A user clicking the menu would naturally wait until it appears, but computers work much faster, so functions like this are important for reining them back.

To round off the CasperJS script, let’s add to the above navigating to a Dashboard, snapshotting it (with graphs!), and then logging out.

[...]
casper.then(function(){
  this.echo('Navigating to GCBC Dashboard','INFO')
  casper.clickLabel('GCBC Dashboard');
})

casper.waitForUrl('http://rnm-ol6-2:9704/analytics/saw.dll?dashboard', function() {
  casper.waitWhileVisible('div.AjaxLoadingOpacity', function() {
    casper.waitWhileVisible('div.ProgressIndicatorDiv', function() {
      this.capture('casper_screenshots/dashboard.png');
    })
  })
});

casper.then(function() {
  this.echo('Signing out','INFO')
  casper.clickLabel('Sign Out');
});

Again, there are a couple of waitWhileVisible functions in there, necessary to get CasperJS to wait until the dashboard has rendered properly. The dashboard rendered is captured thus:

PhantomCSS

So now let’s see how we can use the above CasperJS code in conjunction with PhantomCSS to generate a viable regression test scenario for OBIEE.

The script remains pretty much the same, except CasperJS’s capture gets replaced with a phantomcss.screenshot based on an element (html for the whole page), and there’s some extra code “footer” to include that executes the actual test.
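
As a sketch, the extra footer looks something like this – the module path and directory names are assumptions based on a typical PhantomCSS checkout, so adjust to suit your install:

```javascript
// Assumed location of the PhantomCSS module relative to the test script
var phantomcss = require('./phantomcss.js');

phantomcss.init({
    screenshotRoot: './screenshots',       // baseline images live here
    failedComparisonsRoot: './failures'    // composite diff images go here
});

// ... CasperJS navigation as before, with captures replaced by e.g.:
//     phantomcss.screenshot('html', 'whole page');

// Once all the snapshots have been queued, run the comparisons
casper.then(function() {
    phantomcss.compareAll();
});

// Exit non-zero if any comparison failed, so the script can be wired
// straight into a CI server
casper.run(function() {
    phantom.exit(phantomcss.getExitStatus());
});
```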

So let’s see how the proposed test method holds up to the examples above – broken links and disappearing reports.

First, we run the baseline capture, the “known good”. The console output shows that this is the first time it’s been run, because there are no existing images against which to compare:

In the screenshots folder is the ‘baseline’ image for each of the defined snapshots:

Now let’s break something! First off I’ll rename the target page for one of the links in the central pane of the dashboard, which will cause the ‘Invalid Link Path’ message to display.

Now I run the same PhantomCSS test again, and this time it tells me there’s a problem:

When an image is found to differ, a composite of the two highlighting the differences is created:

OK, so first test passed (or rather, failed), but arguably this could have been picked up simply by parsing the page returned from the OBIEE server for known error strings. But what about a disappearing analysis – that’s more difficult to ascertain from the page source alone.

Again, PhantomCSS picks up the difference, and highlights it nice and clearly in the generated image:

The baseline image you capture should be against a “gold” version of a dashboard – there’s no point including ad-hoc reports or dashboards under development. You’d also want to work with data that is unchanging, so where available use a time filter fixed at a point in the past, rather than ‘current day’, which will change frequently.

Belts and Braces?

So visual regression testing is a great thing, but I think a hybrid approach, of parsing the page contents for text too, is worthwhile. CasperJS provides its own test APIs (which PhantomCSS uses), and we can write simple tests such as the following:

this.test.assertTextDoesntExist('Invalid Link Path', 'Check for error text on page');
this.test.assertTextDoesntExist('View Display Error', 'Check for error text on page');
phantomcss.screenshot('div.DashboardPageContentDiv','GCBC Dashboard page 1');

So check for a couple of well-known errors, and then snapshot the page too for subsequent automatic comparison. If an assertion is failed, it shows in the console:

This means that what is already being done in Selenium (or for which Selenium is an assumed default tool) could even be brought into the same single test rig based around CasperJS/PhantomCSS.

Frame of Reference

The eagle-eyed among you will have noticed that the snapshots generated by PhantomCSS above are not the entire OBIEE web page, whereas the ones from CasperJS earlier in this article are. That is because PhantomCSS deliberately focuses on an area of the page to test, identified using a CSS3 selector. So if you are testing a dashboard, the toolbar is irrelevant and considering it can only lead to false positives.

phantomcss.screenshot('div.DashboardPageContentDiv','GCBC Dashboard page 1');

Similarly, considering the available dashboard list (to validate enforced authorisations) just needs to look at the list itself, not the rest of the page. (And yes, that does say “Protals” – even developers have fat fingers sometimes ;-) )

phantomcss.screenshot('div.HeaderSharedProtals','Dashboard list');

Using this functionality means that the snapshots used for comparison can exclude things like the alerts bar (which may appear or disappear between tests).

The Devil’s in the Detail

I am in no doubt that the method described above has got its place in the regression testing arsenal for OBIEE. What I am yet to be fully convinced of is quite to what extent. My beef with Selenium et al is the level of detail one has to get into when writing tests – identifying strings to test for, their location in the DOM, and so on. Yet in my CasperJS/PhantomCSS examples above I have DOM selectors too, so is this just the same problem? At the moment, I don’t think so. With Selenium, to build a comprehensive test, you have to dissect the DOM for every single test you want to build. Whereas with CasperJS/PhantomCSS you only need to write a basic framework for OBIEE once (the basics of which are provided in this post; you’re welcome), which can then be parameterised based on dashboard name and page alone. Sure, additional types of tests may need new code, but it would be far more reusable.
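
To illustrate what that parameterisation might look like, here’s a hypothetical helper – the function and property names are mine, though PortalPath and Page are the standard OBIEE “Go URL” parameters – that derives everything a generic dashboard test needs from just the catalog path and page name:

```javascript
// Hypothetical sketch of a parameterised OBIEE dashboard test helper.
// PortalPath/Page are standard OBIEE "Go URL" parameters; the function
// and property names are illustrative.
function dashboardTest(server, portalPath, page) {
  return {
    // URL that opens the dashboard page directly
    url: server + '/analytics/saw.dll?Dashboard&PortalPath=' +
         encodeURIComponent(portalPath) + '&Page=' + encodeURIComponent(page),
    // Filesystem-safe name for the baseline/comparison images
    snapshotName: (portalPath + '_' + page).replace(/[^A-Za-z0-9]/g, '_'),
    // Dashboard content area only, excluding the toolbar and alerts bar
    contentSelector: 'div.DashboardPageContentDiv'
  };
}
```

A generic test script could then loop over a list of dashboard paths and pages, opening each URL and calling phantomcss.screenshot with the returned selector and snapshot name.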

Given that OBIEE doesn’t come with an out-of-the-box test rig, whatever we build to test it is going to be bespoke, whether it’s based on nqcmd, Selenium, JMeter, LoadRunner, OATS, QTP, or anything else – so the smart money is on picking the option that will be the most flexible, most scalable, easiest to maintain, and take the least effort to develop. There is no one “program to rule them all” – an accurate, comprehensive, and flexible test suite is invariably going to utilise multiple components focussing on different areas.

In the case of regression testing – what is the aim of the testing? What are you looking to validate hasn’t broken after what kind of change?  If all that’s changed in the system is the DBAs adding some indexes or partitioning to the data, I really would not be going anywhere near the front end of OBIEE. However, more complex changes affecting the Presentation Catalog and the RPD can be well covered by this technique in conjunction with nqcmd. Visual regression testing will give you a pass/fail, but then it’s up to you to decipher the images, whereas nqcmd will give you a pass/fail but also an actual set of data to show what has changed.

Don’t forget that other great tool – you! Or rather, you and your minions, who can sit at OBIEE for five minutes and spot certain regressions that would take orders of magnitude longer to build a test to locate. Things like testing for UI/UX changes between OBIEE versions are realistically handled manually. Some checks can be done by a person faster than I can even type the requirement, let alone build a test to validate it – does clicking on the save icon bring up the save box? Well, go and click for yourself – done? Next test.

Summary

I have just scratched the surface of what is possible with headless browser scripting for testing OBIEE. Being able to automate and capture the results of browser interactions as we’ve seen above is hugely powerful. You can find the CasperJS API reference here if you want to find out more about how it is possible to interact with the web page as a “user”.

I’ve put the complete PhantomCSS script online here. Let me know in the comments section or via Twitter if you try it out!

Thanks to Christian Berg and Gianni Ceresa for reading drafts of this article and providing valuable feedback. 

Mobile App Designer mis-configuration error

I’ve been doing some work recently with OBIEE’s new Mobile App Designer (MAD). It’s a great bit of software that I’m genuinely impressed with, but it’s got its little v1 quirks, and helpful error messages are not its forte. I hit a MADdening (sorry) problem with it that Google and My Oracle Support both drew blanks on, so I’m posting it here in case it helps out others with the same problem.

Setting up MAD is a bit of a fiddly process, involving patching OBIEE (regardless of the base version you’re on – hopefully in the future it will get rolled into the patchsets) and performing other bits of setup detailed in the documentation. The problem that I hit manifested itself in two ways:

  1. Publishing an App to the Apps Library worked fine, but updating an existing App threw an error in the browser:
    Failed to publish /~weblogic/GCBC Mobile - Phone.xma:oracle.xdo.webservice.exception.AccessDeniedException: PublicReportService::executeUpdateTemplateForReport Failure: user has no access to report[/Apps Library//GCBC Mobile - Phone.xma] due to [Ljava.lang.StackTraceElement;@4e6106df
  2. Trying to subscribe to any App threw a generic error in the browser: “Error occurred while accessing server. Please contect administrator.” with the corresponding bipublisher.log showing: 
    [2014-05-13T16:49:53.449+01:00] [bi_server1] [WARNING] [] [oracle.xdo] [tid: 24] [userId: <anonymous>] [ecid: 3f3d2d8955322f32:2f756afc:145f4d10b2f:-8000-0000000000003eea,0] [APP: bimad#11.1.1] User (weblogic) with session id: q2fq8fkh66f85ghamsq164u9qs98itvnk0c826i is looking for object in biee path: /shared/Apps Library//GCBC.xma/_mreport.xma[[
    Object Error [Context: 0, code: QM3V3HLV, message: Invalid path (/shared/Apps Library//GCBC.xma/_mreport.xma) -- ]
    ]]
    [2014-05-13T16:49:53.450+01:00] [bi_server1] [WARNING] [] [oracle.xdo] [tid: 24] [userId: <anonymous>] [ecid: 3f3d2d8955322f32:2f756afc:145f4d10b2f:-8000-0000000000003eea,0] [APP: bimad#11.1.1] oracle.xdo.XDOException: Target app not found in the repository :/Apps Library//GCBC.xma[[
        at oracle.xdo.online.AppStoreIO.doPost_subscribeApp(AppStoreIO.java:311)
        at oracle.xdo.online.AppStoreIO.doPost(AppStoreIO.java:120)
    [...]

One of my esteemed Rittman Mead colleagues, Francesco Tisiot, pointed out that the path referenced in the errors has a double slash in it. On checking my configuration, I had indeed fat-fingered one of the settings. APPS_LIBRARY_FOLDER_LOCAL is defined in the <DOMAIN_HOME>/config/bipublisher/repository/Admin/Configuration/xmlp-server-config.xml file, and mine looked like this:

<property name="APPS_LIBRARY_FOLDER_LOCAL" value="/Apps Library/"/>

All I needed to do was to remove the trailing slash after Library:

<property name="APPS_LIBRARY_FOLDER_LOCAL" value="/Apps Library"/>

After restarting the bimad application deployment all was good again with the MAD world and I could republish and subscribe to Apps happily.

 

The State of the OBIEE11g World as of May 2014

I’m conscious I’ve posted a lot on this blog over the past few months about hot new topics like big data, Hadoop and Oracle Advanced Analytics, and not so much about OBIEE, which traditionally has been the core of Rittman Mead’s business and what we’ve written about most historically. Part of this is because there’s a lot of innovative stuff coming out of the big data world, but part of it is because there’s not been a big new OBIEE 11g release this year, as we had last year with 11.1.1.7 and before that with 11.1.1.6. But there’s actually a lot of interesting work going on in the OBIEE 11g world at the moment even without a big headline release, and with the Brighton RM BI Forum 2014 taking place last week and the product keynotes it gave us, I thought it’d be worth taking a look back at where we are in May 2014: where the innovation is happening, and what’s coming up in the next few months for OBIEE.

Product Versions and Capabilities

As of the time of writing (May 11th 2014) we’re currently on the 11.1.1.7.x version of OBIEE, updated with a few patch sets since the original April 2013 release to include features such as Mobile App Designer. OBIEE 11.1.1.7.x saw a UI update to the new FusionFX theme, replacing the theme used from the 11.1.1.3 release, and brought in new capabilities such as Hadoop/Hive integration as well as a bunch of “fit-and-finish” improvements, such that at the time I referred to it as “almost like 11g Release 2”, in terms of usability, features and general “ready-for-deployment” quality.

The other major new capability OBIEE 11.1.1.7 brought in was better integration with Essbase and the Hyperion-derived products that are now included in the wider Oracle BI Foundation 11g package. Earlier versions of OBIEE gave you the ability to install Essbase alongside OBIEE11g for the purposes of aggregate persistence into Essbase cubes, but the 11.1.1.7 release brought in a single combined security model for both Essbase and OBIEE, integration of EPM Workspace into the OBIEE environment and the re-introduction of Smartview as OBIEE (and Essbase’s) MS Office integration platform.

Outside of core OBIEE 11g but complementing it – and the primary use-case for a lot of OBIEE customers – are the Oracle BI Applications. 2013 saw the release of Oracle BI Applications 11.1.1.7.1, followed just a few days ago by the latest update, OBIA 11.1.1.8.1. What these new releases brought in was the replacement of Informatica PowerCenter by Oracle Data Integrator, and a whole new platform for configuring and running BI Apps ETL jobs, based around JEE applications running in WebLogic Server. Whilst at the time of OBIA 11.1.1.7.1’s release most people (including myself) advised caution in using it, saying most new customers should still use the old 7.9.x release stream – because OBIA 11g skills would be scarce and, relatively speaking, it’d have a lot of bugs compared to the more mature 7.9.x stream – in fact I’ve only heard about 11g implementations since then, and they mostly seem to have gone well. OBIA 11.1.1.8.1 came out in early May 2014 and seems to be mostly additional app content, bug fixes and Endeca integration, and there’s still no upgrade path or 11g release for Informatica users, but the 11g release of BI Apps seems to be a known quantity now, and Rittman Mead are getting a few implementations under our belt, too.

Oracle BI Cloud Service (BICS)

So that’s where we are now … but what about the future? As I said earlier, there hasn’t been a major release of OBIEE 11g this year; to my mind, Oracle’s energy has instead gone into the “cloud” release of OBIEE 11g, previewed back at Oracle OpenWorld 2013 and due for release in the next few months. You can almost think of this as this year’s “11.1.1.8 release”, with the twist that it’s cloud-only. What will be interesting about this version of OBIEE 11g is that it’ll probably be updated with new functionality on a much more regular basis than on-premise OBIEE, as Oracle will own the platform and be in a much better position to push through upgrades and control the environment than they are for on-premise installs.


Headline new capabilities in this cloud release will include:

  • Rapid provisioning, with environments available “at the swipe of a credit card” and with no need to download and install the software yourself
  • Built-in storage, with Oracle’s schema-as-a-service/ApEx database environment backing the product and giving you a place to store data for reporting
  • A consumer-style experience, with wizards and other helper features aimed at getting users familiar with on-premise OBIEE 11g up and running quickly on the new cloud version
  • Access to core OBIEE11g features such as Answers, dashboards, mobile and a web-based repository builder

It’s safe to say that “cloud” is a big deal for Oracle at the moment, and it has probably got as much focus within the OBIEE development team as Fusion Middleware / Fusion Apps integration had back at the start of OBIEE 11g. Part of this is down to technology trends going on outside of BI and OBIEE – customers are moving their IT platforms into the cloud anyway, so it makes sense for your BI to be there too, rather than being the only thing left back on-premise. But a big part of it is the benefits it gives Oracle and the OBIEE product team: they can own and control much more of the end-to-end experience, giving them control over quality and putting many more customers on the latest version, and of course the recurring revenues Oracle gets from selling software-as-a-service in the cloud are valued much more highly by the market than the one-off license sales they’ve relied on in the past.

But for customers, too, running BI and OBIEE in the cloud brings quite a few potential benefits – both in terms of Oracle’s official BI Cloud Service, and the wider set of options when you consider running full OBIEE in a public cloud such as Amazon AWS (see my Collaborate’14 presentation on the topic on Slideshare). There’s none of the hassle and cost of setting up the software on your own premises, and then doing upgrades and applying patches over time – “empty calories” that have to be spent but don’t bring any direct benefit to the business. OBIEE in the cloud also promises to give the business a degree of independence from IT, as they’ll be able to spin up cloud BI instances without going through the usual procurement/provisioning cycle, and it’ll be much easier to create temporary or personal-use OBIEE environments for tactical or short-lived work, particularly as you’ll only have to license OBIEE for the users and months you actually need it for, rather than buying perpetual licenses which might then sit on the shelf after the immediate need has gone.

Data Visualization, and the Competition from Tableau

It’s probably safe to say that when OBIEE 11.1.1.3 came out back in 2010, its main competitors were other full-platform, big-vendor BI products such as SAP BusinessObjects and IBM Cognos. Now, in 2014, judging by what we’re hearing anecdotally and from our own sales activity around the product, the main competitor OBIEE 11g comes up against is Tableau. Tableau is quite a different beast to OBIEE – like QlikTech’s QlikView, it’s primarily a desktop BI tool that has gained some server-based capabilities over the years – but what it does well is get users started fast and give them the ability to create compelling and beautiful data visualisations, without spending days and weeks building an enterprise metadata layer and battling with their IT department.

Of course we all know that as soon as any BI tool gets successful, it’s inevitable that IT will have to get involved at some point, and you’re going to have to think about enterprise definitions of metrics, common dimensions and so forth. It’s this area that OBIEE does so well, primarily (to my mind) selling well to the IT department, with Oracle focusing most of their recent attention on the integration side of the BI world: making it easy to link your ERP and CRM applications to your BI stack, with the whole lot playing well with your corporate security and overall middleware stack. But none of that is important to end users, who want a degree of autonomy from the IT department and something they can use to quickly and painlessly knock together data visualisations in order to understand the data they’re working with.

So to my mind there are two aspects to what Tableau does well that OBIEE needs an answer for: ease of setting up and getting started, and the ability to create data visualisations beyond the standard bar and line charts people most associate with OBIEE. There are a couple of initiatives, already in place and coming down the line from Oracle, that aim to address the first point; BI Publisher, for example, now gives users the option to create a report directly off data in the RPD, without the intermediate requirement to create a separate data model, and presents a list of commonly-used report formats at report creation to make the process a bit more “one-stop”.


Another initiative, which will probably arrive first as part of the BI Cloud Service, is personal data mashups: a way for users to upload data from spreadsheets or CSV files and add it to their standard corporate metrics, allowing them to produce reports that aren’t currently possible with the standard curated RPD from corporate IT. Metrics users add in this way will (probably) have their data stored in the catalog, but marked in a way that makes clear they’re not “gold-standard” ones; the aim of the feature is to avoid the situation where users export their base data from OBIEE into Excel and then bring in the additional data there. It does raise a few questions around where the data goes, how it all gets stored and how well it would work on an on-premise install, but if you work on the basis that users are going to do this sort of thing anyway, it’s better they do it within the overall OBIEE environment than dump it all to Excel and do their worst there (so to speak).

Another, even more intriguing, new product capability that’s coming along – and is technically possible with the current 11.1.1.7 release – is the concept of “mini-apps”. Mini-apps are something Philippe Lion’s “SampleApp” team have been working on for a while now: extensions to core OBIEE, enabled via JavaScript, that allow developers to create self-contained applications, including table creation scripts, to solve a particular user problem or requirement. This YouTube video from one of Philippe’s team goes through the basic concept, with custom JavaScript used to unpack a mini-app setup archive and then create tables, and set up the analysis views, to support requirements such as linear regression and trend analysis.
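To make the trend-analysis idea concrete, here’s a minimal sketch (my own illustration, not the actual SampleApp mini-app code) of the kind of ordinary least-squares fit such a mini-app might bundle, written in plain JavaScript of the sort that could sit behind an OBIEE custom view:

```javascript
// Illustrative only: an ordinary least-squares fit over (x, y) pairs,
// e.g. period number vs. revenue, returning the trend line's slope
// and intercept – the sort of logic a "trend analysis" mini-app wraps.
function linearRegression(points) {
  const n = points.length;
  const sumX  = points.reduce((s, p) => s + p.x, 0);
  const sumY  = points.reduce((s, p) => s + p.y, 0);
  const sumXY = points.reduce((s, p) => s + p.x * p.y, 0);
  const sumXX = points.reduce((s, p) => s + p.x * p.x, 0);
  const slope = (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX);
  const intercept = (sumY - slope * sumX) / n;
  return { slope, intercept };
}

// Example: a perfectly linear series y = 2x + 1
const fit = linearRegression([
  { x: 1, y: 3 }, { x: 2, y: 5 }, { x: 3, y: 7 }
]);
// fit.slope === 2, fit.intercept === 1
```

In the mini-app scenario the points would come from an analysis’s result set rather than being hard-coded, and the fitted line would be drawn over the base data in the view.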


It’s likely the BI Cloud Service will take this concept further, introducing a more formalised way of packaging up BI mini-applications and deploying them quickly to the cloud, and maybe also the concept of a BI App Store or Marketplace where pre-built analytic solutions can be selected and deployed faster than if users tried to build the same themselves using Excel (or even Tableau).

Of course the other aspect to Tableau is its data visualisation capabilities, and while OBIEE 11.1.1.7 improved a little in this area – with trellis charts introduced and a new visualisation suggestion engine – it’s probably fair to say that OBIEE 11g has dropped behind the industry state-of-the-art here. What’s been interesting to see over the past twelve months, though, is the widespread adoption of D3 and other third-party visualisation libraries as additional ways to add graphs and other visuals to OBIEE, with Accenture’s Kevin McGinley showcasing the art of the possible on his blog recently (parts 1, 2 and 3) and presenting on the topic at the Atlanta Rittman Mead BI Forum later this week. Techniques such as those Kevin describes involve deploying third-party visualisation libraries such as D3 and Flot to the WebLogic server running OBIEE, and then calling those libraries using custom code contained within narrative views; while these aren’t as developer-friendly as built-in visualisation features, they do give you the ability to go beyond the standard graphs and tables provided by OBIEE 11g, as Tom Underhill from our team explained in a blog post on OBIEE 11g and D3 back in 2013.
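A hedged sketch of that narrative-view pattern, with illustrative data and a made-up chart target rather than real SampleApp code: the view’s per-row template emits each result row as a JavaScript object, and custom code in the postfix section reshapes the rowset and hands it to a library such as D3.

```javascript
// Illustrative sketch of the narrative-view + D3 pattern.
// In OBIEE, the per-row "narrative" section would emit one push() per
// result row (using @1, @2-style column references); hard-coded here.
const rows = [];
rows.push({ region: "EMEA", revenue: 120 });
rows.push({ region: "EMEA", revenue: 80 });
rows.push({ region: "APAC", revenue: 150 });

// Reshape the flat rowset into the {key, value} series a chart expects
function toSeries(data) {
  const totals = {};
  data.forEach(r => { totals[r.region] = (totals[r.region] || 0) + r.revenue; });
  return Object.keys(totals).map(k => ({ key: k, value: totals[k] }));
}
const series = toSeries(rows);
// series → [{key: "EMEA", value: 200}, {key: "APAC", value: 150}]

// In the browser, the postfix section would then render with D3
// ("#chart" is a placeholder element id in the narrative's prefix HTML):
if (typeof d3 !== "undefined") {
  d3.select("#chart").selectAll("div").data(series)
    .enter().append("div")
    .style("width", d => d.value + "px")
    .text(d => d.key);
}
```

The D3 library itself would be the copy deployed to the WebLogic server, referenced via a script tag in the narrative view’s prefix section.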


The upcoming 2014 OBIEE 11g SampleApp will most probably feature more third-party and externally-produced visualisations along these lines, including new HTML5 and JavaScript integration capabilities for 11.1.1.7’s mapping feature, and an example of integrating ADF charts – which have far more options and capabilities than the subset used in OBIEE 11g – into the OBIEE dashboard. All of this is possible with OBIEE 11.1.1.7 and standard JDeveloper/ADF, with the video previewing the SampleApp PoC demo going through the integration process at the end.


Community Development of OBIEE Best Practices, Techniques, Product Add-Ons

One of the advantages of OBIEE now being a mature and well-known product is that best practices are starting to emerge around deployment, development, performance optimisation and so on. For example, our own Stewart Bryson has been putting a lot of thought into agile development with OBIEE, and into topics such as automated deployment of OBIEE RPDs using Git and scripting, giving us a more industry-standard way of building and deploying RPDs now that we’ve got the ability to work with repository metadata in a more atomic format. Robin Moffatt, also from Rittman Mead, has published many articles over the past few years on monitoring, measuring and testing OBIEE performance, again giving us a more industry-standard way of regression-testing OBIEE reports and monitoring the overall OBIEE experience using open-source tools.

There’s even a third-party add-on industry for OBIEE, with Christian Screen’s / Art of BI’s “BI Teamwork” being the showcase example. OBIEE still doesn’t have any collaboration or social features in the base product – unless you count wider integration with WebCenter as the answer – and Christian’s BI Teamwork product fills this gap by adding collaboration, social and SaaS integration features to the core product, including localisation into key overseas OBIEE markets.


Hadoop and Big Data Integration

You’ll probably have guessed from the amount of coverage we’ve given the topic on the blog over the past few months, but we think Hadoop and big data – and particularly the technologies that will spin off from this movement – are quite a big deal, and will revolutionise what we think of as analytics and BI over the next few years. Most of this activity has taken place outside the core world of OBIEE, using tools such as Cloudera Impala and R, with Tableau as the default visualisation tool, but OBIEE will play a role too, primarily through its ability to incorporate big data insights and visualisations into the core enterprise semantic model and corporate dashboards.

What this means in practice is that OBIEE needs to be able to connect to Hadoop data sources such as Hive and Impala, and also provide a means to incorporate, visualise and explore data from non-traditional sources such as NoSQL and document databases. OBIEE 11.1.1.7 made a first step in this direction with its ability to use Apache Hive as a data source, but this is really a minimal first step in big data support, as Hive is generally considered too slow for ad-hoc query use, and the HiveServer1 ODBC driver that OBIEE 11.1.1.7 ships with is no longer compatible with recent Cloudera Hadoop (CDH 4.5+) releases. What’s really needed is support for Impala – a much faster MPP query engine that works off the same Hive metadata – as a data source; we’ve hacked this together as a workaround, and it will most probably arrive as a supported data source in a future version of OBIEE. What would be very interesting, though, is support for document-style databases such as MongoDB, giving OBIEE (or, more probably, Endeca) the capability to create 360-degree views of customer activity, including unstructured data held in these NoSQL-style databases.
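For illustration only – and hedged, since the exact driver, paths and settings depend on your Cloudera release and platform – the workaround amounts to registering an ODBC DSN that points at an Impala daemon, then using that DSN in the RPD as if it were a Hive source. A sketch of the odbc.ini entry, with every value a placeholder:

```ini
[ODBC Data Sources]
impala_dsn = Cloudera ODBC Driver for Impala

[impala_dsn]
# Placeholder values - adjust for your cluster and driver install location
Driver = /opt/cloudera/impalaodbc/lib/64/libclouderaimpalaodbc64.so
# An Impala daemon host, and Impala's HiveServer2-compatible port
Host = hadoop-node1.example.com
Port = 21050
```

The DSN name is then referenced in the OBIEE connection pool in the usual way; being unsupported, this setup can break between driver and OBIEE versions.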

Exalytics and Engineered Systems

I’d almost forgotten Exalytics from this round-up, which is ironic given its prominence in Oracle BI product marketing over the past couple of years, but not all that surprising given the lack of real innovation around the product recently. There have certainly been a number of Exalytics updates in terms of product certification – the graphic below shows the software evolution of Exalytics since launch, going up to autumn last year when we presented on it at Enkitec E4:

[Graphic: Exalytics software releases since launch, up to autumn 2013]

whilst the Exalytics hardware over the same period has seen RAM double, and SSD storage added to improve TimesTen and Essbase start-up times.


What Exalytics has lacked, though, is something game-changing that’s only available as part of this platform. There’s a central dilemma for Oracle over Exalytics: do they develop something substantial for OBIEE that only works on Exalytics, and hold it back from the on-premise version; or do they release largely the same software for both Exalytics and non-Exalytics OBIEE, and rely on performance tweaks which are hard to quantify for customers and hard for Oracle salespeople to use as differentiation for the product? So far they’ve gone for the latter option, making Exalytics – if we’re honest – a bit underwhelming for the moment. What would be really interesting is some capability that clearly can only be supported on Exalytics: some form of in-memory analysis or processing that needs 1TB+ of RAM for enterprise datasets, possibly based on an as-yet-unreleased new analytic engine, maybe based on Essbase or Oracle R technology, maybe even incorporating something from Endeca (or even – left-field – something based on Apache Spark?).

My money, however, is on this differentiation growing over time, and on Exalytics being used extensively by Oracle to power the BI Cloud Service, with less emphasis over time on on-premise sales of the product and more on “powered by Exalytics” cloud services. All that said, my line with customers when talking about Exalytics has always been: you’re spending X million $/£ on OBIEE and the BI Apps, so you might as well run it on the hardware it’s designed for, which in the scheme of things is only a small proportion of the overall cost. The performance difference might not be noticeable now, but over time OBIEE will be more and more optimised for this platform, so you might as well be on it now and take advantage of the manageability / TCO benefits too.

So anyway, that’s my “state-of-the-nation” for OBIEE as I see it today – and if you’re coming along to the Atlanta RM BI Forum event later this week, there’ll be futures stuff from Oracle that we can’t really discuss on here, beyond the 3-6 month timeline, that’ll give you a greater insight into what’s coming in late 2014 and beyond.