Tag Archives: Oracle BI Apps

Openworld is Over – Now We’re Coming to India..!

Oracle Openworld 2013 is now over, but no sooner have we unpacked from that trip than we’re packing again for our next one – our BI Masterclass Tour of India, starting in a few weeks’ time in Bangalore.

Running in partnership with ODTUG and with myself, Venkat Janakiraman and Stewart Bryson leading the sessions, we’re looking forward to sharing the news from Openworld, talking about the latest in Oracle BI and EPM development, and meeting Oracle BI enthusiasts at each event.

The event is taking place across three cities – Bangalore, Hyderabad and Mumbai – with each masterclass running for a full day. We’ll be in Bangalore on Tuesday 15th October, Hyderabad on Thursday 17th October, and then fly up to Mumbai for Saturday 19th October 2013. Full details are on the event home page, including how to register, with each masterclass’s agenda looking like this:

  • 9.30am – 10.00am: Registration and Welcome
  • 10.00am – 10.30am: Oracle BI, Analytics and EPM Product Update – Mark Rittman
  • 10.30am – 11.30am: Extreme BI: Agile BI Development using OBIEE, ODI and GoldenGate – Stewart Bryson
  • 11.30am – 12.30pm: OBIEE 11g Integration with the Oracle EPM Stack – Venkatakrishnan J
  • 12.30pm – 1.30pm: Lunch & Networking
  • 1.30pm – 2.30pm: OBIEE and Essbase on Exalytics Development & Deployment Best Practices – Mark Rittman
  • 2.30pm – 3.30pm: Oracle BI Multi-user Development: MDS XML versus MUDE – Stewart Bryson
  • 3.30pm – 4.00pm: Coffee Break & Networking
  • 4.00pm – 5.00pm: Intro and tech deep dive into BI Apps 11g + ODI
  • 5.00pm – 6.00pm: Metadata & Data loads to EPM using Oracle Data Integrator - Venkatakrishnan J

The dates, locations and registration links for the three events are as follows:

We’re also investigating the idea of bringing our Rittman Mead BI Forum to India in 2014, so this would be a good opportunity to introduce yourself to us and the other attendees if you’d like to present at that event, and generally let us know what you’re doing with Oracle’s BI, EPM, analytics and data warehousing tools. There’ll also be lots of ODTUG goodies and giveaways, and a social event in the evening after the main masterclass finishes.

Numbers are limited though, and places are going fast – check out the event page for full details, and hopefully we’ll see some of you in Bangalore, Hyderabad or Mumbai!

Using DAC11g to Replicate BI Apps DW Tables into TimesTen

A couple of months ago Peter Scott and I wrote a series of posts on using Exalytics with the Oracle BI Apps, with the idea being to use Exalytics’ in-memory database cache to speed up BI Apps dashboards. We looked at a number of approaches, including using OBIEE 11g’s Summary Advisor to replicate commonly-used BI Apps aggregates into TimesTen, whilst Pete looked at the more manual route of using TimesTen’s ttImportFromOracle utility, which I expanded on a couple of weeks later. But there’s another way to replicate data into TimesTen that doesn’t involve the Summary Advisor, scripts or indeed anything outside of a regular BI Apps installation – the DAC, which in its 11g release has a feature for replicating tables into the TimesTen in-memory database. So how does this work, and is it a viable solution for moving your BI Apps data warehouse in-memory?

As a bit of background for anyone who’s new to Exalytics and TimesTen but knows the BI Apps, there are two typical use-cases for TimesTen in the context of OBIEE:

  • As an in-memory aggregate cache, automatically populated and managed by OBIEE’s Summary Advisor, as part of the Exalytics hardware and software package
  • As a regular database but stored in-memory, used to hold detail-level data warehouse data plus any aggregate tables you choose to add in

Strictly speaking, the second use-case can be done outside of Exalytics, but you’ll most probably need the TimesTen table compression feature, only available within Exalytics, if you want to store any significant volume of data in TimesTen (see the DDL sketch after this list for roughly what that feature looks like). In fact, given the typical size of data warehouses, including the BI Apps data warehouse, it’s unlikely that you’ll ever get your whole dataset into TimesTen as it has to store everything in-memory, so you’ve then got to choose either to:

  • Just store aggregates in TimesTen, which is the route Oracle went down with Exalytics
  • Only try this with data marts or other small data warehouses
  • Put just the “hot” data from your data warehouse into TimesTen – for example, the last couple of years data across the data warehouse, or a particular fact table and its dimensions
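
For anyone who hasn’t seen that compression feature, the sketch below shows roughly what it looks like at the DDL level – dictionary-based compression declared per column. The table and columns here are purely illustrative rather than actual BI Apps DDL; check the TimesTen documentation for the exact syntax in your release:

    -- Sketch: an Exalytics-only TimesTen table using dictionary-based compression.
    -- Low-cardinality columns are the best candidates for compression.
    CREATE TABLE w_ar_balance_f_tt (
      row_wid            NUMBER         NOT NULL,
      profit_center_wid  NUMBER,
      balance_amt        NUMBER(28,10),
      currency_code      VARCHAR2(30)
    )
    COMPRESS (currency_code BY DICTIONARY,      -- one dictionary per compressed column
              profit_center_wid BY DICTIONARY)
    OPTIMIZED FOR READ;                         -- compressed tables are read-optimized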

It’s this third approach that the new DAC 11g is most suited to, with functionality to mark one or more BI Apps DW tables as “in-memory” and then replicate them into TimesTen using an ETL engine actually built into the DAC, so there’s no need to add new Informatica routines or otherwise customise your main load routine. It does, however, come with some significant limitations, some of which you can work around but others you’d just need to live with. Let’s go through what’s involved though, using BI Apps 7.9.6.4 and DAC 11g as currently downloadable from OTN.

DAC 11g has a number of new concepts within this release, and the one we’re interested in for this example is the “external executor”. This allows the DAC to use ETL engines other than Informatica to do work for it, so you could use ODI, for example, to do some ELT-centric work in support of a wider Informatica load if that made more sense. Another type of external executor that you can register, though, is called “Data Copy”, and this is a special additional data movement feature that’s actually built into DAC 11g and uses parallel Java processes to copy data from one table to another. The TimesTen replication feature in DAC 11g uses this to move data from the BI Apps data warehouse into TimesTen tables, so let’s register this Data Copy external executor before we do anything else, by switching to the Setup view within the DAC console and creating a new entry within the External Executors section.

Notice the ODI 11g Embedded Agent option that’s also there, but for now I give the external executor a name, press the Generate button to create the parameters for this particular external executor type, then review the default settings for the parallel data load engine. DAC 11g executes the data load using multiple Java JDBC calls that each take a subset of the source table’s data and copy it into the target table, and you’ll most probably need to fiddle around with these settings for your own environment, to get the best mix of concurrent data load vs. overall capacity vs. avoiding TimesTen table locks.
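
To picture what this parallel copy engine is doing, here’s the rough shape of what each JDBC worker thread executes, expressed as plain SQL – the slicing column, range values and batched insert are my illustration of the approach, not what the DAC literally generates:

    -- Each worker reads one slice of the source table...
    SELECT row_wid, profit_center_wid, balance_amt
    FROM   w_ar_balance_f
    WHERE  row_wid BETWEEN 1 AND 20000;   -- thread 1's slice; thread 2 takes the next range, etc.

    -- ...and writes the fetched rows to the TimesTen copy of the table,
    -- typically as batched parameterised inserts rather than one big statement:
    INSERT INTO w_ar_balance_f (row_wid, profit_center_wid, balance_amt)
    VALUES (?, ?, ?);                     -- executed in batches over the fetched slice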

Now we can go and mark some tables as being “in-memory”, ready for replicating them into TimesTen using this new feature.

Space is of course at a premium with TimesTen, so it makes sense to start off with a small source BI Apps DW table and try to replicate that, to get a sense of the space it takes up and how that might extrapolate to the wider source dataset. W_AR_BALANCE_F in my sample BI Apps data warehouse has just 62 rows in it, and it joins to a number of dimension tables, one of which (W_PROFIT_CENTER_D) has just 162 rows of data in it. So let’s start with these two, and see where things go.
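
If you want a rough starting point for that extrapolation, the Oracle data dictionary gives you the on-disk footprint of candidate tables. It’s only indicative – TimesTen’s in-memory representation differs, which is one reason ttImportFromOracle’s sizing estimates are useful – but it’s a quick first check:

    -- Indicative on-disk size of the candidate tables in the Oracle source DW
    -- (assumes an 8KB block size - adjust for your database).
    SELECT table_name,
           num_rows,
           ROUND(blocks * 8192 / 1024 / 1024, 1) AS approx_mb
    FROM   user_tables
    WHERE  table_name IN ('W_AR_BALANCE_F', 'W_PROFIT_CENTER_D');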

To select tables for replicating into TimesTen, switch to the Design view, then use the Tables tab to locate the tables, and check the In Memory checkbox, like this:

I then do the same for the W_PROFIT_CENTER_D table, with the DAC Console creating copies of the table metadata in the new source system container I’d set up prior to this exercise.

For each of these tables, I now also need to define the datatype lengths and precisions of each column. To do this, with each of the two tables selected, click on the Columns tab and then navigate to the In Memory Length and In-Memory Precision columns, and either copy across the default values for each column, or use a utility such as ttImportFromOracle (if your data warehouse source is the Oracle Database) to first generate optimal datatype lengths, then copy those settings into the in-memory column definitions.
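
If you’re copying the values across by hand, the source column definitions are easy to pull out of the Oracle catalog – a query like this against the standard ALL_TAB_COLUMNS view gives you the lengths, precisions and scales to key into the DAC:

    -- Source column datatypes, to seed the DAC's In Memory Length / Precision values;
    -- the 'DW' owner is illustrative - use your own BI Apps DW schema name.
    SELECT column_name,
           data_type,
           data_length,     -- bytes, for character columns
           data_precision,  -- digits, for NUMBER columns
           data_scale
    FROM   all_tab_columns
    WHERE  owner = 'DW'
    AND    table_name IN ('W_AR_BALANCE_F', 'W_PROFIT_CENTER_D')
    ORDER  BY table_name, column_id;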

Now that the TimesTen tables are defined in the DAC repository, I need to set up a connection from the DAC to the TimesTen server. To do so, I perform the following steps:

1. Ensure TimesTen 11.2.2.4+ (ideally 11.2.2.5+) is installed, either on the server running OBIA, or elsewhere with the TimesTen client installed on the OBIA server.

2. Copy across the TimesTen library files from $TIMESTEN_HOME/lib to $DAC_HOME/dac/lib – I just copied and pasted all of the .jar and .lib files from the TimesTen /lib directory into the DAC one.

3. If you’re running the DAC server on a 64-bit machine, and you’ve got 64-bit TimesTen installed, alter the DAC config file so that it points to a 64-bit JVM rather than the default 32-bit one that gets installed with the DAC client – I pointed it to the one that ships with 64-bit OBIEE 11.1.1.7, as also installed on this server.

4. Create the required TimesTen client ODBC DSN on your OBIA server, and the TimesTen Data Manager server DSN if you’ve got both the TimesTen server and client on your OBIA machine (see the example DSN definitions after this list).

5. Go into the DAC client, select Setup, then the Physical Data Sources tab, then create a new data source that points through to the TimesTen Client ODBC DSN.
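
For reference, on Linux the two DSNs from step 4 would typically be defined with sys.odbc.ini entries along these lines – the paths, sizes and names here are illustrative, so adjust them for your own install:

    # Server (Data Manager) DSN, defined where the TimesTen server runs
    [tt_dac_server]
    Driver=/u01/TimesTen/tt1122/lib/libtten.so
    DataStore=/u01/ttdata/tt_dac
    DatabaseCharacterSet=AL32UTF8
    PermSize=4096        # MB of permanent (table) memory
    TempSize=1024        # MB of temporary memory

    # Client DSN on the OBIA server, pointing at the server DSN above
    [tt_dac]
    TTC_SERVER=timesten-host.example.com
    TTC_SERVER_DSN=tt_dac_server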

At the end of these steps you should have an additional physical data source connection registered in the DAC console that successfully connects through to your TimesTen database, like this:

Now you need to create a logical data source that we’ll use later to point to this physical connection. To do so, from the DAC Client select Tools > Seed Data > Logical Data Sources, then create a new logical data source called DBConnection_TimesTen.

To create the data copy tasks, I do the following:

1. Switch to the DAC Client Design view, select the Tables tab, press Query, check the In Memory checkbox, then press Go. The list of tables marked as in-memory, to be replicated into TimesTen, is then displayed. In this case, it’s the W_AR_BALANCE_F table and the W_PROFIT_CENTER_D one.

2. Right-click anywhere in this list of tables, and select Generate Data Copy Tasks. When prompted, select the All Records in the List radio button.

3. When the Generate Data Copy Tasks dialog is shown, select the following values:

Primary Logical Source: DBConnection_OLAP
Primary Logical Target: DBConnection_TimesTen
Task Phase: Post Load

Note that the Task Phase can be changed to suit where you want the data copying to happen in a real-life project, and that you can check the Enable Incremental Data Copy checkbox if you want to update just the new and changed data in your TimesTen tables – see the online docs on this feature for more details. Once done, press OK to have the DAC Client create the data copy tasks for you.

You’ll also need to create the corresponding TimesTen tables, a task you can also do from the DAC Client. With the same set of In Memory-flagged tables displayed, this time select the Generate DW Table Scripts option, like this:

When the Generate DW Table Scripts dialog is shown, select TimesTen as the Physical Data Source Type, check the Execute checkbox, and then select the tt_dac option for the Physical Data Source, like this:

The DAC Client will now generate the TimesTen table DDL scripts and execute them, to create the tables that will hold your replicated DW data. Assuming all is well, the Generate DW Table Scripts dialog should show a success message like this:

You should then repeat this to create the DW index scripts, which the DAC Client will base on the indexes on the original BI Apps DW tables. As I talked about in my blog post on indexing TimesTen tables a few weeks ago, you should follow up this replication process with the TimesTen Index Advisor, which generates further index recommendations based on the actual query workload on your system, and can dramatically improve the performance of queries against TimesTen tables compared to what you’d get with the default DAC-generated indexes.
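
As a reminder of how that works, the Index Advisor is driven by built-in procedures you call from a TimesTen session while a representative workload runs; the calls below are a minimal sketch – check the TimesTen documentation for the exact capture-mode parameters in your release:

    -- Start capturing index advice at connection level (first argument 0);
    -- the second argument is the capture mode described in the TimesTen docs.
    CALL ttIndexAdviceCaptureStart(0, 31);

    -- ...run a representative set of dashboard queries against TimesTen...

    -- Stop the capture and read back the CREATE INDEX recommendations.
    CALL ttIndexAdviceCaptureEnd(0);
    CALL ttIndexAdviceCaptureOutput(0);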

Now back to the data copy tasks. These new tasks now need to be added to an existing, or a new, subject area so that you can then include them in an execution plan. In this instance, I’m going to create a new subject area for the tasks, then add the new tasks to this subject area – in reality, you might choose to include these tasks in your existing DAC subject areas, so that the TimesTen replication happens as the rest of the load takes place, but in my example I’m going to keep this replication separate from my main data load to keep things simple. To create this subject area and add the data copy tasks to it, I do the following:

1. From the DAC Client, select the Design view, then the Subject Areas tab. Press New to create a new subject area, call it Data Copy Tasks, then press Save.

2. With the subject area still open for editing, select the Tasks tab, press the Add / Remove button, then type “DataCopy*” into the Name field and press Go. The two data copy tasks should then be displayed. Press the Add All button to add these tasks to the right-hand list of selected tasks, then press OK.

Now it’s a case of either adding this subject area to an existing DAC execution plan or creating a new execution plan just for this data copy subject area. I’m going to do the latter.

1. Switch to the Execute view, select the Execution Plans tab, press New, call the new execution plan DataCopy Tasks, and press Save.

2. Now with that execution plan still open for editing, switch to the Subject Areas tab and use the Add / Remove button to add the new subject area you just created to this execution plan.

3. Now switch to the Connectivity Parameters tab and press the Generate button. Once the parameter generation process completes, select the following values for the two parameters:

DBConnection_OLAP: DataWarehouse
DBConnection_TimesTen: tt_dac

This connects the tasks’ logical data source connections to the physical ones, including the physical TimesTen database connection I created earlier in the process.

4. Finally, for this stage, with this execution plan still selected, press the Build button above the list of execution plans to create the ordered list of tasks for the new execution plan.

Now we’re ready to run the data copy process and replicate these two DW tables into the TimesTen database. To do so, again within the Execute view and with the new execution plan selected, press Run. Once the execution plan completes, you should see it listed under the Run History tab, hopefully with a success message next to it.

Finally, if you move over to SQL*Developer and create a connection through to the TimesTen database, you should be able to see the replicated data in these two tables.
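
A quick sanity check at this point is to compare row counts between source and target – something like this against the TimesTen connection, with the equivalent run against the Oracle DW schema, confirms the copy landed intact:

    -- Row counts of the replicated tables in TimesTen; these should match the
    -- Oracle source (62 and 162 rows respectively in this example).
    SELECT 'W_AR_BALANCE_F' AS tbl, COUNT(*) AS row_count FROM w_ar_balance_f
    UNION ALL
    SELECT 'W_PROFIT_CENTER_D', COUNT(*) FROM w_profit_center_d;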

So there you have it – replication of BI Apps DW data into equivalent TimesTen tables, all handled by the DAC and its internal JDBC-based ETL engine. But … what happens next, and how do you get these tables into the BI Apps repository and make them available to the BI Apps dashboards, so that they can be used in place of the regular Oracle tables to speed up queries? Well – you’ll have to wait for the follow-up…. ;-)

Diving Deeper into BI Apps 11.1.1.7.1 use of ODI11g for Data Loading

A few months ago I posted a series of articles on the new BI Apps 11.1.1.7.1 release, which uses ODI11g in the background to perform data loads, rather than Informatica PowerCenter. Since then, I’ve collaborated with Accenture’s Kevin McGinley on a “getting started” BI Apps 11.1.1.7.1 installation cookbook, where I went through the install and Kevin did the data load, with the aim being to provide readers with a “minimum viable install” OBIA 11.1.1.7.1 system. For me though, the most interesting part of this new release is how it embeds ODI11g into the platform infrastructure, and how elements of ODI are used (and extended) to provide a proper, embedded ETL tool that Oracle can presumably take forward in a way that they couldn’t with Informatica. In the next couple of blog posts then, I’m going to look at this ODI integration in a bit more detail, starting today with “lifting up the lid” on the configuration and data load process, to see just how the various BI Apps platform tools integrate with ODI and control how it works.

If you’ve worked with earlier, Informatica-based releases of the BI Apps, you’ll be familiar with the many, many manual steps involved in configuring and integrating the various platform components, including two initial configuration phases that are particularly involved and prone to operator error:

  • Linking all of the ETL, ETL control (DAC), database and other elements together, so that the DAC can run tasks that, in turn, run Informatica Workflows that, in turn, load data into the BI Apps data warehouse
  • Preparing lots of CSV files to contain the various domain values, segment codes, flex field details and so on for your source systems, even though you know these same details are held in tables in EBS, for example.

BI Apps 11.1.1.7.1 simplifies this setup process greatly, though, by automatically pushing all source and target configuration details entered into the web-based Configuration Manager application directly into ODI’s repository. You can see this in action when defining the EBS data source in our cookbook example, where in the screenshot below connection details to the EBS database are entered into Configuration Manager, and then appear thereafter in ODI Studio’s Topology navigator.

Configuration Manager stores copies of these settings in its own metadata tables – in this instance C_DATA_SERVER, in a schema called DEV_BIACOMP – which can be thought of as similar to some of the DAC repository tables in BI Apps 7.9.x; the difference is that Configuration Manager automatically pushes the values through to the ODI repository held in the DEV_BIA_ODIREPO schema, rather than you having to manually create the same entries in Informatica Workflow Manager yourself.
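
If you’re curious, you can see both halves of this with a couple of queries against the two schemas – C_DATA_SERVER is the Configuration Manager table mentioned above, and SNP_CONNECT is the standard ODI 11g master repository table for data server definitions (column lists vary by release, hence the SELECT *):

    -- Connection details as captured by Configuration Manager...
    SELECT * FROM dev_biacomp.c_data_server;

    -- ...and the corresponding data server entries pushed through to the
    -- ODI repository (SNP_CONNECT holds ODI's physical data servers).
    SELECT * FROM dev_bia_odirepo.snp_connect;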

It’s a similar story with the setting up of domain values – what happens with BI Apps 11.1.1.7.1 is that you configure a special type of load plan, analogous to BI Apps 7.9.6.4’s execution plans, to read from the various EBS (in this instance) metadata tables and set up the domain values automatically. In the screenshots below, the one on the left shows one of the special “domain extract only” load plans being set up for a particular set of fact table groups, while the screenshot on the right shows the same load plan being executed, loading domain values into the Configuration Manager metadata tables.

Notice also that what ODI11g uses to load data into these tables is a standard ODI 11.1.1.5+ load plan – it’s presumably to support the BI Apps and their more complex loading routines and dependencies that load plans were introduced in the first place. And because these are standard load plans, when they go wrong they’re a lot easier for an ODI developer to diagnose and debug than Informatica/DAC load routines, which left log files all over the place and used Informatica terminology for the load, rather than ODI’s familiar ELT approach. In the screenshots below, this domain-load process has failed at the point where data starts to get read from the EBS instance, and looking at ODI’s error message view, you can quickly see that the error was caused by the EBS server being unreachable.
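
And because everything lands in the standard ODI work repository, you can also get at session outcomes with plain SQL; as a sketch, assuming the standard ODI 11g repository table SNP_SESSION in the schema named earlier (verify the column names against your own repository):

    -- Most recent ODI sessions and their status ('D' = done, 'E' = error),
    -- straight from the ODI 11g work repository.
    SELECT sess_no,
           sess_name,
           sess_status,
           sess_beg
    FROM   dev_bia_odirepo.snp_session
    ORDER  BY sess_beg DESC;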

So far the screenshots of ODI’s internals that you’ve seen are from ODI Studio, the Java thick-client developer tool that all ODI developers are familiar with. But the BI Apps tries to hide the details of the ETL tool from you, treating it as an embedded, “headless” server that ideally you administer as much as possible from Configuration Manager (for system-wide configuration) and Functional Setup Manager (for application-specific configuration). To achieve this, Configuration Manager can run ODI Console embedded within its web view, so it’s possible to click on a load plan, view its status and drill into the individual steps all from your web browser, with no ODI Studio install needed.

So how do these load plans get generated, when there’s no DAC and no historic way within ODI of dynamically generating load plans based on metadata? What enables this dynamic creation of load plans is a new utility included with BI Apps 11g called “Load Plan Generator”, which ships as a JEE library within the WebLogic domain and as a plug-in to ODI Studio, for creating test load plans as part of the ETL customisation process.

This blog post by Oracle’s Saurabh Verma describes the Load Plan Generator JEE back-end utility in a fair bit of detail, but in summary you can think of the load plan generation process as going like this:

1. The ODI developer decides to create a new load plan, for loading data about inventories from EBS, for example. As shown in the screenshots before, domain values for this area within EBS will need to have been loaded prior to this, but assuming this is in place, the developer first selects one or more fact table groups, with each fact group a kind of “subject area” containing one or more data warehouse fact tables.

2. In the background, ODI doesn’t have the concept of fact groups; instead, it uses flex field metadata on each fact table to specify which fact group it belongs to. You can see what’s included in each fact group by looking at the warehouse domains view in Configuration Manager, and you can see the fact table flex fields in ODI Studio when you view the details of the table (or “model”) in the Designer navigator.

3. So when the BI Apps ODI developer tells Configuration Manager to create a new load plan, the steps it goes through and the metadata it consults look like this:

4. Giving you, in the end, a load plan with a standard set of phases, and optimally-selected and orchestrated load plan steps to load data into the required fact groups.

5. Then, once the load plan runs, you’ve got the same in-built restartability capabilities that you get in all other load plans, together with the concept of load plan instances, exceptions and so forth, so again as an ODI developer all of this is fairly obvious and fairly transparent to debug.

In the background, WebLogic Server hosts the ODI agent within an ODI-specific managed server and within the overall BI Apps WebLogic domain, with ODI’s security linked to WebLogic so that the same developer and operator IDs work across the whole system, and with a set of additional Fusion Middleware Security application roles for the various levels of administrative access.

So – you could say that BI Apps 11.1.1.7.1 is a “developer’s dream” for ODI professionals, as it brings the productivity and source-system compatibility benefits of the BI Apps to the familiar, more efficient world of ODI. Everything is connected, and you can see where everything is stored in the various underlying repository tables. But – and this is probably the most obvious next thought from experienced BI Apps developers – how do you go about customising these ETL routines, adding new data sources, and upgrading the system over time? And where does the new integration with GoldenGate come in, giving us the ability to break out of restrictive load windows and potentially move BI Apps into the cloud? Watch this space for more details.

New OTN Article: Installing and Configuring Oracle BI Apps 11.1.1.7.1

A few months ago I posted a series of articles on our blog about the new 11.1.1.7.1 release of the BI Applications, focusing on the new ODI 11g-based architecture and its supporting utilities. At the same time, Accenture’s Kevin McGinley was also investigating and writing about this new release, and co-presented with Oracle’s Florian Schouten on it at this year’s BI Forum event in Atlanta. At the time, the three of us agreed that what developers really needed was a step-by-step guide, or “cookbook”, for getting BI Apps 11.1.1.7.1 up and running in a test environment, as so much of the architecture was new and the official docs covered all eventualities rather than just the steps needed to get a “minimal install” up and running.

Kevin and I therefore went away and put such a guide together, based on a Windows-only (typically VM) environment, using Oracle E-Business Suite 12.1.3 as the data source and aiming to get a basic set of data into the BI Apps tables and dashboards. It’s just now been published on OTN as “Cookbook: Installing and Configuring Oracle BI Applications 11.1.1.7.1”, and whilst it’s by no means an exhaustive guide (it ignores Linux installs, for example, and multi-tier installs), it takes you through the download, install, configuration and then the load of some basic EBS data into the BI Apps data warehouse. For anyone who’s interested, I put the architecture and install piece together, with Kevin then tackling the part around system configuration and the initial data load. Hopefully it’ll give readers a head-start in pulling a BI Apps 11.1.1.7.1 system together, show you how the ODI part works, and show how Informatica and the DAC are replaced by ODI and some web-based configuration tools.

Finally, as we put the article together around the time of ODTUG KScope’13, Rittman Mead and Accenture decided to donate the article fee to ODTUG’s chosen KScope’13 charity – Wine to Water, a 501(c)(3) non-profit aid organization focused on providing clean water to needy people around the world. If you were lucky enough to be at KScope’13 you’ll probably know all about Wine to Water, but if not, full details on the excellent work they do can be found on their website.

Rittman Mead BI Masterclass Tour: India, October 2013 – Register Now!

I’m very pleased to announce a one-off Rittman Mead BI Masterclass Tour that we’re organising, in partnership with ODTUG, to take place in India in October 2013. Along with the US and UK, India has one of the largest readerships of our blog, and so we thought it’d be nice to put together an event especially for our Indian readers and run it in conjunction with our favourite international user group, ODTUG.

The event is taking place across three cities – Bangalore, Hyderabad and Mumbai – in the week commencing 13th October 2013. We’ll be in Bangalore on Tuesday 15th October, Hyderabad on Thursday 17th October, and then fly up to Mumbai for Saturday 19th October 2013. Joining me will be Venkatakrishnan J, who you’ll all know from the OBIEE and EPM posts on our blog, and Stewart Bryson, Managing Director for Rittman Mead in the US and also a prolific blogger, writer and presenter on OBIEE, DW, data integration and data modelling. We’ll be bringing the best of our recent presentations, including new content from Oracle Openworld, and we’ll leave plenty of time for networking, introductions, questions and deep-dive discussions, with topics covering the following Oracle products:

  • Oracle BI Enterprise Edition 11.1.1.7 development, administration and new features
  • Oracle BI Applications 11.1.1.7.1 configuration and development
  • Oracle EPM Suite 11.1.2.3+
  • Oracle Exalytics In-Memory Machine
  • Oracle Data Integrator

Stewart, Venkat and I will deliver a number of sessions over each day, with the planned agenda as follows:

  • 9.30am – 10.00am: Registration and Welcome
  • 10.00am – 10.30am: Oracle BI, Analytics and EPM Product Update – Mark Rittman
  • 10.30am – 11.30am: Reporting against Transactional Schemas using OBIEE11g – Stewart Bryson
  • 11.30am – 12.30pm: OBIEE 11g Integration with the Oracle EPM Stack – Venkatakrishnan J
  • 12.30pm – 1.30pm: Lunch & Networking
  • 1.30pm – 2.30pm: OBIEE and Essbase on Exalytics Development & Deployment Best Practices – Mark Rittman
  • 2.30pm – 3.30pm: Oracle BI Multi-user Development: MDS XML versus MUDE – Stewart Bryson
  • 3.30pm – 4.00pm: Coffee Break & Networking
  • 4.00pm – 5.00pm: Intro and tech deep dive into BI Apps 11g + ODI
  • 5.00pm – 6.00pm: Metadata & Data loads to EPM using Oracle Data Integrator - Venkatakrishnan J

The dates, locations and registration links for the three events are as follows:

We’re also investigating the idea of bringing our Rittman Mead BI Forum to India in 2014, so this would be a good opportunity to introduce yourself to us and the other attendees if you’d like to present at that event, and generally let us know what you’re doing with Oracle’s BI, EPM, analytics and data warehousing tools. There’ll also be lots of ODTUG goodies and giveaways, and a social event in the evening after the main masterclass finishes. The last events we ran in India were back in 2010 for the launch of OBIEE 11g; each was a sell-out, and they were great fun. Hopefully some of you who came in 2010 can join us again in 2013, where we can look at how Oracle’s BI products have progressed since then, and see what was planned and announced at Openworld 2013.

Numbers are strictly limited, so register now using one of the three links above, or contact Mark Rittman (mark.rittman@rittmanmead.com) if you have any questions about the agenda, or Venkatakrishnan J (venkat@rittmanmead.com) for questions about registration and the venues. Note that this is a lecture-based format with no hands-on labs, although you’re welcome to bring your laptop and installed software if you’d like to try out any of the techniques we describe, or discuss any aspect of Oracle BI development and administration. Hopefully we’ll see you in India in October 2013, on one of the three days!