Tag Archives: Oracle BI Apps
A BI Developers’ Guide to Oracle OpenWorld 2011
A couple of people have asked me how they should plan for the upcoming Oracle OpenWorld 2011, running in San Francisco on October 2-6. If you’ve never been to OpenWorld before, and perhaps you’re looking to make a case to your boss for attending, the sheer amount of content can be overwhelming, so I’ve put together this guide to what you might want to take a look at if you’re an OBIEE developer.
To start with the basics, OpenWorld runs from the Sunday through to the Thursday, with Sunday being set aside for the various user groups (IOUG, ODTUG, EOUC etc) to run Sunday-afternoon symposiums usually geared around their various SIGs. I’m helping organize the ODTUG BI Sunday sessions, for example, with myself, Stewart Bryson, Jeff McQuigg, Christian Screen and Kevin McGinley doing sessions around various OBIEE 11g new features. I’m also taking part in the IOUG Sunday sessions, doing an updated version of my OBIEE 11g Architecture & Internals session. If you’re interested in finding out what’s running on the Sunday, go to the Content Catalog, then select Business Intelligence as the Track, and then User Group Forum (Sunday Only) as the Type, like this:
There are currently fourteen BI sessions on this day, across the various user groups, though unfortunately the listings don’t yet show times or rooms. If you arrive in SF on the Saturday, though, I’d recommend you come along to these sessions, meet the various user group BI leaders (Shyam Nath from the IOUG BIWA SIG, Faun De Henry from the OAUG BI SIG, myself and Kent Graziano from the ODTUG BI/DW SIG), and take in some real-world user group sessions before you get into the OOW week proper.
OpenWorld proper then starts on Monday, and be prepared to get yourself over to the Moscone Center for some early starts, as sessions begin at 8am and run through until 5pm or 6pm. Again, if you use the Content Catalog and search for Business Intelligence as the Track, and Conference Session this time as the Type, you’ll see (at the current count) eighty-one BI sessions, across OBIEE, the BI Apps, BI Publisher and all the major tools.
You won’t see much Discoverer these days, except around upgrades, nor will you see much on older tools such as Oracle Reports, Daily Business Intelligence and so on. This is Oracle’s showcase for what they want to sell, and what they want you to start planning to adopt in the next year or so, and so most sessions are focused on current tools and whatever Oracle are looking to push at this point in time. That said, the quality of presentations is usually excellent, the Oracle sessions are usually top-notch (particularly ones from the PMs about the products they are working on), and over the past few years more and more non-Oracle speakers have been given session slots as well, such as the team from Rittman Mead, who between us gave sixteen presentations last year.
From looking at the catalog now, sessions that I’d like to attend include:
- “Accelerating Business Intelligence with Oracle TimesTen In-Memory Database”, Marie-Anne Neimat, Oracle
I get the impression TimesTen is going to have a lot of new features to support analytics and in-memory capabilities for OBIEE, so this will be a good opportunity to get a heads-up on what’s coming.
- “Become a Reporting Superstar with Oracle Business Intelligence Publisher Best Practices”, Klaus Fabian & Nikolas Psomas, Oracle
Klaus and Nikolas are two members of the BIP development team and have been very helpful to us in the past, answering questions about the product and its future direction. I’m keen to see what’s coming up for BIP, and if anything’s announced at OOW, it’ll likely be here.
- “Oracle Business Intelligence Foundation: Advanced Modeling Capabilities and Best Practices”, Bob Ertl, Oracle
I’m a bit worried that this will be the same presentation Bob’s given for the last two OOWs, but Bob’s one of the best when it comes to RPD modeling and, of course, he’ll know what’s coming up in future releases. An essential session if your interest is in RPD modeling.
- “Oracle Business Intelligence Foundation: Advanced Modeling and Analysis with Oracle Essbase”, Gabby Rubin, Oracle
Essbase support in OBIEE 11g is still an unfinished area, so I’ll be looking out here for news on how Oracle might better handle MDX in future releases.
- “Agile BI with Oracle Exadata and Oracle Business Intelligence Enterprise Edition 11g” – Stewart Bryson, Rittman Mead
Fast becoming Stewart’s “signature” presentation, this session will set out just what’s possible if you combine the sheer horsepower of Exadata with the virtualized dimension model provided by OBIEE 11g.
- “Oracle Business Intelligence Foundation: Deploying from Department to Enterprise”, Bob Ertl, Oracle
Upcoming releases of OBIEE are likely to have big improvements in terms of developer IDEs, storage of the RPD, team development and so on. I’m looking out here for more details on how this might play out, and perhaps a first glimpse of the new admin tool based on the Fusion development IDE?
- “Oracle Business Intelligence Product and Technology Roadmap”, Paul Rodwick, Oracle
The “Roadmap and Strategy” session is where Oracle publicly announce what’s coming with OBIEE, so expect details on 11.1.1.6 and their thoughts on the product’s longer-term direction. An essential session if your job involves knowing the future direction of Oracle’s BI tools.
- “Oracle Business Intelligence Systems Management Best Practices and New Features” – Me!
The first of my two main conference sessions, this one on systems-level management of OBIEE, including managing WebLogic, backup and recovery, WLST scripting, and monitoring the system.
- “Oracle Business Intelligence/Oracle ADF Integration Using the Action Framework”, Me, and Andrejus Baranovskis
A collaboration between myself and fellow ACE Director Andrejus Baranovskis, where we’ll be building a “roll-your-own” Fusion app made up of ADF and BI components, using the Action Framework to tie it all together. Essential for both OBIEE and ADF developers.
- “Oracle Scorecard, Strategy Management, and KPIs” – Jacques Vigeant, Oracle
I still think scorecards and KPIs are the hidden secret in OBIEE, and I’m keen to hear what’s coming around the corner with the 11.1.1.6/7 releases. Anyone moving up from OBIEE 10g to 11g and looking for justification for the migration should get along to this session; this just takes OBIEE to the next level.
- “Packaged Cloud-Based BI with Oracle GoldenGate and Oracle Business Intelligence Applications”, Jon Mead, Rittman Mead
If only to see what Jon comes up with, this is a hugely ambitious and very relevant look at how we can take the BI Apps into the cloud, courtesy of GoldenGate, Oracle’s latest replication and lightweight-ETL tool.
- “Performing a Security Audit on Oracle Business Intelligence Enterprise Edition 11g”, Venkat Janakiraman, Rittman Mead
Venkat always puts a huge effort into the R&D behind his OOW presentations, and I know he’s been doing a lot of work behind the scenes with our clients on coming up with an automated way to test the security in OBIEE installations. Be ready to take lots of notes for this one.
- “Taking Oracle Business Intelligence Enterprise Edition to the Next Level of Performance”, Jon McGale, Performance Architects Inc.
I have to say, I’ve not met Jon McGale before, but this is a topic close to my heart. It’ll be interesting to see an alternative take on performance enhancement for OBIEE, particularly in terms of how others use caching, configure the RPD and so on.
- “Under the Covers: Oracle Business Intelligence Foundation Architecture and Infrastructure”, Mike Durran & Nick Tuson, Oracle
Again, what I’m looking out for here is news on what’s coming up in future releases, plus the PMs’ perspective on some of the features I’ve been working with day-to-day (WebLogic, scripting, product architecture, high availability etc.).
- “Using Oracle Business Intelligence Actions Framework to Improve Your Business Processes”, Adam Bloom & Nick Tuson, Oracle
Either this will be a re-run of the standard Action Framework presentation, or we’ll get to hear about what’s coming up in future releases. I’ll pop my head around the door and get a feel for how much of it I’ve heard before; if there’s new content I’ll definitely stay.
- “What’s Next for Oracle Business Intelligence Applications? A Sneak Peek into the Future” – Florian Schouten, Oracle
Probably, for me, one of the most important talks of OOW. The BI Apps, in contrast to OBIEE “the platform”, are a bit of a mystery in terms of what’s coming up in future releases. We know the big-picture stuff (Fusion Apps will be supported first, later on ODI will be an option, that sort of thing) but I’m keen to hear a bit more detail on how we are going to transition from the furball that is the current BI Apps to something that adopts all the “rapid application development” capabilities we’re hearing are coming for OBIEE itself.
So you probably get the idea that OOW, at least for me, is mostly about getting a handle on what’s coming up for the products, what new features are likely to appear in future releases, and what new products are starting to become “strategic” for Oracle. For me, OOW is essential as it’s when anything “new” is announced, and so I use it to work out what training courses we should be offering in the next year, what skills our team is going to need to learn, and what products we need to orientate our services around.
The other main attraction of OOW for me is that it generally marks the start of the conference season, in that we write new content for OOW and then re-use it at the various regional events over the next twelve months. I tend to propose sessions that fill out areas I’ve not properly explored before but for which I know there is a lot of interest, and the content I write for this event then becomes the main new material I use at events such as the UKOUG conference, Collaborate and ODTUG KScope. So if you are particularly interested in hearing new content, this is the conference to be at, particularly in terms of product announcements from Oracle.
If you’re a customer, the main reason you’d go to OOW is to get a condensed set of briefings on Oracle’s product direction, and on the key new products they want you to work with. For example, if you’re a BI Apps customer, you should attend the roadmap sessions, and any on where Oracle think developers should be focusing their attention in the next twelve months. Be sure to ask lots of questions (I’m usually the guy with the English accent doing this, sat near the front jotting down the bullet points from the slides), and try to say hello to the speakers and take down their contact details, particularly if it’s a PM presenting and they sound like they know the internals of the product. Also be aware that for the “roadmap”-type sessions that Oracle run, the slides often don’t make it onto the download page after the conference, so make lots of notes if you’ll need to refer to them afterwards.
Apart from the content, then, what else do you need to know? If you’re looking to book accommodation, firstly, make sure you use the OOW booking system (available when you register, and afterwards if you log back in), as you get discounts on the hotels this way; secondly, don’t leave booking your hotel until the last minute, as otherwise you’ll be stuck out by Fisherman’s Wharf, out along the dodgy part of Market Street, or worse, out by the airport.
In terms of places to meet up and socialize (particularly if you’re new to OOW), the usual spots are the bars along Howard Street (Thirsty Bear being my usual haunt), 4th Street (the ApEx guys often meet at the 4th Street Bar & Deli), and the area around Geary/Mason/O’Farrell near the Hilton (Johnny Foley’s is usually packed with OOW attendees). There’s usually some sort of event on most nights as well, with the OTN night often on Monday or Tuesday, the big concert on the Wednesday, and regional and product-specific events on most nights.
For a lot of people, the highlights of the week are the keynotes, particularly the one from Larry on the Wednesday and the Fusion Middleware one by Thomas Kurian on the Tuesday. As the Larry keynote can often go on for quite a while, I tend to watch it, beer in hand, in the OTN Lounge in Moscone West (reading the various tweets from attendees as it goes along), but the Fusion Middleware one is particularly worth attending in person, as it tends to be the showcase for whatever Oracle have been doing on the development side over the last twelve months. The other benefit of watching the keynotes in the OTN Lounge is that you can do something else when the dreadfully-earnest-but-dreadfully-boring Infosys/HP/other-vendor presentations take place before Larry’s session, or at least catch up on email to kill the time. If you do intend to attend the keynotes in person, though, particularly Larry’s, make sure you queue up early, and if you’re a blogger, consider registering for blogger credentials, as you’ll get reserved seating near the front with power outlets for your laptop.
Another important aspect of the event, for customers in particular, is the exhibition, which usually runs in the main Moscone Center building (North?) from the Monday onwards. You’ll find most of the major vendors, consultancies and so on there, and if you’re looking for a partner to come and help you with an implementation, this is a good way to meet a lot of companies in a short space of time. Personally, I’m not so sure about the exhibition – as an attendee, I’m not sure I want to spend what little time I have there being hit on by salespeople, or collecting pens and bags – but if there were a particular product or vendor I wanted to speak to, it could be a good way to get this done, particularly if I were looking to justify a visit out to the event. What can be more interesting, though, are the Oracle Demogrounds, usually in the same exhibition hall – these are often staffed by the same PMs that you’ll see presenting later in the week, and it’s a good opportunity to get some hands-on time with the products and ask questions of the PMs responsible.
So, for me, it’ll be an especially long OOW this year: I arrive on the Wednesday before to attend the two-day Oracle ACE Director briefing at HQ, then go down to San Francisco on the Saturday to get ready for the start on Sunday. I’m also staying an extra day to meet some of the product people on the Friday, before flying back late that night. But if you’re an OBIEE enthusiast, or a customer looking to get a heads-up on what’s coming for the products in the next twelve months, this is the event you can’t afford to miss – just make sure you do your homework (and book into the sessions, as they usually fill up fast) before you go.
A First Look at the BI Apps 7.9.5.2 Part 4: Customization
In the first three parts of this series, I gave an overview of the new 7.9.5.2 release of the Oracle BI Applications, went through how the technology differs now that it uses Oracle Data Integrator rather than Informatica, and then went through what you need to do to perform an initial load and debug failed mappings. In this final posting, I’ll look at what’s involved in performing a customization.
Customizations are necessary in the BI Applications when you need to bring some new data into the data warehouse. This might be because the predefined mappings miss one of the columns you are interested in, or because they currently filter out some of the rows you are interested in. You can even add new facts and dimensions to the BI Applications data warehouse, using and extending the framework that comes with the prebuilt content. Now, as the 7.9.5.2 release uses Oracle Data Integrator and a new Web-based tool called the Configuration Manager to replace Informatica and the DAC, the means by which you make the customizations is a bit different, but the types of customization are divided into the same categories.
Category 1 customizations are where you need to add a new column to a mapping, to bring in source system data that was missed out by the predefined mappings. Category 2 customizations are where you add new facts or dimensions, whilst Category 3 customizations involve bringing in new rows of data from non-packaged sources (I don’t know why bringing in new rows from sources with adapters is not also Category 3, but there you go).
Now if you followed my postings on this topic for previous versions of the BI Apps, you’ll have picked up on the three most important considerations when customizing the BI Apps, namely:
1) Make sure your customizations survive upgrades to the mapping repository,
2) Follow the Oracle methodology as closely as possible, so that Oracle (and others) can work out what you’ve done, and
3) If you change something that’s “out of the box”, make sure you keep a copy of what it was originally.
Now, with previous versions of the BI Apps that used Informatica, this was accomplished by only customizing copies of prebuilt mappings and putting them into “custom” folders, by following the “safe path” through mappings for Category 1 customizations, and by replicating the functionality of existing mappings when creating your own. With this ODI release of the BI Apps, the methodology is still more or less the same, but the way that you preserve changes through upgrades is a bit different.
Whereas in the Informatica versions of the BI Apps you first created copies of the SDE_ORA_xxx and SILOS folders and then copied the original mappings into them before customizing them, with ODI you create “versions” of the mappings, which are then stored in the repository so that you can recall them later on. To take the same example that I used in this previous posting on the BI Apps 7.9.5, where we added columns to the W_ORG_D and W_ORG_DS tables, the mapping we need to customize is called SDE_ORA_OrganizationDimension_Customer, which we version by selecting the option from the right-click menu in the ODI Designer application:
This then presents you with a dialog, where you can enter the version number.
This is much like the versioning (snapshot) feature you get in OWB, except for some reason it doesn’t take ten minutes to run.
Then, assuming you’ve got some new fields to add to your warehouse tables, you should version the Model folder that contains all of the warehouse table definitions (you can only version the folder, not individual tables), then add the new columns to the relevant table.
Then, still within the Model view of the ODI Designer application, you right-click on the BI Applications model folder and choose the Generate DDL option. After you select the table from the sync list, ODI deploys the additional columns to the warehouse table.
This is a big improvement over the releases that use Informatica, where you needed to change the table definition in both Informatica and the DAC, rather than in just the one tool, the ODI Designer application.
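To give a concrete feel for what the Generate DDL step produces, here’s a minimal sketch of the kind of statements involved; the X_CUST_CATEGORY column is a hypothetical custom column I’ve made up for illustration, not one from the shipped repository.

```sql
-- Hedged sketch: the sort of DDL the Generate DDL step emits once new
-- columns have been added to the W_ORG_DS / W_ORG_D model definitions.
-- X_CUST_CATEGORY is a hypothetical custom column, not a shipped one.
ALTER TABLE W_ORG_DS ADD (X_CUST_CATEGORY VARCHAR2(30));
ALTER TABLE W_ORG_D  ADD (X_CUST_CATEGORY VARCHAR2(30));
```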
Next, it’s time to edit the interfaces. Mappings in the ODI release of the BI Apps consist of a folder containing a package and a number of interfaces that implement the full and incremental loads. Some of these interfaces use a new knowledge module that implements inline views, which are then used to replicate the mapplet and source qualifier features in Informatica, so that access to the underlying EBS tables can be abstracted into a simple interface.
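To give a feel for what that inline-view pattern looks like in SQL terms, here’s a hedged sketch; the table, columns and filter are illustrative (HZ_PARTIES is a standard EBS table, but this isn’t lifted from the actual interface).

```sql
-- Illustrative sketch of the inline-view pattern: the SQ_BCI_ interface
-- becomes a nested SELECT over the EBS source tables, and the main
-- interface then reads from it as if it were a single source table.
SELECT sq.party_id   AS integration_id,
       sq.party_name AS org_name
FROM  (SELECT hp.party_id, hp.party_name
       FROM   hz_parties hp
       WHERE  hp.party_type = 'ORGANIZATION') sq;
```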
We then edit the SDE_ORA_OrganizationDimension_Customer.SQ_BCI_CUSTOMERS interface, as it’s this one that maps onto the relevant EBS tables. The first step in editing it is to add the new columns to the interface’s target datastore.
Then you map onto each new column the source column that you want to bring through.
After this the process is very similar to when you are using Informatica and the DAC – you feed the new column across from the SQ_BCI_ interface into the main interface that loads the dimension staging table, then you repeat this for the SIL mapping that actually loads the dimension. It’s just a different mapping approach and set of tools to use.
Category 2 customizations, where you define and then load your own facts and dimensions, are of course a bit trickier. As with BI Apps customizations under Informatica and the DAC, you need to add DATASOURCE_NUM_ID and INTEGRATION_ID columns to your dimension staging tables, and these plus ROW_WID and ETL_PROC_WID to the tables loaded by the SIL mappings, as sketched below.
In addition, the same sorts of variables, such as IS_INCREMENTAL, TYPE2_FLG and so on, are passed to the master package for each set of interfaces, but this time, instead of implementing these features using packaged Informatica transformations, most of them are implemented through features of the model datastores (SCD type 2 handling, for example) and the new BI Apps-specific knowledge modules that ship with this release. So logically the process is the same as before, but there’s a whole new set of tools and techniques to learn if you want to do this.
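As a rough sketch of the housekeeping columns involved, a custom dimension staging table might be declared along the following lines; the table and business column names are placeholders of my own, not BI Apps objects (ROW_WID and ETL_PROC_WID then go on the target dimension table that the SIL interface loads).

```sql
-- Hypothetical staging table for a Category 2 customization; only the
-- two standard BI Apps housekeeping columns are real conventions here,
-- the business columns are placeholders.
CREATE TABLE WC_PROJECT_DS (
  DATASOURCE_NUM_ID NUMBER(10)    NOT NULL, -- identifies the source system
  INTEGRATION_ID    VARCHAR2(80)  NOT NULL, -- natural key from the source
  PROJECT_NAME      VARCHAR2(240),          -- placeholder business attribute
  PROJECT_TYPE      VARCHAR2(30)            -- placeholder business attribute
);
```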
So what happens then when a new release of the BI Apps comes along? This is where versions come in. What you need to do is to make sure you create a new version of this interface before you apply the update, then apply the update and create a new version, and then compare the two versions.
By doing this you can see what’s changed in this new release (presumably your customization is no longer there), and so you can either restore your old version, or go and apply the customization again if there’s something you want to preserve in the new release of the interface. Of course it’d be nice if you could automatically reapply just your customization but for now, this is the best approach.
So there you go: over four parts, an introduction to the BI Apps 7.9.5.2, a look at the underlying technology, steps for the first load, and now a look at customizations. If you’re interested in learning more about this release, come along to my session at this year’s OpenWorld, or even better come along to our Training Days event just afterwards in London, where we’ll cover this release in more detail.
A First Look at the BI Apps 7.9.5.2 Part 3: Performing a Data Load
In the past couple of postings (here, here), I’ve gone through an introduction to the new BI Apps 7.9.5.2 release and talked about how the introduction of ODI has altered much of the technology underneath it. In today’s posting I’ll go through how data is loaded into the BI Apps data warehouse, and how you can use the various tools to diagnose why a load has failed (something that happens quite a lot when you first install the product).
The first step in performing a load is to start up the Configuration Manager and define the execution plan. If you’ve used the previous releases of the BI Apps that use Informatica and the DAC, you’ll know that you can either run one of the predefined execution plans or define your own from a list of subject areas that you select. In 7.9.5.2, the Configuration Manager doesn’t provide any predefined execution plans, so you need to use the tool to create your own. You start off by creating a connection through to the Oracle database that contains the metadata tables used by the Configuration Manager (a much smaller set of tables than those used by the DAC, as details of the execution plans etc. are actually held in the ODI Work Repository).
Once you’ve created this first connection you can save it, so that you can just select it from a list next time you log in. Once you’ve logged in, you are presented with a list of options, to manage connections, manage parameters, edit, run and then monitor execution plans.
“Manage connections” allows you to define the connection through to the ODI workflow agent, and to the schema that holds the Configuration Manager tables. This is optional but saves you having to retype it all every time you submit an execution plan for running.
You can also edit the parameters, which are either global (independent of specific ETL runs), common (applying to all subject areas) or application-specific (specific to a particular module, such as Financials Analytics). I leave these at their defaults; I also haven’t updated or amended any of the domain-value CSV files that ship with the BI Apps, as I’m running against a subset of the Vision EBS database and don’t really know/care about the specific codes used in this database.
Now it’s time to define the execution plan. I click on the Administer Execution Plans link and start selecting the subject areas. These map to the license options for the product and are restricted to the modules that this release supports (see this posting for more details). I select Backlogs, Booking Line, Cycle Lines and Invoice lines from the Order Management module.
There’s an option at the bottom of the select list to “Synchronize package structure”, which you can also do when saving the execution plan. What this step actually does is a bit unclear; the docs say:
“In order for the package structures to be displayed correctly on the Package Structure tab, the subject areas associated to an execution plan must be synchronized within internal Oracle BI Applications Configuration Manager tables. If you do not synchronize the package structures when creating the execution plan or when you made changes to the subject areas associated to the execution plan, you must synchronize by clicking the Synchronize package structure button that appears on the Package Structure tab in order to get correct package structure. The synchronization process may take up to several minutes depending on the network traffic between the Oracle Business Analytics Warehouse database and the Oracle BI Applications Configuration Manager installation.”
The schema used by the Configuration Manager only actually has one table though, and what it appears to contain is a list of “transactions” associated with the Configuration Manager, like this:
If you view the contents of the BLOB as an XML file, each one contains something a bit different and, I think, is storing details of parameter changes, setting up of execution plans and so on. Here’s a look at the entry that was created just after I defined the above execution plan:
You can see the name of my execution plan mentioned in the XML, so I guess this table stores details of the execution plans, parameter values and so on as you create them, and the synchronization step takes what’s in this table and copies it across to the ODI Work Repository.
Once you’ve created the execution plan, you can view its structure in the Configuration Manager application.
Notice how the execution plan package is structured; there is a “Master Execution Plan” at the top, which then has “Master General”, “Master SDE Facts”, “Master SDE Dimensions” and so on. Within each of these are packages for the various dimension categories, and these then go down to the individual mappings that you’ll need to run. If you saw my previous article on how ODI holds execution plans, you’ll have seen that these packages and mappings translate to packages and interfaces in the Project view in ODI Designer, like this:
The numbers next to the packages indicate the ETL phase for each package; the complete list looks like this:
So how about we try and run the execution plan? To do this, you can either start it from the ODI Designer application (which you can also use to schedule execution plans), or run it on demand from the Configuration Manager. I run it from the Configuration Manager and enter the details for the ODI super user (SUPERVISOR).
After the execution plan is started, you can switch between the Monitor Executions view in the Configuration Manager and the Operator ODI application to see how the plan develops.
Things start off with some general routines to create indexes, truncate staging tables, refresh the execution plan and so on.
Taking a look at the execution plan status in the Configuration Manager, all looks OK at this stage.
Going back to the Operator, you can see the Workflow agent kicking off Interface agent jobs to load the individual dimensions, at this point using the SDE mappings.
The view from the Configuration Manager still looks good, as well.
About five minutes later though, I notice that the load has ground to a halt. Taking a look in the Operator, I can see that one of the mappings has failed.
The Configuration Manager also shows an error, along with the log, which indicates that the mapping has failed because a table is missing.
I half-expected this, as the dataset I’ve got is a stripped-down version of the Vision dataset, originally designed to be demoed with the 7.9.5 version of the BI Apps. Even in point releases such as 7.9.5.2, though, some of the mappings change and require additional tables that aren’t in my dataset.
So which table is missing? To find out, I go back to the Operator and have a look at the SQL that this step was trying to execute.
So the step that failed was one that was trying to create a view over one of the tables in the EBS schema, and for whatever reason this table doesn’t exist.
Now, as the ODI Model view contains the definitions of all the tables that are expected as sources for the mappings, to get around the problem I use the ODI Designer to generate the DDL for this table and then apply it to the EBS source schema. Then, as you could with the DAC, I go back to the Configuration Manager and restart the execution plan. As the mappings in the ODI repository are designed to be restartable, after a minute or so I go back to the Operator and check the progress again.
A few minutes later, the execution plan stops again, and looking at the Operator I can see that the Workflow agent has failed this time, whilst trying to create an index.
Looking at the error message associated with the step, I can see that a unique index has failed to create because the table has duplicate values in the indexed column.
For some reason the W_GL_ACCOUNT_DS dimension staging table has duplicate values across the DATASOURCE_NUM_ID, INTEGRATION_ID and source effective-from columns, which is strange, as it indicates a problem with the source data (there are duplicates in the source EBS table). To start working out what’s gone wrong, I first run a query on the W_GL_ACCOUNT_DS table to find out which INTEGRATION_IDs are duplicated.
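The check itself is just a GROUP BY / HAVING query, something like this:

```sql
-- Find INTEGRATION_ID values that appear more than once in the
-- W_GL_ACCOUNT_DS staging table (a minimal sketch of the check)
SELECT datasource_num_id, integration_id, COUNT(*) AS dup_count
FROM   w_gl_account_ds
GROUP  BY datasource_num_id, integration_id
HAVING COUNT(*) > 1;
```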
Sure enough, there are two that are duplicated. Now, even though I’ve run this mapping a few times, we shouldn’t get duplicates, as the table is truncated each time the mapping is run, so there must be duplicates in the EBS table that provides the source. So which source table populates this mapping?
Now with the old DAC you could have taken this table name, viewed it in the DAC Console Design view and then displayed which tasks had this table as a target. This is a bit trickier in 7.9.5.2, as there’s no UI for querying the relationship between tables and mappings, so I instead go back up the list of completed mapping steps and find the package that called this index-creation step; I then look through the steps and find the view that’s created to provide data for the table load.
So the view is created over an SQ_BCI_ table, which is something that the Source Qualifier interfaces I mentioned in my previous posting create to mimic the mapplet and source qualifier transformations that Informatica uses. Going back to the Designer application, I can see that the mapping to load the W_GL_ACCOUNT_DS table consists of a package that calls two incremental-load interfaces and two full-load interfaces. The ones in yellow are the “source qualifier” interfaces that retrieve the data from EBS; the other two are the interfaces that load the warehouse staging table.
Taking a look at the full load interfaces, I first look at the interface that loads the W_GL_ACCOUNT_DS table to see where the INTEGRATION_ID column is populated from.
So INTEGRATION_ID is populated from CODE_COMBINATION_ID in the SQ_BCI_ interface. So where does this interface get the CODE_COMBINATION_ID column from?
So it comes from the CODE_COMBINATION_ID column of the GL_CODE_COMBINATIONS table. Now it’s a case of running a query against that table; maybe I’ll find a duplicate, or it could be that there are no duplicates but the SQ_BCI_ interface creates some because of the join that takes place in order to load the table. However, looking at the data that’s been loaded into the table, it actually looks valid, but I need to include another key value in the INTEGRATION_ID so that it becomes unique.
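One way of doing this, sketched below, is to concatenate a second key column into the INTEGRATION_ID; my choice of CHART_OF_ACCOUNTS_ID as the extra key (and “~” as the delimiter) is an assumption for illustration, not necessarily the right column for this particular mapping.

```sql
-- Hypothetical fix: extend INTEGRATION_ID with a second key column so
-- that the combination becomes unique. CHART_OF_ACCOUNTS_ID and the
-- '~' delimiter are illustrative assumptions.
SELECT TO_CHAR(cc.code_combination_id) || '~' ||
       TO_CHAR(cc.chart_of_accounts_id) AS integration_id
FROM   gl_code_combinations cc;
```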
The task now is to resolve this issue and then work through any similar issues until the load completes without error, a task that in my experience usually takes up to a week (meaning that, including the install, working out these data issues and then getting the RPD set up, I usually budget two weeks to get the first numbers up in front of the client for initial evaluation).
If all else fails and you can’t get a mapping to run, but you want to mark it as completed and run the rest of the mappings in the execution plan, in theory you can set the status of the session to “Done” from “Error”. This should then allow the rest of the process to complete as if the erroneous step had completed correctly (although your data, of course, may no longer be consistent). However, when testing this I found that even though I could mark the session as “Done” and other steps then started running, the step got marked as “Error” again later on, so I don’t think it works in quite the same way as the equivalent feature in the DAC Console.
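If you want to poke around at this level yourself, the session statuses live in the ODI work repository; assuming the standard ODI 10g repository tables (this is my reading of the schema, so treat it as a sketch), a query like the following lists the sessions in error:

```sql
-- Hedged sketch: list failed sessions in the ODI work repository.
-- Assumes the standard SNP_SESSION table, where SESS_STATUS is
-- 'D' (done), 'E' (error), 'R' (running) or 'W' (waiting).
SELECT sess_no, sess_name, sess_status, sess_beg, sess_end
FROM   snp_session
WHERE  sess_status = 'E'
ORDER  BY sess_beg DESC;
```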
If nothing is working, you can mark the whole load as having completed or even reset (truncate) the warehouse tables in order to start again.
So there you go. As with the versions of the BI Apps that used Informatica, your first load will probably take a couple of attempts to go through, and you need to know how to diagnose errors, restart the load and monitor progress in order to complete the task. The process is a bit different to using the DAC and Informatica but, if anything, the mappings are a bit simpler, as all of the SCD2 handling, ETL_PROC_WID application and so on is done through the various knowledge modules. In the final posting on this topic later in the week, I’ll complete things by talking about customizations, and how they work in this new version.
A First Look at the BI Apps 7.9.5.2 Part 2: Technology Changes Between ODI and Informatica
So at the end of last week I posted an overview of the new 7.9.5.2 release of the Oracle BI Applications, which replaces Informatica and the DAC with Oracle Data Integrator and the Configuration Manager. I said at the time that we still have SDE (Source-Dependent Extract) and SIL (Source-Independent Layer) mappings, together with PLP and the other mapping types, and whilst execution plans are still around, they’re now called ODI packages rather than Informatica workflows. So what else has changed?
Going through the install and setup process, the basic approach of installing Oracle BI EE and then the BI Applications is the same as before, except this time there are fewer BI Apps modules to select from (see my previous posting for details). You obviously don’t install Informatica any more – you install ODI 10.1.3.5 instead – and once you’ve got all of these installed, it’s time to set up the data sources.
Now, setting up data sources is one area that’s greatly improved in this release. With the previous version of the BI Apps, you had to define data sources in both the DAC and the Informatica Workflow Manager, and you also had to define connections between the DAC and Informatica and between the DAC client and the DAC server (and in pre-7.9.5 releases, between the Informatica Repository Server and the Informatica Integration Server). Now you just define your data connections in the ODI Topology Manager, like this:
The connections themselves are defined as JDBC connections, plus database links for Oracle-to-Oracle mapping flows. You also use this interface to define the connections to the interface and workflow (scheduling) agents, and that’s about it (you also need to define the link from the Configuration Manager to the ODI repository, but that takes just a second and again is through JDBC). So score one to BI Apps 7.9.5.2 for this welcome simplification.
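For the Oracle-to-Oracle flows, the database link itself is just a regular one; here’s a minimal sketch, where the link name, schema and TNS alias are all placeholders.

```sql
-- Hedged sketch: a database link of the kind used for Oracle-to-Oracle
-- mapping flows; every name and credential here is a placeholder.
CREATE DATABASE LINK ebs_source
  CONNECT TO apps IDENTIFIED BY apps_password
  USING 'EBS_TNS_ALIAS';
```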
In the previous releases of the BI Apps, you imported the BI Applications repository into Informatica using the web-based Informatica console, then imported the DAC repository into the DAC and created the required containers for your particular source systems. In 7.9.5.2 you first import the ODI Master Repository, which contains entries for the various sources and agents you are going to use, like this:
Then shortly afterwards you import the ODI Work Repository, which contains the actual PLP, SIL, SDE etc. mappings and the various master execution plans for sequencing them together. All in all this takes a couple of hours, around the same time as (possibly a bit less than) the Informatica release of the BI Apps.
Once you’ve imported all of the mappings, you can see how they are stored in the Project view of the ODI Designer application.
In the screenshot above, you can see the folders that map to the Adapters you got in the previous release (SDE_ORA11510_Adaptor, etc.), with the naming consistent with the Informatica release but of course only supporting EBS 11.5.10 as a source. Under the Mapping folder are folders for utilities and other tasks that you will need to carry out, together with the master (template) execution plans that are leveraged in the Configuration Manager application.
Taking a closer look at one of the mappings in the SDE folder, you can see that it corresponds to a single mapping, and a single workflow, in the previous Informatica-backed BI Apps releases. If you expand the mapping you can see that it’s actually a “package” (equivalent to a “workflow” in Informatica), with the package calling a number of ODI interfaces. The ones with blue icons are ODI interfaces as we would normally know them, whilst the ones with a yellow icon correspond to a cross between the “source qualifier” transformations in the Informatica mappings and the “mapplets” that were used to encapsulate access to EBS data areas such as customers or invoices.
In the list above, the AP_XACT_FS interface is used for incremental extraction of AP data from EBS and uses the SQ_AP_INVOICES_ALL interface as a source, something that’s now possible through a new “inline view” knowledge module that ships with the BI Apps. Taking a closer look at the AP_XACT_FS mapping you can see it using the SQ_AP_INVOICES_ALL interface as a data source in the same way that the Informatica mapping used mapplets and source qualifier transformations:
Looking then at the corresponding SQ_AP_INVOICES_ALL interface, you can see that this is where the main extract logic goes, corresponding to the mapplets and source qualifiers in the Informatica version.
It’s a similar story with the SIL mappings, with SQ interfaces providing an abstraction away from the main data sources and thereafter providing the main data source for the regular ODI interface. One significant difference, though, between the old Informatica mappings and the new ODI ones is that much of what you used to do with custom Informatica transformations – get the ETL_PROC_WID, check that rows haven’t been inserted already, handle SCD2 history tracking and so on – is either handled by new BI Apps-specific ODI knowledge modules that are selected in the “flow” part of the interface, like this:
or, in the case of SCD2 handling, are defined as part of the column properties in the ODI Designer Model view, like this:
From what I can see, these changes are driven either by differences in the way that mappings work in ODI and Informatica – in Informatica, like OWB, a mapping can have several steps and can make use of shared transformations, whereas ODI mappings translate to simple SELECT, UPDATE etc. statements – or by the fact that ODI has more built-in data warehousing functionality in the form of its knowledge modules, so that we don’t need to code a solution for ETL ID handling, or for SCD2 handling, each time we write a mapping. Of course, the interesting test of this will be when we try to customize an existing mapping or write a new one of our own, and it’ll also be interesting to see how well the old Informatica-based mappings run compared to the new set-based ODI ones.
There are a number of utilities that come with the mappings, scenarios and interfaces, that you can use to reset (truncate) the data warehouse, mark execution plans as having completed, generate the scenarios at the start, create extract views (presumably over the EBS tables) and so on.
In the next post I’ll look at performing the first data load, and we’ll see how these utilities are used during a data load. It’ll also be interesting to see how execution plan restarts work, how easy it is to debug failed mappings and so on.
The execution plans themselves are stored in the ODI repository along with the mappings, with an initial set of master execution plans that are then added to as you create your own customizations. I’ll cover customizations later, but for now I’ll say that whilst the customization process is similar – you have Category 1, 2 and 3 customizations and you follow the “safe path” through the predefined interfaces – the way that you preserve customizations through upgrades and patches is a fair bit different. More on this in a later post.
Moving on to the Configuration Manager: again, one benefit of this and ODI compared to the DAC and Informatica is that you only have to define tables, columns and so on once (in the ODI repository) and not twice, in the DAC and the Informatica Source and Target Analyzer utilities. One downside, though, is that you can’t use the Configuration Manager to take a target table, trace it back to the task that loaded it, then trace that back to the SDE task and the source tables that provided its data – although I’ve seen demos of Oracle BI EE running reports that show the data lineage in the ODI repository, so presumably that will replace this DAC functionality.
Defining the execution plan is as simple as selecting the subject areas (no generation of parameters, and no generation of the ordered task list, so I wonder how the final task dependencies are generated when you select more than one subject area?) and then saving the resulting execution plan.
Once the execution plan is defined you can take a look in the Package Structure view, like this:
This is not too dissimilar to the ordered task list in the DAC (though handily, it’s shown in a tree structure), and then you can run it either manually from the Configuration Manager, or manually or scheduled from the ODI Designer application.
Unlike the SIL workflows in Informatica, the SIL packages that are executed from the Configuration Manager contain logic for both incremental and full loads, and they also call lots of individual interfaces and ODI tools rather than just the one Informatica mapping that Informatica workflows call.
The way that execution plans work is obviously a bit different to the Informatica versions of the BI Apps. In those versions, the execution plan exists only in the DAC repository and is used to make calls out to the Informatica Workflow Manager to run individual mappings that correspond to DAC tasks. In the 7.9.5.2 release, the steps to load the warehouse, then to run the SDE, PLP, SIL etc mappings, then within these the mappings that correspond to the HR, OE, Financials etc modules are held in a hierarchy of ODI packages that themselves call the various SIL, SDE and PLP mappings.
Presumably, then, this is how the “ordered task list” step is skipped: the sequence in which the mappings are executed is defined within these master execution plan packages, which are then run in parallel for each stage in the ETL process (the numbers next to the various stages in the above screenshot). I’ll cover this more in the next two postings, where I’ll look at performing the first load and then making some customizations.
Finally, once you’ve defined the execution plan (something you can only do in the Configuration Manager), you can actually execute and monitor it directly in the ODI Operator application, which gives you greater visibility of what’s gone wrong and where things are at the moment.
In the view above, you can see the mapping tasks being executed by the interface agent, and the various supporting tasks being executed by the workflow agent. Two of the mappings in the interface list have failed, so when we come to do the full load later in the week we’ll have to see how we can fix these.
A First Look at the BI Apps 7.9.5.2 Part 1: General New Features
If you’ve been wondering why I’ve not been posting to the blog recently, it’s because I’ve been grappling with the new release of the Oracle BI Applications that uses Oracle Data Integrator as the ETL engine. This is an interesting release, as it does away with Informatica and even the DAC, keeping the same approach of SDE and SIL mappings but mapping them onto ODI and a new tool called the Configuration Manager. So how does it work, and how does it compare to the 7.9.5 and 7.9.6 releases of the BI Apps?
First of all it’s worth setting out what this 7.9.5.2 release does, and what it doesn’t do. The sources and targets that it supports are only a small subset of those covered by the 7.9.5 and 7.9.6 releases. The only source supported is Oracle E-Business Suite 11.5.10, which necessarily runs on the Oracle Database, and the only target supported is Oracle 10gR2. Within the BI Apps, the only modules supported are Financials, Human Resources, Supply Chain & Order Management, and Procurement and Spend. Moreover, as this is the 7.9.5.2 release – a branch of the 7.9.5 main release – you don’t get any of the new subject areas that are in the 7.9.6 release, which continues to use Informatica. One other important thing to bear in mind is that there is no guarantee that there’ll be a 7.9.6.x release using ODI, and therefore you might need to wait until the 11g BI Apps release before you can take advantage of areas like Projects, a wait that may well be twelve to eighteen months or so (assuming that BI EE 11g is out in the first half of 2010). Finally, if you’re a customer looking to implement the BI Apps, there are going to be far fewer people out there with 7.9.5.2 experience than with mainstream BI Apps experience, so you’re going to have to really want to use ODI with the BI Apps to go for this version – for most customers I’d still recommend going down the Informatica route, at least until the 11g release.
If you take a look at the architecture of the 7.9.5.2 release of the BI Apps, you can see that pre-built ODI interfaces, packages and transformations take the place of the equivalent mappings, workflows and transformations that come with the 7.9.5 and 7.9.6 version of the BI Apps. Knowledge Modules within ODI handle the extracts, loading and transformation of data between source and target and the BI Apps ships with a number of custom knowledge modules to replicate, for example, the mapplets feature in Informatica.
The stated aims for Oracle in this ODI-backed release are, firstly, to speed up data loading through the use of database set-based transformations; secondly, to make deployment quicker and simpler; and thirdly, to make all the technologies more tightly integrated. For the first objective, we’ll have to wait and see until we can run the same execution plans using both the Informatica and ODI versions, and I’m hoping to have these test results in place for my session on this release at the forthcoming OpenWorld in San Francisco, and for our Training Days event just after it in London. Deployment and integration are, however, better (at least once you work out how it all installs – it took me about six attempts before I got it all working).
If you look at this topology diagram from the Oracle documentation, there are three servers involved in the Informatica-driven 7.9.5 and 7.9.6 architecture, made up of the repository and integration servers for Informatica, and the DAC server that controls the Informatica workflows. There are also Informatica and DAC clients which are used for customizing mappings, and for controlling and monitoring ETL execution plans respectively.
If you take a look at the corresponding topology diagram for 7.9.5.2, the number of servers has gone down to one, as has the number of clients.
Now, I think this is slightly disingenuous, as you typically use two ODI agents – one for workflow, one for executing the packages – and there are also two clients, the ODI client and the Configuration Manager, which takes the place of the DAC. But it’s still definitely the case that the 7.9.5.2 release has fewer “moving parts”: there is no DAC server (with all of its technologies to learn) controlling Informatica (another, non-Oracle, technology to learn), and there’s no DAC repository, as all of its functionality has now been moved into the ODI repository or into a set of tables used by the Configuration Manager. It’s not quite the simplification that Oracle claim, but if you’re already an Oracle shop and use ODI, it’s just another set of ODI projects and models that you need to run.
So once you’ve got it all installed, what does it look like? Well, once you’re in the ODI Designer application, the Projects view shows you all of the PLP, SDE, SIL and other folders that correspond to the various adapters in the Informatica repository, except of course they only cover EBS 11.5.10 as a source.
You can also see the area where execution plans are stored; the concept around these is a little different to how they worked in the DAC, but you still select subject areas to load and let the Configuration Manager (the DAC replacement) turn these into calls to this release’s equivalent of Informatica workflows, ODI “packages”.
The ODI Designer application also holds the data definitions for the source and target databases, and you use this tool to create the data warehouse tables in the target database before you commence your first load. If you’ve read my posting on adding partitioning to the 7.9.5 release of the Oracle BI Applications, you’ll be interested to note that the 7.9.5.2 release supports range-based partitioning on tables (though not compression, although this should work if you manually apply it after the initial table creation), presumably with bitmap indexes created “local” rather than the default “global”.
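By way of illustration, the resulting physical design would look something like the sketch below; I’ve used W_GL_BALANCE_F (heavily simplified, with only a few columns shown) as the example fact table, and the partition boundaries and indexed column are placeholders of my own rather than anything the installer generates.

```sql
-- Hedged sketch: range partitioning a fact table on its date key, with
-- a locally-partitioned bitmap index. Columns, boundaries and names
-- are illustrative only.
CREATE TABLE w_gl_balance_f (
  row_wid         NUMBER(10) NOT NULL,
  balance_dt_wid  NUMBER(8)  NOT NULL,
  company_org_wid NUMBER(10)
)
PARTITION BY RANGE (balance_dt_wid) (
  PARTITION p_2008 VALUES LESS THAN (20090101),
  PARTITION p_2009 VALUES LESS THAN (20100101),
  PARTITION p_max  VALUES LESS THAN (MAXVALUE)
);

CREATE BITMAP INDEX w_gl_balance_f_n1
  ON w_gl_balance_f (company_org_wid) LOCAL;
```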
So with the DAC gone, what’s its replacement? Well, much of the functionality has moved into the various ODI client tools, and the DAC repository itself is now just part of the ODI project metadata, but you still need a user-friendly tool to select and then run the data warehouse load routines, and the tool that does this is called the Configuration Manager. This is a web-based (ADF) application that runs on WebLogic and that currently you need to install separately after the main BI Apps installation (I understand that in future this will be automatic, at least for Windows platforms). It has a subset of the functionality of the DAC Console, allowing you, for example, to construct and then run execution plans; however, it doesn’t have the data lineage and metadata exploration features that I found useful in the DAC Console – presumably this functionality will be (or already has been) replaced by packaged reports that run against the ODI repository for use with Oracle BI EE.
The Configuration Manager interface is quite easy to use; once you log in, you are presented with a list of options where you can define parameters, build and then execute an execution plan, and then monitor the execution of those plans.
Creating execution plans is a simpler process than with the DAC, as you don’t need to define and populate the various parameters that are passed between the DAC Server and the Informatica Server, and there is no “create ordered tasks” step either that sequences all of the mappings required to load your chosen subject areas. In the screenshot below, you can see where I’m choosing the various subject areas that I want to see in my execution plan.
In future postings in this series, I’ll look in more detail at the technical differences under the covers in this release, how you go about doing the initial load, and how the customization process works. For now, though, you can download BI Apps 7.9.5.2 from OTN; if you’re going to use it, you’ll need a fully licensed copy of ODI 10.1.3.5 as well in order to use the mappings.