Update on my OBIEE / Exalytics Books

Just a quick update on my two recently published books, “Oracle Business Intelligence Developers Guide” and “Oracle Exalytics Revealed”, both of which are now available to purchase around the world in printed and electronic formats. Here are some links to the two books on Amazon.com, Amazon.co.uk, Google Play and the Apple iTunes Store.

NewImage

The main Oracle Business Intelligence 11g book has had some great, five-star reviews on Amazon.com and Amazon.co.uk, with comments such as:

  • “To put it straight: Many years will pass before we will witness another masterpiece like this. This book is epic and a total MUST for anyone who works with OBIEE 11g” (5-star review on Amazon.co.uk)
  • “It has taken a long time for a good OBIEE 11 book to come out, but well worth the wait. This is not just a “warmed over” documentation book, this book has “Meat” and is a must buy for all OBIEE 11g developers. This is one book that covers it all” (5-Star Review on Amazon.com)
  • “Although it is applicable to all developers, the people who would benefit most from this book are 1) developers who are new or relatively new to Oracle BI 2) developers who are experienced with 10g and are looking to upgrade to 11g and 3) developers experienced in one or two areas (such as dashboards) and are now looking to expand their knowledge. It would also be useful to team members such as Tech Team Leads, Solution Architects and Operations staff who are not full-time Oracle BI developers but who still need to understand the concepts and “lingo”” (5-Star Review on Amazon.com)
  • “If you’ve followed OBIEE during its evolution from Siebel to Oracle, then you have indubitably heard of Mark, and have likely read something he’s written: a blog, a magazine article, etc. Therefore, you know there is no better source in the world for a book on this subject.” (5-Star Review on Amazon.com)

I’ve also recorded a podcast about the book with Oracle’s VP in charge of Oracle Business Intelligence product management, Paul Rodwick, which can be accessed in MP3 format from here. In the podcast we talk about how I ended up writing the book, my favourite chapters and why I chose Oracle Business Intelligence 11g to write about – it’s about 10 minutes in length and available for download now.

I’d particularly encourage you to take a look at the standalone “Oracle Exalytics Revealed” ebook, available in Kindle and Apple iBooks formats. This was an experiment by McGraw-Hill (Oracle Press) and myself to see if shorter, focused ebooks might be popular: we took the existing Exalytics chapter from the main book (which was based on the initial 11.1.1.6 release) and extended it, covering 11.1.1.6.2 BP1 and adding additional content around testing approaches, management using Integrated Lights-Out Management, and the new Presentation Server features aimed at Exalytics that became available with the 11.1.1.6.2 BP1 patchset (trellis charts and so on). The Exalytics book is only $9.99 and is a great complement to the main book, or something you’d buy on its own if you’re contemplating buying an Exalytics server. Here are a couple of review comments, also from Amazon.

NewImage

  • “Nowhere will you find a more comprehensive look at Oracle Exalytics from start to finish than in this book … extremely thorough and easy to follow … and I find this narrative incredibly useful to explore the product.”(5-Star Review on Amazon.com)
  • “The book … delivers what it promises on its title, a thorough review on the most important aspects of this engineered system created by Oracle. Either if you are working with Exalytics or if you are going to do an implementation in the near future, you should have this book around as it’s packed with valuable information in a much easy way to digest than the official Exalytics Documentation” (5-Star Review on Amazon.co.uk)

The sample data for the main book is also now available from the Oracle Press / McGraw Hill website along with a sample chapter, whilst Amazon and Apple also offer sample pages from the book if you want to try it out electronically.

Finally, a bit more news about our training courses – going forward, we’ll be giving away a copy of the book to every attendee of our OBIEE 11g courses in the UK, USA and India, and to clients who hire us to deliver on-site training for their teams. Full details of our training courses can be found on our Training page. The key benefit for trainees is that the examples in the book are based on the same ones we use for our OBIEE 11g hands-on labs, so you’ll be able to read up on areas you’re particularly interested in after the course finishes, and practice the examples using the downloadable sample data that comes with the book.

Integrating Oracle WebCenter and Oracle BI EE Part 3 : Adding BI Content to Custom Portal Applications

In the previous two postings in this series I looked at Oracle’s WebCenter Portal application at a high-level, and then explained how analyses and other BI content from OBIEE 11g could be added to WebCenter Portal : Spaces applications. In most cases, if you’re predominantly a BI developer this is as close as you’ll come to WebCenter development, but there is another option where you can create your own custom Portal application, using Oracle JDeveloper 11g and a feature called Oracle WebCenter Portal : Framework.

Going back to my original product architecture diagram, WebCenter Portal : Framework and WebCenter Portal : Services are two parts of the WebCenter and ADF toolkit that WebCenter Portal : Spaces was built from.

NewImage

By using the toolkit though rather than the prebuilt Spaces product, you can put together your own custom Portal application, mixing in elements of WebCenter Portal, bits of the general ADF toolkit, other products from Oracle that expose components through JDeveloper 11g, and of course BI content from OBIEE 11g. If you’ve been following this blog for a while now, you’ll maybe remember that we covered integrating OBIEE 11g with Oracle ADF, the lower-level componentry from which Portal : Framework and Portal : Services are themselves built. Now though, let’s take a high-level look at what’s involved with the Portal : Framework toolkit, first by creating a simple ADF application that uses Portal : Framework components, and then looking at what’s involved in integrating OBIEE content in with it, first by adding BI views directly into the ADF application, and then by delivering them as part of WebCenter Portal.

To start off, you’ll need Oracle JDeveloper 11g, with the version number matching the version of OBIEE and WebCenter Portal that you’re going to be using (at the time of writing, 11.1.1.6). Note that you can’t use JDeveloper 12.x – this is built on WebLogic 12c and isn’t compatible with either WebCenter Portal, or OBIEE, and presumably we’ll need to wait for 12c versions of those products before JDeveloper 12c can work with them. Once you’ve got JDeveloper installed, you’ll need to add a couple of JDeveloper extensions to make use of WebCenter Portal functionality (the first is mandatory, the second you might as well add at the same time):

  • WebCenter Customization Framework Design Time
  • WebCenter Framework and Services Design Time

Once you’ve added these, you can create a simple WebCenter Portal : Framework application by selecting this application type from the New menu.

Sshot 21

Then, after stepping through a few wizard pages, you end up with a basic ADF application that uses the Portal : Framework component set to provide user login, page tabs, page templates and so forth. Once you deploy this application to the built-in Oracle WebLogic application server for testing, you can see the initial application, log in and test it.

NewImage

This basic portal has no public pages associated with it except for this login (home) page, but you can add more pages to it, either from within the application itself using the same administration menu that we looked at in the previous post, or programmatically using JDeveloper and ADF. For now though, let’s think about how we can add BI content into this application.

  • You can add BI objects such as analyses, KPI watchlists and dashboards directly into an ADF page definition, as we did in last year’s OpenWorld presentation
  • You can add the BI Composer component to your application, in order that users can create their own analyses to add to your application
  • You can create a connection through to the BI Presentation Services Catalog and give users the ability to add their own BI content into the portal page, as we did with Portal : Spaces in the previous post

Before we can do any of these things though, we need to take a couple of additional steps within JDeveloper 11g to enable BI content within the application. First off, there’s another set of JDeveloper extensions to install:

  • Business Intelligence SOAP Connection
  • Business Intelligence Logical SQL View Objects
  • Business Intelligence ADF View Regions
  • Business Intelligence ADF Task Flow
  • Oracle BI Composer Extension

If you want your users to be able to save personalisations that they make to analyses, dashboards, prompts and so forth, you then need to enable the MDS Runtime feature in your JDeveloper project – this is the same metadata store that OBIEE uses (stored in a database schema, rather than the default file store that JDeveloper gives you). Once you’ve done that, using the same Project Properties dialog in JDeveloper you should add Business Intelligence ADF View Components to the set of technologies that your JDeveloper project uses, so that you’ll be given access to these features when creating your project.

NewImage

Regardless of how you add BI content to your ADF application, the next step is to create a connection in JDeveloper to your BI Presentation Services server. Typically, you’d want to set up a BIImpersonateUser within your OBIEE security realm and have the ADF application connect as that, so that when the application uses security and the user enters their login credentials, these are passed to OBIEE and used (via impersonation) to view the catalog and analyses as that person (assuming usernames match across the systems). Using this approach, the simplest way to create the connection is to right-click on the Connections folder in the JDeveloper Application Resources pane, select New > BI Presentation Services … and then enter the connection details into the dialog that’s then displayed.

NewImage

A couple of things to bear in mind when creating this connection: first, if you intend to deploy the ADF application to a proper application server (rather than the embedded WebLogic Server within JDeveloper), you’ll need to package up the Presentation Server credentials using migrateSecurityStore.py and then import them into that application server’s credential store, so that the application can refer to them when trying to connect to OBIEE (see this presentation from last year’s OpenWorld for the process for JDeveloper/OBIEE 11.1.1.5). Second, you may well have SSL set up on your OBIEE server, and if that’s the case you’ll need to generate an SSL certificate on the OBIEE side and then import it into the environment that’ll be running your application – details of what’s involved are in the OBIEE 11.1.1.6 Developers Guide manual. For now though, we’ll go with a simple unencrypted connection, using the BIImpersonateUser to make the initial connection and then the application’s logged-in user credentials to browse the Presentation Services catalog.

Next I need to add the Business Intelligence ADF View Object component to the project’s technology scope, by right-clicking on the application within JDeveloper and selecting it from the Available list, like this:

Sshot 3

You’re now at the point where you could, if you wish, start dragging and dropping Presentation Catalog content directly into the ADF application, typically in a new page. To create a new page and add BI content directly into it, start by locating the Pages folder within the Portal / Web Content / oracle / webcenter / portalapp / pages project folder, right-click and select New … Web Tier > JSF Page, like this:

Sshot 1

This creates a new JavaServer Faces page (a web technology designed for templated, rich-UI, rapid-development applications) and gives you the option to select one of the Portal : Framework templates, or create a regular JSF page with no pre-formatting. In this first instance we’re actually going to add a BI view directly into the page, to see how this compares with using the Portal : Framework approach. To do so, I locate some BI content (in this case, the Order Details dashboard page from the SampleAppLite application) and then either drag it onto the Design view of the page, or directly into the page source, placing it where I’d like the content to appear on the page. When you do so, if the BI object has parameters (i.e. filters) associated with it, a dialog is shown allowing you to enter values manually, or associate the parameters with values from variables, or with other components such as drop-down menus that could provide values for them.

NewImage

If you deploy BI content like this, you’ll see that it’s embedded directly into the application web page, which may be what you want if you’re delivering a fixed application where users don’t need to add content or manage the page layout themselves. One final step before we try out the new page is to add it to the Portal page menu via the Portal / Web Content / oracle / webcenter / portalapp / page hierarchy / pages.xml file, and set permissions on it so that the page only displays once a user has successfully logged in (for security purposes, and also so that we can pass a user ID through the impersonation system and display their correct view of the Presentation Services catalog).

NewImage

Once you’ve deployed the application to WebLogic (either a full install with all of the WebCenter Portal components, or the built-in WebLogic server that comes for testing purposes with JDeveloper 11g) you’re then presented with the login page, like this:

Sshot 7

Then once you’re successfully authenticated, the new page comes up on the top-level page menu, and when you click on it, the BI content we’ve just added is displayed.

Sshot 6

So that’s the first option for adding BI content, where we’ve embedded it directly in the application. But what about enabling it via Portal : Framework, and making it available through the list of content types that’s presented to users in the WebCenter Portal : Resource Catalog dialog, alongside blogs, wikis, polls and other collaborative content? We can do this, although it requires a bit more configuration on the application side in JDeveloper; once that’s done, everything is registered for the user and they can simply add BI content as portlets, customising pages and content as they wish.

Before we do this though, there’s another setup step in JDeveloper, this time adding some Java libraries into the application to include more BI functionality, like this:

Sshot 12

These libraries came as part of the JDeveloper extensions we added earlier, and some of them (BI ADF Runtime, for example) may already be registered. We also need to edit four files that come as part of the JDeveloper project:

  • Portal / Web Content / oracle / webcenter / portalapp / catalogs / default-catalog.xml, to register the BI Presentation Server as a portlet source
  • Portal / Web Content / WEB-INF / weblogic.xml, to add some library references (these may already be there)
  • Descriptors / META-INF / weblogic-application.xml, again to add another library reference
  • Descriptors / META-INF / jazn-data.xml, to enable permissions on a BI ADF task flow type that’s now available for use

I won’t detail the exact edits here as they change from release-to-release, but this bit of documentation is what I used and I can confirm it all worked for me.

Now it’s a case of creating a new JSF page for your application but this time choosing one of the WebCenter Portal templates, and then adding the Change Mode Button or Link component, along with the Page Customizable component, to make this new page editable and customisable by end-users.

NewImage

One final step is to right-click on this new page and select the Edit Authorization… menu item, then set permissions on the page so that authorised users can customise, view and perform other required functions with the page.

NewImage

All that’s left now is to deploy the application, and then log in to see the set of pages available to us. When I click on the new page that I’ve just created, there’s an Edit link in the corner, which when clicked on opens that particular page for editing, via a feature called Oracle Composer. When I click on the Add Content button that’s then displayed, I can select from the list of content in the Resource Catalog, which now includes a folder for content from the BI Presentation Services catalog.

NewImage

Selecting Oracle BI from the folder list then displays the top-level folders in the Presentation Catalog, and I can then navigate through this using the logged-in user’s credentials (via the BIImpersonateUser).

Sshot 16

Finally, when I add the required catalog object to the portal page, I can see it in preview form on the page, allowing me to resize the display area and make any other changes before saving it and returning to the main display view.

Sshot 13

So there you have it. Over the past three postings we’ve looked at what WebCenter Portal is, and how you can use the prebuilt WebCenter Portal : Spaces product to quickly put together a team portal and add BI content to it. In this last posting, we’ve looked at the componentry behind WebCenter Portal and how you can create your own Portal applications from the ground up in JDeveloper, and use the Portal : Framework features to programmatically add BI content to your portal application. That’s it for now, but keep an eye out for further postings on adding BI Composer to ADF applications, and using OBIEE as a data source for ADF applications along with using ADF view objects as data sources for OBIEE.

UKOUG Oracle BI Partner of the Year – Vote for Rittman Mead

Rittman Mead has been nominated for the UKOUG Partner of the Year Awards for another year, in the Business Intelligence category.

We take awards like these very seriously as we feel they reflect the effort and service we provide directly to our clients and indirectly to the wider Oracle BI community through the blog and speaking at various conferences. Could I therefore urge everyone to vote for us at the following link: www.registrationline.org.uk/pya/voter.asp.

Introducing Oracle Enterprise Data Quality

Visually, BI dashboards and analyses can be impressive and appear to meet the user’s needs, but if the data displayed on the dashboards isn’t reliable or correct then those same users can quickly lose faith in what they contain. It’s therefore essential that assessing and managing data quality becomes part of any BI and data warehousing project.

Oracle recently introduced Oracle Enterprise Data Quality (EDQ) as their latest solution to the data quality “problem”, originally developed by Datanomic and known as “Dn: Director”. The acquisition of Datanomic is part of a renewed push by Oracle into data quality management. The acquisition follows on from previous efforts based around Oracle Data Integrator and, before that, Oracle Warehouse Builder.

Readers of this blog may well remember previous data quality initiatives based around ODI and OWB, with products like “Oracle Data Profiling and Quality” (OEM’d from Trillium) and “Data Quality Option for Oracle Database” based around Oracle Warehouse Builder. Mark posted a summary of the two tools a few years ago, but it’s probably true to say that neither gained much traction within the marketplace, hence the reboot of their strategy around Datanomic and what is now Oracle Enterprise Data Quality.

Of course “data quality” as a topic area has been around for a while and there have always been tools to address it, so why should we be interested in another Oracle initiative in this area? Enterprise Data Quality is a bit different to Oracle’s previous efforts, principally in that it’s more of a complete solution, and it’s been on the market (successfully) for a while in its Datanomic guise. It also tries to extend the data quality management process through to business users, who in the end govern the data and have the power to improve its quality long-term.

As a product, EDQ has more to offer in terms of data quality than tools like ODI and OWB in that it’s a complete data quality management platform, rather than being just a plug-in to an ETL tool. Key features include:

  • All data quality operations are bundled in one tool (profiling, auditing, cleansing, matching).
  • EDQ is designed to be easy and intuitive to use, making it possible to involve business users in the data quality management process.
  • Support for data governance best practices, and display of data quality KPIs through a dashboard.
  • The ability to create and raise cases (like help-desk tickets) as data quality issues are identified, which can then be assigned to developers or business users to resolve in a structured, tracked way.
  • A library of standardized data quality operations that can be applied and if needed, customized, the aim being to avoid “re-inventing the wheel” and instead leveraging standard techniques to identify and resolve common data quality issues.
  • Although EDQ runs standalone, Oracle have also developed an ODI plug-in (a tool that you add to packages and load plans) that calls out to EDQ functionality, in a similar way to the ODI tool that currently integrates with the Trillium-based Oracle Data Profiling and Quality. We’ll look at this tool and how EDQ works with ODI in a future post.
  • Another feature of EDQ is the ability to build a package and then deploy and re-use it across the whole organization. Packages can also be developed for specific vertical markets and applications, such as fraud detection and risk and compliance. Datanomic took their core data quality technologies and used them to create packaged DQ products such as their risk and compliance application, an approach that gained a fair bit of traction in areas such as financial services and banking. A good example of such a solution is Watchscreen.

Future Roadmap of Oracle Data Integration Solution

Datanomic’s data quality software will become part of the Oracle Data Integration solution and will deliver a complete solution for data profiling, data quality and data integration. In addition, Datanomic offered a number of data quality-based applications, pre-packaged business-ready solutions.

DQ Main principles

Data quality can be thought of as having four main principles, with the EDQ platform providing solutions aligned with them:

  • Understand the data
    Analyze the data; identify issues, gaps and errors.
  • Improve the data
    Transform and correct the data to improve its quality.
  • Protect the data
    Maintain a continuous process that measures and corrects the data, by integrating quality checks into the daily ETL load.
  • Govern the data
    Monitor data quality and show the results on a dashboard.
    Track and resolve issues through a defined process.

The main application of EDQ, called “Director”, is used for profiling, analyzing and cleaning your data. EDQ has capabilities for logging, detecting issues and assigning them to resources, as well as monitoring and matching data.

EDQ works in a similar way to ETL tools: you define sources and targets, then use mappings to transfer the data. In EDQ, mappings are called processes; these can be scheduled to run at a chosen time, and can be integrated with the load plans and packages used by Oracle Data Integrator, through the Open Tools SDK.
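EDQ processes are assembled graphically rather than coded, but conceptually a process is just a chain of processors applied in sequence to each record. Here’s a minimal Python sketch of that idea; all of the names below are illustrative, and nothing here comes from EDQ’s actual API:

```python
def make_process(*processors):
    """Chain processors into a single callable, analogous to an
    EDQ process built from individual processors."""
    def process(record):
        for processor in processors:
            record = processor(record)
        return record
    return process

# Two tiny example processors (illustrative only)
strip_whitespace = lambda rec: {k: v.strip() for k, v in rec.items()}
upper_country = lambda rec: {**rec, "country": rec["country"].upper()}

clean = make_process(strip_whitespace, upper_country)
result = clean({"name": " Alice ", "country": "uk"})
# result == {"name": "Alice", "country": "UK"}
```

The benefit of composing small, single-purpose steps like this is the same one EDQ aims for: each processor can be tested, reused and recombined independently.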

Example of a transformation

In the example below, data is loaded from a file or a database table, after which the data is analyzed for invalid characters and cleaned. Finally, the ‘bad’ data is saved to a separate file or table, and the ‘good’ data is written to the target file or table.
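The good/bad record split described above can be sketched in a few lines of Python. Note that the field name and the allowed-character rule below are illustrative assumptions for the sake of the example, not part of EDQ:

```python
import re

def split_records(records, field, pattern=re.compile(r"^[A-Za-z0-9 .,'-]+$")):
    """Split records into 'good' and 'bad' lists based on whether a
    field contains only an allowed set of characters."""
    good, bad = [], []
    for rec in records:
        value = rec.get(field, "")
        if value and pattern.match(value):
            good.append(rec)
        else:
            bad.append(rec)  # invalid characters, or a missing value
    return good, bad

rows = [
    {"name": "Alice Smith"},
    {"name": "B0b@Jones!"},   # contains disallowed characters
    {"name": ""},             # missing value
]
good, bad = split_records(rows, "name")
# good holds the one clean record; bad holds the other two for inspection
```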

Prebuilt processors
EDQ has prebuilt processors, which can be customised if required. A group of processors makes a process (similar to an ETL mapping). I will only highlight some of the available tool palettes.

The frequency profiler is particularly interesting because it helps you understand your data and spot the first data issues. When a column value occurs abnormally often, or abnormally rarely, you may want to investigate those fields further.
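The idea behind the frequency profiler is easy to reproduce outside EDQ. Here’s a minimal Python sketch, with a made-up column and sample values for illustration:

```python
from collections import Counter

def frequency_profile(records, column):
    """Count distinct values in a column; rare values are often typos,
    while over-frequent ones are often defaults or placeholders."""
    return Counter(rec.get(column) for rec in records)

rows = [{"country": c} for c in
        ["UK", "UK", "UK", "USA", "USA", "U.K."]]  # "U.K." is a suspect outlier
profile = frequency_profile(rows, "country")
# profile.most_common() lists values by frequency, surfacing the
# low-frequency "U.K." variant as a candidate data quality issue
```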

As you can see in the above screenshot, there are a lot of transformation processors that you can make use of, to, for example, convert a string to a date, concatenate strings, replace characters within strings and so forth.

There are also many auditing options, such as checks on email addresses and business rules. Remember that these processors are all prebuilt, and as mentioned before you can define your own processes or customize existing ones.

EDQ Reporting and case management

Having the business involved is key in any data quality project; therefore the EDQ platform has an integrated tool for data quality issue management. Within EDQ there is another tool for the creation of reports and dashboards, which can help to improve communication with the business as they will be able to see any data quality issues in graph form.

Data Profiling

Data profiling is concerned with the initial analysis of a data source, in order to understand the data and identify issues with its quality or content. In EDQ data profiling is automated, so that when creating a process a wizard prompts the user to begin with data profiling. After adding data profiling to a project, you can then analyze the initial results within the results browser. As an example, I have loaded a sample of a customer table and started profiling it, as shown in the screenshot below:

The data profiler checks each column for the consistency of its format – date formats, for example. In the above screenshot the column ‘username’ is only 71.4% consistent, so a closer look is needed to validate its data content.


The results browser also indicates whether a field has unique values, duplicates or null values.
By profiling a sample of the data set, a trend for the complete dataset can be derived; which approach you take depends on your data quality strategy.
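The kind of per-column statistics the results browser shows – nulls, distinct values, duplicates and format consistency – can be approximated with a short Python sketch. The ‘shape’ rule below (letters to A, digits to 9) is an illustrative stand-in for EDQ’s own pattern analysis, not its actual algorithm:

```python
def profile_column(values):
    """Simple per-column profile: null count, distinct/duplicate counts,
    and the consistency of the most common 'shape' of the values."""
    non_null = [v for v in values if v is not None and v != ""]
    shapes = {}
    for v in non_null:
        # reduce each value to a pattern: letters -> A, digits -> 9
        shape = "".join("A" if ch.isalpha() else "9" if ch.isdigit() else ch
                        for ch in v)
        shapes[shape] = shapes.get(shape, 0) + 1
    top = max(shapes.values()) if shapes else 0
    return {
        "nulls": len(values) - len(non_null),
        "distinct": len(set(non_null)),
        "duplicates": len(non_null) - len(set(non_null)),
        "consistency": round(100.0 * top / len(non_null), 1) if non_null else 0.0,
    }

stats = profile_column(["2012-01-01", "2012-02-15", "01/03/2012", None])
# three non-null values, two sharing the shape 9999-99-99,
# so the column is flagged as only partially consistent
```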

Data Cleansing
Data cleansing is the process of detecting and correcting corrupt or inaccurate records from a record set, table or database.

Implementing data cleansing is very straightforward: you create a process, then add or create the processors you need. In the following screenshot an email address column is being validated, and the records are also checked for duplicates. All validated records go to the target file or table, and all bad records are saved in a temporary file or table. Depending on your data quality strategy, you can then either correct the invalid data, or save it and keep it as a source for improving data quality over time.
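As a rough Python illustration of that cleansing logic (the email pattern here is deliberately simplistic, and the field name is an assumption for the example, not something from EDQ):

```python
import re

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")  # deliberately simple check

def clean_emails(records):
    """Validate an email column and drop duplicates; invalid or repeated
    records are routed to a 'bad' list for later inspection."""
    good, bad, seen = [], [], set()
    for rec in records:
        email = (rec.get("email") or "").strip().lower()
        if not EMAIL_RE.match(email):
            bad.append(rec)          # fails the format check
        elif email in seen:
            bad.append(rec)          # duplicate of an earlier record
        else:
            seen.add(email)
            good.append(rec)
    return good, bad

rows = [{"email": "a@example.com"},
        {"email": "A@Example.com"},  # duplicate after normalisation
        {"email": "not-an-email"}]
good, bad = clean_emails(rows)
```

Routing rejects to a separate list rather than discarding them mirrors the EDQ approach of keeping the ‘bad’ data as a source for later improvement.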

Processing / Scheduling
Once the processes have been built and tested, they are placed into a job and scheduled.

EDQ and ODI Integration
Typically EDQ will be used within an ODI data flow (during a data migration, or when populating a data warehouse). Oracle Data Integrator 11.1.1.6.0 introduced a new Open Tool named “Enterprise Data Quality” that allows ODI developers to invoke an Oracle Enterprise Data Quality job from a package or load plan. This Oracle blog article explains how to integrate EDQ and ODI in this way.

Data Quality and the Oracle Reference Architecture
Oracle recommends checking data quality before any data gets loaded into the Enterprise Data Warehouse (EDW). According to the Oracle Reference Architecture, data should be cleansed before it enters the staging area, and should be maintained in one single data source.

Hardware recommendations
For processing data volumes of up to 10 million source records, Oracle recommends a server with 4 cores, 16 GB RAM and a 250 GB hard disk for the EDQ repository.

EDQ architecture
EDQ has a web-based graphical user interface and several client applications. The server application is built around a Java servlet engine and a SQL RDBMS (the data repository), using a repository that contains two database schemas: the director schema and the results schema. The director schema stores configuration data for EDQ, while the results schema stores snapshot, staged and results data. The results schema is highly dynamic, with tables being created and dropped as required to store the data handled by processors running on the server. Temporary working tables are also created and dropped during process execution, to store any working data that cannot be held in the available memory. When loading and processing data, the server’s memory is used heavily and there is a lot of I/O to the database.

Conclusion
As mentioned at the beginning of this post, most BI projects suffer in some way from data quality issues, and so a data quality strategy should really be put in place. Oracle’s acquisition of Datanomic can help deliver data quality solutions within BI projects, based on Datanomic’s proven experience in this area. EDQ differentiates itself from other data quality tools in that it offers a full data quality solution, and a further strong point is that many of its processes come prepackaged and ready to use. The tool is designed for close communication with business users, through its case management and reporting features, and it is easy and intuitive enough that business users can be involved in the creation of data quality processes. We will keep you posted on further news regarding the EDQ solutions.

Looking Towards the BI Apps 11g Part 2 : Oracle BI Apps 11g Technology Innovations

In our previous posting in this series on the Oracle BI Applications 11g, we looked at the overall product roadmap for Oracle BI Applications 11g, and how in the next twelve months the 7.9.6.x branch that currently covers Applications Unlimited customers will eventually merge with the 11.1.1.x release, giving us a platform that covers all of Oracle’s ERP applications and supports both ODI and Informatica as the ETL tools. For a recap of yesterday’s post and pointers to the other posts in this set, here are links to the whole series:

So for Apps Unlimited customers wondering what the move to BI Applications 11g will bring them, and for existing BI Applications customers interested in where the platform is going, let’s look at what’s in the BI Apps 11g now and what’s been announced as planned functionality.

The current version of Oracle BI Apps 11g (at the time of writing, 11.1.1.6) is recognisable to all BI Apps customers in that it uses Informatica PowerCenter as the ETL engine, and uses OBIEE 11g as the BI platform. The DAC is still used to orchestrate Informatica mappings, but behind the scenes there are a couple of new tools aimed at making the setup and configuration process a bit simpler.

Oracle BI Applications Configuration Manager is a web-based application that takes you through the initial configuration steps for the BI Apps, and the data warehouse and ETL routines that provide its data.

NewImage

Configuration Manager is concerned with defining system-level settings such as sources and targets, initial load date and other DAC parameters, and in certain places launches another web-based tool, Oracle BI Applications Functional Setup Manager, which helps you select and then set up the various modules that you’ve licensed for your deployment of BI Applications 11g.

NewImage

In this initial release that uses Informatica in the background, these tools work with the DAC but don’t replace it; for the upcoming BI Apps 11g release that supports ODI as the data integration tool though, these tools will (with additional functionality) replace the DAC, making all ETL control tools web-based.

Installing the BI Apps 11g in its current form is very different to previous BI Apps 7.9.6.x installations, in that you install it as part of the Oracle Fusion Applications – there's no separate download. Presumably, once Apps Unlimited customers start getting supported, there'll be standalone installers similar to the ones currently used for the 7.9.6.x branch.

For the front-end, the major innovation in the BI Apps 11g is that as well as being available through traditional dashboards, BI content will also be embedded directly in the Fusion Applications; BI permeates the whole platform, with analyses, dashboards, scorecards and catalog views distributed throughout the product suite. So as well as providing a BI-centric dashboard interface aimed at analysts and managers, BI views will also be provided right alongside the transactional applications that users work with all day, broadening BI "smarts" to everyone in the business who needs to make a decision, and to the more casual users who just need access to one or two reports to do their job.

NewImage

BI within the Oracle Fusion Apps actually comes in two forms, the first of which comes standard as part of the Fusion Applications, the other being an additional license cost:

  • Oracle Transactional BI, included as standard with the Fusion Apps, is used for operational and real-time reporting on the Fusion Apps database; it uses the OBIEE 11g framework and accesses current-state transactional data through ADF View Objects
  • Oracle BI Applications, the BI Apps 11.1.1.x we've been talking about so far and licensed separately from the Fusion Apps, appears as additional analyses, dashboards and other BI objects in the catalog; it focuses on trend and analytic views and uses the Oracle Business Analytics Warehouse as its data source

Oracle BI Composer, the lightweight version of the OBIEE Analysis Editor previewed on this blog a year or so ago, will also ship with the Fusion Apps to provide a simple, guided environment for end-user report creation. BI Publisher will be there in the background providing published reports, and Oracle BI Mobile will be used for delivering analytics on mobile devices.

As well as these two "horizontal" BI products there'll be, as with the current 7.9.6.x branch, specialist analytic applications around areas like salesforce management, pharmaceuticals, marketing and so on. The diagram below shows the BI Applications 11g architecture, with the OBIEE BI Repository mapping to both real-time data sources via ADF view objects, and to historical and trend data via the Oracle Business Analytics Warehouse.

NewImage

The Oracle Business Analytics Warehouse uses a different data model to the current 7.9.6.3 OBAW, but as there won't be migration support for moving from Informatica to ODI, this won't really affect people, as anyone using this data model will be working on a new implementation anyway.

All of the elements mentioned in this post are actually already in place with the BI Applications 11.1.1.6, but going forward there are a number of planned innovations coming along aimed at reducing the total cost of ownership for the platform:

  • BI Extender, which will interrogate the Fusion Applications setup and identify flex-fields that need to be pushed down into the BI Apps repository and ETL process
  • Support for data integration tools like Oracle GoldenGate to permit real-time, trickle-feed data loads into the Business Analytics Warehouse
  • Updated, "consumer-style" dashboards based around perspectives such as "Projects", "People", "Risks" and "Finances", using some of the concepts brought over from the Endeca acquisition.
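To give a feel for what a GoldenGate-based trickle-feed into the Business Analytics Warehouse might involve, here's a minimal sketch of a GoldenGate Replicat parameter file that continuously applies captured source-table changes to a warehouse staging table. This is purely illustrative – the process name, schemas and table names (`EBS.OE_ORDER_LINES_ALL`, `OBAW_STAGE.W_ORDER_LINE_FS`) are hypothetical, and Oracle hasn't published the actual configuration the BI Apps will use:

```
-- Hypothetical Replicat parameter file (names are illustrative only)
REPLICAT rbiapps
-- Connect to the target warehouse schema
USERID obaw_stage, PASSWORD obaw_stage_pwd
-- Source and target table structures are assumed identical
ASSUMETARGETDEFS
-- Apply source order-line changes to a warehouse staging table,
-- from which the incremental ETL picks up new and changed rows
MAP EBS.OE_ORDER_LINES_ALL, TARGET OBAW_STAGE.W_ORDER_LINE_FS;
```

The point of a setup like this is that changed rows land in the staging area within seconds of the source transaction committing, so the ETL only has to transform what's already been delivered rather than re-extracting from the source system in a nightly batch window.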

From my perspective, it'll be interesting to see how Oracle balances the opportunities for tighter integration between the Fusion Apps and the BI Applications against maintaining support for Apps Unlimited customers; and the integration opportunities that come from owning both ODI and OBIEE together with the BI Apps against maintaining support and relevance for Informatica customers. So, with all of this in mind, what can you do as customers, and developers, to get yourself ready for the BI Applications 11g? Keep an eye out for the last post in this short series, when we look at getting yourself ready for BI Applications 11g.