Looking Towards the BI Apps 11g Part 1 : BI Apps 11g Product Roadmap
Whilst this blog is best known for articles on OBIEE, ODI, Essbase and data warehousing, many of the projects we work on are actually Oracle BI Applications implementations, run out of our offices in the UK, India, USA and Australia. Recently we've been asked more and more about the product roadmap for the BI Applications, with questions typically around whether ODI will be supported going forward, whether ODI will actually replace Informatica, and whether customers can upgrade to BI Applications 11g now. So over the next three postings, I'll be looking at what's coming up for the Oracle BI Applications, answering some questions around migrations and technology platforms, and looking at what we can do as customers and developers to get ourselves ready for BI Apps 11g.
I'll add the links in as the postings are published, but here are the topics and links for the other parts of the series:
- Looking Towards the BI Apps 11g Part 1 : Oracle BI Apps 11g Product Roadmap
- Looking Towards the BI Apps 11g Part 2 : Oracle BI Apps 11g Technology Innovations
- Looking Towards the BI Apps 11g Part 3 : Preparing for Oracle BI Apps 11g
For anyone new to Oracle BI or Oracle's packaged applications, the Oracle BI Applications take the Oracle Business Intelligence Enterprise Edition platform and use it to deliver sets of predefined ETL routines, dashboards and analyses, all built on a common, dimensional data model. Oracle BI Applications is licensed per module (Financials, Supply Chain and so on), and separately from the BI Apps purchase you also need to license Informatica PowerCenter, a third-party ETL tool that Oracle resells. The ETL routines are orchestrated by an Oracle product called the Data Warehouse Administration Console, or DAC, which maintains a central record of the ETL routines required to load your data. For some background on the BI Applications 7.9.x series, take a look at this Introduction to the Oracle BI Apps posting from back in 2008, and this one on what's in the Oracle BI Applications data warehouse from around the same time.
When Oracle acquired Siebel back in 2006, from a BI perspective Siebel had two main product lines: Siebel Analytics, which became the foundation for Oracle Business Intelligence Enterprise Edition, and Siebel Business Analytics, which is more or less what the Oracle BI Applications are today. Siebel Business Analytics covered areas such as sales analytics, service and contact centre analytics, as well as ERP areas such as financials and supply chain; if you're interested in a bit of history, here's an original Siebel Business Analytics brochure from 2005 that details both the Business Analytics and Analytics products. In fact, the CRM and ERP parts of Siebel Business Analytics had different data models, with the ERP content originally coming from Informatica Warehouse, a product Informatica sold alongside PowerCenter and their own query tool, Informatica PowerAnalyzer (technology Informatica itself picked up when it acquired a company called Influence Software back in the late '90s). Already well underway by the time of the Oracle acquisition was "Project Tenerife", an initiative to merge the two data models together, with the product then becoming the Oracle BI Applications that we know today.
Now though, there are three distinct development branches of the BI Applications, and a fourth one that had just one release and then ended:
- Oracle BI Applications 7.9.6.x (currently at 7.9.6.3), the mainstream version of the BI Applications, with modules covering Oracle EBS, Peoplesoft, JD Edwards and Siebel source applications (often referred to as "Applications Unlimited" sources, reflecting Oracle's commitment to support these established product lines indefinitely)
- Oracle BI Applications 11.1.1.x (currently at 11.1.1.6), the next-generation version of the BI Applications that currently only sources data from the Oracle Fusion Applications, but in time will cover the Applications Unlimited sources as well
- Oracle BI Applications 7.9.7.x, which sources data from SAP and offers a much smaller set of analytic modules, but unlike the other two BI Apps branches uses Oracle Data Integrator (ODI) for the data integration tasks, not Informatica PowerCenter.
Those of you with long memories might also remember a branch of the BI Apps back in 2009 that sourced data from Oracle EBS but used Oracle Data Integrator as the data integration tool. Oracle BI Apps 7.9.5.2 turned out to be a one-off release, with no further modules, sources or targets covered, but it was interesting as a preview of how Oracle might tackle an eventual move to ODI as the ETL tool, and it also highlighted those areas where ODI didn't have the functionality required to replace Informatica. We covered the 7.9.5.2 release in quite a bit of detail when it first came out, and you can read some background to it in four blog posts that set out an introduction to the release, the technology changes made to accommodate ODI as an alternative to Informatica PowerCenter, data loading and customisation.
Oracle have also stated some long term plans and objectives for the BI Applications:
- To enable customers to choose between either Informatica PowerCenter, or Oracle Data Integrator, as their data integration tool
- To eventually support Applications Unlimited customers within the BI Apps 11g release as well, so that it will cover Oracle EBS, Oracle Fusion Applications, Siebel CRM, Peoplesoft and JD Edwards
- To try and reduce the total cost of ownership for the BI Apps, particularly around areas such as setup and configuration, and the customisation process
- To increase the scope and depth of the subject area modules, move ETL as near to real-time as possible, and make it practical to run the BI Applications in the cloud.
As of Collaborate '12 in April 2012 in Las Vegas, the current release roadmap for these three development branches, as presented by Florian Schouten, Oracle's Senior Director of Product Management/Strategy in charge of the BI Apps, looked like this:
What this is telling us is a few things:
- There'll probably be one more major BI Apps 7.9.6.x release, more of a tidy-up that rolls in accumulated bug fixes, extensions and so on. This will then be the terminal release for this branch of the BI Apps
- At some point in the next twelve months, there’ll be a release of the BI Applications 11.1.1.x that will support Apps Unlimited customers but will only use Oracle Data Integrator as the data integration tool.
- Later on in the next twelve months, there will be a further 11.1.1.x release that will also offer Informatica as the ETL tool for Apps Unlimited customers.
This has a few implications and things to be aware of:
- At the point where the 11.1.1.x release supports Apps Unlimited customers but only through ODI, you’ll be able to start new, green-field BI Apps projects covering both the Fusion Apps and Apps Unlimited data sources, but there’ll be no migration path to take you from Informatica to ODI, so existing BI Apps customers using Apps Unlimited sources will need to wait for the next release before moving to BI Apps 11g
- When the subsequent 11.1.1.x release comes along, again within a (planned) twelve-month timescale, existing Apps Unlimited customers will then be able to move to 11g, as Informatica will be supported again. There'll still be no (automated) Informatica-to-ODI migration path though, so the current thinking is that if you're on Informatica, you'll stay on Informatica
Going forward from this point, the plan is to offer all modules and all sources through both Informatica and Oracle Data Integrator, with no forced migration from one to the other. SAP customers using the 7.9.7.x product will stay in their own development stream for the time being, continuing to use ODI as their data integration engine, with the aim being to merge the SAP stream back into the BI Apps 11.1.1.x series in due course.
So now we know a bit more about the roadmap, what does Oracle BI Applications 11g look like, and what new end-user, development and administration tools can we expect to see? Stick around for a couple of days and we’ll cover this topic next in the series.
Last Week to Complete the BI Survey 11!
[Re-posted as there's just one week left to get your responses in...!]
Every year we're pleased to help publicise the BI Survey, an annual independent survey of BI tools customers organized by BARC. What's good about the BI Survey is that it helps you understand what others using your favourite BI tool think of it, and where it's useful and not so useful; it's also a good way to gauge how your BI tools are rated relative to the competition. There are also sections on BI implementation approaches, tool selection and so forth, so it's a good all-round survey of the BI tools marketplace (note that we have no financial or commercial interest in, or links to, BARC; we just think they're "good guys").
Anyway, here’s the invite. Make sure you take part, so that there’s a good sample size for Oracle BI, EPM and OLAP tools in the survey:
“We would appreciate your participation in ‘The BI Survey 11: The Customer Verdict’, the world’s largest survey of business intelligence (BI) and performance management (PM) users.
Click the link below to take part:
https://digiumenterprise.com/answer?link=982-8V8SYN3Z
As a participant, you will:
- Receive a summary of the results from the full survey
- Be entered into a draw to win one of ten $50 Amazon vouchers
- Ensure that your experiences are included in the final analyses
BARC’s annual survey gathers input from a large number of organizations to better understand their buying decisions, their implementation cycles and the benefits they achieve from using BI software. The BI Survey 11 is strictly vendor-independent: BARC does not accept vendor sponsorship of the Survey, and the results are analyzed and published without any vendor involvement.
You will be able to answer questions on your usage of a BI product from any vendor. Your answers will be used anonymously, and your personal details will never be passed on to vendors or other third parties. Business and technical users, as well as vendors and consultants, are all welcome to participate.
The BI Survey 11 should take about 25 minutes to complete. For further information, please contact Silke Hopf at BARC (shopf@barc.de).”
How is OBIEE running for you?
We all need a check-up every now and then: a visit to the doctor, taking the car to the mechanic, running a virus scan on the computer. OBIEE is no exception.
The daily loading of data, report development and re-development, modifications and changes can all have an impact on your environment. A bit like someone who doesn't know when to stop at the all-you-can-eat buffet.
If your users are saying "This is taking almost twice as long as it did before" or "The report we created is not returning the right results", maybe it's time for a health check.
Rittman Mead has for years been offering OBIEE health checks to customers looking to identify issues, improve performance or ensure best practices are being utilized.
Depending on the size of your OBIEE rollout, a 5-10 day health check can be performed, along with a detailed report (written in plain English), so you have a complete overview of the health of your OBIEE implementation.
Creating an Oracle Endeca Information Discovery 2.3 Application Part 3 : Creating the User Interface
In the first part of this three-part Oracle Endeca Information Discovery 2.3 development series, we looked at why you would build an OEID application, and then went on to look at how you load data into the Endeca Server, the hybrid search/analytic database that powers the whole Endeca application. Here are the links to all of the articles in this series, in case you've come straight here via a Google search.
- Creating an Oracle Endeca Information Discovery 2.3 Application Part 1 : Scoping and Design
- Creating an Oracle Endeca Information Discovery 2.3 Application Part 2 : Preparing and Loading Data
- Creating an Oracle Endeca Information Discovery 2.3 Application Part 3 : Creating the User Interface
At the end of the previous post in this series, we'd loaded records into the Endeca Server datastore, sourced from a relational fact table along with dimension attribute data from some dimension table file exports. At this point, though, whilst the system is usable, the record attribute names are as they came in from the file export, and there's no logical grouping of attributes into dimensions or functional areas. There's also some extra work we need to do to configure indexing and searching on the datastore records, tasks that we can perform either through Integrator graphs that call the Endeca Server web service APIs, or through Oracle Endeca Information Discovery Studio, an application server and dashboard development environment that's used to administer the Endeca Server as well as create the end-user search interface.
So let's take a quick look at Oracle Endeca Information Discovery Studio, or just "Studio" as we'll call it from now on. Studio is currently built on the open-source Liferay Portal framework, but over time I'd expect this to move to Oracle technology, as the charts and visuals already have done in the 2.3 release. The data visualisation portlets are built to the JSR 286 standard, and the whole thing runs as a Java application in a Java application server, by default Tomcat, though it is also installable in Oracle WebLogic Server 11g and IBM WebSphere 7.
To start using Studio you first start the Studio server, and then log into the web-based administration and development environment, where you’re presented with a sign-in portlet, any pages that you’ve added to your main page, and a menu on the right-hand side that provides access to other pages, templates and administration functions, if you’re a system admin.
In the next two screencasts in the Getting Started with Endeca Information Discovery series, on which we're basing this series of blog posts, Studio is used to perform further configuration of the Endeca Server datastore, starting with enabling more extensive search on various attributes. Here are the links to the next two screencasts:
- 3.1 – Enhance the User Experience through Configuration: Learn about the various ways you can enhance the user experience of your application through Configuration.
- 3.2 – Enable Record Search: Learn how to enable record search in your Studio application.
So how does this work, and what’s the purpose of these tasks? Going back to the very simple Studio page that I created in the previous posting to view the contents of the datastore records (we’ll get onto creating pages in more detail later on), I can see that if I start typing into the Search Box at the top left-hand side of the page, Studio performs a “value search” looking for occurrences of this text in records, highlighting those and the attributes they are contained in as matches occur.
Underneath the Search Box is another box labelled Breadcrumbs, which provides another type of search called a "record search". Record search isn't enabled by default; to enable it you need to define one or more "search interfaces" for the datastore, lists of attributes grouped by interface name that define the scope of an attribute value search. The value search box that you used earlier is a good way of identifying which attributes need to be in these search interfaces, and to create them you'll need to use Integrator again. Using Integrator, you create a new graph that takes a flat file (or database table lookup) of attribute names grouped by interface name, and then calls another Endeca Server web service API to first set the selected attributes as searchable, which then creates and enables the required search interfaces.
Then, you go back to Studio and bring up the preferences screen for the Search Box component. This then displays a list of search configurations, which initially will show that none are being used. To enable record search, you then have to create a new search configuration and add to it those search interfaces, defined in the Integrator graph a moment ago, that you wish to use with this search box, like this:
Now when you start typing into the search box, you can select a particular search interface (or just leave it at the default "All" in this example); make sure you press the Search (magnifying glass) button at the right of the box, and your records will then be filtered by that particular attribute value, as shown in the screenshot below.
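To make the shape of that Integrator input concrete, here's a minimal Python sketch of the grouping step: reading a flat file of attribute names and interface names, and collecting the attributes per search interface. The file contents and attribute names here are invented for illustration, and the actual web service call that marks attributes searchable is deliberately left out.

```python
import csv
from collections import defaultdict
from io import StringIO

# Hypothetical flat-file contents: one row per searchable attribute,
# paired with the search interface it should belong to.
flat_file = StringIO(
    "attribute,interface\n"
    "DimProduct_ProductName,Products\n"
    "DimProduct_Category,Products\n"
    "DimEmployee_FullName,Employees\n"
)

def build_search_interfaces(f):
    """Group searchable attribute names by their search interface."""
    interfaces = defaultdict(list)
    for row in csv.DictReader(f):
        interfaces[row["interface"]].append(row["attribute"])
    return dict(interfaces)

groups = build_search_interfaces(flat_file)
# Each entry in `groups` maps an interface name to the attributes it
# covers; in a real graph, each entry would be passed to the Endeca
# Server web service to set the attributes searchable and create the
# corresponding search interface.
```

The same "flat file in, grouped configuration out" pattern applies to the attribute group and display name configuration described later on.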
The next two screencasts in the series are concerned with arranging the attributes into groups (mostly corresponding to what we’d term dimension tables), and then giving them more meaningful names.
- 3.3 – Create Attribute Groups: Learn how to create Attribute Groups, both within Studio and via Integrator.
- 3.4 – Update Attribute Metadata: Learn how to update Attribute Metadata, such as display names, attribute value sort order, and selection type.
Both of these tasks are initially carried out using the Studio application, which has a Control Panel that brings up a list of datastore and Endeca Server tasks, as well as tasks specific to the Studio application. In the screenshot below, using the Attribute Settings page, attributes currently unassigned to attribute groups (and therefore in the "Other" group) are being added to new attribute groups on the right-hand side, grouping them by Product, Employee and so on.
Both this task, and the one that follows that gives the attributes more meaningful names, can instead be carried out using an Integrator graph, which takes the groupings and attribute names from flat files (or from database tables, or wherever) and uses the Endeca Server web service APIs to configure the datastore’s attribute list (the screenshot for which is at the end of yesterday’s post on data loading and Integrator). Once you’ve carried out these configuration tasks, your search interface and list of attributes in the Guided Navigation box looks a bit more user-friendly.
At this point we’ve now got the basics of a working system, and we can start to add graphs, tables and other visualisations to our dashboard page. The screencast series starts this process by adding a Cross Tab component to the page, selected from the list of components registered with the Studio application (and under the covers, Liferay).
Once added to the existing page, the Cross Tab component then needs to be configured, with the most important configuration setting being the query that returns the component's dataset, a process described in detail in the next screencast in the series:
To set up the query for the Cross Tab component, you use a query language called EQL ("Endeca Query Language"), which is like SQL but adapted to Endeca's particular needs. In the example below, we're using an EQL query to RETURN a dataset called "SalesTotal" that SELECTs the sum of SalesAmount and then GROUPs it by FiscalYear, FiscalQuarter and SalesTerritoryCountry. This EQL query will then be sent to the Endeca Server via a web service call, and the result set returned to the component for display.
EQL is similar to SQL in that you have SELECT, GROUP BY and other similar clauses, but EQL is predicated on having a single table whose rows might have different sets of attributes (columns) to each other. There are two types of EQL statement; a statement with a RETURN clause, that returns a named dataset directly back to the calling component like this:
RETURN SalesTotal AS SELECT SUM(FactSales_SalesAmount) AS TotalSales GROUP BY DimDate_FiscalYear, DimDate_FiscalQuarter, DimSalesTerritory_SalesTerritoryCountry
Note how columns referenced in the GROUP BY clause have an implied SELECT, so you don’t need to list them in the main SELECT clause. The other type of EQL statement is one that creates a temporary results set for use later on, and uses a DEFINE clause instead of RETURN, like this:
DEFINE RegionTotals AS SELECT SUM(Amount) AS Total GROUP BY Region
Once you've defined the EQL query you can test it, and then use the returned columns to create the rows, columns and metrics for the pivot table. Once you've assigned the Cross Tab settings, you can save the configuration and return to the dashboard view, where you'll see the Cross Tab returning data from the Endeca Server datastore.
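Under the covers, the component is just posting the EQL statement to the Endeca Server as a web service request. The Python sketch below shows that idea schematically, wrapping the SalesTotal query from above in a request body; note that the envelope element names are placeholder assumptions, not the documented Endeca Server message schema, and the HTTP call itself is omitted since it needs a running server.

```python
# The EQL statement from the Cross Tab example above, as one string.
EQL = (
    "RETURN SalesTotal AS SELECT "
    "SUM(FactSales_SalesAmount) AS TotalSales "
    "GROUP BY DimDate_FiscalYear, DimDate_FiscalQuarter, "
    "DimSalesTerritory_SalesTerritoryCountry"
)

def build_query_envelope(eql):
    """Wrap an EQL statement in a minimal SOAP-style request body.

    The <Query> element and envelope layout here are illustrative
    placeholders, not the real Endeca Server web service schema.
    """
    return (
        '<soap:Envelope xmlns:soap='
        '"http://schemas.xmlsoap.org/soap/envelope/">'
        "<soap:Body><Query>{0}</Query></soap:Body>"
        "</soap:Envelope>"
    ).format(eql)

payload = build_query_envelope(EQL)
# An HTTP POST of this payload to the datastore's web service
# endpoint would return the SalesTotal result set as XML for the
# component to render; that call is omitted in this sketch.
```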
The other way to provide data for visualisation components is through a view. Views, like views in an Oracle database, provide an abstraction and simplification layer for users, taking subsets of data and, in some cases, data aggregations and transformations, and making them available to select from when providing a dataset for a chart or other component. The diagram below shows the relationship between the Endeca Server data store, this “view” layer, and the dashboard components that can make use of them.
Views, and the chart and table components that make use of them, are described in the next three screencasts in the series, and the last ones that we’ll look at in these articles:
- 4.2 – Understand Views: Learn how to configure Views via the View Manager.
- 4.3 – Configure a Chart: Learn how to configure the Chart component.
- 4.4 – Configure a Results Table: Learn how to configure the Results Table component.
Views are defined within the Studio application’s Control Panel function, and once defined can be exported and then used in an Integrator graph to load view definitions programmatically.
Once you’ve created your views, they are then available for use with various components, such as the chart component in the screenshot below, which is about to use the Transactions view to provide a sales transactions line items dataset.
Finally, once you've created all the required views and set up your various visualisations, you'll end up at the QuickStart demo dashboard that we saw at the start of this article series.
So there we come to the end of this three-part series. Obviously, there’s a lot more you can do with the Endeca Information Discovery toolset, including content acquisition from sources such as web pages, Twitter feeds and the like, and there’s a lot more options around text parsing, enrichment and sentiment analysis that we’ve not touched on yet. But for now, this was a brief introduction to what’s involved in creating an OEID application, and for more details make sure you take a look at the screencast series that I’ve linked to through the article.
Oracle Fusion Middleware Innovation Awards 2012 : BI & EPM Category
At Rittman Mead we're always interested in innovative Oracle BI and EPM solutions, ones that "push the envelope" in terms of how they use BI to provide competitive advantage for their organization. Exalytics has been out and deployed for a while now at some early adopter sites, so we're very interested to hear whether moving analysis in-memory provides real customer benefits and makes new types of analysis possible. We're also keen to see whether embedded BI in applications and business processes changes how organizations run their business, and whether all of the work Oracle has done to integrate BI with Oracle Fusion Middleware is providing infrastructure benefits outside of Oracle's own Fusion Apps program.
So we’re happy to help publicise the 2012 Oracle Fusion Middleware Innovation Awards, an annual competition to pick the most innovative solutions built around Oracle’s Fusion Middleware tooling, and which has a category for Business Analytics (BI, EPM and Analytics). The winners get to go to Openworld later this year to collect their awards, and nominations are currently open with a closing date of July 17th 2012. Details of the awards and the nomination process can be found on this web page, with some additional information on this WebCenter blog post.
The awards themselves are jointly sponsored by Oracle and several user groups, including ODTUG, UKOUG and IOUG, and the winners are chosen by a panel of both Oracle and external judges, so hopefully it's not just down to which customer has bought the most Oracle products this year ;-) Anyway, the nomination form for the BI & EPM category is here, so start working on your entry now if you want to hit the July 17th deadline, and look out for the winners later this year.