Tag Archives: Oracle BI Suite EE

Previewing the New OBIEE 11.1.1.7 SampleApp

OBIEE 11.1.1.7 came out a few weeks ago, and recently we looked at Hadoop integration, one of the more interesting new features in this release. Over the next week I’ll be looking in more detail at the changes around Essbase and EPM Suite integration, but today I’ll be looking at a preview release of the upcoming OBIEE 11.1.1.7 SampleApp, provided courtesy of Philippe Lions and the BI Tech Demos team within Oracle. Philippe will be going through the new SampleApp release at the upcoming Rittman Mead BI Forum 2013 events in Atlanta and Brighton, but in the meantime let’s take a look at what’s likely to appear in this new OBIEE 11.1.1.7 demo showcase.

As with previous OBIEE 11g SampleApps, the dashboard front page lists out all of the content, and highlights in bright blue those areas that are new to this release. The 11.1.1.7 SampleApp is largely based on the earlier releases that supported OBIEE 11.1.1.5 and 11.1.1.6, with any new content either showing off 11.1.1.7 new features, or adding new functional areas to the SampleApp demo.


The best place to start looking is the New Features Demo dashboard, which highlights new 11.1.1.7 features such as performance tiles, 100% stacked bar charts and waterfall charts, on this first dashboard page:


Totals within tables, pivot tables and other visualisations can now have action links associated with them, for example to display a financial summary report:


Another page on this dashboard shows off the new layout capabilities within dashboard pages; object containers can now have fixed (absolute) width and height sizes, whilst dashboard columns, and rows/columns within table and pivot table views, can be frozen while other areas scroll by.


The new 11.1.1.7 SampleApp is likely to ship with Endeca Information Discovery pre-installed, and configured to provide the catalog search for OBIEE’s BI Presentation Services (a new feature in OBIEE 11.1.1.7). The SampleApp 11.1.1.7 screenshot below shows a typical “faceted search” against the web catalog, displaying key attributes and an attribute search box via an Endeca Information Discovery-style guided navigation pane. The benefit of Endeca providing catalog search vs. the Presentation Server’s regular search feature is that it looks much deeper into the catalog metadata, allows searching across many more attributes, and because it uses an in-memory index, it’s really fast.


There’s also some nice new Oracle R Enterprise content and demos, including an example where R scripts can be selected via the dashboard, parameter values supplied and the scripts then run; ORE using OBIEE’s BI Repository as a data source; and some more examples of ORE analysing the Flight Delays dataset to predict delays on future flights.


11.1.1.7 comes with a lot of “fit and finish” tweaks to standard visualisations, and one of the dashboard pages shows off new table and pivot table features such as tooltips over row and column values, as well as rollover headers, removable pivot table corners and the like – none of these are world-changing, but they’re often the sort of thing that particular customers want for their systems, and in the past we’ve had to hack around in JavaScript files and the like to meet similar requirements.


If you’re an Endeca Information Discovery developer who’s also interested in the state-of-play around OBIEE integration, there’s a whole dashboard setting out current examples around OBIEE / Endeca integration, including examples of parameter passing between OBIEE and Endeca Studio, Endeca using OBIEE’s BI Repository as its data source, and BI Publisher reporting against Endeca Server data via web service calls.


Finally, the dashboard pages that support DBA and developer tasks have been extended, with a new page, for example, displaying a list of all the physical SQL queries sent to the database.


Thanks to Philippe and the BI Tech Demos team for the early preview. Check back tomorrow when we’ll continue looking at what’s new in OBIEE 11.1.1.7, taking a closer look at what’s changed, and dramatically improved, in the area of integration with Essbase and the Oracle EPM (Hyperion) Suite.

Using SampleApp Scripts to run a Simple OBIEE 11g Load Test

If you’ve not done so already, I’d advise anyone interested in OBIEE 11g to subscribe to the Oracle BI Tech Demos channel on YouTube, where Philippe Lions and his team showcase upcoming SampleApp releases, and highlight new features like SmartView integration (to be covered later this week on this blog), integration with Oracle R Enterprise, what’s coming in the new 11.1.1.7 SampleApp, and OBIEE 11.1.1.7 leveraging the Advanced Analytics Option in Oracle Database 11gR2. One demo that I’ve been aware of for some time via the Exalytics program, and that’s also featured in the Tech Demos channel, is a load test demo that uses scripts and internal OBIEE features to run the test, and is used by Oracle to show how many concurrent users an Exalytics server can handle.

Exalytics Load Test Video

What’s particularly interesting about this load test though is that it doesn’t require any external tools such as LoadRunner or JMeter, and the scripts it uses are actually shipped with the full v207 SampleApp VirtualBox VM that is downloadable from OTN. On a recent customer engagement a need came up for a “quick and dirty” load test for their system, so I thought I’d go through how this load test example works, and how it can be adapted for use with any other generic OBIEE 11g (11.1.1.6+) environment.

In the example used in the YouTube video, a report (which looks like a dashboard page, but is actually a single analysis compound layout containing two graph views and a pivot table view) is set up with a special set of filter values; when requested, this analysis will use “randomised” filter values so that response times aren’t skewed by the same values being used each time, and a controlling process outside of the dashboard ramps up 10, 100, 200 and so on separate sessions, up to a maximum of 2,000, to simulate the sort of user numbers that an Exalytics server might be required to support.


Then, when the load test is running, the metric display within Fusion Middleware Control is used to show how the server copes with the load (in terms of # of sessions, average response time per query etc), as well as a dashboard page based off of the usage tracking data that shows a similar set of information.


Now of course the reason this sort of test data is important (apart from selling Exalytics servers) is that a report that takes 10 seconds to run, on a system otherwise unused and with only you running queries, might take considerably longer to run when all your users are on the system, due to factors such as disk contention, queuing on database server and mid-tier server CPUs, parallel query getting scaled back when more than a few users try to run reports at the same time, and so on – so you need to do this sort of load test before unleashing your new dashboards onto your user community. But performing a load test is hard – just ask our own Robin Moffatt – so having a ready-made system shipped with SampleApp, that doesn’t require additional software, certainly sounds interesting. So how does it work?

The scripts that control the load test process are contained within the /home/oracle/scripts/loadtest folder on SampleApp, and look like this within the Linux file manager:


The folder actually contains a shell script, two text files, and a Java JAR archive file:

  • runtest actually runs the load test
  • users_list.txt is a set of usernames that are central to the load test process (more on this in a moment)
  • Loadtest_README.txt is the instruction file, and
  • LoadTest.jar is a Java program that is called by runtest to log into OBIEE and request the report

Looking through the readme file, the way the process works is that you need to create a set of users with a common password within the OBIEE LDAP directory, and put their usernames in the users_list.txt file. Then, the LoadTest.jar file is called by the runtest script, passing the hostname and port number of the WebLogic server hosting Presentation Services, the path to the analysis that you wish to test against, and the common password, and the script will then initiate a session for each user and then run the report.

Looking at the list of names in the users_list.txt file is interesting, because they all appear to be airport three-letter codes; for example:

SAN
SAT
SAV
SBA
SBN
SBP
MKC
MKE
MKG
MLB
MLI
MLU
MMH

The reason for this becomes clear when you look at the filters behind the analysis that the runtest script calls; to provide the filter predicate randomisation, each run of the report uses the username to filter the origin airport selection, and the other filter values are generated through MOD and RAND functions that, in essence, generate random values for each call of the report. So given that we’re not all going to want to test reports based on airport codes, and given how the overall testing process works, this presents two challenges to us:

  1. How we generate a very large number of user accounts with a common password, given that the test process runs the report just once for each user, and how we get rid of these accounts once we’ve finished the testing.
  2. How we configure the report we want to test to generate “random” filter values – the approach Oracle took with this example is quite “clever”, but we’ll need to come up with something equally clever if we want to do this for our report.

Question 1 seems inextricably linked to question 2, so let’s create an example report that we can easily randomise the values for, create a number of views that we can include in a compound layout as Oracle did in the load test demo, and give it a go.

Taking the SampleApp dataset and the A – Sample Sales subject area, let’s create an analysis that has the following columns in the analysis criteria:

  • Products.P2 Product Type
  • Time.T03 Per Name Qtr
  • Time.T02 Per Name Month
  • Customer.C3 Customer Type
  • Ship To Regions.R50 Region
  • Base Facts.1 – Revenue
  • Base Facts.3 – Discount Amount

For good measure, create another derived measure, called Base Facts.1 – Gross Revenue, which uses the formula:

  • “Base Facts”.”1- Revenue”+”Base Facts”.”3- Discount Amount”

and then create some views off of this criteria so that your analysis looks something along these lines:


Now comes the tricky part of randomising it. We could take the approach that Oracle took with the Airlines load test example and create, for example, a user for each country in the dataset, but instead let’s use Logical SQL’s RAND function to pick a region and calendar quarter at random, and then three of the five customer types, to use as the analysis filters. To do this, we create a filter against each of these columns in the analysis criteria and then convert the filter to SQL, using something like the following SQL clause to filter the quarter randomly:


"Time"."T03 Per Name Qtr" in (
SELECT a.s_1 from
(SELECT
0 s_0,
"A - Sample Sales"."Time"."T03 Per Name Qtr" s_1,
RAND()*100 s_2
FROM "A - Sample Sales"
WHERE
(BOTTOMN(RAND()*100,1) <= 1)
ORDER BY 1, 2 ASC NULLS LAST, 3 ASC NULLS LAST
FETCH FIRST 5000001 ROWS ONLY) a
)

The same goes for the region filter, which we define as:


"Ship To Regions"."R50 Region" in (
SELECT a.s_1 from
(
SELECT
0 s_0,
"A - Sample Sales"."Ship To Regions"."R50 Region" s_1,
RAND()*100 s_2
FROM "A - Sample Sales"
WHERE
(BOTTOMN(RAND()*100,1) <= 1)
ORDER BY 1, 2 ASC NULLS LAST, 3 ASC NULLS LAST
FETCH FIRST 5000001 ROWS ONLY) a )

whereas for the customer type filter, we return the top 3 ordered (random) values, not just the first one:


"A - Sample Sales"."Customers"."C3 Customer Type" in
( SELECT a.s_1
FROM
(SELECT
0 s_0,
"A - Sample Sales"."Customers"."C3 Customer Type" s_1,
DESCRIPTOR_IDOF("A - Sample Sales"."Customers"."C3 Customer Type") s_2,
RAND()*100 s_3
FROM "A - Sample Sales"
WHERE
(BOTTOMN(RAND()*100,3) <= 3)
ORDER BY 1, 2 ASC NULLS LAST, 4 ASC NULLS LAST
FETCH FIRST 5000001 ROWS ONLY) a )

Now when you run the report you should see different filter selections being used each time you run it, similar to what's shown in the preview screenshot below.


One thing I noticed at this stage was that, whilst the customer type filtering returned three values, only one would ever be used in the graph prompt, because that's how prompts in a view work vs. the multi-select prompts you get as dashboard prompts. So I then needed to move the customer type column from the prompts area to the Pies and Slices > Pies part of the graph layout (so I then got one pie chart per customer type, not just the one type I was seeing via the graph prompt before), so that my final report looked like this:


and my analysis criteria, including these special filters, looked like this:


Next we need to create an initial set of users so that we can perform the concurrency test. I do this by using the WebLogic Scripting Tool (WLST) script shown below, which creates 30 users, assigns them to an LDAP group and then adds that group to the BIConsumers LDAP group, so that they can run the analysis in question (if you're new to WLST or are interested in reading a bit more about it, take a look at this Oracle Magazine article of mine that explains the feature).


serverConfig()

# Common password for all of the load test users
password = 'welcome1'

# Look up the DefaultAuthenticator provider in the default security realm
atnr=cmo.getSecurityConfiguration().getDefaultRealm().lookupAuthenticationProvider('DefaultAuthenticator')

# Create a group for the load test users, and add it to BIConsumers so they can run the analysis
group = 'Loadtest-Users'
atnr.createGroup(group,group)
atnr.addMemberToGroup('BIConsumers','Loadtest-Users')

# Create the thirty users and add each one to the load test group
users = ['user1','user2','user3','user4','user5','user6','user7','user8','user9','user10','user11','user12',
'user13','user14','user15','user16','user17','user18','user19','user20','user21','user22','user23','user24',
'user25','user26','user27','user28','user29','user30']
for user in users:
    atnr.createUser(user,password,user)
    atnr.addMemberToGroup(group,user)

After saving the WLST script to the /home/oracle/scripts/loadtest folder as create_users.py, I then go back to my Mac workstation and SSH into the SampleApp VirtualBox VM to run the script:


Last login: Sat Apr 20 12:58:38 on ttys000
markmacbookpro:~ markrittman$ ssh oracle@obieesampleapp.rittmandev.com
oracle@obieesampleapp.rittmandev.com's password:
Last login: Sun Apr 21 17:13:37 2013 from 192.168.2.31

[oracle@obieesampleapp ~]$ cd obiee/Oracle_BI1/common/bin

[oracle@obieesampleapp bin]$ ./wlst.sh

wls:/offline> connect('weblogic','Admin123','localhost:7001')
Connecting to t3://localhost:7001 with userid weblogic ...
Successfully connected to Admin Server 'AdminServer' that belongs to domain 'bifoundation_domain'.

Warning: An insecure protocol was used to connect to the
server. To ensure on-the-wire security, the SSL port or
Admin port should be used instead.

wls:/bifoundation_domain/serverConfig> execfile('/home/oracle/scripts/loadtest/create_users.py')
Already in Config Runtime

wls:/bifoundation_domain/serverConfig> exit()

Exiting WebLogic Scripting Tool.
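
Incidentally, once the load testing is finished you'll probably want to remove these throwaway accounts and the group again; the short WLST sketch below is untested, and assumes the same DefaultAuthenticator provider and group name as the create_users.py script above, but run the same way via wlst.sh it should do the job.

serverConfig()

# Same DefaultAuthenticator provider we used to create the accounts
atnr=cmo.getSecurityConfiguration().getDefaultRealm().lookupAuthenticationProvider('DefaultAuthenticator')

# Remove the thirty load test users...
for i in range(1,31):
    atnr.removeUser('user' + str(i))

# ...and then the group itself
atnr.removeGroup('Loadtest-Users')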

Then, using the same SSH session, I create a new users_list.txt file containing the usernames of these 30 users (use CTRL-D in the Unix session to send the EOF signal to cat, and stop copying text into the users_list.txt.new file).


[oracle@obieesampleapp bin]$ cd /home/oracle/scripts/loadtest/

[oracle@obieesampleapp loadtest]$ cat > users_list.txt.new
user1
user2
user3
user4
user5
user6
user7
user8
user9
user10
user11
user12
user13
user14
user15
user16
user17
user18
user19
user20
user21
user22
user23
user24
user25
user26
user27
user28
user29
user30

[oracle@obieesampleapp loadtest]$ mv ./users_list.txt users_list.txt.original
[oracle@obieesampleapp loadtest]$ mv ./users_list.txt.new users_list.txt

Finally, I edit the runtest script to change the path to point to the analysis I created earlier, and to update the password setting for the users:


[oracle@obieesampleapp loadtest]$ vi ./runtest

so that the final runtest file looks like this:

[oracle@obieesampleapp loadtest]$ cat ./runtest
export JAVA_HOME=/home/oracle/obiee/Oracle_BI1/jdk
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib

echo "Start time: `date`"
echo "Load Test Starting..."
java -jar LoadTest.jar "localhost" "7001" "/shared/loadtest/SampleAnalysis" "welcome1"
echo "Load Test Completed..."
echo "End time: `date`"

Now we've got everything we need for the initial test: an analysis to run, a set of users to run it with, and the JAR file to perform the test. So let's give it a go...


[oracle@obieesampleapp loadtest]$ chmod a+x runtest
[oracle@obieesampleapp loadtest]$ ./runtest
Start time: Sun Apr 21 18:21:39 PDT 2013
Load Test Starting...

----------------------------------------------
Creating User Sessions for Concurrency Test..
Total active sessions: 30

Initiating Queries..
Total queries initiated: 30

Cleaning up User Sessions created for Concurrency Test..
- Remaining Active Sessions: 30
Completed User Sessions Cleanup
----------------------------------------------

Load Test Completed...
End time: Sun Apr 21 18:21:54 PDT 2013

Where it gets interesting, though, is when you go over to Fusion Middleware Control and view the DMS metrics graphs at Capacity Management > Metrics > View the full set of system metrics; there you can see various metrics such as the number of active sessions, request processing time (i.e. how long the analysis took to run), and requests per minute.
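
If you'd rather capture those DMS metrics from the command line than watch the graphs in a browser, WLST's DMS custom commands can be used instead; the sketch below is just an illustration, and the two metric table names are assumptions from memory, so check the output of displayMetricTableNames() for the exact names on your own installation.

# Run from /home/oracle/obiee/Oracle_BI1/common/bin/wlst.sh, as before
connect('weblogic','Admin123','localhost:7001')

# List the DMS metric tables that are available, then dump the ones of interest;
# the two table names below are assumptions - verify them against the
# displayMetricTableNames() output first
displayMetricTableNames()
displayMetricTables('Oracle_BI_General')
displayMetricTables('Oracle_BI_PS_Sessions')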


But of course, our current test only runs thirty queries through our thirty users, so it's not much of a concurrency test; also, I've got caching enabled, so I'd expect the figures to look fairly good (though this may be what we use in real life, so the key thing is to make the test as close a reflection of your actual system as possible).

To create a more realistic test in terms of user numbers, there are a couple of options you can use; one option, and the one I use, is to copy the same set of users over and over again into the users_list.txt file, to the point where there are hundreds of rows in the file to simulate hundreds of sessions. Another approach, and perhaps the more purist one, is to create many more user accounts and have each one only run one or two reports, but that involves creating the required number of users and then deleting them afterwards from the LDAP server. I'll go for the first option, using the following Unix commands within my SSH session to copy the file back onto itself many times, giving me at the end around 1,680 users to use in my concurrency test:


[oracle@obieesampleapp loadtest]$ wc -l users_list.txt
30 users_list.txt
[oracle@obieesampleapp loadtest]$ cat users_list.txt users_list.txt users_list.txt users_list.txt users_list.txt users_list.txt users_list.txt users_list.txt users_list.txt users_list.txt >> big_users_list.txt
[oracle@obieesampleapp loadtest]$ wc -l big_users_list.txt
420 big_users_list.txt
[oracle@obieesampleapp loadtest]$ cat big_users_list.txt big_users_list.txt big_users_list.txt big_users_list.txt > users_list.txt
[oracle@obieesampleapp loadtest]$ wc -l users_list.txt
1680 users_list.txt
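
If you'd rather not juggle cat commands, a few lines of Python along these lines (just a sketch, assuming the same file name and the 30-user list created earlier) do the same job:

# Read the 30-user list into memory, then overwrite users_list.txt with many copies of it
base = open('users_list.txt').read().splitlines()

out = open('users_list.txt', 'w')
for i in range(56):          # 56 x 30 = 1,680 lines, roughly matching the cat approach
    for user in base:
        out.write(user + '\n')
out.close()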

Finally I run the test again, this time to simulate around 1,680 users running queries at once:


[oracle@obieesampleapp loadtest]$ ./runtest
Start time: Sun Apr 21 18:42:42 PDT 2013
Load Test Starting...

----------------------------------------------
Creating User Sessions for Concurrency Test..
- Active Sessions: 100
- Active Sessions: 200
- Active Sessions: 300
- Active Sessions: 400
- Active Sessions: 500
- Active Sessions: 600
- Active Sessions: 700
- Active Sessions: 800
- Active Sessions: 900
- Active Sessions: 1000
- Active Sessions: 1100
- Active Sessions: 1200
- Active Sessions: 1300
- Active Sessions: 1400
- Active Sessions: 1500
- Active Sessions: 1600
Total active sessions: 1680

Initiating Queries..
- Queries initiated: 100
- Queries initiated: 200
- Queries initiated: 300
- Queries initiated: 400
- Queries initiated: 500
- Queries initiated: 600
- Queries initiated: 700
- Queries initiated: 800
- Queries initiated: 900
- Queries initiated: 1000
- Queries initiated: 1100
- Queries initiated: 1200
- Queries initiated: 1300
- Queries initiated: 1400
- Queries initiated: 1500
- Queries initiated: 1600
Total queries initiated: 1680

Cleaning up User Sessions created for Concurrency Test..
- Remaining Active Sessions: 1680
- Remaining Active Sessions: 1600
- Remaining Active Sessions: 1500
- Remaining Active Sessions: 1400
- Remaining Active Sessions: 1300
- Remaining Active Sessions: 1200
- Remaining Active Sessions: 1100
- Remaining Active Sessions: 1000
- Remaining Active Sessions: 900
- Remaining Active Sessions: 800
- Remaining Active Sessions: 700
- Remaining Active Sessions: 600
- Remaining Active Sessions: 500
- Remaining Active Sessions: 400
- Remaining Active Sessions: 300
- Remaining Active Sessions: 200
- Remaining Active Sessions: 100
Completed User Sessions Cleanup
----------------------------------------------

Load Test Completed...
End time: Sun Apr 21 18:45:34 PDT 2013

Going back over to EM, I can see the load building up on the server and the response time increasing.


Notice though how the response time actually starts to fall as more queries run? That's most probably caching kicking in, so next time I'll disable caching completely and run the test again. But for now though, this is the Oracle load test script running, and the steps I've outlined here should allow you to run a similar test yourself. Thanks to Philippe and the Oracle BI Tech Demos team for this, and on a similar topic I'll be previewing the new v303 11.1.1.7 SampleApp in a posting tomorrow.

OBIEE, ODI and Hadoop Part 1: So What Is Hadoop, MapReduce and Hive?

Recent releases of OBIEE and ODI have included support for Apache Hadoop as a data source, probably the most well-recognised technology within the “big data” movement. Most OBIEE and ODI developers have probably heard of Hadoop and MapReduce, a data-processing programming model that goes hand-in-hand with Hadoop, but haven’t tried them out themselves or really found a pressing reason to use them. So over this next series of three articles, we’ll take a look at what these two technologies actually are, and then see how OBIEE 11g, and also ODI 11g, connect to them and make use of their features.

Hadoop is actually a family of open-source tools sponsored by the Apache Foundation that provides a distributed, reliable shared storage and analysis system. Designed around clusters of commodity servers (which may actually be virtual and cloud-based) and with data stored on the server, not on separate storage units, Hadoop came from the world of Silicon Valley social and search companies and has spawned a raft of Apache Foundation sub-projects such as Hive (for SQL-like querying of Hadoop clusters), HBase (a distributed, column-store database based on Google’s “BigTable” technology), Pig (a procedural language for writing Hadoop analysis jobs that’s PL/SQL to Hive’s SQL) and HDFS (a distributed, fault-tolerant filesystem). Hadoop, being open-source, can be downloaded for free and run easily on most Unix-based PCs and servers, and also on Windows with a bit of mucking-around to create a Unix-like environment; the code from Hadoop has been extended and to an extent commercialised by companies such as Cloudera (who provide the Hadoop infrastructure for Oracle’s Big Data Appliance) and Hortonworks, who can be thought of as the “Red Hat” and “SuSE” of the Hadoop world.

MapReduce, on the other hand, is a programming model or algorithm for processing data, typically in parallel. MapReduce jobs can be written, theoretically, in any language as long as they expose two particular methods, steps or functions to the calling program (typically, the Hadoop JobTracker):

  • A “Map” function, that takes input data in the form of key/value pairs and extracts the data that you’re interested in, outputting it again in the form of key/value pairs
  • A “Reduce” function, which typically sorts and groups the “mapped” key/value pairs, and then passes the results down the line to another MapReduce job for further processing
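
To make that a little more concrete, here’s a minimal sketch of the classic word-count example expressed as map and reduce functions in Python; it’s purely illustrative and runs in a single process rather than on a cluster, but it shows the shape of the two functions Hadoop expects.

import sys
from itertools import groupby

def mapper(lines):
    # "Map": emit a (word, 1) key/value pair for every word in the input
    for line in lines:
        for word in line.strip().split():
            yield (word, 1)

def reducer(pairs):
    # "Reduce": group the mapped pairs by key and sum the values for each key
    for word, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield (word, sum(count for _, count in group))

if __name__ == '__main__':
    # In a real Hadoop job the sort/shuffle between map and reduce is handled
    # by the framework across many servers; here we just chain the two in-process
    for word, total in reducer(mapper(sys.stdin)):
        print('%s\t%d' % (word, total))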

Joel Spolsky (of Joel on Software fame, one of mine and Jon’s inspirations in setting up Rittman Mead) explains MapReduce well in this article from back in 2006, when he’s trying to explain the fundamental differences between object-orientated languages like Java, and functional languages like Lisp and Haskell. Ironically, most MapReduce functions you see these days are actually written in Java, but it’s MapReduce’s intrinsic simplicity, and the way that Hadoop abstracts away the process of running individual map and reduce functions on lots of different servers, with the Hadoop job co-ordination tools taking care of making sense of all the chaos and returning a result in the end, that has made it take off so well and allowed data analysis tasks to scale beyond the limits of just a single server.


I don’t intend to try and explain the full details of Hadoop in this blog post though, and in reality most OBIEE and ODI developers won’t need to know how Hadoop works under the covers; what they will often want to be able to do, though, is connect to a Hadoop cluster and make use of the data it contains, and its data processing capabilities, either to report against directly or, more likely, to use as an input into a more traditional data warehouse. An organisation might store terabytes or petabytes of web log data, details of user interactions with a web-based service, or other e-commerce-type information in HDFS, Hadoop’s clustered, distributed, fault-tolerant file system, and while they might then be more than happy to process and analyse that data entirely using Hadoop-style data analysis tools, they might also want to load some of the nuggets of information derived from it into a more traditional, Oracle-style data warehouse, or indeed make it available to less technical end-users more used to writing queries in SQL or using tools such as OBIEE.

Of course, the obvious disconnect here is that distributed computing, fault-tolerant clusters and MapReduce routines written in Java can get really “technical”, more technical than someone like myself generally gets involved in and certainly more technical than your average web analytics person will want to get. Because of this need to provide big-data style analytics to non-Java programmers, some developers at Facebook a few years ago came up with the idea of “Hive”, a set of technologies that provided a SQL-type interface over Hadoop and MapReduce, along with supporting technologies such as a metadata layer that’s not unlike the RPD that OBIEE uses, so that non-programmers could indirectly query data held in Hadoop, with Hive creating the underlying MapReduce routines for them. And for bonus points, because the HiveQL language that Hive provided was so like SQL, and because Hive also provided ODBC and JDBC drivers conforming to common standards, tools such as OBIEE and ODI can now access Hadoop/MapReduce data sources and analyse their data just like any other data source (more or less…).
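
Just to show how “SQL-like” this ends up being, the snippet below sketches a Hive query run from Python over ODBC; the pyodbc module, the “Hive” DSN and the weblogs table are all assumptions made up for the example, but OBIEE and ODI are doing essentially the same thing through their HiveODBC and HiveJDBC drivers.

import pyodbc   # assumes the Hive ODBC driver is installed and a DSN named "Hive" is configured

conn = pyodbc.connect('DSN=Hive', autocommit=True)
cursor = conn.cursor()

# HiveQL reads almost exactly like SQL; behind the scenes Hive compiles this
# into one or more MapReduce jobs and runs them on the cluster
cursor.execute("""
    SELECT page, COUNT(*) AS hits
    FROM   weblogs            -- hypothetical Hive table holding web log data
    GROUP  BY page
    ORDER  BY hits DESC
    LIMIT  10
""")

for row in cursor.fetchall():
    print(row)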


So where this leaves us is that the 11.1.1.7 release of OBIEE can access Hadoop/MapReduce sources via a HiveODBC driver, whilst ODI 11.1.1.6+ can access the same sources via a HiveJDBC driver. There is of course the additional question as to why you might want to do this; in the next two articles in this series we’ll cover how OBIEE, and then ODI, can access Hadoop/MapReduce data sources, try to answer that question, and look at what benefits OBIEE and ODI might provide over more “native” or low-level big data query and analysis tools such as Cloudera’s Impala or Google’s Dremel (for data analysis), or Hadoop technologies such as Pig or Sqoop (for data loading and processing). Check back tomorrow for the next instalment in the series.

OBIEE, OEM12cR2 and the BI Management Pack Part 1: Introduction to OEM12cR2

A few years ago I wrote a series of blog posts, and an OTN article, on managing OBIEE 10g using Oracle Enterprise Manager 10gR4 and the BI Management Pack, an extra-licensable option for OEM that provided additional management capabilities for OBIEE and the BI Apps Data Warehouse Administration Console. The BI Management Pack was reasonably popular at the time but disappeared with the move to Enterprise Manager 12c Cloud Control; with the recent release of EM12cR2, though, it’s come back again, now with additional capabilities around WebLogic, GoldenGate, TimesTen and Essbase. I covered the news of this new release a few months ago, and since then our customers have often asked about these new capabilities, but information on Oracle’s website and the wider web is pretty thin, so I thought I’d go through it in a bit more detail: today talking about how the product works, tomorrow going through installation and configuration, and then on the third day covering some of the common requests and questions we’ve had from our own customers.

Unlike Oracle Enterprise Manager Fusion Middleware Control (or indeed Database Control, the equivalent for the Oracle Database), Enterprise Manager 12cR2 Cloud Control is designed to manage multiple target systems, not just the one that it’s installed on. What this means is that you can manage all of your BI domains from one place, along with all of your databases, your GoldenGate installation, the DAC, Essbase and so forth, with their details held in a management repository stored in an Oracle database. The diagram below shows a typical OEM12cR2 topology, with the OEM installation on a server connected to the repository database, and OBIEE and other BI “targets” installed on other servers in the organisation.


OEM is actually made up of two parts, and a database repository. The Oracle Management Service runs within a WebLogic domain and comprises a Web Console (what we’d know as Enterprise Manager) and “Platform Background Services”, a set of background services that communicate with the target hosts and store the relevant information. The other part of OEM is the “Oracle Management Agent”, a server process that’s installed on each monitored host that collects metrics to pass back to OMS and PBS, and executes tasks such as stopping and starting the target on behalf of OMS. OEM12cR2 Cloud Control stores its metadata and monitoring data in a separate repository database, which can either be on the same server as OMS or on a separate machine – note that if you use a database instance that’s previously had Database Control enabled on it (as most of them have), you need to disable and remove it before you can use it for OEM’s own repository.

One of the main benefits of OEM12cR2 compared to standalone management consoles is that it manages the majority of Oracle’s server products – WebLogic Server, Oracle Database, Exadata, Exalogic, E-Business Suite and so on, though you need to read the small print as management covers more features in some products than others – we’ll get back to this point later on. At its best though, OEM12cR2 becomes your central monitoring point for all products (including some third party ones, via plugins), allowing you to monitor, manage, patch and maintain all of your servers from the one place.


As well as managing all hosts in one place, headline benefits of OEM12cR2 over “free” Fusion Middleware Control include:

  • Monitor all BI Domains in one place, so you can see their versions, host types, patch levels etc
  • Perform WebLogic lifecycle-type tasks such as patching the installation, packing and unpacking managed servers to move them between hosts, deploying test-to-production
  • Define quality of service checks, create alerts for slow response times, hosts down etc
  • Persist and store metrics, rather than only display them whilst you have the Metric screen open in your browser

Like the Oracle database though, Enterprise Manager comes with a number of extra-cost packs, including:

  • Database Lifecycle Management Pack for Oracle Database
  • Data Masking Pack
  • Test Data Management Pack
  • WebLogic Server Management Pack Enterprise Edition
  • SOA Management Pack

and, of course, the BI Management Pack. So what do you get in the base version of OEM before you need to start paying for these packs? For all of the database, middleware and other targets, you can deploy agents, set alerts and define metric thresholds, and for Oracle Database specifically you can use the data movement features, view errors, use Advisor Central and so on, whereas the stuff you really want, such as performance pages, wait event breakdowns and so on, is extra cost. The same goes for WebLogic, with a small base-level set of functionality that’s pretty much limited to discovering the WebLogic installation and then stopping and starting it; in other words, what you get for “free” with Fusion Middleware Control. For BI, again you can display what you would normally see in Fusion Middleware Control (database and middleware licensed customers can use base-level Oracle Enterprise Manager at no extra license cost, so this would follow), but if you’re after anything else such as persisted metrics, service tests and so forth, figure on buying a few of the add-on management packs.

My article on OEM Grid Control 10gR4’s BI Management Pack described the features that are still the core of OEM12cR2’s BI Management Pack, which at the time included the features below:

  • The ability to collect and record BI Server, BI Presentation Server, BI Cluster Controller and other BI target metrics, and define thresholds and events against those metrics
  • The ability to connect to the BI repository database tables, to read for example the BI scheduler information about failed iBot executions and use it to alert you
  • The ability to connect to the DAC repository, and then graph out ETL run information such as execution time, number of errors and so forth
  • Record configuration settings, and then report on what’s changed for a target configuration compared to the previous settings


So now that the BI Management Pack is back with OEM12cR2, what do you get with it? Well you get everything that you had before, plus some new features:

  • The ability to discover, and then monitor, Essbase installations
  • All the new functionality around WebLogic (albeit with the requirement to license the WebLogic Management Pack)
  • Compatibility with OBIEE 11g, along with continuing support for 10g

The screenshots below show some of these features in use, with the new EM12cR2 “Fusion” look and feel for the web console.


So how do we get EM12cR2 connecting to OBIEE and make use of some of the new BI Management Pack features? How do we register an OBIEE installation with it, and how does it work with a BI Apps installation, or even with Exalytics? Come back tomorrow when we’ll cover off the installation and configuration parts of the product.

What’s Coming at the Rittman Mead BI Forum 2013, Atlanta, May 15th – 17th 2013

At the end of last week I talked about what we had planned for the BI Forum in Brighton, and today I want to talk about what we’ve got planned for the Rittman Mead BI Forum 2013 in Atlanta, running the week after at the Georgia Tech Hotel & Conference Center on May 15th – 17th 2013.


As with the Brighton BI Forum event, we’ve got a mix of partner, independent developer, customer and Oracle speakers, all covering topics centered around OBIEE and its supporting technologies. The central idea around the BI Forum is that it’s “the conference we’d want to go to”, with content aimed at developers who already know the basics, want to hear solid technical and implementation talk, not marketing fluff, and want to meet their friends and peers in the industry to share stories and trade tips. We keep numbers strictly limited, run just a single track so that everyone gets to take part in the same sessions, and maximise the networking and social elements so that you get to meet everyone who attends, hopefully staying in touch well after the event closes.

This year our speakers and sessions in Atlanta include:

  • Jeff McQuigg talking about OBIEE testing (and making the topic interesting), Christian Screen on OBIEE plug-ins, Tim Vlamis covering OBIEE forecasting and time-series analysis, Kevin McGinley looking at ODI and the BI Apps (and hoping it’s GA by then, otherwise he’ll sing us Bohemian Rhapsody in Klingon for an hour), and our own Adam Seed taking us beyond the demos with Endeca
  • From Oracle, we’ll have Marty Gubar and Alan Lee talking about the new Admin Tool and OBIEE’s support for Hadoop as a data source, Florian Schouten talking about BI Apps futures, and Jack Berkowitz taking us through the latest in OBIEE presentation, mobility and interactivity

Jack will also deliver the opening Oracle keynote on the Wednesday evening, and before that earlier in the day will be our optional masterclass, this year being delivered by Stewart Bryson, Michael Rainey and myself and focusing on Oracle’s data integration technologies. And – to top things off, we’re joined by two special guests, Method R’s Cary Millsap, and Pythian’s Alex Gorbachev, two of our friends from the Oracle database world who’ll talk to us about response-time based performance tuning, and what’s new in the worlds of Big Data and unstructured analytics.


Of course – it’s not all about learning and technology, and we make a special effort around the social events usually to the point where we spend all the proceeds on free bars and making sure everyone’s having a good time. There’ll be a debate on the Friday, TED-style 10-minute sessions on the Thursday, a competition to see who can speak longer than Paul Rodwick about Oracle BI without blinking, full delegate packs and t-shirts to take home, and we share all delegate contact details amongst the attendees so you can stay in touch after everything closes. Registration is open and there are still a few places left, so if you’re thinking of attending and don’t want to lose your place, register now before we sell out…!