Rittman Mead BI Forum 2015 Now Open for Registration!
I’m very pleased to announce that the Rittman Mead BI Forum 2015, running in Brighton and Atlanta in May 2015, is now open for registration.
Back for its seventh successful year, the Rittman Mead BI Forum once again will be showcasing the best speakers and presentations on topics around Oracle Business Intelligence and data warehousing, with two events running in Brighton, UK and Atlanta, USA in May 2015. The Rittman Mead BI Forum is different to other Oracle tech events in that we keep the numbers attending limited, topics are all at the intermediate-to-expert level, and we concentrate on just one topic – Oracle Business Intelligence Enterprise Edition, and the technologies and products that support it.
As in previous years, the BI Forum will run over two consecutive weeks, starting in Brighton and then moving over to Atlanta for the following week. Here are the dates and venue locations:
- Rittman Mead BI Forum 2015 – Hotel Seattle, Brighton, May 6th – 8th 2015
- Rittman Mead BI Forum 2015 – Renaissance Atlanta Midtown Hotel, May 13th – 15th 2015
This year our optional one-day masterclass will be delivered by Jordan Meyer, our Head of R&D, and myself, on the topic of “Delivering the Oracle Big Data and Information Management Reference Architecture” that we launched last year at our Brighton event. Details of the masterclass, and the speaker and session line-up at the two events, are on the Rittman Mead BI Forum 2015 homepage.
Each event has its own agenda, but both will focus on the technology and implementation aspects of Oracle BI, DW, Big Data and Analytics. Most of the sessions run for 45 minutes, but on the first day we’ll be holding a debate and on the second we’ll be running a data visualization “bake-off” – details on this, the masterclass and the keynotes and our special guest speakers will be revealed on this blog over the next few weeks – watch this space!
OBIEE Monitoring and Diagnostics with InfluxDB and Grafana
In this article I’m going to look at collecting time-series metrics into the InfluxDB database and visualising them in snazzy Grafana dashboards. The datasets I’m going to use are OS metrics (CPU, Disk, etc) and the DMS metrics from OBIEE, both of which are collected using the support for a Carbon/Graphite listener in InfluxDB.
The Dynamic Monitoring System (DMS) in OBIEE is one of the best ways of being able to peer into the internals of the product and find out quite what’s going on. Whether performing diagnostics on a specific issue or just generally monitoring to make sure things are ticking over nicely, using the DMS metrics you can level-up your OBIEE sysadmin skills beyond what you’d get with Fusion Middleware Control out of the box. In fact, the DMS metrics are what you can get access to with Cloud Control 12c (EM12c) – but for that you need EM12c and the BI Management Pack. In this article we’re going to see how to easily set up our DMS dashboard.
N.B. if you’ve read my previous articles, what I write here (use InfluxDB/Grafana) supersedes what I wrote in those (use Graphite) as my recommended approach to working with arbitrary time-series metrics.
Overview
To get the DMS data out of OBIEE we’re going to use the obi-metrics-agent tool that Rittman Mead open-sourced last year. This connects to OPMN and pulls the data out. We’ll store the data in InfluxDB, and then visualise it in Grafana. Whilst not mandatory for the DMS stats, we’ll also set up collectl so that we can show OS stats alongside the DMS ones.
InfluxDB
InfluxDB is a database, but unlike an RDBMS such as Oracle – good for generally everything – it is what’s called a Time-Series Database (TSDB). This category of database focuses on storing data for a series, holding a given value for a point in time. Generally they’re optimised for handling large quantities of inbound metrics (think Internet of Things), rather than necessarily excelling at handling changes to the data (update/delete) – but that’s fine here since metric events in the past don’t generally change.
I’m using InfluxDB here for a few reasons:
- Grafana supports it as a source, with lots of active development for its specific features.
- It’s not Graphite. Whilst I have spent many a happy hour using Graphite I’ve spent many a frustrating day and night trying to install the damn thing – every time I want to use it on a new installation. It’s fundamentally long in the tooth, and whilst good for its time is now legacy in my mind. Graphite is also several things – a data store (whisper), a web application (graphite web), and a data collector (carbon). Since we’re using Grafana, the web front end that Graphite provides is redundant, and is where a lot of the installation problems come from.
- KISS! Yes I could store time series data in Oracle/mySQL/DB2/yadayada, but InfluxDB does one thing (storing time series metrics) and one thing only, very well and very easily with almost no setup.
For an eloquent discussion of Time-Series Databases read these couple of excellent articles by Baron Schwartz here and here.
Grafana
On the front-end we have Grafana which is a web application that is rapidly becoming accepted as one of the best time-series metric visualisation tools available. It is a fork of Kibana, and can work with data held in a variety of sources including Graphite and InfluxDB. To run Grafana you need to have a web server in place – I’m using Apache just because it’s familiar, but Grafana probably works with whatever your favourite is too.
OS
This article is based around the OBIEE SampleApp v406 VM, but should work without modification on any OL/CentOS/RHEL 6 environment.
InfluxDB and Grafana run on both RHEL and Debian based Linux distros, as well as Mac OS. The specific setup steps detailed here might need some changes depending on the OS.
Getting Started with InfluxDB
InfluxDB Installation and Configuration as a Graphite/Carbon Endpoint
InfluxDB is a doddle to install. Simply download the rpm, install it, and run. BOOM. Compared to Graphite, this makes it a massive winner already.
wget http://s3.amazonaws.com/influxdb/influxdb-latest-1.x86_64.rpm
sudo rpm -ivh influxdb-latest-1.x86_64.rpm
This downloads and installs InfluxDB into /opt/influxdb and configures it as a service that will start at boot time.
Before we go ahead and start it, let’s configure it to work with existing applications that are sending data to Graphite using the Carbon protocol. InfluxDB can support this and enables you to literally switch Graphite out in favour of InfluxDB with no changes required on the source.
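As a reminder of what that protocol looks like on the wire, each Carbon metric is just a plain-text line with a dot-separated metric path, a value, and a Unix epoch timestamp – for example:

example.foo.bar 3 1422910401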
Edit the configuration file that you’ll find at /opt/influxdb/shared/config.toml and locate the line that reads:
[input_plugins.graphite]
In v0.8.8 this is at line 41. In the following stanza set the plugin to enabled, specify the listener port, and give the name of the database that you want to store data in, so that it looks like this.
# Configure the graphite api
[input_plugins.graphite]
enabled = true
# address = "0.0.0.0" # If not set, is actually set to bind-address.
port = 2003
database = "carbon"  # store graphite data in this database
# udp_enabled = true # enable udp interface on the same port as the tcp interface
Note that the file is owned by a user created at installation time, influxdb, so you’ll need to use sudo to edit the file.
Now start up InfluxDB:
sudo service influxdb start
You should see it start up successfully:
[oracle@demo influxdb]$ sudo service influxdb start
Setting ulimit -n 65536
Starting the process influxdb   [ OK ]
influxdb process was started    [ OK ]
You can see the InfluxDB log file and confirm that the Graphite/Carbon listener has started:
[oracle@demo shared]$ tail -f /opt/influxdb/shared/log.txt
[2015/02/02 20:24:04 GMT] [INFO] (github.com/influxdb/influxdb/cluster.func·005:1187) Recovered local server
[2015/02/02 20:24:04 GMT] [INFO] (github.com/influxdb/influxdb/server.(*Server).ListenAndServe:133) recovered
[2015/02/02 20:24:04 GMT] [INFO] (github.com/influxdb/influxdb/coordinator.(*Coordinator).ConnectToProtobufServers:898) Connecting to other nodes in the cluster
[2015/02/02 20:24:04 GMT] [INFO] (github.com/influxdb/influxdb/server.(*Server).ListenAndServe:139) Starting admin interface on port 8083
[2015/02/02 20:24:04 GMT] [INFO] (github.com/influxdb/influxdb/server.(*Server).ListenAndServe:152) Starting Graphite Listener on 0.0.0.0:2003
[2015/02/02 20:24:04 GMT] [INFO] (github.com/influxdb/influxdb/server.(*Server).ListenAndServe:178) Collectd input plugins is disabled
[2015/02/02 20:24:04 GMT] [INFO] (github.com/influxdb/influxdb/server.(*Server).ListenAndServe:187) UDP server is disabled
[2015/02/02 20:24:04 GMT] [INFO] (github.com/influxdb/influxdb/server.(*Server).ListenAndServe:187) UDP server is disabled
[2015/02/02 20:24:04 GMT] [INFO] (github.com/influxdb/influxdb/server.(*Server).ListenAndServe:216) Starting Http Api server on port 8086
[2015/02/02 20:24:04 GMT] [INFO] (github.com/influxdb/influxdb/server.(*Server).reportStats:254) Reporting stats: &client.Series{Name:"reports", Columns:[]string{"os", "arch", "id", "version"}, Points:[][]interface {}{[]interface {}{"linux", "amd64", "e7d3d5cf69a4faf2", "0.8.8"}}}
At this point if you’re using the stock SampleApp v406 image, or indeed any machine with a firewall configured, you need to open up ports 8083 and 8086 for InfluxDB. Edit /etc/sysconfig/iptables (using sudo) and add:
-A INPUT -m state --state NEW -m tcp -p tcp --dport 8083 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 8086 -j ACCEPT
immediately after the existing ACCEPT rules. Restart iptables to pick up the change:
sudo service iptables restart
If you now go to http://localhost:8083/ (replace localhost with the hostname of the server on which you’ve installed InfluxDB), you’ll get the InfluxDB web interface. It’s fairly rudimentary, but suffices just fine:
Login as root/root, and you’ll see a list of nothing much, since we’ve not got any databases yet. You can create a database from here, but for repeatability and a general preference for using the command line, here is how to create a database called carbon with the HTTP API called from curl (assuming you’re running it locally; change localhost if not):
curl -X POST 'http://localhost:8086/db?u=root&p=root' -d '{"name": "carbon"}'
Simple huh? Now hit refresh on the web UI and after logging back in again you’ll see the new database:
You can call the database anything you want, just make sure what you create in InfluxDB matches what you put in the configuration file for the graphite/carbon listener.
Now we’ll create a second database that we’ll need later on to hold the internal dashboard definitions from Grafana:
curl -X POST 'http://localhost:8086/db?u=root&p=root' -d '{"name": "grafana"}'
You should now have two InfluxDB databases, primed and ready for data:
Validating the InfluxDB Carbon Listener
To make sure that InfluxDB is accepting data on the carbon listener use the NetCat (nc) utility to send some dummy data to it:
echo "example.foo.bar 3 `date +%s`"|nc localhost 2003
Now go to the InfluxDB web interface and click Explore Data ». In the query field enter
list series
To see the first five rows of data itself use the query
select * from /.*/ limit 5
InfluxDB Queries
You’ll notice that what we’re doing here (“SELECT … FROM …”) looks pretty SQL-like. Indeed, InfluxDB supports a SQL-like query language, which if you’re coming from an RDBMS background is nicely comforting ;-)
The syntax is documented, but what I would point out is that the apparently odd /.*/ constructor for the “table” is in fact a regular expression (regex) to match the series for which to return values. We could have written select * from example.foo.bar, but the .* wildcard enclosed in the / / regex delimiters is a quick way to check all the series we’ve got.
Going off on a bit of a tangent (but hey, why not), let’s write a quick Python script to stick some randomised data into InfluxDB. Paste the following into a terminal window to create the script and make it executable:
cat >~/test_carbon.py<<EOF
#!/usr/bin/env python
import socket
import time
import random
import sys

CARBON_SERVER = sys.argv[1]
CARBON_PORT = int(sys.argv[2])

while True:
    message = 'test.data.foo.bar %d %d\n' % (random.randint(1,20),int(time.time()))
    print 'sending message:\n%s' % message
    sock = socket.socket()
    sock.connect((CARBON_SERVER, CARBON_PORT))
    sock.sendall(message)
    time.sleep(1)
    sock.close()
EOF
chmod u+x ~/test_carbon.py
And run it: (hit Ctrl-C when you’ve had enough)
$ ~/test_carbon.py localhost 2003
sending message:
test.data.foo.bar 3 1422910401
sending message:
test.data.foo.bar 5 1422910402
[...]
Now we’ve got two series in InfluxDB:
- example.foo.bar – that we sent using nc
- test.data.foo.bar – that we sent using the Python script
Let’s go back to the InfluxDB web UI and have a look at the new data, using the literal series name in the query:
select * from test.data.foo.bar
Well fancy that – InfluxDB has done us a nice little graph of the data. But more to the point, we can see all the values in the series.
And a regex shows us both series, matching on the ‘foo’ part of the name:
select * from /foo/ limit 3
Let’s take it a step further. InfluxDB supports aggregate functions, such as max, min, and so on:
select count(value), max(value),mean(value),min(value) from test.data.foo.bar
Whilst we’re at it, let’s bring in another way to get data out – with the HTTP API, just like we used for creating the database above. Given a query, it returns the data in json format. There’s a nice little utility called jq which we can use to pretty-print the json, so let’s install that first:
sudo yum install -y jq
and then call the InfluxDB API, piping the return into jq:
curl --silent --get 'http://localhost:8086/db/carbon/series?u=root&p=root' --data-urlencode "q=select count(value), max(value),mean(value),min(value) from test.data.foo.bar"|jq '.'
The result should look something like this:
[
  {
    "name": "test.data.foo.bar",
    "columns": [
      "time",
      "count",
      "max",
      "mean",
      "min"
    ],
    "points": [
      [
        0,
        12,
        14,
        5.666666666666665,
        1
      ]
    ]
  }
]
We could have used the Web UI for this, but to be honest the inclusion of the graphs just confuses things because there’s nothing to graph and the table of data that we want gets hidden lower down the page.
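If you’d rather hit the HTTP API programmatically than via curl, a minimal Python sketch along these lines should do the trick (assuming the same carbon database and root/root credentials as above, and using the generic requests library rather than a dedicated InfluxDB client):

import requests

# InfluxDB 0.8 exposes queries over HTTP: GET /db/<database>/series with the query in 'q'
response = requests.get(
    "http://localhost:8086/db/carbon/series",
    params={
        "u": "root",
        "p": "root",
        "q": "select count(value), max(value), mean(value), min(value) from test.data.foo.bar",
    },
)

# Each returned series has a name, a list of columns, and a list of points
for series in response.json():
    print(series["name"])
    print(dict(zip(series["columns"], series["points"][0])))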
Setting up obi-metrics-agent to Send OBIEE DMS metrics to InfluxDB
obi-metrics-agent is an open-source tool from Rittman Mead that polls your OBIEE system to pull out all the lovely juicy DMS metrics from it. It can write them to file, insert them to an RDBMS, or as we’re using it here, send them to a carbon-compatible endpoint (such as Graphite, or in our case, InfluxDB).
To install it simply clone the git repository (I’m doing it to /opt but you can put it where you want)
# Install pre-requisite
sudo yum install -y libxml2-devel python-devel libxslt-devel python-pip
sudo pip install lxml

# Clone the git repository
git clone https://github.com/RittmanMead/obi-metrics-agent.git ~/obi-metrics-agent

# Move it to /opt folder
sudo mv ~/obi-metrics-agent /opt
and then run it:
cd /opt/obi-metrics-agent
./obi-metrics-agent.py \
    --opmnbin /app/oracle/biee/instances/instance1/bin/opmnctl \
    --output carbon \
    --carbon-server localhost
I’ve used the line continuation character \ here to make the statement clearer. Make sure you update opmnbin with the correct path of your OPMN binary as necessary, and localhost if your InfluxDB server is not local to where you are running obi-metrics-agent.
After running this you should be able to see the metrics in InfluxDB. For example:
select * from /Oracle_BI_DB_Connection_Pool\..+\.*Busy/ limit 5
Setting up collectl to Send OS metrics to InfluxDB
collectl is an excellent tool written by Mark Seger and reports on all sorts of OS-level metrics. It can run interactively, write metrics to file, and/or send them on to a carbon endpoint such as InfluxDB.
Installation is a piece of cake, using the EPEL yum repository:
# Install the EPEL yum repository
sudo rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/`uname -p`/epel-release-6-8.noarch.rpm

# Install collectl
sudo yum install -y collectl

# Set it to start at boot
sudo chkconfig --level 35 collectl on
Configuration to enable logging to InfluxDB is a simple matter of modifying the /etc/collectl.conf configuration file, either by hand or using this set of sed statements to do it automagically. The localhost in the second sed command is the hostname of the server on which InfluxDB is running:
sudo sed -i.bak -e 's/^DaemonCommands/#DaemonCommands/g' /etc/collectl.conf
sudo sed -i -e '/^#DaemonCommands/a DaemonCommands = -f \/var\/log\/collectl -P -m -scdmnCDZ --export graphite,localhost:2003,p=.os,s=cdmnCDZ' /etc/collectl.conf
If you want to log more frequently than ten seconds, make this change (for 5 second intervals here):
sudo sed -i -e '/#Interval = 10/a Interval = 5' /etc/collectl.conf
Restart collectl for the changes to take effect:
sudo service collectl restart
As above, a quick check through the web UI should confirm we’re getting data through into InfluxDB:
Note that the very handy regex support lets us be lazy with the series naming. We know there is a metric whose name includes ‘cputotal’, so using /cputotal/ will match anything containing it.
Installing and Configuring Grafana
Like InfluxDB, Grafana is also easy to install, although it does require a bit of setting up. It needs to be hooked into a web server, as well as configured to connect to a source for metrics and a store for its dashboard definitions.
First, download the binary (this is based on v1.9.1, but releases are frequent so check the downloads page for the latest):
cd ~
wget http://grafanarel.s3.amazonaws.com/grafana-1.9.1.zip
Unzip it and move it to /opt:
unzip grafana-1.9.1.zip
sudo mv grafana-1.9.1 /opt
Configuring Grafana to Connect to InfluxDB
We need to do a bit of configuration, so first create the configuration file based on the template given:
cd /opt/grafana-1.9.1
cp config.sample.js config.js
And now open the config.js file in your favourite text editor. Grafana supports various sources for metrics data, as well as various targets to which it can save the dashboard definitions. The configuration file helpfully comes with configuration elements for many of these, but all commented out. Uncomment the InfluxDB stanzas and amend them as follows:
datasources: {
  influxdb: {
    type: 'influxdb',
    url: "http://sampleapp:8086/db/carbon",
    username: 'root',
    password: 'root',
  },
  grafana: {
    type: 'influxdb',
    url: "http://sampleapp:8086/db/grafana",
    username: 'root',
    password: 'root',
    grafanaDB: true
  },
},
Points to note:
- The servername is the server host as you will be accessing it from your web browser. So whilst the configuration we did earlier was all based around ‘localhost’, since it was just communication between components on the same server, the Grafana configuration is what the web application running in your web browser uses. So unless you are using a web browser on the same machine as where InfluxDB is running, you must put in the server address of your InfluxDB machine here.
- The default InfluxDB username/password is root/root, not admin/admin
- Edit the database names in the url, either as shown if you’ve followed the same names used earlier in the article or your own versions of them if not.
Setting Grafana up in Apache
Grafana runs within a web server, such as Apache or nginx. Here I’m using Apache, so first off install it:
sudo yum install -y httpd
And then set up an entry for Grafana in the configuration folder by pasting the following to the command line:
cat > /tmp/grafana.conf <<EOF
Alias /grafana /opt/grafana-1.9.1

<Location /grafana>
  Order deny,allow
  Allow from 127.0.0.1
  Allow from ::1
  Allow from all
</Location>
EOF
sudo mv /tmp/grafana.conf /etc/httpd/conf.d/grafana.conf
Now restart Apache:
sudo service httpd restart
And if the gods of bits and bytes are smiling on you, when you go to http://yourserver/grafana you should see:
Note that as with InfluxDB, you may well need to open your firewall for Apache which is on port 80 by default. Follow the same iptables instructions as above to do this.
Building Grafana Dashboards on Metrics Held in InfluxDB
So now we’ve set up our metric collectors, sending data into InfluxDB.
Let’s see now how to produce some swanky dashboards in Grafana.
Grafana has a concept of Dashboards, which are made up of Rows, and within those, Panels. A Panel can have on it a metric Graph (duh), but also static text or single-figure metrics.
To create a new dashboard click the folder icon and select New:
You get a fairly minimal blank dashboard. On the left you’ll notice a little green tab: hover over that and it pops out to form a menu box, from where you can choose the option to add a graph panel:
Grafana Graph Basics
On the blank graph that’s created click on the title (with the accurate text “click here”) and select edit from the options that appear. This takes you to the graph editing page, which looks equally blank but from here we can now start adding metrics:
In the box labelled series start typing Active_Sessions and notice that Grafana will autocomplete it to any available metrics matching this:
Select Oracle_BI_PS_Sessions.Active_Sessions and your graph should now display the metric.
To change the time period shown in the graph, use the time picker at the top of the screen. You can also click & drag (“brushing”) on any graph to select a particular slice of time.
So, set the time filter to 15 minutes ago and from the Auto-refresh submenu set it to refresh every 5 seconds. Now login to your OBIEE instance, and you should see the Active Sessions value increase (one per session login):
To add another metric to the graph you can click on Add query at the bottom right of the page, or if it’s closely related to the one you’ve defined already click on the cog next to it and select duplicate:
In the second query add Oracle_BI_General.Total_sessions (remember, you can just type part of the string and Grafana autocompletes based on the metric series stored in InfluxDB). Run a query in OBIEE to cause sessions to be created on the BI Server, and you should now see the Total sessions increase:
To save the graph, and the dashboard, click the Save icon. To return to the dashboard to see how your graph looks alongside others, or to add a new dashboard, click on Back to dashboard.
Grafana Graph Formatting
Let’s now take a look at the options we’ve got for modifying the styling of the graph. There are several tabs/sections to the graph editor – General, Metrics (the default), Axes & Grid, and Display Styles. The first obvious thing to change is the graph title, which can be changed on the General tab:
From here you can also change how the graph is sized on the dashboard using the Span and Height options. A new feature in recent versions of Grafana is the ability to link dashboards to help with analysis paths – guided navigation as we’d call it in OBIEE – and it’s from the General tab here that you can define this.
On the Metrics tab you can specify what text to use in the legend. By default you get the full series name, which is usually too big to be useful as well as containing a lot of redundant repeating text. You can either specify literal text in the alias field, or you can use segments of the series name identified by $x where x is the zero-based segment number. In the example I’ve hardcoded the literal value for the second metric query, and used a dynamic segment name for the first:
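For example, assuming a series named something like obi11-01.OBI.Oracle_BI_PS_Sessions.Active_Sessions, an alias of $3 would display just Active_Sessions in the legend.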
On the Axes & Grid tab you can specify the obvious stuff like min/max scales for the axes and the scale to use (bits, bytes, etc). To put metrics on the right axis (and to change the colour of the metric line too) click on the legend line, and from there select the axis/colour as required:
You can set thresholds to overlay on the graph (to highlight warning/critical values, for example), as well as customise the legend to show an aggregate value for each metric, show it in a table, or not at all:
The last tab, Display Styles, has even more goodies. One of my favourite new additions to Grafana is the Tooltip. Enabling this gives you a tooltip when you hover over the graph, displaying the value of all the series at that point in time:
You can change the presentation of the graph, which by default is a line, adding bars and/or points, as well as changing the line width and fill.
- Solid Fill:
- Bars only
- Points and translucent fill:
Advanced InfluxDB Query Building in Grafana
Identifying Metric Series with RegEx
In the example above there were two fairly specific metrics that we wanted to report against. What you will find is much more common is wanting to graph out a set of metrics from the same ‘family’. For example, OBIEE DMS metrics include a great deal of information about each Connection Pool that’s defined. They’re all in a hierarchy that looks like this:
obi11-01.OBI.Oracle_BI_DB_Connection_Pool.Star_01_-_Sample_App_Data_ORCL_Sample_Relational_Connection
Under which you’ve got
Capacity
Current Connection Count
Current Queued Requests
and so on.
So rather than creating an individual metric query for each of these (similar to how we did for the two session metrics previously) we’ll use InfluxDB’s rather smart regex method for identifying metric series in a query. And because Grafana is awesome, writing the regex isn’t as painful as it could be because the autocomplete validates your expression in realtime. Let’s get started.
First up, let’s work out the root of the metric series that we want. In this case, it’s the orcl connection pool. So in the series box, enter /orcl/. The / delimiters indicate that it is a regex query. As soon as you enter the second / you’ll get the autocomplete showing you the matching series:
/orcl/
If you scroll down the list you’ll notice there are other metrics in there besides Connection Pool ones, so let’s refine our query a bit
/orcl_Connection_Pool/
That’s better, but we’ve now got all the Connection Pool metrics, which, whilst fascinating to study (no, really), complicate our view of the data a bit, so let’s pick out just the ones we want. First up we’ll put in the dot that’s going to precede any of the final identifiers for the series (.Capacity, .Current Connection Count, etc). A dot is a special character in regex so we need to escape it: \.
/orcl_Connection_Pool\./
And now let’s check we’re on the right lines by specifying just Capacity to match:
/orcl_Connection_Pool\.Capacity/
Excellent. So we can now add in more permutations, with a bracketed list of options separated with the pipe (regex OR) character:
/orcl_Connection_Pool\.(Capacity|Current)/
We can use a wildcard .* for expressions that are not directly after the dot that we specified in the match pattern. For example, let’s add any metric that includes Queued:
/orcl_Connection_Pool\.(Capacity|Current|.*Queued)/
But now we’ve a rather long list of matches, so let’s refine the regex to narrow it down:
/orcl_Connection_Pool\.(Capacity|Current|Peak.*Queued).+(Requests|Connection)/
(Something else I tried before this was regex negative look-behind, but it looks like Go (which InfluxDB is written in) doesn’t support it).
Setting the Alias to $4, and the legend to include values in a table format, gives us this:
Now to be honest here, in this specific example, I could have created four separate metric queries in a fraction of the time it took to construct that regex. That doesn’t detract from the usefulness and power of regex though; it simply illustrates the point of using the right tool for the right job, and where there are a few easily identified and static metrics, a manual selection may be quicker.
Aggregates
By default Grafana will request the mean of a series at the defined grain of time from InfluxDB. The grain of time is calculated automatically based on the time window you’ve got shown in your graph. If you’re collecting data every five seconds, and build a graph to show a week’s worth of data, showing all 120960 data points will end up as a very indistinct line:
So instead Grafana generates an InfluxDB query that rolls the data up to more sensible intervals – in the case of a week’s worth of data, every 10 minutes:
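The query it generates is along these lines (a sketch only, using the test.data.foo.bar series from earlier – the exact time predicate will depend on the window you have selected):

select mean(value) from test.data.foo.bar where time > now() - 7d group by time(10m)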
You can see, and override, the time grouping in the metric panel. By default it’s dynamic and you can see the current value in use in lighter text, like this:
You can also set an optional minimum time grouping in the second “group by time” box (beneath the first). This is a time grouping under which Grafana will never go, so if you always want to roll up to, say, at least a minute (but higher if the duration of the graph requires it), you’d set that here.
So I’ve said that InfluxDB can roll up the figures – but how does it roll up multiple values into one? By default, it takes the mean of all the values. Depending on what you’re looking at, this can be less than desirable, because you may miss important spikes and troughs in your data. So you can change the aggregate rule, to look at the maximum value, minimum, and so on. Do this by clicking on the aggregation in the metric panel:
This is the same series of data, but shown as 5 second samples rolled up to a minute, using the mean, max, and min aggregate rules:
For a look at how all three series can be better rendered together see the discussion of Series Specific Overrides later in this article.
You can also use aggregate functions with measures that may not be simple point-in-time values. For example, with an incrementing/accumulating measure (such as a counter like “number of requests since launch”) you actually want to graph the rate of change, the delta between each point. To do this, use the derivative function. In this graph you can see the default aggregation (mean, in green) against derivative, in yellow. One is in effect the “actual” value of the measure, the other is the rate of change, which is much more useful to see in a time series.
Note that if you are using derivative you may need to fix the group by time to the grain at which you are storing data. In my example I am storing data every 5 seconds, but if the default time grain on the graph is 1s then it won’t show the derivative data.
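In raw query form a derivative query looks something like this (a sketch only – my.accumulating.counter is a hypothetical series name, and the group by time here matches a 5 second collection interval):

select derivative(value) from my.accumulating.counter group by time(5s)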
See more details about the aggregations available in InfluxDB in the docs here. If you want to use an aggregation (or any query) that isn’t supported in the Grafana interface simply click on the cog icon and select Raw query mode from where you can customise the query to your heart’s content.
Drawing inverse graphs
As mentioned just above, you can customise the query sent to InfluxDB, which means you can do this neat trick to render multiple related series that would otherwise overlap by inverting one of them. In this example I’ve got the network I/O drawn conventionally:
But since metrics like network I/O, disk I/O and so on have a concept of adding and taking, it feels much more natural to see the input as ‘positive’ and output as ‘negative’.
Which certainly for my money is easier to see at a glance whether we’ve got data coming or going, and at what volume. To implement this simply set up your series as usual, and then for the series you want to invert click on the cog icon and select Raw query mode. Then in place of
mean(value)
put
mean(value*-1)
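So the full raw query ends up looking something like this (a sketch only – the series name here is hypothetical, and $timeFilter and $interval are the placeholders Grafana normally injects into its queries):

select mean(value*-1) from my.host.os.nettotals.kbout where $timeFilter group by time($interval)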
Series Specific Overrides
The presentation options that you specify for a graph will by default apply to all series shown in the graph. As we saw previously you can change the colour, width, fill etc of a line, or render the graph as bars and/or points instead. This is all good stuff, but presumes that all measures are created equal – that every piece of data on the graph has the same meaning and importance. Often we’ll want to change how we display a particular set of data, and we can use Series Specific Overrides in Grafana to do that.
For example in this graph we can see the number of busy connections and the available capacity:
But the actual (Busy Connections) is the piece of data we want to see at a glance, against the context of the available Capacity. So by setting up a Series Specific Override we can change the formatting of each line individually – calling out the actual (thick green) and making the threshold more muted (purple):
To configure a Series Specific Override go to the Display Styles panel and click Add series override rule. Pick the specific series or use a regex to identify it, and then use the + button to add formatting options:
A very useful formatting option is Z-index, which enables you to define the layering on the graph so that a given series is rendered on top (or below) another. To bring something to the very front use a Z-index of 3; for the very back use -3. Series Specific Overrides are also a good way of dynamically assigning multiple Y axes.
Another great use of Series Specific Overrides is to show the min/max range for data as a shaded area behind the main line, thus providing more context for aggregate data. I discussed above how Grafana can get InfluxDB to roll up (aggregate) values across time periods to make graphs more readable when shown for long time frames – and how this can mask data exceptions. If you only show the mean, you miss small spikes and troughs; if you only show the max or min then you over or under count the actual impact of the measure. But, we can have the best of all worlds! The next two graphs show the starting point – showing just the mean (missing the subtleties of a data series) and showing all three versions of a measure (ugly and unusable):
Instead of this, let’s bring out the mean, but still show it in context of the range of the values within the aggregate:
I hope you’d agree that this is a much cleaner and clearer way of presenting the data. To do it we need two steps:
- Make sure that each metric has an alias. This is used in the label but importantly is also used in the next step to identify each data series. You can skip this bit if you really want and regex the series to match directly in the next step, but setting an alias is much easier
- On the Display Styles tab click Add series override rule at the bottom of the page. In the alias or regex box you should see your aliases listed. Select the one which is the maximum series. Then choose the formatting option Fill below to and select the minimum series
You’ll notice that Grafana automagically adds in a second rule to disable lines for the minimum series, as well as on the existing maximum series rule.
Optionally, add another rule for your mean series, setting the Z-index to 3 to bring it right to the front.
All pretty simple really, and a nice result:
Variables in Grafana (a.k.a. Templating)
In lots of metric series there are often going to be groups of measures that are associated with recurring instances of a parent. For example, CPU details for multiple servers, or in the OBIEE world connection pool details for multiple connection pools.
centos-base.os.cputotals.user
db12c-01.os.cputotals.user
gitserver.os.cputotals.user
media02.os.cputotals.user
monitoring-01.os.cputotals.user
etc
Instead of creating a graph for each permutation, or modifying the graph each time you want to see a different instance, you can instead use Templating, which is basically creating a variable that can be incorporated into query definitions.
To create a template you first need to enable it per dashboard, using the cog icon in the top-right of the dashboard:
Then open the Templating option from the menu opened by clicking on the cog on the left side of the screen
Now set up the name of the variable, and specify a full (not partial, as you would in the graph panel) InfluxDB query that will return all the values for the variable – or rather, the list of all series from which you’re going to take the variable name.
Let’s have a look at an example. Within the OBIEE DMS metrics you have details about the thread pools within the BI Server, and there are different thread pool types, and it is that type that I want to store. Here’s a snippet of the series:
[...]
obi11-01.OBI.Oracle_BI_Thread_Pool.DB_Gateway.Peak_Queued_Requests
obi11-01.OBI.Oracle_BI_Thread_Pool.DB_Gateway.Peak_Queued_Time_milliseconds
obi11-01.OBI.Oracle_BI_Thread_Pool.DB_Gateway.Peak_Thread_Count
obi11-01.OBI.Oracle_BI_Thread_Pool.Server.Accumulated_Requests
obi11-01.OBI.Oracle_BI_Thread_Pool.Server.Average_Execution_Time_milliseconds
obi11-01.OBI.Oracle_BI_Thread_Pool.Server.Average_Queued_Requests
obi11-01.OBI.Oracle_BI_Thread_Pool.Server.Average_Queued_Time_milliseconds
obi11-01.OBI.Oracle_BI_Thread_Pool.Server.Avg_Request_per_sec
[...]
Looking down the list, it’s the DB_Gateway and Server values that I want to extract. First up is some regex to return the series with the thread pool name in:
/.*Oracle_BI_Thread_Pool.*/
and now build it as part of an InfluxDB query:
list series /.*Oracle_BI_Thread_Pool.*/
You can validate this against InfluxDB directly using the web UI for InfluxDB or curl as described much earlier in this article. Put the query into the Grafana Template definition and hit the green play button. You’ll get a list back of all series returned by the query:
Now we want to extract out the threadpool names, and we do this using the regex capture group ( ):
/.*Oracle_BI_Thread_Pool\.(.*)\./
Hit play again and the results from the first query are parsed through the regex and you should have just the values you need:
If the values are likely to change (for example, Connection Pool names will change in OBIEE depending on the RPD) then make sure you select Refresh on load. Click Add and you’re done.
You can also define variables with fixed values, which is good if they’re never going to change, or they are but you’ve not got your head around RegEx. Simply change the Type to Custom and enter comma-separated values.
To use the variable simply reference it, prefixed with a dollar sign, in the metric definition:
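(For example, assuming the variable created above was named threadpool, the series regex from earlier might become /.*Oracle_BI_Thread_Pool\.$threadpool\./ – a sketch only.)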
or in the title:
To change the value selected just use the dropdown from the top of the screen:
Annotations
Another very nice feature of Grafana is Annotations. These are overlays on each graph at a given point in time to provide additional context to the data. I use them when analysing test data, to be able to see which script I ran and when:
There are two elements to Annotations – setting them up in Grafana, and getting the data into the backend (InfluxDB in this case, but they work with other data sources such as Graphite too).
Storing an Annotation
An annotation is nothing more than some time series data, but typically a string at a given point in time rather than a continually changing value (measure) over time.
To store it, just chuck the data at InfluxDB and it creates the necessary series. In this example I’m using one called events but it could be called foobar for all it matters. You can read more about putting data into InfluxDB here and choose the method most suitable for the event that you want to record and display as an annotation. I’m running some bash-based testing, so curl fits well here, but if you were using a Python program you could use the Python InfluxDB client, and so on.
Sending data with curl is easy, and looks like this:
curl -X POST -d '[{"name":"events","columns":["id","action"],"points":[["big load test","start"]]}]' 'http://monitoring-server.foo.com:8086/db/carbon/series?u=root&p=root'
The main bit of interest, other than the obvious server name and credentials, is the JSON payload that we’re sending. Pulling it out and formatting it a bit more nicely:
{
  "name": "events",
  "columns": [
    "id",
    "action"
  ],
  "points": [
    [
      "big load test",
      "start"
    ]
  ]
}
So the series (“table”) we’re loading is called events, and we’re going to store an entry for this point in time with two columns, id and action, storing the values big load test and start respectively. Interestingly (and this is something very powerful), InfluxDB’s schema can evolve in a way that no traditional RDBMS could. Never mind that we’ve not had to define events before loading it, we could even load it at subsequent time points with more columns if we want, simply by sending them in the data payload.
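For example (a purely illustrative payload – the duration column here is made up), a later post to the same series could add a new column without any prior schema change:

curl -X POST -d '[{"name":"events","columns":["id","action","duration"],"points":[["big load test","end",3600]]}]' 'http://monitoring-server.foo.com:8086/db/carbon/series?u=root&p=root'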
Coming back to real-world usage, we want to make the load as dynamic as possible, so with a few variables and a bit of bash magic we have something like this that will automatically load to InfluxDB the start and end time of every load test that gets run, along with the name of the script that ran it and the host on which it ran:
INFLUXDB_HOST=monitoring-server.foo.com
INFLUXDB_PORT=8086
INFLUXDB_USER=root
INFLUXDB_PW=root
HOSTNAME=$(hostname)
SCRIPT=`basename $0`

curl -X POST -d '[{"name":"events","columns":["host","id","action"],"points":[["'"$HOSTNAME"'","'"$SCRIPT"'","start"]]}]' "http://$INFLUXDB_HOST:$INFLUXDB_PORT/db/carbon/series?u=$INFLUXDB_USER&p=$INFLUXDB_PW"

echo 'Load testing bash code goes here. For now let us just go to sleep'
sleep 60

curl -X POST -d '[{"name":"events","columns":["host","id","action"],"points":[["'"$HOSTNAME"'","'"$SCRIPT"'","end"]]}]' "http://$INFLUXDB_HOST:$INFLUXDB_PORT/db/carbon/series?u=$INFLUXDB_USER&p=$INFLUXDB_PW"
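If your test harness were Python rather than bash, a minimal equivalent sketch (again posting to the HTTP API with the generic requests library, rather than using the dedicated InfluxDB client) might look like this:

import json
import os
import socket
import sys
import time

import requests

# Same endpoint and credentials as the curl version above
INFLUXDB_URL = "http://monitoring-server.foo.com:8086/db/carbon/series?u=root&p=root"

def log_event(action):
    # One row in the 'events' series: host, id (script name) and a start/end action
    payload = [{
        "name": "events",
        "columns": ["host", "id", "action"],
        "points": [[socket.gethostname(), os.path.basename(sys.argv[0]), action]],
    }]
    requests.post(INFLUXDB_URL, data=json.dumps(payload))

log_event("start")
# Load testing code goes here. For now let us just go to sleep
time.sleep(60)
log_event("end")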
Displaying annotations in Grafana
Once we’ve got a series (“table”) in InfluxDB with our events in, pulling them through into Grafana is pretty simple. Let’s first check the data we’ve got, by going to the InfluxDB web UI (http://influxdb:8083) and from the Explore Data » page running a query against the series we’ve loaded:
select * from events
The time value is in epoch milliseconds, and the remaining values are whatever you sent to it.
Now in Grafana enable Annotations for the dashboard (via the cog in the top-right corner)
Once enabled use the cog in the top-left corner to open the menu from which you select the Annotations dialog. Click on the Add tab. Give the event group a name, and then the InfluxDB query that pulls back the relevant data. All you need to do is take the above query that you used to test out the data and append the necessary time predicate where $timeFilter so that only events for the time window currently being shown are returned:
select * from events where $timeFilter
Click Add and then set your time window to include a period when an event was recorded. You should see a nice clear vertical line and a marker on the x-axis that when you hover over it gives you some more information:
You can use the Column Mapping options in the Annotations window to bring in additional information into the tooltip. For example, in my event series I have the id of the test, action (start/end), and the hostname. I can get this overlaid onto the tooltip by mapping the columns thus:
Which then looks like this on the graph tooltips:
N.B. currently (Grafana v1.9.1) when making changes to an Annotation definition you need to refresh the graph views after clicking Update on the annotation definition, otherwise you won’t see the change reflected in the annotations on the graphs.
Sparklines
Everything I’ve written about Grafana so far has revolved around the graphs that it creates, and unsurprisingly because this is the core feature of the tool, the bread and butter. But there are other visualisation options available – “Singlestat”, and “Text”. The latter is pretty obvious and I’m not going to discuss it here, but Singlestat, a.k.a. Sparkline and/or Performance Tiles, is awesome and well worth a look. First, an illustration of what I’m blathering about:
A nice headline figure of the current number of active sessions, along with a sparkline to show the trend of the metric.
To add one of these to your dashboard go to the green row menu icon on the left (it’s mostly hidden and will pop out when you hover over it) and select Add Panel -> singlestat.
On the panel that appears go to the edit screen as shown:
In the Metrics panel specify the series as you would with a graph, but remember you need to pull back just a single series – no point writing a regex to match multiple ones. Here I’m going to show the number of queued requests on a connection pool. Note that because I want to show the latest value I change the aggregation to last:
In the General tab set a title for the panel, as well as the width of it – unlike graphs you typically want these panels to be fairly narrow since the point is to show a figure not lots of detail. You’ll notice that I’ve also defined a Drilldown / detail link so that a user can click on the summary figure and go to another dashboard to see more detail.
The Options tab gives you the option to set font size, prefixes/suffixes, and is also where you set up sparkline and conditional formatting.
Tick the Spark line box to draw a sparkline within the panel – if you’ve not seen them before, sparklines are great visualisations for showing the trend of a metric without fussing with axes and specific values. Tick the Background mode box to use the entire height of the panel for the graph and overlay the summary figure on top.
Now for the bit I think is particularly nice – conditional formatting of the singlestat panel. It’s dead easy and not a new concept, but it is a really great way to let a user see at a glance if there’s something that needs their attention. In the case of this example here, queueing connections, any queueing is dodgy and more than a few is bad (m’kay). So let’s colour code it:
You can even substitute values for words – maybe the difference between 61 queued sessions and 65 is fairly irrelevant; it’s the fact that there is that magnitude of queued sessions that is the problem:
Note that the values are absolutes, not ranges. There is an open issue for this so hopefully that will change. The effect is nice though:
Conclusion
Hopefully this article has given you a good idea of what is possible with data stored in InfluxDB and visualised in Grafana, and how to go about doing it.
If you’re interested in OBIEE monitoring you might also be interested in the ELK suite of tools that complements what I have described here well, giving an overall setup like this:
You can read more about its use with OBIEE here, or indeed get in touch with us if you’d like to learn more or have us come and help with your OBIEE monitoring and diagnostics.
Rittman Mead’s Development Cluster, EM12c and the Blue Mendora VMware EM Plugin
For development and testing purposes, Rittman Mead run a VMWare VSphere cluster made up of a number of bare-metal servers hosting Linux, Windows and other VMs. Our setup has grown over the years from a bunch of VMs running on Mac Mini servers to where we are now, and was added to considerably over the past twelve months as we started Hadoop development – a typical Cloudera CDH deployment we work with requires six or more nodes along with the associated LDAP server, Oracle OBIEE + ODI VMs and NAS storage for the data files. Last week we added our Exalytics server as a repurposed 1TB ESXi VM server giving us the topology shown in the diagram below.
One of the purposes of setting up a development cluster like this was to mirror the types of datacenter environments our customers run, and we use VMWare VSphere and VCenter Server to manage the cluster as a whole, using technologies such as VMWare VMotion to test out alternatives to WebLogic, OBIEE and Oracle Database HA. The screenshot below shows the cluster setup in VMWare VCenter.
We’re also big advocates of Oracle Enterprise Manager as a way of managing and monitoring a customer’s entire Oracle BI & data warehousing estate, using the BI Management Pack to manage OBIEE installations as a whole, building alerts off of OBIEE Usage Tracking data, and creating composite systems and services to monitor a DW, ETL and BI system from end-to-end. We register the VMs on the VMWare cluster as hosts and services in a separate EM12cR4 install and use it to monitor our own development work, and show the various EM Management Packs to customers and prospective clients.
Something we’ve wanted to do for a while though is bring the actual VM management into Enterprise Manager as well, and to do this we’ve also now setup the Blue Mendora VMWare Plugin for Enterprise Manager, which connects to your VMWare VCenter, ESXi, Virtual Machines and other infrastructure components and brings them into EM as monitorable and manageable components. The plugin connects to VCenter and the various ESXi hosts and gives you the ability to list out the VMs, Hosts, Clusters and so on, monitor them for resource usage and set up EM alerts as you’d do with other EM targets, and perform VCenter actions such as stopping, starting and cloning VMs.
What’s particularly useful with such a virtualised environment though is being able to include the VM hypervisors, VM hosts and other VMWare infrastructure in the composite systems we define; for example, with a CDH Hadoop cluster that authenticates via LDAP and Kerberos, is used by OBIEE and ODI, and is hosted on two VMWare ESXi hosts that are part of a VSphere cluster, we can get an overall picture of the system health that doesn’t stop at the host level.
If your organization is using VMWare to host your Oracle development, test or production environments and you’re interested in how Enterprise Manager can help you monitor and manage the whole estate, including the use of Blue Mendora’s VMWare EM Plugin, drop me a line and I’d be happy to take you through what’s involved.
Enable Your Dashboard Designers to Concentrate on User Experience Rather Than Syntax (or How to Add a Treemap in Two Lines)
JavaScript is a powerful tool that can be used to add functionality to OBIEE dashboards. However, for many whose wheelhouses are more naturally aligned with Stephen Few than with John Resig, adding JavaScript to a dashboard can be intimidating. To facilitate this process, steps can be taken to centralize and simplify the invocation of this code. In this post, I will demonstrate how to create your very own library of custom HTML tags. These tags will empower anyone to add 3rd party visualizations from libraries like D3 without a lick of JavaScript experience.
What is a “Custom Tag”?
Most standard HTML tags provide very simple behaviors. Complex behaviors have typically been reserved for JavaScript. While, for the most part, this is still the case, custom tags can be used to provide a more intuitive interface to the JavaScript. The term “custom tag library” refers to a developer-defined library of HTML tags that are not natively supported by the HTML standard, but are instead included at run-time. For example, one might implement a <RM-MODAL> tag to produce a button that opens a modal dialog. Behind the scenes, JavaScript will be calling the shots, but the code in your narrative view or dashboard text section will look like plain old HTML tags.
Developing a JavaScript Library
The first step when incorporating an external library onto your dashboard is to load it. To do so, it’s often necessary to add JavaScript libraries and css files to the <head> of a document to ensure they have been loaded prior to being called. However, in OBIEE we don’t have direct access to the <head> from the Dashboard editor. By accessing the DOM, we can create style and script src objects on the fly and append them to the <head>. The code below appends external scripts to the document’s <head> section.
Figure 1. dashboard.js
function loadExtFiles(srcname, srctype){
    if (srctype=="js"){
        var src=document.createElement('script')
        src.setAttribute("type","text/JavaScript")
        src.setAttribute("src", srcname)
    } else if (srctype=="css"){
        var src=document.createElement("link")
        src.setAttribute("rel", "stylesheet")
        src.setAttribute("type", "text/css")
        src.setAttribute("href", srcname)
    }

    if ((typeof src !== "undefined") && (src !== false)) {
        parent.document.getElementsByTagName("head")[0].appendChild(src)
    }
}

window.onload = function() {
    loadExtFiles("/rm/js/d3.v3.min.js", "js")
    loadExtFiles("/rm/css/visualizations.css", "css")
    loadExtFiles("/rm/js/visualizations.js", "js")
}
In addition to including the D3 library, we have included a CSS file and a JavaScript file, named visualizations.css and visualizations.js respectively. The visualizations.css file contains the default formatting for the visualizations and visualizations.js is our library of functions that collect parameters and render visualizations.
The D3 gallery provides a plethora of useful and not-so-useful examples to fulfill all your visualization needs. If you have a background in programming, these examples are simple enough to customize. If not, this is a tall order. Typically the process would go something like this:
- Determine how the data is currently being sourced.
- Rewrite that section of the code to accept data in a format that can be produced by OBIEE. Often this requires a bit more effort in the refactoring as many of the examples are sourced from CSV files or JSON. This step will typically involve writing code to create objects and add those objects to an array or some other container. You will then have to determine how you are passing this data container to the D3 code. Will the D3 code be rewritten as a function that takes in the array as a parameter? Will the array be scoped in a way that the D3 code can simply reference it?
- Identify how configurations like colors, sizing, etc. are set and determine how to customize them as per your requirements.
- Determine what elements need to be added to the narrative view to render the visualization.
If you are writing your own visualization from scratch, these same steps are applied in the design phase. Either way, the JavaScript code that results from performing these steps should not be the interface exposed to a dashboard designer. The interface should be as simple and understandable as possible to promote re-usability and avoid implementation syntax errors. That’s where custom HTML tags come in.
Wait… Why use tags rather than exposing the Javascript function calls?
Using custom tags allows for a more intuitive implementation than JavaScript functions. Simple JavaScript functions do not support named arguments, which means JavaScript depends on argument order to differentiate them.
<script>renderTreemap("@1", "@2", @3, null, null, "Y");</script>
In the example above, anyone viewing this call without being familiar with the function definition would have a hard time deciphering the parameters. By using a tag library to invoke the function, the parameters are more clear. Parameters that are not applicable for the current invocation are simply left out.
<rm-treemap name="@1" grouping="@2" measure=@3 showValues="Y"/>
That being said, you should still familiarize yourself with the correct usage prior to using them.
Now some of you may be saying that named arguments can be done using object literals, but the whole point of this exercise is to reduce complexity for front end designers, so I wouldn’t recommend this approach within the context of OBIEE.
What do these tags look like and how do they pass the data to the JavaScript?
For this example, we will be providing a Treemap visualization. As could be expected, the example provided by the link is sourced from a JSON object. For our use, we will have to rewrite that code to source the data from the attributes in our custom HTML tags. The D3 code is expecting a hierarchical object made up of leaf node objects contained within grouping objects. The leaf node objects consist of a “name” field and a “size” field. The grouping object consists of a “name” field and a “children” field that contains an array of leaf node objects (a sketch of this structure is shown after the list below). By default, the size values, or measures, are not displayed and are only used to size the nodes. Additionally, the dimensions of the treemap are hard-coded values. Inevitably users will want to change these settings, so for each of the settings we want to expose for configuration we will provide attribute fields on the custom tag we build. Ultimately, that is the purpose of this design pattern.
- Name your custom tag
- Identify all your inputs
- Create a tag attribute for each input
- Within a JavaScript library, extract and organize the values
- Pass those values to D3
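As a concrete illustration of the structure described above (the values here are made up), the hierarchical object that gets passed to D3 ends up looking roughly like this:

{
  "name": "TreeMap",
  "children": [
    {
      "name": "Group A",
      "children": [
        { "name": "Item 1", "size": 30 },
        { "name": "Item 2", "size": 12 }
      ]
    },
    {
      "name": "Group B",
      "children": [
        { "name": "Item 3", "size": 7 }
      ]
    }
  ]
}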
For this example we will configure behaviours for a tag called <rm-treemap>. Note: It is a good practice to add a dash to your custom tags to ensure they will not match an existing HTML tag. This tag will support the following attributes:
- name – Name of the dimension being measured
- measure – Used to size the node boxes
- grouping – Used to determine the color of the node boxes
- width – Width in pixels
- height – Height in pixels
- showValues – Y/N
It will be implemented within a narrative view like so:
<rm-treemap name="@1" grouping="@2" measure=@3 width="700" height="500" showValues="Y"/>
In order to make this tag useful, we need to bind behaviors to it that are controlled by the tag attributes. To extract the attribute values from <rm-treemap>, the JavaScript code in visualizations.js will use two methods from the Element Web API, Element.getElementsByTagName and Element.getAttribute.
Fig 2. Lines 8-11 use these methods to identify the first <rm-treemap> tag and extract the values for width, height and showValues. It was necessary to specify a single element, in this case the first one, because getElementsByTagName returns a collection of all matching elements within the HTML document. There will most likely be multiple matches, as the OBIEE narrative field loops through the query results and produces a <rm-treemap> tag for each row.
In Fig 2. lines 14-41, the attributes for name, measure and grouping are extracted and bound to either leaf node objects or grouping objects. Additionally, line 11 and lines 49-50 configure the displayed values and the size of the treemap. The original code was further modified on line 62 to use the first <rm-treemap> element to display the output.
Finally, lines 99-101 ensure that this code is only executed when an <rm-treemap> tag is detected on the page. The last step before deployment is documentation. If you are going to go to the trouble of building a library of custom tags, you need to set aside the time to document their usage; otherwise, regardless of how much you have simplified things, no one will be able to use them.
Figure 2. visualizations.js
01 var renderTreemap = function () {
02     // Outer Container (Tree)
03     var input = {};
04     input.name = "TreeMap";
05     input.children = [];
06
07     // Collect parameters from first element
08     var treeProps = document.getElementsByTagName("rm-treemap")[0];
09     var canvasWidth = treeProps.getAttribute("width") ? treeProps.getAttribute("width") : 960;
10     var canvasHeight = treeProps.getAttribute("height") ? treeProps.getAttribute("height") : 500;
11     var showValues = (treeProps.getAttribute("showValues") || "N").toUpperCase();
12
13     // Populate collection of data objects with parameters
14     var mapping = document.getElementsByTagName("rm-treemap");
15     for (var i = 0; i < mapping.length; i++) {
16         var el = mapping[i];
17         var box = {};
18         var found = false;
19
20         box.name = (showValues == "Y") ? el.getAttribute("name") +
21             "<br> " +
22             el.getAttribute("measure") : el.getAttribute("name");
23         box.size = +el.getAttribute("measure"); // coerce to a number for the treemap layout
24         var curGroup = el.getAttribute("grouping");
25
26         // Add individual items to groups
27         for (var j = 0; j < input.children.length; j++) {
28             if (input.children[j].name === curGroup) {
29                 input.children[j].children.push(box);
30                 found = true;
31             }
32         }
33
34         if (!found) {
35             var grouping = {};
36             grouping.name = curGroup;
37             grouping.children = [];
38             grouping.children.push(box);
39             input.children.push(grouping);
40         }
41     }
42
43     var margin = {
44             top: 10,
45             right: 10,
46             bottom: 10,
47             left: 10
48         },
49         width = canvasWidth - margin.left - margin.right,
50         height = canvasHeight - margin.top - margin.bottom;
51
52     // Begin D3 visualization
53     var color = d3.scale.category20c();
54
55     var treemap = d3.layout.treemap()
56         .size([width, height])
57         .sticky(true)
58         .value(function (d) {
59             return d.size;
60         });
61
62     var div = d3.select("rm-treemap").append("div")
63         .style("position", "relative")
64         .style("width", (width + margin.left + margin.right) + "px")
65         .style("height", (height + margin.top + margin.bottom) + "px")
66         .style("left", margin.left + "px")
67         .style("top", margin.top + "px");
68
69     var node = div.datum(input).selectAll(".treeMapNode")
70         .data(treemap.nodes)
71         .enter().append("div")
72         .attr("class", "treeMapNode")
73         .call(position)
74         .style("background", function (d) {
75             return d.children ? color(d.name) : null;
76         })
77         .html(function (d) {
78             return d.children ? null : d.name;
79         });
80
81     function position() {
82         this.style("left", function (d) {
83                 return d.x + "px";
84             })
85             .style("top", function (d) {
86                 return d.y + "px";
87             })
88             .style("width", function (d) {
89                 return Math.max(0, d.dx - 1) + "px";
90             })
91             .style("height", function (d) {
92                 return Math.max(0, d.dy - 1) + "px";
93             });
94     }
95     // End D3 visualization
96 }
97
98 // Invoke visualization code only if an rm-treemap tag exists on the page
99 var doTreemap = document.getElementsByTagName("rm-treemap");
100 if (doTreemap.length > 0) {
101     renderTreemap();
102 }
Figure 3. visualizations.css
.treeMapNode {
    border: solid 1px white;
    border-radius: 5px;
    font: 10px sans-serif;
    line-height: 12px;
    overflow: hidden;
    position: absolute;
    text-indent: 2px;
}
Putting it all together
The first step to implementing this code is to make it accessible. To do this, you will need to deploy your code to the WebLogic server. Many years ago Venkatakrishnan Janakiraman detailed how to deploy code to WebLogic in his blog post about skinning. That process still applies here, although you don’t need to worry about the parts covering instanceconfig.xml changes or skinning itself.
Once the code has been deployed to the server, only two lines of code are required to implement this visualization. First, the libraries need to be included by sourcing in the dashboard.js file. This can be done within the Narrative view’s prefix field, but I have chosen to add it to a text section on the dashboard, which allows multiple analyses to use the libraries without duplicating the load process in multiple places.
The text section should be configured as follows. (Note: the path to dashboard.js is relative to the root path specified in your deployment.)
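As a rough sketch, and assuming dashboard.js was deployed under a path of your choosing (the path below is a placeholder, not the real location), the text section just needs to contain a script include, with the “Contains HTML Markup” option ticked:

<script type="text/javascript" src="/yourDeploymentPath/dashboard.js"></script>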
From the Narrative View, add the <rm-treemap> tag to the Narrative field and populate the attributes with the appropriate data bind variables and your desired settings.
This should result in the following analysis.
In summary:
- Deploy the dashboard.js, visualizations.js and visualizations.css files to WebLogic
- From a dashboard text section, source in dashboard.js, which will in turn include visualizations.js and visualizations.css
- Add the <rm-treemap> tag to the Narrative field of a Narrative view.
As you can see, implementing custom HTML tags to serve as the interface for a D3 visualization will save your dashboard designers from having to sift through dozens, if not hundreds, of lines of confusing code. This will reduce implementation errors, as the syntax is much simpler than JavaScript, and will promote conformity, as all visualizations will be sourced from a common library. Hopefully this post was informative and will inspire you to consider this pattern, or a similarly purposed one, to make your code easier to implement.
Concurrent RPD Development in OBIEE
OBIEE is a well established product, having been around in various incarnations for well over a decade. The latest version, OBIEE 11g, was released 3.5 years ago, and there are mutterings of OBIEE 12c already. In all of this time, however, one thing it has never quite nailed is the ability for multiple developers to work with the core metadata model – the repository, known as the RPD – concurrently and in isolation. Without this, development is doomed to be serialised, with the associated bottlenecks and inability to scale in line with the number of developers available.
My former colleague Stewart Bryson wrote a series of posts back in 2013 in which he outlines the criteria for a successful OBIEE SDLC (Software Development LifeCycle) method. The key points were:
- There should be a source control tool (a.k.a. version control system, VCS) that enables us to store all artefacts of the BI environment, including the RPD, the Presentation Catalog, and so on. From here we can tag snapshots of the environment at a given point as being ready for release, and as markers for rollback if we take a wrong turn during development.
- Developers should be able to do concurrent development in isolation.
- To do this, source control is mandatory in order to enable branch-based development, also known as feature-driven development, which is a central tenet of an Agile method.
Oracle’s only answer to the SDLC question for OBIEE has always been MUDE (the Multi-User Development Environment). But MUDE falls short in several respects:
- It only manages the RPD – there is no handling of the Presentation Catalog etc
- It does not natively integrate with any source control
- It puts the onus of conflict resolution on the developer rather than the “source master” who is better placed to decide the outcome.
Whilst it wasn’t great, it wasn’t bad, and MUDE was all we had. Either that, or manual integration into source control tools (1, 2), which was clunky to say the least. The RPD remained a single object that could not be merged or managed except through the Administration Tool itself, so any kind of automatic merge strategy that the rest of the software world was adopting with source control tools was inapplicable to OBIEE. The merge would always require manually launching the Administration Tool and figuring out the merge candidates, before slowly dying in despair at having to repeat such a tortuous and error-prone process on a regular basis…
Then back in early 2012 Oracle introduced a new storage format for the RPD. Instead of storing it as a single binary file, closed to prying eyes, it was burst into a set of individual files in MDS XML format.
For example, one Logical Table was now one XML file on disk, made up of entities such as LogicalColumn, ExprText, LogicalKey and so on:
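As a heavily paraphrased sketch (element nesting simplified, and all names and identifiers made up, so don’t treat this as the exact schema), such a Logical Table file looks broadly like this:

<LogicalTable mdsid="m1234abcd" name="Fact Sales">
  <Columns>
    <LogicalColumn mdsid="m5678efgh" name="Revenue">
      <ExprText>"Core"."Fact Sales"."Revenue"</ExprText>
    </LogicalColumn>
  </Columns>
  <LogicalKey name="Fact Sales Key"/>
</LogicalTable>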
It even came with a set of configuration screens for integration with source control. It looked like the answer to all our SDLC prayers – now we OBIEE developers could truly join in with the big boys at their game. The reasoning went something like:
- An RPD stored in MDS XML is no longer binary
- git can merge code that is plain text from multiple branches
- Let’s merge MDS XML with git!
But how viable is MDS XML as a storage format for the RPD used in conjunction with a source control tool such as git? As we will see, it comes down to the Good, the Bad, and the Ugly…
The Good
As described here, concurrent and unrelated developments on an RPD in MDS XML format can be merged successfully by a source control tool such as git. Each logical object is a file, so git just munges (that’s the technical term) the files modified in each branch together to come up with a resulting MDS XML structure containing the changes from each development.
The Bad
This is where the wheels start to come off. See, our automagic merging fairy dust is based on the idea that individually changed files can be spliced together, and that since MDS XML is not binary, we can trust a source control tool such as git to work well with changes within the files themselves too.
Unfortunately this is a fallacy, and by using MDS XML we expose ourselves to greater complications than we would if we just stuck to a simple binary RPD merged through the OBIEE toolset. The problem is that whilst MDS XML is not binary, it is not unstructured either. It is structured, and it has application logic within it (the mdsid, of which more below).
Within the MDS XML structure, individual first-class objects such as Logical Tables are individual files, and structured within them in the XML are child-objects such as Logical Columns:
Source control tools such as git cannot parse it, and therefore do not understand what is a real conflict versus an unrelated change within the same object. If you stop and think for a moment (or longer) about quite what would be involved in accurately parsing XML (let alone MDS XML), you’ll realise that you would basically need to reverse-engineer the Administration Tool to come up with an accurate engine.
We kind of get away with merging when the file differences are within an element in the XML itself. For example, the expression for a logical column is changed in two branches, causing clashing values within ExprText and ExprTextDesc. When this happens git will throw a conflict and we can easily resolve it, because the difference is within the element(s) themselves:
Easy enough, right?
But take a similarly “simple” merge conflict, where two independent developers add or modify different columns within the same Logical Table, and we see what a problem there is when we try to merge it back together relying on source control alone.
Obvious to a human, and obvious to the Administration Tool is that these two new columns are unrelated and can be merged into a single Logical Table without problem. In a paraphrased version of MDS XML the two versions of the file look something like this, and the merge resolution is obvious:
But a source control tool such as git looks at the MDS XML as a plain-text file; it does not understand the concept of an XML tree and sibling nodes, and it throws its toys out of the pram with a big scary merge conflict:
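In paraphrased form (column and branch names invented purely for illustration), the resulting conflict looks something like this, with git asking us to pick one side or hand-edit the file:

<LogicalTable name="Fact Sales">
  <Columns>
    <LogicalColumn name="Revenue"/>
<<<<<<< HEAD
    <LogicalColumn name="Revenue YTD"/>        <!-- column added in branch 1 -->
=======
    <LogicalColumn name="Revenue Forecast"/>   <!-- column added in branch 2 -->
>>>>>>> feature-branch-2
    ...

The correct resolution, keeping both new columns, is obvious to a human, but git has no way of knowing that by itself.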
Now the developer has to roll up his or her sleeves and try to reconcile two XML files – with no GUI to support or validate the change made except loading it back into the Administration Tool each time.
So if we want to use MDS XML as the basis for merging, we need to restrict our concurrent developments to completely independent objects. But that rather hampers the ideal of more rapid delivery through an Agile method, if we’re imposing rules and restrictions like this.
The Ugly
This is where it gets a bit grim. Above we saw that MDS XML can cause unnecessary (and painful) merge conflicts. But what about if two developers inadvertently create the same object concurrently? The behaviour we’d expect to see is a single resulting object. But what we actually get is both versions of the object, and a dodgy RPD. Uh oh.
Here are the two concurrently developed RPDs, produced in separate branches isolated from each other:
And here’s what happens when you leave it to git to merge the MDS XML:
The duplicated objects now cannot be edited in the Administration Tool in the resulting merged RPD – any attempt to save them throws the above error.
Why does it do this? Because the MDS XML files are named after a globally unique identifier known as the mdsid, and not their corresponding RPD qualified name. And because the mdsid is unique across developments, two concurrent creations of the same object end up with different mdsid values, and thus different filenames.
Two files from separate branches with different names are going to be seen by source control as being unrelated, and so both are brought through in the resulting merge.
As with the unnecessary merge conflict above, we could define a process around same-object creation, or add in a manual equalise step. The real issue here is that the duplicates can arise without us being aware, because there is no conflict seen by the source control tool. It’s not like merging an un-equalised repository in the Administration Tool, where we’d get #1 suffixes on the duplicate objects so that at least (a) we spot the duplication and (b) the repository remains valid, with the duplicate objects still available to edit.
MDS XML Repository opening times
Whether a development strategy based on MDS XML is for you or not, another issue to be aware of is that for anything beyond a medium-sized RPD, the opening times of an MDS XML repository are considerable. As in, around a minute for the binary RPD versus over 20 minutes from MDS XML. And to be fair, after 20 minutes I gave up, on the basis that no sane developer would write off that amount of their day simply waiting for the repository to open before they can even do any work on it. This rules out working in MDS XML format with any big repository, such as the one that ships with BI Apps.
So is MDS XML viable as a Repository storage format?
MDS XML does have two redeeming features:
- It reduces the size of your source control repository, because on each commit you will be storing just a delta of the overall repository change, rather than the whole binary RPD each time.
- For tracking granular development progress and changes you can identify what was modified through the source control tool alone – because the new & modified objects will be shown as changes:
But the above screenshots both give a hint of the trouble in store. The mdsid unique identifier is used not only in filenames – causing the object duplication and strange RPD behaviour described above – but also within the MDS XML itself, referencing other files and objects. This means that as an RPD developer, or RPD source control overseer, you need to be confident that each time you perform a merge of branches you are correctly putting Humpty Dumpty back together in a valid manner.
If you want to use MDS XML with source control you need to view it as part of a larger solution, involving clear process and almost certainly a hybrid approach with the binary RPD still playing a part — and whatever you do, the Administration Tool within short reach. You need to be aware of the issues detailed above, decide on a process that will avoid them, and make sure you have dedicated resource that understands how it all fits together.
If not MDS XML, then what?…
Source control (e.g. git) is mandatory for any kind of SDLC, concurrent development included. But instead of storing the RPD in MDS XML, we store it as a binary RPD.
Wait wait wait, don’t go yet!… it gets better
By following the git-flow method, which dictates how feature-driven development is done in source control (git), we can write a simple script that, when branches are merged, determines the candidates for an OBIEE three-way RPD merge.
In this simple example we have two concurrent developments, code-named “RM-1” and “RM-2”. First off, we create two branches which take the code from our “mainline”. Development is done on the two separate features in each branch independently, and committed frequently per good source control practice. The circles represent commit points:
The first feature to be completed is “RM-1”, so it is merged back into “develop”, the mainline. Because nothing has changed in develop since RM-1 was created from it, the binary RPD file and all other artefacts can simply ‘overwrite’ what is there in develop:
Now at this point we could take “develop” and start its deployment into System Test etc, but the second feature we were working on, RM-2, is also tested and ready to go. Here comes the fancy bit! Git recognises that both RM-1 and RM-2 have made changes to the binary RPD, and as a binary file git cannot merge it itself. But instead of just collapsing in a heap and leaving it for the user to figure out, our script makes use of git and the git-flow method we have followed to work out the merge candidates for the OBIEE Administration Tool:
Even better, the script invokes the Administration Tool (which can be run from the command line, or alternatively you can use the command-line tools comparerpd/patchrpd) to automatically perform the merge. If the merge is successful, it goes ahead and commits the merge into the “develop” branch in git. The developer has not had to do any kind of manual interaction to complete the merge and commit.
If the merge is not a slam-dunk, then we can launch the Administration Tool and graphically figure out the correct resolution – but using the already-identified merge candidates in order to shorten the process.
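As an illustration only (this is a sketch, not our actual tooling: the repository filename and branch names are placeholders, and the final comparerpd/patchrpd or Administration Tool invocation is deliberately omitted, so check its syntax against the Oracle documentation), the heart of such a script is simply asking git for the three merge candidates:

#!/bin/bash
# Sketch: identify the three-way merge candidates for the RPD when merging
# a feature branch back into the mainline, git-flow style.
feature=feature/RM-2
mainline=develop

# The "original" candidate is the common ancestor of the two branches
base=$(git merge-base $mainline $feature)

# Extract the three candidate RPDs from git (assumes the binary RPD is
# versioned as repository.rpd at the root of the repository)
git show $base:repository.rpd      > original.rpd
git show $mainline:repository.rpd  > current.rpd
git show $feature:repository.rpd   > modified.rpd

# original.rpd, current.rpd and modified.rpd are then handed to the
# Administration Tool three-way merge, or to comparerpd/patchrpd, to
# produce the merged RPD (invocation omitted here).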
This is not perfect, but there is no perfect solution. It is the closest thing there is to one, though, because it will handle merges of:
- Unique objects
- Same objects, different modifications (cf. the two new columns on the same table example above)
- Duplicate objects – by equalisation
Conclusion
There is no single right answer here, nor are any of the options overly appealing.
If you want to work with OBIEE in an Agile method, using feature-driven development, you will have to adopt and learn specific processes for working with OBIEE. The decision you have to make is on how you store the RPD (binary or multiple MDS XML files, or maybe both) and how you handle merging it (git vs Administration Tool).
My personal view is that taking advantage of git-flow logic, combined with the OBIEE toolset to perform three-way merges, is sufficiently practical to warrant leaving the RPD in binary format. The MDS XML format is a lovely idea but there are too few safeguards against dodgy/corrupt RPD (and too many unnecessary merge conflicts) for me to see it as a viable option.
Whatever option you go for, make sure you are using regression testing to test the RPD after you merge changes together, and ideally automate the testing too. Here at Rittman Mead we’ve written our own suite of tools that do just this – get in touch to find out more.