OBIEE Performance – Why Metrics Matter (and…Announcing obi-metrics-agent v2!)
One of the first steps to improve OBIEE performance is to determine why it is slow. That may sound obvious—you can't fix it if you don't know what you're fixing, right? Unfortunately, the "Drunk Man anti-method", in which we merrily stumble from one change to another, maybe breaking things along the way and certainly having a headache at the end of it, is far too prevalent. This comes about partly through unawareness of a better method to follow, and is partly encouraged by tuning documents comprising reams of configuration settings to "tune" and fiddle with, without really knowing why, or how to prove whether they actually fixed anything.
Determining the cause of performance problems is often a case of working out what it’s not just as much as what it is. This is for two important reasons. Firstly, we begin to narrow down the area of focus and analysis. Secondly, we know what to leave alone. If we can prove that, for example, the database is running the query behind a report quickly, then there is no point “tuning” the database, because the problem doesn’t lie there. Similarly, if we can see that a report taking 60 seconds in total to run spends 59 seconds of that in the database, fiddling with Java Heap Size settings on OBIEE is going to at the very, very most reduce our total runtime to…59 seconds! This kind of time profiling is important to do, and something that we produce automatically in our Performance Analytics Report:
So, how do we pinpoint what is, or isn’t, going wrong? We need data, and specifically, we need metrics. We need log files, too, maybe for the real nitty-gritty of explain plans, but a huge amount can be understood about a system by looking at the metrics available.
Any modern operating system, from Windows to Linux, AIX to Solaris, will have copious utilities that will expose important metrics such as CPU usage, disk throughput, and so on. These can often be of great assistance in diagnosing performance problems.
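For example, on Linux the standard vmstat and iostat utilities (from the procps and sysstat packages respectively) will happily sample CPU, memory, swap, and per-device disk activity at a regular interval for you; the five-second interval below is just an arbitrary choice:

$ vmstat 5
$ iostat -x 5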
OBIEE DMS Metrics
When it comes to OBIEE itself, we are spoilt by the wealth of performance counters that since 11g (and still in 12c) have been exposed through the Dynamic Monitoring System (DMS). They were there in 10g too, but accessed through JMX. These metrics give us information ranging from the number of logged-in users, through how many connections are open to a given database, down to real low-level internals such as how many threads are in use for handling LDAP lookups. Crucially, there are also metrics showing current and peak levels of queueing within the various internal systems in OBIEE, which is where DMS becomes particularly important.
By being able to prove that OBIEE has, for example, run out of available connections to the database, we can confidently state that by changing a given configuration parameter we will alleviate a bottleneck. Not only that, but we can monitor and determine how many connections we really do need at a given workload level. The chart below illustrates this. The capacity of the connection pool is plotted against the number of busy connections. As the number of active sessions increases so does the pressure on the connection pool, until it hits capacity at which point queueing starts—which now means queries are waiting for a connection to the database before they can even begin to execute (and it’s at this point we’d expect to see response times suffer).
So this is the kind of valuable information that is just not available anywhere other than the DMS metrics, and you can see from the above illustration just how useful it is. To access DMS metrics in OBIEE 11g and 12c, you have several options available out of the box:
- DMS Spy Servlet
  - This includes the very useful (but undocumented) option to pull the metrics out in XML format, by including format=xml as a request parameter—thanks to etcSudoers on the #obihackers IRC channel for this gem! (See the minimal example request just below this list.)
- Fusion Middleware Control
- WLST
- opmnctl (not 12c)
- There’s also the paid-for option of the BI Management Pack.
Some of these are useful for programmatically scraping the data, others for interactively checking values at a point in time.
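As a minimal sketch of the programmatic route, the DMS Spy Servlet (typically deployed under /dms/Spy on the Admin Server) can be scraped with curl using the format=xml parameter mentioned above. The hostname, port, and credentials here are placeholders to substitute for your own environment:

$ curl -s -u weblogic:password "http://biserver:7001/dms/Spy?format=xml" -o dms_metrics.xml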
obi-metrics-agent – v2
At Rittman Mead, we always recommend collecting and storing DMS metrics (alongside others, including OS metrics) all the time—not just if you find yourself with performance problems. That way you can compare before and after states, you can track historical trends—and you're all set to hit the ground running with your diagnostics when (if) you do hit performance problems.
You can capture DMS metrics with the BI Management Pack in Enterprise Manager, you can write something yourself, or you can take advantage of an open-source tool from Rittman Mead, obi-metrics-agent.
I wrote about obi-metrics-agent originally when we first open-sourced it almost two years ago. The principle in version 2 is still the same; we've just rewritten it in Jython so as to remove the need for dependencies such as Python and associated libraries. We've also added native InfluxDB output, as well as retaining the option to send data in the original carbon/graphite protocol.
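For context, the carbon/graphite plaintext protocol is about as simple as protocols get: one "metric.path value epoch-timestamp" line per data point, sent to the carbon listener (port 2003 by default). A hedged illustration, with an invented metric name and hostname, looks like this:

$ echo "obi.dms.connectionpool.busy.current 42 $(date +%s)" | nc graphite-server 2003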
You can run obi-metrics-agent and just write the DMS data to CSV, but our recommendation is always to persist it straight to a time series data store such as InfluxDB. Once you’ve collected the data you can analyse and monitor it with several tools, our favourite being Grafana (read more about this here).
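As a quick sanity check that data is arriving in InfluxDB you can query its HTTP API directly. This is just a sketch: the host, port, and database name (obi) are assumptions for illustration rather than obi-metrics-agent defaults:

$ curl -G "http://localhost:8086/query" --data-urlencode "db=obi" --data-urlencode "q=SHOW MEASUREMENTS"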
As part of our Performance Analytics Service we've built a set of Performance Analytics Dashboards, making available a full-stack view of OBIEE metrics (including DMS, OS, and even Oracle ASH data), as seen in the video here:
If you’d like to find out more about these and the Performance Analytics service offered by Rittman Mead, please get in touch. You can download obi-metrics-agent itself freely from our github repository.
OBIEE 12c – Infrastructure Tuning Guide
The following whitepaper describes techniques for monitoring and optimizing the performance of Oracle Business Intelligence Enterprise Edition (OBIEE) 12c components. The reader of this document should have knowledge of server administration, Oracle Fusion Middleware (FMW), hardware performance tuning fundamentals, web servers, Java application servers, and databases.
OBIEE 12c: Best Practices Guide for Infrastructure Tuning, Oracle® Business Intelligence Enterprise Edition 12c (12.2.1)
Using Linux Control Groups to Constrain Process Memory
Linux Control Groups (cgroups) are a nifty way to limit the amount of resource, such as CPU, memory, or IO throughput, that a process or group of processes may use. Frits Hoogland wrote a great blog demonstrating how to use them to constrain the I/O that a particular process could use, and it was the inspiration for this one. I have been doing some digging into the performance characteristics of OBIEE in certain conditions, including how it behaves under memory pressure. I'll write more about that in a future blog, but wanted to write this short blog to demonstrate how cgroups can be used to constrain the memory that a given Linux process can be allocated.
This was done on Amazon EC2 running an image imported originally from Oracle’s OBIEE SampleApp, built on Oracle Linux 6.5.
$ uname -a
Linux demo.us.oracle.com 2.6.32-431.5.1.el6.x86_64 #1 SMP Tue Feb 11 11:09:04 PST 2014 x86_64 x86_64 x86_64 GNU/Linux
First off, install the necessary package in order to use them, and start the service. Throughout this blog where I quote shell commands, those prefixed with # are run as root and $ as non-root:
# yum install libcgroup
# service cgconfig start
Create a cgroup (I’m shamelessly ripping off Frits’ code here, hence the same cgroup name ;-) ):
# cgcreate -g memory:/myGroup
You can use cgget to view the current limits, usage, & high watermarks of the cgroup:
# cgget -g memory:/myGroup|grep bytes
memory.memsw.limit_in_bytes: 9223372036854775807
memory.memsw.max_usage_in_bytes: 0
memory.memsw.usage_in_bytes: 0
memory.soft_limit_in_bytes: 9223372036854775807
memory.limit_in_bytes: 9223372036854775807
memory.max_usage_in_bytes: 0
memory.usage_in_bytes: 0
For more information about the field meaning see the doc here.
To test out the cgroup's ability to limit memory used by a process we're going to use the tool stress, which can be used to generate CPU, memory, or IO load on a server. It's great for testing what happens to a server under resource pressure, and also for testing the memory allocation capabilities of a process, which is what we're using it for here.
We're going to configure cgroups to add stress to the myGroup group whenever it runs:
$ cat /etc/cgrules.conf
*:stress memory myGroup
[Re-]start the cg rules engine service:
# service cgred restart
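As an aside, if you just want to run a one-off command inside the cgroup without configuring the rules engine, the cgexec utility (shipped alongside cgcreate and cgget) will do the job. This is a sketch only, reusing the stress invocation used later in this post; you'll need appropriate permissions on the cgroup:

# cgexec -g memory:/myGroup stress --vm-bytes 150M --vm-keep -m 1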
Now we'll use the watch command to re-issue the cgget command every second, enabling us to watch the cgroup's metrics in realtime:
# watch --interval 1 cgget -g memory:/myGroup

/myGroup:
memory.memsw.failcnt: 0
memory.memsw.limit_in_bytes: 9223372036854775807
memory.memsw.max_usage_in_bytes: 0
memory.memsw.usage_in_bytes: 0
memory.oom_control: oom_kill_disable 0
    under_oom 0
memory.move_charge_at_immigrate: 0
memory.swappiness: 60
memory.use_hierarchy: 0
memory.stat: cache 0
    rss 0
    mapped_file 0
    pgpgin 0
    pgpgout 0
    swap 0
    inactive_anon 0
    active_anon 0
    inactive_file 0
    active_file 0
    unevictable 0
    hierarchical_memory_limit 9223372036854775807
    hierarchical_memsw_limit 9223372036854775807
    total_cache 0
    total_rss 0
    total_mapped_file 0
    total_pgpgin 0
    total_pgpgout 0
    total_swap 0
    total_inactive_anon 0
    total_active_anon 0
    total_inactive_file 0
    total_active_file 0
    total_unevictable 0
memory.failcnt: 0
memory.soft_limit_in_bytes: 9223372036854775807
memory.limit_in_bytes: 9223372036854775807
memory.max_usage_in_bytes: 0
memory.usage_in_bytes: 0
In a separate terminal (or even better, use screen!) run stress, telling it to grab 150MB of memory:
$ stress --vm-bytes 150M --vm-keep -m 1
Review the cgroup, and note that the usage fields have increased:
/myGroup:
memory.memsw.failcnt: 0
memory.memsw.limit_in_bytes: 9223372036854775807
memory.memsw.max_usage_in_bytes: 157548544
memory.memsw.usage_in_bytes: 157548544
memory.oom_control: oom_kill_disable 0
    under_oom 0
memory.move_charge_at_immigrate: 0
memory.swappiness: 60
memory.use_hierarchy: 0
memory.stat: cache 0
    rss 157343744
    mapped_file 0
    pgpgin 38414
    pgpgout 0
    swap 0
    inactive_anon 0
    active_anon 157343744
    inactive_file 0
    active_file 0
    unevictable 0
    hierarchical_memory_limit 9223372036854775807
    hierarchical_memsw_limit 9223372036854775807
    total_cache 0
    total_rss 157343744
    total_mapped_file 0
    total_pgpgin 38414
    total_pgpgout 0
    total_swap 0
    total_inactive_anon 0
    total_active_anon 157343744
    total_inactive_file 0
    total_active_file 0
    total_unevictable 0
memory.failcnt: 0
memory.soft_limit_in_bytes: 9223372036854775807
memory.limit_in_bytes: 9223372036854775807
memory.max_usage_in_bytes: 157548544
memory.usage_in_bytes: 157548544
Both memory.memsw.usage_in_bytes and memory.usage_in_bytes are 157548544 bytes, which is 157548544 / 1048576 = 150.25MB.
Having a look at the process stats for stress shows us:
$ ps -ef|grep stress
oracle 15296  9023  0 11:57 pts/12 00:00:00 stress --vm-bytes 150M --vm-keep -m 1
oracle 15297 15296 96 11:57 pts/12 00:06:23 stress --vm-bytes 150M --vm-keep -m 1
oracle 20365 29403  0 12:04 pts/10 00:00:00 grep stress

$ cat /proc/15297/status
Name:   stress
State:  R (running)
[...]
VmPeak: 160124 kB
VmSize: 160124 kB
VmLck:  0 kB
VmHWM:  153860 kB
VmRSS:  153860 kB
VmData: 153652 kB
VmStk:  92 kB
VmExe:  20 kB
VmLib:  2232 kB
VmPTE:  328 kB
VmSwap: 0 kB
[...]
The man page for proc gives us more information about these fields, but of particular note are:
- VmSize: Virtual memory size.
- VmRSS: Resident set size.
- VmSwap: Swapped-out virtual memory size by anonymous private pages
Our stress process has a VmSize of 156MB, VmRSS of 150MB, and zero swap.
Kill the stress process, and set a memory limit of 100MB for any process in this cgroup:
# cgset -r memory.limit_in_bytes=100m myGroup
Run cgget and you should see the new limit. Note that at this stage we're just setting memory.limit_in_bytes and leaving memory.memsw.limit_in_bytes unchanged.
# cgget -g memory:/myGroup|grep limit|grep bytes
memory.memsw.limit_in_bytes: 9223372036854775807
memory.soft_limit_in_bytes: 9223372036854775807
memory.limit_in_bytes: 104857600
Let’s see what happens when we try to allocate the memory, observing the cgroup and process Virtual Memory process information at each point:
- 15MB:
$ stress --vm-bytes 15M --vm-keep -m 1
stress: info: [31942] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd

# cgget -g memory:/myGroup|grep usage|grep -v max
memory.memsw.usage_in_bytes: 15990784
memory.usage_in_bytes: 15990784

$ cat /proc/$(pgrep stress|tail -n1)/status|grep Vm
VmPeak: 21884 kB
VmSize: 21884 kB
VmLck:  0 kB
VmHWM:  15616 kB
VmRSS:  15616 kB
VmData: 15412 kB
VmStk:  92 kB
VmExe:  20 kB
VmLib:  2232 kB
VmPTE:  60 kB
VmSwap: 0 kB
- 50MB:
$ stress --vm-bytes 50M --vm-keep -m 1
stress: info: [32419] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd

# cgget -g memory:/myGroup|grep usage|grep -v max
memory.memsw.usage_in_bytes: 52748288
memory.usage_in_bytes: 52748288

$ cat /proc/$(pgrep stress|tail -n1)/status|grep Vm
VmPeak: 57724 kB
VmSize: 57724 kB
VmLck:  0 kB
VmHWM:  51456 kB
VmRSS:  51456 kB
VmData: 51252 kB
VmStk:  92 kB
VmExe:  20 kB
VmLib:  2232 kB
VmPTE:  128 kB
VmSwap: 0 kB
- 100MB:
$ stress --vm-bytes 100M --vm-keep -m 1
stress: info: [20379] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd

# cgget -g memory:/myGroup|grep usage|grep -v max
memory.memsw.usage_in_bytes: 105197568
memory.usage_in_bytes: 104738816

$ cat /proc/$(pgrep stress|tail -n1)/status|grep Vm
VmPeak: 108924 kB
VmSize: 108924 kB
VmLck:  0 kB
VmHWM:  102588 kB
VmRSS:  101448 kB
VmData: 102452 kB
VmStk:  92 kB
VmExe:  20 kB
VmLib:  2232 kB
VmPTE:  232 kB
VmSwap: 1212 kB
Note that VmSwap has now gone above zero, despite the machine having plenty of usable memory:
# vmstat -s
16330912 total memory
14849864 used memory
10583040 active memory
3410892 inactive memory
1481048 free memory
149416 buffer memory
8204108 swap cache
6143992 total swap
1212184 used swap
4931808 free swap
So it looks like the memory cap has kicked in and the stress process is being forced to get the additional memory that it needs from swap.
Let’s tighten the screw a bit further:
$ stress --vm-bytes 200M --vm-keep -m 1
stress: info: [21945] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
The process is now using 100MB of swap (since we’ve asked it to grab 200MB but cgroup is constraining it to 100MB real):
$ cat /proc/$(pgrep stress|tail -n1)/status|grep Vm
VmPeak: 211324 kB
VmSize: 211324 kB
VmLck:  0 kB
VmHWM:  102616 kB
VmRSS:  102600 kB
VmData: 204852 kB
VmStk:  92 kB
VmExe:  20 kB
VmLib:  2232 kB
VmPTE:  432 kB
VmSwap: 102460 kB
The cgget command confirms that we're using swap, as the memsw value shows:
# cgget -g memory:/myGroup|grep usage|grep -v max
memory.memsw.usage_in_bytes: 209788928
memory.usage_in_bytes: 104759296
So now what happens if we curtail the use of all memory, including swap? To do this we'll set the memory.memsw.limit_in_bytes parameter. Note that running cgset whilst a task under the cgroup is executing seems to get ignored if the new limit is below that currently in use (per the usage_in_bytes field). If it is above this then the change is instantaneous:
- Current state
# cgget -g memory:/myGroup|grep bytes
memory.memsw.limit_in_bytes: 9223372036854775807
memory.memsw.max_usage_in_bytes: 209915904
memory.memsw.usage_in_bytes: 209784832
memory.soft_limit_in_bytes: 9223372036854775807
memory.limit_in_bytes: 104857600
memory.max_usage_in_bytes: 104857600
memory.usage_in_bytes: 104775680
- Set the limit below what is currently in use (150m limit vs 200m in use)
# cgset -r memory.memsw.limit_in_bytes=150m myGroup
- Check the limit – it remains unchanged
# cgget -g memory:/myGroup|grep bytes
memory.memsw.limit_in_bytes: 9223372036854775807
memory.memsw.max_usage_in_bytes: 209993728
memory.memsw.usage_in_bytes: 209784832
memory.soft_limit_in_bytes: 9223372036854775807
memory.limit_in_bytes: 104857600
memory.max_usage_in_bytes: 104857600
memory.usage_in_bytes: 104751104
- Set the limit above what is currently in use (250m limit vs 200m in use)
# cgset -r memory.memsw.limit_in_bytes=250m myGroup
- Check the limit – it’s taken effect
# cgget -g memory:/myGroup|grep bytes
memory.memsw.limit_in_bytes: 262144000
memory.memsw.max_usage_in_bytes: 210006016
memory.memsw.usage_in_bytes: 209846272
memory.soft_limit_in_bytes: 9223372036854775807
memory.limit_in_bytes: 104857600
memory.max_usage_in_bytes: 104857600
memory.usage_in_bytes: 104816640
So now we’ve got limits in place of 100MB real memory and 250MB total (real + swap). What happens when we test that out?
$ stress --vm-bytes 245M --vm-keep -m 1
stress: info: [25927] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
The process is using 245MB total (VmData), of which 95MB is resident (VmRSS) and 150MB is swapped out (VmSwap):
$ cat /proc/$(pgrep stress|tail -n1)/status|grep Vm
VmPeak: 257404 kB
VmSize: 257404 kB
VmLck:  0 kB
VmHWM:  102548 kB
VmRSS:  97280 kB
VmData: 250932 kB
VmStk:  92 kB
VmExe:  20 kB
VmLib:  2232 kB
VmPTE:  520 kB
VmSwap: 153860 kB
The cgroup stats reflect this:
# cgget -g memory:/myGroup|grep bytes
memory.memsw.limit_in_bytes: 262144000
memory.memsw.max_usage_in_bytes: 257159168
memory.memsw.usage_in_bytes: 257007616
[...]
memory.limit_in_bytes: 104857600
memory.max_usage_in_bytes: 104857600
memory.usage_in_bytes: 104849408
If we try to go above this absolute limit (memory.memsw.limit_in_bytes) then the cgroup kicks in and stops the process getting the memory, which in turn causes stress to fail:
$ stress --vm-bytes 250M --vm-keep -m 1
stress: info: [27356] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: FAIL: [27356] (415) <-- worker 27357 got signal 9
stress: WARN: [27356] (417) now reaping child worker processes
stress: FAIL: [27356] (451) failed run completed in 3s
This gives you an indication of how careful you need to be when using this type of low-level process control. Most tools will not be happy if they are starved of resources, including memory, and may well behave in unstable ways.
Thanks to Frits Hoogland for reading a draft of this post and providing valuable feedback.
Driving OBIEE User Engagement with Enhanced Usage Tracking for OBIEE
Measuring and monitoring user interactions and behaviour with OBIEE is a key part of Rittman Mead's User Engagement Service. By understanding and proving how users are engaging with the system we can improve the experience for the user, driving up usage and ensuring maximum value for your OBIEE investment. To date, we have had the excellent option of Usage Tracking for finding out about system usage, but this only captures actual dashboard and analysis executions. What I am going to discuss in this article is taking Usage Tracking a step further, and capturing and analysing every click that the user makes. Every login, every search, every report build action. This can be logged to a database such as Oracle, and gives us Enhanced Usage Tracking!
Why?
Because the more we understand about our user base, the more we can do for them in terms of improved content and accessibility, and the more we can do for ourselves, the OBIEE dev/sysadmin, in terms of easier maintenance and better knowledge of the platform for which we are developing.
Here is a handful of questions that this data can answer – I’m sure once you see the potential of the data you will be able to think of plenty more…
How many users are accessing OBIEE through a mobile device?
Maybe you’re about to implement a mobile strategy, perhaps deploying MAD or rolling out BI Mobile HD. Wouldn’t it be great if you could quantify its uptake, and not only that but the impact that the provision of mobile makes on the general user engagement levels of your OBIEE user base?
Perhaps you think your users might benefit from a dedicated Mobile OBIEE strategy, but to back up your business case for the investment in mobile licences or time to optimise content for mobile consumption you want to show how many users are currently accessing full OBIEE through a web browser on their mobile device. And not only ‘a mobile device’, but which one, which browser, and which OS. Enhanced Usage Tracking data can provide all this, and more.
Which dashboards get exported to Excel the most frequently?
The risks that Excel-marts present are commonly discussed, and broader solutions such as data-mashup capabilities within OBIEE itself exist – but how do you identify which dashboards are frequently exported from OBIEE to Excel, and by whom? We’ve all probably got a gut-instinct, or indirect evidence, of when this happens – but now we can know for sure. Whilst Usage Tracking alone will tell us when a dashboard is run, only Enhanced Usage Tracking can show what the user then did with the results:
What do we do with this information? It Depends, of course. In some cases exporting data to Excel is a potentially undesirable but pragmatic way of getting certain analysis done, and trying to prevent it would be unnecessarily petulant and counterproductive. In many other cases though, people use it simply as a way of doing something that could be done in OBIEE but they lack the awareness or training to do it. The point is that by quantifying and identifying when it occurs you can start an informed discussion with your user base, from which both sides of the discussion benefit.
Precise Tracking of Dashboard Usage
Usage Tracking is great, but it has limitations. One example of this is where a user visits a dashboard page more than once in the same session, meaning that it may be served from the Presentation Services cache, and if that happens, the additional visit won’t be recorded in Usage Tracking. By using click data we can actually track every single visit to a dashboard.
In this example here we can see a user visiting two dashboard pages, and then going back to the first one – which is captured by the Enhanced Usage Tracking, but not the standard one, which only captures the first two dashboard visits:
This kind of thing can matter, both from an audit point of view and for more advanced analysis, where we can examine user behaviour across repeated visits to a dashboard. For example, does it highlight that a dashboard design is not optimal, and that the user is having to switch between multiple tabs to build up a complete picture of the data that they are analysing?
Predictive Modelling to Identify Users at Risk of ‘Churn’
Churn is when users disengage from a system, when they stop coming back. Being able to identify those at risk of doing this before they do it can be hugely valuable, because it gives you the opportunity to prevent it. By analysing the patterns of system usage in OBIEE and looking at users who have stopped using OBIEE (i.e. churned) we can then build a predictive model to identify those with similar patterns of usage who are still active.
Measures such as the length of time it takes to run the first dashboard after login, or how many dashboards are run, or how long it takes to find data when building an analysis, can all be useful factors to include in the model.
Are any of my users still accessing OBIEE through IE6?
A trend that I’ve seen in the years working with OBIEE is that organisations are [finally] moving to a more tolerant view on web browsers other than IE. I suppose this is as the web application world evolves and IE becomes more standards compliant and/or web application functionality forces organisations to adopt browsers that provide the latest capabilities. OBIEE too, is a lot better nowadays at not throwing its toys out of the pram when run on a browser that happens to have been written within the past decade.
What's my little tirade got to do with enhanced usage tracking? Because as those responsible for the development and support of OBIEE in an organisation we need to have a clear picture of the user base that we're supporting. Sure, corporate 'standard' is IE9, but we all know that Jim in design runs one of those funny Mac things with Safari, Fred in accounts insists on Firefox, Bob in IT prides himself on running Konqueror, and it would be a cold day in hell before you prise the MD's copy of IE5 off his machine. Whether these browsers are "supported" or not is only really a secondary point to whether they're being used. A lot of the time organisations will take the risk of running unsupported configurations, consciously or in blissful ignorance, and being 'right' won't cut it if your OBIEE patch suddenly breaks everything for them.
Enhanced Usage Tracking gives us the ability to analyse browser usage over time:
as well as the Enhanced Usage Tracking data rendered through OBIEE itself, showing browser usage in total (nb the Log scale):
It’s also easy to report on the Operating System that users have:
Where are my users connecting to OBIEE from?
Whilst a lot of OBIEE deployments are run within the confines of a corporate network, there are those that are public-facing, and for these it could be interesting to include location as another dimension by which we analyse the user base and their patterns of usage. Enhanced Usage Tracking includes the capture of a user's IP address, which for public networks we can easily look up, using the resulting data in our analysis.
Even on a corporate network the user’s IP can be useful, because the corporate network will be divided into subnets and IP ranges, which will usually have geographical association to them – you just might need to code your own lookup in order to translate 192.168.11.5 to “Bob’s dining room”.
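As a hedged sketch of what such a home-grown lookup might look like, a crude shell function mapping subnets to site names (the subnets and names below are entirely made up) could be as simple as:

lookup_location() {
  case "$1" in
    192.168.11.*) echo "Bob's dining room" ;;
    10.1.*)       echo "Manchester office" ;;
    *)            echo "Unknown" ;;
  esac
}

lookup_location 192.168.11.5    # prints: Bob's dining room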
Who deleted this report? Who logged in? Who clicked the Do Not Click Button?
The uses for Enhanced Usage Tracking are almost endless. Any user interaction with OBIEE can now be measured and monitored.
A frequent question that I see on the OTN forums is along the lines of "for audit purposes, we need to know who logged in". Since Usage Tracking alone won't capture this directly (although the new init block logging in > 11.1.1.9 probably helps indirectly with this), this information usually isn't available… until now! In this table we see the user, their session ID, and the time at which they logged in:
What about who updated a report last, or deleted it? We can find that out too! This simple example shows some of the operations in the Presentation Catalog recorded as clear as day in Enhanced Usage Tracking:
Want to know more? We’d love to tell you more!
Measuring and monitoring user interactions and behaviour with OBIEE is a key part of Rittman Mead’s User Engagement Service. By understanding and proving how users are engaging the system we can improve the experience for the user, driving up usage and ensuring maximum value for your OBIEE investment.
If you’d like to find out more, including about Enhanced Usage Tracking and getting a free User Engagement Report for your OBIEE system, get in touch now!
Introducing the Rittman Mead OBIEE Performance Analytics Service
Fix Your OBIEE Performance Problems Today
OBIEE is a powerful analytics tool that enables your users to make the most of the data in your organisation. Ensuring that expected response times are met is key to driving user uptake and successful user engagement with OBIEE.
Rittman Mead can help diagnose and resolve performance problems on your OBIEE system. Taking a holistic, full-stack view, we can help you deliver the best service to your users. Fast response times enable your users to do more with OBIEE, driving better engagement, higher satisfaction, and greater return on investment. We enable you to:
- Create a positive user experience
- Ensure OBIEE returns answers quickly
- Empower your BI team to identify and resolve performance bottlenecks in real time
Rittman Mead Are The OBIEE Performance Experts
Rittman Mead have many years of experience in the full life cycle of data warehousing and analytical solutions, especially in the Oracle space. We know what it takes to design a good system, and to troubleshoot a problematic one.
We are firm believers in a practical and logical approach to performance analytics and optimisation. Eschewing the drunk man anti-method of ‘tuning’ configuration settings at random, we advocate making a clear diagnosis and baseline of performance problems before changing anything. Once a clear understanding of the situation is established, steps are taken in a controlled manner to implement and validate one change at a time.
Rittman Mead have spoken at conferences, produced videos, and written many blogs specifically on the subject of OBIEE Performance.
Performance Analytics is not a dark art. It is not the blind application of ‘best practices’ or ‘tuning’ configuration settings. It is the logical analysis of performance behaviour to accurately determine the issue(s) present, and the possible remedies for them.
Diagnose and Resolve OBIEE Performance Problems with Confidence
When you sign up for the Rittman Mead OBIEE Performance Analytics Service you get:
- On-site consultancy from one of our team of Performance experts, including Mark Rittman (Oracle ACE Director), and Robin Moffatt (Oracle ACE).
- A Performance Analysis Report to give you an assessment of the current performance and prioritised list of optimisation suggestions, which we can help you implement.
- Use of the Performance Diagnostics Toolkit to measure and analyse the behaviour of your system and correlate any poor response times with the metrics from the server and OBIEE itself.
- Training, which is vital for enabling your staff to deliver optimal OBIEE performance. We work with your staff to help them understand the good practices to look for in design and diagnostics. Training is based on formal courseware, along with workshops drawing on examples from your OBIEE system where appropriate.
Let Us Help You, Today!
Get in touch now to find out how we can help improve your OBIEE system’s performance. We offer a free, no-obligation sample of the Performance Analysis Report, built on YOUR data.
Don't just call us when performance may already be problematic – we can help you assess your OBIEE system for optimal performance at all stages of the build process. Gaining a clear understanding of the performance profile of your system and any potential issues gives you the confidence and ability to understand any risks to the success of your project – before it's too late.