Tag Archives: Performance
Performance and OBIEE – part V – Execute and Measure
Having designed and built our tests, we now move on to the real nitty-gritty – how we run them and collect data. The data that we collect is crucial for producing comprehensible test results and, as a consequence, valid test conclusions.
There are several broad elements to the data collected for a test:
- Response times
- System behaviour
- Test details
The last one is very important, because without it you just have some numbers. If someone wants to reproduce the test, or if you want to rerun it to check a result or test a change, you’ve got to be able to run it as it was done originally. This is the cardinal sin that too many performance tests I’ve seen commit. A set of response times in isolation is interesting, sure, but unless I can trace back exactly how they were obtained so that I can:
- Ensure or challenge their validity
- Rerun the test myself
then they’re just numbers on a piece of paper.
The other common mistake committed in performance test execution is measuring the wrong thing. It might be the wrong metric, or the right metric but in the wrong context. For example, if I build a test that runs through multiple dashboards, I could get a response time for a generic “Go to Dashboard” transaction:
But what does this tell me? All it tells me really is that some of my dashboard transactions take longer than others to run. Sure, we can start aggregating and analysing the data, talking about percentile response times – but by only measuring the transaction generically, rather than per dashboard or dashboard type, I’m already clouding the data. Much better to accurately identify each transaction and easily see the clear difference in performance behaviour:
How about this enticing-looking metric:
We could use that to record the report response times for our test, yes? Well, honestly, I have no idea. That’s because a “Request” in the context of the OBIEE stack could be any number of things. I’d need to check the documentation to find out what this number was actually showing and how it was summarised. Don’t just pick a metric because it’s the first one you find that looks about right. Make sure it actually represents what you think it does.
As Zed Shaw puts it:
It’s pretty simple: If you want to measure something, then don’t measure other shit.
Please consider the environment before running this test
Your test should be done on as ‘clean’ an environment as possible. The more contaminating factors there are, the less confidence you can have in your test results, to the point of them becoming worthless.
- Work with a fixed code version. This can be particularly difficult to do during a project, but there is little point testing the performance of code release 1.00 if, when you come to test your tuning changes, 1.50 is in use. Who knows what the developers changed between 1.00 and 1.50? It pulls the rug out from under your original test results. By fixed code, I mean:
  - Database, including:
    - DDL
    - Object statistics
  - BI Server Repository (RPD)
  - Dashboard and report definitions (Webcat)
If you can’t insist on this – and sometimes pragmatism dictates so – then at least have a very clear understanding of the OBIEE stack. This way you can understand the potential impact of an external code change and caveat your test results accordingly. For example, if a change was made to the RPD but in a different Business Model from the one you are testing then it may not matter. If, however, they have partitioned an underlying fact table, then this could drastically change your results to the extent you should be discarding your first results and retesting.
In an ideal world, all the above code artefacts will be under source control, and you can quote the revision/commit number in your test log.
- Make sure the data in the tables from which you are reporting is both unchanging and representative of Production. Unchanging is hopefully obvious, but representative may benefit from elaboration. If you are going live with 10M rows of data then you’d be pretty wise to do your utmost to run your performance test against 10M rows of data. Different types of reports might behave differently, and this is where judgement comes in. For example, a weekly report that is based on a fact table partitioned by week might perform roughly the same whether all or just one partition is loaded. However, the same fact table as the source for a historical query going back months, or a query cutting across partitions, is going to need more data in it to be representative. Finally, don’t neglect future growth in your testing. If it’s a brand new system with brand new data, you’ll be starting with zero rows on day one, but if there’ll be millions of rows within a month or so, you need to be testing against millions of rows in your performance tests.
- The configuration of the software should be constant. This means obvious configuration such as BI Server caching, but also things like version numbers and patch levels of the OBIEE stack. Consider taking a snapshot of all the main configuration files (NQSConfig.INI, instanceconfig.xml, etc) to store alongside your test data – a snapshot script is sketched below.
- You should aim to turn off BI Server caching for your initial tests, and then re-enable it if required as a properly tested optimisation step. The appropriate use and implementation of BI Server caching is discussed in the optimisation article of this series.
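As a minimal sketch of such a snapshot – the paths assume a default 11g ‘instance1’ install, so adjust them to your environment:

```
#!/bin/bash
# Minimal sketch: snapshot key OBIEE config files to store alongside test
# results. Paths assume a default 11g 'instance1' install - adjust to suit.
SNAP=config_snapshot_$(date +%Y%m%d_%H%M%S)
mkdir -p "$SNAP"
cp $FMW_HOME/instances/instance1/config/OracleBIServerComponent/coreapplication_obis1/NQSConfig.INI "$SNAP/"
cp $FMW_HOME/instances/instance1/config/OracleBIPresentationServicesComponent/coreapplication_obips1/instanceconfig.xml "$SNAP/"
```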
Measure
Before you execute your performance test, work out what data you want to collect and how you will collect it. The reason that it is worth thinking about in advance is that once you’ve run your test you can’t usually retrospectively collect additional data.
The data you should collect for your test itself includes:
- Response times at the lowest grain possible/practical – see the Dashboard example above. Response times should be two-dimensional: transaction name, plus time offset from the beginning of the test.
- Number of test users running over time (i.e. offset from the start of your test)
- Environment details – a diagram of the top-to-bottom stack, software versions, code levels, and data volumes. Anything that is going to be relevant in assessing the validity of the test results, or rerunning the test in the future.
- System metrics – if response times and user numbers are the eye-catching numbers in a test, system metrics are the oft-missed but vital numbers that give real substance to a test and make it useful. If response times are bad, we need to know why. If they’re good, we need to know how good. Both these things come from the system metrics.
- Query Metrics – depending on the level at which the testing is being done, collecting metrics for individual query executions can also be vital for aiding performance analysis. Consider this more of a second round, drill down, layer of metrics rather than one to always collect in a large test since it can be a large volume of data.
Response times
Depending on your testing method, how you capture response time will be different. Always capture the raw response time and test duration offset – don’t just record one aggregate figure for the whole test. Working in BI you hopefully are clear on the fact that you can always aggregate data up, but can’t break a figure down if you don’t have the base data.
JMeter has “Sample time” or “Response Time”. Use the setSampleLabel trick to make sure you get a response time per specific dashboard, not just per generic dashboard refresh call. Some useful listeners to try out include:
- jp@gc - Response Times Over Time
- jp@gc - Response Times vs Threads (although be aware that this shows an average response time, which is not the best summary of the metric to use)
- View Results in Table
JMeter can also write data to a CSV file, which can be a very good starting point for your own analysis of the data.
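For instance, a minimal sketch of a headless run that writes per-sample results to CSV – the JMX filename here is illustrative:

```
# Minimal sketch: run a JMeter test plan headless and log per-sample results.
# obiee_test.jmx is an illustrative filename - substitute your own test plan.
jmeter -n -t obiee_test.jmx -l results.csv
# results.csv will contain one row per sample: timestamp, elapsed time,
# sample label (per-dashboard if you use setSampleLabel), response code, etc.
```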
If you are doing a test for specific tuning, you might well want to capture response times at other points in the stack too; for example from the database, BI Server, and Presentation Server.
Whichever time you capture, make sure you record what that time represents – is it the response time of the query at the BI Server, the response time back to the HTTP client, the response time including rendering – and so on. Don’t forget, standard load test tools such as JMeter don’t include page render time.
System metrics
There are two key areas of system metrics:
- Environment metrics – everything outside OBI – the host OS, the database, the database OS.
- OBI metrics – internal metrics that let us understand how OBI is ticking along and where any problems may lie
One of the really important things about system metrics is that they are useful come rain or shine. If the performance is not as desired, we use them to analyse where the bottlenecks are. If performance is good, we use them to understand system behaviour at a “known-good” point in time, as reference for if things do go bad, and also as a long-term capacity planning device.
Some of the tools described below for capturing system metrics could be considered for putting in place as standard monitoring for OBIEE, whilst others are a bit more detailed than you’d want to be collecting all the time.
OS metrics
OS stats should be collected on every server involved in the end-to-end serving of OBIEE dashboards. If you have a separate web tier, a 2-node OBIEE cluster and a database, monitor them all. The workload can manifest itself in multiple places and the more “eyes-on” you have the easier it is to spot the anomalies in behaviour and not just be stuck with a “it was slow” response time summary.
As well as the whole stack, you should also be monitoring the server(s) generating your load test. If you’re testing large numbers of concurrent users the overhead on the generator can be very high, so you need to be monitoring to ensure you’re not hitting a ceiling there, rather than in what is being monitored.
On Windows, I would use the built-in Performance Monitor (perfmon) tool to collect and analyse data. You can capture CPU, IO, Memory, Network and process-specific data to file, and analyse it to your heart’s content afterwards within the tool or exported to CSV.
On Linux and other *nix systems there is a swathe of tools available, my tool of choice being collectl, optionally visualised through graphite or graphiti. There are plenty of alternatives, including sar, glance, and so on.
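As a starting point, here is a sketch of simple background collection using standard *nix tools – the intervals and output paths are arbitrary:

```
# Minimal sketch: background OS metric collection during a test run, using
# standard procps/sysstat tools. Intervals and output paths are arbitrary.
RUN=os_metrics_$(date +%Y%m%d_%H%M%S)
mkdir -p "$RUN"
vmstat 5      > "$RUN/vmstat.log"  &   # CPU, memory, run queue
iostat -x 5   > "$RUN/iostat.log"  &   # disk IO, per device
sar -n DEV 5  > "$RUN/network.log" &   # network throughput
# The collectors run until stopped - kill the background jobs when the
# test ends (e.g. with 'kill %1 %2 %3' in an interactive shell).
```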
Finally, it’s worth noting that JMeter also offers collection of OS stats through the JMeter plugins project.
OBI metrics
The OBIEE performance counters are a goldmine of valuable information, and one that is well worth the time spent mining for nuggets. The counters give you not only a better picture of how different workloads are executed within OBIEE, but also an indication of where any bottlenecks may arise.
The counters can be accessed in several ways:
- Through Fusion Middleware Control, under Capacity Management -> Metrics -> View the full set of system metrics. This gives a point-in-time view of the data for the last 15 minutes, and does not store history.
- Presentation Services includes its own Performance Monitor, which can be accessed at http://<yourserver>:<analytics port>/saw.dll?perfmon. In OBIEE 10g it showed BI Server metrics too, but in 11g it seems to show only Presentation Services (Oracle BI PS) metrics. It is point in time with no history.
- Similar to Performance Monitor but with a lot more metrics available, DMS Spy is a java deployment hosted on the Admin Server by default, available at http://<yourserver>:<adminserver port>/dms/Spy
- Through a manual call to opmnctl on the command line (see the sampling sketch after this list). For example:

```
[oracle@obieesampleapp bin]$ ./opmnctl metric op=query COMPONENT_TYPE=OracleBIServerComponent
HTTP/1.1 200 OK
Connection: close
Content-Type: text/html

<?xml version='1.0'?>
<!DOCTYPE pdml>
<pdml version='11.0' name='Oracle BI Server' host='obieesampleapp' id='4399' timestamp='1359703796346'>
  <statistics>
    <noun name="Oracle BI Server" type="Oracle_BI_Applications">
      <noun name="Usage_Tracking" type="Oracle_BI_Thread_Pool">
        <metric name="Peak_Thread_Count.value">
          <value type="integer"><![CDATA[5]]></value>
        </metric>
        <metric name="Current_Thread_Count.value">
          <value type="integer"><![CDATA[2]]></value>
        </metric>
        <metric name="Lowest_Queued_Time_(milliseconds).value">
          <value type="integer"><![CDATA[0]]></value>
        </metric>
[…]
```
- DMS through WLST
- BI Server diagnostics through ODBC/JDBC
- It is worth noting that one place you cannot get the OBI metrics from any more is through JMX. In 10g this was an option and interfaced very well with industry-standard monitoring tools. In 11g, JMX is available for metrics outside core OBI, but not the core metrics themselves.
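Because the opmnctl call shown above is scriptable, one simple option is to sample it repeatedly for the duration of a test – a minimal sketch, with an arbitrary interval and log file name:

```
# Minimal sketch: sample BI Server DMS counters every 60 seconds during a
# test. Run from the instance's bin folder (or give the full path to opmnctl).
while true; do
    echo "-- $(date +%T)" >> obis_metrics.log
    ./opmnctl metric op=query COMPONENT_TYPE=OracleBIServerComponent >> obis_metrics.log
    sleep 60
done
```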
In addition to the out-of-the-box options above, here at RittmanMead we have developed our own OBIEE monitoring tool. DMS metrics are stored directly on disk or through a database, enabling both immediate and retrospective analysis. Custom dashboards enable the display of both OBIEE and OS data side-by-side for ease of analysis. Integration with third-party tools is also an option.
Query Metrics
If you are running a test for a specific performance issue then capturing query metrics is important, as these will feed into the diagnosis that you do, including building a time profile.
A Logical SQL query has two elements to it for which we capture information:
- BI Server (Logical SQL)
  - Query runtime
  - Total rows returned from the database
  - Total bytes returned from the database
  - Number of database queries
- Database (x n Physical SQL queries)
  - Response time
  - Rows returned
  - Bytes returned
All of the above information can be obtained from the BI Server’s nqquery.log, or most of it from the Usage Tracking tables S_NQ_ACCT and S_NQ_DB_ACCT.
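For example, here is a sketch of pulling these figures from Usage Tracking with sqlplus. The credentials are placeholders, and column names vary between OBIEE versions, so check your S_NQ_ACCT definition first:

```
# Minimal sketch: per-query metrics from Usage Tracking via sqlplus.
# Credentials are placeholders; verify the column names against your
# version's S_NQ_ACCT definition before relying on them.
sqlplus -s biplatform/password <<'EOF'
SELECT start_ts, total_time_sec, row_count, num_db_query
FROM   s_nq_acct
ORDER  BY start_ts;
EOF
```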
For really detailed analysis you may want to capture additional information from the database about how a query ran such as its execution plan. On Oracle, Real Time SQL Monitoring is a very useful tool, along with several others. For further details, speak to a DBA … this is an OBIEE blog ;-)
Execute
And now the moment you’ve all been waiting for … it’s party time!
Here is a checklist to work through for executing your test – a scripted sketch of it follows the list:
- Clear down the logs of any component you’re going to be analysing (eg nqquery.log, sawlog.log)
- Record any configuration/environment changes from the original baseline
- Record test start time
- Restart the OBIEE stack (i.e. WLS, OPMN)
- Start OS metric collection
- Start OBIEE metric collection (if used)
- Run the test!
- Monitor for errors and excessively bad response times
- Record test end time
- Record any errors observed during test
- Copy all logs, metric outputs, etc to a named/timestamped folder
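As promised above, a minimal sketch of this checklist as a wrapper script – every path, filename and command here is illustrative, assuming a default 11g ‘instance1’ install:

```
#!/bin/bash
# Minimal sketch of the execution checklist. All paths, filenames and
# commands are illustrative for a default 11g 'instance1' install.
INSTANCE=$FMW_HOME/instances/instance1
RUN=testrun_$(date +%Y%m%d_%H%M%S)
mkdir -p "$RUN"

echo "Test start: $(date)" > "$RUN/testlog.txt"

# Restart the OBIEE system components (a WebLogic restart is not shown here)
$INSTANCE/bin/opmnctl stopall
$INSTANCE/bin/opmnctl startall

# Start OS metric collection in the background
vmstat 5 > "$RUN/vmstat.log" &
VMSTAT_PID=$!

# Run the test - headless JMeter shown as an example driver
jmeter -n -t obiee_test.jmx -l "$RUN/results.csv"

echo "Test end: $(date)" >> "$RUN/testlog.txt"
kill $VMSTAT_PID

# Copy logs and metric output to the named/timestamped folder
cp -r $INSTANCE/diagnostics/logs "$RUN/"
```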
Monitoring the test as it executes is important. If something goes wrong then time can be saved by abandoning the test rather than letting the test script run to completion. There’s no point letting a big test run for hours if the results are going to be useless. Some of the things that could go wrong include:
- Your test script is duff. For example, there’s a typo in the login script and no users are logging in let alone executing dashboards. All your test will tell you is how fast OBIEE can reject user logins.
- Your test has hit a performance bottleneck in the stack. If you leave your test running beyond a certain point, all you’re doing is collecting data to show how bad things still are. If response times are flat on their back at 50 concurrent users, what’s the point leaving a test to run all the way up to 200? It’s best to curtail it and move swiftly on with the analysis and tuning stage.
- Your test framework has hit a bottleneck in itself. For example, the host machine cannot sustain the CPU or network traffic required. If this happens then your test data is worthless because all you’re now measuring is the capacity of the host machine, not the OBIEE stack.
Monitoring for errors is also vital for picking up messages that OBIEE might start to produce if it is hitting an internal threshold that could constrain performance.
Don’t fiddle the books!
If your test execution doesn’t work, or you spot an improvement or fix – resist the biggest temptation, which is to ‘just fix it’. Hours become days with this approach, and you completely lose track of what you changed.
Take a step back, make a note of what needs fixing or changing, and document it as part of the full cycle.
There is nothing wrong whatsoever with aborting a test for the reason that “I defined it incorrectly” or “I forgot to change a config setting”. Better to have a half dozen aborted tests lying around showing that you got your hands dirty than a suspiciously pristine set of perfectly executed tests.
Don’t forget that pesky documentation
Always document your testing, including method, definition, and results.
- You will not remember precisely how you ran the test, even a few days later
- How can you identify possible confounding of results, without recording a clear picture of the environment?
- If you find something unexpected, you can quickly seek a second opinion
- Without things written down, people will not be able to reproduce your testing
- Test results on their own are worthless; they are just a set of numbers.
- If it’s not written down, it didn’t happen
What next?
With a test run completed and a set of data collected, it’s time to make sense of the numbers and understand what they can tell us by analysing the results.
Other articles in this series
This article is part of a series on the subject of Performance and OBIEE:
Comments?
I’d love your feedback. Do you agree with this method, or is it a waste of time? What have I overlooked or overemphasised? Am I flogging a dead horse?
Because there are several articles in this series, and I’d like to keep comments in one place, I’ve enabled comments on the summary and FAQ post here, and disabled comments on the others.
Performance and OBIEE – part I – Introduction
Performance matters. Performance really matters. And performance can actually be easy, but it takes some thinking about. It can’t be brute-forced, or learnt by rote, or solved in a list of Best Practices, Silver Bullets and fairy dust.
The problem with performance is that it is too easy to guess and sometimes strike lucky, to pick at a “Best Practice Tuning” setting that by chance matches an issue on your system. This leads people down the path of thinking that performance is just about tweaking parameters, tuning settings, and twiddling knobs. The trouble with trusting this magic beans approach is that down this path leads wasted time, system instability, uncertainty, and insanity. Your fix that worked on another system might work this time, or a setting you find in a “Best Practice” document might work. But would it not be better to know that it would?
I wanted to write this series of posts as a way of getting onto paper how I think analysing and improving performance in OBIEE should be done and why. It is intended to address the very basic question of how do we improve the performance of OBIEE. Lots of people work with OBIEE, and many of them will have lots of ideas about performance, but not all have a clear picture of how to empirically test and improve performance.
Why does performance matter?
Why does performance matter? Why are some people (me) so obsessed with testing and timing and tuning things? Can’t we just put the system live and see how it goes, since it seems fast enough in Dev?…
Why performance matters to a project’s success
- Slow systems upset users. No-one likes to be kept waiting. If you’re withdrawing cash from an ATM, you’re going to be quite cross if it takes five minutes. In fact, a pause of five seconds will probably get you fidgeting.
Once users dislike a system, regaining their favour is an uphill battle. “Trust is hard to win and easily lost”. One of the things about performance is perception of speed, and if a user has decided a system is slow you will have to work twice as hard to get them simply to recognise a small improvement. You not only have to fix the performance problem, you also have to win round the user again and prove that it is genuinely faster.
- From a cost point of view, poorly performing systems are inefficient:
- They waste hardware resource, increasing the machine capacity required and decreasing the time between hardware upgrades
- They cost more to support, particularly as performance bottlenecks can cause unpredictable stability issues
- They cost more to maintain, in two ways. Firstly, each quick-win used in an attempt to resolve the problem will probably add to the complexity or maintenance overhead of the system. Secondly, a proper resolution of the problem may involve a redesign on such a scale that it can become a rewrite of the entire system in all but name.
- They cost more to use. User waiting = user distracted = less efficient at his job. Eventually, User waiting = disgruntled user = poor system usage and support from the business.
Why performance matters to the techie
Performance is not a fire-and-forget task, a box to be ticked on a checklist. It has many facets and places in a project’s life cycle.
Done properly, you will have confidence in the performance of your system, knowledge of the limits of its capacity, a better understanding of the workings of it, and a repeatable process for validating any issues that arise or prospective configuration changes.
Done badly, or not at all, you might hit lucky and not have any performance problems for a while. But when they do happen, you’ll be starting from a position of ignorance, trying to learn at speed and under pressure how to diagnose and resolve the problems. Silver bullets appear enticing and get fired at the problem in the vain hope that one will work. Time will be wasted chasing red herrings. You have no real handle on how much capacity your server has for an increasing user base. Version upgrades fill you with fear of the unknown. You don’t dare change your system for fear of upsetting the performance goblin lest he wreak havoc.
Building a good system is not just about one which cranks out the correct numbers. A good system is one which not only cranks out the good numbers, but performs well when it does so. Performance is a key component of any system design.
OBIEE and Performance
Gone are the days of paper reports, when a user couldn’t judge the performance of a computer system except by whether the paper reports were on their desk by 0800 on Monday morning. Now, users are more and more technologically aware. They are more aware of the concept and power of data. Most will have smartphones and be used to having their email, music and personal life at the finger-swipe of a screen. They know how fast computers can work.
One of the many strengths of OBIEE is that it enables “self-service” BI. The challenge that this gives us is that users will typically expect their dashboards and analyses to run as fast as all their other interactions with technology. A slow system risks being an unsuccessful system, as users will be impatient, frustrated, even angry with it.
Below I propose an approach, a method, which will support the testing and tuning of the performance of OBIEE during all phases of a project. Every method must have a silly TLA or catchy name, and this one is no different….
Fancy a brew? Introducing T.E.A., the OBIEE Performance Method
In working with performance one of the most important things is to retain a structured and logical approach to it. Here is mine:
- Test creation
  - A predefined, repeatable workload
- Execute and Measure
  - Run the test and collect data
- Analyse
  - Analyse the test results and, if necessary, apply a change to the system, which is then validated through a repeat of the cycle
The emphasis is on this method being applicable at any time in a system’s lifecycle, not just the “Performance Test” phase. Here are a few examples to put it in context:
- Formal performance test stage of a project
  - Test: define and build a set of tests simulating users, including at high concurrency
  - Execute and Measure: run the test and collect detailed statistics about the system profile
  - Analyse: check for bottlenecks, diagnose, redesign or reconfigure the system, and retest
- Continual monitoring of performance
  - Test: could be a standard prebuilt report with a known run time (i.e. a baseline)
  - Execute: could be just when the report gets run on demand, or a scheduled version of the report for monitoring purposes. Measure just the response time, alongside standard OS metrics
  - Analyse: collect response times to track trends and identify problems before they escalate. This also provides a baseline against which to test changes
- Troubleshooting a performance problem
  - Test: could be existing reports with known performance times taken from OBIEE’s Usage Tracking data
  - Execute: rerun reports and measure response times and detailed system metrics
  - Analyse: diagnose root cause, fix and retest
Re-inventing the wheel
T.E.A. is nothing new in the overall context of Performance. It is almost certainly in existence elsewhere under another name or guise. I have deliberately split it into three separate parts to make it easier to work with in the context of OBIEE. The OBIEE stack is relatively complex and teasing apart the various parts for consideration has to be done carefully. For example, designing how we generate the test against OBIEE should be done in isolation from how we are going to monitor it. Both have numerous ways of doing so, and in several places can interlink. The most important thing is that they’re initially considered separately.
The other reason for defining my own method is that I wanted to get something in writing on which I can then hang my various OBIEE-specific performance rants without being constrained by the terminology of another method.
What’s to come
This series of articles is split into the following:
- Introduction (this page)
- Test – Define
- Test – Design
- Test – Build
- Execute
- Analyse
- Optimise
- Summary and FAQ
I’m tempted to hyperlink these in the fashion of Choose Your Own Adventure and if you click straight from here onto the last section, Optimise, without having read the other parts first, it will redirect you back to them ;-)
Comments?
I’d love your feedback. Do you agree with this method, or is it a waste of time? What have I overlooked or overemphasised? Am I flogging a dead horse?
Because there are several articles in this series, and I’d like to keep the thread of comments in one place, I’ve enabled comments on the summary and FAQ post here, and disabled comments on the others.
Performance and OBIEE – part IV – Test – Build
This article is the fourth in a series about performance and OBIEE. In previous articles I described how test design should be approached, and in this article I explain in detail how to use some of the tools available to build the tests.
JMeter
JMeter is open-source software maintained by the Apache Software Foundation. It is a flexible and powerful tool with a GUI design interface that can run on any java-supporting desktop (e.g. Windows/Linux/Mac). The test execution can be run from the GUI, or “headless” from the command line.
To get started with JMeter, simply download it and uncompress the archive (tgz / zip). Go to the bin folder and double-click jmeter.sh or jmeter.bat. This will launch JMeter.
There are plenty of JMeter tutorials on the web and I am not going to replicate them here. Any tutorial that demonstrates how to record a web browsing session into a JMeter script should suffice for creating an initial OBIEE load test. Below I will detail some specifics that it is useful to be aware of.
- When a user clicks on a dashboard in OBIEE, the server returns the text “Loading…” shortly afterwards. You need to make sure that your test doesn’t incorrectly accept this as the completed report. There are a couple of ways to do this:
  - Design your test so that it waits until the text isn’t “Loading…”
  - Set NewCursorWaitSeconds in instanceconfig.xml so that the “Loading…” text isn’t displayed before the query results are returned. See this blog post for more details
- The JMeter Plugins are very useful for additional graphing options and better user control (eg jp@gc – Stepping Thread Group)
- Try to use variables throughout to increase code reusability and reduce maintenance. It’s worth taking time to refactor a test that has evolved into something complex.
- Use the Constant Timer object to add think time
- Response Assertion steps are a very good way of ensuring that your test is getting the result it should at each stage. For example, to check that the OBIEE login page is loading, check for Enter your user id and password.
- Use the Not option in a Response Assertion to check for things that definitely shouldn’t be there, such as Odbc driver returned an error or Loading…
- For a flexible test, parameterise the dashboard pages fetched. This is done in several stages:
  - Add a CSV Data Set Config step, configured to read a TSV (tab-separated) file from a path you specify. NB relative paths in JMeter are relative to the folder in which the JMeter script (JMX) resides. If you want to end the test once each dashboard page has been run once, set Recycle on EOF to False and Stop thread on EOF to True. To control the longevity of the test elsewhere, set Recycle on EOF to True
  - Load OBI’s Catalog Manager and navigate to the part of the catalog for which you want to generate a list of dashboards. Go to Tools -> Create Report. Set the Type to Dashboard and the columns to Folder and Dashboard Page Name. Save the resulting TSV file local to the JMeter script you have built.
  - The TSV file is written with a header, and this acts as the source of the variable names, in this case Folder and Dashboard Page Name. Use these in the Dashboard HTTP request.
  - Finally, use the following trick to ensure that each dashboard is recorded as a separately labelled transaction. Insert a BeanShell PostProcessor as a child of the HTTP Request with the following code:

```
prev.setSampleLabel("Get Dashboard: " + vars.get("Dashboard Page Name"));
```
- Validate your JMeter test response times by actually using the system yourself, both at single user and when the test is ramped up. If JMeter says response times are subsecond but you’re waiting 30 seconds to run a report, it’s possible that your JMeter test is not measuring the correct timings.
- If a system under test is failing to cope, an increased response time and/or an increased error rate is usually seen. To avoid unnecessarily long test durations, a test can be set to abort at a certain threshold, using the jp@gc – AutoStop Listener step.
- A simple OBIEE JMeter script may have the following steps:
- A sample OBIEE JMeter jmx script can be downloaded from here
The other thing to remember when you are building your JMeter tests is where and how you will run them:
- Be very conscious of where you run your JMeter script from. If you run it over a poor wifi connection from a three-year-old laptop, don’t be surprised if your test apparently tells you that your 40-core OBIEE server only supports a handful of users. Rather than observing bottlenecks on OBIEE, you’ll actually just be measuring how easy it is to saturate your local machine’s CPU and/or network connection. Particularly around network, be aware that each dashboard request will pull back the complete dashboard content, which can be 100s of KB. Multiply that by the number of simultaneous users you plan to simulate, and you’ll quickly see that your JMeter script needs to run from a machine with a nice fat network pipe.
- JMeter can be run distributed across servers, controlled from a central one. This is a good way to simulate many users if you find a single machine cannot keep up with the load.
- JMeter can be run from the command line if you are not able (or don’t want) to use a GUI.
BI Server + Database
The BI Server uses ODBC or JDBC for communication with clients (such as Presentation Services). We can use a client to send Logical SQL requests to it and record the response time:
nqcmd
nqcmd is the ODBC client that is provided with OBIEE. You can wrap it in a series of shell scripts to create a complete test.
This method is as simple as extracting the Logical SQL for a report, putting it in a file, and referencing this file when invoking nqcmd.
Extracting the Logical SQL can be done from several places, including:
- Usage Tracking:

```
select query_blob from s_nq_acct;
```

- nqquery.log. You’ll find this file in $FMW_HOME/instances/instance1/diagnostics/logs/OracleBIServer/coreapplication_obis1/ (it will be different for deployments using scaleout). You can also view and download nqquery.log through Enterprise Manager.
- The Answers editor. This is particularly useful for extracting the Logical SQL of a report you’re working on or investigating. Go to Edit Report and then the Advanced tab:
Having extracted the Logical SQL, write it to a file, for example test_report.lsql. I use the suffix lsql (Logical SQL) to differentiate it from normal (‘Physical’) SQL.
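For illustration, the contents of such a file might look like this – the subject area and column names are SampleApp-style examples, so treat it as a sketch rather than something to run verbatim:

```
$ cat test_report.lsql
SELECT
   0 s_0,
   "A - Sample Sales"."Time"."T02 Per Name Month" s_1,
   "A - Sample Sales"."Base Facts"."1- Revenue" s_2
FROM "A - Sample Sales"
ORDER BY 1, 2
```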
Now you can send the Logical SQL to the BI Server from an ODBC/JDBC client, of which the most commonly used with OBIEE is nqcmd.
To use nqcmd, you first need to set up the OBI environment variables. On Linux/Unix this is done by “dot-sourcing” a file which sets the environment for the current shell. The first character to type is a dot, then a space, then the path to the bi-init.sh script:
```
cd $FMW_HOME
. instances/instance1/bifoundation/OracleBIApplication/coreapplication/setup/bi-init.sh
```
On Windows, you can just run bi-init.cmd and it will open a new command window for you to use, with the environment correctly set.
Once the environment is set correctly, call nqcmd, supplying:
- The DSN of the BI Server:
  - on Linux/Unix, as defined in $FMW_HOME/instances/instance1/bifoundation/OracleBIApplication/coreapplication/setup/odbc.ini
  - on Windows, as defined in Administrative Tools -> Data Sources
- Your OBIEE username and password
- The path of the Logical SQL to run:

```
$ nqcmd -d AnalyticsWeb -u Prodney -p Admin123 -s test_report.lsql
```
This will execute the Logical SQL and return the results to your screen. With a few more tweaks we can easily get some performance data:
- Unless you want to see the data, use the -o flag to route it to a file, or -q for no output. Remember if there is lots of data from the query there will be a time overhead in sending it all to your console (or to file with -o)
- Use the undocumented but very useful -T flag to return a detailed timings breakdown
- On Linux/Unix, prefix the nqcmd call with time to give a report of the time taken by the whole command:

```
$ time nqcmd -d AnalyticsWeb -u Prodney -p Admin123 -T -s test_report.lsql -q

-------------------------------------------------------------------------------
          Oracle BI ODBC Client
          Copyright (c) 1997-2011 Oracle Corporation, All rights reserved
-------------------------------------------------------------------------------

Connection open with info:
[0][State: 01000] [DataDirect][ODBC lib] Application's WCHAR type must be UTF16, because odbc driver's unicode type is UTF16
[…]
Clock time: batch start: 08:08:07.000 Query from: 08:08:07.000 to: 08:08:07.000
Row count: 36
total: 0.046 prepare: 0.003 execute: 0.034 fetch: 0.009
Cumulative time(seconds): Batch elapsed: 0
Query total: 0.046 prepare: 0.003, execute: 0.034, fetch: 0.009, query count: 1, cumulative rows: 0
----------------------------------------------
Processed: 1 queries

real    0m1.783s
user    0m0.060s
sys     0m0.017s
```
Running Logical SQL through JMeter
You can build a more flexible and scalable test rig based on the same design as above using JMeter with the OBIEE JDBC driver.
Take the OBI JDBC driver file bijdbc.jar from $FMW_HOME/Oracle_BI1/bifoundation/jdbc/ and put it in the JMeter /lib folder, or specify it in JMeter’s root test step under Add directory or jar to classpath.
In JMeter add a User Group and within it:
- JDBC Connection Configuration
  - Variable Name: e.g. OBI
  - Validation Query: blank
  - Database URL: jdbc:oraclebi://<host>:9703/ (see the JDBC README.txt for the full syntax of the URL, including clustering)
  - JDBC Driver class: oracle.bi.jdbc.AnaJdbcDriver
  - Username + Password: valid OBIEE logins
- JDBC Request
  - Variable Name: OBI (or whatever you specified in the config above)
  - Query Type: Select Statement
  - SQL Query: <Your Logical SQL>
Add a listener to visualise the results, such as jp@gc – Response Times Over Time (part of JMeter Plugins)
Since this is JMeter, you can easily parameterise this test, running a predefined set of Logical SQL extracted from Usage Tracking or nqquery.log.
Download the JMeter JMX script here.
Database
If nothing in your OBI configuration has changed and you just want to test the performance of the database, then you can use standard database performance test tools. Take the Physical SQL from either nqquery.log or (in 11.1.1.6+) the table S_NQ_DB_ACCT and run it against the database.
For example, using sqlplus, the input file would look like this, with set termout off to suppress the rows and exit to quit sqlplus after execution:
```
$ cat test_report.sql
set termout off
select sum(T233609.DEPARTURES_PERFORMED) as c1,
       T233732.Cal_Month as c2,
       T233804.ROUTE as c3
from BI_AIRLINES.OBIEE_ROUTE T233804 /* 18 ROUTE */ ,
     BI_AIRLINES.OBIEE_TIME_MTH_D T233732 /* 41 TIME MONTH */ ,
     BI_AIRLINES.DOMESTIC_SEGMENT T233609 /* 01 DOMESTIC Segment */
where ( T233609.DEST = T233804.DEST and T233609.MONTH = T233732.Cal_Month and T233609.ORIGIN = T233804.ORIGIN and T233609.QUARTER = T233732.Cal_Qtr and T233609.YEAR = T233732.Cal_Year )
group by T233732.Cal_Month, T233804.ROUTE;
exit
```
Run it with the time prefix to see how long it takes:
```
$ time sqlplus BISAMPLE/BISAMPLE @test_report

SQL*Plus: Release 11.2.0.3.0 Production on Wed Feb 6 22:32:41 2013

Copyright (c) 1982, 2011, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

real    0m0.669s
user    0m0.028s
sys     0m0.083s
```
LoadTest.jar
Snuck in with the other goodies of SampleApp v207 is LoadTest.jar. You’ll find it in /home/oracle/scripts/loadtest, along with a readme file detailing how to use it. It takes as input a single dashboard path, and runs concurrent user requests against it.
```
[oracle@obieesampleapp loadtest]$ ./runtest
Start time: Wed Feb 6 22:42:39 GMT 2013
Load Test Starting...
----------------------------------------------
Creating User Sessions for Concurrency Test..
Total active sessions: 54
Initiating Queries..
Total queries initiated: 54
Cleaning up User Sessions created for Concurrency Test..
- Remaining Active Sessions: 54
Completed User Sessions Cleanup
----------------------------------------------
Load Test Completed...
End time: Wed Feb 6 22:42:45 GMT 2013
```
It requires a list of usernames (all having the same password), and these users must have been created in the security directory beforehand.
What next?
With a test designed and built, we’re now ready for launch … read on to see how best to make sure all the important metrics are captured when you execute your test.
Other articles in this series
This article is part of a series on the subject of Performance and OBIEE:
Comments?
I’d love your feedback. Do you agree with this method, or is it a waste of time? What have I overlooked or overemphasised? Am I flogging a dead horse?
Because there are several articles in this series, and I’d like to keep the comments in one place, I’ve enabled comments on the summary and FAQ post here, and disabled comments on the others.