Performance and OBIEE – part IV – Test – Build

This article is the fourth in a series about performance and OBIEE. In previous articles I described how test design should be approached; in this article I explain in detail how to use some of the tools available to build the tests.

JMeter

JMeter is open-source software maintained by the Apache Software Foundation. It is a flexible and powerful tool with a GUI design interface that runs on any desktop with Java support (e.g. Windows/Linux/Mac). Tests can be executed from the GUI, or “headless” from the command line.

To get started with JMeter, simply download it and uncompress the archive (tgz / zip). Go to the bin folder and double-click on jmeter.sh or jmeter.bat. This will launch JMeter.

There are plenty of JMeter tutorials on the web and I am not going to replicate them here. Any tutorial that demonstrates how to record a web browsing session into a JMeter script should suffice for creating an initial OBIEE load test. Below I detail some OBIEE specifics that are useful to be aware of.

  • When a user clicks on a dashboard in OBIEE, the server initially responds with a page containing the text “Loading…”. You need to make sure that your test doesn’t incorrectly accept this as the completed report. There are a couple of ways to do this:
    • Design your test so that it waits until the text isn’t “Loading…”
    • Set NewCursorWaitSeconds in instanceconfig.xml so that the “Loading…” text isn’t displayed before the query results are returned. See this blog post for more details
  • The JMeter Plugins are very useful for additional graphing options and better user control (eg jp@gc – Stepping Thread Group)
  • Try to use variables throughout to increase code reusability and reduce maintenance. It’s worth taking time to refactor a test that has evolved into something complex.
  • Use the Constant Timer object to add think time
  • Response Assertion steps are a very good way of ensuring that your test is getting the result it should at each stage. For example, to check that the OBIEE login page is loading, check for the text Enter your user id and password.
  • Use the Not option in a Response Assertion to check for things that definitely shouldn’t be there, such as Odbc driver returned an error or Loading…
  • For a flexible test, parameterise the dashboard pages fetched. This is done in several stages:
    1. Add a CSV Data Set Config step, configured to read a TSV (Tab-Separated) file from a path you specify. NB relative paths in JMeter are relative to the folder in which the JMeter script (JMX) resides

      If you want the test to end once each dashboard page has been run once, set Recycle on EOF to False and Stop thread on EOF to True. To control the longevity of the test elsewhere instead, set Recycle on EOF to True

    2. Load OBI’s Catalog Manager and navigate to the part of the catalog for which you want to generate a list of dashboards. Go to Tools -> Create Report. Set the Type to Dashboard and select the columns Folder and Dashboard Page Name.


      Save the resulting TSV file local to the JMeter script you have built.

    3. The TSV file is written with a header, and this acts as the source for the variable names, in this case Folder and Dashboard Page Name. Use these in the Dashboard HTTP request (see the example TSV after this list)
    4. Finally, use the following trick to ensure that each dashboard is recorded as a separately labelled transaction. Insert a BeanShell PostProcessor as a child of the HTTP Request with the following code:
      prev.setSampleLabel("Get Dashboard: " + vars.get("Dashboard Page Name"));
      
  • Validate your JMeter test response times by actually using the system yourself, both at single user and when the test is ramped up. If JMeter says response times are subsecond but you’re waiting 30 seconds to run a report, it’s possible that your JMeter test is not measuring the correct timings.
  • If a system under test is failing to cope, increased response times and/or an increased error rate are usually seen. To avoid unnecessarily excessive test durations, a test can be set to abort at a certain threshold, using the jp@gc – AutoStop Listener step.
  • A simple OBIEE JMeter script may have the following steps:
    A set of JMeter steps in a typical OBIEE script
  • A sample OBIEE JMeter jmx script can be downloaded from here
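
To make the parameterisation above concrete, here is what a dashboard list TSV might look like. The folder and page names here are invented; yours will come straight out of Catalog Manager. The header row supplies the JMeter variable names:

$ head -3 dashboards.tsv
Folder	Dashboard Page Name
/shared/Sales/_portal/Sales Performance	Overview
/shared/Finance/_portal/GL Analysis	Balance Sheet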

The other thing to remember when you are building your JMeter tests is where and how you will run them:

  • Be very conscious of where you run your JMeter script from. If you run it over a poor wifi connection from a three-year-old laptop, don’t be surprised if your test apparently tells you that your 40-core OBIEE server only supports a handful of users. Rather than observing bottlenecks on OBIEE, you’ll actually just be measuring how easy it is to saturate your local machine’s CPU and/or network connection. Particularly regarding the network, be aware that each dashboard request will pull back the complete dashboard content, which can be hundreds of KB. Multiply that by the number of simultaneous users you plan to simulate, and you’ll quickly see that your JMeter script needs to run from a machine with a nice fat network pipe.
  • JMeter can be run distributed across servers, controlled from a central one. This is a good way to simulate many users if you find a single machine cannot keep up with the load.
  • JMeter can be run from the command line if you are not able (or don’t want) to use a GUI.
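
For example, a headless run might look like the following (file names are illustrative). The -n flag runs JMeter in non-GUI mode, -t names the test plan, -l the results file, and -R a list of remote JMeter servers to drive:

$ ./jmeter -n -t obiee_load_test.jmx -l results.jtl

$ ./jmeter -n -t obiee_load_test.jmx -l results.jtl -R 192.168.0.10,192.168.0.11

The resulting JTL file can be opened afterwards in a GUI listener for analysis.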

BI Server + Database

The BI Server uses ODBC or JDBC for communication with clients (such as Presentation Services). We can use a client to send Logical SQL requests to it and record the response time:

nqcmd

nqcmd is the ODBC client that is provided with OBIEE. You can wrap it in a series of shell scripts to create a complete test; a sketch of this approach appears at the end of this section.

This method is as simple as extracting the Logical SQL for a report, putting it in a file, and referencing the file when invoking nqcmd.

Extracting the Logical SQL can be done from several places including:

  • Usage Tracking
    select query_blob from s_nq_acct;
    
  • nqquery.log
    You’ll find this file in the path $FMW_HOME/instances/instance1/diagnostics/logs/OracleBIServer/coreapplication_obis1/ (it will be different for deployments using scale-out).
    Logical SQL in nqquery.log

    You can also view and download nqquery.log through Enterprise Manager

  • Answers editor
    This is particularly useful for extracting the Logical SQL of a report you’re working on or investigating.
    Go to Edit Report and then the Advanced tab:
    Logical SQL in the Advanced tab of Answers

Having extracted the Logical SQL, write it to a file, for example test_report.lsql. I use the suffix lsql (Logical SQL) to differentiate it from normal SQL (‘Physical’ SQL).
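
For reference, a Logical SQL statement saved this way might look like the following. This is a made-up example against a hypothetical subject area; the point is that it references presentation-layer columns, not physical tables:

$ cat test_report.lsql
SELECT "Time"."Year" saw_0,
       "Facts"."Revenue" saw_1
FROM "Sales"
ORDER BY saw_0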

Now you can send the Logical SQL to the BI Server from an ODBC/JDBC client, of which the most commonly used with OBIEE is nqcmd.

To use nqcmd, you first need to set up the OBI environment variables. On Linux/Unix this is done by “dot-sourcing” a file which sets the environment for the current shell. The first character to type is dot, then space, then the path to the bi-init.sh script:

 cd $FMW_HOME
 . instances/instance1/bifoundation/OracleBIApplication/coreapplication/setup/bi-init.sh

On Windows, you can just run bi-init.cmd and it will open a new command window for you to use, with the environment correctly set.

Once the environment is set correctly, call nqcmd, supplying:

  • The DSN of the BI Server
    • on Linux/Unix as defined in $FMW_HOME/instances/instance1/bifoundation/OracleBIApplication/coreapplication/setup/odbc.ini
    • In Windows, it’s defined in Administrative Tools -> Data Sources
  • Your OBIEE username and password
  • The path of the Logical SQL to run:
$ nqcmd -d AnalyticsWeb -u Prodney -p Admin123 -s test_report.lsql

This will execute the Logical SQL and return the results to your screen. With a few more tweaks we can easily get some performance data:

  • Unless you want to see the data, use the -o flag to route it to a file, or -q for no output. Remember that if the query returns a lot of data there will be a time overhead in sending it all to your console (or to a file with -o)
  • Use the undocumented but very useful -T flag to return a detailed timings breakdown.
  • On Linux/Unix, prefix the nqcmd call with time to give a report of the time taken by the whole command:
    $ time nqcmd -d AnalyticsWeb -u Prodney -p Admin123 -T -s test_report.lsql -q
    
    -------------------------------------------------------------------------------
                  Oracle BI ODBC Client
                  Copyright (c) 1997-2011 Oracle Corporation, All rights reserved
    -------------------------------------------------------------------------------
    
    Connection open with info:
    [0][State: 01000] [DataDirect][ODBC lib] Application's WCHAR type must be UTF16, because odbc driver's unicode type is UTF16
    
    […]
    
    Clock time: batch start: 08:08:07.000 Query from: 08:08:07.000 to: 08:08:07.000 Row count: 36
     total: 0.046 prepare: 0.003 execute: 0.034 fetch: 0.009
    Cumulative time(seconds): Batch elapsed: 0 Query total: 0.046 prepare: 0.003, execute: 0.034, fetch: 0.009, query count:  1, cumulative rows:  0
    
    ----------------------------------------------
    
    Processed: 1 queries
    
    real    0m1.783s
    user    0m0.060s
    sys     0m0.017s
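
As an illustration of wrapping nqcmd in shell scripts as suggested above, here is a minimal sketch; the file layout and log format are just one way of doing it, and it assumes bi-init.sh has already been sourced and that the *.lsql files exist in the current folder:

#!/bin/bash
# Run each Logical SQL file in turn through nqcmd, recording the
# wall-clock duration of each execution to a tab-separated log
for f in *.lsql; do
    start=$(date +%s)
    nqcmd -d AnalyticsWeb -u Prodney -p Admin123 -s "$f" -q > /dev/null 2>&1
    echo -e "$(date)\t$f\t$(( $(date +%s) - start ))s" >> nqcmd_timings.log
done

Run several copies of a script like this concurrently and you have a crude but effective multi-user BI Server test.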

Running Logical SQL through JMeter

You can build a more flexible and scalable test rig based on the same design as above using JMeter with the OBIEE JDBC driver.

Take the OBI JDBC driver file bijdbc.jar from $FMW_HOME/Oracle_BI1/bifoundation/jdbc/ and put it in the JMeter /lib folder, or specify it in JMeter’s root test step under Add directory or jar to classpath.

In JMeter add a Thread Group and within it:

  • JDBC Connection Configuration
    • Variable Name : eg, OBI
    • Validation Query : blank
    • Database URL: jdbc:oraclebi://<host>:9703/ (see the jdbc README.txt for full syntax of the URL including clustering)
    • JDBC Driver class: oracle.bi.jdbc.AnaJdbcDriver
    • Username + Password : Valid OBIEE logins
  • JDBC Request
    • Variable Name : OBI (or whatever you specified in the config above)
    • Query Type : Select Statement
    • SQL Query: <Your Logical SQL>

Add a listener to visualise the results, such as jp@gc – Response Times Over Time (part of JMeter Plugins).

Since this is JMeter, you can easily parameterise this test, running a predefined set of Logical SQL extracted from Usage Tracking or nqquery.log.

Download the JMeter JMX script here.

Database

If nothing in your OBI configuration has changed and you just want to test the performance of the database, then you can just use standard database performance test tools. Take the Physical SQL from either the nqquery.log or (in 11.1.1.6+) table S_NQ_DB_ACCT and run it against the database.

For example, using sqlplus, the input file would look like this, with set termout off to suppress the rows, and exit to quit sqlplus after execution:

$ cat test_report.sql 
set termout off

select sum(T233609.DEPARTURES_PERFORMED) as c1,
     T233732.Cal_Month as c2,
     T233804.ROUTE as c3
from BI_AIRLINES.OBIEE_ROUTE T233804 /* 18 ROUTE */,
     BI_AIRLINES.OBIEE_TIME_MTH_D T233732 /* 41 TIME MONTH */,
     BI_AIRLINES.DOMESTIC_SEGMENT T233609 /* 01 DOMESTIC Segment */
where ( T233609.DEST = T233804.DEST
    and T233609.MONTH = T233732.Cal_Month
    and T233609.ORIGIN = T233804.ORIGIN
    and T233609.QUARTER = T233732.Cal_Qtr
    and T233609.YEAR = T233732.Cal_Year )
group by T233732.Cal_Month, T233804.ROUTE;

exit

Run it with the time prefix to see how long it takes:

$ time sqlplus BISAMPLE/BISAMPLE @test_report

SQL*Plus: Release 11.2.0.3.0 Production on Wed Feb 6 22:32:41 2013

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

real    0m0.669s
user    0m0.028s
sys     0m0.083s
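
To apply concurrent load with nothing more than the shell, you could launch several sqlplus sessions in parallel. A rough sketch, with the session count and credentials purely illustrative:

#!/bin/bash
# Fire off 10 concurrent sqlplus sessions running the same physical SQL,
# wait for them all to finish, then report the total elapsed time
start=$(date +%s)
for i in $(seq 1 10); do
    sqlplus -s BISAMPLE/BISAMPLE @test_report > /dev/null &
done
wait
echo "10 concurrent executions completed in $(( $(date +%s) - start ))s"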

LoadTest.jar

Snuck in with the other goodies of SampleApp v207 is LoadTest.jar. You’ll find it in /home/oracle/scripts/loadtest, along with a readme file detailing how to use it. It takes as input a single Dashboard path, and runs concurrent user requests against it.

[oracle@obieesampleapp loadtest]$ ./runtest 
Start time: Wed Feb  6 22:42:39 GMT 2013
Load Test Starting...

----------------------------------------------
Creating User Sessions for Concurrency Test..
Total active sessions: 54

Initiating Queries..
Total queries initiated: 54

Cleaning up User Sessions created for Concurrency Test..
        - Remaining Active Sessions: 54
Completed User Sessions Cleanup
----------------------------------------------

Load Test Completed...
End time: Wed Feb  6 22:42:45 GMT 2013

It requires a list of usernames (all with the same password), and these users must have been created in the security directory beforehand.

What next?

With a test designed and built we’re now ready for launch … read on to see how best to make sure all the important metrics are captured when you execute your test.

Other articles in this series

This article is part of a series on the subject of Performance and OBIEE:

  1. Introduction
  2. Test – Define
  3. Test – Design
  4. Test – Build
  5. Execute
  6. Analyse
  7. Optimise
  8. Summary and FAQ

Comments?

I’d love your feedback. Do you agree with this method, or is it a waste of time? What have I overlooked or overemphasised? Am I flogging a dead horse?

Because there are several articles in this series, and I’d like to keep the comments in one place, I’ve enabled comments on the summary and FAQ post here, and disabled comments on the others.

Performance and OBIEE – part III – Test – Design

Having defined our test, we can now design it. Wordplay? Not really. Regardless of how we run our tests, we’ve got to define up front what it is we are testing. Once we’ve got that, we can then work out how to actually create a test that simulates what we’ve defined.

Pretty boring huh? Can’t we just, y’know, like, run LoadRunner a bunch of times?

Sure, go ahead. But unless everything is hunky-dory and passes with flying colours, you’ll only end up retrofitting the definition to your test in order to properly analyse the problem and create further tests to get to root-cause. Therein lies the path to madness, and more importantly, wasted time and poor quality tests.

Don’t jump the gun and wade into the ‘fun’ part, otherwise you’ll spend too much time trying to figure out what to build into your test on the hoof instead of carefully working it out up front.

The OBI stack

The key to designing a good performance test, and comprehending the results from it, is understanding the OBI stack. Doing so enables us to inject tests into the OBI stack at various points to focus our testing and reduce the time it takes to build a test. Being able to pinpoint where the work happens is also essential to focussing troubleshooting.

At a very high level, the stack looks like this:

High level view of the OBI stack
(NB this is not an exhaustive diagram, for example, it doesn’t include any alternative security providers)

The user accesses OBI through their web browser, which connects to WebLogic. WebLogic is a web and application server. It connects to OBI, which in turn generates queries to run on the database.

Examining the stack in a bit more detail, it looks like this:

OBI stack

When the user interacts with OBI, all the web browser calls go through WebLogic. WebLogic runs Java deployments, including one called analytics, which communicates with the Presentation Services OBI component. When the user requests a dashboard, the following happens:

  1. Presentation Services takes all of the constituent analyses in the dashboard and sends the Logical SQL for each one to BI Server
  2. BI Server runs each Logical SQL through the RPD and generates one or more Physical SQL queries to run on the Database
  3. Once the Database has executed all of the queries required for one of the Logical SQL statements, BI Server processes the results and does any further calculations, federation, and so on. A single resultset per Logical SQL request is passed back to Presentation Services.
  4. Presentation Services receives the data from BI Server, sends any chart requests to Javahost, and does additional processing on the data for things like pivot tables. It then sends the rendered data for each analysis back to the web browser.
  5. In the web browser each analysis in the dashboard can receive its data independently. As the data is received, the browser will render it.

Logical SQL in OBIEE

For each Logical SQL request from Presentation Services, there will be one or more Physical SQL queries, but just one resultset passed back to Presentation Services. For each Dashboard there may be many Logical SQL queries run.

OBI stack

£5 for the hammer, £95 for knowing where to hit it

Building an accurate, flexible performance test rig takes time and therefore it is wise to test only what needs testing. For example:

  • If our performance test is for a greenfield OBI deployment then we’d be pretty foolish not to conduct an end-to-end test. We need to make sure that all the components are performing together at the required workload.
  • However, what about if we’re testing an existing mature deployment where there’s a specific problem in the database? We could do an end-to-end test, but if all that’s changing is the database (for example, new indexes) then we should probably just focus our test on the database.
  • What about if the RPD is changing in an existing deployment and nothing else? Is there really any point performance testing the response time all the way to the user? If the data returned is the same then we can presume that the performance in Presentation Services and back to the user is going to be the same. Therefore we can build a more streamlined (and thus efficient) test against just the BI Server (because the RPD has changed) and the database (because a changed RPD could generate different Physical SQL).

The flipside of not testing what doesn’t need testing is to test what does need testing and may get overlooked. For example, are your users connecting over 3G yet you’re conducting your performance tests on a LAN?

Tools & Designs for OBI performance testing

toolshed

There are several different ways to build a performance test for OBIEE. I have described them here, as well as suggesting where each one is most appropriate to use.

End-to-end (almost)

Traditional load testing tools such as JMeter, Oracle Application Testing Suite (OATS), and HP LoadRunner can be used to simulate a user clicking on a report in a web browser and recording how long it takes for the report to run. They can have multiple report definitions, and run many concurrently. This is usually sufficient for a comprehensive performance test. One should be aware when using these that there is still an element that has not been tested:

  1. Outbound. A user interacting with OBI uses a modern web browser with lots of asynchronous traffic (AJAX). For example, clicking on a prompt to get a list of values, clicking on the dashboard menu to see a list of available dashboards. A standard load test will typically only send the HTTP request necessary to trigger the dashboard refresh itself.
  2. Inbound. A real user’s web browser will receive the data from a dashboard as it is returned from the OBI server and render it. Once it’s finished rendering, it is viewable to the user. If it is a large pivot table with hundreds of thousands of cells (“because that’s how the Excel report we’re duplicating does it”) then this might take many seconds or even minutes – or even not work at all. But a load test tool will parse the data that comes back from the server, and once all the data’s received, it will mark that as the completed response time – the time to render is not included.

How traditional load testing tools differ from real users

This is not to dismiss this approach to test design – generally it is a very good one. These are the subtleties that a standard load test can overlook. The AJAX dashboard menu call might not take long, but the list of values might be a hidden bottleneck if it hits an unoptimised data source (for example, SELECT DISTINCT DIM_VALUE FROM 60M_ROW_UNAGGREGATED_DIM_TABLE).

It is important that the above limitations are noted, and the test design adjusted to compensate where these weaknesses are observed (for example, large pivot tables, list of values, etc).

Presentation Services via Web Services

Snuck in with the other goodies of SampleApp v207 is LoadTest.jar. You’ll find it in /home/oracle/scripts/loadtest, along with a readme file detailing how to use it. It takes as input a single Dashboard path, and runs concurrent user requests against it. Since it is using a dashboard path we can presume it is interfacing with Presentation Services, and my guess is that it utilises the OBIEE web services to generate the load.

This utility will presumably have the same limitation as JMeter, LoadRunner et al in that page render times are not included in the response time. LoadTest.jar also only returns timings at a total level for a batch of concurrent users – how long it takes, in total, for x users to run a given dashboard. Individual response times from the BI Server for the constituent queries can be seen in Usage Tracking. LoadTest.jar could also be invoked multiple times to simulate multiple users/dashboards, with the timing for each invocation recorded to get a more granular set of response times. Finally, LoadTest.jar can only fetch a dashboard in its default state – there is no apparent way to change prompt values or interact with hierarchical columns, drill paths, etc.

An alternative to LoadTest.jar for this type of testing could be using JMeter to invoke the web services, which could give more flexibility (at the expense of time to set it up and complexity to maintain).

BI Server + Database

The BI Server supports ODBC or JDBC for communication with clients (such as Presentation Services). We can use a client to send Logical SQL requests to it and record the response time. Because Presentation Services itself sends Logical SQL to the BI Server, this is a sound approach: from the BI Server + Database point of view, the behaviour is exactly the same whether the Logical SQL comes from Presentation Services or from another client (eg injected by us for performance testing).

nqcmd is the ODBC client that is provided with OBIEE. You can wrap this in a series of shell scripts to create a complete test, or you could build a test using JMeter with the OBIEE JDBC driver for a more flexible and scalable test rig.

What this method doesn’t test is Presentation Services and upstream, so the additional processing of resultsets, passing the data back to the web browser, and the rendering of it. But, if you are targeting your testing at just the RPD and Database then this is a very good way to avoid unnecessary complexity.

Database

If nothing in your OBI configuration has changed and you just want to test the performance of the database, then you can just use standard database performance test tools. Take the Physical SQL from the report being tested and run it against the database using your database’s standard tools.
This is useful if you are doing tuning work at the database level (partitioning, indexing, etc), but only use this approach if nothing is changing in your RPD. As soon as the RPD changes, different physical SQL could potentially result.

Remember also that the response time from a physical SQL executed on the database is not the same as the response time of a report – it still has to go back up through the stack back to the user’s web browser. For a complex report or a large number of rows this can add significant time.

The only genuine end-to-end test – a stopwatch!

Monkey

I kid you not – before spending weeks and £££ building complex LoadRunner scripts, stop and think about the testing you are doing. If you want to test all the dashboards at single-user concurrency – find a willing monkey to sit there for 20 minutes and run the dashboards. Maybe you need to test multiuser concurrency but can make do with fewer dashboards tested this way than for a single user.

This is also a very easy way to determine page render times. As I discussed in the paragraph above, traditional load test tools such as LoadRunner and JMeter only measure the time taken for an OBI dashboard to return data, not how long it takes to render it. Obviously the only time the user is interested in is the complete time, including rendering. So using a stopwatch (or, more scientifically, inbuilt browser tools such as Chrome’s Developer Tools) you can measure the complete time as the user will experience it.

Which tool?

In an OBI performance test lifecycle I would expect to use several of the different tools described above. Typically a greenfield test will look at the performance of the stack end-to-end. If there are problems with the end-to-end test, or if the testing is in response to a problem in an existing system, I would use a response time profile (of which more later) to understand where the bottleneck lies and focus the testing (and tuning) there. For example, if the bottleneck appeared to be the SQL that the RPD was generating, I might use nqcmd to validate changes I made to the RPD and/or the database model.

Test Design tips

Whichever method you use, bear the following in mind:

  1. Generating a big load is easy; generating a big load that provides useful data is more difficult. For example, when a test shows that 100 concurrent users running 10 different dashboards doesn’t work, how do you break the test down into components to see where the bottleneck is? A good test design will take this into account up front and include granular response times and instrumentation.
  2. K.I.S.S. : Sometimes a set of shell scripts run concurrently will do just as good a job of applying a high load to OBI as a “proper” load testing tool can. The time command will always give you the duration of an execution, and you can modularise sets of shell scripts to your heart’s content (see the sketch after this list).
    1. If you use shell scripting, don’t forget to KISS – a complex mess of shell scripts isn’t Simple either.
  3. You may see a conflict between the above points. How do you build a test that is granular and well instrumented, yet isn’t overly complex? Well, therein lies the gold ;-) Because all performance tests are different, I can only outline here good approaches to take, but you have to walk that line between complexity and flexibility in your design.
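
As a flavour of the K.I.S.S. approach in point 2, combined with the granular timings demanded by point 1, a trivially modular shell pattern might look like this (entirely illustrative; the label and the finance.lsql file are hypothetical):

# Time any command and log it against a label, so that each test
# component produces its own granular, labelled response time
run_timed () {
    local label=$1; shift
    local start=$(date +%s)
    "$@"
    echo -e "$(date -u +%FT%TZ)\t$label\t$(( $(date +%s) - start ))s" >> test.log
}

run_timed finance_dashboard nqcmd -d AnalyticsWeb -u Prodney -p Admin123 -s finance.lsql -q

This keeps the load generation simple whilst still producing data you can actually analyse.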

Heuristic and Holistic performance testing

There is more than one way to evaluate the performance of OBIEE. Tools like those I have described above are great for simulating known workloads with a fixed definition. Unfortunately, all of these assume a known set of reports and predicates, and user behaviour. What about testing a system where there is ad hoc capability and/or a great number of possible predicate values and thus execution plans?

Here I would recommend a thorough review of the system’s implementation, with the view to determining possible bottlenecks before they occur. It is more difficult to prescribe this area as it comes down to a combination of personal experience of good design, careful evaluation of right practices, and the ability to isolate and test particular configurations.

The difficulty with this type of testing is that it produces less tangible data with which to say yea or nay on a system’s ability to perform. It means that the audience for the performance test report must be receptive to “It Depends”. Just because a table doesn’t have a particular index doesn’t mean in itself that the performance test is a failure and the index must be defined before go-live. What it does mean is that the missing index may cause a worse execution plan, which may require additional database resources, which may limit how many queries can run concurrently before a bottleneck is reached.

For specific suggestions of what to examine in this type of evaluation, see here. I am well aware of the apparent contradiction in advocating a heuristic approach here yet criticising “best practices”. My problem with best practices is when they are treated as absolutes and without context; they normally originate from good intentions and are subsequently abused. In the context of evaluating the performance of a system that is impossible to comprehensively test (eg an ad hoc report workload), looking at how past implementations of the technology have fared is the only sensible option. The success of these past implementations may or may not be encoded as ‘best practices’; the point is that they are evaluated in a conscious manner and not followed blindly.

Resource management

To reduce the risk to a Production system of running ad hoc or untested reports, resource management at the database level could be used. On Oracle, Database Resource Manager (DBRM) enables queries to be allocated a greater or lower priority for CPU on the database, or to have their execution time limited. Fixed reports which have been through a strict performance QA process could be given greater resources, since the effect has been quantified and analysed in advance, whilst ad hoc or unproven reports could be more restricted.

To implement this, pass the report or user name through in the connection pool, and then define a Resource Manager rule to assign the resource management policy accordingly.
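
As a minimal sketch of what such a DBRM policy might look like: all object names here are hypothetical, and it assumes the connection pool sets the session’s module name to ADHOC. Run as a DBA and adapt to your own standards:

sqlplus -s / as sysdba <<'EOF'
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
    consumer_group => 'OBIEE_ADHOC',
    comment        => 'Unproven ad hoc OBIEE queries');
  -- Map sessions to the group by module name, as passed through
  -- from the OBIEE connection pool (an assumption in this sketch)
  DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
    attribute      => DBMS_RESOURCE_MANAGER.MODULE_NAME,
    value          => 'ADHOC',
    consumer_group => 'OBIEE_ADHOC');
  DBMS_RESOURCE_MANAGER.CREATE_PLAN(
    plan    => 'OBIEE_PLAN',
    comment => 'Throttle unproven OBIEE work');
  -- Low CPU priority, and cancel any call running over five minutes
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'OBIEE_PLAN',
    group_or_subplan => 'OBIEE_ADHOC',
    comment          => 'Ad hoc OBIEE queries',
    mgmt_p1          => 20,
    switch_group     => 'CANCEL_SQL',
    switch_time      => 300);
  -- Every plan must include a directive for OTHER_GROUPS
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'OBIEE_PLAN',
    group_or_subplan => 'OTHER_GROUPS',
    comment          => 'Everything else',
    mgmt_p1          => 80);
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
EOF

The plan still has to be activated (e.g. ALTER SYSTEM SET resource_manager_plan = 'OBIEE_PLAN') and the relevant users granted permission to switch to the consumer group.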

Resources can also be controlled at the BI Server level by setting restrictions on query runtimes and rows returned on a per-application role basis.

What next?

Now that we’ve defined the test we want to perform, and have designed how it will be implemented, we move on to actually building it, here.

Other articles in this series

This article is part of a series on the subject of Performance and OBIEE:

  1. Introduction
  2. Test – Define
  3. Test – Design
  4. Test – Build
  5. Execute
  6. Analyse
  7. Optimise
  8. Summary and FAQ

Comments?

I’d love your feedback. Do you agree with this method, or is it a waste of time? What have I overlooked or overemphasised? Am I flogging a dead horse?

Because there are several articles in this series, and I’d like to retain the discussion in one place, I’ve enabled comments on the summary and FAQ post here, and disabled comments on the others.

Performance and OBIEE – part III – Test – Design

Having defined our test, we can now design it. Wordplay? Not really. Regardless of how we run our tests, we’ve got to define up front what it is we are testing. Once we’ve got that, we can then work out how to actually create a test that simulates what we’ve defined.

Pretty boring huh? Can’t we just, y’know, like, run LoadRunner a bunch of times?

Sure, go ahead. But unless everything is hunky-dory and passes with flying colours, you’ll only end up retrofitting the definition to your test in order to properly analyse the problem and create further tests to get to root-cause. Therein lies the path to madness, and more importantly, wasted time and poor quality tests.

Don’t jump the gun and wade into the ‘fun’ part, otherwise you’ll spend too much time trying to figure out what to build into your test on the hoof instead of carefully working it out up front.

The OBI stack

The key to designing a good performance test, and comprehending the results from it, is understanding the OBI stack. Doing so enables us to inject tests into the OBI stack at various points to focus our testing and reduce the time it takes to build a test. Being able to pinpoint where the work happens is also essential to focussing troubleshooting.

At a very high level, the stack looks like this:

High level view of the OBI stack
(NB this is not an exhaustive diagram, for example, it doesn’t include any alternative security providers)

The user accesses OBI through their web browser, which connects to WebLogic. Weblogic is a web and application server. It connects to OBI which in turn generates queries to run on the database

Examining the stack in a bit more detail, it looks like this:

OBI stack

When the user interacts with OBI all the web browser calls are done through WebLogic. WebLogic runs Java deployments including one called analytics which communicates with the Presentation Services OBI component. When the user requests a dashboard, the following happens:

  1. Presentation Services takes all of the constituent analyses in the dashboard and sends the Logical SQL for each one to BI Server
  2. BI Server runs each Logical SQL through the RPD and generates one or more Physical SQL queries to run on the Database
  3. Once the Database has executed all of the queries required for one of the Logical SQL statements, BI Server processes the results and does any further calculations, federation, and so on. A single resultset per Logical SQL request is passed back to Presentation Services.
  4. Presentation Services receives the data from BI Server, sends any chart requests to Javahost, and does additional processing on the data for things like pivot tables. It then sends the rendered data for each analysis back to the web browser.
  5. In the web browser each analysis in the dashboard can receive its data independently. As the data is received, the browser will render it.

Logical SQL in OBIEE

For each Logical SQL request from Presentation Services, there will be one or more Physical SQL queries, but just one resultset passed back to Presentation Services. For each Dashboard there may be many Logical SQL queries run.

OBI stack

£5 for the hammer, £95 for knowing where to hit it

Building an accurate, flexible performance test rig takes time and therefore it is wise to test only what needs testing. For example:

  • If our performance test is for a greenfield OBI deployment then we’d be pretty foolish not to conduct an end-to-end test. We need to make sure that all the components are performing together at the required workload.
  • However, what about if we’re testing a existing mature deployment where there’s a specific problem in the database? We could do an end to end test, but if all that’s changing is the database (for example, new indexes) then we should probably just focus our test on the database.
  • What about if the RPD is changing in an existing deployment and nothing else? Is there really any point performance testing the response time all the way to the user? If the data returned is the same then we can presume that the performance in Presentation Services and back to the user is going to be the same. Therefore we can build a more streamlined (and thus efficient) test against just the BI Server (because the RPD has changed) and the database (because a changed RPD could generate different Physical SQL).

The flipside of not testing what doesn’t need testing is to test what does need testing and may get overlooked. For example, are your users connecting over 3G yet you’re conducting your performance tests on a LAN?

Tools & Designs for OBI performance testing

toolshed

There are several different ways to build a performance test for OBIEE. I have described them here as well as suggested where each one is most appropriate to use.

End-to-end (almost)

Traditional load testing tools such as JMeter, Oracle Application Testing Suite (OATS), and HP LoadRunner can be used to simulate a user clicking on a report in a web browser and recording how long it takes for the report to run. They can have multiple report definitions, and run many concurrently. This is usually sufficient for a comprehensive performance test. One should be aware when using these that there is still an element that has not been tested:

  1. Outbound. A user interacting with OBI uses a modern web browser with lots of asynchronous traffic (AJAX). For example, clicking on a prompt to get a list of values, clicking on the dashboard menu to see a list of available dashboards. A standard load test will typically only send the HTTP request necessary to trigger the dashboard refresh itself.
  2. Inbound. A real user’s web browser will receive the data from a dashboard as it is returned from the OBI server and render it. Once it’s finished rendering, it is viewable to the user. If it is a large pivot table with hundreds of thousands of cells (“because that’s how the Excel report we’re duplicating does it”) then this might take many seconds or even minutes – or even not work at all. But a load test tool will parse the data that comes back from the server, and once all the data’s received, it will mark that as the completed response time – the time to render is not included.

How traditional load testing tools differ from real users

This is not to dismiss this approach of test design – generally it is a very good one. These are the subtleties that a standard load test can overlook. The AJAX dashboard menu call might not take long, but the list of values might be a hidden bottleneck if it hits an unoptimised data source (for example, SELECT DISTINCT DIM_VALUE FROM 60M_ROW_UNAGGREGATED_DIM_TABLE).

It is important that the above limitations are noted, and the test design adjusted to compensate where these weaknesses are observed (for example, large pivot tables, list of values, etc).

Presentation Services via Web Services

Snuck in with the other goodies of SampleApp v207 is LoadTest.jar. You’ll find it in /home/oracle/scripts/loadtest, along with a readme file detailing how to use it. It takes as input a single Dashboard path, and runs concurrent user requests against it. Since it is using a dashboard path we can presume it is interfacing with Presentation Services, and my guess is that it utilises the OBIEE web services to generate the load.

This utility will presumably have the same limitation as JMeter, LoadRunner et al in that page render times are not included in the response time. LoadTest.jar also only returns timings at a total level for a batch of concurrent users – how long does it take, in total, for x users to run a given dashboard. Individual response times from the BI Server for the constituent queries can be seen in Usage Tracking. LoadTest.jar could also be invoked multiple times to simulate multiple users/dashboards and the timing for each invocation be recorded to get a more granular set of response times. Finally, LoadTest.jar can only fetch a dashboard in its default state – there is no apparent way to change prompt values or interact with hierarchical columns, drill paths, etc.

An alternative to LoadTest.jar for this type of testing could be using JMeter to invoke the web services, which could give more flexibility (at the expense of time to set it up and complexity to maintain).

BI Server + Database

The BI Server supports ODBC or JDBC for communication with clients (such as Presentation Services). We can use a client to send Logical SQL requests to it and record the response time. Because Presentation Services also sends Logical SQL to BI Server this test approach is a good one because from the BI Server + Database point of view, the behaviour is exactly the same whether the Logical SQL comes from Presentation Services or another client (eg injected by us for performance testing).

nqcmd is the ODBC client that is provided with OBIEE. You can wrap this in a series of shell scripts to create a complete test, or you could build a test using JMeter with the OBIEE JDBC driver for a more flexible and scalable test rig.

What this method doesn’t test is Presentation Services and upstream, so the additional processing of resultsets, passing the data back to the web browser, and the rendering of it. But, if you are targeting your testing at just the RPD and Database then this is a very good way to avoid unnecessary complexity.

Database

If nothing in your OBI configuration has changed and you just want to test the performance of the database, then you can just use standard database performance test tools. Take the Physical SQL from the report being tested and run it against the database using your database’s standard tools.
This is useful if you are doing tuning work at the database level (partitioning, indexing, etc), but only use this approach if nothing is changing in your RPD. As soon as the RPD changes, different physical SQL could potentially result.

Remember also that the response time from a physical SQL executed on the database is not the same as the response time of a report – it still has to go back up through the stack back to the user’s web browser. For a complex report or a large number of rows this can add significant time.

The only genuine end-to-end test – a stopwatch!

Monkey

I kid you not – before spending weeks and £££ building complex LoadRunner scripts, stop and think about the testing you are doing. If you want to test all the dashboards with a single user concurrency – find a willing monkey to sit there for 20 minutes and run the dashboards. Maybe you need to test multiuser concurrency but can make do with fewer dashboards tested this way than for a single user.

This is also a very easy way to determine page render times. As I discussed in the paragraph above, traditional load test tools such as LoadRunner and JMeter only measure the time taken for an OBI dashboard to return data and not how long it takes to render it. For the user obviously the only time they are going to be interested in is the complete time including rendering. So using a stopwatch (or more scientifically, inbuilt browser tools such as Chrome’s Developer Tools) you can measure the complete time as the user will experience it.

Which tool?

In an OBI performance test lifecycle I would expect to use several of the different tools described above. Typically a greenfield test will look at the performance of the stack end-to-end. If there are problems with the end-to-end, or if the testing is in response to a problem in an existing system, I would use a response time profile (of which more later) to understand where the bottleneck lay and focus the testing (and tuning) there. For example, if the bottleneck appeared to be the SQL that the RPD was generating I might use nqcmd to validate changes I made to the RPD and/or the database model

Test Design tips

Whichever method you use, bear the following in mind:

  1. Generating a big load is easy; generating a big load that provides useful data is more difficult. For example, when a test shows 100 concurrent users running 10 different dashboards doesn’t work, how do you break it down into components to see where the bottleneck is? A good test design will take this into account up front and include granular response times and instrumentation.
  2. K.I.S.S. : Sometimes a set of shell scripts run concurrently will do just as good a job of applying a high load to OBI as a “proper” load testing tool (see the sketch after this list). The time command will always give you the duration of an execution, and you can modularise sets of shell scripts to your heart’s content.
    1. If you use shell scripting, don’t forget to KISS – a complex mess of shell scripts isn’t Simple either.
  3. You may see a conflict between the above points. How do you build a test that is granular and well instrumented, yet isn’t overly complex? Well, therein lies the gold ;-) Because all performance tests are different, I can only outline here good approaches to take, but you have to walk that line between complexity and flexibility in your design.
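
As an example of the K.I.S.S. point above, here is a minimal sketch of a concurrent load driven purely from the shell. run_user_workload.sh is a hypothetical single-user script (for example, a sequence of the nqcmd calls shown earlier):

#!/bin/bash
# Minimal sketch: simulate N concurrent users by backgrounding N copies
# of a single-user workload script. run_user_workload.sh is hypothetical.

USERS=10

for i in $(seq 1 $USERS); do
    # Run each user in a subshell so its "time" output lands in its own log
    ( time ./run_user_workload.sh ) > user_${i}.log 2>&1 &
done

wait   # block until every simulated user has finished
echo "All $USERS simulated users completed"

Because each simulated user writes its own log, the response times stay granular even as the overall load scales up.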

Heuristic and Holistic performance testing

There is more than one way to evaluate the performance of OBIEE. Tools like those I have described above are great for simulating known workloads with a fixed definition. Unfortunately, all of these assume a known set of reports and predicates, and user behaviour. What about testing a system where there is ad hoc capability and/or a great number of possible predicate values and thus execution plans?

Here I would recommend a thorough review of the system’s implementation, with a view to determining possible bottlenecks before they occur. It is more difficult to prescribe this area as it comes down to a combination of personal experience of good design, careful evaluation of right practices, and the ability to isolate and test particular configurations.

The difficulty with this type of testing is that it produces less tangible data with which to say yea or nay on a system’s ability to perform. It means that the audience for the performance test report must be receptive to “It Depends”. Just because a table doesn’t have a particular index doesn’t mean in itself that the performance test is a failure and the index must be defined before go-live. What it does mean is that the missing index may cause a worse execution plan, which may require additional database resources, which may limit how many queries can run concurrently before a bottleneck is reached.

For specific suggestions of what to examine in this type of evaluation, see here. I am well aware of the apparent contradiction in advocating a heuristic approach here yet criticising “best practices”. My problem with best practices is when they are treated as absolutes and without context; they normally originate from good intentions and are subsequently abused. In the context of evaluating the performance of a system that is impossible to test comprehensively (e.g. an ad hoc report workload), looking at how past implementations of the technology have fared is the only sensible option. The success of these past implementations may or may not be encoded as ‘best practices’; the point is that they are evaluated in a conscious manner and not followed blindly.

Resource management

To reduce the risk to a Production system of running ad hoc or untested reports, resource management at the database level could be used. On Oracle, Database Resource Manager (DBRM) enables queries to be allocated a greater or lower priority for CPU on the database, or to have their execution time capped. Fixed reports which have been through a strict performance QA process could be given greater resource, since their effect has been quantified and analysed in advance, whilst ad hoc or unproven reports could be more restricted.

To implement this, pass the report or user name through in the connection pool, and then define a Resource Manager rule to assign the resource management policy accordingly.
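
As an illustration, here is a sketch of the sort of DBRM configuration implied. It assumes the OBIEE connection pool tags each database session on connect (for example with a connection script such as call dbms_session.set_identifier('VALUEOF(NQ_SESSION.USER)')); all group names, the mapped user, and the limits below are illustrative assumptions, not a definitive implementation.

#!/bin/bash
# Minimal sketch of a DBRM policy that throttles ad hoc OBIEE queries.
# Group names, the mapped user, and the limits are illustrative assumptions.

sqlplus -s "/ as sysdba" <<EOF
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
    consumer_group => 'OBIEE_ADHOC',
    comment        => 'Unproven ad hoc OBIEE queries');

  -- Map a known ad hoc user's client identifier to the restricted group
  DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
    attribute      => DBMS_RESOURCE_MANAGER.CLIENT_ID,
    value          => 'ADHOC_ANALYST',
    consumer_group => 'OBIEE_ADHOC');

  DBMS_RESOURCE_MANAGER.CREATE_PLAN(
    plan    => 'OBIEE_PLAN',
    comment => 'Prioritise QA-approved reports over ad hoc queries');

  -- Ad hoc queries: low CPU share, cancelled after 300s of execution
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'OBIEE_PLAN',
    group_or_subplan => 'OBIEE_ADHOC',
    comment          => 'Restrict ad hoc workload',
    mgmt_p1          => 10,
    switch_group     => 'CANCEL_SQL',
    switch_time      => 300);

  -- A directive for OTHER_GROUPS is mandatory for the plan to validate
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'OBIEE_PLAN',
    group_or_subplan => 'OTHER_GROUPS',
    comment          => 'Everything else, including fixed reports',
    mgmt_p1          => 90);

  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
EXIT
EOF

Remember that the mapped sessions also need the privilege to switch into the consumer group (via DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP), and the plan must be made active, for example through the RESOURCE_MANAGER_PLAN initialisation parameter.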

Resources can also be controlled at the BI Server level by setting restrictions on query runtimes and rows returned on a per-application role basis.

What next?

Now that we’ve defined the test we want to perform, and have designed how it will be implemented, we move on to actually building it, here.

Other articles in this series

This article is part of a series on the subject of Performance and OBIEE:

  1. Introduction
  2. Test – Define
  3. Test – Design
  4. Test – Build
  5. Execute
  6. Analyse
  7. Optimise
  8. Summary and FAQ

Comments?

I’d love your feedback. Do you agree with this method, or is it a waste of time? What have I overlooked or overemphasised? Am I flogging a dead horse?

Because there are several articles in this series, and I’d like to retain the discussion in one place, I’ve enabled comments on the summary and FAQ post here, and disabled comments on the others.

Performance and OBIEE – part II – Test – Define

Take the first tentative steps along the yellow brick road of investigating performance and OBIEE with this jump into the nitty gritty of how we should define a performance test.

An overview of defining a performance test

The first step in assessing the performance of OBIEE is defining the test we are going to use. For troubleshooting a performance problem in Production, this might be a single report. For performance testing a new system it might be more complex. The process to define it needs to be the same though.

Let us consider for a moment what it is we want to achieve through a performance test. It should be a validation – or possibly determination – of some or all of the following, which may or may not be formally documented in a project as ‘Non-Functional Requirements’ (NFRs):

  • Report response times
  • User concurrency levels
  • Hardware utilisation

It doesn’t sound too difficult, does it? We can run a report and time how long it takes, and we can run the same report multiple times in parallel and see how long it takes then, and around all of that we can gather system OS stats.

This seems to be the level at which some people perceive the complexity of performance testing; that is, not at all.

But to put any weight on the above results would presume that the system under test had:

  • Only one report
  • No parameters or other interactivity on the report
  • Every user runs the report at the exact same time

How many OBIEE implementations do you know that match this description?

As soon as you accept this premise, then you have to accept the need to design your performance test properly if you want any kind of confidence in the results that you get.

Of course, there is the chance – [sarcasm]vastly improbable and unheard of I know[/sarcasm] – that Performance Testing is a tick box on a project plan (normally about a week before Go Live), from which there is the expectation that it can deliver the confidence of an accurate test with the simplicity and brevity of the one-report test I describe above. If that is the case then I hope this article will explain why it is a fallacy.

Walk before you run

There are three key things to aim for in the design of a performance test:

  1. Repeatability. You must be able to rerun the test, so that you can (a) prove bugs/performance problems have been fixed, and (b) re-use the test as a baseline for future measurements
  2. Accuracy. This can be difficult with a stack as complex as the OBIEE one, but you must do all you can to avoid conflation and misinterpretation of data.
  3. Ease of Analysis. If the result of your test is unsuccessful, you will have to start analysing the issue. If your test is unnecessarily big or complex then it will be very difficult and time consuming to pare it back to find root cause.

Avoid the blunderbuss approach. It’s tempting (and fun!) to get carried away building a huge test and running it with a big number of users – but if it brings the system down then the only immediate conclusion you can draw is that you can crash the system. To get to any kind of root cause of the limitation you will have to run more tests, and you will have to simplify the test that you did run to pinpoint the actual issue(s).

Build up small tests with single users and check things work at this level first. You can then combine these together into larger tests for load testing. By working this way you get the added benefit of a better understanding of the workings of your load test and system being tested, as you will have gained this experience from the smaller tests.

Think of it as assembling a car’s engine for yourself versus buying a car out of the showroom. If it breaks (or you want to tune it), you’ll be in a better place to start if you have been under the bonnet [hood] already.

Define a User

The first step in defining a good test is to define extremely precisely and clearly what it is we are testing. This section’s heading I take from Zed Shaw’s superb article “Programmers need to learn statistics…”. For me it encapsulates in three words the challenge that we face in designing a good test. We know we want our system to support more than one user, and we probably have a user concurrency figure (of which more below) that the system must support. So, how do we define ‘a user’? To quote from the aforementioned article, do we just “click around a bit, you know, like a user”? Maybe we run the OBIEE equivalent of SELECT 1 FROM DUAL a thousand times simultaneously, and then declare our system can support a thousand concurrent users?

The challenge is that good test or not, test results and system capacity will often be boiled down to a discussion of simple “user” numbers. Will the system support 100 users? 200 users? And so on.

Modelling the workload of a transactional (OLTP) system is relatively straightforward in that there is a finite number of transactions, even if it is a large number. When we consider how to model an OBIEE user in a performance test, we have additional challenges:

  • Navigation paths
    • In an OLTP system such as an online store, once a user clicks on ‘add to cart’ we can be pretty sure they will then either (a) carry on shopping or (b) go to checkout. What about a BI user once they’ve run their first report – which will they run next? Maybe a different report, but quite likely it will be navigation within the existing report (e.g. drill down, prompt changes, etc.)
  • Ad hoc
    • Whilst some reporting will be fixed reports, much business value from BI comes in the analysis and exploration of data in an ad hoc manner to discover trends etc to give competitive advantage. These ad hoc queries cannot, by definition, be known in advance, so the performance test definition must find a way to address this.

Think about how many types of users you have on your system, and how you would characterise and group them. There will be different sets of data they access, and varying degrees of interactivity with which they use the tool. Some will be using pre-built reports with fixed filters, others may be doing extensive ad-hoc analyses, and somewhere in the middle will be those using pre-built reports but with a fair amount of interactivity through functionality such as prompts and selection steps.

Define a Report

So for each of our User types that we define, we need to precisely define the workload for them. This will be made up of the execution of one or more dashboards/reports. Defining the report that our model users are going to run is not as simple as it may appear. As well as the distinction to make between individual reports (Analyses/Answers) and Dashboards, we also need to consider:

  • Filters
    • How do we model a report which has six possible filters? We can’t just call that ‘report A’, because each filter permutation could cause different SQL to be generated by the BI reporting tool and consequently vastly different execution plan(s) on the database where the query runs.
  • Aggregate navigation
    • One of OBIEE’s many strengths is its aggregate navigation capability, enabling the user to drill down into data whilst in the background the tool switches between pre-defined aggregate tables on the database to pick the fastest one to use. For performance testing we need to make sure that within a report each possible aggregate is considered separately.

As an example, let us examine a dashboard from SampleApp.

SampleApp Dashboard 3.10

At an initial glance, we could specify our “Report” as simply being “Dashboard 3.10”. But… the dashboard is made up of three analyses (plus some static text/image objects):

  • “Product Line Analysis”
  • “Ragged Hierarchy”
  • “Value Based (Parent Child) Hierarchy”

SampleApp Dashboard 3.10 breakdown

We need to consider each of these reports individually. In addition, there are hierarchical columns which, like Prompts, must be included in our test specification. Here’s one of the reports from the dashboard in two different states of hierarchical navigation.

Collapsed - 3 DB queries
Expanded - 21 DB queries

The first version, mostly collapsed, requires 3 queries against the database. The second version requires 21 queries! Clearly the performance of the two different versions of the report is going to differ.

Depending on the hierarchical column states, depending on dashboard prompt values, depending on aggregate drill paths followed, the same individual report has the potential to behave quite differently.

Any report or dashboard that’s using prompts needs to have those prompt values stated in the test definition. The execution of a query on the database can vary considerably depending on the predicates supplied, and it is in part from the report/dashboard prompts that these predicates come.

Ad-hoc reports

Conspicuous by its absence from the above is how we deal with ad-hoc reporting with OBIEE in our performance tests. This is the elephant in the room when it comes to performance testing OBIEE, because we cannot go through the same methodical process of cataloging the workload – because it is ad hoc. This is discussed in detail later.

User workload

Having precisely defined the reports which are going to be run, we can sequence them together to represent different types of users. Clearly this is going to be a generalisation, an estimation, of how a particular group of users is going to tend to use the reports. Unlike an OLTP transaction, where the steps are much more predefined and free from possible deviations, users of OBIEE have the flexibility to run any number of reports, drill within existing reports, and so on.

Taking the above SampleApp dashboard as an example, one user (let’s call them the “Product Analyst”) might have the following test sequence:

  1. View Dashboard 3.10 in default state
  2. Drill into hierarchical columns of Product Line Analysis
  3. View 1.3 Sample Sales - Product Details

Some user types may just run one report and exit; they don’t have to have a complex sequence.

You don’t need to restrict yourself to one user type per actual group of users. You could model several different instances of each user group in order to increase the test coverage and realism of different reports running concurrently. Be aware that the more user types and tests you have, the longer the test will take to build, and the more complex the troubleshooting could be.

Think Time

Users are not computers, which is a shame because it would make reasoning with them a lot easier. It also means that they do not just run one report after another. They run a report, look at the numbers, get a cup of tea, change a prompt value, have a chat with a colleague, and so on.

As part of your user workload, plan in some think times. Don’t spend too long thinking about it (geddit), because remember this user workload is an approximation of what a certain type of user is going to be doing.

User concurrency


Once we have designed a set of individual user workloads, we can bring these together into a larger test to assess the performance of the system as a whole under load. The aim of load testing is to assess what happens to the response time of reports, and hardware utilisation, as the number of concurrent users on the system increases. There are three key things to define in a load test:

  • the workload of each user
  • the different types of user
  • the number of concurrent users

The first two of these are discussed above. When it comes to user concurrency, we need to be very careful how we derive our figure, because it has a big impact on the scale of the tests that we run and the discussion of the outcomes.

First, make sure that you are clear in the definition of concurrency that you use. These include:

  • Concurrent user sessions logged into OBIEE, viewing dashboards, formatting reports, and so on.
  • Concurrently executing queries at the BI Server level, which could be a single user running one report or one dashboard.
  • Concurrently executing queries on the database, which could be a single user in OBIEE running a single report or dashboard requiring more than one SQL request to get its data.

Each of these has its own particular load on the system, and they can easily differ by orders of magnitude. Typically a user concurrency figure will be derived from “bums on seats”, i.e. the number of users. From this, a reckoning is made of how many will be using the system at one time. So unless it’s qualified otherwise, I would work with the first definition – concurrent user sessions logged into OBIEE. But if this is your definition, remember it when we come to simulating users later on …

For some reason, user concurrency is often a contentious number to define. With the greatest of apologies to Monty Python, I offer this paraphrasing of a familiar conversation:

“yah, my system needs to support 100 concurrent users”

“100? I’d take that as a QUIET DAY for my system, bwahaha, I need to test at least 1,000 concurrent users!”

“1,000? You were LUCKY! My system would run a gazillion concurrent users at three o’clock in the morning, work twenty hours down t’pit and still run 5,000 concurrent users before having a handful of cold gravel for its tea”

There does seem to be a BSD syndrome around user concurrency figures, with people always pitching high numbers. This is counterproductive. If you run a performance test using a completely simulated workload with an excessively high number of users as the target, then the margin of ‘error’ (how accurately your test represents reality) inherent in the simulation approach will be magnified as the user concurrency increases. The net effect of this is that you cannot be confident that any problems that do happen in the test will actually happen in real life. Conversely, if you take a real workload and run it for many users, you can honestly say that if it doesn’t work in testing, it won’t work in real life.

If you are worried about pitching your user estimate too low and so double it or stick a zero on the end just in case, consider the following: If you define your test as being a fail if it does not support your high figure of concurrent users – what then? You have to ask for money, either literally to buy bigger hardware, or in kind, to support the time to optimise the system you do have. It’s not a great place to be in, and one which may be unnecessary. A sensible approach is the following:

  1. Load test to confirm whether the system can support the realistic number of users in the short-term, e.g. go-live and a few months beyond
  2. Defer part of the load testing until after go-live, when you can run a test for medium/long-term user numbers using real user workload for your tests. The benefit of this is that your second test will be far more accurate, because your simulation will not be based on how you think your system will be used but on how you know it is being used.

If this approach doesn’t suit, then your alternative should be to

  1. take a realistic figure to start with for your first test
  2. run a second performance test to validate a future growth figure for your user base. If you don’t have a future growth figure, then the second test could simply determine how many users your system can support, by ramping numbers up until things start to creak.

The point of doing it this way is that the first test – assuming it gives a positive result – gives clarity and focus to your overall results. If the second test fails, then at least you have the confidence from the first with which to proceed with a decision. If you just run one big test then it’s an “all or nothing” approach, and if the test fails you have the option of either delaying launch until a successful test, or just crossing your fingers and hoping.

User concurrency numbers for an existing system

If you have an existing system then it is easy to determine your actual user concurrency through either Usage Tracking (you do have it enabled, don’t you?) or the OBI DMS metrics.

  • If your user concurrency figure is based on users logged into OBIEE, not necessarily running reports, then the metric Oracle BI PS Sessions : Sessions Logged In should be appropriate.
  • For the number of queries (logical SQL) concurrently running on the BI Server, use Usage Tracking (see below), or DMS metric t.b.c!
  • The number of concurrent queries on the database can either be obtained from your database’s monitoring tool, or through the Oracle BI DB Connection Pool DMS metrics.

To find out more about viewing and capturing OBI metrics over time, see here.

To estimate user concurrency from Usage Tracking, you can use the following query. It will analyse Usage Tracking to give an approximate number of concurrent users, but this is based on individual queries run on the BI Server, so it is not the same as users simply logged into OBIEE, nor the same as dashboard refreshes (since one dashboard could be many queries):

-- Usage Tracking Concurrency usage estimate
-- Per minute granularity

SELECT TO_CHAR(start_ts, 'YYYY-MM-DD HH24:MI') AS start_ts_min, 
       COUNT(DISTINCT user_name)               AS distinct_users 
FROM   s_nq_acct 
GROUP  BY TO_CHAR(start_ts, 'YYYY-MM-DD HH24:MI') 
ORDER  BY 1 DESC    

Estimating user concurrency numbers

If you don’t have empirical numbers to use then 10% of your total user base is the rule of thumb often worked to.

User concurrency is multi-dimensional

Just as there is no such thing as “A User”, to ensure an accurate performance test, user concurrency figures should also be specified in terms of:

  • Types of concurrent user, for example:
    • “10 Concurrent Finance Power Users”
    • “2 Concurrent Finance Ad-Hoc Users”
    • “60 Concurrent HR Report Consumers”
  • Periods of system usage, for example:
    • Monday morning peak
    • EOD peak
    • ‘Average’

You may only actually test the peak user concurrency model, but it’s a good idea to have a picture of the system profile, particularly when analysing the impact and capacity for ad-hoc usage.

Summary : User and Report Definition

Remember:

Before you can measure something you really need to lay down a very concrete definition of what you’re measuring. – Zed Shaw

Whatever kind of testing you are doing, aim to make it modular wherever possible. It will make both analysis and reuse of test code easier.
Almost all test definitions should be made up of one or more of the following levels:

  1. Specific Report or dashboard along with all prompt values, hierarchical column settings, etc. This is the base unit of your performance test.
  2. Then take these individual tests, and use them together to create an individual User Workload model for each user.
  3. Finally, combine the User models to produce an overall Concurrent User workload simulation.

See below for a worked example of a sample test definition.

Points to remember in your test definition:

  1. Whilst each dashboard does have its own name, a good idea is to use your own ID system to make it easy to refer to a specific dashboard state. For example, when using prompts there might be two or more versions of the dashboard to test (for different prompt values).
    It’s also easier to refer to something like “ID01” in conversation and notes than it is to “Finance Dashboard – Profitability EMEA blah blah blah”.
  2. Don’t forget default values in your navigation. If you are testing specific dashboard prompt values, you’ll need to also test the default values first – because that is what will get executed when the user first clicks onto the dashboard.
  3. As well as testing dashboard response times, factor in the usability of any prompts that the user will interact with. A fast dashboard is not quite so impressive if the user had to wait a long time for the dashboard prompts to display the available values.
  4. You need to be writing down nuts and bolts detail here. Resist the temptation to skimp on the detail or assume you’ll remember – you won’t, and certainly your colleague who can reuse your testing won’t appreciate having to fill in the blanks to make use of it.

What do you mean, not all of it has been tested?

Before you run off to write down all your dashboards and types of user, consider the following. Unless you have:

  1. A very small system
  2. A lot of time, and/or an infinite number of monkeys

you are not going to be able to actually test all of your OBIEE system. Fact.

There are too many factors (prompt values, aggregate paths, ad hoc usage) to be able to model all plausible (let alone possible) permutations of these.

Designing the scope of a performance test needs to take into account the acceptable risk of things going wrong vs time/budget available vs functionality required. By functionality, I mean factors such as the breadth of dashboards delivered, or data made available.

  • In some systems stability is paramount and therefore the relative risk must be minimised. If this is the case and your time/budget is constrained, then the functionality will have to be constrained.
  • If you need to deliver a lot of functionality and assuming that time/budget is fixed, then you will need to accept an increased risk to the stability of your system.
  • If you have limitless time/budget then you can deliver lots of functionality and do large and wide-ranging performance tests to ensure minimal risk.

The point here is an important one because it needs discussing with the project owner, or more specifically, whoever will be accountable if things don’t go right in Production. For a standard OLTP system it is much easier (relatively) to design a performance test with a high degree of confidence that most [common] workloads have been simulated and tested. However, for a reporting system such as OBIEE there are two reasons why it isn’t so easily done, as discussed above and reiterated here:

  1. Permutations of fixed reporting – each dashboard can have multiple prompt dimensions, hierarchical column expansions. Users can follow many different drill and navigation paths from any one dashboard.
  2. Ad hoc reporting – by definition, the report is not known until the user builds it, so we cannot test it beforehand

But, all is not lost. Even if we accept that to test every permutation of report that might run is impossible, it is still perfectly realistic to perform a thorough performance assessment of a system prior to go-live. The most sensible option is a hybrid approach of:

  • Traditional performance testing of fixed reports as described above
  • Testing predefined ad-hoc reports (if that isn’t a paradox)
  • Holistic and heuristic evaluation of the system design and data model
  • Resource management to curtail the impact of bad queries
  • Active monitoring to identify bad queries and deal with them (using the T.E.A. method)
  • Feedback loop : as system usage becomes clearer, refine original performance testing using a more accurately informed model (using the T.E.A. method)

This may seem a bit of an excuse for a cop-out. If we can’t performance test all our system and be confident in the results, what’s the point? Here, I defer to Cary Millsap, a great proponent of the rigorous, scientific, and rational approach to performance testing:

How can you possibly test a new application enough to be confident that you’re not going to wreck your production implementation with performance problems?

You can model. And you can test. However, nothing you do will be perfect. It is extremely difficult to create models and tests in which you’ll foresee all your production problems in advance of actually encountering those problems in production.

Some people allow the apparent futility of this observation to justify not testing at all. Don’t get trapped in that mentality. The following points are certain:

  • You’ll catch a lot more problems if you try to catch them prior to production than if you don’t even try.
  • You’ll never catch all your problems in pre-production testing. That’s why you need a reliable and efficient method for solving the problems that leak through your pre-production testing processes.

Somewhere in the middle between “no testing” and “complete production emulation” is the right amount of testing. The right amount of testing for aircraft manufacturers is probably more than the right amount of testing for companies that sell baseball caps. But don’t skip performance testing altogether. At the very least, your performance test plan will make you a more competent diagnostician (and clearer thinker) when it comes time to fix the performance problems that will inevitably occur during production operation.

An example OBIEE test definition

Here’s an example test definition, put together from SampleAppv207.

What we are going to model is two groups of users. One likes pretty pictures and just runs two chart-based dashboards and then logs out. The other is interested in numbers and does some filtering on an analysis dashboard.

First, we’ll define the dashboards in use. For brevity, I’ve omitted the constituent analysis details, but in normal practice it’s a good idea to include them (for future reference, ease of understanding, clarity, etc).

Dashboards

Dashboard : ID01

  • Dashboard : 3.20 Vanilla Visuals
  • Page : Standard Visuals
  • Path : /shared/3. Analysis and Dashboards/_portal/3.20 Vanilla Visuals
  • Dashboard Prompts: None
  • Analyses :
    • /shared/3. Analysis and Dashboards/Report Views/Gallery/Examples of Standard Visualizations

Dashboard : ID02

  • Dashboard : 3.20 Vanilla Visuals
    • Page : Trellis-Product Sales
    • Path : /shared/3. Analysis and Dashboards/_portal/3.20 Vanilla Visuals Trellis-Product Sales
    • Dashboard Prompts: None
    • Drill state : Default (NB each chart can be clicked on to drill into detail)
    • Analyses :
      • /shared/3. Analysis and Dashboards/Report Views/Trellis/Revenue Breakup
      • /shared/3. Analysis and Dashboards/Report Views/Trellis/Product Cost and Unit Price Trend
      • /shared/3. Analysis and Dashboards/Report Views/Trellis/Brand Cost and Unit Price Trend
        • Offices Region Hier : All collapsed, except AMERICAS expanded one level

Dashboard : ID03

  • Dashboard : 1.3 Sample Sales
  • Page : Brand Analysis
  • Path : /shared/1. Quick Demos/_portal/1.3 Sample Sales Brand Analysis
  • Dashboard Prompts:
    • T05 Per Name Year : Default (2008, 2009, 2010, 2011)
    • D4 Company : Default (none)
    • T31 Cal Week : Default (between 0 and 53)
    • “Products”.“Products Hierarchy” : Default (none)
  • Analyses :
    • /shared/1. Quick Demos/Simple Demo Dashboard/Brand Analysis/Product Line Analysis
    • /shared/1. Quick Demos/Simple Demo Dashboard/Brand Analysis/Trending Analysis
    • /shared/1. Quick Demos/Simple Demo Dashboard/Brand Analysis/Gauges
    • /shared/1. Quick Demos/Simple Demo Dashboard/Brand Analysis/Top Customers Bookings
    • /shared/1. Quick Demos/Simple Demo Dashboard/Brand Analysis/Daily Revenue Timeline

Dashboard : ID04

  • Dashboard : 1.3 Sample Sales
  • Page : Brand Analysis
  • Path : /shared/1. Quick Demos/_portal/1.3 Sample Sales Brand Analysis
  • Dashboard Prompts:
    • T05 Per Name Year : 2010, 2011
    • D4 Company : Stockplus Inc.
    • T31 Cal Week : between 1 and 27
    • “Products”.“Products Hierarchy” : Default (none)
  • Analyses :
    • /shared/1. Quick Demos/Simple Demo Dashboard/Brand Analysis/Product Line Analysis
    • /shared/1. Quick Demos/Simple Demo Dashboard/Brand Analysis/Trending Analysis
    • /shared/1. Quick Demos/Simple Demo Dashboard/Brand Analysis/Gauges
    • /shared/1. Quick Demos/Simple Demo Dashboard/Brand Analysis/Top Customers Bookings
    • /shared/1. Quick Demos/Simple Demo Dashboard/Brand Analysis/Daily Revenue Timeline

Tip: Use Catalog Manager to generate a report of dashboards and their constituent analyses to make the creation of the above specification easier.

User Workload

Having defined the dashboards (ID01 – ID04), we define the user types and workloads:

User A

  • Description: Department X, executive
  • Workload model:
    1. ID01
    2. Think time: 60 seconds
    3. ID02
    4. Think time: 60 seconds
    5. Logout

User B

  • Description: Department Y, analyst
  • Workload model:
    1. ID03
    2. Think time: 120 seconds
    3. ID04
    4. Think time: 90 seconds
    5. Logout

Concurrency

Our concurrent user model looks like this. I am defining concurrent users as active users, that is, executing dashboards or simulated thinking.

Peak usage

  • User A : 5 concurrent users
  • User B : 2 concurrent users

Light usage

  • User A : 3 concurrent users
  • User B : 1 concurrent user

Types of testing

Before we can consider the types of test, we need to define what it is we are testing – hence the long section above. Once you’ve identified the types of user and the specific reports they will run, you should also look at the type of test you want to run. The terminology is frequently used interchangeably, so here’s my version of it:

  • Performance testing
    • With a single user, is the response time acceptable?
  • Load testing
    • With concurrent users, is the response time acceptable and stable?
  • Capacity testing
    • How does the system behave when concurrent users are simulated? Do any areas show signs of reaching their resource capacity?
  • Soak testing
    • How does the system behave if the concurrent user workload is left to run for many hours? For example, are there any memory leaks lurking, or connections which are not closed down properly and will eventually run out?
  • Stress testing
    • As we increase the number of concurrent users beyond that required (and already tested successfully), how far will the system go until something breaks or response times suffer?
    • In a high-availability deployment, what happens to response times if we simulate partial stack failures? Does the system handle this gracefully under load?

These types of test do not have to be mutually exclusive in execution, but in defining your testing make sure you have covered them all, or explained why they are not necessary. For example, performance testing a single report change on an existing system would probably not warrant a full-scale soak test.

In summary

The output of this phase of the method should typically be a set of test definitions, not one single one. As already discussed, keeping tests modular and simple is a much better approach than trying to kill all the birds with one boulder.

Footnote: The soft squidgy side of performance testing

Tangent: Enterprise change control processes

Taking the premise discussed above – that it is impossible to test every possible report execution – raises an interesting question around change control and the stability of a Production environment. If your company is one where releases must be triple tested and signed off in the blood of your firstborn, then you might want to have this conversation sooner rather than later. Why? Well, if your system has any element of ad-hoc usage then there is a chance that a user will run a query that disrupts other users on the system. There is always the potential for the perfect storm of a critical business period, high user traffic, and a rogue ad-hoc query. Without proper understanding of the nature of ad-hoc queries, an almighty witch hunt can ensue, looking for who broke Production.
If this kind of risk is unacceptable then you will have to look into mitigating it with things like resource management and environment partitioning.

Estimating performance test for project planning

The challenge that the complexity and multiple facets of performance testing gives is finding a way to communicate it accurately to project managers and business sponsors.

Unlike an estimate for code delivery, which will be <x> days and after which the code will either function or not, performance testing can pretty much take as long as you like, and the kicker is that the tests will only give varying degrees of confidence that things should be OK in production, but with no guarantees.

Doing the initial analysis on the system is a fixed time, as is building the test rig(s) with which to test it. Once you have done this, then defining the number of dashboards to test can be done based on the time available. The more dashboards are tested, the greater certainty can be had that any problems will be uncovered. Other fixed tasks should be a review of the overall design, as well as time to properly document the tests undertaken and an analysis of the results obtained. It is a false economy to skimp on the documentation, as to do so renders almost useless the results obtained for future work. Good documentation of testing undertaken and results found can greatly shorten the time necessary in the future when revisiting the work either for a new phase of development or to troubleshoot performance problems that have arisen.

What next?

Next up is designing the test that we’ve just defined … read on!

Other articles in this series

This article is part of a series on the subject of Performance and OBIEE:

  1. Introduction
  2. Test – Define
  3. Test – Design
  4. Test – Build
  5. Execute
  6. Analyse
  7. Optimise
  8. Summary and FAQ

Comments?

I’d love your feedback. Do you agree with this method, or is it a waste of time? What have I overlooked or overemphasised? Am I flogging a dead horse?

Because there are several articles in this series, and I’d like to keep the discussion in one place, I’ve enabled comments on the summary and FAQ post here, and disabled comments on the others.