
Exalytics X3-4 Now Available, 2TB RAM + SSD and Supports OBIEE 11.1.1.7

It’s been on the Oracle Tech Price List for a while now, but Oracle Exalytics In-Memory Machine X3-4 was officially launched this week, with headline new features of 2TB of RAM (up from 1TB in the earlier X2-4 version) and 2.4TB of SSD flash disk. This new version carries a higher price due to the additional RAM and SSD (Ed Roske gives a good explanation of the price increase, and of your options if you bought the earlier version), so the old version remains available for purchase at the original price; most customers will no doubt go for the new, uprated version though, because the cost of the Exalytics hardware is only a small proportion of the overall hardware-plus-software cost. So what’s in this new version, has the Exalytics software changed, and what’s the point of the SSD disk if it’s “in-memory”?

[Image: Oracle Exalytics In-Memory Machine X3-4]

As you’re probably aware from earlier posts on Exalytics, the Exalytics software is just regular OBIEE, Essbase and TimesTen, but with optimisations for running on this in-memory platform. As such, it’s the new Patchset 3 for Exalytics 1.0 that gives you any new features, and this patchset also applies to people like us with the earlier release. Exalytics Release 1 Patchset 3 primarily gives us OBIEE 11.1.1.7 (which is big news in itself), new drivers for the SSD disks, and a bunch of new Exalytics admin and configuration utilities, as detailed in this update to the product docs. OBIEE 11.1.1.7 is a major new release of the product and gives us an updated look and feel, new visualisations, SmartView and so on, as well as an integrated install of Essbase, though you’ll need to do a fresh OBIEE install to get this. Within the new scripts there are also some interesting new “capacity on demand” features that allow the admin to turn capacity within the server on or off, but this is aimed at Oracle’s IaaS (Infrastructure as a Service) offering and not something you or I would do. Of course you can already license and provision smaller segments of the Exalytics server through OVM, Enterprise Manager and sub-capacity licensing, but this looks like something that could be tweaked on a month-by-month basis as demand for capacity rises and falls.

As well as updates to OBIEE, there are also new certifications for Endeca Information Discovery 3.0, Oracle BI Apps 11.1.1.7.1, EPM Suite 11.1.2.3 and Hyperion Planning 11.1.2.1, all of which apply to the earlier X2-4 version as well.

From a hardware perspective though, it’s the 2TB of RAM and the SSD disks that are the most exciting new features. The RAM upgrade means you can typically take TimesTen database sizes up to and in excess of 1TB, given that you’ll need to allocate a similar amount of RAM for temp space, but it’s the SSD disks that are most interesting. Something we’re finding from customer engagements on Exalytics is that Essbase ASO storage, rather than TimesTen, is becoming the predominant way we store detail-level sparse datasets once they get beyond a certain size, and having SSD disks available to store the ASO databases means their data retrieval times drop more or less to zero. This is particularly important when you consider that, even if you could somehow store the whole of an ASO database in-memory (using a RAM disk, for example), there’s actually more bandwidth between the SSD disks and the CPUs than between RAM and the CPUs, so the more SSD the better, even when there’s RAM to spare. More on this phenomenon at a later date, but as Ed Roske pointed out a while ago, this is a major plus point for Essbase users on Exalytics.

Full details on Oracle Exalytics In-Memory Machine X3-4 are on this Oracle data sheet, and you can read more about the platform in general on our Rittman Mead Oracle Exalytics homepage (including details of our free Exalytics test-drives and PoCs) and in Robin’s and my articles on the Rittman Mead blog.

Using Oracle TimesTen With Oracle BI Applications (Part 3)

In part 2 I introduced TimesTen Columnar Compression and explained that it creates column-value lookup tables so that a longer data value can be replaced with a short token. If we take a traditional fact table we have a set of (usually) integer keys and some measures (again, most likely numeric). Replacing a 2- or 4-byte dimension key with a token of the same size will not save any space; in fact we use more memory, as we also have to store the token lookup table that gets created as part of the compression, so I don’t think that compression is useful for most fact tables. On the other hand, with dimensions we do get repeating, longish columns: for example, many customers live in the city “San Francisco”, and we can replace the whole string with a single token of two or four bytes. Using column groups we could include STATE with CITY, reducing “San Francisco, California” to a single token. Column groups can also be used where we have multiple attributes at dimension grain. Taking customer again, we may have attributes for GENDER, MARITAL STATUS and HOME OWNER FLAG; for these example columns the product of the number of distinct values in each column is under 255, so we could compress all three columns down to a single-byte value in the dimension table. In my opinion, TimesTen compression is more suited to dimensions than facts.
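
To make this concrete, here’s a minimal sketch of what a compressed customer dimension might look like, assuming the Exalytics-only dictionary-compression syntax covered in part 2 (the table and column names here are illustrative, not from the BI Apps schema):

CREATE TABLE a_test.customer_d (
cust_key        TT_INTEGER,
city            VARCHAR2(50 BYTE),
state           VARCHAR2(30 BYTE),
gender          VARCHAR2(1 BYTE),
marital_status  VARCHAR2(1 BYTE),
home_owner_flag VARCHAR2(1 BYTE))
COMPRESS (
-- "San Francisco" + "California" collapse to a single token
(city, state) BY DICTIONARY,
-- product of distinct values is under 256, so a 1-byte token suffices
(gender, marital_status, home_owner_flag) BY DICTIONARY MAXVALUES = 255)
OPTIMIZED FOR READ;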

So far I have only looked at estimated sizes of empty tables. These are just estimates; real sizing and relative performance evaluation comes from loading representative data into actual TimesTen tables. Having created the tables, we can look at loading them. We have many options here:

  • ttLoadFromOracle - requires TimesTen 11.2.2.4 or later.
  • ttBulkCp - loads data from flat files (similar to SQL*Loader).
  • ttImportFromOracle - creates scripts to transfer data (11.2.2.4 or later) – I introduced this in part 2.
  • OBIA ETL target changes - supports non-Oracle sources.
  • ODI - supports non-Oracle sources.
  • Oracle GoldenGate - replicates changes on the source system (the OBIA data warehouse tables) to TimesTen.
  • Custom code that uses ODBC/JDBC.
  • SQL Developer - export as insert statements and execute in a TimesTen session; good for quick and dirty loads of low data volumes.
There is also the Exalytics OBI Summary Advisor, but that is really for the creation of aggregate tables based on OBI usage; Mark Rittman has blogged on this topic, most recently here. To keep things manageable in this blog post I will look at only two options, ttBulkCp and ttLoadFromOracle. This approach fits well with my plan to consider a single fact and its associated dimensions.

Bulk Copy

Old school but effective, ttBulkCp works well. We spool flat files from our source tables and then use the utility to load the data into TimesTen. If you use SQL Developer you will see that ‘ttbulkcp’ is one of the supported export formats. The great thing about using SQL Developer is that we can impose a where clause on the export to move only a subset of data – a great feature if we are only using a slice of data or developing some form of incremental load into TimesTen. As is often the case there are a few things to watch out for: date formats need to be in TimesTen format (or we need to supply a date format mask to the ttBulkCp command), and watch out for value limits on any of the native TimesTen data types.
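
By way of illustration, a load might look something like this – a sketch only, with hypothetical table and file names, using the basic import form of the command (see the ttBulkCp documentation for the date-format options mentioned above):

# import a SQL Developer 'ttbulkcp' format export into a pre-created table
ttBulkCp -i DSN=DW_PETE_TT A_TEST.W_CUSTOMER_D w_customer_d.dmp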

Load From Oracle

The new kid on the block, ttLoadFromOracle allows parallel data loading, which can be a big advantage when loading large tables. This feature was new in TimesTen 11.2.2.4 and currently does not support columnar compression; it is expected that compression support will arrive in a subsequent release. Unlike ttBulkCp, ttLoadFromOracle makes a direct connection to the source database and executes a SQL query to extract the data set required. The ability to define the SQL in the extract will be especially useful when we develop processes such as data fragmentation or incremental loading.
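
As a sketch (the built-in’s signature is as documented for 11.2.2.4, so verify it against your release), the load is a procedure called from a TimesTen session that has been told how to reach the Oracle database; the service name, credentials and query below are all placeholders:

[oracle@oracle2go ~]$ ttIsql "DSN=DW_PETE_TT;UID=A_TEST;PWD=tt_password;OraclePWD=ora_password;OracleNetServiceName=ORCL"
Command> -- final argument is the degree of parallelism for the load
Command> CALL ttLoadFromOracle('A_TEST', 'REVENUE_F', 'SELECT * FROM dw.revenue_f', 4);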

Data Load Tests

Firstly, let me emphasise that these are not benchmark timings – I am using artificial test data in a fairly small environment; however they give a general indication of what works and what does not.

In part 2 of this series I showed that using the same DDL in TimesTen as for the original OBIA data warehouse tables gave a massive increase in table size (RAM required, compared to disk utilised in Oracle). Changing the data types to TimesTen native types reduced the space requirement, but still came in larger than the original OBIA usage. For my dimension loads I extracted the source table DDL and modified it to use native TimesTen data types. As I am using TimesTen as storage for a data warehouse style database I added the OPTIMIZED FOR READ directive to my CREATE statements. I exported the data from the source tables using SQL Developer to create ttBulkCp files; no data filtering was used on extract. Each file was in turn loaded into a pre-created TimesTen table. Two additional dimensions were created to experiment with compression (compressed address and compressed product). I also tried a compressed fact table and, as expected, saw no space saving (in fact a small increase in memory usage); the compressed fact table was not used in further tests.

For the first set of tests I duplicated the whole fact table to TimesTen, again using ttBulkCp. After loading the tables with data, TimesTen object statistics were gathered – just as with its traditional cousin, the TimesTen cost based optimizer needs accurate table statistics to do a good job. I then ran a few queries at differing levels of aggregation over the OBIA Oracle 11g database, the TimesTen copy, and the TimesTen copy with compressed dimensions. For full tables with no aggregation the best query performance was seen with the original OBIA data warehouse; of the two TimesTen table structures, the best performance (nearly as good as the OBIA database) was seen with compressed dimensions.
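
For reference, statistics gathering in TimesTen is done with the ttOptUpdateStats built-in; a typical call after each load looks like the following (the second argument invalidates existing command plans so they are recompiled against the new statistics):

Command> CALL ttOptUpdateStats('A_TEST.REVENUE_F', 1);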

Both in terms of database size and query performance, it would appear that using TimesTen as a straight replacement for a detail-grain data warehouse is not viable. We do, however, have other options: we can exploit the fragmentation or federation features of OBI to utilize TimesTen alongside OBIA. We can follow the approach of the Exalytics Summary Advisor and build aggregated tables in TimesTen, or we can use TimesTen to store a narrow, time-based slice of data at the same level of grain as the original fact table. In this blog posting I am not going to map my test tables into OBI; instead I am going to look at things from the SQL side.

For this section I will be using ttLoadFromOracle and building my aggregate or time-slice as a query over the source database; in the real world I would look at some form of incremental load rather than a full refresh. Building a time-slice table is perhaps the simplest extract: we add a where clause that limits the select to a subset of rows – in this case I am extracting a particular month from my fact table, as sketched below. The resulting TimesTen table is of course much smaller than before and queries very quickly. This approach will work well if a large proportion of user queries look at the current month, but for queries over longer time frames OBI will potentially need to combine results from the OBIA database (for historic data) and TimesTen (for current data); this combining happens in the BI Server.
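
The time-slice load is just the same built-in call with a where clause pushed into the extract query – again a sketch, with an illustrative month key value:

Command> CALL ttLoadFromOracle('A_TEST', 'REVENUE_F_CUR_MTH',
'SELECT * FROM dw.revenue_f WHERE bill_mth_key = 201306', 4);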

The final tests were to create summary tables at a higher level of aggregation; again I used SQL to build the aggregates as part of the ttLoadFromOracle call, as sketched below. As before, the aggregates are much smaller than the original source and queries execute very quickly, as much less data needs to be scanned.
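
Here the aggregation is pushed into the extract query itself; the measure and key names follow the hypothetical REVENUE_F table from part 2:

Command> CALL ttLoadFromOracle('A_TEST', 'REVENUE_A_MTH',
'SELECT bill_mth_key, prod_key, SUM(revenue) revenue, SUM(units) units
FROM dw.revenue_f GROUP BY bill_mth_key, prod_key', 4);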

In conclusion, I would say that the use of TimesTen is viable with the BI Apps providing you only work with data subsets. The choice between using a recent time-slice and an aggregate table will depend on your reporting needs: if most queries require base data (that is, more operational reporting) it is probably best to use a recent time-slice in TimesTen; on the other hand, if people mainly look at management dashboards then targeted summary aggregates are the way to go. However, any aggregate in TimesTen must be a lot smaller than its source table, or else there is no advantage in using the aggregate.

Using the Exalytics Summary Advisor and Oracle BI Apps 7.9.6.4

Peter Scott is currently mid-way through a series of posts on our blog on replicating parts of the BI Apps data warehouse into TimesTen, as a potential way of running all or part of the BI Apps “in-memory”. We’ll have to wait and see how Pete’s experiments work out, but the thing that prompted that study into replicating whole parts of the BI Apps data warehouse into TimesTen was the fact that, when we last checked, the Exalytics Summary Advisor “blew up” when we tried to run it against the BI Apps repository; using OBIEE 11.1.1.6 (the initial Exalytics-compatible release of OBIEE 11g) the Summary Advisor just wouldn’t run, because the BI Apps repository was too complex, and broke too many Exalytics RPD-validity rules, for the aggregate recommendation and generation process to complete.

But that was back with OBIEE 11.1.1.6, and we’re now up to the 11.1.1.6.11 patchset for this particular release, so whilst Pete checks out the “brute force” method I thought it’d be worth revisiting the Summary Advisor, with my BI Apps installation bumped up to version 7.9.6.4 and the Exalytics environment patched up to 11.1.1.6.9, the latest patch release we’ve applied to that environment. To get ready for the test, I’d loaded a minimal amount of EBS 12.1.3 source data into the BI Apps data warehouse tables, mostly around the Accounts Receivable and Sales subject areas, giving us a set of dashboards like those in the screenshot below to use for testing.

[Screenshot: BI Apps dashboards over the EBS sample data]

I then created a relatively small TimesTen database for my initial testing (2GB perm size, 1GB temp size) as our test & dev environment is only a subset of the whole Exalytics environment; also, for the initial testing all of the timeout and other settings were left at their default value, with the idea being I’d play around with them as testing proceeded.

Running the Summary Advisor against a BI Apps repository is the same as against any other RPD; first select Tools > Utilities, then select Oracle BI Summary Advisor from the list of available BI Administration tool utilities (note that the Summary Advisor menu option only appears if you’re working on an Exalytics server).

[Screenshot: launching Oracle BI Summary Advisor from the BI Administration tool]

Then you go through the process of narrowing down the set of table sources that the Summary Advisor’s aggregate recommendation process will consider, first by selecting the date range for user queries, and then by setting a threshold for the accumulated query time on a particular logical table source. For my example, I left the start and end dates open, but discarded any logical table sources that had been queried for less than 15 seconds in total, thereby removing those table sources that weren’t really used that much in queries.

[Screenshot: Summary Advisor logical table source selection]

Next I selected the TimesTen database as the target for the recommended aggregates, fine-tuned a few further settings around when the Summary Advisor should stop trying out new aggregate options, and then finally reviewed the aggregates that the Summary Advisor proposed, based on usage patterns recorded in the various usage tracking and summary statistics tables maintained by Exalytics. In this case, the Summary Advisor recommended five aggregates, each of which would involve a summary fact table and one or more shortened dimension tables.

[Screenshot: Summary Advisor recommended aggregates]

I left all of the candidate aggregate table options selected, and then generated the script file that the BI Server uses to create the aggregate tables, load them up with summary data and then register them in the BI Apps repository. In this instance, the script file contained logical SQL statements to first drop some existing aggregate tables, then create the aggregates described above, as shown below.


delete aggregates
"tt_aggr_store".."EXALYTICS"."ag_320723981",
"tt_aggr_store".."EXALYTICS"."ag_1385967556";

create aggregates

"ag_3423183827"
for "Core"."Fact - Fins - GL Revenue Posted"("Revenue","P&L Amount","Credit Amount","Debit Amount","Credit Local Amount","Debit Local Amount","Credit Account Amount","Debit Account Amount","Revenue Document Amount","Revenue Local Amount","Revenue Amount","Revenue Transaction Quantity","Period Start Date","Period End Date")
at levels ("Core"."Ledger"."Detail", "Core"."Date - Fiscal Calendar"."Fiscal Year")
using connection pool "tt_aggr_store"."Connection Pool"
in "tt_aggr_store".."EXALYTICS",

"ag_3448667885"
for "Core"."Fact - Fins - GL Revenue Posted"("Revenue","P&L Amount","Credit Amount","Debit Amount","Credit Local Amount","Debit Local Amount","Credit Account Amount","Debit Account Amount","Revenue Document Amount","Revenue Local Amount","Revenue Amount","Revenue Transaction Quantity","Period Start Date","Period End Date")
at levels ("Core"."Profit Center"."Profit Center Detail", "Core"."Ledger"."Detail", "Core"."Date - Fiscal Calendar"."Fiscal Period")
using connection pool "tt_aggr_store"."Connection Pool"
in "tt_aggr_store".."EXALYTICS",

"ag_3350139264"
for "Core"."Fact - Fins - GL Revenue Posted"("Revenue","P&L Amount","Credit Amount","Debit Amount","Credit Local Amount","Debit Local Amount","Credit Account Amount","Debit Account Amount","Revenue Document Amount","Revenue Local Amount","Revenue Amount","Revenue Transaction Quantity","Period Start Date","Period End Date")
at levels ("Core"."Date - Fiscal Calendar"."Fiscal Year")
using connection pool "tt_aggr_store"."Connection Pool"
in "tt_aggr_store".."EXALYTICS",

"ag_630606962"
for "Core"."Fact - Fins - GL Revenue Posted"("Revenue","P&L Amount","Credit Amount","Debit Amount","Credit Local Amount","Debit Local Amount","Credit Account Amount","Debit Account Amount","Revenue Document Amount","Revenue Local Amount","Revenue Amount","Revenue Transaction Quantity","Period Start Date","Period End Date")
at levels ("Core"."Profit Center"."Profit Center Detail", "Core"."Ledger"."Detail", "Core"."Date - Fiscal Calendar"."Fiscal Day Detail")
using connection pool "tt_aggr_store"."Connection Pool"
in "tt_aggr_store".."EXALYTICS",

"ag_4212509594"
for "Core"."Fact - Fins - GL Revenue Posted"("Revenue","P&L Amount","Credit Amount","Debit Amount","Credit Local Amount","Debit Local Amount","Credit Account Amount","Debit Account Amount","Revenue Document Amount","Revenue Local Amount","Revenue Amount","Revenue Transaction Quantity","Period Start Date","Period End Date")
at levels ("Core"."Profit Center"."Profit Center Detail", "Core"."Ledger"."Detail", "Core"."Date - Fiscal Calendar"."Fiscal Year")
using connection pool "tt_aggr_store"."Connection Pool"
in "tt_aggr_store".."EXALYTICS";

So now we’ve got the aggregate creation script, it’s a case of running it through the nqcmd.exe BI Server ODBC command-line client and waiting for the results … which, in the first instance, failed. Looking through the various log files, it turned out the issue was the W_MCAL_QTR_D table, which contained two rows with the same ROW_WID value, and this caused the aggregate creation process to fail. It’s an issue you’re likely to find a few times when using the Summary Advisor against the BI Apps RPD: the RPD physical layer defines this column as a unique key, but in reality there’s no unique key constraint on the underlying table, and the load process actually loaded two rows with the same key value into the table. So expect to do a bit of data cleansing and validation before letting the Summary Advisor loose on your BI Apps repository, and I’d imagine issues like this will get sorted out in future releases of the BI Apps so that Exalytics “just works” without having to fix the data like this.
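
For reference, the script is run with a command along the lines of the one below (DSN, credentials and file name are placeholders), and duplicates like the one that tripped me up can be found with a simple query against the offending warehouse table:

nqcmd -d AnalyticsWeb -u weblogic -p password -s create_aggregates.sql

-- find ROW_WID values loaded more than once into W_MCAL_QTR_D
SELECT row_wid, COUNT(*)
FROM w_mcal_qtr_d
GROUP BY row_wid
HAVING COUNT(*) > 1;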

Once the underlying data was fixed up though, the Summary Advisor script ran as expected, giving us the output below (I’d added an extra “delete aggregates” command at the start of the file to clear up some previous attempts, hence the three commands that completed successfully).

[Screenshot: nqcmd output from the aggregate creation script]

Moving back over to the BI Administration tool I could see the new TimesTen in-memory tables registered in the Physical layer, and over in the Business Model and Mapping Layer I could see those tables mapped in as new logical table sources.

[Screenshot: the TimesTen aggregate tables registered in the RPD]

Running a quick “View Data” on the new TimesTen tables showed they contained the aggregated EBS data as expected, now held in TimesTen for Exalytics in-memory tables.
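
A quick alternative check is to query one of the aggregates directly from a ttIsql session; the table name below is one of the aggregates created by my script:

Command> SELECT COUNT(*) FROM exalytics.ag_3350139264;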

[Screenshot: View Data on one of the new TimesTen aggregate tables]

So, after a bit of data cleansing and a few changes to the TimesTen settings to increase the timeout for user queries, it now works – which also shows the value of re-testing something that didn’t work a few patch releases ago. I suspect that, in practice, running this against a fully-loaded BI Apps environment will uncover a few more data issues, so my next step is to pull together a larger BI Apps data warehouse dataset and test it out against a number of modules and a wider range of query patterns. But – encouraging so far, and it’ll be interesting to compare the results with the findings from Pete’s experiments.

Using Oracle TimesTen With Oracle BI Applications (Part 2)

Last time I posted an introductory blog to this series of posts on using Oracle TimesTen with Oracle BI Applications (OBIA). Today, I am going to look at some of the OBIA structures and then start to explore the ways we can use TimesTen with OBIA. As TimesTen may be a new topic for some readers, I will also talk about some of the features of TimesTen and some of the differences from Oracle 11g that we need to allow for in building a solution that incorporates TimesTen.

As I described, OBIA consists of three principal pieces: an ETL system to extract source data, stage and transform it, and finally publish it to a data warehouse; a database to host both the staged data and the target data warehouse tables; and OBI to provide dashboards and other analysis. During the data load the ETL extracts the proprietary-formatted data of the source systems (EBS, JD Edwards, SAP, whatever) and transforms it into a common model that is usable by the reporting layer. The OBIA product development team have done most of the legwork in mapping the complex ERP, HR and CRM application schemas through to the reporting database; however some work may be needed to deal with source customisations such as the use of flexfields. From our point of view we are only concerned with two areas: the OBIA database’s warehouse layer tables, and how they are mapped into the OBI physical and logical data models. How data is extracted and staged is not so important for our TimesTen work.

In OBIA we can readily find the tables of interest: the table names all begin with ‘W_’ for warehouse, have a content-related name such as INVOICE, and end with a suffix that indicates the usage. There are three suffixes that we need to consider for loading into Oracle TimesTen:

  • _F for the fact tables
  • _D for the dimension tables
  • _A for aggregated fact tables

[Image: OBIA warehouse table naming conventions]

Looking at the structure of a fact table, we can see that we have columns for the measures, plus key columns used to join to the related dimension tables’ keys – in other words, a star schema. On the face of it, it sounds like we can take the simple approach and copy the required tables to TimesTen, or perhaps amend the ETL process to use TimesTen as the target database. Remember though that TimesTen is an in-memory database, and every object transferred to the database needs to be loaded into the TimesTen server’s RAM; we are thus unlikely to have enough RAM to store all of the OBIA warehouse tables. We must also remember that not all of the RAM on the server can be dedicated to TimesTen table storage. In addition to the RAM needed by the OS, we may well need to dedicate memory to other processes running on the same platform – in the case of Exalytics this could well be both Essbase and OBI. Even then, not all of the memory allocated to TimesTen can be used for table storage: there is a significant amount needed for working space (similar to the temporary tablespace in Oracle 11g), and space may be needed for other TimesTen database structures such as indexes, materialized views and PL/SQL packages.

Building our TimesTen Database

Having downloaded and installed the Oracle TimesTen software, we need to create a TimesTen database server. This process is well described in the quickstart.html document in the TimesTen install directory. We need to create both a TimesTen server and a TimesTen client; both of these are configured as ODBC data sources – on Linux we have to edit odbc.ini files, but on Microsoft Windows we can use the Data Sources applet and create our connections in the GUI. Once the database is running we can connect to it using ODBC or JDBC and start to create tables. I tend to use SQL Developer for this as I am used to the tool; if you like using the command line there is a program called ttIsql which looks quite similar to SQL*Plus. We will need to use ttIsql anyway, to physically instantiate the database for the first time.

My TimesTen server is called DW_PETE_TT. As it is running in a small testing environment I have kept the memory allocations down to 1.5GB for both data and TEMP; a real implementation would have properly sized allocations. I have also created a user (A_TEST) and given the user the necessary grants to create sessions and tables, as sketched below.
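
For anyone following along, the server DSN definition and user setup look something like the following – a sketch with illustrative paths and passwords (PermSize and TempSize are specified in MB, so 1536 gives the 1.5GB mentioned above):

# sys.odbc.ini entry for the server data store
[DW_PETE_TT]
Driver=/u01/app/oracle/TimesTen/tt1122/lib/libtten.so
DataStore=/u01/data/DW_PETE_TT
DatabaseCharacterSet=AL32UTF8
PermSize=1536
TempSize=1536

Then, from a ttIsql session connected as the instance administrator:

Command> CREATE USER a_test IDENTIFIED BY password01;
Command> GRANT CREATE SESSION, CREATE TABLE TO a_test;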

As I have already mentioned, Oracle TimesTen is a relational database so the basic CREATE TABLE syntax (less any storage specific features such as assigning tablespaces) will work. So by extracting the DDL from the OBIA W_xxxx_F and W_yyyy_D tables we can create empty tables of the same structure in TimesTen. We can then use the TimesTen ttsize utility to estimate the size of the table based on structure and expected row counts. Working with the REVENUE_F table in my sample source schema I have 1,000,000 rows. In Oracle 11gR2 this table occupies about 140MB of disk storage – the same table is estimated to be around 500MB when created like for like in Oracle TimesTen – that’s about 3.5 times larger.

[Image: ttsize output comparing table size estimates]

Being this much larger is clearly bad news: we may well struggle to fit our database into the available RAM (don’t forget we need TEMP space too); the initial load of data from disk into the database will take longer, as more bytes need to be read from disk; and our queries will need to trawl through a larger amount of memory to find their results. We can, however, improve things. If we are on Exalytics we can use columnar compression, and even if not, we can modify our table structures to use native TimesTen datatypes to save space. The first thing to tackle is the datatype issue. We can do this manually, by changing our create table statement, or we can use a new Oracle TimesTen utility (ttImportFromOracle) that may be obtainable from Oracle’s TimesTen product team. This utility inspects the source table and its content and generates SQL scripts to: create the TimesTen table with the appropriate column data types (and, on Exalytics, columnar compression); extract the data from the source and load the target; and finally update the optimiser stats.

Let’s revise the structure of the TimesTen table to use native numeric types – I’ll use TT_INTEGER for the key columns and BINARY_FLOAT for the numeric measures. Doing this brings the size estimate down to around 228MB (see the ttsize output below) – still larger than the original table in Oracle, but less than half the size of our original straight copy of the structure. There is further scope for size reduction through optimising the VARCHAR2 columns.

-- Using Oracle Datatypes
create table A_TEST.REVENUE_F (
SHIPTO_ADDR_KEY NUMBER,
OFFICE_KEY NUMBER,
EMPL_KEY NUMBER,
PROD_KEY NUMBER,
ORDER_KEY NUMBER,
REVENUE NUMBER,
UNITS NUMBER,
DISCNT_VALUE NUMBER,
BILL_MTH_KEY NUMBER,
BILL_QTR_KEY NUMBER,
BILL_DAY_DT DATE,
ORDER_DAY_DT DATE,
PAID_DAY_DT DATE,
DISCNT_RATE NUMBER,
ORDER_STATUS VARCHAR2(20 BYTE) INLINE,
CURRENCY VARCHAR2(3 BYTE) INLINE,
ORDER_TYPE VARCHAR2(20 BYTE) INLINE,
CUST_KEY NUMBER,
SHIP_DAY_DT DATE,
COST_FIXED NUMBER,
COST_VARIABLE NUMBER,
SRC_ORDER_NUMBER VARCHAR2(20 BYTE) INLINE,
ORDER_NUMBER NUMBER);
-- using TimesTen Native Datatypes
CREATE TABLE "REVENUE_F_NATIVE"
( "SHIPTO_ADDR_KEY" TT_INTEGER,
"OFFICE_KEY" TT_INTEGER,
"EMPL_KEY" TT_INTEGER,
"PROD_KEY" TT_INTEGER,
"ORDER_KEY" TT_INTEGER,
"REVENUE" BINARY_FLOAT,
"UNITS" BINARY_FLOAT,
"DISCNT_VALUE" BINARY_FLOAT,
"BILL_MTH_KEY" TT_INTEGER,
"BILL_QTR_KEY" TT_INTEGER,
"BILL_DAY_DT" DATE,
"ORDER_DAY_DT" DATE,
"PAID_DAY_DT" DATE,
"DISCNT_RATE" BINARY_FLOAT,
"ORDER_STATUS" VARCHAR2(20 BYTE),
"CURRENCY" VARCHAR2(3 BYTE),
"ORDER_TYPE" VARCHAR2(20 BYTE),
"CUST_KEY" TT_INTEGER,
"SHIP_DAY_DT" DATE,
"COST_FIXED" BINARY_FLOAT,
"COST_VARIABLE" BINARY_FLOAT,
"SRC_ORDER_NUMBER" VARCHAR2(20 BYTE),
"ORDER_NUMBER" TT_INTEGER
) ;
/* Size comparison
[oracle@oracle2go ~]$ ttsize -tbl A_TEST.REVENUE_F -rows 1000000 DW_PETE_TT
Rows = 1000000
Total in-line row bytes = 500365425
Total = 500365425
[oracle@oracle2go ~]$ ttsize -tbl A_TEST.REVENUE_F_NATIVE -rows 1000000 DW_PETE_TT
Rows = 1000000
Total in-line row bytes = 228313201
*/

On Exalytics we have access to columnar compression. TimesTen compression works by replacing data values with tokens held in a dictionary. When we create a compressed table we specify a column (or group of columns) to compress and the token size, based on the number of distinct values: 1-byte tokens support up to 256 distinct values, 2-byte tokens up to 65,536, and the largest token is 4 bytes. It is obvious that meaningful compression is only possible for values longer than the replacement token. We can repeat the compression process on other columns or column groups; each compressed column or column group has its own token dictionary table.

Next time I will look at moving data into our TimesTen tables and at techniques we can use to boost query performance.

Using Oracle TimesTen With Oracle BI Applications (Part 1)

Oracle BI Applications (OBIA) is a product for delivering industry best-practice dashboards and analysis over enterprise sources such as ERP, HR, SCM and CRM systems. OBIA includes the necessary functionality to extract data from the source systems, transform and load it to the target database’s standardised data model and to provide a web-based analytic platform over the target database. The ETL functionality is (depending on OBIA version) delivered by either Informatica or Oracle Data Integrator (ODI). The reporting and dashboards are delivered through OBI.

Simplistically, Oracle BI Apps extracts data from the source system(s) and loads it into a custom data warehouse data model; this is then used as the data source for the pre-built OBI analyses and dashboards.

[Diagram: the basic OBIA architecture]

Over my next few blog postings I will be exploring the use of Oracle TimesTen as additional data storage for OBIA reporting. If you are not already aware, Oracle TimesTen is an in-memory relational database that finds use as an in-memory caching technology for Oracle databases and as a standalone in-memory database, especially for rapidly-updated transactional systems. It is also a key part of the Exalytics software platform (along with Oracle Business Intelligence and Essbase). For these articles I will be using the Exalytics edition of TimesTen, which gives access to some of the special data warehouse-style performance features that are not included with the standard TimesTen license. Some of the techniques I will cover are equally applicable to the standard TimesTen product and do not require the special Exalytics features. Pictorially, I will be using Oracle TimesTen as a second data source to OBI.

[Diagram: Oracle TimesTen added as a second OBI data source]

To simplify these blog postings I will consider a single fact table and its associated dimensions and use hand-coded TimesTen methods for data loading. I am writing my own data load code as I want to compare different loading techniques. In a production scenario I might consider using ODI or the code generated by the Exalytics BI Summary Advisor – both topics for another time.

Before I get into detail on loading TimesTen tables, it may be useful to mention what Oracle TimesTen actually provides. TimesTen is a relational database that uses the server’s RAM rather than disk to hold its data. Although the database is “in memory” it does have a disk footprint, in that the database is read from disk on startup and written back to disk when it is unloaded; in addition there are checkpoint and transaction log files for use in database recovery. Just as with Oracle 11g, TimesTen supports all the usual DDL of a relational database: CREATE TABLE, CREATE VIEW, CREATE INDEX, CREATE PACKAGE and so on, with packages written in PL/SQL. The Exalytics version of TimesTen adds two additional features we may find useful in building aggregate tables: in-memory columnar compression and the OLAP grouping operators (ROLLUP, CUBE and GROUPING SETS). Query plans are generated by the TimesTen cost based optimiser, and we are able to influence them through more accurate table statistics, optimal indexing and the use of database hints.
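
As a quick illustration of the grouping operators, a single query can return several aggregation levels in one pass; this sketch uses the sample REVENUE_F fact table that features in this series:

SELECT bill_mth_key, prod_key, SUM(revenue) AS revenue
FROM a_test.revenue_f
GROUP BY ROLLUP (bill_mth_key, prod_key);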

I will look at some of these features in more detail as I develop the code in my next two blog posts.