
Oracle Openworld 2013 Day 1: User Group Forum, News on Oracle Database In-Memory Option

The Sunday before Oracle Openworld proper is “User Group Forum Sunday”, with each of the major user groups and councils having dedicated tracks for topics such as BI&DW, Fusion Development, Database and so on. Stewart, Venkat and I were honoured to be presenting on behalf of ODTUG, IOUG and APAC, covering topics around the new 11g release of the BI Applications, Hyperion/EPM Suite, and agile development using OBIEE, ODI and GoldenGate. Links to the presentation PDFs from each of our sessions are listed below:

All of the sessions drew a good crowd, and I was especially pleased to see the number of people that came along to the BI Apps 11.1.1.7.1 sessions, and that there were a few early adopters in the audience who’d either completed their initial implementations or had carried out pilot or PoC exercises. Feedback from those attendees was as I’d expected: some initial early-adopter issues, but generally positive comments on the simplified architecture and the use of ODI. Stewart’s session on the data integration aspects of this new release included content on its new GoldenGate integration, and the new Source-Dependent Store ODI concept that it supports, which again went down well with an audience looking for more technical details on how this new release works.

After the user group sessions finished, it was time to go over to Moscone North for Larry Ellison’s opening keynote, where three new products were announced. First up was the new In-Memory option for the Oracle Database, which adds an in-memory, column-store capability to Oracle databases on all platforms, not just Exadata. Slated for the upcoming 12.1.2 release, this new feature will provide a column-store capability alongside the existing on-disk row-store, with the column-store being used for DW-style queries whilst the row-store will continue to be used for OLTP.

The way this in-memory column store will work is as follows:

  • The DBA will enable the in-memory feature by setting a database parameter along the lines of “inmemory_size = XX GB”, with the memory then being allocated in the database’s SGA (System Global Area, one of the shared areas in the overall database memory allocation)
  • Tables, partitions or sets of columns will be enabled for in-memory storage by an “alter table … inmemory” DDL command (see the sketch after this list)
  • Existing query indexes on the source tables can then be dropped
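Putting those steps together, the DDL would presumably look something like the sketch below. Treat the parameter and clause names as assumptions based on the keynote, since the exact syntax wasn’t final at the time of the announcement:

-- assumption: names as described in the keynote; check the docs once 12.1.2 ships
alter system set inmemory_size = 16G scope=spfile;  -- allocate the column-store area within the SGA
alter table sales inmemory;                         -- enable a whole table for the in-memory store
alter table sales modify partition p2013 inmemory;  -- or just a single partition
drop index sales_query_ix;                          -- existing query indexes can then be dropped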

The database will then take care of copying these tables, columns or partitions into the in-memory column-store area, and then refreshing them on a regular basis, so that the database will have both row-store and column-store versions of the tables available at the same time.

Oracle’s assertion is that the overhead in maintaining both the row- and column-store versions of the tables will be balanced out by the removal of the need to maintain query indexes on the source tables, and performance improvements of 100x to 1000x were quoted for DW-style queries, and 2x for OLTP-style queries. Unlike the Hybrid Columnar Compression feature announced a couple of years ago at Openworld, none of this is Exadata-specific, but it will be an option for the Enterprise Edition of the database, and it will require the 12.1.2 release, so you’ll need to budget for it and you’ll need to be on the most recent release to make use of it.

Other than the in-memory option, the other two product announcements in the keynote were:

  • The M6-32 “Big Memory Machine”, with 32TB of DRAM and a SPARC M6 chip architecture – positioned as the ideal server for the in-memory option
  • The “Oracle Database Backup, Logging and Recovery” appliance, a server designed to receive and then store incremental database backups for private and public clouds, and then restore those databases as necessary – basically a backup server optimised for database backup and recovery.

So that was it for today – more news tomorrow once the main conference sessions and keynotes start.

Using Non-Oracle, and Cloud-Hosted Databases with OBIEE 11g

The vast majority of our customers use Oracle Database as the underlying data source for OBIEE 11g, sometimes on Exadata but almost always the Enterprise Edition of the database. OBIEE typically sells best into customers that already extensively use Oracle products, and it’s often bundled in with a wider technology or apps deal including database and middleware products. But for some customers, the cost of the Oracle Database as a data source can be prohibitive, and in many cases they’re also looking at purchasing and deployment options that don’t involve upfront perpetual license deals and running the software on-premise. One of our longest-standing customers came to us with just such a scenario a few weeks ago, so I thought it’d be interesting to take a look at what alternatives there are to licensing Oracle Database Enterprise Edition, and whether running these alternatives in the cloud is a practical option.

To set the context: imagine that you wanted to deploy a small-to-medium sized data warehouse on two servers:

  • A “production” server with 64GB RAM, 4 x 4-core Intel x86_64 CPUs, running Oracle Linux 6 (for licensing purposes, 8 processors)
  • A “dev/test” server with the same spec, onto which we’d install two separate databases

For now we’ll ignore the ETL element, but if we were licensing Oracle Data Integrator Enterprise Edition, we’d need to license it for these two servers too at around $23k/processor.

So what are the alternatives to Oracle Database Enterprise Edition as a data source? To my mind, there are three main options:

  • Stay with Oracle, but switch down to one of the cheaper editions of the database
  • Move to a non-Oracle RDBMS, either a general-purpose one such as PostgreSQL or one of the analytical, column-store databases
  • Run the database in the cloud, paying by the hour rather than buying perpetual licenses up-front

Let’s take a look at the Oracle options first of all. Based on 8 processors (4 x 4-core Intel x86_64 CPUs, 0.5 core multiplication factor), the cost of licensing Oracle DB EE plus commonly-used database options, based on the current Tech Price List, would be as follows (at list price):

  1. Oracle Database Enterprise Edition ($47.5k x 8 = $380k)
  2. Partitioning Option ($11.5k x 8 = $92k)
  3. EM Tuning Pack ($5k x 8 = $40k)
  4. EM Diagnostics Pack ($5k x 8 = $40k)

Giving us a total of $552k for 8 processors (covering both of our servers). Add onto that the standard 22% support and software updates, and you’re looking at an initial payment of $673k for the perpetual license, and an ongoing $121k/year for annual maintenance. Factor in a DBA and the hardware as well, and you’ll probably not get much change out of $800k in year one, and $250k in the years afterwards.

You could switch down to one of the cheaper editions of Oracle Database, which also don’t come with the add-on packs; that could take you down to $5.8k/processor for SE1 or $17.5k for SE, but you’ll then have to do without most of the high-end data warehousing features in the Oracle database, and work within socket restrictions when it comes to SE1. Now of course you get what you pay for with Oracle Database, including potentially lower ongoing admin costs because much of the tuning and optimisation work is either unnecessary or automated; but if price is the issue and the alternative is a cut-down version of Oracle, what else is on offer that works with OBIEE?

If you start to look around at the alternatives to Oracle, and assuming you’re discounting products like IBM DB/2 or Microsoft SQL Server, you’ll find there are two main “conventional” types of RDBMS that you could use with OBIEE:

  1. General-purpose RDBMSs, such as MySQL and PostgreSQL
  2. “Analytical” databases, typically in-memory, column-store and shared-nothing, such as Greenplum and Vertica

If you look under the covers with the analytical databases, in most cases you’ll find that the core RDBMS technology they use is in fact PostgreSQL, with the (open-source) code then forked and adapted to work with the particular technology angle each vendor chooses to emphasise. Most developers tend to think of PostgreSQL as a more “enterprise-scale” free RDBMS than MySQL, so given that it also provides the core DB technology in the other products, how might this work with OBIEE?

As a bit of background, PostgreSQL is an open-source database that’s of a similar vintage to Oracle Database, and you can download it for free or install it as part of most Linux distributions. Many developers swear by PostgreSQL itself as being all they need for data warehouse scenarios, but a couple of vendors have taken the PostgreSQL code and commercialised it, adding their own management tools and support options around the core open-source code. The most famous of these is EnterpriseDB, who package up PostgreSQL into a number of options, one of which is called Postgres Plus Advanced Server. If you’re looking for an open-source based, low-cost but mature RDBMS engine for your database, this is more or less the flagship option, so I set about creating a new OBIEE 11.1.1.7 VM for testing purposes, with the aim of using Postgres Plus Advanced Server as my data source.

In fact, the first issue you hit when trying to go “non-Oracle” for the database is that you need an Oracle, SQL Server or IBM DB/2 database to install the RCU schemas into. PostgreSQL isn’t an option, but in reality most customers using OBIEE will have a small Oracle database available somewhere, so I cheated here and installed the RCU schemas into a spare Oracle database, and then went on with the install.

Installing Postgres Plus Advanced Server itself was pretty straightforward, with the installer separately downloading the core PostgreSQL files and the Apache HTTP server used for its web-based admin tools. On Windows, once you’ve installed the software there’s a new Start Menu entry with the EnterpriseDB-specific (value-add) tools, including an Enterprise Manager-style management console called pgAdmin.


Opening up pgAdmin gives you an environment that’s similar to the one you’d find in tools such as TOAD, SQL Developer and the old Java thick-client Enterprise Manager, and using the tool I was able to connect to my PostgreSQL database, create a few tables, and run some test SQL queries.
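The test tables themselves were nothing special; a quick sketch of the sort of thing I ran (table and column names made up for illustration), which works the same from pgAdmin’s query tool or psql:

create table sales (
  sale_id    serial primary key,
  prod_name  varchar(50),
  sale_date  date,
  amount     numeric(10,2)
);

insert into sales (prod_name, sale_date, amount)
values ('Widget', date '2013-09-01', 99.95);

select prod_name, sum(amount) as total_sales
from sales
group by prod_name;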


pgAdmin is pretty basic stuff compared to Oracle’s Enterprise Manager product, but it may be all you need if you’re looking for a tactical, or departmental, solution. EnterpriseDB price Postgres Plus Advanced Server at just under $5k/processor/year for the core database (one processor = one socket, however many cores), with their Postgres Enterprise Manager product costing $1,500/year for two monitored hosts, and their “Solution Pack” another $5k/year, so based on the same server sizing as we used for the Oracle costings, the price of an EnterpriseDB solution looks a bit like this:

  1. Postgres Plus Advanced Server ($5k x 8 = $40k)
  2. Postgres Enterprise Manager ($1.5k)
  3. Solution Pack ($5k)

Coming out at around $47k/year for the two servers. Now the Oracle licenses were perpetual, not annual, but even so, the total cost of the on-premise PostgreSQL solution is about a third of just the annual maintenance cost of the Oracle software, let alone the software license cost, so it’s considerably cheaper. So what don’t you get with PostgreSQL – either core, or commercialised via EnterpriseDB – that you get with Oracle?

Now as a disclaimer, I’m mainly an Oracle person and can’t really speak with any great depth on PostgreSQL, but the major things that are missing in PostgreSQL compared to Oracle Database 11g are:

  • Proper, in-built support for parallel query – PostgreSQL has an open-source GridSQL project but this looks more like a federated, shared-nothing solution rather than PQ within a single server
  • Built-in ETL in the form of Oracle Warehouse Builder
  • Anything like Oracle’s built-in MOLAP server, Advanced Analytics and the like
  • All the DR options, security options and so forth

And no doubt the core DW features – bitmap indexing, cost-based optimiser, support for VLDBs and so on – are not nearly as developed, and easy to manage, as with the Oracle database. But – given that not all customers need these features, and that many customers even on 10g and 11g are still using the database as if it were Oracle 7 or 8i – lack of such features may not be a show-stopper if money is tight.

Now so far we’ve been talking about running PostgreSQL “on-premise”, more or less as a direct substitute for Oracle Database; however, it’s also possible to run Postgres Plus Advanced Server on Amazon’s AWS cloud platform, with no local install of the software, simplified administration, and access to the same core Postgres Plus Advanced Server features billed at an hourly rate.

EnterpriseDB and AWS provide a free 24-hour trial, so I set up another database, this time on the Amazon cloud, provisioned it and created the same set of tables.


Connecting to the cloud-based Postgres Plus database was the same as connecting to a regular, on-premise one, and in fact this is more or less just commercial PostgreSQL on Amazon AWS. Instead of paying for software and support up-front as with regular Oracle or on-premise Postgres Plus Advanced Server, you’re instead charged, AWS-style, an hourly rate based on the size of the instance – one comparable to one of the Oracle servers mentioned beforehand comes in at $6.48/hour or $56k/year, or $112k for the two servers we’d need, still overall less than even the Oracle software maintenance charge but covering hosting as well (though you’ll need to pay Amazon storage and bandwidth charges on top, too).

Connecting OBIEE 11g to these two databases is also fairly straightforward – the DataDirect Postgres ODBC drivers that come with OBIEE work fine, with the connection process involving creating an ODBC DSN for the Postgres database, either on-premise or on AWS, then importing the tables into the BI Repository and modelling them as normal. Connecting to the AWS cloud as a data source worked fine and wasn’t laggy, but you’d obviously need to test it at greater volume and with a realistic network connection. But – it works and seems to perform OK, at least with three rows…
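For reference, the DSN on Linux is just an entry in the odbc.ini file used by the BI Server; a minimal sketch is below, assuming the bundled DataDirect PostgreSQL driver (the driver path, host and database names are placeholders you’d swap for your own):

[PostgresDW]
Driver=<path to the DataDirect PostgreSQL driver library shipped with OBIEE>
Description=Postgres Plus Advanced Server DW
HostName=ec2-xx-xx-xx-xx.compute-1.amazonaws.com
PortNumber=5432
Database=dw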


So something like Postgres Plus Advanced Server in the cloud would probably appeal to a customer with a temporary requirement, a departmental budget, or an infrastructure that’s already in the cloud, most probably on Amazon AWS. But you can also run Oracle Database on Amazon’s AWS cloud, through a partnership between Oracle and Amazon for Amazon’s RDS (“Relational Database Service”). In this case, you go to the Amazon AWS website, sign up for the service, put in your credit card details and within a few minutes you’ve got a provisioned, Standard Edition One database ready to connect to via all the usual tools. And what this is, again, is the standard Oracle Database (or with EnterpriseDB’s offering, Postgres Plus Advanced Server) but hosted on AWS, charged hourly, with two main options for pricing:

  • A license-included option (Oracle Database Standard Edition One) at $3.14/hour for a reasonably-specced instance, coming out at about $27k/year, or $54k for the two servers we’d need
  • A “bring-your-own-licence” (BYOL) option, where you provide the Oracle licenses and Amazon AWS hosts it, at around $19k/year for the same-size instance, or $38k/year for the two servers we’d need

So if you work on the assumption that Oracle Database SE1 is roughly equivalent to Postgres Plus Advanced Server in terms of DW features, it’s not a bad option, and probably a better one if you’re already heavily invested in Oracle and have lots of skills there. The BYOL option is really just about hosting, but at $38k/year to host the two servers, back them up and so on, it’s not a bad option compared to buying some servers and putting them somewhere.

So, with Amazon RDS or Postgres Plus Advanced Server in the cloud, if you’re happy to take the hit on features and scalability it’s possible to bring your database costs down considerably, which may be an option when your project first starts off, with the database platform reviewed in, say, a couple of years’ time to see whether it’s worth trading up to Oracle Database Enterprise Edition, or bringing the whole thing back in-house. But Postgres and Oracle aren’t the only players in the analytical database market – there are on-premise vendors such as Greenplum and Vertica who’ll sell you a full-priced, Oracle-comparable database and who also have cut-down, community editions available for single-node installs, and there are new players such as Amazon who are bringing out cloud-native analytic databases, again based on instance sizes and hourly charging – so let’s look at one of them, Amazon’s “Redshift”.


Amazon Redshift is actually based on an analytic database product called ParAccel, a column-store database in the same vein as Vertica but with major bits cut out in its Amazon Redshift form. That said, it’s a lot more interesting as a starting point than core PostgreSQL or Oracle Database SE1, and given that it’s a core Amazon AWS product, it’s no doubt going to receive a lot of investment, developer evangelism and usage over the next few years. It’s also Postgres-compatible in terms of ODBC drivers, so viable for OBIEE, though, like Postgres, unsupported – so you’ll not get much in the way of BI Server optimisations, or help from Oracle Support if it goes wrong.


To connect OBIEE to Amazon Redshift there’s a set of ODBC and JDBC drivers that you can download, and once you’ve opened up the port to Redshift in the AWS firewall, you can create an ODBC connection on the OBIEE side just like most other PostgreSQL connections, and then import the Redshift metadata into OBIEE’s BI Repository.
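As a point of reference, Redshift clusters listen on port 5439 by default and speak the PostgreSQL wire protocol, so the connection string looks much like any other Postgres one (the cluster endpoint below is a made-up example):

jdbc:postgresql://examplecluster.abc123xyz.us-east-1.redshift.amazonaws.com:5439/dev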


Redshift is priced around the same mark as Postgres Plus Advanced Server, but without the Amazon AWS storage and bandwidth charges. It’s a bit higher-risk than Postgres Plus, and it’s not got the Oracle Database compatibility features EnterpriseDB added to core Postgres, but it’s column-store, cloud-native and backed fully by Amazon.

So – with all of this in mind, what are the conclusions? Well, it’s fair to say that, based on what turned up today and OBIEE’s pre-existing heterogeneous database support, you can connect non-Oracle databases to OBIEE as well as Oracle ones, but you’ll need to bear in mind that the newer cloud-based ones, and the open-source ones, won’t necessarily come with support from Oracle. If you’re looking to keep as close to an Oracle solution as possible but cut costs dramatically, using Amazon’s RDS service, particularly with the license-included SE1 option, could bring down the cost considerably, from $800k or so to around $50k/year, but you’ll need to work within the product constraints of SE/SE1 – so no materialised views, partitioning option and the like. It’s probably a matter of preference whether you’d go for Oracle SE1 or a Postgres-based solution such as EnterpriseDB’s Postgres Plus Advanced Server (I’d probably go with Amazon RDS and Oracle Database SE1, if only because I could upgrade in time to Oracle Database Enterprise Edition and my Oracle skills could be re-used).

Going forward, analytic databases-as-a-service certainly sound interesting, with Amazon Redshift looking particularly promising. How much customers will take up cloud-based data warehousing databases in general, though, will probably be limited until OBIEE itself goes into the cloud – either through cloud-friendly licensing policies that charge by the hour and instance capacity, or with a cloud-native version of OBIEE designed to interoperate with Oracle’s Cloud Database offering, which up until now doesn’t provide external SQL*Net access and is really designed for powering cloud-based Java apps. Watch this space, as they say – and it’ll be interesting to hear what Oracle announce in this area at next month’s Oracle Openworld.

Oracle Database 12cR1 – New Features for BI & DW

Oracle Database 12c Release 1 came out last week, initially available for 64-bit Linux and Solaris (SPARC and x86_64), with the Windows and other versions presumably due over the next few months. So what’s in this new release for Oracle BI & DW customers, and how does compatibility stand with OWB, ODI and Oracle BI Enterprise Edition? It’s early days yet, but let’s take an initial look.

Probably the headline new feature for Database 12cR1 is “pluggable databases” – a system where a single “container” database can be created that can then contain one or more “pluggable” databases, similar to how a single physical server can host one or more virtualised VMware or VirtualBox servers. The advantage of container and pluggable databases is that whilst each pluggable database appears to applications to be its own, self-contained database, in fact it shares the underlying database system infrastructure with the other pluggable databases in that container, reducing the total overhead compared to setting up several full Oracle databases.


Clearly this is something Oracle have put together to support their move into the cloud, but it’ll benefit ordinary customers by permitting a greater degree of data isolation than you’d get with a multi-schema approach, without all the overhead of creating virtual machines to run your databases in.
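To give a flavour of it, creating a new pluggable database from the seed and opening it up is just a couple of statements; a minimal sketch, with the database and user names made up, and the file_name_convert paths depending on your storage layout:

-- run from the root of the container database, as a suitably-privileged common user
create pluggable database sandbox_pdb
  admin user pdb_admin identified by Password1
  file_name_convert = ('/pdbseed/', '/sandbox_pdb/');

alter pluggable database sandbox_pdb open;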


Whilst not a feature specifically aimed at BI & DW developers, those of us who are constantly spinning up R&D and sandbox-type environments now have another option alongside multiple schemas, multiple databases and VMs/containers, though as we’ll see in a moment not all database features are available when running in container/pluggable mode. Note though that use of more than one pluggable database within a container database requires a new Enterprise Edition option called the Multitenant Option, so it’s most probably aimed at customers serious about cloud, multi-tenancy and TCO reduction.

Another new feature that’ll affect all developers is the replacement of Database Control (the cut-down Enterprise Manager equivalent to Fusion Middleware Control) with DB Express, an Adobe Flash-based management console that comes with one or two new features and a couple of features taken away. The look is similar to Enterprise Manager Cloud Control, and for the average database developer or admin it still looks like a pretty good web-based admin tool.


One thing to note is that starting up DB Express is different to the old DB Console; before you can even use it, you need to set the HTTP port it listens on (for example, 8080) using the command below whilst logged in as SYS:

exec dbms_xdb_config.sethttpport(8080);
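You can confirm the setting, or switch DB Express over to HTTPS, using the same package; a quick sketch (function names as per the 12c dbms_xdb_config package documentation, so check them against your release):

select dbms_xdb_config.gethttpport() from dual;  -- confirm the HTTP port

exec dbms_xdb_config.sethttpsport(5500);         -- or listen over HTTPS instead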

DB Express actually runs using XDB rather than a standalone EM instance, and this article on the AMIS blog by Marco Gralike runs through a bit more of the new product background, along with a few other settings you might want to check out and enable. You’ll also need to have Flash installed and enabled on any browser that uses it, which I guess rules out using Safari on my iPad for checking out 12c databases (at least, without using full Enterprise Manager Cloud Control 12c).

So, onto the BI & DW-specific features now. There’s not, at least with this initial 12cR1 release, much new in the OLAP Option area and of course there’s no new 12c release of Oracle Warehouse Builder, with instead an updated version of OWB 11gR2 now available that’s compatible with Database 12cR1. ODI11g is compatible with Oracle Database 12c in that it can connect to it via the usual JDBC drivers (as can OBIEE through the usual OCI ones), and David Allen from the ODI team posted this article the other day on installing the ODI repositories into a 12c pluggable (as opposed to regular, non-pluggable) database. Steve Karam has been updating and maintaining an excellent list of community Oracle Database 12c-related articles over at his blog, but let’s take a quick look at a couple of new 12c features that have interested us over at Rittman Mead.

The first one that grabbed Stewart’s eye was temporal validity. Whilst this sounds like something out of Doctor Who, it’s actually a feature that leverages flashback (or flash-forward, as Tom Kyte suggests) to return versions of a table row based on their validity at a certain time. To take an example from this Oracle Learning Library article on the topic, a query that uses temporal validity might look like:

select first_name, to_char(valid_time_start,'dd-mon-yyyy') "Start", to_char(valid_time_end,'dd-mon-yyyy') "End"
from hr.emp_temp versions
period for valid_time between to_date('01-SEP-1995') and to_date('01-SEP-1996')
order by 2;

which of course looks very much like the types of queries we use in BI applications that need to make use of type-2 slowly changing dimensions. In effect, the valid-from and valid-to columns we usually create and maintain become hidden columns managed by the Oracle database, which we can make use of through queries like the one above, or the one below that sets a particular valid time period and then just queries the table, without any reference to these time columns:

exec dbms_flashback_archive.enable_at_valid_time('CURRENT');
select first_name, to_char(valid_time_start,'dd-mon-yyyy') "Start", to_char(valid_time_end,'dd-mon-yyyy') "End"
from hr.emp_temp order by 2;
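For context, the valid_time_start and valid_time_end columns used in these queries come from declaring a valid-time period on the table, along the lines of the Oracle Learning Library example; a one-line sketch:

alter table hr.emp_temp add period for valid_time;  -- creates the hidden valid_time_start/valid_time_end columns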

Taking this to its logical conclusion, what this potentially gives us is a way of implementing SCD2-type dimension tables without having to use surrogate keys, as the temporal validity part will take care of row versioning, though there are probably other benefits of surrogate keys that we’d then lose that I’ve not thought through. Where this gets even more interesting is when you consider another new 12cR1 feature, in-database row archiving, which allows you to “soft-delete” individual rows so they are excluded from normal DML statements, but are still there for regulatory, compliance or archiving reasons. For example, using the command:

update emp_arch set ora_archive_state=dbms_ilm.archivestatename(1)
where employee_id in (102, 103);

you can set a particular set of rows to be archived (i.e., not normally visible), run queries against that table and have those rows excluded, but then issue commands such as:

alter session set row archival visibility = all;

or

select employee_id, first_name, ora_archive_state
from emp_arch where ora_archive_state = dbms_ilm.archivestatename(1);

to see the archived, soft-deleted rows. This is quite an interesting new option we could consider when having to store table rows that have been deleted in the source system but that we need to keep in the warehouse for regulatory or compliance purposes, and together with temporal validity it gives us a few new ideas for putting together data warehouse physical models. To be honest, I’m expecting there to be a few gotchas – one we’ve spotted already is that temporal validity isn’t available when working with pluggable databases – but they’re a couple of interesting new features, nonetheless.
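One step the excerpts above skip over is enabling row archiving in the first place; the hidden ora_archive_state column only exists once the table has been declared for it, as in this one-line sketch:

alter table emp_arch row archival;  -- adds the hidden ora_archive_state column to the table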

Other new features of interest in 12cR1 include adaptive SQL query plans that can change as the query executes (something that sounds very useful when the initial cardinality estimates turn out to be way off), the death of advanced replication and streams (in favour of GoldenGate), the ability to disable archive logging when using Data Pump, enhanced WITH clauses and subquery factoring including the ability to include inline PL/SQL calls in the subqueries, invisible columns to go with 11g’s invisible indexes (with presumably invisible databases and servers to come later on), automatically-incrementing identity columns (yay!), and a bunch of other optimiser enhancements.

The final new feature that’s got us pretty excited at Rittman Mead Towers is probably the most significant new BI&DW-related SQL feature in 12c: SQL pattern matching. We’ll no doubt go into SQL pattern matching in a lot more detail in a future blog post, but to take an example from the Using SQL for Pattern Matching Oracle-by-Example tutorial, the pattern-matching query below:

SELECT * FROM Ticker
MATCH_RECOGNIZE (
  PARTITION BY symbol
  ORDER BY tstamp
  MEASURES STRT.tstamp AS start_tstamp,
           LAST(DOWN.tstamp) AS bottom_tstamp,
           LAST(UP.tstamp) AS end_tstamp
  ONE ROW PER MATCH
  AFTER MATCH SKIP TO LAST UP
  PATTERN (STRT DOWN+ UP+)
  DEFINE DOWN AS DOWN.price < PREV(DOWN.price),
         UP AS UP.price > PREV(UP.price)
) MR
ORDER BY MR.symbol, MR.start_tstamp;

would look for patterns of rows featuring a “V-shaped” down-and-then-up movement in the price, the sort of shape that might be useful in identifying stock disposals around a key date.
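You can flip the pattern just as easily; here’s a sketch of the “inverted V” spike case against a hypothetical daily product sales table (all names made up), which we’ll come back to in a moment:

SELECT * FROM daily_sales
MATCH_RECOGNIZE (
  PARTITION BY product
  ORDER BY sale_date
  MEASURES STRT.sale_date AS spike_start,
           LAST(UP.sale_date) AS spike_peak,
           LAST(DOWN.sale_date) AS spike_end
  ONE ROW PER MATCH
  PATTERN (STRT UP+ DOWN+)
  DEFINE UP AS UP.units > PREV(UP.units),
         DOWN AS DOWN.units < PREV(DOWN.units)
);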

Taking this further, you might use pattern-matching to find CEOs or executives whose pattern of selling publicly-traded stock two months before financial close is inversely similar to the financial performance (or profitability) of all companies over the last two years, or follow the beer-and-diapers example to find groups of products where there is a sudden 100% spike in sales (the inverted-V graph sketched above) compared with the previous or next day (or the previous/next n days or months, for example). Look out for blog posts from our own Stewart on features such as temporal validity and in-database row archiving, and from Venkat, who came up with the pattern matching examples, over the next few weeks – for now though, back to investigating this exciting new database release.

Using Oracle TimesTen With Oracle BI Applications (Part 1)

Oracle BI Applications (OBIA) is a product for delivering industry best-practice dashboards and analysis over enterprise sources such as ERP, HR, SCM and CRM systems. OBIA includes the necessary functionality to extract data from the source systems, transform and load it to the target database’s standardised data model and to provide a web-based analytic platform over the target database. The ETL functionality is (depending on OBIA version) delivered by either Informatica or Oracle Data Integrator (ODI). The reporting and dashboards are delivered through OBI.

Simplistically, Oracle BI Apps extracts data from the source system(s) and loads it into a custom data warehouse data model, which is then used as the data source for the pre-built OBI analyses and dashboards.

Basic OBIA

Over my next few blog postings I will be exploring the use of Oracle TimesTen as an additional data store for OBIA reporting. If you are not already aware, Oracle TimesTen is an in-memory relational database that is used as an in-memory caching technology for Oracle databases and as a standalone in-memory database, especially for rapidly-updated transactional systems. It is also a key part of the Exalytics software platform (along with Oracle Business Intelligence and Essbase). For these articles I will be using the Exalytics edition of TimesTen. This gives access to some of the special data warehouse-style performance features that are not included with the standard TimesTen license. Some of the techniques I will cover are equally applicable to the standard TimesTen product and do not require the special Exalytics features. Pictorially, I will be using Oracle TimesTen as a second data source to OBI.

Adding in TimesTen

To simplify these blog postings I will consider a single fact table and its associated dimensions and use hand-coded TimesTen methods for data loading. I am writing my own data load code as I want to compare different loading techniques. In a production scenario I might consider using ODI or the code generated by the Exalytics BI Summary Advisor – both topics for another time.

Before I get into detail on loading TimesTen tables it may be useful to mention what Oracle TimesTen actually provides. TimesTen is a relational database that uses the server’s RAM, and not disk, to hold the data. Although the database is “in memory” it does have a disk footprint, in that the database is read from disk on startup and written back to disk on unloading; in addition there are database log and checkpoint files for use in database recovery. Just as with Oracle 11g, TimesTen supports all the usual DDL of a relational database: CREATE TABLE, CREATE VIEW, CREATE INDEX, CREATE PACKAGE and so on, with packages written in PL/SQL. The Exalytics version of TimesTen adds two additional features we may find useful in building aggregate tables: in-memory columnar compression and the OLAP grouping operators (ROLLUP, CUBE and GROUPING SETS). Query plans are generated by the TimesTen cost-based optimiser, and we are able to influence those plans through more accurate table statistics, optimal indexing and the use of database hints.
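To give a flavour of those two Exalytics-edition features together, here’s a sketch of building a compressed aggregate table and populating it with the ROLLUP operator; the table and column names are made up, and the compression clause should be checked against the TimesTen documentation for your release:

-- dictionary-based columnar compression on the low-cardinality columns
create table sales_agg (
  prod_category varchar2(50),
  fiscal_month  varchar2(20),
  revenue       number
) compress (prod_category by dictionary, fiscal_month by dictionary)
  optimized for read;

-- populate several levels of aggregation in a single pass
insert into sales_agg
select prod_category, fiscal_month, sum(revenue)
from sales_detail
group by rollup (prod_category, fiscal_month);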

I will look at some of these features in more detail as I develop the code in my next two blog posts.

Photos and Presentation Downloads from the Rittman Mead BI Forum 2013, Brighton & Atlanta

Well, we’re all back home now after two very successful Rittman Mead BI Forum events, in Brighton and then Atlanta, earlier this month, May 2013. Around 70 OBIEE, ODI, Endeca and Essbase developers from around Europe got together in the first week in Brighton, followed by around 60 in Atlanta, and we were joined by Cary Millsap (Method R Corporation), Alex Gorbachev (Pythian) and Toby Potter (Data Sift) as special guest speakers over the two events. Thank you again to everyone who came along and supported the event, and a special thanks to the speakers, without whom, of course, the BI Forum couldn’t take place. In addition, sincere thanks to Mike, Adam, Philippe, Alan, Marty, Jack and Florian from Oracle for coming along and sharing plans and insights around the Oracle product roadmap; and finally, congratulations to Antony Heljula (Peak Indicators Ltd) and Jeremy Harms (CD Group), who won the “Best Speaker” award for Brighton and Atlanta respectively.


Photos from the two events, in Brighton and Atlanta, are available in these Flickr photo sets:


As we always do, we’re also making the slides (where allowed by the speaker, and not under NDA) available for download using the links below, including the one-day Oracle Data Integration Masterclass provided by Stewart Bryson, Michael Rainey and myself. Note that Christian Screen’s and Jeremy Harms’ slides are hosted online, so you may not be able to download them from whatever service is hosting them – sorry.

Oracle Data Integration Masterclass (Stewart Bryson, Michael Rainey, Mark Rittman, Rittman Mead)

Brighton RM BI Forum, May 8th – 10th 2013

Atlanta RM BI Forum, May 15th – 17th 2013

So once again – thank you to everyone who came along, especially the speakers but also everyone from our Brighton and Atlanta offices who helped set the event up, and made sure it all ran so smoothly. See some of you again in Brighton and Atlanta next year, and our next outing is to ODTUG KScope’13 in New Orleans – another great event with the BI Track organised by Kevin McGinley – make sure you’re there!