Oracle Openworld 2013 Days 3 & 4 : Oracle Cloud, OBIEE and ODI Futures
It’s getting to the end of day 4 of Oracle Openworld, with Oracle Team USA coming back from 8-1 down to pull off the unlikeliest of comeback victories, and Openworld sessions covering OBIEE, ODI and Oracle Cloud futures. Here are the details:
1) Oracle’s Cloud Strategy
Last year Oracle announced a number of cloud initiatives, with Database and Java cloud service announcements followed by availability later in 2013. Unlike Amazon Web Services, though, which basically gives you an open platform for deploying database and web applications (Infrastructure-as-a-Service, or “IaaS”), Oracle’s offerings last year were designed to be simpler, giving you a single database schema in the case of the Database Cloud product, and a single WebLogic managed server in the case of the Java Cloud Service.
The Database Cloud Service only let you access the schema via HTTP and a RESTful API though, because providing SQL*Net access would have opened up security issues around the database listener – which left you with programmatic calls, or uploading data via ApEx and spreadsheet uploads. Similarly, the Java Cloud Service was only really aimed at small applications and websites, and whilst you could imagine building a simple web app using the Java Cloud front-end and the Database Cloud back-end, this was really PaaS (Platform-as-a-Service) and not something of much interest to BI & DW developers, as there was no ETL capability and no way to run BI tools such as OBIEE.
In the meantime, organisations (such as ourselves) that wanted to run a complete OBIEE system, or a complete data warehouse, in the cloud have had to use services such as Amazon AWS, where compute, storage, network and other infrastructure are provided as-a-service, with few restrictions and with billing by the hour. In most cases these are “BYOL” (bring your own license) arrangements, with the customer providing the license and also performing all the DBA and sysadmin work – what Amazon provides is the virtual hardware, hosted for you in their cloud. Once you’re there, though, there are other options around the database – a few weeks ago I covered Amazon Redshift and EnterpriseDB as two alternatives to the Oracle database for cloud-based systems – and pricing for the Amazon AWS service itself is rock-bottom, making it quite an interesting option for customers that don’t have a big IT department, or for big-company departments looking to spin up sandbox or short-term development servers.
So Oracle’s announcements on Tuesday were very interesting – what they’re basically planning to offer is a direct, albeit Oracle-centric, competitor to Amazon AWS: in future they will offer infrastructure-as-a-service (basic compute, storage and networking services) alongside their existing platform-as-a-service, and, as we’ll see in a moment, they’ll also be offering applications such as OBIEE and their Fusion ERP stack as turnkey cloud applications with credit-card sign-up and consumer-level interfaces – this is Oracle moving full-scale into the cloud.
As well as this infrastructure-as-a-service offering, their platform-as-a-service database and Java offerings will be expanded to include full database-as-a-service, and a full WebLogic cluster-as-a-service, to complement what’s now being referred to as schema-as-a-service and java-as-a-service. The full database-as-a-service (DBaaS) will provide a fully-licensed Oracle instance (via the new pluggable database feature in 12cR1), your own listener, SQL*Net access and therefore the ability to ETL into it, but with backups and “tuning” handled in the background by Oracle’s staff.
Similarly, the updated WebLogic cloud service will support full WLST access, JMX interfaces and so on, arranged in a WebLogic cluster for high availability and failover. So this setup will be conceptually similar to Amazon AWS, but run using Oracle software and with platform services designed around the needs of database and Java application-server workloads.
No details on pricing or release dates were given in the session, but given last year’s releases I’d expect these to be available late summer next year.
2) OBIEE in the Cloud
So, following on from the cloud keynote, the OBIEE roadmap session built on this announcement to provide details of Oracle’s “reporting-as-a-service” offering, based on OBIEE but with a simplified, more consumer-style interface. What this seems to be offering is full OBIEE, but with new web-based tools for building the RPD, along with previews of new capabilities such as data mashups, faster previews of new visualisations, and a new cloud-style look and feel to match the rest of Oracle’s new cloud products.
Importantly though, reporting-as-a-service will come with a number of initial restrictions aimed at making the product as self-service and self-provisioning as possible; it’s likely that in the first iteration it’ll only connect to the database schema-as-a-service offering, meaning that the only way you’ll get data in is via ApEx and spreadsheet uploads or, more likely, through Java applications you also build in Oracle’s platform-as-a-service cloud. Over time it’ll presumably connect to the full DBaaS service, making it possible to ETL data in, but for now it’s aimed more at departmental solutions and sandbox applications – and it’s a very interesting taste of the future.
3) ODI, and Moving to it from OWB
Moving on to today now, Rittman Mead’s Stewart Bryson, along with sumIT’s Holger Friedrich, presented alongside Oracle on ODI 12c, with the session I attended being all about making the move to ODI from Oracle Warehouse Builder. ODI 12c is still in beta and we’re on the beta program, so I’ll have to be a bit circumspect with what I say on the blog, but in terms of the planned migration path from OWB, Oracle’s plan is that there’ll be three stages to a typical customer migration (if they want to migrate, that is):
1. From the first release of ODI12c, it’ll be possible to execute and monitor OWB jobs from within ODI, giving you a central place to run and control all of your Oracle ETL jobs
2. Then, shortly afterwards, there’ll be a command-line utility for migrating OWB project objects to their ODI equivalent, covering most object types but not process flows and data quality projects, for example
3. And then for all new ETL work, you’d use ODI 12c.
OWB itself will continue to be supported for many years, but the most recent release was the last one. Look out for more details on the blog once ODI 12c goes GA; as one of the beta testers for both the tool itself and the migration utility, we’ll have a lot of advice and experience to share once we’re allowed to talk about it publicly.
So that’s it for now – check back tomorrow for the last post in this series, where I’ll recap the week.
Oracle Openworld 2013 Day 2: Exalytics, TimesTen and Essbase Futures
Monday’s almost over at Oracle Openworld 2013 in San Francisco, and it started for me today with a presentation on Enterprise Manager 12c and the BI Management Pack, alongside Henrik Blixt (PM for the BI Management Pack) and Dhananjay Papde, author of a book on EM12c. I’ve covered EM12c and the BI Management Pack extensions quite a bit on the blog over the past few months, so it was good to exchange a few ideas and observations with Henrik, and it was also good to meet Dhananjay, who’s been working with EM for a long time and has particularly specialised in the configuration management and SLA-monitoring parts of the app.
Similarly, I finished up the day with another joint session, this time on TimesTen for Exalytics, with Peak Indicators’ Tony Heljula and Chris Jenkins, one of the TimesTen development PMs. As with all these sessions, it’s the audience interaction that makes them interesting, and we had a number of excellent questions, particularly at the TimesTen one, given the very interesting product announcements during the day – more on which in a moment.
Before I get onto those though, here are the links to today’s RM presentation downloads, with presentations today given by myself, Jérôme Françoisse (with Gurcan Orhan) and Venkat:
- Make the Most of your Exalytics and BI Investments with Enterprise Manager 12c (Mark Rittman, Henrik Blixt, Dhananjay Papde)
- Birds of a Feather Session: Best Practices for TimesTen on Exalytics (Mark Rittman, Chris Jenkins, Tony Heljula)
- Oracle BI EE Integration with Hyperion Data Sources (Venkatakrishnan J)
- Oracle Data Quality Solutions, Oracle Data Integrator and Oracle GoldenGate on Exadata (Jérôme Françoisse and Gurcan Orhan)
So, onto the product roadmap sessions:
1) Oracle Exalytics In-Memory Machine
The first set of announcements was around the Oracle Exalytics In-Memory Machine, which started off as a Sun x86_64 server with 1TB RAM and 40 CPU cores, recently went to 2TB RAM and SSD disks, and is now also available in a new configuration called Oracle Exalytics T5-8. This new version comes with 4TB RAM and is based on Sun SPARC T5 processors with, in this configuration, a total of 128 CPU cores, and is aimed at the BI consolidation market – customers who want to consolidate several BI applications, or BI environments, onto a single server – priced in this case around the $350k mark, excluding the software.
What’s also interesting is that the T5-8 will use Sun Solaris on SPARC as the OS, giving it access to Solaris’ virtualisation and resource isolation technologies, again positioning it as a consolidation play rather than a host for a single, huge BI application. Given the price I can’t quite see us getting one yet, but it’s an obvious upgrade path from the X2-3 and X2-4 servers and something you’d want to seriously consider if you’re looking at setting up a “private cloud”-type server infrastructure.
The Exalytics roadmap session also previewed other potential upcoming features for OBIEE – earmarked for Exalytics, I would imagine, given some of the computation work that’d need to go on in the background to support them – including:
- A “Google Search”/“Siri”-type feature called BI Ask, which presents the user with a Google-style search box into which you can type phrases such as “revenue for May 2010 for Widget A”, with the feature then dynamically throwing up a table or graph based on what you’ve requested. Rather than attempting natural language parsing, BI Ask appears to work with a structured dictionary of words based on objects in the Presentation Services catalog, with choices available to the user (for example, lists of measures or dimensions) appearing under the search box in a similar manner to the “Google Suggest” feature. Although the demo was done using a desktop browser, where I think this could be particularly useful is in a mobile context, especially given most browsers’ and mobile platforms’ built-in ability to receive speech input and automatically pass it as text to the calling application. Imagine Siri for mobile analytics: you hold your iPhone up, say “revenue for the southern region for the past three months, compared to last year”, and a graph of revenue over that period automatically appears on your screen – that’s what I think BI Ask is trying to get towards.
- A user-driven data mashup feature that lets the user browse to a spreadsheet file on their local desktop, upload it to the OBIEE server (maybe to an Oracle Cloud database?), and then automatically join it to the main corporate dataset, so that they can add their own data to that provided by the BI system. Any setup like this clearly needs to differentiate between metrics and attributes uploaded by the user and the “gold standard” ones provided as part of the RPD and Presentation Services Catalog, but it’s potentially a very useful feature for users who’d otherwise export their OBIEE data to Excel and do the data combining there.
- Probably more for “Exalytics v2”, but a totally revamped aggregate refresh and reload framework, probably based around DAC technology, that would leverage the DAC’s own data-loading capabilities and tools such as GoldenGate to perform incremental refreshes of the Exalytics adaptive in-memory cache. No specific details yet, but it’s pretty obvious how this could improve on the current Exalytics v1 setup.
2) Oracle TimesTen for Exalytics
Yesterday of course brought the big announcement about the new In-Memory Option for Oracle Database 12c, and this then led to the obvious questions – what about TimesTen, which up until now was Oracle’s in-memory database, and what about Exalytics, where TimesTen was the central in-memory part of the core proposition? Given that I was on the TimesTen Birds of a Feather panel this evening and would no doubt need to field exactly those questions, I was obviously quite keen to get along to one of the TimesTen roadmap sessions earlier in the day to hear Oracle’s story around this.
And – it actually does make sense. What’s happening is this:
- TimesTen’s development team has now been brought under the same management as Oracle Database 12c’s In-Memory Option, with (over time) the same core libraries and the same performance features
- TimesTen will get the ability to store its tables (which are already held in memory) in columnar format as well as the existing row format – the difference being that, unlike the Oracle Database in-memory feature, this is not done through on-the-fly data replication; a table is stored either row-store or column-store, something you decide when you create it, and disk is only used for checkpointing and data persistence between reboots
- TimesTen will also gain the ability to be set up as a grid of servers providing a single database instance – a bit like RAC with its single database and cache fusion – with support for replication so that you can copy data across the nodes to protect against machine failure. Currently you can link TimesTen servers together, but each one is its own database instance, and you’d typically do this for high availability and failover rather than to create one large database. What this grid setup also gives us is the ability to do parallel query – Oracle didn’t say whether this would be one slave per grid node, or whether it’d support more than one slave per node – but coupled with the in-memory column-store feature, presumably this is going to mean bigger TimesTen databases and much faster queries (and it’s fast already)
So what about the positioning of TimesTen vs. Oracle Database In-Memory Option – does one replace the other, or do you use the two together? Oracle’s ideas on this were as follows:
- Clearly the in-memory Oracle Database option is going to be a great query accelerator for large-scale data warehouses, but there’s still a lot of value in having a mid-tier in-memory data layer that’s under the control of the BI system owner, rather than the DBAs. You’ll have control over the data model, you can implement it more quickly than you’d be able to upgrade the whole data warehouse database, and it’s physically co-located closer to the BI Server, so you’ll have less of an issue with network latency.
- TimesTen’s in-memory columnar storage technology will be based on a similar approach to the one being taken by the database, and developed by the same overall team. But TimesTen will most probably have shorter development cycles, so new features might appear in TimesTen first, and it’s also lower-risk for customers to test out new in-memory approaches in TimesTen than to reconfigure the whole warehouse to try them.
And I think this makes sense. Of course, until we actually get hold of the two products and test them out, and see how the pace of development works out over time, we’re not going to fully know which product to deploy where – and pricing and packaging have yet to be announced; for example, I’d strongly predict that columnar storage for TimesTen will be an Exalytics-only feature, whilst the In-Memory Option for the database might be priced more like RAC than Partitioning, or even packaged up with Partitioning and OLAP as some sort of “data warehousing option”. We’ll just have to wait and see.
3) Oracle Essbase
The Essbase roadmap session was the last one I managed to attend today, and again there were some pretty exciting new features announced or trailed (it was made clear that the features towards the end of this list are more at the planning or conceptual stage at the moment, and may well not make it into the product). Anyway, here’s what was talked about in the session, for BI and Exalytics-type use cases:
- Improved MDX query creation when working with the BI Server, including support for sub-selects – something that might help to reduce the number of separate MDX queries that OBIEE has to generate to work out all the subtotals required for hierarchical column queries
- Improvements to the MDX AGGREGATE function and a revamped cube spin-off feature for OBIEE, including a prototype new web-based MOLAP Acceleration Wizard for auto-generating Essbase cubes for OBIEE aggregate persistence
- A new Cube Deployment Service private API that’s used by the MOLAP Aggregation Wizard (amongst others) to generate and deploy an Essbase cube within a cloud-type environment
- A “renegade member” feature for collecting all the data-load records whose members can’t be located – aimed at avoiding the situation where totals in an Essbase cube don’t match the totals in the source system because records got dropped during the data load
- Very speculatively – a potential hybrid BSO/ASO storage mode, combining BSO’s calculation capabilities with ASO’s dynamic aggregation.
So – lots of potential new features and a peek into what could be in the roadmap for three key OBIEE and Exalytics technologies. More tomorrow as we get to attend roadmap sessions for OBIEE in the Cloud, and ODI 12c.
Create a Mobile BI App for iOS, Android or BlackBerry within 5 Days!
A few weeks ago I posted an article on the blog about the new Oracle BI Mobile App Designer, an HTML5-based mobile app builder for OBIEE 11.1.1.7.1. BI Mobile App Designer differs from Oracle BI Mobile HD in that:
- It’s based on HTML5, so it works on most modern mobile devices, including Android, iOS, BlackBerry and Windows Phone
- It uses “responsive design”, so apps built using it work on phones, tablets and laptops
- The technology behind it is based on BI Publisher 11g, so it’s easy to use and builds on skills you already have
- As it’s 100% thin-client, you won’t get caught up in mobile device security issues, or need to involve companies such as Good Technology
We’re particularly excited about BI Mobile App Designer and its possibilities for LOB (“Line of Business”) mobile BI apps – creating focused, workflow-based analytic apps for a particular department, with the UI focused on doing a particular job. In the screenshots below you can see a couple of examples of user interfaces very much customised to a particular role or reporting scenario, and users can easily access the apps you create using the new “Apps Library”.
Whilst you could give BI Mobile App Designer a go yourself, you’re probably going to get the most success by working alongside a partner who has experience with the product, understands the issues around deploying mobile apps, and has a delivery approach that emphasises quick wins and working collaboratively with users. To this end, we’ve put together a special “Quickstart for Oracle BI Mobile App Designer” package that delivers a working LOB mobile analytic app for you within a week, using this new feature – and leaves you with a roadmap and development plan so you can extend it yourself afterwards.
Over this five-day, fixed-price and fixed-scope engagement, we will:
- Work with you collaboratively to identify the LOB use-case
- Review your current OBIEE 11g installation, and if necessary work with you to install any required updates to enable the Mobile App Designer feature
- Identify with you the existing analyses and reports to include in the app, and the app structure and navigation menu
- Develop the first iteration of the app, and then review it with the proposed users
- Create the final iteration, including any imagery and corporate design, links to other content and so on, and then deploy it in the Mobile App “Appstore” within Oracle Business Intelligence
- Work with your team to deploy the app to end-users, and provide hand-over so that they can continue development after the engagement.
Interested? Full details on this package are on the Quickstart for Oracle BI Mobile App Designer page on our website, and also on this data sheet. Get in touch now if you’d like to take advantage of this offer – and have mobile analytics delivered to your workforce in just a week!
OBIEE as a Data Source for Qlikview, and the New “Direct Discovery” Feature: Part 2
In my last post I explored how to use Qlikview, an in-memory query tool similar in concept to Endeca Information Discovery, to connect to OBIEE’s BI Repository and use it as a data source. The conclusion was that a Qlikview report could be easily and successfully produced, as long as some knowledge of the Presentation layer and OBIEE logical SQL was available. In this follow-up article I’m going to take a look at the new Qlikview feature I mentioned yesterday, “Direct Discovery”, to see whether it’s possible to have Qlikview bypass its own in-memory cache and use the BI Repository instead – something that’s particularly interesting in the context of Exalytics, where that repository might in fact point to a TimesTen database, giving us the ability to use Qlikview’s UI whilst accessing data through the OBIEE enterprise and engineered-systems framework. Before I do that though, a brief diversion into the world of OBIEE’s “logical SQL” syntax, on which this integration idea rests.
Standard logical SQL, either generated by OBIEE or manually written, accesses the Presentation layer of the BI Repository (RPD) and uses the appropriate joins and security as defined by the business model and mapping layer, and the physical layer, in that repository. Now, a “best practice” RPD hides all the physical datasources and the necessary joins, both physical and logical, and includes enough security so that users only see data they are authorised to see; for most reports this is enough, but some reports may need to access data in a slightly different manner, with more access to these underlying implementation details, and this can be achieved by writing logical SQL that accesses the other layers in the repository.
There are several options that can be used with OBIEE logical SQL to target the query at these different layers within the RPD, and these advanced techniques may be used to fulfil more complex reporting requirements. The following sections will demonstrate these techniques.
The first example is a simple logical SQL statement to get a dataset of total revenue by product by year:–
SELECT
"I - Sample TimesTen"."Products"."P1 Product" as Product,
"I - Sample TimesTen"."Alternate Calendars"."T35 Cal Year" as Year,
"I - Sample TimesTen"."Base Facts"."1- Revenue" as Revenue
FROM "I - Sample TimesTen";
This basic form of logical SQL does not need any joins or aggregations specified, as these are all provided by the BI Server for you, and only the subject area name needs to be given as the table name. The SQL is translated by the BI Server into the syntax of the underlying technology, with the appropriate joins, aggregates and groupings added.
The same dataset can be produced with logical SQL from the Business Model and Mapping layer:–
SELECT_BUSINESS_MODEL
"P1 Product" as Product,
"T35 Cal Year" as Year,
"1- Revenue" as Revenue
FROM "05 - TimesTen Sample"."D1 Products (Level Based Hier)",
"05 - TimesTen Sample"."D0 Time",
"05 - TimesTen Sample"."F0 Revenue Base Measures";
When selecting data from the Business Model and Mapping layer, tables must be added to the FROM section for each object selected, but joins need not be added. The SQL is still translated by the BI Server to the syntax of the underlying technology and the appropriate joins, aggregates and groupings added.
The same dataset can be produced with logical SQL from the Physical layer:–
SELECT_PHYSICAL
B.Prod_Key as Product,
A.CAL_YEAR as Year,
sum(B.Revenue) as Revenue
from "05 - Sample App TimesTen".."BISAMPLE_TT"."SAMP_TIME_QTR_D" A,
"05 - Sample App TimesTen".."BISAMPLE_TT"."SAMP_REVENUE_F" B
where A.QTR_KEY = B.Bill_Qtr_Key
group by A.CAL_YEAR, B.Prod_Key;
When selecting data from the Physical layer, the SQL statement is passed directly to the underlying datasource, so it must have the appropriate and correct syntax and include all joins, aggregations and groupings.
All of these types of logical SQL can be used within a Qlikview report. However, they will need to be manually added to the load script, as the select wizard will only generate standard logical SQL.
An example of using and combining these more advanced OBIEE logical SQL statements can be seen in the following Qlikview load script:–
ODBC CONNECT TO QlikViewTest (XUserId is HfOdKYJOPDZcWSFPRC, XPassword is dRMWbARMNLacWYIGTJKB);
ProductTable:
SQL SELECT "P1 Product" as Prod_Desc,
"P2 Product Type" as Type,
"P4 Brand" as Brand,
"P0 Product Number" as Product
FROM "A - Sample Sales".Products;
Dates:
SQL SELECT_BUSINESS_MODEL "T35 Cal Year" as Year
FROM "05 - TimesTen Sample"."D0 Time";
RevenueFact:
SQL SELECT_PHYSICAL
b.Prod_Key as Product,
A.CAL_YEAR as Year,
sum(B.Revenue) as Revenue
from "05 - Sample App TimesTen".."BISAMPLE_TT"."SAMP_TIME_QTR_D" A,
"05 - Sample App TimesTen".."BISAMPLE_TT"."SAMP_REVENUE_F" B
where A.QTR_KEY = B.Bill_Qtr_Key group by A.CAL_YEAR, B.Prod_key;
This load script loads the product details using standard logical SQL from the Presentation layer, the time details from the Business Model and Mapping layer and the revenue dataset from the Physical layer. These three different SQL statements are sent to the BI Server and three datasets are returned. Qlikview stores all this data in its in-memory database and joins the datasets together using the common field names.
The following report can then be produced:–
Now we can have a look at the Direct Discovery feature of Qlikview, and create a load script and report that collects the dimensional data at load time but gets the measure data on request.
The Direct Discovery syntax is reasonably straightforward and mainly consists of two parts. Firstly, the SQL part of the load syntax is replaced with the keyword DIRECT, and at this point you can also add a staleness timeout for the dataset. The second and more important area is to specify the measures that need to be collected when the query is run.
For example:–
DIRECT ( stale after 10 seconds ) SELECT
"P1 Product" as Product,
"T05 Per Name Year" as Year
FROM "I - Sample TimesTen";
The first line of this example tells Qlikview that the following query is a Direct Discovery query and that any data selected will be stale after 10 seconds, so any further user interaction after this period will need to revisit the database. The two fields selected in this query will be loaded into the Qlikview memory structures to allow the reporting objects to be created. At this point, Qlikview will also go to the database and obtain all of the other columns associated with the table, in order to add these columns to its metadata catalog; this is to allow the report writer to add further fields to the Qlikview report and its expressions. Such columns are known as implicit columns.
Now, as we are connecting to an OBIEE RPD and using OBIEE logical SQL, we do not need to specify tables in the FROM clause of the SQL statement, just the subject area name. However, this causes an issue with Direct Discovery, as Qlikview will attempt to select all of the fields associated with the “table” to add those column names to its metadata – which in this case is all of the objects in the “I – Sample TimesTen” subject area. Not only can this take a long time, but errors can be produced from the RPD if everything is selected at once. For example, in this case Qlikview will send the following SQL to the RPD:–
SELECT * FROM "I - Sample TimesTen" WHERE 1 = 0
Although this will not select any data, it will try to select every column in the subject area and results in a “No fact table exists at the requested level of detail” error, as it does not make sense to select all of the columns from a single data mart at once.
To avoid this situation we must directly specify as part of the query which columns we wish to include in the report. For example:–
DIRECT ( stale after 10 seconds ) SELECT
"P1 Product" as Product,
"T05 Per Name Year" as Year
IMPLICIT "1- Revenue" as Revenue,
"10- Variable Costs" as Variable_Costs,
"11- Fixed Costs" as Fixed_costs
FROM "I - Sample TimesTen";
By adding this IMPLICIT list of measures, Qlikview will not interrogate the database for a list of implicit fields, but just add the three specified measure objects to its metadata.
Use of the IMPLICIT syntax is vital to using Direct Discovery with the OBIEE RPD, as only the basic logical SQL can be used; trying to use the more advanced SELECT_BUSINESS_MODEL or SELECT_PHYSICAL options does not work.
It is also worth knowing that this initial version of Direct Discovery only supports one Direct Discovery table per Qlikview report, and the source of the Direct Discovery table can only be a single database table or view.
The previous load script, updated to include a Direct Discovery section, now looks like this:–
ODBC CONNECT TO QlikViewTest (XUserId is HfOdKYJOPDZcWSFPRC, XPassword is dRMWbARMNLacWYIGTJKB);
ProductTable:
SQL SELECT "P1 Product" as Product,
"P2 Product Type" as "Product Type",
"P3 LOB" as "Line of Business",
"P4 Brand" as Brand
FROM "I - Sample TimesTen".Products;
Dates:
SQL SELECT "T05 Per Name Year" as Year
FROM "I - Sample TimesTen"."Time";
RevenueFact:
DIRECT ( stale after 10 seconds ) SELECT
"P1 Product" as Product,
"T05 Per Name Year" as Year
IMPLICIT "1- Revenue" as Revenue,
"10- Variable Costs" as Variable_Costs,
"11- Fixed Costs" as Fixed_costs
FROM "I - Sample TimesTen";
This load script loads the product and time details using standard logical SQL, while the revenue dataset is loaded on demand via Direct Discovery. The following report can then be produced:–
In this successful example, all of the RPD dimensional data was initially loaded into Qlikview’s own in-memory store, and the additional measure totals were then loaded on request when drilling down on the Product Brand. This invoked the Direct Discovery feature and ran additional logical SQL for each level accessed.
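I didn’t capture the exact statements Qlikview issued during this drill, but each level effectively results in a piece of standard logical SQL along the following lines (a sketch only, with ‘BizTech’ standing in for whichever brand was selected, and just the Revenue measure shown for brevity):–
SELECT "I - Sample TimesTen"."Products"."P1 Product" as Product,
"I - Sample TimesTen"."Time"."T05 Per Name Year" as Year,
"I - Sample TimesTen"."Base Facts"."1- Revenue" as Revenue
FROM "I - Sample TimesTen"
WHERE "I - Sample TimesTen"."Products"."P4 Brand" = 'BizTech';
As before, the BI Server works out the necessary joins and aggregation itself, so the statement stays simple even though the underlying data is sitting in TimesTen.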
Now that we can use Direct Discovery against the RPD, we have the option of including several different levels and flavours of data aggregation in the reports. Such aggregation could be done using Essbase or TimesTen, or even the OBIEE cache, but this would require the appropriate configuration within the RPD.
In summary, the following are required to successfully use Qlikview with the OBIEE RPD:–
- An ODBC connection to the BI Server from the Qlikview environment
- A good appreciation of the RPD
- Manual creation of logical SQL to create datasets
- Working knowledge of Qlikview load scripts
- Use of IMPLICIT fields when using Direct Discovery
From these brief articles it can be seen that we can successfully use the Qlikview UI with the OBIEE RPD to create reports, and also make use of the new dynamic Qlikview Direct Discovery feature, such that the aggregation, security, multiple data source and other features available from the RPD are all available to the Qlikview developer and user. However, it doesn’t look like it’s possible to replace the Qlikview in-memory data cache entirely, as in this version at least you can only have a single Direct Discovery table within each Qlikview report, and it also looks like it’s necessary to have loaded at least some data into the Qlikview cache so that the Direct Discovery table can then join back to it.
There are also other restrictions, such as Direct Discovery tables not being available to regular Qlikview drop-down menu controls, so it’s probably fair to say that this is a “work in progress” – however, if the feature does mature and potentially allows all data to be sourced on-demand like this, it could be an interesting complement to OBIEE’s regular reporting tools.
OBIEE as a Data Source for Qlikview, and the New “Direct Discovery” Feature: Part 1
A recent release of Qlikview added a new feature called “Direct Discovery”, which allows Qlikview reports to directly access a database datasource on demand. Qlikview, by default, loads all required data up-front into its own in-memory column-store database, but this new Direct Discovery feature allows it, in addition, to retrieve data from a SQL data source on demand, to supplement this in-memory dataset.
This should allow larger datasets to be accessed from within Qlikview reports, and I therefore began to wonder if I could use this new functionality to access some of the very large data sets that we often see with OBIEE reports. In fact – could I use the Qlikview UI but with the OBIEE RPD as the datasource, potentially replacing the Qlikview in-memory cache completely and instead using OBIEE’s BI Repository, federated query capability and its own caching options? If so, not only would this give me access to all of the joined-up heterogeneous datasources contained in the RPD, but also the comprehensive security model that OBIEE provides. So let’s first see if I can use Qlikview to access data via an OBIEE RPD, and then I’ll cover the Direct Discovery feature in the next post.
To do this, I’m going to use a local copy of the SampleApp V305 Virtual Machine, which contains an OBIEE environment that uses a number of different data sources available via the RPD. I created a simple OBIEE report containing Product, Date and Revenue from the I – Sample TimesTen folder, which I’m going to replicate using Qlikview with the OBIEE RPD as the data source:–
The first task is to use the ODBC driver for the BI Server, installed as part of the standard OBIEE installation, to create a system DSN that allows access to the RPD for Qlikview to use.
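Before going anywhere near Qlikview it’s worth sanity-checking the new DSN from any ODBC-capable client (the nqcmd utility that ships with OBIEE works well for this, if you have it to hand) by running a trivial piece of logical SQL through it, something like the following, which assumes the same SampleApp subject area used in the rest of this article:–
SELECT "I - Sample TimesTen"."Time"."T05 Per Name Year"
FROM "I - Sample TimesTen";
If that returns the list of years then the driver, credentials and RPD access are all working, and any problems you hit later on will be in the Qlikview script rather than the connection.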
I started Qlikview and selected File > New; by default this starts a wizard that expects a file-based data source, but using a database data source is a much more manual task, so cancel this wizard and choose File > Edit Script to open the Script editor.
Click the Connect button and select the appropriate BI Server ODBC connection, supply the user name and password, and click OK, and a connection statement will be added to the script.
Click the Select button and you will see the folders and objects you normally see in the Presentation layer of the RPD, shown as tables and columns. Using this wizard to create SQL select statements for product, time and revenue, I created the following script:–
ODBC CONNECT TO QlikViewTest (XUserId is LMUNCIJOPDZcWSFPDD, XPassword is HECQSARMNLacWYIGTRNA);
SQL SELECT *
FROM "I - Sample TimesTen".Products;
SQL SELECT *
FROM "I - Sample TimesTen"."Time";
SQL SELECT "1- Revenue"
FROM "I - Sample TimesTen"."Base Facts";
Running this script shows that data is retrieved, but once it finishes nothing is displayed. This is because Qlikview does not have a default presentation, so I add some report objects and get the following:–
I can see some data, but what I cannot do is the Qlikview thing of clicking on a data item and having the rest of the data change to reflect this choice. It’s useful to look at the table joins – select File > Table Viewer to see what Qlikview has done with the datasets:–
It’s easy to see that none of the datasets are linked, and they all need to be linked to allow Qlikview to highlight associated data.
Going back to the Script editor, I can see that the select that gets the Revenue does not include any key columns that Qlikview can use to link the revenue values to the other datasets. Qlikview automatically links datasets using common field names.
I reopen the Script Editor, highlight the Revenue select, click the Select button, scroll down to the Base Facts table and examine the available fields. Using this wizard I can only select the fields that have been put into the folder in the Presentation layer, and this does not provide the necessary product and time keys. This gives me two options:–
- Add suitable key values to the appropriate presentation layer folders
- Manually create a suitable piece of logical SQL
I want to avoid putting key values into measure folders in the Presentation layer, as that goes against best practice and harms usability. A much easier option is to manually create some logical SQL – in fact, OBIEE generates this for you on the Advanced tab of the Analysis editor.
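For an analysis built on the same Brand, Year and Revenue columns used below, the Advanced tab generates a statement that looks something like the following (the s_n column aliases and the row-limit clause are typical of what 11.1.1.7 produces; the exact output will vary with the analysis definition):–
SELECT
0 s_0,
"I - Sample TimesTen"."Products"."P4 Brand" s_1,
"I - Sample TimesTen"."Time"."T05 Per Name Year" s_2,
"I - Sample TimesTen"."Base Facts"."1- Revenue" s_3
FROM "I - Sample TimesTen"
ORDER BY 1, 2 ASC NULLS LAST, 3 ASC NULLS LAST
FETCH FIRST 65001 ROWS ONLY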
I don’t need all of this generated logical SQL, but it is a good starting point. I copy it into my Qlikview script editor and edit it to just retrieve the data I need, and my script is now:–
ODBC CONNECT TO QlikViewTest (XUserId is LMUNCIJOPDZcWSFPDD, XPassword is HECQSARMNLacWYIGTRNA);
SQL SELECT *
FROM "I - Sample TimesTen".Products;
SQL SELECT *
FROM "I - Sample TimesTen"."Time";
SQL SELECT "I - Sample TimesTen"."Products"."P4 Brand" "P4 Brand",
"I - Sample TimesTen"."Time"."T05 Per Name Year" "T05 Per Name Year",
"I - Sample TimesTen"."Base Facts"."1- Revenue" "1- Revenue"
FROM "I - Sample TimesTen";
Notice that I have removed some of the generated SQL and changed the aliases to be the actual column names, to help Qlikview link the data. If I execute the script I now get a much more Qlikview-style report, where I select a Brand and Qlikview shows me the related years and the total revenue:–
A quick look at the table view shows one of the most important parts of a Qlikview report, the dataset links:–
Now that I’ve got Qlikview using the RPD, I can reproduce the data I saw in OBIEE – but I want to extend the logical SQL and make a more interesting report, so I manually edit the SQL to get this:–
ODBC CONNECT TO QlikViewTest (XUserId is cJbQJYJOPDZcWSFPCC, XPassword is CHAeeARMNLacWYIGTBUA);
ProductTable:
LOAD Product,
Brand;
SQL SELECT "P1 Product" as Product,
"P2 Product Type",
"P4 Brand" as Brand
FROM "A - Sample Sales".Products;
SQL SELECT "E - Sample Essbase"."Time"."T05 Per Name Year" as Year
FROM "E - Sample Essbase";
RevenueFact:
SQL SELECT "I - Sample TimesTen"."Products"."P1 Product" as Product,
"I - Sample TimesTen"."Time"."T05 Per Name Year" as Year,
"I - Sample TimesTen"."Base Facts"."1- Revenue" as Revenue
FROM "I - Sample TimesTen";
With this script, I’ve only selected the data objects I need to link and display and I’ve used the power of the RPD to select the product data from an Oracle database, the time details from an Essbase cube and revenue data from a TimesTen database. These separate datasets are then combined into a single Qlikview report:–
From this brief article it can be seen that we can successfully use the Qlikview UI with the OBIEE RPD to create reports, but three things are necessary:–
- An ODBC connection to the BI Server
- An appreciation of the Presentation layer, so that the correct objects can be accessed
- Manual creation or editing of generated logical SQL to enable Qlikview to successfully link datasets together.
In the next part, I will look more closely at the different logical SQL options, and at the Direct Discovery feature for retrieving data on demand from the RPD, thereby avoiding duplication of data and opening up the potential use of TimesTen or Essbase as the underlying in-memory database.