RM BI Forum 2014 Brighton is a Wrap – Now on to Atlanta!
I’m writing this sitting in my hotel room in Atlanta, having flown over from the UK on Saturday following the end of the Rittman Mead BI Forum 2014 in Brighton. I think it’s probably true to say that this year was our best ever – an excellent masterclass on the Wednesday followed by even-more excellent sessions over the two main days, and now we’re doing it all again this week at the Renaissance Atlanta Midtown Hotel in Atlanta, GA.
Wednesday’s guest masterclass was by Cloudera’s Lars George, and covered the worlds of Hadoop, NoSQL and big data analytics over a frantic six-hour session. Lars was a trooper; despite a mix-up over the agenda, where I’d listed his sessions as just an hour each when he’d planned (and been told by me) that they were an hour and a half each, he managed to cover all of the main topics and take the audience through Hadoop basics, data loading and processing, NoSQL and analytics using Hive, Impala, Pig and Spark. Roughly half the audience had some experience with Hadoop, with the others just vaguely acquainted with it, but Lars was an engaging speaker and stuck around for the rest of the day to answer follow-up questions.
For me, the most valuable parts of the session were Lars’ real-world experiences in setting up Hadoop clusters, and his views on which approaches are best for analysing data in a BI and ETL context – with Spark clearly in favour now compared to Pig and basic MapReduce. Thanks again to Lars, and to Justin Kestelyn from Cloudera for organising it, and I’ll get a second chance to sit through it all again at the event in Atlanta this week.
The event proper kicked off in the early evening with a drinks reception in the Seattle bar, followed by the Oracle keynote and then dinner. Whilst the BI Forum is primarily a community (developer and customer)-driven event, we’re very pleased to have Oracle also take part, and we traditionally give the opening keynote over to Oracle BI Product Management to take us through the latest product roadmap. This year, Matt Bedin from Oracle came over from the States to deliver the Brighton keynote, and whilst the contents aren’t under NDA there’s an understanding that we don’t blog or tweet them in too much detail, which gives Oracle a bit more leeway to talk about futures and be candid about their direction (much like other user group events such as BIWA and ODTUG).
I think it’s safe to say that the current focus for OBIEE over the next few months is the new BI in the Cloud Service (see my presentation from Collaborate’14 for more details on what this contains), but we were also given a preview of upcoming functionality for OBIEE around data visualisation, self-service and mobile – watch this space, as they say. Thanks again to Matt Bedin for coming over from the States to deliver the keynote, and for his other session later in the week where he demo’d BI in the Cloud and several usage scenarios.
We were also really pleased to be joined by some of the top OBIEE, Endeca and ODI developers from around the US and Europe, including Michael Rainey (Rittman Mead), Nick Hurt (IFPI), Truls Bergensen, Emiel van Bockel (CB), Robin Moffatt (Rittman Mead), Andrew Bond (Oracle) and Stewart Bryson (Rittman Mead), and none other than Christian Berg, an independent OBIEE / Essbase developer who’s well-known to the community through his blog and his Twitter handle, @Nephentur. We’ll have all the slides from the sessions up on the blog once the US event is over, and congratulations to Robin for winning the “Best Speaker” award for Brighton for his presentation “No Silver Bullets: OBIEE Performance in the Real World”.
We had a few special overseas guests in Brighton too: Christian Screen from Art of BI Software came across (he’ll be in Atlanta too this week, presenting this time), and we were also joined by Oracle’s Reiner Zimmerman, who some of you from the database/DW side will know from the Oracle DW Global Leaders’ Program. For me, though, one of the highlights was the joint session with Oracle’s Andrew Bond and our own Stewart Bryson, where they presented an update to the Oracle Information Management Reference Architecture, something we’ve been developing jointly with Andrew’s team and which now incorporates some of our thoughts around the agile deployment of this type of architecture. More on this on the blog shortly, and look out for the white paper and videos Andrew’s team are producing, which should be out on OTN soon.
So that’s it for Brighton this year – and now we’re doing it all again in Atlanta this week at the Renaissance Atlanta Midtown Hotel. We’ve got Lars George again delivering his masterclass, and an excellent – dare I say it, even better than Brighton’s – array of sessions including ones on Endeca, the In-Memory Option for the Oracle Database, TimesTen, OBIEE, BI Apps and Essbase. There’s still a few places left so if you’re interested in coming, you can book here and we’ll see you in Atlanta later this week!
Extended Visualisation of OBIEE Performance Data with Grafana
Recently I wrote about the new obi-metrics-agent tool and how it enables easy collection of DMS data from OBIEE into Whisper, the time-series database behind Graphite. In this post I’m going to show two things that take this idea further:
- How easy it is to add other data into Graphite
- How to install and use Grafana, a most excellent replacement for the graphite front-end.
Collecting data in Graphite
One of the questions I have been asked about using Graphite for collecting and rendering OBIEE DMS metrics is a very valid one: given that OBIEE is a data visualisation tool, and that it usually sits alongside a database, where is the value in introducing another tool that apparently duplicates both data storage and visualisation?
My answer is that it is horses for courses. Graphite has a fairly narrow use-case but what it does it does superbly. It lets you throw any data values at it (as we’re about to see) over time, and rapidly graph these out alongside any other metric in the same time frame.
You could do this with OBIEE and a traditional RDBMS, but you’d need to design the database table, write a load script, handle duplicates, handle date-time arithmetic, build an RPD, build graphs – and even then, you wouldn’t have some of the advanced flexibility that I am going to demonstrate with Grafana below.
Storing nqquery.log response times in Graphite
As part of my Rittman Mead BI Forum presentation “No Silver Bullets – OBIEE Performance in the Real World”, I have been doing a lot of work examining some of the internal metrics that OBIEE exposes through DMS and how these correlate with the timings that are recorded in the BI Server log, nqquery.log, for example:
[2014-04-21T22:36:36.000+01:00] [OracleBIServerComponent] [TRACE:2] [USER-33] [] [ecid: 11d1def534ea1be0:6faf73dc:14586304e07:-8000-00000000000006ca,0:1:9:6:102] [tid: e4c53700] [requestid: c44b002c] [sessionid: c44b0000] [username: weblogic] -------------------- Logical Query Summary Stats: Elapsed time 5, Response time 2, Compilation time 0 (seconds) [[ ]]
Now, flicking back and forth between the query log and the DMS data is tedious even with a single-user system, and as soon as you have multiple reports running it is pretty much impossible to match the timings from the log with data points in DMS. The astute among you will at this point be wondering about Usage Tracking data, but for reasons that you can find out if you attend the Rittman Mead BI Forum I am deliberately using nqquery.log instead.
Getting data into Graphite is ridiculously easy. Simply chuck a metric name, value, and timestamp at the Graphite data collector, Carbon, and that’s it. You can use whatever method you want for sending it; here I am just using the Linux command-line tool NetCat (nc):
echo "example.foo.bar 3 `date +%s`"|nc localhost 2003
This will log the value of 3 for the metric example.foo.bar at the current timestamp (as generated by date +%s). Timestamps are in Unix time, which is the number of seconds since 1st January 1970. You can specify historical values for your metric too:
echo "foo.bar 3 1386806400"|nc localhost 2003
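The plaintext protocol that Carbon speaks is trivial to drive from any language, not just nc. Here’s a minimal sketch of the same thing in Python – my own illustration, not part of obi-metrics-agent, and it assumes Carbon is listening on its default plaintext port, 2003:

```python
import socket
import time

def format_metric(path, value, timestamp=None):
    # Carbon's plaintext protocol: "metric.path value timestamp\n"
    if timestamp is None:
        timestamp = int(time.time())
    return "%s %s %d\n" % (path, value, timestamp)

def send_metric(path, value, timestamp=None, host="localhost", port=2003):
    # Open a TCP connection to Carbon and send a single metric line
    with socket.create_connection((host, port)) as sock:
        sock.sendall(format_metric(path, value, timestamp).encode("ascii"))

# Equivalent to the nc examples above:
# send_metric("example.foo.bar", 3)        # current timestamp
# send_metric("foo.bar", 3, 1386806400)    # historical value
```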
Looking in Graphite we can see the handful of test values I just sent through appear:
Tip: if you don’t see your data coming through, check out the logs in ~/graphite/storage/log/carbon-cache/carbon-cache-a/ (assuming you have Graphite installed in ~/graphite)
So, we know what data we want (nqquery.log timings), and how to get data into Graphite (send the data value to Carbon via nc). How do we bring the two together? We do this in the way that many Linux tools work: using pipes to join different commands together, each doing one thing and doing it well. The above example demonstrates this – the output from echo is redirected to nc.
To extract the data I want from nqquery.log I am using grep to isolate the lines of data that I want, and then gawk to parse the relevant data value out of each line. The output from gawk is then piped to nc just like above. The resulting command looks pretty grim, but that is mostly a result of the timestamp conversion into Unix time:
grep Elapsed nqquery.log |gawk '{sub(/\[/,"",$1);sub(/\]/,"",$1);sub(/\,/,"",$23);split($1,d,"-");split(d[3],x,"T");split(x[2],t,":");split(t[3],tt,".");e=mktime(d[1] " " d[2] " " x[1] " " t[1] " " t[2] " " tt[1]);print "nqquery.logical.elapsed",$23,e}'|nc localhost 2003
An example of the output of the above is:
nqquery.logical.response 29 1395766983
nqquery.logical.response 22 1395766983
nqquery.logical.response 22 1395766983
nqquery.logical.response 24 1395766984
nqquery.logical.response 86 1395767047
nqquery.logical.response 10 1395767233
nqquery.logical.response 9 1395767233
which we can then send straight to Carbon.
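If gawk isn’t your thing, the same extraction can be sketched in Python, where the timestamp handling is rather less grim. This is just an illustration of the parsing, not part of the original toolchain; note that unlike the mktime() call in the gawk one-liner, which works in local time, strptime with %z honours the UTC offset recorded in the log:

```python
import re
from datetime import datetime

# Matches the leading timestamp and the Logical Query Summary Stats figures, e.g.:
# [2014-04-21T22:36:36.000+01:00] ... Elapsed time 5, Response time 2, Compilation time 0 (seconds)
LINE_RE = re.compile(
    r"\[(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})\.\d+([+-]\d{2}:\d{2})\]"
    r".*Elapsed time (\d+), Response time (\d+), Compilation time (\d+)"
)

def parse_summary_line(line):
    """Return (epoch_seconds, elapsed, response, compilation), or None if no match."""
    m = LINE_RE.search(line)
    if m is None:
        return None
    ts = datetime.strptime(m.group(1) + m.group(2), "%Y-%m-%dT%H:%M:%S%z")
    return (int(ts.timestamp()), int(m.group(3)), int(m.group(4)), int(m.group(5)))
```

Each parsed tuple can then be printed as a "metric value timestamp" line and piped to nc exactly as before.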
I’ve created additional versions for other available metrics, which in total gives us:
# This will parse nqquery.log and send the following metrics to Graphite/Carbon, running on localhost port 2003
#   nqquery.logical.compilation
#   nqquery.logical.elapsed
#   nqquery.logical.response
#   nqquery.logical.rows_returned_to_client
#   nqquery.physical.bytes
#   nqquery.physical.response
#   nqquery.physical.rows
# NB it parses the whole file each time and sends all values to carbon.
# Carbon will ignore duplicates, but if you're working with high volumes
# it would be prudent to ensure the nqquery.log file is rotated
# appropriately.
grep Elapsed nqquery.log |gawk '{sub(/\[/,"",$1);sub(/\]/,"",$1);sub(/\,/,"",$23);split($1,d,"-");split(d[3],x,"T");split(x[2],t,":");split(t[3],tt,".");e=mktime(d[1] " " d[2] " " x[1] " " t[1] " " t[2] " " tt[1]);print "nqquery.logical.elapsed",$23,e}'|nc localhost 2003
grep Elapsed nqquery.log |gawk '{sub(/\[/,"",$1);sub(/\]/,"",$1);sub(/\,/,"",$26);split($1,d,"-");split(d[3],x,"T");split(x[2],t,":");split(t[3],tt,".");e=mktime(d[1] " " d[2] " " x[1] " " t[1] " " t[2] " " tt[1]);print "nqquery.logical.response",$26,e}'|nc localhost 2003
grep Elapsed nqquery.log |gawk '{sub(/\[/,"",$1);sub(/\]/,"",$1);split($1,d,"-");split(d[3],x,"T");split(x[2],t,":");split(t[3],tt,".");e=mktime(d[1] " " d[2] " " x[1] " " t[1] " " t[2] " " tt[1]);print "nqquery.logical.compilation",$29,e}'|nc localhost 2003
grep "Physical query response time" nqquery.log |gawk '{sub(/\[/,"",$1);sub(/\]/,"",$1);split($1,d,"-");split(d[3],x,"T");split(x[2],t,":");split(t[3],tt,".");e=mktime(d[1] " " d[2] " " x[1] " " t[1] " " t[2] " " tt[1]);print "nqquery.physical.response",$(NF-4),e}'|nc localhost 2003
grep "Rows returned to Client" nqquery.log |gawk '{sub(/\[/,"",$1);sub(/\]/,"",$1);split($1,d,"-");split(d[3],x,"T");split(x[2],t,":");split(t[3],tt,".");e=mktime(d[1] " " d[2] " " x[1] " " t[1] " " t[2] " " tt[1]);print "nqquery.logical.rows_returned_to_client",$(NF-1),e}'|nc localhost 2003
grep "retrieved from database" nqquery.log |gawk '{sub(/\[/,"",$1);sub(/\]/,"",$1);sub(/\,/,"",$(NF-9));split($1,d,"-");split(d[3],x,"T");split(x[2],t,":");split(t[3],tt,".");e=mktime(d[1] " " d[2] " " x[1] " " t[1] " " t[2] " " tt[1]);print "nqquery.physical.rows",$(NF-9),e}'|nc localhost 2003
grep "retrieved from database" nqquery.log |gawk '{sub(/\[/,"",$1);sub(/\]/,"",$1);split($1,d,"-");split(d[3],x,"T");split(x[2],t,":");split(t[3],tt,".");e=mktime(d[1] " " d[2] " " x[1] " " t[1] " " t[2] " " tt[1]);print "nqquery.physical.bytes",$(NF-7),e}'|nc localhost 2003
When I run this script, it scrapes the data out of nqquery.log and sends it to Carbon, from where I can render it in Graphite:
or even better, Grafana:
Grafana
Grafana is a replacement for the default Graphite front-end, written by Torkel Ödegaard and available through its very active GitHub repository.
It’s a great way to very rapidly develop and explore dashboards of data sourced from Graphite. It’s easy to install too. Using SampleApp as an example, set up per the obi-metrics-agent example, do the following:
# Create a folder for Grafana
mkdir /home/oracle/grafana
cd /home/oracle/grafana
# Download the zip from http://grafana.org/download/
wget http://grafanarel.s3.amazonaws.com/grafana-1.5.3.zip
# Unzip it and rearrange the files
unzip grafana-1.5.3.zip
mv grafana-1.5.3/* .
# Create & update the config file
cp config.sample.js config.js
sed -i -e 's/8080/80/g' config.js
# Add grafana to apache config
sudo sed -i'.bak' -e '/Alias \/content/i Alias \/grafana \/home\/oracle\/grafana' /etc/httpd/conf.d/graphite-vhost.conf
sudo service httpd restart
# Download ElasticSearch from http://www.elasticsearch.org/overview/elkdownloads/
cd /home/oracle/grafana
wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.1.1.zip
unzip elasticsearch-1.1.1.zip
# Run elasticsearch
nohup /home/oracle/grafana/elasticsearch-1.1.1/bin/elasticsearch &
# NB if you get an out of memory error, it could be a problem with the JDK
# available. Try installing java-1.6.0-openjdk.x86_64 and adding it to the path.
At this point you should be able to browse to http://localhost/grafana/ on your SampleApp machine and see the Grafana homepage.
One of the reasons I like working with Grafana so much is how easy it is to create very smart, interactive dashboards. Here’s a simple walkthrough.
- Click on the Open icon and then New to create a new dashboard
- On the new dashboard, click Add a panel to this row, set the Panel Type to Graphite, click on Add Panel and then Close.
- Click on the title of the new graph and select Edit from the menu that appears. In the edit screen click on Add query and from the select metric dropdown list define the metric that you want to display
From here you can add additional metrics to the graph, or add Graphite functions to the existing metric. I described the use of functions in my previous post about OBIEE and Graphite.
- Click on Back to Dashboard at the top of the screen to see your new graph in place. You can add rows to the dashboard, resize graphs, and add new ones. One of the really nice things you can do with Grafana is drag to zoom a graph, updating the time window shown for the whole page:
You can set dashboards to autorefresh too, from the time menu at the top of the screen, from where you can also select pre-defined windows.
- When it comes to interacting with the data, being able to click on a legend entry to temporarily ‘mute’ that metric is really handy.
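Incidentally, Grafana 1.x saves each dashboard to ElasticSearch as a JSON document, so dashboards can also be built or templated programmatically rather than only through the UI. The fragment below is a rough, from-memory sketch of the shape of that document (rows containing panels containing Graphite targets); the exact field names may differ between versions, so treat it purely as orientation:

```json
{
  "title": "OBIEE performance",
  "rows": [
    {
      "panels": [
        {
          "title": "nqquery.log response time",
          "type": "graphite",
          "targets": [
            { "target": "nqquery.logical.response" }
          ]
        }
      ]
    }
  ]
}
```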
This really is just scratching the surface of what Grafana can do. You can see more at the website, and a smart YouTube video.
Summary
Here I’ve shown how we can easily put additional, arbitrary data into Graphite’s datastore, Whisper. In this instance it was nqquery.log data that I wanted to correlate with OBIEE’s DMS data, but I’ve also used the same approach very successfully in the past to overlay the number of executing JMeter load test users with other data in Graphite.
I’ve also demonstrated Grafana, a tool I strongly recommend if you do any work with Graphite. As a front-end it is an excellent replacement for the default Graphite web front end, and it’s very easy to use too.
Previewing Three Oracle Data Visualization Sessions at the Atlanta US BI Forum 2014
Many of the sessions at the UK and US Rittman Mead BI Forum 2014 events in May focus on the back-end of BI and data warehousing, with for example Chris Jenkins’ session on TimesTen giving us some tips and tricks from TimesTen product development, and Wayne Van Sluys’s session on Essbase looking at what’s involved in Essbase database optimisation (full agendas for the two events can be found here). But two areas within BI that have got a lot of attention over the past couple of years are (a) data visualisation and (b) mobile, so I’m particularly pleased that our Atlanta event has three of the most innovative practitioners in this area – Kevin McGinley from Accenture (left in pictures below), Christian Screen from Art of BI (centre), and Patrick Rafferty from Branchbird (right) – talking about what they’ve been doing in these areas.
If you were at the BI Forum a couple of years ago you’ll of course know Kevin McGinley, who won the “best speaker” award the previous year and has most recently gone on to organise the BI track at ODTUG KScope and write for OTN and his own blog, Oranalytics.blogspot.com. Kevin also hosts, along with our own Stewart Bryson, a video podcast series on iTunes called “Real-Time BI with Kevin & Stewart”, and I’m excited that he’s joining us again at this year’s BI Forum in Atlanta to talk about adding 3rd party visualisations to OBIEE. Over to Kevin…
“I can’t tell you how many times I’ve told someone that I can’t precisely meet a certain charting requirement because of a lack of configurability or variety in the OBIEE charting engine. Combine that with an increase in the variety and types of data people are interested in visualizing within OBIEE, and you have a clear need. Fortunately, OBIEE is a web-based tool and can leverage other visualization engines, if you just know how to work with the engine and embed it into OBIEE.
In my session, I’ll walk through a variety of reasons you might want to do this and the various approaches for doing it. Then, I’ll take two specific engines and show you the process for building a visualization with them right in an OBIEE Analysis. In both examples, you’ll come away with a capability you’ve never been able to do directly in OBIEE before.”
Another speaker, blogger, writer and developer very-well known to the OBIEE community is Art of BI Software’s Christian Screen, co-author of the Packt book “Oracle Business Intelligence Enterprise Edition 11g: A Hands-On Tutorial” and developer of the OBIEE collaboration add-in, BITeamwork. Last year Christian spoke to us about developing plug-ins for OBIEE, but this year he’s returned to a topic he’s very passionate about – mobile BI, and in particular, Oracle’s Mobile App Designer. According to Christian:
“Last year Oracle marked its mobile business intelligence territory by updating its Oracle BI iOS application with a new look and feel. Unbeknownst to many, they also released the cutting-edge Oracle BI Mobile Application Designer (MAD). These are both components available as part of the Oracle BI Foundation Suite. But it is where they are taking the mobile analytics platform that is most interesting at the moment as we look at the mobile analytics consumption chain. MAD is still in its 1.x release and there is a lot of promise with this tool to satisfy the analytical cravings growing in the bellies of many enterprise organizations. There is also quite a bit of discussion around building new content just for mobile consumption compared to viewing existing content through the mobile applications native to major mobile devices.
The “Oracle BI Got MAD and You Should be Happy” session will discuss these topics and I’ll be sharing the stage with Jayant Sharma from Oracle BI Product Development where we’ll also be showing some cutting edge material and demos for Oracle BI MAD. Because MAD provides a lot of flexibility for development customizations, compared to the Oracle BI iOS/Android applications, our session will explore business use cases around pre-built MAD applications, HTML5, mobile security, and development of plug-ins using the MAD SDK. One of the drivers for this session is to show how many of the Oracle Analytics components integrate with MAD and how an Oracle BI developer can quickly leverage the capabilities of MAD to show the tool’s value within their current Oracle BI implementation.
We will also discuss the common concern of mobile security by touching on the BitzerMobile acquisition and using the central mobile configuration settings for Oracle BI Mobile. The crowd will hopefully walk away with a better understanding of Oracle BI mobility with MAD and a desire to go build something.”
As well as OBIEE and Oracle Mobile App Designer, Oracle also have another product, Oracle Endeca Information Discovery, that combines a data aggregation and search engine with dashboard visuals and data discovery. One of the most innovative partner companies in the Endeca space are Branchbird, and we’re very pleased to have Branchbird’s Patrick Rafferty join us to talk about “More Than Mashups – Advanced Visualizations and Data Discovery”. Over to Patrick …
“In this session, we’ll explore how Oracle Endeca customers are moving beyond simple dashboards and charts and creating exciting visualizations on top of their data using Oracle Endeca Studio. We’ll discuss how the latest trends in data visualization, especially geospatial and temporal visualization, can be brought into the enterprise and how they drive competitive advantage.
This session will show in-production real-life examples of how extending Oracle Endeca Studio’s visualization capabilities to integrate technology like D3 can create compelling discovery-driven visualizations that increase revenue, cut cost and enhance the ability to answer unknown questions through data discovery.”
The full agenda for the Atlanta and Brighton BI Forum agendas can be found on this blog post, and full details of both events, including registration links, links to book accommodation and details of the Lars George Cloudera Hadoop masterclass, can be found on the Rittman Mead BI Forum 2014 home page.
Preview of Maria Colgan, and Andrew Bond/Stewart Bryson Sessions at RM BI Forum 2014
We’ve got a great selection of presentations at the two upcoming Rittman Mead BI Forum 2014 events in Brighton and Atlanta, including sessions on Endeca, TimesTen, OBIEE (of course), ODI, GoldenGate, Essbase and Big Data (full timetable for both events here). Two of the sessions I’m particularly looking forward to though are ones by Maria Colgan, product manager for the new In-Memory Option for Oracle Database, and another by Andrew Bond and Stewart Bryson, on an update to Oracle’s reference architecture for Data Warehousing and Information Management.
The In-Memory Option for Oracle Database was of course the big news item from last year’s Oracle Openworld, promising to bring in-memory analytics and column-storage to the Oracle Database. Maria is of course well known to the Oracle BI and Data Warehousing community through her work with the Oracle Database Cost-Based Optimizer, so we’re particularly glad to have her at the Atlanta BI Forum 2014 to talk about what’s coming with this new feature. I asked Maria to jot down a few words for the blog on what she’ll be covering, so over to Maria:
“At Oracle Open World last year, Oracle announced the upcoming availability of the Oracle Database In-Memory option, a solution for accelerating database-driven business decision-making to real-time. Unlike specialized in-memory database approaches that are restricted to particular workloads or applications, Oracle Database 12c leverages a new in-memory column store format to speed up analytic workloads. Given this announcement and the performance improvements promised by this new functionality, is it still necessary to create a separate access and performance layer in your data warehouse environment, or to run your Oracle data warehouse on an Exadata environment?
This session explains in detail how Oracle Database In-Memory works and will demonstrate just how much of a performance improvement you can expect. We will also discuss how it integrates into the existing Oracle Data Warehousing Architecture and with an Exadata environment.”
The other session I’m particularly looking forward to is one being delivered jointly by Andrew Bond, who heads up Enterprise Architecture at Oracle and was responsible, along with Doug Cackett, for the various data warehousing, information management and big data reference architectures we’ve covered on the blog over the past few years, including the first update to include “big data” a year or so ago.
Back towards the start of this year, Stewart, myself and Jon Mead met up with Andrew and his team to work together on an update to this reference architecture, and Stewart carried on with the collaboration afterwards, bringing in some of our ideas around agile development, big data and data warehouse design into the final architecture. Stewart and Andrew will be previewing the updated reference architecture at the Brighton BI Forum event, and in the meantime, here’s a preview from Andrew:
“I’m very excited to be attending the event and unveiling Oracle’s latest iteration of the Information Management reference architecture. In this version we have focused on a pragmatic approach to “Analytics 3.0”, and in particular looked at bringing an agile methodology to break down the IT / business barrier. We’ve also examined the exploitation of in-memory technologies and the Hadoop ecosystem, and how to navigate the plethora of new technology choices.
We’ve worked very closely with a number of key customers and partners on this version – most notably Rittman Mead – and I’m delighted that Stewart and I will be able to co-present the architecture and receive immediate feedback from delegates.”
Full details of the event, running in Brighton on May 7-9th 2014 and Atlanta, May 15th-17th 2014, can be found on the Rittman Mead BI Forum 2014 homepage, and the agendas for the two days are on this blog post from earlier in the week.
Final Timetable and Agenda for the Brighton and Atlanta BI Forums, May 2014
It’s just a few weeks now until the Rittman Mead BI Forum 2014 events in Brighton and Atlanta, and there’s still a few spaces left at both events if you’d still like to come – check out the main BI Forum 2014 event page, and the booking links for Brighton (May 7th – 9th 2014) and Atlanta (May 14th – 16th 2014).
We’re also now able to publish the timetable and running order for the two events – session order can still change between now and the events, but this is what we’re planning to run, first of all in Brighton, with the photos below from last year’s BI Forum.
Brighton BI Forum 2014, Hotel Seattle, Brighton
Wednesday 7th May 2014 – Optional 1-Day Masterclass, and Opening Drinks, Keynote and Dinner
- 9.00 – 10.00 – Registration
- 10.00 – 11.00 : Lars George Hadoop Masterclass Part 1
- 11.00 – 11.15 : Morning Coffee
- 11.15 – 12.15 : Lars George Hadoop Masterclass Part 2
- 12.15 – 13.15 : Lunch
- 13.15 – 14.15 : Lars George Hadoop Masterclass Part 3
- 14.15 – 14.30 : Afternoon Tea/Coffee/Beers
- 14.30 – 15.30 : Lars George Hadoop Masterclass Part 4
- 17.00 – 19.00 : Registration and Drinks Reception
- 19.00 – Late : Oracle Keynote and Dinner at Hotel
Thursday 8th May 2014
- 08.45 – 09.00 : Opening Remarks Mark Rittman, Rittman Mead
- 09.00 – 10.00 : Emiel van Bockel : Extreme Intelligence, made possible by …
- 10.00 – 10.30 : Morning Coffee
- 10.30 – 11.30 : Chris Jenkins : TimesTen for Exalytics: Best Practices and Optimisation
- 11.30 – 12.30 : Robin Moffatt : No Silver Bullets : OBIEE Performance in the Real World
- 12.30 – 13.30 : Lunch
- 13.30 – 14.30 : Adam Bloom : Building a BI Cloud
- 14.30 – 14.45 : TED: Paul Oprea : “Extreme Data Warehousing”
- 14.45 – 15.00 : TED : Michael Rainey : “A Picture Can Replace A Thousand Words”
- 15.00 – 15.30 : Afternoon Tea/Coffee/Beers
- 15.30 – 15.45 : Reiner Zimmerman : About the Oracle DW Global Leaders Program
- 15.45 – 16.45 : Andrew Bond & Stewart Bryson : Enterprise Big Data Architecture
- 19.00 – Late: Depart for Gala Dinner, St Georges Church, Brighton
Friday 9th May 2014
- 9.00 – 10.00 : Truls Bergensen : Drawing in a New Rock on the Map – How Endeca Will Fit into Your Oracle BI Topography
- 10.00 – 10.30 : Morning Coffee
- 10.30 – 11.30 : Nicholas Hurt & Michael Rainey : Real-time Data Warehouse Upgrade – Success Stories
- 11.30 – 12.30 : Matt Bedin & Adam Bloom : Analytics and the Cloud
- 12.30 – 13.30 : Lunch
- 13.30 – 14.30 : Gianni Ceresa : Essbase within/without OBIEE – not just an aggregation engine
- 14.30 – 14.45 : TED : Marco Klaassens : “Speed up RPD Development”
- 14.45 – 15.00 : TED : Christian Berg : “Neo’s Voyage in OBIEE”
- 15.00 – 15.30 : Afternoon Tea/Coffee/Beers
- 15.30 – 16.30 : Alistair Burgess : “Tuning TimesTen with Aggregate Persistence”
- 16.30 – 16.45 : Closing Remarks (Mark Rittman)
Atlanta BI Forum 2014, Renaissance Mid-Town Hotel, Atlanta
Wednesday 14th May 2014 – Optional 1-Day Masterclass, and Opening Drinks, Keynote and Dinner
- 9.00-10.00 – Registration
- 10.00 – 11.00 : Lars George Hadoop Masterclass Part 1
- 11.00 – 11.15 : Morning Coffee
- 11.15 – 12.15 : Lars George Hadoop Masterclass Part 2
- 12.15 – 13.15 : Lunch
- 13.15 – 14.15 : Lars George Hadoop Masterclass Part 3
- 14.15 – 14.30 : Afternoon Tea/Coffee/Beers
- 14.30 – 15.30 : Lars George Hadoop Masterclass Part 4
- 16.00 – 18.00 : Registration and Drinks Reception
- 18.00 – 19.00 : Oracle Keynote & Dinner
Thursday 15th May 2014
- 08.45 – 09.00 : Opening Remarks Mark Rittman, Rittman Mead
- 09.00 – 10.00 : Kevin McGinley : Adding 3rd Party Visualization to OBIEE
- 10.00 – 10.30 : Morning Coffee
- 10.30 – 11.30 : Richard Tomlinson : Endeca Information Discovery for Self-Service and Big Data
- 11.30 – 12.30 : Omri Traub : Endeca and Big Data: A Vision for the Future
- 12.30 – 13.30 : Lunch
- 13.30 – 14.30 : Dan Vlamis : Capitalizing on Analytics in the Oracle Database in BI Applications
- 14.30 – 15.30 : Susan Cheung : TimesTen In-Memory Database for Analytics – Best Practices and Use Cases
- 15.30 – 15.45 : Afternoon Tea/Coffee/Beers
- 15.45 – 16.45 : Christian Screen : Oracle BI Got MAD and You Should Be Happy
- 18.00 – 19.00 : Special Guest Keynote : Maria Colgan : An introduction to the new Oracle Database In-Memory option
- 19.00 – leave for dinner
Friday 16th May 2014
- 09.00 – 10.00 : Patrick Rafferty : More Than Mashups – Advanced Visualizations and Data Discovery
- 10.00 – 10.30 : Morning Coffee
- 10.30 – 12.30 : Matt Bedin : Analytic Applications and the Cloud
- 12.30 – 13.30 : Lunch
- 13.30 – 14.30 : Philippe Lions : What’s new on 2014 HY1 OBIEE SampleApp
- 14.30 – 15.30 : Stewart Bryson : ExtremeBI: Agile, Real-Time BI with Oracle Business Intelligence, Oracle Data Integrator and Oracle GoldenGate
- 15.30 – 16.00 : Afternoon Tea/Coffee/Beers
- 16.00 – 17.00 : Wayne Van Sluys : Everything You Know about Oracle Essbase Tuning is Wrong or Outdated!
- 17.00 – 17.15 : Closing Remarks (Mark Rittman)