
Using OBIEE against Transactional Schemas Part 1: Introduction

What’s the pervasive myth surrounding the Oracle Business Intelligence Enterprise Edition (OBIEE) product suite?

OBIEE is only useful for reporting against star schemas.

It’s true that the Business Model and Mapping (BMM) layer in OBIEE always presents a dimensional model, or star schema, to report developers, but our imagination, plus some real chops in metadata development, are the only ingredients required for reporting off any kind of model. The best practice in business intelligence delivery is always to build a data warehouse. Even with the rise of game-changing technologies such as in-memory analytics beasts like Exalytics, or database monsters like Exadata, this simple best practice hasn’t changed for the most part. All things equal, a data warehouse is the best location for important business data to drive analytics. The landscape is certainly shifting though, and the power of engineered systems from Oracle combined with metadata-driven BI tools like OBIEE 11g should give us pause when considering future architectures. Are we now primed to see a return to strict transactional reporting?

Maybe. Pure transactional reporting is problematic. There are, of course, the usual performance issues. Equally troublesome is the difficulty in distilling a physical model down to a format that is easy for business users to understand. Dimensional models are typically the way business users envision their business: simple, inclusive structures for each entity. The standard OLTP data model that takes two of the four walls in the conference room to display will never make sense to your average business user.

So why would we want to report against transactional schemas? One reason is a simple lack of conviction that all the hard work required to deliver a conformed data warehouse can deliver the transformational power that the business seeks. Perhaps we just want to get our feet wet before diving head-first into a full-blown data warehouse delivery project. Budgetary restrictions are another common reason. Similarly, it’s hard to wash the stench off of failed data warehouse projects, and CIOs are hesitant to undertake another expensive BI project when a previous failure is still in the rearview mirror. The simplest reason, however, is that we can. The power of the OBIEE 11g semantic layer gives us the opportunity to hide the dirty business going on underneath our complex metadata layer. We should feel empowered to make the leap and deliver reports to users quickly, without iteration after iteration of ETL. Besides, the OBIEE product has impressive roots in delivering transactional reporting, which Mark has described in-depth before.

Over the next few posts, I’ll describe some techniques for OLTP reporting using OBIEE 11g. I’ve blogged about transactional reporting before (in a way), as a part of the ExtremeBI Agile methodology I’ve championed over the last few years. But I didn’t go into a lot of detail about specific techniques, so that’s what I really want to focus on with this new series of posts. For those of you who have seen me speak before, or read any of my white papers or other blog entries, you know I have a hard time getting through any of these without mentioning the Oracle Next-Generation Reference Architecture. This blog post is no different. If we were to use GoldenGate to stream our transactional changes to a foundation layer instead, as Michael Rainey described in his series of blog posts on the subject, we would be able to do a lot more interesting things with our transactional schema. However, for the purpose of this series, my demonstrations will be directly against source systems.

Speaking of source systems, I wanted to give a teaser for the transactional systems I’ll be using as examples for these posts. I settled on a few of Oracle’s public domain, pre-canned Application Express (APEX) applications and their accompanying schemas, which used to be available as a free download, though they don’t seem to be available anymore. The first I’ll use is called Customer Tracking, a very simple CRM of sorts with just the right number of challenges, and a degree of difficulty that makes for a hearty introduction. It uses a third-normal form (3NF) data model for most purposes, and this is exactly the type of schema that most of us find ourselves trying to model in OBIEE when we are modeling transactional schemas.

I’ll be using another of the public domain Oracle APEX applications to describe some other techniques when the application schema isn’t really normalized at all: the Ask the Expert application, which many of you will recognize as the application used to power the AskTom website. I couldn’t resist putting “Ask Stewart” at the top: every Oracle Database junkie has dreamed of seeing their name at the top of that site.

So keep a lookout for Part 2, where we’ll jump straight into some of the best practices and techniques.

Agile Data Warehousing with Exadata and OBIEE: ETL Iteration

This is the fourth entry in my series on Agile Data Warehousing with Exadata and OBIEE. To see all the previous posts, check the introductory posting which I have updated with all the entries in the series.

In the last post, I described what I call the Model-Driven iteration, where we take thin requirements from the end-user in the form of a user story and generate the access and performance layer, or our star schema, logically using the OBIEE semantic model. Our first several iterations will likely be Model-Driven as we work with the end user to fine-tune the content he or she wants to see on the OBIEE dashboards. As user stories are opened, completed and validated throughout the project, end users are prioritizing them for the development team to work on. Eventually, there will come a time when an end user opens a story that is difficult to model in the semantic layer. Processes to correct data quality issues are a good example, and despite having the power of Exadata at our disposal, we may find ourselves in a performance hole that even the Database Machine can’t dig us out of. In these situations, we reflect on our overall solution and consider the maxim of Agile methodology: “refactoring”, or “rework”.

For Extreme BI, the main form of refactoring is ETL. The pessimist might say: “Well, now we have to do ETL development, what a waste of time all that RPD modeling was.” But is that the case? First off… think about our users. They have been running dashboards for some time now with at least a portion of the content they need to get their jobs done. As the die-hard Agile proponent will tell you… some is better than none. But also… the process of doing the Model-Driven iteration puts our data modelers and our ETL developers in a favorable position. We’ve eliminated the exhaustive data modeling process, because we already have our logical model in the Business Model and Mapping layer (BMM).

But we have more than that. We also have our source-to-target information documented in the semantic metadata layer. We can see that information using the Admin Tool, as depicted below, or we can also use the “Repository Documentation” option to generate some documented source-to-target mappings.

When embarking on ETL development, it’s common to do SQL prototyping before starting the actual mappings to make sure we understand the particulars of granularity. However, we already have these SQL prototypes in the nqquery.log file… all we have to do is look at them. The combination of the source-to-target mappings and the SQL prototypes provides all the artifacts necessary to get started with the ETL.
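
For illustration, the physical SQL lifted from nqquery.log can double as a quick grain check before any mappings are built. A minimal sketch, using hypothetical transaction tables rather than the real example schema:

    -- Hypothetical grain check: confirm the candidate logical fact really is
    -- one row per order line before committing to the ETL mapping.
    SELECT   t.order_id,
             t.line_number,
             COUNT(*) AS row_count
    FROM     pos_trans_header h
    JOIN     pos_trans t
      ON     t.order_id = h.order_id
    GROUP BY t.order_id, t.line_number
    HAVING   COUNT(*) > 1;   -- any rows returned signal a grain problem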

When using ETL processing to “instantiate” our logical model into the physical world, we can’t abandon our Agile imperatives: we must still deliver the new content, and corresponding rework, within a single iteration. So whether the end user is opening the user story because the data quality is abysmal, or because the performance is just not good enough, we must vow to deliver the ETL Iteration time-boxed, in exactly the same manner that we delivered the Model-Driven Iteration. So, if we imagine that our user opens a story about data quality in our Customer and Product dimensions, and we decide that all we have time for in this iteration are those two dimension tables, does it make sense for us to deliver those items in a vacuum? With the image below depicting the process flow for an entire subject area, can we deliver it piecemeal instead of all at once?

The answer, of course, is that we can. We’ll develop the model and ETL exactly as we would if our goal was to plug the dimensions into a complete subject area. We use surrogate keys as the primary key for each dimension table, facilitating joining our dimension tables to completed fact tables. But we don’t have completed fact tables at this point in our project… instead we have a series of transaction tables that work together to form the basis of a logical fact table. How can we use a dimension table with a surrogate key to join to our transactional “fact” table that doesn’t yet have these surrogate keys?

We fake it. Along with surrogate keys, the long-standing best practice of dimension table delivery has been to include the source system natural key, as well as effective dates, in all our dimension tables. These attributes are usually included to facilitate slowly-changing dimension (SCD) processing, but we’ll exploit them for our Agile piecemeal approach as well. So in our example, we have a properly formed Customer dimension that we want to join to our logical fact table, as depicted below:
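
As a rough sketch of the shape such a dimension table takes (the column names here are illustrative, not taken from the example schema):

    -- Illustrative Customer dimension: surrogate key for star joins, plus the
    -- source system natural key and effective dates used for Type 2 SCD
    -- processing, which we also exploit for the hybrid join described below.
    CREATE TABLE customer_dim (
      customer_key    NUMBER         NOT NULL,   -- surrogate key
      customer_id     NUMBER         NOT NULL,   -- source system natural key
      customer_name   VARCHAR2(100),
      effective_from  DATE           NOT NULL,
      effective_to    DATE           NOT NULL,   -- e.g. 31-DEC-9999 for the current row
      current_flag    CHAR(1)        DEFAULT 'Y',
      CONSTRAINT customer_dim_pk PRIMARY KEY (customer_key)
    );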

We start by creating aliases to our transactional “fact” tables (called POS_TRANS_HYBRID and POS_TRANS_HEADER_HYBRID in the example above), because we don’t want to upset the logical table source (LTS) that we are already using for the pure transactional version of the logical fact table. We create a complex join between the customer source system natural key and transaction date in our hybrid alias, and the natural key and effective dates in the dimension table. We use the effective dates as well to make sure we grab the correct version of the customer entity in question in situations where we have enabled Type 2 SCDs (the usual standard) in our dimension table.

This complex logic of using the natural key and effective dates is identical to the logic we would use in what Ralph Kimball calls the “surrogate pipeline”: the ETL processing used to replace natural keys with surrogate keys when loading a proper fact table. Using Customer and Sales attributes in an analysis, we can see the actual SQL that’s generated:
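
The screenshot aside, the shape of that SQL is easy to sketch. The following is only an approximation with illustrative table and column names, not the BI Server’s exact output:

    -- Approximation of the hybrid join: the transactional "fact" alias joins to
    -- the dimension on the natural key and the Type 2 effective-date range.
    SELECT   d.customer_name,
             SUM(t.sale_amount) AS sales
    FROM     pos_trans_hybrid t
    JOIN     customer_dim d
      ON     d.customer_id = t.customer_id
     AND     t.transaction_date BETWEEN d.effective_from AND d.effective_to
    GROUP BY d.customer_name;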

We can view this hybrid approach as an intermediate step, but there is also nothing wrong with it as a long-term approach if the users are happy and Exadata makes our queries scream. If you think about it… a surrogate key is just an easy way of representing the natural key of the table, which is the source system natural key plus the unique effective dates for the entity. A surrogate key makes this relationship much easier to envision, and certainly easier to code in SQL, but when we are insulated from the ugliness of the join with Extreme Metadata, do we really care? If our end users ever open a story asking for rework of the fact table, we may consider manifesting that table physically as well. Once complete, we would need to create another LTS for the Customer dimension (using an alias to keep it separate from the table that joins to the transactional tables). This alias would be configured to join directly to the new Sales fact table across the surrogate key… exactly how we would expect a traditional data warehouse to be modeled in the BMM. The physical model will look nearly identical to our logical model, and the generated SQL will be less interesting:
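
For comparison, a hedged sketch of that less interesting SQL once the Sales fact table exists physically (again with illustrative names): a plain star join across the surrogate key.

    -- Straightforward star join across the surrogate key.
    SELECT   d.customer_name,
             SUM(f.sale_amount) AS sales
    FROM     sales_fact f
    JOIN     customer_dim d
      ON     d.customer_key = f.customer_key
    GROUP BY d.customer_name;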

Now that I’ve described the Model-Driven and ETL Iterations, it’s time to discuss what I call the Combined Iteration, which is likely what most of the iterations will look like once the project has achieved some maturity. In Combined Iterations, we work on adding new or refactored RPD content alongside new or refactored ETL content in the same iteration. Now the project really makes sense to the end user. We allow the user community, the people actually consuming the content, to use user stories to dictate what the developers work on in the next iteration. The users will constantly open new stories, some asking for new content, and others requesting modifications to existing content. All Agile methodologies put the burden of prioritizing user stories squarely on the shoulders of the user community. Why should IT dictate to the user community where priorities lie? If we have delivered fabulous content sourced with the Model-Driven paradigm, and Exadata provides the performance necessary to make this “real” content, then there is no reason for the implementors to dictate to the users the need to manifest that model physically with ETL when they haven’t asked for it. If whole portions of our data warehouse are never implemented physically with ETL… do we care? The users are happy with what they have, and they think performance is fine… do we still force a “best practice” of a physical star schema on users who clearly don’t want it?

So that’s it for the Extreme BI methodology. At the outset of this series… I thought it would require five blog posts to make the case, but I was able to do it in four instead. So even when delivering blog posts, I can’t help but rework as I go along. Long live Agile!

Agile Data Warehousing with Exadata and OBIEE: Model-Driven Iteration

After laying the groundwork with an introduction, and following up with a high-level description of the required puzzle pieces, it’s time to get down to business and describe how Extreme BI works. At Rittman Mead, we have several projects delivering with this methodology right now, and more in the pipeline.

I’ll gradually introduce the different types of generic iterations that we engage in, focusing on what I call the “model-driven” iteration in this post. Our first few iterations are always model-driven. We begin when a user opens a user story requesting new content. For any request for new content, we require that all of the following elements are included in the story:

  1. A narrative about the data they are looking for, and how they want to see it. We are not looking for requirements documents here, but we are looking for the user to give a complete picture of what it is that they need.
  2. An indication of how they report on this content today. In a new data warehouse environment, this would include some sort of report that they are currently running against the source system, and in a perfect world, this would involve the SQL that is used to pull that report.
  3. An indication of data sets that are “nice to haves”. This might include data that isn’t available to them in the current paradigm of the report, or was simply too complicated to pull in that paradigm. After an initial inspection of these nice-to-haves and the complexity involved with including them in this story, the project manager may decide to pull these elements out and put them in a separate user story. This, of course, depends on the Agile methodology used, and the individual implementation of that methodology.

First we assign the story to an RPD developer, who uses the modeling capabilities in the OBIEE Admin Tool to “discover” the logical dimensional model tucked inside the user story, and develop that logical model inside the Business Model and Mapping (BMM) layer. Unlike a “pure” dimensional modeling exercise where we focus only on user requirements and pay very little attention to source systems, in model-driven development, we constantly shift between the source of the data, and how best the user story can be solved dimensionally. Instead of working directly against the source system though, we are working against the foundation layer in the Oracle Next-Generation Reference Data Warehouse Architecture. We work from a top-down approach, first creating empty facts and dimensions in the BMM, and mapping them to the foundation layer tables in the physical layer.

To take a simple example, we can see how a series of foundation layer tables developed in 3NF could be mapped to a logical dimension table as our Customer dimension:

Model-Driven Development of Dimension Table

I rearranged the layout from the Admin Tool to provide an “ETL-friendly” view of the mapping. All the way to the right, we can see the logical, dimensional version of our Customer table, and how it maps back to the source tables. This mapping could be quite complicated, with perhaps dozens of tables. The important thing to keep in mind is that this complexity is hidden from not only the consumer of the reports, but also from the developers. We can generate a similar example of what our Sales fact table would look like:

Another way of making the same point is to look at the complex, transactional model:

We can then compare this to the simplified, dimensional model:

And finally, when we view the subject area during development of an analysis, all we see are facts and dimensions. The front-end developer can be blissfully ignorant that he or she is developing against a complex transactional schema, because all that is visible is the abstracted logical model:

When mapping the BMM to complex 3NF schemas, the BI Server is very, very smart, and understands how to do more with less. OBIEE’s metadata capabilities are superior to those of other metadata products, and to a “roll-your-own metadata” approach using database views, for the following reasons:

  1. The generated SQL usually won’t involve self-joins, even when the same physical table is mapped into both the logical fact table and the logical dimension table.
  2. The BI Server will only include tables that are required to satisfy the request, either because they have columns mapped to the attributes being requested, or because they are required reference tables to bring disparate tables together. Any tables not required to satisfy the request will be excluded, as the sketch below illustrates.
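
As an example of the second point, suppose the Customer dimension is mapped across CUSTOMERS, ADDRESSES and STATES in the foundation layer, but an analysis only asks for the customer name alongside a sales measure. The generated SQL would leave the address tables out entirely; a hedged approximation, with illustrative names:

    -- Only the tables needed to satisfy the request are included: ADDRESSES and
    -- STATES, mapped to other Customer attributes, are pruned from the query.
    SELECT   c.cust_first_name || ' ' || c.cust_last_name AS customer_name,
             SUM(t.sale_amount)                           AS sales
    FROM     customers c
    JOIN     pos_trans t
      ON     t.customer_id = c.customer_id
    GROUP BY c.cust_first_name || ' ' || c.cust_last_name;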

Since the entire user story needs to be closed in a single iteration, the user who opened the story needs to be able to see the actual content. This means that the development of the analysis (or report) and the dashboard is also required to complete the story. It’s important to get something in front of the end user immediately, but it doesn’t have to be perfect. We should focus on a clear, concise analysis in the first iteration, so it’s easy for the end user to verify that the data is correct. In future iterations, we can deliver high-impact, eye-catching dashboards. Equally important to closing the story is being able to prove that it’s complete. In Agile methodologies, this is usually referred to as the “Validation Step” or “Showcase”. Since we have already produced the content, it’s easy to prove to the user that the story is complete. But suppose we believed we couldn’t deliver new content in a single iteration. That would imply an iteration during our project that didn’t include any actual end-user content. How would we go about validating or showcasing that? How would we showcase a completed ETL mapping, for instance, if we haven’t delivered any content to consume it?

What we have at the end of the iteration is a completely abstracted view of our model: a complex, transactional, 3NF schema presented as a star schema. We are able to deliver portions of a subject area, which is important for time-boxed iterations. The Extreme Metadata of OBIEE 11g allows us to hide this complexity in a single iteration, but it’s the performance of the Exadata Database Machine that allows us to build real analyses and dashboards and present them to the general user community.

In the next post, we’ll examine the ETL Iteration, and explore how we can gradually manifest our logical business model into a physical model over time. As you will see, the ETL iteration is an optional one… it will be absolutely necessary in some environments, and completely superfluous in others.

Agile Data Warehousing with Exadata and OBIEE: Puzzle Pieces

In the previous post, I laid the groundwork for describing Extreme BI: a combination of Exadata and OBIEE delivered with an Agile spirit. I discussed that the usual approach to Agile data warehousing is not Agile at all due to the violation of its main principle: working software delivered iteratively.

If you haven’t already deduced from my first post — or if you haven’t already seen me speak on this topic — what I am recommending is bypassing, either temporarily or permanently, the inhibitors specific to data warehousing projects which limit our ability to deliver working software quickly. Specifically, I’m recommending that we wait to build and populate physical star schemas until a later phase, if at all. Remember the two reasons that we build dimensional models: model simplicity and performance. With our Extreme BI solution, we have tools to counter both of those reasons. We have OBIEE 11g, with a rich metadata layer that presents our underlying data model, even if it is transactional, as a star schema to the end user. This removes our dependency on a simplistic physical model to provide a simplistic logical model to end users. We also have Exadata, which delivers world-class performance against any type of model, and can close the performance gap that star schemas would otherwise fill. With these tools at our disposal, we can postpone the long process of building dimensional models, at least for the first few iterations. This is the only way to get working software in front of the end user in a single iteration, and, as I will argue, this is the best way to collaborate with an end user and deliver the content they are expecting.

Of the puzzle pieces we need to deliver this model, the first is the Oracle Next-Generation Reference DW Architecture (we need an acronym for that), which Mark has already written about in-depth here. As you browse through that post, pay special attention to his formulation of the foundation layer, which is the most important layer for delivering Extreme BI.

Oracle Next-Generation Reference DW Architecture

Foundation Layer

This is our “process-neutral” layer, which means simply that it isn’t imbued with requirements about what users want and how they want it. Instead, the foundation layer has one job and one job only: tracking what happened in our source systems. Typically, the foundation layer logical model looks identical to the source systems, except that we have a few additional metadata columns on each record, such as commit timestamps and Oracle Database system change numbers (SCNs). There are other, more complex solutions for modeling the foundation layer when the 3NF from the source system or systems is not sufficient, such as data vault. Our foundation layer is generally “insert-only”, meaning we track all history so that we are insulated from changing user requirements in both the near and distant future.
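
As a rough sketch, a foundation layer table mirroring a source CUSTOMERS table might look like the following; the metadata column names are illustrative:

    -- Insert-only foundation table: the source columns plus audit metadata.
    CREATE TABLE fnd_customers (
      customer_id      NUMBER,
      cust_first_name  VARCHAR2(50),
      cust_last_name   VARCHAR2(50),
      -- ... remaining source columns ...
      src_commit_ts    TIMESTAMP  NOT NULL,   -- commit timestamp on the source
      src_commit_scn   NUMBER     NOT NULL,   -- source database SCN
      dml_type         CHAR(1)    NOT NULL    -- I/U/D as captured
    );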

UPDATE: Kent Graziano, a major data vault evangelist, has started blogging. Perhaps with some pressure from the public, we could “encourage” him to blog on what data vault would look like in a standard foundation layer.

Capturing Change

Also required for delivering Extreme BI is a process for capturing change from the source systems and rapidly applying it to the foundation layer, which I described briefly in one of my posts on real-time data warehousing. We have a bit of a tug-of-war at this point between Oracle Streams and Oracle GoldenGate. GoldenGate is the stated platform of the future because it’s a simple, flexible, powerful and resilient replication technology. However, it does not yet have powerful change data capture functionality specific to data warehouses, such as easy subscriptions to raw changed data, or support for multiple subscription groups. You can, in general, work around these limitations using the INSERTALLRECORDS parameter and some custom code (perhaps fodder for a future blog post). Regardless of the technology, Extreme BI requires a process for capturing and applying source system changes quickly and efficiently to the foundation layer on the Exadata Database Machine.
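
Most of that custom code boils down to giving downstream processes a clean view of the captured changes. One hedged sketch, assuming every change lands in the foundation layer as an insert stamped with its SCN (as in the illustrative FND_CUSTOMERS table above), is to derive the latest version of each row analytically:

    -- Latest version of each customer from an insert-only change table,
    -- ordered by the captured system change number.
    SELECT customer_id, cust_first_name, cust_last_name, src_commit_scn
    FROM  (SELECT c.*,
                  ROW_NUMBER() OVER (PARTITION BY customer_id
                                     ORDER BY src_commit_scn DESC) AS rn
           FROM   fnd_customers c)
    WHERE rn = 1
      AND dml_type <> 'D';   -- drop rows whose most recent change was a delete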

Extreme Performance

Although I’ll drill into more detail in the next post, the reason we need Extreme Performance is to make up for the performance gains we usually get from star schemas, since we won’t be building those, at least not in the initial iterations. Although Rittman Mead has deployed a variant of this methodology sans Exadata using a powerful Oracle Database RAC instead, there is no substitute for Exadata. Although the hardware on the Database Machine is superb, it’s really the software that is a game-changer. The most extraordinary features include smart scan and storage indexes, as well as hybrid columnar compression, which Mark talks about here and references an article by Arup Nanda found here. For years now, with standard Oracle data warehouses, we’ve pushed the architecture to its limits trying to reduce IO contention at the cost of CPU utilization, using database features such as partitioning, parallel query and basic block compression. But Exadata Storage can eliminate the IO boogeyman by combining these standard features with the Exadata-only features, elevating query performance against 3NF schemas to be on par with traditional star schemas, and beyond.
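
For illustration, the Exadata-only hybrid columnar compression is simply declared at DDL time, and can sit alongside the standard features mentioned above, such as partitioning; a sketch with illustrative names:

    -- Hybrid columnar compression (Exadata storage) plus interval partitioning.
    CREATE TABLE fnd_pos_trans (
      transaction_id    NUMBER,
      customer_id       NUMBER,
      transaction_date  DATE,
      sale_amount       NUMBER,
      src_commit_scn    NUMBER
    )
    COMPRESS FOR QUERY HIGH
    PARTITION BY RANGE (transaction_date)
      INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
      (PARTITION p_initial VALUES LESS THAN (DATE '2011-01-01'));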

Extreme Metadata

Extreme performance is only half the battle… we also need Extreme Metadata to provide us the proper level of abstraction so that report and dashboard developers still have a simplistic model to report against. This is what OBIEE 11g brings to the table. We have also delivered a variant of this methodology without OBIEE, using Cognos instead, which has a metadata layer called Framework Manager. As with Exadata, the BI Server has no equal in the metadata department, so my advice… don’t substitute ingredients.

Consider, for a moment, the evolution of dimensional modeling in deploying a data warehouse. Not too long ago, we had to solve most data warehousing issues with the logical model because BI tools were simplistic. Generally… there was no abstraction of the physical into the logical, unless you categorize the renaming of columns as abstraction. As these tools evolved, we often found ourselves with a choice: solve some user need in the logical model, or solve it with the feature set of the BI tool. The use of aggregation in data warehousing is a perfect example of this evolution. Designing aggregate tables used to be just another part of the logical modeling exercise, and the aggregates were generally represented in the published data model for the EDW. But now, building aggregates is more of a technical implementation than a logical one, as either the BI Server or the Oracle Database can handle the transparent navigation to aggregate tables.
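
On the database side, for instance, a summary can be declared as a materialized view with query rewrite enabled, and the optimizer will transparently redirect eligible detail-level queries to it; a minimal sketch, reusing the illustrative names from earlier:

    -- Monthly sales aggregate; eligible queries against the detail table can be
    -- transparently rewritten against this summary by the optimizer.
    CREATE MATERIALIZED VIEW sales_mth_mv
      BUILD IMMEDIATE
      REFRESH COMPLETE ON DEMAND
      ENABLE QUERY REWRITE
    AS
    SELECT   TRUNC(transaction_date, 'MM') AS sale_month,
             customer_id,
             SUM(sale_amount)              AS sale_amount,
             COUNT(*)                      AS row_count
    FROM     fnd_pos_trans
    GROUP BY TRUNC(transaction_date, 'MM'), customer_id;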

The metadata that OBIEE provides adds two necessary features for Agile delivery. First, we are able to report against complex transactional schemas, but still expose those schemas as simplified dimensional models. This allows us to bypass the complex ETL process, at least initially, so that we can get new subject areas into the users’ hands in a single iteration. Second, OBIEE’s capability to map multiple Logical Table Sources (LTSs) for the same logical table makes it easy to modify, or “remap”, the source of our logical tables over time. So, in later iterations, if we decide that it’s necessary to embark upon complex ETL processes to complete user stories, we can do so in the metadata layer without affecting our reports and dashboards, or changing the logical model that report developers are used to seeing.

Flow of Data Through the Three-Layer Semantic Model

More to Come…

In the next post, I’ll describe what I call the Model-Driven Iteration, where we use OBIEE against the foundation layer to expose new subject areas in a single iteration. After that, I’ll describe ETL Iterations, where we transform a portion of our model iteratively using ETL tools such as ODI, OWB or Informatica. Finally, I’ll describe what I call Combined Iterations, where both Model-Driven activity and ETL activity are going on at the same time.

Agile Data Warehousing with Exadata and OBIEE: Introduction

Over the last year, I’ve been speaking at conferences on one subject more than any others: Agile Data Warehousing with Exadata and OBIEE. Although I’ve been busy with client work and growing the US business, I realize I need to dedicate more time to blogging again, and this seemed like the logical subject to take up. So I’ll use the next few blog posts to make my case for what I like to call Extreme BI: an Agile approach to data warehousing using the combination of Extreme Performance and Extreme Metadata.

In a standard data warehouse implementation, whether we sit in the Inmon camp or the Kimball camp, some portion of our data model will be dimensional in nature: a star schema with facts and dimensions. So let me pose a question, which I think will lend itself well to diving into the Extreme BI discussion: why do we build dimensional models? The first reason is simplicity. We want to model our reporting structures in a way that makes sense to the business user. The standard OLTP data model that takes two of the four walls in the conference room to display is just never going to make sense to your average business user. At the end of a logical modeling exercise, I expect the end user to have a look at a completed dimensional model and say: “Yep… that’s our business alright”. The second reason we build dimensional models is performance. Denormalizing highly complex transactional models into simplified star schemas generally produces tremendous performance gains.

So my follow-up question: can the combination of Exadata and OBIEE, or Extreme BI, actually change the way we deliver projects? We’ve all seen the Exadata performance numbers that Oracle publishes, and I can tell you first hand the performance is impressive. Can this Extreme Performance combined with the Extreme Metadata that OBIEE provides give us a more compelling case for delivering data warehouses using Agile methodologies?

To start with, I’d like to paint a picture of what the typical waterfall data warehousing project looks like. The tasks we usually have to complete, in order, are the following:

  1. User interviews
  2. Construct requirement documents
  3. Create logical data model
  4. SQL prototyping of source transactional models
  5. Document source-to-target mappings
  6. ETL development
  7. Front-end development (analyses and dashboards)
  8. Performance tuning

Raise your hand if this looks familiar. We would have to go through all these steps, which could take months, before end users can see the fruits of our labor. To mitigate this scenario, organizations will attempt to deliver data warehouses using “Agile” methodologies. What this usually means, from my experience, is a simple repackaging of the same waterfall project plan into “iterations” or “sprints”, so that the project can be delivered iteratively. So the process might look like the following:

  1. Iteration 1: Interviews and user requirements
  2. Iteration 2: Logical modeling
  3. Iteration 3: ETL Development
  4. Iteration 4: Front-end development

But this, ladies and gentlemen, is not Agile. To get an understanding of what lies at the heart of Agile development, we need to look no further than the Agile Manifesto, or the history of the Agile Movement. When examining the different methodologies, there is one major theme that permeates all of them: working software delivered iteratively. It’s not enough to simply deliver the same old waterfall methodology in “sprints” or “iterations”, because, at the end of those iterations, we don’t have any working software… software that end users can actually use to improve their job or help them make better decisions. In the example above, we still require four iterations before we get any usable content. It doesn’t matter if we’ve written some complex ETL to load a fact table if the end user doesn’t have a working dashboard to go along with it.

To apply the Agile Manifesto to data warehouse delivery, it’s the following key elements that are required for us to deliver with a true Agile spirit:

  1. User stories instead of requirements documents: a user asks for particular content through a narrative process, and includes in that story whatever process they currently use to generate that content.
  2. Time-boxed iterations: iterations always have a standard length, and we choose one or more user stories to complete in that iteration.
  3. Rework is part of the game: there aren’t any missed requirements… only those that haven’t been addressed yet.

I’ve been conscious not to prescribe any distinct Agile methodology, though I can’t help using more Scrum-like concepts in this formulation. However, I think this list is generic enough to apply to most methodologies. Over the next few posts, I’ll discuss the necessary puzzle pieces to engage in Extreme BI, as well as how we might implement new subject area content in a single iteration. Additionally, I’ll discuss how these implementations might be reworked, or “refactored”, over several iterations to produce data warehouses that respond to user stories: what users want and when they want it.

Follow-up Posts

Agile Data Warehousing with Exadata and OBIEE: Puzzle Pieces

Agile Data Warehousing with Exadata and OBIEE: Model-Driven Iteration