Tag Archives: Oracle BI Apps

Successful BI Apps Implementation Part 2: BI Apps 7.9.6

Welcome to Part 2. If you missed Part 1, where I give an introduction to BI Apps and discuss the project life cycle, the link is below.

Successful BI Apps Implementation Part 1: Introduction and the Project Life Cycle

Successful BI Apps Implementation Part 2: BI Apps 7.9.6

In this post, I get technical and take a deeper look at BI Apps 7.9.6. Here goes!

Customising the Data Model

As defined by Oracle, customisations fall into the following categories:

  • Category 1 customisations where we add a new column to an existing fact or dimension table.
  • Category 2 customisations where we create new fact or dimension tables.
  • Category 3 customisations where we add an additional data source to an existing fact or dimension table.
  • Other trivial customisations such as changing filter conditions.

Typically, with the trivial and category 1 customisations, there isn’t much that can go wrong. We may need to think about auxiliary change capture (see below) if the column comes from a table other than the base table in the OLTP.

With the category 2 customisations, we need to consider whether new fact tables are suitable for being added to an existing subject area or (more commonly) whether they merit a subject area of their own. I sometimes see subject areas with multiple fact tables and many non-conformed dimensions. This happens when a subject area is iteratively expanded again and again until the usability of the subject area is severely diminished. Things have probably gone wrong if either a) it is possible to create queries that the BI server refuses to interpret or b) the development process constantly involves setting metrics to total over non-conformed dimensions.

Regarding category 2 customisations involving new dimensions, another complication that arises is the modelling of many-to-many (M:M) relationships from the OLTP. Take this example: a user looks at an opportunity in Siebel and as well as seeing all the fields that are visible in OBIEE, they also see a field called Sales Team. The user asks for this to be added to the Sales – CRM Pipeline subject area. This is non-trivial to model in OBIEE as the Siebel Sales Team field actually opens a shuttle applet that displays the multiple sales team members for the opportunity. It might seem obvious that Sales Team can’t be modelled as an attribute of the opportunity dimension but what should be done instead? 99% of the time, my advice would be to examine the business reason behind the request to add the column and find an alternative. It may be that just the primary sales team member is required for reporting purposes or that the Created By or Last Updated By users could be used instead. In the remaining 1% of cases we can consider the use of bridge tables, but considering the number of OOTB metrics that would have to be remodelled to accommodate this change, the functional advantage gained from the change may not be enough to justify such dramatic technical changes. In situations like this, knowing when not to customise can be important for both ease of use and ease of upgrade.

Category 3 customisations do not strictly require RPD changes but we should at least have a column to tell us which source system the row of data originated from. Also, if factual rows from any particular source system cannot be reconciled with certain dimensions (due to the lack of any foreign key column), it can be beneficial to add a row to that dimension (with ROW_WID equal to -1 for example) to cater for these fact rows. It allows us to differentiate between rows from an Oracle source system that do not have an associated dimension row and rows from a non-Oracle source system that cannot have an associated dimension row.
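As a minimal sketch of this pattern (the table, columns and values here are illustrative rather than taken from a specific OBAW version), seeding the row and resolving fact rows to it might look like this:

-- Seed an 'unspecified' dimension row so that fact rows with no usable foreign key
-- can still join to the dimension (illustrative names; -1 follows the example above)
INSERT INTO WC_CUSTOM_D (ROW_WID, INTEGRATION_ID, DATASOURCE_NUM_ID)
VALUES (-1, 'UNSPECIFIED', 0);
COMMIT;

-- In the fact load, the dimension key is then defaulted when no match is found, e.g.
--   CUSTOM_WID = NVL(<looked-up ROW_WID>, -1)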

Data Lineage / Tracking Customisations

One of the questions that I sometimes get asked by end users is ‘where does this column come from in EBS / Siebel?’. When dealing with OOTB BI Apps, this is usually quite easy to answer because:

  • Even though our semantic model may be vast, it is usually quite simple, and
  • Oracle provides good data lineage documentation and more importantly, descriptive text in the RPD which manifests as hover-over text in the UI.

However, I often find that after a couple of years of customising, the question isn’t so easy to answer. From an IT point of view, it’s usually quite easy to determine what is vanilla and what is custom, but from an end-user point of view, the exact functional definition of custom columns is not always that obvious. The simple solution is to follow the same standards that Oracle adheres to for the vanilla material: give new subject areas, presentation tables and presentation columns useful descriptions; keep subject areas small and intuitive (one or two fact tables per area); and maintain a data lineage spreadsheet that maps columns between source applications and the Oracle data warehouse.

This documentation becomes very useful when we come to upgrade BI Apps. Typically, BI Apps upgrades involve a combination of automated and manual code merging – using a 3-way repository merge on the RPD and some more manual steps to re-apply customisations to the new vanilla Informatica metadata. When testing the merged RPD and customising copies of the new Informatica mappings, the new code should be functionally identical but may be technically different due to data model changes between BI Apps versions. At this point, the above documentation becomes invaluable.

Indexing and Partitioning

In my previous post, I talked about building performance testing and monitoring into the project lifecycle. Here we will focus more on the design and development stages.

In the above section, I describe category 1 customisations as extremely simple. Well, they are, but the one mistake that people often make is not considering whether an index should be created along with the new column. This can lead to a slow but steady increase in average query times as more and more non-indexed columns are used to group and filter data. Generally, if a dimensional attribute is going to be used for grouping or filtering, it will probably benefit from an index. Whether to make it a bitmap index depends on the column’s cardinality: use a bitmap for a low-cardinality field such as ‘Customer Priority’ (high/medium/low), but not for ‘Customer ID’. Also, if a new column is used as a key in an update statement, make sure that the corresponding index has a type of ‘ETL’ in DAC so that it is not dropped as part of the ETL.
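As a minimal sketch (table and column names are illustrative, and any new index would also be registered in DAC with the appropriate usage type):

-- Low-cardinality attribute used for grouping and filtering: bitmap index
CREATE BITMAP INDEX X_W_CUSTOMER_D_M1
  ON W_CUSTOMER_D (X_CUSTOMER_PRIORITY);

-- High-cardinality attribute: standard b-tree index
CREATE INDEX X_W_CUSTOMER_D_M2
  ON W_CUSTOMER_D (X_CUSTOMER_ID);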

Partitioning is an interesting topic in the context of BI Apps. Typically, OOTB BI Apps reports and dashboards perform well even with large data volumes, due to the thousands of indexes provided with the OBAW. So why partition? I can think of two main reasons:

  • We have extremely large data volumes in some of our fact tables and have decided to partition by year and include a corresponding filter (i.e. current year) in one of our logical table sources.
  • We have performed a category 3 customisation or similar and again have a logical table source filter that could be used as a partition key. For example, our revenue fact contains rows sourced from both Siebel and another system, and only the Siebel data should be visible in the OOTB subject areas.

In both of the above scenarios, we know that the partition key will be used in queries generated by the BI server due to the RPD customisations that accompany the partitioning. So what about scenarios that don’t involve the corresponding RPD changes? Should we consider partitioning to speed up a certain group of reports that have certain filters applied? In my opinion, no, we should only consider it as a viable option when we can guarantee that a) the partition key will be used in queries and b) we expect little or no movement of rows across partitions. Even if these criteria are met, we should only be looking at partitioning after we are convinced that our indexing strategy is perfect. Why? Because badly chosen partition keys can make performance worse! Even the addition of a partition that speeds up some reports can have a negative impact on those that do not reference the partition key due to the increased number of I/O operations involved in reading indexes on multiple partitions.
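For the first scenario, a hedged sketch of the approach might look like the following; the names are illustrative and it assumes the date WID column follows the usual YYYYMMDD pattern:

-- Rebuild a large custom fact range-partitioned by year, matching an RPD logical
-- table source filter such as 'current year only'
CREATE TABLE WC_REVENUE_F_PART
PARTITION BY RANGE (POSTED_ON_DT_WID) (
  PARTITION P_2012  VALUES LESS THAN (20130101),
  PARTITION P_2013  VALUES LESS THAN (20140101),
  PARTITION P_OTHER VALUES LESS THAN (MAXVALUE)
)
AS SELECT * FROM WC_REVENUE_F;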

One important point to note is that bitmap indexes must be defined as LOCAL on partitioned tables. This means that we have to change how DAC creates its OOTB bitmap indexes if we partition an OOTB table. This can be done using Index Actions in DAC but should serve as another deterrent to unnecessary partitioning!
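For example (again with illustrative names), the bitmap indexes on a partitioned fact would need to be created along these lines, and the matching Index Actions in DAC would have to append the LOCAL keyword to the OOTB definitions:

-- Bitmap indexes on a partitioned table must be LOCAL (one index segment per partition)
CREATE BITMAP INDEX WC_REVENUE_F_PART_F1
  ON WC_REVENUE_F_PART (DELETE_FLG)
  LOCAL NOLOGGING;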

Customising the ETL

There are two main types of customisations that we make to the OOTB DAC and Informatica metadata. Firstly, we may take a copy of an OOTB mapping and make some minor changes. Typically, these will include new columns, sources and lookups, depending on the type of mapping. Secondly, we can create some custom mappings to extract from new sources or load into new targets. Before I give any advice about making these changes, let me first make a point about ETL vs ELT.

In OBIA 7.x, the Informatica server is the ETL engine. Compare this with Oracle Data Integrator where we typically use the Oracle Database as the ELT engine. With ODI, a fact mapping can be used to generate a single SQL statement that takes data from a staging table and loads into a target, populating the necessary foreign keys and generating surrogate keys where necessary. With Informatica, this will instead involve multiple SQL statements just to extract the staging data and data used in lookups, followed by a large amount of processing on the Informatica server itself. Finally, row-by-row insert / update statements will be issued to the target table. Clearly, this is less efficient than the ELT option regardless of how much optimisation we do on the Informatica server side.
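To illustrate the difference, this is the kind of single set-based statement that an ELT approach can push down to the database for a simple fact load; the names are hypothetical and this is a sketch of the pattern rather than code generated by either tool:

MERGE INTO WC_CUSTOM_F tgt
USING (
  -- read the staging table and resolve the dimension key in one pass
  SELECT stg.INTEGRATION_ID,
         stg.DATASOURCE_NUM_ID,
         NVL(dim.ROW_WID, 0) AS CUSTOM_WID,   -- 0 = standard 'unspecified' dimension row
         stg.REVENUE_AMT
  FROM   WC_CUSTOM_FS stg
  LEFT JOIN WC_CUSTOM_D dim
         ON  dim.INTEGRATION_ID    = stg.CUSTOM_ID
         AND dim.DATASOURCE_NUM_ID = stg.DATASOURCE_NUM_ID
) src
ON (    tgt.INTEGRATION_ID    = src.INTEGRATION_ID
    AND tgt.DATASOURCE_NUM_ID = src.DATASOURCE_NUM_ID)
WHEN MATCHED THEN
  UPDATE SET tgt.CUSTOM_WID  = src.CUSTOM_WID,
             tgt.REVENUE_AMT = src.REVENUE_AMT
WHEN NOT MATCHED THEN
  INSERT (ROW_WID, INTEGRATION_ID, DATASOURCE_NUM_ID, CUSTOM_WID, REVENUE_AMT)
  VALUES (WC_CUSTOM_F_SEQ.NEXTVAL, src.INTEGRATION_ID, src.DATASOURCE_NUM_ID,
          src.CUSTOM_WID, src.REVENUE_AMT);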

The above weakness of the tool means that when we need a mapping to exclusively perform updates, it is often tempting to add some custom SQL to the end of a mapping / workflow rather than creating a whole new mapping that will a) take longer to develop and b) probably run slower. The problem with this approach is code readability. The same applies to the use of stored procedures within mappings (which is often not great for performance either). So my advice is to minimise the number of PLP mappings (saving them for populating aggregate tables) and to stick to standard Informatica development methods where possible.

Changed Data Capture

Changed data capture (CDC) is one of the topics that seem to create problems for BI Apps developers. It’s not an intrinsically difficult concept to master but I have seen it overlooked in both the design and testing phases of a couple of projects (mainly when working with Siebel). It’s worth pointing out that CDC is handled very differently for different source systems due to the varying degrees of metadata capture and audit trail functionality between systems.

CDC for EBS is pretty simple – EBS tables have nice metadata columns such as LAST_UPDATE_DATE that allow us to easily extract data that has changed since the last ETL run (minus a few days). Siebel is slightly more complicated and involves creating change capture tables and triggers in the OLTP database. We still have nice metadata columns like in EBS, but we have to worry more often about auxiliary change capture (see below) and tracking hard-deletes. When working with PeopleSoft, complications exist because update timestamps do not exist on some base tables.
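For EBS, the incremental extract filter in an SDE source qualifier boils down to something like the following (the table and columns are just an example; $$LAST_EXTRACT_DATE is the DAC parameter holding the pruned last-refresh date):

-- Simplified incremental extract: only rows changed since the last ETL run
SELECT customer_trx_id,
       trx_number,
       trx_date,
       last_update_date
FROM   ra_customer_trx_all
WHERE  last_update_date > TO_DATE('$$LAST_EXTRACT_DATE', 'MM/DD/YYYY HH24:MI:SS')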

Importantly, when customising the Oracle Business Analytics Warehouse and adding new source tables to the ETL, we must consider the CDC requirements for each new source. As an example, imagine we are loading opportunities from the S_OPTY table in Siebel. By default, our SDE mapping will extract records from the associated change capture view V_OPTY that has been left outer joined to a bunch of auxiliary tables. Now imagine that we need to extract a new custom column called X_IMPORTANCE from the auxiliary table S_OPTY_X. The steps to achieve this are obvious – add S_OPTY_X as a source in the SDE mapping and create a left outer join between S_OPTY and S_OPTY_X, then map the new column from source to target. However, what happens when the value of X_IMPORTANCE is updated in the OLTP? Do we see the update in the OBAW? The answer is ‘maybe’: it depends on whether the core opportunity record has also been updated since the last ETL.

By default, the change in the auxiliary table will not lead to the corresponding opportunity appearing in our change capture view. If we want to see the update in OBIEE without relying on an update to the base opportunity record, we must create or extend a separate mapping that exists purely to populate the change capture view used in the SDE mapping. In this situation, there is a trade-off between functionality (seeing all source system updates in OBIEE) and ETL performance (due to the extra auxiliary change capture process) but I advise starting with the assumption that auxiliary change capture is required. I have seen faulty auxiliary change capture go unnoticed a couple of times so make sure that this functionality is tested!
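As a purely hypothetical sketch of the pattern (Siebel change capture actually uses generated image tables and views, so the table name and columns below are illustrative), an auxiliary change capture step for this example would do something along these lines:

-- When only the extension table has changed, register the parent opportunity in the
-- image table that feeds the change capture view V_OPTY, so the SDE extracts it
INSERT INTO S_ETL_I_IMG_XX (ROW_ID, MODIFICATION_NUM, LAST_UPD)
SELECT opty.ROW_ID, opty.MODIFICATION_NUM, opty.LAST_UPD
FROM   S_OPTY   opty
JOIN   S_OPTY_X optyx
       ON optyx.PAR_ROW_ID = opty.ROW_ID
WHERE  optyx.LAST_UPD > TO_DATE('$$LAST_EXTRACT_DATE', 'MM/DD/YYYY HH24:MI:SS')
AND    NOT EXISTS (SELECT 1
                   FROM   S_ETL_I_IMG_XX img
                   WHERE  img.ROW_ID = opty.ROW_ID);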

That’s it for BI Apps 7.9.6. Keep an eye out for more posts in 2014 where I will be blogging about BI Apps 11g.

Successful BI Apps Implementation Part 1: Introduction and the Project Life Cycle

Since I am new to the Rittman Mead blog, let me first introduce myself. I am Mike and I have been working at Rittman Mead for the last year, mainly focusing on Oracle BI Applications. I have a background in Siebel CRM and many of my BI projects have involved integration with Siebel. Being an OBIA consultant, my work combines CRM, ERP, ETL, data warehousing and analytics.

Over the next two postings I will be looking at what makes an OBIA project successful. Firstly, I will try to justify why BI Apps can add so much value to a business and how best to extract this value from the product. Most of this will be non-technical and will examine the BI Apps project lifecycle as a whole. In Part 2, I will look at BI Apps with Informatica and DAC, focusing on good technical implementation and aspects of customisation that are often overlooked. Looking ahead to 2014, I will look at BI Apps 11g and how our steps to a successful project are affected by the architectural changes to the product.

I will add the links in as the postings are published, but here are the topics and links for the other parts of the series:

Successful BI Apps Implementation Part 1: Introduction and the Project Life Cycle

Successful BI Apps Implementation Part 2: BI Apps 7.9.6

Introduction to BI Apps

Oracle Business Intelligence Applications (OBIA or BI Apps) is a packaged BI solution for use with Oracle source systems such as EBS, JD Edwards, Siebel and PeopleSoft. It uses OBIEE as a reporting platform and until recently, Informatica as the ETL tool. Now, Oracle also provides BI Apps versions that use Oracle Data Integrator as the ELT tool. In either case, data from Oracle source systems is loaded into the Oracle Business Analytics Warehouse (OBAW) using prebuilt Informatica / ODI metadata. Users then access prebuilt dashboards and subject areas in OBIEE that use the OBAW as a source. OBIA is modular and an individual Application describes all of the Informatica / ODI and OBIEE metadata used for a particular functional area such as Financial Analytics or Sales Analytics. OBIA can be customised to reflect customisations in Oracle source systems or to introduce entirely new reporting functionality including new transactional sources.

BI Apps 7.9.x Architecture

If you have an Oracle transactional system, the TCO and time to production associated with implementing BI Apps should be significantly better than if you were to implement a custom BI solution. The technical quality, functional depth, ease-of-use and extensibility of the applications mean that businesses can gain very rapid insight into their processes without the need for extremely long IT projects.

Defining Success for a BI Apps Project

So let’s define our criteria for a successful OBIA project. My belief is that by the time that BI Apps is in production, the following five statements should hold true:

  • Data is accurate and importantly, users trust that the data is accurate!
  • The system performs ‘well’. I could devote an entire blog post to this topic so let’s say that when a user asks a question, they should get a response in no more than 10 seconds. We must also consider ETL performance.
  • The data model and dashboards presented to the user are intuitive and any data lineage information is self-contained.
  • Users understand their own data visibility rules and these rules are well aligned with the rules in the OLTP source systems (EBS, Siebel etc.).
  • The system is being actively used to drive business decisions, or as an integral part of a business process. Ideally, there should be some measurable ROI.

Now, how do we go about achieving these goals? Well, the answer is long enough to merit a few blog posts! Before we focus on anything too technical, let’s talk about how we handle requirements and design.

Don’t do it blind

Let’s start at the beginning of the Project Life Cycle. Often, big waterfall-style IT projects go through requirements and design phases before users begin to understand exactly how the final product will look and function. Personally, I don’t like this approach in general and I certainly think that there is a better approach for BI Apps projects (see a description of Extreme BI by Stewart Bryson for Agile methodology with OBIEE). With BI Apps, we have the ability to install the out-of-the-box (OOTB) product fairly quickly and allow users to see what the OOTB dashboards and subject areas will look like with their own data. There is some configuration required prior to performing an initial data load but the process of detailed requirements gathering and design becomes much easier if users can perform their own fit/gap analysis with realistic data. This approach allows us to ask users ‘how well aligned is this Subject Area to your needs?’ instead of ‘what would you like to report on?’. Typically, we get much more enthusiasm and useful feedback with this approach.

Great, I can see everyone’s data!

At this discovery and prototyping stage of the project, I believe that implementing enterprise security can be beneficial. Typically, we need to decide: who sees what, who does what and importantly, how we administer this. It may seem counter-intuitive to worry about boring old data security at this point in the project but in some industries, the OOTB security options for integration with applications such as EBS and Siebel may not be sufficient. I have found this to be particularly true in the banking industry and in implementations where OBIEE is customer-facing. Also, when dealing with EBS R12 integration, be aware that Oracle provides an optional EBS patch that provides more granular control of account visibility across different areas of the suite (GL, AR, AP etc.) and that implementing the same functionality in BI Apps requires customisation. Identifying at this stage if and how the BI Apps security model requires customising can save major headaches later on in the project lifecycle.

Design processes, not just dashboards

By this point in the project, we should have a good understanding of the changes required to the BI Apps data model, Dashboards and security configuration. So, there is no more need to speak to users then? Actually, it’s probably time to start thinking about Agents, Actions, and Guided Navigation. I see two main reasons for this.

Firstly, I think it makes a huge difference to initial user experience and user acceptance in the days and weeks immediately following a go-live. As an example, a sales manager may use an OOTB dashboard to perform some pipeline analysis on a daily basis. However, a better solution might be to create an exception report (such as ‘South East Opportunities pending approval for more than 2 days’) and an associated agent. Even better would be to create an Action allowing the manager to approve the opportunity with one click. This way, the sales manager can spend less time getting data and more time acting on it. In order to add this level of business value, we need to commit time to understanding exactly what users want to achieve – not just what data they want.

Secondly, examining existing or future processes in this level of detail should highlight any missing functionality from the customised data model. Using the above example, the custom metric ‘# of Days Pending Approval’ is intrinsic to the process and we would notice if it was missing from the data model.

Monitor, Evaluate, Educate

Now, let’s assume that we have a functioning system in production (see the next blog post for how to get to this stage). Dashboards perform well, functionality is well aligned with business processes and the IT guys are happy because the nightly load only takes 1 hour. Unfortunately, things can change! We should expect and plan for: data growth, a growth in BI content (such as reports and agents) and more users concurrently accessing the system. Importantly, we can and should monitor both query and ETL performance.

I believe that the best way to do this is via an Administrator Dashboard with some associated exception reports and agents.  This should highlight: long running queries, long running ETL jobs, high numbers of concurrent queries (sometimes caused by everyone scheduling their agents for 9am on a Monday) and data growth trends. Creating such a dashboard requires a combination of Usage Tracking (which is well documented) and reporting from our DAC and Informatica (or ODI) repositories. If you want a separate post on this, let me know in the comments.
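For the query side, a starting point (assuming usage tracking is enabled and writing to the standard S_NQ_ACCT table) could be as simple as:

-- Slowest requests over the last week, for an exception report or agent
SELECT user_name,
       saw_dashboard,
       saw_src_path,
       total_time_sec,
       row_count,
       start_ts
FROM   s_nq_acct
WHERE  start_ts > SYSDATE - 7
ORDER BY total_time_sec DESC;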

Some problems with query performance can be solved with a bit of user education (without training, some users like to export everything and then filter in Excel) and others might highlight areas where the data model is functionally lacking. An example of this is where users implement complex selection steps to derive categories that could be derived during the ETL. Therefore, be proactive about engaging with the business if you see large query times.

That’s it for the introduction and project lifecycle. Stay tuned for the next post where we will get technical and talk about RPD modelling, best practices for Informatica and DAC, data accuracy, making code upgrade-proof, and data lineage.

Customisations in BI Apps 11.1.1.7.1 Part 2 : Category 2 Changes

In the last blog I went through a very basic customisation, adding a new column to an existing dimension table. That served as an overview for how the BI Apps mappings are packaged in ODI for use in a dynamic ETL. This blog will go through the creation of a dimension and fact table, illustrating some of the concepts used to maintain data integrity. Most of these are similar to concepts used in previous releases of BI Apps, but modified for use in the new tool. For this example I will be adding two new tables to an existing EBS solution and running it as a single ETL load. The first is a dimension based on RETURN_REASON, the second is a fact based on SALES_RETURNS.

The first step is to create the source and target table definitions in the ODI model if they don’t already exist. Remember that you can just specify the table name and then use the Reverse Engineer feature to get the columns. The only constraint is that the table definitions are made in the correct models, but it’s worth grouping them into sub-models so that they can be navigated easily.


There are sample tables seeded in the repository for dimensions, staging tables and facts. These tables indicate the recommended naming convention (prefixing with WC_ instead of W_) as well as required system columns for warehouse tables. Below is a screenshot of the columns from the sample dimension table. All of these were included in tables for previous releases of BI Apps.

[Screenshot: system columns from the sample dimension table]

  • ROW_WID: Surrogate key for dimension tables.
  • INTEGRATION_ID: The natural key from the source table; often a concatenation of keys when several tables are used to populate a dimension.
  • DATASOURCE_NUM_ID: The identifier of the source system the data was extracted from. This allows for multiple sources to populate the same warehouse table without conflict.
  • ETL_PROC_WID: Run identifier for the load.
  • EFFECTIVE_FROM_DT/EFFECTIVE_TO_DT: These can be used to enable SCD type 2 dimensions.
  • CREATED_ON_DT/CHANGED_ON_DT: These dates (and all of the auxiliary changed dates) come from system columns on the source system and are used to extract only newly changed information. The auxiliary dates allow this logic to take account of changes across several source tables.
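To make this concrete, here is a minimal sketch of the custom dimension for this example using those system columns; the table name, datatypes and business attributes are my assumptions rather than anything seeded by Oracle:

CREATE TABLE WC_RETURN_REASON_D (
  ROW_WID            NUMBER(10)    NOT NULL,  -- surrogate key
  INTEGRATION_ID     VARCHAR2(80)  NOT NULL,  -- natural key from the source
  DATASOURCE_NUM_ID  NUMBER(10)    NOT NULL,  -- identifies the source system
  ETL_PROC_WID       NUMBER(10),              -- run identifier for the load
  EFFECTIVE_FROM_DT  DATE,                    -- SCD type 2 support
  EFFECTIVE_TO_DT    DATE,
  CREATED_ON_DT      DATE,                    -- source system audit dates
  CHANGED_ON_DT      DATE,
  RETURN_REASON_CODE VARCHAR2(30),            -- example business attributes
  RETURN_REASON_NAME VARCHAR2(240)
);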

In addition to the table and column definitions, some other attributes need to be configured in order for the load plan generator (LPG) to calculate the dependencies. The only data the user gives the LPG is the set of fact groups to load. From there, the following logic is used to generate the plan:

  • Flexfield OBI Fact Group can be set on fact tables to link them to configuration groups.
  • Staging tables are identified from the naming convention, e.g. XX_D will assume a staging table of XX_DS.
  • Required dimensions for a fact are identified by reference constraints defined in the ODI model.

So for my example, I needed to set the fact group flexfield on the fact table as well as the constraints between the foreign keys and the newly created dimension table.


There is a fact group X_CUSTOM_FG which is included in each functional area, and it is recommended that generic customisations are included in this group. You can set this on the datastore definition as above. In addition to this, create various constraints on the new datastores (a DDL sketch follows the list below):

  • Staging Tables: Primary Key over INTEGRATION_ID and DATASOURCE_NUM_ID
  • Dimension Tables:
    • Primary Key over ROW_WID
    • Alternate Key over INTEGRATION_ID and DATASOURCE_NUM_ID
  • Fact Tables:
    • Primary Key over ROW_WID
    • Alternate Key over INTEGRATION_ID and DATASOURCE_NUM_ID
    • References for each foreign key to the ROW_WID of the parent table
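For illustration, here are the same keys expressed as database DDL. In practice they are defined as constraints on the ODI datastores and do not need to be enforced in the database; the table and column names are assumed for this example.

ALTER TABLE WC_RETURN_REASON_DS
  ADD CONSTRAINT WC_RETURN_REASON_DS_PK PRIMARY KEY (INTEGRATION_ID, DATASOURCE_NUM_ID);

ALTER TABLE WC_RETURN_REASON_D
  ADD CONSTRAINT WC_RETURN_REASON_D_PK PRIMARY KEY (ROW_WID);
ALTER TABLE WC_RETURN_REASON_D
  ADD CONSTRAINT WC_RETURN_REASON_D_UK UNIQUE (INTEGRATION_ID, DATASOURCE_NUM_ID);

ALTER TABLE WC_SALES_RETURN_F
  ADD CONSTRAINT WC_SALES_RETURN_F_PK PRIMARY KEY (ROW_WID);
ALTER TABLE WC_SALES_RETURN_F
  ADD CONSTRAINT WC_SALES_RETURN_F_UK UNIQUE (INTEGRATION_ID, DATASOURCE_NUM_ID);
ALTER TABLE WC_SALES_RETURN_F
  ADD CONSTRAINT WC_SALES_RETURN_F_FK1 FOREIGN KEY (RETURN_REASON_WID)
  REFERENCES WC_RETURN_REASON_D (ROW_WID);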


After the datastores are configured, it’s time to create the SDE interfaces and packages. Create these in the Custom_SDE folder as well so that they are kept separate from any prebuilt logic. Most of the columns can map across directly but it is important to use the global variable for DATASOURCE_NUM_ID. Variables are referenced by prefixing with # but can also be inserted using the GUI expression editor.


The other important thing for the SDE mappings is to add a filter for the extract logic. Previously, this was done using two workflows and overriding the logic on one of them. Now we only need one interface as we can use a global function (seeded in the repository) to perform the logic. The logic used in the example is as follows:

RUN_FULL_INCREMENTAL("#IS_INCREMENTAL",(RETURN_REASON.CREATION_DATE > TO_DATE_VAR('#INITIAL_EXTRACT_DATE')),(RETURN_REASON.LAST_UPDATE_DATE > TO_DATE_VAR('#LAST_EXTRACT_DATE')))

where #IS_INCREMENTAL is derived from querying a system table: W_ETL_LOAD_DATES. Once the mappings are made, they should be included in a package which refreshes the IS_INCREMENTAL and LAST_EXTRACT_DATE variables first. This is typical of all the extract mappings and can be made by just dragging the necessary objects across, defining one of the variables as the first step and joining them using the green connectors. For all staging mappings, choose the BI Apps Control Append IKM in the flow tab of the interface designer. There are BI Apps versions of all the default IKMs which have some additional features.


The SIL mappings are created in much the same way, but require an update key to be selected. It’s important that this is the key defined over the INTEGRATION_ID and DATASOURCE_NUM_ID. The update key can be set on the target datastore properties. In order to populate the ROW_WID, a sequence needs to be created. The prebuilt mappings all use native sequences stored in the data warehouse. This can then be imported into ODI and referenced by using NEXTVAL(SEQUENCE_NAME).
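A quick sketch of the sequence side (the sequence name is assumed): create a native sequence in the warehouse schema, import it into ODI and reference it in the ROW_WID target expression.

-- Native warehouse sequence used to populate ROW_WID
CREATE SEQUENCE WC_RETURN_REASON_D_SEQ START WITH 1 CACHE 500;

-- Referenced in the ODI target expression as:
--   NEXTVAL(WC_RETURN_REASON_D_SEQ)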


The other main difference for the SIL mappings is that they use the BI Apps Incremental Update or BI Apps Slowly Changing Dimension IKMs. For dimensions, the IKM has a useful option to insert the unspecified row automatically. For fact mappings (and some dimensions) it will be necessary to perform lookups. This procedure is done very simply by clicking the magnifying glass icon in the interface designer. That will open a wizard which allows you to select the table and the join condition. After that, any column from the lookup table can be used in the target expressions.


The SIL interfaces also need to be put into packages, although only the IS_INCREMENTAL variable is required for refresh. Once all of the packages have been created, scenarios need to be generated for each of them. This can be done for all packages at once by choosing Generate Scenarios at a higher folder level; existing scenarios will be regenerated. These new scenarios need to be added to the master load plan, in the X_CUSTOM plans for SDE Dims, SDE Facts, SIL Dims and SIL Facts. Add the step by selecting the highest level and choosing Run Scenario Step from the add menu, then set the restart mode to Restart from failed step.


Once all of this has been done, the load plan must be edited to include the X_CUSTOM_FG fact group. This is done through the configuration manager where the plan can also be regenerated. After running the plan, I could see all of the tasks being included in the appropriate order. The data was successfully loaded into the fact table, with the foreign keys resolving.


That concludes the guide to customisations in the new BI Apps. Hopefully it was helpful in explaining the overall process for making these customisations and why some of the steps are necessary. The Oracle documentation is very thorough and is certainly worth a look for some of the finer details. Conceptually, a lot is in common with previous BI Apps releases; the main change is the new tool, which brings some very good new features.

Customisations in BI Apps 11.1.1.7.1 Part 1: Category 1 Changes

Over the last couple of days I’ve been taking a look into the new BI Apps package Oracle have released using ODI instead of Informatica. Mark has already published an article outlining how ODI is used to manage and run the ETL process. However, this blog will focus on how you can make your own customisations in ODI and relate them back to concepts from previous BI Apps releases. If you want to follow along with the examples in this blog, I began by installing the applications using Mark Rittman and Kevin McGinley’s cookbook, which will take you through to the point of generating a load plan to load one or more facts. I won’t repeat the steps for this configuration, but will go through how to generate the load plan to include your custom packages. The fact group that I am selecting to load is Inventory Transactions (INVTRX_FG).

The most basic and typical type of customisation is simply adding a column to an existing table, called a Category 1 change. For this, I’ll go through a very simple addition to W_INVENTORY_PRODUCT_D, just adding a new column to hold the current date. The first step is to create some new subfolders to hold these custom mappings. This mirrors previous versions and is done for the same reason: to separate customisations from prebuilt metadata in order to allow for an easier upgrade path. This can be done in the Designer tab, using the Projects pane.

New Folders

It is also recommended to edit the release tags for the new folders to register them to the correct BI Apps sources and targets. These tags allow for shortcuts to be made in ODI, and all of the objects relating to specific tags to be referenced together. You can edit release tags by clicking on the icon in the top right hand side of the Designer tab.

Next, find the interface (akin to an Informatica mapping) to be customised in its relevant extract folder. In this case I’m using EBS 12.1.3 as a source and modifying SDE_ORA_InventoryProductDimension. Copy the whole subfolder so that you get the interfaces as well as the packages (similar to an Informatica workflow). At this point I added the custom column, X_CURRENT_DATE, to the database tables below (a DDL sketch follows the list):

  • W_INVENTORY_PRODUCT_DS
  • W_INVENTORY_PRODUCT_D
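The DDL for this is trivial; something like the following (the DATE datatype is my assumption) adds the column to both tables:

ALTER TABLE W_INVENTORY_PRODUCT_DS ADD (X_CURRENT_DATE DATE);
ALTER TABLE W_INVENTORY_PRODUCT_D  ADD (X_CURRENT_DATE DATE);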

It’s still worth prefixing new columns with “X_” to denote customisation. ODI has the capability to import these changes into the model, similar to importing source and target definitions in Informatica. Open up the Models pane on the left hand side. This contains all of the table definitions for all schemas and is organised by source and then by table type.

Models

After opening a table definition you can edit several attributes including the column definitions. The easiest way to do this is to use the Reverse-Engineer functionality. This will read the table definition from the database and import it into ODI. Another interesting feature of ODI is to choose the OLAP type. This has options of Fact, Dimension (SCD 1) and Slowly Changing Dimension (SCD 2). When set to Slowly Changing, you can edit the column properties to set their update behaviour. This way you can very easily alter a dimension to be SCD type 2 or vice versa.

Reverse Engineer

Slowly Changing Options

Once the table definition has been saved, the new column can be referenced when editing interfaces. The process of adding new columns is relatively simple in that you can drag the desired column across into the target datastore. Furthermore, you can use expressions which reference variables and functions defined in ODI. In this example I’ve simply set the new column to be CURRENT_DATE in the extract (SDE) interface. This column can then be brought through the load (SIL) interface. Often, the BI Apps interfaces will use yellow interfaces (as indicated by their icon) as their sources. A yellow interface is an ODI mapping which doesn’t load into a defined datastore: instead, you define the target columns in the interface itself and ODI creates a temporary table. This interface can then be used as a source in another mapping, chained as many times as necessary, and hence can replicate the flow-based mappings which were frequent in Informatica. Typically, they are used for similar purposes to a source qualifier in previous releases.

Interface Designed

The interface is run as part of a package which can include other steps using functionality from ODI, the database or the OS itself. This is equivalent to a workflow in Informatica. One of the key differences, however, is that only one package is required for both full and incremental loads, where previously we needed two Informatica workflows. This is because functions and variables are defined globally in ODI, whereas parameters used to be defined at a mapping and workflow level. The mechanics of this will be described in part 2 of this blog series. The package for the interface is accessible from the Projects pane as well.

Package

The next step is to right click on the package and generate a scenario which will be executed by the load plan. Note that before doing this, it is worth changing the user parameter Scenario Naming Convention to %FOLDER_NAME(2)%_%OBJECT_NAME%. This will ensure they match the out of the box scenarios. The final step is to ensure that the new mappings will be included in the generated load plan. As part of the configuration for BI Apps, you are asked to select which fact groups to include in the generated load plan. This is equivalent to adding subject areas to an execution plan and then generating the dependencies. This version of BI Apps provides similar functionality through its Load Plan generator, the mechanics of which will be described further in the next part of the blog.

In order for the generator to pick up the new mappings, they need to be added to the master plan, which is a superset containing all interfaces without any particular order. The master plan can be edited in the Load Plans and Scenarios pane of the Designer tab; the master plan information is under BIAPPS Load Plan/Load Plan Dev Components. The plans are split into the three extract phases and then subsequently into fact and dimension groups. In this case, I’ve edited the INVPROD_DIM load plans for the SDE and SIL folders. Open the load plan and navigate to the steps section. Here we can modify the relevant task to use the scenario from our custom folder. This is the same as changing the logical folder for a task in DAC.

Load Plan

Now you can go back to the BI Apps Configuration Manager, navigate to Load Plans and regenerate the plan. This will include the custom scenario instead and you can reset data sources and run the plan to load the custom column.

In the next part of the blog I will go through how to do a category 2 customisation, creating a new dimension and fact table and adding that to the load plan.

Rittman Mead BI Forum 2014 Call for Papers Now Open!

It’s that time of year again when we start planning next year’s BI Forum, which like this year’s event will be running in May 2014 in Brighton and Atlanta. This will be our sixth annual event, and as with previous years the most important part is the content – and as such I’m pleased to announce that the Call for Papers for BI Forum 2014 is now open, running through to January 31st 2014.

If you’ve not been to one of our BI Forum events in past years, the Rittman Mead BI Forum is all about Oracle Business Intelligence, and the technologies and techniques that surround it – data warehousing, data analysis, big data, unstructured data analysis, OLAP analysis and this year – in-memory analytics. Each year we select around ten speakers for Brighton, and ten for Atlanta, along with keynote speakers and a masterclass session, with speaker choices driven by attendee votes at the end of January, and editorial input from myself, Jon Mead and Stewart Bryson.


Last year we had sessions on OBIEE internals and new features, OBIEE visualisations and data analysis, OBIEE and “big data”, along with sessions on Endeca, Exalytics, Exadata, Essbase and anything else that starts with an “E”. This year we’re continuing the theme, but are particularly looking for sessions on what’s hot this year and next – integration with unstructured and big data sources, use of engineered systems and in-memory analysis, advanced and innovative data visualisations, cloud deployment and analytics, and anything that “pushes the envelope” around Oracle BI, data warehousing and analytics.


The Call for Papers entry form is here, and we’re looking for speakers for Brighton, Atlanta, or both venues. We’re also looking for presenters for ten-minute “TED”-style sessions, and any ideas you might have for keynote speakers, send them directly to me at mark.rittman@rittmanmead.com. Other than that – have a think about abstract ideas now, and make sure you get them in by January 31st 2014.