Tag Archives: Data Warehousing

Rittman Mead at Collaborate 16: Data Integration Focus

It’s that time of year again when Oracle technologists from around the world gather in Las Vegas, NV to teach, learn, and, of course, network with their peers. The Collaborate conference, running for 10 years now, has been a collaboration, if you will, between the Independent Oracle Users Group (IOUG), the Oracle Applications Users Group (OAUG) and the Quest International Users Group (Quest), making it one of the largest user group conferences in the world. Rittman Mead will once again be in attendance, with two data integration-focused presentations from me over the course of the week.

My first session, “A Walk Through the Kimball ETL Subsystems with Oracle Data Integration”, scheduled for Monday, April 11 at 10:30am, will focus on how we can implement the ETL Subsystems using Oracle Data Integration solutions. As you know, Big Data integration has been the hot topic over the past few years, and it’s an area where the Oracle Data Integration product suite (Oracle Data Integrator, GoldenGate, & Enterprise Data Quality) excels. But not all analytics require big data technologies; labor cost, revenue, and expense reporting, for example, do not. Ralph Kimball, dimensional modeling and data warehousing expert and founder of The Kimball Group, spent much of his career building an enterprise data warehouse methodology that can meet these reporting needs. His book, “The Data Warehouse ETL Toolkit”, is a guide for many ETL developers. This session will walk you through his ETL Subsystem categories: Extracting, Cleaning & Conforming, Delivering, and Managing, describing how the Oracle Data Integration products are perfectly suited for the Kimball approach.

I go into further detail on one of the ETL Subsystems in an upcoming IOUG Select Journal article, titled “Implement an Error Event Schema with Oracle Data Integrator”. The Select Journal is a technical magazine published quarterly and available exclusively to IOUG members. My recent post Data Integration Tips: ODI 12c Repository Query – Find the Mapping Target Table shows a bit of the detail behind the research performed for the article.

[Image: error event schema]

If you’re not familiar with the Kimball approach to data warehousing, I would definitely recommend reading one (or more) of their published books on the subject. I would also recommend attending one of their training courses, but unfortunately for the data warehousing community, the Kimball Group closed shop as of December 2015. But hey, the good news is that two of the former Kimball team members have joined forces at Decision Works, and they offer the exact same training they used to deliver under The Kimball Group name.

[Image: GoldenGate to Kafka logo]

On Thursday, April 14 at 11am, I will dive into the recently released Oracle GoldenGate for Big Data 12.2 in a session titled “Oracle GoldenGate and Apache Kafka: A Deep Dive into Real-Time Data Streaming”. The challenge for us as data integration professionals is to combine relational data with the non-structured, high volume, and rapidly changing datasets known in the industry as Big Data, and transform them into something useful. Not just that, but we must also do it in near real-time and land the data in a big data target system such as Hadoop. The topic of this session, real-time data streaming, provides a great solution for that challenging task. By combining GoldenGate, Oracle’s premier data replication technology, and Apache Kafka, the open-source distributed streaming and messaging platform for big data, we can implement a fast, durable, and scalable solution.

If you plan to be at Collaborate next week, feel free to drop me a line in the comments, via email at michael.rainey@rittmanmead.com, or on Twitter @mRainey. I’d love to meet up and have a discussion around my presentation topics, data integration, or really anything we’re doing at Rittman Mead. Hope to see you all there!


Becky’s BI Apps Corner: Incrementals and Future dated Employee records

Over the last few posts, we have delved into a few of the many interesting aspects of a BI Apps installation. Today I wanted to change gears a bit and talk about what starts to happen when you are past installation and configuration and begin running load plans. On a recent client project, I worked through a unique constraint error on W_EMPLOYEE_D that I found really interesting, related to how the incremental logic works in the knowledge module (KM). Before I can really get into the workaround, we need to understand how incremental loads work in general for BI Apps.

High Level Overview

In the initial run, the load will grab a full set of data, i.e. all data from the source system, based on the data load parameters set during configuration. The same load plan will be used to load data incrementally, picking up only data that has changed since the most recent load plan has completed (Last Extract Date). The pre-built mappings have incremental change capture built into the knowledge module logic. When a load runs, it will extract records that have changed or been created since the Last Extract Date. The load plan determines which rows to extract by using the formula Source Last Updated Date >= (Last Extract Date – Prune Days).
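
To make that concrete, here is a minimal sketch of what the incremental extract filter amounts to. This is not the SQL generated by the BI Apps mappings; the table, column, and parameter names are placeholders for illustration only.

select src.*
from   source_employees src                          -- placeholder source table
where  src.last_update_date >= (:last_extract_date   -- Last Extract Date placeholder
                                - :prune_days);      -- Prune Days placeholder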

In the weeds

Is it an incremental load? How does that get decided? Actually, that isn’t decided at the load plan level. Each individual package (run as a scenario) starts with a step that refreshes a variable called #IS_INCREMENTAL.

This variable’s refresh logic determines whether the package has previously completed successfully. After every successful completion, an entry is made in W_ETL_LOAD_DATES with the package name and a date timestamp, amongst other audit information.
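
I won’t paste the shipped refresh query here, but conceptually it boils down to checking W_ETL_LOAD_DATES for a prior successful run of the package, roughly like this (the column names are my own placeholders and may differ from the actual repository table):

select case when count(*) > 0 then 'Y' else 'N' end as is_incremental
from   w_etl_load_dates
where  package_name = '<current package name>';   -- placeholder for the package being run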

So now we have a scenario running with #IS_INCREMENTAL set to ‘Y’. What does the Knowledge Module (KM) do? Incremental runs normally have steps to load an I$ table (flow table) from the source logic and then update the records in the target table based on the DETECTION_STRATEGY option in the KM. For fact table loads, the option accepts the following values (with explanations):

  • OUTER: An outer join to the target table is used when populating the flow table in order to determine insert/update/useless records.
  • NOT_EXISTS: A NOT EXISTS clause is used when populating the flow table in order to exclude records that already exist identically in the target.
  • POST_FLOW: All records from the source are loaded into the flow table. After that, an update statement is used to flag all rows in the flow table that exist identically in the target.
  • NONE: All records from the source are loaded into the flow table. All target records are updated, even when a target record is identical to the flow table record.

In most cases, the OUTER option is used for facts, which updates the records based on the primary keys (PKs). Incremental decisions are based on the values of the system date columns, such as the CHANGED_ON_DT, AUX1_CHANGED_ON_DT, AUX2_CHANGED_ON_DT, AUX3_CHANGED_ON_DT and AUX4_CHANGED_ON_DT columns populated from the source. This performs better than the NOT_EXISTS and POST_FLOW options, which compare each and every column to identify the records already present.
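
As a hedged sketch of the OUTER idea (not the actual KM-generated SQL; the I$ and target table names below are placeholders), the detection boils down to an outer join on the PK followed by a date comparison:

select i.*,
       case
         when t.integration_id is null            then 'I'  -- not in target: insert
         when i.changed_on_dt  > t.changed_on_dt  then 'U'  -- dates differ: update
         else 'N'                                           -- identical: useless record
       end as ind_update
from   i$_w_sample_f i                 -- placeholder flow (I$) table
       left outer join w_sample_f t    -- placeholder target fact table
         on  i.integration_id    = t.integration_id
         and i.datasource_num_id = t.datasource_num_id;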

For Slowly Changing Dimensions (like W_EMPLOYEE_D), the DETECTION_STRATEGY option can take the following values (with explanations):

  • MINUS: A MINUS clause is used when populating the flow table in order to exclude records that already exist identically in the target.
  • NOT_EXISTS: A NOT EXISTS clause is used when populating the flow table in order to exclude records that already exist identically in the target.

The default option is NOT_EXISTS, and incremental decisions are based on the PKs and the date columns.
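
Again as a rough sketch rather than the actual KM code, the NOT_EXISTS check for a slowly changing dimension such as W_EMPLOYEE_D looks conceptually like this (column list simplified; verify the real columns in your version):

select stg.*
from   w_employee_ds stg
where  not exists (
         select 1
         from   w_employee_d tgt
         where  tgt.integration_id    = stg.integration_id
         and    tgt.datasource_num_id = stg.datasource_num_id
         and    tgt.effective_from_dt = stg.effective_from_dt  -- part of the PK
         and    tgt.changed_on_dt     = stg.changed_on_dt      -- change capture date
       );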

Future dated rows

Imagine now that during a full load, all records from the source tables for EMPLOYEES are brought forward into the data warehouse table W_EMPLOYEE_D. One of those records is an entry with an effective start date two weeks in the future. For W_EMPLOYEE_D, one of the columns in the primary key is the effective start date. Fast forward two weeks to the date when the future-dated row’s effective start date is the current date. During the incremental load on that date, the incremental logic for this one record compares the primary keys and all of the change indicator columns, and sees that the effective start date is greater than the last extract date from last night. This incremental comparison incorrectly determines this is a record that needs to be added to the dimension table, even though the record is already there. Now we have an ERROR! The familiar unique constraint on the _U1 unique index rears its ugly head. On top of that, troubleshooting this duplicate does not turn up any duplicate records in the usual places (I$, DS, source tables, nada!). Isolating the two identical records and tracking them back to the source tables, however, there is just the one source record. The only clue is that the effective start date is today’s date. After a second occurrence, plus discussions and back and forth on an SR, a workaround is now available.

Workaround

Step 1. Remove any future-dated rows from W_EMPLOYEE_D.

Step 2. Add a filter on the interface to prevent future-dated rows from loading into W_EMPLOYEE_DS until their effective start date is <= the current date (see the sketch below).
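
For reference, here is a hedged sketch of both steps; the column names follow the standard W_EMPLOYEE_D layout, but verify them in your environment before removing anything.

-- Step 1 (sketch): identify the future-dated rows in W_EMPLOYEE_D before removing them
select row_wid, integration_id, effective_from_dt
from   w_employee_d
where  effective_from_dt > sysdate;

-- Step 2 (sketch): filter condition added on the interface so future-dated rows
-- are held back from W_EMPLOYEE_DS until their effective start date arrives
--   EFFECTIVE_FROM_DT <= SYSDATE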

At our client, this mapping continues to run without any additional errors. The steps here are most likely version-specific, and this issue is a known bug with Oracle, so please don’t hesitate to open an SR if you are hitting this specific issue, as a quick turnaround is very likely.

There are some other odds and ends about how incremental load plans work, and I plan to gather them up and cover them in another post in the coming weeks. If you want to learn more about the ins and outs of incrementals and more, join me for the upcoming remote ODI for BIApps course on March 14th-16th. We have only a few spots left, so sign up here.


Data Integration Tips: ODI 12c Repository Query – Find the Mapping Target Table

Well, it’s been a while since I’ve posted one of our Rittman Mead Data Integration Tips, so I thought a recent challenge might be the next great candidate. I was working through the Kimball ETL Subsystems and the Error Event Schema, Subsystem 5 if you’re familiar with the methodology, and attempting to build the schema from Oracle Data Integrator 12c (ODI 12c) metadata tables. If you caught my presentation “A Walk Through the Kimball ETL Subsystems with Oracle Data Integration” at Oracle OpenWorld 2015, then you’ll know a bit more about where I’m coming from. You can still catch my presentation on this topic at both IOUG Collaborate 16 and ODTUG KScope16 (being invited to speak at each of these events is always an honor). Now, this Data Integration Tip isn’t solely related to the Kimball ETL Subsystems, though the solution did prove rather useful for my presentation (and upcoming article in the IOUG Select Journal). It’s actually an interesting twist in how Oracle Data Integrator mapping metadata is stored differently between the ODI 11g and ODI 12c repositories.

The challenge is to find the target table, or Datastore, for a given Mapping in ODI 12c, or Interface in ODI 11g. When I say “find”, I mean query the ODI work repository tables and return the target table, or tables in 12c, for a given Mapping. Luckily, I’m equipped with some guidance from our friends at My Oracle Support. If you look at support document Oracle Data Integrator 11g and 12c Repository Description (Doc ID 1903225.1), you’ll find the data dictionaries for both ODI 11g (11.1.1.6.0+) and ODI 12c (12.1.2, 12.1.3, & 12.2.1). These are invaluable resources from Oracle, though they may be works in progress and somewhat incomplete!

Let’s take a look first at ODI 11g and how simple things used to be. Back when Interfaces were the mechanism for extracting, transforming, and loading, we were allowed only a single target Datastore. Ahh, those were the good ol’ days! Digging into the repository, we really only had one place to look for the target table: SNP_POP. This table in the ODI 11g repository, SNP_POP (which essentially stands for Synopsis – Populate), contains a column I_TABLE. This identifying column represents the target table for that particular Interface. Here’s a query that ties it all together.

select
    p.pop_name interface_name,
    t.table_name target_table,
    m.cod_mod model_code
from snp_pop p inner join snp_table t on p.i_table = t.i_table
    inner join snp_model m on t.i_mod = m.i_mod;

As you can see, the key to capturing the target table for an Interface is simply in the SNP_POP.I_TABLE column. Because there is only one target, we can easily figure it out.

[Image: interface target table query results]

Now, ODI 12c is where the real challenge lies. As you may know, with the move from Interfaces in 11g to flow-based Mappings in 12c, we were allowed to do new and exciting things, such as load multiple target tables from a single Mapping. We may also have a case where a Datastore component maps to a Filter component, which then maps to another Datastore component, and so on. As you can see in the image below, a Mapping can contain lots of tables, some acting as sources and some as targets, but we’re only interested in the final target table (or tables, for that matter!).

[Image: ODI 12c mapping]

Ok, so let’s dig into the Work Repository now. It seems an ODI Mapping can’t be that much more difficult than an Interface, right? Well…

First, there are quite a few tables related to the Mapping itself.

SNP_MAPPING
SNP_MAP_ATTR
SNP_MAP_ATTR_INFO
SNP_MAP_COMP
SNP_MAP_COMP_TYPE
SNP_MAP_CONN
SNP_MAP_CP
SNP_MAP_CP_ROLE
SNP_MAP_DATA_TYPE
SNP_MAP_EXPR
SNP_MAP_EXPR_REF
SNP_MAP_PROP
SNP_MAP_PROP_DEF
SNP_MAP_REF
SNP_MAP_REF_PP

Whoa…we’ve got some work to do. Lucky for you, I’ve already done the work. Let’s look at how it all fits together. We can start with the Mapping (SNP_MAPPING) table. We also have different components on the Mapping, such as Lookups, etc., so we can join in the SNP_MAP_COMP table as well. Here’s the information we’ll be able to see with that simple join.

select ...
from snp_mapping m inner join snp_map_comp mc on m.i_mapping = mc.i_owner_mapping

[Image: ODI 12c mapping components]

That’s interesting: we’ve captured all of the components in the mapping. But there are still quite a lot of non-targets here. Ok, maybe if we add the connection points for each component we can find the input and output for each. Each component has an input connector point, allowing an output from a different component to flow into it.

select ...
from snp_mapping m inner join snp_map_comp mc on m.i_mapping = mc.i_owner_mapping
    inner join snp_map_cp cp on mc.i_map_comp = cp.i_owner_map_comp

That just added the connection point information for each component and whether it is an INPUT or OUTPUT. Not extremely useful by itself, so let’s dig a bit deeper. How about adding the SNP_MAP_REF table? This table seems to contain a reference to all types of other attributes about the Mapping and its Components. We also need to consider that Datastores, just like any other Component, will have both an input and an output. Right now, the dataset shows both the input and output connectors for each Component. Let’s remove the input connection points to limit our result set.

select ...
from snp_mapping m inner join snp_map_comp mc on m.i_mapping = mc.i_owner_mapping
    inner join snp_map_cp cp on mc.i_map_comp = cp.i_owner_map_comp
    inner join snp_map_ref mr on mc.i_map_ref = mr.i_map_ref
where cp.direction = 'O' --output connection point only

Joining the reference table has now allowed us to focus on the Datastores in the Mapping and their OUTPUT connection points. What I really want is to see only the Datastores that do not have their OUTPUT connection point connected to another component’s INPUT connector. If the OUTPUT connection point isn’t connected to anything downstream, the Datastore must be the target table!

select ...
from snp_mapping m inner join snp_map_comp mc on m.i_mapping = mc.i_owner_mapping
    inner join snp_map_cp cp on mc.i_map_comp = cp.i_owner_map_comp
    inner join snp_map_ref mr on mc.i_map_ref = mr.i_map_ref
where cp.direction = 'O' and --output connection point only
    cp.i_map_cp not in
        (select i_start_map_cp from snp_map_conn) --not a starting connection point

The SNP_MAP_CONN table, which stores the mapping connections, allows me to limit the query to the components that only have an output, but not an input. The connections table contains all component connections in the ODI 12c mappings. Here’s what we get as a result.

[Image: mapping target tables found]

Hey, now we’re onto something here. In fact, this is what I was looking for! Target table(s) in a single Mapping in ODI12c. Not quite as simple as ODI11g, but with a bit of SQL and understanding of the repository tables, you can do it. Here’s the final query again, joining in the SNP_TABLE & SNP_MODEL tables to complete the dataset.

select
    m.name mapping_name,
    mr.qualified_name,
    mc.name datastore_alias,
    t.table_name target_table,
    mdl.cod_mod model_code
from snp_mapping m inner join snp_map_comp mc on m.i_mapping = mc.i_owner_mapping
    inner join snp_map_cp cp on mc.i_map_comp = cp.i_owner_map_comp
    inner join snp_map_ref mr on mc.i_map_ref = mr.i_map_ref
    inner join snp_table t on mr.i_ref_id = t.i_table
    inner join snp_model mdl on t.i_mod = mdl.i_mod
where cp.direction = 'O' and --output connection point
    cp.i_map_cp not in
        (select i_start_map_cp from snp_map_conn) --not a starting connection point
;

Please let me know if you find this Data Integration Tip useful or if you have a better way of accessing the target table in a mapping. One of my Rittman Mead colleagues asked, “why not just use the ODI Java API?”. For accessing the ODI repository, I do prefer using some Groovy script and the API. But in this case, I’m interested in building out a dimensional schema and writing ETL to load the dimensions and facts, which lends itself to SQL rather than Groovy script.

As always, if your team needs help around Oracle Data Integrator, or Oracle Data Integration Solutions in general, drop us a line at info@rittmanmead.com. Or feel free to reach out to me directly via email (michael.rainey@rittmanmead.com) or Twitter (@mRainey). Cheers!


Becky’s BI Apps Corner: OBIA install Perl Script Patching and troubleshooting when they fail.

During a recent project installing Oracle BI Applications, I became much better acquainted with OPatch, Oracle’s standard tool for managing application patches. By acquainted, I mean I learned how to troubleshoot when OPatch patching fails. Since, at last count, there are around 50 patches in total for Oracle BI Applications 11.1.1.9.2, the first patching attempt may not apply all patches successfully. There are any number of reasons for a failure: an extra slash at the end of a path, a misspelled word, WebLogic or NodeManager still running, and so on. We will take a look at the logs for each section, learn where additional logs can be found, and learn how to turn on OPatch debugging to better understand the issue. Then, following the ideas from a previous OPatch post by Robin, I’ll describe how to manually apply the patches with OPatch at the command line for any patches that weren’t applied successfully by the provided perl script.

*Disclaimers – Please read the readme files for patches and follow all Oracle recommendations. Patch numbers are subject to change depending on OS and OBIA versions. Commands and paths here are of the Linux/UNIX variety, but similar commands are available for Windows OS.

Perl Script patching

Unzip the patch files to a patch folder. I have included the OBIEE patch as well.

unzip pb4biapps_11.1.1.9.2_.zip -d patches/
unzip pb4biapps_11.1.1.9.2_generic_1of2.zip -d patches/
unzip pb4biapps_11.1.1.9.2_generic_2of2.zip -d patches/
unzip p20124371_111170_.zip -d patches/

When installing Oracle BI Applications versions 11.1.1.7 and up, patches get applied with a perl script called APPLY_PATCHES.pl. Following Oracle’s install documentation for the 11.1.1.9 version of Oracle BI Applications here, there is a text file to modify and pass to the perl script. Both the perl script and the text file reside in the following directory: $ORACLE_HOME/biapps/tools/bin. In the text file, called apply_patches_import.txt, parameters are set with the paths to the following directories:

JAVA_HOME
INVENTORY_LOC
ORACLE_HOME
MW_HOME
COMMON_ORACLE_HOME
WL_HOME
ODI_HOME
WORKDIR
PATCH_ROOT_DIR
WINDOWS_UNZIP_TOOL_EXE (only needed if running on Windows platforms)

Some pro tips for modifying this text file:
1. Oracle recommends you use the JDK in the ORACLE_BI1 directory.
2. Use ORACLE_BI1 as the ORACLE_HOME.
3. Ensure WORKDIR and PATCH_ROOT_DIR are writeable directories.
4. Don’t add a path separator at the end of the path.
5. Commented lines are safe to remove.

Then you run APPLY_PATCHES.pl, passing in apply_patches_import.txt. If everything goes well, the perl script will finish with a patching report showing that all patches were applied successfully.

If this is the case, CONGRATULATIONS!! You can move on to the next step in the install documentation. Thanks for stopping by and come back soon! However, if any patch or group of patches failed, the rest of this post is for you.

Log file location

First, the above patching report does not tell you where to find the logs, regardless of success or failure. If you remember, though, you set a WORKDIR path in the text file earlier. In that directory you will find the following log files:

  1. final_patching_report.log
  2. biappshiphome_generic_patches.log
  3. odi_generic_patches.log
  4. oracle_common_generic_patches.log
  5. weblogic_patching.log

Open final_patching_report.log first to determine whether all patches were applied and to identify the ones that were not successful. For example, looking at this log may show that the Oracle Common patches failed.

cd $WORKDIR
vi final_patching_report.log

However, this doesn’t tell you what caused the failure. Next, we will want to look into oracle_common_generic_patches.log to gather more information.

From the $WORKDIR:

vi oracle_common_generic_patches.log

Here you will see the error: a component is missing. “Patch ######## requires component(s) that are not installed in OracleHome. These not-installed components are oracle.jrf.thirdparty.jee:11.1.1.7.0.” Notice also that this log contains a path to another log file location. The path is in the $COMMON_ORACLE_HOME/cfgtoollogs/opatch/ directory. This directory has more detailed logs specific to patches applied to oracle_common. Additionally, there are logs under $ORACLE_HOME/cfgtoollogs/opatch/, $WL_HOME/cfgtoollogs/opatch/, and $ODI_HOME/cfgtoollogs/opatch/. These locations are very helpful to know, so you can find the logs for each group of patches in the same relative path.

Going back to the above error, we are going to open the most recent log file listed in the $COMMON_ORACLE_HOME/cfgtoollogs/opatch/ directory.

cd $COMMON_ORACLE_HOME/cfgtoollogs/opatch/
vi opatch2015-08-08_09-20-58AM_1.log

The beginning of this log file has two very interesting pieces of information to take note of for use later. It has the actual OPatch command used, and it has a path to a Patch History file. Looks like we will have to page down in the file to find the error message.

Now we see our missing component error. Once the error occurs, the Java program starts backing out and then cleans up by deleting the files extracted earlier in the process. This log does have more detail, but still doesn’t say much about the missing component. After some digging around on the internet, I found a way to get more detailed information out of OPatch by setting the environment variable OPATCH_DEBUG=TRUE (export OPATCH_DEBUG=TRUE). After turning OPatch debugging on, re-run the OPatch command we found earlier at the top of the log. A new log file will be generated, and we want to open this most recent log file.

Finally, the results give me detailed information about the component and the failure.

Side note: if you are getting this specific error, I’ll refer you back to a previous blog post that happened to mention making sure to grab the correct versions of OBIEE and ODI. If you have the wrong version of OBIEE or ODI for the Oracle BI Apps version you are installing, unfortunately you won’t start seeing errors until you get to this point.

Manually running Oracle BI Application patches

Normally, the error or reason behind a patch or group of patches failing doesn’t take that level of investigation, and the issue will be identified in the first one or two logs. Once the issue is corrected, there are a couple of options available. Rerunning the perl script is one option, but it will cycle through all of the patches again, even the ones already applied. There is no harm in this, but it does take longer than running the individual patches. The other option is to run the OPatch command at the command line. To do that, first I would recommend setting the variables from the text file. I also added the Oracle_BI1/OPatch directory to the PATH variable.

export JAVA_HOME=$ORACLE_HOME/jdk
export INVENTORY_LOC=
export COMMON_ORACLE_HOME=$MW_HOME/oracle_common
export WL_HOME=$MW_HOME/wlserver_10.3
export SOA_HOME=$MW_HOME/Oracle_SOA1
export ODI_HOME=$MW_HOME/Oracle_ODI1
export WORKDIR=
export PATCH_FOLDER=/patches
export PATH=$ORACLE_HOME/OPatch:$JAVA_HOME/bin:$PATH

Next, unzip the patches in the required directories. For example, after unzipping, the $PATCH_FOLDER/oracle_common/generic directory will contain the individual patch folders.

Below are the commands for each group of patches:

Oracle Common Patches:

cd $PATCH_FOLDER/oracle_common/generic
unzip "*.zip"

$COMMON_ORACLE_HOME/OPatch/opatch napply $PATCH_FOLDER/oracle_common/generic -silent -oh $COMMON_ORACLE_HOME -id 16080773,16830801,17353546,17440204,18202495,18247368,18352699,18601422,18753914,18818086,18847054,18848533,18877308,18914089,19915810

BIApps Patches:

cd $PATCH_FOLDER/biappsshiphome/generic
unzip "*.zip"

opatch napply $PATCH_FOLDER/biappsshiphome/generic -silent -id 16913445,16997936,19452953,19526754,19526760,19822893,19823874,20022695,20257578

ODI Patches:

cd $PATCH_FOLDER/odi/generic
unzip "*.zip"

$ODI_HOME/OPatch/opatch napply $PATCH_FOLDER/odi/generic -silent -oh $ODI_HOME -id 18091795,18204886

Operating Specific Patches:

cd $PATCH_FOLDER/
unzip "*.zip"

opatch napply $PATCH_FOLDER/ -silent -id ,,

Weblogic Patches:

cd $PATCH_FOLDER/suwrapper/generic
unzip "*.zip"

cd $PATCH_FOLDER/weblogic/generic

$JAVA_HOME/bin/java -jar $PATCH_FOLDER/suwrapper/generic/bsu-wrapper.jar -prod_dir=$WL_HOME -install -patchlist=JEJW,LJVB,EAS7,TN4A,KPFJ,RJNF,2GH7,W3Q6,FKGW,6AEJ,IHFB -bsu_home=$MW_HOME/utils/bsu -meta=$PATCH_FOLDER/suwrapper/generic/suw_metadata.txt -verbose > $PATCH_FOLDER/weblogic_patching.log

Even though this was a very specific error used as an example, understanding the logs and having the breakdown of all of the patches will help with any number of patch errors at this step of the Oracle BI Applications installation. I would love to hear your thoughts if you found this helpful or if any part was confusing. Keep an eye out for the next Becky’s BI Apps Corner, where I move on from installs and start digging into incremental logic and Knowledge Modules.


Becky’s BI Apps Corner: Installing on Windows Server 2012

Recently I was installing Oracle BI Apps on a VM for a custom training, as mentioned in a previous post. I set up the VM with Windows Server 2012 after verifying in the certification matrix that the Oracle BI Apps version (11.1.1.9.2) was certified to run on Windows Server 2012.

As I was going along, I was getting errors during some of the installs on the pre-requisite checks for the operating system (OS), as can be the case when the OS gets certified after the initial release of that version of software. The workaround is available for that error in MOS Doc ID 1577652.1, and involves adding the OS to an .xml file… Okay, I admit it. I didn’t actually update the .xml file. I just clicked Continue past the pre-requisite check. Since there didn’t seem to be any repercussions and the installs completed successfully, I continued on my merry way through the installation instructions.

When I was running OBIEE’s configuration (config.bat), it failed at 35% on the step Setting Up BI Publisher. The install_DATE_TIME.log file didn’t have enough information, so I dug into the install_DATE_TIME.out file. In it, there was the following error:

updateBIPConfigFiles: OSError: (0, 'Failed to execute command (['sh', '-c', 'java -classpath C:\\Oracle\\Product\\BI_11.1\\Oracle_BI1\\clients\\bipublisher\\xdo-server.jar oracle.xdo.install.UpdateConfigFiles 9704 9703 9710 jdbc/mds/owsm C:\\Oracle\\Product\\BI_11.1\\user_projects\\domains\\bifoundation_domain']): java.io.IOException: Cannot run program "sh": CreateProcess error=2, The system cannot find the file specified')

I left the config.bat open (it only takes one time to learn not to cancel during config.bat or configApps.bat) and took a trip over to my friendly neighborhood support.oracle.com, where I found MOS Doc ID 1580583.1. Essentially, it says to dig into and update a jar file, and then directs you to Oracle’s Java Tutorials to learn how. I decided to do just that and post the steps here.

Forays into Java

The MOS doc says I need to update a file called javashell.py by adding the string “Windows Server 2012” to the list of operating system names. This javashell.py file is archived inside a jar file located at common\wlst\modules\jython-modules.jar under the WebLogic home.

A jar file is simply a package of a bunch of files. To view the contents of a jar file, the command is:
jar tf jar-file

In my command window, I navigated to the folder where jython-modules.jar was. For this particular jar, the number of files was bigger than my command window would show, so I sent the output to a text file.

The command I used was:

C:\app\product\fmw\wlserver_10.3\common\wlst\modules>c:\java\jdk1.7.0_85\bin\jar tf jython-modules.jar > log.txt

In jython-modules.jar, there was a Lib\javashell.py. Now that I had found my file, I needed to extract it from the jar. To do that, the command is:
jar xf jar-file [archived-file(s)]

Since I was still in the same directory, I used the following command:

C:\app\product\fmw\wlserver_10.3\common\wlst\modules>c:\java\jdk1.7.0_85\bin\jar xf jython-modules.jar Lib/javashell.py

The file is now extracted and can be edited. I opened the javashell.py file in Notepad++, but you can open/edit it in your editor of choice. Not too far into the Python script, I found os.name and the list of other operating systems. I’m not particularly skilled in Python, but the change seemed pretty straightforward: I just added a comma and ‘Windows Server 2012’ at the end of the “nt” operating systems list. I am certain this would work for other operating systems added to the certification matrix after the software was released.

Once I saved my changes, I needed to get this javashell.py file back into jython-modules.jar.

The command to update a file inside of a jar is:
jar uf jar-file input-file(s)

I used:

C:\app\product\fmw\wlserver_10.3\common\wlst\modules>c:\java\jdk1.7.0_85\bin\jar uf jython-modules.jar Lib/javashell.py

With everything in order, I went back to the config.bat window, which was still running. I scrolled to the top, checked the box next to the first line, and clicked Retry.

The install completed without any further errors! We got to play with Java jar files and a Python script to resolve an error with the config.bat installer. These steps will allow you to add any OS, which can come in handy for operating systems that are certified after the initial release of a software version. Let me know in the comments all the different operating systems where you have installed BI Apps. So far, I’ve installed on RHEL 5 and 6, Solaris 10, Windows Server 2008, and Windows Server 2012, each with interesting little quirks. Also, if you have training needs, check out our new trainings for 2016, and contact us! And of course, keep an eye out for more Becky’s BI Apps Corner coming soon.
