
Becky’s BI Apps Corner: OBIA 11.1.1.10.2 New Features Part 1 – Health Check & ETL Diagnostics

I have been working with BI Applications since OBIA switched to ODI in version 11.1.1.7.0. I have installed and worked with all of the 11.x versions using several different sources. This most recent version of OBIA may only be a patch-set release, but it has some new features that make it very compelling: automatic ETL error handling, Health Check, ETL Diagnostics, and built-in email notifications.

If you've been using an earlier version of OBIA 11.x (or 7.x for that matter), now may be the time to upgrade. In an Oracle Advisor's Webcast, Rajesh Lakra, BIApps Global Product Support Lead Engineer, said this version will be the upgrade path to OBIA 12.x, which is slated for release sometime in 2017. Also, it may just be the circumstances I've been exposed to, but this latest version seems more performant as well. Since I've not yet had an opportunity to benchmark it against the same source data, I can't verify that; however, I am setting up some environments to do just that, so keep an eye out for a future blog post with performance comparisons.

Load Plan

Check if there are any other running load plans

Check Previous Load Plan Runs is a package that only has the following procedure:
Check State of Previous Load Plan Runs
1) Checks the status of previous load plan runs and throws an exception if any other load plan is still in a Running state.
2) Checks for a blackout schedule period in the w_etl_control_schedule table. If a blackout schedule is in effect, the load plan fails and exits.

I’ve not found any documentation about this package or procedure. However, the procedure itself has three steps.

  1. Check if Prev Load Plans are Running

    a. SELECT COUNT(1) FROM SNP_LPI_RUN WHERE STATUS IN ('R','W') AND I_LP_INST!=(?)

    b. >>>>>" + ctLPRunning + " Load plan(s) found to be running/waiting apart from the current one. ABORTING this Load Plan as running multiple Load Plans in Parallel can cause DATA CORRUPTION <<<<<<

  2. Check Blackout schedule

    a. select -1 from w_etl_control_schedule cs where sysdate between cs.control_start_dt and cs.control_end_dt and cs.control_code = 'NO_ETL'

    b. >>>>> Blackout schedule coming up as seen in w_etl_control_schedule table, hence no Load plan(s) will be executed. ABORTING this Load Plan as there will be a down time to the repository due to the Blackout schedule <<<<<<

  3. Check Source Upgrade

    a. SELECT 1 FROM W_ETL_SRC_VERSION_HISTORY WHERE SRC_PLV_NEW IS NOT NULL AND SRC_UPGRADE_STATUS IN ('PENDING','ERROR','STARTED')

    b. >>>>>Found Incomplete Source Upgrade status from the Source Upgrade tracking table (W_ETL_SRC_VERSION_HISTORY). For more information, please refer to the Warehouse table W_ETL_SRC_VERSION_HISTORY for incomplete Source Upgrade status rows and take necessary action to run Source Upgrade Load plans <<<<<<

I believe this is a good feature to add because Oracle's OBIA documentation has always recommended not running more than one load plan at a time. Now, if another load plan is already running, the new load plan stops automatically with a message warning that running multiple load plans in parallel can cause data corruption.

I think it is interesting to see this Blackout schedule and Source Upgrade. I've done some looking in the documentation and on support.oracle.com but didn't come up with any clues as to what exactly these are for. It seems to me that the Blackout schedule is a calendar of sorts for when the repository will be down for maintenance. As for the Source Upgrade, that is more of a mystery to me.
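Judging by the columns referenced in the procedure's query, a blackout window could presumably be registered with a row like the following. This is only a sketch; the actual w_etl_control_schedule table may have additional required columns.

-- Hypothetical example: register a maintenance window so no load plans
-- run between 10pm and 4am. Column list is inferred from the procedure's
-- query against w_etl_control_schedule and may be incomplete.
INSERT INTO w_etl_control_schedule (control_code, control_start_dt, control_end_dt)
VALUES ('NO_ETL',
        TO_DATE('2016-11-05 22:00', 'YYYY-MM-DD HH24:MI'),
        TO_DATE('2016-11-06 04:00', 'YYYY-MM-DD HH24:MI'));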

Next step in the Load Plan is a Health Check.

Health Check and Diagnostics

Oracle's OBIA ETL Guide has a short paragraph on the Health Check.

Health Check is a preliminary ETL phase in which a diagnostic report is generated to identify problematic source data that might cause ETL failure or data loss or corruption in the data warehouse. The report is downloaded from Configuration Manager and includes any problematic data, a description of detected issues, and actions to resolve health check failures.

Referring again to Oracle's OBIA ETL Guide, the Manage Load Plans page is where you can download the Health Check. Highlight the relevant load plan that has run and/or failed and click Show Data Problems.


Select the desired DIAGNOSTIC HEALTHCHECK and download the zip file to your computer.

This file gets generated through a number of different steps starting with the Execute Healthcheck package and followed by the Create Report and Summarize Healthcheck package.

Execute Healthcheck is a package that only has the following procedure:
Run Diagnostics
This procedure has 12 steps. The two that are the most interesting are:
Define Execute Diagnostic Function and Run Diagnostics

The bulk of the code for the diagnostics is in the step Define Execute Diagnostic Function. The code is Jython, and it is really too long to paste here, but I highly recommend digging into how it works.

Then the step Run Diagnostics actually kicks off the process.

Once the diagnostics are run, there is a step in the load plan called Create Report and Summarize Healthcheck. This package has two procedures.

The first procedure, Create Diagnostic Report, does exactly what you might guess: it does a lot of formatting and puts the diagnostic report into an HTML page.
The second procedure, Summarize Healthcheck Result, takes the information, inserts it into a table, and then organizes it for an email to be sent out.

During the September 21st Oracle Advisor Webcast on BI Applications 11.1.1.10.2 New Release Overview, Rajesh Lakra, BIApps Global Product Support Lead Engineer, said the Health Check is most useful during a domain load. It checks to ensure source domain data is valid, collectively letting you know if these domains will cause issues further down in the ETL process.

In Part 2 of OBIA 11.1.1.10.2 New Features, I will show you what the HTML files look like, how to set up the email process, and how automatic error handling now works within many of the knowledge modules. Tune in next time for more exciting parts of the new features in this version of BI Applications!

Becky’s BI Apps Corner: OBIA Back-to-Beginnings – Naming Conventions and Jargon

It's easy to talk about a technology using only jargon. It's much harder to talk about a technology without using jargon. I have seen many meetings between business and IT break down because of this communication barrier. I find it more discouraging when I see this communication breakdown happen between advanced IT staff and new IT staff. For those of us in any technological field, it's easy to forget how long it took to learn all of the ins and outs, the terminology and jargon.

During a recent project, I had another consultant shadowing me to get experience with OBIA. (Hi, Julia!) I was 'lettering' a lot, so I decided it was time to diagram my jargon. My scribbles on a whiteboard gave me the idea that it might be helpful to do a bit of connecting the dots between OBIA and data warehousing jargon and the naming conventions used in OBIA.


BI Applications Load Plan phases:

SDE - Source Dependent Extract

SDE is the first phase in the ETL process that loads source data into the staging area. SDE tasks are source database specific. SDE mappings that run in the load plan will load staging tables. These tables end with _DS and _FS among others.

SIL - Source Independent Load

SIL is the second phase in the ETL process; it takes the staged data from the staging tables, transforms it, and loads it into the target tables. SILOS mappings that run in the load plan will load dimension and fact tables. These tables end with _D and _F, among others.
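To make the pattern concrete, here is a minimal sketch of a SIL-style load, using hypothetical table and column names rather than the actual OBIA definitions. Real SIL mappings also handle surrogate keys, SCD logic, and domain lookups.

-- Hypothetical sketch: move staged rows (_DS) into the target dimension (_D).
INSERT INTO w_example_d (integration_id, attribute_1, effective_from_dt)
SELECT integration_id, attribute_1, effective_from_dt
FROM   w_example_ds;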

PLP - Post Load Process

This third and final phase in the ETL process occurs after the target tables have been loaded and is commonly used for loading aggregate fact tables. PLP mappings that run in the load plan will load aggregate tables ending with _A. Aggregate tables are often fact table data that has been summed up by a common dimension. For example, a common report might look at finance data by month; using an aggregate table by fiscal period would help improve reporting response time, as in the sketch below.
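Here is a minimal sketch of that aggregate pattern, again with hypothetical table and column names (the real PLP mappings and aggregate definitions differ):

-- Hypothetical sketch: sum fact rows up to one row per fiscal period
-- and account, so period-level reports avoid scanning the detail fact.
INSERT INTO w_finance_a (fiscal_period_wid, account_wid, total_amt)
SELECT fiscal_period_wid, account_wid, SUM(amt)
FROM   w_finance_f
GROUP BY fiscal_period_wid, account_wid;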

For further information about any of the other table types, be sure to read Table Types for Oracle Business Analytics Warehouse. Additionally, this page has probably the best explanation for staging tables and incremental loads.


Source System Acronyms

Since the SDE tasks are source database specific, the SDE mappings' names also include an acronym for the source system. Below are the supported source systems, the acronym used in the mapping names, and an example of each.

  • Oracle E-Business Suite - ORA

    • SDE_ORA_DomainGeneral_Currency
  • Oracle Siebel - SBL

    • SDE_SBL_DOMAINGENERAL_CURRENCY
  • JD Edwards Enterprise One - JDEE

    • SDE_JDE_DomainGeneral_Currency
  • PeopleSoft - PSFT

    • SDE_PSFT_DomainGeneral_Currency_FINSCM
  • Oracle Fusion Applications - FUSION

    • SDE_FUSION_DomainGeneral_Currency
  • Taleo - TLO

    • SDE_TLO_DomainGeneral_Country
  • Oracle Service Cloud - RNCX

    • SDE_RNCX_DomainGeneral
  • Universal - Universal

    • SDE_Universal_DomainGeneral

This wraps up our quick "Back-to-Beginnings" refresher on naming conventions and the jargon used in relation to ETL and mappings. Let me know in the comments below if there are other topics you would like me to cover in my "Back-to-Beginnings" series. As always, be sure to check out our available training, which now includes remote training options, and our On Demand Training Beta Program. For my next post I'll be covering two new features in OBIA 11.1.1.10.2, Health Check and ETL Diagnostics, which are the missing pieces you didn't know you've been waiting for.

Becky’s BI Apps Corner: Incrementals and Future dated Employee records

During the last few posts, we have delved into a few of the many interesting aspects of a BI Apps installation. Today I want to change gears a bit and talk about what starts to happen when you are past installation and configuration and begin running load plans. On a recent client project, I worked through a unique constraint error on W_EMPLOYEE_D that I found really interesting, as it related to how the incremental logic works in the knowledge module (KM). Before I can really get into the workaround, we need to understand how incremental loads work in general for BI Apps.

High Level Overview

In the initial run, the load will grab a full set of data, i.e. all data from the source system, based on the data load parameters set during configuration. The same load plan is then used to load data incrementally, picking up only data that has changed since the most recent load plan completed (the Last Extract Date). The pre-built mappings have incremental change capture built into the knowledge module logic: when a load runs, it extracts records that have changed or been created since the Last Extract Date. The load plan determines which rows to extract using the formula Source Last Updated Date >= (Last Extract Date - Prune Days).
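Expressed as SQL, that formula amounts to a filter like the one below. This is only conceptual; the column name, source table, and the substitution of the date values vary by adaptor.

-- Conceptual incremental extract filter: pick up rows changed since the
-- last successful extract, widened backward by the prune-days margin.
SELECT *
FROM   source_table   -- hypothetical source
WHERE  last_update_date >= (TO_DATE('2016-11-01', 'YYYY-MM-DD') - 5);  -- Last Extract Date - Prune Days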

In the weeds

Is it an incremental load? How does that get decided? Actually, that isn’t decided at the load plan level. Each individual package (run as a scenario) starts with a step that refreshes a variable called #IS_INCREMENTAL.

This variable's refresh logic determines whether this package previously completed successfully. After every successful completion, an entry gets made into W_ETL_LOAD_DATES with the package name and a date timestamp, amongst other audit information.
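The refresh query behaves roughly like the sketch below. W_ETL_LOAD_DATES and the package name/timestamp entries come from the behavior described above; the exact column names and variable logic are assumptions.

-- Sketch: if the package has a prior successful run recorded in
-- W_ETL_LOAD_DATES, this run should be incremental.
SELECT CASE WHEN COUNT(*) > 0 THEN 'Y' ELSE 'N' END AS is_incremental
FROM   w_etl_load_dates
WHERE  package_name = 'SIL_EXAMPLE_PACKAGE';  -- hypothetical package name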

So now we have a scenario running with #IS_INCREMENTAL set to 'Y'. What does the Knowledge Module (KM) do? Incremental runs normally have steps that load an I$ (flow) table from the source logic and then update the records in the target table based on the DETECTION_STRATEGY option in the KM. For fact table loads, the option accepts the following values (explanations given):

  • OUTER: An outer join to the target table is used when populating the flow table, in order to determine insert/update/useless records.
  • NOT_EXISTS: A NOT EXISTS clause is used when populating the flow table, in order to exclude records that exist identically in the target.
  • POST_FLOW: All records from the source are loaded into the flow table. After that, an update statement is used to flag all rows in the flow table that exist identically in the target.
  • NONE: All records from the source are loaded into the flow table. All target records are updated, even when a target record is identical to the flow table record.

In most cases the option OUTER is used for facts, which updates the records based on primary keys (PKs). Incremental decisions are based on the values of the system date columns, like the CHANGED_ON_DT, AUX1_CHANGED_ON_DT, AUX2_CHANGED_ON_DT, AUX3_CHANGED_ON_DT and AUX4_CHANGED_ON_DT columns populated from the source. This performs better than the NOT_EXISTS and POST_FLOW options, which compare each and every column to identify the records already present. Conceptually, it works like the sketch below.
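This is a simplified sketch of the idea, not the generated KM code; the table names and everything beyond the CHANGED_ON_DT convention are hypothetical.

-- Simplified OUTER detection: outer join the target and flag each staged
-- row as insert, update, or unchanged using the PK and change-date column.
SELECT s.integration_id,
       s.changed_on_dt,
       CASE
         WHEN t.integration_id IS NULL          THEN 'I'  -- new row
         WHEN s.changed_on_dt > t.changed_on_dt THEN 'U'  -- changed row
         ELSE 'N'                                         -- unchanged
       END AS ind_update
FROM   w_example_fs s          -- staging
LEFT OUTER JOIN w_example_f t  -- target fact
       ON t.integration_id = s.integration_id;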

For Slowly Changing Dimensions (like W_EMPLOYEE_D), the DETECTION_STRATEGY option can take the following values (explanations given):

  • MINUS: A MINUS clause is used when populating the flow table, in order to exclude records that exist identically in the target.
  • NOT_EXISTS: A NOT EXISTS clause is used when populating the flow table, in order to exclude records that exist identically in the target.

The default option is NOT_EXISTS, and incremental decisions are based on PKs and the date columns, along the lines of the sketch below.
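Here is a simplified sketch of that strategy for W_EMPLOYEE_D; the exact key and date column names are assumptions based on the conventions above.

-- Simplified NOT_EXISTS detection for an SCD target: keep only staged rows
-- with no identical match on the PK and change-date columns.
SELECT s.*
FROM   w_employee_ds s
WHERE  NOT EXISTS (
         SELECT 1
         FROM   w_employee_d t
         WHERE  t.integration_id    = s.integration_id
         AND    t.effective_from_dt = s.effective_from_dt  -- part of the PK
         AND    t.changed_on_dt     = s.changed_on_dt);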

Future dated rows

Imagine now that during a full load, all records from the source tables for EMPLOYEES are brought forward into the data warehouse table W_EMPLOYEE_D. One of those records is an entry with an effective start date two weeks in the future. For W_EMPLOYEE_D, one of the columns in the primary key is the effective start date. Fast forward two weeks to the date when the future dated row's effective start date is the current date. During the incremental load on that date, the incremental logic for this one record compares the primary keys and all of the change indicator columns, and sees that the effective start date is greater than the last extract date from last night. The incremental comparison incorrectly determines this is a record that needs to be added to the dimension table, even though the record is already there. Now we have an ERROR! The familiar unique constraint on the _U1 unique index rears its ugly head. On top of that, troubleshooting this duplicate turns up no duplicate records in the usual places (I$ tables, _DS tables, source tables, nada!). Isolating the two identical records and tracking them back to the source tables, however, there is just the one source record, and the only clue is that its effective start date is today's date. After a second occurrence, and some discussion and back and forth on an SR, a workaround is now available.

Workaround

Step 1. Remove any Future dated rows in W_EMPLOYEE_D

Step 2. Add a filter on the interface to prevent future dated rows from loading into W_EMPLOYEE_DS until their effective start date is <= the current date. (A sketch of both steps follows.)
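As a sketch (column names are assumptions based on the standard table layout), step 1 amounts to locating and then removing the future dated rows, and step 2 to a filter condition on the SDE interface:

-- Step 1 sketch: find (and then remove) future dated rows in the dimension.
SELECT row_wid, integration_id, effective_from_dt
FROM   w_employee_d
WHERE  effective_from_dt > SYSDATE;

-- Step 2 sketch: filter condition on the interface, so future dated rows
-- stay out of W_EMPLOYEE_DS until their start date arrives:
--   EMPLOYEES.EFFECTIVE_START_DATE <= SYSDATE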

At our client, this mapping continues to run without any additional errors. The steps here are most likely version specific, and this issue is a known bug to Oracle, so please don't hesitate to open an SR if you are hitting this specific issue; a quick turnaround is very likely.

There are some other odds and ends about how incremental load plans work, and I plan to gather them up for another post in the coming weeks. If you want to learn more of the ins and outs of incrementals and more, join me for the upcoming remote ODI for BIApps course on March 14th-16th. We have only a few spots left, so sign up here.


Becky’s BI Apps Corner: OBIA install Perl Script Patching and troubleshooting when they fail.

During a recent project installing Oracle BI Applications, I became much better acquainted with OPatch, Oracle's standard tool for managing application patches. By acquainted, I mean I learned how to troubleshoot when OPatch patching fails. Since, at last count, there are around 50 patches total for Oracle BI Applications 11.1.1.9.2, the first patching attempt may not apply all patches successfully. There are any number of reasons for a failure: an extra slash at the end of a path, a misspelled word, or WebLogic or NodeManager still running, for example. We will take a look at the logs for each section, learn where additional logs can be found, and learn how to turn on OPatch debugging to better understand an issue. Then, following the ideas from a previous OPatch post by Robin, I'll describe how to manually apply the patches with OPatch at the command line for any patches that weren't applied successfully by the provided perl script.

*Disclaimers – Please read the readme files for patches and follow all Oracle recommendations. Patch numbers are subject to change depending on OS and OBIA versions. Commands and paths here are of the linux/unix variety, but there are similar commands available for Windows OS.

Perl Script patching

Unzip the patch files to a patch folder. I have included the OBIEE patch as well.

unzip pb4biapps_11.1.1.9.2_.zip -d patches/
unzip pb4biapps_11.1.1.9.2_generic_1of2.zip -d patches/
unzip pb4biapps_11.1.1.9.2_generic_2of2.zip -d patches/
unzip p20124371_111170_.zip -d patches/

While installing Oracle BI Applications versions 11.1.1.7 and up, patches get applied with a perl script called APPLY_PATCHES.pl. Following Oracle's install documentation for the 11.1.1.9 version of Oracle BI Applications here, there is a text file to modify and pass to the perl script. Both the perl script and the text file reside in the following directory: $ORACLE_HOME/biapps/tools/bin. In the text file, called apply_patches_import.txt, parameters are set with the paths to the following directories:

JAVA_HOME
INVENTORY_LOC
ORACLE_HOME
MW_HOME
COMMON_ORACLE_HOME
WL_HOME
ODI_HOME
WORKDIR
PATCH_ROOT_DIR
WINDOWS_UNZIP_TOOL_EXE (only needed if running on Windows platforms)

Some pro tips for modifying this text file:
1. Oracle recommends you use the JDK in the ORACLE_BI1 directory.
2. Use ORACLE_BI1 as the ORACLE_HOME.
3. Ensure WORKDIR and PATCH_ROOT_DIR are writeable directories.
4. Don’t add a path separator at the end of the path.
5. Commented lines are safe to remove.

Then you run APPLY_PATCHES.pl, passing in apply_patches_import.txt. If everything goes well, the summary at the end of the perl script will report each group of patches as successfully applied.
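The invocation looks something like the following sketch, run from the directory noted above (check the install guide for the exact syntax for your environment):

# Run the patch script against the edited parameter file.
cd $ORACLE_HOME/biapps/tools/bin
perl APPLY_PATCHES.pl apply_patches_import.txt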

If this is the case, CONGRATULATIONS!!, you can move on to the next step in the install documentation. Thanks for stopping by and come back soon! However, if any patch or group of patches failed, the rest of this post is for you.

Log file location

First, the above patching report does not tell you where to find the logs, regardless of success or failure. If you remember, though, you set a WORKDIR path in the text file earlier. That directory is where you will find the following log files:

  1. final_patching_report.log
  2. biappshiphome_generic_patches.log
  3. odi_generic_patches.log
  4. oracle_common_generic_patches.log
  5. weblogic_patching.log

Open final_patching_report.log first to determine whether all patches were applied, and to identify any that were not successful. For example, looking at this log may show that the Oracle Common patches failed.

cd $WORKDIR
vi final_patching_report.log

However, this doesn’t tell you what caused the failure. Next we will want to look into the oracle_common_generic_patches.log to gather more information.

From the $WORKDIR:

vi oracle_common_generic_patches.log

Here you will see the error: a component is missing. Patch ######## requires component(s) that are not installed in OracleHome. These not-installed components are oracle.jrf.thirdparty.jee:11.1.1.7.0. Notice also that this log gives a path to another log file location, in the $COMMON_ORACLE_HOME/cfgtoollogs/opatch/ directory. That directory has more detailed logs specific to patches applied to oracle_common. Additionally, there are logs under $ORACLE_HOME/cfgtoollogs/opatch/, $WL_HOME/cfgtoollogs/opatch/, and $ODI_HOME/cfgtoollogs/opatch/. These locations are very helpful to know, so you can find the logs for each group of patches in the same relative path.

Going back to the above error, we are going to open the most recent log file listed in the $COMMON_ORACLE_HOME/cfgtoollogs/opatch/ directory.

cd $COMMON_ORACLE_HOME/cfgtoollogs/opatch/
vi opatch2015-08-08_09-20-58AM_1.log

The beginning of this log file has two very interesting pieces of information to take note of for use later. It has the actual OPatch command used, and it has a path to a Patch History file. Looks like we will have to page down in the file to find the error message.

Now we see our missing component error. Once the error occurs, the java program starts backing out and then cleans up by deleting the files extracted earlier in the process. This log has more detail, but still doesn't say much about the missing component. After some digging around on the internet, I found a way to get more detailed information out of OPatch by setting export OPATCH_DEBUG=TRUE. After turning OPatch debugging on, rerun the OPatch command we found earlier at the top of the log; a new log file will be generated, and we want to open that most recent log file.
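In short, the sequence is something like this sketch (the napply arguments are whatever the earlier log captured, left elided here):

# Turn on verbose OPatch debugging, then rerun the exact command
# recorded at the top of the previous opatch log.
export OPATCH_DEBUG=TRUE
$COMMON_ORACLE_HOME/OPatch/opatch napply ...   # arguments from the log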

Finally, the results now get me detailed information about the component and the failure.

Side Note: If you are getting this specific error, I’ll refer you back to a previous blog post that happened to mention making sure to grab the correct version of OBIEE and ODI. If you have a wrong version of OBIEE or ODI for the Oracle BI Apps version you are installing, unfortunately you won’t start seeing errors until you get to this point.

Manually running Oracle BI Application patches

Normally, the error or reason behind a patch or group of patches failing doesn't take that level of investigation, and the issue will be identified in the first one or two logs. Once the issue is corrected, there are a couple of options available. Rerunning the perl script is one option, but it will cycle through all of the patches again, even the ones already applied. There is no harm in this, but it takes longer than running the individual patches. The other option is to run the OPatch commands at the command line. To do that, I would first recommend setting the variables from the text file. I also added the Oracle_BI1/OPatch directory to the PATH variable.

export ORACLE_HOME=
export MW_HOME=
export JAVA_HOME=$ORACLE_HOME/jdk
export INVENTORY_LOC=
export COMMON_ORACLE_HOME=$MW_HOME/oracle_common
export WL_HOME=$MW_HOME/wlserver_10.3
export SOA_HOME=$MW_HOME/Oracle_SOA1
export ODI_HOME=$MW_HOME/Oracle_ODI1
export WORKDIR=
export PATCH_FOLDER=/patches
export PATH=$ORACLE_HOME/OPatch:$JAVA_HOME/bin:$PATH

Next, unzip the patches in the required directories, for example $PATCH_FOLDER/oracle_common/generic.

Below are the commands for each group of patches:

Oracle Common Patches:

cd $PATCH_FOLDER/oracle_common/generic
unzip "*.zip"

$COMMON_ORACLE_HOME/OPatch/opatch napply $PATCH_FOLDER/oracle_common/generic -silent -oh $COMMON_ORACLE_HOME -id 16080773,16830801,17353546,17440204,18202495,18247368,18352699,18601422,18753914,18818086,18847054,18848533,18877308,18914089,19915810

BIApps Patches:

cd $PATCH_FOLDER/biappsshiphome/generic
unzip "*.zip"

opatch napply $PATCH_FOLDER/biappsshiphome/generic -silent -id 16913445,16997936,19452953,19526754,19526760,19822893,19823874,20022695,20257578

ODI Patches:

cd $PATCH_FOLDER/odi/generic
unzip "*.zip"

$ODI_HOME/OPatch/opatch napply $PATCH_FOLDER/odi/generic -silent -oh $ODI_HOME -id 18091795,18204886

Operating Specific Patches:

cd $PATCH_FOLDER/
unzip "*.zip"

opatch napply $PATCH_FOLDER/ -silent -id ,,

Weblogic Patches:

cd $PATCH_FOLDER/suwrapper/generic
unzip "*.zip"

cd $PATCH_FOLDER/weblogic/generic

$JAVA_HOME/bin/java -jar $PATCH_FOLDER/suwrapper/generic/bsu-wrapper.jar -prod_dir=$WL_HOME -install -patchlist=JEJW,LJVB,EAS7,TN4A,KPFJ,RJNF,2GH7,W3Q6,FKGW,6AEJ,IHFB -bsu_home=$MW_HOME/utils/bsu -meta=$PATCH_FOLDER/suwrapper/generic/suw_metadata.txt -verbose > $PATCH_FOLDER/weblogic_patching.log

Even though this is a very specific error as an example, understanding the logs and having the breakdown of all of the patches will help with any number of patch errors at this step of the Oracle BI Applications installation. I would love to hear your thoughts if you found this helpful or if any part was confusing. Keep an eye out for the next Becky's BI Apps Corner, where I move on from installs and start digging into incremental logic and Knowledge Modules.


Becky’s BI Apps Corner: Installing on Windows Server 2012

Recently I was installing Oracle BI Apps on a VM for a custom training, as mentioned in a previous post. I set up the VM with Windows Server 2012 after verifying in the certification matrix that the Oracle BI Apps version (11.1.1.9.2) was certified to run on Windows Server 2012.

As I was going along, I was getting errors during some of the installs on the pre-requisite checks for the operating system (OS), as can be the case when the OS gets certified after the initial release of that version of software. The workaround is available for that error in MOS Doc ID 1577652.1, and involves adding the OS to an .xml file… Okay, I admit it. I didn’t actually update the .xml file. I just clicked Continue past the pre-requisite check. Since there didn’t seem to be any repercussions and the installs completed successfully, I continued on my merry way through the installation instructions.

When I was running OBIEE’s configuration (config.bat), it failed at 35% on the step Setting Up BI Publisher. The install_DATE_TIME.log file didn’t have enough information, so I dug into the install_DATE_TIME.out file. In it, there was the following error:

updateBIPConfigFiles: OSError: (0, 'Failed to execute command (['sh', '-c', 'java -classpath C:\\Oracle\\Product\\BI_11.1\\Oracle_BI1\\clients\\bipublisher\\xdo-server.jar oracle.xdo.install.UpdateConfigFiles 9704 9703 9710 jdbc/mds/owsm C:\\Oracle\\Product\\BI_11.1\\user_projects\\domains\\bifoundation_domain']): java.io.IOException: Cannot run program "sh": CreateProcess error=2, The system cannot find the file specified')

I left config.bat open (it only takes one time to learn not to cancel during config.bat or configApps.bat) and took a trip over to my friendly neighborhood support.oracle.com, where I found MOS Doc ID 1580583.1. Essentially, it says to dig into and update a jar file, and then directs you to Oracle's Java Tutorials to learn how. I decided to do just that and post the steps here.

Forays into Java

The MOS doc says I need to update a file called javashell.py by adding the string "Windows Server 2012" to the list of os.names. This javashell.py file is archived inside a jar file located at common\wlst\modules\jython-modules.jar.

A jar file is simply a package of a bunch of files. To view content of a jar file, the command is:
jar tf jar-file

In my command window, I navigated to the folder where the jython-modules.jar was. For this particular jar, the number of files was bigger than my command window would show, so I sent it to a text file.

The command I used was:

C:\app\product\fmw\wlserver_10.3\common\wlst\modules>c:\java\jdk1.7.0_85\bin\jar tf jython-modules.jar > log.txt

In the jython-modules.jar, there was a Lib/javashell.py. Now that I had found my file, I needed to extract the file from the jar. To do that, the command is:
jar xf jar-file [archived-file(s)]

Since I was still in the same directory, I used the following command:

C:\app\product\fmw\wlserver_10.3\common\wlst\modules>c:\java\jdk1.7.0_85\bin\jar xf jython-modules.jar Lib/javashell.py

The file is now extracted and can be edited. I opened javashell.py in Notepad++; you can open and edit it in your editor of choice. Not too far into the python script, I found os.name and the list of operating systems. I'm not particularly skilled in python, but the change seemed pretty straightforward: I just added a comma and 'Windows Server 2012' at the end of the "nt" operating systems. I am certain this would work for other operating systems added to the certification matrix after the software was released.

Once I'd saved my changes, I needed to get this javashell.py file back into jython-modules.jar.

The command to update a file inside of a jar is:
jar uf jar-file input-file(s)

I used:

C:\app\product\fmw\wlserver_10.3\common\wlst\modules>c:\java\jdk1.7.0_85\bin\jar uf jython-modules.jar Lib/javashell.py

With everything in order, I went back to the config.bat window, which was still running. I scrolled to the top, checked the box next to the first line, and clicked Retry.

The install completed without any further errors! We got to play with jar files and a python script to resolve an error with the config.bat installer. These steps will allow you to update for any OS, which can come in handy for any OS certified after the initial release of a version of the software. Let me know in the comments all the different operating systems where you have installed BI Apps. So far, I've installed on RHEL 5 and 6, Solaris 10, Windows Server 2008, and Windows Server 2012, each with interesting little quirks. Also, if you have training needs, check out our new trainings for 2016, and contact us! And of course, keep an eye out for more Becky's BI Apps Corner coming soon.
