Category Archives: Art of BI

FDMEE 11.2.2.0 Troubleshooting – Part 2

Before reading further, you need to first review my earlier post, “FDMEE 11.2.2.0 Multiple Problems – Buggy Release – Part 1.”

Clarification: this post is about the Linux edition only. The Windows edition does not have these issues.

Without a complete FDMEE database schema (the post linked above walks you through repairing it), your Oracle EPM/Hyperion FDMEE 11.2.2.0 implementation is doomed.

Now, sadly, we have to dive into Oracle WebLogic 12c. You will need to launch the WebLogic Admin Console so you may follow along.

First problem: nothing is targeted to FDMEE (“ErpIntegrator” in WebLogic’s lingo):

[Screenshot]

In addition, a critical deployment is missing. Can you see which one?

[Screenshot]

Answer: The deployment named “AIF” is missing, and it should be targeted to the ErpIntegrator cluster. Other targets are missing as well. Here’s how it ought to look:

[Screenshots]

You can fix this manually within the Admin Console, or you can shut down the Admin Server and edit WebLogic’s config.xml by hand. To do it within the user interface, you would do a Lock & Edit, add a deployment, and pick aif.ear from within Middleware/EPMSystem11R1/products/FinancialDataQuality/AppServer/InstallableApps. But wait, there’s more! You may want to shut down the Admin Server and paste this text into config.xml:

<app-deployment>
  <name>oraclediagent</name>
  <target>ErpIntegrator</target>
  <module-type>ear</module-type>
  <source-path>/Oracle/Middleware/odi/jee/oracledi-agent/oraclediagent-wls.ear</source-path>
  <deployment-order>45</deployment-order>
  <security-dd-model>DDOnly</security-dd-model>
  <staging-mode>nostage</staging-mode>
</app-deployment>
<app-deployment>
  <name>odiconsole</name>
  <target>ErpIntegrator</target>
  <module-type>ear</module-type>
  <source-path>/Oracle/Middleware/odi/jee/oracledi-metadata-navigator/odiconsole.ear</source-path>
  <deployment-order>45</deployment-order>
  <security-dd-model>DDOnly</security-dd-model>
  <staging-mode>nostage</staging-mode>
</app-deployment>
<app-deployment>
  <name>AIF#11.1.2.0</name>
  <target>ErpIntegrator</target>
  <module-type>ear</module-type>
  <source-path>/Oracle/Middleware/EPMSystem11R1/products/FinancialDataQuality/AppServer/InstallableApps/aif.ear</source-path>
  <security-dd-model>DDOnly</security-dd-model>
  <staging-mode>nostage</staging-mode>
</app-deployment>

In the snippet above, replace /Oracle with the absolute Linux path leading from / to Middleware on your system.
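
For example, if Middleware lives under a hypothetical /u01/Oracle on your server, you could adjust a saved copy of the snippet before pasting it (the file name and target path here are illustrative only):

sed -i.bak 's|/Oracle/Middleware|/u01/Oracle/Middleware|g' aif-deployments.xml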

We’re not done yet. Restart the Admin Server, go to Deployments->aif->Testing:

[Screenshot]

This is what it ought to look like:

[Screenshot]

Go back to Deployments, select the checkbox where I’ve highlighted below, and change the page size from 10 entries to 1000.

[Screenshot]

Here is just a brief snippet of what’s wrong: many libraries have not been targeted to ErpIntegrator.

[Screenshot]

The fix is to do another Lock & Edit and target the ErpIntegrator cluster for each of the missing libraries. There are too many to list here; if you want to see the gory details, I have posted my modified WebLogic config.xml.
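
For orientation, a library entry in config.xml has the same general shape as the app-deployment entries above. This is only an illustrative placeholder, not an actual EPM library name or path:

<library>
  <name>example.library#1.0@1.0.0</name>
  <target>AdminServer,ErpIntegrator</target>
  <module-type>ear</module-type>
  <source-path>/Oracle/Middleware/example/example-library.ear</source-path>
  <security-dd-model>DDOnly</security-dd-model>
  <staging-mode>nostage</staging-mode>
</library>

In most cases the library element already exists, and you are simply appending ErpIntegrator to its comma-separated <target> list.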

Honestly, we need a patch for FDMEE and also possibly the EPM Configurator tool. No one should have to go through all of the steps I did!

As EPM 11.2.2.0 is very new, these types of things are bound to happen. If you have an implementation project on the horizon for EPM 11.2.2.0, ensure your Project Manager bakes enough time into the project timeline for opening SRs and waiting for patches to be issued.

I hope this post helps unravel the mystery! Stay tuned for Part 3…

Cross-posted from EPM On-Prem Pro. Read the original post here.


Kerberos Authentication with Oracle Databases

In an effort to simplify Oracle database authentication, Kerberos will be installed and configured to authenticate users’ passwords against Microsoft Active Directory (AD). This allows users to maintain a single password for both AD and their Oracle databases.

There are several parts to the configuration: Microsoft AD, Unix, and the Oracle database. This post consolidates all the parts in one place for a consistent installation across all of your database servers.

Microsoft AD

Create a user whose sAMAccountName matches the server’s short name. The cn, displayName, givenName, and name attributes must match the server’s fully qualified domain name (FQDN).

The following was run from the PDC emulator (any DC should work). The password was a generated, complex password.

$Pass = "……."

ktpass.exe -princ <userPrincipalName> -mapuser <cn> -crypto all -pass $Pass -out C:\temp\krb5.keypass

The pertinent user attributes are below. Some values were entered by the admin; the rest were populated by the ktpass command when it created the keytab file.

cn                       <servername>.<company.com>
displayName              <servername>.<company.com>
distinguishedName        CN=<servername>.<company.com>,OU=AIX,DC=corp,DC=<company>,DC=com
givenName                <servername>.<company.com>
name                     <servername>.<company.com>
sAMAccountName           <servername>
servicePrincipalName     oracle/<servername>.<company.com>
                         oracle/<servername>.corp.<company.com>
userAccountControl       [512] User
userPrincipalName        oracle/<servername>.<company.com>@CORP.<COMPANY.COM>
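
To sanity-check the mapping, you can list the SPNs registered on the account from any DC (a quick verification step, using the same placeholder name as above):

setspn -L <servername>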

The resulting keytab file was then transferred to the *nix admin for installation and configuration.

Unix

Make the Kerberos components available:

  1. Revise the AD server exports to make /export/71 available to the target server(s).
  2. mount <AD server>:/export/71 /mnt
  3. cd /mnt/lppsource_71TL3SP6_full/installp/ppc
  4. Run smitty install from the current directory.

Check/Install these Kerberos components:

krb5.client.rte 1.5.0.3      Network Authentication Servi…
krb5.client.samples 1.5.0.3  Network Authentication Servi…
krb5.doc.en_US.html 1.5.0.3  Network Auth Service HTML Do…
krb5.doc.en_US.pdf 1.5.0.3   Network Auth Service PDF Doc…
krb5.lic 1.5.0.3             Network Authentication Servi…

Revise the Kerberos entries in /etc/services:

kerberos                88/tcp  kerberos5 krb5  # Kerberos v5
kerberos                88/udp  kerberos5 krb5  # Kerberos v5

Oracle Database Servers

Create or add the following to the sqlnet.ora file ($ORACLE_HOME/network/admin):

SQLNET.KERBEROS5_KEYTAB=/etc/krb5/krb5.keytab
SQLNET.AUTHENTICATION_SERVICES = (BEQ,KERBEROS5PRE,KERBEROS5)
SQLNET.AUTHENTICATION_REQUIRED=TRUE
SQLNET.KERBEROS5_CONF_MIT=TRUE
SQLNET.KERBEROS5_CONF=$ORACLE_HOME/network/admin/krb5.conf
SQLNET.AUTHENTICATION_KERBEROS5_SERVICE=oracle
SQLNET.INBOUND_CONNECT_TIMEOUT=180
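
SQLNET.KERBEROS5_CONF above points to a krb5.conf file that must also exist. A minimal sketch, reusing this post’s placeholder names (point the kdc entry at your AD domain controller):

[libdefaults]
default_realm = CORP.<COMPANY.COM>

[realms]
CORP.<COMPANY.COM> = {
  kdc = <dc-hostname>.corp.<company.com>
}

[domain_realm]
.corp.<company.com> = CORP.<COMPANY.COM>
corp.<company.com> = CORP.<COMPANY.COM>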

Update each user with the following:

alter user <username> identified externally as '<username>@CORP.<COMPANY.COM>';
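
To verify the end-to-end setup, the Oracle client ships with okinit and oklist for managing Kerberos tickets; a rough check looks like this (the TNS alias is a placeholder):

okinit <username>       # obtain a ticket; prompts for the AD password
oklist                  # confirm the ticket is cached
sqlplus /@<tns_alias>   # connect without supplying a username or password

Inside the session, the authentication method should report as Kerberos:

SELECT SYS_CONTEXT('USERENV','AUTHENTICATION_METHOD') FROM DUAL;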


I hope this blog post provides you with a one-stop shop for simplifying your Oracle database authentication. If you’re looking for support for your Oracle databases, please reach out.


Key #3 to Remote Application Development: DevOps

In my previous two posts, I discussed the first two keys of successful application development in a remote workplace. This final post discusses the last key that will really round out your team and close the gaps you may be experiencing since the “great work from home migration.” Key #3 is a concept you are already familiar with: DevOps.

Donovan Brown, a principal DevOps manager at Microsoft, defines the DevOps methodology as “the union of people, process, and products to enable continuous delivery of value to users.”

Common DevOps practices include:

  • Continuous integration/continuous delivery (CI/CD), in which developers make small, incremental changes to the code base that are immediately ready to be released into production.
  • Automation from start to finish, from code generation and testing to deployment and monitoring.
  • Version control that records all modifications to the code base over time, dramatically simplifying the task of change management.
  • Agile planning and lean project management to help DevOps teams collaborate and organize work into shorter, focused “sprints.”

Studies have shown that when done right, DevOps can bring tremendous improvements to tech-oriented businesses. In one study, organizations that have “fully embraced” DevOps increased revenues and profits by 60 percent, and were 2.4 times more likely to be enjoying rapid business growth than their competitors.

Many companies have already recognized the immense value that DevOps can bring to their organization but are struggling to implement it in practice. According to a 2019 survey of Harvard Business Review subscribers, 48 percent of respondents said that they “always” use DevOps practices to build software, while another 21 percent said that they “selectively” use DevOps. However, just 10 percent of respondents agree that they can quickly build and deploy software—suggesting that when it comes to DevOps, there’s a significant gap between ideals and reality.

To implement DevOps at Datavail, we use Microsoft’s Azure DevOps suite of technology solutions. Azure DevOps includes all the software and solutions you need to successfully bring DevOps to your own organization (a minimal pipeline sketch follows this list):

  • CI/CD pipelines
  • Kanban boards, team dashboards, and backlogs
  • Software package and artifact management
  • Custom reporting and analytics
  • Private git repos hosted in the cloud

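As a taste of the CI/CD piece, an Azure Pipelines definition (azure-pipelines.yml) can be as small as this sketch; the ./build.sh and ./run-tests.sh scripts are placeholders for your own build and test steps:

trigger:
- main

pool:
  vmImage: 'ubuntu-latest'

steps:
- script: ./build.sh
  displayName: 'Build the application'
- script: ./run-tests.sh
  displayName: 'Run the test suite'
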
For example, with Azure DevOps, our remote AppDev team members can easily understand the work on their plate simply by looking at their task boards. By updating these boards as they complete their work, users can keep others in the loop in real time. This allows our team leads to quickly course-correct if team members’ progress falls short of expectations.

For many businesses, the transition to remote work has been jarring and abrupt, disrupting their standard application development workflows. But with these three keys – collaboration, communication, and DevOps – your team can begin to thrive again.

To learn more about the three keys and their implementation strategies, download my white paper, “The 3 Keys to Successful Remote Application Development.”


Amazon Aurora New Feature: Write Forwarding for Secondary AWS Regions

Gone are the days of DBAs keeping a secondary cluster around only for disaster recovery. Amazon Aurora has a new write forwarding feature that can put your secondary clusters to work, and it helps address some of the current challenges and limitations of Aurora Multi-Master.

The new write forwarding feature means less stress on your master server, because connections no longer have to funnel into a single endpoint. It also means your application can now issue SQL writes against a secondary cluster’s endpoint and have them forwarded to the primary master endpoint.

Using this approach, you can send transactions to the secondary cluster; they are applied on the primary server and then replicated back out to your replicas. Remember that you need Aurora 2.08.1 (the latest version as of this writing) to implement this feature, along with a supported isolation level. As of now, the REPEATABLE READ and READ COMMITTED isolation levels work much better than the others.

Please follow these steps to enable write forwarding.

This assumes you currently have an Aurora 2.08.1 cluster with a single writer, and that you will be adding a secondary cluster in a new region.

Step 1: In your existing cluster, add a new region using the method shown below:

Existing setup with one cluster, primary + writer in Aurora 2.08.1

[Screenshot]

Step 2: Add a secondary cluster by selecting “Add region” on your global cluster, as shown below.

[Screenshot]

Choose your region:

[Screenshot]

Make sure “Read replica write forwarding” is enabled while creating the new secondary cluster in the new region, as shown below.

Choose Availability & durability:

[Screenshot]

After you click Add Region, a new secondary cluster is created under your global cluster, along with a read replica (reader) in the new region, and it will look like the screenshots below. You’ll have two (2) replicas created in this region.

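If you prefer to script this step, the equivalent AWS CLI call is roughly the following sketch (the identifiers, region, and engine version are placeholders; you would still create the reader instance afterward with create-db-instance):

aws rds create-db-cluster \
  --region us-west-2 \
  --db-cluster-identifier my-secondary-cluster \
  --global-cluster-identifier my-global-cluster \
  --engine aurora-mysql \
  --engine-version 5.7.mysql_aurora.2.08.1 \
  --enable-global-write-forwarding
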
Creation status of your cluster:

[Screenshot]

Once the secondary cluster in your region is ready with its reader, you will see something like this during the creation stage:

[Screenshot]

You will need to disable the read_only parameter, on the secondary cluster only, so that it can serve both read and write operations. Make a copy of the existing default cluster parameter file, disable read_only in the copy, and then apply it by modifying the instance in your secondary cluster, as shown below.
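
If you’d rather script this, a rough CLI equivalent looks like the following (the parameter group name is a placeholder; note that depending on the engine version, read_only may live in the instance-level DB parameter group rather than the cluster-level one):

aws rds modify-db-parameter-group \
  --db-parameter-group-name secondary-cluster-params \
  --parameters "ParameterName=read_only,ParameterValue=0,ApplyMethod=immediate"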

[Screenshots]

Click on the secondary cluster instance to see the endpoint names shown above.

Now open a terminal (PuTTY or another external tool) connected to the primary instance, and create a database and table like the following from the primary:

[Screenshot]

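The statements were along these lines (the database and table names here are illustrative):

CREATE DATABASE demo;
USE demo;
CREATE TABLE t1 (
  id INT PRIMARY KEY,
  note VARCHAR(100)
);
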
Now log in to the secondary cluster, which had a reader added to it at creation time. Use that endpoint to connect to the secondary instance and run your DML operations, inserting some values into the table:

[Screenshots]

If the aurora_replica_read_consistency variable is left at NULL, your secondary cluster will not accept writes (DMLs), so make sure your application sets this variable at the session level to enable writes. Please find an example below:

[Screenshot]

Only when you set the aurora_replica_read_consistency session variable to something other than NULL can you run DML operations against the secondary cluster through its assigned endpoint.

[Screenshot]

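As a sketch of what those sessions look like (continuing the illustrative demo.t1 table from above, with all three consistency levels):

SET SESSION aurora_replica_read_consistency = 'eventual';
INSERT INTO demo.t1 VALUES (1, 'eventual consistency write');

SET SESSION aurora_replica_read_consistency = 'session';
INSERT INTO demo.t1 VALUES (2, 'session consistency write');

SET SESSION aurora_replica_read_consistency = 'global';
INSERT INTO demo.t1 VALUES (3, 'global consistency write');
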
Note: DML can be run from the secondary only once the aurora_replica_read_consistency session variable is set to one of eventual, session, or global. The snippet above shows the three different consistency levels used for write operations from the secondary cluster.

In addition, multiple inserts can be scaled across the cluster’s existing endpoints, with the advantage that the primary and secondary Aurora clusters always stay in sync with each other, even while accepting multiple DMLs, thanks to the flexibility of Aurora’s write forwarding support for your application.

This post showed you how to run DMLs from your application through the secondary cluster and have the data replicated across the primary and secondary in your environment. My hope is that you can now greatly improve write throughput to your tables on a very active database server. If you’re looking for support with write forwarding and/or your databases, or are interested in an AWS migration, please reach out.


FDMEE 11.2.2.0 Troubleshooting – Part 1

I did a near-full-stack implementation of Oracle EPM/Hyperion 11.2.2.0 in a sandbox on Red Hat Enterprise Linux (“RHEL”) 7. Essbase, Planning, CalcMgr, Foundation, and EAS all seem to be working fine. HFM, of course, is not available for Linux at this time.

FDMEE, however, has multiple issues and does not work. Clarification: this post is about the Linux edition only. The Windows edition does not have these issues.

A patch also needs to be issued for Analytic Provider Services (“APS”). I managed to get it to work by copying over aps.ear from my 11.2.1.0 sandbox. Unfortunately, this solution won’t help you if you didn’t grab 11.2.1.0 from Oracle eDelivery before it was removed. So, Oracle will need to issue a patch for 11.2.2.0 APS as well.

I’ve downloaded 11.2.2.0 for Windows and will check whether that version has the same issues.

Here’s the initial symptom with respect to FDMEE:

[Screenshot]

I had deployed it to WebLogic and started it. It binds to port 6550 as it should:

[Screenshot]

Digging deeper, I found multiple problems behind the scenes. The problems are in two major areas:

  1. The FDMEE database schema is incomplete; all of the SNP_* tables are missing.
  2. The WebLogic configuration is completely messed up where FDMEE (“ErpIntegrator” as WebLogic calls it) is concerned.

I had to do quite a bit of “surgery.” At this point, two of the three problems previously reported by the validate.sh utility are fixed.

[Screenshot]

So how did I manage to get to this point? Hours of troubleshooting and experimentation.

Let’s first address fixing the lack of SNP_* tables in the FDMEE database schema. When we run the “Configure Database” task for FDMEE, here’s what happens behind the scenes. We can reproduce the issue at will by running the back-end utility manually.

[Screenshot]

Side note: I edited the utility to add echo statements for “Finished XYZ” so I could see which step was throwing the Java stack trace error.
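
For instance, after each major step in the script I added a line like the following, so the last “Finished” message printed before the stack trace identified the failing step (the step name here is illustrative):

echo "Finished creating the ODI master repository"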

The assertion that the supervisor password is blank is not true. The script follows the same format used in 11.1.2.4, whereby Supervisor’s userid and password are encrypted and hard-coded within the script.

This is not an issue in 11.1.2.4, 11.2.0.0 and 11.2.1.0. While 11.2.0.0 and 11.2.1.0 are no longer available for download from Oracle eDelivery, I downloaded both releases when they were available, and I have an 11.2.1.0 sandbox online for reference purposes.

In crawling through the various jar files in 11.2.2.0, I found that the offending class, oracle.odi.setup.support.MasterRepositorySetupImpl, is located within Middleware/odi/sdk/lib/odi-core.jar.

When comparing the 11.2.2.0 Linux and 11.2.1.0 Windows copies of odi-core.jar, I found size and timestamp mismatches between the two systems:

  • 11.2.1.0 Windows: 29,222,611 bytes, dated Feb 1, 2020
  • 11.2.2.0 Linux: 29,032,586 bytes, dated Aug 22, 2017

Rut roh!

So, I backed up Middleware/odi/sdk in 11.2.2.0 and then copied over the whole directory structure from 11.2.1.0. I reran the utility in my screenshot above, and now all of the SNP_* tables are present.
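
In shell terms, the swap was roughly the following (the 11.2.1.0 source path is a placeholder for wherever your reference copy lives):

cd /Oracle/Middleware/odi
mv sdk sdk.bak
cp -pr /path/to/11.2.1.0/Middleware/odi/sdk sdk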

So, in the EPM 11.2.2.0 Linux installer, the subdirectory “odi” is apparently packaged incorrectly and we need a patch for that.

Part 2 will address issues uncovered within WebLogic.

Cross-posted from EPM On-Prem Pro. Read the original post here.
