
Kerberos Authentication with Oracle Databases

In an effort to simplify Oracle database authentication, Kerberos will be installed and configured to authenticate users' passwords against Microsoft Active Directory (AD). This allows users to maintain a single password for both AD and Oracle databases.

There are several parts to the configuration: Microsoft AD, Unix, and the Oracle database. This document consolidates them all in one place for a consistent installation across all the database servers.

Microsoft AD

Create a user whose sAMAccountName matches the server's short name. The cn, displayName, givenName, and name attributes must match the server's FQDN.

The following was run from the PDC emulator (any DC should work). The password was a generated, complex password.

$Pass="……."

ktpass.exe -princ <userPrincipalName> -mapuser <cn> -crypto all -pass $Pass -out C:\temp\krb5.keypass

The pertinent user attributes are below. Most were entered by the admin; the servicePrincipalName and userPrincipalName values are the ones populated by the ktpass command when it created the keytab file.

cn                       <servername>.<company.com>
displayName              <servername>.<company.com>
distinguishedName        CN=<servername>.<company.com>,OU=AIX,DC=corp,DC=<company>,DC=com
givenName                <servername>.<company.com>
name                     <servername>.<company.com>
sAMAccountName           <servername>
servicePrincipalName     oracle/<servername>.<company.com>
                         oracle/<servername>.corp.<company.com>
userAccountControl       [512] User
userPrincipalName        oracle/<servername>.<company.com>@CORP.<COMPANY.COM>
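
To confirm the SPN registration afterward, you can list the SPNs on the account from any domain-joined machine. A quick check, using the same account name as above:

setspn -L <servername>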

The resulting keytab file was transferred to the *nix admin for installation and configuration.

Unix

Make the Kerberos components available:

  • Revise the AD server exports to make /export/71 available to the target server(s).
  • mount <AD server>:/export/71 /mnt
  • cd /mnt/lppsource_71TL3SP6_full/installp/ppc
  • Run smitty install from the current directory.

Check/Install these Kerberos components:

krb5.client.rte 1.5.0.3      Network Authentication Servi…
krb5.client.samples 1.5.0.3  Network Authentication Servi…
krb5.doc.en_US.html 1.5.0.3  Network Auth Service HTML Do…
krb5.doc.en_US.pdf 1.5.0.3   Network Auth Service PDF Doc…
krb5.lic 1.5.0.3             Network Authentication Servi…
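
If you prefer the command line over smitty, the same filesets can be installed directly with installp. A sketch, run from the mounted lpp_source directory above:

installp -acgXYd . krb5.client.rte krb5.client.samples krb5.lic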

Revise the Kerberos entries in /etc/services:

kerberos                88/tcp  kerberos5 krb5  # Kerberos v5
kerberos                88/udp  kerberos5 krb5  # Kerberos v5
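
The Oracle configuration below also references an MIT-style krb5.conf. A minimal sketch for an AD realm, with placeholder realm and KDC names matching the examples in this post:

[libdefaults]
        default_realm = CORP.COMPANY.COM

[realms]
        CORP.COMPANY.COM = {
                kdc = <domaincontroller>.corp.company.com
        }

[domain_realm]
        .corp.company.com = CORP.COMPANY.COM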

Oracle Database Servers

Create or add the following to the sqlnet.ora file ($ORACLE_HOME/network/admin):

SQLNET.KERBEROS5_KEYTAB=/etc/krb5/krb5.keytab
SQLNET.AUTHENTICATION_SERVICES = (BEQ,KERBEROS5PRE,KERBEROS5)
SQLNET.AUTHENTICATION_REQUIRED=TRUE
SQLNET.KERBEROS5_CONF_MIT=TRUE
SQLNET.KERBEROS5_CONF=$ORACLE_HOME/network/admin/krb5.conf
SQLNET.AUTHENTICATION_KERBEROS5_SERVICE=oracle
SQLNET.INBOUND_CONNECT_TIMEOUT=180

Update each user with the following:

alter user <username> identified externally as '<username>@CORP.<COMPANY.COM>';
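
To verify the setup end to end, you can request a Kerberos ticket and then connect without supplying a database password. A minimal sketch, where the user jdoe and the ORCL alias are hypothetical (okinit and oklist ship with the Oracle client):

okinit jdoe
oklist
sqlplus /@ORCL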


I hope this blog provides you with a one-stop shop to simplify your Oracle database authentication. If you're looking for support for your Oracle databases, please reach out.


Key #3 to Remote Application Development: DevOps

In my previous two posts, I discussed the first two keys of successful application development in a remote workplace. This final post discusses the last key that will really round out your team and close the gaps you may be experiencing since the “great work from home migration.” Key #3 is a concept you are already familiar with: DevOps.

Donovan Brown, a principal DevOps manager at Microsoft, defines the DevOps methodology as “the union of people, process, and products to enable continuous delivery of value to users.”

Common DevOps practices include:

  • Continuous integration/continuous delivery (CI/CD), in which developers make small, incremental changes to the code base that are immediately ready to be released into production.
  • Automation from start to finish, from code generation and testing to deployment and monitoring.
  • Version control that records all modifications to the code base over time, dramatically simplifying the task of change management.
  • Agile planning and lean project management to help DevOps teams collaborate and organize work into shorter, focused “sprints.”

Studies have shown that, when done right, DevOps can bring tremendous improvements to tech-oriented businesses. In one study, organizations that had "fully embraced" DevOps increased revenues and profits by 60 percent and were 2.4 times more likely to be enjoying rapid business growth than their competitors.

Many companies have already recognized the immense value that DevOps can bring to their organization but are struggling to implement it in practice. According to a 2019 survey of Harvard Business Review subscribers, 48 percent of respondents said that they “always” use DevOps practices to build software, while another 21 percent said that they “selectively” use DevOps. However, just 10 percent of respondents agree that they can quickly build and deploy software—suggesting that when it comes to DevOps, there’s a significant gap between ideals and reality.

In order to implement DevOps at Datavail, we use Microsoft’s Azure DevOps suite of technology solutions. Azure DevOps includes all the software and solutions you need to successfully bring DevOps to your own organization:

  • CI/CD pipelines (a minimal pipeline sketch follows this list)
  • Kanban boards, team dashboards, and backlogs
  • Software package and artifact management
  • Custom reporting and analytics
  • Private git repos hosted in the cloud
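
As an illustration of the first item, an Azure DevOps CI pipeline is typically described in an azure-pipelines.yml file checked into the repo. A minimal sketch, in which the trigger branch and the build step are placeholder assumptions:

trigger:
  - main

pool:
  vmImage: 'ubuntu-latest'

steps:
  - script: echo "restore, build, and test the application here"
    displayName: 'Build and test'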

For example, with Azure DevOps, our remote AppDev team members can easily understand the work on their plate simply by looking at their task boards. By updating these boards as they complete their work, users can keep others in the loop in real time. This allows our team leads to quickly course-correct if team members’ progress falls short of expectations.

For many businesses, the transition to remote work has been jarring and abrupt, disrupting their standard application development workflows. But with these three keys – collaboration, communication, and DevOps – your team can begin to thrive again.

To learn more about the three keys and their implementation strategies, download my white paper, “The 3 Keys to Successful Remote Application Development.”


Amazon Aurora New Feature: Write Forwarding for Secondary AWS Regions

Gone are the days of DBAs having to reserve a secondary cluster solely for disaster recovery. Amazon Aurora has a new write forwarding feature that can help you put your secondary clusters to work, and that addresses some of the current challenges and limitations of Aurora Multi-Master.

The new write forwarding feature means less stress on your master server, since connections no longer have to go to a single endpoint. It also means your application can now issue SQL writes against a secondary endpoint and have them forwarded to your primary master endpoint.

With this approach, you can send your transactions to the secondary; they are applied on the primary server and then replicated back to your replicas. Please remember that you need at least Aurora 2.08.1 to implement this feature, along with the right isolation setup: as of now, the REPEATABLE READ and READ COMMITTED isolation levels work much better than other isolation settings.
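
You can confirm the engine version from any client session; aurora_version() is a built-in Aurora MySQL function:

select aurora_version();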

Please follow these steps to enable write forwarding.

This assumes you currently have an Aurora 2.08.1 cluster with a single writer, and that you will be adding a secondary cluster in a new region to that cluster.

Step 1: In your existing cluster, add a new region, as described below.

Existing setup: one cluster (primary + writer) on Aurora 2.08.1.

Step 2: Add a secondary cluster by selecting "Add region" on your global cluster, then choose your region.

While creating the new secondary cluster in the new region, choose your Availability & durability settings and make sure "Read replica write forwarding" is enabled.

After clicking "Add region," a new secondary cluster is created within your global cluster, along with a read replica (reader) in the new region; you'll have two (2) replicas created in this region. The console displays the creation status of your cluster.

Once the secondary cluster in your region is ready with its reader, the creation stages show as complete.

You will need to disable the read_only parameter, on the secondary cluster only, so that it can serve both read and write operations. Make a copy of the existing default cluster parameter file, disable read_only in the copy, and then apply it to your secondary cluster by modifying the instances.

Click on the secondary cluster instance to see its endpoint names.

Now open a terminal (or PuTTY, or another external tool), connect to the primary instance, and create a database and a table from the primary, along the lines of the sketch below.

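A minimal sketch of that step; the demo database and orders table are hypothetical names used for illustration:

create database demo;
use demo;
create table orders (id int primary key auto_increment, item varchar(50));
insert into orders (item) values ('created-on-primary');
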
Now log in to the secondary cluster using the reader endpoint that was added at creation time, connect to the secondary instance, and perform your DML operations, inserting some values into the table.

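For example, connecting with the mysql client, where the endpoint shown is a placeholder for your secondary cluster's endpoint:

mysql -h <secondary-cluster>.cluster-xxxx.<region>.rds.amazonaws.com -u admin -p
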
If the aurora_replica_read_consistency variable is left at NULL, your secondary cluster will not accept writes (DML), so make sure your application sets this variable at the session level to enable writes; the sketch below shows an example.

Only when you set the aurora_replica_read_consistency session variable to a value other than NULL can you perform DML operations through the secondary cluster's assigned endpoint.

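A minimal sketch of such a session against the secondary endpoint, reusing the hypothetical demo.orders table from above; the valid consistency levels are eventual, session, and global:

set aurora_replica_read_consistency = 'session';
insert into demo.orders (item) values ('written-via-secondary');
select * from demo.orders;
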
Note: Only DML can be run from the secondary, and only once the aurora_replica_read_consistency session variable is set to one of eventual, session, or global. These three levels govern write operations issued from the secondary cluster.

In contrast, multiple inserts can be scaled across the cluster's existing endpoints, and the advantage of this approach is that the primary and secondary Aurora clusters always stay in sync with each other, even while accepting multiple DML statements, thanks to the flexibility of Aurora's write forwarding support for your application.

This post showed you how to run DML from your application through the secondary cluster, with data replicated to both the primary and secondary in your environment. My hope is that you can now greatly improve write throughput to your tables on a very active database server. If you're looking for support with write forwarding or your databases, or are interested in an AWS migration, please reach out.


FDMEE 11.2.2.0 Troubleshooting – Part 1

I did a near-full stack implementation in a sandbox for Oracle EPM/Hyperion 11.2.2.0 in Red Hat Enterprise Linux (“RHEL”) 7. Essbase, Planning, CalcMgr, Foundation, and EAS all seem to be working fine. HFM, of course, is not available for Linux at this time.

FDMEE, however, has multiple issues and does not work. Clarification: this post is about the Linux edition only. The Windows edition does not have these issues.

A patch also needs to be issued for Analytic Provider Services (“APS”). I managed to get it to work by copying over aps.ear from my 11.2.1.0 sandbox. Unfortunately, this solution won’t help you if you didn’t grab 11.2.1.0 from Oracle eDelivery before it was removed. So, Oracle will need to issue a patch for 11.2.2.0 APS as well.

I’ve downloaded 11.2.2.0 for Windows and will check if that version also has the same issues.

Here's the initial symptom with respect to FDMEE.

I had deployed it to WebLogic and started it, and it binds to port 6550 as it should.

Digging deeper, I found multiple problems behind the scenes. The problems are in two major areas:

  1. The FDMEE database schema is incomplete; all of the SNP_* tables are missing.
  2. The WebLogic configuration is completely messed up where FDMEE (“ErpIntegrator” as WebLogic calls it) is concerned.

I had to do quite a bit of "surgery." At this point, two of the three problems previously reported by the validate.sh utility are now fixed.

So how did I manage to get to this point? Hours of troubleshooting and experimentation.

Let's first address fixing the lack of SNP_* tables in the FDMEE database schema. When we run the "Configure Database" task for FDMEE, here's what happens behind the scenes. We can reproduce the issue at will by running the back-end utility manually.

Side note: I edited the utility to add echo statements for “Finished XYZ” so I could see which step was throwing the Java stack trace error.

The error's assertion that the Supervisor password is blank is not true. The script follows the same format used in 11.1.2.4, whereby the Supervisor userid and password are encrypted and hard-coded within the script.

This is not an issue in 11.1.2.4, 11.2.0.0 and 11.2.1.0. While 11.2.0.0 and 11.2.1.0 are no longer available for download from Oracle eDelivery, I downloaded both releases when they were available, and I have an 11.2.1.0 sandbox online for reference purposes.

In crawling through the various jar files in 11.2.2.0, I found that the offending class, oracle.odi.setup.support.MasterRepositorySetupImpl, is located within Middleware/odi/sdk/lib/odi-core.jar.

When comparing 11.2.2.0 Linux vs. 11.2.1.0 Windows, I found size and timestamp mismatches between the two systems:

  • 11.2.1.0 Windows: 29,222,611 bytes, dated Feb 1, 2020
  • 11.2.2.0 Linux: 29,032,586 bytes, dated Aug 22, 2017

Rut roh!

So, I backed up Middleware/odi/sdk in 11.2.2.0 and then copied over the whole directory structure from 11.2.1.0. I reran the back-end utility, and now all of the SNP_* tables are present.
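
A sketch of that swap; both Middleware paths are placeholders for your own install locations:

cd /u01/app/Middleware                                  # hypothetical 11.2.2.0 Middleware home
mv odi/sdk odi/sdk.bak                                  # back up the 11.2.2.0 copy
cp -pR /u01/app/11.2.1.0/Middleware/odi/sdk odi/sdk     # copy from the 11.2.1.0 install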

So, in the EPM 11.2.2.0 Linux installer, the subdirectory “odi” is apparently packaged incorrectly and we need a patch for that.

Part 2 will address issues uncovered within WebLogic.

Cross-posted from EPM On-Prem Pro. Read the original post here.


Key #2 to Remote Application Development: Communication

Managing remote workers is a delicate balance. Team leaders need to give people the solutions and the independence they need to thrive, while providing opportunities for collaboration and team synergy.

Working together in a physical office is as simple as walking over to your coworker's desk. But how can you replicate this effortless collaboration in a remote-only workplace? In this blog post, we'll discuss key #2 to successful remote AppDev: communication.

No matter how talented your employees are, and how independent-minded they may be, trying to get them to work together without communicating is like herding a group of cats with their eyes closed.

What’s more, it’s undeniable that some personalities are better suited for remote work than others. Certain people truly thrive when they’re in a familiar and comfortable place like their home. Others, however, do better in a structured environment like an office that provides discipline and face-to-face interactions.

Counteracting this challenge, and enabling the human need for social interaction, is one of the top priorities for ensuring the success of remote work. In order to facilitate collaboration, remote AppDev teams need to link up early and often, maintaining open communication channels. This may include:

  • Stand-up meetings on a frequent basis (e.g. daily or twice a week) where team members discuss their recent progress, their current status, and their future plans.
  • Stakeholder meetings for important roles to receive updates on the status of the project.
  • Team meetings led by the product owner to give team members face time with each other and build camaraderie.

Communicating early and often also provides more transparency and accountability. When team members know what each person is doing, there's a complete "paper trail" of how each task gets delivered, from concept and development to QA and deployment.

Remote work also presents different challenges and opportunities for how team members choose to communicate. Real-time instant messages are more effective for getting a quick answer than asynchronous methods like email, while meetings are best for in-depth discussions and complex ideas.

At Datavail, we use Microsoft Teams to manage communication between the members of our remote AppDev workforce. Teams is perfectly suited for our workflows as a communication tool:

  • Teams is fully integrated with Office 365 and other Microsoft products such as Outlook, OneDrive, and SharePoint.
  • Teams lets users create individual channels for a certain topic, allowing team members to cut down on long email conversations and get answers faster.
  • Teams is available on a wide variety of devices, which makes it ideal for a remote workforce: desktops, laptops, tablets, smartphones (iOS, Android, and Windows Phone), and more.

Enabling effective communication for your team will take you a long way in building a workforce that can develop applications effectively from afar. But it's only one piece of the puzzle. To learn about the other two essential pieces, download my white paper, "The 3 Keys to Successful Remote Application Development."
