Interview with Stewart Bryson – Our New Chief Innovation Officer
Regular readers of this blog will know Stewart Bryson, our US Managing Director and, in his spare time, an Oracle ACE, writer and presenter on Oracle BI & DW development topics. Those of you who’ve met Stewart will know that his first love has always been technology and how to implement it well, so we’re pleased to announce that Stewart is taking on the newly created role of Chief Innovation Officer, working alongside our CTO, Mark Rittman, and CEO, Jon Mead. So what is a “Chief Innovation Officer”, and what does Stewart intend to do with his new role? I had a chance to put a few questions to Stewart about his new appointment, and here is what he had to say.
Pippa Old [PO]: “For those who don’t know you, tell me a little bit about yourself”
Stewart Bryson [SB]: “I like to say that I grew up as an Oracle DBA. While I’ve watched the Oracle BI ranks grow with the acquisitions of Siebel and Hyperion, I’ve been under the Red Tent for my entire career. Lots of folks come to Oracle BI from the top-down: starting as financial users or developers who are trying to find ways to get the data they need. I charted a reverse path, starting from the data warehouse perspective and moving up the stack to work on products such as ODI and OBIEE.
I was awarded the Oracle ACE a few years ago while I was promoting a methodology we call Extreme BI here at Rittman Mead. It’s an agile approach to delivering content that makes the business user a major component in delivery. It requires a thorough understanding of many of Oracle’s BI products, and I’d like to think my unique experience and capabilities are what have made it successful for us. I think these are the same reasons that folks come to see me speak… to see something a little different in the BI space.”
[PO]: “Many people may know you for your podcasts with Kevin McGinley – will you keep doing them?”
[SB]: “Absolutely. Whether we have 10 viewers or 10,000, Kevin and I will certainly keep doing this. We both look forward to recording the show every time. The premise was simple: folks love to get together and talk about sports, or movies, stock portfolios, etc. So why not have a medium where enthusiasts come together and talk about Oracle BI? When I’m speaking at conferences, I usually have a few attendees come up to me afterwards and say “I love the show.” So we will still do it. It’s just lots of fun.”
[PO]: “Explain the new role of Chief Innovation officer at Rittman Mead”
[SB]: “Over the last four years of managing Rittman Mead America, we’ve had a lot of things to be proud of, both technically and commercially, but it is the technical successes that have been most rewarding. So I’m glad I get to focus on our technical capabilities at Rittman Mead, and work more closely with Mark Rittman, converting all the roadmap and product knowledge he has into a framework and process that we can use to improve our delivery capabilities.
The BI landscape is changing rapidly, and these changes are causing a lot of internal reflection at Rittman Mead about the approaches we take to helping our customers deliver intelligence. The new acquisitions from Oracle such as Endeca make us question whether our current slate of technologies is right for all occasions. We see paradigm shifts in the general technology market that reverberate through to BI… things like Big Data and cloud computing that drastically change delivery approaches. The advances in engineered systems and appliances have a similar effect. I’m excited that I will be the main conduit through which these approaches are vetted and assimilated… there’s no place I would rather be.”
[PO]: “So how important is innovation to you as an individual and Rittman Mead?”
[SB]: “That’s an interesting question. Jon Mead, Mark Rittman and I spent a long time formulating the duties for this new role, but I would say that Jon and I spent almost as much time coming up with the title. I wanted to make sure that, if I traded in my CEO duties with the US business, it was for the right reason, and honestly, there’s nothing more important in today’s marketplace than innovation. New ideas become old ones in the blink of an eye, and a company like Rittman Mead needs to ensure we evolve to always deliver maximum value to our customers. Innovation, when done right, can change the world… Apple is the perfect example. Rittman Mead is not going to change the world the way Apple has… but there’s no reason we can’t leave our mark on the BI space.”
[PO]: “Rittman Mead has 100 or so employees. Why is innovation so important?”
[SB]: “While 100 employees may seem small to folks working for Fortune 100 companies… it’s almost incomprehensible to me. I think I was employee number five for Rittman Mead… and I’m counting Mark and Jon in that. Early on, our Oracle ACEs were involved with actual delivery: Mark in the UK, Venkatakrishnan Janakiraman in India, me in the US. But as we grow, we have the unique opportunity to take those minds and direct them toward innovation: what we like to call the “Rittman Mead Way”. I want to take the seemingly limitless technical expertise at Rittman Mead and focus it toward building products and frameworks that benefit our customers in both the quality of what we deliver and the pace of that delivery.”
[PO]: “Will you be focusing on how to improve Oracle BI or new innovative uses of the technology?”
[SB]: “I’ll be focused on fleshing out the “Rittman Mead Way”. This encompasses, at the very core, how we approach delivery with our customers. One of the components that I’m focusing on immediately is producing the Rittman Mead Delivery Framework. Though I can’t share exact details yet, this will involve a series of licensed products that are engineered to deliver immediate ROI to our customers. But also, this will involve a series of accelerators that help our consultants deliver our undeniable industry leadership rapidly and with 100% consistency.”
[PO]: “With Salesforce.com and other vendors offering cloud-based services, what is Rittman Mead’s view and strategy for cloud and SaaS?”
[SB]: “Oracle finally dove in and immersed themselves in the Cloud, and you can expect the same from us. Though I can’t discuss specifics yet… expect some major announcements from us in this area in the coming months.”
[PO]: “Finally, on a personal note, what’s your favourite innovation in the last few years?”
[SB]: “It has to be the iPad. That product changed my life. It’s such an intimate delivery mechanism… it’s now how I consume almost all media: books, news, movies, etc.”
Oracle Enterprise Manager 12c Web Transaction Service Beacon
I wanted to continue on from Mark Rittman’s excellent posts about the BI Management Packs and Enterprise Manager 12c.
There is an increasing demand to monitor BI systems as they become more critical to businesses and their daily operations. I work in the Rittman Mead Global Services team, and we are starting to see increasing demand for complete monitoring solutions. Within Global Services we are conscious that most BI systems go unmonitored or have only basic checks being performed. There are many aspects of a BI system that can be monitored: we need to know whether the system is up, and how it is performing both at a system level and in terms of the end-user experience. On that basis, I thought I’d share some of the more advanced monitoring aspects of Enterprise Manager 12c (12.1.0.3).
I’d like to start off by looking at “Service Beacons”, which enable us to simulate a test that might be performed by a human. In particular, I will be looking at web transactions, which are sets of web-based events. The goal is to simulate a user logging on to OBIEE and to test whether an analysis comes back with an expected row set. In this example I used a vanilla install of OBIEE 11.1.1.7 with SampleAppLite, and created a simple analysis with ‘Product Type’ and ‘# of Orders’ placed on My Dashboard, which has been set up as the default dashboard to show once logged in.
After connecting OBIEE to EM12c (as per Mark’s post), you can now add a Service Beacon to perform the aforementioned simulation. There is, however, a requirement to use Internet Explorer in order to record the user interaction; these recordings are referred to as Web Transactions. I used Internet Explorer 8, but more on this later.
Start off by navigating to Targets > Services; from this screen, click on Create > Generic Service.
In the next screen enter a name for the service, in this example User Agent 1
We also need to define what system this service should run on. Ideally you would place it on a dedicated system; in this case we will place it on the BI Server.
Click Select System and select the Target Type: Oracle BI Instance, then press Select.
After returning to the General page, click Next to proceed to the Availability stage. Ensuring Service Test is selected, click Next.
The Service Test step is where we can define our Web Transaction. Type in a name for the test, for example “OBIEE Login”, then select Create a Transaction and click Go.
When clicking on Record, Internet Explorer will prompt you to install a plugin. For this plugin to install, though, you’ll most probably need to alter Internet Explorer’s security settings, either by lowering the zone security settings or by adding the site to the trusted list and importing the certificates – see the Internet Explorer help files for full details on how to do this.
Once configured click on Record.
After this button is clicked, another instance of Internet Explorer is opened, displaying a blank web page.
In this window enter the URL for OBIEE into the URL bar, and press Enter. You should then see the standard OBIEE login page, as shown below.
Navigate to the dashboard page that you’re interested in setting up the service beacon for, to simulate a user bringing up this page.
Then log out and close the browser window.
Back in the Enterprise Manager browser window, the login steps should have been recorded, and should look similar to the screenshot below.
Back on the service test screen there will be a few steps filled out; this is the result of the transaction recording. At this stage it’s not easy to debug the process, so we need to continue the creation process. Click Continue to proceed.
Now back on the Service Test screen click Next.
Next we need to add a beacon, defining where this service will be monitored from. Click the Add button.
In this basic setup, the beacon installed as part of EM will be used. Ideally, you would use a beacon that replicates a client machine, to simulate a realistic test.
Select the EM Management Beacon and then Select
Now back on the Beacons screen click Next
Enter the performance metrics required; in this example, click Next to go with the default values.
The Usage Metrics step is where we can associate metrics to this service test. For example if you have a few sessions initialised that could affect the login times to Oracle BI, it might be worth monitoring the connection pool usage to ensure that it’s not over used.
The final screen reviews what has been set up, for now we will complete the service creation by clicking the Finish button.
Returning to the service page, our new service will have a clock next to it until it has initialized and run for the first time. After a couple of minutes, refresh the page to see if the agent is up.
Next we need to further configure this service; click on User Agent 1.
Note: at this point I should make it clear that my screenshots show a service called User Agent 2, not User Agent 1.
The initial home screen shows the performance of the service beacon, but for more details we need to navigate to the Monitoring Configuration screen
From this screen click on the Service Tests and Beacons Link
This screen is almost like the one in the setup process, but has an added benefit: by clicking the Verify Service Test button, we can run through the interaction and see what is returned.
Tick the Enable Logging box and then click Perform Test. Note that the status is up, and should be while the web page is running, even though we have not yet entered any success criteria. After the test, click on the download log icon.
Save the log file
After downloading the log, extract it, locate the index.html file and open it.
This should show the steps that the beacon has performed
When examining the steps, one thing I notice is that OBIEE’s presentation services is complaining about the browser version…
And after looking at the headers, the error is because the service test is identifying itself as Internet Explorer 6.0 – an unsupported browser for recent versions of OBIEE.
So our first task is to change the User Agent that this service runs as. Return to the Service Tests and Beacons page, ensure OBIEE Login is selected, and click Edit.
Navigate to the Advanced Properties tab.
On this screen expand Request and replace the user agent with:
Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)
Now we can re-run the test from this page by scrolling up and clicking on Verify Service Test
Perform the test and download the log file, and you should notice it’s now much bigger.
Now we need to find the step that has loaded our analysis, in this example it’s the HTTP Redirect element.
Using this we can go into step 2 of our process, navigate back to the steps tab and click Edit
On the Step screen, scroll down to the bottom, select the second redirect-related step and click Edit; this is the step where we need to add our validation test.
Scroll down to the Validation section and in this example we will add the validation text Plasma, as it was one of the product types returned in our initial analysis
Click Continue, Continue again and validate the service test again. If all is well it should be up.
You can now test this by either turning off your BI server or editing the analysis so that Plasma does not get returned. Either change should result in the service test going down.
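Under the hood, the beacon’s validation step is essentially a fetch-and-search: retrieve the dashboard page with a suitable user agent and check that the expected text appears in the response. Here is a minimal Python sketch of the same idea – a simplification that skips the recorded login sequence, and where the URL in the usage comment is a placeholder, not part of the EM setup:

```python
import urllib.request

# A modern user-agent string, so OBIEE doesn't reject the request
# the way it rejected the recorder's default IE6 identity.
USER_AGENT = "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)"

def validate_page(html: str, expected_text: str) -> bool:
    """Return True if the expected validation text appears in the page."""
    return expected_text in html

def check_dashboard(url: str, expected_text: str) -> bool:
    """Fetch a dashboard page and validate it, as a beacon step would."""
    req = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
    with urllib.request.urlopen(req, timeout=30) as resp:
        return validate_page(resp.read().decode("utf-8", "replace"), expected_text)

# e.g. check_dashboard("http://biserver:9704/analytics/saw.dll?Dashboard", "Plasma")
```

The real beacon does considerably more (cookies, redirects, recorded steps), but the pass/fail criterion is the same substring check we configured above.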
This in-depth setup provides a real-world test that proactively monitors your system, so your end users aren’t the ones telling you it’s not working. It monitors:
- The availability of the BI system
- The database returning results (and up-to-date results)
- The LDAP and login mechanism being able to service requests
- The time it takes to log on, return data and log off
- A growing set of metrics which can be used to calculate system uptime
… and provides an Oracle supported route to monitoring a BI system.
Stay tuned as I cover other elements of Enterprise Manager 12c, including other types of service tests, Metric Templates and Administration Groups.
Outer Join LTS Pruning… It’s Here!
A few months ago Charles Elliott and I were tasked with assessing OBIEE performance for a client here in the US. Queries were taking hours to run (literally) and obviously users weren’t happy. In the context of BI, there are many places where performance can be improved (database tuning, data warehouse modeling, query writing, etc.) but we decided to start with the RPD.
One of our resources at this client pointed out to us that every query they ran used a database view (no, not a materialized view) which in turn sourced a handful of physical tables. These tables were all joined in the view using outer joins. So every single query the BI Server ran had to go through all the joins contained in this view, even if it didn’t need data from all the tables in question. We called this the “Inception Effect” (remember the movie?) because you always had to go five levels deep, and each one took longer than the last.
In this post, I want to demonstrate this problem and how we can “trick” the BI Server into behaving the way we want.
IMPORTANT: Please keep in mind that for this first example, we are using version 11.1.1.6.0. This detail will be very relevant later.
The Problem
For this test, we’re obviously not using our client’s data. We will be working with data from the 2010 FIFA World Cup, and our tables are GOALS, PLAYERS and MATCHES. The GOALS table will be the source for both the logical dimension and fact, while PLAYERS and MATCHES will also provide additional information for our logical dimension. This is our physical model:
In our (very simple) business model, we have a fact table called Fact: Goals which uses the GOALS physical table as its only source:
And we also have a dimension table called Dim: GPM, which has one LTS with all three physical tables as sources (the columns in this table were named for easy identification of the source table). Note the join types for this logical table source:
So, let’s see what this looks like when we create a very simple query in Answers.
Note that we have a column from the GOALS table, one from the PLAYERS table and one from our fact. As expected, our query should include one outer join (remember that the LTS for our dimension included a couple of outer joins), but upon closer examination, we see that it actually includes both outer joins, even though it didn’t need to include the MATCHES table.
Furthermore, if we remove the ‘Players – Name’ column as well (leaving only columns that come from the GOALS table), the two outer joins still show up:
You may be wondering: why do I care, as long as the result is still correct? Well, in our client assessment this was extremely important, because the outer joins included in every query were causing them to run much slower than they should.
The Solution
The solution in these situations is to “trick” the BI Server into using the outer-joined logical table sources only when needed. It requires a little more development time, but in the end it is well worth it. Let’s take a look at what our model looked like with this approach and what the results were.
As you can guess from the LTS names, the GOALS LTS includes only the GOALS table as its physical source, the GOALS_PLAYERS LTS includes the GOALS and PLAYERS tables, while the GOALS_PLAYERS_MATCHES LTS includes all three of them. The order of the logical table sources matters, as you want your first choice on top and your last choice at the bottom. Alternatively, you can use Priority Groups to determine the order in which the BI Server is going to try to use each LTS based on the query criteria. Let’s see if it actually works.
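The source selection can be thought of as a priority search: walk the logical table sources in order and pick the first one whose column set covers the query. This rough Python illustration is a simplification of what the BI Server actually does, and the column names are invented for the example:

```python
# Each LTS maps to the set of logical columns it can supply,
# listed in priority order (first choice on top).
LTS_ORDER = [
    ("GOALS",                 {"Goal", "Minute", "# of Goals"}),
    ("GOALS_PLAYERS",         {"Goal", "Minute", "# of Goals", "Name"}),
    ("GOALS_PLAYERS_MATCHES", {"Goal", "Minute", "# of Goals", "Name", "Venue"}),
]

def choose_lts(requested_columns):
    """Return the first (highest-priority) LTS that covers the request."""
    requested = set(requested_columns)
    for name, columns in LTS_ORDER:
        if requested <= columns:
            return name
    raise ValueError("no single LTS covers the request")

print(choose_lts({"Goal", "# of Goals"}))     # GOALS -> no outer joins
print(choose_lts({"Goal", "Name"}))           # GOALS_PLAYERS -> one outer join
print(choose_lts({"Goal", "Name", "Venue"}))  # GOALS_PLAYERS_MATCHES -> both
```

The payoff is that a request touching only GOALS columns never pays for the PLAYERS or MATCHES joins.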
In our multiple LTS test, I added an extra column to our model called ‘LTS ID’ that will help us identify in Answers the LTS being used for each query.
So this is our first result set:
This query includes only columns from the GOALS table, plus our LTS ID column, which shows the LTS being used. The important thing, however, is that when we look at the SQL issued by the BI Server, we see that no outer joins were included, indicating once more that the BI Server was smart enough to choose the most efficient LTS for this query.
If we include a column from the PLAYERS table to our request, this is what we see:
And the SQL validates our theory, including only one outer join:
Lastly, if we add a column from the MATCHES table to the request:
Our SQL should include both outer joins, as it is using the LTS seen in the request, with all 3 tables:
The New World Order
Recently, we heard a rumor from Oracle Product Management that with the 11.1.1.7 release of OBIEE, the BI Server would be a little smarter around pruning outer-joined logical table sources. We decided to put it to the test, and the results were encouraging.
We went back to the one LTS idea:
But in this case, if the theory is correct, the BI Server should be able to prune the unnecessary tables despite the outer joins.
When we run a request in Answers with columns from all three tables:
The BI Server will issue both outer joins in the query as expected:
However, if we remove the ‘Matches – Venue’ column:
The BI Server will remove the MATCHES table from the query, therefore eliminating one of the outer joins:
And finally, if we leave only columns from the GOALS table in our query, no outer joins are included:
And all this, using one single LTS in 11.1.1.7.0.
Now, I’m not claiming that the newer versions of OBIEE will be able to prune every unnecessary outer join in every single situation. This was a simple example, with only a few tables, but it does show that Oracle has been working on this. And that’s very good news if you are an RPD developer.
Until next time!
Inside My Home Office Development Lab (VMWare, OS X Server)
A little bit off-topic for once, but I thought one or two readers might be interested in the setup I’ve got back at my home office, for testing out and developing stuff around Oracle BI. Like most Oracle tech enthusiasts, I’ve been a long-time user of tools such as VMWare and Virtualbox for desktop virtualisation, but over the past couple of years the scope and scale of Oracle Fusion Middleware has meant that a bit more of an “enterprise” approach had to be taken in my test lab. In addition, we’re an Apple-centric family back home, with iMacs in most rooms, iPads and iPhones everywhere, so I thought it’d be a good opportunity to try out OS X Server as well, to see if I could add a bit of management to the whole network, and introduce things such as user and hardware policies, DNS and centralised software update, that sort of thing – as most people do on their home networks … (!)
Let’s go to virtualisation first. Over the past few years I’d been buying a number of Mac Mini servers, each with 16GB RAM and managed centrally through Apple Remote Desktop, as shown in the screenshot below. These worked well, went in a cupboard under the stairs and ran nicely “headless”, with desktop access again being through Remote Desktop, or VNC. Remote Desktop can also manage VMs that run VNC as well, which comes as standard with Red Hat / Oracle Linux, or can be installed separately into Windows-based environments.
This worked well – and had the benefit of isolating one or two VMs on each Mac Mini, so performance issues didn’t hit other VMs – but it’s an inefficient way to run VMs. So, inspired by Steve Karam’s excellent “Make your own VM training/lab environment for $900″ blog post, I first built a VM server with 32GB RAM and a bunch of SSDs and HDDs, then built a second server with 64GB of RAM and even more disks. Then, initially with the free version of VMWare’s ESXi hypervisor, and then with the full VSphere setup courtesy of VMWare’s “Guru Program”, I put together a two-node cluster of VMWare ESXi hosts managed through VMWare VCenter, which can host upwards of 20 or so VMs as part of a single managed server pool. The screenshot below shows this setup in action, and you can see the various VMs under the two hosts (servers), with various options to migrate them, manage them and so on.
Part of the driver for getting VMWare, in particular, working at home was that many of our customers use it for development, DR and so forth, and I wanted to test out VMWare’s VMotion feature as an alternative to OBIEE’s WebLogic-based clustering. VMotion (other virtualisation vendors have similar technologies) allows VMs running on one particular server, but with their files stored on shared storage, to automatically restart on another server if the first one goes down. It’s not quite the same as clustering – each machine is independent, and in this example only one runs at a time – but if a customer is just looking for an HA solution, and they already make heavy use of VMWare, it’s an interesting alternative that I want to try and model in my own lab if possible. I did also look at Oracle VM, but initially, on the UEFI-based hardware I first used, I couldn’t get it to boot, and to be honest the VMWare suite is very easy to use and very “industry standard”, with the only real drawback being the requirement for a Windows server to run the VCenter management software.
Apart from the two servers in an ESXi VSphere cluster, I’ve also got a software (FreeNAS) NAS actually running on a VM in that cluster, but I’m phasing that out in favour of a dedicated hardware NAS, a Synology DS413, as the functionality of a NAS VM isn’t quite up there with that of a dedicated unit; it’s running on the same machine as the VMs, taking up their disk space, and using the Synology I can also set up RAID on the four disks to protect against disk failures. I’ve been very pleased with the Synology so far, and I’ve now got around 10TB of NAS storage on the network, accessible from all the Macs in the house.
We’re actually a Mac-centric household here, with iMacs in the kids’ rooms, a 32GB iMac in my office, a few laptops lying around, and a bunch of Apple TVs and Airport Expresses so we can beam music and video around the house. About six months ago I thought I’d have a play around with Mac OS X Server to see if I could set up a central server for the house, to enable me to manage and maintain all the iMacs as well as test out features such as profiles and centralised backup that I’d been considering for the Rittman Mead office.
OS X Server is now set up with Open Directory (Apple’s LDAP directory, their equivalent to Microsoft AD) and with each iMac having a network logon, with the kids files stored on the server and the ability for anyone to log into any machine in the house, using their central ID. DNS is set up so that all the iMacs, VMs, servers and so forth have proper fully-qualified domain names, and we’ve also set up a central Software Update cache, file sharing and the like. The screenshot below is from the OS X Server “Server” app, and you can see the various network services provided by OS X Server.
There are also a few Mac Mini servers scattered around the house – one hosts OS X Server, another runs VMs prior to importing them into ESXi, and so on – so that the home office network, extending out to the rest of the house, looks like this:
VMWare ESXi, VSphere (the overall platform) and VCenter (the management server, used when there’s more than one ESXi host you want to run services on) are the most impressive part of the setup, though. As ESXi is a bare-metal hypervisor (like Oracle’s own Oracle VM), there’s little of the OS overhead that you get when running VMs on a regular server or desktop, plus you can take advantage of features like VMotion (for high availability of VMs when their files are stored on shared storage), host-to-host migration and so on. In the two screenshots of the VMWare VCenter console below, you can see on the left the set of VMs currently assigned to the larger of the two servers, with the graphs down the right-hand side showing the profile of CPU and RAM usage; whilst on the right are details of one of the VMs, in this case running Oracle Linux and E-Business Suite 12.1.3.
What this means in effect is that I’ve got pretty much every bit of BI-related Fusion Middleware software either running, or suspended but ready to run, making it a lot easier to put examples together, try out some integration technique, or refer back to an older version of the product if needed. In addition, as mentioned above there’s an EBS 12.1.3 instance running, a VM running Hadoop, Hive and the rest of Oracle’s big data stack, as well as various Oracle demos and beta releases.
And that’s a lot of Oracle technology to manage – so, as we recommend to our customers, I’ve got another VM, this time running Oracle Enterprise Manager 12c, linked in to most of the VMs and managing and monitoring them in various groups.
I’m about to upgrade EM to 12cR2 to make use of its new ability to manage an Exalytics server as a whole unit, but for now I use it to check the status of the various VMs, teach myself EM Cloud Control, and apply patches to the databases and other non-BI components.
Finally, to access the various VMs, I either VNC into the Linux-based ones using Apple Remote Desktop, or use Microsoft Remote Desktop Connection to access the Windows ones. For the various OBIEE environments, Safari bookmarks synced via iCloud are the main way in, with a VPN server in the home router to provide access from the outside (I did try using the VPN Server in OS X Server, but found it too flakey).
So there we go – a bit geeky I know, and probably a bit over-the-top for most home development labs – but if you’re interested in more details on how I set it up, or you’ve done something similar with other technology, Oracle VM or Open Stack for example – drop me a line or add a comment to the post.
Configuring SSL on OEID v3.0
Oracle Endeca Information Discovery (OEID) version 3.0 now supports secure connections over HTTPS, which is useful when implementing Endeca in a production environment where it’s expected that all security risks have been controlled. Sending requests and data over unsecured HTTP is generally considered one of the higher-risk vulnerabilities, with the solution generally being to add SSL encryption to standard HTTP communications.
However, enabling SSL connections between the different parts of the OEID package seems on the surface to be a bit tricky, since all communications with the Endeca Server are via web service calls; therefore, configuring the Endeca Server for SSL requires all other servers that connect to it to be updated and reconfigured. In this blog post, therefore, I’m going to walk through a step-by-step guide on how to set up this feature, and how to ensure everything subsequently works correctly.
SSL configuration on Oracle WebLogic Server
All WebLogic domains will need to be SSL-enabled and have a secure port defined. This can be done as one of the steps in the Fusion Middleware Configuration Wizard. To see the setting, select the ‘Administration Server’ from the list, as shown in the screenshot below, when you reach that point in the wizard.
The next step is to check the ‘SSL enabled’ option and choose a ‘SSL Listen Port’.
Generate keys
The first step in configuring SSL between the different parts of OEID is to generate an SSL key and then share it between them. A key generator script comes as part of the OEID WebLogic domain installation, and should be accessible at:
WebLogicInstallationpath/user_projects/Endeca_domain/EndecaServer/bin/generate_ssl_keys.sh
When running this script, the Endeca Server WebLogic Server needs to be running, and the script requires both credentials to access the Endeca Server and an SSL passphrase.
Browser certification
As an OEID developer, you might want to check some web-service requests on your browser; for example, if you want to make sure the Endeca Server is up and running you can try requesting its WSDL document:
https://Endeca_server_host:Endeca_server_port/endeca-server/ws/manage?wsdl
With the SSL configuration in place, you need to add the SSL certificates to your browser in order to receive a reply from the server.
Open your browser and go to its preferences page. Go to the Advanced > Encryption tab and click on ‘View Certificates’. This is where you will need to import the generated esClientCert.p12 file and the private passphrase.
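The same WSDL check can also be scripted rather than done in a browser. Here is a hedged Python sketch, assuming you have first converted the generated esClientCert.p12 into PEM files; the host, port and file names in the usage comment are placeholders, not values from the OEID install:

```python
import ssl
import urllib.request

def wsdl_url(host: str, port: int) -> str:
    """Build the Endeca Server manage-service WSDL URL."""
    return f"https://{host}:{port}/endeca-server/ws/manage?wsdl"

def fetch_wsdl(host: str, port: int, client_cert_pem: str,
               client_key_pem: str, ca_cert_pem: str) -> bytes:
    """Request the WSDL over SSL using the generated client certificate."""
    # Trust the CA that signed the server certificate...
    ctx = ssl.create_default_context(cafile=ca_cert_pem)
    # ...and present our client certificate for mutual authentication.
    ctx.load_cert_chain(certfile=client_cert_pem, keyfile=client_key_pem)
    with urllib.request.urlopen(wsdl_url(host, port), context=ctx) as resp:
        return resp.read()

# e.g. fetch_wsdl("endeca-host", 7002, "esClientCert.pem", "esClientKey.pem", "esCA.pem")
```

If the call returns the WSDL document, the Endeca Server is up and its SSL configuration is working end to end.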
Integrator Configuration
Integrator configuration is done in three areas: the Integrator initialisation file, JRE variables, and each graph component’s settings.
- Integrator.ini: this file is by default in the root of the Integrator installation directory. Add the following lines under “-vmargs”.
-Djavax.net.ssl.keyStore=yourcertkeystorefile.jks
-Djavax.net.ssl.keyStorePassword=keystorepass
-Djavax.net.ssl.trustStore=yourtruststorefile.jks
-Djavax.net.ssl.trustStorePassword=truststorepass
- JRE Configuration: the same variables should be added to the Integrator Designer JRE. To do so, open the Clover Preferences from the Window menu. Under Java > Installed JREs, select the available JDK and click Edit. Add the same options to ‘Default VM arguments’ and finish the edit.
Components
Any graph component requesting a web-service call must be configured for an SSL connection. The settings are not all the same and differ for each component type. For a WEB_SERVICE_CLIENT component like the one below, it is enough to make sure all calls are to an https address and that the correct port has been defined. You’ll also need to Disable SSL Certificate Validation.
For a BULK ADD/REPLACE component, it is enough to check the SSL Enabled option.
Endeca Studio Data Sources
Create a folder under the default Liferay path/data and call it ‘endeca-data-sources’. All you need to do is re-copy the generated key-stores to this new folder.
Then, if you add the SSL passphrase to the data source definition in the Studio Control Panel, the definition should be correct and connect successfully.
Provisioning Service
In the case of the Provisioning Service, first copy the key-stores to path to Oracle WebLogic/user_projects/domains/oracle.eid-ps/eidProvisioningConfig/
Second, go to the WebLogic Administration Console. For the current server, enter the SSL passphrase on the keystores and SSL configuration page, and then restart the server from the control page.