Oracle Analytics Cloud Workshop FAQ
A few weeks ago, I had the opportunity to present the Rittman Mead Oracle Analytics Cloud workshop in Oracle's head office in London. The aim of the workshop was to educate potential OAC customers and give them the tools and knowledge to decide whether or not OAC was the right solution for them. We had a great cross-section of multiple industries (although telecoms were over-represented!) and OBIEE familiarity. Together we came up with a series of questions that needed to be answered to help in the decision-making process. In the coming workshops we will add more FAQ-style posts to the blog to help flesh out the features of the product.
If you are interested in coming along to one of the workshops to get some hands on time with OAC, send an email to training@rittmanmead.com and we can give you the details.
Do Oracle provide a feature comparison list between OBIEE on premise and OAC?
Oracle do not provide a feature comparison between on-premise and OAC. However, Rittman Mead have done an initial comparison between OAC and traditional on-premise OBIEE 12c installations:
High Level
- Enterprise Analytics is identical to 12c Analytics
- Only two Actions available in OAC: Navigate to BI content, Navigate to Web page
- BI Publisher is identical in 12c and OAC
- Data Visualiser has additional features and a slightly different UI in OAC compared to 12c
BI Developer Client Tool for OAC
- Looks exactly the same as the OBIEE client
- Available only for Windows, straightforward installation
- OAC IP address and BI Server port must be provided to create an ODBC data source
- Allows you to open and edit the OAC model online
- Allows offline development; the Snapshots interface is used to upload the model to OAC (this will completely replace the existing model)
Data Modeler
- Alternative tool to create and manage metadata models
- Very easy to use, but limited compared to the BI Developer Client.
Catalog
- It's possible to archive/unarchive catalog folders from on-premise to OAC.
BAR file
- It's possible to create OAC bar files
- It's possible to migrate OAC bar files to OBIEE 12c
Can you ever be charged for network usage, for example when connecting to an on-premises data source using RDC?
Oracle will not charge you for network usage as things stand. Your charges come from the following:
- Which version of OAC you have (Standard, Data Lake or Enterprise)
- Whether you are using Pay-as-you-go or Monthly Commitments
- The amount of disk space you have specified during provisioning
- The combination of OCPU and RAM currently in use (size).
- The up-time of your environment.
So, for example, an environment that has 1 OCPU with 7.5 GB RAM will cost less than an environment with 24 OCPUs and 180 GB RAM if they are up for the same amount of time, everything else being equal. That said, there is an additional charge on top of the analytics license, as a cloud database is required to configure and launch an analytics instance; this should be taken into consideration when choosing Oracle Analytics Cloud.
Do you need to restart the OAC environment when you change the RAM and OCPU settings?
Configuring the number of OCPUs and associated RAM is done from the Analytics Service Console. This can be done during uptime without a service restart; however, the analytics service will be unavailable while the change is applied.
The PaaS Service Manager Command Line Interface (PSM Cli), which Francesco covered here, allows this to be scripted and scheduled. An interesting use case would be to increase resources during month-end processing, when your concurrent user count is at its highest, whilst in quieter parts of the month you can scale back down.
This is done using the 'scale' command, which takes a JSON file as a parameter containing information about what the environment should look like. You will notice in the example below that the JSON file refers to an object called 'shape'; this is the combination of OCPU and RAM that you want the instance to scale to. Some examples of shapes are:
- oc3 — 1 OCPU with 7.5 GB RAM
- oc4 — 2 OCPUs with 15 GB RAM
- oc5 — 4 OCPUs with 30 GB RAM
- oc6 — 8 OCPUs with 60 GB RAM
- oc7 — 16 OCPUs with 120 GB RAM
- oc8 — 24 OCPUs with 180 GB RAM
- oc9 — 32 OCPUs with 240 GB RAM
For example, the following command scales the rittmanmead-analytics-prod service to the oc9 shape:
$ psm analytics scale -s rittmanmead-analytics-prod -c ~/oac-obiee/scale-to-monthend.json
where the JSON file contains the following:
{ "components" : { "BI" : "shape" : "oc9", "hosts":["rittmanmead-prod-1"] } } }
Oracle supply documentation for the commands required here: https://docs.oracle.com/en/cloud/paas/java-cloud/pscli/analytics-scale2.html.
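Because the scale command is scriptable, the month-end use case mentioned above can be automated. The sketch below is illustrative only: the service name, payload file paths and the day-of-month threshold are assumptions, and the JSON payload files are assumed to contain the target shapes (e.g. oc9 for month end, a smaller shape such as oc4 for normal running).
#!/bin/bash
# Illustrative sketch: scale an OAC instance up for month-end processing and
# back down afterwards, using the documented 'psm analytics scale' command.
# Service name, payload paths and the day-of-month threshold are examples only.
SERVICE=rittmanmead-analytics-prod
DAY=$(date +%d)
if [ "$DAY" -ge 28 ]; then
    # Month-end window: payload assumed to contain a large shape such as oc9
    psm analytics scale -s "$SERVICE" -c ~/oac-obiee/scale-to-monthend.json
else
    # Quieter part of the month: payload assumed to contain a smaller shape such as oc4
    psm analytics scale -s "$SERVICE" -c ~/oac-obiee/scale-to-normal.json
fi
Run daily from cron, something like this keeps the instance on a small shape for most of the month and only uses the larger (more expensive) shape around month end.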
How is high availability provisioned in Oracle Analytics Cloud?
Building a highly available infrastructure in the cloud needs to take into consideration three main areas:
Server Failure: Oracle Analytics Cloud can be clustered; additional nodes (up to 10) can be added dynamically in the Cloud 'My Services' console should they be needed.
It is also possible to provision a load balancer, as you can see from the screenshot below:
Zone Failure: Sometimes it is more than just a single server that fails. Cloud architecture is built on server farms, which themselves can be subject to network issues, power failures and weather anomalies. Oracle Analytics Cloud allows you to create an instance in a region, much like Amazon's "availability zones". A sensible precaution would be to create a disaster recovery environment in a different region to your main production environment; to help reduce costs, this can be provisioned on the Pay-as-you-go license model and therefore only be chargeable when it's being used.
Cloud Failure: Although rare, sometimes the cloud platform itself can fail. For example, both of the data centres that you have chosen to counter the previous point could fall victim to a weather anomaly. Oracle Analytics Cloud allows you to take regular backups of your reports, dashboards and metadata, which can be downloaded, stored off-cloud and re-implemented in another 12c environment.
In addition to these points, it's advisable to automate and test everything. Oracle supply a very handy set of scripts and APIs called the PaaS Service Manager Command Line Interface (PSM Cli) which can be used to achieve this. For example, it can be used to automate backups, set up monitoring and alerting, and, arguably most importantly, test your DR and HA infrastructure.
Can you push the user credentials down to the database?
At this point in time there is no way to configure database authentication providers in a similar way to the WebLogic providers of the past. However, Oracle IDCS does have a REST API that could be used to simulate this functionality; documentation can be found here: https://docs.oracle.com/en/cloud/paas/identity-cloud/rest-api/OATOAuthClientWebApp.html
You can store user group memberships in a database and have your service's authentication provider access this information when authenticating a user's identity. You can use the script configure_bi_sql_group_provider to set up the provider and create the tables that you need (GROUPS and GROUPMEMBERS). After you run the script, you must populate the tables with your group and group member (user) information.
Group memberships that you derive from the SQL provider don't show up in the Users and Roles page in the Oracle Analytics Cloud Console as you might expect, but the member assignments work correctly.
These tables are in the Oracle Database Cloud Service you configured for Oracle Analytics Cloud and in the schema created for your service. Unlike the on-premises equivalent functionality, you can’t change the location of these tables or the SQL that retrieves the results.
The script to achieve this is stored on the analytics server itself and can be accessed using SSH (as the user 'opc') with the private keys that you created during the instance provisioning process. It is stored in: /bi/app/public/bin/configure_bi_sql_group_provider
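As a minimal illustration (the key path and IP address below are placeholders, and the script's arguments are not shown), connecting to the node and locating the script might look like this:
# Connect to the OAC node as the 'opc' user with the private key created during provisioning
# (key path and public IP are placeholders)
ssh -i ~/.ssh/oac_rsa opc@<OAC_PUBLIC_IP>
# Once on the box, the provider setup script can be found here
ls -l /bi/app/public/bin/configure_bi_sql_group_provider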
Can you implement SSL certificates in Oracle Analytics Cloud?
The short answer is yes.
When Oracle Analytics Cloud instances are created, similarly to on-premises OBIEE instances, a self-signed certificate is generated. The self-signed certificate is intended to be temporary and you must replace it with a new private key and a certificate signed by a certification authority. Doc ID 2334800.1 on support.oracle.com has the full details on how to implement this, but the high-level steps (taken from the document itself) are:
- Associate a custom domain name against the public ip of your OAC instance
- Get the custom SSL certificate from a Certificate Authority
- Specify the DNS registered host name that you want to secure with SSL in servername.conf
- Install the intermediate certificate
- Run the script to register the new private key and server certificate
Can you implement Single Sign On (SSO) in Oracle Analytics Cloud?
Oracle Identity Cloud Service (IDCS) allows administrators to create security providers for OAC, much like the WebLogic providers in on-premises OBIEE. These can be created/edited to include single sign-on URLs, certificates etc., as shown in the screenshot below:
Oracle support Doc ID 2399789.1 covers this in detail for Microsoft Azure AD and OAC, and is well worth a read.
Are RPD files (BAR files) backwards compatible?
This depends on what has changed between releases. A new OAC version number doesn't necessarily mean changes to the OBIEE components themselves (e.g. it could just be an improvement to the 'My Services' UI). However, if there have been changes to the way the XML is formed in reports, for example, these won't be compatible with previous versions of the catalog. That said, environments look like they can be upgraded at any time, so you should be able to take a snapshot of your environment, upgrade it to match the newer version, and then redeploy/refresh from your snapshot.
How do you connect securely to AWS?
There doesn't seem to be any documentation on how exactly Visual Analyzer connects to Amazon Redshift using the 'Create Connection' wizard. However, there is an option to create an SSL ODBC connection to the Redshift database that can then be used to connect using the Visual Analyzer ODBC connection wizard:
Can you still edit instanceconfig and nqsconfig files?
Yes, you can; you need to use your SSH keys to sign in to the box (as the user 'opc'). The files are in the following locations:
/bi/domain/fmw/user_projects/domains/bi/config/fmwconfig/biconfig/OBIPS/instanceconfig.xml
/bi/domain/fmw/user_projects/domains/bi/config/fmwconfig/biconfig/OBIS/NQSConfig.INI
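Before editing these files in place it is worth taking copies off the box; a minimal sketch using scp (the key path and IP address are placeholders) could look like this:
# Pull the OAC configuration files down for backup/versioning before editing them.
# Key path and public IP are placeholders.
KEY=~/.ssh/oac_rsa
HOST=opc@<OAC_PUBLIC_IP>
mkdir -p ./oac-config-backup
scp -i "$KEY" "$HOST":/bi/domain/fmw/user_projects/domains/bi/config/fmwconfig/biconfig/OBIPS/instanceconfig.xml ./oac-config-backup/
scp -i "$KEY" "$HOST":/bi/domain/fmw/user_projects/domains/bi/config/fmwconfig/biconfig/OBIS/NQSConfig.INI ./oac-config-backup/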
It's also worth mentioning that there is a guide here which explains where the responsibility lies should anything break during customisations of the platform.
Who is responsible for what regarding support?
Guide to Customer vs Oracle Management Responsibilities in Oracle Infrastructure and Platform Cloud Services (Doc ID 2309936.1)
DevOps in OAC: Scripting Oracle Cloud Instance Management with PSM Cli
This summer we unselfish Italians decided not to participate in the World Cup to give another country the opportunity to win (good luck with that, England!). This decision, which I strongly support, gives me a lot of time for blogging!
As already written, two weeks ago while in Orlando for Kscope18 I presented a session about DevOps and OBIEE, focusing on how to properly source-control, promote and regression-test any component of the infrastructure.
Development Isolation
One key aspect of DevOps is providing Development Isolation: a way of allowing multiple development streams to work independently and merging the outcome of the process into the main environment only after it has been tested and validated. This is needed to avoid the standard situation where code promotions are blocked because different work streams are not in sync: forcing a team to postpone a code release just because another team doesn't have the UAT sign-off is just one example of the problems with non-isolated development platforms.
We have discussed the development isolation topic in the past, focusing mainly on concurrent repository development and how to integrate it with versioning tools like Git and SVN. The concurrent online editing option is not viable since multiple developers are modifying the same artifact (the RPD) without a way of regression-testing the changes or verifying that what has been done is correct before merging the changes into the RPD.
Alternative solutions, such as MUDE (the default multi-user development method provided by the Admin Tool) or pure offline RPD work, encounter the same problems described above: no feature or regression testing is available before merging the RPD into the main development environment.
Different RPD development techniques solve the problem only partially: since almost any OAC/OBIEE development consists of both RPD and catalog work (creation of analyses/dashboards/VA projects), we need an approach which provides Development Isolation at both levels. The solution, in order to properly build a DevOps framework around OAC/OBIEE, is to provide isolated, feature-related, full OBIEE instances where the RPD can be edited in online mode, the catalog work can be done independently, and the overall result can be tested and validated before being merged into the common development environment.
Feature-Related Instances
The feature instances, as described above, need to be full OAC/OBIEE development instances where only one feature (or a small set of features) is worked on at a time, in order to give developers the agility to release the code as soon as it's ready and tested. In the on-premises world this can "easily" be achieved by providing a number of dedicated Virtual Machines or, more in line with recent trends, automated instance provisioning with Docker using a template image like the one built by our former colleague Gianni Ceresa.
However, when we think about Oracle Analytics Cloud (OAC), we seem to have two problems:
- There is a cost associated with every instance, thus minimizing the number of instances and the uptime is necessary
- The OAC provisioning interface is point and click, thus automating the instance management seems impossible
The overall OAC instance cost can be mitigated by the Bring Your Own License (BYOL) licensing method, which allows customers to migrate on-premises licenses to the cloud and get discounted prices on the hourly/monthly instance cost (more details here). However, since the target is to minimize the cost, and thus the number of instances and their uptime, we need a way of doing so that doesn't rely on a human and a point-and-click interface. Luckily, the PaaS Service Manager Command Line Interface (PSM Cli) solves this problem by providing a scriptable way of creating, starting and stopping instances.
PaaS Service Manager Command Line Interface
PSMCLI is a command line interface acting as a wrapper over the PaaS REST APIs. Its usage is not limited to OAC: the same interface can be used to create and manage instances of Oracle's Database Cloud Service or Java Cloud Service, among others.
When talking about OAC, please keep in mind that, as of now, PSM Cli works only with the non-autonomous version, but I believe Autonomous support will be added soon.
Installing and Configuring PSM Cli
PSMCLI has two prerequisites before it can be installed:
- cURL - a command line utility to transfer data with URLs
- Python 3.3 or later
Once both prerequisites are installed, PSM can easily be downloaded with the following cURL call (a filled-in example follows the parameter list below):
curl -X GET -u <USER>:<PWD> -H X-ID-TENANT-NAME:<IDENTITY_DOMAIN> https://<REST_SERVER>/paas/core/api/v1.1/cli/<IDENTITY_DOMAIN>/client -o psmcli.zip
Where
- <USER> and <PWD> are the credentials
- <IDENTITY_DOMAIN> is the Identity Domain ID specified during the account creation
- <REST_SERVER> is the REST API server name which is:
- psm.us.oraclecloud.com if you are using a US datacenter
- psm.aucom.oraclecloud.com if you are in the AuCom region
- psm.europe.oraclecloud.com otherwise
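For instance, with a hypothetical user and identity domain against the European REST server, the call would look like the following (all values are illustrative placeholders):
# Illustrative values only: user, password and identity domain are placeholders
curl -X GET -u jane.doe@example.com:MyP4ssword -H X-ID-TENANT-NAME:mytenantdomain https://psm.europe.oraclecloud.com/paas/core/api/v1.1/cli/mytenantdomain/client -o psmcli.zip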
The next step is to install PSM as a Python package with:
pip3 install -U psmcli.zip
After the installation, it's time for configuration:
psm setup
The configuration command will request the following information:
- Oracle Cloud Username and Password
- Identity Domain
- Region, which needs to be set to:
- emea if the REST_SERVER mentioned above contains emea
- aucom if the REST_SERVER mentioned above contains aucom
- us otherwise
- Output format: the choice is between short, json and html
- OAuth: the communication between the CLI and the REST API can use basic authentication (flag n) or OAuth (flag y). If OAuth is chosen then ClientID, Secret and Access Token need to be specified
A JSON profile file can also be used to provide the same information mentioned above. The structure of the file is the following
{
"username":"<USER>",
"password":"<PASSWORD>",
"identityDomain":"<IDENTITY_DOMAIN>",
"region":"<REGION>",
"outputFormat":"<OUTPUT_FORMAT>",
"oAuth":{
"clientId":"",
"clientSecret":"",
"accessTokenServer":""
}
}
If the profile is stored in a file profile.json, the PSM configuration can be achieved by just executing:
psm setup -c profile.json
One quick note: the identity domain Id shown in the Oracle Cloud header won't work if it's not the original name (the name at the time of creation).
To get the correct identity domain Id, open an Oracle Cloud instance you have already created (e.g. a database one) and check its Details: you'll see the original identity domain listed there (credit to Pieter Van Puymbroeck).
Working With PSM Cli
Once PSM has been correctly configured, it's time to start checking what options are available; for a detailed list of the options, check the PSM documentation.
The PSM commands are product related, so each command is in the form:
psm <product> <command> <parameters>
Where
- product is the Oracle Cloud product, e.g. dbcs, analytics, BigDataAppliance; for a complete list use psm help
- command is the action to be executed against the product, e.g. services, stop, start, create-service
- parameters is the list of parameters to pass, depending on the command executed
The first step is to check what instances I have already created; I can do so for the database by executing:
psm dbcs services
which, as expected, will list all my active instances. I can then start, stop or restart an instance using:
psm dbcs start/stop/restart -s <INSTANCE_NAME>
This, in my example, returns the Id of the Job assigned to the stop operation. When I check the status via the service command I get Maintenance, just like in the web UI. The same applies to the start and restart operations. Please keep in mind that all the calls are asynchronous: the command will call the related REST API and then return the associated Job Id without waiting for the operation to finish. The status of a job can be checked with:
psm dbcs operation-status -j <JOB_ID>
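Because of this asynchronous behaviour, any automation built on top of PSM Cli usually needs to poll the job until it completes. Below is a minimal sketch that assumes the operation-status output contains a recognisable status string such as SUCCEED or FAILED; the exact wording depends on the output format chosen during psm setup, so adjust the patterns accordingly.
#!/bin/bash
# Poll a PSM job until it finishes. The SUCCEED/FAILED strings are assumptions
# about the operation-status output and may need adjusting to your output format.
JOB_ID=$1
while true; do
    STATUS=$(psm dbcs operation-status -j "$JOB_ID")
    if echo "$STATUS" | grep -qi "SUCCEED"; then
        echo "Job $JOB_ID completed"
        break
    elif echo "$STATUS" | grep -qi "FAILED"; then
        echo "Job $JOB_ID failed" >&2
        exit 1
    fi
    sleep 30   # wait before checking the job again
done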
The same operations described above are available on OAC with the same commands, simply changing the product from dbcs to analytics, like:
psm analytics start/stop/restart -s <INSTANCE_NAME>
On top of the basic operations, PSM Cli also allows the following:
- Service Instance: start/stop/restart, instance creation-deletion
- Access Control: lists, creates, deletes, enables and disables access rules for a service.
- Scaling: changes the compute shape of an instance and allows scaling up/down.
- Storage: extends the storage associated with OAC
- Backup Configuration: updates/shows the backup configurations
- Backups: lists, creates, deletes backups of the instance
- Restore: restores a backup giving detailed information about it and the history of Restores
- Patches: allows patching, rolling back, running pre-checks, and retrieving the patching history
Creating an OAC Instance
So far we have discussed maintenance of already-created instances with the start/stop/restart commands, but PSM Cli also allows the creation of an instance via the command line. The call is pretty simple:
psm analytics create-service -c <CONFIG_FILE> -of <OUTPUT_FORMAT>
Where
- CONFIG_FILE: is the file defining all OAC instance configurations
- OUTPUT_FORMAT: is the desired output format, one of short, json or html
The question now is:
How do I create a Config File for OAC?
The documentation doesn't provide any help on this, but we can use the same approach as with on-premises OBIEE and its response files: create the first instance with the web UI, save the payload for future use, and change the parameters when necessary.
On the Confirm screen, there is an option to download the REST payload in JSON format, with the resulting JSON config file being:
{
"edition": "<EDITION>",
"vmPublicKeyText": "<SSH_TOKEN>",
"enableNotification": "true",
"notificationEmail": "<EMAIL>",
"serviceVersion": "<VERSION>",
"isBYOL": "false",
"components": {
"BI": {
"adminUserPassword": "<ADMINPWD>",
"adminUserName": "<ADMINUSER>",
"analyticsStoragePassword": "<PWD>",
"shape": "oc3",
"createAnalyticsStorageContainer": "true",
"profile_essbase": "false",
"dbcsPassword": "<DBCSPWD>",
"totalAnalyticsStorage": "280.0",
"profile_bi": "true",
"profile_dv_forced": "true",
"analyticsStorageUser": "<EMAIL>",
"dbcsUserName": "<DBUSER>",
"dbcsPDBName": "<PDBNAME>",
"dbcsName": "<DBCSNAME>",
"idcs_enabled": "false",
"analyticsStorageContainerURL": "<STORAGEURL>",
"publicStorageEnabled": "false",
"usableAnalyticsStorage": "180"
}
},
"serviceLevel": "PAAS",
"meteringFrequency": "HOURLY",
"subscriptionId": "<SUBSCRIPTIONID>",
"serviceName": "<SERVICENAME>"
}
This file can be stored and the parameters changed as necessary to create new OAC instances with the command:
psm analytics create-service -c <JSON_PAYLOAD_FILE> -of short/json/html
As shown previously, the result of the call is a Job Id that can be monitored with:
psm analytics operation-status -j <JOB_ID>
Once the job has finished successfully, the OAC instance is ready to be used. If at a certain point the OAC instance is no longer needed, it can be deleted via:
psm analytics delete-service -s <SERVICE_NAME> -n <DBA_NAME> -p <DBA_PWD>
Where
- SERVICE_NAME is the OAC instance name
- DBA_NAME and DBA_PWD are the credentials of the DBA for the database where the OAC schemas reside
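Putting these commands together, a feature-related instance (as discussed in the Development Isolation section) could be provisioned from a stored payload and torn down once the feature is merged. The sketch below is illustrative only: the payload path, service name and DBA credentials are placeholders, and the job-polling step follows the same idea shown earlier.
#!/bin/bash
# Illustrative lifecycle of a feature instance: create it from a saved payload,
# develop and test on it, then delete it. All names and credentials are placeholders.
PAYLOAD=~/oac-obiee/feature-123-payload.json
SERVICE=rittmanmead-analytics-feature123
DBA_NAME=SYS
DBA_PWD=change_me

# 1. Create the OAC instance from the stored REST payload (returns a Job Id)
psm analytics create-service -c "$PAYLOAD" -of json

# 2. ...wait for the creation job to finish, then develop and test the feature...

# 3. Tear the instance down once the feature has been merged
psm analytics delete-service -s "$SERVICE" -n "$DBA_NAME" -p "$DBA_PWD"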
Summary
Worried about providing development isolation in OAC while keeping costs down? Not anymore! With PSM Cli you now have a way of creating instances on demand, starting/stopping them and scaling them up/down with a command-line tool that is easily integrated with automation tools like Jenkins.
Create OAC instances automatically only when features need to be developed or tested, stop and start the instances based on your workforce timetables, and take advantage of the cloud while minimizing the associated cost by using PSM Cli!
One last note: for a full DevOps OAC implementation, PSM Cli is not sufficient; tasks like automated regression testing, code versioning and promotion can't be managed directly with PSM Cli and require external toolsets like the Rittman Mead BI Developer Toolkit. If you are interested in a full DevOps implementation on OAC and in understanding how PSM Cli can be used in conjunction with the Rittman Mead BI Developer Toolkit, don't hesitate to contact us!
Kscope18: It’s a Wrap!
As announced a few weeks back, I represented Rittman Mead at ODTUG's Kscope18, hosted in the magnificent Walt Disney World Dolphin Resort. It's always hard to be credible when telling people you are going to Disney World for work, but Kscope is a must-go event if you are in the Oracle landscape.
In the Sunday symposium, Oracle PMs share hints about the products' latest capabilities and roadmaps; then there are three full days of presentations spanning from the traditional Database, EPM and BI tracks to new entries like Blockchain. On top of this there is the opportunity to be introduced to a network of Oracle experts, including Oracle ACEs and ACE Directors, PMs, and people willing to share their experience with Oracle (and other) tools.
Sunday Symposium and Presentations
I attended the Oracle Analytics (BI and Essbase) Sunday Symposium run by Gabby Rubin and Matt Milella from Oracle. It was interesting to see the OAC product enhancements and roadmap as well as the feature catch-up in the latest release of OBIEE on-premises (version 12.2.1.4.0).
As expected, most of the push is towards OAC (Oracle Analytics Cloud): all new features will be developed there and eventually (but with no assurance on this) ported to the on-premises version. This makes a lot of sense from Oracle's point of view since it gives them the ability to produce new features quickly: they only need to be tested against a single hardware/software combination rather than the multitude they support on-premises.
Most of the enhancements are expected in the Mode 2/Self-Service BI area covered by Oracle Analytics Cloud Standard since a) this is the overall trend of the BI industry and b) the features required by traditional dashboard-style reporting are already well covered by OBIEE.
The following are just a few of the items you could expect in future versions:
- Recommendations during the data preparation phase like GeoLocation and Date enrichments
- Data Flow enhancements like incremental updates or parametrized data-flows
- New Visualizations and in general more control over the settings of the single charts.
In general Oracle's idea is to provide a single tool that meets both the needs of Mode 1 and Mode 2 Analytics (Self Service vs Centralized) rather than focusing on solving one need at a time like other vendors do.
A special mention goes to Oracle Autonomous Analytics Cloud, released a few weeks ago, which differs from traditional OAC in that backups, patching and service monitoring are managed automatically by Oracle, freeing the customer from those tasks.
During the main conference days (Monday to Wednesday) I attended a lot of very insightful presentations and the Oracle ACE briefing, which gave me ideas for future blog posts, so stay tuned! As written previously, I had two sessions accepted for Kscope18: "Visualizing Streams" and "DevOps and OBIEE: Do it Before it's Too Late"; in the following paragraphs I'll share details (and links to the slides) of both.
Visualizing Streams
One of the latest trends in the data and analytics space is the transition from old-style, batch-based reporting systems, which by design added a delay between event creation and its appearance in reporting, to the concept of streaming: ingesting and delivering event information and analytics as soon as the event is created.
The session explains how the analytics space has changed in recent times, providing details on how to set up a modern analytical platform which includes streaming technologies like Apache Kafka, SQL-based enrichment tools like Confluent's KSQL, and connections to self-service BI tools like Oracle's Data Visualization via SQL-on-Hadoop technologies like Apache Drill. The slides of the session are available here.
DevOps and OBIEE: Do it Before it's Too Late
In the second session (slides here), I initially went through the motivations for applying DevOps principles to OBIEE: the self-service BI wave started as a response to the long time to delivery associated with old-school centralised reporting projects. Huge monolithic sets of requirements to be delivered, no easy way to provide development isolation, and manual testing and code promotion were only a few of the blockers to fast delivery.
After an initial analysis of the default OBIEE development methods, the presentation explains how to apply DevOps principles to an OBIEE (or OAC) environment, specifically:
- Code versioning techniques
- Feature-driven environment creation
- Automated promotion
- Automated regression testing
It also provides details on how the Rittman Mead BI Developer Toolkit, partially described here, can act as an accelerator for the adoption of these practices in any custom OBIEE implementation and delivery process.
As mentioned before, the overall Kscope experience is great: plenty of technical presentations, roadmap information, networking opportunities and also much fun! Looking forward to Kscope19 in Seattle!
ChitChat for OBIEE – Now Available as Open Source!
ChitChat is the Rittman Mead commentary tool for OBIEE. ChitChat enhances the BI experience by bridging conversational capabilities into the BI dashboard, increasing ease-of-use and seamlessly joining current workflows. From tracking the history behind analytical results to commenting on specific reports, ChitChat provides a multi-tiered platform built into the BI dashboard that creates a more collaborative and dynamic environment for discussion.
Today we're pleased to announce the release of ChitChat into open source! You can find the GitHub repository here: https://github.com/RittmanMead/ChitChat
Highlights of the features that ChitChat provides include:
Annotate - ChitChat's multi-tiered annotation capabilities allow BI users to leave comments where they belong, at the source of the conversation inside the BI ecosystem.
Document - ChitChat introduces the ability to include documentation inside your BI environment for when you need more than a comment. Keeping key materials inside the dashboard gives the right people access to key information without searching.
Share - ChitChat allows you to bring attention to important information on the dashboard using the channel or workflow manager you prefer.
Verified Compatibility - ChitChat has been tested against popular browsers, operating systems, and database platforms for maximum compatibility.
Getting Started
In order to use ChitChat you will need OBIEE 11.1.1.7.x, 11.1.1.9.x or 12.2.1.x.
First, download the application and unzip it to a convenient location on the OBIEE server, such as a home directory or the desktop.
See the Installation Guide for full detail on how to install ChitChat.
Database Setup
Build the required database tables using the installer:
cd /home/federico/ChitChatInstaller
java -jar SocializeInstaller.jar -Method:BuildDatabase -DatabasePath:/app/oracle/oradata/ORCLDB/ORCLPDB1/ -JDBC:"jdbc:oracle:thin:@192.168.0.2:1521/ORCLPDB1" -DatabaseUser:"sys as sysdba" -DatabasePassword:password -NewDBUserPassword:password1
The installer will create a new user (RMREP) and the tables required for the application to operate correctly. The -DatabasePath flag tells the installer where to place the datafiles for ChitChat in your database server. -JDBC indicates what JDBC driver to use, followed by a colon and the JDBC string to connect to your database. -DatabaseUser specifies the user to access the database with. -DatabasePassword specifies the password for the user previously given. -NewDBUserPassword indicates the password for the new user (RMREP) being created.
WebLogic Data Source Setup
Add a Data Source object to WebLogic using WLST:
cd /home/federico/ChitChatInstaller/jndiInstaller
$ORACLE_HOME/oracle_common/common/bin/wlst.sh ./create-ds.py
To use this script, modify the ds.properties file using the method of your choice. The following parameters must be updated to reflect your installation: domain.name, admin.url, admin.userName, admin.password, datasource.target, datasource.url and datasource.password.
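As an illustration only, a filled-in ds.properties could be generated as below; every value is a placeholder (hostnames, ports, the target managed server name and passwords depend entirely on your installation), and only the parameters listed above are shown:
# Illustrative only: all values below are placeholders for your own environment
cat > ds.properties <<'EOF'
domain.name=bi
admin.url=t3://weblogic-host:9500
admin.userName=weblogic
admin.password=change_me
datasource.target=bi_server1
datasource.url=jdbc:oracle:thin:@192.168.0.2:1521/ORCLPDB1
datasource.password=password1
EOF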
Deploying the Application on WebLogic
Deploy the application to WebLogic using WLST:
cd /home/federico/ChitChatInstaller
$ORACLE_HOME/oracle_common/common/bin/wlst.sh ./deploySocialize.py
To use this script, modify the deploySocialize.py file using the method of your choice. The first line must be updated with the username, password and URL to connect to your WebLogic Server instance. The second parameter of the deploy command must be updated to reflect your ChitChat access location.
Configuring the Application
ChitChat requires several configuration parameters to allow the application to operate successfully. To change the configuration, you must log in to the database schema as the RMREP user and update the values manually in the APPLICATION_CONSTANT table.
See the Installation Guide for full detail on the available configuration and integration options.
Enabling the Application
To use ChitChat, you must add a small block of code on any given dashboard (in a new column on the right-side of the dashboard) where you want to have the application enabled:
<rm id="socializePageParams"
user="@{biServer.variables['NQ_SESSION.USER']}"
tab="@{dashboard.currentPage.name}"
page="@{dashboard.name}">
</rm>
<script src="/Socialize/js/dashboard.js"></script>
Congratulations! You have successfully installed the Rittman Mead commentary tool. To use the application to its fullest capabilities, please refer to the User Guide.
Problems?
Please raise any issues on the github issue tracker. This is open source, so bear in mind that it's no-one's "job" to maintain the code - it's open to the community to use, benefit from, and maintain.
If you'd like specific help with an implementation, Rittman Mead would be delighted to assist - please do get in touch with Jon Mead or DM us on Twitter @rittmanmead to get access to our Slack channel for support about ChitChat.
Please contact us on the same channels to request a demo.