Liberate your data

Intelligence is all about knowledge. This website is dedicated to sharing expertise on Oracle BI.

 

Patch Set Update: Oracle Smart View for Office 11.1.2.5.800

Hyperion Product Management recently announced the release of a Patch Set Update (PSU) for Oracle Smart View for Office 11.1.2.5.x.

Patch Set Update: 11.1.2.5.800 Oracle Smart View for Office Patch 28150001

 

This PSU download is available from the My Oracle Support | Patches & Updates section.

This patch provides the Smart View help files that can be installed on a user's local drive or on a web server in your organization.

  • Note that in the zip file, the help for English and non-localized languages is release 11.1.2.5.800. The help for localized languages is release 11.1.2.5.720.
  • The localized languages are Brazilian Portuguese, Dutch, French, German, Italian, Japanese, Korean, Spanish, Simplified Chinese, and Traditional Chinese.
README:

Refer to the Readme file for information pertaining to the above requirements, and consult it prior to proceeding with the PSU implementation. It contains important information including supported paths, implementation and configuration steps, the list of new features and defects fixed, and additional support information.

It is important to ensure that the requirements and supported paths for this patch are met as outlined in the Readme file.

The Readme file is available from the Patches & Updates download screen.

More Information:
  • Available Patch Sets and Patch Set Updates for Oracle Hyperion Smart View for Office (Doc ID 2220997.1)
  • Smart View Support Matrix and Compatibility FAQ (Doc ID 1923582.1)

 

ChitChat for OBIEE – Now Available as Open Source!

ChitChat is the Rittman Mead commentary tool for OBIEE. ChitChat enhances the BI experience by bringing conversational capabilities into the BI dashboard, increasing ease of use and fitting seamlessly into current workflows. From tracking the history behind analytical results to commenting on specific reports, ChitChat provides a multi-tiered platform built into the BI dashboard that creates a more collaborative and dynamic environment for discussion.

Today we're pleased to announce that ChitChat has been released as open source! You can find the GitHub repository here: https://github.com/RittmanMead/ChitChat

Highlights of the features that ChitChat provides include:

  • Annotate - ChitChat's multi-tiered annotation capabilities allow BI users to leave comments where they belong, at the source of the conversation inside the BI ecosystem.

  • Document - ChitChat introduces the ability to include documentation inside your BI environment for when you need more than a comment. Keeping key materials inside the dashboard gives the right people access to key information without searching.

  • Share - ChitChat lets you bring attention to important information on the dashboard using the channel or workflow manager you prefer.

  • Verified Compatibility - ChitChat has been tested against popular browsers, operating systems, and database platforms for maximum compatibility.

Getting Started

In order to use ChitChat you will need OBIEE 11.1.1.7.x, 11.1.1.9.x or 12.2.1.x.

First, download the application and unzip it to a convenient location on the OBIEE server, such as a home directory or the desktop.

See the Installation Guide for full detail on how to install ChitChat.

Database Setup

Build the required database tables using the installer:

cd /home/federico/ChitChatInstaller  
java -jar SocializeInstaller.jar -Method:BuildDatabase -DatabasePath:/app/oracle/oradata/ORCLDB/ORCLPDB1/ -JDBC:"jdbc:oracle:thin:@192.168.0.2:1521/ORCLPDB1" -DatabaseUser:"sys as sysdba" -DatabasePassword:password -NewDBUserPassword:password1  

The installer will create a new user (RMREP) and the tables required for the application to operate correctly. The -DatabasePath flag tells the installer where to place the datafiles for ChitChat on your database server. -JDBC indicates which JDBC driver to use, followed by a colon and the JDBC string to connect to your database. -DatabaseUser specifies the user to access the database with. -DatabasePassword specifies the password for that user. -NewDBUserPassword sets the password for the new user (RMREP) being created.

WebLogic Data Source Setup

Add a Data Source object to WebLogic using WLST:

cd /home/federico/ChitChatInstaller/jndiInstaller  
$ORACLE_HOME/oracle_common/common/bin/wlst.sh ./create-ds.py

To use this script, modify the ds.properties file using the method of your choice. The following parameters must be updated to reflect your installation: domain.name, admin.url, admin.userName, admin.password, datasource.target, datasource.url and datasource.password.
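As a sketch, a populated ds.properties might look like the following. The property names are the ones listed above; every value shown here is a placeholder to be replaced with your own environment details:

```properties
# ds.properties - illustrative values only, replace with your own
domain.name=bi_domain
admin.url=t3://localhost:7001
admin.userName=weblogic
admin.password=welcome1
datasource.target=bi_server1
datasource.url=jdbc:oracle:thin:@192.168.0.2:1521/ORCLPDB1
datasource.password=password1
```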

Deploying the Application on WebLogic

Deploy the application to WebLogic using WLST:

cd /home/federico/ChitChatInstaller  
$ORACLE_HOME/oracle_common/common/bin/wlst.sh ./deploySocialize.py

To use this script, modify the deploySocialize.py file using the method of your choice. The first line must be updated with the username, password, and URL used to connect to your WebLogic Server instance. The second parameter of the deploy command must be updated to reflect your ChitChat location.

Configuring the Application

ChitChat requires several configuration parameters to allow the application to operate successfully. To change the configuration, you must log in to the database schema as the RMREP user and update the values manually in the APPLICATION_CONSTANT table.
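As a purely hypothetical sketch (the real constant names are documented in the Installation Guide; CHAT_SERVER_URL below is invented for illustration), updating one value as RMREP might look like:

```sql
-- Hypothetical constant name; consult the Installation Guide for the real ones
UPDATE application_constant
   SET constant_value = 'http://obiee-host:9704'
 WHERE constant_name  = 'CHAT_SERVER_URL';
COMMIT;
```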

See the Installation Guide for full detail on the available configuration and integration options.

Enabling the Application

To use ChitChat, you must add a small block of code to any dashboard (in a new column on the right side of the dashboard) where you want the application enabled:

<rm id="socializePageParams"  
user="@{biServer.variables['NQ_SESSION.USER']}"  
tab="@{dashboard.currentPage.name}"  
page="@{dashboard.name}">  
</rm>  
<script src="/Socialize/js/dashboard.js"></script>  

Congratulations! You have successfully installed the Rittman Mead commentary tool. To use the application to its fullest capabilities, please refer to the User Guide.

Problems?

Please raise any issues on the GitHub issue tracker. This is open source, so bear in mind that it's no-one's "job" to maintain the code - it's open to the community to use, benefit from, and maintain.

If you'd like specific help with an implementation, Rittman Mead would be delighted to assist - please do get in touch with Jon Mead or DM us on Twitter @rittmanmead to get access to our Slack channel for support about ChitChat.

Please contact us on the same channels to request a demo.

Hyperion Financial Close Management and Tax Governance 11.1.2.4.251 is Available

The following Patch Set Update (PSU) has been released for Hyperion Financial Close Management (FCM) 11.1.2.4.x.

This PSU download is available from the My Oracle Support > Patches & Updates section.

Hyperion Financial Close Management and Tax Governance PSU 11.1.2.4.251
 Patch 27479194

 

CAUTION!! 

  • The 11.1.2.4.251 PSUs for Financial Close Management and Tax Governance are the same patch but have different Readme steps.
  • You are urged to carefully read and understand the following requirements. Failure to comply may result in applying a patch that can cause your application to malfunction, including interruption of service and/or loss of data.
  • Before Installing or applying this patch:
    • Verify that the issue described in the Readme matches the issue that you are encountering.  Review the Bug Number referenced in the Readme for additional information.
    • Verify that your system configuration (product version, patch level, and platform) exactly matches what is specified in the Readme.

Important:

Prior to proceeding with this PSU implementation, refer to the Readme file for important information. In addition to the details of new features and the full list of defects fixed, the Readme file contains important support information including prerequisites, installation details for applying the patch, post-installation instructions, and tips & troubleshooting information.

It is important to verify that the requirements and supported paths for this patch are met as outlined in the Readme file.

The Readme file is available from the Patches & Updates download screen.

Supported Paths to this Patch

You can apply this patch to the following releases:

  • 11.1.2.4.000
  • 11.1.2.4.100 PSU
  • 11.1.2.4.101 PSU
  • 11.1.2.4.102 PSU
  • 11.1.2.4.103 PSU
  • 11.1.2.4.200 PSU
  • 11.1.2.4.201 PSU
  • 11.1.2.4.250 PSU
To share your experience about installing this patch ...
  • In the MOS > Patches & Updates screen for FCM Patch 27479194, click "Start a Discussion" or "Reply to Discussion" and submit your review.
  • Patch install reviews and other patch-related information are available within the My Oracle Support Communities. Visit the Oracle Hyperion EPM sub-space - Hyperion Patch Reviews

Have a question about FCM specifically ...

To locate the latest Patch Sets and Patch Set Updates for the EPM products, visit the My Oracle Support (MOS) Knowledge Article Oracle Hyperion Enterprise Performance Management Products, Doc ID 1400559.1.

 

Hyperion Planning Patch Set Update 11.1.2.4.008 is Available

Hyperion Product Management has announced the release of a Patch Set Update (PSU) for Oracle Hyperion Planning 11.1.2.4.x.

This PSU is available from the My Oracle Support | Patches & Updates section.

Hyperion Planning PSU 11.1.2.4.008
Patch 28103100

This is a patch set update (PSU). It replaces files in the existing installation and does not require a full installation. This is a standalone patch built on Planning 11.1.2.4.000.

This PSU can be applied to the Hyperion Planning releases:

  • Planning 11.1.2.4.000

  • Planning 11.1.2.4.001 (20817841)

  • Planning 11.1.2.4.002 (20937926)

  • Planning 11.1.2.4.003 (21317639)

  • Planning 11.1.2.4.004 (22186227)

  • Planning 11.1.2.4.005 (23009543)

  • Planning 11.1.2.4.006 (25345185)

  • Planning 11.1.2.4.007 (27027776)

Defects Fixed:

  • Refer to the readme for the full list

Prerequisites:

EPM System Patches:

Before applying this patch, install the patches listed below as described in the readme for each patch:

  • Oracle Hyperion Financial Data Quality Management, Enterprise Edition Patch Set Update (PSU): 11.1.2.4.200 (Patch 22452414) or later PSU.
  • Oracle Hyperion Essbase Server Patch Set Update (PSU): 11.1.2.4.018 (Patch 26007120).
  • Oracle Hyperion Essbase RTC Patch Set Update (PSU): 11.1.2.4.018 (Patch 26007112).
  • Oracle Calculation Manager Patch Set Update (PSU): 11.1.2.4.006 (Patch 22806363) or a later PSU.
  • Oracle Hyperion Financial Reporting Release 11.1.2.4.000 Patch Set Update (PSU): 11.1.2.4.006 (Patch 22462544) or a later PSU.
    • Please note: If the Oracle Hyperion 11.1.2.4.900 Patch Set Update (PSU) is applied, then no other Oracle Hyperion Financial Reporting or Oracle Hyperion Workspace patches released prior to 5/1/2018 should be applied. Doing so will effectively roll back Financial Reporting/Workspace to 11.1.2.4.700, which used Reporting & Analysis (R&A). In other words, the EPM system will no longer work.
  • Smart View 11.1.2.5.520+. Download the Smart View zip file from either OTN or MOS.

Other Required Patches:

  • Apply the ADF MLR Patch 25113405.
  • SU Patch [FMJJ]: WLS PATCH SET UPDATE 10.3.6.0.171017 (Patch 26519424)

Required User Rights

The user applying the patch should be the user who was set up to install and configure EPM System products. Required user privileges or rights:

  • Windows: Use the user account that has local administrator rights and was set up for installation and configuration. This user is an administrator and is the same for all EPM System products. Assign local policies if required by the product; typical assignments are: "Act as part of the operating system, Bypass traverse checking, Log on as a batch job, Log on as a service."
  • UNIX/Linux: Use the account that was used to install EPM System products and has Read, Write, and Execute permissions on $MIDDLEWARE_HOME. If you installed other Oracle products, the user who installed EPM System products must be in the same group as the user who installed the other Oracle products. OPatch is not intended to be run as the root user.

Be sure to review the Readme file prior to proceeding with this PSU implementation; it contains important information including prerequisites, details for applying the patch, troubleshooting FAQs, and additional support information.

The Readme file is available from the Patches & Updates download screen.

Share your experience about installing this patch ...

In the MOS | Patches & Updates screen for Oracle Hyperion Planning Patch 28103100, click the "Start a Discussion" or "Reply to Discussion" and submit your review.

Patch install reviews and other patch-related information are available within the My Oracle Support Communities. Visit the Oracle Hyperion EPM sub-space:

Hyperion Patch Reviews

For questions specific to Hyperion Planning ...

The My Oracle Support Community "Hyperion Planning Products" is the ideal first stop for product-specific answers:

 Hyperion Planning Products

 

Real-time Sailing Yacht Performance – Kafka (Part 2)

In the last two blogs, Getting Started (Part 1) and Stepping Back a Bit (Part 1.1), I looked at what data I could source from the boat's instrumentation and introduced some new hardware to the boat to support the analysis.

Just to recap, I am looking to create the yacht's polars with a view to improving our knowledge of her abilities (whether we can use this to improve our race performance is another matter).

Polars give us a plot of the boat's speed for a given true wind speed and angle. This, in turn, tells us the optimal speed the boat could achieve at any particular wind angle and wind speed.


In the first blog I wrote a reader in Python that takes messages from a TCP/IP feed and writes the data to a file. The reader validates each message using a hash key (see Getting Started (Part 1)). I'm also converting valid messages into JSON format so that I can push meaningful structured data downstream. In this blog, I'll cover the architecture and considerations around the setup of Kafka for this use case. I will not cover the installation of each component, as a lot has already been written in this area. (We have some internal IP to help with configuration.) I'll also discuss the process I went through to get the data displayed in real time in a Grafana dashboard.

Introducing Kafka

I have introduced Kafka into the architecture as a next step.

Why Kafka?

I would like to be able to stream this data in real time and don't want to build my own batch mechanism or create a publish/subscribe model. With Kafka I don't need to check that messages have been successfully received, and if there is a failure while consuming messages the consumers will keep track of what has been consumed. If a consumer fails it can be restarted and it will pick up where it left off (the consumer offset is stored in Kafka as a topic). In the future, I could scale out the platform and introduce some resilience through clustering and replication (this shouldn't be required for a while). Kafka is therefore saving me a lot of manual engineering and will support future growth (should I come into money and be able to afford more sensors for the boat).

High level architecture

Let's look at the high-level components and how they fit together. Firstly, the instruments transmit over wireless TCP/IP, and these messages are read using the Python reader I wrote earlier in the year.

I have enhanced that Python code to read and translate the messages and, instead of writing to a file, stream the JSON messages to a topic in Kafka.

Once the messages are in Kafka I use Kafka Connect to stream the data into InfluxDB. The messages are written to topic-specific measurements (tables in InfluxDB).

Grafana is used to display incoming messages in real time.

Kafka components

I am running the application on a MacBook Pro: basically a single-node instance with ZooKeeper, a Kafka broker, and a Kafka Connect worker. This is the minimum setup, with very little resilience.


In summary

ZooKeeper is an open-source server that enables distributed coordination of configuration information. In the Kafka architecture ZooKeeper stores metadata about brokers, topics, partitions and their locations. ZooKeeper is configured in zookeeper.properties.

Kafka broker is a single Kafka server.

"The broker receives messages from producers, assigns offsets to them, and commits the messages to storage on disk. It also services consumers, responding to fetch requests for partitions and responding with the messages that have been committed to disk." 1

The broker is configured in server.properties. In this setup I have set auto.create.topics.enable=false. Setting this to false gives me control over the environment: as the name suggests, it disables the automatic creation of topics, which could otherwise lead to confusion.
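For reference, a minimal single-node server.properties along these lines could contain the following (all values are illustrative, matching the localhost setup used elsewhere in this post):

```properties
# server.properties - minimal single-node sketch, values illustrative
broker.id=0
listeners=PLAINTEXT://localhost:9092
log.dirs=/tmp/kafka-logs
zookeeper.connect=localhost:2181
# Fail fast on mistyped topic names instead of silently creating empty topics
auto.create.topics.enable=false
```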

The Kafka Connect worker allows us to take advantage of predefined connectors that enable writing messages from Kafka to known external datastores. The worker is a wrapper around a Kafka consumer; a consumer reads messages from a topic partition using offsets, which keep track of what has been read by a particular consumer or consumer group. (Kafka Connect workers can also write to Kafka from datastores, but I am not using that functionality in this instance.) The Connect worker is configured in connect-distributed.properties; I have defined the location of the plugins in this configuration file. Connector definitions determine how to write to an external data source.
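As an illustration, the worker properties most relevant to this setup are sketched below; the plugin.path value is a placeholder, and your own file will differ:

```properties
# connect-distributed.properties - single-node sketch, placeholder values
bootstrap.servers=localhost:9092
group.id=connect-cluster
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
# Directory where Connect scans for plugins such as the Stream Reactor InfluxDB sink
plugin.path=/opt/kafka/plugins
```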

Producer to InfluxDB

I use kafka-python to stream the messages into Kafka. Within kafka-python there is a KafkaProducer that is intended to work in a similar way to the official Java client.

I have created a producer for each message type (parameterised code). Although each producer reads the entire stream from the TCP/IP port, it only processes its assigned message type (wind or speed), increasing parallelism and therefore throughput.

  from kafka import KafkaProducer  # kafka-python
  import json

  # value_serializer converts each message dict to UTF-8 JSON bytes on send
  producer = KafkaProducer(bootstrap_servers='localhost:9092',
                           value_serializer=lambda v: json.dumps(v).encode('utf-8'))
  producer.send(topic, message)  # send the dict itself; the serializer JSON-encodes it

I have created a topic per message type, each with a single partition. Using a single partition per topic guarantees I will consume messages in the order they arrive. There are ways to increase the number of partitions and still maintain read order, but for this use case a topic per message type seemed to make sense, and the throughput is well enough optimised for the number of messages I am trying to process.

kafka-topics --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic wind-json

kafka-topics --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic speed-json

kafka-topics --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic gps-json 

When defining a topic you specify the replication-factor and the number of partitions.

The topic-level configuration is replication.factor. At the broker level, you control the default.replication.factor for automatically created topics. 1 (I have turned off the automatic creation of topics).

The messages are consumed using Stream Reactor, which has an InfluxDB sink and writes directly to the measurements within a performance database I have created. The following parameters, showing the topics and insert mechanism, are configured in performance.influxdb-sink.properties.

topics=wind-json,speed-json,gps-json

connect.influx.kcql=INSERT INTO wind SELECT * FROM wind-json WITHTIMESTAMP sys_time();INSERT INTO speed SELECT * FROM speed-json WITHTIMESTAMP sys_time();INSERT INTO gps SELECT * FROM gps-json WITHTIMESTAMP sys_time()

The following diagram shows the detail from producer to InfluxDB.

If we now run the producers we get data streaming through the platform.

Producer Python log showing JSON formatted messages:

The status of the consumers shows minor lag reading from two topics; the describe output also shows the current offsets for each consumer task and the partitions being consumed (if we had a cluster it would show multiple hosts):

Inspecting the InfluxDB measurements:

When inserting into a measurement in InfluxDB, the measurement is created automatically if it does not exist. The datatypes of the fields are determined from the JSON object being inserted. I needed to adjust the creation of the JSON message to cast the values to floats, otherwise I ended up with the wrong types, which caused reporting issues in Grafana. This would be a good case for using Avro and the Schema Registry to manage these definitions.
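The cast itself is simple to sketch in Python (the field names below are made up for illustration; the real messages carry the instrument readings): before handing a parsed message to the producer, force every numeric-looking value to a float so InfluxDB infers float fields on the first insert.

```python
import json

def cast_numeric_fields(msg):
    """Cast numeric-looking values to float so InfluxDB infers float fields.

    Without this, a first message like {"speed": "6"} would create an
    integer or string field, and later float values would then mismatch.
    """
    out = {}
    for key, value in msg.items():
        try:
            out[key] = float(value)
        except (TypeError, ValueError):
            out[key] = value  # leave non-numeric values (ids, labels) untouched
    return out

# Hypothetical wind message with string values straight from the parser
raw = {"true_wind_speed": "12", "true_wind_angle": "045", "source": "wind"}
clean = cast_numeric_fields(raw)
json_str = json.dumps(clean)
```

Using Avro with the Schema Registry, as mentioned above, would make these types explicit instead of relying on a cast at the producer.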

The following gif shows Grafana displaying some of the wind and speed measurements using a D3 Gauge plugin with the producers running to the right of the dials.

Next Steps

I'm now ready to do some real-life testing on our next sailing passage.

In the next blog, I will look at making the setup more resilient to failure and at how to monitor and automatically recover from some of these failures. I will also introduce the WorldMap panel to Grafana so I can plot the locations where the readings were taken and overlay tidal data.

References