Is a Hybrid Oracle EPM Cloud Strategy Right for You?
These days, Oracle is undoubtedly a cloud-first company: new features and functionality are rolled out to the cloud before they reach their on-premises equivalent. Yet for various reasons—whether it be simplicity, data security, or just organizational inertia—many Oracle customers still maintain one or more of their enterprise performance management (EPM) applications on-premises.
For these organizations, a hybrid Oracle EPM cloud strategy, keeping some Oracle EPM applications on-premises while moving others to the cloud, could be the perfect fit. The benefits of a hybrid Oracle cloud strategy include:
- Cutting IT costs for some or all applications by switching from a capital expenditure (CAPEX) to an operational expense (OPEX) model.
- Lowering business risk by taking advantage of cloud data backups and business continuity features.
- Maintaining organizational stability by keeping business-critical applications on-premises until the time is right for a cloud migration.
So, given the competing benefits of cloud and on-premises, why not try to straddle the divide by choosing a hybrid Oracle cloud strategy? One of Datavail’s recent clients, a global industrial safety company operating in 20 countries around the world, was faced with exactly this question when they needed to upgrade their Hyperion EPM 11.1.2.3 deployment.
With the end of support (EOS) date for 11.1.x arriving in December 2021, the client knew they had to upgrade to 11.2 to remain in compliance and benefit from new features and bug fixes. However, whether the client should perform an Oracle EPM migration to the cloud or stay on-premises was still an open question:
- Oracle has guaranteed that it will continue to offer support for Hyperion EPM 11.2 on-premises through at least 2030, making this a safe long-term option for another decade.
- However, the client had already completed a cloud migration to Oracle Planning and Budgeting Cloud Service (PBCS) and could realize greater efficiencies by moving EPM to the Oracle cloud as well.
Ultimately, the right advice for most companies will be to move multiple Oracle applications to the cloud—especially since purchasing EPM Cloud gives you access to all of them at once. However, you might also wish to stagger your migration projects, moving some of them before others as a trial run.
Thinking about a hybrid Oracle EPM cloud strategy for yourself? Find out how it worked for one of our clients by reading our case study: Industrial Safety Company Stops Outages by Moving to Oracle EPM Cloud.
The post Is a Hybrid Oracle EPM Cloud Strategy Right for You? appeared first on Datavail.
Exploring MariaDB’s Storage Engine Options
MariaDB is a flexible, modern, open-source relational database capable of turning data into structured information. It supports many types of workloads in a single database platform and offers a pluggable storage architecture for flexibility and optimization.
You can set up storage engines on a per-database instance or per-table basis. Here are some of the storage engines you can leverage in MariaDB for your development projects.
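Engine selection is just a clause in ordinary DDL. A minimal sketch (the table and column names here are illustrative):

```sql
-- Choose an engine explicitly when creating a table
CREATE TABLE orders (
    id INT PRIMARY KEY,
    total DECIMAL(10,2)
) ENGINE=InnoDB;

-- Switch an existing table to a different engine
ALTER TABLE orders ENGINE=Aria;

-- Set the default engine for the instance
SET GLOBAL default_storage_engine = InnoDB;
```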
InnoDB
Until MariaDB 10.1, the default storage engine was XtraDB, a performance-enhanced fork of InnoDB; starting with MariaDB 10.2, InnoDB itself is the default. When you're looking for a general-purpose engine for your transactional workloads, or you're not sure which one to use, InnoDB is a solid choice. It works best in a mixed read/write environment and offers compression and encryption. You also need InnoDB if you want to set up multi-master clustering with synchronous replication (Galera Cluster).
CONNECT
The CONNECT handler was introduced in MariaDB 10.0. Its strength is that it can access data from multiple places on a server as if it were a centralized database. Note that CONNECT doesn't use locking: data files are opened and closed for each query.
You may want to use CONNECT for importing and exporting data from a MariaDB database, and for all types of Business Intelligence applications. However, CONNECT isn’t appropriate for transactional applications.
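As a sketch of the import/export use case, CONNECT can expose an external CSV file as a queryable table (the file path and columns are assumptions for illustration):

```sql
-- Map an existing CSV file to a table via the CONNECT handler
CREATE TABLE sales_import (
    region VARCHAR(32),
    amount DOUBLE
) ENGINE=CONNECT
  TABLE_TYPE=CSV
  FILE_NAME='/var/data/sales.csv'
  HEADER=1 SEP_CHAR=',';

-- Query the flat file with plain SQL
SELECT region, SUM(amount) FROM sales_import GROUP BY region;
```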
S3
The S3 storage engine was introduced in MariaDB 10.5.4. S3 is read-only and can be used to archive MariaDB tables in Amazon S3. You can also use it with any of the many third-party public or private cloud systems that implement the S3 API, while still having the data accessible for reading in MariaDB.
Typically, you would use S3 when you have tables that are almost inactive, but you still need to maintain them. Using S3 allows you to move that type of table to an archiving service, using an S3 API. S3 compatible storage is much less costly than other alternatives, and many implementations provide reliable long-term storage.
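Once the S3 engine is enabled and credentials are configured on the server, archiving a cold table is a one-line operation. A sketch with illustrative names:

```sql
-- Server configuration (my.cnf), illustrative values:
--   s3_access_key = ...
--   s3_secret_key = ...
--   s3_bucket     = my-archive-bucket

-- Move a rarely touched table to S3; it becomes read-only
ALTER TABLE orders_2015 ENGINE=S3;

-- The data remains queryable in place
SELECT COUNT(*) FROM orders_2015;

-- Bring it back if it is needed for writes again
ALTER TABLE orders_2015 ENGINE=InnoDB;
```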
MyRocks
This storage engine comes from Facebook's developers and is optimized for space and write-intensive workloads, with excellent compression. It is built on RocksDB, Facebook's performance-focused fork of Google's LevelDB. On servers with SSDs and multi-core processors, you'll see a good boost with MyRocks. Other advantages include:
- It offers roughly 2x better compression than InnoDB, meaning greater space efficiency.
- It has up to 10x less write amplification than InnoDB, meaning greater write efficiency.
- It avoids all compaction overheads when faster data loading is enabled because it writes data directly to the bottommost level.
- It offers faster replication because there are no random reads for updating secondary keys, except for unique indexes.
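Using MyRocks is again just an engine choice, once the plugin is loaded. A sketch (the table definition is illustrative):

```sql
-- Load the MyRocks plugin once (package names vary by distribution)
INSTALL SONAME 'ha_rocksdb';

-- A write-intensive event log on MyRocks
CREATE TABLE event_log (
    id BIGINT AUTO_INCREMENT PRIMARY KEY,
    payload JSON,
    created_at DATETIME DEFAULT CURRENT_TIMESTAMP
) ENGINE=ROCKSDB;
```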
ColumnStore
ColumnStore stores data in a columnar format, making it a natural home for analytics workloads. This storage engine keeps the data store separate from the database, which allows it to be distributed across multiple servers. That architecture supports real-time ad hoc queries, so you can get insights from your data faster. ColumnStore handles hundreds of billions of rows without requiring indexes or a snowflake schema. Other characteristics include:
- It is designed specifically to handle analytical workloads.
- Data is written by column rather than row and is automatically partitioned, therefore no indexes are necessary.
- It can be used as the analytical storage engine for HTAP.
- It is easily scalable.
- It supports multiple connectors and data adapters to allow the use of commonly used business intelligence tools.
- Network latency has only a small impact on Enterprise ColumnStore.
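Creating a ColumnStore table looks like any other engine choice; note the absence of indexes and keys, which the engine does not need (the schema is illustrative):

```sql
-- An analytics fact table on ColumnStore: no indexes, no keys
CREATE TABLE page_views (
    view_date   DATE,
    url         VARCHAR(255),
    visitor_id  BIGINT,
    duration_ms INT
) ENGINE=ColumnStore;

-- Analytical scans benefit from the columnar layout
SELECT view_date, COUNT(*) AS views
FROM page_views
GROUP BY view_date
ORDER BY view_date;
```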
Xpand
When scaling is your focus, Xpand is your top choice. This storage engine allows you to distribute your tables and elastically scale them in a high availability environment. Its performance keeps up with its scale, supporting millions of transactions per second. One of the biggest advantages of Xpand is that you can scale without bringing in a specialized database solution. Other advantages include:
- It provides distributed SQL capabilities and is ACID compliant.
- It is highly available: multiple replicas of each data slice are maintained, and it is zone aware, allowing recovery from node or even zone failures without losing data.
- Its rebalancer maintains even data distribution; after a node or zone failure, it creates new replicas of the affected slices and redistributes the data.
- Writes scale because every node can write concurrently and in parallel, while all nodes see the latest data.
- Reads scale out as well: each node can serve reads, reads are lockless, writes don't block reads, and additional nodes can be added to increase capacity.
Aria
Do you have non-transactional workloads that are read-heavy and need a crash-safe option? Aria fills these requirements nicely. It’s a performance-focused storage engine for system tables and delivers the reliability that this data needs. In addition, Aria offers a number of advantages over MyISAM, including:
- Aria can replay almost everything from the log, so a backup can be made simply by copying the log; exceptions include batch INSERT into an empty table and ALTER TABLE.
- Most parts of the engine can be unit tested.
- After a crash, changes roll back to the start of the statement or the last LOCK TABLES statement.
- It allows multiple concurrent inserters into the same table.
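A crash-safe Aria table is a matter of two table options. A sketch with an illustrative schema:

```sql
-- A crash-safe, non-transactional table for read-heavy reference data
CREATE TABLE lookup_codes (
    code        CHAR(8) PRIMARY KEY,
    description VARCHAR(128)
) ENGINE=Aria
  TRANSACTIONAL=1   -- crash-safe, logged mode
  PAGE_CHECKSUM=1;  -- detect page corruption
```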
Spider
You can use a MongoDB-like sharding architecture with this virtual storage engine. Set up list, range, and hash schemas and create partitions you can spread over multiple databases. In most cases, you use Spider alongside another storage engine. It pairs nicely with InnoDB and MyRocks for scaling their respective workloads. You may also want to use Spider for the following applications:
- Sharding a big table.
- Tracking data for multiple locations or branches of a company.
- Querying tables on multiple MariaDB servers.
- Migrating tables between servers.
- Running queries on both MariaDB and remote tables on non-MariaDB databases.
- Migrating tables from a non-MariaDB server to a MariaDB server.
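A minimal Spider sharding sketch: two backend servers are registered, then a table is hash-partitioned across them. The server names, hosts, credentials, and table layout are all assumptions for illustration:

```sql
-- Register the backend data nodes
CREATE SERVER backend1 FOREIGN DATA WRAPPER mysql
  OPTIONS (HOST '10.0.0.11', DATABASE 'app', USER 'spider', PASSWORD 'secret', PORT 3306);
CREATE SERVER backend2 FOREIGN DATA WRAPPER mysql
  OPTIONS (HOST '10.0.0.12', DATABASE 'app', USER 'spider', PASSWORD 'secret', PORT 3306);

-- A Spider table hash-partitioned across the two backends
CREATE TABLE customers (
    id   INT PRIMARY KEY,
    name VARCHAR(64)
) ENGINE=Spider
COMMENT='wrapper "mysql", table "customers"'
PARTITION BY HASH (id) (
    PARTITION p1 COMMENT='srv "backend1"',
    PARTITION p2 COMMENT='srv "backend2"'
);
```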
All of MariaDB’s storage engines have their place, and the best part about this database is that you can pick and choose the combinations that work best for your workload. You don’t need to compromise on your requirements or have to use a different database technology. You get it all in one place with MariaDB, along with all the other benefits it provides.
Read This Next
Going Open-Source: Making the Move to MariaDB from Oracle
Download our white paper to learn more about this powerful database technology, its features, and how to handle the migration process.
The Risk of Running Legacy Database Technology
End of Life comes for most technology, and your databases are no exception. When you're used to working with a particular version of your database, such as PostgreSQL 9.5, and have built your organization's applications around it, upgrading to a more modern edition may be resource-intensive. However, the risks of running legacy database technology are more costly once you consider all the implications.
Increased Vulnerability to Security Exploits
When your database technology no longer receives security patches, newly discovered vulnerabilities remain unpatched on your systems. Over time, you end up with a database that presents many attractive attack surfaces to a cybercriminal, who can exploit them directly for a data breach or use them as a stepping stone to other parts of your network.
Your IT security team will also face increased demands on their time, as they have to spend more resources monitoring your systems for unusual activity and signs of an attack. As cybercriminals develop new ways to hack the database, their approaches can become increasingly hard to detect, even with advanced threat detection and other solutions.
No More Bug Fixes
Are you running into bugs in your legacy database that make it difficult or impossible to use your applications? Unless you can solve the problem yourself, or someone else in that database's development community has a workaround, you could lose vital functionality.
Over time, you end up with a system that is highly inefficient. It could be prone to crashing or lack functionality required by changes in daily business operations. The unexpected downtime can throw schedules off-track and impact your revenue.
Difficulty in Hiring Specialists
The older a database version is, the harder it is to find specialists who are proficient in the system. IT recruiting is challenging enough when it comes to the latest database technology. When you want to bring in someone capable of working on a 5–10-year-old implementation of the database, you’re going to spend a lot of time looking for talent.
If you’re unable to find someone with hands-on experience working with that particular database version, you’ll have to hire someone capable of learning it from the ground up. If the candidate is used to the latest features of that database, then they have to get used to doing without many important capabilities.
Decreased Business Agility
What does your time to market look like when it comes to solutions powered by an old database? Compare this pace to the latest database versions, especially if your competitors have upgraded. A lack of agility results in many types of opportunity costs, which have an indirect impact on your bottom line.
Some examples of the opportunities you miss out on due to old database technology include:
- Inability to integrate with new solutions: As new technology launches and offers productivity increases, your old database may not be able to work with new data types, schemas, and other innovations.
- The competition launching products before your organization: The productivity decreases associated with solutions that are slow or crash due to outdated technology can put you behind your competition’s release schedule. When they’re first to market, they can better position themselves as the best choice in the market sector.
- Decreased interest among job applicants: If your organization is known for being behind on technology, you may lose out on talent that prefers more tech-forward companies. Your turnover rates may also be affected when internal systems are difficult to work with.
- Poor customer experience: The client-facing experience also suffers from old databases. Your customers may not be able to access the data that they need to make decisions, or they could become frustrated if their interactions are slow and poor quality.
Read This Next
You Can’t Put Off a PostgreSQL v9.5 Upgrade Anymore – End of Life is Here
Don’t let old databases drag your entire organization down. You can learn more about upgrading to PostgreSQL 13 in this white paper or reach out to us to discuss your database migration, modernization, and database upgrade opportunities.
3 Ways to Get Your Hyperion 11.2 Upgrade in the Fast Lane
With the 11.1.x end of support deadline coming up soon, many organizations are planning their Hyperion upgrade. Of course, upgrading comes with a set of challenges: executing pre- and post-upgrade performance testing, managing downtime (and keeping it minimal), and troubleshooting the inevitable issues that will crop up along the way.
But what if you could easily execute your upgrade with a partner who already has solutions to these problems?
Accelatis is Datavail’s enterprise-grade application performance management (APM) platform, designed from the ground up to work with Oracle Hyperion. In fact, Accelatis is the only APM tool purpose-built for Hyperion. In this blog post, we’ll review three ways that Datavail can deliver a fast-track 11.2 upgrade with the help of Accelatis.
1. Planning and Designing Your Upgrade
Datavail uses Accelatis right out of the gate to get the big picture of your Oracle environment. Who in the company is using Hyperion? How are they accessing it? What reports are they pulling? These are some of the most important questions to answer before planning your Hyperion upgrade—and they are exactly what Datavail uses Accelatis to find out.
From there, Accelatis collects data for at least four to eight weeks before the upgrade work begins. This gives you an essential baseline for the benchmarking and testing process, so you can be sure the upgrade hasn’t harmed system performance.
2. Testing Before, During and After the Upgrade
There are many forms and variations of performance testing, each one exposing a different window into your IT environment. Before, during, and after any Hyperion upgrade project, we use Accelatis early and often for testing the system’s performance. Although tools such as LoadRunner and Oracle Application Testing Suite can also be used for baseline testing, they come with their own issues:
- Oracle Application Testing Suite is entirely separate from Hyperion, and it’s a costly purchase with a questionable return on investment if you don’t already own it.
- Building appropriate tests in Micro Focus LoadRunner (formerly HP LoadRunner) for your Hyperion environment can take weeks—compared with just minutes for Accelatis—and can be an extremely tedious, labor-intensive manual process.
After baseline testing and comparison testing, Datavail uses Accelatis to perform load testing and break testing, assessing the scalability and resilience of your Hyperion environment.
Once your upgrade is complete, Accelatis continues to run performance testing scenarios on a regular basis so our team can work with you to detect and solve issues before they affect your users.
3. Troubleshooting Along the Way
Last but not least, Datavail also uses Accelatis for both short- and long-term troubleshooting over the course of a Hyperion upgrade. With the Accelatis remote agent installed on a machine, Datavail can analyze the Hyperion logs and compare them against the key performance indicators (KPIs) you’ve defined for the project. Accelatis also has a built-in log management feature that scans the software’s log messages, looking for and alerting on errors.
Read This Next
3 Ways Datavail’s IP Can Jumpstart Your Hyperion Upgrade
When upgrading to Hyperion 11.2, you want to be able to take advantage of the new features and functionality as soon as possible; and upgrade challenges can quickly turn a hopeful project into a nightmare. For more information on how Datavail can help you make a smooth transition to 11.2, download our white paper.
PaaS, IaaS, and SaaS: Making the Right Choice for Your Oracle Cloud Migration
Cloud computing has gone from being a cutting-edge technology to a well-established best practice for businesses of all sizes and industries. According to Flexera’s 2020 State of the Cloud Report, 98 percent of organizations are now using at least one public or private cloud.
Beneath the umbrella of “the cloud,” however, there are several distinct service models. The term XaaS (“anything as a service”) is shorthand for the proliferation of cloud services in recent years—everything from databases and artificial intelligence to unified communications and disaster recovery is now available from your choice of cloud provider.
In this article, we’ll introduce the three “pillars” of cloud computing—SaaS, IaaS, and PaaS—and discuss how you might use any or all of them when migrating to the Oracle cloud.
Oracle SaaS (Software as a Service)
SaaS (“software as a service”) refers to software applications that are hosted on a remote server and provisioned to customers over the Internet. Oracle’s SaaS cloud offerings include:
- Oracle EPM Cloud
- Oracle ERP—Financials Cloud
- Oracle HCM Cloud
- Oracle Analytics Cloud
- Oracle SCM and Manufacturing Cloud
- Oracle Data Cloud
Using SaaS is best in the following situations:
- Your software needs to prioritize scalability and accessibility from anywhere at any time.
- Your processes are standardized across the enterprise or can be changed to fit the application.
- An off-the-shelf product straight from the vendor can fit your business requirements.
- You prefer a monthly or an annual payment scheme to large one-time capital expenses.
Oracle IaaS (Infrastructure as a Service)
IaaS (“infrastructure as a service”) refers to physical IT infrastructure (i.e. compute, network, storage, etc.) that is remotely provisioned and managed over the Internet. Oracle’s IaaS offering is Oracle Cloud Infrastructure (OCI), which includes everything from bare-metal servers and virtual machines (VMs) to more advanced offerings like GPUs (graphics processing units) and high-performance computing.
Using IaaS is best in the following situations:
- You have the necessary expertise in-house to remotely manage your IT infrastructure.
- You want to save money by only paying for the computing resources you actually use.
- Your IT environment is flexible and workloads are variable.
- You need to be able to quickly provision IT resources while shutting down excess capacity when demand is low.
Oracle PaaS (Platform as a Service)
Last but not least, PaaS (“platform as a service”) refers to a complete cloud platform for software development and deployment. PaaS includes the essential infrastructure and middleware as well as technologies such as artificial intelligence, the Internet of Things (IoT), containerization, and big data analytics.
Oracle’s PaaS offering is also incorporated within OCI and makes use of both Oracle and open-source technologies. Oracle PaaS includes functionality for application development, content management, and business analytics, among others.
Unlike SaaS and IaaS, using PaaS isn’t a choice that you can make in isolation. Each SaaS or IaaS platform usually works best with one or two PaaS options, which considerably narrows your choices.
Conclusion
SaaS, IaaS, and PaaS: although each one falls under the umbrella of “cloud computing,” each one has a very different role to play for your organization’s IT environment. If you need more advice on which one is right for you, reach out to a knowledgeable, qualified Oracle cloud partner like Datavail.
Discussing the various Oracle cloud options is especially timely right now—the end of support (EOS) date for Oracle Hyperion EPM 11.1.2.4 on-premises is arriving in December 2021. EPM 11.1 users who don’t upgrade will fall out of compliance, exposing themselves to security vulnerabilities and missing out on new features and functionality.
Read This Next
4 Ways Datavail Prepares Companies for EPM 11.1 EOS
Considering an Oracle EPM cloud migration? As an Oracle Platinum Partner with 17 different specializations, Datavail can assist with every stage of the process, from roadmaps and strategic planning to post-launch support and maintenance. Read our white paper to learn more about the upcoming Hyperion deadline and how Datavail can help.