How Can You Control Scope Creep?
Application development projects are vulnerable to many causes of failure, such as underestimated resources and inaccurate time estimates. One thing that can easily cause an AppDev project to fail is scope creep. You can control scope creep if you know how to recognize it, understand what causes it, and have plans for managing or avoiding it.
How to Recognize Scope Creep
Virtually every AppDev project starts out with a definition of the project goal, timing, budget, and deliverables. You can recognize scope creep when a new requirement is added to a project that wasn’t covered in the original project description.
A change that will have a big impact on scope is relatively easy to recognize, although you still need to manage it. It's the accumulation of seemingly harmless small changes that more often pushes a project out of scope.
Who Causes Scope Creep?
Scope creep can be initiated in any number of ways and comes from both inside and outside of the AppDev project team.
People on the Team
If your team members aren’t all on the same page, you’re leaving yourself open to scope creep. A developer who isn’t crystal clear on what the requirements are can inadvertently go in a direction that doesn’t lead to the desired deliverables as they’ve been defined. Sometimes even a small deviation can cause trouble with timing and budget.
As developers work on a project, they start to feel invested in the outcome. If one of your developers decides that the project needs a slightly different approach, or just a small alteration in how the user interface works, you’ve got a problem.
Coming up with a better mousetrap isn’t a bad thing, but it’s critical that your team members can identify an idea that brings the project out of scope. The team members need to discuss the idea with the team leader to identify the effect it will have before going off on a tangent. One person going out of scope can get an entire team second-guessing what is in scope and what is not.
Project Stakeholders
If you do a good job of stakeholder management, the odds of stakeholders increasing the project's scope are much lower. But even when stakeholders seem to be on board, a senior stakeholder with an unspoken agenda can cause havoc.
Customers/Clients/Product Owners
It doesn’t matter what you call them, there’s always someone you’re responsible to in terms of the overall project. Scope creep caused by this person is usually easy to spot because you’re looking for the issue to arise.
This is the person you got approval from, the one who reviewed all of the project descriptions and should understand the scope of the project. But, it can still happen that this person will ask for revisions or changes without realizing the effect on timing and budget.
Users
Typically, you need to test the software with your users. User feedback can be important to make sure the final product meets your customers’ requirements. But, that feedback can also be fraught with out-of-scope suggestions. You’ll need to be careful about identifying revisions that you can do and stay within scope and suggestions that are truly going outside the scope of the project.
Others
In many situations, there are others who can affect your project. During project definition, you must account for any input coming from or output going to another entity. Any unrecognized dependency can easily cause scope creep.
How to Control Scope Creep
Controlling scope creep works differently depending on the application development methodology you use.
Controlling Scope Creep Using the Waterfall Methodology
The waterfall methodology isn’t structured to be change friendly. The project is defined at the start and specific plans are laid for the life of the project. This methodology is linear and sequential, where each phase is completed and reviewed before the next phase begins. As a result, ad hoc changes are discouraged. There is just too much work to change the entire project plan in mid-stream.
If a product owner wants to make a major change, the project team can respond with alternatives such as adding the change in a future release. If it’s not possible to reach a negotiated agreement, then it’s critical that you define the impact on timing and budget and obtain agreement from the product owner to expand both.
Controlling Scope Creep Using the Agile Methodology
The agile methodology is structured to avoid scope creep wherever possible. Since agile approaches AppDev in small increments and focuses on collaboration in an iterative process, any “changes” that appear before the team has addressed how to build a particular piece of the project aren’t considered changes at all. But, that’s assuming that the change won’t create more work than was initially planned.
Agile teams typically add a 20% contingency to project plans because changes do happen as business needs change. Agile plans account for that fact by giving teams the flexibility to adjust to changes rather than discouraging them.
Some people think a project is out of scope if the final product differs from the original project definition. Using agile, that’s not an issue. Agile focuses on satisfying the customer. If the final product does that, even if it differs from the original description, the agile team has completed a successful project.
Naturally, there are times when a change can cause scope creep in an agile project. For example, if a software feature has been designed, tested, and delivered, and then someone wants to eliminate it in favor of a different feature, that's definitely scope creep. Another example would be a change that swaps in a big piece of work for something small: you can't add 100 hours of work and eliminate only 10 hours of work without scope creep.
If a change will cause an increase in the budget or timeline, then you’re back to negotiating with the product owner to approve more resources. Scope creep is typically easier to control in an agile project. It all goes back to the communication that agile stresses with everyone associated with a project.
From the start of an agile project, anyone who might request a change should understand the difference between when the response will be, “No problem,” and when the term scope creep might waft into the discussion.
How to Avoid Scope Creep
Scope creep, as with most problems, is easier to avoid than fix. Here are some things that will help you avoid scope creep:
- Make sure you have a clear scope when the project starts
- Communicate that scope to everyone who may be affected by the project
- Establish close collaboration among the team members and everyone who may be affected by the project
- Prioritize features when the project starts—you’ll use that list when you’re trying to decide if a change should have precedence over an existing requirement
- Establish agreement on how the team will handle changes
- Strive for transparency—a new request doesn’t mean you’ve done something wrong, so be open to working with the requestor to help them understand the tradeoffs
Scope creep is just one of the issues that can cause an AppDev project to fail, but there are solutions to software development challenges. Read our recent whitepaper, “4 Reasons Why Application Development Projects Fail.” Not only will you learn more about the top four reasons why projects fail, you’ll also learn about seven solutions that can help you avoid the high cost of failed projects.
The post How Can You Control Scope Creep? appeared first on Datavail.
Is a Hybrid Oracle EPM Cloud Strategy Right for You?
These days, Oracle is undoubtedly a cloud-first company: new features and functionality are rolled out to the cloud before they reach their on-premises equivalent. Yet for various reasons—whether it be simplicity, data security, or just organizational inertia—many Oracle customers still maintain one or more of their enterprise performance management (EPM) applications on-premises.
In this case, a hybrid Oracle EPM cloud strategy, keeping some Oracle EPM applications on-premises while moving others to the cloud, could be the perfect fit for your organization. The benefits of a hybrid Oracle cloud strategy include:
- Cutting IT costs for some or all applications by switching to an operational expense (OPEX) model rather than a capital expenditure (CAPEX) model.
- Lowering business risk by taking advantage of cloud data backups and business continuity features.
- Maintaining organizational stability by keeping business-critical applications on-premises until the time is right for a cloud migration.
So, given the competing benefits of cloud and on-premises, why not try to straddle the divide by choosing a hybrid Oracle cloud strategy? One of Datavail’s recent clients, a global industrial safety company operating in 20 countries around the world, was faced with exactly this question when they needed to upgrade their Hyperion EPM 11.1.2.3 deployment.
With the end of support (EOS) date for 11.1.x arriving soon in December 2021, the client knew they had to upgrade to 11.2 to remain in compliance and enjoy new features and bug fixes. However, whether the client should perform an Oracle EPM migration to the cloud or stay on-premises was still an open question:
- Oracle has guaranteed that it will continue to offer support for Hyperion EPM 11.2 on-premises through at least 2030, making this a safe long-term option for another decade.
- However, the client had already completed a cloud migration to Oracle Planning and Budgeting Cloud Service (PBCS) and could realize greater efficiencies by moving EPM to the Oracle cloud as well.
Ultimately, the right advice for most companies will be to move multiple Oracle applications to the cloud—especially since purchasing EPM Cloud gives you access to all of them at once. However, you might also wish to stagger your migration projects, moving some of them before others as a trial run.
Thinking about a hybrid Oracle EPM cloud strategy for yourself? Find out how it worked for one of our clients by reading our case study: Industrial Safety Company Stops Outages by Moving to Oracle EPM Cloud.
Exploring MariaDB’s Storage Engine Options
MariaDB is a flexible, modern relational database that’s open source and is capable of turning data into structured information. It supports many types of workloads in a single database platform and offers pluggable storage architecture for flexibility and optimization purposes.
You can set up storage engines on a per-database instance or per-table basis. Here are some of the storage engines you can leverage in MariaDB for your development projects.
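As a rough sketch of what per-table selection looks like in practice, the helper below builds `CREATE TABLE` statements with an explicit `ENGINE` clause (the table and column definitions are hypothetical; the resulting DDL would be run through any MariaDB client):

```python
def create_table_ddl(table, columns, engine):
    """Build a CREATE TABLE statement pinned to a specific storage engine.

    `columns` maps column names to SQL type definitions; the resulting DDL
    can be executed through any MariaDB client or connector.
    """
    cols = ", ".join(f"{name} {sqltype}" for name, sqltype in columns.items())
    return f"CREATE TABLE {table} ({cols}) ENGINE={engine};"

# Hypothetical schema: transactional data on InnoDB, analytics on ColumnStore.
orders_ddl = create_table_ddl(
    "orders", {"id": "INT PRIMARY KEY", "total": "DECIMAL(10,2)"}, "InnoDB"
)
reports_ddl = create_table_ddl(
    "reports", {"day": "DATE", "metric": "DOUBLE"}, "ColumnStore"
)
```

The same pattern applies to every engine described below; only the `ENGINE=` value changes.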
InnoDB
Originally, MariaDB's default storage engine was XtraDB, a performance-enhanced fork of InnoDB. Starting with MariaDB 10.2, InnoDB is the default.
When you're looking for a general-purpose engine for your transactional workloads, or you're not sure which option to use, InnoDB is a solid choice. It works best in a mixed read/write environment and offers compression and encryption. You also need InnoDB if you want to set up multi-master clustering that supports synchronous replication.
CONNECT
The CONNECT handler was introduced in MariaDB 10.0. The strength of this handler is that it can access data from multiple places on a server as if it were a centralized database. In addition, CONNECT doesn't use locking; data files are opened and closed for each query.
You may want to use CONNECT for importing and exporting data from a MariaDB database, and for all types of Business Intelligence applications. However, CONNECT isn’t appropriate for transactional applications.
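As an illustration of the import/export use case, CONNECT can expose a server-side CSV file as a queryable table via `TABLE_TYPE=CSV`. A minimal sketch (the file path and columns are made up; check the CONNECT documentation for the full option list):

```python
def connect_csv_ddl(table, columns, file_name, header=True):
    """Build DDL for a CONNECT table backed by a CSV file on the server.

    Because CONNECT opens and closes the file for each query, this suits
    import/export and BI-style reads rather than transactional writes.
    """
    cols = ", ".join(f"{name} {sqltype}" for name, sqltype in columns.items())
    return (
        f"CREATE TABLE {table} ({cols}) ENGINE=CONNECT "
        f"TABLE_TYPE=CSV FILE_NAME='{file_name}' HEADER={int(header)};"
    )

# Hypothetical import table pointing at a server-side CSV file.
ddl = connect_csv_ddl(
    "sales_import", {"region": "CHAR(20)", "amount": "DOUBLE"}, "/data/sales.csv"
)
```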
S3
The S3 storage engine was introduced in MariaDB 10.5.4. S3 is read-only and can be used to archive MariaDB tables in Amazon S3. You can also use it with any of the many third-party public or private cloud systems that implement the S3 API, while still having the data accessible for reading in MariaDB.
Typically, you would use S3 when you have tables that are almost inactive, but you still need to maintain them. Using S3 allows you to move that type of table to an archiving service, using an S3 API. S3 compatible storage is much less costly than other alternatives, and many implementations provide reliable long-term storage.
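Converting a table to S3 is typically a single `ALTER TABLE … ENGINE=S3` statement, assuming the server is already configured with the bucket and credentials. The sketch below (the inactivity threshold and table metadata are hypothetical) picks near-inactive tables and emits those statements:

```python
from datetime import date, timedelta

def archive_candidates(tables, today, inactive_days=365):
    """Yield ALTER statements for tables untouched longer than the threshold.

    `tables` maps table names to their last-write date; converting a table
    to ENGINE=S3 makes it read-only and moves its data to S3 storage.
    """
    cutoff = today - timedelta(days=inactive_days)
    for name, last_write in sorted(tables.items()):
        if last_write <= cutoff:
            yield f"ALTER TABLE {name} ENGINE=S3;"

# Hypothetical last-write dates gathered from monitoring.
stmts = list(archive_candidates(
    {"orders_2019": date(2019, 12, 31), "orders_live": date(2021, 6, 1)},
    today=date(2021, 7, 1),
))
```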
MyRocks
This storage engine comes from Facebook's developers and is optimized for space and write-intensive workloads. Its compression is excellent, and it's built on RocksDB, Facebook's performance-focused key-value store that began as a fork of Google's LevelDB. When you have servers with SSDs and multi-core processors, you'll see a good boost with MyRocks. Other advantages include:
- It has 2x better compression than InnoDB, meaning you get greater space efficiency.
- It has 10x less write amplification compared to InnoDB, giving you greater write efficiency.
- It avoids all compaction overheads when faster data loading is enabled because it writes data directly to the bottommost level.
- It offers faster replication because there are no random reads for updating secondary keys, except for unique indexes.
ColumnStore
Put your analytics workloads into ColumnStore to get a columnar format. This storage engine keeps the data store separate from the database, which allows it to be distributed across multiple servers. This architecture supports real-time ad hoc queries so you can access insights from your data faster. The engine supports hundreds of billions of rows, and you don't need to use a snowflake schema or indexes. Other characteristics include:
- It is designed specifically to handle analytical workloads.
- Data is written by column rather than row and is automatically partitioned, therefore no indexes are necessary.
- It can be used as the analytical storage engine for hybrid transactional/analytical processing (HTAP).
- It is easily scalable.
- It supports multiple connectors and data adapters to allow the use of commonly used business intelligence tools.
- Network latency has only a small impact on Enterprise ColumnStore.
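To see why writing data by column rather than by row helps analytics, here's a toy sketch (not ColumnStore's actual storage code) that transposes row records into per-column vectors, the form in which a columnar engine scans and compresses data:

```python
def to_columnar(rows):
    """Transpose row-oriented records into per-column vectors.

    Columnar layout is why analytical scans touch only the columns they
    need, and why compression works well: values within one column tend
    to be similar.
    """
    if not rows:
        return {}
    return {key: [row[key] for row in rows] for key in rows[0]}

rows = [
    {"region": "east", "amount": 100},
    {"region": "west", "amount": 250},
]
cols = to_columnar(rows)
```

A query like `SUM(amount)` then reads only the `amount` vector instead of every full row.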
Xpand
When scaling is your focus, Xpand is your top choice. This storage engine allows you to distribute your tables and elastically scale them in a high availability environment. Its performance keeps up with its scale, supporting millions of transactions per second. One of the biggest advantages of Xpand is that you can scale without bringing in a specialized database solution. Other advantages include:
- It provides distributed SQL capabilities and is ACID compliant.
- It is highly available due to maintaining replicas of each slice, allowing it to recover from a node failure without losing data.
- It can maintain multiple replicas of each slice and is zone aware, allowing it to recover from multi-node failures or zone failures without losing data.
- Its rebalancer maintains data distribution, meaning that a node or zone failure causes the creation of new replicas for each slice, and the rebalancer then redistributes the data.
- It has write scaling: every node writes concurrently, writes are performed in parallel, and all nodes have the latest data.
- It scales out: each node can read and write, reads are lockless, writes don't block reads, and additional nodes can be added to increase capacity.
Aria
Do you have non-transactional workloads that are read-heavy and need a crash-safe option? Aria fills these requirements nicely. It’s a performance-focused storage engine for system tables and delivers the reliability that this data needs. In addition, Aria offers a number of advantages over MyISAM, including:
- Aria can replay almost everything from the log, so making a backup is easy by just copying the log. Exceptions include batch INSERT into an empty table and ALTER TABLE.
- You can do unit tests of most parts.
- If you experience a crash, changes will be rolled back to the start of a statement or the last LOCK TABLES statement.
- It allows multiple concurrent inserters into the same table.
Spider
You can use a MongoDB-like sharding architecture with this virtual storage engine. Set up list, range, and hash partitioning schemes and create partitions you can spread over multiple databases. In most cases, you use Spider alongside another storage engine. It pairs nicely with InnoDB and MyRocks for scaling their respective workloads. You may also want to use Spider for the following applications:
- Sharding a big table.
- Tracking data for multiple locations or branches of a company.
- Querying tables on multiple MariaDB servers.
- Migrating tables between servers.
- Running queries on both MariaDB and remote tables on non-MariaDB databases.
- Migrating tables from a non-MariaDB server to a MariaDB server.
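As a sketch of what sharding a big table can look like, the helper below builds a hash-partitioned Spider table whose partitions point at remote backend servers. The syntax follows Spider's documented COMMENT convention, but the table, key, and server names here are hypothetical:

```python
def spider_hash_ddl(table, key, backends):
    """Build DDL for a Spider table hash-sharded across backend servers.

    Each name in `backends` must already be registered on the Spider node
    with CREATE SERVER; the per-partition COMMENT tells Spider which
    backend holds that shard.
    """
    parts = ", ".join(
        f'PARTITION p{i} COMMENT = \'srv "{srv}"\''
        for i, srv in enumerate(backends)
    )
    return (
        f"CREATE TABLE {table} (id INT PRIMARY KEY, payload TEXT) "
        f'ENGINE=SPIDER COMMENT = \'wrapper "mysql", table "{table}"\' '
        f"PARTITION BY HASH ({key}) ({parts});"
    )

# Hypothetical two-shard layout over servers backend1 and backend2.
ddl = spider_hash_ddl("events", "id", ["backend1", "backend2"])
```

Queries against `events` on the Spider node are then routed transparently to whichever backend owns the matching hash partition.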
All of MariaDB’s storage engines have their place, and the best part about this database is that you can pick and choose the combinations that work best for your workload. You don’t need to compromise on your requirements or have to use a different database technology. You get it all in one place with MariaDB, along with all the other benefits it provides.
Read This Next
Going Open-Source: Making the Move to MariaDB from Oracle
Download our white paper to learn more about this powerful database technology, its features, and how to handle the migration process.
The Risk of Running Legacy Database Technology
End of Life comes for most technology, and your databases are no exception. When you’re used to working with a particular version of your database, such as PostgreSQL 9.5, and have built your organization’s applications around it, upgrading to more modern editions may be resource intensive. However, the risks of running legacy database technology are more costly when you consider all the implications.
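As a quick illustration of tracking this risk, the sketch below flags a server whose major version is past its community end-of-life date. The dates are assumptions for the example; verify them against the PostgreSQL versioning policy:

```python
from datetime import date

# Assumed community EOL dates; verify against the PostgreSQL versioning policy.
EOL_DATES = {
    "9.5": date(2021, 2, 11),
    "9.6": date(2021, 11, 11),
    "13": date(2025, 11, 13),
}

def is_past_eol(major_version, today):
    """Return True if the given major version no longer receives patches."""
    eol = EOL_DATES.get(major_version)
    if eol is None:
        raise ValueError(f"unknown version: {major_version}")
    return today > eol

# A PostgreSQL 9.5 server checked in early 2022 is already past end of life.
past = is_past_eol("9.5", today=date(2022, 1, 1))
```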
Increased Vulnerability to Security Exploits
When your database technology no longer receives security patches, newly discovered vulnerabilities remain unpatched on your systems. Over time, you end up with a database that presents many attractive attack surfaces to a cybercriminal. They can use this opportunity for a data breach or use the vulnerability as a stepping stone to access other parts of your network.
Your IT security team will face increased demands on their time, as they have to spend more resources monitoring your systems for unusual activity and signs of an attack. As cybercriminals develop new ways to hack the database, they could end up using approaches that are increasingly hard to detect, even with advanced threat detection and other solutions.
No More Bug Fixes
Are you running into bugs in your legacy database that make it difficult or impossible to use your applications? Unless you can solve the problem yourself, or someone else in that database's development community has a workaround, you could lose vital functionality.
Over time, you end up with a system that is highly inefficient. It could be prone to crashing or lack functionality that is required for changes in daily business operations. The unexpected downtime can cause many schedules to get off-track and impact your revenue.
Difficulty in Hiring Specialists
The older a database version is, the harder it is to find specialists who are proficient in the system. IT recruiting is challenging enough when it comes to the latest database technology. When you want to bring in someone capable of working on a 5–10-year-old implementation of the database, you’re going to spend a lot of time looking for talent.
If you’re unable to find someone with hands-on experience working with that particular database version, you’ll have to hire someone capable of learning it from the ground up. If the candidate is used to the latest features of that database, then they have to get used to doing without many important capabilities.
Decreased Business Agility
What does your time to market look like when it comes to solutions powered by an old database? Compare this pace to the latest database versions, especially if your competitors have upgraded. A lack of agility results in many types of opportunity costs, which have an indirect impact on your bottom line.
Some examples of the opportunities you miss out on due to old database technology include:
- Inability to integrate with new solutions: As new technology launches and offers productivity increases, your old database may not be able to work with new data types, schemas, and other innovations.
- The competition launching products before your organization: The productivity decreases associated with solutions that are slow or crash due to outdated technology can put you behind your competition’s release schedule. When they’re first to market, they can better position themselves as the best choice in the market sector.
- Decreased interest among job applicants: If your organization is known for being behind on technology, you may lose out on talent that prefers more tech-forward companies. Your turnover rates may also be affected when internal systems are difficult to work with.
- Poor customer experience: The client-facing experience also suffers from old databases. Your customers may not be able to access the data that they need to make decisions, or they could become frustrated if their interactions are slow and poor quality.
Read This Next
You Can’t Put Off a PostgreSQL v9.5 Upgrade Anymore – End of Life is Here
Don’t let old databases drag your entire organization down. You can learn more about upgrading to PostgreSQL 13 in this white paper or reach out to us to discuss your database migration, modernization, and database upgrade opportunities.
3 Ways to Get Your Hyperion 11.2 Upgrade in the Fast Lane
With the 11.1.x end of support deadline coming up soon, many organizations are planning their Hyperion upgrade. Of course, upgrading comes with a set of challenges: executing pre- and post-upgrade performance testing, managing downtime (and keeping it minimal), and troubleshooting the inevitable issues that will crop up along the way.
But what if you could easily execute your upgrade with a partner who already has solutions to these problems?
Accelatis is Datavail’s enterprise-grade application performance management (APM) platform, designed from the ground up to work with Oracle Hyperion. In fact, Accelatis is the only APM tool purpose-built for use with Hyperion. In this blog post, we’ll do a quick review of three ways that Datavail can deliver a fast-track 11.2 upgrade with the help of Accelatis.
1. Planning and Designing Your Upgrade
Right out the gate, Datavail uses Accelatis at the start of a Hyperion upgrade project to get the big picture of your Oracle environment. Who in the company is using Hyperion? How are they accessing it? What reports are they pulling? These questions and more are some of the most important to answer before planning your Hyperion upgrade—and they are exactly what Datavail uses Accelatis to find out.
From there, Accelatis collects data for at least four to eight weeks before the real work actually begins. This gives you an essential point of comparison for the benchmarking and testing process, so you can be sure that the upgrade hasn’t harmed the system’s performance.
2. Testing Before, During and After the Upgrade
There are many forms and variations of performance testing, each one exposing a different window into your IT environment. Before, during, and after any Hyperion upgrade project, we use Accelatis early and often for testing the system’s performance. Although tools such as LoadRunner and Oracle Application Testing Suite can also be used for baseline testing, they come with their own issues:
- Oracle Application Testing Suite is entirely separate from Hyperion… and is a costly purchase where the return on investment is questionable if you don’t already own it.
- Building appropriate tests in Micro Focus LoadRunner (formerly HP LoadRunner) for your Hyperion environment can take weeks—compared with just minutes for Accelatis—and can be an extremely tedious, labor-intensive manual process.
After baseline testing and comparison testing, Datavail uses Accelatis to perform load testing and break testing, assessing the scalability and resilience of your Hyperion environment.
Once your upgrade is complete, Accelatis continues to run performance testing scenarios on a regular basis so our team can work with you to detect and solve issues before they affect your users.
3. Troubleshooting Along the Way
Last but not least, Datavail also uses Accelatis for both short- and long-term troubleshooting over the course of a Hyperion upgrade. With the Accelatis remote agent installed on a machine, Datavail can utilize the Hyperion logs and compare them against the key performance indicators (KPIs) that you’ve defined for the project. Accelatis also has a built-in log management feature that scans the software’s log messages looking for and alerting on errors.
Read This Next
3 Ways Datavail’s IP Can Jumpstart Your Hyperion Upgrade
When upgrading to Hyperion 11.2, you want to be able to take advantage of the new features and functionality as soon as possible, but upgrade challenges can quickly turn a hopeful project into a nightmare. For more information on how Datavail can help you make a smooth transition to 11.2, download our white paper.