Category Archives: Art of BI

AppDev 101: A Comparison of Agile and Waterfall Project Methodologies

Choosing the right application development methodology will have a big impact on the success of your project. Agile and waterfall are two of the most popular approaches, and there is ongoing debate about which is best. A comparison of the two will help you choose the method best suited to your project. As you’ll see, there isn’t one answer to the question of which methodology to use. It all depends on your business and your development team’s capabilities.

Agile Methodology: Defined

The agile methodology was introduced in 2001 by a group of software developers who wanted better ways of developing software. It focuses on people working closely together, results reached by iterative development, and flexible responses to change.

Waterfall Methodology: Defined

The waterfall methodology is more traditional in its approach. Using the waterfall methodology, teams make concrete plans for the life of the project. The approach is linear and sequential, where each phase is reviewed and verified before a new phase is initiated. The work completed in each phase flows down to the next, which is where the term waterfall comes from.

What are the Key Differences Between Agile and Waterfall?

Both application development approaches can produce quality software. But, there are distinct differences in how the two methodologies reach that goal.

  • Agile leads teams to work in increments, while waterfall is linear
  • Agile teams work in sprints, while waterfall projects are divided into phases
  • Agile teams produce working applications at the end of each sprint, while waterfall teams produce a complete application after the build phase
  • Agile teams work on a series of small projects that come together into a whole, while waterfall teams work toward one overall outcome
  • Agile’s structure is focused on satisfying the customer, while the waterfall structure focuses on completing a successful project
  • Agile requirements are set daily, while waterfall defines all requirements at the start of the project
  • Agile teams can easily respond to changes in requirements, while waterfall teams discourage changes after the initial requirement definition
  • Agile performs testing during development, while waterfall tests at the end of the build phase

Agile Methodology: Pros and Cons

If your organization uses the agile methodology effectively, it is an excellent AppDev approach. But, there are pros and cons, and situations where it works better than others.

Pros:

  • Development is quick and flexible
  • Defects are identified and fixed faster than when using waterfall
  • Small teams working on a variety of tasks tend to avoid slowdowns during development
  • Changes can be made during development whenever needed

Cons:

  • Agile requires an experienced Scrum master who is comfortable with the quick pace
  • If customers aren’t willing to stay involved, they will get frustrated with the demands on their time
  • Teams must be well-organized and self-governing, even though some members may be remote

Who Should Use the Agile Methodology?

You should consider several factors when choosing the agile approach. For example, agile works best when:

  • you have an experienced AppDev team, or one with good support
  • your business is focused on continuous improvement
  • you need to respond to a rapidly changing environment
  • you want your large corporation to streamline business processes and respond to changes faster
  • your customers and stakeholders are educated about why their participation is required and how it benefits them, and they commit to the process

When you use the agile methodology, you’re establishing an environment where customers, stakeholders, and the project team can all play a part. When the team produces working applications at the end of a sprint, everyone involved can attend a sprint demo and see exactly how the application works. The project team can get feedback from the customer, and changes can be incorporated into the next sprint.

This iterative process helps the team to respond to requirement changes, and even the team members doing the most basic part of the project see the result of their efforts. Agile projects generally achieve higher customer satisfaction since communication is key and the team can easily manage changes. Since the team has committed to a specific deliverable for every sprint, and since everyone contributes to it, the motivation among team members stays high.

Waterfall Methodology: Pros and Cons

The waterfall methodology, like agile, has advantages and disadvantages, and situations where it works best.

Pros:

  • it’s easy for teams of any size to manage the formal and ordered processes
  • structured development phases provide stability for newly formed teams
  • setting requirements at the start of the project reduces the complexity of the project
  • planning for the entire project at the start makes it easy to manage expectations, risks, and budget

Cons:

  • application development is slower and less flexible
  • defects typically aren’t discovered until the testing phase, which occurs after the build phase
  • it’s difficult to incorporate changes

Who Should Use the Waterfall Methodology?

  • teams that work best with a predictable and sequential approach
  • less experienced application development teams
  • businesses that don’t value change or taking risks
  • customers and stakeholders who aren’t willing or able to participate
  • projects that are small and not complicated
  • projects with short, well-defined timeframes

When you use the waterfall methodology, you’re establishing an environment that is methodical and where there are few surprises. Project requirements are set at the beginning of the project and don’t change significantly. Roles and tasks are defined for each phase of the project and those phases are predictable in how much time they will take. It’s easy to track milestones because the project follows a pre-defined path.

It’s the best approach when customers and stakeholders don’t want to or can’t participate. However, that means that customers won’t see the application until the end of the project, which can complicate testing and delivery. Waterfall may not be the best approach for projects that will take a long time because over an extended period of time, changes will most certainly be required.

How to Choose?

Recent studies such as the CHAOS Report published by the Standish Group indicate that in most situations, the agile methodology is more effective. The report found that agile projects have a 60% greater success rate than non-agile projects, and that waterfall projects are three times more likely to fail.

That doesn’t mean that you should use agile in every situation. As you’ve seen through this comparison, there are valid reasons to use both approaches. But, choosing the wrong AppDev approach is only one of the things that can cause a project to fail. For more information, read our whitepaper, “4 Reasons Why Application Development Projects Fail.” Given the high cost of a failed application development project, we want to help you identify the four reasons that most commonly cause projects to fail, along with seven solutions you can use to overcome application development challenges.

If you’re interested in using the agile methodology, but you don’t think your team is ready to take on an agile project, Datavail has alternatives for you. Learn more about Datavail’s Application Development Sprint Teams. Our dedicated sprint teams are available on demand to scale up your development team without losing momentum. We can also provide the leadership you need to develop your own sprint teams, letting you adopt the methodology that many developers find most effective.


Choosing the Right Partner for Your Oracle EBS 12.2 Upgrade

Oracle EBS 12.2 upgrades don’t need to be filled with frustration. When you have the right partner to help you along your path, you get to focus on taking advantage of this platform’s new capabilities. In this article, you’ll learn about the benefits that an upgrade provider offers and what you need to look for when choosing a partner.

Benefits of Working with an Oracle EBS 12.2 Upgrade Partner

An EBS partner brings many benefits to the table that reach all parts of your organization before, during, and after the upgrade.

Minimizes Downtime

Your partner has gone through many EBS 12.2 upgrades in a variety of business environments. They know the common pitfalls that can cause lengthy downtime, and they help you avoid them. You won’t disrupt daily business operations or run into unexpected outages that throw off schedules.

Prepares You Thoroughly for Upgrades

To hit the ground running with the latest Oracle EBS version, you need to prepare so everything works seamlessly once the system is back up. An upgrade partner walks you through every pre-check before the upgrade so everything is covered before the move.

Recommends Implementations that Work Best for Your Use Cases

Your partner’s experience with EBS 12.2 upgrades allows them to offer a plan that best fits your business needs. They can make recommendations that create a robust foundation for your organization’s future growth and requirements, as well as explain how you can get the most out of your investment.

Gives You Access to a Deep Pool of Experts

Even if you have a large in-house IT team, you may not have specialists with an extensive background in upgrading Oracle EBS to 12.2. You can access this skill set through your partner without needing to go through an expensive and time-consuming recruitment process.

Keeps Costs Low by Doing It Right the First Time

Nothing drives up the cost of an upgrade more than having a failed deployment, configurations that don’t serve your business needs, a schedule overrun, or poor optimization. Your partner lets you avoid all these with a reliable and predictable deployment process.

Finding the Right Partner for Your EBS Upgrade

Now that you know what an EBS 12.2 upgrade partner can do for you, it’s time to search out the right one for your organization. Look for these key characteristics when you evaluate vendors:

  • Responsive: You don’t want to be kept waiting when you’re in the middle of the upgrade process. Make sure that your partner gets back to you within the expected time frame.
  • Willingness to learn about your company: Every business environment has unique quirks, infrastructures, and use cases. You want a partner that understands your organization and what you do, so they can better support your short- and long-term technology requirements.
  • Oracle EBS specialists and certifications: The staff offered by your partner should consist of IT professionals who specialize in Oracle solutions. You want to look for those with a background in working with EBS, as well as those holding Oracle certifications on an individual and organization-wide level.
  • Clients of the same size or industry: Partners with an in-depth understanding of the technology used for companies of similar size, and those in the same industry, make the upgrade process much more manageable.
  • Extensive experience in EBS deployments and upgrades: Your partner should have plenty of case studies showing their expertise in EBS and the results that they achieved for their clients.
  • End-to-end service options: In many cases, you need help with more than just the upgrade process itself. From planning the upgrade to making sure that everything functions properly, end-to-end services allow you to get help with the full life cycle. Ongoing support and maintenance are also essential to consider, especially if you have a lean in-house IT team responsible for your EBS deployment. Your partner can take over these tasks, so you have better in-house technical resource allocation.

A quality EBS 12.2 upgrade partner makes a world of difference during this process. Start your evaluation by getting in touch with Datavail today. We’re a certified Oracle partner with highly experienced specialists ready to support you every step of the way.


How to Manage a Very Large Database in SQL Server

Ten years ago, when I started my career as a SQL Server DBA, a 200-500 GB database was considered very large. At that time, maintaining and managing those databases was not a tedious task. But over the years the definition of a very large database has changed. Today we’re looking at database sizes in terabytes and petabytes.

With cloud computing, the hardware needed to sustain database growth is now just one click away. Auto-scaling is a blessing for companies that previously had to burn through their budgets whenever they needed to add resources to their database servers.

Because database sizes have grown over time, managing and maintaining them has become a pain. By managing and maintaining, I mean taking regular backups, performing index maintenance, running integrity checks, and so on.

Most of the time we try to archive the old or cold data so that we can keep the database size in check. But in some cases the scope for archiving is very limited. This is especially true in the medical and financial sectors, where old data is still used for various purposes.

Given how quickly databases grow each day, I’m going to take you through some database management tasks to give you a better understanding of how you can keep up.

Consider an OLTP database that is active 24 hours a day, six days a week, and is around 10TB in size.

Backup Strategies

Taking a daily full backup of a 10TB database can be a very demanding task, especially for an OLTP database. Even with good hardware, the backup would take around five to seven hours. Therefore, having a proper backup strategy in place is as important as maintaining the database’s availability.

When you weigh the time and cost of resources, it would be wise to consider a weekly full backup with daily differential backups and hourly transaction log backups.

Using a third-party tool to back up the database is a better option. These tools not only reduce the time taken to back up the database, but also reduce the size of the compressed backup.
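
Even if you later move to a third-party tool, the native T-SQL version of this schedule is a useful baseline. Here is a minimal sketch, assuming a hypothetical 10TB database named SalesDB and placeholder backup paths:

  -- Weekly full backup (e.g. Sunday night, outside business hours)
  BACKUP DATABASE SalesDB
    TO DISK = N'X:\Backups\SalesDB_full.bak'
    WITH COMPRESSION, CHECKSUM, STATS = 5;

  -- Daily differential backup (captures only changes since the last full)
  BACKUP DATABASE SalesDB
    TO DISK = N'X:\Backups\SalesDB_diff.bak'
    WITH DIFFERENTIAL, COMPRESSION, CHECKSUM, STATS = 5;

  -- Hourly transaction log backup (preserves the log chain for point-in-time recovery)
  BACKUP LOG SalesDB
    TO DISK = N'X:\Backups\SalesDB_log.trn'
    WITH COMPRESSION, CHECKSUM;

In practice, each of these would run as a SQL Server Agent job writing to time-stamped file names rather than a fixed path.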

INDEX Maintenance

Performing normal index maintenance tasks on a very large database is not the same as performing them on a regular-sized database. Running a REBUILD on big tables with big indexes is very time consuming, and it causes blocking on the server that hampers the performance of other applications.

The only practical way to maintain the indexes on such a huge database is to REORGANIZE them. The REBUILD option should be reserved for index corruption, or for when there is an absolute need to rebuild a particular large index.
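
As a minimal sketch, using a hypothetical dbo.Orders table and a placeholder index name:

  -- REORGANIZE is always an online operation and can be interrupted safely
  ALTER INDEX ALL ON dbo.Orders REORGANIZE;

  -- Reserve REBUILD for corruption or a genuine one-off need
  ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REBUILD;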

One important option to consider when creating indexes on such a large database is to specify WITH SORT_IN_TEMPDB in the statement. SORT_IN_TEMPDB forces the index build to occur in tempdb, and when it is complete, the index is then written to its destination filegroup. This is an often-overlooked option that can provide huge reductions in index build times.
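
A sketch of the option, again with hypothetical names; note that tempdb needs enough free space to hold the sort runs, and ONLINE = ON requires Enterprise Edition:

  -- Sort runs go to tempdb instead of the destination filegroup
  CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID)
    WITH (SORT_IN_TEMPDB = ON, ONLINE = ON);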

Another advantage of reorganizing an index is that we can stop the operation midway if the execution time overlaps business hours – doing so will not send the index into a ROLLBACK state.

Database Consistency Check (DBCC)

The DBCC command is used to check the consistency and integrity of the database. It helps us make sure that our databases are in a healthy state, and if any issue or corruption occurs, it helps us identify and fix the problem.

Executing the DBCC command is very resource-intensive: it puts a strain on both memory and disk. Running the command on such a large database can be very risky, because if it does not finish in the allotted time frame and we try to KILL the process, it will go into a ROLLBACK state. This is not only time consuming but also jeopardizes the consistency of the database. Hence, running this command against a very large OLTP database is not a feasible option.

How quickly this command completes depends largely on the memory available and the type of RAID used for hosting the tempdb database.

Other options can be used instead, such as running DBCC CHECKTABLE against individual tables or groups of tables on a rotating basis, or DBCC CHECKDB WITH PHYSICAL_ONLY. The PHYSICAL_ONLY option limits processing to checking the integrity of the physical structure of page and record headers, along with the consistency between the pages of the allocation structures (data and indexes).
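
For example, with placeholder database and table names:

  -- Physical-only check: far cheaper than a full logical check
  DBCC CHECKDB (SalesDB) WITH PHYSICAL_ONLY, NO_INFOMSGS;

  -- Rotating table-level checks spread the full workload across maintenance windows
  DBCC CHECKTABLE ('dbo.Orders') WITH NO_INFOMSGS;
  DBCC CHECKTABLE ('dbo.OrderLines') WITH NO_INFOMSGS;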

The best and most commonly recommended option is to have a standby server, restore the backup of the production database onto that server, and then run the DBCC command there. If the consistency checks run clean on the standby database, the production database should be fine, since it is the source of the standby. If the standby database reports corruption, then DBCC checks or other tests for corruption can be run against the production database.
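
A sketch of that approach, assuming placeholder logical file names and paths on the standby server:

  -- On the standby server: restore the latest production backup
  RESTORE DATABASE SalesDB
    FROM DISK = N'X:\Backups\SalesDB_full.bak'
    WITH MOVE N'SalesDB_data' TO N'D:\Data\SalesDB.mdf',
         MOVE N'SalesDB_log' TO N'L:\Logs\SalesDB.ldf',
         STATS = 5;

  -- Then run the full consistency check here, away from production
  DBCC CHECKDB (SalesDB) WITH NO_INFOMSGS, ALL_ERRORMSGS;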

I hope this gives you a better understanding of the complexity and management options for very large databases in SQL Server. If you’re looking for support with your SQL Server implementation, please get in touch with our experts.


How to Solve the Oracle Error ORA-12154: TNS:could not resolve the connect identifier specified

The “ORA-12154: TNS:could not resolve the connect identifier specified” Oracle error is a commonly seen message for database administrators. When it occurs, there’s an issue with creating a connection to one of your Oracle services or database instances. In some Oracle database versions, this error may appear as “ORA-12154: TNS:could not resolve service name.” The failure to resolve the connect identifier may be caused by one or more of the following issues:

  • Inability to connect to the repository due to unplanned server and network outages
  • The entry is missing from tnsnames.ora
  • The entry in tnsnames.ora is malformed
  • The program is using tnsnames.ora from the wrong ORACLE_HOME
  • The program is not using a fully qualified service name, but no default domain is enabled in sqlnet.ora

Because there is more than one cause of the ORA-12154 error, you need to troubleshoot precisely what’s going on with your database connections. You’ll typically see this error in the Oracle client application during the connection process, not the server itself. While it can be frustrating to see this error when you’re working on an application, the fix is relatively straightforward.

Resolving ORA-12154 Error Codes

The Oracle client code uses one of three ways to look up connect data:

  • A flat file named tnsnames.ora
  • Oracle Names service
  • LDAP

When the complete ORA-12154 error text appears, your program has found a working Oracle client installation; however, the specified Oracle service is not listed in tnsnames.ora, Oracle Names, or LDAP.

The first step in the troubleshooting process is to determine which name resolution method is deployed at your site. Most sites use tnsnames.ora, but enough use Oracle Names or LDAP that it’s best to confirm this information.

If you are not the database administrator, get in touch with the people managing your Oracle systems and find out which method you should be using. They may be able to guide you in fixing the problem in accordance with your site’s standards.

The client code decides which mechanism to use based on the file sqlnet.ora. This file and tnsnames.ora can usually both be found in the Oracle install directory (“ORACLE_HOME”), under network/admin/. This location may be overridden with the environment variable TNS_ADMIN.

If the sqlnet.ora file does not exist or does not specify a resolution method, then Oracle Net uses tnsnames.ora.
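
For reference, a minimal pair of entries might look like the following; the connect identifier, host, port, and service name are placeholders for your environment:

  # sqlnet.ora -- try local tnsnames.ora first, then LDAP
  NAMES.DIRECTORY_PATH = (TNSNAMES, LDAP)

  # tnsnames.ora -- maps the connect identifier ORCL to a listener and service
  ORCL =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
      (CONNECT_DATA = (SERVICE_NAME = orcl.example.com))
    )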

Example locations of Oracle networking files include:

Windows

  • ORANT\NET80\ADMIN
  • ORACLE\ORA81\NETWORK\ADMIN
  • ORAWIN95\NETWORK\ADMIN
  • ORAWIN\NETWORK\ADMIN

UNIX / Linux

  • $ORACLE_HOME/network/admin/
  • /etc/
  • /var/opt/oracle/

If you fix the naming issues, but you still see the ORA-12154 error, check the Oracle service to confirm that it’s available for connections. A power outage, server failure, or network connectivity issue will make this resource inaccessible. It’s also possible that scheduled maintenance or repairs of an unrelated Oracle issue may take that resource temporarily offline.

Get Expert Help with Resolving Your ORA-12154 Errors

Datavail’s Oracle experts have an average of 15 years of experience and are well-versed in resolving common connection problems with this database technology. We offer Oracle services tailored to your needs, whether you need occasional assistance with troubleshooting or end-to-end solutions for your business.

Don’t let Oracle errors get in the way of creating high-availability, stable applications that your organization depends on. Get the most out of your technology investments by contacting us today.


Secrets of Power BI Performance: Power BI Dataflows

Microsoft Power BI is a mature, feature-rich business analytics solution used by thousands of companies to get cutting-edge data-driven insights. However, there are a number of challenges with using Power BI straight out of the box (as we discuss in our white paper “Power BI for Mid-Market Companies”).

As a Microsoft Gold Partner, Datavail knows Power BI like the back of our hand—and we’re here to help. In this series, we’ll take a look at some of the Power BI “secrets” that savvy users know about, starting with one of the most powerful and useful features of the platform: Power BI dataflows.

What Are Power BI Dataflows?

Power BI dataflows are a construct for organizing and persisting self-service data in Power BI, similar to but distinct from a dataset. Users can employ dataflows to collect, clean, combine, and enrich their enterprise data, automating much of the process.

Dataflows enable you to eliminate some intermediate services and bring data directly into the Power BI service. By creating an abstraction over your data sources, dataflows give users access to the information they need, without their needing to worry about the underlying implementation in terms of data warehouses and tools.

The use cases of dataflows include:

  • Creating transformation logic that can be shared and reused for Power BI datasets and reports.
  • Establishing a single source of truth by forcing analysts to connect to dataflows themselves, instead of connecting to the underlying systems.
  • Limiting access to an underlying data source for only a few select people, while letting analysts build on top of the dataflows.

Power BI Dataflows: Tips and Tricks

Power BI dataflows, like any other software feature, require a bit of adjustment as you get used to how they work within your IT environment. To lower the learning curve and boost your productivity, below are just a few of the Power BI dataflow tips and tricks you should know about.

  1. Power BI enhanced compute engine

    Released in October 2020, the Power BI enhanced compute engine loads data into an SQL-based cache for faster querying. According to Microsoft, this enhanced compute engine has the potential to improve Power BI dataflow performance by up to 20 times. This new engine is now enabled by default for all new dataflows in Power BI, so be sure to turn it on for older dataflows as well.

  2. Reusing dataflows

    Reusing your Power BI dataflows across multiple environments and workspaces can save users a great deal of time and effort. Some of the best practices here include:

    • Separating data transformation workflows and staging/extraction workflows. Decomposing dataflows into smaller pieces makes them more modular and easier to reuse in different places.
    • To use the output of a dataflow in one workspace for dataflows in other workspaces, you need to set the correct workspace access levels. (See Microsoft’s article “Organize work in the new workspaces in Power BI.”)

    For the full story, check out Microsoft’s article “Best practices for reusing dataflows across environments and workspaces.”

  3. Implementing error handling

    Unexpected errors can derail your Power BI deployment and bring down your entire analytics workflow, depriving you of the insights you need. It’s highly recommended that you make your Power BI dataflows robust by implementing error handling so that you can handle errors and recover from them gracefully.

    The two types of common errors in Power BI are step-level errors (which prevent a query from loading) and cell-level errors (which do not prevent the query from loading, but display “Error” in cells with errors). Microsoft provides guidance for how to deal with these in the article “Dealing with errors in Power Query.” More sophisticated error handling in Power BI is possible by applying your own custom conditional logic (e.g. the “try” and “otherwise” syntax).
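
    As a minimal sketch in Power Query M, assuming a step named Source with a hypothetical [Amount] column that sometimes contains non-numeric text, a cell-level error can be absorbed like this:

      // Replace unparseable values with 0 instead of leaving a cell-level Error
      CleanedAmounts = Table.TransformColumns(
          Source,
          {"Amount", each try Number.From(_) otherwise 0}
      )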

Final Thoughts

Using the enhanced compute engine, reusing your dataflows in multiple places, and implementing error handling are just a few ways for Power BI power users to optimize the software’s performance. Power BI dataflows are a tremendously powerful and flexible construct—at least when you follow the Power BI best practices that we’ve outlined above.

However, dataflows are just one way for companies to get the Power BI performance they need. Want to learn more? Check out the Datavail blog for more Power BI insights, or download our white paper “Power BI for Mid-Market Companies” for all the details. You can also talk to one of our Power BI experts.
