Tag Archives: Cloud

A Few Words About OAC Embedding

TL;DR To exit VIM you press Esc, then type :q! to just exit or :wq to save changes and exit and then press Enter.

Some time ago, and by that I mean almost exactly approximately about two years ago, Mike Durran (https://insight2action.medium.com) wrote a few blogs describing how to embed Oracle Analytics Cloud (OAC) content into public third-party sites.

Oracle Analytics Cloud (OAC) Embedding — Public User Access — Part 1
Oracle Analytics Cloud (OAC) Embedding — Public User Access — Part 2

For anyone who needs to embed OAC reports into their sites, these blogs are a must-read and a great source of valuable information. Just like his other blogs and the official documentation, of course.

Visualizing Data and Building Reports in Oracle Analytics Cloud
The topics in this section explain how to use the JavaScript embedding framework to embed Oracle Analytics content into applications and web pages.

If you have ever tried it, you most likely noticed that the embedding process is not exactly easy or intuitive. Roughly it consists of the following steps:

  1. Create content for embedding.
  2. Set up the infrastructure for authentication:
    2.1. Create an Oracle Identity Cloud Service (IDCS) application.
    2.2. Create an Oracle Functions function.
    2.3. Set up an Oracle API Gateway.
  3. Embed JavaScript code into the third-party site.

Failing to implement any of the above leads to a fully non-functional thing.

And here is the problem: Mike knows this well. Too well. Some things that are entirely obvious to him aren't obvious to anyone trying to implement it for the first time. When you know something at a high level, you tend to skip bits and bobs here and there and various tasks look easier than they are.

A small story. When I was studying at the university, our teacher told us a story. Her husband was writing a math book for students and wrote the infamous phrase all students love: "... it is easy to prove that ...". She told him that if it was easy to prove, he should do it.

He spent a week proving it.

That is why I think that I can write something useful on this topic. I'm not going to repeat everything Mike wrote, and I'm not going to re-write his blog. I hope that I can fill in a few gaps and actually show some of those "it is easy to do" things.

Also, this blog is not intended to be a complete step-by-step guide. Or, at least, I have no intention of writing such a thing. Although it frequently happens that I start writing a simple one-paragraph hint and a few hours later I'm still proofreading something with three levels of headers and animated screen captures.

Disclaimer. This blog is not a critique of Mike's blog. What he did is hard to overestimate and my intention is just to fill some gaps.

Not that I needed to make the previous paragraph a disclaimer, but all my blogs have at least one disclaimer and once you get locked into a serious disclaimers collection, the tendency is to push it as far as you can.

Testing out Token Generation

My main problem with this section is the following quote. Or, more precisely, not a problem but a place that, in my opinion, might require more clarification:

You’ll see that the token expires in 100 seconds and I describe how to increase that in a separate blog. For now, you can test this token will authenticate your OAC embedded content by copying the actual token into the following example HTML and deploying on your test web server or localhost (don’t forget to add suitable entries into the OAC safe domains page in the OAC console)

I mean, why exactly is 100 seconds a bad value? What problem does increasing this value solve? Or, from a practical point of view, how do we tell that our problem is the token lifespan?

It is easy and confusing at the same time. The easy part is that after the token expires, no interaction with OAC is possible. That is not a problem if you embed non-interactive content. If the users can only look but not touch, the default value is fine. However, if the users can set filters or interact with reports in any way, tokens must live longer than the expected interaction time.
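By the way, you don't have to guess how long your token actually lives. Assuming the token returned by IDCS is a regular JWT (it is in this setup), you can decode its payload and look at the iat and exp claims. A minimal Node.js sketch (the token value is a placeholder, of course):

// check-token.js: print the lifetime of a JWT (no signature verification, just decoding)
const token = '<PASTE YOUR TOKEN HERE>';

const payload = JSON.parse(Buffer.from(token.split('.')[1], 'base64').toString('utf8'));

// iat and exp are Unix timestamps in seconds
console.log('issued at :', new Date(payload.iat * 1000).toISOString());
console.log('expires at:', new Date(payload.exp * 1000).toISOString());
console.log('lifetime  :', payload.exp - payload.iat, 'seconds');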

Here is what it looks like when the token is OK:

And the same page a few minutes later:

Assuming that we don't know the right answer and need to find it, how do we do it? The browser developer console is your friend! The worst thing you can do to solve problems is to randomly change parameters and press buttons hoping that it will help (well, sometimes it does help, but don't quote me on that). To actually fix it we need to understand what is going on.

To be fair, at first sight, the most obvious and visible message is totally misleading. Normally, we go to the Console tab (Ctrl+Shift+J/Command+Option+J) and read what is written there. But if the token is expired, we get this:

The console shows multiple CORS errors:

Access to XMLHttpRequest at 'https://OAC-INSTANCE-NAME.analytics.ocp.oraclecloud.com/ui/dv/ui/api/v1/sessioninfo/ext' from origin 'https://THIRD-PARTY-SITE' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.

CORS stands for Cross-Origin Resource Sharing. In short, CORS is a security mechanism implemented in all modern browsers which allows a server to specify whether its content may be requested by pages coming from another server.

Looking at this, we might assume that the solution would be either to specify Safe domains in OAC or to set a CORS policy for our web server, or both. In reality, this message is misleading. The real error can be found on the Network tab.

Let's take a look at the first failed request.

Simply click it once and check the Headers tab. Here we can clearly see that the problem is caused by the token, not by CORS: the token has expired.

The same approach helps when there is something else wrong with the token. For example, once I selected the wrong OAC instance for the Secure app. Everything was there. All options were set. All privileges were granted. The token generation was perfect. Except it didn't work. The console was telling me that the problem was CORS, but here I got the real answer.

Oracle Functions Service

I feel like this is the part that could use more love. There are a few easy-to-miss things here.

And the most important question is: why do we need Oracle Functions at all? Can't we achieve our goal without Functions? The answer is yes, we can. Both Oracle Functions and API Gateway are optional components.

In theory, we can use the Secure application directly. For example, we can set up a cron job that gets the token from the Secure application and then embeds it directly into static HTML pages using sed or Python or whatever we like. It will (theoretically) work. Note that I didn't say it was a better idea. Or even a good one. What I'm trying to say is that Functions is not an essential part of this process. We use Oracle Functions to make the process more manageable, but it is only one of the possible good solutions, not the only one.

So what happens at this step is that we create a small self-contained environment with a Node.js application running in it. It is all based on Docker and the Fn Project, but that is not important to us.

The function we are creating is just the part that gets the token for us and makes the end result simpler to manage.

High-level steps are:

  1. Create an application.
  2. Open the application and either use Cloud Shell (the easy option) or set up a development machine.
  3. Init boilerplate code for the function.
  4. Edit the boilerplate code and write your own function.
  5. Deploy the code.
  6. Run the deployed code.

Creating a new function is easy. Go to Developer Services -> Applications.

Create a new function and set networking for it. The main thing to keep in mind here is that the network must have access to the Oracle Cloud Infrastructure Registry. If it doesn't, you'll get the error Fn: Error invoking function. status: 502 message: Failed to pull function image when trying to run the function (see Issues invoking functions in the documentation).

The first steps with Oracle Functions are simple and hard at the same time. It is simple because when you go to Functions, you see the commands which should be executed to get it up and running. It is hard because it is not obvious what is happening and why. And diagnostics could be better, if you ask me.

After you create an application, open it, go to Getting Started, press Launch Cloud Shell and do what all programmers do: copy and paste commands trying to look smart and busy in the process. Literally. There are commands you can copy and paste to get a fully working Hello World example written in Java. Only one command has a placeholder that needs to be changed.

Hint: to make your life easier, do step #5 (Generate an Auth Token) first and then come back to steps 1-4 and 6-11.

If everything is fine, you will see the "Hello, world!" message. I wonder, does that make me a Java developer? At least a junior one? I heard that's how this works.

OK, after the Java hello-world example works, we can add Node.js to the list of our skills. Leave the Java hello-world example and initialize a new node function:

cd
fn init --runtime node embed_func

This creates a new Node.js boilerplate function located in the embed_func directory (the actual name is not important; you can choose whatever you like). Now go to this directory, edit the func.js file and put Mike's code there.

cd embed_func
vim func.js

- do some vim magic
- try to exit vim

I don't feel brave enough to give directions on using vim. If you don't know how to use vim but value your life or your reason, find someone who knows it.

But because I know that many wouldn't trust me anyway, I can say that to start editing the text you press i on the keyboard (note -- INSERT -- at the bottom of the screen); then, to save your changes and exit, press Esc (-- INSERT -- disappears), type :wq and press Enter. To exit without saving type :q!, and to save without exiting, :w. Read more about it here: 10 Ways to Exit Vim Editor

Image source: https://www.linuxfordevices.com/tutorials/linux/exit-vim-editor

Most likely, after you create a new node function, paste Mike's code and deploy it, it won't work and you'll get this message: Error invoking function. status: 504 message: Container failed to initialize, please ensure you are using the latest fdk and check the logs

I'm not a Node.js pro, but I found that installing NOT the latest version of the node-fetch package helps.

cd embed_func
npm install node-fetch@2.6.7

At the moment of writing this, the latest stable version of this package is 3.2.10: https://www.npmjs.com/package/node-fetch. I didn't test absolutely all versions, but the latest 2.x version seems to be fine and the latest 3.x version doesn't work (most likely because node-fetch 3.x is ESM-only and can no longer be loaded with require, which is how the boilerplate function loads its modules).
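For reference, here is roughly the shape that func.js takes. This is a simplified sketch, not Mike's actual code: the IDCS URL, credentials and scope are the same placeholders as in the curl example later in this blog, and in real life you would not hard-code the secrets in the function.

// func.js: a simplified sketch of a token-fetching function
const fdk = require('@fnproject/fdk');
const fetch = require('node-fetch');   // 2.x, see the note above

fdk.handle(async function (input) {
  // <clientID>:<ClientSecret> of the IDCS confidential (Secure) application
  const auth = Buffer.from('<clientID>:<ClientSecret>').toString('base64');

  const response = await fetch('https://<IDCS-domain>.identity.oraclecloud.com/oauth2/v1/token', {
    method: 'POST',
    headers: {
      'Authorization': 'Basic ' + auth,
      'Content-Type': 'application/x-www-form-urlencoded;charset=UTF-8'
    },
    body: 'grant_type=password&username=<username>&password=<password>&scope=<scope>'
  });

  const body = await response.json();
  // return whatever shape your embedding page expects
  return { token: body.access_token };
});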

If everything was done correctly and you managed to exit vim, you can run the function and get the token.

fn invoke <YOUR APP NAME> <YOUR FUNCTION NAME>

This should give you a token every time you run this. If it doesn't, fix the problem first before moving on.

Oracle API Gateway

API Gateway allows for easier and safer use of the token.

Just like Functions, the API Gateway is not an essential part. I mean, once (if) we decide to use Oracle Functions, it makes sense to also use a Gateway. Setting up a gateway to call a function only takes a few minutes, no coding is required, and things like CORS or HTTPS are handled automatically. With this said, API Gateway is a no-brainer.

In a nutshell, we create a URL, and every time we call that URL we get a token. It is somewhat similar to where we started. If you remember, the first step was "creating" a URL that we could call to get a token. The main and significant difference is that now all the details like login and password are safely hidden behind the API Gateway and Oracle Functions.

Before Functions and Gateway it was:

curl --request POST \
 --url https://<IDCS-domain>.identity.oraclecloud.com/oauth2/v1/token \
 --header 'authorization: Basic <base64 encoded clientID:ClientSecret>' \
 --header 'content-type: application/x-www-form-urlencoded;charset=UTF-8' \
 -d 'grant_type=password&username=<username>&password=<password>&scope=\
 <scope copied from resource section in IDCS confidential application>'

With an API Gateway, the same result can be achieved by:

curl --request GET --url https://<gateway>.oci.customer-oci.com/<prefix>/<path>

Note that there are no longer details like login and password, the clientID and ClientSecret of the Secure application, or internal IDs. Everything is safely hidden behind closed doors.
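From the embedding page's point of view, getting a fresh token is now a single anonymous call, something along these lines (the URL is the same hypothetical gateway endpoint as above, and the property name depends on what your function returns):

fetch('https://<gateway>.oci.customer-oci.com/<prefix>/<path>')
  .then(response => response.json())
  .then(data => {
    // use data.token to initialise the embedded content
    console.log('fresh token:', data.token);
  });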

API Gateways can be accessed via the Developer Services -> [API Management] Gateways menu.

We click Create Gateway and fill in some very self-explanatory properties like name and network. Note that this URL will be called from the Internet (assuming that you are doing this to embed OAC content into a public site), so you must select the network accordingly.

After the gateway is created, go to Deployments and create one or more, well, deployments. In our case, a deployment is a call to our previously created function.

There are a few things to mention here.

Name is simply a marker for you so you can distinguish one deployment from another. It can be virtually anything you like.

Path prefix is the part that will actually appear in the URL. It has to follow rather strict URL rules.

The other very important thing is CORS. At the beginning of this blog I already mentioned CORS but that time it was a fake CORS message. This time CORS is actually important.

If we are embedding OAC content into a site called https://thirdparty.com, we must add a CORS policy allowing us to do so.

If we don't do it, we will get an actual true authentic CORS error (the Network tab of the browser console):

The other very likely problem: after you have created a working function, exited vim, created a gateway and defined a deployment, you try to test it and get the error message {"code":500,"message":"Internal Server Error"}:

If you are getting this error, it is possible that the problem is caused by a missing policy:

Go to Identity & Security -> Policies and create a policy like this:

ALLOW any-user to use functions-family in compartment <INSERT YOUR COMPARTMENT HERE> where ALL { request.principal.type= 'ApiGateway'}

A few minor things

It is rather easy to copy pieces of embedding code from the Developer menu. However, by default this menu option is disabled.

It can be enabled in the profile. Click your profile icon, open Profile, then Advanced, and check Enable Developer Options. It is mentioned in the documentation but let's be real: nobody reads it.

If you simply take the embedding script, it won't work.

This code lacks two important modules: jquery and obitech-application/application. If either of them is missing, you will get this error: Uncaught TypeError: application.setSecurityConfig is not a function. And by the way, the order of these modules is not exactly random. If you put them in the wrong order, you will likely get the same error.
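Schematically, the working requirejs call looks like this. The exact module list and the security configuration arguments should come from your own copied snippet and from Mike's example HTML; the point here is only the order of the first two modules and of the callback arguments:

requirejs(['jquery', 'obitech-application/application' /* , ...the rest of the modules from the copied snippet... */],
  function($, application) {
    // $ corresponds to 'jquery' and application to 'obitech-application/application';
    // if the module list and the callback arguments get out of step, "application" is not
    // the module you think it is and setSecurityConfig "is not a function"
    application.setSecurityConfig(/* token configuration, see Mike's example HTML */);
  });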

As a conclusion

After walking this path with a million ways to die, we get this beautiful-looking page: Niðurstaða stafrænna húsnæðisáætlana 2022

https://hms.is/husnaedi/husn%C3%A6%C3%B0isa%C3%A6tlanir/m%C3%A6labor%C3%B0-husn%C3%A6%C3%B0isa%C3%A6tlana/ni%C3%B0ursta%C3%B0a-stafr%C3%A6nna-husn%C3%A6%C3%B0isa%C3%A6tlana-2022

Why Your Website Needs the Scalability and Availability of the Cloud


 

As more people live essential parts of their lives online, their expectations for website performance and user experience rise. These heightened expectations can impact business success.

 
For example, according to one study, 47 percent of visitors expect your website to load in 2 seconds or less—and if it doesn’t, they’ll have no problem clicking away to find what they’re looking for elsewhere. Meanwhile, research and advisory firm Gartner estimates that the average cost of IT downtime is roughly $5,600 per minute, or more than $300,000 per hour; the costs can be even higher for large organizations or e-commerce websites that depend on a steady flow of traffic.

Given these issues, scalability and availability need to be essential concerns for any company that depends on its website to do business or offer services to its customers. By migrating your website hosting from on-premises to the cloud, you can improve your site’s reliability, performance, and uptime while leveraging the advantages of public cloud ecosystems.

Scalability and Availability for Websites in the Cloud

In cloud computing, “scalability” refers to the ability of an application or service to dynamically expand or contract its capacity as necessary (e.g., during times of peak usage). “Availability,” meanwhile, refers to the amount of time that an application or service is accessible.

Both scalability and availability can be dramatically increased when moving your website to cloud hosting:

  • Scaling in the cloud is much easier than scaling inflexible on-premises servers. Websites hosted in the cloud can take advantage of horizontal scaling, distributing and balancing the increased load across multiple servers to prevent overloading any one of them. On the other hand, on-premises resources have fixed capacity, and it’s difficult to scale them without making an expensive capital purchase. In addition, the extra on-premises resources you purchase will go unused much of the time, and you may spend more than you need if you don’t correctly estimate the amounts of peak usage.
  • Many cloud providers offer a guaranteed uptime percentage in their service level agreement (SLA). For example, the Microsoft Azure SLA guarantees 99.9 percent uptime (or more) for its various cloud services, which corresponds to downtime of roughly 9 hours over an entire year.

 
Hosting your website in the cloud also offers benefits in terms of disaster recovery and business continuity. The best public cloud providers offer automatic backups and data replication, letting you quickly restore operations in the event of data loss.

On the other hand, on-premises backups need to be handled manually, and hosting your website on-site puts you at risk of a natural disaster, such as a flood or fire.

Case Study: Major Utility Company

One of Datavail’s clients, a major Canadian utility company, was faced with several pain points when refreshing its outdated front-facing customer website. Just 8 percent of the client’s customers were paying their bills online, which was costing the client millions in mailing expenses.

Part of the problem: the client’s existing website suffered from a clunky user experience and archaic technology. In particular, the website needed to be more mobile-friendly and accessible for customers with disabilities. The client was also concerned about the website’s scalability and availability, especially during times of peak usage such as storms and widespread power outages.

Datavail worked with the client to refresh and modernize its outdated customer website, with a particular focus on cloud computing. Although the client faced data sovereignty issues that required it to maintain its customer data on-premises, Datavail helped the client design a hybrid cloud architecture and migrate the website infrastructure to Microsoft Azure.

The client now uses fast, modern cloud services such as Azure Data Factory for ETL data integration and Azure App Service for building and deploying web applications. Thanks to its partnership with Datavail, the client has slashed postal costs due to a fivefold increase in customers paying online via the new modernized and customer-friendly website. In addition to the OpEx savings, the customer has also significantly improved its website’s J.D. Power customer satisfaction rankings.

Want to make your website more robust, user-friendly, scalable, and available by migrating to the cloud? Datavail can assist you. Find out how we helped one client by reading our case study “Major Utility Company Improves Residential Customer Website Experience with Azure.”


Why Access to Software Portals Is More Important Than Ever


 

The COVID-19 pandemic has upended the way businesses of all sizes and industries operate—in particular, the places that employees do their work. According to an October 2020 Gallup poll, 33 percent of U.S. employees say that they are “always” working remotely, while another 25 percent say that they “sometimes” telecommute.

 

Faced with this rapid and unexpected shift to working from home, organizations have had to make sudden changes and evolutions, especially regarding their IT systems and software. Despite the pandemic, businesses must ensure that their employees, customers, partners, and third-party vendors can enjoy continued access to key software applications and services. These concerns are especially relevant for companies with locations and employees scattered across the country, around the world, or those who must navigate different time zones, regulatory environments, and more.

Although the pandemic is waning, its effects and repercussions will be long lasting. Businesses that take advantage of this time of change, uncertainty, and turbulence will position themselves well for whatever comes next.

For some companies who have not adopted a resilient mindset and a transformation-focused organizational culture, the pandemic has been a challenge to their business—often an existential one. Others, however, have seen the COVID-19 pandemic as an affirmation that justifies the investments they previously made in their IT infrastructure. Below are just some digital technologies that have paid dividends for their users:

  • Cloud computing for easier access to applications and services and rapid horizontal and vertical scalability.
  • Identity management (IdM) solutions to monitor and control employees’ access to business-critical applications.
  • Automation of tedious manual processes, freeing up employees for higher-level, more revenue-generating activities.
  • Data analytics to collect, process, analyze and visualize vast quantities of information, mining it for insights to enable more accurate forecasts and smarter decision-making.
  • Workplace communication and collaboration tools (e.g., Microsoft Teams, Slack, Trello, Zoom, etc.).

 

The rise in telecommuting has fostered the growth of SaaS (“software as a service”) usage. Software applications running in the cloud have greatly expanded user connectivity and productivity. Cloud-native software services have many advantages, with a few of them especially relevant in this day and age:

  • Support and maintenance for SaaS applications is the responsibility of the software vendor, rather than the in-house IT department. This means less stress on overburdened IT teams, and not waking up at 3 a.m. when a server goes down.
  • As a corollary, SaaS upgrades are rolled out smoothly and automatically, without the need for business-disrupting downtime.
  • Users can access SaaS applications from anywhere with an Internet connection and at any time—a vital asset during this time of telecommuting, but also tremendously convenient in general.
  • During times of uncertain demand, SaaS applications can easily scale to accommodate spikes in usage without degrading performance.
  • By using a centralized, streamlined solution rather than disconnected legacy systems, SaaS improves visibility and makes IT governance easier.

 

In particular, SaaSOps (“SaaS operations”), i.e., managing and monitoring an organization’s use of SaaS applications, is becoming more and more relevant and important. According to BetterCloud’s “2020 State of SaaS” report:

  • Organizations use an average of 80 SaaS applications.
  • IT teams spend an average of more than seven hours offboarding an employee from the company’s SaaS applications after they depart the organization.
  • Only 49 percent of IT professionals are confident in their ability to detect unauthorized SaaS usage on the company network.

 

To confront these SaaSOps challenges, organizations need a centralized coordinated approach. One of the easiest IT projects for your business to take on—yet one of the most impactful for employee productivity and user experience—is to build a clean, streamlined application portal with single sign-on (SSO), simplifying the process of logging in and using enterprise software.

With a single application portal, users can enter their credentials and access the services they need to do their jobs efficiently, from anywhere and at any time.

Looking to implement your own software application portal? Datavail can help. To learn how we helped one client implement a secure application portal in the Microsoft Azure cloud, check out our recent case study “Major Auto Manufacturer Migrates Application Portal to Azure Cloud.”


The Importance of Database Modernization for Cloud Adoption

Getting the most out of cloud technology involves far more than simply adopting cloud-based infrastructure. Older databases shifted into the cloud may not serve the needs of modern applications. As your users have more complex use cases, with a growing amount of data and data sources, real-time feature requests, and rapid scaling requirements, your database technology needs to change too.

 

Database modernization is spread throughout multiple stages, as your organization goes through testing, evaluation, and pilot projects to determine which areas benefit the most from upgrading the database.

In our recent cloud adoption survey, 46 percent of respondents planned on using or are considering modern data platforms, 34 percent are remaining on their current solution, 22 percent are considering a move in the future, 19 percent are evaluating and planning their migration, 15 percent are in the process of migration, and 10 percent are building new applications on new platforms and keeping legacy applications on old platforms.

As you can see, the way that organizations go about database modernization comes in many forms.

Benefits of Database Modernization

Gaining Purpose-built Databases
Databases are not a one-size-fits-all technology, but some organizations opt for the same technology no matter what application it’s for. By matching purpose-built databases to specific use cases, you can improve performance, expand functionality, and get the data structures that make sense for the project.

Reducing Costs
Modernizing your databases can also lead to lower expenses. Since they’re fine-tuned for a specific purpose, you get access to the features you need without paying for those that are not useful. These database platforms often require less time spent on maintenance, security, and optimization, so your database administrators and system administrators have more time in their workdays.

Better Reliability
Modern database platforms are filled with features that help your systems stay online and meet SLAs, including high availability, distributed processing, and robust disaster recovery options. If you’re using cloud-native database solutions, you also have the advantage of using technology specifically designed to get the most out of the cloud.

Fast Provisioning
You drastically reduce the amount of time it takes to spin up a new database instance and often enjoy a streamlined process. For some database platforms, all you need to do is click a button. Scaling is also simple, with many solutions offering automated control over the resources you’re using.

Signs That You Should Modernize Your Databases

  • Difficulty in keeping up with growing usage: Your users and workloads are rapidly increasing, and the system is starting to strain under the pressure. Performance issues abound and make it difficult to achieve peak productivity.
  • Inability to work with new data sources and structures: As new data sources and structures develop, older databases may not support these formats. You could lose out on valuable insights or end up with a major opportunity cost in the long run.
  • Increased demands on the IT team to keep the system running: Frequent downtime, crashes, errors, and other issues add up fast with older databases. You also have to worry about security exploits and other vulnerabilities occurring with databases that are past their prime or end of life.
  • Struggles meeting SLAs: You fail to meet your SLAs due to issues with the system, whether it becomes inaccessible or has extremely slow performance.
  • Database costs rising uncontrollably: Propping up older technology can become expensive in many ways, from the resources required to keep it operational to sourcing specialists of less popular databases.

Moving to Modern Databases in the Cloud with Datavail

As a leading cloud partner with AWS, Microsoft, Oracle, and MongoDB, we can help with your database modernization and cloud migration. We have more than 15 years of experience and over 800 data architects and database administrators ready to move your applications to cutting-edge databases. Learn more about the cloud adoption journey and see the results from the rest of the survey in our latest paper. Contact us to get started.

Read This Next

Modernize Legacy Tech with MongoDB

Your organization is probably running technology that is past its prime, and you probably know you need to update and upgrade it all to maintain your corporate competitiveness. In short, you need to ‘modernize,’ and MongoDB provides you with the tools you’ll need to bring all your tech – software, apps, and systems – up to speed.


Aligning Your Cloud Adoption Costs with Your Expectations

A big selling point of the cloud for many companies is cost savings. Shifting capital expenses to operational expenses makes it easier to buy in to cloud adoption, but it’s important to have a deep understanding of your costs so your total cost of ownership doesn’t exceed your expectations.

 
In a recent survey we conducted, we asked companies where they’re at in their cloud journey. Ten percent of respondents are 100 percent in the cloud, 61 percent use a hybrid cloud infrastructure, 21 percent are currently in the evaluation and planning stage, and 8 percent haven’t started on cloud adoption at all. Each stage of this journey has important costs to consider so that you can better plan for your future moves.

We found that 27 percent of organizations had cloud costs that were higher than they planned. You have several ways that you can better predict your cloud expenses to avoid surprises.

Understanding the Shift from CAPEX to OPEX

You’re fundamentally changing the way that you handle the bulk of your technology expenses with cloud-based solutions. The models you use to predict the total cost of ownership for on-premise systems don’t work with usage and subscription-heavy payments. Adjust your accounting to better predict the real-world costs of your cloud technology. It may take several quarters to pin down these numbers, but you’ll be able to build on the data as it comes in.

Consider Your Cloud Implementation and Optimization Costs

Look beyond the base cost of the cloud solution. How much will it cost to fully implement in your organization? You may need to change workflows, increase your network bandwidth, or expand your endpoint security to support mobile devices.

If you use an Infrastructure as a Service solution, you need to optimize it based on your requirements. Depending on the complexity of your project, you could end up paying significant amounts to get the best performance out of your cloud investment.

Keep a Close Eye on Your Cloud Consumption

Monitor your real-world usage and adjust your cost predictions based on this data. Sometimes it’s hard to pin down exactly how many resources you need, especially when you’re working with a usage-based payment model. Many cloud providers have calculators that allow you to get a general idea of your numbers, so you can better align them with your expected costs. Third-party tools are also available for cloud monitoring.

Develop a Scaling Plan

Unlike on-premise infrastructure, it’s simple to scale cloud workloads up and down as needed. Create a strategy that maximizes your flexibility, so you don’t overpay for capacity you’re not using. Don’t be afraid to adjust this plan as you gain more experience with your selected cloud platforms. Many systems offer automated scaling features to make this process even easier.

Use Reserved Instances for Predictable Workloads

If you have workloads that have static requirements or change very slowly, many cloud providers allow you to set up reserved instances. You pay for these instances on a long-term basis, such as a year upfront, and get a substantially decreased cost.

Work with an Experienced Cloud Migration Partner

One way to get better insights into the cost of your cloud migration is to work with an experienced partner. At Datavail, we’ve guided hundreds of organizations through cloud migrations and modernization. Our experience leads to cost savings throughout the entire process, allowing you to deploy cloud-based solutions faster, optimize your cloud infrastructure, and plan around well-informed expense predictions.

We’re an AWS Advanced Consulting Tier Partner, an Oracle Platinum Partner, a MongoDB Premier Partner, and a Microsoft Gold Partner with 15 years of experience and over 800 data architects and DBAs. No matter what type of cloud technology you’re migrating to, we’re able to help. Learn more about cloud adoption trends in our white paper. Contact us to get started on your cloud journey.

Read This Next

Cloud Adoption Industry Benchmark: Trends & Best Practices

Datavail partnered with TechValidate, an independent third-party surveyor, to conduct a cloud adoption industry benchmark survey. This paper takes a look at the results along with a big picture view on cloud history and trends.
