
The Blurring Line Between Cloud and On-Prem Storage

Overcoming cloud and on-prem storage challenges requires a reimagining of basic concepts such as storage tiering and migration.

The fact that organizations need to spend so much time and effort deciding when to use on-prem storage and when to use cloud storage speaks to a bigger issue: a gap between where we are and where we should be when it comes to strategic storage management.

One of the major challenges that storage admins face today is storage silos. In the data center, storage is divided by application and further subdivided into performance and capacity tiers. On top of that, most organizations also have storage scattered across multiple clouds.

There are several problems with this approach to storage. For one thing, it makes application agility difficult. Suppose, for example, that you have a particular application running in your data center and want to move that application to the cloud. While there might not be anything overly difficult about moving the application itself, the application's dependencies can present a challenge. If the application depends on a specific backend database, for instance, that database might also need to be moved to the cloud. Otherwise, the application's performance is likely to suffer because every database query has to cross the WAN.
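To make the WAN penalty concrete, here is a rough, back-of-the-envelope illustration in Python. Every figure in it, query counts and round-trip times alike, is an assumption for the sake of the example, not a measurement from any particular environment.

```python
# Back-of-the-envelope illustration of why chatty database traffic over a WAN
# hurts application performance. Every figure below is an assumption.

LAN_ROUND_TRIP_MS = 0.5    # assumed latency from the app to a co-located database
WAN_ROUND_TRIP_MS = 40.0   # assumed latency from a cloud-hosted app back to on-prem
QUERIES_PER_REQUEST = 25   # assumed number of queries per page or API request

def added_latency_ms(queries: int, lan_ms: float, wan_ms: float) -> float:
    """Extra time per request when each query crosses the WAN instead of the LAN."""
    return queries * (wan_ms - lan_ms)

extra = added_latency_ms(QUERIES_PER_REQUEST, LAN_ROUND_TRIP_MS, WAN_ROUND_TRIP_MS)
print(f"Roughly {extra:.0f} ms of added latency per request")
# With these assumed numbers, that is close to a full second of extra delay per
# request, which is why the dependency database usually has to move with the app.
```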

Of course, migrating the dependency database can pose its own challenges. There might be other applications that depend on the same database, and the database cannot be migrated until you figure out how the migration will impact those applications.

Another big problem with the traditional approach to storage is that it doesn’t adapt all that well to changing demands. If, for example, an application suddenly experienced a thousand-fold increase in usage, the application’s performance would almost certainly suffer because there is a limit to the number of IOPS that the underlying storage can deliver. Certainly, caching can help with this problem to some extent, but unless an application’s underlying storage architecture was designed to handle huge workload spikes, there is a good chance that the cache will be overwhelmed, thereby nullifying its usefulness.
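The arithmetic behind that failure mode is simple enough to sketch. The following Python snippet uses assumed, illustrative capacities and hit rates; the point is only that a cache sized for the normal working set cannot absorb a spike of that magnitude, so most of the I/O falls through to the slower capacity tier.

```python
# Minimal sketch of why a fixed cache can't absorb a huge workload spike.
# All capacities, rates, and hit ratios below are assumed for illustration.

BACKEND_IOPS = 20_000    # assumed IOPS the capacity (HDD) tier can sustain
CACHE_IOPS = 200_000     # assumed IOPS the flash cache can serve

def served_iops(demand: int, hit_rate: float) -> int:
    """IOPS actually served: hits are bounded by the cache, misses by the backend."""
    hits = min(demand * hit_rate, CACHE_IOPS)
    misses = min(demand * (1 - hit_rate), BACKEND_IOPS)
    return int(hits + misses)

normal_demand = 50_000
spike_demand = normal_demand * 1_000   # the thousand-fold spike described above

# Normal operation: a 90% hit rate (assumed) lets the array keep up.
print(served_iops(normal_demand, 0.90))   # 50,000 of 50,000 requested

# During the spike the working set no longer fits in cache, so assume the hit
# rate collapses. Cache and backend together serve only a sliver of the demand;
# everything else queues, and latency climbs.
print(served_iops(spike_demand, 0.05))    # ~220,000 of 50,000,000 requested
```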

One way that storage vendors are helping organizations to cope with these types of storage challenges is by reimagining basic concepts such as storage tiering and storage migration.

Tiered storage has been around in one form or another for many years. Storage arrays often feature high-capacity tiers that are made up of HDD storage and high-performance tiers consisting of flash storage. Storage admins can create LUNs in either tier, or get the best of both worlds by creating a LUN that primarily uses capacity storage but that also uses a bit of flash storage as a storage cache.

The type of storage tiering that I just described is commonly performed within a storage array, although it can also be done at the server level using technologies such as Microsoft’s Windows Storage Spaces. The newer approach that is being adopted by some storage vendors involves treating entire arrays as storage tiers.

Most organizations that have IT resources on premises (or in a co-location facility, for that matter) probably have a mix of storage hardware. Some of an organization's storage arrays might be relatively new and offer all of the latest features. Other arrays might be older and nearing the date when the organization plans to retire them. Similarly, an organization might have some arrays that were purchased to accommodate high-performance workloads, while other arrays cost less but also deliver more modest performance.

Solutions now exist that can define logical storage tiers based on the underlying hardware's capabilities. In other words, an organization can group its storage hardware by capability, much as it might build a virtual SAN, and cloud storage can be included in the architecture as well.
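As a purely illustrative sketch of what capability-based grouping might look like, consider the Python snippet below. The class, the media types, and the IOPS thresholds are all hypothetical; real products expose this logic through their own management interfaces rather than anything resembling this code.

```python
# Hypothetical sketch of grouping storage arrays (and cloud targets) into
# logical tiers by capability. Names and thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class StorageBackend:
    name: str
    media: str          # "nvme", "ssd", "hdd", "cloud-object", ...
    max_iops: int
    usable_tb: float

def assign_tier(backend: StorageBackend) -> str:
    """Very rough capability-based tiering policy (assumed thresholds)."""
    if backend.media == "cloud-object":
        return "archive"
    if backend.max_iops >= 500_000:
        return "performance"
    return "capacity"

backends = [
    StorageBackend("array-01 (new all-flash)", "nvme", 900_000, 150),
    StorageBackend("array-02 (aging hybrid)", "hdd", 40_000, 600),
    StorageBackend("cloud-bucket-east", "cloud-object", 5_000, 5_000),
]

tiers: dict[str, list[str]] = {}
for b in backends:
    tiers.setdefault(assign_tier(b), []).append(b.name)
print(tiers)   # logical tiers spanning on-prem arrays and cloud storage
```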

All of this matters because it opens the door to far greater agility. Each vendor has its own way of doing things, but this approach to storage management essentially treats all of the organization's storage hardware as one large, multi-tiered storage pool. Pooling storage resources in this way makes it far easier to perform live migrations of storage LUNs.

Imagine for a moment that a workload sees a huge demand spike. The management layer could conceivably recognize the spike and automatically migrate the workload's LUN to storage hardware that is better able to accommodate the demand.
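A highly simplified sketch of that kind of policy appears below. The telemetry values and thresholds are simulated, and nothing here corresponds to any real vendor's API; it only illustrates the shape of the decision the management layer would make.

```python
# Hypothetical sketch of the kind of policy a management layer might apply.
# Telemetry, tiers, and thresholds are all simulated and assumed.

LATENCY_SLO_MS = 5.0   # assumed latency target for the workloads

# Simulated view of where each LUN lives and how it is performing right now.
luns = {
    "lun-finance-db":  {"tier": "capacity", "observed_latency_ms": 22.0},
    "lun-web-content": {"tier": "capacity", "observed_latency_ms": 2.1},
}

def rebalance(luns: dict, slo_ms: float) -> None:
    """Move any LUN that is missing its latency target to the performance tier."""
    for name, stats in luns.items():
        if stats["observed_latency_ms"] > slo_ms and stats["tier"] != "performance":
            # In a real product this step would be a live, non-disruptive migration
            # between arrays or out to the cloud.
            stats["tier"] = "performance"
            print(f"migrating {name} to the performance tier")

rebalance(luns, LATENCY_SLO_MS)
print(luns)
```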

In addition, because LUNs can be dynamically moved between arrays (or the cloud), it should be much easier with this model to provision newly purchased storage hardware or to retire aging hardware. Likewise, having LUN migration capabilities will likely make it easier to migrate resources to or from the public cloud.

The bottom line is that storage management will eventually become hardware- and location-agnostic. Storage will be managed in a similar manner, regardless of whether it resides in your own data center or in the public cloud.

 
