Today, there is a significant focus on the cloud. Almost anything we do in the technology world can be moved to a comparable cloud service. But there are still extremely good reasons to keep some or all of our IT functions in house. The modern on-premises data center looks decidedly different from the data centers of yesterday. But do all of the new infrastructure advancements make sense for your organization? One new technology that warrants further investigation is hyper-converged infrastructure.
To explain hyper-converged infrastructure, we first need to look back a couple of decades. At that time, data center design was quite different. We had multiple physical compute platforms (servers), each serving a single function. These servers had internal storage and were connected to a data center network switch infrastructure.
Fast forward a decade, and virtualization started to emerge as a viable option. Servers were starting to move from the physical to the virtual, and we were left with fewer “boxes” in our data centers. The few that we did have were able to take advantage of centralized data storage systems (SAN and NAS). They still connected to our data center switches, but these had to support higher speeds to accommodate the additional host-to-host traffic and increasingly dense infrastructure.
Hyper-converged infrastructure enables tight system integration
Today, converged infrastructure sits at the heart of many modern data centers. Converged infrastructure brought together the host systems and the high-speed networking and created tight integration with the storage systems. When selecting components, a converged infrastructure can use best-of-breed storage and best-of-breed compute platforms. Some storage, compute and virtualization vendors partnered to create validated converged infrastructure designs, making it easy for organizations to get the absolute best platform and support model. A couple of examples include FlexPod, which combines Cisco compute and networking, NetApp storage, and the VMware hypervisor; and VxBlock, which swaps out the storage for an EMC system.
Converged infrastructure allows organizations to increase resources granularly as they need to. If more memory is needed in the hosts, it's simple to add more memory to the hosts. If more hosts are needed, they are easily added to the environment. If more storage is needed, it can be added at any time. Resources can be added and managed as the needs of the organization evolve.
Recently, manufacturers have been promoting hyper-converged infrastructure as a replacement for the converged data center design. With hyper-converged infrastructure, the compute and storage are once again brought together into a single node. This very much resembles our design of two decades ago, except it adds in the benefit of virtualization. The big selling point of hyper-converged infrastructure is simplicity. With the CPU, RAM, and storage all wrapped up into a node, you can add resources quickly and simply to a hyper-converged infrastructure stack. If you need more CPU, add a node. If you need RAM, add a node. If you need storage, add a node.
Be aware of limitations
Some hyper-converged infrastructure vendor solutions gloss over the fact that organizations may need more storage, for example, but have no need for additional CPU or RAM. Unfortunately, they could be stuck with a surplus of CPU and RAM just to get the storage they need. Likewise, it is possible to end up with a lot of wasted storage just to get enough RAM. But the thought is that simplicity trumps careful system sizing and design.
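The surplus-resource trade-off is easy to quantify. The sketch below uses hypothetical node specifications (the core counts, RAM, and storage figures are illustrative assumptions, not any vendor's actual configuration) to show how scaling a node-based stack purely for storage leaves CPU and RAM on the table:

```python
import math

# Hypothetical node specs for illustration only; real hyper-converged
# nodes vary widely by vendor and model.
NODE_CPU_CORES = 32      # CPU cores per node
NODE_RAM_GB = 512        # RAM per node, in GB
NODE_STORAGE_TB = 20     # usable storage per node, in TB

def nodes_needed_for_storage(required_tb: float) -> int:
    """Smallest whole number of nodes that meets a storage target."""
    return math.ceil(required_tb / NODE_STORAGE_TB)

def surplus(required_tb: float, needed_cores: int, needed_ram_gb: int) -> dict:
    """CPU and RAM purchased beyond need when scaling purely for storage."""
    n = nodes_needed_for_storage(required_tb)
    return {
        "nodes": n,
        "surplus_cores": n * NODE_CPU_CORES - needed_cores,
        "surplus_ram_gb": n * NODE_RAM_GB - needed_ram_gb,
    }

# Example: an environment that needs 200 TB of storage but only
# 64 cores and 1 TB (1024 GB) of RAM.
print(surplus(200, needed_cores=64, needed_ram_gb=1024))
# → {'nodes': 10, 'surplus_cores': 256, 'surplus_ram_gb': 4096}
```

In this illustrative case, meeting the 200 TB storage requirement means buying ten nodes, and with them 256 CPU cores and 4 TB of RAM beyond what the workload actually needs.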
Another limitation of hyper-converged infrastructure is that the design is tied to a single vendor. This is probably why these solutions are so heavily promoted and marketed; every vendor wants a bigger piece of the pie. No longer are you able to select the best-of-breed. Instead, you have a single vendor stack of hardware that provides all your compute and storage functions. Unfortunately, this can mean you lose more than just the freedom to choose. You may also lose the ability to integrate with legacy systems, since the storage platform will be proprietary and in a silo inside the hyper-converged infrastructure environment.
You also lose some of the benefits of enterprise-grade systems. Storage is the biggest victim of this design limitation. Instead of a purpose-built enterprise SAN, you are relegated to small arrays of disks inside each node that cluster together to create something that only remotely resembles the storage platform we are used to. If you are accustomed to sophisticated storage technologies like powerful, granular replication; snapshots; deduplication; compression and compaction; multiple protocol availability, including SMB; and storage tier options, you may be out of luck.
Closer examination is warranted
So, while hyper-converged infrastructure is being heralded as the latest and best technology available for your on-premises data center, it definitely warrants closer examination. The current iteration of hyper-converged infrastructure, with its one-size-fits-all approach to achieving simplicity, may not be right for every organization. As hyper-converged infrastructure matures, some of these limitations will likely be lifted, and it will become a viable option for more use cases.
With technology changing so quickly, it is sometimes difficult to tell which path to take. Do you pick cloud, converged infrastructure, hyper-converged infrastructure, or some combination? If you have been pondering this decision, we can help. Talk with us about your data center needs, and your RSM consultant can help you sort out all of the options and separate the facts from the hype of hyper-converged infrastructure. We can provide you with a clear path to move forward into the next decade with confidence.
To learn more about RSM’s consulting services and managed services offerings, please visit our website. You can also contact RSM’s technology and management consulting professionals at 800.274.3978 or email us.