Modern IT Infrastructure Challenges

Today’s IT environment is marked by complexity, cost, and inflexibility that inhibit IT staff from effectively supporting the business. Key challenges include:

  • Inability to innovate. An estimated 70% of IT staff time is spent just “keeping the lights on”: maintenance, upgrades, patches, and the like. Only the remaining 30% goes to new projects and innovation that push the business forward.
  • Complexity and decreasing employee productivity. The typical data center faces the complex challenge of assimilating many different IT stacks: primary storage, servers, backup deduplication appliances, WAN optimization appliances, SSD acceleration arrays, public cloud gateways, backup applications, replication applications, and other special-purpose appliances and software. IT staff must somehow cobble these together, which inevitably results in poor utilization, idle resources, and high labor costs. This challenge is typical of data center consolidation projects.
  • Multiple points of management. Many modern infrastructures require dedicated staff with specialized training to manage the interface of each stand-alone appliance. Limited staff combined with multiple points of management is a typical problem in remote offices.
  • Limited data mobility. As organizations virtualize, they want the mobility benefits of virtual machines. VMware vMotion can shift a virtual machine from server to server or data center to data center, but in today’s IT infrastructure the data associated with the virtual machine remains far less mobile (see the relocation sketch after this list). This limitation is most evident in data migration scenarios.
  • Inflexible scalability. Predicting infrastructure requirements three years into the future is neither practical nor efficient. Data center managers need a solution that can scale out with growing demand without increasing complexity. Likewise, quickly scaling down infrastructure or rebalancing workloads is time-consuming and difficult. Test and development environments illustrate this issue.
  • Poor IT and business agility. The complexities of legacy infrastructure burden IT teams in day-to-day management, and the inherently inflexible nature of these technologies burdens IT teams, and therefore the business: for example, it slows the ability to quickly roll out new applications or build new capabilities the business requires. More technically, legacy infrastructure is limited in its ability to restore, replicate, and clone data efficiently at scale, both locally and to remote data centers, which imposes economic limits on desired data management and protection practices. Tier-1 applications are a common example of this scenario.
  • Cost. Highly functional, high-performance data storage depends on an expensive (in both CAPEX and OPEX) stack of technologies: storage area networks (SAN), network-attached storage (NAS), target backup devices, WAN optimization appliances, and traditional standalone servers. VDI is a common example of this use case.
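
To make the data mobility gap concrete, the sketch below uses VMware’s pyVmomi SDK to request a VM relocation. The vCenter address, credentials, and object names are hypothetical placeholders; the point is that moving the compute (the host field) is fast, while moving the backing data (the datastore field) is exactly where legacy infrastructure bogs down.

```python
# Minimal sketch: relocating a VM with pyVmomi (VMware vSphere SDK).
# All names, addresses, and credentials are hypothetical placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_by_name(content, vimtype, name):
    # Walk the inventory for the first object of the given type with this name.
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    return next(obj for obj in view.view if obj.name == name)

ctx = ssl._create_unverified_context()  # lab convenience; verify certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    vm = find_by_name(content, vim.VirtualMachine, "app-vm-01")

    spec = vim.vm.RelocateSpec()
    # Compute mobility (vMotion): only CPU and memory state move, so it is fast.
    spec.host = find_by_name(content, vim.HostSystem, "esxi-02.example.com")
    # Data mobility (Storage vMotion): every block must move, slow on legacy storage.
    spec.datastore = find_by_name(content, vim.Datastore, "datastore-remote")

    task = vm.RelocateVM_Task(spec)  # asynchronous; poll task.info for completion
finally:
    Disconnect(si)
```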

The Path to Hyperconvergence

The first step on the path to hyperconvergence, beginning in 2009, was the development of Integrated Systems and reference architectures. Consortiums formed around existing legacy technologies with the goal of pre-integrated, pre-tested, and pre-validated solutions combining servers, storage, and networking to speed deployment for end users. The idea was good; however, the promise did not always match reality. The time from purchase order to deployment still ran to months, and the integration work between disparate technologies from different vendors could cause delays. Moreover, this first step toward convergence covered only compute, storage, and networking; it did not address the other technologies needed to deliver full data center services while minimizing redundancy and reducing OPEX and CAPEX. Customers still required replication software, separate disaster recovery appliances, backup software, backup appliances, WAN optimization appliances, cloud gateways, flash caches, and more.

The next step on the path to hyperconvergence was the delivery of “partial convergence.” Vendors entered the market with incremental innovation, creating a shared resource pool from previously disparate compute, network, and storage resources: they effectively built virtual SAN appliances (VSAs) and packaged them inside servers running a hypervisor. Partial convergence decreased deployment times but left something out: it failed to solve the data problem. These solutions did not eliminate the redundancy of data or the redundancy of I/O, and they did not provide integrated data protection or a globally federated architecture with VM-centricity and mobility.
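
Deduplication, the heart of the data problem partial convergence left unsolved, comes down to storing each unique block once and referencing it everywhere else. The following is a minimal, illustrative sketch of the idea, not SimpliVity’s implementation; production systems deduplicate inline, across nodes and sites, and add compression on top.

```python
# Minimal sketch of block-level, content-addressed deduplication:
# identical blocks are stored once and shared by reference.
import hashlib
import os

BLOCK_SIZE = 4096  # fixed-size 4 KiB blocks, a common unit

class DedupeStore:
    def __init__(self):
        self.blocks = {}  # SHA-256 digest -> block bytes, stored once
        self.files = {}   # file name -> ordered list of digests

    def write(self, name, data):
        recipe = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(digest, block)  # skip blocks already seen
            recipe.append(digest)
        self.files[name] = recipe

    def read(self, name):
        return b"".join(self.blocks[d] for d in self.files[name])

store = DedupeStore()
golden = os.urandom(100_000)       # stand-in for a golden VM image
store.write("vm1.vmdk", golden)    # first copy stores every block
store.write("vm2.vmdk", golden)    # second copy stores no new blocks
logical = sum(len(store.read(f)) for f in store.files)
physical = sum(len(b) for b in store.blocks.values())
print(f"logical: {logical} bytes, physical: {physical} bytes")  # roughly 2:1 here
assert store.read("vm2.vmdk") == golden
```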

The Evolution of Convergence

Hyperconvergence Defined

SimpliVity OmniCube

According to Taneja Group: “We believe hyperconvergence occurs when you fundamentally architect a new product with the following requirements:

  • A genuine integration of compute, networking, storage, server virtualization, primary storage data deduplication, compression, WAN optimization, storage virtualization, and data protection. No need for separate appliances for disk-based data protection, WAN optimization, or backup software. Full pooling and sharing of all resources. A true data center building block. Just stack the blocks and they reform into a larger pool of complete data center infrastructure.
  • No need for separate acceleration or optimization solutions to be layered on or patched in. Performance (auto-tiering and caching) and capacity optimizations are all built in. As such, no need for separate flash arrays or flash caching software.
  • Scale-out to web scale, locally and globally, with the system presenting one image. Manageable from one or more locations, delivering radical improvement in deployment and management time due to automation.
  • VM centricity, i.e. full visibility and manageability at VM level. No LUNs or volumes or other low-level storage constructs.
  • Policy-based data protection and resource allocation at a VM level.
  • Built-in cloud gateway, allowing the cloud to become a genuine, integrated tier for storage or compute, or both.”
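
To make the policy-based, VM-level protection requirement above concrete, here is a hypothetical sketch of what a per-VM protection policy could look like as data. Everything in it, the policy fields, tier names, and VM names, is invented for illustration and does not represent SimpliVity’s actual interface.

```python
# Hypothetical sketch of VM-level, policy-based data protection.
# Fields, tier names, and VMs are invented for illustration only.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class ProtectionPolicy:
    name: str
    backup_every_minutes: int    # local backup frequency
    retention_days: int          # how long backups are kept
    replicate_to: Optional[str]  # remote site for DR copies, if any

GOLD = ProtectionPolicy("gold", backup_every_minutes=10,
                        retention_days=90, replicate_to="dr-site-east")
BRONZE = ProtectionPolicy("bronze", backup_every_minutes=240,
                          retention_days=14, replicate_to=None)

# Policies attach to virtual machines, not to LUNs or volumes: the VM
# itself is the unit of management.
assignments = {"erp-db-01": GOLD, "test-web-07": BRONZE}

for vm, policy in assignments.items():
    print(f"{vm}: backup every {policy.backup_every_minutes} min, "
          f"keep {policy.retention_days} days, "
          f"replicate to {policy.replicate_to or 'none'}")
```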

Hyperconvergence is the ultimate in an overall trend of convergence that has hit the market in recent years. Convergence is intended to bring simplicity to increasingly complex data centers. At the highest level, hyperconvergence is a way to enable cloud-like economics and scale without compromising the performance, reliability, and availability you expect in your own data center.

SimpliVity’s OmniCube is the market’s only hyperconvergence platform that combines a single shared resource pool, a data virtualization platform, and a globally federated architecture.

PTS and SimpliVity Offer a Free eBook: Hyperconverged Infrastructure for Dummies

From PTS and partner SimpliVity, download a free copy of Hyperconverged Infrastructure for Dummies (eBook in PDF format) by Scott D. Lowe.

  • Use hyperconverged infrastructure to simplify IT & reduce total cost of ownership (TCO)
  • Achieve economics & agility of large scale web companies in your data center
  • Solve capacity, performance, data protection, mobility & management challenges

Other SimpliVity Resources:

To learn more about PTS consulting services to support Data Center Consulting, Data Center Management, and Enterprise IT Consulting, contact us or visit:

Back to SimpliVity Manufacturer Page