Rather than shoehorning demanding tasks onto general-purpose configurations, or performing their own hardware upgrades and component swaps to meet task-specific requirements, many IT planners are looking to purchase converged architecture products--systems with CPU, RAM, storage and networking configurations optimized for a target task. In this post we will look at some of the best use cases and scenarios for converged architecture.
You could think of these converged architecture configurations as SKU variations that simplify and speed IT purchasing--similar to how many smartphones, notebook computers and other products come in popular pre-built setups ready for purchase.
There is almost no limit to the kinds of tasks converged-architecture products can be used for, but some that leverage the model most effectively include server-based office productivity applications; high-performance computing; highly scalable transaction processing; virtual desktop infrastructure (VDI) sessions; and server virtualization. Specific applications and workloads could include Microsoft Exchange and SharePoint, VMware vSphere, Citrix XenDesktop, Red Hat Enterprise Linux ... you get the idea. Depending on its overall IT activities, an organization could easily end up with several different configurations.
Here's a look at some of the use cases and scenarios that would call for different hardware configurations, and what that hardware might consist of.
"One very common IT workload is virtual desktop infrastructures (VDI), where employees are using thin-client devices or remote-desktop software at their end, and their OS instance, apps and data are all running in the cloud," says Beth Cohen, who's involved in creating new cloud networking product strategies at Verizon. "VDI is used on the trading floor, for call centers, for many knowledge worker tasks. Supporting virtual desktops doesn't require a huge amount of storage, but it will need a lot of compute power. VDI also needs a lot of network capacity, because the user experience here is very important. Users sit in front of their systems, so screens need snappy performance."
By contrast, says Cohen, "database analytics, which is needed for activities like weather prediction, tends to be compute- and storage-heavy, but not particularly network-heavy."
Ted Dunning, chief application architect at MapR Technologies and vice president of incubation at Apache Software Foundation, says big analytics jobs require big RAM.
"If you're looking at solving analytical problems on large operational data stores, hardware configurations are predominantly all characterized by large RAM—300 GBytes and above," says Dunning. "A very standard configuration for baseline use is a machine with 24 spinning disks, four or eight TBytes each, and a dual socket CPU with 384 GBytes or 512 GBytes of RAM. Some specialty boxes have a very large number of hard drives in them, storing up to 600 or 700 TBytes per server."
Other configurations are optimized for extreme storage performance, rather than size, Dunning notes: "These have solid-state drives, typically NVMe [Non-Volatile Memory Express] SSDs. We see some of these with as many as 48 NVMe drives and 12 controllers in one chassis, giving 10 to 20 GBytes per second throughput over network interfaces--10 GBytes/s read and 16 GBytes/s write."
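To put those throughput numbers in perspective, here is a quick back-of-envelope calculation of how long a full scan of a large dataset would take at the quoted read rate. The dataset size is a hypothetical example, not from the article, and real scan times depend heavily on the workload:

```python
# How long does a full sequential scan take at the quoted network throughput?
# read_gbytes_per_s comes from the quote; dataset_tbytes is a hypothetical example.
read_gbytes_per_s = 10   # low end of the quoted 10-20 GBytes/s range
dataset_tbytes = 100     # hypothetical dataset size

scan_seconds = dataset_tbytes * 1000 / read_gbytes_per_s
print(f"Full scan: {scan_seconds / 3600:.1f} hours")  # Full scan: 2.8 hours
```

In other words, even at this class of throughput, scanning a 100-TByte store takes hours--which is why these configurations pair fast storage with large RAM for the working set.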
The smaller hard-drive configurations are required by "anybody with a lot of data," says Dunning, "but the crazy-fast and large configurations are not as rare as you might think--for example, large retailers doing demand predictions. Other demanding activities include supply chain management, IT security, market analysis, financial services fraud detection, security analytics, and risk classification and analysis."
Yet more examples, Dunning says, include online ad targeting, real-time traffic projections, and search engines.
And, says Dunning, "It's easier to take a preconfigured box out of shipping and plug it in than to have to do some work on the hardware when it gets to you. So dealing with a hardware vendor who has expertise with the specific requirements of tasks like big data configurations is very helpful."
What workloads at your company would make sense in the converged architecture model? Please let us know in the comments section below.
Underwritten by HPE
Part of HPE’s Power of One strategy, HPE Converged Architecture 700 delivers infrastructure as one integrated stack. It provides proven, repeatable building blocks of infrastructure maintained by one management platform (HPE OneView), built and delivered exclusively by qualified HPE Channel Partners. This methodology saves considerable time and resources compared to the do-it-yourself (DIY) approach.
Based on a complete HPE stack consisting of HPE BladeSystem with Intel® Xeon® E5 v3-based HPE ProLiant BL460c Gen9 blades, HPE 3PAR StoreServ all-flash storage, HPE Networking, and HPE OneView infrastructure management software, the HPE Converged Architecture 700 can be easily modified to fit within your existing IT environment.