As an analyst with Evaluator Group, I receive storage industry briefings several times a week. While many are standard product updates, some cover new products and technologies that stand out. This blog highlights four such products in three technology areas: computational storage, software-defined servers and micro edge nodes.
Computational storage embeds processing power in an enterprise SSD so the drive can run applications or offload some computational tasks from the host CPU. SSDs have always had onboard processing to handle the internal data movement inherent in solid-state storage devices (wear leveling, garbage collection, etc.). This technology – also called “in situ processing” – enhances that processing capability and applies it to external tasks. The benefits are reduced data movement between storage and compute, and CPU cycles freed up for other operations. NGD Systems and ScaleFlux offer this technology.
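To see why in-situ processing cuts bus traffic, consider a search operation. The sketch below is illustrative only (not vendor code, and the record size is an assumption): in the conventional model every record crosses the bus to the host before filtering, while in the computational-storage model the drive's embedded processor applies the predicate and only matches cross the bus.

```python
RECORD_SIZE = 4096  # bytes per record; an assumed value for illustration

def host_side_search(records, predicate):
    """Conventional model: all records move to the host, then the host filters."""
    bytes_moved = len(records) * RECORD_SIZE
    matches = [r for r in records if predicate(r)]
    return matches, bytes_moved

def in_situ_search(records, predicate):
    """Computational-storage model: the drive filters internally,
    so only matching records cross the data bus."""
    matches = [r for r in records if predicate(r)]
    bytes_moved = len(matches) * RECORD_SIZE
    return matches, bytes_moved

if __name__ == "__main__":
    data = list(range(1000))
    wanted = lambda r: r % 100 == 0  # 1% selectivity
    m1, host_bytes = host_side_search(data, wanted)
    m2, situ_bytes = in_situ_search(data, wanted)
    assert m1 == m2  # same results, very different traffic
    print(f"host-side: {host_bytes} bytes moved; "
          f"in-situ: {situ_bytes} bytes moved")
```

With 1% of records matching, in-situ filtering moves roughly 1% of the bytes; the actual savings depend on selectivity, which is how a vendor can plausibly claim order-of-magnitude reductions for typical searches.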
NGD Systems embeds an ARM processor and up to 8GB of DRAM in PCIe Gen3 x4 storage devices, enabling them to run Linux-based applications. NGD drives provide 8TB to 64TB of raw capacity in AIC (add-in card), U.2, M.2 and EDSFF (Enterprise & Data Center SSD Form Factor) form factors. According to the vendor, this technology can deliver a 10-times reduction in data bus traffic during a typical search operation.
Instead of a general-purpose CPU, ScaleFlux puts an FPGA (field-programmable gate array) on each SSD, programmed for one specific function: erasure coding, compression/decompression or high-level sorting. Each SSD is thus a fixed, dedicated engine for a single algorithm, so you would buy a different drive for each of the three functions. ScaleFlux offers PCIe Gen3 x4 AIC and U.2 form factors with up to 8TB of raw capacity.
Software-defined storage is a common term in IT and a technology category that Evaluator Group supports with the EvaluScale SDS Comparison Matrix and associated research. Now TidalScale offers “software-defined servers.” This is server virtualization, but the inverse of what a hypervisor does: whereas VMware, Hyper-V and KVM consolidate many individual workloads onto a single physical server, TidalScale creates a single virtual server out of many physical servers. The result is a massively scalable compute infrastructure that can support enormous in-memory applications.
TidalScale runs its “hyperkernel” software on Ethernet-connected physical servers, mapping their CPU cores, memory and I/O to a single, virtual, software-defined server instance. When an application needs data resident on another physical server, or more CPU cores than are available locally, the hyperkernels on each server communicate to match resources to the load in real time. This means either copying the data from the server where it resides or moving the application and its associated data to another server that has free resources.
All this happens at machine speed using sophisticated machine learning algorithms, so it is transparent to the application, and the load balancing goes on continuously in the background. The effect is to let applications that exceed the resource capacity of any single physical server run on a much larger virtual one. TidalScale is certified with Red Hat Enterprise Linux and SUSE and has been shipping for about 2½ years.
Micro Edge Computing Node
Edge computing is the movement of applications away from the data center, closer to where the data is generated or collected, in support of things like remote testing, manufacturing or IoT applications. “The edge” could also encompass distributed offices or retail locations that need to run software on-site. Hyperconverged infrastructure (HCI) is an ideal technology for many of these edge computing use cases and most HCI vendors have come out with “edge nodes” to address this need, typically smaller configurations of their regular models, with a single CPU and less memory and storage capacity. Bucking this trend, Scale Computing has released an edge node that runs on the Intel NUC 10 microcomputer.
With a footprint of 4 inches by 4 inches by 1½ inches, the Scale Computing HE150 runs the complete HyperCore OS, the same software that runs on all of Scale’s other HCI nodes. This is a notable accomplishment, given that the HE150 currently tops out at 64GB of DRAM and a single six-core CPU. Scale Computing embeds the KVM hypervisor in each node (there is no option to use a different hypervisor), which helps keep costs down. Each node has up to 2TB of SSD capacity, plus a single Ethernet connection.
As hyperconverged infrastructure, the HE150 must run in a cluster of three or more nodes. To make that work, Scale developed an SDN overlay it calls an “Edge Fabric.” It connects the nodes in a ring topology using Border Gateway Protocol (BGP) routing, a custom daemon and APIs, providing internode communication and data transfer without a switch. According to the vendor, a cluster of three HE150 nodes can cost as little as $5,000.
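The switchless ring idea can be illustrated with a few lines of code. This is an assumed sketch, not Scale Computing's implementation: each node peers only with its immediate neighbors, and traffic between non-adjacent nodes is forwarded around the ring (the role BGP routing plays in the Edge Fabric). The node names are hypothetical.

```python
def ring_neighbors(nodes):
    """Map each node to its (left, right) peers in a ring topology,
    so every node has exactly two direct connections and no switch
    is required."""
    n = len(nodes)
    return {nodes[i]: (nodes[(i - 1) % n], nodes[(i + 1) % n])
            for i in range(n)}

cluster = ["he150-a", "he150-b", "he150-c"]  # hypothetical node names
for node, (left, right) in ring_neighbors(cluster).items():
    print(f"{node}: peers with {left} and {right}")
```

In a three-node cluster every node is adjacent to every other, so the ring is fully connected; the forwarding behavior only becomes visible as the cluster grows.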