The combination of bandwidth, capacity, and nonvolatility begs for an RDMA solution to interconnect clustered servers, and the faster memory leaves plenty of spare bandwidth for sharing. First-generation products will access this RDMA through PCIe, but that looks likely to be a bottleneck, and as memory-sharing ideas mature we might see some form of direct transfer to LAN connections. Alternatively, extending a bus such as OmniPath into a cluster interconnect could remove the PCIe latencies. Either way, many more IOs are needed to keep the HMC system busy, especially if one considers the need to write to multiple locations. HMC also implies that servers will become much smaller physically. Local disk storage may well migrate away from the traditional 3.5-in format to M.2 or 2.5-in SSDs, and the traditional disk caddy will disappear as well. I suspect the sweet-spot server will be a single- or dual-module unit that is 1/4U wide and 1U high.
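A rough sanity check makes the PCIe bottleneck claim concrete. This sketch takes the 360 GBps entry-level HMC figure quoted in this excerpt and compares it with the usable bandwidth of a PCIe 3.0 x16 link (roughly 15.75 GB/s after 128b/130b encoding, a figure assumed here, not taken from the text):

```python
# Back-of-envelope check: can a PCIe 3.0 x16 link keep an HMC module busy?
# 360 GB/s is the entry-level HMC rate quoted in the text; the PCIe figure
# is 8 GT/s per lane * 16 lanes * 128/130 encoding, converted to bytes.

HMC_BANDWIDTH_GBPS = 360.0                       # entry-level HMC rate (from text)
PCIE3_X16_GBPS = 8.0 * 16 * (128 / 130) / 8      # usable GB/s of a Gen3 x16 link

links_needed = HMC_BANDWIDTH_GBPS / PCIE3_X16_GBPS
print(f"PCIe 3.0 x16 usable bandwidth: {PCIE3_X16_GBPS:.2f} GB/s")
print(f"Gen3 x16 links needed to match one HMC module: {links_needed:.1f}")
```

With one x16 link delivering under 16 GB/s against 360 GB/s of memory bandwidth, the host interface is more than an order of magnitude short, which is why direct-to-fabric transfers look attractive.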
The removal of all those driver transistors means more compute cores can be incorporated, serviced by the faster memory. Servers will shrink in size even while becoming much more powerful. HMC's impact on system design will be substantial. AMD is planning High-Bandwidth Memory (HBM) with NVidia, while Intel has a version of HMC that will be delivered on Knight's Landing, the top end of its Xeon family, with 72 cores onboard.
The HMC idea is that, by packing the CPU and DRAM on a silicon substrate as a module, the speed of the DRAM interface goes way up, while power goes way down, since the distance between devices is so small that driver transistors are no longer needed. The combination of capacity and performance makes the HMC approach look like a winner across a broad spectrum of products, ranging from servers to smartphones. In the latter, it looks like packaging the CPU in the same stack as the DRAM is feasible. Moreover, the work on NVDIMM will likely lead to nonvolatile storage planes in the HMC model, too. A typical module of HMC might thus consist of a couple of terabytes of DRAM and several more terabytes of flash or X-Point memory. Performance-wise, we are looking at transfer rates beginning at 360 GBps and reaching 1 TBps in the 2018 timeframe, if not earlier. As is common in our industry, there are already proprietary flavors of HMC in the market. HMC is CPU-agnostic, so expect support out of x64, ARM, and GPU solutions. (James O'Reilly, in Network Storage, 2017: The Hybrid Memory Cube)