SAN JOSE, Calif., July 26, 2016 – As the volume and breadth of data continue to proliferate, the deluge presents businesses with both opportunities and challenges. On one hand, there is genuine benefit in leveraging growing volumes of information to plan, act and react to continuously evolving business environments. On the other hand, the time, energy and resources required to collect and interpret this incoming data can easily outpace the value obtained. While turning to in-memory data processing applications built on DRAM to secure real-time analysis may help satisfy the former, the massive amount of system memory required to sustain DRAM’s extremely high performance, as well as the capacity and cost constraints of the technology, exacerbates the latter.
As platforms like Apache Spark gain popularity among organizations looking to make intelligent, real-time business decisions, the need for memory capacity that enables the fastest access and the best system-level performance grows with them. However, deploying excessive numbers of servers to design around DRAM capacity constraints leads to inefficient, high-cost deployments. Instead, an approach that puts more memory in each server by utilizing high-capacity NAND flash is better able to deliver the combination of business and economic value needed in Big Data environments, experts at Inspur Systems and Diablo Technologies found through close collaboration.
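To illustrate why Spark is so sensitive to memory capacity, the sketch below shows the common pattern of pinning a working set in memory with persist(); the dataset path and column names are hypothetical, and the point is that once the cached data exceeds available memory, Spark must recompute or spill partitions, creating exactly the pressure described above.

```python
from pyspark import StorageLevel
from pyspark.sql import SparkSession

# Minimal sketch of a memory-bound Spark job (paths and columns are hypothetical).
spark = SparkSession.builder.appName("memory-bound-analytics").getOrCreate()

events = spark.read.parquet("hdfs:///data/events")  # hypothetical dataset
events.persist(StorageLevel.MEMORY_ONLY)            # pin the working set in memory

# Repeated queries over the cached data are fast only while it fits in memory;
# partitions that do not fit must be recomputed from source on each access.
events.groupBy("user_id").count().show()
events.filter(events.status == "error").count()

spark.stop()
```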
Diablo Technologies’ Memory1 is the first memory DIMM to expose NAND flash as standard application memory. This revolutionary tiered-memory solution delivers the industry’s highest-capacity byte-addressable memory modules, offering significantly higher capacity than DRAM DIMMs and enabling dramatic increases in application memory per server. The added capacity brings substantial performance advantages, due to increased data locality and reduced access times. Memory1 also minimizes Total Cost of Ownership (TCO) by reducing the number of servers required to support memory-constrained applications like Apache Spark.
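Because the flash-backed capacity is presented to the operating system as ordinary application memory, taking advantage of it on the Spark side is largely a matter of provisioning larger executors. The configuration sketch below is illustrative only; the numeric values are assumptions for the example, not vendor-recommended settings.

```python
from pyspark.sql import SparkSession

# Illustrative executor sizing for a server with a large application-memory pool.
# All values here are assumptions for the sketch, not recommended settings.
spark = (
    SparkSession.builder
    .appName("large-memory-executors")
    .config("spark.executor.instances", "4")
    .config("spark.executor.memory", "192g")   # larger heap per executor (assumed)
    .config("spark.memory.fraction", "0.6")    # Spark's default execution/storage fraction
    .getOrCreate()
)
```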
“Dramatically expanding the application memory available in a single server directly addresses key issues found in traditional, DRAM-only deployments for Big Data processing platforms like Apache Spark,” said Maher Amer, Chief Technology Officer at Diablo Technologies. “Because each server is capable of doing more work, jobs can be more efficiently handled with fewer servers, which also minimizes the associated networking and operational expenses. A tiered NAND flash approach is key to providing the benefits of real-time analysis while minimizing the expense required to collect and interpret valuable information.”
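To make the fewer-servers argument concrete, here is a hedged back-of-envelope calculation; the working-set size and per-server capacities are purely illustrative assumptions, not measured or published figures.

```python
import math

# Back-of-envelope sizing: servers needed to hold a given in-memory working set.
# All capacities below are illustrative assumptions, not product specifications.
working_set_tb = 12            # hypothetical Spark working set to keep in memory
dram_per_server_tb = 0.5       # assumed capacity of a DRAM-only server
tiered_per_server_tb = 2.0     # assumed capacity of a flash-backed memory server

servers_dram = math.ceil(working_set_tb / dram_per_server_tb)      # 24
servers_tiered = math.ceil(working_set_tb / tiered_per_server_tb)  # 6

print(f"DRAM-only servers needed:     {servers_dram}")
print(f"Tiered-memory servers needed: {servers_tiered}")
```

Under these assumed capacities, the same working set fits in a quarter of the servers, which is where the networking and operational savings in the quote above come from.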
Additional information about how Memory1 improves Spark’s Return on Investment is available by downloading the Diablo Technologies and Inspur Systems whitepaper “Igniting Apache Spark with Memory1” at http://www.inspursystems.com/downloads/Inspur%20Spark%20Whitepaper.pdf.