StorageX is ready for Data Centric Computing


By StorageX Technology Inc.


In the era of big data, increasing data volumes and data-processing demands are bringing great challenges to all aspects of business. The amount of data that needs to be analyzed and understood will grow more than 100x by 2025, according to a research report released by HPE.

Especially with Moore's law approaching its physical limits, it is difficult for the industry to improve chip performance and reduce cost through manufacturing-process iteration alone. The industry has therefore begun looking for breakthroughs in new process nodes, packaging technologies (such as chiplets), computing architectures, and other approaches.


Moving compute is better than moving data


StorageX is the first near-data computational storage processor/solution provider to combine a powerful AI engine, data acceleration, and I/O acceleration in one package. Focusing on data-centric computing, it develops compute, data, and I/O processing closely integrated with the storage system and data node. It provides a cornerstone for data-heavy applications such as next-generation data centers, 5G edge computing, and cloud computing, addressing the requirements of low-latency, high-bandwidth services and solving the problem of coexisting "big data" and "fast data" scenarios.

The founder and CEO of StorageX said that the company's main starting point is to deploy computing power closer to the data, based on first principles, with data as the center of computing. "Moving compute is easier and more efficient than moving data, which will generate huge benefits for future data-heavy applications," he said.


01 The team averages more than 20 years of industry experience. The company is ready for the first release of its product, Lake Ti.


"Our main goal is to explore ways to change the computing architecture at the data level & storage node, and to do more innovation for future data-centric computing solutions."  The CEO said.

He also said that storage and data servers have long operated in a "passive" mode, focused only on capacity, read/write performance, and the like. StorageX will change them into data-cognitive systems, or autonomous data warehouses: smart data lakes capable of generating results and ready for future data-heavy applications, including data centers, streaming media services, autonomous driving, and other areas.

Since its establishment, StorageX has successively launched an algorithm prototype and an engineering prototype, and is ready for the first release of its high-performance product line, Lake Ti. The rapid pace of R&D is closely tied to the background of the core team.

The company's core team comes from Western Digital, Micron, HPE, Intel, Microsoft Azure, Nvidia, Tencent, and other industry-leading companies, with an average of more than 20 years of R&D experience in memory and controller chips, data acceleration, GPUs, and world-class data center architectures, and has successfully designed and iterated multiple data center silicon products and applications.

The founder and CEO of StorageX himself has more than two decades of industry experience in memory, enterprise storage, and data center architecture. He has worked on many storage and computing architecture innovations, combining R&D and product development backgrounds, and holds more than a dozen patents in memory, storage, and system architecture. He has led teams that developed and delivered industry-leading silicon and system-level products, which have been deployed in large-scale data centers such as AWS, Facebook, and Microsoft Azure, and in mainstream servers from top vendors such as Dell EMC, HPE, and IBM.

As for its product roadmap, targeting petabyte-scale storage nodes, StorageX expects to launch its first CSP in Q4 2022 to address high-performance near-data computing requirements, positioned for smart data lakes and cloud computing; applications include content delivery and real-time data analytics. At the same time, the company has expanded its partnerships into smart manufacturing and other areas.


02 Focusing on data as the center of computing, we are building strategic partnerships with industry experts.


In recent years, as industrial applications have demanded ever more computing power and energy efficiency, the traditional von Neumann architecture, a typical CPU-centric design that separates computing from storage, has run into an increasingly obvious "memory wall" problem. To solve it, many players first turned to compute-in-memory approaches, including PIM (processing-in-memory), near-memory computing, and other techniques.

In contrast, near-data computing through a CSP innovates and optimizes at the compute architecture and system level. For example, in data centers, applications often need to move large amounts of raw data from storage nodes to compute nodes over a meshed network. A data center is like a computer, but far more complicated and bulky.


Applications with high-bandwidth, low-latency, and huge data demands exert a strong pull, drawing computing resources closer to data


The industry will face enormous data growth in the future. Beyond the capacity and I/O performance that have drawn attention in the past, applications will pose greater challenges in response time, demanding higher throughput and real-time data: the requirement of fast data. As a result, a huge gravity forms around data, pulling computing resources toward it. Data-centric computing is becoming an important trend for data acceleration, following GPU-based compute acceleration and DPU/IPU network acceleration. Computational storage around the data lake, for data acceleration and deployment, will be a primary focus of the industry going forward.

In May 2022, server vendor Dell said that data-centric computational storage is an important future trend.


Therefore, in addition to increasing investment in computing power and network bandwidth, the industry should also consider how to improve the storage and data-processing efficiency of data centers. "Going back to first principles, the key to improving computing efficiency is to improve the efficiency of data movement; moving compute is more effective than moving data," StorageX CEO Steven said.

This is also the key reason StorageX chose a near-data computing architecture to resolve the conflict between storage and compute nodes. By combining the CSP with storage nodes, and thereby reducing the inefficient handling of raw data, the goals of low latency, high accuracy, and fast response can be achieved, and AI computing power can work efficiently as close to the data as possible.

In general, StorageX's CSP provides powerful data acceleration, efficient I/O processing, and AI computing, allowing applications to reduce data movement and improve compute efficiency, achieving low latency and high throughput. StorageX's CSP can also work independently, deploying effective computing power near the data for versatile applications.
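The data-movement savings described above can be sketched with a toy example. The interfaces and numbers below are purely illustrative, not StorageX's actual API: they simply contrast shipping every raw record to a compute node against running a selective filter on the storage node and shipping only the results.

```python
# Toy sketch: conventional host-side filtering vs. near-data filtering.
# All names and sizes here are illustrative assumptions, not a real CSP API.

RECORD_SIZE = 100  # assumed bytes per record, for the accounting below

def host_side_filter(records, predicate):
    """Conventional path: every raw record crosses the network to the
    compute node, which then filters them."""
    bytes_moved = len(records) * RECORD_SIZE
    results = [r for r in records if predicate(r)]
    return results, bytes_moved

def near_data_filter(records, predicate):
    """Near-data path: the predicate runs on the storage node itself
    (e.g. on a CSP), so only matching records cross the network."""
    results = [r for r in records if predicate(r)]
    bytes_moved = len(results) * RECORD_SIZE
    return results, bytes_moved

if __name__ == "__main__":
    records = list(range(1_000_000))
    hot = lambda r: r % 1000 == 0  # selective predicate: 0.1% of records match

    _, moved_host = host_side_filter(records, hot)
    _, moved_near = near_data_filter(records, hot)
    print(f"host-side filter moved {moved_host:,} bytes")
    print(f"near-data filter moved {moved_near:,} bytes")
```

With a selective predicate, the near-data path moves three orders of magnitude less data in this sketch, which is the intuition behind pushing compute toward the storage node.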

"Our biggest difference is that we have taken computing power to the extreme, integrating acceleration and high performance AI in the CSP near the storage node to achieve the goal of data-centric computing and acceleration." Steven said that in terms of efficiency, versatility, and comprehensiveness, StorageX’s goal is to provide innovative solutions for the industry to counter the challenge of data explosion & low latency applications.


