A New Standard of Measurement in AI to Drive Further Innovation
The O’Reilly AI Conference, NEW YORK, May 2, 2018 – SambaNova Systems, a Palo Alto-based startup building advanced machine learning and big data analytics platforms, today announced it has joined a cohort of industry pioneers in machine learning, including Baidu, Google, and Intel, to collaboratively define a new benchmark suite for machine learning. These industry leaders are joined by researchers from Harvard University, Stanford University, the University of California, Berkeley, the University of Minnesota, and the University of Toronto.
“SambaNova Systems is proud to be invited as an inaugural member of the consortium of MLPerf supporting companies,” said Kunle Olukotun, Chief Technologist, SambaNova Systems. “We are pleased to be able to contribute our extensive experience and expertise in this field to help solve the most difficult machine learning and data analytics problems.”
Data scientists and organizations developing and using ML software frameworks, ML hardware accelerators, and ML cloud platforms will benefit from the establishment of a standard suite of machine learning benchmarks. MLPerf aims to enable rapid, measurable performance improvements for machine learning workloads while reflecting the areas of ML that matter most to the commercial and research communities and for which open datasets and models exist.
About SambaNova Systems
SambaNova Systems has developed a disruptive next-generation computing platform to power machine learning and data analytics. The company was founded in 2017 based on technology from Stanford University Professors Kunle Olukotun and Chris Re, along with Rodrigo Liang, former Senior Vice President of Processor Development at Oracle. The company’s investors include Walden International, GV (formerly Google Ventures), Redline Capital, Atlantic Bridge Ventures, and several others. SambaNova is headquartered in Palo Alto, California. For more information, please contact Andrea Hanna at firstname.lastname@example.org.
About the MLPerf Benchmark Suite
The MLPerf effort aims to build a common set of benchmarks that enables the machine learning (ML) field to measure system performance for both training and inference from mobile devices to cloud services. We believe that a widely accepted benchmark suite will benefit the entire community, including researchers, developers, builders of machine learning frameworks, cloud service providers, hardware manufacturers, application providers, and end users.