The growth in demand for generative AI apps has led to a need for ever-larger databases to store the associated data (e.g. model training data). These databases tend to be hardware-intensive and, depending on the algorithms used to orchestrate them, they can suffer from high latency. Companies are often forced to make trade-offs between database cost, performance and accuracy.
But it doesn’t have to be this way, says Ohad Levi, the CEO and co-founder of Hyperspace. Hyperspace is using “domain-specific computing” to accelerate two specific database tasks: lexical searches and vector searches. Lexical searches are a type of keyword-based search that look for exact matches in a database, whereas vector searches consider the semantic meaning and context of the search query.
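To make the distinction concrete, here is a minimal, illustrative sketch in Python. It is not Hyperspace's API or implementation; the documents, the hand-written embedding vectors and the function names are all hypothetical stand-ins. A lexical search returns only documents containing the literal query terms, while a vector search ranks documents by the similarity of their embeddings to the query's embedding.

```python
import math

# Toy corpus: two documents about the same topic, worded differently.
documents = {
    "doc1": "return policy for damaged items",
    "doc2": "how to send back a broken product",
}

# Toy embeddings standing in for the output of a real embedding model.
embeddings = {
    "doc1": [0.90, 0.10, 0.30],
    "doc2": [0.85, 0.20, 0.35],
}

def lexical_search(query: str) -> list[str]:
    """Return documents that contain every query keyword verbatim."""
    terms = query.lower().split()
    return [
        doc_id for doc_id, text in documents.items()
        if all(term in text.lower().split() for term in terms)
    ]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def vector_search(query_embedding: list[float]) -> list[tuple[str, float]]:
    """Rank all documents by semantic similarity to the query embedding."""
    scores = [(doc_id, cosine(query_embedding, emb)) for doc_id, emb in embeddings.items()]
    return sorted(scores, key=lambda s: s[1], reverse=True)

# "return policy" only matches doc1 lexically, even though doc2 covers the
# same topic; a vector search can surface both, ranked by similarity.
print(lexical_search("return policy"))       # ['doc1']
print(vector_search([0.88, 0.15, 0.32]))     # [('doc1', ...), ('doc2', ...)]
```

Production systems typically combine both: lexical search for precision on exact terms, vector search for recall on paraphrased or contextual queries.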
Levi claims that Hyperspace’s instances, which leverage a combination of FPGAs and GPUs, can deliver up to 10 times faster searches than traditional, non-accelerated databases.
“Our product helps companies dealing with large-scale data retrieval, particularly in AI and generative AI applications,” Levi told TechCrunch. “Unstructured data is outpacing traditional search capabilities. Data retrieval solutions must support both lexical and vector search to meet current market demands.”
Prior to launching Hyperspace, Levi was an optimization engineer at Intel and then a product marketing lead at HP. He says his time in Big Tech left him frustrated with the limitations of legacy search solutions, which led him to partner with ex-Intel design consultant Max Nigri to found Hyperspace.
Hyperspace doesn’t sell its instances. Instead, it sells access to managed database software running on those instances (hosted on AWS for now). Hyperspace’s databases can handle various types of structured and unstructured data, including videos, images and text, and are priced according to size and query volume.
“Hyperspace is a cloud-native managed database that works as a software-as-a-service model, priced per usage,” Levi explained. “Our team is able to design customized AI infrastructure solutions to help enterprises solve their search challenges.”
Hyperspace’s performance gains are impressive, if true; Levi says that the company’s instances also deliver 5x higher throughput at 50% lower cost than a typical database. (Those are averages; as one specific point of comparison, Levi claims Hyperspace is faster than Elastic.) But can Hyperspace convince companies to use a newcomer database platform when there are so many incumbents, like Azure, AWS and Google Cloud, to choose from?
Levi says yes, and he claims that Hyperspace is already seeing some early customer traction. The Tel Aviv-based firm has inked deals with enterprises in the fraud prevention and e-commerce spaces, including Forter, nSure and Renovai, and tripled its annual recurring revenue and total contract volume over the last year.
Hyperspace also recently closed a $9.5 million seed funding round led by MizMaa with participation from JVP and toDay Ventures. Levi says that the money will be put toward scaling up Hyperspace’s database offering to “thousands” of instances and launching a free, entry-level plan.
“Hyperspace has an entire pipeline of new innovative products that will drive the search market forward and support the needs of our enterprise and small- and medium-sized clients,” Levi said. “We’re not seeing any headwinds. Every generative AI system is a search system, and search is becoming harder than before. The need for better AI infrastructure is growing daily, and with more data, the need for better search applications is becoming more apparent.”