In a boost to enterprise-focused AI innovation in India, business software provider Zoho has launched its suite of homegrown large language models (LLMs) named Zia. Designed specifically for enterprise use, the models come in three sizes — 1.3 billion, 2.6 billion, and 7 billion parameters — and are built entirely in-house using Nvidia GPUs.


The size of an AI model is often described by the number of parameters it has. More parameters generally mean the model can learn more complex patterns, but they also require more data and more computational power, making the model costlier to train effectively.

Unlike consumer-facing AI platforms, Zia LLM is tightly integrated into Zoho’s more than 55 business applications and is built to support contextual enterprise workflows across functions such as CRM, finance, HR, and customer support. The company said the models are hosted within Zoho’s own data centers, allowing customers to keep their data entirely within the Zoho ecosystem, in sharp contrast to relying on third-party LLM providers.

“No customer data was used to train the model. We used a mix of publicly available and proprietary data,” said Ramprakash Ramamoorthy, research director (AI), Zoho, at the launch event. “We’ve built the models from the ground up, optimised for privacy, efficiency, and enterprise relevance,” he added. 

Today, most SaaS companies are spending a significant portion of their revenue on cloud and AI service providers, resulting in costs that ultimately get passed on to customers, Ramamoorthy noted. By investing heavily in its own data centers and foundational infrastructure, Zoho aims to control these expenses while innovating for a better overall experience.
“We didn’t want to be just a reseller of compute or AI,” he added. 

Zia LLM

With Zia LLM, Zoho aims to right-size AI models to fit specific use cases. For instance, the 1.3 billion parameter model may power structured data tasks like anomaly detection or revenue reporting, while the 7 billion parameter version can handle more complex, unstructured queries across enterprise data. 

This targeted approach reduces inference costs and improves energy efficiency, Ramamoorthy noted.

Zoho benchmarked Zia LLM against Meta’s Llama 2, a 7 billion parameter model, and Llama 3, an 8 billion parameter model. Zia LLM outperformed Llama 2 and achieved nearly 80% of Llama 3’s performance across industry-standard tasks, despite having fewer parameters. Typically, bigger models with more layers are costlier to run at inference time, while smaller models are cheaper to serve. 

Zoho plans to scale Zia LLM up to 32, 70, and 100 billion parameters. To support this, Zoho has partnered with Nvidia and is investing in its own GPU infrastructure.

In addition to the LLMs, Zoho launched its own Automated Speech Recognition (ASR) models for English, expanding its AI infrastructure stack. The model, designed to support enterprise features like sales call transcription and sentiment analysis, is also built in-house and benchmarked against models like OpenAI’s Whisper. 

The speech recognition model for English has 0.2 billion parameters. Zoho is also building ASR models for 15 Indian languages, each with 0.1 billion parameters, which will be launched in the coming months. “We went all frugal because we want to cut down on our GPU spending,” Ramamoorthy said.

The company said all AI capabilities are being bundled into existing Zoho pricing plans at no additional cost. Customers also have the option to plug in third-party LLMs from providers such as OpenAI and Anthropic via Zoho’s AI Bridge and Model Context Protocol (MCP) server.
