New NVIDIA NIM microservices are now accessible in Japan and Taiwan

NVIDIA has introduced four new NIM microservices tailored to the languages and needs of Japan and Taiwan.

As demand for AI becomes more localized, NVIDIA has responded with Llama-3-Swallow-70B, trained on Japanese-language data, and Llama-3-Taiwan-70B, trained on Mandarin data. These regionalized LLMs offer a deeper understanding of local culture, laws, regulations, and customs.

Additionally, the RakutenAI 7B model, built on Mistral-7B and trained on English and Japanese datasets, is available as two NIM microservices, one for Chat and one for Instruct. It demonstrated its capabilities by achieving the top ranking on the LM Evaluation Harness benchmark run from January to March 2024.

This accomplishment underscores the value of localized LLMs in delivering more accurate and nuanced communication by capturing cultural and linguistic subtleties.

These microservices are optimized for inference with the open-source NVIDIA TensorRT-LLM library and are available through the NVIDIA AI Enterprise platform as well as via industry-standard APIs. The NIM microservices for Llama 3 70B, which form the foundation of Llama-3-Swallow-70B and Llama-3-Taiwan-70B, deliver up to five times higher throughput, lowering the cost of running the models in production and improving the user experience by reducing latency.
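To make the API access concrete, here is a minimal sketch of querying a hosted NIM microservice through its OpenAI-compatible chat endpoint using the openai Python SDK. The base URL, environment-variable name, and model identifier below are assumptions for illustration; check the NVIDIA API catalog entry for the exact values of the model you use.

```python
# Minimal sketch: call a hosted NIM microservice via its OpenAI-compatible API.
# The endpoint URL, env var name, and model ID below are assumptions -- verify
# them against the NVIDIA API catalog before use.
import os

from openai import OpenAI  # pip install openai

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed hosted NIM endpoint
    api_key=os.environ["NVIDIA_API_KEY"],            # assumed environment variable
)

response = client.chat.completions.create(
    model="tokyotech-llm/llama-3-swallow-70b-instruct-v0.1",  # hypothetical model ID
    messages=[
        # "Please name three sightseeing spots in Tokyo."
        {"role": "user", "content": "東京の観光名所を3つ教えてください。"}
    ],
    max_tokens=256,
    temperature=0.7,
)

print(response.choices[0].message.content)
```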

These regional models have already been deployed in production to test markets and gather real-world data.

Preferred Networks in Japan is using a healthcare-specific version, Llama3-Preferred-MedSwallow-70B, trained on Japanese medical data, which has achieved top scores on the Japan National Examination for Physicians. In Taiwan, Chang Gung Memorial Hospital (CGMH) is developing a custom AI Inference Service (AIIS) to centralize LLM applications within the hospital, using Llama-3-Taiwan-70B to improve the clarity of medical language for patients.

Beyond healthcare, Pegatron in Taiwan plans to integrate Llama-3-Taiwan-70B with its existing PEGAAi Agentic AI System for both internal and external applications. Other notable adopters of Llama-3-Taiwan-70B include global petrochemical manufacturer Chang Chun Group, leading printed circuit board company Unimicron, technology media company TechOrange, online contract service LegalSign.ai, and generative AI startup APMIC.

For businesses looking to fine-tune AI models for specific processes and domains, NVIDIA AI Foundry offers a comprehensive platform. It combines popular foundation models, NVIDIA NeMo for fine-tuning, and dedicated capacity on NVIDIA DGX Cloud, providing a complete solution for creating custom foundation models packaged as NIM microservices.

Developers can also access the NVIDIA AI Enterprise software platform, which provides the security, stability, and support needed for production deployments. NVIDIA AI Foundry equips developers with the tools to easily create and deploy custom, regional-language NIM microservices, enabling AI applications that deliver culturally and linguistically appropriate results.
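For teams that self-host a microservice, the same OpenAI-compatible interface applies once the NIM container is running on local infrastructure. The sketch below assumes the service is exposed on port 8000 and that no API key is required locally; the served model name is hypothetical, and the models call shows what the running microservice actually exposes.

```python
# Minimal sketch: query a self-hosted NIM microservice running locally.
# The port, placeholder API key, and served model name are assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local NIM endpoint
    api_key="not-needed-locally",         # placeholder; a local deployment may not require a key
)

# Inspect which model(s) the running microservice serves.
for model in client.models.list().data:
    print("serving:", model.id)

response = client.chat.completions.create(
    model="llama-3-taiwan-70b-instruct",  # hypothetical served model name
    messages=[
        # "Please introduce Taipei 101 in Traditional Chinese."
        {"role": "user", "content": "請用繁體中文介紹台北101。"}
    ],
    max_tokens=256,
)

print(response.choices[0].message.content)
```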