New Relic announces NVIDIA NIM integration to help with development and deployment through the power of Team Green's expert microservice

New Relic has announced it is adding NVIDIA NIM inference microservices to its platform to reduce the cost and complexity of developing, deploying, and monitoring Gen-AI applications.

According to the company, Gen-AI is key to achieving higher operational efficiency while reducing costs, but observability is essential to deploying AI models reliably. Rapid deployment and faster ROI are critical for organizations looking to gain a market advantage, so a holistic, real-time view of the AI application stack, spanning services, infrastructure, and the AI layer, will be the deciding factor.

With New Relic AI monitoring, parameters such as throughput, latency, cost, and data privacy can all be handled in one place, and that monitoring is deepened by the NVIDIA NIM integration, which supports a wide range of AI models including Databricks DBRX, Google's Gemma, Meta's Llama 3, Microsoft's Phi-3, Mistral Large and Mixtral 8x22B, and Snowflake's Arctic.
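
To make the monitoring side concrete, here is a minimal sketch of how an application calling a self-hosted NIM endpoint could record latency and token counts with the New Relic Python agent. It assumes a NIM container exposing its usual OpenAI-compatible API at http://localhost:8000/v1, an illustrative model name, and a standard newrelic.ini; the metric names are made up for the example, and this is not New Relic's packaged AI monitoring integration.

```python
# Minimal illustrative sketch, not New Relic's packaged AI monitoring integration.
# Assumptions: a NIM container serving an OpenAI-compatible API at http://localhost:8000/v1,
# the model name "meta/llama3-8b-instruct", and a valid newrelic.ini with a license key.
import time

import newrelic.agent
from openai import OpenAI

newrelic.agent.initialize("newrelic.ini")          # load agent settings (app name, license key)
newrelic.agent.register_application(timeout=10.0)  # make sure the agent connects in a short script

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used-for-local-nim")


@newrelic.agent.background_task(name="nim-chat-completion")
def ask(prompt: str) -> str:
    start = time.monotonic()
    response = client.chat.completions.create(
        model="meta/llama3-8b-instruct",  # assumed NIM model identifier
        messages=[{"role": "user", "content": prompt}],
    )
    latency_ms = (time.monotonic() - start) * 1000

    # Record latency and token usage as custom metrics alongside standard APM data.
    newrelic.agent.record_custom_metric("Custom/NIM/LatencyMs", latency_ms)
    if response.usage is not None:
        newrelic.agent.record_custom_metric("Custom/NIM/TotalTokens", response.usage.total_tokens)

    return response.choices[0].message.content


if __name__ == "__main__":
    print(ask("Summarize why observability matters for Gen-AI applications."))
    newrelic.agent.shutdown_agent(timeout=10)  # flush metrics before the script exits
```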

As such, the entire build, deploy, and monitor cycle is accelerated, leading to faster ROI.

Here are some use cases for AI monitoring:

  • Full AI stack visibility - Spot issues faster with a holistic view across apps, NVIDIA GPU-based infrastructure, AI layer, response quality, token count, and APM golden signals
  • Deep trace insights for every response - Fix performance and quality issues like bias, toxicity, and hallucinations by tracing the entire lifecycle of AI responses
  • Model inventory - Easily isolate model-related performance, error, and cost issues by tracking key metrics across all NVIDIA NIM inference microservices in one place
  • Model comparison - Compare the performance of NVIDIA NIM inference microservices running in production in a single view to optimize model choice based on infrastructure and user needs
  • Deep GPU insights - Analyze critical accelerated computing metrics such as GPU utilization, temperature, and performance states; understand context and resolve problems faster
  • Enhanced data security - In addition to the security advantage of NVIDIA's self-hosted models, New Relic lets you exclude sensitive data (PII) from the monitoring of your AI requests and responses (a simple application-side sketch follows this list)
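
On the data-security point, one complementary approach is to scrub obvious PII from prompts on the application side before they ever reach the model or the telemetry pipeline. The sketch below illustrates that idea only: the regex patterns and the scrub_pii helper are assumptions made for the example, not New Relic or NVIDIA APIs, and New Relic's own option to exclude request/response content from monitoring is configured separately in the agent settings.

```python
# Illustrative application-side PII scrubber applied to prompts before they are sent
# to a NIM endpoint or recorded as telemetry. Patterns and names are assumptions for
# this example, not part of New Relic's or NVIDIA's tooling.
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")


def scrub_pii(text: str) -> str:
    """Replace obvious email addresses and phone numbers with placeholders."""
    text = EMAIL_RE.sub("[REDACTED_EMAIL]", text)
    text = PHONE_RE.sub("[REDACTED_PHONE]", text)
    return text


prompt = "Contact jane.doe@example.com or +1 (555) 123-4567 about the invoice."
print(scrub_pii(prompt))
# -> "Contact [REDACTED_EMAIL] or [REDACTED_PHONE] about the invoice."
```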

The announcement follows New Relic's recent addition to NVIDIA's AIOps partner ecosystem, where the company aims to combine observability and AI to streamline IT operations and accelerate innovation through machine learning and its generative AI assistant.

New Relic offers more than 60 AI integrations, including NVIDIA GPUs and NVIDIA Triton Inference Server software, and users can sign up for a free account and use the service under a usage-based pricing model.