
Monitor OpenAI GPT application usage in New Relic

If you’re already using or plan to use OpenAI’s GPT-3 and GPT-3.5 at scale, it’s important to monitor metrics like average request time, total requests, and total cost. Doing so can help you ensure that OpenAI GPT Series APIs like ChatGPT are working as expected, especially when those services are required for important functions like customer service and support.

Enhance ML models, reduce costs, and achieve better performance with your GPT-3 models.

Monitor OpenAI with our integration

New Relic is focused on delivering valuable AI and ML tools that provide in-depth monitoring insights and integrate with your current technology stack. Our industry-first MLOps integration with OpenAI’s GPT-3, GPT-3.5, and beyond provides a seamless path for monitoring this service. Our lightweight library helps you monitor OpenAI completion queries while recording useful statistics about your ChatGPT requests in a New Relic dashboard.

With just two lines of code, you can import the monitor module from the nr_openai_monitor library and automatically generate a dashboard that displays key GPT-3 and GPT-3.5 performance metrics such as cost, requests, average response time, and average tokens per request.
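The article doesn't show the library's actual interface, so as a rough illustration of the kind of per-request statistics it records (response time, token usage, cost), here is a self-contained sketch built around a hypothetical decorator. The names `monitor_completion`, `request_stats`, and `fake_completion` are assumptions for illustration, not the real nr_openai_monitor API.

```python
import time
from functools import wraps

# Hypothetical stand-in for the per-request statistics the library records;
# all names and fields here are illustrative only.
request_stats = []

def monitor_completion(price_per_1k_tokens):
    """Record response time, token usage, and cost for each completion call."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            response = fn(*args, **kwargs)
            elapsed = time.perf_counter() - start
            tokens = response["usage"]["total_tokens"]
            request_stats.append({
                "response_time_s": elapsed,
                "total_tokens": tokens,
                "cost_usd": tokens / 1000 * price_per_1k_tokens,
            })
            return response
        return wrapper
    return decorator

@monitor_completion(price_per_1k_tokens=0.12)
def fake_completion(prompt):
    # Stand-in for an OpenAI completion call; returns a response-shaped dict.
    return {"choices": [{"text": "Hello!"}], "usage": {"total_tokens": 42}}

fake_completion("Say hello")
```

In the real integration, this bookkeeping happens inside the library: you import the monitor module, initialize it, and every completion query is tracked without per-call wrappers.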

To get started, install the OpenAI Observability quickstart from New Relic Instant Observability (I/O). Watch the Data Bytes video or visit our library repo for further instructions on how to integrate New Relic with your GPT apps and deploy the custom dashboard.

Key observability metrics for GPT-3 and GPT-3.5

Using OpenAI’s most powerful Davinci model costs $0.12 per 1,000 tokens, which can add up quickly and make it difficult to operate at scale. One of the most valuable metrics to monitor, then, is the cost of operating ChatGPT. With GPT-3 and GPT-3.5 integrated with New Relic, our dashboard provides real-time cost tracking that surfaces the financial implications of your OpenAI usage and helps you identify more efficient use cases.
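To make that rate concrete, a small helper (hypothetical, for illustration; model prices change, so treat the constant as an input) converts token counts to dollars:

```python
# Illustrative cost arithmetic at the $0.12 per 1,000 tokens rate quoted
# above; the function name and constant are assumptions for this sketch.
DAVINCI_PRICE_PER_1K_TOKENS = 0.12

def request_cost_usd(total_tokens, price_per_1k=DAVINCI_PRICE_PER_1K_TOKENS):
    """Cost in USD for a single request's token usage."""
    return total_tokens / 1000 * price_per_1k

# One million tokens a day at this rate is $120 per day,
# which is why cost is worth tracking in real time.
daily_cost = request_cost_usd(1_000_000)
```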

Another important metric is average response time. The speed of your ChatGPT, Whisper API, and other GPT requests can help you improve your models and quickly deliver the value behind your OpenAI applications to your customers. Monitoring GPT-3 and GPT-3.5 with New Relic will give you insight into the performance of your OpenAI requests, so you can understand your usage, improve the efficiency of ML models, and ensure that you’re getting the best possible response times.

Other metrics included on the New Relic dashboard are total requests, average tokens per request, model names, and samples. These metrics provide valuable information about the usage and effectiveness of ChatGPT and OpenAI, and can help you enhance performance around your GPT use cases.
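As a minimal sketch of how these dashboard numbers reduce from per-request records, the snippet below aggregates a few sample requests into summary metrics. The record fields and function name are assumptions for illustration, not New Relic's actual schema.

```python
# Aggregate per-request records into dashboard-style summary metrics
# (total requests, average response time, average tokens per request,
# total cost); field names here are illustrative only.
def summarize(requests):
    total = len(requests)
    return {
        "total_requests": total,
        "avg_response_time_s": sum(r["response_time_s"] for r in requests) / total,
        "avg_tokens_per_request": sum(r["total_tokens"] for r in requests) / total,
        "total_cost_usd": sum(r["cost_usd"] for r in requests),
    }

sample = [
    {"response_time_s": 0.8, "total_tokens": 120, "cost_usd": 0.0144},
    {"response_time_s": 1.2, "total_tokens": 80, "cost_usd": 0.0096},
]
metrics = summarize(sample)
```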

Overall, our OpenAI integration is fast, easy to use, and will get you access to real-time metrics that can help you optimize your usage, enhance ML models, reduce costs, and achieve better performance with your GPT-3 and GPT-3.5 models.

For more information on how to set up New Relic MLOps or integrate OpenAI’s GPT-3 and GPT-3.5 applications in your observability infrastructure, visit our MLOps documentation or our Instant Observability quickstart for OpenAI.

To learn more about how you can better observe your OpenAI usage, schedule a call with us today.

