New Relic Tightens AI Observability Mechanics

    Nike boots on the starting blocks during day five of the 2017 IAAF World Championships at the London Stadium. (Photo by Adam Davy/PA Images via Getty Images)

    The name New Relic is an anagram of founder Lew Cirne’s name. Arguably, he could have called the company Crew Line, still anagram-observant (it uses the same letters), and delivered a name more aligned with the organization’s mission to be an all-in-one observability platform. Long an Application Performance Management (APM) specialist, New Relic now aims to bring every member of a software engineering team (the crew) together in a more unified way to help control the production, deployment and operation of applications right down the software supply chain (the line).

    Why do we need AI APM?

    The company has this month launched New Relic AI Monitoring, an APM service for AI-powered applications. But why do we need APM for AI, how does it differ from ‘normal’ APM... and does it necessarily need to be smarter?

    “Almost every company is deciding how they are going to integrate AI into their operations and product offerings,” said Manav Khurana, New Relic chief product officer. “Observability is fundamental to the function and growth of AI. With AIM, we are giving engineers the necessary visibility and control needed to navigate the complexities of AI and build applications in a safe and cost-effective manner.”

    Khurana summarizes it succinctly enough: we need to know what data flows are happening inside AI applications in order to corral and manage them. Fundamentally, that means we need to be able to see which Large Language Model (LLM) is injecting its knowledge into the code stream, in order to assess its worth, strength, security and solidity.

    Today we see New Relic positioning AI observability with AI monitoring to give software engineers visibility across the AI stack, making it easier to troubleshoot and optimize their AI applications. The company’s AI monitoring technology is said to be capable of monitoring any AI ecosystem, with 50+ integrations across the AI stack, including popular LLMs such as OpenAI’s GPT-4.

    Just for clarity here, the New Relic AI Monitoring product is known as AIM, i.e. AI Monitoring technology that comes in the form of an APM solution. At the risk of suggesting more naming convention reinvention for the company, it might have been better labelled AI-M or AI-monitoring, or even AIMAPM. But we digress; we asked how AI APM differs from ‘normal’ APM.

    How AI monitoring mechanics work

    Addressing this point, New Relic reminds us that AI-powered tech stacks introduce new complexity because AI components like LLMs and vector data stores are often a black box for engineers, with the potential to provide inaccurate (or biased) results, generate volumes of telemetry data that need to be tracked and analyzed… and even introduce security issues.

    “[With AI monitoring] engineers can access a single view to troubleshoot, compare and optimize different LLM prompts and responses for performance, cost, security and quality issues including hallucinations, bias, toxicity, and fairness. It provides engineers with full visibility on all components of the AI stack alongside services and infrastructure so that they have the data they need to prove their compliance with AI regulations,” notes the company, in a technical statement.
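To make that comparison concrete, here is a minimal sketch, in no way New Relic's actual API, of what "compare and optimize different LLM prompts and responses for performance and cost" can mean in practice: aggregating logged LLM calls by model to derive mean latency and token cost. The model names and per-token prices are illustrative assumptions, not real pricing.

```python
from collections import defaultdict

# Illustrative per-1K-token prices (assumptions, not real vendor pricing)
PRICE_PER_1K_TOKENS = {"model-a": 0.03, "model-b": 0.002}

def compare_models(call_logs):
    """Aggregate logged LLM calls by model: call count, mean latency, total cost."""
    stats = defaultdict(lambda: {"calls": 0, "latency_sum": 0.0, "cost": 0.0})
    for log in call_logs:
        s = stats[log["model"]]
        s["calls"] += 1
        s["latency_sum"] += log["latency_ms"]
        s["cost"] += log["tokens"] / 1000 * PRICE_PER_1K_TOKENS[log["model"]]
    return {
        model: {
            "calls": s["calls"],
            "mean_latency_ms": s["latency_sum"] / s["calls"],
            "total_cost_usd": round(s["cost"], 4),
        }
        for model, s in stats.items()
    }

# Hypothetical call logs, as an APM agent might have recorded them
logs = [
    {"model": "model-a", "latency_ms": 900, "tokens": 500},
    {"model": "model-a", "latency_ms": 1100, "tokens": 1500},
    {"model": "model-b", "latency_ms": 300, "tokens": 1000},
]
print(compare_models(logs))
```

A real AIM deployment would of course collect these records automatically and surface the comparison in a dashboard rather than a dictionary, but the underlying roll-up is of this shape.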

    Key features and use cases here include the previously noted AI stack integrations to enable engineers to monitor an entire AI stack with quickstart integrations for popular LLMs, vector databases, orchestration frameworks and machine learning libraries. Technologies integrated here include:

    • Orchestration framework: LangChain
    • LLM: OpenAI, PaLM2, HuggingFace, MosaicML
    • Machine learning libraries: PyTorch, Keras, TensorFlow
    • Model serving: Amazon SageMaker, AzureML
    • Vector databases: Pinecone, Weaviate, Milvus, FAISS, Zilliz
    • AI infrastructure: Azure, AWS, GCP, Kubernetes
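The kind of per-call telemetry such integrations capture can be sketched as a simple wrapper around an LLM client call. This is a hypothetical illustration, not New Relic's agent: `fake_llm_call` stands in for a real SDK call, and the record fields are assumptions about what is worth tracking.

```python
import time
from functools import wraps

# Collected telemetry records; a real APM agent would export these to a backend
TELEMETRY = []

def monitor_llm(fn):
    """Record latency, token usage and error class for each wrapped LLM call."""
    @wraps(fn)
    def wrapper(prompt, **kwargs):
        start = time.perf_counter()
        record = {"prompt_chars": len(prompt), "error": None}
        try:
            response = fn(prompt, **kwargs)
            record["tokens"] = response.get("total_tokens", 0)
            return response
        except Exception as exc:
            record["error"] = type(exc).__name__
            raise
        finally:
            record["latency_ms"] = (time.perf_counter() - start) * 1000
            TELEMETRY.append(record)
    return wrapper

@monitor_llm
def fake_llm_call(prompt):
    # Stand-in for a real LLM client call (e.g. an OpenAI or Bedrock SDK call)
    return {"text": "hello", "total_tokens": 42}

fake_llm_call("What is observability?")
print(TELEMETRY[0]["tokens"], TELEMETRY[0]["latency_ms"])
```

The point of "quickstart integrations" is that this wrapping happens automatically for supported libraries, so engineers get the records without writing the decorator themselves.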

    The company also talks about visibility across the entire AI app stack to provide a holistic view across the application, infrastructure, and the AI layer, including AI metrics like response quality and tokens alongside so-called APM golden signals (latency, traffic, errors and saturation), all with no additional instrumentation required.
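The four golden signals named above are well-defined aggregates over request data. As a minimal sketch, with made-up request records and an assumed capacity figure for the saturation estimate:

```python
def golden_signals(requests, window_s, capacity_rps):
    """Compute the four APM 'golden signals' over a window of request records.

    requests: list of dicts with 'latency_ms' and 'ok' (bool) keys.
    window_s: length of the observation window in seconds.
    capacity_rps: assumed service capacity, for a simple saturation estimate.
    """
    n = len(requests)
    latencies = sorted(r["latency_ms"] for r in requests)
    traffic_rps = n / window_s
    return {
        # Latency: 95th-percentile response time (nearest-rank method)
        "p95_latency_ms": latencies[max(0, int(0.95 * n) - 1)],
        "traffic_rps": traffic_rps,
        "error_rate": sum(1 for r in requests if not r["ok"]) / n,
        "saturation": traffic_rps / capacity_rps,
    }

# 100 synthetic requests over a 10-second window; every tenth one fails
reqs = [{"latency_ms": 100 + i, "ok": i % 10 != 0} for i in range(100)]
print(golden_signals(reqs, window_s=10, capacity_rps=50))
```

What AIM adds to this familiar picture, per New Relic, is the AI-layer metrics (response quality, token counts) sitting alongside these signals in the same view.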

    "Using AI to ensure AI applications are meeting security, quality, safety and cost standards will save development teams time in terms of monitoring the complexity of these apps, adhering to compliance standards and establishing performance benchmarking, and will help protect organizations from vulnerabilities," said IDC group vice president Stephen Elliot. “Any company that provides these solutions is ultimately enabling developers to deliver better products and better customer experiences.”

    Amazon Bedrock

    Allied with this development, New Relic also announced that its AI Monitoring product is now integrated with Amazon Bedrock, a fully managed service from Amazon Web Services (AWS) that makes foundation models (FMs) from leading AI companies accessible via an API to build and scale generative AI applications. AWS customers can now use New Relic to gain greater visibility and insight across the AI stack, making it easier to troubleshoot and optimize their applications for performance, quality and cost.

    As we have said recently, it makes lots of sense to use controlled, robust and accountable AI to help build applications. In an equal and opposite way, it makes sense to use AI to ensure AI applications are being run with the right ingredients (in the form of Large Language Models, AI logic engines and connections to other data services and application sources) and in the right way operationally - and that’s what Application Performance Management is all about.
