



Tracing and Observability in LLM Applications

From Concept to Production with Observability in LLM Applications

Observability is crucial in AI applications, and particularly in those built on Large Language Models (LLMs). It is about tracking how your model behaves and performs over time, which is especially challenging for text generation: unlike categorical outputs, generated text can vary widely from one call to the next, so the model's behavior and quality need to be monitored closely and continuously.
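As a rough illustration of what "tracking over time" means in practice, here is a minimal sketch of a tracing wrapper that records each LLM call along with its latency. The names (`Tracer`, `TraceRecord`) and the `fake_llm` stub are my own placeholders, not part of any particular library; a real setup would forward these records to an observability backend instead of keeping them in memory.

```python
import time
import uuid
from dataclasses import dataclass
from typing import Callable


@dataclass
class TraceRecord:
    """One record per LLM call: input, output, and timing metadata."""
    trace_id: str
    prompt: str
    response: str
    latency_s: float
    started_at: float


class Tracer:
    """Collects trace records in memory; a real system would ship them to a backend."""

    def __init__(self) -> None:
        self.records: list[TraceRecord] = []

    def traced(self, llm_call: Callable[[str], str]) -> Callable[[str], str]:
        """Wrap any prompt -> text function so every call is recorded."""
        def wrapper(prompt: str) -> str:
            start = time.time()
            response = llm_call(prompt)
            self.records.append(TraceRecord(
                trace_id=str(uuid.uuid4()),
                prompt=prompt,
                response=response,
                latency_s=time.time() - start,
                started_at=start,
            ))
            return response
        return wrapper


# Hypothetical stand-in for a real model call (e.g. an API client).
def fake_llm(prompt: str) -> str:
    return f"echo: {prompt}"


tracer = Tracer()
generate = tracer.traced(fake_llm)
generate("Summarise this discharge note.")
print(tracer.records[0].latency_s)
```

Even a bare-bones wrapper like this captures the raw material that later analysis depends on: which prompts were sent, what came back, and how long it took.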

Recently, there has been a surge of enthusiasm around large language models (LLMs) and generative AI, and justifiably so: LLMs have the power to revolutionize entire industries. Yet this enthusiasm inevitably breeds hype. Given the immediate market interest the label “AI” generates, it has become almost counterintuitive to leave it out of a product’s pitch.