Wednesday, October 25 • 11:00am - 11:25am
OPEN TALK (AI): Approaches for Measuring Embedding/Vector Drift for Language Models


Aman Khan, Arize AI, Group Product Manager

It’s often said that debugging machine learning is ten times harder than debugging software, since it combines many of the problems of software and data engineering with challenges unique to data science and MLOps.

This is particularly true in deep learning, where labeling data is expensive and is one of the only ways to get model performance feedback. It’s no wonder that even the most advanced and best-funded large language models (LLMs), like OpenAI’s ChatGPT and Google’s Bard, sometimes hallucinate and fail in the real world. When this happens in high-stakes use cases, such as in the finance and medical industries, what tools does a data scientist have at their disposal to monitor for problems with their model in production – and to fix them?

Here’s the truth: troubleshooting models based on unstructured data is notoriously difficult. The measures typically used for drift in tabular data – such as population stability index (PSI), Kullback-Leibler (KL) divergence, and Jensen-Shannon (JS) divergence – allow for statistical analysis on structured labels, but do not extend to unstructured data. The general challenge with measuring unstructured data drift is that you need to understand the change in relationships inside the unstructured data itself. In short, you need to understand the data in a deeper way before you can understand drift and performance degradation.
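To make the tabular baseline concrete, here is a minimal sketch of two of the drift measures named above – PSI and JS divergence – computed on a binned numeric feature. The function names, bin count, and simulated data are illustrative, not from the talk:

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline ('expected') sample
    and a production ('actual') sample of one numeric feature.
    Bins are derived from the baseline distribution."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_counts, _ = np.histogram(expected, bins=edges)
    a_counts, _ = np.histogram(actual, bins=edges)
    # Clip to a small floor so empty bins don't produce log(0) or 0-division.
    e_pct = np.clip(e_counts / e_counts.sum(), 1e-6, None)
    a_pct = np.clip(a_counts / a_counts.sum(), 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(np.where(a > 0, a * np.log(a / b), 0.0))
    return float(0.5 * kl(p, m) + 0.5 * kl(q, m))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)
shifted = rng.normal(0.5, 1.0, 10_000)  # simulated mean shift (drift)

print(psi(baseline, baseline[:5000]))  # small: same distribution
print(psi(baseline, shifted))          # larger: drift detected
```

Note that both measures operate on histograms or discrete label distributions – which is exactly why they do not transfer directly to raw text or embeddings, where there is no natural binning.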

In this presentation, Aman Khan, Group Product Manager at Arize AI, will present findings from research on ways to measure vector/embedding drift for image and language models. With lessons learned from testing different approaches (including Euclidean and cosine distance) across billions of streams and use cases, Khan will dive into how to detect whether two unstructured language datasets are different – and, if so, how to understand that difference using emerging techniques such as UMAP and clustering.
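One common way to apply the Euclidean and cosine distances mentioned above is to compare the average (centroid) embedding vector of a baseline dataset against that of a production dataset. The sketch below assumes this centroid-comparison approach and uses synthetic embeddings; the function name and shapes are illustrative, not from the talk:

```python
import numpy as np

def centroid_drift(baseline_emb, production_emb):
    """Compare two sets of embeddings by the distance between their
    average (centroid) vectors. Returns (euclidean, cosine) distances.
    A rising value over time is one signal of embedding drift."""
    b = np.asarray(baseline_emb, float).mean(axis=0)
    p = np.asarray(production_emb, float).mean(axis=0)
    euclidean = float(np.linalg.norm(b - p))
    cosine = float(1.0 - np.dot(b, p) / (np.linalg.norm(b) * np.linalg.norm(p)))
    return euclidean, cosine

rng = np.random.default_rng(42)
# e.g. 1,000 sentence embeddings of dimension 768
baseline = rng.normal(0.0, 1.0, size=(1_000, 768))
# simulate production data whose embeddings have shifted
drifted = baseline + rng.normal(0.3, 0.05, size=(1_000, 768))

same_eu, same_cos = centroid_drift(baseline, baseline)   # ~0: no drift
drift_eu, drift_cos = centroid_drift(baseline, drifted)  # > 0: drift
```

A single distance number says that the datasets differ, not why; that is where the UMAP projections and clustering mentioned above come in, surfacing which regions of the embedding space account for the change.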

In the coming years, more ML teams will likely look to embedding drift to help detect and understand differences in their unstructured data. This presentation, with examples from the real world, will be useful and fascinating to advanced data scientists and learners alike!

Speakers

Aman Khan

Group Product Manager, Arize AI
Aman is a Group Product Manager at Arize AI, where he works on scaling ML Observability solutions so that Data Scientists can monitor and improve ML models in production. Prior to Arize, Aman was the PM on the Jukebox Feature Store in the ML Platform team at Spotify across ~50 data... Read More →


Wednesday October 25, 2023 11:00am - 11:25am PDT
AI DevWorld -- Stage 1