The Art of Testing LLM Components: Strategies and Recommendations

  Machine learning

Speaker: Miloš Švaňa, ProfIQ (www.profiq.cz)

Abstract: Many developers are enriching their apps with LLM functionality. But how do you test LLM components properly? This task is generally more difficult than testing ordinary code because the output of LLMs is non-deterministic, API calls can be expensive, and getting a response takes a long time. In this talk, I am going to introduce several strategies for testing the LLM components of your application, each with different strengths and weaknesses. Some behave rather deterministically; others use an LLM to test an LLM. I’ll also cover a few general recommendations. I will focus specifically on text generation and on “unit testing” — the task at hand will be to ensure that the LLM components of our app behave within specifications and that things don’t break when we change our prompts or when the LLM we are using gets updated.
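As a taste of the deterministic strategy the abstract mentions, here is a minimal sketch (not the speaker's actual code): a spec check on generated text that a unit test can run against a stubbed LLM, so the test itself is cheap, fast, and reproducible. The `summarize`, `check_summary`, and `fake_llm` names are illustrative assumptions.

```python
import re

def summarize(llm_call, text: str) -> str:
    # llm_call is any callable mapping a prompt string to a response string;
    # in production it would wrap a real LLM API, in tests it can be a stub.
    return llm_call(f"Summarize in one sentence: {text}")

def check_summary(summary: str, max_words: int = 30) -> bool:
    # Deterministic spec checks: non-empty, a single sentence, bounded length.
    words = summary.split()
    sentences = [s for s in re.split(r"[.!?]", summary) if s.strip()]
    return bool(words) and len(words) <= max_words and len(sentences) == 1

# "Unit test" with a stubbed LLM: no API cost, no latency, no randomness.
fake_llm = lambda prompt: "The article explains how to test LLM components."
assert check_summary(summarize(fake_llm, "some article text"))
```

The same `check_summary` runs unchanged against real model output, so it also catches regressions when a prompt or the underlying model changes.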

You can watch it online here: https://www.facebook.com/events/3994856560836146 *Stream starts at 17:45

Program:
17:30 Welcome chat
18:00 Talk
18:50 Discussion
19:10 Networking (Impact Hub)

About MLMUs: Machine Learning Meetups (MLMU) is an independent platform for people interested in Machine Learning, Information Retrieval, Natural Language Processing, Computer Vision, Pattern Recognition, Data Journalism, Artificial Intelligence, Agent Systems and all related topics. MLMU is a regular community meeting usually consisting of a talk, a discussion and subsequent networking. Besides Prague, MLMU has also spread to Brno, Bratislava and Košice.

Free admission