LLMs for math problems: overcoming the challenges

While large language models excel in many NLP tasks, they often struggle with even trivial math problems. LLMs are notoriously bad at working with numbers and often fail due to flawed reasoning. The first issue can be solved entirely by teaching models to use mathematical tools, but reliable reasoning remains a formidable challenge. We will demonstrate how to improve reasoning capabilities using preference optimization, a recent technique for aligning LLMs with feedback. This method teaches the model to prefer generating correct solutions over incorrect ones. Lastly, we will discuss other possible approaches for improving reasoning and their limitations.
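The abstract does not say which preference-optimization method the talk will use; a widely used instance is Direct Preference Optimization (DPO), where the model is trained to assign higher relative likelihood to a correct solution than to an incorrect one for the same problem. The sketch below is a minimal illustration of that idea in PyTorch; the function name, the beta value, and the toy log-probabilities are all assumptions for illustration, not material from the talk.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """DPO-style preference loss (one common form of preference optimization).

    Each argument is a batch of summed log-probabilities of a full solution
    under the trained policy or a frozen reference model.
    "chosen" = correct solution, "rejected" = incorrect solution.
    """
    chosen_margin = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_margin = beta * (policy_rejected_logps - ref_rejected_logps)
    # Push the policy to prefer correct solutions over incorrect ones,
    # relative to the reference model.
    return -F.logsigmoid(chosen_margin - rejected_margin).mean()

# Toy usage with made-up log-probabilities for a batch of two math problems.
loss = dpo_loss(
    policy_chosen_logps=torch.tensor([-12.3, -9.8]),
    policy_rejected_logps=torch.tensor([-11.0, -10.5]),
    ref_chosen_logps=torch.tensor([-12.0, -10.0]),
    ref_rejected_logps=torch.tensor([-11.5, -10.2]),
)
print(loss.item())
```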

Program:

17:30 Welcome chat
18:00 Talk
18:50 Discussion
19:10 Networking (Impact Hub)

About MLMUs:

Machine Learning Meetups (MLMU) is an independent platform for people interested in Machine Learning, Information Retrieval, Natural Language Processing, Computer Vision, Pattern Recognition, Data Journalism, Artificial Intelligence, Agent Systems and all the related topics. MLMU is a regular community meeting usually consisting of a talk, a discussion and subsequent networking. Besides Prague, MLMU has also spread to Brno, Bratislava and Košice.

Free admission