Campus Event Calendar

Event Entry

What and Who

Context-Aware Fine-Tuning of Large Language Models

Shayan Salehi
Sharif University of Technology
PhD Application Talk
AG 1, AG 2, AG 3, INET, AG 4, AG 5, D6, SWS, RG1, MMCI  
AG Audience
English

Date, Time and Location

Tuesday, 4 February 2025
12:30
30 Minutes
Virtual talk
Zoom

Abstract

With the rise of large language models, the need to apply these models in specific domains such as medicine, finance, and education is increasing. In this talk, I will present my previous work, carried out in collaboration with Imperial College London, on fine-tuning LLMs through reinforcement learning and reward systems, and outline future research directions by introducing frameworks inspired by prior work on symbol tuning, context-aware meta-learning, and prompt tuning. This future line of research will primarily investigate prompt evaluation metrics tailored to a specific context or to adversarial attacks, and will further introduce an external component that analyzes and interprets cross-attention heads in order to freeze and prune layers unrelated to the fine-tuning task. These efforts aim to make LLMs more effective and reliable across different applications.
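As a rough illustration of the layer-freezing idea mentioned above, one could score each layer by how much attention its heads direct at context-relevant tokens and freeze the lowest-scoring layers before fine-tuning. The sketch below is a minimal, hypothetical version of such a selection step (the function name, the relevance score, and the `keep_ratio` parameter are all assumptions, not the speaker's actual method):

```python
import numpy as np

def layers_to_freeze(attn, task_token_mask, keep_ratio=0.5):
    """Pick the least task-relevant layers to freeze.

    attn: array of shape (layers, heads, seq, seq) with attention weights.
    task_token_mask: boolean array of shape (seq,) marking context-relevant tokens.
    Returns a list of layer indices to freeze.
    """
    # Mean attention mass each layer's heads direct at task-relevant tokens
    relevance = attn[..., task_token_mask].sum(-1).mean(axis=(1, 2))
    n_freeze = int(len(relevance) * (1 - keep_ratio))
    # Freeze the layers with the lowest relevance scores
    return np.argsort(relevance)[:n_freeze].tolist()
```

In a real fine-tuning setup, the returned indices would be used to disable gradient updates for those layers (e.g. setting `requires_grad = False` on their parameters in PyTorch); pruning would go one step further and remove them entirely.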

Contact

Ina Geisler
+49 681 9325 1802

Virtual Meeting Details

Zoom

Ina Geisler, 01/27/2025 09:33 -- Created document.