October 2 - 4 - Devoxx Morocco 2024 - 🇲🇦 Palm Plaza hotel - Marrakech 🌞🌴

Talk details

Large language models have demonstrated impressive performance on NLP tasks. However, these models have hundreds of billions of parameters, and training them from scratch requires enormous compute resources and huge datasets. Fine-tuning can be a potent tool for lowering training costs while improving the performance and applicability of these models. This talk will cover four main parts:
1- Overview of LLMs: definition, examples, the capabilities of LLMs, and their impact on various fields.
2- Different types of fine-tuning methodologies: supervised fine-tuning, unsupervised fine-tuning, and instruction fine-tuning.
3- Techniques to update pretrained LLM weights for fine-tuning: full fine-tuning, adapter-based fine-tuning, and parameter-efficient fine-tuning.
4- Workflow of Supervised Fine-Tuning (Demo)
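As a taste of part 3, the core idea behind one popular parameter-efficient approach (LoRA-style low-rank adaptation) can be sketched in a few lines: the pretrained weight matrix is frozen, and only a small low-rank update is trained. This is an illustrative sketch with toy matrices, not the talk's demo; real projects would use a library such as Hugging Face PEFT.

```python
# Illustrative sketch of low-rank adaptation (LoRA-style PEFT):
# freeze the pretrained weight W and learn only a low-rank update B @ A.
# All dimensions and names here are toy examples for illustration.
import random

def matmul(a, b):
    # naive matrix multiply, sufficient for these tiny matrices
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

d, r = 4, 1  # model dimension and adapter rank (r << d)

random.seed(0)
W = [[random.gauss(0, 1) for _ in range(d)] for _ in range(d)]  # frozen pretrained weight, d x d
A = [[random.gauss(0, 1) for _ in range(d)] for _ in range(r)]  # trainable, r x d
B = [[0.0] * r for _ in range(d)]                               # trainable, d x r, zero-initialized

delta = matmul(B, A)  # low-rank update B @ A, d x d
W_eff = [[W[i][j] + delta[i][j] for j in range(d)] for i in range(d)]

# Only the adapters are trained, so trainable parameters drop
# from d*d (full fine-tuning) to 2*d*r (LoRA):
full_params = d * d          # 16 for this toy size
lora_params = d * r + r * d  # 8 for this toy size
```

Because B starts at zero, the adapted weight initially equals the pretrained one, so training begins from the base model's behavior and only gradually departs from it.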
By the end of this session, attendees will have a thorough understanding of the state of the art in LLM fine-tuning, along with practical knowledge they can apply.
Hajar GHARBI
Bouygues Telecom
Big Data Engineer @Bouygues Telecom and PhD student in GenAI