Fine-Tuning with LoRA: A Practical Guide for the Mistral Model

by Mr Javier León (Laboratório BigData@UE)

Europe/Lisbon
125 (Casa Cordovil)
Description

Fine-tuning Large Language Models (LLMs) with Low-Rank Adaptation (LoRA) is a highly efficient technique for optimizing model performance on specific tasks. LoRA enhances pre-trained LLMs by updating only a minimal subset of parameters, significantly cutting down on computational and memory demands. This approach works by incorporating low-rank matrices into the model’s layers, enabling precise adjustments without modifying the model’s core architecture. By leveraging LoRA, LLMs can be rapidly and effectively customized for new domains or tasks, preserving their broad generalization abilities while achieving superior performance on specialized datasets.
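The parameter savings described above can be illustrated with a minimal, self-contained sketch of a LoRA layer in plain PyTorch (not the full PEFT/Mistral workflow; the rank r = 8 and scaling alpha = 16 are illustrative choices, and the 4096-wide layer stands in for one Mistral projection matrix):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer augmented with a trainable low-rank update:
    y = W x + (alpha / r) * B A x, with A of shape (r, in) and B of shape (out, r)."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the pre-trained weights stay frozen
        # A gets a small random init; B starts at zero so the layer
        # initially behaves exactly like the frozen base layer.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

# A 4096x4096 projection, roughly the size of one attention matrix in Mistral-7B.
layer = LoRALinear(nn.Linear(4096, 4096), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / {total} ({100 * trainable / total:.2f}%)")
# With r = 8, only the two low-rank matrices (2 * 8 * 4096 parameters)
# are updated: well under 1% of the layer's parameters.
```

In practice the same idea is applied through libraries such as Hugging Face PEFT, which wraps the model's attention projections in exactly this kind of adapter; only the small A and B matrices are stored and optimized, which is what cuts the memory and compute cost of fine-tuning.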

Organised by

VISTA Lab

Registration
Fine-Tuning with LoRA: registration
Participants
  • José Saias
  • Luis Rato
  • Miguel Barão
  • Miguel Silvério
  • Nuno Miquelina
  • Paulo Quaresma
  • Vítor Nogueira
  • Yanet Sáez Iznaga
  • +6