Fine-tuning LLMs: Make Open-Source Models Work for You — GenAI Monthly Meetup #3

ErasmusX
6 min read · Oct 16, 2024


Article Written by the ErasmusX GenAI Team

In late September we held our first Monthly Meetup of the academic year, and the third overall — this time on fine-tuning LLMs! The aim of these meetups is to share knowledge, experiment, and brainstorm about developments in (gen)AI, and to find new ways it can be applied to teaching and learning. Each month we have a guest speaker and a diverse group of participants from the EUR community.

This blog is a recap of the discussion from September’s genAI meetup! In case you missed the first two (pilot) meetups last academic year, you can check out those recaps here (1st) and here (2nd).

What We Covered in September’s GenAI Monthly Meetup

Dr. João Gonçalves, Associate Professor in the Department of Media and Communication, kicked off the first GenAI Monthly Meetup of the academic year 2024–25. João is one of the creators behind the Erasmian Language Model (ELM for short), responsible for a large part of the coding that went into it.

In the meetup, João demonstrated how to customize ELM and other language models by fine-tuning them to make them better suited to (your) specific needs and contexts.

You can jump to the step-by-step instructions of the fine-tuning process here (this will open a PDF with the instructions on Google Drive).

What is a Large Language Model (LLM)?

João started off the meetup with a quick intro to LLMs (Large Language Models). LLMs are machine learning models that can “understand” language and generate appropriate responses, though the word “understand” here can be misleading. The process behind machine learning is mathematics — the way models “learn” is by processing large quantities of textual data, and trying to predict the next word in any given text. It is a process of trial and error. Of course, that means that the more data they have, and especially the more high-quality data, the better and more accurate their answers become. That is perhaps the most important takeaway from João’s presentation — high quality data is key to effective language models.
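To make the next-word-prediction idea concrete, here is a deliberately tiny sketch. This is not how a real LLM works (real models use neural networks trained on billions of words); it only illustrates the core idea that a model learns which word tends to follow which from its training data.

```python
from collections import Counter, defaultdict

# Toy illustration: "training" here just counts which word follows which
# in a small corpus, then predicts the most frequent successor.
corpus = "the cat sat on the mat and the cat slept".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # → "cat" ("cat" follows "the" twice, "mat" once)
```

Notice how the prediction quality depends entirely on the counts gathered from the corpus: with more (and better) data, the statistics improve, which mirrors João’s point that high-quality data is key.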

Issues with LLMs

Of all publicly available LLMs, OpenAI’s ChatGPT is probably the most well-known. It is a powerful tool trained on vast amounts of data, and it is continuously being updated and refined. However, there are some issues that users might have with ChatGPT — and models like it. A main concern is privacy: OpenAI uses (some of the) user data to train the model, which can be problematic when sensitive data is involved. Another concern is the environmental impact: training and running these vast neural networks over enormous amounts of data consumes energy on a large scale.

Possibilities

Despite the concerns, LLMs’ ability to be used in many different contexts has quickly made them very popular, and they are becoming more prevalent in daily life. When we asked the participants in our meetup what they would do if they had a perfect language model, their responses ranged from “grading papers” to “responding to emails” to “everything”. There are indeed many possibilities for what one can do with an LLM.

Let’s discuss two ways of making an LLM address users’ specific needs: (1) system prompting and (2) fine-tuning.

System Prompting

System prompting is a way of customizing LLMs that people are likely to be familiar with and that takes less effort and specialized knowledge. This is not João’s preferred method, but it has its merits, and so we will address it.

To borrow João’s MasterChef metaphor, system prompting is like a pressure test. In reality-cooking shows, that means contestants are given a recipe and tested on how well they can follow it. For LLMs, it means that system prompting doesn’t change the model itself; it just gives the model an instruction it has to follow when generating responses. For example, if you prompt ChatGPT with the following: “You are a helpful assistant, but you can’t mention competing companies,” it will take on that role, but still only “know” what is already in its data set. If you don’t give it specific information to draw from, it is likely to respond in very generic ways, “hallucinate”, or make up information outright.
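A small sketch of what happens under the hood: a fixed system prompt is simply prepended to the conversation before it is sent to the model. The function and message format below are illustrative (the system/user role convention is common to most chat APIs), not any specific vendor’s API.

```python
# The system prompt is fixed; the model's weights are never touched.
SYSTEM_PROMPT = ("You are a helpful assistant, "
                 "but you can't mention competing companies.")

def build_messages(user_input, history=None):
    """Prepend the fixed system prompt to every conversation."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages.extend(history or [])
    messages.append({"role": "user", "content": user_input})
    return messages

msgs = build_messages("What laptop should I buy?")
# msgs now starts with the system instruction, followed by the user's question.
```

Because only the conversation context changes, the model’s underlying knowledge and behaviour stay exactly as they were, which is precisely the limitation João pointed out.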

Example of System Prompting

In ChatGPT, you have the option to “create GPTs” (which is only available in the paid version). When you create a GPT for a specific purpose, you can give it information, or upload documents containing information, and it will follow those parameters when generating responses. There are downsides, however. The issue with uploading documents into ChatGPT is that it doesn’t recognize which parts of the text are relevant, so it doesn’t always effectively extract the information that would be useful. There are also privacy risks if you upload your own data to ChatGPT.

Overall, system prompting can help with more general tasks, but it still involves some risks and is not very good at adapting to, for example, a specific user’s writing style. In other words, this method lets you tailor a model up to a certain local maximum; it doesn’t give you the ability to really fine-tune it to “perfection”, to specialize it.

Fine-tuning

Unlike system prompting, fine-tuning customizes the LLM by changing the model itself. In João’s MasterChef metaphor of language models, fine-tuning is like the mystery box challenge — you give the model the ingredients and see what it can come up with, instead of making it follow very clear instructions (a recipe).

Benefits of Fine-tuning

With fine-tuning, you take an existing LLM, such as the Erasmian Language Model (or Llama, etc.), and give it specific examples of what you want it to produce. Here, the emphasis is on feeding it high-quality data. This way, you can, for example, input samples of your own writing and let the model learn how to write like you, allowing it to respond to your emails in your style. If you work with ELM or other open-source models, you can do this completely for free. Additionally, since you usually create a copy of the model and modify it on your own device or private cloud, it avoids many privacy concerns: no one else can access the data you provide it. This can also reduce the environmental impact, since you can work with relatively small models, which don’t require as much energy.
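As a loose analogy (not a real training loop — real fine-tuning updates neural network weights via gradient descent, e.g. with libraries like Hugging Face Transformers), the sketch below shows the key idea: you copy a “base model” and nudge the copy with your own examples, so its outputs shift toward your style while the original, and your data, stay on your side.

```python
from collections import Counter

# Toy analogy of fine-tuning: a "base model" of sign-off preferences is
# copied and updated with a user's own writing samples. All data is made up.
base_model = Counter({"hello": 5, "regards": 5, "cheers": 1})

your_emails = ["cheers"] * 5  # samples of your own writing

def fine_tune(model, samples):
    """Return a copy of the model, updated with the user's examples."""
    tuned = model.copy()          # the base model is left untouched
    tuned.update(samples)         # add counts from the user-specific data
    return tuned

tuned_model = fine_tune(base_model, your_emails)
print(tuned_model.most_common(1)[0][0])  # → "cheers": the copy prefers your style
```

Working on a copy is also where the privacy benefit comes from: the samples only ever touch your local copy, never the original shared model.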

Difficulties with Fine-tuning

Fine-tuning requires a little bit of technical knowledge, which might seem intimidating for those not familiar with coding. However, the barrier to entry is not as high as it may seem! In the meetup, João walked our participants through the steps of customizing the Erasmian Language Model. You can find the full instructions here.

Questions to Consider

  • There is a risk of creating bias in the model when you feed it data from limited sources. You can always tweak the model to try and eliminate the risk, but no model will ever be 100% bias-free (especially since data is never bias-free).
  • Is it cheating if a student uses a model based on their own writing to complete an assignment?

Conclusion

Both system prompting and fine-tuning can be good ways to make LLMs more suited to your individual needs. System prompting solves some of the issues that arise with using commercial LLMs, and fine-tuning goes even further, but requires some more work and can look intimidating at first. However, with a little bit of practice, you can do it yourself, even if you are not experienced with coding.

We encourage you to try and explore the world of open-source LLMs for yourself by following João’s instructions here.

Join Us for the Next GenAI Meetup! 😁

To participate in the meetups and receive follow-ups, we invite you to become a member of our community! Every month we have an exciting new meetup to look forward to, and it’s all up to you whether you join or not (there are no obligations).

👉 You can join us here (EUR members only)!

Written by ErasmusX

We are a team of passionate people forming the driving force behind experimental educational innovation at Erasmus University Rotterdam (EUR).
