AI Applications You Can Build Today
- Han Gerrits

- Dec 8, 2024
- 3 min read

With the rise of LLMs (Large Language Models), AI has become accessible to everyone. You no longer need vast amounts of data or computing power to train an AI; you can now leverage pre-trained LLMs to build your own AI applications.
In practice, I’ve noticed that while many organizations are interested, they’re often hesitant to take the first step. This hesitation is usually tied to perceived risks associated with AI. For instance, media attention has focused heavily on hallucinating LLMs: models that generate fluent, perfect-looking text that is in fact complete nonsense. Other concerns include fears that data fed into LLMs might be used for unintended purposes, or that personal data might leave Europe, potentially violating the GDPR.
These fears are partially valid: there isn’t much experience with LLMs yet, and improper implementation can indeed lead to such issues. However, there are already AI applications we can build today using LLMs that are both reliable and trustworthy.
To understand how to use LLMs effectively, we need to revisit their purpose. This begins with the concept of AI, or Artificial Intelligence. In computer science, we consider a system intelligent when it can perform tasks that humans do naturally or learn from a young age, such as recognizing objects in images or understanding and generating human language.
While these tasks are simple for humans, they were nearly impossible for computers until a few years ago. To train computers in language skills, models were developed and trained on vast amounts of text from the internet. Using neural networks (a topic for another article), these models eventually became capable of understanding and generating human language.
We experience this progress daily, whether it’s talking to a navigation system in the car or giving commands to an assistant like Siri. However, as these models gained language skills, they also acquired “knowledge” from the texts they processed. For example, they likely know Amsterdam is the capital of the Netherlands and can provide a correct answer to that question.
But accuracy diminishes for questions where the training texts conflict. Take the capital of Germany: the model may have read recent texts stating Berlin is the capital alongside older ones naming Bonn, the capital of West Germany until 1990. As a result, the answer may not always be reliable.
For this reason, when building LLM applications today, I recommend using only their language processing capabilities and avoiding reliance on their factual knowledge. By adhering to this limitation, we can already create many reliable applications.
For instance, we can build chatbots that use LLMs to answer questions based on documents linked to the chatbot. The LLM can be programmed to refrain from answering if the information isn’t in the documents. Compared to traditional chatbots, LLM-based chatbots excel because they can understand the intent behind a question and provide relevant answers. Traditional chatbots, on the other hand, only work when a question matches predefined question-answer pairs exactly.
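The grounding idea above can be sketched in a few lines. This is a minimal illustration, not a production chatbot: `call_llm` stands in for whichever LLM API you use, and the prompt wording is an assumption on my part. The key point is that the documents travel inside the prompt and the model is explicitly instructed to refuse when the answer isn’t in them, so we lean on its language skills rather than its memorized knowledge.

```python
# Sketch of a document-grounded chatbot prompt. `call_llm` is a placeholder
# for any LLM client; the instruction text is illustrative.

REFUSAL = "I cannot answer this based on the provided documents."

def build_grounded_prompt(question: str, documents: list[str]) -> str:
    """Combine the user's question with the documents linked to the chatbot."""
    context = "\n\n".join(
        f"Document {i + 1}:\n{doc}" for i, doc in enumerate(documents)
    )
    return (
        "Answer the question using ONLY the documents below. "
        f"If the answer is not in them, reply exactly: {REFUSAL}\n\n"
        f"{context}\n\nQuestion: {question}"
    )

# Usage (call_llm is your LLM client of choice):
# answer = call_llm(build_grounded_prompt("What is the refund policy?", docs))
```

In larger document sets, a retrieval step would first select the few most relevant documents to include, but the refusal instruction works the same way.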
An excellent example of success with an AI-powered chatbot is Klarna. Their chatbot not only improves customer satisfaction by providing better answers faster (reducing the response time from 11 minutes to just 2), but it also saves Klarna €40 million annually by performing the work of 700 employees.
Other use cases for LLMs that leverage only their language capabilities include summarizing documents and extracting structured data from them. At EPSA, for instance, we’ve developed an application that analyzes invoices and assigns them to the correct expense categories, as well as a tool for contract analysis.
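Structured extraction follows the same principle. The sketch below is hypothetical (the field names and category list are mine, not EPSA’s actual implementation): the LLM is asked only to read the invoice and return JSON, while the allowed categories and the validation live in ordinary code we control, so a malformed or out-of-range answer fails loudly instead of being trusted.

```python
# Hedged sketch of structured extraction from an invoice text.
# The LLM only does the language work; our code validates its output.
import json

EXPENSE_CATEGORIES = {"travel", "office supplies", "software", "other"}

def extraction_prompt(invoice_text: str) -> str:
    """Ask the model for a fixed JSON schema (illustrative field names)."""
    return (
        "Extract the following fields from the invoice as JSON: "
        "vendor (string), total (number), category "
        f"(one of {sorted(EXPENSE_CATEGORIES)}).\n\nInvoice:\n{invoice_text}"
    )

def parse_extraction(llm_reply: str) -> dict:
    """Parse and validate the model's JSON reply."""
    data = json.loads(llm_reply)  # raises on malformed output
    if data.get("category") not in EXPENSE_CATEGORIES:
        data["category"] = "other"  # fall back rather than trust the model
    return data
```

The same pattern — a fixed schema in the prompt, strict validation on the way out — carries over to contract analysis and similar extraction tasks.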
By using LLMs in this way, we can create trustworthy AI applications today that deliver real value.