Organizations, industries, and even governments across the world are in a race to embrace artificial intelligence (AI) and unlock its wide array of benefits. Singapore's government is no different, with AI embedded in its plans to dramatically transform government services, finance, healthcare, education, and logistics, as outlined in its National AI Strategy.
Among the many types of AI models, generative AI and large language models (LLMs) are especially relevant to businesses today because of their advanced natural language processing capabilities. For example, LLMs let customers interact easily with chatbots and virtual assistants for a smoother customer experience. The same capability extends to other applications, such as data analytics platforms, making complex business tasks more accessible.
Another use of LLMs currently being explored is generating high-quality written content for marketing teams in a fraction of the usual time. Similarly, these models can write code faster, allowing business users within an organization to perform low-code development and freeing development teams for other critical tasks.
The way generative AI and LLMs perform such tasks is impressive, but it is far from perfect.
Challenges in utilizing LLMs
While LLMs seem to have a human-like understanding of natural language, these models produce fluent and coherent responses through probabilities. They recognize statistical patterns in their training data, such as which word or sequence of words typically follows another.
As AI models, LLMs are designed to complete the task given to them, which in their case is answering a prompt or question. The problem is that they do not understand natural language the way humans do; they answer based only on the patterns learned from their training data. Guided by those patterns, an LLM can therefore produce a response that is coherent and fluent yet factually wrong or irrelevant to the question, a failure commonly called a hallucination.
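To make this mechanism concrete, here is a minimal Python sketch of next-token prediction. The phrase and its probability table are toy values invented for illustration; a real LLM derives such probabilities from a neural network conditioned on the full preceding text, over a vocabulary of tens of thousands of tokens.

```python
import random

# Toy next-token distribution, invented for illustration. A real LLM
# computes these probabilities with a neural network, not a lookup table.
NEXT_TOKEN_PROBS = {
    "The capital of France is": {"Paris": 0.92, "Lyon": 0.05, "lovely": 0.03},
}

def sample_next_token(context: str) -> str:
    """Choose the next token by sampling the model's probability distribution."""
    dist = NEXT_TOKEN_PROBS[context]
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Usually "Paris", but occasionally a fluent-sounding wrong continuation:
# the model samples likely text; it does not look up facts.
print(sample_next_token("The capital of France is"))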
For example, a recent Government Technology Agency (GovTech) experiment involved posing a question about its headquarters to ChatGPT. Unfortunately, the tool provided the address of the GovTech Hive building instead of the actual headquarters, highlighting that the AI model does not guarantee correct and up-to-date information in its responses.
LLMs' hallucinations can arise due to shortcomings in the dataset and training procedures. Two main factors contribute to these hallucinations: overfitting and data quality.
Overfitting occurs when a model is too complex or trained on noisy data: instead of learning generalizable patterns, it memorizes quirks of the training set, leading to errors in classification and prediction and to inaccurate, factually incorrect output. Low-quality data, characterized by a low signal-to-noise ratio, likewise causes poor generalization and inaccurate classifications and predictions, resulting in hallucinations.
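The dynamic is easiest to see outside of language models. The NumPy sketch below is a deliberately simple regression analogy, not an LLM: a model with too much capacity memorizes the noise in its training data and then fails on a new input, which is overfitting in miniature.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten noisy samples of a simple underlying trend (y = x).
x = np.linspace(0, 1, 10)
y = x + rng.normal(scale=0.1, size=x.size)

# A degree-9 polynomial has enough capacity to pass through every noisy
# point exactly, while a degree-1 fit captures the real pattern.
overfit = np.polynomial.Polynomial.fit(x, y, deg=9)
simple = np.polynomial.Polynomial.fit(x, y, deg=1)

x_new = 1.05  # a point just outside the training data
print("true value:        ", x_new)
print("overfit model says:", overfit(x_new))  # typically far off
print("simple model says: ", simple(x_new))   # close to the truth
```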
Addressing hallucinations through retrieval-augmented generation (RAG)
Various techniques can be employed to address hallucinations in LLMs, such as fine-tuning, prompt engineering, and RAG.
Fine-tuning retrains a model on domain-specific datasets to make its responses more relevant to that particular domain, but it is time-consuming and costly. Prompt engineering, which relies on adding more descriptive detail and clarifying information to the input prompt to produce better results, is also time-consuming, particularly for users of LLMs.
Instead of expending vast amounts of resources through fine-tuning or having LLM users go through a time-consuming process of writing better prompts, organizations looking to leverage generative AI can turn to RAG. This framework focuses on grounding LLMs with the most accurate and up-to-date information by retrieving facts from an external knowledge repository, thereby improving the LLM's responses.
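In outline, the RAG flow is retrieve, augment, generate. The Python sketch below shows that flow under stated assumptions: `embed`, `vector_store.search`, and `llm` are hypothetical placeholders for whichever embedding model, vector database, and LLM an organization actually deploys, not any specific product's API.

```python
def answer_with_rag(question: str, embed, vector_store, llm, top_k: int = 3) -> str:
    """Ground an LLM's answer in facts retrieved from an external store.

    `embed`, `vector_store`, and `llm` are hypothetical stand-ins.
    """
    # 1. Retrieve: embed the question and fetch the closest passages
    #    from the external knowledge repository.
    passages = vector_store.search(embed(question), top_k=top_k)

    # 2. Augment: ground the prompt in the retrieved, up-to-date facts.
    context = "\n".join(passages)
    prompt = (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

    # 3. Generate: the LLM answers from the retrieved facts rather than
    #    from its training data alone.
    return llm(prompt)
```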
Powering RAG with real-time data
Combining RAG with real-time data has proven highly effective at reducing hallucinations: the model is grounded in information that is both current and contextually relevant, leading to more accurate and useful responses.
To optimize the effectiveness of the RAG model, it is essential to integrate it with an operational data store capable of holding data in an LLM's native format: high-dimensional mathematical vectors known as embeddings. When a user query arrives, the database transforms it into a numerical vector as well, allowing it to retrieve relevant documents or passages even when they do not contain the exact query terms.
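The sketch below illustrates the idea with toy three-dimensional embeddings and invented text. Real embedding models produce vectors with hundreds or thousands of dimensions, and a vector database replaces this brute-force comparison with an indexed approximate nearest-neighbor search.

```python
import numpy as np

# Toy 3-dimensional embeddings with invented documents; a real embedding
# model produces much higher-dimensional vectors.
docs = {
    "Our head office is located at 123 Example Street.": np.array([0.9, 0.1, 0.0]),
    "The cafeteria menu changes weekly.":                np.array([0.0, 0.2, 0.9]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Score how closely two embedding vectors point in the same direction."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The query says "headquarters", not "head office", yet its (toy)
# embedding lands near the first document, so semantic search finds it
# without any exact keyword match.
query_vec = np.array([0.85, 0.15, 0.05])  # embedding of "Where is the headquarters?"
best_match = max(docs, key=lambda text: cosine_similarity(query_vec, docs[text]))
print(best_match)
```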
To ensure successful implementation, it is crucial to have a highly available and performant database that can handle substantial amounts of unstructured data through semantic search. This database forms a critical component of the RAG process.
Unlocking the full potential of GenAI
As more businesses and industries leverage generative AI for an increasing variety of use cases, it is crucial to address the issue of model hallucinations. Implementing RAG, coupled with real-time contextual data, can significantly reduce these hallucinations and improve the accuracy and value of AI models.
Organizations must adopt a data layer that supports both transactional and real-time analytics to ensure the effectiveness and relevance of generative AI. By harnessing real-time data, businesses can power dynamic and adaptive AI solutions, making timely decisions and responding instantly to market dynamics.
Genie Yuan, regional vice president for APAC at Couchbase, wrote this article.
The views and opinions expressed in this article are those of the author and do not necessarily reflect those of CDOTrends. Image credit: iStockphoto/wildpixel