There is already a long list of Large Language Models (LLMs) out in the wild, from OpenAI's GPT-4 to Google's PaLM 2 to Meta's LLaMA, to name three of the more high-profile examples. LLMs are differentiated by factors including the core architecture of the model, the training data used, the model weights applied and any fine-tuning for specific contexts or purposes, as well as the cost of development (and the relative budget of the model maker to splurge on those costs). All of these can influence how a given flavor of generative AI performs in response to a user's natural language query.
Thing is, this already lengthy list of LLMs seems unlikely to stop growing any time soon, given how many variables AI makers can toy with, and contexts they can lean into, to try to get the best performance from conversational generative AI for a given use case.
Another factor influencing outputs is how much LLM development has focused on the English language, with less attention paid to other languages.