Models

What can you use?

GenAI workflows depend on Large Language Models (LLMs). An LLM accepts inputs, incorporates them as parameters in its prompt, and generates a completion.

Diaflow offers a wide range of providers and models that are integrated into the platform. Here are some of the providers and their models that you can access:

  • OpenAI: GPT-4, GPT-4 32k, GPT-3.5 Turbo, GPT-3.5 16k

  • Anthropic: Claude 2, Claude 2.1

  • Cohere: Chat

You can also connect your self-hosted, fine-tuned models to Diaflow through REST APIs.
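If you host a model yourself, Diaflow only needs an HTTP endpoint it can call. The sketch below shows one way such an endpoint could look, using FastAPI; the route name, request fields, and response shape are illustrative assumptions, not a contract that Diaflow prescribes.

```python
# Minimal FastAPI wrapper around a self-hosted, fine-tuned model.
# The /generate route and the payload shape are hypothetical examples.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class GenerateRequest(BaseModel):
    prompt: str
    max_tokens: int = 256

class GenerateResponse(BaseModel):
    completion: str

def run_model(prompt: str, max_tokens: int) -> str:
    # Placeholder for your model's actual inference call.
    return f"(model output for: {prompt[:40]}...)"

@app.post("/generate", response_model=GenerateResponse)
def generate(req: GenerateRequest) -> GenerateResponse:
    return GenerateResponse(completion=run_model(req.prompt, req.max_tokens))
```

You would then point Diaflow's REST API connection at this endpoint.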


How it works

To use the LLM component, you need to set up these connections:

  • Input: The LLM component requires a text input. You can connect it to an Input component (for example, a user's message) or to the output of another LLM component, so that one LLM's response becomes the next LLM's input (a code sketch of this chaining follows the list below).

  • Output: This component returns the response from the LLM.
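As a rough analogy for this wiring, the snippet below chains two completions in plain Python using the OpenAI SDK: the first call's output feeds the second call's input. Diaflow does this visually with connected components; the model name and prompts here are made up for illustration.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def complete(prompt: str) -> str:
    """A single LLM call: prompt in, completion out."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# The first LLM's output becomes the second LLM's input,
# just like chaining two LLM components in a flow.
summary = complete("Summarize this message in one sentence: "
                   "The quarterly review moved to 3 PM on Friday.")
follow_up = complete(f"Draft a short reply acknowledging this: {summary}")
print(follow_up)
```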

You can include inputs in the LLM prompt by putting their IDs in double curly brackets. The prompt editor highlights valid input references in green and invalid ones in red.

When you start typing a double curly bracket ({{), the system suggests the available inputs for you to choose from.
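Conceptually, the substitution replaces each {{id}} with the value of the connected input that has that ID. The snippet below is a hypothetical illustration of that behavior rather than Diaflow's actual implementation, and the input name user_message is invented for the example.

```python
import re

def render_prompt(template: str, inputs: dict) -> str:
    """Replace every {{id}} placeholder with the matching connected input."""
    def substitute(match: re.Match) -> str:
        key = match.group(1).strip()
        if key not in inputs:
            # An unknown ID is what the prompt editor flags in red.
            raise KeyError(f"No connected input named '{key}'")
        return str(inputs[key])
    return re.sub(r"\{\{(.*?)\}\}", substitute, template)

template = "Summarize the following message in one sentence: {{user_message}}"
print(render_prompt(template, {"user_message": "The meeting moved to 3 PM on Friday."}))
```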

Below the prompt, you will also find all connected components.

You can also configure the LLM by clicking the ellipsis icon (...). This lets you adjust parameters such as temperature, maximum length, top_p, and so on.


LLM Settings

When working with prompts, you interact with the LLM via an API or directly. You can configure a few parameters to get different results for your prompts.

  • Temperature - In short, the lower the temperature, the more deterministic the results, in the sense that the most probable next token is always picked. Increasing the temperature leads to more randomness, which encourages more diverse or creative outputs; you are essentially increasing the weights of the other possible tokens. In terms of application, you might want to use a lower temperature value for tasks like fact-based QA to encourage more factual and concise responses. For poem generation or other creative tasks, it might be beneficial to increase the temperature value.

  • Top_p - Similarly, top_p (a sampling technique used alongside temperature, known as nucleus sampling) lets you control how deterministic the model is when generating a response. If you are looking for exact, factual answers, keep this value low. If you are looking for more diverse responses, increase it.

Note: The general recommendation is to alter temperature or top_p, not both.

  • Max Length - You can manage the number of tokens the model generates by adjusting the 'max length'. Specifying a max length helps you prevent long or irrelevant responses and control costs.

  • Stop Sequences - A 'stop sequence' is a string that stops the model from generating tokens. Specifying stop sequences is another way to control the length and structure of the model's response. For example, you can tell the model to generate lists that have no more than 10 items by adding "11" as a stop sequence.

  • Frequency Penalty - The 'frequency penalty' applies a penalty on the next token proportional to how many times that token has already appeared in the response and prompt. The higher the frequency penalty, the less likely a word is to appear again. This setting reduces the repetition of words in the model's response by giving a higher penalty to tokens that appear more often.

  • Presence Penalty - The 'presence penalty' also applies a penalty on repeated tokens but, unlike the frequency penalty, the penalty is the same for all repeated tokens. A token that appears twice and a token that appears 10 times are penalized the same. This setting prevents the model from repeating phrases too often in its response. If you want the model to generate diverse or creative text, you might want to use a higher presence penalty. Or, if you need the model to stay focused, try using a lower presence penalty.

Similar to temperature and top_p, the general recommendation is to alter the frequency or presence penalty, not both.
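To see where these settings appear in practice, here is a minimal sketch of a chat completion request made with the OpenAI Python SDK. Diaflow exposes the same settings through the component's configuration panel rather than code, and the values shown are only examples.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "List popular programming languages, one per line, numbered."}],
    temperature=0.2,        # low temperature: more deterministic, fact-oriented output
    # top_p=0.9,            # alternative to temperature; adjust one or the other, not both
    max_tokens=150,         # cap on generated tokens to control length and cost
    stop=["11."],           # stop sequence: cuts the numbered list off after 10 items
    frequency_penalty=0.0,  # scales with how often a token has already appeared
    presence_penalty=0.0,   # flat penalty on any token that has appeared at least once
)
print(response.choices[0].message.content)
```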

Keep in mind that your results may vary depending on the version of the LLM you use.

Source: https://www.promptingguide.ai/introduction/settings


The following sections detail the available AI components.
