Llama
To process a large amount of data based on context and input from the user using Llama Provider.
Llama, developed by Meta, is a family of open large language models built on the transformer architecture. It is designed to understand and generate natural-language text, making it useful for tasks such as document analysis, content generation, summarization, and question answering. The models have been trained on large text corpora to ensure accuracy and relevance in their responses.
The Llama component allows you to integrate Llama into your flows. You can customize the parameters used by the Llama component, specify the context of knowledge it operates on, and provide the input query. Both the context and the query are supplied to the Llama component by specifying Diaflow component identifiers. For example, the screenshot above shows the default user message of trigger.text, which refers to a Text Input component.
The Llama component has an identifier of the form an-X, where X is the instance number of the Llama component.
The Llama component has the following input connections.
From Data Loader / Data Source / Vector DB
This input connection represents the context information for the Llama model.
Must originate from a Data Loader, Data Source, or Vector DB component.
From Input
This input connection represents the user query for the Llama model.
Must originate from a component that generates a text string as output, such as a Python or Text Input component.
Credentials
You can use your own Llama credentials or, alternatively, Diaflow's default credentials.
Model
This parameter specifies the version of Llama that the component should use. Available values:
- Llama 3.2 11B Vision Instruct
- Llama 3 70B Instruct
- Llama 3 8B Instruct
- Llama 3.1 8B Instruct
- Llama 3.1 70B Instruct
- Llama 3.2 90B Vision Instruct
- Llama 3.2 3B Instruct
- Llama 3.2 1B Instruct
Prompt
Describes how you want the Llama model to respond. For example, you can specify the role, manner, and rules that Llama should adhere to. You can also reference component identifiers in the prompt to connect components.
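For illustration, a prompt might look like the following. The component identifiers trigger and vector-1 are hypothetical examples; substitute the identifiers from your own flow.

```
You are a helpful support assistant. Answer politely, using only the
documents provided in vector-1. If the answer is not in those documents,
say that you do not know. The user's question is: trigger.text
```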
Image source
Adds an image to your prompt by identifying a trigger file in this configuration.
Enable caching
This option determines whether the results of the component are cached. When enabled, on the next run of the Flow, Diaflow will reuse the previously computed component output as long as the inputs have not changed.
Caching time
Only applicable if the "Enable caching" option is enabled. This parameter controls how long Diaflow waits before automatically clearing the cache.
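The caching behaviour described above can be sketched as a small time-to-live cache. This is an illustrative model only, not Diaflow's actual implementation: results are reused until the caching time expires or the inputs change.

```python
import time

# Hypothetical sketch of "Enable caching" + "Caching time":
# a result is reused only if the inputs are identical and the
# cached entry is younger than the caching time (in seconds).
_cache = {}

def run_component(inputs, caching_time, compute):
    """Return a cached result when inputs are unchanged and fresh;
    otherwise recompute and store the result."""
    key = tuple(sorted(inputs.items()))
    now = time.time()
    if key in _cache:
        result, stored_at = _cache[key]
        if now - stored_at < caching_time:
            return result  # cache hit: previous output reused
    result = compute(inputs)       # cache miss or expired: recompute
    _cache[key] = (result, now)
    return result
```

Changing any input produces a new cache key, so the component is recomputed even if the cache entry has not yet expired.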
The Llama component has the following output connections.
To Output
This output connection contains the text result of the Llama component.
Can be connected to any component that accepts a string input.
Here is a simple use case of the Llama component: a Text Input component lets the user ask the Llama component questions.
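The wiring of that simple flow can be sketched as follows. This is a hypothetical stand-in for the components, not Diaflow's API; the identifiers trigger and an-1 are example names.

```python
# Illustrative sketch: Text Input -> Llama -> Output.

def text_input(value: str) -> str:
    """Stands in for the Text Input component (example identifier: trigger)."""
    return value

def llama_component(query: str) -> str:
    """Stands in for the Llama component (example identifier: an-1).
    A real flow would call the selected Llama model here."""
    return f"Answer to: {query}"

def output_component(text: str) -> str:
    """Stands in for the Output component, which displays the text result."""
    return text

# Wiring: trigger.text feeds the Llama component's user query,
# and the Llama component's text result feeds the Output component.
result = output_component(llama_component(text_input("What is Llama?")))
```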