Google Gemini
To process a large amount of data based on context and input from the user using Google Gemini Provider.
Google Gemini is an advanced generative AI model developed by Google. It is part of the broader landscape of large language models (LLMs) and represents a significant step forward in AI technology. A distinguishing feature of Gemini is its capacity to work with multiple forms of data (text, images, audio, and video).
The Google Gemini component allows you to integrate Gemini into your flows. You can customize the parameters used by the component, specify the context of knowledge that the component operates on, and provide the input query. Both the context and the query are supplied to the Gemini component by referencing Diaflow component identifiers. For example, the screenshot above shows the default user message of trigger.question, which is a Text Input component.
The Google Gemini component has the identifier gg-X, where X is the instance number of the Gemini component.
The Google Gemini component has the following input connections.
From Data Loader / Data Source / Vector DB
This input connection represents the context information for the Gemini model.
Must originate from a Data Loader, Data Source, or Vector DB component.
From Input
This input connection represents the user query for the Gemini model.
Must originate from a component that generates a text string as output such as a Python or Text Input component.
Credentials
You can use your own Gemini credentials or, alternatively, Diaflow's default credentials.
Model
This parameter specifies the version of Gemini that the component should use.
Available values:
- gemini-pro
- gemini-pro-vision
Prompt
Describes how you want the Gemini model to respond. For example, you can specify the role, manner, and rules that Gemini should adhere to. You can also reference a component ID in the prompt to connect components.
Enable caching
This option determines whether the results of the component are cached. On the next run of the Flow, Diaflow will reuse the previously computed component output, as long as the inputs have not changed.
Caching time
Only applicable if the "Enable Caching" option has been enabled. This parameter controls how long Diaflow will wait before automatically clearing the cache.
Clear cache
Only applicable if the "Enable Caching" option has been enabled. Clicking this button will clear the cache.
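The caching behavior described above can be sketched as an input-keyed cache with a time-to-live. This is a minimal illustration only; the class and method names are hypothetical and do not reflect Diaflow's internal implementation:

```python
import hashlib
import json
import time

class ComponentCache:
    """Illustrative sketch of input-keyed caching with a TTL,
    mirroring the Enable Caching / Caching Time / Clear Cache options."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (timestamp, cached output)

    def _key(self, inputs):
        # Hash the component inputs: unchanged inputs yield the same key.
        payload = json.dumps(inputs, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

    def get(self, inputs):
        key = self._key(inputs)
        entry = self.store.get(key)
        if entry is None:
            return None
        timestamp, output = entry
        if time.time() - timestamp > self.ttl:
            # Entry expired: drop it, as if the cache had been cleared.
            del self.store[key]
            return None
        return output

    def put(self, inputs, output):
        self.store[self._key(inputs)] = (time.time(), output)

    def clear(self):
        # Equivalent of pressing the Clear Cache button.
        self.store.clear()
```

A component run would first call `get`; only on a miss would it invoke the model and then `put` the fresh output.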
Memory
The ability of the model to remember and utilize context within a single session. The context window represents the maximum amount of text the model can consider.
Window size
Only applicable if the "Memory" option has been enabled. The Window Size option refers to the number of previous conversation turns that the model can remember. Valid range for this parameter is 0 to 1000.
View test memory
Only applicable if the "Memory" option has been enabled. Opens a window to display the history of prompts and completions.
Clear test memory
Only applicable if the "Memory" option has been enabled. Clicking this button will clear the history of prompts and completions.
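The Window Size option can be pictured as a sliding window over conversation turns. Here is a minimal Python sketch of that idea; the names are hypothetical and this is not Diaflow's actual memory implementation:

```python
from collections import deque

class ConversationMemory:
    """Illustrative sliding-window memory: only the most recent
    `window_size` conversation turns are retained."""

    def __init__(self, window_size):
        # deque with maxlen silently discards the oldest turn when full.
        self.turns = deque(maxlen=window_size)

    def add_turn(self, prompt, completion):
        self.turns.append((prompt, completion))

    def context(self):
        # Flatten the remembered turns into text for the next request.
        return "\n".join(f"User: {p}\nModel: {c}" for p, c in self.turns)

    def clear(self):
        # Equivalent of pressing the Clear Test Memory button.
        self.turns.clear()
```

With a window size of 2, adding a third turn evicts the first, so only the two most recent turns are sent back to the model.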
Temperature
The temperature is used to control the randomness of the output. When you set it higher, you'll get more random outputs. When you set it lower, towards 0, the output becomes more deterministic. Valid range for this parameter is 0 to 1.
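To see why lower temperature means more deterministic output, here is a toy sketch of temperature-scaled softmax over next-token scores (an illustration only, not Gemini's actual sampler):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw next-token scores into probabilities, with
    temperature scaling: low temperature sharpens the distribution
    toward the top-scoring token, high temperature flattens it."""
    t = max(temperature, 1e-6)  # avoid division by zero near 0
    scaled = [score / t for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```

With logits `[2.0, 1.0, 0.1]`, a temperature of 0.1 puts almost all probability on the first token, while a temperature of 1.0 spreads it more evenly.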
Max length
The Max Length parameter refers to the maximum number of tokens allowed in the model's response. Tokens can be individual words or parts of words. By setting the max length, you can control the length of the response generated by the model. It's important to note that longer texts may result in higher costs and longer response times. Valid range for this parameter is 0 to 3097.
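A crude way to picture a token limit is truncating a text to a fixed number of tokens. The sketch below uses whitespace-separated words as stand-in tokens; real models use subword tokenizers, so actual counts differ:

```python
def truncate_to_max_tokens(text, max_tokens):
    """Illustration of a max-length cutoff, with whitespace words
    standing in for real subword tokens."""
    tokens = text.split()
    return " ".join(tokens[:max_tokens])
```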
Top P
Top-p sampling involves selecting the next token from the smallest possible set of tokens whose cumulative probability is greater than or equal to the specified probability p, typically between 0 and 1.
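That selection rule can be sketched directly: sort candidate tokens by probability and keep adding them until the cumulative probability reaches p. This is an illustration of the idea, not Gemini's internal sampler:

```python
def top_p_filter(probs, p):
    """Return the smallest set of tokens (highest probability first)
    whose cumulative probability reaches p (nucleus / top-p sampling).
    `probs` maps token -> probability."""
    nucleus = []
    cumulative = 0.0
    for token, prob in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        nucleus.append(token)
        cumulative += prob
        if cumulative >= p:
            break
    return nucleus
```

A low p restricts sampling to a few high-probability tokens (more focused output); p near 1 admits almost the whole vocabulary (more varied output).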
The Google Gemini component has the following output connections.
To Output
This output connection contains the text result of the Google Gemini component.
Can be connected to any component that accepts a string input.
Here is a simple use case of the Google Gemini component, where the Gemini component is used with the gemini-pro model to generate text. In this case, we ask Gemini to return a list of colors that exist on Earth.
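Outside Diaflow, roughly the same flow could be reproduced with Google's `google-generativeai` Python SDK. The prompt wiring below is an assumption for illustration; only `genai.configure`, `GenerativeModel`, and `generate_content` are real SDK calls:

```python
import os

def build_prompt(context, question):
    """Combine context (e.g. from a Data Loader) with the user
    question (e.g. trigger.question) into a single prompt string."""
    return f"Context:\n{context}\n\nQuestion: {question}"

def ask_gemini(prompt):
    """Send the prompt to the gemini-pro model and return its text.
    Requires `pip install google-generativeai` and a GOOGLE_API_KEY."""
    import google.generativeai as genai
    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    model = genai.GenerativeModel("gemini-pro")
    return model.generate_content(prompt).text

# Usage (needs valid credentials):
# prompt = build_prompt("You are a helpful assistant.",
#                       "List the colors that exist on Earth.")
# print(ask_gemini(prompt))
```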