Whisper
Convert spoken audio into text, providing transcription and translation capabilities in your flows.
Last updated
The OpenAI component allows you to integrate OpenAI Whisper speech-to-text into your flows. In particular, the following versions are supported:
Whisper-1
Whisper is an AI model developed by OpenAI. It is a powerful tool for converting spoken language into text, with capabilities that extend to multiple languages and varied audio conditions.
The OpenAI component has the identifier opa-X, where X represents the instance number of the OpenAI component.
The OpenAI component has the following input connections.
Input Name | Description | Constraints |
---|---|---|
From Data Loaders | This input connection represents the context information for the OpenAI model. | Must originate from a Data Loader, Data Source or VectorDB component. |
From Audio Input | This input connection carries the audio for the OpenAI model to transcribe or translate. | |
Parameter Name | Description |
---|---|
Credentials | Specifies whether to use your own OpenAI credentials or Diaflow's default credentials. |
Model | This parameter specifies the version of OpenAI that the component should use. Available values: - Whisper-1 |
Endpoint | This parameter selects the operation to perform. Available options: - Transcriptions - Translations |
Data source |
Options | Description |
---|---|
Temperature | This parameter controls the level of randomness in the generated transcription. A lower temperature value results in more deterministic and conservative outputs, while a higher temperature value allows more variation. Adjusting the temperature parameter lets you fine-tune the balance between predictability and variability in the Whisper output. |
Response format | Specifies the format of the output text data. |
Language | For more accurate results, specify the input audio language from this dropdown box. |
Prompt | Guides the style of the output text. For example, you can provide the spelling of uncommon terms or context about the audio that the model should adhere to. Also mention the component ID to connect the components. |
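For reference, the parameters above map closely onto the arguments of OpenAI's Audio API. The sketch below is a hypothetical illustration of that mapping; the function name and structure are ours, not Diaflow's internal code:

```python
from typing import Optional

def build_whisper_args(endpoint: str,
                       temperature: float = 0.0,
                       response_format: str = "text",
                       language: Optional[str] = None,
                       prompt: Optional[str] = None) -> dict:
    """Map the component's parameters onto OpenAI Audio API arguments."""
    if endpoint not in ("Transcriptions", "Translations"):
        raise ValueError("Endpoint must be 'Transcriptions' or 'Translations'")
    args = {"model": "whisper-1",
            "temperature": temperature,
            "response_format": response_format}
    if prompt:
        args["prompt"] = prompt
    # The Translations endpoint always produces English text, so the
    # 'language' hint only applies to Transcriptions.
    if endpoint == "Transcriptions" and language:
        args["language"] = language
    return args
```

For example, `build_whisper_args("Transcriptions", language="en")` includes the language hint, while the same call with the Translations endpoint omits it.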
The OpenAI component has the following output connections.
Output Name | Description | Constraints |
---|---|---|
To Output | This output connection contains the text result of the OpenAI component. | Can be connected to any component that accepts a string input. |
Here is a simple use case of the Whisper component, where it is used with the whisper-1 model to translate an audio file into English text.
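Under the hood, this use case corresponds to a call to the Translations endpoint of OpenAI's Audio API. The sketch below shows that call with the openai Python SDK; the function name and file name are illustrative, not part of Diaflow:

```python
def translate_audio(path: str, client) -> str:
    """Send an audio file to the whisper-1 Translations endpoint and
    return the English transcript as plain text."""
    with open(path, "rb") as audio_file:
        return client.audio.translations.create(
            model="whisper-1",
            file=audio_file,
            response_format="text",
        )

# Usage (requires the openai package and an OPENAI_API_KEY):
# from openai import OpenAI
# print(translate_audio("speech_fr.mp3", OpenAI()))
```

Passing the client in as an argument keeps the function easy to test and mirrors how the component swaps between your own credentials and Diaflow's defaults.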