Process large amounts of data based on context and user input using the OpenAI provider.

The OpenAI component allows you to integrate OpenAI into your Flows. You can customize the parameters used by the OpenAI component, specify the context of knowledge that it operates on, and provide the input query.

The OpenAI component UI changes depending on the selected model, as each model offers different options. You can select the exact model to run with the "Model" dropdown menu. The available models range from text-to-image and GPT chat models to GPT vision, text-to-speech, and speech-to-text models. See the Parameters table for more information on the available models.

The OpenAI component has the identifier of opa-X, where X represents the instance number of the OpenAI component.

The OpenAI component has the following general parameters that can be specified directly on the UI component.

Parameter Name: Credentials
Description: You can use your own OpenAI credentials or, alternatively, Diaflow's default credentials.

Parameter Name: Model
Description: This parameter specifies the OpenAI model that the component should use. Available values: GPT 3.5 Turbo, GPT 3.5 Turbo 16K, GPT 3.5 Turbo Instruct, GPT-4, GPT-4 32K, GPT-4 Vision, DALL-E 2, DALL-E 3, TTS-1, TTS-1 HD, Whisper-1.
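The Model parameter above ultimately selects an OpenAI API model identifier. As a rough sketch of how the component's inputs could map onto a request to the official `openai` Python SDK, the helper below assembles the keyword arguments that `client.chat.completions.create()` would receive. The mapping from UI labels to API identifiers is an assumption based on OpenAI's public model names; Diaflow's internal mapping may differ.

```python
# Hypothetical mapping from the component's UI model labels to
# OpenAI API model identifiers (assumed; Diaflow may map them differently).
UI_TO_API_MODEL = {
    "GPT 3.5 Turbo": "gpt-3.5-turbo",
    "GPT 3.5 Turbo 16K": "gpt-3.5-turbo-16k",
    "GPT 3.5 Turbo Instruct": "gpt-3.5-turbo-instruct",
    "GPT-4": "gpt-4",
    "GPT-4 32K": "gpt-4-32k",
    "GPT-4 Vision": "gpt-4-vision-preview",
    "DALL-E 2": "dall-e-2",
    "DALL-E 3": "dall-e-3",
    "TTS-1": "tts-1",
    "TTS-1 HD": "tts-1-hd",
    "Whisper-1": "whisper-1",
}

def build_chat_request(ui_model: str, context: str, query: str) -> dict:
    """Assemble keyword arguments for client.chat.completions.create()."""
    return {
        "model": UI_TO_API_MODEL[ui_model],
        "messages": [
            {"role": "system", "content": context},  # the knowledge context
            {"role": "user", "content": query},      # the input query
        ],
    }

request = build_chat_request(
    "GPT 3.5 Turbo",
    "You answer questions about the uploaded invoices.",
    "Summarize the latest invoice.",
)
print(request["model"])  # gpt-3.5-turbo

# With API credentials configured, the actual call would look like:
# from openai import OpenAI
# response = OpenAI().chat.completions.create(**request)
```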

Each of the above AI models serves a different purpose, ranging from natural language understanding and generation (GPT), image generation (DALL-E), and text-to-speech synthesis (TTS) to speech-to-text transcription (Whisper). They vary in terms of capabilities, modalities, and target applications.

Each of the available models is summarized below:

GPT Variants

  1. GPT 3.5 Turbo:

    • This is an enhanced version of the GPT-3 model, optimized for better performance, accuracy, or efficiency compared to the original GPT-3.

  2. GPT 3.5 Turbo 16K:

    • Similar to GPT 3.5 Turbo, but with a 16,000-token (16K) context window, allowing it to process longer prompts and conversations in a single request.

  3. GPT 3.5 Turbo Instruct:

    • A variant of GPT 3.5 Turbo optimized for instruction-based learning or fine-tuning on specific tasks. It excels in scenarios where the model receives guidance or instructions during the generation process.

  4. GPT-4:

    • Represents the next iteration of the GPT series after GPT-3, with improvements in model capacity, performance, and capabilities.

  5. GPT-4 32K:

    • A variant of GPT-4 with a 32,000-token (32K) context window, larger than the standard GPT-4 context window, so it can handle longer inputs.
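The 16K and 32K context windows above are measured in tokens, not parameters. As a minimal sketch of why the larger variants exist, the heuristic below (assuming the common rule of thumb of roughly 4 characters per English token) estimates whether a prompt fits the standard GPT 3.5 Turbo window or needs the 16K variant. Real applications should count tokens with a proper tokenizer rather than this approximation.

```python
def estimate_tokens(text: str) -> int:
    """Very rough heuristic: about 4 characters per English token."""
    return max(1, len(text) // 4)

def pick_gpt35_variant(prompt: str) -> str:
    """Choose between gpt-3.5-turbo (4K-token window in its original
    release) and gpt-3.5-turbo-16k based on the estimated prompt size."""
    return "gpt-3.5-turbo" if estimate_tokens(prompt) <= 4096 else "gpt-3.5-turbo-16k"

print(pick_gpt35_variant("Short question"))  # gpt-3.5-turbo
print(pick_gpt35_variant("x" * 100_000))     # gpt-3.5-turbo-16k
```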

GPT Vision

  1. GPT-4 Vision:

    • A version of GPT-4 tailored specifically for understanding visual content, such as images. It extends the capabilities of traditional GPT models by accepting image inputs alongside text, so the model can describe, analyze, and answer questions about pictures.

DALL-E Variants

  1. DALL-E 2:

    • A version of OpenAI's DALL-E model, which generates images from textual descriptions. DALL-E 2 improves on the original DALL-E in image quality and realism.

  2. DALL-E 3:

    • Another iteration of the DALL-E model, with further enhancements compared to DALL-E 2.

TTS Variants

  1. TTS-1:

    • Stands for Text-to-Speech 1, a model designed to convert written text into spoken audio. It provides high-quality and natural-sounding speech synthesis.

  2. TTS-1 HD:

    • A variant of TTS-1 optimized for high-definition audio synthesis, offering even higher fidelity and clarity in the generated speech.
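A speech-synthesis request with either TTS variant differs only in the model identifier. The sketch below builds the keyword arguments for `client.audio.speech.create()` in the official `openai` Python SDK; the voice name "alloy" is one of the voices the API offers and is an illustrative choice, not a Diaflow default.

```python
def build_speech_request(text: str, hd: bool = False) -> dict:
    """Assemble keyword arguments for client.audio.speech.create()."""
    return {
        "model": "tts-1-hd" if hd else "tts-1",
        "voice": "alloy",  # illustrative voice choice
        "input": text,
    }

print(build_speech_request("Hello there", hd=True)["model"])  # tts-1-hd

# With API credentials configured, the actual call would look like:
# from openai import OpenAI
# audio = OpenAI().audio.speech.create(**build_speech_request("Hello there"))
```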


Whisper

  1. Whisper-1:

    • A speech-to-text (automatic speech recognition) model that transcribes spoken audio into written text. It also supports translating speech in other languages into English text.
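A transcription call needs an audio file and API credentials, so the SDK call itself is shown in comments below; the runnable part is a small helper that checks a filename against the audio formats the Whisper endpoint documents as accepted (an assumption worth re-checking against OpenAI's current docs).

```python
import os

# Audio formats the Whisper transcription endpoint accepts
# (per OpenAI's documentation; verify against the current docs).
SUPPORTED_FORMATS = {".mp3", ".mp4", ".mpeg", ".mpga", ".m4a", ".wav", ".webm"}

def is_transcribable(filename: str) -> bool:
    """Check the file extension against the supported audio formats."""
    return os.path.splitext(filename)[1].lower() in SUPPORTED_FORMATS

print(is_transcribable("meeting.wav"))  # True

# With API credentials configured, the actual call would look like:
# from openai import OpenAI
# with open("meeting.wav", "rb") as f:
#     text = OpenAI().audio.transcriptions.create(model="whisper-1", file=f).text
```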

For more information on the various OpenAI models, please refer to the following subsections.
