Diaflow's Documentation
Llama

Process large amounts of data based on context and user input using the Llama provider.


Description

Llama, developed by Meta, is a family of open large language models based on the transformer architecture. The models are trained on large text corpora to understand and generate natural language, making them useful for tasks such as question answering, summarization, document analysis, and content generation.

The Llama component allows you to integrate Llama into your flows. You can customize the parameters used by the Llama component, specify the context of knowledge that it operates on, and provide the input query. Both the context and the query are passed to the Llama component by referencing Diaflow component identifiers. For example, the default user message trigger.text refers to a Text Input component.

The Llama component has an identifier of the form an-X, where X is the instance number of the Llama component.
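Diaflow resolves these identifiers internally, so you never write this code yourself. Purely as a conceptual sketch (the function name and the mapping dictionary are illustrative, not part of Diaflow's API), the substitution of component identifiers inside a prompt might look like this:

```python
import re

def resolve_placeholders(template: str, outputs: dict) -> str:
    """Replace component references such as 'trigger.text' or 'an-1'
    with the corresponding component's output value."""
    def lookup(match: re.Match) -> str:
        key = match.group(0)
        # Fall back to the raw identifier if no output is available yet.
        return str(outputs.get(key, key))

    # Match identifiers like trigger.text, an-1, an-2, ...
    return re.sub(r"\b(?:trigger\.text|an-\d+)\b", lookup, template)

prompt = resolve_placeholders(
    "Answer the user's question: trigger.text",
    {"trigger.text": "What is Llama?"},
)
```

Here the Text Input component's output replaces trigger.text before the assembled prompt is sent to the model.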

Inputs

The Llama component has the following input connections.

| Input Name | Description | Constraints |
| --- | --- | --- |
| From data Loaders/Data source/Vector DB | This input connection represents the context information for the Llama model. | Must originate from a Data Loader, Data Source, or Vector DB component. |
| From Input | This input connection represents the user query for the Llama model. | Must originate from a component that generates a text string as output, such as a Python or Text Input component. |

Component settings

| Parameter Name | Description |
| --- | --- |
| Credentials | You can use your own Llama credentials or, alternatively, Diaflow's default credentials. |
| Model | Specifies the version of Llama that the component should use. Available values: Llama 3.2 11B Vision Instruct, Llama 3 70B Instruct, Llama 3 8B Instruct, Llama 3.1 8B Instruct, Llama 3.1 70B Instruct, Llama 3.2 90B Vision Instruct, Llama 3.2 3B Instruct, Llama 3.2 1B Instruct. |
| Prompt | Describes how you want the Llama model to respond. For example, you can specify the role, manner, and rules that Llama should adhere to. Reference a component ID in the prompt to connect components. |
| Image source | Adds an image to your prompt by identifying a trigger file in this configuration. |

Advanced configurations

| Option | Description |
| --- | --- |
| Enable caching | Determines whether the results of the component are cached. On the next run of the flow, Diaflow reuses the previously computed component output as long as the inputs have not changed. |
| Caching time | Only applicable if the "Enable caching" option has been enabled. Controls how long Diaflow waits before automatically clearing the cache. |
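The caching behavior described above — reuse the stored result while the inputs are unchanged, expire it after the caching time — can be sketched conceptually. This is not Diaflow's implementation; the class name, key scheme, and time units are assumptions for illustration:

```python
import hashlib
import json
import time

class ComponentCache:
    """Conceptual sketch of input-keyed result caching with a time limit."""

    def __init__(self, caching_time: float):
        self.caching_time = caching_time  # seconds before an entry expires
        self._store = {}

    def _key(self, inputs: dict) -> str:
        # Hash the inputs so identical inputs map to the same cache entry.
        blob = json.dumps(inputs, sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()

    def get(self, inputs: dict):
        entry = self._store.get(self._key(inputs))
        if entry is None:
            return None  # never computed for these inputs
        value, stored_at = entry
        if time.time() - stored_at > self.caching_time:
            return None  # expired: the component must run again
        return value

    def put(self, inputs: dict, value) -> None:
        self._store[self._key(inputs)] = (value, time.time())
```

Changing any input produces a different key, which is why cached output is only reused "as long as the inputs have not changed."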

Outputs

The Llama component has the following output connections.

| Output Name | Description | Constraints |
| --- | --- | --- |
| To Output | This output connection contains the text result of the Llama component. | Can be connected to any component that accepts a string input. |

Use case

Here is a simple use case of the Llama component: a user asks questions via a Text Input component, and the Llama component generates the answers.
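In Diaflow you build this flow visually, so no code is required. As a rough mental model only (the function names and placeholder strings below are illustrative stand-ins, not Diaflow components), the Text Input → Llama → Output chain behaves like a simple function pipeline:

```python
def text_input() -> str:
    # Stand-in for the Text Input component (referenced as trigger.text).
    return "What can I build with this flow?"

def llama_component(context: str, query: str) -> str:
    # Stand-in for the Llama component: Diaflow makes the model call for
    # you; here we only show what the component receives and passes on.
    return f"[Llama response to: {query} | context: {context}]"

def text_output(text: str) -> None:
    # Stand-in for a Text Output component.
    print(text)

query = text_input()
answer = llama_component(context="(documents from a Vector DB)", query=query)
text_output(answer)
```

Each connection in the flow corresponds to one value being passed from a component's output to the next component's input.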
