You can send multiple prompts in parallel, constrain responses to specific formats, generate media, and let language models use any workflow as a tool.

Prompt

The prompt input is where you describe what you want the language model to do. It works just like the text input in ChatGPT, Claude, or any other language model chatbot. In Runchat, the prompt typically describes a task or action you want the language model to take, for example:
  • Does this meet criteria X? (output true / false)
  • Format this as Y (output an object)
  • Generate a list of options for Z (output an array)
  • Save this to a Google Sheet for me (use Google Sheet tool)

Context

The context input is where you provide additional information to help the language model understand your prompt. It works just like the conversation history in most chatbots. In Runchat, the context is typically used for:
  • Conversation history from previous prompts
  • Images / Documents for the prompt to act on
  • Input from a user running the Runchat as an App

Format

The Format input is where you specify an exact format for the language model to respond in. The default output format is text; to display the Format input, change the dropdown in the settings from Markdown to Custom. In Runchat, the format is typically used for:
  • Generating lists of content to be run in parallel by other nodes
  • Creating structured data from unstructured inputs (e.g. parsing a blog to header and content fields)
  • Formatting data for external APIs
  • Flow control (outputting true / false values)
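To make the structured-data case above concrete: conceptually, a Custom format constrains the model to return JSON matching a schema you define, which downstream nodes can then parse field by field. The sketch below is illustrative only — the schema syntax and the "header" / "content" field names follow the blog-parsing example, not Runchat's actual internal representation.

```python
import json

# Illustrative schema like one you might define in the Format input
# (field names are hypothetical, taken from the blog-parsing example).
schema = {
    "type": "object",
    "properties": {
        "header": {"type": "string"},
        "content": {"type": "string"},
    },
    "required": ["header", "content"],
}

# A conforming model response is plain JSON, so downstream nodes can parse it.
response_text = '{"header": "Why formats matter", "content": "Structured output..."}'
parsed = json.loads(response_text)

# Minimal structural check mirroring the schema's "required" fields.
assert all(key in parsed for key in schema["required"])
print(parsed["header"])  # → Why formats matter
```

The point of the constraint is that the model can no longer answer with free-form prose: every response is guaranteed to parse, so nodes that consume the output never need to handle malformed text.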

Format Types

You can generate responses in several different formats:
  • Text: default markdown responses
  • Number: constrain output to numerical responses only
  • Boolean: constrain output to true / false values only, which is useful for flow control
  • Object: constrain output to objects that match a specified schema
  • Array: output a list with a specified type
  • Enum: constrain output to one of a list of options
  • Request: constrain output to an object in API request format (url, method, header, body)
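The Request format's fields map directly onto a standard HTTP request, which is what makes it useful for formatting data for external APIs. As an illustration (the endpoint and payload below are made up), a conforming output could be turned into an actual call by a downstream step:

```python
import json
import urllib.request

# Example of an object in the Request format described above:
# url, method, header, body. Endpoint and payload are hypothetical.
request_obj = {
    "url": "https://example.com/api/posts",
    "method": "POST",
    "header": {"Content-Type": "application/json"},
    "body": {"title": "Hello", "draft": True},
}

# A downstream node could build an HTTP request from that object:
req = urllib.request.Request(
    request_obj["url"],
    method=request_obj["method"],
    headers=request_obj["header"],
    data=json.dumps(request_obj["body"]).encode("utf-8"),
)
# urllib.request.urlopen(req) would send it; skipped here to stay offline.
print(req.method, req.full_url)  # → POST https://example.com/api/posts
```

Because the model's output is constrained to this shape, the node that executes the request never has to guess where the URL or body lives in the response.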

Search

You can enable search from the Tools settings in the Agent node. When search is enabled, the model provider performs a web search and passes the first result to the language model along with your prompt. This lets you ground responses in real, factual data and is especially useful for prompts that require up-to-date information. Search with OpenRouter models costs $4 per 1,000 requests and is charged to your OpenRouter account. Search requests with Gemini cost credits. If you need more requests, get in touch.
For more control over search parameters and results, use the tools in the Search library.