Generate a Beast Mode calculation based on the given text input and data source schema.
Text-to-Beast-Mode AI Service Request.
A prompt template is a string that contains placeholders for parameters that will be replaced with parameter values before the prompt is submitted to the model.
A default prompt template is set for each model configured for the Text-to-Beast-Mode AI Service. Individual requests can override the default template by including the promptTemplate parameter.
The following request parameters are automatically injected into the prompt template if the associated placeholder is present:
Models with built-in support for system prompts and chat message history do not need to include system or chatContext in the prompt template.
Additional parameters can be provided in the parameters map as key-value pairs.
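The placeholder-injection mechanism described above can be sketched with Python's `string.Template`, whose `${name}` syntax matches the documented `${system}` placeholder. The `${schema}` and `${text}` placeholder names below are assumptions for illustration, not a confirmed part of the service's template syntax:

```python
from string import Template

# A minimal sketch of prompt-template placeholder injection.
# "${system}" is the documented placeholder form; "${schema}" and
# "${text}" are hypothetical names used only for this example.
prompt_template = Template(
    "${system}\n\nSchema:\n${schema}\n\nQuestion: ${text}"
)

# Each request parameter replaces its placeholder before the prompt
# is submitted to the model.
filled = prompt_template.safe_substitute(
    system="You generate Beast Mode calculations.",
    schema="sales (amount DOUBLE, region STRING)",
    text="Total sales amount by region",
)
print(filled)
```

`safe_substitute` leaves any placeholder without a matching parameter untouched, which mirrors the rule that a parameter is injected only "if the associated placeholder is present."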
The input text.
The AI session ID. If provided, this request will be associated with the specified AI Session.
The data source schema and metadata to be included in the Text-to-Beast-Mode task prompt to generate a SQL Calculation.
The prompt template to use for the Text-to-Beast-Mode task. The default prompt template will be used if not provided.
Custom parameters to inject into the prompt template if an associated placeholder is present.
The ID of the model to use for Text-to-Beast-Mode. The specified model must be configured for the Text-to-Beast-Mode AI Service by an Admin.
Additional model-specific configuration parameters as key-value pairs (e.g. temperature, max_tokens).
The system message to use for the Text-to-SQL task. If not provided, the default system message will be used. If the model does not include built-in support for system prompts, this parameter may be included in the prompt template using the "${system}" placeholder.
Controls randomness in the model's output. Lower values make output more deterministic.
The maximum number of tokens to generate in the response.
Whether to disable validation of the generated Beast Mode calculation.
Configuration for reasoning behavior and effort level.
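The request parameters above can be assembled into a body like the following sketch. Only promptTemplate and the `${system}` placeholder are named in this page; every other field name here ("input", "sessionId", "dataSourceSchema", "model", "modelConfiguration") is a hypothetical label chosen to match the field descriptions, not a confirmed schema:

```python
import json

# Hypothetical Text-to-Beast-Mode request body. Field names other than
# "promptTemplate" are assumptions for illustration only.
request_body = {
    "input": "Total sales amount by region",          # the input text
    "sessionId": "example-session-id",                # optional AI Session ID
    "dataSourceSchema": {                             # schema and metadata
        "columns": [
            {"name": "region", "type": "STRING"},
            {"name": "amount", "type": "DOUBLE"},
        ]
    },
    "promptTemplate": "${system}\n\nSchema: ${schema}\n\nTask: ${text}",
    "parameters": {"dialect": "Beast Mode"},          # custom placeholder values
    "model": "example-model-id",                      # must be Admin-configured
    "modelConfiguration": {"temperature": 0.0, "max_tokens": 512},
}

payload = json.dumps(request_body, indent=2)
print(payload)
```

A low temperature is used here because calculation generation benefits from deterministic output, per the temperature description above.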
TextAIResponse: the generated calculation and model token usage information.
Response from a text AI Service.
The formatted prompt that was used to generate the response.
The list of choices generated by the model.
The ID of the model used to generate the response.
The ID of the AI Session associated with this request.
The output of the model.
The token usage from the model provider.
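Reading the response fields above might look like the following sketch. The field names ("prompt", "choices", "modelId", "sessionId", "output", "usage") are assumptions modeled on the descriptions in this section, not a confirmed wire format:

```python
# Hypothetical TextAIResponse payload; field names are assumptions
# based on the documented field descriptions.
response = {
    "prompt": "formatted prompt sent to the model",
    "choices": [{"output": "SUM(`amount`)"}],       # model outputs
    "modelId": "example-model-id",
    "sessionId": "example-session-id",
    "usage": {"promptTokens": 120, "completionTokens": 8},
}

# Take the first generated calculation, if any choice was returned.
calculation = response["choices"][0]["output"] if response["choices"] else None
print(calculation)
```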