complete

The complete method is akin to the execute method: both relay requests to a large language model (LLM) for processing. The difference is that the complete method requires you to construct the messages parameter and handle the returned result yourself. In general, we advise against using this method.

async complete(
  messages: {
    role: string,
    content: string,
    name?: string,
    functionCall?: {
      name: string,
      arguments: string
    }
  }[],
  parameters?: {
    temperature?: number;
    topP?: number;
    stop?:
      | []
      | [string]
      | [string, string]
      | [string, string, string]
      | [string, string, string, string];
    maxTokens?: number;
    presencePenalty?: number;
    frequencyPenalty?: number;
    functions?: {
      name: string;
      description?: string;
      parameters: object
    }[];
    functionCall?: "none" | "auto" | { name: string };
    user?: string;
  }
): Promise<any>;

Reference

Overview

import { myLLMExecutor } from "#elements";

...

const messages = [
  {
    role: "system" as const,
    content: "You are an AI research assistant. You use a tone that is technical and scientific."
  },
  {
    role: "assistant" as const,
    content: "Greetings! I am an AI research assistant. How can I help you today?"
  }
];
const result = await myLLMExecutor.complete(messages);
console.log(result.choices[0].message.content);

...
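
The optional second argument tunes generation. A minimal sketch using the parameters fields from the signature above; the specific values are illustrative only:

// Ask for a short, focused completion.
const result = await myLLMExecutor.complete(messages, {
  temperature: 0.2, // lower values make output more focused and deterministic
  maxTokens: 256,   // cap the number of generated tokens
  stop: ["\n\n"]    // stop generating at the first blank line
});
console.log(result.choices[0].message.content);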

Parameters

  • messages: A list of messages comprising the conversation so far.

    • role: The role of the message's author. One of system, user, assistant, or function.
    • content: The contents of the message. content is required for all messages, and may be null for assistant messages with function calls.
    • name: (optional) The name of the author of this message. name is required if role is function, and it should be the name of the function whose response is in the content. May contain a-z, A-Z, 0-9, and underscores, with a maximum length of 64 characters.
    • functionCall: (optional) The name and arguments of a function that should be called, as generated by the model.
      • name: The name of the function to call.
      • arguments: The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function (see the sketch after this list).
  • parameters: Optional parameters, containing the following fields:

    • temperature: (optional) What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
    • topP: (optional) An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
    • stop: (optional) Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
    • maxTokens: (optional) The maximum number of tokens to generate in the completion. The token count of your prompt plus max tokens cannot exceed the model's context length.
    • presencePenalty: (optional) Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
    • frequencyPenalty: (optional) Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
    • functionCall: (optional) Controls how the model calls functions. "none" means the model will not call a function and instead generates a message. "auto" means the model can pick between generating a message or calling a function. Specifying a particular function via {"name": "my_function"} forces the model to call that function. "none" is the default when no functions are present; "auto" is the default if functions are present.
    • functions: (optional) A list of functions the model may generate JSON inputs for (see the sketch after this list).
      • name: The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
      • description: (optional) A description of what the function does, used by the model to choose when and how to call the function.
      • parameters: The parameters the function accepts, described as a JSON Schema object.
    • user: (optional) A unique identifier representing your end-user, which can help OpenAI monitor and detect abuse.
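
Where your Service Provider supports function calling (see Caveats below), the typical flow is: declare your functions, let the model decide whether to call one, then validate and dispatch the arguments yourself. The following is a minimal sketch; the getWeather helper and any response fields beyond choices[0].message are illustrative assumptions, not part of the documented API:

// Hypothetical local function exposed to the model (illustration only).
function getWeather(args: { city: string }): string {
  return `Sunny in ${args.city}`;
}

const result = await myLLMExecutor.complete(
  [{ role: "user", content: "What is the weather in Paris?" }],
  {
    functions: [
      {
        name: "get_weather",
        description: "Get the current weather for a city.",
        // The arguments, described as a JSON Schema object.
        parameters: {
          type: "object",
          properties: { city: { type: "string" } },
          required: ["city"]
        }
      }
    ],
    functionCall: "auto" // let the model choose between text and a function call
  }
);

// Assumption: the returned message mirrors the message shape documented above
// and may carry a functionCall instead of text content.
const message = result.choices[0].message;
if (message.functionCall?.name === "get_weather") {
  // The model's arguments arrive as a JSON string and may be malformed or
  // include fields outside the schema, so parse and validate before calling.
  let args: { city?: unknown };
  try {
    args = JSON.parse(message.functionCall.arguments);
  } catch {
    throw new Error("Model returned invalid JSON arguments");
  }
  if (typeof args.city !== "string") {
    throw new Error("Missing or invalid 'city' argument");
  }
  console.log(getWeather({ city: args.city }));
} else {
  console.log(message.content);
}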

Returns

The complete method returns a JSON value. As the Overview example shows, the generated message can be read from result.choices[0].message.content.

Caveats

The built-in Service Provider does not currently support function calling.