API Reference
Learn more about the API reference for embedded function calling.
runWithTools
This wrapper method enables you to do embedded function calling. You pass it the AI binding, the model, the inputs (a messages array and a tools array), and optional configurations.
AI Binding (Ai)
- The AI binding, such as env.AI.
model (BaseAiTextGenerationModels)
- The ID of a model that supports function calling. For example, @hf/nousresearch/hermes-2-pro-mistral-7b.
input (Object)
  messages (RoleScopedChatInput[])
  tools (AiTextGenerationToolInputWithFunction[])
config (Object)
  streamFinalResponse (boolean)
  maxRecursiveToolRuns (number)
  strictValidation (boolean)
  verbose (boolean)
  trimFunction (boolean)
- For trimFunction, you can pass it autoTrimTools, which is another helper method we’ve devised to automatically choose the correct tools (using an LLM) before sending the request off for inference. This means that your final inference call will have fewer input tokens.
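Putting the parameters above together, a minimal sketch of a Worker calling runWithTools might look like the following. The getWeather tool, its implementation, and the Env interface are illustrative assumptions, not part of the library:

```typescript
import { runWithTools } from "@cloudflare/ai-utils";

// Assumes an `AI` binding is configured for this Worker
interface Env {
  AI: Ai;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const response = await runWithTools(
      env.AI,
      // A model that supports function calling
      "@hf/nousresearch/hermes-2-pro-mistral-7b",
      {
        messages: [{ role: "user", content: "What is the weather in London?" }],
        tools: [
          {
            // Hypothetical example tool
            name: "getWeather",
            description: "Return the weather for a given city",
            parameters: {
              type: "object",
              properties: {
                city: { type: "string", description: "The city to check" },
              },
              required: ["city"],
            },
            // Invoked automatically when the model selects this tool
            function: async ({ city }) => `The weather in ${city} is sunny.`,
          },
        ],
      },
      // Optional config
      { streamFinalResponse: false, maxRecursiveToolRuns: 1 },
    );
    return Response.json(response);
  },
};
```

To trim the tools list before the final inference call, you could additionally import autoTrimTools from the same package and pass it as trimFunction in the config, per the note above.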
createToolsFromOpenAPISpec
This method lets you automatically create tool schemas based on OpenAPI specs, so you don’t have to manually write or hardcode the tool schemas. You can pass the OpenAPI spec for any API in JSON or YAML format.
createToolsFromOpenAPISpec has a config input that allows you to perform overrides if you need to provide headers like Authentication or User-Agent.
spec (string)
- The OpenAPI specification in either JSON or YAML format, or a URL to a remote OpenAPI specification.
config (Config)
- Configuration options for the createToolsFromOpenAPISpec function.
overrides (ConfigRule[])
  matchPatterns (RegExp[])
  options (Object)
    verbose (boolean)
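As a sketch of the override mechanism, the following assumes a placeholder spec URL and header values, and that each ConfigRule carries a values object holding the headers to inject alongside the matchPatterns listed above (the values field name is an assumption):

```typescript
import { createToolsFromOpenAPISpec, runWithTools } from "@cloudflare/ai-utils";

interface Env {
  AI: Ai;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Build tool schemas from a remote OpenAPI spec (placeholder URL)
    const tools = await createToolsFromOpenAPISpec(
      "https://example.com/openapi.json",
      {
        overrides: [
          {
            // Apply this rule to matching request URLs
            matchPatterns: [/^https:\/\/example\.com\/.*/],
            // Assumed field for injected request values such as headers
            values: {
              headers: { "User-Agent": "my-worker" },
            },
          },
        ],
        options: { verbose: true },
      },
    );

    // Hand the generated tool schemas straight to runWithTools
    const response = await runWithTools(
      env.AI,
      "@hf/nousresearch/hermes-2-pro-mistral-7b",
      {
        messages: [{ role: "user", content: "List the items from the API" }],
        tools,
      },
    );
    return Response.json(response);
  },
};
```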