
AI Client

Artificial intelligence client resource.

Allows integration with AI providers compatible with the OpenAI API, supporting chat, streaming, embeddings and MCP (Model Context Protocol) tools.

const client = _ai.client('openai')
client.model('gpt-4o')

const messages = _val.list()
    .add(_val.map().set('role', 'user').set('content', 'Hello!'))

const result = client.chat(messages)
_out.json(result)

chat


chat(model: string, messages: Values) : Values

Description

Runs a conversation explicitly specifying the model to use, overriding the default configured model.

How To Use
const messages = _val.list()
    .add(_val.map().set('role', 'user').set('content', 'Hello!'))

const response = client.chat('gpt-4o-mini', messages)
_out.json(response.toJSON())
Attributes
  • model (string): Identifier of the model to use in this call.
  • messages (Values): List of conversation messages.
Return

( Values )

Object with the full API response.


chat(model: string, messages: Values, options: Values) : Values

Description

Runs a conversation explicitly specifying the model to use, with additional options, overriding the default configured model.

How To Use
const messages = _val.list()
    .add(_val.map().set('role', 'user').set('content', 'Hello!'))

const options = _val.map().set('temperature', 0.7)

const response = client.chat('gpt-4o-mini', messages, options)
_out.json(response.toJSON())
Attributes
  • model (string): Identifier of the model to use in this call.
  • messages (Values): List of conversation messages.
  • options (Values): Additional options: temperature (0.0–2.0), max_tokens, top_p.
Return

( Values )

Object with the full API response.


chat(model: string, messages: Values, options: Values, toolCallback: org.netuno.tritao.ai.client.Client$ToolCallback) : Values

Description

Runs a conversation explicitly specifying the model to use, with additional options and MCP tool support via callback, overriding the default configured model.

How To Use
const messages = _val.list()
    .add(_val.map().set('role', 'user').set('content', 'Hello!'))

const options = _val.map().set('temperature', 0.7)

const response = client.chat('gpt-4o-mini', messages, options, (toolName, args, mcpClient, tool) => {
    _log.info('Tool invoked: ' + toolName)
    return null
})
_out.json(response.toJSON())
Attributes
  • model (string): Identifier of the model to use in this call.
  • messages (Values): List of conversation messages.
  • options (Values): Additional options: temperature (0.0–2.0), max_tokens, top_p.
  • toolCallback (org.netuno.tritao.ai.client.Client$ToolCallback): Callback invoked before each tool execution. Return null for normal execution or a Values to override the result.
Return

( Values )

Object with the full API response.


chat(model: string, messages: Values, toolCallback: org.netuno.tritao.ai.client.Client$ToolCallback) : Values

Description

Runs a conversation explicitly specifying the model to use, with MCP tool support via callback, overriding the default configured model.

How To Use
const messages = _val.list()
    .add(_val.map().set('role', 'user').set('content', 'What time is it?'))

const response = client.chat('gpt-4o-mini', messages, (toolName, args, mcpClient, tool) => {
    _log.info('Tool invoked: ' + toolName)
    return null
})
_out.json(response.toJSON())
Attributes
  • model (string): Identifier of the model to use in this call.
  • messages (Values): List of conversation messages.
  • toolCallback (org.netuno.tritao.ai.client.Client$ToolCallback): Callback invoked before each tool execution. Return null for normal execution or a Values to override the result.
Return

( Values )

Object with the full API response.


chat(messages: Values) : Values

Description

Runs a conversation with the configured AI model, sending a list of messages and returning the full response.

How To Use
const messages = _val.list()
    .add(_val.map().set('role', 'system').set('content', 'You are a helpful assistant.'))
    .add(_val.map().set('role', 'user').set('content', 'What is the capital of Portugal?'))

const response = client.chat(messages)
_out.json(response.toJSON())
Attributes
  • messages (Values): List of conversation messages. Each message must have the fields role (system, user, assistant) and content.
Return

( Values )

Object with the full API response, including choices, usage and other metadata.
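Responses follow the OpenAI chat completion shape, so the assistant's text lives in the first choice. A minimal plain-JavaScript sketch of reading it (the field names assume an OpenAI-compatible response; extractReply is a hypothetical helper, not part of the client):

```javascript
// Reads the assistant's message text from an OpenAI-compatible
// chat completion response (choices[0].message.content).
function extractReply(response) {
  const choice = response.choices && response.choices[0]
  return choice && choice.message ? choice.message.content : null
}

// Mock response with the typical shape:
const mock = {
  choices: [{ message: { role: 'assistant', content: 'Lisbon.' } }],
  usage: { total_tokens: 12 }
}
```

With the mock above, extractReply(mock) yields 'Lisbon.', and an object without choices yields null.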


chat(messages: Values, options: Values) : Values

Description

Runs a conversation with the configured AI model, with additional options such as temperature and max_tokens.

How To Use
const messages = _val.list()
    .add(_val.map().set('role', 'user').set('content', 'Hello!'))

const options = _val.map()
    .set('temperature', 0.7)
    .set('max_tokens', 200)

const response = client.chat(messages, options)
_out.json(response.toJSON())
Attributes
  • messages (Values): List of conversation messages.
  • options (Values): Additional options: temperature (0.0–2.0), max_tokens, top_p.
Return

( Values )

Object with the full API response.
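These options map onto the standard chat completion request fields. A plain-JavaScript sketch of how such a request body could be assembled (illustrative only; buildChatRequest is a hypothetical helper, the client builds the real request internally):

```javascript
// Merges model, messages and optional tuning parameters into an
// OpenAI-compatible chat completion request body.
function buildChatRequest(model, messages, options) {
  return Object.assign({ model: model, messages: messages }, options || {})
}

const body = buildChatRequest(
  'gpt-4o-mini',
  [{ role: 'user', content: 'Hello!' }],
  { temperature: 0.7, max_tokens: 200 }
)
```

The options map is merged last, so each option becomes a top-level request field alongside model and messages.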


chat(messages: Values, options: Values, toolCallback: org.netuno.tritao.ai.client.Client$ToolCallback) : Values

Description

Runs a conversation with the configured AI model, with additional options and MCP tool support via callback.

How To Use
const messages = _val.list()
    .add(_val.map().set('role', 'user').set('content', 'Hello!'))

const options = _val.map().set('temperature', 0.7)

const response = client.chat(messages, options, (toolName, args, mcpClient, tool) => {
    _log.info('Tool invoked: ' + toolName)
    return null
})
_out.json(response.toJSON())
Attributes
  • messages (Values): List of conversation messages.
  • options (Values): Additional options: temperature (0.0–2.0), max_tokens, top_p.
  • toolCallback (org.netuno.tritao.ai.client.Client$ToolCallback): Callback invoked before each tool execution. Return null for normal execution or a Values to override the result.
Return

( Values )

Object with the full API response.


chat(messages: Values, toolCallback: org.netuno.tritao.ai.client.Client$ToolCallback) : Values

Description

Runs a conversation with the configured AI model with MCP tool support via callback. The callback is invoked before each tool call, allowing you to intercept or override the result.

How To Use
const messages = _val.list()
    .add(_val.map().set('role', 'user').set('content', 'What time is it?'))

const response = client.chat(messages, (toolName, args, mcpClient, tool) => {
    _log.info('Tool invoked: ' + toolName)
    return null // null = let the client execute normally
})
_out.json(response.toJSON())
Attributes
  • messages (Values): List of conversation messages.
  • toolCallback (org.netuno.tritao.ai.client.Client$ToolCallback): Callback invoked before each tool execution. Return null for normal execution or a Values to override the result.
Return

( Values )

Object with the full API response.
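The callback contract reduces to one rule: a null return lets the tool execute normally, any other value is used as the tool's result. A plain-JavaScript sketch of that decision (resolveToolResult is a hypothetical helper, not the client's actual internals):

```javascript
// Applies the tool-callback contract: null means "execute the tool",
// any other return value replaces the tool's result.
function resolveToolResult(callbackResult, executeTool) {
  if (callbackResult === null || callbackResult === undefined) {
    return executeTool()
  }
  return callbackResult
}

// Callback returned null: the tool runs normally.
const executed = resolveToolResult(null, () => 'real result')
// Callback returned a value: the tool is skipped.
const overridden = resolveToolResult('cached result', () => 'real result')
```

This pattern is useful for logging, caching, or blocking specific tools before they reach the MCP server.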


embeddings


embeddings(input: string) : Values

Description

Generates a vector embedding for a text input using the configured model.

How To Use
const result = client.embeddings('The sky is blue.')
_out.json(result.toJSON())
Attributes
  • input (string): Text input for which the embedding will be generated.
Return

( Values )

Object with the API response, including the generated vectors and usage metadata.


embeddings(model: string, input: string) : Values

Description

Generates a vector embedding for a text input by explicitly specifying the model to use.

How To Use
const result = client.embeddings('text-embedding-3-small', 'The sky is blue.')
_out.json(result.toJSON())
Attributes
  • model (string): Identifier of the embeddings model to use, for example: text-embedding-3-small.
  • input (string): Text input for which the embedding will be generated.
Return

( Values )

Object with the API response, including the generated vectors and usage metadata.


embeddings(model: string, input: string, options: Values) : Values

Description

Generates a vector embedding for a text input by explicitly specifying the model and additional options.

How To Use
const options = _val.map().set('dimensions', 512)

const result = client.embeddings('text-embedding-3-small', 'The sky is blue.', options)
_out.json(result.toJSON())
Attributes
  • model (string): Identifier of the embeddings model to use, for example: text-embedding-3-small.
  • input (string): Text input for which the embedding will be generated.
  • options (Values): Additional options: dimensions (number of vector dimensions), encoding_format (float or base64), user (end-user identifier).
Return

( Values )

Object with the API response, including the generated vectors and usage metadata.


embeddings(model: string, inputs: Values) : Values

Description

Generates vector embeddings for multiple text inputs by explicitly specifying the model to use. The list must contain text values only.

How To Use
const texts = _val.list()
    .add('The sky is blue.')
    .add('The grass is green.')

const result = client.embeddings('text-embedding-3-small', texts)
_out.json(result.toJSON())
Attributes
  • model (string): Identifier of the embeddings model to use, for example: text-embedding-3-small.
  • inputs (Values): List of text inputs. Each element must be a plain text string.
Return

( Values )

Object with the API response, including the generated vectors for each text and usage metadata.


embeddings(model: string, inputs: Values, options: Values) : Values

Description

Generates vector embeddings for multiple text inputs by explicitly specifying the model to use and additional options. The list must contain text values only.

How To Use
const texts = _val.list()
    .add('The sky is blue.')
    .add('The grass is green.')

const options = _val.map().set('dimensions', 512)

const result = client.embeddings('text-embedding-3-small', texts, options)
_out.json(result.toJSON())
Attributes
  • model (string): Identifier of the embeddings model to use, for example: text-embedding-3-small.
  • inputs (Values): List of text inputs. Each element must be a plain text string.
  • options (Values): Additional options: dimensions (number of vector dimensions), encoding_format (float or base64), user (end-user identifier).
Return

( Values )

Object with the API response, including the generated vectors for each text and usage metadata.


embeddings(inputs: Values) : Values

Description

Generates vector embeddings for multiple text inputs using the configured model. The list must contain text values only.

How To Use
const texts = _val.list()
    .add('The sky is blue.')
    .add('The grass is green.')

const result = client.embeddings(texts)
_out.json(result.toJSON())
Attributes
  • inputs (Values): List of text inputs. Each element must be a plain text string.
Return

( Values )

Object with the API response, including the generated vectors for each text and usage metadata.


embeddings(inputs: Values, options: Values) : Values

Description

Generates vector embeddings for multiple text inputs using the configured model, with additional options. The list must contain text values only.

How To Use
const texts = _val.list()
    .add('The sky is blue.')
    .add('The grass is green.')

const options = _val.map().set('dimensions', 512)

const result = client.embeddings(texts, options)
_out.json(result.toJSON())
Attributes
  • inputs (Values): List of text inputs. Each element must be a plain text string.
  • options (Values): Additional options: dimensions (number of vector dimensions), encoding_format (float or base64), user (end-user identifier).
Return

( Values )

Object with the API response, including the generated vectors for each text and usage metadata.
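Embedding vectors are typically compared with cosine similarity. A self-contained plain-JavaScript example (the vectors here are toy values; real embeddings have hundreds of dimensions):

```javascript
// Cosine similarity between two equal-length numeric vectors:
// dot(a, b) / (|a| * |b|), in the range [-1, 1].
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}

// Vectors pointing the same way score 1, orthogonal vectors score 0.
const same = cosineSimilarity([1, 0], [2, 0])
const ortho = cosineSimilarity([1, 0], [0, 1])
```

Scores closer to 1 indicate texts with more similar meaning, which is the usual basis for semantic search over stored embeddings.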


getMaxToolLoops


getMaxToolLoops() : int

Description

Gets the configured maximum number of tool call loops.

How To Use
const maxLoops = client.getMaxToolLoops()
_out.print(maxLoops)
Return

( int )

Maximum number of tool loops.


instance


instance() : com.openai.client.OpenAIClient

Description

Gets the internal OpenAI client instance for advanced direct use with the underlying library.

How To Use
const openAIClient = client.instance()
Return

( com.openai.client.OpenAIClient )

OpenAI client instance.


isInitialized


isInitialized() : boolean

Description

Checks whether the AI client was successfully initialized for the configured provider.

How To Use
if (!client.isInitialized()) {
    _log.error('Client not initialized.')
}
Return

( boolean )

True if the client is initialized.


maxToolLoops


maxToolLoops(maxLoops: int) : boolean

Description

Sets the maximum number of tool call cycles (tool loops) during a conversation. Prevents infinite loops when the model keeps invoking tools successively.

How To Use
client.maxToolLoops(5)
Attributes
  • maxLoops (int): Maximum number of tool loops. Must be at least 1.
Return

( boolean )

True if the value was applied successfully, false if the value is invalid.
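Conceptually, the limit caps a cycle in which the model requests tools, the tools run, and the results are fed back to the model. A plain-JavaScript sketch of such a bounded loop (illustrative only; the client's real loop runs inside chat and stream):

```javascript
// Repeats a step until it reports a final answer or the loop budget
// runs out; returns the number of iterations actually used.
function runToolLoop(maxLoops, step) {
  for (let i = 1; i <= maxLoops; i++) {
    if (step(i)) {
      return i // the model produced a final answer
    }
  }
  return maxLoops // budget exhausted, loop stopped
}

// Finishes on the third iteration, under the limit of 5.
const used = runToolLoop(5, (i) => i === 3)
```

Without such a cap, a model that keeps emitting tool calls would never let the conversation terminate.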


mcp


mcp(configs: Values) : void

Description

Configures the MCP (Model Context Protocol) servers to use in chat and stream operations. Each server exposes tools that the model can invoke automatically during the conversation. Tools are available with the prefix serverName__toolName.

Supported transport types:

  • remote: connects to an MCP server via HTTP Streamable (SSE/HTTP)
  • stdio: starts a local process and communicates via stdin/stdout
How To Use
// Remote MCP server via HTTP
const servers = _val.list()
    .add(
        _val.map()
            .set('type', 'remote')
            .set('name', 'myServer')
            .set('url', 'https://mcp.example.com')
            .set('endpoint', '/mcp')
            .set('headers',
                _val.map().set('Authorization', 'Bearer YOUR_TOKEN')
            )
    )

client.mcp(servers)

const messages = _val.list()
    .add(_val.map().set('role', 'user').set('content', 'Use the available tool.'))

const response = client.chat(messages)
_out.json(response.toJSON())
Attributes
  • configs (Values): List of MCP server configurations. Each entry is an object with the following fields:
    Common fields:
      - type (required): transport type, remote or stdio
      - name (optional): server name, used as a prefix for tools. If omitted, it is auto-generated.
    For type remote:
      - url (required): base URL of the MCP server, e.g. https://mcp.example.com
      - endpoint (optional): MCP endpoint path. Default: /mcp
      - headers (optional): object with additional HTTP headers, e.g. Authorization
    For type stdio:
      - command (required): command to execute
      - args (optional): list of command arguments
      - env (optional): object with environment variables
Return

( void )
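Since tools are exposed as serverName__toolName, a tool callback can recover the server and tool parts from the name it receives. A plain-JavaScript sketch (parseToolName is a hypothetical helper, not part of the client):

```javascript
// Splits a prefixed MCP tool name ("server__tool") into its parts.
// Names without the "__" separator are treated as unprefixed.
function parseToolName(fullName) {
  const sep = fullName.indexOf('__')
  if (sep < 0) {
    return { server: null, tool: fullName }
  }
  return { server: fullName.slice(0, sep), tool: fullName.slice(sep + 2) }
}

const parsed = parseToolName('myServer__getTime')
```

This makes it easy to apply per-server policies inside a toolCallback, such as logging only the tools of one server.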


model


model(model: string) : boolean

Description

Sets the AI model to use in chat, stream and embeddings operations. The model is validated against the list of available models on the provider.

How To Use
const ok = client.model('gpt-4o')
if (!ok) {
    _log.error('Invalid or unavailable model.')
}
Attributes
  • model (string): Identifier of the model to use, for example: gpt-4o.
Return

( boolean )

True if the model is valid and was set, false otherwise.


models


models() : Values

Description

Lists all models available on the configured AI provider.

How To Use
const models = client.models()
_out.json(models.toJSON())
Return

( Values )

List of available models, each as an object with its metadata.


provider


provider(provider: string) : boolean

Description

Switches the AI provider and reinitializes the client with the new provider settings defined in the application configuration file.

How To Use
const switched = client.provider('anthropic')
if (switched) {
    _log.info('Provider switched successfully.')
}
Attributes
  • provider (string): Name of the AI provider as defined in the application settings.
Return

( boolean )

True if the provider was switched successfully, false otherwise.


stream


stream(model: string, messages: Values, onToken: java.util.function.Consumer<Values>) : void

Description

Runs a streaming conversation explicitly specifying the model to use, overriding the default configured model, processing each token as it is generated.

How To Use
const messages = _val.list()
    .add(_val.map().set('role', 'user').set('content', 'Tell me a short story.'))

client.stream('gpt-4o-mini', messages, (chunk) => {
    _out.print(chunk.get('choices').get(0).get('delta').get('content'))
})
Attributes
  • model (string): Identifier of the model to use in this call.
  • messages (Values): List of conversation messages.
  • onToken (java.util.function.Consumer<Values>): Callback invoked for each token received, receiving the response chunk as argument.
Return

( void )


stream(model: string, messages: Values, onToken: java.util.function.Consumer<Values>, toolCallback: org.netuno.tritao.ai.client.Client$ToolCallback) : void

Description

Runs a streaming conversation explicitly specifying the model to use, with MCP tool support via callback, overriding the default configured model, processing each token as it is generated.

How To Use
const messages = _val.list()
    .add(_val.map().set('role', 'user').set('content', 'What time is it?'))

client.stream('gpt-4o-mini', messages, (chunk) => {
    _out.print(chunk.get('choices').get(0).get('delta').get('content'))
}, (toolName, args, mcpClient, tool) => {
    _log.info('Tool invoked: ' + toolName)
    return null
})
Attributes
  • model (string): Identifier of the model to use in this call.
  • messages (Values): List of conversation messages.
  • onToken (java.util.function.Consumer<Values>): Callback invoked for each token received, receiving the response chunk as argument.
  • toolCallback (org.netuno.tritao.ai.client.Client$ToolCallback): Callback invoked before each tool execution. Return null for normal execution or a Values to override the result.
Return

( void )


stream(model: string, messages: Values, options: Values, onToken: java.util.function.Consumer<Values>) : void

Description

Runs a streaming conversation explicitly specifying the model to use, with additional options, overriding the default configured model, processing each token as it is generated.

How To Use
const messages = _val.list()
    .add(_val.map().set('role', 'user').set('content', 'Hello!'))

const options = _val.map().set('temperature', 0.7)

client.stream('gpt-4o-mini', messages, options, (chunk) => {
    _out.print(chunk.get('choices').get(0).get('delta').get('content'))
})
Attributes
  • model (string): Identifier of the model to use in this call.
  • messages (Values): List of conversation messages.
  • options (Values): Additional options: temperature (0.0–2.0), max_tokens, top_p.
  • onToken (java.util.function.Consumer<Values>): Callback invoked for each token received, receiving the response chunk as argument.
Return

( void )


stream(model: string, messages: Values, options: Values, onToken: java.util.function.Consumer<Values>, toolCallback: org.netuno.tritao.ai.client.Client$ToolCallback) : void

Description

Runs a streaming conversation explicitly specifying the model to use, with additional options and MCP tool support via callback, overriding the default configured model, processing each token as it is generated.

How To Use
const messages = _val.list()
    .add(_val.map().set('role', 'user').set('content', 'Hello!'))

const options = _val.map().set('temperature', 0.7)

client.stream('gpt-4o-mini', messages, options, (chunk) => {
    _out.print(chunk.get('choices').get(0).get('delta').get('content'))
}, (toolName, args, mcpClient, tool) => {
    _log.info('Tool invoked: ' + toolName)
    return null
})
Attributes
  • model (string): Identifier of the model to use in this call.
  • messages (Values): List of conversation messages.
  • options (Values): Additional options: temperature (0.0–2.0), max_tokens, top_p.
  • onToken (java.util.function.Consumer<Values>): Callback invoked for each token received, receiving the response chunk as argument.
  • toolCallback (org.netuno.tritao.ai.client.Client$ToolCallback): Callback invoked before each tool execution. Return null for normal execution or a Values to override the result.
Return

( void )


stream(messages: Values, onToken: java.util.function.Consumer<Values>) : void

Description

Runs a streaming conversation with the configured AI model, processing each token as it is generated.

How To Use
const messages = _val.list()
    .add(_val.map().set('role', 'user').set('content', 'Tell me a short story.'))

client.stream(messages, (chunk) => {
    _out.print(chunk.get('choices').get(0).get('delta').get('content'))
})
Attributes
  • messages (Values): List of conversation messages.
  • onToken (java.util.function.Consumer<Values>): Callback invoked for each token received, receiving the response chunk as argument.
Return

( void )
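Each chunk carries only a delta, so the complete reply is the concatenation of the delta contents. A plain-JavaScript sketch of a chunk accumulator (the chunk shape assumes OpenAI-compatible streaming responses):

```javascript
// Collects the content deltas of OpenAI-compatible stream chunks
// into the complete assistant reply.
function accumulateChunks(chunks) {
  let text = ''
  for (const chunk of chunks) {
    const delta = chunk.choices && chunk.choices[0] && chunk.choices[0].delta
    if (delta && typeof delta.content === 'string') {
      text += delta.content
    }
  }
  return text
}

// Mock chunks, as a streamed "Hello world" might arrive:
const full = accumulateChunks([
  { choices: [{ delta: { content: 'Hello' } }] },
  { choices: [{ delta: { content: ' world' } }] },
  { choices: [{ delta: {} }] } // final chunk carries no content
])
```

The same accumulation can be done incrementally inside the onToken callback when the full text is needed after streaming ends.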


stream(messages: Values, onToken: java.util.function.Consumer<Values>, toolCallback: org.netuno.tritao.ai.client.Client$ToolCallback) : void

Description

Runs a streaming conversation with the configured AI model, with MCP tool support via callback, processing each token as it is generated.

How To Use
const messages = _val.list()
    .add(_val.map().set('role', 'user').set('content', 'What time is it?'))

client.stream(messages, (chunk) => {
    _out.print(chunk.get('choices').get(0).get('delta').get('content'))
}, (toolName, args, mcpClient, tool) => {
    _log.info('Tool invoked: ' + toolName)
    return null
})
Attributes
  • messages (Values): List of conversation messages.
  • onToken (java.util.function.Consumer<Values>): Callback invoked for each token received, receiving the response chunk as argument.
  • toolCallback (org.netuno.tritao.ai.client.Client$ToolCallback): Callback invoked before each tool execution. Return null for normal execution or a Values to override the result.
Return

( void )


stream(messages: Values, options: Values, onToken: java.util.function.Consumer<Values>) : void

Description

Runs a streaming conversation with the configured AI model, with additional options, processing each token as it is generated.

How To Use
const messages = _val.list()
    .add(_val.map().set('role', 'user').set('content', 'Hello!'))

const options = _val.map().set('temperature', 0.7)

client.stream(messages, options, (chunk) => {
    _out.print(chunk.get('choices').get(0).get('delta').get('content'))
})
Attributes
  • messages (Values): List of conversation messages.
  • options (Values): Additional options: temperature (0.0–2.0), max_tokens, top_p.
  • onToken (java.util.function.Consumer<Values>): Callback invoked for each token received, receiving the response chunk as argument.
Return

( void )


stream(messages: Values, options: Values, onToken: java.util.function.Consumer<Values>, toolCallback: org.netuno.tritao.ai.client.Client$ToolCallback) : void

Description

Runs a streaming conversation with the configured AI model, with additional options and MCP tool support via callback, processing each token as it is generated.

How To Use
const messages = _val.list()
    .add(_val.map().set('role', 'user').set('content', 'Hello!'))

const options = _val.map().set('temperature', 0.7)

client.stream(messages, options, (chunk) => {
    _out.print(chunk.get('choices').get(0).get('delta').get('content'))
}, (toolName, args, mcpClient, tool) => {
    _log.info('Tool invoked: ' + toolName)
    return null
})
Attributes
  • messages (Values): List of conversation messages.
  • options (Values): Additional options: temperature (0.0–2.0), max_tokens, top_p.
  • onToken (java.util.function.Consumer<Values>): Callback invoked for each token received, receiving the response chunk as argument.
  • toolCallback (org.netuno.tritao.ai.client.Client$ToolCallback): Callback invoked before each tool execution. Return null for normal execution or a Values to override the result.
Return

( void )