Eino: AgenticModel Guide [Beta]
Overview
AgenticModel is a model capability abstraction centered on “goal-driven autonomous execution”. As capabilities like caching and built-in tools gain native support in advanced provider APIs such as OpenAI Responses API and Claude API, models are evolving from “one-shot Q&A engines” to “goal-oriented autonomous agents”: capable of closed-loop planning, tool invocation, and iterative execution around user goals to accomplish more complex tasks.
Differences from ChatModel
| | AgenticModel | ChatModel |
| --- | --- | --- |
| Positioning | Model capability abstraction centered on "goal-driven autonomous execution", an enhanced abstraction over ChatModel | One-shot Q&A engine |
| Core Entities | AgenticMessage | Message |
| Capabilities | | |
| Related Components | | |
Component Definition
Interface Definition
Code location: https://github.com/cloudwego/eino/tree/main/components/model/interface.go
type AgenticModel interface {
Generate(ctx context.Context, input []*schema.AgenticMessage, opts ...Option) (*schema.AgenticMessage, error)
Stream(ctx context.Context, input []*schema.AgenticMessage, opts ...Option) (*schema.StreamReader[*schema.AgenticMessage], error)
// WithTools returns a new Model instance with the specified tools bound.
// This method does not modify the current instance, making it safer for concurrent use.
WithTools(tools []*schema.ToolInfo) (AgenticModel, error)
}
Generate Method
- Purpose: Generate a complete model response
- Parameters:
- ctx: Context object for passing request-level information and Callback Manager
- input: List of input messages
- opts: Optional parameters for configuring model behavior
- Returns:
- *schema.AgenticMessage: The response message generated by the model
- error: Error information during generation
Stream Method
- Purpose: Generate model response in streaming mode
- Parameters: Same as Generate method
- Returns:
- *schema.StreamReader[*schema.AgenticMessage]: Stream reader for the model response
- error: Error information during generation
WithTools Method
- Purpose: Bind available tools to the model
- Parameters:
- tools: List of tool information
- Returns:
- AgenticModel: A new AgenticModel instance with the specified tools bound
- error: Error information during binding
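A minimal sketch tying the three methods together (assuming `am` is an AgenticModel implementation created from an eino-ext component, `ctx` is a context.Context, and `weatherTool` is a *schema.ToolInfo defined by the caller):
// WithTools returns a new instance with the tools bound; `am` itself is not modified.
boundModel, err := am.WithTools([]*schema.ToolInfo{weatherTool})
if err != nil {
    // handle error
}

input := []*schema.AgenticMessage{
    schema.UserAgenticMessage("what is the weather like in Beijing"),
}

// Generate returns the complete response in one call.
msg, err := boundModel.Generate(ctx, input)

// Stream returns a reader that yields the response incrementally.
stream, err := boundModel.Stream(ctx, input)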
AgenticMessage Struct
Code location: https://github.com/cloudwego/eino/tree/main/schema/agentic_message.go
AgenticMessage is the basic unit for interacting with the model. A complete model response is encapsulated as an AgenticMessage, which carries complex composite content through an ordered list of ContentBlock. The definition is as follows:
type AgenticMessage struct {
// Role is the message role.
Role AgenticRoleType
// ContentBlocks is the list of content blocks.
ContentBlocks []*ContentBlock
// ResponseMeta is the response metadata.
ResponseMeta *AgenticResponseMeta
// Extra is the additional information.
Extra map[string]any
}
ContentBlock is the basic building unit of AgenticMessage, used to carry the specific content of a message. It is designed as a polymorphic structure: the Type field identifies the kind of data carried by the block, and only the corresponding pointer field is non-nil. ContentBlock enables a message to contain mixed-type rich media content or structured data, such as "text + image" or "reasoning process + tool call". The definition is as follows:
type ContentBlockType string
const (
ContentBlockTypeReasoning ContentBlockType = "reasoning"
ContentBlockTypeUserInputText ContentBlockType = "user_input_text"
ContentBlockTypeUserInputImage ContentBlockType = "user_input_image"
ContentBlockTypeUserInputAudio ContentBlockType = "user_input_audio"
ContentBlockTypeUserInputVideo ContentBlockType = "user_input_video"
ContentBlockTypeUserInputFile ContentBlockType = "user_input_file"
ContentBlockTypeAssistantGenText ContentBlockType = "assistant_gen_text"
ContentBlockTypeAssistantGenImage ContentBlockType = "assistant_gen_image"
ContentBlockTypeAssistantGenAudio ContentBlockType = "assistant_gen_audio"
ContentBlockTypeAssistantGenVideo ContentBlockType = "assistant_gen_video"
ContentBlockTypeFunctionToolCall ContentBlockType = "function_tool_call"
ContentBlockTypeFunctionToolResult ContentBlockType = "function_tool_result"
ContentBlockTypeServerToolCall ContentBlockType = "server_tool_call"
ContentBlockTypeServerToolResult ContentBlockType = "server_tool_result"
ContentBlockTypeMCPToolCall ContentBlockType = "mcp_tool_call"
ContentBlockTypeMCPToolResult ContentBlockType = "mcp_tool_result"
ContentBlockTypeMCPListToolsResult ContentBlockType = "mcp_list_tools_result"
ContentBlockTypeMCPToolApprovalRequest ContentBlockType = "mcp_tool_approval_request"
ContentBlockTypeMCPToolApprovalResponse ContentBlockType = "mcp_tool_approval_response"
)
type ContentBlock struct {
Type ContentBlockType
// Reasoning contains the reasoning content generated by the model.
Reasoning *Reasoning
// UserInputText contains the text content provided by the user.
UserInputText *UserInputText
// UserInputImage contains the image content provided by the user.
UserInputImage *UserInputImage
// UserInputAudio contains the audio content provided by the user.
UserInputAudio *UserInputAudio
// UserInputVideo contains the video content provided by the user.
UserInputVideo *UserInputVideo
// UserInputFile contains the file content provided by the user.
UserInputFile *UserInputFile
// AssistantGenText contains the text content generated by the model.
AssistantGenText *AssistantGenText
// AssistantGenImage contains the image content generated by the model.
AssistantGenImage *AssistantGenImage
// AssistantGenAudio contains the audio content generated by the model.
AssistantGenAudio *AssistantGenAudio
// AssistantGenVideo contains the video content generated by the model.
AssistantGenVideo *AssistantGenVideo
// FunctionToolCall contains the invocation details for a user-defined tool.
FunctionToolCall *FunctionToolCall
// FunctionToolResult contains the result returned from a user-defined tool call.
FunctionToolResult *FunctionToolResult
// ServerToolCall contains the invocation details for a provider built-in tool executed on the model server.
ServerToolCall *ServerToolCall
// ServerToolResult contains the result returned from a provider built-in tool executed on the model server.
ServerToolResult *ServerToolResult
// MCPToolCall contains the invocation details for an MCP tool managed by the model server.
MCPToolCall *MCPToolCall
// MCPToolResult contains the result returned from an MCP tool managed by the model server.
MCPToolResult *MCPToolResult
// MCPListToolsResult contains the list of available MCP tools reported by the model server.
MCPListToolsResult *MCPListToolsResult
// MCPToolApprovalRequest contains the user approval request for an MCP tool call when required.
MCPToolApprovalRequest *MCPToolApprovalRequest
// MCPToolApprovalResponse contains the user's approval decision for an MCP tool call.
MCPToolApprovalResponse *MCPToolApprovalResponse
// StreamingMeta contains metadata for streaming responses.
StreamingMeta *StreamingMeta
// Extra contains additional information for the content block.
Extra map[string]any
}
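To make the polymorphic design concrete, a typical consumer walks the blocks of a response and dispatches on Type. A minimal sketch (assuming `msg` is a *schema.AgenticMessage returned by the model):
for _, block := range msg.ContentBlocks {
    switch block.Type {
    case schema.ContentBlockTypeReasoning:
        // The model's reasoning process.
        fmt.Println("reasoning:", block.Reasoning.Text)
    case schema.ContentBlockTypeAssistantGenText:
        // The generated answer text.
        fmt.Println("text:", block.AssistantGenText.Text)
    case schema.ContentBlockTypeFunctionToolCall:
        // The model asks the caller to execute a user-defined tool.
        fmt.Println("tool call:", block.FunctionToolCall.Name, block.FunctionToolCall.Arguments)
    }
}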
AgenticResponseMeta is the metadata returned with the model response. TokenUsage is reported by all model providers. OpenAIExtension, GeminiExtension, and ClaudeExtension are extension fields specific to OpenAI, Gemini, and Claude models respectively; extension information from other model providers is placed in Extension, with the concrete definitions provided by the corresponding component implementations in eino-ext.
type AgenticResponseMeta struct {
// TokenUsage is the token usage.
TokenUsage *TokenUsage
// OpenAIExtension is the extension for OpenAI.
OpenAIExtension *openai.ResponseMetaExtension
// GeminiExtension is the extension for Gemini.
GeminiExtension *gemini.ResponseMetaExtension
// ClaudeExtension is the extension for Claude.
ClaudeExtension *claude.ResponseMetaExtension
// Extension is the extension for other models, supplied by the component implementer.
Extension any
}
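For example, token usage can be read directly from the metadata, while provider-specific details live in the matching extension field (a sketch, assuming `msg` is a model response):
meta := msg.ResponseMeta
if meta != nil && meta.TokenUsage != nil {
    // Token accounting reported by every provider.
    fmt.Printf("token usage: %+v\n", meta.TokenUsage)
}
if meta != nil && meta.OpenAIExtension != nil {
    // Provider-specific metadata, only populated when the underlying model is OpenAI.
    fmt.Printf("openai extension: %+v\n", meta.OpenAIExtension)
}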
Reasoning
The Reasoning type represents the model's reasoning process and thinking content. Some advanced models perform internal reasoning before generating the final answer, and that reasoning content is carried by this type.
- Definition
type Reasoning struct {
// Text is either the thought summary or the raw reasoning text itself.
Text string
// Signature contains encrypted reasoning tokens.
// Required by some models when passing reasoning text back.
Signature string
}
- Example
reasoning := &schema.Reasoning{
Text: "The user now needs me to solve...",
Signature: "asjkhvipausdgy23oadlfdsf",
}
UserInputText
UserInputText is the most basic content type, used to pass plain text input. It is the primary way for users to interact with the model, suitable for natural language conversations, instruction delivery, and question asking.
- Definition
type UserInputText struct {
// Text is the text content.
Text string
}
- Example
textInput := &schema.UserInputText{
Text: "Please help me analyze the performance bottlenecks in this code",
}
// Or use convenience functions to create messages directly
userMsg := schema.UserAgenticMessage("Please help me analyze the performance bottlenecks in this code")
systemMsg := schema.SystemAgenticMessage("You are an intelligent assistant")
developerMsg := schema.DeveloperAgenticMessage("You are an intelligent assistant")
UserInputImage
UserInputImage is used to provide image content to the model. It supports passing image data via URL reference or Base64 encoding, suitable for visual understanding, image analysis, and multimodal conversations.
- Definition
type UserInputImage struct {
// URL is the HTTP/HTTPS link.
URL string
// Base64Data is the binary data in Base64 encoded string format.
Base64Data string
// MIMEType is the mime type, e.g. "image/png".
MIMEType string
// Detail is the quality of the image url.
Detail ImageURLDetail
}
- Example
// Using URL method
imageInput := &schema.UserInputImage{
URL: "https://example.com/chart.png",
MIMEType: "image/png",
Detail: schema.ImageURLDetailHigh,
}
// Using Base64 encoding method
imageInput := &schema.UserInputImage{
Base64Data: "iVBORw0KGgoAAAANSUhEUgAAAAUA...",
MIMEType: "image/png",
}
UserInputAudio
UserInputAudio is used to provide audio content to the model. It is suitable for speech recognition, audio analysis, and multimodal understanding.
- Definition
type UserInputAudio struct {
// URL is the HTTP/HTTPS link.
URL string
// Base64Data is the binary data in Base64 encoded string format.
Base64Data string
// MIMEType is the mime type, e.g. "audio/wav".
MIMEType string
}
- Example
audioInput := &schema.UserInputAudio{
URL: "https://example.com/voice.wav",
MIMEType: "audio/wav",
}
UserInputVideo
UserInputVideo is used to provide video content to the model. It is suitable for advanced visual tasks such as video understanding, scene analysis, and action recognition.
- Definition
type UserInputVideo struct {
// URL is the HTTP/HTTPS link.
URL string
// Base64Data is the binary data in Base64 encoded string format.
Base64Data string
// MIMEType is the mime type, e.g. "video/mp4".
MIMEType string
}
- Example
videoInput := &schema.UserInputVideo{
URL: "https://example.com/demo.mp4",
MIMEType: "video/mp4",
}
UserInputFile
UserInputFile is used to provide file content to the model. It is suitable for document analysis, data extraction, and knowledge understanding.
- Definition
type UserInputFile struct {
// URL is the HTTP/HTTPS link.
URL string
// Name is the filename.
Name string
// Base64Data is the binary data in Base64 encoded string format.
Base64Data string
// MIMEType is the mime type, e.g. "application/pdf".
MIMEType string
}
- Example
fileInput := &schema.UserInputFile{
URL: "https://example.com/report.pdf",
Name: "report.pdf",
MIMEType: "application/pdf",
}
AssistantGenText
AssistantGenText is the text content generated by the model, which is the most common form of model output. For different model providers, the extension field definitions vary: OpenAI models use OpenAIExtension, Claude models use ClaudeExtension; extension information from other model providers is placed in Extension, with specific definitions provided by the corresponding component implementations in eino-ext.
- Definition
import (
"github.com/cloudwego/eino/schema/claude"
"github.com/cloudwego/eino/schema/openai"
)
type AssistantGenText struct {
// Text is the generated text.
Text string
// OpenAIExtension is the extension for OpenAI.
OpenAIExtension *openai.AssistantGenTextExtension
// ClaudeExtension is the extension for Claude.
ClaudeExtension *claude.AssistantGenTextExtension
// Extension is the extension for other models.
Extension any
}
- Example
- Creating a response
textGen := &schema.AssistantGenText{
    Text: "Based on your requirements, I suggest the following approach...",
    Extension: &AssistantGenTextExtension{
        Annotations: []*TextAnnotation{annotation},
    },
}
- Parsing a response
import (
    "github.com/cloudwego/eino-ext/components/model/agenticark"
)

// Assert to the specific implementation definition
ext := textGen.Extension.(*agenticark.AssistantGenTextExtension)
AssistantGenImage
AssistantGenImage is the image content generated by the model. Some models have image generation capabilities and can create images based on text descriptions, with the output passed through this type.
- Definition
type AssistantGenImage struct {
// URL is the HTTP/HTTPS link.
URL string
// Base64Data is the binary data in Base64 encoded string format.
Base64Data string
// MIMEType is the mime type, e.g. "image/png".
MIMEType string
}
- Example
imageGen := &schema.AssistantGenImage{
URL: "https://api.example.com/generated/image123.png",
MIMEType: "image/png",
}
AssistantGenAudio
AssistantGenAudio is the audio content generated by the model. Some models have audio generation capabilities, and the output audio data is passed through this type.
- Definition
type AssistantGenAudio struct {
// URL is the HTTP/HTTPS link.
URL string
// Base64Data is the binary data in Base64 encoded string format.
Base64Data string
// MIMEType is the mime type, e.g. "audio/wav".
MIMEType string
}
- Example
audioGen := &schema.AssistantGenAudio{
URL: "https://api.example.com/generated/audio123.wav",
MIMEType: "audio/wav",
}
AssistantGenVideo
AssistantGenVideo is the video content generated by the model. Some models have video generation capabilities, and the output video data is passed through this type.
- Definition
type AssistantGenVideo struct {
// URL is the HTTP/HTTPS link.
URL string
// Base64Data is the binary data in Base64 encoded string format.
Base64Data string
// MIMEType is the mime type, e.g. "video/mp4".
MIMEType string
}
- Example
videoGen := &schema.AssistantGenVideo{
URL: "https://api.example.com/generated/video123.mp4",
MIMEType: "video/mp4",
}
FunctionToolCall
FunctionToolCall represents a user-defined function tool call initiated by the model. When the model needs to perform a specific function, it generates a tool call request containing the tool name and parameters, with the actual execution handled by the user side.
- Definition
type FunctionToolCall struct {
// CallID is the unique identifier for the tool call.
CallID string
// Name specifies the function tool invoked.
Name string
// Arguments is the JSON string arguments for the function tool call.
Arguments string
}
- Example
toolCall := &schema.FunctionToolCall{
CallID: "call_abc123",
Name: "get_weather",
Arguments: `{"location": "Beijing", "unit": "celsius"}`,
}
FunctionToolResult
FunctionToolResult represents the execution result of a user-defined function tool. After the user side executes the tool call, the result is returned to the model through this type, allowing the model to continue generating responses.
- Definition
type FunctionToolResult struct {
// CallID is the unique identifier for the tool call.
CallID string
// Name specifies the function tool invoked.
Name string
// Result is the function tool result returned by the user
Result string
}
- Example
toolResult := &schema.FunctionToolResult{
CallID: "call_abc123",
Name: "get_weather",
Result: `{"temperature": 15, "condition": "sunny"}`,
}
// Or use convenience functions to create messages
msg := schema.FunctionToolResultAgenticMessage(
"call_abc123",
"get_weather",
`{"temperature": 15, "condition": "sunny"}`,
)
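FunctionToolCall and FunctionToolResult together form the caller-side tool loop: the model emits a call, the caller executes it, and the result is appended to the conversation before invoking the model again. A minimal sketch of that loop (assuming `boundModel` is an AgenticModel with tools bound and `runTool` is a hypothetical helper that executes the named tool and returns its result as a string):
history := []*schema.AgenticMessage{
    schema.UserAgenticMessage("what is the weather like in Beijing"),
}

resp, err := boundModel.Generate(ctx, history)
if err != nil {
    // handle error
}
history = append(history, resp)

// Execute every function tool call requested by the model and feed the results back.
for _, block := range resp.ContentBlocks {
    if block.Type != schema.ContentBlockTypeFunctionToolCall {
        continue
    }
    call := block.FunctionToolCall
    result := runTool(ctx, call.Name, call.Arguments) // hypothetical executor
    history = append(history, schema.FunctionToolResultAgenticMessage(call.CallID, call.Name, result))
}

// Let the model continue with the tool results in context.
finalResp, err := boundModel.Generate(ctx, history)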
ServerToolCall
ServerToolCall represents a call to a model server's built-in tool. Some model providers integrate specific tools on the server side (such as web search or a code executor), which the model can call autonomously without user intervention. Arguments carries the parameters of the server-side built-in tool call, with the concrete definitions provided by the corresponding component implementations in eino-ext.
- Definition
type ServerToolCall struct {
// Name specifies the server-side tool invoked.
// Supplied by the model server (e.g., `web_search` for OpenAI, `googleSearch` for Gemini).
Name string
// CallID is the unique identifier for the tool call.
// Empty if not provided by the model server.
CallID string
// Arguments are the raw inputs to the server-side tool,
// supplied by the component implementer.
Arguments any
}
- Example
- Creating a response
serverCall := &schema.ServerToolCall{
    Name: "web_search",
    CallID: "search_123",
    Arguments: &ServerToolCallArguments{
        WebSearch: &WebSearchArguments{
            ActionType: WebSearchActionSearch,
            Search: &WebSearchQuery{
                Query: "weather in Beijing today",
            },
        },
    },
}
- Parsing a response
import (
    "github.com/cloudwego/eino-ext/components/model/agenticopenai"
)

// Assert to the specific implementation definition
args := serverCall.Arguments.(*agenticopenai.ServerToolCallArguments)
ServerToolResult
ServerToolResult represents the execution result of a server-side built-in tool. After the model server executes the tool call, the result is returned through this type. Result carries the output of the server-side built-in tool, with the concrete definitions provided by the corresponding component implementations in eino-ext.
- Definition
type ServerToolResult struct {
// Name specifies the server-side tool invoked.
// Supplied by the model server (e.g., `web_search` for OpenAI, `googleSearch` for Gemini).
Name string
// CallID is the unique identifier for the tool call.
// Empty if not provided by the model server.
CallID string
// Result refers to the raw output generated by the server-side tool,
// supplied by the component implementer.
Result any
}
- Example
- Creating a response
serverResult := &schema.ServerToolResult{
    Name: "web_search",
    CallID: "search_123",
    Result: &ServerToolResult{
        WebSearch: &WebSearchResult{
            ActionType: WebSearchActionSearch,
            Search: &WebSearchQueryResult{
                Sources: sources,
            },
        },
    },
}
- Parsing a response
import (
    "github.com/cloudwego/eino-ext/components/model/agenticopenai"
)

// Assert to the specific implementation definition
args := serverResult.Result.(*agenticopenai.ServerToolResult)
MCPToolCall
MCPToolCall represents an MCP (Model Context Protocol) tool call initiated by the model. Some models allow configuring MCP tools and calling them autonomously without user intervention.
- Definition
type MCPToolCall struct {
// ServerLabel is the MCP server label used to identify it in tool calls
ServerLabel string
// ApprovalRequestID is the approval request ID.
ApprovalRequestID string
// CallID is the unique ID of the tool call.
CallID string
// Name is the name of the tool to run.
Name string
// Arguments is the JSON string arguments for the tool call.
Arguments string
}
- Example
mcpCall := &schema.MCPToolCall{
ServerLabel: "database-server",
CallID: "mcp_call_456",
Name: "execute_query",
Arguments: `{"sql": "SELECT * FROM users LIMIT 10"}`,
}
MCPToolResult
MCPToolResult represents the MCP tool execution result returned by the model. After the model autonomously completes the MCP tool call, the result or error information is returned through this type.
- Definition
type MCPToolResult struct {
// ServerLabel is the MCP server label used to identify it in tool calls
ServerLabel string
// CallID is the unique ID of the tool call.
CallID string
// Name is the name of the tool to run.
Name string
// Result is the JSON string with the tool result.
Result string
// Error returned when the server fails to run the tool.
Error *MCPToolCallError
}
type MCPToolCallError struct {
// Code is the error code.
Code *int64
// Message is the error message.
Message string
}
- Example
// MCP tool call success
mcpResult := &schema.MCPToolResult{
ServerLabel: "database-server",
CallID: "mcp_call_456",
Name: "execute_query",
Result: `{"rows": [...], "count": 10}`,
}
// MCP tool call failure
errorCode := int64(500)
mcpError := &schema.MCPToolResult{
ServerLabel: "database-server",
CallID: "mcp_call_456",
Name: "execute_query",
Error: &schema.MCPToolCallError{
Code: &errorCode,
Message: "Database connection failed",
},
}
MCPListToolsResult
MCPListToolsResult represents the list of tools available on an MCP server, as reported by the model. Models that support configuring MCP tools can autonomously query the MCP server for its available tools, and the query result is returned through this type.
- Definition
type MCPListToolsResult struct {
// ServerLabel is the MCP server label used to identify it in tool calls.
ServerLabel string
// Tools is the list of tools available on the server.
Tools []*MCPListToolsItem
// Error returned when the server fails to list tools.
Error string
}
type MCPListToolsItem struct {
// Name is the name of the tool.
Name string
// Description is the description of the tool.
Description string
// InputSchema is the JSON schema that describes the tool input parameters.
InputSchema *jsonschema.Schema
}
- Example
toolsList := &schema.MCPListToolsResult{
ServerLabel: "database-server",
Tools: []*schema.MCPListToolsItem{
{
Name: "execute_query",
Description: "Execute SQL query",
InputSchema: &jsonschema.Schema{...},
},
{
Name: "create_table",
Description: "Create data table",
InputSchema: &jsonschema.Schema{...},
},
},
}
MCPToolApprovalRequest
MCPToolApprovalRequest represents an MCP tool call request that requires user approval. In the model’s autonomous MCP tool calling process, certain sensitive or high-risk operations (such as data deletion, external payments, etc.) require explicit user authorization before execution. Some models support configuring MCP tool call approval policies, and before each call to a high-risk MCP tool, the model returns a call authorization request through this type.
- Definition
type MCPToolApprovalRequest struct {
// ID is the approval request ID.
ID string
// Name is the name of the tool to run.
Name string
// Arguments is the JSON string arguments for the tool call.
Arguments string
// ServerLabel is the MCP server label used to identify it in tool calls.
ServerLabel string
}
- Example
approvalReq := &schema.MCPToolApprovalRequest{
ID: "approval_20260112_001",
Name: "delete_records",
Arguments: `{"table": "users", "condition": "inactive=true", "estimated_count": 150}`,
ServerLabel: "database-server",
}
MCPToolApprovalResponse
MCPToolApprovalResponse represents the user’s approval decision for an MCP tool call. After receiving an MCPToolApprovalRequest, the user needs to review the operation details and make a decision. The user can choose to approve or reject the operation and optionally provide a reason for the decision.
- Definition
type MCPToolApprovalResponse struct {
// ApprovalRequestID is the approval request ID being responded to.
ApprovalRequestID string
// Approve indicates whether the request is approved.
Approve bool
// Reason is the rationale for the decision.
// Optional.
Reason string
}
- Example
approvalResp := &schema.MCPToolApprovalResponse{
ApprovalRequestID: "approval_789",
Approve: true,
Reason: "Confirmed deletion of inactive users",
}
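In practice the decision is sent back to the model on the next request: the caller scans the previous response for the approval request, builds the matching MCPToolApprovalResponse block, and includes it in the follow-up message. A sketch of that round trip (assuming `resp` is the model response; how the block is wrapped into the next input message may vary by component implementation):
// Find the approval request emitted by the model.
var approvalReq *schema.MCPToolApprovalRequest
for _, block := range resp.ContentBlocks {
    if block.Type == schema.ContentBlockTypeMCPToolApprovalRequest {
        approvalReq = block.MCPToolApprovalRequest
        break
    }
}

if approvalReq != nil {
    // Build the decision block to send back on the next request.
    approvalBlock := &schema.ContentBlock{
        Type: schema.ContentBlockTypeMCPToolApprovalResponse,
        MCPToolApprovalResponse: &schema.MCPToolApprovalResponse{
            ApprovalRequestID: approvalReq.ID,
            Approve:           true,
            Reason:            "Reviewed and confirmed",
        },
    }
    _ = approvalBlock // wrap into the next input message and call the model again
}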
StreamingMeta
StreamingMeta is used in streaming response scenarios to identify the position of a content block in the final response. During streaming generation, content may be returned in multiple blocks incrementally, and the index allows for correct assembly of the complete response.
- Definition
type StreamingMeta struct {
// Index specifies the index position of this block in the final response.
Index int
}
- Example
textGen := &schema.AssistantGenText{Text: "This is the first part"}
meta := &schema.StreamingMeta{Index: 0}
block := schema.NewContentBlockChunk(textGen, meta)
Common Options
AgenticModel and ChatModel share a common set of Options for configuring model behavior. In addition, AgenticModel provides a few options of its own.
Code location: https://github.com/cloudwego/eino/tree/main/components/model/option.go
| Option | AgenticModel | ChatModel |
| --- | --- | --- |
| Temperature | Supported | Supported |
| Model | Supported | Supported |
| TopP | Supported | Supported |
| Tools | Supported | Supported |
| ToolChoice | Supported | Supported |
| MaxTokens | Supported | Supported |
| AllowedToolNames | Not Supported | Supported |
| Stop | Supported by some implementations | Supported |
| AllowedTools | Supported | Not Supported |
Accordingly, AgenticModel adds the following method for setting Options:
// WithAgenticToolChoice is the option to set tool choice for the agentic model.
func WithAgenticToolChoice(toolChoice schema.ToolChoice, allowedTools ...*schema.AllowedTool) Option {}
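For instance, to force a tool call and restrict it to a server-side web search tool, the option can be passed to Generate or Stream (a sketch; the tool name "web_search" is provider-specific, matching the streaming example later in this guide, and `am`, `ctx`, and `input` are assumed to be set up as in the usage examples below):
allowed := []*schema.AllowedTool{
    {
        ServerTool: &schema.AllowedServerTool{
            Name: "web_search",
        },
    },
}

msg, err := am.Generate(ctx, input,
    // Force a tool call and limit it to the allowed tools above.
    model.WithAgenticToolChoice(schema.ToolChoiceForced, allowed...),
)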
Component Implementation Custom Options
WrapImplSpecificOptFn lets a component implementation inject its own custom Options. The implementation defines a private options type and exposes the corresponding Option constructors:
type openaiOptions struct {
maxToolCalls *int
maxOutputTokens *int64
}
func WithMaxToolCalls(maxToolCalls int) model.Option {
return model.WrapImplSpecificOptFn(func(o *openaiOptions) {
o.maxToolCalls = &maxToolCalls
})
}
func WithMaxOutputTokens(maxOutputTokens int64) model.Option {
return model.WrapImplSpecificOptFn(func(o *openaiOptions) {
o.maxOutputTokens = &maxOutputTokens
})
}
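Callers then mix these implementation-specific options with the common ones, and the implementation recovers them from the Option list. The caller side below only uses the constructors defined above; reading the options back via model.GetImplSpecificOptions is shown as a sketch and should be checked against the eino option utilities (`rawOpts` stands for the Option slice received by the implementation):
// Caller side: combine common and implementation-specific options.
msg, err := am.Generate(ctx, input,
    model.WithTemperature(0.7),
    WithMaxToolCalls(3),
    WithMaxOutputTokens(1024),
)

// Implementation side (sketch): recover the custom options inside Generate/Stream.
opts := model.GetImplSpecificOptions(&openaiOptions{}, rawOpts...)
_ = opts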
Usage
Standalone Usage
- Non-streaming call
import (
"context"
"fmt"
"github.com/cloudwego/eino-ext/components/model/agenticopenai"
"github.com/cloudwego/eino/schema"
"github.com/eino-contrib/jsonschema"
"github.com/wk8/go-ordered-map/v2"
)
func main() {
ctx := context.Background()
am, _ := agenticopenai.New(ctx, &agenticopenai.Config{})
input := []*schema.AgenticMessage{
schema.UserAgenticMessage("what is the weather like in Beijing"),
}
am_, _ := am.WithTools([]*schema.ToolInfo{
{
Name: "get_weather",
Desc: "get the weather in a city",
ParamsOneOf: schema.NewParamsOneOfByJSONSchema(&jsonschema.Schema{
Type: "object",
Properties: orderedmap.New[string, *jsonschema.Schema](
orderedmap.WithInitialData(
orderedmap.Pair[string, *jsonschema.Schema]{
Key: "city",
Value: &jsonschema.Schema{
Type: "string",
Description: "the city to get the weather",
},
},
),
),
Required: []string{"city"},
}),
},
})
msg, _ := am_.Generate(ctx, input)
fmt.Println(msg)
}
- Streaming call
import (
"context"
"errors"
"io"
"github.com/cloudwego/eino-ext/components/model/agenticopenai"
"github.com/cloudwego/eino/components/model"
"github.com/cloudwego/eino/schema"
"github.com/openai/openai-go/v3/responses"
)
func main() {
ctx := context.Background()
am, _ := agenticopenai.New(ctx, &agenticopenai.Config{})
serverTools := []*agenticopenai.ServerToolConfig{
{
WebSearch: &responses.WebSearchToolParam{
Type: responses.WebSearchToolTypeWebSearch,
},
},
}
allowedTools := []*schema.AllowedTool{
{
ServerTool: &schema.AllowedServerTool{
Name: string(agenticopenai.ServerToolNameWebSearch),
},
},
}
opts := []model.Option{
model.WithAgenticToolChoice(schema.ToolChoiceForced, allowedTools...),
agenticopenai.WithServerTools(serverTools),
}
input := []*schema.AgenticMessage{
schema.UserAgenticMessage("what's cloudwego/eino"),
}
resp, _ := am.Stream(ctx, input, opts...)
var msgs []*schema.AgenticMessage
for {
msg, err := resp.Recv()
if err != nil {
if errors.Is(err, io.EOF) {
break
}
// handle other errors
break
}
msgs = append(msgs, msg)
}
concatenated, _ := schema.ConcatAgenticMessages(msgs)
_ = concatenated // use the complete, concatenated response
}
Usage in Orchestration
import (
"github.com/cloudwego/eino/schema"
"github.com/cloudwego/eino/compose"
)
func main() {
/* Initialize AgenticModel
* am, err := xxx
*/
// Use in Chain
c := compose.NewChain[[]*schema.AgenticMessage, *schema.AgenticMessage]()
c.AppendAgenticModel(am)
// Use in Graph
g := compose.NewGraph[[]*schema.AgenticMessage, *schema.AgenticMessage]()
g.AddAgenticModelNode("model_node", am)
}
Options and Callbacks Usage
Options Usage
import "github.com/cloudwego/eino/components/model"
response, err := am.Generate(ctx, messages,
model.WithTemperature(0.7),
model.WithModel("gpt-5"),
)
Callback Usage
import (
"context"
"errors"
"io"
"github.com/cloudwego/eino/callbacks"
"github.com/cloudwego/eino/components/model"
"github.com/cloudwego/eino/compose"
"github.com/cloudwego/eino/schema"
callbacksHelper "github.com/cloudwego/eino/utils/callbacks"
)
// Create callback handler
handler := &callbacksHelper.AgenticModelCallbackHandler{
OnStart: func(ctx context.Context, info *callbacks.RunInfo, input *model.AgenticCallbackInput) context.Context {
return ctx
},
OnEnd: func(ctx context.Context, info *callbacks.RunInfo, output *model.AgenticCallbackOutput) context.Context {
return ctx
},
OnError: func(ctx context.Context, info *callbacks.RunInfo, err error) context.Context {
return ctx
},
OnEndWithStreamOutput: func(ctx context.Context, info *callbacks.RunInfo, output *schema.StreamReader[*model.AgenticCallbackOutput]) context.Context {
defer output.Close()
for {
chunk, err := output.Recv()
if errors.Is(err, io.EOF) {
break
}
...
}
return ctx
},
}
// Use callback handler
helper := callbacksHelper.NewHandlerHelper().
AgenticModel(handler).
Handler()
/*** compose a chain
* chain := NewChain
* chain.Appendxxx().
* Appendxxx().
* ...
*/
// Use at runtime
runnable, err := chain.Compile()
if err != nil {
return err
}
result, err := runnable.Invoke(ctx, messages, compose.WithCallbacks(helper))
Official Implementations
To be added