Eino: ChatModel Guide

Overview

The Model component is used to interact with large language models. Its main purpose is to send user input messages to the language model and obtain the model’s response. This component plays an important role in the following scenarios:

  • Natural language conversations
  • Text generation and completion
  • Tool call parameter generation
  • Multimodal interactions (text, images, audio, etc.)

Component Definition

Interface Definition

Code location: eino/components/model/interface.go

type BaseChatModel interface {
    Generate(ctx context.Context, input []*schema.Message, opts ...Option) (*schema.Message, error)
    Stream(ctx context.Context, input []*schema.Message, opts ...Option) (
        *schema.StreamReader[*schema.Message], error)
}

type ToolCallingChatModel interface {
    BaseChatModel

    // WithTools returns a new ToolCallingChatModel instance with the specified tools bound.
    // This method does not modify the current instance, making it safer for concurrent use.
    WithTools(tools []*schema.ToolInfo) (ToolCallingChatModel, error)
}

Generate Method

  • Function: Generate a complete model response
  • Parameters:
    • ctx: Context object for passing request-level information, also used to pass the Callback Manager
    • input: List of input messages
    • opts: Optional parameters for configuring model behavior
  • Return values:
    • *schema.Message: The response message generated by the model
    • error: Error information during generation

Stream Method

  • Function: Generate model response in streaming mode
  • Parameters: Same as the Generate method
  • Return values:
    • *schema.StreamReader[*schema.Message]: Stream reader for model response
    • error: Error information during generation

WithTools Method

  • Function: Bind available tools to the model
  • Parameters:
    • tools: List of tool information
  • Return values:
    • ToolCallingChatModel: ChatModel with tools bound
    • error: Error information during binding
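For example, a minimal sketch of binding a hypothetical get_weather tool, assuming cm is an initialized ToolCallingChatModel and messages is a prepared []*schema.Message (the tool name and parameters are illustrative only):

// Describe the tool the model may call
weatherTool := &schema.ToolInfo{
    Name: "get_weather",
    Desc: "Query the current weather for a city",
    ParamsOneOf: schema.NewParamsOneOfByParams(map[string]*schema.ParameterInfo{
        "city": {Type: schema.String, Desc: "City name", Required: true},
    }),
}

// WithTools returns a new instance with the tool bound; cm itself is unchanged
tcm, err := cm.WithTools([]*schema.ToolInfo{weatherTool})
if err != nil {
    // Error handling
}

// If the model decides to call the tool, the returned message carries the
// call name and JSON arguments in response.ToolCalls
response, err := tcm.Generate(ctx, messages)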

Message Struct

Code location: eino/schema/message.go

type Message struct {
    // Role indicates the role of the message (system/user/assistant/tool)
    Role RoleType
    // Content is the text content of the message
    Content string
    // MultiContent is multimodal content, supporting text, images, audio, etc.
    // Deprecated: Use UserInputMultiContent instead
    MultiContent []ChatMessagePart
    // UserInputMultiContent stores user-input multimodal data, supporting text, images, audio, video, and files
    // When using this field, the message role is restricted to User
    UserInputMultiContent []MessageInputPart
    // AssistantGenMultiContent stores multimodal data output by the model, supporting text, images, audio, and video
    // When using this field, the message role is restricted to Assistant
    AssistantGenMultiContent []MessageOutputPart
    // Name is the sender name of the message
    Name string
    // ToolCalls is the tool call information in assistant messages
    ToolCalls []ToolCall
    // ToolCallID is the tool call ID for tool messages
    ToolCallID string
    // ResponseMeta contains response metadata
    ResponseMeta *ResponseMeta
    // Extra is used to store additional information
    Extra map[string]any
}

The Message struct is the basic structure for model interaction, supporting:

  • Multiple roles: system, user, assistant (ai), tool
  • Multimodal content: text, images, audio, video, files
  • Tool calls: Support for model calling external tools and functions
  • Metadata: Including response reason, token usage statistics, etc.
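For hand-built message lists, the schema package also provides helper constructors; a minimal sketch:

import "github.com/cloudwego/eino/schema"

messages := []*schema.Message{
    // Equivalent to &schema.Message{Role: schema.System, Content: "..."}
    schema.SystemMessage("You are a helpful assistant."),
    schema.UserMessage("What's the weather like today?"),
}

// Assistant and tool messages have constructors as well:
// schema.AssistantMessage(content, toolCalls) and schema.ToolMessage(content, toolCallID)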

Common Options

The Model component provides a set of common Options for configuring model behavior:

Code location: eino/components/model/option.go

type Options struct {
    // Temperature controls the randomness of output
    Temperature *float32
    // MaxTokens controls the maximum number of tokens to generate
    MaxTokens *int
    // Model specifies the model name to use
    Model *string
    // TopP controls the diversity of output
    TopP *float32
    // Stop specifies the conditions to stop generation
    Stop []string
}

Options can be set using the following methods:

// Set temperature
WithTemperature(temperature float32) Option

// Set maximum tokens
WithMaxTokens(maxTokens int) Option

// Set model name
WithModel(name string) Option

// Set top_p value
WithTopP(topP float32) Option

// Set stop words
WithStop(stop []string) Option

Usage

Standalone Usage

import (
    "context"
    "fmt"
    "io"

    "github.com/cloudwego/eino-ext/components/model/openai"
    "github.com/cloudwego/eino/components/model"
    "github.com/cloudwego/eino/schema"
)

ctx := context.Background()

// Initialize model (using OpenAI as an example)
cm, err := openai.NewChatModel(ctx, &openai.ChatModelConfig{
    // Configuration parameters
})
if err != nil {
    // Error handling
}

// Prepare input messages
messages := []*schema.Message{
    {
       Role:    schema.System,
       Content: "你是一个有帮助的助手。",
    },
    {
       Role:    schema.User,
       Content: "你好!",
    },
}

// Generate response
response, err := cm.Generate(ctx, messages, model.WithTemperature(0.8))
if err != nil {
    // Error handling
}

// Handle response
fmt.Print(response.Content)

// Stream generation
streamResult, err := cm.Stream(ctx, messages)
if err != nil {
    // Error handling
}
defer streamResult.Close()

for {
    chunk, err := streamResult.Recv()
    if err == io.EOF {
       break
    }
    if err != nil {
       // Error handling
       break
    }
    // Handle response chunk
    fmt.Print(chunk.Content)
}

Usage in Orchestration

import (
    "github.com/cloudwego/eino/schema"
    "github.com/cloudwego/eino/compose"
)

// Initialize ChatModel first
// cm, err := xxx

// Use in Chain
c := compose.NewChain[[]*schema.Message, *schema.Message]()
c.AppendChatModel(cm)


// Use in Graph
g := compose.NewGraph[[]*schema.Message, *schema.Message]()
g.AddChatModelNode("model_node", cm)
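A chain or graph must be compiled into a Runnable before it can run; a minimal sketch for the chain above (the graph additionally needs edges connecting compose.START, "model_node", and compose.END before compiling):

// Compile the chain into a Runnable
runnable, err := c.Compile(ctx)
if err != nil {
    // Error handling
}

// Invoke returns a complete response; Stream returns a streaming one
response, err := runnable.Invoke(ctx, messages)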

Option and Callback Usage

Option Usage Example

import "github.com/cloudwego/eino/components/model"

// Using Options
response, err := cm.Generate(ctx, messages,
    model.WithTemperature(0.7),
    model.WithMaxTokens(2000),
    model.WithModel("gpt-4"),
)

Callback Usage Example

import (
    "context"
    "errors"
    "fmt"
    "io"

    "github.com/cloudwego/eino/callbacks"
    "github.com/cloudwego/eino/components/model"
    "github.com/cloudwego/eino/compose"
    "github.com/cloudwego/eino/schema"
    callbacksHelper "github.com/cloudwego/eino/utils/callbacks"
)

// Create callback handler
handler := &callbacksHelper.ModelCallbackHandler{
    OnStart: func(ctx context.Context, info *callbacks.RunInfo, input *model.CallbackInput) context.Context {
       fmt.Printf("Starting generation, input message count: %d\n", len(input.Messages))
       return ctx
    },
    OnEnd: func(ctx context.Context, info *callbacks.RunInfo, output *model.CallbackOutput) context.Context {
       fmt.Printf("Generation complete, Token usage: %+v\n", output.TokenUsage)
       return ctx
    },
    OnEndWithStreamOutput: func(ctx context.Context, info *callbacks.RunInfo, output *schema.StreamReader[*model.CallbackOutput]) context.Context {
        fmt.Println("Starting to receive streaming output")
        defer output.Close()
    
        for {
            chunk, err := output.Recv()
            if errors.Is(err, io.EOF) {
                break
            }
            if err != nil {
                fmt.Printf("Stream read error: %v\n", err)
                return ctx
            }
            if chunk == nil || chunk.Message == nil {
                continue
            }
    
            // Only print when model output contains ToolCall
            if len(chunk.Message.ToolCalls) > 0 {
                for _, tc := range chunk.Message.ToolCalls {
                    fmt.Printf("ToolCall detected, arguments: %s\n", tc.Function.Arguments)
                }
            }
        }
    
        return ctx
    },
}

// Use callback handler
helper := callbacksHelper.NewHandlerHelper().
    ChatModel(handler).
    Handler()

// Compose a chain first, e.g.:
// chain := compose.NewChain[[]*schema.Message, *schema.Message]()
// chain.AppendChatModel(cm)

// Use at runtime
runnable, err := chain.Compile(ctx)
if err != nil {
    return err
}
result, err := runnable.Invoke(ctx, messages, compose.WithCallbacks(helper))

Existing Implementations

  1. OpenAI ChatModel: uses OpenAI's GPT series models (see ChatModel - OpenAI)
  2. Ollama ChatModel: uses local models served through Ollama (see ChatModel - Ollama)
  3. ARK ChatModel: uses model services on the ARK platform (see ChatModel - ARK)
  4. More implementations: see Eino ChatModel

Custom Implementation Reference

When implementing a custom ChatModel component, pay attention to the following points:

  1. Make sure to implement common options
  2. Make sure to implement the callback mechanism
  3. Remember to close the writer after streaming output is complete

Option Mechanism

If a custom ChatModel needs Options beyond the common Options, you can use the component abstraction utility functions to implement custom Options, for example:

import (
    "time"

    "github.com/cloudwego/eino/components/model"
)

// Define Option struct
type MyChatModelOptions struct {
    Options    *model.Options
    RetryCount int
    Timeout    time.Duration
}

// Define Option functions
func WithRetryCount(count int) model.Option {
    return model.WrapImplSpecificOptFn(func(o *MyChatModelOptions) {
       o.RetryCount = count
    })
}

func WithTimeout(timeout time.Duration) model.Option {
    return model.WrapImplSpecificOptFn(func(o *MyChatModelOptions) {
       o.Timeout = timeout
    })
}
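Callers can then mix common and implementation-specific Options in a single call; a sketch, assuming cm is a model instance that consumes MyChatModelOptions:

response, err := cm.Generate(ctx, messages,
    // Common option, extracted via model.GetCommonOptions
    model.WithTemperature(0.5),
    // Implementation-specific options, extracted via model.GetImplSpecificOptions
    WithRetryCount(3),
    WithTimeout(30*time.Second),
)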

Callback Handling

ChatModel implementations need to trigger callbacks at appropriate times. The following structures are defined by the ChatModel component:

import (
    "github.com/cloudwego/eino/schema"
)

// Callback input and output defined by the model component
type CallbackInput struct {
    // Messages is the list of input messages
    Messages []*schema.Message
    // Tools is the list of tools bound to the model
    Tools []*schema.ToolInfo
    // Config carries the model configuration (model name, temperature, max tokens, etc.)
    Config *Config
    // Extra is used to store additional information
    Extra map[string]any
}

type CallbackOutput struct {
    // Message is the response message generated by the model
    Message *schema.Message
    // Config carries the model configuration used for this request
    Config *Config
    // TokenUsage contains token usage statistics for this request
    TokenUsage *TokenUsage
    // Extra is used to store additional information
    Extra map[string]any
}

Complete Implementation Example

import (
    "context"
    "errors"
    "net/http"
    "time"

    "github.com/cloudwego/eino/callbacks"
    "github.com/cloudwego/eino/components/model"
    "github.com/cloudwego/eino/schema"
)

type MyChatModel struct {
    client     *http.Client
    apiKey     string
    baseURL    string
    model      string
    timeout    time.Duration
    retryCount int
}

type MyChatModelConfig struct {
    APIKey string
}

func NewMyChatModel(config *MyChatModelConfig) (*MyChatModel, error) {
    if config.APIKey == "" {
       return nil, errors.New("api key is required")
    }

    return &MyChatModel{
       client: &http.Client{},
       apiKey: config.APIKey,
    }, nil
}

func (m *MyChatModel) Generate(ctx context.Context, messages []*schema.Message, opts ...model.Option) (*schema.Message, error) {
    // 1. Process options
    options := &MyChatModelOptions{
       Options: &model.Options{
          Model: &m.model,
       },
       RetryCount: m.retryCount,
       Timeout:    m.timeout,
    }
    options.Options = model.GetCommonOptions(options.Options, opts...)
    options = model.GetImplSpecificOptions(options, opts...)

    // 2. Callback before starting generation
    ctx = callbacks.OnStart(ctx, &model.CallbackInput{
       Messages: messages,
       Config: &model.Config{
          Model: *options.Options.Model,
       },
    })

    // 3. Execute generation logic
    response, err := m.doGenerate(ctx, messages, options)

    // 4. Handle error and completion callbacks
    if err != nil {
       ctx = callbacks.OnError(ctx, err)
       return nil, err
    }

    ctx = callbacks.OnEnd(ctx, &model.CallbackOutput{
       Message: response,
    })

    return response, nil
}

func (m *MyChatModel) Stream(ctx context.Context, messages []*schema.Message, opts ...model.Option) (*schema.StreamReader[*schema.Message], error) {
    // 1. Process options
    options := &MyChatModelOptions{
       Options: &model.Options{
          Model: &m.model,
       },
       RetryCount: m.retryCount,
       Timeout:    m.timeout,
    }
    options.Options = model.GetCommonOptions(options.Options, opts...)
    options = model.GetImplSpecificOptions(options, opts...)

    // 2. Callback before starting streaming generation
    ctx = callbacks.OnStart(ctx, &model.CallbackInput{
       Messages: messages,
       Config: &model.Config{
          Model: *options.Options.Model,
       },
    })

    // 3. Create streaming response
    // Pipe produces a StreamReader and a StreamWriter; whatever is written to the
    // StreamWriter can be read from the StreamReader, and both are concurrency-safe.
    // The implementation writes generated content to the StreamWriter asynchronously
    // and returns the StreamReader to the caller.
    // Note: a StreamReader can only be read once. An implementation that triggers
    // callbacks itself must both pass a stream to the callback via
    // callbacks.OnEndWithStreamOutput and return a stream to the caller, which
    // requires copying the stream. Since this scenario always needs a copy,
    // OnEndWithStreamOutput copies internally and returns an unread stream.
    // The code below demonstrates one possible approach; it is not the only one.
    sr, sw := schema.Pipe[*model.CallbackOutput](1)

    // 4. Start asynchronous generation
    go func() {
       defer sw.Close()

       // Stream writing
       m.doStream(ctx, messages, options, sw)
    }()

    // 5. Completion callback
    _, nsr := callbacks.OnEndWithStreamOutput(ctx, sr)

    return schema.StreamReaderWithConvert(nsr, func(t *model.CallbackOutput) (*schema.Message, error) {
       return t.Message, nil
    }), nil
}

func (m *MyChatModel) WithTools(tools []*schema.ToolInfo) (model.ToolCallingChatModel, error) {
    // Implement tool binding logic: return a new instance with the tools bound
    // rather than modifying the receiver, so concurrent use remains safe
    return nil, nil
}

func (m *MyChatModel) doGenerate(ctx context.Context, messages []*schema.Message, opts *MyChatModelOptions) (*schema.Message, error) {
    // Implement generation logic
    return nil, nil
}

func (m *MyChatModel) doStream(ctx context.Context, messages []*schema.Message, opts *MyChatModelOptions, sw *schema.StreamWriter[*model.CallbackOutput]) {
    // Write streaming generated content to sw chunk by chunk
    return
}
