BatchedExecutor

Namespace: LLama.Batched

A batched executor that can infer multiple separate "conversations" simultaneously.

```csharp
public sealed class BatchedExecutor : System.IDisposable
```

Inheritance Object → BatchedExecutor
Implements IDisposable
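
A minimal usage sketch, assuming LLamaWeights.LoadFromFile and ModelParams from the LLama and LLama.Common namespaces, and assuming Conversation exposes a Prompt overload that accepts tokenized input (see the Conversation documentation for the exact signatures):

```csharp
using System.Threading;
using LLama;
using LLama.Batched;
using LLama.Common;

// One set of weights and one executor (with a single shared context)
// can serve many independent conversations.
var parameters = new ModelParams("path/to/model.gguf");
using var model = LLamaWeights.LoadFromFile(parameters);
using var executor = new BatchedExecutor(model, parameters);

// Start two separate conversations in the same batch.
using var first = executor.Create();
using var second = executor.Create();
first.Prompt(executor.Context.Tokenize("Question one: what is batching?"));
second.Prompt(executor.Context.Tokenize("Question two: why share a context?"));

// A single Infer call decodes the pending tokens of every conversation at once.
var result = await executor.Infer(CancellationToken.None);

// Sample the next token for each conversation here (see the Conversation and
// sampling documentation), prompt each conversation again, and call Infer in a loop.
```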

Properties

Context

The LLamaContext this executor is using

```csharp
public LLamaContext Context { get; }
```

Property Value

LLamaContext

Model

The LLamaWeights this executor is using

```csharp
public LLamaWeights Model { get; }
```

Property Value

LLamaWeights

BatchedTokenCount

The number of tokens currently queued in the batch, waiting for BatchedExecutor.Infer(CancellationToken) to be called

```csharp
public int BatchedTokenCount { get; }
```

Property Value

Int32
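
For example, BatchedTokenCount can be used to skip a decode step when nothing has been queued (a sketch, reusing the executor from the example above):

```csharp
// Only run inference when at least one conversation has pending tokens.
if (executor.BatchedTokenCount > 0)
{
    var result = await executor.Infer(CancellationToken.None);
    // check result here (e.g. for NoKvSlot) before sampling
}
```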

IsDisposed

Check if this executor has been disposed.

```csharp
public bool IsDisposed { get; private set; }
```

Property Value

Boolean

Constructors

BatchedExecutor(LLamaWeights, IContextParams)

Create a new batched executor

```csharp
public BatchedExecutor(LLamaWeights model, IContextParams contextParams)
```

Parameters

model LLamaWeights
The model to use

contextParams IContextParams
Parameters to create a new context
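
A construction sketch: ModelParams (from LLama.Common) implements IContextParams, so the same parameters object used to load the weights can be passed as the context parameters. The path and settings below are placeholders:

```csharp
using LLama;
using LLama.Batched;
using LLama.Common;

var parameters = new ModelParams("path/to/model.gguf")
{
    ContextSize = 4096, // the single context is shared by all conversations in the batch
};

using var model = LLamaWeights.LoadFromFile(parameters);
using var executor = new BatchedExecutor(model, parameters);
```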

Methods

Prompt(String)

Caution: Use BatchedExecutor.Create instead.

Start a new Conversation with the given prompt

```csharp
public Conversation Prompt(string prompt)
```

Parameters

prompt String

Returns

Conversation
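
Since Prompt(String) is deprecated, new code can create the conversation first and then prompt it. A rough equivalent, assuming a Prompt overload on Conversation that accepts tokenized text:

```csharp
// Deprecated:
// var conversation = executor.Prompt("Tell me a story.");

// Preferred:
var conversation = executor.Create();
conversation.Prompt(executor.Context.Tokenize("Tell me a story."));
```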

Create()

Start a new Conversation

```csharp
public Conversation Create()
```

Returns

Conversation

Infer(CancellationToken)

Run inference for all conversations in the batch which have pending tokens.

If the result is NoKvSlot then there is not enough memory for inference; try disposing some conversations to free space and running inference again.

```csharp
public Task<DecodeResult> Infer(CancellationToken cancellation)
```

Parameters

cancellation CancellationToken

Returns

Task<DecodeResult>
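
A sketch of an inference loop that handles the NoKvSlot case by disposing a conversation before retrying. It continues from the example under the class description, and assumes DecodeResult exposes Ok and NoKvSlot values:

```csharp
var conversations = new List<Conversation> { first, second };

while (executor.BatchedTokenCount > 0)
{
    var result = await executor.Infer(CancellationToken.None);

    if (result == DecodeResult.NoKvSlot)
    {
        // Not enough KV cache space: free one conversation and retry the rest.
        var victim = conversations[^1];
        conversations.RemoveAt(conversations.Count - 1);
        victim.Dispose();
        continue;
    }

    if (result != DecodeResult.Ok)
        throw new Exception($"Inference failed: {result}");

    // Sample a token for each remaining conversation and prompt it again here,
    // which queues more tokens for the next iteration of the loop.
}
```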

Dispose()

```csharp
public void Dispose()
```
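
A disposal sketch, continuing from the examples above. Disposing conversations before the executor that created them is a conservative assumption here, not a documented requirement:

```csharp
// Dispose conversations first, then the executor (which owns the shared context).
first.Dispose();
second.Dispose();
executor.Dispose();

// After disposal, IsDisposed is true and the executor must not be used again.
if (executor.IsDisposed)
{
    // do not call Create or Infer on this executor any more
}
```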

GetNextSequenceId()

```csharp
internal LLamaSeqId GetNextSequenceId()
```

Returns

LLamaSeqId