Usage

@Serializable
class Usage(val completionTokens: Int, val promptTokens: Int, val promptCacheHitTokens: Int? = null, val promptCacheMissTokens: Int? = null, val promptTokensDetails: PromptTokensDetails? = null, val totalTokens: Int, val completionTokensDetails: CompletionTokenDetails? = null)

Token usage statistics for a single chat or FIM request.

For streamed responses, ChatCompletionChunk.usage (or FIMCompletion.usage) is populated only on the final, usage-only chunk, which the API emits when streamOptions.includeUsage is set.
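As a minimal sketch of consuming that final chunk, assuming the client exposes the stream as a kotlinx.coroutines Flow<ChatCompletionChunk> (an assumption; adapt to the actual stream type). Only the usage field and the streamOptions.includeUsage behavior come from this page.

import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.collect

// Drain a streamed response and keep the usage statistics from the final
// usage-only chunk. Requires streamOptions.includeUsage on the request;
// otherwise no chunk carries usage and this returns null.
suspend fun lastUsage(chunks: Flow<ChatCompletionChunk>): Usage? {
    var usage: Usage? = null
    chunks.collect { chunk ->
        // All chunks before the final usage-only chunk carry usage == null.
        chunk.usage?.let { usage = it }
    }
    return usage
}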

Constructors

constructor(completionTokens: Int, promptTokens: Int, promptCacheHitTokens: Int? = null, promptCacheMissTokens: Int? = null, promptTokensDetails: PromptTokensDetails? = null, totalTokens: Int, completionTokensDetails: CompletionTokenDetails? = null)

Properties

val completionTokens: Int

Number of tokens in the generated completion.

val completionTokensDetails: CompletionTokenDetails? = null

Breakdown of how completionTokens was spent (e.g. reasoning tokens for deepseek-reasoner).

val promptCacheHitTokens: Int? = null

Number of prompt tokens served from the context cache, or null when caching does not apply. Legacy field; newer responses may report the same information under promptTokensDetails instead.

val promptCacheMissTokens: Int? = null

Number of prompt tokens not served from the context cache, or null when caching does not apply. Legacy field; see promptTokensDetails.

val promptTokens: Int

Number of tokens in the prompt. When context caching applies, this equals promptCacheHitTokens + promptCacheMissTokens; see the sketch after the property list.

val promptTokensDetails: PromptTokensDetails? = null

Structured breakdown of promptTokens under the prompt_tokens_details key, matching the OpenAI-compatible shape. May be null for older API versions.

val totalTokens: Int

Total tokens billed for the request (prompt + completion).
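
The identities documented above can be exercised directly. A minimal sketch with invented sample numbers (no real API response is implied); the constructor shape comes from this page, the values do not.

fun main() {
    val usage = Usage(
        completionTokens = 120,
        promptTokens = 800,
        promptCacheHitTokens = 512,
        promptCacheMissTokens = 288,
        totalTokens = 920,
    )
    // Documented identity: total = prompt + completion.
    check(usage.totalTokens == usage.promptTokens + usage.completionTokens)
    val hits = usage.promptCacheHitTokens
    val misses = usage.promptCacheMissTokens
    if (hits != null && misses != null) {
        // When context caching applies: prompt = cache hits + cache misses.
        check(usage.promptTokens == hits + misses)
        println("cache hit rate: ${100 * hits / usage.promptTokens}%") // 64%
    }
}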

Functions

open operator override fun equals(other: Any?): Boolean
open override fun hashCode(): Int
open override fun toString(): String