Detokenize tokens
POST /v1/detokenize
Given a list of tokens, generates the detokenized output text string.
To run an inference request, you must provide a personal access token (e.g. flp_XXX) in the Bearer Token field. Refer to the authentication section on our introduction page to learn how to acquire and generate your token.
Request
Header Parameters
X-Friendli-Team string
ID of team to run requests as (optional parameter).
- application/json
Body
model string required
Code of the model to use. See available model list.
tokens integer[] required
A token sequence to detokenize.
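A minimal sketch of building this request in Python. The base URL, model code, token values, and team ID below are assumptions for illustration, not values confirmed by this page; substitute your own personal access token and model code.

```python
import json

# Hypothetical values -- replace with your own credentials and model code.
API_URL = "https://api.friendli.ai/v1/detokenize"  # assumed endpoint URL
ACCESS_TOKEN = "flp_XXX"   # personal access token (Bearer)
TEAM_ID = "my-team"        # optional X-Friendli-Team header value

def build_detokenize_request(model: str, tokens: list[int]) -> tuple[dict, str]:
    """Assemble the headers and JSON body for a detokenize call."""
    headers = {
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Content-Type": "application/json",
        "X-Friendli-Team": TEAM_ID,  # optional; omit to use your default team
    }
    body = json.dumps({"model": model, "tokens": tokens})
    return headers, body

# Example: token IDs here are placeholders, not a real encoding.
headers, body = build_detokenize_request("my-model-code", [3923, 374, 30])
print(body)
```

The returned headers and body can then be sent with any HTTP client, e.g. `requests.post(API_URL, headers=headers, data=body)`.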
Responses
- 200
Successfully detokenized the tokens.
- application/json
Schema
text string
Detokenized text output.
Example
Detokenized output.
{
"text": "What is generative AI?\n"
}
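A successful response is a JSON object with a single text field. A minimal sketch of extracting it, using the example body above:

```python
import json

# Example response body as documented above.
raw = '{"text": "What is generative AI?\\n"}'
response = json.loads(raw)
print(response["text"])  # the detokenized string
```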