API Reference

Create Text Chat Request

Content-Type: application/json

Request Parameters

Authorization
  • Type: string
  • Location: header
  • Required: Yes
  • Description: Use the following format for authentication: Bearer <your api key> (log in on the iFlow official website to obtain your API key).
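
For illustration, a minimal sketch of the request headers in Python (the dict below and the placeholder key are assumptions; substitute the API key obtained from the iFlow official website):

API_KEY = "<your api key>"  # obtained from the iFlow official website
HEADERS = {
    "Authorization": f"Bearer {API_KEY}",   # format: Bearer <your api key>
    "Content-Type": "application/json",
}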

LLM Models

| Parameter Name | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| messages | object[] | Yes | - | List of messages that make up the current conversation. |
| messages.content | string | Yes | What opportunities and challenges will the Chinese large model industry face in 2025? | Content of the message. |
| messages.role | enum<string> | Yes | user | Role of the message author. Possible values: user, assistant, system. |
| model | enum<string> | Yes | deepseek-r1 | Name of the model to use. To maintain service quality, we may periodically change the models provided by this service, including but not limited to taking models online or offline and adjusting model capabilities. We will notify you when possible through appropriate channels such as announcements and message pushes. See the Quick Start page for the supported models. |
| frequency_penalty | number | No | 0.5 | Frequency penalty applied to generated tokens, used to control repetition. |
| max_tokens | integer | No | 512 | Maximum number of tokens to generate. Value range: 1 < x < 8192. |
| n | integer | No | 1 | Number of generated results to return. |
| response_format | object | No | - | Object specifying the model output format. |
| response_format.type | string | No | - | Type of response format. |
| stop | string[] or null | No | - | - |
| stream | boolean | No | false | If set to true, tokens are returned progressively as Server-Sent Events (SSE). |
| temperature | number | No | 0.7 | Controls the randomness of responses. Lower values make output more deterministic; higher values make it more random. |
| tools | object[] | No | - | List of tools the model may call. Currently, only functions are supported as tools. Use this parameter to provide a list of functions the model may generate JSON inputs for. Up to 128 functions are supported. |
| tools.function | object | No | - | Function object. |
| tools.function.name | string | No | - | Name of the function to call. Must consist of letters, numbers, underscores, or hyphens, with a maximum length of 64. |
| tools.function.description | string | No | - | Description of the function, used by the model to decide when and how to call it. |
| tools.function.parameters | object | No | - | Parameters accepted by the function, described as a JSON Schema object. If parameters are omitted, the function is defined with an empty parameter list (see the sketch after this table). |
| tools.function.strict | boolean or null | No | false | - |
| tools.type | enum<string> | No | function | Type of tool. Currently only function is supported. |
| top_k | number | No | 50 | Limits token selection to the top k candidates. |
| top_p | number | No | 0.7 | Nucleus sampling parameter that dynamically adjusts the candidate token set based on cumulative probability. |
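
The tools.* rows above describe a function definition whose parameters field is a JSON Schema object. The following Python sketch shows the expected shape; the get_weather function, its description, and its schema are hypothetical and used only for illustration:

# Hypothetical function definition illustrating the shape of a tools entry.
weather_tool = {
    "type": "function",                      # tools.type: only "function" is supported
    "function": {
        "name": "get_weather",               # letters, numbers, underscores, or hyphens; max length 64
        "description": "Look up the current weather for a city.",
        "parameters": {                      # JSON Schema describing the accepted arguments
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name, e.g. Beijing"}
            },
            "required": ["city"],
        },
        "strict": False,
    },
}

tools = [weather_tool]  # up to 128 function definitions may be supplied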

Request Examples

curl --request POST \
  --url https://apis.iflow.cn/v1/chat/completions \
  --header 'Authorization: Bearer <iflow API KEY>' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "deepseek-r1",
    "messages": [
      {
        "role": "user",
        "content": "What opportunities and challenges will the Chinese large model industry face in 2025?"
      }
    ],
    "stream": false,
    "max_tokens": 512,
    "stop": [
      "null"
    ],
    "temperature": 0.7,
    "top_p": 0.7,
    "top_k": 50,
    "frequency_penalty": 0.5,
    "n": 1,
    "response_format": {
      "type": "text"
    },
    "tools": [
      {
        "type": "function",
        "function": {
          "description": "<string>",
          "name": "<string>",
          "parameters": {},
          "strict": false
        }
      }
    ]
  }'
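
The same request can be sent from Python. The sketch below uses the third-party requests library (an assumption for illustration, not an official SDK) and mirrors the curl body above:

import requests

API_KEY = "<iflow API KEY>"

payload = {
    "model": "deepseek-r1",
    "messages": [
        {
            "role": "user",
            "content": "What opportunities and challenges will the Chinese large model industry face in 2025?",
        }
    ],
    "stream": False,
    "max_tokens": 512,
    "temperature": 0.7,
    "top_p": 0.7,
    "top_k": 50,
    "frequency_penalty": 0.5,
    "n": 1,
    "response_format": {"type": "text"},
}

resp = requests.post(
    "https://apis.iflow.cn/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json=payload,
    timeout=60,
)
resp.raise_for_status()
data = resp.json()

# Per the response parameters below, the reply text is in choices[0].message.content.
print(data["choices"][0]["message"]["content"])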

Response Parameters

| Parameter Name | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| choices | object[] | Yes | - | List of choices generated by the model. |
| choices.finish_reason | enum<string> | No | - | Reason generation finished. Possible values: stop (natural completion), eos (reached an end-of-sentence marker), length (reached the maximum token limit), tool_calls (called a tool, such as a function). |
| choices.message | object | Yes | - | Message object returned by the model. |
| created | integer | Yes | - | Timestamp of when the response was generated. |
| id | string | Yes | - | Unique identifier of the response. |
| model | string | Yes | - | Name of the model used. |
| object | enum<string> | Yes | - | Type of response. Possible values: chat.completion (indicates a chat completion response). |
| tool_calls | object[] | No | - | Tool calls generated by the model, such as function calls. |
| tool_calls.function | object | No | - | Function called by the model. |
| tool_calls.function.arguments | string | No | - | Arguments for the function call, generated by the model as a JSON string. Note: the generated JSON may be invalid, or may include parameters that are not part of the function definition. Validate these arguments in your code before calling the function (a sketch follows the response example below). |
| tool_calls.function.name | string | No | - | Name of the function to call. |
| tool_calls.id | string | No | - | Unique identifier of the tool call. |
| tool_calls.type | enum<string> | No | - | Type of tool. Currently only function is supported. Possible values: function (indicates a function call). |
| usage | object | Yes | - | Token usage statistics. |
| usage.completion_tokens | integer | Yes | - | Number of tokens used for the completion. |
| usage.prompt_tokens | integer | Yes | - | Number of tokens used for the prompt. |
| usage.total_tokens | integer | Yes | - | Total number of tokens used. |

Response Information

{
  "id": "<string>",
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "<string>",
        "reasoning_content": "<string>"
      },
      "finish_reason": "stop"
    }
  ],
  "tool_calls": [
    {
      "id": "<string>",
      "type": "function",
      "function": {
        "name": "<string>",
        "arguments": "<string>"
      }
    }
  ],
  "usage": {
    "prompt_tokens": 123,
    "completion_tokens": 123,
    "total_tokens": 123
  },
  "created": 123,
  "model": "<string>",
  "object": "chat.completion"
}
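
As noted for tool_calls.function.arguments, the JSON string the model generates may be invalid or may contain parameters that are not part of the function definition, so it should be validated before the function is invoked. A minimal Python sketch of such a check follows; the local dispatch table, the get_weather function, and the allowed-parameter set are hypothetical:

import json

# Hypothetical local implementations keyed by function name.
LOCAL_FUNCTIONS = {
    "get_weather": lambda city: f"Weather lookup for {city} would go here.",
}
ALLOWED_PARAMS = {"get_weather": {"city"}}

def run_tool_call(tool_call: dict) -> str:
    """Validate one tool_calls entry from the response before executing it."""
    if tool_call.get("type") != "function":
        raise ValueError("Only function tool calls are supported.")

    name = tool_call["function"]["name"]
    if name not in LOCAL_FUNCTIONS:
        raise ValueError(f"Unknown function: {name}")

    try:
        # arguments is a JSON string generated by the model; it may be malformed.
        args = json.loads(tool_call["function"]["arguments"])
    except json.JSONDecodeError as exc:
        raise ValueError(f"Model produced invalid JSON arguments: {exc}") from exc

    # Drop any parameters that are not part of the function definition.
    args = {k: v for k, v in args.items() if k in ALLOWED_PARAMS[name]}
    return LOCAL_FUNCTIONS[name](**args)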