qianfan package

A library that helps developers interact with LLMs.

qianfan.AK(ak: str) None[source]

Set the API Key (AK) for LLM API authentication.

This function allows you to set the API Key that will be used for authentication throughout the entire SDK. The API Key can be acquired from the qianfan console: https://console.bce.baidu.com/qianfan/ais/console/applicationConsole/application

Parameters:
ak (str):

The API Key to be set for LLM API authentication.

qianfan.AccessKey(access_key: str) None[source]

Set the Access Key for console API authentication.

This function allows you to set the Access Key that will be used for authentication throughout the entire SDK. The Access Key can be acquired from the Baidu BCE console: https://console.bce.baidu.com/iam/#/iam/accesslist

Parameters:
access_key (str):

The Access Key to be set for console API authentication.

qianfan.AccessToken(access_token: str) None[source]

Set the access token for LLM API authentication.

This function allows you to set the access token that will be used for authentication throughout the entire SDK. The access token can be generated from API key and secret key according to the instructions at https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Ilkkrb0i5.

This function is only needed when you have only an access token. If you have both the API key and secret key, the SDK will refresh the access token for you automatically.

Parameters:
access_token (str):

The access token to be set for LLM API authentication.
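For reference, the token generation mentioned above is a single OAuth 2.0 client-credentials request. A minimal sketch of building that request URL, assuming the endpoint given in Baidu's public documentation (verify against the instructions linked above; this is not taken from the SDK's source):

```python
from urllib.parse import urlencode

def access_token_url(api_key: str, secret_key: str) -> str:
    # Build the OAuth 2.0 client-credentials request URL described in
    # the linked instructions (endpoint assumed, not read from the SDK).
    params = {
        "grant_type": "client_credentials",
        "client_id": api_key,
        "client_secret": secret_key,
    }
    return "https://aip.baidubce.com/oauth/2.0/token?" + urlencode(params)

print(access_token_url("my-ak", "my-sk"))
```

Issuing a GET request to this URL returns a JSON body containing the access token; when both keys are set, the SDK performs this refresh for you.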

class qianfan.ChatCompletion(model: Optional[str] = None, endpoint: Optional[str] = None, **kwargs: Any)[source]

Bases: BaseResource

QianFan ChatCompletion is an agent for calling the QianFan ChatCompletion API.

async abatch_do(messages_list: List[Union[List[Dict], QfMessages]], worker_num: Optional[int] = None, **kwargs: Any) List[Union[QfResponse, AsyncIterator[QfResponse]]][source]

Asynchronously perform chat-based language generation for a batch of user-supplied message lists.

Parameters:
messages_list (List[Union[List[Dict], QfMessages]]):

List of message lists, one per conversation. Please refer to ChatCompletion.do for details on the format of each messages list.

worker_num (Optional[int]):

The number of prompts to process concurrently. Defaults to None, in which case the number is decided dynamically.

kwargs (Any):

Please refer to ChatCompletion.do for other parameters such as model, endpoint, retry_count, etc.

```
response_list = await ChatCompletion().abatch_do([...], worker_num=10)
for response in response_list:
    # response is a QfResponse if the call succeeded, otherwise an exception
    print(response)
```
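The worker_num cap behaves like a limit on in-flight requests. A minimal illustration of that semantics with an asyncio semaphore (a sketch of the idea, not the SDK's implementation):

```python
import asyncio

async def bounded_gather(coros, worker_num):
    # Run at most worker_num coroutines at the same time, preserving order.
    sem = asyncio.Semaphore(worker_num)

    async def guarded(coro):
        async with sem:
            return await coro

    return await asyncio.gather(*(guarded(c) for c in coros))

async def demo():
    async def job(i):
        await asyncio.sleep(0)
        return i * 2
    return await bounded_gather([job(i) for i in range(5)], worker_num=2)

print(asyncio.run(demo()))  # [0, 2, 4, 6, 8]
```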

async ado(messages: Union[List[Dict], QfMessages], model: Optional[str] = None, endpoint: Optional[str] = None, stream: bool = False, retry_count: int = 1, request_timeout: float = 60, request_id: Optional[str] = None, backoff_factor: float = 1, auto_concat_truncate: bool = False, truncated_continue_prompt: str = '继续', **kwargs: Any) Union[QfResponse, AsyncIterator[QfResponse]][source]

Asynchronously perform chat-based language generation using user-supplied messages.

Parameters:
messages (Union[List[Dict], QfMessages]):

A list of messages in the conversation, including the system message if any. Each message should be a dictionary containing 'role' and 'content' keys, representing the role (either 'user' or 'assistant') and the content of the message. Alternatively, you can provide a QfMessages object for convenience.

model (Optional[str]):

The name or identifier of the language model to use. If not specified, the default model (ERNIE-Bot-turbo) is used.

endpoint (Optional[str]):

The endpoint for making API requests. If not provided, the default endpoint is used.

stream (bool):

If set to True, the responses are streamed back as an iterator. If False, a single response is returned.

retry_count (int):

The number of times to retry the request in case of failure.

request_timeout (float):

The maximum time (in seconds) to wait for a response from the model.

backoff_factor (float):

A factor to increase the waiting time between retry attempts.

auto_concat_truncate (bool):

[Experimental] If set to True, requests are issued repeatedly until is_truncated is False, and the concatenated full reply is returned. Because this feature relies heavily on the model's comprehension ability, use it with care.

truncated_continue_prompt (str):

[Experimental] The prompt used to request the remaining content when a reply is truncated.

kwargs (Any):

Additional keyword arguments that can be passed to customize the request.

Additional parameters like temperature will vary depending on the model, please refer to the API documentation. The additional parameters can be passed as follows:

`ChatCompletion().ado(messages=..., temperature=0.2, top_p=0.5)`

batch_do(messages_list: Union[List[List[Dict]], List[QfMessages]], worker_num: Optional[int] = None, **kwargs: Any) BatchRequestFuture[source]

Perform chat-based language generation for a batch of user-supplied message lists.

Parameters:
messages_list (Union[List[List[Dict]], List[QfMessages]]):

List of message lists, one per conversation. Please refer to ChatCompletion.do for details on the format of each messages list.

worker_num (Optional[int]):

The number of prompts to process concurrently. Defaults to None, in which case the number is decided dynamically.

kwargs (Any):

Please refer to ChatCompletion.do for other parameters such as model, endpoint, retry_count, etc.

```
import time

response_list = ChatCompletion().batch_do([...], worker_num=10)
for response in response_list:
    # result() returns a QfResponse on success, or raises the exception
    print(response.result())

# or poll until every task finishes
while response_list.finished_count() != response_list.task_count():
    time.sleep(1)
print(response_list.results())
```
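The polling loop above relies on the BatchRequestFuture API. The same pattern can be emulated with standard-library futures, which may clarify the semantics (finished_count/task_count are the SDK's methods; plain futures stand in for them here):

```python
import concurrent.futures
import time

def run_batch(tasks, worker_num=4):
    # Submit every task to a bounded pool, poll until all are done,
    # then collect results in submission order.
    with concurrent.futures.ThreadPoolExecutor(max_workers=worker_num) as pool:
        futures = [pool.submit(task) for task in tasks]
        while sum(f.done() for f in futures) != len(futures):
            time.sleep(0.01)
        return [f.result() for f in futures]

print(run_batch([lambda: 1, lambda: 2]))  # [1, 2]
```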

do(messages: Union[List[Dict], QfMessages], model: Optional[str] = None, endpoint: Optional[str] = None, stream: bool = False, retry_count: int = 1, request_timeout: float = 60, request_id: Optional[str] = None, backoff_factor: float = 1, auto_concat_truncate: bool = False, truncated_continue_prompt: str = '继续', **kwargs: Any) Union[QfResponse, Iterator[QfResponse]][source]

Perform chat-based language generation using user-supplied messages.

Parameters:
messages (Union[List[Dict], QfMessages]):

A list of messages in the conversation, including the system message if any. Each message should be a dictionary containing 'role' and 'content' keys, representing the role (either 'user' or 'assistant') and the content of the message. Alternatively, you can provide a QfMessages object for convenience.

model (Optional[str]):

The name or identifier of the language model to use. If not specified, the default model (ERNIE-Bot-turbo) is used.

endpoint (Optional[str]):

The endpoint for making API requests. If not provided, the default endpoint is used.

stream (bool):

If set to True, the responses are streamed back as an iterator. If False, a single response is returned.

retry_count (int):

The number of times to retry the request in case of failure.

request_timeout (float):

The maximum time (in seconds) to wait for a response from the model.

backoff_factor (float):

A factor to increase the waiting time between retry attempts.

auto_concat_truncate (bool):

[Experimental] If set to True, requests are issued repeatedly until is_truncated is False, and the concatenated full reply is returned. Because this feature relies heavily on the model's comprehension ability, use it with care.

truncated_continue_prompt (str):

[Experimental] The prompt used to request the remaining content when a reply is truncated.

kwargs (Any):

Additional keyword arguments that can be passed to customize the request.

Additional parameters like temperature will vary depending on the model, please refer to the API documentation. The additional parameters can be passed as follows:

`ChatCompletion().do(messages=..., temperature=0.2, top_p=0.5)`
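The retry_count and backoff_factor parameters above combine in the usual exponential-backoff pattern. A sketch of the resulting wait schedule, assuming the conventional wait = backoff_factor * 2**attempt formula (the SDK's exact formula may differ):

```python
def backoff_schedule(retry_count, backoff_factor):
    # Seconds to wait before each retry attempt, under the assumed
    # exponential formula wait = backoff_factor * 2**attempt.
    return [backoff_factor * (2 ** attempt) for attempt in range(retry_count)]

print(backoff_schedule(3, 1.0))  # [1.0, 2.0, 4.0]
```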

class qianfan.Completion(model: Optional[str] = None, endpoint: Optional[str] = None, **kwargs: Any)[source]

Bases: BaseResource

QianFan Completion is an agent for calling the QianFan Completion API.

async abatch_do(prompt_list: List[str], worker_num: Optional[int] = None, **kwargs: Any) List[Union[QfResponse, AsyncIterator[QfResponse]]][source]

Asynchronously generate completions for a batch of user-provided prompts.

Parameters:
prompt_list (List[str]):

The list of input prompts to generate continuations from.

worker_num (Optional[int]):

The number of prompts to process concurrently. Defaults to None, in which case the number is decided dynamically.

kwargs (Any):

Please refer to Completion.ado for other parameters such as model, endpoint, retry_count, etc.

```
response_list = await Completion().abatch_do([...], worker_num=10)
for response in response_list:
    # response is a QfResponse if the call succeeded, otherwise an exception
    print(response)
```

async ado(prompt: str, model: Optional[str] = None, endpoint: Optional[str] = None, stream: bool = False, retry_count: int = 1, request_timeout: float = 60, request_id: Optional[str] = None, backoff_factor: float = 1, **kwargs: Any) Union[QfResponse, AsyncIterator[QfResponse]][source]

Asynchronously generate a completion based on the user-provided prompt.

Parameters:
prompt (str):

The input prompt to generate the continuation from.

model (Optional[str]):

The name or identifier of the language model to use. If not specified, the default model (ERNIE-Bot-turbo) is used.

endpoint (Optional[str]):

The endpoint for making API requests. If not provided, the default endpoint is used.

stream (bool):

If set to True, the responses are streamed back as an iterator. If False, a single response is returned.

retry_count (int):

The number of times to retry the request in case of failure.

request_timeout (float):

The maximum time (in seconds) to wait for a response from the model.

backoff_factor (float):

A factor to increase the waiting time between retry attempts.

kwargs (Any):

Additional keyword arguments that can be passed to customize the request.

Additional parameters like temperature will vary depending on the model, please refer to the API documentation. The additional parameters can be passed as follows:

`Completion().ado(prompt=..., temperature=0.2, top_p=0.5)`

batch_do(prompt_list: List[str], worker_num: Optional[int] = None, **kwargs: Any) BatchRequestFuture[source]

Generate completions for a batch of user-provided prompts.

Parameters:
prompt_list (List[str]):

The list of input prompts to generate continuations from.

worker_num (Optional[int]):

The number of prompts to process concurrently. Defaults to None, in which case the number is decided dynamically.

kwargs (Any):

Please refer to Completion.do for other parameters such as model, endpoint, retry_count, etc.

```
import time

response_list = Completion().batch_do(["...", "..."], worker_num=10)
for response in response_list:
    # result() returns a QfResponse on success, or raises the exception
    print(response.result())

# or poll until every task finishes
while response_list.finished_count() != response_list.task_count():
    time.sleep(1)
print(response_list.results())
```

do(prompt: str, model: Optional[str] = None, endpoint: Optional[str] = None, stream: bool = False, retry_count: int = 1, request_timeout: float = 60, request_id: Optional[str] = None, backoff_factor: float = 1, **kwargs: Any) Union[QfResponse, Iterator[QfResponse]][source]

Generate a completion based on the user-provided prompt.

Parameters:
prompt (str):

The input prompt to generate the continuation from.

model (Optional[str]):

The name or identifier of the language model to use. If not specified, the default model (ERNIE-Bot-turbo) is used.

endpoint (Optional[str]):

The endpoint for making API requests. If not provided, the default endpoint is used.

stream (bool):

If set to True, the responses are streamed back as an iterator. If False, a single response is returned.

retry_count (int):

The number of times to retry the request in case of failure.

request_timeout (float):

The maximum time (in seconds) to wait for a response from the model.

backoff_factor (float):

A factor to increase the waiting time between retry attempts.

kwargs (Any):

Additional keyword arguments that can be passed to customize the request.

Additional parameters like temperature will vary depending on the model, please refer to the API documentation. The additional parameters can be passed as follows:

`Completion().do(prompt=..., temperature=0.2, top_p=0.5)`

class qianfan.Embedding(model: Optional[str] = None, endpoint: Optional[str] = None, **kwargs: Any)[source]

Bases: BaseResource

QianFan Embedding is an agent for calling the QianFan Embedding API.

async abatch_do(texts_list: List[List[str]], worker_num: Optional[int] = None, **kwargs: Any) List[Union[QfResponse, AsyncIterator[QfResponse]]][source]

Asynchronously generate embeddings for a batch of input text lists using a specified model.

Parameters:
texts_list (List[List[str]]):

List of input text lists for which to generate embeddings.

worker_num (Optional[int]):

The number of prompts to process concurrently. Defaults to None, in which case the number is decided dynamically.

kwargs (Any):

Please refer to Embedding.ado for other parameters such as model, endpoint, retry_count, etc.

```
response_list = await Embedding().abatch_do([...], worker_num=10)
for response in response_list:
    # response is a QfResponse if the call succeeded, otherwise an exception
    print(response)
```

async ado(texts: List[str], model: Optional[str] = None, endpoint: Optional[str] = None, stream: bool = False, retry_count: int = 1, request_timeout: float = 60, request_id: Optional[str] = None, backoff_factor: float = 1, **kwargs: Any) Union[QfResponse, AsyncIterator[QfResponse]][source]

Asynchronously generate embeddings for a list of input texts using a specified model.

Parameters:
texts (List[str]):

A list of input texts for which embeddings need to be generated.

model (Optional[str]):

The name or identifier of the language model to use. If not specified, the default model (ERNIE-Bot-turbo) is used.

endpoint (Optional[str]):

The endpoint for making API requests. If not provided, the default endpoint is used.

stream (bool):

If set to True, the responses are streamed back as an iterator. If False, a single response is returned.

retry_count (int):

The number of times to retry the request in case of failure.

request_timeout (float):

The maximum time (in seconds) to wait for a response from the model.

backoff_factor (float):

A factor to increase the waiting time between retry attempts.

kwargs (Any):

Additional keyword arguments that can be passed to customize the request.

Additional parameters like temperature will vary depending on the model, please refer to the API documentation. The additional parameters can be passed as follows:

`Embedding().ado(texts=..., temperature=0.2, top_p=0.5)`

batch_do(texts_list: List[List[str]], worker_num: Optional[int] = None, **kwargs: Any) BatchRequestFuture[source]

Generate embeddings for a batch of input text lists using a specified model.

Parameters:
texts_list (List[List[str]]):

List of input text lists for which to generate embeddings.

worker_num (Optional[int]):

The number of prompts to process concurrently. Defaults to None, in which case the number is decided dynamically.

kwargs (Any):

Please refer to Embedding.do for other parameters such as model, endpoint, retry_count, etc.

```
import time

response_list = Embedding().batch_do([["..."], ["..."]], worker_num=10)
for response in response_list:
    # result() returns a QfResponse on success, or raises the exception
    print(response.result())

# or poll until every task finishes
while response_list.finished_count() != response_list.task_count():
    time.sleep(1)
print(response_list.results())
```

do(texts: List[str], model: Optional[str] = None, endpoint: Optional[str] = None, stream: bool = False, retry_count: int = 1, request_timeout: float = 60, request_id: Optional[str] = None, backoff_factor: float = 1, **kwargs: Any) Union[QfResponse, Iterator[QfResponse]][source]

Generate embeddings for a list of input texts using a specified model.

Parameters:
texts (List[str]):

A list of input texts for which embeddings need to be generated.

model (Optional[str]):

The name or identifier of the language model to use. If not specified, the default model (ERNIE-Bot-turbo) is used.

endpoint (Optional[str]):

The endpoint for making API requests. If not provided, the default endpoint is used.

stream (bool):

If set to True, the responses are streamed back as an iterator. If False, a single response is returned.

retry_count (int):

The number of times to retry the request in case of failure.

request_timeout (float):

The maximum time (in seconds) to wait for a response from the model.

backoff_factor (float):

A factor to increase the waiting time between retry attempts.

kwargs (Any):

Additional keyword arguments that can be passed to customize the request.

Additional parameters like temperature will vary depending on the model, please refer to the API documentation. The additional parameters can be passed as follows:

`Embedding().do(texts=..., temperature=0.2, top_p=0.5)`

class qianfan.Image2Text(model: Optional[str] = None, endpoint: Optional[str] = None, **kwargs: Any)[source]

Bases: BaseResource

QianFan Image2Text API Resource

async abatch_do(input_list: List[Tuple[str, str]], worker_num: Optional[int] = None, **kwargs: Any) List[Union[QfResponse, AsyncIterator[QfResponse]]][source]

Asynchronously execute an image2text action on each of the provided inputs and generate responses.

Parameters:
input_list (List[Tuple[str, str]]):

A list of (prompt, base64-encoded image) pairs for which responses are generated.

worker_num (Optional[int]):

The number of prompts to process concurrently. Defaults to None, in which case the number is decided dynamically.

kwargs (Any):

Please refer to Image2Text.ado for other parameters such as model, endpoint, retry_count, etc.

```
response_list = await Image2Text(endpoint="").abatch_do(
    [("...", "..."), ("...", "...")], worker_num=10
)
for response in response_list:
    # response is a QfResponse if the call succeeded, otherwise an exception
    print(response)
```

async ado(prompt: str, image: str, model: Optional[str] = None, endpoint: Optional[str] = None, stream: bool = False, retry_count: int = 1, request_timeout: float = 60, request_id: Optional[str] = None, backoff_factor: float = 0, **kwargs: Any) Union[QfResponse, AsyncIterator[QfResponse]][source]

Asynchronously execute an image2text action on the provided prompt and image, and generate responses.

Parameters:
prompt (str):

The user input or prompt for which a response is generated.

image (str):

The user input base64 encoded image data for which a response is generated.

model (Optional[str]):

The name or identifier of the language model to use.

endpoint (Optional[str]):

The endpoint for making API requests. If not provided, the default endpoint is used.

stream (bool):

Whether to stream responses or not.

retry_count (int):

The number of times to retry the request in case of failure.

request_timeout (float):

The maximum time (in seconds) to wait for a response from the model.

backoff_factor (float):

A factor to increase the waiting time between retry attempts.

kwargs (Any):

Additional keyword arguments that can be passed to customize the request.

Additional parameters like temperature will vary depending on the model, please refer to the API documentation. The additional parameters can be passed as follows:

` Image2Text(endpoint="").ado(prompt="", image="", xx=vv) `

batch_do(input_list: List[Tuple[str, str]], worker_num: Optional[int] = None, **kwargs: Any) BatchRequestFuture[source]

Execute an image2text action on each of the provided inputs and generate responses.

Parameters:
input_list (List[Tuple[str, str]]):

A list of (prompt, base64-encoded image) pairs for which responses are generated.

worker_num (Optional[int]):

The number of prompts to process concurrently. Defaults to None, in which case the number is decided dynamically.

kwargs (Any):

Please refer to Image2Text.do for other parameters such as model, endpoint, retry_count, etc.

```
import time

response_list = Image2Text(endpoint="").batch_do(
    [("...", "..."), ("...", "...")], worker_num=10
)
for response in response_list:
    # result() returns a QfResponse on success, or raises the exception
    print(response.result())

# or poll until every task finishes
while response_list.finished_count() != response_list.task_count():
    time.sleep(1)
print(response_list.results())
```

do(prompt: str, image: str, model: Optional[str] = None, endpoint: Optional[str] = None, stream: bool = False, retry_count: int = 1, request_timeout: float = 60, request_id: Optional[str] = None, backoff_factor: float = 0, **kwargs: Any) Union[QfResponse, Iterator[QfResponse]][source]

Execute an image2text action on the provided prompt and image, and generate responses.

Parameters:
prompt (str):

The user input or prompt for which a response is generated.

image (str):

The user input base64 encoded image data for which a response is generated.

model (Optional[str]):

The name or identifier of the language model to use.

endpoint (Optional[str]):

The endpoint for making API requests. If not provided, the default endpoint is used.

stream (bool):

Whether to stream responses or not.

retry_count (int):

The number of times to retry the request in case of failure.

request_timeout (float):

The maximum time (in seconds) to wait for a response from the model.

backoff_factor (float):

A factor to increase the waiting time between retry attempts.

kwargs (Any):

Additional keyword arguments that can be passed to customize the request.

Additional parameters like temperature will vary depending on the model, please refer to the API documentation. The additional parameters can be passed as follows:

` Image2Text(endpoint="").do(prompt="", image="", xxx=vvv) `
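The image argument expects base64-encoded data, as noted above. Preparing it from raw bytes is a one-liner with the standard library:

```python
import base64

def encode_image(data: bytes) -> str:
    # Produce the base64 string expected by the `image` parameter.
    return base64.b64encode(data).decode("ascii")

print(encode_image(b"abc"))  # YWJj
```

In practice the bytes would come from reading an image file, e.g. `open("photo.png", "rb").read()`.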

qianfan.Messages

alias of QfMessages

class qianfan.Plugin(model: str = 'EBPlugin', endpoint: Optional[str] = None, **kwargs: Any)[source]

Bases: BaseResource

QianFan Plugin API Resource

async abatch_do(query_list: List[Union[str, QfMessages, List[Dict]]], worker_num: Optional[int] = None, **kwargs: Any) List[Union[QfResponse, AsyncIterator[QfResponse]]][source]

Asynchronously execute a plugin action on each of the provided inputs and generate responses.

Parameters:
query_list (List[Union[str, QfMessages, List[Dict]]]):

A list of user inputs (messages or prompts) for which responses are generated.

worker_num (Optional[int]):

The number of prompts to process concurrently. Defaults to None, in which case the number is decided dynamically.

kwargs (Any):

Please refer to Plugin.ado for other parameters such as model, endpoint, retry_count, etc.

```
response_list = await Plugin().abatch_do([...], worker_num=10)
for response in response_list:
    # response is a QfResponse if the call succeeded, otherwise an exception
    print(response)
```

async ado(query: Union[str, QfMessages, List[Dict]], plugins: Optional[List[str]] = None, model: Optional[str] = None, endpoint: Optional[str] = None, stream: bool = False, retry_count: int = 1, request_timeout: float = 60, request_id: Optional[str] = None, backoff_factor: float = 1, **kwargs: Any) Union[QfResponse, AsyncIterator[QfResponse]][source]

Asynchronously execute a plugin action on the provided input prompt and generate responses.

Parameters:
query (Union[str, QfMessages, List[Dict]]):

The user input for which a response is generated. Concretely, query should be a str for the Qianfan plugin, and either a QfMessages object or a list of dicts for EBPlugin.

plugins (Optional[List[str]]):

A list of plugins to be used.

model (Optional[str]):

The name or identifier of the language model to use. If not specified, the default model (ERNIE-Bot-turbo) is used.

endpoint (Optional[str]):

The endpoint for making API requests. If not provided, the default endpoint is used.

stream (bool):

If set to True, the responses are streamed back as an iterator. If False, a single response is returned.

retry_count (int):

The number of times to retry the request in case of failure.

request_timeout (float):

The maximum time (in seconds) to wait for a response from the model.

backoff_factor (float):

A factor to increase the waiting time between retry attempts.

kwargs (Any):

Additional keyword arguments that can be passed to customize the request.

Additional parameters like temperature will vary depending on the model, please refer to the API documentation. The additional parameters can be passed as follows:

`Plugin().ado(query=..., temperature=0.2, top_p=0.5)`

batch_do(query_list: List[Union[str, QfMessages, List[Dict]]], worker_num: Optional[int] = None, **kwargs: Any) BatchRequestFuture[source]

Execute a plugin action on each of the provided inputs and generate responses.

Parameters:
query_list (List[Union[str, QfMessages, List[Dict]]]):

A list of user inputs (messages or prompts) for which responses are generated.

worker_num (Optional[int]):

The number of prompts to process concurrently. Defaults to None, in which case the number is decided dynamically.

kwargs (Any):

Please refer to Plugin.do for other parameters such as model, endpoint, retry_count, etc.

```
import time

response_list = Plugin().batch_do(["...", "..."], worker_num=10)
for response in response_list:
    # result() returns a QfResponse on success, or raises the exception
    print(response.result())

# or poll until every task finishes
while response_list.finished_count() != response_list.task_count():
    time.sleep(1)
print(response_list.results())
```

do(query: Union[str, QfMessages, List[Dict]], plugins: Optional[List[str]] = None, model: Optional[str] = None, endpoint: Optional[str] = None, stream: bool = False, retry_count: int = 1, request_timeout: float = 60, request_id: Optional[str] = None, backoff_factor: float = 1, **kwargs: Any) Union[QfResponse, Iterator[QfResponse]][source]

Execute a plugin action on the provided input prompt and generate responses.

Parameters:
query (Union[str, QfMessages, List[Dict]]):

The user input for which a response is generated. Concretely, query should be a str for the Qianfan plugin, and either a QfMessages object or a list of dicts for EBPlugin.

plugins (Optional[List[str]]):

A list of plugins to be used.

model (Optional[str]):

The name or identifier of the language model to use. If not specified, the default model (ERNIE-Bot-turbo) is used.

endpoint (Optional[str]):

The endpoint for making API requests. If not provided, the default endpoint is used.

stream (bool):

If set to True, the responses are streamed back as an iterator. If False, a single response is returned.

retry_count (int):

The number of times to retry the request in case of failure.

request_timeout (float):

The maximum time (in seconds) to wait for a response from the model.

backoff_factor (float):

A factor to increase the waiting time between retry attempts.

kwargs (Any):

Additional keyword arguments that can be passed to customize the request.

Additional parameters like temperature will vary depending on the model, please refer to the API documentation. The additional parameters can be passed as follows:

`Plugin().do(query=..., temperature=0.2, top_p=0.5)`

class qianfan.QfMessages[source]

Bases: object

An auxiliary class for representing a list of messages in a chat model.

Example usage:

```
messages = QfMessages()
# append a message by str
messages.append("Hello!")
# send the messages directly
resp = qianfan.ChatCompletion().do(messages=messages)
# append the response to the messages and continue the conversation
messages.append(resp)
messages.append("next message", role=QfRole.User)  # role is optional
```
append(message: Union[str, QfResponse], role: Optional[Union[str, QfRole]] = None) None[source]

Appends a message to the QfMessages object.

Parameters:
message (Union[str, QfResponse]):

The message to be appended. It can be a string or a QfResponse object. When a QfResponse object is given, the role of the message sender defaults to QfRole.Assistant, unless you specify the role via the 'role' parameter.

role (Optional[Union[str, QfRole]]):

An optional parameter to specify the role of the message sender. If not provided, the role is determined based on the existing messages.

Example usage can be found in the introduction of this class.
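The role inference described for append can be sketched as a simple alternation rule (an illustration of the documented behavior, not the SDK's actual code):

```python
def infer_role(messages):
    # After an assistant message (or an empty history), the next message
    # comes from the user; after a user message, from the assistant.
    if not messages or messages[-1]["role"] == "assistant":
        return "user"
    return "assistant"

print(infer_role([{"role": "user", "content": "Hello!"}]))  # assistant
```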

class qianfan.QfResponse(code: int, headers: Dict[str, str] = &lt;factory&gt;, body: Dict[str, Any] = &lt;factory&gt;, statistic: Dict[str, Any] = &lt;factory&gt;, request: Optional[QfRequest] = &lt;factory&gt;)[source]

Bases: Mapping

Response from Qianfan API

body: Dict[str, Any]

The JSON-formatted body of the response.

code: int

The HTTP status code of the response.

headers: Dict[str, str]

A dictionary of HTTP headers included in the response.

request: Optional[QfRequest]

Original request

statistic: Dict[str, Any]

Statistics about the request, with the following keys:

request_latency: request elapsed time in seconds, or the elapsed time of receiving each response if stream=True.

first_token_latency: first-token elapsed time in seconds; only present in streaming calls.

total_latency: total elapsed time in seconds, including the request, serialization, and the waiting time if rate_limit is set.

class qianfan.QfRole(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]

Bases: Enum

Role type supported in Qianfan

Assistant = 'assistant'
Function = 'function'
User = 'user'
qianfan.Response

alias of QfResponse

qianfan.Role

alias of QfRole

qianfan.SK(sk: str) None[source]

Set the Secret Key (SK) for LLM API authentication. The Secret Key is paired with the API Key.

This function allows you to set the Secret Key that will be used for authentication throughout the entire SDK. The Secret Key can be acquired from the qianfan console: https://console.bce.baidu.com/qianfan/ais/console/applicationConsole/application

Parameters:
sk (str):

The Secret Key to be set for LLM API authentication.

qianfan.SecretKey(secret_key: str) None[source]

Set the Secret Key for console API authentication. The Secret Key is paired with the Access Key.

This function allows you to set the Secret Key that will be used for authentication throughout the entire SDK. The Secret Key can be acquired from the Baidu BCE console: https://console.bce.baidu.com/iam/#/iam/accesslist

Parameters:
secret_key (str):

The Secret Key to be set for console API authentication.

class qianfan.Text2Image(model: Optional[str] = None, endpoint: Optional[str] = None, **kwargs: Any)[source]

Bases: BaseResource

QianFan Text2Image API Resource

async abatch_do(prompt_list: List[str], worker_num: Optional[int] = None, **kwargs: Any) List[Union[QfResponse, AsyncIterator[QfResponse]]][source]

Asynchronously execute a text2image action on each of the provided prompts and generate responses.

Parameters:
prompt_list (List[str]):

A list of user input prompts for which responses are generated.

worker_num (Optional[int]):

The number of prompts to process concurrently. Defaults to None, in which case the number is decided dynamically.

kwargs (Any):

Please refer to Text2Image.ado for other parameters such as model, endpoint, retry_count, etc.

```
response_list = await Text2Image().abatch_do([...], worker_num=10)
for response in response_list:
    # response is a QfResponse if the call succeeded, otherwise an exception
    print(response)
```

async ado(prompt: str, model: Optional[str] = None, endpoint: Optional[str] = None, with_decode: Optional[str] = None, retry_count: int = 1, request_timeout: float = 60, request_id: Optional[str] = None, backoff_factor: float = 0, **kwargs: Any) Union[QfResponse, AsyncIterator[QfResponse]][source]

Asynchronously execute a text2image action on the provided input prompt and generate responses.

Parameters:
prompt (str):

The user input or prompt for which a response is generated.

model (Optional[str]):

The name or identifier of the language model to use. If not specified, the default model (Stable-Diffusion-XL) is used.

endpoint (Optional[str]):

The endpoint for making API requests. If not provided, the default endpoint is used.

with_decode(Optional[str]):

The way to decode the returned data. If not provided, no decoding is applied. Use "base64" to automatically decode the data field.

retry_count (int):

The number of times to retry the request in case of failure.

request_timeout (float):

The maximum time (in seconds) to wait for a response from the model.

backoff_factor (float):

A factor to increase the waiting time between retry attempts.

kwargs (Any):

Additional keyword arguments that can be passed to customize the request.

Additional parameters such as temperature vary depending on the model; please refer to the API documentation. They can be passed as follows:

`Text2Image().do(prompt=..., steps=20)`

batch_do(prompt_list: List[str], worker_num: Optional[int] = None, **kwargs: Any) BatchRequestFuture[source]

Batch execute text2image actions on the provided input prompts and generate responses.

Parameters:
prompt_list (List[str]):

The list of user inputs or prompts for which responses are generated.

worker_num (Optional[int]):

The number of prompts to process concurrently. Defaults to None, which means the number will be decided dynamically.

kwargs (Any):

Please refer to Text2Image.do for other parameters such as model, endpoint, retry_count, etc.

```
response_list = Text2Image().batch_do(["...", "..."], worker_num=10)
for response in response_list:
    # result() returns a QfResponse if the request succeeded, or raises the exception
    print(response.result())

# or poll until all tasks finish
while response_list.finished_count() != response_list.task_count():
    time.sleep(1)
print(response_list.results())
```

do(prompt: str, model: Optional[str] = None, endpoint: Optional[str] = None, with_decode: Optional[str] = None, retry_count: int = 1, request_timeout: float = 60, request_id: Optional[str] = None, backoff_factor: float = 0, **kwargs: Any) Union[QfResponse, Iterator[QfResponse]][source]

Execute a text2image action on the provided input prompt and generate responses.

Parameters:
prompt (str):

The user input or prompt for which a response is generated.

model (Optional[str]):

The name or identifier of the language model to use. If not specified, the default model (Stable-Diffusion-XL) is used.

endpoint (Optional[str]):

The endpoint for making API requests. If not provided, the default endpoint is used.

with_decode(Optional[str]):

The way to decode the returned data. If not provided, no decoding is applied. Use "base64" to automatically decode the data field.

retry_count (int):

The number of times to retry the request in case of failure.

request_timeout (float):

The maximum time (in seconds) to wait for a response from the model.

backoff_factor (float):

A factor to increase the waiting time between retry attempts.

kwargs (Any):

Additional keyword arguments that can be passed to customize the request.

Additional parameters such as temperature vary depending on the model; please refer to the API documentation. They can be passed as follows:

`Text2Image().do(prompt=..., steps=20)`
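The retry parameters above interact in a typical exponential-backoff pattern; the sketch below illustrates that pattern under the assumption wait = backoff_factor * 2**attempt with a cap, not the SDK's exact schedule:

```python
def backoff_wait(attempt: int, backoff_factor: float, max_wait: float = 120.0) -> float:
    # Wait time before the given retry attempt (0-based), assuming the common
    # exponential scheme and a cap like LLM_API_RETRY_MAX_WAIT_INTERVAL.
    return min(backoff_factor * (2 ** attempt), max_wait)

# With backoff_factor=1.0, successive retries wait 1s, 2s, 4s, ...
waits = [backoff_wait(i, backoff_factor=1.0) for i in range(3)]  # [1.0, 2.0, 4.0]
```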

class qianfan.Tokenizer[source]

Bases: object

Class for Tokenizer API

classmethod count_tokens(text: str, mode: Literal['local', 'remote'] = 'local', model: str = 'ERNIE-Bot', **kwargs: Any) int[source]

Count the number of tokens in a given text.

Parameters:
text (str):

The input text for which tokens need to be counted.

mode (str, optional):
local (default):

a local approximation (Chinese character count + English word count * 1.3)

remote:

use the qianfan API to calculate the token count. The API returns an accurate count, but only ERNIE-Bot series models are supported.

model (str, optional):

The name of the model to be used for token counting, which may influence the counting strategy. Default is ‘ERNIE-Bot’.

kwargs (Any):

Additional keyword arguments that can be passed to customize the request.
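The documented local heuristic (Chinese character count + English word count * 1.3) can be sketched as follows; the regexes are illustrative assumptions, not the SDK's exact tokenization:

```python
import re

def approx_token_count(text: str) -> int:
    # Count CJK Unified Ideographs as one token each, and weight each run of
    # ASCII letters (an "English word") by 1.3, per the documented formula.
    chinese_chars = len(re.findall(r"[\u4e00-\u9fff]", text))
    english_words = len(re.findall(r"[A-Za-z]+", text))
    return int(chinese_chars + english_words * 1.3)

approx_token_count("你好 world")  # 2 Chinese chars + 1 word * 1.3 -> 3
```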

qianfan.disable_log() None[source]

Disables logging.

This function turns off the logging feature, preventing the recording of log messages.

Parameters:

None

qianfan.enable_log(log_level: int = 20) None[source]

Set the logging level for the qianfan sdk.

This function allows you to configure the logging level for the sdk's logging system. The logging level determines the verbosity of log messages that will be recorded. By default, log_level is logging.INFO (20).

Parameters:
log_level (int, optional):

The logging level to set for the application. It controls the granularity of log messages. You can specify one of the following integer values, or a string such as "INFO":

  • logging.CRITICAL (50): Logs only critical messages.

  • logging.ERROR (40): Logs error and critical messages.

  • logging.WARNING (30): Logs warnings, errors, and critical messages.

  • logging.INFO (20): Logs general information, warnings, errors, and critical messages.

  • logging.DEBUG (10): Logs detailed debugging information, in addition to all the above log levels.

Example Usage: To enable detailed debugging, you can call the function like this: enable_log(logging.DEBUG)

To set the logging level to only log errors and critical messages, use: enable_log("ERROR")
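The accepted levels are the standard-library logging levels; the hypothetical helper below (not part of the qianfan SDK) shows how an integer or a name like "INFO" maps to one integer value:

```python
import logging

def resolve_level(level) -> int:
    # Normalize "INFO"/"debug"/20 to a stdlib integer level. Hypothetical
    # helper for illustration only.
    if isinstance(level, str):
        return logging.getLevelName(level.upper())  # e.g. "INFO" -> 20
    return level

resolve_level("DEBUG")  # 10, same as logging.DEBUG
```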

qianfan.get_config() GlobalConfig[source]

Subpackages

Submodules

qianfan.config module

qianfan.config.AK(ak: str) None[source]

Set the API Key (AK) for LLM API authentication.

This function allows you to set the API Key that will be used for authentication throughout the entire SDK. The API Key can be acquired from the qianfan console: https://console.bce.baidu.com/qianfan/ais/console/applicationConsole/application

Parameters:
ak (str):

The API Key to be set for LLM API authentication.

qianfan.config.AccessKey(access_key: str) None[source]

Set the Access Key for console api authentication.

This function allows you to set the Access Key that will be used for authentication throughout the entire SDK. The Access Key can be acquired from the baidu bce console: https://console.bce.baidu.com/iam/#/iam/accesslist

Parameters:
access_key (str):

The Access Key to be set for console API authentication.

qianfan.config.AccessToken(access_token: str) None[source]

Set the access token for LLM API authentication.

This function allows you to set the access token that will be used for authentication throughout the entire SDK. The access token can be generated from API key and secret key according to the instructions at https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Ilkkrb0i5.

This function is only needed when you only have an access token. If you have both an API key and a secret key, the SDK will automatically refresh the access token for you.

Parameters:
access_token (str):

The access token to be set for LLM API authentication.

class qianfan.config.GlobalConfig(_env_file: Optional[Union[str, PathLike, List[Union[str, PathLike]], Tuple[Union[str, PathLike], ...]]] = '<object object>', _env_file_encoding: Optional[str] = None, _env_nested_delimiter: Optional[str] = None, _secrets_dir: Optional[Union[str, PathLike]] = None, *, AK: Optional[str] = None, SK: Optional[str] = None, ACCESS_KEY: Optional[str] = None, SECRET_KEY: Optional[str] = None, ACCESS_TOKEN: Optional[str] = None, BASE_URL: str = 'https://aip.baidubce.com', AUTH_TIMEOUT: float = 5, DISABLE_EB_SDK: bool = True, EB_SDK_INSTALLED: bool = False, IAM_SIGN_EXPIRATION_SEC: int = 300, CONSOLE_API_BASE_URL: str = 'https://qianfan.baidubce.com', ACCESS_TOKEN_REFRESH_MIN_INTERVAL: float = 3600, QPS_LIMIT: float = 0, APPID: Optional[int] = None, ENABLE_PRIVATE: bool = False, ENABLE_AUTH: Optional[bool] = None, ACCESS_CODE: Optional[str] = None, IMPORT_STATUS_POLLING_INTERVAL: float = 2, EXPORT_STATUS_POLLING_INTERVAL: float = 2, RELEASE_STATUS_POLLING_INTERVAL: float = 2, EXPORT_FILE_SIZE_LIMIT: int = 2147483648, ETL_STATUS_POLLING_INTERVAL: float = 2, GET_ENTITY_CONTENT_FAILED_RETRY_TIMES: int = 3, TRAIN_STATUS_POLLING_INTERVAL: float = 30, TRAINER_STATUS_POLLING_BACKOFF_FACTOR: float = 3, TRAINER_STATUS_POLLING_RETRY_TIMES: float = 3, MODEL_PUBLISH_STATUS_POLLING_INTERVAL: float = 30, BATCH_RUN_STATUS_POLLING_INTERVAL: float = 30, DEPLOY_STATUS_POLLING_INTERVAL: float = 30, DEFAULT_FINE_TUNE_TRAIN_TYPE: str = 'ERNIE-Bot-turbo-0725', LLM_API_RETRY_COUNT: int = 1, LLM_API_RETRY_TIMEOUT: int = 60, LLM_API_RETRY_BACKOFF_FACTOR: float = 1, LLM_API_RETRY_JITTER: float = 1, LLM_API_RETRY_MAX_WAIT_INTERVAL: float = 120, LLM_API_RETRY_ERR_CODES: set = {18, 336100}, CONSOLE_API_RETRY_COUNT: int = 1, CONSOLE_API_RETRY_TIMEOUT: int = 60, CONSOLE_API_RETRY_JITTER: float = 1, CONSOLE_API_RETRY_MAX_WAIT_INTERVAL: float = 120, CONSOLE_API_RETRY_ERR_CODES: set = {18, 336100, 500000}, CONSOLE_API_RETRY_BACKOFF_FACTOR: int = 0, 
EVALUATION_ONLINE_POLLING_INTERVAL: float = 30, BOS_HOST_REGION: str = 'bj', SSL_VERIFICATION_ENABLED: bool = True, PROXY: str = '', FILE_ENCODING: str = 'utf-8')[source]

Bases: BaseSettings

The global config of the whole qianfan sdk

ACCESS_CODE: Optional[str]
ACCESS_KEY: Optional[str]
ACCESS_TOKEN: Optional[str]
ACCESS_TOKEN_REFRESH_MIN_INTERVAL: float
AK: Optional[str]
APPID: Optional[int]
AUTH_TIMEOUT: float
BASE_URL: str
BATCH_RUN_STATUS_POLLING_INTERVAL: float
BOS_HOST_REGION: str
CONSOLE_API_BASE_URL: str
CONSOLE_API_RETRY_BACKOFF_FACTOR: int
CONSOLE_API_RETRY_COUNT: int
CONSOLE_API_RETRY_ERR_CODES: set
CONSOLE_API_RETRY_JITTER: float
CONSOLE_API_RETRY_MAX_WAIT_INTERVAL: float
CONSOLE_API_RETRY_TIMEOUT: int
class Config[source]

Bases: object

case_sensitive = True
env_file_encoding = 'utf-8'
env_prefix = 'QIANFAN_'
DEFAULT_FINE_TUNE_TRAIN_TYPE: str
DEPLOY_STATUS_POLLING_INTERVAL: float
DISABLE_EB_SDK: bool
EB_SDK_INSTALLED: bool
ENABLE_AUTH: Optional[bool]
ENABLE_PRIVATE: bool
ETL_STATUS_POLLING_INTERVAL: float
EVALUATION_ONLINE_POLLING_INTERVAL: float
EXPORT_FILE_SIZE_LIMIT: int
EXPORT_STATUS_POLLING_INTERVAL: float
FILE_ENCODING: str
GET_ENTITY_CONTENT_FAILED_RETRY_TIMES: int
IAM_SIGN_EXPIRATION_SEC: int
IMPORT_STATUS_POLLING_INTERVAL: float
LLM_API_RETRY_BACKOFF_FACTOR: float
LLM_API_RETRY_COUNT: int
LLM_API_RETRY_ERR_CODES: set
LLM_API_RETRY_JITTER: float
LLM_API_RETRY_MAX_WAIT_INTERVAL: float
LLM_API_RETRY_TIMEOUT: int
MODEL_PUBLISH_STATUS_POLLING_INTERVAL: float
PROXY: str
QPS_LIMIT: float
RELEASE_STATUS_POLLING_INTERVAL: float
SECRET_KEY: Optional[str]
SK: Optional[str]
SSL_VERIFICATION_ENABLED: bool
TRAINER_STATUS_POLLING_BACKOFF_FACTOR: float
TRAINER_STATUS_POLLING_RETRY_TIMES: float
TRAIN_STATUS_POLLING_INTERVAL: float
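Per the inner Config class (env_prefix = 'QIANFAN_', case_sensitive = True), each field can also be supplied through an environment variable named by prefixing the field name; a small illustration (exactly when the SDK re-reads the environment is an assumption not covered here):

```python
import os

def env_var_for(field: str, prefix: str = "QIANFAN_") -> str:
    # case_sensitive = True: the field name is used verbatim, no case folding.
    return prefix + field

os.environ[env_var_for("QPS_LIMIT")] = "5"  # read as QPS_LIMIT on config load
```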
qianfan.config.SK(sk: str) None[source]

Set the Secret Key (SK) for LLM API authentication. The Secret Key is paired with the API Key.

This function allows you to set the Secret Key that will be used for authentication throughout the entire SDK. The Secret Key can be acquired from the qianfan console: https://console.bce.baidu.com/qianfan/ais/console/applicationConsole/application

Parameters:
sk (str):

The Secret Key to be set for LLM API authentication.

qianfan.config.SecretKey(secret_key: str) None[source]

Set the Secret Key for console API authentication. The Secret Key is paired with the Access Key.

This function allows you to set the Secret Key that will be used for authentication throughout the entire SDK. The Secret Key can be acquired from the Baidu BCE console: https://console.bce.baidu.com/iam/#/iam/accesslist

Parameters:
secret_key (str):

The Secret Key to be set for console API authentication.

qianfan.config.encoding() str[source]

Get the file encoding used in the SDK.

qianfan.config.get_config() GlobalConfig[source]

qianfan.consts module

Constants used in the qianfan sdk

class qianfan.consts.APIErrorCode(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]

Bases: Enum

Error code from API return value

APINameNotExist = 336005
APITokenExpired = 111
APITokenInvalid = 110
AppNotExist = 15
ConsoleInternalError = 500000
DailyLimitReached = 17
GetServiceTokenFailed = 13
InternalError = 336000
InvalidArgument = 336001
InvalidArgumentSystem = 336104
InvalidArgumentUserSetting = 336105
InvalidHTTPMethod = 336101
InvalidJSON = 336002
InvalidParam = 336003
InvalidRequest = 100
NoError = 0
NoPermissionToAccessData = 6
PermissionError = 336004
QPSLimitReached = 18
RequestLimitReached = 4
ServerHighLoad = 336100
ServiceUnavailable = 2
TotalRequestLimitReached = 19
UnknownError = 1
UnsupportedMethod = 3
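Two of these codes are the defaults in LLM_API_RETRY_ERR_CODES ({18, 336100}); an abridged, self-contained mirror of the enum showing the correspondence:

```python
from enum import Enum

class APIErrorCode(Enum):
    # Abridged copy for illustration; the full enum is defined above.
    QPSLimitReached = 18
    ServerHighLoad = 336100

# The default retryable set matches these two transient failures.
retryable = {code.value for code in APIErrorCode}  # {18, 336100}
```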
class qianfan.consts.Consts[source]

Bases: object

Constants used by the qianfan sdk

AppListAPI: str = '/wenxinworkshop/service/appList'
AuthAPI: str = '/oauth/2.0/token'
DatasetAnnotateAPI: str = '/wenxinworkshop/entity/annotate'
DatasetAugListTaskAPI: str = '/wenxinworkshop/enhance/list'
DatasetAugTaskDeleteAPI: str = '/wenxinworkshop/enhance/delete'
DatasetAugTaskInfoAPI: str = '/wenxinworkshop/enhance/detail'
DatasetCreateAPI: str = '/wenxinworkshop/dataset/create'
DatasetCreateAugTaskAPI: str = '/wenxinworkshop/enhance/create'
DatasetCreateETLTaskAPI: str = '/wenxinworkshop/etl/create'
DatasetDeleteAPI: str = '/wenxinworkshop/dataset/delete'
DatasetETLListTaskAPI: str = '/wenxinworkshop/etl/list'
DatasetETLTaskDeleteAPI: str = '/wenxinworkshop/etl/delete'
DatasetETLTaskInfoAPI: str = '/wenxinworkshop/etl/detail'
DatasetEntityDeleteAPI: str = '/wenxinworkshop/entity/delete'
DatasetEntityListAPI: str = '/wenxinworkshop/entity/list'
DatasetExportAPI: str = '/wenxinworkshop/dataset/export'
DatasetExportRecordAPI: str = '/wenxinworkshop/dataset/exportRecord'
DatasetImportAPI: str = '/wenxinworkshop/dataset/import'
DatasetImportErrorDetail: str = '/wenxinworkshop/dataset/importErrorDetail'
DatasetInfoAPI: str = '/wenxinworkshop/dataset/info'
DatasetReleaseAPI: str = '/wenxinworkshop/dataset/release'
DatasetStatusFetchInBatchAPI: str = '/wenxinworkshop/dataset/statusList'
EBTokenizerAPI: str = '/rpc/2.0/ai_custom/v1/wenxinworkshop/tokenizer/erniebot'
FineTuneCreateJobAPI: str = '/wenxinworkshop/finetune/createJob'
FineTuneCreateTaskAPI: str = '/wenxinworkshop/finetune/createTask'
FineTuneGetJobAPI: str = '/wenxinworkshop/finetune/jobDetail'
FineTuneStopJobAPI: str = '/wenxinworkshop/finetune/stopJob'
ModelAPIPrefix: str = '/rpc/2.0/ai_custom/v1/wenxinworkshop'
ModelBatchDeleteAPI: str = '/wenxinworkshop/modelrepo/model/batchDelete'
ModelDetailAPI: str = '/wenxinworkshop/modelrepo/modelDetail'
ModelEvalCreateAPI: str = '/wenxinworkshop/modelrepo/eval/create'
ModelEvalInfoAPI: str = '/wenxinworkshop/modelrepo/eval/detail'
ModelEvalResultAPI: str = '/wenxinworkshop/modelrepo/eval/report'
ModelEvalResultExportAPI: str = '/wenxinworkshop/modelrepo/eval/result/export'
ModelEvalResultExportStatusAPI: str = '/wenxinworkshop/modelrepo/eval/result/export/info'
ModelEvalStopAPI: str = '/wenxinworkshop/modelrepo/eval/cancel'
ModelPresetListAPI: str = '/wenxinworkshop/modelrepo/model/preset/list'
ModelPublishAPI: str = '/wenxinworkshop/modelrepo/publishTrainModel'
ModelUserListAPI: str = '/wenxinworkshop/modelrepo/model/user/list'
ModelVersionBatchDeleteAPI: str = '/wenxinworkshop/modelrepo/model/version/batchDelete'
ModelVersionDetailAPI: str = '/wenxinworkshop/modelrepo/modelVersionDetail'
PromptCreateAPI: str = '/wenxinworkshop/prompt/template/create'
PromptCreateOptimizeTaskAPI: str = '/wenxinworkshop/prompt/singleOptimize/create'
PromptDeleteAPI: str = '/wenxinworkshop/prompt/template/delete'
PromptEvaluationAPI: str = '/wenxinworkshop/prompt/evaluate/predict'
PromptEvaluationSummaryAPI: str = '/wenxinworkshop/prompt/evaluate/summary'
PromptGetOptimizeTaskInfoAPI: str = '/wenxinworkshop/prompt/singleOptimize/info'
PromptInfoAPI: str = '/wenxinworkshop/prompt/template/info'
PromptLabelListAPI: str = '/wenxinworkshop/prompt/label/list'
PromptListAPI: str = '/wenxinworkshop/prompt/template/list'
PromptRenderAPI: str = '/rest/2.0/wenxinworkshop/api/v1/template/info'
PromptUpdateAPI: str = '/wenxinworkshop/prompt/template/update'
QianfanRequestIdDefaultPrefix: str = 'sdk-py-0.2.9'
STREAM_RESPONSE_EVENT_PREFIX: str = 'event: '
STREAM_RESPONSE_PREFIX: str = 'data: '
ServiceCreateAPI: str = '/wenxinworkshop/service/apply'
ServiceDetailAPI: str = '/wenxinworkshop/service/detail'
ServiceListAPI: str = '/wenxinworkshop/service/list'
XRequestID: str = 'Request_id'
XResponseID: str = 'X-Baidu-Request-Id'
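These paths are joined with the configured base URLs to form full request URLs; for example, combining the default BASE_URL with AuthAPI (illustrative string concatenation, not the SDK's actual request code):

```python
base_url = "https://aip.baidubce.com"     # GlobalConfig.BASE_URL default
auth_url = base_url + "/oauth/2.0/token"  # Consts.AuthAPI
# -> "https://aip.baidubce.com/oauth/2.0/token"
```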
class qianfan.consts.DefaultLLMModel[source]

Bases: object

Default LLM models in the qianfan sdk

ChatCompletion = 'ERNIE-Bot-turbo'
Completion = 'ERNIE-Bot-turbo'
Embedding = 'Embedding-V1'
Text2Image = 'Stable-Diffusion-XL'
class qianfan.consts.DefaultValue[source]

Bases: object

Default value used by qianfan sdk

AK: str = ''
AccessCode: str = ''
AccessToken: str = ''
AccessTokenRefreshMinInterval: float = 3600
AuthTimeout: float = 5
BaseURL: str = 'https://aip.baidubce.com'
BatchRunStatusPollingInterval: float = 30
BosHostRegion: str = 'bj'
ConsoleAK: str = ''
ConsoleAPIBaseURL: str = 'https://qianfan.baidubce.com'
ConsoleRetryBackoffFactor: float = 0
ConsoleRetryCount: int = 1
ConsoleRetryErrCodes: Set = {18, 336100, 500000}
ConsoleRetryJitter: int = 1
ConsoleRetryMaxWaitInterval: float = 120
ConsoleRetryTimeout: float = 60
ConsoleSK: str = ''
DefaultFinetuneTrainType: str = 'ERNIE-Bot-turbo-0725'
DeployStatusPollingInterval: float = 30
DisableErnieBotSDK: bool = True
DotEnvConfigFile: str = '.env'
ETLStatusPollingInterval: float = 2
EnablePrivate: bool = False
EvaluationOnlinePollingInterval: float = 30
ExportFileSizeLimit: int = 2147483648
ExportStatusPollingInterval: float = 2
FileEncoding: str = 'utf-8'
GetEntityContentFailedRetryTimes: int = 3
IAMSignExpirationSeconds: int = 300
ImportStatusPollingInterval: float = 2
ModelPublishStatusPollingInterval: float = 30
Proxy: str = ''
QpsLimit: float = 0
ReleaseStatusPollingInterval: float = 2
RetryBackoffFactor: float = 1
RetryCount: int = 1
RetryErrCodes: Set = {18, 336100}
RetryJitter: float = 1
RetryMaxWaitInterval: float = 120
RetryTimeout: float = 60
SK: str = ''
SSLVerificationEnabled: bool = True
TrainStatusPollingInterval: float = 30
TrainerStatusPollingBackoffFactor: float = 3
TrainerStatusPollingRetryTimes: float = 3
TruncatedContinuePrompt = '继续'
class qianfan.consts.Env[source]

Bases: object

Environment variable names used by the qianfan sdk

AK: str = 'QIANFAN_AK'
AccessCode: str = 'QIANFAN_PRIVATE_ACCESS_CODE'
AccessKey: str = 'QIANFAN_ACCESS_KEY'
AccessToken: str = 'QIANFAN_ACCESS_TOKEN'
AccessTokenRefreshMinInterval: str = 'QIANFAN_ACCESS_TOKEN_REFRESH_MIN_INTERVAL'
AuthTimeout: str = 'QIANFAN_AUTH_TIMEOUT'
BaseURL: str = 'QIANFAN_BASE_URL'
ConsoleAPIBaseURL: str = 'QIANFAN_CONSOLE_API_BASE_URL'
ConsoleRetryBackoffFactor: str = 'QIANFAN_CONSOLE_API_RETRY_BACKOFF_FACTOR'
ConsoleRetryCount: str = 'QIANFAN_CONSOLE_API_RETRY_COUNT'
ConsoleRetryTimeout: str = 'QIANFAN_CONSOLE_API_RETRY_TIMEOUT'
DisableErnieBotSDK: str = 'QIANFAN_DISABLE_EB_SDK'
DotEnvConfigFile: str = 'QIANFAN_DOT_ENV_CONFIG_FILE'
ETLStatusPollingInterval: str = 'QIANFAN_ETL_STATUS_POLLING_INTERVAL'
EnablePrivate: str = 'QIANFAN_ENABLE_PRIVATE'
ExportFileSizeLimit: str = 'QIANFAN_EXPORT_FILE_SIZE_LIMIT'
ExportStatusPollingInterval: str = 'QIANFAN_EXPORT_STATUS_POLLING_INTERVAL'
FileEncoding: str = 'QIANFAN_FILE_ENCODING'
GetEntityContentFailedRetryTimes: str = 'QIANFAN_GET_ENTITY_CONTENT_FAILED_RETRY_TIMES'
IAMSignExpirationSeconds: str = 'QIANFAN_IAM_SIGN_EXPIRATION_SEC'
ImportStatusPollingInterval: str = 'QIANFAN_IMPORT_STATUS_POLLING_INTERVAL'
Proxy: str = 'QIANFAN_PROXY'
QpsLimit: str = 'QIANFAN_QPS_LIMIT'
ReleaseStatusPollingInterval: str = 'QIANFAN_RELEASE_STATUS_POLLING_INTERVAL'
RetryBackoffFactor: str = 'QIANFAN_LLM_API_RETRY_BACKOFF_FACTOR'
RetryCount: str = 'QIANFAN_LLM_API_RETRY_COUNT'
RetryTimeout: str = 'QIANFAN_LLM_API_RETRY_TIMEOUT'
SK: str = 'QIANFAN_SK'
SSLVerificationEnabled: str = 'QIANFAN_SSL_VERIFICATION_ENABLED'
SecretKey: str = 'QIANFAN_SECRET_KEY'
class qianfan.consts.PromptFrameworkType(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]

Bases: int, Enum

Basic: int = 1

Basic framework

CRISPE: int = 2

CRISPE framework

Fewshot: int = 3

Few-shot framework

NotUse: int = 0

No framework

class qianfan.consts.PromptSceneType(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]

Bases: int, Enum

Text2Image: int = 2

Text-to-image

Text2Text: int = 1

Text-to-text

class qianfan.consts.PromptScoreStandard(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]

Bases: int, Enum

Exact = 3

Exact match

Regex = 2

Regex match

Semantic = 1

Semantic similarity

class qianfan.consts.PromptType(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]

Bases: int, Enum

Preset = 1

Preset template

User = 2

User-created template

qianfan.errors module

The collection of errors for this library

exception qianfan.errors.APIError(error_code: int, error_msg: str, req_id: Any)[source]

Bases: QianfanError

Base exception class for qianfan API errors

exception qianfan.errors.AccessTokenExpiredError[source]

Bases: QianfanError

Exception when access token is expired

exception qianfan.errors.ArgumentNotFoundError[source]

Bases: QianfanError

Exception when the argument is not found

exception qianfan.errors.FileSizeOverflow[source]

Bases: Exception

Exception when the zip file is too large

exception qianfan.errors.InternalError[source]

Bases: QianfanError

Exception when internal error occurs

exception qianfan.errors.InvalidArgumentError[source]

Bases: QianfanError

Exception when the argument is invalid

exception qianfan.errors.NotImplmentError[source]

Bases: QianfanError

Exception raised when a code path is not implemented.

exception qianfan.errors.QianfanError[source]

Bases: Exception

Base exception class for the qianfan sdk.

exception qianfan.errors.QianfanRequestError[source]

Bases: Exception

Exception when a request to qianfan fails

exception qianfan.errors.RequestError[source]

Bases: QianfanError

Exception when an API request fails

exception qianfan.errors.RequestTimeoutError[source]

Bases: QianfanError

Exception when an API request times out

exception qianfan.errors.ValidationError[source]

Bases: Exception

Exception when validation fails
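Because most of these exceptions derive from QianfanError, callers can catch the base class to handle any SDK error in one place; an abridged, self-contained mirror of the hierarchy:

```python
class QianfanError(Exception):
    """Base exception class for the qianfan sdk (abridged mirror)."""

class RequestTimeoutError(QianfanError):
    """Raised when an API request times out (abridged mirror)."""

try:
    raise RequestTimeoutError("request timed out")
except QianfanError as exc:  # the base class catches any SDK error
    caught = str(exc)
```

Note that a few exceptions above (e.g. FileSizeOverflow, ValidationError) derive directly from Exception, so a QianfanError handler will not catch them.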

qianfan.version module

version specification