qianfan package
A library that helps developers interact with LLM services.
- qianfan.AK(ak: str) None [source]
Set the API Key (AK) for LLM API authentication.
This function allows you to set the API Key that will be used for authentication throughout the entire SDK. The API Key can be acquired from the qianfan console: https://console.bce.baidu.com/qianfan/ais/console/applicationConsole/application
- Parameters:
- ak (str):
The API Key to be set for LLM API authentication.
- qianfan.AccessKey(access_key: str) None [source]
Set the Access Key for console API authentication.
This function allows you to set the Access Key that will be used for authentication throughout the entire SDK. The Access Key can be acquired from the Baidu BCE console: https://console.bce.baidu.com/iam/#/iam/accesslist
- Parameters:
- access_key (str):
The Access Key to be set for console API authentication.
- qianfan.AccessToken(access_token: str) None [source]
Set the access token for LLM API authentication.
This function allows you to set the access token that will be used for authentication throughout the entire SDK. The access token can be generated from API key and secret key according to the instructions at https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Ilkkrb0i5.
This function is only needed when you have only an access token. If you have both the API key and the secret key, the SDK will automatically refresh the access token for you.
- Parameters:
- access_token (str):
The access token to be set for LLM API authentication.
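For local development, credentials can be supplied once at process start. The sketch below is a minimal setup; the environment variable names QIANFAN_AK / QIANFAN_SK are an assumption about the SDK's conventions, while the qianfan.AK / qianfan.SK calls are the functions documented here:

```python
import os

# Assumed environment-variable convention; the documented alternative is
# calling qianfan.AK() / qianfan.SK() explicitly (see below).
os.environ["QIANFAN_AK"] = "your_api_key"     # placeholder value
os.environ["QIANFAN_SK"] = "your_secret_key"  # placeholder value

# Equivalent explicit calls using the functions documented above:
# import qianfan
# qianfan.AK("your_api_key")
# qianfan.SK("your_secret_key")
```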
- class qianfan.ChatCompletion(model: Optional[str] = None, endpoint: Optional[str] = None, **kwargs: Any)[source]
Bases:
BaseResource
QianFan ChatCompletion is an agent for calling QianFan ChatCompletion API.
- async abatch_do(messages_list: List[Union[List[Dict], QfMessages]], worker_num: Optional[int] = None, **kwargs: Any) List[Union[QfResponse, AsyncIterator[QfResponse]]] [source]
Async batch perform chat-based language generation using user-supplied messages.
- Parameters:
- messages_list (List[Union[List[Dict], QfMessages]]):
List of message lists, one per conversation. Please refer to ChatCompletion.do for more information about each messages list.
- worker_num (Optional[int]):
The number of prompts to process concurrently. Defaults to None, in which case the number is decided dynamically.
- kwargs (Any):
Please refer to ChatCompletion.do for other parameters such as model, endpoint, retry_count, etc.
```
response_list = await ChatCompletion().abatch_do([...], worker_num=10)
for response in response_list:
    # response is a QfResponse if the request succeeded, otherwise an exception
    print(response)
```
- async ado(messages: Union[List[Dict], QfMessages], model: Optional[str] = None, endpoint: Optional[str] = None, stream: bool = False, retry_count: int = 1, request_timeout: float = 60, request_id: Optional[str] = None, backoff_factor: float = 1, auto_concat_truncate: bool = False, truncated_continue_prompt: str = '继续', **kwargs: Any) Union[QfResponse, AsyncIterator[QfResponse]] [source]
Async perform chat-based language generation using user-supplied messages.
- Parameters:
- messages (Union[List[Dict], QfMessages]):
A list of messages in the conversation, including any system message. Each message should be a dictionary containing ‘role’ and ‘content’ keys, representing the role (either ‘user’ or ‘assistant’) and the content of the message, respectively. Alternatively, you can provide a QfMessages object for convenience.
- model (Optional[str]):
The name or identifier of the language model to use. If not specified, the default model (ERNIE-Bot-turbo) is used.
- endpoint (Optional[str]):
The endpoint for making API requests. If not provided, the default endpoint is used.
- stream (bool):
If set to True, the responses are streamed back as an iterator. If False, a single response is returned.
- retry_count (int):
The number of times to retry the request in case of failure.
- request_timeout (float):
The maximum time (in seconds) to wait for a response from the model.
- backoff_factor (float):
A factor to increase the waiting time between retry attempts.
- auto_concat_truncate (bool):
[Experimental] If set to True, requests are issued continuously until is_truncated is False, so the entire reply is returned. Because this feature relies heavily on the comprehension ability of the LLM, use it carefully.
- truncated_continue_prompt (str):
[Experimental] The prompt to use when requesting more content for auto truncated reply.
- kwargs (Any):
Additional keyword arguments that can be passed to customize the request.
Additional parameters like temperature will vary depending on the model, please refer to the API documentation. The additional parameters can be passed as follows:
` ChatCompletion().ado(messages = ..., temperature = 0.2, top_p = 0.5) `
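The messages payload is plain data, so it can be built and checked without the SDK. Below is a minimal sketch of a conversation list in the expected dict shape; check_alternating is an illustrative helper, not part of the SDK:

```python
# Build a plain-dict messages list as ChatCompletion.do/ado expects.
messages = [
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hi, how can I help?"},
    {"role": "user", "content": "Tell me a joke."},
]

def check_alternating(msgs):
    """Illustrative helper: verify user/assistant turns alternate,
    starting and ending with a user message."""
    roles = [m["role"] for m in msgs]
    if not roles or roles[0] != "user" or roles[-1] != "user":
        return False
    return all(a != b for a, b in zip(roles, roles[1:]))

print(check_alternating(messages))  # True
```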
- batch_do(messages_list: Union[List[List[Dict]], List[QfMessages]], worker_num: Optional[int] = None, **kwargs: Any) BatchRequestFuture [source]
Batch perform chat-based language generation using user-supplied messages.
- Parameters:
- messages_list (Union[List[List[Dict]], List[QfMessages]]):
List of message lists, one per conversation. Please refer to ChatCompletion.do for more information about each messages list.
- worker_num (Optional[int]):
The number of prompts to process concurrently. Defaults to None, in which case the number is decided dynamically.
- kwargs (Any):
Please refer to ChatCompletion.do for other parameters such as model, endpoint, retry_count, etc.
```
response_list = ChatCompletion().batch_do([...], worker_num=10)
for response in response_list:
    # response.result() returns a QfResponse if the request succeeded,
    # otherwise the stored exception is raised
    print(response.result())
# or wait until all tasks finish
while response_list.finished_count() != response_list.task_count():
    time.sleep(1)
```
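Conceptually, batch_do behaves like submitting each conversation to a bounded worker pool and collecting futures. A stdlib sketch of that pattern follows; fake_chat is a hypothetical stand-in for a single request, and the SDK's internals may differ:

```python
from concurrent.futures import ThreadPoolExecutor

def fake_chat(messages):
    """Hypothetical stand-in for a single ChatCompletion request."""
    return {"result": f"echo: {messages[-1]['content']}"}

prompts = [[{"role": "user", "content": f"question {i}"}] for i in range(4)]

# max_workers bounds concurrency, much like batch_do's worker_num.
with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(fake_chat, m) for m in prompts]
    results = [f.result() for f in futures]  # .result() raises if a task failed

print(results[0]["result"])  # echo: question 0
```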
- do(messages: Union[List[Dict], QfMessages], model: Optional[str] = None, endpoint: Optional[str] = None, stream: bool = False, retry_count: int = 1, request_timeout: float = 60, request_id: Optional[str] = None, backoff_factor: float = 1, auto_concat_truncate: bool = False, truncated_continue_prompt: str = '继续', **kwargs: Any) Union[QfResponse, Iterator[QfResponse]] [source]
Perform chat-based language generation using user-supplied messages.
- Parameters:
- messages (Union[List[Dict], QfMessages]):
A list of messages in the conversation, including any system message. Each message should be a dictionary containing ‘role’ and ‘content’ keys, representing the role (either ‘user’ or ‘assistant’) and the content of the message, respectively. Alternatively, you can provide a QfMessages object for convenience.
- model (Optional[str]):
The name or identifier of the language model to use. If not specified, the default model (ERNIE-Bot-turbo) is used.
- endpoint (Optional[str]):
The endpoint for making API requests. If not provided, the default endpoint is used.
- stream (bool):
If set to True, the responses are streamed back as an iterator. If False, a single response is returned.
- retry_count (int):
The number of times to retry the request in case of failure.
- request_timeout (float):
The maximum time (in seconds) to wait for a response from the model.
- backoff_factor (float):
A factor to increase the waiting time between retry attempts.
- auto_concat_truncate (bool):
[Experimental] If set to True, requests are issued continuously until is_truncated is False, so the entire reply is returned. Because this feature relies heavily on the comprehension ability of the LLM, use it carefully.
- truncated_continue_prompt (str):
[Experimental] The prompt to use when requesting more content for auto truncated reply.
- kwargs (Any):
Additional keyword arguments that can be passed to customize the request.
Additional parameters like temperature will vary depending on the model, please refer to the API documentation. The additional parameters can be passed as follows:
` ChatCompletion().do(messages = ..., temperature = 0.2, top_p = 0.5) `
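retry_count and backoff_factor typically combine into an exponential delay schedule. The sketch below assumes the common wait = backoff_factor * 2**attempt rule; the SDK's exact formula is not specified here:

```python
def backoff_schedule(retry_count, backoff_factor):
    """Hypothetical sketch of per-retry wait times in seconds,
    assuming the common wait = backoff_factor * 2**attempt rule."""
    return [backoff_factor * (2 ** attempt) for attempt in range(retry_count)]

print(backoff_schedule(3, 1))    # [1, 2, 4]
print(backoff_schedule(4, 0.5))  # [0.5, 1.0, 2.0, 4.0]
```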
- class qianfan.Completion(model: Optional[str] = None, endpoint: Optional[str] = None, **kwargs: Any)[source]
Bases:
BaseResource
QianFan Completion is an agent for calling QianFan completion API.
- async abatch_do(prompt_list: List[str], worker_num: Optional[int] = None, **kwargs: Any) List[Union[QfResponse, AsyncIterator[QfResponse]]] [source]
Async batch generate completions based on the user-provided prompts.
- Parameters:
- prompt_list (List[str]):
The list of input prompts to generate continuations from.
- worker_num (Optional[int]):
The number of prompts to process concurrently. Defaults to None, in which case the number is decided dynamically.
- kwargs (Any):
Please refer to Completion.ado for other parameters such as model, endpoint, retry_count, etc.
```
response_list = await Completion().abatch_do([...], worker_num=10)
for response in response_list:
    # response is a QfResponse if the request succeeded, otherwise an exception
    print(response)
```
- async ado(prompt: str, model: Optional[str] = None, endpoint: Optional[str] = None, stream: bool = False, retry_count: int = 1, request_timeout: float = 60, request_id: Optional[str] = None, backoff_factor: float = 1, **kwargs: Any) Union[QfResponse, AsyncIterator[QfResponse]] [source]
Async generate a completion based on the user-provided prompt.
- Parameters:
- prompt (str):
The input prompt to generate the continuation from.
- model (Optional[str]):
The name or identifier of the language model to use. If not specified, the default model (ERNIE-Bot-turbo) is used.
- endpoint (Optional[str]):
The endpoint for making API requests. If not provided, the default endpoint is used.
- stream (bool):
If set to True, the responses are streamed back as an iterator. If False, a single response is returned.
- retry_count (int):
The number of times to retry the request in case of failure.
- request_timeout (float):
The maximum time (in seconds) to wait for a response from the model.
- backoff_factor (float):
A factor to increase the waiting time between retry attempts.
- kwargs (Any):
Additional keyword arguments that can be passed to customize the request.
Additional parameters like temperature will vary depending on the model, please refer to the API documentation. The additional parameters can be passed as follows:
` Completion().ado(prompt = ..., temperature = 0.2, top_p = 0.5) `
- batch_do(prompt_list: List[str], worker_num: Optional[int] = None, **kwargs: Any) BatchRequestFuture [source]
Batch generate completions based on the user-provided prompts.
- Parameters:
- prompt_list (List[str]):
The list of input prompts to generate continuations from.
- worker_num (Optional[int]):
The number of prompts to process concurrently. Defaults to None, in which case the number is decided dynamically.
- kwargs (Any):
Please refer to Completion.do for other parameters such as model, endpoint, retry_count, etc.
```
response_list = Completion().batch_do(["...", "..."], worker_num=10)
for response in response_list:
    # response.result() returns a QfResponse if the request succeeded,
    # otherwise the stored exception is raised
    print(response.result())
# or wait until all tasks finish
while response_list.finished_count() != response_list.task_count():
    time.sleep(1)
```
- do(prompt: str, model: Optional[str] = None, endpoint: Optional[str] = None, stream: bool = False, retry_count: int = 1, request_timeout: float = 60, request_id: Optional[str] = None, backoff_factor: float = 1, **kwargs: Any) Union[QfResponse, Iterator[QfResponse]] [source]
Generate a completion based on the user-provided prompt.
- Parameters:
- prompt (str):
The input prompt to generate the continuation from.
- model (Optional[str]):
The name or identifier of the language model to use. If not specified, the default model (ERNIE-Bot-turbo) is used.
- endpoint (Optional[str]):
The endpoint for making API requests. If not provided, the default endpoint is used.
- stream (bool):
If set to True, the responses are streamed back as an iterator. If False, a single response is returned.
- retry_count (int):
The number of times to retry the request in case of failure.
- request_timeout (float):
The maximum time (in seconds) to wait for a response from the model.
- backoff_factor (float):
A factor to increase the waiting time between retry attempts.
- kwargs (Any):
Additional keyword arguments that can be passed to customize the request.
Additional parameters like temperature will vary depending on the model, please refer to the API documentation. The additional parameters can be passed as follows:
` Completion().do(prompt = ..., temperature = 0.2, top_p = 0.5) `
- class qianfan.Embedding(model: Optional[str] = None, endpoint: Optional[str] = None, **kwargs: Any)[source]
Bases:
BaseResource
QianFan Embedding is an agent for calling QianFan embedding API.
- async abatch_do(texts_list: List[List[str]], worker_num: Optional[int] = None, **kwargs: Any) List[Union[QfResponse, AsyncIterator[QfResponse]]] [source]
Async batch generate embeddings for a list of input texts using a specified model.
- Parameters:
- texts_list (List[List[str]]):
List of input text lists to generate embeddings for.
- worker_num (Optional[int]):
The number of prompts to process concurrently. Defaults to None, in which case the number is decided dynamically.
- kwargs (Any):
Please refer to Embedding.ado for other parameters such as model, endpoint, retry_count, etc.
```
response_list = await Embedding().abatch_do([...], worker_num=10)
for response in response_list:
    # response is a QfResponse if the request succeeded, otherwise an exception
    print(response)
```
- async ado(texts: List[str], model: Optional[str] = None, endpoint: Optional[str] = None, stream: bool = False, retry_count: int = 1, request_timeout: float = 60, request_id: Optional[str] = None, backoff_factor: float = 1, **kwargs: Any) Union[QfResponse, AsyncIterator[QfResponse]] [source]
Async generate embeddings for a list of input texts using a specified model.
- Parameters:
- texts (List[str]):
A list of input texts for which embeddings need to be generated.
- model (Optional[str]):
The name or identifier of the language model to use. If not specified, the default model (ERNIE-Bot-turbo) is used.
- endpoint (Optional[str]):
The endpoint for making API requests. If not provided, the default endpoint is used.
- stream (bool):
If set to True, the responses are streamed back as an iterator. If False, a single response is returned.
- retry_count (int):
The number of times to retry the request in case of failure.
- request_timeout (float):
The maximum time (in seconds) to wait for a response from the model.
- backoff_factor (float):
A factor to increase the waiting time between retry attempts.
- kwargs (Any):
Additional keyword arguments that can be passed to customize the request.
Additional parameters like temperature will vary depending on the model, please refer to the API documentation. The additional parameters can be passed as follows:
` Embedding().ado(texts = ..., temperature = 0.2, top_p = 0.5) `
- batch_do(texts_list: List[List[str]], worker_num: Optional[int] = None, **kwargs: Any) BatchRequestFuture [source]
Batch generate embeddings for a list of input texts using a specified model.
- Parameters:
- texts_list (List[List[str]]):
List of input text lists to generate embeddings for.
- worker_num (Optional[int]):
The number of prompts to process concurrently. Defaults to None, in which case the number is decided dynamically.
- kwargs (Any):
Please refer to Embedding.do for other parameters such as model, endpoint, retry_count, etc.
```
response_list = Embedding().batch_do([["..."], ["..."]], worker_num=10)
for response in response_list:
    # response.result() returns a QfResponse if the request succeeded,
    # otherwise the stored exception is raised
    print(response.result())
# or wait until all tasks finish
while response_list.finished_count() != response_list.task_count():
    time.sleep(1)
```
- do(texts: List[str], model: Optional[str] = None, endpoint: Optional[str] = None, stream: bool = False, retry_count: int = 1, request_timeout: float = 60, request_id: Optional[str] = None, backoff_factor: float = 1, **kwargs: Any) Union[QfResponse, Iterator[QfResponse]] [source]
Generate embeddings for a list of input texts using a specified model.
- Parameters:
- texts (List[str]):
A list of input texts for which embeddings need to be generated.
- model (Optional[str]):
The name or identifier of the language model to use. If not specified, the default model (ERNIE-Bot-turbo) is used.
- endpoint (Optional[str]):
The endpoint for making API requests. If not provided, the default endpoint is used.
- stream (bool):
If set to True, the responses are streamed back as an iterator. If False, a single response is returned.
- retry_count (int):
The number of times to retry the request in case of failure.
- request_timeout (float):
The maximum time (in seconds) to wait for a response from the model.
- backoff_factor (float):
A factor to increase the waiting time between retry attempts.
- kwargs (Any):
Additional keyword arguments that can be passed to customize the request.
Additional parameters like temperature will vary depending on the model, please refer to the API documentation. The additional parameters can be passed as follows:
` Embedding().do(texts = ..., temperature = 0.2, top_p = 0.5) `
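Embedding responses are usually consumed by comparing vectors. Below is a pure-Python cosine-similarity sketch; the v1/v2/v3 vectors stand in for values such as resp["data"][i]["embedding"], whose exact field names are an assumption:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Stand-ins for embedding vectors returned by the API (illustrative values).
v1 = [0.1, 0.2, 0.3]
v2 = [0.1, 0.2, 0.3]
v3 = [-0.3, 0.1, 0.0]

print(round(cosine_similarity(v1, v2), 6))  # 1.0
```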
- class qianfan.Image2Text(model: Optional[str] = None, endpoint: Optional[str] = None, **kwargs: Any)[source]
Bases:
BaseResource
QianFan Image2Text API Resource
- async abatch_do(input_list: List[Tuple[str, str]], worker_num: Optional[int] = None, **kwargs: Any) List[Union[QfResponse, AsyncIterator[QfResponse]]] [source]
Async batch execute an image2text action on the provided inputs and generate responses.
- Parameters:
- input_list (List[Tuple[str, str]]):
The list of (prompt, base64-encoded image) pairs for which responses are generated.
- worker_num (Optional[int]):
The number of prompts to process concurrently. Defaults to None, in which case the number is decided dynamically.
- kwargs (Any):
Please refer to Image2Text.ado for other parameters such as model, endpoint, retry_count, etc.
```
response_list = await Image2Text(endpoint="...").abatch_do(
    [("...", "..."), ("...", "...")], worker_num=10
)
for response in response_list:
    # response is a QfResponse if the request succeeded, otherwise an exception
    print(response)
```
- async ado(prompt: str, image: str, model: Optional[str] = None, endpoint: Optional[str] = None, stream: bool = False, retry_count: int = 1, request_timeout: float = 60, request_id: Optional[str] = None, backoff_factor: float = 0, **kwargs: Any) Union[QfResponse, AsyncIterator[QfResponse]] [source]
Async execute an image2text action on the provided input prompt and generate responses.
- Parameters:
- prompt (str):
The user input or prompt for which a response is generated.
- image (str):
The user input base64 encoded image data for which a response is generated.
- model (Optional[str]):
The name or identifier of the language model to use.
- endpoint (Optional[str]):
The endpoint for making API requests. If not provided, the default endpoint is used.
- stream (bool):
Whether to stream responses or not.
- retry_count (int):
The number of times to retry the request in case of failure.
- request_timeout (float):
The maximum time (in seconds) to wait for a response from the model.
- backoff_factor (float):
A factor to increase the waiting time between retry attempts.
- kwargs (Any):
Additional keyword arguments that can be passed to customize the request.
Additional parameters like temperature will vary depending on the model, please refer to the API documentation. The additional parameters can be passed as follows:
` Image2Text(endpoint="").ado(prompt="", image="", xx=vv) `
- batch_do(input_list: List[Tuple[str, str]], worker_num: Optional[int] = None, **kwargs: Any) BatchRequestFuture [source]
Batch execute an image2text action on the provided inputs and generate responses.
- Parameters:
- input_list (List[Tuple[str, str]]):
The list of (prompt, base64-encoded image) pairs for which responses are generated.
- worker_num (Optional[int]):
The number of prompts to process concurrently. Defaults to None, in which case the number is decided dynamically.
- kwargs (Any):
Please refer to Image2Text.do for other parameters such as model, endpoint, retry_count, etc.
```
response_list = Image2Text(endpoint="...").batch_do(
    [("...", "..."), ("...", "...")], worker_num=10
)
for response in response_list:
    # response.result() returns a QfResponse if the request succeeded,
    # otherwise the stored exception is raised
    print(response.result())
# or wait until all tasks finish
while response_list.finished_count() != response_list.task_count():
    time.sleep(1)
```
- do(prompt: str, image: str, model: Optional[str] = None, endpoint: Optional[str] = None, stream: bool = False, retry_count: int = 1, request_timeout: float = 60, request_id: Optional[str] = None, backoff_factor: float = 0, **kwargs: Any) Union[QfResponse, Iterator[QfResponse]] [source]
Execute an image2text action on the provided input prompt and generate responses.
- Parameters:
- prompt (str):
The user input or prompt for which a response is generated.
- image (str):
The user input base64 encoded image data for which a response is generated.
- model (Optional[str]):
The name or identifier of the language model to use.
- endpoint (Optional[str]):
The endpoint for making API requests. If not provided, the default endpoint is used.
- stream (bool):
Whether to stream responses or not.
- retry_count (int):
The number of times to retry the request in case of failure.
- request_timeout (float):
The maximum time (in seconds) to wait for a response from the model.
- backoff_factor (float):
A factor to increase the waiting time between retry attempts.
- kwargs (Any):
Additional keyword arguments that can be passed to customize the request.
Additional parameters like temperature will vary depending on the model, please refer to the API documentation. The additional parameters can be passed as follows:
` Image2Text(endpoint="").do(prompt="", image="", xxx=vvv) `
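The image argument must be base64-encoded text, which the standard library can produce directly. A sketch of preparing it from raw bytes:

```python
import base64

# Stand-in for bytes read from an image file, e.g. open(path, "rb").read()
image_bytes = b"\x89PNG\r\n\x1a\n...fake image data..."

# Image2Text expects the image as a base64-encoded string.
image_b64 = base64.b64encode(image_bytes).decode("ascii")

# Decoding restores the original bytes exactly.
assert base64.b64decode(image_b64) == image_bytes
```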
- qianfan.Messages
alias of
QfMessages
- class qianfan.Plugin(model: str = 'EBPlugin', endpoint: Optional[str] = None, **kwargs: Any)[source]
Bases:
BaseResource
QianFan Plugin API Resource
- async abatch_do(query_list: List[Union[str, QfMessages, List[Dict]]], worker_num: Optional[int] = None, **kwargs: Any) List[Union[QfResponse, AsyncIterator[QfResponse]]] [source]
Async batch execute a plugin action on the provided input prompts and generate responses.
- Parameters:
- query_list (List[Union[str, QfMessages, List[Dict]]]):
The list of user inputs (messages or prompts) for which responses are generated.
- worker_num (Optional[int]):
The number of prompts to process concurrently. Defaults to None, in which case the number is decided dynamically.
- kwargs (Any):
Please refer to Plugin.ado for other parameters such as model, endpoint, retry_count, etc.
```
response_list = await Plugin().abatch_do([...], worker_num=10)
for response in response_list:
    # response is a QfResponse if the request succeeded, otherwise an exception
    print(response)
```
- async ado(query: Union[str, QfMessages, List[Dict]], plugins: Optional[List[str]] = None, model: Optional[str] = None, endpoint: Optional[str] = None, stream: bool = False, retry_count: int = 1, request_timeout: float = 60, request_id: Optional[str] = None, backoff_factor: float = 1, **kwargs: Any) Union[QfResponse, AsyncIterator[QfResponse]] [source]
Async execute a plugin action on the provided input prompt and generate responses.
- Parameters:
- query (Union[str, QfMessages, List[Dict]]):
The user input for which a response is generated. Concretely, query should be a str for the Qianfan plugin, and either a QfMessages object or a list for EBPlugin.
- plugins (Optional[List[str]]):
A list of plugins to be used.
- model (Optional[str]):
The name or identifier of the language model to use. If not specified, the default model (ERNIE-Bot-turbo) is used.
- endpoint (Optional[str]):
The endpoint for making API requests. If not provided, the default endpoint is used.
- stream (bool):
If set to True, the responses are streamed back as an iterator. If False, a single response is returned.
- retry_count (int):
The number of times to retry the request in case of failure.
- request_timeout (float):
The maximum time (in seconds) to wait for a response from the model.
- backoff_factor (float):
A factor to increase the waiting time between retry attempts.
- kwargs (Any):
Additional keyword arguments that can be passed to customize the request.
Additional parameters like temperature will vary depending on the model, please refer to the API documentation. The additional parameters can be passed as follows:
` Plugin().ado(query = ..., temperature = 0.2, top_p = 0.5) `
- batch_do(query_list: List[Union[str, QfMessages, List[Dict]]], worker_num: Optional[int] = None, **kwargs: Any) BatchRequestFuture [source]
Batch execute a plugin action on the provided input prompts and generate responses.
- Parameters:
- query_list (List[Union[str, QfMessages, List[Dict]]]):
The list of user inputs (messages or prompts) for which responses are generated.
- worker_num (Optional[int]):
The number of prompts to process concurrently. Defaults to None, in which case the number is decided dynamically.
- kwargs (Any):
Please refer to Plugin.do for other parameters such as model, endpoint, retry_count, etc.
```
response_list = Plugin().batch_do(["...", "..."], worker_num=10)
for response in response_list:
    # response.result() returns a QfResponse if the request succeeded,
    # otherwise the stored exception is raised
    print(response.result())
# or wait until all tasks finish
while response_list.finished_count() != response_list.task_count():
    time.sleep(1)
```
- do(query: Union[str, QfMessages, List[Dict]], plugins: Optional[List[str]] = None, model: Optional[str] = None, endpoint: Optional[str] = None, stream: bool = False, retry_count: int = 1, request_timeout: float = 60, request_id: Optional[str] = None, backoff_factor: float = 1, **kwargs: Any) Union[QfResponse, Iterator[QfResponse]] [source]
Execute a plugin action on the provided input prompt and generate responses.
- Parameters:
- query (Union[str, QfMessages, List[Dict]]):
The user input for which a response is generated. Concretely, query should be a str for the Qianfan plugin, and either a QfMessages object or a list for EBPlugin.
- plugins (Optional[List[str]]):
A list of plugins to be used.
- model (Optional[str]):
The name or identifier of the language model to use. If not specified, the default model (ERNIE-Bot-turbo) is used.
- endpoint (Optional[str]):
The endpoint for making API requests. If not provided, the default endpoint is used.
- stream (bool):
If set to True, the responses are streamed back as an iterator. If False, a single response is returned.
- retry_count (int):
The number of times to retry the request in case of failure.
- request_timeout (float):
The maximum time (in seconds) to wait for a response from the model.
- backoff_factor (float):
A factor to increase the waiting time between retry attempts.
- kwargs (Any):
Additional keyword arguments that can be passed to customize the request.
Additional parameters like temperature will vary depending on the model, please refer to the API documentation. The additional parameters can be passed as follows:
` Plugin().do(query = ..., temperature = 0.2, top_p = 0.5) `
- class qianfan.QfMessages[source]
Bases:
object
An auxiliary class for representing a list of messages in a chat model.
Example usage:
```
messages = QfMessages()
# append a message by str
messages.append("Hello!")
# send the messages directly
resp = qianfan.ChatCompletion().do(messages=messages)
# append the response to the messages and continue the conversation
messages.append(resp)
messages.append("next message", role=QfRole.User)  # role is optional
```
- append(message: Union[str, QfResponse], role: Optional[Union[str, QfRole]] = None) None [source]
Appends a message to the QfMessages object.
- Parameters:
- message (Union[str, QfResponse]):
The message to be appended. It can be a string or a QfResponse object. When it is a QfResponse object, the role of the message sender defaults to QfRole.Assistant, unless you specify the role using the ‘role’ parameter.
- role (Optional[Union[str, QfRole]]):
An optional parameter to specify the role of the message sender. If not provided, the role is determined based on the existing messages.
Example usage can be found in the introduction of this class.
- class qianfan.QfResponse(code: int, headers: ~typing.Dict[str, str] = <factory>, body: ~typing.Dict[str, ~typing.Any] = <factory>, statistic: ~typing.Dict[str, ~typing.Any] = <factory>, request: ~typing.Optional[~qianfan.resources.typing.QfRequest] = <factory>)[source]
Bases:
Mapping
Response from Qianfan API
- body: Dict[str, Any]
The JSON-formatted body of the response.
- code: int
The HTTP status code of the response.
- headers: Dict[str, str]
A dictionary of HTTP headers included in the response.
- statistic: Dict[str, Any]
Statistics about the request:
- request_latency: request elapsed time in seconds, or the receiving elapsed time of each response if stream=True.
- first_token_latency: first-token elapsed time in seconds; only present in streaming calls.
- total_latency: total elapsed time in seconds, including the request, serialization, and the waiting time if rate_limit is set.
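Because QfResponse is a Mapping, body fields can be read with dict-style access. A tiny stdlib stand-in illustrating the same behavior (MiniResponse and its fields are illustrative, not the SDK class):

```python
from collections.abc import Mapping

class MiniResponse(Mapping):
    """Illustrative stand-in mirroring QfResponse's Mapping behavior:
    item access is delegated to the JSON body."""
    def __init__(self, code, body):
        self.code = code
        self.body = body
    def __getitem__(self, key):
        return self.body[key]
    def __iter__(self):
        return iter(self.body)
    def __len__(self):
        return len(self.body)

resp = MiniResponse(200, {"result": "hello", "is_truncated": False})
print(resp["result"])    # hello
print("result" in resp)  # True
```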
- class qianfan.QfRole(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]
Bases:
Enum
Role type supported in Qianfan
- Assistant = 'assistant'
- Function = 'function'
- User = 'user'
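The members map to the plain strings used on the wire. A stdlib enum mirroring the documented values (Role here is a local sketch, not the SDK's QfRole):

```python
from enum import Enum

class Role(Enum):
    """Stdlib sketch mirroring the documented QfRole members."""
    User = "user"
    Assistant = "assistant"
    Function = "function"

# .value gives the wire string; Enum lookup by value returns the member.
print(Role.User.value)                      # user
print(Role("assistant") is Role.Assistant)  # True
```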
- qianfan.Response
alias of
QfResponse
- qianfan.SK(sk: str) None [source]
Set the Secret Key (SK) for LLM API authentication. The Secret Key is paired with the API Key.
This function allows you to set the Secret Key that will be used for authentication throughout the entire SDK. The Secret Key can be acquired from the qianfan console: https://console.bce.baidu.com/qianfan/ais/console/applicationConsole/application
- Parameters:
- sk (str):
The Secret Key to be set for LLM API authentication.
- qianfan.SecretKey(secret_key: str) None [source]
Set the Secret Key for console API authentication. The Secret Key is paired with the Access Key.
This function allows you to set the Secret Key that will be used for authentication throughout the entire SDK. The Secret Key can be acquired from the Baidu BCE console: https://console.bce.baidu.com/iam/#/iam/accesslist
- Parameters:
- secret_key (str):
The Secret Key to be set for console API authentication.
- class qianfan.Text2Image(model: Optional[str] = None, endpoint: Optional[str] = None, **kwargs: Any)[source]
Bases:
BaseResource
QianFan Text2Image API Resource
- async abatch_do(prompt_list: List[str], worker_num: Optional[int] = None, **kwargs: Any) List[Union[QfResponse, AsyncIterator[QfResponse]]] [source]
Async batch execute a text2image action on the provided input prompts and generate responses.
- Parameters:
- prompt_list (List[str]):
The list of input prompts for which responses are generated.
- worker_num (Optional[int]):
The number of prompts to process concurrently. Defaults to None, in which case the number is decided dynamically.
- kwargs (Any):
Please refer to Text2Image.ado for other parameters such as model, endpoint, retry_count, etc.
```
response_list = await Text2Image().abatch_do([...], worker_num=10)
for response in response_list:
    # response is a QfResponse if the request succeeded, otherwise an exception
    print(response)
```
- async ado(prompt: str, model: Optional[str] = None, endpoint: Optional[str] = None, with_decode: Optional[str] = None, retry_count: int = 1, request_timeout: float = 60, request_id: Optional[str] = None, backoff_factor: float = 0, **kwargs: Any) Union[QfResponse, AsyncIterator[QfResponse]] [source]
Async execute a text2image action on the provided input prompt and generate responses.
- Parameters:
- prompt (str):
The user input or prompt for which a response is generated.
- model (Optional[str]):
The name or identifier of the language model to use. If not specified, the default model (Stable-Diffusion-XL) is used.
- endpoint (Optional[str]):
The endpoint for making API requests. If not provided, the default endpoint is used.
- with_decode (Optional[str]):
The way to decode the returned data. If not provided, no decoding is applied. Use "base64" to automatically decode the data.
- retry_count (int):
The number of times to retry the request in case of failure.
- request_timeout (float):
The maximum time (in seconds) to wait for a response from the model.
- backoff_factor (float):
A factor to increase the waiting time between retry attempts.
- kwargs (Any):
Additional keyword arguments that can be passed to customize the request.
Additional parameters such as temperature vary depending on the model; please refer to the API documentation. They can be passed as follows:
`await Text2Image().ado(prompt=..., steps=20)`
- batch_do(prompt_list: List[str], worker_num: Optional[int] = None, **kwargs: Any) BatchRequestFuture [source]
Execute a batch of text2image actions on the provided input prompts and generate responses.
- Parameters:
- prompt_list (List[str]):
The list of user inputs or prompts for which responses are generated.
- worker_num (Optional[int]):
The number of prompts to process concurrently. Defaults to None, in which case the number is decided dynamically.
- kwargs (Any):
Please refer to Text2Image.do for other parameters such as model, endpoint, retry_count, etc.
```
response_list = Text2Image().batch_do(["...", "..."], worker_num=10)
for response in response_list:
    # response.result() returns a QfResponse on success, or raises the exception
    print(response.result())

# or poll until all tasks finish
while response_list.finished_count() != response_list.task_count():
    time.sleep(1)
```
- do(prompt: str, model: Optional[str] = None, endpoint: Optional[str] = None, with_decode: Optional[str] = None, retry_count: int = 1, request_timeout: float = 60, request_id: Optional[str] = None, backoff_factor: float = 0, **kwargs: Any) Union[QfResponse, Iterator[QfResponse]] [source]
Execute a text2image action on the provided input prompt and generate responses.
- Parameters:
- prompt (str):
The user input or prompt for which a response is generated.
- model (Optional[str]):
The name or identifier of the model to use. If not specified, the default model (Stable-Diffusion-XL) is used.
- endpoint (Optional[str]):
The endpoint for making API requests. If not provided, the default endpoint is used.
- with_decode (Optional[str]):
The way to decode the returned data. If not provided, no decoding is applied. Use "base64" to automatically decode the data.
- retry_count (int):
The number of times to retry the request in case of failure.
- request_timeout (float):
The maximum time (in seconds) to wait for a response from the model.
- backoff_factor (float):
A factor to increase the waiting time between retry attempts.
- kwargs (Any):
Additional keyword arguments that can be passed to customize the request.
Additional parameters such as temperature vary depending on the model; please refer to the API documentation. They can be passed as follows:
`Text2Image().do(prompt=..., steps=20)`
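The retry_count and backoff_factor parameters above follow the common exponential-backoff convention. A sketch of the resulting wait times, assuming the usual `backoff_factor * 2**attempt` formula (the SDK may additionally apply jitter via LLM_API_RETRY_JITTER and cap the wait with LLM_API_RETRY_MAX_WAIT_INTERVAL):

```python
def backoff_waits(retry_count: int, backoff_factor: float) -> list:
    """Wait times (seconds) before each retry attempt, assuming the
    common exponential-backoff formula backoff_factor * 2**attempt.
    This is an illustrative assumption, not the SDK's exact schedule."""
    return [backoff_factor * (2 ** attempt) for attempt in range(retry_count)]

print(backoff_waits(3, 1.0))  # [1.0, 2.0, 4.0]
```

Note that the default backoff_factor of 0 means retries are immediate.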
- class qianfan.Tokenizer[source]
Bases:
object
Class for Tokenizer API
- classmethod count_tokens(text: str, mode: Literal['local', 'remote'] = 'local', model: str = 'ERNIE-Bot', **kwargs: Any) int [source]
Count the number of tokens in a given text.
- Parameters:
- text (str):
The input text for which tokens need to be counted.
- mode (str, optional):
- local (default):
a local approximation: (number of Chinese characters) + (number of English words) * 1.3
- remote:
use the qianfan API to calculate the token count. The API returns an accurate count, but only ERNIE-Bot series models are supported.
- model (str, optional):
The name of the model to be used for token counting, which may influence the counting strategy. Default is ‘ERNIE-Bot’.
- kwargs (Any):
Additional keyword arguments that can be passed to customize the request.
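The local mode's formula can be sketched in plain Python. This is a rough reimplementation of the documented heuristic; the SDK's exact character classification and word splitting may differ:

```python
import re

def approximate_token_count(text: str) -> int:
    """Approximate token count per the documented local heuristic:
    (number of Chinese characters) + (number of English words) * 1.3.
    The regexes below are assumptions about how the SDK segments text."""
    han_chars = len(re.findall(r"[\u4e00-\u9fff]", text))
    english_words = len(re.findall(r"[a-zA-Z]+", text))
    return int(han_chars + english_words * 1.3)

print(approximate_token_count("hello world"))  # int(2 * 1.3) -> 2
```

For billing-accurate counts, use mode='remote' with a supported ERNIE-Bot series model.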
- qianfan.disable_log() None [source]
Disables logging.
This function turns off the logging feature, preventing the recording of log messages.
- Parameters:
None
- qianfan.enable_log(log_level: int = 20) None [source]
Set the logging level for the qianfan sdk.
This function allows you to configure the logging level for the sdk's logging system. The logging level determines the verbosity of log messages that will be recorded. By default, it is set to logging.INFO (20).
- Parameters:
- log_level (int, optional):
The logging level to set for the application. It controls the granularity of log messages. You can specify one of the following integer values, or a string such as "INFO":
logging.CRITICAL (50): Logs only critical messages.
logging.ERROR (40): Logs error and critical messages.
logging.WARNING (30): Logs warnings, errors, and critical messages.
logging.INFO (20): Logs general information, warnings, errors, and critical messages.
logging.DEBUG (10): Logs detailed debugging information, in addition to all the above log levels.
Example Usage: To enable detailed debugging, you can call the function like this: `enable_log(logging.DEBUG)`
To set the logging level to only log errors and critical messages, use: `enable_log("ERROR")`
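The integer levels listed above are the standard library's own logging constants, which can be confirmed directly:

```python
import logging

# enable_log accepts the stdlib logging module's integer levels
# (or their names as strings); the mapping is the stdlib's own.
for name in ("CRITICAL", "ERROR", "WARNING", "INFO", "DEBUG"):
    print(f"logging.{name} = {getattr(logging, name)}")
```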
- qianfan.get_config() GlobalConfig [source]
Subpackages
- qianfan.common package
Prompt
Prompt.PromptEvaluateResult
Prompt.base_prompt()
Prompt.creator_name
Prompt.crispe_prompt()
Prompt.delete()
Prompt.evaluate()
Prompt.fewshot_prompt()
Prompt.framework_type
Prompt.from_file()
Prompt.id
Prompt.identifier
Prompt.labels
Prompt.name
Prompt.negative_template
Prompt.negative_variables
Prompt.optimize()
Prompt.render()
Prompt.save_to_file()
Prompt.scene_type
Prompt.set_negative_template()
Prompt.set_template()
Prompt.template
Prompt.type
Prompt.variables
PromptLabel
- Subpackages
- qianfan.common.client package
- Submodules
- qianfan.common.client.chat module
- qianfan.common.client.completion module
- qianfan.common.client.dataset module
- qianfan.common.client.embedding module
- qianfan.common.client.evaluation module
- qianfan.common.client.main module
- qianfan.common.client.trainer module
- qianfan.common.client.txt2img module
- qianfan.common.client.utils module
- qianfan.common.hub package
- qianfan.common.prompt package
- qianfan.common.runnable package
- qianfan.common.client package
- qianfan.dataset package
DataExportDestinationType
DataProjectType
DataSetType
DataSourceType
DataStorageType
DataTemplateType
Dataset
Dataset.add_default_group_column()
Dataset.append()
Dataset.atest_using_llm()
Dataset.col_append()
Dataset.col_delete()
Dataset.col_filter()
Dataset.col_insert()
Dataset.col_list()
Dataset.col_map()
Dataset.col_names()
Dataset.col_renames()
Dataset.create_from_pyarrow_table()
Dataset.create_from_pyobj()
Dataset.delete()
Dataset.delete_group_column()
Dataset.filter()
Dataset.get_input_data
Dataset.get_reference_data
Dataset.insert()
Dataset.is_dataset_generic_text()
Dataset.is_dataset_located_in_qianfan()
Dataset.list()
Dataset.load()
Dataset.map()
Dataset.online_data_process()
Dataset.row_number()
Dataset.save()
Dataset.start_online_data_process_task()
Dataset.test_using_llm()
FormatType
Table
Table.append()
Table.col_append()
Table.col_delete()
Table.col_filter()
Table.col_insert()
Table.col_list()
Table.col_map()
Table.col_names()
Table.col_renames()
Table.column_number()
Table.delete()
Table.filter()
Table.insert()
Table.is_dataset_grouped()
Table.is_dataset_packed()
Table.list()
Table.map()
Table.pack()
Table.row_number()
Table.to_pydict()
Table.to_pylist()
Table.unpack()
- Subpackages
- qianfan.dataset.data_source package
- qianfan.dataset.local_data_operators package
BaseLocalFilterOperator
BaseLocalMapOperator
LocalCheckCharacterRepetitionFilter
LocalCheckEachSentenceIsLongEnoughFilter
LocalCheckFlaggedWordsFilter
LocalCheckSpecialCharactersFilter
LocalCheckStopwordsFilter
LocalCheckWordNumberFilter
- Submodules
- qianfan.dataset.local_data_operators.base module
- qianfan.dataset.local_data_operators.check_character_repetition_filter module
- qianfan.dataset.local_data_operators.check_flagged_words module
- qianfan.dataset.local_data_operators.check_sentence_length_filter module
- qianfan.dataset.local_data_operators.check_special_characters module
- qianfan.dataset.local_data_operators.check_stopwords module
- qianfan.dataset.local_data_operators.check_word_number module
- qianfan.dataset.local_data_operators.check_word_repetition_filter module
- qianfan.dataset.local_data_operators.consts module
- qianfan.dataset.local_data_operators.utils module
- qianfan.dataset.local_data_operators.word_list module
- Submodules
- qianfan.dataset.consts module
- qianfan.dataset.dataset module
Dataset
Dataset.add_default_group_column()
Dataset.append()
Dataset.atest_using_llm()
Dataset.col_append()
Dataset.col_delete()
Dataset.col_filter()
Dataset.col_insert()
Dataset.col_list()
Dataset.col_map()
Dataset.col_names()
Dataset.col_renames()
Dataset.create_from_pyarrow_table()
Dataset.create_from_pyobj()
Dataset.delete()
Dataset.delete_group_column()
Dataset.filter()
Dataset.get_input_data
Dataset.get_reference_data
Dataset.insert()
Dataset.is_dataset_generic_text()
Dataset.is_dataset_located_in_qianfan()
Dataset.list()
Dataset.load()
Dataset.map()
Dataset.online_data_process()
Dataset.row_number()
Dataset.save()
Dataset.start_online_data_process_task()
Dataset.test_using_llm()
- qianfan.dataset.dataset_utils module
- qianfan.dataset.process_interface module
- qianfan.dataset.qianfan_data_operators module
DeduplicationSimhash
Deduplicator
DesensitizationProcessor
ExceptionRegulator
Filter
FilterCheckCharacterRepetitionRemoval
FilterCheckFlaggedWords
FilterCheckLangId
FilterCheckNumberWords
FilterCheckPerplexity
FilterCheckSpecialCharacters
FilterCheckWordRepetitionRemoval
QianfanOperator
RemoveEmoji
RemoveInvisibleCharacter
RemoveNonMeaningCharacters
RemoveWebIdentifiers
ReplaceEmails
ReplaceIdentifier
ReplaceIp
ReplaceTraditionalChineseToSimplified
ReplaceUniformWhitespace
- qianfan.dataset.schema module
- qianfan.dataset.table module
Table
Table.append()
Table.col_append()
Table.col_delete()
Table.col_filter()
Table.col_insert()
Table.col_list()
Table.col_map()
Table.col_names()
Table.col_renames()
Table.column_number()
Table.delete()
Table.filter()
Table.insert()
Table.is_dataset_grouped()
Table.is_dataset_packed()
Table.list()
Table.map()
Table.pack()
Table.row_number()
Table.to_pydict()
Table.to_pylist()
Table.unpack()
- qianfan.dataset.table_utils module
- qianfan.evaluation package
EvaluationManager
EvaluationResult
- Submodules
- qianfan.evaluation.consts module
- qianfan.evaluation.evaluation_manager module
- qianfan.evaluation.evaluation_result module
- qianfan.evaluation.evaluator module
- qianfan.evaluation.opencompass_evaluator module
- qianfan.extensions package
- qianfan.model package
- qianfan.resources package
ChatCompletion
Completion
Data
Data.annotate_an_entity()
Data.create_bare_dataset()
Data.create_data_import_task()
Data.create_dataset_augmenting_task()
Data.create_dataset_etl_task()
Data.create_dataset_export_task()
Data.delete_an_entity()
Data.delete_dataset()
Data.delete_dataset_augmenting_task()
Data.delete_dataset_etl_task()
Data.get_dataset_aug_task_list()
Data.get_dataset_augmenting_task_info()
Data.get_dataset_etl_task_info()
Data.get_dataset_etl_task_list()
Data.get_dataset_export_records()
Data.get_dataset_import_error_detail()
Data.get_dataset_info()
Data.get_dataset_status_in_batch()
Data.list_all_entity_in_dataset()
Data.release_dataset()
Embedding
FineTune
Image2Text
Model
Model.batch_delete_model()
Model.batch_delete_model_version()
Model.create_evaluation_result_export_task()
Model.create_evaluation_task()
Model.detail()
Model.get_evaluation_info()
Model.get_evaluation_result()
Model.get_evaluation_result_export_task_status()
Model.list()
Model.preset_list()
Model.publish()
Model.stop_evaluation_task()
Model.user_list()
Plugin
Prompt
QfMessages
QfResponse
QfRole
Service
Text2Image
Tokenizer
- Subpackages
- Submodules
- qianfan.resources.http_client module
- qianfan.resources.rate_limiter module
- qianfan.resources.typing module
- qianfan.trainer package
BaseAction
DeployAction
Event
EventHandler
LLMFinetune
LoadDataSetAction
LoadDataSetAction.Dataset
LoadDataSetAction.Dataset.add_default_group_column()
LoadDataSetAction.Dataset.append()
LoadDataSetAction.Dataset.atest_using_llm()
LoadDataSetAction.Dataset.col_append()
LoadDataSetAction.Dataset.col_delete()
LoadDataSetAction.Dataset.col_filter()
LoadDataSetAction.Dataset.col_insert()
LoadDataSetAction.Dataset.col_list()
LoadDataSetAction.Dataset.col_map()
LoadDataSetAction.Dataset.col_names()
LoadDataSetAction.Dataset.col_renames()
LoadDataSetAction.Dataset.create_from_pyarrow_table()
LoadDataSetAction.Dataset.create_from_pyobj()
LoadDataSetAction.Dataset.delete()
LoadDataSetAction.Dataset.delete_group_column()
LoadDataSetAction.Dataset.filter()
LoadDataSetAction.Dataset.get_input_data
LoadDataSetAction.Dataset.get_reference_data
LoadDataSetAction.Dataset.insert()
LoadDataSetAction.Dataset.is_dataset_generic_text()
LoadDataSetAction.Dataset.is_dataset_located_in_qianfan()
LoadDataSetAction.Dataset.list()
LoadDataSetAction.Dataset.load()
LoadDataSetAction.Dataset.map()
LoadDataSetAction.Dataset.online_data_process()
LoadDataSetAction.Dataset.row_number()
LoadDataSetAction.Dataset.save()
LoadDataSetAction.Dataset.start_online_data_process_task()
LoadDataSetAction.Dataset.test_using_llm()
LoadDataSetAction.dataset
LoadDataSetAction.exec()
LoadDataSetAction.resume()
ModelPublishAction
TrainAction
TrainAction.base_model
TrainAction.exec()
TrainAction.get_default_train_config()
TrainAction.is_incr
TrainAction.job_description
TrainAction.job_id
TrainAction.job_str_id
TrainAction.result
TrainAction.resume()
TrainAction.stop()
TrainAction.task_description
TrainAction.task_id
TrainAction.task_name
TrainAction.task_str_id
TrainAction.train_config
TrainAction.train_mode
TrainAction.train_type
TrainAction.validateTrainConfig()
Trainer
- Submodules
- qianfan.trainer.actions module
DeployAction
EvaluateAction
LoadDataSetAction
ModelPublishAction
TrainAction
TrainAction.base_model
TrainAction.exec()
TrainAction.get_default_train_config()
TrainAction.is_incr
TrainAction.job_description
TrainAction.job_id
TrainAction.job_str_id
TrainAction.result
TrainAction.resume()
TrainAction.stop()
TrainAction.task_description
TrainAction.task_id
TrainAction.task_name
TrainAction.task_str_id
TrainAction.train_config
TrainAction.train_mode
TrainAction.train_type
TrainAction.validateTrainConfig()
- qianfan.trainer.base module
- qianfan.trainer.configs module
ModelInfo
TrainConfig
TrainConfig.batch_size
TrainConfig.epoch
TrainConfig.extras
TrainConfig.learning_rate
TrainConfig.load()
TrainConfig.logging_steps
TrainConfig.lora_all_linear
TrainConfig.lora_alpha
TrainConfig.lora_dropout
TrainConfig.lora_rank
TrainConfig.max_seq_len
TrainConfig.peft_type
TrainConfig.scheduler_name
TrainConfig.trainset_rate
TrainConfig.validate_config()
TrainConfig.validate_valid_fields()
TrainConfig.warmup_ratio
TrainConfig.weight_decay
TrainLimit
TrainLimit.batch_size_limit
TrainLimit.epoch_limit
TrainLimit.learning_rate_limit
TrainLimit.log_steps_limit
TrainLimit.lora_alpha_options
TrainLimit.lora_dropout_limit
TrainLimit.lora_rank_options
TrainLimit.max_seq_len_options
TrainLimit.scheduler_name_options
TrainLimit.supported_hyper_params
TrainLimit.warmup_ratio_limit
TrainLimit.weight_decay_limit
- qianfan.trainer.consts module
ActionState
FinetuneStatus
FinetuneStatus.DatasetLoadFailed
FinetuneStatus.DatasetLoadStopped
FinetuneStatus.DatasetLoaded
FinetuneStatus.DatasetLoading
FinetuneStatus.EvaluationCreated
FinetuneStatus.EvaluationFailed
FinetuneStatus.EvaluationFinished
FinetuneStatus.EvaluationRunning
FinetuneStatus.EvaluationStopped
FinetuneStatus.ModelPublishFailed
FinetuneStatus.ModelPublished
FinetuneStatus.ModelPublishing
FinetuneStatus.TrainCreated
FinetuneStatus.TrainFailed
FinetuneStatus.TrainFinished
FinetuneStatus.TrainStopped
FinetuneStatus.Training
FinetuneStatus.Unknown
PeftType
ServiceStatus
ServiceType
- qianfan.trainer.event module
- qianfan.trainer.finetune module
- qianfan.utils package
AsyncLock
disable_log()
enable_log()
log_debug()
log_error()
log_info()
log_warn()
- Subpackages
- Submodules
- qianfan.utils.bos_uploader module
- qianfan.utils.helper module
- qianfan.utils.logging module
- qianfan.utils.utils module
Submodules
qianfan.config module
- qianfan.config.AK(ak: str) None [source]
Set the API Key (AK) for LLM API authentication.
This function allows you to set the API Key that will be used for authentication throughout the entire SDK. The API Key can be acquired from the qianfan console: https://console.bce.baidu.com/qianfan/ais/console/applicationConsole/application
- Parameters:
- ak (str):
The API Key to be set for LLM API authentication.
- qianfan.config.AccessKey(access_key: str) None [source]
Set the Access Key for console API authentication.
This function allows you to set the Access Key that will be used for authentication throughout the entire SDK. The Access Key can be acquired from the baidu bce console: https://console.bce.baidu.com/iam/#/iam/accesslist
- Parameters:
- access_key (str):
The Access Key to be set for console API authentication.
- qianfan.config.AccessToken(access_token: str) None [source]
Set the access token for LLM API authentication.
This function allows you to set the access token that will be used for authentication throughout the entire SDK. The access token can be generated from API key and secret key according to the instructions at https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Ilkkrb0i5.
This function is only needed when you have nothing but an access token. If you have both the API key and secret key, the SDK will refresh the access token for you automatically.
- Parameters:
- access_token (str):
The access token to be set for LLM API authentication.
- class qianfan.config.GlobalConfig(_env_file: Optional[Union[str, PathLike, List[Union[str, PathLike]], Tuple[Union[str, PathLike], ...]]] = '<object object>', _env_file_encoding: Optional[str] = None, _env_nested_delimiter: Optional[str] = None, _secrets_dir: Optional[Union[str, PathLike]] = None, *, AK: Optional[str] = None, SK: Optional[str] = None, ACCESS_KEY: Optional[str] = None, SECRET_KEY: Optional[str] = None, ACCESS_TOKEN: Optional[str] = None, BASE_URL: str = 'https://aip.baidubce.com', AUTH_TIMEOUT: float = 5, DISABLE_EB_SDK: bool = True, EB_SDK_INSTALLED: bool = False, IAM_SIGN_EXPIRATION_SEC: int = 300, CONSOLE_API_BASE_URL: str = 'https://qianfan.baidubce.com', ACCESS_TOKEN_REFRESH_MIN_INTERVAL: float = 3600, QPS_LIMIT: float = 0, APPID: Optional[int] = None, ENABLE_PRIVATE: bool = False, ENABLE_AUTH: Optional[bool] = None, ACCESS_CODE: Optional[str] = None, IMPORT_STATUS_POLLING_INTERVAL: float = 2, EXPORT_STATUS_POLLING_INTERVAL: float = 2, RELEASE_STATUS_POLLING_INTERVAL: float = 2, EXPORT_FILE_SIZE_LIMIT: int = 2147483648, ETL_STATUS_POLLING_INTERVAL: float = 2, GET_ENTITY_CONTENT_FAILED_RETRY_TIMES: int = 3, TRAIN_STATUS_POLLING_INTERVAL: float = 30, TRAINER_STATUS_POLLING_BACKOFF_FACTOR: float = 3, TRAINER_STATUS_POLLING_RETRY_TIMES: float = 3, MODEL_PUBLISH_STATUS_POLLING_INTERVAL: float = 30, BATCH_RUN_STATUS_POLLING_INTERVAL: float = 30, DEPLOY_STATUS_POLLING_INTERVAL: float = 30, DEFAULT_FINE_TUNE_TRAIN_TYPE: str = 'ERNIE-Bot-turbo-0725', LLM_API_RETRY_COUNT: int = 1, LLM_API_RETRY_TIMEOUT: int = 60, LLM_API_RETRY_BACKOFF_FACTOR: float = 1, LLM_API_RETRY_JITTER: float = 1, LLM_API_RETRY_MAX_WAIT_INTERVAL: float = 120, LLM_API_RETRY_ERR_CODES: set = {18, 336100}, CONSOLE_API_RETRY_COUNT: int = 1, CONSOLE_API_RETRY_TIMEOUT: int = 60, CONSOLE_API_RETRY_JITTER: float = 1, CONSOLE_API_RETRY_MAX_WAIT_INTERVAL: float = 120, CONSOLE_API_RETRY_ERR_CODES: set = {18, 336100, 500000}, CONSOLE_API_RETRY_BACKOFF_FACTOR: int = 0, EVALUATION_ONLINE_POLLING_INTERVAL: float = 30, BOS_HOST_REGION: str = 'bj', SSL_VERIFICATION_ENABLED: bool = True, PROXY: str = '', FILE_ENCODING: str = 'utf-8')[source]
Bases:
BaseSettings
The global config of the whole qianfan sdk
- ACCESS_CODE: Optional[str]
- ACCESS_KEY: Optional[str]
- ACCESS_TOKEN: Optional[str]
- ACCESS_TOKEN_REFRESH_MIN_INTERVAL: float
- AK: Optional[str]
- APPID: Optional[int]
- AUTH_TIMEOUT: float
- BASE_URL: str
- BATCH_RUN_STATUS_POLLING_INTERVAL: float
- BOS_HOST_REGION: str
- CONSOLE_API_BASE_URL: str
- CONSOLE_API_RETRY_BACKOFF_FACTOR: int
- CONSOLE_API_RETRY_COUNT: int
- CONSOLE_API_RETRY_ERR_CODES: set
- CONSOLE_API_RETRY_JITTER: float
- CONSOLE_API_RETRY_MAX_WAIT_INTERVAL: float
- CONSOLE_API_RETRY_TIMEOUT: int
- class Config[source]
Bases:
object
- case_sensitive = True
- env_file_encoding = 'utf-8'
- env_prefix = 'QIANFAN_'
- DEFAULT_FINE_TUNE_TRAIN_TYPE: str
- DEPLOY_STATUS_POLLING_INTERVAL: float
- DISABLE_EB_SDK: bool
- EB_SDK_INSTALLED: bool
- ENABLE_AUTH: Optional[bool]
- ENABLE_PRIVATE: bool
- ETL_STATUS_POLLING_INTERVAL: float
- EVALUATION_ONLINE_POLLING_INTERVAL: float
- EXPORT_FILE_SIZE_LIMIT: int
- EXPORT_STATUS_POLLING_INTERVAL: float
- FILE_ENCODING: str
- GET_ENTITY_CONTENT_FAILED_RETRY_TIMES: int
- IAM_SIGN_EXPIRATION_SEC: int
- IMPORT_STATUS_POLLING_INTERVAL: float
- LLM_API_RETRY_BACKOFF_FACTOR: float
- LLM_API_RETRY_COUNT: int
- LLM_API_RETRY_ERR_CODES: set
- LLM_API_RETRY_JITTER: float
- LLM_API_RETRY_MAX_WAIT_INTERVAL: float
- LLM_API_RETRY_TIMEOUT: int
- MODEL_PUBLISH_STATUS_POLLING_INTERVAL: float
- PROXY: str
- QPS_LIMIT: float
- RELEASE_STATUS_POLLING_INTERVAL: float
- SECRET_KEY: Optional[str]
- SK: Optional[str]
- SSL_VERIFICATION_ENABLED: bool
- TRAINER_STATUS_POLLING_BACKOFF_FACTOR: float
- TRAINER_STATUS_POLLING_RETRY_TIMES: float
- TRAIN_STATUS_POLLING_INTERVAL: float
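Since GlobalConfig derives from BaseSettings with env_prefix 'QIANFAN_' and case_sensitive True, each field can also be supplied through an environment variable. A stdlib-only sketch of that prefix mapping (not the SDK's actual loader, which pydantic provides):

```python
import os

def load_prefixed_settings(prefix: str = "QIANFAN_") -> dict:
    """Collect environment variables carrying the given prefix and strip
    it, roughly mirroring how a BaseSettings env_prefix works. Matching
    is case-sensitive, as in the Config class above."""
    return {
        key[len(prefix):]: value
        for key, value in os.environ.items()
        if key.startswith(prefix)
    }

os.environ["QIANFAN_BASE_URL"] = "https://aip.baidubce.com"
print(load_prefixed_settings()["BASE_URL"])  # https://aip.baidubce.com
```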
- qianfan.config.SK(sk: str) None [source]
Set the Secret Key (SK) for LLM API authentication. The Secret Key is paired with the API Key.
This function allows you to set the Secret Key that will be used for authentication throughout the entire SDK. The Secret Key can be acquired from the qianfan console: https://console.bce.baidu.com/qianfan/ais/console/applicationConsole/application
- Parameters:
- sk (str):
The Secret Key to be set for LLM API authentication.
- qianfan.config.SecretKey(secret_key: str) None [source]
Set the Secret Key for console API authentication. The Secret Key is paired with the Access Key.
This function allows you to set the Secret Key that will be used for authentication throughout the entire SDK. The Secret Key can be acquired from the baidu bce console: https://console.bce.baidu.com/iam/#/iam/accesslist
- Parameters:
- secret_key (str):
The Secret Key to be set for console API authentication.
- qianfan.config.get_config() GlobalConfig [source]
qianfan.consts module
Constants used in the qianfan sdk
- class qianfan.consts.APIErrorCode(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]
Bases:
Enum
Error code from API return value
- APINameNotExist = 336005
- APITokenExpired = 111
- APITokenInvalid = 110
- AppNotExist = 15
- ConsoleInternalError = 500000
- DailyLimitReached = 17
- GetServiceTokenFailed = 13
- InternalError = 336000
- InvalidArgument = 336001
- InvalidArgumentSystem = 336104
- InvalidArgumentUserSetting = 336105
- InvalidHTTPMethod = 336101
- InvalidJSON = 336002
- InvalidParam = 336003
- InvalidRequest = 100
- NoError = 0
- NoPermissionToAccessData = 6
- PermissionError = 336004
- QPSLimitReached = 18
- RequestLimitReached = 4
- ServerHighLoad = 336100
- TotalRequestLimitReached = 19
- UnknownError = 1
- UnsupportedMethod = 3
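The default retry sets documented in GlobalConfig (LLM_API_RETRY_ERR_CODES = {18, 336100}) reference codes from this enum. A sketch of a retryability check, mirroring a few of the codes above rather than importing the SDK:

```python
from enum import Enum

class APIErrorCode(Enum):
    # A small mirror of the enum above, for illustration only.
    QPSLimitReached = 18
    ServerHighLoad = 336100
    DailyLimitReached = 17

# Default LLM retry set from GlobalConfig.LLM_API_RETRY_ERR_CODES.
RETRYABLE = {APIErrorCode.QPSLimitReached.value, APIErrorCode.ServerHighLoad.value}

def should_retry(error_code: int) -> bool:
    """Transient errors (QPS limit, high load) are worth retrying;
    quota errors such as DailyLimitReached are not."""
    return error_code in RETRYABLE

print(should_retry(18))  # True
print(should_retry(17))  # False
```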
- class qianfan.consts.Consts[source]
Bases:
object
Constants used by the qianfan sdk
- AppListAPI: str = '/wenxinworkshop/service/appList'
- AuthAPI: str = '/oauth/2.0/token'
- DatasetAnnotateAPI: str = '/wenxinworkshop/entity/annotate'
- DatasetAugListTaskAPI: str = '/wenxinworkshop/enhance/list'
- DatasetAugTaskDeleteAPI: str = '/wenxinworkshop/enhance/delete'
- DatasetAugTaskInfoAPI: str = '/wenxinworkshop/enhance/detail'
- DatasetCreateAPI: str = '/wenxinworkshop/dataset/create'
- DatasetCreateAugTaskAPI: str = '/wenxinworkshop/enhance/create'
- DatasetCreateETLTaskAPI: str = '/wenxinworkshop/etl/create'
- DatasetDeleteAPI: str = '/wenxinworkshop/dataset/delete'
- DatasetETLListTaskAPI: str = '/wenxinworkshop/etl/list'
- DatasetETLTaskDeleteAPI: str = '/wenxinworkshop/etl/delete'
- DatasetETLTaskInfoAPI: str = '/wenxinworkshop/etl/detail'
- DatasetEntityDeleteAPI: str = '/wenxinworkshop/entity/delete'
- DatasetEntityListAPI: str = '/wenxinworkshop/entity/list'
- DatasetExportAPI: str = '/wenxinworkshop/dataset/export'
- DatasetExportRecordAPI: str = '/wenxinworkshop/dataset/exportRecord'
- DatasetImportAPI: str = '/wenxinworkshop/dataset/import'
- DatasetImportErrorDetail: str = '/wenxinworkshop/dataset/importErrorDetail'
- DatasetInfoAPI: str = '/wenxinworkshop/dataset/info'
- DatasetReleaseAPI: str = '/wenxinworkshop/dataset/release'
- DatasetStatusFetchInBatchAPI: str = '/wenxinworkshop/dataset/statusList'
- EBTokenizerAPI: str = '/rpc/2.0/ai_custom/v1/wenxinworkshop/tokenizer/erniebot'
- FineTuneCreateJobAPI: str = '/wenxinworkshop/finetune/createJob'
- FineTuneCreateTaskAPI: str = '/wenxinworkshop/finetune/createTask'
- FineTuneGetJobAPI: str = '/wenxinworkshop/finetune/jobDetail'
- FineTuneStopJobAPI: str = '/wenxinworkshop/finetune/stopJob'
- ModelAPIPrefix: str = '/rpc/2.0/ai_custom/v1/wenxinworkshop'
- ModelBatchDeleteAPI: str = '/wenxinworkshop/modelrepo/model/batchDelete'
- ModelDetailAPI: str = '/wenxinworkshop/modelrepo/modelDetail'
- ModelEvalCreateAPI: str = '/wenxinworkshop/modelrepo/eval/create'
- ModelEvalInfoAPI: str = '/wenxinworkshop/modelrepo/eval/detail'
- ModelEvalResultAPI: str = '/wenxinworkshop/modelrepo/eval/report'
- ModelEvalResultExportAPI: str = '/wenxinworkshop/modelrepo/eval/result/export'
- ModelEvalResultExportStatusAPI: str = '/wenxinworkshop/modelrepo/eval/result/export/info'
- ModelEvalStopAPI: str = '/wenxinworkshop/modelrepo/eval/cancel'
- ModelPresetListAPI: str = '/wenxinworkshop/modelrepo/model/preset/list'
- ModelPublishAPI: str = '/wenxinworkshop/modelrepo/publishTrainModel'
- ModelUserListAPI: str = '/wenxinworkshop/modelrepo/model/user/list'
- ModelVersionBatchDeleteAPI: str = '/wenxinworkshop/modelrepo/model/version/batchDelete'
- ModelVersionDetailAPI: str = '/wenxinworkshop/modelrepo/modelVersionDetail'
- PromptCreateAPI: str = '/wenxinworkshop/prompt/template/create'
- PromptCreateOptimizeTaskAPI: str = '/wenxinworkshop/prompt/singleOptimize/create'
- PromptDeleteAPI: str = '/wenxinworkshop/prompt/template/delete'
- PromptEvaluationAPI: str = '/wenxinworkshop/prompt/evaluate/predict'
- PromptEvaluationSummaryAPI: str = '/wenxinworkshop/prompt/evaluate/summary'
- PromptGetOptimizeTaskInfoAPI: str = '/wenxinworkshop/prompt/singleOptimize/info'
- PromptInfoAPI: str = '/wenxinworkshop/prompt/template/info'
- PromptLabelListAPI: str = '/wenxinworkshop/prompt/label/list'
- PromptListAPI: str = '/wenxinworkshop/prompt/template/list'
- PromptRenderAPI: str = '/rest/2.0/wenxinworkshop/api/v1/template/info'
- PromptUpdateAPI: str = '/wenxinworkshop/prompt/template/update'
- QianfanRequestIdDefaultPrefix: str = 'sdk-py-0.2.9'
- STREAM_RESPONSE_EVENT_PREFIX: str = 'event: '
- STREAM_RESPONSE_PREFIX: str = 'data: '
- ServiceCreateAPI: str = '/wenxinworkshop/service/apply'
- ServiceDetailAPI: str = '/wenxinworkshop/service/detail'
- ServiceListAPI: str = '/wenxinworkshop/service/list'
- XRequestID: str = 'Request_id'
- XResponseID: str = 'X-Baidu-Request-Id'
- class qianfan.consts.DefaultLLMModel[source]
Bases:
object
Default LLM models in the qianfan sdk
- ChatCompletion = 'ERNIE-Bot-turbo'
- Completion = 'ERNIE-Bot-turbo'
- Embedding = 'Embedding-V1'
- Text2Image = 'Stable-Diffusion-XL'
- class qianfan.consts.DefaultValue[source]
Bases:
object
Default values used by the qianfan sdk
- AK: str = ''
- AccessCode: str = ''
- AccessToken: str = ''
- AccessTokenRefreshMinInterval: float = 3600
- AuthTimeout: float = 5
- BaseURL: str = 'https://aip.baidubce.com'
- BatchRunStatusPollingInterval: float = 30
- BosHostRegion: str = 'bj'
- ConsoleAK: str = ''
- ConsoleAPIBaseURL: str = 'https://qianfan.baidubce.com'
- ConsoleRetryBackoffFactor: float = 0
- ConsoleRetryCount: int = 1
- ConsoleRetryErrCodes: Set = {18, 336100, 500000}
- ConsoleRetryJitter: int = 1
- ConsoleRetryMaxWaitInterval: float = 120
- ConsoleRetryTimeout: float = 60
- ConsoleSK: str = ''
- DefaultFinetuneTrainType: str = 'ERNIE-Bot-turbo-0725'
- DeployStatusPollingInterval: float = 30
- DisableErnieBotSDK: bool = True
- DotEnvConfigFile: str = '.env'
- ETLStatusPollingInterval: float = 2
- EnablePrivate: bool = False
- EvaluationOnlinePollingInterval: float = 30
- ExportFileSizeLimit: int = 2147483648
- ExportStatusPollingInterval: float = 2
- FileEncoding: str = 'utf-8'
- GetEntityContentFailedRetryTimes: int = 3
- IAMSignExpirationSeconds: int = 300
- ImportStatusPollingInterval: float = 2
- ModelPublishStatusPollingInterval: float = 30
- Proxy: str = ''
- QpsLimit: float = 0
- ReleaseStatusPollingInterval: float = 2
- RetryBackoffFactor: float = 1
- RetryCount: int = 1
- RetryErrCodes: Set = {18, 336100}
- RetryJitter: float = 1
- RetryMaxWaitInterval: float = 120
- RetryTimeout: float = 60
- SK: str = ''
- SSLVerificationEnabled: bool = True
- TrainStatusPollingInterval: float = 30
- TrainerStatusPollingBackoffFactor: float = 3
- TrainerStatusPollingRetryTimes: float = 3
- TruncatedContinuePrompt = '继续'
- class qianfan.consts.Env[source]
Bases:
object
Environment variable names used by the qianfan sdk
- AK: str = 'QIANFAN_AK'
- AccessCode: str = 'QIANFAN_PRIVATE_ACCESS_CODE'
- AccessKey: str = 'QIANFAN_ACCESS_KEY'
- AccessToken: str = 'QIANFAN_ACCESS_TOKEN'
- AccessTokenRefreshMinInterval: str = 'QIANFAN_ACCESS_TOKEN_REFRESH_MIN_INTERVAL'
- AuthTimeout: str = 'QIANFAN_AUTH_TIMEOUT'
- BaseURL: str = 'QIANFAN_BASE_URL'
- ConsoleAPIBaseURL: str = 'QIANFAN_CONSOLE_API_BASE_URL'
- ConsoleRetryBackoffFactor: str = 'QIANFAN_CONSOLE_API_RETRY_BACKOFF_FACTOR'
- ConsoleRetryCount: str = 'QIANFAN_CONSOLE_API_RETRY_COUNT'
- ConsoleRetryTimeout: str = 'QIANFAN_CONSOLE_API_RETRY_TIMEOUT'
- DisableErnieBotSDK: str = 'QIANFAN_DISABLE_EB_SDK'
- DotEnvConfigFile: str = 'QIANFAN_DOT_ENV_CONFIG_FILE'
- ETLStatusPollingInterval: str = 'QIANFAN_ETL_STATUS_POLLING_INTERVAL'
- EnablePrivate: str = 'QIANFAN_ENABLE_PRIVATE'
- ExportFileSizeLimit: str = 'QIANFAN_EXPORT_FILE_SIZE_LIMIT'
- ExportStatusPollingInterval: str = 'QIANFAN_EXPORT_STATUS_POLLING_INTERVAL'
- FileEncoding: str = 'QIANFAN_FILE_ENCODING'
- GetEntityContentFailedRetryTimes: str = 'QIANFAN_GET_ENTITY_CONTENT_FAILED_RETRY_TIMES'
- IAMSignExpirationSeconds: str = 'QIANFAN_IAM_SIGN_EXPIRATION_SEC'
- ImportStatusPollingInterval: str = 'QIANFAN_IMPORT_STATUS_POLLING_INTERVAL'
- Proxy: str = 'QIANFAN_PROXY'
- QpsLimit: str = 'QIANFAN_QPS_LIMIT'
- ReleaseStatusPollingInterval: str = 'QIANFAN_RELEASE_STATUS_POLLING_INTERVAL'
- RetryBackoffFactor: str = 'QIANFAN_LLM_API_RETRY_BACKOFF_FACTOR'
- RetryCount: str = 'QIANFAN_LLM_API_RETRY_COUNT'
- RetryTimeout: str = 'QIANFAN_LLM_API_RETRY_TIMEOUT'
- SK: str = 'QIANFAN_SK'
- SSLVerificationEnabled: str = 'QIANFAN_SSL_VERIFICATION_ENABLED'
- SecretKey: str = 'QIANFAN_SECRET_KEY'
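Any of the names above can be set in the process environment (or in the dotenv file named by DotEnvConfigFile) before the sdk reads its configuration. A sketch with placeholder values, not real credentials:

```python
import os

# Placeholder values for illustration only — not real credentials.
os.environ["QIANFAN_ACCESS_KEY"] = "my-access-key"
os.environ["QIANFAN_SECRET_KEY"] = "my-secret-key"
# Non-string settings are passed as strings and parsed by the config loader.
os.environ["QIANFAN_LLM_API_RETRY_COUNT"] = "3"

print(os.environ["QIANFAN_ACCESS_KEY"])  # my-access-key
```

Setting these before the configuration is first read avoids having to call qianfan.AccessKey / qianfan.SecretKey in code.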
- class qianfan.consts.PromptFrameworkType(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]
Bases:
int, Enum
- Basic: int = 1
Basic framework
- CRISPE: int = 2
CRISPE framework
- Fewshot: int = 3
Few-shot framework
- NotUse: int = 0
No framework
- class qianfan.consts.PromptSceneType(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]
Bases:
int, Enum
- Text2Image: int = 2
Text-to-image
- Text2Text: int = 1
Text-to-text
qianfan.errors module
The collection of errors for this library
- exception qianfan.errors.APIError(error_code: int, error_msg: str, req_id: Any)[source]
Bases:
QianfanError
Base exception class for qianfan API errors
- exception qianfan.errors.AccessTokenExpiredError[source]
Bases:
QianfanError
Exception when access token is expired
- exception qianfan.errors.ArgumentNotFoundError[source]
Bases:
QianfanError
Exception when the argument is not found
- exception qianfan.errors.FileSizeOverflow[source]
Bases:
Exception
Exception when zip file is too big
- exception qianfan.errors.InternalError[source]
Bases:
QianfanError
Exception when internal error occurs
- exception qianfan.errors.InvalidArgumentError[source]
Bases:
QianfanError
Exception when the argument is invalid
- exception qianfan.errors.NotImplmentError[source]
Bases:
QianfanError
Exception raised when code is not implemented
- exception qianfan.errors.QianfanError[source]
Bases:
Exception
Base exception class for the qianfan sdk.
- exception qianfan.errors.QianfanRequestError[source]
Bases:
Exception
Exception when a request to qianfan fails
- exception qianfan.errors.RequestError[source]
Bases:
QianfanError
Exception when an API request fails
- exception qianfan.errors.RequestTimeoutError[source]
Bases:
QianfanError
Exception when an API request times out
qianfan.version module
version specification