Define the data classes using Pydantic, making it possible to configure the chat application and validate its inputs.
Import statement:

```python
from gradiochat.config import *
```
Some lessons and clarifications about the code used:

- **Pydantic vs dataclasses:** Pydantic creates classes similar to Python's dataclasses but with additional features. The key differences are that Pydantic provides data validation, type coercion, and more robust error handling; it automatically validates data during initialization and conversion.
- **Pydantic and typing:** Pydantic builds on Python's standard typing system: it uses the type hints to know which types to validate against, and adds its own validation on top.
- **The `...` placeholder:** The ellipsis (`...`) is a special value in Pydantic that marks a required field. It means "this field must be provided when creating an instance"; there is no default value. When you create a `ModelConfig` instance, you must provide a value for `model_name`.
- **`@property` usage:** The `@property` decorator creates a getter method that is accessed like an attribute. In our case, `api_key` looks like a normal attribute, but accessing it runs the method that retrieves the value from environment variables. This is a clean way to avoid storing sensitive information in the object itself.
- **`Field`:** Pydantic's `Field` can attach extra information and metadata to inform the reader and/or perform data validation.
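The points above can be shown in a minimal, self-contained sketch. Note that `DemoConfig` and its fields are invented for this illustration and are not the actual gradiochat code:

```python
import os
from typing import Optional

from pydantic import BaseModel, ConfigDict, Field, ValidationError

class DemoConfig(BaseModel):
    # Pydantic v2 normally reserves the "model_" prefix; this opts out so we
    # can use a model_name field, as gradiochat's classes do
    model_config = ConfigDict(protected_namespaces=())

    # `...` marks model_name as required; Field attaches metadata for the reader
    model_name: str = Field(..., description="Name or path of the model")
    # ge/le make Field validate the allowed range on initialization
    temperature: float = Field(0.7, ge=0.0, le=2.0)
    api_key_env_var: Optional[str] = None

    @property
    def api_key(self) -> Optional[str]:
        # Looked up on access, so the key itself is never stored on the object
        return os.environ.get(self.api_key_env_var) if self.api_key_env_var else None

cfg = DemoConfig(model_name="demo", temperature="0.5")  # "0.5" is coerced to float
print(cfg.temperature)  # 0.5

try:
    DemoConfig()  # model_name missing -> ValidationError
except ValidationError as err:
    print(f"{len(err.errors())} validation error(s)")
```

Creating an instance with a missing required field or an out-of-range temperature raises a `ValidationError` instead of silently producing a broken object.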
The configuration that drives the LLM chatbot.

### ModelConfig

First, the configuration for the LLM model to use, in `ModelConfig`.

```python
ModelConfig (data: Any) -> None
```

*Configuration for the LLM model*
| Variable | Type | Default | Details |
|---|---|---|---|
| model_name | str | *required* | Name or path of the model to use |
| provider | str | 'huggingface' | Model provider (huggingface, openai, etc.) |
| api_key_env_var | Optional[str] | None | Environment variable name for the API key |
| api_base_url | Optional[str] | None | Base URL for API requests |
| temperature | float | 0.7 | Temperature for generation |
| max_completion_tokens | int | 1024 | Maximum tokens to generate |
| top_p | float | 0.7 | Adjusts the number of choices for each predicted token [0-1] |
| top_k | int | 50 | Limits the number of choices for the next predicted token. Not available for the OpenAI API |
| frequency_penalty | float | 0 | Reduces the likelihood of repeating prompt text or getting stuck in a loop [-2, 2] |
| stop | Optional[list[str]] | [':', '<\|endoftext\|>'] | Sequences that stop generation |
| stream | Optional[bool] | None | If set to true, the model response data is streamed to the client as it is generated, using server-sent events |
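As a sketch of how these fields behave, here is a standalone stand-in mirroring a subset of the table above, with the documented ranges expressed as `Field` constraints. The real `ModelConfig` implementation may differ, and the model name below is only an example:

```python
from typing import Optional

from pydantic import BaseModel, ConfigDict, Field

class ModelConfigSketch(BaseModel):
    # Opt out of Pydantic v2's reserved "model_" prefix for the model_name field
    model_config = ConfigDict(protected_namespaces=())

    model_name: str = Field(..., description="Name or path of the model to use")
    provider: str = "huggingface"
    temperature: float = 0.7
    max_completion_tokens: int = 1024
    top_p: float = Field(0.7, ge=0.0, le=1.0)               # [0-1] per the table
    top_k: int = 50                                          # not used by the OpenAI API
    frequency_penalty: float = Field(0.0, ge=-2.0, le=2.0)   # [-2, 2] per the table
    stop: Optional[list[str]] = None

# Only the required field is passed; everything else falls back to its default
cfg = ModelConfigSketch(model_name="mistralai/Mistral-7B-Instruct-v0.2")
print(cfg.provider)  # huggingface
print(cfg.top_p)     # 0.7
```

Passing e.g. `top_p=1.5` would fail validation at construction time, so a misconfigured range never reaches the API call.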
### Message config

Next, the configuration of the message system.

```python
Message (data: Any) -> None
```

*A message in a conversation*
| Variable | Type | Default | Details |
|---|---|---|---|
| role | Literal['system', 'user', 'assistant'] | *required* | Role of the message sender |
| content | str | *required* | Content of the message |
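A short sketch of how the `Literal` type restricts `role` to the three allowed values (`MessageSketch` is a stand-in for illustration, not the real `Message` class):

```python
from typing import Literal

from pydantic import BaseModel, ValidationError

class MessageSketch(BaseModel):
    role: Literal["system", "user", "assistant"]  # only these three values pass
    content: str

msg = MessageSketch(role="user", content="Hello!")
print(msg.role)  # user

try:
    MessageSketch(role="moderator", content="nope")  # not in the Literal -> error
except ValidationError:
    print("invalid role rejected")
```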
### Chat config

Then the configuration for the chat implementation, making sure the application can handle:

- a system prompt
- context, if applicable
- a starter 'user' prompt, if applicable
- user input

The other settings available in this class can easily be inferred from the descriptions in the `ChatAppConfig` class itself.

```python
ChatAppConfig (data: Any) -> None
```

*Main configuration for a chat application*
| Variable | Type | Default | Details |
|---|---|---|---|
| app_name | str | *required* | Name of the application |
| description | str | '' | Description of the application |
| system_prompt | str | *required* | System prompt for the LLM |
| starter_prompt | Optional[str] | None | Initial prompt to start the conversation |
| context_files | list[Path] | [] | List of markdown files for additional context |
| model | ModelConfig | *required* | (see the ModelConfig table) |
| theme | Optional[Any] | None | Gradio theme to use |
| logo_path | Optional[Path] | None | Path to logo image |
| show_system_prompt | bool | True | Whether to show the system prompt in the UI |
| show_context | bool | True | Whether to show context in the UI |
An example configuration for a chat application:
```python
# Eval set to false, because the api key is stored in .env and thus can't be found when
# nbdev_test is run
test_config = ChatAppConfig(
    app_name="Test App",
    system_prompt="You are a helpful assistant.",
    model=ModelConfig(
        model_name="gpt-3.5-turbo",
        api_key_env_var="TEST_API_KEY",
    ),
)
print(test_config.model_dump_json(indent=2))
print(f"API Key available: {'Yes' if test_config.model.api_key else 'No'}")
```