Base Agent

The base_agent module contains the foundational components for agent interaction. It provides a basic structure for agents that can interact with LLMs and track their own performance.

AgentResponse dataclass

Data class to store responses generated by an agent.

Attributes:
  • agent_id (str) –

    Unique identifier for the agent.

  • response (Any) –

    The generated response from the language model.

  • confidence (float) –

    Confidence score of the response, typically between 0 and 1.

  • metadata (Dict[str, Any]) –

    Additional metadata about the response, such as model information.

Source code in llamarch/common/base_agent.py
@dataclass
class AgentResponse:
    """
    Data class to store responses generated by an agent.

    Attributes
    ----------
    agent_id : str
        Unique identifier for the agent.
    response : Any
        The generated response from the language model.
    confidence : float
        Confidence score of the response, typically between 0 and 1.
    metadata : Dict[str, Any]
        Additional metadata about the response, such as model information.
    """
    agent_id: str
    response: Any
    confidence: float
    metadata: Dict[str, Any]
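
For illustration, an AgentResponse can be constructed directly; the field values below are hypothetical.

from llamarch.common.base_agent import AgentResponse

# Hypothetical values for illustration only
response = AgentResponse(
    agent_id="agent-001",
    response="The capital of France is Paris.",
    confidence=1.0,
    metadata={"model_name": "gpt-4"},
)
print(response.confidence)  # 1.0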

GenerativeAIAgent

A generative AI agent that can generate responses and track performance.

Parameters:
  • agent_id (str) –

    Unique identifier for the agent.

  • llm (LLM) –

    An instance of the LLM class, initialized with the desired model.

  • embedding (LLMEmbedding, default: None) –

    An instance of the LLMEmbedding class, initialized with the desired embedding model (default is None).
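
A minimal instantiation sketch follows. The LLM import path and constructor arguments are assumptions for illustration; this module only relies on the llm object exposing generate() and a model_name attribute.

from llamarch.common.base_agent import GenerativeAIAgent
from llamarch.common.llm import LLM  # hypothetical import path

# LLM constructor arguments are assumed; the class is defined outside this module
llm = LLM(model_name="gpt-4")
agent = GenerativeAIAgent(agent_id="agent-001", llm=llm)  # embedding defaults to None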

Source code in llamarch/common/base_agent.py
class GenerativeAIAgent:
    """
    A generative AI agent that can generate responses and track performance.

    Parameters
    ----------
    agent_id : str
        Unique identifier for the agent.
    llm : LLM
        An instance of the LLM class, initialized with the desired model.
    embedding : LLMEmbedding, optional
        An instance of the LLMEmbedding class, initialized with the desired embedding model (default is None).
    """

    def __init__(self, agent_id: str, llm: LLM, embedding: Optional[LLMEmbedding] = None):
        self.agent_id = agent_id
        self.llm = llm
        self.embedding = embedding
        self.performance_history: List[float] = []
        self.logger = logging.getLogger(f"Agent-{agent_id}")

    async def generate_response(self, query: str) -> AgentResponse:
        """
        Generate a response to the given query using the language model.

        Parameters
        ----------
        query : str
            The input query to respond to.

        Returns
        -------
        AgentResponse
            An object containing the response text, confidence score, and metadata.

        Notes
        -----
        Currently, the confidence score is set to a default value of 1.0 and can be adjusted based on future requirements.
        """
        response_text = self.llm.generate(query)
        confidence = 1.0  # Placeholder; this could be dynamically calculated if desired
        metadata = {
            "model_name": self.llm.model_name
        }
        return AgentResponse(agent_id=self.agent_id, response=response_text, confidence=confidence, metadata=metadata)

    def update_performance(self, score: float):
        """
        Update the agent's performance history with a new score.

        Parameters
        ----------
        score : float
            The performance score to add to the agent's history.

        Notes
        -----
        This method also logs the updated performance score.
        """
        self.performance_history.append(score)
        self.logger.info(f"Agent {self.agent_id} performance updated: {score}")

    @property
    def average_performance(self) -> float:
        """
        Calculate the agent's average performance score.

        Returns
        -------
        float
            The average score from the agent's performance history. Returns 0.0 if no history is available.

        Notes
        -----
        This property uses NumPy's `mean` function to compute the average.
        """
        return np.mean(self.performance_history) if self.performance_history else 0.0

average_performance: float property

Calculate the agent's average performance score.

Returns:
  • float

    The average score from the agent's performance history. Returns 0.0 if no history is available.

Notes

This property uses NumPy's mean function to compute the average.

generate_response(query) async

Generate a response to the given query using the language model.

Parameters:
  • query (str) –

    The input query to respond to.

Returns:
  • AgentResponse

    An object containing the response text, confidence score, and metadata.

Notes

Currently, the confidence score is set to a default value of 1.0 and can be adjusted based on future requirements.

Source code in llamarch/common/base_agent.py
async def generate_response(self, query: str) -> AgentResponse:
    """
    Generate a response to the given query using the language model.

    Parameters
    ----------
    query : str
        The input query to respond to.

    Returns
    -------
    AgentResponse
        An object containing the response text, confidence score, and metadata.

    Notes
    -----
    Currently, the confidence score is set to a default value of 1.0 and can be adjusted based on future requirements.
    """
    response_text = self.llm.generate(query)
    confidence = 1.0  # Placeholder; this could be dynamically calculated if desired
    metadata = {
        "model_name": self.llm.model_name
    }
    return AgentResponse(agent_id=self.agent_id, response=response_text, confidence=confidence, metadata=metadata)
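
Because generate_response is a coroutine, it must be awaited. A minimal usage sketch, assuming an agent instance constructed as shown earlier:

import asyncio

async def main():
    result = await agent.generate_response("What is the capital of France?")
    print(result.response)  # the generated text
    print(result.metadata)  # {"model_name": ...}

asyncio.run(main())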

update_performance(score)

Update the agent's performance history with a new score.

Parameters:
  • score (float) –

    The performance score to add to the agent's history.

Notes

This method also logs the updated performance score.

Source code in llamarch/common/base_agent.py
def update_performance(self, score: float):
    """
    Update the agent's performance history with a new score.

    Parameters
    ----------
    score : float
        The performance score to add to the agent's history.

    Notes
    -----
    This method also logs the updated performance score.
    """
    self.performance_history.append(score)
    self.logger.info(f"Agent {self.agent_id} performance updated: {score}")
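
To illustrate performance tracking, the sketch below records a few hypothetical scores and reads back the running average via the average_performance property:

# Hypothetical scores for illustration
agent.update_performance(0.8)
agent.update_performance(0.9)
agent.update_performance(1.0)

print(agent.average_performance)  # ≈ 0.9 (NumPy mean of the recorded scores)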