Registering a custom tool with `load_tools`
The issue seems to be that `load_tools` doesn't support loading custom tools at all; instead it complains with `TypeError: unhashable type`. A workaround is to initialize an empty tools list with `load_tools`, and then extend it with your custom ones:
```python
tools = load_tools([], llm=llm) + [OhMyCustomTool(target_directory=target_directory, files=files)]
```
– via this GitHub Issue
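To make the pattern concrete, here is a self-contained sketch. Note that `OhMyCustomTool` is a stand-in dataclass for illustration only; a real custom tool would subclass LangChain's `BaseTool`, and `builtin_tools` stands in for the result of `load_tools([], llm=llm)`:

```python
from dataclasses import dataclass, field

# Stand-in for the custom tool; in practice this would subclass
# LangChain's BaseTool rather than being a plain dataclass.
@dataclass
class OhMyCustomTool:
    target_directory: str
    files: list = field(default_factory=list)

# Passing a custom tool name to load_tools fails with
# "TypeError: unhashable type", so build the list yourself:
builtin_tools = []  # in practice: load_tools([], llm=llm)
tools = builtin_tools + [OhMyCustomTool(target_directory="/tmp", files=["a.txt"])]
```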
Errors when initializing AzureChatOpenAI
Specifically, this beauty: "As of openai>=1.0.0, Azure endpoints should be specified via the `azure_endpoint` param not `openai_api_base` (or alias `base_url`). (type=value_error)".
Such a frustrating issue; I can't believe it hasn't been fixed already. What happens is that if you have ANY kind of `OPENAI_API_BASE` environment variable set, the overly eager validator throws this error, even if you specify all parameters in `AzureChatOpenAI`'s constructor, as seen below:
```python
llm = AzureChatOpenAI(
    api_key=config["AZURE_OPENAI_CHAT_KEY"],
    azure_endpoint=config["AZURE_OPENAI_CHAT_ENDPOINT"],
    api_version=config["AZURE_OPENAI_CHAT_API_VERSION"],
    model=config["AZURE_OPENAI_CHAT_MODEL"],
)
```
One way to debug this is to cycle through `os.environ`, looking for `OPENAI_` keys:
```python
import os

for key, value in os.environ.items():
    if "OPENAI_" in key:
        print(f"{key}: {value}")
```
One fix is to remove `OPENAI_API_BASE` from `os.environ` entirely, by whatever means works for your setup.
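For example, popping the variable before constructing the client is enough; using `pop` with a default avoids a `KeyError` when the variable isn't set:

```python
import os

# Remove the offending variable if present; the None default
# makes this a no-op when the variable isn't set.
os.environ.pop("OPENAI_API_BASE", None)
```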
Another way would be to instantiate it like this:
```python
llm = AzureChatOpenAI(
    api_key=config["AZURE_OPENAI_CHAT_KEY"],
    azure_endpoint=config["AZURE_OPENAI_CHAT_ENDPOINT"],
    api_version=config["AZURE_OPENAI_CHAT_API_VERSION"],
    model=config["AZURE_OPENAI_CHAT_MODEL"],
    validate_base_url=False,
)
```
So terribly frustrating. Just so you know, here is the guilty code; you can also follow this issue for (hopefully) updates and/or a fix.