[docs] How to host Azure OpenAI using Docker #1159
Comments
Thank you for the report. I think this makes sense. There is a little bit on using Azure with OpenAI here: https://www.guardrailsai.com/docs/how_to_guides/using_llms#azure-openai That should cover the extra environment variables that one needs to set to use things with Azure. Additionally, for Docker deployments, there's this set of docs: https://www.guardrailsai.com/docs/how_to_guides/hosting_with_docker I'm not sure if that will fully answer the questions you have, but it's perhaps a starting point while we figure out how to make the documentation clearer.
Given the existing documentation: the code below only works with standard OpenAI. What would be the procedure to revise it so that it works with Azure OpenAI?

    from openai import OpenAI

    client = OpenAI()  # client arguments were not shown in the original post

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the original model and messages were not shown
        messages=[{"role": "user", "content": "Hello, how are you?"}],
    )
    print(response.choices[0].message.content)
Assuming your Azure environment variables are set according to the linked documentation, it should only be a matter of a small change to how the client is constructed.
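For illustration, a minimal sketch of what that adjustment could look like, assuming the change is simply to swap the OpenAI client for AzureOpenAI and read the Azure settings from the documented environment variables (the deployment name below is a placeholder):

    import os
    from openai import AzureOpenAI

    # Assumes AZURE_API_KEY, AZURE_API_BASE, and AZURE_API_VERSION are already set,
    # as described in the linked Guardrails guide.
    client = AzureOpenAI(
        api_key=os.environ["AZURE_API_KEY"],
        azure_endpoint=os.environ["AZURE_API_BASE"],
        api_version=os.environ["AZURE_API_VERSION"],
    )

    response = client.chat.completions.create(
        model="my-deployment",  # placeholder Azure deployment name
        messages=[{"role": "user", "content": "Hello, how are you?"}],
    )
    print(response.choices[0].message.content)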
Yes, this works with guard(), but it does not work when using it as a server. Any idea on how to modify this link? Assume that upon Docker creation the environment has "AZURE_API_KEY", "AZURE_API_BASE", and "AZURE_API_VERSION".
Aha! I think I see the difficulty. It's worth trying to point the client at the guard's OpenAI-compatible endpoint (e.g. using http://127.0.0.1:8000/guards/gibberish_guard/openai/v1 as the base URL). I'll update this comment again if I can verify that this works. EDIT: For reference, I'm running over Microsoft's Azure OpenAI Service here: https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models?tabs=python-secure%2Cglobal-standard%2Cstandard-chat-completions
Using 'http://127.0.0.1:8000/guards/gibberish_guard/openai/v1' simply returns that the URL is not found in this setting. I'm also trying to follow your guide for the cloud environment setup, but the documentation is outdated. This command is no longer available: guardrails create --template hub:template://guardrails/chatbot. The JSON file cannot be installed either.
Would it be possible to see a copy/paste of your config.py file (with the auth token redacted)? It should really be only a matter of those env variables. I can look into the issue with --template, but I think that might be worth separating out as a different issue.
I'm encountering the same issue when attempting to use Azure OpenAI models. However, the standard OpenAI models are functioning properly.
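For comparison, a minimal sketch of the standard OpenAI path that does work against the server, assuming a local guardrails-api instance and a guard named my-guard (the guard name, key, and model are placeholders):

    from openai import OpenAI

    # Point the standard OpenAI client at the guard's OpenAI-compatible endpoint.
    client = OpenAI(
        base_url="http://127.0.0.1:8000/guards/my-guard/openai/v1",
        api_key="sk-...",  # your OpenAI API key; the server forwards the call
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(response.choices[0].message.content)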
My access to Azure has strangely disappeared, but I haven't lost sight of this issue. I was seeing the "can't connect" matter with the OpenAI client, even outside of Guardrails. I posted a comment on their help thread: https://learn.microsoft.com/en-us/answers/questions/1315650/unable-to-log-into-azure-ai-studio-after-approved When I get my access back, I'll set up a local repro and be able to check everything in more depth.
Thank you for the effort; keep me posted, @JosephCatrambone.
Hey @JosephCatrambone, the problem with using Azure OpenAI models lies in how LiteLLM processes the base URL, which differs from its handling of pure OpenAI models. This discrepancy leads to different HTTP requests being generated, causing requests for Azure OpenAI models to target undefined routes in the FastAPI app. OpenAI models generate one request path, while Azure OpenAI models produce another (examples sketched below); the Azure request attempts to reach a route that isn't defined on the server, as a result of the way LiteLLM handles Azure-specific endpoints.
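A sketch of the two request shapes, reconstructed from the route definition and environment values that appear later in this thread (the guard and deployment names are placeholders):

    # OpenAI models: the client appends /chat/completions to the base URL,
    # which matches a route the guardrails-api server defines.
    POST http://localhost:8000/guards/my-guard/openai/v1/chat/completions

    # Azure OpenAI models: LiteLLM inserts the Azure deployment segments,
    # producing a path the server does not define.
    POST http://localhost:8000/guards/my-guard/openai/v1/openai/deployments/test/chat/completions?api-version=2024-02-01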
@OrenRachmil Good find, and thank you for sharing this information with us! @zsimjee looks like we need either a new route for AzureOpenAI support via the proxy endpoint or to add some additional parameters/wildcards after the existing /guards/{guard_name}/openai/v1 route.
Hi @CalebCourier, are there any updates on resolving this issue? If you could use some assistance, I'd be happy to contribute. Could you please provide any guidelines or details on how you envision the solution? Here's what I've tried so far: I created a new function in the FastAPI app with appropriate routing for Azure OpenAI calls. I based it on the internal logic of the existing function handling OpenAI calls, modifying the routing for the new function. However, this didn't resolve the issue for some reason. If there are any specific areas where I could focus my efforts, or adjustments I might have overlooked, please let me know. Thanks!
Hi @OrenRachmil, we have an issue logged, but we do not have a full solution specified, nor have we assigned out this work yet. We are always open to pull requests on our open source projects, including the API, which you can find here: https://github.com/guardrails-ai/guardrails-api

I think what you've tried so far is a step in the right direction. Below is a route I threw together to see if I could get the request to go through to the FastAPI app; it worked in the sense that I see the log outputs and receive the 418 error back on the client. What's left would be to try to patch in the logic from guardrails-api.

New route

    # Imports added for completeness; `router` and `handle_error` come from the
    # existing guardrails-api app.
    from urllib.parse import unquote_plus
    from fastapi import Request, HTTPException

    @router.post("/guards/{guard_name}/openai/v1/openai/deployments/{deployment_name}/chat/completions")
    @handle_error
    async def azure_openai_v1_chat_completions(guard_name: str, deployment_name: str, request: Request):
        payload = await request.json()
        print("payload: ", payload)
        decoded_guard_name = unquote_plus(guard_name)
        print("guard_name: ", decoded_guard_name)
        decoded_deployment_name = unquote_plus(deployment_name)
        print("deployment_name: ", decoded_deployment_name)
        query_params = request.query_params
        print("query_params: ", query_params)
        headers = request.headers
        print("headers: ", headers)
        raise HTTPException(418, detail="I'm a teapot")

config.py

    from guardrails import Guard

    my_guard = Guard(name="my-guard")

Client script

    import os
    from litellm import completion

    ## set ENV variables
    os.environ["AZURE_API_KEY"] = "azure-api-key"
    os.environ["AZURE_API_BASE"] = "http://localhost:8000/guards/my-guard/openai/v1"
    os.environ["AZURE_REGION"] = "eastus"
    os.environ["AZURE_API_VERSION"] = "2024-02-01"

    # azure call
    response = completion(
        model="azure/test",
        messages=[{"content": "Hello, how are you?", "role": "user"}]
    )

Start command

    guardrails-api start --config ./config.py
Thank you very much for the effort @CalebCourier. |
Hi @CalebCourier, I also had to configure the api_key and api_base of the deployment in the server's configuration file.
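A minimal sketch of what that server-side config.py could look like, assuming the credentials are supplied through the same environment variables LiteLLM reads for Azure calls (all values are placeholders):

    # config.py on the server -- placeholder values; in practice these would come
    # from the container's environment rather than being hardcoded.
    import os
    from guardrails import Guard

    os.environ.setdefault("AZURE_API_KEY", "<azure-api-key>")
    os.environ.setdefault("AZURE_API_BASE", "https://<your-resource>.openai.azure.com/")
    os.environ.setdefault("AZURE_API_VERSION", "2024-02-01")

    my_guard = Guard(name="my-guard")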
Thanks @OrenRachmil, will this be incorporated into the latest release, with official support using a Docker image?
Description
This document aims to guide users on deploying Azure OpenAI Service within a Docker container. The current documentation lacks a comprehensive, step-by-step guide for setting up Azure OpenAI in a Dockerized environment, including authentication, configuration, and deployment processes.
Current documentation
Currently, there is limited or no specific guidance on hosting Azure OpenAI using Docker in the Azure OpenAI Service documentation.
Suggested changes
Add a dedicated section or guide that covers:
- Prerequisites
- Dockerfile Configuration
- Environment Variables
- Sample Docker Compose File: a docker-compose.yml example to demonstrate how to set up multi-container applications if additional services are needed (see the sketch after this list).
- Running the Container
- Testing and Verification
- Security Best Practices: handling secrets via .env files, and avoiding hardcoding credentials.
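By way of illustration, a minimal docker-compose.yml sketch, assuming the image is built from the Dockerfile in the existing hosting guide and that the Azure variables used earlier in this thread are passed through (the service name, build context, and port are assumptions):

    # docker-compose.yml -- illustrative sketch only
    services:
      guardrails-api:
        build: .                  # image built from the Dockerfile in the hosting guide
        ports:
          - "8000:8000"
        env_file:
          - .env                  # supplies AZURE_API_KEY, AZURE_API_BASE, AZURE_API_VERSION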
Additional context
Adding this section will help users deploy Azure OpenAI Service using Docker more efficiently and securely, improving accessibility for development and production deployments. Screenshots or command line examples would be beneficial for each step.
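As one such command line example, a hypothetical smoke test against the Azure-style proxy route discussed in this thread (the guard name, deployment name, and header handling are assumptions):

    curl -X POST \
      "http://localhost:8000/guards/my-guard/openai/v1/openai/deployments/test/chat/completions?api-version=2024-02-01" \
      -H "Content-Type: application/json" \
      -H "api-key: $AZURE_API_KEY" \
      -d '{"messages": [{"role": "user", "content": "Hello, how are you?"}]}'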
Checklist