AI freshness pass (#44038)
* Freshness pass
---------

Co-authored-by: Genevieve Warren <[email protected]>
alexwolfmsft and gewarren authored Dec 20, 2024
1 parent 80b63fe commit df11762
Showing 11 changed files with 37 additions and 68 deletions.
2 changes: 1 addition & 1 deletion docs/ai/azure-ai-for-dotnet-developers.md
@@ -1,7 +1,7 @@
---
title: Develop .NET apps that use Azure AI services
description: This article provides an organized list of resources about Azure AI scenarios for .NET developers, including documentation and code samples.
-ms.date: 05/17/2024
+ms.date: 12/19/2024
ms.topic: overview
ms.custom: devx-track-dotnet, devx-track-dotnet-ai
---
4 changes: 2 additions & 2 deletions docs/ai/conceptual/embeddings.md
@@ -3,7 +3,7 @@ title: "How Embeddings Extend Your AI Model's Reach"
description: "Learn how embeddings extend the limits and capabilities of AI models in .NET."
author: catbutler
ms.topic: concept-article #Don't change.
-ms.date: 05/14/2024
+ms.date: 12/19/2024

#customer intent: As a .NET developer, I want to understand how embeddings extend LLM limits and capabilities in .NET so that I have more semantic context and better outcomes for my AI apps.

@@ -34,7 +34,7 @@ Use embeddings to help a model understand the meaning and context of text, and t

Use audio embeddings to process audio files or inputs in your app.

-For example, [Speech service](/azure/ai-services/speech-service/) supports a range of audio embeddings, including [speech to text](/azure/ai-services/speech-service/speech-to-text) and [text to speech](/azure/ai-services/speech-service/text-to-speech). You can process audio in real-time or in batches.
+For example, [Azure AI Speech](/azure/ai-services/speech-service/) supports a range of audio embeddings, including [speech to text](/azure/ai-services/speech-service/speech-to-text) and [text to speech](/azure/ai-services/speech-service/text-to-speech). You can process audio in real-time or in batches.
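The embeddings this article describes are numeric vectors whose geometry encodes meaning. A minimal Python sketch (illustrative only, not part of this commit) compares two such vectors with cosine similarity; the three-dimensional vectors are made up, since real models emit hundreds or thousands of dimensions:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Compare two embedding vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" for three words.
dog = [0.9, 0.1, 0.0]
puppy = [0.8, 0.2, 0.0]
car = [0.0, 0.1, 0.9]

# Related words point in similar directions, unrelated ones don't.
print(cosine_similarity(dog, puppy))  # close to 1.0
print(cosine_similarity(dog, car))    # close to 0.0
```

Semantic search and retrieval-augmented generation both reduce to this comparison: embed the query, embed the candidates, and rank by similarity.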

### Turn text into images or images into text

2 changes: 1 addition & 1 deletion docs/ai/conceptual/understanding-openai-functions.md
@@ -3,7 +3,7 @@ title: "Understanding OpenAI Function Calling"
description: "Understand how function calling enables you to integrate external tools with your OpenAI application."
author: haywoodsloan
ms.topic: concept-article
-ms.date: 05/14/2024
+ms.date: 12/19/2024

#customer intent: As a .NET developer, I want to understand OpenAI function calling so that I can integrate external tools with AI completions in my .NET project.

10 changes: 3 additions & 7 deletions docs/ai/conceptual/understanding-tokens.md
@@ -3,15 +3,15 @@ title: "Understanding tokens"
description: "Understand how large language models (LLMs) use tokens to analyze semantic relationships and generate natural language outputs"
author: haywoodsloan
ms.topic: concept-article
-ms.date: 05/14/2024
+ms.date: 12/19/2024

#customer intent: As a .NET developer, I want to understand how large language models (LLMs) use tokens so I can add semantic analysis and text generation capabilities to my .NET projects.

---

# Understand tokens

-Tokens are words, character sets, or combinations of words and punctuation that are used by large language models (LLMs) to decompose text into. Tokenization is the first step in training. The LLM analyzes the semantic relationships between tokens, such as how commonly they're used together or whether they're used in similar contexts. After training, the LLM uses those patterns and relationships to generate a sequence of output tokens based on the input sequence.
+Tokens are words, character sets, or combinations of words and punctuation that are generated by large language models (LLMs) when they decompose text. Tokenization is the first step in training. The LLM analyzes the semantic relationships between tokens, such as how commonly they're used together or whether they're used in similar contexts. After training, the LLM uses those patterns and relationships to generate a sequence of output tokens based on the input sequence.
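The decomposition the revised paragraph describes can be sketched in a few lines of Python (illustrative only, not part of this commit). Real LLMs use learned subword tokenizers such as BPE; this naive regex splitter just shows the input-to-token-sequence step:

```python
import re

def naive_tokenize(text: str) -> list[str]:
    """Split text into word and punctuation tokens -- a rough stand-in
    for the subword tokenizers (like BPE) that real LLMs use."""
    return re.findall(r"\w+|[^\w\s]", text)

tokens = naive_tokenize("I heard a dog bark loudly at a cat.")
print(tokens)       # each word and the final period become tokens
print(len(tokens))  # 10 tokens
```

A real tokenizer would often split rare words into multiple subword tokens, which is why token counts exceed word counts in practice.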

## Turning text into tokens

@@ -89,11 +89,7 @@ Output generation is an iterative operation. The model appends the predicted tok
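The hunk context above describes iterative output generation: the model appends each predicted token to the input and predicts again until a stop token appears. A toy Python sketch (illustrative only, not part of this commit; the lookup table stands in for a trained model's learned probabilities):

```python
def next_token(context: tuple[str, ...]) -> str:
    """Hypothetical 'model': map a context to its most likely next token."""
    table = {
        ("the",): "dog",
        ("the", "dog"): "barked",
        ("the", "dog", "barked"): "<end>",
    }
    return table.get(context, "<end>")

tokens = ["the"]
while True:
    predicted = next_token(tuple(tokens))
    if predicted == "<end>":
        break
    tokens.append(predicted)  # the output so far becomes part of the next input

print(" ".join(tokens))  # the dog barked
```

Each iteration re-reads the full sequence, which is why generation cost grows with output length.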

### Token limits

-LLMs have limitations regarding the maximum number of tokens that can be used as input or generated as output. This limitation often causes the input and output tokens to be combined into a maximum context window.
-
-For example, GPT-4 supports up to 8,192 tokens of context. The combined size of the input and output tokens can't exceed 8,192.
-
-Taken together, a model's token limit and tokenization method determine the maximum length of text that can be provided as input or generated as output.
+LLMs have limitations regarding the maximum number of tokens that can be used as input or generated as output. This limitation often causes the input and output tokens to be combined into a maximum context window. Taken together, a model's token limit and tokenization method determine the maximum length of text that can be provided as input or generated as output.
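Because input and output share one context window, apps commonly trim the oldest input tokens to leave room for the expected output. A minimal Python sketch (illustrative only, not part of this commit; the function name and token placeholders are made up):

```python
def fit_to_context_window(input_tokens: list[str], max_context: int,
                          reserved_for_output: int) -> list[str]:
    """Drop the oldest input tokens so input + expected output still
    fit within the model's maximum context window."""
    budget = max_context - reserved_for_output
    if budget < 0:
        raise ValueError("output reservation exceeds the context window")
    # Note: input_tokens[-0:] would return the whole list, so handle 0 explicitly.
    return input_tokens[-budget:] if budget else []

prompt = [f"tok{i}" for i in range(120)]
trimmed = fit_to_context_window(prompt, max_context=100, reserved_for_output=20)
print(len(trimmed))  # 80 input tokens + 20 reserved output tokens = 100-token window
```

Chat apps apply the same idea to conversation history, which is why early turns eventually "fall out" of long conversations.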

For example, consider a model that has a maximum context window of 100 tokens. The model processes our example sentences as input text:

@@ -1,7 +1,7 @@
---
title: Scale Azure OpenAI for .NET chat sample using RAG
description: Learn how to add load balancing to your application to extend the chat app beyond the Azure OpenAI token and model quota limits.
-ms.date: 05/16/2024
+ms.date: 12/19/2024
ms.topic: get-started
ms.custom: devx-track-dotnet, devx-track-dotnet-ai
# CustomerIntent: As a .NET developer new to Azure OpenAI, I want to scale my Azure OpenAI capacity to avoid rate limit errors with Azure Container Apps.
2 changes: 1 addition & 1 deletion docs/ai/get-started-app-chat-template.md
@@ -1,7 +1,7 @@
---
title: Get started with the chat using your own data sample for .NET
description: Get started with .NET and search across your own data using a chat app sample implemented using Azure OpenAI Service and Retrieval Augmented Generation (RAG) in Azure AI Search. Easily deploy with Azure Developer CLI. This article uses the Azure AI Reference Template sample.
-ms.date: 05/16/2024
+ms.date: 12/19/2024
ms.topic: get-started
ms.custom: devx-track-dotnet, devx-track-dotnet-ai
# CustomerIntent: As a .NET developer new to Azure OpenAI, I want to deploy and use sample code to interact with an app infused with my own business data so that I can learn from the sample code.
28 changes: 9 additions & 19 deletions docs/ai/how-to/content-filtering.md
@@ -5,13 +5,13 @@ ms.custom: devx-track-dotnet, devx-track-dotnet-ai
author: alexwolfmsft
ms.author: alexwolf
ms.topic: how-to
-ms.date: 05/13/2024
+ms.date: 12/19/2024

#customer intent: As a .NET developer, I want to manage OpenAI Content Filtering in a .NET app

---

-# Work with OpenAI content filtering in a .NET app
+# Work with Azure OpenAI content filtering in a .NET app

This article demonstrates how to handle content filtering concerns in a .NET app. Azure OpenAI Service includes a content filtering system that works alongside core models. This system works by running both the prompt and completion through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions. Variations in API configurations and application design might affect completions and thus filtering behavior.
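The classifier ensemble described above scores the prompt and completion in each harm category (hate, self-harm, sexual, violence) at a severity level, then filters anything at or above a configured threshold. A toy Python sketch (illustrative only, not part of this commit; `classify` and its score table are hypothetical stand-ins for the real service):

```python
from dataclasses import dataclass

SEVERITIES = ["safe", "low", "medium", "high"]

@dataclass
class CategoryResult:
    severity: str
    filtered: bool

def classify(prompt: str, threshold: str = "medium") -> dict[str, CategoryResult]:
    """Flag each category whose severity meets or exceeds the threshold."""
    # Hypothetical per-category severities a real classifier ensemble would produce.
    scores = {
        "hate": "safe",
        "self_harm": "safe",
        "sexual": "safe",
        "violence": "low" if "fight" in prompt.lower() else "safe",
    }
    limit = SEVERITIES.index(threshold)
    return {
        category: CategoryResult(sev, SEVERITIES.index(sev) >= limit)
        for category, sev in scores.items()
    }

for category, result in classify("Tell me about the fight.").items():
    print(f"{category}: filtered={result.filtered}, severity={result.severity}")
```

This mirrors the shape of the service's behavior: a category can register a nonzero severity (here, low-severity violence) without being filtered, because the decision depends on the filter configuration assigned to the deployment.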

@@ -27,33 +27,23 @@ The [Content Filtering](/azure/ai-services/openai/concepts/content-filter) docum

To use the sample code in this article, you need to create and assign a content filter to your OpenAI model.

-1. [Create and assign a content filter](/azure/ai-services/openai/how-to/content-filters) to your provisioned GPT-35 or GPT-4 model.
+1. [Create and assign a content filter](/azure/ai-services/openai/how-to/content-filters) to your provisioned model.

 1. Add the [`Azure.AI.OpenAI`](https://www.nuget.org/packages/Azure.AI.OpenAI) NuGet package to your project.

    ```dotnetcli
    dotnet add package Azure.AI.OpenAI
    ```

-1. Create a simple chat completion flow in your .NET app using the `OpenAIClient`. Replace the `YOUR_OPENAI_ENDPOINT`, `YOUR_OPENAI_KEY`, and `YOUR_OPENAI_DEPLOYMENT` values with your own.
+1. Create a simple chat completion flow in your .NET app using the `AzureOpenAIClient`. Replace the `YOUR_MODEL_ENDPOINT` and `YOUR_MODEL_DEPLOYMENT_NAME` values with your own.

-   :::code language="csharp" source="./snippets/content-filtering/program.cs" id="chatCompletionFlow":::
+   :::code language="csharp" source="./snippets/content-filtering/program.cs" :::

-1. Print out the content filtering results for each category.
-
-   :::code language="csharp" source="./snippets/content-filtering/program.cs" id="printContentFilteringResult":::
-
-1. Replace the `YOUR_PROMPT` placeholder with your own message and run the app to experiment with content filtering results. The following output shows an example of a prompt that triggers a low severity content filtering result:
+1. Replace the `YOUR_PROMPT` placeholder with your own message and run the app to experiment with content filtering results. If you enter a prompt the AI considers unsafe, Azure OpenAI returns a `400 Bad Request` code. The app prints a message in the console similar to the following:

-   ```output
-   I am sorry if I have done anything to upset you.
-   Is there anything I can do to assist you and make things better?
-   Hate category is filtered: False with low severity.
-   SelfHarm category is filtered: False with safe severity.
-   Sexual category is filtered: False with safe severity.
-   Violence category is filtered: False with low severity.
-   ```
+   ```output
+   The response was filtered due to the prompt triggering Azure OpenAI's content management policy...
+   ```

## Related content

@@ -8,8 +8,10 @@
   </PropertyGroup>

   <ItemGroup>
-    <PackageReference Include="Azure.AI.OpenAI" Version="1.0.0-beta.17" />
-    <PackageReference Include="Azure.Core" Version="1.43.0" />
+    <PackageReference Include="Azure.AI.OpenAI" />
+    <PackageReference Include="Azure.Identity" />
+    <PackageReference Include="Microsoft.Extensions.AI" Version="9.0.1-preview.1.24570.5" />
+    <PackageReference Include="Microsoft.Extensions.AI.OpenAI" Version="9.0.1-preview.1.24570.5" />
   </ItemGroup>

 </Project>
45 changes: 13 additions & 32 deletions docs/ai/how-to/snippets/content-filtering/Program.cs
@@ -1,38 +1,19 @@
-// <chatCompletionFlow>
-using Azure;
 using Azure.AI.OpenAI;
+using Azure.Identity;
+using Microsoft.Extensions.AI;

-string endpoint = "YOUR_OPENAI_ENDPOINT";
-string key = "YOUR_OPENAI_KEY";
-
-OpenAIClient client = new(new Uri(endpoint), new AzureKeyCredential(key));
-
-var chatCompletionsOptions = new ChatCompletionsOptions()
-{
-    DeploymentName = "YOUR_DEPLOYMENT_NAME",
-    Messages =
-    {
-        new ChatRequestSystemMessage("You are a helpful assistant."),
-        new ChatRequestUserMessage("YOUR_PROMPT")
-    }
-};
-
-Response<ChatCompletions> response = client.GetChatCompletions(chatCompletionsOptions);
-Console.WriteLine(response.Value.Choices[0].Message.Content);
-Console.WriteLine();
-// </chatCompletionFlow>
+IChatClient client =
+    new AzureOpenAIClient(
+        new Uri("YOUR_MODEL_ENDPOINT"),
+        new DefaultAzureCredential()).AsChatClient("YOUR_MODEL_DEPLOYMENT_NAME");

-// <printContentFilteringResult>
-foreach (var promptFilterResult in response.Value.PromptFilterResults)
+try
 {
-    var results = promptFilterResult.ContentFilterResults;
-    Console.WriteLine(@$"Hate category is filtered:
-        {results.Hate.Filtered} with {results.Hate.Severity} severity.");
-    Console.WriteLine(@$"Self-harm category is filtered:
-        {results.SelfHarm.Filtered} with {results.SelfHarm.Severity} severity.");
-    Console.WriteLine(@$"Sexual category is filtered:
-        {results.Sexual.Filtered} with {results.Sexual.Severity} severity.");
-    Console.WriteLine(@$"Violence category is filtered:
-        {results.Violence.Filtered} with {results.Violence.Severity} severity.");
+    ChatCompletion completion = await client.CompleteAsync("YOUR_PROMPT");
+
+    Console.WriteLine(completion.Message);
+}
+catch (Exception e)
+{
+    Console.WriteLine(e.Message);
 }
-// </printContentFilteringResult>
2 changes: 1 addition & 1 deletion docs/ai/index.yml
@@ -8,7 +8,7 @@ metadata:
description: Samples, tutorials, and education for using AI with .NET
ms.topic: hub-page
ms.service: dotnet
-ms.date: 05/13/2024
+ms.date: 12/19/2024
author: alexwolfmsft
ms.author: alexwolf

2 changes: 1 addition & 1 deletion docs/ai/quickstarts/quickstart-local-ai.md
@@ -1,7 +1,7 @@
---
title: Quickstart - Connect to and chat with a local AI using .NET
description: Set up a local AI model and chat with it using a .NET console app and the Microsoft.Extensions.AI libraries
-ms.date: 11/24/2024
+ms.date: 12/19/2024
ms.topic: quickstart
ms.custom: devx-track-dotnet, devx-track-dotnet-ai
author: alexwolfmsft
