source
stringclasses
1 value
repository
stringclasses
1 value
file
stringlengths
17
123
label
stringclasses
1 value
content
stringlengths
6
6.94k
GitHub
autogen
autogen/dotnet/website/tutorial/Use-AutoGen.Net-agent-as-model-in-AG-Studio.md
autogen
This tutorial shows how to use an AutoGen.Net agent as a model in AG Studio
GitHub
autogen
autogen/dotnet/website/tutorial/Use-AutoGen.Net-agent-as-model-in-AG-Studio.md
autogen
Step 1. Create an empty .NET web app and install the AutoGen and AutoGen.WebAPI packages ```bash dotnet new web dotnet add package AutoGen dotnet add package AutoGen.WebAPI ```
GitHub
autogen
autogen/dotnet/website/tutorial/Use-AutoGen.Net-agent-as-model-in-AG-Studio.md
autogen
Step 2. Replace Program.cs with the following code ```csharp using AutoGen.Core; using AutoGen.Service; var builder = WebApplication.CreateBuilder(args); var app = builder.Build(); var helloWorldAgent = new HelloWorldAgent(); app.UseAgentAsOpenAIChatCompletionEndpoint(helloWorldAgent); app.Run(); class HelloWorldAge...
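The `HelloWorldAgent` class in the snippet above is truncated. As a hedged illustration only, a trivial agent implementing AutoGen.Core's `IAgent` might look like the sketch below; the interface members shown are assumptions based on AutoGen.Core's public API and may differ between package versions.

```csharp
using AutoGen.Core;

// Hypothetical sketch: a trivial IAgent that always replies "Hello world".
// The GenerateReplyAsync signature is an assumption and may differ
// between AutoGen.Net versions.
class HelloWorldAgent : IAgent
{
    public string Name => "HelloWorldAgent";

    public Task<IMessage> GenerateReplyAsync(
        IEnumerable<IMessage> messages,
        GenerateReplyOptions? options = null,
        CancellationToken cancellationToken = default)
    {
        return Task.FromResult<IMessage>(
            new TextMessage(Role.Assistant, "Hello world", from: this.Name));
    }
}
```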
GitHub
autogen
autogen/dotnet/website/tutorial/Use-AutoGen.Net-agent-as-model-in-AG-Studio.md
autogen
Step 3: Start the web app Run the following command to start the web API ```bash dotnet run ``` The web API will listen at `http://localhost:5264/v1/chat/completion` ![terminal](../images/articles/UseAutoGenAsModelinAGStudio/Terminal.png)
GitHub
autogen
autogen/dotnet/website/tutorial/Use-AutoGen.Net-agent-as-model-in-AG-Studio.md
autogen
Step 4: In another terminal, start autogen-studio ```bash autogenstudio ui ```
GitHub
autogen
autogen/dotnet/website/tutorial/Use-AutoGen.Net-agent-as-model-in-AG-Studio.md
autogen
Step 5: Navigate to the AutoGen Studio UI and add the hello world agent as an OpenAI model ### Step 5.1: Go to the model tab ![The Model Tab](../images/articles/UseAutoGenAsModelinAGStudio/TheModelTab.png) ### Step 5.2: Select "OpenAI model" card ![Open AI model Card](../images/articles/UseAutoGenAsModelinAGStudio/Step5.2OpenAIMo...
GitHub
autogen
autogen/dotnet/website/tutorial/Use-AutoGen.Net-agent-as-model-in-AG-Studio.md
autogen
Step 6: Create a hello world agent that uses the hello world model ![Create a hello world agent that uses the hello world model](../images/articles/UseAutoGenAsModelinAGStudio/Step6.png) ![Agent Configuration](../images/articles/UseAutoGenAsModelinAGStudio/Step6b.png)
GitHub
autogen
autogen/dotnet/website/tutorial/Use-AutoGen.Net-agent-as-model-in-AG-Studio.md
autogen
Final Step: Use the hello world agent in workflow ![Use the hello world agent in workflow](../images/articles/UseAutoGenAsModelinAGStudio/FinalStepsA.png) ![Use the hello world agent in workflow](../images/articl...
GitHub
autogen
autogen/dotnet/website/tutorial/Chat-with-an-agent.md
autogen
This tutorial shows how to generate a response using an @AutoGen.Core.IAgent, taking @AutoGen.OpenAI.OpenAIChatAgent as an example. > [!NOTE] > AutoGen.Net provides the following agents to connect to different LLM platforms. Generating responses using these agents is similar to the example shown below. > - @AutoGen.Op...
GitHub
autogen
autogen/dotnet/website/tutorial/Chat-with-an-agent.md
autogen
Step 1: Install AutoGen First, install the AutoGen package using the following command: ```bash dotnet add package AutoGen ```
GitHub
autogen
autogen/dotnet/website/tutorial/Chat-with-an-agent.md
autogen
Step 2: Add Using Statements [!code-csharp[Using Statements](../../samples/AutoGen.BasicSamples/GettingStart/Chat_With_Agent.cs?name=Using)]
GitHub
autogen
autogen/dotnet/website/tutorial/Chat-with-an-agent.md
autogen
Step 3: Create an @AutoGen.OpenAI.OpenAIChatAgent > [!NOTE] > The @AutoGen.OpenAI.Extension.OpenAIAgentExtension.RegisterMessageConnector* method registers an @AutoGen.OpenAI.OpenAIChatRequestMessageConnector middleware which converts OpenAI message types to AutoGen message types. This step is necessary when you want ...
GitHub
autogen
autogen/dotnet/website/tutorial/Chat-with-an-agent.md
autogen
Step 4: Generate Response To generate a response, you can use one of the overloads of the @AutoGen.Core.AgentExtension.SendAsync* method. The following code shows how to generate a response from a text message: [!code-csharp[Generate Response](../../samples/AutoGen.BasicSamples/GettingStart/Chat_With_Agent.cs?name=Chat_...
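The sample file referenced above contains the authoritative code; as a hedged usage sketch, sending a text prompt through `SendAsync` might look like the following, where `agent` is the agent created in the previous step and the prompt is illustrative.

```csharp
// Hypothetical usage sketch; `agent` is the IAgent created earlier.
// GetContent() is assumed to be the AutoGen.Core extension that extracts
// the text content of an IMessage.
var reply = await agent.SendAsync("What is the capital of France?");
Console.WriteLine(reply.GetContent());
```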
GitHub
autogen
autogen/dotnet/website/tutorial/Chat-with-an-agent.md
autogen
Further Reading - [Chat with google gemini](../articles/AutoGen.Gemini/Chat-with-google-gemini.md) - [Chat with vertex gemini](../articles/AutoGen.Gemini/Chat-with-vertex-gemini.md) - [Chat with Ollama](../articles/AutoGen.Ollama/Chat-with-llama.md) - [Chat with Semantic Kernel Agent](../articles/AutoGen.SemanticKernel...
GitHub
autogen
autogen/dotnet/website/tutorial/Image-chat-with-agent.md
autogen
This tutorial shows how to perform image chat with an agent using @AutoGen.OpenAI.OpenAIChatAgent as an example. > [!NOTE] > To chat with images, the model behind the agent needs to support image input. Here is a partial list of models that support image input: > - gpt-4o > - gemini-1.5 > - llava > - claud...
GitHub
autogen
autogen/dotnet/website/tutorial/Image-chat-with-agent.md
autogen
Step 1: Install AutoGen First, install the AutoGen package using the following command: ```bash dotnet add package AutoGen ```
GitHub
autogen
autogen/dotnet/website/tutorial/Image-chat-with-agent.md
autogen
Step 2: Add Using Statements [!code-csharp[Using Statements](../../samples/AutoGen.BasicSamples/GettingStart/Image_Chat_With_Agent.cs?name=Using)]
GitHub
autogen
autogen/dotnet/website/tutorial/Image-chat-with-agent.md
autogen
Step 3: Create an @AutoGen.OpenAI.OpenAIChatAgent [!code-csharp[Create an OpenAIChatAgent](../../samples/AutoGen.BasicSamples/GettingStart/Image_Chat_With_Agent.cs?name=Create_Agent)]
GitHub
autogen
autogen/dotnet/website/tutorial/Image-chat-with-agent.md
autogen
Step 4: Prepare Image Message In AutoGen, you can create an image message using either @AutoGen.Core.ImageMessage or @AutoGen.Core.MultiModalMessage. The @AutoGen.Core.ImageMessage takes a single image as input, whereas the @AutoGen.Core.MultiModalMessage allows you to pass multiple modalities like text or image. Her...
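As a hedged sketch of the two message types described above (constructor shapes are assumptions and may differ between AutoGen.Net versions), creating an image-only message and a combined text-plus-image message might look like:

```csharp
// Hypothetical sketch: an ImageMessage carries a single image,
// while a MultiModalMessage combines several modalities.
var imageMessage = new ImageMessage(
    Role.User,
    new Uri("https://example.com/cat.png")); // illustrative URL

var multiModalMessage = new MultiModalMessage(
    Role.User,
    new IMessage[]
    {
        new TextMessage(Role.User, "What is in this image?"),
        imageMessage,
    });
```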
GitHub
autogen
autogen/dotnet/website/tutorial/Image-chat-with-agent.md
autogen
Step 5: Generate Response To generate a response, you can use one of the overloads of the @AutoGen.Core.AgentExtension.SendAsync* method. The following code shows how to generate a response from an image message: [!code-csharp[Generate Response](../../samples/AutoGen.BasicSamples/GettingStart/Image_Chat_With_Agent.c...
GitHub
autogen
autogen/dotnet/website/tutorial/Image-chat-with-agent.md
autogen
Further Reading - [Image chat with gemini](../articles/AutoGen.Gemini/Image-chat-with-gemini.md) - [Image chat with llava](../articles/AutoGen.Ollama/Chat-with-llava.md)
GitHub
autogen
autogen/dotnet/website/tutorial/Create-agent-with-tools.md
autogen
This tutorial shows how to use tools in an agent.
GitHub
autogen
autogen/dotnet/website/tutorial/Create-agent-with-tools.md
autogen
What is a tool Tools are pre-defined functions in the user's project that an agent can invoke. An agent can use tools to perform actions like searching the web, performing calculations, etc. Tools can greatly extend the capabilities of an agent. > [!NOTE] > To use tools with an agent, the backend LLM model used by the agent needs to ...
GitHub
autogen
autogen/dotnet/website/tutorial/Create-agent-with-tools.md
autogen
Key Concepts - @AutoGen.Core.FunctionContract: The contract of a function that agent can invoke. It contains the function name, description, parameters schema, and return type. - @AutoGen.Core.ToolCallMessage: A message type that represents a tool call request in AutoGen.Net. - @AutoGen.Core.ToolCallResultMessage: A me...
GitHub
autogen
autogen/dotnet/website/tutorial/Create-agent-with-tools.md
autogen
Install AutoGen and AutoGen.SourceGenerator First, install the AutoGen and AutoGen.SourceGenerator packages using the following command: ```bash dotnet add package AutoGen dotnet add package AutoGen.SourceGenerator ``` Also, you might need to enable structured XML documentation support by setting `GenerateDocumentationFile...
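Enabling XML documentation generation is a standard csproj property; a minimal fragment is shown below (where exactly the property group lives is up to your project layout):

```xml
<!-- In your .csproj: lets the source generator read XML doc comments
     so it can turn them into function descriptions. -->
<PropertyGroup>
  <GenerateDocumentationFile>true</GenerateDocumentationFile>
</PropertyGroup>
```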
GitHub
autogen
autogen/dotnet/website/tutorial/Create-agent-with-tools.md
autogen
Add Using Statements [!code-csharp[Using Statements](../../samples/AutoGen.BasicSamples/GettingStart/Use_Tools_With_Agent.cs?name=Using)]
GitHub
autogen
autogen/dotnet/website/tutorial/Create-agent-with-tools.md
autogen
Create agent Create an @AutoGen.OpenAI.OpenAIChatAgent with `GPT-3.5-turbo` as the backend LLM model. [!code-csharp[Create an agent with tools](../../samples/AutoGen.BasicSamples/GettingStart/Use_Tools_With_Agent.cs?name=Create_Agent)]
GitHub
autogen
autogen/dotnet/website/tutorial/Create-agent-with-tools.md
autogen
Define a `Tool` class and create tools Create a `public partial` class to host the tools you want to use in AutoGen agents. Each method has to be a `public` instance method and its return type must be `Task<string>`. After the methods are defined, mark them with the @AutoGen.Core.FunctionAttribute attribute. In the following ...
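Following the rules above, a minimal tool class might look like the sketch below; the method name and its doc comment are illustrative, not taken from the tutorial's sample file.

```csharp
using AutoGen.Core;

// Hypothetical example of a tool class: public partial, public instance
// methods returning Task<string>, marked with [Function].
public partial class Tool
{
    /// <summary>
    /// Get the weather report for a city.
    /// </summary>
    /// <param name="city">The city name.</param>
    [Function]
    public Task<string> GetWeather(string city)
    {
        // Illustrative canned answer; a real tool would call a weather API.
        return Task.FromResult($"The weather in {city} is sunny.");
    }
}
```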
GitHub
autogen
autogen/dotnet/website/tutorial/Create-agent-with-tools.md
autogen
Tool call without auto-invoke In this case, when receiving a @AutoGen.Core.ToolCallMessage, the agent will not automatically invoke the tool. Instead, the agent will return the original message back to the user. The user can then decide whether to invoke the tool or not. ![single-turn tool call without auto-invoke](.....
GitHub
autogen
autogen/dotnet/website/tutorial/Create-agent-with-tools.md
autogen
Tool call with auto-invoke In this case, the agent will automatically invoke the tool when receiving a @AutoGen.Core.ToolCallMessage and return the @AutoGen.Core.ToolCallAggregateMessage which contains both the tool call request and the tool call result. ![single-turn tool call with auto-invoke](../images/articles/Cre...
GitHub
autogen
autogen/dotnet/website/tutorial/Create-agent-with-tools.md
autogen
Send the tool call result back to LLM to generate further response In some cases, you may want to send the tool call result back to the LLM to generate further response. To do this, you can send the tool call response from agent back to the LLM by calling the `SendAsync` method of the agent. [!code-csharp[Generate Res...
GitHub
autogen
autogen/dotnet/website/tutorial/Create-agent-with-tools.md
autogen
Parallel tool call Some LLM models support parallel tool calls, which return multiple tool calls in a single message. Note that @AutoGen.Core.FunctionCallMiddleware already handles parallel tool calls for you. When it receives a @AutoGen.Core.ToolCallMessage that contains multiple tool calls, it will automatic...
GitHub
autogen
autogen/dotnet/website/tutorial/Create-agent-with-tools.md
autogen
Further Reading - [Function call with openai](../articles/OpenAIChatAgent-use-function-call.md) - [Function call with gemini](../articles/AutoGen.Gemini/Function-call-with-gemini.md) - [Function call with local model](../articles/Function-call-with-ollama-and-litellm.md) - [Use kernel plugin in other agents](../article...
GitHub
autogen
autogen/dotnet/website/articles/OpenAIChatAgent-simple-chat.md
autogen
The following example shows how to create an @AutoGen.OpenAI.OpenAIChatAgent and chat with it. Firstly, import the required namespaces: [!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/OpenAICodeSnippet.cs?name=using_statement)] Then, create an @AutoGen.OpenAI.OpenAIChatAgent and chat with it: [!code-csh...
GitHub
autogen
autogen/dotnet/website/articles/Create-a-user-proxy-agent.md
autogen
## UserProxyAgent [`UserProxyAgent`](../api/AutoGen.UserProxyAgent.yml) is a special type of agent that can be used to proxy user input to another agent or group of agents. It supports the following human input modes: - `ALWAYS`: Always ask user for input. - `NEVER`: Never ask user for input. In this mode, the agent w...
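As a hedged sketch of the modes described above (the constructor parameter names are assumptions and may differ between AutoGen.Net versions), creating a `UserProxyAgent` that always asks for human input might look like:

```csharp
// Hypothetical sketch: a UserProxyAgent that forwards every turn
// to the human user for input.
var userProxy = new UserProxyAgent(
    name: "user",
    humanInputMode: HumanInputMode.ALWAYS);
```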
GitHub
autogen
autogen/dotnet/website/articles/AutoGen-OpenAI-Overview.md
autogen
## AutoGen.OpenAI Overview AutoGen.OpenAI provides the following agents for OpenAI models: - @AutoGen.OpenAI.OpenAIChatAgent: A slim wrapper agent over `OpenAIClient`. This agent only supports the `IMessage<ChatRequestMessage>` message type. To support more message types like @AutoGen.Core.TextMessage, register the agent ...
GitHub
autogen
autogen/dotnet/website/articles/Group-chat.md
autogen
@AutoGen.Core.GroupChat invokes agents in a dynamic way. On one hand, it relies on its admin agent to intelligently determine the next speaker based on the conversation context; on the other hand, it also allows you to control the conversation flow by using a @AutoGen.Core.Graph. This makes it a more dynamic yet contr...
GitHub
autogen
autogen/dotnet/website/articles/Group-chat.md
autogen
Use @AutoGen.Core.GroupChat to implement a code interpreter chat flow The following example shows how to create a dynamic group chat with @AutoGen.Core.GroupChat. In this example, we will create a dynamic group chat with 4 agents: `admin`, `coder`, `reviewer` and `runner`. Each agent has its own role in the group chat:...
GitHub
autogen
autogen/dotnet/website/articles/function-comparison-page-between-python-AutoGen-and-autogen.net.md
autogen
### Function comparison between Python AutoGen and AutoGen\.Net #### Agentic pattern | Feature | AutoGen | AutoGen\.Net | | :---------------- | :------ | :---- | | Code interpreter | run python code in local/docker/notebook executor | run csharp code in dotnet interactive executor | | Single agent chat pattern | ✔️ ...
GitHub
autogen
autogen/dotnet/website/articles/OpenAIChatAgent-use-function-call.md
autogen
The following example shows how to create a `GetWeatherAsync` function and pass it to @AutoGen.OpenAI.OpenAIChatAgent. Firstly, you need to install the following packages: ```xml <ItemGroup> <PackageReference Include="AutoGen.OpenAI" Version="AUTOGEN_VERSION" /> <PackageReference Include="AutoGen.SourceGenerat...
GitHub
autogen
autogen/dotnet/website/articles/Use-function-call.md
autogen
## Use function call in AutoGen agent Typically, there are three ways to pass a function definition to an agent to enable function call: - Pass function definitions when creating an agent. This only works if the agent supports passing function definitions via its constructor. - Passing function definitions in @AutoGen.Core.Gen...
GitHub
autogen
autogen/dotnet/website/articles/Use-function-call.md
autogen
Pass function definitions when creating an agent In some agents like @AutoGen.AssistantAgent or @AutoGen.OpenAI.GPTAgent, you can pass function definitions when creating the agent. Suppose the `TypeSafeFunctionCall` is defined in the following code snippet: [!code-csharp[TypeSafeFunctionCall](../../samples/AutoGen.Basi...
GitHub
autogen
autogen/dotnet/website/articles/Use-function-call.md
autogen
Passing function definitions in @AutoGen.Core.GenerateReplyOptions when invoking an agent You can also pass function definitions in @AutoGen.Core.GenerateReplyOptions when invoking an agent. This is useful when you want to override the function definitions passed to the agent when creating it. [!code-csharp[assistant ...
GitHub
autogen
autogen/dotnet/website/articles/Use-function-call.md
autogen
Register an agent with @AutoGen.Core.FunctionCallMiddleware to process and invoke function calls You can also register an agent with @AutoGen.Core.FunctionCallMiddleware to process and invoke function calls. This is useful when you want to process and invoke function calls in a more flexible way. [!code-csharp[assista...
GitHub
autogen
autogen/dotnet/website/articles/Use-function-call.md
autogen
Invoke function call inside an agent To invoke a function instead of returning the function call object, you can pass its function call wrapper, such as `WeatherReportWrapper`, to the agent via `functionMap`: [!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/Fun...
GitHub
autogen
autogen/dotnet/website/articles/Use-function-call.md
autogen
Invoke function call by another agent You can also use another agent to invoke the function call from one agent. This is a useful pattern in two-agent chat, where one agent is used as a function proxy to invoke the function call from another agent. Once the function call is invoked, the result can be returned to the or...
GitHub
autogen
autogen/dotnet/website/articles/Create-your-own-agent.md
autogen
## Coming soon
GitHub
autogen
autogen/dotnet/website/articles/OpenAIChatAgent-connect-to-third-party-api.md
autogen
The following example shows how to connect to third-party OpenAI API using @AutoGen.OpenAI.OpenAIChatAgent. [![](https://img.shields.io/badge/Open%20on%20Github-grey?logo=github)](https://github.com/microsoft/autogen/blob/main/dotnet/samples/AutoGen.OpenAI.Sample/Connect_To_Ollama.cs)
GitHub
autogen
autogen/dotnet/website/articles/OpenAIChatAgent-connect-to-third-party-api.md
autogen
Overview A lot of LLM applications/platforms support spinning up a chat server that is compatible with OpenAI API, such as LM Studio, Ollama, Mistral etc. This means that you can connect to these servers using the @AutoGen.OpenAI.OpenAIChatAgent. > [!NOTE] > Some platforms might not support all the features of OpenAI ...
GitHub
autogen
autogen/dotnet/website/articles/OpenAIChatAgent-connect-to-third-party-api.md
autogen
Prerequisites - Install the following packages: ```bash dotnet add package AutoGen.OpenAI --version AUTOGEN_VERSION ``` - Spin up a chat server that is compatible with OpenAI API. The following example uses Ollama as the chat server, and llama3 as the llm model. ```bash ollama serve ```
GitHub
autogen
autogen/dotnet/website/articles/OpenAIChatAgent-connect-to-third-party-api.md
autogen
Steps - Import the required namespaces: [!code-csharp[](../../samples/AutoGen.OpenAI.Sample/Connect_To_Ollama.cs?name=using_statement)] - Create a `CustomHttpClientHandler` class. The `CustomHttpClientHandler` class is used to customize the HttpClientHandler. In this example, we override the `SendAsync` method to red...
GitHub
autogen
autogen/dotnet/website/articles/OpenAIChatAgent-connect-to-third-party-api.md
autogen
Sample Output The following is the sample output of the code snippet above: ![output](../images/articles/ConnectTo3PartyOpenAI/output.gif)
GitHub
autogen
autogen/dotnet/website/articles/Middleware-overview.md
autogen
`Middleware` is a key feature in AutoGen.Net that enables you to customize the behavior of @AutoGen.Core.IAgent.GenerateReplyAsync*. It's similar to the middleware concept in ASP.Net and is widely used in AutoGen.Net for various scenarios, such as function call support, converting messages of different types, printing mess...
GitHub
autogen
autogen/dotnet/website/articles/Middleware-overview.md
autogen
Use middleware in an agent To use middleware in an existing agent, you can either create a @AutoGen.Core.MiddlewareAgent on top of the original agent or register middleware functions to the original agent. ### Create @AutoGen.Core.MiddlewareAgent on top of the original agent [!code-csharp[](../../samples/AutoGen.Basic...
GitHub
autogen
autogen/dotnet/website/articles/Middleware-overview.md
autogen
Short-circuit the next agent The example below shows how to short-circuit the inner agent [!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/MiddlewareAgentCodeSnippet.cs?name=short_circuit_middleware_agent)] > [!Note] > When multiple middleware functions are registered, the order of middleware functions ...
GitHub
autogen
autogen/dotnet/website/articles/Middleware-overview.md
autogen
Streaming middleware You can also modify the behavior of @AutoGen.Core.IStreamingAgent.GenerateStreamingReplyAsync* by registering streaming middleware to it. One example is @AutoGen.OpenAI.OpenAIChatRequestMessageConnector which converts `StreamingChatCompletionsUpdate` to one of `AutoGen.Core.TextMessageUpdate` or `A...
GitHub
autogen
autogen/dotnet/website/articles/Function-call-middleware.md
autogen
# Coming soon
GitHub
autogen
autogen/dotnet/website/articles/Create-an-agent.md
autogen
## AssistantAgent [`AssistantAgent`](../api/AutoGen.AssistantAgent.yml) is a built-in agent in `AutoGen` that acts as an AI assistant. It uses an LLM to generate responses to user input. It also supports function call if the underlying LLM model supports it (e.g. `gpt-3.5-turbo-0613`).
GitHub
autogen
autogen/dotnet/website/articles/Create-an-agent.md
autogen
Create an `AssistantAgent` using OpenAI model. [!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/CreateAnAgent.cs?name=code_snippet_1)]
GitHub
autogen
autogen/dotnet/website/articles/Create-an-agent.md
autogen
Create an `AssistantAgent` using Azure OpenAI model. [!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/CreateAnAgent.cs?name=code_snippet_2)]
GitHub
autogen
autogen/dotnet/website/articles/Group-chat-overview.md
autogen
@AutoGen.Core.IGroupChat is a fundamental feature in AutoGen. It provides a way to organize multiple agents under the same context and work together to resolve a given task. In AutoGen, there are two types of group chat: - @AutoGen.Core.RoundRobinGroupChat : This group chat runs agents in a round-robin sequence. The c...
GitHub
autogen
autogen/dotnet/website/articles/Print-message-middleware.md
autogen
@AutoGen.Core.PrintMessageMiddleware is a built-in @AutoGen.Core.IMiddleware that pretty prints @AutoGen.Core.IMessage to the console. > [!NOTE] > @AutoGen.Core.PrintMessageMiddleware supports the following @AutoGen.Core.IMessage types: > - @AutoGen.Core.TextMessage > - @AutoGen.Core.MultiModalMessage > - @AutoGen.Core.Tool...
GitHub
autogen
autogen/dotnet/website/articles/Print-message-middleware.md
autogen
Use @AutoGen.Core.PrintMessageMiddleware in an agent You can use @AutoGen.Core.PrintMessageMiddlewareExtension.RegisterPrintMessage* to register the @AutoGen.Core.PrintMessageMiddleware to an agent. [!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/PrintMessageMiddlewareCodeSnippet.cs?name=PrintMessageMid...
GitHub
autogen
autogen/dotnet/website/articles/Print-message-middleware.md
autogen
Streaming message support @AutoGen.Core.PrintMessageMiddleware also supports streaming message types like @AutoGen.Core.TextMessageUpdate and @AutoGen.Core.ToolCallMessageUpdate. If you register @AutoGen.Core.PrintMessageMiddleware to a @AutoGen.Core.IStreamingAgent, it will format the streaming message and print it t...
GitHub
autogen
autogen/dotnet/website/articles/Function-call-with-ollama-and-litellm.md
autogen
This example shows how to use function call with local LLM models, with [Ollama](https://ollama.com/) as the local model provider and a [LiteLLM](https://docs.litellm.ai/docs/) proxy server providing an openai-api compatible interface. [![](https://img.shields.io/badge/Open%20on%20Github-grey?logo=github)](https://gith...
GitHub
autogen
autogen/dotnet/website/articles/Function-call-with-ollama-and-litellm.md
autogen
Install Ollama and pull `dolphincoder:latest` model First, install Ollama by following the instructions on the [Ollama website](https://ollama.com/). After installing Ollama, pull the `dolphincoder:latest` model by running the following command: ```bash ollama pull dolphincoder:latest ```
GitHub
autogen
autogen/dotnet/website/articles/Function-call-with-ollama-and-litellm.md
autogen
Install LiteLLM and start the proxy server You can install LiteLLM by following the instructions on the [LiteLLM website](https://docs.litellm.ai/docs/). ```bash pip install 'litellm[proxy]' ``` Then, start the proxy server by running the following command: ```bash litellm --model ollama_chat/dolphincoder --port 400...
GitHub
autogen
autogen/dotnet/website/articles/Function-call-with-ollama-and-litellm.md
autogen
Install AutoGen and AutoGen.SourceGenerator In your project, install the AutoGen and AutoGen.SourceGenerator package using the following command: ```bash dotnet add package AutoGen dotnet add package AutoGen.SourceGenerator ``` The `AutoGen.SourceGenerator` package is used to automatically generate type-safe `Functio...
GitHub
autogen
autogen/dotnet/website/articles/Function-call-with-ollama-and-litellm.md
autogen
Define `WeatherReport` function and create @AutoGen.Core.FunctionCallMiddleware Create a `public partial` class to host the methods you want to use in AutoGen agents. The method has to be a `public` instance method and its return type must be `Task<string>`. After the methods are defined, mark them with `AutoGen.Core....
GitHub
autogen
autogen/dotnet/website/articles/Function-call-with-ollama-and-litellm.md
autogen
Create @AutoGen.OpenAI.OpenAIChatAgent with `GetWeatherReport` tool and chat with it Because LiteLLM proxy server is openai-api compatible, we can use @AutoGen.OpenAI.OpenAIChatAgent to connect to it as a third-party openai-api provider. The agent is also registered with a @AutoGen.Core.FunctionCallMiddleware which co...
GitHub
autogen
autogen/dotnet/website/articles/Roundrobin-chat.md
autogen
@AutoGen.Core.RoundRobinGroupChat is a group chat that invokes agents in a round-robin order. It's useful when you want to call multiple agents in a fixed sequence, for example, asking a search agent to retrieve related information followed by a summarization agent to summarize the information. Besides, it is also used by @A...
GitHub
autogen
autogen/dotnet/website/articles/OpenAIChatAgent-use-json-mode.md
autogen
The following example shows how to enable JSON mode in @AutoGen.OpenAI.OpenAIChatAgent. [![](https://img.shields.io/badge/Open%20on%20Github-grey?logo=github)](https://github.com/microsoft/autogen/blob/main/dotnet/samples/AutoGen.OpenAI.Sample/Use_Json_Mode.cs)
GitHub
autogen
autogen/dotnet/website/articles/OpenAIChatAgent-use-json-mode.md
autogen
What is JSON mode? JSON mode is a new feature in OpenAI which allows you to instruct the model to always respond with a valid JSON object. This is useful when you want to constrain the model output to JSON format only. > [!NOTE] > Currently, JSON mode is only supported by `gpt-4-turbo-preview` and `gpt-3.5-turbo-0125`. Fo...
GitHub
autogen
autogen/dotnet/website/articles/OpenAIChatAgent-use-json-mode.md
autogen
How to enable JSON mode in OpenAIChatAgent. To enable JSON mode for @AutoGen.OpenAI.OpenAIChatAgent, set `responseFormat` to `ChatCompletionsResponseFormat.JsonObject` when creating the agent. Note that when enabling JSON mode, you also need to instruct the agent to output JSON format in its system message. [!code-cs...
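As a hedged sketch of the step above (the parameter names are assumptions and may differ between AutoGen.OpenAI versions), enabling JSON mode when constructing the agent might look like:

```csharp
// Hypothetical sketch; `client` is an OpenAIClient created elsewhere.
// Note the system message also instructs the model to output JSON,
// which is required when JSON mode is enabled.
var agent = new OpenAIChatAgent(
    openAIClient: client,
    name: "assistant",
    modelName: "gpt-3.5-turbo-0125",
    systemMessage: "You are a helpful assistant. Always reply in JSON format.",
    responseFormat: ChatCompletionsResponseFormat.JsonObject);
```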
GitHub
autogen
autogen/dotnet/website/articles/Create-your-own-middleware.md
autogen
## Coming soon
GitHub
autogen
autogen/dotnet/website/articles/Create-type-safe-function-call.md
autogen
## Create type-safe function call using AutoGen.SourceGenerator `AutoGen` provides a source generator to ease the burden of manually crafting function definitions and function call wrappers from a function. To use this feature, simply add the `AutoGen.SourceGenerator` package to your project and decorate your function ...
GitHub
autogen
autogen/dotnet/website/articles/getting-start.md
autogen
### Get start with AutoGen for dotnet [![dotnet-ci](https://github.com/microsoft/autogen/actions/workflows/dotnet-build.yml/badge.svg)](https://github.com/microsoft/autogen/actions/workflows/dotnet-build.yml) [![NuGet version](https://badge.fury.io/nu/AutoGen.Core.svg)](https://badge.fury.io/nu/AutoGen.Core) Firstly, ...
GitHub
autogen
autogen/dotnet/website/articles/MistralChatAgent-use-function-call.md
autogen
## Use tool in MistralChatAgent The following example shows how to enable tool support in @AutoGen.Mistral.MistralClientAgent by creating a `GetWeatherAsync` function and passing it to the agent. Firstly, you need to install the following packages: ```bash dotnet add package AutoGen.Mistral dotnet add package AutoGen...
GitHub
autogen
autogen/dotnet/website/articles/Run-dotnet-code.md
autogen
`AutoGen` provides a built-in feature to run code snippet from agent response. Currently the following languages are supported: - dotnet More languages will be supported in the future.
GitHub
autogen
autogen/dotnet/website/articles/Run-dotnet-code.md
autogen
What is a code snippet? A code snippet in agent response is a code block with a language identifier. For example: [!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/RunCodeSnippetCodeSnippet.cs?name=code_snippet_1_3)]
GitHub
autogen
autogen/dotnet/website/articles/Run-dotnet-code.md
autogen
Why is running code snippets useful? The ability to run code snippets can greatly extend the capability of an agent, because it enables the agent to resolve tasks by writing and running code, which is much more powerful than just returning a text response. For example, in a data analysis scenario, an agent can resolve tasks like...
GitHub
autogen
autogen/dotnet/website/articles/Run-dotnet-code.md
autogen
Use the dotnet interactive kernel to execute code snippets The built-in feature of running dotnet code snippets is provided by [dotnet-interactive](https://github.com/dotnet/interactive). To run a dotnet code snippet, you need to install the following package to your project, which provides the integration with dotnet-interac...
GitHub
autogen
autogen/dotnet/website/articles/Run-dotnet-code.md
autogen
Run python code snippet To run python code, firstly you need to have python installed on your machine, then you need to set up ipykernel and jupyter in your environment. ```bash pip install ipykernel pip install jupyter ``` After `ipykernel` and `jupyter` are installed, you can confirm the ipykernel is installed corr...
GitHub
autogen
autogen/dotnet/website/articles/Run-dotnet-code.md
autogen
Further reading You can refer to the following examples for running code snippet in agentic workflow: - Dynamic_GroupChat_Coding_Task: [![](https://img.shields.io/badge/Open%20on%20Github-grey?logo=github)](https://github.com/microsoft/autogen/blob/main/dotnet/samples/AutoGen.BasicSample/Example04_Dynamic_GroupChat_Co...
GitHub
autogen
autogen/dotnet/website/articles/Two-agent-chat.md
autogen
In `AutoGen`, you can start a conversation between two agents using @AutoGen.Core.AgentExtension.InitiateChatAsync* or one of the @AutoGen.Core.AgentExtension.SendAsync* APIs. When the conversation starts, the sender agent will first send a message to the receiver agent, then the receiver agent will generate a reply and send it back...
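As a hedged sketch of the flow above (agent construction is omitted, and the parameter names are assumptions that may differ between AutoGen.Net versions), starting a two-agent conversation might look like:

```csharp
// Hypothetical sketch: the student initiates a chat with the teacher,
// capped at a maximum number of conversation rounds.
var chatHistory = await studentAgent.InitiateChatAsync(
    receiver: teacherAgent,
    message: "Please create three math questions for me.",
    maxRound: 10);
```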
GitHub
autogen
autogen/dotnet/website/articles/Two-agent-chat.md
autogen
A basic example The following example shows how to start a conversation between a teacher agent and a student agent, where the student agent starts the conversation by asking the teacher to create math questions. > [!TIP] > You can use @AutoGen.Core.PrintMessageMiddlewareExtension.RegisterPrintMessage* to pretty print th...
GitHub
autogen
autogen/dotnet/website/articles/Use-graph-in-group-chat.md
autogen
Sometimes, you may want to add more control on how the next agent is selected in a @AutoGen.Core.GroupChat based on the task you want to resolve. For example, in the previous [code writing example](./Group-chat.md), the original code interpreter workflow can be improved by the following diagram because it's not necessa...
GitHub
autogen
autogen/dotnet/website/articles/Consume-LLM-server-from-LM-Studio.md
autogen
## Consume LLM server from LM Studio You can use @AutoGen.LMStudio.LMStudioAgent from the `AutoGen.LMStudio` package to consume the openai-like API from an LM Studio local server. ### What's LM Studio [LM Studio](https://lmstudio.ai/) is an app that allows you to deploy and run inference with hundreds of thousands of open-source language ...
GitHub
autogen
autogen/dotnet/website/articles/Agent-overview.md
autogen
`Agent` is one of the most fundamental concepts in AutoGen.Net. In AutoGen.Net, you construct a single agent to process a specific task, extend an agent using [Middlewares](./Middleware-overview.md), and construct a multi-agent workflow using [GroupChat](./Group-chat-overview.md). > [!NOTE] > Every agent i...
GitHub
autogen
autogen/dotnet/website/articles/Agent-overview.md
autogen
Create an agent - Create an @AutoGen.AssistantAgent: [Create an assistant agent](./Create-an-agent.md) - Create an @AutoGen.OpenAI.OpenAIChatAgent: [Create an OpenAI chat agent](./OpenAIChatAgent-simple-chat.md) - Create a @AutoGen.SemanticKernel.SemanticKernelAgent: [Create a semantic kernel agent](./AutoGen.SemanticK...
GitHub
autogen
autogen/dotnet/website/articles/Agent-overview.md
autogen
Chat with an agent To chat with an agent, you typically invoke @AutoGen.Core.IAgent.GenerateReplyAsync*. On top of that, you can also use one of the extension methods, such as @AutoGen.Core.AgentExtension.SendAsync*, as a shortcut. > [!NOTE] > AutoGen provides a list of built-in message types like @AutoGen.Core.TextMess...
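A sketch contrasting the shortcut with the lower-level call (assuming `agent` is an existing `IAgent` instance):

```csharp
// Sketch: SendAsync wraps a string into a TextMessage for you.
var reply = await agent.SendAsync("What's the weather like today?");

// Roughly equivalent lower-level call via GenerateReplyAsync:
var reply2 = await agent.GenerateReplyAsync(
    messages: new[] { new TextMessage(Role.User, "What's the weather like today?") });
```

`SendAsync` is usually the more convenient entry point in application code; `GenerateReplyAsync` is what middlewares and custom agents implement.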
GitHub
autogen
autogen/dotnet/website/articles/Agent-overview.md
autogen
Streaming chat If an agent implements @AutoGen.Core.IStreamingAgent, you can use @AutoGen.Core.IStreamingAgent.GenerateStreamingReplyAsync* to chat with the agent in a streaming way. Note that you will need to process the streaming updates yourself. - Send a @AutoGen.Core.TextMessage to an agent via @AutoGen.Core.IS...
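A sketch of consuming the stream (assuming `streamingAgent` is an existing `IStreamingAgent`; the update type name reflects AutoGen's built-in streaming messages, but check the version you use):

```csharp
// Sketch: iterate over streaming updates as they arrive.
// Text chunks typically arrive as TextMessageUpdate instances.
var question = new TextMessage(Role.User, "Tell me a joke.");

await foreach (var update in streamingAgent.GenerateStreamingReplyAsync(new[] { question }))
{
    if (update is TextMessageUpdate textUpdate)
    {
        Console.Write(textUpdate.Content); // print each chunk as it streams in
    }
}
```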
GitHub
autogen
autogen/dotnet/website/articles/Agent-overview.md
autogen
Register middleware to an agent @AutoGen.Core.IMiddleware and @AutoGen.Core.IStreamingMiddleware are used to extend the behavior of @AutoGen.Core.IAgent.GenerateReplyAsync* and @AutoGen.Core.IStreamingAgent.GenerateStreamingReplyAsync*. You can register middleware to an agent to customize the behavior of the agent on t...
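A sketch of registering an inline middleware (assuming `agent` is an existing `IAgent`; the logging shown here is illustrative):

```csharp
// Sketch: wrap GenerateReplyAsync with a delegate middleware.
// The delegate sees the incoming messages and the inner agent, and can
// inspect or rewrite both the request and the reply.
var agentWithMiddleware = agent.RegisterMiddleware(async (messages, options, innerAgent, ct) =>
{
    Console.WriteLine($"Sending {messages.Count()} message(s) to {innerAgent.Name}");
    var reply = await innerAgent.GenerateReplyAsync(messages, options, ct);
    Console.WriteLine($"Received reply: {reply.GetContent()}");
    return reply;
});
```

Middlewares compose: each `RegisterMiddleware` call wraps the previous agent, so the last-registered middleware runs outermost.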
GitHub
autogen
autogen/dotnet/website/articles/Agent-overview.md
autogen
Group chat You can construct a multi-agent workflow using @AutoGen.Core.IGroupChat. In AutoGen.Net, there are two types of group chat: @AutoGen.Core.SequentialGroupChat: Orchestrates the agents in the group chat in a fixed, sequential order. @AutoGen.Core.GroupChat: Provides a more dynamic yet controllable way to orchestrate...
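A sketch of the fixed-order variant (assuming `coder` and `reviewer` are existing `IAgent` instances; the method name `CallAsync` reflects the `IGroupChat` interface, but verify it against your AutoGen.Net version):

```csharp
// Sketch: agents take turns in the order they were registered.
var groupChat = new SequentialGroupChat(agents: new[] { coder, reviewer });

var chatHistory = await groupChat.CallAsync(
    conversationWithName: new[] { new TextMessage(Role.User, "Write a hello-world program in C#.") },
    maxRound: 4);
```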
GitHub
autogen
autogen/dotnet/website/articles/AutoGen-Mistral-Overview.md
autogen
## AutoGen.Mistral overview AutoGen.Mistral provides the following agent(s) to connect to [Mistral.AI](https://mistral.ai/) platform. - @AutoGen.Mistral.MistralClientAgent: A slim wrapper agent over @AutoGen.Mistral.MistralClient. ### Get started with AutoGen.Mistral To get started with AutoGen.Mistral, follow the [...
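A sketch of creating the agent (the model name is an example; pick any model available on the Mistral.AI platform, and supply your own API key):

```csharp
// Sketch: MistralClientAgent is a slim wrapper over MistralClient.
var client = new MistralClient(apiKey: Environment.GetEnvironmentVariable("MISTRAL_API_KEY"));

var agent = new MistralClientAgent(
    client,
    name: "assistant",
    model: "open-mistral-7b");

var reply = await agent.SendAsync("What is 2 + 2?");
```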
GitHub
autogen
autogen/dotnet/website/articles/Built-in-messages.md
autogen
## An overview of built-in @AutoGen.Core.IMessage types Starting from 0.0.9, AutoGen introduces the @AutoGen.Core.IMessage and @AutoGen.Core.IMessage`1 types to provide a unified message interface for different agents. @AutoGen.Core.IMessage is a non-generic interface that represents a message. The @AutoGen.Core.IMes...
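A sketch of constructing a few of the built-in types (the image URL and the `MessageEnvelope` helper shown in the comment are illustrative):

```csharp
// Sketch: built-in, provider-agnostic message types.
var text = new TextMessage(Role.User, "Hello");
var image = new ImageMessage(Role.User, new Uri("https://example.com/cat.png"));
var multiModal = new MultiModalMessage(Role.User, new IMessage[] { text, image });

// IMessage<T> wraps an arbitrary provider-specific payload, e.g.:
// var wrapped = MessageEnvelope.Create(chatCompletions, from: agent.Name);
```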
GitHub
autogen
autogen/dotnet/website/articles/Installation.md
autogen
### Current version: [![NuGet version](https://badge.fury.io/nu/AutoGen.Core.svg)](https://badge.fury.io/nu/AutoGen.Core) AutoGen.Net provides the following packages; you can choose to install one or more of them based on your needs: - `AutoGen`: The one-in-all package. This package depends on `AutoGen.Co...
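For example, to install the one-in-all package into a project (swap in `AutoGen.Core` or another sub-package if you only need part of the stack):

```bash
dotnet add package AutoGen
```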
GitHub
autogen
autogen/dotnet/website/articles/Function-call-overview.md
autogen
## Overview of function call In some LLM models, you can provide a list of function definitions to the model. A function definition is essentially a JSON schema object that describes the function, its parameters, and its return value. These function definitions tell the model what "functions" are availa...
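In AutoGen.Net, a sketch of exposing a function to the model uses the `[Function]` attribute (the class and method names are illustrative; the `AutoGen.SourceGenerator` package generates the function definition and wrapper from the XML doc comments):

```csharp
// Sketch: describe a callable function via the [Function] attribute.
// The source generator turns the doc comments into the schema the model sees.
public partial class WeatherTool
{
    /// <summary>
    /// Get the current weather for a city.
    /// </summary>
    /// <param name="city">The city name.</param>
    [Function]
    public async Task<string> GetWeather(string city)
    {
        // A real implementation would call a weather API here.
        return $"The weather in {city} is sunny.";
    }
}
```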
GitHub
autogen
autogen/dotnet/website/articles/OpenAIChatAgent-support-more-messages.md
autogen
By default, @AutoGen.OpenAI.OpenAIChatAgent only supports the @AutoGen.Core.IMessage&lt;T&gt; type where `T` is the original request or response message type from `Azure.AI.OpenAI`. To support more AutoGen built-in message types like @AutoGen.Core.TextMessage, @AutoGen.Core.ImageMessage, @AutoGen.Core.MultiModalMessage and so on, you...
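A sketch of enabling the built-in types via the message connector (assuming `client` is an existing `OpenAIClient` and the model name is an example):

```csharp
// Sketch: RegisterMessageConnector translates AutoGen's built-in message
// types (TextMessage, ImageMessage, ...) to and from Azure.AI.OpenAI types.
var agent = new OpenAIChatAgent(
        openAIClient: client,          // assumption: an existing OpenAIClient
        name: "assistant",
        modelName: "gpt-3.5-turbo")
    .RegisterMessageConnector()
    .RegisterPrintMessage();

var reply = await agent.SendAsync(new TextMessage(Role.User, "Hello"));
```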
GitHub
autogen
autogen/dotnet/website/articles/MistralChatAgent-count-token-usage.md
autogen
The following example shows how to create a `MistralAITokenCounterMiddleware` @AutoGen.Core.IMiddleware and count the token usage when chatting with @AutoGen.Mistral.MistralClientAgent. ### Overview To collect the token usage for the entire chat session, one easy solution is to simply collect all the responses from the agent...
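A sketch of such a middleware (the Mistral response type and its `Usage` property names are assumptions; check `AutoGen.Mistral` for the exact shapes):

```csharp
// Sketch: collect every typed reply and sum the reported completion tokens.
public class MistralAITokenCounterMiddleware : IMiddleware
{
    private readonly List<ChatCompletionResponse> _responses = new();

    public string? Name => nameof(MistralAITokenCounterMiddleware);

    public async Task<IMessage> InvokeAsync(
        MiddlewareContext context,
        IAgent agent,
        CancellationToken cancellationToken = default)
    {
        var reply = await agent.GenerateReplyAsync(context.Messages, context.Options, cancellationToken);

        // Keep the raw response so we can read its token usage later.
        if (reply is IMessage<ChatCompletionResponse> typedReply)
        {
            _responses.Add(typedReply.Content);
        }

        return reply;
    }

    // Assumption: Usage.CompletionTokens mirrors the Mistral API's usage field.
    public int GetCompletionTokenCount() =>
        _responses.Sum(r => r.Usage?.CompletionTokens ?? 0);
}
```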