source | repository | file | label | content |
|---|---|---|---|---|
GitHub | autogen | autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/core-concepts/agent-and-multi-agent-application.md | autogen | # Agent and Multi-Agent Applications An **agent** is a software entity that communicates via messages, maintains its own state, and performs actions in response to received messages or changes in its state. These actions may modify the agent's state and produce external effects, such as updating message logs, sending ... |
GitHub | autogen | autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/core-concepts/agent-and-multi-agent-application.md | autogen | Characteristics of Multi-Agent Applications In multi-agent applications, agents may: - Run within the same process or on the same machine - Operate across different machines or organizational boundaries - Be implemented in diverse programming languages and make use of different AI models or instructions - Work togeth... |
GitHub | autogen | autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/core-concepts/index.md | autogen | # Core Concepts The following sections describe the main concepts of the Core API and the system architecture. ```{toctree} :maxdepth: 1 agent-and-multi-agent-application architecture api-layers application-stack agent-identity-and-lifecycle topic-and-subscription ``` |
GitHub | autogen | autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/core-concepts/application-stack.md | autogen | # Application Stack AutoGen core is designed to be an unopinionated framework that can be used to build a wide variety of multi-agent applications. It is not tied to any specific agent abstraction or multi-agent pattern. The following diagram shows the application stack.  ... |
GitHub | autogen | autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/core-concepts/application-stack.md | autogen | An Example Application Consider a concrete example of a multi-agent application for code generation. The application consists of three agents: Coder Agent, Executor Agent, and Reviewer Agent. The following diagram shows the data flow between the agents, and the message types exchanged between them.  The diagram above shows the relationship between topic and subscription. An agent runtime keeps track of the subscriptions and uses them to deliver messages to agents. If a topic has no subscription, messages published to this to... |
GitHub | autogen | autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/core-concepts/topic-and-subscription.md | autogen | Type-based Subscription A type-based subscription maps a topic type to an agent type (see [agent ID](./agent-identity-and-lifecycle.md#agent-id)). It declares an unbounded mapping from topics to agent IDs without knowing the exact topic sources and agent keys. The mechanism is simple: any topic matching the type-based... |
GitHub | autogen | autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/core-concepts/agent-identity-and-lifecycle.md | autogen | # Agent Identity and Lifecycle The agent runtime manages agents' identities and lifecycles. An application does not create agents directly; rather, it registers an agent type with a factory function for agent instances. In this section, we explain how agents are identified and created by the runtime. |
GitHub | autogen | autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/core-concepts/agent-identity-and-lifecycle.md | autogen | Agent ID Agent ID uniquely identifies an agent instance within an agent runtime -- including distributed runtime. It is the "address" of the agent instance for receiving messages. It has two components: agent type and agent key. ```{note} Agent ID = (Agent Type, Agent Key) ``` The agent type is not an agent class. I... |
GitHub | autogen | autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/core-concepts/agent-identity-and-lifecycle.md | autogen | Agent Lifecycle When a runtime delivers a message to an agent instance given its ID, it either fetches the instance, or creates it if it does not exist.  The runtime is also responsible for "paging in" or "out" agent instances to conserve resources and balance load across multi... |
GitHub | autogen | autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/core-concepts/architecture.md | autogen | # Agent Runtime Environments At the foundation level, the framework provides a _runtime environment_, which facilitates communication between agents, manages their identities and lifecycles, and enforces security and privacy boundaries. It supports two types of runtime environment: *standalone* and *distributed*. Both... |
GitHub | autogen | autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/core-concepts/architecture.md | autogen | Standalone Agent Runtime Standalone runtime is suitable for single-process applications where all agents are implemented in the same programming language and run in the same process. In the Python API, an example of standalone runtime is the {py:class}`~autogen_core.application.SingleThreadedAgentRuntime`. The fo... |
GitHub | autogen | autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/core-concepts/architecture.md | autogen | Distributed Agent Runtime Distributed runtime is suitable for multi-process applications where agents may be implemented in different programming languages and run on different machines.  A distributed runtime, as shown in the diagram above, consists of a _host... |
GitHub | autogen | autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/core-concepts/api-layers.md | autogen | # API Layers The API consists of the following layers: - {py:mod}`autogen_core.base` - {py:mod}`autogen_core.application` - {py:mod}`autogen_core.components` The following diagram shows the relationship between the layers.  The {py:mod}`autogen_core.base` layer defines the core interfaces and ... |
GitHub | autogen | autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/cookbook/index.md | autogen | # Cookbook This section contains a collection of recipes that demonstrate how to use the Core API features. |
GitHub | autogen | autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/cookbook/index.md | autogen | List of recipes ```{toctree} :maxdepth: 1 azure-openai-with-aad-auth termination-with-intervention tool-use-with-intervention extracting-results-with-an-agent openai-assistant-agent langgraph-agent llamaindex-agent local-llms-ollama-litellm instrumenting topic-subscription-scenarios structured-output-agent ``` |
GitHub | autogen | autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/cookbook/instrumenting.md | autogen | # Instrumenting your code locally AutoGen supports instrumenting your code using [OpenTelemetry](https://opentelemetry.io). This allows you to collect traces and logs from your code and send them to a backend of your choice. While debugging, you can use a local backend such as [Aspire](https://aspiredashboard.com/)... |
GitHub | autogen | autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/cookbook/instrumenting.md | autogen | Setting up Aspire Follow the instructions [here](https://learn.microsoft.com/en-us/dotnet/aspire/fundamentals/dashboard/overview?tabs=bash#standalone-mode) to set up Aspire in standalone mode. This will require Docker to be installed on your machine. |
GitHub | autogen | autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/cookbook/instrumenting.md | autogen | Instrumenting your code Once you have a dashboard set up, it's a matter of sending traces and logs to it. You can follow the steps in the [Telemetry Guide](../framework/telemetry.md) to set up the OpenTelemetry SDK and exporter. After instrumenting your code with the Aspire Dashboard running, you should see trace... |
GitHub | autogen | autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/cookbook/instrumenting.md | autogen | Observing LLM calls using OpenAI If you are using the OpenAI package, you can observe the LLM calls by setting up OpenTelemetry for that library. We use [opentelemetry-instrumentation-openai](https://pypi.org/project/opentelemetry-instrumentation-openai/) in this example. Install the package: ```bash pip instal... |
GitHub | autogen | autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/cookbook/azure-openai-with-aad-auth.md | autogen | # Azure OpenAI with AAD Auth This guide will show you how to use the Azure OpenAI client with Azure Active Directory (AAD) authentication. The identity used must be assigned the [**Cognitive Services OpenAI User**](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/role-based-access-control#cognitive-s... |
GitHub | autogen | autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/cookbook/azure-openai-with-aad-auth.md | autogen | Install Azure Identity client The Azure identity client is used to authenticate with Azure Active Directory. ```sh pip install azure-identity ``` |
GitHub | autogen | autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/cookbook/azure-openai-with-aad-auth.md | autogen | Using the Model Client ```python from autogen_ext.models import AzureOpenAIChatCompletionClient from azure.identity import DefaultAzureCredential, get_bearer_token_provider # Create the token provider token_provider = get_bearer_token_provider( DefaultAzureCredential(), "https://cognitiveservices.azure.com/.defau... |
GitHub | autogen | autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/design-patterns/index.md | autogen | # Multi-Agent Design Patterns Agents can work together in a variety of ways to solve problems. Research works like [AutoGen](https://aka.ms/autogen-paper), [MetaGPT](https://arxiv.org/abs/2308.00352) and [ChatDev](https://arxiv.org/abs/2307.07924) have shown multi-agent systems outperforming single-agent systems at c... |
GitHub | autogen | autogen/python/packages/agbench/CONTRIBUTING.md | autogen | # Contributing to AutoGenBench As part of the broader AutoGen project, AutoGenBench welcomes community contributions. Contributions are subject to AutoGen's [contribution guidelines](https://microsoft.github.io/autogen/docs/Contribute), as well as a few additional AutoGenBench-specific requirements outlined here. You ... |
GitHub | autogen | autogen/python/packages/agbench/CONTRIBUTING.md | autogen | General Contribution Requirements We ask that all contributions to AutoGenBench adhere to the following: - Follow AutoGen's broader [contribution guidelines](https://microsoft.github.io/autogen/docs/Contribute) - All AutoGenBench benchmarks should live in a subfolder of `/benchmarks` alongside `HumanEval`, `GAIA`, etc... |
GitHub | autogen | autogen/python/packages/agbench/CONTRIBUTING.md | autogen | Implementing and Running Benchmark Tasks At the core of any benchmark is a set of tasks. To implement tasks that are runnable by AutoGenBench, you must adhere to AutoGenBench's templating and scenario expansion algorithms, as outlined below. ### Task Definitions All tasks are stored in JSONL files (in subdirectories ... |
GitHub | autogen | autogen/python/packages/agbench/CONTRIBUTING.md | autogen | Task Instance Expansion Algorithm Once the tasks have been defined, as per above, they must be "instantiated" before they can be run. This instantiation happens automatically when the user issues the `agbench run` command and involves creating a local folder to share with Docker. Each instance and repetition gets its ... |
GitHub | autogen | autogen/python/packages/agbench/CONTRIBUTING.md | autogen | Scenario Execution Algorithm Once the task has been instantiated it is run (via run.sh). This script will execute the following steps: 1. If a file named `global_init.sh` is present, run it. 2. If a file named `scenario_init.sh` is present, run it. 3. Install the requirements.txt file (if running in Docker) 4. Run th... |
GitHub | autogen | autogen/python/packages/agbench/CONTRIBUTING.md | autogen | Integrating with the `tabulate` command The above details are sufficient for defining and running tasks, but if you wish to support the `agbench tabulate` command, a few additional steps are required. ### Tabulations If you wish to leverage the default tabulation logic, it is as simple as arranging your `scenario.py` file... |
GitHub | autogen | autogen/python/packages/agbench/CONTRIBUTING.md | autogen | Scripts/init_tasks.py Finally, you should provide a `Scripts/init_tasks.py` file in your benchmark folder and include a `main()` method therein. This `init_tasks.py` script is a great place to download benchmarks from their original sources and convert them to the JSONL format required by AutoGenBench: - See `Huma... |
GitHub | autogen | autogen/python/packages/agbench/README.md | autogen | # AutoGenBench AutoGenBench (agbench) is a tool for repeatedly running a set of pre-defined AutoGen tasks in a setting with tightly-controlled initial conditions. With each run, AutoGenBench will start from a blank slate. The agents being evaluated will need to work out what code needs to be written, and what librarie... |
GitHub | autogen | autogen/python/packages/agbench/README.md | autogen | Technical Specifications If you are already an AutoGenBench pro, and want the full technical specifications, please review the [contributor's guide](CONTRIBUTING.md). |
GitHub | autogen | autogen/python/packages/agbench/README.md | autogen | Docker Requirement AutoGenBench also requires Docker (Desktop or Engine). **It will not run in GitHub codespaces**, unless you opt for native execution (which is strongly discouraged). To install Docker Desktop see [https://www.docker.com/products/docker-desktop/](https://www.docker.com/products/docker-desktop/). If ... |
GitHub | autogen | autogen/python/packages/agbench/README.md | autogen | Installation and Setup [Deprecated currently] **To get the most out of AutoGenBench, the `agbench` package should be installed**. At present, the easiest way to do this is to install it via `pip`. If you would prefer working from source code (e.g., for development, or to utilize an alternate branch), simply clone th... |
GitHub | autogen | autogen/python/packages/agbench/README.md | autogen | A Typical Session Once AutoGenBench and necessary keys are installed, a typical session will look as follows: Navigate to HumanEval ```bash cd autogen/python/packages/agbench/benchmarks/HumanEval ``` **Note:** The following instructions are specific to the HumanEval benchmark. For other benchmarks, please refer to... |
GitHub | autogen | autogen/python/packages/agbench/README.md | autogen | Running AutoGenBench To run a benchmark (which executes the tasks, but does not compute metrics), simply execute: ``` cd [BENCHMARK] agbench run Tasks/*.jsonl ``` For example, ``` cd HumanEval agbench run Tasks/human_eval_MagenticOne.jsonl ``` The default is to run each task once. To run each scenario 10 times, us... |
GitHub | autogen | autogen/python/packages/agbench/README.md | autogen | Results By default, AutoGenBench stores results in a folder hierarchy with the following template: ``./results/[scenario]/[task_id]/[instance_id]`` For example, consider the following folders: ``./results/default_two_agents/two_agent_stocks/0`` ``./results/default_two_agents/two_agent_stocks/1`` ... ``./resul... |
GitHub | autogen | autogen/python/packages/agbench/README.md | autogen | Contributing or Defining New Tasks or Benchmarks If you would like to develop -- or even contribute -- your own tasks or benchmarks, please review the [contributor's guide](CONTRIBUTING.md) for complete technical details. |
GitHub | autogen | autogen/python/packages/agbench/benchmarks/README.md | autogen | # Benchmarking Agents This directory provides the ability to benchmark agents (e.g., agents built using AutoGen) using AgBench. Use the instructions below to prepare your environment for benchmarking. Once done, proceed to the relevant benchmark directory (e.g., `benchmarks/GAIA`) for further scenario-specific instructions. |
GitHub | autogen | autogen/python/packages/agbench/benchmarks/README.md | autogen | Setup on WSL 1. Install Docker Desktop. After installation, a restart is needed. Then open Docker Desktop and, under Settings > Resources > WSL Integration, enable integration with additional distros (Ubuntu) 2. Clone autogen and export `AUTOGEN_REPO_BASE`. This environment variable enables the Docker containers to use the corr... |
GitHub | autogen | autogen/python/packages/agbench/benchmarks/HumanEval/README.md | autogen | # HumanEval Benchmark This scenario implements a modified version of the [HumanEval](https://arxiv.org/abs/2107.03374) benchmark. Compared to the original benchmark, there are **two key differences** here: - A chat model rather than a completion model is used. - The agents get pass/fail feedback about their implement... |
GitHub | autogen | autogen/python/packages/agbench/benchmarks/HumanEval/README.md | autogen | Running the tasks Navigate to HumanEval ```bash cd benchmarks/HumanEval ``` Create a file called ENV.json with the following (required) contents (If you're using MagenticOne) ```json { "CHAT_COMPLETION_KWARGS_JSON": "{\"api_version\": \"2024-02-15-preview\", \"azure_endpoint\": \"YOUR_ENDPOINT/\", \"model_capa... |
GitHub | autogen | autogen/python/packages/agbench/benchmarks/HumanEval/README.md | autogen | References **Evaluating Large Language Models Trained on Code**`<br/>` Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela ... |
GitHub | autogen | autogen/python/packages/agbench/benchmarks/WebArena/README.md | autogen | # WebArena Benchmark This scenario implements the [WebArena](https://github.com/web-arena-x/webarena/tree/main) benchmark. The evaluation code has been modified from WebArena in [evaluation_harness](Templates/Common/evaluation_harness). We retain the license from WebArena and include it here: [LICENSE](Templates/Common/... |
GitHub | autogen | autogen/python/packages/agbench/benchmarks/WebArena/README.md | autogen | References Zhou, Shuyan, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng et al. "Webarena: A realistic web environment for building autonomous agents." arXiv preprint arXiv:2307.13854 (2023). |
GitHub | autogen | autogen/python/packages/agbench/benchmarks/GAIA/README.md | autogen | # GAIA Benchmark This scenario implements the [GAIA](https://arxiv.org/abs/2311.12983) agent benchmark. Before you begin, make sure you have followed the instructions in `../README.md` to prepare your environment. ### Setup Environment Variables for AgBench Navigate to GAIA ```bash cd benchmarks/GAIA ``` Create a file ... |
GitHub | autogen | autogen/python/packages/agbench/benchmarks/GAIA/README.md | autogen | References **GAIA: a benchmark for General AI Assistants** `<br/>` GrΓ©goire Mialon, ClΓ©mentine Fourrier, Craig Swift, Thomas Wolf, Yann LeCun, Thomas Scialom `<br/>` [https://arxiv.org/abs/2311.12983](https://arxiv.org/abs/2311.12983) |
GitHub | autogen | autogen/python/packages/agbench/benchmarks/AssistantBench/README.md | autogen | # AssistantBench Benchmark This scenario implements the [AssistantBench](https://assistantbench.github.io/) agent benchmark. Before you begin, make sure you have followed the instructions in `../README.md` to prepare your environment. We modify the evaluation code from AssistantBench in [Scripts](Scripts) and retain t... |
GitHub | autogen | autogen/python/packages/agbench/benchmarks/AssistantBench/README.md | autogen | References Yoran, Ori, Samuel Joseph Amouyal, Chaitanya Malaviya, Ben Bogin, Ofir Press, and Jonathan Berant. "AssistantBench: Can Web Agents Solve Realistic and Time-Consuming Tasks?." arXiv preprint arXiv:2407.15711 (2024). https://arxiv.org/abs/2407.15711 |
GitHub | autogen | autogen/python/packages/agbench/benchmarks/AssistantBench/Scripts/evaluate_utils/readme.md | autogen | These files were obtained from the creators of the AssistantBench benchmark and modified slightly. You can find the latest version at [https://huggingface.co/spaces/AssistantBench/leaderboard/tree/main/evaluation](https://huggingface.co/spaces/AssistantBench/leaderboard/tree/main/evaluation) |
GitHub | autogen | autogen/python/templates/new-package/{{cookiecutter.package_name}}/README.md | autogen | # {{cookiecutter.package_name}} |
GitHub | autogen | autogen/dotnet/README.md | autogen | # AutoGen for .NET There are two sets of packages here: AutoGen.\*, the older packages derived from AutoGen 0.2 for .NET - these will gradually be deprecated and ported into the new packages; and Microsoft.AutoGen.*, the new packages for .NET that use the event-driven model - These APIs are not yet stable and are subject to c... |
GitHub | autogen | autogen/dotnet/README.md | autogen | Samples You can find more examples under the [sample project](https://github.com/microsoft/autogen/tree/dotnet/samples/AutoGen.BasicSamples). |
GitHub | autogen | autogen/dotnet/README.md | autogen | Functionality - ConversableAgent - [x] function call - [x] code execution (dotnet only, powered by [`dotnet-interactive`](https://github.com/dotnet/interactive)) - Agent communication - [x] Two-agent chat - [x] Group chat - [ ] Enhanced LLM Inferences - Exclusive for dotnet - [x] Source generator for type... |
GitHub | autogen | autogen/dotnet/PACKAGING.md | autogen | # Packaging AutoGen.NET This document describes the steps to pack the `AutoGen.NET` project. |
GitHub | autogen | autogen/dotnet/PACKAGING.md | autogen | Prerequisites - .NET SDK |
GitHub | autogen | autogen/dotnet/PACKAGING.md | autogen | Create Package 1. **Restore and Build the Project** ```bash dotnet restore dotnet build --configuration Release --no-restore ``` 2. **Create the NuGet Package** ```bash dotnet pack --configuration Release --no-build ``` This will generate both the `.nupkg` file and the `.snupkg` file in the `./artifacts/package/rel... |
GitHub | autogen | autogen/dotnet/PACKAGING.md | autogen | Add a new project to the package list By default, when you add a new project to `AutoGen.sln`, it will not be included in the package list. To include the new project in the package, you need to add the following line to the new project's `.csproj` file, e.g. ```xml <Import Project="$(RepoRoot)/nuget/nuget-package.props" /... |
GitHub | autogen | autogen/dotnet/PACKAGING.md | autogen | Package versioning The version of the package is defined by `VersionPrefix` and `VersionPrefixForAutoGen0_2` in [MetaInfo.props](./eng/MetaInfo.props). If the name of your project starts with `AutoGen.`, the version will be set to `VersionPrefixForAutoGen0_2`, otherwise it will be set to `VersionPrefix`. |
GitHub | autogen | autogen/dotnet/src/AutoGen.LMStudio/README.md | autogen | ## AutoGen.LMStudio This package provides support for consuming an OpenAI-like API from the LM Studio local server. |
GitHub | autogen | autogen/dotnet/src/AutoGen.LMStudio/README.md | autogen | Installation To use `AutoGen.LMStudio`, add the following package to your `.csproj` file: ```xml <ItemGroup> <PackageReference Include="AutoGen.LMStudio" Version="AUTOGEN_VERSION" /> </ItemGroup> ``` |
GitHub | autogen | autogen/dotnet/src/AutoGen.LMStudio/README.md | autogen | Usage ```csharp using AutoGen.LMStudio; var localServerEndpoint = "localhost"; var port = 5000; var lmStudioConfig = new LMStudioConfig(localServerEndpoint, port); var agent = new LMStudioAgent( name: "agent", systemMessage: "You are an agent that helps the user to do some tasks.", lmStudioConfig: lmStudioConfig... |
GitHub | autogen | autogen/dotnet/src/AutoGen.LMStudio/README.md | autogen | Update history ### Update on 0.0.7 (2024-02-11) - Add `LMStudioAgent` to support consuming openai-like API from LMStudio local server. |
GitHub | autogen | autogen/dotnet/src/AutoGen.SourceGenerator/README.md | autogen | ### AutoGen.SourceGenerator This package carries a source generator that adds support for type-safe function definition generation. Simply mark a method with the `Function` attribute, and the source generator will generate a function definition and a function call wrapper for you. ### Get started First, add the following ... |
GitHub | autogen | autogen/dotnet/nuget/README.md | autogen | # NuGet Directory This directory contains resources and metadata for packaging the AutoGen.NET SDK as a NuGet package. |
GitHub | autogen | autogen/dotnet/nuget/README.md | autogen | Files - **icon.png**: The icon used for the NuGet package. - **NUGET.md**: The readme file displayed on the NuGet package page. - **NUGET-PACKAGE.PROPS**: The MSBuild properties file that defines the packaging settings for the NuGet package. |
GitHub | autogen | autogen/dotnet/nuget/README.md | autogen | Purpose The files in this directory are used to configure and build the NuGet package for the AutoGen.NET SDK, ensuring that it includes necessary metadata, documentation, and resources. |
GitHub | autogen | autogen/dotnet/nuget/NUGET.md | autogen | ### About AutoGen for .NET `AutoGen for .NET` is the official .NET SDK for [AutoGen](https://github.com/microsoft/autogen). It enables you to create LLM agents and construct multi-agent workflows with ease. It also provides integration with popular platforms like OpenAI, Semantic Kernel, and LM Studio. ### Getting st... |
GitHub | autogen | autogen/dotnet/website/index.md | autogen | [!INCLUDE [](./articles/getting-start.md)] |
GitHub | autogen | autogen/dotnet/website/README.md | autogen | ## How to build and run the website ### Prerequisites - dotnet 7.0 or later ### Build First, go to the autogen/dotnet folder and run the following command to build the website: ```bash dotnet tool restore dotnet tool run docfx website/docfx.json --serve ``` After the command is executed, you can open your browser and ... |
GitHub | autogen | autogen/dotnet/website/release_note/update.md | autogen | ##### Update on 0.0.15 (2024-06-13) Milestone: [AutoGen.Net 0.0.15](https://github.com/microsoft/autogen/milestone/3) ###### Highlights - [Issue 2851](https://github.com/microsoft/autogen/issues/2851) `AutoGen.Gemini` package for Gemini support. Examples can be found [here](https://github.com/microsoft/autogen/tree/ma... |
GitHub | autogen | autogen/dotnet/website/release_note/0.0.16.md | autogen | # AutoGen.Net 0.0.16 Release Notes We are excited to announce the release of **AutoGen.Net 0.0.16**. This release includes several new features, bug fixes, improvements, and important updates. Below are the detailed release notes: **[Milestone: AutoGen.Net 0.0.16](https://github.com/microsoft/autogen/milestone/4)** |
GitHub | autogen | autogen/dotnet/website/release_note/0.0.16.md | autogen | New Features 1. **Deprecate `IStreamingMessage`** ([#3045](https://github.com/microsoft/autogen/issues/3045)) - Replaced `IStreamingMessage` and `IStreamingMessage<T>` with `IMessage` and `IMessage<T>`. 2. **Add example for using ollama + LiteLLM for function call** ([#3014](https://github.com/microsoft/autogen/issu... |
GitHub | autogen | autogen/dotnet/website/release_note/0.0.16.md | autogen | Bug Fixes 1. **SourceGenerator doesn't work when function's arguments are empty** ([#2976](https://github.com/microsoft/autogen/issues/2976)) - Fixed an issue where the SourceGenerator failed when function arguments were empty. 2. **Add content field in ToolCallMessage** ([#2975](https://github.com/microsoft/autogen... |
GitHub | autogen | autogen/dotnet/website/release_note/0.0.16.md | autogen | Improvements 1. **Sample update - Add getting-start samples for BasicSample project** ([#2859](https://github.com/microsoft/autogen/issues/2859)) - Re-organized the `AutoGen.BasicSample` project to include only essential getting-started examples, simplifying complex examples. 2. **Graph constructor should consider n... |
GitHub | autogen | autogen/dotnet/website/release_note/0.0.16.md | autogen | API Breaking Changes 1. **Deprecate `IStreamingMessage`** ([#3045](https://github.com/microsoft/autogen/issues/3045)) - **Migration guide:** Deprecating `IStreamingMessage` will introduce breaking changes, particularly for `IStreamingAgent` and `IStreamingMiddleware`. Replace all `IStreamingMessage` and `IStreamingMessag... |
GitHub | autogen | autogen/dotnet/website/release_note/0.0.16.md | autogen | Document Update 1. **Add example for using ollama + LiteLLM for function call** ([#3014](https://github.com/microsoft/autogen/issues/3014)) - Added a tutorial to the website for using ollama with LiteLLM. Thank you to all the contributors for making this release possible. We encourage everyone to upgrade to AutoGen... |
GitHub | autogen | autogen/dotnet/website/release_note/0.2.1.md | autogen | # Release Notes for AutoGen.Net v0.2.1 |
GitHub | autogen | autogen/dotnet/website/release_note/0.2.1.md | autogen | New Features - **Support for OpenAI o1-preview**: Added support for the OpenAI o1-preview model ([#3522](https://github.com/microsoft/autogen/issues/3522)) |
GitHub | autogen | autogen/dotnet/website/release_note/0.2.1.md | autogen | Example - **OpenAI o1-preview**: [Connect_To_OpenAI_o1_preview](https://github.com/microsoft/autogen/blob/main/dotnet/samples/AutoGen.OpenAI.Sample/Connect_To_OpenAI_o1_preview.cs) |
GitHub | autogen | autogen/dotnet/website/release_note/0.0.17.md | autogen | # AutoGen.Net 0.0.17 Release Notes |
GitHub | autogen | autogen/dotnet/website/release_note/0.0.17.md | autogen | What's New 1. **.NET Core Target Framework Support** ([#3203](https://github.com/microsoft/autogen/issues/3203)) - Added support for .NET Core to ensure compatibility and enhanced performance of AutoGen packages across different platforms. 2. **Kernel Support in Interactive Service Constructor** ([#3181](htt... |
GitHub | autogen | autogen/dotnet/website/release_note/0.0.17.md | autogen | Improvements 1. **Cancellation Token Addition in Graph APIs** ([#3111](https://github.com/microsoft/autogen/issues/3111)) - Added cancellation tokens to async APIs in the `AutoGen.Core.Graph` class to follow best practices and enhance the control flow. |
GitHub | autogen | autogen/dotnet/website/release_note/0.0.17.md | autogen | API Breaking Changes 1. **FunctionDefinition Generation Stopped in Source Generator** ([#3133](https://github.com/microsoft/autogen/issues/3133)) - Stopped generating `FunctionDefinition` from `Azure.AI.OpenAI` in the source generator to eliminate unnecessary package dependencies. Migration guide: - U... |
GitHub | autogen | autogen/dotnet/website/release_note/0.0.17.md | autogen | Documentation 1. **Consume AutoGen.Net Agent in AG Studio** ([#3142](https://github.com/microsoft/autogen/issues/3142)) - Added detailed documentation on using AutoGen.Net Agent as a model in AG Studio, including examples of starting an OpenAI chat backend and integrating third-party OpenAI models. 2. **Middlew... |
GitHub | autogen | autogen/dotnet/website/release_note/0.2.2.md | autogen | # Release Notes for AutoGen.Net v0.2.2 |
GitHub | autogen | autogen/dotnet/website/release_note/0.2.2.md | autogen | Improvements - **Update OpenAI and Semantic Kernel to the latest version**: Updated OpenAI and Semantic Kernel to the latest version ([#3792](https://github.com/microsoft/autogen/pull/3792)) |
GitHub | autogen | autogen/dotnet/website/release_note/0.1.0.md | autogen | # Release Notes: AutoGen.Net 0.1.0 |
GitHub | autogen | autogen/dotnet/website/release_note/0.1.0.md | autogen | New Packages 1. **Add AutoGen.AzureAIInference Package** - **Issue**: [.Net][Feature Request] [#3323](https://github.com/microsoft/autogen/issues/3323) - **Description**: The new `AutoGen.AzureAIInference` package includes the `ChatCompletionClientAgent`. |
GitHub | autogen | autogen/dotnet/website/release_note/0.1.0.md | autogen | New Features 1. **Enable Step-by-Step Execution for Two Agent Chat API** - **Issue**: [.Net][Feature Request] [#3339](https://github.com/microsoft/autogen/issues/3339) - **Description**: The `AgentExtension.SendAsync` now returns an `IAsyncEnumerable`, allowing conversations to be driven step by step, similar ... |
GitHub | autogen | autogen/dotnet/website/release_note/0.1.0.md | autogen | Bug Fixes 1. **GroupChatExtension.SendAsync Doesn't Terminate Chat When `IOrchestrator` Returns Null as Next Agent** - **Issue**: [.Net][Bug] [#3306](https://github.com/microsoft/autogen/issues/3306) - **Description**: Fixed an issue where `GroupChatExtension.SendAsync` would continue until the max_round is r... |
GitHub | autogen | autogen/dotnet/website/release_note/0.1.0.md | autogen | Documentation Updates 1. **Add Function Comparison Page Between Python AutoGen and AutoGen.Net** - **Issue**: [.Net][Document] [#3184](https://github.com/microsoft/autogen/issues/3184) - **Description**: Added comparative documentation for features between AutoGen and AutoGen.Net across various functionalitie... |
GitHub | autogen | autogen/dotnet/website/release_note/0.2.0.md | autogen | # Release Notes for AutoGen.Net v0.2.0 |
GitHub | autogen | autogen/dotnet/website/release_note/0.2.0.md | autogen | New Features - **OpenAI Structural Format Output**: Added support for structural output format in the OpenAI integration. You can check out the example [here](https://github.com/microsoft/autogen/blob/main/dotnet/samples/AutoGen.OpenAI.Sample/Structural_Output.cs) ([#3482](https://github.com/microsoft/autogen/issues... |
GitHub | autogen | autogen/dotnet/website/release_note/0.2.0.md | autogen | Bug Fixes - **Fixed Error Code 500**: Resolved an issue where an error occurred when the message history contained multiple different tool calls with the `name` field ([#3437](https://github.com/microsoft/autogen/issues/3437)). |
GitHub | autogen | autogen/dotnet/website/release_note/0.2.0.md | autogen | Improvements - **Leverage OpenAI V2.0 in AutoGen.OpenAI package**: The `AutoGen.OpenAI` package now uses OpenAI v2.0, providing improved functionality and performance. In the meantime, the original `AutoGen.OpenAI` is still available and can be accessed by `AutoGen.OpenAI.V1`. This allows users who prefer to contin... |
GitHub | autogen | autogen/dotnet/website/release_note/0.2.0.md | autogen | Documentation - **Tool Call Instructions**: Added detailed documentation on using tool calls with `ollama` and `OpenAIChatAgent` ([#3248](https://github.com/microsoft/autogen/issues/3248)). ### Migration Guides #### For the Deprecation of `GPTAgent` ([#3404](https://github.com/microsoft/autogen/issues/3404)): *... |
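The agent-identity-and-lifecycle excerpts in the table above describe a runtime that addresses each agent by an Agent ID of the form (agent type, agent key) and lazily creates instances from a registered factory on first delivery. The lookup-or-create behavior can be sketched in plain Python; note this is a minimal illustration, and the names `AgentRuntime`, `register`, and `get` are hypothetical stand-ins, not the actual autogen-core API:

```python
from typing import Callable, Dict, Tuple


class Agent:
    """Minimal stand-in for an agent that records its own ID."""

    def __init__(self, agent_type: str, key: str):
        self.id = (agent_type, key)  # Agent ID = (Agent Type, Agent Key)


class AgentRuntime:
    """Illustrative runtime: maps an Agent ID (type, key) to an instance.

    The application registers an agent *type* with a factory; instances
    are created lazily the first time a message targets a (type, key).
    """

    def __init__(self) -> None:
        self._factories: Dict[str, Callable[[str, str], Agent]] = {}
        self._instances: Dict[Tuple[str, str], Agent] = {}

    def register(self, agent_type: str, factory: Callable[[str, str], Agent]) -> None:
        self._factories[agent_type] = factory

    def get(self, agent_type: str, key: str = "default") -> Agent:
        agent_id = (agent_type, key)
        if agent_id not in self._instances:
            # Create on first use from the registered factory.
            self._instances[agent_id] = self._factories[agent_type](agent_type, key)
        return self._instances[agent_id]


runtime = AgentRuntime()
runtime.register("coder", Agent)
a = runtime.get("coder", "session-1")
b = runtime.get("coder", "session-1")  # same ID -> same instance
c = runtime.get("coder", "session-2")  # different key -> new instance
print(a is b, a is c)  # True False
```

This mirrors the excerpts' point that an agent type is not an agent class: two IDs sharing a type reuse one factory but yield distinct, independently addressable instances when their keys differ.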