The following steps walk through the 06-memory-and-embeddings.ipynb notebook in the /samples/notebooks folder. We recommend following along in the notebook so that you can easily run the code snippets by pressing the run button next to each one.

- samples/notebooks/dotnet/06-memory-and-embeddings.ipynb
- samples/notebooks/python/06-memory-and-embeddings.ipynb

So far, we've mostly treated the kernel as a stateless orchestration engine: we send text into a model and receive text out. In a previous guide, we used context variables to pass additional text into prompts to enrich them with more context. This allowed us to create a basic chat experience. However, if you relied solely on context variables, you would quickly find that your prompt grows so large that you run into the model's token limit. What we need is a way to persist state and build both short-term and long-term memory to empower even more intelligent applications. To do this, we can leverage semantic memories in the Semantic Kernel.

At its core, semantic memory is a set of data structures that allow you to store the meaning of text that comes from different data sources, and optionally to store the source text too. These texts can be from the web, e-mail providers, chats, a database, or your local directory, and are hooked up to the Semantic Kernel through data source connectors.

To get started, we will import the necessary packages. In C# we will add the Microsoft.SemanticKernel.Memory namespace to our list of imports to access the memory functions. In Python we will add OpenAITextEmbedding to our list of imports to access the memory functions:

```python
from typing import Tuple

import semantic_kernel as sk
from semantic_kernel.connectors.ai.open_ai import OpenAITextCompletion, OpenAITextEmbedding
```

In order to use memory in our example, we'll first instantiate the kernel with memory storage and an embedding service. In this example, we make use of the VolatileMemoryStore, which can be thought of as temporary in-memory storage (not to be confused with Semantic Memory). This memory is not written to disk and is only available during the app session. When developing your app, you will have the option to plug in persistent storage such as Azure Cognitive Search, PostgreSQL, or SQLite.

In C#:

```csharp
var (useAzureOpenAI, model, azureEndpoint, apiKey, orgId) = Settings.LoadFromFile();

var kernel = Microsoft.SemanticKernel.Kernel.Builder
    .Configure(c =>
    {
        if (useAzureOpenAI)
        {
            c.AddAzureTextCompletionService(model, azureEndpoint, apiKey);
            c.AddAzureTextEmbeddingGenerationService("text-embedding-ada-002", azureEndpoint, apiKey);
        }
        else
        {
            c.AddOpenAITextCompletionService(model, apiKey, orgId);
            c.AddOpenAITextEmbeddingGenerationService("text-embedding-ada-002", apiKey);
        }
    })
    .WithMemoryStorage(new VolatileMemoryStore())
    .Build();
```

In Python:

```python
api_key, org_id = sk.openai_settings_from_dot_env()

kernel = sk.Kernel()
kernel.add_text_completion_service("dv", OpenAITextCompletion("text-davinci-003", api_key, org_id))
kernel.add_text_embedding_generation_service("ada", OpenAITextEmbedding("text-embedding-ada-002", api_key, org_id))
kernel.register_memory_store(memory_store=sk.memory.VolatileMemoryStore())
```

Let's start by adding some memories manually to the VolatileMemoryStore. In this example, we'll add some information about a person named Andrea so that we can later query the memory to retrieve this information.

In C# we'll use the SaveInformationAsync method to save information to the memory:

```csharp
const string MemoryCollectionName = "aboutMe";

await kernel.Memory.SaveInformationAsync(MemoryCollectionName, id: "info1", text: "My name is Andrea");
await kernel.Memory.SaveInformationAsync(MemoryCollectionName, id: "info2", text: "I currently work as a tourist operator");
await kernel.Memory.SaveInformationAsync(MemoryCollectionName, id: "info3", text: "I currently live in Seattle and have been living there since 2005");
await kernel.Memory.SaveInformationAsync(MemoryCollectionName, id: "info4", text: "I visited France and Italy five times since 2015");
await kernel.Memory.SaveInformationAsync(MemoryCollectionName, id: "info5", text: "My family is from New York");
```

In Python we'll use the save_information_async method to save information to the memory. This method takes in a collection name, an id, and the text to save:

```python
async def populate_memory(kernel: sk.Kernel) -> None:
    # Add some documents to the semantic memory
    await kernel.memory.save_information_async("aboutMe", id="info1", text="My name is Andrea")
    await kernel.memory.save_information_async("aboutMe", id="info2", text="I currently work as a tour guide")
    await kernel.memory.save_information_async("aboutMe", id="info3", text="I've been living in Seattle since 2005")
    await kernel.memory.save_information_async("aboutMe", id="info4", text="I visited France and Italy five times since 2015")
    await kernel.memory.save_information_async("aboutMe", id="info5", text="My family is from New York")
```

Now that we have some information in our memory, let's search for it.
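Under the hood, a semantic memory search works by embedding the query and ranking the stored records by cosine similarity between their embedding vectors. Here is a minimal, self-contained sketch of that ranking step, using tiny hand-made vectors in place of real ada-002 embeddings; the `memory` dict and `search` helper are illustrative stand-ins, not the Semantic Kernel API:

```python
from math import sqrt

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "memory store": each record pairs saved text with a (fake) embedding.
memory = {
    "info1": ("My name is Andrea", [0.9, 0.1, 0.0]),
    "info3": ("I've been living in Seattle since 2005", [0.1, 0.9, 0.2]),
    "info4": ("I visited France and Italy five times since 2015", [0.0, 0.2, 0.9]),
}

def search(query_embedding, limit=1, min_relevance_score=0.7):
    # Score every record against the query, keep the best matches above
    # the relevance threshold (mirroring limit/min_relevance parameters
    # commonly exposed by vector stores).
    scored = [
        (cosine_similarity(query_embedding, vec), text)
        for text, vec in memory.values()
    ]
    scored.sort(reverse=True)
    return [(score, text) for score, text in scored[:limit] if score >= min_relevance_score]

# A query like "where do I live?" would embed close to info3's vector,
# so the Seattle record comes back as the top match.
print(search([0.2, 0.9, 0.1]))
```

In the real kernel, the embedding service (e.g. text-embedding-ada-002 configured above) produces the vectors, and the registered memory store performs this ranking for you.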