Introduced 2.13

You can automate machine learning (ML) tasks using agents and tools. An agent orchestrates and runs ML models and tools. A tool performs a set of specific tasks. Some examples of tools are the `VectorDBTool`, which supports vector search, and the `CatIndexTool`, which executes the `cat indices` operation. For a list of supported tools, see Tools.

An agent is a coordinator that uses a large language model (LLM) to solve a problem. After the LLM reasons and decides what action to take, the agent coordinates the action execution. OpenSearch supports the following agent types: flow, conversational flow, and conversational.

A flow agent is configured with a set of tools that it runs in order. For example, the following agent runs the `VectorDBTool` and then the `MLModelTool`. The agent coordinates the tools so that one tool's output can become another tool's input. In this example, the `VectorDBTool` queries the k-NN index, and the agent passes its output, `${parameters.VectorDBTool.output}`, to the `MLModelTool` as context, along with the `${parameters.question}` (see the `prompt` parameter):

```json
POST /_plugins/_ml/agents/_register
{
  "name": "Test_Agent_For_RAG",
  "type": "flow",
  "description": "this is a test agent",
  "tools": [
    {
      "type": "VectorDBTool",
      "parameters": {
        "model_id": "YOUR_TEXT_EMBEDDING_MODEL_ID",
        "index": "my_test_data",
        "embedding_field": "embedding",
        "source_field": ["text"],
        "input": "${parameters.question}"
      }
    },
    {
      "type": "MLModelTool",
      "description": "A general tool to answer any question",
      "parameters": {
        "model_id": "YOUR_LLM_MODEL_ID",
        "prompt": "\n\nHuman:You are a professional data analyst. You will always answer a question based on the given context first. If the answer is not directly shown in the context, you will analyze the data and find the answer. If you don't know the answer, just say you don't know.\n\nContext:\n${parameters.VectorDBTool.output}\n\nHuman:${parameters.question}\n\nAssistant:"
      }
    }
  ]
}
```

Similarly to a flow agent, a conversational flow agent is configured with a set of tools that it runs in order. The difference between them is that a conversational flow agent stores the conversation in an index (in the following example, the `conversation_index` memory type). The following agent runs the `VectorDBTool` and then the `MLModelTool`:

```json
POST /_plugins/_ml/agents/_register
{
  "name": "population data analysis agent",
  "type": "conversational_flow",
  "description": "This is a demo agent for population data analysis",
  "app_type": "rag",
  "memory": {
    "type": "conversation_index"
  },
  "tools": [
    {
      "type": "VectorDBTool",
      "name": "population_knowledge_base",
      "parameters": {
        "model_id": "YOUR_TEXT_EMBEDDING_MODEL_ID",
        "index": "test_population_data",
        "embedding_field": "population_description_embedding",
        "source_field": ["population_description"],
        "input": "${parameters.question}"
      }
    },
    {
      "type": "MLModelTool",
      "name": "bedrock_claude_model",
      "description": "A general tool to answer any question",
      "parameters": {
        "model_id": "YOUR_LLM_MODEL_ID",
        "prompt": """Human:You are a professional data analyst. You will always answer a question based on the given context first. If the answer is not directly shown in the context, you will analyze the data and find the answer. If you don't know the answer, just say you don't know.

Context:
${parameters.population_knowledge_base.output:-}

${parameters.chat_history:-}

Human:${parameters.question}

Assistant:"""
      }
    }
  ]
}
```

Similarly to a conversational flow agent, a conversational agent stores the conversation in an index (in the following example, the `conversation_index` memory type). A conversational agent can be configured with an LLM and a set of supplementary tools that perform specific jobs. For example, you can set up an LLM and a `CatIndexTool` when configuring an agent. When you send a question to the model, the agent also includes the `CatIndexTool` as context. The LLM then decides whether it needs to use the `CatIndexTool` to answer questions like "How many indexes are in my cluster?" The context allows the LLM to answer specific questions that are outside of its knowledge base. For example, the following agent is configured with an LLM and a `CatIndexTool` that retrieves information about your OpenSearch indexes:

```json
POST /_plugins/_ml/agents/_register
{
  "name": "Test_Agent_For_ReAct_ClaudeV2",
  "type": "conversational",
  "description": "this is a test agent",
  "llm": {
    "model_id": "YOUR_LLM_MODEL_ID",
    "parameters": {
      "max_iteration": 5,
      "stop_when_no_tool_found": true,
      "response_filter": "$.completion"
    }
  },
  "memory": {
    "type": "conversation_index"
  },
  "tools": [
    {
      "type": "VectorDBTool",
      "name": "VectorDBTool",
      "description": "A tool to search opensearch index with natural language question. If you don't know answer for some question, you should always try to search data with this tool. Action Input: <natural_language_question>",
      "parameters": {
        "model_id": "YOUR_TEXT_EMBEDDING_MODEL_ID",
        "index": "my_test_data",
        "embedding_field": "embedding",
        "source_field": ["text"],
        "input": "${parameters.question}"
      }
    },
    {
      "type": "CatIndexTool",
      "name": "RetrieveIndexMetaTool",
      "description": "Use this tool to get OpenSearch index information: (health, status, index, uuid, primary count, replica count, docs.count, docs.deleted, store.size, primary.store.size)."
    }
  ],
  "app_type": "my app"
}
```
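Registering an agent returns an `agent_id`, which you then use to run the agent. The following is a minimal sketch of running the conversational agent above; `YOUR_AGENT_ID` and the question are placeholders, and the optional `verbose` parameter, when set to `true`, returns the agent's intermediate steps along with the final answer:

```json
POST /_plugins/_ml/agents/YOUR_AGENT_ID/_execute
{
  "parameters": {
    "question": "How many indexes are in my cluster?",
    "verbose": true
  }
}
```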
Regardless of the agent type, it is important to provide thorough descriptions of the tools so that the LLM can decide in which situations to use each tool.
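For the two conversational agent types, the execute response also includes a `memory_id`. As a minimal sketch (both IDs are placeholders), passing that `memory_id` in a follow-up call loads the stored conversation, which a prompt can reference through `${parameters.chat_history}`, so the agent can answer follow-up questions in context:

```json
POST /_plugins/_ml/agents/YOUR_AGENT_ID/_execute
{
  "parameters": {
    "question": "What was my previous question?",
    "memory_id": "YOUR_MEMORY_ID"
  }
}
```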