At its developer conference, OpenAI announced a new API, the Assistants API, that it characterizes as a step toward helping developers build “agent-like experiences” within their apps.
Using the Assistants API, OpenAI customers can build an “assistant” that has specific instructions, leverages outside knowledge and can call OpenAI generative AI models and tools to perform tasks. Use cases range from a natural language-based data analysis app to a coding assistant or even an AI-powered vacation planner.
Powering the new Assistants API is Code Interpreter, OpenAI’s tool that writes and runs Python code in a sandboxed execution environment. Launched in March for ChatGPT, Code Interpreter can generate graphs and charts and process files, letting assistants created with the Assistants API run code iteratively to solve coding and math problems.
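As a rough sketch, enabling Code Interpreter amounts to listing it in the assistant’s tools when the assistant is created. The payload below mirrors the field layout of OpenAI’s Assistants API; the model name and the helper function are assumptions for illustration, and a real app would send this body via the OpenAI SDK rather than just building a dict:

```python
# Sketch of the request body a developer might send to create an assistant
# with the Code Interpreter tool enabled. Field names follow the Assistants
# API shape; the model name here is an assumption, not from the article.

def build_assistant_payload(name: str, instructions: str,
                            model: str = "gpt-4-1106-preview") -> dict:
    """Assemble a create-assistant request body with Code Interpreter enabled."""
    return {
        "name": name,
        "instructions": instructions,
        "model": model,
        "tools": [{"type": "code_interpreter"}],
    }

payload = build_assistant_payload(
    "Data Analyst",
    "Answer questions by writing and running Python code on uploaded files.",
)
print(payload["tools"])  # [{'type': 'code_interpreter'}]
```

In the real SDK this dict corresponds to the keyword arguments of the create-assistant call, so the same structure carries over directly.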
The Assistants API can also tap a retrieval component that augments dev-created assistants with knowledge from outside OpenAI’s models, like product information or documents provided by a company’s employees. And it supports function calling, which enables assistants to invoke programming functions that a developer defines and incorporate the responses in their messages.
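Function calling works by having the developer describe each function in a JSON Schema tool definition; when the assistant wants to call one, the API hands back the function’s name and JSON-encoded arguments, and the developer runs the function and returns the result. The `get_weather` function, its schema, and the dispatcher below are all invented for illustration, not part of OpenAI’s API:

```python
import json

# Hypothetical developer-defined function exposed to an assistant via
# function calling. get_weather and its parameters are invented here;
# the schema format mirrors the JSON Schema style used for tool definitions.

def get_weather(city: str) -> dict:
    """Stand-in implementation; a real app would query a weather service."""
    return {"city": city, "forecast": "sunny"}

WEATHER_TOOL = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current forecast for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

def dispatch(name: str, arguments: str):
    """Route the assistant's requested call (name + JSON args) to local code."""
    registry = {"get_weather": get_weather}
    return registry[name](**json.loads(arguments))

# The assistant's tool-call request arrives as a name and a JSON string:
result = dispatch("get_weather", '{"city": "Lisbon"}')
print(result)  # {'city': 'Lisbon', 'forecast': 'sunny'}
```

The returned value is then submitted back to the API so the assistant can fold it into its next message.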
The Assistants API is in beta and available to all developers starting today. The tokens used for the API will be billed at the chosen model’s per-token rates, OpenAI says, with “tokens” here referring to chunks of raw text (for example, the word “fantastic” split into “fan,” “tas” and “tic”).
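Per-token billing is just an arithmetic product of tokens consumed and the model’s rate. The sketch below uses placeholder per-1K-token rates, not OpenAI’s actual prices, to show the shape of the calculation:

```python
# Back-of-the-envelope billing: cost = tokens consumed x per-token rate.
# The rates below are placeholders, not OpenAI's actual prices.

def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  rate_per_1k_prompt: float,
                  rate_per_1k_completion: float) -> float:
    """Estimate the charge for one request at per-1K-token rates."""
    return ((prompt_tokens / 1000) * rate_per_1k_prompt
            + (completion_tokens / 1000) * rate_per_1k_completion)

# "fantastic" -> ["fan", "tas", "tic"] counts as 3 tokens; real requests
# run to hundreds or thousands of tokens.
cost = estimate_cost(1500, 500, 0.01, 0.03)
print(f"${cost:.4f}")  # $0.0300
```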
In the future, OpenAI says that it plans to allow customers to provide their own assistant-driving tools to complement Code Interpreter, the retrieval component and function calling on its platform.