Forethought Announces New Open Source Developer Platform, AutoChain

Platform Makes it Easier for Developers to Fully Customize Generative LLM Agents

Forethought, the leading generative AI company for customer support automation, today announced the release of AutoChain, a framework for the generative AI community to experiment with and build lightweight, extensible, and testable LLM agents.

Generative LLM agents have recently taken the AI industry by storm and captured the attention of developers everywhere. Even so, exploring and customizing generative agents remains complex and time-consuming, and existing frameworks do not ease the pain of evaluating agents across varied, complex scenarios at scale. With AutoChain, developers can iterate on LLM agents easily and reliably, expediting exploration.

“LLMs have demonstrated huge success in various text generation tasks and enable developers to build generative agents based on natural language objectives,” said Sami Ghoche, CTO and Co-Founder of Forethought. “Yet most generative agents require heavy customization for specific purposes, and adapting them to different use cases can be overwhelming with existing tools and frameworks. As a result, building a customized generative agent is still very challenging.”

“AutoChain makes it easy for developers to fully customize their agents. Whether it’s adding custom clarifying questions or automatically fixing tool input arguments, the simplicity saves developers time spent on experiment overhead, errors, and troubleshooting,” said Yi Lu, Head of Machine Learning at Forethought.

To facilitate agent evaluation, AutoChain introduces the workflow evaluation framework. This framework runs conversations between a generative agent and LLM-simulated test users. The test users incorporate various user contexts and desired conversation outcomes, which enables easy addition of test cases for new user scenarios and fast evaluation. The framework leverages LLMs to evaluate whether a given multi-turn conversation has achieved the intended outcome.
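The evaluation loop described above can be sketched in plain Python. All names below (`TestUser`, `run_workflow_test`, and the stubbed LLM calls) are hypothetical illustrations of the pattern, not AutoChain's actual API; in practice each stub would be an LLM call.

```python
from dataclasses import dataclass

@dataclass
class TestUser:
    """A simulated test user: a context for the LLM to role-play,
    plus the outcome the conversation should reach."""
    context: str
    desired_outcome: str

def simulated_user_reply(user: TestUser, history: list) -> str:
    # Stub for an LLM call that role-plays the user from their context.
    return f"As a user: {user.context}"

def agent_reply(message: str) -> str:
    # Stub for the generative agent under test.
    if "refund" in message:
        return "I have issued your refund."
    return "How can I help?"

def judge_outcome(history: list, desired_outcome: str) -> bool:
    # Stub for an LLM judging whether the multi-turn conversation
    # achieved the intended outcome.
    return any(desired_outcome in turn for _, turn in history)

def run_workflow_test(user: TestUser, max_turns: int = 3) -> bool:
    """Run a simulated multi-turn conversation and evaluate the result."""
    history = []
    for _ in range(max_turns):
        user_msg = simulated_user_reply(user, history)
        history.append(("user", user_msg))
        history.append(("agent", agent_reply(user_msg)))
    return judge_outcome(history, user.desired_outcome)

case = TestUser(context="I want a refund for order 123",
                desired_outcome="refund")
print(run_workflow_test(case))  # True
```

Adding a test case for a new user scenario then amounts to declaring another `TestUser`, which is what makes this style of evaluation scale.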

AutoChain provides a lightweight framework and simplifies the building process. Features include:

  • Lightweight and extensible generative agent pipeline
  • Agent that can use different custom tools and support OpenAI function calling
  • Simple memory tracking for conversation history and tools’ outputs
  • Automated agent multi-turn conversation evaluation with simulated conversations
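Two of the features above, custom tools and memory tracking of tool outputs, can be illustrated with a minimal sketch. The registry, decorator, and memory structure here are hypothetical stand-ins for the pattern, assuming an OpenAI-style mapping of tool names to callables; they are not AutoChain's actual interfaces.

```python
import json
from typing import Callable, Dict

# Hypothetical registry mapping tool names to callables, in the spirit of
# an agent dispatching custom tools via function calling.
TOOLS: Dict[str, Callable[..., str]] = {}

def tool(fn):
    """Register a function as a tool the agent may call by name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def get_order_status(order_id: str) -> str:
    # A custom tool; a real one might query an orders database.
    return json.dumps({"order_id": order_id, "status": "shipped"})

# Simple memory tracking: conversation history plus tool outputs.
memory = {"history": [], "tool_outputs": []}

def call_tool(name: str, **kwargs) -> str:
    """Dispatch a registered tool and record its output in memory."""
    output = TOOLS[name](**kwargs)
    memory["tool_outputs"].append({name: output})
    return output

print(call_tool("get_order_status", order_id="A1"))
```

Keeping tool outputs alongside the conversation history lets later turns reason over earlier tool results without re-invoking the tools.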

In the future, AutoChain will add support for more text encoder options and a document loader to facilitate initializing agents with knowledge sources.
