Custom Models
The Seven20 AI / LLM functionality is designed to be extensible and allows for the use of custom models beyond what Seven20 provides out of the box. This is useful for integrating niche AI providers or custom-trained models.
Implementing a custom model involves implementing several interfaces which define the request, response and chat message structures for an LLM.
The interfaces and classes used to define custom models exist to allow those models to be consumed by Seven20 functionality. It is the implementer's responsibility to ensure that the custom code implementing them is fit for purpose.
Custom model request
A custom model is expected to implement the LargeLanguageModelTextGenerator.Request interface. This interface defines the class as consumable by the LargeLanguageModelService and enables it to be automatically detected by parts of the system which consume LLMs, e.g. the LLM Model picklist within the flow action.
The specific implementation of each method within this interface is up to the implementer; however, it is expected that the getHttpRequest() method will return a valid HTTP request for the model being used.
Ensure the HttpRequest returned by getHttpRequest() is valid and contains authentication details. The LargeLanguageModelService is not responsible for custom model authentication or for validating the request format.
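As an illustration, a minimal Request implementation might look like the following sketch. The class name, the endpoint, the Named Credential, the request body shape and the referenced MyCustomModelResponse class are all assumptions for illustration only; consult the LargeLanguageModelTextGenerator.Request interface definition for the exact contract.

```apex
// Hypothetical sketch of a custom model request. The endpoint, Named
// Credential and payload shape are illustrative assumptions, not part of
// a documented contract.
public class MyCustomModelRequest implements LargeLanguageModelTextGenerator.Request {

    private List<LargeLanguageModelTextGenerator.ChatMessage> messages;

    public MyCustomModelRequest(List<LargeLanguageModelTextGenerator.ChatMessage> messages) {
        this.messages = messages;
    }

    // Build a complete, authenticated HTTP request for the target model.
    public HttpRequest getHttpRequest() {
        HttpRequest req = new HttpRequest();
        // A Named Credential keeps authentication details out of the code.
        req.setEndpoint('callout:My_Custom_LLM/v1/chat/completions');
        req.setMethod('POST');
        req.setHeader('Content-Type', 'application/json');

        // Let each chat message define its own payload structure.
        List<Object> payloadMessages = new List<Object>();
        for (LargeLanguageModelTextGenerator.ChatMessage message : this.messages) {
            payloadMessages.add(message.getMessage());
        }
        req.setBody(JSON.serialize(new Map<String, Object>{
            'model' => 'my-custom-model',
            'messages' => payloadMessages
        }));
        return req;
    }

    // Tell the LargeLanguageModelService which Response class should
    // parse the HTTP response for this request.
    public System.Type responseType() {
        return MyCustomModelResponse.class;
    }
}
```

Using a Named Credential (or equivalent) for the endpoint is one way to satisfy the requirement that the returned request already contains authentication details.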
Custom model chat message
All models consume chat messages to be able to perform their functions. While these chat messages usually have a common format based upon the OpenAI API specification, there are some models which have their own format. The LargeLanguageModelTextGenerator.ChatMessage interface is used to define the chat message structure for a custom model.
This interface consists of a single method, getMessage(), which returns an Object. This allows the underlying implementation to define the message format for the model being used. The value returned by getMessage() should be consumed by the custom model request when building the HTTP request.
If the model matches the OpenAI API specification, the generic FlowLlmTextGenMessage class can be used to define the chat message structure, instead of implementing a custom message format.
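For a model whose API does not follow the OpenAI message format, a custom chat message might be sketched as follows. The class name and the author/text payload shape are hypothetical, chosen purely to illustrate a non-OpenAI message structure.

```apex
// Hypothetical chat message for a model whose API expects messages of the
// shape { "author": ..., "text": ... } rather than the OpenAI
// { "role": ..., "content": ... } format. Names are illustrative only.
public class MyCustomChatMessage implements LargeLanguageModelTextGenerator.ChatMessage {

    private String author;
    private String text;

    public MyCustomChatMessage(String author, String text) {
        this.author = author;
        this.text = text;
    }

    // Returned as an Object so the custom model request can serialise it
    // into whatever body structure the model expects.
    public Object getMessage() {
        return new Map<String, Object>{
            'author' => this.author,
            'text' => this.text
        };
    }
}
```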
Custom model response
A custom model is expected to implement the LargeLanguageModelTextGenerator.Response interface. This interface defines a class which is initialisable by the LargeLanguageModelService and is responsible for parsing the response from the LLM request. The specific response type for a request is defined by returning the System.Type for it in the request's responseType() method.
The processResponse() method is called after the object has been initialised and the HTTP callout has been performed. It is responsible for parsing the HTTP response from the LLM request and for any error handling. It is expected to parse the response into lists of LargeLanguageModelTextGenerator.Error and LargeLanguageModelTextGenerator.Result objects, which are then returned by the getErrors() and getResult() methods respectively; these methods should always return a list, even if empty.
The result and error objects are expected to return human-readable strings. There is no restriction on adding custom methods or properties to these objects when consuming the service via Apex. When consuming via Flow, however, the properties of these objects are not available to the Flow action, so it must be ensured that the getMessage() method of each result and error object returns a human-readable string.
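The pieces above can be pulled together in a sketch of a Response implementation. This assumes a model that returns a JSON body of the shape { "output": "..." }, and assumes that LargeLanguageModelTextGenerator.Error and LargeLanguageModelTextGenerator.Result are implementable types with a getMessage() contract; the MyCustomError and MyCustomResult classes below are hypothetical implementations of those types, not documented API.

```apex
// Hypothetical sketch of a custom model response parser. The response body
// shape and the MyCustomError/MyCustomResult classes are illustrative
// assumptions; check the actual Error/Result type definitions.
public class MyCustomModelResponse implements LargeLanguageModelTextGenerator.Response {

    private List<LargeLanguageModelTextGenerator.Error> errors =
        new List<LargeLanguageModelTextGenerator.Error>();
    private List<LargeLanguageModelTextGenerator.Result> results =
        new List<LargeLanguageModelTextGenerator.Result>();

    // Called by the LargeLanguageModelService after the callout completes.
    public void processResponse(HttpResponse response) {
        if (response.getStatusCode() != 200) {
            // Surface a human-readable error so the message is usable
            // from both Apex and Flow.
            this.errors.add(new MyCustomError(
                'The model returned HTTP ' + response.getStatusCode()
            ));
            return;
        }
        Map<String, Object> body =
            (Map<String, Object>) JSON.deserializeUntyped(response.getBody());
        // Assumes the model returns { "output": "<generated text>" }.
        this.results.add(new MyCustomResult((String) body.get('output')));
    }

    // Always return a list, even when empty.
    public List<LargeLanguageModelTextGenerator.Error> getErrors() {
        return this.errors;
    }

    public List<LargeLanguageModelTextGenerator.Result> getResult() {
        return this.results;
    }

    // Hypothetical Result implementation returning a human-readable string.
    public class MyCustomResult implements LargeLanguageModelTextGenerator.Result {
        private String message;
        public MyCustomResult(String message) { this.message = message; }
        public String getMessage() { return this.message; }
    }

    // Hypothetical Error implementation returning a human-readable string.
    public class MyCustomError implements LargeLanguageModelTextGenerator.Error {
        private String message;
        public MyCustomError(String message) { this.message = message; }
        public String getMessage() { return this.message; }
    }
}
```

Keeping the error path inside getErrors() rather than throwing an exception means Flow consumers, who can only see getMessage(), still receive a readable explanation of what went wrong.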