
OpenAI Connector

Overview

ChatGPT, the popular conversational interface, was created by OpenAI, an artificial intelligence research organization. Beyond chat, OpenAI offers models for text editing, image generation, and classification. With OpenAI you can also develop and fine-tune your own models and expose them internally.

  1. Go to Connectors on the left-hand menu.
  2. Select New Connector.
  3. From the drop-down under Connector Type, select OpenAI.

Configuration

Follow the steps to configure the OpenAI connector:

  1. Navigate to the Connectors screen.
  2. Provide a name for your OpenAI connector under Name. The URL field is pre-populated.
  3. Click Retrieve Connector Configuration Data.
  4. Configure the parameters. For details, refer to OpenAI connector configuration values.

  5. Click Install.
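Before clicking Install, you may want to confirm that the API key you configured is valid. One way is to query OpenAI's public model-listing endpoint (`GET /v1/models`) with the key. The sketch below builds such a request using only the Python standard library; the helper name and the placeholder key are illustrative, not part of the connector.

```python
import urllib.request

def build_models_request(api_key, base_url="https://api.openai.com"):
    """Build an authenticated GET request for OpenAI's model-listing endpoint."""
    return urllib.request.Request(
        f"{base_url}/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )

# Build (but do not send) a request with a placeholder key.
req = build_models_request("sk-your-key-here")
print(req.full_url)  # https://api.openai.com/v1/models
```

Sending the request with `urllib.request.urlopen(req)` should return a JSON list of available models if the key is valid, or an HTTP 401 error if it is not.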

note

To view settings and configuration, click Preview Actions & Types.

OpenAI connector configuration values

| Option | Type | Default Value | Description |
| --- | --- | --- | --- |
| API Key | Password | None | The OpenAI API key to use if API key-based authentication is required. |
| Model | String | gpt-3.5-turbo | The name of the model to communicate with. Optional; defaults to gpt-3.5-turbo if no value is specified. |
| System Message | List | None | The values entered into the $Chat Message list. They always appear at the top of the list of messages delivered to OpenAI. |
| Instance Url | String | No Default | The Azure OpenAI resource endpoint to use. This should not include model deployment or operation information. For example: https://my-resource.openai.azure.com. |
| Temperature | Number | No Default | The sampling temperature, which controls the apparent creativity of generated completions. Higher values make output more random; lower values make results more focused and deterministic. Modifying both temperature and top_p for the same completions request is not recommended, as the interaction of the two settings is difficult to predict. Supported range is [0, 1]. |
| Max Tokens | Number | No Default | The maximum number of tokens to generate. |
| Presence Penalty | Number | No Default | Influences the probability of generated tokens appearing based on their existing presence in generated text. Positive values make tokens less likely to appear when they already exist, increasing the model's likelihood of introducing new topics. Supported range is [-2, 2]. |
| Frequency Penalty | Number | No Default | Influences the probability of generated tokens appearing based on their cumulative frequency in generated text. Positive values make tokens less likely to appear as their frequency increases, decreasing the likelihood of the model repeating the same statements verbatim. Supported range is [-2, 2]. |
| Nucleus Sampling Factor | Number | No Default | An alternative to sampling with temperature, called nucleus sampling. Causes the model to consider only the tokens comprising the given probability mass; for example, a value of 0.15 considers only the tokens comprising the top 15% of probability mass. Modifying both temperature and top_p for the same completions request is not recommended. Supported range is [0, 1]. |
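To illustrate how these configuration values could map onto a request, the sketch below assembles an OpenAI chat-completions payload. The OpenAI-side field names (model, messages, temperature, top_p, max_tokens, presence_penalty, frequency_penalty) are the actual API parameters; the `build_payload` helper itself is hypothetical and not part of the connector.

```python
def build_payload(user_message,
                  model="gpt-3.5-turbo",
                  system_messages=None,
                  temperature=None,
                  top_p=None,
                  max_tokens=None,
                  presence_penalty=None,
                  frequency_penalty=None):
    """Assemble a chat-completions payload from connector-style settings."""
    # System messages always come first, mirroring the System Message option.
    messages = [{"role": "system", "content": m} for m in (system_messages or [])]
    messages.append({"role": "user", "content": user_message})

    payload = {"model": model, "messages": messages}
    # Include only the optional sampling parameters that were actually set,
    # so unset options fall back to the API's own defaults.
    optional = {
        "temperature": temperature,
        "top_p": top_p,
        "max_tokens": max_tokens,
        "presence_penalty": presence_penalty,
        "frequency_penalty": frequency_penalty,
    }
    payload.update({k: v for k, v in optional.items() if v is not None})
    return payload

payload = build_payload(
    "Summarize this ticket.",
    system_messages=["You are a helpful support assistant."],
    temperature=0.2,
    max_tokens=256,
)
print(payload["messages"][0]["role"])  # system
```

Note that because unset options are simply omitted, leaving both Temperature and Nucleus Sampling Factor blank lets the API apply its own defaults, consistent with the advice above not to set both for the same request.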