
NVIDIA AI Data Source

The NVIDIA AI connector enables you to generate text completions and chat completions using NVIDIA's LLM models for various AI-powered applications. This connector is particularly useful for applications that need to integrate advanced language models, build AI-powered features, perform text generation, or leverage NVIDIA's high-performance inference infrastructure. Follow the instructions below to create a new data flow that ingests data from an NVIDIA AI source in Nexla.
(Image: NVIDIA AI connector — nvidia_llm_api.png)

Create a New Data Flow

  1. To create a new data flow, navigate to the Integrate section, and click the New Data Flow button. Then, select the desired flow type from the list, and click the Create button.

  2. Select the NVIDIA AI connector tile from the list of available connectors. Then, select the credential that will be used to connect to the NVIDIA AI API, and click Next, or create a new NVIDIA AI credential for use in this flow.

  3. In Nexla, NVIDIA AI data sources can be created using pre-built endpoint templates, which expedite source setup for common NVIDIA AI API endpoints. Each template is designed specifically for the corresponding NVIDIA AI API endpoint, making source configuration easy and efficient.
    • To configure this source using a template, follow the instructions in Configure Using a Template.

    NVIDIA AI sources can also be configured manually, allowing you to ingest data from NVIDIA AI API endpoints not included in the pre-built templates or apply further customizations to exactly suit your needs.
    • To configure this source manually, follow the instructions in Configure Manually.

Configure Using a Template

Nexla provides pre-built templates that can be used to rapidly configure data sources to ingest data from common NVIDIA AI API endpoints. Each template is designed specifically for the corresponding NVIDIA AI API endpoint, making data source setup easy and efficient.

Endpoint Settings

  • In the Endpoint pulldown menu, select the endpoint from which this source will fetch data. Available endpoint templates are listed in the expandable boxes below. Click on an endpoint to see more information about it and how to configure your data source for this endpoint.

    Text Completions

    This endpoint generates text completions using NVIDIA's LLM API. Use this endpoint when you need to generate text based on prompts, complete sentences or paragraphs, or perform text generation tasks using NVIDIA's language models.

    • Enter the model name to use for generating content in the Model field. Examples include mixtral_8x7b, meta/llama-3.1-70b-instruct, or other available NVIDIA LLM models. The default value is mixtral_8x7b.
    • Enter the prompt text in the Prompt field. This is the input text that the model will use to generate completions.
    • Optionally, specify the temperature for text generation in the Temperature field. Temperature controls the randomness of the output. Lower values (e.g., 0.2) make the output more deterministic, while higher values (e.g., 1.0) make it more creative. The default value is typically 1.0.
    • Optionally, specify the top_p (nucleus sampling) parameter in the Top P field. This controls the diversity of the output by considering only the top p probability mass. The default value is typically 1.0.
    • Optionally, specify the frequency penalty in the Frequency Penalty field. This penalizes tokens based on their frequency in the text so far. The default value is typically 0.0.
    • Optionally, specify the presence penalty in the Presence Penalty field. This penalizes tokens based on whether they appear in the text so far. The default value is typically 0.0.
    • Optionally, specify the maximum number of tokens to generate in the Max Tokens field. The default value varies by model. This limits the length of the generated text.
    • Optionally, specify stop sequences in the Stop field. This should be a JSON array of strings that will cause the model to stop generating when encountered. Example: ["\n", "END"].
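    The template fields above map directly onto a JSON request body. The sketch below assembles a hypothetical Text Completions payload; the parameter names follow the fields described in this guide, and the prompt text and parameter values are illustrative assumptions, not required settings.

    ```python
    import json

    # Hypothetical request body built from the Text Completions template fields.
    # All values are examples; defaults are noted where this guide states them.
    payload = {
        "model": "mixtral_8x7b",                  # Model field (default per this guide)
        "prompt": "Write a haiku about data pipelines.",  # Prompt field (example text)
        "temperature": 0.2,                        # lower values -> more deterministic output
        "top_p": 1.0,                              # nucleus sampling probability mass
        "frequency_penalty": 0.0,                  # penalize frequently repeated tokens
        "presence_penalty": 0.0,                   # penalize tokens already present
        "max_tokens": 64,                          # caps the length of the completion
        "stop": ["\n", "END"],                     # Stop field: JSON array of strings
    }

    print(json.dumps(payload, indent=2))
    ```

    Note that the Stop field value is entered as a JSON array, exactly as it appears in the `stop` key above.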

    The Text Completions endpoint uses POST requests to send prompts to the NVIDIA AI API and returns generated text completions. The endpoint supports various NVIDIA LLM models and provides fine-grained control over text generation parameters. For more information about the Text Completions endpoint, refer to the NVIDIA AI API Documentation.

    Chat Completions

    This endpoint generates chat completions using NVIDIA's LLM API with support for multiple models. Use this endpoint when you need to build conversational AI applications, create chatbots, or perform interactive text generation with context.

    • Enter the model name to use for generating content in the Model field. Examples include mixtral_8x7b, meta/llama-3.1-70b-instruct, or other available NVIDIA LLM models. The default value is mixtral_8x7b.
    • Enter the message content in the Message field. This is the user message that will be sent to the model for chat completion.
    • Optionally, specify the temperature for text generation in the Temperature field. Temperature controls the randomness of the output. Lower values make the output more deterministic, while higher values make it more creative. The default value is typically 1.0.
    • Optionally, specify the top_p (nucleus sampling) parameter in the Top P field. This controls the diversity of the output. The default value is typically 1.0.
    • Optionally, specify the frequency penalty in the Frequency Penalty field. This penalizes tokens based on their frequency. The default value is typically 0.0.
    • Optionally, specify the presence penalty in the Presence Penalty field. This penalizes tokens based on whether they appear in the conversation. The default value is typically 0.0.
    • Optionally, specify the maximum number of tokens to generate in the Max Tokens field. This limits the length of the generated response.
    • Optionally, specify stop sequences in the Stop field. This should be a JSON array of strings that will cause the model to stop generating when encountered.
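    For chat completions, the Message field is wrapped in a messages array rather than a flat prompt string. The sketch below shows a hypothetical Chat Completions payload; the model name is one of the examples from this guide, and the message content is an assumed placeholder.

    ```python
    import json

    # Hypothetical Chat Completions request body. Unlike Text Completions,
    # the user input is carried in a "messages" array of role/content objects.
    payload = {
        "model": "meta/llama-3.1-70b-instruct",    # Model field (example from this guide)
        "messages": [
            {"role": "user", "content": "Summarize our Q3 ingestion metrics."}
        ],
        "temperature": 1.0,                         # default per this guide
        "max_tokens": 128,                          # limits the response length
    }

    print(json.dumps(payload, indent=2))
    ```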

    The Chat Completions endpoint uses POST requests to send chat messages to the NVIDIA AI API and returns conversational responses. The endpoint supports various NVIDIA LLM models and provides fine-grained control over chat generation parameters. For more information about the Chat Completions endpoint, refer to the NVIDIA AI API Documentation.

Endpoint Testing

Once the selected endpoint template has been configured, Nexla can retrieve a sample of the data that will be fetched according to the current settings. This allows users to verify that the source is configured correctly before saving.

  • To test the current endpoint configuration, click the Test button to the right of the endpoint selection menu. Sample data will be fetched & displayed in the Endpoint Test Result panel on the right.

  • If the sample data is not as expected, review the selected endpoint and associated settings, and make any necessary adjustments. Then, click the Test button again, and check the sample data to ensure that the correct information is displayed.

Configure Manually

NVIDIA AI data sources can be manually configured to ingest data from any valid NVIDIA AI API endpoint. Manual configuration provides maximum flexibility for accessing endpoints not covered by pre-built templates or when you need custom API configurations.

With manual configuration, you can also create more complex NVIDIA AI sources, such as sources that use chained API calls to fetch data from multiple endpoints or sources that require custom authentication headers or request parameters.

API Method

  1. To manually configure this source, select the Advanced tab at the top of the configuration screen.

  2. Select the API method that will be used for calls to the NVIDIA AI API from the Method pulldown menu. The most common method is:

    • POST: Sends completion requests to the API (most NVIDIA AI endpoints use POST)

API Endpoint URL

  1. Enter the URL of the NVIDIA AI API endpoint from which this source will fetch data in the Set API URL field. This should be the complete URL including the protocol (https://) and any required path parameters. NVIDIA AI API endpoints typically follow the pattern {base_url}/{api_version}/completions for text completions or {base_url}/{api_version}/chat/completions for chat completions, where {base_url} is typically https://api.nvidia.com and {api_version} is typically v1.

Ensure the API endpoint URL is correct and accessible with your current credentials. You can test the endpoint using the Test button after configuring the URL. The endpoint URL should match the base URL and API version configured in your credential.
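The URL pattern above can be made concrete with a short sketch. The base URL and API version below are the example values from this guide; substitute the base URL and version configured in your own credential.

```python
# Example values from this guide; your credential's base URL may differ.
base_url = "https://api.nvidia.com"
api_version = "v1"

# URL patterns for the two endpoint types described above.
completions_url = f"{base_url}/{api_version}/completions"
chat_url = f"{base_url}/{api_version}/chat/completions"

print(completions_url)  # https://api.nvidia.com/v1/completions
print(chat_url)         # https://api.nvidia.com/v1/chat/completions
```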

Path to Data

Optional

If only a subset of the data returned by the API endpoint is needed, you can designate the part(s) of the response that should be included in the Nexset(s) produced from this source by specifying the path to the relevant data within the response. This is particularly useful when API responses contain metadata, pagination information, or other data that you don't need for your analysis.

For example, when a request is made to fetch completions, the API will typically return completion data along with metadata. By entering the path to the relevant data, you can configure Nexla to extract the specific completion content you need.

Path to Data is essential when API responses have nested structures. Without specifying the correct path, Nexla might not be able to properly parse and organize your data into usable records. For NVIDIA AI API responses, common paths include $ for the entire response or $.choices[*] for arrays of completion choices.

  • To specify which data should be treated as relevant in responses from this source, enter the path to the relevant data in the Set Path to Data in Response field.

    • For responses in JSON format, enter the JSON path that points to the object or array that should be treated as relevant data. JSON paths use dot notation (e.g., $.choices to access the choices array).
    Path to Data Example:

    If the API response is in JSON format and includes a choices array that contains the completion results, the path to the data would be entered as $.choices[*].
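    To make the example concrete, the sketch below mimics what the $.choices[*] path selects from a hypothetical completion response. The response shape is simplified for illustration; plain dict/list access stands in for the JSONPath evaluation that Nexla performs.

    ```python
    import json

    # Hypothetical NVIDIA AI completion response (shape simplified for illustration).
    response_text = """
    {
      "id": "cmpl-123",
      "model": "mixtral_8x7b",
      "choices": [
        {"index": 0, "text": "First completion."},
        {"index": 1, "text": "Second completion."}
      ],
      "usage": {"total_tokens": 42}
    }
    """

    response = json.loads(response_text)

    # "$.choices[*]" selects each element of the choices array, so Nexla would
    # emit one record per choice; indexing into the parsed dict mimics that here.
    records = response["choices"]
    for record in records:
        print(record["text"])
    ```

    With the path set to $.choices[*], the id and usage metadata are excluded, and each choice becomes its own record.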

Autogenerate Path Suggestions

Nexla can also autogenerate data path suggestions based on the response from the API endpoint. These suggested paths can be used as-is or modified to exactly suit your needs.

  • To use this feature, click the Test button next to the Set API URL field to fetch a sample response from the API endpoint. Suggested data paths generated based on the content & format of the response will be displayed in the Suggestions box below the Set Path to Data in Response field.

  • Click on a suggestion to automatically populate the Set Path to Data in Response field with the corresponding path. The populated path can be modified directly within the field if further customization is needed.

Request Headers

Optional
  • If Nexla should include any additional request headers in API calls to this source, enter the headers & corresponding values as comma-separated pairs in the Request Headers field (e.g., header1:value1,header2:value2). Additional headers are often required for API versioning, content type specifications, or custom authentication requirements.

    You do not need to include any headers already present in the credentials. Common headers like Authorization, Content-Type, and Accept are typically handled automatically by Nexla based on your credential configuration. For NVIDIA AI, the Authorization header with Bearer token is automatically included from your credential, and Content-Type is typically set to application/json for API requests.
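    The comma-separated format of the Request Headers field can be pictured as a mapping from the entered string to individual headers. The header names below are hypothetical placeholders, not headers NVIDIA AI requires.

    ```python
    # Hypothetical Request Headers field value: comma-separated header:value pairs.
    raw = "X-Custom-Version:2024-01,Accept-Encoding:gzip"

    # Split on commas for pairs, then on the first colon for name vs. value.
    headers = dict(pair.split(":", 1) for pair in raw.split(","))

    print(headers)  # {'X-Custom-Version': '2024-01', 'Accept-Encoding': 'gzip'}
    ```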

Request Body

Optional
  • If the API endpoint requires a request body (which is common for POST requests to NVIDIA AI), enter the request body in the Request Body field. The request body should be formatted as JSON and include the necessary parameters for the completion request, such as the model, prompt or messages, temperature, max_tokens, and other generation parameters.

    For NVIDIA AI completion requests, the request body typically includes a model field (e.g., "mixtral_8x7b"), a prompt field for text completions or a messages array for chat completions, and optionally fields like temperature, max_tokens, top_p, frequency_penalty, presence_penalty, and stop. Refer to the NVIDIA AI API documentation for the complete list of supported parameters.
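    Putting the manual settings together, the sketch below builds (without sending) the kind of POST request a manually configured source would issue. The endpoint URL follows the example pattern from this guide, the API key is a placeholder, and in Nexla the Authorization header is supplied by the credential rather than entered here.

    ```python
    import json
    import urllib.request

    # Example endpoint URL following the pattern described in this guide.
    url = "https://api.nvidia.com/v1/chat/completions"

    # Request body matching the chat-completion parameters listed above.
    body = {
        "model": "mixtral_8x7b",
        "messages": [{"role": "user", "content": "Hello!"}],
        "temperature": 0.7,
        "max_tokens": 64,
    }

    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": "Bearer <NVIDIA_API_KEY>",  # placeholder; Nexla injects this from the credential
            "Content-Type": "application/json",
        },
        method="POST",
    )
    # urllib.request.urlopen(req) would send the request; omitted to keep this sketch offline.
    ```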

Endpoint Testing

After configuring all settings for the selected endpoint, Nexla can retrieve a sample of the data that will be fetched according to the current configuration. This allows users to verify that the source is configured correctly before saving.

  • To test the current endpoint configuration, click the Test button to the right of the endpoint selection menu. Sample data will be fetched & displayed in the Endpoint Test Result panel on the right.

  • If the sample data is not as expected, review the selected endpoint and associated settings, and make any necessary adjustments. Then, click the Test button again, and check the sample data to ensure that the correct information is displayed.

Save & Activate the Source

  1. Once all of the relevant steps in the above sections have been completed, click the Create button in the upper right corner of the screen to save and create the new NVIDIA AI data source. Nexla will now begin ingesting data from the configured endpoint and will organize any data that it finds into one or more Nexsets.