
Hugging Face TGI Data Source

The Hugging Face TGI connector enables you to interact with Hugging Face's Text Generation Inference (TGI) API, allowing you to generate chat-based completions, perform text generation tasks, and leverage AI-powered capabilities in your data workflows. This connector is particularly useful for applications that need to generate text content, perform language analysis, integrate AI capabilities into data processing pipelines, or build conversational AI applications using open-source LLMs. Follow the instructions below to create a new data flow that ingests data from a Hugging Face TGI source in Nexla.
[Image: Hugging Face TGI (huggingface_api.png)]

Create a New Data Flow

  1. To create a new data flow, navigate to the Integrate section, and click the New Data Flow button. Then, select the desired flow type from the list, and click the Create button.

  2. Select the Hugging Face TGI connector tile from the list of available connectors. Then, select the credential that will be used to connect to the Hugging Face TGI endpoint, and click Next; or, create a new Hugging Face TGI credential for use in this flow.

  3. In Nexla, Hugging Face TGI data sources can be created using pre-built endpoint templates, which expedite source setup for common TGI API endpoints. Each template is designed specifically for the corresponding TGI API endpoint, making source configuration easy and efficient.
    • To configure this source using a template, follow the instructions in Configure Using a Template.

    Hugging Face TGI sources can also be configured manually, allowing you to ingest data from TGI API endpoints not included in the pre-built templates or apply further customizations to exactly suit your needs.
    • To configure this source manually, follow the instructions in Configure Manually.

Configure Using a Template

Nexla provides pre-built templates that can be used to rapidly configure data sources to ingest data from common Hugging Face TGI API endpoints. Each template is designed specifically for the corresponding TGI API endpoint, making data source setup easy and efficient.

Endpoint Settings

  • From the Endpoint pulldown menu, select the endpoint from which this source will fetch data. Available endpoint templates are listed in the expandable boxes below. Click on an endpoint to see more information about it and how to configure your data source for this endpoint.

    Chat Completions

    This endpoint generates chat-based completions using Hugging Face's TGI Messages API. Use this endpoint when you need to generate conversational responses, perform text analysis, or leverage TGI's language understanding capabilities for your applications.

    • Enter the model ID in the Model field. This is the identifier for the TGI model you want to use for generating completions. Examples include deepseek/deepseek-v3-0324, mistralai/Mistral-7B-Instruct-v0.2, gpt2, or other models supported by your TGI endpoint. The default value is deepseek/deepseek-v3-0324. Ensure the model you specify is available on your TGI endpoint.
    • Enter the messages array in JSON format in the Messages field. This should be an array of message objects, where each object has a role (e.g., "user", "assistant", "system") and content (the message text). Example format: [{"role": "user", "content": "What is Deep Learning?"}]. The default value provides a sample user message. You can include multiple messages to create a conversation history.
    • Optionally, enter the maximum number of tokens in the Max Tokens field to limit the length of the generated response. This helps control API costs and response length. The default value is 50 tokens. For longer responses, you can increase this value, but be aware that longer responses consume more API quota and take more time to generate.
    • Optionally, enter a temperature value in the Temperature field to control the randomness and creativity of the model's output. Temperature controls the probability distribution of token selection. Lower values (e.g., 0.1-0.3) produce more focused, deterministic, and factual responses, while higher values (e.g., 0.7-1.0) produce more creative and varied responses. The default value is 0.7, which provides a balance between creativity and consistency.
    • Optionally, enter a Top-P value in the Top P field to control diversity via nucleus sampling. Top-P limits token selection to those whose cumulative probability mass reaches the specified threshold. Higher values (closer to 1) increase diversity by considering more token options, while lower values make the model more conservative. The default value is 0.9, which allows good diversity while maintaining quality.
    • Optionally, set the Stream field to true if you want to stream the response as it's generated, or false to receive the complete response at once. Streaming can be useful for real-time applications, but for data ingestion purposes, you typically want false to receive complete responses. The default value is false.

    The Chat Completions endpoint uses POST requests to send messages to the TGI model. Adjust temperature and Top-P values based on your use case: use lower values for factual content and data extraction, and use higher values for creative writing and brainstorming. The combination of these parameters allows you to fine-tune the model's output to match your specific requirements. For more information about the Chat Completions endpoint, refer to the Hugging Face TGI API Documentation.
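The settings above map directly onto the JSON body of a Chat Completions POST request. The following is a minimal sketch of that body in Python, using the template's documented default values (the prompt text is just a sample):

```python
import json

# Request body for the TGI Messages API (OpenAI-compatible schema).
# Field values mirror the template defaults described above.
payload = {
    "model": "deepseek/deepseek-v3-0324",  # model ID available on your TGI endpoint
    "messages": [
        {"role": "user", "content": "What is Deep Learning?"}
    ],
    "max_tokens": 50,    # cap on the length of the generated response
    "temperature": 0.7,  # lower = more deterministic, higher = more creative
    "top_p": 0.9,        # nucleus sampling threshold
    "stream": False,     # receive the complete response at once (preferred for ingestion)
}

# The body is sent as a JSON string in the POST request.
body = json.dumps(payload)
```

For data-extraction use cases, you would typically lower `temperature` (e.g., to 0.2) while leaving the other fields as shown.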

Endpoint Testing

Once the selected endpoint template has been configured, Nexla can retrieve a sample of the data that will be fetched according to the current settings. This allows users to verify that the source is configured correctly before saving.

  • To test the current endpoint configuration, click the Test button to the right of the endpoint selection menu. Sample data will be fetched & displayed in the Endpoint Test Result panel on the right.

  • If the sample data is not as expected, review the selected endpoint and associated settings, and make any necessary adjustments. Then, click the Test button again, and check the sample data to ensure that the correct information is displayed.

Configure Manually

Hugging Face TGI data sources can be manually configured to ingest data from any valid TGI API endpoint. Manual configuration provides maximum flexibility for accessing endpoints not covered by pre-built templates or when you need custom API configurations.

With manual configuration, you can also create more complex TGI sources, such as sources that use chained API calls to fetch data from multiple endpoints or sources that require custom authentication headers or request parameters.

API Method

  1. To manually configure this source, select the Advanced tab at the top of the configuration screen.

  2. Select the API method that will be used for calls to the Hugging Face TGI API from the Method pulldown menu. The most common methods are:

    • GET: For retrieving data from the API (e.g., listing models)
    • POST: For sending data to the API or triggering actions (most TGI endpoints use POST for chat completions)

API Endpoint URL

  1. Enter the URL of the Hugging Face TGI API endpoint from which this source will fetch data in the Set API URL field. This should be the complete URL including the protocol (https://) and any required path parameters. TGI API endpoints typically follow the pattern https://router.huggingface.co/novita/v3/openai/chat/completions or your custom endpoint URL.

Ensure the API endpoint URL is correct and accessible with your current credentials. You can test the endpoint using the Test button after configuring the URL. The URL should include the API version and the specific endpoint path.
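Taken together, the method and URL settings correspond to an HTTP request like the one sketched below using the Python standard library. The bearer token is a placeholder (in Nexla, authentication comes from the stored credential), and the request is constructed but not sent:

```python
import json
import urllib.request

# Complete endpoint URL, including protocol, API version, and endpoint path.
API_URL = "https://router.huggingface.co/novita/v3/openai/chat/completions"

# Placeholder token; real authentication is handled by the Nexla credential.
headers = {
    "Authorization": "Bearer hf_xxx",  # hypothetical token
    "Content-Type": "application/json",
}

body = json.dumps({
    "model": "deepseek/deepseek-v3-0324",
    "messages": [{"role": "user", "content": "Hello"}],
}).encode("utf-8")

# POST is the method most TGI endpoints expect for chat completions.
request = urllib.request.Request(API_URL, data=body, headers=headers, method="POST")
```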

Path to Data

Optional

If only a subset of the data that will be returned by the API endpoint is needed, you can designate the part(s) of the response that should be included in the Nexset(s) produced from this source by specifying the path to the relevant data within the response. This is particularly useful when API responses contain metadata, pagination information, or other data that you don't need for your analysis.

For example, when a request is made to fetch chat completions, the API will typically return an array of choices, along with metadata, in the response. By entering the path to the relevant data, you can configure Nexla to treat each element of the returned array as a record.

Path to Data is essential when API responses have nested structures. Without specifying the correct path, Nexla might not be able to properly parse and organize your data into usable records. For TGI API responses, common paths include $.choices[*].message.content for chat completions or $.models[*] for model listings.

  • To specify which data should be treated as relevant in responses from this source, enter the path to the relevant data in the Set Path to Data in Response field.

    • For responses in JSON format, enter the JSON path that points to the object or array that should be treated as relevant data. JSON paths use dot notation with bracket wildcards for arrays (e.g., $.choices[*].message.content to access the message content within each element of the choices array).
    Path to Data Example:

    If the API response is in JSON format and includes a choices array that contains message objects with content, the path to the relevant data would be entered as $.choices[*].message.content.
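For illustration, here is what the path $.choices[*].message.content selects from a typical chat-completion response, emulated in plain Python (the sample response values are invented):

```python
# A trimmed, invented example of a chat-completion response body.
response = {
    "id": "chatcmpl-123",
    "model": "deepseek/deepseek-v3-0324",
    "choices": [
        {"index": 0, "message": {"role": "assistant", "content": "Deep Learning is ..."}},
        {"index": 1, "message": {"role": "assistant", "content": "It is a subfield ..."}},
    ],
    "usage": {"prompt_tokens": 6, "completion_tokens": 42},
}

# Equivalent of the JSON path $.choices[*].message.content:
# iterate over the choices array and take message.content from each element.
records = [choice["message"]["content"] for choice in response["choices"]]
```

Each element of `records` would become one record in the resulting Nexset; the `id`, `model`, and `usage` fields fall outside the path and are excluded.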

Autogenerate Path Suggestions

Nexla can also autogenerate data path suggestions based on the response from the API endpoint. These suggested paths can be used as-is or modified to exactly suit your needs.

  • To use this feature, click the Test button next to the Set API URL field to fetch a sample response from the API endpoint. Suggested data paths generated based on the content & format of the response will be displayed in the Suggestions box below the Set Path to Data in Response field.

  • Click on a suggestion to automatically populate the Set Path to Data in Response field with the corresponding path. The populated path can be modified directly within the field if further customization is needed.

Metadata

If metadata is included in the response but is located outside of the defined path to relevant data, you can configure Nexla to include this data as common metadata in each record. This is useful when you want to preserve important contextual information that applies to all records but isn't part of the main data array.

For example, when a request is made to fetch chat completions, the API response will typically include an array of choices along with metadata such as model information, usage statistics, or request IDs. In this case, if you have specified the path to the relevant data but metadata of interest is located in a different part of the response, you can specify a path to this metadata to include it with each record in the generated Nexset(s).

Metadata paths are particularly useful for preserving API response context like request IDs, timestamps, or usage statistics that apply to all records in the response.

  • To specify the location of metadata that should be included with each record, enter the path to the relevant metadata in the Path to Metadata in Response field.

    • For responses in JSON format, enter the JSON path to the object or array that contains the metadata.
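To illustrate the effect, here is a sketch of combining a data path ($.choices[*].message.content) with a metadata path ($.usage), so that the usage statistics are repeated on every record (sample values are invented):

```python
# Invented sample response: choices hold the data, usage holds the metadata.
response = {
    "choices": [
        {"message": {"role": "assistant", "content": "Answer A"}},
        {"message": {"role": "assistant", "content": "Answer B"}},
    ],
    "usage": {"prompt_tokens": 6, "completion_tokens": 42},
}

# Path to Data: $.choices[*].message.content -> one record per array element.
# Path to Metadata: $.usage -> attached to every record as common metadata.
metadata = response["usage"]
records = [
    {"content": choice["message"]["content"], "metadata": metadata}
    for choice in response["choices"]
]
```

Every record carries the same `metadata` object, so per-response context like token usage is preserved alongside each extracted message.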

Request Headers

Optional
  • If Nexla should include any additional request headers in API calls to this source, enter the headers & corresponding values as comma-separated pairs in the Request Headers field (e.g., header1:value1,header2:value2). Additional headers are often required for API versioning, content type specifications, or custom authentication requirements.

    You do not need to include any headers already present in the credentials. Common headers like Authorization, Content-Type, and Accept are typically handled automatically by Nexla based on your credential configuration.
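The comma-separated format reads as a list of name:value pairs. As a sketch, this is how such a string maps to a header dictionary (the header names shown are hypothetical examples, not headers required by TGI):

```python
def parse_headers(spec: str) -> dict:
    """Parse 'header1:value1,header2:value2' into a header dict."""
    headers = {}
    for pair in spec.split(","):
        # Split on the first colon only, so values may contain colons.
        name, _, value = pair.partition(":")
        headers[name.strip()] = value.strip()
    return headers

# Hypothetical extra headers for API versioning and response format.
extra = parse_headers("X-Api-Version:2024-01-01,Accept:application/json")
```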

Endpoint Testing

After configuring all settings for the selected endpoint, Nexla can retrieve a sample of the data that will be fetched according to the current configuration. This allows users to verify that the source is configured correctly before saving.

  • To test the current endpoint configuration, click the Test button to the right of the endpoint selection menu. Sample data will be fetched & displayed in the Endpoint Test Result panel on the right.

  • If the sample data is not as expected, review the selected endpoint and associated settings, and make any necessary adjustments. Then, click the Test button again, and check the sample data to ensure that the correct information is displayed.

Save & Activate the Source

  1. Once all of the relevant steps in the above sections have been completed, click the Create button in the upper right corner of the screen to save and create the new Hugging Face TGI data source. Nexla will now begin ingesting data from the configured endpoint and will organize any data that it finds into one or more Nexsets.