Ollama API Data Source

The Ollama API connector enables you to connect to Ollama's local REST API to run inference, generate text completions, list available models, and perform chat completions using local LLMs. This connector is particularly useful for applications that need to run LLMs locally for privacy, cost control, or offline capabilities, without relying on cloud-based AI services. Follow the instructions below to create a new data flow that ingests data from an Ollama API source in Nexla.

Create a New Data Flow

  1. To create a new data flow, navigate to the Integrate section, and click the New Data Flow button. Then, select the desired flow type from the list, and click the Create button.

  2. Select the Ollama API connector tile from the list of available connectors. Then, select the credential that will be used to connect to the Ollama instance, and click Next; or, create a new Ollama API credential for use in this flow.

  3. In Nexla, Ollama API data sources can be created using pre-built endpoint templates, which expedite source setup for common Ollama API endpoints. Each template is designed specifically for the corresponding Ollama API endpoint, making source configuration easy and efficient.
    • To configure this source using a template, follow the instructions in Configure Using a Template.

    Ollama API sources can also be configured manually, allowing you to ingest data from Ollama API endpoints not included in the pre-built templates or apply further customizations to exactly suit your needs.
    • To configure this source manually, follow the instructions in Configure Manually.

Configure Using a Template

Nexla provides pre-built templates that can be used to rapidly configure data sources to ingest data from common Ollama API endpoints. Each template is designed specifically for the corresponding Ollama API endpoint, making data source setup easy and efficient.

Endpoint Settings

  • Use the Endpoint pulldown menu to select the endpoint from which this source will fetch data. Available endpoint templates are listed in the expandable boxes below. Click on an endpoint to see more information about it and how to configure your data source for this endpoint.

    Get Version

    This endpoint retrieves the Ollama version information. Use this endpoint when you need to check the Ollama version, verify connectivity, or get system information about your Ollama instance.

    • This endpoint automatically retrieves the version information from your Ollama instance. No additional configuration is required beyond selecting this endpoint template.

    The Get Version endpoint uses GET requests to retrieve version information from the Ollama API. This is a simple endpoint useful for testing connectivity and verifying that your Ollama instance is running correctly. For more information about the Get Version endpoint, refer to the Ollama API Documentation.
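
    For reference, the underlying request is a simple HTTP GET. The Python sketch below is illustrative only; it assumes an Ollama instance running at the default address http://localhost:11434, whereas in Nexla the base URL comes from your credential and the call is made for you.

        import requests

        # Check connectivity by fetching the Ollama version (default local address assumed).
        resp = requests.get("http://localhost:11434/api/version", timeout=10)
        resp.raise_for_status()
        print(resp.json())  # e.g. {"version": "..."}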

    List Models

    This endpoint retrieves all locally available models in Ollama. Use this endpoint when you need to list available models, check which models are installed, or get model information for further API calls.

    • This endpoint automatically retrieves all models available in your Ollama instance. No additional configuration is required beyond selecting this endpoint template.

    The List Models endpoint uses GET requests to retrieve model information from the Ollama API. The endpoint returns a list of all models that have been downloaded and are available for use in your Ollama instance. For more information about the List Models endpoint, refer to the Ollama API Documentation.
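
    For reference, the equivalent raw request is an HTTP GET to /api/tags. The Python sketch below is illustrative only and assumes an Ollama instance at the default address http://localhost:11434.

        import requests

        # Fetch the list of locally installed models from the /api/tags endpoint.
        resp = requests.get("http://localhost:11434/api/tags", timeout=10)
        resp.raise_for_status()
        for model in resp.json().get("models", []):
            print(model["name"])  # e.g. "llama3:latest"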

    Generate Text

    This endpoint generates text using a model and prompt. Use this endpoint when you need to generate text completions, perform text generation tasks, or get responses from local LLMs.

    • Enter the name of the local model to use in the Model Name field. Examples include llama3, mistral, codellama, or other models you have installed in Ollama. You can use the "List Models" endpoint to see available models.
    • Enter the prompt text in the Prompt field. This is the input text that the model will use to generate completions.

    The Generate Text endpoint uses POST requests to send prompts to the Ollama API and returns generated text completions. The endpoint supports various local models and provides text generation capabilities without requiring cloud-based AI services. For more information about the Generate Text endpoint, refer to the Ollama API Documentation.
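
    For reference, the template issues a POST to /api/generate with the model and prompt in the request body. The Python sketch below is illustrative only; it assumes an Ollama instance at http://localhost:11434 and that the llama3 model has been pulled locally.

        import requests

        payload = {
            "model": "llama3",                 # a locally installed model
            "prompt": "Why is the sky blue?",  # input text for the completion
            "stream": False,                   # return one JSON response instead of a stream
        }
        resp = requests.post("http://localhost:11434/api/generate", json=payload, timeout=120)
        resp.raise_for_status()
        print(resp.json()["response"])         # the generated completion text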

    Chat Completion

    This endpoint runs a multi-turn chat completion using a local model. Use this endpoint when you need to build conversational AI applications, create chatbots, or perform interactive text generation with context.

    • Enter the model name to use in the Model Name field. Examples include llama3, mistral, codellama, or other models you have installed in Ollama.
    • Enter the messages array in JSON format in the Messages (JSON Array) field. The messages should be an array of objects with role and content fields. For example: [{"role": "user", "content": "Hello!"}, {"role": "assistant", "content": "Hi there!"}, {"role": "user", "content": "How are you?"}].

    The Chat Completion endpoint uses POST requests to send chat messages to the Ollama API and returns conversational responses. The endpoint supports multi-turn conversations with context, allowing you to build interactive chat applications using local LLMs. For more information about the Chat Completion endpoint, refer to the Ollama API Documentation.
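
    For reference, the template issues a POST to /api/chat with the model name and messages array in the request body. The Python sketch below is illustrative only; it assumes an Ollama instance at http://localhost:11434 and a locally installed llama3 model.

        import requests

        payload = {
            "model": "llama3",  # a locally installed model
            "messages": [
                {"role": "user", "content": "Hello!"},
                {"role": "assistant", "content": "Hi there!"},
                {"role": "user", "content": "How are you?"},
            ],
            "stream": False,    # return one JSON response instead of a stream
        }
        resp = requests.post("http://localhost:11434/api/chat", json=payload, timeout=120)
        resp.raise_for_status()
        print(resp.json()["message"]["content"])  # the assistant's reply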

Endpoint Testing

Once the selected endpoint template has been configured, Nexla can retrieve a sample of the data that will be fetched according to the current settings. This allows users to verify that the source is configured correctly before saving.

  • To test the current endpoint configuration, click the Test button to the right of the endpoint selection menu. Sample data will be fetched & displayed in the Endpoint Test Result panel on the right.

  • If the sample data is not as expected, review the selected endpoint and associated settings, and make any necessary adjustments. Then, click the Test button again, and check the sample data to ensure that the correct information is displayed.

Configure Manually

Ollama API data sources can be manually configured to ingest data from any valid Ollama API endpoint. Manual configuration provides maximum flexibility for accessing endpoints not covered by pre-built templates or when you need custom API configurations.

With manual configuration, you can also create more complex Ollama API sources, such as sources that use chained API calls to fetch data from multiple endpoints or sources that require custom authentication headers or request parameters.

API Method

  1. To manually configure this source, select the Advanced tab at the top of the configuration screen.

  2. From the Method pulldown menu, select the API method that will be used for calls to the Ollama API. The most common methods are:

    • GET: For retrieving data from the API (version and list models endpoints use GET)
    • POST: For sending generation requests to the API (generate and chat endpoints use POST)

API Endpoint URL

  1. Enter the URL of the Ollama API endpoint from which this source will fetch data in the Set API URL field. This should be the complete URL including the protocol (http:// or https://) and any required path parameters. Ollama API endpoints typically follow the pattern {base_url}/api/{operation}, where {base_url} is your Ollama base URL configured in the credential.

Ensure the API endpoint URL is correct and accessible with your current credentials. You can test the endpoint using the Test button after configuring the URL. The endpoint URL should use the base URL configured in your credential. Common Ollama endpoints include /api/version, /api/tags (for listing models), /api/generate, and /api/chat.
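
For example, assuming a credential base URL of http://localhost:11434 (the Ollama default), the endpoint URL pattern resolves as shown below; these values are illustrative and should be adjusted to match your own credential configuration.

    http://localhost:11434/api/version     (Get Version)
    http://localhost:11434/api/tags        (List Models)
    http://localhost:11434/api/generate    (Generate Text)
    http://localhost:11434/api/chat        (Chat Completion)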

Path to Data

Optional

If only a subset of the data that will be returned by the API endpoint is needed, you can designate the part(s) of the response that should be included in the Nexset(s) produced from this source by specifying the path to the relevant data within the response. This is particularly useful when API responses contain metadata, pagination information, or other data that you don't need for your analysis.

For example, when a request call is used to list models, the API will typically return model data along with metadata. By entering the path to the relevant data, you can configure Nexla to extract the specific models you need.

Path to Data is essential when API responses have nested structures. Without specifying the correct path, Nexla might not be able to properly parse and organize your data into usable records. For Ollama API responses, common paths include $ for the entire response, $.models[*] for arrays of models, or $.response for generated text.

  • To specify which data should be treated as relevant in responses from this source, enter the path to the relevant data in the Set Path to Data in Response field.

    • For responses in JSON format, enter the JSON path that points to the object or array that should be treated as relevant data. JSON paths use dot notation (e.g., $.models to access the models array).
    Path to Data Example:

    If the API response is in JSON format and includes a models array that contains the model information, the path to the data would be entered as $.models[*].
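
    To make this concrete, the Python snippet below shows a trimmed List Models response and the records that $.models[*] selects from it. It is for illustration only; the field values are placeholders, and Nexla applies the path for you when parsing the response.

        # Trimmed /api/tags response (placeholder values for illustration).
        sample_response = {
            "models": [
                {"name": "llama3:latest"},
                {"name": "mistral:latest"},
            ]
        }

        # $.models[*] selects each element of the models array as a separate record.
        records = sample_response["models"]
        print(records)  # [{'name': 'llama3:latest'}, {'name': 'mistral:latest'}]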

Autogenerate Path Suggestions

Nexla can also autogenerate data path suggestions based on the response from the API endpoint. These suggested paths can be used as-is or modified to exactly suit your needs.

  • To use this feature, click the Test button next to the Set API URL field to fetch a sample response from the API endpoint. Suggested data paths generated based on the content & format of the response will be displayed in the Suggestions box below the Set Path to Data in Response field.

  • Click on a suggestion to automatically populate the Set Path to Data in Response field with the corresponding path. The populated path can be modified directly within the field if further customization is needed.

Request Headers

Optional
  • If Nexla should include any additional request headers in API calls to this source, enter the headers & corresponding values as comma-separated pairs in the Request Headers field (e.g., header1:value1,header2:value2). Additional headers are often required for API versioning, content type specifications, or custom authentication requirements.

    You do not need to include any headers already present in the credentials. Common headers like Authorization, Content-Type, and Accept are typically handled automatically by Nexla based on your credential configuration. For Ollama, Content-Type is typically set to application/json for POST requests.

Request Body

Optional
  • If the API endpoint requires a request body (which is common for POST requests to Ollama), enter the request body in the Request Body field. The request body should be formatted as JSON and include the necessary parameters for the generation or chat request, such as the model name, prompt or messages, and any optional parameters.

    For Ollama generation and chat requests, the request body typically includes a model field (e.g., "llama3"), a prompt field for text generation or a messages array for chat completions, and optionally a stream field set to false for non-streaming responses. Refer to the Ollama API documentation for the complete list of supported parameters.
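
    As an illustration, a minimal request body for a non-streaming text generation call might look like the following; the model name and prompt are example values that should be replaced with your own.

        {
          "model": "llama3",
          "prompt": "Why is the sky blue?",
          "stream": false
        }

    For a chat completion call, the body uses a messages array instead of a prompt:

        {
          "model": "llama3",
          "messages": [
            {"role": "user", "content": "Hello!"}
          ],
          "stream": false
        }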

Endpoint Testing

After configuring all settings for the selected endpoint, Nexla can retrieve a sample of the data that will be fetched according to the current configuration. This allows users to verify that the source is configured correctly before saving.

  • To test the current endpoint configuration, click the Test button to the right of the endpoint selection menu. Sample data will be fetched & displayed in the Endpoint Test Result panel on the right.

  • If the sample data is not as expected, review the selected endpoint and associated settings, and make any necessary adjustments. Then, click the Test button again, and check the sample data to ensure that the correct information is displayed.

Save & Activate the Source

  1. Once all of the relevant steps in the above sections have been completed, click the Create button in the upper right corner of the screen to save and create the new Ollama API data source. Nexla will now begin ingesting data from the configured endpoint and will organize any data that it finds into one or more Nexsets.