subcategory: Databricks SQL

databricks_sql_endpoint Resource

This resource is used to manage Databricks SQL warehouses. To create SQL warehouses, you must have the databricks_sql_access entitlement on your databricks_group or databricks_user.
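
A minimal sketch of granting that entitlement to an existing group (this assumes the databricks_entitlements resource is available in your provider version and that a group named "analysts" already exists in the workspace):

# Look up the existing group by its display name.
data "databricks_group" "analysts" {
  display_name = "analysts" # assumed group name; adjust to your workspace
}

# Grant the SQL access entitlement to that group.
resource "databricks_entitlements" "analysts" {
  group_id              = data.databricks_group.analysts.id
  databricks_sql_access = true
}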

Example usage

data "databricks_current_user" "me" {}

resource "databricks_sql_endpoint" "this" {
  name             = "Endpoint of ${data.databricks_current_user.me.alphanumeric}"
  cluster_size     = "Small"
  max_num_clusters = 1

  tags {
    custom_tags {
      key   = "City"
      value = "Amsterdam"
    }
  }
}

Argument reference

The following arguments are supported:

  • name - (Required) Name of the SQL warehouse. Must be unique.

  • cluster_size - (Required) The size of the clusters allocated to the endpoint: "2X-Small", "X-Small", "Small", "Medium", "Large", "X-Large", "2X-Large", "3X-Large", "4X-Large".

  • min_num_clusters - Minimum number of clusters available when a SQL warehouse is running. The default is 1.

  • max_num_clusters - Maximum number of clusters available when a SQL warehouse is running. This field is required. If multi-cluster load balancing is not enabled, this defaults to 1.

  • auto_stop_mins - Time in minutes until an idle SQL warehouse terminates all clusters and stops. This field is optional. The default is 120; set it to 0 to disable auto stop.

  • tags - Databricks tags all endpoint resources with these tags. Specified as a tags block containing one or more custom_tags blocks, each with key and value fields (see the example above).

  • spot_instance_policy - The spot policy to use for allocating instances to clusters: COST_OPTIMIZED or RELIABILITY_OPTIMIZED. This field is optional. Default is COST_OPTIMIZED.

  • enable_photon - Whether to enable Photon. This field is optional and is enabled by default.

  • enable_serverless_compute - Whether this SQL warehouse is a serverless endpoint. See below for details about the default values. To avoid ambiguity, especially for organizations with many workspaces, Databricks recommends that you always set this field explicitly.

    • For AWS: if omitted, the default is false for most workspaces. However, if this workspace used the SQL Warehouses API to create a warehouse between September 1, 2022 and April 30, 2023, the default remains the previous behavior, which is to default to true if the workspace is enabled for serverless and meets the requirements for serverless SQL warehouses. If your account needs updated terms of use, workspace admins are prompted in the Databricks SQL UI. A workspace must meet the requirements and might require an update to its instance profile role to add a trust relationship.

    • For Azure: if omitted, the default is false for most workspaces. However, if this workspace used the SQL Warehouses API to create a warehouse between November 1, 2022 and May 19, 2023, the default remains the previous behavior, which is to default to true if the workspace is enabled for serverless and meets the requirements for serverless SQL warehouses. A workspace must meet the requirements and might require an update to its Azure storage firewall.

  • channel block, consisting of the following fields:

    • name - Name of the Databricks SQL release channel. Possible values are: CHANNEL_NAME_PREVIEW and CHANNEL_NAME_CURRENT. Default is CHANNEL_NAME_CURRENT.

  • warehouse_type - SQL warehouse type; see the SQL warehouse documentation for AWS or Azure. Set to PRO or CLASSIC. If the field enable_serverless_compute has the value true, either explicitly or through the default logic (see that field above for details), the default is PRO, which is required for serverless SQL warehouses. Otherwise, the default is CLASSIC. A combined sketch of these arguments appears after this list.
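
A minimal sketch combining several of the optional arguments above (the resource name and values are illustrative only, not defaults):

resource "databricks_sql_endpoint" "pro" {
  name                      = "Reporting warehouse"
  cluster_size              = "Medium"
  min_num_clusters          = 1
  max_num_clusters          = 4
  auto_stop_mins            = 30
  spot_instance_policy      = "COST_OPTIMIZED"
  enable_photon             = true
  enable_serverless_compute = false
  warehouse_type            = "PRO"

  channel {
    name = "CHANNEL_NAME_CURRENT"
  }
}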

Attribute reference

In addition to all arguments above, the following attributes are exported:

  • id - the unique ID of the SQL warehouse.
  • jdbc_url - JDBC connection string.
  • odbc_params - ODBC connection params: odbc_params.hostname, odbc_params.path, odbc_params.protocol, and odbc_params.port.
  • data_source_id - ID of the data source for this endpoint. This is used to bind a Databricks SQL query to an endpoint (see the outputs sketch after this list).
  • creator_name - The username of the user who created the endpoint.
  • num_active_sessions - The current number of active sessions for the endpoint.
  • num_clusters - The current number of clusters used by the endpoint.
  • state - The current state of the endpoint.
  • health - Health status of the endpoint.
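
The exported connection attributes can be surfaced as Terraform outputs, for example (output names are arbitrary; the resource reference matches the example above):

# Expose the JDBC connection string of the warehouse.
output "warehouse_jdbc_url" {
  value = databricks_sql_endpoint.this.jdbc_url
}

# Expose the data source ID used to bind Databricks SQL queries to this warehouse.
output "warehouse_data_source_id" {
  value = databricks_sql_endpoint.this.data_source_id
}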

Access control
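
Individual permissions on an existing SQL warehouse are typically managed with the databricks_permissions resource. A minimal sketch (it assumes a group named "analysts" and the warehouse from the example above):

# Allow the "analysts" group to use, but not manage, the warehouse.
resource "databricks_permissions" "endpoint_usage" {
  sql_endpoint_id = databricks_sql_endpoint.this.id

  access_control {
    group_name       = "analysts"
    permission_level = "CAN_USE"
  }
}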

Timeouts

The timeouts block allows you to specify a create timeout. It usually takes 10-20 minutes to provision a Databricks SQL warehouse.

timeouts {
  create = "30m"
}
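
The block is nested inside the resource, for example:

resource "databricks_sql_endpoint" "this" {
  # ... arguments as in the examples above ...

  timeouts {
    create = "30m"
  }
}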

Import

You can import a databricks_sql_endpoint resource using its ID, for example:

terraform import databricks_sql_endpoint.this <endpoint-id>
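
On Terraform 1.5 and later, an import block can be used instead of the CLI command (a sketch; replace the placeholder with the real warehouse ID):

import {
  to = databricks_sql_endpoint.this
  id = "<endpoint-id>"
}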

Related resources

The following resources are often used in the same context: