# terraform-google-cloud

Terraform modules for provisioning StreamNative Cloud on Google Cloud.

An example configuration can be found in `examples/root_example/main.tf`.

Assuming you have the gcloud CLI installed and configured, run the example with:

```shell
terraform apply --target module.sn_cluster
terraform apply
```

The targeted apply creates the GKE cluster first; the second apply then installs the Kubernetes and Helm resources that depend on it.
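The module's required inputs (see the Inputs table below) are `cluster_name`, `project_id`, `region`, `vpc_network`, and `vpc_subnet`. A minimal invocation might look like the following sketch; the registry source address and all values are illustrative assumptions, not taken from the repository:

```hcl
# Minimal sketch of a module invocation. The source address and every
# value below are placeholders -- adjust them for your project.
module "sn_cluster" {
  source = "streamnative/cloud/google" # assumed registry address

  cluster_name = "sn-cluster-demo" # required
  project_id   = "my-gcp-project"  # required
  region       = "us-central1"     # required
  vpc_network  = "default"         # required
  vpc_subnet   = "default"         # required
}
```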

## Requirements

| Name | Version |
|------|---------|
| terraform | >= 1.3.0 |
| google | ~> 5.19 |
| google-beta | ~> 5.19 |
| helm | ~> 2.2 |
| kubernetes | ~> 2.8 |

## Providers

| Name | Version |
|------|---------|
| google | ~> 5.19 |
| helm | ~> 2.2 |
| kubernetes | ~> 2.8 |

## Modules

| Name | Source | Version |
|------|--------|---------|
| cert_manager_sa | terraform-google-modules/kubernetes-engine/google//modules/workload-identity | 30.1.0 |
| external_dns_sa | terraform-google-modules/kubernetes-engine/google//modules/workload-identity | 30.1.0 |
| external_secrets_sa | terraform-google-modules/kubernetes-engine/google//modules/workload-identity | 30.1.0 |
| gke | terraform-google-modules/kubernetes-engine/google | 29.0.0 |
| gke_private | terraform-google-modules/kubernetes-engine/google//modules/private-cluster | 29.0.0 |
| istio | github.com/streamnative/terraform-helm-charts//modules/istio-operator | master |

## Resources

| Name | Type |
|------|------|
| google_kms_crypto_key.gke_encryption_key | resource |
| google_kms_key_ring.keyring | resource |
| google_project_iam_member.kms_iam_binding | resource |
| helm_release.cert_issuer | resource |
| helm_release.cert_manager | resource |
| helm_release.cilium | resource |
| helm_release.external_dns | resource |
| helm_release.external_secrets | resource |
| kubernetes_namespace.istio_system | resource |
| kubernetes_namespace.sn_system | resource |
| kubernetes_resource_quota.istio_critical_pods | resource |
| kubernetes_storage_class.sn_default | resource |
| kubernetes_storage_class.sn_ssd | resource |
| google_compute_zones.available | data source |
| google_project.project | data source |

## Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| add_cluster_firewall_rules | Creates additional firewall rules on the cluster. | `bool` | `false` | no |
| add_master_webhook_firewall_rules | Create master_webhook firewall rules for ports defined in `firewall_inbound_ports`. | `bool` | `false` | no |
| add_shadow_firewall_rules | Create GKE shadow firewall rules (the same as the default firewall rules, but with firewall logging enabled). | `bool` | `false` | no |
| authenticator_security_group | The name of the RBAC security group for use with Google security groups in Kubernetes RBAC. The group name must be in the format `gke-security-groups@yourdomain.com`. | `string` | `null` | no |
| cert_issuer_support_email | The email address to receive notifications from the cert issuer. | `string` | `"certs-support@streamnative.io"` | no |
| cert_manager_helm_chart_name | The name of the Cert-Manager Helm chart to be used. | `string` | `"cert-manager"` | no |
| cert_manager_helm_chart_repository | The location of the Helm chart to use for Cert-Manager. | `string` | `"https://charts.bitnami.com/bitnami"` | no |
| cert_manager_helm_chart_version | The version of the Cert-Manager Helm chart to install. Defaults to "0.7.8". | `string` | `"0.7.8"` | no |
| cert_manager_settings | Additional settings which will be passed to the Helm chart values. See https://github.com/bitnami/charts/tree/master/bitnami/cert-manager for detailed options. | `map(any)` | `{}` | no |
| cilium_helm_chart_name | The name of the Helm chart in the repository for Cilium. | `string` | `"cilium"` | no |
| cilium_helm_chart_repository | The repository containing the Cilium Helm chart. | `string` | `"https://helm.cilium.io"` | no |
| cilium_helm_chart_version | Helm chart version for Cilium. See https://artifacthub.io/packages/helm/cilium/cilium for updates. | `string` | `"1.13.2"` | no |
| cluster_autoscaling_config | Cluster autoscaling configuration for node auto-provisioning. This is disabled in our configuration, since we typically want to scale existing node pools rather than add new ones to the cluster. | `object({ enabled = bool, min_cpu_cores = number, max_cpu_cores = number, min_memory_gb = number, max_memory_gb = number, gpu_resources = list(object({ resource_type = string, minimum = number, maximum = number })), auto_repair = bool, auto_upgrade = bool })` | `{ "auto_repair": true, "auto_upgrade": false, "enabled": false, "gpu_resources": [], "max_cpu_cores": null, "max_memory_gb": null, "min_cpu_cores": null, "min_memory_gb": null }` | no |
| cluster_http_load_balancing | Enable the HTTP load balancing addon for the cluster. Defaults to "true". | `bool` | `true` | no |
| cluster_name | The name of your GKE cluster. | `string` | n/a | yes |
| cluster_network_policy | Enable the network policy addon for the cluster. Defaults to "true", and uses CALICO as the provider. | `bool` | `true` | no |
| create_service_account | Creates a service account for the cluster. Defaults to "true". | `bool` | `true` | no |
| database_encryption_key_name | Name of the KMS key used to encrypt Kubernetes secrets at rest in etcd. | `string` | `""` | no |
| datapath_provider | The datapath provider to use. In the future, the default should be ADVANCED_DATAPATH. | `string` | `"DATAPATH_PROVIDER_UNSPECIFIED"` | no |
| default_max_pods_per_node | The maximum number of pods per node. Defaults to the GKE default of 110, but in smaller CIDRs we want to tune this number. | `number` | `110` | no |
| deletion_protection | Whether or not to allow Terraform to destroy the cluster. | `bool` | `true` | no |
| enable_cert_manager | Enables the Cert-Manager addon service on the cluster. Defaults to "true", and in most situations is required by StreamNative Cloud. | `bool` | `true` | no |
| enable_cilium | Enables Cilium on the cluster. Set to "false" by default. | `bool` | `false` | no |
| enable_database_encryption | Enables etcd encryption via Google KMS. | `bool` | `false` | no |
| enable_external_dns | Enables the ExternalDNS addon service on the cluster. Defaults to "true", and in most situations is required by StreamNative Cloud. | `bool` | `true` | no |
| enable_external_secrets | Enables kubernetes-external-secrets on the cluster, which uses GCP Secret Manager as the secrets backend. | `bool` | `true` | no |
| enable_func_pool | Enable an additional dedicated pool for Pulsar Functions. Enabled by default. | `bool` | `true` | no |
| enable_istio | Enables Istio on the cluster. Set to "false" by default. | `bool` | `false` | no |
| enable_private_gke | Enables a private GKE cluster, where nodes are not publicly accessible. Defaults to "false". | `bool` | `false` | no |
| enable_private_nodes | Whether nodes have internal IP addresses only. Only used for private clusters. | `bool` | `true` | no |
| enable_resource_creation | When enabled, all dependencies, such as service accounts and buckets, will be created. When disabled, they will not. Use in combination with `enable_<app>` to manage these outside this module. | `bool` | `true` | no |
| external_dns_helm_chart_name | The name of the Helm chart in the repository for ExternalDNS. | `string` | `"external-dns"` | no |
| external_dns_helm_chart_repository | The repository containing the ExternalDNS Helm chart. | `string` | `"https://charts.bitnami.com/bitnami"` | no |
| external_dns_helm_chart_version | Helm chart version for ExternalDNS. See https://github.com/bitnami/charts/tree/master/bitnami/external-dns for updates. | `string` | `"6.15.0"` | no |
| external_dns_policy | Sets how DNS records are managed by ExternalDNS. Options are "sync", which allows ExternalDNS to create and delete records, or "upsert-only", which only allows the creation of records. | `string` | `"upsert-only"` | no |
| external_dns_settings | Additional settings which will be passed to the Helm chart values. See https://github.com/bitnami/charts/tree/master/bitnami/external-dns for detailed options. | `map(any)` | `{}` | no |
| external_dns_version | The version of the ExternalDNS Helm chart to install. Defaults to "5.2.2". | `string` | `"5.2.2"` | no |
| external_secrets_helm_chart_name | The name of the Helm chart in the repository for kubernetes-external-secrets. | `string` | `"kubernetes-external-secrets"` | no |
| external_secrets_helm_chart_repository | The repository containing the kubernetes-external-secrets Helm chart. | `string` | `"https://external-secrets.github.io/kubernetes-external-secrets"` | no |
| external_secrets_helm_chart_version | Helm chart version for kubernetes-external-secrets. Defaults to "8.3.0". See https://github.com/external-secrets/kubernetes-external-secrets/tree/master/charts/kubernetes-external-secrets for updates. | `string` | `"8.3.0"` | no |
| external_secrets_settings | Additional settings which will be passed to the Helm chart values. See https://github.com/external-secrets/kubernetes-external-secrets/tree/master/charts/kubernetes-external-secrets for available options. | `map(any)` | `{}` | no |
| firewall_inbound_ports | List of TCP ports for admission/webhook controllers. Either `add_master_webhook_firewall_rules` or `add_cluster_firewall_rules` (which also adds egress rules) must be set to true for inbound-ports firewall rules to be applied. | `list(string)` | `["5443", "8443", "9443", "15017"]` | no |
| func_pool_auto_repair | Enable auto-repair for the Pulsar Functions pool. | `bool` | `true` | no |
| func_pool_auto_upgrade | Enable auto-upgrade for the Pulsar Functions pool. | `bool` | `true` | no |
| func_pool_autoscaling | Enable autoscaling of the Pulsar Functions pool. Defaults to "true". | `bool` | `true` | no |
| func_pool_autoscaling_initial_count | The initial number of nodes in the Pulsar Functions pool, per zone, when autoscaling is enabled. Defaults to 0. | `number` | `0` | no |
| func_pool_autoscaling_max_size | The maximum size of the Pulsar Functions pool autoscaling group. Defaults to 3. | `number` | `3` | no |
| func_pool_autoscaling_min_size | The minimum size of the Pulsar Functions pool autoscaling group. Defaults to 0. | `number` | `0` | no |
| func_pool_count | The number of worker nodes in the Pulsar Functions pool. This is only used if `func_pool_autoscaling` is set to false. Defaults to 1. | `number` | `1` | no |
| func_pool_disk_size | Disk size in GB for worker nodes in the Pulsar Functions pool. Defaults to 100. | `number` | `100` | no |
| func_pool_disk_type | The type of disk attached to worker nodes in the Pulsar Functions pool. Defaults to "pd-standard". | `string` | `"pd-standard"` | no |
| func_pool_image_type | The image type to use for worker nodes in the Pulsar Functions pool. Defaults to "COS_CONTAINERD" (container-optimized OS with containerd). | `string` | `"COS_CONTAINERD"` | no |
| func_pool_locations | A string of comma-separated values (an upstream requirement) of zones for the Pulsar Functions pool, e.g. "us-central1-b,us-central1-c". Nodes must be in the same region as the cluster. Defaults to three random zones in the region specified for the cluster via the "cluster_location" input, or the zones provided through the "node_pool_locations" input (if it is defined). | `string` | `""` | no |
| func_pool_machine_type | The machine type to use for worker nodes in the Pulsar Functions pool. Defaults to "n2-standard-4". | `string` | `"n2-standard-4"` | no |
| func_pool_max_pods_per_node | The maximum number of pods per node in the Pulsar Functions pool. | `number` | `110` | no |
| func_pool_name | The name of the Pulsar Functions pool. Defaults to "func-pool". | `string` | `"func-pool"` | no |
| func_pool_service_account | The service account email address to use for the Pulsar Functions pool. If `create_service_account` is set to true, it will use the output from the module. | `string` | `""` | no |
| func_pool_ssd_count | The number of SSDs to attach to each node in the Pulsar Functions pool. Defaults to 0. | `number` | `0` | no |
| func_pool_version | The version of Kubernetes to use for the Pulsar Functions pool. If the input "release_channel" is not defined, defaults to the "kubernetes_version" used for the cluster. Should only be defined while "func_pool_auto_upgrade" is also set to "false". | `string` | `""` | no |
| google_service_account | When set, don't create GSAs and instead use this service account for all apps. | `string` | `""` | no |
| horizontal_pod_autoscaling | Enable horizontal pod autoscaling for the cluster. Defaults to "true". | `bool` | `true` | no |
| istio_chart_version | The version of the Istio chart to use. | `string` | `"2.11"` | no |
| istio_mesh_id | The ID used by the Istio mesh. This is also the ID of the StreamNative Cloud Pool used for the workload environments. This is required when "enable_istio_operator" is set to "true". | `string` | `null` | no |
| istio_network | The name of the network used for the Istio deployment. This is required when "enable_istio_operator" is set to "true". | `string` | `"default"` | no |
| istio_network_loadbalancer | n/a | `string` | `"internet_facing"` | no |
| istio_profile | The path or name for an Istio profile to load. Set to the profile "default" if not specified. | `string` | `"default"` | no |
| istio_revision_tag | The revision tag value to use for the Istio label "istio.io/rev". | `string` | `"sn-stable"` | no |
| istio_settings | Additional settings which will be passed to the Helm chart values. | `map(any)` | `{}` | no |
| istio_trust_domain | The trust domain used for the Istio deployment, which corresponds to the root of a system. This is required when "enable_istio_operator" is set to "true". | `string` | `"cluster.local"` | no |
| kiali_operator_settings | Additional settings which will be passed to the Helm chart values. | `map(any)` | `{}` | no |
| kubernetes_version | The version of Kubernetes to use for the cluster. Defaults to "latest", which uses the latest available version for GKE in the region specified. | `string` | `"latest"` | no |
| logging_enabled_components | List of services to monitor: SYSTEM_COMPONENTS, APISERVER, CONTROLLER_MANAGER, SCHEDULER, WORKLOADS. An empty list is the default GKE configuration. | `list(string)` | `[]` | no |
| logging_service | The logging service to use for the cluster. Defaults to "logging.googleapis.com/kubernetes". | `string` | `"logging.googleapis.com/kubernetes"` | no |
| maintenance_exclusions | A list of objects used to define exceptions to the maintenance window, when non-emergency maintenance should not occur. Can have up to three exclusions. Refer to the official Terraform docs on the "google_container_cluster" resource for the object schema. | `list(object({ name = string, start_time = string, end_time = string, exclusion_scope = string }))` | `[]` | no |
| maintenance_window | The start time (in RFC3339 format) for GKE to perform maintenance operations. Defaults to "05:00". | `string` | `"05:00"` | no |
| master_authorized_networks | A list of objects used to define authorized networks. If none are provided, the default is to disallow external access. See the parent module for more details: https://registry.terraform.io/modules/terraform-google-modules/kubernetes-engine/google/latest | `list(object({ cidr_block = string, display_name = string }))` | `[]` | no |
| master_ipv4_cidr_block | The IP range in CIDR notation to use for the hosted master network. Only used for private clusters. | `string` | `"10.0.0.0/28"` | no |
| monitoring_enabled_components | List of services to monitor: SYSTEM_COMPONENTS, APISERVER, CONTROLLER_MANAGER, SCHEDULER. An empty list is the default GKE configuration. | `list(string)` | `[]` | no |
| network_project_id | If using a different project, the ID of that project. | `string` | `""` | no |
| node_pool_auto_repair | Enable auto-repair for the default node pool. | `bool` | `true` | no |
| node_pool_auto_upgrade | Enable auto-upgrade for the default node pool. | `bool` | `true` | no |
| node_pool_autoscaling | Enable autoscaling of the default node pool. Defaults to "true". | `bool` | `true` | no |
| node_pool_autoscaling_initial_count | The initial number of nodes in the default node pool, per zone, when autoscaling is enabled. Defaults to 1. | `number` | `1` | no |
| node_pool_autoscaling_max_size | The maximum size of the default node pool autoscaling group. Defaults to 5. | `number` | `5` | no |
| node_pool_autoscaling_min_size | The minimum size of the default node pool autoscaling group. Defaults to 1. | `number` | `1` | no |
| node_pool_count | The number of worker nodes in the default node pool. This is only used if `node_pool_autoscaling` is set to false. Defaults to 3. | `number` | `3` | no |
| node_pool_disk_size | Disk size in GB for worker nodes in the default node pool. Defaults to 100. | `number` | `100` | no |
| node_pool_disk_type | The type of disk attached to worker nodes in the default node pool. Defaults to "pd-standard". | `string` | `"pd-standard"` | no |
| node_pool_image_type | The image type to use for worker nodes in the default node pool. Defaults to "COS_CONTAINERD" (container-optimized OS with containerd). | `string` | `"COS_CONTAINERD"` | no |
| node_pool_locations | A string of comma-separated values (an upstream requirement) of zones for the location of the default node pool, e.g. "us-central1-b,us-central1-c". Nodes must be in the same region as the cluster. Defaults to three random zones in the region chosen for the cluster. | `string` | `""` | no |
| node_pool_machine_type | The machine type to use for worker nodes in the default node pool. Defaults to "n2-standard-8". | `string` | `"n2-standard-8"` | no |
| node_pool_max_pods_per_node | The maximum number of pods per node in the default node pool. | `number` | `110` | no |
| node_pool_name | The name of the default node pool. Defaults to "default-node-pool". | `string` | `"default-node-pool"` | no |
| node_pool_secure_boot | Enable the node pool secure boot setting. | `bool` | `false` | no |
| node_pool_service_account | The service account email address to use for the default node pool. If `create_service_account` is set to true, it will use the output from the module. | `string` | `""` | no |
| node_pool_ssd_count | The number of SSDs to attach to each node in the default node pool. | `number` | `0` | no |
| node_pool_version | The version of Kubernetes to use for the default node pool. If the input "release_channel" is not defined, defaults to the "kubernetes_version" used for the cluster. Should only be defined while "node_pool_auto_upgrade" is also set to "false". | `string` | `""` | no |
| project_id | The project ID to use for the cluster. | `string` | n/a | yes |
| region | The GCP region where the GKE cluster will be deployed. This module only supports creation of a regional cluster. | `string` | n/a | yes |
| release_channel | The Kubernetes release channel to use for the cluster. Accepted values are "UNSPECIFIED", "RAPID", "REGULAR" and "STABLE". Defaults to "STABLE". | `string` | `"STABLE"` | no |
| secondary_ip_range_pods | The name of the secondary range to use for the pods in the cluster. If no secondary range for the pod network is provided, GKE will create a /14 CIDR within the subnetwork provided by the "vpc_subnet" input. | `string` | `null` | no |
| secondary_ip_range_pods_cidr | The CIDR of the secondary range, required when using Cilium. | `string` | `null` | no |
| secondary_ip_range_services | The name of the secondary range to use for services in the cluster. If no secondary range for the services network is provided, GKE will create a /20 CIDR within the subnetwork provided by the "vpc_subnet" input. | `string` | `null` | no |
| service_domain | The DNS domain for external service endpoints. This must be set when enabling Istio, or else the deployment will fail. | `string` | `null` | no |
| storage_class_default_ssd | Determines whether the default storage class should use SSD. | `bool` | `false` | no |
| suffix | A unique string used to distinguish cluster resources, where name length constraints are imposed by GKE. Defaults to an empty string. | `string` | `""` | no |
| vpc_network | The name of the VPC network to use for the cluster. Can be set to "default" if the default VPC is enabled in the project. | `string` | n/a | yes |
| vpc_subnet | The name of the VPC subnetwork to be used by the cluster nodes. Can be set to "default" if the default VPC is enabled in the project, and GKE will choose the subnetwork based on the "region" input. | `string` | n/a | yes |
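The object-typed inputs follow the schemas shown in the table above. As an illustration, `master_authorized_networks` and `maintenance_exclusions` might be set as follows; every address, date, and scope value below is a placeholder, not taken from the repository:

```hcl
# Illustrative values only -- the object shapes come from the Inputs
# table, but the CIDR, dates, and exclusion scope are placeholders.
module "sn_cluster" {
  # ... required inputs omitted ...

  master_authorized_networks = [
    { cidr_block = "203.0.113.0/24", display_name = "office" },
  ]

  maintenance_exclusions = [
    {
      name            = "year-end-freeze"
      start_time      = "2024-12-20T00:00:00Z"
      end_time        = "2025-01-05T00:00:00Z"
      exclusion_scope = "NO_UPGRADES"
    },
  ]
}
```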

## Outputs

| Name | Description |
|------|-------------|
| ca_certificate | n/a |
| cert_manager_sa_email | n/a |
| endpoint | n/a |
| external_dns_manager_sa_email | n/a |
| external_secrets_sa_email | n/a |
| id | n/a |
| master_version | n/a |
| name | n/a |
| node_pool_azs | n/a |
| service_account | n/a |
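The `endpoint` and `ca_certificate` outputs are commonly used to point the Kubernetes provider at the new cluster. A sketch of that pattern, assuming the module block is labeled `sn_cluster` and that `ca_certificate` is base64-encoded, as GKE returns it:

```hcl
# Sketch: wire the Kubernetes provider to the cluster's outputs.
# Assumes the module block is labeled "sn_cluster".
data "google_client_config" "current" {}

provider "kubernetes" {
  host                   = "https://${module.sn_cluster.endpoint}"
  token                  = data.google_client_config.current.access_token
  cluster_ca_certificate = base64decode(module.sn_cluster.ca_certificate)
}
```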