Configure pod parameters
You can specify parameters for the installation with the --set flag. The following table lists the parameters available when setting up a pod:
Parameter | Default | Description |
---|---|---|
global.image.registry | docker-registry.internal.futurex.com | The Docker registry from which the Futurex container images are pulled. |
global.image.namespace | futurex/cryptohub | The prefix for all Futurex container image names in the registry. For example, the auth microservice image is named futurex/cryptohub/auth by default. |
global.image.tag | 7.0.2.x | The tag for the Futurex containers. In development, this is a branch string (e.g., 7.0.2.x). In production, this is normally an exact version or a branch-friendly name such as bionic, stable, or latest. |
global.pullPolicy | Always | Determines whether cached container images are used or new versions are pulled from the registry (standard Kubernetes image pull policies: Always, IfNotPresent, Never). |
global.serviceType | NodePort | How the service is exposed outside the cluster. Valid options: NodePort, LoadBalancer, Ingress. |
settings.balancerPortRange.start | 1024 | The Futurex HSM/application load balancer allows the user to set up custom balancing ports. This value is the lowest port number allowed for those custom ports and must be in the range 1024 to 1723. |
settings.balancerPortRange.end | 1032 | The counterpart to settings.balancerPortRange.start; the highest port number allowed for custom balancing ports. |
settings.tunnelPortRange.start | 1050 | The Futurex HSM/application tunnel allows the user to set up custom tunnel ports. This value is the lowest port number allowed for those custom ports and must be in the range 1050 to 1058. |
settings.tunnelPortRange.end | 1059 | The counterpart to settings.tunnelPortRange.start; the highest port number allowed for custom tunnel ports. |
settings.networkDrivesEnabled | false | Whether or not to enable network drives. |
settings.mountFuse | false | Whether or not to mount Fuse. |
settings.crashReports | false | Whether or not to enable crash reports. |
hsm.secret | | The name of the secret that contains the HSM TLS PKI. |
hsm.address | 10.90.4.91 | The address of the HSM with application features. |
hsm.webEnabled | false | Whether or not to enable the HSM web portal. |
hsm.admin.port | 9009 | The TLS admin port on the HSM. |
hsm.admin.verify | false | Whether or not to verify the HSM admin port TLS. |
hsm.admin.bundle | adminBundle | The field in the secret that contains a PKCS #12 bundle for client PKI to the HSM admin port. |
hsm.admin.bundlePass | adminPassword | The field in the secret that contains the password for the PKCS #12 bundle from hsm.admin.bundle. |
hsm.admin.crl | adminCrl | The field in the secret that contains the certificate revocation list for verifying the admin connection. |
hsm.admin.ca | adminCa | The field in the secret that contains trusted CA certificates for verifying the admin connection. |
hsm.prod.port | 9100 | The TLS production port on the HSM. |
hsm.prod.verify | false | Whether or not to verify the HSM production port TLS. |
hsm.prod.bundle | prodBundle | The field in the secret that contains a PKCS #12 bundle for client PKI to the HSM production port. |
hsm.prod.bundlePass | prodPassword | The field in the secret that contains the password for the PKCS #12 bundle from hsm.prod.bundle. |
hsm.prod.crl | prodCrl | The field in the secret that contains the certificate revocation list for verifying the production connection. |
hsm.prod.ca | prodCa | The field in the secret that contains trusted CA certificates for verifying the production connection. |
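As an alternative to a long list of --set flags, the same parameters can be collected into a values file and passed to Helm with -f. The sketch below is an illustrative override file only: its structure mirrors the dotted parameter names in the table above, and any concrete values shown (for example, the secret name cryptohub-hsm-tls) are placeholders, not recommendations.

```yaml
# values-override.yaml: illustrative sketch only; substitute values for your environment
global:
  image:
    registry: docker-registry.internal.futurex.com
    namespace: futurex/cryptohub
    tag: 7.0.2.x
  pullPolicy: Always
  serviceType: NodePort            # NodePort, LoadBalancer, or Ingress

settings:
  balancerPortRange:
    start: 1024
    end: 1032
  tunnelPortRange:
    start: 1050
    end: 1059
  networkDrivesEnabled: false
  mountFuse: false
  crashReports: false

hsm:
  secret: cryptohub-hsm-tls        # hypothetical name of the secret holding the HSM TLS PKI
  address: 10.90.4.91
  webEnabled: false
  admin:
    port: 9009
    verify: false
    bundle: adminBundle
    bundlePass: adminPassword
    crl: adminCrl
    ca: adminCa
  prod:
    port: 9100
    verify: false
    bundle: prodBundle
    bundlePass: prodPassword
    crl: prodCrl
    ca: prodCa
```

Apply the file with helm install <release> <chart> -f values-override.yaml; any --set flags passed alongside it take precedence over values from the file.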
Understanding how to manage external access to applications is crucial for deploying on Google Cloud GKE. This section covers the following methods: NodePort, LoadBalancer, and Ingress.
NodePort services are the simplest way to gain external access to your cluster. This approach allocates a port from a configured range (30000-32767 by default) and makes your service reachable on that port on every node in the cluster.
- Stability: The port assigned to your service remains consistent across all nodes.
- Use Case: Ideal for development purposes or internal network access.
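For reference, a minimal NodePort Service manifest looks like the sketch below. This is a generic Kubernetes example with hypothetical names, not the exact object rendered by the chart.

```yaml
# Generic NodePort Service (hypothetical names, shown for illustration only)
apiVersion: v1
kind: Service
metadata:
  name: example-app
spec:
  type: NodePort
  selector:
    app: example-app          # matches the pods' labels
  ports:
    - port: 80                # cluster-internal service port
      targetPort: 8080        # container port on the pods
      nodePort: 30080         # must fall within the cluster's NodePort range (30000-32767 by default)
```

Once created, the service answers on port 30080 of every node's IP address, subject to firewall rules that allow the node port range.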
LoadBalancer services integrate seamlessly with cloud provider offerings, such as Google Cloud Load Balancing. They provision an external load balancer that automatically routes traffic to the NodePort across your nodes.
- Externally Accessible: Provides an external IP address that forwards traffic to the internal service.
- Cloud Integration: Leverages the cloud provider's load balancer features, such as automated health checks and failover.
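A LoadBalancer Service differs only in its type field; on GKE, creating one provisions a Google Cloud load balancer automatically. The manifest below is a generic sketch with hypothetical names.

```yaml
# Generic LoadBalancer Service (hypothetical names, shown for illustration only)
apiVersion: v1
kind: Service
metadata:
  name: example-app
spec:
  type: LoadBalancer
  selector:
    app: example-app
  ports:
    - port: 443               # port exposed on the external load balancer
      targetPort: 8443        # container port on the pods
```

When provisioning completes, the external IP address appears in the Service status and can be viewed with kubectl get service example-app.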
Ingress resources manage external access to services, typically for HTTP(S) traffic. They provide advanced routing options like SSL termination and name-based virtual hosting.
- Advanced Routing: Offers fine-grained control over traffic flow, such as URL-based routing.
- Portability: With cloud-agnostic configurations, ingress controllers facilitate a uniform approach across different environments.
- TLS Termination: Handles TLS termination, centralizing SSL management and offloading encryption tasks from backend services.
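The sketch below shows a generic Ingress that terminates TLS and routes traffic by URL path. The host, secret, and backend service names are hypothetical, and an ingress controller must be available in the cluster (GKE ships with a built-in ingress controller).

```yaml
# Generic Ingress with TLS termination and path-based routing
# (hypothetical host, secret, and service names)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  tls:
    - hosts:
        - app.example.com
      secretName: app-example-com-tls   # Kubernetes secret holding the TLS certificate and key
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: example-api       # backend Service receiving /api traffic
                port:
                  number: 80
```

Traffic for https://app.example.com/api is decrypted at the ingress layer and forwarded to the example-api Service over the cluster network.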
Each method serves different scenarios and requirements. NodePort is sufficient for simply exposing a service; LoadBalancer suits scenarios that need robust, automatic distribution of traffic; and Ingress is best for complex traffic routing and management. Consider your application's needs, security policies, and architecture when selecting how to expose services on Google Cloud GKE.