Introducing the Hookbase Kubernetes Operator
Manage webhook sources, destinations, routes, and tunnels as native Kubernetes CRDs. GitOps-ready with Helm, sidecar injection, and drift detection.
Webhook Infrastructure as Code
If you have ever manually configured webhook endpoints through a dashboard, copied signing secrets between environments, or wondered why your staging webhook setup does not match production, you already know the problem. Webhook infrastructure does not belong in a UI when everything else is defined in code.
Today we are releasing the Hookbase Kubernetes Operator -- a set of 10 Custom Resource Definitions that lets you manage your entire webhook infrastructure as declarative YAML, right alongside your application manifests.
The operator introduces CRDs for every Hookbase resource: WebhookSource, WebhookDestination, WebhookRoute, WebhookTunnel, WebhookTransform, WebhookFilter, WebhookSchema, HookbaseCronJob, HookbaseAPIKey, and a cluster-scoped HookbaseConfig.
Here is what a complete webhook pipeline looks like in YAML:
apiVersion: hookbase.io/v1alpha1
kind: WebhookSource
metadata:
  name: github-webhooks
spec:
  name: GitHub Webhooks
  slug: github-webhooks
  provider: github
  verifySignature: true
  signingSecretRef:
    name: github-webhook-secret
    key: secret
  rateLimitPerMinute: 1000
---
apiVersion: hookbase.io/v1alpha1
kind: WebhookDestination
metadata:
  name: internal-api
spec:
  name: Internal API
  slug: internal-api
  url: "https://api.internal.example.com/webhooks"
  authType: bearer
  authSecretRef:
    name: api-auth-secret
    key: token
  timeoutMs: 10000
  retryCount: 3
---
apiVersion: hookbase.io/v1alpha1
kind: WebhookRoute
metadata:
  name: github-to-api
spec:
  name: GitHub to Internal API
  sourceRef: github-webhooks
  destinationRef: internal-api
  filterConditions:
    - field: headers.x-github-event
      operator: in
      value: "push,pull_request,release"
  circuitBreaker:
    failureThreshold: 5
    cooldownSeconds: 60
Notice how sourceRef and destinationRef use Kubernetes resource names, not opaque API IDs. The operator resolves these references during reconciliation. If a referenced resource does not exist yet, it waits. If it gets deleted, you will see it in the status conditions.
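To see those status conditions for yourself, standard kubectl works; the exact condition type names depend on the operator, so inspect the full status rather than assuming specific types:

```shell
# Dump the full object, including the status block the operator writes.
kubectl get webhookroute github-to-api -o yaml

# Or pull just the condition types and statuses with jsonpath.
kubectl get webhookroute github-to-api \
  -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
```

These commands require a running cluster with the operator installed.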
Because it is all YAML, your webhook configuration lives in Git. ArgoCD, Flux, or any GitOps tool will deploy and sync it. Promote from staging to production with a PR. Roll back by reverting a commit.
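As a sketch of that GitOps flow, an Argo CD Application can point at a directory of these manifests; the repository URL and path below are placeholders, not real Hookbase repositories:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: webhook-infra
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/webhook-infra.git  # placeholder repo
    targetRevision: main
    path: environments/production                          # placeholder path
  destination:
    server: https://kubernetes.default.svc
    namespace: webhooks
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual changes in-cluster
```

With selfHeal enabled, Argo CD and the operator's own drift detection work in the same direction: Git stays the source of truth.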
Automatic Sidecar Injection for Tunnels
The WebhookTunnel CRD is where things get interesting. Define a tunnel and the operator will patch your target Deployment with a lightweight Go-based tunnel agent as a sidecar container:
apiVersion: hookbase.io/v1alpha1
kind: WebhookTunnel
metadata:
  name: dev-tunnel
spec:
  name: Dev Tunnel
  targetPort: 8080
  targetService: my-app
  sidecarInjection:
    deploymentRef: my-app
    resources:
      requests:
        cpu: 10m
        memory: 16Mi
      limits:
        cpu: 100m
        memory: 64Mi
The agent binary is roughly 5MB. It establishes a WebSocket connection to Hookbase and forwards incoming webhooks to your service on the specified port. You get a public ingestion URL without exposing anything through your cluster's ingress. Useful for development clusters, preview environments, and internal services that need to receive webhooks without a public endpoint.
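After injection, the patched Deployment's pod template gains an extra container roughly like the following; the container name and image are illustrative, not the operator's actual values:

```yaml
# Excerpt of the Deployment's pod template after sidecar injection.
spec:
  template:
    spec:
      containers:
        - name: my-app                        # your existing container, unchanged
          image: my-app:latest                # placeholder image
        - name: hookbase-tunnel-agent         # illustrative sidecar name
          image: hookbase/tunnel-agent:latest # illustrative image
          resources:
            requests:
              cpu: 10m
              memory: 16Mi
            limits:
              cpu: 100m
              memory: 64Mi
```

The resource requests and limits come straight from the WebhookTunnel spec above, so the sidecar's footprint is under your control.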
API Key Isolation and Ingress Controller Mode
Managing API keys across namespaces is a common pain point. The HookbaseAPIKey CRD provisions scoped keys through the Hookbase API and stores them as native Kubernetes Secrets:
apiVersion: hookbase.io/v1alpha1
kind: HookbaseAPIKey
metadata:
  name: team-key
spec:
  name: Team API Key
  scopes:
    - sources:read
    - sources:write
    - events:read
  expiresInDays: 90
  secretRef:
    name: hookbase-team-key
    key: apiKey
After reconciliation, a Secret named hookbase-team-key appears in the same namespace. Other CRDs in that namespace can reference it. Each team or namespace gets its own scoped credentials, managed declaratively.
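Because the provisioned key lands in a plain Kubernetes Secret, workloads in the namespace can consume it the usual way, for example as an environment variable:

```yaml
# Excerpt from a Deployment's container spec: read the provisioned
# API key from the Secret created by the HookbaseAPIKey controller.
env:
  - name: HOOKBASE_API_KEY
    valueFrom:
      secretKeyRef:
        name: hookbase-team-key  # Secret named in spec.secretRef
        key: apiKey
```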
The operator also ships with an ingress controller mode. Annotate a standard Kubernetes Ingress resource and the operator creates the corresponding Hookbase source, destination, and route automatically:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-webhooks
  annotations:
    hookbase.io/source-provider: "stripe"
    hookbase.io/source-verify-signature: "true"
    hookbase.io/source-signing-secret-ref: "stripe-secret:whsec"
spec:
  ingressClassName: hookbase
  rules:
    - host: webhooks.example.com
      http:
        paths:
          - path: /stripe
            pathType: Prefix
            backend:
              service:
                name: payment-service
                port:
                  number: 8080
This is a good option if you want webhook routing without learning new CRDs -- just annotate the Ingress resources you already have.
Operational Details
The operator runs 11 controllers and reconciles on a configurable interval (default 5 minutes) to detect and correct drift. If someone modifies a source through the dashboard, the next reconciliation cycle brings it back in line with the YAML definition.
Admission webhooks enforce constraints at apply time: slug immutability (changing a slug after creation would break ingestion URLs), CIDR validation on allowlists, and mutual exclusion guards on conflicting configurations.
Prometheus metrics are built in. You get reconcile counts and latency histograms per controller, API call tracking, and tunnel connection status. Enable the ServiceMonitor in the Helm values and your existing Prometheus stack picks it up.
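Enabling the ServiceMonitor might look like this in the Helm values; the key names here are assumptions, so confirm them against the chart's values.yaml:

```yaml
# Hypothetical Helm values sketch; verify key names in the chart.
metrics:
  enabled: true
  serviceMonitor:
    enabled: true
    interval: 30s
```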
Getting Started
Clone the operator repository and install with Helm:
git clone https://github.com/HookbaseApp/hookbase-operator.git
cd hookbase-operator
Create a bootstrap API key Secret before installing:
kubectl create namespace hookbase-system
kubectl create secret generic hookbase-bootstrap-key \
  --namespace hookbase-system \
  --from-literal=apiKey=whr_your_api_key_here
Install the operator from the local chart:
helm install hookbase-operator ./chart \
  --namespace hookbase-system \
  --set hookbase.apiKeySecretRef.name=hookbase-bootstrap-key \
  --set hookbase.apiKeySecretRef.key=apiKey
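Once the release is installed, a couple of standard kubectl checks confirm the operator is running and its CRDs are registered:

```shell
# Confirm the operator pod is up.
kubectl get pods -n hookbase-system

# Confirm the Hookbase CRDs were installed.
kubectl get crds | grep hookbase.io
```

Both commands assume the hookbase-system namespace used above.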
Then apply your webhook resources:
kubectl apply -f webhook-source.yaml
kubectl apply -f webhook-destination.yaml
kubectl apply -f webhook-route.yaml
Check the status:
kubectl get webhooksources
kubectl get webhookroutes
kubectl describe webhookroute github-to-api
The full CRD reference, sample manifests, and Helm values are in the operator documentation.
What is Next
This is version 0.1.0. We are already working on status-based health checks that surface delivery success rates directly in kubectl get, multi-cluster support for routing webhooks across cluster boundaries, and a WebhookReplay CRD for triggering event replays from your manifests.
If you run webhook infrastructure in Kubernetes, we would like to hear how you use it. Open an issue, or reach out to us at [email protected].