Product Update

Warehouse Destinations: Send Webhooks Directly to S3, R2, GCS, and Azure Blob

Hookbase now supports warehouse destinations. Route webhook events directly to Amazon S3, Cloudflare R2, Google Cloud Storage, or Azure Blob Storage as structured JSONL or JSON files with automatic batching, field mapping, and encrypted credentials.

Hookbase Team
February 20, 2026
7 min read

Beyond HTTP: Webhooks as a Data Pipeline

Webhook relay platforms typically forward payloads to HTTP endpoints. That works well for triggering workflows, syncing state, and integrating services in real time. But many teams also need their webhook data to land in a data warehouse or object store for analytics, compliance archival, or batch processing.

Until now, that meant building your own pipeline: receive the webhook, write it to a queue, transform the payload, and upload it to your storage layer. That is a lot of glue code for something that should be a configuration change.

Today we are releasing Warehouse Destinations -- a new destination type that routes webhook events directly to object storage. No HTTP endpoint required. No custom code. Just point Hookbase at a bucket and your events start flowing.

Supported Storage Providers

Hookbase supports four object storage providers at launch:

| Provider | Auth Method | Use Case |
|----------|-------------|----------|
| Amazon S3 | Access Key + Secret Key (Signature V4) | AWS-native data lakes, Athena queries |
| Cloudflare R2 | Native binding (no credentials) | Zero-egress archival, S3-compatible |
| Google Cloud Storage | Service Account Key (JWT) | BigQuery external tables, GCP pipelines |
| Azure Blob Storage | Shared Key (HMAC-SHA256) | Azure Data Lake, Synapse Analytics |

Each provider uses its native authentication mechanism. There are no third-party SDKs involved -- all uploads use the provider's REST API with pure Web Crypto for signing. This keeps the Worker bundle small and avoids external dependencies.
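
For a sense of what "pure Web Crypto signing" involves, request signing for S3 centers on the Signature V4 key-derivation chain: a series of HMAC-SHA256 operations over the date, region, and service. A minimal sketch using Node's crypto module for readability (the Worker would use Web Crypto; the function names here are illustrative, not from the Hookbase codebase):

```typescript
import { createHmac } from "node:crypto";

// One HMAC-SHA256 step in the Signature V4 derivation chain.
function hmac(key: Buffer | string, data: string): Buffer {
  return createHmac("sha256", key).update(data).digest();
}

// Derive the AWS Signature V4 signing key for a given day, region,
// and service. The derived key then signs the canonical request string.
function sigV4SigningKey(
  secretKey: string,
  date: string, // YYYYMMDD
  region: string,
  service: string
): Buffer {
  const kDate = hmac("AWS4" + secretKey, date);
  const kRegion = hmac(kDate, region);
  const kService = hmac(kRegion, service);
  return hmac(kService, "aws4_request");
}
```

The full canonical-request construction (sorted headers, payload hash, and so on) is specified in the AWS Signature V4 documentation; only the key derivation is shown here.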

How It Works

Warehouse destinations use the same route system as HTTP destinations. You create a source, create a warehouse destination, and connect them with a route. The only difference is what happens at delivery time.

1. Create a Warehouse Destination

In the dashboard, click New Destination and select a storage provider instead of HTTP Endpoint. Each provider has its own configuration form:

Amazon S3:

  • Bucket name, AWS region
  • Access Key ID and Secret Access Key
  • Optional path prefix (e.g., webhooks/stripe/)
  • File format (JSONL or JSON)
  • Partition strategy (by date, hour, or source)

Cloudflare R2:

  • Bucket name and optional prefix
  • No credentials needed -- uses your Worker's native R2 binding

Google Cloud Storage:

  • Bucket name and GCP project ID
  • Service account key (JSON)
  • Prefix, file format, and partitioning

Azure Blob Storage:

  • Storage account name and account key
  • Container name
  • Prefix, file format, and partitioning

2. Connect with a Route

Create a route from any source to your warehouse destination. Filters and transforms work exactly as they do with HTTP destinations. You can route the same source to both an HTTP endpoint and a warehouse destination simultaneously.

3. Events Are Batched and Uploaded

When events arrive, Hookbase queues them for batch delivery. The warehouse queue accumulates up to 100 events or waits 30 seconds (whichever comes first), then uploads a single file to your bucket.

The file path follows a predictable pattern:

{prefix}/{partition}/{timestamp}-{destination_id}.jsonl

For example, with daily partitioning and a webhooks/stripe/ prefix:

webhooks/stripe/2026-02-21/1771684200000-a1b2c3d4.jsonl

With hourly partitioning:

webhooks/stripe/2026-02-21/14/1771684200000-a1b2c3d4.jsonl

With source-based partitioning:

webhooks/stripe/2026-02-21/my-stripe-source/1771684200000-a1b2c3d4.jsonl
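
The key construction is simple enough to express directly. A sketch of the pattern above (the function name is illustrative, not from the Hookbase codebase):

```typescript
type PartitionStrategy = "date" | "hour" | "source";

// Build an object key following
// {prefix}/{partition}/{timestamp}-{destination_id}.jsonl
function buildObjectKey(
  prefix: string,        // e.g. "webhooks/stripe/"
  strategy: PartitionStrategy,
  timestampMs: number,
  destinationId: string,
  sourceSlug?: string    // required for "source" partitioning
): string {
  const d = new Date(timestampMs);
  const day = d.toISOString().slice(0, 10);               // YYYY-MM-DD (UTC)
  const hour = String(d.getUTCHours()).padStart(2, "0");  // 00..23
  const partition =
    strategy === "hour" ? `${day}/${hour}` :
    strategy === "source" ? `${day}/${sourceSlug}` :
    day;
  const base = prefix.endsWith("/") ? prefix.slice(0, -1) : prefix;
  return `${base}/${partition}/${timestampMs}-${destinationId}.jsonl`;
}
```

Because the timestamp leads the filename within each partition, objects list in delivery order, which makes incremental loads straightforward.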

File Formats

JSONL (Recommended)

Each line is a self-contained JSON object. This format works well with tools like AWS Athena, BigQuery external tables, and streaming processors:

{"event_id":"evt_abc123","received_at":"2026-02-21T14:30:00Z","payload":{"type":"payment_intent.succeeded","data":{"amount":2500}}}
{"event_id":"evt_def456","received_at":"2026-02-21T14:30:01Z","payload":{"type":"customer.created","data":{"email":"[email protected]"}}}

JSON Array

A single JSON array containing all events in the batch. Useful for systems that expect a complete document:

[
  {
    "event_id": "evt_abc123",
    "received_at": "2026-02-21T14:30:00Z",
    "payload": { "type": "payment_intent.succeeded" }
  },
  {
    "event_id": "evt_def456",
    "received_at": "2026-02-21T14:30:01Z",
    "payload": { "type": "customer.created" }
  }
]
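
Producing either format from a batch is a one-liner each. A sketch, with the event shape following the examples above:

```typescript
interface WarehouseEvent {
  event_id: string;
  received_at: string;
  payload: Record<string, unknown>;
}

// JSONL: one JSON document per line -- append-friendly and streamable.
function toJsonl(events: WarehouseEvent[]): string {
  return events.map((e) => JSON.stringify(e)).join("\n");
}

// JSON: a single array document containing the whole batch.
function toJsonArray(events: WarehouseEvent[]): string {
  return JSON.stringify(events, null, 2);
}
```

JSONL's per-line independence is what lets query engines split a file across workers without parsing the whole document first, which is why it is the recommended default.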

Visual Field Mapper

Webhook payloads are deeply nested and inconsistent across providers. If you need your warehouse data to follow a clean, flat schema, use the Visual Field Mapper.

The field mapper lets you define a mapping from JSONPath expressions in the incoming payload to named columns in the output:

| Source Path | Target Column | Type |
|-------------|---------------|------|
| $.payload.data.amount | amount | number |
| $.payload.data.currency | currency | string |
| $.payload.type | event_type | string |
| $.payload.created | created_at | timestamp |

When field mapping is configured, each event in the output file uses the mapped schema instead of the raw payload:

{"_event_id":"evt_abc123","_received_at":"2026-02-21T14:30:00Z","amount":2500,"currency":"usd","event_type":"payment_intent.succeeded","created_at":"2026-02-21T14:29:58.000Z"}

Metadata fields _event_id and _received_at are always included for traceability.

Supported column types: string, number, boolean, timestamp, json. You can also set default values for fields that may be missing from certain event types.
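
As a sketch, applying such a mapping amounts to resolving each source path against the event and emitting a flat record. The helper below handles only simple dotted paths rather than full JSONPath, and every name in it is hypothetical:

```typescript
interface FieldMapping {
  sourcePath: string;   // e.g. "$.payload.data.amount"
  targetColumn: string;
  default?: unknown;    // used when the path is missing from the event
}

// Resolve a simple "$.a.b.c" path against a nested object.
function resolvePath(obj: unknown, path: string): unknown {
  const segments = path.replace(/^\$\.?/, "").split(".").filter(Boolean);
  let current: any = obj;
  for (const seg of segments) {
    if (current == null || typeof current !== "object") return undefined;
    current = current[seg];
  }
  return current;
}

function applyMapping(
  event: { event_id: string; received_at: string; payload: unknown },
  mappings: FieldMapping[]
): Record<string, unknown> {
  const row: Record<string, unknown> = {
    _event_id: event.event_id,       // metadata fields always included
    _received_at: event.received_at, // for traceability
  };
  for (const m of mappings) {
    const value = resolvePath(event, m.sourcePath);
    row[m.targetColumn] = value === undefined ? m.default : value;
  }
  return row;
}
```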

Credential Security

Warehouse destinations require storage credentials (access keys, service account keys, account keys). Hookbase encrypts all sensitive credential fields at rest using AES-256-GCM with HKDF-derived, organization-scoped encryption keys.
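
The scheme above (an HKDF-derived, organization-scoped key feeding AES-256-GCM) can be sketched as follows. This uses Node's synchronous crypto module for readability, whereas the Worker itself would use Web Crypto; the salt and info conventions shown are assumptions, not Hookbase's actual parameters:

```typescript
import { createCipheriv, createDecipheriv, hkdfSync, randomBytes } from "node:crypto";

// Derive an organization-scoped 256-bit key from a master secret via HKDF.
// (The "org:" info label is an assumed convention for illustration.)
function deriveOrgKey(masterSecret: Buffer, orgId: string, salt: Buffer): Buffer {
  return Buffer.from(hkdfSync("sha256", masterSecret, salt, `org:${orgId}`, 32));
}

// Encrypt one credential field with AES-256-GCM under the derived key.
function encryptField(key: Buffer, plaintext: string): { iv: Buffer; tag: Buffer; data: Buffer } {
  const iv = randomBytes(12); // 96-bit GCM nonce, unique per encryption
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const data = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv, tag: cipher.getAuthTag(), data };
}

function decryptField(key: Buffer, box: { iv: Buffer; tag: Buffer; data: Buffer }): string {
  const decipher = createDecipheriv("aes-256-gcm", key, box.iv);
  decipher.setAuthTag(box.tag); // GCM authenticates as well as encrypts
  return Buffer.concat([decipher.update(box.data), decipher.final()]).toString("utf8");
}
```

Scoping keys per organization means a compromised key for one tenant cannot decrypt another tenant's credentials, and GCM's authentication tag rejects any tampered ciphertext outright.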

When you view a destination's configuration in the dashboard or API, credential values are redacted:

{
  "bucket": "my-data-lake",
  "region": "us-east-1",
  "accessKeyId": "AKIAIOSFODNN7EXAMPLE",
  "secretAccessKey": "••••EKEY"
}

Credentials are only decrypted at delivery time inside the Worker. They are never logged, never included in API responses, and never stored in plain text.

When updating a destination, sending back the redacted value preserves the existing encrypted credential. You only need to provide a new value if you are rotating keys.
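
That update behavior amounts to a merge that treats a still-masked value as "keep the stored secret". A hypothetical sketch (the redaction marker is the bullet character the dashboard uses; the helper name is invented):

```typescript
const REDACTION_CHAR = "\u2022"; // "•" -- how the dashboard masks secrets

// Merge an incoming config update with the stored credential: a value
// that still carries the redaction mask means "not rotated, keep it".
function mergeCredential(stored: string, incoming: string | undefined): string {
  if (incoming === undefined || incoming.includes(REDACTION_CHAR)) {
    return stored;    // unchanged -- preserve the existing encrypted value
  }
  return incoming;    // new plaintext provided -- rotate the credential
}
```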

Testing a Warehouse Destination

Every warehouse destination supports a Test Connection action: Hookbase uploads a small test file to your bucket and verifies that your credentials and permissions are correct.

The test endpoint returns the file key, size, and event count so you can verify the file landed where expected.

curl -X POST https://api.hookbase.app/api/organizations/{orgId}/destinations/{destId}/test \
  -H "Authorization: Bearer {token}" \
  -H "Content-Type: application/json"

Response:

{
  "success": true,
  "result": {
    "key": "hookbase/2026-02-21/test-1740150000000.jsonl",
    "size": 142,
    "count": 1
  }
}

Plan Availability

Warehouse destinations are available on Pro and Business plans. Free and Starter plans can use HTTP destinations.

Getting Started

  1. Navigate to Destinations in your dashboard
  2. Click New Destination
  3. Select your storage provider (S3, R2, GCS, or Azure Blob)
  4. Enter your bucket configuration and credentials
  5. Click Test Connection to verify
  6. Create a route from any source to your new destination

Your webhook events will start flowing to your bucket on the next delivery cycle.

What's Next

Warehouse destinations are the foundation for deeper analytics integrations. We are exploring direct connectors for Snowflake, BigQuery, Databricks, and ClickHouse that would skip the object storage layer entirely and insert rows directly into your warehouse tables.

If you have a specific integration you would like to see, let us know at [email protected].

Tags: product-update, warehouse, s3, r2, gcs, azure, data-lake, webhooks
