Datastores
To reduce operational complexity, SpiceDB leverages existing, popular systems for persisting data.
Each datastore implementation comes with various trade-offs.
- CockroachDB - Recommended for planet-scale, multi-region deployments
- Cloud Spanner - Recommended for Google Cloud deployments
- PostgreSQL - Recommended for single-region deployments
- MySQL - Not recommended; only use if you cannot use PostgreSQL
- memdb - Recommended for local development and integration testing against applications
Migrations
Before SpiceDB can use a datastore, and before running a new version of SpiceDB, you must execute all available migrations.
The only exception is the memdb datastore because it does not persist any data.
To migrate a datastore, run the following command with your desired values:
spicedb migrate head \
--datastore-engine $DESIRED_ENGINE \
--datastore-conn-uri $CONNECTION_STRING
For more information, refer to the Datastore Migration documentation.
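For example, migrating a CockroachDB datastore might look like the following sketch (the connection string is a placeholder matching the format documented below):
spicedb migrate head \
  --datastore-engine=cockroachdb \
  --datastore-conn-uri="postgres://user:password@localhost:26257/spicedb?sslmode=disable"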
CockroachDB
Usage Notes
SpiceDB's Watch API requires CockroachDB's Experimental Changefeed to be enabled.
- Recommended for multi-region deployments, with configurable region awareness
- Enables horizontal scalability by adding more SpiceDB and CockroachDB instances
- Resiliency to individual CockroachDB instance failures
- Queries and data are balanced across the CockroachDB cluster
- Setup and operational complexity of running CockroachDB
Developer Notes
- Code can be found here
- Documentation can be found here
- Implemented using pgx for a SQL driver and connection pooling
- Has a native changefeed
- Stores migration revisions using the same strategy as Alembic
Configuration
Required Parameters
Parameter | Description | Example |
---|---|---|
datastore-engine | the datastore engine | --datastore-engine=cockroachdb |
datastore-conn-uri | connection string used to connect to CRDB | --datastore-conn-uri="postgres://user:password@localhost:26257/spicedb?sslmode=disable" |
Optional Parameters
Parameter | Description | Example |
---|---|---|
datastore-max-tx-retries | Maximum number of times to retry a query before raising an error | --datastore-max-tx-retries=50 |
datastore-tx-overlap-strategy | The overlap strategy to prevent New Enemy on CRDB (see below) | --datastore-tx-overlap-strategy=static |
datastore-tx-overlap-key | The key to use for the overlap strategy (see below) | --datastore-tx-overlap-key="foo" |
datastore-conn-max-idletime | Maximum idle time for a connection before it is recycled | --datastore-conn-max-idletime=60s |
datastore-conn-max-lifetime | Maximum lifetime for a connection before it is recycled | --datastore-conn-max-lifetime=300s |
datastore-conn-max-open | Maximum number of concurrent connections to open | --datastore-conn-max-open=10 |
datastore-conn-min-open | Minimum number of concurrent connections to open | --datastore-conn-min-open=1 |
datastore-query-split-size | The (estimated) query size at which to split a query into multiple queries | --datastore-query-split-size=5kb |
datastore-gc-window | Sets the window outside of which overwritten relationships are no longer accessible | --datastore-gc-window=1s |
datastore-revision-fuzzing-duration | Sets a fuzzing window on all zookies/zedtokens | --datastore-revision-fuzzing-duration=50ms |
datastore-readonly | Places the datastore into readonly mode | --datastore-readonly=true |
datastore-follower-read-delay-duration | Amount of time to subtract from non-sync revision timestamps to ensure follower reads | --datastore-follower-read-delay-duration=4.8s |
datastore-relationship-integrity-enabled | Enables relationship integrity checks, only supported on CRDB | --datastore-relationship-integrity-enabled=false |
datastore-relationship-integrity-current-key-id | Current key id for relationship integrity checks | --datastore-relationship-integrity-current-key-id="foo" |
datastore-relationship-integrity-current-key-filename | Current key filename for relationship integrity checks | --datastore-relationship-integrity-current-key-filename="foo" |
datastore-relationship-integrity-expired-keys | Config for expired keys for relationship integrity checks | --datastore-relationship-integrity-expired-keys="foo" |
Overlap Strategy
In distributed systems, you can trade off consistency for performance.
CockroachDB datastore users that are willing to rely on more subtle guarantees to mitigate the New Enemy Problem can configure --datastore-tx-overlap-strategy.
The available strategies are:
Strategy | Description |
---|---|
static (default) | All writes overlap to guarantee safety at the cost of write throughput |
prefix | Only writes that contain objects with the same prefix overlap (e.g. tenant1/user and tenant2/user can be written concurrently) |
request | Only writes with the same io.spicedb.requestoverlapkey header overlap, enabling applications to decide on the fly which writes have causal dependencies. Writes without any header act the same as insecure. |
insecure | No writes overlap, providing the best write throughput, but possibly leaving you vulnerable to the New Enemy Problem |
For more information, refer to the CockroachDB datastore README or our blog post "The One Crucial Difference Between Spanner and CockroachDB".
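As a sketch, selecting a non-default strategy is a matter of passing the flag when serving; for example, the prefix strategy (the connection string and other flags are placeholders):
spicedb serve ...existing flags... \
  --datastore-engine=cockroachdb \
  --datastore-conn-uri=$CONNECTION_STRING \
  --datastore-tx-overlap-strategy=prefix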
Garbage Collection Window
As of February 2023, the default garbage collection window has changed to 1.25 hours for CockroachDB Serverless and 4 hours for CockroachDB Dedicated.
SpiceDB warns if the garbage collection window as configured in CockroachDB is smaller than the SpiceDB configuration.
If you need a longer time window for the Watch API or querying at exact snapshots, you can adjust the value in CockroachDB:
ALTER ZONE default CONFIGURE ZONE USING gc.ttlseconds = 90000;
Relationship Integrity
Relationship Integrity is a new, experimental SpiceDB feature that validates that data written into the supported backing datastores (currently only CockroachDB) was written by SpiceDB itself.
- What does relationship integrity ensure? Relationship integrity primarily ensures that all relationships written into the backing datastore were written via a trusted instance of SpiceDB or that the caller has access to the key(s) necessary to write those relationships. It ensures that if someone gains access to the underlying datastore, they cannot simply write new relationships of their own invention.
- What does relationship integrity not ensure? Since the relationship integrity feature signs each individual relationship, it does not ensure that relationships are removed by a trusted party. The schema is also currently unverified, so an untrusted party could change it as well. Support for schema changes will likely come in a future version.
Setting up relationship integrity
To run with relationship integrity, new flags must be given to SpiceDB:
spicedb serve ...existing flags... \
  --datastore-relationship-integrity-enabled \
  --datastore-relationship-integrity-current-key-id="somekeyid" \
  --datastore-relationship-integrity-current-key-filename="some.key"
Place the generated key contents (which must be usable as an HMAC key) in some.key.
Deployment Process
- Start with a clean datastore for SpiceDB. At this time, migrating an existing SpiceDB installation is not supported.
- Run the standard migrate command but with the relationship integrity flags included (a sketch follows this list).
- Run SpiceDB with the relationship integrity flags included.
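A sketch of the migration step, assuming a CockroachDB datastore and the placeholder key values from above; the final step uses the serve invocation shown earlier:
spicedb migrate head \
  --datastore-engine=cockroachdb \
  --datastore-conn-uri=$CONNECTION_STRING \
  --datastore-relationship-integrity-enabled \
  --datastore-relationship-integrity-current-key-id="somekeyid" \
  --datastore-relationship-integrity-current-key-filename="some.key"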
Cloud Spanner
Usage Notes
- Requires a Google Cloud Account with an active Cloud Spanner instance
- Takes advantage of Google's TrueTime: the Spanner driver assumes the database is linearizable and skips the transaction overlap strategy required by CockroachDB.
Developer Notes
- Code can be found here
- Documentation can be found here
- Starts a background GC worker to clean up old entries from the manually-generated changelog table
Configuration
- The Cloud Spanner docs outline how to set up an instance
- Authentication via service accounts: the service account that runs migrations must have Cloud Spanner Database Admin; SpiceDB (non-migrations) must have Cloud Spanner Database User (a sketch of granting these roles follows this list).
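A minimal sketch of granting those roles with the gcloud CLI, assuming hypothetical service account names and the placeholder project, instance, and database IDs used in the tables below; the role IDs correspond to the named roles:
# Grant the account that runs migrations Cloud Spanner Database Admin (names are hypothetical)
gcloud spanner databases add-iam-policy-binding database-id \
  --instance=instance-id \
  --member="serviceAccount:spicedb-migrate@project-id.iam.gserviceaccount.com" \
  --role="roles/spanner.databaseAdmin"
# Grant the account that serves SpiceDB Cloud Spanner Database User
gcloud spanner databases add-iam-policy-binding database-id \
  --instance=instance-id \
  --member="serviceAccount:spicedb@project-id.iam.gserviceaccount.com" \
  --role="roles/spanner.databaseUser"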
Required Parameters
Parameter | Description | Example |
---|---|---|
datastore-engine | the datastore engine | --datastore-engine=spanner |
datastore-conn-uri | the Cloud Spanner database identifier | --datastore-conn-uri="projects/project-id/instances/instance-id/databases/database-id" |
Optional Parameters
Parameter | Description | Example |
---|---|---|
datastore-spanner-credentials | JSON service account token (omit to use application default credentials) | --datastore-spanner-credentials=./spanner.json |
datastore-gc-interval | Amount of time to wait between garbage collection passes | --datastore-gc-interval=3m |
datastore-gc-window | Sets the window outside of which overwritten relationships are no longer accessible | --datastore-gc-window=1s |
datastore-revision-fuzzing-duration | Sets a fuzzing window on all zookies/zedtokens | --datastore-revision-fuzzing-duration=50ms |
datastore-readonly | Places the datastore into readonly mode | --datastore-readonly=true |
datastore-follower-read-delay-duration | Amount of time to subtract from non-sync revision timestamps to ensure stale reads | --datastore-follower-read-delay-duration=4.8s |
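As a sketch, pointing SpiceDB at a Spanner database combines the parameters above (the identifiers and the credentials path are placeholders):
spicedb serve ...existing flags... \
  --datastore-engine=spanner \
  --datastore-conn-uri="projects/project-id/instances/instance-id/databases/database-id" \
  --datastore-spanner-credentials=./spanner.json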
PostgreSQL
Usage Notes
- Recommended for single-region deployments
- Postgres 15 or newer is required for optimal performance
- Resiliency to failures only when PostgreSQL is operating with a follower and proper failover
- Carries setup and operational complexity of running PostgreSQL
- Does not rely on any non-standard PostgreSQL extensions
- Compatible with managed PostgreSQL services (e.g. AWS RDS)
- Can be scaled out on read workloads using read replicas
SpiceDB's Watch API requires PostgreSQL's Commit Timestamp tracking to be enabled.
This can be done by providing the --track_commit_timestamp=on flag, configuring postgresql.conf, or executing ALTER SYSTEM SET track_commit_timestamp = on; and restarting the instance.
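A minimal sketch of the ALTER SYSTEM approach using psql (connection details are placeholders); the setting takes effect only after the instance is restarted:
# Enable commit timestamp tracking, then restart PostgreSQL
psql "postgres://postgres:password@localhost:5432/spicedb" \
  -c "ALTER SYSTEM SET track_commit_timestamp = on;"
# After the restart, verify the setting
psql "postgres://postgres:password@localhost:5432/spicedb" \
  -c "SHOW track_commit_timestamp;"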
Developer Notes
- Code can be found here
- Documentation can be found here
- Implemented using pgx for a SQL driver and connection pooling
- Stores migration revisions using the same strategy as Alembic
- Implements its own MVCC model by storing its data with transaction IDs
Read Replicas
SpiceDB supports Postgres read replicas while retaining consistency guarantees. Typical use cases are:
- scale read workloads/offload reads from the primary
- deploy SpiceDB in other regions with primarily read workloads
Read replicas are typically configured with asynchronous replication, which introduces replication lag. That lag would normally undermine SpiceDB's ability to solve the New Enemy Problem, but SpiceDB addresses the challenge by checking whether a revision has been replicated to the target replica; if it is missing, SpiceDB falls back to the primary.
All API consistency options will leverage replicas, but the ones that benefit the most are those that involve some level of staleness, as staleness increases the odds that a revision has already replicated.
The minimize_latency, at_least_as_fresh, and at_exact_snapshot consistency modes have the highest chance of being redirected to a replica.
SpiceDB supports Postgres replicas behind a load balancer, individually listed replica hosts, or both.
When multiple URIs are provided, they are queried using a round-robin strategy.
Please note that the maximum number of replica URIs to list is 16.
Read replicas are configured with the --datastore-read-replica-* family of flags.
SpiceDB supports the PgBouncer connection pooler, which is part of SpiceDB's test suite.
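As a sketch, enabling a single read replica might look like the following (hostnames and credentials are placeholders):
spicedb serve ...existing flags... \
  --datastore-engine=postgres \
  --datastore-conn-uri="postgres://postgres:password@primary.example.com:5432/spicedb" \
  --datastore-read-replica-conn-uri="postgres://postgres:password@replica.example.com:5432/spicedb"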
Transaction IDs and MVCC
The Postgres implementation of SpiceDB's internal MVCC mechanism involves storing the internal transaction ID count associated with a given transaction
in the rows written in that transaction.
Because this counter is instance-specific, there are ways in which the data in the datastore can become desynced with that internal counter.
Two concrete examples are using pg_dump and pg_restore to transfer data between an old instance and a new instance, and setting up logical replication between a previously-existing instance and a newly-created instance.
If you encounter this, SpiceDB can behave as though there is no schema written, because the data (including the schema) is associated with a future transaction ID and therefore isn't "visible" to SpiceDB. You may see an error like this:
FAILED_PRECONDITION: object definition `some_definition` not found
If this occurs, you can use spicedb datastore repair
(along with your usual datastore flags) to roll the transaction counter of the database forward until SpiceDB is able to resume normal operation:
spicedb datastore repair --datastore-engine=postgres --datastore-conn-uri=$DATASTORE_URI
Configuration
Required Parameters
Parameter | Description | Example |
---|---|---|
datastore-engine | the datastore engine | --datastore-engine=postgres |
datastore-conn-uri | connection string used to connect to PostgreSQL | --datastore-conn-uri="postgres://postgres:password@localhost:5432/spicedb?sslmode=disable" |
Optional Parameters
Parameter | Description | Example |
---|---|---|
datastore-conn-max-idletime | Maximum idle time for a connection before it is recycled | --datastore-conn-max-idletime=60s |
datastore-conn-max-lifetime | Maximum lifetime for a connection before it is recycled | --datastore-conn-max-lifetime=300s |
datastore-conn-max-open | Maximum number of concurrent connections to open | --datastore-conn-max-open=10 |
datastore-conn-min-open | Minimum number of concurrent connections to open | --datastore-conn-min-open=1 |
datastore-query-split-size | The (estimated) query size at which to split a query into multiple queries | --datastore-query-split-size=5kb |
datastore-gc-window | Sets the window outside of which overwritten relationships are no longer accessible | --datastore-gc-window=1s |
datastore-revision-fuzzing-duration | Sets a fuzzing window on all zookies/zedtokens | --datastore-revision-fuzzing-duration=50ms |
datastore-readonly | Places the datastore into readonly mode | --datastore-readonly=true |
datastore-read-replica-conn-uri | Connection string used by datastores for read replicas; only supported for Postgres and MySQL | --datastore-read-replica-conn-uri="postgres://postgres:password@localhost:5432/spicedb" |
datastore-read-replica-credentials-provider-name | Retrieve datastore credentials dynamically using aws-iam | --datastore-read-replica-credentials-provider-name=aws-iam |
datastore-read-replica-conn-pool-read-healthcheck-interval | Amount of time between connection health checks in a read-only replica datastore's connection pool | --datastore-read-replica-conn-pool-read-healthcheck-interval=30s |
datastore-read-replica-conn-pool-read-max-idletime | Maximum amount of time a connection can idle in a read-only replica datastore's connection pool | --datastore-read-replica-conn-pool-read-max-idletime=30m |
datastore-read-replica-conn-pool-read-max-lifetime | Maximum amount of time a connection can live in a read-only replica datastore's connection pool | --datastore-read-replica-conn-pool-read-max-lifetime=30m |
datastore-read-replica-conn-pool-read-max-lifetime-jitter | Waits rand(0, jitter) after a connection has been open for its max lifetime before actually closing the connection to a read replica (default: 20% of max lifetime) | --datastore-read-replica-conn-pool-read-max-lifetime-jitter=6m |
datastore-read-replica-conn-pool-read-max-open | Maximum number of concurrent connections open in a read-only replica datastore's connection pool | --datastore-read-replica-conn-pool-read-max-open=20 |
datastore-read-replica-conn-pool-read-min-open | Minimum number of concurrent connections open in a read-only replica datastore's connection pool | --datastore-read-replica-conn-pool-read-min-open=20 |
MySQL
Usage Notes
- Recommended for single-region deployments
- Setup and operational complexity of running MySQL
- Does not rely on any non-standard MySQL extensions
- Compatible with managed MySQL services
- Can be scaled out on read workloads using read replicas
Developer Notes
- Code can be found here
- Documentation can be found here
- Implemented using Go-MySQL-Driver for a SQL driver
- Query optimizations are documented here
- Implements its own MVCC model by storing its data with transaction IDs
Read Replicas
Do not use a load balancer between SpiceDB and MySQL replicas because SpiceDB will not be able to maintain consistency guarantees.
SpiceDB supports MySQL read replicas while retaining consistency guarantees. Typical use cases are:
- scale read workloads/offload reads from the primary
- deploy SpiceDB in other regions with primarily read workloads
Read replicas are typically configured with asynchronous replication, which introduces replication lag. That lag would normally undermine SpiceDB's ability to solve the New Enemy Problem, but SpiceDB addresses the challenge by checking whether the revision has been replicated to the target replica; if it is missing, SpiceDB falls back to the primary. Compared to the Postgres implementation, MySQL support requires two roundtrips instead of one, which adds some extra latency overhead.
All API consistency options will leverage replicas, but the ones that benefit the most are those that involve some level of staleness, as staleness increases the odds that a revision has already replicated.
The minimize_latency, at_least_as_fresh, and at_exact_snapshot consistency modes have the highest chance of being redirected to a replica.
SpiceDB does not support MySQL replicas behind a load balancer: you may only list replica hosts individually. Failing to adhere to this would compromise consistency guarantees. When multiple host URIs are provided, they are queried using a round-robin strategy. Please note that the maximum number of MySQL replica host URIs to list is 16.
Read replicas are configured with the --datastore-read-replica-*
family of flags.
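As a sketch, a single MySQL replica might be configured as follows (hostnames and credentials are placeholders; the connection strings follow the format required in the Configuration section below):
spicedb serve ...existing flags... \
  --datastore-engine=mysql \
  --datastore-conn-uri="user:password@(primary.example.com:3306)/spicedb?parseTime=True" \
  --datastore-read-replica-conn-uri="user:password@(replica.example.com:3306)/spicedb?parseTime=True"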
Configuration
Required Parameters
Parameter | Description | Example |
---|---|---|
datastore-engine | the datastore engine | --datastore-engine=mysql |
datastore-conn-uri | connection string used to connect to MySQL | --datastore-conn-uri="user:password@(localhost:3306)/spicedb?parseTime=True" |
SpiceDB requires --datastore-conn-uri
to contain the query parameter parseTime=True
.
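For example, running migrations against MySQL might look like the following sketch (credentials and host are placeholders):
spicedb migrate head \
  --datastore-engine=mysql \
  --datastore-conn-uri="user:password@(localhost:3306)/spicedb?parseTime=True"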
Optional Parameters
Parameter | Description | Example |
---|---|---|
datastore-conn-max-idletime | Maximum idle time for a connection before it is recycled | --datastore-conn-max-idletime=60s |
datastore-conn-max-lifetime | Maximum lifetime for a connection before it is recycled | --datastore-conn-max-lifetime=300s |
datastore-conn-max-open | Maximum number of concurrent connections to open | --datastore-conn-max-open=10 |
datastore-query-split-size | The (estimated) query size at which to split a query into multiple queries | --datastore-query-split-size=5kb |
datastore-gc-window | Sets the window outside of which overwritten relationships are no longer accessible | --datastore-gc-window=1s |
datastore-revision-fuzzing-duration | Sets a fuzzing window on all zookies/zedtokens | --datastore-revision-fuzzing-duration=50ms |
datastore-mysql-table-prefix | Prefix to add to the name of all SpiceDB database tables | --datastore-mysql-table-prefix=spicedb |
datastore-readonly | Places the datastore into readonly mode | --datastore-readonly=true |
datastore-read-replica-conn-uri | Connection string used by datastores for read replicas; only supported for Postgres and MySQL | --datastore-read-replica-conn-uri="postgres://postgres:password@localhost:5432/spicedb" |
datastore-read-replica-credentials-provider-name | Retrieve datastore credentials dynamically using aws-iam | --datastore-read-replica-credentials-provider-name=aws-iam |
datastore-read-replica-conn-pool-read-healthcheck-interval | Amount of time between connection health checks in a read-only replica datastore's connection pool | --datastore-read-replica-conn-pool-read-healthcheck-interval=30s |
datastore-read-replica-conn-pool-read-max-idletime | Maximum amount of time a connection can idle in a read-only replica datastore's connection pool | --datastore-read-replica-conn-pool-read-max-idletime=30m |
datastore-read-replica-conn-pool-read-max-lifetime | Maximum amount of time a connection can live in a read-only replica datastore's connection pool | --datastore-read-replica-conn-pool-read-max-lifetime=30m |
datastore-read-replica-conn-pool-read-max-lifetime-jitter | Waits rand(0, jitter) after a connection has been open for its max lifetime before actually closing the connection to a read replica (default: 20% of max lifetime) | --datastore-read-replica-conn-pool-read-max-lifetime-jitter=6m |
datastore-read-replica-conn-pool-read-max-open | Maximum number of concurrent connections open in a read-only replica datastore's connection pool | --datastore-read-replica-conn-pool-read-max-open=20 |
datastore-read-replica-conn-pool-read-min-open | Minimum number of concurrent connections open in a read-only replica datastore's connection pool | --datastore-read-replica-conn-pool-read-min-open=20 |
memdb
Usage Notes
- Fully ephemeral; all data is lost when the process is terminated
- Intended for usage with SpiceDB itself and testing application integrations
- Cannot be run in a highly-available manner, as multiple instances will not share the same in-memory data
If you need an ephemeral datastore designed for validation or testing, see the test server system in Validating and Testing.
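For local development, a minimal sketch of serving with the in-memory datastore (the preshared key value is a placeholder):
spicedb serve \
  --grpc-preshared-key "somerandomkeyhere" \
  --datastore-engine memory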
Developer Notes
- Code can be found here
- Documentation can be found here
- Implements its own MVCC model by storing its data with transaction IDs
Configuration
Required Parameters
Parameter | Description | Example |
---|---|---|
datastore-engine | the datastore engine | --datastore-engine memory |
Optional Parameters
Parameter | Description | Example |
---|---|---|
datastore-revision-fuzzing-duration | Sets a fuzzing window on all zookies/zedtokens | --datastore-revision-fuzzing-duration=50ms |
datastore-gc-window | Sets the window outside of which overwritten relationships are no longer accessible | --datastore-gc-window=1s |
datastore-readonly | Places the datastore into readonly mode | --datastore-readonly=true |