At the heart of this project is a set of APIs for consuming database changes as reactive streams on top of Vert.x.

This manual uses PostgreSQL for the full walkthrough; the supported connectors and their caveats are listed on this page:

  • PostgreSQL logical replication (wal2json)

  • SQL Server CDC polling

  • MySQL binlog CDC

  • MariaDB CDC adapter (currently polling)

  • CockroachDB CDC adapter (currently polling)

  • Oracle CDC adapter (currently polling)

  • Db2 CDC adapter (currently polling)

  • Cassandra CDC adapter (DB-native)

  • ScyllaDB CDC adapter (DB-native)

  • MongoDB change stream adapter

  • Neo4j CDC adapter (DB-native)

If you are using Maven, add the dependency for the connector you need:

  • PostgreSQL:

    <dependency>
      <groupId>dev.henneberger</groupId>
      <artifactId>vertx-pg-logical-replication</artifactId>
      <version>0.3.0-SNAPSHOT</version>
    </dependency>

  • SQL Server:

    <dependency>
      <groupId>dev.henneberger</groupId>
      <artifactId>vertx-sqlserver-cdc-replication</artifactId>
      <version>0.3.0-SNAPSHOT</version>
    </dependency>
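
Dependencies for the other connectors follow the same pattern, using the module names from the table below as the artifactId. For example, MySQL (the groupId and version are assumed to match the modules above):

    <dependency>
      <groupId>dev.henneberger</groupId>
      <artifactId>vertx-mysql-cdc-replication</artifactId>
      <version>0.3.0-SNAPSHOT</version>
    </dependency>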

Supported Databases

Database    | Module                            | Mode          | Replication Class | Notes
------------|-----------------------------------|---------------|-------------------|------------------------------------------------------------------------
PostgreSQL  | vertx-pg-logical-replication      | LOG_STREAM    | True log-stream   | Full walkthrough in this manual (wal2json).
SQL Server  | vertx-sqlserver-cdc-replication   | POLLING       | Polling CDC       | Supported. Uses CDC polling.
MySQL       | vertx-mysql-cdc-replication       | LOG_STREAM    | True log-stream   | Supported. Uses binlog connector semantics.
MariaDB     | vertx-mariadb-cdc-replication     | POLLING       | Polling CDC       | Supported. Polling adapter today; planned migration to binlog stream mode.
CockroachDB | vertx-cockroachdb-cdc-replication | POLLING       | Polling CDC       | Supported. Polling adapter today; planned migration to changefeed stream mode.
Oracle      | vertx-oracle-cdc-replication      | POLLING       | Polling CDC       | Supported connector module.
Db2         | vertx-db2-cdc-replication         | POLLING       | Polling CDC       | Supported connector module.
Cassandra   | vertx-cassandra-cdc-replication   | DB_NATIVE_CDC | DB-native CDC     | Supported connector module.
ScyllaDB    | vertx-scylladb-cdc-replication    | DB_NATIVE_CDC | DB-native CDC     | Supported connector module.
MongoDB     | vertx-mongodb-cdc-replication     | LOG_STREAM    | True log-stream   | Supported connector module.
Neo4j       | vertx-neo4j-cdc-replication       | DB_NATIVE_CDC | DB-native CDC     | Supported connector module.

Replication Readiness Rules

For customer-facing statements, this project uses strict terminology:

  • True log-stream: consumes native transaction logs or event streams directly (no table-polling loop in the runtime path).

  • Polling CDC: reads change tables or source tables on an interval.

  • DB-native CDC: uses vendor-specific CDC mechanisms that are not protocol-equivalent to PostgreSQL/MySQL logical/binlog streams.

Minimum customer-readiness criteria for any adapter:

  1. Resume from persisted checkpoint after restart is verified.

  2. Duplicate boundary behavior is documented and tested.

  3. Preflight checks fail fast on unsafe source configuration.

  4. Mode classification in docs matches runtime behavior.

In the beginning there is a replication stream

Each connector implements the same stream lifecycle:

  • preflight() to validate source prerequisites

  • start() to begin consuming changes

  • subscribe(…) or startAndSubscribe(…) to receive events

  • close() to stop

You can treat each connector as a database-specific implementation of the shared ReplicationStream contract.
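
The sketch below is conceptual, not the real interface from vertx-replication-core: the method names come from the list above, while the generic parameter and the exact signatures (including the PreflightReport result type) are assumptions modeled on the PostgreSQL walkthrough later in this manual.

import io.vertx.core.Future;

import java.util.function.Consumer;
import java.util.function.Function;

// Conceptual sketch of the shared contract; names follow this manual,
// signatures are illustrative and may differ from the real interface.
public interface ReplicationStream<E> {
  Future<PreflightReport> preflight();      // validate source prerequisites
  Future<Void> start();                     // begin consuming changes
  SubscriptionRegistration subscribe(
      Function<E, Future<Void>> handler,    // per-event handler, acknowledges via Future
      Consumer<Throwable> errorHandler);    // subscription error handler
  SubscriptionRegistration startAndSubscribe(
      Function<E, Future<Void>> handler,
      Consumer<Throwable> errorHandler);
  Future<Void> close();                     // stop the stream
}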

Use this sequence in production:

  1. Build options explicitly.

  2. Run preflight() and fail fast on errors.

  3. Use startAndSubscribe(…) for deterministic startup behavior.

  4. Persist checkpoints (LsnStore) outside development environments.

  5. Register state handlers and metrics listeners for observability.

Checkpoint storage

vertx-replication-core includes several LsnStore implementations so you can choose storage based on environment:

  • NoopLsnStore for local development only.

  • InMemoryLsnStore for tests.

  • LocalMapLsnStore for Vert.x process-local shared data.

  • FileLsnStore for durable JSON-file persistence.

  • PrefixedLsnStore to namespace keys over any existing store.
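
A sketch of picking a store per environment; the class names come from the list above, but the constructor arguments shown here are assumptions rather than confirmed signatures:

// Durable, namespaced checkpoints in production; in-memory elsewhere.
// FileLsnStore and PrefixedLsnStore arguments are illustrative assumptions.
boolean production = "prod".equals(System.getenv("APP_ENV"));
LsnStore store = production
  ? new PrefixedLsnStore(new FileLsnStore(vertx, "/var/lib/app/lsn.json"), "orders-service.")
  : new InMemoryLsnStore();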

PostgreSQL

The PostgreSQL connector consumes logical replication messages produced by the wal2json plugin and maps them to PostgresChangeEvent.

Requirements

  • PostgreSQL logical replication enabled.

  • wal2json plugin installed and available to the server.

  • Replication role with privileges to create/use the slot.

Typical PostgreSQL settings:

wal_level = logical
max_replication_slots = 1   # at least 1
max_wal_senders = 1         # at least 1
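
If you prefer to create the replication slot up front rather than letting the connector manage it (this manual does not state which approach the connector takes), the standard PostgreSQL call is:

-- Create a wal2json logical slot named to match the options below.
SELECT * FROM pg_create_logical_replication_slot('vertx_app_slot', 'wal2json');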

Creating options

PostgresReplicationOptions options = new PostgresReplicationOptions()
  .setHost("localhost")
  .setPort(5432)
  .setDatabase("app")
  .setUser("app")
  .setPasswordEnv("PG_REPLICATION_PASSWORD")
  .setSlotName("vertx_app_slot")
  .setPlugin("wal2json")
  .setPreflightEnabled(true);

Starting and subscribing

Vertx vertx = Vertx.vertx();
PostgresLogicalReplicationStream stream = new PostgresLogicalReplicationStream(vertx, options);

stream.preflight()
  .compose(report -> report.ok()
    ? Future.succeededFuture()
    : Future.failedFuture("PostgreSQL preflight failed"))
  .onSuccess(v -> {
    // Receive INSERT and UPDATE events for public.orders only.
    SubscriptionRegistration registration = stream.startAndSubscribe(
      PostgresChangeFilter.tables("public.orders")
        .operations(PostgresChangeEvent.Operation.INSERT, PostgresChangeEvent.Operation.UPDATE),
      event -> {
        System.out.println(event.getNewData());
        return Future.succeededFuture(); // signal the event was handled
      },
      Throwable::printStackTrace         // subscription error handler
    );

    registration.started().onFailure(Throwable::printStackTrace);
  })
  .onFailure(Throwable::printStackTrace);
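
When shutting down, close the stream so the replication connection is released. A minimal sketch, assuming close() returns a Future like the other lifecycle methods (the manual lists close() but does not show its signature):

// Stop consuming and release the replication connection.
stream.close()
  .onSuccess(v -> System.out.println("replication stream closed"))
  .onFailure(Throwable::printStackTrace);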

Operational notes

  • Keep slot lag under control and alert before disk pressure builds up (a monitoring query follows this list).

  • Persist LSN checkpoints in production with a non-NoopLsnStore implementation, for example FileLsnStore optionally wrapped by PrefixedLsnStore.

  • Prefer passwordEnv over inline secrets.

  • Enable retry policy explicitly if your deployment requires reconnect behavior.
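
For the slot-lag note above, a standard PostgreSQL catalog query (not specific to this project) reports how far the slot is behind:

-- Bytes of WAL retained for the slot; alert well before disk pressure.
SELECT slot_name,
       pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn) AS lag_bytes
FROM pg_replication_slots
WHERE slot_name = 'vertx_app_slot';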

Connector-specific caveats

Non-PostgreSQL connectors have database-specific setup differences. Minimum checks by connector:

  • SQL Server: ensure SQL Server Agent is running, enable CDC at the database level with sp_cdc_enable_db, and create a table-level capture instance (see the T-SQL sketch after this list).

  • MySQL and MariaDB: enable binlog, use ROW format, and verify connector/auth compatibility.

  • CockroachDB: verify CDC/changefeed permissions and sink/source compatibility used by the adapter.

  • Oracle: verify required CDC/log mining prerequisites and connector principal privileges.

  • Db2: verify CDC/change data capture setup and polling/query privileges.

  • Cassandra and ScyllaDB: verify CDC is enabled at table/keyspace level and retention windows match consumer lag.

  • MongoDB: run as replica set/sharded deployment with change streams enabled and proper read permissions.

  • Neo4j: verify change data capture/event stream prerequisites and driver authentication/permissions.
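
For the SQL Server item above, the database- and table-level steps are standard T-SQL; the database name app and table dbo.orders here are examples only:

-- Enable CDC for the database (SQL Server Agent must be running).
USE app;
EXEC sys.sp_cdc_enable_db;

-- Create a capture instance for one table.
EXEC sys.sp_cdc_enable_table
  @source_schema = N'dbo',
  @source_name   = N'orders',
  @role_name     = NULL;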