Data Flows

FeatBit's architecture relies on a set of well-defined data flows between its interconnected services. These flows are essential for the platform's core functions: synchronizing feature flag configurations, propagating changes in real-time, managing user data for targeting, and collecting analytics for insights and experimentation. Understanding these flows provides insight into how FeatBit operates internally.

Data sync flow

Purpose

To ensure that connected SDKs have the correct set of feature flags, segments, and (for client-side SDKs) evaluated flag results necessary for their operation.

Trigger

An SDK (client-side or server-side) establishes a connection or reconnects to the Evaluation Server (ELS).

Process

  1. Upon connection, the SDK sends a data synchronization request to the ELS. This request includes a timestamp indicating the last time the SDK received updates.
  2. The ELS uses this timestamp to query its Caching Layer or the Primary Database for all feature flags and segments that have been created or modified since that time.
  3. For client-side SDKs only: The ELS performs an evaluation of the relevant feature flags based on the user context provided during connection.
  4. The ELS constructs a response containing the necessary data and sends it back to the requesting SDK.
    • Server Side SDKs: Receive the raw feature flag configurations and segment definitions.
    • Client Side SDKs: Receive the pre-evaluated feature flag results relevant to their provided user context.
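The timestamp filter in step 2 and the server-side/client-side split in step 4 can be sketched as follows. This is a simplified illustration, not FeatBit's actual wire format: the field names (`updatedAt`, `eventType`, `variation`) and the in-memory flag list are assumptions made for the example, and real evaluation involves targeting rules rather than a single `enabled` field.

```python
# Hypothetical catalogue of flags with last-modified timestamps (ms since epoch).
FLAGS = [
    {"key": "new-checkout", "updatedAt": 1_700_000_000_000, "enabled": True},
    {"key": "dark-mode",    "updatedAt": 1_700_000_500_000, "enabled": False},
]

def build_sync_response(last_sync_ms: int, sdk_type: str, user=None):
    """Return only the flags changed since the SDK's last sync timestamp.

    Server-side SDKs receive the raw definitions; client-side SDKs would
    receive pre-evaluated results for the supplied user context (evaluation
    is reduced here to reading the flag's `enabled` field).
    """
    changed = [f for f in FLAGS if f["updatedAt"] > last_sync_ms]
    if sdk_type == "server":
        return {"eventType": "full", "flags": changed}
    return {
        "eventType": "full",
        "evaluated": [{"key": f["key"], "variation": f["enabled"]} for f in changed],
    }

# A server-side SDK that last synced at 1_700_000_200_000 only receives
# the flag modified after that point ("dark-mode").
response = build_sync_response(1_700_000_200_000, "server")
```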

Feature flag / Segment changes flow

Purpose

To distribute modifications made to feature flags or segments (typically via the UI) to all actively connected SDKs in real-time, ensuring consistent behavior across the application.

Trigger

A user modifies a feature flag or segment definition through the FeatBit UI.

Process

  1. The API persists the changes into the Primary Database.
  2. After successful persistence, the API then publishes a change notification message to the Message Queue.
  3. The ELS consumes this change notification from the Message Queue:
    • It re-evaluates relevant flags for connected client-side SDKs based on their known user contexts.
    • It prepares the updated flag/segment definitions for server-side SDKs.
  4. The ELS pushes the updated data (evaluated results for client-side, definitions for server-side) to the relevant connected SDKs via WebSocket connections.
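The fan-out in steps 3-4 can be sketched with an in-memory stand-in for the Message Queue and per-connection outboxes in place of WebSocket sends. Class and message shapes here are illustrative assumptions, not FeatBit's real components or protocol:

```python
class FakeQueue:
    """Stand-in for the Message Queue (Redis or Kafka in real deployments)."""
    def __init__(self):
        self.subscribers = []
    def subscribe(self, handler):
        self.subscribers.append(handler)
    def publish(self, message):
        # Deliver the change notification to every consumer.
        for handler in self.subscribers:
            handler(message)

class FakeEls:
    """Sketch of the ELS fan-out: consume a flag change, push to each connection."""
    def __init__(self, queue):
        self.connections = []  # (sdk_type, user, outbox) per connected SDK
        queue.subscribe(self.on_flag_changed)
    def connect(self, sdk_type, user=None):
        outbox = []  # stands in for a WebSocket connection
        self.connections.append((sdk_type, user, outbox))
        return outbox
    def on_flag_changed(self, flag):
        for sdk_type, user, outbox in self.connections:
            if sdk_type == "server":
                # Server-side SDKs get the raw definition.
                outbox.append({"type": "patch", "flag": flag})
            else:
                # Client-side SDKs get a re-evaluated result (simplified).
                outbox.append({"type": "patch", "key": flag["key"],
                               "variation": flag["enabled"]})

queue = FakeQueue()
els = FakeEls(queue)
server_outbox = els.connect("server")
client_outbox = els.connect("client", user={"keyId": "u-1"})

# Steps 1-2: the API persists the change, then publishes a notification.
queue.publish({"key": "new-checkout", "enabled": True})
```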

End user data flow

Purpose

To collect and persist end-user information provided by the SDKs. This data is then used for feature flag targeting and user segmentation.

Trigger

This flow is triggered by several events:

  • A client-side SDK establishes its initial WebSocket connection (sending initial user context).
  • A client-side SDK explicitly identifies or updates the current user via the SDK's identify method (or equivalent).
  • An SDK (client-side or server-side) sends insights data (like evaluation results) which includes user information.

Process

  1. The ELS receives user data from an SDK and publishes it to the Message Queue.
  2. The API consumes these user data messages from the Message Queue, updating existing user records or inserting new ones into the Primary Database.
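Step 2 is essentially an upsert: a minimal sketch is shown below, assuming a message shape with `keyId`, `name`, and `customizedProperties` fields (these names are illustrative, not FeatBit's actual schema) and a dictionary in place of the Primary Database:

```python
users_table = {}  # stands in for the end-user table, keyed by the SDK-provided key

def upsert_end_user(message):
    """Insert a new user record or merge incoming attributes into an existing one."""
    key = message["keyId"]
    record = users_table.setdefault(key, {"keyId": key})
    # Merge custom properties; keep the previous name if the update omits it.
    record.update(message.get("customizedProperties", {}))
    record["name"] = message.get("name", record.get("name"))
    return record

# First message creates the record; the second only adds a property.
upsert_end_user({"keyId": "u-42", "name": "Ada",
                 "customizedProperties": {"plan": "pro"}})
upsert_end_user({"keyId": "u-42",
                 "customizedProperties": {"region": "eu"}})
```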

Insights data flow

Purpose

To collect and process analytics data generated by SDKs, powering FeatBit's reporting features and A/B/n experimentation analysis.

Trigger

SDKs send insights data to the ELS. Insights data includes:

  • Flag Evaluation Events: Records detailing which flag variations were served to which users.
  • Experiment Metrics: Data tracking user interactions or conversions relevant to ongoing A/B/n tests (e.g., custom events like button clicks, page views).

Process

  1. SDKs send insights data packets to the ELS.
  2. The ELS receives the data and forwards it to the Message Queue.
  3. Persisting the data varies by deployment version:
    • Standalone & Standard Versions: The API consumes the insights data from the Message Queue and persists it into the Primary Database.
    • Professional Version: ClickHouse consumes the insights data directly from the Message Queue (Kafka).
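The edition-dependent persistence in step 3 amounts to routing the same event stream to different sinks. A minimal sketch, assuming hypothetical edition and sink names (the event fields shown are also illustrative, not FeatBit's actual payload):

```python
# Illustrative shape of a flag-evaluation insights event.
evaluation_event = {
    "eventType": "flag-evaluation",
    "flagKey": "new-checkout",
    "variation": "true",
    "user": {"keyId": "u-42"},
}

def insights_sink(edition: str) -> str:
    """Return which component persists insights events for a given edition."""
    if edition in ("standalone", "standard"):
        # The API consumes from the Message Queue and writes to the Primary Database.
        return "api->primary-db"
    if edition == "professional":
        # ClickHouse consumes directly from Kafka.
        return "kafka->clickhouse"
    raise ValueError(f"unknown edition: {edition}")
```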