Data Plane Framework (DPF) Services perform the actual data transfer between consumer and provider. DPF Services support multiple deployment topologies and handle finite data transfers as well as small-payload, latency-tolerant non-finite transfers such as events.
High-volume or latency-sensitive streaming (non-finite) data transfers should be handled by other implementations that delegate to specialized third-party infrastructure such as Kafka.
Key features:
* Minimal state: All state pertaining to a transfer process is maintained by the Control Plane as part of the TransferProcess. The only state the DPF Service maintains is whether a transfer process has been completed; as a consequence, the Control Plane must issue retries in the event of failure (see the first sketch after this list).
* No transformation: The DPF Service is not an ETL tool; it contains no facilities for data transformation or processing. Any such work is expected to be handled by the Control Plane as part of the provisioning phase.
* Re-use: DPF Services rely on existing data transfer technology such as FTP, S3, and Azure Object Storage; in general, DPF Services do not contain their own wire-protocol implementations.
* Flexible Deployment: DPF Services must be deployable (see the second sketch after this list):
** to a Kubernetes cluster,
** remotely from the Control Plane,
** within the same process as the Control Plane for demo and testing purposes.
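
To make the "minimal state" point concrete, the following is a minimal sketch using hypothetical type and method names, not the actual DPF API. The only state the DPF Service keeps is a record of which transfer processes have completed; everything else (status, provisioning data, retry bookkeeping) lives in the Control Plane's TransferProcess.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical completion store: the single piece of state a DPF Service keeps.
public class CompletedTransferStore {

    // In-memory set of completed transfer process ids; all other transfer state
    // is owned by the Control Plane as part of the TransferProcess.
    private final Set<String> completed = ConcurrentHashMap.newKeySet();

    // Called by the data plane once a transfer finishes successfully.
    public void markCompleted(String transferProcessId) {
        completed.add(transferProcessId);
    }

    // Queried when the Control Plane retries after a failure, so an already
    // completed transfer is not executed a second time.
    public boolean isCompleted(String transferProcessId) {
        return completed.contains(transferProcessId);
    }
}
```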
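
The deployment flexibility is easiest to see behind a small abstraction. The sketch below is purely illustrative; the interface, class names, and endpoint path are assumptions, not the actual EDC API. The Control Plane initiates transfers through a `DataPlaneClient`, which can be backed either by an in-process implementation (demos, tests) or by a remote implementation that calls a DPF Service deployed elsewhere, e.g. in a Kubernetes cluster.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical abstraction: the Control Plane uses the same interface regardless
// of where the data plane actually runs.
public interface DataPlaneClient {
    void initiateTransfer(String transferProcessId);
}

// Runs in the same process as the Control Plane -- convenient for demos and tests.
class EmbeddedDataPlaneClient implements DataPlaneClient {
    @Override
    public void initiateTransfer(String transferProcessId) {
        // perform the data copy in-process and record completion
    }
}

// Calls a DPF Service deployed remotely, e.g. in a Kubernetes cluster.
// The endpoint path used here is an assumption for illustration only.
class RemoteDataPlaneClient implements DataPlaneClient {
    private final HttpClient http = HttpClient.newHttpClient();
    private final String baseUrl; // e.g. the data plane's control API base URL

    RemoteDataPlaneClient(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    @Override
    public void initiateTransfer(String transferProcessId) {
        var request = HttpRequest.newBuilder(URI.create(baseUrl + "/transfers/" + transferProcessId))
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();
        try {
            http.send(request, HttpResponse.BodyHandlers.discarding());
        } catch (Exception e) {
            // surface the failure so the Control Plane can schedule a retry
            throw new RuntimeException(e);
        }
    }
}
```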