The Edge Computing (Decentralized AI processing) BB (BB-02) provides value-added services exploiting an underlying distributed edge-computing infrastructure (e.g., owned and operated by Cloud Providers).
These services target two main high-level goals:
- Privacy preservation: data is kept close(r) to the user, specifically within a pre-defined domain, called a privacy zone, that is eligible to process the private data.
- Efficient near-data processing: computation performance, resource utilization, and data privacy are optimized.
In general, the main goal is to move (AI-related) data processing capabilities close to the data source and execute them on-site. If the execution capability is available on-site, that is, in the virtual/physical node storing the data, the data-consuming software function (as a FaaS-based operation) or container (as a CaaS-based operation) is launched there (e.g., by a Kubernetes-based orchestration framework). Thus, the transmission of large amounts of data can be avoided, and privacy challenges imposed by geographical or provider-related rules, regulations, or demands can be addressed.
As a more realistic scenario, the data and the function can also be moved for processing, but only within a pre-defined privacy zone. This privacy zone primarily encompasses a set of worker nodes (in Kubernetes terminology) that are suitable with respect to pre-defined privacy rules and where processing functions can and should be deployed on demand.
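In Kubernetes terms, such a privacy zone could be expressed with node labels and a matching nodeSelector. The following is a purely illustrative sketch: the label key ptx-edge/privacy-zone, its value, and the image name are hypothetical and not part of the building block's actual API.

```yaml
# Hypothetical sketch: nodes inside the privacy zone carry a dedicated label,
# e.g., assigned via: kubectl label node worker-1 ptx-edge/privacy-zone=eu-west
apiVersion: v1
kind: Pod
metadata:
  name: data-processing-fn
spec:
  nodeSelector:
    ptx-edge/privacy-zone: eu-west      # illustrative label key/value
  containers:
    - name: fn
      image: example.org/processing-fn:latest   # placeholder image
```

With such labels in place, the scheduler only ever places the processing function on nodes belonging to the eligible privacy zone.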
From the viewpoint of data processing capabilities, a processing function (deployed within an eligible privacy zone) can perform simple data preprocessing, filtering, or manipulation actions on the private data, or more complex tasks, such as federated AI model learning steps. This is uniformly enabled by advanced containerization technologies, such as Docker, where the data processing piece of code is bundled with its dependencies and software resources in a standalone deployable package, in a lightweight and portable way. Moreover, this container-based approach opens new possibilities to support the execution of standalone and portable components of other building blocks, provided these block parts can be seamlessly operated in cloud-native or serverless execution environments.
- Edge computing — AI processing BB
Important
See the comprehensive design document of the Edge Computing building block here.
Important
See the developer/technical document of the building block components here.
Since the functionalities of the Edge Computing BB fundamentally rely on the Kubernetes (K8s) container orchestration platform (realistically spanning multiple providers' domains/clouds), its value-added services are implemented as standalone software containers operated in a dedicated Kubernetes namespace, together with several PTX-tailored extensions of the Kubernetes framework itself.
The elements of BB-02's main functionalities cover the following:
- Provide a generic runtime environment for data processing functions.
- Provide the ability to deploy pre-built containers with privacy-preserving options.
- Provide the capability of orchestrating data processing by privacy-zone metadata.
- Use the PTX Connector to interact with PTX core elements.
- Implement and control the process of getting data for data consumer functions/software.
- Implement a separate REST-API interface for the integration with PTX dataspace.
- Implement a dedicated scheduler for managing compute resources efficiently and in a privacy-preserving manner.
See the main technical document in kubernetes/design, which describes in detail how the building block components are realized and bound to the Kubernetes' architecture features.
Schematic design architecture of binding BB-02's functional components to K8s internal objects.
Since BB-02 primarily consists of extension modules to the widespread Kubernetes framework, rather than a (set of) standalone software components, its installation and setup require different steps and, most importantly, an operating vanilla Kubernetes cluster as a prerequisite.
The fundamental Kubernetes features, on which the designed extension modules rely, are specifically chosen to support a wide variety of Kubernetes versions (e.g., preferring Ingress routes instead of the newer API Gateway entries).
Nevertheless, currently, Kubernetes versions 1.31.5 and above are preferred and tested.
There are many methods and tools for setting up a production-grade Kubernetes cluster on a local machine. For example,
- consult Kubernetes' official documentation,
- pick any of the numerous certified platform solutions, or
- choose one of the managed Kubernetes services available online at top-tier cloud providers or EU-based providers.
The BB-02 building block is not a standalone data processing service per se, but an over-the-top platform for executing or operating other data processing functionalities in a cloud-native environment. Thus, BB-02's installation steps highly depend on the actual setup of the underlying Kubernetes orchestration framework. Nevertheless, except for minor platform-specific configurations, e.g., routing with the K8s deployment's application gateway or using built-in certificate management services, the installation process can be considered system-independent, assuming the configuration profile and context for the underlying Kubernetes cluster are available and set by default.
The installation and configuration steps are grouped together into separate helper scripts with a dedicated Makefile, which are intended to
- download and install required software and system dependencies,
- assemble a specific version of the PTX Connector component (PDC) tailored for K8s,
- compile dedicated containers of building block components,
- install and configure the building block over the default Kubernetes cluster, internally using command-line tools and/or Kubernetes' package manager, Helm.
The easiest and most straightforward way to set up BB-02's ptx-edge K8s extension
(assuming a valid default kubectl profile for a running Kubernetes cluster)
is to use the following command issued in the project's root folder:
$ make setup

Note
Since BB-02's additional features are still under development, the Makefile targets
(setup / run / cleanup) currently point directly to the targets of the latest tested
readiness level's Makefile in kubernetes/test/levels, which assumes a default,
locally emulated Kubernetes cluster!
Note
The configured level based on the ptx-edge-internal
definitions is Level 5.
Current development of BB-02 addresses additional features to ease the development and usage of the building block for human users, in order to fulfill the goals of the final Level 6.
Furthermore, there are dedicated deployment setups for BB-02's intended application scenarios in kubernetes/deployment, e.g., the latest deployment configuration of building block BB-02 operated at the BME side.
These deployment configurations are not designed for general applicability, but for specific cloud environments and use case scenarios. These deployment setups encompass comprehensive configuration options for a given use case scenario, including
- domain/server information,
- certificate management,
- additional security hardening,
- Kubernetes-related extra metadata,
- and more.
Nevertheless, these can be used as a starting point by advanced users to create similar deployment configurations for other use cases.
Usually, these configurations require only minor modifications, e.g., changing public domain
DNS, load balancer IP, etc., to adjust them to deployment scenarios of the same kind.
For the available configuration options, consult the config.sh files and templates folders.
There are several deployment setups in kubernetes/test/demos for
demonstrating ptx-edge capabilities over a local emulated Kubernetes cluster.
Although these configuration setups are designed for executing test workflows, they contain
useful examples of how the building block components can be precompiled, defined, set up,
and configured for use in different scenarios.
For the configuration and deployment options, consult the related Makefile,
install.sh, *-topo.sh, and setup-*.sh helper scripts, as well as the K8s manifest
template files under the local ./rsc folders.
Note
The BB-02 module is unique in the sense that it cannot be seamlessly run by a container framework, such as Docker or Podman, as it is inherently based on container orchestration features at a higher architectural level.
However, for development and testing purposes, full-fledged but lightweight clusters of different Kubernetes distributions can be set up on the fly even in a single virtual machine.
For example, the kind, k3d, and
minikube tools
are purposefully designed for creating and spinning up local, multi-node K8s
clusters/sandboxes using docker with little hassle and resource usage.
These are meant for developers to test Kubernetes distributions on their (isolated)
development machine, but are also suitable for local development, CI, and testing.
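For instance, a multi-node kind sandbox can be described declaratively; a minimal sketch (the node roles below are standard kind configuration, nothing ptx-edge-specific):

```yaml
# Minimal kind cluster config: one control plane and two worker nodes,
# each node running as a separate Docker container on the host.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```

Such a file can be applied with kind create cluster --config <file> to spin up the local sandbox in one step.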
The K8s control plane and worker nodes are created as separate Docker containers based on specially built Docker images, which
- are capable of running arbitrary software modules as preloaded Docker images using docker-in-docker,
- run standard K8s distribution components, e.g., kubelet,
- can be configured via the standard kubectl tool from the host machine.
See a detailed description of these tools, their installation and configuration on an Ubuntu 22.04/24.04 VM in kubernetes/test.
However, the ptx-edge extension's customer-facing API can also be separately run in
a single container as a mockup for automated integration test cases.
See further details about Docker-based testing
- in the Level 1 testing setup here with the related README.md,
- or in the mockup REST-API README.md in kubernetes/test/mock-api.
To start ptx-edge components deployed in the local K8s cluster, run
$ make run
while for tearing down components, run
$ make cleanup

Note
Since BB-02 is still under development, Makefile targets currently
(setup / run / cleanup) point directly to the targets of the latest
readiness level's Makefile in kubernetes/test/levels!
These targets launch the deployed ptx-edge core services in K8s automatically,
but they do not wait until all the resources are running before exiting!
To check the current status of the installed components, use the following command:
$ make status

The ptx-edge K8s extension provides a separate REST-API in
kubernetes/src/rest-api
to integrate its features with the PTX core components and to ease the use of
building block services by external APIs or users.
The API uses the FastAPI Python package to implement its endpoints and also define the related OpenAPI 3 specification directly from the Python software code.
- The REST-API uses the following base URL: http://<service_name>:8080/ptx-edge/api/v1/.
- The interactive API interface (Swagger UI) lives here: http://<service_name>:8080/ptx-edge/api/v1/ui/
- The OpenAPI specification is available at http://<service_name>:8080/ptx-edge/api/v1/openapi.json
Additionally, the latest OpenAPI specification is auto-generated and updated at every commit and can be found here.
For testing purposes, a mock-API is generated based on BB-02's predefined OpenAPI specification.
The detailed description of the mock-API and its internal test cases can be found in the related Readme.
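The idea behind such a mock can be illustrated with Python's standard library alone. The handler below hand-fakes only a /version endpoint with the response shape shown in the curl example later in this document; the real mock-API is generated from the OpenAPI specification, not written like this.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class MockVersionHandler(BaseHTTPRequestHandler):
    """Fakes a single endpoint of the ptx-edge REST-API for illustration."""

    def do_GET(self):
        if self.path == "/ptx-edge/api/v1/version":
            body = json.dumps({"api": "0.1", "framework": "0.112.1"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

# Bind to an ephemeral port and serve in a background thread
server = HTTPServer(("127.0.0.1", 0), MockVersionHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/ptx-edge/api/v1/version"
with urllib.request.urlopen(url, timeout=5) as resp:
    version = json.load(resp)
server.shutdown()
```

Integration tests can then be pointed at such a stand-in service instead of a full K8s deployment.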
The REST-API endpoints can be easily tested in the following two ways:
- Calling the specific endpoint directly using, e.g., curl and Python's json module.
For example, the standalone mock REST-API can be tested with the following command:
$ curl -sX 'GET' \
-H 'accept: application/json' \
'http://localhost:8080/ptx-edge/api/v1/version' | python3 -m json.tool
{
"api": "0.1",
"framework": "0.112.1"
}
- Manually testing endpoints with in-line test data on its Swagger UI.
Caution
The different ptx-edge setups along with the included REST-API service
may be exposed on different port(s) (e.g., 80, 8080, 443) according
to the applied (test/dev/prod) K8s setup, used (cloud) load balancer,
or test VM configuration!
Refer to the exposed port number in the related part of the documentation!
To execute all module and component tests prepared in the
kubernetes/test folder, including all unit tests defined for
each submodule in kubernetes/src and explicit component-level
tests, use the joint Makefile target tests in the main Makefile:
$ make tests

The following table contains example API calls with successful results.
Further test cases for incorrect input data and other failures are collected in the mock-API's unit tests in kubernetes/test/mock-api/tests.
To validate the endpoints, send the following requests to the main REST-API using the URL:
http://<service_name>:8080/ptx-edge/api/v1/<endpoint>.
| Endpoint | HTTP verb | Example input (JSON) | Response Code | Example output (JSON) |
|---|---|---|---|---|
| /versions | GET | - | 200 | {"api": "0.1", ...} |
| /requestEdgeProc | POST | {"data": "Data42", ...} | 202 | {"data": "Data42", ...} |
| /requestPrivacyEdgeProc | POST | {"consent": "Consent42", ...} | 202 | {"function": "FunctionData42", ...} |
Detailed test definitions can be found in kubernetes/test/cases.
Tip
All tests can be executed with the make tests target.
Unit tests are based on (Python) module-specific tests defined separately under kubernetes/src/<module>/tests
for each ptx-edge subcomponent <module>.
Since multiple modules use and expose APIs, the defined unit tests also contain API endpoint validations. These test cases usually use the specific test methods/dependencies/tools recommended for the module's main framework.
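The shape of such an endpoint validation can be sketched with plain unittest. The handler function below is a hypothetical stand-in, not an actual ptx-edge module; the real tests live under kubernetes/src/<module>/tests and use each module's own framework tooling.

```python
import unittest

def get_versions():
    """Hypothetical stand-in for a version endpoint handler.

    Returns the response body and HTTP status code, mirroring the
    example payload shown elsewhere in this document.
    """
    return {"api": "0.1", "framework": "0.112.1"}, 200

class VersionEndpointTest(unittest.TestCase):
    def test_versions_ok(self):
        body, status = get_versions()
        self.assertEqual(status, 200)
        self.assertIn("api", body)

# Run the test case programmatically and keep the result object
result = unittest.main(module=__name__, exit=False, argv=["sketch"]).result
```

A failing assertion here would surface exactly like a failing unit test in the module's own suite.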
For installing test dependencies of a given submodule in kubernetes/src, refer to the related README file.
Each subproject defines a Makefile to unify the development/test environment creation. Accordingly, test environment configuration (and execution) is implicitly managed by external tools and third-party libraries, such as virtualenv, pytest, and tox, within these Makefiles.
Therefore, in general, there is no need for explicit environment setup as it is automatically configured and managed by wrapper tools/scripts.
However, to explicitly set up the test/dev environment for a <module> locally
(without Docker), usually the following command can be used:
$ cd kubernetes/src/<module> && make setup

Furthermore, the configuration of Docker-based test environments can also be performed explicitly by executing the dedicated Makefile target as follows:
$ cd kubernetes/src/<module> && make docker-test-setup # Preferred way

Tip
Unit test dependencies are the same as those of the corresponding main submodules.
To locally execute all unit tests defined for ptx-edge,
use the following helper script in kubernetes/test/units:
$ cd kubernetes/test/units && ./runall.sh

For the available configuration parameters, refer to the help menu:
$ ./runall.sh -h
Usage: ./runall.sh [options]
Options:
-d Execute tests in Docker containers instead of local venvs.
-c Cleanup projects before build.
-o <dir> Collect Junit-style reports into <dir>.
-h Display help.

To locally execute the unit tests of a single <module>,
execute the dedicated Makefile target within the <module> folder, e.g.,
$ cd kubernetes/src/<module> && make unit-tests

Note
Subprojects may define different dependencies and test parameters wrapped by Makefiles. The preferred way for testing is the preconfigured Docker-based test environments.
For docker-based test execution, use the dedicated -d flag of runall.sh
or call the dedicated Makefile target of any <module>:
$ cd kubernetes/test/units && ./runall.sh -d # Preferred way
# or
$ cd kubernetes/src/<module> && make docker-unit-tests

JUnit-style test reports are automatically generated and stored in the test containers.
To export these reports from the test environment to the local host/VM, use the -o flag
with the runall.sh script:
$ ./runall.sh -d -o results/
[x] Docker-based unit test execution is configured.
[x] JUnit-style reports are configured with path: kubernetes/test/units/results
Preparing report folder...
# <logs truncated>
$ ls -al results/
total 20
drwxrwx--- 1 root vboxsf 4096 Feb 24 20:08 ./
drwxrwx--- 1 root vboxsf 4096 Feb 24 20:01 ../
-rwxrwx--- 1 root vboxsf 218 Feb 24 20:08 report-test-builder.xml
-rwxrwx--- 1 root vboxsf 2878 Feb 24 20:09 report-test-mock-api.xml
-rwxrwx--- 1 root vboxsf 218 Feb 24 20:08 report-test-rest-api.xml

Each component test (script) starting with the prefix test is executed successfully
(without error/failure notification),
and the helper script runall.sh returns with the value 0.
An example result log of one successful test execution is the following:
$ cd kubernetes/test/mock-api
$ make docker-unit-tests
# <logs truncated>
py38 run-test: commands[0] | nosetests -v --with-xunit --xunit-file=report/report-test-mock-api.xml
[66] /usr/src/app$ /usr/src/app/.tox/py38/bin/nosetests -v --with-xunit --xunit-file=report/report-test-mock-api.xml
Test case for checking available live API: HTTP 200 ... ok
Test case for valid request_edge_proc request: HTTP 202 ... ok
Test case for invalid request_edge_proc request: HTTP 400 ... ok
Test case for invalid request_edge_proc request: HTTP 403 ... ok
Test case for invalid request_edge_proc request: HTTP 404 ... ok
Test case for invalid request_edge_proc request: HTTP 408 ... ok
Test case for invalid request_edge_proc request: HTTP 412 ... ok
Test case for invalid request_edge_proc request: HTTP 503 ... ok
Test case for valid request_privacy_edge_proc request: HTTP 202 ... ok
Test case for invalid request_privacy_edge_proc request: HTTP 400 ... ok
Test case for invalid request_privacy_edge_proc request: HTTP 401 ... ok
Test case for invalid request_privacy_edge_proc request: HTTP 403 ... ok
Test case for invalid request_privacy_edge_proc request: HTTP 404 ... ok
Test case for invalid request_privacy_edge_proc request: HTTP 408 ... ok
Test case for invalid request_privacy_edge_proc request: HTTP 412 ... ok
Test case for invalid request_privacy_edge_proc request: HTTP 503 ... ok
Test case for valid get_versions response: HTTP 200 ... ok
----------------------------------------------------------------------
XML: /usr/src/app/report/report-test-mock-api.xml
----------------------------------------------------------------------
Ran 17 tests in 3.709s
OK
_____________________________________________________________________________________________________ summary _____________________________________________________________________________________________________
py38: commands succeeded
congratulations :)

Programmatically, each Makefile returns the value 0 in case all executed tests defined in the target
unit-tests were successful, and a non-zero value otherwise.
The helper script runall.sh follows this UNIX behavior as well.
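This convention is what lets CI pipelines gate on the test results without parsing logs; a generic sketch using Python's subprocess module (the inline commands below are placeholders standing in for ./runall.sh or a make target):

```python
import subprocess
import sys

def run_suite(cmd):
    """Run a test command; success is signaled purely by its exit code (UNIX convention)."""
    return subprocess.run(cmd).returncode == 0

# Placeholder commands standing in for './runall.sh' / 'make unit-tests':
passing = run_suite([sys.executable, "-c", "raise SystemExit(0)"])
failing = run_suite([sys.executable, "-c", "raise SystemExit(1)"])
```

A CI job would simply fail the pipeline whenever run_suite returns False.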
Important
Detailed test execution summary can be found in kubernetes/test/suites/README.md.
Testing of ptx-edge components is based on the basic functionality and applicability of
ptx-edge K8s components defined in the Design document.
This means that the designed component-level tests aim to test
- K8s manifest files designed to be used as templates by ptx-edge modules,
- implicitly, the K8s capabilities and K8s API server endpoints on which ptx-edge modules rely,
- interactions between ptx-edge modules as well as with K8s entities (services, persistent volumes, load balancer, etc.).
The related test cases can be found in kubernetes/test/suites.
Note
For the detailed description of component-level tests, refer to the related README.md.
Typically, these test scripts perform the following steps:
- set up and configure a K3s test environment according to the test case,
- deploy test manifest file(s) or configure directly via the K8s API using kubectl,
- wait for component(s) to set up and reach a stable state, or escalate designed issues,
- check the test status and validate the outcome, and
- tear down the test environment.
To install test dependencies with the latest versions:
$ cd kubernetes/test/suites && ./install-dep.sh -u

Warning
For test report generation, the flag -u is mandatory!
To execute all component-level tests with JUnit-style test report generation (into the folder
kubernetes/test/suites/results), use the following helper script:
$ cd kubernetes/test/suites && ./runall.sh -o ./results
[x] JUnit-style reports are configured with path: kubernetes/test/suites/results
Preparing report folder...
# <logs truncated>
$ ls -al results/
total 20
drwxrwx--- 1 root vboxsf 4096 Feb 20 16:56 .
drwxrwx--- 1 root vboxsf 4096 Feb 20 12:31 ..
-rwxrwx--- 1 root vboxsf 445 Feb 20 16:56 report-test-policy-zone-scheduling.xml
-rwxrwx--- 1 root vboxsf 524 Feb 20 16:59 report-test-ptx-edge-builder.xml
-rwxrwx--- 1 root vboxsf 265 Feb 20 17:00 report-test-ptx-edge-rest-api.xml

For the available configuration parameters, refer to the help menu:
$ ./runall.sh -h
Usage: ./runall.sh [options]
Options:
-o <dir> Generate Junit-style reports into <dir>.
-h Display help.

Each component test script starting with the prefix test in the folder kubernetes/test/suites
is executed successfully (without error/failure notification),
and the helper script runall.sh returns with the value 0.
An example result log of one successful test execution is the following:
$ ./test-policy-zone-scheduling.sh -- testPolicyZoneSchedulingWithNodeSelector
# <logs truncated>
Ran 1 test.
OK

Programmatically, each test script returns 0 in case all defined test
cases were successful, and a non-zero value otherwise.
The helper script runall.sh follows this UNIX behavior as well.
Note
Detailed test execution summary can be found in kubernetes/test/units/README.md.

