diff --git a/AGENTS.md b/AGENTS.md index bd40a313c6..ce6ea35c57 100644 --- a/AGENTS.md +++ b/AGENTS.md @@ -231,7 +231,7 @@ For distributed development: 3. Build Store: `mvn clean package -pl hugegraph-store -am -DskipTests` 4. Build Server with HStore backend: `mvn clean package -pl hugegraph-server -am -DskipTests` -See Docker Compose example: `hugegraph-server/hugegraph-dist/docker/example/` +See Docker Compose examples: `docker/` directory. Single-node quickstart (pre-built images): `docker/docker-compose.yml`. Single-node dev build (from source): `docker/docker-compose.dev.yml`. 3-node cluster: `docker/docker-compose-3pd-3store-3server.yml`. See `docker/README.md` for full setup guide. ### Debugging Tips diff --git a/README.md b/README.md index eba5d980ee..9b92fd2bea 100644 --- a/README.md +++ b/README.md @@ -173,11 +173,11 @@ flowchart TB ### 5 Minutes Quick Start ```bash -# Start HugeGraph with Docker +# Start HugeGraph (standalone mode) docker run -itd --name=hugegraph -p 8080:8080 hugegraph/hugegraph:1.7.0 # Verify server is running -curl http://localhost:8080/apis/version +curl http://localhost:8080/versions # Try a Gremlin query curl -X POST http://localhost:8080/gremlin \ @@ -208,13 +208,18 @@ docker run -itd --name=hugegraph -e PASSWORD=your_password -p 8080:8080 hugegrap ``` For advanced Docker configurations, see: -- [Docker Documentation](https://hugegraph.apache.org/docs/quickstart/hugegraph-server/#3-deploy) -- [Docker Compose Example](./hugegraph-server/hugegraph-dist/docker/example) -- [Docker README](hugegraph-server/hugegraph-dist/docker/README.md) + +* [Docker Documentation](https://hugegraph.apache.org/docs/quickstart/hugegraph-server/#3-deploy) +* [Docker Compose Examples](./docker/) +* [Docker README](./docker/README.md) +* [Server Docker README](hugegraph-server/hugegraph-dist/docker/README.md) + +> **Docker Desktop (Mac/Windows)**: The 3-node distributed cluster (`docker/docker-compose-3pd-3store-3server.yml`) uses Docker bridge 
networking and works on all platforms including Docker Desktop. Allocate at least 12 GB memory to Docker Desktop. > **Note**: Docker images are convenience releases, not **official ASF distribution artifacts**. See [ASF Release Distribution Policy](https://infra.apache.org/release-distribution.html#dockerhub) for details. > -> **Version Tags**: Use release tags (`1.7.0`, `1.x.0`) for stable versions. Use `latest` for development features. +> **Version Tags**: Use release tags (e.g., `1.7.0`) for stable deployments. The `latest` tag should only be used for testing or development. +
Option 2: Download Binary Package @@ -283,14 +288,16 @@ Once the server is running, verify the installation: ```bash # Check server version -curl http://localhost:8080/apis/version +curl http://localhost:8080/versions # Expected output: # { -# "version": "1.7.0", -# "core": "1.7.0", -# "gremlin": "3.5.1", -# "api": "1.7.0" +# "versions": { +# "version": "v1", +# "core": "1.7.0", +# "gremlin": "3.5.1", +# "api": "1.7.0" +# } # } # Try Gremlin console (if installed locally) diff --git a/docker/README.md b/docker/README.md new file mode 100644 index 0000000000..9bc21b1ba7 --- /dev/null +++ b/docker/README.md @@ -0,0 +1,259 @@ +# HugeGraph Docker Deployment + +This directory contains Docker Compose files for running HugeGraph: + +| File | Description | +|------|-------------| +| `docker-compose.yml` | Single-node cluster using pre-built images from Docker Hub | +| `docker-compose.dev.yml` | Single-node cluster built from source (for developers) | +| `docker-compose-3pd-3store-3server.yml` | 3-node distributed cluster (PD + Store + Server) | + +## Prerequisites + +- **Docker Engine** 20.10+ (or Docker Desktop 4.x+) +- **Docker Compose** v2 (included in Docker Desktop) +- **Memory**: Allocate at least **12 GB** to Docker Desktop (Settings → Resources → Memory). The 3-node cluster runs 9 JVM processes (3 PD + 3 Store + 3 Server) which are memory-intensive. Insufficient memory causes OOM kills that appear as silent Raft failures. + +> [!IMPORTANT] +> The 12 GB minimum is for Docker Desktop. On Linux with native Docker, ensure the host has at least 12 GB of free memory. +--- + +## Single-Node Setup + +Two compose files are available for running a single-node cluster (1 PD + 1 Store + 1 Server): + +### Option A: Quick Start (pre-built images) + +Uses pre-built images from Docker Hub. Best for **end users** who want to run HugeGraph quickly. 
+ +```bash +cd docker +HUGEGRAPH_VERSION=1.7.0 docker compose up -d +``` + +- Images: `hugegraph/pd:1.7.0`, `hugegraph/store:1.7.0`, `hugegraph/server:1.7.0` +- `pull_policy: always` — always pulls the specified image tag + +> **Note**: Use release tags (e.g., `1.7.0`) for stable deployments. The `latest` tag is intended for testing or development only. +- PD healthcheck endpoint: `/v1/health` +- Single PD, single Store (`HG_PD_INITIAL_STORE_LIST: store:8500`), single Server +- Server healthcheck endpoint: `/versions` + +### Option B: Development Build (build from source) + +Builds images locally from source Dockerfiles. Best for **developers** who want to test local changes. + +```bash +cd docker +docker compose -f docker-compose.dev.yml up -d +``` + +- Images: built from source via `build: context: ..` with Dockerfiles +- No `pull_policy` — builds locally, doesn't pull +- Entrypoint scripts are baked into the built image (no volume mounts) +- PD healthcheck endpoint: `/v1/health` +- Otherwise identical env vars and structure to the quickstart file + +### Key Differences + +| | `docker-compose.yml` (quickstart) | `docker-compose.dev.yml` (dev build) | +|---|---|---| +| **Images** | Pull from Docker Hub | Build from source | +| **Who it's for** | End users | Developers | +| **pull_policy** | `always` | not set (build) | + +**Verify** (both options): +```bash +curl http://localhost:8080/versions +``` + +--- + +## 3-Node Cluster Quickstart + +```bash +cd docker +HUGEGRAPH_VERSION=1.7.0 docker compose -f docker-compose-3pd-3store-3server.yml up -d + +# To stop and remove all data volumes (clean restart) +docker compose -f docker-compose-3pd-3store-3server.yml down -v +``` + +**Startup ordering** is enforced via `depends_on` with `condition: service_healthy`: + +1. **PD nodes** start first and must pass healthchecks (`/v1/health`) +2. **Store nodes** start after all PD nodes are healthy +3. 
**Server nodes** start after all Store nodes are healthy + +This ensures PD and Store are healthy before the server starts. The server entrypoint still performs a best-effort partition wait after launch, so partition assignment may take a little longer. + +**Verify the cluster is healthy**: + +```bash +# Check PD health +curl http://localhost:8620/v1/health + +# Check Store health +curl http://localhost:8520/v1/health + +# Check Server (Graph API) +curl http://localhost:8080/versions + +# List registered stores via PD +curl http://localhost:8620/v1/stores + +# List partitions +curl http://localhost:8620/v1/partitions +``` + +--- + +## Environment Variable Reference + +Configuration is injected via environment variables. The old `docker/configs/application-pd*.yml` and `docker/configs/application-store*.yml` files are no longer used. + +### PD Environment Variables + +| Variable | Required | Default | Maps To (`application.yml`) | Description | +|----------|----------|---------|-----------------------------|-------------| +| `HG_PD_GRPC_HOST` | Yes | — | `grpc.host` | This node's hostname/IP for gRPC | +| `HG_PD_RAFT_ADDRESS` | Yes | — | `raft.address` | This node's Raft address (e.g. `pd0:8610`) | +| `HG_PD_RAFT_PEERS_LIST` | Yes | — | `raft.peers-list` | All PD peers (e.g. `pd0:8610,pd1:8610,pd2:8610`) | +| `HG_PD_INITIAL_STORE_LIST` | Yes | — | `pd.initial-store-list` | Expected stores (e.g. 
`store0:8500,store1:8500,store2:8500`) | +| `HG_PD_GRPC_PORT` | No | `8686` | `grpc.port` | gRPC server port | +| `HG_PD_REST_PORT` | No | `8620` | `server.port` | REST API port | +| `HG_PD_DATA_PATH` | No | `/hugegraph-pd/pd_data` | `pd.data-path` | Metadata storage path | +| `HG_PD_INITIAL_STORE_COUNT` | No | `1` | `pd.initial-store-count` | Min stores for cluster availability | + +**Deprecated aliases** (still work but log a warning): + +| Deprecated | Use Instead | +|------------|-------------| +| `GRPC_HOST` | `HG_PD_GRPC_HOST` | +| `RAFT_ADDRESS` | `HG_PD_RAFT_ADDRESS` | +| `RAFT_PEERS` | `HG_PD_RAFT_PEERS_LIST` | +| `PD_INITIAL_STORE_LIST` | `HG_PD_INITIAL_STORE_LIST` | + +### Store Environment Variables + +| Variable | Required | Default | Maps To (`application.yml`) | Description | +|----------|----------|---------|-----------------------------|-------------| +| `HG_STORE_PD_ADDRESS` | Yes | — | `pdserver.address` | PD gRPC addresses (e.g. `pd0:8686,pd1:8686,pd2:8686`) | +| `HG_STORE_GRPC_HOST` | Yes | — | `grpc.host` | This node's hostname (e.g. `store0`) | +| `HG_STORE_RAFT_ADDRESS` | Yes | — | `raft.address` | This node's Raft address (e.g. `store0:8510`) | +| `HG_STORE_GRPC_PORT` | No | `8500` | `grpc.port` | gRPC server port | +| `HG_STORE_REST_PORT` | No | `8520` | `server.port` | REST API port | +| `HG_STORE_DATA_PATH` | No | `/hugegraph-store/storage` | `app.data-path` | Data storage path | + +**Deprecated aliases** (still work but log a warning): + +| Deprecated | Use Instead | +|------------|-------------| +| `PD_ADDRESS` | `HG_STORE_PD_ADDRESS` | +| `GRPC_HOST` | `HG_STORE_GRPC_HOST` | +| `RAFT_ADDRESS` | `HG_STORE_RAFT_ADDRESS` | + +### Server Environment Variables + +| Variable | Required | Default | Maps To | Description | +|----------|----------|---------|-----------------------------|-------------| +| `HG_SERVER_BACKEND` | Yes | — | `backend` in `hugegraph.properties` | Storage backend (e.g. 
`hstore`) | +| `HG_SERVER_PD_PEERS` | Yes | — | `pd.peers` | PD cluster addresses (e.g. `pd0:8686,pd1:8686,pd2:8686`) | +| `STORE_REST` | No | — | Used by `wait-partition.sh` | Store REST endpoint for partition verification (e.g. `store0:8520`) | +| `PASSWORD` | No | — | Enables auth mode | Optional authentication password | + +**Deprecated aliases** (still work but log a warning): + +| Deprecated | Use Instead | +|------------|-------------| +| `BACKEND` | `HG_SERVER_BACKEND` | +| `PD_PEERS` | `HG_SERVER_PD_PEERS` | + +--- + +## Port Reference + +The table below reflects the published host ports in `docker-compose-3pd-3store-3server.yml`. +The single-node compose file (`docker-compose.yml`) only publishes the REST/API ports (`8620`, `8520`, `8080`) by default. + +| Service | Container Port | Host Port | Protocol | Purpose | +|---------|---------------|-----------|----------|---------| +| pd0 | 8620 | 8620 | HTTP | REST API | +| pd0 | 8686 | 8686 | gRPC | PD gRPC | +| pd0 | 8610 | — | TCP | Raft (internal only) | +| pd1 | 8620 | 8621 | HTTP | REST API | +| pd1 | 8686 | 8687 | gRPC | PD gRPC | +| pd2 | 8620 | 8622 | HTTP | REST API | +| pd2 | 8686 | 8688 | gRPC | PD gRPC | +| store0 | 8500 | 8500 | gRPC | Store gRPC | +| store0 | 8510 | 8510 | TCP | Raft | +| store0 | 8520 | 8520 | HTTP | REST API | +| store1 | 8500 | 8501 | gRPC | Store gRPC | +| store1 | 8510 | 8511 | TCP | Raft | +| store1 | 8520 | 8521 | HTTP | REST API | +| store2 | 8500 | 8502 | gRPC | Store gRPC | +| store2 | 8510 | 8512 | TCP | Raft | +| store2 | 8520 | 8522 | HTTP | REST API | +| server0 | 8080 | 8080 | HTTP | Graph API | +| server1 | 8080 | 8081 | HTTP | Graph API | +| server2 | 8080 | 8082 | HTTP | Graph API | + +--- + +## Healthcheck Endpoints + +| Service | Endpoint | Expected | +|---------|----------|----------| +| PD | `GET /v1/health` | `200 OK` | +| Store | `GET /v1/health` | `200 OK` | +| Server | `GET /versions` | `200 OK` with version JSON | + +--- + +## Troubleshooting + +### 
Containers Exiting or Restarting (OOM Kills) + +**Symptom**: Containers exit with code 137, or restart loops. Raft logs show election timeouts. + +**Cause**: Docker Desktop does not have enough memory. The 9 JVM processes require at least 12 GB. + +**Fix**: Docker Desktop → Settings → Resources → Memory → set to **12 GB** or higher. Restart Docker Desktop. + +```bash +# Check if containers were OOM killed +docker inspect hg-pd0 | grep -i oom +docker stats --no-stream +``` + +### Raft Leader Election Failure + +**Symptom**: PD logs show repeated `Leader election timeout`. Store nodes cannot register. + +**Cause**: PD nodes cannot reach each other on the Raft port (8610), or `HG_PD_RAFT_PEERS_LIST` is misconfigured. + +**Fix**: +1. Verify all PD containers are running: `docker compose -f docker-compose-3pd-3store-3server.yml ps` +2. Check PD logs: `docker logs hg-pd0` +3. Verify network connectivity: `docker exec hg-pd0 ping pd1` +4. Ensure `HG_PD_RAFT_PEERS_LIST` is identical on all PD nodes + +### Partition Assignment Not Completing + +**Symptom**: Server starts but graph operations fail. Store logs show `partition not found`. + +**Cause**: PD has not finished assigning partitions to stores, or stores did not register successfully. + +**Fix**: +1. Check registered stores: `curl http://localhost:8620/v1/stores` +2. Check partition status: `curl http://localhost:8620/v1/partitions` +3. Wait for partition assignment (can take 1–3 minutes after all stores register) +4. Check server logs for the `wait-partition.sh` script output: `docker logs hg-server0` + +### Connection Refused Errors + +**Symptom**: Stores cannot connect to PD, or Server cannot connect to Store. + +**Cause**: Services are using `127.0.0.1` instead of container hostnames, or the `hg-net` bridge network is misconfigured. + +**Fix**: Ensure all `HG_*` env vars use container hostnames (`pd0`, `store0`, etc.), not `127.0.0.1` or `localhost`. 
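
The troubleshooting checks above all probe the same per-node REST endpoints. They can be rolled into a single smoke-test script — a minimal sketch assuming the default host-port mappings of `docker-compose-3pd-3store-3server.yml` (PD REST on 8620–8622, Store REST on 8520–8522, Server on 8080–8082); adjust the ports if you changed the compose file:

```bash
#!/usr/bin/env bash
# Probe every REST healthcheck endpoint of the 3-node cluster and
# print UP/DOWN per node. Ports assume the default compose mappings.
set -u

check() {
  local name="$1" url="$2"
  # -f: fail on HTTP errors; --max-time 2: don't hang on dead nodes
  if curl -fsS --max-time 2 "$url" > /dev/null 2>&1; then
    echo "UP    $name  ($url)"
  else
    echo "DOWN  $name  ($url)"
  fi
}

check pd0     http://localhost:8620/v1/health
check pd1     http://localhost:8621/v1/health
check pd2     http://localhost:8622/v1/health
check store0  http://localhost:8520/v1/health
check store1  http://localhost:8521/v1/health
check store2  http://localhost:8522/v1/health
check server0 http://localhost:8080/versions
check server1 http://localhost:8081/versions
check server2 http://localhost:8082/versions
```

A `DOWN` line tells you which container to inspect next, e.g. `docker logs hg-pd1`.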
diff --git a/hugegraph-pd/AGENTS.md b/hugegraph-pd/AGENTS.md index c9ba2bcfa0..aaaa861f39 100644 --- a/hugegraph-pd/AGENTS.md +++ b/hugegraph-pd/AGENTS.md @@ -247,7 +247,7 @@ store: ### Common Configuration Errors 1. **Raft peer discovery failure**: `raft.peers-list` must include all PD nodes' `raft.address` values -2. **Store connection issues**: `grpc.host` must be a reachable IP (not `127.0.0.1`) for distributed deployments +2. **Store connection issues**: `grpc.host` must be a reachable IP (not `127.0.0.1`) for distributed deployments. In Docker bridge networking, use the container hostname (e.g., `pd0`) set via `HG_PD_GRPC_HOST` env var. 3. **Split-brain scenarios**: Always run 3 or 5 PD nodes in production for Raft quorum 4. **Partition imbalance**: Adjust `patrol-interval` for faster/slower rebalancing @@ -331,7 +331,7 @@ docker run -d -p 8620:8620 -p 8686:8686 -p 8610:8610 \ hugegraph-pd:latest # For production clusters, use Docker Compose or Kubernetes -# See: hugegraph-server/hugegraph-dist/docker/example/ +# See: ../docker/docker-compose-3pd-3store-3server.yml and ../docker/README.md ``` Exposed ports: 8620 (REST), 8686 (gRPC), 8610 (Raft) diff --git a/hugegraph-pd/README.md b/hugegraph-pd/README.md index 65d700e677..b900673ace 100644 --- a/hugegraph-pd/README.md +++ b/hugegraph-pd/README.md @@ -154,6 +154,36 @@ raft: For detailed configuration options and production tuning, see [Configuration Guide](docs/configuration.md). +#### Docker Bridge Network Example + +When running PD in Docker with bridge networking (e.g., `docker/docker-compose-3pd-3store-3server.yml`), configuration is injected via environment variables instead of editing `application.yml` directly. 
Container hostnames are used instead of IP addresses: + +**pd0** container: +```bash +HG_PD_GRPC_HOST=pd0 +HG_PD_RAFT_ADDRESS=pd0:8610 +HG_PD_RAFT_PEERS_LIST=pd0:8610,pd1:8610,pd2:8610 +HG_PD_INITIAL_STORE_LIST=store0:8500,store1:8500,store2:8500 +``` + +**pd1** container: +```bash +HG_PD_GRPC_HOST=pd1 +HG_PD_RAFT_ADDRESS=pd1:8610 +HG_PD_RAFT_PEERS_LIST=pd0:8610,pd1:8610,pd2:8610 +HG_PD_INITIAL_STORE_LIST=store0:8500,store1:8500,store2:8500 +``` + +**pd2** container: +```bash +HG_PD_GRPC_HOST=pd2 +HG_PD_RAFT_ADDRESS=pd2:8610 +HG_PD_RAFT_PEERS_LIST=pd0:8610,pd1:8610,pd2:8610 +HG_PD_INITIAL_STORE_LIST=store0:8500,store1:8500,store2:8500 +``` + +See [docker/README.md](../docker/README.md) for the full environment variable reference. + ### Verify Deployment Check if PD is running: @@ -203,22 +233,25 @@ Build PD Docker image: ```bash # From project root -docker build -f hugegraph-pd/Dockerfile -t hugegraph-pd:latest . +docker build -f hugegraph-pd/Dockerfile -t hugegraph/pd:latest . # Run container docker run -d \ -p 8620:8620 \ -p 8686:8686 \ -p 8610:8610 \ - -v /path/to/conf:/hugegraph-pd/conf \ + -e HG_PD_GRPC_HOST= \ + -e HG_PD_RAFT_ADDRESS=:8610 \ + -e HG_PD_RAFT_PEERS_LIST=:8610 \ + -e HG_PD_INITIAL_STORE_LIST=:8500 \ -v /path/to/data:/hugegraph-pd/pd_data \ --name hugegraph-pd \ - hugegraph-pd:latest + hugegraph/pd:latest ``` For Docker Compose examples with HugeGraph Store and Server, see: ``` -hugegraph-server/hugegraph-dist/docker/example/ +docker/docker-compose-3pd-3store-3server.yml ``` ## Documentation diff --git a/hugegraph-pd/docs/configuration.md b/hugegraph-pd/docs/configuration.md index f66ddbd043..e3ae4f6f25 100644 --- a/hugegraph-pd/docs/configuration.md +++ b/hugegraph-pd/docs/configuration.md @@ -53,7 +53,7 @@ grpc: | Parameter | Type | Default | Description | |-----------|------|---------|-------------| -| `grpc.host` | String | `127.0.0.1` | **IMPORTANT**: Must be set to actual IP address (not `127.0.0.1`) for distributed deployments. 
Store and Server nodes connect to this address. | +| `grpc.host` | String | `127.0.0.1` | **IMPORTANT**: Must be set to actual IP address (not `127.0.0.1`) for distributed deployments. Store and Server nodes connect to this address. In Docker bridge networking, set this to the container hostname (e.g., `pd0`) via `HG_PD_GRPC_HOST` env var. | | `grpc.port` | Integer | `8686` | gRPC server port. Ensure this port is accessible from Store and Server nodes. | **Production Notes**: @@ -119,6 +119,31 @@ raft: peers-list: 192.168.1.10:8610,192.168.1.11:8610,192.168.1.12:8610 ``` +### Docker Bridge Network Deployment + +When deploying PD in Docker with bridge networking (e.g., `docker/docker-compose-3pd-3store-3server.yml`), container hostnames are used instead of IP addresses. Configuration is injected via `HG_PD_*` environment variables: + +```yaml +# pd0 — set via HG_PD_RAFT_ADDRESS and HG_PD_RAFT_PEERS_LIST env vars +raft: + address: pd0:8610 + peers-list: pd0:8610,pd1:8610,pd2:8610 + +# pd1 +raft: + address: pd1:8610 + peers-list: pd0:8610,pd1:8610,pd2:8610 + +# pd2 +raft: + address: pd2:8610 + peers-list: pd0:8610,pd1:8610,pd2:8610 +``` + +The `grpc.host` must also use the container hostname (e.g., `pd0`) set via `HG_PD_GRPC_HOST`. Do not use `127.0.0.1` or `0.0.0.0` in bridge networking mode. + +See [docker/README.md](../../docker/README.md) for the full environment variable reference. + ### PD Core Settings Controls PD-specific behavior. @@ -726,7 +751,7 @@ pd_partition_count 36.0 ### Pre-Deployment Checklist -- [ ] `grpc.host` set to actual IP address (not `127.0.0.1`) +- [ ] `grpc.host` set to actual IP address or container hostname (not `127.0.0.1`). For Docker bridge networking use container hostname via `HG_PD_GRPC_HOST` env var. 
- [ ] `raft.address` unique for each PD node - [ ] `raft.peers-list` identical on all PD nodes - [ ] `raft.peers-list` contains all PD node addresses diff --git a/hugegraph-server/README.md b/hugegraph-server/README.md index 597d412940..c145190bf0 100644 --- a/hugegraph-server/README.md +++ b/hugegraph-server/README.md @@ -9,3 +9,24 @@ HugeGraph Server consists of two layers of functionality: the graph engine layer - Storage Layer: - Storage Backend: Supports multiple built-in storage backends (RocksDB/Memory/HStore/HBase/...) and allows users to extend custom backends without modifying the existing source code. + +## Docker + +### Standalone Mode + +```bash +docker run -itd --name=hugegraph -p 8080:8080 hugegraph/hugegraph:1.7.0 +``` + +> Use release tags (e.g., `1.7.0`) for stable deployments. The `latest` tag is intended for testing or development only. + +### Distributed Mode (PD + Store + Server) + +For a full distributed deployment, use the compose file in the `docker/` directory at the repository root: + +```bash +cd docker +HUGEGRAPH_VERSION=1.7.0 docker compose -f docker-compose-3pd-3store-3server.yml up -d +``` + +See [docker/README.md](../docker/README.md) for the full setup guide. diff --git a/hugegraph-server/hugegraph-dist/docker/README.md b/hugegraph-server/hugegraph-dist/docker/README.md index 454d4ca24d..7ee88ab1fe 100644 --- a/hugegraph-server/hugegraph-dist/docker/README.md +++ b/hugegraph-server/hugegraph-dist/docker/README.md @@ -1,52 +1,52 @@ -# Deploy Hugegraph server with docker +# Deploy HugeGraph Server with Docker > Note: > -> 1. The docker image of hugegraph is a convenience release, not official distribution artifacts from ASF. You can find more details from [ASF Release Distribution Policy](https://infra.apache.org/release-distribution.html#dockerhub). +> 1. The HugeGraph Docker image is a convenience release, not an official ASF distribution artifact. 
See the [ASF Release Distribution Policy](https://infra.apache.org/release-distribution.html#dockerhub) for details. > -> 2. Recommend to use `release tag` (like `1.5.0`/`1.7.0`) for the stable version. Use `latest` tag to experience the newest functions in development. +> 2. Use release tags (for example, `1.7.0`) for stable deployments. Use `latest` only for development or testing. ## 1. Deploy -We can use docker to quickly start an inner HugeGraph server with RocksDB in the background. +Use Docker to quickly start a standalone HugeGraph Server with RocksDB. -1. Using docker run +1. Using `docker run` - Use `docker run -itd --name=graph -p 8080:8080 hugegraph/hugegraph:1.3.0` to start hugegraph server. + Use `docker run -itd --name=graph -p 8080:8080 hugegraph/hugegraph:1.7.0` to start hugegraph server. -2. Using docker compose +2. Using `docker compose` - Certainly we can only deploy server without other instance. Additionally, if we want to manage other HugeGraph-related instances with `server` in a single file, we can deploy HugeGraph-related instances via `docker-compose up -d`. The `docker-compose.yaml` is as below: + To deploy only the server, use `docker compose up -d`. The compose file is as follows: ```yaml version: '3' services: graph: - image: hugegraph/hugegraph:1.3.0 + image: hugegraph/hugegraph:1.7.0 ports: - 8080:8080 ``` ## 2. Create Sample Graph on Server Startup -If you want to **preload** some (test) data or graphs in container(by default), you can set the env `PRELOAD=ture` +To preload sample data on startup, set `PRELOAD=true`. -If you want to customize the preloaded data, please mount the groovy scripts (not necessary). +To customize the preload, mount your own Groovy script. -1. Using docker run +1. 
Using `docker run` - Use `docker run -itd --name=graph -p 8080:8080 -e PRELOAD=true -v /path/to/script:/hugegraph-server/scripts/example.groovy hugegraph/hugegraph:1.3.0` + Use `docker run -itd --name=graph -p 8080:8080 -e PRELOAD=true -v /path/to/script:/hugegraph-server/scripts/example.groovy hugegraph/hugegraph:1.7.0` to start hugegraph server. -2. Using docker compose +2. Using `docker compose` - We can also use `docker-compose up -d` to quickly start. The `docker-compose.yaml` is below. [example.groovy](https://github.com/apache/hugegraph/blob/master/hugegraph-server/hugegraph-dist/src/assembly/static/scripts/example.groovy) is a pre-defined script. If needed, we can mount a new `example.groovy` to preload different data: + Use `docker compose up -d` to start quickly. The compose file is below. [example.groovy](https://github.com/apache/hugegraph/blob/master/hugegraph-server/hugegraph-dist/src/assembly/static/scripts/example.groovy) is a predefined script. Replace it with your own script to preload different data: ```yaml version: '3' services: graph: - image: hugegraph/hugegraph:1.3.0 + image: hugegraph/hugegraph:1.7.0 environment: - PRELOAD=true volumes: @@ -55,25 +55,25 @@ If you want to customize the preloaded data, please mount the groovy scripts (no - 8080:8080 ``` -3. Using start-hugegraph.sh +3. Using `start-hugegraph.sh` - If you deploy HugeGraph server without docker, you can also pass arguments using `-p`, like this: `bin/start-hugegraph.sh -p true`. + If you deploy HugeGraph Server without Docker, you can also pass `-p true` to `bin/start-hugegraph.sh`. ## 3. Enable Authentication -1. Using docker run +1. Using `docker run` - Use `docker run -itd --name=graph -p 8080:8080 -e AUTH=true -e PASSWORD=xxx hugegraph/hugegraph:1.3.0` to enable the authentication and set the password with `-e AUTH=true -e PASSWORD=xxx`. + Use `docker run -itd --name=graph -p 8080:8080 -e AUTH=true -e PASSWORD=xxx hugegraph/hugegraph:1.7.0` to enable authentication. -2. 
Using docker compose +2. Using `docker compose` - Similarly, we can set the environment variables in the docker-compose.yaml: + Set the environment variables in the compose file: ```yaml version: '3' services: server: - image: hugegraph/hugegraph:1.3.0 + image: hugegraph/hugegraph:1.7.0 container_name: graph ports: - 8080:8080 @@ -82,31 +82,31 @@ If you want to customize the preloaded data, please mount the groovy scripts (no - PASSWORD=xxx ``` -## 4. Running Open-Telemetry-Collector +## 4. Run OpenTelemetry > CAUTION: > -> The `docker-compose-trace.yaml` utilizes `Grafana` and `Grafana-Tempo`, both of them are licensed under [AGPL-3.0](https://www.gnu.org/licenses/agpl-3.0.en.html), you should be aware of and use them with caution. Currently, we mainly provide this template for everyone to **test** +> The `docker-compose-trace.yaml` uses Grafana and Grafana Tempo, both of which are licensed under [AGPL-3.0](https://www.gnu.org/licenses/agpl-3.0.en.html). Use this template for testing only. > -1. Start Open-Telemetry-Collector +1. Start the OpenTelemetry collector ```bash - cd hugegraph-server/hugegraph-dist/docker/example - docker-compose -f docker-compose-trace.yaml -p hugegraph-trace up -d + # Run from the repository root + docker compose -f hugegraph-server/hugegraph-dist/docker/example/docker-compose-trace.yaml -p hugegraph-trace up -d ``` -2. Active Open-Telemetry-Agent +2. Enable the OpenTelemetry agent ```bash ./start-hugegraph.sh -y true ``` -3. Stop Open-Telemetry-Collector +3. Stop the OpenTelemetry collector ```bash - cd hugegraph-server/hugegraph-dist/docker/example - docker-compose -f docker-compose-trace.yaml -p hugegraph-trace stop + # Run from the repository root + docker compose -f hugegraph-server/hugegraph-dist/docker/example/docker-compose-trace.yaml -p hugegraph-trace stop ``` 4. 
References @@ -114,3 +114,19 @@ If you want to customize the preloaded data, please mount the groovy scripts (no - [What is OpenTelemetry](https://opentelemetry.io/docs/what-is-opentelemetry/) - [Tempo in Grafana](https://grafana.com/docs/tempo/latest/getting-started/tempo-in-grafana/) + +## 5. Distributed Cluster (PD + Store + Server) + +For a full distributed HugeGraph cluster with PD, Store, and Server, use the +3-node compose file in the `docker/` directory at the repository root. + +**Prerequisites**: Allocate at least **12 GB** memory to Docker Desktop +(Settings → Resources → Memory). The cluster runs 9 JVM processes. + +```bash +cd docker +HUGEGRAPH_VERSION=1.7.0 docker compose -f docker-compose-3pd-3store-3server.yml up -d +``` + +See [docker/README.md](../../../docker/README.md) for the full setup guide, +environment variable reference, and troubleshooting. diff --git a/hugegraph-store/AGENTS.md b/hugegraph-store/AGENTS.md index 97efa22fd7..8b5ef46bab 100644 --- a/hugegraph-store/AGENTS.md +++ b/hugegraph-store/AGENTS.md @@ -129,7 +129,7 @@ bin/restart-hugegraph-store.sh 2. HugeGraph Store cluster (3+ nodes) 3. Proper configuration pointing Store nodes to PD cluster -See Docker Compose example: `hugegraph-server/hugegraph-dist/docker/example/` +See Docker Compose examples in the repository root `../docker/` directory. Single-node quickstart (pre-built images): `../docker/docker-compose.yml`. Single-node dev build (from source): `../docker/docker-compose.dev.yml`. 3-node cluster: `../docker/docker-compose-3pd-3store-3server.yml`. See `../docker/README.md` for the full setup guide. 
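
Once the cluster is up, the fastest way to confirm that every Store node registered with PD is to query the PD REST API. The following is a hedged sketch (the `check_stores` helper and the `PD_REST` variable are illustrative names, not part of the project); it assumes PD REST is published on `localhost:8620` as in the 3-node compose file:

```bash
#!/usr/bin/env bash
# check_stores: verify that each expected Store node appears in PD's
# /v1/stores listing. Illustrative helper, not shipped with HugeGraph.
check_stores() {
  local pd_rest="${PD_REST:-http://localhost:8620}"
  local json
  json="$(curl -fsS --max-time 2 "$pd_rest/v1/stores" 2>/dev/null)" || {
    echo "PD not reachable at $pd_rest"
    return 1
  }
  local s missing=0
  for s in store0 store1 store2; do
    if printf '%s' "$json" | grep -q "$s"; then
      echo "registered: $s"
    else
      echo "MISSING:    $s"
      missing=1
    fi
  done
  return "$missing"
}

# Non-fatal when the cluster is still starting up:
check_stores || echo "stores not fully registered yet -- check: docker logs hg-pd0"
```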
## Configuration Files diff --git a/hugegraph-store/README.md b/hugegraph-store/README.md index 10f6e61587..223517525e 100644 --- a/hugegraph-store/README.md +++ b/hugegraph-store/README.md @@ -348,7 +348,7 @@ For development workflows and debugging, see [Development Guide](docs/developmen From the project root: ```bash -docker build -f hugegraph-store/Dockerfile -t hugegraph-store:latest . +docker build -f hugegraph-store/Dockerfile -t hugegraph/store:latest . ``` ### Run Container @@ -358,11 +358,12 @@ docker run -d \ -p 8520:8520 \ -p 8500:8500 \ -p 8510:8510 \ - -v /path/to/conf:/hugegraph-store/conf \ + -e HG_STORE_PD_ADDRESS=:8686 \ + -e HG_STORE_GRPC_HOST= \ + -e HG_STORE_RAFT_ADDRESS=:8510 \ -v /path/to/storage:/hugegraph-store/storage \ - -e PD_ADDRESS=192.168.1.10:8686,192.168.1.11:8686 \ --name hugegraph-store \ - hugegraph-store:latest + hugegraph/store:latest ``` **Exposed Ports**: @@ -375,9 +376,11 @@ docker run -d \ For a complete HugeGraph distributed deployment (PD + Store + Server), see: ``` -hugegraph-server/hugegraph-dist/docker/example/ +docker/docker-compose-3pd-3store-3server.yml ``` +See [docker/README.md](../docker/README.md) for the full setup guide. + For Docker and Kubernetes deployment details, see [Deployment Guide](docs/deployment-guide.md). --- diff --git a/hugegraph-store/docs/deployment-guide.md b/hugegraph-store/docs/deployment-guide.md index e92e99171f..de07904d64 100644 --- a/hugegraph-store/docs/deployment-guide.md +++ b/hugegraph-store/docs/deployment-guide.md @@ -672,163 +672,73 @@ curl http://localhost:8080/versions ### Docker Compose: Complete Cluster -File: `docker-compose.yml` +For a production-like 3-node distributed deployment, use the compose file at `docker/docker-compose-3pd-3store-3server.yml` in the repository root. See [docker/README.md](../../docker/README.md) for the full setup guide. + +> **Prerequisites**: Allocate at least **12 GB** memory to Docker Desktop (Settings → Resources → Memory). 
The cluster runs 9 JVM processes. + +```bash +cd docker +HUGEGRAPH_VERSION=1.7.0 docker compose -f docker-compose-3pd-3store-3server.yml up -d +``` + +The compose file uses a Docker bridge network (`hg-net`) with container hostnames for service discovery. Configuration is injected via environment variables using the `HG_*` prefix: + +**PD environment variables** (per node): + +```yaml +environment: + HG_PD_GRPC_HOST: pd0 # maps to grpc.host + HG_PD_GRPC_PORT: "8686" # maps to grpc.port + HG_PD_REST_PORT: "8620" # maps to server.port + HG_PD_RAFT_ADDRESS: pd0:8610 # maps to raft.address + HG_PD_RAFT_PEERS_LIST: pd0:8610,pd1:8610,pd2:8610 # maps to raft.peers-list + HG_PD_INITIAL_STORE_LIST: store0:8500,store1:8500,store2:8500 # maps to pd.initial-store-list + HG_PD_DATA_PATH: /hugegraph-pd/pd_data # maps to pd.data-path + HG_PD_INITIAL_STORE_COUNT: 3 # maps to pd.initial-store-count +``` + +**Store environment variables** (per node): + +```yaml +environment: + HG_STORE_PD_ADDRESS: pd0:8686,pd1:8686,pd2:8686 # maps to pdserver.address + HG_STORE_GRPC_HOST: store0 # maps to grpc.host + HG_STORE_GRPC_PORT: "8500" # maps to grpc.port + HG_STORE_REST_PORT: "8520" # maps to server.port + HG_STORE_RAFT_ADDRESS: store0:8510 # maps to raft.address + HG_STORE_DATA_PATH: /hugegraph-store/storage # maps to app.data-path +``` + +**Server environment variables**: ```yaml -version: '3.8' - -services: - # PD Cluster (3 nodes) - pd1: - image: hugegraph/hugegraph-pd:1.7.0 - container_name: hugegraph-pd1 - ports: - - "8686:8686" - - "8620:8620" - - "8610:8610" - environment: - - GRPC_HOST=pd1 - - RAFT_ADDRESS=pd1:8610 - - RAFT_PEERS=pd1:8610,pd2:8610,pd3:8610 - networks: - - hugegraph-net - - pd2: - image: hugegraph/hugegraph-pd:1.7.0 - container_name: hugegraph-pd2 - ports: - - "8687:8686" - environment: - - GRPC_HOST=pd2 - - RAFT_ADDRESS=pd2:8610 - - RAFT_PEERS=pd1:8610,pd2:8610,pd3:8610 - networks: - - hugegraph-net - - pd3: - image: hugegraph/hugegraph-pd:1.7.0 - container_name: 
hugegraph-pd3 - ports: - - "8688:8686" - environment: - - GRPC_HOST=pd3 - - RAFT_ADDRESS=pd3:8610 - - RAFT_PEERS=pd1:8610,pd2:8610,pd3:8610 - networks: - - hugegraph-net - - # Store Cluster (3 nodes) - store1: - image: hugegraph/hugegraph-store:1.7.0 - container_name: hugegraph-store1 - ports: - - "8500:8500" - - "8510:8510" - - "8520:8520" - environment: - - PD_ADDRESS=pd1:8686,pd2:8686,pd3:8686 - - GRPC_HOST=store1 - - RAFT_ADDRESS=store1:8510 - volumes: - - store1-data:/hugegraph-store/storage - depends_on: - - pd1 - - pd2 - - pd3 - networks: - - hugegraph-net - - store2: - image: hugegraph/hugegraph-store:1.7.0 - container_name: hugegraph-store2 - ports: - - "8501:8500" - environment: - - PD_ADDRESS=pd1:8686,pd2:8686,pd3:8686 - - GRPC_HOST=store2 - - RAFT_ADDRESS=store2:8510 - volumes: - - store2-data:/hugegraph-store/storage - depends_on: - - pd1 - - pd2 - - pd3 - networks: - - hugegraph-net - - store3: - image: hugegraph/hugegraph-store:1.7.0 - container_name: hugegraph-store3 - ports: - - "8502:8500" - environment: - - PD_ADDRESS=pd1:8686,pd2:8686,pd3:8686 - - GRPC_HOST=store3 - - RAFT_ADDRESS=store3:8510 - volumes: - - store3-data:/hugegraph-store/storage - depends_on: - - pd1 - - pd2 - - pd3 - networks: - - hugegraph-net - - # Server (2 nodes) - server1: - image: hugegraph/hugegraph:1.7.0 - container_name: hugegraph-server1 - ports: - - "8080:8080" - environment: - - BACKEND=hstore - - PD_PEERS=pd1:8686,pd2:8686,pd3:8686 - depends_on: - - store1 - - store2 - - store3 - networks: - - hugegraph-net - - server2: - image: hugegraph/hugegraph:1.7.0 - container_name: hugegraph-server2 - ports: - - "8081:8080" - environment: - - BACKEND=hstore - - PD_PEERS=pd1:8686,pd2:8686,pd3:8686 - depends_on: - - store1 - - store2 - - store3 - networks: - - hugegraph-net - -networks: - hugegraph-net: - driver: bridge - -volumes: - store1-data: - store2-data: - store3-data: +environment: + HG_SERVER_BACKEND: hstore # maps to backend + HG_SERVER_PD_PEERS: 
pd0:8686,pd1:8686,pd2:8686 # maps to pd.peers + STORE_REST: store0:8520 # used by wait-partition.sh ``` +**Startup ordering** is enforced via `depends_on` with `condition: service_healthy`: +1. PD nodes start first and must pass healthchecks (`/v1/health`) +2. Store nodes start after all PD nodes are healthy +3. Server nodes start after all Store nodes are healthy + +> **Note**: The deprecated env var names (`GRPC_HOST`, `RAFT_ADDRESS`, `RAFT_PEERS`, `PD_ADDRESS`, `BACKEND`, `PD_PEERS`) still work but log a warning. Use the `HG_*` prefixed names for new deployments. + **Deploy**: ```bash -# Start cluster -docker-compose up -d +# Start cluster (run from the docker/ directory) +HUGEGRAPH_VERSION=1.7.0 docker compose -f docker-compose-3pd-3store-3server.yml up -d # Check status -docker-compose ps +docker ps # View logs -docker-compose logs -f store1 +docker logs hg-store0 # Stop cluster -docker-compose down +docker compose -f docker-compose-3pd-3store-3server.yml down ``` --- @@ -876,7 +786,7 @@ spec: spec: containers: - name: store - image: hugegraph/hugegraph-store:1.7.0 + image: hugegraph/store:1.7.0 ports: - containerPort: 8500 name: grpc @@ -889,11 +799,11 @@ spec: valueFrom: fieldRef: fieldPath: metadata.name - - name: PD_ADDRESS + - name: HG_STORE_PD_ADDRESS value: "hugegraph-pd-0.hugegraph-pd:8686,hugegraph-pd-1.hugegraph-pd:8686,hugegraph-pd-2.hugegraph-pd:8686" - - name: GRPC_HOST + - name: HG_STORE_GRPC_HOST value: "$(POD_NAME).hugegraph-store" - - name: RAFT_ADDRESS + - name: HG_STORE_RAFT_ADDRESS value: "$(POD_NAME).hugegraph-store:8510" volumeMounts: - name: data
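
The deprecated-alias behavior noted above (old env var names still work but log a warning) follows a common entrypoint pattern: if only the old variable is set, copy its value to the new name and warn. This is a hypothetical sketch of that pattern, not the actual entrypoint code shipped in the images:

```bash
#!/usr/bin/env bash
# Hypothetical sketch of a deprecated-alias fallback, NOT the real
# entrypoint: copy the old variable to the new name only when the new
# name is unset, and emit a deprecation warning on stderr.
fallback() {
  local new="$1" old="$2"
  if [ -z "${!new:-}" ] && [ -n "${!old:-}" ]; then
    export "$new=${!old}"
    echo "WARN: $old is deprecated, use $new instead" >&2
  fi
}

fallback HG_STORE_PD_ADDRESS   PD_ADDRESS
fallback HG_STORE_GRPC_HOST    GRPC_HOST
fallback HG_STORE_RAFT_ADDRESS RAFT_ADDRESS
```

A variable set under its new `HG_*` name always wins; the fallback only fires when the new name is absent, which matches the "still work but log a warning" behavior described for the real images.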