Community fixes: bot-runtime secret, configurable gateway upstreams, dynamic AI models#18

Merged
DavidsonGomes merged 6 commits into main from develop on Apr 16, 2026

Conversation

gomessguii (Member) commented Apr 16, 2026

Summary

Accumulates the fixes committed on `develop` since the last merge to `main`:

  • fix(compose): default `BOT_RUNTIME_SECRET` on evo-bot-runtime — the bot-runtime was reading the secret only from `.env`, so stale `.env` files left it blank and every CRM → bot-runtime event failed `pipeline.auth.unauthorized`. Adds the same `${BOT_RUNTIME_SECRET:-evo-bot-runtime-dev-secret}` default already used by `evo-crm` and `evo-crm-sidekiq`.
  • feat(gateway): make upstream service names configurable via env — replaces the five hardcoded backend hostnames in `nginx/default.conf.template` with `${AUTH_UPSTREAM}`, `${CRM_UPSTREAM}`, `${CORE_UPSTREAM}`, `${PROCESSOR_UPSTREAM}`, `${BOT_RUNTIME_UPSTREAM}`. Defaults match the previous hardcoded values so existing deployments keep working; stacks that rename services (e.g. `evocrm_auth` instead of `evo_auth`) can now override without a custom image.
  • ci(gateway): build on pushes to develop — adds `develop` to the gateway publish workflow with tags `:develop` and `:develop-`. `:latest` stays reserved for `main`/tag builds.
  • docs(gateway): new `nginx/README.md` documenting the upstream env vars plus a commented block in `docker-compose.swarm.yaml` pointing at it.
  • chore: remove dependabot config — the monorepo only wraps submodule pointers; dependabot at this layer never produced useful PRs.
  • feat: load AI models dynamically from the provider's API — bumps `evo-ai-core-service-community` and `evo-ai-frontend-community` submodule tips to the matching PRs so the agent wizard queries the provider for its model list instead of using a stale hardcoded catalog.

Linked submodule PRs

  • evo-ai-core-service-community#1
  • evo-ai-frontend-community#7

Test plan

  • CRM → bot-runtime events succeed on a fresh `.env` copied from an older template (no `BOT_RUNTIME_SECRET` set)
  • Gateway with default envs routes the same as before (`evo_auth`, `evo_crm`, …)
  • Gateway with `AUTH_UPSTREAM=evocrm_auth:3001` etc. routes a prefixed stack without rebuild
  • Reproduced a colleague's 502 scenario on a local Swarm stack with `evocrm_*` names — the fix resolved it
  • Agent wizard shows GPT-5 family live from the OpenAI API
  • Unsupported provider falls back to hardcoded list

Summary by Sourcery

Update the API gateway configuration to support configurable upstream service targets, adjust related deployment and CI settings, and align runtime defaults and submodules with recent community fixes.

New Features:

  • Allow configuring gateway upstream backend services via environment variables with sensible defaults.
  • Load AI models dynamically from the provider API by updating core service and frontend community submodules.

Bug Fixes:

  • Set a default BOT_RUNTIME_SECRET for the bot runtime service to prevent authorization failures when the env var is missing.

Enhancements:

  • Document the gateway architecture and upstream configuration in a new nginx README and annotate swarm compose with guidance for overriding upstream targets.

Build:

  • Adjust the gateway Dockerfile to use nginx template-based configuration with envsubst-filtered upstream variables.

CI:

  • Extend the gateway publish workflow to build and tag images from the develop branch.

Documentation:

  • Add nginx/README.md describing gateway routing, configurable upstreams, and build details.

Chores:

  • Remove the unused Dependabot configuration from the repository.

Commit messages

fix(compose): default `BOT_RUNTIME_SECRET` on evo-bot-runtime

Align evo-bot-runtime with evo-crm and evo-crm-sidekiq, which already
fall back to the shared dev secret when BOT_RUNTIME_SECRET is absent
from .env. Previously the bot-runtime depended solely on env_file, so a
.env copied from an older template left it with an empty secret — every
event from the CRM then failed the X-Bot-Runtime-Secret check with
pipeline.auth.unauthorized and the AI silently never replied.
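
The compose change amounts to one line on the evo-bot-runtime service. A minimal sketch (surrounding keys abbreviated, other environment entries omitted):

```yaml
services:
  evo-bot-runtime:
    environment:
      # Same fallback pattern already used by evo-crm and evo-crm-sidekiq:
      # an unset or empty BOT_RUNTIME_SECRET resolves to the shared dev secret.
      BOT_RUNTIME_SECRET: ${BOT_RUNTIME_SECRET:-evo-bot-runtime-dev-secret}
```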
feat(gateway): make upstream service names configurable via env

The gateway nginx config had the five backend service names (evo_auth,
evo_crm, evo_core, evo_processor, evo_bot_runtime) baked into the image,
which broke any deployment that uses a different naming convention —
requests would hit DNS-NOT-FOUND upstreams and surface as 502s.

Replace the hardcoded host:port pairs with ${AUTH_UPSTREAM},
${CRM_UPSTREAM}, ${CORE_UPSTREAM}, ${PROCESSOR_UPSTREAM} and
${BOT_RUNTIME_UPSTREAM}, rendered at container start by the stock
nginx-image envsubst entrypoint. Defaults in the Dockerfile match the
previous hardcoded values exactly, so deployments that follow the
reference stack keep working with zero changes.

NGINX_ENVSUBST_FILTER is scoped to the five *_UPSTREAM names so the
rendering pass does not touch nginx's own runtime variables like $host
or $request_uri.
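
A sketch of what the template-based config looks like — the route, upstream layout, and port are illustrative, not the exact file in this PR (the auth port follows the `evo_auth:3001` default mentioned in the test plan):

```nginx
# default.conf.template (sketch) — the stock nginx entrypoint runs envsubst
# over this file at container start, replacing only ${AUTH_UPSTREAM} etc.
# because NGINX_ENVSUBST_FILTER is scoped to the *_UPSTREAM names.
server {
    listen 80;

    location /auth/ {
        # Rendered at start to e.g. http://evo_auth:3001, or to whatever
        # AUTH_UPSTREAM is overridden to in a renamed stack.
        proxy_pass http://${AUTH_UPSTREAM};
        # Native nginx variables survive the rendering pass untouched.
        proxy_set_header Host $host;
        proxy_set_header X-Request-Uri $request_uri;
    }
}
```

The Dockerfile defaults would then take the form `ENV AUTH_UPSTREAM=evo_auth:3001`, one per service, matching the previously hardcoded values.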
ci(gateway): build on pushes to develop

Adds develop to the trigger branches and emits evoapicloud/evo-crm-gateway:develop
plus :develop-<sha> so pre-release gateway builds can be pulled and
tested before cutting a tag. The main and tag paths are unchanged and
:latest stays reserved for main/tag builds.
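
A hedged sketch of the workflow shape — job and step names are illustrative; only the trigger branches and the develop tag pair reflect this PR:

```yaml
# .github/workflows/gateway-publish.yml (sketch)
on:
  push:
    branches: [main, develop]   # develop is the new trigger branch
    tags: ["v*"]                # illustrative tag pattern

jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - name: Compute image tags
        id: tags
        run: |
          if [ "$GITHUB_REF_NAME" = "develop" ]; then
            # Pre-release builds: a moving :develop plus an immutable :develop-<sha>
            echo "tags=evoapicloud/evo-crm-gateway:develop,evoapicloud/evo-crm-gateway:develop-${GITHUB_SHA::7}" >> "$GITHUB_OUTPUT"
          fi
      # build-and-push steps for main/tag builds (:latest) are unchanged
```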
docs(gateway): new `nginx/README.md`

Adds nginx/README.md with the full upstream table, default values, the
renaming scenario that motivated the feature, and verification steps.
Also leaves a commented-out override block inline in the swarm stack
(plus a note in the gateway service) so operators who rename the backend
services see the escape hatch without having to dig into the nginx image.
chore: remove dependabot config

The monorepo wraps git submodules whose own repos already run their own
dependency updates. Running dependabot at this layer only produced PRs
against committed submodule pointers, never the submodules themselves,
which is not useful.
feat: load AI models dynamically from the provider's API

Bumps evo-ai-core-service-community and evo-ai-frontend-community to
their feat/dynamic-ai-models tips. Together the two bumps wire the
agent model picker to GET /api/v1/agents/apikeys/:id/models, which
queries the provider's own models endpoint (OpenAI, Anthropic, Gemini,
OpenRouter, DeepSeek, Together AI, Fireworks AI) with the stored API
key. The frontend falls back to its hardcoded catalog for providers
without a public listing endpoint or if the call fails, so nothing
regresses for existing stacks.

This ends the churn where the hardcoded model list had to be edited
every time a provider shipped a new family (the immediate trigger was
GPT-5 not appearing).
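
From a client's point of view the new flow can be exercised with a one-liner. The endpoint path is the one wired by the submodule PRs; the gateway URL, API-key id, and fallback entries are placeholders, not values from this PR:

```shell
# Query the provider-backed model list via the new endpoint; fall back to a
# small built-in catalog when the call fails or the provider has no listing
# endpoint — mirroring the frontend's behaviour.
GATEWAY_URL="${GATEWAY_URL:-http://localhost:8080}"
models=$(curl -sf "$GATEWAY_URL/api/v1/agents/apikeys/${APIKEY_ID:-1}/models") \
  || models='["gpt-4o","claude-sonnet-4"]'   # illustrative fallback catalog
echo "$models"
```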
sourcery-ai bot commented Apr 16, 2026

Reviewer's Guide

Makes the gateway’s upstream backends configurable via environment variables, adds CI publishing for develop builds, fixes the bot-runtime secret default, upgrades AI-related submodules for dynamic model loading, documents the gateway configuration, and removes an unused Dependabot config.

Sequence diagram for dynamic AI model loading in the agent wizard

```mermaid
sequenceDiagram
  actor User as AgentWizardUser
  participant FE as Frontend_agent_wizard
  participant CORE as Core_AI_service
  participant API as Provider_API

  User->>FE: Open_agent_wizard
  FE->>CORE: GET /models?provider=current_provider
  CORE->>API: Provider_specific_models_request
  API-->>CORE: Models_list
  CORE-->>FE: Models_list
  FE-->>User: Render_live_model_catalog

  alt Unsupported_provider
    FE->>CORE: GET /models?provider=unsupported_provider
    CORE-->>FE: Fallback_hardcoded_model_list
    FE-->>User: Render_fallback_model_catalog
  end
```

File-Level Changes

Make nginx gateway upstream services configurable via environment variables and use Docker's template-based config rendering.

  • Switch gateway image to use /etc/nginx/templates/default.conf.template instead of a static nginx.conf
  • Introduce AUTH_UPSTREAM, CRM_UPSTREAM, CORE_UPSTREAM, PROCESSOR_UPSTREAM, BOT_RUNTIME_UPSTREAM env vars with defaults matching the previous hardcoded host:port values
  • Use envsubst-filtered ${*_UPSTREAM} values inside the nginx config template to define per-service upstream variables
  • Restrict envsubst via NGINX_ENVSUBST_FILTER so only upstream env vars are substituted, leaving native nginx $variables untouched

Files: nginx/Dockerfile, nginx/default.conf.template

Expose gateway upstream configurability to Swarm deployments and document how to use it.

  • Add a commented environment block on the evo_gateway service describing how and when to override *_UPSTREAM variables
  • Reference the new nginx gateway README from compose comments for further details
  • Provide a detailed README for the nginx gateway describing the routing matrix, upstream env vars, override scenarios, and verification steps

Files: docker-compose.swarm.yaml, nginx/README.md

Update the CI pipeline to publish gateway images for the develop branch with dedicated tags.

  • Extend the GitHub Actions trigger to run the gateway publish workflow on pushes to develop in addition to main and tags
  • Add tagging logic to emit :develop and :develop- image tags when building from develop

Files: .github/workflows/gateway-publish.yml

Ensure evo-bot-runtime has a sane default BOT_RUNTIME_SECRET so it works even with older .env files.

  • Inject BOT_RUNTIME_SECRET into the evo-bot-runtime service environment with a default fallback matching other services
  • Align secret handling so a missing BOT_RUNTIME_SECRET no longer causes pipeline.auth.unauthorized failures for CRM → bot-runtime events

Files: docker-compose.yml

Pull in new AI core and frontend submodule revisions that support dynamic model loading from provider APIs.

  • Advance the evo-ai-core-service-community submodule to a revision that queries providers for available models at runtime
  • Advance the evo-ai-frontend-community submodule so the agent wizard uses the provider's live model list with a hardcoded fallback for unsupported providers

Files: evo-ai-core-service-community, evo-ai-frontend-community

Remove obsolete Dependabot configuration at the monorepo wrapper level.

  • Delete the root-level Dependabot configuration file that was not providing useful updates for this submodule-based repo

Files: .github/dependabot.yml

@sourcery-ai sourcery-ai bot left a comment

Hey - I've found 2 issues

Prompt for AI Agents
Please address the comments from this code review:

## Individual Comments

### Comment 1
<location path="docker-compose.yml" line_range="248" />
<code_context>
       LISTEN_ADDR: 0.0.0.0:8080
       REDIS_URL: redis://:${REDIS_PASSWORD:-evoai_redis_pass}@redis:6379
       AI_PROCESSOR_URL: http://evo-processor:8000
+      BOT_RUNTIME_SECRET: ${BOT_RUNTIME_SECRET:-evo-bot-runtime-dev-secret}
     depends_on:
       redis:
</code_context>
<issue_to_address>
**🚨 issue (security):** The default BOT_RUNTIME_SECRET value is predictable and might be unsafe if reused beyond local dev.

A hard-coded default secret is fine for local dev, but if this compose file is ever used in shared or production-like environments the value becomes trivial to guess. Consider requiring it to be explicitly set (`${BOT_RUNTIME_SECRET:?must_be_set}`) or clearly marking this default as local-only and unsafe to use elsewhere.
</issue_to_address>
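
The reviewer's `${VAR:?...}` form uses the same POSIX parameter expansion that Compose interpolation supports. A quick shell illustration of the two behaviours (message text follows the reviewer's example):

```shell
unset BOT_RUNTIME_SECRET
# Soft default (what this PR ships): silently falls back when unset/empty.
echo "${BOT_RUNTIME_SECRET:-evo-bot-runtime-dev-secret}"
# Hard requirement (the reviewer's alternative): expansion aborts with the
# given message instead of proceeding with a guessable secret.
( echo "${BOT_RUNTIME_SECRET:?must_be_set}" ) 2>/dev/null || echo "startup blocked"
```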

### Comment 2
<location path="nginx/README.md" line_range="41-42" />
<code_context>
+
+### When you need to override
+
+If your deployment renames any of the backend services (e.g. applying a
+`evocrm_` prefix, or shortening to `auth`/`crm`/…) the gateway cannot
+resolve the default hostnames and every proxied request returns **502 Bad
+Gateway**. Set the matching `*_UPSTREAM` env vars on the gateway service
</code_context>
<issue_to_address>
**suggestion (typo):** Use "an" instead of "a" before `evocrm_` for correct article usage.

Because `evocrm_` begins with a vowel sound, this should be "applying an `evocrm_` prefix" for correct grammar.

```suggestion
If your deployment renames any of the backend services (e.g. applying an
`evocrm_` prefix, or shortening to `auth`/`crm`/…) the gateway cannot
```
</issue_to_address>


@DavidsonGomes DavidsonGomes merged commit 3735857 into main Apr 16, 2026
2 of 3 checks passed