@claude review
```python filename="__main__.py"
import hypercorn
import asyncio
import restate

from langfuse import get_client
from opentelemetry import trace as trace_api
from openinference.instrumentation import OITracer, TraceConfig
from agents import set_trace_processors

from utils.tracing import RestateTracingProcessor
from agent import claim_service

# Initialize Langfuse (sets up the global OTel tracer provider + exporter)
langfuse = get_client()
tracer = OITracer(
    trace_api.get_tracer("openinference.openai_agents"), config=TraceConfig()
)
set_trace_processors([RestateTracingProcessor(tracer)])

if __name__ == "__main__":
    app = restate.app(services=[claim_service])

    conf = hypercorn.Config()
    conf.bind = ["0.0.0.0:9080"]
    asyncio.run(hypercorn.asyncio.serve(app, conf))
```
🔴 The `__main__.py` example imports `hypercorn` but then calls `hypercorn.asyncio.serve(app, conf)`, which will raise `AttributeError: module 'hypercorn' has no attribute 'asyncio'` at runtime. Fix by adding `import hypercorn.asyncio` explicitly, or replace with `from hypercorn.asyncio import serve` and call `serve(app, conf)`.
Extended reasoning...
What the bug is and how it manifests
In Python, importing a top-level package does not automatically import its subpackages or submodules. The `__main__.py` code block uses `import hypercorn` but then accesses `hypercorn.asyncio.serve(app, conf)`. When Python executes `import hypercorn`, only `hypercorn/__init__.py` is loaded. The `asyncio` submodule is not populated on the `hypercorn` module object unless it is explicitly imported or `hypercorn/__init__.py` itself imports it.
The specific code path that triggers it
The problematic pattern in the example:
```python
import hypercorn  # Only loads hypercorn/__init__.py
...
asyncio.run(hypercorn.asyncio.serve(app, conf))  # AttributeError here
```

At runtime, Python evaluates `hypercorn.asyncio` as an attribute lookup on the `hypercorn` module. Since `hypercorn.asyncio` was never imported, Python cannot find the attribute and raises `AttributeError`.
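The submodule-import behavior the review describes can be reproduced without installing hypercorn at all. The sketch below builds a throwaway package on disk (the names `demopkg` and `sub` are illustrative, not from the guide) and triggers the same failure mode, then shows the explicit-import fix:

```python
import os
import sys
import tempfile

# Build a minimal package: demopkg/__init__.py (empty) and demopkg/sub.py.
tmp = tempfile.mkdtemp()
pkg = os.path.join(tmp, "demopkg")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "sub.py"), "w") as f:
    f.write("def serve():\n    return 'serving'\n")

sys.path.insert(0, tmp)

import demopkg  # Only runs demopkg/__init__.py; demopkg.sub is NOT loaded.

try:
    demopkg.sub.serve()  # Same failure mode as hypercorn.asyncio.serve(...)
except AttributeError as e:
    print(e)  # module 'demopkg' has no attribute 'sub'

import demopkg.sub  # Explicit submodule import binds the attribute.
print(demopkg.sub.serve())  # serving
```

The `import demopkg.sub` line is exactly the `import hypercorn.asyncio` fix proposed below, just on a toy package.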
Why existing code does not prevent it
The only guard would be if `hypercorn/__init__.py` explicitly imported its `asyncio` submodule. Hypercorn is an ASGI server with multiple async backends (asyncio and trio), making it unlikely that it auto-imports all backends unconditionally in its `__init__.py`. The standard, documented usage pattern for hypercorn consistently uses an explicit import: `from hypercorn.asyncio import serve`.
Impact
Any developer copying this code snippet verbatim will encounter a runtime AttributeError when starting their server. This is the final step of the integration guide, meaning the code will appear to run (all imports succeed) but crash at the asyncio.run(...) call - a confusing failure point.
How to fix it
Either add an explicit submodule import:
```python
import hypercorn
import hypercorn.asyncio
```

Or, more idiomatically (matching hypercorn's own documentation):

```python
from hypercorn.asyncio import serve
...
asyncio.run(serve(app, conf))
```

Step-by-step proof
- User reads the guide and copies `__main__.py` as written.
- User runs `python __main__.py`.
- Python executes `import hypercorn`: success, `hypercorn/__init__.py` runs.
- Python executes `import asyncio`: success.
- Python reaches `asyncio.run(hypercorn.asyncio.serve(app, conf))`.
- Python evaluates `hypercorn.asyncio` as an attribute lookup on the module.
- Since `hypercorn.asyncio` was never imported, this attribute does not exist.
- Python raises `AttributeError: module 'hypercorn' has no attribute 'asyncio'`.
- The server never starts.
```bash filename=".env"
LANGFUSE_PUBLIC_KEY=pk-lf-...
LANGFUSE_SECRET_KEY=sk-lf-...
LANGFUSE_HOST=https://cloud.langfuse.com
OPENAI_API_KEY=sk-proj-...
```
🔴 The `.env` block on line 34 uses `LANGFUSE_HOST` but the correct Langfuse SDK environment variable is `LANGFUSE_BASE_URL`. Users who self-host or use a regional endpoint will set `LANGFUSE_HOST`, which the SDK silently ignores, causing all traces to be sent to cloud.langfuse.com instead of their configured endpoint with no error or warning.
Extended reasoning...
Bug: Wrong Environment Variable Name (LANGFUSE_HOST should be LANGFUSE_BASE_URL)
What the bug is and how it manifests:
In the .env configuration block (line 34), the guide instructs users to set LANGFUSE_HOST=https://cloud.langfuse.com. The Langfuse Python SDK does not recognize LANGFUSE_HOST as a valid environment variable -- it looks for LANGFUSE_BASE_URL. As a result, if a user sets only LANGFUSE_HOST, the SDK silently ignores it and defaults to https://cloud.langfuse.com.
The specific code path that triggers it:
The guide uses the new-style v3+ Python SDK API (from langfuse import get_client), which reads LANGFUSE_BASE_URL to determine the host. When get_client() is called in __main__.py, it reads environment variables. Since LANGFUSE_HOST is not recognized, any custom host value is silently dropped. The official SDK docs confirm: configure the host argument or the LANGFUSE_BASE_URL environment variable, and the v2-to-v3 upgrade guide explicitly states the Langfuse base URL environment variable is now LANGFUSE_BASE_URL.
Why existing code does not prevent it:
The SDK does not emit a warning when LANGFUSE_HOST is set but LANGFUSE_BASE_URL is absent. The call to get_client() succeeds regardless, defaulting to the cloud endpoint. There is no validation error or log message to alert the user that their configured host is being ignored.
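Why the failure is silent is easy to see in miniature. The sketch below is not the SDK's actual code; `resolve_base_url` is a hypothetical stand-in for config resolution, assuming (per the v3 docs) that only `LANGFUSE_BASE_URL` is consulted:

```python
DEFAULT_BASE_URL = "https://cloud.langfuse.com"

def resolve_base_url(env: dict) -> str:
    # Illustrative stand-in for the SDK's config resolution:
    # only LANGFUSE_BASE_URL is consulted; LANGFUSE_HOST is never read.
    return env.get("LANGFUSE_BASE_URL", DEFAULT_BASE_URL)

# User followed this guide and set the wrong variable name:
print(resolve_base_url({"LANGFUSE_HOST": "https://langfuse.internal.company.com"}))
# -> https://cloud.langfuse.com  (custom host silently ignored)

# With the correct variable name, the custom endpoint is used:
print(resolve_base_url({"LANGFUSE_BASE_URL": "https://langfuse.internal.company.com"}))
# -> https://langfuse.internal.company.com
```

Because the lookup falls back to a valid default rather than failing, there is nothing for the SDK to warn about: from its perspective, no host was ever configured.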
Impact:
This disproportionately affects self-hosting users and those using EU/US regional endpoints. A developer following this guide who self-hosts Langfuse would set LANGFUSE_HOST=https://langfuse.mycompany.com, see the workflow run successfully, and then be confused why traces appear on cloud.langfuse.com -- or fail to appear anywhere if they have no cloud account. The root cause is not obvious since no error is surfaced.
How to fix it:
Change LANGFUSE_HOST to LANGFUSE_BASE_URL on line 34. This is consistent with every other modern framework integration guide in this repo (openai-agents.mdx, claude-agent-sdk.mdx, autogen.mdx, haystack.mdx, temporal.mdx, smolagents.mdx, etc.) and matches the official SDK documentation.
Step-by-step proof:
- User self-hosts Langfuse at `https://langfuse.internal.company.com`.
- User follows this guide and sets `LANGFUSE_HOST=https://langfuse.internal.company.com` in `.env`.
- `__main__.py` calls `get_client()`; the SDK reads environment variables looking for `LANGFUSE_BASE_URL`, finds it absent, and silently defaults to `https://cloud.langfuse.com`.
- Workflow runs; traces are exported to `cloud.langfuse.com` instead of the user's self-hosted instance.
- User sees no error, but traces do not appear in their Langfuse instance; the root cause is entirely non-obvious.
```python
from langfuse import get_client

langfuse = get_client()

def fetch_prompt() -> str:
    prompt = langfuse.get_prompt("claim-agent", type="text")
    return prompt.compile()

# Durably journaled — same prompt is used on retries
prompt = await ctx.run_typed("Fetch prompt", fetch_prompt)
```
🟡 The Prompt Management code snippet (lines 140-151) calls `await ctx.run_typed(...)` where `ctx` is never defined in the snippet scope, causing `NameError: name 'ctx' is not defined` for users copying it verbatim. Additionally, `await` appears at module top level outside any `async def`, which is a `SyntaxError` in a regular Python script. The snippet should be wrapped in a `@claim_service.handler()` decorated `async def run(ctx: restate.Context, ...)` function, matching the pattern shown in section 3.
Extended reasoning...
What the bug is and how it manifests
The Prompt Management section ends with a standalone code snippet that uses await ctx.run_typed("Fetch prompt", fetch_prompt) (line 150). The variable ctx does not appear anywhere in the snippet — it is not imported, assigned, or passed as a parameter. Any user who copies this snippet and runs it will immediately encounter NameError: name ctx is not defined.
A second independent error exists on the same line: `prompt = await ctx.run_typed(...)` is written at module top level, outside any `async def` function. In a Python script (`.py` file), `await` outside an async function is a `SyntaxError` on every Python version. (Top-level `await` is only valid in certain interactive contexts, such as the `python -m asyncio` REPL or Jupyter notebooks, not in regular scripts.)
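This can be verified with `compile()`, which performs the same parse step the interpreter does when running a script. A small self-contained check, using the snippet's own statement as the source string:

```python
# `await` outside an async function fails at parse time in a regular
# script, regardless of Python version.
src = "prompt = await ctx.run_typed('Fetch prompt', fetch_prompt)\n"

try:
    compile(src, "<agent.py>", "exec")
    raised = False
except SyntaxError:
    raised = True
print("top-level await is a SyntaxError:", raised)  # True

# The identical statement compiles once wrapped in an async function;
# ctx then arrives as a parameter instead of a missing global.
wrapped = (
    "async def run(ctx):\n"
    "    prompt = await ctx.run_typed('Fetch prompt', fetch_prompt)\n"
)
compile(wrapped, "<agent.py>", "exec")  # no exception raised
print("wrapped version compiles fine")
```

Note that `compile()` only proves the parse-time failure; the `NameError` on `ctx` is a separate runtime issue that the wrapping also fixes.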
The specific code path that triggers it
The offending snippet is the final code block in the file (lines 140-151):

```python
from langfuse import get_client

langfuse = get_client()

def fetch_prompt() -> str:
    prompt = langfuse.get_prompt("claim-agent", type="text")
    return prompt.compile()

# Durably journaled — same prompt is used on retries
prompt = await ctx.run_typed("Fetch prompt", fetch_prompt)
```

There is no surrounding function, no `ctx` parameter, and no `async def`.
Why existing code does not prevent it
Section 3 of the guide correctly shows ctx passed as a parameter to a @claim_service.handler() decorated async def run(ctx: restate.Context, req: ClaimDocument) -> str: function. However, the Prompt Management snippet is presented as a separate, self-contained block that drops this framing entirely. Nothing in the snippet signals that ctx must come from a surrounding handler.
What the impact would be
Users following the Prompt Management section and copying the snippet to integrate Langfuse prompt fetching will hit a `SyntaxError` as soon as the script is parsed, or, in interactive contexts that permit top-level `await`, a `NameError` at runtime. This makes the example completely non-functional as written.
How to fix it
Wrap the `await ctx.run_typed(...)` call inside a handler function, showing the full pattern:

```python
from langfuse import get_client

langfuse = get_client()

def fetch_prompt() -> str:
    prompt = langfuse.get_prompt("claim-agent", type="text")
    return prompt.compile()

@claim_service.handler()
async def run(ctx: restate.Context, req: ClaimDocument) -> str:
    # Durably journaled — same prompt is used on retries
    prompt = await ctx.run_typed("Fetch prompt", fetch_prompt)
    ...
```

Step-by-step proof
- User reads the Prompt Management section and copies the code block verbatim into `agent.py`.
- They run `python agent.py`.
- Python parses the file and encounters `await ctx.run_typed(...)` at module top level; in a script this is immediately `SyntaxError: 'await' outside function`, before any code runs.
- Even in interactive contexts where top-level `await` is allowed (such as Jupyter), the interpreter evaluates `ctx`, finds it undefined in module scope, and raises `NameError: name 'ctx' is not defined`.
- The user cannot proceed without understanding that this snippet must be embedded inside a `@claim_service.handler()` async function, which is never stated in this section.
Summary