
Conversation


@XG-xin XG-xin commented Dec 3, 2025

What does this PR do?

Add reasoning tokens metric
MLOB-4745



github-actions bot commented Dec 3, 2025

Overall package size

Self size: 13.46 MB
Deduped: 113.66 MB
No deduping: 128.67 MB

Dependency sizes

| name | version | self size | total size |
|------|---------|-----------|------------|
| @datadog/libdatadog | 0.7.0 | 35.02 MB | 35.02 MB |
| @datadog/native-appsec | 10.3.0 | 20.73 MB | 20.74 MB |
| @datadog/pprof | 5.12.0 | 11.19 MB | 11.57 MB |
| @datadog/native-iast-taint-tracking | 4.1.0 | 9.01 MB | 9.02 MB |
| @opentelemetry/resources | 1.30.1 | 557.67 kB | 7.71 MB |
| @opentelemetry/core | 1.30.1 | 908.66 kB | 7.16 MB |
| protobufjs | 7.5.4 | 2.95 MB | 5.83 MB |
| @datadog/wasm-js-rewriter | 5.0.1 | 2.82 MB | 3.53 MB |
| @datadog/native-metrics | 3.1.1 | 1.02 MB | 1.43 MB |
| @opentelemetry/api-logs | 0.208.0 | 199.48 kB | 1.42 MB |
| @opentelemetry/api | 1.9.0 | 1.22 MB | 1.22 MB |
| jsonpath-plus | 10.3.0 | 617.18 kB | 1.08 MB |
| import-in-the-middle | 1.15.0 | 127.66 kB | 856.24 kB |
| lru-cache | 10.4.3 | 804.3 kB | 804.3 kB |
| @datadog/openfeature-node-server | 0.2.0 | 118.51 kB | 437.19 kB |
| opentracing | 0.14.7 | 194.81 kB | 194.81 kB |
| source-map | 0.7.6 | 185.63 kB | 185.63 kB |
| pprof-format | 2.2.1 | 163.06 kB | 163.06 kB |
| @datadog/sketches-js | 2.1.1 | 109.9 kB | 109.9 kB |
| @isaacs/ttlcache | 2.1.3 | 90.79 kB | 90.79 kB |
| lodash.sortby | 4.7.0 | 75.76 kB | 75.76 kB |
| ignore | 7.0.5 | 63.38 kB | 63.38 kB |
| istanbul-lib-coverage | 3.2.2 | 34.37 kB | 34.37 kB |
| rfdc | 1.4.1 | 27.15 kB | 27.15 kB |
| dc-polyfill | 0.1.10 | 26.73 kB | 26.73 kB |
| tlhunter-sorted-set | 0.1.0 | 24.94 kB | 24.94 kB |
| shell-quote | 1.8.3 | 23.74 kB | 23.74 kB |
| limiter | 1.1.5 | 23.17 kB | 23.17 kB |
| retry | 0.13.1 | 18.85 kB | 18.85 kB |
| semifies | 1.0.0 | 15.84 kB | 15.84 kB |
| jest-docblock | 29.7.0 | 8.99 kB | 12.76 kB |
| crypto-randomuuid | 1.0.0 | 11.18 kB | 11.18 kB |
| ttl-set | 1.0.0 | 4.61 kB | 9.69 kB |
| mutexify | 1.4.0 | 5.71 kB | 8.74 kB |
| path-to-regexp | 0.1.12 | 6.6 kB | 6.6 kB |
| module-details-from-path | 1.0.4 | 3.96 kB | 3.96 kB |
| escape-string-regexp | 5.0.0 | 3.66 kB | 3.66 kB |

🤖 This report was automatically generated by heaviest-objects-in-the-universe


codecov bot commented Dec 3, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 84.95%. Comparing base (0bb1f17) to head (75a21c4).
⚠️ Report is 7 commits behind head on master.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #7026      +/-   ##
==========================================
+ Coverage   84.94%   84.95%   +0.01%     
==========================================
  Files         514      514              
  Lines       21754    21796      +42     
==========================================
+ Hits        18478    18517      +39     
- Misses       3276     3279       +3     


@XG-xin XG-xin changed the title add reasoning token metrics in openai plugin feat(llmobs): add reasoning token metrics in openai plugin Dec 3, 2025


pr-commenter bot commented Dec 3, 2025

Benchmarks

Benchmark execution time: 2025-12-05 20:49:36

Comparing candidate commit e00c7b3 in PR branch xinyuan/openai-add-reasoning-token-metric with baseline commit 0bb1f17 in branch master.

Found 0 performance improvements and 0 performance regressions! Performance is the same for 288 metrics, 32 unstable metrics.

@sabrenner sabrenner left a comment


nice lgtm! just one question & one code style suggestion. will stamp once we have system tests updated 😎

Comment on lines +119 to +127

```js
if (tokenUsage.output_tokens_details) {
  const reasoningOutputTokens = tokenUsage.output_tokens_details.reasoning_tokens
  if (reasoningOutputTokens !== undefined) metrics.reasoningOutputTokens = reasoningOutputTokens
} else if (tokenUsage.completion_tokens_details) {
  const reasoningOutputTokens = tokenUsage.completion_tokens_details.reasoning_tokens
  if (reasoningOutputTokens !== undefined) {
    metrics.reasoningOutputTokens = reasoningOutputTokens
  }
}
```


we could probably shorten this to something like

Suggested change:

```diff
-if (tokenUsage.output_tokens_details) {
-  const reasoningOutputTokens = tokenUsage.output_tokens_details.reasoning_tokens
-  if (reasoningOutputTokens !== undefined) metrics.reasoningOutputTokens = reasoningOutputTokens
-} else if (tokenUsage.completion_tokens_details) {
-  const reasoningOutputTokens = tokenUsage.completion_tokens_details.reasoning_tokens
-  if (reasoningOutputTokens !== undefined) {
-    metrics.reasoningOutputTokens = reasoningOutputTokens
-  }
-}
+const reasoningOutputObject = tokenUsage.output_tokens_details ?? tokenUsage.completion_tokens_details
+const reasoningOutputTokens = reasoningOutputObject?.reasoning_tokens ?? 0
+if (reasoningOutputTokens !== undefined) metrics.reasoningOutputTokens = reasoningOutputTokens
```

taking either tokenUsage.output_tokens_details or tokenUsage.completion_tokens_details, then getting reasoning_tokens from whichever one exists, defaulting to 0

optional though, def not needed!
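
For reference, the suggested consolidation can be exercised standalone like this (a minimal sketch: `extractReasoningTokens` is a hypothetical helper, and the `tokenUsage` shapes below are illustrative stand-ins for the OpenAI `usage` payloads discussed above):

```javascript
// Hypothetical standalone version of the suggested logic: prefer
// output_tokens_details (Responses API usage shape), fall back to
// completion_tokens_details (Chat Completions usage shape), default to 0.
function extractReasoningTokens (tokenUsage) {
  const metrics = {}
  const reasoningOutputObject = tokenUsage.output_tokens_details ?? tokenUsage.completion_tokens_details
  const reasoningOutputTokens = reasoningOutputObject?.reasoning_tokens ?? 0
  metrics.reasoningOutputTokens = reasoningOutputTokens
  return metrics
}

// Responses-API-style usage
console.log(extractReasoningTokens({ output_tokens_details: { reasoning_tokens: 128 } }).reasoningOutputTokens) // 128
// Chat-Completions-style usage
console.log(extractReasoningTokens({ completion_tokens_details: { reasoning_tokens: 64 } }).reasoningOutputTokens) // 64
// Neither details object present: falls back to 0
console.log(extractReasoningTokens({}).reasoningOutputTokens) // 0
```

One behavioral note on the suggestion: because of the `?? 0` default, the trailing `!== undefined` guard can never fail, so the metric is always set (0 when absent), whereas the original code omitted it entirely when no reasoning-token field was present.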

```js
  cache_read_input_tokens: MOCK_NUMBER,
  reasoning_output_tokens: 128
},
modelName: 'gpt-5-mini-2025-08-07', // update
```

i think node.js still uses the input model name, so this should be gpt-5-mini, but lmk if you experienced differently!

this is something i'll look to change soon 🙇

