feat: add-mssql-support #56
Conversation
# Conflicts: # docker-compose.yml
…jected by Debian Trixie)
📝 Walkthrough

This PR adds comprehensive Microsoft SQL Server (MSSQL) database backend support to Portabase. It introduces a complete MSSQL implementation with connection handling, ping/backup/restore operations, Docker infrastructure, configuration validation, seeding, and integration tests.

Changes: MSSQL database backend implementation.

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
✅ Passed checks (4 passed)
Actionable comments posted: 10
🧹 Nitpick comments (9)
src/domain/mssql/connection.rs (2)
14-16: ⚡ Quick win — Add timeout for TCP connection to prevent indefinite hangs.

The TCP connection has no timeout configured, which could cause the application to hang indefinitely if the database server is unreachable or experiencing network issues.

⏱️ Add connection timeout

```diff
- let tcp = TcpStream::connect(config.get_addr()).await?;
+ let tcp = tokio::time::timeout(
+     std::time::Duration::from_secs(30),
+     TcpStream::connect(config.get_addr())
+ ).await
+ .context("Connection timeout")?
+ .context("Failed to connect to MSSQL server")?;
  tcp.set_nodelay(true)?;
```

🤖 Prompt for AI Agents

Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/domain/mssql/connection.rs` around lines 14 - 16, the `TcpStream::connect` call can hang because there's no timeout; wrap the async connect with a tokio timeout (e.g., `tokio::time::timeout`) using a sensible duration (config or hardcoded) and handle the timeout error by returning an appropriate error instead of awaiting forever. Update the code around `TcpStream::connect(config.get_addr()).await?` so the timed-out result is mapped into the same error type the function returns before calling `tcp.set_nodelay(true)?` and `Client::connect` afterwards.
12-12: ⚡ Quick win — Certificate validation is disabled, creating a security risk.

The `trust_cert()` call unconditionally bypasses TLS certificate validation, making the connection vulnerable to man-in-the-middle attacks. While acceptable for local development, this should either be configurable via `DatabaseConfig` or clearly documented as development-only.

🔒 Consider making certificate validation configurable

Add an optional field to `DatabaseConfig` to control certificate validation:

```rust
pub async fn build_client(cfg: &DatabaseConfig) -> Result<Client<Compat<TcpStream>>> {
    let mut config = Config::new();
    config.host(&cfg.host);
    config.port(cfg.port);
    config.authentication(AuthMethod::sql_server(&cfg.username, &cfg.password));

    // Only trust cert if explicitly configured (e.g., for development)
    if cfg.trust_cert.unwrap_or(false) {
        config.trust_cert();
    }

    let tcp = TcpStream::connect(config.get_addr()).await?;
    tcp.set_nodelay(true)?;

    let client = Client::connect(config, tcp.compat_write()).await?;
    Ok(client)
}
```

🤖 Prompt for AI Agents

Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/domain/mssql/connection.rs` at line 12, the code unconditionally calls `trust_cert()`, which disables TLS validation; make certificate trusting configurable via `DatabaseConfig` (e.g., add an optional `trust_cert` bool) and only call `config.trust_cert()` when that flag is true. Modify the `DatabaseConfig` type to include the new field and adjust `build_client` (where `Config::new()`, `config.host()`, `AuthMethod::sql_server()`, `TcpStream::connect`, and `Client::connect` are used) to check `cfg.trust_cert.unwrap_or(false)` before invoking `trust_cert()`, so certificate validation remains enabled by default.

src/domain/mssql/backup.rs (3)
29-34: 🏗️ Heavy lift — Add timeout to sqlpackage command to prevent indefinite hangs.

The `Command::output()` call has no timeout and could block indefinitely if `sqlpackage` hangs or the database becomes unresponsive during backup. This would tie up the tokio blocking thread pool.

⏱️ Add command timeout

Consider using `tokio::time::timeout` around the blocking operation:

```rust
let output = tokio::time::timeout(
    std::time::Duration::from_secs(3600), // 1 hour timeout for large databases
    tokio::task::spawn_blocking(move || {
        Command::new("sqlpackage")
            .arg("/a:Export")
            .arg(format!("/scs:{}", connection_string))
            .arg(format!("/tf:{}", file_path.display()))
            .output()
    })
)
.await
.context("MSSQL backup timed out")??
.with_context(|| format!("Failed to run sqlpackage for {}", cfg.name))?;
```

Note: This would require restructuring the spawn_blocking call.

🤖 Prompt for AI Agents

Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/domain/mssql/backup.rs` around lines 29 - 34, the sqlpackage invocation currently calls `Command::new(...).output()` without a timeout, which can hang; wrap the blocking process spawn in `tokio::task::spawn_blocking` and use `tokio::time::timeout` (e.g., `Duration::from_secs(3600)` or a configured value) around the spawn_blocking future, await the timeout, map a timeout error to a contextual error message like "MSSQL backup timed out", then propagate the underlying `Command::output()` error with the existing `.with_context(|| format!("Failed to run sqlpackage for {}", cfg.name))` so failures and timeouts are distinguishable.
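The same guard can be sketched without tokio, using only the standard library: spawn the child process, poll it against a deadline, and kill it on timeout. This is a minimal illustration of the timeout pattern the comment asks for, not Portabase's actual backup code (the command and names here are placeholders):

```rust
use std::process::{Child, Command, Output, Stdio};
use std::time::{Duration, Instant};
use std::{io, thread};

// Run a command, but give up (and kill the child) once `timeout` elapses.
fn run_with_timeout(mut cmd: Command, timeout: Duration) -> io::Result<Output> {
    let mut child: Child = cmd.stdout(Stdio::piped()).stderr(Stdio::piped()).spawn()?;
    let deadline = Instant::now() + timeout;
    loop {
        match child.try_wait()? {
            // Finished in time: collect the captured output.
            Some(_) => return child.wait_with_output(),
            None if Instant::now() >= deadline => {
                child.kill()?; // best effort
                child.wait()?; // reap to avoid a zombie
                return Err(io::Error::new(io::ErrorKind::TimedOut, "command timed out"));
            }
            None => thread::sleep(Duration::from_millis(100)),
        }
    }
}

fn main() -> io::Result<()> {
    let mut cmd = Command::new("echo");
    cmd.arg("backup done");
    let out = run_with_timeout(cmd, Duration::from_secs(5))?;
    assert!(out.status.success());
    println!("{}", String::from_utf8_lossy(&out.stdout).trim());
    Ok(())
}
```

The polling interval trades latency for CPU; in the tokio version above, `spawn_blocking` plus `timeout` achieves the same effect without a manual loop.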
45-46: 💤 Low value — Improve error handling for task panics.

The `await?` on line 46 will convert a `JoinError` (from a task panic) into a generic error. Consider handling task panics explicitly to provide better diagnostics.

🔍 Explicit panic handling

```diff
-    })
-    .await?
+    })
+    .await
+    .map_err(|e| {
+        if e.is_panic() {
+            anyhow::anyhow!("MSSQL backup task panicked: {:?}", e)
+        } else {
+            anyhow::anyhow!("MSSQL backup task cancelled")
+        }
+    })?
```

🤖 Prompt for AI Agents

Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/domain/mssql/backup.rs` around lines 45 - 46, the current `.await?` on the `JoinHandle` swallows panic information by treating a `JoinError` as a generic error; replace the direct `await?` with explicit handling of the `JoinHandle` result: call `task.await`, match on `Ok(inner)` to propagate the inner `Result` (e.g., `inner?`), and on `Err(join_err)` detect `join_err.is_panic()` and extract/log the panic payload (or create a descriptive error) so panics are reported with diagnostics.
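Tokio's `JoinError::is_panic` has a direct `std` analog worth knowing: `thread::JoinHandle::join` returns the panic payload in its `Err`, which can be downcast to a readable message. A small standalone sketch (not the PR's code):

```rust
use std::any::Any;
use std::thread;

// Turn a thread's panic payload into a readable message instead of a generic error.
fn panic_message(payload: Box<dyn Any + Send>) -> String {
    if let Some(s) = payload.downcast_ref::<&str>() {
        (*s).to_string() // panic!("literal") carries a &'static str
    } else if let Some(s) = payload.downcast_ref::<String>() {
        s.clone() // panic!("{}", x) carries a String
    } else {
        "unknown panic payload".to_string()
    }
}

fn main() {
    let handle = thread::spawn(|| panic!("backup task panicked: disk full"));
    match handle.join() {
        Ok(_) => println!("task completed"),
        Err(payload) => println!("task failed: {}", panic_message(payload)),
    }
}
```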
16-19: ⚡ Quick win — Security: connection string disables encryption and certificate validation.

The connection string includes `TrustServerCertificate=True;Encrypt=False`, which disables both TLS encryption and certificate validation. This exposes credentials and data in transit to potential interception. While this may be acceptable for local development with Docker containers, it should be configurable for production use.

🔒 Consider making TLS settings configurable

```diff
 let connection_string = format!(
-    "Server=tcp:{},{};Database={};User Id={};Password={};TrustServerCertificate=True;Encrypt=False",
+    "Server=tcp:{},{};Database={};User Id={};Password={};TrustServerCertificate={};Encrypt={}",
-    cfg.host, cfg.port, cfg.database, cfg.username, cfg.password
+    cfg.host, cfg.port, cfg.database, cfg.username, cfg.password,
+    cfg.trust_cert.unwrap_or(false),
+    cfg.encrypt.unwrap_or(true)
 );
```

🤖 Prompt for AI Agents

Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/domain/mssql/backup.rs` around lines 16 - 19, the connection string hardcodes insecure TLS options (`TrustServerCertificate=True;Encrypt=False`); update the code that builds `connection_string` (the `format!` call that uses `cfg.host`, `cfg.port`, `cfg.database`, `cfg.username`, `cfg.password`) to use configurable TLS flags from the configuration (e.g., `cfg.encrypt` and `cfg.trust_server_certificate`), add those fields to the config struct with secure defaults (Encrypt=true, TrustServerCertificate=false), and use those values when formatting the connection string so production defaults are encrypted while allowing overrides for local/dev.

src/domain/mssql/database.rs (1)
29-39: Consider adding timeout to lock acquisition.

The `FileLock::acquire` calls (lines 30, 42) have no timeout. If a previous process crashed while holding a lock, or if a concurrent operation hangs, new operations will wait indefinitely.

If `FileLock` doesn't support timeouts internally, consider wrapping the acquire call with `tokio::time::timeout` to fail fast:

```rust
use tokio::time::{timeout, Duration};

let lock_timeout = Duration::from_secs(300); // 5 minutes
timeout(
    lock_timeout,
    FileLock::acquire(&self.cfg.generated_id, DbOpLock::Backup.as_str())
)
.await
.context("Lock acquisition timed out")??;
```

This prevents indefinite blocking and allows for clearer error messages in operations dashboards.

Also applies to: 41-46

🤖 Prompt for AI Agents

Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/domain/mssql/database.rs` around lines 29 - 39, wrap the `FileLock::acquire` call in the backup method (and the other similar acquire calls, e.g., the one using `DbOpLock::Backup.as_str()`) with `tokio::time::timeout` using a sensible `Duration` (e.g., 5 minutes), map timeout errors into a clear contextual message and propagate the error instead of awaiting indefinitely; keep `FileLock::release` as-is on success, and use the same pattern for the other acquire usages in this file so lock acquisition fails fast with a contextual error if it times out.
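If `FileLock` cannot time out internally, the retry-until-deadline idea can also be sketched with the standard library alone, using an exclusively created lock file. This is illustrative stand-in logic, not Portabase's `FileLock` type:

```rust
use std::fs::OpenOptions;
use std::io::{self, ErrorKind};
use std::path::Path;
use std::thread;
use std::time::{Duration, Instant};

// Try to create the lock file exclusively; retry until `timeout` elapses.
fn acquire_lock(path: &Path, timeout: Duration) -> io::Result<()> {
    let deadline = Instant::now() + timeout;
    loop {
        match OpenOptions::new().write(true).create_new(true).open(path) {
            Ok(_) => return Ok(()), // lock file created: we hold the lock
            Err(e) if e.kind() == ErrorKind::AlreadyExists => {
                if Instant::now() >= deadline {
                    return Err(io::Error::new(
                        ErrorKind::TimedOut,
                        format!("lock acquisition timed out: {}", path.display()),
                    ));
                }
                thread::sleep(Duration::from_millis(50));
            }
            Err(e) => return Err(e),
        }
    }
}

fn release_lock(path: &Path) -> io::Result<()> {
    std::fs::remove_file(path)
}

fn main() -> io::Result<()> {
    let lock = std::env::temp_dir().join("portabase-demo.lock");
    acquire_lock(&lock, Duration::from_secs(1))?;
    // ... do the backup work while holding the lock ...
    release_lock(&lock)
}
```

Bounding the wait means a crashed holder surfaces as a clear "timed out" error rather than an indefinite hang.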
src/tests/domain/mssql.rs (2)

135-139: ⚡ Quick win — Use an assertion instead of panic! for test validation.

Line 138 uses `panic!()` to handle an unexpected condition. In tests, prefer standard assertions, which provide better integration with test frameworks and clearer failure messages.

✅ Proposed assertion

```diff
- let backup_file: PathBuf = if files.len() == 1 {
-     files[0].clone()
- } else {
-     panic!("Unexpected number of files after decompression: {}", files.len());
- };
+ assert_eq!(
+     files.len(),
+     1,
+     "Expected exactly 1 file after decompression, found {}",
+     files.len()
+ );
+ let backup_file: PathBuf = files[0].clone();
```

🤖 Prompt for AI Agents

Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/tests/domain/mssql.rs` around lines 135 - 139, the test currently uses `panic!` when checking the decompressed files length; replace that with a test assertion (e.g., `assert_eq!` or `assert!(...)`) to integrate with the test framework and give clearer failure messages: assert that `files.len() == 1` with a descriptive message, then proceed to set `backup_file` from `files[0].clone()`.
28-44: ⚡ Quick win — Add error context to test helper unwrap() calls.

The `create_user_database` helper uses `unwrap()` on lines 35-37 and 43 without context. When these fail in CI, it's unclear which specific operation failed.

📝 Proposed error context

```diff
- let tcp = TcpStream::connect(config.get_addr()).await.unwrap();
- tcp.set_nodelay(true).unwrap();
- let mut client = Client::connect(config, tcp.compat_write()).await.unwrap();
+ let tcp = TcpStream::connect(config.get_addr())
+     .await
+     .expect("Failed to connect to MSSQL container");
+ tcp.set_nodelay(true)
+     .expect("Failed to set TCP nodelay");
+ let mut client = Client::connect(config, tcp.compat_write())
+     .await
+     .expect("Failed to create MSSQL client");
  let sql = format!(
      "IF NOT EXISTS (SELECT name FROM sys.databases WHERE name = N'{}') CREATE DATABASE [{}]",
      db_name, db_name
  );
- client.simple_query(sql.as_str()).await.unwrap();
+ client
+     .simple_query(sql.as_str())
+     .await
+     .expect(&format!("Failed to create database '{}'", db_name));
```

🤖 Prompt for AI Agents

Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/tests/domain/mssql.rs` around lines 28 - 44, the unwraps in `create_user_database` mask which operation failed; replace each `unwrap()` with `expect(...)` (or `map_err` with context) that names the failing operation and includes relevant variables: the TCP connect, `set_nodelay`, `Client::connect`, and the CREATE DATABASE query should each report host/port/db_name context so failures in `create_user_database` are clear.

src/domain/mssql/restore.rs (1)
7-7: ⚡ Quick win — Missing validation for restore file path.

The `restore_file` parameter is not validated for existence or readability before being passed to `sqlpackage`. If the file doesn't exist or is inaccessible, the operation will fail late with potentially unclear error messages.

✅ Proposed validation

```diff
 pub async fn run(cfg: DatabaseConfig, restore_file: PathBuf) -> Result<()> {
+    if !restore_file.exists() {
+        anyhow::bail!("Restore file does not exist: {}", restore_file.display());
+    }
+    if !restore_file.is_file() {
+        anyhow::bail!("Restore path is not a file: {}", restore_file.display());
+    }
+
     tokio::task::spawn_blocking(move || -> Result<()> {
```

🤖 Prompt for AI Agents

Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/domain/mssql/restore.rs` at line 7, the `run` function should validate `restore_file` before invoking sqlpackage: verify the `PathBuf` exists, is a regular file, and is readable (e.g., use `std::fs::metadata` or `tokio::fs::metadata` and check `is_file()`, or attempt `File::open` to ensure readability) and return an early `Err` with a clear message if checks fail; include the file path in the error/log so sqlpackage is only called when the file is present and accessible.
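Several comments above propose deriving the hardcoded `TrustServerCertificate=True;Encrypt=False` flags from configuration instead. A minimal, std-only sketch of that idea — the `encrypt` and `trust_cert` fields are assumed additions, not existing `DatabaseConfig` fields:

```rust
// Hypothetical config subset; the real DatabaseConfig lives elsewhere in the crate.
struct DatabaseConfig {
    host: String,
    port: u16,
    database: String,
    username: String,
    password: String,
    encrypt: Option<bool>,    // assumed new field
    trust_cert: Option<bool>, // assumed new field
}

fn connection_string(cfg: &DatabaseConfig) -> String {
    format!(
        "Server=tcp:{},{};Database={};User Id={};Password={};TrustServerCertificate={};Encrypt={}",
        cfg.host,
        cfg.port,
        cfg.database,
        cfg.username,
        cfg.password,
        cfg.trust_cert.unwrap_or(false), // secure default: validate certificates
        cfg.encrypt.unwrap_or(true),     // secure default: encrypt traffic
    )
}

fn main() {
    let cfg = DatabaseConfig {
        host: "localhost".into(),
        port: 1433,
        database: "master".into(),
        username: "sa".into(),
        password: "changeme".into(),
        encrypt: None,
        trust_cert: None,
    };
    println!("{}", connection_string(&cfg));
}
```

With `Option<bool>` fields, existing config files keep working and the insecure values must be opted into explicitly for local development.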
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@databases.json`:
- Line 106: Update the "name" value in the JSON entry that currently reads "Test
database 13 - MysSQL" to correct the typo to "Test database 13 - MSSQL"; locate
the JSON object with the "name" key containing that exact string and replace
"MysSQL" with "MSSQL".
In `@databases.toml`:
- Around line 107-109: Replace the hardcoded privileged credential in
databases.toml — specifically the password value for the "password" key (and
optionally the "username" key) — with a non-sensitive placeholder and stop
committing real secrets; instead wire the real SA password into your deployment
via an environment-specific, untracked config or environment variable (e.g., use
an env var like DB_SA_PASSWORD or a secrets manager at runtime) and update any
code that reads databases.toml to prefer the env/secret source over the
checked-in "password" entry; also ensure the committed placeholder makes it
clear that secrets must be supplied externally and rotate the exposed credential
immediately.
In `@docker/Dockerfile`:
- Around line 121-126: The image currently installs only the .NET runtime via
the install script (ENV DOTNET_ROOT and /tmp/dotnet-install.sh with "--runtime
dotnet"), but the Dockerfile also installs sqlpackage as a dotnet global tool
(microsoft.sqlpackage) which requires the .NET SDK; either change the install
invocation to install the SDK (replace "--runtime dotnet" with the appropriate
SDK channel/flag so the SDK is present) or stop installing sqlpackage as a
dotnet global tool and instead download and extract the self-contained
SqlPackage .zip distribution into the image and copy those binaries into the
final stage (remove dotnet tool install --global microsoft.sqlpackage and use
the self-contained zip approach); ensure DOTNET_ROOT usage and any subsequent
COPY of sqlpackage binaries align with the chosen approach.
In `@justfile`:
- Around line 53-61: seed-all target contains a duplicate invocation of
seed-mongo; edit the seed-all recipe to remove the second "just seed-mongo"
entry so seed-mongo is only run once, leaving the other seed tasks (seed-mysql,
seed-postgres, seed-postgres-1gb, seed-sqlite, seed-firebird, seed-mssql)
unchanged; verify no other duplicated seed calls exist in the recipe.
In `@scripts/mssql/seed.sql`:
- Around line 17-19: Make the seed idempotent by changing the unconditional
INSERTs into conditional inserts that skip rows if a user with the same
users.email already exists; update the three statements for 'alice@example.com',
'bob@example.com', and 'charlie@example.com' to use either a single MERGE INTO
users ... ON users.email = source.email ... WHEN NOT MATCHED THEN INSERT(...) or
wrap each INSERT in IF NOT EXISTS (SELECT 1 FROM users WHERE email = '...')
INSERT INTO users(...), so rerunning the seed will not throw duplicate-key
errors.
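The conditional-insert shape that prompt describes, sketched for one of the three rows. The column names `name`/`email` are assumptions for illustration — only `users.email` appears in the prompt, so the real column list in scripts/mssql/seed.sql may differ:

```sql
-- Idempotent variant: skip the row if the email is already present.
IF NOT EXISTS (SELECT 1 FROM users WHERE email = 'alice@example.com')
    INSERT INTO users (name, email) VALUES ('Alice', 'alice@example.com');
```

Repeating the same pattern for the other two rows (or folding all three into one MERGE) makes reruns of the seed a no-op instead of a duplicate-key error.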
In `@src/domain/mssql/restore.rs`:
- Around line 11-14: The connection string built in restore.rs (variable
connection_string using cfg.host, cfg.port, cfg.database, cfg.username,
cfg.password) hardcodes insecure settings
TrustServerCertificate=True;Encrypt=False; change this to use secure defaults
and make encryption/certificate validation configurable on DatabaseConfig (e.g.,
add fields like encrypt: bool and trust_server_certificate: bool or an explicit
allow_insecure flag), then construct the connection string from those config
fields (default Encrypt=True and TrustServerCertificate=False) and only set
insecure values when the config explicitly requests them; update any callers
that create cfg to supply the new fields or accept defaults.
- Around line 24-29: The blocking sqlpackage call (Command::new("sqlpackage")...
.output()) can hang; wrap the blocking work in a tokio::task::spawn_blocking and
apply a tokio::time::timeout around the join (or use tokio::time::timeout on the
spawn_blocking future) so the restore operation fails after a configurable
duration; ensure you handle the timeout case by killing the child process or
returning a contextual error from the function (look for the restore function
that calls Command::new and modify the block that creates/output() the process
to use spawn_blocking + tokio::time::timeout and propagate a descriptive error
when the timeout elapses).
- Around line 24-29: The current call in restore.rs builds sqlpackage with a
plaintext connection string via format!("/tcs:{}", connection_string) which
exposes credentials; update the restore flow in the function that calls
Command::new("sqlpackage") to stop passing /tcs:. Instead, implement one of the
recommended approaches: (A) generate a temporary publish profile XML and pass it
via /Profile:<path> (ensure file permissions are set to owner-only and delete
the file after use), or (B) accept and pass a short-lived access token via
/at:<access-token> (obtain token from MS Entra/managed identity) and remove any
/tcs argument; update the argument assembly in restore.rs to use /Profile: or
/at: accordingly, handle token/profile creation and secure cleanup, and add
proper error handling and logging around those steps.
In `@src/services/backup/runner.rs`:
- Around line 55-64: The error handling in BackupService::run is inconsistent:
change the backup operation Err arm (currently returning Ok(BackupResult {
status: "failed" })) to propagate the error like the ping path by returning
Err(e.into()), so both ping and backup failures use the same Result-based error
contract; update any callers such as execute_backup and related tests to handle
Err results (or adjust their logic accordingly) after this change.
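The uniform error contract the runner.rs prompt asks for — both ping and backup failures travelling through `Result` rather than one of them being smuggled into an `Ok` value — can be sketched as follows. The types here are illustrative, not Portabase's actual `BackupResult`:

```rust
#[derive(Debug, PartialEq)]
struct BackupResult {
    status: &'static str,
}

// Stand-in for the backup operation; the real one shells out to sqlpackage.
fn backup(succeed: bool) -> Result<(), String> {
    if succeed {
        Ok(())
    } else {
        Err("sqlpackage exited non-zero".to_string())
    }
}

// Failures propagate as Err, mirroring the ping path; the caller decides how
// to render them, instead of the runner returning Ok(BackupResult { status: "failed" }).
fn run(succeed: bool) -> Result<BackupResult, String> {
    backup(succeed)?;
    Ok(BackupResult { status: "success" })
}

fn main() {
    assert_eq!(run(true), Ok(BackupResult { status: "success" }));
    assert!(run(false).is_err());
    println!("both paths use the same Result-based contract");
}
```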
In `@src/tests/domain/mssql.rs`:
- Line 65: Replace the fixed 30s sleep calls
(tokio::time::sleep(Duration::from_secs(30)).await) with a polling-based
readiness loop: attempt a lightweight connect to the MSSQL port (e.g.,
tokio::net::TcpStream::connect with the container host:port or a simple client
connection using your SQL client) in a loop with a short interval (e.g., 500ms),
stop when the connect succeeds, and fail after an overall timeout (e.g., 60s);
apply this change for all three occurrences in the tests so each test waits
until the container is actually ready instead of sleeping a fixed duration.
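The polling loop that replaces the fixed 30s sleeps can be sketched with the standard library alone (tokio's `TcpStream` works the same way); here a local listener stands in for the MSSQL container:

```rust
use std::net::{SocketAddr, TcpListener, TcpStream};
use std::thread;
use std::time::{Duration, Instant};

// Poll the port until a connect succeeds or the overall deadline passes.
fn wait_for_port(addr: SocketAddr, overall: Duration, interval: Duration) -> bool {
    let deadline = Instant::now() + overall;
    while Instant::now() < deadline {
        if TcpStream::connect_timeout(&addr, interval).is_ok() {
            return true; // service is accepting connections
        }
        thread::sleep(interval);
    }
    false
}

fn main() {
    // Stand-in for the MSSQL container: a listener on an ephemeral port.
    let listener = TcpListener::bind("127.0.0.1:0").unwrap();
    let addr = listener.local_addr().unwrap();
    thread::spawn(move || for _ in listener.incoming() {});

    assert!(wait_for_port(addr, Duration::from_secs(5), Duration::from_millis(100)));
    println!("service ready at {}", addr);
}
```

Note that a successful TCP connect only proves the port is open; for MSSQL specifically, a follow-up login or `SELECT 1` gives a stronger readiness signal.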
ℹ️ Review info
⚙️ Run configuration
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 7391ea5c-f4a2-4939-a855-e3c5ddd8638d
⛔ Files ignored due to path filters (1)

- `Cargo.lock` is excluded by `!**/*.lock`

📒 Files selected for processing (20)

- .gitignore
- Cargo.toml
- databases.json
- databases.toml
- docker-compose.databases.yml
- docker/Dockerfile
- justfile
- scripts/mssql/seed.sql
- src/domain/factory.rs
- src/domain/mod.rs
- src/domain/mssql/backup.rs
- src/domain/mssql/connection.rs
- src/domain/mssql/database.rs
- src/domain/mssql/mod.rs
- src/domain/mssql/ping.rs
- src/domain/mssql/restore.rs
- src/services/backup/runner.rs
- src/services/config.rs
- src/tests/domain/mod.rs
- src/tests/domain/mssql.rs
```json
    "generated_id": "16706124-ff7e-4c97-8c83-0adeff214681"
  },
  {
    "name": "Test database 13 - MysSQL",
```
Fix typo in database name: "MysSQL" → "MSSQL".

The database name contains a typo: "Test database 13 - MysSQL" should be "Test database 13 - MSSQL".

📝 Proposed fix

```diff
- "name": "Test database 13 - MysSQL",
+ "name": "Test database 13 - MSSQL",
```
```toml
username = "sa"
password = "Portabase!Strong1"
port = 1433
```
Avoid committing a concrete SA password in tracked config.

Line 108 hardcodes a privileged credential. Please switch to a non-sensitive placeholder and source real values from environment-specific config that is not committed.

Suggested diff

```diff
 username = "sa"
-password = "Portabase!Strong1"
+password = "changeme" # override via env/local untracked config
 port = 1433
```
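The prefer-the-environment pattern suggested for this credential can be sketched in a few lines; the variable name `DB_SA_PASSWORD` is the suggestion's example, not an existing Portabase setting:

```rust
use std::env;

// Prefer a secret from the environment; fall back to the (placeholder) config value.
fn resolve_password(env_value: Option<String>, config_value: &str) -> String {
    env_value.unwrap_or_else(|| config_value.to_string())
}

fn main() {
    // With DB_SA_PASSWORD unset, this returns the placeholder from databases.toml.
    let password = resolve_password(env::var("DB_SA_PASSWORD").ok(), "changeme");
    println!("using password of length {}", password.len());
}
```

Taking the environment value as an `Option` argument keeps the resolution logic pure and trivially testable.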
```dockerfile
ENV DOTNET_ROOT=/usr/local/dotnet
RUN curl -sSL https://dot.net/v1/dotnet-install.sh -o /tmp/dotnet-install.sh \
    && chmod +x /tmp/dotnet-install.sh \
    && /tmp/dotnet-install.sh --channel 8.0 --runtime dotnet --install-dir /usr/local/dotnet \
    && rm /tmp/dotnet-install.sh
```
🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Test that sqlpackage works in the production image

# Build the production image
echo "Building production image..."
docker build --target prod -t portabase-prod-test -f docker/Dockerfile .

# Test sqlpackage availability and execution
echo "Testing sqlpackage in production image..."
docker run --rm portabase-prod-test /bin/bash -c "
  which sqlpackage && \
  sqlpackage /version || echo 'ERROR: sqlpackage failed to execute'
"
```

Repository: Portabase/agent
Length of output: 241

🏁 Script executed:

```bash
cat -n docker/Dockerfile | sed -n '20,140p'
```

Repository: Portabase/agent
Length of output: 4336
🌐 Web query:
sqlpackage .NET runtime requirements system requirements
💡 Result:
SqlPackage (.NET) runtime and system requirements (as of the latest Microsoft Learn guidance):

1) OS requirements (the same minimums apply to both the dotnet-tool install and the self-contained .zip):
- Windows (x64): Windows 11; Windows 10 (1607+); Windows Server Core 2012 R2+; Windows Server 2012 R2+ [1]
- macOS (x64): macOS 14 "Sonoma"+ [1]
- Linux (x64): Debian 12+; Red Hat Enterprise Linux 8+; SUSE Linux Enterprise Server 15 SP6+; Ubuntu 22.04+ [1]

2) .NET runtime requirement (depends on how you install SqlPackage):
- dotnet tool install (recommended): installing SqlPackage as a global dotnet tool "requires the .NET SDK to be installed on your machine"; SqlPackage is available as a dotnet tool for .NET 8 and later. [1]
- Self-contained .zip download: "No .NET install is required" for the self-contained zip. [1]
- Microsoft states "SqlPackage runs on Windows, macOS, and Linux and is built using .NET 10." [1]

So: with the recommended dotnet-tool method, ensure the .NET SDK (and a compatible .NET 8+ environment) is installed; with the self-contained .zip, no separate .NET installation is needed; just meet the OS requirements above. [1]
Resolve: sqlpackage requires .NET SDK but production image installs only runtime.
The base image installs sqlpackage as a .NET global tool (line 30: dotnet tool install --global microsoft.sqlpackage), which requires the .NET SDK to function. However, the production image installs only the .NET runtime (line 124: --runtime dotnet), not the SDK. While the tool binaries are copied to the production image (line 136), they will fail at runtime because the .NET SDK is not available.
The solution is either:
- Install the .NET SDK in production (not just runtime), or
- Use the self-contained SqlPackage .zip distribution which does not require an SDK or runtime to be installed separately.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@docker/Dockerfile` around lines 121 - 126, The image currently installs only
the .NET runtime via the install script (ENV DOTNET_ROOT and
/tmp/dotnet-install.sh with "--runtime dotnet"), but the Dockerfile also
installs sqlpackage as a dotnet global tool (microsoft.sqlpackage) which
requires the .NET SDK; either change the install invocation to install the SDK
(replace "--runtime dotnet" with the appropriate SDK channel/flag so the SDK is
present) or stop installing sqlpackage as a dotnet global tool and instead
download and extract the self-contained SqlPackage .zip distribution into the
image and copy those binaries into the final stage (remove dotnet tool install
--global microsoft.sqlpackage and use the self-contained zip approach); ensure
DOTNET_ROOT usage and any subsequent COPY of sqlpackage binaries align with the
chosen approach.
seed-all:
    just seed-mongo
    just seed-mysql
    just seed-postgres
    just seed-postgres-1gb
    just seed-sqlite
    just seed-mongo
-    just seed-firebird
\ No newline at end of file
+    just seed-firebird
+    just seed-mssql
\ No newline at end of file
Remove the duplicate Mongo seed call in seed-all.
seed-all runs just seed-mongo twice, which can create duplicate-seed side effects and slows the pipeline with no benefit.
Suggested diff
seed-all:
just seed-mongo
just seed-mysql
just seed-postgres
just seed-postgres-1gb
just seed-sqlite
- just seed-mongo
just seed-firebird
 just seed-mssql
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@justfile` around lines 53 - 61, seed-all target contains a duplicate
invocation of seed-mongo; edit the seed-all recipe to remove the second "just
seed-mongo" entry so seed-mongo is only run once, leaving the other seed tasks
(seed-mysql, seed-postgres, seed-postgres-1gb, seed-sqlite, seed-firebird,
seed-mssql) unchanged; verify no other duplicated seed calls exist in the
recipe.
INSERT INTO users (email, name) VALUES ('alice@example.com', 'Alice');
INSERT INTO users (email, name) VALUES ('bob@example.com', 'Bob');
INSERT INTO users (email, name) VALUES ('charlie@example.com', 'Charlie');
Make seed inserts idempotent to prevent rerun failures.
These inserts are unconditional, but users.email is unique. Re-running the seed will fail with duplicate-key errors.
Suggested diff
-INSERT INTO users (email, name) VALUES ('alice@example.com', 'Alice');
-INSERT INTO users (email, name) VALUES ('bob@example.com', 'Bob');
-INSERT INTO users (email, name) VALUES ('charlie@example.com', 'Charlie');
+IF NOT EXISTS (SELECT 1 FROM users WHERE email = 'alice@example.com')
+ INSERT INTO users (email, name) VALUES ('alice@example.com', 'Alice');
+IF NOT EXISTS (SELECT 1 FROM users WHERE email = 'bob@example.com')
+ INSERT INTO users (email, name) VALUES ('bob@example.com', 'Bob');
+IF NOT EXISTS (SELECT 1 FROM users WHERE email = 'charlie@example.com')
+ INSERT INTO users (email, name) VALUES ('charlie@example.com', 'Charlie');
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@scripts/mssql/seed.sql` around lines 17 - 19, Make the seed idempotent by
changing the unconditional INSERTs into conditional inserts that skip rows if a
user with the same users.email already exists; update the three statements for
'alice@example.com', 'bob@example.com', and 'charlie@example.com' to use either
a single MERGE INTO users ... ON users.email = source.email ... WHEN NOT MATCHED
THEN INSERT(...) or wrap each INSERT in IF NOT EXISTS (SELECT 1 FROM users WHERE
email = '...') INSERT INTO users(...), so rerunning the seed will not throw
duplicate-key errors.
let connection_string = format!(
    "Server=tcp:{},{};Database={};User Id={};Password={};TrustServerCertificate=True;Encrypt=False",
    cfg.host, cfg.port, cfg.database, cfg.username, cfg.password
);
Critical: Connection security settings disable encryption and certificate validation.
Lines 12-13 set TrustServerCertificate=True;Encrypt=False, which disables TLS encryption and certificate validation. This exposes credentials and data in transit to eavesdropping and MITM attacks.
For production environments, this is a critical security vulnerability. While these settings may be acceptable for local development/testing, they should not be hardcoded.
🔒 Recommended approach
Consider making encryption and certificate validation configurable via DatabaseConfig:
+// In DatabaseConfig struct (src/services/config.rs)
+pub struct DatabaseConfig {
+ // ... existing fields ...
+ pub tls_enabled: Option<bool>,
+ pub trust_server_certificate: Option<bool>,
+}
let connection_string = format!(
- "Server=tcp:{},{};Database={};User Id={};Password={};TrustServerCertificate=True;Encrypt=False",
- cfg.host, cfg.port, cfg.database, cfg.username, cfg.password
+ "Server=tcp:{},{};Database={};User Id={};Password={};TrustServerCertificate={};Encrypt={}",
+ cfg.host,
+ cfg.port,
+ cfg.database,
+ cfg.username,
+ cfg.password,
+ cfg.trust_server_certificate.unwrap_or(false),
+ cfg.tls_enabled.unwrap_or(true)
);

Alternatively, default to secure settings and only allow insecure mode via an explicit configuration flag.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/domain/mssql/restore.rs` around lines 11 - 14, The connection string
built in restore.rs (variable connection_string using cfg.host, cfg.port,
cfg.database, cfg.username, cfg.password) hardcodes insecure settings
TrustServerCertificate=True;Encrypt=False; change this to use secure defaults
and make encryption/certificate validation configurable on DatabaseConfig (e.g.,
add fields like encrypt: bool and trust_server_certificate: bool or an explicit
allow_insecure flag), then construct the connection string from those config
fields (default Encrypt=True and TrustServerCertificate=False) and only set
insecure values when the config explicitly requests them; update any callers
that create cfg to supply the new fields or accept defaults.
let output = Command::new("sqlpackage")
    .arg("/a:Import")
    .arg(format!("/tcs:{}", connection_string))
    .arg(format!("/sf:{}", restore_file.display()))
    .output()
    .with_context(|| format!("Failed to run sqlpackage restore for {}", cfg.name))?;
🛠️ Refactor suggestion | 🟠 Major | ⚡ Quick win
Missing timeout on blocking restore operation.
The Command execution has no timeout. A hung or stalled sqlpackage process could block indefinitely, preventing the spawned blocking task from completing.
Consider adding a timeout wrapper around the blocking operation or using tokio::time::timeout on the spawned task.
⏱️ Proposed timeout addition
+use tokio::time::{timeout, Duration};
+
pub async fn run(cfg: DatabaseConfig, restore_file: PathBuf) -> Result<()> {
- tokio::task::spawn_blocking(move || -> Result<()> {
+ let task = tokio::task::spawn_blocking(move || -> Result<()> {
debug!("Starting MSSQL restore for database {}", cfg.name);
// ... existing code ...
- })
- .await?
+ });
+
+ timeout(Duration::from_secs(3600), task)
+ .await
+ .context("MSSQL restore timed out after 1 hour")?
+ .context("MSSQL restore task panicked")?
}

Adjust the timeout duration based on expected restore sizes.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/domain/mssql/restore.rs` around lines 24 - 29, The blocking sqlpackage
call (Command::new("sqlpackage")... .output()) can hang; wrap the blocking work
in a tokio::task::spawn_blocking and apply a tokio::time::timeout around the
join (or use tokio::time::timeout on the spawn_blocking future) so the restore
operation fails after a configurable duration; ensure you handle the timeout
case by killing the child process or returning a contextual error from the
function (look for the restore function that calls Command::new and modify the
block that creates/output() the process to use spawn_blocking +
tokio::time::timeout and propagate a descriptive error when the timeout
elapses).
🧩 Analysis chain
🌐 Web query:
Does sqlpackage support passing credentials via environment variables or configuration files instead of command line arguments?
💡 Result:
No. SqlPackage does not provide a documented mechanism to pass database credentials via environment variables or external configuration files instead of supplying them to the SqlPackage command (either directly via command-line parameters or inside the connection string / publish profile files you pass). Microsoft's SqlPackage docs show credentials are provided through command-line parameters such as /TargetUser and /TargetPassword (SQL authentication) and via /TargetConnectionString; they don't document any "credentials-from-env-var" or "credentials-from-config-file" feature for SQL authentication. [1][2]

What is supported as alternatives to "username + password on the command line":
1) Token-based authentication using /AccessToken (or /at). [3][4]
2) Microsoft Entra authentication methods (including Managed Identity) via connection string authentication keywords. [5]
3) Publish profiles: you can pass a .publish.xml profile using /Profile:. Profiles can store deployment configuration, but the docs still treat the profile as an input file you pass explicitly, not something SqlPackage reads automatically for credentials. [6][7]

Note: SqlPackage does check some environment variables, but the documented example is for diagnostics/performance tracing (e.g., DACFX_PERF_TRACE), not for supplying database credentials. [3]

Bottom line: to keep secrets off the command line, use supported authentication alternatives (for example, Entra Managed Identity or access tokens) or supply the connection string indirectly from your orchestration system; there is no documented "credentials via environment variables / config file" feature in SqlPackage itself. [5][1][2]
Citations:
- 1: https://learn.microsoft.com/en-us/sql/tools/sqlpackage/sqlpackage-import?view=sql-server-ver16
- 2: https://learn.microsoft.com/en-us/sql/tools/sqlpackage/cli-reference?view=sql-server-ver17
- 3: https://learn.microsoft.com/en-us/sql/tools/sqlpackage/troubleshooting-issues-and-performance-with-sqlpackage?view=sql-server-ver16
- 4: https://docs.microsoft.com/sql/tools/sqlpackage/sqlpackage-export
- 5: https://learn.microsoft.com/en-us/sql/tools/sqlpackage/sqlpackage?view=sql-server-ver17
- 6: https://learn.microsoft.com/en-us/sql/tools/sql-database-projects/concepts/publish-profiles?view=sql-server-ver17
- 7: https://learn.microsoft.com/en-us/sql/tools/sqlpackage/sqlpackage-publish?view=sql-server-ver15
Major: Plaintext password exposure in process list.
The connection string passed to sqlpackage via /tcs: (line 26) is visible to all users on the system through ps or /proc, exposing the database password.
SqlPackage does not support passing credentials via environment variables or config files. To avoid this exposure, refactor to use one of these alternatives:
- Token-based authentication: use /at:<access-token> instead of embedding credentials in the connection string
- Microsoft Entra authentication: use Entra managed identities or Entra tokens in the connection string
- Publish profile file: pass credentials via a .publish.xml file with the /Profile: parameter and restrict file permissions
Choose the authentication method that fits your deployment environment.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/domain/mssql/restore.rs` around lines 24 - 29, The current call in
restore.rs builds sqlpackage with a plaintext connection string via
format!("/tcs:{}", connection_string) which exposes credentials; update the
restore flow in the function that calls Command::new("sqlpackage") to stop
passing /tcs:. Instead, implement one of the recommended approaches: (A)
generate a temporary publish profile XML and pass it via /Profile:<path> (ensure
file permissions are set to owner-only and delete the file after use), or (B)
accept and pass a short-lived access token via /at:<access-token> (obtain token
from MS Entra/managed identity) and remove any /tcs argument; update the
argument assembly in restore.rs to use /Profile: or /at: accordingly, handle
token/profile creation and secure cleanup, and add proper error handling and
logging around those steps.
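As a sketch of the publish-profile route (option A in the prompt above): write the connection string into a temporary file readable only by the owner, then pass it to sqlpackage via /Profile: and delete it afterwards. The XML shape below is illustrative only, not the verified SqlPackage profile schema, and the permission handling is Unix-only:

```rust
use std::fs;
use std::io::Write;
use std::os::unix::fs::PermissionsExt;

// Write an illustrative publish profile with owner-only permissions so the
// password never appears in `ps` output. The XML here is a stand-in, NOT the
// real sqlpackage publish-profile schema.
fn write_profile(path: &str, conn: &str) -> std::io::Result<()> {
    let mut f = fs::File::create(path)?;
    writeln!(f, "<Project><PropertyGroup>")?;
    writeln!(f, "  <TargetConnectionString>{}</TargetConnectionString>", conn)?;
    writeln!(f, "</PropertyGroup></Project>")?;
    let mut perms = f.metadata()?.permissions();
    perms.set_mode(0o600); // owner read/write only
    fs::set_permissions(path, perms)
}

fn main() -> std::io::Result<()> {
    let path = "/tmp/portabase_example.publish.xml";
    write_profile(path, "Server=tcp:localhost,1433;Database=app;")?;
    let mode = fs::metadata(path)?.permissions().mode() & 0o777;
    assert_eq!(mode, 0o600);
    // The restore code would now invoke sqlpackage with /Profile:<path>,
    // then remove the file even on error paths (e.g. via a drop guard).
    fs::remove_file(path)
}
```

A drop guard (or `tempfile`-style RAII) is worth adding in real code so the profile is cleaned up even when sqlpackage fails.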
Err(e) => {
    error!("Backup failed for {}: {:?}", generated_id, e);
    Ok(BackupResult {
        generated_id,
        db_type,
        status: "failed".into(),
        backup_file: None,
        code: None,
    })
}
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check for callers of BackupService::run that might expect Err results
rg -n -C 3 -P 'BackupService::run\(' --type=rust
Repository: Portabase/agent
Length of output: 41
🏁 Script executed:
# First, let's read the file around lines 55-64 to understand context
cat -n src/services/backup/runner.rs | sed -n '40,80p'
Repository: Portabase/agent
Length of output: 1099
🏁 Script executed:
# Search for callers of backup-related functions more broadly
rg -n 'run\(' src/services/backup/runner.rs | head -20
Repository: Portabase/agent
Length of output: 147
🏁 Script executed:
# Search for any calls to methods in the runner module
rg -n 'backup.*run\|BackupRunner\|\.run\(' --type=rust -B 2 -A 2
Repository: Portabase/agent
Length of output: 41
🏁 Script executed:
# Look for tests or integration points
find . -name "*.rs" -type f | xargs grep -l "BackupService\|backup.*runner" 2>/dev/null | head -10Repository: Portabase/agent
Length of output: 378
🏁 Script executed:
# Let's look at the function signature and broader context in runner.rs
cat -n src/services/backup/runner.rs | sed -n '1,70p'
Repository: Portabase/agent
Length of output: 2459
🏁 Script executed:
# Check the dispatcher to see how it calls the runner
cat -n src/services/backup/dispatcher.rs
Repository: Portabase/agent
Length of output: 1526
🏁 Script executed:
# Check the service to see how it handles results
cat -n src/services/backup/service.rs
Repository: Portabase/agent
Length of output: 373
🏁 Script executed:
# Find execute_backup to understand the call chain
rg -n 'execute_backup' src/services/backup/ -A 5
Repository: Portabase/agent
Length of output: 876
🏁 Script executed:
# Look for any version control history or comments about this change
git log --oneline src/services/backup/runner.rs | head -20
Repository: Portabase/agent
Length of output: 91
🏁 Script executed:
# Check if there are tests that show the expected behavior
find . -name "*.rs" -path "*/test*" -o -name "*test.rs" | xargs grep -l "BackupService\|backup.*run" 2>/dev/nullRepository: Portabase/agent
Length of output: 43
🏁 Script executed:
# Look at execute_backup to see the full chain
cat -n src/services/backup/executor.rs | head -60
Repository: Portabase/agent
Length of output: 1765
🏁 Script executed:
# Check BackupResult structure to understand the status field
cat -n src/services/backup/result.rs
Repository: Portabase/agent
Length of output: 1668
🏁 Script executed:
# Let's see how the error handling differs between ping and backup
# by looking at the complete context of both paths
cat -n src/services/backup/runner.rs | sed -n '15,66p'
Repository: Portabase/agent
Length of output: 1943
🏁 Script executed:
# Check if there's any documentation about the error handling strategy
rg -n 'BackupResult\|status.*failed\|error.*handling' src/services/backup/mod.rs -B 2 -A 2
Repository: Portabase/agent
Length of output: 41
🏁 Script executed:
# Check if there are other callers of BackupService::run outside of execute_backup
rg -n 'Self::run\|BackupService::run' src/services/backup/ -A 2
Repository: Portabase/agent
Length of output: 41
🏁 Script executed:
# Look at the full models to understand BackupResult structure
cat -n src/services/backup/models.rs
Repository: Portabase/agent
Length of output: 699
Inconsistent error handling: backup operation errors converted to Ok while ping errors propagate as Err.
The BackupService::run method has inconsistent error handling paths. Ping failures propagate as Err(e.into()) (line 22), but backup operation failures are logged and returned as Ok(BackupResult { status: "failed" }) (lines 55-64). This creates an inconsistent error contract within the same method.
While the current calling code in execute_backup is adapted to check result.status instead of expecting Err, the inconsistency should be resolved by either:
- Propagating all errors consistently, or
- Converting ping errors to Ok(BackupResult) for uniform error handling through the status field.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/services/backup/runner.rs` around lines 55 - 64, The error handling in
BackupService::run is inconsistent: change the backup operation Err arm
(currently returning Ok(BackupResult { status: "failed" })) to propagate the
error like the ping path by returning Err(e.into()), so both ping and backup
failures use the same Result-based error contract; update any callers such as
execute_backup and related tests to handle Err results (or adjust their logic
accordingly) after this change.
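One way to picture the uniform contract (types simplified for illustration; the real BackupResult carries more fields than shown here): fold both the ping failure and the backup failure into the status field, so callers only ever inspect status and never a mixed Ok/Err surface:

```rust
// Simplified stand-in for the project's BackupResult; only the fields
// relevant to the status-based contract are kept.
#[derive(Debug, PartialEq)]
struct BackupResult {
    status: String,
    backup_file: Option<String>,
}

// Both failure modes produce the same shape: the caller branches on
// `status` alone, never on a Result.
fn run(ping: Result<(), String>, backup: Result<String, String>) -> BackupResult {
    if ping.is_err() {
        return BackupResult { status: "failed".into(), backup_file: None };
    }
    match backup {
        Ok(file) => BackupResult { status: "success".into(), backup_file: Some(file) },
        Err(_) => BackupResult { status: "failed".into(), backup_file: None },
    }
}

fn main() {
    // Ping failure and backup failure now look identical to the caller.
    assert_eq!(run(Err("refused".into()), Ok("db.bak".into())).status, "failed");
    assert_eq!(run(Ok(()), Err("disk full".into())).status, "failed");
    assert_eq!(run(Ok(()), Ok("db.bak".into())).status, "success");
    println!("ok");
}
```

The opposite resolution (propagating everything as Err) works equally well; what matters is that both paths pick the same side of the contract.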
init_tracing_for_test();

let container = start_container().await;
tokio::time::sleep(Duration::from_secs(30)).await;
🛠️ Refactor suggestion | 🟠 Major | ⚡ Quick win
Replace fixed sleep with container readiness polling.
All three tests use a hardcoded 30-second sleep (lines 65, 82, 111) to wait for the MSSQL container. This is brittle: the container might be ready sooner (wasting CI time) or need longer (causing flaky test failures).
🔄 Proposed polling-based approach
+async fn wait_for_container_ready(host: &str, port: u16, max_wait_secs: u64) -> Result<(), String> {
+ let mut config = Config::new();
+ config.host(host);
+ config.port(port);
+ config.authentication(AuthMethod::sql_server("sa", SA_PASSWORD));
+ config.trust_cert();
+
+ let start = std::time::Instant::now();
+ loop {
+ if start.elapsed().as_secs() > max_wait_secs {
+ return Err(format!("Container not ready after {} seconds", max_wait_secs));
+ }
+
+ match TcpStream::connect(config.get_addr()).await {
+ Ok(tcp) => {
+ if let Ok(mut client) = Client::connect(config.clone(), tcp.compat_write()).await {
+ // Try a simple query to verify the server is fully ready
+ if client.simple_query("SELECT 1").await.is_ok() {
+ return Ok(());
+ }
+ }
+ }
+ Err(_) => {}
+ }
+ tokio::time::sleep(Duration::from_millis(500)).await;
+ }
+}
#[tokio::test]
async fn mssql_ping_test() {
init_tracing_for_test();
let container = start_container().await;
- tokio::time::sleep(Duration::from_secs(30)).await;
+ let host = container.get_host().await.unwrap().to_string();
+ let port = container.get_host_port_ipv4(1433).await.unwrap();
+ wait_for_container_ready(&host, port, 60).await.expect("Container ready");

Apply similar changes to the other two tests.
Also applies to: 82-82, 111-111
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/tests/domain/mssql.rs` at line 65, Replace the fixed 30s sleep calls
(tokio::time::sleep(Duration::from_secs(30)).await) with a polling-based
readiness loop: attempt a lightweight connect to the MSSQL port (e.g.,
tokio::net::TcpStream::connect with the container host:port or a simple client
connection using your SQL client) in a loop with a short interval (e.g., 500ms),
stop when the connect succeeds, and fail after an overall timeout (e.g., 60s);
apply this change for all three occurrences in the tests so each test waits
until the container is actually ready instead of sleeping a fixed duration.
Summary by CodeRabbit
Release Notes
New Features
Tests
Chores