Pull request overview
This pull request migrates the codebase from SQLModel to pure SQLAlchemy 2.0 with separate Pydantic schemas for data validation. The migration maintains the same functionality while adopting the more explicit SQLAlchemy ORM pattern with typed mappings and separating database models from API schemas.
Key Changes:
- Replaced SQLModel with SQLAlchemy 2.0's `DeclarativeBase` and `Mapped` types for database models
- Separated database models from Pydantic schemas (using `PydanticBaseModel` for request/response validation)
- Updated the repository layer to convert between Pydantic schemas and SQLAlchemy models using `model_dump()`
Reviewed changes
Copilot reviewed 14 out of 15 changed files in this pull request and generated 7 comments.
| File | Description |
|---|---|
| `src/models/base.py` | Migrated from SQLModel to SQLAlchemy `DeclarativeBase` with `Mapped` columns and proper type annotations |
| `src/models/__init__.py` | Added `Base` export and updated example documentation to reflect new SQLAlchemy patterns |
| `src/repositories/types.py` | Updated type bounds from `SQLModel` to `Base` (SQLAlchemy) and `PydanticBaseModel` |
| `src/repositories/sqlalchemy.py` | Changed from `model_validate()` to `model_dump()` for converting Pydantic to SQLAlchemy models |
| `src/migrations/env.py` | Updated metadata reference from `SQLModel.metadata` to `Base.metadata` |
| `src/migrations/script.py.mako` | Removed `sqlmodel` import from migration template |
| `src/schemas/__init__.py` | Added example Pydantic schema structure with `from_attributes` configuration |
| `tests/conftest.py` | Updated to use `Base.metadata` instead of `SQLModel.metadata` |
| `tests/test_repository.py` | Migrated test models to SQLAlchemy pattern with separate Pydantic schemas |
| `pyproject.toml` | Removed `sqlmodel` dependency |
| `docs/developing.md` | Updated documentation examples to show new SQLAlchemy 2.0 pattern |
| `docs/architecture.md` | Updated references from SQLModel to SQLAlchemy |
| `README.md` | Updated Built With section to reference SQLAlchemy and Pydantic separately |
| `docker-compose.yml` | Modified PostgreSQL volume path (appears unrelated to migration) |
```diff
       retries: 5
     volumes:
-      - db-data:/var/lib/postgresql/data
+      - db-data:/var/lib/postgresql/
```
The PostgreSQL data volume path has been changed from /var/lib/postgresql/data to /var/lib/postgresql/. This change appears unrelated to the SQLModel to SQLAlchemy migration and may cause data persistence issues. The standard PostgreSQL Docker image expects the data directory to be mounted at /var/lib/postgresql/data, not the parent directory. This could lead to the database not persisting data correctly between container restarts.
Suggested change:

```diff
-      - db-data:/var/lib/postgresql/
+      - db-data:/var/lib/postgresql/data
```
```python
        DateTime(timezone=True),
        nullable=True,
        default=lambda: datetime.now(timezone.utc),
        onupdate=lambda: datetime.now(timezone.utc),
```
The updated_at field uses a Python lambda for onupdate. SQLAlchemy does accept a Python callable here, but it is invoked client-side: the timestamp is refreshed only when the UPDATE is issued through the ORM or Core, never when rows are modified via raw SQL or by another client. If that is acceptable, the lambda works as written; otherwise consider `onupdate=text("TIMEZONE('utc', CURRENT_TIMESTAMP)")` to embed the expression in generated UPDATE statements, a database trigger for a true server-side guarantee, or handling the timestamp in the repository layer's update methods.
Suggested change:

```diff
-        onupdate=lambda: datetime.now(timezone.utc),
+        onupdate=utcnow(),
```
```diff
     id: Mapped[uuid.UUID] = mapped_column(
         UUID(as_uuid=True),
         primary_key=True,
         index=True,
-        sa_column_kwargs={"server_default": text("uuidv7()"), "unique": True},
+        unique=True,
+        default=uuid_ext_pkg.uuid7,
+        server_default=text("uuidv7()"),
     )
```
The id field is defined as Mapped[uuid.UUID] (non-optional) but has both a Python default and a server_default. While this works, the annotation should be Mapped[Optional[uuid.UUID]] to reflect that the value is optional at creation time (SQLAlchemy will provide it). Alternatively, if you want to enforce that the ID is always present after instantiation, the current typing is acceptable, but be aware that before database insertion, the field may be None unless explicitly set.
```python
# Example Pydantic schemas for user data
# from pydantic import BaseModel


# class UserBase(BaseModel):
#     name: str
#     email: str


# class UserCreate(UserBase):
#     pass


# class UserRead(UserBase):
#     id: int

#     model_config = {"from_attributes": True}
```
Missing model_config on Pydantic schemas. For Pydantic schemas that will be used with SQLAlchemy models (e.g., in read/response schemas), you should add model_config = ConfigDict(from_attributes=True) to allow Pydantic to read data from SQLAlchemy model attributes. While the test schemas shown here are only for input (Create/Update), this comment serves as a reminder for future UserRead or similar response schemas.
```python
class UserRead(PydanticBaseModel):
    id: uuid.UUID
    email: str
    full_name: Optional[str]
    is_active: bool
    created_at: Optional[datetime]
    updated_at: Optional[datetime]
```
The UserRead schema is missing `model_config = ConfigDict(from_attributes=True)` (or the older `model_config = {"from_attributes": True}` format shown in the commented example in `schemas/__init__.py`). This configuration is necessary to allow Pydantic to read data from SQLAlchemy model instances. Without it, attempting to create a `UserRead` instance from a `User` database object via `model_validate()` will fail.
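A minimal sketch of the fix, assuming Pydantic v2; the plain `UserORM` class below is a stand-in for a SQLAlchemy model instance, since `from_attributes` works with any attribute-bearing object.

```python
import uuid
from typing import Optional

from pydantic import BaseModel, ConfigDict


class UserORM:
    """Stand-in for a SQLAlchemy User instance."""

    def __init__(self) -> None:
        self.id = uuid.uuid4()
        self.email = "ada@example.com"
        self.full_name = None


class UserRead(BaseModel):
    # Without from_attributes=True, model_validate(UserORM())
    # raises a ValidationError.
    model_config = ConfigDict(from_attributes=True)

    id: uuid.UUID
    email: str
    full_name: Optional[str]


user = UserRead.model_validate(UserORM())
```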
```diff
 class BaseTest(BaseModel):
     __tablename__ = "test"

-    email: str = Field(unique=True, index=True, max_length=255)
-    name: str = Field(max_length=100)
-    is_active: bool = Field(default=True)
+    email: Mapped[str] = mapped_column(String(255), unique=True, index=True, nullable=False)
+    name: Mapped[str] = mapped_column(String(100), nullable=False)
+    is_active: Mapped[bool] = mapped_column(default=True, nullable=False)
```
The BaseTest model defined in this test file will not be automatically included in `Base.metadata` during the `setup_database` fixture in `conftest.py`, because the model is defined in the test file and is not imported when `conftest.py` runs. To ensure the test table is created, either: 1) import the `BaseTest` model in `conftest.py` before creating tables, 2) import all test models in `tests/__init__.py`, or 3) create the table manually in a test fixture. Without one of these fixes, the tests may fail with "relation does not exist" errors.
```python
# Example Pydantic schemas for user data
# from pydantic import BaseModel


# class UserBase(BaseModel):
#     name: str
#     email: str


# class UserCreate(UserBase):
#     pass


# class UserRead(UserBase):
#     id: int

#     model_config = {"from_attributes": True}
```
This comment appears to contain commented-out code.
Suggested change:

```python
# Example Pydantic schemas for user data
# from pydantic import BaseModel
# class UserBase(BaseModel):
#     name: str
#     email: str
# class UserCreate(UserBase):
#     pass
# class UserRead(UserBase):
#     id: int
#     model_config = {"from_attributes": True}
```
Pull Request

Description

Brief description of what this PR does.

Type of Change

Related Issue

Fixes #(issue number)

Changes Made

Testing

- [ ] `make test`

Test Results:

Code Quality

- [ ] `make precommit-run`

Database Changes

- [ ] `make alembic-make-migrations`

Breaking Changes

Breaking Changes Details:

Documentation

Deployment

Screenshots (if applicable)

Checklist

Additional Notes

Any additional information, concerns, or questions about this PR.