# Troubleshooting Guide

This guide documents common setup and runtime issues and their solutions.
## 1. Mixed Content Error on HTTPS

- **Symptom:** When accessing the application via a secure `https://` domain (e.g., through a Cloudflare tunnel), the browser console shows a "Mixed Content" error. The page loads, but API calls to the backend fail: `Mixed Content: The page at 'https://...' was loaded over HTTPS, but requested an insecure XMLHttpRequest endpoint 'http://...'`.
- **Cause:** The frontend is configured with an absolute `http://` URL for the backend API (`VITE_API_BASE_URL`). When the frontend is loaded over HTTPS, the browser blocks these insecure HTTP requests for security.
- **Solution:** Configure the Vite development server to act as a proxy. This makes the frontend environment-agnostic.
  - **Proxy Frontend Requests:** In `frontend/vite.config.ts`, add a `server.proxy` configuration to forward all requests starting with `/api` to the backend service within the Docker network (e.g., `target: 'http://backend:8000'`).
  - **Use Relative Paths:** Update the frontend API client (`frontend/src/services/api.ts`) to use relative paths (e.g., `/api/v1/auth/status`). Remove the hardcoded `baseURL` and the `VITE_API_BASE_URL` environment variable.
  - **Update CORS Origins:** Add your secure HTTPS domain to the `CORS_ORIGINS` list in `backend/.env`.
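The proxy step can be sketched as follows. This is a minimal illustration, not the project's actual config file: the service name `backend` and port `8000` follow the Docker setup described above, so adjust them to your own compose file.

```typescript
// frontend/vite.config.ts — minimal sketch of the server.proxy setup.
// The target host/port are assumptions based on the Docker service names above.
import { defineConfig } from 'vite';

export default defineConfig({
  server: {
    proxy: {
      // Forward every request starting with /api to the backend container.
      '/api': {
        target: 'http://backend:8000',
        changeOrigin: true,
      },
    },
  },
});
```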
## 2. Vite "Host Not Allowed" Error

- **Symptom:** When accessing the application via a custom domain, the browser console shows an error like: `Blocked request. This host ("custom.domain.com") is not allowed.`
- **Cause:** This is a security feature in Vite to prevent DNS rebinding attacks. By default, it only accepts requests from `localhost`.
- **Solution:** Explicitly allow your custom domain in the Vite configuration.
  - In `frontend/vite.config.ts`, add an `allowedHosts` array to the `server` configuration and include your domain name (e.g., `"librephotos.aashish.ai.in"`).
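A sketch of that change (the domain is the example from above; replace it with your own):

```typescript
// frontend/vite.config.ts — sketch of the allowedHosts addition only.
import { defineConfig } from 'vite';

export default defineConfig({
  server: {
    // Hostnames the dev server will answer for besides localhost.
    allowedHosts: ['librephotos.aashish.ai.in'],
  },
});
```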
## 3. API calls fail with 404 Not Found

- **Symptom:** Some API calls succeed, but others fail with a `404 Not Found` error, even though the endpoint exists on the backend.
- **Cause:** The failing API call is using a URL path that does not match the Vite proxy configuration. For our setup, all backend calls must be prefixed with `/api`.
- **Solution:** Ensure all API calls in the frontend code (e.g., in `services/api.ts` or components like `SetupForm.tsx`) are prefixed correctly.
  - Incorrect: `apiClient.post('/auth/setup', ...)`
  - Correct: `apiClient.post('/api/v1/auth/setup', ...)`
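One way to enforce this convention is a tiny normalizing helper. This is a hypothetical illustration, not part of the project; `apiPath` and the `/api/v1` prefix are assumptions based on the paths shown above.

```typescript
// Hypothetical helper: normalize a backend path so it always carries the
// /api prefix the Vite proxy matches on.
export function apiPath(path: string): string {
  // Leave already-correct paths alone; otherwise prepend the versioned prefix.
  return path.startsWith('/api/') ? path : `/api/v1${path}`;
}
```

Routing every call through such a helper (e.g., `apiClient.post(apiPath('/auth/setup'), ...)`) makes the 404 class of bug impossible to reintroduce.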
## 4. Admin Features or User Name Not Appearing After Login

- **Symptom:** After a user logs in, admin-specific UI elements (like a "User Management" link) or the user's full name do not appear in the navigation bar or dashboard.
- **Cause:** This is typically due to a failure in fetching the user's details after the login is complete. The `AuthContext` likely attempts to fetch user data from an endpoint like `/api/v1/users/me`, but the request fails. Common reasons include:
  - **Incorrect API Path:** The path in the `AuthContext`'s `fetchUserData` function is wrong (e.g., `/users/me` instead of `/api/v1/users/me`).
  - **Authentication Token Not Sent:** The API interceptor in `services/api.ts` is not correctly attaching the `Authorization: Bearer <token>` header. This can happen if the key used to retrieve the token from `localStorage` (e.g., `localStorage.getItem("authToken")`) does not match the key used when setting it during login (e.g., `localStorage.setItem("token", ...)`).
- **Solution:**
  - Verify the API path for fetching user data in `AuthContext.tsx` is correct and includes the `/api/v1` prefix.
  - Ensure the `localStorage` key is consistent across your application for setting and getting the authentication token.
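The key-mismatch bug can be designed out by centralizing the key in one module. The sketch below is hypothetical (the module, names, and `TokenStore` interface are not from the project); in the browser, `window.localStorage` satisfies the interface directly.

```typescript
// Hypothetical token-storage helper: login and the request interceptor both
// go through this module, so the localStorage key can never drift apart.
const TOKEN_KEY = 'authToken';

// Minimal interface so the helper also works with a test double;
// window.localStorage satisfies it in the browser.
export interface TokenStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

// Called from the login flow after a successful authentication.
export function saveToken(store: TokenStore, token: string): void {
  store.setItem(TOKEN_KEY, token);
}

// Called from the API interceptor to build the Authorization header.
export function authHeader(store: TokenStore): { Authorization?: string } {
  const token = store.getItem(TOKEN_KEY);
  return token ? { Authorization: `Bearer ${token}` } : {};
}
```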
## 5. "No routes matched location" for Admin Pages

- **Symptom:** Clicking a link to an admin-only section like `/admin/users` results in a blank page and a `No routes matched location` error in the console.
- **Cause:** The main application router in `App.tsx` has not been configured to handle the specific admin path.
- **Solution:**
  - Create a dedicated `AdminRoute.tsx` component that checks if the `user` from `AuthContext` has the `is_admin` flag set to true.
  - In `App.tsx`, update the `<Routes>` to include a nested route structure for admin pages, protected by both the standard `ProtectedRoute` (checks for a token) and the new `AdminRoute` (checks for admin privileges).
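The nested structure described above might look like the following sketch. The import paths, `useAuth` shape, and component names are assumptions (they follow the names used in this guide, not verified project code); it assumes React Router v6.

```typescript
// Hypothetical sketch of the nested admin route structure in App.tsx.
import { Routes, Route, Navigate, Outlet } from 'react-router-dom';
// Assumed to exist in the project, per the steps above:
import { useAuth } from './context/AuthContext';
import ProtectedRoute from './components/ProtectedRoute';
import UserManagementPage from './pages/UserManagementPage';

// Renders nested routes only for admins; everyone else is sent home.
function AdminRoute() {
  const { user } = useAuth();
  return user?.is_admin ? <Outlet /> : <Navigate to="/" replace />;
}

export function AppRoutes() {
  return (
    <Routes>
      {/* ProtectedRoute checks for a token; AdminRoute additionally checks is_admin */}
      <Route element={<ProtectedRoute />}>
        <Route element={<AdminRoute />}>
          <Route path="/admin/users" element={<UserManagementPage />} />
        </Route>
      </Route>
    </Routes>
  );
}
```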
## 6. User Management Page Crashes with "No QueryClient set"

- **Symptom:** After fixing routing, navigating to the `UserManagementPage` immediately crashes the application with the error `Uncaught Error: No QueryClient set, use QueryClientProvider to set one`.
- **Cause:** The page or one of its child components uses React Query hooks (`useQuery`, `useMutation`), but the root of the application is not wrapped in the required `<QueryClientProvider>`.
- **Solution:** In the main `App.tsx` file, instantiate a new `QueryClient` and wrap the entire application (typically the `<Router>` component) with `<QueryClientProvider client={queryClient}>`.
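A minimal sketch of that wiring. The import path assumes React Query v4+ (published as `@tanstack/react-query`); v3 used the `react-query` package instead.

```typescript
// App.tsx — minimal sketch of wiring up React Query's provider.
import { QueryClient, QueryClientProvider } from '@tanstack/react-query';
import { BrowserRouter } from 'react-router-dom';

// Create the client once, outside the component, so it is not
// recreated (and its cache wiped) on every re-render.
const queryClient = new QueryClient();

export default function App() {
  return (
    <QueryClientProvider client={queryClient}>
      <BrowserRouter>{/* ...routes... */}</BrowserRouter>
    </QueryClientProvider>
  );
}
```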
## 7. Docker Compose fails with "unhealthy" container error

- **Symptom:** Running `docker-compose up` or `docker-compose run` fails immediately with an error similar to: `ERROR: for backend Container "..." is unhealthy`.
- **Cause:** This typically happens when a previous `docker-compose up` command was interrupted or included a service that is designed to exit (like our `test` service). This leaves behind a stopped container that Docker marks as "unhealthy", and subsequent commands try to attach to this stale container instead of creating a new one.
- **Solution:** Perform a clean reset of the Docker environment for the project.
  - Stop and remove the project's containers and networks by running the following command from the project root (note that this does not remove volumes; see section 10 for that):

    ```bash
    docker-compose down
    ```

  - You can now start the application fresh with `docker-compose up --build db backend frontend`.
- If the error persists, you may need to perform a more aggressive system-wide prune to remove all unused Docker data (stopped containers, networks, and build cache). **Warning:** This will affect all projects on your machine, not just this one.

  ```bash
  # This is a more forceful cleanup
  docker system prune -a -f
  ```

  After running the prune command, try starting the application again.
## 8. Backend crashes with UndefinedColumn or ProgrammingError

- **Symptom:** The backend starts but then crashes when you try to access a page. The logs show an error like: `sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedColumn) column "..." does not exist`.
- **Cause:** The application's SQLAlchemy models (in `backend/app/models/`) have been updated, but the corresponding Alembic migration has not been generated or applied. This causes the database schema to be out of sync with the application's expectations.
- **Solution:** Generate a new migration script to align the database schema with your model changes.
  - **Generate a new migration script.** Run the following command from the project root, replacing the message with a description of your change. This command executes Alembic inside a temporary backend container.

    ```bash
    docker-compose run --rm backend alembic revision --autogenerate -m "Describe your model change here"
    ```

  - **Apply the migration.** Restart the application; the `entrypoint.sh` script will automatically apply the new migration script on startup.

    ```bash
    docker-compose up --build -d backend
    ```
## 9. E2E tests fail with net::ERR_CONNECTION_REFUSED

- **Symptom:** The Playwright E2E test suite fails immediately with a connection error: `Error: page.goto: net::ERR_CONNECTION_REFUSED at http://localhost:3000/`.
- **Cause:** The E2E tests run inside a Docker container (`e2e-tests`). From within this container, `localhost` refers to the container itself, not the `frontend` service, so the test runner is trying to connect to a web server that doesn't exist at that address.
- **Solution:** The `baseURL` in the Playwright configuration must be updated to use the Docker service name, which is resolvable on the shared Docker network.
  - Open the Playwright config file: `e2e/playwright.config.ts`.
  - Change the `baseURL` from `http://localhost:3000` to `http://frontend:3000`.
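The relevant part of the config would look roughly like this sketch (other options in the real file are omitted):

```typescript
// e2e/playwright.config.ts — sketch of the baseURL change only.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  use: {
    // Docker service name, resolvable on the shared compose network;
    // this replaces the old 'http://localhost:3000'.
    baseURL: 'http://frontend:3000',
  },
});
```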
## 10. How to Reset the Database

- **Symptom:** You need to completely wipe all data and start with a fresh, empty database. This is often necessary after major schema changes or to clear out test data.
- **Cause:** The database data is stored in a persistent Docker volume (`postgres_data`). Simply stopping and starting the containers with `docker-compose down` and `docker-compose up` will not delete this data.
- **Solution:** Use the `-v` flag with `docker-compose down` to remove the volumes along with the containers.
  - Stop and remove all containers, networks, and volumes:

    ```bash
    docker-compose down -v
    ```

  - Start the application fresh:

    ```bash
    docker-compose up --build db backend frontend
    ```

  - **Perform initial setup:** Since the database is now empty, you will need to go to `http://localhost:3000` and create the initial admin user again.
## 11. E2E tests are "flaky" or fail with database errors

- **Symptom:** The E2E test suite fails intermittently with errors like `relation "users" does not exist` or other database-related issues. The failures might not happen on every run, and a re-run might pass.
- **Cause:** This is a race condition. By default, Playwright runs test files in parallel. Both of our E2E test files (`admin-user-management.spec.ts` and `portfolio-and-dashboard.spec.ts`) have a `beforeAll` hook that resets the same shared database. When run in parallel, one test's setup can wipe the database while the other is in the middle of its run, causing unpredictable failures.
- **Solution:** The project has been configured to force serial execution for E2E tests to prevent this race condition.
  - Open the Playwright config file: `e2e/playwright.config.ts`.
  - Ensure that the `workers` property is set to `1`. This forces all test files to run one after another in the same process.

    ```typescript
    import { defineConfig } from '@playwright/test';

    export default defineConfig({
      // ...
      /* Run tests in files in parallel */
      fullyParallel: false,
      /* Fail the build on CI if you accidentally left test.only in the source code. */
      forbidOnly: !!process.env.CI,
      /* Opt out of parallel tests on CI. */
      workers: 1,
      // ...
    });
    ```
After applying any of these fixes, always remember to rebuild your Docker containers (`docker-compose up --build`) to ensure the changes take effect.
## 16. Backend tests fail with UnsupportedCompilationError or "TypeError: ... is not JSON serializable"

- **Symptom:** The backend test suite passes when run against PostgreSQL but fails when run against the local SQLite database with errors like `sqlalchemy.exc.UnsupportedCompilationError: Compiler <...> can't render element of type <JSONB>` or `TypeError: Object of type UUID is not JSON serializable`.
- **Cause:** This is caused by subtle differences in how database backends and their SQLAlchemy dialects handle data types.
  - **`JSONB` vs. `JSON`:** `JSONB` is a binary, optimized JSON type specific to PostgreSQL. The SQLite dialect doesn't know how to handle it.
  - **UUID Serialization:** The standard Python `json` library, which the SQLite driver often uses, does not know how to serialize `uuid.UUID` objects by default. The PostgreSQL driver might handle this conversion implicitly, masking the issue.
- **Solution:**
  - **Use Generic Types:** In your SQLAlchemy models, always prefer generic types over database-specific ones if you need to support multiple backends. Use `from sqlalchemy.types import JSON` instead of `from sqlalchemy.dialects.postgresql import JSONB`.
  - **Explicitly Cast Data:** Never assume a database driver will correctly serialize complex Python objects for a JSON field. Before inserting a dictionary into a `JSON` column, explicitly cast any non-standard types to strings.

    ```python
    # Incorrect (may fail on SQLite)
    details = {"user_id": some_uuid_object}

    # Correct (will work everywhere)
    details = {"user_id": str(some_uuid_object)}
    ```
## 15. Backend crashes with DetachedInstanceError on DELETE

- **Symptom:** An API endpoint that deletes an object from the database fails with a `500 Internal Server Error`. The backend logs show `DetachedInstanceError: Parent instance <...> is not bound to a Session; lazy load operation of attribute '...' cannot proceed`.
- **Cause:** This happens when you delete an object from the database and then try to return that same object in the API response. After `db.commit()`, the SQLAlchemy object becomes "detached" from the session. If your Pydantic `response_model` for the endpoint includes a related field (e.g., returning a `WatchlistItem` that includes its `asset`), FastAPI/Pydantic will try to access that relationship. Because the object is detached, SQLAlchemy tries to lazy-load the relationship from the database, but it can't, which causes the crash.
- **Solution:** You must eagerly load any required relationships *before* you delete the object. This ensures all the data needed for the response is already loaded into the object's memory before it's detached from the session.
  - **Incorrect (will crash):**

    ```python
    @router.delete("/items/{item_id}")
    def delete_item(item_id: int, db: Session = Depends(get_db)):
        item = crud.item.remove(db, id=item_id)  # Fetches and deletes
        db.commit()
        return item  # Crashes here if response_model needs item.related_thing
    ```

  - **Correct:**

    ```python
    from sqlalchemy.orm import joinedload

    @router.delete("/items/{item_id}")
    def delete_item(item_id: int, db: Session = Depends(get_db)):
        # Eagerly load the item AND its relationship first
        item = (
            db.query(ItemModel)
            .options(joinedload(ItemModel.related_thing))
            .filter(ItemModel.id == item_id)
            .first()
        )
        # ... (add checks for not found, permissions, etc.)

        # Now delete the object that has the data pre-loaded
        db.delete(item)
        db.commit()
        return item  # Works because item.related_thing is already loaded
    ```
## 12. Frontend tests fail with "Cannot find module" for libraries like Heroicons

- **Symptom:** The Jest test suite fails with errors like `Cannot find module '@heroicons/react/24/solid' from 'src/...'`. This happens even if the library is correctly installed.
- **Cause:** This is a complex issue with multiple potential causes, often related to how Jest's module resolver interacts with modern JavaScript packages, especially in a Vite + TypeScript + Docker environment. Common causes include:
  - **Jest's `moduleNameMapper` is not configured:** Jest doesn't know how to handle non-JavaScript imports (like SVGs from an icon library) by default.
  - **ES Module (ESM) vs. CommonJS (CJS) conflict:** The project's `package.json` may have `"type": "module"`, which tells Node.js to treat `.js` files as ES Modules. However, Jest's configuration files (`jest.config.js`) and mock files often use the older CommonJS syntax (`module.exports`). This conflict can break module resolution.
- **Solution:** A robust, multi-step solution is required to stabilize the test environment.
  - **Use a dedicated Jest config file.** Move all Jest configuration out of `package.json` and into a dedicated `frontend/jest.config.cjs` file. Note the `.cjs` extension, which explicitly tells Node.js to treat this file as a CommonJS module, resolving the ESM/CJS conflict.
  - **Configure `moduleNameMapper` in `jest.config.cjs`.** Add a `moduleNameMapper` to intercept imports from problematic libraries and redirect them to a manual mock.

    ```javascript
    // frontend/jest.config.cjs
    module.exports = {
      // ... other config
      moduleNameMapper: {
        '^@heroicons/react/24/(outline|solid)$': '<rootDir>/src/__mocks__/heroicons.cjs',
      },
    };
    ```

  - **Create a robust mock file.** Create a mock file at `frontend/src/__mocks__/heroicons.cjs`. Note the `.cjs` extension; this file should also use CommonJS syntax. Using a JavaScript `Proxy` is a robust way to mock any named export from the library.

    ```javascript
    // frontend/src/__mocks__/heroicons.cjs
    const React = require('react');
    module.exports = new Proxy({}, { get: () => () => React.createElement('div') });
    ```

  - **Update `package.json`.** Ensure the `test` script points to the new configuration file: `"test": "jest --config jest.config.cjs"`.
## 13. Data Import Commit Fails

- **Symptom:** You upload a CSV file and the preview looks correct, but when you click "Commit Transactions", the process fails.
- **Cause:** The most common cause is that the transactions in the source CSV file are not in chronological order. If a SELL transaction for an asset appears in the file before its corresponding BUY transaction, the backend validation will correctly reject the SELL because it would result in a negative holding.
- **Solution:** The application logic has been updated to automatically sort all parsed transactions by date, ticker, and then type (BUY before SELL) before they are committed. This should resolve most issues. If you are still encountering errors, ensure your CSV file has the correct columns required by the selected parser (e.g., Zerodha, ICICI Direct).
## 14. Proactive Error Avoidance Plan

- **Objective:** To formalize a set of checks to avoid common environmental and access-related errors experienced in previous sessions.
- **Disk Space Issues (`df -h`)**
  - **Problem:** Previous sessions have failed due to "out of space" errors, especially when running tests or building large Docker images.
  - **Mitigation:** Before running any potentially disk-intensive command (like `docker-compose up --build`, `npm install`, or `pytest`), first check the available disk space.

    ```bash
    df -h
    ```

  - **Action:** If disk space is low (e.g., usage is above 90%), proactively clean up old Docker images, volumes, and build caches (`docker system prune -a -f`) or other temporary files before proceeding. When running containers, use the `--rm` flag where appropriate (e.g., `docker-compose run --rm test`) to ensure they are removed after execution.
- **Access/Path Failures (`pwd`, `ls -l`)**
  - **Problem:** Commands have failed because they were run from the wrong directory, or because a file or directory did not have the expected permissions.
  - **Mitigation:** Before running a command that depends on the current working directory or specific file paths, always verify your location and the existence/permissions of the target files.

    ```bash
    pwd
    ls -l path/to/your/file
    ```

  - **Action:** Ensure you are in the project's root directory before running `docker-compose` commands. Double-check that any file paths passed as arguments are correct and that the files are readable.
- **Docker Compose Pull Failures**
  - **Problem:** `docker-compose up --build` can sometimes fail if it has trouble pulling a base image from a remote registry.
  - **Mitigation:** To improve reliability, you can pre-pull the necessary images before running the main build command.

    ```bash
    # You can specify multiple services
    docker-compose pull db backend frontend
    ```

  - **Action:** If a pull fails, running `docker-compose pull <service_name>` directly can often provide a more specific error message than the generic `up` command. This also helps in caching the images locally for faster and more reliable subsequent builds.
## 17. Backend tests fail with "RuntimeError: Cannot encrypt data: master key is not loaded..."

- **Symptom:** A backend test suite that involves creating a `User` fails with a `RuntimeError` related to the `KeyManager`.
- **Cause:** The test requires the use of encrypted database fields (e.g., `User.full_name`). The `KeyManager` is responsible for handling the encryption keys, but it has not been initialized or "unlocked" for the test session. This is a common issue when running in `desktop` mode or in any test that uses the `EncryptedString` custom type.
- **Solution:** The test file needs to use the `pre_unlocked_key_manager` pytest fixture. This fixture ensures that a valid key is loaded into the `KeyManager` before any tests in the module are run.
  - **Import pytest:** Add `import pytest` to the top of your test file.
  - **Apply the fixture:** Add the following line at the module level (at the top of the file, after the imports):

    ```python
    pytestmark = pytest.mark.usefixtures("pre_unlocked_key_manager")
    ```
## 18. Data is Not Updating (Stale Cache)

- **Symptom:** You create, update, or delete a transaction, but the portfolio summary, holdings view, or dashboard does not reflect the change. The old data is still being displayed.
- **Cause:** This is almost always due to a caching issue. The application uses a caching layer (Redis or DiskCache) to store the results of expensive calculations. If a data-mutating operation fails to correctly invalidate the cache, you will be served stale data.
- **Solution:**
  - **User Action:** The simplest way to force an invalidation is to perform another action on the same portfolio (e.g., add a small, temporary transaction). This will trigger the invalidation logic again.
  - **Developer Action (Bug Fix):** If the issue is reproducible, it indicates a bug. The endpoint that performed the data mutation (e.g., `POST /api/v1/bonds/`) is likely missing a call to `cache_utils.invalidate_caches_for_portfolio()`. This function must be called after any successful database write operation that affects a portfolio's state.
  - **Developer Action (Manual Invalidation):** To debug, you can manually clear the cache.
    - If using Redis: `docker compose exec redis redis-cli FLUSHALL`
    - If using DiskCache (e.g., in `desktop` or `sqlite` mode), you will need to find and delete the `cache` directory created by the application.