A website that allows you to view the investments of America's largest investors.
This repository holds the back-end code for wallstreetlocal; for the front-end, see here.
This project uses Docker. To deploy, run the following command.
docker compose -f docker-compose.yaml up
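If the build succeeds, docker compose should report the containers defined in the compose file as running (in the production file below: backend, cache, database, and search).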
To run both the development and production builds, you will need environment variables for several third-party APIs. Most of the environment variables in the provided compose files can be kept as they are, but for the API keys you will need accounts with the following services:
- Finnhub
- Alpha Vantage
- OpenFIGI

These three services provide the most up-to-date and accurate data while also avoiding rate-limiting.
For telemetry, wallstreetlocal uses Sentry. You can sign up here.
Sentry is a paid service, although it has a free trial. If you are a student, there is also a free upgrade available.
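When telemetry is enabled, the back end presumably initializes the Sentry SDK from the SENTRY_DSN and TELEMETRY variables shown in the example .env further down. A rough sketch of that wiring (not the repository's exact code):

import os

import sentry_sdk

# Only initialize Sentry when telemetry is enabled and a DSN is configured.
if os.getenv("TELEMETRY", "False") == "True" and os.getenv("SENTRY_DSN"):
    sentry_sdk.init(
        dsn=os.environ["SENTRY_DSN"],
        environment=os.getenv("ENVIRONMENT", "development"),
        traces_sample_rate=1.0,  # capture every transaction; lower this under heavy load
    )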
The development build is mainly made for testing, so it is ideal for self-hosting.
A full list of this app's microservices (a minimal client sketch follows the list):
- FastAPI for the main application
- MongoDB for the database
- Redis for cache
- Meilisearch for search
- Sentry for telemetry
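FastAPI is the application itself; the other three data services are reached through their standard Python clients. As a minimal sketch only, assuming the stock pymongo, redis, and meilisearch packages and the variable names from the example .env below (the actual code may use different clients):

import os

import meilisearch
import redis
from pymongo import MongoClient

# Connection details come from the environment (see the example .env below).
database = MongoClient(os.environ["MONGO_SERVER_URL"])  # MongoDB document store
cache = redis.Redis(
    host=os.environ["REDIS_SERVER_URL"],
    port=int(os.getenv("REDIS_PORT", 6379)),
)  # Redis cache
search = meilisearch.Client(
    os.environ["MEILI_SERVER_URL"],
    os.environ["MEILI_MASTER_KEY"],
)  # Meilisearch full-text search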
To run the full app, you need the microservices running through Docker and the main application running separately.
- Run the microservices by calling the development docker-compose.yaml.
docker compose -f docker-compose.yaml up
- Run the main application (with configured environment variables); a sketch of a possible entry point follows the command.
python main.py
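main.py is not reproduced here, but given the HOST, EXPOSE_PORT, and WORKERS variables it presumably amounts to starting Uvicorn along these lines; the "main:app" import string is a placeholder, not necessarily the repository's actual module path:

import os

import uvicorn

if __name__ == "__main__":
    # "main:app" is a placeholder import string; the real FastAPI app may live elsewhere.
    uvicorn.run(
        "main:app",
        host=os.getenv("HOST", "0.0.0.0"),
        port=int(os.getenv("EXPOSE_PORT", 8000)),
        workers=int(os.getenv("WORKERS", 1)),
        forwarded_allow_ips=os.getenv("FORWARDED_ALLOW_IPS", "*"),
    )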
docker-compose.yaml (Development)
services:
  cache:
    container_name: cache
    build:
      context: ./cache
      dockerfile: Dockerfile
    restart: always
    networks:
      - staging
    ports:
      - 6379:6379
  database:
    container_name: database
    build:
      context: ./database
      dockerfile: Dockerfile
    volumes:
      - ./database/main_db:/data/db
    restart: always
    networks:
      - staging
    ports:
      - 27017:27017
  search:
    container_name: search
    build:
      context: ./search
      dockerfile: Dockerfile
    volumes:
      - ./search/search_db:/meili_data
    restart: always
    networks:
      - staging
    ports:
      - 7700:7700
networks:
  staging:
    driver: bridge
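With this file, the microservices are published on the host: Redis on 6379, MongoDB on 27017, and Meilisearch on 7700, matching the URLs in the development .env below.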
Example .env (Development)
SERVER = "127.0.0.1"
APP_NAME = "backend"
ENVIRONMENT = "development"
ADMIN_PASSWORD = "***********"
DEBUG_CIK = "1067983"
WORKERS = 1
HOST = "0.0.0.0"
EXPOSE_PORT = 8000
FORWARDED_ALLOW_IPS = "*"
FINN_HUB_API_KEY ="***********"
ALPHA_VANTAGE_API_KEY ="***********"
OPEN_FIGI_API_KEY = "***********"
MONGO_SERVER_URL = "mongodb://${SERVER}:27017"
MONGO_BACKUP_URL = "1LT4xiFJkh6YlAPQDcov8YIKqcvevFlEE"
REDIS_SERVER_URL = "${SERVER}"
REDIS_PORT = 6379
MEILI_SERVER_URL = "http://${SERVER}:7700"
MEILI_MASTER_KEY = "***********"
SENTRY_DSN = ""
TELEMETRY = False
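In development these values live in a .env file rather than in the compose file, so the back end has to load them itself. One common way to do that, shown here only as a sketch assuming python-dotenv (this repository may load its settings differently):

import os

from dotenv import load_dotenv

# Pull the .env file in the working directory into os.environ.
load_dotenv()

MONGO_SERVER_URL = os.environ["MONGO_SERVER_URL"]  # e.g. mongodb://127.0.0.1:27017
MEILI_SERVER_URL = os.environ["MEILI_SERVER_URL"]  # e.g. http://127.0.0.1:7700
REDIS_SERVER_URL = os.environ["REDIS_SERVER_URL"]  # e.g. 127.0.0.1
DEBUG = os.getenv("ENVIRONMENT", "development") != "production"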
The production build is made for running at scale, so depending on your setup you may want to:
- Run on only one worker
- Map all Docker ports to localhost (so the microservices are not exposed publicly)
To run the full application with all required microservices, you need just one command.
docker compose -f docker-compose.yaml up
docker-compose.yaml (Production)
version: "3.4"
services:
  backend:
    container_name: backend
    build:
      dockerfile: Dockerfile.prod
    restart: always
    depends_on:
      - database
      - cache
      - search
    volumes:
      - ./public:/app/public
    networks:
      - proxy-network
    environment:
      APP_NAME: "backend"
      ENVIRONMENT: "production"
      ADMIN_PASSWORD: "***********"
      WORKERS: 9
      HOST: "0.0.0.0"
      EXPOSE_PORT: 8000
      FORWARDED_ALLOW_IPS: "*"
      FINN_HUB_API_KEY: "***********"
      ALPHA_VANTAGE_API_KEY: "***********"
      OPEN_FIGI_API_KEY: "***********"
      MONGO_SERVER_URL: "database"
      MONGO_BACKUP_URL: "1LT4xiFJkh6YlAPQDcov8YIKqcvevFlEE"
      REDIS_SERVER_URL: "cache"
      REDIS_PORT: 6379
      MEILI_SERVER_URL: "search"
      MEILI_MASTER_KEY: "***********"
      TELEMETRY: "False"
  cache:
    container_name: cache
    build:
      context: ./cache
      dockerfile: Dockerfile
    networks:
      - proxy-network
    restart: always
  database:
    container_name: database
    build:
      context: ./database
      dockerfile: Dockerfile
    networks:
      - proxy-network
    volumes:
      - ./database/main_db:/data/db
    restart: always
  search:
    container_name: search
    build:
      context: ./search
      dockerfile: Dockerfile
    volumes:
      - ./search/search_db:/meili_data
    networks:
      - proxy-network
    restart: always
networks:
  proxy-network:
    name: proxy-network
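Note that the production back end reaches the other services by their compose service names (MONGO_SERVER_URL is "database", REDIS_SERVER_URL is "cache", MEILI_SERVER_URL is "search"): all containers share the proxy-network bridge, so Docker's internal DNS resolves those names and no ports have to be published on the host. A reverse proxy attached to the same network would then expose the back end publicly.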