A microservices platform for managing hardware assets, tracking device assignments, and providing real-time support across an organization.
- Overview
- Architecture
- Services
- Kafka Event Flows
- Authentication & JWT
- Data Flow
- Getting Started
- Technology Stack
- Known Limitations
- Project Structure
## Overview

DeviceHub is a device management platform for organizations that need to track hardware assets across their workforce. When devices are assigned to employees, the platform tracks ownership, status, and lifecycle, and provides a way to handle issues as they arise.

Administrators get a centralized view of the entire device inventory — which devices exist, who they are assigned to, their status, and their history. Employees can see their assigned devices and, when something needs attention, contact support directly through real-time chat with automated responses and admin intervention when needed.

Under the hood, DeviceHub is built as a distributed system of independently deployable microservices, each with its own PostgreSQL database following the Database-per-Service pattern. Services communicate synchronously via HTTP through an API Gateway and asynchronously via Apache Kafka for event-driven synchronization.
## Architecture

The diagram below illustrates the full Docker Compose topology — every container, its runtime port and host mappings used for local development, and how components are connected at the infrastructure level.

DeviceHub follows a layered microservices architecture with a single public entry point:
```text
Browser / Client
        │
        ▼
API Gateway (:8080)  ← JWT validation, routing, CORS, WebSocket proxy
        │
        ├──► Auth Service (:8081)           ← Identity provider, JWT issuance
        ├──► User Service (:8082)           ← Business profile management
        ├──► Device Service (:8083)         ← Hardware asset lifecycle
        ├──► Monitoring Service (:8084)     ← Usage aggregation & alerting
        └──► Communication Service (:8085)  ← Real-time chat & notifications
                       │
               Apache Kafka  ← Async inter-service messaging
```
Communication patterns:

- Synchronous (HTTP): All client requests flow through the Gateway, which validates JWT tokens at the edge and injects a trusted `X-Authenticated-User` header before forwarding to downstream services.
- Asynchronous (Kafka): Services publish and consume domain events for data synchronization — user lifecycle on `sync.users`, device lifecycle on `sync.devices`, raw measurements on `device.measurements`, and alerts on `alerts`.
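To make the asynchronous side concrete, a `USER_CREATED` event on `sync.users` might carry a payload like the sketch below. The field names and shape are illustrative assumptions, not taken from the service code:

```json
{
  "eventType": "USER_CREATED",
  "userId": "a3b8c9d0-1234-4f5e-9a7b-2c3d4e5f6071",
  "username": "jdoe",
  "email": "jdoe@example.com"
}
```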
Key design decisions:

- The Gateway is the only service exposed to the outside world.
- Each service validates only the data it owns.
## Services

### Frontend

Port: 4200 | Tech: Angular, TypeScript, Tailwind CSS, Chart.js, ngx-charts, STOMP.js

Single-page application served via Nginx. It communicates with all backend services exclusively through the API Gateway. The JWT is stored in `localStorage` and attached to every HTTP request via an Angular interceptor, and to WebSocket connections via STOMP headers. Route guards enforce authentication state.
### API Gateway

Port: 8080 | Tech: Spring Cloud Gateway (WebFlux/Reactive), jjwt

The single entry point for all client traffic. It validates JWTs at the edge using a shared HS256 secret, strips any client-provided `X-Authenticated-User` header to prevent spoofing, and injects a trusted identity header before forwarding requests downstream. It also proxies WebSocket traffic to the Communication Service.

Public routes (`/api/auth/**`) bypass authentication. All other routes require a valid `Authorization: Bearer <token>` header — failures return HTTP 401 with `WWW-Authenticate: Bearer`.
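A routing configuration in this style could look roughly like the sketch below. The route ids, paths, and service hostnames are assumptions for illustration; Spring Cloud Gateway's built-in `RemoveRequestHeader` filter performs the header-stripping described above:

```yaml
spring:
  cloud:
    gateway:
      routes:
        - id: auth-service            # public route, no JWT required
          uri: http://auth-service:8081
          predicates:
            - Path=/api/auth/**
        - id: device-service          # protected route
          uri: http://device-service:8083
          predicates:
            - Path=/api/devices/**
          filters:
            # drop any spoofed identity header before the trusted one is injected
            - RemoveRequestHeader=X-Authenticated-User
```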
### Auth Service

Port: 8081 | Tech: Spring Security, jjwt, BCrypt, Apache Kafka

The internal identity provider. It handles user registration and login, hashes passwords with BCrypt, and issues signed JWTs (HS256). JWT claims include `sub` (username), `role` (ADMIN/CLIENT), and `userId` (UUID), allowing downstream services to identify users without additional database lookups.
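A decoded payload for such a token might look like the following (values are illustrative; `iat` and `exp` are the standard issued-at and expiry timestamps):

```json
{
  "sub": "jdoe",
  "role": "CLIENT",
  "userId": "a3b8c9d0-1234-4f5e-9a7b-2c3d4e5f6071",
  "iat": 1717243200,
  "exp": 1717246800
}
```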
On signup, the service publishes a `USER_CREATED` event to Kafka (`sync.users`) to synchronize the newly created identity with the User Service. It also consumes `USER_UPDATED` and `USER_DELETED` events to keep local authentication data consistent with profile changes initiated in the User Service.
### User Service

Port: 8082 | Tech: Spring Data JPA, Apache Kafka, Jakarta Validation

Manages the business profile of a user (username, email, first name, last name, timestamps). It is completely decoupled from authentication and holds no passwords or roles. User profiles are primarily created by consuming `USER_CREATED` events from Kafka (idempotently — events whose record already exists are skipped). Profile updates and deletions are propagated back to the Auth Service via `USER_UPDATED` and `USER_DELETED` events on the same `sync.users` topic, maintaining eventual consistency between the two services.
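The idempotent-consumer behavior described above can be sketched in a few lines. This is a minimal illustration with hypothetical names; an in-memory `Map` stands in for the user-service PostgreSQL database:

```typescript
// Sketch of idempotent USER_CREATED handling (names are hypothetical).
interface UserCreatedEvent {
  userId: string;
  username: string;
  email: string;
}

const profiles = new Map<string, UserCreatedEvent>();

// Returns true if a new profile was created, false if the event was a duplicate.
function onUserCreated(event: UserCreatedEvent): boolean {
  if (profiles.has(event.userId)) {
    return false; // already processed; skip (idempotent)
  }
  profiles.set(event.userId, event);
  return true;
}

// Kafka may redeliver the same event; the second call is a no-op.
const evt = { userId: "u-1", username: "jdoe", email: "jdoe@example.com" };
console.log(onUserCreated(evt)); // true  (profile created)
console.log(onUserCreated(evt)); // false (duplicate skipped)
```

Keying on `userId` makes redelivery safe, which matters because Kafka's default delivery guarantee is at-least-once.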
### Device Service

Port: 8083 | Tech: Spring Data JPA, jjwt, Apache Kafka

Manages the full lifecycle of hardware assets: creation, metadata updates, deletion, user assignment, and unassignment. Device records include category, serial number, status, location, and `maxHourlyConsumption`, the threshold used by the Monitoring Service for alert detection. The service publishes `DEVICE_CREATED`, `DEVICE_UPDATED`, and `DEVICE_DELETED` events to `sync.devices` to keep Monitoring in sync. It stores only the assigned user's UUID — no cross-service object embedding.
### Monitoring Service

Port: 8084 | Tech: Spring Data JPA, Apache Kafka, Hibernate

The analytics engine of the platform. It consumes raw device measurements from the `device.measurements` topic (published by an external device simulator) and aggregates them into hourly buckets stored with a unique constraint on `(device_id, day, hour)`. Each time a bucket is updated, the new total is compared against the configured usage threshold (sourced from `sync.devices` events stored locally in `monitored_devices`). If the limit is exceeded, an alert event is published to the `alerts` topic. Aggregation writes are transactional.

The service exposes a REST endpoint for querying hourly usage per device per day.
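The bucket-and-threshold logic can be sketched as below. This is an illustration under assumed names; `Map`s stand in for the `hourly_usage` and `monitored_devices` tables, and pushing to an array stands in for publishing to `alerts`:

```typescript
// Sketch of hourly-bucket aggregation with threshold checking (names hypothetical).
interface Measurement { deviceId: string; timestamp: Date; value: number; }

const buckets = new Map<string, number>();    // key: deviceId|day|hour (mimics the unique constraint)
const thresholds = new Map<string, number>(); // maxHourlyConsumption, from sync.devices
const alerts: string[] = [];                  // stands in for the `alerts` Kafka topic

function ingest(m: Measurement): void {
  const day = m.timestamp.toISOString().slice(0, 10); // e.g. "2024-06-01"
  const hour = m.timestamp.getUTCHours();
  const key = `${m.deviceId}|${day}|${hour}`;

  // Upsert: add the measurement into the existing hourly bucket.
  const total = (buckets.get(key) ?? 0) + m.value;
  buckets.set(key, total);

  // Compare the new total against the device's configured limit.
  const limit = thresholds.get(m.deviceId);
  if (limit !== undefined && total > limit) {
    alerts.push(`device ${m.deviceId} exceeded ${limit} in hour ${hour}`);
  }
}

thresholds.set("d-1", 10);
ingest({ deviceId: "d-1", timestamp: new Date("2024-06-01T09:15:00Z"), value: 6 });
ingest({ deviceId: "d-1", timestamp: new Date("2024-06-01T09:40:00Z"), value: 7 }); // total 13 > 10
console.log(alerts.length); // 1
```

In the real service the upsert and the threshold check happen in one database transaction, which is why a crash cannot leave a bucket half-updated.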
### Communication Service

Port: 8085 | Tech: Spring WebSocket (STOMP), Spring Kafka, jjwt, Google Gemini API

The real-time layer of the platform, with two distinct responsibilities:

Chat system: Users send messages via STOMP WebSocket to `/app/chat.send`. Messages are persisted to PostgreSQL, and the service attempts automated responses in a cascade — first via keyword-based rules, then via the Google Gemini API, and finally by escalating to admin review if both fail. Admins view conversations via a REST inbox endpoint and reply in real time.
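The cascade can be sketched as follows. All names here are hypothetical, and a stub that always fails stands in for the Gemini call so the final fallback is visible:

```typescript
// Sketch of the cascading auto-response: rules first, then AI, then escalation.
type Reply = { source: "rules" | "ai" | "escalated"; text: string };

const rules: Array<[RegExp, string]> = [
  [/password|reset/i, "You can reset your password from the login page."],
  [/broken|not working/i, "Please try restarting the device first."],
];

// Stand-in for the Gemini call; returns null when the AI cannot answer.
async function askAi(message: string): Promise<string | null> {
  return null; // assume failure here, to demonstrate escalation
}

async function respond(message: string): Promise<Reply> {
  for (const [pattern, answer] of rules) {
    if (pattern.test(message)) return { source: "rules", text: answer };
  }
  const aiAnswer = await askAi(message);
  if (aiAnswer !== null) return { source: "ai", text: aiAnswer };
  return { source: "escalated", text: "An admin will review your message." };
}

respond("My password reset fails").then(r => console.log(r.source)); // rules
respond("Something odd happened").then(r => console.log(r.source));  // escalated
```

The ordering is the cheap-to-expensive pattern: free keyword rules, then a paid AI call, then human attention only when both fail.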
Alert delivery: The service consumes `alerts` events from Kafka and pushes them directly to the connected user's WebSocket session via `/user/{userId}/queue/alerts`, enabling instant in-browser notification of usage threshold breaches. The JWT is validated at WebSocket CONNECT time to establish the user principal for targeted delivery.
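The alert payload format is not specified here; a plausible shape for an event pushed to `/user/{userId}/queue/alerts` could look like this (field names are assumptions):

```json
{
  "deviceId": "c1d2e3f4-9876-4abc-8def-0123456789ab",
  "hour": 9,
  "totalConsumption": 13.4,
  "maxHourlyConsumption": 10.0,
  "timestamp": "2024-06-01T09:40:00Z"
}
```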
## Kafka Event Flows

Topics summary:

| Topic | Producer | Consumer | Purpose |
|---|---|---|---|
| `sync.users` | Auth (`USER_CREATED`), User (`USER_UPDATED` / `USER_DELETED`) | User (`USER_CREATED`), Auth (`USER_UPDATED` / `USER_DELETED`) | User lifecycle synchronization |
| `sync.devices` | Device Service | Monitoring Service | Device metadata & usage limits |
| `device.measurements` | Device Simulator | Monitoring Service | Raw device measurements |
| `alerts` | Monitoring Service | Communication Service | Usage threshold alert delivery |
## Authentication & JWT

JWT claims: `sub` (username), `role` (ADMIN/CLIENT), `userId` (UUID), `iat`, `exp`.

The shared HS256 secret (`SECURITY_JWT_SECRET_KEY`) must be identical across `gateway-service`, `auth-service`, `device-service`, and `communication-service`.
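The reason the secret must match can be shown with a stripped-down HS256 sign/verify pair. This is a sketch using Node's built-in `crypto` module, not the platform's actual code (the JVM services use jjwt, which performs the equivalent HMAC-SHA256 check):

```typescript
// Minimal HS256 JWT sign/verify to show why the shared secret must match.
import { createHmac } from "node:crypto";

const b64url = (s: string): string => Buffer.from(s).toString("base64url");

function sign(payload: object, secret: string): string {
  const header = b64url(JSON.stringify({ alg: "HS256", typ: "JWT" }));
  const body = b64url(JSON.stringify(payload));
  const sig = createHmac("sha256", secret).update(`${header}.${body}`).digest("base64url");
  return `${header}.${body}.${sig}`;
}

function verify(token: string, secret: string): boolean {
  const [header, body, sig] = token.split(".");
  const expected = createHmac("sha256", secret).update(`${header}.${body}`).digest("base64url");
  return sig === expected; // a real verifier would also check exp, use timing-safe compare, etc.
}

const token = sign({ sub: "jdoe", role: "CLIENT" }, "shared-secret");
console.log(verify(token, "shared-secret"));    // true
console.log(verify(token, "different-secret")); // false
```

A token issued by the Auth Service only verifies at the Gateway (or any downstream service) when both hold the same `SECURITY_JWT_SECRET_KEY`.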
## Getting Started

Prerequisites:

- Docker and Docker Compose
- A `.env` file in the project root (see below)

Create a `.env` file in the project root:

```env
# PostgreSQL credentials (shared across all instances)
PG_USER=postgres
PG_PASSWORD=postgres

# JWT — must be identical across all services that validate tokens
SECURITY_JWT_SECRET_KEY=your_base64_encoded_secret_here
SECURITY_JWT_EXPIRATION_TIME=3600000

# Google Gemini AI (used by communication-service)
GEMINI_API_KEY=your_gemini_api_key_here
```

`SECURITY_JWT_SECRET_KEY` must be a Base64-encoded string and must be identical across `gateway-service`, `auth-service`, `device-service`, and `communication-service`.
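One way to generate a suitable secret, assuming `openssl` is available on your machine:

```shell
# Generate 64 random bytes, Base64-encoded on a single line, suitable for HS256
openssl rand -base64 64 | tr -d '\n'
```

Paste the output as the value of `SECURITY_JWT_SECRET_KEY` in `.env`.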
```bash
# Clone the repository
git clone https://github.com/Bogdan016/DeviceHub.git
cd DeviceHub

# Start all services
docker compose up --build
```

Services start in dependency order. PostgreSQL and Kafka health checks ensure infrastructure is ready before application services boot.

```bash
# Stop all services
docker compose down

# Stop and remove volumes (clean slate)
docker compose down -v
```

## Technology Stack

| Layer | Technologies |
|---|---|
| Frontend | Angular 17, TypeScript, Tailwind CSS, Chart.js, ngx-charts, STOMP.js, Lucide Icons |
| API Gateway | Spring Cloud Gateway (WebFlux), jjwt |
| Backend Services | Spring Boot 3.3, Spring Data JPA, Spring Security, Spring Kafka |
| Authentication | JWT (HS256), BCrypt |
| Databases | PostgreSQL 16 — one dedicated instance per service |
| Messaging | Apache Kafka 3.9.1 (KRaft mode — no Zookeeper) |
| AI Integration | Google Gemini API via Spring RestClient |
| Containerization | Docker, Docker Compose, multi-stage builds |
| Runtime | Eclipse Temurin JDK 17, Nginx (frontend) |
| Build | Maven, Angular CLI |
| Observability | Spring Boot Actuator, Swagger UI / OpenAPI 3, Kafka UI |
## Known Limitations

These are documented areas for future improvement:
- No refresh token — users must re-authenticate after JWT expiry (default 1 hour)
- No token revocation — issued JWTs cannot be invalidated before expiry
- Kafka publishes are not transactional with DB commits — a crash between a DB write and a Kafka produce may cause missed events
- WebSocket broker is in-memory — communication-service cannot be horizontally scaled
- No pagination on list endpoints (`/users`, `/devices`, `/monitoring`)
- No distributed tracing — no OpenTelemetry, Zipkin, or Jaeger integration
- No rate limiting or circuit breakers at the Gateway level
## Project Structure

```text
devicehub/
├── auth-service/           # Identity provider — JWT issuance, login, signup
├── user-service/           # Business profile management
├── device-service/         # Hardware asset lifecycle and assignment
├── monitoring-service/     # Usage aggregation and alerting
├── communication-service/  # Real-time chat and WebSocket notifications
├── gateway-service/        # API Gateway — routing, JWT validation, CORS
├── frontend/               # Angular SPA
├── docs/
│   └── images/             # Diagrams and screenshots
├── docker-compose.yml
└── .env                    # Local environment variables (not committed)
```
Each service contains its own README.md with detailed documentation covering internal architecture, API reference,
data model, Kafka integration, configuration, and design decisions.



