
The Architecture of Glass Gallery

February 08, 2026 • By Azzar Budiyanto



The Architecture of Glass Gallery: A Modern Home Lab Ecosystem

In a world increasingly dominated by ephemeral cloud functions and serverless abstractions, there is a quiet dignity in owning your own infrastructure. The Glass Gallery Ecosystem is not just a collection of Docker containers; it is a manifesto. It is a statement that personal computing can be beautiful, resilient, and deeply integrated. This article serves as the definitive technical blueprint of our system, detailing the decisions, the trade-offs, and the precise configurations that make it tick.

1. High-Level Overview

At its heart, Glass Gallery is a Hyper-Converged Home Lab running on a single OCI (Oracle Cloud Infrastructure) Host. While it physically resides on one machine, logically it is a distributed system, orchestrated by Docker Compose and bound together by a private mesh network. The goal was to achieve “Single Pane of Glass” observability—a centralized hub where every metric, from CPU load to DNS queries, is visible at a glance.

The system adheres to the 12-Factor App methodology. Configuration is stored in the environment, backing services are treated as attached resources, and logs are treated as event streams. We prioritize Security by Design (Zero Trust), Minimalism (Alpine Linux everywhere), and Automation (Agentic AI integration).
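In practice, "configuration in the environment" means the Hub never hard-codes credentials or paths. A minimal sketch of that pattern (the variable names here are illustrative, not the Hub's actual settings):

```python
import os

def load_config() -> dict:
    """Read configuration from environment variables, 12-Factor style.

    Every non-secret setting has a sane default so the app still boots
    in development, but production values come exclusively from the
    environment.
    """
    return {
        "docker_socket": os.environ.get("HUB_DOCKER_SOCKET", "/var/run/docker.sock"),
        "pihole_db": os.environ.get("HUB_PIHOLE_DB", "/etc/pihole/pihole-FTL.db"),
        "basic_auth_user": os.environ.get("HUB_AUTH_USER", "admin"),
        # Fail loudly if a secret is missing rather than shipping a default.
        "basic_auth_pass": os.environ["HUB_AUTH_PASS"],
    }
```

Because secrets raise a `KeyError` when absent, a misconfigured deployment fails at startup instead of running half-authenticated.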

2. The Stack: A Deep Dive

The Hub (Glass Gallery Dashboard)

The crown jewel is the Hub itself. It is not a generic dashboard like Heimdall or Dashy. It is a bespoke Flask application built from scratch to our exact specifications.

  • Frontend: Written in raw HTML/Tailwind CSS, implementing our signature “M3 Pastel Glass” design system. It uses vanilla JavaScript for real-time updates, avoiding the bloat of React/Vue for what is essentially a status page.
  • Backend: Python Flask. It interfaces directly with the host’s /var/run/docker.sock to count containers and reads the /etc/pihole/pihole-FTL.db SQLite database for ad-blocking stats. This direct integration bypasses the latency of HTTP APIs and allows for granular data extraction.
  • Security: Protected by Basic Auth (for now) and running as a non-root user where possible (though Docker-socket and Pi-hole database reads require elevated access, managed via Docker group mapping).
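Talking to the Docker socket needs nothing beyond the standard library. The sketch below shows the shape of that integration: an HTTP client pointed at the Unix socket, plus a pure summarizing helper. The class and function names are ours, not the Hub's actual code; the `/containers/json?all=1` endpoint is the standard Docker Engine API.

```python
import http.client
import json
import socket

class DockerSocketConnection(http.client.HTTPConnection):
    """Speak HTTP over the Docker Unix socket instead of TCP."""

    def __init__(self, socket_path="/var/run/docker.sock"):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock

def list_containers(socket_path="/var/run/docker.sock"):
    """Return the raw container list from GET /containers/json?all=1."""
    conn = DockerSocketConnection(socket_path)
    conn.request("GET", "/containers/json?all=1")
    return json.loads(conn.getresponse().read())

def summarize(containers):
    """Count containers per state ('running', 'exited', ...)."""
    counts = {}
    for c in containers:
        state = c.get("State", "unknown")
        counts[state] = counts.get(state, 0) + 1
    return counts
```

Keeping `summarize` free of I/O makes the dashboard logic trivially testable without a live Docker daemon.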

Mailu: Sovereign Communications

Email is the hardest service to self-host, which is exactly why we did it. We use Mailu, a comprehensive mail server stack.

  • Postfix: The MTA (Mail Transfer Agent). Configured with strict SPF, DKIM, and DMARC policies to ensure deliverability.
  • Dovecot: The IMAP server. Stores emails in a persistent volume, accessible via any standard mail client.
  • Roundcube: The webmail interface, skinned to match our dark mode aesthetic.
  • Redis: Used for rate-limiting and session caching, ensuring the mail server remains responsive even under load.
  • Antispam: Rspamd integration keeps the inbox clean.
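Deliverability hinges as much on DNS as on Postfix itself. The records below illustrate the shape of that configuration for a placeholder domain; the DKIM selector and key are dummies, and the real values come from Mailu's admin interface.

```dns
; SPF: only this domain's MX hosts may send mail on its behalf
example.com.                 IN TXT "v=spf1 mx -all"

; DKIM: public key published under the selector Mailu generates
dkim._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=<public-key-from-mailu-admin>"

; DMARC: quarantine failures, send aggregate reports to postmaster
_dmarc.example.com.          IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
```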

Portainer: The Container Conductor

While we love the CLI, visual management is crucial for quick diagnostics. Portainer provides a GUI for our Docker environment.

  • Stacks: We manage our deployments as “Stacks” (Docker Compose files). This allows version-controlling our infrastructure configuration in Git.
  • Edge Agent: Lays the groundwork for managing remote nodes (like ‘Lappy’) from this central console in the future.
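A Stack is just a Compose file that Portainer tracks and redeploys. A simplified sketch of what one of ours might look like (image tag, port mapping, and volume paths are illustrative, not the Gallery's actual manifest):

```yaml
version: "3.8"
services:
  hub:
    image: glass-gallery/hub:latest
    restart: unless-stopped
    volumes:
      # Read-only mounts: the Hub observes, it does not mutate.
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /etc/pihole:/etc/pihole:ro
    ports:
      - "8080:5000"
```

Because the file lives in Git, every infrastructure change is a reviewable diff rather than a mystery click in a GUI.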

Pi-hole: The Network Shield

Pi-hole is our DNS sinkhole. Unlike standard deployments, we run it natively on the host (or mapped to the host network) so it can efficiently intercept DNS traffic from every container and from the host itself.

  • FTL Engine: A fork of dnsmasq, optimized for speed. It handles millions of queries with negligible CPU impact.
  • Direct DB Access: Our Hub reads the pihole-FTL.db directly. This allows us to visualize blocking trends over time without spamming the Pi-hole API.
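A read-only peek at that database can be sketched as follows. The `queries` table and its `timestamp`/`status` columns match FTL's documented long-term database schema, but the exact set of blocked-status codes is an assumption to verify against your FTL version, and the function name is ours.

```python
import sqlite3

# Status codes FTL uses for blocked queries (gravity, regex, exact
# blacklist, and their CNAME variants). ASSUMPTION: check this list
# against your FTL release, as it has grown over versions.
BLOCKED_STATUSES = (1, 4, 5, 9, 10, 11)

def blocked_counts(db_path, since_ts=0):
    """Return (total, blocked) query counts since a Unix timestamp.

    Opens the FTL database read-only so we never contend with FTL's
    own writes.
    """
    uri = f"file:{db_path}?mode=ro"
    placeholders = ",".join("?" * len(BLOCKED_STATUSES))
    with sqlite3.connect(uri, uri=True) as conn:
        total = conn.execute(
            "SELECT COUNT(*) FROM queries WHERE timestamp >= ?",
            (since_ts,),
        ).fetchone()[0]
        blocked = conn.execute(
            f"SELECT COUNT(*) FROM queries WHERE timestamp >= ? "
            f"AND status IN ({placeholders})",
            (since_ts, *BLOCKED_STATUSES),
        ).fetchone()[0]
    return total, blocked
```

Bucketing `since_ts` by hour or day is enough to drive the trend charts without touching the Pi-hole API.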

Node-RED: The Automation Brain

For IoT and event-driven tasks, we use Node-RED. It creates a low-code bridge between our hardware sensors (ESP32) and our software services.

  • MQTT: Listens to topics like home/livingroom/temp via a Mosquitto broker.
  • Flows: Logic flows that trigger alerts (e.g., “If Temp > 30C, email Azzar via Mailu”).
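The rule in that flow is simple enough to express in a few lines. Here it is as plain Python; the topic layout and 30°C threshold mirror the example above, while the helper name and return convention are ours.

```python
def temp_alert(topic: str, payload: str, threshold: float = 30.0):
    """Mimic the Node-RED flow: parse an MQTT temperature message and
    return an alert string when the reading crosses the threshold,
    or None when everything is fine.

    Expects topics shaped like 'home/<room>/temp'.
    """
    parts = topic.split("/")
    if len(parts) != 3 or parts[2] != "temp":
        return None  # not a temperature topic; the flow ignores it
    try:
        reading = float(payload)
    except ValueError:
        return None  # malformed sensor payload
    if reading > threshold:
        room = parts[1]
        return f"Temp in {room} is {reading:.1f}C (limit {threshold:.0f}C)"
    return None
```

In the real flow the returned string would feed an email node pointed at Mailu's SMTP endpoint.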

3. Network Architecture (Mermaid)

The following diagram illustrates the data flow and isolation boundaries within our system.

```mermaid
graph TD
    User((User)) -->|HTTPS/443| CF[Cloudflare Tunnel]
    CF -->|Zero Trust| Nginx[Nginx Proxy Manager]

    subgraph "Docker Host (OCI)"
        Nginx -->|Proxy| Hub[Glass Gallery Hub]
        Nginx -->|Proxy| WP[Headless WP]
        Nginx -->|Proxy| Portainer
        Nginx -->|Proxy| Mailu[Mailu Front]

        subgraph "Data Plane"
            Hub -->|Read-Only| PiHoleDB[(Pi-hole DB)]
            Hub -->|Docker Socket| DockerEngine
            WP -->|SQL| MariaDB[(MariaDB)]
            Mailu -->|IMAP| Dovecot
            Mailu -->|SMTP| Postfix
        end

        subgraph "Monitoring Plane"
            Prometheus -->|Scrape| NodeExporter
            Prometheus -->|Scrape| cAdvisor
            Hub -->|Query| Prometheus
        end
    end

    Lappy[Laptop Node] -->|Tunnel| Nginx
    IoT[ESP32 Sensors] -->|MQTT| NodeRED
```

4. Future Scalability & Roadmap

No architecture is ever finished. Our roadmap for Q3 2026 includes:

  1. Kubernetes Migration (K3s): As our container count exceeds 50, Docker Compose begins to show its limitations in orchestration and healing. We plan to migrate to a lightweight K3s cluster.
  2. Distributed Storage (Longhorn): To enable high availability, we will move from local bind mounts to distributed block storage.
  3. Agentic CI/CD: We are building a pipeline where Mema can automatically deploy code updates after passing her own security audits. The goal is a self-healing, self-improving infrastructure.

The Glass Gallery is more than a server; it is a living organism. It breathes data, thinks in code, and speaks through interfaces. And we are just getting started.