dlseitz.dev: A Backend Demonstration

To learn about the front end of this two-part project, check out dlseitz.dev: A Frontend Demonstration.

Table of Contents

  • Introduction
  • Core Architecture
  • Security & Reliability
  • Deployment & Infrastructure
  • Issues & Lessons Learned
  • Looking to the Future

Introduction

The Problem

While the frontend project demonstrated a strong grasp of design and client-side development, a true showcase of full-stack capabilities required more than just a static website. The primary challenge was creating a secure and reliable system to handle dynamic data, specifically to capture and process inquiries from potential clients via the website's contact form. This project serves to prove my ability to bridge the gap between user-facing interactions and the server-side logic required to manage and store that data effectively.

The Solution

To address this, I designed and built a dedicated backend server from scratch. This solution provides a secure, lightweight, and purpose-built endpoint for the contact form submissions. By separating the backend from the frontend, I was able to create a highly focused and scalable service. The solution is built with a commitment to efficiency, security, and reliability, ensuring that every client inquiry is handled with integrity and that the system remains resilient under real-world conditions.

The Vision

The backend is a critical piece of the overall business infrastructure. It's designed to be a long-term asset that not only facilitates client engagement but also serves as a robust demonstration of server-side development skills. This project provides a clear path for future expansion, whether that involves adding new API endpoints for dynamic content, integrating with third-party services, or scaling the database to handle increased traffic. The ultimate vision is to have a professional brand that is built on the values of accessibility, equity, and transparency, and a pipeline for future client work.

Back to Top

Core Architecture

Technology Stack

The backend is built using a modern, efficient, and reliable technology stack.

  • Node.js/Express.js: I chose Node.js for its non-blocking, asynchronous architecture, which is highly efficient for I/O-heavy tasks like handling form submissions. The Express.js framework provides a minimal and flexible foundation, allowing me to build a custom API without unnecessary overhead.
  • PostgreSQL: PostgreSQL was selected as the database for its reputation as a powerful, reliable, and standards-compliant relational database. It provides a secure and organized way to store and manage the structured data from client inquiries.

This combination of technologies creates a backend that is lightweight, scalable, and secure, ensuring that the system is both performant and maintainable for the long term.
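
For orientation, here is a minimal sketch of how these two pieces pair up: an Express app with a PostgreSQL connection pool from the pg module. The file layout, variable names, and options are illustrative rather than the production configuration.

```js
// server.js — minimal sketch of the Express + PostgreSQL pairing described above.
// Names and structure are illustrative, not the project's actual layout.
const express = require('express');
const { Pool } = require('pg');

const app = express();
app.use(express.json()); // parse incoming JSON payloads

// Connection details are read from environment variables (see "Code Integrity").
const pool = new Pool({
  host: process.env.PGHOST,
  user: process.env.PGUSER,
  password: process.env.PGPASSWORD,
  database: process.env.PGDATABASE,
});

app.listen(process.env.PORT || 3000, () => {
  console.log('Backend listening');
});
```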

Data Flow

The process begins when a user submits the contact form on the static frontend.

  1. Client-Side Validation and Submission: The user's input is first validated on the frontend to ensure it meets the required format and that the user isn't a bot. Once validated, the data is sent to the backend as a JSON object using an asynchronous fetch request.

  2. API Endpoint Reception and Processing: The Express.js server receives the incoming JSON payload at a dedicated API endpoint. The server then validates the data again to ensure its integrity and security before processing.

  3. Database Storage: The validated data is then saved into a table in the PostgreSQL database. This step ensures that a permanent record of the client inquiry is maintained for future reference.

  4. Email Service Integration: After the data is successfully stored in the database, the backend uses a secure email service to send a notification to a pre-defined email address. This step provides an immediate alert for new client inquiries.

  5. Confirmation to Frontend: The backend server sends a response back to the frontend, indicating that the form submission was successful. The frontend then presents a confirmation message to the user, completing the cycle.
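
A condensed, illustrative sketch of how steps 2 through 5 might come together in a single Express route handler is shown below. The route path, table schema, and helper functions (validateInquiry, sendNotificationEmail) are placeholders, not the production code; the pool comes from the earlier sketch.

```js
// POST /api/contact — illustrative only; route name, schema, and helpers are assumptions.
app.post('/api/contact', async (req, res) => {
  try {
    // 2. Server-side validation of the incoming JSON payload (hypothetical helper).
    const inquiry = validateInquiry(req.body);
    if (!inquiry) {
      return res.status(400).json({ error: 'Invalid submission' });
    }

    // 3. Persist the inquiry with a parameterized query (assumed table and columns).
    await pool.query(
      'INSERT INTO inquiries (name, email, message) VALUES ($1, $2, $3)',
      [inquiry.name, inquiry.email, inquiry.message]
    );

    // 4. Notify via the mail relay (hypothetical helper; see "Issues & Lessons Learned").
    await sendNotificationEmail(inquiry);

    // 5. Confirm success so the frontend can show its confirmation message.
    res.status(201).json({ ok: true });
  } catch (err) {
    console.error(err);
    res.status(500).json({ error: 'Something went wrong' });
  }
});
```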

Back to Top

Security & Reliability

Threat Mitigation

Building a reliable and trustworthy system required a proactive approach to security, with measures implemented at multiple layers to mitigate potential threats. The contact form is a critical access point, and as such, it's fortified with several defenses to ensure the integrity of the data and the security of the backend.

First, client-side validation acts as the initial barrier. While it is not a foolproof security measure, it provides a seamless user experience by catching malformed or missing input before a request is even sent to the server.

Second, the backend performs its own rigorous server-side input validation. This is the definitive security step. Every piece of data received from the frontend is sanitized and validated against a strict schema to prevent common injection attacks, such as SQL injection.
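
The exact validation mechanism isn't detailed here, but a common Express pattern is middleware-based rule checking; the sketch below uses express-validator purely as an illustration (field names and limits are assumptions). Combined with the parameterized queries shown in the data-flow sketch, this is the layer that blocks injection attempts.

```js
// Illustrative only — the project may validate differently.
const { body, validationResult } = require('express-validator');

// Validation and sanitization rules for the contact form fields.
const contactRules = [
  body('name').trim().notEmpty().isLength({ max: 100 }).escape(),
  body('email').trim().isEmail().normalizeEmail(),
  body('message').trim().notEmpty().isLength({ max: 5000 }).escape(),
];

// Middleware that rejects bad input before it reaches the database layer.
function rejectInvalid(req, res, next) {
  const errors = validationResult(req);
  if (!errors.isEmpty()) {
    return res.status(400).json({ errors: errors.array() });
  }
  next();
}

// Mounted ahead of the handler shown in the Data Flow section, e.g.:
// app.post('/api/contact', contactRules, rejectInvalid, contactHandler);
```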

To protect against automated bot submissions and spam, the form uses two distinct methods: an hCaptcha challenge and a honeypot field. The hCaptcha requires user interaction to verify that the submitter is human, effectively stopping most automated scripts. The honeypot field is a hidden input that, if filled, immediately flags the submission as spam, as a human user would never see it.
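
As a rough outline, both checks can live in a single helper: the honeypot test is a one-line guard, and hCaptcha verification is a server-to-server call to hCaptcha's siteverify endpoint. The field names and the exact endpoint below are assumptions and should be confirmed against hCaptcha's documentation.

```js
// Illustrative anti-spam checks; field names are assumptions. Uses global fetch (Node 18+).
async function passesAntiSpamChecks(body) {
  // Honeypot: a hidden field humans never fill in. Any value means a bot.
  if (body.website) return false;

  // hCaptcha: verify the client-side token server-side (endpoint per hCaptcha docs).
  const params = new URLSearchParams({
    secret: process.env.HCAPTCHA_SECRET,
    response: body['h-captcha-response'],
  });
  const verify = await fetch('https://api.hcaptcha.com/siteverify', {
    method: 'POST',
    body: params,
  });
  const result = await verify.json();
  return result.success === true;
}
```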

Finally, to prevent resource exhaustion from denial-of-service (DoS) attacks, the API is protected with rate limiting. This ensures that no single user or IP address can make an excessive number of requests in a short period, preserving the server's availability and stability for legitimate users.
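
A minimal sketch of this protection using the express-rate-limit package follows; the window and threshold values are placeholders, not the production limits.

```js
// Illustrative limits — the actual windows and thresholds are not documented here.
const rateLimit = require('express-rate-limit');

const contactLimiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15-minute window
  max: 10,                  // at most 10 submissions per IP per window
  standardHeaders: true,    // send RateLimit-* headers
  legacyHeaders: false,     // disable legacy X-RateLimit-* headers
});

app.use('/api/contact', contactLimiter);
```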

Code Integrity

The integrity and security of the codebase are maintained by the strategic use of environment variables. All sensitive information, such as database credentials and API keys, is stored in a separate .env file that is kept out of the public codebase and Git repository. This practice ensures that confidential data remains secure, even if the code is made public.
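
In practice this means loading the .env file at startup (for example with the dotenv package) and reading values from process.env; the variable names below are illustrative.

```js
// Load .env into process.env at startup; the .env file itself is gitignored.
require('dotenv').config();

const dbPassword = process.env.PGPASSWORD;    // illustrative variable names
const brevoApiKey = process.env.BREVO_API_KEY;
```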

Back to Top

Deployment & Infrastructure

The Ecosystem

The backend server is deployed within a robust and efficient ecosystem designed for reliability and ease of maintenance. This setup includes three key components: NGINX, Ubuntu, and PM2.

  • NGINX (Reverse Proxy): NGINX is a lightweight, high-performance web server that acts as a reverse proxy in this infrastructure. It is the public-facing entry point for all incoming HTTP requests, which it then forwards to the Node.js application running on the server. This setup provides a crucial layer of security, as NGINX can handle tasks like SSL termination and request buffering, while also hiding the underlying application from direct public access. It also serves static content, which reduces the load on the backend server.
  • Ubuntu (Server OS): Ubuntu Server was chosen as the operating system for its stability, widespread community support, and robust security features. As a Debian-based Linux distribution, it provides a secure and reliable foundation for the entire application, and its long-term support (LTS) versions ensure that the system receives security updates for an extended period without the need for frequent upgrades.
  • PM2 (Process Manager): To ensure the application remains available 24/7, I used PM2. This process manager for Node.js applications is configured to keep the backend server running indefinitely. If the application crashes for any reason, PM2 will automatically restart it without any downtime. It also simplifies the management of the application by providing a dashboard to monitor its health, manage logs, and handle server restarts.
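
For orientation, a stripped-down version of this setup might look like the snippets below. The domain, port, and process name are placeholders, and the real configuration includes more (SSL termination, buffering, static file locations).

```nginx
# /etc/nginx/sites-available/dlseitz.dev — illustrative reverse-proxy block only.
server {
    listen 80;
    server_name dlseitz.dev;

    location /api/ {
        proxy_pass http://127.0.0.1:3000;   # forward API traffic to the Node.js app
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

```bash
# Keep the backend alive and restart it on crashes or reboots (names are placeholders).
pm2 start server.js --name dlseitz-backend
pm2 save       # persist the process list
pm2 startup    # generate a boot-time init script
pm2 logs dlseitz-backend
```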

Hosting

The project is hosted on a cost-effective cloud provider. This decision was a direct response to the initial project constraints, allowing me to deploy a full-stack application with minimal financial investment. Opting for a solution that is both professional and budget-friendly demonstrates a key value of the project: resourcefulness. It shows the ability to provide a complete, real-world solution while adhering to situational constraints. This strategic choice reinforces my commitment to building pragmatic solutions that are not only technically sound but also economically viable.

Back to Top

Issues & Lessons Learned

Challenges Overcome

A significant challenge during development was ensuring reliable and secure email delivery for client inquiries. Initially, I attempted to send emails directly from the server on port 587, a common practice for SMTP. However, the hosting provider actively blocks this port to prevent spam, which resulted in all contact form submissions failing to trigger email notifications.

To overcome this, I had to pivot the email delivery strategy. The solution was to implement a third-party mail relay service: I chose Brevo (formerly Sendinblue) as an intermediary to handle all outgoing mail, which required re-architecting the application's email functionality to integrate with Brevo's API. This not only solved the port-blocking issue but also added a layer of professionalism, as the dedicated service improves deliverability and provides valuable analytics and logs.
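
As an illustration of the shape of that integration, a transactional send through Brevo's HTTP API looks roughly like the following. This is a hedged sketch of the sendNotificationEmail helper referenced earlier; the endpoint, header, and payload fields follow Brevo's v3 API as I understand it and should be confirmed against their current documentation.

```js
// Hedged sketch of a Brevo transactional email call; confirm fields against Brevo's docs.
async function sendNotificationEmail(inquiry) {
  const response = await fetch('https://api.brevo.com/v3/smtp/email', {
    method: 'POST',
    headers: {
      'api-key': process.env.BREVO_API_KEY, // kept out of the codebase (see Code Integrity)
      'content-type': 'application/json',
    },
    body: JSON.stringify({
      sender: { name: 'dlseitz.dev', email: 'no-reply@dlseitz.dev' }, // placeholder addresses
      to: [{ email: process.env.NOTIFY_EMAIL }],
      subject: `New inquiry from ${inquiry.name}`,
      textContent: inquiry.message,
    }),
  });
  if (!response.ok) {
    throw new Error(`Brevo responded with ${response.status}`);
  }
}
```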

Another challenge involved refining the backend's structure. As the project grew, it became clear that the monolithic codebase was becoming difficult to manage and scale. I made the decision to refactor the entire application into a more modular and organized architecture. This involved separating concerns, such as routing, database interactions, and API logic, into distinct files and directories. This restructuring will make the application more maintainable and easier to debug for future development.

The most difficult challenge so far has been migrating from a simple .env file to a more secure secrets manager. This is a critical security upgrade, but it has introduced significant complexity into the deployment process, requiring changes to how the application accesses and manages sensitive data. The process has been a valuable lesson in balancing development speed with robust security practices.
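
The core of that change is abstracting secret access behind a runtime lookup instead of reading process.env directly. A generic, hypothetical sketch of the pattern (the actual secrets manager and its SDK are not specified here):

```js
// Hypothetical abstraction — the real secrets manager and SDK calls are placeholders.
async function getSecret(name) {
  if (process.env.SECRETS_BACKEND === 'manager') {
    // e.g. fetch the value from the chosen secrets manager's SDK or API at runtime
    return fetchFromSecretsManager(name); // placeholder for the real SDK call
  }
  // Fall back to the existing .env-based configuration during the migration.
  return process.env[name];
}

// Callers switch from process.env.PGPASSWORD to: await getSecret('PGPASSWORD');
```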

Reflections

This project has been a valuable exercise in understanding that a full-stack solution is more than just connecting a frontend to a backend. It requires a holistic and integrated approach to software development, where every decision—from the initial architecture to the final deployment strategy—is interconnected. The experience has underscored the importance of anticipating and mitigating infrastructure-level challenges, such as blocked ports, and the necessity of building an application with scalability and maintainability in mind from day one. Ultimately, these struggles and their resolutions have solidified a key lesson: the most effective solutions are not just functional; they are resilient, secure, and thoughtfully planned.

Back to Top

Looking to the Future

This project serves as a foundational component for future development, and the current architecture provides a clear path for expansion. The strategic design of this system is meant to demonstrate a forward-thinking approach, proving that the solution is not just functional but also scalable and adaptable for future needs.

Blog & Content Management

I plan to add a dynamic blog to the website. This will involve expanding the backend to include new API endpoints that will handle a full content management system. These endpoints will allow for the secure creation, editing, and publishing of blog posts. The content will be stored in the PostgreSQL database, enabling me to manage and display new articles without the need for a full site rebuild. This expansion would demonstrate a deeper understanding of RESTful API design and database schema management for a multi-purpose application.
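
A speculative sketch of what those endpoints could look like (routes, handlers, and auth middleware are placeholders, since this feature is not yet built):

```js
// Speculative future routes for blog content management — not implemented yet.
const router = require('express').Router();

router.get('/posts', listPublishedPosts);             // public: list published articles
router.get('/posts/:slug', getPostBySlug);            // public: read a single article
router.post('/posts', requireAuth, createPost);       // admin: create a draft
router.put('/posts/:id', requireAuth, updatePost);    // admin: edit a draft or published post
router.post('/posts/:id/publish', requireAuth, publishPost); // admin: publish a draft

app.use('/api/blog', router);
```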

Enhanced Security

While the current security measures are robust, I have planned further enhancements to harden the system against potential threats. A critical next step is to migrate from a simple .env file to a dedicated secrets manager. This will ensure that sensitive data, such as API keys and database credentials, is not stored on the file system and is instead accessed securely at runtime. Additionally, implementing an in-depth security monitoring and logging system would provide real-time visibility into application access and potential malicious activity, allowing for a more proactive defense strategy.

Back to Top