This post explains how I developed Community Maps, a small web app that helps members of a community share their approximate location with each other.

Note

AI assistance (Opus 4.6) was used for minor edits in this text.

Why this project

I am currently trying to organize an event for a geographically distributed community, and knowing roughly where everyone is would make the planning decisions more informed. I have also run into this situation several times before. The tools that exist for this either require accounts (friction), expose exact locations (a privacy concern), or retain data for too long (also a privacy concern). I wanted something simple enough that sharing a link is the entire onboarding process, private enough that participants only reveal their approximate area, and with automatic data expiration.

This problem aligns with the other project I am currently involved in. Q10E Labs builds digital tools for habits, clarity, and connection. Our design philosophy centers on dignity and agency: we want to give people useful instruments that respect their attention and privacy, nudge them towards building communities, and then get out of the way. The project discussed here, Community Maps, is the newest addition to that portfolio.

Beyond solving a real problem, I was also interested in exercising the full lifecycle of building a product with AI assistance. There is plenty of discussion about AI generating code, but less about using AI through the entire arc: initial design, iterative development, security hardening, deployment, collecting user feedback, and responding to it in production. I wanted to see where AI assistance adds genuine leverage and where the human still does the essential work. This development was supported by Claude Code and Opus 4.6.

Phase 1: Design before code

I started by describing the product to an AI coding assistant in plain language. A Flask backend with SQLite. A static frontend with vanilla JavaScript and Leaflet.js for map rendering. Link-based authentication where possession of a URL constitutes identity. Pins rendered as circles without visible centers, so your exact location stays vague.
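The core of that link-based model can be sketched in a few lines of Python. This is an illustrative sketch, not the actual Community Maps code: the schema, URL shape, and token length are assumptions made for the example.

```python
# Sketch of link-based authentication: possession of an unguessable
# token embedded in the invite URL is the only credential; there are
# no accounts, usernames, or passwords.
import secrets
import sqlite3

db = sqlite3.connect(":memory:")  # illustrative schema, not the real one
db.execute("CREATE TABLE maps (id INTEGER PRIMARY KEY, token TEXT UNIQUE)")

def create_map() -> str:
    """Create a map and return its invite URL; the token IS the identity."""
    token = secrets.token_urlsafe(16)  # ~128 bits, infeasible to guess
    db.execute("INSERT INTO maps (token) VALUES (?)", (token,))
    return f"https://example.org/m/{token}"  # hypothetical URL scheme

def lookup_map(token: str):
    """Anyone presenting a valid token sees the map; anyone else sees nothing."""
    row = db.execute("SELECT id FROM maps WHERE token = ?", (token,)).fetchone()
    return row[0] if row else None

url = create_map()
token = url.rsplit("/", 1)[-1]
assert lookup_map(token) is not None      # the link works
assert lookup_map("not-a-token") is None  # guessing does not
```

The trade-off is that anyone who obtains the link obtains the identity, which is exactly the property that makes sharing a link the entire onboarding process.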

The conversation was structured as a design review. I stated requirements, the assistant asked clarifying questions, and I made decisions. Some of these decisions carried real architectural weight: that the frontend would be a fully static site, that the map engine should sit behind an abstraction layer to allow swapping Leaflet for something else later, that admin permissions should be session-scoped and decoupled from member records.

The output of this phase was a design document, a database schema, ASCII wireframes of every UI state, and a test suite. No application code. The test suite was deliberate. I wanted to practice test-driven development throughout the project, and starting with tests before the implementation existed set that discipline from the beginning.
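To illustrate that test-first discipline: a test is written while the implementation does not yet exist, pinning down a contract, and only then is the smallest implementation that satisfies it written. The function name and the 40-character cap below are invented for this example, not the project's actual rules.

```python
# Written first: this test failed until sanitize_name below existed.
def test_names_are_trimmed_and_length_limited():
    assert sanitize_name("  Alice  ") == "Alice"
    assert len(sanitize_name("x" * 200)) == 40

# Written second: the minimal implementation that makes the test pass.
def sanitize_name(raw: str) -> str:
    return raw.strip()[:40]

test_names_are_trimmed_and_length_limited()  # runs cleanly now
```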

The key insight from this phase: front-loading the design conversation made every subsequent implementation prompt more productive. Because the project design document was produced up front, the assistant always had access to the overall project direction, so feature requests could stay terse while the output remained coherent. For example, I did not need to re-explain the use of anonymous links for authentication every time I asked for a new backend endpoint.

Phase 2: Feature iteration through self-testing

With the design in hand, the backend came together quickly: Flask routes, SQLite schema, cookie-based sessions, the complete API. The frontend followed. Within the first working session I had a functional prototype: Leaflet rendering a world map, translucent circles representing members, a sidebar listing names, admin controls for managing participants.

From this point forward, development became a tight loop. I used the application, noticed something that felt wrong or missing, described the issue, and received a fix or new feature. The cycle time from observation to working code was measured in minutes.

Some iterations were small: a hover effect on circles, a “(me)” indicator next to the current user’s name, a pixel-level centering fix on zoom buttons. Others were structural. The original pin placement model (click anywhere on the map) failed when member circles overlapped, because clicking an occupied area selected the existing circle instead of placing a new pin. These were design flaws that I could not have envisioned in the initial top-down design.

In those cases, I asked the assistant to propose alternatives, evaluated several options, and chose an explicit placement mode that eliminated the ambiguity.

A pattern emerged in this phase. The AI assistant was effective at implementation, consistent at maintaining test coverage, and reliable at applying patterns it had seen earlier in the codebase. The work that remained fundamentally mine was product judgment: recognizing when an interaction model was broken, deciding which of several alternatives felt right, and knowing when to invest in a refactor versus work around a limitation.

Phase 3: Productization

Before exposing the application to anyone else, I shifted focus to the work that separates a prototype from a product. This is where my own past experience made the real difference in quality: the AI assistant simply did not propose working on these topics unless I nudged it in that direction, and it tended to reach for over-engineered solutions unless I pushed back.

  • Input validation and sanitization. Input length limits, HTML stripping, and control character removal for all user-provided text. I had to remind the assistant that the validation boundary sits at the API layer, not in the front-end code.
  • Rate limiting, and properly testing it. I had to remind the assistant of the difference between full-app rate limiting and per-instance limits, which prevent one user from denying service to the others.
  • CSRF protection. I had to nudge the assistant towards a simple custom HTTP header instead of a complicated random-token dance.
  • Database concurrency. I had to push the assistant to pay attention to possible race conditions and time-of-check-to-time-of-use vulnerabilities.
  • Legal text and terms of service. The AI assistant would otherwise have been happy to publish without any, or with template text that was too verbose.
  • Map expiry. Onboarding will cause many just-created maps to be abandoned immediately. I had to ask for the right balance of expiry logic (trial maps with fewer than two pinned members expire after seven days, while active maps get a full year) and a purging endpoint I can automate with cron.
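As a concrete illustration of that expiry rule, here is a minimal Python-over-SQLite sketch. The schema (`maps.created_at`, `members.map_id`, `members.lat`) is an assumption made for the example, not the project's actual one.

```python
# Purge expired maps: trial maps (fewer than two pinned members) expire
# after 7 days; all other maps expire after a full year. Intended to be
# invoked periodically, e.g. from a cron-driven endpoint.
import sqlite3

PURGE_SQL = """
DELETE FROM maps
WHERE created_at < datetime('now', '-1 year')
   OR (created_at < datetime('now', '-7 days')
       AND (SELECT COUNT(*) FROM members
            WHERE members.map_id = maps.id
              AND members.lat IS NOT NULL) < 2)
"""

def purge_expired(db: sqlite3.Connection) -> int:
    # In a real schema, member rows would follow via ON DELETE CASCADE.
    cur = db.execute(PURGE_SQL)
    db.commit()
    return cur.rowcount  # number of maps purged
```

For example, an 8-day-old map with a single pinned member is purged, while an 8-day-old map with two pinned members survives until its first anniversary.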
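The custom-header CSRF defense can be sketched independently of any framework; the header name below is illustrative, not the project's actual one. The idea is that a cross-site form post cannot attach custom headers, and a cross-origin fetch that tried would trigger a CORS preflight the server never approves.

```python
# Header-based CSRF check: state-changing requests must carry a custom
# header that only same-origin JavaScript can set.
SAFE_METHODS = ("GET", "HEAD", "OPTIONS")

def is_csrf_safe(method: str, headers: dict) -> bool:
    if method in SAFE_METHODS:
        return True  # read-only requests need no protection
    # Any custom header works here; its mere presence proves the request
    # came from the app's own frontend, with no per-session token needed.
    return headers.get("X-CommunityMaps-Request") == "1"

assert is_csrf_safe("GET", {})
assert not is_csrf_safe("POST", {})
assert is_csrf_safe("POST", {"X-CommunityMaps-Request": "1"})
```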

Phase 4: First deployment and early feedback

I gathered initial feedback from my online community with a locally deployed debug instance.

The first round of testing on real mobile displays exposed a class of layout issues that I had missed by testing only in a desktop browser. Share buttons were also missing from the dialogs where mobile users would most expect them.

The more consequential feedback concerned onboarding. New users joining a map were dropped into pin placement mode automatically. They would click on existing circles trying to explore, and accidentally place their pin in the wrong spot. The interaction model that felt intuitive to me as the developer was confusing to people seeing the map for the first time.

Thanks to this feedback, I redesigned the flow. Pin placement now requires a deliberate double-click or double-tap, and new users arriving via an invite link see a brief onboarding dialog that explains what they can do and how. Map creators get a separate welcome sequence that walks them through placing their own pin, naming the map, and sharing the invite link.

This was the largest single revision of the project, and it came entirely from watching real people use the product.

Phase 5: Production deployment and continued iteration

The next phase was public deployment. As this project is intended to be part of the Q10E Labs portfolio, I first had to scale up the Q10E Labs deployment infrastructure before I could deploy the new project. For this project, the production stack is FreeBSD with nginx serving the static frontend and proxying API requests to gunicorn.
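A minimal sketch of such an nginx server block, with placeholder domain, paths, and port (the actual production configuration is not shown in this post):

```nginx
server {
    listen 443 ssl;
    server_name maps.example.org;            # placeholder domain

    # Static frontend served directly from disk.
    root /usr/local/www/community-maps;      # placeholder path
    location / {
        try_files $uri /index.html;
    }

    # API requests proxied to gunicorn, bound to localhost.
    location /api/ {
        proxy_pass http://127.0.0.1:8000;    # placeholder port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```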

With the production instance live, I shared it with a wider group. Feedback continued to drive changes.

For example, a user without admin privileges could see admin-only menu items in the navigation; the terms acceptance flow needed rework; and pins could exist without labels, leaving anonymous circles on the map with no explanation. All of this was addressed incrementally.

What the process revealed

Building Community Maps from an empty repository to a production deployment covered roughly 85 commits and over 400 prompts to the AI assistant. The codebase spans about 40 files across backend, frontend, and documentation (code: 20% backend, 40% front-end, 60% tests). Initial MVP development took 2 days, whereas iterating and polishing based on user feedback took an additional full week.

Where AI assistance was most valuable: implementation velocity. Translating a well-described requirement into working code, maintaining test coverage, applying patterns consistently across the codebase. Drafting prose. Iterating on bug investigations and fixes.

Where human judgment made a real difference: product design decisions, especially those informed by watching real users. The session architecture redesign, the pin placement pivot, the terms enforcement layering. These all required recognizing that a working implementation was solving the wrong problem. The assistant could generate any solution I described, but it could not observe that users were confused and decide what to change.

The role of deployment in design: several of the most important improvements, including the onboarding redesign, the terms enforcement backend, and the display name requirement, came from “production” usage. These changes were obvious in hindsight: the product had to be in front of real users for the gaps to become visible.

Time saved: waiting for the AI assistant to generate code was only a minor part of the time spent on the project. Most of the time went into thinking about specifications, interacting with users to gather feedback, making appropriate decisions, and iterating to address minor issues caused by the differences between the production deployment and the development environment. It is still unclear how to leverage AI assistance to simplify these non-programming phases.

The overall experience confirmed something I knew instinctively but had no personal empirical evidence for yet: AI-assisted development does not eliminate the need for product thinking and production engineering. It compresses the implementation phases enough that the design-test-learn cycle can run much faster, which means you arrive at a better MVP sooner. But the work that comes afterwards (user feedback, production care) is still costly. The bottleneck shifts from “how can I build this” to “do I understand what to build” and “do I know how to publish and maintain a software product safely.”



Raphael Poss is an entrepreneur who occasionally publishes field notes on systems, leadership, and the messy edge between technology and people.