The Genesis: Research & Ideation
From September 9 to November 13: The pivots, breakthroughs, and lessons from building a voice-first project management system.
September 9-20: Where it all began. Discovering HuggingFace, Claude Projects, and realizing that voice + Kanban could eliminate workflow bottlenecks. The 'what if?' that started everything.
September 23-27: Five minutes to convince everyone this could work. Dana, Chris, Ted, Oliver—four personas, six task types, three intelligence layers. HP executives in the room. No pressure.
September 30: It's 2 AM. The app crashed. Ops can't fix it, Chris can't deploy it, Dana can't access logs. The problem isn't skill—it's sequential handoff waste. This is why voice-first Kanban matters.
Early October: Before writing a single line of code, we had to answer the hard questions. Data privacy, bias mitigation, worker autonomy. Technical decisions are ethical decisions.
October 16-18: Day one of development. OpenAI Whisper Large V3 finally capturing voice and returning text. Setting up HP AI Studio environment, installing dependencies. The 8-cell template methodology begins.
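The transcription step from that first day can be sketched with the Hugging Face `transformers` ASR pipeline, which hosts `openai/whisper-large-v3`. This is a minimal sketch, not the project's actual code; the audio filename and the `merge_chunks` helper are illustrative.

```python
def load_transcriber():
    """Build an ASR pipeline for Whisper Large V3.

    Imported lazily: the model weights (several GB) download on first use.
    """
    from transformers import pipeline
    return pipeline(
        "automatic-speech-recognition",
        model="openai/whisper-large-v3",
        chunk_length_s=30,        # Whisper's native 30-second window
        return_timestamps=True,
    )

def merge_chunks(chunks):
    """Join timestamped chunks into one transcript string (hypothetical helper)."""
    return " ".join(c["text"].strip() for c in chunks)

if __name__ == "__main__":
    asr = load_transcriber()
    result = asr("standup_recording.wav")   # path is illustrative
    print(merge_chunks(result["chunks"]))
```

Keeping the heavy import inside `load_transcriber` lets the post-processing helpers be tested without pulling in the model.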
October 20-21: Tried GitLab MCP. Nothing. Tried Trello API. Complicated. Professor warned about Jira. Everything felt hard. Spent two days chasing integrations that didn't exist.
November 7: Talked with someone who worked at Netflix about their global collaboration systems. They don't use AI for task assignment—not because they rejected it, but because they never tried it. Maybe Voice Kanban is actually novel. Naive or innovative?

October 22-24: Tried to clone GitHub MCP repo. npm errors everywhere. Then it clicked: MCP servers are hosted endpoints at api.githubcopilot.com. Nothing to install. The revelation that changed everything—understanding MCP as the bridge between voice intent and board actions.
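The "nothing to install" realization comes down to the wire format: MCP servers speak JSON-RPC 2.0, so a client just builds a `tools/call` request and POSTs it to the hosted endpoint. A minimal sketch of the envelope; the tool name and arguments below are illustrative, not GitHub's actual tool schema.

```python
import itertools
import json

_ids = itertools.count(1)  # JSON-RPC requests need unique ids

def mcp_tool_call(tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request, the envelope MCP servers expect."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# This string would be POSTed to the hosted server (e.g. at api.githubcopilot.com)
# with an Authorization header; no local server process required.
request = mcp_tool_call(
    "create_issue",  # illustrative tool name
    {"owner": "me", "repo": "voice-kanban", "title": "Fix login bug"},
)
```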
October 24: Helped Vivian workshop her mood board tool—lower standards on fonts, double down on colors. Told Jostin to give his AI affirmations about PDFs and it worked. Then they asked me questions I couldn't answer about Voice Kanban. Redesigned the whole advisory system. Peer feedback > solo development.
October 25-31: Text-based interface with keyword matching. Three assignment algorithms tested. MCP JSON output formatted correctly. Google Material Design aesthetic. Teaching the system to return structured JSON that GitHub's API understands. No voice yet, but core logic proven and translation layer complete.
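The keyword-matching core described above can be sketched as a small rule table that maps words in the utterance to a structured action dict, ready to become an MCP tool call. The keywords and tool names here are assumptions for illustration, not the project's actual rule set.

```python
# Keyword rules mapping spoken phrases to board actions (names illustrative).
RULES = [
    ({"create", "add", "new"},      "create_issue"),
    ({"move", "progress", "start"}, "move_card"),
    ({"assign", "give"},            "assign_member"),
    ({"close", "done", "finish"},   "close_issue"),
]

def parse_command(text: str) -> dict:
    """Match keywords in the utterance and emit a structured action dict."""
    words = set(text.lower().split())
    for keywords, tool in RULES:
        if words & keywords:  # first rule with any keyword hit wins
            return {"tool": tool, "arguments": {"raw_text": text}}
    return {"tool": "unknown", "arguments": {"raw_text": text}}
```

The output dict slots directly into the `arguments` side of an MCP request, which is what "translation layer" means here: text in, structured JSON out.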
October 28 - November 7: After all the failed integrations, back to basics. Pseudo code, personas, and task types. Created the Google Material Design interface. P1 (Dana) has scores for T1 and T2. P2 (Chris) has different scores. The weighted scoring system works. Foundation first, then complexity. It's alive.
November 4-6: Balanced Spread (steady-state), Gentle Flow (approaching deadlines), Strong Current (bottleneck focus), Full Rapids (crisis mode). Each strategy shifts team distribution based on project state. This is the innovation.
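One way to model the four strategies is as different weightings between raw skill and load balancing: steady state spreads work evenly, crisis mode sends everything to the most skilled person regardless of load. The weights below are illustrative guesses at the idea, not the tuned values.

```python
# Each strategy reweights skill vs. load balancing (numbers illustrative).
STRATEGIES = {
    "balanced_spread": {"w_skill": 0.5, "w_load": 0.5},  # steady state: even spread
    "gentle_flow":     {"w_skill": 0.6, "w_load": 0.4},  # deadlines approaching
    "strong_current":  {"w_skill": 0.8, "w_load": 0.2},  # bottleneck: best fit first
    "full_rapids":     {"w_skill": 1.0, "w_load": 0.0},  # crisis: skill only
}

def pick(strategy: str, skills: dict, loads: dict) -> str:
    """Choose an assignee under the given strategy's weighting."""
    w = STRATEGIES[strategy]
    return max(skills, key=lambda p: w["w_skill"] * skills[p] - w["w_load"] * loads[p])
```

The same task can land on different people as the project state changes strategy, which is the claimed innovation: distribution shifts without changing the scoring machinery.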
November 11-13: My Gantt chart looked clean: Research → Pitch → Ethics → Local → Deploy. Reality: Research → Pitch → Ethics → Local → GitLab fail → Trello fail → GitHub → finally deploy. Taking honest stock: I have working text interface, three tested algorithms, persona-based scoring, Google Material Design UI. I don't have real voice, live MCP connection, actual board updates. Phase 1 complete, Phase 2 is the dream. The buffer zone saved me.