Architectural Thinking Workshop Report

Background and Motivation

I have always felt that I lacked structured best practices and proven tools for conducting effective proofs of concept. This gap in my methodology became particularly apparent on customer engagements where clearer architectural thinking and documentation could have significantly improved outcomes. To address it, I participated in an internal workshop called “Architectural Thinking.”

Workshop Overview

The workshop provided a comprehensive toolkit of diagrams, methodologies, and frameworks designed to help consultants and architects better serve customers through structured thinking and clear communication. The approach emphasizes visual modeling and systematic decision-making processes that can be applied across different project phases.

Architectural Thinking Agenda

Key Content Areas Covered

The workshop covered a broad spectrum of architectural thinking tools and techniques:

Planning and Context Setting:

  • System Context Diagram - Understanding system boundaries and external interactions
  • Stakeholder Mapping - Identifying all parties affected by or influencing the system
  • User Stories & Use Cases - Capturing requirements from user perspectives

Design and Decision Making:

  • Architectural Decisions - Systematic approach to documenting design choices
  • Component Modeling - Breaking down systems into manageable parts
  • Architectural Patterns - Applying proven solutions to common problems

Implementation and Operations:

  • Operational Modeling - Planning for system operations and maintenance
  • Release Planning - Structuring delivery and deployment strategies
  • Viability Assessment - Evaluating project feasibility and risks

Resources and References:

  • Open Practice Library - Access to community-driven best practices


Key Takeaway: Architecture Decision Records

The most valuable insight for my immediate work, particularly for proofs of concept, was the introduction to Architecture Decision Records (ADRs). This practice addresses a critical gap I’ve experienced in previous projects: the lack of documented rationale behind architectural choices.

ADRs provide a lightweight way to capture:

  • The context and problem being solved
  • Available options considered
  • The decision made and its rationale
  • Consequences and trade-offs
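To make this concrete, here is a minimal sketch of what one ADR could look like, loosely following the widely used Nygard-style template. The project, options, and wording are all illustrative, not from the workshop materials:

```markdown
# ADR 001: Use PostgreSQL as the proof-of-concept datastore

## Status
Accepted

## Context
The PoC needs a relational store with JSON support, and the team
already operates PostgreSQL in production.

## Options Considered
- PostgreSQL
- MongoDB
- SQLite

## Decision
Use PostgreSQL, reusing existing operational knowledge.

## Consequences
- (+) No new operational tooling required
- (+) Decision and rationale are visible to later iterations
- (-) Heavier local setup for developers than SQLite
```

One short file per decision, kept in the repository next to the code, is usually enough; the point is that each record captures context, options, decision, and consequences at the moment the choice was made.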

This systematic approach to decision documentation would have prevented many retrospective questions and helped maintain architectural consistency across proof of concept iterations.


Application and Next Steps

While ADRs represent the most immediately applicable tool for my proof of concept work, the other diagrammatic approaches will prove valuable for customer communication and project planning. The visual modeling techniques will help bridge the gap between technical complexity and stakeholder understanding, ultimately leading to more successful engagements.

The workshop has provided me with a structured foundation for architectural thinking that I plan to integrate into all future proof of concept work, ensuring better documentation, clearer communication, and more thoughtful design decisions.


One day during a break, we did a group session of office Pilates with our colleague and Pilates instructor Marta. Highly recommend it. Would gladly do it again. It was awesome.


Open Source Models beat GPT-4

Beating GPT-4 with Open Source - An impressive technical achievement by the .txt team, demonstrating that open source can outperform closed models when properly orchestrated. By combining Microsoft’s Phi-3-medium with Outlines’ structured generation, they achieved 96.25% accuracy on the Berkeley Function Calling Leaderboard, surpassing GPT-4-0125-Preview’s 94.5%.

What makes this particularly noteworthy is their emphasis on community collaboration - the work was inspired by community contributions and built on open evaluations, models, and tools. The post does an excellent job explaining function calling vs. structured generation, showing that constraining output actually improves performance rather than hindering it.
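The core idea of structured generation can be shown without any ML machinery. The toy below is my own sketch, not the .txt team's code: at each decoding step, only tokens that keep the output a valid prefix of some allowed result survive the mask, so the final string is well-formed by construction. The vocabulary, allowed outputs, and scoring function are all made up for illustration; a real system such as Outlines applies the same masking to actual model logits using automata built from a regex or JSON schema.

```python
# Allowed final outputs (stand-ins for schema-valid function calls).
ALLOWED = ['get_weather(city="Paris")', 'get_time(zone="UTC")']

# Toy token vocabulary (a real tokenizer would be much larger).
VOCAB = ['get_', 'weather(', 'time(', 'city="Paris")', 'zone="UTC")', 'foo', '42']

def constrained_greedy(score):
    """Greedy decoding where invalid continuations are masked out.

    `score` plays the role of model logits: given a token, it returns a
    number, and we greedily pick the highest-scoring *valid* token.
    """
    out = ""
    while out not in ALLOWED:
        # Mask: keep only tokens that extend a prefix of some allowed output.
        candidates = [t for t in VOCAB
                      if any(a.startswith(out + t) for a in ALLOWED)]
        if not candidates:
            raise RuntimeError("no valid continuation from: " + repr(out))
        out += max(candidates, key=score)
    return out

# A dummy scorer standing in for logits: slightly prefers 'weather' tokens.
result = constrained_greedy(lambda t: len(t) + (5 if "weather" in t else 0))
print(result)  # guaranteed to be one of ALLOWED
```

Even with an unhelpful scorer the output can never be malformed, which is the sense in which constraining the output space helps rather than hurts: the model only has to rank valid continuations, never to rediscover the format itself.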

The reproducibility aspect is crucial - they’ve made all code available and properly credited the entire ecosystem that made this possible. This represents a significant milestone for open source AI, proving that with the right combination of tools and techniques, open models can compete with and exceed proprietary alternatives.

Key takeaway: This isn’t just about beating a benchmark - it’s about demonstrating the power of open collaboration in AI development.