Our Principles for Building Enterprise Grade Generative AI
At WeBuild-AI, our journey to develop our Pathway Platform on Amazon Web Services was guided by a set of core principles that have proven essential for successful enterprise AI implementation. As we get ready to release the next incarnation of our Pathway Platform at AWS Summit, London on the 30th April, I thought now was a opportune moment to share the principles our engineering team established when we founded the business just under a year ago now.
These principles weren't merely theoretical concepts—they were practical tenets that shaped our team's methods, mindset, and approach to building generative AI products. Today, we're sharing these foundational principles to help other organisations navigate their own AI transformation journeys.
Let’s get to it….
Be AI-Native in Everything You Do to Build Fast
From the outset of our Pathway Platform development, we embraced an AI-native approach across all aspects of the project. This meant fundamentally rethinking traditional software development methods, recognising that generative AI systems operate with different constraints, opportunities, and requirements than conventional applications.
Being AI-native influenced everything from our architecture decisions to our development workflows and testing methodologies. Rather than retrofitting AI capabilities into existing patterns, we designed systems specifically optimised for AI workloads—employing vector databases where traditional relational systems would fall short, embracing prompt engineering as a first-class development activity, and designing for the unique scaling characteristics of large language models.
This approach enabled us to rapidly iterate on features while avoiding the technical debt that comes from forcing AI capabilities into frameworks not designed to support them. Our teams developed new skills and mental models, ensuring they could move quickly while building sustainable systems.
Cloud-First, with One Cloud Provider
When building the Pathway Platform, we made a strategic decision to adopt a cloud-first approach and standardise on a single cloud provider—AWS. This principle might seem counterintuitive to those concerned about vendor lock-in, but our experience demonstrated the significant advantages of deep integration with a comprehensive cloud ecosystem.
By fully committing to AWS, we gained access to a cohesive suite of AI services, data storage options, security frameworks, and networking capabilities that worked seamlessly together. This simplified our architecture, accelerated development, and reduced integration challenges that would have arisen from attempting to maintain compatibility across multiple platforms.
The unified monitoring, logging, and security controls available through a single provider proved invaluable, particularly when handling sensitive enterprise data within AI systems. While multi-cloud flexibility has its merits, we found that the velocity gains from deep platform expertise and integration far outweighed the theoretical benefits of provider diversity, especially in the rapidly evolving AI space.
Align to and Evidence Industry Standards
Enterprise AI systems must meet rigorous compliance requirements and align with emerging best practices. Throughout our Pathway Platform development, we placed a strong emphasis on evidencing adherence to industry standards—not merely as a compliance checkbox but as a core design principle.
This meant proactively incorporating frameworks like the NIST AI Risk Management Framework, UK's AI Regulatory Framework, and industry-specific standards relevant to our clients. We designed our systems with auditability in mind, implementing comprehensive logging, lineage tracking, and explainability features that provide transparency into AI decision processes.
Perhaps most importantly, we developed methodologies to evidence our compliance—creating documentation artefacts, testing protocols, and governance processes that demonstrate our alignment with standards rather than merely claiming it. This approach has proven invaluable in accelerating enterprise adoption, as it substantially reduces the friction in security and compliance reviews.
Everything Automated and Everything as Code
Generative AI systems are inherently complex, incorporating multiple model types, extensive data pipelines, and sophisticated orchestration logic. To manage this complexity while maintaining quality and repeatability, we embraced complete automation and infrastructure-as-code principles throughout the Pathway Platform.
Every aspect of our environment—from networking and security configurations to model deployment and monitoring systems—was defined in code, version controlled, and deployed through automated pipelines. This approach eliminated configuration drift, ensured consistent environments across development and production, and created a complete audit trail of system changes.
Our commitment extended beyond traditional infrastructure to include AI-specific components like prompt templates, fine-tuning configurations, and evaluation datasets. By treating these as code rather than manual artefacts, we gained the ability to version, test, and rollback these critical components just as we would with traditional software.
This principle proved particularly valuable as our systems scaled, enabling us to maintain quality and consistency while rapidly deploying new capabilities across numerous client environments.
Establish Guardrails and Demonstrate AI Ethics
From day one of our Pathway Platform development, we recognised that enterprise AI systems require comprehensive guardrails to ensure safe, responsible operation. Rather than treating ethical considerations as an afterthought, we built protection mechanisms into the core architecture of our platform.
These guardrails operate at multiple levels:
Input filtering to prevent prompt injection and other adversarial techniques
Output moderation to detect and prevent harmful, biased, or inappropriate content
Usage monitoring to identify potential misuse patterns
Automated testing to continuously verify that ethical boundaries are maintained
Beyond implementing technical safeguards, we established processes to demonstrate our ethical commitment through regular bias audits, transparency reports, and clear documentation of model limitations and appropriate use cases. This comprehensive approach has been crucial in building trust with enterprise clients, particularly in sensitive industries.
Use the Right Models for the Right Use Case
While it's tempting to default to the most powerful, general-purpose models for all applications, we discovered that intelligent model selection is critical for both performance and cost-effectiveness. The Pathway Platform was designed around the principle of matching model capabilities to specific use case requirements.
We developed a systematic framework for model selection that considers factors including:
Task complexity and specificity
Latency requirements
Cost constraints
Data privacy considerations
Needed context windows
Multimodal requirements
This approach often leads to unexpected optimisations—using smaller, specialised models for specific tasks while reserving larger, more expensive models for complex reasoning. In many cases, we found that thoughtfully engineered prompts with appropriately selected models outperformed brute-force approaches with larger models, all while reducing costs and improving response times.
Our platform incorporates these insights through dynamic model routing and composition, enabling sophisticated workflows that leverage different models at different stages based on their specific strengths.
Align Your First Use Case to Knowledge Management
Through our experience building AI systems for diverse enterprises, we've discovered that knowledge management represents the ideal first use case for most organisations. This insight became a guiding principle as we developed the Pathway Platform, influencing both our technical architecture and implementation methodology.
Knowledge management applications—connecting AI systems to an organisation's internal information—offer several significant advantages as initial use cases:
They provide immediate, tangible value by making existing information more accessible
They typically involve lower risk than customer-facing or decision-making applications
They create a foundation of connected knowledge that enhances all subsequent AI use cases
They build organisational confidence and capability through visible but controlled impact
Our platform was architecturally designed to excel at knowledge-centric applications, with advanced RAG (Retrieval Augmented Generation) capabilities, sophisticated document processing, and semantic search at its core. This focus allowed our clients to establish a strong foundation before expanding to more complex use cases.
Build as Much as Possible with Generative AI - "Code Vibing"
Perhaps our most transformative principle was embracing generative AI as a core development tool—what industry has now playfully termed "Code Vibing." We leveraged AI coding assistants extensively throughout the development of the Pathway Platform, fundamentally changing how our engineers worked. We were code vibing before it was even a thing!
This approach went far beyond simple code completion. Our engineers have since established sophisticated patterns for using generative AI across the entire development lifecycle:
Architectural design exploration and evaluation
Test case generation and security vulnerability identification
Documentation creation and maintenance
Performance optimisation and refactoring
Infrastructure-as-code generation and validation
Front end design and development
API design, development and documentation
Many many more….
By fully embracing these tools, we dramatically accelerated development velocity while maintaining quality. Our engineers evolved new skills focused on effectively directing AI systems rather than writing every line of code manually—a practice we now train all new team members in as standard procedure.
Importantly, we maintained rigorous code review and testing processes, using AI as a powerful augmentation to human expertise rather than a replacement for sound engineering judgment.
Conclusion: Principles as Foundation for Success
The principles that guided our development of the Pathway Platform on AWS have proven invaluable not just as philosophical stances, but as practical approaches that deliver tangible benefits. They've enabled us to build enterprise-grade generative AI systems that are secure, scalable, and sustainable.
We share these principles with the broader community in the hope that they can provide guidance to other organisations embarking on their own AI transformation journeys. While technologies will evolve and specific techniques will change, we believe these foundational principles will remain relevant as the field continues to advance.
The organisations that successfully navigate the generative AI revolution will be those that not only embrace cutting-edge technology but also establish clear, thoughtful principles to guide their implementation efforts. By sharing what we've learned, we hope to contribute to the collective wisdom of the community and advance the responsible development of this transformative technology.