Enterprise strategy, technical governance, risk and compliance, and vendor management, shared openly, no consulting markup. Built from real work across industries.
I'm Tajai Jones, AVP of Digital Strategy with 12+ years working across technology, risk, and operations. Most of that work has been in financial services, which is a good place to build these skills because the margin for error is low and documentation requirements are real. The thinking translates well beyond that industry.
It started at a teller line. I was a teenager working at a bank, learning how money moved, how customers were served, and how the people behind the counter actually kept things running. That experience planted something. Over the years it grew into a career that took me from branch banking through digital platforms, enterprise systems, compliance operations, and technology strategy. I didn't take a straight path. I paid attention at every stop.
This is not a consulting pitch. It's a place to share what I've picked up building platforms, managing integrations, navigating vendor relationships, and watching organizations spend money on things they weren't ready for. Some of it worked. Some of it didn't. All of it is useful.
Take what's helpful, leave what doesn't fit your situation, and run anything that matters through your own legal and compliance teams.
These are the areas organizations tend to underinvest in until something breaks. A failed implementation, a compliance finding, a vendor dispute. The goal here is to give you enough context to avoid those situations, or at least handle them better when they show up.
Data governance is not a one-time cleanup project. Organizations that treat it that way end up doing the same cleanup every 18 months. When it's built into how a team operates, it holds up through audits, leadership changes, and platform migrations.
Data Dictionary & Lineage
Technical Documentation Standards
Governance Program Components
Compliance is the floor, not the finish line. The organizations that hold up under scrutiny are usually the ones that built controls because they made operational sense, not because an auditor asked for them. The documentation follows the practice. Note: What's shared here is based on professional experience, not legal advice. Talk to qualified counsel for guidance specific to your situation.
NIST 800-53 — The Control Catalog
NIST SP 800-53 is the most complete catalog of security and privacy controls available. Originally built for federal systems, it is now widely adopted across healthcare, financial services, technology, and government contractors. Key control families relevant to any digital operation:
BSA/AML — Bank Secrecy Act & Anti-Money Laundering
SOX & SOC Controls
GLBA, GDPR, HIPAA, PCI-DSS
Security Control Frameworks
The pattern I've seen more times than I can count: leadership makes a purchase decision, procurement signs the paperwork, IT gets handed a timeline, and operations inherits a platform that doesn't quite fit at a cost well above what was presented. It's not because the vendor is dishonest. It's because their incentives are different from yours.
Before You Evaluate: Define the Problem
The RFP Process Done Right
Contract & SLA Governance
Ongoing Vendor Governance
Tools for Finding & Vetting Vendors
SR 13-19 — Third-Party Risk Management
SR 13-19 / CA 13-21 is the Federal Reserve's guidance on managing outsourcing risk — the companion framework to SR 11-7 for vendor risk in regulated environments, since updated by the 2023 Interagency Guidance on Third-Party Relationships issued jointly with the OCC and FDIC. It establishes that financial institutions are responsible for the risks introduced by their third parties, including fourth parties (your vendor's vendors). Key requirements:
Most implementation failures are not technology problems. They're readiness problems that get blamed on the technology. The vendor delivers the product. Whether the organization can actually absorb and use it is a different question entirely.
Pre-Implementation Readiness Checklist
Stakeholder Alignment
Change Control During Implementation
ITIL — IT Service Management Framework
ITIL (Information Technology Infrastructure Library) is the most widely adopted framework for IT service management globally. Where your project management covers how you deliver a system, ITIL covers how you operate and improve it once it's live. Four concepts are most relevant to implementation and change work:
Secure SDLC & DevSecOps
If your organization builds software — whether internally or through a vendor — security needs to be part of the development process from the start, not added as a checkpoint at the end. This is what "shift left" means: catch security problems earlier, where they cost less and take less time to fix.
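As a concrete picture of what "shift left" can look like in practice, here is a minimal sketch of a pre-commit secret scan. It is illustrative only: real teams should wire a maintained secret scanner into their hooks and CI pipeline, and the patterns below are simplified examples I chose for the sketch, not a complete or production rule set.

```python
"""Minimal "shift left" check: scan source files for hardcoded secrets
before they reach a commit. Sketch only -- use a maintained scanner in
real pipelines. Patterns are simplified examples, not a full rule set."""
import re
import sys

# Illustrative patterns -- tune these for your own stack.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "Hardcoded password": re.compile(r"password\s*=\s*['\"][^'\"]{8,}['\"]", re.IGNORECASE),
}

def scan_text(text):
    """Return the names of every secret pattern that matches the text."""
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(text)]

def scan_files(paths):
    """Scan each file; return findings as 'path: pattern name' strings."""
    findings = []
    for path in paths:
        with open(path, encoding="utf-8", errors="ignore") as handle:
            findings += [f"{path}: {name}" for name in scan_text(handle.read())]
    return findings

if __name__ == "__main__":
    results = scan_files(sys.argv[1:])
    print("\n".join(results))
    # In a real pre-commit hook, exit nonzero here when results is
    # non-empty so the commit is blocked.
```

The point of the sketch is the placement, not the patterns: the check runs before code lands, where a finding costs one developer a few minutes instead of becoming an incident later.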
The default move in most organizations is to buy. It feels faster, it shifts accountability, and it gives leadership something to announce. But buying a tool is not the same as building a capability. The license is just the beginning.
Build When:
Buy When:
On Hiring
This is the part nobody talks about in governance frameworks. Everyone focuses on what the policy should say. Very few people address the harder problem: what do you do when the team already has a way of doing things, it kind of works, and asking them to change it feels like a threat to their workflow?
I've seen this play out across organizations of different sizes and industries. The pattern is almost always the same. Leadership endorses a new governance approach. A framework gets developed. Someone does a training session. And then six months later, half the team is still doing it the old way because they're under pressure, they don't see the immediate payoff, and nobody made it easier to do the right thing than to keep doing the wrong thing.
Why Governance Efforts Stall
How to Transition Without Wrecking Timelines
Building Buy-In Across Teams
Correcting Bad Processes That Are Already in Motion
Accessibility is often treated as a compliance checkbox — something you address after the product is built, when someone raises a legal concern or an audit finding. That approach produces brittle, bolt-on solutions that technically meet a standard but do not actually serve the people they are supposed to help. The better approach is to design inclusively from the start, which tends to produce better experiences for everyone, not just users with disabilities.
WCAG — Web Content Accessibility Guidelines
WCAG (Web Content Accessibility Guidelines), published by the W3C, is the international standard for digital accessibility. WCAG 2.2 is the current W3C Recommendation, though WCAG 2.1 is still the version most laws and contracts reference. All guidance is organized around four principles, abbreviated as POUR:
Legal Landscape
Inclusive Design Principles
Inclusive design goes further than accessibility compliance. It means actively designing for the full range of human diversity — ability, language, age, culture, gender, economic access, and context of use. Key principles:
Color, Contrast & Typography
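Contrast is one of the few accessibility requirements you can check with arithmetic. WCAG 2.x defines relative luminance for sRGB colors and a contrast ratio between two colors, with AA requiring at least 4.5:1 for normal text and 3:1 for large text. A small sketch of that calculation (function names are mine, the formulas are from the WCAG definition):

```python
def relative_luminance(rgb):
    """Relative luminance per WCAG 2.x, from 0-255 sRGB channel values."""
    def channel(c):
        c = c / 255
        # Linearize the gamma-encoded sRGB channel.
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio between two colors, from 1:1 up to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Black text on a white background is the maximum possible ratio, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
# WCAG 2.x AA: >= 4.5:1 for normal text, >= 3:1 for large text.
```

Browser dev tools and the testing tools below run this same math; having it in code makes it easy to enforce in a design-token pipeline instead of checking pages by hand.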
Accessibility Testing Tools
Building Accessibility Into Process
Human beings are predictable in ways that designers often ignore. The fields of cognitive psychology and behavioral science have spent decades documenting how people perceive, process, and respond to information, and the design community has distilled those findings into named principles, popularly known as the Laws of UX. Understanding them changes how you design not just interfaces, but processes, communications, and any experience a customer or user moves through.
The Core Laws of UX
Customer Journey Mapping
A customer journey map is a visualization of the experience a customer has with a product, service, or organization across time and touchpoints. The purpose is not to document what you think the experience is — it is to understand what it actually is, including the parts that frustrate, confuse, or lose people.
Try the Journey Builder
The interactive customer journey builder on this site lets you map a full journey for any persona — stages, touchpoints, emotions, pain points, and opportunities — with an emotion curve that updates as you work. It is the most fun thing on this website.
Every organization is deploying AI right now. Most of them don't have a clear policy about how it's being used, what data is going into it, or who's accountable for what it produces. This section starts with the terminology, because a lot of confusion about AI governance comes from not being precise about what kind of AI you're actually dealing with.
Jump to the full AI & ML section below
Organizations are moving faster on AI deployment than they are on AI governance. This section covers the vocabulary, the frameworks, and the controls worth having before an AI system touches anything in production.
| Requirement | Use RPA Bot | Use AI Agent |
|---|---|---|
| Input data is structured and consistent | Strong fit | Possible overkill |
| Process has zero tolerance for variation | Strong fit | Not recommended |
| Input includes unstructured text, docs, or images | Not capable | Strong fit |
| Task requires contextual judgment | Not capable | Strong fit |
| Audit trail must produce a consistent, repeatable record | Strong fit | Requires extra controls |
| Volume is high but patterns are complex | Partial | Strong fit |
| Output directly triggers an irreversible action | With controls | Human review required |
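The decision table above can be encoded as a simple intake helper. This is a sketch under my own assumptions: the field names are hypothetical, and a real intake process would weigh more factors than these; the point is that the first two rules mirror the hard constraints in the table (RPA cannot handle unstructured input or judgment; deterministic, audit-heavy processes favor RPA).

```python
# Illustrative routing helper encoding the RPA-vs-agent table above.
# Field names are hypothetical -- adapt to your own intake questionnaire.
def recommend_automation(task):
    """Return 'ai_agent', 'rpa', or 'needs_review' for a task profile dict."""
    if task.get("unstructured_input") or task.get("needs_judgment"):
        # RPA bots can't parse unstructured content or exercise judgment.
        return "ai_agent"
    if task.get("zero_variation_tolerance") or task.get("strict_audit_trail"):
        # Deterministic, repeatable processes are where RPA is strongest.
        return "rpa"
    return "needs_review"  # ambiguous profiles deserve a human look

print(recommend_automation({"unstructured_input": True}))
print(recommend_automation({"zero_variation_tolerance": True}))
```

Even a toy version like this forces the useful conversation: which of these properties does the task actually have, and who attests to that before anything gets built.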
The typical pattern is that someone finds an AI tool that helps them work faster, they start using it, and six months later someone asks a hard question about what data went into it or who's accountable for what it produced. By that point the exposure is already there. Governance works better when it's set up before deployment, not after something goes wrong.
Foundational Policy Requirements
Governance Structure
Regulatory Landscape
Model Documentation Standards
This is the AI risk I see most often underestimated. People are using these tools every day, many without clear guidance on what's permissible, and sensitive data is going into systems that weren't designed to hold it. The intent is usually fine. The exposure isn't.
Data Classification and AI
What to Require from AI Vendors
Operational Controls
When a model influences a real decision, such as who gets a loan, what gets flagged for review, or how someone is priced or categorized, it carries risk. That risk doesn't disappear because the technology is sophisticated. SR 11-7 was written for financial institutions, but the core logic applies broadly: validate independently, monitor continuously, document the limitations.
ML-Specific Risks
What Model Documentation Should Cover
The most common mistake I see with AI adoption is treating the output as finished work. AI drafts, summarizes, classifies, and flags things. People verify, decide, and own the outcome. The gap between those two things is where governance lives.
What Needs Human Verification
Tools Worth Knowing
Human Review by Risk Level
| AI Output Type | Risk Level | Human Review | Documentation |
|---|---|---|---|
| Internal draft document | Low | Author review before use | Version control is enough |
| Customer-facing communication | Medium | Manager or compliance review | Approval log |
| Risk or fraud flag | Medium-High | Analyst reviews source data | Case documentation |
| Regulatory filing or report | High | Independent review and legal sign-off | Full audit trail |
| Credit or account decision | High | Qualified human decision-maker | Adverse action documentation |
| Automated action triggered by model | High | Pre-approved rules and exception process | Decision log and rollback capability |
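A review policy like the table above is easy to make machine-enforceable instead of living in a slide. A minimal sketch, with my own made-up output-type keys standing in for whatever taxonomy your organization uses; the safe default for anything unrecognized is the highest review tier.

```python
# Sketch: route AI outputs to the review tier from the table above.
# Output-type keys are illustrative, not a canonical taxonomy.
REVIEW_POLICY = {
    "internal_draft":    ("low",         "author_review"),
    "customer_comms":    ("medium",      "manager_or_compliance_review"),
    "risk_flag":         ("medium_high", "analyst_source_review"),
    "regulatory_filing": ("high",        "independent_plus_legal_signoff"),
    "credit_decision":   ("high",        "qualified_human_decision_maker"),
    "automated_action":  ("high",        "preapproved_rules_and_exceptions"),
}

def required_review(output_type):
    """Return (risk_level, review_step); unknown types default to high risk."""
    return REVIEW_POLICY.get(output_type, ("high", "escalate_to_governance"))

print(required_review("internal_draft"))
print(required_review("something_new"))  # falls through to the safe default
```

The design choice worth copying is the fallback: a new AI use case should land in the strictest queue by default, and only move down once someone with authority classifies it.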
These are working frameworks, not slide deck filler. They've been shaped by real implementations, real audit conversations, and real decisions that had consequences. Use them as starting points and adapt them to your situation.
The frameworks on this site are the thinking. The templates are the doing. Eight interactive builders that walk you through every field, coach you on what matters, auto-save your work, and export clean documentation your team can actually use.
I post regularly on LinkedIn under #TajaiBuilds. Strategy breakdowns, things I've seen go wrong, frameworks that have actually held up. My background is in financial services, but most of what I share applies well beyond that. No gatekeeping, no consulting pitch.
If something here is useful to you or your team, share it. That's the point.
This is not professional advice. Everything on this site, including frameworks, templates, checklists, tool references, and written content, reflects my own professional experience and perspective. It's shared for informational purposes. It is not legal advice, compliance guidance, regulatory counsel, or financial advice of any kind.
I am not a licensed professional. I'm not an attorney, a licensed compliance officer, a certified financial advisor, or anything similar. Nothing here should be treated as a substitute for guidance from a qualified professional who knows your specific situation, industry, and jurisdiction.
Regulations change. Frameworks referenced here, including NIST 800-53, SR 11-7, SOX, BSA/AML, GLBA, GDPR, HIPAA, PCI-DSS, the EU AI Act, and others, are subject to change. What's on this site may not reflect the most current requirements. Always verify against primary sources and talk to qualified counsel before making compliance decisions.
Your situation is different from mine. These frameworks are general starting points. They may not be right for your industry, your organization's size, or your regulatory environment. Adapt them carefully and get professional input when it matters.
No warranties. This content is provided as-is. I make no guarantees about accuracy, completeness, or fitness for any particular purpose.
You're responsible for how you use this. I'm not liable for decisions made or outcomes that result from using anything shared here. If something is high-stakes, get a professional involved.
The views expressed here are my own and do not represent any employer, past or present.