Ohio Minority Supplier Development Council

When “This Isn’t Scalable” Becomes a Product: How OMSDC Built MatchDesk to Stop Leaving Its Best Connections to Chance

OMSDC’s curated networking platform started as a frustration, survived a health scare, and became a case study in how mission-driven organizations can build tools that serve their members and sustain their work.

Jamie Van Doren was supposed to help organize and run DealMaker at ConnectingOHIO 2025. He had a plan. It was analog, but it was a start.

On April 30, he had a heart attack. More than one, actually. On May 11, he went under for a triple bypass.

He came back in late June, just in time to help support the event. And what he saw was a team working incredibly hard to do something that shouldn’t have been so difficult. Every curated meeting between a corporate member and an MBE supplier ran through emails, spreadsheets, and the institutional knowledge of one person trying to match 400-plus MBEs to specific corporate needs. The follow-up alone was a full-time job. The matching was well-intentioned but limited by what any single person can hold in their head.

“This isn’t scalable,” Van Doren remembers thinking. “And it isn’t just an efficiency problem. It’s a mission problem. If we can’t connect the right people reliably, we’re leaving our most important value on the table.”

From Pitch to Product

By January 2026, Van Doren had an idea he wanted to test. OMSDC’s Annual Meeting was coming up in March. Open networking is a staple of events like it, but open networking leaves outcomes to chance. The people who need to meet each other often don’t. The introverts hang back. The time runs out. The most valuable conversations happen accidentally or not at all.

Van Doren had experienced a better model years earlier. While fundraising for his first tech startup in 2020 and 2021, he’d used a curated virtual meeting platform that scheduled one-on-one sessions with venture capital firms. The structure worked. Every meeting was intentional. Every slot was used.

He pitched the concept to OMSDC’s leadership: a curated one-on-one networking platform for the Annual Meeting. Attendees would browse a directory, request meetings with specific people, and both parties would opt in before anything was scheduled. The system would then generate a conflict-free schedule automatically. No spreadsheets. No email chains. No collisions.

The team decided to trust him. George Simms, OMSDC’s President and CEO, saw something beyond a scheduling tool. “This is what supplier inclusion looks like when it’s hardwired into the way we operate,” Simms says. “We talk about connecting minority businesses to opportunity. This is infrastructure that makes that connection structured and repeatable, not something that depends on who happens to be standing next to whom at a reception.”

Simms also saw a broader signal. Van Doren is a Latino tech founder building enterprise software with AI tools, inside a minority business support organization. “That’s the kind of innovation and excellence we exist to spotlight,” Simms says. “It’s one thing to advocate for minority-owned businesses. It’s another to have one of our own people build the tool that solves the problem.”

Building It: AI as Developer, Not Magic Wand

Van Doren looked into off-the-shelf solutions first. The pricing was a non-starter. Competitors charge $3,000 to $10,000 or more per event. For a nonprofit running multiple events a year, those numbers don’t work. And Van Doren wasn’t convinced the features would match what OMSDC actually needed.

So he decided to build it himself. Not from scratch in the traditional sense, but not with a wave-of-the-hand prompt either. Van Doren has built two tech companies, led developer teams, and co-founded a company in 2020 called NeverEnding, which was building a custom AI model for animation before the large language model wave hit the mainstream. He knew how to put together a product requirements document and a technical roadmap. The difference this time was the developer: Claude Code.

“Using AI to code isn’t like how they advertise it,” Van Doren says. “Not if you want something stable, scalable, and enterprise-ready. You can’t just say ‘build this app.’ Our platform needed real security, complex scheduling logic, multi-tenant architecture, role-based access. So we built it the way you’d build any serious software product: with a PRD, slice by slice, with multiple rounds of testing and deployment.”

Even with AI as the developer, the build was intense. Two months of ten- to twelve-hour days, seven days a week, reviewing and testing code daily. That’s not the effortless “just prompt it” story that AI marketing likes to tell. But it’s dramatically faster than the six to eight months the same product would take a team of two or three developers plus a project lead. The difference isn’t that AI eliminates the work. It’s that it compresses a team’s worth of output into one person’s timeline, as long as that person knows how to structure and lead a build.

James Price, OMSDC’s Associate Vice President of Operations, sees MatchDesk as evidence of OMSDC walking the walk. “We tell our MBEs they need to be looking at where AI fits in their organizations and how they can leverage it to compete,” Price says. “Creating MatchDesk is a perfect example of us not just telling, but showing.”

Here’s what it looks like in practice. MatchDesk is a 1:1 meeting scheduling platform designed for B2B events. Attendees browse a branded directory, send meeting requests with a personal message, and both sides confirm before anything is booked. The system then generates a conflict-free schedule for every participant. Organizers get real-time analytics on enrollment, meeting rates, and engagement.
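
The scheduling step described above can be sketched with a simple greedy assignment: once both sides confirm a request, the meeting is placed in the earliest time slot where neither participant is already booked. This is a minimal illustration of the general idea, with hypothetical names and data, not MatchDesk's actual production algorithm.

```python
# Minimal sketch of conflict-free 1:1 meeting scheduling.
# Each mutually confirmed request is placed in the earliest slot
# where neither participant is already booked. Hypothetical data;
# an illustration, not MatchDesk's production code.

def schedule(confirmed_pairs, slots):
    booked = set()        # (person, slot) pairs already taken
    assignments = []
    for a, b in confirmed_pairs:
        for slot in slots:
            if (a, slot) not in booked and (b, slot) not in booked:
                booked.add((a, slot))
                booked.add((b, slot))
                assignments.append((slot, a, b))
                break
        else:
            assignments.append((None, a, b))  # could not place this meeting
    return assignments

pairs = [("Ava", "Corp1"), ("Ava", "Corp2"), ("Ben", "Corp1")]
slots = ["10:00", "10:30"]
print(schedule(pairs, slots))
```

A real system layers attendee availability, room capacity, and fairness constraints on top of this, but the core guarantee is the same: no person is ever double-booked into one slot.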

Critically, Van Doren didn’t build MatchDesk just for OMSDC. It’s a multi-tenant system, meaning any organization — chambers of commerce, councils, accelerators, trade associations — can create an account, brand it to their identity, and run structured networking events through the platform. Pricing starts with a free tier and scales to $499 per year for enterprise use, a fraction of what incumbents charge per single event.

The First Deployment: Honest Lessons

MatchDesk launched at OMSDC’s 2026 Annual Meeting at Central State University. Signups were strong. But the event ran behind schedule, and as a result, fewer live meetings happened than were scheduled. The gap between scheduled and completed meetings surfaced a real operational lesson: structured matchmaking only works if the event itself protects the time it needs.

“It actually elevated something important for us,” Van Doren says. “We were packing too much in. The tool did what it was supposed to do. But we learned that if we want curated meetings to deliver, we have to give them room to breathe on the agenda.”

Price sees the tool as a way to shift where his team spends its energy. “Before MatchDesk, facilitating introductions between corporate members and MBEs meant a lot of manual coordination. Emails, spreadsheets, follow-ups. It worked, but it consumed time that could have gone toward building deeper relationships and solving real problems for our members,” Price says. “What excites me is getting out of the administration business and into the relationship business. I want our team focused on connecting people and creating value, not managing spreadsheets.”

The Bigger Thesis: Why Nonprofits Need to Build

Behind MatchDesk is a larger argument about how mission-driven organizations sustain themselves.

Most nonprofits operate at the mercy of a few revenue streams: donations, sponsorships, membership dues, and grants. One or two bad years can be devastating. And even in good years, there’s often not enough funding to do the work the organization knows needs to be done. For many nonprofits, the mission is clear. The resource opportunities to fulfill it are not.

Van Doren believes nonprofits have an underexplored path: building products and services that create genuine member benefit while also generating revenue. Not merchandise. Not another gala tier. Real tools that solve real problems for the people the organization serves.

“Nonprofits are mission-driven,” Van Doren says. “That’s actually an advantage when it comes to product development. You’re building for the people you serve every day. You understand their problems because you live inside them. And because you’re not answering to shareholders, you’re less likely to make the kind of short-term decisions that erode trust.”

MatchDesk is one version of what that looks like. It started as a solution to OMSDC’s own operational problem. It’s now a product that any similar organization can use. The revenue it generates supports the mission it was built to serve.

“We talk a lot about how MBEs need to diversify revenue and build resilience,” Van Doren says. “The organizations that support them need to do the same thing.”

What’s Next

The MatchDesk team is now building out sponsor management and activation features, extending the platform’s value from attendee matchmaking into event monetization. The goal is to give organizations a single tool that handles the two things they struggle with most at events: making sure the right people meet, and making sure sponsors see measurable return.

For OMSDC, the next test will be DealMaker and future ConnectingOHIO events, where the combination of structured matchmaking and tighter event programming should close the gap between scheduled meetings and completed ones.

“The tool works,” Van Doren says. “Now we need to make sure everything around it works just as well.”


MatchDesk is currently accepting early-access signups at getmatchdesk.com. The first 100 organizations to sign up receive 50% off their first year.

Trend Watch: The Next AI Divide Isn’t Access. It’s Whether You Can Deploy AI Safely.

AI tools are easier to get than ever. The harder question is how to use them without creating new risk.

The AI conversation has shifted, and most people haven’t caught up to where it actually is.

A year ago, the question was access. Could your company afford the tools? Could your team figure them out? Could you get past the learning curve fast enough to matter? That question is fading. Tools are broadly available. Many are free or nearly free at entry level. The barrier to entry has dropped to almost nothing.

But “free to start” is not the same as “cheap to run.” At enterprise scale, the economics are different and getting harder. Compute costs have not followed the trajectory that most AI marketing implies. Energy costs, as we covered in last month’s Trend Watch, are climbing. The large AI companies themselves are not yet generating profits proportional to their infrastructure spend, which means the current pricing environment is unlikely to last. When the subsidy phase ends, the companies that deployed carelessly will feel it twice: once in rising costs, and once in the operational debt they accumulated when the tools were cheap.

What is GenAI? Generative AI refers to AI systems that create new content, such as text, images, code, audio, or video, based on patterns learned from large amounts of data. In business settings, GenAI is often used for drafting, summarizing, research support, content creation, and workflow assistance. The risk is that fluent output can look authoritative even when it is incomplete, wrong, or unsafe, which is why human review still matters.

That would be enough to think about. But there’s a second shift happening at the same time. And it’s arguably much more consequential.

The market is moving past basic chat use and into systems that can retrieve information, use tools, act across software, and make limited decisions on their own. This is the shift from AI as a writing assistant to AI as a semi-autonomous operator. Gartner says the growth of agentic AI applications and Model Context Protocol (MCP) is creating new avenues for cyberattacks and other exploits, especially when those AI systems access sensitive data, ingest untrusted content, and communicate externally, all in the same workflow. The Open Worldwide Application Security Project’s (OWASP) latest guidance reinforces the point by highlighting prompt injection, excessive agency, and unsafe handling of tool output as leading risks in agentic applications.

The AI story is shifting from “Can it help?” to “What exactly can it touch, what can it do, and who is accountable when it gets something wrong?” That’s a different management problem. And it’s arriving at the same time the cost assumptions are about to change.

Why Does It Matter?

The real divide is becoming operational maturity. Two companies can buy the same AI tool. One gets speed. The other gets new exposure.

This isn’t theoretical. Reuters reports that banks in Asia and Australia are already revisiting their AI deployment protocols because frontier models could increase the speed and scale of cyberattacks. Gartner predicts that by 2028, a quarter of enterprise GenAI applications will experience at least five minor security incidents per year, driven in part by immature security practices around agentic systems. The pattern is clear: the organizations that moved first on AI are now moving first on governance, because they’ve seen what ungoverned deployment actually looks like.

For OMSDC’s audiences, the implications split three ways.

A small firm can adopt AI quickly. That’s genuinely good. But a small firm can also expose customer data, internal files, or API credentials quickly if the tool is poorly configured. For a company with no IT department and thin margins, a data exposure event isn’t a learning experience. It’s an existential one.

A larger minority business may already be connecting AI to client service, documentation, proposals, and internal workflows. The question isn’t whether the tools work. It’s whether those connections have clear boundaries, review points, and someone accountable for what the system does when no one is watching.

A corporate member will increasingly evaluate suppliers through this lens. A vendor that uses AI carelessly can become a security or compliance liability. A vendor that uses it with discipline becomes more responsive, more consistent, and easier to trust. That difference will show up in execution long before it shows up in a questionnaire.

The point for every reader is the same: access alone won’t create advantage. Controls, review processes, and permission design will. And deploying well isn’t just a security question anymore. With costs poised to rise, it’s also an economics question. The firms that build disciplined, efficient AI workflows now will be the ones that can afford to keep running them later.

Where Do AI Agents Fit In?

If you’ve been following the AI conversation this year, you’ve probably heard the word “agent” more than any other term. It’s where productivity and risk start to converge, and the distinction matters more than most coverage suggests.

A standard AI chat tool generates text in response to a question. You ask, it answers. An agent does something fundamentally different. It can call tools, access files, browse the web, send messages, update databases, or orchestrate multiple steps in sequence. That’s genuinely useful. It’s also a genuinely different risk profile.

What is an AI agent? An AI agent is a system that does more than answer a prompt. It can plan, take actions, call tools, interact with external systems, and carry work forward across multiple steps. NIST describes AI agents as systems capable of autonomous actions such as writing and debugging code, managing calendars and email, and handling other emerging tasks. That added usefulness is exactly why control, permissions, and monitoring become more important.

Think of it this way. A chatbot that drafts a bad email wastes your time. An agent that sends a bad email wastes your client’s trust. A chatbot that hallucinates a number gives you wrong information. An agent that hallucinates a number and enters it into your accounting system gives you a wrong record that looks like a real one. The failure modes change when the system can act, not just speak.

OWASP describes “excessive agency” as a real vulnerability when an LLM has too much ability to trigger actions in response to manipulated or ambiguous inputs. Gartner warns that ordinary use can produce security failures when agents both touch sensitive data and consume untrusted inputs. Axios’ recent cybersecurity roundtable made the same point from an access-control angle: organizations are not yet managing agents with the same discipline they would apply to any other privileged workload that can read, write, and execute across systems.

Most business leaders hear “agent” and think convenience. Instead, they should hear “permissions,” “scope,” and “review.” A workflow that can take action isn’t just a better chatbot. It’s closer to a junior intern who works fast, never sleeps, and has no judgment about when to stop and ask. Deploying agents isn’t just about productivity. It needs to be a governance and cybersecurity discussion.

What does “excessive agency” mean? Excessive agency is the security risk that appears when an AI system has too much authority, too many connected functions, or too much autonomy for the job it is supposed to do. OWASP defines it as a vulnerability that enables damaging actions when an LLM-based system can call functions or interact with other systems in response to ambiguous, manipulated, or otherwise faulty outputs. The business version is straightforward: an AI assistant that should only retrieve information should not also be able to delete records, send messages, or trigger transactions without tighter controls.
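
The mitigation OWASP points toward can be pictured as a permission gate: every action an agent proposes is checked against an explicit allowlist before it executes, and anything not listed is denied by default. The tool names below are hypothetical; this is a sketch of the principle, not any particular product's implementation.

```python
# Minimal sketch of a least-privilege tool gate for an AI agent.
# Tool names are hypothetical; the point is that authority lives
# in an explicit allowlist and approval flow, not in the prompt.

READ_ONLY = {"search_documents", "read_record"}
NEEDS_APPROVAL = {"send_email", "update_record"}

def execute_tool_call(name, args, human_approved=False):
    if name in READ_ONLY:
        return f"ran {name} with {args}"
    if name in NEEDS_APPROVAL:
        if not human_approved:
            raise PermissionError(f"{name} requires human approval")
        return f"ran {name} with {args} (approved)"
    # Anything not explicitly listed is denied by default.
    raise PermissionError(f"{name} is not an allowed tool")

print(execute_tool_call("search_documents", {"query": "Q3 contracts"}))
```

Under this design, a retrieval assistant physically cannot delete records or send messages, no matter what its instructions or inputs say, which is exactly the boundary the "excessive agency" guidance asks for.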

Why System Prompts and Surface Guardrails Are Not Enough

Many teams still treat AI safety as a prompt-writing problem. Write better instructions. Add guardrails to the system prompt. Tell the model what not to do. Hope it listens.

System prompts and better instructions are good. But relying on hope is increasingly insufficient, as the gap between what prompts can control and what systems can do is widening.

OWASP’s 2025 guidance confirms that prompt injection remains a core vulnerability, and that techniques like retrieval-augmented generation (RAG) and fine-tuning don’t fully solve it. NVIDIA’s AI red-team work adds another layer: new semantic and multimodal prompt-injection techniques can bypass existing guardrails entirely, which is why output controls, layered defenses, and behavioral analysis matter more than relying on a single control surface.

What is prompt injection? Prompt injection is a security vulnerability in which a model is manipulated by crafted inputs that alter its behavior in unintended ways. OWASP notes that this can happen directly through a user prompt or indirectly through content the model reads from a website, file, email, or another outside source. The practical implication is simple: if an AI system can read untrusted content and also access tools or sensitive data, the model may be pushed into doing something it was never meant to do.
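
The indirect variant described in the box can be demonstrated without any AI at all. In the toy pipeline below, a naive agent treats directives found inside retrieved content as commands, while a guarded one only acts on the trusted user channel. The "TOOL:" convention and all content here are invented for illustration; real attacks and defenses are more subtle, but the structural point is the same.

```python
# Toy illustration of indirect prompt injection. A naive pipeline
# acts on directives wherever they appear; a guarded one treats
# retrieved text strictly as data. All content is hypothetical.

untrusted_page = "Quarterly results were strong. TOOL:send_email(boss)"

def naive_agent(user_request, retrieved):
    combined = user_request + "\n" + retrieved
    # Acts on any directive, regardless of where it came from.
    return [line for line in combined.splitlines() if "TOOL:" in line]

def guarded_agent(user_request, retrieved):
    # Only the trusted channel may trigger actions; retrieved
    # text is never interpreted as instructions.
    return [line for line in user_request.splitlines() if "TOOL:" in line]

print(naive_agent("Summarize this page.", untrusted_page))   # hijacked by the page
print(guarded_agent("Summarize this page.", untrusted_page)) # no actions triggered
```

In a real LLM system the untrusted content and the instructions share one context window, which is why the separation has to be enforced in architecture and permissions rather than in the text itself.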

Why system prompts are not enough.

A system prompt is the instruction layer developers use to shape how a model should behave. It matters, but it is not a complete security strategy. OWASP notes that prompt injection can still succeed even when guardrails exist in prompts, and that techniques like RAG and fine-tuning do not fully eliminate that risk. Once a system can retrieve, call tools, or take action, controls need to live in permissions, architecture, approval flows, and output checks, not just in the prompt itself.

Here’s an analogy that may help. Telling an employee “don’t share confidential information” is important. But if that employee has unrestricted access to every file in the company, an unlocked door to the server room, and the ability to email anyone in your client’s organization, the instruction isn’t the problem. The architecture is. The same logic applies to AI systems. The issue isn’t that prompts don’t matter. It’s that prompts are not a sufficient control plane once the model can retrieve, execute, and communicate. At that point, permissions, sandboxing, output validation, and human approval gates matter more than any instruction you can write.

A Cautionary Example: When Easy Setup Meets Deep Access

The spread of consumer-friendly, self-hosted agent tools illustrates where the gap between “easy to try” and “safe to run” becomes dangerous.

McAfee’s recent guidance describes tools like OpenClaw as self-hosted agents with deep system access, and warns that poor configuration can expose passwords, API keys, and private data. The same guidance cites reports of exposed installations and malicious plug-ins targeting credentials and financial information. TechRadar is making a broader argument about “shadow AI,” agentic tools that slip into business environments without IT oversight, proper configuration, or anyone asking whether the convenience is worth the exposure.

OpenClaw is worth naming not because it defines the whole market, but because it illustrates a pattern that will repeat. The tools that are easiest to install are often the ones with the broadest default access. The distance between curiosity and exposure is shrinking. And for a small business that treats a self-hosted agent as a casual productivity tool rather than a privileged system with deep reach, the consequences can arrive faster than the benefits.

This isn’t a reason to avoid experimentation. It’s a reason to treat experiments like experiments: sandboxed, monitored, and kept away from your primary business systems until you understand what the tool can touch.

What is MCP? MCP, or Model Context Protocol, is an open protocol designed to standardize how AI applications connect to external tools and data sources. Anthropic describes it as a standardized way to connect AI models to different systems, similar to how USB-C standardizes how devices connect to peripherals. In practice, MCP can make AI systems more useful by giving them access to files, apps, and business tools. It can also increase risk if those connections are too broad or poorly governed.

Where Humans Fit

The strongest AI deployments don’t remove human review. They move it to the places where judgment, approval, and accountability matter most.

That distinction is worth being specific about, because “keep a human in the loop” has become one of those phrases that sounds reassuring without actually telling anyone where to stand.

Human review should sit at permission design: deciding what the system can and cannot access before it’s turned on. It should sit at approval for external actions: anything that sends, publishes, submits, or commits on behalf of the business. It should sit at output review for sensitive or regulated workflows, where a polished-sounding wrong answer can create legal, financial, or reputational damage. And it should sit at monitoring: watching for failures, exceptions, drift, and the slow degradation of quality that’s easy to miss when every output looks professionally formatted.

There’s one more review point that most teams skip, and it may be the most important: periodically asking whether the tool is actually saving time.

One of the easiest AI mistakes is assuming that automation creates efficiency by default. In practice, some systems create rework, monitoring burden, debugging time, and workflow complexity that outweigh the productivity gain. A tool that drafts a document in ten seconds but requires thirty minutes of review, correction, and reformatting didn’t save twenty minutes. It moved the work and added a quality-control problem. That calculus is especially relevant for smaller firms, where every hour of overhead falls on the same few people.

A useful question to keep asking: if an AI workflow takes six steps to supervise, document, and correct, did you automate the work or just relocate it?

What Should You Do?

For smaller businesses (under $1M):

Stay with low-risk use cases first. Drafting, summarizing, internal organization, proposal support, and repeatable content are safer starting points than tools with deep file, inbox, or financial-system access. These are also the use cases most likely to survive a price increase, because the compute they require is modest.

Avoid self-hosted agents on your primary business systems unless you understand the security model. Tools like OpenClaw are better treated as sandbox experiments, not casual installs.

Ask one discipline question before adopting any new tool: is this reducing real work, or is it adding supervision overhead that I’ll be paying for in time, attention, and eventually in subscription costs?

For larger businesses:

Start drawing a clear line between AI use and AI governance. Standardize which tools are approved, where sensitive data can and cannot go, and what actions require human approval before the system executes.

Treat workflows involving customer data, contract data, or financial data as higher-risk categories. Gartner’s warning about “no-go zones” when agents combine sensitive data, untrusted content, and external communication is a useful frame for deciding where the boundaries should sit.

Begin thinking about cost resilience. The tools that are cheap today may not be cheap in eighteen months. Building workflows that depend on underpriced compute is a form of technical debt. The firms that deploy efficiently now, using the right-sized tool for the right-sized task, will be better positioned when pricing reflects actual costs.

For corporate members:

Update supplier and internal evaluation questions. Don’t just ask whether a vendor uses AI. Ask how it’s governed, what review points exist, what systems it can access, and what happens when something goes wrong. The quality of those answers will tell you more about operational maturity than any capabilities deck.

Treat AI agents as privileged workloads, not casual productivity tools. Axios’ security roundtable used that frame well: if an agent can read, write, and execute across systems, it deserves the same access controls and monitoring you’d apply to any other system with that level of reach.

Build internal policy around access, outputs, logging, and approval workflows, not just acceptable-use language. Acceptable-use policies tell people what not to do. Governance architecture makes it harder to do the wrong thing by default.

The Takeaway

The first AI divide was access. That divide is closing fast.

The next one is operational, and it has two faces. The first is security: who can deploy these tools with discipline, clear permissions, and real human review? The second is economics: who can deploy them efficiently enough that the workflows still make sense when the current pricing environment changes?

As AI systems gain the ability to retrieve, connect, and act, both questions start to matter more than whether a company has the tools at all. Reuters, Gartner, and OWASP are all pointing in the same direction: the spread of AI is becoming a security and governance story as much as a productivity story. And last month’s infrastructure analysis still applies. Software is getting more capable. The physical and financial systems underneath it are getting more constrained.

As quality differences between commercial and even open-source AI models shrink, disciplined deployment will become the biggest differentiator.

Trend Watch: AI Is Getting Cheaper to Run. Infrastructure Is On Shaky Ground.

New efficiency breakthroughs could lower AI costs for everyone, but energy strain, grid limits, and war-driven supply disruptions will shape who benefits first.

by Jamie R. Van Doren

What’s Changed?

Last week, Google Research published a compression algorithm called TurboQuant that does something genuinely useful: it shrinks the working memory AI models need to operate — by roughly six times — without degrading performance. On Nvidia’s H100 chips, a version of the technique delivered up to an eightfold speedup in a key processing step. Memory chipmakers noticed. Within hours of the announcement, shares of Samsung, SK Hynix, and Micron all dropped, as traders recalculated just how much physical hardware the AI industry will actually need.

The internet, naturally, compared it to the fictional compression algorithm from HBO’s Silicon Valley. Fair enough. But the business implications are real.

TurboQuant is not an isolated event. It sits inside a broader pattern: AI models are getting smaller, faster, and cheaper to operate. Techniques like quantization, distillation, and compression have been steadily reducing the computing resources needed to run useful AI. What once required racks of specialized hardware is beginning to run on leaner setups: smaller cloud instances, edge devices, even laptops.
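
Quantization, one of the techniques named above, can be shown in a few lines: 32-bit floating-point weights are mapped to 8-bit integers plus a single scale factor, cutting memory roughly fourfold at the cost of a small rounding error. This is a generic sketch of the basic idea, not Google's TurboQuant method.

```python
# Generic symmetric int8 quantization sketch: store weights as
# 8-bit integers plus one float scale, ~4x smaller than 32-bit
# floats. Illustrates the general technique, not TurboQuant.

def quantize(weights):
    scale = max(abs(w) for w in weights) / 127 or 1.0
    q = [round(w / scale) for w in weights]  # integers in [-127, 127]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.08, 0.99]
q, scale = quantize(weights)
restored = dequantize(q, scale)
error = max(abs(a - b) for a, b in zip(weights, restored))
print(q, error)
```

Production methods are far more sophisticated about where and how precision is dropped, but the trade is the same: less memory per weight in exchange for a bounded approximation error.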

Google released TurboQuant under an open research framework, and community developers had already begun porting it to consumer-grade hardware within a day of the announcement. An official open-source release is expected in the second quarter of this year.

Image credit: Google Research

This is the efficiency side of the equation. And for smaller firms, it’s the side worth paying attention to.

But there’s also another side.

Why Does It Matter?

AI infrastructure is under strain. Not eventually. Right now. The U.S. Department of Energy projects AI energy demand will double or triple within the next few years, potentially reaching 12% of total national electricity consumption by 2028. The country’s largest grid operator, PJM Interconnection, which serves over 65 million people across 13 states, has warned it could be six gigawatts short of reliability requirements by 2027. For everyday Americans, that means being told not to run your air conditioner on the hottest day of the year. And if enough people ignore that request, it means rolling blackouts so the whole grid doesn’t go down.

Retail electricity prices have already risen more than 40% since 2019. Some of that is weather, regulation, and fuel costs. But data center demand is an accelerating factor, and utilities from Virginia to Ohio to Texas are scrambling to keep up.

Large tech companies are responding by locking in long-term energy contracts, investing directly in power generation, and competing for grid capacity in ways that smaller firms simply cannot replicate. Meta recently committed up to $27 billion in a single deal for dedicated compute infrastructure. Google, Microsoft, and Amazon are collectively planning hundreds of billions in data center capital expenditure through the end of this year alone.

There’s a less obvious ripple, too. Iranian strikes on Qatari gas infrastructure have knocked out roughly a third of global helium production — a gas most people associate with party balloons but that chipmakers need to manufacture the semiconductors AI runs on. Without helium, you can’t etch the chips that power data centers. Software can get more efficient, but it still needs hardware, and that hardware supply chain just got more fragile. Cheaper algorithms don’t help much if you can’t build the machines to run them on.

So, here’s the tension. AI is getting cheaper to run, yes. But the infrastructure that supports it is getting more expensive and much more constrained. Energy costs are climbing. Grid capacity is tightening. And now, war-driven supply chain disruptions are threatening the materials needed to build the hardware itself. Efficiency improvements like TurboQuant help at the software layer, but software runs on chips, chips run on power, and both of those supply chains just got more complicated. The bottleneck isn’t the algorithm anymore. It’s, well… everything else.

For MBEs, this creates a two-sided opening.

The opportunity: If useful AI tools require less memory and less compute, the cost of adoption drops. You don’t need a massive technology budget to use AI well. You don’t need to build anything from scratch. The tools that help you write faster proposals, run quicker competitive analysis, automate reporting, and tighten forecasting are getting better and cheaper at the same time. That lowers the barrier to entry in a meaningful way.

The risk: Infrastructure advantages compound. Companies that can secure compute capacity, negotiate energy contracts, and invest ahead of demand will operate with structural advantages that have nothing to do with intelligence or effort. If energy costs keep climbing and grid access remains uneven, operational resilience becomes a competitive differentiator, not just a nice-to-have.

It’s worth repeating one point in particular: recent war-driven energy shocks are a reminder that technology growth doesn’t happen in a vacuum. As oil, power, and materials markets tighten, operational resilience matters just as much as innovation. The AI economy depends on the physical economy, and the physical economy is under pressure from several directions at once.

What Should You Do?

For MBEs: focus on disciplined adoption, not ambition.

The advantage right now isn’t in building custom AI. It’s in applying existing tools to real bottlenecks. The companies that benefit won’t be the ones experimenting casually. They’ll be the ones building repeatable workflows with clear outputs and measurable time savings.

Where are you spending hours on work that a well-configured AI tool could cut by a third? Proposal drafting, compliance documentation, market research, internal reporting? These aren’t glamorous use cases, but they’re the ones that actually change how a small company operates day to day.

The right question isn’t “Are we using AI?” It’s “Can we point to specific outcomes that improved because of it?” If the answer is vague, the implementation isn’t working yet.
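One way to make that answer less vague is to put numbers on a single workflow. Here is a minimal back-of-the-envelope sketch; all figures (the hourly rate, the hours, the tool cost) are hypothetical placeholders, not OMSDC data:

```python
# Back-of-the-envelope check for "can we point to specific outcomes?"
# Compare baseline vs. current hours on one workflow, then net the
# dollar savings against the tool's monthly cost. All numbers are
# illustrative assumptions.

HOURLY_RATE = 65.0  # assumed fully loaded cost per staff hour

def monthly_savings(baseline_hours, current_hours, tool_cost):
    """Return (percent time reduction, net dollar savings per month)."""
    hours_saved = baseline_hours - current_hours
    pct_reduction = hours_saved / baseline_hours * 100
    net = hours_saved * HOURLY_RATE - tool_cost
    return pct_reduction, net

# Example: proposal drafting drops from 40 to 26 hours/month
# after adopting a $60/month tool.
pct, net = monthly_savings(40, 26, 60)
print(f"{pct:.0f}% time reduction, ${net:,.0f} net monthly savings")
# -> 35% time reduction, $850 net monthly savings
```

If you can’t fill in those three numbers for at least one workflow, that’s the signal the implementation isn’t working yet.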

For corporate members: start looking at efficiency as a signal.

A supplier using AI well may not look different on a capabilities slide. But they’ll be more responsive. Their documentation will be cleaner. Their turnaround will be faster. Their communication will be more consistent. These are observable differences, and they’re worth weighting in supplier evaluation.

As AI becomes more accessible, the gap won’t be between companies that use it and those that don’t. It will be between companies that use it with discipline and those that treat it as a novelty. That distinction will show up in execution long before it shows up in an RFP/RFQ response.

OMSDC’s Upcoming AI Workshop Series

The Ohio Minority Supplier Development Council (OMSDC) is developing a practical AI series for MBEs and others. We want to hear what formats and topics would actually be useful. Take the short survey at the link below. Complete it and you can download the Competitor Analysis AI Workflow one-pager, a step-by-step guide with prompts and instructions you can put to work immediately.

Take the survey and download the workflow

Questions Worth Asking

For MBEs:

  • Where are you spending time that could be reduced by 30–50% with the right tools? And have you actually tested that?
  • Are you building repeatable AI workflows, or relying on ad hoc experimentation?
  • If energy and compute costs rise, does your operating model stay viable?

For corporate members:

  • Are your suppliers becoming measurably more efficient over time, or staying flat?
  • Are you evaluating responsiveness and operational clarity, or just price and scale?
  • How do you identify partners who are quietly improving their operations through technology?

For both:

  • If AI becomes cheaper and more accessible, what differentiates you?
  • If infrastructure becomes more constrained, how do you stay flexible?

The Takeaway

The efficiency race in AI is real. And it favors smaller, disciplined adopters more than most people realize. But it’s happening against a backdrop of physical constraints (energy, grid capacity, supply chain friction) that won’t resolve quickly. The companies that navigate both sides of that equation, getting leaner on the software side while staying resilient on the infrastructure side, are the ones best positioned for what comes next. The most advanced AI won’t necessarily be the differentiator. The most disciplined use of it will be.

AI Won’t Replace Your Team, But It Can Change How You Compete


AI is quickly becoming a practical tool for small and mid-sized businesses, especially those operating with limited time, staff, and resources. Our new White Paper, by Jamie Van Doren, The Practical AI Playbook for SMBs, breaks down where AI actually creates value — not in hype, but in real workflows like proposals, sales prep, research, and operations. Used well, AI helps lean teams move faster, produce more polished work, and compete more effectively without adding headcount.

But the advantage only shows up when it’s used with discipline. Our white paper highlights a few critical truths: AI is strongest as a first-draft and organization tool, not a decision-maker. It can confidently produce wrong or biased outputs. And overreliance can erode skills and judgment over time. The takeaway is simple — businesses that win with AI won’t be the ones using it everywhere, but the ones using it intentionally, with clear guardrails and human oversight. Download our white paper and learn how.