In March of 2024, my boss sat me down and told me she had a new task for me: figure out how to bring AI to our workplace. Some combination of OpenAI events and industry reports had left our leadership team with the conviction that our company needed to be on the bleeding edge of how AI will transform the workplace, and, like many projects that feel important but don’t belong to anyone, the responsibility fell to my team.
Nine months later, I’ll refrain from sharing the exact details that would get me in trouble with my comms team, but I can happily report that my workplace has made gradual but meaningful strides in becoming AI-enabled. The majority of our employees are weekly active ChatGPT users, we’ve got people building custom GPTs for all sorts of workflows across every single department, and we are improving the quality and volume of usage every month. In the process of getting to this place, I’ve also learned so much (and yet so little) about these tools and where they might fit into the future of work. To cap off my year, I’m sharing some of these reflections for those who are in the middle of, just starting, or simply curious about bringing AI tools and LLMs into the daily operations of their business.
Lesson #1: LLMs are unlike any other workplace software
Most of the time, when you’re buying applications and software platforms for a business, the process goes something like this: identify a problem, identify software that solves that problem, buy software, teach employees how to use the software, set them loose. For those of us who work anywhere near business technology, this process should be familiar.
LLMs are totally different. While much of the B2B SaaS market is made up of similar underlying “technologies” that companies create value from by wrapping and packaging them to help with specific high-friction use cases, LLMs are unwrapped and undirected. You’re getting the technology with no clear mandate on how to use it or what problem it should solve. Flipping the conventional software implementation process, buying an enterprise LLM goes more like this: identify really cool, potentially useful, and highly expensive software, buy software, inform employees that the software has been bought and they should use it, train employees on how LLMs work… then hope they find problems to solve with it?
While you would never buy an HRIS and say, “Yeah, it’s cool, but I’m just not sure what to use it for,” those of us in the practice of introducing LLMs to the workplace hear this practically every day. In the past five years, I’ve supported nearly a dozen software implementations, and what has made implementing an LLM so different from all of them is that once the technology is in place, someone (or many people) in the organization has to do the additional work of figuring out what to do with it.
In my mind, this is the defining challenge of enabling a workplace with AI. While there are tons of AI capabilities showing up in our existing software like Slack, as well as emerging single-use-case AI products, general-purpose LLMs like ChatGPT and Claude offer unique flexibility and value to workplaces while also requiring significant overhead. Seriously. Consider for just a second how remarkable it is to, with one platform, be able to help the creative team build their campaign just as easily as you can help an engineer test their code.
While you could hypothetically achieve both of these things by giving everyone in your company a license and seeing how the superusers naturally run with them while the skeptics leave them untouched, in practice, I’ve found it’s not that easy. To truly alter the way work is done and create sizable efficiencies, you need broad swaths of the organization to mutually commit to trying to find a better (AI-powered) way. This brings us to lesson number two.
Lesson #2: You can either burn a lot of money or a lot of time. Or both.
Harnessing the transformative power of workplace AI isn’t easy, and it isn’t cheap. In researching tools to pilot at our business, my team came across licenses that cost as much as $100 a month per head, and that’s just the licensing. Even as the models become cheaper and these tools stand to become more accessible, the real price you pay is in implementation: the costly process of figuring out how to use LLMs.
Consider these options that organizations have when introducing AI:
Option 1: Spend money, save time.
To save your busy and already thinly stretched employees from having to figure out how to use LLMs in their daily work, you can buy a ton of single-use-case tools and AI add-ons for your existing application stack that take the hassle out by bundling LLMs into clear, purpose-built products. From manager coaches to slide designers, there has been a huge influx of AI workplace tools, and some of them are really good.
However, at their core, most of these are just some basic product design and prompting sitting on top of OpenAI’s model. While they will save you a lot of time in figuring out how to use AI and deliver it to employees, they can also cost you a hefty amount in what are functionally duplicative licensing fees. They can also exacerbate existing application sprawl, sending employees into context-switching mayhem as they jump to a different platform for each specific task.
Approaching AI tool purchasing as you would any other software purchase can exacerbate many of the problems businesses already have with tracking, managing, and containing an ever-growing tangle of workplace applications, many of which you could, with some effort and an LLM, set up yourself.
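To make that “thin wrapper” point concrete, here is a minimal sketch (in Python, using OpenAI’s SDK) of roughly what many of these single-use-case tools amount to under the hood: a fixed prompt plus an API call. The prompt, function name, and model choice are illustrative stand-ins of my own, not any vendor’s actual implementation.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A hypothetical "product" reduced to its essence: one carefully written prompt.
SYSTEM_PROMPT = (
    "You are a meeting-notes assistant for an operations team. "
    "Summarize the notes in 3-5 bullet points, then list action items with owners."
)

def summarize_notes(raw_notes: str) -> str:
    """Send the notes to the model with the fixed system prompt and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": raw_notes},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarize_notes("Raw notes from today's ops sync go here..."))
```

The real products add polish, integrations, and guardrails on top of this, but the core pattern is often that simple, which is why the licensing fees can feel duplicative.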
Option 2: Less money, a lot more time.
Conversely, if you purchase everyone licenses for one LLM and call it a day, you’ll save on the duplicative costs and headache of managing a million applications while introducing a different type of cost. Tools like ChatGPT blur the traditional line between build and buy, requiring both investment in third-party technology and significant effort to tailor their use. You may only be paying for one kind of license, but it takes hundreds if not thousands of hours to identify use cases, customize prompts and GPTs that work for your business, train employees on the skills they need, educate them about the risks they might encounter, socialize and broadly implement high-value use cases, convince people to structurally alter workflows, and generally figure out where the juice is actually worth the squeeze, and whether your assessment will still hold in a month when the technology has somehow changed completely.
If you don’t believe how costly this time investment is to an organization, consider this: in 2024, 20% of BCG’s earnings came from GenAI projects, and Accenture clocked three billion dollars in generative AI-related bookings to help companies adopt and better utilize AI. Three billion dollars. That’s the value of outsourcing AI implementation to just one company for one year.
Companies ring up these costs because it is comparatively cheaper than the time commitment of recruiting their own employees to learn about and comprehensively implement workplace AI on top of managing their primary responsibilities. This type of “building” (designing prompts and GPTs bespoke to business-specific processes, setting up integrations, and keeping contextual information current) is an extraordinary amount of “extracurricular” work to put on your employees. Sure, you could also build a team (like I have) that supports this type of work, but even having in-house technical and operational support won’t save you from the hundreds of hours of change management required to bring LLMs into the workplace, or from the fact that research and logic tell us workplace AI is most effective when employees identify the need for it themselves.
My unsolicited recommendation
The best workplaces will balance the two approaches above by doing a little bit of both. Having an LLM like Claude or ChatGPT at employees’ fingertips can help achieve unparalleled efficiency if you train and incentivize organizational influencers to help evangelize high-value use cases. We’re talking 10x time savings for routine and time-consuming tasks. However, for many use cases, you’re better off buying something pre-wrapped and designed than tasking your in-house engineers with the work and pulling them away from your product. Finding the right balance is an inexact and imperfect exercise, but most workplaces will find the greatest value in some combination of LLMs and well-vetted single-use-case products that would take too much time, knowledge, or skill for the team to build internally.
Lesson #3: Don’t distract your employees… but definitely consult them.
While we’re on the subject of how you engage your employees, another major learning for me as we went about launching pilots, educating employees, and implementing tools was that while many of our employees were unfamiliar with or skeptical of AI, several had been using AI at work long before we decided to ask them to. In particular, your technical teams are likely a step ahead of you in finding use cases, understanding the technologies, and trying things out.
If you work at a truly non-technical company, this may not apply to you. But if you work in any type of field that employs technologists, I cannot overemphasize how valuable these people are as partners, especially if, like me, you are not a technical employee. Very often, the people who think about “the business” and the people who think about technology are not on the same team. And while those of us responsible for managing costs and processes are likely nominees for AI workforce program leadership, we are also starting from behind.
As an aside, I’ll throw in another tidbit of advice for my fellow business types: you need to get started on the learning now. That doesn’t mean enrolling in a course on ML engineering, but it does mean you should do some reading about how AI technology works. If you’re not sure where to start, consider reading Artificial Intelligence: A Modern Approach, Artificial Intelligence Basics: A Non-Technical Introduction, and Life 3.0: Being Human in the Age of Artificial Intelligence. All of these are easily digestible, comprehensive, and wildly helpful books for learning enough about AI to feel like you’re at least speaking the language. If books aren’t your thing, there are also plenty of good podcasts and tech columns that offer more bite-sized learning. If there’s enough interest, perhaps I’ll do a newsletter with all my favorite resources for learning about AI.
Anyhow. The point is that LLMs are a technology more than a product, and to understand how to use them, and how people are already using them in your business, you should find some time to talk to your tech teams. I guarantee you’ll find some people who are already using AI at work (hopefully with approved tools!), and in those people you will also find thought partners who can offer valuable perspectives on where your business can use AI and how to help bring it to other employees. The trick is to use these people’s time wisely. If your team is spending over twenty hours a week with internal tech partners talking about AI, consider that you may benefit from your own dedicated technical help. As any service function should, be mindful of how you’re asking for employee time that takes away from their core work.
Lesson #4: Your AI tools are only as good as your knowledge management practices
The final lesson I’ll share, although I’m realizing that this piece could go on forever and probably warrants a follow-up, is that you can get all of the procuring, piloting, and implementing of AI tools right, but if your organization is awful at information management, you can basically reduce the expected value of these efforts by half.
Because information is the most basic ingredient for AI tools, an organization with scattered, outdated, siloed, uncatalogued, or inaccurate information is unlikely to benefit any more from AI tools than it would from a fantastic intern. Take Notion AI as an example. Anyone who knows me at all knows that I love Notion. I love their product, I love their team, and I am bullish on their place in the workplace technology market (I unfortunately do not benefit at all from saying this… I really just love Notion).
Notion was one of the first knowledge management tools to introduce AI-powered search, a technology that could save your employees hundreds of hours spent hunting for updates and information, and spare internal experts from answering rudimentary questions. Notion’s search feature combs your entire workspace, as well as connected applications like Slack, to let you ask questions like “What’s our time off policy?” or “What’s the status of X project?” and gives you succinct answers with linked sources. Used well… it’s game-changing. Except if your knowledge management is a mess.
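To see why messy documentation undercuts even a great search feature, here is a toy sketch of the general pattern (pull relevant pages, hand them to a model, ask for a cited answer) in Python with OpenAI’s SDK. This is my own illustration under invented names, not how Notion actually builds its search; the page titles, policy text, and model choice are made up for the example.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Imagine these are the pages a workspace search would retrieve. One is stale
# and was never archived, which is exactly the knowledge-management problem.
knowledge_base = {
    "PTO Policy (2021, never archived)": "Employees receive 15 days of PTO per year.",
    "PTO Policy (2024)": "Employees have unlimited PTO with manager approval.",
}

def answer(question: str) -> str:
    # Naive "retrieval": hand every page to the model and ask it to cite sources.
    context = "\n\n".join(f"{title}:\n{body}" for title, body in knowledge_base.items())
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": "Answer using only the provided documents and cite the page title you relied on.",
            },
            {"role": "user", "content": f"Documents:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(answer("What's our time off policy?"))
# With two conflicting policy pages in the index, even a strong model can only
# flag the contradiction or guess which page is current.
```

The model only sees whatever your workspace serves up, current or not, so the quality of the answer is capped by the quality of the pages behind it.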
If you cannot provide AI tools with both the basic information they need to be helpful (like your time off policy) and the richer context that explains what makes your business unique and what your priorities are, you will miss out on valuable opportunities to break silos and synthesize information. Having good, clear, accessible documentation about your business and how it runs is a non-negotiable in truly bringing AI to your workforce. If you want to dive into enterprise-wide transformation but don’t know where to begin, my recommendation is to start there.
So…
One of the best parts of my career has been the privilege of working on complex and seismic operational challenges. From corporate strategy to organizational design, I’ve gotten to work on a lot of cool things that have really stretched and sharpened my skills, but taking on the challenge of shaping an AI-enabled future for my workplace has been one of the most interesting yet. I’ve learned so much, I regret so much, I’m proud of so much, and I’m amazed by how much more there is to learn every day. While these are some of the most important lessons I’m walking into 2025 with, I’m looking forward to hopefully sharing more with you on this subject in the coming months.
For now, thanks--always--for reading, and happy New Year!