Change is tough. Major transformation is even harder. It requires accepting and embracing things you have learned to distrust – things experience says are never as simple or as easy as a glitzy advertisement claims. The old saying “Fool me once, shame on you – fool me twice, shame on me” encapsulates the biggest barrier to transformation in any organization.
Such distrust is usually well earned: true transformational capabilities rarely just appear; they evolve. Early in that evolution, things are definitely less than perfect, and distrust is entirely appropriate. The problem arises when the evolution reaches the point where it fully delivers on its promises, and the baggage of previous disappointments becomes a barrier to using these evolved capabilities to gain advantage and transform effectively.
Such is the current state of Artificial Intelligence solutions and their ability to transform business operating models. The transformational capabilities are there, but the resistance is real. This resistance is not just fear of the unknown or general resistance to change; its foundational pillars have been built from decades of lessons learned. Every failed technology initiative, every “too good to be true” effort an organization has experienced, feeds what would normally be healthy cynicism but today is simply a barrier to organizational growth and prosperity.
Let’s focus on the very real concerns and commonly accepted institutional “wisdom” around new solutions, and how they contrast with the current reality of AI solutions. These concerns are now often completely wrong – not because they were unfounded when they formed, but because the “foundational truths” they rest on simply do not apply to properly executed AI solutions. Four main concerns, or “truths”, stand out as common errors:
The reality is that AI solutions can do everything most people in your organization do today – just better.
Most people today recognize that assembly-line work is commonly performed by factory robots. Medical robots perform surgery with a precision even the best doctors can’t match. No one blinks at the prevalence of physical automation that surrounds us all – but there is something that feels simply unacceptable about the idea that basic human intelligence and judgement is now inferior to, or even merely at parity with, AI.
The truth is that, at least for any Information Worker activity, AI automation has been able to exceed human performance since about 2018, and AI capabilities continue to improve while becoming simpler to implement and use. “Anything you can do AI can do better – AI can do anything better than you” is a tough thing for most to swallow, but it’s the truth. Best of all, AI solutions don’t make the common mistakes humans do. AI doesn’t “fat finger” data entry or get tired late in the afternoon after a long day and insufficient caffeine. It simply works, 24 hours a day, without breaks.
It's very common for transformation teams to spend significant effort trying to identify “use cases” for AI solutions. This is certainly useful for prioritizing the largest-impact initiatives to start with, but it ultimately misses a critical truth: AI can and will ultimately be used for everything people do today. Approaching AI adoption as a transformational program, rather than a use-case-by-use-case project stream, enables much more rapid transformation and attainment of measurable benefits.
Richard Baldwin, a professor at the Geneva Graduate Institute in Switzerland, was famously quoted in 2023 as saying, “AI won't take your job. It's somebody using AI that will take your job.” Empowering people to apply AI solutions broadly – not just to solve specific use cases, but to enable productivity benefits for every role – should be the mandate of every organization today.
Testing and validation is another area where a great deal of effort is wasted, mainly as a result of this “common sense” disbelief in AI capabilities. It's common to see organizations “putting AI through its paces” unnecessarily, and with far less rigor than the validation that has already been performed elsewhere.
Testing and validating the AI solutions implemented in any organization should definitely occur, but validating AI capabilities in general is something no organization needs to do any longer. It is no longer a question of validating capabilities; it is a simple matter of ensuring that a deployed solution does what it was intended to do before it goes to production. That is a need for change control, not experimentation with something “bleeding edge” or unproven. It’s the difference between making sure you turned off the ignition and locked the car versus testing whether internal combustion actually turns an axle.
The reality is that AI solutions work with every system an organization already has, in the same way that people use those systems today.
For decades, few things have been as problematically incompatible as different computer systems and software packages. Different software developers, hardware manufacturers – pretty much everyone in the technology landscape – have struggled with the idea of seamless interoperability. It hasn’t happened yet. Not everything in technology works with everything else; this is a fact of life, one that every organization has had to address before introducing any new technology. Differing standards and competitive pressures have contributed to a mess of incompatibility, feeding an entire industry of integration tools and ways to “patch things together”.

Today’s AI solutions leverage AI automation to sidestep this issue completely. AI solutions can leverage existing integrations, Application Programming Interfaces (APIs), and data connectivity frameworks – but they don’t have to. They can simply act like people do and use the required systems the way people use them today. Giving an AI solution its own computer and licenses to applications may seem alien, but it works. Today’s AI-driven automation can log on to systems, send mouse clicks and keystrokes, and chat with coworkers or clients – it can do anything a person can. The incompatibility of different systems may never be fully solved, but it doesn’t have to be for AI solutions to work with anything. If a person can use it, AI solutions can, too.
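To make the mechanics concrete, here is a minimal sketch of that GUI-level approach using pyautogui, an open-source Python automation library. The screenshot file and credentials are hypothetical placeholders, and a production AI solution would wrap this idea in far more robust tooling:

```python
# Minimal sketch of GUI-level automation: operating an application the way
# a person does, with mouse clicks and keystrokes instead of an API.
# "login_username_field.png" and the credentials below are hypothetical.
import pyautogui

pyautogui.PAUSE = 0.5  # short pause between actions, roughly human pacing

# Locate the username field by matching a saved screenshot of it on screen
# (newer pyautogui versions raise ImageNotFoundException if there is no match).
field = pyautogui.locateCenterOnScreen("login_username_field.png")

pyautogui.click(field)               # focus the username field
pyautogui.write("svc-ai-worker")     # the agent logs in with its own account
pyautogui.press("tab")               # move to the password field
pyautogui.write("example-password")  # hypothetical credential
pyautogui.press("enter")             # submit the login form
```

Commercial RPA and AI-agent platforms layer vision models, retry logic, and credential vaults on top of this same principle: if a person can see and operate an interface, an AI solution can too.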
The reality is that AI solutions are usually a “light lift” to get started with and immediately solve resource challenges – at a fraction of the cost of “business as usual”. AI solutions can and should be self-funding.
Not too many years ago, AI was considered a “science project”: it took time, money, and highly skilled resources to do anything, and few of those projects actually made it into production. Skepticism here is often based on that old experience. Today, most enterprise software packages have AI capabilities, and commercial AI platforms have reached a level of simplicity that enables rapid adoption.
When something is simpler, it can be done faster and cheaper, and this is very true of AI solutions today. In fact, AI solutions are often the most rapid way to solve the very resource challenges commonly presented as reasons not to pursue them. For a team that is understaffed and overwhelmed by volume, the obvious priority is to increase capacity and pursue immediate revenue opportunities. If AI solutions can enable teams to scale faster than hiring and training new workers, as is always the case, then it makes no sense to wait. AI solutions can make teams orders of magnitude more productive than they currently are, and can eliminate the need to scale headcount proportionally with the volume of business. “Growth without cost” is now an obtainable reality in most industries; waiting to embrace the opportunity is the only bandwidth-versus-cost option that doesn’t make sense.
The reality is that there are real risks and security considerations around AI use – but they are well understood and now relatively straightforward for any organization to address within its AI solutions. The key is that AI solutions used in an organization should not be the same as a basic, publicly available Generative AI search or chat function. They should apply already established security and validation principles – practices that should become part of the way organizations use AI.
Ensuring accurate data governance and the protection of private and proprietary data is a foundational aspect of any solution, not just an AI solution. “Locking down” critical data and applications appropriately is likewise a fundamental responsibility of any organization, and another foundational aspect of effective AI solutions. Applying modern security tools to new AI solutions is a critical step, but it reinvents nothing; organizations should focus on making sure it is done appropriately. Ensuring this is essential, but it need not consume significant cycles in execution.
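As a simple illustration of that point, the sketch below (in Python, with hypothetical role and data-source names) shows an AI agent’s data access passing through exactly the same role-based permission check a human user would face; the pattern, not the specifics, is what matters:

```python
# Minimal sketch: an AI agent's data access goes through the same role-based
# control as any human user. All names here are hypothetical placeholders.

ROLE_PERMISSIONS = {
    "claims_processor": {"claims_db"},  # a human role
    "ai_claims_agent": {"claims_db"},   # the agent gets a scoped role, no more
}

def can_read(principal_role: str, data_source: str) -> bool:
    """Access is granted by role, not by whether the caller is human or AI."""
    return data_source in ROLE_PERMISSIONS.get(principal_role, set())

def fetch_for_agent(data_source: str) -> str:
    if not can_read("ai_claims_agent", data_source):
        raise PermissionError(f"ai_claims_agent may not read {data_source}")
    return f"records from {data_source}"  # stand-in for a real query
```

Nothing here is new security machinery; it is the organization’s existing access model applied to one more principal.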
Of greater concern is the “black box” aspect of AI capabilities and, specifically, a lack of confidence in Generative AI’s ability to make accurate decisions or supply real answers rather than AI “hallucinations”. Here, the answer is actually the same as the concern: don’t trust AI. Any AI solution must incorporate the ability to validate both what is being asked of AI and what AI answers. Without this validation, AI utility will always be limited. With it, AI solutions are commonly less risky than human engagement.
The simplest way to solve for this is to have AI solutions perform activities the way people do today: acting on business rules that already exist, using knowledge gained from experience, or checking their “cheat sheets” as needed when new to the role. This means AI solutions must incorporate pre-processing validation and, at a minimum, use business rules in AI activities. More sophisticated AI solutions can use readily available machine learning models to “pre-inspect” activities for their likelihood of success and to perform validation testing of output. Such a “symphony of machine learning models” approach can “sandwich” generative AI, efficiently providing reliable confidence scoring and mitigating the risk of inappropriate responses (a minimal sketch of this pattern closes this section). AI is not by itself a real business solution – a true AI solution can and must incorporate components of:
Data – to ensure the right information is used and disseminated.
Security – to ensure modern data and application controls are effectively utilized.
Intelligent Automation – to enable AI to be used to actually do things, and to integrate with all existing systems.
Combining these components allows for effective AI solution development and adoption in a truly transformational way.
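To illustrate the “sandwich” pattern described above, here is a minimal sketch in Python. Every function, model, threshold, and rule is a hypothetical stand-in for whatever rules engine, generative model, and validation models an organization actually deploys:

```python
# Minimal sketch of the "sandwich" pattern: deterministic and machine
# learning validation wrapped around a generative AI call. All names and
# values are hypothetical placeholders, not any specific product's API.

CONFIDENCE_THRESHOLD = 0.90  # hypothetical cutoff for auto-approval

def passes_business_rules(request: str) -> bool:
    """Pre-processing validation: the same rules a person applies today,
    e.g. rejecting requests that reference restricted accounts."""
    return "restricted" not in request.lower()  # stand-in for a rules engine

def generate_answer(request: str) -> str:
    """Stand-in for a call to whatever generative AI model is deployed."""
    return f"Drafted response for: {request}"

def score_answer(request: str, answer: str) -> float:
    """Post-processing validation: a separate ML model (or ensemble) scores
    the likelihood that the answer is accurate and appropriate."""
    return 0.95  # stand-in; a real model returns a learned confidence

def escalate_to_human(request: str, reason: str) -> str:
    """Anything the sandwich cannot vouch for goes to a person, just as an
    experienced reviewer handles an unusual case today."""
    return f"Escalated ({reason}): {request}"

def handle(request: str) -> str:
    if not passes_business_rules(request):
        return escalate_to_human(request, reason="failed pre-validation")
    answer = generate_answer(request)
    if score_answer(request, answer) < CONFIDENCE_THRESHOLD:
        return escalate_to_human(request, reason="low confidence")
    return answer  # validated on both sides of the generative step
```

The specific checks will differ everywhere; the shape is what matters. The generative step never acts unchecked: every request is validated before it reaches the model, and every response is validated before it reaches a customer or a system of record.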