An ounce of prevention is worth a pound of cure.

Everyone is touting AI. Everyone seems to be doing AI.

A word of warning: this isn't a short, fluffy marketing piece. It is 1,600 words about things that can and will impact you, positively or negatively.

As a brand or retail executive, your professional experience is vast and deep. Your corporate technical team, as skilled as its members are, is only as good as the last webinar or article they consumed.

You are not in the business of AI, nor in the business of buying and using software systems and tools. You are in the business of designing, manufacturing, distributing, marketing and selling innovative products that customers love and rely upon.

You are in the business of delivering shareholder value. Market share acquisition and retention. Developing competitive advantages.

All of which is hard enough.

When adopting an AI-enabled system, how do you know what you are getting? Really?

Like you and members of your executive teams, many on our team have built and even bought complex and expensive 'enterprise systems'. We've built, marketed, sold and then had to support software with millions of lines of code.

In previous endeavors, we've also bought and used six-, seven- and eight-figure enterprise software packages from Microsoft, Salesforce and Adobe, as well as powerful and promising niche solutions from small startups.

As you well know, very little is plug-and-play out of the box.

Often, a consultancy with relevant expertise would have to be brought in to help navigate what a system actually did well versus what it was promoted to do well in the webinars and sales materials.

One specific incident illustrates this well. As cloud-based CRM systems were exploding just over a decade ago, a marquee software package had been purchased from the market leader after nearly a year of analysis. Issues kept a team of engineering FTEs busy configuring modules to work with each other correctly after a very expensive purchase. Then a series of mysterious data leaks, along with glitches in how the system escalated and fetched record data, began to bubble to the surface.

When this was escalated to the consultant's most senior subject matter experts, located in another market, the answer was quick and to the point. The module purchased was not yet stable, for a variety of reasons: our patches sat on top of the software company's own improvements and redeployments, all the result of brand-new systems doing new things in new ways.

We were introduced to another client who had devised a workaround: another firm with the same problem, just three months ahead of us.

The consultant's opinion was that the module performed poorly at this stage of its lifecycle and should not have been purchased. But the truth had been buried. There were signs: dev forums here and there bemoaning the range of issues, yet no clear business intelligence that could be easily measured against a set performance standard. Because the module was part of a larger, multi-year subscription, the cost had to be carried without the benefit of a tool that did what it was advertised to do, in the way it was supposed to work.

Still, we had access to another firm's workaround, which was adopted (and paid for).

This process led to many unintended consequences, as is so often the case with new tech.

It's called 'bleeding edge' for a reason.

It was in a large executive meeting in 2014, with a team of legal counsel weighing the options, that a phrase which became DecisionArts' tagline was uttered.

"It would have been great to have learned from this mistake without having to have made it."

The executive went on to say that an independent evaluation against a set of industry performance standards (or regulations) is as important for software and services as it is for manufacturing.

He continued,

"The manufacturing company I invested in, to do business with the OEM's it supports, has to get ISO certified, them maintain against against that certification AND be audited by a third party and provided a performance score. That score determines their value and viability to their OEM clients."

The fact remains, in each case, that unless you can validate what you've bought, you are left figuring it out later. After the fact.

AI tools are too important and too powerful to leave to chance. Regulation requiring disclosure from any firm using AI tools is already in play. Quantifying performance against a standard is next.

So with all of this said, we would like to offer a few important words on standards, AI governance, quality, and transparency compliance.

One glaring gap at this stage is the efficacy of the artificial decision process itself, along with the security and accuracy of the inputs used in that process and the outputs it yields. Without a clear understanding of the boundaries and scope of this efficacy, there is no way to definitively determine the safety of the input data (such as consumer PII) or the accuracy, or even malevolence, of the output.
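
To make the input-safety half of that gap concrete, here is a minimal, purely illustrative sketch of the kind of guard a team might place in front of an AI system. The patterns and the redact_pii helper are hypothetical examples of ours, not a description of any vendor's product:

```python
import re

# Illustrative-only patterns for two common kinds of consumer PII.
# A production system would rely on a vetted detection library and
# audited logging, not a handful of regexes.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> tuple[str, list[str]]:
    """Replace detected PII with placeholders before the text ever
    reaches an AI model, and report what was found for the audit trail."""
    findings = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text, findings

clean, findings = redact_pii("Contact jane@example.com re: SSN 123-45-6789")
print(clean)     # Contact [EMAIL REDACTED] re: SSN [US_SSN REDACTED]
print(findings)  # ['email', 'us_ssn']
```

The point is not the regexes; it is that input safety can be checked, logged and scored at a defined boundary rather than taken on faith.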

Artificial intelligence systems, to be broadly commercially viable, must be uniformly trusted within the sectors where they see broad commercial use. At the same time, these systems must operate within defined boundaries of acceptable behavior (not lying, cheating, or stealing) and conform with all applicable governmental standards, regulations and laws in place within the marketplaces where they operate. Not some of the systems, not 'it depends', not 'trust us'. It is binary.

In essence, AI must adopt and enforce its own version of widely adopted, time-tested standards such as GAAP (the Generally Accepted Accounting Principles established to govern the common structures of financial accounting processes, and the standards for recording those transactions and values).

Such standards can also operate in environments of rapid change and evolution, such as technology. Consider IEEE SA (the Institute of Electrical and Electronics Engineers Standards Association), an operating unit within IEEE that develops global standards across a broad range of industries.

Fortunately, this is under way. We now have emerging artificial intelligence standards such as NIST's AI Risk Management Framework (AI RMF), which is accessible to industry. It is important that all professionals operating in this space, as well as everyone who uses AI-driven tools (such as those of you reading this), understand this body of work, keep its applicability and use at the forefront of their own work, and remain compliant with it.

The link to the NIST standard is here.

The goal of the AI RMF, directed by the National Artificial Intelligence Initiative Act of 2020, is to offer a resource to organizations designing, developing, deploying, or using AI systems, to help them manage the many risks of AI and promote trustworthy and responsible development and use of AI systems.

The Framework is intended to be voluntary, rights-preserving, non-sector specific, and use-case agnostic, providing flexibility to organizations of all sizes and in all sectors and throughout society to implement the approaches in the Framework. The AI RMF is intended to be practical, to adapt to the AI landscape as AI technologies continue to develop, and to be operationalized by organizations in varying degrees and capacities so society can benefit from AI while also being protected from its potential harms.

The more commercially valuable and far-reaching the sector (consumer packaged goods, for example), the more important self-adoption, self-governance and self-adherence become to an open, secure and stable marketplace that can be trusted, regardless of the speed of innovation or the sophistication of commercial AI advancements.

We've seen in the past the impact of industries whose self-governance and business behaviors didn't keep up with the public interest, with customer, employee, shareholder or individual interests, or even with the industry's own long-term welfare (think Sarbanes-Oxley and the recent social media Congressional hearings).

Given those hearings, senior executives are understandably nervous about adopting solutions they don't fully understand, because of the possible unintended consequences.

When things go sideways, this is what can happen. Let's avoid it with AI-enabled systems. We have that ability now, but it is important to constrain the genie before letting it fully out of the bottle.

DecisionArts is committed to this standard and to remaining compliant with NIST standards, so that our customers can universally trust and rely on the data feeds we provide as being truthful, accurate and in accordance with all applicable global laws covering privacy, consent validation and data sovereignty.

Specifically, here is some of what DecisionArts commits to deliver to our brand and retail clients as well as partners:

Global Privacy compliance to all regulatory body standards.

100% Consent Validation and associated audit certification and validation to global regulatory body standards.

Data Sovereignty audit certification and validation.

Full PII compliance and audit validation on all first and third party data.

Detailed performance scorecards, regularly made available to, or on request from, all DecisionArts clients, covering all AI-enabled DecisionArts graph and algorithm structures and outputs, with audits conducted by embedded Truyo AI compliance software and data engines (a sketch of what such a scorecard record might contain follows this list).

Audited AI program governance and performance compliance to NIST standards.
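
As a purely illustrative sketch, here is what a per-audit scorecard record might contain. The field names and structure are hypothetical examples of ours, not the Truyo schema or a DecisionArts specification:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical scorecard record; fields are illustrative, not an
# actual Truyo or DecisionArts schema.
@dataclass
class AIScorecard:
    client_id: str
    audit_date: date
    framework: str                  # e.g. "NIST AI RMF 1.0"
    consent_validation_rate: float  # fraction of records with validated consent
    pii_compliance_pass: bool
    data_sovereignty_pass: bool
    findings: list[str] = field(default_factory=list)

    def overall_pass(self) -> bool:
        """Binary, as argued above: every check must pass outright."""
        return (self.consent_validation_rate == 1.0
                and self.pii_compliance_pass
                and self.data_sovereignty_pass)

card = AIScorecard(
    client_id="retailer-001",
    audit_date=date(2024, 1, 15),
    framework="NIST AI RMF 1.0",
    consent_validation_rate=1.0,
    pii_compliance_pass=True,
    data_sovereignty_pass=True,
)
print(card.overall_pass())  # True
```

The design point is the binary overall_pass: as argued above, trust in these systems is not 'it depends' but pass or fail against the standard.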

AI solutions and data usage are moving and evolving faster than ever. Global brands and retailers, as well as consumers, have as much to lose as to gain, so ensuring protection from unintended consequences is critical, not just in the future but now.

Policies are only as good as the standards applied and the ability to demonstrate compliance with transparency.

We expect you to question how this is being done. We'd question it. So, here is how to do that if you're interested or want to learn more.

Simply drop us a line at info@decisionarts.io so we can set up a time to walk through your questions and share a few things with you.