
Artificial Intelligence and Transparency

July 23, 2018

For most of the articles we write, we don’t consult with the companies beforehand. This helps keep things objective and saves us a great deal of time not having to do the whole “back and forth” thing with some public relations firm that’s getting paid good money while we do free work for them. Speaking directly to founders is almost always illuminating, but anything else usually just results in a mouthful of platitudes from company management that we’re supposed to parrot to the masses and then say something cheerful at the end like “company X has a bright future ahead of it!” To endure that sort of pain, we usually ask that firms pitch us a few bucks to help feed our team of starving MBAs.

Most of the time we will email startups we write about after the article goes out, and they’re almost always grateful for the exposure. Everyone wins. As you might guess, we don’t always get it right. Sometimes we get an email from a company asking us to clarify something, and other times we get objections to the conclusions we reached. For example, we recently asked the question, “does company X really need to use artificial intelligence to do what they do?” That prompted their PR firm to let us know that indeed the “entire platform actually works on AI” and that without AI, they wouldn’t be so efficient and accurate. Fair enough, but are we just supposed to take that at face value? How transparent should we expect startups to be in order to convince the public that they actually use AI?

We decided to reply with the following questions:

  • What unique big data sets do you use to train your AI algorithms?
  • How much historical data do you have?
  • What AI software frameworks are you using?
  • What hardware do you use?

Simple questions, but at least they give us something to go on. Disclosing any of the above information should not be a problem for any firm. In fact, we might argue that the best way to attract would-be investors or potential acquirers is to talk all about how your framework uses big data sources to produce valuable insights that increase revenues and decrease costs. Simply sharing some facts from your “unique big data set” would be enough to get people to come sniffing around. Take, for example, a firm called Zendrive, which can tell you which school zones around the country are the least safe, based on the driving data it collects from all those “free mileage apps” everyone uses.

School Safety Information from Zendrive
Source: Zendrive
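
Zendrive hasn’t published exactly how those rankings are computed, but conceptually it’s just aggregating risky driving events by location. Here’s a back-of-the-envelope sketch of what a school-zone ranking might look like; every event type, count, and zone name below is invented for illustration:

```python
from collections import defaultdict

# Invented sample of phone-sensor driving events, each tagged with the
# school zone where it occurred (Zendrive's real pipeline is not public).
events = [
    {"school_zone": "PS 118", "type": "hard_brake"},
    {"school_zone": "PS 118", "type": "speeding"},
    {"school_zone": "PS 118", "type": "phone_use"},
    {"school_zone": "Lincoln Elementary", "type": "speeding"},
]
trips_per_zone = {"PS 118": 120, "Lincoln Elementary": 300}

# Rank zones by risky events per trip: more events per trip = less safe.
risky = defaultdict(int)
for event in events:
    risky[event["school_zone"]] += 1

ranking = sorted(trips_per_zone,
                 key=lambda zone: risky[zone] / trips_per_zone[zone],
                 reverse=True)
for zone in ranking:
    print(zone, round(risky[zone] / trips_per_zone[zone], 3))
```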

What we’re talking about here is a company’s willingness to be transparent about how it uses artificial intelligence in its business model, and what unique big data sets it plans to use for a competitive advantage. As we all know, the best AI algorithms will be the ones trained on quality big data sets that nobody else has. Of course, that’s not what everyone refers to when they talk about “artificial intelligence and transparency”. There’s another definition floating around, based on concerns that we think will fade away as the AI algorithms silently take everything over.

Artificial Intelligence and Transparency

Much of the talk today around “AI and transparency” refers to a company’s ability to explain how an algorithm reaches its conclusions. That isn’t always easy, and it’s not exactly a new problem. In the olden days, we called this “data mining”, and we didn’t call it “big data”, we called it a “data warehouse”. We used data mining tools to extrapolate valuable insights, like the old “put the beer near the diaper aisle” type of insights. These sorts of disparate relationships can be found without using AI; it’s just far more difficult. Data mining is pretty much the same thing we’re doing today, except that now AI can extract insights we could never come up with using traditional data mining techniques. In either case, we may not be able to explain how those conclusions were reached, and some people have a problem with that.
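
For the curious, the beer-and-diapers insight falls out of plain old market-basket analysis, which you can do in a few lines of code with no AI in sight. Here’s a minimal sketch using made-up transactions; support, confidence, and lift are the standard data mining scores for “people who buy A also buy B” rules:

```python
from itertools import combinations
from collections import Counter

# Toy transaction log; the baskets are made up for illustration.
transactions = [
    {"beer", "diapers", "chips"},
    {"beer", "diapers"},
    {"diapers", "wipes"},
    {"beer", "chips"},
    {"beer", "diapers", "wipes"},
]

n = len(transactions)
item_counts = Counter(item for basket in transactions for item in basket)
pair_counts = Counter(pair for basket in transactions
                      for pair in combinations(sorted(basket), 2))

# For each rule "a -> b": support = how often the pair occurs,
# confidence = P(b given a), and lift > 1 means the pair co-occurs
# more often than chance alone would suggest.
for (a, b), count in pair_counts.items():
    support = count / n
    confidence = count / item_counts[a]
    lift = confidence / (item_counts[b] / n)
    print(f"{a} -> {b}: support={support:.2f}, "
          f"confidence={confidence:.2f}, lift={lift:.2f}")
```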

The most common views on “AI and transparency” usually suggest some sort of oversight committee that makes sure the AI is playing fair and treating everyone the same. That’s because consumers (rightly or wrongly) get wound up when they find out that people in “zip code A” have different spending habits than people in “zip code B”, and that this information is being used to sell more widgets. Some committee out there will then make sure this doesn’t happen. Anyone who has spent time in a corporate role knows how naïve that notion is. When the information technology (IT) department releases a new predictive platform that’s used to make decisions, and it outperforms the old platform by a wide margin, you won’t hear a lot of internal folks demanding transparency. If it works, it’s put to use immediately. It doesn’t matter what the public thinks; companies will be using AI to impact their bottom lines. Let’s take Walmart as an example.

Walmart’s Unique Big Data Set

If anyone has a unique big data set, it’s Walmart. To store all that data, the company has created what is reportedly the world’s biggest private cloud, called the “Data Café”, which houses all the data generated by the world’s biggest retailer. All that internal data is then supplemented by “information from 200 sources including meteorological data, economic data, Nielsen data, telecom data, social media data, gas prices, and local events databases.” That’s according to a Forbes article on the topic, which goes on to say that the Data Café can now solve complex business problems across the entire organization in minutes instead of weeks.

SAP powers Walmart’s Data Café
Source: SAP Asia
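
Walmart hasn’t published the internals of the Data Café, but conceptually, blending internal sales numbers with one of those 200 external feeds is just a join. Here’s a toy pandas sketch; every column name and number is invented:

```python
import pandas as pd

# Invented point-of-sale data, aggregated by store and day.
sales = pd.DataFrame({
    "store_id": [101, 101, 102, 102],
    "date": pd.to_datetime(["2018-07-01", "2018-07-02"] * 2),
    "units_sold": [480, 610, 390, 350],
})

# Invented external feed, e.g. daily weather near each store.
weather = pd.DataFrame({
    "store_id": [101, 101, 102, 102],
    "date": pd.to_datetime(["2018-07-01", "2018-07-02"] * 2),
    "max_temp_f": [88, 95, 72, 70],
})

# Join the two feeds on store and day, then look for simple
# relationships, e.g. whether hot days move more units.
merged = sales.merge(weather, on=["store_id", "date"])
print(merged.groupby(merged["max_temp_f"] > 85)["units_sold"].mean())
```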

What’s even more valuable is all that upstream supply chain data Walmart is probably already capturing, and the company is in a great position to strong-arm suppliers into providing even more data going forward. If you’ve ever taken a supply chain class, you may recall something called the “bullwhip effect”, a problem inherent to supply chain forecasting systems in which small fluctuations in consumer demand get amplified into wild swings in orders as they travel up the chain. That’s exactly the sort of thing AI excels at: solving complex problems by analyzing massive amounts of data. As for “transparency”, that’s not going to happen unless companies are forced into it by some sort of third-party police force that looks to see how the sausage is made.
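
To see the bullwhip effect in action, here’s a toy simulation of a four-tier supply chain in which each tier forecasts demand by exponential smoothing and orders enough to keep a couple of periods of stock in the pipeline. The policy and parameters are our own invention, not anyone’s production system, but watch the order variance grow as you move up the chain:

```python
import random

random.seed(42)

def simulate_tier(incoming_orders, alpha=0.5, cover=2):
    """Each tier smooths its demand forecast and orders enough to keep
    `cover` periods of forecast demand on order. Overreacting to demand
    changes is what amplifies variance up the chain."""
    forecast = incoming_orders[0]
    pipeline = cover * forecast  # stock already on order
    orders = []
    for demand in incoming_orders:
        forecast = alpha * demand + (1 - alpha) * forecast
        target = cover * forecast
        order = max(0.0, demand + (target - pipeline))
        pipeline += order - demand
        orders.append(order)
    return orders

def variance(xs):
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

# Fairly steady consumer demand with a little noise.
consumer = [100 + random.gauss(0, 5) for _ in range(200)]

tiers = [consumer]
for _ in range(3):  # retailer -> distributor -> factory
    tiers.append(simulate_tier(tiers[-1]))

for name, series in zip(["consumer", "retailer", "distributor", "factory"],
                        tiers):
    print(f"{name:12s} order variance: {variance(series):8.1f}")
```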

Policing Artificial Intelligence

As soon as we start demanding that firms explain all the decisions their AI algorithms make, everything will go quiet in a hurry. Companies like Walmart will simply embed the AI algorithms so deep in their corporate technology stack that they’ll be barely perceptible, even as they continue to drive every decision the company makes. AI startups, on the other hand, will be more visible to the public, and consequently will be forced to spend their precious capital on hiring “Public Perception Officers” who will quickly put out any fires that start as a result of AI algorithms not playing nicely with others. Meanwhile, firms that don’t have this “social burden”, like those in China, will quickly accelerate ahead of those that do.

Then there’s the obvious transparency problem: what do you do when even your own developers can’t explain how they just reduced false positives by 95%, as has been done in the insurance industry? Do you think anyone in senior management cares about “transparency” when you tell them you’ve just created something exponentially more efficient? Not to mention that once you start using AI algorithms at scale, it will be nearly impossible to answer the “how did you get to that conclusion” question. If firms are asked to explain something, and the answer isn’t what the public wants to hear, then those same AI algorithms would probably be smart enough to come up with an alternative explanation that’s more palatable. In all likelihood, though, you’ll never even know how far and wide AI is being used, something that Pegasystems recently pointed out in a consumer survey:

Source: Pega

Conclusion

When everyone’s using AI, nobody’s using it. It will be the new electricity, and we’ll quickly forget all about it. Until then, startups need to be more transparent, for the simple reason that there are now thousands of AI startups out there, and we know for a fact that some of them are using humans as “chatbots” because the AI isn’t working quite right yet. We’re not talking about “human in the loop” here, but rather business models where the AI part is an afterthought. Of course, this works just fine for companies with a business model that generates loads of big data which can then be analyzed later, like all the Chinese bike-sharing companies are doing.

Why does this all matter? Because going forward, we’re going to continue calling out business models that look like they could be accomplished without using AI. Since most of the AI startups we write about are venture-capital-backed, we usually just assume that their AI technology has been vetted appropriately. Still, examples like Theranos give us little assurance that a VC’s blessing is sufficient. That’s when we move on to looking at the caliber of the management team, the pedigree of the founders, the company’s relationships in academia and the corporate world, and so on. If everything checks out and the company claims to be in “stealth mode”, then a lack of transparency may be appropriate. If not, it’s time to get your marketing team on the case.
