
You might think that news of "major AI breakthroughs" would do nothing but help machine learning's (ML) adoption. Even before the latest splashes - most notably OpenAI's ChatGPT and other generative AI tools - the rich narrative about an emerging, all-powerful AI was already a growing problem for applied ML. That's because for most ML projects, the buzzword "AI" goes too far. It overly inflates expectations and distracts from the precise way ML will improve business operations.

Most practical use cases of ML - designed to improve the efficiencies of existing business operations - innovate in fairly straightforward ways. Don't let the glare emanating from this glitzy technology obscure the simplicity of its fundamental duty: the purpose of ML is to issue actionable predictions, which is why it's sometimes also called predictive analytics. This means real value, so long as you eschew false hype that it is "highly accurate," like a digital crystal ball.

This capability translates into tangible value in an uncomplicated manner. The predictions drive millions of operational decisions. For example, by predicting which customers are most likely to cancel, a company can provide those customers incentives to stick around. And by predicting which credit card transactions are fraudulent, a card processor can disallow them. It's practical ML use cases like those that deliver the greatest impact on existing business operations, and the advanced data science methods that such projects apply boil down to ML and only ML.
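To make the prediction-to-decision pattern concrete, here is a minimal sketch (not from this article) in Python with scikit-learn: a model trained on historical customer data scores current customers by churn risk, and a simple business rule turns each score into an action. The synthetic data, the feature meanings, and the 0.5 threshold are all illustrative assumptions.

```python
# Minimal sketch of "predictions drive operational decisions."
# Illustrative assumptions throughout: synthetic data, made-up features,
# and an arbitrary 0.5 risk threshold.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Stand-in for historical data: two behavioral features per customer
# (think monthly usage and support tickets) plus whether they churned.
X_history = rng.normal(size=(1000, 2))
y_churned = (X_history[:, 0] - X_history[:, 1] + rng.normal(size=1000) > 1.0).astype(int)

model = GradientBoostingClassifier().fit(X_history, y_churned)

# Score current customers, then convert each prediction into a decision:
# offer a retention incentive only where churn risk is high.
X_current = rng.normal(size=(5, 2))
churn_risk = model.predict_proba(X_current)[:, 1]

for customer_id, risk in enumerate(churn_risk):
    action = "send retention incentive" if risk > 0.5 else "no action"
    print(f"customer {customer_id}: churn risk {risk:.2f} -> {action}")
```

The value comes from the last few lines - the operational decision acting on each prediction - not from the model in isolation, which is exactly the focus the "AI" label tends to blur.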

Here's the problem: Most people conceive of ML as "AI." This is a reasonable misunderstanding. But "AI" suffers from an unrelenting, incurable case of vagueness - it is a catch-all term of art that does not consistently refer to any particular method or value proposition.

Calling ML tools "AI" oversells what most ML business deployments actually do. In fact, you couldn't overpromise more than you do when you call something "AI." The moniker invokes the notion of artificial general intelligence (AGI), software capable of any intellectual task humans can do. This exacerbates a significant problem with ML projects: They often lack a keen focus on their value - exactly how ML will render business processes more effective. As a result, most ML projects fail to deliver value. In contrast, ML projects that keep their concrete operational objective front and center stand a good chance of achieving that objective.

"AI-powered" is tech's meaningless equivalent of "all natural." AI cannot get away from AGI for two reasons. First, the term "AI" is generally thrown around without clarifying whether we're talking about AGI or narrow AI, a term that essentially means practical, focused ML deployments. Despite the tremendous differences, the boundary between them blurs in common rhetoric and software sales materials.

Second, there's no satisfactory way to define AI besides AGI. Defining "AI" as something other than AGI has become a research challenge unto itself, albeit a quixotic one. If it doesn't mean AGI, it doesn't mean anything - other suggested definitions either fail to qualify as "intelligent" in the ambitious spirit implied by "AI" or fail to establish an objective goal. We face this conundrum whether trying to pinpoint 1) a definition for "AI," 2) the criteria by which a computer would qualify as "intelligent," or 3) a performance benchmark that would certify true AI.

The problem is with the word "intelligence" itself. When used to describe a machine, it's relentlessly nebulous. That's bad news if AI is meant to be a legitimate field: engineering can't pursue an imprecise goal. If you can't define it, you can't build it. To develop an apparatus, you must be able to measure how good it is - how well it performs and how close you are to the goal - so that you know you're making progress and so that you ultimately know when you've succeeded in developing it.

In a vain attempt to fend off this dilemma, the industry continually performs an awkward dance of AI definitions that I call the AI shuffle: AI means computers that do something smart (a circular definition).
