I was 12 years old when I started claiming “technology will end humanity!” To be honest, my dear secondary school friends did not seem to take me very seriously in those days. I cannot blame them, as I was unaware of concepts like the technological singularity, artificial general intelligence (AGI), or superintelligence back then.
My postulation was based rather on my feelings, coupled with observations gained from my very limited world. Today, I still hold a slightly modified version of my conjecture, namely that “technology has the potential to end humanity!”, but this time based more on research, data, and experience.
The aim of this post is not to discuss the research areas, methods, tools, algorithms, or widespread applications of artificial intelligence (AI). Instead, I would like to focus on its relevance to business, and its potential far-reaching implications in that domain. But let’s set the scene first.
The establishment of formal AI research dates back to the 1950s, and to some extent even to the 1930s-40s, back to Turing. Throughout its journey until today, there have been ups & downs in the funding & pace of AI research, and the down periods are called “AI winters”. The emergence of expert systems in the 1970s-80s, “Deep(er) Blue” beating Kasparov in the 1990s, and “IBM Watson” in the 2000s can all be considered milestones. (The list would of course be incomplete without the fictional artificially intelligent car “Knight Industries Two Thousand (KITT)”, which appeared in the 1980s TV series “Knight Rider”.)
These milestones were then followed by advanced pattern, image, face & speech recognition applications (e.g. city surveillance systems), and by today’s personal digital assistants in our pockets or at home.
(In the meantime, my first personal exposure to AI was also in the late 90s - early 2000s, during my high school years. While some of my classmates were trying to code AI applications, I was busy trying not only to learn but also to build some of the math behind it. Good old days… 😊)
In the last five years or so, and more specifically in the last two to three years, a much more rapid development of AI applications has been taking place (e.g. autonomous vehicles), together with intensified discussion & a hype around it.
The problem with the use of AI in business, though, is again one of interpretation. Nowadays, artificial intelligence seems to have replaced the phrase business intelligence (BI) – intelligence being at the intersection.
However, what is marketed as AI – or as including or being powered by AI – mostly refers to what BI applications have already been offering. You might even have encountered standard functionality in software applications, or basic computational capabilities, being promoted as AI.
The simplest description of artificial or machine intelligence is mimicking / simulating natural intelligence so as to perform tasks the way humans do. In this regard, one could even call a simple calculator AI – despite the fact that we have never called it that, right?
A more formal definition of AI, on the other hand, involves any agent / device / machine / robot / software / system that can interpret & learn from its environment / external inputs / data, and use what it learns to take actions that maximize utility or the chance of achieving its goals. (Utility maximization, goal seeking – sound familiar from other fields?)
Those goals may be simple or complex, ranging from correctly filtering spam messages & keeping your inbox clean, to forecasting the next breakdown of machinery, diagnosing an illness, segmenting customers, performing market basket analysis, classifying data of any type, fitting a regression line, and so on.
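To make the “learn from data, then act to maximize a goal” idea concrete, here is a minimal sketch of the last goal in that list – fitting a regression line by ordinary least squares. The function name and the toy numbers are mine, purely for illustration:

```python
# Toy "learning from data": fit a line y = slope*x + intercept that
# minimizes the squared error over the observations (ordinary least squares).
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Closed-form solution: slope = covariance(x, y) / variance(x)
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Made-up observations that roughly follow y = 2x
slope, intercept = fit_line([1, 2, 3, 4], [2.1, 3.9, 6.0, 8.1])
```

The “utility” being maximized here is simply the goodness of fit to the data – a tiny instance of the formal definition above.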
Apparently, learning is a key element within the scope of AI, giving rise to the sub-field of machine learning (ML) & its algorithms. Imagine a driverless car that is somehow not able to learn. Are you going to use it on the same road for the exact same trip every day, as if you were living in “Groundhog Day” or a “Truman Show” setting?
While learning is key in characterizing state-of-the-art AI research & applications, it is only one of the many traits of the human mind & not sufficient to achieve artificial general intelligence (i.e. possessing all skills, simulating all features of natural intelligence & being able to solve any problem the human mind can solve) or superintelligence (i.e. surpassing human intelligence).
Going back to the discussion above on the scope of AI, let’s not get trapped by the famous “AI effect” & say “AI is whatever has not been done yet”, degrading the value of what has become standard & been in our use long enough. However, given the various ongoing research projects investigating artificial general intelligence & superintelligence, I am not sure how fair it is to call mainstream business intelligence applications & decision-support systems – like a planning software or a marketing suggestion system – AI.
In a similar manner, so-called data science / (advanced) analytics also seems to be advertised as artificial intelligence now. In fact, such re-branding efforts are understandable, given the lead time required for technology adoption & widespread awareness. In the case of AI in particular, using the word intelligence directly must also be much more appealing to marketers than more technical & indirect terms. (Anyways, there is a self-contained discussion about those older buzzwords in an earlier post here, but to put it short: time flies, things change even faster! 😊)
Meanwhile, the above formal definition of AI also gives a clue about its link to optimization. In finding the actions that will maximize utility or the chance of achieving goals, subject to the inputs or learning, AI algorithms implicitly follow an optimization – or, far more generally, a search – procedure, just like we do when making our decisions.
(We end up satisficing instead of optimizing, though, given our bounded rationality; i.e. our inability to search all possible solution alternatives due to the intractability of the problem & our limited cognitive abilities, limited time, and lack of complete information – but never mind, optimization & satisficing essentially lead to the same outcome, practically, and yes, even theoretically! 😊 RIP Simon…)
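The difference between the two search strategies fits in a few lines. A toy sketch, with entirely made-up “utilities” for the alternatives (the threshold and problem size are arbitrary choices of mine):

```python
import random

random.seed(0)
# Made-up utilities for 10,000 decision alternatives, each in [0, 1)
options = [random.random() for _ in range(10_000)]

# Optimizing: examine every single alternative (exhaustive search).
best = max(options)

# Satisficing (Simon): scan the alternatives in order and stop at the
# first one that is "good enough" relative to an aspiration level.
threshold = 0.95
satisficed = next(x for x in options if x >= threshold)
```

The satisficer inspects only a fraction of the alternatives, yet – with a sensible aspiration level – lands close to the optimizer’s answer, which is the practical point of the aside above.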
If you wish, you can also call a planning software AI, for example – suppose at its core lie cutting-edge mixed integer programming (MIP) models & algorithms to solve those optimization models – because it can perform a tough computational task on your behalf, simulate your intelligence, and tell you the optimal solution if you are lucky enough.
On the other hand, such a software probably lacks the learning bit of the more formal definition of AI above – unless, of course, it is coupled with a learning algorithm, maybe in the data pre-processing phase of its embedded engine. (Btw, learning – or machine learning – can be incorporated into optimization & heuristic algorithms as well, in order to increase their performance & achieve better results.)
Indeed, the lack of learning is not a deficiency of the optimization model; in contrast, applications advertised as AI in a business context are still playing around the descriptive & predictive fronts, and have not yet become superintelligent & learned to build & run prescriptive models in real time.
Talking about superintelligence, let me come back to my initial postulation & conclude by discussing two primary implications of AI. Is there going to be a point in time when technological development becomes uncontrollable & irreversible, as described by the technological singularity hypothesis? Am I right – can technology end humanity? Will machines take over the world?
In trying to maximize our utility, we may need to make decisions & take actions with imperfect data, given the limits discussed above. On the other hand, while the structural difference between polynomial time & exponential time – or even combinatorial explosion – remains, what we consider computationally expensive, like brute-force / exhaustive enumeration, might become cheap once we automate things, find ways to increase computational power & make computations fast enough (e.g. quantum computing).
This means it might become feasible for machines to learn anything & tackle any hard real-life problem (e.g. cracking the most advanced cryptography algorithms in use). So even though what bounded rationality creates will still be bounded, the bound on AI will be relaxed over time & intelligence will eventually diverge.
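Here is what brute-force / exhaustive enumeration looks like on a toy “planning” problem – picking the best subset of projects under a budget. All the numbers are invented for illustration; the point is that the loop visits all 2**n subsets, which is fine for n = 4 but hopeless for, say, n = 80 on classical hardware:

```python
from itertools import combinations

# Made-up project values & costs, and a made-up budget
values = [6, 5, 4, 3]
costs  = [4, 3, 2, 1]
budget = 6

best_value, best_subset = 0, ()
n = len(values)
# Exhaustive enumeration: every subset of the n projects (2**n candidates)
for r in range(n + 1):
    for subset in combinations(range(n), r):
        cost = sum(costs[i] for i in subset)
        if cost <= budget:
            value = sum(values[i] for i in subset)
            if value > best_value:
                best_value, best_subset = value, subset
```

Note that even a millionfold speed-up only shifts the feasible problem size by about 20 items (2**20 ≈ 10**6); the exponential structure itself stays. The argument above is that automation & cheaper, faster computation keep pushing that practical frontier outwards anyway.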
A further argument is that not all traits of the human mind, like consciousness & sentience, can be imitated by machines. What I would say is: perhaps we are just not there yet, or maybe we just have not devised the algorithm yet, and we do not possess all the information needed to conclude that the above argument holds. So for me, saying that there is a strict limit on machines’ capability to learn, or on artificial intelligence, is like saying P (polynomial time) is not equal to NP (nondeterministic polynomial time). Impossible is just nothing!
Therefore, I certainly agree with Gates, Musk, and the late Hawking that AI is a sound threat to humanity – in fact just one of a couple of important threats, like climate change, other than ourselves, obviously. Nevertheless, for this threat to materialize, we do not even need, for instance, an artificial general intelligence that reprograms itself, enters recursive self-improvement & becomes superintelligent. In other words, to be considered a threat, AI does not need to be more intelligent than, or as intelligent as, humans.
A much closer & more practical problem we have is that AI applications can be replicated (e.g. think of a computer virus). Look at all the catastrophes that a human population of currently around eight billion, with bounded rationality, has been capable of creating so far. Now compare that with the somewhat-intelligence of a theoretically infinite number of AI agents.
So AI is a threat to humanity even at this stage, at least as much as we are to ourselves. If you still need more evidence, just remind yourself of the arms races in history in the literal sense – among many other examples – and then look around to see their upgraded version (v3), this time developing in the form of an autonomous weapons / AI arms race.
Finally, suppose that machines do us a favor & decide not to take over the world, so that we are still around, at least for a while. But until that moment, what if they just take over more of our jobs? Well, if you have been worried about this for centuries (especially over the last two or three), then you can continue to worry – but no more than you did before, please. (If you were not worried, that’s even better; it’s called peace of mind! 😊)
Remember, machines, robots, and automation have already taken over many jobs, without even having high levels of intelligence (see another earlier post here on the digital age, Industry 4.0, and how AI fits in as an integral aspect). Yes, the number & complexity of the skills AI can simulate will increase over time, and this will lead to a shift in the modes, processes, patterns & statistics of human employment.
However, this does not necessarily mean that AI will take over every single task we perform & job we do. Even if we create AI that can imitate other skills, and then also demand AI applications possessing those extra skills, the decisions & actions taken by AI will for a long time continue to include errors (e.g. fatal accidents of self-driving cars), similar to ours. (Do you, for instance, go crazy trying to be served by the increasingly popular artificial call-center agents & desperately press 0 to get through to a human?)
The intelligence of our minds and intelligence that is artificial complement each other, and we need them both (e.g. think of the various drive-assist systems in cars, like brake or lane assist). That’s why human-machine interaction is key & in fact enables us to achieve the best outcomes in most cases. This will continue to hold, I believe, as long as we are on the planet.