Judging by the news and recent anti-AI efforts, an innocent bystander might think AI is taking over the world. Artificial intelligence is often presented as something dangerous that we won’t be able to control, almost as if we were building the equivalent of a nuclear bomb that could go off on its own at any moment. While there are some legitimate reasons for concern, ‘AI’ is for the most part a buzzword, one that marketing teams lean on because it seems to help products sell faster these days.
Nearly all AI apps on the market today rely on machine learning algorithms. In essence, these are traditional algorithms with the ability to shape their own behavior based on provided examples. Cassie Kozyrkov has a helpful kitchen analogy for the four steps involved in any machine learning process: gathering data, feeding it into an algorithm, validating the model, and using it to make predictions. In the kitchen, you likewise have ingredients, appliances, and recipes. There is one major difference, though: while an oven can only cook a dish, a machine learning algorithm can learn the recipe itself, not just execute it. And while that is an impressive engineering feat, experts will rarely call something an ‘AI app’. They would rather use ‘machine learning’ or other, more precise terminology. Artificial intelligence, as a term, is overused, overstated, vague, and simply a buzzword. Don’t fall for it.
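To make the four steps concrete, here is a minimal sketch in plain Python: a toy least-squares line fit. The dataset (hours studied vs. exam score) and all names are invented for illustration; real systems use far richer models, but the gather–train–validate–predict loop is the same.

```python
# Step 1: gather data -- toy (hours studied, exam score) pairs,
# split into a training set and a held-out validation set
data = [(1, 52), (2, 55), (3, 61), (4, 64), (5, 70), (6, 73)]
train, holdout = data[:4], data[4:]

# Step 2: feed it into an algorithm -- least-squares fit of
# score = a * hours + b on the training set
n = len(train)
mean_x = sum(x for x, _ in train) / n
mean_y = sum(y for _, y in train) / n
a = (sum((x - mean_x) * (y - mean_y) for x, y in train)
     / sum((x - mean_x) ** 2 for x, _ in train))
b = mean_y - a * mean_x

def predict(hours):
    return a * hours + b

# Step 3: validate the model on data it has never seen
error = sum(abs(predict(x) - y) for x, y in holdout) / len(holdout)
print(f"model: score = {a:.2f}*hours + {b:.2f}, holdout error = {error:.1f}")

# Step 4: use it to make predictions on new inputs
print(f"predicted score for 7 hours: {predict(7):.1f}")
```

The oven analogy maps directly: the fitting code is the appliance, the learned coefficients `a` and `b` are the recipe the algorithm worked out on its own, and the holdout check is the taste test before serving.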
Where there is movement, there is friction
The dust is not settling around ChatGPT, and for good reason. ChatGPT is powered by a large language model (LLM), which has revolutionized how we interact with chatbots. If it seems like ChatGPT disrupted our reality overnight, it’s because it kinda did. Within days of its introduction, users figured out it could write code, compose poetry, and grade essays. A few months and several iterations later, ChatGPT can “see” images and even pass the bar exam. Its success had other companies scrambling to join the fray, like Google, which hastily released Bard, a chat interface similar to OpenAI’s.
Due to the intense competition and rapid growth of recent months, a number of leading researchers and industry figures have expressed concern and even called for a halt to further development in a doomsday-sounding open letter. The move didn’t seem to slow the major contenders down, but it has brought some more transparency about the research and where we’re headed.
Not everyone is keen to jump on the bandwagon
Most academic institutions, for instance, frown on the use of ChatGPT. They cite it as a threat to traditional learning, asserting that it gives some students an unfair advantage and could impair the understanding of those who lean too heavily on it. This bad rep isn’t confined to academia, either. Within the content industry, AI has become synonymous with low-quality, sterile articles, despite the hundreds of AI-branded writing tools.
Powerful as they might be, machine learning models still cannot replicate creativity. ‘AI image generators’ can paint like Picasso only because they have previously studied his work; the same goes for writing styles and code references. As a result, there has been a lot of debate around originality and copyright violations in the realm of generated music, text, and art.
The gaming industry is another good example. While some have tried to create entire games using LLMs, tools like ChatGPT tend to produce generic, uninspiring content that fails to capture the depth and engagement of popular games. And while you could certainly generate something fun and simple like roulette, creating completely unique and artsy worlds is not yet possible.
Jobs in the legal and healthcare industries are reportedly among the least affected by the current wave of AI. While some activities in these fields could be automated, humans remain indispensable for their intuition, initiative, and ability to handle complications.
Admittedly, jobs are indeed being lost to machines, mostly in areas where automated processes can easily replace manual effort. However, some believe that companies seeking to adopt machine learning models must also hire people capable of efficiently building, maintaining, and improving them: writers who use tools to plan or publish content at scale, or programmers who can deploy and debug code much faster with generative and analytical models. Most companies are still playing it safe for now.
What The Future Holds For AI
Machine learning technology may have sprung up overnight, but its adoption will be much slower, and the development of artificial intelligence, longer still. Most people forget, and some are unaware, that ChatGPT is only a research model and far from a finished product, as is evident in its occasional bugs, hallucinations, and biases. Although today’s ‘AI tools’ may be capable of much more than we expected, their long-term output and reliability remain largely untested.
As the race heats up, companies like Google and OpenAI hope such models will pave the way to true artificial intelligence. Whether this is achievable in the foreseeable future remains speculative at best. But today, ‘AI’ technology still has a long way to go before it can accurately be labelled as such.