AI (and now GenAI) are rightfully being talked about as technologies bringing about the most dramatic shift in our lives, and indeed for humankind as a whole. It appears that no sphere of human activity will remain untouched by the advent of AI, with huge benefits on the one hand, but also huge challenges on the other.
This series of articles is our attempt to articulate in layperson terms what this technology is all about, its impact on us in India and on our B2B thesis at Pentathlon, and hence how we plan to navigate this tectonic shift. This first article is about the technology, but true to our thesis at Pentathlon, it focuses on the disruption the technology causes rather than the technology itself. (For an excellent primer on the technology itself, I highly recommend “Decoding GPT” by Devesh Rajadhyax.) Since a lot has been written about what it can do, I am focusing here on its limitations, or what it cannot do.
Rules v/s patterns:
The first wave of automation was to make computers understand and implement the rules of human activity, whether in science, engineering, business or even social life. This was done through computer systems and programming languages. We found that computers were much better than humans at producing correct outputs, and far more reliably. Of course, life is complex, and we had to make a lot of assumptions to simplify it for computer systems. And when those assumptions were wrong, the systems produced wrong outputs.
At some point, managing all these assumptions, with all their ifs and buts, became so complex that it made more sense to let the system derive the rules from the patterns in the available data, rather than have humans identify and code those rules. In other words, Artificial Intelligence (AI)! Of course, it is easier said than done, as it requires huge amounts of data and huge computing resources. So using rules still makes sense when sufficient data or computing resources are not available.
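To make the rules-versus-patterns distinction concrete, here is a minimal, purely illustrative sketch in Python. A hand-coded loan-approval rule sits next to a model that learns a similar rule from a handful of labelled examples; the data, feature names and threshold are all invented, and scikit-learn’s decision tree simply stands in for “learning patterns from data”.

```python
# Illustrative only: hand-coded rules vs. patterns learned from data.
from sklearn.tree import DecisionTreeClassifier

# Approach 1: a human writes the rule (and its assumptions) explicitly.
def approve_loan_by_rule(income, existing_debt):
    # The 3x threshold is an assumption hand-picked by a person.
    return income > 3 * existing_debt

# Approach 2: the system infers the rule from past examples (toy data).
# Each row is [income, existing_debt]; label 1 = approved, 0 = rejected.
X = [[90, 10], [60, 30], [30, 25], [120, 20], [40, 35], [80, 15]]
y = [1, 0, 0, 1, 0, 1]
model = DecisionTreeClassifier(max_depth=2).fit(X, y)

applicant_income, applicant_debt = 70, 20
print(approve_loan_by_rule(applicant_income, applicant_debt))   # rule-based decision
print(model.predict([[applicant_income, applicant_debt]])[0])   # pattern-based decision
```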
Even with the data and computing resources, AI has its own challenges. AI systems tend to be black boxes, where nobody may fully understand how outcomes are arrived at. They can also be biased, as the patterns come from past data, which may not match our current, constantly changing social norms. While AI works on past data, newer techniques such as Retrieval Augmented Generation (RAG) allow real-time data to shape AI outcomes. On the other hand, based on the patterns it finds, AI can accelerate the formation of bubbles or worsen social divides, for example.
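As an aside, the RAG idea mentioned above can be sketched in a few lines: retrieve the documents most relevant to a question and hand them to the model together with the question, so the answer is shaped by current data rather than only by past training data. Everything below is a toy stand-in; toy_embed and toy_generate are placeholders for a real embedding model and a real LLM call, and the documents are invented.

```python
# Toy, self-contained sketch of Retrieval Augmented Generation (RAG).
from collections import Counter

def toy_embed(text: str) -> Counter:
    # Stand-in for a real embedding model: a simple bag-of-words count.
    return Counter(text.lower().split())

def similarity(a: Counter, b: Counter) -> int:
    # Overlap of shared words; a real system would compare embedding vectors.
    return sum(min(a[word], b[word]) for word in a)

def toy_generate(prompt: str) -> str:
    # Stand-in for a real LLM call; here we just return the prompt we would send.
    return prompt

def answer_with_rag(question: str, documents: list, top_k: int = 2) -> str:
    q = toy_embed(question)
    ranked = sorted(documents, key=lambda d: similarity(toy_embed(d), q), reverse=True)
    context = "\n".join(ranked[:top_k])
    # Fresh, retrieved context is injected into the prompt, so the answer is
    # shaped by current data rather than only by what the model was trained on.
    return toy_generate(
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer using only the context."
    )

docs = [
    "Policy updated today: refunds are processed within 3 days.",
    "Old policy: refunds took 10 days.",
    "Office hours are 9 to 5.",
]
print(answer_with_rag("How long do refunds take?", docs))
```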
Intelligence v/s empathy:
What GenAI brought to AI was a great user interface (UI). With GenAI systems such as ChatGPT, we can actually interact with the AI system through online chat or even speech. “Look and feel” takes a big leap forward. The biggest fascination with GenAI today is how human-like it can be in its responses, even with empathy! How is that done? The underlying technology is Large Language Models (LLMs), which, believe it or not, fundamentally just put together words, pictures and other kinds of data based on the patterns they have seen in the humongous data they were trained on. It is just that the data is so huge that it covers practically all situations. But all of this happens without really understanding anything; it is merely mimicking intelligence! So it can give completely wrong answers, but with full empathy. While the model may be trained well enough, with sufficient data, to get grammar, syntax and even the emotion in the response right, the core answer itself depends on the training data related to that question, which may not be sufficient or correct. Also, unlike human intelligence, it will not remember its earlier answers: a new session does not carry the context of a previous session that is no longer alive. Increasing the context window, and thus making it remember more, is possible but can increase cost significantly.
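A deliberately tiny illustration of “putting words together based on patterns seen before”: the snippet below counts which word tends to follow which in a small made-up corpus, and then continues a prompt by repeatedly picking the most likely next word. Real LLMs work on sub-word tokens with billions of learned parameters, but the core idea of continuing text from learned statistics, without any understanding, is the same.

```python
# Toy illustration of next-word prediction from patterns in text.
from collections import defaultdict, Counter

corpus = (
    "the customer asked for a refund and the agent processed the refund quickly "
    "the customer thanked the agent and the agent closed the ticket"
).split()

# Learn the "pattern": which word most often follows each word.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def continue_text(prompt: str, n_words: int = 6) -> str:
    words = prompt.split()
    for _ in range(n_words):
        last = words[-1]
        if last not in follows:
            break
        # Pick the statistically most likely next word; no understanding involved.
        words.append(follows[last].most_common(1)[0][0])
    return " ".join(words)

print(continue_text("the customer"))
```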
Creativity:
The one place where it was felt AI would not be able to beat humans was the arts, the epitome of creativity. But GenAI has proved that wrong: it can create poetry and paintings indistinguishable from human art. The problem arises when GenAI gets “creative” in, say, business or engineering, where there are specific measures of success. We can say GenAI is being “creative” when it applies its learning from one domain to a totally different one. Then it can go wrong with respect to those specific measures of success. (These errors are called “hallucinations”. To reduce them, the AI can be instructed to be less creative and to rely only on facts.) In the arts, or even in forward-looking activities like business strategy, it may not be possible or relevant to measure success precisely, so one could say it may not matter in those cases. But in many cases we expect technology to work without fail. Where a specific measure of success exists, GenAI cannot be relied upon to achieve it, because of its probabilistic nature and the limits of the data available to it.
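One concrete knob behind “telling the AI not to be creative” is the sampling temperature. At low temperature the model almost always picks its single most probable next word; at high temperature it spreads its choices across less likely (more “creative”, more error-prone) options. The sketch below applies temperature to a made-up next-word distribution; the words and probabilities are purely illustrative.

```python
# Illustration of the "temperature" setting that controls how adventurous a
# generative model is when picking its next word. Numbers are made up.
import math

def apply_temperature(probs: dict, temperature: float) -> dict:
    # Low temperature sharpens the distribution (stick to the most likely word);
    # high temperature flattens it (more creative, more likely to stray from facts).
    scaled = {word: math.log(p) / temperature for word, p in probs.items()}
    total = sum(math.exp(s) for s in scaled.values())
    return {word: math.exp(s) / total for word, s in scaled.items()}

# Made-up model output for the next word after "The capital of France is ...".
next_word_probs = {"Paris": 0.70, "Lyon": 0.20, "Berlin": 0.10}

print(apply_temperature(next_word_probs, temperature=0.2))  # almost certainly "Paris"
print(apply_temperature(next_word_probs, temperature=2.0))  # noticeably more spread out
```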
Needless to add, we are mostly talking about mental tasks that AI and GenAI can take over. Any physical action that needs to be performed is very much outside the scope of this technology.
In conclusion:
The hope is that there will eventually be sufficient data and computing resources for AI to overcome all these problems and mimic intelligence so well that one cannot tell the difference between human and artificial intelligence. Firstly, we do not know how much data will be sufficient; it could be impossibly large or costly! Autonomous cars are still not prevalent: they are trained on a lot of traffic data, but there may not be sufficient accident data, for example. In the case of GenAI, the challenge is data across thousands of languages. And as LLMs grow larger, their capability increases but their output becomes less explainable!
It is possible that some new technological breakthrough will bring down the language data requirement dramatically, and that technology may replace GenAI. But until then we will have to work within the constraints of insufficient data and computing resources. We may have to stick to mental rather than physical tasks. For critical activities that require specific success metrics or high-accuracy output, it will be more of an assistive technology. A judicious balance between rules (other technologies) and patterns in data (AI/GenAI) will be needed to make solutions cost-efficient and practical; indeed, autonomous driving does use a lot of rule-based computing. Human intervention will be required to validate outputs, remove biases and so on.
All of these challenges will be especially relevant for us in India, and I will discuss them in the next article of this series.