Approximations and artificial intelligence
When we talk about artificial intelligence, the term is often used loosely. Automation, robotization and artificial intelligence are distinct technological tools, and in the world of software quality assurance these distinctions are crucial to knowing how to test those tools properly. To help us better understand the technological reality of AI, our Consultant Manager, Patrick-Michel Dagenais, shares his expertise and his opinion on the subject.
Levels of AI
To begin, let’s take an example everyone knows: Pac-Man. In this internationally known game, ghosts try to catch you. The ghosts do not move randomly on the map, and each of them has a unique movement pattern. Their behavior is generated by an AI: without any human intervention, an automated routine manages the ghosts’ decisions. This is the first level of AI: performing a simple action (moving the ghost) automatically. It is a basic level, but it lets us identify one of the foundations of AI: autonomous action.
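This first level can be sketched in a few lines. The following is a minimal illustration, not actual Pac-Man code: the ghost simply follows a fixed patrol loop, acting on its own but with no awareness of its surroundings (the `Ghost` class and `PATROL_PATH` cells are hypothetical).

```python
# "Level 1" AI: a scripted, autonomous action with no context.
PATROL_PATH = [(0, 0), (0, 1), (1, 1), (1, 0)]  # illustrative maze cells

class Ghost:
    def __init__(self):
        self.step = 0

    def move(self):
        """Advance to the next cell on the patrol loop, automatically."""
        position = PATROL_PATH[self.step % len(PATROL_PATH)]
        self.step += 1
        return position

ghost = Ghost()
positions = [ghost.move() for _ in range(5)]
print(positions)  # the loop wraps back to the start after four moves
```

No input from the player ever changes the ghost’s behavior; the action is autonomous but entirely scripted.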
At the second level, decisions are made in reaction to anticipated factors. Every situation is foreseen in the AI’s code; faced with a genuinely new situation, the AI could not invent a solution. At this level, the AI is still automating its actions and decisions, but it has the capacity to analyze its environment and react accordingly. To illustrate the point, consider an AI playing chess: each move played influences the AI’s next decision. We find here a second foundation of AI: contextual analysis.
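Staying with the Pac-Man theme, a second-level agent could be sketched as follows (a hypothetical `chase_step` function, not taken from any real game): the ghost now observes the player’s position and reacts to it, but every case it can face is anticipated in the code.

```python
# "Level 2" AI: a reflex keyed on the observed environment.
# Every situation (gaps left/right/up/down, or none) is foreseen in the code.

def chase_step(ghost_pos, player_pos):
    """Move one cell toward the player, reacting to the current context."""
    gx, gy = ghost_pos
    px, py = player_pos
    if gx != px:
        gx += 1 if px > gx else -1   # close the horizontal gap first
    elif gy != py:
        gy += 1 if py > gy else -1   # then the vertical gap
    return (gx, gy)

print(chase_step((0, 0), (3, 2)))  # → (1, 0): the move depends on the player
```

Unlike the patrol loop, the same call produces different moves in different contexts; yet the agent still cannot handle a situation its author did not enumerate.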
At the third level, we are no longer really in the realm of automation and reflex action; we are much closer to what most people have in mind when they talk about AI. Here, the code does not enumerate every possible situation with an expected reaction. On the contrary, the AI must be able to act on its own in any situation it might encounter, so the code must give it the ability to make decisions alone. The most recent example of this level of AI is ChatGPT. We find here the last foundation of AI: decisional creativity.
Until now, we have looked at algorithms capable of making decisions within a limited range of possibilities. One can even go so far as to say that if such a system produces an outcome that is not on a predetermined list of expected responses, that is in itself a failure. The reason is simple: we know all the possible questions, and only certain answers are acceptable.
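Testing such a closed system is straightforward, because the oracle is the predetermined list itself. The sketch below uses a hypothetical FAQ bot (the `FAQ` table and `faq_bot` function are illustrative) to show the principle: every input maps to exactly one acceptable output, and anything outside the list is rejected.

```python
# Testing a "closed world" system: all questions and acceptable answers
# are known in advance, so any output off the expected list is a failure.

FAQ = {
    "store hours?": "9am-5pm",
    "return policy?": "30 days",
}

def faq_bot(question):
    """Deterministic lookup: answers only come from the predefined table."""
    return FAQ.get(question, "unknown")

# The oracle: each known input has exactly one acceptable output.
for question, expected in FAQ.items():
    assert faq_bot(question) == expected

assert faq_bot("meaning of life?") == "unknown"  # out-of-list input is refused
print("all deterministic checks passed")
```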
This is not the case for so-called progressive artificial intelligence: the algorithm guesses the meaning of the questions it receives and produces approximate answers. The interlocutor’s question can be wrong, the algorithm’s interpretation can be wrong, and the results can be wrong for many possible reasons. We are talking about a system with no predefined constraints; moreover, the system itself is fluid, since every answer it provides is analyzed in turn to strengthen the artificial intelligence. The system we tested yesterday is not necessarily the one we will test today.
How do we test an artificial intelligence? With another artificial intelligence? And how do we then test that other intelligence to make sure it is doing its job?
The very nature of adaptive artificial intelligence demands that we step out of the arena of predetermination. That is the next hurdle in the QA world: how to test what does not seem testable.
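One pragmatic way out of the arena of predetermination is to stop asserting exact outputs and instead assert *properties* that any acceptable answer must satisfy. The sketch below is only an illustration of that idea: `model` is a hypothetical, deliberately non-deterministic stand-in for an adaptive system, and the checks are examples of properties a tester might choose.

```python
import random

def model(question):
    """Hypothetical adaptive system: the wording varies between calls."""
    templates = ["Roughly {n} items.", "About {n} items, I think."]
    return random.choice(templates).format(n=len(question.split()))

def check_properties(question):
    """Assert properties of the answer rather than the exact answer."""
    answer = model(question)
    assert isinstance(answer, str) and answer.strip(), "answer must be non-empty"
    assert len(answer) < 200, "answer must stay within a length budget"
    assert any(ch.isdigit() for ch in answer), "answer must contain the count"
    return answer

for q in ["how many words here", "count these words please now"]:
    check_properties(q)
print("property checks passed")
```

The exact string can change on every run, yet the test suite stays meaningful, because it verifies what must hold rather than what must be said.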
Being replaced by AI?
Today, the clumsiness and approximations surrounding artificial intelligences are often a source of panic and anxiety. Will AIs replace human beings? Yes and no: we are currently very far from AIs that reproduce human behavior accurately. However, the democratization of AI, notably with ChatGPT, is leading us more and more toward a world where AIs play an accompanying role for human beings. It remains to be seen what use we will make of them…