
My take on the matter of AI

Just a notice to the reader

I don’t consider myself a guru in the AI field, but I have put together some of my thoughts on the matter, at the risk of some nonsense and missteps. That said, I welcome feedback and corrections via e-mail, so that I can improve this piece. The last thing I want is to spread misinformation.

I will start by making a few clarifications

We have been studying AI and finding applications for it for decades; what really makes a difference today is that manufacturers are keen to present a whole new era, one that will supposedly shake the Western lifestyle to its core. I’m not so sure about this prediction, since AI is limited when it comes to self-reference and self-containment; when that limit is reached, what is left is a reality that, as of today and going by the AI literature, is still considered “foolishly natural”. That said, I’m not attributing any magic-wand property to AI: it cannot simply turn something non-intelligent and/or non-artificial into something intelligent and/or artificial. There are many obstacles to that, some of which I’ll attempt to explain.

Misinterpretation

“Good morning, I’m here for my pension.” This is something most pensioners would say to a bank clerk. What follows is a perfectly correct answer: “Good morning. You are already a pension beneficiary. For further clarification, please contact the social security authority.” It will, however, probably be enough to trigger a far less pleasant reply from the pensioner. We often deal with convoluted messages that become a source of confusion; the problems vary, be they colloquialisms and obscure figures of speech, or a wrong interpretation on the listener’s side. When I was a kid I was sure that “surface” and “perimeter” meant much the same thing, and to this day I still say “chair” when I mean “sofa” and vice versa – these are the examples I feel most comfortable sharing. With that in mind, you can see how hard it is even for a human to avoid misinterpreting a conversation; how would an AI succeed where we fail? Can it even be proven? One would presumably have to try every possible combination of interactions between human and AI to prove it. In short, I’m not sure how an artificial system, however intelligent, could manage misinterpretation better than a human being – though under some circumstances it might.

What is typical of us humans is that we’re all different. We can, however, make sure that every AI-based system conforms to a standard, so that they all share common properties, such as the ability to learn from their mistakes (which some humans lack) or to stop in case of ambiguity, so that further investigation can be made (where a human being would probably just act on a whim). And although in most cases such homologation could give AI an edge over human intellect, right now it cannot be achieved, because of a problem that affects many systems, not just those relying on AI. We’ll come back to that later.
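To make the “stop in case of ambiguity” property concrete, here is a minimal sketch in Python of a system that refuses to act when its best interpretation is not clearly ahead of the alternatives. The intents, scores and margin are all invented for the example:

# Minimal sketch: an assistant that abstains instead of guessing.
# The interpret() scores are hypothetical; a real system would get
# them from an intent classifier.

AMBIGUITY_MARGIN = 0.2  # arbitrary threshold for this example

def interpret(utterance):
    """Return candidate intents with confidence scores (made up here)."""
    if "pension" in utterance.lower():
        return [("open_pension_claim", 0.48), ("pension_info", 0.45)]
    return [("unknown", 1.0)]

def respond(utterance):
    candidates = sorted(interpret(utterance), key=lambda c: c[1], reverse=True)
    best = candidates[0]
    runner_up = candidates[1] if len(candidates) > 1 else (None, 0.0)
    # If two interpretations are nearly tied, ask instead of acting.
    if best[1] - runner_up[1] < AMBIGUITY_MARGIN:
        return f"Did you mean '{best[0]}' or '{runner_up[0]}'? Please clarify."
    return f"Proceeding with '{best[0]}'."

print(respond("Good morning, I'm here for my pension"))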

Inexperience

How do we, as humans, respond to a wholly new experience? We choose what to do. Unfortunately, sometimes we can only recognise an experience as new after we’ve already engaged in it, which leaves little to no room for choice. So there is a difference between stopping to think about how to deal with a new situation (something a machine can probably do, with all the caveats raised earlier in this article) and failing to notice the newness of that very experience while living it. Suppose you sell alcoholic beverages and this is your first day. You would probably just try to sell whatever your clients need at the moment. The very next day you’re in trouble, because you have sold alcohol to a minor – and it happened because you treated that minor like any other client. The incident with the minor was a “new experience”, and you discovered it only after the fact. Such circumstances are hard to deal with for human beings and AI alike; the only way to make things easier is to ask the world for help step by step, so that you always know whether you’re doing the right thing. Our limited experience makes failure easier for humans and AI both, and things only get worse when heterogeneity and isolation come into play.
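For machines, at least a partial mitigation exists: novelty detection, i.e. flagging an input that looks unlike anything seen before, so the system can escalate instead of treating it as routine. A minimal sketch with made-up data; note that this only works when the novelty shows up in the features the system actually measures – the salesperson’s problem was precisely that age wasn’t one of them (it is a hypothetical feature here):

# Minimal novelty-detection sketch: flag inputs far from anything seen
# in the (made-up) history, instead of handling them as routine.
import math

history = [(35, 2), (50, 1), (42, 3)]  # hypothetical (age, items) per past client

def is_novel(client, threshold=10.0):
    """True if the client is unlike every past case (threshold is arbitrary)."""
    return min(math.dist(client, past) for past in history) > threshold

client = (16, 1)  # a minor: far from every past client on the age axis
if is_novel(client):
    print("Unfamiliar situation: escalate to a human before selling.")
else:
    print("Looks routine.")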

Unreliability

Who would ever trust an AI with highly risky or complex tasks? Well, let’s think for a moment about the famous electric car driven by an AI: what’s riskier than driving? Those who use that kind of car have three choices. The first is to let the car drive itself, giving it full trust – and given the limits of inexperience and misinterpretation discussed above, I don’t think this is a good idea while any other choice exists. The second is to use the AI merely to assist the driver (though if the car can drive by itself, we’d come to rely on it completely in no time): one way is for the car to override the driver in circumstances it believes to be risky, and vice versa; this invites plenty of false positives and false negatives, both potentially dangerous. The third and final option is for the human to drive unaided, paying no attention to the tech the car is fitted with, or even deactivating all the automation and warnings – which may well be the right choice.

Driving is a task that is both hard and risky, but even easier (yet still risky) tasks can have tricky outcomes. Take what happens when you transfer money (a delicate operation, yet not too hard) via a voice assistant: “Giuseppex” – our assistant’s made-up name – “make a wire transfer of [amount in €] to [bank details] in the name of Mr. [name]”. What happens if Giuseppex misunderstands the amount you intended to transfer? You can check right away whether the assistant got the bank details right, but that doesn’t apply to the money once it has been sent. In conclusion, when accuracy is essential you shouldn’t rely completely on AI.
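A common defence is to make the assistant read the critical values back and refuse to execute until the user explicitly confirms them – precisely because, unlike the bank details, the amount cannot be checked after the fact. A minimal sketch in Python; Giuseppex’s functions and the order data are invented:

# Minimal sketch: "Giuseppex" reads the transfer back and requires an
# explicit confirmation before executing. All names here are made up.

def heard_transfer_request():
    # In reality this would come from speech recognition (and could be wrong).
    return {"amount_eur": 1000.00, "iban": "IT60X0542811101000000123456", "payee": "Mario Rossi"}

def execute_transfer(order):
    print(f"Transferring {order['amount_eur']:.2f} EUR to {order['payee']}.")

order = heard_transfer_request()
print(f"I understood: {order['amount_eur']:.2f} EUR to {order['payee']}, IBAN {order['iban']}.")
answer = input("Shall I proceed? Type 'yes' to confirm: ")
if answer.strip().lower() == "yes":
    execute_transfer(order)
else:
    print("Cancelled: nothing was sent.")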

Heterogeneity

Each AI system is a world unto itself, developed on its own terms, sometimes on proprietary technology. In most cases the algorithms and technologies are very similar; what differs is how each system ultimately behaves, since that behaviour is deeply shaped by every design and implementation choice. As a result we have a stack of limits and flaws specific to each existing system.

Isolation

Science tells us that for humans and for groups (be they countries, societies, institutions, etc.) the best choice is always to cooperate. The same would apply to AI, but the idea of connecting different systems (which would make for better communication and interaction) still isn’t widespread; for this reason, every system remains isolated. The best course would be to establish common codes, norms and shared procedures, which would be constructive and useful both to strengthen the whole field of AI and to bridge the huge gaps – the limits and flaws in particular.
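To give one concrete (and entirely hypothetical) shape to those “common codes and norms”: a shared interface that every producer’s system would implement, however different the internals. A minimal sketch in Python; no such standard exists today:

# Sketch of a hypothetical shared interface for heterogeneous AI systems.
# Whatever the internals, every compliant system would expose the same
# operations, including reporting ambiguity and accepting corrections.
from abc import ABC, abstractmethod

class CooperatingAISystem(ABC):
    @abstractmethod
    def predict(self, query):
        """Return (answer, confidence in [0, 1])."""

    @abstractmethod
    def report_ambiguity(self, query):
        """Return competing interpretations, so a caller can investigate."""

    @abstractmethod
    def learn_from_correction(self, query, right_answer):
        """Incorporate an externally supplied correction."""

def cross_check(systems, query):
    """Ask several independent systems and flag disagreement."""
    answers = [s.predict(query)[0] for s in systems]
    return answers[0] if len(set(answers)) == 1 else "DISAGREEMENT: investigate"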

Prematurity

What follows is, I think, the most important part, and what pushed me to write this article. Let’s start with a metaphor. Say I’m a terrible student, one who hasn’t even bought all the books and takes next to no notes – and inaccurate ones when he does. Cramming the night before a test would mean going over my notes, but since they’re inaccurate there’s a high chance I’d understand nothing. What I could do is rely on an AI system properly trained to study and understand well-written notes, so that it can help me reorganise my own and perhaps fill in the gaps. The result wouldn’t be stellar, but I’d still manage a passing score.

I’ll now use traditional software as a term of comparison, and although I have no intention of tarring every product with the same brush, I get the impression that too many products are mediocre at best. That happens when developers don’t follow pre-existing models, good practices and well-established rules, creating something bound to be too different, closed, isolated and faulty. Users, in most cases, won’t care enough and will move on; industry experts, on the other hand, cannot ignore the facts. AI is the next step after the older, deterministic and schematic models of computer science, and scientific research is giving it a proper, adequate follow-up, allowing the first stage all the time it needs to mature before moving on. That has not happened in the productive world: what is missing there is precisely that development of the first stage, which in my opinion robs the idea of a “revolutionary” AI of credibility and reliability – just like a student who slacks off on his homework while trying to impress the teacher by bluffing whenever his knowledge is tested.

Companies may be getting less-than-ideal results from AI because their approach to classic computer science is wrong: within the same organisation there may be one department struggling with that approach and longing for the opportunity given to another department that gets to work with AI – but nothing says the AI will actually work properly. A manager might think of suppressing a disorganised department and replacing it with an AI-based system, but I’m ready to bet it won’t perform better or solve the old issues, and it may well create bigger problems. To circle back to our student: if he doesn’t learn to take better notes, he will never be able to answer every question on a test correctly, not even by resorting to the most sophisticated tricks.

This is even more obvious in fields like automation and robotics – also known as the terrible era in which every worker will be partly or completely replaced by AI. Robots allow most procedures (physical or not) to be automated, but when those procedures were ill-conceived from the start, and hence inefficient and faulty, the robots turn out to be of little use: they merely work faster and occasionally reduce errors. An ill-conceived procedure gets patched by human intervention (within certain limits) on a daily basis; that is a job a robot cannot do, which means the whole activity is bound to fail, however quickly it can be carried out.

Let’s say Mario Rossi needs a whole hour to get a task wrong and fail (or to get it right, thanks to his experience and knowledge); a robot would probably fail in under two minutes – because fail, eventually, it will. Coming back to AI, consider bureaucracy. If a manager wanted to replace one bureaucrat a day with an AI-based system, he would have to reckon not only with every flaw I’ve already described (and there’s still much to say about those), but also with how faulty the bureaucratic procedures themselves are. AI cannot fix the faults created by an inefficient organisation, though it can stand in for a human worker if the foundations it operates on are solid.

Here’s a hypothetical scenario. To be issued a certificate, you go to the right office, identify yourself, fill in every required form, pay whatever fee is due, and then collect the documentation you need. What happens when you introduce AI? Say the next step is a virtual assistant that identifies users, guides them through the forms, has them sign, takes the payment, and so on. Leaving aside the high expense and the time such a complex procedure would take to develop, the real point is that it would be riddled with mistakes and flaws serious enough to make it not worth using. And the worst kind of manager would rather force the technicians to make it work, at whatever extra cost, than admit the mistake.

The most sensible path before introducing AI would be to rethink the whole bureaucratic process so that it can be properly computerised in the traditional way; the artificial component can then be introduced for limited, targeted tasks (a voice assistant for accessibility, say, or the verification of certain requisites). It might even turn out that no AI is needed at all, once the process lets anyone simply complete the formalities online with their own digital signature.

To recap: if we introduce AI into a system that isn’t stable and mature enough, it is going to fail. We need to make sure the pre-existing organisation and its IT infrastructure are strong enough; otherwise, introducing AI is premature and inadvisable. Maybe we should replace the managers first and the workers later!
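The order argued for above can be made concrete: a plain, deterministic, traditionally computerised procedure first, with AI (if any) confined to one narrow, optional step at its edge. A minimal sketch in Python; the workflow, its steps and the fee are all hypothetical:

# Sketch: a traditionally computerised certificate procedure.
# The process itself is deterministic; no AI anywhere in the pipeline.
# All steps and values are hypothetical.

def identify(user):          # e.g. via digital identity / digital signature
    return user.get("verified", False)

def validate_form(form):     # fixed, rule-based checks; no AI needed
    return all(form.get(f) for f in ("name", "certificate_type"))

def charge_fee(user, amount):
    return True              # stand-in for a payment gateway

def issue_certificate(user, form):
    if not identify(user):
        return "Identification failed."
    if not validate_form(form):
        return "Form incomplete."
    if not charge_fee(user, 16.00):
        return "Payment failed."
    return f"Certificate '{form['certificate_type']}' issued to {form['name']}."

# An AI component, if added at all, would sit in front of this pipeline
# (e.g. a voice assistant helping to fill in `form`), never inside it.
print(issue_certificate({"verified": True},
                        {"name": "Mario Rossi", "certificate_type": "residency"}))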

A proposal for the application of AI in the near future

At the end of these considerations, I think AI makes a valid tool when it accompanies other operational systems – for specific requirements, not for anything and everything, and when humans really need it. It works just fine for guiding a cleaning robot, for guessing users’ preferences to suggest targeted ads, for playing chess, or for helping an expert (a doctor, a lawyer, a researcher, etc.) cross-reference data and sources so that extracting information becomes faster and more precise. There are plenty of other case studies to consider, but what do they all have in common? They operate in no-risk fields, or, where a degree of risk is involved, the AI only supports the human being, who in the end calls all the shots.

Another example. I have heard of a credit institution that asks its clients to provide a graphometric signature, with an automatic validity check against the original signature, in the presence of a bank employee. Apparently the check sometimes blocks the process by failing to recognise authentic signatures. The system would be far more helpful if it simply flagged signature anomalies to an employee instead of shutting the whole process down.

There are circumstances in which AI should operate freely – namely, when the human being cannot be of any help. For instance (I’m not sure how likely this is, but it proves the point), if during a flight both the captain and the first officer were to collapse, the autopilot – all risks considered – might become the only thing standing between safety and a horrible death.
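The design change suggested for the signature check is small but structural: route low-confidence cases to the employee who is already present, instead of halting the process. A minimal sketch, where verify_signature() is an invented stand-in for the bank’s real check and the thresholds are arbitrary:

# Sketch of "flag, don't block": low-confidence signature checks are
# routed to the employee instead of stopping the process.

def verify_signature(sample, reference):
    """Return a similarity score in [0, 1] (stubbed here)."""
    return 0.55

def handle_signature(sample, reference, accept_at=0.8, reject_at=0.2):
    score = verify_signature(sample, reference)
    if score >= accept_at:
        return "accepted automatically"
    if score <= reject_at:
        return "rejected: likely not the same signer"
    # The grey zone goes to the human already present, not to a dead end.
    return "flagged: employee, please verify the client's identity"

print(handle_signature("sample.png", "reference.png"))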

In the near future, I think we should rely on AI only in such a way that the higher the machine’s autonomy (no human involved), the lower the risk, and vice versa. If C stands for “convenience”, A for “autonomy” and R for “risk”, the formula is: C = 1 − (A × R), where all variables are numbers between 0 and 1, and using AI is convenient when C > 0.5 (author’s note: I know I’ve just expressed a trivial idea, but I’ve always dreamt of writing a formula like the ones we read in so many articles – and then some, by attaching a trivial formula to a trivial idea).
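For what it’s worth, the rule fits in a few lines of Python; the sample values below are arbitrary:

# C = 1 - (A * R): convenience of using AI, given autonomy A and risk R,
# all in [0, 1]; use AI when C > 0.5. Sample values are arbitrary.

def convenience(autonomy, risk):
    assert 0 <= autonomy <= 1 and 0 <= risk <= 1
    return 1 - autonomy * risk

for a, r, label in [(0.9, 0.9, "self-driving car, full autonomy"),
                    (0.9, 0.1, "cleaning robot"),
                    (0.2, 0.9, "AI assisting a doctor")]:
    c = convenience(a, r)
    print(f"{label}: C = {c:.2f} -> {'use AI' if c > 0.5 else 'avoid'}")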

Medium- and long-term proposals for the application of AI

We shouldn’t only consider AI itself, but also the way IT culture and sensitivity are taught, not only to developers but also to managers, so that final products are more efficient, compliant with rules and best practices, and predisposed to interaction with other systems (via APIs and open data, to name two). That makes fertile ground on which to expand development towards AI.

About the whole AI question, I think that to raise its autonomy and apply it to higher-risk situations, a joint effort should be made by all producers through specific initiatives; two of those can be drawn from what has already been said in this article:

1. establishing common codes, norms and shared procedures, so that heterogeneous systems become interoperable instead of isolated;
2. connecting the systems into a network that pools knowledge and learned experience, so that each one benefits from the mistakes and discoveries of the others.

By doing that and joining forces, it would be possible to bring that future closer and overcome a great many of the problems of misinterpretation, inexperience and unreliability.

I know this last proposition is problematic in terms of privacy and other security implications, but I believe the difficulty of turning specific information into anonymous, generic information can be overcome without losing its intrinsic worth. Is it worth it? Yes, because thinking of AI as a model that merely mimics a less performant version of a single human brain may prove too confining. A network formed by the combined knowledge and experience of many AI systems, acting as one, could change this field for good, overcoming many limits that stem from the uniqueness of each human being with the virtues of a cohesive, cooperative collective. Even if that collective is intangible and artificial.
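One existing technique that fits this proposition is federated averaging: each system shares only model parameters (plain numbers), never the raw data it learned from. A minimal sketch in Python, with made-up weight vectors; this attacks the anonymisation problem mentioned above, though it is not a complete answer to it:

# Sketch of federated averaging: systems pool what they learned
# (model weights) without pooling the private data itself.
# The three weight vectors below are made up.

def federated_average(weight_sets):
    """Average corresponding parameters across participating systems."""
    n = len(weight_sets)
    return [sum(ws[i] for ws in weight_sets) / n
            for i in range(len(weight_sets[0]))]

bank_model     = [0.12, 0.80, 0.33]   # trained on a bank's private data
hospital_model = [0.10, 0.75, 0.40]   # trained on a hospital's private data
retail_model   = [0.15, 0.78, 0.30]   # trained on a retailer's private data

shared = federated_average([bank_model, hospital_model, retail_model])
print("Collective model, no raw data exchanged:", shared)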

(translation by Silvia Di Mauro)