Artificial Intelligence vs Computational Intelligence
Bruno Di Stefano
One can hardly scan through newspapers and magazines without finding the term “Artificial Intelligence” (AI). For instance, The Economist, a very respected and reliable magazine focused on world events, politics and business, but also on science and technology, has published plenty of in-depth articles and special reports on various aspects of AI and its impact on society and the economy. Tracking the frequency of searches for the topic “Artificial Intelligence” with Google Trends, one can see an increase of 33% since 2004. Google Books Ngram Viewer provides similar results for printed material. Thus, IEEE members may be surprised to notice that, although IEEE has 39 technical Societies, it does not have an “Artificial Intelligence Society”. How is this possible?
Looking closer, one can see that IEEE has a “Computational Intelligence” (CI) Society (IEEE CIS). It sounds very similar… and some members even claim that it is the same thing. Is it?
AI and CI share a similar long-term goal: to achieve “general intelligence”, i.e. the intelligence of a machine that could perform any intellectual task that a human being can. However, according to James C. Bezdek, a scientist of foundational importance for CI who has contributed to both AI and CI, AI is based on hard computing methods (i.e. computing based on crisp, binary computation), while CI is based on soft computing methods (i.e. neural networks, fuzzy logic, and genetic algorithms, according to the original definition by Lotfi Zadeh).
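To make the hard/soft distinction concrete, here is a minimal Python sketch of one soft computing pillar, fuzzy logic. Where crisp (hard) logic classifies a temperature as either “warm” or “not warm”, a fuzzy set assigns a degree of membership between 0 and 1. The function and the numeric thresholds below are purely illustrative, not drawn from any particular CI system:

```python
# A minimal sketch of one "soft computing" pillar: fuzzy logic.
# Unlike crisp (hard) logic, a fuzzy set assigns each input a degree
# of membership between 0.0 and 1.0 rather than a binary yes/no.

def triangular_membership(x: float, low: float, peak: float, high: float) -> float:
    """Degree to which x belongs to a triangular fuzzy set (low, peak, high)."""
    if x <= low or x >= high:
        return 0.0
    if x <= peak:
        return (x - low) / (peak - low)
    return (high - x) / (high - peak)

# Degree to which a temperature is "warm": an illustrative fuzzy set
# rising from 15 C, peaking at 25 C, and falling back to zero at 35 C.
for temp in (10, 20, 25, 30, 40):
    print(f"{temp} C -> warm to degree {triangular_membership(temp, 15.0, 25.0, 35.0):.2f}")
```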
Given the absence of an IEEE AI Society, IEEE CIS is the society that ends up dealing with the majority of AI matters within IEEE, including matters with very strong societal implications, as, for instance, with Explainable AI.
Explainable AI (XAI) is AI whose results can be understood by humans. This is in contrast with “black box” AI in machine learning, where even its designers cannot explain why an algorithm arrived at a specific decision. IEEE CIS has set up a working group to develop an IEEE standard on XAI: P2976 – “Standard for XAI – eXplainable Artificial Intelligence – for Achieving Clarity and Interoperability of AI Systems Design”. The purpose of this standard is to enable “engineers and scientists developing AI systems to design systems with improved interoperability, supporting the export and import of AI systems and solutions from one implementation to another.”
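As a minimal illustration of the contrast described above, the sketch below (assuming scikit-learn is installed) trains an inherently interpretable decision tree and prints its learned decision rules as human-readable conditions, the kind of direct explanation a black-box model cannot offer:

```python
# A minimal illustration of a model that is explainable by construction,
# assuming scikit-learn is available. A decision tree's decision rules
# can be printed and read by a human, unlike the internals of a typical
# black-box model.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)

# export_text renders the learned rules as readable if/else conditions.
print(export_text(tree, feature_names=list(data.feature_names)))
```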
The standard’s aim is to provide researchers, developers and designers of AI systems (including machine learning, rule-based, neural network and other systems) and industrial applications with a unified, high-level methodology for classifying their products as partially or fully explainable. The standard includes an “XML Schema” describing the requirements and constraints that have to be satisfied. The rationale of the XAI effort is that there is a social “right to explanation”: end users need to be able to trust that the AI algorithm is making good decisions.
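To give a feel for what an XML-based explainability descriptor might look like, here is a purely hypothetical sketch built with Python’s standard library. P2976 is still in development, so every element and attribute name below is invented for illustration; none of it is taken from the standard’s actual XML Schema:

```python
# A purely hypothetical sketch of an explainability descriptor for an
# AI system. The element and attribute names are invented for
# illustration only; they are NOT taken from P2976's actual XML Schema.
import xml.etree.ElementTree as ET

descriptor = ET.Element("AISystem", attrib={"name": "LoanScreeningModel"})
ET.SubElement(descriptor, "ModelType").text = "neural-network"
# Hypothetical classification levels: "fully", "partially", or "not" explainable.
ET.SubElement(descriptor, "ExplainabilityLevel").text = "partially"
ET.SubElement(descriptor, "ExplanationMethod").text = "feature-attribution"

print(ET.tostring(descriptor, encoding="unicode"))
```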
What is the “right to explanation”? We all assume that we have a right to know why a decision about us was made, particularly when such a decision is negative and goes against our expectations. For instance, a student wants to know why his/her test was evaluated with a lower grade than he/she expected. A candidate is entitled to know why he/she was not hired. An applicant is entitled to know why a mortgage application, a rent application, or an insurance claim was denied. Everybody expects to know why he/she got a certain credit score.
This “right to explanation” is the basis of one’s ability to improve “next time” and to avoid a repeat of the negative outcome.
In an age when job applications and resumes are pre-screened by computer, when insurance rates are set by computer, and when behavioral risks are assessed by computer, those being assessed are entitled to know the rationale of the decision-making process. In short, in the words of Wikipedia: “In the regulation of algorithms, particularly artificial intelligence and its subfield of machine learning, a right to explanation (or right to an explanation) is a right to be given an explanation for an output of the algorithm. Such rights primarily refer to individual rights to be given an explanation for decisions that significantly affect an individual, particularly legally or financially.”
IEEE is not alone in working toward the goal of XAI. For instance:
- The National Institute of Standards and Technology (NIST) published (August 2020) the draft document NISTIR 8312, titled “Four Principles of Explainable Artificial Intelligence”. It emphasizes ethics and the human side, and is less technical than the planned focus of IEEE P2976.
- The World Wide Web Consortium (W3C) published on 31 Oct 2018 an online post in its AI Knowledge Representation (AI KR) Community Group titled “Toward a Web Standard for Explainable AI?”. It is not a standard in its own right, but it is important because it indicates interest within the W3C in this topic.
P2976 is just starting its work, which is expected to be completed in 2025. However, there will be many opportunities to follow the progress of this working group, as various reports and standard drafts will be issued along the way. Both IEEE CIS and IEEE SA will circulate information as it becomes available.