Research into A.I. began roughly 50 years ago. Back then, the hardware, the facilities, and the algorithms were all confined to big research laboratories. Today, we all carry and use A.I.-based components that aid us in our daily lives. A.I. has finally become more than a mere concept in a common person's life and has entered reality.
So what factors have bridged this huge gap between A.I. and its applications in daily life? Three primary reasons stand out:
- A.I.'s techniques have come of age. Some examples: virtual personal assistants like Siri, Google Now, and Cortana on the major platforms (iOS, Android, and Windows Mobile); video games like Far Cry and Call of Duty; autonomous vehicles; fraud detection; online customer support; and news generation all make use of A.I. on a daily basis. Whether its use is apparent or not, A.I. has become a big part of the proper functioning of many such systems.
- The convergence of Big Data and A.I. is being used by almost every industry to automate better decision making from its data repositories. Machine learning, expert systems, and analytics, combined with Big Data, can produce powerful results in this age of IoT. Without them, it would be impossible to collect, process, distribute, and analyze IoT data, let alone drive decisions about how to benefit from it.
- Advances in hardware: CPUs, GPUs, and machine-learning accelerators. Big companies and research organizations are investing heavily in the research and production of advanced hardware to keep pace with progress in deep learning and A.I. Recently, Google built an A.I.-focused processor, the Tensor Processing Unit (TPU), to speed up machine learning. NVIDIA unveiled the latest additions to its Pascal™ architecture-based deep learning platform, the NVIDIA® Tesla® P4 and P40 GPU accelerators. They are designed specifically for inferencing, which uses trained deep neural networks to recognize speech, images, or text in response to queries from users and devices.
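To make "inferencing" concrete, here is a minimal sketch of what happens when a trained network answers a query: a forward pass through fixed weights, with no learning involved. The two-layer network, its weight values, and the class labels below are all hypothetical illustrations, not tied to any product mentioned above.

```python
import numpy as np

def relu(x):
    # Standard rectified-linear activation.
    return np.maximum(0, x)

def softmax(z):
    # Turn raw scores into probabilities that sum to 1.
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical weights of an already-trained two-layer network.
# In practice these come from a training run; inference only reads them.
W1 = np.array([[0.5, -0.2], [0.1, 0.9]])
b1 = np.array([0.0, 0.1])
W2 = np.array([[1.2, -0.7], [-0.3, 0.8]])
b2 = np.array([0.05, -0.05])

LABELS = ["cat", "dog"]  # illustrative class names

def infer(x):
    """Forward pass only: no gradients, no weight updates."""
    h = relu(x @ W1 + b1)
    probs = softmax(h @ W2 + b2)
    return LABELS[int(np.argmax(probs))], probs

label, probs = infer(np.array([1.0, 0.0]))
print(label, probs)
```

The key point is that inference is cheap and repetitive relative to training, which is why accelerators optimized purely for this forward pass make sense.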
One important question to ask at this point is: what defines intelligence for a machine? Machine intelligence can broadly be classified into two categories:
- Task-Oriented Intelligence: The machine follows an algorithm for a specific task without much cognitive ability. Such systems perform narrow tasks such as speech and voice recognition, computer vision, language translation, and calculation. This form of intelligence is already widely applied in the real world, from industry to our daily chores.
- Artificial General Intelligence (AGI): This involves a machine's ability to reason, abstract, communicate, and understand intention and context. Deep learning research here is still in its early stages. Achieving AGI will require algorithms that can explore the space of possibilities and ask the right series of questions in order to arrive at the correct answer.
The growth of the Internet and the availability of information have enabled even startups to contribute to A.I. While software giants like Google, which have researched A.I. for a long time, are now ahead even in the autonomous-vehicle sector, there is still hope for startups. Complex algorithms are no longer restricted to big research labs; people like George Hotz have built driverless cars on their own! Long-established software companies certainly have an edge, but startups can compete by building one critical component better than anybody else, rather than the entire system.
Similarly, marketing, training, and UI are also very important components of these systems. Anthropologists, psychologists, and people from many other fields have been engaged to study them. Complex systems like these need a clear 360-degree view of the design in order to work: they should be built around not just what the technology enables but also what humans expect from it. Compared to machines, human brains are very slow at computation, so in interactive, split-second decision-making scenarios we really have to factor in that response time. For machines to be effectively assimilated into the real world, they must understand social spaces: understand human dynamics and what kind of navigation is socially acceptable. To acquire this, a robot should focus on how to learn, and on learning to learn, as opposed to just mimicking its training data. It should learn incrementally, observing a space for behavior patterns and adapting to that environment. This will also enable cross-cultural usage.
While it is true that A.I. has endless potential and applications in the real world, intelligent machines with cognitive, humanistic thinking are still a distant dream. Though much work is going on in robotics and vehicle automation, it is equally important to highlight their downsides and to focus on how the machine learns. There is still a lack of diversity in A.I. It will require people and knowledge from all walks of life to add to the overall knowledge pool, cognitive thinking, and creativity.
This article is inspired by the When Humanity Meets A.I. podcast with Fei-Fei Li, Frank Chen, and Sonal Chokshi.