Tracking the Technology of Intelligent Personal Assistants (An Overview)

There are many ways to communicate with others, but listening and speaking feel the most free and natural to us humans. Intelligent Personal Assistants (IPAs) are built with many programming languages and behave differently, yet all of them fall within the same branch of computer science. Human-Computer Interaction (HCI) and Natural Language Processing (NLP) are essential to creating such a system, and the more these systems develop, the more of computer science they draw on: Artificial Intelligence (AI) and Machine Learning (ML) have also played a vital role in their implementation.
In the early era of computing, the only way to communicate with a computer was through mouse and keyboard instructions. The user gave input, and the computer produced output on its monitor, i.e. the screen, as text or in some other graphical form rendered by software. Put simply, the only means of communication were typing and reading. At that time, giving instructions to a computer, or receiving responses from it, by speaking and listening was just a dream. That dream came true through advances in computer programming, with Natural Language Processing (NLP) enabling programs that understand human language.
Toward this goal, IBM began work in 2006 on what became its DeepQA project, which produced "Watson", a supercomputer able to understand and solve complex problems posed to it in human language. Watson used what was then an advanced form of NLP, along with many other technologies, to interact with humans in natural language.
Many years later, it was Apple Inc. that broke the ice with a breakthrough project: a fully fledged application with an advanced ability to understand human language, known as Siri, released in 2011 and bundled into the iPhone 4S's operating system. Apple suggested to its users that they simply needed to extend their speaking habits to the imperceptible interlocutor embedded in their iPhones. Siri got the attention it deserved: talking to Siri feels much like talking to a human being, as users can speak in their natural voice and give instructions in a conversational tone. This is possible because Siri connects to its core algorithms through a server; the more a user interacts with it, the better it understands them, the application itself acting as a client consuming services from the server. Siri also uses the user's personal information to understand them better. Unfortunately, Siri was available only on the iPhone 4S at the time, and even Apple's other products did not get this treatment, so it became a necessity for other companies to build systems of this kind to win the interest of their customers.
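That client-server split can be illustrated with a minimal sketch: the device captures and transcribes an utterance, then hands it to a remote service that does the actual understanding. The endpoint URL, request fields, and response format below are hypothetical placeholders, not Siri's actual API.

```python
import json
import urllib.request

# Hypothetical assistant endpoint -- not a real Siri or Apple API.
ASSISTANT_URL = "https://assistant.example.com/v1/query"

def ask_assistant(transcript: str, user_id: str) -> dict:
    """Send a transcribed utterance to the server and return its reply.

    The heavy lifting (language understanding, personalization from past
    requests) happens server-side; the device only captures speech and
    presents the answer.
    """
    payload = json.dumps({"user": user_id, "text": transcript}).encode("utf-8")
    request = urllib.request.Request(
        ASSISTANT_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

# Example (assumes a live endpoint, so it is left commented out):
# reply = ask_assistant("What's the weather tomorrow?", user_id="u123")
# print(reply["speech"])  # the text the assistant speaks back
```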
In 2011, Android smartphones held 50.9% of the worldwide market, while Apple held just 23.8%. Similar tools (i.e. IPAs) were therefore soon developed by other digital corporations. In 2014 Microsoft unveiled its voice assistant, Cortana. That same year, Amazon launched its IPA, Alexa. In 2016 Google introduced its IPA, the Google Assistant.
In just a few years this technology left the restricted space of smartphones to lodge in all sorts of digital devices, from watches to tablets, occupying both domestic and professional settings. IPAs are software-based systems that recognize spoken input and produce spoken output. Users' voice commands and questions are transcribed and interpreted by language-processing algorithms, which then reply to those commands and queries or execute tasks such as searching, sending messages, making calls, and managing schedules.
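The recognize-interpret-respond loop just described can be sketched as follows. The intent patterns and handlers are toy placeholders of my own; real assistants rely on trained speech-recognition and language-understanding models rather than keyword rules.

```python
import re

# Toy intent table mapping a pattern to a handler. Commercial IPAs use
# statistical NLU models; keyword rules only illustrate the pipeline shape.
INTENTS = [
    (re.compile(r"\bweather\b", re.I), lambda text: "Fetching the forecast..."),
    (re.compile(r"\bcall\b", re.I), lambda text: "Dialing your contact..."),
    (re.compile(r"\bsearch for\b", re.I), lambda text: "Searching the web..."),
]

def transcribe(audio_bytes: bytes) -> str:
    """Stand-in for a speech-to-text engine; the real step runs a large
    acoustic and language model, often on a server."""
    return audio_bytes.decode("utf-8")  # pretend the audio is already text

def interpret(transcript: str) -> str:
    """Match the transcript against known intents and run the first handler."""
    for pattern, handler in INTENTS:
        if pattern.search(transcript):
            return handler(transcript)
    return "Sorry, I didn't catch that."

if __name__ == "__main__":
    utterance = b"search for intelligent personal assistants"
    print(interpret(transcribe(utterance)))  # -> Searching the web...
```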
 
