AI continues to create significant business opportunities for the major tech companies, as the field and its applications still have plenty of room to grow. To capitalize on this market, Microsoft has announced the acquisition of Semantic Machines, a conversational AI company based in Berkeley, California.
The acquisition was announced in an official Microsoft blog post by David Ku, Microsoft Corporate Vice President and Chief Technology Officer of AI and Research. Semantic Machines works on technologies such as speech synthesis, deep learning, and natural language processing.
Following the acquisition, the natural language processing technology developed by Semantic Machines will be used across Microsoft's AI-based products, such as the Azure Bot Service.
Semantic Machines describes itself as a company that develops technologies going beyond understanding commands to understanding communication. Its conversational AI technologies allow computers to communicate, collaborate, understand users' goals, and accomplish tasks.
With Semantic Machines' conversational AI technology, Microsoft will be better positioned to compete with other conversational computing initiatives such as Amazon's Alexa, Apple's Siri, Google Assistant, and Samsung's Bixby.
Semantic Machines also brings deep expertise, with some very big names from the tech industry on its team: Larry Gillick, former chief scientist for Siri at Apple; Dan Klein, professor at UC Berkeley; and Percy Liang, professor at Stanford University.
Daniel Roth, co-founder and CEO of Semantic Machines, is a technology entrepreneur who previously launched Voice Signal Technologies and Shaser BioScience. Damon Pender, co-founder and CFO of Semantic Machines, was earlier the CFO of TeraDiode, Shaser BioScience, and NeoSaej.
Semantic Machines' main product is its Conversation Engine. The engine extracts semantic intent from natural input such as voice or text, and then builds a self-updating learning framework to manage dialogue context, salience, and the end user's objectives.
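To give a rough sense of what extracting intent and carrying dialogue context means, here is a deliberately simplified toy sketch. The names (`extract_intent`, `DialogueContext`) and the keyword-matching approach are purely illustrative assumptions for this article; Semantic Machines' actual engine is based on machine learning, not hand-written rules.

```python
# Toy illustration of intent extraction with dialogue context.
# This is NOT Semantic Machines' actual API or approach; all names
# and the keyword-based logic are hypothetical, for explanation only.

from dataclasses import dataclass, field

# A real system would use a learned model; we fake it with keywords.
INTENT_KEYWORDS = {
    "book_flight": ["flight", "fly"],
    "check_weather": ["weather", "forecast"],
}

@dataclass
class DialogueContext:
    """Keeps track of intents seen so far in the conversation."""
    history: list = field(default_factory=list)

    def last_intent(self):
        return self.history[-1] if self.history else None

def extract_intent(utterance: str, context: DialogueContext):
    """Map an utterance to an intent, falling back to dialogue context."""
    text = utterance.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(k in text for k in keywords):
            context.history.append(intent)
            return intent
    # No keyword matched: treat it as a follow-up to the most
    # recent (most salient) intent in the conversation.
    return context.last_intent()

ctx = DialogueContext()
print(extract_intent("Book me a flight to Boston", ctx))  # book_flight
print(extract_intent("Make it for Friday", ctx))          # book_flight
```

The second utterance contains no intent keyword at all; it is only interpretable because the context remembers that a flight booking is in progress, which is the core difference between understanding isolated commands and understanding a conversation.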
By building and launching advanced AI platforms like Cortana and XiaoIce, Microsoft became the first company to pair full-duplex voice sense with a conversational AI system, allowing users to communicate naturally.
Microsoft has been advancing speech recognition and natural language technology for more than two decades, with the vision of a world where computers can see, hear, talk, and understand just like humans. That vision led to the launch of the Cognitive Service Framework in 2016.
The Cognitive Service Framework concentrates on the development of bots and the integration of speech recognition and natural language understanding into intelligent assistants. It now has more than 1 million developers as users, with 300,000 developers using the Azure Bot Service.