IBM has introduced Project Intu, a new AI platform that enables developers to embed Watson functionality into numerous end-user devices, providing an advanced architecture for creating cognitive-enabled experiences.

In IBM’s parlance, “cognitive computing” describes machine learning. The idea behind Project Intu is that developers can use the platform to embed the machine learning features offered by IBM’s Watson service into different applications and products, and make them work across a wide range of form factors.

Intu simplifies the process for developers who want to create cognitive experiences in various form factors such as spaces, avatars, robots or other IoT devices, extending cognitive technology into the physical world. The system enables devices to interact more naturally with users, triggering different emotions and behaviors and creating more meaningful and immersive experiences.

Developers can take advantage of Watson services such as the Conversation, Language and Visual Recognition APIs to give devices various cognitive capabilities. The project supports Raspberry Pi, macOS, Windows and Linux environments, among others.
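As a rough illustration of what calling one of those Watson services looks like (this sketch is not from IBM's announcement; the endpoint path, version date, and workspace ID are assumptions based on the Conversation service's REST API of that era), a device-side client might assemble a request like this:

```python
# Hypothetical sketch: building a request to the Watson Conversation
# service's REST API. The base URL, version date, and workspace ID
# below are illustrative assumptions, not details from the article.
import json

WATSON_URL = "https://gateway.watsonplatform.net/conversation/api"
VERSION = "2016-09-20"  # assumed API version date

def build_message_request(workspace_id, text):
    """Return the URL and JSON body for a Conversation `message` call."""
    url = f"{WATSON_URL}/v1/workspaces/{workspace_id}/message?version={VERSION}"
    body = json.dumps({"input": {"text": text}})
    return url, body

# Example: sending a user utterance from a device to the service.
url, body = build_message_request("my-workspace-id", "Turn on the lights")
print(url)
print(body)
```

In a real device, the returned URL and body would be POSTed with the service credentials, and the JSON response would drive the device's behavior.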

“IBM is taking cognitive technology beyond a physical technology interface like a smartphone or a robot toward an even more natural form of human and machine interaction,” Rob High, IBM Fellow, vice president and CTO of IBM Watson, said in an announcement.

Steve Abrams, IBM vice president of Watson Developer Advocacy, wrote in a recent blog post, “Our philosophy at IBM is to put our technology in the hands of developers, because for every good idea we have, we know they’re thinking up thousands more.”

Project Intu is still an experimental system, and it is available via the Watson Developer Cloud, the Intu Gateway and on GitHub.

IBM hopes developers will experiment with the platform and provide feedback before it is released as a fully fledged beta. [Bitbillions]