
IBM and MIT team up to help autonomous robots and other AI systems see and hear like humans

IBM and MIT think autonomous robots and other AI systems can do a better job of making sense of what they see and hear. They've begun a "multi-year" partnership that aims to improve AI's ability to interpret sight and sound as well as humans do. IBM will supply the expertise and technology from its Watson cognitive computing platform, while MIT will conduct the research. It's still very early, but the two already have a sense of what they can accomplish.

The new IBM-MIT Laboratory for Brain-inspired Multimedia Machine Comprehension (BM3C) will aim to develop cognitive computing systems that emulate the human ability to understand and integrate audio and visual information from multiple sources into a detailed computer representation of the world. That representation could then be used in a variety of applications across industries such as healthcare, education, and entertainment.
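
To give a rough sense of what "integrating audio and visual information from multiple sources" can mean in practice, here is a minimal, purely illustrative sketch of late fusion: pre-extracted audio and video feature vectors are encoded separately, concatenated into one joint representation, and used to classify an event. The use of PyTorch, the layer sizes, and the names here are assumptions for demonstration only, not details of BM3C's actual systems.

import torch
import torch.nn as nn

class ToyAudioVisualFusion(nn.Module):
    # Toy late-fusion model: encode each modality, merge, then classify the event.
    def __init__(self, audio_dim=128, video_dim=512, hidden_dim=256, num_classes=10):
        super().__init__()
        self.audio_encoder = nn.Sequential(nn.Linear(audio_dim, hidden_dim), nn.ReLU())
        self.video_encoder = nn.Sequential(nn.Linear(video_dim, hidden_dim), nn.ReLU())
        # Concatenate the two encoded modalities and map to event classes.
        self.classifier = nn.Linear(hidden_dim * 2, num_classes)

    def forward(self, audio_features, video_features):
        a = self.audio_encoder(audio_features)
        v = self.video_encoder(video_features)
        fused = torch.cat([a, v], dim=-1)  # joint audio-visual representation
        return self.classifier(fused)

# Example: one batch of four clips with pre-extracted (hypothetical) features.
model = ToyAudioVisualFusion()
audio = torch.randn(4, 128)   # stand-in audio embeddings
video = torch.randn(4, 512)   # stand-in video frame embeddings
logits = model(audio, video)  # shape: (4, 10), one score per event class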

One of the biggest challenges will be to advance pattern recognition and prediction. A human can easily describe what happened in an event and predict what will happen next, IBM says, but that's virtually "impossible" for current AI. That ability to quickly summarize and foresee events could be useful for everything from helping health care workers care for the elderly to repairing complicated machines, among other examples.

Guru Banavar, Chief Science Officer, Cognitive Computing and VP, IBM Research, and Professor James DiCarlo, head of the Department of Brain and Cognitive Sciences (BCS) at the Massachusetts Institute of Technology (MIT). (Credit: IBM Research)

There's no guarantee that IBM and MIT will crack a problem that has stumped Google, Facebook and countless academics. However, it's rare for researchers to get this kind of access to Watson-class technology, so we might just see breakthroughs that aren't practical for teams with only limited use of AI-friendly hardware and code.


Source: IBM



