Friday, September 12, 2014

NUI: Natural User Interface

English: The Microsoft Kinect peripheral for the Xbox 360. (Photo credit: Wikipedia)
The NUI will be a bigger departure than the GUI, the Graphical User Interface, was. This is a big one. Microsoft has some advantages here. But it will not be any one company's thing. This will be a tectonic shift.

Intel Says Laptops and Tablets with 3-D Vision Are Coming Soon
Laptops with 3-D sensors in place of conventional webcams will go on sale before the end of this year ...... Partners already working with Intel include Microsoft’s Skype unit, the movie and gaming studio Dreamworks, and the 3-D design company Autodesk ....... a startup called Volumental, lets you snap a 3-D photo of your foot to get an accurate shoe size measurement—something that could help with online shopping. ..... data from a tablet’s 3-D sensor can be used to build very accurate augmented reality games, where a virtual character viewed on a device’s screen integrates into the real environment. In one demo, a flying robot appeared on-screen and selected a landing spot on top of a box on a cluttered table. As the tablet showing the character was moved, it stayed perched on the tabletop, and even disappeared behind occluding objects. ...... the front-facing 3-D sensors can be used to recognize gestures to play games on a laptop, or take control of some features of Windows. ...... reminiscent of Microsoft’s Kinect sensor for its Xbox gaming console, which introduced gamers to depth sensing and gesture control in 2010. Microsoft launched a version of Kinect aimed at Windows PCs in 2012, and significantly upgraded its depth-sensing technology in 2013, but Kinect devices are too large to fit inside a laptop or tablet.
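The occlusion trick in that demo comes straight from the depth data: if a real object's per-pixel depth is closer to the camera than the virtual character, the character is hidden at that pixel. Here is a minimal sketch of the idea in Python, using a made-up depth map; the array shape and values are my own illustration, not Intel's actual SDK.

```python
import numpy as np

# Hypothetical per-pixel depth map from a front-facing 3-D sensor, in metres.
# Real depth-sensing SDKs return similar 2-D arrays of distances.
depth = np.full((4, 4), 2.0)   # background wall about 2 m away
depth[1:3, 1:3] = 0.8          # a box on the cluttered table, about 0.8 m away

def is_occluded(pixel, virtual_depth, depth_map):
    """A virtual object drawn at `pixel` is hidden whenever something
    real sits closer to the camera than the object's own depth."""
    row, col = pixel
    return bool(depth_map[row, col] < virtual_depth)

# A virtual robot "perched" 1.5 m from the camera: visible against the
# far wall, but hidden behind the nearer box.
print(is_occluded((0, 0), 1.5, depth))  # False: wall is farther than the robot
print(is_occluded((1, 1), 1.5, depth))  # True: box is closer than the robot
```

Per-pixel depth comparison is also why the character can stay "perched" as the tablet moves: each new frame brings a fresh depth map to test against.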

Nanostructured Ceramics

Just like biotech has a huge appetite for Big Data, I keep wondering how you plug software into these nanoscale equations. You know how they say the software-only approach to sensory perception is a dead end: you need to innovate at the hardware level, so you design neuro-inspired chips. I think nano could see software used in that direction. Smarter hardware? Built-in smartness?

A Super-Strong and Lightweight New Material
they could make ceramics, metals, and other materials that can recover after being crushed, like a sponge. The materials are very strong and light enough to float through the air like a feather. ..... In conventional materials, strength, weight, and density are correlated. ..... nanoscale trusses made from ceramic materials can be both very light—unsurprising, since they are mostly air—and extremely strong. ..... To make the ceramic nano-trusses, Greer’s lab uses a technique called two-photon interference lithography. It’s akin to a very low-yield 3-D laser printer. ..... Nanostructures have a very high surface area and are lightweight, a combination that could make for a fast-charging battery that stores a lot of energy in a convenient package.
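The "mostly air" claim can be made concrete with a back-of-the-envelope calculation. For a lattice built from hollow tubes, the solid fraction scales roughly with tube surface area times wall thickness per unit-cell volume. The formula and the dimensions below are my own illustration of that scaling, not figures from Greer's lab.

```python
def relative_density(wall_t, tube_r, strut_l, c=1.0):
    """Rough solid fraction of a hollow-tube lattice: proportional to
    (tube radius x wall thickness) / strut length squared. The prefactor
    c depends on the truss geometry and is taken as 1 here."""
    return c * (tube_r * wall_t) / strut_l**2

# Plausible nanolattice dimensions: 10 nm ceramic walls, 500 nm tube
# radius, 5-micron struts (assumed values for illustration).
rho = relative_density(wall_t=10e-9, tube_r=500e-9, strut_l=5e-6)
print(f"solid fraction ~ {rho:.2%}")  # a tiny fraction: the rest is air
```

Thinning the walls at fixed strut length drives the density toward zero, which is exactly why these trusses can be feather-light while the ceramic itself stays strong.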

Thursday, September 11, 2014

Neuromorphic Chips

Dr. Isaac Asimov, head-and-shoulders portrait, facing slightly right, 1965 (Photo credit: Wikipedia)
Is this what you see on the way to Singularity?

Neuromorphic Chips

Traditional chips are reaching fundamental performance limits. ..... The robot is performing tasks that have typically needed powerful, specially programmed computers that use far more electricity. Powered by only a smartphone chip with specialized software, Pioneer can recognize objects it hasn’t seen before, sort them by their similarity to related objects, and navigate the room to deliver them to the right location—not because of laborious programming but merely by being shown once where they should go. The robot can do all that because it is simulating, albeit in a very limited fashion, the way a brain works. ..... They promise to accelerate decades of fitful progress in artificial intelligence and lead to machines that are able to understand and interact with the world in humanlike ways. Medical sensors and devices could track individuals’ vital signs and response to treatments over time, learning to adjust dosages or even catch problems early. Your smartphone could learn to anticipate what you want next, such as background on someone you’re about to meet or an alert that it’s time to leave for your next meeting. Those self-driving cars Google is experimenting with might not need your help at all, and more adept Roombas wouldn’t get stuck under your couch. “We’re blurring the boundary between silicon and biological systems” ...... Today’s computers all use the so-called von Neumann architecture, which shuttles data back and forth between a central processor and memory chips in linear sequences of calculations. That method is great for crunching numbers and executing precisely written programs, but not for processing images or sound and making sense of it all. It’s telling that in 2012, when Google demonstrated artificial-­intelligence software that learned to recognize cats in videos without being told what a cat was, it needed 16,000 processors to pull it off. ..... “There’s no way you can build it [only] in software,” he says of effective AI. “You have to build this in silicon.” ...... Isaac Asimov’s “Zeroth Law” of robotics: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.” ..... glasses for the blind that use visual and auditory sensors to recognize objects and provide audio cues; health-care systems that monitor vital signs, provide early warnings of potential problems, and suggest ways to individualize treatments; and computers that draw on wind patterns, tides, and other indicators to predict tsunamis more accurately.
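The basic unit these chips implement in silicon is a spiking neuron. A leaky integrate-and-fire model captures the idea in a few lines: the neuron accumulates input, leaks charge over time, and fires only when it crosses a threshold, so computation is event-driven rather than a clocked fetch-execute loop. This is a textbook sketch of the concept, not any vendor's actual architecture.

```python
def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron. Each step, the membrane
    potential decays by the leak factor, adds the input current, and
    the neuron emits a spike (and resets) when it crosses threshold."""
    v = 0.0
    spikes = []
    for current in inputs:
        v = v * leak + current   # leak a little, then integrate the input
        if v >= threshold:
            spikes.append(1)
            v = 0.0              # reset the potential after firing
        else:
            spikes.append(0)
    return spikes

# Sustained weak input eventually crosses threshold; isolated input does not.
print(simulate_lif([0.5, 0.5, 0.5, 0.0, 0.9, 0.9]))  # [0, 0, 1, 0, 0, 1]
```

Because a silent neuron does no work, large networks of such units can run on a power budget closer to a smartphone chip than to the 16,000 processors in Google's cat experiment.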