Google DeepMind AI navigates a Doom-like 3D maze just by looking

Google DeepMind just entered the 90s. Fresh off their success in playing the ancient game of Go, DeepMind’s latest artificial intelligence can navigate a 3D maze reminiscent of the 1993 shooter game Doom.

Unlike most game-playing AIs, the system has no access to the game’s internal code. Instead it plays just as a human would, by looking at the screen and deciding how to proceed. This ability to navigate a 3D space by “sight” could be useful for AIs operating in the real world.

The work builds on research DeepMind published last year, in which the team trained an AI to play 49 different video games from the Atari 2600, a games console popular in the 1980s. The software wasn’t told the rules of the games, and instead had to watch the screen to come up with its own strategies to get a high score. It was able to beat a top human player in 23 of the games.

High score

That AI relied on a technique called reinforcement learning, which rewards the system for taking actions that improve its score, combined with a deep neural network that analyses and learns patterns on the game screen. It was also able to look back into its memory and study past scenarios, a technique called experience replay.

But experience replay has drawbacks that make it hard to scale up to more advanced problems. “It uses more memory and more computation per real interaction,” writes the DeepMind team in its latest paper. So it has come up with a technique called asynchronous reinforcement learning, in which multiple versions of an AI tackle a problem in parallel and compare their experiences.
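The core idea can be illustrated with a toy sketch (a deliberate simplification, not DeepMind’s actual system): several worker threads explore the same problem in parallel and apply lock-free updates to one shared set of value estimates, with no replay memory at all. The environment here is a made-up four-action bandit, invented purely for illustration.

```python
import threading
import random

N_ACTIONS = 4
shared_q = [0.0] * N_ACTIONS   # action-value estimates shared by all workers
ALPHA = 0.1                    # learning rate

def true_reward(action):
    # Hypothetical environment: action 2 pays ~1.0 on average, the rest ~0.0.
    return random.gauss(1.0 if action == 2 else 0.0, 0.1)

def worker(steps, eps=0.2):
    for _ in range(steps):
        if random.random() < eps:
            a = random.randrange(N_ACTIONS)                       # explore
        else:
            a = max(range(N_ACTIONS), key=lambda i: shared_q[i])  # exploit
        r = true_reward(a)
        # Asynchronous update straight into the shared table --
        # no lock, no replay buffer of past experience.
        shared_q[a] += ALPHA * (r - shared_q[a])

threads = [threading.Thread(target=worker, args=(2000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

best = max(range(N_ACTIONS), key=lambda i: shared_q[i])
print(best)  # the workers converge on action 2
```

Because every worker writes into the same table as it goes, each interaction is used once and discarded, which is the memory saving the paper alludes to; the parallel workers also decorrelate the experience stream, the role replay served in the earlier Atari system.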

This approach requires much less computational might. While the previous system required eight days of training on high-end GPUs to play Atari games, the new AI achieved better performance on more modest CPUs in just four days. With Atari well and truly beaten, the team moved on to other games. In a simple 3D racing game (see video below) it achieved 90 per cent of a human tester’s score.

The AI’s greatest challenge came from a 3D maze game called Labyrinth, a test bed for DeepMind’s tech that resembles Doom without the shooting (see video at top). The system is rewarded for finding apples and portals, the latter of which teleport it elsewhere in the maze, and has to score as high as possible in 60 seconds.

“This task is much more challenging than [the driving game] because the agent is faced with a new maze in each episode and must learn a general strategy for exploring mazes,” write the team. It succeeded, learning a “reasonable strategy for exploring random 3D mazes using only a visual input”.

Source:

http://arxiv.org/abs/1602.01783

Magic Leap’s CEO, who just raised $793 million, is getting ready to mass produce his hallucinogenic technology

It’s official: The secretive “cinematic reality” startup Magic Leap has raised $793.5 million in Series C funding at an astounding $4.5 billion post-money valuation.

And according to Magic Leap CEO Rony Abovitz, that means the company’s mysterious “mixed reality lightfield” technology — which has been described by some as a combination of virtual reality and an acid trip — is getting closer to launching.

Abovitz tells Business Insider that the roughly 500-person team needed the hefty cash infusion to help it accelerate the manufacturing and launch phase of its product.

“We’re now setting up a production line to mass fabricate,” he said. “We’re sort of past the ‘sciencing the heck out of it’ phase and now getting to this pilot production level of it.”

When exactly the technology will become available to the public is still undetermined. Abovitz says he doesn’t want to put a date on it yet.

What is it?

So, what exactly is Magic Leap making?

Unlike virtual reality products such as Facebook’s forthcoming Oculus Rift, the company isn’t creating a sealed-off 3D world inside a headset that the consumer must wear.

Abovitz also describes it as “very different” from augmented reality devices such as Google Glass or Microsoft’s HoloLens, in which digital images are overlaid on real-world scenery. Microsoft hasn’t revealed exactly how its holographic goggles will work either, but Magic Leap describes its technology as projecting digital light fields onto your retina, helping your eyes and brain see things that look like they’re part of the real world.

Source:

http://www.businessinsider.com/magic-leap-rony-abovitz-interview-on-793-million-fundraise-at-45-billion-valuation-2016-2

Scientists Just Read Someone’s Brain Signals And Decoded What That Person Was Perceiving

Neuroscientists have developed a new technique that enables them to decode what people are perceiving just by looking at a readout of their brain signals. This ability to spontaneously decipher human consciousness in real-time could have wide-ranging implications, potentially leading to novel treatments for brain injuries or helping people with locked-in syndrome to communicate.

The researchers collaborated with seven epilepsy patients at a hospital in Seattle, each of whom had grids of electrodes known as electrocorticographic (ECoG) arrays implanted in their brains. These targeted the temporal and occipital lobes of the brain’s cortex, which are concerned with hearing and vision, respectively.

Patients were each shown a series of grayscale images of faces and houses, which flashed up on a screen in a random order for 400 milliseconds each. Using a novel framework for interpreting subjects’ brain activity data, the researchers were able to tell exactly when each patient had seen an image, and what that image contained. A report of this process has been published in the journal PLOS Computational Biology.

Lead researcher Kai Miller told IFLScience that “there have been other studies where scientists have been able to tell when a patient is looking at one type of an image or another, but the timing of this stimulus had always been known ahead of time.

“However, we were able to decode spontaneously from the signal, so we were able to look at the brain signal and say at this point in time they saw this particular type of image.” To achieve this, the team focused on two types of brain signals: event-related potentials (ERPs) and broadband.

ERPs are electrical signals emitted by neurons in each individual region of the brain. Deflections in these signals indicate that some sort of stimulation has occurred, and can therefore be used to accurately predict the timing of this stimulus. However, Miller explains that “these deflections have different shapes in different brain regions, so it’s hard to know what aspect of these signals is the most important [when attempting to decipher the nature of the stimulus].”

Broadband signals, however, provide a better indication of the average electrical output across the brain, and were found by the researchers to provide a better indication of what type of stimulus had occurred. Therefore, by using ERPs to determine the timing and broadband to determine the nature of the image that had been shown, the team was able to predict exactly what each subject had seen, and when, with greater than 95 percent accuracy.
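The division of labour described above can be sketched in miniature. The sketch below uses entirely synthetic data (the signal shapes, thresholds, and stimulus onset are invented for illustration, not taken from the study): deflections in an ERP-like trace mark when a stimulus occurred, and average broadband power at that moment classifies what it was.

```python
import random

random.seed(0)

def make_trial(kind):
    """Simulate 100 samples of two signals for one stimulus at t=40.
    'face' trials carry higher broadband power than 'house' trials."""
    erp = [random.gauss(0, 0.1) for _ in range(100)]
    broadband = [random.gauss(0, 0.1) for _ in range(100)]
    power = 2.0 if kind == "face" else 1.0
    for t in range(40, 60):
        erp[t] += 1.0           # the ERP deflection marks the timing
        broadband[t] += power   # broadband power encodes the identity
    return erp, broadband

def decode(erp, broadband):
    # Stage 1: timing, from the first large ERP deflection.
    onset = next(t for t, v in enumerate(erp) if v > 0.5)
    # Stage 2: identity, from mean broadband power after the onset.
    mean_power = sum(broadband[onset:onset + 20]) / 20
    return onset, ("face" if mean_power > 1.5 else "house")

correct = 0
for kind in ["face", "house"] * 50:
    onset, guess = decode(*make_trial(kind))
    correct += (guess == kind and abs(onset - 40) <= 2)
print(correct / 100)
```

On this clean synthetic data the two-stage decoder recovers both the timing and the category of essentially every trial; real cortical recordings are far noisier, which is why the study’s greater-than-95-percent figure is notable.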

“The breakthrough was that I was able to take different aspects of the signals that we measured and put them together in a novel way, both to show that the different aspects of the signal carry different types of information, and to read these signals to continuously decode what was going on,” said Miller.

By developing this technique further, he believes it may one day be possible to retrain brain circuits in those who have suffered neurological damage as a result of injuries or strokes. In such cases, different brain regions may have lost the ability to communicate with one another, but Miller hopes that by reading the signals originating in one area of the brain and then artificially stimulating the region for which this information was intended, brain functionality could be restored.

“You could also imagine this being used for people who are locked in, meaning they can see but that’s about it,” he says. For instance, by decoding what they are experiencing, it may be possible to improve their prospects of communicating with others.

Source:

http://www.iflscience.com/sites/www.iflscience.com/files/styles/ifls_large/public/blog/%5Bnid%5D/brainwaves.jpg?itok=OTwZPkIO

Gogoro: More than the Tesla of scooters

The Gogoro Smartscooter is refined, stylish and edgy, like the love child of an iPhone and a Vespa with the detail-oriented hallmarks of a European luxury car. A vehicle for the digital age, the Smartscooter is easily customized through your smartphone — from the color of your dash panel to how much power the engine delivers.

It’s these little details that get Horace Luke the most excited. Co-founder and CEO of Gogoro, Luke played a pivotal role in Gogoro’s July launch of the futuristic Smartscooter.

“On motorcycles, fuses are so randomly placed. I can’t tolerate that. So for us, all the fuses are all together,” he says.

Designed and manufactured entirely in-house by Gogoro in Taipei, Taiwan, every detail aims to deliver a particular, exacting experience. The effect invites comparisons to Elon Musk’s Tesla, which has made electric cars as desirable as any luxury car. But the Smartscooter packs hidden power that could surpass even Tesla and may shift thinking on how electric vehicles should fit into high-density urban environments. Gogoro isn’t just selling scooters; it’s selling a swappable battery service.

This is a very new concept, and not even industry observers are sure it will work. Ryan Citron, research analyst from Navigant Research, thinks Gogoro is starting in the right part of the world to make this work — “all the highest sales and highest growth rates [for electric scooters] are expected in that region” — but he notes car battery swap startup Better Place failed miserably.

“This is quite different however. There are much lower costs for electric scooter batteries versus a car battery,” says Citron. “Is it going to be like an iPhone or an iPod kind of reaction there? A lot of companies think that with their products. Which ones can actually do that? I’m not sure.”

Source:

http://www.cnet.com/roadshow/news/gogoro-scooters/

Physicists might have just solved one of the big problems with light-based computers

A team of Russian physicists has figured out how to keep a key component in light-based computers from overheating, which means one of the biggest obstacles standing between us and processing data at the speed of light might have just been overcome.

The simple act of replacing electrons with light particles (photons) in our microprocessors would not only result in computers that run tens of thousands of times faster, it would also solve a very big problem that affects us all – we’ve just about hit the limit for how fast electrons can travel between the processor and the memory.

Known as the von Neumann bottleneck, this problem means there’s no point developing faster processors for electron-based computer systems if we’ve already hit the limit for how fast information can be transported to and from the memory. We need to completely rethink the system, and that’s where quantum computers (which replace bits with qubits) and light-based computers (which replace electrons with photons) come in.

While the idea of replacing electrons with photons sounds pretty simple, actually making it happen is anything but. As we explained back in September, while running current computers on light instead of electricity would effectively speed up the rate at which we could transmit data, silicon chips still require the photons to be converted back to electrons in order to be processed.

This means everything would be slowed back down again, and the system would consume a whole lot of extra energy during the conversion process, which makes it even less efficient than if we’d just used electrons in the first place.

So we need to rebuild our computers from the ground up to handle photons, that much is clear, and the likes of IBM, Intel, HP, and the US Department of Defense are currently investing billions of dollars into developing the ‘optoelectronic chips’ required. These chips compute electronically, but use light to move information.

If you’ve ever seen a microchip up close, you’ll know they’re composed of all kinds of tightly wound channels along which the electrons travel. The problem with building a photon-compatible version of this is that it’s extremely difficult to get light to travel around bends. The answer? Plasmonic components, “which take advantage of the unique oscillating interactions of photons and electrons on the surface of metal”, Patrick Tucker explains over at Defense One.

Sounds good, right? But once again, it’s not that simple. The wavelength of light is approximately 1 micrometre (1,000 nanometres), but we’re close to making transistors as small as 10 nanometres. So we have two options: transmit lightwaves ‘as is’ and destroy any efficiency gains by having enormous components, or confine the light into nanoscale surface waves known as surface plasmon polaritons (SPPs).

We can do all of this already, but in the process, the plasmonic components will experience temperature increases of around 100 Kelvin, and basically fizzle out and die. And keeping them cool isn’t as easy as simply running a fan over them. “You need a cooling system that works on the scale of the photonic chip’s key features, less than a billionth of a metre in size,” says Tucker. “It’s one reason why many don’t consider fully light-based transistors a practical possibility for decades.”

In the words of George Costanza himself, “Why must there always be a problem?”

But for the first time, researchers from the Moscow Institute of Physics and Technology say they’ve come up with a solution. The heat is generated when the SPPs are absorbed by the metal in the components, so the Russian researchers have inserted what they call ‘high-performance thermal interfaces’ into the components to keep that heat from building up.

These interfaces are basically just layers of thermally conductive materials placed between the chip and a conventional cooling system to ensure efficient heat removal from the chip, the team explains in the journal ACS Photonics. They say this method can keep temperature increases to within 10 degrees Celsius.

It’s now up to the researchers to demonstrate this working within a more complete computer system, and they’ve got their work cut out for them. Late last year, UK-based researchers made their own significant advances towards light-based computer technology, so it’s ‘game on’ for everybody involved.

Source:

http://www.sciencealert.com/physicists-might-have-just-solved-one-of-the-big-problems-with-light-based-computers

Project Skybender: Google’s secretive 5G internet drone tests revealed

Trials at New Mexico’s Spaceport Authority are using new millimetre wave technology to deliver data from drones – potentially 40 times faster than 4G.

Google is testing solar-powered drones at Spaceport America in New Mexico to explore ways to deliver high-speed internet from the air, the Guardian has learned.

In a secretive project codenamed SkyBender, the technology giant built several prototype transceivers at the isolated spaceport last summer.

SkyBender is using drones to experiment with millimetre-wave radio transmissions, one of the technologies that could underpin next-generation 5G wireless internet access. High-frequency millimetre waves can theoretically transmit gigabits of data every second, up to 40 times more than today’s 4G LTE systems. Google ultimately envisages thousands of high-altitude “self-flying aircraft” delivering internet access around the world.

“The huge advantage of millimetre wave is access to new spectrum because the existing cellphone spectrum is overcrowded. It’s packed and there’s nowhere else to go,” says Jacques Rudell, a professor of electrical engineering at the University of Washington in Seattle and specialist in this technology.

However, millimetre wave transmissions have a much shorter range than mobile phone signals. A broadcast at 28GHz, the frequency Google is testing at Spaceport America, would fade out in around a tenth the distance of a 4G phone signal. To get millimetre wave working from a high-flying drone, Google needs to experiment with focused transmissions from a so-called phased array. “This is very difficult, very complex and burns a lot of power,” Rudell says.
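The range penalty follows directly from the standard free-space path loss formula, which grows with the product of distance and frequency. A back-of-the-envelope check (free-space model only; the assumed 2 GHz for 4G and 10 km cell reach are illustrative numbers, and real 28 GHz links also suffer blockage and atmospheric losses):

```python
import math

def fspl_db(distance_km, freq_ghz):
    """Free-space path loss in dB: 20*log10(d_km) + 20*log10(f_GHz) + 92.45."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

# FSPL depends only on the product distance x frequency, so for the same
# transmit power and loss budget, range scales as 1/frequency:
range_4g_km = 10.0                              # assumed ~2 GHz 4G cell reach
range_28ghz_km = range_4g_km * 2.0 / 28.0       # same loss budget at 28 GHz
print(round(range_28ghz_km, 2))                 # -> 0.71 km
```

Free space alone cuts the range by a factor of 14, the same ballpark as the article’s “around a tenth”, which is why Google needs tightly focused phased-array transmissions to close the link from a high-flying drone.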

Google is not the first organisation to work with drones and millimetre wave technology. In 2014, Darpa, the research arm of the US military, announced a program called Mobile Hotspots to make a fleet of drones that could provide one gigabit per second communications for troops operating in remote areas.

Source:

http://www.theguardian.com/technology/2016/jan/29/project-skybender-google-drone-tests-internet-spaceport-virgin-galactic?CMP=twt_a-technology_b-gdntech