Reality Editor is the Internet of Things Without the Privacy Sacrifice


Researchers from MIT Media Lab’s Fluid Interfaces group have created an app called Reality Editor, which allows users to control real-world objects through virtual interfaces while keeping their data private. The app is a direct response to companies retaining control over data gathered through the Internet of Things, the network of connected, user-controlled physical objects.

Research leader Valentin Heun proposes that instead of the “Internet of Things,” users should have what he calls “Connected Objects” that are controlled using the Reality Editor app. The app communicates with these objects through existing networking technologies and is used by pointing a device’s camera at the object in order to access its programmed capabilities, which will be available for editing.

These Connected Objects then send their “FingerPrints” and network IP addresses to the Reality Editor app. From there, users can drag virtual lines between objects to wire them into functional systems. These connections live on decentralized private networks and do not depend on a central entity or cloud service to function. All interfaces, data, connections, and functionality are stored within the Connected Object itself, keeping that data safe and secure.
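The handshake described above can be sketched as a simple broadcast message. The field names below are purely hypothetical, since the article does not document the actual Reality Editor protocol; this is only meant to illustrate the "object announces itself, app connects directly" idea.

```python
import json

# Hypothetical sketch: the real Reality Editor message format is not
# described in the article, so all field names here are illustrative.
def make_beacon(object_id, fingerprint, ip, capabilities):
    """Announcement a Connected Object might broadcast on the local network."""
    return json.dumps({
        "id": object_id,
        "fingerprint": fingerprint,    # identifies the object to the app's camera
        "ip": ip,                      # lets the app talk to the object directly
        "capabilities": capabilities,  # the I/O points a user can wire together
    })

beacon = make_beacon("lamp-42", "a1b2c3", "192.168.1.20", ["power", "brightness"])
print(json.loads(beacon)["capabilities"])
```

Because the app then connects straight to the announced IP, the control traffic never needs to leave the local network, which is the privacy point the article makes.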



Through all the different virtual-to-real connections, the data produced would be retained by the user and remain private, shared only when absolutely necessary, such as when connecting to third-party networks.

In an interview with The Creators Project, Heun says, “If I switch a light switch on at home that light switch communicates directly with the light. There is no need to send this action all around the world and then back into my home. Data connections should always take the most direct route possible, reflecting a user’s privacy interests.”

He adds, “What we consider virtual or digital is defined by the paradigms of personal computing. The Reality Editor is a first small step toward creating a new set of paradigms.”

An avid biker and an engineer join hands to build an IoT device that ensures road safety

According to Jayanth Jagadeesh, VP BD and Marketing, eLsys Intelligent Devices Pvt Ltd, India is among the nations that have the highest number of road accidents in the world: one person dies every four minutes. “Prime Minister Narendra Modi, in his radio address to the nation, too, has expressed the strong need to build a national emergency system/framework to manage, analyse and avert road emergencies,” he says.

Jayanth, therefore, calls it a wonderful coincidence that his company has been working on solving the same problem for the last year-and-a-half. “Our vision is to revolutionise how Indians call for help, and how India responds to road emergencies,” he says.

Bringing road safety through IoT

Through its product Raksha SafeDrive, eLsys aims to leverage the power of IoT (Internet of Things) devices, the telecom revolution, and cloud technologies to create an integrated road-accident management and analysis platform. The device is capable of automatic crash detection, two-way call connectivity, GPS tracking, engine health monitoring, and a smart panic button.

Genesis and foundation of the core team

The idea to leverage technology to avert and manage road accidents better came to Prasad Pillai in 2013, after narrowly averting an accident himself. “Most drivers on Indian roads experience a close shave every week. We thank our stars, curse the other commuter and move on. It is important that our accident preparedness and management is not so unorganised. Our passion is to apply technology in making roads safer and drivers responsible,” says Prasad.


Jayanth and Prasad

Jayanth, on the other hand, is an avid biker, and has even completed a 5,000-km solo motorcycle road trip from Kashmir to Kanyakumari. The duo met through a common friend and their passion for road safety got them to work as a team. “Travel and exploration is meant to be fun. Road trips are supposed to excite people and make them come alive. But most people do not dare to explore. Raksha SafeDrive answers most of the ‘what-if’ questions,” he adds.

What does the product do?

Jayanth says Raksha SafeDrive is capable of automatically detecting an accident and proactively calling for emergency care assistance. The team claims that it has leveraged multiple technologies to devise an intelligent road accident management platform that can detect, alert, notify and perhaps even predict driver behaviour that may lead to an accident.

Raksha SafeDrive follows a subscription revenue model: a one-time device cost plus a monthly or yearly fee for continuous accident monitoring and human assistance for emergencies, roadside assistance, and parking-location retrieval.

Challenges and future plans

Jayanth says that Raksha SafeDrive is a complex electronics product complemented by IoT, telecom, and cloud technologies. Unlike with a software product, the successive iterations of designing, building, and testing a stable, sturdy hardware product are both time- and resource-intensive. The team invested two years of research and development to arrive at the product.

“Currently, the company is sustaining its operations from the founders’ capital investment. We are exploring the possibility of angel funding to accelerate our go-to-market plans,” says Jayanth.

The team would like to build an effective and technology-assisted accident management and analysis system in India. It has also initiated a ‘Road Safety Consortium’, a platform for organisations that care about making roads safer and minimising accidents in India. It is reaching out to car manufacturers, emergency care providers, roadside assistance providers, NHAI (National Highways Authority of India) and other government and NGO entities to join hands in making the roads safe.


How Neuromorphic Image Sensors Steal Tricks From the Human Eye

When Eadweard Muybridge set up his cameras at Leland Stanford’s Palo Alto horse farm in 1878, he could scarcely have imagined the revolution he was about to spark. Muybridge rigged a dozen or more separate cameras using trip wires so that they triggered in a rapid-fire sequence that would record one of Stanford’s thoroughbreds at speed. The photographic results ended a debate among racing enthusiasts, establishing that a galloping horse briefly has all four legs off the ground—although it happens so fast it’s impossible for anyone to see. More important, Muybridge soon figured out how to replay copies of the images he took of animal gaits in a way that made his subjects appear to move.

Generations of film and video cameras, including today’s best imaging systems, can trace their lineage back to Muybridge’s boxy cameras. Of course, modern equipment uses solid-state detectors instead of glass plates, and the number of frames that can be taken each second is vastly greater. But the basic strategy is identical: You capture a sequence of still images, which when played back rapidly gives the viewer the illusion of motion.

If the images are to be analyzed by a computer rather than viewed, there’s no need to worry about whether the illusion is a good one, but you might still need to record lots of frames each second to track the action properly.

Actually, even with a high frame rate, your equipment may not be up to the task: Whatever you are trying to analyze could be changing too quickly. What then do you do? Many engineers would answer that question by looking for ways to boost the video frame rate using electronics with higher throughput. We argue that you’d be better off reconsidering the whole problem and designing your video equipment so it works less like Muybridge’s cameras and instead functions more like his eyes.

The general strategy of creating electronic signal-processing systems inspired by biological ones is called neuromorphic engineering. For decades, this endeavor has been an exercise in pure research, but over the past 10 years or so, we and other investigators have been pursuing this approach to build practical vision systems. To understand how an artificial eye of the kind we’ve been investigating can outperform even a high-speed video camera, let us first disabuse you of the idea that the way modern video gear operates is sensible.

Imagine for a moment that you’re trying to analyze something that happens really fast, say, a pitcher throwing a baseball. If you try to use a conventional video camera, which records at something like 30 or perhaps even 60 frames per second, you’ll miss most of the movement of the pitcher’s arm as he whips the ball toward the plate. Perhaps some frames will catch his arm in different positions. But you’ll capture relatively little information of interest, along with much redundant imagery of the pitcher’s mound, the infield turf, and other unchanging parts of the background. That is, the scene you record will be under- and oversampled at the same time!

There’s no way to avoid that problem given that all parts of the image sensor in your camera share a common timing source. While this weakness won’t be a problem for a casual viewer, if you wanted a computer to analyze nuances of the pitcher’s arm motion, your data would be woefully inadequate. In some cases, sophisticated postprocessing might let you derive the results you wanted. But this brute-force approach would fail you in environments with limited power, bandwidth, and computing resources such as on mobile devices, multicopter drones, or other kinds of small robots.

The machine-vision community has been stuck with this basic problem for decades. But the situation may soon be changing for the better as we and other researchers develop equipment that samples different parts of the scene at different rates, mimicking how the eye works. With such gear, those parts of the scene that contain fast motions are sampled rapidly, while slow-changing portions are sampled at lower rates, going all the way down to zero if nothing changes.

Getting video cameras to work this way is tricky, because you don’t know beforehand which parts of the scene will change and how rapidly they will do so. But as we describe below, the human eye and brain deal with this problem all the time. And the rewards of copying how they work would be enormous. Not only would it make fast-changing subjects—explosions, insects in flight, shattering glass—more amenable to analysis, it would also allow the video cameras on smartphones and other battery-operated devices to record ordinary motions using much less power.

Engineers often liken the eye to a video camera. There are some similarities to be sure, but in truth the eye is a much more complicated creation. In particular, people’s retinas don’t just turn light into electrical signals: They process the output of the eye’s photoreceptor cells in sophisticated ways, capturing the stuff of interest—spatial and temporal changes—and sending that information to the brain in an amazingly efficient manner.

Knowing how well this approach works for eyes, we and others are studying machine-vision systems in which each pixel adjusts its own sampling in response to changes in the amount of incident light it receives. What’s needed to implement this scheme is electronic circuitry that can track the amplitudes of each pixel continuously and record changes of only those pixels that shift in light level by some very small prescribed amount.

This approach is called level-crossing sampling. In the past, some people have explored using it for audio signals—for example, to cut down on the amount of data you’d have to record with the usual constant-rate sampling. And academic researchers have been building electronic analogues of the retina in silicon for research purposes since the late 1980s. But only in the past decade have engineers attempted to apply level-crossing sampling to the practical real-time acquisition of images.

Inspired by the biology of the eye and brain, we began developing imagers containing arrays of independently operating pixel sensors in the early 2000s. In our more recent cameras, each pixel is attached to a level-crossing detector and a separate exposure-measurement circuit. For each individual pixel, the electronics detect when the amplitude of that pixel’s signal reaches a previously established threshold above or below the last-recorded signal level, at which point the new level is then recorded. In this way every pixel optimizes its own sampling depending on the changes in the light it takes in.

With this arrangement, if the amount of light reaching a given pixel changes quickly, that pixel is sampled frequently. If nothing changes, the pixel stops acquiring what would just prove to be redundant information and goes idle until things start to happen again in its tiny field of view. The electronic circuitry associated with that pixel outputs a new measurement just as soon as a change is detected, and it also keeps track of the position in the sensor array of the pixel experiencing that change. These outputs, or “events,” are encoded according to a protocol called Address Event Representation, which came out of Carver Mead’s lab at Caltech in the early 1990s. The train of events such a vision sensor outputs thus resembles the train of spikes you see when you measure signals traveling along a nerve.

The key is that the visual information is not acquired or recorded as the usual series of complete frames separated by milliseconds. Rather, it’s generated at a much higher rate—but only from parts of the image where there are new readings. As a result, just the information that is relevant is acquired, transmitted, stored, and eventually processed by machine-vision algorithms.

We designed the level-crossing and recording circuits in our camera to react with blazing speed. With our equipment, data acquisition and readout times of a few tens of nanoseconds are possible in brightly lit scenes. For standard room-light levels, acquisition and readout require a few tens of microseconds. These rates are beyond all but the most sophisticated high-speed video cameras available today, cameras costing hundreds of thousands of dollars. And even if you could afford such a camera, it would deluge you with mostly worthless information. Sampling different pixels at different rates, on the other hand, reduces not just equipment cost but also power consumption, transmission bandwidth, and memory requirements—advantages that extend well beyond the acquisition stage. But you’ll squander those benefits if all you do is reconstruct a series of ordinary video frames from the data so that you can apply conventional image-processing algorithms.

To fully unlock the potential of eyelike vision sensors, you need to abandon the whole notion of a video frame. That can be a little hard to get your head around, but as soon as you do that, you become liberated, and the subsequent processing you do to the data can resolve things that you could otherwise easily miss—including the detailed arm motions of our hypothetical baseball pitcher.

To do this, though, you’ll have to rethink how you process the data, and you’ll probably have to write new code instead of using a standard video-analysis library. But the mathematical formulations appropriate for this new kind of video camera are simple and elegant, and they yield some very efficient algorithms. Indeed, in applying such algorithms to the output of our autosampling vision sensors, we were able to show that certain real-time vision tasks could be run at a rate of tens to even hundreds of kilohertz, whereas conventional frame-based video-analysis techniques applied to the same situation topped out at a painfully slow 60 hertz.

Another advantage of analyzing the nearly continuous data streams from our eyelike sensors instead of a series of conventional video frames is that we can make good use of signal timing, just as biological neurons do. This is perhaps best explained with a specific example.

Suppose you wanted to design a mobile robot that uses a machine-vision system to navigate its environment. Clearly, having a 3-D map of the things around it would be helpful. So you’d no doubt outfit the robot with two somewhat separated cameras so that it had stereo vision. That much is simple enough. But now you have to program its robotic brain to analyze the data it receives from its cameras and turn that into a representation of 3-D space.

If both cameras record something distinct—let’s say it’s a person stepping in front of the robot—it’s easy enough to work out how far away the person is. But suppose two different people enter the robot’s field of view at the same time. Or six people. Working out which one is which in the two camera views now gets more challenging. And without being able to ascertain identities for certain, the robot will not be able to determine the 3-D position of each one of these human obstacles.

With vision sensors of the type we’ve been studying, such matching operations become simpler: You just need to look for coincidences in the readings from the two cameras. If pixels from separate cameras register changes at the very same instant, they are almost certainly observing the same event. Applying some standard geometrical tests to the observed coincidences can further nail down the match.
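The coincidence test can be sketched as a simple timestamp matcher. This greedy version is purely illustrative and omits the geometrical checks mentioned above, which a real system would apply to confirm each candidate match.

```python
# Pair events from two event cameras by temporal coincidence.  Each event
# is (timestamp_seconds, pixel_xy); events firing within `window` of each
# other almost certainly saw the same change in the scene.
def match_stereo_events(left, right, window=1e-4):
    matches = []
    j = 0
    for t_l, px_l in left:
        # skip right-camera events too old to coincide with this one
        while j < len(right) and right[j][0] < t_l - window:
            j += 1
        if j < len(right) and abs(right[j][0] - t_l) <= window:
            matches.append((px_l, right[j][1]))
            j += 1
    return matches

left  = [(0.0100, (5, 3)), (0.0300, (9, 2))]
right = [(0.01004, (7, 3)), (0.0500, (1, 1))]
print(match_stereo_events(left, right))
```

With frame-based cameras this matching problem requires comparing image patches across whole frames; with event timing it collapses to a sort-and-scan over two event streams.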

Tobi Delbrück and others at ETH Zurich demonstrated the power of this approach in 2007 by building a small-scale robotic soccer goalie using an eyelike sensor that was broadly similar to ours. It had a reaction time under 3 milliseconds. (Peter Schmeichel, eat your heart out.) Were you to try to achieve that speed using a conventional video camera, you’d need to find one that could record some hundreds of frames per second, and the computational burden would be enormous. But with Delbrück’s neuromorphic Dynamic Vision Sensor, the computer running his soccer goalie was loping along at a mere 4 percent CPU load.

Compared with standard video techniques, neuromorphic vision sensors offer increased speed, greater dynamic range, and savings in computational cost. As a result, demanding machine-vision tasks—such as mapping the environment in 3-D, tracking multiple objects, or responding quickly to perceived actions—can run at kilohertz rates on cheap battery-powered hardware. So this kind of equipment would allow for “always-on” visual input on smart mobile devices, which is currently impossible because of the amount of power such computationally intense tasks consume.

Another natural application of neuromorphic vision sensors is in electronic retinal implants for restoring sight to those whose vision has been lost to disease. Indeed, two of us (Posch and Benosman) helped to found Pixium Vision, a French company that has developed a neuromorphic retinal implant, which is now undergoing clinical trials. Unlike competing implants under development, which are frame based, Pixium’s products use event-based sampling to provide patients with visual stimulation. Right now, these implants are able to give patients only a general ability to perceive light and shapes. But the technology should improve swiftly over the next few years and perhaps one day will be able to offer people who have lost their natural vision the ability to recognize faces—all thanks to artificial retinas inspired by real ones.

You can expect eyelike vision sensors to evolve from the pioneering designs available today into forms that eventually play a big role in medical technology, robotics, and more. Indeed, it wouldn’t surprise us if they proved just as seminal as Muybridge’s wooden cameras.

Will the Internet of Things Speak a Language of Light?

We’ve gotten used to navigator apps telling us what to do when we’re in the car, but as the Internet of Things moves into more and more of our personal spaces, it’s clear that a bunch of objects all over our homes and workplaces babbling away at us isn’t going to work.

Two IoT companies graduating from the Highway 1 accelerator last week are solving that problem by turning to a language of light—one they say we already speak, having been conditioned to associate red with stop, green with go, and yellow with caution. Both intend to use smart objects and light signals to change behavior.

For Nexi, the behavior at issue is energy use. Intended for homes already wired with smart meters, the $99 gadget tracks household energy use both in the moment and over the past eight hours; an inner ring (average all-day energy use) and an outer ring (current and historical use) show how you’re doing compared with thresholds you set. You can get more data, or adjust the thresholds, using a smartphone app.
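The ring logic might look something like the sketch below. The thresholds and function names are hypothetical, since the article does not document Nexi's firmware; the point is just the stop-light vocabulary both companies rely on.

```python
# Hypothetical sketch of threshold-to-color logic like Nexi's display;
# actual thresholds and behavior are not documented in the article.
def usage_color(watts, green_max, yellow_max):
    """Map instantaneous household draw to the stop-light vocabulary:
    green = fine, yellow = caution, red = over the limit."""
    if watts <= green_max:
        return "green"
    if watts <= yellow_max:
        return "yellow"
    return "red"

print(usage_color(450, green_max=500, yellow_max=900))   # comfortably under
print(usage_color(1200, green_max=500, yellow_max=900))  # past both limits
```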

“Kids get it immediately, they run around turning everything off,” says CEO Kimberli Hudson. This might have helped train my kids, when they were younger, not to turn on and then leave on every light in the house. That’s a habit that now, unfortunately, is going to be harder to break, even with colored lights. I’d also like someone to figure out how to train teens not to use their bedroom floors for clothing storage, but nobody has that app coming to market yet.

Another Highway 1 company, Moti, can’t tell me how to break bad habits; instead, it is trying to use colored light to nurture positive ones. Its IoT gizmo provides two of the three things needed to create a habit, the trigger and the reward; the third, the routine, is up to the user.

The trigger, company CEO Kayla Matheus says, is simply seeing the gadget sitting on your desktop or countertop—it reminds you that there’s something you’re trying to do, like drink more water or exercise more. When you do whatever you’re trying to do more of, you tap a button and get a little happy light show; that’s the reward. When you haven’t done the activity in a while, your gadget starts looking depressed—it’s kind of a guilt-inducing Tamagotchi. Moti provides “a deeper type of accountability than a phone app or wristband,” says Matheus. “You are impressing someone, even though it isn’t a someone.” I do get the appeal, but at $79, it seems a little pricey for what it does.

Long-Distance Teleportation and Quantum Entanglement With Twisted Photons

During the past three decades, the theory of quantum communication and computing has progressed with the addition of new protocols and algorithms. However, implementing these theories in order to design a future quantum Internet is a continuing challenge because actually building the technology required for processing quantum information, such as the still elusive quantum repeater, has proven extremely difficult.

Anton Zeilinger, a researcher at the University of Vienna, is one of the pioneers in quantum communication; his group in Austria realized the first teleportation of photons in 1997. On Monday last week, Zeilinger and his team published two papers in the Proceedings of the National Academy of Sciences (PNAS) that report a breakthrough in the teleportation of entanglement. They generated entanglement between independent qubits over a record distance of 143 kilometers, linking the Canary Islands of La Palma and Tenerife. They also achieved entanglement of twisted photons across a distance of 3 km.

For the teleportation of entanglement, also known as entanglement swapping, the researchers made use of a curious phenomenon. It’s possible to entangle two photons by performing a joint measurement on them, known as a Bell-state measurement. Once the photons are linked, switching the polarization of one of them, for example from up to down, switches the other photon’s polarization from down to up. Assume you have two pairs of entangled photons, “0” and “1” in the receiving station and “2” and “3” in the transmitting station. Both entangled pairs are completely unaware of each other; in other words, no physical link exists. Now, assume you send photon 3 from the transmitter to the receiver, and perform a Bell-state measurement simultaneously on photon 3 and on photon 1. As a result, 3 and 1 become entangled. But surprisingly, photon 2, which stayed home, is now also entangled with photon 0, at the receiver. The entanglement between the two pairs has been swapped, and a quantum communication channel has been established between photons 0 and 2, although they’ve never been formally introduced.
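The swap can be checked numerically in a few lines. This sketch works with idealized state vectors and considers only one of the four possible Bell-measurement outcomes (the others differ by a correctable local rotation), so it is a minimal illustration of the math, not of the experiment.

```python
import numpy as np

# Two independent Bell pairs on photons (0,1) and (2,3); a Bell-state
# measurement on photons 1 and 3 leaves 0 and 2 entangled, although the
# two pairs never interacted.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)       # |Phi+> = (|00> + |11>)/sqrt(2)

state = np.kron(bell, bell).reshape(2, 2, 2, 2)  # axes = photons 0, 1, 2, 3
proj = bell.reshape(2, 2)                        # <Phi+| applied to photons 1 and 3

# Project photons 1 and 3 onto |Phi+> (one possible measurement outcome),
# then renormalize what remains on photons 0 and 2.
remaining = np.einsum('abcd,bd->ac', state, proj.conj())
remaining /= np.linalg.norm(remaining)

print(np.round(remaining.reshape(4), 3))         # photons 0 and 2 now form |Phi+>
```

The printed state on photons 0 and 2 is exactly the Bell state the original pairs started with, which is the "swap" the Vienna team demonstrated over 143 km.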

Entanglement swapping in conjunction with quantum memory will be an important component of future secure quantum links with satellites, says Thomas Scheidl, a member of Zeilinger’s research group.

According to Mario Krenn, a member of Zeilinger’s research group, the team is working with a group at the University of Science and Technology of China on a satellite project. Next year, when the Chinese Academy of Science launches its Quantum Science Satellite (which will have an onboard quantum source), the satellite and ground stations in Europe and China will form the first space-Earth quantum network. Says Krenn: “You would create two quantum channels with the satellite, one linking with Europe and one with China. You can combine the results and obtain 100 percent secure quantum communication.”

Krenn is a coauthor of the second PNAS paper, which describes the entanglement of twisted photons located in buildings 3 km apart. A year ago, we reported in IEEE Spectrum on the Vienna team’s experiment with the transmission of another quantum state of light, orbital angular momentum (OAM), over a similar distance. “Last year was a necessary step, and it was successful,” says Krenn. “And now we were able to show that on the single photon level, each photon can keep information in the form of orbital angular momentum over a large distance, and can be entangled even after three kilometers.”

Photons can only exist in two polarization states or levels, up and down. But the number of orbital angular momentum states is, in theory, unlimited, explains Krenn. “In the lab, we have shown that we can create a 100-dimensional entanglement—up to a hundred different levels of the photons can be entangled.”

To find out whether entanglement with OAM modes can be achieved across a turbulent atmosphere, the researchers created polarized photon pairs in the sender. Both were sent (one with a delay) to the receiver via a 30-meter optical fiber. Before being sent to the receiver, the photon sent without delay had its polarization state transformed into one of two OAM states that corresponded to the original polarization state. By performing separate but simultaneous measurements of the quantum states of both the slightly delayed photon in the sender and the photon detected in the receiver, the researchers found that the two photons were entangled.

“We were sure that entanglement took place,” says Krenn. “The measurements were prepared in such a way that there was no classical [not quantum] bypass of information.” Krenn notes that the measurement results could not influence each other because the stations were too far apart for even a speed-of-light signal to travel between them while the measurements took place.

The control of twisted quantum states is much more complicated than the control of polarization states, but the possibility of being able to entangle photons on multiple levels is worth the effort, says Krenn.

Hadoop on GPU: Boost performance of your big data analytics by 200x

Hadoop, an open source framework that enables distributed computing, has changed the way we deal with big data. Parallel processing with this set of tools can improve performance several times over. The question is, can we make it work even faster? What about offloading calculations from a CPU to a graphics processing unit (GPU) designed to perform complex 3D and mathematical tasks? In theory, if the process is optimized for parallel computing, a GPU could perform calculations 50-100 times faster than a CPU.

The idea itself isn’t new. For years scientific projects have tried to combine the Hadoop or MapReduce approach with the capacities of a GPU. Mars seems to be the first successful MapReduce framework for graphics processors. The project achieved a 1.5x-16x increase in performance when analyzing Web data (search/logs) and processing Web documents (including matrix multiplication).

Following Mars’ groundwork, other scientific institutions developed similar tools to accelerate their data-intensive systems. Use cases included molecular dynamics, mathematical modeling (e.g., the Monte Carlo method), block-based matrix multiplication, financial analysis, image processing, etc.
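The Monte Carlo method mentioned above is a good illustration of why these workloads suit GPUs: every sample is independent, so the whole computation is one data-parallel "map" followed by a small "reduce." The sketch below shows that structure on the CPU, with NumPy's vectorized operations standing in for a GPU kernel or a set of Hadoop mappers.

```python
import numpy as np

# Monte Carlo estimation of pi, written as one data-parallel map over
# independent samples plus a single reduction -- the structure a GPU
# (or a Hadoop mapper pool) can execute in parallel.
def estimate_pi(n_samples, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.random(n_samples)          # each (x, y) pair is an independent task
    y = rng.random(n_samples)
    inside = (x * x + y * y) <= 1.0    # the "map" step: one test per sample
    return 4.0 * inside.mean()         # the "reduce" step: one aggregation

print(estimate_pi(1_000_000))
```

Workloads that fit this shape (no sample depends on another, tiny per-item state) are exactly the ones where the 1.5x-16x Mars-style speedups were reported.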

On top of that, there is BOINC, a fast-evolving, volunteer-driven middleware system for grid computing. Although it does not use Hadoop, BOINC has already become a foundation for accelerating many scientific projects. For instance, GPUGRID relies on BOINC’s GPU and distributed computing to perform molecular simulations that help “to understand the function of proteins in health and disease.” Most other BOINC projects related to medicine, physics, mathematics, biology, etc. could be implemented using Hadoop+GPU, too.

So, the demand for accelerating parallel computing systems with GPUs does exist. Institutions invest in supercomputers with GPUs or develop their own solutions. Hardware vendors, such as Cray, have released machines equipped with GPUs and pre-configured with Hadoop. Amazon has also launched Elastic MapReduce (Amazon EMR), which enables Hadoop on its cloud servers with GPUs.

But does one size fit all? Supercomputers provide the highest possible performance, yet cost millions of dollars. Using Amazon EMR is feasible only for projects that last several months. For larger scientific projects (two to three years), investing in your own hardware may be more cost-effective. And even if you increase the speed of calculations using GPUs within your Hadoop cluster, what about the performance bottlenecks related to data transfer?

Still, it is hard to tell which of these architectures will become mainstream in high-performance and distributed computing. As they evolve — and some of them certainly will — they may change our understanding of how huge arrays of data should be processed.


Hyderabad Is Going To Host India’s First Centre of Excellence In Big Data

Hyderabad will shortly host India’s first centre of excellence in big data and analytics, involving an investment of Rs 10 crore and dedicated to government-sanctioned projects, said Dr K.R. Murali Mohan, advisor and head of big data initiatives at the Department of Science and Technology.


“We are exploring three different models for the Centre of Excellence – one which will work on a pure government model, one set up by an industry association, and the third will be a PPP model. Depending on the success of each model, we will scale up the rest of the facilities,” said Dr Murali Mohan.

He added that while both the state and central governments collect vast amounts of data, the gaps in analysis and in the adoption of new technology will be filled by the Centres of Excellence.


World’s Fastest Quantum Random Number Generator Unveiled in China

Quantum cryptography can only become successful if somebody can generate quantum random numbers at the rate of tens of billions per second. Now Chinese physicists say they’ve done it.

Privacy is one of society’s most valued qualities. The ability to send private messages and to carry out financial transactions without fear of being monitored lies at the heart of many government, military, and commercial activities.

One technology that allows this to be done perfectly is quantum cryptography, but it requires a powerful source of random numbers.

But there’s a problem. Random numbers are surprisingly hard to generate in large quantities. One of the best sources is the quantum world which is fundamentally random. But the best commercially available quantum random number generators produce them only at a rate of a million per second, far short of the many tens of billions per second that many applications require.

Today, that looks to have changed thanks to the work of You-Qi Nie at the Hefei National Laboratory for Physical Sciences in China and a few pals who say they have built a quantum random number generator capable of producing 68 billion of them per second. They say the technique should remove an important barrier preventing governments, the military, and the rest of us from benefiting from perfect security.

Random numbers have to be unpredictable and irreproducible, and this rules out generating them using ordinary algorithmic processes, which tend to be both predictable and reproducible. Computer scientists have long known that programs claiming to produce random numbers usually turn out to do nothing of the sort.
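The reproducibility of algorithmic generators is easy to demonstrate: seed a standard pseudorandom generator twice with the same value and it emits an identical "random" sequence both times. A minimal Python sketch (the seed value 42 is arbitrary):

```python
import random

# Two generators seeded with the same value are fully determined by
# that seed, so their "random" output is identical and reproducible.
gen_a = random.Random(42)
gen_b = random.Random(42)

seq_a = [gen_a.randint(0, 1) for _ in range(20)]
seq_b = [gen_b.randint(0, 1) for _ in range(20)]

print(seq_a == seq_b)  # True: same seed, same bits
```

Anyone who learns (or guesses) the seed can predict every subsequent bit, which is exactly what cryptography cannot tolerate.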

Instead, physicists have turned to natural processes to produce random numbers. For example, turbulence is thought to be entirely random, so measuring the turbulent effects that the atmosphere has on a laser beam is one method of producing random numbers, albeit a rather slow one that could easily be biased by environmental factors.

That’s why physicists prefer to use quantum processes to generate random numbers. These are thought to be random in principle and fundamental in nature, which is important because it means there cannot be some underlying physical process that might introduce predictability.

Physicists have tried lots of ways to produce quantum random numbers. One of the most popular is to send a stream of photons through a beam splitter, which transmits or reflects them with a 50 percent probability. Simply counting the photons that are reflected or transmitted produces a random sequence of 0s and 1s.
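Conceptually, the beam-splitter scheme maps each photon's transmit-or-reflect outcome to one bit. A toy simulation of that mapping, with Python's `secrets` module standing in for the physical 50/50 quantum event (an assumption for illustration only, not a quantum source):

```python
import secrets

def beam_splitter_bits(n_photons: int) -> list:
    """Simulate n photons hitting a 50/50 beam splitter.

    Each photon is transmitted (1) or reflected (0) with equal
    probability; the sequence of outcomes is the random bit stream.
    secrets.randbits(1) stands in for the physical quantum coin flip.
    """
    return [secrets.randbits(1) for _ in range(n_photons)]

bits = beam_splitter_bits(16)
print("".join(str(b) for b in bits))
```

In the real device the rate of this loop is capped by how fast the single-photon detectors can register each arrival, which is the megabit-per-second bottleneck described below.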

That’s exactly how the world’s only commercially available quantum random number generator works. But its speed is limited to about one megabit per second. That’s because single photon detectors cannot count any faster than this.

Recently, physicists have begun to mess about with a new technique. This arises from the two different ways photons are generated inside lasers. The first is stimulated emission, a predictable process producing photons that all have the same phase. The second is spontaneous emission, an entirely random quantum process. These photons are usually treated as noise and are in any case swamped when the laser is operating at full tilt.

However, spontaneous emission becomes significant when the laser operates at its threshold level, before stimulated emission really takes hold. If it is possible to measure these photons, then it may be possible to exploit their random nature.

You-Qi and co have done exactly that. These guys have created a highly sensitive interferometer that converts fluctuations in the phase of photons into intensity changes. That’s important because intensity changes can be easily measured using conventional photodetectors that work at much higher rates than single photon detectors.

That has allowed the team to measure these random changes and digitize them at a rate of 80 Gbps. This data stream then has to be cleaned up in various ways to remove any biases introduced by the measurement process.

But after this, the team is still able to produce random numbers at the rate of 68 Gbps.
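The article doesn't spell out the extraction step (high-speed systems typically use randomness extractors such as Toeplitz hashing), but the simplest illustration of removing bias from raw measurements is von Neumann's classic debiasing trick, sketched here as an assumption-level example rather than the team's actual method:

```python
def von_neumann_debias(bits):
    """Remove bias by examining non-overlapping pairs of bits.

    For each pair: 01 -> 0, 10 -> 1, and 00/11 are discarded.
    If the bits are independent with a fixed bias p, both kept
    outcomes occur with probability p*(1-p), so the output is
    unbiased, at the cost of throwing away most of the raw rate.
    """
    out = []
    for a, b in zip(bits[0::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

# A heavily biased raw stream still yields unbiased output bits:
print(von_neumann_debias([1, 1, 1, 0, 1, 1, 0, 1, 1, 1]))  # [1, 0]
```

The rate loss visible here (ten raw bits in, two clean bits out) mirrors why the team's 80 Gbps of digitized samples shrinks to 68 Gbps of usable random numbers after cleanup.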

There’s no way of guaranteeing that any sequence of numbers is perfectly random but there are a set of standard tests that can spot certain kinds of patterns, if they are present. You-Qi and co say their random number sequences pass all these tests with flying colors.
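Standard batteries of this kind (the NIST SP 800-22 suite is the best known) include checks like the monobit frequency test, which verifies that the proportion of 1s is statistically consistent with one half. A minimal version:

```python
import math

def monobit_test(bits, alpha=0.01):
    """NIST-style monobit frequency test.

    Maps bits to +/-1, sums them, and derives a p-value from the
    complementary error function. A p-value >= alpha means the
    balance of 0s and 1s is consistent with a random sequence.
    """
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    p_value = math.erfc(abs(s) / math.sqrt(2 * n))
    return p_value >= alpha

# A perfectly balanced sequence passes this particular test, though
# an alternating pattern would fail others, such as the runs test.
print(monobit_test([0, 1] * 500))  # True
```

No single test is conclusive, which is why passing the full battery, as You-Qi and co report, is the accepted (if imperfect) evidence of randomness.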

The end result is the fastest quantum random number generator ever produced by some margin.

That’s impressive work that should prepare the ground for some mainstream applications using quantum cryptography. “Our demonstration shows that high-speed quantum random number generators are ready for practical usage,” say You-Qi and co. “Our quantum random number generator could be a practical approach for some specific applications such as QKD systems with a clock rate of over 10 GHz.”

In other words, many organizations that need a practical system that offers secrecy guaranteed by the laws of quantum physics may not have much longer to wait.

Digital Banking I: Continuous Financing Behaviors and Industry 4.0

Twenty years ago, we were still shopping in department stores, driving 20 miles to pick up goods on a weekly or monthly basis. Since the early 21st century, we have enjoyed surfing the internet after work and choosing our favorites from eBay or Amazon on a daily basis.

Today we swipe our smartphones and refresh shopping apps every hour, or even every minute. Our shopping behavior has changed enormously over the last decades, becoming far more frequent and far more digital. A direct result of this shift is that consumers now use digital currencies, digital payments, and online banking services. We are seeing these changes already: more and more banks are giving up physical branches and providing so-called “digital retail banking” services.

So, what is the next big thing? The new “IoT” (Internet of Things) will eventually connect all the physical objects around you to the internet. In the near future, you might be able to Google your clothes, food, or even furniture. When that time comes and the world has enough information and computing power, you will be able to spend and consume not just minute by minute but continuously.

The power of Industry 4.0.

Imagine you are going to a restaurant in the future. You might be able to pay a small fee to choose your favorite seat, either near the window or with a sea view. If you are driving, you can always switch to a faster lane by paying extra. If you are buying a car, you can configure every part of it (color, engine, even iOS or Android compatibility) before it is manufactured, and track production and the expected delivery date with accuracy down to the minute. This is the power of Industry 4.0.

Now you will need banks to provide “continuous” financing for your many small financing needs. Moreover, instead of financing the whole purchase, you will be able to finance your vehicle part by part. You may choose different rates for different behaviors or different parts of your products, as future digital banks might have their own preferences. By then, your bank or Google might know you better than you know yourself.

With enough storage and computing power for such a huge amount of information, the cost of serving retail clients would not be much higher than the cost of serving corporate clients, yet with a much higher margin. The line between retail clients and corporate clients becomes blurred.

Does it sound incredible but a little far off? The truth is, it might not be as far away as you think. Amazon’s fulfillment service lets retail customers deliver their goods to others with corporate-level logistics efficiency. Retailers can track their individual, small quantities of goods in real time, with Amazon responsible for tagging those goods and turning them into the “IoT.” Meanwhile, Google has partnered with Calvin Klein to design fashion based on top search keywords. Another example: Alibaba is working with banks to issue Letters of Credit for its customers’ trade finance business. This is a formal commercial and corporate business, but managed at a much smaller, micro level.

Changing consumer behavior has already turned traditional retail banking into digital banking. The “IoT” and Industry 4.0 will eventually extend these changes to commercial, corporate, and investment banking. These things are happening now, but we still have a chance to catch up, as most of the recent developments remain piecemeal.


What can a Smartwatch do for your Health?

Tech giants have committed themselves to making 2015 the year of wearable technology. During the first few months of this year, they’ve been introducing a wide array of wearable devices, such as smartwatches and smart bands. Most of these innovative devices have one thing in common: they try to seduce consumers by promising a healthier and more active life.

Users can now find out, with increasing precision, how many steps they take every day, the calories they burn, the miles they run or swim, and the time they spend sleeping, or keep a record of their pulse rate. But well beyond the uses people can find for this data, another possibility has just emerged: donating this data to science. And this could very well shake up medical research and clinical trials. Beyond personal motivations, users can actively participate in studies to improve public health.


The underlying technology was already there, thanks to Big Data and mobile phones. In fact, wearables have yet to prove themselves more precise than smartphones when tracking physical activity. Or so reads a University of Pennsylvania research study comparing 2014’s most popular phones (iPhone 5S and Galaxy S4) with fitness-oriented smartbands (like the Nike FuelBand, Jawbone UP and Fitbit Flex). The study’s authors question the need to invest in a new gadget. They argue that latest-generation phones, already so widely distributed (more than 65% of adults in the US own a smartphone), are a much better solution for monitoring physical activity and basic health data.

Last year, companies such as WebMD launched new services to help translate all that biometric data into useful information for the end user. Thanks to an app, the data compiled by the phone itself or by a paired device (such as a smartband, wireless weight scale or glucose monitor) is sent to health professionals who are capable of analyzing it and who then recommend measures for the patient to take. Health objectives and actions are established from within the app. This initiative could appeal to anyone with an interest in a healthier lifestyle or, more specifically, to patients who suffer from chronic illnesses like type II diabetes or obesity. These patients need closer health status monitoring, and the study of their biometric data could allow physicians to anticipate their crises.

WebMD’s Healthy Target program also cleared a new path: once a patient hands over her biometric data to a platform, which treats it anonymously and ensures confidentiality, this data acquires a new value that goes beyond individual use for the patient’s treatment. Data analysis on a massive scale, with millions of smartphone users connected to a medical program, could offer revealing new patterns that could improve our understanding of diabetes, obesity and other diseases.


Apple became the first tech giant to move in this direction, with an ambitious initiative that may have been eclipsed by their own smartwatch’s launch. On March 9th, the same day the Apple Watch was presented, the company announced ResearchKit, a new platform that allows the 700 million iPhone users worldwide to participate in clinical trials and medical studies, providing data compiled through their phones or external devices connected to them.

The introduction of this medical research platform, as well as the first apps that use it, is, according to some analysts, much more relevant than the Apple Watch itself. Apple partnered with first-rate medical centers to develop five programs that are already compiling data and conducting surveys on asthma, Parkinson’s, diabetes, breast cancer and cardiovascular disease. In exchange for their data, volunteers participating in the studies receive useful information: for example, people with asthma are informed of the areas in their cities where air quality is worse.

Despite being an open source initiative, ResearchKit is still limited to iOS devices, thus excluding Android users from participating in these studies. This introduces an important bias to be taken into account when analyzing data.

Moreover, when ResearchKit was opened in April 2015 to any institution, pharmaceutical lab or doctor willing to use it for purposes of scientific research, developers discovered how easy it was for minors to join medical studies without parental consent, which is a legal prerequisite. Apple later modified the requirements to include parental consent as well as the approval of an independent ethics committee.

Preliminary data from a study on asthma carried out by Mount Sinai Hospital in New York are quite optimistic about users’ level of engagement. During the first weeks of the study, patients used the app almost as frequently as social media.

Data safety in these medical programs is also a concern since, once again, technological possibilities progress faster than the reference legal framework. And like any technology set on granting the anonymity of the data it handles, it faces the possibility of security breaches. Users yielding their data for such studies will always face a certain risk of it being used for undesired purposes, or even against them.

Meanwhile, the first use of this biometric data in a courtroom has proven positive for a Canadian car accident victim fighting for compensation. The attorneys are using data compiled by a smartband to demonstrate how the accident’s after-effects impair their client’s life. They use Vivametrica, a platform that processes public research data, to compare her physical activity pattern with that of the general population.