Category Archives: Technology

The automated home is a mess.

That’s the bad news. The good news: it’s getting better, fast.

We’ve been promised the smart home for decades and always been disappointed, but we’re told that this time is different.

The smart home – a home where even the humblest appliance, plug socket or light bulb is connected to everything else and controllable through apps – is absolutely, positively, definitely ready for prime time.

But is it? This year’s CES may have been packed with smart home devices, but behind the home hub hype, there’s a mess of incompatible standards, security worries and the odd manufacturer behaving badly.

Why standards matter

Let’s start with the language these machines speak. Standards are those languages: gadgets can only communicate with each other if they speak the same one. In smart homes, those languages are standards such as Apple’s HomeKit, Google’s Weave and Samsung’s SmartThings, and unfortunately, those standards aren’t compatible with one another.

Let’s say you’ve got an iPhone and you want to control your smart home with Siri. None of the major smart thermostats currently on sale in the UK – Heat Genius, Hive, Honeywell Evohome, Nest, Tado and Heatmiser – are compatible with Apple’s HomeKit, so Siri can’t control them.

Belkin WeMo smart switches aren’t HomeKit-compatible either, and neither is the first generation of Philips’ Hue smart lighting system (the second generation, launched in late 2015, does have HomeKit).

It’s a similar story with other standards. Google’s Nest thermostat runs a version of its Weave software, although Nest Weave isn’t the same as the Google Weave that’s been published for others – so future Weave products may not work with Nest, even though they apparently share the same language.

Amazon’s Echo requires Amazon-specific apps. And there are tons of smart home products from other manufacturers who use their own proprietary technologies and software too.

Kevin Meagher is vice president of business development for ROC-Connect, which works with some of the world’s largest telecommunication firms, device manufacturers, utility firms and retailers to create smart home products and services. “I think the problem is not technology, but business models,” he told techradar.

“Many businesses don’t want compatibility; they want to sell as many of their own proprietary branded products and services as possible… it is simpler to deploy point to point – single device, single app – in this early market.”

That’s where the incompatibilities come from. Remember the early days of the internet, when the likes of AOL, CompuServe and The Microsoft Network offered competing walled gardens that didn’t want you to go anywhere else?

The smart home has its own walled gardens, and like the internet ones they’ll have to go. As Meagher says, “the market has already started to recognise that consumers will not want to stand on the doorstep opening the door with one app, controlling the heating with another and so on.”

Getting smarter

In addition to multiple standards, there are multiple ways for devices to connect to one another. Wi-Fi is currently ubiquitous, but it’s too complex and power-hungry for smaller devices.

To date, smaller smart home devices have used low-power mesh networking based on a different wireless standard, 802.15.4.

The big names here are ZigBee (which powers the likes of Hue) and Z-Wave (which is used by firms such as ADT). Z-Wave is proprietary – to use it, devices need to include Sigma Designs’ radio chips – but ZigBee is more open and more flexible.

ZigBee became even more attractive this year, when at CES 2016 the ZigBee Alliance announced a partnership with Thread. Thread aims to create a standard protocol – like the internet’s TCP/IP – for smart home devices, and its members include Google, ARM, Samsung and Qualcomm. Thread, like ZigBee, is based on 802.15.4.

Thread is all about the connection; the software sits on top of it and could be Google’s Weave, ZigBee’s software or anybody else’s. It’s like web browsers: whether you use Chrome or Firefox, you’re talking TCP/IP.

Google’s backing gives Thread a lot of weight – with Android on phones, the Brillo operating system on simpler devices, Thread communications and Weave tying everything together, it’s a compelling system for device manufacturers – but there’s another standard emerging, called HaLow.

That’s the Wi-Fi Alliance’s name for the 802.11ah standard, which uses lower frequencies than traditional Wi-Fi for longer range and lower power consumption.

Remember when Bluetooth LE sparked a boom in connected devices for smartphones? The Wi-Fi Alliance is hoping that HaLow is the milkshake that brings all the boys to the yard, where the boys are smart devices and the yard is your router.

Meagher explains that manufacturers are working on what he calls “curated technologies”, where the smart home systems use whatever technology they like best but plug into the key ecosystems such as Weave or HomeKit.

“The good news for consumers is that if they buy into any of the ecosystems using curated technology, the devices are usually compatible across all platforms so the only expense to move between service providers might be a new hub,” he says.

That hub might be a brand new router – routers that offer Thread or HaLow support alongside the normal Wi-Fi channels are on the horizon – or it might be a dongle that plugs into an older router to enable specific smart home technologies.

Hub-aaagh hub-aaagh

Updating your smart home with a new hub sounds like a great idea, and it’s how Philips brought HomeKit compatibility to its ZigBee-based Hue lights: the new hub made your existing bulbs HomeKit compatible.

Unfortunately, the upgrade also showed the dark side of smart home systems when a firmware update removed compatibility for cheaper third party bulbs. Philips said it was due to security and performance concerns, but the internet thought something more sinister was going on.

For Meagher, shutting out other products is a move that can only hurt. “The more difficult they make it for customers to scale Philips products and force them to interoperate with other platforms, the less they will ultimately sell. Like a lot of manufacturers, they need to decide whether they are a service provider or a device supplier.”

Philips has since promised to reverse the update, and its “Friends of Hue” programme will certify non-Philips bulbs as safe to use with Hue systems. Google has a similar programme, Works With Nest, which turned the closed Nest system into a home automation hub for third-party devices.

One of our own concerns is backing the wrong horse by choosing the wrong platform – which could be an expensive mistake. Meagher recommends buying devices with “the most labels” detailing the smart home standards they support; ultimately, “devices with open APIs using the mainstream technologies will win the day.”

Samsung agrees: speaking at the Samsung European Forum, Rory O’Neill said that he wanted to see the industry “breaking down any barriers to entry and keeping things simple… we have to use common standards so things will work together.”

We’re some way from smart home systems where we can control absolutely everything with a single word to Siri, Cortana, Alexa or Google Now, and a single home automation standard rising to encompass everything seems rather unlikely.

However, manufacturers are increasingly aware that compatibility matters, and there’s every chance that devices will emerge that support Apple’s HomeKit, Google’s Weave and the wider Thread simultaneously.

Courtesy: techradar

Visible Light Communications

What is Visible Light Communication?

There is now a lot of talk about Visible Light Communication (VLC) and indeed this blog site is dedicated to the topic, but what is VLC?

On this site when we talk about VLC we tend to be referring to an illumination source (e.g. a light bulb) which in addition to illumination can send information using the same light signal. So in our terms:

VLC = Illumination + Communication

Imagine a flashlight which you might use to send a Morse code signal. Operated manually, it is sending data using the light signal, but because it is visibly flashing on and off it cannot be considered a useful illumination source, so it is not really VLC by our definition. Now imagine that the flashlight is switched on and off extremely quickly by a computer: we can no longer see the data, and the flashlight appears to emit a constant light. Now we have illumination and communication, and this does fit our definition of VLC. Of course we would need a receiver capable of recovering the information, but that is not too difficult to achieve.
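
To make that concrete, here is a minimal sketch of the idea in Python. It simulates on-off keying, the simplest VLC modulation: the bit rate, oversampling factor and function names are illustrative assumptions on our part, not part of any VLC standard.

    import numpy as np

    SAMPLES_PER_BIT = 8  # receiver oversampling; the switching is far above flicker rates

    def transmit(bits):
        # Each bit becomes a block of light-intensity samples (1 = LED on, 0 = LED off).
        return np.repeat(np.asarray(bits, dtype=float), SAMPLES_PER_BIT)

    def receive(signal):
        # Average each block of samples and threshold at half brightness.
        blocks = signal.reshape(-1, SAMPLES_PER_BIT)
        return (blocks.mean(axis=1) > 0.5).astype(int)

    message = np.random.randint(0, 2, 64)      # 64 random bits to send
    light = transmit(message)                  # the fast-flickering light signal
    assert np.array_equal(receive(light), message)
    print("mean brightness:", light.mean())    # the eye integrates this to near-constant light

Real VLC systems go further and use DC-balanced line coding, so the average brightness – and hence the perceived illumination – stays constant regardless of the data being sent.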

In literal terms, any information that can be sent using a light signal visible to humans could be considered VLC, but by our definition we should be able to see the light yet not “see” the data. So although there seems to be no universally agreed definition of what VLC is, we can at least agree what we mean by VLC here.

The opportunity to send data usefully in this manner has largely arisen because of the widespread use of LED light bulbs. LEDs are semiconductor devices similar to silicon chips. Consequently we can switch these bulbs at very high speeds that were not possible with older light bulb technologies such as fluorescent and incandescent lamps. The rapid adoption of LED light bulbs has created a massive opportunity for VLC. The problem of congestion of the radio spectrum utilised by Wi-Fi and cellular radio systems is also helping to create the market for VLC.

There are other terms in the VLC space which are quite widely used but have slightly different meanings from VLC. Three terms closely associated with VLC are:

Free space optical (FSO) communication is similar to VLC but is not constrained to visible light, so ultraviolet (UV) and infrared (IR) also fall into the FSO category. Additionally, there is no illumination requirement for FSO and so this tends to be used in narrow beams of focussed light for applications such as communication links between buildings. FSO often uses laser diodes rather than LEDs for the transmission.

Li-Fi is a term often used to describe high speed VLC in application scenarios where Wi-Fi might also be used. The term Li-Fi is similar to Wi-Fi with the exception that light rather than radio is used for transmission.  Li-Fi might be considered as complementary to Wi-Fi. If a user device is placed within a Li-Fi hot spot (i.e. under a Li-Fi light bulb), it might be handed over from the Wi-Fi system to the Li-Fi system and there could be a boost in performance.

Optical wireless communication (OWC) is a general term which refers to all types of optical communication where cables (optical fibres) are not used. VLC, FSO, Li-Fi and infrared remote controls are all examples of OWC.

Source: http://visiblelightcomm.com/

Hadoop on GPU: Boost performance of your big data analytics by 200x

Hadoop, an open source framework that enables distributed computing, has changed the way we deal with big data. Parallel processing with this set of tools can improve performance several times over. The question is, can we make it work even faster? What about offloading calculations from a CPU to a graphics processing unit (GPU) designed to perform complex 3D and mathematical tasks? In theory, if the process is optimized for parallel computing, a GPU could perform calculations 50-100 times faster than a CPU.
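
As a rough illustration of what such offloading looks like, here is a hedged sketch using Python with the CuPy library (our choice for illustration; it assumes a CUDA-capable GPU and is not part of Hadoop itself – real Hadoop+GPU frameworks wire this kind of kernel into the map step).

    import numpy as np
    import cupy as cp  # GPU array library; assumes a CUDA-capable card

    data = np.random.rand(10_000_000).astype(np.float32)

    # CPU version of a simple "map" step
    cpu_out = np.sqrt(data) * 2.0

    # The same element-wise map, executed by thousands of GPU threads in parallel
    gpu_in = cp.asarray(data)          # host -> device transfer
    gpu_out = cp.sqrt(gpu_in) * 2.0    # runs on the GPU
    result = cp.asnumpy(gpu_out)       # device -> host transfer

    assert np.allclose(cpu_out, result)

Note the two explicit host-device copies: at cluster scale, those transfers are exactly the data-transfer bottleneck raised later in this article.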

The idea itself isn’t new. For years, scientific projects have tried to combine the Hadoop or MapReduce approach with the capabilities of a GPU. Mars seems to be the first successful MapReduce framework for graphics processors. The project achieved a 1.5x-16x increase in performance when analyzing Web data (search/logs) and processing Web documents (including matrix multiplication).

Following Mars’ groundwork, other scientific institutions developed similar tools to accelerate their data-intensive systems. Use cases included molecular dynamics, mathematical modeling (e.g., the Monte Carlo method), block-based matrix multiplication, financial analysis, image processing, etc.

On top of that, there is BOINC, a fast-evolving, volunteer-driven middleware system for grid computing. Although it does not use Hadoop, BOINC has already become a foundation for accelerating many scientific projects. For instance, GPUGRID relies on BOINC’s GPU and distributed computing to perform molecular simulations that help “to understand the function of proteins in health and disease.” Most of the other BOINC projects related to medicine, physics, mathematics, biology, etc. could be implemented using Hadoop+GPU, too.

So, the demand for accelerating parallel computing systems with GPUs does exist. Institutions invest in supercomputers with GPUs or develop their own solutions. Hardware vendors, such as Cray, have released machines equipped with GPUs and pre-configured with Hadoop. Amazon has also launched Elastic MapReduce (Amazon EMR), which enables Hadoop on its cloud servers with GPUs.

But does one size fit all? Supercomputers provide the highest possible performance, yet cost millions of dollars. Using Amazon EMR is feasible only in projects that last for several months. For larger scientific projects (two to three years), investing in your own hardware may be more cost-effective. Even if you increase the speed of calculations using GPUs within your Hadoop cluster, what about performance bottlenecks related to data transfer?

Still, it is hard to tell which architecture will become mainstream in high-performance and distributed computing. As these architectures evolve – and some of them certainly will – they may change our understanding of how huge arrays of data should be processed.

Resource: http://www.networkworld.com/article/2167576/tech-primers/hadoop---gpu--boost-performance-of-your-big-data-project-by-50x-200x-.html

Hyderabad Is Going To Host India’s First Centre of Excellence In Big Data

Hyderabad is going to host India’s first centre of excellence in big data and analytics shortly, involving an investment of Rs 10 crore, to be dedicated to government-sanctioned projects, said the Department of Science and Technology’s advisor and head of big data initiatives, Dr KR Murali Mohan.

“We are exploring three different models for the Centre of Excellence – one which will work on pure government model, one set up by industry association and the third will be a PPP model. Depending on the success of each model, we will scale up the rest of the facilities,” said Dr Murali Mohan.

He added that while both state and central governments collect a wealth of data, the Centres of Excellence will fill the gap in analysis and the adoption of new technology.

Source: hpc-asia.com

World’s Fastest Quantum Random Number Generator Unveiled in China

Quantum cryptography can only become successful if somebody can generate quantum random numbers at the rate of tens of billions per second. Now Chinese physicists say they’ve done it.

Privacy is one of society’s most valued qualities. The ability to send private messages and to carry out financial transactions without fear of being monitored lies at the heart of many government, military, and commercial activities.

One technology that allows this to be done perfectly is quantum cryptography, and it requires a powerful source of random numbers.

But there’s a problem. Random numbers are surprisingly hard to generate in large quantities. One of the best sources is the quantum world, which is fundamentally random. But the best commercially available quantum random number generators produce them only at a rate of a million per second, far short of the many tens of billions per second that many applications require.

Today, that looks to have changed thanks to the work of You-Qi Nie at the Hefei National Laboratory for Physical Sciences in China and a few pals who say they have built a quantum random number generator capable of producing 68 billion of them per second. They say the technique should remove an important barrier preventing governments, the military, and the rest of us from benefiting from perfect security.

Random numbers have to be unpredictable and irreproducible, and this rules out generating them using ordinary algorithmic processes, which tend to be both predictable and reproducible. Computer scientists have long known that programs claiming to produce random numbers usually turn out to do nothing of the sort.

Instead, physicists have turned to natural processes to produce random numbers. For example, turbulence is thought to be entirely random, so measuring the turbulent effects that the atmosphere has on a laser beam is one method of producing random numbers, albeit a rather slow one that could easily be biased by environmental factors.

That’s why physicists prefer to use quantum processes to generate random numbers. These are thought to be random in principle and fundamental in nature, which is important because it means there cannot be some underlying physical process that might introduce predictability.

Physicists have tried lots of ways to produce quantum random numbers. One of the most popular is to send a stream of photons through a beam splitter, which transmits or reflects them with a 50 percent probability. Simply counting the photons that are reflected or transmitted produces a random sequence of 0s and 1s.
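
In code, the principle looks something like this toy simulation (assuming an ideal 50/50 splitter and perfect detectors; in the real device the randomness comes from the photons themselves, and NumPy’s pseudorandom generator merely stands in for them here):

    import numpy as np

    rng = np.random.default_rng()
    photons = rng.random(1_000_000)            # one draw per photon arriving at the splitter
    bits = (photons < 0.5).astype(np.uint8)    # reflected -> 0, transmitted -> 1
    print("fraction of 1s:", bits.mean())      # ~0.5 for an unbiased splitter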

That’s exactly how the world’s only commercially available quantum random number generator works. But its speed is limited to about one megabit per second. That’s because single photon detectors cannot count any faster than this.

Recently, physicists have begun to mess about with a new technique.  This arises from the two different ways photons are generated inside lasers. The first is by stimulated emission, which is a predictable process producing photons that all have the same phase. The second is spontaneous emission, an entirely random quantum process. These photons are usually treated as noise and are in any case swamped when the laser is operating at full tilt.

However, spontaneous emission becomes significant when the laser operates at its threshold level, before stimulated emission really takes hold. If it is possible to measure these photons, then it may be possible to exploit their random nature.

You-Qi and co have done exactly that. These guys have created a highly sensitive interferometer that converts fluctuations in the phase of photons into intensity changes. That’s important because intensity changes can be easily measured using conventional photodetectors that work at much higher rates than single photon detectors.
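
For an idealized two-arm interferometer, the output intensity tracks the phase difference Δφ between the paths roughly as

    I(t) ∝ 1 + cos Δφ(t)

so the random phase of spontaneously emitted photons shows up as random intensity fluctuations that an ordinary fast photodetector can see.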

That has allowed the team to measure these random changes and digitize them at a rate of 80 Gbps. This data stream then has to be cleaned up in various ways to remove any biases introduced by the measurement process.
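
One classic clean-up step is von Neumann debiasing, shown below as a sketch of a standard method – not necessarily the extractor this team used, since high-speed generators typically apply Toeplitz-hashing extractors instead. It trades throughput for uniformity:

    def von_neumann(raw_bits):
        # Read bits in pairs: "01" -> 0, "10" -> 1, discard "00" and "11".
        # Removes bias from independent bits at the cost of throughput.
        out = []
        for a, b in zip(raw_bits[::2], raw_bits[1::2]):
            if a != b:
                out.append(a)
        return out

    print(von_neumann([1, 1, 0, 1, 0, 0, 1, 0]))  # -> [0, 1]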

But after this, the team is still able to produce random numbers at the rate of 68 Gbps.

There’s no way of guaranteeing that any sequence of numbers is perfectly random, but there is a set of standard tests that can spot certain kinds of patterns, if they are present. You-Qi and co say their random number sequences pass all these tests with flying colors.
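
The best-known battery of this kind is the NIST SP 800-22 suite. Its first test, the frequency (monobit) test, simply checks that 0s and 1s are balanced; here is a short Python version (passing such tests is necessary but never sufficient evidence of randomness):

    import math, random

    def monobit_pvalue(bits):
        # NIST SP 800-22 frequency test: treat bits as a +/-1 random walk and
        # compute the p-value of the final displacement under fair coin flips.
        n = len(bits)
        s = abs(sum(2 * b - 1 for b in bits))
        return math.erfc(s / math.sqrt(2 * n))

    bits = [random.getrandbits(1) for _ in range(100_000)]
    p = monobit_pvalue(bits)
    print("p =", p, "->", "pass" if p >= 0.01 else "fail")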

The end result is the fastest quantum random number generator ever produced by some margin.

That’s impressive work that should prepare the ground for some mainstream applications using quantum cryptography. “Our demonstration shows that high-speed quantum random number generators are ready for practical usage,” say You-Qi and co. “Our quantum random number generator could be a practical approach for some specific applications such as QKD systems with a clock rate of over 10 GHz.”

In other words, many organizations that need a practical system that offers secrecy guaranteed by the laws of quantum physics may not have much longer to wait.

Digital Banking I: The Continuously Financing Behaviors and Industry 4.0

Twenty years ago, we were still shopping in department stores, driving 20 miles away and picking up goods on a weekly or monthly basis. Since the early 21st century, we have enjoyed surfing the internet after work and choosing our favorites from eBay or Amazon on a daily basis.

Today we are swiping our smartphones and refreshing shopping apps every hour or even every minute. Our shopping behavior has changed enormously over the last decades, becoming much more frequent and much more digital. The direct result of these changing online behaviors is that consumers use digital currencies and digital payments as well as online banking services. We are seeing these changes now: more and more banks are giving up physical branches and providing so-called “digital retail banking” services.

So, what is the next big thing? The new “IoT” (Internet of Things) will eventually connect all the physical objects around you to the internet. In the near future, you might be able to Google your clothes, food or even furniture. When the time comes and the world has enough information and computing power, you will be able to spend and consume not just by the minute but continuously.

The power of Industry 4.0.

Imagine going to a restaurant in the future. You might pay a small fee to choose your favorite seat, either near the window or with a sea view. If you are driving, you can choose a faster lane at any time by paying extra. If you are buying a car, you can configure every part of it – color, engine, even iOS or Android compatibility – before it is manufactured, and track production and the expected delivery date to the accuracy of a single minute. This is the power of Industry 4.0.

Now you will need banks to provide “continuous” financing for this huge number of small financing needs. Instead of financing the whole purchase, you could finance your vehicle part by part, choosing different rates for your different behaviors or for different parts of your products, as future digital banks might have their own preferences. Maybe by then, your bank or Google will know you better than you know yourself.

With enough storage and computing power for such huge amounts of information, the cost of serving retail clients would not be much higher than that of serving corporate clients – but with a much higher margin. The cut-off between retail clients and corporate clients becomes blurred.

Does it sound incredible but a little far away? The truth is that it might not be as far off as you think. Amazon’s fulfillment service lets retail customers deliver their goods to others with corporate-level logistics efficiency; retailers can track small, individual shipments in real time, while Amazon takes care of tagging those goods and turning them into the “IoT”. Meanwhile, Google has partnered with Calvin Klein to design fashion based on top keyword results. Another example: Alibaba is working with banks to issue Letters of Credit for their customers’ trade finance business – a formal commercial and corporate service, managed at a much smaller, micro level.

Consumer behavior has already turned traditional retail banking services into digital ones. The “IoT” and Industry 4.0 will eventually extend these changes to commercial, corporate and investment banking. Things are happening now, but we still have a chance to catch up, as most of the recent developments remain piecemeal.

Source: https://www.bbvaopenmind.com/en/digital-banking-i-the-continuously-financing-behaviors-and-industry-4-0/?utm_source=facebook&utm_medium=techreview&utm_campaign=MITcompany&utm_content=bancadigital40

What can a Smartwatch do for your Health?

Tech giants have committed themselves to making 2015 the year of wearable technology. During the first few months of this year, they’ve been introducing a wide array of wearable devices, such as smartwatches and smart bands. Most of these innovative devices have one thing in common: they try to seduce consumers by promising a healthier and more active life.

Users can now find out, with increasing precision, how many steps they take every day, the calories they burn, the miles they run or swim, the time they spend sleeping, or keep a record of their pulse rate. But well beyond the use people can find for this data, there’s another possibility that has just emerged: donating this data to science. And this could very well shake up medical research and clinical trials. In addition to personal motivations, users can actively participate in studies to improve public health.

The underlying technology was already there, thanks to Big Data and mobile phones. In fact, wearables have yet to prove themselves more precise than smartphones when tracking physical activity. Or so reads a University of Pennsylvania research study comparing 2014’s most popular phones (iPhone 5S and Galaxy S4) with fitness-oriented smartbands (like the Nike FuelBand, Jawbone UP and Fitbit Flex). The study’s authors question the need to invest in a new gadget. They argue that latest-generation phones, already so widely distributed (more than 65% of adults in the US own a smartphone), are a much better solution for physical activity and basic health data monitoring.

Last year, companies such as WebMD launched new services to help translate all that biometric data into useful information for the end user. Thanks to an app, the data compiled by the phone itself or by a paired device (such as a smartband, wireless weight scale or glucose monitor) is sent to health professionals who are capable of analyzing that data and who then recommend measures for the patient to take. Health objectives and actions are established from within the app. This initiative could appeal to anyone with an interest in a healthier lifestyle or, more specifically, to patients who suffer from chronic illnesses like type 2 diabetes or obesity. These patients need closer health status monitoring, and the study of their biometric data could allow physicians to foresee their crises.

WebMD’s Healthy Target program also cleared a new path: once a patient hands over her biometric data to a platform, which treats it anonymously and ensures confidentiality, this data acquires a new value that goes beyond individual use for the patient’s treatment. Data analysis on a massive scale, with millions of smartphone users connected to a medical program, could offer revealing new patterns that could improve our understanding of diabetes, obesity and other diseases.

Apple became the first tech giant to move in this direction, with an ambitious initiative that may have been eclipsed by their own smartwatch’s launch. On March 9th, the same day the Apple Watch was presented, the company announced ResearchKit, a new platform that allows the 700 million iPhone users worldwide to participate in clinical trials and medical studies, providing data compiled through their phones or external devices connected to them.

The introduction of this medical research platform, as well as the first apps that use it, is, according to some analysts, much more relevant than the Apple Watch itself. Apple partnered with first-rate medical centers to develop five programs that are already compiling data and conducting surveys on asthma, Parkinson’s, diabetes, breast cancer and cardiovascular disease. In exchange for their data, volunteers participating in the studies receive useful information: for example, people with asthma are informed of the areas in their cities where air quality is worse.

Despite being an open source initiative, ResearchKit is still limited to iOS devices, thus excluding Android users from participating in these studies. This introduces an important bias to be taken into account when analyzing data.

Moreover, when ResearchKit was opened in April 2015 to any institution, pharmaceutical lab or doctor willing to use it for purposes of scientific research, developers discovered how easy it was for minors to join medical studies without parental consent, which is a legal prerequisite. Apple later modified the requirements to include parental consent as well as the approval of an independent ethics committee.

Preliminary data from a study on asthma carried out by Mount Sinai Hospital in New York are quite optimistic about the level of user engagement. During the first weeks of the study, patients used the app almost as frequently as social media.

Data safety in these medical programs is also a concern since, once again, technological possibilities progress faster than the legal framework that regulates them. And like any technology that promises to keep the data it handles anonymous, it faces the possibility of security breaches. Users yielding their data for such studies will always face a certain risk of it being used for undesired purposes, or even against them.

Meanwhile, the first use of this biometric data in a courtroom has proven positive for a Canadian car accident victim fighting for compensation. Her attorneys are using data compiled by a smartband to demonstrate how the accident’s after-effects impair their client’s life. They use Vivametrica, a platform that processes public research data, to compare her physical activity pattern with that of the general population.

Source: https://www.bbvaopenmind.com/en/what-can-a-smartwatch-do-for-your-health/?utm_source=facebook&utm_medium=techreview&utm_campaign=MITcompany&utm_content=smartwatchhealth

The Race to Build a Real-Life Version of the “Star Trek” Tricorder

Tatiana Rypinski is maybe two bites into her salad when she realizes it’s time for her next meeting. She gets to her feet and heads to the Biomedical Engineering Design Studio, a hybrid of prototyping space, wet lab, and machine shop at the Johns Hopkins University’s Homewood campus, in Baltimore. Rypinski and a few of her colleagues gather near some worktables with power outlets dangling from the ceiling. A tool cart is in one corner, a microscope in another. Two 3-D printers sit idle along a wall. The students have agreed to meet me here to discuss their work on a project whose goal is not just inspired by science fiction – it actually comes straight out of “Star Trek.” They want to build a medical tricorder.

In 1966, “Star Trek” introduced the tricorder as, in essence, a plot device. Like the transporter, which could “beam” people between starships and planets without asking the audience to sit through lengthy landing sequences, the tricorder could rapidly diagnose medical conditions and suggest treatments, keeping the story moving. With a wave of this fictional device, a Starfleet crew member could get a comprehensive medical analysis without having to be admitted to the ship’s sick bay.

Here in the real world, though, if you have a non-emergency situation, you may wait days – weeks, in some places – to see a physician. And if you need laboratory tests, receiving a diagnosis can take even longer. A lot of waiting is involved, and waiting is the last thing you want to do when you’re sick. It’s even worse in the developing world, where a shortage of medical facilities and personnel means that seeing a doctor may not be an option at all. What we need is a tricorder. A real one.

Rypinski is the leader of Aezon, one of the teams participating in the Qualcomm Tricorder XPrize. The competition launched in 2012, when the XPrize Foundation and U.S. chipmaker Qualcomm challenged innovators from around the world to develop a portable, consumer-friendly device capable of diagnosing a comprehensive set of medical conditions. More than 300 teams registered, and after a series of reviews, the organizers selected 10 finalists, announced last August.

This month, the final phase of the competition starts. Each finalist team was expected to deliver 30 working prototypes, which will now undergo a battery of tests with real patients. Prizes totaling US $10 million will go to the winner and two runners-up, to be announced early next year, when “Star Trek” will be celebrating its 50th anniversary.

Aezon is the youngest finalist team: All of its members are undergraduates at Hopkins. Some have never even seen the original “Star Trek” episodes. “My dad is a huge fan, though,” one student tells me. For her part, Rypinski is unfazed. “This is something we’re doing because we love it,” she says, “and I think that sets us apart.”

The other finalists include high-profile startups like Scanadu, in Silicon Valley, and well-funded medical companies like DNA Medicine Institute, in Cambridge, Mass., which has a partnership with NASA. Four teams are based in the United States, and the other six are from Canada, England, India, Northern Ireland, Slovenia, and Taiwan.

Their tricorders won’t be all-powerful portable scanners like those in “Star Trek,” but they still must demonstrate some impressive capabilities. They’ll have to diagnose 13 medical conditions, including anemia, diabetes, hepatitis A, leukocytosis, pneumonia, stroke, tuberculosis, and urinary tract infections. Each team also chooses three additional conditions from a list that includes food-borne illness, melanoma, osteoporosis, whooping cough, shingles, mononucleosis, strep throat, and HIV. And their systems must be able to monitor vital signs like temperature, blood pressure, oxygen saturation, heart rate, and respiratory rate – not only in real time but for periods of several days as well.

The goals may seem impossibly difficult, but XPrize believes they can be achieved, thanks to a host of relatively recent technological advances. These include sophisticated machine-learning methods applied to medical data, cost-effective microfluidic and other lab-on-a-chip systems, and faster and cheaper laboratory tests such as rapid polymerase chain reaction (PCR) for DNA analysis. Just as important, there’s the popularization of personal genomics services and fitness-tracking gear, exemplifying people’s desire to learn more about their bodies and health.

Because the enabling technologies already exist in some form today, much of the challenge is about integrating them into a compelling system, says Grant Campany, senior director of the Qualcomm Tricorder XPrize. A tricorder isn’t intended to keep you out of your doctor’s office: it won’t be able to treat any of the conditions it can identify. But it will be able to give you a fast and detailed picture of what may be the matter with you – which is much better than googling your symptoms and sorting through dubious medical websites, as many people do today.

Campany says the diseases chosen for the challenge are often not diagnosed early enough and therefore lead to a significant number of deaths and hospitalizations: “The goal here is to try to identify things as soon as possible so that people don’t wait and get sicker.”

This Drone Uses a Smartphone for a Brain

A Qualcomm processor in this stock Android phone is powerful enough to autonomously fly this robot.

UPenn’s new quadcopter uses a smartphone for autonomous flight, employing only on-board hardware and vision algorithms – no GPS involved. The drone was built as a collaborative project between Qualcomm and a team of University of Pennsylvania researchers.

Just to be clear on this, the only thing that the quadrotor has in terms of electronics is a motor controller and a battery. All of the clever stuff is being handled entirely by the phone, which is just a stock Android smartphone with a Qualcomm Snapdragon inside. In other words, this is not a special device (like Google’s Project Tango phone, which the UPenn researchers used in a demo last year); it’s something that you can pick up for yourself, and the UPenn guys only half jokingly offered to install their app on my phone and let it fly the robot.

This is a fantastic example of just how far smartphones have come: they’re certainly powerful computers, but it’s the integrated sensing that comes standard in almost all of them (things like gyros, accelerometers, IMUs, and high resolution cameras) that makes them ideal for low-cost brains for robots. What’s unique about the CES demo is that it’s the first time that a sophisticated platform like this (vision-based real-time autonomous navigation of a flying robot is pretty darn sophisticated) has been controlled by a very basic consumer device.
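
As a small illustration of why that integrated sensing matters, here is a hedged sketch – with simulated sensor values, and emphatically not UPenn’s vision pipeline – of a complementary filter, a standard trick that fuses a drifting gyroscope with a noisy accelerometer to estimate tilt, the kind of computation a phone’s IMU enables essentially for free:

    import random

    dt, alpha = 0.01, 0.98   # 100 Hz updates; trust the gyro short-term, the accelerometer long-term
    true_tilt = 0.3          # radians; what the simulated phone is actually doing
    estimate = 0.0

    for _ in range(1000):
        gyro_rate = random.gauss(0.02, 0.05)            # rate gyro: biased and noisy
        accel_tilt = true_tilt + random.gauss(0, 0.1)   # accelerometer: noisy but drift-free
        estimate = alpha * (estimate + gyro_rate * dt) + (1 - alpha) * accel_tilt

    print(f"estimated tilt: {estimate:.3f} rad (true: {true_tilt} rad)")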

Source: IEEE Spectrum

Solar Impulse: Global flight completes first leg

The solar-powered aircraft, Solar Impulse 2, which aims to fly around the globe without a drop of fuel, made a historic night landing in Ahmedabad in Gujarat late on Tuesday.

The aircraft landed successfully in the western state of Gujarat at 11:25 pm local time (1755 GMT), completing its first major sea crossing and finishing its second leg a little less than 16 hours after taking off from the Omani capital, Muscat.

After the plane landed at Sardar Vallabhbhai Patel International Airport, Ahmedabad, members of the control room applauded pilot Bertrand Piccard, who was at the controls for the 1,465 kilometre (910 mile) journey over the Arabian Sea.

According to the Swiss embassy, the Solar Impulse will be in Ahmedabad for four days during which “several events are planned on the theme of renewable energy and sustainable development”.

Capable of flying over oceans for several days and nights in a row, Si2 will travel 35,000 km around the world in 25 flight days spread over roughly five months. It will pass over the Arabian Sea, India, Myanmar, China and the Pacific Ocean.

The aircraft is also expected to fly over the river Ganga in Varanasi to spread the message of cleanliness and clean energy.

UN Secretary-General Ban Ki-moon on Monday congratulated the team behind the Si2 project and wished them every success in their historic attempt.

“We take inspiration from their example and efforts to harness the power of multilateralism to address climate change and to inspire the world to achieve sustainable development through …sustainable energy and renewable energy,” he said.

“With their daring and determination, we can all fly into a new sustainable future,” he added.

The Si2 is an airborne laboratory and the largest aircraft of its kind ever built, with a weight equivalent to that of a small car.

With a wing covered by more than 17,000 solar cells, the plane can fly up to an altitude of 8,500 metres at speeds ranging from 50 to 100 km per hour.

After travelling around the globe, Si2 is expected to arrive back in Abu Dhabi in late July or early August.

Source: Zee news