Tag Archives: Google

Risen From the Dead: Google Glass 2.0 Is Now for the Enterprise

Google Glass is back. Last week, X, a subsidiary of Google parent Alphabet, announced a revival of its most embarrassing wearable mishap with a new focus on the enterprise market. Over the past couple of years, Google Glass Enterprise Edition (EE) has been quietly tested in pilot programs with companies such as GE, DHL, Boeing, Volkswagen, and Sutter Health. Following last week’s announcement, Glass EE will now be more widely available via a network of partners. For now, there are no plans to bring back the original consumer edition. Continue reading

Evaluating IP Activity in Sweat Sensing

We’ve argued in the past that accurate and reliable early-stage disease detection combined with non-invasive sample collection is the holy grail of molecular diagnostics. Previously, we discussed the growing popularity of non-invasive saliva-based diagnostics in the context of this theme (see the insight “Digital IVD sample of the future: Saliva” [client registration required]). While less mature, sweat-based tests also present a compelling avenue for non-invasive sensing in medical, enterprise, and consumer applications. To gauge the state of innovation in sweat sensing, we surveyed the evolving landscape of sweat sensor patents. In total, we identified 1,009 patents for the search terms “sweat sensor” and “perspiration sensor” published in the past decade. As Figure 1 below shows, sweat sensing technologies have seen a consistent increase in patent applications; 2016 saw the most activity, with a total of 194 patent grants and applications.
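For readers who want to reproduce this kind of tally, the aggregation behind Figure 1 reduces to counting documents per publication year. Below is a minimal sketch in Python; the file name and the “publication_date” column are assumptions about an exported search-result CSV, not any patent office’s actual schema.

```python
import csv
from collections import Counter

def patents_per_year(path: str) -> Counter:
    """Tally patent grants and applications by publication year.

    Assumes a CSV export with a 'publication_date' column in
    YYYY-MM-DD format; adjust the column name to your data source.
    """
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["publication_date"][:4]] += 1
    return counts

if __name__ == "__main__":
    # Hypothetical export of the 1,009 "sweat sensor" /
    # "perspiration sensor" results discussed above.
    for year, n in sorted(patents_per_year("sweat_sensor_patents.csv").items()):
        print(year, n)  # the 2016 row should show the peak of 194
```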

Continue reading

Six Reasons Why Electric Vehicles and Autonomous Vehicles Will Inevitably Merge

As the automotive industry evolves, two major innovations have emerged almost in parallel – increasing electrification, culminating in fully electric vehicles (EVs), and increasingly capable driver-assist features, culminating in the (not yet achieved) goal of self-driving cars (client registration required for both). A common question we receive is whether these two must be combined: Must self-driving cars be electric? The short answer is no – or, more accurately, not yet. It is possible to build a competent self-driving car using older internal combustion engine (ICE) technology as the power source that drives the wheels. However, there are six good reasons why self-driving cars will most likely be overwhelmingly electric – that is, six reasons why the two technologies will “merge”: Continue reading

Why Apple’s New Energy Subsidiary Means Absolutely Nothing

Apple recently launched a subsidiary, Apple Energy, which filed with the Federal Energy Regulatory Commission (FERC) for authorization to sell energy, capacity, and ancillary services into wholesale power markets across the U.S.

Since the announcement, the media has been abuzz with speculation as to what Apple Energy might mean for utilities, power customers, and Apple, but ultimately, this development is hardly newsworthy. To cut through the hype, we clarify some misunderstandings to explain why Apple Energy should not raise any eyebrows in the energy community: Continue reading

Smart Glasses Hyped at the Enterprise Wearable Technology Summit, but Other Devices Are Being Overlooked

Lux Research analysts recently attended the Enterprise Wearable Technology Summit East in Atlanta, GA. The conference drew about 250 attendees, with presenters and exhibitors focused on the opportunities and challenges of implementing wearable devices in the workplace. The buzz at the conference centered on smart glasses and the three main issues facing them in the workplace today: Continue reading

Augmented and Virtual Reality Technologies Hit a VC Hype Cycle; Enter with the Right Tools for Success


Venture capital (VC) firms have invested $7 billion in electronic user interface (EUI) technologies between 2005 and today, and momentum is building: a record $1.3 billion was invested in 2014, and 2016 had already seen $1.1 billion invested by the end of Q1. There is plenty of nuance beneath the headline growth, however. The technologies receiving the funding have evolved significantly, as newcomers like augmented and virtual reality have garnered much of the funding over the past several years.

2D display technologies have received the most funding of any individual technology area over the entire period – almost $2.6 billion, underpinned by massive rounds such as the $700 million investment in Plastic Logic in 2011. That said, augmented reality (AR) investment has taken off over the past several years, growing steadily from about $8 million in 2005 to $38 million in 2012, then jumping to $669 million in 2014 before settling at $175 million in 2015. As of March 2016, AR had already received $856 million for the year, thanks largely to a $793 million round raised by Magic Leap. Virtual reality (VR) has also seen a significant spike, growing from $32 million in 2012 to $371 million in 2015. Of the user input technologies – touch control, voice control, gesture control, and eye tracking – touch control received the most VC investment, at $560 million, followed by voice control at $335 million, gesture control at $133 million, and eye tracking at $94 million. Touch control funding increased steadily almost every year, from $12 million in 2005 to $119 million in 2014, but fell off to just $31 million in 2015.

Within this landscape, it is interesting to see what corporate investors – investors with presumably a better bead on the market – are backing. Corporate venture capital (CVC) has become more active in recent years, participating in more than 70 funding rounds since 2014. Intel, Samsung, Qualcomm, Google, and BASF were the most active CVCs, but their portfolios vary widely. Intel Capital is the most active CVC in the field, with 47 transactions in 34 companies, far ahead of the others: nine deals in 2D display companies, eight in voice control, seven in AR, six in touch control, and 17 in other technologies. Google and Samsung invested throughout the virtual reality ecosystem, from hardware to content generation to social media, while Intel maintains a broader portfolio across 2D displays, augmented reality, voice control, touch control, and virtual reality. In this sense, the leading CVCs are building out entire AR and VR ecosystems.

CVCs in this space, although still less active than institutional VCs in seed and A rounds, show more appetite for these higher-risk investments than is typical, with 17% of CVC transactions occurring at the seed stage. The leading CVCs in the space traditionally operate with higher risk profiles than the broader CVC group, which means CVCs coming from more conservative sectors will either need to change their risk profiles and engage sooner, or partner effectively within the CVC community with the Intels, Googles, and Samsungs of the world. Despite the higher-than-usual risk profile, CVC-backed companies still have a better outcome profile than the general venture-backed cadre: of the 98 CVC-backed companies, 17% were acquired, 3% went public, and none went out of business, versus 12%, 1%, and 5%, respectively, for VC-backed companies in general. Needless to say, there is some clear frothiness in the EUI startup space, so investors should enter at their own risk, especially in AR and VR. Thankfully, there are predictive analytics available for identifying which startups will be successful.
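As a sanity check, the outcome comparison above reduces to simple cohort arithmetic. The short sketch below recomputes it; the raw CVC counts (17 acquisitions, 3 IPOs, 0 failures) are back-calculated from the quoted percentages and the 98-company cohort, and the general-VC cohort size is not given, so its rates are entered directly – treat the numbers as illustrative.

```python
# CVC-backed cohort of 98 companies; counts reconstructed from the
# cited 17% acquired, 3% IPO, 0% out of business.
cvc_backed = {"acquired": 17 / 98 * 100, "ipo": 3 / 98 * 100, "failed": 0.0}
# General venture-backed rates, as quoted (cohort size unknown).
all_vc = {"acquired": 12.0, "ipo": 1.0, "failed": 5.0}

for outcome in cvc_backed:
    delta = cvc_backed[outcome] - all_vc[outcome]
    print(f"{outcome}: CVC {cvc_backed[outcome]:.1f}% "
          f"vs. VC {all_vc[outcome]:.1f}% ({delta:+.1f} pts)")
```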

By: Tony Sun

Surviving and Thriving as the Internet of Everyone Evolves Into a Ubiquitous Reality

The Quantified Self (QS) movement began with fringe consumers obsessed with self-measurement, but today’s Internet of Things (IoT) – with sensors on and inside bodies, connected cars, and smart homes, offices, and cities – is expanding it to include everyone. Consumers will face no shortage of devices or data to choose from anytime soon. Looking further out, to 2025, three specific factors will drive the technical evolution of the QS/IoT as a computing platform, each with implications for consumer relationships: improvement of individual devices; integration, from aspects of the inner self to a holistic view of inner, outer, and extended self; and intervention in consumer actions.

  • Improvement: Before too long, gimmicky and overpriced devices will disappear from the market, while runaway hits make headlines (and millions of dollars). Since 2005, sensors have driven QS – specifically, sensors attached to or focused on humans. Fitness wearables were an early example, but they’re already a commodity; today’s Samsung, Google, and Apple smartwatches are a natural evolution. Bragi headphones now do health tracking, and Samsung’s Artik platform, Intel’s Curie, and GE’s Green Bean offer startups an easy way to create consumer IoT devices. Image sensors – cameras – enable gesture interfaces and new channels like lifelogging, where users of Twitter’s Periscope and Facebook Live live-stream their lives.
  • Integration: Fitness trackers and action cameras capture data on or next to consumers’ bodies. IoT technologies quantify consumers’ “inner selves,” and marketers can learn as much from them as they have from examining purchase histories, web surfing habits, and other digital footprints. Other IoT datapoints include vital signs from exercise, sports, and adventure wearables; food data, from precision agriculture to smart utensils like HAPIfork to microbiomes and Toto’s smart toilet; and medical bioelectronics, personal genomics, and mood- and mind-monitoring devices like NeuroSky’s. The IoT tracks consumers’ outer lives – family via smart baby bottles, pets via wearables – and their extended selves via connected thermostats, diagnostic dongles in cars, and image-recognition systems in stores and on city streets.

Continue reading

Differentiating Consumer Smart Glass Hype From Enterprise Smart Glass Potential… Google Leads in One of These Categories


Smart glasses launched to much fanfare, and then commensurate disappointment, with Google’s initial consumer-focused product line. But, like many glitchy first products, it sowed the seeds for other developers and end users to connect and innovate. Enter the enterprises looking for new tools to improve productivity, a domain in which smart glasses have received significant buzz recently. The devices’ unique form factors and hands-free controls have attracted interest from many different industries, ranging from automotive and construction to medical and retail. This end-user interest, together with the entrance of a plethora of device developers, has created a major battlefield for smart glasses, with numerous pilot projects underway. The question is: which glasses are the best fit for which enterprise use cases?

By analyzing more than 70 enterprise use cases, we found that these pilot programs boil down to three core functions. Use cases for accessing information enable users to pull information like checklists, product info, and notifications from various sources and view it in the head-mounted display (HMD); sometimes, the visualized information is overlaid on top of the real object to achieve augmented reality (AR). In real-time communication use cases, smart glasses stream live video from the wearer’s point of view and enable discussions with managers, remote experts, or customers. Finally, in documentation applications, smart glasses take pictures and record audio and video clips, which are then saved to local or remote storage; no immediate feedback is needed.
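One way to make this three-function framing concrete is as a simple classification structure. The sketch below is purely our illustration; the enum values and the example pilot use cases are assumptions, not any vendor’s schema.

```python
from enum import Enum

class CoreFunction(Enum):
    """The three core functions distilled from 70+ enterprise pilots."""
    ACCESS_INFORMATION = "pull checklists, product info, and notifications into the HMD"
    REAL_TIME_COMMUNICATION = "stream point-of-view video to managers, experts, or customers"
    DOCUMENTATION = "capture photos/audio/video to local or remote storage"

# Hypothetical pilot projects mapped onto the taxonomy.
pilot_use_cases = {
    "warehouse pick-by-vision": CoreFunction.ACCESS_INFORMATION,
    "remote field-service support": CoreFunction.REAL_TIME_COMMUNICATION,
    "inspection photo capture": CoreFunction.DOCUMENTATION,
}

for use_case, function in pilot_use_cases.items():
    print(f"{use_case}: {function.name} ({function.value})")
```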

Given the diversity of enterprise use cases and the range of technical capabilities across smart glass devices, it’s not surprising that not every device is a good fit for every case. Continue reading

Tesla’s Autopilot – Concerns, Near Misses, and a Few Redeeming Factors

What They Said

Just a few weeks ago, Tesla announced the release of the official beta version of its Autopilot. Only a few months behind the original plans for an end-of-summer deployment, the system was immediately met with huge acclaim from Tesla owners. Many of those same owners immediately began abusing the system, using it in scenarios it wasn’t intended for and pushing the technology to its limits.

Automakers, in turn, have expressed concern – if not outright disdain – for what Tesla is doing. Their concern, voiced by many at the recent Tokyo Motor Show, is that if an accident happens with one of these vehicles, there will be an immediate backlash against the regulatory environment and the progress of all autonomous vehicle technology.

What We Think

Automakers have every right to be concerned about Tesla’s latest product release. The company is essentially exploiting a legal loophole: it does not consider its technology to be “autonomous” but rather an advanced driver safety feature. As a result, it has pushed its product to market while operating in the legal grey zone between autonomous and autopilot. Back in 2014, when California required Google to install steering wheels and pedals back into its driverless cars (client registration required), automakers began debating the language of autonomous versus autopilot. Attempting to distinguish between the two is somewhat ridiculous, and seems like a ploy by automakers like Tesla to ensure that the blame for any accident falls on the driver rather than the vehicle. Ideally, recent announcements like those from Volvo (client registration required) will help clear up some of this confusion, but in the end it will be up to regulators to decide what is legal and where liability falls.

In the meantime, Tesla is focusing on the good news: in the three weeks since deployment, there have been plenty of near misses but no major accidents, and the system even appears to be learning and improving. One driver claimed that Autopilot saved his vehicle from an otherwise unavoidable accident. However, for all the positive press, it’s important to note that the recent accident avoidance by that Uber driver’s Tesla actually highlights the confusion over advanced driver-assist systems and how they work.

The driver credits Autopilot with avoiding the accident; however, it was really the car’s forward-collision warning with automatic braking that deserves the credit. Forward-collision warning with automatic braking is increasingly a standard safety feature across many vehicle brands, not just Tesla, and while it is technically a component of Autopilot, it is not Autopilot’s distinguishing feature. Rather, it’s the system’s ability to change lanes that makes it a significant jump from previous advanced driver-assist systems (ADAS).

The problem with Autopilot-style systems is that you cannot trust drivers to take back control of the car, particularly once they become comfortable with the technology; comfort leads to complacency and even longer re-engagement times (client registration required). What OEMs need to focus on is how to predict the scenarios in which the car can no longer self-drive, and then pass control to the driver with more than sufficient time – it cannot be a “just-in-time” operation, as that will almost certainly result in accidents. Today’s focus on partial autonomy on the highway is driven by the fact that the highway is actually quite predictable in terms of traffic patterns, other vehicles’ behaviors, and accurate, adequate maps. Most OEMs are building for “full autonomy” confined to the highway environment, passing control back to the driver as an exit approaches – a situation in which the car can give ample warning.
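To make the “more than sufficient time” point concrete, here is a minimal sketch of a predictive handover check. Every name and number in it – the takeover budget, the engagement multiplier, and the time-to-boundary signal – is our assumption for illustration, not any OEM’s actual logic.

```python
TAKEOVER_BUDGET_S = 10.0  # assumed driver re-engagement time; real values vary widely

def should_request_takeover(time_to_boundary_s: float, driver_engaged: bool) -> bool:
    """Fire the takeover request while ample self-driving time remains.

    time_to_boundary_s: predicted seconds until the car leaves its
    operational domain (e.g., an upcoming highway exit).
    A disengaged driver gets double the budget, because complacency
    lengthens re-engagement times -- exactly the risk described above.
    """
    budget = TAKEOVER_BUDGET_S * (1.0 if driver_engaged else 2.0)
    return time_to_boundary_s <= budget

# A "just-in-time" policy is the degenerate case where the budget
# approaches zero -- the design the text argues will cause accidents.
```

The design point is simply that the alert threshold scales with how long the driver needs to re-engage, not with how little time happens to remain.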

What many of those automakers are concurrently working on is how to track driver engagement – either through facial recognition or eye tracking – in order to enable a safe handover of controls. Driver tracking is something Tesla has decided to forgo entirely, opting instead to simply warn drivers that they are still responsible for their vehicles. That lack of driver tracking, combined with the fact that Tesla’s Autopilot currently has no safeguard to prevent the driver from turning it on in inappropriate situations – such as urban streets – could spell disaster for the company. Tesla’s Autopilot has performed admirably so far, and thankfully there have been no major incidents, but there is still plenty of reason to be concerned.

Does Google’s Recent Push Into Life Sciences Signal a Shift From Chasing Dreams to Chasing Profits?

It started with a vision of solving major health challenges facing mankind with powerful computing: Google formed Calico in 2013 to develop technologies to tackle health issues related to aging and, in parallel, continued working on other technologies aimed at chronic diseases like diabetes through its secretive Google X lab. Over the last several years, we heard of a nanodiagnostics platform, a cardiac and activity monitor, a contact-lens-based continuous glucose monitoring device, and the Baseline Study (an effort by Google to collect genetic and molecular information from hundreds of people to establish a baseline for a healthy human body). The recent corporate restructuring created the parent company, Alphabet, and with it came the news that one of the entities under its umbrella will be Google Life Sciences, led by Andy Conrad as CEO.

In and of itself, the creation of a separate life sciences company indicates an intent to start generating revenue in the relatively short term. Even more intriguing, however, is one of the first moves the newly formed company made: on September 15, 2015, it announced the hiring of Thomas R. Insel, the director of the National Institute of Mental Health, to head its efforts in neurology and mental health (client registration required). Dr. Insel spent more than a decade at the helm of the agency and recently spearheaded U.S. President Obama’s BRAIN Initiative.

The official line is that he will lead initiatives aimed at finding more effective solutions for early detection and prevention of mental disorders and neurological diseases – all very noble causes – but we wonder if there are more financially tangible motives underlying the hire. One project Dr. Insel discussed is detecting psychosis early using language analytics, by picking up the semantic signature of the disorganized thinking characteristic of the disease. Other ideas revolve around using technology to identify and more precisely address the sources of depression and anxiety, including social interactions and sleep disruption. A common theme across these ideas is continuous tracking of an individual’s brain activity and mental state to enable behavioral studies based on physiological data analytics. Here we get to the true potential of the technology: in addition to improving health, it could also be used to enhance consumer behavior profiling in the advertising industry. According to Statista, Google controlled almost 10% of the half-trillion-dollar advertising industry in 2012. This is, in our opinion, the right segment of the medical industry for Google to focus on, as a technology that can “read” consumers’ minds can only help the company increase its market share. It also won’t hurt that neurological diseases are among the fastest-growing medical conditions, with an annual economic burden in the hundreds of billions of dollars.
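For a sense of how such language analytics might work mechanically, below is a minimal sketch of one published idea: scoring the semantic coherence of consecutive sentences, which tends to drop with the topic-jumping of disorganized speech. We substitute simple bag-of-words cosine similarity for the richer semantic models used in actual studies, so this is a toy illustration, not a clinical tool.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(count * b[word] for word, count in a.items())
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def coherence_score(sentences: list[str]) -> float:
    """Mean similarity between adjacent sentences; disorganized,
    topic-jumping speech drives this score down."""
    bags = [Counter(s.lower().split()) for s in sentences]
    sims = [cosine(a, b) for a, b in zip(bags, bags[1:])]
    return sum(sims) / len(sims) if sims else 1.0

print(coherence_score([
    "the dog chased the ball in the yard",
    "the dog brought the ball back",
]))  # adjacent sentences share vocabulary, so the score is high
```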

We will leave a discussion of the potential implications of an advertising giant controlling the sensitive health care information of a large number of individuals for another time.