Lux Populi

The Lux Research Analyst Blog


Google and Facebook’s Drone Strategies, from Buzz to Breakthroughs: The Sky’s the Limit

The technology world is abuzz with the recent announcement that Google is buying Titan Aerospace, a maker of high-altitude unmanned aerial vehicles (UAVs) that Facebook had only recently been considering (it bought Ascenta for $20 million instead). Ostensibly, both companies are looking at UAVs (also referred to as “drones”) as an opportunity to deliver Internet access to the roughly five billion people who lack reliable land-based access today. But that goal still leaves many people wondering about the business rationale – how will billing work, who will pay to advertise to the unconnected masses, and what are those technology giants really up to anyway?

To understand why content providers are spending billions on drones, you have to think about their long-term strategy. Google and other content providers recently suffered a major defeat when a court struck down the FCC’s “Net Neutrality” rules. The ruling means that landline and mobile carriers like AT&T and Verizon can start charging more for people to access certain sites, even though the carriers swear they will not act anticompetitively. So, for example, you might have to pay the carrier extra to see YouTube (which Google owns), Instagram (which Facebook owns), Netflix, or Amazon Prime movies. In fact, just in February Netflix struck a deal to pay Comcast, which is reportedly already delivering faster access times, though that has not stopped the partners from bickering over unfair competition and exertion of power. AT&T, for its part, has a $500 million plan to crush Netflix and Hulu, so the competitive backstabbing has already begun.

How do drones disrupt this dynamic? Most obviously, having their own networks would allow Facebook and Google to bypass the domination of wireless and wireline carriers (like AT&T and Verizon in the U.S.) whose business practices – e.g. knocking down Net Neutrality – are geared towards throttling content providers like Facebook, Google, and their partners and subsidiaries like YouTube. Need more bandwidth? New neighborhood being built? Blackout? Natural catastrophe? Launch more drones – and expand service in hours, not years. Drones serving network connectivity would allow Google, Facebook, and Amazon to bypass the toll lanes – and, incidentally, make instantly obsolete the landline infrastructure that their rivals Comcast, AT&T, and Verizon have spent decades and tens to hundreds of billions of dollars building out. Connectivity in emerging markets is a feint – look for delivering content in the developed world to be the first battle, and call these Machiavellian strategies the “Game of Drones.”

Could this really happen? Both drone technology and wireless connectivity technology are relatively mature and work well. Both are still improving every year, of course, and it is possible to deliver some connectivity via drones today. However, more innovation is needed for them to be commercially viable, and future incremental development will focus on integrating and improving components so that more people can have more bandwidth with greater reliability at lower cost. For example, engineers might integrate the broadband transceiver antenna with the drone’s wings (as Stratasys and Optomec have tried — client registration required), which could eliminate the cost and weight of a separate antenna while allowing for a much larger, more effective one. Drones’ needs could drive development of battery chemistries that outperform lithium-ion (client registration required), like lithium-sulfur (client registration required) from companies like Oxis Energy (client registration required). High-performance composites and lightweight, lower-power electronics technologies like conductive polymers (client registration required) will also be key.

What’s next? One of the most obvious additional uses would be to attach cameras and use the drones for monitoring traffic, agriculture, and parks, or even finding empty parking spaces – things an AT&T repair van can never do. Maybe the drones become telemedicine’s robotic first responders (client registration required), sending imagery of accidents as they happen and swooping down to help doctors reach injured victims within seconds, not minutes. While these examples may seem far-fetched, it’s really very hard to say exactly what drones will be used for, if only because our own imaginations are limited.

Within the autonomous aircraft space, there’s much more flying around than just glider-style UAVs. For example, Google’s “Project Loon” has similar stated goals of delivering Internet access. The new investment in Titan does not necessarily mean Google is abandoning lighter-than-air technologies; Google has already invested in that technology and is now looking at other aircraft platforms for doing similar things in different environments. Investments in small satellites from companies like SkyBox and PlanetLabs are also taking off. And of course, there are Amazon’s delivery drones – rotary-wing UAVs more like helicopters: they need speed and navigation in small spaces, and they must carry the weight of packages, so they need to be small and powerful.

Each of these technologies has spin-off effects – both threats and opportunities – for companies in adjacent spaces, such as materials or onboard power. Only batteries or liquid fuels are dense enough energy sources for rotary-wing aircraft, while Google’s Titan and Loon aircraft are more like glider planes or blimps: big, light, and slow, staying in roughly the same place for hours, days, or even years. Because solar power requires a large collection area, these big glider and blimp drones can run on it. Technology providers in these areas stand to gain if more companies deploy their own UAV fleets.

So, UAVs are an important strategic technology for both companies, even if the money-making part of the business is far off. Yes, someday you might have a Google drone as your ISP, but that’s not the primary business case behind these investments today. Google and Facebook need to make investments in these airborne platforms for the same reasons that countries did 100 years ago – to defend their territory, metaphorically speaking. For example, Nokia should have done a better job launching smartphones before Apple and Google, and Kodak should have launched digital cameras before all the consumer electronics companies did. If Google and Facebook (and Amazon, and others…) don’t have drone technology in five to 10 years, they may be as bankrupt as Nokia and Kodak (ironically, Nokia launched mobile phone cameras, which accelerated Kodak’s bankruptcy). Instead, it may be today’s mobile phone and cable television providers who go the way of the landline.

Looking beyond the land of information technology, these examples are powerful illustrations of the fact that we seldom actually know what any new technology is really going to be used for. Even today, we dismiss mobile phone cameras, Facebook, and Twitter as frivolous social tools, but where would Tunisia and Egypt be today without them? Local Motors (client registration required) is just making one-off dune buggies – until GE sees that its microfactories are the future of manufacturing appliances, too. Crowdfunding is just a bunch of kids selling geegaws – until products like the Pebble smartwatch beat the Samsung Gear (client registration required), start challenging the now-retreating Nike Fuelband, and even attack the smart home market. Google and Facebook may say today that they intend to bring connectivity to new places, even if in reality nobody can really say what they’ll do in 2018. While they probably have secret plans, those plans are almost certainly wrong – but better than no plan at all. Companies that plan to survive beyond a few quarterly earnings calls have to make sure they are well positioned to catch whatever falls from new technology’s blue skies.


Metamaterials Could Enable Ultrathin Terahertz Photosensors for Imaging and Communication Applications

What They Said

Researchers at the Vienna University of Technology recently announced development of a metamaterial designed to change the polarization of light in the terahertz band, with frequencies between 300 GHz and 3 THz. Metamaterials are materials that derive their materials properties – in this case optical properties – from a finely patterned microstructure or nanostructure rather than from their bulk chemistry. In this case, the researchers improved the absorption efficiency of THz radiation in semiconductor photodetectors by patterning the metallic top electrode with a micron-scale metamaterial pattern. As a result, they were able to demonstrate a micron-thick THz photodetector, more than 1,000 times thinner than the wavelength of the incident radiation.
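The thickness claim follows directly from λ = c/f: at the low-frequency end of the band (300 GHz) the wavelength is 1 mm, so a micron-thick detector is indeed about 1,000 times thinner than the radiation it detects. A quick sketch of the arithmetic (ours, not the researchers’):

```python
# Wavelength of THz radiation: lambda = c / f
C = 3.0e8  # speed of light, m/s

wavelength_300ghz_um = C / 300e9 * 1e6  # low end of the band: ~1,000 um (1 mm)
wavelength_3thz_um = C / 3e12 * 1e6     # high end of the band: ~100 um

# A ~1 um thick detector versus the 300 GHz wavelength:
thinness_ratio = wavelength_300ghz_um / 1.0  # ~1,000x thinner
```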

What We Think

Terahertz radiation occupies a frequency range between the microwave and infrared bands. It is non-ionizing, but can penetrate several millimeters of plastics or living tissue. As a result, it has been studied as a safer (and potentially higher resolution) alternative to x-rays for a number of medical and security imaging applications (client registration required). In addition, its high frequency relative to traditional Wi-Fi frequencies could enable high-speed, high-bandwidth short-range wireless communications. However, thus far generating and detecting THz radiation has proven difficult, limiting its use to high-end scientific applications in spectroscopy.

Researchers are gradually developing methods for generating THz radiation from several sources, such as nanoscale graphene antennas. The metamaterial described in this paper could, in turn, represent a path toward an efficient detection method. Moreover, because it is used in a semiconductor device that already relies on processes like photolithography for patterning fine features, the cost barrier for patterning one additional device layer is minimal compared to other emerging metamaterial technologies. As a result, this development could enable high-end, high-speed electronics applications to come to market within the next five years.


The 3D Printed Part Market Will Reach $7 Billion in 2025


Currently, 3D printing’s largest applications are making prototypes, molds, and tooling. Direct production of end-use parts, however, is beginning to grow in industries including aerospace, medical, automotive, consumer products, architecture, and electronics. The coming decade will see both growth and disruption in the diverse 3D printing landscape, with the total market for 3D printers, printable materials, and printed parts reaching $12 billion in 2025. This comprises $2.0 billion in formulated materials and $3.2 billion in printers, but the largest share will reside in $7.0 billion of part production value, growing at a 21% CAGR from just $684 million in 2013.

While prototypes, molds, and tooling have respectable growth rates, the most robust growth is in production parts, which will grow at a 36% CAGR from $81 million to $3.2 billion over this period. Growth will vary significantly across applications. Medical applications – such as surgical tools and orthopedic implants – will take off from $6 million in 2013 to $391 million in 2025, a 42% CAGR, driven by the high value of customization in this market, such as prosthetics and implants that can be readily fit to individual patients or their injuries. Small-volume automotive applications, such as parts for high-end vehicles and replacement parts for vehicles no longer in production, will reach the market in 2015 and rise quickly to $695 million by 2025. Aerospace, an early leader in industrial adoption, will remain important, but its longer product cycles won’t keep pace with industries that adopt slightly later. Aggregated applications also exist in consumer and architectural segments, a list that will no doubt grow as developers target a broad swath of potential applications, each of which may or may not pan out. These include customized sporting goods, on-site military or offshore oil and gas production of replacement parts, direct production of consumer electronic devices by 3D printers, and even manufacturing objects in space using local materials such as moon rocks or asteroids.
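As a sanity check, the CAGRs cited above can be recomputed from the 2013 and 2025 endpoints (12 years of growth); the rounded results match the report’s figures:

```python
def cagr(start, end, years=12):
    """Compound annual growth rate between two endpoint values."""
    return (end / start) ** (1.0 / years) - 1.0

total_parts_cagr = cagr(684e6, 7.0e9)  # total printed parts: ~21%
production_cagr = cagr(81e6, 3.2e9)    # production parts: ~36%
medical_cagr = cagr(6e6, 391e6)        # medical applications: ~42%
```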

Certainly there is opportunity and robust growth to be harvested in the 3D printing value chain, although still not enough to match the hype. Current or prospective market participants should also prepare for ongoing flux as the landscape solidifies. While the leading 3D printer companies’ razor/blade model – developed to address prototyping – could inhibit growth, emerging third-party material suppliers and equipment manufacturers with more open models are beginning to challenge their dominance. This has been fueled by a wave of patent expirations. In 2006, the expiration of several early FFF patent families enabled the emergence of low-cost desktop printers from new suppliers and corresponding popular interest in the technology. In early 2014, U.S. patent 5,597,589, a foundational patent for SLS, expired; increased competition among industrial SLS printer suppliers is bound to follow as new entrants come to market. Beyond these disruptions to incumbency, traditional design tools are unwieldy and inadequate, leaving room for emerging intuitive design tools that will point the way to more efficient part design.

The opportunity in 3D printing is clear, but all value chain participants will need to stay pragmatically nimble to keep pace with the materials, processes, parts, and applications on its decade-long journey to $12 billion.

Source: Lux Research report “How 3D Printing Adds Up: Emerging Materials, Processes, Applications, and Business Models” — client registration required.


Ammono-Unipress Hybrid GaN Process Looks Very Promising; Commercial Viability Needs to be Proven

What They Said

Ammono and the Institute of High Pressure Physics of the Polish Academy of Sciences (Unipress) recently announced that they have developed a hybrid ammonothermal-hydride vapor phase epitaxy (HVPE) process to produce low-dislocation-density gallium nitride (GaN) substrates. The traditional HVPE process is a two-chamber process using sapphire as the seed crystal, while the traditional ammonothermal process is a single-chamber process using GaN substrates as the seed crystal; the details of this hybrid process were not disclosed. The partners claim that they have achieved smooth GaN layers up to 2.5 mm thick (crystallized at a stable growth rate of about 240 μm/hour) without any cracks, and at a dislocation density of 5 × 10⁴/cm² (about three orders of magnitude better than a typical HVPE GaN substrate dislocation density). They also used GaN wafers from this hybrid process as seed crystals for Ammono’s ammonothermal process; the company claims that using a hybrid ammonothermal-HVPE seed crystal in a subsequent ammonothermal process still resulted in a very low dislocation density of about 2 × 10⁴/cm².

What We Think

The Ammono-Unipress demonstration of a hybrid GaN substrate process looks very promising, especially in terms of growth rate and dislocation density, although its commercial viability remains unproven. The crystallization growth rate of 240 μm/hour is about 20x faster than that of the traditional ammonothermal process and could yield a dramatically more cost-effective process – about a 40% cost reduction overall. The dislocation density achieved in the hybrid GaN wafer is only slightly worse than the 2 × 10⁴/cm² that can be achieved using the ammonothermal process, further validating the promise of such a hybrid approach. Ammono (client registration required) is also currently participating, with Kyma, in a U.S. Department of Energy (DOE) Advanced Research Projects Agency-Energy (ARPA-E) project under the Strategies for Wide-Bandgap, Inexpensive Transistors for Controlling High-Efficiency Systems (SWITCHES) program to develop a similar hybrid manufacturing approach.

GaN substrates with such low dislocation density are especially attractive not only in lasers but also in high-voltage power electronics applications that typically mandate a vertical device structure (see the report “Price or Performance: Bulk GaN Vies with Silicon for Value in LEDs, Power Electronics and Laser Diodes” — client registration required). Until now, the prohibitively high cost of GaN substrates has prevented their use in power electronics – should this process become commercially viable, that scenario could change quickly. Clients should continue to scope out investment opportunities in GaN substrate players such as Ammono and Fairfield Crystal (client registration required).


Wearable and Distributed Computing Will Hit $45 Billion in Revenue by 2033


Devices like smartphones and tablets have created a revolution in how people connect with the world and each other, all while providing a gradual evolution and convergence of features. In contrast, the devices of tomorrow will see dramatic innovations in the user experience, including improved wearability, added biosensor functionality, and seamless connectivity. Overlaying all device adoption trends from today’s and tomorrow’s devices, it is clear that the emergence of the latter does not necessarily come at the cost of dramatically shrinking market share for the former. Although tablets will continue to cannibalize notebook PC sales, other devices remain strong, often in concert with each other as opposed to being in competition.

Around 2018, smart watches and other accessories, such as fitness bands, will make an increasing impact, but remain in many ways dependent on the paired functionality of an evolved smartphone. After a slow start out of the gate, smart watches will grow quickly from 2016 through 2020, reaching 0.7% installed base and a market size of $3.6 billion but never matching the ubiquity of smartphones and tablets.

By 2023, a new class of wearable devices will emerge, appearing no more complex than a plastic wristband or ring but able to identify the wearer and coordinate interaction with all shareable devices in the area. These wearables will take advantage of distributed screens, for example, which in most cases will feature their own power source, processor, and so forth. This device-to-device interaction will enable much more sharing and merging of technology once OEMs agree on interoperability standards, and by 2033 wearable devices will reach the mainstream, enjoying a 41% installed base and peak revenues around $14 billion.

In 2028, “invisible” computing will enter the mainstream with a class of devices that offer a seamless and wearable user experience, by virtue of innovations in flexible and robust materials and novel inputs like electroencephalography, electrocardiography, and galvanic skin response, along with advanced speech recognition and gestures. Rather than a single unit, invisible computing will consist of an adaptable network of devices that changes according to consumer needs on monthly, weekly, daily, and even hourly timeframes, adding up to a $27 billion market in 2033.

Innovation in consumer electronics will continue to create major new businesses, but more than ever the devices will rely on new technologies, like robust flexible materials and curved displays and batteries, along with software innovations that enable sharing and interconnectivity across OEM brands and systems. The entire value chain, from materials suppliers to OEMs, has opportunities to drive disruptive innovation to market in advanced materials, energy storage, display technologies, biosensors, and more. Those that partner effectively and integrate efficiently will have the clear advantage.

Source: Lux Research report “Building the Devices of Tomorrow: Electronics Shift to Wearable and Distributed Computing” — client registration required.


UK Doctors 3D Print Patient a New Face

Doctors in the UK recently used computer-aided design (CAD) software and 3D printed components to reconstruct the face of a motorcycle crash survivor. Dr. Adrian Sugar, a consultant maxillofacial surgeon at Morriston Hospital in Swansea, U.K., where the surgery took place, said, “[W]e produced guides at each stage of the surgical process, not only to cut the bone but to reposition the bones, and then we had custom implants 3D printed.” The surgical team spent months planning the procedure, which included taking scans of the patient’s face, creating a software model of his head, and designing both the scaffolds used during the surgery and the final implants. Dr. Sugar said that the team took extra care to document and design a repeatable process that would enable much shorter turnaround times for future surgeries. He added, “We’re talking maybe days as opposed to months. The ultimate aim is to undertake planning and be able to use custom-made guides and implants on a routine basis.”

As 3D printing makes a splash across multiple technology areas, one of the most promising applications could be medical implants and prosthetics. As mentioned in the report “Building the Future: Assessing 3D Printing’s Opportunities and Challenges” (client registration required), companies including Oxford Performance Materials (client registration required) and Arcam (client registration required) have already received U.S. Food and Drug Administration (FDA) 510(k) clearance for the use of their 3D printed materials in orthopedic and cranial implants. The ability to customize and quickly redesign 3D printed components has specific added value for orthopedic implants and prostheses that need to be custom fitted to each patient. To this point, 3D printing has been restricted to passive medical applications. However, the emergence of 3D printing systems like those developed by Optomec (client registration required) that can deposit metal and plastic concurrently could enable the production of customized active implants – e.g. pacemakers, insulin pumps, and neurostimulators. Clients interested in new approaches to medical therapies should consider engaging with players in the 3D printing space that have a proven record of regulatory compliance, while steering clear of companies that claim medical applications without a track record.


Wuhan University Leads the Fundamental Research of Shale Gas Development Under National “973” Program

A national “973” research project on supercritical carbon dioxide (CO2)-enhanced shale gas development commenced recently. Academician Xiaohong Li at Wuhan University was assigned as the primary investigator (PI), in collaboration with other universities and institutes – namely Chongqing University, China University of Petroleum, Southwest Petroleum University, the Institute of Rock and Soil Mechanics of the Chinese Academy of Sciences, and the Sinopec Research Institute of Petroleum Exploration – as well as an industrial entity, Sinopec Jianghan Oil Field Company. The government-funded research project will focus on the use of supercritical CO2 fluid to assist in shale gas exploration and production, especially on key technologies such as supercritical CO2-enhanced rock breaking, fracturing permeability, and shale gas displacement. The ultimate goal of this fundamental study is to develop a theoretical framework and technology for high-efficiency shale gas exploitation.

To achieve the shale gas target (client registration required) set in the 12th Five-Year Plan – 6.5 billion m³ per year by 2015 – the Chinese government is working to promote shale gas by involving private-sector players in government-led shale gas tenders (client registration required) and encouraging shale gas exploration joint ventures (JVs) between state-owned enterprises (SOEs) and non-SOEs (client registration required). Furthermore, the government is promoting the use of natural gas to replace coal in both the energy supply and transportation markets amid worsening air pollution nationwide (client registration required).

As one of the most important national research funding programs, the “973” Program, also known as the National Basic Research Program of China, is administered by the Ministry of Science and Technology (MOST) to drive primarily fundamental research projects, usually funded at levels of tens of millions of RMB. More importantly, it focuses on leading-edge research tied to national strategy. In this sense, the launch of a shale-gas-specific fundamental research project signals the additional effort Chinese authorities are putting into the theory and technology of the shale gas sector. Notably, this “973” project features the active engagement of Sinopec via its research institute and subsidiary, which again shows that Sinopec is a step ahead in shale gas exploration technology (client registration required). Hence, clients seeking opportunities in China’s emerging shale gas market should consider engaging the leading academic and industrial players mentioned above to penetrate and preposition in China’s changing energy landscape, yet treat them as potential technology competitors as well. In the meantime, Lux will keep close track of shale gas technology development and progress by leading Chinese players, and analyze the consequent impact on foreign technology providers.


Biopesticides Grow Past $4.5 Billion by 2023


Agricultural production relies on technology for increased yield and strong yield stability. In the face of climate change and concerns over agriculture’s environmental impacts, alternative “greener” technologies have emerged as potential routes to alleviate these concerns. Often these are targeted replacements for individual incumbent technologies: biopesticides are a “green” alternative to synthetic chemical pesticides, no-till farming is a “low-impact” alternative to intensive tilling, and biochar is a biobased alternative for fertilization and soil amendment. While these approaches may be technologically promising, not all are legitimate financial opportunities. No-till farming is poised for growth, but the market opportunity is small, while biochar has potential but needs a champion and a little luck. In contrast, while biopesticides have historically represented a small market, enduring growth is already a reality, buoyed by multiple factors.

Recent legal changes are paving the way for biopesticide development and use over synthetic pesticides. From the EU to the U.S. and India, regulations and consumer sentiment are driving investment in biopesticides and adoption of those products. For example, growth in the EU will occur most rapidly in the face of the neonicotinoid ban beginning in 2014; this regulatory pressure will drive adoption faster than in other geographies, and the market will be larger by 2023 as a result. The environmental impacts of biopesticides are much smaller than those of their synthetic counterparts: biopesticides enjoy very short re-entry intervals for treated areas and often require very small doses for effective treatment. As such, we expect this market to grow at more than 8% CAGR through at least 2023. Innovations in the field, coupled with swelling consumer sentiment against synthetic pesticides, will drive increasing adoption of these products in the future. This creates growth opportunities for those looking to make a play here.

Vestaron is an excellent example of a multi-pronged approach, advancing a biochemical biopesticide, a transgenic plant-incorporated protectant (PIP), and a synthetic pesticide at the same time. Others in the field should try similar multi-faceted approaches to be resilient to changing consumer demands. Other novel biopesticide varieties will also claim increasing market share. Specifically, biologically sourced small molecules are better able to achieve specific control of individual insect pests, weeds, and even fungal diseases. Morflora’s technology, which uses nontransgenic approaches to impart resistance to myriad pathogens and pests via PIPs, is a model for future biopesticide development.

Biopesticide developers and aspiring market entrants should consider acquisitions now, not later, to get a piece of the opportunity. The existing market is flooded with small-scale developers; many of them offer a single product. As the industry matures, these small-scale developers will rapidly realize growth and success, increasing the potential purchase prices of these start-ups by significant margins. Those seeking to innovate externally should act now to make their selections, rather than waiting and having to pay more for new products.

Source: Lux Research report “Green Dreams or Growth Opportunities: Assessing the Market Potential for “Greener” Agricultural Technologies” — client registration required.


Setting the Record Straight on Natural Gas Fuels

From the day the Model T entered the consumer market, oil has firmly held its position as the king of transportation fuels. The widespread availability of oil-derived fuels such as diesel and gasoline has made them the natural choice for powering heavy equipment and machinery in the oil field. However, the shale gas revolution has transformed the North American energy landscape in just a few years, dropping natural gas prices to record lows. The price of natural gas itself is not as important as its price relative to oil, and today that gap is wider than ever.

Many companies are attempting to capitalize on the gap. The oilfield services company CanElson Drilling bought CanGas Solutions (client registration required) to power its equipment using compressed natural gas (CNG). In 2012, Seneca Resources announced it would convert its drilling rigs from diesel to cleaner-burning liquefied natural gas (LNG) (client registration required). Sasol plans to invest up to $14 billion in a gas-to-liquids (GTL) facility in Louisiana that would produce diesel in addition to other petroleum-derived products. There are many pathways for converting natural gas into a transportation fuel.

The three main competing natural gas based transportation fuels are CNG, LNG, and liquids produced from GTL processes. Each has its own merits (see the report “Shale Takes on Automotive: The Future of Natural Gas Vehicles” — client registration required).

  • Of the three, CNG is the simplest and least expensive to produce. It involves compressing methane to about 1% of the volume it occupies at atmospheric pressure. It is often stored at 21 °C and 3,600 psi.
  • Liquefaction involves super-cooling natural gas to about -160 °C and 14.7 psi. Production and storage of LNG is more energy intensive than it is for CNG; however, LNG occupies less than 0.2% of the volume of natural gas at room temperature. Still, diesel remains 1.7 times more energy dense than LNG.
  • Though large-scale GTL technologies are established, just like LNG and CNG processes, the challenge is in scaling them down so they can utilize associated and stranded gas. GTL plants are capital intensive (~$100,000 per barrel of daily capacity). Unlike CNG and LNG, GTL produces a drop-in fuel and does not require an infrastructure overhaul. GTL plant efficiencies range from 40% to 70%, and the plants can produce a range of fuels and chemicals.
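The volume and energy-density claims in the bullets above can be sanity-checked with rough literature values; the heating value and densities below are our assumptions, not figures from the report:

```python
# Approximate literature values (assumptions for this back-of-envelope check)
METHANE_LHV_MJ_PER_KG = 50.0  # lower heating value of methane
LNG_DENSITY_KG_PER_L = 0.45   # density of LNG at ~-160 C
DIESEL_MJ_PER_L = 38.6        # volumetric energy density of diesel
GAS_DENSITY_KG_PER_M3 = 0.68  # methane density at room conditions

lng_mj_per_l = METHANE_LHV_MJ_PER_KG * LNG_DENSITY_KG_PER_L  # ~22.5 MJ/L
diesel_vs_lng = DIESEL_MJ_PER_L / lng_mj_per_l               # ~1.7x, as stated

# Volume shrink on liquefaction: liquid volume / gas volume for the same mass
liquefaction_ratio = (GAS_DENSITY_KG_PER_M3 / 1000.0) / LNG_DENSITY_KG_PER_L  # ~0.15%, i.e. <0.2%
```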

These differences have led each natural gas based fuel to find its own way into the fuels market. Most of the 17.7 million natural gas vehicles on the road use CNG, and nearly all of the natural gas passenger vehicles available in the U.S. run on CNG, but the long-haul trucking industry prefers LNG because of its higher energy density. Oil companies are attracted to the prospect of using low-cost “waste” natural gas instead of high-cost diesel for their drilling rigs, and companies such as REV LNG, Prometheus Energy, and Halliburton are leading the way. GTL processes require permanent, capital-intensive plants, forcing companies like G2X (client registration required) and Sasol to bet on long-term price discrepancies between two commodities. Even the smaller-scale GTL plants from companies like Velocys (client registration required) and CompactGTL (client registration required) are much larger than typical CNG/LNG fueling stations.

Despite the current low price of natural gas in North America, oil will continue to reign over the trillion-dollar transportation fuels market. Natural gas fueling infrastructure is not as ubiquitous as gasoline/diesel, and consumers need to sacrifice trunk space to fit the larger gas tank to account for the lower energy density. Because of these barriers, natural gas vehicle adoption has been slower than the fuel price spread would suggest. However, emerging technologies that convert raw natural gas into a consumable fuel offer a way for oil producers to cut costs, while generating revenue for technology developers.


Flax Oil as the New Omega-3 Source

Health Canada, the Canadian equivalent of the US FDA for overseeing food and nutrition, recently approved the claim that a flax diet reduces cholesterol – a ruling expected to drive the market for flax seed and flax seed oil. The ruling, which came two years after a claim for allowance regarding therapeutic benefits was filed, followed a meta-analysis of eight clinical studies in which hypercholesterolemic patients consumed an average of 40 grams of ground flax seed daily. Pooling data from the eight studies, the meta-analysis concluded that both total cholesterol and LDL cholesterol were lowered.

The implications for market expansion for flax are significant, since in its ruling Health Canada also noted that 39% of all Canadians between the ages of 6 and 79 have unhealthy cholesterol levels. Canada has a population of approximately 39 million people; assuming 90% of them fall within this age range and assuming a diet of 40 g per day as recommended, this would result in a potential 220,000 tons of annual demand within Canada for health and wellness reasons.
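The arithmetic behind the 220,000-ton figure is not shown, but one reconstruction lands close to it, assuming the 39% prevalence is applied to the age-eligible share and the result is expressed in short tons (both assumptions are ours):

```python
population = 39e6          # approximate Canadian population (figure used above)
age_eligible_share = 0.90  # assumed share aged 6 to 79
prevalence = 0.39          # share with unhealthy cholesterol (Health Canada figure)
grams_per_day = 40.0       # recommended daily flax intake

people = population * age_eligible_share * prevalence  # ~13.7 million people
annual_kg = people * grams_per_day / 1000.0 * 365      # ~200 million kg per year
annual_short_tons = annual_kg / 907.185                # ~220,000 tons
```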

Flax as a source of essential oils has over the past 10 years been flagged as an alternative to fish oils, based on its availability, cost to produce, and potential for higher-purity extracts. However, it has lagged behind fish-derived omega-3 oils in terms of claims and marketing. This new ruling will undoubtedly strengthen flax’s competitive positioning in the marketplace.

The fish oil nutraceutical market now exceeds $1B in annual sales. BASF in particular has raised the stakes and benchmarks for this market with its 2012 acquisition of fish oil omega-3 producer Pronova for $844M, and its subsequent acquisition of Equateq, another concentrated fish oil omega-3 producer, for an undisclosed sum, gaining access to its novel chromatographic technology for producing ultrapure oils. These moves came after BASF’s initial foray into fish oils via its acquisition of Cognis in 2010 for €3B.

It is of interest to note that BASF stands at the threshold of the pharmaceutical industry, as Pronova is the primary producer of GlaxoSmithKline’s blockbuster drug Lovaza for the treatment of hypertriglyceridemia. Thus the line between food and pharmaceuticals continues to blur, as we’ve noted before. A leitmotif we expect to see increasingly in the future is food as pharmaceutical, evolving along the path from food, to extract, to increasingly purified and therapeutically active extract, to final purified active ingredient. The recent ruling on flaxseed oil will certainly give BASF and others in the chemical and pharmaceutical industries a run for their money.