Stanford University researchers have published a new study in Energy & Environmental Science that applies artificial intelligence (AI) techniques to accelerate the development of advanced batteries. Specifically, they looked to improve solid-state battery electrolytes, a promising class of materials that could improve the safety, performance, and cost of energy storage, affecting important applications like plug-in vehicles. While this initial Stanford study has not yet physically resulted in better batteries, it presents an early and important case study in how AI will change the way science is done, and how it can accelerate progress on open problems like next-generation battery development.
Amazon recently announced Amazon Go, aiming to transform the age-old brick-and-mortar retail experience. The official news broke via multiple channels, including a well-produced YouTube video showing shoppers entering a stylish grocery store. Central to the concept is the absence of any physical checkout system: shoppers check in upon arrival and browse as they normally would. Amazon says it uses a combination of computer vision, machine learning, and artificial intelligence (AI) to track users and items throughout the store. When a shopper picks a product, this array of in-store sensors and back-end analytics automatically tabulates the final bill (deducting it from their Amazon account, of course), allowing shoppers to be on their way: “Just Walk Out” technology. According to initial reports, Amazon has built a 1,800 ft² test site in one of its Seattle buildings. While today it is open only to Amazon employees, Amazon says it may allow the public to shop in “early 2017.” To put the store’s scope in perspective, it is the size of a modest home, whereas most of the U.S.’s 38,000 supermarkets are 50,000 ft² or more, upwards of 25 times the size of Amazon’s store.
The headlines are rife with claims of huge market opportunities highlighted by poorly defined buzzwords such as the Internet of Things, Artificial Intelligence, Industry 4.0, and the Digital Thread. While few would quibble with the potentially massive opportunity, the majority of enterprises are unprepared for how the digital transformation of their business will change both the products and services they offer, and the processes they use to generate them.
In this webinar, Lux:
- Explored what emerging technologies will threaten the status quo
- Provided guidance for sorting through the hype to understand the offerings of players both big and small
- Suggested considerations for a cohesive corporate strategy that maximizes the roles of disparate functions like IT, product development, and M&A
To view this webinar recording click here.
For the audio recording click here.
By: Kevin See
Artificial intelligence (AI) is one of the most over-hyped phrases of the 21st century, one that inspires wonder and fear in equal parts. Today, we have implemented only the bare minimum of AI research's capabilities to make our machines and systems smarter, faster, and more efficient. However, implementing AI techniques in today's products and processes comes with a host of challenges, not the least of which is understanding how the pieces fit together. Visualizations are often useful for depicting complicated relationships in multivariate systems in an intuitive manner. A Sankey diagram framework is ideal for this purpose, breaking the space into three key areas (Applications, Domains, and Methods) and then linking these to their root disciplines. AI applications are the complex tasks that computers must complete to successfully execute higher-level functions. Domains are essentially fields of study within AI. Methods are the technical approaches that computer and data scientists apply to solve machine learning challenges.
We can show the landscape of AI through the relationships between these different levels. AI applications are shown on the left, with their relationships to domains illustrated via connections. For example, affective computing applications predominantly utilize computer vision (e.g., facial expression recognition), natural language processing (e.g., semantic analysis of text), and computer audition (e.g., inferring emotion from voice). The size of each connection portrays the relative importance of each domain to each application area. The domains are then mapped to the AI methods and techniques that they harness. For example, computer vision has used deep learning successfully, especially in the past several years; previously, computer vision relied more heavily on older techniques, such as random forest decision tree models and support vector machines. Images and videos can, in total, comprise massive but somewhat redundant data sets; hence in some cases, dimensionality reduction serves to compress information in the process of making predictive inferences. Finally, methods and techniques connect to their root disciplines. For example, deep learning is, in essence, a class of methods that came from the machine learning community. More classic regression techniques are also claimed by machine learning but, from a historical perspective, are more heavily rooted in statistics.
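To sketch how such a figure could be assembled, the mapping above can be encoded as weighted (source, target) links and flattened into the parallel node and link lists that Sankey plotting libraries (for example, plotly's go.Sankey) consume. The specific entries and weights below are illustrative assumptions drawn from the examples in the text, not the actual data behind the figure.

```python
# Illustrative subset of the Applications -> Domains -> Methods mapping.
# Entries and weights are hypothetical, chosen only to show the structure.
links = {
    ("affective computing", "computer vision"): 3,
    ("affective computing", "natural language processing"): 2,
    ("affective computing", "computer audition"): 2,
    ("computer vision", "deep learning"): 4,
    ("computer vision", "support vector machines"): 1,
    ("computer vision", "dimensionality reduction"): 1,
    ("deep learning", "machine learning"): 5,
    ("support vector machines", "statistics"): 2,
}

def to_sankey(links):
    """Flatten weighted (source, target) pairs into the parallel
    node/source/target/value lists that Sankey plotting libraries expect."""
    nodes, index = [], {}
    for pair in links:                      # assign each name a node index
        for name in pair:
            if name not in index:
                index[name] = len(nodes)
                nodes.append(name)
    source = [index[s] for s, _ in links]   # link start indices
    target = [index[t] for _, t in links]   # link end indices
    value = list(links.values())            # link widths (importance)
    return nodes, source, target, value

nodes, source, target, value = to_sankey(links)
```

The four resulting lists can be passed directly to a Sankey trace in a plotting library; the link widths (`value`) carry the relative importance of each domain to each application.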
The mapping shown in this figure is intended to be representative, not exhaustive. In some cases, the relationships could be thought of as a Venn diagram. For example, while Figure 1 taken literally would imply that statistics, data mining, and machine learning are independent, in reality there is a large amount of overlap among the three disciplines. Deep learning could also fairly be considered just another type of regression. However, for clarity, we attempted to disentangle the important components of the landscape and make them as orthogonal as possible.
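To make one of the methods in the map concrete, the dimensionality reduction discussed above can be sketched as a minimal principal component analysis (PCA) via singular value decomposition. The data here is synthetic, and its rank-3 structure is an assumption standing in for the redundancy of real image or video data.

```python
import numpy as np

# Synthetic "image-like" data: 200 samples of 64-dimensional vectors that
# really vary along only 3 directions, i.e., massive but redundant.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 3))   # 3 true degrees of freedom
mixing = rng.normal(size=(3, 64))
X = latent @ mixing                  # 200 x 64 matrix of rank ~3

def pca_compress(X, k):
    """Project X onto its top-k principal components and report the
    fraction of total variance those components retain."""
    Xc = X - X.mean(axis=0)                              # center features
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)    # principal axes in Vt
    explained = (S[:k] ** 2).sum() / (S ** 2).sum()      # variance retained
    return Xc @ Vt[:k].T, explained                      # 200 x k codes

Z, explained = pca_compress(X, k=3)
```

With genuinely redundant data like this, a handful of components retains nearly all of the variance, which is exactly the kind of compression that makes downstream predictive inference cheaper.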
More importantly, conducting such an exercise enables understanding of AI methodologies and domains based on their core concepts, and leads to insights on the hype versus reality of some of the more commonly cited techniques. For example, deep learning approaches can be extremely useful, but also have their limitations. Deep learning works well where there are massive, well-labeled data sets, and can bring huge advances in areas like speech, voice, and object recognition. This map also reinforces that there is no singular solution that will enable “intelligent machines”; rather, AI will continue to grow as a combination of techniques and approaches used to solve discrete problems. Finally, it is clear that not all AI methods are suited to the same types of problems: some techniques are best suited to huge data sets where the data is all of one type, whereas others are better when the input variables span a range of data types, such as images, text, and sound.
There is little doubt about the eventual impact of AI on industry and society more broadly, meaning companies and governments alike will need to understand it. It is most certainly an area where tracking developments in a structured way will be critical.