Tag Archives: Artificial Intelligence

Artificial Creativity: AI Levels Up Yet Again

We’re just beginning to get our heads around Artificial Intelligence, but the machines are already making their next move: creativity. While we still think of imagination as an innately human capability, advances in computing power are turning arts as diverse as architecture, music, movies, and material design into easily accessible, programmable spaces. In some areas, machines have already surpassed human originality and quality – as rated by other humans – and more fields are likely to fall.

WEBINAR – Beyond Buzzwords: Capitalizing on the Digital Transformation of the Enterprise

The headlines are rife with claims of huge market opportunities highlighted by poorly defined buzzwords such as the Internet of Things, Artificial Intelligence, Industry 4.0, and the Digital Thread. While few would quibble with the potentially massive opportunity, the majority of enterprises are unprepared for how the digital transformation of their business will change both the products and services they offer, and the processes they use to generate them.

In this webinar, Lux:

  • Explored what emerging technologies will threaten the status quo
  • Provided guidance for sorting through the hype to understand the offerings of players both big and small
  • Suggested how a cohesive corporate strategy can maximize the roles of disparate functions like IT, product development, and M&A


By: Kevin See

Getting Smarter About the Applications, Domains, and Methods Within Artificial Intelligence


Artificial intelligence (AI) is one of the most over-hyped phrases of the 21st century, one that inspires wonder and fear in equal parts. Today, we’ve implemented only the bare minimum of capabilities from AI research to make our machines and systems smarter, faster, and more efficient. However, implementing AI techniques in today’s products and processes comes with a host of challenges, not the least of which is understanding how the pieces fit together. Visualizations are often useful for depicting complicated relationships in multivariate systems in an intuitive manner. A Sankey diagram is ideal for this purpose: it breaks the space into three key areas (Applications, Domains, and Methods) and then links these to their root disciplines. AI applications are the complex tasks that computers must complete to successfully execute higher-level functions. Domains are essentially fields of study within AI. Methods are the technical approaches that computer and data scientists apply to solve machine learning challenges.
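Before it is ever drawn as a Sankey diagram, this three-level framework is just a set of weighted edges, and it can be sketched as a plain data structure. The snippet below is an illustrative sketch only: the application, domain, and method names echo the discussion here, but the weights are hypothetical placeholders, not figures from the actual analysis.

```python
# Illustrative sketch of the Applications -> Domains -> Methods mapping
# as weighted edges. All weights are hypothetical placeholders; larger
# weight = thicker Sankey connection (greater relative importance).

app_to_domain = {
    "affective computing": {
        "computer vision": 0.40,              # e.g., facial expression recognition
        "natural language processing": 0.35,  # e.g., semantic analysis of text
        "computer audition": 0.25,            # e.g., inferring emotion from voice
    },
}

domain_to_method = {
    "computer vision": {
        "deep learning": 0.60,
        "support vector machines": 0.20,
        "random forests": 0.10,
        "dimensionality reduction": 0.10,
    },
}

def ranked_domains(application):
    """Return the domains an application draws on, most important first."""
    links = app_to_domain[application]
    return sorted(links, key=links.get, reverse=True)

print(ranked_domains("affective computing"))
# ['computer vision', 'natural language processing', 'computer audition']
```

Any Sankey-drawing tool then only needs these edge lists; the framework itself is nothing more than two weighted bipartite mappings chained together.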

We can show the landscape of AI through the relationships between these levels. AI applications appear on the left, and their relationships with domains are illustrated via connections. For example, affective computing applications predominantly utilize computer vision (e.g., facial expression recognition), natural language processing (e.g., semantic analysis of text), and computer audition (e.g., inferring emotion from voice). The size of each connection portrays the relative importance of each domain to each application area. The domains are then mapped to the AI methods and techniques they harness. For example, computer vision has used deep learning with great success, especially in the past several years; previously, it relied more heavily on classic regression techniques, such as random forest decision-tree models and support vector machines. Images and videos can, in total, comprise massive but somewhat redundant data sets; hence, in some cases, dimensionality reduction serves to compress information in the process of making predictive inferences. Finally, the connections from methods and techniques lead to their root disciplines. For example, deep learning is, in essence, a class of methods that came from the machine learning community. More classic regression techniques are also claimed by machine learning, but from a historical perspective they are more heavily rooted in statistics.
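To make the dimensionality reduction point concrete, here is a minimal sketch using variance-based feature selection, one of the simplest forms of dimensionality reduction (PCA is the more common choice in practice). The tiny four-pixel “image” data set is entirely hypothetical; its middle pixel is constant across samples, standing in for the redundancy found in real image data.

```python
# Minimal sketch: compress redundant "image" data by keeping only the
# highest-variance pixels. Hypothetical data; real pipelines would use
# PCA or similar on far larger inputs.
from statistics import pvariance

# Each row is a flattened 4-pixel image; pixel index 1 is constant
# (redundant), so it carries no information for prediction.
images = [
    [0.9, 0.5, 0.1, 0.8],
    [0.1, 0.5, 0.9, 0.2],
    [0.8, 0.5, 0.2, 0.9],
    [0.2, 0.5, 0.8, 0.1],
]

def top_k_features(rows, k):
    """Indices of the k highest-variance columns (the informative pixels)."""
    n_cols = len(rows[0])
    variances = [pvariance([row[i] for row in rows]) for i in range(n_cols)]
    return sorted(range(n_cols), key=lambda i: variances[i], reverse=True)[:k]

def compress(rows, k):
    """Keep only the top-k informative columns of each row."""
    keep = sorted(top_k_features(rows, k))
    return [[row[i] for i in keep] for row in rows]

print(compress(images, 2))
```

Dropping the constant pixel shrinks each sample without losing anything useful for downstream prediction, which is the essence of the redundancy argument above.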

The mapping shown in this figure is intended to be representative, not exhaustive. In some cases, the relationships could be better thought of as a Venn diagram: while Figure 1, taken literally, would imply that statistics, data mining, and machine learning are independent, in reality there is substantial overlap among the three disciplines. Deep learning could also fairly be considered just another type of regression. However, for clarity we attempted to disentangle the important components of the landscape and make them as orthogonal as possible.

More importantly, conducting such an exercise enables an understanding of AI methodologies and domains based on their core concepts, and leads to insights on the hype versus the reality of some of the more commonly cited techniques. For example, deep learning approaches can be extremely useful but also have their limitations: deep learning works well where there are massive, well-labeled data sets, and can bring huge advances in areas like speech, voice, and object recognition. This map also reinforces that there is no singular solution that will enable “intelligent machines”; rather, AI will continue to grow as a combination of techniques and approaches used to solve discrete problems. Finally, it is clear that not all AI methods suit the same types of problems: some techniques are best suited to huge data sets where the data is all of one type, whereas others are better when the input variables span a range of data types, such as images, text, and sound.

There is little doubt about the eventual impact of AI on industry and society more broadly, meaning companies and governments alike will need to understand it. It is most certainly an area where tracking developments in a structured way will be critical.