
Machine learning 'the next competitive frontier' in a decade


Machine learning will be the next competitive frontier within a decade, and having the best-performing machine learning models will be the advantage that separates the winners from the losers, says a senior executive at data management company MapR.

Dr Crystal Valentine, the company's vice-president of technology strategy, told iTWire in an interview that it was still the very early days of seeing machine learning and deep learning being put to work by enterprises outside academia.

Dr Valentine has a background in big data research and practice; before joining MapR, she was a professor of computer science at Amherst College. She has authored academic publications in the areas of algorithms, high-performance computing, and computational biology, and holds a patent for Extreme Virtual Memory.

As a former consultant at Ab Initio Software, where she worked with Fortune 500 companies to design and implement high-throughput, mission-critical applications, and as a technical consultant to equity investors focused on technology, Dr Valentine has developed significant business experience in the enterprise computing industry.

She has a doctorate in computer science from Brown University and was a Fulbright Scholar to Italy. She was interviewed by email.

iTWire: What does machine learning mean, in ordinary layman's language?

Dr Crystal Valentine: Machine learning encompasses a number of different algorithms for training computers to solve specific tasks, including tasks that are part of larger artificial intelligence systems. Machine learning techniques have proven successful at automating complex tasks that were previously thought to be too difficult for computers to solve, including understanding human language and identifying objects in digital images.

Traditional algorithmic approaches to solving problems without machine learning require a human programmer to give step-by-step instructions to the computer, defining explicit rules and logic. As an analogy, in primary school when we learn how to divide two integers, we learn long division to solve the problem. This is basically an algorithm!
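
To illustrate, the long-division idea can be written out as explicit, step-by-step code. The function below is a minimal sketch (not from the interview) of the traditional rule-based approach Dr Valentine describes: every step is spelled out by the programmer.

```python
# A minimal sketch of a traditional, explicitly programmed algorithm:
# schoolbook long division of two positive integers, digit by digit.

def long_division(dividend: int, divisor: int) -> tuple[int, int]:
    """Return (quotient, remainder) using the schoolbook procedure."""
    quotient = 0
    remainder = 0
    for digit in str(dividend):          # bring down one digit at a time
        remainder = remainder * 10 + int(digit)
        q_digit = remainder // divisor   # how many times does the divisor fit?
        remainder -= q_digit * divisor
        quotient = quotient * 10 + q_digit
    return quotient, remainder

print(long_division(1234, 7))  # (176, 2), since 176 * 7 + 2 = 1234
```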

Crystal Valentine.

Dr Crystal Valentine: "Fundamentally, computers are deterministic, which means they only do exactly what they’re told to do."

Machine learning solutions, by contrast, are trained to build models that learn non-obvious patterns, which the application can then use to solve future problems through pattern recognition, without explicit instructions on how to do so.

Again, by analogy, when a human sees a picture of a cat, he/she can identify it as being a cat without following a programmatic set of instructions – it’s done by complex pattern recognition carried out in the brain, drawing on the cats he/she has seen before. Machine learning models can recognise patterns in the same way. The training of machine learning models is computationally intensive and requires large volumes of data but is a very powerful and robust method for developing applications.
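
As a concrete, hedged example of that training process, the sketch below uses scikit-learn's small bundled handwritten-digit images (a stand-in for the cat photos in the analogy). The model learns purely from labelled examples; no recognition rules are written by hand.

```python
# A minimal sketch of learning by example rather than by explicit rules,
# using scikit-learn's bundled handwritten-digits dataset.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

digits = load_digits()                      # 8x8 greyscale images, labels 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)                 # "training": learn patterns from labelled examples

print("accuracy:", model.score(X_test, y_test))  # roughly 0.97 on held-out images
```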

When one speaks of an intelligent application, what does that mean vis-a-vis the average human's understanding of the word intelligence?

“Intelligence” is a bit of a loaded term when it comes to computers and applications. In 1950, Alan Turing devised a test that he posited would be a standard by which we could measure whether computers had achieved intelligence. Essentially, the test was to see whether a human interacting with a “chat bot” type of application could be fooled into thinking it was another human and not a computer. If the human was fooled, then that application could rightly be considered intelligent. Modern applications have since passed that test. Nonetheless, the computer science community has evolved its thinking and collectively raised the bar on what a truly intelligent system should be able to do. The consensus is that chatting is actually quite a basic example of an intelligent system.

Fundamentally, computers are deterministic, which means they only do exactly what they’re told to do. Therefore, machine learning models can only simulate what a human would do through pattern recognition. When we talk about human intelligence, we include ideas like emotional intelligence, creativity, inference and even wit. We might be able to train a machine learning model to simulate wit or even artistic creativity in the style of a certain artist, but it doesn’t really know what it is – it can only simulate it. Whether simulating human intelligence through pattern recognition is equivalent to actually being intelligent is something for philosophers to debate.

In today’s vernacular, an intelligent application typically refers to an application that relies on deep analytical processing which might include machine learning. For example, a credit card fraud prevention system today could use a machine learning model to detect transactions that are likely fraudulent based on patterns it identified in past fraudulent transactions, rather than using an explicitly defined list of rules that would indicate fraud.

A key technical feature of intelligent applications is that they use machine learning or other analytical methods to automate decisions. This goes beyond traditional analytical applications that merely report information.
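
To make that contrast concrete, here is a small hypothetical sketch (synthetic transactions and invented feature names, not any production system) of a learned fraud model alongside a hard-coded rule of the kind described above:

```python
# Illustrative sketch only: a learned fraud classifier on synthetic data,
# contrasted with a hard-coded rule. Features and thresholds are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic transactions: [amount, distance_from_home_km, hour_of_day]
legit = rng.normal([40, 5, 14], [20, 3, 4], size=(500, 3))
fraud = rng.normal([300, 800, 3], [150, 400, 2], size=(50, 3))
X = np.vstack([legit, fraud])
y = np.array([0] * 500 + [1] * 50)

# Old-style explicit rule: flag anything over a fixed amount.
def rule_based(tx):
    return tx[0] > 200

# ML approach: let the model learn the pattern from labelled history.
model = LogisticRegression(max_iter=1000).fit(X, y)

tx = np.array([[250, 900, 2]])              # large, far from home, 2am
print("rule says fraud:", rule_based(tx[0]))
print("model P(fraud):", model.predict_proba(tx)[0, 1].round(3))
```

The rule fires on a single threshold; the model weighs all three signals together, which is the decision-automation point made above.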

Again, in layman's language, what is deep learning?

Deep learning is a subset of machine learning. Deep learning models belong to a particular class of machine learning models, called artificial neural networks, that were inspired by the structure of biological neural networks. Artificial neural networks are essentially functions with many parameters, corresponding to “neurons,” that can be trained to do pattern recognition.

The function represented by the artificial neural network takes an input (like a digital image) and outputs a classification for that image (like a label that says the image is a picture of a cat). Deep learning models are artificial neural networks composed of a large number of artificial neurons. Deep learning has gained popularity of late because it has only recently become practical, thanks to improvements in the data platforms used to train the models, which sometimes involve volumes of data and computational power that would have been too expensive as little as five years ago.
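
To ground the "function with many parameters" description, here is a minimal NumPy sketch of a single forward pass through a tiny artificial neural network. The weights are random stand-ins; training would adjust these parameters so the output labels become accurate.

```python
# A tiny artificial neural network as a plain parameterised function:
# input -> hidden layer -> output scores. Weights are random stand-ins;
# training would tune these parameters to recognise patterns.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(64, 32)), np.zeros(32)   # 64 inputs (e.g. a flattened 8x8 image)
W2, b2 = rng.normal(size=(32, 10)), np.zeros(10)   # 10 output classes

def forward(x):
    h = np.maximum(0, x @ W1 + b1)     # hidden "neurons" with ReLU activation
    scores = h @ W2 + b2               # one score per class
    return scores

x = rng.normal(size=64)                # a stand-in flattened image
print(forward(x).argmax())             # index of the highest-scoring class
```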

Outline one application of deep learning in a common business.

One of the most compelling and valuable business applications of deep learning is in the computer vision systems built into autonomous driving vehicles. Deep learning models perform a number of important tasks in autonomous cars, including performing semantic segmentation on video data collected by dashboard-mounted cameras. Semantic segmentation helps the car distinguish between different types of objects in view, including pedestrians, stationary objects, other vehicles on the road, the driving surface, and traffic signals.

How are insights extracted from images (still and moving)?

Machine learning models only get us part of the way to “insight”. Machine learning models take data (like a raw digital image) as input and give us information as output (like a message stating that the image is a picture of a cat). Getting from information to insight requires another analytical step, and the problem domain is really important here.

What is meant by deep neural networks, convolutional neural networks and recurrent neural networks?

“Deep neural network” is synonymous with “deep learning model” – it refers to an artificial neural network with a large number of neurons that are organised into many layers (and is therefore “deep”). Convolutional neural networks and recurrent neural networks are just classes of neural networks that have been useful in doing image recognition and natural language processing, respectively.
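
As a hedged sketch of how those two classes differ in code, the Keras snippet below defines a minimal convolutional network for images and a minimal recurrent network for token sequences; the layer sizes and vocabulary size are arbitrary choices for illustration.

```python
# Minimal sketches of the two network classes using Keras (sizes arbitrary).
from tensorflow import keras
from tensorflow.keras import layers

# Convolutional network: learns local spatial patterns, suited to images.
cnn = keras.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(16, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])

# Recurrent network: processes a sequence step by step, suited to language.
rnn = keras.Sequential([
    layers.Input(shape=(None,)),                 # variable-length token sequence
    layers.Embedding(input_dim=10000, output_dim=32),
    layers.LSTM(32),
    layers.Dense(2, activation="softmax"),
])
```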

What does one mean by a converged data platform?

Converged data platforms are a new category of modern data management platforms that were designed to take advantage of emerging infrastructure trends, like cloud computing, edge computing, containers, and the proliferation of large commodity hardware clusters, and to support next-generation intelligent applications. Intelligent applications frequently require a variety of data management and real-time application processing technologies working together concurrently. A converged data platform provides the necessary data management capabilities and supports a wide variety of processing technologies in a single, fully integrated platform in order to support intelligent applications with the necessary speed, scale, and reliability.

Legacy technologies, including relational databases, do not scale, are too expensive, and are inadequate for supporting these applications. Big data technologies, like open source or hybrid open source products based on Apache Hadoop, are fundamentally limited: they cannot serve as a system of record for mission-critical applications and lack the speed, scale, and reliability required by intelligent applications. In sum, the requirements of today’s intelligent applications are not adequately addressed by legacy or big data technologies, thus necessitating a new category, namely converged data platforms.

Why do businesses that deal in these technologies prefer to make them sound mysterious instead of explaining the benefits to the common man?

There is definitely a lot of hype around machine learning and deep learning and, as a result, there are a lot of companies that are positioning themselves in the “me too” camp. To a practitioner, it’s pretty easy to distinguish between the companies that are really innovating and those that are not.

But to a casual industry observer, I agree that it’s hard to distinguish between what’s real and what’s just noise, and this is compounded by the fact that machine learning is fundamentally a complex mathematical construct. So it’s not something we should expect to be intuitive for the layman. (After all, developing a good machine learning model can take years and is often done in academia as part of a doctoral dissertation.)

I view this noise and confusion as a natural feature of any fast-growing field. We have seen this in other fields as they have grown. The idea of big data, for example, when it started to grow in earnest about a decade ago, at first appeared to be just hype, and it seemed that everyone was jumping on that bandwagon.

Today, the market has matured substantially and as practitioners gain experience, it becomes easier for them to understand how platforms are differentiated on the basis of their capabilities. Machine learning is only recently becoming mainstream in enterprise computing, so it will take time for that market to mature. Until that happens, the companies that are just jumping on the machine learning bandwagon can ride the hype wave by presenting very vague and jargon-laden descriptions of what they offer.

And, finally, to what extent has this industry become a necessary part of the tech landscape and what will be its role 10 years from now?

We are still in the very early days of seeing machine learning and deep learning being put to work by enterprises outside of academia. While machine learning is not a new field, it is used increasingly today by businesses thanks to advances in computing and data management technologies that support the computationally intensive model training processes that sometimes require upwards of petabytes of data. In many industries machine learning will, like any technological innovation, provide incremental improvements to existing applications.

For example, credit card fraud detection algorithms that used to be based on hard-coded sets of rules are now often built on machine learning models, yielding better results and improved margins. Oil drilling operations today use machine learning models to make real-time adjustments to the drill to improve output, again yielding improved margins.

In other industries, however, machine learning will be truly disruptive. Autonomous driving vehicles, for example, will be incredibly disruptive to several industries including automotive, ride-sharing, transportation, and logistics. Whether providing incremental improvements to business operations or as a major disruptor, I believe that machine learning and deep learning will become pervasive across all industries.

Fundamentally, technology and automation have the effect of compressing margins in the long term. As that happens, businesses need to improve operating efficiency to stay competitive. In 10 years, machine learning will be the next competitive frontier. Having the best-performing machine learning models will be the competitive advantage that differentiates the winners from the losers.

