The Essentials of Deep Learning

By
Ava Chisling
-
July 8, 2017

Jimoh Ovbiagele is the Chief Technology Officer & co-founder of ROSS Intelligence. A self-taught programmer who started coding at the age of 10, he founded several startups in college and worked on self-driving cars. When he was 21, Jimoh came up with the idea for ROSS Intelligence and co-founded the company. Two years later, he was named a Legal Rebel by the American Bar Association and one of Forbes' 30 Under 30. He speaks around the world, from Canada to China, about artificial intelligence and the future of law.

1. What is deep learning, in terms everyone can understand?

Deep learning is a field of machine learning, which is a field of computer science, focused on designing algorithms that learn how to do things by looking at examples of how to do them (training data) rather than being instructed how to do them through explicit programming. These algorithms are called deep neural networks and are loosely inspired by the network of neurons in the human brain.
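
For readers who want to see that contrast in code, here is a minimal sketch (mine, not from the interview) of hand-written rules versus a model fit to labelled examples; the toy spam data and the choice of scikit-learn are illustrative assumptions only.

```python
# Explicit programming: the programmer writes the rule by hand.
def is_spam_by_rule(message: str) -> bool:
    # Only catches the patterns the programmer thought of.
    return "free money" in message.lower()

# Machine learning: the program infers the rule from labelled examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

training_messages = ["free money now", "claim your free prize",
                     "meeting at noon", "lunch tomorrow?"]
training_labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam (the training data)

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(training_messages)

model = LogisticRegression()
model.fit(features, training_labels)      # "learning by looking at examples"

new_message = vectorizer.transform(["win free cash"])
print(model.predict(new_message))         # learned behaviour, not hand-coded
```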

2. What is the difference between Big Information, Big Data, Analytics, and Deep Learning?

Big information and big data are pretty much synonymous. They refer to the vast volumes of data that our computers have collected and produced, like financial transactions, videos, emails, texts, call records, medical records, and so on. Analytics refers to the set of techniques we use to analyze and model this data. Deep learning is just one of these analytical methods.

3. What are neural networks and how do they work?

Traditionally, programmers enable computers to perform a task by explicitly writing the instructions of how to do it using a computer programming language. The inherent limitation is that these programmers can only program tasks they know how to articulate and instruct logically, resulting in computer applications that solve only problems they already understand and know how to solve. But how do you tell a computer to recognize objects like tumors in CAT scans, for instance, and provide solutions to problems the programmer has never seen before? This challenge is where neural networks come in.

At a high level, you can think of a neural network as a black box: you input data on one end and it renders a response on the other. Inside this black box is a network of artificial neurons. When you pass in data, pathways in the network fire, producing a response. At first, its responses are as random as a newborn baby's, but we can teach, or train, the network to respond intelligently, so that when you pass in a CAT scan with a tumor present, it returns "positive," for example.

During training, we tune how the network's pathways fire for various inputs by comparing its responses to the desired responses in its training data (human-generated examples of correct responses). This tuning is not done by hand: it's done automatically by a training algorithm that analyzes thousands, millions, or even billions of training examples. Once training finishes, the network can give "intelligent" responses to similar inputs it has never seen before.
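
That training loop (produce a response, compare it to the desired response, nudge the pathways automatically) can be sketched in a few lines. The single-neuron NumPy example below is my own illustration, not ROSS's code; real deep networks stack many layers and are trained with frameworks rather than hand-written loops.

```python
import numpy as np

# Toy training data: human-generated examples of correct responses.
# The desired behaviour here is a simple AND of two inputs.
inputs  = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
targets = np.array([0, 0, 0, 1], dtype=float)

rng = np.random.default_rng(0)
weights = rng.normal(size=2)   # the network's "pathways", random at first
bias = 0.0
learning_rate = 0.5

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(5000):
    # Forward pass: the black box produces a response.
    response = sigmoid(inputs @ weights + bias)
    # Compare the response to the desired response.
    grad = response - targets
    # Tune the pathways automatically (gradient descent, not by hand).
    weights -= learning_rate * (inputs.T @ grad) / len(inputs)
    bias    -= learning_rate * grad.mean()

# After training, the responses approach the desired [0, 0, 0, 1].
print(np.round(sigmoid(inputs @ weights + bias), 2))
```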

4. How critical is deep learning to ROSS and helping lawyers?

With the ballooning volume of legal information and the shift of legal research from a profit center to a cost center, it's imperative for lawyers to use AI-powered legal research services like ROSS to recover time and focus on higher-value activities for their clients.

The problem with traditional legal research services is that they rely on keyword matching to find relevant documents, so they spew out information and force lawyers to comb through it painstakingly. As a result, they include unrelated documents that happen to contain the keywords and miss related documents that don't. They have no sense of what the words mean, what their synonyms are, how the words are used in context, or how important each phrase in a sentence is. ROSS uses deep neural networks to do these very things. This enables ROSS not only to bring back the relevant authority but also to intelligently return the exact sentences in the body that answer the question.
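
As a rough illustration of the difference described here, keyword matching looks only for literal word overlap, while a neural approach compares meaning through learned vector representations. The snippet below is a hedged sketch of that contrast, not ROSS's actual system; the hand-made word vectors stand in for embeddings a deep network would learn.

```python
import numpy as np

# Toy word vectors standing in for learned embeddings;
# synonyms share a vector, so "lessor" lands next to "landlord".
WORD_VECTORS = {
    "landlord": [1.0, 0.0, 0.0], "lessor": [1.0, 0.0, 0.0],
    "terminate": [0.0, 1.0, 0.0], "end": [0.0, 1.0, 0.0],
    "lease": [0.0, 0.0, 1.0], "rental": [0.0, 0.0, 1.0],
}

def embed(text: str) -> np.ndarray:
    # Average the vectors of the words we know; ignore the rest.
    vecs = [WORD_VECTORS[w] for w in text.lower().split() if w in WORD_VECTORS]
    return np.mean(vecs, axis=0) if vecs else np.zeros(3)

def keyword_overlap(query: str, doc: str) -> int:
    return len(set(query.lower().split()) & set(doc.lower().split()))

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

query = "can the lessor end the rental"
doc = "the landlord may terminate the lease"

print(keyword_overlap(query, doc))        # 1: only "the" matches literally
print(cosine(embed(query), embed(doc)))   # 1.0: the meanings line up
```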

5. How is the average company using deep learning? How does one even begin to take advantage of big data, etc.?

The average company isn’t using deep learning today. Deep learning requires a lot of training data, which the average company doesn’t have. Many companies have data and think they have training data, but not all data is suitable for training deep neural networks (which is a whole other discussion). Deep learning also requires people with deep learning expertise, who are rare today. Until breakthroughs in 2013, deep learning was a fringe field of computer science, which is why experts are scarce. ROSS started at the University of Toronto, which pioneered these breakthroughs. We opened an AI lab in Toronto earlier this year in partnership with the University to continue to create breakthrough technologies.

Companies looking to get into deep learning first need to ask themselves:

  1. What are we seeking to get the deep neural network to learn?
  2. Do we have thousands of examples of how to do this to train the system?
  3. Do we have people who can choose and build the right neural network architecture?
  4. Do we have the budget to purchase or rent the computers to run these complex algorithms?

Over time, questions 3 and 4 will become less relevant: companies like Google and Facebook will offer pre-built neural networks that businesses can use with their own data, and computers will continue to get faster as Moore’s Law advances and quantum computers mature. For the next few years, however, those building complex systems will need to develop them in-house.

6. Where do you see deep learning being of most use today — and 10 years down the line?

Today, I am seeing a lot of use in e-commerce, social networking, transportation, finance, healthcare, robotics, and cybersecurity, and we are leading the charge in law. In the next 10 years, I see increased use in these sectors and, unfortunately, in one additional sector: warfare.

7. What’s the Next Big Thing in this area?

The next big thing is multi-modal systems. Currently, engineers and scientists are developing deep neural networks that can perceive sight, sound, taste, touch, motion, or language individually, but not several modalities at the same time. The world is multi-modal, so if you can only perceive one modality, you don’t really have the full picture. Multi-modal systems will have a much greater understanding of our world and will therefore be able to have a more significant impact.

Ava Chisling

Ava is an award-winning lawyer and editor who counsels creative types, writes about pop culture/tech+law and sometimes creates ad campaigns. She is Quebec counsel for Momentum Law.