The history of AI is a neural network of the greatest thoughts and minds of humankind

By Ava Chisling
October 9, 2017

Artificial intelligence (AI) is not a new concept. The underpinnings of AI have been kicked around, at times inadvertently, by an amazing series of mankind’s most famous philosophers and thinkers, mathematicians and computer scientists, theoreticians and psychologists. AI has, in its own peculiar DNA, a neural network of the greatest thoughts and minds of humankind.


Consider this star-runged ladder of human thought: Aristotle bestowed logic and reason upon us. Descartes declared “I think, therefore I am,” proposing a duality of mind and body. Spinoza gave us an order of all nature reasoned through geometrical proofs, a work that later influenced Einstein’s search for a unified theory of the universe. And Leibniz suggested mind and body were separate, but evenly matched. He dreamed of reducing reasoning to an algebra of thought. (source: Machines Who Think, by Pamela McCorduck)


George Boole

A century later came George Boole. Every keystroke on your computer, every swipe on your phone, every answer from Siri can be traced to Boole. We are awash in Boole. Boolean logic, which reduces thought to 1s and 0s, is the foundation for all digital computing. There is Boolean algebra, built on symbolic logic, and Boolean search (“this AND that”).

Boole was a 19th-century polymath and professor of mathematics at Ireland’s Queen’s College (now University College) Cork. This child prodigy, self-taught linguist, practical scientist, social reformer, poet, psychologist, humanitarian and religious thinker not only laid the groundwork for modern computing and the digital revolution — he was a progenitor in a remarkable family shaping thought, mathematics and AI through today. He also married Mary Everest, a niece of George Everest, the geographer and surveyor after whom the world’s highest mountain is named.

According to his biography, The Life and Work of George Boole: A Prelude to the Digital Age, Boole was deeply interested in the idea of expressing the workings of the human mind in symbolic form.

His two books on this subject, The Mathematical Analysis of Logic (1847) and An Investigation of the Laws of Thought (1854), form the basis of today’s computer science and electronic circuitry. He also made important contributions to areas of mathematics such as invariant theory (of which he was the founder), differential and difference equations, and probability. Much of the “new mathematics” now studied by children in school (set theory, binary numbers and Boolean algebra) has its origins in Boole’s work. (here)

In Boolean logic, all variables are either “true” or “false,” or “on” or “off” (now digital 1s and 0s). Although Boole’s theory far preceded the digital age, in the 1930s the American Claude Shannon applied Boolean logic to the design of the electrical circuits that led to modern computers.
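To make that concrete, here is a toy Python sketch (an illustration, not anything written by Boole or Shannon) showing Boolean values as 1s and 0s, and how two Boolean operations, AND and XOR, combine into a half-adder, the simplest building block of the switching circuits Shannon analyzed:

```python
# A toy sketch of Boolean logic as 1s and 0s (illustrative only).

def AND(a, b):
    return a and b

def XOR(a, b):
    return a != b

def half_adder(a, b):
    """Add two one-bit values using only Boolean operations.

    Returns (sum_bit, carry_bit), the basic building block of the
    switching circuits Claude Shannon described with Boolean algebra.
    """
    return XOR(a, b), AND(a, b)

for a in (False, True):
    for b in (False, True):
        s, c = half_adder(a, b)
        # int() turns True/False into the familiar 1s and 0s
        print(f"{int(a)} + {int(b)} -> carry {int(c)}, sum {int(s)}")
```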


Statue of Alan Turing. Photo: Jon Callas

These switches helped innovators like Alan Turing, depicted in the movie The Imitation Game, the true story of how the previously indecipherable German codes produced by the Enigma machine were cracked to help win World War II. Turing later developed a hypothetical machine (known as the Turing machine) that can simulate any computer algorithm using 1s and 0s. Turing also proposed a test for an artificial general intelligence: a computer that could, over the course of five minutes of text exchange, successfully deceive a real human into thinking he was conversing with another person. (here)

But such a learning machine was impossible during Turing’s lifetime. Computers simply lacked the power, until recently.
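To see what Turing meant, here is a hedged toy sketch (not Turing’s own construction) of a machine that reads and writes symbols on a tape according to a fixed table of rules. This particular table adds one to a binary number:

```python
# A toy Turing machine that adds 1 to a binary number (illustrative sketch).

def run_turing_machine(tape, rules, state="seek_end", blank="_"):
    """Execute a transition table over a tape of symbols until the machine halts."""
    tape, head = list(tape), 0
    while state != "HALT":
        symbol = tape[head] if 0 <= head < len(tape) else blank
        write, move, state = rules[(state, symbol)]
        if head < 0:                 # grow the tape if the head walked off the left end
            tape.insert(0, blank)
            head = 0
        if head >= len(tape):        # ...or off the right end
            tape.append(blank)
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape).strip(blank)

# Rules: (state, symbol read) -> (symbol to write, head move, next state)
increment_rules = {
    ("seek_end", "0"): ("0", "R", "seek_end"),
    ("seek_end", "1"): ("1", "R", "seek_end"),
    ("seek_end", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),   # 1 plus a carry becomes 0; keep carrying
    ("carry", "0"): ("1", "R", "HALT"),    # absorb the carry and stop
    ("carry", "_"): ("1", "R", "HALT"),    # the number was all 1s, e.g. 111 -> 1000
}

print(run_turing_machine("1011", increment_rules))  # prints 1100 (11 plus 1 is 12)
```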


Geoffrey Hinton

AI researchers like the University of Toronto’s Geoffrey Hinton are using today’s computing power to help build and nurture deep learning neural networks that now achieve accurate translation, object and image recognition, and natural language generation, all learning that requires advanced intelligence.

In addition to his teaching, Hinton works with the Google Brain project to study and develop AI.

Hinton is also the great-great-grandson of George Boole, of Boolean logic, algebra, search, you name it. Another of Hinton’s great-great-grandfathers was a celebrated surgeon. His father was a venturesome entomologist, and his father’s cousin was a Los Alamos researcher. (here) Another relative was a mathematician who postulated a fourth dimension. Geoffrey Hinton’s middle name, of course, is Everest.

“Hinton is one of the most famous researchers in the field of artificial intelligence. His work helped kick off the world of deep learning we see today. He earned his PhD in artificial intelligence back in 1977 and, in the 40 years since, he’s played a key role in the development of back-propagation and Boltzmann Machines.” (here)

According to Jimoh Ovbiagele, Chief Technology Officer and co-founder of ROSS Intelligence: “The big problem with classical AIs based on Boolean logic is that we have to program functionality (if A is true and B is true, then do C). Traditional AIs are painstaking to build, and functions like driving vehicles or playing Go are too complicated to program because they require too many rules. For example, imagine if someone asked you how to drive from point A to point B, and you had to instruct them how to move every inch of their body, factoring in everything they might encounter along the way. Neural networks, however, can learn complex logical functions just by analyzing the data. However, they currently require copious amounts of the right data and, as a result, are limited by the data we have available to feed them.”
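A toy sketch of that contrast (illustrative only, and in no way ROSS’s actual code): the first function hard-codes a rule of the form “if A is true and B is true, then do C,” while the second, a single perceptron, learns the same behavior purely from example data:

```python
# Hand-written rules versus a model that learns from data (toy example).

# 1) Classical, rule-based approach: every behavior is programmed explicitly.
def rule_based(a, b):
    # "if A is true and B is true, then do C"
    return 1 if a == 1 and b == 1 else 0

# 2) Learned approach: a single perceptron infers the same function
#    from labelled examples instead of hand-written rules.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, bias = [0.0, 0.0], 0.0

for _ in range(20):                          # a few passes over the data
    for (a, b), target in examples:
        prediction = 1 if w[0] * a + w[1] * b + bias > 0 else 0
        error = target - prediction
        # Perceptron update rule: nudge the weights toward the correct answer
        w[0] += 0.1 * error * a
        w[1] += 0.1 * error * b
        bias += 0.1 * error

for (a, b), _ in examples:
    learned = 1 if w[0] * a + w[1] * b + bias > 0 else 0
    print(a, b, "rule:", rule_based(a, b), "learned:", learned)
```

The rule here is trivial, which is exactly Jimoh’s point: for tasks like driving or Go there are far too many such rules to write by hand, so learning from data is the only practical route.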

ROSS CTO Jimoh Ovbiagele

Like a child, a computer will make logical mistakes while learning. So Hinton and his colleagues developed the back-propagation algorithm, which provides insight into how changing the weights and biases changes the overall behavior of the network. (here) Back-propagation has roots of thought that go back to psychologists Sigmund Freud and Carl Jung. Which brings us to today. And to ROSS.
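Before turning to ROSS, here is a compact sketch of what back-propagation does (toy data, plain NumPy, and not Hinton’s original code): run a tiny network forward, measure its error, then push that error backwards through the network to work out how each weight and bias should change:

```python
import numpy as np

# A minimal back-propagation sketch: a tiny 2-4-1 network learning XOR.
# Toy example only; modern frameworks automate these gradient steps.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))   # hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    # Forward pass: compute the network's current answers
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the chain rule gives, for every weight and bias,
    # how much the overall error changes when that parameter changes
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Nudge each parameter a small step in the direction that reduces the error
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2))  # predictions move toward [[0], [1], [1], [0]] as training proceeds
```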

The link between Boole and Hinton and the ROSS team is more direct than one would think. Jimoh Ovbiagele was a student of Hinton’s at the University of Toronto. He cites Hinton’s neural networks course as one inspiration behind starting ROSS Intelligence and his involvement in AI algorithms. “What amazed me about neural networks was that they could learn ascending levels of abstraction from raw data,” says Jimoh. “For example, given raw pixels in an image, neural networks can learn that a pattern of dark pixels is an edge, a pattern of edges forms a nose, and a nose, mouth, and eyes form a human face!”

Jimoh was particularly interested in the applications of neural networks to natural language processing (understanding language). “With neural networks, computers can store the meaning of words in a numerical form called a Thought Vector and use those vectors to understand relationships between concepts, like ‘king’ - ‘queen’ = ‘man’ - ‘woman’ to provide a simple example.”
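A toy version of that arithmetic, using made-up three-dimensional vectors rather than real learned embeddings (which have hundreds of dimensions and are produced by systems such as word2vec), looks like this:

```python
import numpy as np

# Invented toy word vectors, for illustration only.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.2, 0.9, 0.1]),
    "woman": np.array([0.2, 0.2, 0.8]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Rearranging 'king' - 'queen' = 'man' - 'woman' gives
# 'king' - 'man' + 'woman', which should land nearest to 'queen'.
target = vectors["king"] - vectors["man"] + vectors["woman"]
best = max(vectors, key=lambda word: cosine(vectors[word], target))
print(best)  # queen
```

In a real embedding space the match is approximate rather than exact, but the nearest neighbor of king - man + woman is typically queen.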

The AI algorithms ROSS uses differ from those of its competitors and are an example of how far the science has come. “Our competitors just match exact keywords between queries and documents, which is the cause of their imprecision,” says Jimoh. “ROSS leverages information present inside natural language sentences to find precise answers inside of cases. It recognizes words’ compositional meaning depending on their grammatical relations to each other (‘the cat in the hat’ vs. ‘the hat in the cat’), what words are synonymous with each other (‘bike’ = ‘bicycle’), and the various ways a single idea can be expressed (‘he ran up the bill’ = ‘he increased the bill’).”
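That difference can be sketched generically (this is only an illustration of keyword matching versus meaning-aware matching, not a description of how ROSS is implemented):

```python
# Exact keyword matching versus meaning-aware matching (illustrative only).

documents = [
    "The defendant increased the bill for services rendered.",
    "The cat in the hat ran down the street.",
]
query = "he ran up the bill"

def keyword_score(query, doc):
    """Count shared words, blind to synonyms, paraphrase, and grammar."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

# A hand-made stand-in for what learned representations do: map different
# phrasings of the same idea onto shared concepts before comparing.
paraphrases = {"ran up": "increased", "bike": "bicycle"}

def normalize(text):
    text = text.lower().strip(".")
    for phrase, canonical in paraphrases.items():
        text = text.replace(phrase, canonical)
    return set(text.split())

def semantic_score(query, doc):
    return len(normalize(query) & normalize(doc))

for doc in documents:
    print(f"keyword={keyword_score(query, doc)}  semantic={semantic_score(query, doc)}  | {doc}")
```

Keyword counting cannot tell the two documents apart, while the paraphrase-aware score prefers the one that actually answers the query; real systems replace the hand-made table with representations learned from data, but the contrast is the same.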

ROSS is the culmination of the history and research and science you see above. Says Jimoh, “ROSS will be known as the company that transformed the law with AI. It will transcend being a mere tool into an indispensable utility. ROSS will be the electricity that powers every law firm in the world.”

The work of Hinton and others is now making possible the kind of artificial intelligence and machine learning that can not only translate and communicate, but also drive autonomous vehicles, locate tumors in radiology, research legal cases, pilot drones, predict storms and weather events, and determine probabilities for a wide range of business applications that better mankind. The power and promise of artificial intelligence has been a long time coming.

Ava Chisling

Ava is an award-winning lawyer and editor who counsels creative types, writes about pop culture/tech+law and sometimes creates ad campaigns. She is Quebec counsel for Momentum Law.