Following up on my chat a few months back with Randy Goebel, I convinced Randy to take time out of his busy schedule to chat with me about where he sees AI initiatives going in 2018 (both our own homegrown work and broader macro trends), what his hopes are for our newly released (and free!) EVA platform, and what misconception he’d most like to address about AI. Randy is one of the foremost researchers in the machine learning and reinforcement learning space, and on top of being an all-around great guy, he brings diverse experience in both academia and industry to our team. For our first-time readers, seasoned AI vets, and everyone in between, this is a great article on the state of the industry, and I hope you have as much fun reading it as we did writing it!
What stands out is the quality and depth of analysis for question answering and information retrieval. As researchers working on NLP and legal informatics, we understand the challenges in building the underlying machine-learned models, and my team is pretty excited about what’s already been accomplished.
The biggest challenges we tackle in NLP relate to the AI-complete challenge of automated summarization at multiple levels of detail. Multi-level summaries are necessary for multi-level explanations of legal reasoning (and, of course, in all other domains, e.g., finance and medicine). What excites us is how ROSS provides a stable platform for the curation of legal texts, upon which we can experiment with new methods and refine existing ones within a business model that provides incremental guidance in assessing priorities. Since the problems we tackle are typically motivated by general AI, any assessment of where immediate value lies helps us focus our own research resources.
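To make the idea of "multiple levels of detail" concrete, here is a minimal, hypothetical sketch (not ROSS’s or Randy’s method, and the function name and scoring scheme are my own assumptions): a crude extractive summarizer that ranks sentences by TF-IDF weight and returns progressively longer summaries from the same ranking.

```python
# Toy sketch only: extractive summaries at multiple levels of detail.
# Sentences are scored by mean TF-IDF weight; each "level" keeps the top-k.
from sklearn.feature_extraction.text import TfidfVectorizer

def multi_level_summaries(sentences, levels=(1, 3, 5)):
    """Return {k: summary} where each summary keeps the k highest-scoring sentences."""
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(sentences)
    scores = tfidf.mean(axis=1).A.ravel()      # crude salience score per sentence
    ranked = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)
    summaries = {}
    for k in levels:
        keep = sorted(ranked[:k])              # restore original document order
        summaries[k] = " ".join(sentences[i] for i in keep)
    return summaries
```

With levels of 1, 3, and 5 sentences, the same scoring pass yields a one-line gist, a short summary, and a more detailed one, which is the kind of graded output that multi-level explanation would need (real systems would, of course, use far richer models).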
EVA is the best demonstration of legal information access that we have seen. We use it as an example of the kind of platform we believe should exist for a variety of legal scenarios: not just for expediting legal research for law firms, but for access to law (sometimes referred to as access to justice) for all people.
Related to number 3 above, most AI scientists long ago abandoned the idea that we could ever “get it right” in any final sense, so incremental improvements and feedback from users are an essential part of the overall process of developing and delivering value. One can’t move in a positive direction without starting somewhere, and EVA is clearly a great start.
Remember that NLP research is only beginning to scratch the surface of what can be done with the automated understanding of text and reasoning over it. Our experience across a broad spectrum of domains is that the best way to guide progress is by building demand and understanding the priorities that emerge from that demand.
There are two kinds of improvement. One comes from the continuous, incremental curation of legal information from all sources, which helps guide incremental improvements in particular domains. For example, in contract law, the curation of contracts and the annotation of salient summaries will continuously refine the support for assessing contracts, e.g., for mergers and acquisitions.
The second kind of improvement arises when users not only begin to trust the support system they access, but are confident enough to push it to the edge of what it cannot yet do. From that demand side (versus the data side), new process requirements will emerge. For example, by classifying or clustering an accumulating corpus of questions, one should be able to target the development of appropriate summary representations, so that a particular case or set of legal cases supports one kind of explanation for a judge or lawyer, another for the law student, and yet another for the legal layperson.
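As a purely illustrative sketch of the clustering step (the questions, cluster count, and workflow are invented for the example and are not a description of any ROSS feature), a growing corpus of user questions could be grouped by topic with TF-IDF features and k-means, and each resulting cluster then mapped to the summary representation best suited to its audience.

```python
# Hypothetical sketch: cluster an accumulating corpus of user questions by topic.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

questions = [
    "What constitutes a material breach of contract?",
    "When is a non-compete clause enforceable?",
    "What damages are available for breach of contract?",
    "Can an employer enforce a non-compete after termination?",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(questions)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vectors)

for label, question in zip(kmeans.labels_, questions):
    print(label, question)   # questions sharing a label indicate a recurring theme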
Once legal information representation and reasoning systems build confidence in discovery and research, one can imagine exploiting the same capabilities to help design legislation that is easier to understand and apply. So one doesn’t just use legal informatics to automate parts of what is currently normal practice, but also provides legislators and regulators with tools for thinking about the creation of laws.
The current biggest misconception arises from the recent success of a handful of machine learning methods, especially deep learning. We get excited about how easily one can engineer high-accuracy classification systems, and then think that the whole world is a classification problem. We repeatedly fall into thinking driven by the popularity of a method, so that, to use the hammer analogy, every problem looks like a nail.
AI contains machine learning as a proper subset, and machine learning contains deep learning as a proper subset. The misconception, maintained by those who believe deep learning solves the AI problem, will lead to an accumulation of unrealized expectations.
ROSS is an advanced legal research tool that harnesses the power of artificial intelligence to make the research process more efficient.