Symbolic-neural learning involves deep learning methods in combination with symbolic structures. A “deep learning method” is taken to be a learning process based on gradient descent on real-valued model parameters. A “symbolic structure” is a data structure involving symbols drawn from a large vocabulary; for example, sentences of natural language, parse trees over such sentences, databases (with entities viewed as symbols), and the symbolic expressions of mathematical logic or computer programs. Natural applications of symbolic-neural learning include, but are not limited to, the following areas:
– Image caption generation and visual question answering
– Speech and natural language interactions in robotics
– Machine translation
– General knowledge question answering
– Reading comprehension
– Textual entailment
– Dialogue systems
Various architectural ideas are shared by deep learning systems across these areas, including word and phrase embeddings, recurrent neural networks (LSTMs and GRUs), and various attention and memory mechanisms. Certain linguistic and semantic resources may also be relevant across these applications, for example dictionaries, thesauri, WordNet, FrameNet, Freebase, DBpedia, parsers, named entity recognizers, coreference systems, knowledge graphs, and encyclopedias. Deep learning approaches to the above application areas, with architectures and tools subjected to quantitative evaluation, loosely define the focus of the workshop.
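To make one of these shared architectural ideas concrete, the sketch below implements a single GRU cell in plain Python and steps it over a short sequence of word embeddings. This is only an illustrative toy, not code from any workshop system: the matrix dimensions, random initialization, and input vectors are invented for the example, and a real system would use a deep learning framework with learned parameters.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def matvec(W, v):
    # multiply a matrix (list of rows) by a vector
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def add(a, b):
    return [x + y for x, y in zip(a, b)]

def gru_cell(x, h, params):
    """One GRU step: gates decide how much of the old state to keep."""
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z = [sigmoid(v) for v in add(matvec(Wz, x), matvec(Uz, h))]  # update gate
    r = [sigmoid(v) for v in add(matvec(Wr, x), matvec(Ur, h))]  # reset gate
    rh = [ri * hi for ri, hi in zip(r, h)]                       # gated old state
    h_tilde = [math.tanh(v) for v in add(matvec(Wh, x), matvec(Uh, rh))]
    # interpolate between old state and candidate state
    return [(1 - zi) * hi + zi * hti for zi, hi, hti in zip(z, h, h_tilde)]

def rand_mat(rows, cols):
    return [[random.uniform(-0.5, 0.5) for _ in range(cols)] for _ in range(rows)]

# toy dimensions: 3-dim input embeddings, 2-dim hidden state
params = (rand_mat(2, 3), rand_mat(2, 2),   # update gate weights
          rand_mat(2, 3), rand_mat(2, 2),   # reset gate weights
          rand_mat(2, 3), rand_mat(2, 2))   # candidate-state weights

# run the cell over a short "sentence" of (made-up) word embeddings
h = [0.0, 0.0]
for x in ([0.1, 0.2, 0.3], [0.4, 0.5, 0.6], [0.7, 0.8, 0.9]):
    h = gru_cell(x, h, params)
```

Because each new state is an interpolation between the previous state and a tanh-bounded candidate, the hidden values stay in (-1, 1); the gates are what let gradient descent learn how long to retain symbolic context such as an earlier word in the sentence.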
The workshop consists of invited oral presentations and contributed poster presentations.
Date: July 5, 2018 (Thu) - July 6, 2018 (Fri), 09:00 - 17:10