Natural language processing

From Wikitia

Natural language processing (NLP) is an area of study within the broader disciplines of linguistics, computer science, and artificial intelligence concerned with the interactions between computers and human language, and in particular with how to write computer programmes to process and analyse large quantities of natural language data. The goal is a computer capable of "understanding" the contents of documents, including the contextual nuances of the language within them. The technology can then accurately extract the information and insights contained in the documents, as well as categorise and organise the documents themselves.

Challenges in natural language processing frequently involve speech recognition, natural language understanding, and natural language generation.
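A small illustration of the kind of text processing involved: a minimal sketch, using only the Python standard library, that tokenises a document and counts word frequencies, one of the most basic building blocks of natural language analysis. The tokeniser and the sample sentence here are illustrative choices, not part of any particular NLP system.

```python
import re
from collections import Counter

def tokenise(text):
    # Lowercase the text and extract runs of alphabetic characters,
    # discarding punctuation and whitespace.
    return re.findall(r"[a-z]+", text.lower())

doc = "The cat sat on the mat. The mat was warm."
tokens = tokenise(doc)

# Count how often each token occurs in the document.
freq = Counter(tokens)
print(freq.most_common(2))  # → [('the', 3), ('mat', 2)]
```

Real NLP pipelines go far beyond such surface statistics, but frequency counts over tokenised text remain a common first step in analysing large document collections.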

The field of natural language processing may be traced back to the 1950s. In his 1950 paper "Computing Machinery and Intelligence", Alan Turing proposed what is now called the Turing test as a criterion of intelligence, although at the time it was not articulated as a problem separate from artificial intelligence. The proposed test includes a task that involves the automated interpretation and generation of natural language.