Natural Language Processing (NLP) lies at the interface of computer science, artificial intelligence, and linguistics and is concerned with the interactions between computers and human (natural) languages. NLP is central to human-computer interaction: it deals mainly with natural language understanding, enabling computers to derive meaning from human or natural language input.
NLP is the technology for dealing with a ubiquitous product, human language, as it appears in emails, web pages, tweets, product descriptions, newspaper stories, social media, and scientific articles, in thousands of languages and varieties. Successful NLP applications span a diverse range: from grammar correction in word processors to machine translation on the web, from email spam detection to automatic question answering, from detecting people's opinions about products and services to extracting appointments from emails.
IBM's Watson is able to beat humans at Jeopardy, although it makes silly mistakes. Spelling and grammar checkers can identify mistakes in human-written text. Some applications can even answer questions over the phone. A TV can be built to be controlled by spoken language. Machine translation can transform a text from one language into another. The stock market can be predicted just by looking at how cheerful people are on Twitter.
Unstructured information is material expressed in human language. To turn it into actionable intelligence, one must build text classifiers, automatically extract information from text, and parse text to identify, for example, the adjectives describing an entity.
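As a minimal sketch of the first of these steps, the following builds a text classifier from scratch: a Naive Bayes model with add-one smoothing, written with only the standard library. The training examples and labels are illustrative placeholders, not a real dataset.

```python
# A minimal Naive Bayes text classifier (illustrative training data).
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

class NaiveBayes:
    def fit(self, docs, labels):
        self.label_counts = Counter(labels)
        self.word_counts = defaultdict(Counter)
        self.vocab = set()
        for doc, label in zip(docs, labels):
            for word in tokenize(doc):
                self.word_counts[label][word] += 1
                self.vocab.add(word)
        return self

    def predict(self, doc):
        scores = {}
        total = sum(self.label_counts.values())
        for label in self.label_counts:
            # log prior + sum of log likelihoods with add-one smoothing
            score = math.log(self.label_counts[label] / total)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for word in tokenize(doc):
                score += math.log((self.word_counts[label][word] + 1) / denom)
            scores[label] = score
        return max(scores, key=scores.get)

clf = NaiveBayes().fit(
    ["great product love it", "terrible waste of money",
     "excellent quality great value", "awful broken on arrival"],
    ["pos", "neg", "pos", "neg"],
)
print(clf.predict("great value love the quality"))  # -> pos
```

Real applications would of course train on far larger corpora and use a library implementation, but the principle is the same: word statistics per class turn raw text into a decision.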
Natural language processing involves word and sentence tokenization, text classification and sentiment analysis, spelling correction, information extraction, parsing, meaning extraction, and question answering.
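Two of the tasks just listed can be sketched in a few lines: regex-based sentence and word tokenization, plus a lexicon-based sentiment score. The sentence splitter is deliberately naive, and the positive/negative word lists are illustrative stand-ins for a real sentiment lexicon.

```python
# Naive tokenization and lexicon-based sentiment (illustrative word lists).
import re

def sentence_tokenize(text):
    # split on sentence-final punctuation followed by whitespace
    return re.split(r"(?<=[.!?])\s+", text.strip())

def word_tokenize(sentence):
    # words and standalone punctuation marks as separate tokens
    return re.findall(r"\w+|[^\w\s]", sentence)

POSITIVE = {"good", "great", "excellent"}   # placeholder lexicon
NEGATIVE = {"bad", "poor", "terrible"}      # placeholder lexicon

def sentiment(sentence):
    tokens = [t.lower() for t in word_tokenize(sentence)]
    return sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)

text = "The camera is great. The battery life is terrible!"
for s in sentence_tokenize(text):
    print(s, "->", sentiment(s))
# The camera is great. -> 1
# The battery life is terrible! -> -1
```

Production tokenizers handle abbreviations, URLs, and contractions that this sketch ignores, which is exactly why tokenization is a research topic in its own right.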
We will use crowd knowledge to curate NLP findings in the literature. The aim is to mine the literature and then provide a curation tool that people can sign into to assess relevance; one can imagine how broadly applicable this might be. Mining provides the raw material for crowd curation, and the quality of the curators' judgements can itself be assessed and validated, for example by spiking the data with known protein-protein interactions or known disease associations. The data can be captured in an RDF store, and a large knowledge base would rapidly build up, especially if linked to the integration of age-related data from other sources.
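The idea of capturing curated facts as subject-predicate-object triples can be sketched as follows. A real deployment would use an RDF library with proper URIs and a SPARQL endpoint; here a plain Python set of tuples stands in, and the example triples (a gene-disease association and a protein-protein interaction) are hypothetical.

```python
# A toy triple store in the spirit of RDF (all triples are hypothetical).
store = set()

def add(subject, predicate, obj):
    store.add((subject, predicate, obj))

def query(subject=None, predicate=None, obj=None):
    # None acts as a wildcard, like a basic SPARQL triple pattern
    return [t for t in store
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

add("GeneX", "associatedWith", "DiseaseY")
add("ProteinA", "interactsWith", "ProteinB")

print(query("GeneX", None, None))
print(query(None, "interactsWith", None))
```

Because every fact is a uniform triple, data mined from text, crowd-validated assertions, and age-related datasets from other sources can all be merged into one knowledge base simply by adding more triples.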