The Semantic Web extends the traditional Web in the sense that information in the form of natural-language text is complemented by its explicit semantics, based on a formal knowledge representation. The meaning of information expressed in natural language can thus be accessed in an automated way and interpreted correctly, i.e. it can be understood by machines.
In the Semantic Web the meaning of things (their semantics) is made explicit: once we have access to the meaning of things, we are able to understand them, and the computer is able to understand them as well. The Semantic Web therefore deals with explicit semantics.
The Semantic Web is a project intended to convert the World Wide Web into a universal semantic network through the use of markup languages and rich metadata, making the Web machine-readable and enhancing the collective knowledge management of computers.
Information has always played a vital role in the success or failure of an enterprise. From the smallest one-person research lab to the largest multinational organization, the importance of research intelligence cannot be overstated.
The simple possession of information, however, is not enough to ensure research intelligence. Far more important is the ability to convert raw data into valuable information, so that it can be used in the formation of research strategies.
Research entities collect vast amounts of published data every day, from article reports and reviews to grant applications and online resources. Unfortunately, it is not possible for a research group to study all of this data closely (and even if it could, the data would be out of date by the time it had finished). It is vital, therefore, that scientists have some method of analyzing data quickly and efficiently, so as to highlight patterns and trends that may be useful in achieving their objectives.
The Internet is the largest repository of information on Earth. Potentially, this data can be of enormous value to research. In reality, however, valuable information often proves difficult to find: at present we are largely limited to keyword searches over information stored on the Internet. With the Semantic Web, rather than searching for documents containing target keywords, it will be possible to search the semantic meaning of the content, allowing search engines to return far more targeted information.
Researchers can also benefit from the Semantic Web on their own websites. By adopting a semantic model on a corporate website it becomes possible to provide visitors with exactly the information they want. Websites built using semantic metadata can recognize synonyms, so they are more flexible and intuitive than typical syntactic sites.
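The difference between a purely syntactic keyword match and a synonym-aware search can be sketched in a few lines. This is a toy illustration, not a real search engine: the synonym table, documents, and function names are all invented for the example.

```python
# Toy contrast between keyword search and synonym-aware "semantic" search.
# The vocabulary and documents below are invented for illustration.

SYNONYMS = {
    "aging": {"aging", "ageing", "senescence"},
    "cause": {"cause", "reason", "origin"},
}

def expand(term):
    """Return the term together with any synonyms known for it."""
    for group in SYNONYMS.values():
        if term in group:
            return group
    return {term}

def keyword_search(docs, term):
    """Match only documents containing the literal keyword."""
    return [d for d in docs if term in d.lower().split()]

def semantic_search(docs, term):
    """Match documents containing the keyword or any of its synonyms."""
    terms = expand(term)
    return [d for d in docs if terms & set(d.lower().split())]

docs = ["Senescence research overview", "Aging and telomeres", "Grant application tips"]
print(keyword_search(docs, "aging"))   # misses the document that says "senescence"
print(semantic_search(docs, "aging"))  # also matches the synonym "senescence"
```

A real semantic site would draw its synonym relations from a shared ontology rather than a hard-coded table, but the flexibility gained is the same in kind.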
The Semantic Web is a mesh of information linked up in such a way as to be easily processable by machines on a global scale. It can be imagined as an efficient way of representing data on the World Wide Web, or as a globally linked database.
To answer a question like "Why do we age?", a Semantic Web system, or any web-intelligence system incorporating semantics, might proceed as follows.
A site a.de collects facts by processing large amounts of web data, yielding facts such as "ROS levels increase", "telomeres shorten", and many other aging-related changes.
Another site b.de might extract information about how certain changes limit lifespan, e.g. increased DNA damage reduces lifespan, blocking lysosomal degradation leads to premature aging, and so on.
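Facts of this kind can be written as subject-predicate-object triples, which is the shape RDF statements take. The sketch below uses plain Python tuples rather than a real RDF library; the predicate and entity names are invented to mirror the examples above.

```python
# Facts harvested by the hypothetical sites a.de and b.de, written as
# (subject, predicate, object) triples -- the shape RDF statements take.
# All names here are illustrative, not real vocabulary terms.

facts_a = [  # observations from a.de: aging-related changes
    ("ROS_levels", "change_with_age", "increase"),
    ("telomeres", "change_with_age", "shorten"),
]

facts_b = [  # causal links from b.de: changes that limit lifespan
    ("DNA_damage_increase", "effect_on_lifespan", "reduces"),
    ("lysosomal_degradation_block", "effect_on_lifespan", "premature_aging"),
]

# Because triples share one uniform shape, knowledge from different sites
# can be combined simply by taking the union of their triple sets.
merged = set(facts_a) | set(facts_b)
print(len(merged))  # 4 distinct facts
```

This uniform shape is precisely what makes merging knowledge from independent sources trivial, and it is the core design idea behind RDF.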
A third site might now combine the facts from a.de and b.de and, using rule learning, conclude with some degree of confidence that if
x causes all these age-related changes, then
x is the cause of aging.
A process by which a set of facts is generalized into a rule, using techniques such as rule mining, is called inductive reasoning, as opposed to deductive reasoning, which is ordinary logical inference. Inductive reasoning is almost always probabilistic to some extent.
Observed: A(x) and C(x); A(y) and C(y); ... Inferred: A => C (for all instances, with some confidence)
This newly learned rule is then added back, as a fact, to the appropriate part of the semantic knowledge base, where it can support further inference.
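The inductive step above can be sketched as a confidence estimate, in the style of association-rule mining: the confidence of A => C is the fraction of instances with A that also have C. The instance data and function name below are invented for illustration.

```python
# Minimal sketch of inductive rule learning: from per-instance observations,
# estimate the confidence of the general rule A => C as
#   confidence = (# instances with both A and C) / (# instances with A).
# The observations are invented for illustration.

observations = {
    "x": {"A", "C"},
    "y": {"A", "C"},
    "z": {"A"},        # a counterexample: A holds but C does not
}

def rule_confidence(obs, antecedent, consequent):
    """Fraction of antecedent-instances that also satisfy the consequent."""
    has_a = [props for props in obs.values() if antecedent in props]
    has_both = [props for props in has_a if consequent in props]
    return len(has_both) / len(has_a) if has_a else 0.0

print(rule_confidence(observations, "A", "C"))  # 2 of 3 A-instances also have C
```

With only the first two instances the rule would look certain; the counterexample z is what makes the learned rule probabilistic rather than deductively valid.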
This vision of the Semantic Web is a powerful one. It is not fully realized today, but much of the technology needed to express facts and rules in a form that can be shared across different systems has been developed, in XML-based languages such as RDF, RDFS (RDF Schema), and OWL (the Web Ontology Language).
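One concrete thing RDFS adds on top of plain triples is schema vocabulary such as rdfs:subClassOf, which licenses inferences: an instance of a subclass is also an instance of every superclass. A minimal sketch of that entailment, using plain Python tuples instead of a real RDF library (the class and instance names are invented):

```python
# Sketch of RDFS-style inference over a tiny triple store:
# rdfs:subClassOf is transitive, and rdf:type propagates up the class
# hierarchy. Class and instance names are illustrative only.

triples = {
    ("TelomereShortening", "rdfs:subClassOf", "AgeRelatedChange"),
    ("AgeRelatedChange", "rdfs:subClassOf", "BiologicalProcess"),
    ("event42", "rdf:type", "TelomereShortening"),
}

def infer(kb):
    """Apply the two RDFS rules repeatedly until no new triples appear."""
    kb = set(kb)
    changed = True
    while changed:
        changed = False
        new = set()
        for (a, p1, b) in kb:
            for (c, p2, d) in kb:
                # rule 1: subClassOf is transitive
                if p1 == p2 == "rdfs:subClassOf" and b == c:
                    new.add((a, "rdfs:subClassOf", d))
                # rule 2: an instance of a subclass is an instance
                # of the superclass
                if p1 == "rdf:type" and p2 == "rdfs:subClassOf" and b == c:
                    new.add((a, "rdf:type", d))
        if not new <= kb:
            kb |= new
            changed = True
    return kb

kb = infer(triples)
print(("event42", "rdf:type", "BiologicalProcess") in kb)  # True
```

Real RDFS reasoners implement many more entailment rules and far more efficient fixpoint strategies, but the shape of the computation, i.e. deriving new shared facts from shared schema, is the same.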
The web of data and semantics is possible in principle; the question is who is populating this web. Web-scale inference is in some sense also possible. It is not happening exactly as initially envisioned, but it is happening through efforts such as Google Squared, which attempts to extract many different kinds of facts directly from the World Wide Web. Another search engine, Wolfram Alpha, relies on knowing a great deal about the world. There is also Watson, the IBM program that won the Jeopardy! challenge. These technologies do not use RDF, OWL, or other Semantic Web technologies, though they are similar in spirit: the intent is to learn facts from the web and be able to reason about those facts in a web-intelligence system, as opposed to merely searching for web pages.
The Semantic Web vision is about a web of data and semantics that is shared, so that inference, or reasoning, can take place at web scale. A set of technologies was designed to enable this: RDF (the Resource Description Framework), the Web Ontology Language, and various variants of these. They are all designed to enable the sharing of data and semantics across the Web. At the same time, their use to actually perform reasoning has not necessarily proceeded in exactly the way originally envisioned: Google Squared, Wolfram Alpha, and Watson do in fact reason with facts learned from the web, but not necessarily on the same technology backbone.