Reading through these exchanges and through the Technology Review article was my first attempt to understand what the Semantic Web really is. Personally, I can see clear value in it, both for refining searches to a much higher degree than is currently possible and for synthesizing disparate data across web pages and computer applications. For instance, I might want to identify conferences being held in 2005 in Nepal having something to do with the Internet. Presumably, a Semantic Web search engine would allow you to specify not only keywords or phrases but also what type of data they are. So I would search for something like: "date" = "2005", "country" = "Nepal", "event" = "conference", and "theme" = "Internet". Of course, a good deal of fuzzy matching would be needed behind the scenes so that the search engine could recognize reasonable variations of my search terms, but search engines already do this today, so I see no problem there. Web site authoring tools would be able to create, or point to existing, files that define the types of data in the web page; for many forms of data, they might do so automatically.
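The typed-field search described above can be pictured as filtering structured records rather than matching raw text. The following is only an illustrative sketch of that idea; the event records and field names are invented for the example and are not actual Semantic Web machinery.

```python
# Illustrative sketch of a typed-field search. The records below are
# invented for this example; a real Semantic Web search engine would
# work over metadata published on the web, not a hard-coded list.

events = [
    {"date": "2005", "country": "Nepal", "event": "conference", "theme": "Internet"},
    {"date": "2005", "country": "Nepal", "event": "workshop", "theme": "forestry"},
    {"date": "2004", "country": "India", "event": "conference", "theme": "Internet"},
]

query = {"date": "2005", "country": "Nepal", "event": "conference", "theme": "Internet"}

def matches(record, query):
    """A record matches when every queried field holds the queried value."""
    return all(record.get(field) == value for field, value in query.items())

results = [r for r in events if matches(r, query)]
```

The point of the sketch is that each search term is tied to a field ("country", "theme"), so "Nepal" as a country cannot accidentally match "Nepal" appearing in, say, an event title.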

What worries me about the structure, as described in Andy's original posting, is that all these data fields would be defined in other files on the Internet. A single web page could contain literally hundreds of links to other files just to define its data. To load one page in your browser, your computer might need to fetch a hundred different files from all over the web to assemble the bits of data it needs, and if a given file could not be reached, the word or phrase it defines would not load at all. At present, we download the HTML file, the various graphics, and sometimes a cascading style sheet; for heavy pages, that alone can take quite a while. Semantic Web coding would drastically increase the size of the page file itself, on top of all this other downloading, and a simple text-only web page would no longer be possible. Today the text usually appears first, and the other bits and pieces continue loading as we start to read. With the Semantic Web, though, individual words in the text would be pulled from multiple different files, making this impossible: they would appear gradually and in no particular order, as graphics do now.
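The loading concern above can be made concrete with a small sketch: a page whose annotated terms each point at a definition file hosted elsewhere, so each distinct file is one more thing to download before the page is complete. All the terms and URLs here are invented to model the worry, not taken from any actual Semantic Web page.

```python
# Sketch of the loading concern: every annotated term on a page points
# at a definition held in some external file. The terms and URLs are
# invented for illustration; this models the worry, not real RDF loading.

page_terms = {
    "conference": "http://example.org/vocab/events#conference",
    "Nepal": "http://example.net/geo#Nepal",
    "Internet": "http://example.org/vocab/themes#Internet",
}

def files_to_fetch(terms):
    """Each distinct file holding a definition is one extra download
    (the part of the URL before '#' names the file itself)."""
    return {url.split("#")[0] for url in terms.values()}

extra_downloads = files_to_fetch(page_terms)
```

Even in this tiny example, three annotated terms already require three additional files beyond the page itself; on a real page with hundreds of terms, the count could grow accordingly.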

With this structure, I worry that one would need a broadband connection to use the Semantic Web effectively, which would further widen the digital divide. Plain HTML pages would be invisible to Semantic Web applications, since they contain none of the identifiers that make it functional. Individuals and organizations might therefore increasingly write, or rewrite, their web pages with this coding so that their information could be found by target audiences through the features of the Semantic Web. If such a trend were to become a standard for web pages, those with low levels of connectivity would be effectively cut off from the information that is theoretically available, even more so than they are now.

As I mentioned at the beginning, this is my first exploration of the concept and structure of the Semantic Web, so some or all of my fears may be unfounded. If anybody else knows more about the technical issues that could exacerbate the digital divide, I would be interested to hear about them.


Layton Montgomery

 /\  /\   /\   /\   /\
/ \/   \/   \/   \/   \
  /      \  /    /      \
  /         \    /          \
/ Layton Montgomery \
  Executive Secretary
  Mountain Forum

  [EMAIL PROTECTED]
  http://www.mtnforum.org

  c/o ICIMOD
  GPO Box 3226
  Kathmandu, Nepal


_______________________________________________
DIGITALDIVIDE mailing list
[EMAIL PROTECTED]
http://mailman.edc.org/mailman/listinfo/digitaldivide
To unsubscribe, send a message to [EMAIL PROTECTED] with the word UNSUBSCRIBE in the body of the message.
