Sunday, August 29, 2010

On morality, ethics, pragmatics, aesthetics, and existentialism

I tried to come up with the narrowest possible subject line for this post about mistakes, but it does cover quite a range.

We do casually use "wrong" in a pragmatic sense, as in "making a wrong turn" on a trip or "giving the wrong answer" on a test, and technically this is proper usage. My own understanding, though, has been that "wrong" as in "right and wrong" is primarily an issue of morality. We can speak of a "wrong turn in life" as an error in judgment that has led to moral trouble. I think of mistakes and errors in a hierarchy of philosophical levels:

  1. Wrong - morality, at a moral level, all about principle
  2. Improper - ethics, an ethical lapse, or issue of legality (illegal, regardless of whether it is morally right or wrong)
  3. Incorrect - pragmatics, a "technical" mistake (including an invalid scientific theory) which has practical implications, but not in a moral or ethical sense
  4. Undesirable - aesthetics, not really a practical problem per se, but a cause for unpleasantness or embarrassment or social stigma (even if it might be technically correct or legal or "right")
  5. Dangerous - existentialism, leads to a threat to survival or risk of significant imminent physical harm

My point is that we can interpret mistakes or "wrong" at any or all of these levels, and we should be clear about which level we mean when we speak.

This is a casual model on my part. There could be other categories or the categories could be divided differently. In other words, I could be wrong, in a category 3 or 4 sense. I reserve the right to "revise and extend" my model later in the discussion.

Drinking, especially by teens and underage young adults, can quickly lead to category 5 "mistakes", such as the young woman who died in a fall from a high-rise apartment after an evening of "clubbing." Drunk driving, mistakes by aircraft pilots and vehicle drivers, and medical errors can also result in category 5 mistakes.

BTW, my hunch is that the "shame" referenced by Kathryn Schulz in her book Being Wrong attaches to my category 3 and 4 mistakes, which are pragmatic or aesthetic: not an ethical, moral, or existential problem, but quite unpleasant and embarrassing.

See:

Being Wrong: Adventures in the Margin of Error by Kathryn Schulz
Stuart Jeffries is cheered by a writer who sees a social value in our habit of mucking things up
http://www.guardian.co.uk/books/2010/aug/28/being-wrong-kathryn-schulz-review

and

On Air and On Error: This American Life's Ira Glass on Being Wrong (The Wrong Stuff, Slate)
http://www.slate.com/blogs/blogs/thewrongstuff/archive/2010/06/07/on-air-and-on-error-this-american-life-s-ira-glass-on-being-wrong.aspx

and

Slate posts by Kathryn Schulz
http://www.slate.com/blogs/search/searchresults.aspx?u=2434

-- Jack Krupansky

Friday, August 6, 2010

Ontology dowsing

How bad is the Semantic Abyss for the Semantic Web? Well, it is so bad that the process of trying to find or construct an ontology has been dubbed "ontology dowsing." Really. Seriously. It is that bad. The web page for Ontology Dowsing tells us:

At the moment, the methods used in practice to locate an adequate vocabulary for describing one's data in RDF are more akin to dowsing than to an educated, technically-guided choice, supported by scientific tools and methodologies. While the situation is improving with the progress of Semantic Web search engines and better education, oftentimes data publishers still rely on informal criteria such as word-of-mouth, reputation or follow-your-nose strategies.

This page tries to identify methods, tools, applications, websites or communities that can help Linked Data publishers to discover or build the right vocabulary they need.

The web page provides references to:

  1. Lists of ontologies
  2. Search engines
  3. Repositories
  4. Mailing lists/online communities
  5. Ontology Editors
  6. Evaluation
  7. Related Events, Projects, etc.

That's a good start, but the bottom line is that automatic search for ontologies is still a hard AI problem.
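
To make the "dowsing" charge concrete, here is a minimal sketch of what vocabulary "search" largely boils down to today: scan candidate vocabularies for terms whose rdfs:label contains a keyword. This assumes Python with the rdflib library; the candidate list names two real, well-known vocabularies (FOAF and Dublin Core terms), but a real catalog would be far larger:

# A sketch of naive keyword search over vocabulary labels, which is
# roughly what ontology "search" amounts to in practice.
from rdflib import Graph
from rdflib.namespace import RDFS

# Two real, well-known vocabularies; a real catalog would list hundreds.
CANDIDATE_VOCABULARIES = [
    "http://xmlns.com/foaf/0.1/",   # FOAF
    "http://purl.org/dc/terms/",    # Dublin Core terms
]

def search_terms(keyword):
    """Return (term URI, label) pairs whose rdfs:label contains keyword."""
    keyword = keyword.lower()
    hits = []
    for vocab_url in CANDIDATE_VOCABULARIES:
        graph = Graph()
        try:
            graph.parse(vocab_url)  # rdflib content-negotiates the RDF format
        except Exception:
            continue  # skip unreachable or unparseable vocabularies
        for term, _, label in graph.triples((None, RDFS.label, None)):
            if keyword in str(label).lower():
                hits.append((str(term), str(label)))
    return hits

for term, label in search_terms("person"):
    print(term, "-", label)

Exact substring matching like this knows nothing about synonyms, other languages, or structural fit, which is precisely why the result feels more like dowsing than engineering.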

What I want to see is a relatively simple tool that lets me describe my data as I see it, including example data, then goes off, tries to match my ontological structure and data examples against existing ontologies and data, and suggests possible ontologies. A further step would be to automatically generate an ontology alignment mapping (inferences) so that my data can appear to the world as if it were structured in known ontologies; a rough sketch of that step follows below. In some cases I might want to move to a known ontology, but in other cases my ontology may be "better", or maybe just a more convenient shorthand that works well for me. Alas, my model is not "aligned" with current reality. Hence, another manifestation of The Semantic Abyss.
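
As a thought experiment, here is a rough sketch of just that alignment step, again assuming Python with rdflib. The discovery step is stubbed out as a hand-written list of proposed matches, and the "my-ontology" namespace and its terms are purely hypothetical. Given the matches, the tool would emit OWL equivalence triples as the alignment mapping:

# A sketch of the alignment step: given proposed matches between my
# local terms and known terms, emit OWL equivalence triples so that
# reasoners can treat my data as if it used the known vocabulary.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL

MY = Namespace("http://example.com/my-ontology#")  # hypothetical local vocabulary
FOAF = Namespace("http://xmlns.com/foaf/0.1/")

# Matches that the (still hypothetical) discovery step might propose.
proposed_matches = [
    (MY.Individual, FOAF.Person, "class"),
    (MY.handle,     FOAF.nick,   "property"),
]

def build_alignment(matches):
    """Build a graph of owl:equivalentClass / owl:equivalentProperty triples."""
    alignment = Graph()
    alignment.bind("owl", OWL)
    alignment.bind("my", MY)
    alignment.bind("foaf", FOAF)
    for local_term, known_term, kind in matches:
        predicate = OWL.equivalentClass if kind == "class" else OWL.equivalentProperty
        alignment.add((local_term, predicate, known_term))
    return alignment

print(build_alignment(proposed_matches).serialize(format="turtle"))

Publishing a mapping like this alongside the data would let FOAF-aware consumers query it unchanged, while I keep my own shorthand internally. The hard AI part, of course, is producing the proposed_matches list automatically.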

-- Jack Krupansky