Sunday, April 22, 2012

I'll be talking about the agent server at the next NYC Semantic Web Meetup

I'll be giving a 3-minute elevator pitch for the Base Technology Software Agent Server at the upcoming NYC Semantic Web Meetup, on Thursday, April 26, 2012. That won't be enough time to go into any details, but hopefully will pique a little interest.
 
In preparation, I have refined my short summary as well as a more detailed summary.

Saturday, March 31, 2012

Moving forward with developing a software agent server

Back in the middle of January I ruminated about the possibility that after 15 years of thought and research, maybe I was finally on the verge of being ready to actually make some forward progress with developing a software agent server. About a week later I started writing some serious code in Java, and two months later I now have a preliminary working version of an agent server. It is still far from finished and I would not want anybody to actually start trying to use it, but I do have open source code and a downloadable Java zip file up on GitHub. I call it "the Base Technology Agent Server – Stage 0." Call it pre-alpha quality. After I get some preliminary feedback from some people, fill in some gaps, and finish some documentation, I will officially make it public. For now, people can actually take a peek if they are adventurous enough:
 
 
I hope to get the introductory doc and tutorial in at least marginally usable shape within a week or so.

Friday, November 25, 2011

The trick of knowledge

Computational agents can be considered intelligent to the extent that they utilize human-level knowledge in their behavior. How to do that is the great difficulty. I submit that the trick of knowledge is going beyond mere possession of the facts of knowledge to the ability to know how to apply knowledge. So, if we want to encode knowledge in a form that is useful to computational agents, that encoding must also include an encoding of the knowledge of how to apply that knowledge. Sure, we can hard-wire that latter knowledge, but that may be difficult, error-prone, and probably much less flexible or adaptable as the environment evolves. And even if we are successful at that hard-wiring, that hard-wired knowledge must be properly parameterized to be used in a complex environment.
 
It is worth noting that even the knowledge of how to apply knowledge needs its own knowledge of how to apply that knowhow, and so on seemingly ad infinitum. Clearly at some level there must be hard-wired knowledge. Picking that level is a central challenge, but does highlight the need for a rich knowledge-based infrastructure.
 
In any case, the trick of knowledge is not in what you know, but in your ability to apply that knowledge. Maybe that is the essence of intelligence itself.

Wednesday, November 23, 2011

Truth of statements and truth of existence

There are two forms of truth that we have to deal with:
  1. Truth of existence. Does an object or phenomenon exist in reality?
  2. Truth of statements. Is a statement true or false?
In real life both are equally relevant. We are surrounded by and part of the physical world. The world of language and statements is but a subset of our world, but a very important subset and the subset that is the primary focus of what separates humanity from the rest of the physical world.
 
But inside of a computer, where reality is kept at a distance and is almost literally a separate and distinct universe, truth is concerned mainly with statements, and the notion of existence outside the computer is itself a mere statement. In other words, the computer can know about the real world only to the extent that we seed its pool of statements with statements about the real world that we know to be true. We must define what is true of the real world.
 
If truth of existence means anything inside of a computer, it is simply as statements making assertions about the real, outside world. A computer cannot evaluate such statements per se; their truth can only be supplied by human or other physical input, in statements whose only justification is of the form "because we, agents of the outside world, say it is so."
 
In short, beliefs about the outside world are true inside of a computer only to the extent that we external agents have correctly encoded our external truth into machine-cognizable statements of truth. We must define truth about the outside world. Unfortunately, we may be wrong or may make mistakes when doing such encodings, so there is the risk that a computer may not start out with a true understanding of the outside world.
 
Eventually, ultimately, as we embellish computers with sensors and the ability to directly "learn" from those sensors and from human documents and other artifacts, it may be possible for a computer to directly "learn" at least some aspects of truth in the real world. But, once again, truth inside the computer will be limited by pre-programmed assumptions about how sensors and human artifacts work. After all, how can a computer "know" whether our manufactured sensors accurately convey "the truth of the real world" and don't distort this "truth" in ways of either malevolent or accidental nature, sometimes even despite the best of intentions, or maybe because of the worst of intentions?
 
Could we construct the ideal criminal or achieve some ideal sense of evil, either by accident, negligence, or by intention? We may even create evil simply as a test case, but will we be able to control it? Or maybe someone may create evil because they do seek to control it, or maybe someone might create an intentionally uncontrollable evil solely in the pursuit of chaos.
 
In any case, achieving alignment between the truth of the world and the truth of statements within a computer is a tricky business to be sure.

Sunday, July 3, 2011

Semantic gap between text and semantic markup

No matter how advanced our Semantic Web technology becomes, we still have an inherent problem, namely, the semantic gap between simple, plain text and our semantic markup. How do we correlate textual representations with semantically marked-up representations?
 
At the most basic level, we need to be able to correlate semantic entities with textual references to them. Sometimes that can be a simple text lookup, but often there are multiple semantic entities that have similar if not identical textual representations, especially when the textual representations are shorthand notations rather than full, detailed entity references.
 
Lookups are complicated by the fact that some entities have names that are raw natural language prose so that they cannot be unambiguously distinguished from simple prose. For example, names of bands, songs, plays, books, movies, parks, etc. As an even more complex example, a movie based on a book may have the same name.
 
Even for references to people, there are nicknames, and some people share the same name. For example, "Krupansky, J." may be a reference to me in the bibliography of a technical paper, or it may be a reference in a legal document to one of two court judges. This particular example suggests that context can aid in the identification process, but with the two judges even context can be problematic. A human can tell the two judges apart since one was at the state level and the other at the federal level, but both were in Ohio. They were in fact brother and sister, but with no apparent relation to me. How a computer would differentiate those two, or even all three of us, without significant guidance or hand-coded "intelligence" is an open question.
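
To make the idea concrete, here is a minimal sketch of context-based disambiguation. The candidate entities, their identifiers, and their context keywords are all invented assumptions for illustration; a real system would draw on a large knowledge base rather than a hand-coded table.

```python
# Hypothetical candidate entities sharing the textual reference
# "Krupansky, J."; context keyword sets are invented for illustration.
CANDIDATES = {
    "jack-krupansky": {"name": "Krupansky, J.",
                       "context": {"software", "computers", "agents"}},
    "judge-krupansky-state": {"name": "Krupansky, J.",
                              "context": {"legal", "court", "ohio", "state"}},
    "judge-krupansky-federal": {"name": "Krupansky, J.",
                                "context": {"legal", "court", "ohio", "federal"}},
}

def disambiguate(reference, context_words):
    """Return candidate ids whose context best overlaps the query context.
    Ties remain ambiguous by design, as the post suggests they should."""
    context_words = {w.lower() for w in context_words}
    scored = []
    for entity_id, entry in CANDIDATES.items():
        if entry["name"] != reference:
            continue
        score = len(entry["context"] & context_words)
        scored.append((score, entity_id))
    scored.sort(reverse=True)
    best = scored[0][0] if scored else 0
    return [eid for score, eid in scored if score == best and score > 0]

# A software-related query resolves uniquely ...
print(disambiguate("Krupansky, J.", ["software", "agents"]))
# ... but a legal-context query still returns both judges.
print(disambiguate("Krupansky, J.", ["legal", "court"]))
```

Note how the legal query narrows the candidates but does not fully resolve the reference, matching the point above that even context can be problematic.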
 
One simple identification issue is the use of articles in entity names. Technically, the Beatles are really "The Beatles" and "The" is quite significant when referring to "The Office." A lot of traditional text processing algorithms like to ignore punctuation, articles, and so-called "stop words", but increasingly these ephemera are becoming more significant. Yahoo is really "Yahoo!". And then there is the musician formerly known as "The Artist Formerly Known as Prince" with a non-textual symbol as his formal entity name. The point is that casual and even somewhat formal textual references to entities can be quite far from the pure, true, formal, literal entity identifier.
 
References to the works of an entity or to characteristics of an entity can be similarly problematic in raw text representations. Ultimately there may be a single, hard URI for the referenced entity, but getting from raw text to URI can be a real challenge.
 
In some cases, even our best computational efforts may still result in ambiguous references. Then we have a really tough choice, either to pick the "best" reference by some measure or heuristic, or to simply represent a list of possible references. The latter works semi-well for display for a human user, such as the results from a search engine, but is somewhat problematic when a computer program is processing the results and expecting a singular result.
 
The good news is that in many cases just a little context can go a long way. If someone is querying about computers and software, I would have a higher probability of being a match than the judges. If someone is querying about legal cases, then Krupansky the judge(s) could be selected, although even in that case we still have an ambiguity.
 
Correlating bands and songs is at least superficially a slam dunk since the mapping between bands and songs tends to be relatively sparse, but there are no guarantees and the state of the art for automated software is that some form of guarantee is needed.
 
Misspelling of entity names is also a problem. If you know the category of the entity, such as that it is a band or a song, then traditional spell-checking algorithms may be sufficient, but if you are just looking at a fragment of raw text with no context or category, the problem becomes much harder. A mis-phrased song or book title can look a lot like ordinary prose. Traditional phrase-matching algorithms may do reasonably well at telling you whether a fragment of text happens to match one or more entity names, but you could also get a lot of false positives when the user is simply making a casual statement rather than intentionally referring to a named entity. Still, alerting the user to the possible entity reference can have at least some value even if it may not be 100% relevant. The harder problem is if there are a very large number of partial matches; then the user could well be overwhelmed rather than aided.
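
A minimal sketch of misspelled-name matching, using Python's standard difflib as one stand-in for the "traditional spell-checking algorithms" mentioned above. The song titles are made-up examples; a real system would match against a large gazetteer and tune the similarity cutoff.

```python
import difflib

# A tiny invented gazetteer of known song titles.
KNOWN_TITLES = ["Hey Jude", "Let It Be", "Yellow Submarine", "Yesterday"]

def candidate_titles(fragment, cutoff=0.6):
    """Return known titles that approximately match a text fragment.
    The cutoff controls how aggressively near-misses are accepted."""
    return difflib.get_close_matches(fragment, KNOWN_TITLES, n=3, cutoff=cutoff)

print(candidate_titles("Yelow Submarine"))            # close misspelling matches
print(candidate_titles("completely unrelated prose")) # casual prose: no match
```

Exactly as the paragraph warns, a lower cutoff would surface more candidates for casual prose, trading false positives against missed references.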
 
A simple solution is faceting where the user is told not the list of all possible matches, but the categories of matches. This can dramatically reduce the amount of information to be presented to the user. The user can then drill down for more detail. Still, even this approach may result in information overload.
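The faceting idea above can be sketched in a few lines. The match data is an invented assumption for illustration: instead of listing every candidate, the system reports counts per category and lets the user drill down.

```python
from collections import Counter

# Hypothetical ambiguous matches for the text "The Office".
matches = [
    ("The Office (UK)", "tv-show"),
    ("The Office (US)", "tv-show"),
    ("The Office", "movie"),
    ("The Office", "play"),
]

def facet(candidates):
    """Summarize candidate matches as per-category counts,
    rather than presenting the full list of matches."""
    return Counter(category for _name, category in candidates)

print(facet(matches))  # e.g. 2 tv-shows, 1 movie, 1 play
```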
 
Another tool is a user-generated dictionary that fills in the particular user's preference for a partial or ambiguous entity reference the first time it needs to be resolved. Not that any user would necessarily need to manually create such a dictionary. In fact, a collection of such resolution dictionaries may be automatically supplied with just a little context about the user and their tasks. One source is to find other users with similar characteristics and then offer those users' dictionaries as a starting point. Maybe the user could supply a list of people they "think like" or are interested in following, and that list can be used to seed the user's resolution dictionary collection.
 
In summary, matching textual entity references embedded in raw text is an open problem. Yes, there are a lot of tools readily available that may address parts of the problem, but more work in this area would be quite helpful. And, most importantly, bridging the semantic gap between the worlds of text and semantic entities is an important goal.
 

Friday, July 1, 2011

What color is an apple?

I have been trying to think about how to encode even relatively simple human knowledge in simple RDF triples and what issues arise. What could be simpler than... an apple and its color? Sure, some things are simpler, but so much is much more complicated.
 
In a simple, toy system I might define a class of objects called "fruit", a sub-class called "apple", and have instances of the apple class. Simple enough. I might have a "color" property. Simple enough. Hmmm... but what are the values of color? A literal string like "red"? A numeric set of RGB values? Shades of primary colors? Add another object of class "color", and push out the same questions to the properties of that class? So, in my simple, toy system, I would now have objects of class "apple" each of which has an associated "color" object. Although I am not so comfortable saying that basic properties must be promoted to the level of objects, having all values be objects may be a better system architecture. This is starting to be a lot of complexity for simple things.
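
The two modeling options can be sketched with plain (subject, predicate, object) tuples standing in for RDF triples. Every identifier here ("apple-1", "color-17", and so on) is an invented toy name, not a real vocabulary.

```python
# Toy triple store: option 1 uses a literal color value,
# option 2 promotes color to its own object with RGB detail.
triples = [
    ("apple", "subClassOf", "fruit"),
    ("apple-1", "type", "apple"),
    ("apple-1", "color", "red"),          # option 1: literal string
    ("apple-2", "type", "apple"),
    ("apple-2", "color", "color-17"),     # option 2: reference to a color object
    ("color-17", "type", "color"),
    ("color-17", "rgb", "(180, 20, 30)"),
]

def objects_of(subject, predicate):
    """Return all objects for a given subject and predicate."""
    return [o for s, p, o in triples if s == subject and p == predicate]

print(objects_of("apple-1", "color"))   # a directly usable literal
print(objects_of("apple-2", "color"))   # one more hop needed for detail
```

The extra indirection in option 2 is exactly the added complexity the paragraph worries about: richer answers, at the cost of another lookup for even the simplest question.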
 
One question: Does each object have its own color? Sure, that makes sense. But shouldn't I be able to ask questions about objects of that class in general? Sure. Now, there are two ways to go about that: 1) do a statistical analysis of all instances of the class and then summarize the results, possibly as a histogram or something like that, or 2) contrive an abstract rule that generally describes what the population of the class would be, even if that is only an approximation. I might simply want to know "generally", or "typically", or "as a common case" what color apples are. Some are yellow, some green, but many are red, so how can I represent that information as a "rule" rather than have to do a massive data collection effort and sift through the results?
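
Option 1, the statistical survey, is easy to sketch. The instance data below is invented for illustration; the most common entry is the "typical" answer that an abstract rule (option 2) would simply state up front.

```python
from collections import Counter

# Hypothetical colors observed across instances of the apple class.
apple_colors = ["red", "red", "green", "red", "yellow", "green", "red"]

def summarize(values):
    """Build a histogram of instance values and pick the most common
    one as the 'typical' class-level answer."""
    histogram = Counter(values)
    typical, _count = histogram.most_common(1)[0]
    return histogram, typical

histogram, typical = summarize(apple_colors)
print(histogram)            # the full distribution
print("Typically:", typical)
```

The tension in the paragraph is visible here: the histogram is accurate but heavyweight, while "typically red" is cheap to state but wrong for any individual green or yellow apple.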
 
But even for apples that are "red" or "green" or whatever, rarely would they be exactly and perfectly one single color. They may be "reddish" or "greenish" or "mostly red" or "mostly green."
 
In fact, rather than an individual apple having a specific color, you can come up with a wide range of colors by sampling various points on the object's surface, each of which can have its own color. Once again, we can do a massive amount of data collection and then look at distributions of values and highlight dominant values or average values. And that is just for one instance of the class and we would like to do similar analysis across the population of the class instances. And this is just for a relatively simple object. Once again, we would like to come up with relatively simple rules to describe the class and its instances.
 
The general problem here is not simply how to encode information about objects and classes, but in what forms do users want to examine that information? What is the user's context? Do they want a simple, general answer? Do they want a simple, general, and specific answer that may not be technically accurate for individual objects (e.g., "Most apples are red."), but nonetheless be generally useful? Maybe they want that statistical summary. Maybe they want that vaguer but more accurate simple answer, "reddish." Maybe they want that full, sampled image with its discrete values for their own analysis. Or maybe they have a reference color and they simply want to know if it is an "acceptable" or even "normal" (not "rotten") value for an apple.
 
Maybe there are a range of generic "query formats" for fetching object and class properties that can be shared across all objects and classes or at least over broad classes of objects. So, one of these generic formats could be chosen by the user and the actual property value(s) would be transformed to match the requested format.
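One way to picture those generic query formats: a single stored property served up in whichever presentation the user requests. The format names and sample data below are assumptions invented for this sketch, not a proposed standard.

```python
from collections import Counter

# Hypothetical sampled surface colors for one apple instance.
samples = ["red", "red", "reddish", "green", "red"]

def query_color(fmt):
    """Serve one underlying property in the requested generic format."""
    if fmt == "general":        # simple answer, not accurate for every sample
        return Counter(samples).most_common(1)[0][0]
    if fmt == "statistical":    # full distribution for the user's own analysis
        return dict(Counter(samples))
    if fmt == "raw":            # the discrete sampled values themselves
        return list(samples)
    raise ValueError(f"unknown format: {fmt}")

print(query_color("general"))
print(query_color("statistical"))
```

The same transformation layer could sit between any class's properties and the user, which is the leverage the paragraph is after.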
 
And maybe they just want to start somewhere with a basic answer and then examine it and drill down for more detail or more abstraction on their own as they see fit.
 
Maybe the user can specify a "degree of specificity" in their request and that would guide what specific form the returned property value would take.
 
And now we go on to other classes of objects and their "color" and we wonder to what extent we can compare or use colors across those disparate classes. Is a particular car the same color as objects of the class apple?
 
I am probably only scratching the surface, but these are some of the issues with trying to represent human-level knowledge in a technology world where representing even only basic information is still a real challenge.
 
Hmmm... I wonder if developing functionally complete ontologies for apples and colors may be more of a challenge than even a dozen doctoral dissertations? Toy ontologies, no problem; human-level knowledge, that's a harder nut to crack.

Sunday, June 26, 2011

Where are all the intelligent agents?

So, where are all the intelligent agents? The question keeps popping up and the list of excuses remains long and the final answer is always some variant of "coming soon." My own personal answer is that intelligent agents are critically dependent on having a very rich intelligent semantic infrastructure. In other words, factor a lot of the intelligence out of individual agents and leverage the merged intelligence in a common, shared rich intelligent semantic infrastructure so that individual agents can be relatively dumb in their implementation but appear to be quite intelligent in operation.
 
In short, there are lots of tools and services and even data out there, but it is all too disjoint and nebulous and not coherent and cohesive and integrated enough to constitute the kind of deep integrated rich intelligent semantic infrastructure that is needed to make software agents grow like weeds. So, maybe, but not necessarily, we have all the pieces but they are not arranged in a critical mass where software agents can readily sprout.

-- Jack Krupansky

Richness of semantic infrastructure

Making intelligent software agents both powerful and easy to construct, manage, and maintain will require a very rich semantic infrastructure. Without such a rich semantic infrastructure, the bulk of the intelligence would have to be inside the individual agents, or very cleverly encoded by the designer, or even more cleverly encoded in an armada of relatively dumb distributed agents that offer collective intelligence, but all of those approaches would put intelligent software agents far beyond the reach of average users or even average software professionals or average computer scientists. The alternative is to leverage all of that intellect and invest it in producing an intelligent semantic infrastructure that relatively dumb software agents can then feed off of. Simple-minded agents will effectively gain intelligence by being able to stand on the shoulders of giants. How to design and construct such a rich semantic infrastructure is an open question.
 
Some of the levels of richness that can be used to characterize a semantic infrastructure:
  • Fully Automatic – intelligent actions occur within the infrastructure itself without any explicit action of agents
  • Goal-Oriented Processing – infrastructure processes events and conditions based on goals that agents register
  • Goal-Oriented Triggering – agents register very high-level goals and the infrastructure initiates agent activity as needed
  • Task-Oriented Triggering – agents register for events and conditions and are notified, much as database triggers
  • Very High-Level Scripting – agents have explicit code to check for conditions, but little programming skill is needed
  • Traditional Scripting – agents are scripted using scripting languages familiar to today's developers
  • Hard-Coded Agents – agents are carefully hand-coded for accuracy and performance using programming languages such as Java or C++
  • Web Services – agents rely on API-level services provided by carefully selected and coded intelligent web servers
  • Proprietary Services – Only a limited set of services are available to the average agent on a cost/license basis
  • Custom Network – a powerful distributed computing approach, but expensive, not leveraged, difficult to plan, operate, and maintain
This is really only one dimension of richness, a measure of how information is processed. Another dimension would be the richness of the information itself, such as data, information, knowledge, wisdom, and various degrees within each of those categories. In other words, what units of information are being processed by agents and the infrastructure. The goal is to get to some reasonably high-level form of knowledge as the information unit. The Semantic Web uses URIs, triples, and graphs, which is as good a starting point as any, but I suspect that a much higher-level unit of knowledge is needed to achieve a semantic infrastructure rich enough to support truly intelligent software agents that can operate at the goal-oriented infrastructure level and be reasonably easy to conceptualize, design, develop, debug, deploy, manage, and maintain, and to do all of that with a significantly lower level of skill than even an average software professional. End-users should be able to build and use such intelligent agents.

Sunday, March 6, 2011

Linked lists for consumer-generated content for the Semantic Web

RDF and other Semantic Web technologies are powerful tools for hard-core information professionals to publish data for the Semantic Web, but are hardly usable for mere mortals such as consumers and other average users who wish to make their own content available on the Semantic Web. I propose what I call linked lists as a possible approach to publishing consumer-generated content for the Semantic Web. I am not using the term in the sense of traditional computer science (the linked list data structure), but more as a derivative of Linked Data and the Linked Open Data (LOD) movement. I started by noting that people like to keep and reference lists: lists of things to do, lists of people, lists of places, lists of songs, lists of movies, lists of restaurants, and even lists of lists. Lists tend to have a simple structure, easily processed by computer programs, and much of the data on the lists can relatively easily be translated into RDF-style URIs, at least in theory, and assuming that a sufficient library of the underlying concepts is developed, which is of course the segue into the world of Linked Open Data.

It is not the purpose or intent of this post to go into technical details, but simply to raise awareness of the basic concept of using consumer-generated lists as a way to introduce average users into being not just consumers of the Semantic Web, but generators of Semantic Web content as well.

Some lists are simple single-column lists of named entities. Simple enough, but the names may be nicknames, incomplete partial names, misspelled names, ambiguous names, etc. That raises the point about the importance of entity name resolution for "entry" into the world of the Semantic Web. I see this as a solvable problem, but it does illustrate just how yawning the chasm is between the world of real people and the Semantic Web itself. One opportunity here is that the multiple items on the list itself can provide a form of context that can help identify the category to be used for the list. Do the items look like names of places, names of things, names of people, movies, songs, bands, etc.? Once the category is identified, entity name resolution is substantially simpler. In some cases automated methods can complete 100% of the resolution, in some cases the user can be presented with a single likely match for confirmation, and in other cases a list of possible matches can be offered.
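
The category-from-context idea can be sketched as a simple vote: match each list item against per-category name sets and let the best total win. The tiny gazetteers here are invented assumptions; a real system would draw on large linked-data vocabularies, but the voting idea is the same.

```python
# Hypothetical per-category gazetteers; note 'tokyo' is deliberately
# ambiguous (a city and, in this toy data, a band name).
GAZETTEERS = {
    "city": {"paris", "london", "tokyo", "boston"},
    "band": {"the beatles", "wilco", "tokyo"},
}

def infer_category(items):
    """Vote each list item against every gazetteer; best total wins.
    A single ambiguous item is outvoted by the rest of the list."""
    scores = {cat: sum(item.lower() in names for item in items)
              for cat, names in GAZETTEERS.items()}
    return max(scores, key=scores.get), scores

category, scores = infer_category(["Paris", "London", "Tokyo"])
print(category, scores)
```

Even though "Tokyo" alone is ambiguous, the other rows supply the context that resolves the list as a whole, which is exactly the opportunity the paragraph identifies.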

Multi-column lists would seem to be a harder problem, but the columns provide context. A name column may not be unique, but an address or phone number column may provide enough disambiguation. A song name may not be unique to a performer or spelled properly, but adding a band or album name column might be plenty to disambiguate. The song name and performer name might both be incomplete or partially wrong, but combined they may actually be sufficient for disambiguation, or at least to dramatically reduce the likely options.
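
A sketch of that column-combination effect: neither column alone is unique, but intersecting the constraints is. The records below are invented examples standing in for an entity knowledge base.

```python
# Hypothetical entity records; song titles repeat across performers.
RECORDS = [
    {"song": "Yesterday", "performer": "The Beatles", "id": "song-1"},
    {"song": "Yesterday", "performer": "Ray Charles", "id": "song-2"},
    {"song": "Help!",     "performer": "The Beatles", "id": "song-3"},
]

def resolve(song=None, performer=None):
    """Intersect whatever column constraints the list row supplies."""
    hits = [r for r in RECORDS
            if (song is None or r["song"] == song)
            and (performer is None or r["performer"] == performer)]
    return [r["id"] for r in hits]

print(resolve(song="Yesterday"))                           # ambiguous: two hits
print(resolve(song="Yesterday", performer="The Beatles"))  # combination is unique
```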

Multiple columns may be unnecessary other than as memory aids and for disambiguation. After all, the LOD cloud should have all of the public data for any entity. So, the user can simply maintain their own stripped-down representation for any entity and then let the SemWeb itself supply any additional desired information. As long as enough info is supplied to identify the entity (or even plural entities), there is no need for the user to keep more detailed info in their own list. So, maybe the user can conceptually think of their lists as having two sides or parts: 1) their own raw list in their own preferred format (e.g., simple text file or spreadsheet), and 2) their preferred representation of the actual referenced LOD entities. Note that the SemWeb representation might be in a non-list format such as a graphical map or other structured format or even a full spreadsheet or database layout, if that is what the user has chosen. Of course, the user could choose any number of formats.

There will likely be some interest in templates for multi-column lists, but I don't see them as a requirement since the rows of the list provide disambiguating context. In fact, generally, the category of most lists will be quite obvious to even relatively simple automated analyzers, presuming there are enough rows. This does highlight the importance of being able to identify the category of SemWeb entities.

The user could of course author and maintain their lists in their favorite local editing tool such as a text editor or spreadsheet, but it is likely that keeping lists online would be preferable. Presumably sites would spring up which specialize in maintaining and publishing SemWeb lists. Of course there would be privacy controls so that private lists remain completely private or only shared as the user decides, but it should be dirt-simple easy to quickly publish a user-generated list. And once a user-generated list gets published to the Semantic Web, presto, it is now a candidate for getting linked into the LOD cloud.

Linking of user lists can occur in two ways: 1) a simple, direct link, such as a user-generated "list of favorite lists", or 2) creating a derivative list based on one or more existing published lists. Besides creating their own list from scratch or by wholesale copying of an existing published list, the user could reference an existing list and tell the software that the user wants to "start with" the existing list and then supplement it, adding some items and deleting others. The user might even request that multiple lists be combined. Or maybe include only some columns of data. A common usage would be for a user to identify a trendsetter (maybe just a friend) and supplement that list with their own personal interests. The key is to maintain a dynamic reference to the base list so that the user's full, published list will change as any base lists change.

The user's lists would be as the user creates and maintains them and completely devoid of formal URIs or other arcane SemWeb concepts. The published version would of course be in hard-core RDF, but with the clear-text source as well. The user would also have the option of automatically "cleaning up" their list to correct spelling errors, complete names, etc.

Linked lists provide an opportunity for dramatically increasing the scope of the Semantic Web and also provide an opportunity to escape from the current paradigm of web sites such as Facebook and LinkedIn being walled gardens holding user data captive.

The issue of exactly where online user lists would be published and stored is open, but the simple answer is: anywhere. In some sense user lists would be similar to blogs in that a user might have their own domain or choose a hosting site that caters to their personal skills and interests. The real point is that it truly does not matter where linked lists reside once they are identified or registered as being part of the Semantic Web. That raises the question of how to register new lists, but I am sure there will be plenty of sites and users ready and willing to fill that void.

-- Jack Krupansky

The semantic gap between bits and knowledge

We have a wide spectrum of levels of abstraction for representing information in computers, none of which is particularly well adapted to representing human knowledge in a form that is readily comprehended by computer programs. At the low end of the spectrum we have bits, bytes, characters, text, databases, XML, and even RDF for the Semantic Web. We have specialized abstractions for specialized applications as well. Somewhere in the middle of the spectrum we have various so-called knowledge representation languages which purport to be able to represent knowledge, but only in a host of well-defined, limited, constrained forms that are still not representative of true human knowledge and are not directly recognizable and usable by mere mortals. Sad to say it, but text for natural language is the closest form we have in computers to something that is recognizable and usable by mere mortals. Unfortunately, free-form text is not readily and easily recognizable and usable by computer programs (as a surrogate for human knowledge). So, we have a vast semantic gap between the bits of computers and the knowledge of humans.

I wish I had some graphic ability so that I could draw a fancy diagram of this spectrum of information and knowledge representation, but I don't, so I'll present the spectrum as a simple list, starting at the low end:

  1. Bits - zero and one, on and off.
  2. Bytes
  3. Characters and numbers
  4. Strings - sequences of characters representing individual words or identifiers
  5. Text - free-form sequences of strings or words, possibly even natural language prose
  6. Structured text - tabular lists (e.g., CSV)
  7. Databases
  8. Application-specific data formats
  9. XML
  10. RDF
  11. Big Gap #1
  12. Knowledge representation languages
  13. Big Gap #2
  14. Human knowledge and human language

RDF is a knowledge representation language of sorts, but is more specialized and adapted to representing raw information than more humanly-recognizable knowledge.

It is worth noting that there is a distinction between knowledge and communication, but that is beyond the scope of the main point about bits vs. human knowledge. One distinction is the concept of tacit knowledge, which is knowledge that defies straightforward communication or representation in language.

This information/knowledge spectrum layout immediately raises the question of the sub-spectrum of knowledge representation languages, a topic worthy of attention, but that too is beyond the scope of the immediate issue.

One of the most notable causes of the vast semantic gap between current knowledge representation languages and human knowledge is the issue of vocabulary definition. Computer-based systems strive to minimize and eliminate ambiguity while human knowledge and language embrace and thrive on ambiguity. Coping with ambiguity may be the ultimate chasm to be bridged before computers can have ready access to human knowledge.

One major problem with knowledge representation languages is that there are a lot of them, a virtual Tower of Babel of them, so that we do not have a common knowledge language that can be leveraged across all forms of knowledge and all application domains. Leverage is a very powerful tool for solving problems with computers, but lack of leverage is one of the most serious obstacles to solving problems with computers. Leverage can rapidly accelerate the adoption of new technology, but lack of leverage seriously retards adoption. XML and RDF were big leaps forward in leverage, but still nowhere near enough.

One open question is whether a rich-enough knowledge representation language can be built using RDF as its lower level, or whether something richer and more flexible than RDF is needed. This may hinge on what stance you take on ambiguity.

-- Jack Krupansky

Tuesday, December 28, 2010

Semantic whitespace

At a grossly oversimplified level, the Semantic Web consists of semantic islands and semantic links between those islands, each represented by a uniform resource identifier (URI) and a collection (graph) of very brief statements (triples) in turn constructed from URIs. In essence, each of these semantic islands is a concentration of knowledge, as is each of the semantic links as well.
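To make that picture concrete, here is a minimal sketch in Python, with semantic islands as URIs and knowledge as a graph of subject-predicate-object triples. The URIs and predicates are purely illustrative, not real vocabulary terms:

```python
# Toy model of the sketch above: each semantic island is a URI, and
# knowledge is a graph of (subject, predicate, object) triples built
# from URIs. All names here are illustrative placeholders.
triples = {
    ("ex:Paris", "rdf:type", "ex:City"),
    ("ex:France", "rdf:type", "ex:Country"),
    ("ex:Paris", "ex:capitalOf", "ex:France"),  # a semantic link between islands
}

def links_from(graph, subject):
    """Return the (predicate, object) pairs asserted about one island."""
    return {(p, o) for (s, p, o) in graph if s == subject}

print(links_from(triples, "ex:Paris"))
```

Everything formally "known" about an island is whatever the graph's triples say about it; anything else falls into the white space discussed below.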

But what about all of the space between the islands, as well as all of the dotted-line links that are too weak to be recognized as formal semantic links? All of this white space is somehow not a recognized part of the semantic map of formal knowledge. I'll refer to all of this informal knowledge that is outside and beside and under and over and around and in between the formal islands of knowledge and the formal links between those islands as semantic white space.

Semantic white space is informal knowledge that exists in the margins of our formal knowledge. It is just as important, but it is usually accorded even less status than a second-class citizen in the hierarchy of knowledge.

If we are ever to construct a true consumer-centric knowledge web we will need to accord informal knowledge and semantic white space first-class status on a par with formal knowledge.

-- Jack Krupansky

Thursday, September 16, 2010

What is the significance of borders?

Cafe Philo in New York City will meet next week on Thursday, September 23, 2010 with a discussion on the topic of "Could we live well without borders?" It is time to explore the nuances of that question.

We have to start by exploring what we even mean by the concept of borders.

First off, there are a lot of kinds of "borders", but I will assume that we are primarily interested in political borders.

There are lots of types of political entities, including towns, boroughs, cities, counties, states, and countries, which all have a lot of common qualities, but I will assume that we are primarily interested in national political entities or countries.

So, when we casually refer to "borders" we are typically referring to the borders between countries.

One nuance is that there is a distinction that could be drawn between borders and boundaries. A boundary is more of an imaginary or virtual "line" between adjacent countries, as in the lines drawn on a map, whereas an actual border is the physical manifestation of that imagined boundary in the real world, such as signs, fences, walls, other markers, checkpoints, etc. In some cases there may be no actual border per se, such as a country bounded by an ocean, or where a lake or river separates two countries and the division is only that imagined boundary line.

The border between Iraq and Iran is a great example. In some places there is a very visible border with border crossings under strict control. Then you have the southern portion of the Tigris river to the Persian Gulf (actually referred to as the Shatt al-Arab waterway) where there is no real border per se other than the imagined boundary in the middle of the Tigris waterway. That lack of a clearly discernible border led to the capture of British sailors by Iran, which claimed they were in Iranian territory. One report indicates that Iraq and Iran have no formal agreement as to where the boundary line is, so the simple notion of an imaginary line down the middle of the waterway (relative to some agreed tidal conditions) is up in the air in that situation. Recently we have seen the case of the alleged "hikers" in northern Iraq who supposedly "strayed" across the border into Iran without even realizing that they had in fact crossed any "border". In other words, there is no visible border unless you are aware of local custom, even if legally there might be a more formal virtual boundary that may be clearly discernible on maps.

Another nuance is air travel where you hop on a plane "in" one country and then "land" in another country without physically encountering any actual border, just a traversal across that imagined boundary line or maybe even an ocean. In fact, you may "fly over" any number of countries during that flight, but are you ever really "in" any of them? Have you ever really "entered" a country except by passing across a physical border or a surrogate for the border in the form of an immigration station at the airport?

Personally, I would say that I have never been "in" Vietnam, but back in 1987 or so I was on a flight from Singapore to Hong Kong and the pilot announced that we were "over" Da Nang (I think, or one of the other notable cities in Vietnam.) I would not say that I have been "to" Vietnam, but maybe I can semi-legitimately claim that I was "in" Vietnam in the sense of being within its boundaries, at least as a crow flies.

Some borders are heavily fortified or require advance permission (a visa) to cross, or at least some sort of documentation, such as a passport or driver's license. Then there are the borders within the European Union, which are effectively open regardless of which member country you are a citizen of.

To me, this discussion topic is less about the physical manifestation of a border than the abstract concept of the imagined boundary. Even further, it is not the actual boundary that matters, but an abstract boundary that for all intents and purposes is just a circle or rectangle that lets us refer simply to "here" and "there" or "us" and "them." So, I think the core subject of the discussion topic is not borders per se, but what I would call abstract national borders.

But even that is still not specific about the desired concept. My hunch is that ultimately the discussion topic is really about whether dividing the world and people into countries is necessary or necessarily advantageous. In other words, maybe the discussion is about whether world government is viable, or whether it is beneficial to divide people and places into separate and distinct nations with clear delineations between them. Or maybe we could say that we are interested in discussing the notion of national identity and whether it is needed or beneficial or maybe even harmful.

In any case, the four big things that people seem to care the most about relative to borders are laws, culture (including language and customs), communications, and trade. Political borders allow a clear distinction in how law is decided and structured. Culture does not require borders per se and can differ dramatically by regions within a country that are not necessarily political in nature, but it is still a major differentiation between countries. Trade certainly occurs regardless of whether there is a political boundary involved, but the terms of trade, including laws that relate to trade, can be affected greatly by political differences between countries. Communications seems to stand out as something that is likely to occur regardless of borders, although regulation of the communications infrastructure can be impacted by political considerations within and between separate countries.

I'll stop there for now to give myself and others a chance to review and ponder all of that before continuing.

-- Jack Krupansky

Sunday, August 29, 2010

On morality, ethics, pragmatics, aesthetics, and existentialism

I tried to come up with the narrowest possible subject line for this post about mistakes, but it does cover quite a range.

Although we do casually use "wrong" in a pragmatic sense such as "making a wrong turn" on a trip, or "giving the wrong answer" on a test, and technically this is a proper usage, my own understanding has been that "wrong" as in "right and wrong" is primarily an issue of morality. We can speak of a "wrong turn in life", as an error in judgment which has led to moral issues. I think of mistakes and errors in a hierarchy of philosophical levels:

  1. Wrong - morality, at a moral level, all about principle
  2. Improper - ethics, an ethical lapse, or issue of legality (illegal, regardless of whether it is morally right or wrong)
  3. Incorrect - pragmatics, a "technical" mistake (including an invalid scientific theory) which has practical implications, but not in a moral or ethical sense
  4. Undesirable - aesthetics, not really a practical problem per se, but a cause for unpleasantness or embarrassment or social stigma (even if it might be technically correct or legal or "right")
  5. Dangerous - existentialism, leads to a threat to survival or risk of significant imminent physical harm

My point is that we can interpret mistakes or "wrong" at any or all of these levels and should be clear when we speak as to which we are talking about.

This is a casual model on my part. There could be other categories or the categories could be divided differently. In other words, I could be wrong, in a category 3 or 4 sense. I reserve the right to "revise and extend" my model later in the discussion.

Drinking, especially by underage adults and teens, can quickly lead to category 5 "mistakes", such as the young woman who died in a fall from a high-rise apartment after an evening of "clubbing." Drunk driving, mistakes by aircraft pilots and vehicle drivers, and medical errors can also result in category 5 mistakes.

BTW, my hunch is that the "shame" referenced by Kathryn Schulz in her book Being Wrong would be for my category 3 and 4 mistakes which is pragmatic or aesthetic, not an ethical, moral, or existential problem, but quite unpleasant and embarrassing.

See:

Being Wrong: Adventures in the Margin of Error by Kathryn Schulz
Stuart Jeffries is cheered by a writer who sees a social value in our habit of mucking things up
http://www.guardian.co.uk/books/2010/aug/28/being-wrong-kathryn-schulz-review

and

http://www.slate.com/blogs/blogs/thewrongstuff/archive/2010/06/07/on-air-and-on-error-this-american-life-s-ira-glass-on-being-wrong.aspx

and

Slate posts by Kathryn Schulz
http://www.slate.com/blogs/search/searchresults.aspx?u=2434

-- Jack Krupansky

Friday, August 6, 2010

Ontology dowsing

How bad is the Semantic Abyss for the Semantic Web? Well, it is so bad that the process of trying to find or construct an ontology has been dubbed ontology dowsing. Really. Seriously. It is that bad. The web page for Ontology Dowsing tells us:

At the moment, the methods used in practice to locate an adequate vocabulary for describing one's data in RDF are more akin to dowsing than to an educated, technically-guided choice, supported by scientific tools and methodologies. While the situation is improving with the progress of Semantic Web search engines and better education, oftentimes data publishers still rely on informal criteria such as word-of-mouth, reputation or follow-your-nose strategies.

This page tries to identify methods, tools, applications, websites or communities that can help Linked Data publishers to discover or build the right vocabulary they need.

The web page provides references to:

  1. Lists of ontologies
  2. Search engines
  3. Repositories
  4. Mailing lists/online communities
  5. Ontology Editors
  6. Evaluation
  7. Related Events, Projects, etc.

That's a good start, but the bottom line is that automatic search for ontologies is still a hard AI problem.

What I want to see is a relatively simple tool that lets me describe my data as I see it, including example data, and then goes off and tries to match my ontological structure and data examples with existing ontologies and data and then suggests possible ontologies. A further step would be to then automatically generate an ontology alignment mapping (inferences) so that my data can then appear to the world as if structured in known ontologies. In some cases I might want to move to a known ontology, but in other cases my ontology may be "better" or maybe just a more convenient shorthand that works well for me. Alas, my model is not "aligned" with current reality. Hence, another manifestation of The Semantic Abyss.
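As a very rough illustration of the first step such a tool might take, here is a sketch that ranks candidate vocabularies by naive overlap between my field names and their term names. The vocabularies, field names, and scoring here are all hypothetical placeholders; real ontology matching would also have to handle synonyms, structure, and semantics:

```python
import re

# Hypothetical sketch: rank candidate vocabularies by naive term overlap
# with the fields of "my" data. This is only the word-level first step.
def tokens(name):
    # split camelCase / snake_case identifiers into lowercase words
    return {w.lower() for w in re.findall(r"[A-Za-z][a-z]*", name)}

def score(my_fields, vocab_terms):
    # fraction of my fields that share at least one token with some term
    hits = sum(1 for f in my_fields
               if any(tokens(f) & tokens(t) for t in vocab_terms))
    return hits / len(my_fields)

my_fields = ["personName", "home_page", "email"]
candidates = {                      # made-up term lists, not real ontologies
    "foaf-ish": ["name", "homepage", "mbox", "knows"],
    "geo-ish": ["lat", "long", "location"],
}
ranked = sorted(candidates, key=lambda v: score(my_fields, candidates[v]),
                reverse=True)
print(ranked)  # highest-scoring candidate first
```

Even this crude sketch shows why the problem is hard: "home_page" and "homepage" fail to match at the token level, which is exactly the kind of gap a real tool would need semantics, not string matching, to bridge.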

-- Jack Krupansky

Tuesday, June 15, 2010

Simulacrum

Doing a Google search of "The news makes the news" uncovered this interesting item:

Simulacra

Stephen Tyler (the Deconstructionist, not the lead singer for Aerosmith) describes the simulacrum thus:

"Where modernism focused on the central notion of representation, of the substitution of appearance, of a copy for an original version, post-modernism speaks of 'simulacra,' of models, of simulations, of constructed realities, of appearance as reality. The post-modernist simulacra undermine the notion of fundamental difference between reality and appearance, so we no longer think of 'models of reality' but in 'models as reality.' Simulacra do not re-present a prior or original presentation of the real, they are the real."

I saw a cartoon recently which perfectly illustrates the idea of simulacra: A TV camera is sitting in an easy chair watching television. A cable comes out of the camera and goes into the back of the TV set that the camera is watching. This cutting out of the "middle man," of self-simulation, is the essence of simulacra.

Of course the best example of simulacrum is the evening news. The news makes the News makes the news. CNN reports on the activities of Saddam Hussein, while Saddam watches CNN to find out what he's doing.

"The news" purports to represent a reasonably accurate account of some event that transpired, but this "image" by definition is a somewhat inaccurate representation of what actually and exactly happened. Even live video and audio doesn't give you the complete picture or context. It is only an approximation.

Even beyond the truth and accuracy and completeness of the details of what happened, the higher-level abstraction of what the event was or is categorically and what it "means" and its "significance" are open to debate. And context is both debatable and subject to purely subjective definition (e.g., what to "connect" the event to.) I would simply note that it seems as though a lot of people intentionally turn to "the news" media, whether a trusted "news anchor" on TV or a cherished newspaper (or web site), for... "analysis" that offers up interpretation of the actual news (the observable details) to provide meaning, context, and significance. Just the title of the "news story" alone can be very telling about how it is being "spun." "The news" can very quickly take on a life of its own that can be quite distinct from the reality that it purports to represent.

But, which do people really want? Do they want the boring details or the juicy story concocted with the details being only the starting point? I'll take the boring details any day, but it appears that most people prefer an elaborated story, no matter how far it diverges from objective reality.

So, this is a central problem with facts: the degree to which they represent reality, even if the intent of the observer and reporter is absolutely pure.

Maybe we simply have to accept the fact that all facts are subjective no matter how objective they seem or purport to be.

Could somebody remind me why I call this The Semantic Abyss?

-- Jack Krupansky

Friday, April 9, 2010

Dumb question about intelligent agents

How dumb could a software agent be and still be considered an intelligent agent, presuming that it can communicate with and take advantage of the services of other, more intelligent software agents?

This still leaves open the question of how we define or measure the intelligence of a specific software agent. Do we mean the raw, native intelligence contained wholly within that agent, or the effective intelligence of that agent as seen from outside of that agent and with no knowledge as to how the agent accomplishes its acts of intelligence?

We can speak of the degree to which a specific agent leverages the intelligence of other agents. Whether we can truly measure and quantify this leverage is another matter entirely.

In humans we see the effect that each of us can take advantage of the knowledge (and hence to some degree the intelligence) of others. Still, we also speak of the intelligence of the individual.

Maybe a difference is that with software agents, they are much more likely to be highly interconnected at a very intimate level, compared to normal humans, so that agents would typically operate as part of a multi-mind at a deeper level rather than as individuals loosely operating in social groups as humans do. Or, maybe it is a spectrum and we might have reasons for choosing to design or constrain groups of agents to work with varying degrees of interconnectivity, dependence, and autonomy.

So, maybe the answer to the question is that each agent can be extremely dumb or at least simple-minded, provided that it is interconnected with other agents into a sufficiently interconnected multi-mind.

But even that answer raises a further question: what is the minimal degree of interconnectivity that can sustain intelligence?
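A toy sketch of the idea: an agent with almost no knowledge of its own that forwards questions to its peers, so that its effective intelligence comes from the multi-mind rather than from anything native. All names and the question format here are hypothetical:

```python
# Hypothetical sketch: a very dumb agent whose effective intelligence
# comes almost entirely from the peers it is interconnected with.
class DumbAgent:
    def __init__(self, name, facts=None):
        self.name = name
        self.facts = dict(facts or {})  # tiny "native" knowledge base
        self.peers = []

    def ask(self, question, asked=None):
        asked = asked or set()
        asked.add(self.name)            # never ask the same agent twice
        if question in self.facts:      # raw, native intelligence
            return self.facts[question]
        for peer in self.peers:         # effective intelligence via the multi-mind
            if peer.name not in asked:
                answer = peer.ask(question, asked)
                if answer is not None:
                    return answer
        return None

a = DumbAgent("a")  # knows nothing on its own
b = DumbAgent("b", {"capital of France?": "Paris"})
a.peers.append(b)
print(a.ask("capital of France?"))  # prints "Paris" -- dumb alone, smart in the group
```

Measured from the outside, agent "a" answers the question; measured by its own knowledge base, it knows nothing, which is exactly the native vs. effective distinction above.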

-- Jack Krupansky

Wednesday, April 7, 2010

Truth, proof, and evidence

We encounter all manner of statements, beliefs, facts, and claims which we assert are either true or not true, or might be true or might be false. If a statement is true, how do we know it? Can we prove that it is true or false? Do we have evidence that it is true or false? What does it mean to say that we have proof or that we have evidence?

Truth is our ultimate objective as seekers of knowledge and wisdom. Whether we ever achieve truth is usually a matter of debate.

A proof or our ability to prove a statement, belief, fact, conclusion, assertion, or claim is some collection of knowledge and artifacts which when viewed by an independent, objective, competent observer with the necessary expertise would lead that observer to conclude that the claim is "true beyond all doubt." That's quite a tall order. In fact, except in pure mathematics or particular bureaucratic institutions, it is essentially an impossible objective. In truth, the best we can usually hope for is to approximate a proof of a claim. Each of us and each of our social and political institutions has a subjective right to define our own criteria for what standards we wish to accept for proof of any particular claim. Different individuals and institutions can differ on what proofs they accept as fact.

Evidence is basically anything, such as an observation, measurement, calculation, document, physical object, reasoning, knowledge, etc. that supports a claim, or, alternatively, anything that undermines a claim. Generally, evidence by itself does not necessarily prove (or disprove) a claim, but can help to guide us in the direction of strengthening or weakening our confidence in a claim. If we collect enough evidence of enough strength, we may in fact eventually be able to confidently assert that we are able to prove or disprove a claim beyond our own doubts. Maybe, but not necessarily. Evidence does not per se have to be able to prove or disprove a claim for us to assert that it supports or undermines a claim. Evidence does not have to be strong and convincing; it may merely be weak and flimsy, or maybe even irrelevant. The only real requirement is that evidence is offered to support or dispute a claim.
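This view of evidence can be caricatured in a few lines of code: each piece of evidence carries a signed weight, and a claim counts as "proved" for a given party once the net confidence crosses that party's own threshold. The weights and threshold are arbitrary illustrations, not a serious epistemology:

```python
# Toy model: each piece of evidence carries a signed weight -- positive
# supports the claim, negative undermines it. The threshold is arbitrary
# and, as argued above, each individual or institution picks its own.
def confidence(evidence_weights):
    return sum(evidence_weights)

def proved(evidence_weights, threshold=0.9):
    return confidence(evidence_weights) >= threshold

evidence = [0.4, 0.3, -0.1, 0.5]  # mixed support and undermining
print(proved(evidence))           # True under this particular threshold
```

Note how the model captures the claims above: a single weak piece of evidence moves confidence without proving anything, and two parties with different thresholds can disagree about whether the same evidence constitutes proof.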

We could use the analogy of a destination and a journey to that destination. We seek to arrive at a proof. Evidence is the collection of individual steps along the way to that destination. We can, to some degree, measure our progress on our journey. At some point we may be able to confidently conclude that we have in fact arrived at the destination, that truth has been obtained.

As a final note, even an accepted proof may not necessarily be the definitive achievement of truth. People may in fact justifiably believe in a proof, but there may be a flaw in their assumptions, observations, measurements, analysis, calculations, or reasoning. People may not initially be aware of any such flaws, but as our minds, knowledge, and technology advance, flaws may become apparent. Then, suddenly, our knowledge and its proof may be overturned. Much of the evidence itself may still be just as valid as before, but our interpretation, integration, and induction of conclusions based on that evidence may be different.

In short, constantly seek evidence that supports or undermines claims, be careful not to leap too quickly to conclusions and acceptance of proof, and always be ready for the sudden appearance of new evidence or new ways of thinking about and working with existing evidence that may undermine an existing proof or in fact support a new proof.

-- Jack Krupansky

Wednesday, March 31, 2010

Does philosophy bake bread?

There is an old saying that "Philosophy bakes no bread", implying that philosophy has no significant practical value, but I disagree, at least somewhat. I do agree that a large portion of what is called "philosophy", especially as practiced in modern times, is rather disjoint from progress in the real world, but a significant portion of philosophy, especially in a historical context is and has been extremely valuable, and eminently practical.

Early philosophy was really the precursor of a lot of modern science and logic. Basically, early philosophy studied and promoted the kind of disciplined and structured thought that is needed for virtually all modern disciplines, from mathematics, science, and engineering to law and our social and political systems.

Another way of saying this is that over time, every modern discipline and social system borrowed concepts, methods, and techniques from early philosophy. That is an understatement; every modern discipline and social system is based on the products of philosophy. Without the early works of Aristotle, Socrates, and Plato, and the enlightened efforts of Hume, Locke, and Rousseau, among countless other brilliant philosophers over the centuries, we would not have much of what we call modern in the modern world.

The simple fact is that all modern disciplines absorbed concepts from philosophy over the centuries so effectively that the concepts are considered part of those disciplines rather than being owned by the "discipline" called philosophy.

Critical thinking is, well, critical to analyzing the facts in any discipline. The world can be a complex and confusing place. Winnowing truth from fiction and relevance from irrelevance can be a very difficult proposition even on a good day. Technology can certainly help as a tool, but all tools must be used properly to be effective. Critical thinking is essential to guiding us to making practical and workable decisions from mushy and vague raw data.

A very pragmatic issue is that a lot of difficult questions are so poorly or vaguely phrased or framed that it simply is not practical to even begin to answer them in a practical and workable manner until a deep and broad philosophical analysis can tell us what the questions are really all about. Answering inappropriate interpretations of questions can certainly lead to answers or solutions that do not meet the original needs that the questions may have been intended to address.

Another simple fact is that the adoption of the concepts of philosophy has been so thorough over the centuries that we have reached the stage where the rate of adoption of the remaining un-adopted concepts is so slow that the average person simply cannot see it, even if they look very hard. But, as they say, appearances can be deceiving.

It is also true that a lot of "modern" philosophy has gotten so esoteric and so apparently disjoint from apparent reality that most people see philosophy as being completely disconnected from reality, even if that is not completely the case.

The truth is a bit more complex. Granted a lot of philosophy does appear disconnected from reality and maybe a lot of the time that is the case, but just as often it is simply that philosophy can run well ahead of the times. Politicians may not be ready to pass reasonable and workable laws permitting doctors to "pull the plug on granny" or define precisely how privacy and trust should function in online computer networks, but philosophers and leading edge experts in all disciplines can and do spend serious time discussing these kinds of issues seriously. They are, to put it simply, being philosophical. That is philosophy in action. That is philosophy baking bread. It may not be bread that the average person can eat today, but it is bread that the average person will be taking for granted somewhere down the road in coming years, decades, and centuries.

At the extreme, even the nature of existence itself is still an unsolved problem, with quantum mechanics, string theory, and "god particles" a matter of the kind of speculation and debate normally reserved exclusively for the kind of philosophers who supposedly do not bake bread.

In computer science there is an old saying that AI (Artificial intelligence) is just all of the things that we do not know how to do yet.

I would suggest a similar statement, that philosophy is where we hold preliminary discussions about the toughest unsolved problems of humanity or new ways of thinking that can be applied to those problems.

Philosophy is less concerned with what to do in life and more with how to think about values and the processes of thought and action, so that we can divine better systems of values and better techniques for thought and action that can then be applied in new and novel ways to solve problems in more innovative ways than were readily available to us in the past.

Put in more pragmatic terms, to be sure, philosophers do not directly bake bread or tell people how to bake bread, but rather offer people new and novel ways to think about new approaches to baking bread, and how to leap ahead and think about meeting the biological and social needs that bread was intended to address in the first place. Philosophy can guide us in forward thinking about ethical and social concerns related to global poverty and global health.

In short, there is still plenty of mileage to be gotten from philosophy, especially in leading edge research efforts of virtually all disciplines, especially in situations where significant uncertainty, lack of determinism, and ethical, social, and political concerns outstrip classical mechanistic approaches to problem solving.

So, yes, philosophers can certainly seem to be lost in the clouds, but a large part of that is because that is where some of the hardest problems facing humanity lie. So many of us remain so lost in the mundane concerns of daily life and the rote details of our disciplines that we continue to stumble through life precisely because we do not have the vantage point from the clouds that would enable us to distinguish the forest from the trees.

-- Jack Krupansky

Monday, March 22, 2010

Progression of knowledge

I have a simplified model of the progression of knowledge. Knowledge somehow needs to start somewhere as informal knowledge or tentative knowledge and eventually end up as formal knowledge, something seriously believed to be true.

In my simplified model knowledge progresses (loosely) in the following incremental steps:

  1. An observation or a thought, something that just pops into your head.
  2. An idea, something you think about.
  3. A concept or belief, something you have given some serious thought to and believe is likely to be valid and true.
  4. A conjecture, a relatively formalized and structured form of a concept.
  5. A theory, a fully formalized version of one or more conjectures.
  6. A hypothesis, a prediction based on a theory that can be tested to prove or disprove part or all of a theory.
  7. A law, conclusion, or generalization based on the theory and tested through hypotheses, experiments, experience, and the passage of time.

So, any statement can be categorized as to how developed it is in my knowledge progression.
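For example, the progression could be captured as an ordered enumeration, so that statements can be tagged and compared by how developed they are. A minimal sketch in Python; the stages mirror the list above, and the tagged statement is just an illustration:

```python
from enum import IntEnum

class Knowledge(IntEnum):
    """Stages in the (loose) progression of knowledge; the order matters."""
    OBSERVATION = 1
    IDEA = 2
    CONCEPT = 3
    CONJECTURE = 4
    THEORY = 5
    HYPOTHESIS = 6
    LAW = 7

# IntEnum gives the ordering for free: a theory is more developed than a
# conjecture and less developed than a law.
assert Knowledge.THEORY > Knowledge.CONJECTURE
assert Knowledge.HYPOTHESIS < Knowledge.LAW

statement = ("software agents can form a multi-mind", Knowledge.CONJECTURE)
print(f"{statement[0]!r} is at stage {statement[1].name}")
```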

-- Jack Krupansky

The semantic abyss: reality vs. our perception and our models

From the very moment we first open our eyes or first hear some sound or first touch anything we feel that we are experiencing the world around us and that we know that world, reality, but do we? Given enough experience, we gradually realize that some if not many of our earlier perceptions are not completely in accord with reality as it really exists. So, the most basic conception of the Semantic Abyss is that we have two worlds to deal with: 1) the real world, reality itself, and 2) the perceived world, our mental model of what we think or imagine the real world is.

We actually have a third and fourth world to deal with: 3) a model of the real world constructed from conceptions based on our perceptions that we can express to others, and 4) the models of the world that others have constructed and endeavored to communicate to us.

Somehow, we merge, mesh, and blend these three models and derive a composite model of the real world.

Over time and with enough input with enough diversity we come up with ever-better models that better represent the reality of the real world, but despite our best efforts, there will always be a lingering Semantic Abyss between the real world as it really is and our best mental model of what the world is.

Another issue is that even when we are fortunate enough to establish a workable one-to-one correspondence between the real world and our mental model of the real world, there is no guarantee that each correspondence of reality and mental model is accurate and rich enough to adequately model the full complexity of the real world.

A final issue is that we wish to share our models with computers and other artificial entities (e.g., robots which seek to move around and interact with the real world) so that computer programs can make sense of the real world, either in terms of recorded data or real-time sensor data.

In short, we deal with four models of the real world:

  1. The real world as it is that we can observe and interact and experiment with.
  2. Our perception and internal conception of the real world.
  3. The communicable model of the real world that we share with each other.
  4. Computer models of the real world which can be readily manipulated by computer programs.

There are plenty of gaps between those four models of reality that we need to cope with when dealing with knowledge of the real world.

-- Jack Krupansky

Thursday, March 18, 2010

Bridging the semantic gap

Given that there is a semantic gap that I have been referring to as the Semantic Abyss, how exactly do we go about bridging the gap? My overall position remains that the gap is far too great to completely bridge now, or at any time in the near future. That said, it is worth considering the many ways in which we can partially bridge the gap and what potholes, barriers, and mine fields exist in the remainder of the gap. I will not endeavor to do all of that right now and right here, but some examples are worth considering.

First, we have to acknowledge that it is virtually impossible to bridge the semantic gap as a general proposition and that at best we can only hope to approximate bridging the gap. Ray Kurzweil's vision of a Singularity would presumably require a complete, 100% bridging of the semantic gap by the year 2045, if not sooner, but that is far beyond the scope of my near-term interests.

Second, we have to acknowledge that there are a multitude of semantic gaps. For example, there is a semantic gap between any two individuals, and that gap differs for every distinct pair, depending especially on how much knowledge they already share.

Third, in general, bridging the gap is a bidirectional process, not a one-way communication. For example, the more knowledgeable party has to learn at least an overview of what the less-knowledgeable party already knows and doesn't know before or as part of the process of bridging the semantic gap. As a general proposition, every party has their semantic strengths and their semantic weaknesses and bridging the semantic gap simply means a semantic balancing, so that at the end of the process they each know what the other knew. But that is only as a general proposition.

Fourth, bridging the semantic gap is frequently and intentionally an asymmetric process, where one or more of the parties seeks a semantic advantage over the other. Negotiation and propaganda are examples. Education is another: it is usually preferable to stage the semantic transfer incrementally rather than attempt it all at once, since the cognitive capabilities of students develop over an extended period of time. You could say that a typical student accepts a semantic weakness, if only because of the incremental nature of education. An amateur might also accept a semantic weakness relative to a professional.

As a general proposition the process of bridging the semantic gap is a learning process. As an extreme case, absolute semantic peers are "on the same page" and can communicate without any significant learning required. As a practical matter even nominal semantic peers frequently are not exactly "on the same page" and miscommunication occurs until one or both parties recognizes that the peer relationship has broken down and learning is required.

Partial knowledge is a common "solution" to bridging the semantic gap. By both parties "agreeing" that not all seemingly relevant knowledge is needed in any particular situation, the semantic gap can be dramatically reduced, "by definition" (agreement). As a specialized case, we may simply decide that computers are still far too "weak" to support full comprehension of human knowledge and decide on a structured subset of knowledge to shrink the semantic gap to a manageable size.

Hardwired knowledge is another common solution, especially, but not exclusively, with artificial entities. The entity with the hardwired knowledge doesn't really "know" what it is dealing with in a very deep and meaningful sense, but "knows" at least deep enough so that a relatively meaningful conversation can occur.

(To be continued, eventually.)

-- Jack Krupansky

Wednesday, March 17, 2010

What do I know? What do you know? What do we know?

So, what do I know? Literally. Or what do you know? And what do we collectively know? Even if we sincerely wanted to represent everything that we know and are very diligent about going about the task, it is still virtually impossible for us to adequately convey to anyone, person or computer or simply in written word, all knowledge that we possess, either individually or collectively. At best, we can approximate what we know.

One of the biggest problems is dealing with tacit knowledge where we are clearly able to perform various tasks but are literally unable to express in natural language exactly how we are able to perform those tasks.

Another big problem is that most people do not have photographic memories and are frequently unable to recall knowledge on demand, even though in some other situation, or simply after the passage of time, or if prompted, their recall may come much more readily or with more fidelity.

There are many other difficulties with any of us being able to fully express the totality of our knowledge.

The real problem is that even if we could express everything that we know, there is no reliable way for any of the rest of us to read or view or listen to those expressions and have 100% certainty that we understood what the other person intended to express.

So, we have these distinct, although overlapping collections of knowledge:

  • What do you or I know by ourselves
  • What personal knowledge can we consciously contemplate
  • What personal knowledge can we adequately express in natural language or any other knowledge artifact
  • What personal knowledge do we choose and intend to express
  • What did we actually express relative to what we intended to express
  • What portion of our expressed personal knowledge can be reliably deciphered by others
  • How others interpret what they read or hear that we have expressed
  • How much of what they have interpreted can be remembered and recalled and with what reliability and accuracy
  • How reliably and accurately can others relate our knowledge that they have acquired to a third party
  • How much acquired knowledge of another (or others) and our own personal knowledge are coalesced into shared knowledge
  • How much shared knowledge can be reliably and accurately shared with other parties
  • Our ability to distinguish which portions of knowledge came from whom or among whom it is shared

And that is just between two real people. Add more people, many more people. And add the many combinations of two or more people, the groupings of people we find in the real world. Layer onto that the huge issue of how to represent human knowledge in a form that artificial entities can adequately process. Obviously that is what we want to try to do in a full-blown knowledge web.

And even after we have done all of that, we must acknowledge and cope with the fact that our knowledge is a living thing, subject to constant and continual change.

-- Jack Krupansky

Tuesday, March 16, 2010

Relationship between sentience and knowledge

Although my primary interest is in representation of knowledge, it makes sense to focus attention on how knowledge is generated and used by sentient entities, whether they are real people or artificial sentient entities such as robots and software agents.

I propose a fairly simple model of the structure of a sentient entity in terms of functional capabilities that somehow relate to consumption and production of data, information, knowledge, or wisdom:

  • Sense, observe, measure entities (both sentient and non-sentient) and phenomena in the environment
  • Feel, sense (at a higher level, processing what was sensed at the raw sensory level), react - emotions, instinctive processes
  • Remember and recall
  • Think - perceive, analyze, conceive, speculate, contemplate, believe, desire, intend, plan, decide, control
  • Express feelings, emotions, reactions (relatively unidirectional)
  • Communicate mental state, record information
  • Act, behave
  • Interact with other sentient entities in relatively intense conversational mode
  • Intuit
  • Read minds [Really? Well, at least conceptually.]

Data, information, and knowledge flow into each of these functional capabilities, either from the environment or from other functional capabilities, are processed to some degree, and some new form of data, information, or knowledge is generated and made available to other functional capabilities or to the environment.

The representation of data, information, and knowledge within any of these functional capabilities may or may not be comparable or synchronized with external representations of that data, information, and knowledge in a Semantic Web or Knowledge Web. There may be some boundary lines delineating internal versus external flows or availability of any or all of the data, information, and knowledge.

Note: I am not sure how the information from a neurological brain scan fits into this model. Maybe it is simply an indistinct composite or a composition of neurological state information. Conceptually, the same issue can occur with an artificial sentient entity, by examining raw data in state variables (e.g., a raw memory dump). Whether this might have some utility is unknown, but it could have some analogy to a signature such as is done with a computer virus scan.

One could also view the DNA of a sentient entity, or the code of an artificial sentient entity as data, information, and knowledge as well.

Ditto for the biography or history of the sentient entity as well.

-- Jack Krupansky

Monday, March 15, 2010

What determines the future (or caused some outcome)?

Only the most mindless simpleton believes that the future is predetermined and that everything that happens does so because it was "destined" or predetermined to happen. Most of us can agree that predestination is not an adequate account of reality. But that leaves open the general question of what determines the future or even any outcome in the present? If hard, full determinism does not preordain all outcomes, what model for the progression of reality should we be using? Just for the record, I will state my simplified model of what determines the future.

Every event or outcome or change of state in reality (the universe) is determined by some combination of factors, even if we may not be able to clearly determine what those factors may specifically be in any given instance. The categories of these factors are:

  1. Natural progression. Law-like behavior such as gravity, an object rolling down a hill, hot air rising, momentum, orbiting bodies, or the life cycle of living things. Or something as simple as evaluating a mathematical equation across its domain. Outcome is very predictable and causality is well-defined.
  2. Specific causal factors. Forces, objects, actors, drives, etc. which are reasonably "clear", including the proverbial "smoking gun." Outcome may be moderately predictable and causality relatively easily determined.
  3. Non-specific causal factors. Something influenced or caused a change even if we have difficulty or are even unable to determine what the causal events actually were. Outcome has low or no predictability and any apparent causality will tend to be mostly speculative in nature.
  4. Random variability. Ranging from quantum indeterminism and radioactive decay to statistical, stochastic, and chaotic processes. Even if we recreate the exact prior situation (say, in a parallel universe), the outcome could vary. Even omniscience and omnipotence would not determine the outcome. No predictability other than possibly a statistical distribution. Causality may sometimes be established by the nature of the event (e.g., radioactive decay), but may be completely indeterminate (e.g., judging a free-will decision vs. a known bias).
  5. Free will. Choice by a sentient entity (e.g., person or computer) unconstrained by any factors. Various factors may inform or influence or guide or even bias choice, but ultimately there is an act of free will making the decision. May or may not be predictable. Causality may be very difficult if not impossible to establish, although a sentient entity might communicate its decision-making process or a brain scan might suggest whether free will was a significant factor or not.
  6. Intervention by a deity. Not everyone believes in a God, but those who do might find the intentions of a God a more credible explanation for events and outcomes than other, more worldly factors.

Now, I have attempted to summarize a model for what determines the future (or caused some past outcome) in the real world for real people. That said, is this same model valid for any or all virtual worlds? I think so, but not necessarily. Some categories of factors may not be relevant in some specific virtual worlds, but are there other categories that are operative in all or some specific virtual worlds but not operative in our real world? Conceivably one could define such a virtual world, although I have not personally heard of one. Nonetheless, it would be interesting to speculate what additional categories of factors might conceivably apply to the Semantic Web and future Knowledge Webs, especially as artificial sentient entities (software agents, robots, etc.) begin to proliferate.

As a final note, all of this ties in with provenance as well, a topic of emerging interest in the Semantic Web, although currently the Semantic Web is more interested in the who of a change in data rather than some deeper why.

-- Jack Krupansky

Sunday, March 7, 2010

David Gelernter: Time to Start Taking the Internet Seriously

I just finished reading an essay on Edge by noted computer scientist David Gelernter entitled "Time to Start Taking the Internet Seriously" which basically argues for his concept of lifestreams as a better model for publishing and accessing information than today's web model. Rather than organizing information in a spatial form, he recommends that we think about and organize information along the time dimension. As he puts it:

The Internet's future is not Web 2.0 or 200.0 but the post-Web, where time instead of space is the organizing principle -- instead of many stained-glass windows, instead of information laid out in space, like vegetables at a market -- the Net will be many streams of information flowing through time. The Cybersphere as a whole equals every stream in the Internet blended together: the whole world telling its own story.

He proceeds to describe the nature of the problem and how lifestreams will address it:

13. The traditional web site is static, but the Internet specializes in flowing, changing information. The "velocity of information" is important -- not just the facts but their rate and direction of flow. Today's typical website is like a stained glass window, many small panels leaded together. There is no good way to change stained glass, and no one expects it to change. So it's not surprising that the Internet is now being overtaken by a different kind of cyberstructure.

14. The structure called a cyberstream or lifestream is better suited to the Internet than a conventional website because it shows information-in-motion, a rushing flow of fresh information instead of a stagnant pool.

15. Every month, more and more information surges through the Cybersphere in lifestreams — some called blogs, "feeds," "activity streams," "event streams," Twitter streams. All these streams are specialized examples of the cyberstructure we called a lifestream in the mid-1990s: a stream made of all sorts of digital documents, arranged by time of creation or arrival, changing in realtime; a stream you can focus and thus turn into a different stream; a stream with a past, present and future. The future flows through the present into the past at the speed of time.

16. Your own information -- all your communications, documents, photos, videos -- including "cross network" information -- phone calls, voice messages, text messages -- will be stored in a lifestream in the Cloud.

17. There is no clear way to blend two standard websites together, but it's obvious how to blend two streams. You simply shuffle them together like two decks of cards, maintaining time-order -- putting the earlier document first. Blending is important because we must be able to add and subtract in the Cybersphere. We add streams together by blending them. Because it's easy to blend any group of streams, it's easy to integrate stream-structured sites so we can treat the group as a unit, not as many separate points of activity; and integration is important to solving the information overload problem. We subtract streams by searching or focusing. Searching a stream for "snow" means that I subtract every stream-element that doesn't deal with snow. Subtracting the "not snow" stream from the mainstream yields a "snow" stream. Blending streams and searching them are the addition and subtraction of the new Cybersphere.

18. Nearly all flowing, changing information on the Internet will move through streams. You will be able to gather and blend together all the streams that interest you. Streams of world news or news about your friends, streams that describe prices or auctions or new findings in any field, or traffic, weather, markets -- they will all be gathered and blended into one stream. Then your own personal lifestream will be added. The result is your mainstream: different from all others; a fast-moving river of all the digital information you care about.

In short:

To accomplish this, we merely need to turn the whole Cybersphere on its side, so that time instead of space is the main axis.

There is much more to his model for information in the "Cybersphere", but time-based lifestreams are his core starting point.
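His blending and subtracting operations map naturally onto familiar code. Here is a minimal sketch in Python (the stream contents and variable names are my own invention, purely for illustration): blending two time-ordered streams is a merge that maintains time order, and focusing is a filter.

```python
import heapq

# Each stream is a time-ordered list of (timestamp, text) documents.
news = [(1, "snow closes schools"), (4, "markets rally"), (7, "snow melts")]
personal = [(2, "lunch with Dana"), (5, "photo: snow fort")]

# Blending: shuffle the two streams together, maintaining time order,
# "like two decks of cards" -- the earlier document comes first.
mainstream = list(heapq.merge(news, personal))

# Subtracting (focusing): keep only the elements that mention "snow",
# i.e., subtract the "not snow" stream from the mainstream.
snow_stream = [doc for doc in mainstream if "snow" in doc[1]]
```

Because both operations preserve time order, a focused stream is itself a stream and can be blended again, which is what makes streams composable in a way static sites are not.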

-- Jack Krupansky

The welling up of knowledge

I was reading an essay on Edge by noted computer scientist David Gelernter entitled "Time to Start Taking the Internet Seriously" and ran across a reference to the concept of information welling up in the context of his conception of lifestreams. He wrote:

Ten years ago I described the computer of the future as a "scooped-out hole in the beach where information from the Cybersphere wells up like seawater."  Today the spread of wireless coverage and the growing power of mobile devices means that information does indeed well up almost anywhere you switch on your laptop or cellphone; and "anywhere" will be true before long.

That's an interesting concept. Rather than explicitly accessing data by going to its source or explicitly searching for it, all one need do is create the proper situation (the well) and the data simply appears or wells up, welcomed but not directly or explicitly bidden per se.

So, we have a collection of concepts here, in my view:

  • knowledge wells (or data wells or information wells) which are places where information can simply materialize (or the data equivalent)
  • knowledge welling, the incremental (or streaming or merely "seeping") appearance of data in a knowledge well (or data well or information well)
  • welled knowledge (or welled data or welled information), which is knowledge that appears in a knowledge well
  • wellable knowledge (or wellable data or wellable information), which is knowledge that is somehow prepared or packaged or published in a form that makes it readily distributable to knowledge wells.

At a simplistic level, a knowledge well could simply be a search query directed at some data source, but to truly fulfill Gelernter's vision, something far more sophisticated is needed. What that something might be I cannot say at this time.
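As a toy illustration of that simplest case, a knowledge well could be modeled as a standing query that passively accumulates whatever matches, so information appears without being explicitly searched for each time. The class, method, and field names below are hypothetical, invented only for this sketch.

```python
# A toy "knowledge well": a standing query into which matching
# information wells up as data sources push items past it.
class KnowledgeWell:
    def __init__(self, query):
        self.query = query    # the "shape" of the scooped-out hole
        self.welled = []      # welled knowledge: items that have appeared

    def feed(self, item):
        """Called by a data source; the well keeps only what matches."""
        if self.query in item:
            self.welled.append(item)

well = KnowledgeWell("snow")
for item in ["snow in April", "markets rally", "more snow expected"]:
    well.feed(item)
# well.welled -> ["snow in April", "more snow expected"]
```

A real well in Gelernter's sense would need far more than substring matching, of course; the point is only the inversion of control, with the source pushing and the well passively receiving.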

Curiously, maybe there is a community collaboration angle there as well, since the term reminds me of the famous The Whole Earth 'Lectronic Link known as The WELL. Whether or not a connection between the two concepts would make sense would depend on how specific and narrow one wants to define the terms. One could define a simple RSS feed as an information well, I suppose. One could define the Twitter public timeline as an information well. Sure, one can tap into any "conference" on The WELL, but then that is a fairly narrow information stream. Somehow, a Gelernteresque knowledge well would have a more global, blended un-focus, I would think.

Thinking about how information might well up reminds me of a concept I considered years ago, something I call GMWIMW, for Give Me What I Might Want, a mythical filter for information on topics that I do not even know about yet. That would be at least one type of knowledge well that I would be interested in.

-- Jack Krupansky

Wednesday, March 3, 2010

What is the unit of meaning?

Superficially, that is the question: What is the unit of meaning? But, that one question is part of a bundle of questions, including (but not limited to):

  1. What is the unit of knowledge?
  2. What is the unit of semantics?
  3. What is the unit of meaning?
  4. What is the unit of communications?
  5. What is the unit of expression?
  6. What is the unit of thought?
  7. What is the unit of facts?
  8. What is the unit of reasoning?
  9. What is the unit of objectivity?
  10. What is the unit of subjectivity?
  11. What is the unit of context?
  12. What is a unit in a holistic system?

In natural language the obvious choices for a unit are word, sentence, phrase, and morpheme. I would lean towards word or term or sometimes phrase, but at least when it comes to foreign language translation, anything less than a sentence is questionable for capturing meaning. Sure, we can look a word up in a dictionary, but frequently we find that a word will have multiple senses and the context of a phrase, clause, or sentence is needed to decipher which sense is appropriate. Maybe this simply means that words are still an appropriate unit, but that context is needed as well, much in the way that pieces of wood and nails can be units for building, but tools such as a saw and a hammer and a plan are needed to make sense of the units.

In the Semantic Web, we have units such as URIs and literals, but it is the statement or triple as a unit of expression that seems the most useful focus. Or maybe not. An RDF statement is somewhat analogous to a natural language statement. A URI or literal string is comparable to a natural language word. The same literal can be used in multiple RDF statements, with a multiplicity of senses, each suggested by the subject and predicate of the RDF statement which contains it.
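The point about a literal taking its sense from its enclosing statement can be made concrete. A minimal sketch, using plain Python tuples in place of a real RDF library (the URIs here are invented for illustration, not drawn from any real vocabulary):

```python
# RDF-style triples as plain tuples: (subject, predicate, object).
# The same literal "Java" appears in two statements with different senses.
triples = [
    ("http://example.org/lang/java",  "http://example.org/p/label", "Java"),
    ("http://example.org/place/java", "http://example.org/p/label", "Java"),
]

# The literal alone is ambiguous; only the subject (and predicate)
# of each containing statement suggests which sense is meant.
senses = {s for (s, p, o) in triples if o == "Java"}
```

The set of distinct subjects stands in, crudely, for the distinct senses of the literal; an RDF library such as rdflib would represent the same structure with typed URI and Literal objects.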

For now, I would suggest that the word is the natural unit of meaning in natural language, and the URI is the natural unit of meaning in the Semantic Web.

One related question that concerns me: The Semantic Web does not seem to have the concept of sense for URIs that we have in a natural language dictionary. Hmmm...

Language, natural or otherwise, is used to convey meaning from one party to another. Meaning and knowledge exist primarily in the minds of the parties who are communicating. Actual words and sentences or expressions in natural language are knowledge artifacts rather than the actual knowledge and meaning itself. As carefully as we may try, analysis of natural language text can only approximate whatever meaning was intended by the initiator of the expression. So, in some sense, deciding on the unit for natural language text does not necessarily tell us the unit for meaning and knowledge in the human mind. Nonetheless, we need to start somewhere and the knowledge artifacts of natural language are a rich trove to start with.

I'll stop there for now. More thought is needed.

-- Jack Krupansky