Now that we have deconstructed the terms understanding and language, we can use the elements we have so roughly carved from the whole to discuss more clearly the functionality of the system we attempt. Equally as important, we wish to circumscribe this functionality, delineating not only those things that we expect it to do but, as well, to carefully define those things the system is not expected to do. In particular, we will not expect it to understand in the same way that humans understand; our system has no human sensors, no human sensibility, only the reflection of these qualities that is contained in the text itself.
What we do expect is that the system act as though it understands; that it be able to converse with a human in a way that reflects the understandings that humans have developed over time and set down in text, to respond in such a way that its conversation be essentially indistinguishable from the conversation of a human. This is a subtle but important distinction, and here we will attempt to make the difference between seeming to understand and actually understanding as clear as possible. We attempt to do this as we outline and explain the functionality of the various elements of the system.
The diagram just below is intended both to delineate the major components of the system, Understanding Textual Language, as we now see them and, for clarity, to relate them to process and data elements in the previous diagrams. Secondarily, it is also intended to relate these elements to appropriate sections of the Design Notes on the collaboration site (authorization required) which contains the ongoing elaboration of the detailed design, our day-to-day “knowledge base” so to speak.
In developing the diagram shown here we have, in a sense, lobotomized the diagram Understanding and Language by cutting away the portions which we do not intend to deal with directly in this project. The diagram here shows what remains. Note in particular:
· The names and acronyms of the several major components are shown, using the terms generally employed in the Design Notes for the project. The element numbers of the two process elements shown in previous diagrams remain the same so that they can be related to their analogs in those diagrams.
· The communication among system elements has been generalized by representing their connection with a bus rather than with directed arrows as before. This change, rather than reflecting a more perfect design, only emphasizes our ignorance at this early stage of precisely how each component is to communicate with the others.
· LexNet, the “word” or lexical portion of the memory network data element in the Understanding diagram, is now neatly excised from its conjoined twin, the sub-element which we have called “deep memory” in the previous diagrams, and here LexNet stands alone. The “deep memory”, that is to say the “sense-based” content of our memories, the element shown in earlier diagrams and explained in the section Understanding, is now assumed to have been implicitly embedded (however imperfectly) in the text itself that is to be read from the Internet, represented in the diagram above as a cloud element. As people create text (or speak) they attempt to construct phrases from their lexicon, phrases that best represent and encapsulate the complex sense-based thoughts in their deep memory; it is in this fashion that the text itself represents the author’s genuine understanding, that which is otherwise inexpressibly, even inchoately, embedded in one’s deep memory.
· Some of the elements of the language portion of the Understanding and Language diagram, those having to do with speech comprehension and production, are not included; Understanding Textual Language is the nature of our project. This excision is a great simplifier since we are then not required to deal with: detecting words from the often imprecise flow of generated sound, distinguishing among pronunciations, the variability of accents, and the many other difficulties of spoken language.
We do not mean to imply through this diagram that these elements, those which remain here, are enumerated and delineated once and for all time; they are not. This diagram, Understanding Textual Language, as it stands here, represents a starting point; its elements may need to be supplemented, restructured, or completely replaced as we gain new insights, encounter difficulties, or otherwise find more effective processes and data structures. They are, properly considered, a beginning, a direction, a terminology. Everything else is in flux. Be forewarned: “Here be dragons.”
Now we will review each of the elements shown above to explain their general functionality and to relate them to the Design Notes, where they will gradually be further elaborated; it is a work in progress. In this review, we will attempt to outline the major design issues and approaches that might usefully be taken to resolve them, and in the process we will attempt to elaborate the subtle distinction between understanding words and understanding deep memory as a human would do. We begin with LexNet since that element is in some sense the core of the system.
The vast bulk of Internet text is written in English and this is the natural language of the collaborators. This points initially—if only in the interest of simplicity—to English as our primary focus. It seems prudent now not to attempt to generalize the system to handle any language whatsoever, though certainly the fundamentals of semantics, with which we will deal, are similar in every language. This inclines us to think that, though for now we choose English, the basic notions of the memory network should, with some effort, be extensible to any language.
The term LexNet has been adopted here and in the Design Notes because, internally, we see it consisting of nodes containing English words, which is to say the English lexicon, and the various relationships between those words and between word fragments (word roots and their appropriate affixes). The development and organization of this network is a major research area because it is central to every aspect of the project.
In life, children learn language in a variety of ways using a variety of methods. Obviously, these methods are not generally available to us in this project; in particular the system does not have the senses of a child. Nor does it have an inherent inclination towards mimicry, a feature quite natural to children’s (and many other animals’) learning. Since we are not able to learn as children do, we seem to be free to develop an equivalent methodology. (As was mentioned before, and looking at it optimistically: in the same way that modern airplanes do not flap their wings in order to fly, so the “unnatural” methodology that may be developed here may prove a net benefit.) One way to do this trick is to provide LexNet with at least the beginnings of a vocabulary right out of the gate, so to speak. In constructing this data structure there are a number of possibilities:
At one extreme is simply to endow LexNet initially with the entire English dictionary; after all, it is not as though that is a task that has not been done before, nor is it one that takes up a great deal of memory. At the other extreme is simply to read text from the Internet and get new words from there, gradually building up the lexicon. As with most extremes, each of these has serious liabilities.
In the first case, that of constructing the dictionary outright, the liability is not so much the initial effort (though that would be considerable because, unlike normal dictionaries, the myriad relationships that connect words, and the categorizations which classify them into families, would certainly take significant effort to develop initially). There is a more serious problem with this choice: if this route is chosen, it implies that the dictionary either remains static (and eventually becomes out of date) or constantly needs to be manually updated. And then, if we wish to solve this problem of constant maintenance, we need somehow to be able to pick up new words automatically, including:
· Acronyms, slang and jargon
· Nouns that have been modernly confounded into verbs (“she googled his name”)
· Compound words hyphenated into modifiers (“multi-layered”) which, still later, have simply lost the hyphen and been combined into a single word (“multilayered”)
· Brand names that have been commandeered into categories ("Frigidaire" into “fridge” and, for a time, “Xerox”, the brand, into “xerox”, the verb)
And these are only a sampling; language is highly mutable. So then, if we must find a way to manage these things automatically anyway, why not build the lexicon in this second way in the first place, by simply reading? The reason this might be extremely difficult is that we have nothing to build on; the system has no visual, pictorial, or pointing cues supplied by a mother or a teacher:
See the bunny. See it hop. It’s a rabbit.
With this aid, the child’s senses, immediately and at the appropriate age, and seemingly without effort, indicate to the child the nature of the thing itself (even if, at first, the genuine deep-memory or tactile memory of the words themselves is not clear). Yet, without equivalent help, which is not available, the system will not automatically know that “bunny” and “rabbit” are objects, and that “hop” is an action. Neither will it have any notion of the function or semantics of the determiners “the” and “a”, or that the variable reference “It’s” refers to the previous object-referent, the bunny. While it is not impossible that this might be accomplished, it seems a very difficult and thorny path indeed.
In both of the two extreme approaches there remains the job of abstraction and categorization to be achieved somehow: that a rabbit is an instance of a four-legged animal, that it is a mammal, that it shares many qualities with a squirrel (four legs, a tail, fur, similar size; both hop and live in the woods, …), and on and on.
There is a third alternative which has considerable attraction and that is to blend the two approaches by providing what could be termed a “starter kit” of language, which, in addition to a significant number of words and their dominant relationships, might include a certain amount of categorization and abstraction. This initial foundation could then be built up or elaborated upon as pages are read by Reader. With the aid of this starter kit, we have clues to certain fundamental properties of the words, such as whether they are an object or an action or modifier of some sort. With this knowledge we will then be able to utilize our knowledge of the construction of sentences and the general arrangement of subject-action-object phraseology that is contained in the data structure SynPat, a data element which is elaborated in a somewhat cursory fashion below.
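To make the starter-kit idea concrete, it might begin as nothing more than a small table mapping words to their fundamental word-class, plus the beginnings of an abstraction chain. The sketch below is a minimal illustration only; the entries, field names, and classes are hypothetical placeholders, not a proposed design:

```python
# A minimal, hypothetical "starter kit": a few words tagged with their
# fundamental class, plus a sketch of an instance-of (IO) abstraction link.
STARTER_KIT = {
    "rabbit": {"class": "object", "io": "animal"},
    "animal": {"class": "object", "io": "object"},
    "hop":    {"class": "action"},
    "silly":  {"class": "modifier"},
    "the":    {"class": "specifier"},
}

def word_class(word):
    """Return the fundamental class of a word, or None if not yet known."""
    entry = STARTER_KIT.get(word)
    return entry["class"] if entry else None

print(word_class("rabbit"))  # object
print(word_class("hop"))     # action
print(word_class("google"))  # None -- a word to be learned later by Reader
```

Even a table this crude gives Reader the clue it needs: knowing that "rabbit" is an object and "hop" an action is precisely the footing that the bare-reading approach lacks.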
Naturally we are not the first to face the challenge of a lexical network. This basic element, ordinarily represented in the form of a network graph, has traditionally been termed a semantic network. Research on semantic networks began in the 1950s. The link here provides a useful overview of the topic and leads, through a number of links in the article, to a considerable body of knowledge on the thoughts, the graph types and the methodologies that have already been attempted.
As we inspect this previous work though, we ought to keep in the forefront of our minds that we have advantages unavailable to earlier researchers. In particular, we have the Internet and its vast store of formal and informal text and ways of accessing it in all of its complexity and mutability and, perhaps one should add, its dirtiness—the Internet is full of things that are poorly punctuated, misspelled, mis-worded, transposed, improperly capitalized and otherwise grammatically messy. Nevertheless this abundance of text ought to permit us to take a more “quantum” approach to semantic networks than has previously been attempted.
By the use of the term quantum in this context is meant, at the same time, both a more statistical approach and also one in which a link or a relationship is not simply a connection between two nodes, but a more complex association that has a number of qualities, some that are static and some that change dynamically depending upon the context of the text being read. This concept, more important than it might at first seem, is elaborated to some extent below, but it is a work in progress.
Our semantic network, LexNet, in the form of a graphical network, consists of two components: nodes and relationships. In the figure just below one sees nodes containing words or word fragments—word roots and their related bound morphemes—and relationships, the directed arrows, that connect them. Additional information, or properties, are contained in each of these entities but are not shown in this simple illustration.
Since the English lexicon is so large only fragments of it, used to illustrate a particular situation or issue, can be graphed at one time. Consider this network fragment (we use the term FragNet). This diagram is not complete, either in illustrating all of the relationships that might exist between words or in showing the properties that these two data elements contain. Its intent is merely to illustrate several important characteristics of the network graph as a prelude to a more detailed explanation and to highlighting some of the major issues that require resolution. The following subsections discuss the two elements of the graph and highlight the major issues involved.
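As a rough sketch of the two data elements, the FragNet centered on kiss can be represented as a set of nodes and a list of typed, directed relationships. The relationship type names below ("SUF" for takes-suffix, "MD" for modifies) are illustrative placeholders only, not the types actually catalogued in the Design Notes:

```python
# Nodes keyed by their text; relationships as (source, type, target) triples.
# "SUF" (takes-suffix) and "MD" (modifies) are illustrative type names only.
nodes = {"kiss", "ing", "ed", "es", "friend", "ly", "friendly"}
rels = [
    ("kiss", "SUF", "ing"),      # kissing
    ("kiss", "SUF", "ed"),       # kissed
    ("kiss", "SUF", "es"),       # kisses
    ("friend", "SUF", "ly"),     # friendly
    ("friendly", "MD", "kiss"),  # a "friendly kiss"
]

def related(node, rel_type):
    """All nodes that `node` points to through relationships of `rel_type`."""
    return [t for (s, r, t) in rels if s == node and r == rel_type]

print(related("kiss", "SUF"))  # ['ing', 'ed', 'es']
```

Note that the properties discussed below (strength in particular) would hang off each triple; they are omitted here just as they are omitted from the diagram.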
Nodes are data elements that contain words or word fragments. Consider the node kiss around which the FragNet above is centered. There are two particularly important issues involved here:
· The first is that the word kiss, standing by itself, can be either an object or an action (a noun or a verb—the reason for this distinction in nomenclature, object instead of noun and action in lieu of verb, is explained in the Design Notes). Each of its suffixes changes the word kiss in some way that distinguishes one of its uses. This function is typical of affixes, which can change the meaning of the associated word or root in a number of ways. And, of course, these same affixes are associated through relationships with many other words or word roots not shown in this diagram, so that the number of relationships pointing to the node ing would likely be in the thousands.
· Secondly, each of the other nodes that connect to the kiss node, or which chain to it, such as friend→ly, those that are not affixes or other inflections, imbue the central word, or word root with some property or characteristic that the word can sometimes appropriately take on. (Yielding, in this case, a “friendly kiss”).
So, not only does the inflection make clear the particular function of the central node when taken together with it (whether object or action) but, as well, it may define the time aspect (the tense) of the term, as well as other characteristics, such as possession. For example, in a text containing “…silly kissing…”, silly is a modifier and ing is a suffix; in this particular example it forms a present participle that tells us that the activity is, or was, ongoing (“Yesterday the two little girls were in the garage practicing some silly kissing.”)
The notion of defining nodes in this fashion, where inflections and other word fragments are separate and independent nodes themselves and are combined with a great many other nodes through relationships of one sort or another has pluses and minuses:
· On the plus side, it is very economical: one need not repeat in the network each inflection of potentially thousands of words or roots that can take on numerous forms. There are not many affixes, all in all, so entering them once and simply associating them with all appropriate roots or words is an advantage if, as now seems likely, relationships themselves are much smaller (in terms of memory space) than nodes. Another plus is that a modifier node, like silly, need be associated only with appropriate root words and not directly with each of their combinations (inflections or other constructions) as would be the case if word roots, with each of their inflections, were each stored separately as a node.
· On the minus side, there are other ways of handling such elements as inflections that may in some cases prove more economical: a separate data table simply for inflections comes to mind. But then one must develop definitive rules for determining which words or roots can legitimately take which inflections. This economy is particularly obvious with the plurals, s or es, which can be used in combination with almost every object or action node thus requiring a great many relationships, whereas if a simple "plural rule" could be developed all these relationships would be unnecessary.
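The "plural rule" contemplated in the second bullet could, as a first approximation, be a single function rather than thousands of stored relationships. The sketch below implements only the common s/es pattern; irregular plurals (child/children, mouse/mice) would still require stored nodes, which is part of why the rule-versus-relationship trade-off remains open:

```python
def plural(word):
    """Naive English plural rule: 'es' after sibilant-like endings, else 's'.
    Irregular plurals are deliberately ignored in this sketch."""
    if word.endswith(("s", "x", "z", "ch", "sh")):
        return word + "es"
    return word + "s"

print(plural("kiss"))    # kisses
print(plural("fox"))     # foxes
print(plural("rabbit"))  # rabbits
```

One ten-line function in place of a relationship on nearly every object node illustrates the economy at stake.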
These are only some of the issues concerning nodes that need to be resolved. Here are more:
· There are some 750,000 headwords in the Oxford English Dictionary, and 150,000 headwords in the online dictionary, WordWeb. And probably neither of these includes many of the modern pseudo-words that might be expected to be encountered in personal or ad hoc writing on the Internet, including misspellings, typographic errors and so forth. The average vocabulary of an adult English-speaking person is said to be approximately 50,000 words. What should be the size of the initial starter kit, if there is one?
· Instead of separating word roots from their inflections and other endings, nodes could be treated as headwords and have list properties that, like a dictionary, define all the various word-classes that a particular node can take on. For example, the node direct can be both an action, “George, you may now direct the orchestra”, and a modifier, such as an adjective or adverb: “He took the indirect route, but she went directly to the office.”
This is by no means a complete list of issues concerning nodes, but for now, since this is only an overview, we will move on to some of the other characteristics of LexNet.
Relationships are data elements that relate one node to another. (We often use the simpler term rel for the term relationship.)
Having used the word “quantum” above in relation to the network, we ought to explain more fully just what it is that is meant. Essentially, it is that in our current concept of the network, the relationships between nodes are rather more complex than that two nodes are simply related in some fashion to each other. A rel in LexNet indicates not merely that a relationship exists between the nodes it connects, but also that a particular kind (or kinds) of relationship exists. This link provides an initial look at these types and their meanings, those that are now contemplated (there are undoubtedly many more). And because of the extensive textual data that is available on the Internet we suppose that the very arrangement of words in a phrase or sentence has significance when a certain relative positioning occurs frequently in text. For example, if we very frequently encounter the close—perhaps simply consecutive—relationship between, for example, the words red and grape, over time, and we also know that red is a modifier and grape is an object, we would come to think that these two words have a closer relationship to each other than mere chance might indicate. So that in addition to particular sorts of relationships, each relationship also has a strength value. Each of these characteristics of relationships is explained more fully below.
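The statistical side of this idea reduces, in its crudest form, to counting consecutive word pairs in text read: frequent adjacency of red and grape hints at a real relationship. The sketch below is our own placeholder simplification; using raw bigram counts as "strength" ignores relationship types entirely:

```python
from collections import Counter

def bigram_strengths(text):
    """Count consecutive word pairs; frequency serves as a crude first
    approximation of the strength of a relationship between two words."""
    words = text.lower().split()
    return Counter(zip(words, words[1:]))

sample = "red grape and red grape and green grape"
strengths = bigram_strengths(sample)
print(strengths[("red", "grape")])    # 2
print(strengths[("green", "grape")])  # 1
```

Over a large corpus, the pairs whose counts greatly exceed chance are the candidates for typed, strengthened relationships in LexNet.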
It is interesting to note that in a very large network graph, such as that for the English language, most relationship types form true “network” structures, meaning essentially peer-to-peer relationships. Yet, curiously, some types of relationships actually form hierarchies within the more general peer-to-peer network itself. As an example, consider (in the list of relationship types linked-to above) the relationship type IO, Instance Of (sometimes termed an “IS A” relationship in traditional semantic network research):
Lassie is an instance of a dog, which is an instance of a mammal, which is an instance of an animal, which is an instance of an object. The PO, or Part Of, relationship also has a hierarchical characteristic: a tail can be a part of a dog, as can a nose, fur, a leg and so forth (for now never mind that a leg can also be a part of a chair or a table). Furthermore, the two hierarchic types can intersect: a leg is a PO, or Part Of, a chair, while a chair is an IO, or Instance Of, furniture, which is an instance of furnishings, which is an instance of goods, and so on up to the more and more general.
The key quality of a hierarchy is that each node in it, except its root node, has one, and only one, parent node. Thus hierarchies can be very quickly traced upwards from any node within the hierarchy by following its parentage all the way to its root node, never mind that the hierarchy is embedded in the entire network, not all of which is by any means hierarchically organized.
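Because every node in an IO hierarchy has exactly one parent, tracing to the root is a simple upward walk. A sketch using the example chain above (the parent table is of course a stand-in for the IO rels stored in LexNet):

```python
# Each node's single IO ("Instance Of") parent; "object" is the root.
IO_PARENT = {
    "Lassie": "dog",
    "dog": "mammal",
    "mammal": "animal",
    "animal": "object",
}

def trace_to_root(node):
    """Follow the one-and-only-one parent links up to the hierarchy root."""
    chain = [node]
    while chain[-1] in IO_PARENT:
        chain.append(IO_PARENT[chain[-1]])
    return chain

print(trace_to_root("Lassie"))  # ['Lassie', 'dog', 'mammal', 'animal', 'object']
```

The single-parent property is what makes the walk terminate quickly and unambiguously, no matter how densely the surrounding peer-to-peer network is connected.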
Hierarchies in no way preclude other, non-hierarchical types of relationships within and among them; in fact, they enhance them.
Consider the figure to the right: Fido, Rover, Lassie, and all other instances of dog inherit modifier properties hierarchically: the property furry from dog, warm from mammal, and their cold noses indirectly, through the MD rel on nose together with the nose PO, or Part Of, relationship to dog.
Of course there are a great many properties associated with all of the nodes shown here that, for clarity, are not illustrated. Neither are the directional arrows. The key point we wish to make here is simply that the quality of heritability is quite powerful and that it results in significant economies.
The characteristic of heritability is not unlike that of class-based programming languages, where properties and even functions are heritable. Modifiers and some other sorts of relationships can, in fact, be considered as functions. For example, a relationship to an ing node functions in such a way as to endow a number of qualities on the receiving node, including tense. And this is only one illustration of a general capability; there are many others.
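The analogy to class-based languages can be made literal. In the sketch below, which mirrors the Fido/Lassie figure, each class contributes its modifier properties and every instance of Dog inherits them all; the property names stand in for the MD and PO relationships of the figure and are purely illustrative:

```python
class Mammal:
    warm = True          # inherited by every class below

class Dog(Mammal):
    furry = True         # dogs add their own modifier property
    cold_nose = True     # stands in for the nose PO / MD chain in the figure

lassie = Dog()
print(lassie.warm, lassie.furry, lassie.cold_nose)  # True True True
```

One declaration per level, inherited by every instance below: the same economy the network achieves by attaching a modifier once, high in the hierarchy.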
We know that certain things become more firmly implanted in the human memory than others; we know that certain memories fade over time, some more quickly than others. We know that repetition enhances memory. And we know that some relationships, those of high significance, rarely or never fade, some seemingly “burned” nearly forever in our brains. From this it seems likely that the potency or strength of a relationship between nodes depends not only upon the mere awareness of some relationship but, as well, on other, more mutable, factors:
The circumstances or situation—or context, a term we will use frequently—under which a relationship between two words is initially instantiated indicates significance as, of course, does the frequency with which each similar relationship is encountered in text read later, after its first instantiation. All this points toward something of a statistical or tabular approach to the development and maintenance of, and the techniques of “traveling through”, a semantic network. In LexNet, in support of this realization, a relationship has not only a type but a strength. The strength property of a relationship has two variations:
The first variation of this quality is a relatively static strength, one that is contained semi-permanently in the memory store itself. It is determined at first by the context in which the relationship is initially instantiated. It is then gradually modified—enhanced—by the frequency with which that relationship is encountered in text read, or periodically degraded if it has not been encountered for some preset time period:
If the words “red grape” are frequently encountered consecutively this would enhance the “static” strength of the relationship between them. Thus it is a slowly changing strength, but not completely static: periodically one of the functional elements of the system degrades these strengths throughout the network by a small preset amount, and it is only as more text is read and the same relationship encountered that its strength is built back up. Thus if it is very frequently encountered it will continue to increase in strength because the decrement that the function imposes upon it is relatively small. This is meant to emulate the way in which memory fades naturally through disuse.
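These two opposing pressures, reinforcement on encounter and periodic decay, might be sketched as follows. The increment and decrement values are arbitrary placeholders; tuning them is exactly the open design question:

```python
REINFORCE = 10    # added each time the relationship is encountered in text
DECAY = 1         # subtracted by each periodic decay pass (placeholder values)

def reinforce(strengths, rel):
    """Encountering a relationship in text builds its static strength up."""
    strengths[rel] = strengths.get(rel, 0) + REINFORCE

def decay_all(strengths):
    """The periodic pass: every strength fades a little, floored at zero."""
    for rel in strengths:
        strengths[rel] = max(0, strengths[rel] - DECAY)

s = {}
reinforce(s, ("red", "grape"))
reinforce(s, ("red", "grape"))
reinforce(s, ("silly", "kiss"))
for _ in range(12):          # many decay passes with no further encounters
    decay_all(s)
print(s[("red", "grape")])   # 8  -- encountered often, still strong
print(s[("silly", "kiss")])  # 0  -- faded away through disuse
```

Because the decrement is small relative to the increment, frequently encountered relationships keep growing while neglected ones drift to zero, emulating memory fading through disuse.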
In addition, each relationship has a dynamic strength property that is a function of the static property, which is to say a temporary multiplier or enhancer. This dynamic strength depends upon the current context of the text being read. This can operate in a number of subtle ways in order that tracing the correct path through LexNet, one that makes sense, is facilitated by the general context, the gist, of the text currently being read. In this manner different paths may be favored when reading texts with different contexts.
For example, to highlight this dynamic strength concept somewhat extravagantly: suppose a text contains, “… the knife approached his throat…”, from which phrase it is reasonable to assume that danger is involved and that fear tensions are heightened, as might happen, were this a film in a movie theater, as people began to shift nervously in their seats. Just how this nuanced appreciation of the words is to be detected is an interesting puzzle, but one that does not seem insurmountable. Suppose that it is detected; what is its effect?
From the nodes knife, approach, and throat, it ought to be possible to locate an IO path up to the fear node which, since it is a category (IO relationships emanate from it), allows us then to quickly trace all of the relationships emanating from it throughout the network, dynamically and temporarily enhancing their static strengths by some multiple, at least until the context changes, thus enhancing their significance within the current context, and later relaxing them when the context subsequently changes. Since this multiplier effect is dynamic it is not stored permanently in the network. It is solely a function of the text currently being read.
Let us discuss a second instance of this important dynamic strengthening and of the quantum aspects through which LexNet is traced in search of meaning. Suppose that the text is explaining the pharmaceutical qualities of a certain drug; the myriad relationships in the network that flow to the node pharma will then have their ordinary static strengths temporarily enhanced, but only for the duration of that article or story. This will have the effect of “greasing” those paths, but only for the duration of that reading, after which they fall back to their normal semi-static strengths, based essentially on the power of their initial instantiation and the frequency of their subsequent encounters.
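The essential point, that the dynamic strength is computed on the fly and never written back, can be sketched in a few lines. The node names, the multiplier, and the notion of a single active context are all placeholder simplifications:

```python
# Semi-permanent static strengths (illustrative values and node pairs only).
STATIC = {("knife", "fear"): 3, ("pharma", "dose"): 2, ("red", "grape"): 5}

def effective_strength(rel, active_context, boost=4):
    """Static strength, temporarily multiplied while a relevant context
    is active; nothing computed here is stored back into the network."""
    s = STATIC[rel]
    if active_context is not None and active_context in rel:
        return s * boost
    return s

print(effective_strength(("knife", "fear"), active_context="fear"))  # 12
print(effective_strength(("knife", "fear"), active_context=None))    # 3
```

When the reading moves on and the context changes, the same lookup simply stops applying the multiplier; the "greased" paths relax to their static values without any cleanup pass.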
In the Understanding and Language diagram, this element corresponds to process 7: Reading comprehension. Here, to more accurately reflect its function in the project, we style it simply as Reader: Text Comprehension. Its primary function is to read text on the Internet and to understand it. The key question is precisely what is meant here when we write of understanding.
At its most simple, understanding means only to parse a particular text into its textual elements: paragraphs, sentences, phrases, words, and to identify those words as elements of syntax: which words are objects, which actions and which are modifiers and specifiers of one sort or another? And there are other aspects to parsing text at this level which include: segregating major elements of text as may be indicated by titles, headlines and chapter headings, by resolving contextual variables (to whom does the term “he” refer when encountered in text); inferring unwritten (elided) punctuation, and other similar difficulties of what might be called general parsing or syntactical resolution, a sort of sentence diagramming. None of this is simple yet neither should it prove extraordinarily difficult. But there is a much more subtle, and therefore significantly more difficult, meaning when we write here of understanding:
At its most subtle, and here we reflect upon and restate what has been delved into before: we have spent considerable effort attempting to describe both Understanding and Language, as well as to comprehend how they relate to one another. At this more subtle level the logical, elemental connection point between understanding and language is the data element that we have forged, the Memory Network, and we have symbolically pulled this particular data element apart by separating it into two sub-elements: the semantic network, what we here call the Lexicon or LexNet, and the remaining portion of the element, what we have previously termed “deep memory”, which is to say the memory of sensations associated with objects, plans, scenarios and actions learned early in life, even before speech, and enhanced as experience deepens.
Furthermore, to continue this recapitulation: since the speaker—text-writer in our case—chooses words from his lexicon in such a way as to best reflect the more subtle sense-based, complex thoughts in his deep memory, we may say that the text itself, the words from the writer’s lexicon, written in a specific order, one which conforms to some recognized form of syntax, reflects the writer’s understanding directly, as well as the author is able to describe it in words. And these words and phrases can then be transmitted to readers (undoubtedly in imperfect, but nevertheless sufficient, form), thus triggering in some fashion the reader’s equivalent deep memories and transmitting in this way, from one to another, what we think of generally as understanding.
Consider this three-step process of communication in language:
· Encoding one’s deep memories into words from one’s lexicon;
· Transmitting those words to another and then,
· Decoding, by that other person, of those words into his own deep memories where, to an extent unknowable, they reflect the originator’s deep memories.
To provide a simpler yet very similar process for comparison: this process of communication strongly resembles a sort of “Morse code” of language at a different level, wherein words themselves (a document) are encoded by one person into a simpler form, here dots and dashes, which are then transmitted using a mechanism; the dots and dashes are then heard and decoded back into text by another person. Consider the diagram below:
We would not say that the instruments used in this process, the telegraph keys and the wires connecting them, understand the words being transmitted in code. The telegraphers themselves understand the words, which they encode or decode, but they may not actually understand the messages that they routinely transmit, since they may or may not be aware of the context of the documents they transmit for others; if the document is, say, a complex contract or a highly technical description, they are likely to be completely unaware of the sense of what is being communicated.
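To make the analogy concrete, here is a toy Morse encoder and decoder; the code table is only a four-letter fragment for illustration. The functions round-trip the message perfectly while "understanding" nothing of it, which is precisely the telegrapher's position:

```python
MORSE = {"S": "...", "O": "---", "E": ".", "T": "-"}  # tiny illustrative table
REVERSE = {code: letter for letter, code in MORSE.items()}

def encode(word):
    """Letters to dots and dashes, separated by spaces."""
    return " ".join(MORSE[ch] for ch in word)

def decode(signal):
    """Dots and dashes back to letters."""
    return "".join(REVERSE[token] for token in signal.split())

msg = encode("SOS")
print(msg)          # ... --- ...
print(decode(msg))  # SOS
```

The encoding is faithful and reversible, yet at no point does any meaning pass through the functions themselves; meaning resides entirely at the two human ends of the wire.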
In the same fashion we cannot say that our “system” understands the deep meanings being encoded into words, in a particular syntax, then transmitted to another through writing or text, and then decoded by the readers into their own deep memories.
Secondary to the actual reading itself, but not significantly less important for that, is an additional task of Reader: continual enhancement and expansion of LexNet as a result of text that has been read. It is intended to be done in a number of ways:
· Adding new words to LexNet—those words that are not yet part of its lexicon—as they are encountered in text read, and in some fashion keeping account of the frequency with which specific words are used.
· Adding new relationships to LexNet on the basis of text read, this to be accomplished through recognition of positional relationships between words read. For example, in the phrase “as the ball rolled down”, and understanding that a ball is an object, it is tentatively to be supposed that roll, an action, is something that a ball does or can do; that is to say, that there is a relationship of a certain type between the two nodes. If this relationship already exists it is to be strengthened; if it does not exist it is to be created and tentatively given a minimal “strength”, the value of which will be incremented if it is encountered frequently and periodically decremented if it is not. Furthermore, there is a relationship between rolling and down that is likewise to be added or modified.
· Periodically and arbitrarily decrementing the “strengths” of relationships on a routine basis so that eventually the strength of the rel goes to zero unless encountered in text frequently enough to compensate.
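The first two of these tasks might be sketched together as a single update pass over each phrase read; the use of simple adjacency as the positional cue, and raw counts as strengths, are placeholder simplifications of the typed, decaying relationships described above:

```python
WORD_FREQ = {}      # how often each word has been seen
REL_STRENGTH = {}   # crude strengths keyed by consecutive word pairs

def read_phrase(words):
    """One Reader pass: count each word, and create or strengthen a
    relationship between each consecutive pair (a crude positional cue)."""
    for w in words:
        WORD_FREQ[w] = WORD_FREQ.get(w, 0) + 1
    for pair in zip(words, words[1:]):
        REL_STRENGTH[pair] = REL_STRENGTH.get(pair, 0) + 1

read_phrase(["the", "ball", "rolled", "down"])
read_phrase(["the", "ball", "rolled", "away"])
print(WORD_FREQ["ball"])                 # 2
print(REL_STRENGTH[("ball", "rolled")])  # 2
print(REL_STRENGTH[("rolled", "down")])  # 1
```

The third task, periodic decrementing, would then be a separate decay pass over REL_STRENGTH, zeroing out pairs not encountered frequently enough to compensate.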
This process diagram is shown to the left: Reading the myriad documents from the Internet, the project we develop, Understanding Textual Language, is intended to “digest” their meaning in the sense that, using LexNet, it attempts to understand the thrust of the words that millions of authors have created. Here, the term understanding is used to mean that the system not only understands the words themselves but understands the sense of the words, how they are related to other words, and more. It is important to keep in mind that in performing this process the system does not understand the authors’ deep memories; instead it attempts to enhance and inform LexNet in such a way that it will be able to reply intelligently and meaningfully, in a conversational manner, to users who wish to know something.
The argument over whether syntax patterns are, in some way, fundamentally built into humans’ “wetware”, or whether they are simply some ability or convention that we “pick up” through attention to natural rhythmic patterns and correction of mistakes, is an old one. Fortunately, since we are not required to copy human methodologies, we need not take sides concerning this issue. Besides that, it seems relatively simple, in comparison to other tasks we must accomplish, for us to build in the capability of recognizing proper syntax, and thus comprehending meaning, when reading text, even when it may not be perfectly expressed and, as well, to use these proper syntax patterns when making conversation with the user using Writer.
At this time the feeling that this task would be relatively simple is only an instinctive judgment, since almost no work has been done in this area. The requirements of this data element are to assist in deciphering the meaning of phrases and sentences. This of course involves, among other things, the comprehension of such fundamentals of phraseology as subject, action and object.
There are numerous rules for the proper formation of phrases and sentences and for their punctuation. Unfortunately, the rules of punctuation, and more recently the slow abandonment of much of it in favor of the emulation of speech and the reliance of the author upon the reader to sort it all out intuitively, will make this all the more difficult.
In spite of the thinness of the explanation of this section, at this point in time it is all we have. It is for the reader to fill in the gaps as more detail is developed.
This section has had little or no thought put into it as yet. All that can be understood at this point is that its function is to converse with the user. In performing this task it will of course use LexNet as a source of words that in some fashion make sense.