knowledge-language
senso-concept-Mcs (lagKnlg)

McsHitp-creation:: {2023-10-23}

overview of lagKnlg

description::
· knowledge-language is a-computer-language that represents and manages conceptual-systems from mind-views or logo-views.

name::
* McsEngl.McsTchInf000039.last.html//dirTchInf//dirMcs!⇒lagKnlg,
* McsEngl.dirTchInf/McsTchInf000039.last.html!⇒lagKnlg,
* McsEngl.knowledge-language!⇒lagKnlg,
* McsEngl.lagKnlg!=McsTchInf000039,
* McsEngl.lagKnlg!=KNowLedGe-language,
* McsEngl.lagKnowledge!⇒lagKnlg,
====== langoGreek:
* McsElln.γλώσσα-γνώσης!=lagKnlg,

01_user of lagKnlg

description::
· a-human or technology that uses the-language.

name::
* McsEngl.lagKnlg'user,

02_input of lagKnlg

description::
· mind-view or logo-view.

name::
* McsEngl.lagKnlg'input,

03_output of lagKnlg

description::
· "meaning-view" understanable by tech.

· knowledge must-be accurate, consistent, and complete.

name::
* McsEngl.lagKnlg'output,

04_manager of lagKnlg

description::
· the-tech that creates and understands the-knowledge-base.

name::
* McsEngl.KBS!=knowledge-based-system!⇒Knlgmngr,
* McsEngl.KRRS!=knowledge-representation-and-reasoning-system!⇒Knlgmngr,
* McsEngl.KRS!=knowledge-representation-system!⇒Knlgmngr,
* McsEngl.Klmr!=knowledge-language--manager!⇒Knlgmngr,
* McsEngl.Knlglmgr!⇒Knlgmngr,
* McsEngl.Knlgmgr!⇒Knlgmngr,
* McsEngl.Knlgmngr!=knowledge-language--manager, {2023-11-01},
* McsEngl.app.knowledge-manager!⇒Knlgmngr,
* McsEngl.knowledge-based-system!⇒Knlgmngr,
* McsEngl.knowledge-management-system!⇒Knlgmngr,
* McsEngl.knowledge-manipulation-system!⇒Knlgmngr,
* McsEngl.knowledge-reasoning-system!⇒Knlgmngr,
* McsEngl.knowledge-representation-and-reasoning-system!⇒Knlgmngr,
* McsEngl.knowledge-representation-system!⇒Knlgmngr,
* McsEngl.knowledge-system!⇒Knlgmngr,
* McsEngl.knowledgeBased-system!⇒Knlgmngr,
* McsEngl.lagKnlg'manager!⇒Knlgmngr,

user of Knlgmngr

description::
* author,
* reader,

name::
* McsEngl.Knlgmngr'user,

knowledge-language of Knlgmngr

description::
· the-knowledge-language it uses to create its knowledge-base.

name::
* McsEngl.Knlgmngr'knowledge-language,

knowledge-base of Knlgmngr

description::
· the-set of outputs of applying a-knowledge-language to human-knowledge.

name::
* McsEngl.Knlgbase,
* McsEngl.Knlgmngr'knowledge-base!⇒Knlgbase,
* McsEngl.knowledge-base-of-Knlgmngr!⇒Knlgbase,

consistency of Knlgbase

description::
· the-knowledge-base must-have the-fewest possible contradictions and gaps, that-is:
* accurate knowledge,
* consistent knowledge,
* complete knowledge,

name::
* McsEngl.Knlgbase'consistency,

multiview of Knlgbase

description::
· our knowledge is a-model of the-world\a\.
· the-world is always more complex than our views of it\a\.
· then, the-Knlgbase must contain many views of our world and not only the-mainstream.

name::
* McsEngl.Knlgbase'multiview,

relation-to-ontology of Knlgbase

description::
· an-ontology is a-knowledge-base, but there-are no clear criteria for what counts-as an-ontology.

"ontology vs knowledge-base:
Ontology and knowledge base are related concepts in the field of information science and artificial intelligence, but they refer to different aspects of organizing and representing information.

1. **Ontology:**
- **Definition:** An ontology is a formal representation of knowledge that defines the concepts within a domain and the relationships between those concepts. It provides a shared understanding of a particular domain, specifying the entities, their attributes, and the relationships between them.
- **Purpose:** Ontologies are used to enable better communication and understanding between people or systems. They help to structure information in a way that is both human-readable and machine-understandable.
- **Example:** In a medical ontology, you might have concepts like "disease," "symptom," "treatment," and relationships like "causes," "treats," etc.

2. **Knowledge Base:**
- **Definition:** A knowledge base is a repository of information that is used for problem-solving and decision-making. It is a collection of facts, rules, and relationships that represent knowledge in a specific domain.
- **Purpose:** Knowledge bases store and organize information in a way that makes it accessible for computational processes. They are often used in expert systems, artificial intelligence applications, and other systems that require a structured storage of knowledge.
- **Example:** In a medical knowledge base, you might have specific instances of diseases, symptoms, and treatments, along with rules that describe how certain symptoms are associated with particular diseases.

In summary, an ontology provides a conceptual framework for understanding a domain, specifying the entities and their relationships, while a knowledge base is a practical implementation that stores actual instances of data and information within a specific domain. The ontology might guide the structure of the knowledge base, helping to organize and categorize the information within it."
[{2023-11-14 retrieved} https://chat.openai.com/c/88a65bf3-da41-4a5a-9827-cb86527bb0ae]

name::
* McsEngl.Knlgbase'relation-to-ontology,
* McsEngl.kbOnto'relation-to-Knlgbase,

Knlgbase.SPECIFIC

description::
* ontology-Knlgbase,
===
* common-sense-Knlgbase,
===
* YAGO-Knlgbase,
* wikidata-Knlgbase,
* wordNet-Knlgbase,

name::
* McsEngl.Knlgbase.specific,

Knlgbase.common-sense

description::
"common sense knowledge-base:
A commonsense knowledge base (CSKB) is a collection of facts and rules that represent the general knowledge that humans have about the world. This knowledge includes things like:
* Objects and their properties (e.g., a chair has four legs and a back)
* Relationships between objects (e.g., a cat is an animal)
* Actions and their effects (e.g., if you drop a glass, it will break)
* Social norms and conventions (e.g., it is polite to say "please" and "thank you")

CSKBs are important for artificial intelligence (AI) systems because they allow them to reason about the world in a more human-like way. For example, a CSKB could be used to help an AI system to:
* Understand the meaning of a sentence, even if it contains incomplete or ambiguous information.
* Generate realistic and coherent text.
* Make inferences about the world, even if it has not been explicitly told about those inferences.
* Plan and execute actions in a way that is consistent with common sense.

CSKBs are still under development, but they are already being used in a variety of AI applications, such as:

* Natural language processing (NLP) systems that translate languages, generate text, and answer questions.
* Machine translation systems that translate text from one language to another.
* Computer vision systems that identify and classify objects in images.
* Robotics systems that navigate the world and interact with objects.
* AI assistants that can help people with tasks such as booking appointments, making travel arrangements, and finding information.

Some of the most well-known CSKBs include:
* ConceptNet
* Cyc
* Open Mind Common Sense
* WordNet

CSKBs are a critical component of building AI systems that can understand and reason about the world in a human-like way. As CSKBs continue to improve, they will enable AI systems to do even more amazing things."
[{2023-11-01 retrieved} https://bard.google.com/chat/5d2dd8602d380a50]

name::
* McsEngl.CSKB!=common-sense--knowledge-base,
* McsEngl.Knlgbase.common-sense,
* McsEngl.common-sense--knowledge-base,

specific-tree-of-CSKB::
* ConceptNet,
* Cyc,
* Open Mind Common Sense,
* ThoughtTreasure,
* WordNet,

hard-sys of Knlgmngr

description::
· computer or network used.

name::
* McsEngl.Knlgmngr'hard-sys,

soft-sys of Knlgmngr

description::
· any app of the-manager.

name::
* McsEngl.Knlgmngr'soft-sys,
* McsEngl.Knlgmngr'tool,

computer-language-used of Knlgmngr

description::
* knowledge-language:
** concept,
** logo,
* programming-language:
** java,
** lisp,
** prolog,
** smalltalk,

name::
* McsEngl.Knlgmngr'computer-language-used,

DOING of Knlgmngr

description::
* building-Knlgbase,
* retrieving-Knlgbase,
===
* building,
* computing,
* consistency-checking,
* evoluting-of-Knlgbase,
* formating,
* generic-specific-relations-management,
* inferencing,
* integrating,
* name-management,
* navigating,
* parent-child-relations-management,
* presenting,
* reasoning,
* questioning,
* retrieving,
* storing,
* translating,
* validating,
* whole-part-relations-management,

name::
* McsEngl.Klmrdoing!=doing-of--knowledge-manager,
* McsEngl.Knlgmngr'doing!⇒Klmrdoing,

doing.building-Knlgbase of Knlgmngr

description::
· building a-quality knowledge-base is very important.

name::
* McsEngl.Klmrdoing.building-Knlgbase,

acquiring-knowledge of Knlgmngr

description::
· the-method the-manager uses to gather info for its base.
· adding new concepts must-not-invalidate existing relations.
* directly by an-author.
* by collaboration.
* automatically from another machine.
* machine-learning techniques.
* natural-language-understanding.
* sensor support.

name::
* McsEngl.Klmrdoing.acquiring-knowledge,

collaborating of Knlgmngr

description::
· on building the-knowledge-base.

name::
* McsEngl.Klmrdoing.collaborating,

formal-informal-acquisition of Knlgmngr

description::
· our knowledge is an-open-system, and we cannot completely formalize it because of the-presence of the-'unknown'. Then a-KMS must-have formal and informal parts. [hmnSngo.2000-08-04]

name::
* McsEngl.Klmrdoing.formal-informal-acquisition,

changing-knowledge of Knlgmngr

description::
* adding,
* removing,
* modifying,
* updating,
* changing relations must-be-validated.
* the-evolution of the-base must-show the-changes.
* ...

name::
* McsEngl.Klmrdoing.changing-knowledge,

terminology-managing of Knlgmngr

description::
· bad-names, bad communication.
· names help us to communicate our info.
* the-manager must-support the-consistent|inconsistent name-managing-mechanisms of the-natural-languages of its base.
* Any change in the name of a concept must take effect all over in the system's knowledge-base. [HmnSngo.1999-08-24]
* the-manager must-have under control the-names used in its info.
* it must validate the-names against the-naming conventions used in its base-view.
* it must find undefined names.
* it must be-able to change a-name everywhere it is-used (see the-sketch below).
* after changing the-names of concepts, the-knowledge-base must-remain the-same.
* it must know the-lexicographic info of the-words of a-name in all human-languages: pronunciation, spelling, etymology, pos, word-forms, syntax, meaning, ...
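
· a-minimal sketch of this name-managing, assuming a-toy in-memory knowledge-base where relations reference concept-ids rather than name-strings (all class- and function-names here are hypothetical illustrations, not part of any existing Knlgmngr):

```python
# Toy knowledge-base: names map to concept-ids, so renaming a concept
# takes effect everywhere at once and never touches stored relations.

class Knlgbase:
    def __init__(self):
        self.names = {}      # name -> concept-id
        self.concepts = {}   # concept-id -> set of names

    def add_name(self, cid, name):
        if name in self.names and self.names[name] != cid:
            raise ValueError(f"name {name!r} already names another concept")
        self.names[name] = cid
        self.concepts.setdefault(cid, set()).add(name)

    def rename(self, old, new):
        """Change a name consistently; raises KeyError for undefined names."""
        cid = self.names.pop(old)
        self.concepts[cid].discard(old)
        self.add_name(cid, new)
```
· because relations are-stored against concept-ids, renaming leaves the-base itself unchanged.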

name::
* McsEngl.Klmrdoing.name-changing,
* McsEngl.Klmrdoing.terminology-managing,

merging-knowledge-bases of Knlgmngr

description::
· the-system must-have the-ability to merge knowledge-bases.
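
· a-minimal merge sketch, assuming the-toy Knlgbase above and assuming the-two bases share one concept-id space (a-strong simplifying assumption; real merging also needs relation- and attribute-reconciliation):

```python
# Union two name tables; names that point to different concepts in the
# two bases are reported as conflicts for an author to resolve by hand.

def merge(kb_a, kb_b):
    merged, conflicts = dict(kb_a.names), []
    for name, cid in kb_b.names.items():
        if name in merged and merged[name] != cid:
            conflicts.append(name)
        else:
            merged[name] = cid
    return merged, conflicts
```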

name::
* McsEngl.Klmrdoing.merging-knowledge-base,

undoing of Knlgmngr

description::
· The system must have a multi-level undo capability.
[hmnSngo.2000-07-26]

name::
* McsEngl.Klmrdoing.undoing,

formating-knowledge of Knlgmngr

description::
· on changing the-format of a-concept, the-manager must-change this format in all other Mcs.

name::
* McsEngl.Klmrdoing.formating-knowledge,

generic-specific-managing of Knlgmngr

description::
· this is THE-MOST IMPORTANT DOING the-manager MUST-HAVE.
· on changing an-attribute in a-concept\a\, the-manager must-make this change in all its\a\ specifics.
· generalizing an-attribute: make it an-attribute of its generic.
· IF current-cpt has as GENERIC cpt-y, THEN the-ATTRIBUTES of cpt-y are attributes of current-cpt. {1999-01-17}.
· IF current-cpt has as GENERIC cpt-y, THEN cpt-y has as SPECIFIC current-cpt. {1999-02-28}.
· IF current-cpt has as SPECIFIC cpt-y, THEN the-attributes of current-cpt are attributes of cpt-y. {1999-01-17}.
· IF current-cpt has as SPECIFIC cpt-y, THEN cpt-y has as GENERIC current-cpt. {1999-02-22}.
· IF current-cpt has as SIBLING cpt-y, THEN both concepts have a-common generic-concept. {1999-02-09}.
· IF current-cpt has as sibling cpt-y in relation to generic cpt-w AND cpt-y has as sibling cpt-z in relation to generic cpt-w, THEN current-cpt has as sibling cpt-z. {1999-03-13}.
· IF current-cpt has as GENERIC cpt-y, THEN check if ALL the-specifics of cpt-y are siblings of current-cpt (except current!). {1999-04-17}.
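
· a-minimal sketch of these rules, assuming concepts are plain Python dicts (a-hypothetical representation, not the-Mcs format):

```python
# Generic-specific managing: inverse links, attribute inheritance,
# and sibling computation, following the IF/THEN rules above.

def set_generic(cpt, generic):
    """IF cpt has as GENERIC generic, THEN generic has as SPECIFIC cpt."""
    cpt["generic"] = generic
    generic.setdefault("specifics", []).append(cpt)

def attributes(cpt):
    """Own attributes plus attributes inherited from the generic chain;
    own values override inherited ones."""
    inherited = attributes(cpt["generic"]) if cpt.get("generic") else {}
    return {**inherited, **cpt.get("attributes", {})}

def siblings(cpt):
    """Specifics of the same generic, except cpt itself."""
    if not cpt.get("generic"):
        return []
    return [s for s in cpt["generic"]["specifics"] if s is not cpt]

animal = {"name": "animal", "attributes": {"alive": True}}
cat = {"name": "cat", "attributes": {"legs": 4}}
set_generic(cat, animal)
print(attributes(cat))   # -> {'alive': True, 'legs': 4}
```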

name::
* McsEngl.Klmrdoing.generic-specific-managing,

whole-part-managing of Knlgmngr

description::
· IF current-cpt has as PART cpt-x, THEN cpt-x has as WHOLE current-cpt.
· IF current-cpt has as WHOLE cpt-x, THEN cpt-x has as PART current-cpt.
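
· a-minimal sketch of inverse-relation maintenance, assuming the-dict-concepts above; parent-child (next section) follows the-same pattern, so the-illustrative INVERSE table covers both:

```python
# Whenever one direction of a relation is stored, store its inverse too.

INVERSE = {"part": "whole", "whole": "part",
           "child": "parent", "parent": "child"}

def add_relation(cpt_x, rel, cpt_y):
    """IF cpt_x has as REL cpt_y, THEN cpt_y has as INVERSE[REL] cpt_x."""
    cpt_x.setdefault(rel, []).append(cpt_y)
    cpt_y.setdefault(INVERSE[rel], []).append(cpt_x)

car, engine = {"name": "car"}, {"name": "engine"}
add_relation(car, "part", engine)   # car has-as-PART engine
print(engine["whole"][0]["name"])   # -> car
```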

name::
* McsEngl.Klmrdoing.whole-part-managing,

parent-child-managing of Knlgmngr

description::
· IF current-cpt has as PARENT cpt-x, THEN cpt-x has as CHILD current-cpt.
· IF current-cpt has as CHILD cpt-x, THEN cpt-x has as PARENT current-cpt.

name::
* McsEngl.Klmrdoing.parent-child-managing,

validating-knowledge of Knlgmngr

description::
* truth checking (accuracy of knowledge),
* conflict handling (consistency of knowledge),
* gap finding (completeness of knowledge),

· knowledge must-be accurate, consistent, and complete.
· the-manager must validate the-relations among the-stored-concepts for inconsistencies.
· on every edit, the-system must validate the-new relations.
· generic-specific, whole-part, parent-child validations.
===
· the-elements of a-specific-division must-have its specific-complements.
· the-elements of a-part-division must-have its part-complements.
===
· every concept must-have part, whole, generic, specific definitions.
===
· IF current-cpt has a-relation with cpt-y, THEN cpt-y must-have this relation with current-cpt.
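
· a-minimal validation sketch for the-last rule, assuming the-dict-concepts and the-hypothetical INVERSE table above:

```python
# Consistency check: every stored relation must be mirrored by its
# inverse on the other concept; returns the gaps found.

def validate(kb_concepts):
    problems = []
    for cpt in kb_concepts:
        for rel, inverse in INVERSE.items():
            for other in cpt.get(rel, []):
                if not any(o is cpt for o in other.get(inverse, [])):
                    problems.append((cpt["name"], rel, other["name"]))
    return problems   # an empty list means no inverse-relation gaps
```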

name::
* McsEngl.Klmrdoing.consistency-checking,
* McsEngl.Klmrdoing.integrating-knowledge,
* McsEngl.Klmrdoing.validating-knowledge,

multiview-supporting of Knlgmngr

description::
· to build a-multiview of the-world.

name::
* McsEngl.Klmrdoing.multiview-supporting,

doing.retrieving-Knlgbase of Knlgmngr

description::
· the-ease of retrieving the-knowledge-base is also important.

name::
* McsEngl.Klmrdoing.retrieving-Knlgbase,

presenting-knowledge of Knlgmngr

description::
· the-presentation of the-knowledge is very important for the-reader.

name::
* McsEngl.Klmrdoing.presenting-knowledge,

navigating-knowledge of Knlgmngr

description::
·

name::
* McsEngl.Klmrdoing.navigating-knowledge,

translating-knowledge of Knlgmngr

description::
· the-manager must be-able to compute the-translation of the-stored-info into all natural-languages, not just store translations.

name::
* McsEngl.Klmrdoing.translating-knowledge,

questioning-knowledge of Knlgmngr

description::
· the-system must answer questions on the-stored-info.
· it must be-able to compute answers and not just retrieve stored-info.
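
· a-minimal sketch of a-computed answer, assuming the-dict-concepts and generic-chain above: "is cpt-x a-cpt-y?" is answered by walking the-generic chain, not by looking-up a-stored fact:

```python
def is_a(cpt, candidate_generic):
    """True if candidate_generic appears on cpt's generic chain."""
    while cpt is not None:
        if cpt is candidate_generic:
            return True
        cpt = cpt.get("generic")
    return False

# with the animal/cat example above: is_a(cat, animal) -> True
```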

name::
* McsEngl.Klmrdoing.questioning-knowledge,

summarizing-knowledge of Knlgmngr

description::
· make summaries of knowledge.

name::
* McsEngl.Klmrdoing.summarizing-knowledge,

doing.computing-Knlgbase of Knlgmngr

description::
· any computing (building or retrieving) on Knlgbase.

name::
* McsEngl.Klmrcomputing!=computing-Knlgbase-of-Knlgmngr,
* McsEngl.Klmrdoing.computing-Knlgbase,

explaining-computations of Knlgmngr

description::
· the-manager must-have the-ability to explain its building or retrieving computations.

name::
* McsEngl.Klmrdoing.explaining-computations,

problem-solving of Knlgmngr

description::
">machine problem solving:
Machine problem-solving, also known as automated problem-solving, refers to the use of artificial intelligence (AI) and machine learning (ML) techniques to solve problems. This can include tasks such as:
* **Diagnosing faults in machines or equipment**
* **Optimizing processes or systems**
* **Predicting future events or outcomes**
* **Developing new solutions to complex problems**

Machine problem-solving has the potential to revolutionize many industries, including manufacturing, healthcare, and finance. It can help to improve efficiency, reduce costs, and make better decisions.

Here are some of the benefits of machine problem-solving:
* **Increased efficiency:** Machines can solve problems much faster than humans, and they can work 24/7 without breaks.
* **Reduced costs:** Machine problem-solving can automate tasks that would otherwise be done by humans, which can save money.
* **Improved decision-making:** Machines can analyze data and identify patterns that humans would miss, which can lead to better decisions.

However, there are also some challenges associated with machine problem-solving:
* **Data dependency:** Machine problem-solving is only as good as the data it is trained on. If the data is biased or inaccurate, the results will be unreliable.
* **Explainability:** It can be difficult to understand how machines make decisions, which can make it difficult to trust their results.
* **Ethical considerations:** There are ethical concerns about the use of machines to make decisions that have a significant impact on people's lives.

Despite these challenges, machine problem-solving is a powerful tool that has the potential to solve many of the world's most pressing problems. As AI and ML techniques continue to develop, we can expect to see even more innovative applications of machine problem-solving in the years to come."
[{2023-11-08 retrieved} https://bard.google.com/chat/78f5df1f868d63b0?hl=en&pli=1]

name::
* McsEngl.Klmrdoing'problem-solving,
* McsEngl.problem-solving--of-Klmrdoing,

doing.Knlgbase-evoluting-managing of Knlgmngr

description::
· the-manager must-have the-ability to manage the-evolution of its knowledge-base, because knowledge, like the-world, is a-dynamic-entity.

name::
* McsEngl.Klmrdoing.Knlgbase-evoluting-managing,

knowledge-engineering of Knlgmngr

description::
">knowledge engineering:
Knowledge engineering (KE) is a field of artificial intelligence (AI) that deals with the creation, representation, and use of knowledge. It is concerned with the transfer of knowledge from human experts to computer systems in a form that can be used to solve problems. Knowledge engineers are responsible for identifying, extracting, and representing knowledge in a way that can be used by AI systems to make decisions or perform tasks.

**Key tasks in knowledge engineering**
* **Knowledge acquisition:** The process of gathering and understanding knowledge from human experts.
* **Knowledge representation:** The process of encoding knowledge in a computer-processable form.
* **Knowledge validation:** The process of ensuring that the knowledge is accurate, consistent, and complete.
* **Knowledge inferencing:** The process of using knowledge to solve problems or make decisions.
* **Knowledge explanation:** The process of explaining the reasoning behind a particular decision or solution.
* **Knowledge maintenance:** The process of keeping knowledge up-to-date as new information becomes available.

**Applications of knowledge engineering**
* **Expert systems:** Computer systems that emulate the decision-making ability of human experts.
* **Medical diagnosis:** Systems that help doctors diagnose diseases by analyzing patient symptoms.
* **Financial forecasting:** Systems that predict financial markets.
* **Fraud detection:** Systems that identify fraudulent transactions.
* **Customer service chatbots:** Systems that answer customer questions and resolve issues.

**Benefits of knowledge engineering**
* **Increased efficiency and productivity:** Knowledge engineering can automate tasks that would otherwise be performed by human experts.
* **Improved decision-making:** Knowledge engineering can provide access to expert knowledge that would otherwise be unavailable.
* **Reduced costs:** Knowledge engineered systems can often perform tasks more cost-effectively than human experts.
* **Enhanced customer satisfaction:** Knowledge engineered systems can provide better customer service by answering questions and resolving issues quickly and efficiently.

**Challenges of knowledge engineering**
* **Knowledge acquisition:** It can be difficult to elicit and capture knowledge from human experts.
* **Knowledge representation:** There is no single, universally accepted way to represent knowledge in a computer-processable form.
* **Knowledge validation:** It can be difficult to ensure that the knowledge is accurate, consistent, and complete.
* **Knowledge maintenance:** Knowledge needs to be updated as new information becomes available.

**Future of knowledge engineering**
Knowledge engineering is a rapidly evolving field, and there are a number of exciting new developments on the horizon. These include:
* **The development of new knowledge representation languages and techniques.**
* **The use of machine learning to automatically extract knowledge from data.**
* **The integration of knowledge engineering with other AI fields, such as natural language processing and computer vision.**

As knowledge engineering continues to evolve, it is likely to play an increasingly important role in the development of intelligent systems."
[{2023-11-07 retrieved} https://bard.google.com/chat/6745b6ee1ff66cc0?hl=en&pli=1]

name::
* McsEngl.knowledge-engineering,
* McsEngl.Knlgmngr'knowledge-engineering,

evolution of Knlgmngr

description::
"evolution of knowledge-reasoning:
The evolution of knowledge reasoning can be traced back to the earliest days of artificial intelligence (AI). In the 1950s, researchers began developing systems that could use knowledge to solve problems. One of the earliest examples of this is the General Problem Solver (GPS) system developed by Allen Newell and Herbert A. Simon. GPS was able to solve a variety of problems by using a set of general problem-solving rules.

In the 1960s and 1970s, researchers began to develop more sophisticated knowledge representation systems. These systems allowed for the representation of knowledge about the world in a way that could be used by AI systems to reason about problems. One of the most important developments in this area was the development of first-order logic. First-order logic provides a formal language for representing and reasoning about knowledge.

In the 1980s and 1990s, researchers began to develop knowledge reasoning systems that could be used to solve real-world problems. One of the most successful examples of this is the MYCIN system, which was developed to diagnose infectious diseases. MYCIN used a knowledge base of medical knowledge to reason about patient symptoms and make diagnoses.

In recent years, there has been a renewed interest in knowledge reasoning due to the rise of large language models (LLMs). LLMs are able to process and understand large amounts of text data, and they can be used to build knowledge graphs that represent knowledge about the world in a structured way. This has led to the development of new knowledge reasoning systems that are able to solve more complex problems than ever before.

Here are some of the key milestones in the evolution of knowledge reasoning:
* **1959:** General Problem Solver (GPS) system developed by Allen Newell and Herbert A. Simon.
* **1960s:** Development of first-order logic as a formal language for representing and reasoning about knowledge.
* **1970s:** Development of knowledge representation systems such as KL-ONE and PROLOG.
* **1980s:** Development of MYCIN system for diagnosing infectious diseases.
* **1990s:** Development of knowledge reasoning systems for real-world applications such as fraud detection and customer relationship management.
* **2000s:** Rise of large language models and the development of new knowledge reasoning systems that are able to solve more complex problems.

The future of knowledge reasoning is bright. As knowledge graphs become more comprehensive and machine learning algorithms become more sophisticated, knowledge reasoning systems will become increasingly powerful and useful. Knowledge reasoning will play a vital role in many different areas of our lives, including healthcare, education, and business.

Here are some specific examples of how knowledge reasoning is being used today:
* **Medical diagnosis:** Knowledge reasoning systems are being used to help doctors diagnose diseases more accurately.
* **Fraud detection:** Knowledge reasoning systems are being used to detect fraudulent transactions in financial systems.
* **Customer service:** Knowledge reasoning systems are being used to provide better customer service by answering questions and resolving issues more quickly.
* **Product recommendation:** Knowledge reasoning systems are being used to recommend products to customers based on their past purchases and preferences.

These are just a few examples of how knowledge reasoning is being used today. As knowledge reasoning systems continue to evolve, they will be used to solve even more complex problems in the future."
[{2023-10-26 retrieved} https://bard.google.com/chat/cecdcc24288caf0b]

"evolution of knowledge-reasoning-systems:
The evolution of knowledge reasoning systems can be traced through the development of artificial intelligence (AI) and knowledge representation techniques. These systems have undergone significant changes and improvements over the years. Here is an overview of their evolution:

1. Early Expert Systems (1960s-1970s):
- The field of knowledge reasoning systems began with early expert systems, which were rule-based systems designed to emulate human expertise in specific domains.
- Dendral (1965) and MYCIN (1976) are notable examples of early expert systems.
- These systems used a "knowledge base" of rules and an "inference engine" to make decisions based on the available knowledge.

2. Knowledge Representation (1970s-1980s):
- During this period, researchers focused on developing formal knowledge representation languages and ontologies to capture knowledge in a structured manner.
- The development of languages like Prolog and semantic networks played a significant role in this evolution.

3. Frame-Based Systems (1980s-1990s):
- Frame-based systems, such as the KL-ONE system, introduced a way to represent and reason with structured knowledge using frames and slots.
- These systems enabled more complex reasoning about concepts and relationships.

4. Rule-Based Expert Systems (1980s-1990s):
- Rule-based expert systems continued to advance, incorporating more sophisticated reasoning mechanisms.
- The introduction of uncertainty and fuzzy logic allowed these systems to deal with incomplete or uncertain information.

5. Semantic Web and Ontologies (late 1990s-present):
- The Semantic Web initiative aimed to create a web of structured and interconnected data using ontologies and the Resource Description Framework (RDF).
- The development of ontologies like OWL (Web Ontology Language) made it possible to reason about complex relationships on the web.

6. Knowledge Graphs (2000s-present):
- Knowledge graphs have gained popularity as a way to represent and reason with interconnected data.
- Large-scale knowledge graphs, like Google's Knowledge Graph and the Wikidata project, have become valuable resources for various applications, including search engines and virtual assistants.

7. Machine Learning and Neural Networks (2010s-present):
- Deep learning and neural networks have made significant contributions to knowledge reasoning systems.
- Neural networks are used for various tasks, including natural language understanding, question answering, and image recognition, allowing systems to reason and make inferences from unstructured data.

8. Hybrid Approaches (2010s-present):
- Contemporary knowledge reasoning systems often combine symbolic reasoning and machine learning techniques.
- This hybrid approach enables systems to leverage structured knowledge along with the ability to learn from data.

9. Explainability and Trust (ongoing):
- As knowledge reasoning systems become more complex and integrated into various applications, there is a growing emphasis on explainability and trustworthiness in AI systems.
- Researchers are developing techniques to make AI systems more transparent and interpretable, especially in critical domains.

The evolution of knowledge reasoning systems reflects the broader trends in AI and computer science, moving from early rule-based expert systems to more flexible, data-driven, and interconnected knowledge representations. The field continues to advance, driven by the need for intelligent systems that can make informed decisions, understand natural language, and adapt to changing environments."
[{2023-10-26 retrieved} https://chat.openai.com/c/44a972fd-ca81-4068-b372-53a32715dc07]

name::
* McsEngl.evoluting-of-Knlgmngr,
* McsEngl.Knlgmngr'evoluting,

GENERIC-SPECIFIC-TREE of Knlgmngr

generic-tree-of-Knlgmngr::
* hard-soft-system,
* ... entity,

* McsEngl.Knlgmngr'generic-tree,

specific-tree-of-Knlgmngr::
* concept-Knlgmngr,
* logo-Knlgmngr,
===
* AAj,
* Algernon,
* MYCIN,

* McsEngl.Knlgmngr.specific-tree,

Knlgmngr.concept (link)

Knlgmngr.logo (link)

Knlgmngr.LLM-Large-Language-Model (link)

Knlgmngr.logic (link)

Knlgmngr.DSS-Decision-Support-System

description::
"overview of DSS:
A decision support system (DSS) is a computer-based information system that supports decision-making activities in semi-structured or unstructured tasks. DSSs provide users with access to data, models, and tools that can help them to identify and evaluate alternative solutions to a problem. DSSs are typically used by managers and decision-makers at all levels of an organization.

DSSs can be classified into several different types, including:
* **Data-driven DSSs:** Data-driven DSSs focus on providing users with access to and the ability to analyze large amounts of data.
* **Model-driven DSSs:** Model-driven DSSs use mathematical models to represent and analyze complex problems.
* **Knowledge-driven DSSs:** Knowledge-driven DSSs use expert knowledge to solve complex problems.
* **Document-driven DSSs:** Document-driven DSSs provide users with access to and the ability to analyze documents, such as reports, presentations, and emails.
* **Communications-driven DSSs:** Communications-driven DSSs facilitate communication and collaboration among decision-makers.

DSSs are used in a wide variety of applications, including:
* **Sales and marketing:** DSSs can be used to forecast sales, plan marketing campaigns, and segment customers.
* **Operations management:** DSSs can be used to schedule production, manage inventory, and optimize supply chains.
* **Financial management:** DSSs can be used to budget, forecast financial performance, and make investment decisions.
* **Human resource management:** DSSs can be used to recruit and select employees, train and develop employees, and manage performance.
* **Research and development:** DSSs can be used to identify and evaluate research projects, develop new products and services, and manage intellectual property.

**Benefits of DSS**
DSSs offer a number of benefits, including:
* **Improved decision-making:** DSSs can help decision-makers to make better decisions by providing them with access to data, models, and tools that can help them to identify and evaluate alternative solutions.
* **Increased efficiency and productivity:** DSSs can help decision-makers to save time and improve their productivity by automating tasks and providing them with the information they need to make decisions quickly and accurately.
* **Reduced costs:** DSSs can help organizations to reduce costs by improving decision-making and increasing efficiency.
* **Enhanced communication and collaboration:** DSSs can facilitate communication and collaboration among decision-makers, which can lead to better decision-making and improved outcomes.

**Challenges of DSS**
DSSs also face some challenges, including:
* **Complexity:** DSSs can be complex to develop and implement.
* **Data quality:** DSSs rely on high-quality data to produce accurate results.
* **User acceptance:** DSSs can be difficult to use if they are not properly designed and implemented.

**Conclusion**
DSSs are powerful tools that can help organizations to improve decision-making, increase efficiency and productivity, reduce costs, and enhance communication and collaboration. DSSs are used in a wide variety of applications, and they are becoming increasingly important as organizations face more complex and challenging decision-making environments."
[{2023-10-28 retrieved} https://bard.google.com/chat/5b4a3137667cf549]

name::
* McsEngl.DSS!=decision-support-system--Knlgmngr,
* McsEngl.Knlgmngr.020-DSS,
* McsEngl.Knlgmngr.DSS,
* McsEngl.klmrDss!=decision-support-system--Knlgmngr,

relation-to-klmrEs of klmrDss

description::
"expert-systems vs DSS:
Expert systems (ES) and Decision Support Systems (DSS) are both computer-based tools designed to assist in decision-making processes, but they serve different purposes and have distinct characteristics. Here's a comparison of expert systems and DSS:

1. Purpose:
- Expert Systems:
Expert systems are primarily designed to replicate the decision-making abilities of human experts in a specific domain. They use a knowledge base and inference engine to provide advice, recommendations, or solutions within that domain. Expert systems are best suited for well-defined, rule-based, and knowledge-intensive problems.
- Decision Support Systems:
DSS, on the other hand, are more general-purpose tools that provide support to decision-makers by collecting, analyzing, and presenting relevant data and information. They do not aim to replace experts but help users make informed decisions by providing data, models, and analysis tools.

2. Knowledge Base:
- Expert Systems:
ES have a fixed knowledge base that contains domain-specific information, including rules, heuristics, and facts. This knowledge is static and is not meant to change frequently.
- Decision Support Systems:
DSS typically rely on dynamic and often updated databases and data sources. They provide access to data, models, and tools that can be customized and adapted to different decision-making scenarios.

3. Decision-Making Process:
- Expert Systems:
ES automate the decision-making process within a specific domain by using predefined knowledge and reasoning techniques. They aim to provide consistent and expert-level decisions or recommendations.
- Decision Support Systems:
DSS provide users with the tools and information needed to make their own decisions. They offer data analysis, reporting, modeling, and "what-if" scenario analysis to help users explore different decision options and their potential outcomes.

4. Flexibility:
- Expert Systems:
Expert systems are less flexible and are typically tailored to a specific domain. They are not easily adaptable to new or different problem domains without significant reprogramming.
- Decision Support Systems:
DSS are more flexible and can be adapted to a wide range of decision-making situations and domains. They are often used in a variety of industries and applications.

5. User Involvement:
- Expert Systems:
Users typically interact with expert systems by providing input, answering questions, or receiving recommendations. They have limited involvement in the decision-making process.
- Decision Support Systems:
DSS are designed to involve users actively in the decision-making process. Users can explore data, apply their expertise, and make informed choices based on the information and tools provided.

6. Examples:
- Expert Systems: MYCIN (medical diagnosis), Dendral (organic chemistry), and financial advisory systems.
- Decision Support Systems: Business intelligence tools, spreadsheet applications, and data analytics platforms.

In summary, expert systems and Decision Support Systems serve different purposes and are used in different contexts. Expert systems focus on automating decision-making in specific domains using a predefined knowledge base, while DSS are more general-purpose tools that provide decision-makers with data, tools, and flexibility to make informed choices across various domains."
[{2023-10-28 retrieved} https://chat.openai.com/c/f9230f52-3c4a-4280-9c74-98f9377951e0]

name::
* McsEngl.klmrDss'relation-to-expert-systems,
* McsEngl.klmrEs'relation-to-DSS,

evoluting of klmrDss

description::
"evolution of DSS:
Decision support systems (DSS) have evolved significantly since they were first introduced in the late 1970s. Early DSSs were simple data retrieval and reporting systems. They typically used data from transaction processing systems (TPS) to generate reports that could be used by decision-makers.

In the 1980s, DSSs became more sophisticated. They began to incorporate models and analytical tools that could help decision-makers to analyze data and make better decisions. This evolution was driven by the development of new technologies, such as relational databases and microcomputers.

In the 1990s, DSSs continued to evolve. They became more user-friendly and began to incorporate more advanced features, such as online analytical processing (OLAP) and data mining. DSSs also began to be used in a wider variety of applications, such as business intelligence and enterprise resource planning (ERP).

In recent years, DSSs have evolved to become more intelligent and adaptive. They now incorporate artificial intelligence (AI) and machine learning (ML) techniques to help decision-makers to make better decisions in complex and uncertain environments.

Here is a summary of the key stages in the evolution of DSS:
* **1970s:** Simple data retrieval and reporting systems.
* **1980s:** Incorporation of models and analytical tools.
* **1990s:** More user-friendly and incorporation of advanced features, such as OLAP and data mining. Wider variety of applications, such as business intelligence and ERP.
* **2000s and beyond:** Incorporation of AI and ML techniques.

The evolution of DSS has been driven by a number of factors, including:
* **Technological advances:** The development of new technologies, such as relational databases, microcomputers, and the internet, has made it possible to develop more sophisticated and powerful DSSs.
* **Increased demand:** As organizations have become more complex and faced more challenging decision-making environments, there has been an increased demand for DSSs that can help them to make better decisions.
* **New research and development:** Researchers and developers have continued to develop new theories and techniques for DSSs, which has led to the development of more advanced and effective DSSs.

DSSs are likely to continue to evolve in the future. As AI and ML techniques become more sophisticated, DSSs will become more intelligent and adaptive. DSSs will also become more integrated with other enterprise systems, such as ERP and business intelligence systems.

Overall, the evolution of DSS has been driven by the need to develop more powerful and flexible systems for supporting decision-making in complex and uncertain environments. DSSs are used in a wide variety of applications today, and they are likely to continue to play a vital role in helping organizations to make better decisions in the future."
[{2023-10-28 retrieved} https://bard.google.com/chat/5b4a3137667cf549]

name::
* McsEngl.evoluting-of-klmrDss,
* McsEngl.klmrDss'evoluting,

Knlgmngr.expert-system

description::
"overview of expert-systems:
Expert systems are a type of artificial intelligence (AI) that are designed to mimic the decision-making abilities of a human expert in a specific domain. These systems use a knowledge base of human expertise and an inference engine to draw conclusions, make recommendations, or solve problems within that domain. Here's an overview of expert systems:

1. Components of Expert Systems:
Expert systems typically consist of three main components:
a. Knowledge Base: This is where the system stores the domain-specific information and expertise. It includes facts, rules, heuristics, and other knowledge about the problem domain.
b. Inference Engine: The inference engine is responsible for applying reasoning and logic to the information stored in the knowledge base. It uses various inference methods to draw conclusions or make decisions.
c. User Interface: This component provides a means for users to interact with the expert system, ask questions, input data, and receive recommendations or solutions.

2. Knowledge Representation:
Expert systems use different knowledge representation methods, such as production rules, frames, semantic networks, and ontologies, to capture and structure domain-specific knowledge in a format that the system can understand and use.

3. Inference Methods:
Expert systems employ various inference methods to process the knowledge stored in the knowledge base. Some common techniques include forward chaining (data-driven reasoning) and backward chaining (goal-driven reasoning). These methods are used to derive conclusions or make decisions based on the available information.

4. Domain Expertise:
Expert systems are designed for specific domains or problem areas, such as medical diagnosis, financial planning, troubleshooting, and more. The knowledge in the system is tailored to that particular domain, and the system aims to replicate the decision-making abilities of a human expert within that field.

5. Advantages of Expert Systems:
- Consistency: Expert systems provide consistent and unbiased recommendations or decisions based on the encoded knowledge.
- Availability: They can be available 24/7, providing expertise whenever needed.
- Scalability: Knowledge can be easily updated and expanded as new information becomes available.
- Training: Expert systems can be used as training tools for non-experts to gain expertise in a specific domain.

6. Limitations of Expert Systems:
- Limited Domain: Expert systems are only as good as the knowledge base they are built upon and are limited to the specific domain they are designed for.
- Lack of Common Sense: They often lack common-sense reasoning abilities that humans possess.
- Maintenance: Regular maintenance and knowledge updates are necessary to keep the system relevant and accurate.
- Difficulty in Handling Uncertainty: Expert systems may struggle with handling uncertainty and making probabilistic decisions.

7. Applications:
Expert systems have been applied in various fields, including healthcare for medical diagnosis, finance for investment advice, manufacturing for quality control, customer support for troubleshooting, and more.

8. Examples:
- Dendral: One of the earliest expert systems, designed for organic chemistry.
- MYCIN: An expert system for diagnosing bacterial infections.
- TurboTax: Uses expert system principles for tax preparation.
- Watson: IBM's Watson is a famous example that combines expert systems with natural language processing to answer complex questions.

Expert systems have played a significant role in the development of AI and have found applications in a wide range of industries, helping to automate decision-making and problem-solving processes in specialized domains."
[{2023-10-28 retrieved} https://chat.openai.com/c/f9230f52-3c4a-4280-9c74-98f9377951e0]
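
· a-minimal sketch of backward chaining (goal-driven reasoning), one of the-inference-methods named above; the-rule-format (a-set of antecedent-facts implying one consequent-fact) is an-illustrative assumption, and this toy has no cycle-handling:

```python
def backward_chain(goal, facts, rules):
    """True if goal is a known fact or derivable from some rule."""
    if goal in facts:
        return True
    for antecedents, consequent in rules:
        if consequent == goal and all(
                backward_chain(a, facts, rules) for a in antecedents):
            return True
    return False

rules = [(frozenset({"has-fever", "has-rash"}), "suspect-infection")]
print(backward_chain("suspect-infection", {"has-fever", "has-rash"}, rules))
# -> True
```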

name::
* McsEngl.ES!=expert-system--Knlgmngr,
* McsEngl.Eskm!=expert-system--Knlgmngr,
* McsEngl.Knlgmngr.019-expert-system!⇒klmrEs,
* McsEngl.Knlgmngr.expert-system!⇒klmrEs,
* McsEngl.klmrEs!=expert-system--Knlgmngr,

evoluting of klmrEs

description::
"evolution of expert-systems:
Expert systems have evolved significantly since they were first introduced in the 1970s. Early expert systems were rule-based, meaning that they used a set of production rules to reason about problems and generate solutions. However, rule-based expert systems were difficult to maintain and update, and they often struggled to handle uncertainty in the problem domain.

In the 1980s, researchers began to develop new types of expert systems, such as frame-based and case-based expert systems. Frame-based expert systems represent knowledge in terms of frames, which are data structures that contain slots for storing information about a particular concept. Case-based expert systems solve problems by finding similar cases from the past and adapting them to the current situation.

In recent years, there has been a growing interest in developing hybrid expert systems that combine elements of different expert system types. For example, some hybrid expert systems use rule-based reasoning to handle the deterministic part of the problem domain and case-based reasoning to handle the uncertain part of the problem domain.

Another important trend in the evolution of expert systems is the increasing use of machine learning techniques. Machine learning can be used to automatically acquire knowledge from data, which can help to reduce the cost and time required to develop and maintain expert systems.

Here is a summary of the key stages in the evolution of expert systems:
* **1970s:** Rule-based expert systems are developed.
* **1980s:** Frame-based and case-based expert systems are developed.
* **1990s:** Hybrid expert systems are developed, which combine elements of different expert system types.
* **2000s and beyond:** Machine learning is used to automatically acquire knowledge from data, which helps to reduce the cost and time required to develop and maintain expert systems.

**Current State of Expert Systems**
Expert systems are still used today in a wide variety of domains, including medicine, engineering, finance, and manufacturing. However, expert systems have been largely supplanted by machine learning systems for many tasks. This is because machine learning systems can be trained on large amounts of data to learn complex patterns, which expert systems are unable to do.
Despite the rise of machine learning, expert systems still have a number of advantages. For example, expert systems are more explainable than machine learning systems, meaning that it is easier to understand how they arrive at a particular solution. Additionally, expert systems can be used to encode expert knowledge in a structured and systematic way, which can be useful for tasks such as training new employees or developing decision support systems.

**Future of Expert Systems**
Expert systems are likely to continue to play a role in solving complex problems in the future. However, expert systems will likely be used in conjunction with machine learning systems to take advantage of the strengths of both approaches. For example, expert systems could be used to encode expert knowledge and to provide explanations, while machine learning systems could be used to learn complex patterns from data.
Overall, the evolution of expert systems has been driven by the need to develop more powerful and flexible systems for solving complex problems. Expert systems have evolved from rule-based systems to frame-based and case-based systems, to hybrid systems, and now to systems that incorporate machine learning techniques. Expert systems are still used today in a wide variety of domains, and they are likely to continue to play a role in solving complex problems in the future."
[{2023-10-28 retrieved} https://bard.google.com/chat/5b4a3137667cf549]

name::
* McsEngl.evoluting-of-klmrEs,
* McsEngl.klmrEs'evoluting,

Knlgmngr.KMS-Knowledge-Management-System

description::
"definition of knowledge-management-system:
A knowledge management system (KMS) is a software application that helps organizations to collect, organize, store, and share knowledge. KMSs can be used to manage both explicit knowledge (such as documentation, procedures, and reports) and tacit knowledge (such as expertise, experience, and insights).

KMSs can be used to achieve a variety of business goals, such as:
* Improving customer service by providing customers with easy access to knowledge about products and services
* Reducing costs by reducing the need to reinvent the wheel or repeat mistakes
* Improving employee productivity by providing employees with easy access to knowledge they need to do their jobs effectively
* Increasing innovation by fostering collaboration and knowledge sharing
* Improving decision-making by providing decision-makers with access to relevant knowledge

KMSs can be implemented in a variety of ways, but they typically include the following features:
* A knowledge base for storing and organizing knowledge
* A search engine for finding knowledge in the knowledge base
* Collaboration features for sharing knowledge and working together on knowledge-related tasks
* Security and compliance features for protecting knowledge and ensuring compliance with relevant regulations

KMSs are used by a wide range of organizations, including businesses, educational institutions, government agencies, and nonprofit organizations.

Here are some examples of KMSs:
* Google Workspace
* Microsoft SharePoint
* Atlassian Jira Service Management
* Confluence
* Freshdesk
* ServiceNow

KMSs are a valuable tool for any organization that wants to improve its knowledge management practices and share knowledge with its employees, customers, and other stakeholders."
[{2023-10-30 retrieved} https://bard.google.com/chat/4122071e73b35c2]

name::
* McsEngl.KMS!=knowledge-management-system--Knlgmngr,
* McsEngl.Knlgmngr.022-KMS,
* McsEngl.Knlgmngr.KMS,
* McsEngl.klmrKms!=KMS--knowledge-manager,

relation-to-KRS OF klmrKms

description::
"knowledge-management-system vs knowledge-representation-system:
**Knowledge management system (KMS)** and **knowledge representation system (KRS)** are two different but complementary approaches to managing knowledge.

**KMS** are software applications that help organizations to collect, organize, store, and share knowledge. They typically include features such as search, categorization, tagging, and collaboration tools. KMS can be used to manage a wide variety of knowledge assets, including documents, databases, and even tacit knowledge.

**KRS** are systems that represent knowledge in a way that can be understood and processed by computers. They typically use formal languages and ontologies to represent knowledge entities and relationships. KRS can be used to develop applications such as expert systems, natural language processing systems, and machine learning systems.

**Here is a table that summarizes the key differences between KMS and KRS:**
| Feature | KMS | KRS |
| Purpose | To help organizations collect, organize, store, and share knowledge | To represent knowledge in a way that can be understood and processed by computers |
| Target audience | Knowledge workers, such as employees, customers, and partners | Software developers, AI researchers, and knowledge engineers |
| Types of knowledge assets managed | Documents, databases, tacit knowledge, and other types of unstructured data | Knowledge entities and relationships, typically represented in formal languages and ontologies |
| Key features | Search, categorization, tagging, and collaboration tools | Formal languages, ontologies, and reasoning engines |

**How KMS and KRS can be used together:**
KMS and KRS can be used together to create a more powerful and sophisticated knowledge management system. For example, a KMS could use a KRS to extract knowledge from unstructured data sources, such as emails and chat logs. The extracted knowledge could then be stored in the KMS and made available to users through search and other features.
Another example is the use of a KRS to develop a knowledge-based chatbot that can answer user questions and provide support. The chatbot could use the KMS to store and retrieve knowledge articles and other resources.
Overall, KMS and KRS are two complementary approaches to managing knowledge. KMS are good for collecting, organizing, storing, and sharing knowledge, while KRS are good for representing knowledge in a way that can be understood and processed by computers. By using KMS and KRS together, organizations can create a more powerful and sophisticated knowledge management system that can help them to achieve their business goals."
[{2023-10-30 retrieved} https://bard.google.com/chat/4122071e73b35c2]

name::
* McsEngl.klmrKms'relation-to-KRS,

Knlgmngr.RBS-Rule-Based-System

description::
"overview of rule-based-systems:
Rule-based systems are a type of artificial intelligence (AI) or knowledge-based system that use a set of explicitly defined rules to make decisions or solve problems. These systems are based on a formalized set of "if-then" rules, which are designed to mimic human expertise or domain-specific knowledge. Here is an overview of rule-based systems:

1. Components of Rule-Based Systems:
- Knowledge Base: This is where the rules and domain-specific knowledge are stored. The knowledge base contains a collection of facts, assertions, and rules that govern the behavior of the system.
- Inference Engine: The inference engine is responsible for reasoning and making decisions based on the rules and facts in the knowledge base. It uses various reasoning methods, such as forward chaining (data-driven) or backward chaining (goal-driven), to draw conclusions and make inferences.
- Working Memory: Also known as the working memory or working storage, this component holds temporary data, including the current state of the system, facts, and intermediate results during the reasoning process.

2. Rule Representation:
- Rules in a rule-based system are typically represented in the form of conditional statements. Each rule consists of an "if" part (antecedent) and a "then" part (consequent). The "if" part specifies conditions that must be satisfied for the rule to be applied, while the "then" part describes the actions or conclusions to be taken if the conditions are met.

3. Inference Process:
- When a rule-based system is presented with a problem or query, the inference engine evaluates the rules and facts in the knowledge base to make decisions or provide answers.
- The inference engine may use different modes of reasoning, such as forward chaining (starting with known facts and deriving conclusions) or backward chaining (starting with a goal and working backward to determine the necessary conditions).

4. Knowledge Acquisition and Maintenance:
- Rule-based systems require human experts to encode their knowledge into the system. Knowledge acquisition can be a complex and time-consuming process.
- Maintenance of the knowledge base is also important, as the rules and facts may need to be updated over time to reflect changes in the domain or to improve the system's performance.

5. Applications of Rule-Based Systems:
- Expert Systems: Rule-based systems are often used to build expert systems that mimic human experts in specific domains, such as medical diagnosis, financial analysis, and troubleshooting.
- Decision Support Systems: Rule-based systems can be employed in decision support applications, helping users make informed decisions based on a set of rules and facts.
- Business Rule Engines: In business and industry, rule-based systems are used for automating business processes, compliance checks, and data validation.
- Natural Language Processing: Rule-based systems can also be utilized in natural language processing for tasks like grammar checking and information extraction.

6. Advantages and Limitations:
- Advantages: Rule-based systems are transparent, easy to understand, and can represent domain-specific knowledge effectively. They are well-suited for expert systems and decision support applications.
- Limitations: These systems may struggle with complex or uncertain domains, require extensive manual knowledge encoding, and may not adapt well to dynamic environments without regular updates.

In summary, rule-based systems are a class of AI systems that rely on explicitly defined rules and domain-specific knowledge to make decisions and solve problems. They have been used in a variety of applications where expert knowledge needs to be automated and provide transparency in decision-making processes."
[{2023-11-02 retrieved} https://chat.openai.com/c/30b333f6-dce7-4edc-95ce-6e7160c6d598]
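
· a-minimal sketch of forward chaining (data-driven reasoning) over if-then rules, using the-same illustrative rule-format as the-backward-chaining sketch earlier; the-example-rules are invented:

```python
def forward_chain(facts, rules):
    """Apply rules to known facts until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if antecedents <= facts and consequent not in facts:
                facts.add(consequent)
                changed = True
    return facts

rules = [(frozenset({"has-fever", "has-rash"}), "suspect-infection"),
         (frozenset({"suspect-infection"}), "order-lab-test")]
print(forward_chain({"has-fever", "has-rash"}, rules))
# -> {'has-fever', 'has-rash', 'suspect-infection', 'order-lab-test'}
```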

name::
* McsEngl.RBS!=Rule-Based-System,
* McsEngl.Knlgmngr.028-RBS-Rule-Based-System,
* McsEngl.Knlgmngr.RBS-Rule-Based-System,
* McsEngl.klmrRule!=Rule-Based-System,

evoluting of klmrRule

description::
"evolution of rule-based-systems:
Rule-based systems (RBS) are a type of artificial intelligence (AI) system that uses a set of predefined rules to make decisions or solve problems. RBS were one of the earliest types of AI systems, and they are still widely used today in many different applications, such as medical diagnosis, financial trading, and fraud detection.

**Early rule-based systems**
The first rule-based systems were developed in the 1960s and 1970s. One of the most famous early RBS was ELIZA, a computer program that simulated conversation by using a set of rules to match user inputs to pre-defined responses. ELIZA was able to fool some people into thinking that they were talking to a real person, but it was still a very limited system.

**Evolution of rule-based systems**
Over the years, rule-based systems have become more sophisticated and powerful. RBS developers have developed new techniques for representing and reasoning about rules, and they have also developed tools for building and maintaining large rule bases. As a result, RBS are now used in a wide range of applications, including:
* Medical diagnosis: RBS are used to diagnose diseases and recommend treatments. For example, the MYCIN system was developed in the 1970s to diagnose and treat bacterial infections.
* Financial trading: RBS are used to make trading decisions and manage risk. For example, RBS are used to identify trading opportunities and to generate trading signals.
* Fraud detection: RBS are used to detect fraudulent transactions and other types of fraud. For example, RBS are used to identify credit card fraud and insurance fraud.

**Challenges of rule-based systems**
One of the main challenges of rule-based systems is that they can be difficult to develop and maintain. Rule bases can become very large and complex, and it can be difficult to ensure that the rules are complete, consistent, and accurate. Additionally, RBS are not able to adapt to changes in the environment, so they need to be updated regularly.

**Machine learning and rule-based systems**
In recent years, machine learning (ML) has become increasingly popular for developing AI systems. ML systems are able to learn from data and improve their performance over time without being explicitly programmed. This makes ML systems well-suited for tasks where the environment is constantly changing, such as fraud detection and financial trading.
However, rule-based systems still have a number of advantages over ML systems. Rule-based systems are more transparent and explainable than ML systems, and they are also more reliable in certain applications. As a result, rule-based systems and ML systems are often used together to develop AI systems that combine the strengths of both approaches.

**Future of rule-based systems**
Rule-based systems are likely to continue to play an important role in AI for many years to come. RBS are particularly well-suited for applications where transparency, explainability, and reliability are important. Additionally, rule-based systems can be used to complement ML systems to develop more powerful and robust AI systems."
[{2023-11-02 retrieved} https://bard.google.com/chat/fa627c3561f7617c]

name::
* McsEngl.evoluting-of-klmrRule,
* McsEngl.klmrRule'evoluting,

Knlgmngr.McsHitp-{2017-2023..} (link)

Knlgmngr.ReVerb-{2016-2020..}

description::
"ReVerb is a program that automatically identifies and extracts binary relationships from English sentences. ReVerb is designed for Web-scale information extraction, where the target relations cannot be specified in advance and speed is important."
[{2023-10-28 retrieved} http://reverb.cs.washington.edu/]
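
· ReVerb comes from the-KnowItAll project (repository below); the-following toy sketch only illustrates the-idea of pulling (argument1, relation-phrase, argument2) triples out of a-sentence, and is not ReVerb's actual part-of-speech--based algorithm:
```
import re

# toy binary-relation extractor (illustrative only; ReVerb itself matches
# part-of-speech patterns over verb phrases, not a fixed relation list).
PATTERN = re.compile(
    r"^(?P<arg1>[A-Z][\w ]*?)\s+"
    r"(?P<rel>is a|is part of|was born in|invented)\s+"
    r"(?P<arg2>[\w ]+?)\.?$"
)

def extract(sentence):
    m = PATTERN.match(sentence)
    return (m["arg1"], m["rel"], m["arg2"]) if m else None

print(extract("ReVerb is a program."))            # ('ReVerb', 'is a', 'program')
print(extract("Edison invented the phonograph."))
# ('Edison', 'invented', 'the phonograph')
```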

name::
* McsEngl.ReVerb-Knlgmngr,
* McsEngl.Knlgmngr.011-ReVerb,
* McsEngl.Knlgmngr.ReVerb,
* McsEngl.klmrReverb!=ReVerb--knowledge-manager,

addressWpg::
* https://github.com/knowitall/reverb,

evoluting of klmrReverb

description::
"evolution of ReVerb a program that automatically identifies and extracts binary relationships from English sentences:
ReVerb was first introduced in 2016 by a team of researchers at Google AI. It was a significant advance in the field of natural language processing, as it was the first program that could automatically identify and extract binary relationships from English sentences with high accuracy.

ReVerb's evolution can be divided into two main phases:
**Phase 1 (2016-2020)**: In this phase, ReVerb was trained on a large dataset of human-labeled sentences. The training data included a variety of binary relationships, such as "is-a", "part-of", and "cause-effect". ReVerb learned to identify these relationships by looking for patterns in the words and phrases used in the sentences.

**Phase 2 (2020-present)**: In this phase, ReVerb was further improved by using self-supervised learning. This means that ReVerb was trained on a large dataset of unlabeled sentences, and learned to identify binary relationships by looking for patterns in the data itself.
As a result of this evolution, ReVerb is now able to identify and extract binary relationships from English sentences with even greater accuracy than before. It is also able to identify a wider range of binary relationships, including more complex and nuanced relationships.

ReVerb has a number of potential applications, including:
* **Machine translation:** ReVerb can be used to improve the accuracy of machine translation systems by helping them to understand the relationships between words and phrases in the source and target languages.
* **Question answering:** ReVerb can be used to improve the accuracy of question answering systems by helping them to understand the relationships between the entities and concepts mentioned in the questions and answers.
* **Text summarization:** ReVerb can be used to improve the quality of text summarization systems by helping them to identify the most important binary relationships in the text.
* **Knowledge extraction:** ReVerb can be used to extract knowledge from text, such as the relationships between different entities and concepts. This knowledge can then be used to build knowledge bases and power other applications, such as search engines and virtual assistants.

Overall, ReVerb is a powerful tool that can be used to improve the performance of a variety of natural language processing applications. It is still under development, but it has already made a significant impact on the field."
[{2023-10-30 retrieved} https://bard.google.com/chat/57f994cea705de90]

name::
* McsEngl.evoluting-of-klmrReverb,
* McsEngl.klmrReverb'evoluting,

Knlgmngr.ROSS-{2015..2020}

description::
"overview of ROSS digital legal expert:
ROSS Intelligence, Inc. (ROSS) is a legal technology company that has developed a digital legal expert platform. ROSS is designed to help lawyers and legal professionals find and analyze relevant legal information more quickly and efficiently.

ROSS uses artificial intelligence (AI) to understand and analyze legal documents, including statutes, case law, and regulations. ROSS can also identify relevant legal information from a variety of other sources, such as news articles, blog posts, and social media posts.

Once ROSS has identified relevant legal information, it can provide users with a variety of tools to help them analyze the information, including:
* **Legal summarization:** ROSS can provide summaries of complex legal cases and statutes, making it easier for users to understand the key points of the law.
* **Legal reasoning:** ROSS can reason about legal information and identify potential legal issues.
* **Legal research assistance:** ROSS can help users find relevant legal information for their specific cases or research projects.

ROSS is still under development, but it has the potential to revolutionize the way that lawyers and legal professionals work. ROSS can help lawyers to save time and money, and it can also help them to provide better legal services to their clients.

Here are some of the benefits of using ROSS:
* **Increased productivity:** ROSS can help lawyers to save time and money by automating the research and analysis of legal information.
* **Improved accuracy:** ROSS can help lawyers to avoid errors by providing them with accurate and up-to-date legal information.
* **Enhanced decision-making:** ROSS can help lawyers to make better decisions by providing them with insights into the legal issues that are relevant to their cases.
* **Improved client service:** ROSS can help lawyers to provide better client service by allowing them to focus on more strategic tasks, such as developing case strategies and negotiating with opposing counsel.

ROSS is a powerful tool that has the potential to revolutionize the way that lawyers and legal professionals work. It is still under development, but it is already being used by lawyers and legal professionals in a variety of industries, including corporate law, litigation, and intellectual property law."
[{2023-10-28 retrieved} https://bard.google.com/chat/eae955702a49c781]

name::
* McsEngl.ROSS-Knlgmngr,
* McsEngl.Knlgmngr.012-ROSS,
* McsEngl.Knlgmngr.ROSS,
* McsEngl.klmrRoss!=ROSS--knowledge-manager,

evoluting of klmrRoss

description::
"evolution of ROSS uses artificial intelligence (AI) to understand and analyze legal documents:
ROSS has evolved significantly since its launch in 2015. Initially, ROSS was focused on helping lawyers with legal research. It could identify relevant cases, statutes, and regulations based on a user's query. However, over time, ROSS has become more sophisticated and can now perform a wider range of tasks, including:
* **Contract review:** ROSS can automatically review contracts and identify potential risks and issues.
* **Due diligence:** ROSS can help lawyers with due diligence by identifying relevant legal documents and extracting key information.
* **Compliance:** ROSS can help lawyers ensure that their clients are in compliance with applicable laws and regulations.
* **Litigation support:** ROSS can help lawyers with litigation support by identifying relevant case law and evidence.

ROSS's evolution has been driven by a number of factors, including advances in AI technology, the availability of more data, and feedback from users. For example, ROSS now uses natural language processing (NLP) to better understand the meaning of legal documents. This has enabled ROSS to identify more complex relationships between different parts of a document and to provide more nuanced and informative insights.

ROSS also now has access to a much larger dataset of legal documents than it did in the early days. This dataset includes not only cases, statutes, and regulations, but also contracts, due diligence reports, and other legal documents. This larger dataset has allowed ROSS to train its AI models more effectively and to improve its accuracy.

Finally, ROSS has benefited from feedback from users. ROSS users have provided valuable feedback on how the platform can be improved. This feedback has helped ROSS to develop new features and improve the overall user experience.

Overall, ROSS has evolved from a simple legal research tool into a powerful AI platform that can help lawyers with a wide range of tasks. ROSS is still under development, but it has already made a significant impact on the legal profession.

Here are some specific examples of how ROSS has evolved over time:
* In 2016, ROSS added the ability to answer natural language questions about legal topics.
* In 2017, ROSS added the ability to generate legal documents, such as contracts and letters.
* In 2018, ROSS added the ability to analyze and extract key information from legal documents.
* In 2019, ROSS added the ability to identify and assess risks in legal documents.
* In 2020, ROSS added the ability to collaborate with lawyers on legal tasks.

ROSS is now used by lawyers at law firms of all sizes, as well as by in-house lawyers at corporations and government agencies. ROSS is also used by law students and professors at law schools.

ROSS is a rapidly evolving platform, and new features are being added all the time. It is exciting to see how ROSS will continue to evolve in the future and how it will continue to transform the legal profession."
[{2023-10-30 retrieved} https://bard.google.com/chat/57f994cea705de90]

name::
* McsEngl.evoluting-of-klmrRoss,
* McsEngl.klmrRoss'evoluting,

Knlgmngr.IBM-Watson-{2011-2015..}

description::
"overview of IBM Watson:
IBM Watson is a powerful artificial intelligence (AI) system developed by IBM. It is designed to analyze and interpret large volumes of data, including unstructured data, to provide insights, make predictions, and assist with decision-making. Here is an overview of IBM Watson:

1. **Cognitive Computing**: IBM Watson is often referred to as a cognitive computing system. Unlike traditional computers that follow explicit instructions, Watson can understand, reason, and learn from data. It can process both structured and unstructured data, such as text, images, and videos.

2. **Natural Language Processing (NLP)**: Watson is proficient in natural language processing, enabling it to understand and generate human language. It can interact with users in a conversational manner, making it suitable for applications like chatbots and virtual assistants.

3. **Machine Learning**: Watson incorporates machine learning techniques to analyze and find patterns in data. It can adapt and improve its performance over time by learning from new information and user interactions.

4. **Deep Learning**: Watson uses deep learning neural networks to process complex data, such as images and videos. This allows it to recognize objects, faces, and even emotions in multimedia content.

5. **Analytics**: Watson offers advanced data analytics capabilities. It can help organizations discover trends, correlations, and insights within their data, aiding in decision-making and problem-solving.

6. **Industries and Applications**: IBM Watson has been applied to various industries, including healthcare, finance, retail, and more. It can assist in medical diagnosis, financial risk assessment, personalized marketing, and numerous other use cases.

7. **Watson Services**: IBM provides a suite of cloud-based services under the Watson brand, such as Watson Assistant (for building chatbots and virtual agents), Watson Discovery (for content analysis and retrieval), and Watson Studio (for data science and machine learning).

8. **AI for Business**: Watson is often used by businesses to gain a competitive edge by harnessing AI and data-driven insights. It helps in automating tasks, enhancing customer experiences, and optimizing operations.

9. **Watson APIs**: Developers can access Watson's capabilities through APIs (Application Programming Interfaces) to integrate its functionalities into their own applications and services. These APIs cover various AI domains, including language understanding, speech recognition, and computer vision.

10. **Jeopardy! Challenge**: Watson gained significant public attention when it competed and won the Jeopardy! quiz show in 2011. This event showcased Watson's ability to understand and respond to natural language questions and marked a milestone in AI development.

11. **Ethical Considerations**: The use of AI, including Watson, raises ethical and privacy concerns. IBM has been actively working on ensuring that AI systems are developed and used responsibly.

12. **Ongoing Development**: IBM continues to invest in and develop the Watson platform, expanding its capabilities and offerings to meet the evolving needs of businesses and industries.

IBM Watson is a prominent player in the field of artificial intelligence, and it has been instrumental in advancing the use of AI in various sectors, providing tools and services that help organizations make better use of their data and improve decision-making processes."
[{2023-10-27 retrieved} https://chat.openai.com/c/5dbc478f-f9fc-4a94-be58-a5d42578d764]

name::
* McsEngl.IBM-Watson--Knlgmngr,
* McsEngl.Knlgmngr.009-IBM-Watson,
* McsEngl.Knlgmngr.IBM-Watson,
* McsEngl.klmrIbmwatson!=IBM-Watson--knowledge-manager,

evoluting of klmrIbmwatson

description::
"evolution of IBM-Watson:
IBM Watson is a cognitive computing system that leverages artificial intelligence (AI) and machine learning to analyze large volumes of data, understand natural language, and provide insights to assist with decision-making. Its evolution is a testament to the ongoing development of AI technologies. Here's an overview of the key stages in the evolution of IBM Watson:

1. Jeopardy! Victory (2011): Watson first gained widespread recognition when it competed on the quiz show Jeopardy! and defeated two human champions, Ken Jennings and Brad Rutter. This showcased the system's ability to process and understand natural language questions and provide accurate answers.

2. Commercialization (2013): After the Jeopardy! victory, IBM began commercializing Watson's technology. It focused on various industries, such as healthcare, finance, and customer service, to develop applications that could benefit from Watson's cognitive capabilities.

3. Watson Health (2015): IBM Watson's health division was established to leverage AI and data analytics for medical research and healthcare. Watson Health aimed to improve patient care, clinical research, and drug discovery by analyzing medical data and providing insights to healthcare professionals.

4. Expansion into Various Industries: IBM Watson continued to expand its presence in different sectors, including finance, retail, and manufacturing. It provided solutions for data analysis, predictive analytics, and decision support across these domains.

5. Cloud-based Services (2015): IBM introduced Watson as a cloud-based platform, making it more accessible to a wider range of businesses and developers. This allowed companies to integrate Watson's cognitive capabilities into their own applications and services.

6. Partnerships and Ecosystem (2016 - Present): IBM formed partnerships with various companies and organizations to expand the Watson ecosystem. These collaborations enabled Watson to be integrated into more solutions and industries, including automotive, customer service, and Internet of Things (IoT).

7. Natural Language Understanding (NLU): Over time, Watson's natural language understanding capabilities have improved, making it better at processing and interpreting human language. This has been crucial in expanding its use cases.

8. Deep Learning and AI Advancements: As the field of AI evolved, Watson incorporated advanced deep learning techniques and neural networks, enhancing its ability to handle unstructured data, such as images and text, and perform complex tasks like sentiment analysis and language translation.

9. Continuous Improvement: IBM continues to invest in research and development to enhance Watson's capabilities. This includes improving its machine learning models, expanding its knowledge base, and refining its ability to understand and generate human-like responses.

10. Ethical Considerations and Responsible AI: In recent years, there has been a growing emphasis on the ethical and responsible use of AI. IBM has been proactive in addressing these concerns and promoting responsible AI practices in the development and deployment of Watson.

The evolution of IBM Watson reflects the broader evolution of AI and its application across various industries. As AI technologies continue to advance, Watson is likely to remain at the forefront of AI-powered solutions, helping organizations make data-driven decisions and solve complex problems."
[{2023-10-30 retrieved} https://chat.openai.com/c/d97ec7c8-bdbf-4fd0-9769-2956ab590c27]
===
"evolution of IBM-Watson:
IBM Watson is a cognitive computing platform that uses natural language processing and machine learning to understand and respond to human language. It was first introduced in 2011 and has since evolved to become one of the most powerful AI platforms in the world.

**Watson's early years**
Watson's early development was focused on natural language processing and question answering. It was trained on a massive dataset of text and code, including the full text of Wikipedia, the Bible, and other works of literature. This training allowed Watson to develop a deep understanding of human language and to answer questions in a comprehensive and informative way.
In 2011, Watson competed on the American quiz show Jeopardy!, defeating two of the show's all-time champions. This victory was a major milestone in the development of AI and helped to raise public awareness of Watson's capabilities.

**Watson's evolution**
Since its Jeopardy! victory, Watson has evolved to become a more versatile and powerful AI platform. It can now perform a wide range of tasks, including:
* Natural language processing: Watson can understand and respond to human language in a variety of ways, including text, speech, and images.
* Machine learning: Watson can learn from data and improve its performance over time.
* Reasoning: Watson can use its knowledge and understanding to reason and make decisions.
* Recommendation: Watson can recommend products, services, and other content to users based on their individual preferences.

**Watson today**
Today, Watson is used by businesses and organizations of all sizes in a variety of industries. Some of the most common use cases for Watson include:
* Customer service: Watson can be used to automate customer inquiries and provide personalized support.
* Sales and marketing: Watson can be used to generate leads, qualify prospects, and personalize marketing campaigns.
* Product development: Watson can be used to analyze customer feedback, identify market trends, and develop new products.
* Healthcare: Watson can be used to diagnose diseases, recommend treatments, and support clinical research.
* Financial services: Watson can be used to detect fraud, manage risk, and provide financial advice.

**Watson's future**
IBM continues to invest in Watson and to develop new capabilities for the platform. Some of the areas that IBM is focusing on include:
* Generative AI: Watson is being used to develop new generative AI models that can create new content, such as text, code, and images.
* Explainable AI: IBM is working to make Watson more explainable, so that users can understand how it makes decisions and how it is using their data.
* Responsible AI: IBM is committed to developing and using Watson in a responsible and ethical way.

As AI continues to evolve, Watson is poised to play an increasingly important role in our lives. It is a powerful platform that can be used to solve complex problems and to create new opportunities."
[{2023-10-30 retrieved} https://bard.google.com/chat/def7b1a45dc3d641]

name::
* McsEngl.evoluting-of-klmrIbmwatson,
* McsEngl.klmrIbmwatson'evoluting,

Knlgmngr.ErgoAI-{2011..}

description::
"evolution of ErgoAI:
ErgoAI is an enterprise-level knowledge representation and reasoning system developed by Coherent Knowledge Systems. It is a dialect of F-logic with numerous extensions, including meta-programming in the style of HiLog, logical updates in the style of Transaction Logic, and defeasible reasoning.

ErgoAI evolved from the open-source Flora-2 system, which was developed by several members of the Coherent team. ErgoAI adds a number of enterprise features to Flora-2, including:
* Support for Rulelog, a more expressive logical language for semantic rules.
* An integrated development environment (IDE) for developing applications with ErgoAI.
* Support for virtualization and enterprise deployment.
* A Java interface for enterprise-grade integration.
* Python support.

ErgoAI is used in a variety of applications, including:
* Intelligent agents
* Semantic Web
* Knowledge-based networking
* Ontology management
* Integration of information
* Security policy analysis

**Evolution of ErgoAI**
ErgoAI was first released in 2011. Since then, it has undergone a number of updates and improvements. Some of the key milestones in the evolution of ErgoAI include:
* **2011:** ErgoAI 1.0 is released.
* **2012:** ErgoAI 2.0 is released, with support for Rulelog and the ErgoAI Studio IDE.
* **2014:** ErgoAI 3.0 is released, with support for virtualization and enterprise deployment.
* **2016:** ErgoAI 4.0 is released, with a Java interface and Python support.
* **2018:** ErgoAI 5.0 is released, with improved performance, scalability, and usability.
* **2020:** ErgoAI 6.0 is released, with new features for knowledge modeling, reasoning, and debugging.
* **2022:** ErgoAI 7.0 is released, with new features for knowledge sharing and collaboration.

**Future of ErgoAI**
The Coherent Knowledge team is actively developing ErgoAI. Some of the areas where they are focusing their efforts include:
* Improving the performance and scalability of ErgoAI.
* Adding new features for knowledge modeling, reasoning, and debugging.
* Making ErgoAI more accessible to a wider range of users.

Overall, ErgoAI is a powerful and versatile knowledge representation and reasoning system. It is used in a variety of applications, and it is actively being developed and improved.
Sources
flora.sourceforge.net/florahome.html
github.com/ErgoAI
[{2023-11-04 retrieved} https://bard.google.com/chat/f43236bad62515e8?hl=en&pli=1]

name::
* McsEngl.ErgoAI-Knlgmngr!⇒klmrErgoai,
* McsEngl.Knlgmngr.031-ErgoAI!⇒klmrErgoai,
* McsEngl.Knlgmngr.ErgoAI!⇒klmrErgoai,
* McsEngl.klmrErgoai!=ErgoAI--knowledge-language-manager,

info-resource of klmrErgoai

description::
* https://github.com/ErgoAI,
* flora.sourceforge.net/florahome.html,
* http://coherentknowledge.com/,


name::
* McsEngl.klmrErgoai'Infrsc,

Knlgmngr.AAj-{2010} (link)

Knlgmngr.ConceptNet-{2001-2019..}

description::
"overview of ConceptNet:
ConceptNet is a large-scale knowledge graph and natural language processing resource that aims to represent general human knowledge in a machine-readable format. It provides a wide range of structured data and semantic relationships that can be used to enhance various AI applications, including natural language understanding, text analysis, and reasoning. Here's an overview of ConceptNet:

1. Knowledge Graph Structure: ConceptNet is structured as a graph, where nodes represent concepts, and edges represent relationships between these concepts. Concepts can be words, phrases, or entities, and relationships can include various semantic connections.

2. Multilingual: ConceptNet is a multilingual resource, meaning it contains data in multiple languages. This makes it useful for natural language processing tasks in a global context.

3. Semantic Relations: The knowledge in ConceptNet is organized based on a wide range of semantic relationships, including "IsA" (is-a relationship), "PartOf" (part-of relationship), "HasA" (has-a relationship), "UsedFor" (used-for relationship), and many others. These relationships help capture the semantics and associations between different concepts.

4. Crowdsourced Data: ConceptNet relies on crowdsourcing for data collection. It gathers information from various sources, including contributors who annotate and validate relationships between concepts.

5. Open Source: ConceptNet is an open-source project, and its data is freely available for use in research and applications. This open nature has led to its widespread adoption and contribution from the research and development community.

6. Extensibility: Users can extend ConceptNet by adding new concepts, relationships, or languages. This allows for continuous improvement and adaptation to specific use cases.

7. Applications: ConceptNet is used in a variety of applications, including natural language understanding, sentiment analysis, chatbots, question-answering systems, and knowledge-based AI. It can help machines understand the meaning of words in context and make inferences based on the relationships between concepts.

8. Challenges: While ConceptNet is a valuable resource, it may contain errors or biases due to its crowdsourced nature. Researchers and developers using ConceptNet need to be aware of these limitations and carefully validate the data for their specific applications.

Overall, ConceptNet plays a crucial role in advancing natural language understanding and knowledge representation in AI systems. It enables machines to tap into a vast reservoir of human knowledge and make sense of the world through language."
[{2023-10-27 retrieved} https://chat.openai.com/c/19272887-1516-4edf-9978-775d8c784ba5]
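
· a-toy sketch of the-graph structure described above (concept nodes, typed edges, and a-transitive "IsA" query); the-triples are hypothetical, not real ConceptNet data:
```
# toy ConceptNet-style edge list (hypothetical triples, not real data;
# the real graph is distributed as dumps and through a public API).
EDGES = [
    ("robin", "IsA", "bird"),
    ("bird", "IsA", "animal"),
    ("wing", "PartOf", "bird"),
    ("bird", "CapableOf", "fly"),
]

def related(start, relation, edges):
    """follow one relation transitively from a start concept."""
    found, frontier = set(), {start}
    while frontier:
        frontier = {t for (h, r, t) in edges
                    if r == relation and h in frontier and t not in found}
        found |= frontier
    return found

print(related("robin", "IsA", EDGES))  # -> {'bird', 'animal'}
```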

name::
* McsEngl.ConceptNet-Knlgmngr,
* McsEngl.Knlgmngr.008-ConceptNet,
* McsEngl.Knlgmngr.ConceptNet,
* McsEngl.klmrConceptnet!=ConceptNet--knowledge-manager,

evoluting of klmrConceptnet

description::
"evolution of conceptNet:
ConceptNet is a freely available commonsense knowledge base, designed to help computers understand the meanings of words that people use. It is a semantic network, which means that it represents concepts and relationships between them. ConceptNet is a large-scale knowledge base, with over 3 million concepts and 10 million relationships.

ConceptNet was originally developed at the MIT Media Lab, and was first released in 2001. It was initially created using crowdsourcing, with users contributing knowledge about concepts and relationships. ConceptNet has since been expanded and improved using a variety of techniques, including natural language processing, machine learning, and expert curation.

ConceptNet has evolved over time in a number of ways. One of the most significant changes has been the shift from a crowdsourced knowledge base to a more curated approach. This is because it became clear that crowdsourcing alone was not sufficient to produce a high-quality knowledge base. ConceptNet now uses a variety of techniques to curate and validate knowledge, including expert curation and automated quality control.

Another major change in ConceptNet has been the expansion of its coverage. ConceptNet now covers a wide range of topics, including everyday commonsense, as well as more specialized domains such as science and law. This expansion has been made possible through the use of a variety of knowledge sources, including text corpora, databases, and expert-created resources.

Finally, ConceptNet has also evolved in terms of its representation. The original version of ConceptNet used a simple semantic network representation. However, this representation was not sufficient to capture the full complexity of commonsense knowledge. ConceptNet now uses a more sophisticated representation that includes a variety of different types of edges and relations.

Here is a summary of the key evolutionary steps of ConceptNet:
* 2001: Initial release of ConceptNet, based on crowdsourced knowledge.
* 2006: Release of ConceptNet 2.0, with a new weighting system and improved coverage.
* 2010: Release of ConceptNet 3.0, with a new representation that includes different types of edges and relations.
* 2013: Release of ConceptNet 4.0, with a new knowledge acquisition pipeline and improved coverage of specialized domains.
* 2019: Release of ConceptNet 5.0, with a new knowledge representation and improved performance on commonsense reasoning tasks.

ConceptNet is a valuable resource for a variety of AI applications, including natural language processing, machine translation, and question answering. It is also used in a variety of research projects, including commonsense reasoning, machine learning, and cognitive science.

ConceptNet is still under active development, and the team is working on a number of improvements, including expanding coverage, improving the quality of knowledge, and developing new ways to use ConceptNet for AI applications."
[{2023-10-30 retrieved} https://bard.google.com/chat/297cfd8b10e1a04b]

name::
* McsEngl.evoluting-of-klmrConceptnet,
* McsEngl.klmrConceptnet'evoluting,

Knlgmngr.FrameNet-{2000-2023..}

description::
"overview of FrameNet frame-based-system:
FrameNet is a lexical resource and knowledge representation system that's designed to capture the meaning of words and phrases in the context of natural language. It is based on the idea of frames and frame semantics, a linguistic theory developed by Charles J. Fillmore. FrameNet provides a detailed and structured way to represent the meaning of words by defining them in terms of frames, frame elements, and lexical units.

Here's an overview of FrameNet and its frame-based system:

1. Frames: Frames in FrameNet are abstract representations of typical events, situations, or scenarios. Each frame is a structured conceptual framework that describes a specific type of situation and includes information about the roles and elements involved in that situation. For example, there is a "Buying" frame that describes the elements and roles in the process of purchasing something.

2. Frame Elements: Within each frame, there are frame elements, which are semantic roles that represent different participants or aspects of the frame. Frame elements are associated with specific words and phrases in the language. For instance, in the "Buying" frame, frame elements might include "Buyer," "Seller," "Goods," and "Price."

3. Lexical Units: Lexical units (LUs) are specific words or phrases in a language that are linked to frame elements within a frame. Each LU is associated with a frame and one or more frame elements. LUs capture the different ways in which a word or phrase can be used in a specific frame. For example, the word "purchase" could be a lexical unit linked to the "Buying" frame with the frame element "Goods."

4. Annotations: FrameNet includes extensive linguistic annotations of example sentences from a variety of texts. These annotations provide detailed information about how words and phrases are used in context, including their frame and frame element associations. This corpus of annotated sentences serves as the foundation for understanding word meanings and usage.

5. Applications: FrameNet has various applications in natural language processing and computational linguistics. It can be used for tasks like semantic role labeling, word sense disambiguation, and information extraction. FrameNet can also be valuable in natural language understanding, as it helps capture the meaning of words and how they are used in different contexts.

6. FrameNet Databases: FrameNet has been developed for several languages, with the most well-known being FrameNet for English. Other languages, such as Spanish and Russian, have also seen the development of FrameNet databases, each tailored to the specific characteristics of the language.

Overall, FrameNet is a valuable resource for understanding and representing the meaning of words and phrases in natural language. It provides a rich and structured framework for capturing the nuances of word usage in different contexts, making it a valuable tool for various natural language processing and linguistic applications."
[{2023-11-02 retrieved} https://chat.openai.com/c/014196e0-3d45-4cce-8e64-cfd39d0d2b56]

name::
* McsEngl.FrameNet-Knlgmngr,
* McsEngl.Knlgmngr.026-FrameNet,
* McsEngl.Knlgmngr.FrameNet,
* McsEngl.klmrFramenet!=FrameNet--knowledge-language-manager,

descriptionLong::
"What is FrameNet?
FrameNet is based on a theory of meaning called Frame Semantics, deriving from the work of Charles J. Fillmore and colleagues (Fillmore 1976, 1977, 1982, 1985, Fillmore and Baker 2001, 2010). The basic idea is straightforward: that the meanings of most words can best be understood on the basis of a semantic frame, a description of a type of event, relation, or entity and the participants in it. For example, the concept of cooking typically involves a person doing the cooking (Cook), the food that is to be cooked (Food), something to hold the food while cooking (Container) and a source of heat (Heating_instrument). In the FrameNet project, this is represented as a frame called Apply_heat, and the Cook, Food, Heating_instrument and Container are called frame elements (FEs) . Words that evoke this frame, such as fry, bake, boil, and broil, are called lexical units (LUs) of the Apply_heat frame. Other frames are more complex, such as Revenge, which involves more FEs (Offender, Injury, Injured_Party, Avenger, and Punishment) and others are simpler, such as Placing, with only an Agent (or Cause), a thing that is placed (called a Theme) and the location in which it is placed (Goal). The job of FrameNet is to define the frames and to annotate sentences to show how the FEs fit syntactically around the word that evokes the frame, as in the following examples of Apply_heat and Revenge:
... [Cook the boys] ... GRILL [Food their catches] [Heating_instrument on an open fire].
[Avenger I] 'll GET EVEN [Offender with you] [Injury for this]!

In the simplest case, the frame-evoking word is a verb and the FEs are its syntactic dependents, as in the example above where boys is the subject of the verb grill, their catches is the direct object, and on an open fire is a prepositional phrase modifying grill, but LUs can also be event nouns such as retaliation, also in the Revenge frame:
[Punishment This attack was conducted] [Support in] RETALIATION [Injury for the U.S. bombing raid on Tripoli... ]
or adjectives such as asleep in the Sleep frame:
[Sleeper They] [Copula were] ASLEEP [Duration for hours]

The lexical entry for each LU is derived from such annotations, and specifies the ways in which FEs are realized in syntactic structures headed by the word.

Many common nouns, such as tree, hat or tower, usually serve as dependents which head FEs, rather than clearly evoking their own frames, so we have devoted less effort to annotating them, since information about them is available from other lexicons, such as WordNet (Miller et al. 1990). We do, however, recognize that such nouns also have a minimal frame structure of their own, and in fact, the FrameNet database contains slightly more nouns than verbs.

Formally, FrameNet annotations are sets of triples that represent the FE realizations for each annotated sentence, each consisting of a frame element name (for example, Food), a grammatical function (say, Object) and a phrase type (say, noun phrase (NP)). We can think of these three types of annotation on each FE as "layers", but the grammatical function and phrase-type layers are not displayed in the web-based report system, to avoid visual clutter. The downloadable XML version of the data includes these three layers (and several more not discussed here) for all of the annotated sentences, along with complete frame and FE descriptions, frame-frame relations, and lexical entries for each annotated LU. Most of the annotations are of separate sentences annotated for only one LU, but there are also a collection of texts in which all the frame-evoking words have been annotated; the overlapping frames provide a rich representation of much of the meaning of the entire text. The FrameNet team have defined more than 1,000 semantic frames and have linked them together by a system of frame relations, which relate more general frames to more specific ones and provide a basis for reasoning about events and intentional actions.

Because the frames are basically semantic, they are often similar across languages; for example, frames about buying and selling involve the FEs Buyer, Seller, Goods, and Money, regardless of the language in which they are expressed. Several projects are underway to build FrameNets parallel to the English FrameNet project for languages around the world, including Spanish, German, Chinese, and Japanese, and frame semantic analysis and annotation has been carried out in specialized areas from legal terminology to soccer to tourism."
[{2023-11-02 retrieved} https://berkeleyfn.framenetbr.ufjf.br/WhatIsFrameNet]
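
· a-minimal sketch of the-frame / frame-element / lexical-unit structure described above, using the-Apply_heat example; the-grammatical-function labels are simplified placeholders, not FrameNet's real label set:
```
from dataclasses import dataclass

# minimal sketch of FrameNet-style structures (illustrative; the real
# database is far richer and is distributed as XML).
@dataclass
class Frame:
    name: str
    frame_elements: list   # FE names
    lexical_units: list    # words that evoke the frame

APPLY_HEAT = Frame(
    name="Apply_heat",
    frame_elements=["Cook", "Food", "Container", "Heating_instrument"],
    lexical_units=["fry", "bake", "boil", "broil", "grill"],
)

# one annotated sentence; FrameNet's annotation "layers" per FE are
# (frame element, grammatical function, phrase type), here plus the span.
annotation = [
    ("Cook", "Subject", "NP", "the boys"),
    ("Food", "Object", "NP", "their catches"),
    ("Heating_instrument", "Dependent", "PP", "on an open fire"),
]

for fe, gf, pt, span in annotation:
    print(f"{fe:20} {gf:10} {pt:3} {span}")
```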

evaluation of klmrFramenet

description::
· every sentence denotes a-process or relation.
· FrameNet finds these processes|relations from the-syntax in sentences.
· this site classifies thousands of processes|relations and presents their syntax in different languages.

name::
* McsEngl.klmrFramenet'evaluation,

info-resource of klmrFramenet

description::
* https://berkeleyfn.framenetbr.ufjf.br/,
* https://framenet.icsi.berkeley.edu/ (old),

name::
* McsEngl.klmrFramenet'Infrsc,

evoluting of klmrFramenet

description::
"evolution of FrameNet:
The evolution of FrameNet is a testament to its ongoing development and its increasing importance in the field of natural language processing and linguistics. Here is an overview of the evolution of FrameNet:

1. Initial Development (late 1990s):
- FrameNet was initiated by Charles J. Fillmore, a prominent linguist, and his colleagues in the late 1990s. The project aimed to create a comprehensive lexical resource that captured the meaning of words in terms of frames, frame elements, and their interactions.

2. FrameNet for English (2000):
- The first version of FrameNet was developed for English and released in 2000. It contained a significant number of frames and lexical units with annotated examples from a diverse range of texts.

3. Expansion and Data Collection:
- FrameNet continued to expand by adding more frames and lexical units and by collecting annotated sentences to cover a wider range of English language usage. This data collection process involved extensive linguistic analysis and annotation efforts.

4. FrameNet for Other Languages:
- The success of FrameNet for English led to the development of FrameNet databases for other languages, including Spanish, Russian, and more. These language-specific FrameNet projects adapted the frame-based approach to capture the semantics of words in different linguistic contexts.

5. FrameNet Usage in NLP:
- FrameNet has found numerous applications in natural language processing (NLP) and computational linguistics. It has been used in tasks such as semantic role labeling, word sense disambiguation, information extraction, and text summarization, contributing to the advancement of NLP research and applications.

6. FrameNet in Machine Learning:
- FrameNet has also influenced the development of machine learning models, such as those used for semantic role labeling and sentiment analysis. The structured frame-based representation of language helps improve the accuracy of such models.

7. Community Involvement:
- Over the years, the FrameNet project has engaged the linguistics and NLP communities. Researchers and linguists have contributed to the development of FrameNet by suggesting new frames, frame elements, and providing annotated examples.

8. Online Accessibility:
- FrameNet data has become increasingly accessible online, making it a valuable resource for researchers, linguists, and NLP practitioners. This accessibility has contributed to the widespread adoption and use of FrameNet in various research and industrial applications.

9. Ongoing Development:
- FrameNet continues to evolve and adapt to changing linguistic trends and the growing body of linguistic knowledge. The ongoing expansion of frame and frame element coverage helps ensure that FrameNet remains a comprehensive resource for understanding word meaning in context.

10. Integration with Modern NLP:
- FrameNet and its frame-based approach have become integrated into modern NLP models and techniques, helping improve the understanding of word semantics and the performance of NLP applications.

The evolution of FrameNet reflects its significant impact on the fields of linguistics and natural language processing, as it continues to provide valuable insights into the meaning of words and phrases in natural language and remains a foundational resource for research and application development."
[{2023-11-02 retrieved} https://chat.openai.com/c/014196e0-3d45-4cce-8e64-cfd39d0d2b56]

name::
* McsEngl.evoluting-of-klmrFramenet,
* McsEngl.klmrFramenet'evoluting,

Knlgmngr.CLASSIC-{1990s}

description::
"overview of classic knowledge-representation-system:
The CLASSIC knowledge representation system was developed at AT&T Bell Laboratories in the late 1980s and early 1990s. It is based on a formal logic known as Description Logics, which allows for the representation of concepts and individuals in a structured and expressive way.

CLASSIC knowledge bases are composed of two main components:
* **Concepts:** Concepts represent classes of objects or events. They are defined using a set of primitive constructors, such as conjunction, disjunction, and negation.
* **Individuals:** Individuals are instances of concepts. They are represented by names, and can have attributes and relationships to other individuals.

CLASSIC also supports a number of features that make it a powerful knowledge representation system, including:
* **Subsumption:** Subsumption is a relationship between two concepts that indicates that one concept is more general than the other. For example, the concept "Bird" subsumes the concept "Robin".
* **Classification:** Classification is the process of organizing concepts into a hierarchy based on subsumption. This hierarchy can be used to reason about the properties of individuals.
* **Reasoning:** CLASSIC supports a variety of reasoning tasks, such as determining whether a given individual is an instance of a given concept, and finding all of the concepts that subsume a given concept.

CLASSIC has been used to develop a wide range of applications, including expert systems, natural language processing systems, and software engineering tools. It was one of the first knowledge representation systems to be widely used in industry, and it continues to be an influential system today.

Here are some of the key features of the CLASSIC knowledge representation system:
* It is based on a formal logic, which makes it possible to reason about the knowledge in the system.
* It is expressive, allowing for the representation of complex concepts and relationships.
* It is scalable, able to handle large knowledge bases.
* It is efficient, able to perform reasoning tasks quickly.

CLASSIC is a powerful and versatile knowledge representation system that has been used to develop a wide range of applications. It is a good choice for applications where it is important to be able to represent and reason about complex knowledge."
[{2023-10-27 retrieved} https://bard.google.com/chat/22e9777599f0c184]
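
· a-toy sketch of the-subsumption relation described above; real CLASSIC computes subsumption from description-logic definitions, so the-hand-listed hierarchy below is only an illustration:
```
# toy subsumption test over a hand-listed hierarchy (illustrative;
# CLASSIC itself computes subsumption from concept definitions).
PARENTS = {
    "Robin": {"Bird"},
    "Bird": {"Animal"},
    "Animal": {"Thing"},
}

def subsumes(general, specific, parents=PARENTS):
    """True if `general` is the same as, or an ancestor of, `specific`."""
    if general == specific:
        return True
    return any(subsumes(general, p, parents)
               for p in parents.get(specific, ()))

print(subsumes("Animal", "Robin"))  # True  ("Animal" subsumes "Robin")
print(subsumes("Robin", "Animal"))  # False
```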

name::
* McsEngl.CLASSIC-Knlgmngr,
* McsEngl.Knlgmngr.004-CLASSIC,
* McsEngl.Knlgmngr.CLASSIC,
* McsEngl.klmrClassic!=CLASSIC--knowledge-manager,

evoluting of klmrClassic

description::
"evolution of CLASSIC KRS:
CLASSIC KRS is a knowledge representation system that is based on the KL-ONE system. It was developed in the late 1980s and early 1990s by Ronald J. Brachman and others at Bolt, Beranek and Newman.

CLASSIC KRS extends KL-ONE with a number of new features, including:
* **Support for arbitrary cardinality restrictions on roles**
* **Support for qualified disjunction in descriptions**
* **Support for negation in descriptions**
* **A more expressive description language**
* **A more efficient deductive classifier**

CLASSIC KRS has been used to build a number of knowledge bases, including the CYC knowledge base and the WordNet lexical database. It has also been used in a variety of research and development projects.

In the 1990s, CLASSIC KRS was standardized by the IEEE as part of the KL-ONE family of knowledge representation languages. However, CLASSIC KRS is no longer a popular choice for building new knowledge bases. This is because it has been superseded by newer languages, such as Description Logics (DL) and Web Ontology Language (OWL).

DL and OWL are more expressive and efficient than CLASSIC KRS. They are also more widely supported by tools and software libraries.

However, CLASSIC KRS remains an important knowledge representation system. It is still used by some researchers and developers, and it is a good choice for applications where a very expressive description language is needed.

Here is an example of a CLASSIC KRS description:
```
(Concept Person)
(Subsumes Person Animal)
(Role has-parent)
(Description Person who has-parent Person and has-parent Person)
```

This description says that the concept "Person" subsumes the concept "Animal" and that persons have parents. It also says that a person is defined as someone who has two parents who are also persons.

This description is more expressive than the KL-ONE description in the previous example because it can be used to represent the fact that a person has two parents. This is not possible with the KL-ONE description language.

CLASSIC KRS is a powerful knowledge representation system that can be used to represent a wide range of knowledge. However, it has been superseded by newer languages, such as DL and OWL."
[{2023-10-30 retrieved} https://bard.google.com/chat/1ad6ed38e0932b22]

name::
* McsEngl.evoluting-of-klmrClassic,
* McsEngl.klmrClassic'evoluting,

Knlgmngr.OMCS-{1999..2016}

description::
">Open Mind Common Sense:
Open Mind Common Sense (OMCS) was an artificial intelligence project based at the Massachusetts Institute of Technology (MIT) Media Lab whose goal was to build and utilize a large commonsense knowledge base from the contributions of many thousands of people across the Web. It was active from 1999 to 2016.

OMCS was designed to collect common-sense knowledge from the general public over the web. The project's website allowed users to submit facts, descriptions, and stories about the world around them. These submissions were then reviewed by other users to ensure their accuracy and relevance.

OMCS collected over a million facts from over 15,000 contributors, making it one of the largest commonsense knowledge bases in existence. The project also developed a number of algorithms for processing and reasoning about common-sense knowledge.

OMCS was a valuable resource for artificial intelligence research, and it helped to advance the state of the art in commonsense reasoning. However, the project was ultimately discontinued due to a lack of funding.

Here are some of the key features of OMCS:
* **Crowdsourced knowledge acquisition:** OMCS collected knowledge from the general public over the web. This allowed the project to gather a large amount of data that would have been difficult or impossible to collect otherwise.
* **Natural language processing:** OMCS used natural language processing techniques to extract information from text submissions. This allowed the project to collect a wider range of information than would have been possible with manual data entry.
* **Collaborative knowledge validation:** OMCS submissions were reviewed by other users to ensure their accuracy and relevance. This helped to ensure the quality of the knowledge base.
* **Commonsense reasoning algorithms:** OMCS developed a number of algorithms for processing and reasoning about common-sense knowledge. These algorithms were used to make inferences from the knowledge base and to answer questions about the world.

OMCS was a pioneering project in the field of artificial intelligence. It demonstrated the feasibility of collecting common-sense knowledge from the general public and the value of such knowledge for AI applications. The project's legacy continues to inspire researchers today.
Sources
en.wikipedia.org/wiki/Open_Mind_Common_Sense"
[{2023-11-05 retrieved} https://bard.google.com/chat/94786d097d09bd84?hl=en&pli=1]

name::
* McsEngl.OMCS!=open-mind-common-sense--Knlgmngr,
* McsEngl.Knlgmngr.032-OMCS-{1999..2016},
* McsEngl.Knlgmngr.OMCS-{1999..2016},
* McsEngl.Knlgmngr.{1999..2016}-OMCS,

Knlgmngr.ThoughtTreasure-{1995-2006-2023..}

description::
"overview of ThoughtTreasure:
ThoughtTreasure is a commonsense knowledge base and architecture for natural language processing (NLP). It was developed by Erik T. Mueller and his team at the University of California, Berkeley.

ThoughtTreasure is a knowledge base of over 25,000 commonsense concepts, such as objects, actions, events, and relationships. Each concept is represented in a variety of ways, including logic, finite automata, grids, and scripts.

ThoughtTreasure also includes an architecture for NLP that uses the knowledge base to perform a variety of tasks, such as:
* **Text generation:** ThoughtTreasure can generate text in a variety of formats, including poems, code, scripts, musical pieces, email, letters, etc. It can also translate languages and write different kinds of creative content.
* **Question answering:** ThoughtTreasure can answer questions in a comprehensive and informative way, even if they are open ended, challenging, or strange. It can also provide summaries of factual topics or create stories.
* **Inference:** ThoughtTreasure can make inferences about the world based on its knowledge base. For example, if ThoughtTreasure knows that a bird is an animal and that animals have wings, it can infer that a bird has wings.
* **Commonsense reasoning:** ThoughtTreasure can use its knowledge base to reason about commonsense scenarios. For example, if ThoughtTreasure knows that a person is likely to be in the kitchen if they are making a sandwich, it can infer that the person is likely to be in the kitchen if they are looking for a knife.

ThoughtTreasure is a powerful tool for NLP, and it has been used to develop a variety of applications, such as:
* **Chatbots:** ThoughtTreasure can be used to develop chatbots that can converse with humans in a natural and engaging way.
* **Question answering systems:** ThoughtTreasure can be used to develop question answering systems that can provide comprehensive and informative answers to a wide range of questions.
* **Text summarization systems:** ThoughtTreasure can be used to develop text summarization systems that can provide summaries of factual topics or create stories.
* **Machine translation systems:** ThoughtTreasure can be used to develop machine translation systems that can translate languages more accurately and fluently.

ThoughtTreasure is an active research project, and it is constantly being improved. It is likely to play a major role in the future of NLP and artificial intelligence."
[{2023-10-28 retrieved} https://bard.google.com/chat/a255212365945ab1]
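
· a-toy sketch of the-quoted bird/wings inference as property inheritance; the-facts are hypothetical and not ThoughtTreasure's actual representation:
```
# toy property inheritance mirroring the quoted bird/wings example
# (hypothetical facts; not ThoughtTreasure's actual representation).
ISA = {"robin": "bird", "bird": "animal"}
HAS = {"animal": {"wings"}}   # property stored high in the hierarchy

def properties(concept):
    """collect properties of a concept and of everything it IsA."""
    props = set()
    while concept is not None:
        props |= HAS.get(concept, set())
        concept = ISA.get(concept)
    return props

print(properties("bird"))   # {'wings'} -- inferred via bird IsA animal
```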

name::
* McsEngl.ThoughtTreasure-Knlgmngr,
* McsEngl.Knlgmngr.016-ThoughtTreasure,
* McsEngl.Knlgmngr.ThoughtTreasure,
* McsEngl.klmrThoughttreasure!=ThoughtTreasure--knowledge-manager,

addressWpg::
* https://github.com/eriktmueller/thoughttreasure,

evoluting of klmrThoughttreasure

description::
"evolution of ThoughtTreasure:
ThoughtTreasure is a large-scale common sense knowledge base developed by Erik Mueller. It is based on the idea that common sense is essential for intelligent reasoning, and that it can be represented as a collection of interconnected concepts and relationships. ThoughtTreasure contains over 10 million concepts and 100 million relationships, and it is used in a variety of applications, including natural language processing, machine translation, and question answering.

ThoughtTreasure has evolved significantly since its first release in 1995. One of the most important changes has been the shift from a hand-crafted knowledge base to a machine-learning driven approach. In the early days of ThoughtTreasure, all of the knowledge was manually entered by human experts. However, this approach was not scalable, and it was difficult to keep up with the ever-changing world. In recent years, ThoughtTreasure has been using machine learning to automatically extract knowledge from large datasets of text and code. This has allowed the knowledge base to grow much larger and more comprehensive.

Another important change has been the shift from a purely symbolic representation of knowledge to a hybrid symbolic-statistical representation. In the early days of ThoughtTreasure, all of the knowledge was represented in a symbolic format, using concepts and relationships. However, this representation did not capture the uncertainty and ambiguity that is inherent in common sense knowledge. In recent years, ThoughtTreasure has added statistical features to its knowledge base, which allows it to represent uncertainty and ambiguity in a more nuanced way.

Finally, ThoughtTreasure has become much more accessible to users in recent years. In the early days of ThoughtTreasure, it was only available to a small number of researchers. However, in recent years, ThoughtTreasure has been released as open source software, and it is now used by researchers and developers all over the world.

Here are some of the specific changes that have been made to ThoughtTreasure over the years:
* **1995:** Initial release of ThoughtTreasure, with a hand-crafted knowledge base of over 1 million concepts and 10 million relationships.
* **1998:** Addition of machine learning capabilities to automatically extract knowledge from text and code.
* **2002:** Shift to a hybrid symbolic-statistical representation of knowledge.
* **2006:** Release of ThoughtTreasure as open source software.
* **2010:** Addition of a new knowledge representation language called Knowledge Representation Language (KRL).
* **2014:** Release of a new version of ThoughtTreasure with a significantly improved machine learning engine.
* **2018:** Release of a new version of ThoughtTreasure with support for natural language processing and machine translation.
* **2023:** ThoughtTreasure is now used in a wide variety of applications, including natural language processing, machine translation, question answering, and knowledge representation.

ThoughtTreasure is a constantly evolving knowledge base, and new features and capabilities are being added all the time. It is one of the most comprehensive and well-developed common sense knowledge bases in the world, and it is used by researchers and developers all over the world to create intelligent systems."
[{2023-10-31 retrieved} https://bard.google.com/chat/4c77239baa4fc639]

name::
* McsEngl.evoluting-of-klmrThoughttreasure,
* McsEngl.klmrThoughttreasure'evoluting,

Knlgmngr.TROEPS-{1995-1999}

description::
"TROEPS is an object based knowledge representation system. It thus allows to model particular domains under the form of objects. This means that the individual entities of the domain are seen as discrete entities with fields denoting their characteristics and their relationship with other such entities. These objects are also grouped into families of objects called concepts or classes which, in turn, can be related to each others by specialisation or attributive relationship. Each construct of the system has a precise semantics which allows to check the models for internal consistency.
TROEPS knowledge bases can be explored in order to find the objects and their relationships. To that extent, turning a knowledge base into a Web knowledge server, either for browsing, editing or exploiting the model, has been made easy. Moreover, lexicons and annotations enable to document the model straightforwardly.
However, the interpretation and exploitation of the model is left to external programs (generally through the TROEPS API). The system provides basic tools for easing the use of models such as sophisticated field value computation, instance classification and conceptual clustering.
This Reference manual provides an in depth presentation of the TROEPS system. It can be found on-line at http://co4.inrialpes.fr/docs/troeps-manual.html.
[{2023-10-28 retrieved} https://hytropes.inrialpes.fr/docs/troeps-1.3a/troeps-Forewor.html]

name::
* McsEngl.TROEPS-Knlgmngr,
* McsEngl.Knlgmngr.017-TROEPS,
* McsEngl.Knlgmngr.TROEPS,

Knlgmngr.Ontolingua-{1993-2005-2023}

description::
"overview of Ontolingua:
Ontolingua is a knowledge representation language and associated software tools used for knowledge engineering and building ontologies. It was developed at Stanford University's Knowledge Systems Laboratory in the 1990s and has been influential in the field of artificial intelligence, particularly in the area of formal ontology development. Here's an overview of Ontolingua:

1. **Knowledge Representation Language**: Ontolingua provides a language for representing knowledge and building ontologies. It is a formal language that allows users to define concepts, relationships, and constraints in a structured and machine-readable manner.

2. **Ontologies**: Ontolingua is often used to create ontologies, which are formal and explicit specifications of the concepts, relationships, and constraints within a particular domain. These ontologies help in capturing and organizing knowledge in a systematic and standardized way.

3. **Expressiveness**: Ontolingua is designed to be expressive, allowing users to define complex relationships and constraints. It supports a variety of constructs for modeling domain-specific knowledge, including classes, slots, and relations.

4. **Inheritance**: Ontolingua supports the notion of inheritance, allowing concepts to inherit properties and relationships from parent concepts. This makes it easier to create hierarchical and structured ontologies.

5. **Modularity**: Ontolingua promotes the development of modular ontologies, where different aspects of a domain's knowledge can be organized into separate modules. This enhances reusability and maintainability.

6. **Knowledge Sharing**: Ontolingua facilitates the sharing of knowledge by providing a standardized representation that can be understood by both humans and machines. This is valuable for collaborative knowledge engineering projects and interoperability between different systems.

7. **Software Tools**: In addition to the language, Ontolingua includes software tools for creating, editing, and managing ontologies. These tools help knowledge engineers and domain experts work collaboratively to build and maintain ontologies.

8. **Interoperability**: Ontolingua was designed with an emphasis on interoperability, allowing ontologies to be used in various AI and knowledge-based systems. It supports various export formats, including KIF (Knowledge Interchange Format) and OWL (Web Ontology Language).

9. **Applications**: Ontolingua has been used in various domains, including biomedical informatics, knowledge management, natural language processing, and semantic web applications. Its ability to formalize domain-specific knowledge has made it useful in a wide range of applications.

10. **Legacy**: While Ontolingua was influential in its time, the field of ontology development and knowledge representation has evolved, and other languages and tools, such as OWL and Protégé, have gained prominence. However, Ontolingua's concepts and principles have had a lasting impact on the development of ontologies and knowledge engineering.

In summary, Ontolingua is a knowledge representation language and associated tools that have played a significant role in the development of formal ontologies and knowledge engineering. It emphasizes the structured and systematic representation of knowledge, making it valuable for various AI and knowledge-based applications."
[{2023-10-28 retrieved} https://chat.openai.com/c/5dbc478f-f9fc-4a94-be58-a5d42578d764]

name::
* McsEngl.Ontolingua-Knlgmngr,
* McsEngl.Knlgmngr.013-Ontolingua,
* McsEngl.Knlgmngr.Ontolingua,
* McsEngl.klmrOntolingua!=Ontolingua--knowledge-manager,

evoluting of klmrOntolingua

description::
"evolution of Ontolingua:
Ontolingua was one of the earliest knowledge representation languages and ontology development tools. It was developed in the early 1990s at the Knowledge Systems Laboratory at Stanford University. Ontolingua is based on the Common Lisp programming language and provides a rich set of features for representing and reasoning about ontologies.

Ontolingua has evolved significantly over the years. Some of the key milestones in the evolution of Ontolingua include:
* **1993:** Ontolingua is first released.
* **1995:** Ontolingua-Server is released, which provides a web-based interface for ontology development and reasoning.
* **1997:** Ontolingua is used to develop the Cyc ontology, one of the largest and most comprehensive ontologies ever created.
* **2000:** Ontolingua is extended to support the Web Ontology Language (OWL), a W3C standard for representing and exchanging ontologies on the web.
* **2005:** Ontolingua 2005 is released, which includes a number of new features and improvements, such as support for OWL 2 DL and a new ontology editor.

Ontolingua has been used to develop a wide range of ontologies, including ontologies for the Semantic Web, knowledge management, artificial intelligence, and bioinformatics.

In recent years, Ontolingua has been less widely used than other ontology development tools, such as Protégé. However, Ontolingua remains an important tool for ontology research and development.

Here are some of the reasons why Ontolingua is still important today:
* Ontolingua is a very expressive language, which means that it can be used to represent a wide range of ontological concepts and relationships.
* Ontolingua has a strong theoretical foundation, which makes it a good choice for ontology research.
* Ontolingua is well-integrated with the Common Lisp programming language, which gives users access to a wide range of powerful tools and libraries.

Overall, Ontolingua is a powerful and versatile ontology development tool that has played an important role in the evolution of the Semantic Web.

**Current state of Ontolingua**
Ontolingua is still under active development, and the Ontolingua Consortium continues to release new versions of the software. The latest version of Ontolingua is Ontolingua 2023, which was released in October 2023.
Ontolingua 2023 includes a number of new features and improvements, such as:
* Support for OWL 3 DL
* A new ontology editor with improved usability features
* Support for the latest versions of the Common Lisp programming language
Ontolingua 2023 is also available as a web-based application, which makes it easier to use Ontolingua on a variety of platforms.

**Future of Ontolingua**
The future of Ontolingua is uncertain. However, the Ontolingua Consortium is committed to continuing the development of Ontolingua. The consortium is also working to promote the use of Ontolingua in new and emerging applications.
One potential area of growth for Ontolingua is in the development of ontologies for artificial intelligence (AI). AI systems increasingly rely on ontologies to represent and reason about knowledge. Ontolingua is well-suited for developing ontologies for AI because of its expressiveness and strong theoretical foundation.
Another potential area of growth for Ontolingua is in the development of ontologies for the Internet of Things (IoT). The IoT is a network of physical devices that are connected to the internet. Ontologies can be used to represent and reason about the data that is generated by IoT devices. Ontolingua is well-suited for developing ontologies for the IoT because of its ability to represent complex relationships between entities.
Overall, the future of Ontolingua is promising. The language has a long history of innovation and is well-suited for developing ontologies for emerging technologies such as AI and the IoT."
[{2023-10-30 retrieved} https://bard.google.com/chat/5f8159d797967bf2]

name::
* McsEngl.evoluting-of-klmrOntolingua,
* McsEngl.klmrOntolingua'evoluting,

Knlgmngr.PowerLoom-{1993-2015}

description::
"overview of PowerLoom:
PowerLoom is a knowledge representation and reasoning system that was developed at the University of Southern California's Information Sciences Institute. It is designed to represent and manipulate complex knowledge, making it particularly useful for applications involving artificial intelligence, expert systems, and knowledge-based systems. PowerLoom is built on a foundation of first-order logic, which allows it to represent and reason about a wide range of concepts and relationships.

Here's an overview of some key features and aspects of PowerLoom:

1. Knowledge Representation: PowerLoom uses a knowledge representation language that is based on first-order predicate logic. This allows it to express complex relationships, rules, and facts in a structured and formal way. It supports the creation of ontologies, which are used to model the concepts and their interconnections in a domain.

2. Inference Engine: PowerLoom includes a powerful inference engine that can perform various forms of reasoning, including deductive, inductive, and abductive reasoning. It supports rule-based and frame-based knowledge representation, allowing it to capture a wide range of knowledge structures.

3. Ontologies: PowerLoom facilitates the development of ontologies, which are hierarchical structures that organize knowledge in a domain. Ontologies help in defining classes, subclasses, and relationships among different entities. PowerLoom can also perform reasoning over these ontologies to derive new information.

4. Rules and Constraints: Users can define rules and constraints in PowerLoom to capture domain-specific knowledge. These rules can be used to make inferences and draw conclusions based on the existing knowledge base.

5. Knowledge Bases: PowerLoom allows you to build and manage knowledge bases that contain a collection of facts, rules, and concepts. These knowledge bases can be updated and queried to retrieve information.

6. Natural Language Interface: PowerLoom provides a natural language interface that allows users to interact with the system using human-readable language, making it more accessible to a wider range of users.

7. Scalability: PowerLoom is designed to handle large and complex knowledge bases efficiently. Its architecture allows for scaling to handle significant amounts of data.

8. Applications: PowerLoom has been used in a variety of applications, including expert systems, intelligent tutoring systems, semantic web technologies, and medical knowledge representation. It is particularly suitable for tasks that require sophisticated reasoning and decision-making capabilities.

9. Open Source: PowerLoom is available as open-source software, making it accessible for researchers and developers to use, extend, and customize for their specific needs.

Overall, PowerLoom is a versatile knowledge representation and reasoning system that is designed to help computers understand and work with complex, structured knowledge. Its ability to handle ontologies, rules, and inference makes it a valuable tool for building intelligent systems and expert applications in various domains."
[{2023-10-28 retrieved} https://chat.openai.com/c/1d20565d-e583-4baa-9566-3b92452e06dd]
===
"overview of PowerLoom:
**PowerLoom** is a knowledge representation and reasoning system that was developed at the University of Southern California Information Sciences Institute. It is a successor to the Loom system, and it provides a number of features that make it a powerful tool for knowledge management and artificial intelligence applications.

PowerLoom is based on a fully expressive, logic-based representation language. It uses a natural deduction inference engine that combines forward and backward chaining to derive what logically follows from the facts and rules asserted in the knowledge base.

PowerLoom also provides a number of other features, including:
* **Modularity:** Knowledge bases in PowerLoom can be modularized, making them easier to reuse and maintain.
* **Ultra-lightweight worlds:** PowerLoom supports hypothetical reasoning using ultra-lightweight worlds.
* **Description classification:** PowerLoom includes a description classifier that can classify descriptions expressed in full first-order predicate calculus.
* **Programming language:** PowerLoom is written in a new programming language called STELLA, which is a strongly typed, Lisp-like language. This makes it possible to extend PowerLoom with new features and functionality.

PowerLoom has been used to develop a variety of knowledge-based applications, including:
* **Expert systems:** PowerLoom has been used to develop expert systems for a variety of domains, including medical diagnosis, financial planning, and legal research.
* **Decision support systems:** PowerLoom has been used to develop decision support systems for a variety of applications, such as military planning, business intelligence, and environmental management.
* **Natural language processing systems:** PowerLoom has been used to develop natural language processing systems that can understand and generate human language.
* **Semantic web applications:** PowerLoom has been used to develop semantic web applications that can reason about the meaning of data on the web.

PowerLoom is a powerful and versatile knowledge representation and reasoning system. It is still under development, but it has already been used to develop a wide range of applications in a variety of domains.

Here are some of the benefits of using PowerLoom:
* **Improved knowledge management:** PowerLoom can help to improve knowledge management by providing a way to organize and represent knowledge in a structured and logical way.
* **Enhanced decision-making:** PowerLoom can help to improve decision-making by providing a way to reason about complex knowledge bases and to identify potential problems and opportunities.
* **Increased productivity:** PowerLoom can help to increase the productivity of knowledge workers by automating tasks such as knowledge acquisition, storage, retrieval, and sharing.
* **Improved collaboration:** PowerLoom can help to improve collaboration between knowledge workers by providing a common language for defining and reasoning about knowledge.

Overall, PowerLoom is a powerful tool that can be used to improve the way that organizations manage and use their knowledge.
Sources books.google.com/books?id=WG6EEN_F3acC"
[{2023-10-28 retrieved} https://bard.google.com/chat/eae955702a49c781]
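
· to make the inference style concrete, here is a minimal Python sketch of an engine that combines forward chaining (derive all consequences of asserted facts) with backward chaining (prove a goal on demand); this is only an illustration of the technique, not PowerLoom's KIF syntax or API, and the rules and atoms are invented:

```python
# Toy rule base over ground atoms: (premises, conclusion) pairs.
RULES = [
    ({"parent(ann,bob)"}, "ancestor(ann,bob)"),
    ({"parent(bob,cal)"}, "ancestor(bob,cal)"),
    ({"ancestor(ann,bob)", "ancestor(bob,cal)"}, "ancestor(ann,cal)"),
]

def forward_chain(facts):
    """Apply rules repeatedly until no new facts are derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

def backward_chain(goal, facts, depth=0):
    """Prove a single goal on demand from the facts and rules."""
    if goal in facts:
        return True
    if depth > 10:  # crude guard against cyclic rule chains
        return False
    return any(
        conclusion == goal
        and all(backward_chain(p, facts, depth + 1) for p in premises)
        for premises, conclusion in RULES
    )

facts = {"parent(ann,bob)", "parent(bob,cal)"}
print(forward_chain(facts))                        # adds the ancestor facts
print(backward_chain("ancestor(ann,cal)", facts))  # True
```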

name::
* McsEngl.PowerLoom-Knlgmngr,
* McsEngl.Knlgmngr.014-PowerLoom,
* McsEngl.Knlgmngr.PowerLoom,
* McsEngl.klmrPowerloom!=PowerLoom--knowledge-manager,

evoluting of klmrPowerloom

description::
"evolution of PowerLoom, a knowledge representation and reasoning system that was developed at the University of Maryland:
PowerLoom is a knowledge representation and reasoning system that was developed at the Information Sciences Institute (ISI) of the University of Southern California (USC). It is the successor to the Loom system, and it provides a language and environment for constructing intelligent, knowledge-based applications. PowerLoom uses a fully expressive, logic-based representation language (a variant of KIF) and a natural deduction inference engine that combines forward and backward chaining to derive what logically follows from the facts and rules asserted in the knowledge base. PowerLoom also supports modules as a structuring device for knowledge bases and ultra-lightweight worlds to support hypothetical reasoning.

PowerLoom was first released in 1993, and it has been under continuous development ever since. Over the years, PowerLoom has been used to develop a wide variety of knowledge-based applications, including:
* Medical expert systems
* Natural language processing systems
* Semantic web applications
* Ontology-based systems
* Knowledge-based planning and scheduling systems
* Robotics systems

Some of the key features of PowerLoom include:
* Expressiveness: PowerLoom supports a fully expressive, logic-based representation language that can be used to represent complex knowledge.
* Reasoning: PowerLoom provides a powerful reasoning engine that can be used to derive implicit knowledge from the facts and rules asserted in the knowledge base.
* Modularity: PowerLoom supports modules as a structuring device for knowledge bases. This makes it easy to reuse knowledge and to develop large, complex knowledge bases.
* Hypothetical reasoning: PowerLoom supports ultra-lightweight worlds to support hypothetical reasoning. This allows users to explore different possibilities without having to commit to any particular one.
* Performance: PowerLoom is a very efficient knowledge representation and reasoning system. It can be used to develop large, complex knowledge-based applications that perform well in real time.

PowerLoom is a powerful and versatile knowledge representation and reasoning system that has been used to develop a wide variety of knowledge-based applications. It is a valuable tool for researchers and developers who are working on knowledge-based systems.

Here is a summary of the evolution of PowerLoom:
* 1993: PowerLoom is first released.
* 1995: PowerLoom is used to develop the first medical expert system to be cleared by the FDA.
* 1997: PowerLoom is used to develop a natural language processing system that can translate English to Japanese.
* 2000: PowerLoom is used to develop a semantic web application that can be used to search and query the web in a more intelligent way.
* 2005: PowerLoom is used to develop an ontology-based system that can be used to manage and reason about complex data.
* 2010: PowerLoom is used to develop a knowledge-based planning and scheduling system that can be used to optimize the production of goods and services.
* 2015: PowerLoom is used to develop a robotics system that can intelligently navigate and interact with the world.

Today, PowerLoom is still under active development. The developers are working on adding new features and improving the performance of the system. PowerLoom is a valuable tool for researchers and developers who are working on knowledge-based systems.
Sources
www.isi.edu/isd/LOOM/PowerLoom/#:~:text=includes%20Python%20API-,Overview,asserted%20in%20the%20knowledge%20base.
ftp://ftp.wayne.edu/ldp/en/AI-Alife-HOWTO/AI-Alife-HOWTO-single.html"
[{2023-10-30 retrieved} https://bard.google.com/chat/4181fee71c6273fd]

name::
* McsEngl.evoluting-of-klmrPowerloom,
* McsEngl.klmrPowerloom'evoluting,

Knlgmngr.Algernon-{1992-2022}

description::
"overview of Algernon knowledge-representation-system:
Algernon is a knowledge-representation system that was developed in the early 1990s. It is based on a theory called Access-Limited Logic (ALL), which formalizes the access limitations inherent in a network-structured knowledge base.

The Algernon system has evolved over time to incorporate new features and improve its performance. Some of the key changes that have been made include:
* **Improved reasoning capabilities:** Algernon can now handle a wider range of reasoning tasks, including non-monotonic reasoning and reasoning under uncertainty.
* **Enhanced expressiveness:** Algernon's knowledge representation language has been extended to allow for more expressive representations of knowledge.
* **Improved scalability:** Algernon can now handle larger and more complex knowledge bases.
* **Support for new applications:** Algernon has been used to develop a wide range of applications, including natural language processing systems, expert systems, and intelligent agents.

Here is a more detailed overview of the evolution of the Algernon knowledge-representation system:
* **1992:** The Algernon system is first developed by David Poole.
* **1994:** Algernon is extended to support non-monotonic reasoning.
* **1996:** Algernon is extended to support reasoning under uncertainty.
* **1998:** Algernon is extended to support more expressive representations of knowledge.
* **2000:** Algernon is extended to improve its scalability.
* **2002:** Algernon is used to develop a natural language processing system called AlgernonNL.
* **2004:** Algernon is used to develop an expert system called AlgernonX.
* **2006:** Algernon is used to develop an intelligent agent called AlgernonAgent.
* **2008:** Algernon is released as an open source project.
* **2010:** Algernon is extended to support new knowledge representation languages, such as OWL and RDF.
* **2012:** Algernon is used to develop a semantic web application called AlgernonSW.
* **2014:** Algernon is extended to support big data applications.
* **2016:** Algernon is used to develop a machine learning system called AlgernonML.
* **2018:** Algernon is extended to support distributed reasoning.
* **2020:** Algernon is used to develop a knowledge graph system called AlgernonKG.
* **2022:** Algernon is extended to support explainable AI.

Today, Algernon is a powerful and versatile knowledge-representation system that is used to develop a wide range of applications. It is a valuable tool for researchers and practitioners who are working on artificial intelligence, natural language processing, and machine learning.

**Future directions**
The future of the Algernon knowledge-representation system is bright. Researchers are working on a number of new features and enhancements, including:
* **Support for new reasoning tasks:** Algernon is being extended to support new reasoning tasks, such as abductive reasoning and analogical reasoning.
* **Improved efficiency:** Algernon is being optimized to improve its efficiency on large and complex knowledge bases.
* **Enhanced usability:** Algernon is being made more user-friendly and accessible to a wider range of users.

Algernon is also being integrated with other AI technologies, such as machine learning and deep learning. This will allow Algernon to be used to develop even more sophisticated and powerful applications."
[{2023-10-27 retrieved} https://bard.google.com/chat/c280dc55b3d3942c]

name::
* McsEngl.Algernon-Knlgmngr,
* McsEngl.klmrAlgernon!=Algernon--knowledge-manager,
* McsEngl.Knlgmngr.001-Algernon,
* McsEngl.Knlgmngr.Algernon,

Knlgmngr.GALEN-{1991-2020}

description::
"overview of GALEN system of University of Manchester:
GALEN (Generalized Architecture for Languages, Encyclopedias, and Nomenclatures in Medicine) is a comprehensive medical terminology and knowledge management system developed at the University of Manchester. It is designed to support various aspects of healthcare, including clinical practice, research, and healthcare information systems. GALEN was developed with the goal of providing a standardized and structured approach to medical terminology and knowledge representation.

Here's an overview of the GALEN system:
1. Terminology Standardization: GALEN focuses on standardizing and formalizing medical terminology. It provides a framework for representing medical concepts, including diseases, symptoms, drugs, procedures, and more, in a consistent and computable manner.

2. Knowledge Representation: GALEN uses a knowledge representation system to model medical concepts and relationships between them. This allows for more precise and semantically rich descriptions of medical data.

3. Ontology Development: GALEN includes a comprehensive ontology that defines the structure and relationships of medical concepts. This ontology is designed to be used in healthcare information systems to enhance interoperability and data integration.

4. Clinical Decision Support: GALEN supports the development of clinical decision support systems by providing a structured knowledge base for healthcare professionals. It can be used to build tools that aid in diagnosis, treatment recommendations, and other clinical decision-making processes.

5. Terminology Mapping: GALEN provides tools and resources for mapping between different medical terminologies and classifications. This is essential for data exchange and collaboration between healthcare organizations and systems that use different terminologies.

6. Research and Development: The GALEN project has contributed to various research areas in medical informatics and healthcare, including the development of terminologies, ontologies, and decision support systems. It has also played a role in standardization efforts within the healthcare industry.

7. Clinical Data Integration: GALEN's structured terminology and knowledge representation enable the integration of diverse clinical data sources, making it easier to analyze and share healthcare information.

It's important to note that GALEN has been involved in various research and development projects over the years, and its influence extends beyond the University of Manchester. The system has contributed to the field of medical informatics and the development of international standards for healthcare terminologies and ontologies.

Please note that the specific capabilities and applications of GALEN may have evolved since my last knowledge update in January 2022, and you may want to consult more recent sources for the latest information on the system."
[{2023-10-27 retrieved} https://chat.openai.com/c/96008a08-e9ce-4554-8acb-e8502cda2f62]

name::
* McsEngl.GALEN-Knlgmngr,
* McsEngl.Knlgmngr.007-GALEN,
* McsEngl.Knlgmngr.GALEN,
* McsEngl.klmrGalen!=GALEN--knowledge-manager,

evoluting of klmrGalen

description::
"evolution of Generalized Architecture for Languages, Encyclopedias, and Nomenclatures in Medicine:
The Generalized Architecture for Languages, Encyclopedias, and Nomenclatures in Medicine (GALEN) project was launched in 1991 with the goal of developing a new generation of terminology tools to support multilingual clinical information systems. GALEN is a pervasive technology that enables the effective integration of advanced information systems into clinical care.

The evolution of GALEN can be divided into three main phases:
**Phase 1 (1991-1995)**: The focus of this phase was on developing the core conceptual model of GALEN, known as the Coding Reference (CORE) model. The CORE model is a formal representation of medical concepts and the relationships between them. It is based on the principles of semantic decomposition and interlinguas, which allows it to be used in a variety of languages and cultures.

**Phase 2 (1996-2000)**: The focus of this phase was on developing the Semantic Encyclopedia of Terminology (SET), a software system that implements the CORE model. The SET provides a central repository of medical knowledge that can be used by a variety of clinical information systems.

**Phase 3 (2001-present)**: The focus of this phase has been on the dissemination and adoption of GALEN. GALEN has now been adopted by a number of organizations around the world, and it is being used in a variety of clinical and research applications.

In addition to the three main phases, GALEN has also evolved in a number of other ways over the years. For example, the CORE model has been expanded to cover a wider range of medical concepts, and the SET software has been improved to make it more efficient and scalable. GALEN has also been integrated with a number of other standards and technologies, such as SNOMED CT and HL7.

Here is a summary of some of the key evolutionary steps in GALEN:
* 1991: Launch of the GALEN project.
* 1995: Release of the CORE model.
* 1996: Release of the SET software.
* 2001: First international GALEN conference.
* 2005: Release of GALEN version 3.0, with a new knowledge representation and improved performance on commonsense reasoning tasks.
* 2010: Release of GALEN version 4.0, with expanded coverage and improved integration with other standards and technologies.
* 2015: Release of GALEN version 5.0, with a new knowledge acquisition pipeline and improved performance on clinical reasoning tasks.
* 2020: Release of GALEN version 6.0, with a new knowledge representation and improved performance on multimodal reasoning tasks.

GALEN is a valuable resource for a variety of healthcare applications, including clinical decision support, clinical documentation, and patient education. It is also used in a variety of research projects, including medical informatics, clinical linguistics, and artificial intelligence.

GALEN is still under active development, and the team is working on a number of improvements, including expanding coverage, improving the quality of knowledge, and developing new ways to use GALEN for healthcare applications."
[{2023-10-30 retrieved} https://bard.google.com/chat/297cfd8b10e1a04b]

name::
* McsEngl.evoluting-of-klmrGalen,
* McsEngl.klmrGalen'evoluting,

Knlgmngr.ConceptBase-{1987-2013..}

description::
"overview of ConceptBase:
ConceptBase is a multi-user deductive object manager intended for conceptual modeling, metamodeling, and coordination in design environments. It is a database system for storing interdependent models at any abstraction level. ConceptBase utilizes the object-oriented principles of instantiation, specialization, and attribution/association to represent models. Its logic-based rule and constraint language allows circumscribing the intended semantics of classes, meta classes, meta meta classes, and so forth.

ConceptBase was originally conceived in the 1980s as a repository system maintaining heterogeneous models created during the development of data-intensive applications, i.e. software systems that rely on database systems as one of their components.

**Features:**
* **Multi-user:** ConceptBase supports multiple users to concurrently access and modify the same model.
* **Deductive:** ConceptBase can infer new knowledge from existing knowledge using deductive rules.
* **Object-oriented:** ConceptBase uses object-oriented principles to represent models, such as instantiation, specialization, and attribution/association.
* **Logic-based:** ConceptBase has a logic-based rule and constraint language that allows users to define the semantics of their models.
* **Metamodeling:** ConceptBase can be used to model other models, which is known as metamodeling.

**Benefits:**
* **Improved modeling quality:** ConceptBase's deductive and logic-based features can help users to improve the quality of their models by identifying and eliminating inconsistencies.
* **Reduced development time:** ConceptBase's metamodeling capabilities can help users to reduce development time by allowing them to reuse existing models and modeling patterns.
* **Improved communication and collaboration:** ConceptBase's multi-user support can help users to communicate and collaborate more effectively on their models.

**Applications:**
* **Conceptual modeling:** ConceptBase can be used to develop conceptual models of information systems, software systems, and other complex systems.
* **Metamodeling:** ConceptBase can be used to model other models, such as domain models, process models, and architecture models.
* **Method engineering:** ConceptBase can be used to develop and manage modeling methods.
* **Knowledge representation:** ConceptBase can be used to represent and manage knowledge in a variety of domains, such as medicine, finance, and engineering.

Overall, ConceptBase is a powerful tool for modeling, metamodeling, and knowledge representation. It is used by a variety of organizations in academia and industry, including Google, Microsoft, and IBM.
Sources
www.researchgate.net/figure/Buttons-of-the-History-Window_fig5_326352578
modeling-languages.com/conceptbase-database-system-metamodeling/"
[{2023-10-27 retrieved} https://bard.google.com/chat/db619d6544f8c301]
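
· the instantiation and specialization principles described above can be sketched in a few lines of Python; this is an invented illustration (not ConceptBase's Telos syntax) of unlimited instantiation levels (token, class, metaclass, ...) where instances of a subclass also count as instances of its superclasses:

```python
# Toy sketch of ConceptBase-style instantiation and specialization.
class Obj:
    def __init__(self, name, in_=None, isa=None):
        self.name = name
        self.in_ = in_   # instantiation link (instance-of)
        self.isa = isa   # specialization link (subclass-of)

    def instance_of(self, cls):
        """Follow the instance-of link, then subclass-of links upward."""
        c = self.in_
        while c is not None:
            if c is cls:
                return True
            c = c.isa
        return False

entity_type = Obj("EntityType")                      # metaclass level
person   = Obj("Person",   in_=entity_type)          # class level
employee = Obj("Employee", in_=entity_type, isa=person)
mary     = Obj("mary",     in_=employee)             # token level

print(mary.instance_of(employee))         # True
print(mary.instance_of(person))           # True: Employee isa Person
print(employee.instance_of(entity_type))  # True: classes are objects too
print(mary.instance_of(entity_type))      # False: instantiation is not transitive
```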

name::
* McsEngl.ConceptBase-Knlgmngr,
* McsEngl.Knlgmngr.006-ConceptBase,
* McsEngl.Knlgmngr.ConceptBase,
* McsEngl.klmrConceptbase!=ConceptBase-knowledge-manager,

info-resource of klmrConceptbase

description::
* https://conceptbase.sourceforge.net/,
* https://modeling-languages.com/conceptbase-database-system-metamodeling/,

name::
* McsEngl.klmrConceptbase'Infrsc,

evoluting of klmrConceptbase

description::
"evolution of ConceptBase:
ConceptBase was originally conceived in the 1980s as a repository system maintaining heterogeneous models created during the development of data-intensive applications, i.e. software systems that rely on database systems as one of their components. It was developed at the University of Passau in Germany, and the first version was released in 1987.

ConceptBase is a deductive and object-oriented database management system, which means that it combines the features of object-oriented databases with the ability to infer new knowledge from existing facts and rules. This makes it well-suited for storing and managing conceptual models, which are abstract representations of the real world or of software systems.

ConceptBase has evolved over the years to support a wider range of applications and to improve its performance and usability. Some of the major milestones in the evolution of ConceptBase include:
* **1992:** ConceptBase is ported to the Unix operating system.
* **1994:** ConceptBase is used to support the development of the Unified Modeling Language (UML).
* **1997:** ConceptBase is ported to the Windows operating system.
* **2003:** ConceptBase is released as open-source software.
* **2013:** ConceptBase is ported to the Java programming language.
* **2023:** ConceptBase is still under active development, and new features and improvements are being added regularly.

Today, ConceptBase is used by researchers and practitioners in a wide range of domains, including software engineering, information systems engineering, and artificial intelligence. It is used to support a variety of tasks, including:
* Conceptual modeling
* Metamodeling
* Method engineering
* Ontology development
* Knowledge management
* Data warehousing

ConceptBase is a powerful and versatile tool for managing conceptual knowledge. It is well-suited for applications where it is necessary to represent and reason about complex concepts and their relationships.

Here are some specific examples of how ConceptBase has been used in recent years:
* ConceptBase was used to develop a conceptual model of the European Union's General Data Protection Regulation (GDPR). This model is being used to help organizations comply with the GDPR.
* ConceptBase is being used to develop a knowledge base for the diagnosis and treatment of cancer. This knowledge base is being used by doctors to improve the quality of cancer care.
* ConceptBase is being used to develop a decision support system for farmers. This system helps farmers to choose the best crops to plant and to manage their crops effectively.

ConceptBase is a valuable tool for anyone who needs to manage and reason about conceptual knowledge. It is a powerful and versatile tool that can be used in a wide range of applications.
Sources modeling-languages.com/conceptbase-database-system-metamodeling/"
[{2023-10-30 retrieved} https://bard.google.com/chat/e98093b122dd89d0]

name::
* McsEngl.evoluting-of-klmrConceptbase,
* McsEngl.klmrConceptbase'evoluting,

Knlgmngr.WordNet-{1985-2011}

description::
"overview of WordNet:
WordNet is a lexical database and a comprehensive lexical reference system for the English language. It was developed at Princeton University and is widely used in natural language processing, artificial intelligence, and computational linguistics. WordNet provides a structured and detailed inventory of words and their semantic relationships. Here's an overview of WordNet:

1. **Lexical Database:** WordNet is a lexical database, which means it's a repository of words and their various aspects, such as definitions, synonyms, antonyms, part-of-speech information, and more. It doesn't just store words in isolation but also provides information about how they relate to each other.

2. **Hierarchy of Concepts:** One of the key features of WordNet is the hierarchical organization of words and concepts. It groups related words into synsets (synonym sets) and organizes these synsets into a hypernym-hyponym hierarchy. This means that words are organized into broader and narrower categories, helping to understand the relationships between words and concepts.

3. **Synonyms and Antonyms:** WordNet provides synonyms and antonyms for many words, making it a valuable resource for natural language understanding and generation tasks.

4. **Semantic Relations:** It defines various semantic relations between words and synsets. Some of the common semantic relations in WordNet include meronymy (part-whole relationships), holonymy (whole-part relationships), hypernymy (is-a relationships), and more. These relationships help in understanding the semantic structure of the English language.

5. **Part of Speech:** WordNet includes information about the part of speech of each word, such as whether a word is a noun, verb, adjective, or adverb. This information is essential for many natural language processing tasks.

6. **Applications:** WordNet has been used in various applications, including machine translation, information retrieval, information extraction, and sentiment analysis. It's also used as a resource for building semantic networks and knowledge graphs.

7. **Word Sense Disambiguation:** WordNet is helpful for word sense disambiguation, which is the process of determining the correct meaning of a word in a given context. By providing multiple senses for a word and their relationships, WordNet aids in disambiguating word meanings.

WordNet is a valuable resource for researchers and developers working on natural language understanding and generation. It has not only been influential in the development of NLP algorithms but has also contributed to our understanding of lexical semantics and language structure. Note that while WordNet primarily focuses on English, similar projects exist for other languages."
[{2023-10-28 retrieved} https://chat.openai.com/c/8600496d-550c-4407-bb78-e1c4a45755e5]
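
· the synset and relation structure described above can be explored programmatically, for example with NLTK's WordNet interface (a short sketch; it assumes NLTK and its WordNet corpus are installed):

```python
# Browse WordNet synsets and semantic relations via NLTK.
import nltk
nltk.download("wordnet", quiet=True)  # fetch the WordNet data once
from nltk.corpus import wordnet as wn

# All senses (synsets) of the word "car", with their glosses.
for syn in wn.synsets("car"):
    print(syn.name(), "-", syn.definition())

car = wn.synset("car.n.01")
print([lemma.name() for lemma in car.lemmas()])  # synonyms in this synset
print(car.hypernyms())          # broader concepts (is-a parents)
print(car.hyponyms()[:3])       # narrower concepts
print(car.part_meronyms()[:3])  # part-whole relations
```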

name::
* McsEngl.Wnkm!=WordNet-Knlgmngr,
* McsEngl.WordNet-Knlgmngr!⇒Wnkm,
* McsEngl.Knlgmngr.018-WordNet!⇒Wnkm,
* McsEngl.Knlgmngr.WordNet!⇒Wnkm,
* McsEngl.wordnet!⇒Wnkm,

descriptionLong::
"a semantic lexicon for the English language"
[{2023-09-18 retrieved} https://en.wikipedia.org/wiki/Hyponymy_and_hypernymy]
===
"WordNet® is a large lexical database of English. Nouns, verbs, adjectives and adverbs are grouped into sets of cognitive synonyms (synsets), each expressing a distinct concept. Synsets are interlinked by means of conceptual-semantic and lexical relations. The resulting network of meaningfully related words and concepts can be navigated with the browser(link is external). WordNet is also freely and publicly available for download. WordNet's structure makes it a useful tool for computational linguistics and natural language processing.
WordNet superficially resembles a thesaurus, in that it groups words together based on their meanings. However, there are some important distinctions. First, WordNet interlinks not just word forms—strings of letters—but specific senses of words. As a result, words that are found in close proximity to one another in the network are semantically disambiguated. Second, WordNet labels the semantic relations among words, whereas the groupings of words in a thesaurus does not follow any explicit pattern other than meaning similarity."
[{2021-02-02} https://wordnet.princeton.edu/]
===
"Some early knowledge graphs were topic-specific. In 1985, Wordnet was founded, capturing semantic relationships between words and meanings – an application of this idea to language itself."
[{2021-01-31} https://en.wikipedia.org/wiki/Knowledge_graph]

knowledge-base of Wnkm

description::
"155 327 words organized in 175 979 synsets for a total of 207 016 word-sense pairs;"
[{2023-11-14 retrieved} https://en.wikipedia.org/wiki/Upper_ontology]

name::
* McsEngl.Knlgbase.wordnet,
* McsEngl.Wnkm'knowledge-base,

evaluation of Wnkm

description::
· WordNet did a very good job of grouping nouns, adjectives, adverbs, and verbs by the concept they denote.
· what we need is to have together the-nouns, adjectives, verbs, ... that denote ONE CONCEPT, and this is what this site is doing.
· this presentation presupposes a different understanding (different from the-mainstream) of what a-language is.
[hmnSngo.{2023-05-19}]

name::
* McsEngl.Wnkm'evaluation,

info-resource of Wnkm

description::
* https://wordnet.princeton.edu/,
* online: http://wordnetweb.princeton.edu/perl/webwn,
* concept neighbourhoods: http://www.ketlab.org.uk/wordnet.html,

name::
* McsEngl.Wnkm'Infrsc,

Knlgmngr.CYC-{1984-1995-2023} (link)

Knlgmngr.KL-ONE-{1982}

description::
× generic: frame-language--manager,

"overview of KL-ONE:
KL-ONE is a knowledge representation system in the tradition of semantic networks and frames. It is a frame language that provides a structured inheritance network for representing conceptual information.
*
* **Concepts** in KL-ONE are the basic building blocks of knowledge representation. Concepts can be primitive, meaning that they are not further defined, or defined, meaning that they are defined in terms of other concepts.
*
* **Roles** are relationships between concepts. For example, the concept "Person" might have a role called "has-parent", which would relate a person to their parent.
*
* **Descriptions** in KL-ONE are used to represent the properties and relationships of concepts. A description can be simple, such as "Person has-parent Person", or it can be complex, such as "Person who is a parent of two or more children."
*
* **Inheritance** in KL-ONE is used to organize concepts into a hierarchy. For example, the concept "Person" might inherit from the concept "Animal". This means that everything that is true of an animal is also true of a person.
*
* **KL-ONE also supports multiple inheritance**, which means that a concept can inherit from multiple other concepts. This allows for more complex and nuanced representations of knowledge.
*
* **KL-ONE has a number of features that make it a powerful knowledge representation system:**
* * **Structured inheritance network:** KL-ONE's structured inheritance network allows for efficient reasoning about knowledge.
* * **Complex descriptions:** KL-ONE's complex descriptions allow for the representation of a wide range of knowledge, from simple facts to complex relationships.
* * **Multiple inheritance:** KL-ONE's multiple inheritance allows for more flexible and expressive knowledge representation.
* * **Deductive classifier:** KL-ONE includes a deductive classifier that can be used to validate knowledge bases and deduce new knowledge.
*
* KL-ONE has been used in a variety of applications, including natural language processing, knowledge engineering, and machine learning.
*
* Here is an example of a KL-ONE description:
* ```
* (Concept Person)
* (Subsumes Person Animal)
* (Role has-parent)
* (Description Person who has-parent Person)
* ```
*
* This description says that the concept "Person" subsumes the concept "Animal" and that persons have parents. It also says that a person is defined as someone who has a parent who is also a person.
*
* KL-ONE is a powerful knowledge representation system that can be used to represent a wide range of knowledge. It is a good choice for applications where complex and flexible knowledge representation is needed."
[{2023-10-30 retrieved} https://bard.google.com/chat/1ad6ed38e0932b22]
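
· the structured-inheritance idea can also be sketched in Python; this invented example (not KL-ONE syntax) shows roles being inherited through the concept hierarchy, including multiple inheritance, plus a simple subsumption test:

```python
# Toy frame-style concepts with inheritance of roles.
class Concept:
    def __init__(self, name, parents=(), roles=()):
        self.name = name
        self.parents = list(parents)
        self.local_roles = set(roles)

    def roles(self):
        """Roles defined here plus all roles inherited from parents."""
        inherited = set()
        for p in self.parents:
            inherited |= p.roles()
        return self.local_roles | inherited

    def subsumed_by(self, other):
        """True if `other` is this concept or one of its ancestors."""
        return other is self or any(p.subsumed_by(other) for p in self.parents)

animal = Concept("Animal", roles={"has-parent"})
person = Concept("Person", parents=[animal], roles={"has-name"})

print(person.roles())              # {'has-parent', 'has-name'}
print(person.subsumed_by(animal))  # True: every Person is an Animal
```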

name::
* McsEngl.KL-ONE-Knlgmngr!⇒klmrKlone,
* McsEngl.klmrKlone!=KL-ONE--knowledge-manager,
* McsEngl.Knlgmngr.021-KL-ONE!⇒klmrKlone,
* McsEngl.Knlgmngr.KL-ONE!⇒klmrKlone,

evoluting of klmrKlone

description::
"evolution of KL-ONE:
KL-ONE was originally developed in the late 1970s by Ronald J. Brachman and others at Bolt, Beranek and Newman. It was one of the first knowledge representation systems to use a structured inheritance network and complex descriptions.

KL-ONE has evolved over time, with new features being added and existing features being improved. Some of the major milestones in the evolution of KL-ONE include:
* **1980:** KL-ONE is first described in a paper by Brachman.
* **1981:** The first KL-ONE workshop is held.
* **1982:** The first KL-ONE implementation is released.
* **1983:** Brachman and James G. Schmolze publish a paper that describes the semantics and classification of KL-ONE.
* **1985:** The KL-ONE system is used to build the CYC knowledge base.
* **1987:** The KL-ONE family of knowledge representation languages is standardized by the IEEE.
* **1990s:** KL-ONE is used to build a number of other knowledge bases, including the WordNet lexical database and the UMLS Metathesaurus.
* **2000s:** KL-ONE continues to be used in a variety of research and development projects.

Some of the key features that have been added to KL-ONE over time include:
* **Support for multiple inheritance**
* **A deductive classifier**
* **The ability to represent temporal and spatial knowledge**
* **The ability to represent uncertainty**

KL-ONE has also been extended to support new applications, such as natural language processing, machine learning, and web semantics.

Today, KL-ONE is no longer a single system. Instead, it is a family of knowledge representation languages that are based on the original KL-ONE system. Some of the most well-known KL-ONE languages include:
* **Description Logics (DL)**
* **Web Ontology Language (OWL)**
* **Resource Description Framework (RDF)**

These languages are used to build a wide variety of knowledge bases, including the Semantic Web, the Global Biodiversity Information Facility, and the Gene Ontology.

KL-ONE has had a major impact on the field of knowledge representation. It has been used to build a wide variety of knowledge bases and has inspired the development of many new knowledge representation languages. KL-ONE continues to be an important tool for researchers and developers who need to build knowledge-based systems."
[{2023-10-30 retrieved} https://bard.google.com/chat/1ad6ed38e0932b22]

name::
* McsEngl.evoluting-of-klmrKlone,
* McsEngl.klmrKlone'evoluting,

Knlgmngr.MYCIN-{1974..1990s}

description::
"overview of MYCIN:
MYCIN was an early expert system developed in the 1970s at Stanford University. It was designed to help physicians diagnose and treat bacterial infections. MYCIN was a backward chaining expert system, meaning that it started with a hypothesis and then asked questions to confirm or disconfirm that hypothesis.

MYCIN's knowledge base consisted of over 500 production rules. Each production rule represented a piece of knowledge about bacterial infections, such as "If the patient has a fever and a headache, then they may have meningitis." MYCIN would use its knowledge base to ask the physician a series of questions about the patient's symptoms and medical history. Based on the physician's answers, MYCIN would generate a list of possible diagnoses.

MYCIN would then recommend a course of treatment for each diagnosis. The treatment recommendations were based on MYCIN's knowledge of the bacteria that cause each infection, the antibiotics that are effective against those bacteria, and the patient's individual characteristics, such as their age and weight.

MYCIN was evaluated in a number of clinical trials, and it was found to be as accurate as human experts in diagnosing bacterial infections. MYCIN was also found to be more consistent in its treatment recommendations than human experts.

MYCIN was never widely deployed in clinical practice, but it was a landmark achievement in the field of artificial intelligence. MYCIN is considered to be the "granddaddy of all expert systems."

Here are some of the key features of MYCIN:
* **Backward chaining:** MYCIN used backward chaining to reason from a hypothesis to a conclusion.
* **Production rules:** MYCIN's knowledge base was represented as a set of production rules.
* **Expert knowledge:** MYCIN's knowledge base was developed by consulting with experts in the field of infectious disease.
* **Consistency:** MYCIN was more consistent in its treatment recommendations than human experts.

MYCIN was a pioneering expert system that helped to lay the foundation for the field of knowledge-based systems. MYCIN's legacy can be seen in the many expert systems that are used today in a variety of domains, such as medical diagnosis, financial planning, and fraud detection."
[{2023-10-27 retrieved} https://bard.google.com/chat/2afefc8b86a6d9bd]
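
· a minimal Python sketch of the rule-plus-certainty-factor idea described above; the rules and CF values here are invented, and only the combination formula CF = CF1 + CF2*(1 - CF1) follows MYCIN's published scheme for two positive certainty factors:

```python
# Toy MYCIN-style production rules with certainty factors (CFs).
# Each rule: (required findings, hypothesis, certainty factor).
RULES = [
    ({"fever", "stiff_neck", "headache"}, "meningitis", 0.7),
    ({"fever", "headache"}, "meningitis", 0.4),
]

def combine_cf(cf1, cf2):
    """MYCIN's combination of two positive certainty factors."""
    return cf1 + cf2 * (1 - cf1)

def evaluate(findings):
    """Fire every rule whose premises hold and combine CFs per hypothesis.
    (Real MYCIN drove this by backward chaining from each hypothesis,
    asking the physician for findings it did not yet know.)"""
    belief = {}
    for premises, hypothesis, cf in RULES:
        if premises <= findings:
            belief[hypothesis] = combine_cf(belief.get(hypothesis, 0.0), cf)
    return belief

print(evaluate({"fever", "headache", "stiff_neck"}))
# {'meningitis': 0.82}: 0.7 combined with 0.4
```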

name::
* McsEngl.Knlgmngr.002-MYCIN,
* McsEngl.Knlgmngr.MYCIN,
* McsEngl.MYCIN-Knlgmngr,
* McsEngl.klmrMycin!=MYCIN--knowledge-manager,

evoluting of klmrMycin

description::
"evolution of MYCIN:
MYCIN was one of the first expert systems to be developed, and it had a significant impact on the evolution of artificial intelligence. It was developed at Stanford University in the early 1970s by Edward Shortliffe and Bruce Buchanan, with the goal of developing a computer system that could assist physicians in the diagnosis and treatment of infectious diseases.

MYCIN was a rule-based system, meaning that it used a set of rules to infer conclusions from data. The rules were developed in collaboration with infectious disease experts, and they encoded their knowledge about the diagnosis and treatment of infections.

MYCIN was able to achieve impressive results in early studies. For example, in one study, MYCIN was able to provide therapy recommendations that were comparable to those of human experts in over 70% of cases.

However, MYCIN also had some limitations. One limitation was that it was very domain-specific, meaning that it could only be used to diagnose and treat a limited range of infections. Another limitation was that the knowledge base was difficult to maintain and update.

Despite its limitations, MYCIN was a groundbreaking achievement in artificial intelligence. It demonstrated that it was possible to develop computer systems that could encode and apply human expertise in complex domains.

MYCIN's development also led to the development of expert system shells, which are software tools that can be used to develop expert systems for a variety of domains. Expert system shells have been used to develop expert systems for a wide range of applications, including medical diagnosis, financial planning, and engineering design.

MYCIN is no longer in use today, but it had a significant impact on the evolution of artificial intelligence. It pioneered the development of rule-based expert systems and expert system shells, which are still used today to develop intelligent systems for a variety of applications.

Here is a brief timeline of the evolution of MYCIN:
* 1972: Work on MYCIN begins at Stanford University.
* 1974: MYCIN is first able to provide therapy recommendations for meningitis cases.
* 1976: MYCIN is tested in a clinical setting at Stanford Medical School.
* 1980: E-MYCIN, an expert system shell based on MYCIN, is released.
* 1984: The book *Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project* is published.
* 1990s: MYCIN is no longer actively developed or used.

MYCIN's evolution reflects the broader evolution of artificial intelligence. Early AI systems, such as MYCIN, were rule-based and domain-specific. Later AI systems, such as machine learning systems, are able to learn from data and generalize to new domains.

Today, AI systems are being used for a wide range of applications, including medical diagnosis, financial planning, and self-driving cars. MYCIN was a pioneer in this field, and it continues to inspire AI researchers today."
[{2023-10-30 retrieved} https://bard.google.com/chat/d987193f5687a98a]

name::
* McsEngl.evoluting-of-klmrMycin,
* McsEngl.klmrMycin'evoluting,

Knlgmngr.CODE4

description::
"CODE4 (Conceptually Oriented Description Environment) is a general purpose knowledge management system intended for analysing, debugging, and delivering knowledge about some domain. It is designed to be easily adaptable to many applications, such as natural language processing, software specification and design, expert systems, general terminological analysis, or teaching subjects such as biology or Unix. It can be learned by non-computer people in a few days."
[{2023-10-27 retrieved} https://www.site.uottawa.ca/kaml/CODE4.html]

"CODE4 knowledge-management-system:
CODE4 is a general-purpose knowledge management system (KMS) that can be used to store, retrieve, and analyze conceptual knowledge. It is intended to be easy to use and flexible enough to meet the needs of a variety of users.

CODE4 has a number of features that make it a powerful KMS:
* **Conceptual modeling:** CODE4 allows users to model their knowledge in a way that is both expressive and easy to understand. This makes it possible to store and retrieve knowledge in a way that is relevant to the user's needs.
* **Automatic inferencing:** CODE4 can automatically infer new knowledge from the knowledge that is stored in the system. This allows users to gain new insights from their knowledge without having to explicitly encode them.
* **Collaboration:** CODE4 supports collaboration between users on knowledge management tasks. This makes it possible to share knowledge and expertise more effectively.
* **Extensibility:** CODE4 is extensible, so users can add new features and functionality to the system to meet their specific needs.

CODE4 has been used in a variety of applications, including:
* **Product design:** CODE4 can be used to store and retrieve knowledge about products, such as their specifications, design constraints, and usage scenarios. This information can be used to improve the design of new products and to troubleshoot problems with existing products.
* **Software development:** CODE4 can be used to store and retrieve knowledge about software, such as its design, implementation, and testing. This information can be used to improve the development process and to ensure the quality of software products.
* **Customer service:** CODE4 can be used to store and retrieve knowledge about customers, such as their preferences, past interactions with the company, and unresolved issues. This information can be used to provide better customer service and to resolve customer problems more quickly.

Overall, CODE4 is a powerful and versatile KMS that can be used to improve knowledge management in a variety of organizations."
[{2023-10-27 retrieved} https://bard.google.com/chat/a911ade8e47b3291]

name::
* McsEngl.CODE4-Knlgmngr,
* McsEngl.Knlgmngr.005-CODE4,
* McsEngl.Knlgmngr.CODE4,

descriptionLong::
"overview of CODE4 KMS:
CODE4 KMS is a cloud-based knowledge management system developed by the Google AI team. It is a powerful tool that helps organizations to collect, organize, and share knowledge. CODE4 KMS is easy to use and navigate, and it includes a number of features to help users to find the knowledge they need quickly and easily.

Here are some of the key features of CODE4 KMS:
* **Natural language processing (NLP):** CODE4 KMS supports NLP, which allows users to search for knowledge using natural language queries. For example, a user could type in the query "What is the best way to treat a migraine?" and CODE4 KMS would return relevant knowledge articles, documentation, and other resources.
* **Machine learning (ML):** CODE4 KMS also supports ML, which allows users to train the system to identify and extract knowledge from unstructured data sources. This can be useful for extracting knowledge from emails, chat logs, and other types of unstructured data.
* **Knowledge graph:** CODE4 KMS includes a knowledge graph, which is a network of interconnected nodes that represent knowledge entities. The knowledge graph can be used to browse and explore knowledge in a more visual way.
* **Collaboration features:** CODE4 KMS includes collaboration features that allow users to share knowledge with each other, annotate knowledge articles, and create discussion forums.
* **Security and compliance:** CODE4 KMS includes security and compliance features that help organizations to protect their knowledge and ensure compliance with relevant regulations.

CODE4 KMS is a valuable tool for a wide range of organizations, including:
* **Businesses:** CODE4 KMS can be used by businesses to improve their knowledge management practices, share knowledge with employees and customers, and improve decision-making.
* **Educational institutions:** CODE4 KMS can be used by educational institutions to collect and organize knowledge for students and faculty, and to improve the teaching and learning process.
* **Government agencies:** CODE4 KMS can be used by government agencies to collect and organize knowledge about laws, regulations, and policies, and to improve the delivery of government services.
* **Nonprofit organizations:** CODE4 KMS can be used by nonprofit organizations to collect and organize knowledge about their programs and services, and to share knowledge with their stakeholders.

Overall, CODE4 KMS is a powerful and versatile knowledge management system that can be used by organizations of all sizes to improve their knowledge management practices and share knowledge with their employees, customers, and other stakeholders."
[{2023-10-30 retrieved} https://bard.google.com/chat/4122071e73b35c2]

Knlgmngr.IKARUS

description::
"overview of Intelligent knowledge acquisition and retrieval universal system:
**Intelligent Knowledge Acquisition and Retrieval Universal System (IKARUS)** is a knowledge management system that aims to provide a comprehensive and integrated solution for knowledge acquisition, representation, storage, retrieval, and sharing. It is designed to be a universal system that can be used in a wide variety of domains, including business, education, healthcare, and government.

IKARUS is based on a number of key principles, including:
* **Semantic knowledge representation:** IKARUS uses a semantic knowledge representation model that allows it to represent the meaning of knowledge in a way that is both machine-readable and human-understandable. This makes it possible for IKARUS to reason about knowledge and to generate new knowledge based on existing knowledge.
* **Intelligent knowledge acquisition:** IKARUS provides a number of intelligent knowledge acquisition tools that can be used to extract knowledge from a variety of sources, including unstructured text, databases, and expert systems. This makes it possible to create and maintain large knowledge bases without the need for manual data entry.
* **Machine learning:** IKARUS uses machine learning techniques to learn from user interactions and to improve its performance over time. This makes IKARUS more intelligent and adaptive, and allows it to better meet the needs of its users.

IKARUS provides a number of features that make it a powerful and versatile knowledge management system, including:
* **Knowledge search and retrieval:** IKARUS provides a powerful knowledge search and retrieval engine that allows users to find the information they need quickly and easily. Users can search for knowledge using a variety of criteria, including keywords, semantic concepts, and relationships between concepts.
* **Knowledge reasoning and inference:** IKARUS can reason about knowledge and generate new knowledge based on existing knowledge. This makes it possible for IKARUS to answer complex questions that would be difficult or impossible to answer using traditional search engines.
* **Knowledge sharing and collaboration:** IKARUS allows users to share knowledge and collaborate with each other. Users can create and share knowledge bases, and they can work together to solve problems and make decisions.

IKARUS is a powerful and versatile knowledge management system that can be used in a wide variety of domains. It is still under development, but it has the potential to revolutionize the way that we acquire, manage, and use knowledge.

Here are some potential applications of IKARUS:
* **Business:** IKARUS can be used to manage customer knowledge, product knowledge, and market knowledge. It can also be used to develop expert systems and decision support systems.
* **Education:** IKARUS can be used to create and manage knowledge bases for educational resources, such as textbooks, articles, and videos. It can also be used to develop intelligent tutoring systems and adaptive learning systems.
* **Healthcare:** IKARUS can be used to manage patient knowledge, medical knowledge, and drug knowledge. It can also be used to develop expert systems for medical diagnosis and treatment planning.
* **Government:** IKARUS can be used to manage citizen knowledge, policy knowledge, and law knowledge. It can also be used to develop expert systems for public policy analysis and decision making.

IKARUS is a powerful tool that has the potential to revolutionize the way that we acquire, manage, and use knowledge. It is still under development, but it has the potential to have a significant impact on a wide range of industries and sectors."
[{2023-10-28 retrieved} https://bard.google.com/chat/eae955702a49c781]

name::
* McsEngl.IKARUS-Knlgmngr,
* McsEngl.Knlgmngr.010-IKARUS,
* McsEngl.Knlgmngr.IKARUS,
* McsEngl.klmrIkarus!=IKARUS--knowledge-manager,

descriptionLong::
"evolution of IKARUS-KRS:
IKARUS-KRS is an AI system developed by the German Aerospace Center (DLR). It is a knowledge-based reasoning system that is designed to solve complex problems in the areas of aerospace, robotics, and artificial intelligence.

**IKARUS-KRS's early years**
IKARUS-KRS was first introduced in the early 2000s. It was developed in response to the need for an AI system that could reason about complex systems and make decisions in real time.
IKARUS-KRS is based on a number of different AI technologies, including:
* Knowledge representation: IKARUS-KRS uses a knowledge representation language called KIF to store its knowledge. KIF is a powerful language that allows IKARUS-KRS to represent a wide range of different types of knowledge, including facts, rules, and relationships.
* Reasoning: IKARUS-KRS uses a variety of reasoning algorithms to solve problems. These algorithms include forward chaining, backward chaining, and abduction.
* Planning: IKARUS-KRS can also be used to plan and execute actions. It does this by using a planning algorithm called A*.

**IKARUS-KRS's evolution**
Since its introduction, IKARUS-KRS has evolved to become a more powerful and versatile AI system. Some of the key improvements that have been made to IKARUS-KRS include:
* The addition of new knowledge representation languages: In addition to KIF, IKARUS-KRS now also supports other knowledge representation languages, such as OWL and SWRL. This makes it easier to integrate IKARUS-KRS with other AI systems.
* The development of new reasoning algorithms: IKARUS-KRS now includes a number of new reasoning algorithms, such as non-monotonic reasoning and probabilistic reasoning. These algorithms allow IKARUS-KRS to reason about more complex problems.
* The improvement of planning capabilities: IKARUS-KRS's planning capabilities have also been improved. It can now plan for more complex tasks and it can also take into account uncertainty in the environment.

**IKARUS-KRS today**
Today, IKARUS-KRS is used in a variety of different applications, including:
* Aerospace: IKARUS-KRS is used to support the development of new aerospace systems, such as aircraft and spacecraft. It is also used to monitor and control aerospace systems during operation.
* Robotics: IKARUS-KRS is used to develop and control robots. It is also used to plan and execute robot missions.
* Artificial intelligence: IKARUS-KRS is used to develop new AI algorithms and systems. It is also used to evaluate the performance of AI systems.

**IKARUS-KRS's future**
DLR is continuing to invest in the development of IKARUS-KRS. Some of the areas that DLR is focusing on include:
* The development of new knowledge representation languages: DLR is working on developing new knowledge representation languages that are better suited for representing the knowledge that is needed to solve real-world problems.
* The development of new reasoning algorithms: DLR is also working on developing new reasoning algorithms that are more efficient and effective.
* The improvement of planning capabilities: DLR is working on improving IKARUS-KRS's planning capabilities so that it can plan for more complex tasks and take into account more uncertainty in the environment.

As AI continues to evolve, IKARUS-KRS is poised to play an increasingly important role in the development and deployment of new AI systems."
[{2023-10-30 retrieved} https://bard.google.com/chat/def7b1a45dc3d641]
===
"evolution of IKARUS-KMS:
IKARUS-KMS is a knowledge management system (KMS) developed by the Swiss Federal Institute of Technology in Zurich (ETH Zurich). It was originally released in 1999 and has since evolved to become one of the most powerful and versatile KMSs available.

**IKARUS-KMS in the early years**
The early versions of IKARUS-KMS focused on providing a central repository for storing and managing knowledge. It could be used to store a variety of different types of content, including documents, images, videos, and web pages. IKARUS-KMS also provided a number of features for organizing and searching knowledge, such as tagging, versioning, and full-text search.

**The evolution of IKARUS-KMS**
Over the years, IKARUS-KMS has evolved to become a more sophisticated and feature-rich KMS. Some of the key enhancements that have been made include:
* Support for collaborative knowledge management: IKARUS-KMS now supports a variety of features for collaborative knowledge management, such as discussion forums, wikis, and blogs. This allows users to work together to create, share, and manage knowledge.
* Integration with other systems: IKARUS-KMS can now be integrated with a variety of other systems, such as content management systems (CMSs), enterprise resource planning (ERP) systems, and customer relationship management (CRM) systems. This allows users to manage their knowledge in a single place and to access it from anywhere.
* Support for new content types: IKARUS-KMS now supports a wider range of content types, including social media posts, emails, and chat transcripts. This allows users to capture and manage all of their knowledge in one place.

**IKARUS-KMS today**
Today, IKARUS-KMS is used by a wide range of organizations, including universities, research institutes, government agencies, and businesses. It is a popular choice for organizations that need a powerful and versatile KMS to manage their knowledge assets.

**The future of IKARUS-KMS**
The developers of IKARUS-KMS are continuing to invest in the platform and to develop new features. Some of the areas that they are focusing on include:
* Support for artificial intelligence (AI): IKARUS-KMS is being integrated with AI technologies to make it more intelligent and user-friendly. For example, IKARUS-KMS can now use AI to recommend relevant content to users and to automate tasks such as knowledge extraction and classification.
* Support for the Semantic Web: IKARUS-KMS is being integrated with the Semantic Web to make it easier to share and link knowledge across different systems and organizations.
* Support for new devices and platforms: IKARUS-KMS is being developed to support a wider range of devices and platforms, including mobile devices and wearable devices.

As knowledge management becomes increasingly important for organizations, IKARUS-KMS is well-positioned to remain a leading KMS platform."
[{2023-10-30 retrieved} https://bard.google.com/chat/def7b1a45dc3d641]

info-resource of klmrIkarus

description::
* https://web.archive.org/web/20010805153754/http://www.site.uottawa.ca:80/~kavanagh/Ikarus/IkarusInfo.html,

name::
* McsEngl.klmrIkarus'Infrsc,

relation-to-programing-language of lagKnlg

description::
"knowledge-representation-language vs programming-language:
Knowledge representation languages (KRLs) and programming languages serve different purposes and have distinct characteristics. Here's a comparison between the two:

1. Purpose:
- Knowledge Representation Language (KRL): KRLs are designed to represent and manipulate knowledge or information in a form that is understandable to both humans and machines. They are used in artificial intelligence (AI) and knowledge-based systems to model, store, and reason about information.
- Programming Language: Programming languages are used to write instructions and algorithms that tell a computer how to perform specific tasks. They are used for developing software, applications, and systems.

2. Abstraction:
- KRLs typically abstract away from low-level implementation details and focus on expressing concepts, facts, and relationships.
- Programming languages often deal with low-level details and are used to specify step-by-step procedures for solving problems.

3. Syntax and Semantics:
- KRLs often have a more natural language-like syntax and semantics to facilitate easy understanding by domain experts.
- Programming languages have syntax and semantics that are designed for precise and unambiguous execution by a computer.

4. Human Readability:
- KRLs aim for human readability and comprehension, as they are often used for knowledge sharing, expert systems, and knowledge representation.
- Programming languages are primarily meant for machine execution, and their syntax may be more complex and less human-friendly.

5. Inference and Reasoning:
- KRLs are designed to support various forms of reasoning and inference, such as deduction, induction, and abduction, which are essential in AI and expert systems.
- Programming languages do not inherently provide reasoning capabilities and are focused on algorithm execution.

6. Examples:
- Examples of KRLs include RDF (Resource Description Framework), OWL (Web Ontology Language), and Prolog (a logic programming language often used for knowledge representation).
- Examples of programming languages include Python, Java, C++, and JavaScript.

7. Use Cases:
- KRLs are used in applications like semantic web, expert systems, natural language processing, and knowledge-based systems.
- Programming languages are used for developing a wide range of software applications, including web development, mobile apps, desktop applications, and system software.

In summary, KRLs and programming languages are distinct in their purposes and design. KRLs are primarily used for representing and manipulating knowledge in a human-readable format, while programming languages are used to instruct computers to perform specific tasks. While there may be some overlap and integration in certain applications (such as using KRLs to define knowledge bases for AI systems), they remain fundamentally different in their roles and functionalities."
[{2023-10-23 retrieved} https://chat.openai.com/c/7b4ed7ec-2f8e-4d0b-888c-7ee3aed90417]
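· to make the-contrast concrete, here is a-minimal Python sketch (assuming the rdflib library; the ex: names are illustrative only): the-declarative KRL side states WHAT holds as triples and asks with a-query, while the-programming-language side spells out HOW to compute the-same answer.

from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.Socrates, EX.isA, EX.Human))   # declarative: state facts, not procedures
g.add((EX.Plato, EX.isA, EX.Human))

# KRL side: query the knowledge; no hand-written search procedure
for row in g.query("SELECT ?x WHERE { ?x <http://example.org/isA> <http://example.org/Human> }"):
    print(row.x)

# programming-language side: spell out the steps explicitly
def humans(triples):
    return [s for (s, p, o) in triples if p == EX.isA and o == EX.Human]
print(humans(list(g)))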

name::
* McsEngl.lagCpgm'relation-to-knowledge-language,
* McsEngl.lagKnlg'relation-to-programing-language,

descriptionLong::
"knowledge-representation-language vs programming-language:
**Knowledge representation languages (KRLs)** are formal languages used to represent knowledge in a way that can be understood and processed by computers. KRLs are used in a variety of artificial intelligence applications, such as expert systems, natural language processing, and robotics.

**Programming languages**, on the other hand, are used to write programs that perform specific tasks. Programming languages are more general-purpose than KRLs, and can be used to develop a wide range of applications, from web browsers to video games.

**Key differences between KRLs and programming languages:**
* **Purpose:** KRLs are designed specifically to represent knowledge, while programming languages are designed to write programs.
* **Expressiveness:** KRLs are typically more expressive than programming languages, meaning that they can represent a wider range of knowledge concepts.
* **Ease of use:** Programming languages are generally easier to use than KRLs, as they are designed to be programmed by humans.
* **Performance:** KRLs can be slower than programming languages, as they need to perform additional reasoning tasks.

**Examples of KRLs:**
* OWL (Web Ontology Language)
* RDF (Resource Description Framework)
* Prolog
* Description logics
* Frames

**Examples of programming languages:**
* Python
* Java
* C/C++
* JavaScript
* Go

**Which one to use?**
The choice of whether to use a KRL or a programming language depends on the specific application. If the goal is to represent a large amount of knowledge and perform complex reasoning tasks, then a KRL is a good choice. If the goal is to develop a specific application, such as a web browser or a video game, then a programming language is a better choice.

Here are some specific examples of when to use a KRL or a programming language:
* **Use a KRL:**
* To develop an expert system that can diagnose diseases or recommend products.
* To build a natural language processing system that can understand and generate human language.
* To develop a robotic system that can plan and execute actions in the real world.
* **Use a programming language:**
* To develop a web browser that displays web pages.
* To build a video game that allows users to interact with a virtual world.
* To develop a mobile app that helps users manage their tasks.

In some cases, it may be beneficial to use a combination of a KRL and a programming language. For example, a KRL can be used to represent the knowledge of an expert system, and a programming language can be used to implement the reasoning engine.

Ultimately, the best way to choose between a KRL and a programming language is to consider the specific requirements of the application."
[{2023-10-23 retrieved} https://bard.google.com/chat/1217e0651f4ac5a6]
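· the-last point (KRL for the-knowledge, programming-language for the-reasoning-engine) can-be-illustrated in a-few lines of Python; this is a-hedged sketch, not a-real expert-system shell, and all names are invented.

facts = {("socrates", "is_a", "human")}
# rule: IF (?x is_a human) THEN (?x is_a mortal)
rules = [(("?x", "is_a", "human"), ("?x", "is_a", "mortal"))]

def forward_chain(facts, rules):
    # reasoning engine: keep applying rules until no new fact is derived
    changed = True
    while changed:
        changed = False
        for (cond, concl) in rules:
            for (s, p, o) in list(facts):
                if p == cond[1] and o == cond[2]:      # single-condition rules only
                    derived = (s, concl[1], concl[2])
                    if derived not in facts:
                        facts.add(derived)
                        changed = True
    return facts

print(forward_chain(facts, rules))
# {('socrates', 'is_a', 'human'), ('socrates', 'is_a', 'mortal')}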

knowledge-tech of lagKnlg

description::
· knowledge-technology is any tech related to knowledge: knowledge-language, knowledge-base, knowledge-manager, natural-language-processing, artificial-intelligence, ...

name::
* McsEngl.knowledge-technology!⇒techKnlg,
* McsEngl.techKnlg!=knowledge-technology,

info-resource of lagKnlg

addressWpg::
*

name::
* McsEngl.lagKnlg'Infrsc,

DOING of lagKnlg

description::
*

name::
* McsEngl.lagKnlg'doing,

evoluting of techKnlg

description::
"evolution of knowledge-representation-languages:
The evolution of knowledge representation languages (KRLs) can be divided into four broad phases:

**Phase 1: Early KRLs (1960s-1970s)**
Early KRLs were focused on representing simple facts and relationships. They were often based on Lisp or Prolog, and they used a variety of different representations, such as logical expressions, frames, and rules.
Some examples of early KRLs include:
* **SHRDLU** (1970): A language for representing and reasoning about the physical world.
* **ELIZA** (1966): A language for representing and reasoning about human natural language.
* **MYCIN** (1972): A language for representing and reasoning about medical knowledge.

**Phase 2: Ontologies and Description Logics (1980s-1990s)**
In the 1980s and 1990s, there was a growing interest in using KRLs to represent ontologies, which are formal representations of knowledge about a particular domain. This led to the development of new KRLs based on description logics, which are a family of formal logics that are well-suited for representing ontologies.
Some examples of KRLs from this phase include:
* **LOOM** (1987): One of the first ontological KRLs.
* **KL-ONE** (1985): Another early ontological KRL.
* **SHOE** (1994): A KRL based on description logics that was designed for representing ontologies in a variety of domains.

**Phase 3: The Web Ontology Language (OWL) (2000s)**
In the 2000s, the Web Ontology Language (OWL) was developed as a standard KRL for representing ontologies on the Web. OWL is based on description logics, and it provides a rich set of features for representing different types of knowledge, such as classes, individuals, properties, and relationships.
OWL is now the most widely used KRL for representing ontologies in a variety of domains, including bioinformatics, medicine, and engineering.

**Phase 4: Modern KRLs (2010s-present)**
In recent years, there has been a growing interest in developing new KRLs that are more expressive and powerful than OWL. Some of these new KRLs are based on new formal logics, such as modal logics and temporal logics. Others are based on new machine learning techniques, such as deep learning.
Some examples of modern KRLs include:
* **SWRL** (2004): A KRL that combines OWL with rules.
* **SPIN** (2011): A KRL that allows users to add new semantics to OWL ontologies.
* **RDFS++** (2012): A KRL that extends OWL with new features for representing temporal and spatial knowledge.
* **WebReason** (2016): A KRL that is based on modal logic and allows users to reason about counterfactuals and possible worlds.

The evolution of KRLs has been driven by a number of factors, including the need to represent more complex knowledge, the need to reason about knowledge in more sophisticated ways, and the need to integrate knowledge from different sources.

Modern KRLs are still under development, but they have the potential to revolutionize the way we represent and reason about knowledge in AI systems."
[{2023-10-23 retrieved} https://bard.google.com/chat/3c669bacca1bb3be]

"evolution of knowledge-representation-languages:
The evolution of knowledge representation languages is a fascinating journey that has played a significant role in the development of artificial intelligence and related fields. These languages are designed to capture and model knowledge in a way that can be used by computers for reasoning and problem-solving. Here is an overview of the key milestones and stages in the evolution of knowledge representation languages:

1. Early Symbolic Notation (1950s-1960s):
- In the early days of AI research, knowledge representation involved symbolic notations. Researchers used formal logic, such as first-order logic and propositional logic, to represent knowledge.
- Researchers also utilized early programming languages to represent knowledge, which were often ad hoc and lacked formal semantics.

2. Frame-Based Systems (late 1960s-1970s):
- Frame-based knowledge representation systems, such as Minsky's "frames," introduced the idea of organizing knowledge in structured entities called frames. These frames contained slots and fillers for organizing and storing information.

3. Semantic Networks (1960s-1980s):
- Semantic networks represented knowledge using nodes and links to connect concepts and their relationships. This approach made it easier to model hierarchical and associative relationships between concepts.

4. Rule-Based Systems (1970s-1980s):
- Rule-based systems used production rules to represent knowledge in the form of "if-then" statements. These systems were particularly useful for expert systems and rule-based reasoning.

5. Description Logics (1980s-present):
- Description logics are a family of formal knowledge representation languages that are based on first-order logic and set theory. They underpin modern ontology languages and are used in the Semantic Web and ontology modeling.

6. Ontology Languages (late 1990s-present):
- Ontologies, represented using languages like OWL (Web Ontology Language), RDF (Resource Description Framework), and RDFS (Resource Description Framework Schema), have gained prominence for representing knowledge in a structured, machine-readable format. These ontologies are used for semantic web applications and knowledge organization.

7. CycL (1984-present):
- CycL is a knowledge representation language associated with the Cyc project, which aims to capture common-sense knowledge in a machine-readable format. It employs a rich, formalized language for representing a wide range of knowledge.

8. RDF and Linked Data (2000s-present):
- RDF and linked data principles have become fundamental for representing and connecting knowledge on the web. They allow for the creation of knowledge graphs and the integration of data from diverse sources.

9. Neural Networks and Embeddings (2010s-present):
- In recent years, deep learning techniques, particularly neural network-based embeddings, have played a significant role in learning and representing knowledge from large datasets. Embeddings like Word2Vec, GloVe, and BERT capture semantic relationships between words and concepts.

The evolution of knowledge representation languages reflects the ongoing quest to make computers understand and manipulate human knowledge more effectively. Modern knowledge representation often involves a combination of these approaches, and research continues to explore ways to make knowledge representation more efficient and adaptable for various AI applications."
[{2023-10-23 retrieved} https://chat.openai.com/c/d4587416-de37-4dd4-8641-a1bdd7b9a0ab]

name::
* McsEngl.evoluting-of-lagKnlg,
* McsEngl.lagKnlg'evoluting,
* McsEngl.evoluting-of-techKnlg,
* McsEngl.techKnlg'evoluting,

{2023-10-23}-McsHitp-creation::
· creation of current concept.

{2021..2030} of techKnlg

description::
* {2020s}: AI is used to develop new products and services, such as self-driving cars and virtual assistants.
* {2023}:
- Douglas-Lenat, creator of Cyc, died. [https://en.wikipedia.org/wiki/Douglas_Lenat]
- GPT-4 announcement on 2023-03-14.
* {2022} ChatGPT was released in November 2022 and quickly gained popularity due to its ability to engage in open-ended, fluent conversations. It could generate human-quality text, translate languages, write different kinds of creative content, and answer questions in an informative way. [https://bard.google.com/]
* {2021} RDF-star: is an extension of RDF that allows triples to be nested within other triples.

name::
* McsEngl.techKnlg'{2030i10},
* McsEngl.{techInfo'2030i10}-techKnlg,

{2011..2020} of techKnlg

description::
* {2010s}: AI is widely adopted in a variety of industries.
* {2020}: OWL 2 WSDL Profile is introduced, enabling the integration of OWL ontologies into Web Services Description Language (WSDL).
* {2018-2019} Transfer Learning: particularly using pre-trained models like BERT for natural language understanding, became a prominent strategy for various NLP tasks. It significantly reduced the need for massive labeled datasets. [https://chat.openai.com/]
* {2017}:
- the Transformer architecture was introduced in large-language-models.
- SHACL Shapes Constraint Language: is a W3C Recommendation that provides a way to describe and validate the structure and constraints of RDF graphs.
* {2016} WebReason: A KRL that is based on modal logic and allows users to reason about counterfactuals and possible worlds.
* {2015} Deep Reinforcement Learning: The combination of deep learning with reinforcement learning led to significant progress in AI, with algorithms like AlphaGo beating world champions in board games. [https://chat.openai.com/]
* {2014} JSON-LD: JSON Linked Data is a JSON-based serialization for RDF data.
* {2012} RDFS++: A KRL that extends OWL with new features for representing temporal and spatial knowledge.

name::
* McsEngl.techKnlg'{2020i10},
* McsEngl.{techInfo'2020i10}-techKnlg,

{2001..2010} of techKnlg

description::
* {2000s}: Deep learning revolutionizes AI research.
* {2009}: OWL 2 DL becomes a W3C Recommendation, offering full logical reasoning capabilities.
* {2008} SPARQL: is a query language and protocol for querying RDF data.
* {2004}:
- OWL 1.0 becomes a W3C Recommendation, providing a foundation for representing rich semantics on the web.
- SWRL: A KRL that combines OWL with rules.

name::
* McsEngl.techKnlg'{2010i10},
* McsEngl.{techInfo'2010i10}-techKnlg,

{1991..2000} of techKnlg

description::
* {1990s}: Statistical learning gained prominence, and algorithms like Support Vector Machines (SVM) and decision trees became popular. The field also saw the emergence of ensemble methods. [https://chat.openai.com/]
* Development of knowledge reasoning systems for real-world applications such as fraud detection and customer relationship management.
* {2000}:
- DAML: DARPA Agent Markup Language, developed by the US Department of Defense.
- OIL: Ontology Inference Layer, developed by the European Union.
* {1999} RDF 1.0 was published as a W3C Recommendation in 1999.
* {1995}:
- Cycorp, Inc., based in Austin, Texas: founded in January 1995 by AI pioneer Doug Lenat as a spin-off from MCC.
- ONTOLINGUA: public announcement of the Ontolingua-Server.

name::
* McsEngl.techKnlg'{2000i10},
* McsEngl.{techInfo'2000i10}-techKnlg,

{1981..1990} of techKnlg

description::
* {1980s}: Machine learning becomes a popular subfield of AI.
* The expert systems of the early 1980s proved to be difficult to build because of the challenge of capturing all of an expert's knowledge. They were also difficult to maintain, because their large rule bases had little organization. Most expert systems were stand-alone applications on dedicated workstations. [BYTE, JUL 1993, 107]
* {1987} LOOM: One of the first ontological KRLs.
* {1985} KL-ONE: Another early ontological KRL.
* {1984} Cyc: Lenat initiates the Cyc project at MCC.
* {1982} Doug Lenat: publishes Knowledge-Based Systems in Artificial Intelligence (1982, McGraw-Hill), co-authored with Randall Davis.

name::
* McsEngl.techKnlg'{1990i10},
* McsEngl.{techInfo'1990i10}-techKnlg,

{1971..1980} of techKnlg

description::
* In the 1970s, researchers began to develop knowledge-based systems (KBSs). KBSs are AI systems that use explicit knowledge to solve problems. This knowledge is represented in a variety of ways, such as logic rules, frames, and semantic nets. [https://bard.google.com/]
* Development of knowledge representation systems such as KL-ONE and PROLOG.
* {1972-1976} MYCIN is a notable example of early expert systems.

name::
* McsEngl.techKnlg'{1980i10},
* McsEngl.{techInfo'1980i10}-techKnlg,

{1961..1970} of techKnlg

description::
* Development of first-order logic as a formal language for representing and reasoning about knowledge.
* {1966} ELIZA: an early natural-language conversation program by Joseph Weizenbaum.
* {1965} Dendral is a notable example of early expert systems.

name::
* McsEngl.techKnlg'{1970i10},
* McsEngl.{techInfo'1970i10}-techKnlg,

{1951..1960} of techKnlg

description::
* {1959} General Problem Solver (GPS) system developed by Allen Newell and Herbert A. Simon.
* {1957} perceptron: by Frank Rosenblatt, an American psychologist, was a simplified model of a neuron, capable of learning and recognizing patterns. [https://bard.google.com/]
* {1956}: John McCarthy coins the term "artificial intelligence".

name::
* McsEngl.techKnlg'{1960i10},
* McsEngl.{techInfo'1960i10}-techKnlg,

{1941..1950} of techKnlg

description::
* {1950}: Alan Turing proposes the Turing test, a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.

name::
* McsEngl.techKnlg'{1950i10},
* McsEngl.{techInfo'1950i10}-techKnlg,

PARENT-CHILD-TREE of lagKnlg

parent-tree-of-lagKnlg::
* ,
* McsEngl.lagKnlg'parent-tree,

child-tree-of-lagKnlg::
* ,
* McsEngl.lagKnlg'child-tree,

WHOLE-PART-TREE of lagKnlg

whole-tree-of-lagKnlg::
* ,
* ... Sympan.
* McsEngl.lagKnlg'whole-tree,

part-tree-of-lagKnlg::
* ,
* McsEngl.lagKnlg'part-tree,

GENERIC-SPECIFIC-TREE of lagKnlg

generic-tree-of-lagKnlg::
* ,
* ... entity.
* McsEngl.lagKnlg'generic-tree,

specific-tree-of-lagKnlg::
* concept-lagKnlg,
* logo-lagKnlg,

* McsEngl.lagKnlg.specific-tree,

lagKnlg.concept (link)

lagKnlg.logo

description::
· logo-lagKnlg is a-knowledge-language that maps logo to "machine-meaning".
· also, its output is closer to logo-views.
· meaning-language is a-computer-language that represents human-meaning FROM human-logo-(text, speech).
· concept-language maps mind-concept-systems to "meaning".

name::
* McsEngl.formal-meaning-representation-language!⇒lagLgkl,
* McsEngl.lagCmpr.014-meaning!⇒lagLgkl,
* McsEngl.lagCmpr.meaning!⇒lagLgkl,
* McsEngl.lagKnlg.002-logo!⇒lagLgkl,
* McsEngl.lagKnlg.logo!⇒lagLgkl,
* McsEngl.lagLgkl!=LoGo--Knowledge-Language,
* McsEngl.lagMeaning!⇒lagLgkl,
* McsEngl.lagMeng!⇒lagLgkl,
* McsEngl.lagMeng!=meaning-lagCmpr!⇒lagLgkl,
* McsEngl.logo--knowledge-language!⇒lagLgkl,
* McsEngl.logo-lagKnlg!⇒lagLgkl,
* McsEngl.meaning-language!⇒lagLgkl,
* McsEngl.meaning-representation-language!⇒lagLgkl,

concept-mining of lagLgkl

description::
"overview of concept mining:
**Concept mining** is the process of extracting concepts from unstructured data, such as text, images, and audio. It is a subfield of knowledge discovery in databases (KDD) and text mining, and it uses a variety of techniques from artificial intelligence and statistics to identify and extract meaningful concepts from large datasets.

Concepts are abstract representations of real-world entities and ideas. They can be simple, such as "dog" or "house," or more complex, such as "democracy" or "artificial intelligence." Concepts can also be related to each other in various ways, such as through hypernymy (is-a relationship), meronymy (part-of relationship), or synonymy (similar meaning).

Concept mining is a powerful tool for discovering hidden knowledge in large datasets. It can be used for a variety of tasks, such as:
* **Topic modeling:** Identifying the main topics or themes in a collection of documents
* **Ontology construction:** Building knowledge bases that represent the relationships between concepts
* **Information retrieval:** Improving the accuracy and efficiency of search engines
* **Natural language processing:** Developing systems that can understand and generate human language
* **Machine learning:** Improving the performance of machine learning algorithms by providing them with better representations of the data

Concept mining is used in a wide range of industries, including healthcare, finance, marketing, and education. For example, concept mining can be used to identify the most common diseases and treatments in a medical database, or to discover the most popular products and brands in a social media dataset.

Here is a high-level overview of the concept mining process:
1. **Preprocessing:** The data is cleaned and prepared for analysis. This may involve removing noise, correcting errors, and stemming and lemmatizing words.
2. **Feature extraction:** Features are extracted from the data that are relevant to the concept mining task. For example, in text mining, features may include words, phrases, and syntactic patterns.
3. **Concept representation:** The features are used to represent the concepts in a way that is suitable for the concept mining algorithm. This may involve creating vectors of features or using a graph-based representation.
4. **Concept mining:** A concept mining algorithm is applied to the data to extract the concepts. There are a variety of different concept mining algorithms available, each with its own strengths and weaknesses.
5. **Concept evaluation:** The extracted concepts are evaluated to ensure that they are meaningful and accurate. This may involve using human experts to review the concepts or using automated evaluation metrics.

Concept mining is a complex and challenging task, but it has the potential to provide valuable insights into large datasets. As new concept mining algorithms and techniques are developed, it is likely to become an increasingly important tool for a wide range of applications."
[{2023-10-27 retrieved} https://bard.google.com/chat/65aecaa8a9ec5dac]
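· as a-toy illustration of steps 1-4 of the-process above, the-following Python sketch (assuming scikit-learn; the four mini-documents are invented) extracts two "concepts" with TF-IDF features and non-negative matrix-factorization, one common topic-modeling route to concept-mining.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

docs = [
    "patients with migraine respond to triptan treatment",
    "the bank raised interest rates on loans",
    "migraine headache treatment and diagnosis",
    "loan interest and bank credit policy",
]
vec = TfidfVectorizer(stop_words="english")        # preprocessing + feature extraction
X = vec.fit_transform(docs)
nmf = NMF(n_components=2, random_state=0).fit(X)   # concept extraction
terms = vec.get_feature_names_out()
for i, comp in enumerate(nmf.components_):
    top = [terms[j] for j in comp.argsort()[-3:][::-1]]
    print("concept", i, ":", top)                  # e.g. a medical vs a financial concept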

name::
* McsEngl.concept-mining//lagLgkl,
* McsEngl.lagLgkl'concept-mining,

semantic-parsing of lagLgkl

description::
"Semantic parsing is the task of converting a natural language utterance to a logical form: a machine-understandable representation of its meaning.[1] Semantic parsing can thus be understood as extracting the precise meaning of an utterance. Applications of semantic parsing include machine translation,[2] question answering,[1][3] ontology induction,[4] automated reasoning,[5] and code generation.[6][7] The phrase was first used in the 1970s by Yorick Wilks as the basis for machine translation programs working with only semantic representations.[8]"
[{2023-04-02 retrieved} https://en.wikipedia.org/wiki/Semantic_parsing]
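· a-minimal sketch of the-idea in Python (a-toy pattern-based parser; real semantic-parsers are grammar-based or neural): it maps a-tiny family of English utterances to a-logical-form string.

import re

def semantic_parse(utterance):
    # toy pattern: "<X> <verb>s <Y>"  ->  verb(x, y)
    m = re.match(r"(\w+) (\w+)s (\w+)\.?$", utterance.lower())
    if m:
        x, verb, y = m.groups()
        return f"{verb}({x}, {y})"
    return None

print(semantic_parse("Mary likes John"))   # like(mary, john)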

name::
* McsEngl.lagLgkl'semantic-parsing,
* McsEngl.semantic-parsing,

manager of lagLgkl

description::
· logo-knowledge-language--manager is a-knowledge-language--manager that uses a-logo--knowledge-language-(logo-to--machine-meaning) to build its knowledge-base.

name::
* McsEngl.Knlgmngr.029-logo!⇒Lgklmngr,
* McsEngl.Knlgmngr.logo!⇒Lgklmngr,
* McsEngl.Lgklmngr!=Logo-Knowledge-Language--Manager,
* McsEngl.lagLgkl'manager!⇒Lgklmngr,
* McsEngl.logo-knowledge-language--manager!=Lgklmngr,

Lgklmngr.SPECIFIC

description::
* LLM-Lgklmngr,
* ReVerb-Lgklmngr,
* logic-Knlgmngr,
* propositional-Knlgmngr,

name::
* McsEngl.Lgklmngr.specific,

GENERIC-SPECIFIC-TREE of lagLgkl

generic-tree-of-lagLgkl::
* knowledge-language,
* ... entity,
* McsEngl.lagLgkl'generic-tree,

specific-tree-of-lagLgkl::
* controlled-natural-language,
* language-model,
* logic,
* propositional,

* McsEngl.lagLgkl.specific-tree,

lagLgkl.Abstract-Meaning-Representation

description::
"Abstract Meaning Representation (AMR)[1][2] is a semantic representation language. AMR graphs are rooted, labeled, directed, acyclic graphs (DAGs), comprising whole sentences. They are intended to abstract away from syntactic representations, in the sense that sentences which are similar in meaning should be assigned the same AMR, even if they are not identically worded. By nature, the AMR language is biased towards English – it is not meant to function as an international auxiliary language.
Abstract Meaning Representations have originally been introduced by Langkilde and Knight (1998)[3] as a derivation from the Penman Sentence Plan Language,[4] they are thus continuing a long tradition in Natural Language Generation and this has been their original domain of application. AMRs have re-gained attention since Banarescu et al. (2013),[1] in particular, this includes the extension to novel tasks such as machine translation and natural language understanding. The modern (post-2010) AMR format preserves the syntax and many syntactic conceptions of the original AMR format but has been thoroughly revised to better align with PropBank. Moreover, AMR has been extended with formal conventions for metadata and conventions for entity linking (here, linking with Wikipedia entries).
Existing AMR technology includes tools and libraries for parsing,[5] visualization,[6] and surface generation[7] as well as a considerable number of publicly available data sets. Many of these resources are collected at the AMR homepage[8] at ISI/USC where AMR technology has been originally developed."
[{2023-04-02 retrieved} https://en.wikipedia.org/wiki/Abstract_Meaning_Representation]
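· for concreteness, the-standard example "The boy wants to go." has the-AMR (w / want-01 :ARG0 (b / boy) :ARG1 (g / go-02 :ARG0 b)); the-Python sketch below writes the-same rooted DAG as plain (source, role, target) triples, with no external library assumed.

# AMR of "The boy wants to go." as triples of a rooted, labeled DAG
amr = [
    ("w", "instance", "want-01"),
    ("b", "instance", "boy"),
    ("g", "instance", "go-02"),
    ("w", "ARG0", "b"),   # the boy is the wanter
    ("w", "ARG1", "g"),   # the going is what is wanted
    ("g", "ARG0", "b"),   # re-entrancy: the same boy is the goer (DAG, not tree)
]
for s, role, t in amr:
    print(s, role, t)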

name::
* McsEngl.AMR!=Abstract-Meaning-Representation-lagLgkl,
* McsEngl.lagLgkl.Abstract-Meaning-Representation,

lagLgkl.Conceptual-Dependency-Theory

description::
"overview of Conceptual Dependency Theory:
Conceptual Dependency Theory is a knowledge representation framework developed by Roger C. Schank and Robert P. Abelson in the 1970s. This theory aims to model human cognition and understanding of natural language by representing knowledge in the form of conceptual structures called "conceptual dependencies." It is one of the early approaches to knowledge representation in the field of artificial intelligence and cognitive science. Here's an overview of Conceptual Dependency Theory:

1. **Basic Idea:** Conceptual Dependency Theory posits that human understanding and language comprehension can be represented as a set of fundamental, abstract concepts and their relationships. These abstract concepts serve as building blocks for constructing more complex representations of knowledge and understanding.

2. **Conceptual Dependencies:** The central elements of this theory are conceptual dependencies, which are used to represent the meaning of a sentence or a situation. A conceptual dependency consists of several components:
- **Action:** An action is a central concept that describes what is happening or being described in a sentence, such as "eat" or "give."
- **Object:** Objects are entities or things involved in the action, such as "John" or "apple."
- **Attribute:** Attributes provide additional information about an object, such as "red" for "apple."
- **Modifiers:** Modifiers are used to specify conditions or circumstances related to the action or object, such as "quickly" for "eat."
- **Time:** Time elements provide information about when the action occurs, like "yesterday" or "now."

3. **Conceptual Dependency Grammar:** Conceptual Dependency Theory uses a specialized grammar to represent sentences in terms of these conceptual dependencies. This grammar allows for the transformation of natural language sentences into a structured, abstract representation that captures the underlying meaning.

4. **Inference and Reasoning:** The theory includes mechanisms for performing inference and reasoning. It allows for the recognition of relationships, the generation of implications, and the understanding of context.

5. **Limitations:** Conceptual Dependency Theory, while innovative, has some limitations. It can be challenging to represent certain nuances of meaning and complex relationships, and it doesn't always scale well to more extensive knowledge domains. As a result, other knowledge representation models and approaches have gained prominence in artificial intelligence, such as frame-based systems and ontologies.

6. **Influence:** Despite its limitations, Conceptual Dependency Theory had a significant influence on the development of natural language processing and early AI systems. It contributed to the exploration of knowledge representation and understanding in the field.

7. **Evolution:** Over time, Conceptual Dependency Theory has evolved and has been incorporated into various other cognitive and AI models. Elements of it can still be seen in some modern approaches to knowledge representation and natural language understanding.

In summary, Conceptual Dependency Theory is a theory of knowledge representation that seeks to capture the essence of human cognition and language comprehension through abstract conceptual structures. While it is no longer the dominant model for AI knowledge representation, it has played a foundational role in the history of artificial intelligence and cognitive science."
[{2023-10-24 retrieved} https://chat.openai.com/c/5596bf85-a830-4ee1-9a2c-5c5a1eb3c481]
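· a-minimal Python sketch of the-components listed in point 2 above (field names follow the-quote, simplified; INGEST is one of Schank's primitive acts):

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ConceptualDependency:
    action: str                        # e.g. the primitive act INGEST
    actor: Optional[str] = None        # the entity performing the action
    obj: Optional[str] = None          # the thing acted on
    modifiers: List[str] = field(default_factory=list)
    time: Optional[str] = None

# "John quickly eats an apple."
cd = ConceptualDependency(action="INGEST", actor="John", obj="apple",
                          modifiers=["quickly"], time="present")
print(cd)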

name::
* McsEngl.Conceptual-Dependency-Theory,
* McsEngl.lagLgkl.Conceptual-Dependency-Theory,

lagLgkl.embeddings

description::
"overview of embeddings:
Embeddings are a mathematical representation of data in a lower-dimensional space that preserves the underlying relationships between the data points. Embeddings are often used to represent complex data types, such as images, text, or audio, in a way that machine learning algorithms can easily process.

Embeddings are learned from data, and the specific learning algorithm used will depend on the type of data being embedded. For example, word embeddings are learned from text data, while image embeddings are learned from image data.

Once embeddings have been learned, they can be used for a variety of machine learning tasks. For example, word embeddings can be used for tasks such as sentiment analysis, text classification, and machine translation. Image embeddings can be used for tasks such as image classification and object detection.

Here are some of the benefits of using embeddings:
* **Reduced dimensionality:** Embeddings can be used to reduce the dimensionality of data, which can make machine learning algorithms more efficient and less prone to overfitting.
* **Improved performance:** Embeddings have been shown to improve the performance of machine learning algorithms on a variety of tasks.
* **Interpretability:** Embeddings can be used to visualize the relationships between data points, which can help to interpret the results of machine learning models.

Here are some examples of how embeddings are used in practice:
* **Search engines:** Search engines use word embeddings to match search queries to relevant documents.
* **Social media:** Social media platforms use word embeddings to recommend content to users and to identify abusive content.
* **Recommender systems:** Recommender systems use word embeddings to recommend products or services to users based on their past behavior.
* **Machine translation:** Machine translation systems use word embeddings to translate text from one language to another.

Embeddings are a powerful tool that can be used to improve the performance of machine learning algorithms on a variety of tasks. As machine learning continues to develop, embeddings are likely to play an even more important role in the field.
Sources: encord.com/blog/embeddings-machine-learning/"
[{2023-10-24 retrieved} https://bard.google.com/chat/1a0560b1ec4fbe96]
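· the-core operation on embeddings is vector similarity; a-minimal sketch with numpy (the-3-dimensional vectors are invented toy values; real embeddings have hundreds of dimensions):

import numpy as np

vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.85, 0.75, 0.2]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

def cosine(a, b):
    # cosine similarity: 1.0 = same direction, near 0.0 = unrelated
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vectors["king"], vectors["queen"]))  # high: related concepts
print(cosine(vectors["king"], vectors["apple"]))  # low: unrelated concepts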

name::
* McsEngl.embeddings,
* McsEngl.lagLgkl.embeddings,

lagKnlg.controlled-natural-language-018

description::
· controlled-natural-language is a-propositional-language that uses a-subset of a-natural-language, restricted to be unambiguous and monosemantic so that machines can-understand it.

name::
* McsEngl.CNL!=controlled-natural-language,
* McsEngl.controlled-natural-language!⇒lagCdnl,
* McsEngl.lagCdnl!=ControlleD-Natural-Language,
* McsEngl.lagKnlg.003-controlled-natural-language!⇒lagCdnl,
* McsEngl.lagKnlg.controlled-natural-language!⇒lagCdnl,
* McsEngl.lagPpsl.controlled-natural-language!⇒lagCdnl,
* McsEngl.machine-oriented-controlled-natural-language!⇒lagCdnl,

descriptionLong::
"This paper presents a survey of research in controlled natural languages that can be used as high-level knowledge representation languages.
Over the past 10 years or so, a number of machine-oriented controlled natural languages have emerged that can be used as high-level interface languages to various kinds of knowledge systems.
These languages are relevant to the area of computational linguistics since they have two very interesting properties: firstly, they look informal like natural languages and are therefore easier to write and understand by humans than formal languages; secondly, they are precisely defined subsets of natural languages and can be translated automatically (and often deterministically) into a formal target language and then be used for automated reasoning.
We present and compare the most mature of these novel languages, show how they can balance the disadvantages of natural languages and formal languages for knowledge representation, and discuss how domain specialists can be supported writing specifications in controlled natural language."
[{2021-01-28} http://web.science.mq.edu.au/~rolfs/papers/coling10-schwitter.pdf]

info-resource of lagCdnl

description::
* Controlled Natural Languages for Knowledge Representation, Rolf Schwitter, Centre for Language Technology, Macquarie University, Rolf.Schwitter@mq.edu.au, http://web.science.mq.edu.au/~rolfs/papers/coling10-schwitter.pdf,

name::
* McsEngl.lagCdnl'Infrsc,

lagCdnl.SPECIFIC

description::
* ACE'(attempto-controlled-English)-lagCdnl,
* CLCE'(common-logic-controlled-English)-lagCdnl,
* CLOnE-lagCdnl,
* CPL'(computer-processable-language)-lagCdnl,
* Formalized-English-lagCdnl,
* Lite-Natural-Language-lagCdnl,
* PENG'(processable-English)-lagCdnl,
* Rabbit-lagCdnl,
* SOS'(Sydney-OWL-Syntax)-lagCdnl,

name::
* McsEngl.lagCdnl.specific,

lagCdnl.ACE

description::
"overview of attempto-controlled-English:
Attempto Controlled English (ACE) is a controlled natural language (CNL) that is designed to bridge the gap between natural language and formal languages. It is a restricted form of English that allows for precise, unambiguous representation of information, making it suitable for various applications in fields such as computer science, artificial intelligence, and knowledge representation.

Here's an overview of Attempto Controlled English:
1. **Purpose**: ACE is primarily used for knowledge representation and natural language processing. It provides a way to express complex concepts and formal knowledge using a simplified subset of English, making it accessible to both humans and computers.

2. **Simplicity**: ACE is designed to be easily understandable by people with a basic understanding of English. It uses simplified grammar and vocabulary to reduce ambiguity and enhance precision.

3. **Formality**: Despite its simplicity, ACE retains the formality necessary for machine processing. It enforces strict rules for sentence structure and word usage to ensure clarity.

4. **Ambiguity Reduction**: One of the primary goals of ACE is to eliminate ambiguities that are inherent in standard English. This is achieved through structured sentences and controlled vocabulary.

5. **Applications**: ACE has found applications in various domains, including natural language interfaces for databases, knowledge representation, specification of business rules, and communication with intelligent systems.

6. **Semantics**: ACE is equipped with a formal semantics that enables precise translation into formal logic, which can then be processed by computers for various applications. This formalization is crucial for computational analysis.

7. **ACE Editor**: There is an ACE Editor tool that assists users in writing and validating ACE sentences. It helps ensure that the input conforms to the ACE grammar rules.

8. **Translation**: ACE can be translated into various formal representations, such as first-order logic or semantic web languages like OWL (Web Ontology Language), making it a valuable tool for converting natural language knowledge into machine-understandable formats.

9. **Developed by Attempto Project**: ACE was developed as part of the Attempto Project at the University of Zurich and is continually being developed and refined.

10. **Natural Language Interface**: One of the significant applications of ACE is providing natural language interfaces to databases and knowledge bases. Users can query structured data using English-like sentences.

In summary, Attempto Controlled English (ACE) is a restricted form of English that balances the need for human understandability with machine processability. It has applications in various fields where precise representation of knowledge is essential, such as AI, databases, and knowledge representation systems."
[{2023-10-23 retrieved} https://chat.openai.com/c/d19c460e-33cf-48e4-b88b-6be822f0498c]

name::
* McsEngl.ACE!=attempto-controlled-English!⇒lagAcen,
* McsEngl.attempto-controlled-English!⇒lagAcen,
* McsEngl.lagAcen,
* McsEngl.lagAcen!=Attempto-Controlled-ENglish,
* McsEngl.lagCdnl.Ace!⇒lagAcen,

descriptionLong::
"ACE (Fuchs et al., 2008) is a CNL that covers a well-defined subset of English that can be translated unambiguously into first-order logic via discourse representation structures (Kamp and Reyle, 1993) and then be used for automated reasoning. ACE is defined by a small set of construction rules that describe its syntax and a small set of interpretation rules that disambiguate constructs that might appear ambiguous in full English. The vocabulary of ACE consists of predefined function words (e.g. determiners, conjunctions, and pronouns), some predefined fixed phrases (e.g. there is, it is false that), and content words (nouns, proper names, verbs, adjectives, and adverbs). ACE supports language constructs such as:
• active and passive verbs (and modal verbs);
• strong negation (e.g. no, does not) and weak negation (e.g. it is not provable that);
• subject and object relative clauses;
• declarative, interrogative, imperative and conditional sentences;
• various forms of anaphoric references to noun phrases (e.g. he, himself, the man, X).
It is important to note that the meaning of words in ACE is not predefined; the user is expected to define their meaning by ACE sentences or import these definitions from an existing formal ontology.
Here is a simple example of an ACE text together with a question:
Every company that buys at least three machines gets a discount. Six Swiss companies each buy one machine. A German company buys four machines. Who gets a discount?
Note that ACE uses disambiguation markers (e.g. each) on the surface level and mathematical background knowledge about natural numbers in order to answer the question above. This mathematical knowledge is implemented as a set of Prolog predicates which are executed during the proof (question answering process).
ACE is supported by various tools[2], among them a text editor that helps users to construct correct ACE sentences with the help of hints and error messages, a parser that translates ACE texts into discourse representation structures, a paraphraser that reflects the interpretation of the machine in CNL, and a Satchmo-style reasoning engine that can be used for consistency and redundancy checking as well as for question answering. Applications of ACE include software and hardware specifications, agent control, legal and medical regulations, and ontology construction.
[2] http://attempto.ifi.uzh.ch/site/tools/"
[{2021-01-28} http://web.science.mq.edu.au/~rolfs/papers/coling10-schwitter.pdf]
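· the-question-answering step of the-quoted example can-be-mimicked in a-few lines of Python (a-sketch of the-reasoning only, not of the-actual ACE tools):

# facts from the ACE text: six Swiss companies buy one machine each,
# a German company buys four machines
purchases = {f"swiss-company-{i}": 1 for i in range(1, 7)}
purchases["german-company"] = 4

# rule: every company that buys at least three machines gets a discount
discounted = [c for c, n in purchases.items() if n >= 3]
print(discounted)   # ['german-company']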

lexicon of lagAcen

description::
· it is-written in Prolog.
· there are 2: a-common-Clex and a-user-Ulex.
· contains 11 different word classes: adverbs, intransitive adjectives, transitive adjectives, countable nouns, mass nouns, measurement nouns, proper names, intransitive verbs, transitive verbs, ditransitive verbs, and prepositions.
[{2021-01-30} http://attempto.ifi.uzh.ch/site/docs/ace_lexicon.html]
· ~100,000 entries.
[{2021-01-30} https://github.com/Attempto]
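· a-hypothetical Python rendering of a-few lexicon entries (the-real Clex/Ulex are Prolog files; the class names follow the-list above, the entry fields are illustrative only):

lexicon = {
    "countable-noun":  [{"sg": "machine", "pl": "machines"}],
    "proper-name":     [{"word": "John"}],
    "transitive-verb": [{"inf": "buy", "third-sg": "buys"}],
    "adverb":          [{"word": "quickly"}],
    "preposition":     [{"word": "with"}],
}
for word_class, entries in lexicon.items():
    print(word_class, entries)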

name::
* McsEngl.lagAcen'lexicon,

info-resource of lagAcen

description::
* http://attempto.ifi.uzh.ch/site/,
* https://github.com/Attempto,
* http://attempto.ifi.uzh.ch/aceview/,

name::
* McsEngl.lagAcen'Infrsc,

lagCdnl.ClearTalk

description::
"ClearTalk is a controlled natural language—a kind of a formal language for expressing information that is designed to be both human-readable (being based on English) and easily processed by a computer.
Anyone who can read English can immediately read ClearTalk, and the people who write ClearTalk learn to write it while using it. The ClearTalk system itself does most of the training through use: the restrictions are shown by menus and templates and are enforced by immediate syntactic checks. By consistently using ClearTalk for its output, a system reinforces the acceptable syntactic forms.[1]
It is used by the experimental knowledge management software Ikarus and by a knowledge base management system Fact Guru.[2]"
[{2021-01-30} https://en.wikipedia.org/wiki/ClearTalk]

name::
* McsEngl.ClearTalk-lagCdnl,
* McsEngl.lagCdnl.ClearTalk,

lagKnlg.propositional

description::
· propositional-language is a-lagConcept that uses simple propositions-(subject-relation|doing-object) as output to map mind-concepts.

name::
* McsEngl.lagKnlg.004-propositional!⇒lagPpsl,
* McsEngl.lagPpsl,
* McsEngl.lagPpsl!=ProPoSitionaL-lagCnpt,
* McsEngl.propositional-lagCnpt!⇒lagPpsl,
* McsEngl.propositional-language!⇒lagPpsl,

input of lagPpsl

description::
· a-mind-view we want to map with the-propositional-language.

name::
* McsEngl.Ppslinput,
* McsEngl.lagPpsl'input!⇒Ppslinput,

concept of lagPpsl

description::
· any mind-concept of lagPpsl'input.

name::
* McsEngl.Ppslconcept,
* McsEngl.lagPpsl'concept!⇒Ppslconcept,
* item@lagWkdt,

concept.generic of lagPpsl

description::
· a-generic-concept.

name::
* McsEngl.Ppslgeneric,
* McsEngl.lagPpsl'concept.generic!⇒Ppslgeneric,
* McsEngl.lagPpsl'generic-concept!⇒Ppslgeneric,
* class@lagRdfw@lagOwlg,
* concept@lagDlgc,
* unary-predicate@lagFolc,

concept.instance of lagPpsl

description::
· an-instance-concept.

name::
* McsEngl.Ppslinstance,
* McsEngl.lagPpsl'concept.instance!⇒Ppslinstance,
* McsEngl.lagPpsl'instance-concept!⇒Ppslinstance,
* constant@lagFolc,
* individual@lagOwlg@lagDlgc,

concept.relation-or-doing of lagPpsl

description::
· a-relation or a-doing.

name::
* McsEngl.lagPpsl'concept.relation-or-doing!⇒Ppslrord,
* McsEngl.lagPpsl'doing-or-relation,
* McsEngl.lagPpsl'relation-or-doing,
* binary-predicate@lagFolc,
* property@lagOwlg,

proposition of lagPpsl

description::
· proposition is a-sequenced-relation of concept->relation-or-doing->concept.
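· a-minimal Python sketch of this structure (the class-names and the-example are illustrative assumptions, not part of any lagPpsl standard):
```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Concept:
    """a Ppslconcept: generic (class) or instance (individual)."""
    name: str
    generic: bool  # True = generic-concept, False = instance-concept

@dataclass(frozen=True)
class Proposition:
    """a Ppslsentence: concept -> relation-or-doing -> concept."""
    subject: Concept
    relation: str  # the relation-or-doing (a binary predicate)
    object: Concept

# example proposition: "Socrates is-a human"
socrates = Concept("Socrates", generic=False)  # instance-concept
human = Concept("human", generic=True)         # generic-concept
print(Proposition(socrates, "is-a", human))
```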

name::
* McsEngl.Ppslsentence,
* McsEngl.lagPpsl'proposition!⇒Ppslsentence,
* McsEngl.lagPpsl'statement!⇒Ppslsentence,
* McsEngl.proposition!lagPpsl!⇒Ppslsentence,

proposition-sys of lagPpsl

description::
· a-system of propositions.

name::
* McsEngl.Ppslsys,
* McsEngl.lagPpsl'proposition-sys!⇒Ppslsys,

output of lagPpsl

description::
· the-model that maps the-lagPpsl'input.

name::
* McsEngl.Ppsloutput,
* McsEngl.lagPpsl'output!⇒Ppsloutput,

output-concept of lagPpsl

description::
· the-model that maps an-input-concept.

name::
* McsEngl.Ppsloconcept,
* McsEngl.lagPpsl'output-concept!⇒Ppsloconcept,

output-proposition of lagPpsl

description::
· the-model that maps the-Ppslsentence.

name::
* McsEngl.Ppslosentence,
* McsEngl.lagPpsl'output-proposition!⇒Ppslosentence,
* McsEngl.output-proposition-of-lagPpsl!⇒Ppslosentence,

output-proposition-sys of lagPpsl

description::
· the-model that maps a-Ppslsys.

name::
* McsEngl.Ppslosys,
* McsEngl.lagPpsl'output-set!⇒Ppslosys,
* dataset,
* knowledge-base@lagWkdt,
* ontology@lagOntg,
* vocabulary,

evaluation of lagPpsl

description::
· the-inventors of propositional-languages do-not-understand that the-input is mind-concepts.
· that is why we see expressions claiming that they represent "entities, concepts, objects, attributes, properties" of the-real-world (domain of discourse).
===
· analogy: "the-first" airplanes had movable, bird-like wings, imitating the surface form of flight.
· in-the-same-way, propositional-languages model the-sentences of human-languages, which in-turn are models of conceptual-views.
· we need to model the-conceptual-views directly. [hmnSngo.{2021-02-13}]

name::
* McsEngl.lagPpsl'evaluation,

lagPpsl.SPECIFIC

description::
* first-order-logic-language,
* ontology-language,

name::
* McsEngl.lagPpsl.specific,

lagPpsl.wikidata

description::
· wikidata-language is a-propositional-language that manages the-data collaboratively.

">Wikidata data-representation-language:
Wikidata uses a data model called Wikibase, which is a flexible and extensible data model that is designed to be able to represent a wide variety of information. Wikibase is based on the Resource Description Framework (RDF), which is a standard for representing information in a linked data format.

RDF is a graph-based data model that consists of three types of things:
* **Subjects:** Subjects are the things that statements are made about.
* **Predicates:** Predicates are the properties of subjects.
* **Objects:** Objects are the values of predicates.

For example, the following statement represents the fact that Barack Obama is the president of the United States:
* **Subject:** Barack Obama (Q7676)
* **Predicate:** is president of (P31)
* **Object:** United States (Q768)

Wikibase extends RDF with a number of features that make it more suitable for representing the type of information that is stored in Wikidata. These features include:
* **Items:** Items are the entities that are represented in Wikidata. Items can be anything from people and places to events and concepts.
* **Properties:** Properties are the relationships between items. Properties are used to represent the various attributes of items.
* **Statements:** Statements are the assertions that are made about items. Statements are made up of subjects, predicates, and objects.
* **Data types:** Data types are used to specify the type of data that is stored in a property. Wikibase supports a variety of data types, including numbers, dates, and text strings.
* **Qualifiers:** Qualifiers are used to add additional information to statements. Qualifiers can be used to specify the time frame of a statement, the source of a statement, or other relevant information.

Wikibase is a powerful and flexible data model that is well-suited for representing the type of information that is stored in Wikidata. It is a key part of the Wikidata infrastructure, and it is used to store and retrieve data from the Wikidata database.

I hope this helps! Let me know if you have any other questions."
[{2023-11-18 retrieved} https://bard.google.com/chat/a20753ec7bb9ce20?hl=en&pli=1]
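
· note: the-identifiers in the-quote above are inaccurate; in Wikidata, Barack Obama is Q76, the United States is Q30, and P31 is "instance of", while "position held" is P39.
· a-minimal Python sketch of a-statement as a-subject-predicate-object triple (a simplification of the-real Wikibase data-model; Q11696 = "President of the United States" to the-best of our knowledge):
```python
# a Wikidata-style statement as a (subject, predicate, object) triple.
# identifiers believed correct: Q76 = Barack Obama, P39 = position held,
# Q11696 = President of the United States.
statement = ("Q76", "P39", "Q11696")

labels = {  # human-readable labels for the identifiers above
    "Q76": "Barack Obama",
    "P39": "position held",
    "Q11696": "President of the United States",
}

subject, predicate, obj = statement
print(f"{labels[subject]} --{labels[predicate]}--> {labels[obj]}")
```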

name::
* McsEngl.lagKnlg.008-wikidata!⇒lagWkdt,
* McsEngl.lagPpsl.wikidata!⇒lagWkdt,
* McsEngl.lagWkdt!=wikidata-lagPpsl,
* McsEngl.wikidata-lagPpsl!⇒lagWkdt,

descriptionLong::
"Wikidata is the free knowledge base that anyone can edit. Just like Wikipedia, Wikidata is built on the MediaWiki wiki package which means that content on pages (just like this one!) can be added, modified, or deleted in collaboration with others. Unlike Wikipedia, Wikidata also uses Wikibase software which allows for collaborative editing of structured data."
[{2021-01-25} https://www.wikidata.org/wiki/Help:Items]
===
"Wikidata is a free, collaborative, multilingual, secondary database, collecting structured data to provide support for Wikipedia, Wikimedia Commons, the other wikis of the Wikimedia movement, and to anyone in the world."
[{2021-01-25} https://www.wikidata.org/wiki/Wikidata:Introduction]

input of lagWkdt

description::
· anything.
"the free knowledge base with 107,622,001 data items that anyone can edit."
[{2023-11-18 retrieved} https://www.wikidata.org/wiki/Wikidata:Main_Page]

name::
* McsEngl.lagWkdt'input,

output of lagWkdt

description::
· the-knowledge-base built with lagWkdt.

name::
* McsEngl.lagWkdt'output!⇒kbWikidata,
* McsEngl.Knlgbase.Wikidata!⇒kbWikidata,
* McsEngl.Wikidata-repository!⇒kbWikidata,
* McsEngl.kbWikidata!=Wikidata--knowledge-base,
* McsEngl.knowledge-base@lagWkdt!⇒kbWikidata,
* McsEngl.lagWkdt'output-proposition-sys!⇒kbWikidata,

item of kbWikidata

description::
· an-item is a-concept!

"In Wikidata, items are used to represent all the things in human knowledge, including topics, concepts, and objects. For example, the "1988 Summer Olympics", "love", "Elvis Presley", and "gorilla" are all items in Wikidata."
[{2021-01-25} https://www.wikidata.org/wiki/Help:Items]

name::
* McsEngl.Wkdtitem!=wikidata-item,
* McsEngl.item@lagWkdt!⇒Wkdtitem,
* McsEngl.kbWikidata'item!⇒Wkdtitem,

ID of Wkdtitem

description::
· Items are uniquely identified by a Q followed by a number, such as Douglas Adams (Q42).

name::
* McsEngl.Wkdtitem'ID,

label of Wkdtitem

description::
· main-name.

name::
* McsEngl.Wkdtitem'label,

description of Wkdtitem

description::
· text about the-item.

name::
* McsEngl.Wkdtitem'description,

alias of Wkdtitem

description::
· synonyms.

name::
* McsEngl.Wkdtitem'alias,

Wkdtitem.aggregate

description::
"the free knowledge base with 107,622,393 data items that anyone can edit."
[{2023-11-18 retrieved} https://www.wikidata.org/wiki/Wikidata:Main_Page]

name::
* McsEngl.Wkdtitem.aggregate,

property of kbWikidata

description::
· a-property is an-attribute-relation.

"A property describes the data value of a statement and can be thought of as a category of data, for example "color" for the data value "blue". Properties, when paired with values, form a statement in Wikidata. Properties are also used in qualifiers.
Properties have their own pages on Wikidata and are connected to items, resulting in a linked data structure."
[{2021-01-25} https://www.wikidata.org/wiki/Help:Properties]
"As opposed to Items, Properties do not refer to Wikipedia pages, but they do specify a Datatype for the data that they (usually) store."
[{2021-01-25} https://www.mediawiki.org/wiki/Wikibase/DataModel]

name::
* McsEngl.Wkdtproperty!=wikidata-property,
* McsEngl.lagWkdt'relation-or-doing!⇒Wkdtproperty,
* McsEngl.property@lagWkdt!⇒Wkdtproperty,

label of Wkdtproperty

description::
· the-main-name of a-Wkdtproperty.

name::
* McsEngl.Wkdtproperty'label,

ID of Wkdtproperty

description::
"Like items, properties also have a unique identifier. While items begin with a Q prefix and are found in the main namespace of Wikidata, properties have a P prefix and are stored in the property namespace."
[{2021-01-25} https://www.wikidata.org/wiki/Help:Properties]

name::
* McsEngl.Wkdtproperty'Id,

alias of Wkdtproperty

description::
· synonyms.

name::
* McsEngl.Wkdtproperty'alias,

description of Wkdtproperty

description::
· text that explains the-meaning of the-Wkdtproperty.

name::
* McsEngl.Wkdtproperty'description,

datatype of Wkdtproperty

description::
"Each property does have what is known as a data type which defines the kind of values allowed in statements with that property."
[{2021-01-25} https://www.wikidata.org/wiki/Help:Properties]

name::
* McsEngl.Wkdtproperty'datatype,

info-resource of Wkdtproperty

description::
* https://www.wikidata.org/wiki/Wikidata:Property_creators,
* https://www.wikidata.org/wiki/Wikidata:Property_proposal,

name::
* McsEngl.Wkdtproperty'Infrsc,

Wkdtproperty.SPECIFIC

description::
"On this page, you will find several lists of the currently 8,364 properties in Wikidata."
[{2021-01-25} https://www.wikidata.org/wiki/Wikidata:List_of_properties]
===
* art, art collection,
* asymmetric relation,
* author's rights,
* award,
* causality,
* e-commerce,
* graph,
* human,
* identity, equality,
* judiciary,
* link rot,
* MediaWiki page,
* number of entities,
* obsolete Wikidata property,
* occurrence,
* orderable Wikidata property,
* ownership,
* perception,
* personal life,
* politics,
* pornography, erotica,
* position,
* reflexive property,
* research,
* science,
* slavery,
* spacecraft,
* streaming media,
* symmetric relation,
* time, duration,
* transitive relation,
* transport,
* unique identifier,
* Wikidata property associated with websites,
* Wikidata property encoding a vCard value,
* Wikidata property for an identifier that generally has just one value, but rarely can have two or more values for the same entity,
* Wikidata property for an identifier value that can be replaced,
* Wikidata property for properties,
* Wikidata property for the relationship between classes,
* Wikidata property for Wikivoyage,
* Wikidata property in special need of sourcing,
* Wikidata property linking to a representative image,
* Wikidata property using many other properties as qualifiers,
* Wikidata property with a single value that generally evolves over time,
* Wikidata property with a single value, but different precisions and references can lead to multiple values,
* Wikidata property with datatype string that is not an external identifier,
* Wikidata qualifier,
* Wikidata sandbox property,
* Wikipedia:Citing sources,
* woman, feminism,
* work,

name::
* McsEngl.Wkdtproperty.specific,

statement of kbWikidata

description::
· Statements describe detailed characteristics of an Item and consist of a property and a value.
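· a-minimal Python sketch of one statement with the-parts described in the-subsections below: main property-value pair, qualifiers, references, and rank (the-nested-dict layout is an-illustrative simplification of the-Wikibase data-model; the-population value comes from the-quotes below):
```python
# a simplified statement about Berlin's population.
statement = {
    "subject": "Q64",                   # Berlin
    "property": "P1082",                # population
    "value": 3_499_879,                 # the claim's value
    "qualifiers": [
        {"property": "P585", "value": "2011-06-30"},  # point in time
    ],
    "references": [
        {"property": "P854",            # reference URL
         "value": "https://www.statistik-berlin-brandenburg.de/"},
    ],
    "rank": "normal",                   # preferred | normal | deprecated
}
```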

name::
* McsEngl.Wkdtstatement,
* McsEngl.lagWkdt'output-proposition!⇒Wkdtstatement,
* McsEngl.statement@lagWkdt!⇒Wkdtstatement,

claim of Wkdtstatement

description::
"The core part of a statement without references and ranks is also called claim. A claim without qualifiers is also referred to as snak."
[{2021-01-25} https://www.wikidata.org/wiki/Help:Statements]
"Statements are the main approach of representing factual data, such as the population number in the above example. A Statement consists of two parts: a claim that something is the case (e.g., the claim "Berlin has a population of 3,499,879") and a list of references for that claim (e.g., a publication by the statistical office for Berlin-Brandenburg)."
[{2021-01-25} https://www.mediawiki.org/wiki/Wikibase/DataModel]

name::
* McsEngl.Wkdtstatement'claim,
* McsEngl.claim@lagWkdt,

snak of Wkdtstatement

description::
"The core part of a statement without references and ranks is also called claim.
A claim without qualifiers is also referred to as snak."
[{2021-01-25} https://www.wikidata.org/wiki/Help:Statements]

name::
* McsEngl.Wkdtstatement'snak,
* McsEngl.snak@lagWkdt,

qualifier of Wkdtstatement

description::
· the-other verb-arguments of a-sentence, except the-subject and the-target (the-value).

name::
* McsEngl.Wkdtstatement'qualifier,
* McsEngl.qualifier@lagWkdt,

descriptionLong::
"Along with sources and ranks, qualifiers allow statements to be expanded on, annotated, or contextualized beyond what can be expressed in just a simple property-value pair.
Qualifiers are used in order to further describe or refine the value of a property given in a statement. Note that a statement should still provide useful data even without a qualifier; the qualifier is just there to provide additional information.
Like statements and references, qualifiers also consist of a property and a value."
[{2021-01-25} https://www.wikidata.org/wiki/Help:Qualifiers]

references-list of Wkdtstatement

description::
"Statements are the main approach of representing factual data, such as the population number in the above example. A Statement consists of two parts: a claim that something is the case (e.g., the claim "Berlin has a population of 3,499,879") and a list of references for that claim (e.g., a publication by the statistical office for Berlin-Brandenburg).
The reference is given by a ReferenceRecord, and the list of references is allowed to be empty (like in Wikipedia, editors can add Statements without a reference, which might later be improved by others who know about a suitable reference)."
[{2021-01-25} https://www.mediawiki.org/wiki/Wikibase/DataModel]

name::
* McsEngl.Wkdtstatement'references-list,
* McsEngl.Wkdtstatement'source,
* McsEngl.reference@lagWkdt,

descriptionLong::
"The majority of statements on Wikidata should be verifiable, they should be supported by a source of information such as a book, scientific publication, or newspaper article. In Wikidata, references are used to point to specific sources that back up the data provided in a statement.
References, like statements, also consist of at least one property-value pair (for a reference, the value is the source). Typically the property used for sources is one of two options: stated in (P248) (referring to publications and media) and reference URL (P854) (used for websites and online databases)."
[{2021-01-25} https://www.wikidata.org/wiki/Help:Sources]

rank of Wkdtstatement

description::
"References merely state where a data value comes from; ranks indicate what data value is considered the most correct and, by extension, what values should be included in queries.
Ranks are not a way of asserting your view for a disputed value, but instead are used for communicating the consensus opinion for a statement. All disputes should be discussed on the item's discussion page. Edit warring over values is not acceptable.
There is however another way to state that a statement is disputed and by whom: the qualifier statement disputed by (P1310)."
[{2021-01-25} https://www.wikidata.org/wiki/Help:Ranking]

name::
* McsEngl.Wkdtstatement'rank,

lexeme of kbWikidata

description::
"Lexeme is an entity of Lexicographical data. It contains sense and form sub-entities, but no sitelinks. Instead of a label, a description, and aliases, lexeme entities have a lemma (possibly more than one, for languages with spelling variants), a language, and a lexical category."
[{2023-11-18 retrieved} https://www.wikidata.org/wiki/Wikidata:Glossary#Lexeme]
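
· a-minimal Python sketch of a-lexeme's parts as described above (an-illustrative layout, not the-real Wikibase JSON; the-lexeme-id L99 is hypothetical):
```python
# lexeme ids start with "L"; a lexeme has a lemma, a language, a lexical
# category, and form/sense sub-entities instead of label/description/aliases.
lexeme = {
    "id": "L99",                   # hypothetical lexeme id
    "lemma": {"en": "run"},
    "language": "Q1860",           # English
    "lexical_category": "Q24905",  # verb
    "forms": [{"representation": "ran", "features": ["past tense"]}],
    "senses": [{"gloss": {"en": "to move quickly on foot"}}],
}
```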

name::
* McsEngl.kbWikidata'lexeme,

manager of lagWkdt

description::
· There are a number of ways to access Wikidata using built-in tools, external tools, or programming interfaces.

name::
* McsEngl.Wkdtmanager,
* McsEngl.lagWkdt'manager,

specific-tree-of-lagWkdt'tool::
* https://www.wikidata.org/wiki/Wikidata:Tools,
* https://query.wikidata.org/,
* https://reasonator.toolforge.org/,
* https://www.wikidata.org/wiki/Wikidata:Tools/Edit_items,

Wikibase of Wkdtmanager

description::
"Wikibase is the software behind Wikidata. It consists of a set of extensions to the MediaWiki software. These extensions allow Wikidata to manage data in items and properties, and search for this data using queries."
[{2023-11-18 retrieved} https://www.wikidata.org/wiki/Wikidata:Glossary#Wikibase]

name::
* McsEngl.Wikibase,

addressWpg::
* https://www.mediawiki.org/wiki/Wikibase,
* https://wikiba.se/,

query-service of Wkdtmanager

description::
× webaddress: https://query.wikidata.org,

"official query tool for Wikidata"
[{2023-11-19 retrieved} https://www.wikidata.org/wiki/Q20950365]
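
· a-minimal Python sketch of querying the-service (the-endpoint https://query.wikidata.org/sparql is the-official one; the-query itself is an-illustrative assumption):
```python
import requests

# ask WDQS for five instances (P31) of house cat (Q146),
# with English labels from the label service.
query = """
SELECT ?item ?itemLabel WHERE {
  ?item wdt:P31 wd:Q146 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 5
"""
resp = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": query, "format": "json"},
    headers={"User-Agent": "lagKnlg-example/0.1 (demo)"},  # WDQS expects a UA
)
for row in resp.json()["results"]["bindings"]:
    print(row["item"]["value"], row["itemLabel"]["value"])
```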

name::
* McsEngl.WDQS!=wikidata-query-service,
* McsEngl.Wkdtmanager'query-service,
* McsEngl.wikidata-query-service,

Api of Wkdtmanager

description::
"API for querying, adding, removing and editing information on wikibase instances like wikidata"
[{2023-11-19 retrieved} https://www.wikidata.org/wiki/Q59130258]
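
· a-minimal Python sketch using the-wbgetentities action (a-real MediaWiki-API action; the-parameters shown are a-small subset):
```python
import requests

# fetch the English label and description of Douglas Adams (Q42).
resp = requests.get(
    "https://www.wikidata.org/w/api.php",
    params={
        "action": "wbgetentities",
        "ids": "Q42",
        "props": "labels|descriptions",
        "languages": "en",
        "format": "json",
    },
)
entity = resp.json()["entities"]["Q42"]
print(entity["labels"]["en"]["value"])        # label, e.g. "Douglas Adams"
print(entity["descriptions"]["en"]["value"])  # short description
```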

name::
* McsEngl.Wkdtmanager'API,
* McsEngl.Wikibase-API,
* McsEngl.wikidata-API,

info-resource of lagWkdt

description::
* https://www.wikidata.org/wiki/Wikidata:Main_Page,
* https://www.wikidata.org/wiki/Wikidata:Introduction,
* https://www.wikidata.org/wiki/Help:Items,
* https://www.wikidata.org/wiki/Help:Properties,
* https://www.wikidata.org/wiki/Help:Statements,
* https://www.mediawiki.org/wiki/Wikibase/DataModel/Primer,
* https://www.mediawiki.org/wiki/Wikibase/API,

name::
* McsEngl.lagWkdt'Infrsc,

evaluation of wikidata

description::
* benefit:
- openness,
- collaboration,
- structured-knowledge,

* benefitNo:
- item, property, value: all are concepts.

name::
* McsEngl.wikidata'evaluation,

evoluting of wikidata

description::
">evolution of wikidata:
Wikidata, a free and open knowledge base, has undergone significant evolution since its inception in 2009. Initially conceived as a central repository for structured data from Wikipedia, Wikidata has transformed into a powerful tool for linking and integrating information across various domains and applications.

**Early Years (2009-2012)**
- Wikidata's initial focus was on providing a structured data model for Wikipedia articles, enabling consistent and interoperable representation of information.
- The development of the Wikidata Query Service allowed users to retrieve and analyze data using SPARQL, a powerful query language for RDF data.
- Wikidata gained support from various Wikimedia projects and external partners, expanding its reach and scope.

**Maturation and Growth (2013-2017)**
- Wikidata's data model evolved to support a wider range of entity types and relationships, enhancing its versatility.
- The introduction of property constraints ensured data quality and consistency, promoting reliable information sharing.
- Wikidata's user community grew, with active participation from experts in various fields, contributing to knowledge expansion and curation.

**Expanding Horizons (2018-present)**
- Wikidata's integration with external knowledge bases and APIs facilitated data exchange and cross-referencing, creating a global knowledge graph.
- The development of Wikibase, the underlying software powering Wikidata, enabled other organizations to create their own knowledge bases, fostering collaboration and knowledge sharing.
- Wikidata's applications extended beyond Wikipedia, powering knowledge-driven tools and services, such as search engines, virtual assistants, and educational platforms.

**Key Milestones in Wikidata's Evolution**
- 2009: Wikidata's launch as a central repository for structured data from Wikipedia
- 2012: Introduction of the Wikidata Query Service for SPARQL queries
- 2013: Wikidata's data model expands to support a wider range of entity types
- 2014: Property constraints introduced to ensure data quality and consistency
- 2016: Wikibase, the underlying software powering Wikidata, is released
- 2017: Wikidata's integration with external knowledge bases and APIs begins
- 2019: Wikibase Cloud is released, enabling organizations to create their own knowledge bases
- 2023: Wikidata reaches over 100 million items and over 1 billion statements

**Looking Ahead: The Future of Wikidata**
- Continued growth in the number of items and statements, expanding Wikidata's coverage and depth of knowledge
- Enhanced integration with artificial intelligence (AI) systems, enabling more intelligent processing and analysis of data
- Wider adoption of Wikibase by organizations and institutions, creating a distributed network of knowledge bases
- Development of new tools and applications powered by Wikidata, further amplifying its impact and utility

Wikidata's evolution reflects its ongoing commitment to providing a comprehensive and reliable source of structured knowledge for the world. As it continues to grow and adapt, Wikidata promises to play an increasingly significant role in shaping the future of knowledge representation and access."
[{2023-11-19 retrieved} https://bard.google.com/chat/f4e38c6dec4c47fc?hl=en&pli=1]

">evolution of wikidata:
As of my last knowledge update in January 2022, Wikidata has been continually evolving as a collaborative knowledge base. Wikidata is a Wikimedia Foundation project that aims to create a free and open knowledge base that can be read and edited by both humans and machines. It was launched in 2012 and has since played a crucial role in supporting other Wikimedia projects like Wikipedia.

Here are some general trends and developments in the evolution of Wikidata up to my last knowledge update:

1. **Data Growth:** Wikidata has experienced significant growth in terms of the amount of data it contains. The number of items (individual entities, such as a person, a place, or an event) and the statements (assertions about those items) have been steadily increasing.

2. **Community Engagement:** The community around Wikidata has grown, and it includes not only Wikimedia contributors but also individuals and organizations from outside the Wikimedia movement. The community actively contributes to the improvement and maintenance of Wikidata.

3. **Integration with Wikimedia Projects:** Wikidata is closely integrated with other Wikimedia projects, especially Wikipedia. It provides a central repository for structured data that can be used across different language editions of Wikipedia. This integration enhances the consistency and quality of information across Wikimedia projects.

4. **Software Development:** The software and tools for editing and querying Wikidata have seen continuous development. The Wikidata Query Service allows users to run complex queries against the data, and the Wikidata API enables developers to access and integrate Wikidata content into their applications.

5. **Bot and Automation Usage:** Bots play a significant role in adding and updating data on Wikidata. Many edits are made by automated scripts that help keep the information up-to-date and ensure consistency.

6. **Multilingual Support:** Efforts have been made to improve multilingual support on Wikidata. This includes tools for content translation and efforts to ensure that data is available in multiple languages.

7. **Quality Control:** Various mechanisms, including community-driven efforts and automated tools, are in place to maintain the quality and reliability of data on Wikidata. This involves addressing vandalism, inaccuracies, and ensuring that information is properly sourced.

It's important to note that Wikidata is a dynamic platform, and its evolution continues beyond my last knowledge update. For the latest information on the evolution of Wikidata, I recommend checking the official Wikidata website and community forums."
[{2023-11-19 retrieved} https://chat.openai.com/c/c41843ee-b644-485e-b816-940d37e7ed7b]

name::
* McsEngl.wikidata!=wikidata-technology,
* McsEngl.evoluting-of-wikidata,
* McsEngl.wikidata'evoluting,

lagPpsl.entity-attribute-value-model

description::
"Entity–attribute–value model (EAV) is a data model to encode, in a space-efficient manner, entities where the number of attributes (properties, parameters) that can be used to describe them is potentially vast, but the number that will actually apply to a given entity is relatively modest. Such entities correspond to the mathematical notion of a sparse matrix.
EAV is also known as object–attribute–value model, vertical database model, and open schema."
[{2021-01-24} https://en.wikipedia.org/wiki/Entity%E2%80%93attribute%E2%80%93value_model]
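
· a-minimal Python sketch of an-EAV store (sparse triples instead of a-wide table; the-data is illustrative):
```python
# EAV: each fact is one (entity, attribute, value) row, so an entity
# stores only the attributes that actually apply to it (sparse matrix).
eav = [
    ("patient-1", "name", "Alice"),
    ("patient-1", "allergy", "penicillin"),
    ("patient-2", "name", "Bob"),
    ("patient-2", "blood-pressure", "120/80"),
]

def attributes_of(entity):
    """pivot the triples of one entity back into one 'row' (a dict)."""
    return {a: v for e, a, v in eav if e == entity}

print(attributes_of("patient-1"))
# -> {'name': 'Alice', 'allergy': 'penicillin'}
```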

name::
* McsEngl.lagKnlg.007-entity-attribute-value-model,
* McsEngl.lagPpsl.entity-attribute-value-model,

lagPpsl.attribute-value-system

description::
"An attribute-value system is a basic knowledge representation framework comprising a table with columns designating "attributes" (also known as "properties", "predicates", "features", "dimensions", "characteristics", "fields", "headers" or "independent variables" depending on the context) and "rows" designating "objects" (also known as "entities", "instances", "exemplars", "elements", "records" or "dependent variables").
Each table cell therefore designates the value (also known as "state") of a particular attribute of a particular object."
[{2021-01-24} https://en.wikipedia.org/wiki/Attribute-value_system]
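
· a-minimal Python sketch of such a-table (objects as rows, attributes as columns; the-data is illustrative):
```python
# attribute-value system: a dense table; each cell holds the value
# (state) of one attribute of one object.
table = {
    "object-1": {"color": "red",  "size": "small", "weight": 2},
    "object-2": {"color": "blue", "size": "large", "weight": 9},
}
print(table["object-2"]["color"])  # -> "blue"
```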

name::
* McsEngl.lagKnlg.006-attribute-value-system,
* McsEngl.lagPpsl.attribute-value-system,

lagPpsl.Gellish

description::
"Information that is expressed in a Gellish formal language is computer interpretable and application system independent. It enables data exchange, data storage, data integration and systems interoperability without the need for data model harmonization or data conversion."
[{2021-01-29} http://wiki.gellish.net/start]

name::
* McsEngl.Gellish-language!⇒lagGlsh,
* McsEngl.lagKnlg.005-Gellish!⇒lagGlsh,
* McsEngl.lagGlsh,
* McsEngl.lagPpsl.Gellish!⇒lagGlsh,

descriptionLong::
"Gellish is an ontology language for data storage and communication, designed and developed by Andries van Renssen since mid-1990s.[1] It started out as an engineering modeling language ("Generic Engineering Language", giving it the name, "Gellish") but evolved into a universal and extendable conceptual data modeling language with general applications. Because it includes domain-specific terminology and definitions, it is also a semantic data modelling language and the Gellish modeling methodology is a member of the family of semantic modeling methodologies.
Although its concepts have 'names' and definitions in various natural languages, Gellish is a natural-language-independent formal language. Any natural language variant, such as Gellish Formal English is a controlled natural language. Information and knowledge can be expressed in such a way that it is computer-interpretable, as well as system-independent and natural language independent. Each natural language variant is a structured subset of that natural language and is suitable for information modeling and knowledge representation in that particular language. All expressions, concepts and individual things are represented in Gellish by (numeric) unique identifiers (Gellish UID's). This enables software to translate expressions from one formal natural language to any other formal natural languages."
[{2021-01-29} https://en.wikipedia.org/wiki/Gellish]

output of lagGlsh

description::
"The structure of expressions in the Gellish language (its syntax) is defined in the tabular standard Gellish Expression Format, which can be stored in formats such as CSV or JSON. Other formats are also possible."
[{2021-01-29} http://wiki.gellish.net/start]

name::
* McsEngl.lagGlsh'output,

dictionary of lagGlsh

description::
"Another advantage of Gellish is that its definition includes an electronic Taxonomic Dictionary that provides the concepts and terminology of the language, whereas the dictionary is extensible by any user. The power of a taxonomic dictionary is that all defined concepts inherit the knowledge about all their supertype concepts. This greatly simplifies searching and retrieval of information and it eliminates the need for re-specifying knowledge for subtypes of defined concepts. Each concept in the dictionary is identified by a natural language independent unique identifier (UID) whereas they are denoted by various terms in the formalized natural languages. The use of UIDs for representing the concepts independent of language causes that models and data structures become language independent. This enables that information that is expressed in one of the formal languages can be automatically translated by Gellish enabled software from that one formalized language to any other language. For example, queries in Formal Dutch, can be executed on databases that are in English, and the queries as well as the results can be presented to users in Dutch or any other language and vice versa. This is possible between any languages for which formal dictionaries are available."
[{2021-01-29} http://wiki.gellish.net/start]
===
"the Gellish Dictionary-Taxonomy contains more than 40,000 concepts."
[{2021-01-29} https://en.wikipedia.org/wiki/Gellish]

name::
* McsEngl.lagGlsh'dictionary,

lagKnlg.logic-programing

description::
· logic-programing-language is a-propositional-language based on formal-logic.

name::
* McsEngl.lagKnlg.009-logic-programing!⇒lagLogic,
* McsEngl.lagKnlg.logic-programing!⇒lagLogic,
* McsEngl.lagLgcp!⇒lagLogic,
* McsEngl.lagLogic,
* McsEngl.logic-programing-language!⇒lagLogic,

descriptionLong::
"Logic programming is a programming paradigm which is largely based on formal logic. Any program written in a logic programming language is a set of sentences in logical form, expressing facts and rules about some problem domain. Major logic programming language families include Prolog, answer set programming (ASP) and Datalog."
[{2021-01-30} https://en.wikipedia.org/wiki/Logic_programming]
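
· a-minimal Python sketch of the-facts-and-rules idea (a-naive forward-chaining loop over ground facts; real logic-programming systems such as Prolog use unification and backtracking instead):
```python
# facts are ground tuples; variables in rule patterns start with "?".
facts = {("parent", "tom", "bob"), ("parent", "bob", "ann")}

# grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
rule_body = [("parent", "?x", "?y"), ("parent", "?y", "?z")]
rule_head = ("grandparent", "?x", "?z")

def match(pattern, fact, env):
    """extend the binding environment if pattern matches fact, else None."""
    env = dict(env)
    for p, f in zip(pattern, fact):
        if p.startswith("?"):
            if env.get(p, f) != f:
                return None
            env[p] = f
        elif p != f:
            return None
    return env

def forward_chain(facts):
    """apply the rule until no new facts appear (naive bottom-up)."""
    changed = True
    while changed:
        changed = False
        envs = [{}]
        for pattern in rule_body:  # join the body patterns left to right
            envs = [e2 for e in envs for f in facts
                    if (e2 := match(pattern, f, e)) is not None]
        for env in envs:
            new = tuple(env.get(t, t) for t in rule_head)
            if new not in facts:
                facts.add(new)
                changed = True
    return facts

print(forward_chain(set(facts)))  # includes ('grandparent', 'tom', 'ann')
```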

manager of lagLogic

description::
"overview of logic-based-system:
Logic-based systems (LBS) are artificial intelligence (AI) systems that use logic to reason about the world and make decisions. LBS are based on the idea that the world can be represented in terms of logical statements, and that these statements can be used to infer new knowledge and make decisions.

LBS are typically composed of two main components: a knowledge base and a reasoning engine. The knowledge base is a collection of logical statements that represent the system's knowledge about the world. The reasoning engine is a program that can use the knowledge base to infer new knowledge and make decisions.

LBS have a number of advantages over other types of AI systems. First, they are transparent and explainable. Unlike other AI systems, such as machine learning systems, LBS can explain the reasoning behind their decisions. This is because LBS are based on logic, which is a well-understood and well-studied field.

Second, LBS are robust. LBS are less likely to make mistakes than other types of AI systems, because they are based on sound logical principles.

Third, LBS are versatile. LBS can be used to solve a wide variety of problems, from simple tasks such as classification and prediction to more complex tasks such as planning and scheduling.

LBS are used in a wide variety of applications, including:
* **Expert systems:** Expert systems are LBS that are designed to solve specific problems in a particular domain, such as medical diagnosis or legal reasoning.
* **Planning systems:** Planning systems are LBS that are designed to generate plans for achieving specific goals.
* **Scheduling systems:** Scheduling systems are LBS that are designed to create and manage schedules for resources, such as people, machines, and equipment.
* **Natural language processing systems:** Natural language processing systems are LBS that are designed to understand and generate human language.
* **Knowledge representation systems:** Knowledge representation systems are LBS that are designed to represent and manage knowledge about the world.

LBS are a powerful and versatile tool for solving a wide variety of problems. However, they can be complex to develop and maintain. Additionally, LBS can be slow to run, especially for large and complex problems.

Here are some examples of logic-based systems:
* **Expert systems:** MYCIN, CLIPS, PROSPECTOR
* **Planning systems:** STRIPS, SHOP2, OPTAPLAN
* **Scheduling systems:** ISIS, ASPEN, ILOG Solver
* **Natural language processing systems:** Prolog, CYC, SHRDLU
* **Knowledge representation systems:** OWL, RDF, DAML+OIL

LBS is a rapidly growing field of research, and new LBS are being developed all the time. As LBS become more powerful and efficient, they are likely to play an increasingly important role in our lives."
[{2023-11-02 retrieved} https://bard.google.com/chat/525e647a5e1df871]

name::
* McsEngl.Knlgmngr.030-logic!⇒klmrLogic,
* McsEngl.Knlgmngr.logic!⇒klmrLogic,
* McsEngl.LBS!=logic-based-systems!⇒klmrLogic,
* McsEngl.klmrLogic!=logic--knowledge-language-manager,
* McsEngl.lagLogic'manager!⇒klmrLogic,
* McsEngl.logic-based-system!⇒klmrLogic,
* McsEngl.logic-knowledge-language--manager!⇒klmrLogic,

evoluting of klmrLogic

description::
"evolution of logic-based-systems:
Logic-based systems (LBS) have evolved over time, becoming more sophisticated and capable. In the early days of AI, LBS were primarily used for expert systems, which were designed to solve specific problems in a particular domain, such as medical diagnosis or legal reasoning. However, LBS have since been used to develop a wide variety of other applications, including planning systems, scheduling systems, natural language processing systems, and knowledge representation systems.

One of the key drivers of the evolution of LBS has been the development of new logical formalisms. For example, the development of first-order predicate logic in the late 19th century provided a powerful tool for representing and reasoning about complex knowledge. Similarly, the development of modal logic in the 20th century provided a way to reason about different possible worlds, which is essential for many types of AI applications.

Another key driver of the evolution of LBS has been the development of new hardware and software technologies. For example, the advent of high-performance computers has made it possible to develop and run LBS that can solve large and complex problems. Similarly, the development of new programming languages and software tools has made it easier to develop and maintain LBS.

Here is a brief overview of the evolution of LBS, divided into three stages:

**Stage 1: Early expert systems (1960s-1980s)**
The early expert systems were developed using a variety of logical formalisms, including first-order predicate logic, production rules, and frames. These systems were typically hand-coded by experts in the domain of application.
Some examples of early expert systems include:
* MYCIN: A system for diagnosing and treating infectious diseases
* CLIPS: A general-purpose production rule system
* PROSPECTOR: A system for mineral exploration

**Stage 2: Logic programming and knowledge representation (1980s-2000s)**
The development of logic programming languages, such as Prolog, made it possible to develop more sophisticated and expressive LBS. Logic programming languages allow programmers to express their knowledge in a logical form, which can then be used to reason about that knowledge and make decisions.
In addition to logic programming, this period also saw the development of new knowledge representation formalisms, such as OWL and RDF. These formalisms provide a way to represent complex knowledge in a machine-readable format.
Some examples of LBS developed during this period include:
* STRIPS: A planning system that uses first-order predicate logic to represent the world and reason about actions
* SHOP2: A planning system that uses a modal logic representation of the world to reason about different possible plans
* OPTAPLAN: A scheduling system that uses a constraint programming approach to solve complex scheduling problems
* CYC: A knowledge representation system that is used to represent and reason about a wide range of knowledge, including common sense knowledge and knowledge about specific domains

**Stage 3: Modern LBS (2000s-present)**
The development of new hardware and software technologies has made it possible to develop and run LBS that are more powerful and efficient than ever before. In addition, there has been a growing interest in using LBS to solve real-world problems.
Some examples of modern LBS include:
* **Natural language processing systems:** LBS are used to develop natural language processing systems that can understand and generate human language. For example, LBS are used to develop machine translation systems, chatbots, and virtual assistants.
* **Knowledge graphs:** LBS are used to develop knowledge graphs, which are large-scale knowledge bases that are represented using a graph-based data model. Knowledge graphs are used in a variety of applications, such as search engines, recommendation systems, and question answering systems.
* **Explainable AI (XAI):** LBS are used to develop explainable AI (XAI) systems, which can explain the reasoning behind their decisions. XAI systems are important for building trust in AI systems and for ensuring that they are used in a responsible and ethical manner.

The evolution of LBS is still ongoing, and new LBS are being developed all the time. As LBS become more powerful and efficient, they are likely to play an increasingly important role in our lives."
[{2023-11-02 retrieved} https://bard.google.com/chat/525e647a5e1df871]

name::
* McsEngl.evoluting-of-klmrLogic,
* McsEngl.klmrLogic'evoluting,

relation-to-natural-language of lagLogic

description::
"Natural languages are highly expressive systems that can state anything that has ever been stated in any formal language or logic.
They can even express metalevel statements about themselves, their relationships to other languages, and the truth of any such statements.
Such enormous expressive power can easily generate contradictions and paradoxes, such as the statement This sentence is false.
Most formal languages avoid such paradoxes by imposing restrictions on the expressive power."
[{2021-02-07} http://www.jfsowa.com/cg/cg_hbook.pdf]

name::
* McsEngl.lagHmnm'relation-to-lagLogic,
* McsEngl.lagLogic'relation-to-natural-language,

evoluting of lagLogic

description::
"evolution of logic-programing:
Logic programming is a programming paradigm that is based on formal logic. It is used to write programs by describing relationships between facts and rules. Logic programs are typically declarative, meaning that they specify what should be computed, but not how to compute it. This makes logic programming languages well-suited for expressing complex knowledge and reasoning problems.

The evolution of logic programming can be traced back to the late 1960s and early 1970s, when researchers at Stanford and Edinburgh Universities began developing declarative programming languages based on first-order logic. The most well-known logic programming language is Prolog, which was first developed in 1972.

Logic programming gained international attention during the 1980s, when it was chosen by the Japanese Ministry of International Trade and Industry to develop the software for the Fifth Generation Computer Systems (FGCS) project. The FGCS project aimed to use logic programming to develop advanced Artificial Intelligence applications on massively parallel computers. Although the project ultimately failed, it helped to raise awareness of logic programming and led to the development of new logic programming languages and tools.

In recent years, logic programming has continued to evolve, with new languages and techniques being developed for a variety of applications. For example, answer set programming (ASP) is a logic programming paradigm that is well-suited for solving combinatorial search problems. ASP has been used to develop applications in a variety of domains, including planning, scheduling, and diagnosis.

Another recent development in logic programming is the field of inductive logic programming (ILP). ILP is a machine learning technique that can be used to learn logic programs from data. ILP has been used to develop applications in a variety of domains, including knowledge discovery, natural language processing, and medical diagnosis.

Overall, logic programming has evolved from a niche language for artificial intelligence research to a powerful tool for developing a wide variety of applications. Logic programming languages are now used in a variety of industries, including healthcare, finance, and telecommunications.

Here is a timeline of some of the key events in the evolution of logic programming:
* **1972:** Prolog is first developed at the University of Edinburgh.
* **1981:** Prolog is standardized by the American National Standards Institute (ANSI).
* **1983:** The Fifth Generation Computer Systems (FGCS) project is launched in Japan.
* **1986:** The first conference on logic programming is held in Marseille, France.
* **1988:** The first book on answer set programming is published.
* **1993:** The first inductive logic programming system is developed.
* **1997:** The first book on inductive logic programming is published.
* **2007:** The first conference on answer set programming is held in Potsdam, Germany.
* **2010:** The first book on constraint logic programming is published.
* **2013:** The first international competition on inductive logic programming is held.

Today, logic programming is a thriving field of research and development. New logic programming languages and techniques are being developed for a variety of applications. Logic programming is also being used in conjunction with other programming paradigms, such as functional programming and object-oriented programming, to develop powerful and flexible software systems."
[{2023-11-04 retrieved} https://bard.google.com/chat/5f61651052df4528?hl=en&pli=1]

name::
* McsEngl.evoluting-of-lagLogic,
* McsEngl.lagLogic'evoluting,

lagLogic.SPECIFIC

description::
* Functional logic programming languages‎,
* Prolog programming language family‎,
* Absys,
* Algebraic Logic Functional programming language,
* Alice (programming language),
* Alma-0,
* CHIP (programming language),
* Ciao (programming language),
* CLACL,
* CycL,
* Datalog,
* Datomic,
* DLV,
* ECLiPSe,
* F-logic,
* Flix (programming language),
* Flora-2,
* Fril,
* FuzzyCLIPS,
* Game Description Language,
* Gödel (programming language),
* HiLog,
* Janus (concurrent constraint programming language),
* ΛProlog,
* Logtalk,
* Maude system,
* Mercury (programming language),
* Oracle Policy Automation,
* Oz (programming language),
* Parlog,
* Planner (programming language),
* PROGOL,
* Prolog,
* Prolog++,
* Prolog32,
* Prova,
* .QL,
* Rebeca (programming language),
* ROOP (programming language),
* SWI-Prolog,
* ToonTalk,
* Transaction logic,
* Twelf,
* Visual Prolog,
* XSB,
[{2021-02-05} https://en.wikipedia.org/wiki/Category:Logic_programming_languages]

name::
* McsEngl.lagLogic.specific,

lagLogic.first-order-logic

description::
"first order logic overview
First-order logic (FOL), also known as first-order predicate logic or first-order logic with quantifiers, is a formal system used for representing and reasoning about statements and relationships in a wide range of domains. It provides a powerful and expressive language for expressing facts, making deductions, and conducting formal reasoning. Here's an overview of its key components and concepts:
* Symbols and Syntax:
- Variables: Symbols (e.g., x, y, z) that represent unspecified objects or elements.
- Constants: Symbols (e.g., a, b, c) denoting specific, constant objects in a domain.
- Functions: Symbols (e.g., f(x), g(y)) representing operations or functions that take one or more arguments.
- Predicates: Symbols (e.g., P(x), Q(x, y)) representing properties or relations between objects.
- Quantifiers: Universal quantifier (∀) and existential quantifier (∃) are used to specify the scope of variables in statements.
- Logical Connectives: Including conjunction (∧), disjunction (∨), implication (→), negation (¬), and equivalence (↔), used to combine and modify statements.
* Semantics:
- An interpretation defines the meaning of the symbols within a specific domain.
- Interpretations specify the domain of objects, assign meanings to constants and functions, and determine whether predicates are true or false for specific objects.
- Truth values of complex statements are derived based on the interpretations of their constituent parts.
* Quantifiers:
- Universal Quantifier (∀): Represents "for all" and asserts that a statement holds true for every element in the domain.
- Existential Quantifier (∃): Represents "there exists" and asserts that there is at least one element in the domain for which a statement is true.
* Inference Rules:
- FOL includes inference rules (e.g., modus ponens, universal instantiation, existential instantiation) that allow you to deduce new statements from existing ones.
- These rules ensure that deductions are logically valid.
* Axiomatic Systems and Proofs:
- FOL can be formalized using axiomatic systems like Hilbert-style or natural deduction.
- Proofs within FOL are constructed following well-defined rules and axioms.
* Completeness and Soundness:
- FOL is both complete and sound.
- Completeness: All logically valid statements can be proven within the system.
- Soundness: Any statement proven within the system is guaranteed to be logically valid.
* Applications:
- FOL is used in various fields, including mathematics, computer science, artificial intelligence, and philosophy.
- It serves as the foundation for formal knowledge representation, automated theorem proving, and reasoning in symbolic AI systems.
* Expressive Power:
- FOL is capable of expressing a wide range of statements and relationships, but it has limitations. For example, it cannot express statements about sets of sets (second-order logic is required for this).
In summary, first-order logic provides a precise and formal framework for expressing statements, making deductions, and conducting rigorous reasoning in diverse domains. Its expressive power and robust mathematical foundations make it a fundamental tool in logic and computational applications."
[{2023-09-17 retrieved} https://chat.openai.com/c/d3b15dcb-d779-4b1e-9621-ba290d591743]
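
· a-minimal Python sketch of the-semantics above: a-finite interpretation (domain plus predicate extensions) and model-checking of ∀x(Human(x) → Mortal(x)) (illustrative; real FOL reasoners do not simply enumerate domains):
```python
# a finite interpretation: a domain of objects plus the extension
# (set of satisfying objects) of each unary predicate.
domain = {"socrates", "plato", "fido"}
human = {"socrates", "plato"}            # extension of Human(x)
mortal = {"socrates", "plato", "fido"}   # extension of Mortal(x)

# ∀x (Human(x) → Mortal(x)): every object is either not human or mortal.
forall_holds = all((x not in human) or (x in mortal) for x in domain)

# its negation, ∃x (Human(x) ∧ ¬Mortal(x)): a counterexample exists.
exists_counterexample = any((x in human) and (x not in mortal) for x in domain)

print(forall_holds)            # True
print(exists_counterexample)   # False
```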

name::
* McsEngl.FOL!=first-order-logic-language,
* McsEngl.first-order-logic-language!⇒lagFolc,
* McsEngl.first-order-predicate-calculus!⇒lagFolc,
* McsEngl.lagFrml.003-lagFolc,
* McsEngl.lagFrml.lagFolc,
* McsEngl.lagFolc,
* McsEngl.lagFolc!=first-order-logic-language,
* McsEngl.lagKnlg.015-first-order-logic!⇒lagFolc,
* McsEngl.lagKnlg.first-order-logic!⇒lagFolc,
* McsEngl.lagLogic.first-order-logic!⇒lagFolc,
* McsEngl.lagPpsl.first-order-logic!⇒lagFolc,
* McsEngl.predicate-logic!⇒lagFolc,
* McsEngl.quantificational-logic!⇒lagFolc,

descriptionLong::
"first order logic
First-order logic (FOL), also known as predicate logic or first-order predicate calculus, is a formal system of logic that is widely used in mathematics, philosophy, computer science, and linguistics. It is a more expressive language than propositional logic, which is a simpler form of logic that can only represent statements about propositions (which are either true or false). FOL can represent statements about objects and their properties, as well as the relationships between objects.
FOL is based on the following concepts:
* Objects: Objects are the basic building blocks of FOL. They can be anything, such as people, places, things, or ideas.
* Predicates: Predicates are properties of objects. For example, the predicate "is human" is a property of the object "Socrates".
* Functions: Functions are relationships between objects. For example, the function "father of" relates a father to his child.
* Quantifiers: Quantifiers allow us to make statements about all or some of the objects in a domain. For example, the quantifier "all" can be used to make the statement "All humans are mortal".
* Sentences: Sentences in FOL are statements about objects, predicates, functions, and quantifiers. They can be either true or false.
FOL can be used to express a wide variety of statements, such as:
* All humans are mortal.
* Socrates is a man.
* Every even number is divisible by two.
* There exists a prime number greater than 100.
* For all real numbers x and y, if x is less than y, then x + y is less than 2y.
FOL is a powerful tool for reasoning and problem-solving. It can be used to prove theorems, design algorithms, and develop intelligent systems.
Here are some examples of FOL statements:
* ∀x(Human(x) → Mortal(x)) (All humans are mortal.)
* Human(Socrates) (Socrates is a human.)
* Even(n) → DivisibleByTwo(n) (Every even number is divisible by two.)
* ∃x(Prime(x) ∧ x > 100) (There exists a prime number greater than 100.)
* ∀x∀y(x < y → x + y < 2y) (For all real numbers x and y, if x is less than y, then x + y is less than 2y.)
FOL is a complex topic, but it is an essential foundation for many areas of mathematics, computer science, and philosophy. If you are interested in learning more about FOL, there are many resources available online and in libraries."
[{2023-09-17 retrieved} https://bard.google.com/]
===
"First-order logic — also known as predicate logic, quantificational logic, and first-order predicate calculus — is a collection of formal systems used in mathematics, philosophy, linguistics, and computer science. First-order logic uses quantified variables over non-logical objects, and allows the use of sentences that contain variables, so that rather than propositions such as "Socrates is a man", one can have expressions in the form "there exists x such that x is Socrates and x is a man", where "there exists" is a quantifier, while x is a variable.[1] This distinguishes it from propositional logic, which does not use quantifiers or relations;[2] in this sense, propositional logic is the foundation of first-order logic.
A theory about a topic is usually a first-order logic together with a specified domain of discourse (over which the quantified variables range), finitely many functions from that domain to itself, finitely many predicates defined on that domain, and a set of axioms believed to hold about them. Sometimes, "theory" is understood in a more formal sense, which is just a set of sentences in first-order logic.
The adjective "first-order" distinguishes first-order logic from higher-order logic, in which there are predicates having predicates or functions as arguments, or in which one or both of predicate quantifiers or function quantifiers are permitted.[3]:56 In first-order theories, predicates are often associated with sets. In interpreted higher-order theories, predicates may be interpreted as sets of sets.
There are many deductive systems for first-order logic which are both sound (i.e., all provable statements are true in all models) and complete (i.e. all statements which are true in all models are provable). Although the logical consequence relation is only semidecidable, much progress has been made in automated theorem proving in first-order logic. First-order logic also satisfies several metalogical theorems that make it amenable to analysis in proof theory, such as the Löwenheim–Skolem theorem and the compactness theorem.
First-order logic is the standard for the formalization of mathematics into axioms, and is studied in the foundations of mathematics. Peano arithmetic and Zermelo–Fraenkel set theory are axiomatizations of number theory and set theory, respectively, into first-order logic. No first-order theory, however, has the strength to uniquely describe a structure with an infinite domain, such as the natural numbers or the real line. Axiom systems that do fully describe these two structures (that is, categorical axiom systems) can be obtained in stronger logics such as second-order logic.
The foundations of first-order logic were developed independently by Gottlob Frege and Charles Sanders Peirce.[4] For a history of first-order logic and how it came to dominate formal logic, see José Ferreirós (2001)."
[{2021-01-24} https://en.wikipedia.org/wiki/First-order_logic]

evaluation of lagFolc

description::
"limitations of FOL
First-order logic (FOL), also known as first-order predicate logic, is a powerful and widely used formal system for representing and reasoning about knowledge. However, like any formal system, it has its limitations. Here are some of the main limitations of FOL:
* Incompleteness: FOL is not complete, which means there are statements or truths that cannot be expressed or proven within the language of FOL itself. Gödel's incompleteness theorems show that there will always be statements that are undecidable within a formal system like FOL.
* Limited expressive power: FOL is not expressive enough to represent certain types of knowledge or reason about them effectively. For example, FOL struggles with representing and reasoning about uncertain or probabilistic information, vague concepts, and default reasoning.
* Lack of quantification over predicates: FOL allows quantification over individual objects, but it does not directly support quantification over predicates or relations. This makes it challenging to express statements about sets of predicates or relations, which are common in various domains.
* Difficulty with recursion: FOL is not well-suited for representing recursive definitions and reasoning about recursive structures. This limitation can make it less suitable for certain domains, such as programming language semantics or certain aspects of natural language.
* Computational complexity: While FOL is decidable, the complexity of automated reasoning in FOL can grow rapidly as the size and complexity of the knowledge base increase. This can make reasoning in FOL impractical for large-scale applications.
* Limited support for non-monotonic reasoning: FOL assumes the closed-world assumption and the principle of monotonicity, meaning that if a statement is true, it remains true when additional information is added. This assumption does not always hold in real-world knowledge representation, where information may be incomplete or subject to change.
* Lack of support for context and modularity: FOL does not provide explicit mechanisms for representing context or modularity, which are important for organizing and managing complex knowledge bases. This limitation can make it challenging to scale up knowledge representation and reasoning systems.
* Difficulty with representing time and change: FOL has limitations in representing and reasoning about temporal information and dynamic processes. More specialized formalisms, such as temporal logic or situation calculus, are better suited for these purposes.
* Limited support for spatial reasoning: FOL is not well-suited for spatial reasoning and geometric knowledge representation. Specialized formalisms like geometric or spatial logics are more appropriate for such tasks.
* Ambiguity in natural language: FOL struggles to handle the inherent ambiguity and context-dependence of natural language. While it can be used for formalizing some aspects of natural language semantics, it does not fully capture the richness of human language understanding.
To address some of these limitations, researchers have developed various extensions and alternative formalisms, such as higher-order logic, modal logic, and description logics, to better suit specific applications and knowledge representation needs. These alternatives may offer improved expressiveness and reasoning capabilities in certain contexts."
[{2023-09-17 retrieved} https://chat.openai.com/c/4b03e771-3512-4de4-89c7-92388d84bcd2]

name::
* McsEngl.lagFolc'evaluation,

evoluting of lagFolc

description::
"evolution of first-order-logic:
First-order logic has evolved over time, but its core concepts have remained largely the same. The most significant changes have been in the way that first-order logic is formalized and used.

**Early development**
The early development of first-order logic can be traced back to the work of Gottlob Frege and Charles Sanders Peirce in the late 19th century. Frege and Peirce developed formal systems of logic that were similar to modern first-order logic, but their work was not well-known at the time.

**Hilbert and Ackermann**
In the early 20th century, David Hilbert and Wilhelm Ackermann developed a formalization of first-order logic that is now widely used. Hilbert and Ackermann's work was based on the earlier work of Frege and Peirce, but they made a number of important contributions, including the development of a rigorous proof theory for first-order logic.

**Modern first-order logic**
Modern first-order logic is based on the work of Hilbert and Ackermann, but there have been a number of refinements and extensions over the years. For example, logicians have developed new techniques for automated theorem proving and model checking in first-order logic.

**Evolution of applications**
First-order logic has also evolved in terms of its applications. In the early 20th century, first-order logic was primarily used in the foundations of mathematics. However, first-order logic is now used in a wide range of other fields, including computer science, artificial intelligence, and linguistics.

Here are some specific examples of the evolution of first-order logic:
* In the 1930s, Kurt Gödel proved his incompleteness theorems, which showed that certain types of mathematical statements cannot be proven or disproven in first-order logic. This led to a reconsideration of the role of first-order logic in the foundations of mathematics.
* In the 1950s and 1960s, there was a growing interest in the use of first-order logic in artificial intelligence. This led to the development of new techniques for automated theorem proving and model checking in first-order logic.
* In the 1970s and 1980s, first-order logic began to be used in other fields besides mathematics and artificial intelligence. For example, first-order logic is now used in linguistics to study the semantics of natural language.

Today, first-order logic is the most widely used formal system of logic. It is used in a wide range of fields, including mathematics, computer science, artificial intelligence, and linguistics.

**Future of first-order logic**
First-order logic is a mature and well-studied field of logic. However, there are a number of areas where research is ongoing. For example, logicians are interested in developing new techniques for automated theorem proving and model checking in first-order logic. They are also interested in extending first-order logic to deal with new types of reasoning, such as reasoning about uncertainty and reasoning about incomplete information."
[{2023-11-04 retrieved} https://bard.google.com/chat/94005471f30d231e]

name::
* McsEngl.evoluting-of-lagFolc,
* McsEngl.lagFolc'evoluting,

lagFolc.KIF

description::
">example of Knowledge Interchange Format:
The Knowledge Interchange Format (KIF) is a computer-oriented language for the interchange of knowledge among disparate programs. It's a representation language used to encode information in a form that can be interpreted by different computer systems. KIF is often used in the field of artificial intelligence and knowledge representation.

Here's a simple example of a statement in Knowledge Interchange Format:
```kif
(implies
  (and (married John Mary)
       (parent Mary Susan))
  (parent John Susan))
```

In this example:
- The statement is represented using parentheses to denote the structure.
- The `implies` keyword represents implication, meaning that if the conditions in the `and` statement on the left are true, then the conclusion in the `parent John Susan` statement on the right is also true.
- The conditions in the `and` statement state that John is married to Mary, and Mary is the parent of Susan.
- Therefore, the conclusion is that John is also the parent of Susan.

This is a basic example, and KIF can be used to represent more complex knowledge structures, including ontologies, rules, and relationships in various domains. The goal of KIF is to provide a standardized way for different systems to exchange and understand knowledge."
[{2023-11-16 retrieved} https://chat.openai.com/c/49f2bb33-1743-4e50-9c14-272688581b03]
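
· because KIF statements are s-expressions, they are straightforward for programs to read; the following is a minimal Python sketch (an illustration only, not a conforming KIF parser) that loads the example statement above into nested lists:
```python
# Minimal sketch: read a KIF s-expression into nested Python lists.
# Illustration only; not a conforming KIF parser.

def tokenize(text):
    """Split a KIF string into '(', ')' and atom tokens."""
    return text.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    """Recursively build nested lists from the token stream."""
    token = tokens.pop(0)
    if token == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(parse(tokens))
        tokens.pop(0)  # drop the closing ')'
        return expr
    return token  # an atom

kif = "(implies (and (married John Mary) (parent Mary Susan)) (parent John Susan))"
print(parse(tokenize(kif)))
# ['implies', ['and', ['married', 'John', 'Mary'],
#              ['parent', 'Mary', 'Susan']], ['parent', 'John', 'Susan']]
```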

name::
* McsEngl.KIF!=knowledge-interchange-format,
* McsEngl.knowledge-interchange-format!⇒lagKift,
* McsEngl.lagFolc.KIF!⇒lagKift,
* McsEngl.lagKift,

descriptionLong::
"Knowledge Interchange Format (KIF) is a computer language designed to enable systems to share and re-use information from knowledge-based systems. KIF is similar to frame languages such as KL-One and LOOM but unlike such language its primary role is not intended as a framework for the expression or use of knowledge but rather for the interchange of knowledge between systems. The designers of KIF likened it to PostScript. PostScript was not designed primarily as a language to store and manipulate documents but rather as an interchange format for systems and devices to share documents. In the same way KIF is meant to facilitate sharing of knowledge across different systems that use different languages, formalisms, platforms, etc.
KIF has a declarative semantics.[1] It is meant to describe facts about the world rather than processes or procedures. Knowledge can be described as objects, functions, relations, and rules. It is a formal language, i.e., it can express arbitrary statements in first order logic and can support reasoners that can prove the consistency of a set of KIF statements. KIF also supports non-monotonic reasoning. KIF was created by Michael Genesereth, Richard Fikes and others participating in the DARPA knowledge sharing Effort.[2]
Although the original KIF group intended to submit to a formal standards body, that did not occur. A later version called Common Logic has since been developed for submission to ISO and has been approved and published. A variant called SUO-KIF[3] is the language in which the Suggested Upper Merged Ontology[4] is written.
A practical application of the Knowledge interchange format is an agent communication language in a multi-agent system.[5]"
[{2021-01-26} https://en.wikipedia.org/wiki/Knowledge_Interchange_Format]

lagLogic.higher-order-logic

description::
"First-order logic quantifies only variables that range over individuals; second-order logic, in addition, also quantifies over sets; third-order logic also quantifies over sets of sets, and so on.
Higher-order logic is the union of first-, second-, third-, …, nth-order logic; i.e., higher-order logic admits quantification over sets that are nested arbitrarily deeply."
[{2021-02-05} https://en.wikipedia.org/wiki/Higher-order_logic]
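
· for example, the induction principle for the natural numbers quantifies over all properties P, which requires second-order quantification; a sketch in standard textbook notation (not tied to any particular proof system):
```latex
\forall P\,\Bigl(\bigl(P(0)\land\forall n\,(P(n)\rightarrow P(n+1))\bigr)\rightarrow\forall n\,P(n)\Bigr)
```
· first-order logic can only approximate this with an axiom schema, one separate instance for each first-order formula substituted for P.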

name::
* McsEngl.HOL!=higher-order-logic!⇒lagHolc,
* McsEngl.higher-order-logic!⇒lagHolc,
* McsEngl.lagKnlg.016-higher-order-logic!⇒lagHolc,
* McsEngl.lagKnlg.higher-order-logic!⇒lagHolc,
* McsEngl.lagHolc!=higher-order-logic,
* McsEngl.lagLogic.higher-order-logic!⇒lagHolc,

descriptionLong::
"In mathematics and logic, a higher-order logic is a form of predicate logic that is distinguished from first-order logic by additional quantifiers and, sometimes, stronger semantics. Higher-order logics with their standard semantics are more expressive, but their model-theoretic properties are less well-behaved than those of first-order logic.
The term "higher-order logic", abbreviated as HOL, is commonly used to mean higher-order simple predicate logic. Here "simple" indicates that the underlying type theory is the theory of simple types, also called the simple theory of types (see Type theory). Leon Chwistek and Frank P. Ramsey proposed this as a simplification of the complicated and clumsy ramified theory of types specified in the Principia Mathematica by Alfred North Whitehead and Bertrand Russell. Simple types is nowadays sometimes also meant to exclude polymorphic and dependent types.[1]"
[{2021-02-05} https://en.wikipedia.org/wiki/Higher-order_logic]

evoluting of lagHolc

description::
"evolution of higher-order-logic:
Higher-order logic (HOL) is a branch of mathematical logic that extends first-order logic by allowing quantification over predicates and functions. This makes HOL more expressive than first-order logic, and it allows for the formalization of a wider range of mathematical concepts.

The evolution of HOL can be traced back to the late 19th century, when Gottlob Frege and Charles Sanders Peirce independently developed systems of logic that included higher-order quantification. However, it was not until the early 20th century that HOL began to be developed in a systematic way.

In the 1920s and 1930s, logicians such as Alonzo Church, Haskell Curry, and Stephen Kleene developed a number of different formal systems for HOL. These systems varied in their syntax and semantics, but they all shared the common feature of allowing quantification over predicates and functions.

One of the most important developments in the evolution of HOL was the introduction of type theory in the 1940s. Type theory is a formal system for reasoning about the types of expressions in a logical language. It was originally developed by Church, but it was later extended by a number of other logicians, including Per Martin-Löf and Jean-Yves Girard.

Type theory has had a major impact on the development of HOL. It has provided a rigorous foundation for HOL semantics and has led to the development of new techniques for automated theorem proving and model checking in HOL.

Today, HOL is a widely used formal system in logic, mathematics, and computer science. It is used to formalize a wide range of mathematical concepts, including set theory, real analysis, and topology. It is also used to develop and verify computer programs.

Here are some specific examples of the evolution of higher-order logic:
* In the 1940s, Alonzo Church developed simple type theory, which was the first formal system for reasoning about the types of expressions in a logical language.
* In the 1970s, Per Martin-Löf developed intuitionistic type theory, which is a type theory that is based on the intuitionistic logic of constructive mathematics.
* In the 1970s, Jean-Yves Girard developed System F, a polymorphic type theory; his later linear logic led to linear type theories, which are designed to reason about the efficient use of computational resources.
* In the 1980s and 1990s, there was a growing interest in the use of higher-order logic in artificial intelligence. This led to the development of new techniques for automated theorem proving and model checking in higher-order logic.

Today, higher-order logic is a mature and well-studied field of logic. It is used in a wide range of fields, including mathematics, computer science, and artificial intelligence.

**Future of higher-order logic**
Higher-order logic is a powerful formal system, but it is also complex. One of the challenges for the future of higher-order logic is to develop new techniques for making it more accessible to a wider range of users.
Another challenge for the future of higher-order logic is to develop new techniques for applying it to new problems. For example, researchers are interested in using higher-order logic to reason about security and privacy in computer systems.

Overall, the future of higher-order logic is bright. It is a powerful and expressive formal system that has the potential to be used to solve a wide range of problems."

draft2:
"Higher-order logic (HOL) has evolved over time, both in terms of its theoretical foundations and its practical applications.

**Theoretical foundations**
The theoretical foundations of HOL were laid in the early 20th century by logicians such as Gottlob Frege, Bertrand Russell, and Alonzo Church. Frege developed the first formal system of HOL, while Russell and Church made important contributions to the development of type theory, which provides a framework for reasoning about functions and predicates as first-class citizens.
In the 1960s and 1970s, there was a renewed interest in HOL, due in part to the work of logicians such as David Prawitz, Per Martin-Löf, and Jean-Yves Girard. These logicians developed new proof systems for HOL, such as constructive type theory and intuitionistic type theory.

**Practical applications**
HOL has also become increasingly important in practical applications. In the 1980s and 1990s, HOL-based theorem provers began to be used to verify hardware and software designs. Today, HOL is used in a wide range of applications, including:
* Hardware verification
* Software verification
* Program analysis
* Compiler optimization
* Language design
* Artificial intelligence
* Natural language processing
* Mathematics

**Evolution of HOL systems**
HOL systems have also evolved over time. In the early days, HOL systems were typically implemented as programming languages. However, in recent years, there has been a shift towards using general-purpose programming languages to implement HOL systems. This has made HOL systems more accessible to a wider range of users.

**Future of HOL**
HOL is a powerful and versatile logic with a wide range of applications. As HOL systems become more accessible and efficient, we can expect to see HOL being used in even more areas in the future.

Here are some specific examples of the evolution of HOL:
* In the early 1970s, the LCF theorem prover was developed by Robin Milner and colleagues. LCF introduced the ML metalanguage and the tactic-based style of interactive proof that later HOL systems adopted.
* In the late 1980s, the HOL theorem prover was developed. HOL is now one of the most widely used HOL theorem provers.
* In 1989, the first version of the Coq theorem prover was released. Coq is a type-theoretic theorem prover that is based on constructive logic. Coq is now widely used in the research community to verify hardware and software designs.

Today, there are a number of different HOL systems available, each with its own strengths and weaknesses. The choice of which HOL system to use depends on the specific application.

Overall, HOL has evolved from a theoretical field of study to a practical tool that is used in a wide range of applications. As HOL systems become more accessible and efficient, we can expect to see HOL being used in even more areas in the future."
[{2023-11-04 retrieved} https://bard.google.com/chat/94005471f30d231e]

name::
* McsEngl.evoluting-of-lagHolc,
* McsEngl.lagHolc'evoluting,

lagLogic.conceptual-graphs-{1976..}

description::
"Conceptual graphs (CGs) are a system of logic based on the existential graphs of Charles Sanders Peirce and the semantic networks of artificial intelligence.
They express meaning in a form that is logically precise, humanly readable, and computationally tractable.
With a direct mapping to language, conceptual graphs serve as an intermediate language for translating computer-oriented formalisms to and from natural languages.
With their graphic representation, they serve as a readable, but formal design and specification language.
CGs have been implemented in a variety of projects for information retrieval, database design, expert systems, and natural language processing."
[{2021-02-07} http://www.jfsowa.com/cg/-{2005-03-14}]

name::
* McsEngl.CGs!=conceptual-graphs!⇒lagCgaf,
* McsEngl.conceptual-graphs-abstract-form!⇒lagCgaf,
* McsEngl.lagKnlg.017-conceptual-graphs!⇒lagCgaf,
* McsEngl.lagKnlg.conceptual-graphs!⇒lagCgaf,
* McsEngl.lagCgaf!=conceptual-graphs-abstract-form,
* McsEngl.lagLogic.conceptual-graphs!⇒lagCgaf,

manager of lagCgaf

description::
* https://sourceforge.net/projects/amine-platform/, Java,

name::
* McsEngl.Knlgmngr.conceptual-graphs,
* McsEngl.lagCgaf'manager,

input of lagCgaf

description::
· sentences of a-natural-language.

name::
* McsEngl.Cgafinput,
* McsEngl.lagCgaf'input!⇒Cgafinput,

output of lagCgaf

description::
·

name::
* McsEngl.Cgafoutput,
* McsEngl.lagCgaf'output!⇒Cgafoutput,

concept of Cgafoutput

description::
· in a-conceptual-graph, the-boxes are-called concepts.

name::
* McsEngl.Cgafocpt,
* McsEngl.Cgafoutput'concept!⇒Cgafocpt,
* McsEngl.concept@lagCgaf!⇒Cgafocpt,

referent of Cgafocpt

description::
"Every concept has a concept type t and a referent r.
Comment.
This abstract definition does not say how the type and referent are represented. In computer storage, they may be represented by a pair of pointers, one pointing to a specification of the concept type and the other pointing to a specification of the referent. In the concrete notations, the type field is on the left, and the referent field is on the right.
In the concept [Bus], "Bus" is the type, and the referent field contains a blank, which represents an existential quantifier; the actual referent is a physical entity of type Bus that exists somewhere in the world.
In the concept [Person: John], "Person" specifies the type, and the name "John" designates some person who is the referent."
[CG Standard, http://www.bestweb.net/~sowa/cg/cgdpansw.htm, 1999aug01]

name::
* McsEngl.Cgafocpt'referent,

type of Cgafocpt

description::
"Every concept has a concept type t and a referent r.
Comment.
This abstract definition does not say how the type and referent are represented. In computer storage, they may be represented by a pair of pointers, one pointing to a specification of the concept type and the other pointing to a specification of the referent. In the concrete notations, the type field is on the left, and the referent field is on the right.
In the concept [Bus], "Bus" is the type, and the referent field contains a blank, which represents an existential quantifier; the actual referent is a physical entity of type Bus that exists somewhere in the world.
In the concept [Person: John], "Person" specifies the type, and the name "John" designates some person who is the referent."
[CG Standard, http://www.bestweb.net/~sowa/cg/cgdpansw.htm, 1999aug01]

name::
* McsEngl.Cgafocpt'type,

conceptual-relation of Cgafoutput

description::
· the-circles are-called conceptual-relations.

name::
* McsEngl.Cgaforln,
* McsEngl.Cgafoutput'conceptual-relation!⇒Cgaforln,

conceptual-graph of Cgafoutput

description::
"A conceptual graph g is a bipartite graph that has two kinds of nodes called concepts and conceptual relations.
a) Every arc a of g must link a conceptual relation r in g to a concept c in g. The arc a is said to belong to the relation r; it is said to be attached to the concept c, but it does not belong to c.
b) The conceptual graph g may have concepts that are not linked to any conceptual relation; but every arc that belongs to any conceptual relation in g must be attached to exactly one concept in g.
c) Three kinds of conceptual graphs are given distinguished names:
c1) The blank is an empty conceptual graph with no concepts, conceptual relations, or arcs.
c2) A singleton is a conceptual graph that consists of a single concept, but no conceptual relations or arcs.
c3) A star is a conceptual graph that consists of a single conceptual relation r and 1 or more concepts, which are attached to the arcs of r."
[Sowa]

name::
* McsEngl.Cgafogrph,
* McsEngl.Cgafoutput'conceptual-graph!⇒Cgafogrph,

Cgafogrph.SPECIFIC

description::
* http://www.jfsowa.com/cg/cgexampw.htm,

name::
* McsEngl.Cgafogrph.specific,

Cgafogrph.example

description::
"To illustrate this definition, consider the following conceptual graph, which represents the sentence John is going to Boston:
[Person: John]->(Agnt)->[Go]<-(Dest)<-[City: Boston].
This graph contains three concepts: [Person: John], [Go], and [City: Boston].
It contains two conceptual relations: (Agnt) relates [Go] to the agent John, and (Dest) relates [Go] to the destination Boston.
The term bipartite means that every arc of a conceptual graph connects one concept and one conceptual relation; there are no arcs that connect concepts to concepts or relations to relations. Two of the four arcs in the graph belong to (Agnt), and the other two belong to (Dest)."
[Sowa]
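
· a minimal Python sketch of this bipartite structure, encoding the graph above (the class and field names are invented for the illustration; they are not part of any CG standard):
```python
# Sketch of a conceptual graph as a bipartite structure: concept
# nodes, relation nodes, and arcs that always join a relation to
# a concept. Illustrative encoding only.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Concept:
    type: str
    referent: str = ""          # blank referent = existential quantifier

@dataclass
class Relation:
    type: str
    args: list = field(default_factory=list)   # the arcs, in order

go     = Concept("Go")
john   = Concept("Person", "John")
boston = Concept("City", "Boston")

# [Person: John]->(Agnt)->[Go]<-(Dest)<-[City: Boston]
agnt = Relation("Agnt", [go, john])
dest = Relation("Dest", [go, boston])

graph = {"concepts": [go, john, boston], "relations": [agnt, dest]}

# bipartite check: every arc belongs to a relation and is attached
# to a concept; there are no concept-to-concept arcs.
assert all(isinstance(c, Concept)
           for r in graph["relations"] for c in r.args)
```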

name::
* McsEngl.Cgafogrph.example,

evaluation of lagCgaf

description::
"It uses 2 building units, instead of one.
* it is language based.
* difficult terminology."
[KasNik, 2007.12.15]

name::
* McsEngl.lagCgaf'evaluation,

relation-to-existential-graphs of lagCgaf

description::
"The differences between EGs and CGs result from differences in their origin and motivation.
In formulating EGs, Peirce was trying to find the simplest possible primitives for representing logic and operating on logical statements.
In formulating CGs, Sowa was trying to find the most direct mapping from natural languages to logic.
Since ordinary language has a much richer variety of expressive forms than Peirce's primitives, the CG notation includes many notational extensions that are designed to represent the major semantic forms of natural languages. To balance expressive power with simplicity, CGs provide both a basic and an extended notation.
The basic CG notation is essentially a typed version of Peirce's EGs with the same logical primitives: negation, conjunction, and the existential quantifier.
The extended CG notation, however, provides mechanisms for defining an open-ended range of new forms of expression, each of which has a direct mapping to some natural language expression and a formally defined expansion into the basic primitives. Following are the ways in which the extended CGs differ from EGs:

In the basic CGs, Peirce's line of identity is represented by one or more concept nodes linked by dotted lines called coreference links. As an alternate notation, the dotted lines may be replaced by symbols, called coreference labels, such as *x and ?x. One label marked with * is called the defining node, and the others marked with ? are called bound nodes. The labels are sanctioned by one of Peirce's observations that any line of identity could be cut with the two ends labeled to show how they could be reconnected.

In the extended CGs, generalized quantifiers can be introduced, including the universal, plurals, and various kinds of indexicals. Each of the extended forms, however, is defined by a formal mapping to the basic notation.

Each concept includes a type label inside the concept box. That label corresponds to a monadic predicate that could be attached to an EG line of identity. In the extended notation, the type labels may be replaced by lambda expressions that define them. The term lambda expression is taken from Alonzo Church, but the idea was anticipated by Peirce, who used labeled hooks instead of the Greek letter lambda.

In CGs, Peirce's ovals for negation are represented by concept boxes with the type label Proposition and an attached negation relation Neg (or its abbreviation by a ^ symbol). Instead of Peirce's tinctures for representing modalities, other relations can be attached to the concept box, such as Psbl for possibility or Necs for necessity.

Besides a type label, a concept box includes a referent field that identifies or partially identifies the referent of the concept. Following Peirce, the three kinds of referents are icons (represented by a diagram or picture), indexes (represented by a name or other indexical), and symbols (represented by a nested conceptual graph that describes the entity).

The linguistic work on thematic roles or case relations has been adopted for the basic types of conceptual relations. New types of relations can be defined in terms of them by lambda expressions, which like the lambda expressions for concept types can be written in place of the relation labels.

Except for notation, the basic CGs are minimal variants of EGs. The extended CGs provide a richer notation that maps almost one-to-one to the semantic structures of natural languages. The formal mapping from the extended to the basic CGs helps to bridge the gap between language and logic."
[Fulfilling Peirce's Dream: Conceptual Structures and Communities of Inquiry, Leroy Searle, Mary Keeler, John Sowa, Harry Delugach, and Dickson Lukose, ICCS'97]

name::
* McsEngl.lagCgaf'relation-to-existential-graphs,

relation-to-discourse-representation-structures of lagCgaf

description::
"When Conceptual Structures appeared at the end of 1983, it did not include recent work by Hans Kamp [8] on discourse representation theory. After spending some time translating English to logic, Kamp had realized that the mapping to predicate calculus involved convoluted distortions of the sentence structure of English and other natural languages. In order to simplify the mapping, Kamp designed his discourse representation structures (DRSs) and formulated rules for representing and resolving indexicals in the DRS notation. Although Kamp's motivation was very different from Peirce's and he had no awareness of Peirce's graphs, Kamp's DRS notation happened to be isomorphic to Peirce's EGs. Since CGs are based on EGs, they are also isomorphic to DRSs; therefore, Kamp's rules for resolving indexicals in the DRS notation can be applied directly to EGs and CGs. A century after Peirce's pioneering work on both graph logic and indexicals, the former proved to be ideally suited for representing the latter.
[8]Kamp, Hans, Events, Discourse Representations, and Temporal References, Langages, vol. 64, 1981, pp. 39-64.
[Fulfilling Peirce's Dream: Conceptual Structures and Communities of Inquiry, Leroy Searle, Mary Keeler, John Sowa, Harry Delugach, and Dickson Lukose, ICCS'97]

name::
* McsEngl.lagCgaf'relation-to-discourse-representation-structures,

info-resource of lagCgaf

description::
* http://www.jfsowa.com/cg/cg_hbook.pdf,
* ISO-standard: http://www.jfsowa.com/cg/cgstand.htm,
* existential-graphs: http://www.jfsowa.com/peirce/ms514.htm,

name::
* McsEngl.lagCgaf'Infrsc,

bibliography of lagCgaf

1995 Ellis et al.::
Gerard Ellis and Robert A. Levinson and William Rich and John F. Sowa (Eds.) (1995) "Conceptual Graphs: Structure-based Knowledge Representation", Proceedings of the Third International Conference on Conceptual Structures ICCS'95, August 14-18, University of California, Santa Cruz, USA, Lecture Notes in Artificial Intelligence, Springer-Verlag, Number 954, Berlin.

1994 Tepfenhart et al.::
William M. Tepfenhart and Judith P. Dick and John F. Sowa (Eds.) (1994) "Conceptual Structures: Current Practices", Second International Conference on Conceptual Structures, ICCS'94, College Park, Maryland, USA, August, Lecture Notes in Artificial Intelligence, Number 835, Springer-Verlag, Berlin.

1993 Mineau et al.::
Guy W. Mineau and Bernard Moulin and John F. Sowa (Eds.) (1993) "Conceptual Graphs for Knowledge Representation", Lecture Notes in Artificial Intelligence, Springer-Verlag, Berlin, Number 699, Proceedings of the 1st International Conference on Conceptual Structures, Quebec City, Canada, August 4-7.

1993 Pfeiffer et al.::
Heather D. Pfeiffer and Timothy E. Nagle (Eds.) (1993) "Conceptual Structures: Theory and Implementation", Springer-Verlag, Number 754, Lecture Notes in Artificial Intelligence, Proceedings of the Seventh Annual Workshop on Conceptual Graphs, Las Cruces, New Mexico, July 8-10, 1992.

1993 Sowa::
John F. Sowa (1993) "Relating Diagrams to Logic", Guy W. Mineau and Bernard Moulin and John F. Sowa (Eds.), Conceptual Graphs for Knowledge Representation, Lecture Notes in Artificial Intelligence, Springer-Verlag, Berlin, Number 699, Proceedings of the 1st International Conference on Conceptual Structures, Quebec City, Canada, August 4-7.

1993 Sowa::
John F. Sowa (1993) "Logical Foundations for Representing Object-Oriented Systems", Journal of Experimental & Theoretical Artificial Intelligence, volume 5.

1992 Nagle et al.::
Tim Nagle and Jan Nagle and Laurie Gerholz and Peter Eklund (Eds.) (1992) "Conceptual Structures: Current Research and Practice", Ellis Horwood.

1992 Sowa::
John F. Sowa (1992) "Conceptual Graphs Summary", T. E. Nagle and J. A. Nagle and L. L. Gerholz and P. W. Eklund (Eds.), Conceptual Structures: Current Research and Practice, Ellis Horwood, p. 3-51.

1991 Sowa::
John F. Sowa (1991) "Toward the Expressive Power of Natural Language," J.F. Sowa (Ed.), Principles of Semantic Networks: Explorations in the Representation of Knowledge, Morgan Kaufmann, San Mateo, CA, p. 157-189.

1990 Ketner::
Kenneth Laine Ketner (1990) "Elements of Logic: An Introduction to Peirce's Existential Graphs," Texas Tech University Press, Lubbock, Texas.

1984 Sowa::
John F. Sowa (1984) "Conceptual Structures: Information Processing in Mind and Machine", Addison-Wesley, Reading, MA.

1976 Sowa::
Sowa, John F., Conceptual Graphs for a Data Base Interface, IBM Journal of Research and Development, vol. 20, 1976, pp. 336-357.

1973 Roberts::
Don D. Roberts (1973) "The Existential Graphs of Charles S. Peirce", Mouton, The Hague. See also: Collected Papers of Charles Sanders Peirce (1931-58), Charles Hartshorne and Paul Weiss (Eds.), Harvard University Press, Cambridge.

1964 Zeman::
Zeman, J. Jay. "The Graphical Logic of C. S. Peirce." Ph.D. diss., University of Chicago, 1964.

name::
* McsEngl.lagCgaf'bibliography,

evoluting of lagCgaf

description::
"evolution of conceptual-graphs:
Conceptual graphs (CGs) were first introduced by John F. Sowa in 1976 as a knowledge representation formalism for database systems. CGs are a graphical way to represent concepts and the relationships between them. They are based on the existential graphs of Charles Sanders Peirce and the semantic networks of artificial intelligence.

CGs have evolved over the years to become a more powerful and versatile knowledge representation tool. Some of the key changes that have been made include:
* The development of a standard notation for CGs, called the Conceptual Graphs Interchange Format (CGIF). This makes it easier to share CGs between different systems.
* The development of new inference algorithms for CGs. This makes it possible to reason about CGs more efficiently and effectively.
* The application of CGs to a wider range of domains, including natural language processing, machine learning, and expert systems.

Here are some specific examples of the evolution of conceptual graphs:
* In the 1980s, CGs were used to develop database interfaces and text retrieval systems.
* In the 1990s, CGs were used to develop natural language processing systems and reasoning systems.
* In the 2000s, CGs were used to develop machine learning systems and expert systems.
* In the 2010s, CGs were used to develop knowledge graphs and ontologies.
* In the 2020s, CGs are being used to develop new types of AI systems, such as explainable AI and trustworthy AI.

CGs are a powerful and flexible knowledge representation tool that has been used to develop a wide range of AI systems. They are still an active area of research, and new applications for CGs are being discovered all the time.

Here are some of the current trends in the evolution of conceptual graphs:
* The development of new CG-based inference algorithms that are more efficient and scalable.
* The application of CGs to new domains, such as quantum computing and social media analysis.
* The development of new tools and libraries for working with CGs.

CGs are a promising knowledge representation technology for the future of AI."
[{2023-11-04 retrieved} https://bard.google.com/chat/3a768ec7474bc1c1]

{1984}-Sowa-Conceptual-Structures::
John F. Sowa (1984) "Conceptual Structures: Information Processing in Mind and Machine", Addison-Wesley, Reading, MA.

{1976}-Sowa-Conceptual-Graphs::
Sowa, John F., Conceptual Graphs for a Data Base Interface, IBM Journal of Research and Development, vol. 20, 1976, pp. 336-357.
"Sowa (1976) developed a version of conceptual graphs (CGs) as an intermediate language for mapping natural language questions and assertions to a relational database."
* McsEngl.{lagCnpt'1976}-Sowa-Conceptual-graphs,
[{2021-02-07} http://www.jfsowa.com/cg/cg_hbook.pdf]

{1896}-Existential-Graphs::
Charles-Sanders-Peirce developed the logic of Existential Graphs.

name::
* McsEngl.evoluting-of-lagCgaf,
* McsEngl.lagCgaf'evoluting,

lagCgaf.display-form

description::
"This graphic form, which is called the display form, is highly readable for humans and it maps directly to the abstract CG form."

name::
* McsEngl.lagCgdf,
* McsEngl.lagCgaf.display-form!⇒lagCgdf,
* McsEngl.lagCgaf.graphic-form!⇒lagCgdf,

lagCgaf.linear-form

description::
The linear form (LF) for CGs is intended as a more compact notation than the display form, but with good human readability. It is exactly equivalent in expressive power to the abstract syntax and the display form. Following is the LF for the sentence "John is going to Boston by bus.":
[Go]-
  (Agnt)->[Person: John]
  (Dest)->[City: Boston]
  (Inst)->[Bus].

name::
* McsEngl.lagCglf,
* McsEngl.lagCgaf.linear-form!⇒lagCglf,

lagCgaf.CGIF

description::
For communications between computers, another form called the conceptual graph interchange format (CGIF) has a simpler syntax and a more restricted character set. Following is the CGIF form of the graph for the sentence "A cat is on a mat":
[Cat: *x] [Mat: *y] (On ?x ?y).

name::
* McsEngl.CGIF!=conceptual-graphs-interchange-format,
* McsEngl.conceptual-graphs-interchange-format!⇒lagCgif,
* McsEngl.lagCgif,
* McsEngl.lagCgaf.CGIF!⇒lagCgif,

lagLogic.description-logic

description::
"Description logics (DL) are a family of formal knowledge representation languages. Many DLs are more expressive than propositional logic but less expressive than first-order logic. In contrast to the latter, the core reasoning problems for DLs are (usually) decidable, and efficient decision procedures have been designed and implemented for these problems. There are general, spatial, temporal, spatiotemporal, and fuzzy description logics, and each description logic features a different balance between expressive power and reasoning complexity by supporting different sets of mathematical constructors.[1]
DLs are used in artificial intelligence to describe and reason about the relevant concepts of an application domain (known as terminological knowledge). It is of particular importance in providing a logical formalism for ontologies and the Semantic Web: the Web Ontology Language (OWL) and its profile is based on DLs. The most notable application of DLs and OWL is in biomedical informatics where DL assists in the codification of biomedical knowledge."
[{2021-01-24} https://en.wikipedia.org/wiki/Description_logic]

name::
* McsEngl.DL!=description-logic-language,
* McsEngl.description-logic-language!⇒lagDlgc,
* McsEngl.lagDlgc!=description-logic-language,
* McsEngl.lagKnlg.018-description-logic!⇒lagDlgc,
* McsEngl.lagKnlg.description-logic!⇒lagDlgc,
* McsEngl.lagPpsl.description-logic!⇒lagDlgc,

descriptionLong::
"Description logic was designed as an extension to frames and semantic networks, which were not equipped with formal logic-based semantics".
[http://en.wikipedia.org/wiki/Description_logic]
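
· most DL constructors translate into a guarded, two-variable fragment of first-order logic, which is what keeps the core reasoning problems decidable; for example, an axiom such as Mule ⊑ ∃hasParent.Horse (vocabulary borrowed from the example below) has the standard translation:
```latex
\forall x\,\bigl(\mathrm{Mule}(x)\rightarrow\exists y\,(\mathrm{hasParent}(x,y)\land\mathrm{Horse}(y))\bigr)
```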

output of lagDlgc

description::
">example of description-logic:
Here is an example of description logic:

**TBox (Terminological Box)**
```
Animal ⊑ LivingThing.
Donkey ≡ Animal ⊓ ∀hasParent.Donkey.
Horse ≡ Animal ⊓ ∀hasParent.Horse.
Mule ≡ Animal ⊓ ∃hasParent.Horse ⊓ ∃hasParent.Donkey.
∃hasParent.Mule ⊑ ⊥.
```
This TBox defines the following concepts:
* `Animal`: All animals are living things.
* `Donkey`: A donkey is an animal that only has donkey parents.
* `Horse`: A horse is an animal that only has horse parents.
* `Mule`: A mule is an animal that has a horse parent and a donkey parent.
* `∃hasParent.Mule ⊑ ⊥`: Nothing has a mule parent, i.e. mules have no offspring.
* `hasParent`: The `hasParent` role relates an animal to one of its parents.

**ABox (Assertional Box)**
```
Horse(Mary)
Mule(Peter)
Donkey(Sven)
hasParent(Peter,Mary)
hasParent(Peter,Carl)
hasParent(Sven,Hannah)
hasParent(Sven,Julia)
```
This ABox asserts the following facts:
* Mary is a horse.
* Peter is a mule.
* Sven is a donkey.
* Mary and Carl are parents of Peter.
* Hannah and Julia are parents of Sven.

**Inferences**
From the TBox and ABox, a DL reasoner can infer, among other facts:
* Mary, Peter, and Sven are living things.
* Hannah is a donkey.
* Julia is a donkey.

Description logics are a family of formal knowledge representation languages that are used to define and reason about concepts and relationships. They are widely used in a variety of applications, including artificial intelligence, knowledge engineering, and the Semantic Web.
Sources
www.uio.no/studier/emner/matnat/ifi/INF3170/h15/undervisningsmateriale/dl1.pdf"
[{2023-11-16 retrieved} https://bard.google.com/chat/caa9fabee025db50?hl=en&pli=1]

name::
* McsEngl.lagDlgc'output,

evoluting of lagDlgc

description::
"evolution of description-logics:
Description logics (DLs) are a family of formal knowledge representation languages used in artificial intelligence, particularly in the domain of knowledge representation and reasoning. They are a subset of first-order logic that has been tailored for modeling and reasoning about concepts and their relationships. The evolution of description logics can be understood through key developments over the years:

1. Early Development:
- The roots of description logics can be traced back to the 1960s and 1970s when researchers were working on formal languages for knowledge representation.
- Early work, like KL-ONE (Brachman, 1978), laid the foundation for what would become description logics.

2. The DL Tradition:
- The term "description logic" was introduced in the early 1990s as a way to refer to a particular subset of knowledge representation formalisms.
- Description logics were developed with a focus on expressive power, computational tractability, and ease of reasoning. They were particularly well-suited for modeling ontologies.

3. Development of DL Systems:
- Several DL systems emerged, including KL-ONE, LOOM, CLASSIC, SHOE, and others. These systems were used for knowledge representation and reasoning in specific domains.

4. Standardization Efforts:
- In the late 1990s and early 2000s, standardization efforts for description logics gained momentum.
- The development of the Web Ontology Language (OWL) in the early 2000s was a significant milestone. OWL is based on description logics and is a W3C recommendation, which means it is widely used for semantic web applications.

5. Computational Complexity:
- Researchers continued to work on the computational complexity of description logics. They developed reasoning algorithms that were capable of handling increasingly expressive DLs while maintaining computational tractability.

6. Application in the Semantic Web:
- Description logics found significant application in the field of the Semantic Web. OWL, which is based on description logics, is used for modeling ontologies and semantic data on the web, making information more machine-readable and interpretable.

7. Recent Developments:
- Ongoing research focuses on improving the scalability and performance of DL reasoners, making them more practical for real-world applications.
- Research continues in enhancing the expressiveness of DLs while managing computational complexity.

8. Use in Various Domains:
- Description logics are now used in various domains, including healthcare (for medical ontologies), natural language processing, knowledge engineering, and more.

9. Integration with Machine Learning:
- More recently, there is a growing interest in integrating description logics with machine learning techniques to handle uncertainty and dynamic data.

In summary, the evolution of description logics has seen the development of formal languages, reasoning algorithms, and standardization efforts. They have found broad applications, especially in the Semantic Web, and continue to evolve with ongoing research to enhance their expressive power and computational efficiency."
[{2023-10-24 retrieved} https://chat.openai.com/c/109404c4-30f8-4ef0-825b-5b87063860a0]

"evolution of description-logics:
Description logics (DLs) are a family of formal knowledge representation languages. They are used to represent the concepts and relationships in a domain of knowledge, and to reason about them. DLs are more expressive than propositional logic, but less expressive than first-order logic. This makes them well-suited for a wide range of applications, including ontologies, knowledge bases, and rule bases.

DLs have evolved over time to become more expressive and efficient. The following are some of the key milestones in the evolution of DLs:
* **1980s:** DLs were first developed under the names "terminological systems" and "concept languages." These early DLs were relatively simple, but they were able to represent basic concepts and relationships.
* **1990s:** DLs became more expressive and efficient. The ALC DL was introduced in 1991, and it served as the basis for many more expressive DLs. ALC supports Boolean concept constructors together with universal and existential restrictions; later extensions added features such as role hierarchies and qualified cardinality restrictions.
* **2000s:** DLs continued to evolve to meet the needs of new applications. The OWL DL was standardized by the World Wide Web Consortium (W3C) in 2004. OWL is a very expressive DL that is used to represent ontologies on the Semantic Web.
* **2010s and beyond:** Research on DLs continues today. New DLs are being developed to support new applications, such as reasoning about temporal and spatial information, and fuzzy information.

Here are some of the specific trends in the evolution of DLs:
* **Increasing expressiveness:** DLs have become more expressive over time. This means that they can be used to represent more complex concepts and relationships. For example, early DLs could only represent concepts that were defined by their attributes. More recent DLs can represent concepts that are defined by their relationships to other concepts.
* **Increasing efficiency:** DLs have also become more efficient over time. This means that reasoning algorithms for DLs can run faster. This is important for many applications, such as real-time reasoning systems.
* **New applications:** DLs are being used in a wider range of applications today than they were in the past. For example, DLs are now being used to represent ontologies for the Semantic Web, as well as knowledge bases for expert systems and decision support systems.

Overall, the evolution of DLs has been driven by the need to represent more complex knowledge and to reason about it more efficiently. DLs are now a powerful tool for knowledge representation and reasoning, and they are used in a wide range of applications.

Here are some examples of the different types of DLs that have been developed over time:
* **ALC:** ALC is a basic DL that supports concepts, roles, Boolean concept constructors, and universal and existential restrictions.
* **OWL:** OWL is a very expressive DL that is used to represent ontologies on the Semantic Web. OWL supports all of the features of ALC, as well as additional features such as data properties, role hierarchies, and disjointness axioms.
* **SHACL:** SHACL is a lightweight constraint language that is used to constrain the structure of RDF graphs. SHACL is less expressive than OWL, but it is easier to use and more efficient.
* **EL:** EL is a very simple DL, restricted to conjunction and existential restrictions, for which reasoning is polynomial. EL is used for very large ontologies, such as SNOMED CT.
* **Fuzzy DLs:** Fuzzy DLs are DLs that support fuzzy reasoning. Fuzzy DLs can be used to represent and reason about knowledge that is uncertain or imprecise.

The evolution of DLs is an ongoing process. New DLs are being developed to support new applications and to improve on the expressiveness and efficiency of existing DLs."
[{2023-10-24 retrieved} https://bard.google.com/chat/9ed81e521ec87bc6]

name::
* McsEngl.evoluting-of-lagDlgc,
* McsEngl.lagDlgc'evoluting,

lagLogic.frame-logic

description::
"The Frame Logic (or F-logic) provides a logical foundation for frame-based and object-oriented languages for data and knowledge representation."
[{2021-02-04} http://flora.sourceforge.net/aboutFlogic.html]

name::
* McsEngl.F-logic-language!⇒lagFlgc,
* McsEngl.frame-logic-language!⇒lagFlgc,
* McsEngl.lagKnlg.019-frame-logic!⇒lagFlgc,
* McsEngl.lagKnlg.frame-logic!⇒lagFlgc,
* McsEngl.lagFlgc,
* McsEngl.lagFlgc!=frame-logic-language,
* McsEngl.lagLogic.F-logic!⇒lagFlgc,

descriptionLong::
"F-logic [15] extends classical predicate calculus with the concepts of objects, classes, and types, which are adapted from object-oriented programming.
In this way, F-logic integrates the paradigms of logic programming and deductive databases with the object-oriented programming paradigm.
Most of the applications of F-logic have been as a language for intelligent information systems based on the logic programming paradigm. This was the original motivation for the development of F-logic. More recently, F-logic has been used to represent ontologies and other forms of Semantic Web reasoning [9, 8, 25, 1, 23, 14].
[{2021-02-04} https://www3.cs.stonybrook.edu/~kifer/TechReports/ontologies-rules-flogic.pdf]
===
"F-logic (frame logic) is a knowledge representation and ontology language. F-logic combines the advantages of conceptual modeling with object-oriented, frame-based languages and offers a declarative, compact and simple syntax, as well as the well-defined semantics of a logic-based language.
Features include, among others, object identity, complex objects, inheritance, polymorphism, query methods, encapsulation. F-logic stands in the same relationship to object-oriented programming as classical predicate calculus stands to relational database programming."
[{2021-02-04} https://en.wikipedia.org/wiki/F-logic]
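
· to convey the frame flavor in procedural terms, here is a minimal Python sketch of slot inheritance (invented names; real F-logic is declarative, and inheritance is defined by the logic's semantics rather than by this kind of lookup):
```python
# Sketch of frame-style slot inheritance, the data model that
# F-logic puts on a logical footing. Illustrative names only.

class Frame:
    def __init__(self, name, isa=None, **slots):
        self.name, self.isa, self.slots = name, isa, slots

    def get(self, slot):
        """Look up a slot, inheriting from superclass frames."""
        frame = self
        while frame is not None:
            if slot in frame.slots:
                return frame.slots[slot]
            frame = frame.isa
        raise KeyError(slot)

person   = Frame("person", legs=2)
employee = Frame("employee", isa=person, taxable=True)
mary     = Frame("mary", isa=employee)

print(mary.get("legs"))     # 2, inherited from person
print(mary.get("taxable"))  # True, inherited from employee
```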

relation-to-description-logic of lagFlgc

description::
"No discussion of F-logic is complete without a comparison with description logics (abbr. DL) [2] and, in particular, with languages such as OWL [21].
Since the first-order flavor of F-logic is an extension of classical predicate logic, it is clear that a description logic subset can be defined within F-logic and, indeed, this has been done [3].
In this sense, F-logic subsumes DLs."
[{2021-02-04} https://www3.cs.stonybrook.edu/~kifer/TechReports/ontologies-rules-flogic.pdf]

name::
* McsEngl.lagDlgc'relation-to-lagFlgc,
* McsEngl.lagFlgc'relation-to-lagDlgc,

lagLogic.transaction-logic

description::
"Transaction Logic provides logical foundations for state changes and side effects in a logic programming language. A significant portion of this theory is implemented in Flora-2. Applications of Transaction Logic include modeling and reasoning about workflows, planning, robotics, view maintenance in databases, and more."
[{2021-02-04} http://flora.sourceforge.net/aboutTR.html]
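
· a rough procedural analogy in Python for hypothetical updates (invented names; Transaction Logic itself states updates and constraints declaratively, with a semantics over execution paths, so this sketch only conveys the commit-or-roll-back idea):
```python
# Rough analogy for Transaction Logic's hypothetical updates:
# run the updates against a copy of the state and keep them only
# if a constraint still holds. Illustrative only.

def transact(state, updates, constraint):
    """Apply updates hypothetically; commit iff constraint holds."""
    hypothetical = dict(state)
    for key, value in updates:
        hypothetical[key] = value
    return hypothetical if constraint(hypothetical) else state

account = {"balance": 100}
nonnegative = lambda s: s["balance"] >= 0

account = transact(account, [("balance", 40)], nonnegative)    # commits
account = transact(account, [("balance", -10)], nonnegative)   # rolls back
print(account["balance"])   # 40
```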

name::
* McsEngl.TL!=transaction-logic!⇒lagTlgc,
* McsEngl.lagKnlg.020-transaction-logic!⇒lagTlgc,
* McsEngl.lagKnlg.transaction-logic!⇒lagTlgc,
* McsEngl.lagLogic.transaction-logic!⇒lagTlgc,
* McsEngl.lagTlgc!=transaction-logic-language,
* McsEngl.transaction-logic-language!⇒lagTlgc,

evoluting of lagTlgc

description::
"evolution of transaction-logic:
Transaction logic (TL) is a declarative logic programming language for specifying and executing transactions. It was first proposed in 1992 by Bonner and Kifer, and has since been extended in various ways.

TL is based on predicate logic, but it includes additional features that make it suitable for programming transactions, such as:
* **Hypothetical updates:** TL allows programmers to specify hypothetical updates to the database, which can then be rolled back or committed. This is useful for reasoning about the consequences of a transaction before it is executed.
* **Dynamic constraints:** TL allows programmers to specify constraints on the execution of transactions. These constraints can be used to ensure that transactions are executed in a safe and consistent manner.
* **Nondeterminism:** TL allows programmers to specify nondeterministic transactions. This is useful for modeling real-world transactions, which are often nondeterministic due to factors such as concurrency and failures.
* **Bulk updates:** TL allows programmers to specify bulk updates to the database. This is useful for improving the performance of transactions that involve large amounts of data.

TL has been used to develop a variety of applications, including database management systems, active databases, and workflow management systems.

The evolution of TL can be divided into two main stages:
1. **Early development:** The early development of TL was focused on developing a sound and complete theoretical foundation for the language. This included developing a formal semantics for TL and proving that the language is expressive enough to capture a wide range of transaction processing requirements.
2. **Practical development:** The practical development of TL has focused on developing efficient implementations of the language and making it easier to use. This has included developing compilers for TL, libraries of reusable TL code, and tools for debugging and testing TL programs.

Some of the recent advances in TL research include:
* **Concurrent TL:** Concurrent TL is an extension of TL that supports concurrent transaction execution. This is important for applications such as online banking systems, which need to be able to handle multiple concurrent transactions.
* **Defeasible reasoning TL:** Defeasible reasoning TL is an extension of TL that supports defeasible reasoning. This is useful for modeling real-world situations, where knowledge is often incomplete and uncertain.
* **Partially defined actions TL:** Partially defined actions TL is an extension of TL that supports partially defined actions. This is useful for modeling real-world situations, where actions may fail or have unexpected consequences.

TL is a powerful and versatile language for programming transactions. It has a strong theoretical foundation and efficient implementations are available. TL is being used to develop a wide range of applications, and research is ongoing to extend the language and make it even more powerful and easier to use."
[{2023-11-04 retrieved} https://bard.google.com/chat/e11bc335cb4c688d]

name::
* McsEngl.evoluting-of-lagTlgc,
* McsEngl.lagTlgc'evoluting,

lagLogic.Datalog-{1977}

description::
">evolution of DataLog language:
The evolution of the Datalog language has been a fascinating journey, spanning over four decades of research and development. From its early beginnings as a simple declarative query language, Datalog has evolved into a powerful and versatile tool for a wide range of applications.

**Early Years (1970s-1980s)**
Datalog emerged in the 1970s as a subset of the logic programming language Prolog. It was designed to be a more database-oriented language, with a focus on efficiency and expressiveness. Datalog quickly gained popularity in the academic community, as it provided a clean and concise way to represent and query data.

**The Rise of Deductive Databases (1980s-1990s)**
In the 1980s, Datalog became the foundation for a new type of database system called deductive databases. Deductive databases combined the power of logic programming with the efficiency of traditional databases. This made them well-suited for a wide range of applications, including knowledge representation, reasoning, and rule-based systems.

**Extensions and Variants (1990s-present)**
As Datalog matured, it was extended with new features to support a wider range of applications. These extensions included support for negation, aggregate functions, and non-deterministic rules. Datalog also spawned a number of variants, each with its own unique set of features.

**Modern Datalog (2000s-present)**
Today, Datalog is a mature and widely used language with a rich ecosystem of tools and applications. It is used in a variety of domains, including knowledge management, data integration, and semantic web applications. Datalog is also being used in the development of new data science and machine learning algorithms.

**Future of Datalog**
The future of Datalog is bright. As the volume and complexity of data continues to grow, Datalog's declarative and expressive nature is becoming increasingly valuable. Datalog is well-positioned to play a leading role in the development of new data management and analysis tools.
Here are some of the key trends in the evolution of Datalog:
* **Increased expressiveness:** Datalog has been extended with new features that make it more expressive, such as negation and aggregate functions.
* **Improved performance:** Datalog engines have become more efficient, making them able to handle larger and more complex datasets.
* **New applications:** Datalog is being used in a wider range of applications, including knowledge management, data integration, and semantic web applications."
[{2023-11-08 retrieved} https://bard.google.com/chat/90cb1636d6b6adb7?hl=en&pli=1]
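
· the quoted overview shows no actual Datalog, so here is a minimal sketch of its bottom-up semantics in Python, specialized to the classic ancestor program (the encoding is invented for the illustration; it is not a real Datalog engine):
```python
# Naive bottom-up evaluation of a tiny Datalog program:
#   ancestor(X,Y) :- parent(X,Y).
#   ancestor(X,Z) :- parent(X,Y), ancestor(Y,Z).
# Illustrative sketch, specialized to these two rules.

parent = {("abe", "bob"), ("bob", "carl"), ("carl", "dee")}

def ancestors(parent):
    """Apply the rules until no new facts appear (a fixpoint)."""
    ancestor = set(parent)                       # first rule
    while True:
        derived = {(x, z)                        # second rule
                   for (x, y) in parent
                   for (y2, z) in ancestor if y == y2}
        if derived <= ancestor:
            return ancestor                      # fixpoint reached
        ancestor |= derived

print(sorted(ancestors(parent)))
# [('abe', 'bob'), ('abe', 'carl'), ('abe', 'dee'),
#  ('bob', 'carl'), ('bob', 'dee'), ('carl', 'dee')]
```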

name::
* McsEngl.Datalog-lagLogic!⇒lagDtlg,
* McsEngl.lagDtlg!=Datalog--logic-language,
* McsEngl.lagKnlg.028-Datalog!⇒lagDtlg,
* McsEngl.lagKnlg.Datalog!⇒lagDtlg,
* McsEngl.lagLogic.Datalog!⇒lagDtlg,

lagLogic.Prolog-{1972}

description::
"Prolog is a logic programming language associated with artificial intelligence and computational linguistics."
[{2021-02-04} https://en.wikipedia.org/wiki/Prolog]

name::
* McsEngl.Prolog!⇒lagPrlg,
* McsEngl.lagKnlg.021-Prolog!⇒lagPrlg,
* McsEngl.lagKnlg.Prolog!⇒lagPrlg,
* McsEngl.lagLogic.Prolog!⇒lagPrlg,
* McsEngl.lagPrlg!=Prolog--logic-language,

descriptionLong::
"Prolog is a logic programming language associated with artificial intelligence and computational linguistics.[1][2][3]
Prolog has its roots in first-order logic, a formal logic, and unlike many other programming languages, Prolog is intended primarily as a declarative programming language: the program logic is expressed in terms of relations, represented as facts and rules. A computation is initiated by running a query over these relations.[4]
The language was developed and implemented in Marseille, France, in 1972 by Alain Colmerauer with Philippe Roussel, based on Robert Kowalski's procedural interpretation of Horn clauses.[5][6]
Prolog was one of the first logic programming languages[7] and remains the most popular such language today, with several free and commercial implementations available. The language has been used for theorem proving,[8] expert systems,[9] term rewriting,[10] type systems,[11] and automated planning,[12] as well as its original intended field of use, natural language processing.[13][14] Modern Prolog environments support the creation of graphical user interfaces, as well as administrative and networked applications.
Prolog is well-suited for specific tasks that benefit from rule-based logical queries such as searching databases, voice control systems, and filling templates."
[{2021-02-04} https://en.wikipedia.org/wiki/Prolog]
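
· Prolog's execution model is backward chaining (SLD resolution) with unification; here is a minimal Python sketch of that model (the term encoding and helper names are invented for the illustration; real Prolog adds the cut, negation, arithmetic, and far more):
```python
# Minimal sketch of Prolog-style backward chaining with
# unification. Terms are tuples, variables are capitalized
# strings; no cut, no occurs check. Illustration only.

def is_var(term):
    return isinstance(term, str) and term[:1].isupper()

def walk(term, subst):
    while is_var(term) and term in subst:
        term = subst[term]
    return term

def unify(a, b, subst):
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return {**subst, a: b}
    if is_var(b):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

def rename(term, tag):
    """Give a rule's variables fresh names for each use."""
    if is_var(term):
        return f"{term}#{tag}"
    if isinstance(term, tuple):
        return tuple(rename(t, tag) for t in term)
    return term

def solve(goals, rules, subst, depth=0):
    """Yield every substitution that proves all goals."""
    if not goals:
        yield subst
        return
    for i, (head, body) in enumerate(rules):
        tag = f"{depth}.{i}"
        head, body = rename(head, tag), [rename(g, tag) for g in body]
        s = unify(goals[0], head, subst)
        if s is not None:
            yield from solve(body + goals[1:], rules, s, depth + 1)

rules = [
    (("parent", "tom", "bob"), []),                       # facts
    (("parent", "bob", "ann"), []),
    (("ancestor", "X", "Y"), [("parent", "X", "Y")]),     # rules
    (("ancestor", "X", "Z"), [("parent", "X", "Y"),
                              ("ancestor", "Y", "Z")]),
]

# query: ?- ancestor(tom, Q).
for s in solve([("ancestor", "tom", "Q")], rules, {}):
    print(walk("Q", s))    # prints: bob, then ann
```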

evoluting of lagPrlg

description::
"evolution of Prolog:
draft2
Prolog was created in 1972 by Alain Colmerauer and Philippe Roussel at the University of Aix-Marseille in France. It was originally designed for natural language processing, but it has since been used for a wide variety of tasks, including artificial intelligence, expert systems, and database management.

The early versions of Prolog were interpreters, but in 1977 David Warren at the University of Edinburgh developed the first Prolog compiler. This made Prolog much faster and more efficient, and it helped to make Prolog more popular among researchers and practitioners.

In the 1980s, Prolog was used in a number of high-profile projects, including the Japanese Fifth Generation Computer Systems project. However, Prolog also faced some challenges during this time. One challenge was that Prolog programs could be difficult to debug. Another challenge was that Prolog was not as well-suited for some tasks as other languages, such as C and C++.

Despite these challenges, Prolog continued to evolve and grow in popularity in the 1990s and 2000s. New Prolog implementations were developed, and Prolog was used in a wider range of applications.

In recent years, Prolog has seen a resurgence in interest. This is due in part to the growing popularity of artificial intelligence and machine learning. Prolog is a good language for these tasks because it is declarative, meaning that the programmer focuses on what they want the program to do, rather than how to do it.

Here are some of the key developments in the evolution of Prolog:
* 1972: First Prolog interpreter developed by Colmerauer and Roussel
* 1977: First Prolog compiler developed by Warren
* 1980s: Prolog used in a number of high-profile projects, including the Japanese Fifth Generation Computer Systems project
* 1990s and 2000s: New Prolog implementations developed, and Prolog used in a wider range of applications
* Recent years: Resurgence in interest in Prolog due to the growing popularity of artificial intelligence and machine learning

Today, Prolog is a mature and well-established language with a large community of users and developers. It is a powerful tool for artificial intelligence, machine learning, and other applications that require declarative programming.

Some of the most popular Prolog implementations today include:
* SWI-Prolog
* GNU Prolog
* YAP Prolog
* XSB Prolog
* Visual Prolog

These implementations offer a variety of features and capabilities, such as compilers, interpreters, debuggers, graphical user interfaces, and libraries for specific tasks.

Prolog is a dynamic language that continues to evolve. New features and capabilities are being added to Prolog implementations all the time. Prolog is also being used in new and innovative ways, such as in natural language processing, machine learning, and big data analytics."
[{2023-11-04 retrieved} https://bard.google.com/chat/b06a1c20d401c2d5]

name::
* McsEngl.evoluting-of-lagPrlg,
* McsEngl.lagPrlg'evoluting,

lagPrlg.SPECIFIC

description::
* Absys,
* B-Prolog,
* BNR Prolog,
* ECLiPSe,
* GNU Prolog,
* KL0,
* KL1,
* ΛProlog,
* Logtalk,
* Objlog,
* Prolog++,
* Prolog32,
* Strand (programming language),
* SWI-Prolog,
* Visual Prolog,
* YAP (Prolog),
[{2021-02-05} https://en.wikipedia.org/wiki/Category:Prolog_programming_language_family]

name::
* McsEngl.lagPrlg.specific,

lagLogic.HiLog-{1986..}

description::
"evolution of HiLog:
HiLog, or Higher-order Logic, was one of the first higher-order query languages. It was developed in the late 1980s by Michael Kifer and Loui Rundensteiner, and was based on the logic programming language Prolog. HiLog was designed to be a powerful and expressive query language for deductive databases.

HiLog has evolved over time to include new features and capabilities. Some of the key changes include:
* **Support for object-oriented databases:** HiLog was extended in the early 1990s to support object-oriented databases. This allowed HiLog users to query and manipulate data in object-oriented databases using a logical programming paradigm.
* **Support for cooperative query answering:** HiLog was extended in the mid-1990s to support cooperative query answering. This allowed HiLog users to query multiple databases in a cooperative and consistent manner.
* **Support for meta-programming:** HiLog was extended in the late 1990s to support meta-programming. This allowed HiLog users to write programs that manipulate and reason about other HiLog programs.

In addition to these major changes, HiLog has also been extended with a number of minor features and improvements over time.

HiLog has been used to develop a variety of database applications, including knowledge base systems, rule-based systems, and deductive database systems. HiLog is also used in a number of academic research projects on database languages and systems.

Here is a summary of the key evolution milestones of HiLog:
* **1986:** HiLog is introduced by Weidong Chen, Michael Kifer, and David S. Warren.
* **1989:** HiLog is extended to support object-oriented databases.
* **1993:** HiLog is extended to support cooperative query answering.
* **1996:** HiLog is extended to support meta-programming.
* **2000-present:** HiLog continues to be extended with new features and improvements.

HiLog is a powerful and expressive query language that has been used to develop a variety of database applications. It is still under active development, and new features and improvements are being added all the time."
[{2023-11-04 retrieved} https://bard.google.com/chat/fc74e240b86fc557?hl=en&pli=1]
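
· HiLog's central feature, variables ranging over predicate names, can be approximated in standard Prolog with call/N; the sketch below is illustrative only and is not actual HiLog syntax:

% closure(R, X, Y): Y is reachable from X via the binary relation R
closure(R, X, Y) :- call(R, X, Y).
closure(R, X, Y) :- call(R, X, Z), closure(R, Z, Y).

edge(a, b).
edge(b, c).

% ?- closure(edge, a, Y).
% Y = b ;
% Y = c.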

name::
* McsEngl.HiLog,
* McsEngl.lagLogic.HiLog,

lagLogic.functional-logic

description::
"Functional logic programming is the combination, in a single programming language, of the paradigms of functional programming and logic programming.[1] This style of programming is embodied by various programming languages, including Curry and Mercury.[2][1]
A journal devoted to the integration of functional and logic programming was published by MIT Press and the European Association for Programming Languages and Systems between 1995 and 2008.[3]"
[{2021-02-05} https://en.wikipedia.org/wiki/Functional_logic_programming]
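
· Curry or Mercury code would be the natural illustration here; as a rough stand-in, the plain-Prolog relation below shows the functional-logic idea that one definition can be run forwards as a function or backwards to enumerate solutions:

% app(Xs, Ys, Zs): Zs is the concatenation of Xs and Ys
app([], Ys, Ys).
app([X|Xs], Ys, [X|Zs]) :- app(Xs, Ys, Zs).

% forwards, as a function:
% ?- app([1,2], [3], Zs).      % Zs = [1,2,3]
% backwards, as a relation:
% ?- app(Xs, Ys, [1,2,3]).     % enumerates every split of [1,2,3]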

name::
* McsEngl.FLP!=functional-logic-programming!⇒lagFnlc,
* McsEngl.lagKnlg.022-functional-logic!⇒lagFnlc,
* McsEngl.lagKnlg.functional-logic!⇒lagFnlc,
* McsEngl.lagFnlc!=functional-logic,
* McsEngl.lagLogic.functional-logic!⇒lagFnlc,

evoluting of lagFnlc

description::
"evolution of functional-logic:
Functional-logic programming (FLP) is a programming paradigm that combines the declarative aspects of functional programming with the nondeterministic search and logical variables of logic programming. This makes it a powerful tool for developing complex applications in areas such as artificial intelligence, knowledge representation, and planning.

FLP emerged in the late 1970s and early 1980s, with the development of early combinations of equational and logic programming such as LOGLISP and EQLOG. These languages introduced a number of new features that made FLP more expressive and efficient, such as narrowing, lazy evaluation, and constraint solving.

In the 1990s, FLP research continued to flourish, with the development of new languages such as Curry and Mercury. These languages introduced further innovations, such as strong static type systems and more refined operational semantics.

In recent years, FLP has seen a resurgence of interest, due in part to its declarative handling of search, nondeterminism, and program inversion.

**Here is a brief overview of the evolution of FLP:**
* **1970s and early 1980s:** Development of the first FLP languages, such as LOGLISP and EQLOG.
* **1980s:** Introduction of new features such as narrowing, lazy evaluation, and constraint solving.
* **1990s:** Development of new languages such as Curry and Mercury, with further innovations such as static type systems.
* **2000s and beyond:** Renewed interest in FLP, due in part to its declarative handling of search and nondeterminism.

**Some of the key trends in the evolution of FLP include:**
* **Increased emphasis on expressiveness and efficiency:** FLP languages have become more expressive and efficient over time, thanks to the introduction of new features such as higher-order logic, lazy evaluation, and constraint solving.
* **Development of new programming paradigms:** FLP has been combined with other programming paradigms, such as object-oriented programming, constraint programming, and concurrent programming, to create new programming languages and frameworks.
* **Expansion of application domains:** FLP is now being used to develop a wide range of applications, including artificial intelligence, knowledge representation, planning, and distributed systems.

FLP is a relatively young programming paradigm, but it has evolved rapidly in recent years. It is now a powerful tool for developing complex applications in a variety of domains."
[{2023-11-04 retrieved} https://bard.google.com/chat/157308f62898c36b?hl=en&pli=1]

name::
* McsEngl.evoluting-of-lagFnlc,
* McsEngl.lagFnlc'evoluting,

lagLogic.common-logic

description::
"Common Logic (CL) evolved from two projects to develop parallel ANSI standards for conceptual graphs and the Knowledge Interchange Format (Genesereth & Fikes 1992). Eventually, those projects were merged into a single ISO project to develop a common abstract syntax and model-theoretic foundation for a family of logic-based notations (ISO/IEC 24707). Hayes and Menzel (2001) defined a very general model theory for CL, which Hayes and McBride (2003) used to define the semantics for the languages RDF(S) and OWL. In addition to the abstract syntax and model theory, the CL standard specifies three concrete dialects that are capable of expressing the full CL semantics: the Common Logic Interchange Format (CLIF), the Conceptual Graph Interchange Format (CGIF), and the XML-based notation for CL (XCL). Since the semantics of RDF and OWL is based on a subset of CL semantics, those languages can also be considered dialects of CL: any statement in RDF or OWL can be translated to CLIF, CGIF, or XCL, but only a subset of CL can be translated back to RDF or OWL."
[{2021-02-05} http://www.jfsowa.com/cg/cgif.htm]

name::
* McsEngl.CL!=common-logic-language,
* McsEngl.common-logic-language!⇒lagCmlc,
* McsEngl.lagCmlc,
* McsEngl.lagCmlc!=common-logic,
* McsEngl.lagKnlg.023-common-logic!⇒lagCmlc,
* McsEngl.lagKnlg.common-logic!⇒lagCmlc,
* McsEngl.lagLogic.common-logic!⇒lagCmlc,

info-resource of lagCmlc

description::
* ISO/IEC 24707:2018, EN, 2nd, Information technology — Common Logic (CL) — A framework for a family of logic-based languages, ISO/IEC JTC 1/SC 32, https://standards.iso.org/ittf/PubliclyAvailableStandards/,

name::
* McsEngl.lagCmlc'Infrsc,

lagCmlc.SPECIFIC

description::
* CGIF,
* CLIF,
* XCL,
* OWL,
* RDF(S),
"Hayes and Menzel (2001) defined a very general model theory for CL, which Hayes and McBride (2003) used to define the semantics for the languages RDF(S) and OWL. In addition to the abstract syntax and model theory, the CL standard specifies three concrete dialects that are capable of expressing the full CL semantics: the Common Logic Interchange Format (CLIF), the Conceptual Graph Interchange Format (CGIF), and the XML-based notation for CL (XCL). Since the semantics of RDF and OWL is based on a subset of CL semantics, those languages can also be considered dialects of CL: any statement in RDF or OWL can be translated to CLIF, CGIF, or XCL, but only a subset of CL can be translated back to RDF or OWL."
[{2021-02-06} http://www.jfsowa.com/cg/cgif.htm]

name::
* McsEngl.lagCmlc.specific,

lagLogic.XSB-{1993-1998-2023..}

description::
"XSB is the name of a dialect of the Prolog programming language and its implementation developed at Stony Brook University in collaboration with the Katholieke Universiteit Leuven, the New University of Lisbon, Uppsala University and software vendor XSB, Inc.
XSB extends Prolog with tabled resolution and HiLog (a standard extension of Prolog permitting limited higher-order logic programming).
The open source XSB implementation includes an interface to the Java programming language."
[{2021-02-04} https://en.wikipedia.org/wiki/XSB]

name::
* McsEngl.XSB!⇒lagXsbl,
* McsEngl.lagKnlg.024-XSB!⇒lagXsbl,
* McsEngl.lagKnlg.XSB!⇒lagXsbl,
* McsEngl.lagLogic.XSB!⇒lagXsbl,
* McsEngl.lagXsbl,

descriptionLong::
"Since its inception, a primary goal of XSB has been to expand the areas in which Prolog is used, by making Prolog more powerful, more efficient, and more declarative. In 1993 when XSB was first released, it supported this goal by including both tabled resolution for definite programs, which provided it with deductive databasestyle features of such systems as Coral (Ramakrishnan et al. 1992) and LDL (Chimenti et al. 1990). At the time, while XSB was faster than those systems, it was basically suitable only for research by its developers. Since then, XSB has become a widely used multi-threaded Prolog that is compliant with most standards. During this development, XSB’s research focus has continued to be centered on tabling.
At one level, the idea behind tabling is simple; subgoals encountered in a query evaluation are maintained in a table, along with answers to these subgoals. If a subgoal is re-encountered, the evaluation reuses information from the table rather than re-performing resolution against program clauses.
[{2021-02-04} https://www3.cs.stonybrook.edu/~tswift/webpapers/tplp-10-sips.pdf]
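
· a minimal sketch of tabling as described above; the :- table directive is XSB syntax (also supported by SWI-Prolog), while the graph is hypothetical. Without the directive, the left-recursive reach/2 would loop forever on the cycle; with tabling, re-encountered subgoals are answered from the table:

:- table reach/2.

reach(X, Y) :- edge(X, Y).
reach(X, Y) :- reach(X, Z), edge(Z, Y).

edge(a, b).
edge(b, a).   % cycle
edge(b, c).

% ?- reach(a, Where).
% Where = b ; Where = a ; Where = c.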

evoluting of lagXsbl

description::
"evolution of XSB:
XSB was originally developed at Stony Brook University in the early 1990s by David S. Warren, Terrance Swift, and Kostis Sagonas. It was based on the SB-Prolog language, which was also developed at Stony Brook University in the late 1980s. XSB was the first Prolog implementation to include tabled resolution, a technique that improves the efficiency of many logic programs.

XSB has continued to evolve over the years, with new features and improvements being added in each release. Some of the major milestones in the evolution of XSB include:
* **1993:** The first release of XSB is made available to the public.
* **1995:** XSB is extended with HiLog, which adds higher-order syntax to logic programming.
* **1998:** XSB gains an interface to the Java programming language, making it possible to call XSB from Java applications.
* **2003:** XSB is extended with incremental tabled evaluation, which improves the performance of programs that need to update their tables frequently.
* **2006:** XSB is extended with support for symbolic constraint solving, which allows XSB programs to solve complex mathematical problems.
* **2010:** XSB is extended with support for distributed tabled evaluation, which allows XSB programs to be executed on multiple computers in parallel.
* **2013:** XSB is released as open source software.
* **2016:** XSB is extended with support for concurrent constraint solving, which allows XSB programs to solve multiple constraints simultaneously.
* **2019:** XSB is extended with support for symbolic differential equation solving, which allows XSB programs to solve complex differential equations.

XSB is now a powerful and versatile logic programming system that is used in a wide range of applications, including natural language processing, artificial intelligence, and software engineering. It is also a popular teaching language for logic programming courses.

**Current state of XSB**
XSB is currently under active development, with new features and improvements being added in each release.

**Future of XSB**
The future of XSB looks bright. The XSB development team is committed to adding new features and improving the performance of XSB. XSB is also likely to benefit from the growing popularity of artificial intelligence and machine learning, as logic programming is a powerful tool for developing AI and ML applications.
Here are some specific areas where the XSB development team is working to improve XSB in the future:
* Performance: The XSB development team is working to improve the performance of XSB on a variety of tasks, including tabled evaluation, constraint solving, and symbolic differential equation solving.
* Scalability: The XSB development team is working to make XSB more scalable, so that it can be used to solve larger and more complex problems.
* Usability: The XSB development team is working to make XSB easier to use, both for beginners and experienced users.

Overall, the future of XSB is very promising. XSB is a powerful and versatile logic programming system that is well-positioned to meet the needs of researchers and developers in a wide range of fields."
[{2023-11-04 retrieved} https://bard.google.com/chat/8fc48e7ddf67051c?hl=en&pli=1]

name::
* McsEngl.evoluting-of-lagXsbl,
* McsEngl.lagXsbl'evoluting,

lagLogic.Flora-2

description::
"ErgoAI is the new name for Ergo Suite. ErgoAI greatly extends the well-known open-source Flora-2 system (https://flora.sourceforge.net/) and, as of ErgoAI’s V3.0 release, Flora-2 as a distinct system is being retired."
[{2023-11-04 retrieved} http://coherentknowledge.com/]

"Flora-2 (a.k.a. Ergo Lite) is an advanced object-oriented knowledge representation and reasoning system. It is a dialect of F-logic with numerous extensions, including meta-programming in the style of HiLog, logical updates in the style of Transaction Logic, and defeasible reasoning. Applications include intelligent agents, Semantic Web, knowledge-based networking, ontology management, integration of information, security policy analysis, and more."
[{2021-02-04} http://flora.sourceforge.net/]

name::
* McsEngl.lagKnlg.025-Flora-2-language!⇒lagFlr2,
* McsEngl.lagKnlg.Flora-2-language!⇒lagFlr2,
* McsEngl.lagFlr2,
* McsEngl.lagFlr2!=Flora-2-language,
* McsEngl.lagLogic.Flora-2!⇒lagFlr2,

descriptionLong::
"Flora-2 is an open source semantic rule-based system for knowledge representation and reasoning. The language of the system is derived from F-logic,[1] HiLog,[2] and Transaction logic.[3] Being based on F-logic and HiLog implies that object-oriented syntax and higher-order representation are the major features of the system. Flora-2 also supports a form of defeasible reasoning called Logic Programming with Defaults and Argumentation Theories (LPDA).[4] Applications include intelligent agents, Semantic Web, knowledge-bases networking, ontology management, integration of information, security policy analysis, automated database normalization, and more.[5][6][7][8][9][10]
Flora-2 relies on the XSB system for its inference engine. The design and architecture of Flora-2 are described in a number of works.[11][12][13][14]
Details of the system and its use are described in the Flora-2 User's Manual.[15] Flora-2 is available for all major computing platforms, including Linux and other flavors of Unix, Microsoft Windows (both 32- and 64-bit), and Mac OS X."
[{2021-02-04} https://en.wikipedia.org/wiki/Flora-2]

lagFlr2'tech

description::
"FLORA-2 is build upon the OpenSource XSB Prolog engine which is developed in C and thus has a C interface"
[{2021-02-04} https://github.com/johannesloetzsch/reasonablepy]

name::
* McsEngl.lagFlr2'tech,

lagKnlg.CycL

description::
× generic: logo--knowledge-language,

"CycL is Cyc's large and extraordinarily flexible knowledge representation language.
It is essentially an augmentation of first-order predicate calculus (FOPC), with extensions to handle equality, default reasoning, skolemization, and some second-order features.
(For example, quantification over predicates is allowed in some circumstances, and complete assertions can appear as intensional components of other assertions.)
CycL uses a form of circumscription, includes the unique names assumption, and can make use of the closed world assumption where appropriate."
[{2014-02-05} http://www.cyc.com/cyc/cycl]

name::
* McsEngl.CycL!⇒lagCycl,
* McsEngl.lagKnlg.031-CycL!⇒lagCycl,
* McsEngl.lagKnlg.CycL!⇒lagCycl,
* McsEngl.lagCycl,
* McsEngl.lagLogic.CycL!⇒lagCycl,

descriptionLong::
"CycL in computer science and artificial intelligence is an ontology language used by Doug Lenat's Cyc artificial intelligence project. Ramanathan V. Guha was instrumental in the design of early versions of the language. There is a close variant of CycL known as MELD.
The original version of CycL was a frame language, but the modern version is not. Rather, it is a declarative language based on classical first-order logic, with extensions for modal operators and higher order quantification.
CycL is used to represent the knowledge stored in the Cyc Knowledge Base, available from Cycorp. The source code written in CycL released with the OpenCyc system is licensed as open source, to increase its usefulness in supporting the semantic web."
[{2021-02-08} https://en.wikipedia.org/wiki/CycL]
===
"CycL is a formal language whose syntax derives from first-order predicate calculus (the language of formal logic). In order to express common sense knowledge, however, it goes far beyond first order logic."
[{1998-02-26} http://www.cyc.com/cyc-2-1/ref/cycl-syntax.html,{1997-08-08}]

tech of lagCycl

description::
· any tech used.

name::
* McsEngl.lagCycl'tech,

app of lagCycl

description::
· the-application that stores and reasons the-Cyclkb.

"overview of CYC knowledge-representation-system:
CYC is a knowledge representation and reasoning system that has been under development for over 35 years. It is a large knowledge base that contains over 10 million concepts and 50 million assertions about the world. CYC is designed to represent commonsense knowledge, which is the kind of knowledge that humans take for granted but is often difficult for computers to understand.

CYC uses a first-order logic representation language to represent knowledge. This language allows CYC to represent complex relationships between concepts and to reason about them in a logical way. CYC also includes a set of inference rules that allow it to derive new knowledge from existing knowledge.

CYC has been used in a variety of applications, including natural language processing, machine translation, and question answering. It has also been used to develop expert systems and other AI systems.

Here is an overview of the key features of CYC:
* **Comprehensive knowledge base:** CYC contains a comprehensive knowledge base of commonsense knowledge, including knowledge about physics, biology, psychology, and social science.
* **Logical representation language:** CYC uses a first-order logic representation language to represent knowledge, which allows it to represent complex relationships between concepts and to reason about them in a logical way.
* **Inference rules:** CYC includes a set of inference rules that allow it to derive new knowledge from existing knowledge.
* **Scalability:** CYC is designed to be scalable to very large knowledge bases.
* **Modularity:** CYC is designed to be modular, so that new knowledge can be added and existing knowledge can be updated without disrupting the system.

CYC is a powerful knowledge representation and reasoning system that has the potential to revolutionize the way that computers understand and reason about the world.

Here are some examples of the kinds of questions that CYC can answer:
* What is the difference between a bird and a fish?
* What happens when you drop a ball?
* Why do people cry when they are sad?
* What is the best way to get to the airport?

CYC is still under development, but it has already been used to develop a number of successful AI applications. As CYC continues to grow and mature, it is likely to play an increasingly important role in the development of artificial intelligence."
[{2023-11-01 retrieved} https://bard.google.com/chat/97cb0050f04ebdc4]

name::
* McsEngl.CYC-Knlgmngr!⇒klmrCyc,
* McsEngl.Cyc-Knlgmngr!⇒klmrCyc,
* McsEngl.Cycapp!⇒klmrCyc,
* McsEngl.Knlgmngr.024-CYC!⇒klmrCyc,
* McsEngl.Knlgmngr.CYC!⇒klmrCyc,
* McsEngl.klmrCyc!=CYC--knowledge-manager,
* McsEngl.lagCycl'app!⇒klmrCyc,

descriptionLong::
"The platform consists of four foundational components that work together to enable our clients to address meaningful business challenges in ways never before possible:
* A formal knowledge representation language with the same expressive power as English. The expressiveness of Cyc’s knowledge representation is a powerful differentiator, enabling the system to fully and systematically represent the unique judgments, perspectives, and problem-solving approaches of your very best performers.
* A pre-existing knowledge base primed with tens of millions of rules-of-thumb and rules of good judgment spanning common sense, domain knowledge, and a general understanding of “how the world works.” Cyc’s ever-growing knowledge base is the broadest, deepest, and most complete AI tool ever developed, by several orders of magnitude.
* A battery of more than 1,100 stylized “inference engines” which work to intelligently and efficiently produce the same hypotheses, insights, and conclusions that a subject matter expert would, given a set of data and findings in an application area.
* Out-of-the-box mechanisms to connect Cyc’s ontology, your internal systems, and useful third-party data sources. This virtual data integration eliminates the need for brittle, expensive, and time-consuming traditional system integration."
[{2021-02-11} https://www.cyc.com/products]

user-interface of klmrCyc

description::
· the-part of the-app that handles the-human–machine interaction.

name::
* McsEngl.klmrCyc'user-interface!⇒Cycui,
* McsEngl.Cycui,

Cycui.frame-based

description::
"A frame-based display can present representations which are frame-based (a series of slots with values), or which are isomorphic to frames. For example, some CYC® interfaces use a frame-based display method for showing the binary predicates the displayed concept is a first argument to."
[{2021-02-13} http://web.archive.org/web/20121219042246/http://www.cyc.com:80/cycdoc/ref/glossary.html#frame-based]

name::
* McsEngl.Cycui.frame-based,

browser of klmrCyc

server of klmrCyc

DOING of klmrCyc

description::
"There have been over 100 successful applications of Cyc;[17]"
[{2021-02-11} https://en.wikipedia.org/wiki/Cyc#Applications]

name::
* McsEngl.klmrCyc'doing!⇒Cycdoing,
* McsEngl.Cycdoing,

Cycdoing.ASK of klmrCyc

description::
"The purpose of an ASK operation is to query a CYC® Server about the truth of a formula.
The details of an ASK operation vary from interface to interface. However, in most interfaces you will be asked to supply:
* A CycL formula, which may or may not contain free variables. If the formula contains variables, then you are asking CYC® to supply bindings to those variables which make the formula true; otherwise, you are simply asking whether the formula is true.
* A microtheory within which to look. In other words, each binding returned must come from a formula which is known to be true in, or can be inferred from assertions in, the specified microtheory and its baseMts.
* Optionally, a direction (or access level). If the direction is forward, no expensive inferencing is performed, and the only bindings returned are ones which can be found through a simple KB lookup. If the direction is backward, inferencing is performed.
* Optionally, a number of bindings requested. Once this number of bindings has been found, the ASK operation will terminate, even if more bindings exist. The default value varies among interfaces; if no value is passed by the interface on to the underlying code, the operation will continue until one of the other resource limits has been reached or until the KB has been exhaustively searched, which could take a long time.
* Optionally, a time limit. No more than this many seconds will be spent on the ASK operation. Again, a particular interface may provide a default time limit, but if none is passed to the underlying code, the ASK operation will continue until one of the other resource limits has been reached or until the KB has been exhaustively searched.
* Optionally, an inference depth cutoff. The search for bindings will go no deeper than this number of levels. In the search for bindings, each time the inference engine tries to satisfy a literal, the search is extended by one depth level. Currently, this is true whether literals are satisfied by firing rules or by simple KB lookup. If no inference-depth cutoff is passed by your interface on to the underlying code, the ASK operation will continue until one of the other resource limits has been reached or until the KB has been exhaustively searched.
* Optionally, an axiom depth cutoff. No more than this many rules will be used in the search for bindings. If no axiom-depth cutoff is passed by your interface on to the underlying code, the ASK operation will continue until one of the other resource limits has been reached or until the KB has been exhaustively searched.
For example, if you wanted to find some people who have served as head of state of a European country, you might execute an ASK with the following parameters:
Formula: (#$and (#$geographicalSubRegions #$ContinentOfEurope ?X) (#$headOfStateOf ?X ?Y))
Microtheory: #$BaseKB
Direction: backward
Number: 5
Time: 10
Inference Depth: 10
Axiom Depth: 3
This asks CYC® to find no more than 5 bindings for ?X and ?Y, satisfiable in the #$BaseKB, that can be found in 10 seconds of processor time or less. CYC® can use rules in the search, since the direction is backward, and can stop pursuing any search path as soon as it requires using a 4th rule or satisfying an 11th literal."
[{2021-02-12} http://web.archive.org/web/20121219042246/http://www.cyc.com:80/cycdoc/ref/glossary.html#ASK]

name::
* McsEngl.Cycdoing.ASK,

evoluting of klmrCyc

description::
"evolution of CYC knowledge-representation-system:
The Cyc knowledge representation system has evolved significantly since its inception in 1984. The original goal of the Cyc project was to create a knowledge base that would contain all of the common sense knowledge of a human being. This was a very ambitious goal, and the Cyc team has made significant progress, but they have not yet achieved their goal.

One of the key challenges that the Cyc team has faced is developing a knowledge representation language that is expressive enough to represent the full range of human common sense knowledge. The Cyc knowledge representation language, CycL, has evolved over time to meet this challenge. CycL is now a higher-order logic language that is capable of representing a wide range of concepts and relationships.

Another challenge that the Cyc team has faced is developing a process for acquiring and encoding knowledge into the Cyc knowledge base. The Cyc team has developed a number of different methods for acquiring knowledge, including manual axiom writing, machine learning, and natural language processing.

The Cyc knowledge base has grown significantly over time. In 1994, the Cyc knowledge base contained about 1 million assertions. As of 2023, the Cyc knowledge base contains over 24.5 million assertions. This growth has been enabled by the development of new knowledge acquisition and encoding methods.

The Cyc knowledge representation system has been used in a variety of different applications, including natural language processing, machine translation, and expert systems. The Cyc knowledge base has also been used to develop educational tools and to conduct research on artificial intelligence.

Here is a summary of the key evolution steps of the Cyc knowledge representation system:
* **1984:** The Cyc project is launched with the goal of creating a knowledge base that contains all of the common sense knowledge of a human being.
* **1989:** CycL, the Cyc knowledge representation language, is expanded to include higher-order logic.
* **1994:** The Cyc knowledge base contains about 1 million assertions.
* **1995:** The Cyc project is spun off as Cycorp, Inc., a commercial company.
* **2000s:** The Cyc team develops new methods for acquiring and encoding knowledge into the Cyc knowledge base.
* **2010s:** The Cyc knowledge base grows to over 24.5 million assertions.
* **2020s:** The Cyc knowledge representation system is used in a variety of different applications, including natural language processing, machine translation, expert systems, educational tools, and research on artificial intelligence.

The Cyc knowledge representation system is one of the most ambitious and successful attempts to create a comprehensive knowledge base of common sense knowledge. The Cyc team continues to work on improving the Cyc knowledge representation language and the Cyc knowledge base."
[{2023-11-01 retrieved} https://bard.google.com/chat/97cb0050f04ebdc4]

name::
* McsEngl.evoluting-of-klmrCyc,
* McsEngl.klmrCyc'evoluting,

Cyc-10 of klmrCyc

description::
"The current version of CYC® developed at Cycorp under the direction of Keith Goolsbey, and first deployed there in March, 1995. All current CYC® development effort is with Cyc-10."
[{2021-02-09} http://web.archive.org/web/20121219042246/http://www.cyc.com:80/cycdoc/ref/glossary.html#Cyc-10]

name::
* McsEngl.Cyc-10!⇒Cycapp10,
* McsEngl.klmrCyc'Cyc-10!⇒Cycapp10,
* McsEngl.Cycapp10,

Cyc-9 of klmrCyc

description::
"The last version of CYC® developed at MCC."
[{2021-02-09} http://web.archive.org/web/20121219042246/http://www.cyc.com:80/cycdoc/ref/glossary.html#Cyc-9]

name::
* McsEngl.Cyc-9!⇒Cycapp09,
* McsEngl.klmrCyc'Cyc-9!⇒Cycapp09,
* McsEngl.Cycapp09,

OpenCyc of klmrCyc

description::
"The first version of OpenCyc was released in spring 2002 and contained only 6,000 concepts and 60,000 facts. The knowledge base was released under the Apache License. Cycorp stated its intention to release OpenCyc under parallel, unrestricted licences to meet the needs of its users. The CycL and SubL interpreter (the program that allows users to browse and edit the database as well as to draw inferences) was released free of charge, but only as a binary, without source code. It was made available for Linux and Microsoft Windows. The open source Texai[15] project released the RDF-compatible content extracted from OpenCyc.[16] A version of OpenCyc, 4.0, was released in June 2012. OpenCyc 4.0 included much of the Cyc ontology at that time, containing hundreds of thousands of terms, along with millions of assertions relating the terms to each other; however, these are mainly taxonomic assertions, not the complex rules available in Cyc. The OpenCyc 4.0 knowledge base contained 239,000 concepts and 2,093,000 facts.
The main point of releasing OpenCyc was to help AI researchers understand what was missing from what they now call ontologies and knowledge graphs. It's useful and important to have properly taxonomized concepts like person, night, sleep, lying down, waking, happy, etc., but what's missing from the OpenCyc content about those terms, but present in the Cyc KB content, are the various rules of thumb that most of us share about those terms: that (as a default, in the ModernWesternHumanCultureMt) each person sleeps at night, sleeps lying down, can be woken up, is not happy about being woken up, and so on. That point does not require continually-updated releases of OpenCyc, so, as of 2017, OpenCyc is no longer available."
[{2021-02-11} https://en.wikipedia.org/wiki/Cyc#OpenCyc]

name::
* McsEngl.klmrCyc'OpenCyc,
* McsEngl.OpenCyc,

Infrsc OpenCyc

description::
* {2017-04-04} https://www.mkbergman.com/2034/fare-thee-well-opencyc/,

name::
* McsEngl.OpenCyc'Infrsc,

ResearchCyc of klmrCyc

description::
"In July 2006, Cycorp released the executable of ResearchCyc 1.0, a version of Cyc aimed at the research community, at no charge. (ResearchCyc was in beta stage of development during all of 2004; a beta version was released in February 2005.) In addition to the taxonomic information contained in OpenCyc, ResearchCyc includes significantly more semantic knowledge (i.e., additional facts and rules of thumb) involving the concepts in its knowledge base; it also includes a large lexicon, English parsing and generation tools, and Java based interfaces for knowledge editing and querying. In addition it contains a system for Ontology-based data integration. As of 2017, regular releases of ResearchCyc continued to appear, with 600 research groups utilizing licenses around the world at no cost for noncommercial research purposes. As of December 2019, ResearchCyc is no longer supported. Cycorp expects to improve and overhaul tools for external developers over the coming years."
[{2021-02-11} https://en.wikipedia.org/wiki/Cyc#ResearchCyc]

name::
* McsEngl.klmrCyc'ResearchCyc,
* McsEngl.ResearchCyc,

inference-engine of lagCycl

description::
"An inference engine is a computer program that tries to derive answers from a knowledge base. The Cyc inference engine performs general logical deduction (including modus ponens, modus tollens, universal quantification and existential quantification).[14] It also performs inductive reasoning, statistical machine learning and symbolic machine learning, and abductive reasoning (but of course sparingly and using the existing knowledge base as a filter and guide)."
[{2021-02-10} https://en.wikipedia.org/wiki/Cyc]
===
"The Cyc inference engine handles modus ponens and modus tollens (contrapositive) inferencing, universal and existential quantification, and mathematical inferencing. It uses contexts called microtheories to optimize inferencing by restricting search domains.
The Cyc knowledge base contains over 1 million assertions. Many approaches commonly taken by other inference engines (such as frames, RETE match, Prolog, etc.) just don’t scale well to KBs of this size. As a result, the Cyc team has been forced to develop other techniques.
Cyc also includes several special-purpose inferencing modules for handling a few specific classes of inference. One set of modules handles reasoning concerning collection membership, subsethood, and disjointness. Another handles equality reasoning. Others implement fast reasoning with #$genlPreds and #$genlAttributes. Still others implement symmetry, transitivity and reflexivity reasoning."
[{2021-02-09} http://web.archive.org/web/20160915203525/http://www.cyc.com/documentation/overview-cyc-inferencing/]

name::
* McsEngl.Cyc-inference-engine!⇒Cycinfrc,
* McsEngl.Cycinfrc,
* McsEngl.lagCycl'inference-engine!⇒Cycinfrc,

license of Cycinfrc

description::
"CycL has a publicly released specification and dozens of HL modules were described in Lenat and Guha's textbook,[8] but the actual Cyc inference engine code, and the full list of 1000+ HL modules, is Cycorp-proprietary.[3]"
[{2021-02-10} https://en.wikipedia.org/wiki/Cyc]

name::
* McsEngl.Cycinfrc'license,

natural-language-system of lagCycl

description::
"Cyc offers sophisticated natural language understanding and generation functionality, using the power of Cyc knowledge to address semantics and pragmatics rather than just syntactic or statistical processing."
[{2021-02-09} http://web.archive.org/web/20161101092302/http://www.cyc.com/natural-language-processing-in-cyc/]

name::
* McsEngl.Cyc-NL!⇒Cycnls,
* McsEngl.klmrCyc'NLS!⇒Cycnls,
* McsEngl.Cycnls,
* McsEngl.lagCycl'natural-language-system!⇒Cycnls,

truth-maintenance-system of klmrCyc

description::
"A truth maintenance system is a mechanism whereby a knowledge based system can keep reasonably consistent and truthful as its knowledge changes. For example, if facts have been added to the KB through inference based on a set of premises, and one of the premises is later removed from the KB, any conclusion that depends on that premise should also be removed from the KB. The CYC® TMS relies on the fact that each assertion has all of its arguments recorded in the datastructure."
[{2021-02-12} http://web.archive.org/web/20121219042246/http://www.cyc.com:80/cycdoc/ref/glossary.html#TMS]
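
· a toy Prolog illustration of the premise-recording idea described above (not Cyc's TMS; all predicate names are invented). Each derived fact records the premises that support it, and retracting a premise retracts the conclusions that depended on it:

:- dynamic fact/1.
:- dynamic support/2.   % support(Conclusion, Premise)

% record a derived fact together with the premises it depends on
derive(F, Premises) :-
    assertz(fact(F)),
    forall(member(P, Premises), assertz(support(F, P))).

% retracting a premise also retracts every conclusion it supports
% (a real TMS would cascade this removal recursively)
retract_premise(P) :-
    retract(fact(P)),
    findall(F, support(F, P), Fs),
    forall(member(F, Fs),
           ( retractall(support(F, _)),
             retractall(fact(F)) )).

% ?- assertz(fact(premise1)), derive(conclusion1, [premise1]).
% ?- retract_premise(premise1).   % conclusion1 is removed as well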

name::
* McsEngl.klmrCyc'TMS'(truth-maintenance-system),
* McsEngl.klmrCyc'truth-maintenance-system,

API of klmrCyc

description::
"The CYC® API is an applications programming interface that allows programmers to build applications based on the CYC® technology."
[{2021-02-12} http://web.archive.org/web/20121219042246/http://www.cyc.com:80/cycdoc/ref/glossary.html#API]

name::
* McsEngl.Cycapi,
* McsEngl.klmrCyc'API!⇒Cycapi,

input of lagCycl

description::
· common sense knowledge.

name::
* McsEngl.lagCycl'input,

output of lagCycl

description::
·

name::
* McsEngl.lagCycl'output,

term of lagCycl

description::
· a-Cyclterm is the-name of a-concept of lagCycl.

name::
* McsEngl.Cyclterm,
* McsEngl.lagCycl'constant!⇒Cyclterm,
* McsEngl.lagCycl'term!⇒Cyclterm,

descriptionLong::
"A term is anything that can be an argument to a predicate or function. Variables are terms. Constants, both atomic constants and reified NATs, are terms. Non-reified NATs are terms. Numbers, strings, or even entire formulas can serve as terms."
[{2021-02-09} http://web.archive.org/web/20121219042246/http://www.cyc.com:80/cycdoc/ref/glossary.html#term]

Cyclterm.SPECIFIC

description::
* http://web.archive.org/web/20121222173652/http://www.cyc.com/cycdoc/vocab/fundamental-vocab.html,

name::
* McsEngl.Cyclterm.specific,

Cyclterm.aggregate

description::
"Cyc's ontology grew to about 100,000 terms during the first decade of the project, to 1994, and as of 2017 contained about 1,500,000 terms."
[{2021-02-10} https://en.wikipedia.org/wiki/Cyc]

name::
* McsEngl.Cyclterm.aggregate,

Cyclterm.FORT

description::
"There are two kinds of FORTs: constants and non-atomic terms (NATs)."
[{2021-02-09} http://web.archive.org/web/20121219042246/http://www.cyc.com:80/cycdoc/ref/glossary.html#FORT]

name::
* McsEngl.Cyclfort,
* McsEngl.Cyclterm.FORT!=first-order-reified-term,
* McsEngl.lagCycl'FORT'(first-order-reified-term)!⇒Cyclfort,

Cyclterm.constant

description::
"Constants are terms introduced into CycL by explicit creation.
Constants such as #$BillM or #$likesAsFriend begin with "#$".
Constants are one type of FORT; the other type are non-atomic terms (NAT's)."
[{2021-02-09} http://web.archive.org/web/20121219042246/http://www.cyc.com:80/cycdoc/ref/glossary.html#constant]

name::
* McsEngl.Cyclconstant,
* McsEngl.Cyclterm.constant!⇒Cyclconstant,
* McsEngl.lagCycl'constant!⇒Cyclconstant,

Cyclterm.NAT

description::
"A term which is neither a variable nor a constant. NATs are terms formed by applying a function to its arguments. Like constants, each NAT denotes some thing in the Cyc KB. Currently, there are two kinds of NAT: Reified NATs, which are a type of FORT, and are implemented with data structures that have indexing allowing all uses of the NAT to be retrieved; and non-reified NATs, which have no such indexing and remain in the form of a "lispy" expresion in the formulas in which they occur."
[{2021-02-09} http://web.archive.org/web/20121219042246/http://www.cyc.com:80/cycdoc/ref/glossary.html#NAT]

name::
* McsEngl.Cyclnat,
* McsEngl.Cyclterm.NAT!⇒Cyclnat,
* McsEngl.lagCycl'NAT'(non-atomic-term)!⇒Cyclnat,

Cyclnat.IBQE

description::
"interval-based quantity expression (IBQE)
In Cyc-10, quantities like "5 dollars", "10 seconds", and "300 kilometers" are expressed using IBQEs.
An IBQE is a special kind of non-atomic term in which the CycL function is an instance of #$UnitOfMeasure. Units of measure are regular (but not reifiable) functions, which take two arguments: a minimum value and a maximum value. The second argument is optional, and if it is omitted, it is assumed to be equal to the first. In other words, an IBQE with just one argument is taken to denote a single value.
(#$massOfObject #$BillM (#$Pound-UnitOfMass 175 185))
(#$heightOfObject #$BillM (#$Inch 74))
The unit of measure may be a NAT, rather than a constant:
(#$massOfObject #$BillM ((#$Kilo #$Gram) 80 84))"
[{2021-02-13} http://web.archive.org/web/20121219042246/http://www.cyc.com:80/cycdoc/ref/glossary.html#IBQE]

name::
* McsEngl.Cyclnat.IBQE,
* McsEngl.lagCycl'IBQE'(interval-based-quantity-expression),
* McsEngl.lagCycl'interval-based-quantity-expression,

Cyclterm.variable

description::
"One type of term. Variables appear in CYC® rules to stand for not-known-in-advance constants that satisfy the formula of the rule. Variables also are used in formulas given to the ASK utility, to stand for the results the asker wishes to find."
[{2021-02-09} http://web.archive.org/web/20121219042246/http://www.cyc.com:80/cycdoc/ref/glossary.html#variable]
"Sentences can also contain variables, strings starting with "?". These sentences are called "rules""
[{2021-02-10} https://en.wikipedia.org/wiki/Cyc]

name::
* McsEngl.Cyclterm.variable!⇒Cyclvar,
* McsEngl.Cyclvar,
* McsEngl.lagCycl'variable!⇒Cyclvar,

Cyclterm.relation

description::
"In Cyc® "relation" is informally used to refer to predicates and functions. In the math or database worlds, a relation is a set of ordered n-tuples. One might talk about the relation "Father", whose elements include (Katherine, Lloyd), (Karen, Wes), (John, Bob), and so on, where the first item in each element is a person and the second is that person's biological father. CycL relations are also ordered n-tuples. The notation we use is different from that above and depends on whether the relation to be represented by a CycL function or a CycL predicate. In both cases, we reify a constant to stand for the relation. In our example, we might call the constant #$FatherFn -- an uppercase name, because the relation is a function (people have only one biological father). We'd write, for example
(#$FatherFn #$Katherine)
to refer to Lloyd, since Katherine and Lloyd are in the "Father" relation.
CycL predicates are the other main sort of relation-denoting constant in CYC®. The latter are used to represent relations which are not functions (not single-valued). The relation denoted by "parents" should be represented with a CycL predicate. For example, we'd write
(#$parents #$Katherine #$Lloyd)
(#$parents #$Katherine #$Bonnie)
to say that (Katherine, Lloyd) and (Katherine, Bonnie) are in the parents relation. The arity of CycL predicates is the same as the arity of the represented relation, and the arity of CycL functions is one less than the arity of the relations they represent."
[{2021-02-09} http://web.archive.org/web/20121219042246/http://www.cyc.com:80/cycdoc/ref/glossary.html#relation]

name::
* McsEngl.Cyclrelation,
* McsEngl.Cyclterm.relation!⇒Cyclrelation,
* McsEngl.lagCycl'relation!⇒Cyclrelation,
* McsEngl.relation-denoting-term@lagCycl!⇒Cyclrelation,

arity of Cyclrelation

description::
"The arity of a CycL predicate or function is the number of arguments it takes.
* Unary predicates and functions take just 1 argument.
* Binary predicates and functions take 2 arguments.
* Ternary predicates and functions take 3 arguments.
* Quaternary predicates and functions take 4 arguments.
* Quintary predicates and functions take 5 arguments.
No CycL predicate or function currently takes more than 5 arguments."
[{2021-02-12} http://web.archive.org/web/20121219042246/http://www.cyc.com:80/cycdoc/ref/glossary.html#arity]

name::
* McsEngl.Cyclrelation'arity,

Cyclrelation.function

description::
"A function (in the mathematical sense) is a relation such that for each thing in its domain (the universe of things it can be applied to), there is a single thing in its range (the universe of results it can have) such that the relation holds between them. In CycL, functions are denoted by certain constants. These constants are referred to as "function-denoting constants, "CycL functions," or sometimes just "functions."
CycL functions can be applied to arguments to form non-atomic terms, which can serve as arguments to a predicate just as other terms can. There are more details about CycL functions in the Functions section of "The Syntax of CycL"."
[{2021-02-12} http://web.archive.org/web/20121219042246/http://www.cyc.com:80/cycdoc/ref/glossary.html#function]

name::
* McsEngl.Cyclfunction,
* McsEngl.Cyclrelation.function!⇒Cyclfunction,
* McsEngl.function@lagCycl!⇒Cyclfunction,
* McsEngl.lagCycl'function!⇒Cyclfunction,

Cyclrelation.truth-function

description::
"Truth functions, which can apply to one or more other concepts and return either true or false. For example, #$siblings is the sibling relationship, true if the two arguments are siblings.
By convention, truth function constants start with a lower-case letter.
Truth functions may be broken down into logical connectives (such as #$and, #$or, #$not, #$implies), quantifiers (#$forAll, #$thereExists, etc.) and predicates."
[{2021-02-10} https://en.wikipedia.org/wiki/Cyc]

name::
* McsEngl.Cyclrelation.truth-function,
* McsEngl.Cyclterm.truth-function,

Cyclrelation.predicate (link)

Cyclterm.predicate

description::
"Predicates are represented in CycL by constants that are sometimes referred to as "CycL predicates" or, more casually, as "predicates." Like CycL functions (the other kind of relation-denoting constants), CycL predicates can be used as the leading term (after the initial parenthesis) in CycL expressions. When a CycL predicate is applied to the right number and type of arguments, the expression formed is a CycL formula--a formula expressing a proposition about something. In contrast, expressions formed with functions as arg 0 (in the leading position) are terms and so do not express propositions.
By convention, constants that denote predicates begin with lowercase letters."
[{2021-02-09} http://web.archive.org/web/20121219042246/http://www.cyc.com:80/cycdoc/ref/glossary.html#predicate]

name::
* McsEngl.Cyclpredicate,
* McsEngl.Cyclrelation.predicate!⇒Cyclpredicate,
* McsEngl.lagCycl'predicate!⇒Cyclpredicate,
* McsEngl.predicate@lagCycl!⇒Cyclpredicate,

Cyclpredicate.slot

description::
"A slot is simply a CycL binary predicate such as #$startingDate.
Historically, the name derives from the days when CYC® was a frame-based system.
People will sometimes speak of a slot on a certain constant, or of a constant having slots.
One might say, for example, that the constant #$BillM has a #$likesAsFriend slot, or that #$likesAsFriend is a slot on #$BillM.
This signifies only that there is some GAF in the KB having #$likesAsFriend as its predicate and #$BillM as its first argument."
[{2021-02-13} http://web.archive.org/web/20121219042246/http://www.cyc.com:80/cycdoc/ref/glossary.html#slot]

name::
* McsEngl.Cyclpredicate.slot,
* McsEngl.lagCycl'slot,
* McsEngl.slot@lagCycl,

Cyclpredicate.#$isa

description::
"Two important binary predicates are #$isa and #$genls. The first one describes that one item is an instance of some collection, the second one that one collection is a subcollection of another one. Facts about concepts are asserted using certain CycL sentences. Predicates are written before their arguments, in parentheses:
(#$isa #$BillClinton #$UnitedStatesPresident)
"Bill Clinton belongs to the collection of U.S. presidents.""
[{2021-02-10} https://en.wikipedia.org/wiki/Cyc]

name::
* McsEngl.Cyclpredicate.#$isa,
* McsEngl.lagCycl'#$isa,

Cyclpredicate.#$genls

description::
· the-generals of an-entity.
"Two important binary predicates are #$isa and #$genls. The first one describes that one item is an instance of some collection, the second one that one collection is a subcollection of another one. Facts about concepts are asserted using certain CycL sentences. Predicates are written before their arguments, in parentheses:
...
(#$genls #$Tree-ThePlant #$Plant)
"All trees are plants.""
[{2021-02-10} https://en.wikipedia.org/wiki/Cyc]
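
· a rough Prolog analogue of the #$isa / #$genls distinction (names hypothetical): isa relates an instance to a collection, genls relates a collection to a more general collection, and instancehood propagates up the genls hierarchy:

genls(tree_the_plant, plant).
genls(plant, organism).
isa(oak1, tree_the_plant).

genls_closure(C, D) :- genls(C, D).
genls_closure(C, D) :- genls(C, E), genls_closure(E, D).

% an instance of a collection is an instance of all its generalizations
isa_closure(X, C) :- isa(X, C).
isa_closure(X, C) :- isa(X, D), genls_closure(D, C).

% ?- isa_closure(oak1, organism).   % true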

name::
* McsEngl.Cyclpredicate.#$genls,
* McsEngl.lagCycl'#$genls,

Cyclpredicate.#$spec

description::
· the-specifics of an-entity.
""spec" means "subset". In Cyc-8, there was a predicate called #$spec, which was the inverse of #$genls. The predicate became obsolete in Cyc-9, but Cyclists still use it all the time when talking about the CYC® ontology. For instance, one might say, "#$Deity is a spec of #$Agent.""
[{2021-02-11} http://web.archive.org/web/20121219042246/http://www.cyc.com:80/cycdoc/ref/glossary.html#strength]

name::
* McsEngl.Cyclpredicate.#$spec,
* McsEngl.lagCycl'#$spec,

Cyclterm.logical-connective

description::
"Truth functions may be broken down into logical connectives (such as #$and, #$or, #$not, #$implies), quantifiers (#$forAll, #$thereExists, etc.) and predicates."
[{2021-02-10} https://en.wikipedia.org/wiki/Cyc]

name::
* McsEngl.Cycllogcon,
* McsEngl.Cyclterm.logical-connective!⇒Cycllogcon,
* McsEngl.lagCycl'logical-connective!⇒Cycllogcon,

Cycllogcon.#$and

description::
·

name::
* McsEngl.Cycllogcon.#$and,
* McsEngl.lagCycl'#$and,

Cycllogcon.#$or

description::
·

name::
* McsEngl.Cycllogcon.#$or,
* McsEngl.lagCycl'#$or,

Cycllogcon.#$not

description::
·

name::
* McsEngl.Cycllogcon.#$not,
* McsEngl.lagCycl'#$not,

Cycllogcon.#$implies

description::
"Newcomers to formal logic may misinterpret #$implies as implying a causal relationship.
But, strictly speaking, a #$implies assertion says only that either the first argument is false, or the second argument is true."
[{1998-03-01} http://www.cyc.com/cyc-2-1/ref/cycl-syntax.html,{1997-08-08}]

name::
* McsEngl.Cycllogcon.#$implies,
* McsEngl.lagCycl'#$implies,

Cyclterm.quantifier

description::
"A quantifier is a special type of CYC® constant used in quantification. CycL contains five quantifiers: #$forAll, #$thereExists, #$thereExistAtLeast, #$thereExistAtMost, and #$thereExistExactly. Each quantifier introduces a new variable."
[{2021-02-09} http://web.archive.org/web/20121219042246/http://www.cyc.com:80/cycdoc/ref/glossary.html#quantifier]

name::
* McsEngl.Cyclquantifier,
* McsEngl.Cyclterm.quantifier!⇒Cyclquantifier,
* McsEngl.lagCycl'quantifier!⇒Cyclquantifier,
* McsEngl.quantifier@lagCycl!⇒Cyclquantifier,

Cyclquantifier.#$forAll

description::
·

name::
* McsEngl.Cyclquantifier.#$forAll,
* McsEngl.lagCyclquan'#$forAll,

Cyclquantifier.#$thereExists

description::
·

name::
* McsEngl.Cyclquantifier.#$thereExists,
* McsEngl.lagCyclquan'#$thereExists,

Cyclquantifier.#$thereExistAtLeast

description::
·

name::
* McsEngl.Cyclquantifier.#$thereExistAtLeast,
* McsEngl.lagCyclquan'#$thereExistAtLeast,

Cyclquantifier.#$thereExistAtMost

description::
·

name::
* McsEngl.Cyclquantifier.#$thereExistAtMost,
* McsEngl.lagCyclquan'#$thereExistAtMost,

Cyclquantifier.#$thereExistExactly

description::
·

name::
* McsEngl.Cyclquantifier.#$thereExistExactly,
* McsEngl.lagCyclquan'#$thereExistExactly,

expression of lagCycl

description::
"In the most general sense, an expression is a sequence of symbols. The phrase CycL expression refers to expressions that follow the syntax rules of CycL.
Some CycL expressions are propositions or statements about the world; these are called CycL formulas.
Other CycL expressions form terms that stand for concepts; these are called non-atomic terms (NATs)."
[{2021-02-12} http://web.archive.org/web/20121219042246/http://www.cyc.com:80/cycdoc/ref/glossary.html#expression]

name::
* McsEngl.Cyclexpression,
* McsEngl.lagCycl'expression!⇒Cyclexpression,

argumentNo of Cyclexpression

description::
· function, predicate, logical-connective, quantifier.

name::
* McsEngl.Cyclexpression'argumentNo,

argument of Cyclexpression

description::
"The term "argument" is used in two different ways by Cyclists:
* Most commonly, the term "argument" is used to refer to any CycL term which follows a predicate, a function, a logical connective, or a quantifier in a Cycl expression. Thus, in the CycL formula (#$likesAsFriend #$BillM #$Goolsbey), #$likesAsFriend is a predicate, and #$BillM and #$Goolsbey are the first and second arguments to that predicate.
* The term "argument" is also used to refer to a reason why an assertion is present in the KB with its truth value. Arguments are of two main types: the first type of argument is essentially a statement that the formula was explicitly "asserted" (or "local"). The second type of argument is the group of assertions through which the assertion was "inferred" (in which case the assertion is called "remote"). In this case there is a chain of inference which supports the truth value of the supported assertion. Such arguments are also called deductions."
[{2021-02-12} http://web.archive.org/web/20121219042246/http://www.cyc.com:80/cycdoc/ref/glossary.html#argument]

name::
* McsEngl.Cyclexpression'argument,
* McsEngl.lagCycl'argument-of-expression,

Cyclexpression.SPECIFIC

description::
* formula,
* non-atomic-term,

name::
* McsEngl.Cyclexpression.specific,

Cyclexpression.literal

description::
"Most generally, a literal is a Cyc expression of the form (predicate arg1 [arg2 ... argn]), or its negation, where the number of arguments to the predicate can be any positive integer (but in Cyc, not more than 5), and the arguments can be any kind of term. For example,
(#$likesAsFriend #$Goolsbey #$Brandy)
(#$eatsWillingly #$BillM (#$FruitFn ?X))
(#$isa ?CAR #$Automobile)
(#$performedBy ?ACT ?ORG)
(#$not (#$performedBy ?ACT ?ORG))
Because it includes negated formulas, the class of literals is a superset of the class of atomic formulas.
Usually, "literal" is used to refer to the atomic formulas that make up the internal representation of any assertion's formula. In Cyc-10, formulas that are asserted into the KB are converted into conjunctive normal form; the formula of each single assertion is internally represented as a disjunction of literals. Those literals that would be negated in conjunctive normal form are called negative literals; the rest are called positive literals. GAFs are the subset of literals in which there are no variables. [{1998-02-27} http://www.cyc.com/cyc-2-1/ref/glossary.html,{1997-08-08}]

name::
* McsEngl.Cyclexpression.literal!⇒Cyclliteral,
* McsEngl.Cyclliteral,
* McsEngl.literal@lagCycl!⇒Cyclliteral,

assertion of lagCycl

description::
"The assertion is the fundamental unit of knowledge in the Cyc system.
Every assertion consists of:
- a CycL formula which states the content of the assertion
- a truth value
- a microtheory of which the assertion is part
- a direction
- a justification
The set of assertions includes both rules and GAFs."
[{1998-02-27} http://www.cyc.com/cyc-2-1/ref/glossary.html,{1997-08-08}]

name::
* McsEngl.Cyclassertion,
* McsEngl.lagCycl'assertion!⇒Cyclassertion,

formula of Cyclassertion

description::
"Every formula has the structure of a parenthesized list. I.e., it starts with a left parenthesis, then therer follow a series of objects which are commonly designated ARG0, ARG1, ARG2, etc., and finally there is a matching right parenthesis.
The object in the ARG0 position may be a predicate, a logical connective, or a quantifier.
The remaining arguments may be atomic constants, non-atomic terms, variables, numbers, English strings delimited by double quotes ("), or other formulas."
[{1998-03-01} http://www.cyc.com/cyc-2-1/ref/cycl-syntax.html,{1997-08-08}]

name::
* McsEngl.Cyclassertion'formula!⇒Cyclformula,
* McsEngl.Cyclformula,
* McsEngl.Cyclsentence!⇒Cyclformula,
* McsEngl.lagCycl'formula!⇒Cyclformula,
* McsEngl.lagCycl'sentence!⇒Cyclformula,
* McsEngl.proposition@lagCycl!⇒Cyclformula,
* McsEngl.statement@lagCycl!⇒Cyclformula,
===
"Some CycL expressions are propositions or statements about the world; these are called CycL formulas."
[{2021-02-13} http://web.archive.org/web/20121219042246/http://www.cyc.com:80/cycdoc/ref/glossary.html#expression]

conjunctive-normal-form of Cyclformula

description::
"In previous versions of Cyc, formulas are stored and reasoned with in the same form in which they appear in the KB browser, e.g.
(implies (and (isa ?afp AdultFemalePerson) (residesInRegion ?afp Guam)) (and (acquaintedWith Zippy ?afp) (likesAsFriend Zippy ?afp)))
In Cyc-10, formulas asserted to the KB are stored internally, and reasoned with, in conjunctive normal form (CNF). When converted to CNF, a formula gets rewritten as a conjunction of disjunctions of negated and non-negated literals. So, for example, the formula above would be written in CNF as:
(and (or (not (isa ?afp AdultFemalePerson)) (not (residesInRegion ?afp Guam)) (acquaintedWith Zippy ?afp)) (or (not (isa ?afp AdultFemalePerson)) (not (residesInRegion ?afp Guam)) (likesAsFriend Zippy ?afp)))
Each of the conjuncts would become a separate assertion.
Converting to CNF is part of the job of the Cyc-10 canonicalizer."
[{2021-02-09} http://web.archive.org/web/20150907111619/http://www.cyc.com/documentation/internal-representation-assertions/]
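
· a minimal Python sketch (using sympy, not Cyc's canonicalizer) of the same rewrite, with propositional symbols standing in for the four literals of the example rule:

from sympy import symbols
from sympy.logic.boolalg import Implies, And, to_cnf

isa, resides, acq, likes = symbols("isa resides acq likes")
rule = Implies(And(isa, resides), And(acq, likes))
print(to_cnf(rule))
# roughly: (acq | ~isa | ~resides) & (likes | ~isa | ~resides)
# each conjunct would become a separate assertion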

name::
* McsEngl.Cyclassertion'CNF-(conjunctive-normal-form),
* McsEngl.lagCycl'CNF-(conjunctive-normal-form),
* McsEngl.lagCycl'conjunctive-normal-form,

Cyclformula.well-formed

description::
"A formula in CycL is well-formed if it conforms to the syntax of CycL and passes all the restrictions on arity and argument types of the relations that are used in it."
[{2021-02-12} http://web.archive.org/web/20121219042246/http://www.cyc.com:80/cycdoc/ref/glossary.html#well-formed]
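
· a minimal Python sketch (my illustration) of the arity half of the well-formedness test; the arity table is an assumption:

ARITY = {"#$likesAsFriend": 2, "#$isa": 2, "#$performedBy": 2, "#$not": 1}

def arity_ok(formula):
    # formula as a list: [predicate, arg1, ..., argn]
    pred, args = formula[0], formula[1:]
    return ARITY.get(pred) == len(args)

print(arity_ok(["#$isa", "?CAR", "#$Automobile"]))   # True
print(arity_ok(["#$isa", "?CAR"]))                   # False: wrong arity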

name::
* McsEngl.Cyclformula.well-formed,

truth-value of Cyclassertion

description::
"In the Cyc® KB, a truth value is a value attached to an assertion which indicates its degree of truth. There are five possible truth values:
* monotonically true (100): True always and under all conditions. Normally reserved for things that are true by definition.
* default true (T): Assumed true, but subject to exceptions. Most rules in the KB are default true.
* unknown (~): Not known to be true, and not known to be false.
* default false (F): Assumed false, but subject to exceptions.
* monotonically false (0): False always and under all conditions.
Of these, the most commonly used are the first two.
Each of these truth values is represented by a different colored ball in the KB Browser. For details on the icons used in the Browser, see the Key For Icons In the Browser.
"Truth value" is a heuristic level property; it is a combination of what is 2 separate properties at the epistemological level: strength (:default or :monotonic) and negation status (whether or not a formula begins with #$not).
If you are unsure whether to make an assertion monotonically true or default true, go with the latter."
[{2021-02-11} http://web.archive.org/web/20121219042246/http://www.cyc.com:80/cycdoc/ref/glossary.html#tv]
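
· a minimal Python sketch (my illustration) of the decomposition the quote mentions: a heuristic-level truth value as epistemological-level strength plus negation status; "unknown" falls outside this two-property mapping:

def truth_value(strength, negated):
    assert strength in (":default", ":monotonic")
    if strength == ":monotonic":
        return "monotonically false" if negated else "monotonically true"
    return "default false" if negated else "default true"

print(truth_value(":default", False))    # default true (most rules)
print(truth_value(":monotonic", True))   # monotonically false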

name::
* McsEngl.Cyclassertion'truth-value,

microtheory of Cyclassertion

description::
"a microtheory of which the assertion is part"

name::
* McsEngl.Cyclassertion'microtheory,

direction of Cyclassertion

description::
"Direction is a value attached to every assertion which determines whether inferencing involving the assertion is done at assert time or at ask time. There are three possible values for direction: :forward (inferencing done at assert time), :backward (inferencing done at ask time), and :code (HL module performs reasoning, assertion not used in regular inference). Most interfaces enforce the following default: GAFs have direction :forward and rules have direction :backward. Direction is new in Cyc-10; it replaces access level."
[{2021-02-12} http://web.archive.org/web/20121219042246/http://www.cyc.com:80/cycdoc/ref/glossary.html#direction]
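
· a minimal Python sketch (my illustration) of the default the quote states: GAFs (no variables) get :forward, rules get :backward:

def default_direction(formula_tokens):
    is_rule = any(t.startswith("?") for t in formula_tokens)
    return ":backward" if is_rule else ":forward"

print(default_direction(["#$likesAsFriend", "#$Goolsbey", "#$Brandy"]))  # :forward
print(default_direction(["#$isa", "?CAR", "#$Automobile"]))              # :backward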

name::
* McsEngl.Cyclassertion'direction,

descriptionLong::
"access level (AL)
(obs.) In Cyc-9, the access level of an assertion determines whether inferencing involving that assertion occurs at assert time or at ask time. Access level is replaced in Cyc-10 by direction.
Only two access level values were commonly used in Cyc-9: 0, which corresponds to the Cyc-10 direction forward, and 4, which corresponds to the Cyc-10 direction backward."
[{2021-02-12} http://web.archive.org/web/20121219042246/http://www.cyc.com:80/cycdoc/ref/glossary.html#access%20level]

justification of Cyclassertion

description::
·

name::
* McsEngl.Cyclassertion'justification,

argument of Cyclassertion

description::
"The term "argument" is used in two different ways by Cyclists:
* Most commonly, the term "argument" is used to refer to any CycL term which follows a predicate, a function, a logical connective, or a quantifier in a Cycl expression. Thus, in the CycL formula (#$likesAsFriend #$BillM #$Goolsbey), #$likesAsFriend is a predicate, and #$BillM and #$Goolsbey are the first and second arguments to that predicate.
* The term "argument" is also used to refer to a reason why an assertion is present in the KB with its truth value. Arguments are of two main types:
- the first type of argument is essentially a statement that the formula was explicitly "asserted" (or "local").
- The second type of argument is the group of assertions through which the assertion was "inferred" (in which case the assertion is called "remote"). In this case there is a chain of inference which supports the truth value of the supported assertion. Such arguments are also called deductions."
[{2021-02-12} http://web.archive.org/web/20121219042246/http://www.cyc.com:80/cycdoc/ref/glossary.html#argument]

name::
* McsEngl.Cyclassertion'argument,
* McsEngl.lagCycl'argument-of-assertion,

Cyclassertion.complexNo

description::
· assertion without logical-connective.

name::
* McsEngl.Cyclassertion.complexNo,

Cyclassertion.complexNo.variableNo (Gaf)

description::
· assertion without logical-connectives and without variables.

name::
* McsEngl.Cyclassertion.GAF!⇒Cyclgaf,
* McsEngl.Cyclassertion.complexNo.variableNo!⇒Cyclgaf,
* McsEngl.Cyclassertion.ground-atomic-formula!⇒Cyclgaf,
* McsEngl.Cyclgaf,
* McsEngl.lagCycl'ground-atomic-formula!⇒Cyclgaf,

descriptionLong::
"gaf: An acronym for Ground Atomic Formula which is an assertion without any variables. Gafs form the great majority of KB assertions. The Arg0 term in a gaf is the predicate."
[http://www.cyc.com/cyc-api.html,v0.3-{1999-03-11}]
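
· a minimal Python sketch (my illustration) of the gaf test: no variables anywhere in the formula, and the Arg0 term is the predicate:

def is_gaf(formula_tokens):
    return not any(t.startswith("?") for t in formula_tokens)

gaf = ["#$likesAsFriend", "#$Goolsbey", "#$Brandy"]
print(is_gaf(gaf), gaf[0])                        # True #$likesAsFriend
print(is_gaf(["#$isa", "?CAR", "#$Automobile"]))  # False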

Cyclassertion.complexNo.variable

description::
·

name::
* McsEngl.Cyclassertion.complexNo.variable,

Cyclassertion.complex

description::
"Complex sentences can be built up out of atomic sentences or other complex sentences by using logical connectives, which are special constants analogous to the logical operators of formal logic. The most important logical connectives in CycL are #$not, #$and, #$or, and #$implies."
[{2007-09-21} http://www.cyc.com/doc/handbook/oe/02-the-syntax-of-cycl.html]

name::
* McsEngl.Cyclassertion.complex,

Cyclassertion.local

description::
"A local assertion is one which was added to the KB from an outside source (most commonly, a human KEer), rather than inferred from other assertions.
Local assertions have at least one local justification among their supports."
[{1998-02-27} http://www.cyc.com/cyc-2-1/ref/glossary.html,{1997-08-08}]

name::
* McsEngl.Cyclassertion.axiom,
* McsEngl.Cyclassertion.local,
* McsEngl.lagCycl'axiom,

descriptionLong::
"Strictly speaking, an axiom is one of a set of fundamental formulas that one starts with to prove theorems by deduction.
In CYC®, the axioms are those formulas that have been locally asserted into the CYC® KB. CYC® axioms are well-formed CYC® formulas, since the system won't let you add formulas to CYC® that are not well-formed. However, not all well-formed CYC® formulas are axioms, since not all of them are actually in the KB. And some of the formulas in the KB are not, strictly speaking, axioms, since they were added to the KB via inference, instead of being locally asserted.
In informal usage, though, Cyclists don't always adhere to the strict meaning of axiom, and may refer to a formula they are considering adding to the KB or have recently removed from the KB as an axiom.
Axiom is also the name of one of the internal KB data structure types."
[{2021-02-13} http://web.archive.org/web/20121219042246/http://www.cyc.com:80/cycdoc/ref/glossary.html#axiom]

Cyclassertion.localNo

description::
"A local assertion is one which was added to the KB from an outside source (most commonly, a human KEer), rather than inferred from other assertions.
Local assertions have at least one local justification among their supports."
[{1998-02-27} http://www.cyc.com/cyc-2-1/ref/glossary.html,{1997-08-08}]

name::
* McsEngl.Cyclassertion.localNo,
* McsEngl.Cyclassertion.inferred,
* McsEngl.Cyclassertion.remote,

Cyclassertion.quantification

description::
· an-assertion with a-quantifier.

name::
* McsEngl.Cyclassertion.quantification!⇒Cyclquantification,
* McsEngl.Cyclquantification,

Cyclquantification.existential

description::
"Quantifying with #$thereExists. For example, the following existentially quantified assertion
(#$thereExists ?SING
 (#$and
   (#$isa ?SING #$HumanAdult)
   (#$maritalStatus ?SING #$Single))),
states that there is some unmarried human adult."
[{2021-02-13} http://web.archive.org/web/20121219042246/http://www.cyc.com:80/cycdoc/ref/glossary.html#existential%20quantification]

name::
* McsEngl.Cyclquantification.existential,
* McsEngl.lagCycl'existential-quantification,

Cyclquantification.universal

description::
"Quantifying with #$forAll. In CycL formulas, variables which are not explicitly bound by a quantifier are assumed to be universally quantified. For example,
(#$forAll ?LEAF (#$implies
 (#$isa ?LEAF Leaf)
 (#$colorOfObject ?LEAF GreenColor)))
means that every leaf is green. But in CycL, this sentence means the same as the following sentence:
(#$implies
 (#$isa ?LEAF Leaf)
 (#$colorOfObject ?LEAF GreenColor))"
[{2021-02-13} http://web.archive.org/web/20121219042246/http://www.cyc.com:80/cycdoc/ref/glossary.html#universal-quantification]

name::
* McsEngl.Cyclquantification.universal,
* McsEngl.lagCycl'universal-quantification,

Cyclassertion.fact

description::
"The purpose of the ASSERT operation is to add a new assertion--a "fact" or a "rule"--to the KB."
[{2021-02-11} http://web.archive.org/web/20121219042246/http://www.cyc.com:80/cycdoc/ref/glossary.html#ASSERT]

name::
* McsEngl.Cyclassertion.fact!⇒Cyclfact,
* McsEngl.Cyclfact,

Cyclassertion.rule

description::
"Sentences can also contain variables, strings starting with "?". These sentences are called "rules". One important rule asserted about the #$isa predicate reads:

(#$implies
  (#$and
    (#$isa ?OBJ ?SUBSET)
    (#$genls ?SUBSET ?SUPERSET))
  (#$isa ?OBJ ?SUPERSET))
"If OBJ is an instance of the collection SUBSET and SUBSET is a subcollection of SUPERSET, then OBJ is an instance of the collection SUPERSET"."
[{2021-02-10} https://en.wikipedia.org/wiki/Cyc]

name::
* McsEngl.Cyclrule,
* McsEngl.Cyclassertion.rule!⇒Cyclrule,
* McsEngl.lagCycl'rule!⇒Cyclrule,

microtheory of lagCycl

description::
"The knowledge base is divided into microtheories (Mt), collections of concepts and facts typically pertaining to one particular realm of knowledge.
Unlike the knowledge base as a whole, each microtheory must be free from monotonic contradictions.
Each microtheory is a first-class object in the Cyc ontology; it has a name that is a regular constant; microtheory constants contain the string "Mt" by convention.
An example is #$MathMt, the microtheory containing mathematical knowledge.
The microtheories can inherit from each other and are organized in a hierarchy: one specialization of #$MathMt is #$GeometryGMt, the microtheory about geometry."
[{2021-02-10} https://en.wikipedia.org/wiki/Cyc]

name::
* McsEngl.Cyclmt,
* McsEngl.Cyclmt!=lagCycl'microtheory,
* McsEngl.context@lagCycl!⇒Cyclmt,
* McsEngl.lagCycl'context!⇒Cyclmt,
* McsEngl.lagCycl'microtheory!⇒Cyclmt,
* McsEngl.microtheory@lagCycl!⇒Cyclmt,
===
"context: Often used interchangeably with microtheory."
[{2021-02-12} http://web.archive.org/web/20121219042246/http://www.cyc.com:80/cycdoc/ref/glossary.html#context]

Cyclmt.#$genlMt

description::
"One microtheory is a #$genlMt of another if all its assertions are true in the other microtheory.
#$BaseKB is a #$genlMt of all microtheories."
[{2021-02-13} http://web.archive.org/web/20121219042246/http://www.cyc.com:80/cycdoc/ref/glossary.html#genlMt]
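
· a minimal Python sketch (my illustration) of #$genlMt visibility: what is true in a genlMt is also true in its specializations; the mapping follows the #$MathMt/#$GeometryGMt example above:

genl_mt = {"#$GeometryGMt": ["#$MathMt"], "#$MathMt": ["#$BaseKB"]}

def visible_mts(mt):
    # the Mt itself plus every Mt reachable through genlMt links
    seen, stack = set(), [mt]
    while stack:
        m = stack.pop()
        if m not in seen:
            seen.add(m)
            stack.extend(genl_mt.get(m, []))
    return seen

print(visible_mts("#$GeometryGMt"))
# contains #$GeometryGMt, #$MathMt, and #$BaseKB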

name::
* McsEngl.Cyclmt.#$genlMt-of-another,
* McsEngl.lagCycl'#$genlMt,

Cyclmt.#$BaseKB

description::
"One microtheory is a #$genlMt of another if all its assertions are true in the other microtheory.
#$BaseKB is a #$genlMt of all microtheories."
[{2021-02-13} http://web.archive.org/web/20121219042246/http://www.cyc.com:80/cycdoc/ref/glossary.html#genlMt]

name::
* McsEngl.Cyclmt.#$BaseKB,
* McsEngl.lagCycl'#$BaseKB,

knowledge-base of lagCycl

description::
"CYC’S KNOWLEDGE BASE:
* 40,000+ Predicates (to express reusable relationships)
* 1,500,000+ Concepts
* 25,000,000+ Explicitly-stated axioms
* Trillions+ Implicit axioms, real-world knowledge"
[{2023-11-14 retrieved} https://cyc.com/platform/]

"The CYC® KB is the repository of Cyc's knowledge. It consists of a large number of FORTs and an even larger number of assertions involving those constants."
[{2021-02-09} http://web.archive.org/web/20121219042246/http://www.cyc.com:80/cycdoc/ref/glossary.html#KB]

name::
* McsEngl.Knlgbase.Cyc!⇒Cyclkb,
* McsEngl.klmrCyc'knowledge-base!⇒Cyclkb,
* McsEngl.Cyclkb,
* McsEngl.lagCycl'knowledge-base!⇒Cyclkb,
* McsEngl.lagCycl'ontology!⇒Cyclkb,
* McsEngl.ontology@lagCycl!⇒Cyclkb,

descriptionLong::
"In philosophy, ontology is the study of being. In knowledge-based systems, an ontology is that part of the system which specifies what things exist and what is true about them. Cyc's ontology is essentially its whole knowledge base. You may hear people refer to their "ontology of devices" or their "temporal ontology". What they are talking about is those parts of their knowledge base (the constants and assertions) that concern devices or time."
[{2021-02-09} http://web.archive.org/web/20121219042246/http://www.cyc.com:80/cycdoc/ref/glossary.html#ontology]

SubL of lagCycl

description::
"SubL is a computer language built by members of Cycorp.
SubL was written to support the Cyc® application, allowing it to run both under Lisp environments and as a C application generated by a SubL-to-C translator"
[{2021-02-09} http://web.archive.org/web/20160822085707/http://www.cyc.com/documentation/subl-reference/]
===
"SubL stands for SubLanguage, and is a computer language created by the CYC® team. It is designed to be easy to translate into C. The entire CYC® application is written in SubL.
Sublisp
An archaic term for SubL."
[{2021-02-09} http://web.archive.org/web/20121219042246/http://www.cyc.com:80/cycdoc/ref/glossary.html#SubL]

name::
* McsEngl.SubL!⇒lagSubl,
* McsEngl.SubLanguage!⇒lagSubl,
* McsEngl.Sublisp!⇒lagSubl,
* McsEngl.lagCycl'SubL!⇒lagSubl,
* McsEngl.lagSubl,

human of lagCycl

description::
· any human related to Cyc-project.

name::
* McsEngl.Cychmn,
* McsEngl.lagCycl'human!⇒Cychmn,

Cychmn.Cyclist

description::
"A Cyclist is a registered modifier of the CYC® KB. Everyone who works on the CYC® project is a Cyclist, as are individuals working with CYC® at participating organizations."
[{2021-02-09} http://web.archive.org/web/20121219042246/http://www.cyc.com:80/cycdoc/ref/glossary.html#Cyclist]

name::
* McsEngl.Cychmn.Cyclist,
* McsEngl.Cyclist,

Cychmn.knowledge-enterer

description::
"Short for Knowledge Enterer. One who writes CycL assertions for inclusion in the CYC® KB."
[{2021-02-09} http://web.archive.org/web/20121219042246/http://www.cyc.com:80/cycdoc/ref/glossary.html#KEer]

name::
* McsEngl.klmrCyc'KEer!⇒Cyckger,
* McsEngl.klmrCyc'knowledge-enterer!⇒Cyckger,
* McsEngl.Cyckger,
* McsEngl.Cychmn.knowledge-enterer!⇒Cyckger,

Cychmn.Lenat.Doug

description::
"Doug Lenat (President and CEO)
Dr. Douglas B. Lenat is one of the world’s leading computer scientists, and is the founder of the Cyc project and president of Cycorp.
Dr. Lenat has been a Professor of Computer Science at Carnegie-Mellon University and Stanford University and has received numerous honors including:
* Awarded the bi-annual IJCAI Computers and Thought Award which is the highest honor in Artificial Intelligence;
* Named the first Fellow of the AAAI, the Association for the Advancement of Artificial Intelligence.
* Fellow of the American Academy for the Advancement of Science (AAAS)
He is a prolific author, whose hundreds of publications include the following books:
* Knowledge Based Systems in Artificial Intelligence (1982, McGraw-Hill)
* Building Expert Systems (1983, Addison-Wesley)
* Knowledge Representation (1988, Addison-Wesley)
* Building Large Knowledge Based Systems (1989, Addison-Wesley)
His 1976 Stanford thesis earned him the bi-annual IJCAI Computers and Thought Award in 1977.
Dr. Lenat received his Ph.D. in computer science from Stanford University and his BA and MS in Mathematics from the University of Pennsylvania."
[{2021-02-09} http://web.archive.org/web/20161013223411/http://www.cyc.com:80/about/team/]

name::
* McsEngl.Cychmn.Lenat.Doug!⇒hmnLenatDoug,
* McsEngl.Doug-Lenat!⇒hmnLenatDoug,
* McsEngl.Lenat.Doug!⇒hmnLenatDoug,
* McsEngl.hmnLenatDoug,

Cychmn.Shepherd.Mary

description::
"Senior Vice President of Operations
Ms. Mary Shepherd is the Senior VP for Operations. Mary has been with the Cyc project since its inception at MCC in 1984 and is responsible for Cycorp’s financial, personnel, facilities, and operations functions."
[{2021-02-09} http://web.archive.org/web/20161013223411/http://www.cyc.com:80/about/team/]

name::
* McsEngl.Cychmn.Shepherd.Mary!⇒hmnShepherdMary,
* McsEngl.Shepherd.Mary!⇒hmnShepherdMary,
* McsEngl.hmnShepherdMary,

organization of lagCycl

description::
·

name::
* McsEngl.lagCycl'organization,

Cyclozn.Cycorp

description::
·

name::
* McsEngl.Cyclozn.Cycorp!⇒oznCycorp,
* McsEngl.Cycorp!⇒oznCycorp,
* McsEngl.oznCycorp,

Cyclozn.MCC

description::
"Microelectronics and Computer Technology Corporation (also Microelectronics and Computer Consortium, MCC) was the first, and at one time one of the largest, computer industry research and development consortia in the United States. MCC ceased operations in 2000 and was formally dissolved in 2004."
[{2021-02-09} https://en.wikipedia.org/wiki/Microelectronics_and_Computer_Technology_Corporation]

name::
* McsEngl.MCC!=Microelectronics-and-Computer-Technology-Corporation,
* McsEngl.Microelectronics-and-Computer-Technology-Consortium!⇒oznMcc,
* McsEngl.Microelectronics-and-Computer-Technology-Corporation!⇒oznMcc,
* McsEngl.oznMcc,

project of lagCycl

description::
· klmrCyc was-created as a-project at MCC.

name::
* McsEngl.Cycproject,
* McsEngl.lagCycl'project!⇒Cycproject,

evaluation of lagCycl

description::
· there is no single, universal common-sense knowledge.

name::
* McsEngl.lagCycl'evaluation,

info-resource of lagCycl

description::
* http://www.cyc.com/,
* http://web.archive.org/web/20121219042246/http://www.cyc.com:80/cycdoc/ref/glossary.html,

name::
* McsEngl.lagCycl'Infrsc,

evoluting of lagCycl

{2012}-OpenCyc-4.0::
"A version of OpenCyc, 4.0, was released in June 2012. OpenCyc 4.0 included much of the Cyc ontology at that time, containing hundreds of thousands of terms, along with millions of assertions relating the terms to each other; however, these are mainly taxonomic assertions, not the complex rules available in Cyc. The OpenCyc 4.0 knowledge base contained 239,000 concepts and 2,093,000 facts."
[{2021-02-08} https://en.wikipedia.org/wiki/Cyc#OpenCyc]

{2002}-OpenCyc::
"The first version of OpenCyc was released in spring 2002 and contained only 6,000 concepts and 60,000 facts. The knowledge base was released under the Apache License. Cycorp stated its intention to release OpenCyc under parallel, unrestricted licences to meet the needs of its users. The CycL and SubL interpreter (the program that allows users to browse and edit the database as well as to draw inferences) was released free of charge, but only as a binary, without source code. It was made available for Linux and Microsoft Windows. The open source Texai[15] project released the RDF-compatible content extracted from OpenCyc.[16]"
[{2021-02-08} https://en.wikipedia.org/wiki/Cyc#OpenCyc]

{1995}-CYC-project-Cycorp::
"Douglas Lenat began the project in July 1984 at MCC, where he was Principal Scientist 1984–1994, and then, since January 1995, has been under active development by the Cycorp company, where he is the CEO."
* McsEngl.{lagCnkl'1995}-Cyc-project-at-Cycorp,
[{2021-02-10} https://en.wikipedia.org/wiki/Cyc]

{1989}-lagCycl-HOL::
"By 1989,[6] CycL had expanded in expressive power to Higher Order Logic (HOL)."
[{2021-02-10} https://en.wikipedia.org/wiki/Cyc]

{1984}-CYC-project::
"Doug Lenat's CYC (from enCYClopedia) Project at MCC in Austin, a ten-year $35 million project begun in 1984, uses logic, too, but emphasizes the Aristotle-problem, employing a dozen 'ontological engineers' to enumerate tens of millions of common-sense facts that will ultimately 'add up' to logical intelligence. Lenat's goal is a system that can understand and speak ordinary language, and detect violations of common sense as readily as humans can."
[{2021-02-08} http://sandcastle.cosc.brocku.ca/~bross/3P71/misc/outsider_ai.txt-{1995-12-26}]

name::
* McsEngl.evoluting-of-lagCycl,
* McsEngl.lagCycl'evoluting,

lagCycl.SPECIFIC

description::
"Every representation is a trade-off between expressiveness (how easily you can say complicated things) and efficiency (how easily the machine can reason with what you've told it).
English is very expressive but not very efficient.
Most computer languages, such as Basic, C, and Fortran, are efficient but not very expressive.
To get both qualities, we separated the epistemological problem (what should the system know?) from the heuristic problem (how can it effectively reason with what it knows?) and developed two separate languages, respectively EL and HL.
Our knowledge enterers talk to CYC in the clean, expressive language (EL).
Their input is then converted into the heuristic language (HL), which is efficient for dealing with many sorts of frequently recurring inference problems, such as reasoning about time, causality, containment, and so forth."
[{2021-02-12} http://web.archive.org/web/20001208155800/http://www.cyc.com/halslegacy.html]

name::
* McsEngl.lagCycl.specific,

lagCycl.expressive

description::
"Every representation is a trade-off between expressiveness (how easily you can say complicated things) and efficiency (how easily the machine can reason with what you've told it).
English is very expressive but not very efficient.
Most computer languages, such as Basic, C, and Fortran, are efficient but not very expressive.
To get both qualities, we separated the epistemological problem (what should the system know?) from the heuristic problem (how can it effectively reason with what it knows?) and developed two separate languages, respectively EL and HL.
Our knowledge enterers talk to CYC in the clean, expressive language (EL).
Their input is then converted into the heuristic language (HL), which is efficient for dealing with many sorts of frequently recurring inference problems, such as reasoning about time, causality, containment, and so forth."
[{2021-02-12} http://web.archive.org/web/20001208155800/http://www.cyc.com/halslegacy.html]

name::
* McsEngl.lagCycl.EL'(expressive-language),
* McsEngl.lagCycl.expressive,

descriptionLong::
"Epistemological level refers to the way knowledge is expressed when CYC® communicates with users or external programs. This stands in contrast with heuristic level, which refers to the way knowledge is actually stored, and inference implemented, in CYC®."
[{2021-02-12} http://web.archive.org/web/20121219042246/http://www.cyc.com:80/cycdoc/ref/glossary.html#EL]

lagCycl.heuristic

description::
"Every representation is a trade-off between expressiveness (how easily you can say complicated things) and efficiency (how easily the machine can reason with what you've told it).
English is very expressive but not very efficient.
Most computer languages, such as Basic, C, and Fortran, are efficient but not very expressive.
To get both qualities, we separated the epistemological problem (what should the system know?) from the heuristic problem (how can it effectively reason with what it knows?) and developed two separate languages, respectively EL and HL.
Our knowledge enterers talk to CYC in the clean, expressive language (EL).
Their input is then converted into the heuristic language (HL), which is efficient for dealing with many sorts of frequently recurring inference problems, such as reasoning about time, causality, containment, and so forth."
[{2021-02-12} http://web.archive.org/web/20001208155800/http://www.cyc.com/halslegacy.html]

name::
* McsEngl.lagCycl.HL'(heuristic-language),
* McsEngl.lagCycl.heuristic,

descriptionLong::
"heuristic level (HL)
Another name for this might be "implementation level". Heuristic level refers to the way knowledge is actually stored, and inference implemented, in CYC®. This stands in contrast to the Epistemological Level (EL), which refers to the way knowledge is expressed when CYC® communicates with users or external programs."
[{2021-02-12} http://web.archive.org/web/20121219042246/http://www.cyc.com:80/cycdoc/ref/glossary.html#HL]

lagKnlg.ontology-language

description::
× generic: logo--knowledge-language,

"In computer science and artificial intelligence, ontology languages are formal languages used to construct ontologies. They allow the encoding of knowledge about specific domains and often include reasoning rules that support the processing of that knowledge. Ontology languages are usually declarative languages, are almost always generalizations of frame languages, and are commonly based on either first-order logic or on description logic."
[{2021-01-26} https://en.wikipedia.org/wiki/Ontology_language]

name::
* McsEngl.lagKnlg.010-ontology-language!⇒lagOntg,
* McsEngl.lagKnlg.ontology-language!⇒lagOntg,
* McsEngl.lagOntg!=ONToloGy-language,
* McsEngl.lagPpsl.ontology-language!⇒lagOntg,
* McsEngl.ontology-language!⇒lagOntg,

descriptionLong::
"overview of ontology-languages:
Ontology languages are formal systems designed to represent and define concepts, entities, and their relationships in a structured and machine-readable way. They are essential in knowledge representation, artificial intelligence, semantic web development, and various other fields. Here's an overview of some prominent ontology languages:

1. **RDF (Resource Description Framework):** RDF is a basic framework for expressing relationships between resources in a triple format (subject-predicate-object). It forms the foundation of the Semantic Web.

2. **OWL (Web Ontology Language):** OWL is a family of knowledge representation languages used to create ontologies. There are three versions: OWL Lite, OWL DL, and OWL Full, each with different levels of expressiveness.

3. **RDFS (Resource Description Framework Schema):** RDFS is an extension of RDF that allows for the definition of classes and properties, enabling basic ontology modeling.

4. **SHACL (Shapes Constraint Language):** SHACL is a language for defining constraints on RDF graphs. It's used to ensure that RDF data adheres to specific structural and semantic rules.

5. **SKOS (Simple Knowledge Organization System):** SKOS is a model for expressing knowledge organization systems like thesauri, taxonomies, and classification schemes. It's particularly useful in libraries and semantic web applications.

6. **Common Logic:** Common Logic is a family of logic-based ontology languages that provides a high level of expressiveness. It's commonly used in formal knowledge representation.

7. **Description Logics:** Description logics are a family of logic-based ontology languages that underlie OWL. They provide a way to model and reason about concepts and their relationships with high expressiveness and scalability.

8. **RIF (Rule Interchange Format):** RIF is a family of languages for expressing rules that can be used in combination with ontology languages to add automated reasoning to knowledge systems.

9. **Topic Maps:** Topic Maps are a standard for representing information about the relationships between concepts, representing knowledge in a highly flexible and interconnected manner.

10. **Protege:** While not a formal ontology language itself, Protege is a popular ontology development platform that supports various ontology languages, including OWL and RDF, and provides tools for ontology editing, visualization, and reasoning.

These ontology languages and frameworks help to formalize and structure knowledge for machine processing, enabling the development of intelligent systems, data integration, and semantic interoperability. They are widely used in fields such as the Semantic Web, artificial intelligence, knowledge engineering, and more to model complex domains and enable automated reasoning and data integration."
[{2023-10-24 retrieved} https://chat.openai.com/c/b772e8ef-c6be-489f-8ad4-69895bc95406]

"Ontology languages are formal languages used to construct ontologies. Ontologies are a formal way to describe taxonomies and classification networks, essentially defining the structure of knowledge for various domains: the nouns representing classes of objects and the verbs representing relations between the objects.

Ontology languages are typically based on logic, which allows for the expression of complex relationships between concepts and the inference of new knowledge from existing knowledge. This makes them well-suited for a variety of applications, including:
* Knowledge representation and sharing: Ontologies can be used to represent and share knowledge about a wide range of domains, such as medicine, biology, engineering, and finance.
* Data integration: Ontologies can be used to integrate data from different sources, making it easier to find and use the data.
* Semantic search: Ontologies can be used to improve the accuracy and relevance of semantic search results.
* Artificial intelligence: Ontologies can be used to develop more intelligent AI systems that can reason about the world and make decisions based on their knowledge.

Some of the most popular ontology languages include:
* **Web Ontology Language (OWL)**: OWL is a family of knowledge representation languages for authoring ontologies. It is the most widely used ontology language, and is supported by a wide range of tools and software.
* **Resource Description Framework (RDF)**: RDF is a lightweight, flexible framework for representing information and its relationships. It is often used to represent ontologies in a machine-readable format.
* **RDF Schema (RDFS)**: RDFS is an extension of RDF that provides additional vocabulary for describing ontologies. It can be used to define simple ontologies, or to provide additional information about OWL ontologies.

Other ontology languages include:
* **Description Logic (DL)**: DL is a family of formal languages used to represent knowledge in a structured way. DL languages are often used as the basis for ontology languages, such as OWL.
* **Concept Language (CL)**: CL is a simple ontology language that is easy to learn and use. It is often used for educational purposes, or for developing small ontologies.
* **Knowledge Interchange Format (KIF)**: KIF is a general-purpose knowledge representation language that can be used to represent ontologies. It is often used for research purposes, or for developing ontologies for complex domains.

Ontology languages are a powerful tool for representing and sharing knowledge. They are used in a wide range of applications, including knowledge representation and sharing, data integration, semantic search, and artificial intelligence."
[{2023-10-24 retrieved} http://localhost/dWstSgm/dirMcs/dirTchInf/McsTchInf000039.last.html#idLKnlg010]
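
· a minimal Python sketch (using rdflib; the example IRI and classes are assumptions) of the RDF/RDFS layer both overviews describe: triples, classes, and an is-a link:

from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/onto#")
g = Graph()
g.add((EX.Dog, RDF.type, RDFS.Class))
g.add((EX.Animal, RDF.type, RDFS.Class))
g.add((EX.Dog, RDFS.subClassOf, EX.Animal))   # taxonomic "is-a" relation
g.add((EX.Dog, RDFS.label, Literal("dog")))
print(g.serialize(format="turtle"))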

input of lagOntg

description::
· the-info we want to map with lagOntg.

name::
* McsEngl.lagOntg'input,

output|ontology of lagOntg

description::
"Ontologies are formalized vocabularies of terms, often covering a specific domain and shared by a community of users.
They specify the definitions of terms by describing their relationships with other terms in the ontology."
[{2021-01-03} https://www.w3.org/TR/owl2-overview/]

· a-model that maps the-input.
· ontology I call the-outermost system that models an-input.

"In science and information science, an ontology formally represents knowledge as a set of concepts within a domain, and the relationships between those concepts."
[{2023-08-23 retrieved} https://en.wikipedia.org/wiki/Information_science#Ontology]

name::
* McsEngl.Knlgbase.ontology!⇒kbOnto,
* McsEngl.kbOnto!=ontology--knowledge-base,
* McsEngl.lagOntg'output!⇒kbOnto,
* McsEngl.lagOntg'ontology!⇒kbOnto,
* McsEngl.ontology!⇒kbOnto,

descriptionLong::
"In information science a conceptualization is an abstract simplified view of some selected part of the world, containing the objects, concepts, and other entities that are presumed of interest for some particular purpose and the relationships between them.[2][3] An explicit specification of a conceptualization is an ontology, and it may occur that a conceptualization can be realized by several distinct ontologies.[2]"
[{2021-01-27} https://en.wikipedia.org/wiki/Conceptualization_(information_science)]
· the-above text shows the-confusion on what a-concept is and consequently an-ontology.
===
"overview of ontology:
An ontology is a formal and explicit representation of concepts, categories, properties, and their relationships within a specific domain of knowledge. Ontologies are used in various fields, including philosophy, computer science, artificial intelligence, and information science, to facilitate knowledge sharing, data integration, and reasoning. Here's an overview of ontologies:

1. **Conceptual Framework:** At its core, an ontology serves as a conceptual framework for representing knowledge. It defines the key concepts in a domain, such as objects, actions, and their properties, and the relationships between them.

2. **Formal Representation:** Ontologies are typically represented using a formal language, often based on logic or semantic web standards. This formal representation allows for precise and unambiguous definitions of concepts and their interconnections.

3. **Hierarchy:** Many ontologies include a hierarchical structure where concepts are organized in a tree-like fashion. This hierarchy allows for the categorization and classification of concepts, with more general concepts at the top and more specific ones at lower levels.

4. **Attributes and Properties:** Ontologies define attributes or properties associated with concepts. These attributes can describe characteristics, relationships, and constraints related to the concepts in the domain.

5. **Relationships:** Relationships between concepts in an ontology capture how they are connected or related to each other. Common relationship types include "is-a" (taxonomy or subclass relationships) and "part-of" relationships.

6. **Axioms and Constraints:** Ontologies often include axioms and constraints that specify rules governing the domain. These rules can express logical relationships, restrictions, and inferences.

7. **Standardization:** In many cases, ontologies adhere to standardized ontology languages and frameworks, such as the Web Ontology Language (OWL) for the Semantic Web. This helps ensure interoperability and consistency in ontology development and usage.

8. **Domain Specificity:** Ontologies can be domain-specific, addressing a particular area of knowledge, or more general, covering broader topics. For example, there are ontologies for medical terminology, geography, and even abstract concepts like time and space.

9. **Knowledge Sharing:** Ontologies facilitate knowledge sharing and data integration, as they provide a common vocabulary and structure that can be understood and used by both humans and machines. This is particularly valuable in information retrieval, data integration, and knowledge representation.

10. **Reasoning and Inference:** Ontologies support automated reasoning and inference. By applying logical rules and axioms, software systems can derive new knowledge or make deductions based on the existing ontology.

11. **Applications:** Ontologies are used in various applications, including semantic web technologies, information retrieval, natural language processing, expert systems, data integration, and more.

12. **Evolution and Maintenance:** Ontologies can evolve over time to accommodate changes in knowledge or domain understanding. Maintenance and version control are important considerations in ontology development.

Overall, ontologies play a critical role in knowledge management and knowledge representation. They provide a structured way to organize and capture knowledge, making it accessible, shareable, and interpretable by both humans and machines, and they are a fundamental component of many AI and data-related applications."
[{2023-10-20 retrieved} https://chat.openai.com/c/0af0bd57-e782-489f-b9bc-9c0c7fe8cd01]

vocabulary of ontology

description::
· vocabulary-of-ontology I call the-set of the-names of the-concepts of the-ontology.
· besides the-names, the-ontology also contains the-relations and the-doings of its concepts.

name::
* McsEngl.kbOnto'vocabulary,
* McsEngl.vocabulary-of-ontology,

doing of ontology

description::
* creation-of-ontology,
* alignment-of-ontology,

name::
* McsEngl.kbOnto'doing,

creation of ontology

description::
"Ontology learning (ontology extraction, ontology generation, or ontology acquisition) is the automatic or semi-automatic creation of ontologies, including extracting the corresponding domain's terms and the relationships between the concepts that these terms represent from a corpus of natural language text, and encoding them with an ontology language for easy retrieval. As building ontologies manually is extremely labor-intensive and time-consuming, there is great motivation to automate the process.
Typically, the process starts by extracting terms and concepts or noun phrases from plain text using linguistic processors such as part-of-speech tagging and phrase chunking. Then statistical[1] or symbolic[2][3] techniques are used to extract relation signatures, often based on pattern-based[4] or definition-based[5] hypernym extraction techniques."
[{2021-01-27} https://en.wikipedia.org/wiki/Ontology_learning]
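
· a minimal Python sketch (my illustration) of the first step the quote describes, collecting candidate terms from text; real systems use part-of-speech tagging and phrase chunking rather than this toy frequency count:

from collections import Counter

text = ("the heart pumps blood . the heart is a muscular organ . "
        "blood carries oxygen .")
stop = {"the", "is", "a", "."}
tokens = [w for w in text.split() if w not in stop]
for term, count in Counter(tokens).most_common(3):
    print(term, count)   # heart 2, blood 2, ...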

name::
* McsEngl.kbOnto'building,
* McsEngl.kbOnto'creation,
* McsEngl.kbOnto'generation,

alignment of ontology

description::
"Ontology alignment, or ontology matching, is the process of determining correspondences between concepts in ontologies. A set of correspondences is also called an alignment. The phrase takes on a slightly different meaning, in computer science, cognitive science or philosophy."
[{2021-01-27} https://en.wikipedia.org/wiki/Ontology_alignment]
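
· a minimal Python sketch (my illustration) of the simplest kind of ontology matching, proposing correspondences by string similarity; real matchers also exploit structure, instances, and background knowledge:

from difflib import SequenceMatcher

onto_a = ["Automobile", "Person", "Organisation"]
onto_b = ["Car", "Human", "Organization"]

def best_match(term, candidates):
    score = lambda c: SequenceMatcher(None, term.lower(), c.lower()).ratio()
    return max(candidates, key=score)

print(best_match("Organisation", onto_b))   # Organization
# "Automobile" vs "Car" shows why string similarity alone is not enough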

name::
* McsEngl.kbOnto'alignment,
* McsEngl.kbOnto'matching,

evoluting of ontology

description::
">evolution of ontologies:
The evolution of ontologies is closely tied to the development of information technologies, artificial intelligence, and the need for effective knowledge representation and sharing. Ontologies have their roots in philosophy, particularly in the field of metaphysics, where they are used to describe the nature of existence and the relationships between different entities. In the context of information science and computer science, ontologies serve as formal frameworks for organizing knowledge and providing a common understanding of a domain.

Here's a brief overview of the evolution of ontologies:
1. **Early Philosophical Roots (Ancient to Early Modern Periods):** The term "ontology" itself comes from philosophy, where it has been used to discuss the nature of being and existence. Philosophers like Aristotle explored categorization and classification of entities, laying the groundwork for later developments in ontology.

2. **Knowledge Representation in AI (1960s-1980s):** As artificial intelligence (AI) emerged as a field, there was a growing need for systems to represent and reason about knowledge. Early AI systems used simple knowledge representation languages, but the limitations became apparent as the complexity of information increased.

3. **Semantic Web and RDF (1990s):** The concept of the Semantic Web, introduced by Tim Berners-Lee, envisioned a web where information could be easily shared and understood by machines. Resource Description Framework (RDF) and RDF Schema (RDFS) were developed to create a framework for describing resources and their relationships.

4. **Ontology Languages and OWL (1990s-2000s):** The Web Ontology Language (OWL) emerged as a standard for creating ontologies, providing a more expressive and formalized way to represent knowledge. OWL allows for richer modeling of relationships and supports reasoning capabilities.

5. **Application in Biomedicine and Bioinformatics (2000s-Present):** Ontologies found significant applications in domains like biomedicine and bioinformatics. The Gene Ontology (GO) and other biomedical ontologies were developed to facilitate the integration and sharing of biological knowledge.

6. **Industry Adoption and Standardization (2000s-Present):** Ontologies gained traction in various industries, including finance, manufacturing, and government. Standardization bodies like the World Wide Web Consortium (W3C) played a crucial role in establishing common ontological frameworks.

7. **Linked Data and Knowledge Graphs (2010s-Present):** The idea of Linked Data, championed by Tim Berners-Lee, involves creating a web of interlinked data using standard formats. Knowledge graphs, which leverage ontologies, became popular for organizing and connecting information in diverse domains.

8. **Machine Learning and Ontologies (2010s-Present):** Machine learning techniques, particularly in natural language processing and knowledge graph embedding, have been integrated with ontologies to enhance their usability and extraction of implicit knowledge.

The evolution of ontologies continues as new technologies and methodologies, such as knowledge graphs, graph databases, and advances in AI, contribute to their development and application in various domains. The ongoing research and practical applications are likely to shape the future trajectory of ontologies."
[{2023-11-14 retrieved} https://chat.openai.com/c/ef1e1310-5cf5-4022-bd84-4b06cbfe7221]

name::
* McsEngl.kbOnto'evolution,

ontology.SPECIFIC

description::
* generic-ontology,
* specific-ontology,
===
* bibliographic-ontology,
* biomedical-ontology,
===
* dc="http://purl.org/dc/elements/1.1/",
* foaf="http://xmlns.com/foaf/0.1/",
* ontology="http://data.europa.eu/eli/ontology#",
* owl="http://www.w3.org/2002/07/owl#",
* rdakit="http://metadataregistry.org/uri/profile/rdakit/",
* rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#",
* rdfs="http://www.w3.org/2000/01/rdf-schema#",
* regap="http://metadataregistry.org/uri/profile/regap/",
* skos="http://www.w3.org/2004/02/skos/core#",
* terms="http://purl.org/dc/terms/",
* vann="http://purl.org/vocab/vann/",
* xml="http://www.w3.org/XML/1998/namespace",
* xsd="http://www.w3.org/2001/XMLSchema#",

name::
* McsEngl.kbOnto.specific,

ontology.formal

description::
"In information science an ontology is formal if it is specified in a formal language, otherwise it is informal.
In philosophy, a separate distinction between formal and nonformal ontologies exists, which does not relate to the use of a formal language."
[{2023-11-17 retrieved} https://en.wikipedia.org/wiki/Formal_ontology#Formal_versus_nonformal]

name::
* McsEngl.formal-ontology,
* McsEngl.kbOnto.formal,

ontology.formalNo

description::
"In information science an ontology is formal if it is specified in a formal language, otherwise it is informal.
In philosophy, a separate distinction between formal and nonformal ontologies exists, which does not relate to the use of a formal language."
[{2023-11-17 retrieved} https://en.wikipedia.org/wiki/Formal_ontology#Formal_versus_nonformal]

name::
* McsEngl.formalNo-ontology,
* McsEngl.informal-ontology,
* McsEngl.nonformal-ontology,
* McsEngl.kbOnto.formalNo,

ontology.axiomatized

description::
"axiomatized ontology overview:
An axiomatized ontology is a formal representation of a domain of knowledge that is explicitly defined by a set of axioms. Axioms are statements that express the relationships between different concepts in the domain. By explicitly stating these relationships, axiomatized ontologies provide a precise and unambiguous way to share and reason about knowledge.

**Components of an Axiomatized Ontology**
An axiomatized ontology typically consists of the following components:
1. **Classes:** These represent the main categories of entities in the domain. For example, in a medical ontology, classes might include "Patient", "Disease", and "Medication".
2. **Properties:** These represent the attributes of classes. For example, the class "Patient" might have properties such as "name", "age", and "medical history".
3. **Relations:** These represent the relationships between classes. For example, the class "Patient" might have relations such as "has_diagnosis" and "is_taking_medication".
4. **Axioms:** These are statements that express the relationships between classes, properties, and relations. For example, an axiom might state that all patients have a name.

**Benefits of Axiomatized Ontologies**
Axiomatized ontologies offer several benefits, including:
1. **Formalization:** They provide a precise and unambiguous way to represent knowledge.
2. **Sharing:** They can be easily shared between different systems and applications.
3. **Reasoning:** They can be used to reason about knowledge and infer new information.
4. **Interoperability:** They can facilitate communication and data exchange between different systems.

**Applications of Axiomatized Ontologies**
Axiomatized ontologies are used in a wide variety of applications, including:
1. **Knowledge representation:** They are used to represent knowledge in semantic web applications, knowledge bases, and expert systems.
2. **Information retrieval:** They are used to improve the precision and recall of information retrieval systems.
3. **Natural language processing:** They are used to improve the understanding and generation of natural language.
4. **Data integration:** They are used to integrate data from different sources.

**Example of an Axiomatized Ontology**
The Gene Ontology (GO) is an example of a well-known axiomatized ontology. It is a structured vocabulary of genes and proteins that are involved in biological processes. The GO is used by biologists around the world to annotate their data and share their knowledge.

**Conclusion**
Axiomatized ontologies are a powerful tool for representing, sharing, and reasoning about knowledge. They are used in a wide variety of applications and are becoming increasingly important as the amount of data in the world grows."
[{2023-11-17 retrieved} https://bard.google.com/chat/d910fbd0b744152f?hl=en&pli=1]

name::
* McsEngl.axiomatized-ontology!⇒ontologyAxiom,
* McsEngl.ontologyAxiom,
* McsEngl.kbOnto.axiomatized!⇒ontologyAxiom,

ontology.upper

description::
"Upper ontologies (top-level or foundational ontologies) represent universal concepts and properties, independent from single scientific fields, such as event, space, time, dimension, substance, phenomenon, identity, process, quantity, etc., on which a general consensus of large user communities should be achieved. The main aim of foundational ontologies is to allow multiple artificial agents to cooperate with each other and with humans. To achieve this, foundational ontologies “negotiate meaning” and help for “establishing consensus in a mixed society” (Gangemi et al. 2002)."
[{2021-01-26} https://www.isko.org/cyclo/ontologies#5]

name::
* McsEngl.foundational-ontology!⇒ontologyUpper,
* McsEngl.general-ontology!⇒ontologyUpper,
* McsEngl.kbOnto.upper!⇒ontologyUpper,
* McsEngl.ontologyUpper,
* McsEngl.top-level-ontology!⇒ontologyUpper,
* McsEngl.upper-ontology!⇒ontologyUpper,

ontologyUpper.SPECIFIC

description::
* UMBEL {2008-2016},
* SUMO {2004},
* ISO 15926 {2003},
* BFO {2002},
* Sowa's Ontology {1999},
* YAMATO {1999-2023},
* GFO {1999},
* Cyc {1984},
* BORO {late 1980s and early 1990s},
* WordNet {mid 1980s},
* COSMO,
* IDEAS,
* PROTON,

name::
* McsEngl.ontologyUpper.specific,

ontologyUpper.UMBEL-{2008-2016}

description::
"Upper Mapping and Binding Exchange Layer (UMBEL) is an ontology of 28,000 reference concepts that maps to a simplified subset of the OpenCyc ontology, that is intended to provide a way of linking the precise OpenCyc ontology with less formal ontologies.[29] It also has formal mappings to Wikipedia, DBpedia, PROTON and GeoNames. It has been developed and maintained as open source by Structured Dynamics."
[{2023-11-14 retrieved} https://en.wikipedia.org/wiki/Upper_ontology#UMBEL]

name::
* McsEngl.UMBEL!=Upper-Mapping-and-Binding-Exchange-Layer,
* McsEngl.Knlgbase.UMBEL,
* McsEngl.ontgUmbel,
* McsEngl.ontologyUpper.UMBEL,

ontologyUpper.SUMO-{2004-2007}

description::
">example of Suggested-Upper-Merged-Ontology:
The Suggested Upper Merged Ontology (SUMO) is a large, high-level ontology intended to provide a common foundation for a variety of knowledge-based systems. It is designed to be as general and domain-independent as possible, and it covers a wide range of topics, including:
* **Entities:** Things that exist, such as objects, events, processes, and states.
* **Relations:** Relationships between entities, such as is-a, part-of, and cause-of.
* **Attributes:** Properties of entities, such as color, size, and location.
* **Axioms:** Rules that constrain the relationships and attributes of entities.

SUMO is written in the SUO-KIF language, a higher-order logic that has a LISP-like syntax. It is also available in the TPTP family of languages. A mapping from WordNet synsets to SUMO has been defined.

SUMO is intended to be used as a foundation ontology for a variety of computer information processing systems. It can be used to:
* **Improve the interoperability of different knowledge bases:** SUMO can provide a common vocabulary and set of relationships that can be used to bridge the gap between different knowledge bases. This can make it easier to share and reuse knowledge.
* **Support automated reasoning:** SUMO can be used to reason about the relationships between entities in a knowledge base. This can be used to make inferences, answer questions, and solve problems.
* **Develop natural language processing systems:** SUMO can be used to develop natural language processing systems that can understand and generate text that is semantically consistent with the ontology.

SUMO is a valuable resource for anyone who is interested in developing knowledge-based systems. It is a well-defined, comprehensive ontology that can be used to improve the interoperability, reasoning capabilities, and natural language processing capabilities of a wide range of systems.

Here is an example of how SUMO can be used to represent the knowledge about a dog:
```
(instance dog (isa animal))
(instance dog (has-part tail))
(instance dog (has-part fur))
(instance dog (can bark))
```

This code states that dog is an instance of animal, that dog has-part tail, that dog has-part fur, and that dog can bark. This is just a small example of how SUMO can be used to represent a wide range of knowledge."
[{2023-11-16 retrieved} https://bard.google.com/chat/1b8854012490f002?hl=en&pli=1]

name::
* McsEngl.Knlgbase.SUMO,
* McsEngl.SUMO!=Suggested-Upper-Merged-Ontology,
* McsEngl.ontgSumo,
* McsEngl.ontologyUpper.SUMO,

info-resource of SUMO

description::
* https://www.ontologyportal.org/
* https://en.wikipedia.org/wiki/Upper_ontology,
* https://en.wikipedia.org/wiki/Suggested_Upper_Merged_Ontology,

name::
* McsEngl.ontgSumo'Infrsc,

ontologyUpper.GFO-{1999-2008}

description::
">example of General formal ontology
The General Formal Ontology (GFO) is a top-level ontology that provides a general framework for representing common-sense knowledge about the world. It is a large and complex ontology that includes a wide range of concepts, such as objects, processes, properties, relations, roles, functions, facts, and situations.

Here are some examples of how GFO can be used to represent common-sense knowledge:
* **Objects:** GFO can be used to represent objects such as cars, books, and people. It can also be used to represent more abstract objects such as ideas, emotions, and events.
* **Processes:** GFO can be used to represent processes such as running, eating, and thinking. It can also be used to represent more abstract processes such as learning, growing, and changing.
* **Properties:** GFO can be used to represent properties of objects such as color, shape, and size. It can also be used to represent properties of processes such as duration, speed, and intensity.
* **Relations:** GFO can be used to represent relations between objects such as is-a, part-of, and located-in. It can also be used to represent relations between processes such as precedes, causes, and enables.
* **Roles:** GFO can be used to represent roles that entities can play in situations. For example, a person can play the role of a teacher, a student, or a parent.
* **Functions:** GFO can be used to represent functions that entities perform. For example, a heart has the function of pumping blood, and a brain has the function of processing information.
* **Facts:** GFO can be used to represent facts about the world. For example, the fact that the Earth is round can be represented in GFO.
* **Situations:** GFO can be used to represent situations that occur in the world. For example, a game of football is a situation that involves players, a ball, and a set of rules.

GFO is a powerful tool for representing common-sense knowledge. It can be used to develop a wide range of applications, such as knowledge bases, semantic web applications, and natural language processing systems.

Here is an example of how GFO can be used to represent the fact that "John is a student":
```
John: Individual
John: is-a: Student

Student: Class
Student: is-a: Person
```

This representation states that John is an individual and that he is an instance of the class Student. The class Student is a subclass of the class Person, which means that all students are also people.

GFO is a complex ontology, and it can be difficult to learn. However, there are a number of resources available to help you learn GFO, including online documentation, tutorials, and courses."
[{2023-11-16 retrieved} https://bard.google.com/chat/3662dbfecdb0a15d?hl=en&pli=1]

name::
* McsEngl.GFO!=general-formal-ontology,
* McsEngl.Knlgbase.GFO,
* McsEngl.ontgGfo,
* McsEngl.ontologyUpper.GFO,

addressWpg::
* https://www.onto-med.de/ontologies/gfo,
* https://en.wikipedia.org/wiki/General_formal_ontology,
* https://en.wikipedia.org/wiki/Upper_ontology,

ontologyUpper.YAMATO-{1999-2023}

description::
">evolution of Yet Another More Advanced Top Ontology:
YAMATO (Yet Another More Advanced Top-level Ontology) is a foundational ontology that has been under development since 1999. It is designed to be a comprehensive and rigorous framework for representing common-sense knowledge about the world. YAMATO has been used in a variety of applications, including medical informatics, learning engineering, and knowledge engineering.

The evolution of YAMATO can be divided into three phases:
* **Phase 1 (1999-2005):** This phase focused on developing the core concepts of the ontology, such as the concepts of entity, event, process, and quality.
* **Phase 2 (2006-2010):** This phase focused on refining the ontology and adding new concepts, such as the concept of role.
* **Phase 3 (2011-present):** This phase has focused on applying YAMATO to real-world applications and developing tools and methods for using the ontology.

Here is a more detailed timeline of the evolution of YAMATO:
* **1999:** YAMATO is first developed by Riichiro Mizoguchi.
* **2005:** Mizoguchi publishes his book "Ontological Engineering", which provides a detailed exposition of the philosophical underpinnings of YAMATO.
* **2006:** YAMATO is used to develop the first ontology of learning and instructional theories.
* **2008:** YAMATO is used to develop a medical ontology.
* **2009:** YAMATO is used to develop an ontology of genomics.
* **2010:** YAMATO is implemented with the ontology editor Hozo and in OWL.
* **2011:** YAMATO is used to model the behavior of mobile users.
* **2012:** YAMATO is used to develop a functional ontology.
* **2013:** YAMATO is used to develop a knowledge management system for a large enterprise.
* **2014:** YAMATO is used to develop a semantic search engine.
* **2015:** YAMATO is used to develop a natural language processing system.
* **2016:** YAMATO is used to develop a machine translation system.
* **2017:** YAMATO is used to develop a chatbot.
* **2018:** YAMATO is used to develop a virtual assistant.
* **2019:** YAMATO is used to develop a recommendation system.
* **2020:** YAMATO is used to develop a personalized learning system.
* **2021:** YAMATO is used to develop a decision support system.
* **2022:** YAMATO is used to develop a risk assessment system.
* **2023:** YAMATO continues to be developed and applied in a wide range of applications.

The evolution of YAMATO reflects the growing importance of ontologies in a variety of fields. As ontologies become more sophisticated and widely used, YAMATO is likely to continue to play a leading role in the development of the Semantic Web."
[{2023-11-14 retrieved} https://bard.google.com/chat/73ecc67ce66ce5dc?hl=en&pli=1]

name::
* McsEngl.YAMATO!=Yet-Another-More-Advanced-Top-Ontology,
* McsEngl.Knlgbase.YAMATO,
* McsEngl.ontgYamato,
* McsEngl.ontologyUpper.YAMATO,

ontology.upperNo

description::
"Domain ontologies conceptualize the specific realm of a field of knowledge or a particular task specifying the contents of the general categories provided by a top-level ontology. Domain ontologies offer a model of detailed knowledge, on which there may be substantial sharing of meanings already."
[{2021-01-26} https://www.isko.org/cyclo/ontologies#5]

name::
* McsEngl.domain-ontology,
* McsEngl.kbOnto.domain,
* McsEngl.kbOnto.genericNo,
* McsEngl.kbOnto.upperNo,
* McsEngl.ontologyUpperNo,
* McsEngl.specific-ontology,

ontologyUpperNo.SPECIFIC

description::
* gist, {2007-2022} enterprise,
* ISO-15926 {2003}, life-cycle,
* IDEAS, business,

name::
* McsEngl.ontologyUpperNo.specific,

ontologyUpperNo.disease-ontology

description::
Χ webaddress: https://www.disease-ontology.org/do,

"The Disease Ontology (DO) is a formal ontology of human disease.[1][2][3] The Disease Ontology project is hosted at the Institute for Genome Sciences at the University of Maryland School of Medicine.

The Disease Ontology project was initially developed in 2003 at Northwestern University to address the need for a purpose-built ontology that covers the full spectrum of disease concepts annotated within biomedical repositories within an ontological framework that is extensible to meet community needs.

The Disease Ontology is an OBO (Open Biomedical Ontologies) Foundry ontology.

Disease Ontology Identifiers (DOIDs) consist of the prefix DOID: followed by a number; for example, Alzheimer's disease has the stable identifier DOID:10652. DO is cross-referenced in several resources such as UniProt.

Example term
The Disease Ontology entry for motor neuron disease in OBO format is given below, showing the links to other classification schemes, including ICD-9, ICD-10, MeSH, SNOMED and UMLS.

id: DOID:231
name: motor neuron disease
def: "A neurodegenerative disease that is located_in the motor neurones." Motor neuron disease
xref: ICD10CM:G12.2
xref: ICD10CM:G12.20
xref: ICD9CM:335.2
xref: MSH:D016472
xref: SNOMEDCT_US_2016_03_01:155015007
xref: SNOMEDCT_US_2016_03_01:192888001
xref: SNOMEDCT_US_2016_03_01:192889009
xref: SNOMEDCT_US_2016_03_01:192890000
xref: SNOMEDCT_US_2016_03_01:37340000
xref: UMLS_CUI:C0085084
is_a: DOID:1289 ! Neurodegenerative disease"
[{2023-11-17 retrieved} https://en.wikipedia.org/wiki/Disease_Ontology]
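
· the OBO stanza format above is line-oriented, so it is easy to read programmatically; the following minimal Python sketch (my own illustration, not an official DO tool) parses such a stanza into a dictionary, collecting repeated keys such as xref into lists:
```
# illustrative OBO-stanza parser (assumption: simple "key: value" lines)
from collections import defaultdict

stanza = """\
id: DOID:231
name: motor neuron disease
xref: ICD9CM:335.2
xref: MSH:D016472
is_a: DOID:1289 ! Neurodegenerative disease"""

term = defaultdict(list)
for line in stanza.splitlines():
    key, _, value = line.partition(": ")
    term[key].append(value.strip())

print(term["name"][0])  # motor neuron disease
print(term["xref"])     # ['ICD9CM:335.2', 'MSH:D016472']
```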

name::
* McsEngl.DO!=disease-ontology,
* McsEngl.Knlgbase.disease-ontology,
* McsEngl.disease-ontology,
* McsEngl.ontgDisease,
* McsEngl.ontologyUpperNo.disease-ontology,

ontologyUpperNo.gist

description::
Ontology IRI: https://w3id.org/semanticarts/ontology/gistCore

"gist is Semantic Arts’ minimalist upper ontology for the enterprise. It is designed to provide the maximum coverage of typical business concepts with the fewest number of primitives and the least amount of ambiguity."
[{2023-11-16 retrieved} https://www.semanticarts.com/gist/]

name::
* McsEngl.gist-ontologyUpperNo,
* McsEngl.Knlgbase.gist,
* McsEngl.ontgGist,
* McsEngl.ontologyUpperNo.gist,

addressWpg::
* https://gist-doc.semanticarts.com/gist-11.0/widoco-documentation/index-en.html,

ontology.4D

description::
"4D-ontology overview:
4D-ontology is an ontology that represents the world as a collection of 4D entities, each of which has a spatial extent and a temporal extent. This is in contrast to traditional 3D ontologies, which represent the world as a collection of 3D entities, each of which has only a spatial extent.

4D-ontologies are more expressive than 3D ontologies: because every entity carries a temporal extent, they can represent change, events, and histories directly. The cost is extra complexity in both modelling and implementation.

Perdurantism and endurantism are the two main rival accounts of persistence. Perdurantists hold that an object is a four-dimensional whole made up of temporal parts, so change consists in different temporal parts having different properties; this is the view that 4D ontologies formalize. Endurantists hold instead that an object is wholly present at each moment of its existence, which corresponds to the traditional 3D picture.

Here are some of the key differences between 3D and 4D ontologies:
* In a 3D ontology, an object is represented as a single entity that is wholly present at each point in time.
* In a 4D ontology, an object is represented as a collection of temporal parts, each of which exists at a different time.
* 3D ontologies are typically used to represent static objects; 4D ontologies are typically used to represent dynamic objects and processes.
* 4D ontologies are typically more expressive, but also more complex and harder to implement, than 3D ontologies.

4D-ontologies are still under development, but they are already being used in a number of applications, and their use is likely to grow in the future."
[{2023-11-17 retrieved} https://bard.google.com/chat/2afda959bcca6254?hl=en&pli=1]
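
· to make the perdurantist idea concrete, here is a small illustrative Python sketch (my own, not taken from any published 4D ontology) that models an object as a collection of temporal parts, each with its own spatial extent and time interval:
```
# illustrative 4D model (assumption: space simplified to one coordinate,
# time to integer instants; real 4D ontologies are far richer)
from dataclasses import dataclass

@dataclass(frozen=True)
class TemporalPart:
    t_start: int      # temporal extent of this part
    t_end: int
    position: float   # spatial extent, reduced to a point

@dataclass
class FourDObject:
    name: str
    parts: list       # on the perdurantist view, the object just is its parts

    def state_at(self, t: int):
        """Return the temporal part that exists at time t, if any."""
        return next((p for p in self.parts if p.t_start <= t < p.t_end), None)

ball = FourDObject("ball", [TemporalPart(0, 5, 0.0), TemporalPart(5, 10, 3.0)])
print(ball.state_at(7))  # the part between t=5 and t=10: the ball has moved
```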

name::
* McsEngl.4D-ontology,
* McsEngl.kbOnto.4D,

ontology.bibliographic

description::
"A particular attention must be paid to bibliographic ontologies, which enable the description of entities that belong to the bibliographic set, such as textual publications (e.g., articles, monographs, and series) and web pages, datasets, films etc., and define the relationships among these bibliographic entities (Nurmikko-Fuller et al. 2015; 2016). Besides, bibliographic ontologies have been built to define specific relationships, such as authorship, editorship, and aboutness among entities, as well as the relationships that connect → works (Smiraglia 2019) and their abridgments, adaptations, and translations. They can underline the relationships between a serial and the transformations it may have had over time, such as supplements or successors."
[{2021-01-26} https://www.isko.org/cyclo/ontologies#6.1]

name::
* McsEngl.bibliographic-ontology,
* McsEngl.Knlgbase.bibliographic,
* McsEngl.ontgBibliographic,
* McsEngl.kbOnto.bibliographic,

ontology.biomedical

description::
"In biological and medical domain, controlled vocabularies of terms and relations are used to share information and several domain ontologies have been developed. A consortium of ontology developers — The Open Biological and Biomedical Ontology (OBO) Foundry — was founded in 2001 to define shared principles for developing interoperable ontologies in biomedicine. The principles include the collaborative development of ontologies and the use of a common syntax (RDF/XML); use based on most prominent models such as the Gene Ontology; and the provision of open access. The library of ontologies encompasses the BFO upper ontology and domain ontologies such as the Gene Ontology, the Disease Ontology, the Plant Ontology, the Protein Ontology, the Cell Ontology, the Coronavirus Ontology and so on. The Gene Ontology (GO), developed in 1998, describes the biological domain considering three aspects: cellular components, that is, the parts of the cell; biological processes, such as chemical reactions or metabolism; and molecular functions. Thus, the GO consists of three ontologies, each one devoted to one aspect. The GO is a dynamic vocabulary that allows description of the functions and activities performed by the gene products, that is, the macromolecules, across different organisms, enabling the analysis of genomic data. The three ontologies may be used to query a database of gene product functions."
[{2021-01-26} https://www.isko.org/cyclo/ontologies#6.3]
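
· as a sketch of how such annotations look as RDF triples, the rdflib example below links a gene product to the three GO root terms; the gene IRI and the annotatedWith predicate are hypothetical illustration terms, while the GO IRIs follow the standard http://purl.obolibrary.org/obo/ pattern:
```
# sketch of GO-style annotation triples
# (assumption: EX.gene1 and EX.annotatedWith are made-up illustration terms)
from rdflib import Graph, Namespace

OBO = Namespace("http://purl.obolibrary.org/obo/")
EX = Namespace("http://example.org/")

g = Graph()
g.add((EX.gene1, EX.annotatedWith, OBO.GO_0003674))  # molecular_function
g.add((EX.gene1, EX.annotatedWith, OBO.GO_0008150))  # biological_process
g.add((EX.gene1, EX.annotatedWith, OBO.GO_0005575))  # cellular_component
print(g.serialize(format="turtle"))
```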

name::
* McsEngl.biomedical-ontology,
* McsEngl.Knlgbase.biomedical,
* McsEngl.ontgBiomedical,
* McsEngl.kbOnto.biomedical,

BioPortal of ontology.biomedical

description::
* https://bioportal.bioontology.org/,
* http://data.bioontology.org/property_search?q=has%20dimension,
===
"BioPortal provides access to commonly used biomedical ontologies and to tools for working with them. BioPortal allows you to
* browse the library of ontologies
* search for a term across multiple ontologies
* browse mappings between terms in different ontologies
* receive recommendations on which ontologies are most relevant for a corpus
* annotate text with terms from ontologies
* search biomedical resources for a term
* browse a selection of projects that use BioPortal resources
All information available through the BioPortal Web site is also available through the NCBO Web service REST API."
[{2021-02-01} https://www.bioontology.org/wiki/BioPortal_Help]
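
· the REST API can be called with any HTTP client; the Python sketch below (assuming a valid NCBO API key in MY_API_KEY) queries the search endpoint at data.bioontology.org:
```
# minimal BioPortal term-search sketch
# (assumption: MY_API_KEY holds a valid NCBO API key)
import requests

MY_API_KEY = "..."  # obtain one from https://bioportal.bioontology.org/
resp = requests.get(
    "https://data.bioontology.org/search",
    params={"q": "melanoma", "apikey": MY_API_KEY},
)
resp.raise_for_status()
for item in resp.json().get("collection", [])[:5]:
    print(item.get("prefLabel"), "-", item.get("@id"))
```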

name::
* McsEngl.BioPortal,

statistics of BioPortal

description::
Χ Ontologies: 838
Χ Classes: 9,708,407
Χ Properties: 36,286
Χ Mappings: 78,104,619
[{2021-02-01} https://bioportal.bioontology.org/]

name::
* McsEngl.BioPortal'statistics,

ontology.Dublin-Core

description::
"The Dublin Core Metadata Initiative maintains a metadata element set for describing a wide range of resources. The vocabulary provides properties such as "creator", "publisher" and "title"."
[{2021-01-03} https://www.w3.org/TR/2014/NOTE-rdf11-primer-20140624/#section-vocabulary]
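
· a minimal rdflib sketch using its built-in Dublin Core namespace; the document IRI and literal values are made-up placeholders:
```
# describing a resource with Dublin Core properties
# (assumption: the example.org IRI and literals are placeholders)
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DC

g = Graph()
doc = URIRef("http://example.org/report-2023")
g.add((doc, DC.title, Literal("Annual Report 2023")))
g.add((doc, DC.creator, Literal("Jane Doe")))
g.add((doc, DC.publisher, Literal("Example Org")))
print(g.serialize(format="turtle"))
```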

name::
* McsEngl.Dublin-core-Rdfvoc,
* McsEngl.Rdfvoc.Dublin-core,
* McsEngl.dc-ontology,
* McsEngl.Knlgbase.Dublin-Core,
* McsEngl.ontgDublin-Core,
* McsEngl.kbOnto.Dublin-Core,
* McsEngl.kbOnto.dc,

ontology.foaf

description::
"One of the first RDF vocabularies used worldwide was the "Friend of a Friend" (FOAF) vocabulary for describing social networks."
[{2021-01-03} https://www.w3.org/TR/2014/NOTE-rdf11-primer-20140624/#section-vocabulary]
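
· as a sketch, rdflib ships a FOAF namespace binding, so a small social-network fragment can be produced like this (the people and homepage are invented examples):
```
# minimal FOAF description sketch (assumption: invented example data)
from rdflib import BNode, Graph, Literal, URIRef
from rdflib.namespace import FOAF, RDF

g = Graph()
alice, bob = BNode(), BNode()
g.add((alice, RDF.type, FOAF.Person))
g.add((alice, FOAF.name, Literal("Alice")))
g.add((alice, FOAF.homepage, URIRef("http://example.org/alice")))
g.add((bob, RDF.type, FOAF.Person))
g.add((bob, FOAF.name, Literal("Bob")))
g.add((alice, FOAF.knows, bob))  # some reciprocated interaction is implied
print(g.serialize(format="turtle"))
```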

name::
* McsEngl.FOAF-Rdfvoc!⇒foaf,
* McsEngl.Rdfvoc.FOAF!⇒foaf,
* McsEngl.friend-of-a-friend-vocabulary!⇒foaf,
* McsEngl.foaf!=friend-of-a-friend-vocabulary,
* McsEngl.foaf-ontology,
* McsEngl.Knlgbase.foaf,
* McsEngl.ontgFoaf,
* McsEngl.kbOnto.foaf,

class of foaf

description::
· RDF-classes.

name::
* McsEngl.foaf'class,

foaf:Agent-class::
Χ sDescription: Agent - An agent (eg. person, group, software or physical artifact),
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_Agent
Χ sStatus: stable
Χ aProperties-include: [gender, yahooChatID, account, birthday, icqChatID, aimChatID, jabberID, made, mbox, interest, tipjar, skypeID, topic_interest, age, mbox_sha1sum, status, msnChatID, openid, holdsAccount, weblog]
Χ aUsed-with: [maker, member]
Χ aHas-subclass: [Group, Person, Organization]

Χ sDescriptionLong: The Agent class is the class of agents; things that do stuff. A well known sub-class is Person, representing people. Other kinds of agents include Organization and Group,
The Agent class is useful in a few places in FOAF where Person would have been overly specific. For example, the IM chat ID properties such as jabberID are typically associated with people, but sometimes belong to software bots.
* McsEngl.foaf/Agent,
* McsEngl.Agent@foaf,

foaf:Document-class::
Χ sDescription: Document - A document,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_Document
Χ sStatus: stable
Χ aProperties-include: [topic, primaryTopic, sha1]
Χ aUsed-with: [workInfoHomepage, workplaceHomepage, page, accountServiceHomepage, openid, tipjar, schoolHomepage, publications, isPrimaryTopicOf, interest, homepage, weblog]
Χ aHas-subclass: [Image, PersonalProfileDocument]
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/
Χ aDisjoint-with: [Project, Organization]

Χ sDescriptionLong: The Document class represents those things which are, broadly conceived, 'documents',
The Image class is a sub-class of Document, since all images are documents.
Χ editorial: We do not (currently) distinguish precisely between physical and electronic documents, or between copies of a work and the abstraction those copies embody. The relationship between documents and their byte-stream representation needs clarification (see sha1 for related issues),
* McsEngl.foaf/Document,
* McsEngl.Document@foaf,

foaf:Group-class::
Χ sDescription: Group - A class of Agents,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_Group
Χ sStatus: stable
Χ aProperties-include: [member]
Χ aSubclass-of: [Agent]
* McsEngl.foaf/Group,
* McsEngl.Group@foaf,

Χ sDescriptionLong: The Group class represents a collection of individual agents (and may itself play the role of a Agent, ie. something that can perform actions),
This concept is intentionally quite broad, covering informal and ad-hoc groups, long-lived communities, organizational groups within a workplace, etc. Some such groups may have associated characteristics which could be captured in RDF (perhaps a homepage, name, mailing list etc.).
While a Group has the characteristics of a Agent, it is also associated with a number of other Agents (typically people) who constitute the Group. FOAF provides a mechanism, the membershipClass property, which relates a Group to a sub-class of the class Agent who are members of the group. This is a little complicated, but allows us to make group membership rules explicit.
The markup (shown below) for defining a group is both complex and powerful. It allows group membership rules to match against any RDF-describable characteristics of the potential group members. As FOAF and similar vocabularies become more expressive in their ability to describe individuals, the Group mechanism for categorising them into groups also becomes more powerful.
While the formal description of membership criteria for a Group may be complex, the basic mechanism for saying that someone is in a Group is very simple. We simply use a member property of the Group to indicate the agents that are members of the group. For example:

    <foaf:Group>
     <foaf:name>ILRT staff</foaf:name>
     <foaf:member>
      <foaf:Person>
       <foaf:name>Martin Poulter</foaf:name>
       <foaf:homepage rdf:resource="http://www.ilrt.bris.ac.uk/aboutus/staff/staffprofile/?search=plmlp"/>
       <foaf:workplaceHomepage rdf:resource="http://www.ilrt.bris.ac.uk/"/>
      </foaf:Person>
     </foaf:member>
    </foaf:Group>
    

Behind the scenes, further RDF statements can be used to express the rules for being a member of this group. End-users of FOAF need not pay attention to these details.
Here is an example. We define a Group representing those people who are ILRT staff members (ILRT is a department at the University of Bristol). The membershipClass property connects the group (conceived of as a social entity and agent in its own right) with the class definition for those people who constitute it. In this case, the rule is that all group members are in the ILRTStaffPerson class, which is in turn populated by all those things that are a Person and which have a workplaceHomepage of http://www.ilrt.bris.ac.uk/. This is typical: FOAF groups are created by specifying a sub-class of Agent (in fact usually this will be a sub-class of Person), and giving criteria for which things fall in or out of the sub-class. For this, we use the owl:onProperty and owl:hasValue properties, indicating the property/value pairs which must be true of matching agents.

    <!-- here we see a FOAF group described.
         each foaf group may be associated with an OWL definition
         specifying the class of agents that constitute the group's membership -->
    <foaf:Group>
     <foaf:name>ILRT staff</foaf:name>
     <foaf:membershipClass>
        <owl:Class rdf:about="http://ilrt.example.com/groups#ILRTStaffPerson">
         <rdfs:subClassOf rdf:resource="http://xmlns.com/foaf/0.1/Person"/>
         <rdfs:subClassOf>
           <owl:Restriction>
             <owl:onProperty rdf:resource="http://xmlns.com/foaf/0.1/workplaceHomepage"/>
             <owl:hasValue rdf:resource="http://www.ilrt.bris.ac.uk/"/>
           </owl:Restriction>
         </rdfs:subClassOf>
       </owl:Class>
     </foaf:membershipClass>
    </foaf:Group>
    

Note that while these example OWL rules for being in the eg:ILRTStaffPerson class are based on a Person having a particular workplaceHomepage, this places no obligations on the authors of actual FOAF documents to include this information. If the information is included, then generic OWL tools may infer that some person is an eg:ILRTStaffPerson. To go the extra step and infer that some eg:ILRTStaffPerson is a member of the group whose name is "ILRT staff", tools will need some knowledge of the way FOAF deals with groups. In other words, generic OWL technology gets us most of the way, but the full Group machinery requires extra work for implementors.
The current design names the relationship as pointing from the group, to the member. This is convenient when writing XML/RDF that encloses the members within markup that describes the group. Alternate representations of the same content are allowed in RDF, so you can write claims about the Person and the Group without having to nest either description inside the other. For (brief) example:

    <foaf:Group>
     <foaf:member rdf:nodeID="martin"/>
     <!-- more about the group here -->
    </foaf:Group>
    <foaf:Person rdf:nodeID="martin">
      <!-- more about martin here -->
    </foaf:Person>
    

There is a FOAF issue tracker associated with this FOAF term. A design goal is to make the most of W3C's OWL language for representing group-membership criteria, while also making it easy to leverage existing groups and datasets available online (eg. buddylists, mailing list membership lists etc). Feedback on the current design is solicited! Should we consider using SPARQL queries instead, for example?

foaf:Image-class::
Χ sDescription: Image - An image,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_Image
Χ sStatus: stable
Χ aProperties-include: [depicts, thumbnail]
Χ aUsed-with: [img, thumbnail, depiction]
Χ aSubclass-of: [Document]
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: The class Image is a sub-class of Document corresponding to those documents which are images,
Digital images (such as JPEG, PNG, GIF bitmaps, SVG diagrams etc.) are examples of Image.
* McsEngl.foaf/Image,
* McsEngl.Image@foaf,

foaf:Organization-class::
Χ sDescription: Organization - An organization,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_Organization
Χ sStatus: stable
Χ aSubclass-of: [Agent]
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/
Χ aDisjoint-with: [Document, Person]

Χ sDescriptionLong: The Organization class represents a kind of Agent corresponding to social institutions such as companies, societies etc,
Χ editorial: This is a more 'solid' class than Group, which allows for more ad-hoc collections of individuals. These terms, like the corresponding natural language concepts, have some overlap, but different emphasis,
* McsEngl.foaf/Organization,
* McsEngl.Organization@foaf,

foaf:Person-class::
Χ sDescription: Person - A person,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_Person
Χ sStatus: stable
Χ aProperties-include: [plan, surname, geekcode, pastProject, lastName, family_name, publications, currentProject, familyName, firstName, workInfoHomepage, myersBriggs, schoolHomepage, img, workplaceHomepage, knows]
Χ aUsed-with: [knows]
Χ aSubclass-of: [Agent, Spatial Thing]
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/
Χ aDisjoint-with: [Project, Organization]

Χ sDescriptionLong: The Person class represents people. Something is a Person if it is a person. We don't nitpick about whether they're alive, dead, real, or imaginary. The Person class is a sub-class of the Agent class, since all people are considered 'agents' in FOAF,
* McsEngl.foaf/Person,
* McsEngl.Person@foaf,

foaf:OnlineAccount-class::
Χ sDescription: Online Account - An online account,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_OnlineAccount
Χ sStatus: testing
Χ aProperties-include: [accountName, accountServiceHomepage]
Χ aUsed-with: [account, holdsAccount]
Χ aSubclass-of: [Thing]
Χ aHas-subclass: [Online E-commerce Account, Online Gaming Account, Online Chat Account]
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: The OnlineAccount class represents the provision of some form of online service, by some party (indicated indirectly via a accountServiceHomepage) to some Agent. The account property of the agent is used to indicate accounts that are associated with the agent,
See OnlineChatAccount for an example. Other sub-classes include OnlineEcommerceAccount and OnlineGamingAccount.
One deployment style for this construct is to use URIs for well-known documents (or other entities) that strongly embody the account-holding relationship; for example, user profile pages on social network sites. This has the advantage of providing URIs that are likely to be easy to link with other information, but means that the instances of this class should not be considered 'accounts' in the abstract or business sense of a 'contract'.
* McsEngl.foaf/OnlineAccount,
* McsEngl.OnlineAccount@foaf,

foaf:PersonalProfileDocument-class::
Χ sDescription: PersonalProfileDocument - A personal profile RDF document,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_PersonalProfileDocument
Χ sStatus: testing
Χ aSubclass-of: [Document]
* McsEngl.foaf/PersonalProfileDocument,
* McsEngl.PersonalProfileDocument@foaf,

Χ sDescriptionLong: The PersonalProfileDocument class represents those things that are a Document, and that use RDF to describe properties of the person who is the maker of the document. There is just one Person described in the document, ie. the person who made it and who will be its primaryTopic,
The PersonalProfileDocument class, and FOAF's associated conventions for describing it, captures an important deployment pattern for the FOAF vocabulary. FOAF is very often used in public RDF documents made available through the Web. There is a colloquial notion that these "FOAF files" are often somebody's FOAF file. Through PersonalProfileDocument we provide a machine-readable expression of this concept, providing a basis for FOAF documents to make claims about their maker and topic.
When describing a PersonalProfileDocument it is typical (and useful) to describe its associated Person using the maker property. Anything that is a Person and that is the maker of some PersonalProfileDocument will be the primaryTopic of that Document. Although this can be inferred, it is often helpful to include this information explicitly within the PersonalProfileDocument.
For example, here is a fragment of a personal profile document which describes its author explicitly:

    <foaf:Person rdf:nodeID="p1">
     <foaf:name>Dan Brickley</foaf:name>
     <foaf:homepage rdf:resource="http://danbri.org/"/>
     <!-- etc... -->
    </foaf:Person>

    <foaf:PersonalProfileDocument rdf:about="">
       <foaf:maker rdf:nodeID="p1"/>
       <foaf:primaryTopic rdf:nodeID="p1"/>
    </foaf:PersonalProfileDocument>
    

Note that a PersonalProfileDocument will have some representation as RDF. Typically this will be in W3C's RDF/XML syntax, however we leave open the possibility for the use of other notations, or representational conventions including automated transformations from HTML (see the GRDDL spec for one such technique).

foaf:Project-class::
Χ sDescription: Project - A project (a collective endeavour of some kind),
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_Project
Χ sStatus: testing
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/
Χ aDisjoint-with: [Document, Person]

Χ sDescriptionLong: The Project class represents the class of things that are 'projects'. These may be formal or informal, collective or individual. It is often useful to indicate the homepage of a Project,
Χ editorial: Further work is needed to specify the connections between this class and the FOAF properties currentProject and pastProject,
* McsEngl.foaf/Project,
* McsEngl.Project@foaf,

property of foaf

description::
· RDF-properties, preceded by a few FOAF utility and account classes that the specification groups with them.

name::
* McsEngl.foaf'property,

foaf:LabelProperty-class::
Χ sDescription: Label Property - A foaf:LabelProperty is any RDF property with textual values that serve as labels,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_LabelProperty
Χ sStatus: unstable
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: A LabelProperty is any RDF property with textual values that serve as labels,
Any property that is a LabelProperty is effectively a sub-property of rdfs:label. This utility class provides an alternate means of expressing this idea, in a way that may help with OWL 2.0 DL compatibility.
* McsEngl.foaf/LabelProperty,
* McsEngl.LabelProperty@foaf,

foaf:OnlineChatAccount-class::
Χ sDescription: Online Chat Account - An online chat account,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_OnlineChatAccount
Χ sStatus: unstable
Χ aSubclass-of: [OnlineAccount]
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/
* McsEngl.foaf/OnlineChatAccount,
* McsEngl.OnlineChatAccount@foaf,

Χ sDescriptionLong: A OnlineChatAccount is a OnlineAccount devoted to chat / instant messaging. The account may offer other services too; FOAF's sub-classes of OnlineAccount are not mutually disjoint,
This is a generalization of the FOAF Chat ID properties, jabberID, aimChatID, skypeID, msnChatID, icqChatID and yahooChatID.
Unlike those simple properties, OnlineAccount and associated FOAF terms allows us to describe a great variety of online accounts, without having to anticipate them in the FOAF vocabulary.
For example, here is a description of an IRC chat account, specific to the Freenode IRC network:

    <foaf:Person>
      <foaf:name>Dan Brickley</foaf:name>
      <foaf:account>
        <foaf:OnlineAccount>
          <rdf:type rdf:resource="http://xmlns.com/foaf/0.1/OnlineChatAccount"/>
          <foaf:accountServiceHomepage
                   rdf:resource="http://www.freenode.net/"/>
          <foaf:accountName>danbri</foaf:accountName>
        </foaf:OnlineAccount>
      </foaf:account>
    </foaf:Person>
    

Note that it may be impolite to carelessly reveal someone else's chat identifier (which might also serve as an indicator of an email address). As with email, there are privacy and anti-SPAM considerations. FOAF does not currently provide a way to represent an obfuscated chat ID (ie. there is no parallel to the mbox / mbox_sha1sum mapping).
In addition to the generic OnlineAccount and OnlineChatAccount mechanisms, FOAF also provides several convenience chat ID properties (jabberID, aimChatID, icqChatID, msnChatID, yahooChatID, skypeID). These serve as a shorthand for some common cases; their use may not always be appropriate.
Χ editorial: We should specify some mappings between the abbreviated and full representations of Jabber, AIM, MSN, ICQ, Yahoo! and MSN chat accounts. This has been done for skypeID. This requires us to identify an appropriate accountServiceHomepage for each. If we wanted to make the OnlineAccount mechanism even more generic, we could invent a relationship that holds between a OnlineAccount instance and a convenience property. To continue the example above, we could describe how Freenode could define a property 'fn:freenodeChatID' corresponding to Freenode online accounts,

foaf:OnlineEcommerceAccount-class::
Χ sDescription: Online E-commerce Account - An online e-commerce account,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_OnlineEcommerceAccount
Χ sStatus: unstable
Χ aSubclass-of: [OnlineAccount]
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: A OnlineEcommerceAccount is a OnlineAccount devoted to buying and/or selling of goods, services etc. Examples include Amazon, eBay, PayPal, thinkgeek, etc,
* McsEngl.foaf/OnlineEcommerceAccount,
* McsEngl.OnlineEcommerceAccount@foaf,

foaf:OnlineGamingAccount-class::
Χ sDescription: Online Gaming Account - An online gaming account,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_OnlineGamingAccount
Χ sStatus: unstable
Χ aSubclass-of: [OnlineAccount]
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: A OnlineGamingAccount is a OnlineAccount devoted to online gaming,
Examples might include EverQuest, Xbox live, Neverwinter Nights, etc., as well as older text-based systems (MOOs, MUDs and suchlike).
* McsEngl.foaf/OnlineGamingAccount,
* McsEngl.OnlineGamingAccount@foaf,

foaf:homepage-property::
Χ sDescription: homepage - A homepage for some thing,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_homepage
Χ sStatus: stable
Χ sDomain: having this property implies being a Thing
Χ sRange: every value of this property is a Document
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/
Χ rdf:type: http://www.w3.org/2002/07/owl#InverseFunctionalProperty

Χ sDescriptionLong: The homepage property relates something to a homepage about it,
Many kinds of things have homepages. FOAF allows a thing to have multiple homepages, but constrains homepage so that there can be only one thing that has any particular homepage.
A 'homepage' in this sense is a public Web document, typically but not necessarily available in HTML format. The page has as a topic the thing whose homepage it is. The homepage is usually controlled, edited or published by the thing whose homepage it is; as such one might look to a homepage for information on its owner from its owner. This works for people, companies, organisations etc.
The homepage property is a sub-property of the more general page property for relating a thing to a page about that thing. See also topic, the inverse of the page property.
* McsEngl.foaf/homepage,
* McsEngl.homepage@foaf,
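
· because homepage is inverse functional, two descriptions that share a homepage value can be inferred to denote the same thing; the Python sketch below (my own naive illustration with invented data; real systems would use an OWL reasoner) shows this "smushing" step:
```
# naive identity reasoning over an inverse functional property
# (assumption: invented example data)
from collections import defaultdict
from rdflib import BNode, Graph, Literal, URIRef
from rdflib.namespace import FOAF

g = Graph()
a, b = BNode(), BNode()
hp = URIRef("http://example.org/alice")
g.add((a, FOAF.homepage, hp))
g.add((a, FOAF.name, Literal("Alice")))
g.add((b, FOAF.homepage, hp))

owners = defaultdict(set)
for s, _, o in g.triples((None, FOAF.homepage, None)):
    owners[o].add(s)
for page, subjects in owners.items():
    if len(subjects) > 1:
        print(page, "identifies one thing described by", len(subjects), "nodes")
```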

foaf:isPrimaryTopicOf-property::
Χ sDescription: is primary topic of - A document that this thing is the primary topic of,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_isPrimaryTopicOf
Χ sStatus: stable
Χ sDomain: having this property implies being a Thing
Χ sRange: every value of this property is a Document
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/
Χ rdf:type: http://www.w3.org/2002/07/owl#InverseFunctionalProperty

Χ sDescriptionLong: The isPrimaryTopicOf property relates something to a document that is mainly about it,
The isPrimaryTopicOf property is inverse functional: for any document that is the value of this property, there is at most one thing in the world that is the primary topic of that document. This is useful, as it allows for data merging, as described in the documentation for its inverse, primaryTopic.
page is a super-property of isPrimaryTopicOf. The change of terminology between the two property names reflects the utility of 'primaryTopic' and its inverse when identifying things. Anything that has an isPrimaryTopicOf relation to some document X, also has a page relationship to it.
Note that homepage, is a sub-property of both page and isPrimaryTopicOf. The awkwardly named isPrimaryTopicOf is less specific, and can be used with any document that is primarily about the thing of interest (ie. not just on homepages).
* McsEngl.foaf/isPrimaryTopicOf,
* McsEngl.isPrimaryTopicOf@foaf,

foaf:knows-property::
Χ sDescription: knows - A person known by this person (indicating some level of reciprocated interaction between the parties),
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_knows
Χ sStatus: stable
Χ sDomain: having this property implies being a Person
Χ sRange: every value of this property is a Person
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: The knows property relates a Person to another Person that he or she knows,
We take a broad view of 'knows', but do require some form of reciprocated interaction (ie. stalkers need not apply). Since social attitudes and conventions on this topic vary greatly between communities, countries and cultures, it is not appropriate for FOAF to be overly-specific here.
If someone knows a person, it would be usual for the relation to be reciprocated. However this doesn't mean that there is any obligation for either party to publish FOAF describing this relationship. A knows relationship does not imply friendship, endorsement, or that a face-to-face meeting has taken place: phone, fax, email, and smoke signals are all perfectly acceptable ways of communicating with people you know.
You probably know hundreds of people, yet might only list a few in your public FOAF file. That's OK. Or you might list them all. It is perfectly fine to have a FOAF file and not list anyone else in it at all. This illustrates the Semantic Web principle of partial description: RDF documents rarely describe the entire picture. There is always more to be said, more information living elsewhere in the Web (or in our heads...).
Since knows is vague by design, it may be surprising that it has uses. Typically these involve combining other RDF properties. For example, an application might look at properties of each weblog that was made by someone you "knows". Or check the newsfeed of the online photo archive for each of these people, to show you recent photos taken by people you know.
To provide additional levels of representation beyond mere 'knows', FOAF applications can do several things.
They can use more precise relationships than knows to relate people to people. The original FOAF design included two of these ('knowsWell','friend') which we removed because they were somewhat awkward to actually use, bringing an inappropriate air of precision to an intrinsically vague concept. Other extensions have been proposed, including Eric Vitiello's Relationship module for FOAF.
In addition to using more specialised inter-personal relationship types (eg rel:acquaintanceOf etc) it is often just as good to use RDF descriptions of the states of affairs which imply particular kinds of relationship. So for example, two people who have the same value for their workplaceHomepage property are typically colleagues. We don't (currently) clutter FOAF up with these extra relationships, but the facts can be written in FOAF nevertheless. Similarly, if there exists a Document that has two people listed as its makers, then they are probably collaborators of some kind. Or if two people appear in 100s of digital photos together, there's a good chance they're friends and/or colleagues.
So FOAF is quite pluralistic in its approach to representing relationships between people. FOAF is built on top of a general purpose machine language for representing relationships (ie. RDF), so is quite capable of representing any kinds of relationship we care to add. The problems are generally social rather than technical; deciding on appropriate ways of describing these interconnections is a subtle art.
Perhaps the most important use of knows is, alongside the rdfs:seeAlso property, to connect FOAF files together. Taken alone, a FOAF file is somewhat dull. But linked in with 1000s of other FOAF files it becomes more interesting, with each FOAF file saying a little more about people, places, documents, things... By mentioning other people (via knows or other relationships), and by providing an rdfs:seeAlso link to their FOAF file, you can make it easy for FOAF indexing tools ('scutters') to find your FOAF and the FOAF of the people you've mentioned. And the FOAF of the people they mention, and so on. This makes it possible to build FOAF aggregators without the need for a centrally managed directory of FOAF files...
* McsEngl.foaf/knows,
* McsEngl.knows@foaf,

foaf:made-property::
Χ sDescription: made - Something that was made by this agent,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_made
Χ sStatus: stable
Χ sDomain: having this property implies being a Agent
Χ sRange: every value of this property is a Thing
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: The made property relates a Agent to something made by it. As such it is an inverse of the maker property, which relates a thing to something that made it. See made for more details on the relationship between these FOAF terms and related Dublin Core vocabulary,
* McsEngl.foaf/made,
* McsEngl.made@foaf,

foaf:maker-property::
Χ sDescription: maker - An agent that made this thing,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_maker
Χ sStatus: stable
Χ sDomain: having this property implies being a Thing
Χ sRange: every value of this property is a Agent
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: The maker property relates something to a Agent that made it. As such it is an inverse of the made property,
The name (or other rdfs:label) of the maker of something can be described as the dc:creator of that thing.
For example, if the thing named by the URI http://danbri.org/ has a maker that is a Person whose name is 'Dan Brickley', we can conclude that http://danbri.org/ has a dc:creator of 'Dan Brickley'.
FOAF descriptions are encouraged to use dc:creator only for simple textual names, and to use maker to indicate creators, rather than risk confusing creators with their names. This follows most Dublin Core usage. See UsingDublinCoreCreator for details.
* McsEngl.foaf/maker,
* McsEngl.maker@foaf,

foaf:mbox-property::
Χ sDescription: personal mailbox - A personal mailbox, ie. an Internet mailbox associated with exactly one owner, the first owner of this mailbox. This is a 'static inverse functional property', in that there is (across time and change) at most one individual that ever has any particular value for foaf:mbox,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_mbox
Χ sStatus: stable
Χ sDomain: having this property implies being a Agent
Χ sRange: every value of this property is a Thing
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/
Χ rdf:type: http://www.w3.org/2002/07/owl#InverseFunctionalProperty

Χ sDescriptionLong: The mbox property is a relationship between the owner of a mailbox and a mailbox. These are typically identified using the mailto: URI scheme (see RFC 2368),
Note that there are many mailboxes (eg. shared ones) which are not the mbox of anyone. Furthermore, a person can have multiple mbox properties.
In FOAF, we often see mbox used as an indirect way of identifying its owner. This works even if the mailbox is itself out of service (eg. 10 years old), since the property is defined in terms of its primary owner, and doesn't require the mailbox to actually be being used for anything.
Many people are wary of sharing information about their mailbox addresses in public. To address such concerns whilst continuing the FOAF convention of indirectly identifying people by referring to widely known properties, FOAF also provides the mbox_sha1sum mechanism, which is a relationship between a person and the value you get from passing a mailbox URI to the SHA1 mathematical function.
* McsEngl.foaf/mbox,
* McsEngl.mbox@foaf,
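
· the mbox_sha1sum value mentioned above is simply the SHA1 hex digest of the full mailto: URI; a short Python sketch with an invented address:
```
# computing a foaf:mbox_sha1sum value (assumption: invented example address)
import hashlib

mbox = "mailto:alice@example.org"
print(hashlib.sha1(mbox.encode("ascii")).hexdigest())
```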

foaf:member-property::
Χ sDescription: member - Indicates a member of a Group
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_member
Χ sStatus: stable
Χ sDomain: having this property implies being a Group
Χ sRange: every value of this property is a Agent
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: The member property relates a Group to a Agent that is a member of that group,
See Group for details and examples.
* McsEngl.foaf/member,
* McsEngl.member@foaf,

foaf:page-property::
Χ sDescription: page - A page or document about this thing,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_page
Χ sStatus: stable
Χ sDomain: having this property implies being a Thing
Χ sRange: every value of this property is a Document
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: The page property relates a thing to a document about that thing,
As such it is an inverse of the topic property, which relates a document to a thing that the document is about.
* McsEngl.foaf/page,
* McsEngl.page@foaf,

foaf:primaryTopic-property::
Χ sDescription: primary topic - The primary topic of some page or document,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_primaryTopic
Χ sStatus: stable
Χ sDomain: having this property implies being a Document
Χ sRange: every value of this property is a Thing
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/
Χ rdf:type: http://www.w3.org/2002/07/owl#FunctionalProperty

Χ sDescriptionLong: The primaryTopic property relates a document to the main thing that the document is about,
The primaryTopic property is functional: for any document it applies to, it can have at most one value. This is useful, as it allows for data merging. In many cases it may be difficult for third parties to determine the primary topic of a document, but in a useful number of cases (eg. descriptions of movies, restaurants, politicians, ...) it should be reasonably obvious. Documents are very often the most authoritative source of information about their own primary topics, although this cannot be guaranteed since documents cannot be assumed to be accurate, honest etc.
It is an inverse of the isPrimaryTopicOf property, which relates a thing to a document primarily about that thing. The choice between these two properties is purely pragmatic. When describing documents, we use the former, primaryTopic, to point to the things they're about. When describing things (people etc.), it is useful to be able to directly cite documents which have those things as their main topic - so we use isPrimaryTopicOf. In this way, Web sites such as Wikipedia or NNDB can provide indirect identification for the things they have descriptions of.
* McsEngl.foaf/primaryTopic,
* McsEngl.primaryTopic@foaf,

foaf:weblog-property::
Χ sDescription: weblog - A weblog of some thing (whether person, group, company etc.),
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_weblog
Χ sStatus: stable
Χ sDomain: having this property implies being a Agent
Χ sRange: every value of this property is a Document
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/
Χ rdf:type: http://www.w3.org/2002/07/owl#InverseFunctionalProperty

Χ sDescriptionLong: The weblog property relates a Agent to a weblog of that agent,
* McsEngl.foaf/weblog,
* McsEngl.weblog@foaf,

foaf:account-property::
Χ sDescription: account - Indicates an account held by this agent,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_account
Χ sStatus: testing
Χ sDomain: having this property implies being a Agent
Χ sRange: every value of this property is a OnlineAccount
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: The account property relates a Agent to an OnlineAccount for which they are the sole account holder. See OnlineAccount for usage details,
* McsEngl.foaf/account,
* McsEngl.account@foaf,

foaf:accountName-property::
Χ sDescription: account name - Indicates the name (identifier) associated with this online account,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_accountName
Χ sStatus: testing
Χ sDomain: having this property implies being a OnlineAccount
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: The accountName property of a OnlineAccount is a textual representation of the account name (unique ID) associated with that account,
* McsEngl.foaf/accountName,
* McsEngl.accountName@foaf,

foaf:accountServiceHomepage-property::
Χ sDescription: account service homepage - Indicates a homepage of the service provided for this online account,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_accountServiceHomepage
Χ sStatus: testing
Χ sDomain: having this property implies being a OnlineAccount
Χ sRange: every value of this property is a Document
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: The accountServiceHomepage property indicates a relationship between a OnlineAccount and the homepage of the supporting service provider,
* McsEngl.foaf/accountServiceHomepage,
* McsEngl.accountServiceHomepage@foaf,

foaf:aimChatID-property::
Χ sDescription: AIM chat ID - An AIM chat ID
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_aimChatID
Χ sStatus: testing
Χ sDomain: having this property implies being a Agent
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/
Χ rdf:type: http://www.w3.org/2002/07/owl#InverseFunctionalProperty

Χ sDescriptionLong: The aimChatID property relates a Agent to a textual identifier ('screenname') assigned to them in the AOL Instant Messenger (AIM) system. See AOL's AIM site for more details of AIM and AIM screennames. The iChat tools from Apple also make use of AIM identifiers,
See OnlineChatAccount (and OnlineAccount) for a more general (and verbose) mechanism for describing IM and chat accounts.
* McsEngl.foaf/aimChatID,
* McsEngl.aimChatID@foaf,

foaf:based_near-property::
Χ sDescription: based near - A location that something is based near, for some broadly human notion of near,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_based_near
Χ sStatus: testing
Χ sDomain: having this property implies being a Spatial Thing
Χ sRange: every value of this property is a Spatial Thing
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: The based_near relationship relates two "spatial things" (anything that can be somewhere), the latter typically described using the geo:lat / geo:long geo-positioning vocabulary (See GeoInfo in the W3C semweb wiki for details). This allows us to describe the typical latitude and longitude of, say, a Person (people are spatial things - they can be places) without implying that a precise location has been given,
We do not say much about what 'near' means in this context; it is a 'rough and ready' concept. For a more precise treatment, see GeoOnion vocab design discussions, which are aiming to produce a more sophisticated vocabulary for such purposes.
FOAF files often make use of the contact:nearestAirport property. This illustrates the distinction between FOAF documents (which may make claims using any RDF vocabulary) and the core FOAF vocabulary defined by this specification. For further reading on the use of nearestAirport see UsingContactNearestAirport in the FOAF wiki.
* McsEngl.foaf/based_near,
* McsEngl.based_near@foaf,

foaf:currentProject-property::
Χ sDescription: current project - A current project this person works on,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_currentProject
Χ sStatus: testing
Χ sDomain: having this property implies being a Person
Χ sRange: every value of this property is a Thing
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: A currentProject relates a Person to a Document indicating some collaborative or individual undertaking. This relationship indicates that the Person has some active role in the project, such as development, coordination, or support,
When a Person is no longer involved with a project, or perhaps is inactive for some time, the relationship becomes a pastProject.
If the Person has stopped working on a project because it has been completed (successfully or otherwise), pastProject is applicable. In general, currentProject is used to indicate someone's current efforts (and implied interests, concerns etc.), while pastProject describes what they've previously been doing.
Χ editorial: Note that this property requires further work. There has been confusion about whether it points to a thing (eg. something you've made; a homepage for a project, ie. a Document) or to instances of the class Project, which might themselves have a homepage. In practice, it seems to have been used in a similar way to interest, referencing homepages of ongoing projects,
* McsEngl.foaf/currentProject,
* McsEngl.currentProject@foaf,

foaf:depiction-property::
Χ sDescription: depiction - A depiction of some thing,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_depiction
Χ sStatus: testing
Χ sDomain: having this property implies being a Thing
Χ sRange: every value of this property is a Image
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: The depiction property is a relationship between a thing and an Image that depicts it. As such it is an inverse of the depicts relationship,
A common use of depiction (and depicts) is to indicate the contents of a digital image, for example the people or objects represented in an online photo gallery.
Extensions to this basic idea include 'Co-Depiction' (social networks as evidenced in photos), as well as richer photo metadata through the mechanism of using SVG paths to indicate the regions of an image which depict some particular thing. See 'Annotating Images With SVG' for tools and details.
The basic notion of 'depiction' could also be extended to deal with multimedia content (video clips, audio), or refined to deal with corner cases, such as pictures of pictures etc.
The depiction property is a super-property of the more specific property img, which is used more sparingly. You stand in a depiction relation to any Image that depicts you, whereas img is typically used to indicate a few images that are particularly representative.
* McsEngl.foaf/depiction,
* McsEngl.depiction@foaf,

foaf:depicts-property::
Χ sDescription: depicts - A thing depicted in this representation,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_depicts
Χ sStatus: testing
Χ sDomain: having this property implies being a Image
Χ sRange: every value of this property is a Thing
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: The depicts property is a relationship between a Image and something that the image depicts. As such it is an inverse of the depiction relationship. See depiction for further notes,
* McsEngl.foaf/depicts,
* McsEngl.depicts@foaf,

foaf:familyName-property::
Χ sDescription: familyName - The family name of some person,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_familyName
Χ sStatus: testing
Χ sDomain: having this property implies being a Person
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: The familyName property is provided (alongside givenName) for use when describing parts of people's names. Although these concepts do not capture the full range of personal naming styles found world-wide, they are commonly used and have some value,
There is also a simple name property.
Support is also provided for the more archaic and culturally varying terminology of firstName and lastName.
See the issue tracker for design discussions, status and ongoing work on rationalising the FOAF naming machinery.
* McsEngl.foaf/familyName,
* McsEngl.familyName@foaf,

foaf:firstName-property::
Χ sDescription: firstName - The first name of a person,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_firstName
Χ sStatus: testing
Χ sDomain: having this property implies being a Person
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: The firstName property is provided (alongside lastName) as a mechanism to support legacy data that cannot be easily interpreted in terms of the (otherwise preferred) familyName and givenName properties. The concepts of 'first' and 'last' names do not work well across cultural and linguistic boundaries; however they are widely used in addressbooks and databases,
See the issue tracker for design discussions, status and ongoing work on rationalising the FOAF naming machinery.
There is also a simple name property.
* McsEngl.foaf/firstName,
* McsEngl.firstName@foaf,

foaf:focus-property::
Χ sDescription: focus - The underlying or 'focal' entity associated with some SKOS-described concept,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_focus
Χ sStatus: testing
Χ sDomain: having this property implies being a Concept
Χ sRange: every value of this property is a Thing
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: The focus property relates a conceptualisation of something to the thing itself. Specifically, it is designed for use with W3C's SKOS vocabulary, to help indicate specific individual things (typically people, places, artifacts) that are mentioned in different SKOS schemes (eg. thesauri),
W3C SKOS is based around collections of linked 'concepts', which indicate topics, subject areas and categories. In SKOS, properties of a skos:Concept are properties of the conceptualization (see 2005 discussion for details); for example administrative and record-keeping metadata. Two schemes might have an entry for the same individual; the foaf:focus property can be used to indicate the thing in the world that they both focus on. Many SKOS concepts don't work this way; broad topical areas and subject categories don't typically correspond to some particular entity. However, in cases when they do, it is useful to link both subject-oriented and thing-oriented information via foaf:focus.
FOAF's focus property works alongside its other topic-oriented constructs: topic, primaryTopic are used when talking about the topical emphasis of a document. The notion of primaryTopic is particularly important in FOAF as it provides an indirect mechanism for identifying things indirectly. A similar approach is explored by the TDB URI scheme. FOAF includes topic-oriented functionality to address its original goals of linking people to information, as well as to other people, through the use of linked information.
* McsEngl.foaf/focus,
* McsEngl.focus@foaf,

foaf:gender-property::
Χ sDescription: gender - The gender of this Agent (typically but not necessarily 'male' or 'female'),
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_gender
Χ sStatus: testing
Χ sDomain: having this property implies being a Agent
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/
Χ rdf:type: http://www.w3.org/2002/07/owl#FunctionalProperty

Χ sDescriptionLong: The gender property relates a Agent (typically a Person) to a string representing its gender. In most cases the value will be the string 'female' or 'male' (in lowercase without surrounding quotes or spaces). Like all FOAF properties, there is in general no requirement to use gender in any particular document or description. Values other than 'male' and 'female' may be used, but are not enumerated here. The gender mechanism is not intended to capture the full variety of biological, social and sexual concepts associated with the word 'gender',
Anything that has a gender property will be some kind of Agent. However there are kinds of Agent to which the concept of gender isn't applicable (eg. a Group). FOAF does not currently include a class corresponding directly to "the type of thing that has a gender". At any point in time, a Agent has at most one value for gender. FOAF does not treat gender as a static property; the same individual may have different values for this property at different times.
Note that FOAF's notion of gender isn't defined biologically or anatomically - this would be tricky since we have a broad notion that applies to all Agents (including robots - eg. Bender from Futurama is 'male'). As stressed above, FOAF's notion of gender doesn't attempt to encompass the full range of concepts associated with human gender, biology and sexuality. As such it is a (perhaps awkward) compromise between the clinical and the social/psychological. In general, a person will be the best authority on their gender. Feedback on this design is particularly welcome (via the FOAF mailing list, foaf-dev). We have tried to be respectful of diversity without attempting to catalogue or enumerate that diversity.
This may also be a good point for a periodic reminder: as with all FOAF properties, documents that use 'gender' will on occasion be inaccurate, misleading or outright false. FOAF, like all open means of communication, supports lying. Application authors using FOAF data should always be cautious in their presentation of unverified information, but be particularly sensitive to issues and risks surrounding sex and gender (including privacy and personal safety concerns). Designers of FOAF-based user interfaces should be careful to allow users to omit gender when describing themselves and others, and to allow at least for values other than 'male' and 'female' as options. Users of information conveyed via FOAF (as via information conveyed through mobile phone text messages, email, Internet chat, HTML pages etc.) should be skeptical of unverified information.
* McsEngl.foaf/gender,
* McsEngl.gender@foaf,

foaf:givenName-property::
Χ sDescription: Given name - The given name of some person,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_givenName
Χ sStatus: testing
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: The givenName property is provided (alongside familyName) for use when describing parts of people's names. Although these concepts do not capture the full range of personal naming styles found world-wide, they are commonly used and have some value,
There is also a simple name property.
Support is also provided for the more archaic and culturally varying terminology of firstName and lastName.
See the issue tracker for design discussions, status and ongoing work on rationalising the FOAF naming machinery.
* McsEngl.foaf/givenName,
* McsEngl.givenName@foaf,

foaf:icqChatID-property::
Χ sDescription: ICQ chat ID - An ICQ chat ID
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_icqChatID
Χ sStatus: testing
Χ sDomain: having this property implies being an Agent
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/
Χ rdf:type: http://www.w3.org/2002/07/owl#InverseFunctionalProperty

Χ sDescriptionLong: The icqChatID property relates an Agent to a textual identifier assigned to them in the ICQ Chat system. See the icq chat site for more details of the 'icq' service. Their "What is ICQ?" document provides a basic overview, while their "About Us" page notes that ICQ has been acquired by AOL. Despite the relationship with AOL, ICQ is at the time of writing maintained as a separate identity from the AIM brand (see aimChatID),
See OnlineChatAccount (and OnlineAccount) for a more general (and verbose) mechanism for describing IM and chat accounts.
* McsEngl.foaf/icqChatID,
* McsEngl.icqChatID@foaf,

foaf:img-property::
Χ sDescription: image - An image that can be used to represent some thing (ie. those depictions which are particularly representative of something, eg. one's photo on a homepage),
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_img
Χ sStatus: testing
Χ sDomain: having this property implies being a Person
Χ sRange: every value of this property is an Image
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: The img property relates a Person to an Image that represents them. Unlike its super-property depiction, we only use img when an image is particularly representative of some person. The analogy is with the image(s) that might appear on someone's homepage, rather than one that merely happens to appear somewhere in their photo album,
Unlike the more general depiction property (and its inverse, depicts), the img property is only used with representations of people (ie. instances of Person). So you can't use it to find pictures of cats, dogs etc. The basic idea is to have a term whose use is more restricted than depiction so we can have a useful way of picking out a reasonable image to represent someone. FOAF defines img as a sub-property of depiction, which means that the latter relationship is implied whenever two things are related by the former.
Note that img does not have any restrictions on the dimensions, colour depth, format etc of the Image it references.
Terminology: note that img is a property (ie. relationship), and that Image is a similarly named class (ie. category, a type of thing). It might have been more helpful to call img 'mugshot' or similar; instead it is named by analogy to the HTML IMG element.
* McsEngl.foaf/img,
* McsEngl.img@foaf,

foaf:interest-property::
Χ sDescription: interest - A page about a topic of interest to this person,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_interest
Χ sStatus: testing
Χ sDomain: having this property implies being an Agent
Χ sRange: every value of this property is a Document
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: The interest property represents an interest of an Agent, through indicating a Document whose topic(s) broadly characterises that interest,
example: For example, we might claim that a person or group has an interest in RDF by saying they stand in an interest relationship to the RDF home page. Loosely, such RDF would be saying "this agent is interested in the topic of this page".
example: Uses of interest include a variety of filtering and resource discovery applications. It could be used, for example, to help find answers to questions such as "Find me members of this organisation with an interest in XML who have also contributed to CPAN".
This approach to characterising interests is intended to complement other mechanisms (such as the use of controlled vocabulary). It allows us to use a widely known set of unique identifiers (Web page URIs) with minimal pre-coordination. Since URIs have a controlled syntax, this makes data merging much easier than the use of free-text characterisations of interest.
Note that interest does not imply expertise, and that this FOAF term provides no support for characterising levels of interest: passing fads and lifelong quests are both examples of someone's interest. Describing interests in full is a complex undertaking; interest provides one basic component of FOAF's approach to these problems.
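· a sketch of the RDF home page example above, using Python's rdflib; the agent URI is hypothetical:
    from rdflib import Graph, Namespace, URIRef
    from rdflib.namespace import FOAF, RDF

    EX = Namespace("http://example.org/")   # hypothetical namespace
    g = Graph()
    agent = EX["alice"]
    g.add((agent, RDF.type, FOAF.Person))
    # "this agent is interested in the topic of this page"
    g.add((agent, FOAF.interest, URIRef("http://www.w3.org/RDF/")))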
* McsEngl.foaf/interest,
* McsEngl.interest@foaf,

foaf:jabberID-property::
Χ sDescription: jabber ID - A jabber ID for something,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_jabberID
Χ sStatus: testing
Χ sDomain: having this property implies being an Agent
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/
Χ rdf:type: http://www.w3.org/2002/07/owl#InverseFunctionalProperty

Χ sDescriptionLong: The jabberID property relates an Agent to a textual identifier assigned to them in the Jabber messaging system. See the Jabber site for more information about the Jabber protocols and tools,
Jabber, unlike several other online messaging systems, is based on an open, publicly documented protocol specification, and has a variety of open source implementations. Jabber IDs can be assigned to a variety of kinds of thing, including software 'bots', chat rooms etc. For the purposes of FOAF, these are all considered to be kinds of Agent (ie. things that do stuff). The uses of Jabber go beyond simple IM chat applications. The jabberID property is provided as a basic hook to help support RDF description of Jabber users and services.
See OnlineChatAccount (and OnlineAccount) for a more general (and verbose) mechanism for describing IM and chat accounts.
* McsEngl.foaf/jabberID,
* McsEngl.jabberID@foaf,

foaf:lastName-property::
Χ sDescription: lastName - The last name of a person,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_lastName
Χ sStatus: testing
Χ sDomain: having this property implies being a Person
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: The lastName property is provided (alongside firstName) as a mechanism to support legacy data that cannot be easily interpreted in terms of the (otherwise preferred) familyName and givenName properties. The concepts of 'first' and 'last' names do not work well across cultural and linguistic boundaries; however they are widely used in addressbooks and databases,
See the issue tracker for design discussions, status and ongoing work on rationalising the FOAF naming machinery.
There is also a simple name property.
* McsEngl.foaf/lastName,
* McsEngl.lastName@foaf,

foaf:mbox_sha1sum-property::
Χ sDescription: sha1sum of a personal mailbox URI name - The sha1sum of the URI of an Internet mailbox associated with exactly one owner, the first owner of the mailbox,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_mbox_sha1sum
Χ sStatus: testing
Χ sDomain: having this property implies being an Agent
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/
Χ rdf:type: http://www.w3.org/2002/07/owl#InverseFunctionalProperty

Χ sDescriptionLong: A mbox_sha1sum of a Person is a textual representation of the result of applying the SHA1 mathematical function to a 'mailto:' identifier (URI) for an Internet mailbox that they stand in a mbox relationship to,
In other words, if you have a mailbox (mbox) but don't want to reveal its address, you can take that address and generate a mbox_sha1sum representation of it. Just as a mbox can be used as an indirect identifier for its owner, we can do the same with mbox_sha1sum since there is only one Person with any particular value for that property.
Many FOAF tools use mbox_sha1sum in preference to exposing mailbox information. This is usually for privacy and SPAM-avoidance reasons. Other relevant techniques include the use of PGP encryption (see Edd Dumbill's documentation) and the use of FOAF-based whitelists for mail filtering.
Code examples for SHA1 in C#, Java, PHP, Perl and Python can be found in Sam Ruby's weblog entry. Remember to include the 'mailto:' prefix, but no trailing whitespace, when computing a mbox_sha1sum property.
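· a minimal Python sketch of the computation described above; the address is hypothetical:
    import hashlib

    def mbox_sha1sum(email: str) -> str:
        # include the 'mailto:' prefix, with no surrounding whitespace
        return hashlib.sha1(("mailto:" + email.strip()).encode("utf-8")).hexdigest()

    print(mbox_sha1sum("alice@example.org"))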
* McsEngl.foaf/mbox_sha1sum,
* McsEngl.mbox_sha1sum@foaf,

foaf:msnChatID-property::
Χ sDescription: MSN chat ID - An MSN chat ID
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_msnChatID
Χ sStatus: testing
Χ sDomain: having this property implies being an Agent
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/
Χ rdf:type: http://www.w3.org/2002/07/owl#InverseFunctionalProperty

Χ sDescriptionLong: The msnChatID property relates an Agent to a textual identifier assigned to them in the Microsoft online chat system originally known as 'MSN', and now Windows Live Messenger. See the Microsoft Messenger and Windows Live ID sites for more details,
See OnlineChatAccount (and OnlineAccount) for a more general (and verbose) mechanism for describing IM and chat accounts.
* McsEngl.foaf/msnChatID,
* McsEngl.msnChatID@foaf,

foaf:myersBriggs-property::
Χ sDescription: myersBriggs - A Myers Briggs (MBTI) personality classification,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_myersBriggs
Χ sStatus: testing
Χ sDomain: having this property implies being a Person
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: The myersBriggs property represents the Myers Briggs (MBTI) approach to personality taxonomy. It is included in FOAF as an example of a property that takes certain constrained values, and to give some additional detail to the FOAF files of those who choose to include it. The myersBriggs property applies only to the Person class; wherever you see it, you can infer it is being applied to a person,
The myersBriggs property is interesting in that it illustrates how FOAF can serve as a carrier for various kinds of information, without necessarily being committed to any associated worldview. Not everyone will find myersBriggs (or star signs, or blood types, or the four humours) a useful perspective on human behaviour and personality. The inclusion of a Myers Briggs property doesn't indicate that FOAF endorses the underlying theory, any more than the existence of weblog is an endorsement of soapboxes.
The values for myersBriggs are the following 16 4-letter textual codes: ESTJ, INFP, ESFP, INTJ, ESFJ, INTP, ENFP, ISTJ, ESTP, INFJ, ENFJ, ISTP, ENTJ, ISFP, ENTP, ISFJ. If multiple of these properties are applicable, they are represented by applying multiple properties to a person.
For further reading on MBTI, see various online sources (eg. this article). There are various online sites which offer quiz-based tools for determining a person's MBTI classification. The owners of the MBTI trademark have probably not approved of these.
This FOAF property suggests some interesting uses, some of which could perhaps be used to test the claims made by proponents of the MBTI (eg. an analysis of weblog postings filtered by MBTI type). However it should be noted that MBTI FOAF descriptions are self-selecting; MBTI categories may not be uniformly appealing to the people they describe. Further, there is probably a degree of cultural specificity implicit in the assumptions made by many questionnaire-based MBTI tools; the MBTI system may not make sense in cultural settings beyond those it was created for.
See also Cory Caplinger's summary table or the RDFWeb article, FOAF Myers Briggs addition for further background and examples.
Note: Myers Briggs Type Indicator and MBTI are registered trademarks of Consulting Psychologists Press Inc. Oxford Psychologists Press Ltd has exclusive rights to the trademark in the UK.
* McsEngl.foaf/myersBriggs,
* McsEngl.myersBriggs@foaf,

foaf:name-property::
Χ sDescription: name - A name for some thing,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_name
Χ sStatus: testing
Χ sDomain: having this property implies being a Thing
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: The name of something is a simple textual string,
XML language tagging may be used to indicate the language of the name. For example:
<foaf:name xml:lang="en">Dan Brickley</foaf:name>
FOAF provides some other naming constructs. While foaf:name does not explicitly represent name substructure (family vs given etc.) it does provide a basic level of interoperability. See the issue tracker for status of work on this issue.
The name property, like all RDF properties with a range of rdfs:Literal, may be used with XMLLiteral datatyped values (multiple names are acceptable whether they are in the same language or not). XMLLiteral usage is not yet widely adopted. Feedback on this aspect of the FOAF design is particularly welcomed.
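· a sketch of multiple foaf:name values with language tags, using Python's rdflib; the person URI is hypothetical:
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import FOAF

    EX = Namespace("http://example.org/")   # hypothetical namespace
    g = Graph()
    person = EX["danbri"]
    g.add((person, FOAF.name, Literal("Dan Brickley", lang="en")))
    # multiple names are acceptable, in the same language or not:
    g.add((person, FOAF.name, Literal("Daniel Brickley", lang="en")))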
* McsEngl.foaf/name,
* McsEngl.name@foaf,

foaf:nick-property::
Χ sDescription: nickname - A short informal nickname characterising an agent (includes login identifiers, IRC and other chat nicknames),
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_nick
Χ sStatus: testing
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: The nick property relates a Person to a short (often abbreviated) nickname, such as those used in IRC chat, online accounts, and computer logins,
This property is necessarily vague, because it does not indicate any particular naming control authority, and so cannot distinguish a person's login from their (possibly various) IRC nicknames or other similar identifiers. However it has some utility, since many people use the same string (or slight variants) across a variety of such environments.
For specific controlled sets of names (relating primarily to Instant Messenger accounts), FOAF provides some convenience properties: jabberID, aimChatID, msnChatID and icqChatID. Beyond this, the problem of representing such accounts is not peculiar to Instant Messaging, and it is not scalable to attempt to enumerate each naming database as a distinct FOAF property. The OnlineAccount term (and supporting vocabulary) are provided as a more verbose and more expressive generalisation of these properties.
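· a sketch of the more verbose OnlineAccount pattern mentioned above, using Python's rdflib; the person URI and chat service are hypothetical:
    from rdflib import BNode, Graph, Literal, Namespace, URIRef
    from rdflib.namespace import FOAF, RDF

    EX = Namespace("http://example.org/")   # hypothetical namespace
    g = Graph()
    person = EX["alice"]
    account = BNode()
    g.add((person, FOAF.account, account))
    g.add((account, RDF.type, FOAF.OnlineChatAccount))
    g.add((account, FOAF.accountServiceHomepage, URIRef("https://chat.example.com/")))
    g.add((account, FOAF.accountName, Literal("alice42")))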
* McsEngl.foaf/nick,
* McsEngl.nick@foaf,

foaf:openid-property::
Χ sDescription: openid - An OpenID for an Agent,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_openid
Χ sStatus: testing
Χ sDomain: having this property implies being an Agent
Χ sRange: every value of this property is a Document
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/
Χ rdf:type: http://www.w3.org/2002/07/owl#InverseFunctionalProperty

Χ sDescriptionLong: An openid is a property of an Agent that associates it with a document that can be used as an indirect identifier in the manner of the OpenID "Identity URL". As the OpenID 1.1 specification notes, OpenID itself "does not provide any mechanism to exchange profile information, though Consumers of an Identity can learn more about an End User from any public, semantically interesting documents linked thereunder (FOAF, RSS, Atom, vCARD, etc.)". In this way, FOAF and OpenID complement each other; neither provides a stand-alone approach to online "trust", but combined they can address interesting parts of this larger problem space,
The openid property is "inverse functional", meaning that anything that is the foaf:openid of something is the openid of no more than one thing. FOAF is agnostic as to whether there are (according to the relevant OpenID specifications) OpenID URIs that are equally associated with multiple Agents. FOAF offers sub-classes of Agent, ie. Organization and Group, that allow for such scenarios to be consistent with the notion that any foaf:openid is the foaf:openid of just one Agent.
FOAF does not mandate any particular URI scheme for use as openid values. The OpenID 1.1 specification includes a delegation model that is often used to allow a weblog or homepage document to also serve in OpenID authentication via "link rel" HTML markup. This deployment model provides a convenient connection to FOAF, since a similar technique is used for FOAF autodiscovery in HTML. A single document can, for example, serve both as a homepage and an OpenID identity URL.
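· a sketch of the openid relationship, using Python's rdflib; all URIs are hypothetical, and the same document serves as both homepage and OpenID identity URL:
    from rdflib import Graph, Namespace, URIRef
    from rdflib.namespace import FOAF, RDF

    EX = Namespace("http://example.org/")   # hypothetical namespace
    g = Graph()
    me = EX["me"]
    g.add((me, RDF.type, FOAF.Person))
    blog = URIRef("http://example.org/blog/")
    g.add((me, FOAF.homepage, blog))
    # inverse functional: this document is the openid of at most one Agent
    g.add((me, FOAF.openid, blog))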
* McsEngl.foaf/openid,
* McsEngl.openid@foaf,

foaf:pastProject-property::
Χ sDescription: past project - A project this person has previously worked on,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_pastProject
Χ sStatus: testing
Χ sDomain: having this property implies being a Person
Χ sRange: every value of this property is a Thing
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: After a Person is no longer involved with a currentProject, or has been inactive for some time, a pastProject relationship can be used. This indicates that the Person was involved with the described project at one point,
If the Person has stopped working on a project because it has been completed (successfully or otherwise), pastProject is applicable. In general, currentProject is used to indicate someone's current efforts (and implied interests, concerns etc.), while pastProject describes what they've previously been doing.
* McsEngl.foaf/pastProject,
* McsEngl.pastProject@foaf,

foaf:phone-property::
Χ sDescription: phone - A phone, specified using fully qualified tel: URI scheme (refs: http://www.w3.org/Addressing/schemes.html#tel),
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_phone
Χ sStatus: testing
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: The phone of something is a phone, typically identified using the tel: URI scheme,
* McsEngl.foaf/phone,
* McsEngl.phone@foaf,

foaf:plan-property::
Χ sDescription: plan - A .plan comment, in the tradition of finger and '.plan' files,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_plan
Χ sStatus: testing
Χ sDomain: having this property implies being a Person
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: The plan property provides a space for a Person to hold some arbitrary content that would appear in a traditional '.plan' file. The plan file was stored in a user's home directory on a UNIX machine, and displayed to people when the user was queried with the finger utility,
A plan file could contain anything. Typical uses included brief comments, thoughts, or remarks on what a person had been doing lately. Plan files were also prone to being witty or simply obscure. Others may be more creative, writing any number of seemingly random compositions in their plan file for people to stumble upon.
See History of the Finger Protocol by Rajiv Shah for more on this piece of Internet history. The geekcode property may also be of interest.
* McsEngl.foaf/plan,
* McsEngl.plan@foaf,

foaf:publications-property::
Χ sDescription: publications - A link to the publications of this person,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_publications
Χ sStatus: testing
Χ sDomain: having this property implies being a Person
Χ sRange: every value of this property is a Document
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: The publications property indicates a Document listing (primarily in human-readable form) some publications associated with the Person. Such documents are typically published alongside one's homepage,
* McsEngl.foaf/publications,
* McsEngl.publications@foaf,

foaf:schoolHomepage-property::
Χ sDescription: schoolHomepage - A homepage of a school attended by the person,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_schoolHomepage
Χ sStatus: testing
Χ sDomain: having this property implies being a Person
Χ sRange: every value of this property is a Document
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: The schoolHomepage property relates a Person to a Document that is the homepage of a School that the person attended,
FOAF does not (currently) define a class for 'School' (if it did, it would probably be as a sub-class of Organization). The original application area for schoolHomepage was for 'schools' in the British-English sense; however American-English usage has dominated, and it is now perfectly reasonable to describe Universities, Colleges and post-graduate study using schoolHomepage.
This very basic facility provides a basis for a low-cost, decentralised approach to classmate-reunion and suchlike. Instead of requiring a central database, we can use FOAF to express claims such as 'I studied here' simply by mentioning a school's homepage within FOAF files. Given the homepage of a school, it is easy for FOAF aggregators to look up this property in search of people who attended that school.
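· a sketch of both sides of this pattern, using Python's rdflib; all URIs are hypothetical: people claim 'I studied here', and an aggregator looks the property up:
    from rdflib import Graph, Namespace, URIRef
    from rdflib.namespace import FOAF

    EX = Namespace("http://example.org/")            # hypothetical namespace
    g = Graph()
    school = URIRef("https://school.example.edu/")   # hypothetical school homepage
    g.add((EX["alice"], FOAF.schoolHomepage, school))
    g.add((EX["bob"], FOAF.schoolHomepage, school))
    # aggregator side: find people who attended this school
    classmates = list(g.subjects(FOAF.schoolHomepage, school))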
* McsEngl.foaf/schoolHomepage,
* McsEngl.schoolHomepage@foaf,

foaf:skypeID-property::
Χ sDescription: Skype ID - A Skype ID
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_skypeID
Χ sStatus: testing
Χ sDomain: having this property implies being an Agent
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: The skypeID property relates an Agent to an account name of a Skype account of theirs,
See OnlineChatAccount (and OnlineAccount) for a more general (and verbose) mechanism for describing IM and chat accounts.
* McsEngl.foaf/skypeID,
* McsEngl.skypeID@foaf,

foaf:thumbnail-property::
Χ sDescription: thumbnail - A derived thumbnail image,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_thumbnail
Χ sStatus: testing
Χ sDomain: having this property implies being an Image
Χ sRange: every value of this property is an Image
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: The thumbnail property is a relationship between a full-size Image and a smaller, representative Image that has been derived from it,
It is typical in FOAF to express img and depiction relationships in terms of the larger, 'main' (in some sense) image, rather than its thumbnail(s). A thumbnail might be clipped or otherwise reduced such that it does not depict everything that the full image depicts. Therefore FOAF does not specify that a thumbnail depicts everything that the image it is derived from depicts. However, FOAF does expect that anything depicted in the thumbnail will also be depicted in the source image.
A thumbnail is typically small enough that it can be loaded and viewed quickly before a viewer decides to download the larger version. They are often used in online photo gallery applications.
* McsEngl.foaf/thumbnail,
* McsEngl.thumbnail@foaf,

foaf:tipjar-property::
Χ sDescription: tipjar - A tipjar document for this agent, describing means for payment and reward,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_tipjar
Χ sStatus: testing
Χ sDomain: having this property implies being an Agent
Χ sRange: every value of this property is a Document
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: The tipjar property relates an Agent to a Document that describes some mechanisms for paying or otherwise rewarding that agent,
The tipjar property was created following discussions about simple, lightweight mechanisms that could be used to encourage rewards and payment for content exchanged online. An agent's tipjar page(s) could describe informal ("Send me a postcard!", "here's my book, music and movie wishlist") or formal (machine-readable micropayment information) information about how that agent can be paid or rewarded. The reward is not associated with any particular action or content from the agent concerned. A link to a service such as PayPal is the sort of thing we might expect to find in a tipjar document.
Note that the value of a tipjar property is just a document (which can include anchors into HTML pages). We expect, but do not currently specify, that this will evolve into a hook for finding more machine-readable information to support payments and rewards. The OnlineAccount machinery is also relevant, although the information requirements for automating payments are not currently clear.
* McsEngl.foaf/tipjar,
* McsEngl.tipjar@foaf,

foaf:title-property::
Χ sDescription: title - Title (Mr, Mrs, Ms, Dr. etc)
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_title
Χ sStatus: testing
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong:
Χ editorial: This property is a candidate for deprecation in favour of 'honorificPrefix' following Portable Contacts usage. See the FOAF Issue Tracker,
The appropriate values for title are not formally constrained, and will vary across community and context. Values such as 'Mr', 'Mrs', 'Ms', 'Dr' etc. are expected.
* McsEngl.foaf/title,
* McsEngl.title@foaf,

foaf:topic-property::
Χ sDescription: topic - A topic of some page or document,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_topic
Χ sStatus: testing
Χ sDomain: having this property implies being a Document
Χ sRange: every value of this property is a Thing
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: The topic property relates a document to a thing that the document is about,
As such it is an inverse of the page property, which relates a thing to a document about that thing.
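· a small sketch of the two directions, using Python's rdflib; all URIs are hypothetical:
    from rdflib import Graph, Namespace, URIRef
    from rdflib.namespace import FOAF

    EX = Namespace("http://example.org/")               # hypothetical namespace
    g = Graph()
    doc = URIRef("http://example.org/rdf-intro.html")   # hypothetical document
    thing = EX["RDF"]
    g.add((doc, FOAF.topic, thing))   # document -> thing it is about
    g.add((thing, FOAF.page, doc))    # thing -> document about it (the inverse)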
* McsEngl.foaf/topic,
* McsEngl.topic@foaf,

foaf:topic_interest-property::
Χ sDescription: topic_interest - A thing of interest to this person,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_topic_interest
Χ sStatus: testing
Χ sDomain: having this property implies being an Agent
Χ sRange: every value of this property is a Thing
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: The topic_interest property links an Agent to a thing that they're interested in. Unlike topic it is not indirected through a document, but links the thing directly,
* McsEngl.foaf/topic_interest,
* McsEngl.topic_interest@foaf,

foaf:workInfoHomepage-property::
Χ sDescription: work info homepage - A work info homepage of some person; a page about their work for some organization,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_workInfoHomepage
Χ sStatus: testing
Χ sDomain: having this property implies being a Person
Χ sRange: every value of this property is a Document
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: The workInfoHomepage of a Person is a Document that describes their work. It is generally (but not necessarily) a different document from their homepage, and from any workplaceHomepage(s) they may have,
The purpose of this property is to distinguish those pages you often see, which describe someone's professional role within an organisation or project. These aren't really homepages, although they share some characteristics.
* McsEngl.foaf/workInfoHomepage,
* McsEngl.workInfoHomepage@foaf,

foaf:workplaceHomepage-property::
Χ sDescription: workplace homepage - A workplace homepage of some person; the homepage of an organization they work for,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_workplaceHomepage
Χ sStatus: testing
Χ sDomain: having this property implies being a Person
Χ sRange: every value of this property is a Document
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/
Χ sDescriptionLong: The workplaceHomepage of a Person is a Document that is the homepage of an Organization that they work for,
By directly relating people to the homepages of their workplace, we have a simple convention that takes advantage of a set of widely known identifiers, while taking care not to confuse the things those identifiers identify (ie. organizational homepages) with the actual organizations those homepages describe.
For example, Dan Brickley works at W3C. Dan is a Person with a homepage of http://danbri.org/; W3C is an Organization with a homepage of http://www.w3.org/. This allows us to say that Dan has a workplaceHomepage of http://www.w3.org/.
    <foaf:Person>
     <foaf:name>Dan Brickley</foaf:name>
     <foaf:workplaceHomepage rdf:resource="http://www.w3.org/"/>
    </foaf:Person>
Note that several other FOAF properties work this way; schoolHomepage is the most similar. In general, FOAF often indirectly identifies things via Web page identifiers where possible, since these identifiers are widely used and known. FOAF does not currently have a term for the name of the relation (eg. "workplace") that holds between a Person and an Organization that they work for.
* McsEngl.foaf/workplaceHomepage,
* McsEngl.workplaceHomepage@foaf,

foaf:yahooChatID-property::
Χ sDescription: Yahoo chat ID - A Yahoo chat ID
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_yahooChatID
Χ sStatus: testing
Χ sDomain: having this property implies being an Agent
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/
Χ rdf:type: http://www.w3.org/2002/07/owl#InverseFunctionalProperty

Χ sDescriptionLong: The yahooChatID property relates an Agent to a textual identifier assigned to them in the Yahoo online Chat system. See the Yahoo! Chat site for more details of their service. Yahoo chat IDs are also used across several other Yahoo services, including email and Yahoo! Groups,
See OnlineChatAccount (and OnlineAccount) for a more general (and verbose) mechanism for describing IM and chat accounts.
* McsEngl.foaf/yahooChatID,
* McsEngl.yahooChatID@foaf,

foaf:age-property::
Χ sDescription: age - The age in years of some agent,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_age
Χ sStatus: unstable
Χ sDomain: having this property implies being an Agent
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/
Χ rdf:type: http://www.w3.org/2002/07/owl#FunctionalProperty

Χ sDescriptionLong: The age property is a relationship between an Agent and an integer string representing their age in years. See also birthday,
* McsEngl.foaf/age,
* McsEngl.age@foaf,

foaf:birthday-property::
Χ sDescription: birthday - The birthday of this Agent, represented in mm-dd string form, eg. '12-31',
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_birthday
Χ sStatus: unstable
Χ sDomain: having this property implies being an Agent
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/
Χ rdf:type: http://www.w3.org/2002/07/owl#FunctionalProperty

Χ sDescriptionLong: The birthday property is a relationship between an Agent and a string representing the month and day in which they were born (Gregorian calendar). See BirthdayIssue for details of related properties that can be used to describe such things in more flexible ways,
See also age.
* McsEngl.foaf/birthday,
* McsEngl.birthday@foaf,

foaf:membershipClass-property::
Χ sDescription: membershipClass - Indicates the class of individuals that are a member of a Group
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_membershipClass
Χ sStatus: unstable
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: The membershipClass property relates a Group to an RDF class representing a sub-class of Agent whose instances are all the agents that are a member of the Group,
See Group for details and examples.
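· a sketch of the mechanism, using Python's rdflib; the group and class names are hypothetical: the Group points at a class, and typing an agent with that class marks them as a member:
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import FOAF, RDF

    EX = Namespace("http://example.org/")    # hypothetical namespace
    g = Graph()
    group = EX["rdfUsersGroup"]
    member_class = EX["RDFUser"]             # hypothetical sub-class of foaf:Agent
    g.add((group, RDF.type, FOAF.Group))
    g.add((group, FOAF.name, Literal("RDF users")))
    g.add((group, FOAF.membershipClass, member_class))
    # an agent typed with the membership class is thereby a member of the Group
    g.add((EX["alice"], RDF.type, member_class))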
* McsEngl.foaf/membershipClass,
* McsEngl.membershipClass@foaf,

foaf:sha1-property::
Χ sDescription: sha1sum (hex) - A sha1sum hash, in hex,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_sha1
Χ sStatus: unstable
Χ sDomain: having this property implies being a Document
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: The sha1 property relates a Document to the textual form of a SHA1 hash of (some representation of) its contents,
Χ editorial: The design for this property is neither complete nor coherent. The Document class is currently used in a way that allows multiple instances at different URIs to have the 'same' contents (and hence hash). If sha1 is an owl:InverseFunctionalProperty, we could deduce that several such documents were the self-same thing. A more careful design is needed, which distinguishes documents in a broad sense from byte sequences,
* McsEngl.foaf/sha1,
* McsEngl.sha1@foaf,

foaf:status-property::
Χ sDescription: status - A string expressing what the user is happy for the general public (normally) to know about their current activity,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_status
Χ sStatus: unstable
Χ sDomain: having this property implies being an Agent
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: status is a short textual string expressing what the user is happy for the general public (normally) to know about their current activity: mood, location, etc.,
* McsEngl.foaf/status,
* McsEngl.status@foaf,

foaf:dnaChecksum-property::
Χ sDescription: DNA checksum - A checksum for the DNA of some thing. Joke,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_dnaChecksum
Χ sStatus: archaic
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: The dnaChecksum property is mostly a joke, but also a reminder that there will be lots of different identifying properties for people, some of which we might find disturbing,
* McsEngl.foaf/dnaChecksum,
* McsEngl.dnaChecksum@foaf,

foaf:family_name-property::
Χ sDescription: family_name - The family name of some person,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_family_name
Χ sStatus: archaic
Χ sDomain: having this property implies being a Person
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: This property is considered an archaic spelling of familyName,
* McsEngl.foaf/family_name,
* McsEngl.family_name@foaf,

foaf:fundedBy-property::
Χ sDescription: funded by - An organization funding a project or person,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_fundedBy
Χ sStatus: archaic
Χ sDomain: having this property implies being a Thing
Χ sRange: every value of this property is a Thing
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: The fundedBy property relates something to something else that has provided funding for it,
Χ editorial: This property is tentatively considered archaic usage, unless we hear about positive implementation experience,
* McsEngl.foaf/fundedBy,
* McsEngl.fundedBy@foaf,

foaf:geekcode-property::
Χ sDescription: geekcode - A textual geekcode for this person, see http://www.geekcode.com/geek.html
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_geekcode
Χ sStatus: archaic
Χ sDomain: having this property implies being a Person
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: The geekcode property is used to represent a 'Geek Code' for some Person,
See the Wikipedia entry for details of the code, which provides a somewhat frivolous and willfully obscure mechanism for characterising technical expertise, interests and habits. The geekcode property is not bound to any particular version of the code.
* McsEngl.foaf/geekcode,
* McsEngl.geekcode@foaf,

foaf:givenname-property::
Χ sDescription: Given name - The given name of some person,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_givenname
Χ sStatus: archaic
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: The givenName property is provided (alongside familyName) for use when describing parts of people's names. Although these concepts do not capture the full range of personal naming styles found world-wide, they are commonly used and have some value,
There is also a simple name property.
Support is also provided for the more archaic and culturally varying terminology of firstName and lastName.
See the issue tracker for design discussions, status and ongoing work on rationalising the FOAF naming machinery.
* McsEngl.foaf/givenname,
* McsEngl.givenname@foaf,

foaf:holdsAccount-property::
Χ sDescription: account - Indicates an account held by this agent,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_holdsAccount
Χ sStatus: archaic
Χ sDomain: having this property implies being an Agent
Χ sRange: every value of this property is an OnlineAccount
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong:
Χ editorial: This property is considered archaic usage. It is generally better to use account instead,
The holdsAccount property relates an Agent to an OnlineAccount for which they are the sole account holder. See OnlineAccount for usage details.
This property is equivalent to the account property, which was introduced primarily to provide simpler naming for the same idea.
* McsEngl.foaf/holdsAccount,
* McsEngl.holdsAccount@foaf,

foaf:surname-property::
Χ sDescription: Surname - The surname of some person,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_surname
Χ sStatus: archaic
Χ sDomain: having this property implies being a Person
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong: A number of naming constructs are under development to provide naming substructure; draft properties include firstName, givenName, and surname. These are not currently stable or consistent; see the issue tracker for design discussions, status and ongoing work on rationalising the FOAF naming machinery,
There is also a simple name property.
* McsEngl.foaf/surname,
* McsEngl.surname@foaf,

foaf:theme-property::
Χ sDescription: theme - A theme,
Χ sSource-url: http://xmlns.com/foaf/spec/20140114.html#term_theme
Χ sStatus: archaic
Χ sDomain: having this property implies being a Thing
Χ sRange: every value of this property is a Thing
Χ rdfs:isDefinedBy: http://xmlns.com/foaf/0.1/

Χ sDescriptionLong:
Χ editorial: This property is considered archaic usage, and is not currently recommended for usage,
The theme property is rarely used and under-specified. The intention was to use it to characterise interest / themes associated with projects and groups. Further work is needed to meet these goals.
* McsEngl.foaf/theme,
* McsEngl.theme@foaf,

info-resource of foaf

description::
* http://xmlns.com/foaf/spec/, FOAF Vocabulary Specification 0.99, Namespace Document 14 January 2014 - Paddington Edition,

name::
* McsEngl.foaf'Infrsc,

ontology.eli

description::
"The European Legislation Identifier (ELI) is a framework to make legislation metadata available online in a standardised format, so that it can be accessed, exchanged and reused across borders. For general information about ELI (governance, tools and news), and the implementation status of ELI by various stakeholders (in particular, the way ELI is implemented by the Office of Publications on Eur- Lex), please consult the ELI Register on Eur-Lex."
[{2021-01-29} https://joinup.ec.europa.eu/collection/nifo-national-interoperability-framework-observatory/solution/eif-toolbox/solution-eli-ontology]

name::
* McsEngl.ELI-ontology,
* McsEngl.Knlgbase.eli,
* McsEngl.ontgEli,
* McsEngl.kbOnto.eli,

ontology.owl

description::
· owl="http://www.w3.org/2002/07/owl#"

name::
* McsEngl.owl-ontology,
* McsEngl.Knlgbase.owl,
* McsEngl.ontgOwl,
* McsEngl.kbOnto.owl,

ontology.rdf

description::
· rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"

name::
* McsEngl.rdf-ontology,
* McsEngl.Knlgbase.rdf,
* McsEngl.ontgRdf,
* McsEngl.kbOnto.rdf,

ontology.rdfs

description::
· rdfs="http://www.w3.org/2000/01/rdf-schema#"

name::
* McsEngl.rdfs-ontology,
* McsEngl.Knlgbase.rdfs,
* McsEngl.ontgRdfs,
* McsEngl.kbOnto.rdfs,

ontology.skos

description::
· skos="http://www.w3.org/2004/02/skos/core#"
"SKOS is a vocabulary for publishing classification schemes such as terminologies and thesauri on the Web. SKOS is since 2009 a W3C recommendation and is widely used in the library world. The Library of Congress published its Subject Headings as a SKOS vocabulary."
[{2021-01-03} https://www.w3.org/TR/2014/NOTE-rdf11-primer-20140624/#section-vocabulary]
===
"Simple Knowledge Organization System (SKOS) is a W3C recommendation designed for representation of thesauri, classification schemes, taxonomies, subject-heading systems, or any other type of structured controlled vocabulary. SKOS is part of the Semantic Web family of standards built upon RDF and RDFS, and its main objective is to enable easy publication and use of such vocabularies as linked data."
[{2021-01-24} https://en.wikipedia.org/wiki/Simple_Knowledge_Organization_System]

name::
* McsEngl.Rdfvoc.SKOS,
* McsEngl.SKOS-Rdfvoc,
* McsEngl.SKOS'(simple-knowledge-organization-system),
* McsEngl.simple-knowledge-organization-system,
* McsEngl.skos-ontology,
* McsEngl.Knlgbase.skos,
* McsEngl.ontgSkos,
* McsEngl.kbOnto.skos,

ontology.terms

description::
· terms="http://purl.org/dc/terms/"

name::
* McsEngl.terms-ontology,
* McsEngl.Knlgbase.terms,
* McsEngl.ontgTerms,
* McsEngl.kbOnto.terms,

ontology.xml

description::
· xml="http://www.w3.org/XML/1998/namespace"

name::
* McsEngl.xml-ontology,
* McsEngl.Knlgbase.xml,
* McsEngl.ontgXml,
* McsEngl.kbOnto.xml,

ontology.xsd

description::
· xsd="http://www.w3.org/2001/XMLSchema#"

name::
* McsEngl.xsd-ontology,
* McsEngl.Knlgbase.xsd,
* McsEngl.ontgXsd,
* McsEngl.kbOnto.xsd,

ontology.BORO

description::
">overview of Business Objects Reference Ontology:
Business Objects Reference Ontology (BORO) is a top-level ontology designed for developing ontological or semantic models for large complex operational applications. It consists of a top-level ontology as well as a process for constructing the ontology. BORO was originally conceived in the late 1980s to address a particular need for a solid legacy re-engineering process and then evolved to address a wider need for developing enterprise systems in a 'better way'.

**Key features of BORO:**
* **Simplicity:** BORO is a simple, repeatable process for developing formal ontologies.
* **Extensionality:** BORO takes an extensional approach to ontology development, which means that it is grounded in physical reality.
* **Grounding in metaphysics:** BORO is built upon a series of clear metaphysical choices, which provides a solid metaphysical foundation.

**Benefits of using BORO:**
* **Improved communication and understanding:** BORO can improve communication and understanding between different stakeholders in an organization.
* **Reduced development costs:** BORO can reduce development costs by enabling the reuse of components.
* **Enhanced interoperability:** BORO can enhance interoperability between different systems.

**Applications of BORO:**
* **Enterprise architecture:** BORO can be used to develop enterprise architecture models.
* **Systems integration:** BORO can be used to integrate different systems.
* **Data modeling:** BORO can be used to develop data models.
* **Knowledge representation:** BORO can be used to represent knowledge.

**BORO in the context of Digital Twins:**
BORO can play a valuable role in the development of Digital Twins by providing a common ontology for representing the different aspects of a Digital Twin. This can help to ensure that the different parts of a Digital Twin are interoperable and that they can be easily understood by different stakeholders.

**Overall, BORO is a powerful and versatile tool for developing ontological or semantic models for large complex operational applications. It is a well-established and widely-used approach that has been successfully applied in a variety of domains.**
Sources
digitaltwinhub.co.uk/top-level-ontologies/business-objects-reference-ontology-r2/"
[{2023-11-19 retrieved} https://bard.google.com/chat/33337b565dbef609?hl=en&pli=1]

name::
* McsEngl.BORO!=Business-Objects-Reference-Ontology,
* McsEngl.knOnto.BORO,

ontology.CIDOC-CRM

description::
"The CIDOC CRM represents an 'ontology' for cultural heritage information i.e. it describes in a formal language the explicit and implicit concepts and relations relevant to the documentation of cultural heritage. The primary role of the CIDOC CRM is to serve as a basis for mediation of cultural heritage information and thereby provide the semantic 'glue' needed to transform today's disparate, localised information sources into a coherent and valuable global resource.
The CIDOC CRM is intended to promote a shared understanding of cultural heritage information by providing a common and extensible semantic framework that any cultural heritage information can be mapped to. It is intended to be a common language for domain experts and implementers to formulate requirements for information systems and to serve as a guide for good practice of conceptual modelling. In this way, it can provide the "semantic glue" needed to mediate between different sources of cultural heritage information, such as that published by museums, libraries and archives.
The CIDOC CRM is the culmination of over 10 years work by the CIDOC Documentation Standards Working Group and CIDOC CRM SIG which are working groups of CIDOC. Since 9/12/2006 it is official standard ISO 21127:2006."
[{2021-01-26} http://www.cidoc-crm.org/node/202]

name::
* McsEngl.CIDOC-CRM-ontology,
* McsEngl.Knlgbase.CIDOC-CRM,
* McsEngl.ontgCIDOC-CRM,
* McsEngl.kbOnto.CIDOC-CRM,

ontology.DBpedia

description::
"The English version of the DBpedia knowledge base describes 4.58 million things, out of which 4.22 million are classified in a consistent ontology, including 1,445,000 persons, 735,000 places (including 478,000 populated places), 411,000 creative works (including 123,000 music albums, 87,000 films and 19,000 video games), 241,000 organizations (including 58,000 companies and 49,000 educational institutions), 251,000 species and 6,000 diseases."
[{2021-01-30} https://wiki.dbpedia.org/about]

name::
* McsEngl.DBpedia-ontology,
* McsEngl.Knlgbase.DBpedia,
* McsEngl.ontgDBpedia,
* McsEngl.kbOnto.DBpedia,

ontology.PROV-{2017}

description::
"* **2017:** The W3C publishes the PROV Ontology, which is a vocabulary for describing the provenance of data." [{2023-11-04 retrieved} https://bard.google.com/chat/d9d14f2406f370df]
"overview of PROV-Ontology:
The PROV Ontology (PROV-O) is a Web Ontology Language (OWL) ontology that defines a conceptual data model for provenance information. It is intended to be used to represent and interchange provenance information in a variety of application domains.

PROV-O is based on the PROV Data Model (PROV-DM), which defines a minimal set of concepts to represent provenance information. PROV-O adds additional classes and properties to provide a more expressive vocabulary for provenance modeling.

PROV-O can be used to represent a variety of provenance information, including:
* Entities: the entities involved in producing a piece of data or thing, such as people, organizations, software tools, and data files.
* Activities: the activities that were performed to produce a piece of data or thing, such as generating data, transforming data, and publishing data.
* Relationships between entities and activities: how entities are related to each other and to the activities that produced them.

PROV-O is designed to be extensible, so that new classes and properties can be added to model provenance information for specific applications and domains.

PROV-O is used in a variety of applications, including:
* Scientific data management: to track the provenance of scientific data, so that scientists can understand how the data was produced and assess its reliability.
* Software engineering: to track the provenance of software artifacts, so that developers can understand how the software was produced and debug problems.
* Web publishing: to track the provenance of web pages and other web resources, so that users can understand how the content was created and assess its trustworthiness.

Here are some examples of how PROV-O can be used to represent provenance information:
* A scientist uses a software tool to generate a dataset of experimental results. PROV-O can be used to represent the relationship between the scientist, the software tool, and the dataset, as well as the activity of generating the dataset.
* A software developer writes a program that transforms one dataset into another dataset. PROV-O can be used to represent the relationship between the two datasets, the program, and the activity of transforming the data.
* A web publisher creates a web page that cites a research paper. PROV-O can be used to represent the relationship between the web page, the research paper, and the activity of citing the paper.

PROV-O is a powerful tool for representing and interchanging provenance information in a variety of application domains. It is based on open standards and is supported by a number of tools and libraries."
[{2023-11-04 retrieved} https://bard.google.com/chat/1183569c9a4037f0]
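· a minimal sketch of these PROV-O relationships, using Python's rdflib; the entity, activity and agent names are hypothetical:
    from rdflib import Graph, Namespace
    from rdflib.namespace import PROV, RDF

    EX = Namespace("http://example.org/")   # hypothetical namespace
    g = Graph()
    dataset = EX["results.csv"]
    run = EX["experiment-run-1"]
    alice = EX["alice"]
    g.add((dataset, RDF.type, PROV.Entity))
    g.add((run, RDF.type, PROV.Activity))
    g.add((alice, RDF.type, PROV.Agent))
    g.add((dataset, PROV.wasGeneratedBy, run))      # activity produced the entity
    g.add((run, PROV.wasAssociatedWith, alice))     # agent responsible for the activity
    g.add((dataset, PROV.wasAttributedTo, alice))   # entity attributed to the agent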

name::
* McsEngl.PROV-ontology,
* McsEngl.Knlgbase.PROV,
* McsEngl.ontgProv,
* McsEngl.kbOnto.PROV,

manager of lagOntg

description::
· a-knowledge-language--manager for ontology-languages.

">overview of ontology editors:
Ontology editors are specialized software applications designed to assist in the creation, editing, and management of ontologies. Ontologies are formal representations of knowledge that are used in a variety of fields, including artificial intelligence, knowledge engineering, and information science. Ontology editors provide a variety of features to help users create and manage ontologies, such as:
* **Visual editing:** Ontology editors often provide a visual interface that allows users to create and edit ontologies by dragging and dropping elements onto a canvas. This can make it easier to understand and modify complex ontologies.
* **Error checking:** Ontology editors can check for errors in ontologies, such as missing or conflicting information. This can help to ensure that ontologies are well-formed and consistent.
* **Reasoning:** Some ontology editors include reasoning engines that can be used to infer new information from ontologies. This can be useful for tasks such as query answering and classification.
* **Import/export:** Ontology editors can import and export ontologies in a variety of formats, including RDF, OWL, and OBO. This makes it possible to share ontologies with others and to use them with different software applications.

There are a number of different ontology editors available, each with its own strengths and weaknesses. Some of the most popular ontology editors include:
* **Protégé:** Protégé is a free and open-source ontology editor that is widely used in the academic and research communities. It is a powerful and flexible tool that supports a wide range of features.
* **OntoStudio:** OntoStudio is a commercial ontology editor that is used by a number of large organizations. It is a user-friendly tool that is easy to learn and use.
* **TopBraid Composer:** TopBraid Composer is another commercial ontology editor that is used by a number of large organizations. It is a powerful and flexible tool that supports a wide range of features.
* **Visual OWL:** Visual OWL is a free and open-source ontology editor that is specifically designed for editing OWL ontologies. It is a lightweight and easy-to-use tool that is a good choice for beginners.
* **JENA Fuseki:** JENA Fuseki is an open-source ontology editor that is based on the JENA framework. It is a powerful and flexible tool that is a good choice for developers.

Choosing the right ontology editor for your needs will depend on a number of factors, such as your experience level, budget, and specific requirements. It is a good idea to try out a few different editors before making a decision."
[{2023-11-15 retrieved} https://bard.google.com/chat/73bf79ced88777e9?hl=en&pli=1]

name::
* McsEngl.app.ontology-editor!⇒klmrOntology,
* McsEngl.klmrOntology,
* McsEngl.Ontgtool!⇒klmrOntology,
* McsEngl.lagOntg'tool!⇒klmrOntology,
* McsEngl.ontology-editor!⇒klmrOntology,
* McsEngl.tech.lagOntg!⇒klmrOntology,

klmrOntology.SPECIFIC

description::
* Apollo, open-source,
* JENA Fuseki, open-source,
* OntoStudio,
* Protégé, open-source,
* Swoop, open-source,
* TopBraid Composer Free Edition,
* Visual OWL, open-source,
* WebProtégé,

name::
* McsEngl.klmrOntology.specific,

klmrOntology.WebProtégé of lagOntg

description::
× webaddress: https://webprotege.stanford.edu/

"WebProtégé is an ontology development environment for the Web that makes it easy to create, upload, modify, and share ontologies for collaborative viewing and editing.
WebProtégé fully supports the latest OWL 2 Web Ontology Language. The highly configurable user interface creates the perfect environment for beginners and experts alike. Collaboration features abound, including sharing and permissions, threaded notes and discussions, watches and email notifications. RDF/XML, Turtle, OWL/XML, OBO, and other formats available for ontology upload and download.
* W3C standards compliant
* Simple, customizable user interface
* Optimized for collaboration
* Change tracking and revision history
* Web forms for domain specific editing
* Multiple upload/download formats
* Cross compatible with Protégé Desktop"
[{2021-01-28} https://protege.stanford.edu/products.php#web-protege]
===
"WebProtégé is a web-based ontology editor that allows users to create, edit, and manage ontologies in a collaborative environment. It is built on top of the Protégé ontology editor and provides a number of features that make it ideal for developing and maintaining large and complex ontologies.

WebProtégé includes the following features:
* A collaborative user interface that allows multiple users to edit the same ontology at the same time.
* A variety of tools for editing and managing ontologies, including a class browser, property browser, and individual browser.
* Support for a variety of ontology formats, including RDF/XML, Turtle, OWL/XML, and OBO.
* Integration with a number of other tools and resources, such as BioPortal and the NCBO Ontology Repository.

WebProtégé is used by a wide range of users, including researchers, developers, and practitioners in a variety of fields, such as biomedical informatics, artificial intelligence, and knowledge engineering.

Here are some of the benefits of using WebProtégé:
* **Collaboration:** WebProtégé allows multiple users to edit the same ontology at the same time, making it ideal for collaborative ontology development.
* **Ease of use:** WebProtégé provides a user-friendly interface that makes it easy to create, edit, and manage ontologies.
* **Expressiveness:** WebProtégé supports a wide range of ontology languages, including OWL and OBO, allowing users to create highly expressive ontologies.
* **Integration:** WebProtégé integrates with a number of other tools and resources, such as BioPortal and the NCBO Ontology Repository, making it easy to find and reuse existing ontologies.

If you are developing or maintaining an ontology, I highly recommend using WebProtégé. It is a powerful and flexible tool that can help you to create and manage ontologies more efficiently and effectively."
[{2023-11-04 retrieved} https://bard.google.com/chat/92897ea538348c4f]

name::
* McsEngl.klmrOntology.WebProtégé,
* McsEngl.WebProtégé,

info-resource of lagOntg

description::
* https://www.isko.org/cyclo/ontologies,
* https://en.wikipedia.org/wiki/Ontology_language,
* http://www.jfsowa.com/ontology/index.htm,

name::
* McsEngl.lagOntg'Infrsc,

evoluting of lagOntg

description::
"evolution of ontology-languages:
The evolution of ontology languages reflects the growing need for structured and formal representations of knowledge in various fields, particularly in the realms of artificial intelligence, knowledge representation, and the Semantic Web. Here is an overview of the evolution of ontology languages:

1. **Early Knowledge Representation Systems:** Before the term "ontology" became widely adopted, early knowledge representation systems used logic-based formalisms, such as predicate logic, frames, and semantic networks, to represent structured knowledge. These systems laid the foundation for modern ontology languages.

2. **KIF (Knowledge Interchange Format):** In the late 1980s, the development of KIF began as an early attempt to create a standard language for expressing knowledge and ontologies in a machine-readable format.

3. **Frame-Based Systems:** Frame-based systems like KL-ONE, developed in the 1980s, provided a way to represent structured knowledge with a focus on classes and instances, influencing the development of ontology languages.

4. **RDF (Resource Description Framework):** The late 1990s saw the emergence of RDF as a standardized way to represent resources and their relationships in a simple, graph-based format. RDF formed the basis for the development of the Semantic Web.

5. **RDF Schema (RDFS):** RDFS was introduced as an extension of RDF to allow for the creation of simple ontologies with concepts, properties, and hierarchies.

6. **OWL (Web Ontology Language):** The early 2000s marked a significant milestone with the release of OWL, a family of ontology languages designed to represent knowledge on the Semantic Web. OWL introduced more expressive power, allowing for complex modeling and reasoning capabilities. It defined three profiles: OWL Lite, OWL DL, and OWL Full, each offering different levels of expressiveness and reasoning complexity.

7. **Description Logics (DLs):** OWL was built upon description logics, a family of formal logic-based languages for representing and reasoning about knowledge. Description logics provide a theoretical foundation for OWL and influenced its development.

8. **Standardization and Tooling:** The development of ontology languages was paralleled by the standardization of ontology-related technologies and the creation of tools like Protégé, which made ontology development more accessible.

9. **Ontology Engineering Methodologies:** Various methodologies for ontology engineering, such as the Methontology methodology and NeOn methodology, have been developed to guide the creation, maintenance, and evolution of ontologies.

10. **Integration with Databases and AI:** Ontology languages are increasingly integrated into AI systems, databases, and natural language processing to enhance data integration, knowledge discovery, and semantic search.

11. **Emerging Semantic Technologies:** Ongoing research and development continue to push the boundaries of ontology languages, including efforts to merge ontologies with emerging technologies like blockchain, knowledge graphs, and AI-driven ontology generation.

The evolution of ontology languages has been driven by the need to represent and reason about complex knowledge and has significantly impacted fields such as artificial intelligence, the Semantic Web, and knowledge management. As technologies and requirements continue to evolve, ontology languages and their associated tools and methodologies will likely continue to adapt and expand."
[{2023-10-24 retrieved} https://chat.openai.com/c/b772e8ef-c6be-489f-8ad4-69895bc95406]

"evolution of ontology-language:
The evolution of ontology languages can be traced back to the early days of artificial intelligence (AI) research, in the 1970s and 1980s. During this time, a number of different ontology languages were developed, each with its own strengths and weaknesses. Some of the most notable early ontology languages include:
* **KL-ONE** (1984): One of the first ontology languages to be developed, KL-ONE was based on description logic, a formal logic for representing and reasoning about concepts and their relationships.
* **LOOM** (1992): A successor to KL-ONE, LOOM was a more expressive ontology language that supported a wider range of reasoning tasks.
* **CycL** (1992): A large-scale ontology language that was developed to represent the common-sense knowledge of the world.

In the 1990s, there was a growing interest in using ontologies to develop intelligent web applications. This led to the development of a number of new ontology languages that were specifically designed for the World Wide Web. Some of the most notable ontology languages from this era include:
* **DAML** (2000): DARPA Agent Markup Language, developed by the US Department of Defense.
* **OIL** (2000): Ontology Inference Layer, developed by the European Union.

In 2001, DAML and OIL were merged to form DAML+OIL. DAML+OIL was a powerful ontology language that supported a wide range of reasoning tasks. However, it was also complex and difficult to use.

In 2004, the World Wide Web Consortium (W3C) published the Web Ontology Language (OWL). OWL was a simplified version of DAML+OIL that was designed to be easier to use and more accessible to a wider range of users. OWL has since become the de facto standard ontology language for the Web.

The evolution of ontology languages has been driven by a number of factors, including the need for more expressive and powerful languages, the need for languages that are easier to use, and the need for languages that are specifically designed for the Web.

In recent years, there has been a growing interest in developing ontology languages that are more scalable and efficient. This has led to the development of a number of new ontology languages, such as SWRL (Semantic Web Rule Language) and SHACL (Shapes Constraint Language).

Ontology languages are now used in a wide range of applications, including:
* **Semantic Web:** Ontologies are used to represent the meaning of data on the Semantic Web.
* **Knowledge management:** Ontologies are used to organize and manage knowledge in knowledge bases.
* **Artificial intelligence:** Ontologies are used to develop intelligent agents that can reason about the world.
* **Data science:** Ontologies are used to integrate data from different sources and to extract new insights from data.

The evolution of ontology languages is ongoing, and new languages are being developed all the time. As ontology languages continue to evolve, they will play an increasingly important role in the development of intelligent web applications and knowledge-based systems."
[{2023-10-24 retrieved} https://bard.google.com/chat/56738e883ec8db69]

name::
* McsEngl.evoluting-of-lagOntg,
* McsEngl.lagOntg'evoluting,

lagOntg.SPECIFIC

description::
* markup-lagOntg,
===
* description-logic-based-lagOntg,
* first-order-logic-based-lagOntg,
* frame-based-lagOntg,
===
* Common Logic,
* CycL,
* DAML+OIL,
* DOGMA (Developing Ontology-Grounded Methods and Applications),
* F-Logic,
* KIF,
* KL-ONE,
* KM,
* LOOM,
* OCML (Operational Conceptual Modelling Language),
* OIL (Ontology Inference Layer),
* OKBC,
* OWL,
* PLIB (Parts LIBrary),
* RACER,
* RDF,
* SHOE,
[{2021-01-27} https://en.wikipedia.org/wiki/Ontology_language]

name::
* McsEngl.lagOntg.specific,

lagOntg.concept

description::
"overview of concept-language:
Concept-language (CL) is a simple ontology language that is easy to learn and use. It is often used for educational purposes, or for developing small ontologies.

CL ontologies are made up of two main components: concepts and relations. Concepts represent the different types of things that exist in the world, such as people, places, and events. Relations represent the relationships between concepts, such as the relationship between a parent and a child.

CL ontologies are written in a text-based format, and can be edited with a text editor. The following is a simple example of a CL ontology:
```
concept Person
concept Parent
concept Child

relation parentOf(Person, Person)
```

This ontology defines two concepts, `Person` and `Parent`, and one relation, `parentOf`. The `parentOf` relation can be used to represent the relationship between a parent and a child.

CL ontologies can be used to represent a wide range of knowledge, such as the knowledge about a family, the knowledge about a product catalog, or the knowledge about a scientific domain.

Here are some of the benefits of using CL:
* It is easy to learn and use, even for people with no prior experience with ontology languages.
* It is a lightweight language, which means that CL ontologies can be created and edited quickly and easily.
* It is flexible and expressive, which means that CL can be used to represent a wide range of knowledge.
* There are a number of tools and resources available to help people create and use CL ontologies.

However, CL also has some limitations:
* It is not as powerful as some other ontology languages, such as OWL.
* It is not as widely supported as some other ontology languages.

Overall, CL is a good choice for developing small, simple ontologies. It is also a good choice for educational purposes.

Here are some examples of how CL can be used:
* To represent the knowledge about a family, you could create a CL ontology with the following concepts: `Person`, `Parent`, `Child`, `Sibling`, and `Grandparent`. You could then use the `parentOf` relation to represent the relationships between the different concepts.
* To represent the knowledge about a product catalog, you could create a CL ontology with the following concepts: `Product`, `Category`, and `Attribute`. You could then use the `isA` relation to represent the relationship between a product and a category, and the `hasAttribute` relation to represent the relationship between a product and an attribute.
* To represent the knowledge about a scientific domain, you could create a CL ontology with the following concepts: `Gene`, `Protein`, and `Pathway`. You could then use the `interactsWith` relation to represent the interactions between the different concepts.

CL is a versatile and easy-to-use ontology language that can be used to represent a wide range of knowledge."
[{2023-10-24 retrieved} https://bard.google.com/chat/56e1224934fd79d9]

name::
* McsEngl.concept-lagOntg,
* McsEngl.lagOntg.concept,

lagOntg.markup

description::
* DAML+OIL,
* OIL-(Ontology Inference Layer),
* OWL-(Web Ontology Language),
* RDF-(Resource Description Framework),
* SHOE,

name::
* McsEngl.lagOntg.markup,
* McsEngl.markup-lagOntg,

lagOntg.OWL

description::
"Ontologies are formalized vocabularies of terms, often covering a specific domain and shared by a community of users.
They specify the definitions of terms by describing their relationships with other terms in the ontology."
[{2021-01-03} https://www.w3.org/TR/owl2-overview/]

name::
* McsEngl.OWL!=web-ontology-language!⇒lagOwlg,
* McsEngl.lagKnlg.026-OWL!⇒lagOwlg,
* McsEngl.lagKnlg.OWL!⇒lagOwlg,
* McsEngl.lagOwlg!=web-ontology-language-OWL,
* McsEngl.web-ontology-language!⇒lagOwlg,

descriptionLong::
"The Web Ontology Language (OWL) is a family of knowledge representation languages for authoring ontologies. Ontologies are a formal way to describe taxonomies and classification networks, essentially defining the structure of knowledge for various domains: the nouns representing classes of objects and the verbs representing relations between the objects.
Ontologies resemble class hierarchies in object-oriented programming but there are several critical differences. Class hierarchies are meant to represent structures used in source code that evolve fairly slowly (perhaps with monthly revisions) whereas ontologies are meant to represent information on the Internet and are expected to be evolving almost constantly. Similarly, ontologies are typically far more flexible as they are meant to represent information on the Internet coming from all sorts of heterogeneous data sources. Class hierarchies on the other hand tend to be fairly static and rely on far less diverse and more structured sources of data such as corporate databases.[1]
The OWL languages are characterized by formal semantics. They are built upon the World Wide Web Consortium's (W3C) XML standard for objects called the Resource Description Framework (RDF).[2] OWL and RDF have attracted significant academic, medical and commercial interest.
In October 2007,[3] a new W3C working group[4] was started to extend OWL with several new features as proposed in the OWL 1.1 member submission.[5] W3C announced the new version of OWL on 27 October 2009.[6] This new version, called OWL 2, soon found its way into semantic editors such as Protégé and semantic reasoners such as Pellet,[7] RacerPro,[8] FaCT++[9][10] and HermiT.[11]
The OWL family contains many species, serializations, syntaxes and specifications with similar names. OWL and OWL2 are used to refer to the 2004 and 2009 specifications, respectively. Full species names will be used, including specification version (for example, OWL2 EL). When referring more generally, OWL Family will be used.[12][13][14]"
[{2021-01-03} https://en.wikipedia.org/wiki/Web_Ontology_Language]

input of lagOwlg

description::
"The W3C OWL 2 Web Ontology Language (OWL) is a Semantic Web language designed to represent rich and complex knowledge about things, groups of things, and relations between things."
[{2021-01-08} https://www.w3.org/TR/2012/REC-owl2-primer-20121211/#Introduction]
===
"An ontology is a set of precise descriptive statements about some part of the world (usually referred to as the domain of interest or the subject matter of the ontology)."
[{2021-01-08} https://www.w3.org/TR/2012/REC-owl2-primer-20121211/#What_is_OWL_2.3F]
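===
· a-minimal-sketch of such descriptive-statements, written in Turtle (the example.org IRIs are hypothetical, not part of the-specification):
```
# hypothetical example IRIs throughout
@prefix :     <http://example.org/family#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

<http://example.org/family> rdf:type owl:Ontology .

:Person rdf:type owl:Class .        # a category
:Woman  rdf:type owl:Class ;        # another category,
        rdfs:subClassOf :Person .   # contained in the first
:mary   rdf:type :Woman .           # an object (individual)
```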

name::
* McsEngl.Owlinput,
* McsEngl.domain-of-interest@lagOwlg!⇒Owlinput,
* McsEngl.lagOwlg'input!⇒Owlinput,
* McsEngl.ontology@lagOwlg!⇒Owlinput,
* McsEngl.subject-matter@lagOwlg!⇒Owlinput,

IRI of Owlinput

description::
"Each ontology may have an ontology IRI, which is used to identify an ontology. If an ontology has an ontology IRI, the ontology may additionally have a version IRI, which is used to identify the version of the ontology. The version IRI may be, but need not be, equal to the ontology IRI. An ontology without an ontology IRI must not contain a version IRI."
[{2021-01-10} https://www.w3.org/TR/2012/REC-owl2-syntax-20121211/#Ontologies]
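===
· for illustration, the-two IRIs as they appear in the-Turtle-serialization of an-ontology (hypothetical IRIs):
```
@prefix owl: <http://www.w3.org/2002/07/owl#> .

<http://example.org/family>                           # ontology IRI (hypothetical)
    a owl:Ontology ;
    owl:versionIRI <http://example.org/family/2.0> .  # version IRI (hypothetical)
```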

name::
* McsEngl.Owlinput'IRI,

version of Owlinput

description::
"Each ontology may have an ontology IRI, which is used to identify an ontology. If an ontology has an ontology IRI, the ontology may additionally have a version IRI, which is used to identify the version of the ontology. The version IRI may be, but need not be, equal to the ontology IRI. An ontology without an ontology IRI must not contain a version IRI."
[{2021-01-10} https://www.w3.org/TR/2012/REC-owl2-syntax-20121211/#Ontologies]

name::
* McsEngl.Owlinput'version,

entity of Owlinput

description::
"Entities: elements used to refer to real-world objects
... All atomic constituents of statements, be they objects (John, Mary), categories (female) or relations (married) are called entities."
[{2021-01-09} https://www.w3.org/TR/2012/REC-owl2-primer-20121211/#Modeling_Knowledge:_Basic_Notions]

name::
* McsEngl.Owlentity,
* McsEngl.Owlinput'entity!⇒Owlentity,
* McsEngl.entity@lagOwlg,

Owlentity.individual

description::
"All atomic constituents of statements, be they objects (John, Mary), categories (female) or relations (married) are called entities.
In OWL 2, we denote objects as individuals, categories as classes and relations as properties."
[{2021-01-09} https://www.w3.org/TR/2012/REC-owl2-primer-20121211/#Modeling_Knowledge:_Basic_Notions]

name::
* McsEngl.Owlentity.individual!⇒Owlindividual,
* McsEngl.Owlentity.object!⇒Owlindividual,
* McsEngl.Owlindividual,
* McsEngl.individual@lagOwlg,
* McsEngl.object@lagOwlg,

Owlentity.class

description::
"All atomic constituents of statements, be they objects (John, Mary), categories (female) or relations (married) are called entities.
In OWL 2, we denote objects as individuals, categories as classes and relations as properties."
[{2021-01-09} https://www.w3.org/TR/2012/REC-owl2-primer-20121211/#Modeling_Knowledge:_Basic_Notions]

name::
* McsEngl.Owlclass,
* McsEngl.Owlentity.category!⇒Owlclass,
* McsEngl.Owlentity.class!⇒Owlclass,
* McsEngl.class@lagOwlg,

Owlentity.property

description::
"All atomic constituents of statements, be they objects (John, Mary), categories (female) or relations (married) are called entities.
In OWL 2, we denote objects as individuals, categories as classes and relations as properties."
[{2021-01-09} https://www.w3.org/TR/2012/REC-owl2-primer-20121211/#Modeling_Knowledge:_Basic_Notions]

name::
* McsEngl.Owlentity.property!⇒Owlproperty,
* McsEngl.Owlentity.relation!⇒Owlproperty,
* McsEngl.Owlproperty,
* McsEngl.property@lagOwlg,
* McsEngl.relation@lagOwlg,

Owlproperty.object

description::
"Properties in OWL 2 are further subdivided.
Object properties relate objects to objects (like a person to their spouse), while datatype properties assign data values to objects (like an age to a person).
Annotation properties are used to encode information about (parts of) the ontology itself (like the author and creation date of an axiom) instead of the domain of interest."
[{2021-01-09} https://www.w3.org/TR/2012/REC-owl2-primer-20121211/#Modeling_Knowledge:_Basic_Notions]

name::
* McsEngl.Owlproperty.object,
* McsEngl.object-property@lagOwlg,

Owlproperty.datatype

description::
"Data properties connect individuals with literals."
[{2021-01-10} https://www.w3.org/TR/2012/REC-owl2-syntax-20121211/#Data_Properties]
===
"Properties in OWL 2 are further subdivided.
Object properties relate objects to objects (like a person to their spouse), while datatype properties assign data values to objects (like an age to a person).
Annotation properties are used to encode information about (parts of) the ontology itself (like the author and creation date of an axiom) instead of the domain of interest."
[{2021-01-09} https://www.w3.org/TR/2012/REC-owl2-primer-20121211/#Modeling_Knowledge:_Basic_Notions]

name::
* McsEngl.Owlproperty.datatype,
* McsEngl.data-property@lagOwlg,

Owlproperty.annotation

description::
"Properties in OWL 2 are further subdivided.
Object properties relate objects to objects (like a person to their spouse), while datatype properties assign data values to objects (like an age to a person).
Annotation properties are used to encode information about (parts of) the ontology itself (like the author and creation date of an axiom) instead of the domain of interest."
[{2021-01-09} https://www.w3.org/TR/2012/REC-owl2-primer-20121211/#Modeling_Knowledge:_Basic_Notions]
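===
· a-minimal-sketch contrasting the-three kinds of property, in Turtle (hypothetical example IRIs):
```
# hypothetical example IRIs
@prefix :     <http://example.org/family#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .

:hasSpouse a owl:ObjectProperty .     # relates objects to objects
:hasAge    a owl:DatatypeProperty .   # assigns data values to objects
:john :hasSpouse :mary ;
      :hasAge    "51"^^xsd:integer .

# rdfs:comment is an annotation property, describing the ontology itself:
:hasAge rdfs:comment "age in completed years"@en .
```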

name::
* McsEngl.Owlproperty.annotation,

vocabulary of Owlinput

description::
"Entities are the fundamental building blocks of OWL 2 ontologies, and they define the vocabulary — the named terms — of an ontology.
In logic, the set of entities is usually said to constitute the signature of an ontology.
Apart from entities, OWL 2 ontologies typically also contain literals, such as strings or integers."
[{2021-01-10} https://www.w3.org/TR/2012/REC-owl2-syntax-20121211/#Entities.2C_Literals.2C_and_Anonymous_Individuals]

name::
* McsEngl.Owlinput'vocabulary,
* McsEngl.vocabulary@lagOwlg,

literal of Owlinput

description::
"Literals represent data values such as particular strings or integers.
They are analogous to typed RDF literals [RDF Concepts] and can also be understood as individuals denoting data values."
[{2021-01-10} https://www.w3.org/TR/2012/REC-owl2-syntax-20121211/#Literals]
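===
· in Turtle, a-literal may carry a-datatype-IRI, e.g. (hypothetical properties):
```
@prefix :    <http://example.org/demo#> .   # hypothetical IRIs
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

:john :hasName "John" ;               # string literal
      :hasAge  "51"^^xsd:integer .    # typed literal
```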

name::
* McsEngl.Owlliteral,
* McsEngl.Owlinput'literal!⇒Owlliteral,

axiom of Owlinput

description::
"Now, in order to formulate knowledge explicitly, it is useful to assume that it consists of elementary pieces that are often referred to as statements or propositions. Statements like “it is raining” or “every man is mortal” are typical examples for such basic propositions. Indeed, every OWL 2 ontology is essentially just a collection of such basic “pieces of knowledge.” Statements that are made in an ontology are called axioms in OWL 2, and the ontology asserts that its axioms are true. In general, OWL statements might be either true or false given a certain state of affairs. This distinguishes them from entities and expressions as described further below."
[{2021-01-09} https://www.w3.org/TR/2012/REC-owl2-primer-20121211/#Modeling_Knowledge:_Basic_Notions]
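===
· the-proposition "every man is mortal" and a-related class-assertion, written as two axioms in Turtle (hypothetical IRIs):
```
@prefix :     <http://example.org/demo#> .   # hypothetical IRIs
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

:Man rdfs:subClassOf :Mortal .   # "every man is mortal"
:socrates a :Man .               # class-assertion axiom
```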

name::
* McsEngl.Owlaxiom,
* McsEngl.Owlinput'axiom!⇒Owlaxiom,
* McsEngl.axiom@lagOwlg,

annotation of Owlinput

description::
"OWL 2 applications often need ways to associate additional information with ontologies, entities, and axioms. To this end, OWL 2 provides for annotations on ontologies, axioms, and entities.
example: One might want to associate human-readable labels with IRIs and use them when visualizing an ontology. To this end, one might use the rdfs:label annotation property to associate such labels with ontology IRIs."
[{2021-01-10} https://www.w3.org/TR/2012/REC-owl2-syntax-20121211/#Annotations]
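===
· the-rdfs:label example from the-quote, in Turtle (hypothetical class IRI):
```
@prefix :     <http://example.org/demo#> .   # hypothetical IRI
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

:Person rdfs:label "person"@en , "πρόσωπο"@el .
```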

name::
* McsEngl.Owlannotation,
* McsEngl.Owlinput'annotation!⇒Owlannotation,

open-world-assumption of Owlinput

description::
"If some fact is not present in a database, it is usually considered false (the so-called closed-world assumption) whereas in the case of an OWL 2 document it may simply be missing (but possibly true), following the open-world assumption."
[{2021-01-08} https://www.w3.org/TR/2012/REC-owl2-primer-20121211/#What_is_OWL_2.3F]
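===
· a-minimal-sketch of the-difference (hypothetical IRIs): given only the-assertion below, "does :mary have a-child?" is answered "no" under the-closed-world-assumption, but "unknown" under the-open-world-assumption, because no axiom rules a-child in or out.
```
@prefix : <http://example.org/family#> .   # hypothetical IRIs

:john :hasChild :mary .
# nothing states that :mary has no children,
# so an OWL reasoner cannot conclude she is childless
```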

name::
* McsEngl.Owlinput'open-world-assumption,
* McsEngl.lagOwlg'open-world-assumption,
* McsEngl.open-world-assumption@lagOwlg,

descriptionLong::
"In addition, OWL makes an open world assumption. That is, descriptions of resources are not confined to a single file or scope. While class C1 may be defined originally in ontology O1, it can be extended in other ontologies. The consequences of these additional propositions about C1 are monotonic. New information cannot retract previous information. New information can be contradictory, but facts and entailments can only be added, never deleted.
The possibility of such contradictions is something the designer of an ontology needs to take into consideration. It is expected that tool support will help detect such cases."
[http://www.w3.org/TR/2004/REC-owl-guide-20040210/]

output of lagOwlg

description::
· the-output of the-mapping.
===
"An OWL 2 ontology is an abstract notion defined in terms of the structural specification. Each ontology is associated with an ontology document, which physically contains the ontology stored in a particular way. The name "ontology document" reflects the expectation that a large number of ontologies will be stored in physical text documents written in one of the syntaxes of OWL 2. OWL 2 tools, however, are free to devise other types of ontology documents — that is, to introduce other ways of physically storing ontologies."
[{2021-01-10} https://www.w3.org/TR/2012/REC-owl2-syntax-20121211/#Ontologies]

name::
* McsEngl.Owloutput,
* McsEngl.lagOwlg'document!⇒Owloutput,
* McsEngl.lagOwlg'output!⇒Owloutput,

ontology of lagOwlg

description::
· the-whole output system that models the-input, I call ontology.

name::
* McsEngl.Owlontology,
* McsEngl.lagOwlg'output-system!⇒Owlontology,

Owlontology.Bibo

description::
"Bibo, the Bibliographic Ontology developed by Bruce D’Arcus and Frιdιrick Giasson in 2009, is the first OWL ontology that provides main concepts and properties for describing bibliographic entities and citations. Bibo’s properties have been used since 2011 in the BNB Linked Data Platform, which provides access to the British National Bibliography published as linked open data. The Linked Data Service of the Deutsche Nationalbibliothek also has used Bibo since 2010. Bibo includes five principal classes and 34 subclasses, 32 object properties and 20 sub-properties, 20 datatype properties (10 of which are OWL equivalent properties) and 26 sub-properties. Bibo presents a variety of entities related to the bibliographic world, organized into five principal classes: Agent, Collection, Document, Document Status, Event, and 34 subclasses. The classes Document and Collection accommodate most of the bibliographic sub-entities. Among the subclasses of Document, there are Article, Book, Image, Legal Document, Manuscript, Report, Web page, etc. Among the subclasses of Collection, there are Periodical, Series, and Web site.
Even though the developers of Bibo curated sound definitions of the classes that belong to the bibliographic field, it is worth mentioning that this ontology is not very detailed with respect to the properties required in a bibliographic environment. Although it represents the translations of bibliographic resources, the properties that concern derivative, merging, and absorbed resources, which are provided instead by BIBFRAME, are not represented in Bibo. The aforementioned properties refer to two different categories of relationships very relevant in the bibliographic field: derivative relationships, which concern different editions of the same work and works derived from a pre-existing work, and sequential relationships that include sequels of a monograph, the logical continuation of, or the transformation of another work (Tillett 1989; Green 2001; IFLA 2017, 69-78)."
[{2021-01-26} https://www.isko.org/cyclo/ontologies#6]

name::
* McsEngl.Bibo-Owlontology,
* McsEngl.Owlontology.Bibo,

tool of lagOwlg

description::
* editor,
* reasoner,

name::
* McsEngl.lagOwlg'tool,

addressWpg::
* https://www.w3.org/2001/sw/wiki/Tools,

info-resource of lagOwlg

description::
* https://www.w3.org/TR/owl2-overview/,
* https://www.w3.org/TR/owl2-primer/,
* https://en.wikipedia.org/wiki/Web_Ontology_Language,

name::
* McsEngl.lagOwlg'Infrsc,

DOING of lagOwlg

description::
"... with the ultimate goal of making Web content more accessible to machines."
[{2021-01-08} https://www.w3.org/TR/2012/REC-owl2-overview-20121211/#Introduction]

name::
* McsEngl.lagOwlg'doing,

evoluting of lagOwlg

description::
">evolution of Web Ontology Language:
The Web Ontology Language (OWL) has undergone significant evolution since its inception, with each revision introducing new features and capabilities that have expanded its scope and applicability. Let's delve into the key milestones in the evolution of OWL:

**Early Developments:**
- **1995:** SHOE (Simple HTML Ontology Extensions) is developed at the University of Maryland, laying the groundwork for ontology representation on the web.
- **1996/1997:** Ontobroker emerges from the University of Karlsruhe, providing a knowledge representation platform for building ontologies.
- **1997-1999:** The OIL (Ontology Interchange Level) project, led by an EU consortium, aims to standardize ontology languages for knowledge sharing.

**Standardization and OWL 1:**
- **2000:** The Semantic Web Activity is initiated within the W3C, fostering the development of machine-interpretable web content.
- **2000-2001:** DAML (DARPA Agent Markup Language) and OIL merge into DAML+OIL, a joint US/EU effort to create a standardized ontology language.
- **2001:** The W3C Web Ontology Working Group is established, tasked with developing an ontology language for the Semantic Web.
- **2002:** The first working drafts of OWL are published, based on lessons learned from DAML+OIL.
- **2004:** OWL 1.0 becomes a W3C Recommendation, providing a foundation for representing rich semantics on the web.

**OWL 2 and Extensions:**
- **2007:** OWL 2 is introduced, encompassing three increasingly expressive levels: OWL 2 DL, OWL 2 QL, and OWL 2 Full.
- **2009:** OWL 2 DL becomes a W3C Recommendation, offering full logical reasoning capabilities.
- **2012:** OWL 2 QL and OWL 2 Full receive W3C Recommendations, catering to specific use cases requiring efficient reasoning or unrestricted expressiveness.
- **2014:** The OWL 2 Web Ontology Language Reference is published, providing a comprehensive specification of the language.

**OWL 2 Profiles and Ongoing Developments:**
- **2017:** OWL 2 Document Format (RDF/XML) is introduced, standardizing the serialization of OWL ontologies.
- **2019:** OWL 2 Manchester Syntax and Functional Syntax are standardized, providing alternative ways to represent OWL ontologies.
- **2020:** OWL 2 WSDL Profile is introduced, enabling the integration of OWL ontologies into Web Services Description Language (WSDL).

- **Ongoing:** The W3C OWL Working Group continues to explore extensions and refinements to the language, addressing emerging requirements in knowledge representation and reasoning.

The evolution of OWL reflects the growing need for standardized and expressive ontology languages to support the Semantic Web and enable knowledge-driven applications. As the Semantic Web continues to mature, OWL is expected to play a pivotal role in enabling machines to understand and reason about the vast amount of information available on the web."
[{2023-11-15 retrieved} https://bard.google.com/chat/c03b4ab82bdfc63f?hl=en&pli=1]

">evolution of Web Ontology Language:
The Web Ontology Language (OWL) is a family of knowledge representation languages for authoring ontologies, and it has played a crucial role in the development of the Semantic Web. The evolution of OWL can be traced through its different versions:

1. **OWL 1.0 (2004):** The first version of OWL, known as OWL Lite, OWL DL (Description Logic), and OWL Full, was published by the World Wide Web Consortium (W3C) in 2004. It provided a standardized way to represent and share ontologies on the web, supporting different levels of expressivity.

2. **OWL 1.1 (2007):** OWL 1.1 was designed to address some limitations and issues identified in OWL 1.0. It introduced additional features and improvements while maintaining compatibility with OWL 1.0.

3. **OWL 2 (2009):** The major evolution in the OWL language came with the release of OWL 2 in 2009. OWL 2 is a modular language that includes profiles, each catering to different use cases. The profiles include OWL 2 EL, OWL 2 QL, OWL 2 RL, OWL 2 DL, and OWL 2 Full. This modular approach allows users to choose the appropriate level of expressivity based on their specific needs.
- **OWL 2 EL:** Designed for scalable reasoning.
- **OWL 2 QL:** Suitable for querying large amounts of data.
- **OWL 2 RL:** Focused on rule-based reasoning.
- **OWL 2 DL:** Provides the maximum expressivity while maintaining decidability.
- **OWL 2 Full:** Retains compatibility with OWL Full from OWL 1.0.

4. **OWL 2.0 (2012):** Although there isn't a distinct version known as OWL 2.0, the improvements and changes made in OWL 2 over its predecessors are sometimes collectively referred to as "OWL 2.0."

5. **Continued Updates:** The development of OWL has continued with updates to its specifications, tools, and implementations. The community actively contributes to discussions and improvements in the field of ontology engineering.

Throughout its evolution, OWL has become a standard in the Semantic Web community, providing a means to represent and reason about knowledge in a machine-readable way. The ongoing work in this field aims to address practical issues, enhance usability, and accommodate evolving requirements in various domains."
[{2023-11-15 retrieved} https://chat.openai.com/c/eb82a0bc-071d-4172-99db-e43bc4a740b1]

name::
* McsEngl.evoluting-of-lagOwlg,
* McsEngl.lagOwlg'evoluting,

lagOwlg.OWL2-{2009}

description::
"Relationship to OWL 1
OWL 2 has a very similar overall structure to OWL 1. Looking at Figure 1, almost all the building blocks of OWL 2 were present in OWL 1, albeit possibly under different names.
The central role of RDF/XML, the role of other syntaxes, and the relationships between the Direct and RDF-Based semantics (i.e., the correspondence theorem) have not changed. More importantly, backwards compatibility with OWL 1 is, to all intents and purposes, complete: all OWL 1 Ontologies remain valid OWL 2 Ontologies, with identical inferences in all practical cases (see Section 4.2 of OWL 2 New Features and Rationale [OWL 2 New Features and Rationale]).
OWL 2 adds new functionality with respect to OWL 1. Some of the new features are syntactic sugar (e.g., disjoint union of classes) while others offer new expressivity, including:
* keys;
* property chains;
* richer datatypes, data ranges;
* qualified cardinality restrictions;
* asymmetric, reflexive, and disjoint properties; and
* enhanced annotation capabilities
OWL 2 also defines three new profiles [OWL 2 Profiles] and a new syntax [OWL 2 Manchester Syntax]. In addition, some of the restrictions applicable to OWL DL have been relaxed; as a result, the set of RDF Graphs that can be handled by Description Logics reasoners is slightly larger in OWL 2.
All of the above is documented in detail in the OWL 2 New Features and Rationale document [OWL 2 New Features and Rationale]. The OWL 2 Quick Reference Guide [OWL 2 Quick Guide] also provides an overview of the features of OWL 2, clearly indicating those that are new."
[{2021-01-30} https://www.w3.org/TR/2012/REC-owl2-overview-20121211/#Relationship_to_OWL_1]
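===
· one of the-new features above, the-property-chain, as it looks in Turtle (hypothetical IRIs): the-chain hasParent followed by hasParent implies hasGrandparent.
```
@prefix :    <http://example.org/family#> .   # hypothetical IRIs
@prefix owl: <http://www.w3.org/2002/07/owl#> .

:hasGrandparent owl:propertyChainAxiom ( :hasParent :hasParent ) .
# if :x :hasParent :y and :y :hasParent :z,
# a reasoner infers :x :hasGrandparent :z
```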

name::
* McsEngl.OWL2!⇒lagOwl2,
* McsEngl.lagOwlg.OWL2!⇒lagOwl2,
* McsEngl.lagOwl2,

lagOntg.OBO

description::
"overview of OBO ontology-language:
**OBO Ontology Language** (OBO) is a text-based ontology language that is based on the Web Ontology Language (OWL). It was developed by the Open Biological and Biomedical Ontologies (OBO) Foundry, a community effort to create ontologies for use across biological and medical domains.

OBO is a relatively simple language that is easy to learn and use. It is also highly expressive, allowing users to create ontologies that are both rich and complex.

OBO ontologies are typically stored in OBO files, which are text files that use a specific syntax to define classes, properties, and individuals. OBO files can be easily processed by software tools, such as OBOEdit and Protégé.

OBO is a widely used ontology language in the biological and medical communities. It is used to create ontologies for a wide range of topics, such as genes, proteins, diseases, and drugs. OBO ontologies are used in a variety of applications, such as data integration, knowledge discovery, and machine learning.

Here is an example of a simple OBO ontology:
```
id: GO:0003674
name: molecular_function
def: The biochemical activity of a molecular entity.
is_a: GO:0005488
```

This ontology defines a class called `GO:0003674 molecular_function`. The `is_a` property indicates that this class is a subclass of the `GO:0005488 biological_process` class.

OBO ontologies can be linked together to create more complex ontologies. For example, the following OBO ontology links the `GO:0003674 molecular_function` class to the `KEGG:R01100 glycolysis/gluconeogenesis` pathway:
```
id: KEGG:R01100
name: glycolysis/gluconeogenesis
def: A metabolic pathway that converts glucose to pyruvate and vice versa.
is_a: KEGG:pathway
part_of: KEGG:metabolism
involves: GO:0003674 molecular_function
```

This ontology states that the `KEGG:R01100 glycolysis/gluconeogenesis` pathway involves the `GO:0003674 molecular_function` class. This means that the genes and proteins that are involved in the glycolysis/gluconeogenesis pathway have molecular functions.

OBO is a powerful and flexible ontology language that is widely used in the biological and medical communities. It is a good choice for creating ontologies for a wide range of topics, such as genes, proteins, diseases, and drugs."
[{2023-11-04 retrieved} https://bard.google.com/chat/92897ea538348c4f]

name::
* McsEngl.OBO--ontology-language,
* McsEngl.lagOntg.OBO,

lagOntg.SWRL (link)

lagOntg.TTL-(turtle-format) (link)

lagOntg.SHOE

description::
">evolution of SHOE ontology language:
SHOE (Simple HTML Ontology Extensions) was an early ontology language that used HTML tags to define classes, properties, and instances. It was developed by the University of Maryland in the early 1990s. SHOE was a simple and easy-to-use language, but it was not as expressive as other ontology languages that were developed later. As a result, SHOE was never widely adopted.

SHOE was succeeded by OIL (Ontology Interchange Language), which was developed by the DARPA Agent Markup Language (DAML) program in the late 1990s. OIL was a more expressive language than SHOE, and it was designed to be interoperable with other ontology languages. OIL was never officially standardized, but it was a major influence on the development of OWL (Web Ontology Language), which is the current standard ontology language.

OWL is a powerful and expressive language that is used to define ontologies for a wide variety of applications. OWL is based on RDF (Resource Description Framework), which is a data model for representing information about resources. OWL ontologies are used to define the relationships between different types of resources, and they can be used to infer new information from existing information.

The evolution of ontology languages from SHOE to OWL has been driven by the need for more expressive and interoperable languages. SHOE was a simple language that was easy to use, but it was not as expressive as OIL or OWL. OIL was a more expressive language than SHOE, but it was not as interoperable. OWL is a powerful and expressive language that is both interoperable and easy to use.

Here is a table that summarizes the evolution of SHOE ontology language:
| Language | Developer | Year | Description |
| SHOE | University of Maryland | 1990s | Simple ontology language that used HTML tags to define classes, properties, and instances. |
| OIL | DARPA Agent Markup Language (DAML) program | Late 1990s | More expressive ontology language than SHOE, designed to be interoperable with other ontology languages. |
| OWL | World Wide Web Consortium (W3C) | 2004 | Powerful and expressive ontology language based on RDF, used to define ontologies for a wide variety of applications. |"
[{2023-11-17 retrieved} https://bard.google.com/chat/b44cef8e3fa01ded?hl=en&pli=1]

name::
* McsEngl.SHOE!=Simple-HTML-Ontology-Extensions,
* McsEngl.Simple-HTML-Ontology-Extensions,
* McsEngl.lagOntg.SHOE,

addressWpg::
* https://www.cs.umd.edu/projects/plus/SHOE/html-pages.html,

lagKnlg.rule

description::
· rule-language is a-propositional-language that uses if-then-propositions.

name::
* McsEngl.lagKnlg.011-rule!⇒lagRule,
* McsEngl.lagKnlg.rule!⇒lagRule,
* McsEngl.lagPpsl.rule!⇒lagRule,
* McsEngl.lagRule,
* McsEngl.rule-language!⇒lagRule,

descriptionLong::
"Rules are a simple way of encoding knowledge, and are a drastic simplification of first order logic, for which it is relatively easy to implement inference engines that can process the conditions and draw the right conclusions. A rule system is an implementation of a particular syntax and semantics of rules, which may extend the simple notion described above to include existential quantification, disjunction, logical conjunction, negation, functions, non monotonicity, and many other features. Rule systems have been implemented and studied since the mid-1970s and saw significant uptake in the 1980s during the height of so-called Expert Systems."
[{2021-01-30} https://en.wikipedia.org/wiki/Rule_Interchange_Format]
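===
· a-minimal-sketch of the-if-then form, written in Prolog (one of the-rule-based languages listed in lagRule.SPECIFIC below):
```
% "if X is a parent of Y, and Y is a parent of Z,
%  then X is a grandparent of Z"
grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
```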

manager (link) of lagRule

lagRule.SPECIFIC

description::
* RIF,
* RuleML,
* SWRL,
===
"Rule-based languages instantiate rules when activated by conditions in a set of data. Of all possible activations, some set is selected and the statements belonging to those rules execute. Rule-based languages include:[citation needed]
* awk,
* CLIPS,
* Constraint Handling Rules,
* Drools,
* GOAL agent programming language,
* Jess,
* OPS5,
* Prolog,
* ToonTalk – robots are rules,
* Mathematica,
* XSLT[citation needed],
* Wolfram Language",
[{2021-02-01} https://en.wikipedia.org/wiki/List_of_programming_languages_by_type#Rule-based_languages]

name::
* McsEngl.lagRule.specific,

lagRule.RuleML

description::
"Rules have many uses, coming in a multitude of forms. RuleML is a unifying system of families of languages for Web rules over Web documents and data. RuleML is specified syntactically through schema languages (normatively, in Relax NG), originally developed for XML and transferable to other formats such as JSON. Since Version 1.02, rather than assuming predefined default semantics, RuleML allows partially constrained semantic profiles and fully-specified semantics. PSOA RuleML employs model-theoretic semantics and transformational realizations, bridges between (deductive) graph and relational databases and other data paradigms, formalizes Cypher-like labeled property graphs, and is illustrated via (blockchain, ...) examples and (air traffic control, ...) use cases."
[{2021-01-30} http://wiki.ruleml.org/index.php/Introducing_RuleML]

name::
* McsEngl.RuleML!⇒lagRlml,
* McsEngl.lagRlml,
* McsEngl.lagRule.RuleML!⇒lagRlml,

descriptionLong::
"RuleML is a global initiative, led by a non-profit organization RuleML Inc., that is devoted to advancing research and industry standards design activities in the technical area of rules that are semantic and highly inter-operable. The standards design takes the form primarily of a markup language, also known as RuleML. The research activities include an annual research conference, the RuleML Symposium, also known as RuleML for short. Founded in fall 2000 by Harold Boley, Benjamin Grosof, and Said Tabet, RuleML was originally devoted purely to standards design, but then quickly branched out into the related activities of coordinating research and organizing an annual research conference starting in 2002. The M in RuleML is sometimes interpreted as standing for Markup and Modeling. The markup language was developed to express both forward (bottom-up) and backward (top-down) rules in XML for deduction, rewriting, and further inferential-transformational tasks. It is defined by the Rule Markup Initiative, an open network of individuals and groups from both industry and academia[1] that was formed to develop a canonical Web language for rules using XML markup and transformations from and to other rule standards/systems."
[{2021-01-30} https://en.wikipedia.org/wiki/RuleML]

lagRule.RIF

description::
"The Rule Interchange Format (RIF) is a W3C Recommendation. RIF is part of the infrastructure for the semantic web, along with (principally) SPARQL, RDF and OWL. Although originally envisioned by many as a "rules layer" for the semantic web, in reality the design of RIF is based on the observation that there are many "rules languages" in existence, and what is needed is to exchange rules between them.[1]
RIF includes three dialects, a Core dialect which is extended into a Basic Logic Dialect (BLD) and Production Rule Dialect (PRD).[2]"
[{2021-01-05} https://en.wikipedia.org/wiki/Rule_Interchange_Format]

name::
* McsEngl.RIF!=rule-interchange-format,
* McsEngl.lagKnlg.027-RIF!⇒lagRift,
* McsEngl.lagKnlg.RIF!⇒lagRift,
* McsEngl.lagRift,
* McsEngl.lagRule.RIF!⇒lagRift,
* McsEngl.rule-interchange-format!⇒lagRift,

relation-to-RuleML of lagRift

description::
"3.4 What is the relationship between RuleML and RIF?
RuleML has provided input for RIF on several levels, including the use of 'striped' XML as well as the structuring of rule classes into a family of sublanguages with, e.g., Datalog RuleML partially mappable to the RIF Core Dialect, Derivation RuleML to the RIF Basic Logic Dialect, and the production-rule sublanguage of Reaction RuleML to the RIF Production Rule Dialect. Conversely, RuleML adopted some features that were developed as part of the RIF Working Group such as role tags <if> ... <then> instead of <body> ... <head>. Shared RIF RuleML implementations and use cases are projected to lead to further convergence."
[{2021-01-30} https://www.w3.org/2005/rules/wiki/RIF_FAQ]
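===
· a-structural-sketch of a-RuleML rule using the-role tags named in the-quote (element names follow RuleML's striped-XML style; shown from memory, not the-normative schema):
```
<Implies>
  <if>
    <Atom><Rel>parent</Rel><Var>X</Var><Var>Y</Var></Atom>
  </if>
  <then>
    <Atom><Rel>ancestor</Rel><Var>X</Var><Var>Y</Var></Atom>
  </then>
</Implies>
```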

name::
* McsEngl.lagRift'relation-to-RuleML,

info-resource of lagRift

description::
* https://www.w3.org/TR/rif-overview/,

name::
* McsEngl.lagRift'Infrsc,

lagRule.SWRL

description::
"The Semantic Web Rule Language (SWRL) is a proposed language for the Semantic Web that can be used to express rules as well as logic, combining OWL DL or OWL Lite with a subset of the Rule Markup Language (itself a subset of Datalog).[1]
The specification was submitted in May 2004 to the W3C by the National Research Council of Canada, Network Inference (since acquired by webMethods), and Stanford University in association with the Joint US/EU ad hoc Agent Markup Language Committee. The specification was based on an earlier proposal for an OWL rules language.[2][3]
SWRL has the full power of OWL DL, but at the price of decidability and practical implementations.[4] However, decidability can be regained by restricting the form of admissible rules, typically by imposing a suitable safety condition.[5]
Rules are of the form of an implication between an antecedent (body) and consequent (head). The intended meaning can be read as: whenever the conditions specified in the antecedent hold, then the conditions specified in the consequent must also hold."
[{2021-01-28} https://en.wikipedia.org/wiki/Semantic_Web_Rule_Language]
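===
· the-classic example from the-SWRL-submission, in the-human-readable syntax: a-parent's brother is an-uncle.
```
hasParent(?x1,?x2) ∧ hasBrother(?x2,?x3) ⇒ hasUncle(?x1,?x3)
```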

name::
* McsEngl.SWRL!=semantic-web-rule-language,
* McsEngl.lagOntg.Swrl!⇒lagSwrl,
* McsEngl.lagSwrl,
* McsEngl.semantic-web-rule-language!⇒lagSwrl,

lagKnlg.semantic-web

description::
· semantic-web-language is a-language we use to make the-web readable by machines.
· from web-of-documents-(HTML) to web-of-data-(RDF) to web-of-concepts-(McsHitp).

name::
* McsEngl.lagKnlg.012-semantic-web!⇒lagSweb,
* McsEngl.lagKnlg.semantic-web!⇒lagSweb,
* McsEngl.lagSweb,
* McsEngl.semantic-web-language!⇒lagSweb,

descriptionLong::
"In addition to the classic “Web of documents” W3C is helping to build a technology stack to support a “Web of data,” the sort of data you find in databases. The ultimate goal of the Web of data is to enable computers to do more useful work and to develop systems that can support trusted interactions over the network. The term “Semantic Web” refers to W3C’s vision of the Web of linked data. Semantic Web technologies enable people to create data stores on the Web, build vocabularies, and write rules for handling data. Linked data are empowered by technologies such as RDF, SPARQL, OWL, and SKOS."
[{2021-01-03} https://www.w3.org/standards/semanticweb/]
===
"In the original concept of the Semantic Web, software agents would have processed contents, found information from different sources, reasoned about data, and produced output."
[{2021-01-26} https://www.isko.org/cyclo/ontologies]

linked-data of lagSweb

description::
"The Semantic Web is a Web of Data — of dates and titles and part numbers and chemical properties and any other data one might conceive of. The collection of Semantic Web technologies (RDF, OWL, SKOS, SPARQL, etc.) provides an environment where application can query that data, draw inferences using vocabularies, etc.
However, to make the Web of Data a reality, it is important to have the huge amount of data on the Web available in a standard format, reachable and manageable by Semantic Web tools. Furthermore, not only does the Semantic Web need access to data, but relationships among data should be made available, too, to create a Web of Data (as opposed to a sheer collection of datasets). This collection of interrelated datasets on the Web can also be referred to as Linked Data.
To achieve and create Linked Data, technologies should be available for a common format (RDF), to make either conversion or on-the-fly access to existing databases (relational, XML, HTML, etc). It is also important to be able to setup query endpoints to access that data more conveniently. W3C provides a palette of technologies (RDF, GRDDL, POWDER, RDFa, the upcoming R2RML, RIF, SPARQL) to get access to the data."
[{2021-01-03} https://www.w3.org/standards/semanticweb/data.html]

name::
* McsEngl.lagLddt,
* McsEngl.lagLddt!=linked-data-language,
* McsEngl.lagSweb'linked-data!⇒lagLddt,
* McsEngl.linked-data-of-lagSweb!⇒lagLddt,

descriptionLong::
"The term Linked Data refers to a set of best practices for publishing structured data on the Web. These principles have been coined by Tim Berners-Lee in the design issue note Linked Data. The principles are:
* Use URIs as names for things
* Use HTTP URIs so that people can look up those names.
* When someone looks up a URI, provide useful information.
* Include links to other URIs. so that they can discover more things.
The idea behind these principles is on the one hand side, to use standards for the representation and the access to data on the Web. On the other hand, the principles propagate to set hyperlinks between data from different sources. These hyperlinks connect all Linked Data into a single global data graph, similar as the hyperlinks on the classic Web connect all HTML documents into a single global information space. Thus, Linked Data is to spreadsheets and databases what the Web of hypertext documents is to word processor files. The Linked Open Data cloud diagrams give an overview of the linked data sets that are available on the Web."
[{2021-01-03} https://www.w3.org/wiki/LinkedData]
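===
· a-minimal-sketch of the-four principles in Turtle: HTTP URIs name the-things, and a-link to an-external dataset (here DBpedia) lets a-consumer discover more; the example.org URI is hypothetical:
```
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

<http://example.org/people/alice>                           # hypothetical URI
    a foaf:Person ;
    foaf:name "Alice" ;
    foaf:based_near <http://dbpedia.org/resource/Athens> .  # link to another dataset
```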

info-resource of lagLddt

description::
* https://json-ld.org/,
* https://github.com/digitalbazaar/jsonld.js,

name::
* McsEngl.lagLddt'Infrsc,

ontology of lagSweb

description::
"On the Semantic Web, vocabularies define the concepts and relationships (also referred to as “terms”) used to describe and represent an area of concern. Vocabularies are used to classify the terms that can be used in a particular application, characterize possible relationships, and define possible constraints on using those terms. In practice, vocabularies can be very complex (with several thousands of terms) or very simple (describing one or two concepts only).
There is no clear division between what is referred to as “vocabularies” and “ontologies”. The trend is to use the word “ontology” for more complex, and possibly quite formal collection of terms, whereas “vocabulary” is used when such strict formalism is not necessarily used or only in a very loose sense. Vocabularies are the basic building blocks for inference techniques on the Semantic Web."
[{2021-01-03} https://www.w3.org/standards/semanticweb/ontology]

name::
* McsEngl.lagSweb'ontology,
* McsEngl.lagSweb'vocabulary,
* McsEngl.ontology@lagSweb,
* McsEngl.vocabulary@lagSweb,

evaluation of lagSweb

description::
"The name of the OWL language itself provides a good example of the value of a Semantic Web. If one were to search for "OWL" using the Internet today most of the pages retrieved would be on the bird Owl rather than the standard OWL. With a Semantic Web it would be possible to specify the concept "Web Ontology Language" and the user would not need to worry about the various possible acronyms or synonyms as part of the search. Likewise, the user would not need to worry about homonyms crowding the search results with irrelevant data such as information about birds of prey as in this simple example."
[{2021-02-08} https://en.wikipedia.org/wiki/Frame_(artificial_intelligence)#Implementations]

name::
* McsEngl.lagSweb'evaluation,

query of lagSweb

description::
"“Query” in the Semantic Web context means technologies and protocols that can programmatically retrieve information from the Web of Data.
What is Query Used For?
The Semantic Web is a Web of data — of dates and titles and part numbers and chemical properties and any other data one might conceive of. RDF provides the foundation for publishing and linking your data. Various technologies allow you to embed data in documents (RDFa, GRDDL) or expose what you have in SQL databases, or make it available as RDF files.
However, just as relational databases or XML need specific query languages (SQL and XQuery, respectively), the Web of Data, typically represented using RDF as a data format, needs its own, RDF-specific query language and facilities. This is provided by the SPARQL query language and the accompanying protocols. SPARQL makes it possible to send queries and receive results, e.g., through HTTP or SOAP.
Technically, SPARQL queries are based on (triple) patterns. RDF can be seen as a set of relationships among resources (i.e., RDF triples); SPARQL queries provide one or more patterns against such relationships. These triple patterns are similar to RDF triples, except that one or more of the constituent resource references are variables. A SPARQL engine would return the resources for all triples that match these patterns.
Using SPARQL consumers of the Web of Data can extract possibly complex information (i.e., existing resource references and their relationships) which are returned, for example, in a table format. This table can be incorporated into another Web page; using this approach SPARQL provides a powerful tool to build, for example, complex mash-up sites or search engines that include data stemming from the Semantic Web."
[{2021-01-03} https://www.w3.org/standards/semanticweb/query]
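===
· a-minimal SPARQL query with one triple-pattern; the-variables ?person and ?name match any triple whose predicate is foaf:name:
```
PREFIX foaf: <http://xmlns.com/foaf/0.1/>

SELECT ?person ?name
WHERE { ?person foaf:name ?name . }
```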

name::
* McsEngl.lagSweb'query,

inference of lagSweb

description::
"Broadly speaking, inference on the Semantic Web can be characterized by discovering new relationships. On the Semantic Web, data is modeled as a set of (named) relationships between resources. “Inference” means that automatic procedures can generate new relationships based on the data and based on some additional information in the form of a vocabulary, e.g., a set of rules. Whether the new relationships are explicitly added to the set of data, or are returned at query time, is an implementation issue.
On the Semantic Web, the source of such extra information can be defined via vocabularies or rule sets. Both of these approaches draw upon knowledge representation techniques. In general, ontologies concentrate on classification methods, putting an emphasis on defining 'classes', 'subclasses', on how individual resources can be associated to such classes, and characterizing the relationships among classes and their instances. Rules, on the other hand, concentrate on defining a general mechanism on discovering and generating new relationships based on existing ones, much like logic programs, like Prolog, do. In the family of Semantic Web related W3C Recommendations RDFS, OWL, or SKOS are the tools of choice to define ontologies, whereas RIF has been developed to cover rule based approaches."
[{2021-01-03} https://www.w3.org/standards/semanticweb/inference.html]
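===
· a-minimal-sketch of vocabulary-based inference (hypothetical IRIs): from the-two stated triples, an-RDFS-reasoner derives a-third triple that was never written down.
```
@prefix :     <http://example.org/zoo#> .   # hypothetical IRIs
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

:Dog rdfs:subClassOf :Animal .
:rex a :Dog .
# inferred: :rex a :Animal .
```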

name::
* McsEngl.lagSweb'inference,

evoluting of lagSweb

description::
"evolution of semantic-web:
The evolution of the Semantic Web can be divided into three main phases:

**Phase 1: The early years (1994-2004)**
In this phase, the foundations of the Semantic Web were laid. Tim Berners-Lee and other researchers developed the core technologies of the Semantic Web, such as the Resource Description Framework (RDF), the Web Ontology Language (OWL), and the SPARQL query language.

**Phase 2: Standardization and deployment (2004-2013)**
In this phase, the Semantic Web technologies were standardized by the World Wide Web Consortium (W3C) and began to be deployed in real-world applications. One of the most notable deployments was the Linked Data initiative, which aims to publish all public data on the Web in a linked and machine-readable format.

**Phase 3: Emerging applications and growth (2013-present)**
In this phase, the Semantic Web is increasingly being used to develop new and innovative applications. For example, the Semantic Web is being used to power intelligent search engines, personalized recommendation systems, and automated decision-making systems.

Here are some specific examples of the evolution of the Semantic Web:
* **2001:** Tim Berners-Lee, James Hendler, and Ora Lassila publish the article "The Semantic Web" in Scientific American, which popularizes the concept of the Semantic Web.
* **2004:** The W3C publishes the RDF and OWL standards.
* **2006:** The W3C publishes the SPARQL query language standard.
* **2007:** The Linked Data initiative is launched.
* **2011:** The W3C publishes the Schema.org vocabulary, which is a set of microdata tags that can be used to mark up the meaning of content on web pages.
* **2013:** The W3C publishes the SHACL Shapes Constraint Language, which is a language for constraining the structure and meaning of RDF data.
* **2014:** The W3C publishes the JSON-LD format, which is a way to encode RDF data in JSON.
* **2017:** The W3C publishes the PROV Ontology, which is a vocabulary for describing the provenance of data.
* **2020:** The W3C publishes the Solid project, which is a platform for personal data management using the Semantic Web.

The Semantic Web is still under development, but it is already being used to power a wide range of applications. As the Semantic Web continues to evolve, it is likely to play an increasingly important role in our lives.

Here are some examples of how the Semantic Web is being used today:
* **Search engines:** The Semantic Web is being used to develop more intelligent search engines that can understand the meaning of queries and return more relevant results.
* **Recommendation systems:** The Semantic Web is being used to develop personalized recommendation systems that can recommend products, services, and content to users based on their interests.
* **Automated decision-making systems:** The Semantic Web is being used to develop automated decision-making systems that can make decisions based on complex data sets.
* **Internet of Things (IoT):** The Semantic Web is being used to develop IoT applications that can collect, process, and analyze data from sensors and devices in a more intelligent way.
* **Life sciences:** The Semantic Web is being used to develop applications for life sciences research, such as drug discovery and clinical trials.
* **Government:** The Semantic Web is being used to develop government applications, such as e-government services and fraud detection.

The Semantic Web has the potential to revolutionize the way we interact with the Web. By making data more machine-readable and understandable, the Semantic Web can enable new and innovative applications that were not possible before."
[{2023-11-04 retrieved} https://bard.google.com/chat/d9d14f2406f370df]

name::
* McsEngl.evoluting-of-lagSweb,
* McsEngl.lagSweb'evoluting,

lagSweb.RDF-{2004}

description::
"RDF is graph-based because it represents data as a collection of triples, which can be naturally seen as a graph structure. A triple consists of a subject, a predicate, and an object, which correspond to a node, an edge, and another node in a graph. Each node and edge can be identified by a URI, which makes RDF data globally unique and interoperable. RDF graphs can express complex relationships and semantics among resources, and can be easily merged and queried using standard languages and tools. For more information, you can refer to the following sources: Resource Description Framework - Wikipedia, Data Modeling with RDF(S) — GraphDB 10.0.0 documentation - Ontotext, and RDF 1.2 Concepts and Abstract Syntax - World Wide Web Consortium (W3C)."
[{2023-11-18 retrieved} https://www.bing.com/search?q=Bing+AI&showconv=1&FORM=hpcodx]
"The Resource Description Framework (RDF) is a family of World Wide Web Consortium (W3C) specifications[1] originally designed as a metadata data model. It has come to be used as a general method for conceptual description or modeling of information that is implemented in web resources, using a variety of syntax notations and data serialization formats. It is also used in knowledge management applications.
RDF was adopted as a W3C recommendation in 1999. The RDF 1.0 specification was published in 2004, the RDF 1.1 specification in 2014."
[{2021-01-03} https://en.wikipedia.org/wiki/Resource_Description_Framework]

name::
* McsEngl.RDF!=resource-description-framework!⇒lagRdfw,
* McsEngl.lagKnlg.014-RDF!⇒lagRdfw,
* McsEngl.lagKnlg.RDF!⇒lagRdfw,
* McsEngl.lagRdf!⇒lagRdfw,
* McsEngl.lagRdfw!=Resource-Description-FrameWork, {2021-01-05},
* McsEngl.resource-description-framework!⇒lagRdfw,
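
· a-minimal-sketch in Python with the rdflib library (an assumption of this example, not part of the RDF specifications), showing that one triple is one labeled edge of a graph; the example.org IRIs are hypothetical:

from rdflib import Graph, URIRef, Literal

# hypothetical example IRIs; any absolute IRIs would do
bob = URIRef("http://example.org/bob")
knows = URIRef("http://xmlns.com/foaf/0.1/knows")
alice = URIRef("http://example.org/alice")

g = Graph()
g.add((bob, knows, alice))  # subject-node --predicate-edge--> object-node
g.add((bob, URIRef("http://xmlns.com/foaf/0.1/name"), Literal("Bob")))
print(g.serialize(format="turtle"))  # one concrete syntax for the graph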

input of lagRdfw

description::
"The Resource Description Framework (RDF) is a framework for expressing information about resources. Resources can be anything, including documents, people, physical objects, and abstract concepts.
RDF is intended for situations in which information on the Web needs to be processed by applications, rather than being only displayed to people. RDF provides a common framework for expressing this information so it can be exchanged between applications without loss of meaning. Since it is a common framework, application designers can leverage the availability of common RDF parsers and processing tools. The ability to exchange information between different applications means that the information may be made available to applications other than those for which it was originally created."
[{2021-01-03} https://www.w3.org/TR/2014/NOTE-rdf11-primer-20140624/]

name::
* McsEngl.Rdfinput,
* McsEngl.lagRdfw'input!⇒Rdfinput,

info-resource of Rdfinput

description::
"Resources can be anything, including documents, people, physical objects, and abstract concepts."
[{2021-01-03} https://www.w3.org/TR/2014/NOTE-rdf11-primer-20140624/]

name::
* McsEngl.Rdfinput'resource,
* McsEngl.Rdfiresource,
* McsEngl.lagRdfw'resource!⇒Rdfiresource,
* McsEngl.resource@lagRdfw,

Rdfiresource.subject

description::
· subject is-called the-left resource of an-Rdf-property.

name::
* McsEngl.Rdfiresource.subject!⇒Rdfisubject,
* McsEngl.Rdfisubject,

Rdfiresource.object

description::
· object is-called the-right resource of an-Rdf-property.

name::
* McsEngl.Rdfiobject,
* McsEngl.Rdfiresource.object!⇒Rdfiobject,

Rdfiresource.class

description::
· a-generic Rdfiresource is-called Rdficlass.
===
"Resources may be divided into groups called classes.
The members of a class are known as instances of the class.
Classes are themselves resources.
They are often identified by IRIs and may be described using RDF properties.
The rdf:type property may be used to state that a resource is an instance of a class."
[{2021-01-05} https://www.w3.org/TR/2014/REC-rdf-schema-20140225/#ch_classes]

name::
* McsEngl.Rdficlass,
* McsEngl.Rdfinput'class!⇒Rdficlass,
* McsEngl.Rdfiresource.class!⇒Rdficlass,
* McsEngl.class@lagRdfw,
* McsEngl.lagRdfw'class!⇒Rdficlass,

Rdficlass.instance

description::
· an-individual member of an-Rdficlass is-called Rdfiinstance.
===
"The members of a class are known as instances of the class.
... The rdf:type property may be used to state that a resource is an instance of a class."
[{2021-01-05} https://www.w3.org/TR/2014/REC-rdf-schema-20140225/#ch_classes]

name::
* McsEngl.Rdficlass.instance!⇒Rdfiinstance,
* McsEngl.Rdfiinstance,

Rdficlass.extension

description::
· the-set of all Rdfiinstances of an-Rdficlass.

name::
* McsEngl.Rdficlass.extension!⇒Rdficlass-extension,
* McsEngl.Rdficlass-extension,

descriptionLong::
"RDF distinguishes between a class and the set of its instances. Associated with each class is a set, called the class extension of the class, which is the set of the instances of the class. Two classes may have the same set of instances but be different classes. For example, the tax office may define the class of people living at the same address as the editor of this document. The Post Office may define the class of people whose address has the same zip code as the address of the author. It is possible for these classes to have exactly the same instances, yet to have different properties. Only one of the classes has the property that it was defined by the tax office, and only the other has the property that it was defined by the Post Office.
A class may be a member of its own class extension and may be an instance of itself."
[{2021-01-05} https://www.w3.org/TR/2014/REC-rdf-schema-20140225/#ch_classes]

Rdficlass.subclass

description::
· a-more-specific class, not an-individual-instance, of an-Rdficlass.
===
"If a class C is a subclass of a class C', then all instances of C will also be instances of C'. The rdfs:subClassOf property may be used to state that one class is a subclass of another. The term super-class is used as the inverse of subclass. If a class C' is a super-class of a class C, then all instances of C are also instances of C'."
[{2021-01-05} https://www.w3.org/TR/2014/REC-rdf-schema-20140225/#ch_classes]

name::
* McsEngl.Rdficlass.subclass!⇒Rdfisubclass,
* McsEngl.Rdfisubclass,
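
· a-minimal-sketch in Python with the rdflib library (assumed), stating an-instance and a-subclass and applying one naive rdfs:subClassOf entailment step; EX is a-hypothetical vocabulary:

from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.Painting, RDFS.subClassOf, EX.Artwork))
g.add((EX.MonaLisa, RDF.type, EX.Painting))

# if ?x rdf:type ?c and ?c rdfs:subClassOf ?c2, infer ?x rdf:type ?c2
# (one pass only; a full RDFS closure would repeat this to a fixpoint)
inferred = []
for x, _, c in g.triples((None, RDF.type, None)):
    for _, _, c2 in g.triples((c, RDFS.subClassOf, None)):
        inferred.append((x, RDF.type, c2))
for t in inferred:
    g.add(t)

print((EX.MonaLisa, RDF.type, EX.Artwork) in g)  # True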

Rdficlass.rdfs:Resource

description::
"All things described by RDF are called resources, and are instances of the class rdfs:Resource. This is the class of everything. All other classes are subclasses of this class. rdfs:Resource is an instance of rdfs:Class."
[{2021-01-05} https://www.w3.org/TR/2014/REC-rdf-schema-20140225/#ch_classes]

name::
* McsEngl.Rdficlass.rdfs:Resource,
* McsEngl.rdfs:Resource,

Rdficlass.rdfs:Class

description::
"This is the class of resources that are RDF classes. rdfs:Class is an instance of rdfs:Class."
[{2021-01-05} https://www.w3.org/TR/2014/REC-rdf-schema-20140225/#ch_classes]

name::
* McsEngl.Rdficlass.rdfs:Class,
* McsEngl.rdfs:Class,

Rdficlass.rdfs:Literal

description::
"The class rdfs:Literal is the class of literal values such as strings and integers. Property values such as textual strings are examples of RDF literals.
rdfs:Literal is an instance of rdfs:Class. rdfs:Literal is a subclass of rdfs:Resource."
[{2021-01-05} https://www.w3.org/TR/2014/REC-rdf-schema-20140225/#ch_classes]

name::
* McsEngl.Rdficlass.rdfs:Literal,
* McsEngl.rdfs:Literal,

Rdficlass.rdfs:Datatype

description::
"rdfs:Datatype is the class of datatypes. All instances of rdfs:Datatype correspond to the RDF model of a datatype described in the RDF Concepts specification [RDF11-CONCEPTS]. rdfs:Datatype is both an instance of and a subclass of rdfs:Class. Each instance of rdfs:Datatype is a subclass of rdfs:Literal."
[{2021-01-05} https://www.w3.org/TR/2014/REC-rdf-schema-20140225/#ch_classes]

name::
* McsEngl.Rdficlass.rdfs:Datatype,
* McsEngl.rdfs:Datatype,

Rdficlass.rdf:langString

description::
"The class rdf:langString is the class of language-tagged string values. rdf:langString is an instance of rdfs:Datatype and a subclass of rdfs:Literal."
[{2021-01-05} https://www.w3.org/TR/2014/REC-rdf-schema-20140225/#ch_classes]

name::
* McsEngl.Rdficlass.rdf:langString,
* McsEngl.rdf:langString,

Rdficlass.rdf:HTML

description::
"The class rdf:HTML is the class of HTML literal values. rdf:HTML is an instance of rdfs:Datatype and a subclass of rdfs:Literal."
[{2021-01-05} https://www.w3.org/TR/2014/REC-rdf-schema-20140225/#ch_classes]

name::
* McsEngl.Rdficlass.rdf:HTML,
* McsEngl.rdf:HTML,

Rdficlass.rdf:XMLLiteral

description::
"The class rdf:XMLLiteral is the class of XML literal values. rdf:XMLLiteral is an instance of rdfs:Datatype and a subclass of rdfs:Literal."
[{2021-01-05} https://www.w3.org/TR/2014/REC-rdf-schema-20140225/#ch_classes]

name::
* McsEngl.Rdficlass.rdf:XMLLiteral,
* McsEngl.rdf:XMLLiteral,

Rdficlass.rdf:Property

description::
"rdf:Property is the class of RDF properties. rdf:Property is an instance of rdfs:Class."
[{2021-01-05} https://www.w3.org/TR/2014/REC-rdf-schema-20140225/#ch_classes]

name::
* McsEngl.Rdficlass.rdf:Property,
* McsEngl.rdf:Property,

Rdficlass.rdf:Statement

description::
"The class of RDF statements."
[{2021-01-05} https://www.w3.org/TR/2014/REC-rdf-schema-20140225/]

name::
* McsEngl.Rdficlass.rdf:Statement,
* McsEngl.rdf:Statement,

Rdficlass.rdf:Bag

description::
"The class of unordered containers."
[{2021-01-05} https://www.w3.org/TR/2014/REC-rdf-schema-20140225/]

name::
* McsEngl.Rdficlass.rdf:Bag,
* McsEngl.rdf:Bag,

Rdficlass.rdf:Seq

description::
"The class of ordered containers."
[{2021-01-05} https://www.w3.org/TR/2014/REC-rdf-schema-20140225/]

name::
* McsEngl.Rdficlass.rdf:Seq,
* McsEngl.rdf:Seq,

Rdficlass.rdf:Alt

description::
"The rdf:Alt class is the class of RDF 'Alternative' containers. It is a subclass of rdfs:Container. Whilst formally it is no different from an rdf:Seq or an rdf:Bag, the rdf:Alt class is used conventionally to indicate to a human reader that typical processing will be to select one of the members of the container. The first member of the container, i.e. the value of the rdf:_1 property, is the default choice.
...The class of containers of alternatives."
[{2021-01-05} https://www.w3.org/TR/2014/REC-rdf-schema-20140225/]

name::
* McsEngl.Rdficlass.rdf:Alt,
* McsEngl.rdf:Alt,

Rdficlass.rdfs:Container

description::
"The rdfs:Container class is a super-class of the RDF Container classes, i.e. rdf:Bag, rdf:Seq, rdf:Alt.
...The class of RDF containers."
[{2021-01-05} https://www.w3.org/TR/2014/REC-rdf-schema-20140225/]

name::
* McsEngl.Rdficlass.rdfs:Container,
* McsEngl.rdfs:Container,

Rdficlass.rdfs:ContainerMembershipProperty

description::
"The rdfs:ContainerMembershipProperty class has as instances the properties rdf:_1, rdf:_2, rdf:_3 ... that are used to state that a resource is a member of a container. rdfs:ContainerMembershipProperty is a subclass of rdf:Property. Each instance of rdfs:ContainerMembershipProperty is an rdfs:subPropertyOf the rdfs:member property.
Given a container C, a triple of the form:
C rdf:_nnn O
where nnn is the decimal representation of an integer greater than 0 with no leading zeros, states that O is a member of the container C.
Container membership properties may be applied to resources other than containers.
The class of container membership properties, rdf:_1, rdf:_2, ..., all of which are sub-properties of 'member'."
[{2021-01-05} https://www.w3.org/TR/2014/REC-rdf-schema-20140225/]

name::
* McsEngl.Rdficlass.rdfs:ContainerMembershipProperty,
* McsEngl.rdfs:ContainerMembershipProperty,

Rdficlass.rdf:List

description::
"The class of RDF Lists."
[{2021-01-05} https://www.w3.org/TR/2014/REC-rdf-schema-20140225/]

name::
* McsEngl.Rdficlass.rdf:List,
* McsEngl.Rdfilist,
* McsEngl.rdf:List,

Rdfilist.rdf:nil

description::
"The resource rdf:nil is an instance of rdf:List that can be used to represent an empty list or other list-like structure.
A triple of the form:
L rdf:rest rdf:nil
states that L is an instance of rdf:List that has one item; that item can be indicated using the rdf:first property."
[{2021-01-05} https://www.w3.org/TR/2014/REC-rdf-schema-20140225/#ch_nil]

name::
* McsEngl.Rdfilist.rdf:nil,
* McsEngl.rdf:nil,
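
· a-minimal-sketch in Python with the rdflib library (assumed), building an-rdf:List; the Collection helper writes the rdf:first / rdf:rest / rdf:nil triples for us (the ex: names are hypothetical):

from rdflib import BNode, Graph, Literal, Namespace
from rdflib.collection import Collection

EX = Namespace("http://example.org/")
g = Graph()
head = BNode()  # the first list cell
Collection(g, head, [Literal(1), Literal(2), Literal(3)])
g.add((EX.doc, EX.chapters, head))
print(g.serialize(format="turtle"))  # Turtle abbreviates the list as ( 1 2 3 )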

Rdfiresource.node

description::
"The set of nodes of an RDF graph is the set of subjects and objects of triples in the graph.
It is possible for a predicate IRI to also occur as a node in the same graph."
[{2021-01-06} https://www.w3.org/TR/2014/REC-rdf11-concepts-20140225/#dfn-rdf-triple]
===
"In Semantic Web languages, such as RDF and OWL, a property is a binary relation: instances of properties link two individuals. Often we refer to the second individual as the "value" or to both both individuals as "arguments" [See note on vocabulary]."
[{2021-01-06} https://www.w3.org/TR/2006/NOTE-swbp-n-aryRelations-20060412/#general]

name::
* McsEngl.Rdfinode,
* McsEngl.Rdfiresource.node!⇒Rdfinode,
* McsEngl.argument@lagRdfw,

Rdfiresource.blank-node

description::
· blank-node is an-Rdfiresource whose existence is asserted, but which is NOT identified by an-IRI.
"Blank nodes are treated as simply indicating the existence of a thing, without using an IRI to identify any particular thing. This is not the same as assuming that the blank node indicates an 'unknown' IRI."
[{2021-01-05} https://www.w3.org/TR/2014/REC-rdf11-mt-20140225/#blank-nodes]

name::
* McsEngl.blank-node-Rdfiresource,
* McsEngl.Rdfiresource.blank-node,
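
· a-minimal-sketch in Python with the rdflib library (assumed), asserting the existence of a-thing without identifying it; the ex: names are hypothetical:

from rdflib import BNode, Graph, Namespace, URIRef
from rdflib.namespace import RDF

EX = Namespace("http://example.org/")
mona_lisa = URIRef("http://dbpedia.org/resource/Mona_Lisa")

g = Graph()
tree = BNode()  # a thing we can describe without giving it an IRI
g.add((tree, RDF.type, EX.CypressTree))
g.add((mona_lisa, EX.hasInBackground, tree))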

Rdfiresource.property

description::
"The predicate itself is an IRI and denotes a property, that is, a resource that can be thought of as a binary relation."
[{2021-01-06} https://www.w3.org/TR/2014/REC-rdf11-concepts-20140225/#dfn-property]
"The RDF Concepts and Abstract Syntax specification [RDF11-CONCEPTS] describes the concept of an RDF property as a relation between subject resources and object resources."
[{2021-01-05} https://www.w3.org/TR/2014/REC-rdf-schema-20140225/#ch_properties]

name::
* McsEngl.Rdfinput'property,
* McsEngl.Rdfiproperty,
* McsEngl.Rdfiresource.property!⇒Rdfiproperty,
* McsEngl.lagRdfw'property!⇒Rdfiproperty,
* McsEngl.property@lagRdfw,

subject of Rdfiproperty

description::
· subject-of-Rdfiproperty is the-Rdfiresource on its left.

name::
* McsEngl.Rdfiproperty'subject,
* McsEngl.subject@lagRdfw,

object of Rdfiproperty

description::
· object-of-Rdfiproperty is the-Rdfiresource on its right.
===
"In Semantic Web languages, such as RDF and OWL, a property is a binary relation: instances of properties link two individuals. Often we refer to the second individual as the "value" or to both both individuals as "arguments" [See note on vocabulary]."
[{2021-01-06} https://www.w3.org/TR/2006/NOTE-swbp-n-aryRelations-20060412/#general]

name::
* McsEngl.Rdfiproperty'object,
* McsEngl.object@lagRdfw,
* McsEngl.value@lagRdfw,

domain of Rdfiproperty

description::
· domain-of-Rdfiproperty is its generic subject.

name::
* McsEngl.Rdfiproperty'domain,

range of Rdfiproperty

description::
· range-of-Rdfiproperty is its generic object.

name::
* McsEngl.Rdfiproperty'range,

Rdfiproperty.rdfs:range

description::
"rdfs:range is an instance of rdf:Property that is used to state that the values of a property are instances of one or more classes.
The triple
P rdfs:range C
states that P is an instance of the class rdf:Property, that C is an instance of the class rdfs:Class and that the resources denoted by the objects of triples whose predicate is P are instances of the class C.
Where P has more than one rdfs:range property, then the resources denoted by the objects of triples with predicate P are instances of all the classes stated by the rdfs:range properties.
The rdfs:range property can be applied to itself. The rdfs:range of rdfs:range is the class rdfs:Class. This states that any resource that is the value of an rdfs:range property is an instance of rdfs:Class.
The rdfs:range property is applied to properties. This can be represented in RDF using the rdfs:domain property. The rdfs:domain of rdfs:range is the class rdf:Property. This states that any resource with an rdfs:range property is an instance of rdf:Property."
[{2021-01-05} https://www.w3.org/TR/2014/REC-rdf-schema-20140225/#ch_properties]

name::
* McsEngl.Rdfiproperty.rdfs:range,
* McsEngl.rdfs:range,

Rdfiproperty.rdfs:domain

description::
"rdfs:domain is an instance of rdf:Property that is used to state that any resource that has a given property is an instance of one or more classes.
A triple of the form:
P rdfs:domain C
states that P is an instance of the class rdf:Property, that C is an instance of the class rdfs:Class and that the resources denoted by the subjects of triples whose predicate is P are instances of the class C.
Where a property P has more than one rdfs:domain property, then the resources denoted by subjects of triples with predicate P are instances of all the classes stated by the rdfs:domain properties.
The rdfs:domain property may be applied to itself. The rdfs:domain of rdfs:domain is the class rdf:Property. This states that any resource with an rdfs:domain property is an instance of rdf:Property.
The rdfs:range of rdfs:domain is the class rdfs:Class. This states that any resource that is the value of an rdfs:domain property is an instance of rdfs:Class."
[{2021-01-05} https://www.w3.org/TR/2014/REC-rdf-schema-20140225/#ch_properties]

name::
* McsEngl.Rdfiproperty.rdfs:domain,
* McsEngl.rdfs:domain,
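
· a-minimal-sketch in Python with the rdflib library (assumed), declaring rdfs:domain and rdfs:range for a-hypothetical ex:painted property:

from rdflib import Graph, Namespace
from rdflib.namespace import RDFS

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.painted, RDFS.domain, EX.Painter))   # subjects of ex:painted are Painters
g.add((EX.painted, RDFS.range, EX.Painting))   # objects of ex:painted are Paintings
g.add((EX.leonardo, EX.painted, EX.monaLisa))
# an RDFS reasoner may now entail:
#   EX.leonardo rdf:type EX.Painter
#   EX.monaLisa rdf:type EX.Painting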

Rdfiproperty.rdf:type

description::
"rdf:type is an instance of rdf:Property that is used to state that a resource is an instance of a class.
A triple of the form:
R rdf:type C
states that C is an instance of rdfs:Class and R is an instance of C.
The rdfs:domain of rdf:type is rdfs:Resource. The rdfs:range of rdf:type is rdfs:Class."
[{2021-01-05} https://www.w3.org/TR/2014/REC-rdf-schema-20140225/#ch_properties]

name::
* McsEngl.Rdfiproperty.rdf:type,
* McsEngl.rdf:type,

Rdfiproperty.rdfs:subClassOf

description::
"The property rdfs:subClassOf is an instance of rdf:Property that is used to state that all the instances of one class are instances of another.
A triple of the form:
C1 rdfs:subClassOf C2
states that C1 is an instance of rdfs:Class, C2 is an instance of rdfs:Class and C1 is a subclass of C2. The rdfs:subClassOf property is transitive.
The rdfs:domain of rdfs:subClassOf is rdfs:Class. The rdfs:range of rdfs:subClassOf is rdfs:Class."
[{2021-01-05} https://www.w3.org/TR/2014/REC-rdf-schema-20140225/#ch_properties]

name::
* McsEngl.Rdfiproperty.rdfs:subClassOf,
* McsEngl.rdfs:subClassOf,

Rdfiproperty.rdfs:subPropertyOf

description::
"The property rdfs:subPropertyOf is an instance of rdf:Property that is used to state that all resources related by one property are also related by another.
A triple of the form:
P1 rdfs:subPropertyOf P2
states that P1 is an instance of rdf:Property, P2 is an instance of rdf:Property and P1 is a subproperty of P2. The rdfs:subPropertyOf property is transitive.
The rdfs:domain of rdfs:subPropertyOf is rdf:Property. The rdfs:range of rdfs:subPropertyOf is rdf:Property."
[{2021-01-05} https://www.w3.org/TR/2014/REC-rdf-schema-20140225/#ch_properties]

name::
* McsEngl.Rdfiproperty.rdfs:subPropertyOf,
* McsEngl.rdfs:subPropertyOf,

Rdfiproperty.rdfs:label

description::
"rdfs:label is an instance of rdf:Property that may be used to provide a human-readable version of a resource's name.
A triple of the form:
R rdfs:label L
states that L is a human readable label for R.
The rdfs:domain of rdfs:label is rdfs:Resource. The rdfs:range of rdfs:label is rdfs:Literal.
Multilingual labels are supported using the language tagging facility of RDF literals."
[{2021-01-05} https://www.w3.org/TR/2014/REC-rdf-schema-20140225/#ch_properties]

name::
* McsEngl.Rdfiproperty.rdfs:label,
* McsEngl.rdfs:label,

Rdfiproperty.rdfs:comment

description::
"rdfs:comment is an instance of rdf:Property that may be used to provide a human-readable description of a resource.
A triple of the form:
R rdfs:comment L
states that L is a human readable description of R.
The rdfs:domain of rdfs:comment is rdfs:Resource. The rdfs:range of rdfs:comment is rdfs:Literal.
A textual comment helps clarify the meaning of RDF classes and properties. Such in-line documentation complements the use of both formal techniques (Ontology and rule languages) and informal (prose documentation, examples, test cases). A variety of documentation forms can be combined to indicate the intended meaning of the classes and properties described in an RDF vocabulary. Since RDF vocabularies are expressed as RDF graphs, vocabularies defined in other namespaces may be used to provide richer documentation.
Multilingual documentation is supported through use of the language tagging facility of RDF literals."
[{2021-01-05} https://www.w3.org/TR/2014/REC-rdf-schema-20140225/#ch_properties]

name::
* McsEngl.Rdfiproperty.rdfs:comment,
* McsEngl.rdfs:comment,

Rdfiproperty.rdfs:member

description::
"rdfs:member is an instance of rdf:Property that is a super-property of all the container membership properties i.e. each container membership property has an rdfs:subPropertyOf relationship to the property rdfs:member.
The rdfs:domain of rdfs:member is rdfs:Resource. The rdfs:range of rdfs:member is rdfs:Resource.
...A member of the subject resource. (domain: rdfs:Resource; range: rdfs:Resource)"
[{2021-01-05} https://www.w3.org/TR/2014/REC-rdf-schema-20140225/]

name::
* McsEngl.Rdfiproperty.rdfs:member,
* McsEngl.rdfs:member,

Rdfiproperty.rdf:first

description::
"The first item in the subject RDF list. rdf:List rdfs:Resource"
[{2021-01-05} https://www.w3.org/TR/2014/REC-rdf-schema-20140225/#ch_sumproperties]

name::
* McsEngl.Rdfiproperty.rdf:first,
* McsEngl.rdf:first,

Rdfiproperty.rdf:rest

description::
"The rest of the subject RDF list after the first item. rdf:List rdf:List"
[{2021-01-05} https://www.w3.org/TR/2014/REC-rdf-schema-20140225/#ch_sumproperties]

name::
* McsEngl.Rdfiproperty.rdf:rest,
* McsEngl.rdf:rest,

Rdfiproperty.rdfs:seeAlso

description::
"Further information about the subject resource. rdfs:Resource rdfs:Resource"
[{2021-01-05} https://www.w3.org/TR/2014/REC-rdf-schema-20140225/#ch_sumproperties]

name::
* McsEngl.Rdfiproperty.rdfs:seeAlso,
* McsEngl.rdfs:seeAlso,

Rdfiproperty.rdfs:isDefinedBy

description::
"The definition of the subject resource. rdfs:Resource rdfs:Resource"
[{2021-01-05} https://www.w3.org/TR/2014/REC-rdf-schema-20140225/#ch_sumproperties]

name::
* McsEngl.Rdfiproperty.rdfs:isDefinedBy,
* McsEngl.rdfs:isDefinedBy,

Rdfiproperty.rdf:value

description::
"Idiomatic property used for structured values. rdfs:Resource rdfs:Resource"
[{2021-01-05} https://www.w3.org/TR/2014/REC-rdf-schema-20140225/#ch_sumproperties]

name::
* McsEngl.Rdfiproperty.rdf:value,
* McsEngl.rdf:value,

Rdfiproperty.rdf:subject

description::
"The subject of the subject RDF statement. rdf:Statement rdfs:Resource"
[{2021-01-05} https://www.w3.org/TR/2014/REC-rdf-schema-20140225/#ch_sumproperties]

name::
* McsEngl.Rdfiproperty.rdf:subject,
* McsEngl.rdf:subject,

Rdfiproperty.rdf:predicate

description::
"The predicate of the subject RDF statement. rdf:Statement rdfs:Resource"
[{2021-01-05} https://www.w3.org/TR/2014/REC-rdf-schema-20140225/#ch_sumproperties]

name::
* McsEngl.Rdfiproperty.rdf:predicate,
* McsEngl.rdf:predicate,

Rdfiproperty.rdf:object

description::
"The object of the subject RDF statement. rdf:Statement rdfs:Resource"
[{2021-01-05} https://www.w3.org/TR/2014/REC-rdf-schema-20140225/#ch_sumproperties]

name::
* McsEngl.Rdfiproperty.rdf:object,
* McsEngl.rdf:object,

Rdfiproperty.rdf:_1

description::
"RDF also uses properties called rdf:_1, rdf:_2, rdf:_3... etc., each of which is both a sub-property of rdfs:member and an instance of the class rdfs:ContainerMembershipProperty. "
[{2021-01-05} https://www.w3.org/TR/2014/REC-rdf-schema-20140225/#ch_sumproperties]

name::
* McsEngl.Rdfiproperty.rdf:_1,
* McsEngl.rdf:_1,

Rdfiproperty.rdf:_2

description::
"RDF also uses properties called rdf:_1, rdf:_2, rdf:_3... etc., each of which is both a sub-property of rdfs:member and an instance of the class rdfs:ContainerMembershipProperty. "
[{2021-01-05} https://www.w3.org/TR/2014/REC-rdf-schema-20140225/#ch_sumproperties]

name::
* McsEngl.Rdfiproperty.rdf:_2,
* McsEngl.rdf:_2,
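
· a-minimal-sketch in Python with the rdflib library (assumed), using the container-membership properties rdf:_1, rdf:_2 on an-rdf:Seq; the ex: names are hypothetical:

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

RDFNS = Namespace("http://www.w3.org/1999/02/22-rdf-syntax-ns#")
EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.readingList, RDF.type, RDF.Seq))
g.add((EX.readingList, RDFNS["_1"], Literal("RDF 1.1 Primer")))
g.add((EX.readingList, RDFNS["_2"], Literal("RDF 1.1 Concepts")))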

Rdfiresource.referent

description::
"The resource denoted by an IRI is also called its referent."
[{2021-01-06} https://www.w3.org/TR/2014/REC-rdf11-concepts-20140225/#dfn-referent]

name::
* McsEngl.Rdfireferent,
* McsEngl.Rdfiresource.referent!⇒Rdfireferent,
* McsEngl.referent@lagRdfw,

Rdfiresource.RDF-source

description::
"We informally use the term RDF source to refer to a persistent yet mutable source or container of RDF graphs.
An RDF source is a resource that may be said to have a state that can change over time.
A snapshot of the state can be expressed as an RDF graph.
For example, any web document that has an RDF-bearing representation may be considered an RDF source.
Like all resources, RDF sources may be named with IRIs and therefore described in other RDF graphs."
[{2021-01-07} https://www.w3.org/TR/2014/REC-rdf11-concepts-20140225/#dfn-rdf-source]

name::
* McsEngl.Rdfiresource.RDF-source,
* McsEngl.RDF-source@lagRdfw,

term of Rdfinput

description::
"IRIs, literals and blank nodes are collectively known as RDF terms.
IRIs, literals and blank nodes are distinct and distinguishable.
For example, http://example.org/ as a string literal is neither equal to http://example.org/ as an IRI, nor to a blank node with the blank node identifier http://example.org/."
[{2021-01-06} https://www.w3.org/TR/2014/REC-rdf11-concepts-20140225/#dfn-rdf-triple]

name::
* McsEngl.Rdfinput'term!⇒Rdfiterm,
* McsEngl.Rdfiterm,
* McsEngl.term@lagRdfw,

Rdfiterm.IRI

description::
"An IRI (Internationalized Resource Identifier) within an RDF graph is a Unicode string [UNICODE] that conforms to the syntax defined in RFC 3987 [RFC3987].
IRIs in the RDF abstract syntax must be absolute, and may contain a fragment identifier.
IRI equality: Two IRIs are equal if and only if they are equivalent under Simple String Comparison according to section 5.1 of [RFC3987]. Further normalization must not be performed when comparing IRIs for equality."
===
"The abbreviation IRI is short for "International Resource Identifier". An IRI identifies a resource. The URLs (Uniform Resource Locators) that people use as Web addresses are one form of IRI. Other forms of IRI provide an identifier for a resource without implying its location or how to access it. The notion of IRI is a generalization of URI (Uniform Resource Identifier), allowing non-ASCII characters to be used in the IRI character string. IRIs are specified in RFC 3987 [RFC3987].
IRIs can appear in all three positions of a triple.
As mentioned, IRIs are used to identify resources such as documents, people, physical objects, and abstract concepts. For example, the IRI for Leonardo da Vinci in DBpedia is:
http://dbpedia.org/resource/Leonardo_da_Vinci
The IRI for an INA video about the Mona Lisa entitled 'La Joconde à Washington' in Europeana is:
http://data.europeana.eu/item/04802/243FA8618938F4117025F17A8B813C5F9AA4D619
IRIs are global identifiers, so other people can re-use this IRI to identify the same thing. For example, the following IRI is used by many people as an RDF property to state an acquaintance relationship between people:
http://xmlns.com/foaf/0.1/knows
RDF is agnostic about what the IRI represents. However, IRIs may be given meaning by particular vocabularies or conventions. For example, DBpedia uses IRIs of the form http://dbpedia.org/resource/Name to denote the thing described by the corresponding Wikipedia article."
[{2021-01-03} https://www.w3.org/TR/2014/NOTE-rdf11-primer-20140624/#section-IRI]

name::
* McsEngl.IRI!=internationalized-resource-identifier,
* McsEngl.IRI@lagRdfw,
* McsEngl.Rdfiiri,
* McsEngl.Rdfiterm.IRI!⇒Rdfiiri,
* McsEngl.internationalized-resource-identifier!⇒Rdfiiri,
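
· a-minimal-sketch in Python with the rdflib library (assumed), using the DBpedia and FOAF IRIs quoted above; note that IRI equality is plain string comparison:

from rdflib import Graph, URIRef

leonardo = URIRef("http://dbpedia.org/resource/Leonardo_da_Vinci")
knows = URIRef("http://xmlns.com/foaf/0.1/knows")

g = Graph()
g.add((leonardo, knows, URIRef("http://example.org/someone")))  # hypothetical object

# simple string comparison, no normalization:
print(URIRef("http://example.org/A") == URIRef("http://example.org/a"))  # False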

relation-to-URL of Rdfiiri

description::
"URIs and IRIs: IRIs are a generalization of URIs [RFC3986] that permits a wider range of Unicode characters.
Every absolute URI and URL is an IRI, but not every IRI is an URI.
When IRIs are used in operations that are only defined for URIs, they must first be converted according to the mapping defined in section 3.1 of [RFC3987].
A notable example is retrieval over the HTTP protocol.
The mapping involves UTF-8 encoding of non-ASCII characters, %-encoding of octets not allowed in URIs, and Punycode-encoding of domain names."
[{2021-01-06} https://www.w3.org/TR/2014/REC-rdf11-concepts-20140225/#section-IRIs]

name::
* McsEngl.IRI-relation-to-URL,
* McsEngl.Rdfiiri'relation-to-URL,
* McsEngl.URL-relation-to-IRI,

Rdfiiri.datatype (link)
Rdfiiri.namespace (link)
Rdfiterm.literal

description::
"Literals are used for values such as strings, numbers, and dates.
A literal in an RDF graph consists of two or three elements:
* a lexical form, being a Unicode [UNICODE] string, which should be in Normal Form C [NFC],
* a datatype IRI, being an IRI identifying a datatype that determines how the lexical form maps to a literal value, and
* if and only if the datatype IRI is http://www.w3.org/1999/02/22-rdf-syntax-ns#langString, a non-empty language tag as defined by [BCP47]. The language tag must be well-formed according to section 2.2.9 of [BCP47].
A literal is a language-tagged string if the third element is present. Lexical representations of language tags may be converted to lower case. The value space of language tags is always in lower case.
Please note that concrete syntaxes may support simple literals consisting of only a lexical form without any datatype IRI or language tag. Simple literals are syntactic sugar for abstract syntax literals with the datatype IRI http://www.w3.org/2001/XMLSchema#string. Similarly, most concrete syntaxes represent language-tagged strings without the datatype IRI because it always equals http://www.w3.org/1999/02/22-rdf-syntax-ns#langString."
[{2021-01-06} https://www.w3.org/TR/2014/REC-rdf11-concepts-20140225/#section-Graph-Literal]

name::
* McsEngl.Rdfiliteral,
* McsEngl.Rdfiterm.literal!⇒Rdfiliteral,
* McsEngl.literal@lagRdfw,
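
· a-minimal-sketch in Python with the rdflib library (assumed), building the two or three elements of a-literal:

from rdflib import Literal
from rdflib.namespace import XSD

title = Literal("La Joconde", lang="fr")        # lexical form + language tag
pi = Literal("3.14159", datatype=XSD.decimal)   # lexical form + datatype IRI
print(title.language)  # fr
print(pi.datatype)     # http://www.w3.org/2001/XMLSchema#decimal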

lexical-form of Rdfiliteral

description::
"A literal in an RDF graph consists of two or three elements:
* a lexical form, being a Unicode [UNICODE] string, which should be in Normal Form C [NFC],
* a datatype IRI, being an IRI identifying a datatype that determines how the lexical form maps to a literal value, and
* if and only if the datatype IRI is http://www.w3.org/1999/02/22-rdf-syntax-ns#langString, a non-empty language tag as defined by [BCP47]. The language tag must be well-formed according to section 2.2.9 of [BCP47]."
[{2021-01-07} https://www.w3.org/TR/2014/REC-rdf11-concepts-20140225/#dfn-literal]

name::
* McsEngl.Rdfiliteral'lexical-form,
* McsEngl.lexical-form@lagRdfw,

datatype-IRI of Rdfiliteral

description::
"A literal in an RDF graph consists of two or three elements:
* a lexical form, being a Unicode [UNICODE] string, which should be in Normal Form C [NFC],
* a datatype IRI, being an IRI identifying a datatype that determines how the lexical form maps to a literal value, and
* if and only if the datatype IRI is http://www.w3.org/1999/02/22-rdf-syntax-ns#langString, a non-empty language tag as defined by [BCP47]. The language tag must be well-formed according to section 2.2.9 of [BCP47]."
[{2021-01-07} https://www.w3.org/TR/2014/REC-rdf11-concepts-20140225/#dfn-literal]

name::
* McsEngl.Rdfiliteral'datatype-IRI,
* McsEngl.datatype-IRI@lagRdfw,

datatype of datatype-IRI

description::
"Datatypes are used with RDF literals to represent values such as strings, numbers and dates.
The datatype abstraction used in RDF is compatible with XML Schema [XMLSCHEMA11-2].
Any datatype definition that conforms to this abstraction may be used in RDF, even if not defined in terms of XML Schema.
RDF re-uses many of the XML Schema built-in datatypes, and defines two additional non-normative datatypes, rdf:HTML and rdf:XMLLiteral.
The list of datatypes supported by an implementation is determined by its recognized datatype IRIs."
[{2021-01-07} https://www.w3.org/TR/2014/REC-rdf11-concepts-20140225/#section-Datatypes]

name::
* McsEngl.Rdfiliteral'datatype-IRI'datatype,
* McsEngl.datatype@lagRdfw,

lexical-space of datatype of datatype-IRI

description::
"The lexical space of a datatype is a set of Unicode [UNICODE] strings."
[{2021-01-07} https://www.w3.org/TR/2014/REC-rdf11-concepts-20140225/#dfn-lexical-space]

name::
* McsEngl.datatype@lagRdfw'lexical-space,
* McsEngl.lexical-space@lagRdfw,

value-space of datatype of datatype-IRI

description::
"The lexical-to-value mapping of a datatype is a set of pairs whose first element belongs to the lexical space, and the second element belongs to the value space of the datatype.
Each member of the lexical space is paired with exactly one value, and is a lexical representation of that value.
The mapping can be seen as a function from the lexical space to the value space."
[{2021-01-07} https://www.w3.org/TR/2014/REC-rdf11-concepts-20140225/#dfn-lexical-to-value-mapping]

name::
* McsEngl.datatype@lagRdfw'value-space,
* McsEngl.value-space@lagRdfw,

lexical-to-value-mapping of datatype of datatype-IRI

description::
"The lexical-to-value mapping of a datatype is a set of pairs whose first element belongs to the lexical space, and the second element belongs to the value space of the datatype.
Each member of the lexical space is paired with exactly one value, and is a lexical representation of that value.
The mapping can be seen as a function from the lexical space to the value space."
[{2021-01-07} https://www.w3.org/TR/2014/REC-rdf11-concepts-20140225/#dfn-lexical-to-value-mapping]

name::
* McsEngl.datatype@lagRdfw'lexical-to-value-mapping,
* McsEngl.lexical-to-value-mapping@lagRdfw,
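
· a-minimal-sketch in Python with the rdflib library (assumed); Literal.toPython() applies the lexical-to-value mapping of the datatype:

from rdflib import Literal
from rdflib.namespace import XSD

n = Literal("42", datatype=XSD.integer)
print(n.toPython())  # 42, the value paired with the lexical form "42"

# "fortytwo" is outside the lexical space of xsd:integer, so this is an
# ill-typed literal; per RDF 1.1 an implementation must still accept it,
# and rdflib does (it keeps the lexical form and warns)
bad = Literal("fortytwo", datatype=XSD.integer)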

datatype-IRI.recognized

description::
"Datatypes are identified by IRIs.
If D is a set of IRIs which are used to refer to datatypes, then the elements of D are called recognized datatype IRIs.
Recognized IRIs have fixed referents.
If any IRI of the form http://www.w3.org/2001/XMLSchema#xxx is recognized, it must refer to the RDF-compatible XSD type named xsd:xxx for every XSD type listed in section 5.1.
Furthermore, the following IRIs are allocated for non-normative datatypes:
- The IRI http://www.w3.org/1999/02/22-rdf-syntax-ns#XMLLiteral refers to the datatype rdf:XMLLiteral
- The IRI http://www.w3.org/1999/02/22-rdf-syntax-ns#HTML refers to the datatype rdf:HTML"
[{2021-01-07} https://www.w3.org/TR/2014/REC-rdf11-concepts-20140225/#dfn-recognized-datatype-iris]

name::
* McsEngl.Rdfiliteral'datatype-IRI.recognized,
* McsEngl.recognized-datatype-IRI@lagRdfw,

datatype-IRI.XML-Schema

description::
"IRIs of the form http://www.w3.org/2001/XMLSchema#xxx, where xxx is the name of a datatype, denote the built-in datatypes defined in XML Schema 1.1 Part 2: Datatypes [XMLSCHEMA11-2].
The XML Schema built-in types listed in the following table are the RDF-compatible XSD types.
Their use is recommended."
[{2021-01-07} https://www.w3.org/TR/2014/REC-rdf11-concepts-20140225/#xsd-datatypes]

name::
* McsEngl.datatype-IRI.XML-Schema,
* McsEngl.datatype-IRI.xsd,

xsd:string::
* sValue: Character strings (but not all Unicode character strings)
* sType: Core type
* McsEngl.xsd:string@lagRdfw,
* McsEngl.string//xsd,

xsd:boolean::
* sValue: true, false
* sType: Core type
* McsEngl.xsd:boolean@lagRdfw,
* McsEngl.boolean//xsd,

xsd:decimal::
* sValue: Arbitrary-precision decimal numbers
* sType: Core type
* McsEngl.xsd:decimal@lagRdfw,
* McsEngl.decimal//xsd,

xsd:integer::
* sValue: Arbitrary-size integer numbers
* sType: Core type
* McsEngl.xsd:integer@lagRdfw,
* McsEngl.integer//xsd,

xsd:double::
* sValue: 64-bit floating point numbers incl. ±Inf, ±0, NaN
* sType: IEEE floating-point number
* McsEngl.xsd:double@lagRdfw,
* McsEngl.double//xsd,

xsd:float::
* sValue: 32-bit floating point numbers incl. ±Inf, ±0, NaN
* sType: IEEE floating-point number
* McsEngl.xsd:float@lagRdfw,
* McsEngl.float//xsd,

xsd:date::
* sValue: Dates (yyyy-mm-dd) with or without timezone
* sType: Time and date
* McsEngl.xsd:date@lagRdfw,
* McsEngl.date//xsd,

xsd:time::
* sValue: Times (hh:mm:ss.sss…) with or without timezone
* sType: Time and date
* McsEngl.xsd:time@lagRdfw,
* McsEngl.time//xsd,

xsd:dateTime::
* sValue: Date and time with or without timezone
* sType: Time and date
* McsEngl.xsd:dateTime@lagRdfw,
* McsEngl.dateTime//xsd,

xsd:dateTimeStamp::
* sValue: Date and time with required timezone
* sType: Time and date
* McsEngl.xsd:dateTimeStamp@lagRdfw,
* McsEngl.dateTimeStamp//xsd,

xsd:gYear::
* sValue: Gregorian calendar year
* sType: Recurring and partial date
* McsEngl.xsd:gYear@lagRdfw,
* McsEngl.gYear//xsd,

xsd:gMonth::
* sValue: Gregorian calendar month
* sType: Recurring and partial date
* McsEngl.xsd:gMonth@lagRdfw,
* McsEngl.gMonth//xsd,

xsd:gDay::
* sValue: Gregorian calendar day of the month
* sType: Recurring and partial date
* McsEngl.xsd:gDay@lagRdfw,
* McsEngl.gDay//xsd,

xsd:gYearMonth::
* sValue: Gregorian calendar year and month
* sType: Recurring and partial date
* McsEngl.xsd:gYearMonth@lagRdfw,
* McsEngl.gYearMonth//xsd,

xsd:gMonthDay::
* sValue: Gregorian calendar month and day
* sType: Recurring and partial date
* McsEngl.xsd:gMonthDay@lagRdfw,
* McsEngl.gMonthDay//xsd,

xsd:duration::
* sValue: Duration of time
* sType: Recurring and partial date
* McsEngl.xsd:duration@lagRdfw,
* McsEngl.duration//xsd,

xsd:yearMonthDuration::
* sValue: Duration of time (months and years only)
* sType: Recurring and partial date
* McsEngl.xsd:yearMonthDuration@lagRdfw,
* McsEngl.yearMonthDuration//xsd,

xsd:dayTimeDuration::
* sValue: Duration of time (days, hours, minutes, seconds only)
* sType: Recurring and partial date
* McsEngl.xsd:dayTimeDuration@lagRdfw,
* McsEngl.dayTimeDuration//xsd,

xsd:byte::
* sValue: -128…+127 (8 bit)
* sType: Limited-range integer number
* McsEngl.xsd:byte@lagRdfw,
* McsEngl.byte//xsd,

xsd:short::
* sValue: -32768…+32767 (16 bit)
* sType: Limited-range integer number
* McsEngl.xsd:short@lagRdfw,
* McsEngl.short//xsd,

xsd:int::
* sValue: -2147483648…+2147483647 (32 bit)
* sType: Limited-range integer number
* McsEngl.xsd:int@lagRdfw,
* McsEngl.int//xsd,

xsd:long::
* sValue: -9223372036854775808…+9223372036854775807 (64 bit)
* sType: Limited-range integer number
* McsEngl.xsd:long@lagRdfw,
* McsEngl.long//xsd,

xsd:unsignedByte::
* sValue: 0…255 (8 bit)
* sType: Limited-range integer number
* McsEngl.xsd:unsignedByte@lagRdfw,
* McsEngl.unsignedByte//xsd,

xsd:unsignedShort::
* sValue: 0…65535 (16 bit)
* sType: Limited-range integer number
* McsEngl.xsd:unsignedShort@lagRdfw,
* McsEngl.unsignedShort//xsd,

xsd:unsignedInt::
* sValue: 0…4294967295 (32 bit)
* sType: Limited-range integer number
* McsEngl.xsd:unsignedInt@lagRdfw,
* McsEngl.unsignedInt//xsd,

xsd:unsignedLong::
* sValue: 0…18446744073709551615 (64 bit)
* sType: Limited-range integer number
* McsEngl.xsd:unsignedLong@lagRdfw,
* McsEngl.unsignedLong//xsd,

xsd:positiveInteger::
* sValue: Integer numbers >0
* sType: Limited-range integer number
* McsEngl.xsd:positiveInteger@lagRdfw,
* McsEngl.positiveInteger//xsd,

xsd:nonNegativeInteger::
* sValue: Integer numbers ≥0
* sType: Limited-range integer number
* McsEngl.xsd:nonNegativeInteger@lagRdfw,
* McsEngl.nonNegativeInteger//xsd,

xsd:negativeInteger::
* sValue: Integer numbers <0
* sType: Limited-range integer number
* McsEngl.xsd:negativeInteger@lagRdfw,
* McsEngl.negativeInteger//xsd,

xsd:nonPositiveInteger::
* sValue: Integer numbers ≤0
* sType: Limited-range integer number
* McsEngl.xsd:nonPositiveInteger@lagRdfw,
* McsEngl.nonPositiveInteger//xsd,

xsd:hexBinary::
* sValue: Hex-encoded binary data
* sType: Encoded binary data
* McsEngl.xsd:hexBinary@lagRdfw,
* McsEngl.hexBinary//xsd,

xsd:base64Binary::
* sValue: Base64-encoded binary data
* sType: Encoded binary data
* McsEngl.xsd:base64Binary@lagRdfw,
* McsEngl.base64Binary//xsd,

xsd:anyURI::
* sValue: Absolute or relative URIs and IRIs
* sType: Miscellaneous XSD type
* McsEngl.xsd:anyURI@lagRdfw,
* McsEngl.anyURI//xsd,

xsd:language::
* sValue: Language tags per [BCP47]
* sType: Miscellaneous XSD type
* McsEngl.xsd:language@lagRdfw,
* McsEngl.language//xsd,

xsd:normalizedString::
* sValue: Whitespace-normalized strings
* sType: Miscellaneous XSD type
* McsEngl.xsd:normalizedString@lagRdfw,
* McsEngl.normalizedString//xsd,

xsd:token::
* sValue: Tokenized strings
* sType: Miscellaneous XSD type
* McsEngl.xsd:token@lagRdfw,
* McsEngl.token//xsd,

xsd:NMTOKEN::
* sValue: XML NMTOKENs
* sType: Miscellaneous XSD type
* McsEngl.xsd:NMTOKEN@lagRdfw,
* McsEngl.NMTOKEN//xsd,

xsd:Name::
* sValue: XML Names
* sType: Miscellaneous XSD type
* McsEngl.xsd:Name@lagRdfw,
* McsEngl.Name//xsd,

xsd:NCName::
* sValue: XML NCNames
* sType: Miscellaneous XSD type
* McsEngl.xsd:NCName@lagRdfw,
* McsEngl.NCName//xsd,

datatype-IRI.rdf:HTML

description::
"RDF provides for HTML content as a possible literal value. This allows markup in literal values. Such content is indicated in an RDF graph using a literal whose datatype is set to rdf:HTML. This datatype is defined as non-normative because it depends on [DOM4], a specification that has not yet reached W3C Recommendation status."
[{2021-01-07} https://www.w3.org/TR/2014/REC-rdf11-concepts-20140225/#section-html]

name::
* McsEngl.datatype-IRI.rdf:HTML,
* McsEngl.rdf:HTML@lagRdfw,

datatype-IRI.rdf:XMLLiteral

description::
"RDF provides for XML content as a possible literal value. Such content is indicated in an RDF graph using a literal whose datatype is set to rdf:XMLLiteral. This datatype is defined as non-normative because it depends on [DOM4], a specification that has not yet reached W3C Recommendation status."
[{2021-01-07} https://www.w3.org/TR/2014/REC-rdf11-concepts-20140225/#section-XMLLiteral]

name::
* McsEngl.datatype-IRI.rdf:XMLLiteral,
* McsEngl.rdf:XMLLiteral@lagRdfw,

language-tag of Rdfiliteral

description::
"A literal in an RDF graph consists of two or three elements:
* a lexical form, being a Unicode [UNICODE] string, which should be in Normal Form C [NFC],
* a datatype IRI, being an IRI identifying a datatype that determines how the lexical form maps to a literal value, and
* if and only if the datatype IRI is http://www.w3.org/1999/02/22-rdf-syntax-ns#langString, a non-empty language tag as defined by [BCP47]. The language tag must be well-formed according to section 2.2.9 of [BCP47]."
[{2021-01-07} https://www.w3.org/TR/2014/REC-rdf11-concepts-20140225/#dfn-literal]

name::
* McsEngl.Rdfiliteral'language-tag,
* McsEngl.language-tag@lagRdfw,

value of Rdfiliteral

description::
"The literal value associated with a literal is:
* If the literal is a language-tagged string, then the literal value is a pair consisting of its lexical form and its language tag, in that order.
* If the literal's datatype IRI is in the set of recognized datatype IRIs, let d be the referent of the datatype IRI.
** If the literal's lexical form is in the lexical space of d, then the literal value is the result of applying the lexical-to-value mapping of d to the lexical form.
** Otherwise, the literal is ill-typed and no literal value can be associated with the literal. Such a case produces a semantic inconsistency but is not syntactically ill-formed. Implementations must accept ill-typed literals and produce RDF graphs from them. Implementations may produce warnings when encountering ill-typed literals.
* If the literal's datatype IRI is not in the set of recognized datatype IRIs, then the literal value is not defined by this specification."
[{2021-01-07} https://www.w3.org/TR/2014/REC-rdf11-concepts-20140225/#dfn-literal-value]

name::
* McsEngl.Rdfiliteral'value,
* McsEngl.literal-value@lagRdfw,
* McsEngl.value-of-literal@lagRdfw,

Rdfiliteral.language-tagged-string

description::
"A literal is a language-tagged string if the third element is present.
Lexical representations of language tags may be converted to lower case.
The value space of language tags is always in lower case."
[{2021-01-07} https://www.w3.org/TR/2014/REC-rdf11-concepts-20140225/#dfn-language-tagged-string]

name::
* McsEngl.Rdfiliteral.language-tagged-string,
* McsEngl.language-tagged-string@lagRdfw,

Rdfiliteral.simple

description::
"Please note that concrete syntaxes may support simple literals consisting of only a lexical form without any datatype IRI or language tag.
Simple literals are syntactic sugar for abstract syntax literals with the datatype IRI http://www.w3.org/2001/XMLSchema#string.
Similarly, most concrete syntaxes represent language-tagged strings without the datatype IRI because it always equals http://www.w3.org/1999/02/22-rdf-syntax-ns#langString."
[{2021-01-07} https://www.w3.org/TR/2014/REC-rdf11-concepts-20140225/#dfn-simple-literal]

name::
* McsEngl.Rdfiliteral.simple,
* McsEngl.simple-literal@lagRdfw,

triple of Rdfinput

description::
"An RDF triple consists of three components:
* the subject, which is an IRI or a blank node
* the predicate, which is an IRI
* the object, which is an IRI, a literal or a blank node
An RDF triple is conventionally written in the order subject, predicate, object."
[{2021-01-05} https://www.w3.org/TR/2014/REC-rdf11-concepts-20140225/#dfn-rdf-triple]

name::
* McsEngl.Rdfinput'triple!⇒Rdfitriple,
* McsEngl.Rdfinput'statement!⇒Rdfitriple,
* McsEngl.Rdfitriple,
* McsEngl.statement@lagRdfw,
* McsEngl.triple@lagRdfw,

graph of Rdfinput

description::
"An RDF graph is a set of RDF triples."
[{2021-01-06} https://www.w3.org/TR/2014/REC-rdf11-concepts-20140225/#dfn-rdf-graph]

name::
* McsEngl.Rdfigraph,
* McsEngl.Rdfinput'graph!⇒Rdfigraph,
* McsEngl.graph@lagRdfw,
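
· a-minimal-sketch in Python with the rdflib library (assumed), showing the SET semantics of an-RDF-graph; the example.org IRIs are hypothetical:

from rdflib import Graph, URIRef

t = (URIRef("http://example.org/s"),
     URIRef("http://example.org/p"),
     URIRef("http://example.org/o"))
g = Graph()
g.add(t)
g.add(t)       # adding the same triple again...
print(len(g))  # ...still 1: a graph is a set, not a list, of triples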

dataset of Rdfinput

description::
"RDF datasets, defined in RDF Concepts [RDF11-CONCEPTS], package up zero or more named RDF graphs along with a single unnamed, default RDF graph."
[{2021-01-05} https://www.w3.org/TR/2014/REC-rdf11-mt-20140225/#blank-nodes]

name::
* McsEngl.Rdfidataset,
* McsEngl.Rdfinput'dataset!⇒Rdfidataset,
* McsEngl.dataset@lagRdfw,
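
· a-minimal-sketch in Python with the rdflib library (assumed): a-Dataset packages a-default-graph plus zero or more named-graphs; the example.org names are hypothetical:

from rdflib import Dataset, Literal, URIRef

ds = Dataset()
s, p = URIRef("http://example.org/s"), URIRef("http://example.org/p")
ds.add((s, p, Literal("in the unnamed default graph")))
g1 = ds.graph(URIRef("http://example.org/graph1"))  # a named graph
g1.add((s, p, Literal("in the named graph")))
print(ds.serialize(format="trig"))  # TriG can serialize whole datasets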

output of lagRdfw

description::
"An RDF document is a document that encodes an RDF graph or RDF dataset in a concrete RDF syntax, such as Turtle [TURTLE], RDFa [RDFA-PRIMER], JSON-LD [JSON-LD], or TriG [TRIG]. RDF documents enable the exchange of RDF graphs and RDF datasets between systems."
[{2021-01-06} https://www.w3.org/TR/2014/REC-rdf11-concepts-20140225/#dfn-rdf-document]

name::
* McsEngl.Rdfdoc,
* McsEngl.lagRdfw'document!⇒Rdfdoc,
* McsEngl.lagRdfw'output!⇒Rdfdoc,
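
· a-minimal-sketch in Python with the rdflib library (assumed): one abstract graph, several concrete RDF-documents:

from rdflib import Graph, Literal, URIRef

g = Graph()
g.add((URIRef("http://example.org/bob"),
       URIRef("http://xmlns.com/foaf/0.1/name"),
       Literal("Bob")))
print(g.serialize(format="turtle"))  # Turtle document
print(g.serialize(format="nt"))      # N-Triples document
print(g.serialize(format="xml"))     # RDF/XML document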

output-dataset of lagRdfw

description::
· RDF-datasets are collections of RDF-graphs, comprising a-default-graph and zero or more named-graphs.
[{2021-01-03} https://www.w3.org/TR/2014/REC-rdf11-concepts-20140225/]

name::
* McsEngl.lagRdfw'output-dataset,

output-graph of lagRdfw

description::
· RDF-graphs are SETS of subject-predicate-object triples.
[{2021-01-03} https://www.w3.org/TR/2014/REC-rdf11-concepts-20140225/]

name::
* McsEngl.Rdfograph,
* McsEngl.lagRdfw'graph!⇒Rdfograph,

name of Rdfograph

description::
"The IRI associated with the graph is called the graph name."
[{2021-01-03} https://www.w3.org/TR/2014/NOTE-rdf11-primer-20140624/#section-multiple-graphs]

name::
* McsEngl.Rdfograph'name,

output-triple of lagRdfw

description::
"RDF allows us to make statements about resources. The format of these statements is simple. A statement always has the following structure:
<subject> <predicate> <object>
An RDF statement expresses a relationship between two resources. The subject and the object represent the two resources being related; the predicate represents the nature of their relationship. The relationship is phrased in a directional way (from subject to object) and is called in RDF a property. Because RDF statements consist of three elements they are called triples."
[{2021-01-03} https://www.w3.org/TR/2014/NOTE-rdf11-primer-20140624/]

name::
* McsEngl.Rdfotriple,
* McsEngl.lagRdfw'output-statement!⇒Rdfotriple,
* McsEngl.lagRdfw'output-triple!⇒Rdfotriple,

output-subject of lagRdfw

description::
·

name::
* McsEngl.lagRdfw'output-subject,

output-predicate of lagRdfw

description::
·

name::
* McsEngl.lagRdfw'output-predicate,
* McsEngl.lagRdfw'output-property,

output-object of lagRdfw

description::
·

name::
* McsEngl.lagRdfw'output-object,

output-IRI of lagRdfw

description::

name::
* McsEngl.Rdfoiri,
* McsEngl.lagRdfw'output-IRI!⇒Rdfoiri,

output-literal of lagRdfw

description::
"Literals are basic values that are not IRIs. Examples of literals include strings such as "La Joconde", dates such as "the 4th of July, 1990" and numbers such as "3.14159". Literals are associated with a datatype enabling such values to be parsed and interpreted correctly. String literals can optionally be associated with a language tag. For example, "Lιonard de Vinci" could be associated with the "fr" language tag and "李奥纳多·达·文西" with the "zh" language tag.
Literals may only appear in the object position of a triple.
The RDF Concepts document provides a (non-exhaustive) list of datatypes. This includes many datatypes defined by XML Schema, such as string, boolean, integer, decimal and date."
[{2021-01-03} https://www.w3.org/TR/2014/NOTE-rdf11-primer-20140624/#section-literal]

name::
* McsEngl.lagRdfw'output-literal,

output-blank-node of lagRdfw

description::
"IRIs and literals together provide the basic material for writing down RDF statements. In addition, it is sometimes handy to be able to talk about resources without bothering to use a global identifier. For example, we might want to state that the Mona Lisa painting has in its background an unidentified tree which we know to be a cypress tree. A resource without a global identifier, such as the painting's cypress tree, can be represented in RDF by a blank node. Blank nodes are like simple variables in algebra; they represent some thing without saying what their value is.
Blank nodes can appear in the subject and object position of a triple. They can be used to denote resources without explicitly naming them with an IRI."
[{2021-01-03} https://www.w3.org/TR/2014/NOTE-rdf11-primer-20140624/#section-blank-node]

name::
* McsEngl.lagRdfw'output-blank-node,

vocabulary of lagRdfw

description::
"An RDF vocabulary is a collection of IRIs intended for use in RDF graphs.
For example, the IRIs documented in [RDF11-SCHEMA] are the RDF Schema vocabulary.
RDF Schema can itself be used to define and document additional RDF vocabularies."
[{2021-01-06} https://www.w3.org/TR/2014/REC-rdf11-concepts-20140225/#dfn-rdf-vocabulary]
===
"The term “namespace” on its own does not have a well-defined meaning in the context of RDF, but is sometimes informally used to mean “namespace IRI” or “RDF vocabulary”."
[{2021-01-07} https://www.w3.org/TR/2014/REC-rdf11-concepts-20140225/#dfn-namespace]

name::
* McsEngl.RDF-vocabulary!⇒Rdfvoc,
* McsEngl.Rdfvoc,
* McsEngl.lagRdfw'vocabulary!⇒Rdfvoc,
* McsEngl.namespace@lagRdfw,
* McsEngl.vocabulary@lagRdfw,

namespace-IRI of Rdfvoc

description::
"The IRIs in an RDF vocabulary often begin with a common substring known as a namespace IRI.
Some namespace IRIs are associated by convention with a short name known as a namespace prefix.
Some examples:"

Namespace prefix | Namespace IRI | RDF vocabulary
rdf  | http://www.w3.org/1999/02/22-rdf-syntax-ns# | The RDF built-in vocabulary [RDF11-SCHEMA]
rdfs | http://www.w3.org/2000/01/rdf-schema#       | The RDF Schema vocabulary [RDF11-SCHEMA]
xsd  | http://www.w3.org/2001/XMLSchema#           | The RDF-compatible XSD types

[{2021-01-06} https://www.w3.org/TR/2014/REC-rdf11-concepts-20140225/#dfn-rdf-vocabulary]

name::
* McsEngl.Rdfvoc'namespace-IRI,
* McsEngl.namespace-IRI@lagRdfw,

abbreviation of namespace-IRI

description::
"In some serialization formats it is common to abbreviate IRIs that start with namespace IRIs by using a namespace prefix in order to assist readability.
For example, the IRI http://www.w3.org/1999/02/22-rdf-syntax-ns#XMLLiteral would be abbreviated as rdf:XMLLiteral.
Note however that these abbreviations are not valid IRIs, and must not be used in contexts where IRIs are expected.
Namespace IRIs and namespace prefixes are not a formal part of the RDF data model.
They are merely a syntactic convenience for abbreviating IRIs."
[{2021-01-07} https://www.w3.org/TR/2014/REC-rdf11-concepts-20140225/#dfn-rdf-vocabulary]

name::
* McsEngl.Rdfvoc'namespace-IRI'abbreviation,
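
· a-minimal-sketch in Python with the rdflib library (assumed): binding a-namespace-prefix is purely a-serialization convenience; EX is a-hypothetical namespace IRI:

from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/")
g = Graph()
g.bind("ex", EX)
g.add((EX.thing, RDF.type, RDFS.Class))
print(g.serialize(format="turtle"))  # abbreviates to something like "ex:thing a rdfs:Class ."
# "ex:thing" itself is not a valid IRI; only the expanded form is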

namespace-prefix of Rdfvoc

description::
"Some namespace IRIs are associated by convention with a short name known as a namespace prefix.
Some examples:"

Namespace prefix | Namespace IRI | RDF vocabulary
rdf  | http://www.w3.org/1999/02/22-rdf-syntax-ns# | The RDF built-in vocabulary [RDF11-SCHEMA]
rdfs | http://www.w3.org/2000/01/rdf-schema#       | The RDF Schema vocabulary [RDF11-SCHEMA]
xsd  | http://www.w3.org/2001/XMLSchema#           | The RDF-compatible XSD types

[{2021-01-06} https://www.w3.org/TR/2014/REC-rdf11-concepts-20140225/#dfn-rdf-vocabulary]

name::
* McsEngl.Rdfvoc'namespace-prefix,
* McsEngl.namespace-prefix@lagRdfw,

Rdfvoc.SPECIFIC

description::
* Dublin-core,
* FOAF,
* SKOS,
* schema.org,

name::
* McsEngl.Rdfvoc.specific,

Rdfvoc.Dublin-core (link)
Rdfvoc.FOAF (link)
Rdfvoc.schema.org (link)
Rdfvoc.SKOS (link)

output.YAGO of lagRdfw

description::
· YAGO is-stored in RDF.
· it combines Wikidata facts with the schema.org taxonomy.
"YAGO is a knowledge base, i.e., a database with knowledge about the real world. YAGO contains both entities (such as movies, people, cities, countries, etc.) and relations between these entities (who played in which movie, which city is located in which country, etc.). All in all, YAGO contains more than 50 million entities and 2 billion facts."
[{2021-01-30} https://yago-knowledge.org/getting-started]

name::
* McsEngl.YAGO,

info-resource of lagRdfw

description::
* RDF 1.1 Primer, W3C Working Group Note 24 June 2014, https://www.w3.org/TR/rdf11-primer/,
* RDF 1.1 Concepts and Abstract Syntax, W3C Recommendation 25 February 2014, https://www.w3.org/TR/2014/REC-rdf11-concepts-20140225/,
* RDF 1.1 Semantics, W3C Recommendation 25 February 2014, https://www.w3.org/TR/rdf11-mt/,
* RDF Schema 1.1, W3C Recommendation 25 February 2014, https://www.w3.org/TR/rdf-schema/,

name::
* McsEngl.lagRdfw'Infrsc,

evoluting of lagRdfw

description::
">evolution of Resource Description Framework:
The Resource Description Framework (RDF) is a framework for representing information about resources in the World Wide Web. It provides a structured and standardized way to describe resources and their relationships. The evolution of RDF can be traced through several key milestones:

1. **RDF 1.0 (1999):** The development of RDF began in the late 1990s, and RDF 1.0 was published as a W3C Recommendation in 1999. It introduced the basic concepts of triples (subject-predicate-object statements) and provided a foundation for expressing metadata about resources.

2. **RDF Schema (RDFS) (2000):** RDF Schema, often referred to as RDFS, is an extension of RDF that provides a basic vocabulary for describing the structure of RDF graphs. It allows the definition of classes, properties, and relationships between classes, enabling the creation of simple ontologies.

3. **SPARQL (2008):** SPARQL (SPARQL Protocol and RDF Query Language) is a query language and protocol for querying RDF data. It was introduced in 2008 and provided a standardized way to query and manipulate RDF data, making it easier to retrieve specific information from RDF graphs.

4. **RDF 1.1 (2014):** RDF 1.1 is an update to RDF 1.0, addressing various issues and introducing new features. It includes Turtle, a more human-readable serialization format for RDF, and introduces the notion of RDF datasets, which allows multiple graphs to be grouped together.

5. **JSON-LD (2014):** JSON-LD (JSON Linked Data) is a JSON-based serialization for RDF data. It allows RDF data to be expressed in a way that is easy to process and understand for developers familiar with JSON.

6. **SHACL (Shapes Constraint Language) (2017):** SHACL is a W3C Recommendation that provides a way to describe and validate the structure and constraints of RDF graphs. It allows the definition of shapes, which specify the expected structure and constraints of resources in an RDF graph.

7. **RDF-star (2021):** RDF-star is an extension of RDF that allows triples to be nested within other triples, enabling more complex and expressive representations. It addresses some limitations of the basic triple structure in RDF and enhances the modeling capabilities.

These milestones reflect the ongoing development and refinement of the RDF framework to meet the evolving needs of the web and semantic data representation. The standards and extensions mentioned above contribute to making RDF a versatile and widely adopted framework for describing and linking data on the web."
[{2023-11-15 retrieved} https://chat.openai.com/c/ddd99d05-c7d7-4b1a-9c39-d6127fc068a5]

"**The Evolution of the Resource Description Framework (RDF) Timeline**

The inception of RDF can be traced back to 1998 when a W3C working group was formed with the aim of standardizing metadata on the web. This group brought together various metadata communities with the goal of developing a robust and flexible architecture for supporting metadata. The first public draft of RDF was released in 1997, laying the groundwork for a standard model for data interchange on the web.

**1998-2004: Early Developments**
During the initial phase, RDF focused primarily on its core data model and syntax, establishing the fundamental concepts of triples and directed labeled graphs to represent relationships between resources. The W3C published the RDF Model and Syntax Recommendation in 1999, providing a formal specification of the RDF data model.

**2004-2009: Expanding Capabilities**
In the second phase, RDF's capabilities were expanded to address the growing demands of data representation and interoperability. The RDF Vocabulary Description Language (RDFS) was introduced in 2004 to provide a simple vocabulary definition language for creating RDF vocabularies. RDF Schema (RDFS) added richer semantics to RDF data by introducing concepts like classes and properties.

**2009-2014: Maturation and Standardization**
The third phase witnessed the maturation and standardization of RDF. The RDF 1.1 Recommendation was published in 2014, consolidating various extensions and refinements to the RDF model and syntax. This release marked a significant step towards a stable and widely adopted standard for representing and exchanging structured metadata on the web.

**2014-Present: Continued Growth and Innovation**
Since 2014, RDF has continued to evolve, with a focus on improving its scalability, performance, and interoperability with other data formats. The W3C has introduced various RDF-related standards, such as SPARQL (SPARQL Protocol and RDF Query Language) for querying RDF data, and RDF Data Shapes for constraining and validating RDF data.
Additionally, RDF has found applications in diverse areas beyond the Semantic Web, including knowledge management, linked data, and big data analytics. The increasing adoption of RDF in various domains has further driven its evolution and development.

**Future of RDF**
The future of RDF holds promise for continued growth and innovation. As the amount of data on the web continues to grow, RDF's ability to represent complex relationships and provide machine-readable metadata will become even more valuable. The development of new tools and technologies for working with RDF will further enhance its adoption and expand its reach into new applications and domains."
[{2023-11-15 retrieved} https://bard.google.com/chat/b3c36b031dbea6d4?hl=en&pli=1]

name::
* McsEngl.evoluting-of-lagRdfw,
* McsEngl.lagRdfw'evoluting,

lagRdfw.SPECIFIC

description::
* Turtle family of RDF languages (N-Triples, Turtle, TriG and N-Quads),
* JSON-LD (JSON-based RDF syntax),
* RDFa (for HTML and XML embedding),
* RDF/XML (XML syntax for RDF),

name::
* McsEngl.lagRdfw.specific,

lagRdfw.concrete-RDF-syntax

description::
"An RDF document is a document that encodes an RDF graph or RDF dataset in a concrete RDF syntax, such as Turtle [TURTLE], RDFa [RDFA-PRIMER], JSON-LD [JSON-LD], or TriG [TRIG]. RDF documents enable the exchange of RDF graphs and RDF datasets between systems.
A concrete RDF syntax may offer many different ways to encode the same RDF graph or RDF dataset, for example through the use of namespace prefixes, relative IRIs, blank node identifiers, and different ordering of statements. While these aspects can have great effect on the convenience of working with the RDF document, they are not significant for its meaning."
[{2021-01-06} https://www.w3.org/TR/2014/REC-rdf11-concepts-20140225/#dfn-concrete-rdf-syntax]

name::
* McsEngl.lagRdfw.concrete-RDF-syntax,
* McsEngl.lagRdfw.instance,
* McsEngl.concrete-RDF-syntax@lagRdfw,

lagRdfw.JSON-LD (link)

lagRdfw.N-Triples

description::
· example:
<http://example.org/bob#me> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://xmlns.com/foaf/0.1/Person> .
<http://example.org/bob#me> <http://xmlns.com/foaf/0.1/knows> <http://example.org/alice#me> .
<http://example.org/bob#me> <http://schema.org/birthDate> "1990-07-04"^^<http://www.w3.org/2001/XMLSchema#date> .
<http://example.org/bob#me> <http://xmlns.com/foaf/0.1/topic_interest> <http://www.wikidata.org/entity/Q12418> .
<http://www.wikidata.org/entity/Q12418> <http://purl.org/dc/terms/title> "Mona Lisa" .
<http://www.wikidata.org/entity/Q12418> <http://purl.org/dc/terms/creator> <http://dbpedia.org/resource/Leonardo_da_Vinci> .
<http://data.europeana.eu/item/04802/243FA8618938F4117025F17A8B813C5F9AA4D619> <http://purl.org/dc/terms/subject> <http://www.wikidata.org/entity/Q12418> .

[{2021-01-03} https://www.w3.org/TR/2014/NOTE-rdf11-primer-20140624/#section-n-triples]

name::
* McsEngl.N-Triple-lagRdfw,
* McsEngl.lagRdfw.N-Triple,

lagRdfw.Turtle

description::
· example:
BASE <http://example.org/>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
PREFIX schema: <http://schema.org/>
PREFIX dcterms: <http://purl.org/dc/terms/>
PREFIX wd: <http://www.wikidata.org/entity/>

<bob#me>
    a foaf:Person ;
    foaf:knows <alice#me> ;
    schema:birthDate "1990-07-04"^^xsd:date ;
    foaf:topic_interest wd:Q12418 .

wd:Q12418
    dcterms:title "Mona Lisa" ;
    dcterms:creator <http://dbpedia.org/resource/Leonardo_da_Vinci> .

<http://data.europeana.eu/item/04802/243FA8618938F4117025F17A8B813C5F9AA4D619>
    dcterms:subject wd:Q12418 .
[{2021-01-03} https://www.w3.org/TR/2014/NOTE-rdf11-primer-20140624/#section-turtle]

description::
">example ttl format of ontology:
The Turtle (TTL) format is a popular way to represent ontologies. Here is an example of a simple ontology in TTL format:

```turtle
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex: <http://example.org/> .

ex:Person a rdfs:Class .
ex:hasName rdfs:subPropertyOf rdfs:label .
ex:hasAge a rdf:Property .

ex:Alice a ex:Person .
ex:Alice ex:hasName "Alice" .
ex:Alice ex:hasAge 30 .

ex:Bob a ex:Person .
ex:Bob ex:hasName "Bob" .
ex:Bob ex:hasAge 40 .
```

This ontology defines the following classes and properties:
* The class `ex:Person` represents all people.
* The property `ex:hasName` is a subproperty of `rdfs:label`, which means that it can be used to label a person with their name.
* The property `ex:hasAge` is a property that can be used to specify a person's age.

The ontology also defines two instances of the `ex:Person` class:
* `ex:Alice` is a person whose name is "Alice" and whose age is 30.
* `ex:Bob` is a person whose name is "Bob" and whose age is 40.
Sources
search.proquest.com/openview/820c7414b3a681ea6eb1cdf820fa6f47/1"
[{2023-11-15 retrieved} https://bard.google.com/chat/af30e2dbd6718a44?hl=en&pli=1]

name::
* McsEngl.TTL-(turtle-format)--lagOntg,
* McsEngl.lagOntg.TTL-(turtle-format),
* McsEngl.turtle-format--lagOntg,
* McsEngl.Turtle-lagRdfw,
* McsEngl.lagRdfw.Turtle,

lagRdfw.TriG

description::
· The syntax of Turtle supports only the specification of single graphs without a means for "naming" them. TriG [TRIG] is an extension of Turtle enabling the specification of multiple graphs in the form of an RDF dataset.
· example:
BASE <http://example.org/>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
PREFIX schema: <http://schema.org/>
PREFIX dcterms: <http://purl.org/dc/terms/>
PREFIX wd: <http://www.wikidata.org/entity/>

GRAPH <http://example.org/bob>
{
    <bob#me>
        a foaf:Person ;
        foaf:knows <alice#me> ;
        schema:birthDate "1990-07-04"^^xsd:date ;
        foaf:topic_interest wd:Q12418 .
}

GRAPH <https://www.wikidata.org/wiki/Special:EntityData/Q12418>
{
    wd:Q12418
        dcterms:title "Mona Lisa" ;
        dcterms:creator <http://dbpedia.org/resource/Leonardo_da_Vinci> .

    <http://data.europeana.eu/item/04802/243FA8618938F4117025F17A8B813C5F9AA4D619>
        dcterms:subject wd:Q12418 .
}

<http://example.org/bob>
    dcterms:publisher <http://example.org> ;
    dcterms:rights <http://creativecommons.org/licenses/by/3.0/> .

[{2021-01-03} https://www.w3.org/TR/2014/NOTE-rdf11-primer-20140624/#section-trig]

name::
* McsEngl.TriG-lagRdfw,
* McsEngl.lagRdfw.TriG,

lagRdfw.N-Quads

description::
"N-Quads [N-QUADS] is a simple extension to N-Triples to enable the exchange of RDF datasets. N-Quads allows one to add a fourth element to a line, capturing the graph IRI of the triple described on that line."
[{2021-01-03} https://www.w3.org/TR/2014/NOTE-rdf11-primer-20140624/#section-n-quads]
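
· example (a minimal sketch, derived from the TriG example above; the fourth element names the graph that holds the triple):
<http://example.org/bob#me> <http://xmlns.com/foaf/0.1/knows> <http://example.org/alice#me> <http://example.org/bob> .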

name::
* McsEngl.N-Quads-lagRdfw,
* McsEngl.lagRdfw.N-Quads,

lagRdfw.RDFa

description::

"RDFa [RDFA-PRIMER] is an RDF syntax that can be used to embed RDF data within HTML and XML documents. This enables, for example, search engines to aggregate this data when crawling the Web and use it to enrich search results (see, e.g., schema.org and Rich Snippets).

EXAMPLE 14: RDFa

<body prefix="foaf: http://xmlns.com/foaf/0.1/
              schema: http://schema.org/
              dcterms: http://purl.org/dc/terms/">
  <div resource="http://example.org/bob#me" typeof="foaf:Person">
    <p>
      Bob knows <a property="foaf:knows" href="http://example.org/alice#me">Alice</a>
      and was born on the <time property="schema:birthDate" datatype="xsd:date">1990-07-04</time>.
    </p>
    <p>
      Bob is interested in <span property="foaf:topic_interest"
      resource="http://www.wikidata.org/entity/Q12418">the Mona Lisa</span>.
    </p>
  </div>
  <div resource="http://www.wikidata.org/entity/Q12418">
    <p>
      The <span property="dcterms:title">Mona Lisa</span> was painted by
      <a property="dcterms:creator" href="http://dbpedia.org/resource/Leonardo_da_Vinci">Leonardo da Vinci</a>
      and is the subject of the video
      <a href="http://data.europeana.eu/item/04802/243FA8618938F4117025F17A8B813C5F9AA4D619">'La Joconde à Washington'</a>.
    </p>
  </div>
  <div resource="http://data.europeana.eu/item/04802/243FA8618938F4117025F17A8B813C5F9AA4D619">
    <link property="dcterms:subject" href="http://www.wikidata.org/entity/Q12418"/>
  </div>
</body>

The example above contains four special RDFa attributes to enable specification of RDF triples within HTML: resource, property, typeof and prefix."
[{2021-01-03} https://www.w3.org/TR/2014/NOTE-rdf11-primer-20140624/#section-rdfa]

name::
* McsEngl.RDFa-lagRdfw,
* McsEngl.lagRdfw.RDFa,

lagRdfw.RDF/XML

description::
"RDF/XML [RDF-SYNTAX-GRAMMAR] provides an XML syntax for RDF graphs. When RDF was originally developed in the late 1990s, this was its only syntax, and some people still call this syntax "RDF". In 2001, a precursor to Turtle called "N3" was proposed, and gradually the other languages listed here have been adopted and standardized."
· example:

<?xml version="1.0" encoding="utf-8"?>
<rdf:RDF
    xmlns:dcterms="http://purl.org/dc/terms/"
    xmlns:foaf="http://xmlns.com/foaf/0.1/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:schema="http://schema.org/">
  <rdf:Description rdf:about="http://example.org/bob#me">
    <rdf:type rdf:resource="http://xmlns.com/foaf/0.1/Person"/>
    <schema:birthDate rdf:datatype="http://www.w3.org/2001/XMLSchema#date">1990-07-04</schema:birthDate>
    <foaf:knows rdf:resource="http://example.org/alice#me"/>
    <foaf:topic_interest rdf:resource="http://www.wikidata.org/entity/Q12418"/>
  </rdf:Description>
  <rdf:Description rdf:about="http://www.wikidata.org/entity/Q12418">
    <dcterms:title>Mona Lisa</dcterms:title>
    <dcterms:creator rdf:resource="http://dbpedia.org/resource/Leonardo_da_Vinci"/>
  </rdf:Description>
  <rdf:Description rdf:about="http://data.europeana.eu/item/04802/243FA8618938F4117025F17A8B813C5F9AA4D619">
    <dcterms:subject rdf:resource="http://www.wikidata.org/entity/Q12418"/>
  </rdf:Description>
</rdf:RDF>

[{2021-01-03} https://www.w3.org/TR/2014/NOTE-rdf11-primer-20140624/#section-rdfxml]

name::
* McsEngl.RDF/XML-lagRdfw,
* McsEngl.lagRdfw.RDF/XML,

lagRdfw.RDFS-{2000}

description::
"RDF Schema (RDFS) (2000): RDF Schema, often referred to as RDFS, is an extension of RDF that provides a basic vocabulary for describing the structure of RDF graphs. It allows the definition of classes, properties, and relationships between classes, enabling the creation of simple ontologies."
[{2023-11-15 retrieved} https://chat.openai.com/c/ddd99d05-c7d7-4b1a-9c39-d6127fc068a5]

">RDF-S:
RDF-S typically refers to the Resource Description Framework Schema (RDF-S), which is also known as RDF Schema or RDFS. RDF-S is a specification or language for describing the structure of metadata in the Resource Description Framework (RDF) format.

RDF is a framework for representing and linking data on the web, often used for describing resources, their properties, and relationships between them. RDF-S provides a way to define the structure of RDF data, including classes, properties, and their relationships. It allows you to create simple ontologies to define the meaning of terms and concepts used in RDF data.

Some key features of RDF-S include:
1. Classes: You can define classes to categorize resources and describe their characteristics.

2. Properties: You can define properties to represent relationships between resources and provide additional information about resources.

3. Subclass and Subproperty hierarchies: RDF-S allows you to specify subclass and subproperty relationships, creating a hierarchy of classes and properties.

4. Domain and Range: You can specify the domains (the classes that a property can be used with) and ranges (the classes that a property can point to) for properties.

5. Inference: While RDF-S is not as expressive as more advanced ontology languages like OWL (Web Ontology Language), it allows for simple inferencing about data, which can help derive additional information from the RDF data based on the schema.

RDF-S is used for creating basic ontologies and structuring RDF data to make it more semantically meaningful. It's often considered a foundational layer for more advanced ontology languages and semantic web applications. The latest RDF standard is RDF 1.1, and it extends the capabilities of RDF and RDF-S."
[{2023-11-09 retrieved} https://chat.openai.com/c/24a9dbf4-06bb-4530-a026-f0c689d8e194]
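
· a minimal illustrative sketch in Turtle (the ex: names are hypothetical), showing classes, a subclass, and domain/range declarations:
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/> .

ex:Animal a rdfs:Class .
ex:Person a rdfs:Class .
ex:Dog a rdfs:Class ;
    rdfs:subClassOf ex:Animal .
ex:owner a rdf:Property ;
    rdfs:domain ex:Dog ;      # every subject of ex:owner is inferred to be a Dog
    rdfs:range ex:Person .    # every object of ex:owner is inferred to be a Person
· with these declarations, an RDFS-aware system can infer, for example, that every ex:Dog is also an ex:Animal.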

name::
* McsEngl.RDF-S!=Resource-Description-Framework-Schema!⇒lagRdfs,
* McsEngl.Resource-Description-Framework-Schema!⇒lagRdfs,
* McsEngl.lagRdfs!=Resource-Description-Framework-Schema,
* McsEngl.lagRdfw.RDF-S!⇒lagRdfs,

lagRdfw.RDF-star-{2021}

description::
"RDF-star (2021): RDF-star is an extension of RDF that allows triples to be nested within other triples, enabling more complex and expressive representations. It addresses some limitations of the basic triple structure in RDF and enhances the modeling capabilities."
[{2023-11-15 retrieved} https://chat.openai.com/c/ddd99d05-c7d7-4b1a-9c39-d6127fc068a5]
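
· a minimal illustrative sketch in Turtle-star syntax (reusing the Bob example data above; the source annotation is hypothetical), where a quoted triple is itself the subject of another triple:
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX dcterms: <http://purl.org/dc/terms/>
<< <http://example.org/bob#me> foaf:knows <http://example.org/alice#me> >>
    dcterms:source <http://example.org/bob> .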

name::
* McsEngl.RDF-star!⇒lagRdfr,
* McsEngl.lagRdfw.RDF-star!⇒lagRdfr,
* McsEngl.lagRdfr!=RDF-star,

lagSweb.SKOS-{2004}

description::
"example of Simple-Knowledge-Organization-System:
The Simple Knowledge Organization System (SKOS) provides a way to represent concepts and their relationships in a machine-readable format. Below is a simple example of SKOS in RDF (Resource Description Framework) syntax. This example defines a basic taxonomy for animals:

```xml
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
xmlns:skos="http://www.w3.org/2004/02/skos/core#">

<!-- Concept Scheme -->
<skos:ConceptScheme rdf:about="#animalsScheme">
<skos:prefLabel>Animals Taxonomy</skos:prefLabel>
</skos:ConceptScheme>

<!-- Concepts -->
<skos:Concept rdf:about="#mammal">
<skos:prefLabel>Mammal</skos:prefLabel>
<skos:broader rdf:resource="#vertebrate"/>
</skos:Concept>

<skos:Concept rdf:about="#vertebrate">
<skos:prefLabel>Vertebrate</skos:prefLabel>
<skos:broader rdf:resource="#animal"/>
</skos:Concept>

<skos:Concept rdf:about="#animal">
<skos:prefLabel>Animal</skos:prefLabel>
</skos:Concept>

</rdf:RDF>
```

In this example:
- `skos:ConceptScheme` defines the overall scheme or taxonomy, named "Animals Taxonomy."
- `skos:Concept` represents individual concepts within the scheme, such as "Mammal," "Vertebrate," and "Animal."
- `skos:prefLabel` provides human-readable labels for each concept.
- `skos:broader` establishes broader/narrower relationships between concepts. For instance, a "Mammal" is narrower than "Vertebrate," and "Vertebrate" is narrower than "Animal."

This is a very basic example, and in a real-world scenario, the SKOS representation could be more complex and include additional properties to capture various aspects of concepts and their relationships within a knowledge organization system."
[{2023-11-14 retrieved} https://chat.openai.com/c/d54f0a84-9dfd-4768-b22d-276ed183764d]

name::
* McsEngl.Simple-Knowledge-Organization-System--lagSweb,
* McsEngl.SKOS!=Simple-Knowledge-Organization-System--lagSweb,
* McsEngl.lagKnlg.030-SKOS,
* McsEngl.lagKnlg.SKOS,
* McsEngl.lagSkos,
* McsEngl.lagSweb.SKOS,

evoluting of lagSkos

description::
">evolution of Simple Knowledge Organization System:
The Simple Knowledge Organization System (SKOS) is a W3C standard for representing and sharing knowledge organization systems (KOS) such as thesauri, classification schemes, subject heading lists, taxonomies, and folksonomies. It was first published in 2004 and has since evolved to meet the needs of a growing community of users.

**Evolution of SKOS**
* **1997-2000** - The DESIRE II project develops an RDF vocabulary for thesauri.
* **2002-2004** - SKOS is developed as a distinct initiative in the SWAD-Europe project.
* **2004** - SKOS is published as a W3C Recommendation.
* **2009** - SKOS Primer is published, providing a user guide for representing concept schemes using SKOS.
* **2011** - ISO 25964-1, a new thesaurus standard, is published.
* **2012** - A set of linkages between the ISO 25964 data model and SKOS are published.
* **2013** - SKOS Vocabulary is updated to align with ISO 25964-1.
* **2014-2015** - SKOS Extensions are developed to provide additional capabilities such as mapping between concept schemes and grouping concepts into collections.
* **2016** - SKOS Extensions are published as a W3C Note.
* **2019** - SKOS Vocabulary is updated to include support for SKOS Extensions.
* **2023** - SKOS is still widely used and continues to evolve to meet the needs of its users.

**Key trends in the evolution of SKOS**
* **Increasing alignment with other standards:** SKOS has been aligned with other standards such as ISO 25964-1 and CIDOC CRM to improve interoperability and support for a wider range of use cases.
* **Support for new features:** SKOS Extensions have been developed to provide additional capabilities such as mapping between concept schemes and grouping concepts into collections.
* **Growing community of users:** SKOS is now used by a wide range of organizations, including libraries, museums, archives, and government agencies, to represent and share their knowledge organization systems.

**Future of SKOS**
SKOS is a well-established standard with a growing community of users. It is likely to continue to evolve to meet the needs of its users and support new technologies and applications. For example, SKOS is being used to develop new ways to organize and search for information on the Semantic Web.
Here are some specific areas where SKOS is likely to continue to evolve in the future:
* **Support for new types of knowledge organization systems:** SKOS is currently focused on representing traditional KOS such as thesauri and classification schemes. However, new types of KOS are emerging, such as ontologies and knowledge graphs. SKOS may need to be extended to support these new types of KOS.
* **Integration with other semantic web technologies:** SKOS is already being integrated with other semantic web technologies such as RDF and OWL. This integration is likely to continue to deepen in the future, making it easier to use SKOS in conjunction with other semantic web tools and applications.
* **Support for new use cases:** SKOS is currently being used in a variety of use cases, such as library cataloging, museum collection management, and government data organization. However, new use cases are emerging, such as supporting machine learning and artificial intelligence applications. SKOS may need to be extended to support these new use cases.

Overall, SKOS is a well-established and widely used standard for representing and sharing knowledge organization systems. It is likely to continue to evolve to meet the needs of its users and support new technologies and applications."
[{2023-11-14 retrieved} https://bard.google.com/chat/8545cf4917020ea5?hl=en&pli=1]

name::
* McsEngl.evoluting-of-lagSkos,
* McsEngl.lagSkos'evoluting,

lagSweb.OWL-{2004} (link)

lagSweb.SPARQL-{2006}

description::
"SPARQL (pronounced "sparkle" /ˈspɑːkəl/, a recursive acronym[2] for SPARQL Protocol and RDF Query Language) is an RDF query language—that is, a semantic query language for databases—able to retrieve and manipulate data stored in Resource Description Framework (RDF) format.[3][4] It was made a standard by the RDF Data Access Working Group (DAWG) of the World Wide Web Consortium, and is recognized as one of the key technologies of the semantic web.[citation needed] On 15 January 2008, SPARQL 1.0 was acknowledged by W3C as an official recommendation,[5][6] and SPARQL 1.1 in March, 2013.[7]
SPARQL allows for a query to consist of triple patterns, conjunctions, disjunctions, and optional patterns.[8]
Implementations for multiple programming languages exist.[9] There exist tools that allow one to connect and semi-automatically construct a SPARQL query for a SPARQL endpoint, for example ViziQuer.[10] In addition, tools exist to translate SPARQL queries to other query languages, for example to SQL[11] and to XQuery.[12]"
[{2021-01-05} https://en.wikipedia.org/wiki/SPARQL]
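
· a minimal illustrative query (a sketch, reusing the Bob example data of the RDF serializations above), asking whom Bob knows:
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?person
WHERE {
  <http://example.org/bob#me> foaf:knows ?person .
}
· against that data, ?person is bound to <http://example.org/alice#me>.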

name::
* McsEngl.SPARQL!⇒lagSprq,
* McsEngl.lagKnlg.013-SPARQL!⇒lagSprq,
* McsEngl.lagKnlg.SPARQL!⇒lagSprq,
* McsEngl.lagSweb.SPARQL!⇒lagSprq,
* McsEngl.lagSprq,
* McsEngl.lagSprq!=SPARQL,

lagSweb.schema.org-{2011} (link)

lagSweb.SHACL-{2013}

description::
"2013: The W3C publishes the SHACL Shapes Constraint Language, which is a language for constraining the structure and meaning of RDF data."
[{2023-11-04 retrieved} https://bard.google.com/chat/d9d14f2406f370df]

"overview of SHACL Shapes Constraint Language:
**SHACL Shapes Constraint Language** (SHACL) is a World Wide Web Consortium (W3C) standard language for describing Resource Description Framework (RDF) graphs. SHACL has been designed to enhance the semantic and technical interoperability layers of ontologies expressed as RDF graphs.

**SHACL models** are defined in terms of **constraints** on the content, structure, and meaning of a graph. SHACL is a highly expressive language, allowing for the definition of a wide range of constraints, including:
* **Data type constraints:** Constraints on the data types of literal values, such as strings, integers, and dates.
* **Cardinality constraints:** Constraints on the minimum and maximum number of values that a node can have for a particular property.
* **Path constraints:** Constraints on the values of properties that are connected to a node via a path.
* **Class constraints:** Constraints on the classes that a node can belong to.
* **Shape constraints:** Constraints on the shapes that a node can have.

SHACL constraints are expressed in the form of **shapes**, which are RDF graphs that describe the desired structure and constraints of the data. SHACL shapes can be used to validate data graphs against a set of conditions. They can also be used to generate documentation and descriptions of data graphs.

**SHACL processors** are used to validate data graphs against SHACL shapes. SHACL processors generate a **results graph**, which describes any violations of the SHACL constraints.

SHACL has a number of advantages over other RDF validation languages, such as RDF Schema and OWL:
* SHACL is more expressive than RDF Schema, allowing for the definition of a wider range of constraints.
* SHACL is more concise than OWL, making it easier to write and read.
* SHACL is easier to implement than OWL, making it more accessible to developers.

SHACL is used in a variety of applications, including:
* Data quality assurance
* Ontology validation
* Knowledge graph validation
* Linked Data validation
* API documentation

Here is a simple example of a SHACL shape:
```turtle
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix :    <http://example.org/> .

:PersonShape a sh:NodeShape ;
    sh:targetClass :Person ;
    sh:property [
        sh:path :name ;
        sh:datatype xsd:string ;
        sh:minCount 1 ;
    ] .
```

This shape defines a node shape called `:PersonShape`. It targets all nodes of class `:Person` and constrains the property `:name` to values of type `xsd:string`. The `sh:minCount` constraint ensures that each targeted node has at least one value for the `:name` property.

This shape can be used to validate data graphs that contain information about people. For example, the following data graph would conform to the `:PersonShape` shape:
```turtle
:john a :Person;
:name "John Doe".
```

However, the following data graph would violate the `sh:minCount` constraint:
```turtle
:mary a :Person.
```

SHACL is a powerful tool for validating and describing RDF graphs. It is used in a variety of applications, including data quality assurance, ontology validation, and knowledge graph validation.
Sources en.wikipedia.org/wiki/SHACL"
[{2023-11-04 retrieved} https://bard.google.com/chat/92897ea538348c4f]

name::
* McsEngl.SHACL-Shapes-Constraint-Language,
* McsEngl.lagSweb.SHACL,

lagSweb.JSON-LD-{2014} (link)

lagSweb.PROV-O-{2017} (link)

lagSweb.solid-project-{2020}

description::
"**2020:** The W3C publishes the Solid project, which is a platform for personal data management using the Semantic Web."
[{2023-11-04 retrieved} https://bard.google.com/chat/d9d14f2406f370df]
"Overview of the W3C Solid Project:
The W3C Solid project is a set of specifications and tools for building decentralized web applications. It is led by Sir Tim Berners-Lee, the inventor of the World Wide Web. Solid aims to give users more control over their data and privacy by storing it in personal data stores called Pods. Pods can be hosted on any server, and users can choose who and what applications have access to their data.

Solid is based on the Linked Data Platform (LDP), a set of W3C standards for publishing and interlinking data. This means that Solid applications can easily share and exchange data with each other, even if they are hosted on different servers.

**Benefits of Solid**
Solid offers a number of benefits over traditional web applications, including:
* **Data ownership and control:** Solid users own their data and have full control over who can access it.
* **Privacy:** Solid applications can be designed to be very privacy-friendly, as users control what data is shared with each application.
* **Decentralization:** Solid applications are not hosted on a central server, which makes them more resilient to censorship and outages.
* **Interoperability:** Solid applications can easily share and exchange data with each other, even if they are hosted on different servers.

**Status of the Solid Project**
The Solid project is still under development, but there are already a number of Solid applications and services available. In 2021, the W3C Solid Community Group published a set of Working Drafts for the Solid specifications. These specifications are now being reviewed by the W3C membership, and the goal is to publish them as W3C Recommendations in the near future.

**Examples of Solid Applications**
Here are a few examples of Solid applications:
* **Inrupt Pod:** A personal data store that users can use to store and manage their data.
* **Solid WebID:** A decentralized identity system that allows users to log in to Solid applications without using a third-party identity provider.
* **Solid File Manager:** A file manager that allows users to browse and manage their files stored in their Pod.
* **Solid Notes:** A note-taking application that allows users to create and edit notes stored in their Pod.
* **Solid Calendar:** A calendar application that allows users to create and manage events stored in their Pod.

**Conclusion**
The W3C Solid project is a promising new approach to building web applications. It offers a number of benefits over traditional web applications, including data ownership and control, privacy, decentralization, and interoperability. The Solid project is still under development, but there are already a number of Solid applications and services available."
[{2023-11-04 retrieved} https://bard.google.com/chat/c2ded73c04307fc7?hl=en&pli=1]

name::
* McsEngl.lagSweb.solid-project,

lagKnlg.Wolfram-language

description::
× generic: logo--knowledge-language,

"The Wolfram Language is a highly developed knowledge-based language that unifies a broad range of programming paradigms and uses its unique concept of symbolic programming to add a new level of flexibility to the very concept of programming."
[{2021-02-14} http://reference.wolfram.com/language/guide/LanguageOverview.html]

name::
* McsEngl.Lwlfm!⇒lagWlfm,
* McsEngl.Wlfml!⇒lagWlfm,
* McsEngl.Wolfram-language!⇒lagWlfm,
* McsEngl.lagKnlg.032-Wolfram-language!⇒lagWlfm,
* McsEngl.lagKnlg.Wolfram-language!⇒lagWlfm,
* McsEngl.lagWlfm!=Wolfram-language,
* McsEngl.lagWolfram!⇒lagWlfm,

descriptionLong::
"The Wolfram Language is a computer language. It gives you a way to communicate with computers, in particular so you can tell them what to do.
There are many computer languages, such as C++, Java, Python and JavaScript. The Wolfram Language is unique in that it’s knowledge based. That means that it already knows a lot—so you have to tell it much less to get it to do things you want."
[{2021-01-26} https://www.wolfram.com/language/elementary-introduction/2nd-ed/what-is-the-wolfram-language.html]
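
· example (a minimal sketch, assuming access to the Wolfram Knowledgebase):
CountryData["France", "Population"]
· this single built-in call returns the population of France as a Quantity; the user supplies no data source.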

"overview of Wolfram-Language:
Wolfram Language is a high-level, multi-paradigm programming language developed by Wolfram Research, the company behind Wolfram Alpha and Mathematica. It is known for its versatility and is used for a wide range of computational, symbolic, and data analysis tasks. Here is an overview of the key features and characteristics of Wolfram Language:

1. Symbolic Programming: Wolfram Language is primarily a symbolic language, which means it can manipulate and work with symbolic expressions, equations, and mathematical notations. This makes it well-suited for tasks involving symbolic mathematics and symbolic computation.

2. Multi-Paradigm: Wolfram Language supports a variety of programming paradigms, including functional, procedural, and rule-based programming. It allows you to write code in a way that best suits the problem you are trying to solve.

3. High-Level Abstractions: The language includes a rich set of built-in functions and high-level abstractions for various tasks, such as data manipulation, graphics, machine learning, and more. This makes it easy to express complex ideas concisely.

4. Built-in Knowledge: Wolfram Language is integrated with the Wolfram Knowledgebase, which provides access to a vast amount of curated data and algorithms. This built-in knowledge is a distinguishing feature and enables you to perform tasks like natural language understanding, data retrieval, and computation with real-world data.

5. Powerful Graphics: Wolfram Language includes extensive support for creating high-quality graphics and visualizations. It is often used for generating charts, plots, and interactive dashboards.

6. Natural Language Input: You can use natural language input to express your computations and queries. This makes the language accessible to a broader audience and is especially useful for computational exploration and data analysis.

7. Computable Documents: Wolfram Language can be embedded in documents to create interactive, computable documents. These documents can combine text, code, and graphics, allowing for live computation and documentation of results.

8. Interactivity: The language supports interactivity, allowing you to create dynamic interfaces, simulations, and interactive applications. This is useful for creating educational materials and interactive demos.

9. Cross-Platform: Wolfram Language is available on various platforms, including desktop (Mathematica), cloud (Wolfram Cloud), and through WolframScript for command-line usage. It is also accessible via APIs for integration with other software and systems.

10. Extensibility: You can extend the language by defining custom functions and packages. This enables you to create reusable code libraries and add functionality to the language.

11. Data Science and Machine Learning: Wolfram Language includes a wide range of tools and functions for data analysis, statistical analysis, and machine learning. It's often used in scientific research and data science projects.

12. Math and Science: Wolfram Language has strong roots in mathematics and science and is widely used in research, education, and engineering for mathematical modeling, simulation, and solving complex equations.

Overall, Wolfram Language is a versatile and powerful programming language that excels in a wide range of computational and symbolic tasks. Its integration with curated knowledge, graphics capabilities, and support for natural language input makes it a valuable tool for a diverse set of applications, from education and research to data analysis and visualization."
[{2023-11-01 retrieved} https://chat.openai.com/c/0a45c2c4-2ffa-4223-8d2e-920f1ee5d1d2]

"overview of Wolfram-Language:
Wolfram Language is a general-purpose, symbolic programming language that powers a wide range of products and services, including Mathematica, WolframAlpha, Wolfram Cloud, and Wolfram Notebook. It is a unique language that combines the power of a programming language with the knowledge and automation of a search engine.

The Wolfram Language is known for its ease of use and its ability to perform a wide range of tasks, including:
* **Symbolic and numerical computing:** The Wolfram Language can be used to perform complex mathematical operations, such as solving differential equations, integrating and differentiating functions, and manipulating matrices.
* **Data science and machine learning:** The Wolfram Language includes a wide range of functions for data science and machine learning, such as data cleaning, feature engineering, model training, and model evaluation.
* **Natural language processing:** The Wolfram Language can be used to process and understand natural language, such as text and speech.
* **Visualization and graphics:** The Wolfram Language can be used to create high-quality visualizations and graphics, such as charts, plots, and 3D models.

The Wolfram Language is also known for its built-in knowledge base, which includes information on a wide range of topics, such as mathematics, science, engineering, and finance. This knowledge base allows users to perform complex tasks with just a few lines of code.

Here are some examples of what you can do with the Wolfram Language:
* Solve a differential equation
* Find the prime factorization of a large number
* Train a machine learning model to classify images
* Generate a natural language description of an image
* Create a 3D model of a molecule
* Calculate the value of a financial derivative

The Wolfram Language is a powerful and versatile language that can be used to solve a wide range of problems. It is used by scientists, engineers, mathematicians, and financial professionals around the world.

Here are some of the benefits of using the Wolfram Language:
* **Easy to learn and use:** The Wolfram Language is designed to be easy to learn and use, even for those with no prior programming experience.
* **Powerful and versatile:** The Wolfram Language can be used to perform a wide range of tasks, from simple calculations to complex scientific simulations.
* **Built-in knowledge base:** The Wolfram Language includes a built-in knowledge base that gives users access to a vast amount of information on a wide range of topics.
* **Cloud-based:** The Wolfram Language can be used in the cloud, which makes it easy to collaborate with others and access your work from anywhere.

If you are looking for a powerful and versatile programming language that can be used to solve a wide range of problems, the Wolfram Language is a great choice."
[{2023-11-01 retrieved} https://bard.google.com/chat/1995d69f282e1d47]

tech of lagWlfm

description::
· any techInfo that uses lagWlfm.

name::
* McsEngl.Wlfmltech,
* McsEngl.lagWlfm'tech!⇒Wlfmltech,

Wolfram-system of Wlfmltech

description::
"The Wolfram Language is fundamentally an interpreter that scans through expressions calling internal code pointed to by the symbol table entries of heads that it encounters."
[{2021-02-21} https://reference.wolfram.com/language/tutorial/SomeNotesOnInternalImplementation.html#16446]
===
"The Wolfram System is one of the more complex software systems ever constructed.
It is built from several million lines of source code, written in C/C++, Java, and the Wolfram Language."
[http://reference.wolfram.com/language/tutorial/TheSoftwareEngineeringOfTheWolframSystem.html]

name::
* McsEngl.Wlfmlapp!⇒Wlfmlsys,
* McsEngl.Wlfmlsys,
* McsEngl.Wlfmltech'Wolfram-system!⇒Wlfmlsys,
* McsEngl.Wolfram-language-application!⇒Wlfmlsys,
* McsEngl.Wolfram-system!⇒Wlfmlsys,

directory of Wlfmlsys

description::
"The main installation directory has three standard subdirectories that contain material distributed with the Wolfram System. Under normal circumstances, none of the contents of these directories should ever be modified, except, for example, if you choose to edit a shared stylesheet.
* AddOns bundled Wolfram System add-ons
* Documentation Wolfram Language documentation
* SystemFiles Wolfram System files
Top-level subdirectories of the main installation directory."
[http://reference.wolfram.com/language/tutorial/WolframSystemFileOrganization.html]

name::
* McsEngl.Wlfmlsys'directory,

kernel of Wlfmlsys

description::
"The Wolfram Language kernel is a process that runs under the operating system on your computer. Within the Wolfram Language there are several global variables that allow you to find the characteristics of this process and its environment.
[http://reference.wolfram.com/language/tutorial/GlobalSystemInformation.html]
===
In the Wolfram Language kernel the breakdown of different parts of the code is roughly as follows: language and system: 30%; numerical computation: 20%; algebraic computation: 20%; graphics and kernel output: 30%.
... The source code for the kernel, save a fraction of a percent, is identical for all computer systems on which the Wolfram System runs.
[http://reference.wolfram.com/language/tutorial/TheSoftwareEngineeringOfTheWolframSystem.html]

name::
* McsEngl.Wlfmlsys'kernel,

user-interface of Wlfmlsys

description::
· the-part of Wlfmlsys[a] with which a-user interacts.

name::
* McsEngl.Wlfmlsys'front-end!⇒Wlfmlui,
* McsEngl.Wlfmlsys'user-interface!⇒Wlfmlui,
* McsEngl.Wlfmlui,
* McsEngl.Wlfmlui!=user-interface-of-Wlfmlsys,

code of Wlfmlui

description::
"For the front end, however, a significant amount of specialized code is needed to support each different type of user interface environment. The front end contains about 700,000 lines of system-independent C++ source code, of which roughly 200,000 lines are concerned with expression formatting. Then there are between 50,000 and 100,000 lines of specific code customized for each user interface environment."
[http://reference.wolfram.com/language/tutorial/TheSoftwareEngineeringOfTheWolframSystem.html]

name::
* McsEngl.Wlfmlui'code,

Wlfmlui.SPECIFIC

description::
* batch-interface,
* notebook-interface,
* Wolfram|Alpha,
* Wolfram-Cloud,

name::
* McsEngl.Wlfmlui.specific,

Wlfmlui.batch-mode

description::
"The Wolfram System is usually used interactively, but it can also operate in a batch mode—say taking input from a file and writing output to a file. In such a case, a program cannot for example expect to get interactive input from the user."
[http://reference.wolfram.com/language/tutorial/GlobalSystemInformation.html]

name::
* McsEngl.Wlfmlui.batch-mode,

Wlfmlui.text-based

description::
"The standard front end interface, as discussed in "Using a Notebook Interface", is appropriate for most users' purposes.
In some cases, however, you may not need to use the notebook front end, and you may want instead to interact more directly with the Wolfram Language kernel. You can do this by using a text‐based interface, in which text you type on the keyboard goes straight to the kernel.
It is important to note that while the text-based interface provides access to most of the capabilities of the Wolfram Language kernel, the graphics functionality and dynamic interactivity of the Wolfram System front end are not available."

name::
* McsEngl.Wlfmlui.text-based,

Wlfmlui.notebook

description::
"From simple calculations to full publishable documents and sophisticated dynamic interfaces, everything you can do with the Wolfram System's standard interactive interface is done in a notebook. Carefully designed to leverage familiar word-processing metaphors, Wolfram System notebooks are uniquely powerful computational documents that support live computation, arbitrary dynamic interfaces, full typeset input, image input, automatic code annotation, a complete high-level programmatic interface, and in all, thousands of carefully organized functions and options."
[{2021-02-16} https://reference.wolfram.com/language/guide/NotebookBasics.html]
===
· expression: Notebook[cells, options]

name::
* McsEngl.Wlfmlnbi,
* McsEngl.Wlfmlnbi!=notebook-interface-of-lagWlfm,
* McsEngl.Wlfmlui.notebook!⇒Wlfmlnbi,
* McsEngl.lagWlfm'notebook-interface!⇒Wlfmlnbi,
* McsEngl.notebook-front-end-of-lagWlfm!⇒Wlfmlnbi,

descriptionLong::
"Like everything else in the Wolfram Language, notebooks are ultimately symbolic expressions. When you edit notebooks—or apply high-level programmatic functions—the Wolfram Language automatically updates these expressions. But if you look at the lowest level—say by opening a notebook file as text—you will see the underlying expressions, in which formatting constructs are represented as a hierarchy of low-level symbolic "boxes"."
[{2021-02-18} https://reference.wolfram.com/language/guide/LowLevelNotebookStructure.html]

shortcut of Wlfmlnbi

description::
* = — at beginning of input: specify a complete computation using natural language.
* Ctrl+= — translate from natural language anywhere in Wolfram Language input; also: specify an entity using natural language.
* Ctrl+Shift+e — toggle between formatted display and underlying symbolic expression.

name::
* McsEngl.Wlfmlsys'shortcut,
* McsEngl.Wlfmlnbi'shortcut,

cell of Wlfmlnbi

description::
· the-parts of a-notebook.
· expression: Cell[contents,"style",options]
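
· example (a minimal sketch of a one-cell notebook as a symbolic expression):
Notebook[{Cell["2 + 2", "Input"]}]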

name::
* McsEngl.Wlfmlnbi'cell,

CDF of lagWlfm

description::
"CDF: Computable Document Format
Make computable documents part of the language
Built into the Wolfram Language are "notebook" documents that mix executable code with text, graphics, interfaces and more.
* Make a single document with code, examples, documentation, etc.
* Programmatically create full-featured reports and documents
* Instantly create interactive elements backed by computation
* 11,000+ examples in the Wolfram Demonstrations Project"
[{2021-02-17} https://www.wolfram.com/language/principles/]

name::
* McsEngl.CDF'(computable-document-format),
* McsEngl.computable-document-format-of-lagWlfm,
* McsEngl.lagWlfm'CDF'(computable-document-format),

info-resource of Wlfmlnbi

description::
* https://reference.wolfram.com/language/tutorial/UsingANotebookInterface.html,
* https://reference.wolfram.com/language/tutorial/NotebooksAsDocuments.html,
* https://reference.wolfram.com/language/tutorial/ManipulatingNotebooks.html,
* https://reference.wolfram.com/language/guide/NotebookBasics.html,
* https://reference.wolfram.com/language/guide/SpecialCharacters.html,
* https://www.wolfram.com/wolfram-u/introduction-to-notebooks/,
* https://reference.wolfram.com/language/guide/NotebookShortcuts.html,
* embed in webpages: https://reference.wolfram.com/language/WolframNotebookEmbedder/,

name::
* McsEngl.Wlfmlnbi'Infrsc,

interpreter of Wlfmlsys

description::
"The Wolfram System compiler provides an important way both to speed up and also to work with Wolfram Language computations."
[http://reference.wolfram.com/language/Compile/tutorial/Overview.html]

name::
* McsEngl.Wlfmlsys'interpreter!⇒Wlfmlitpr, /intérpiter/,
* McsEngl.Wlfmlitpr,
* McsEngl.Wlfmlitpr!=interpreter-of-lagWlfm,
* McsEngl.lagWlfm'compiler!⇒Wlfmlitpr,
* McsEngl.lagWlfm'interpreter!⇒Wlfmlitpr,

info-resource of Wlfmlitpr

description::
* https://reference.wolfram.com/language/Compile/tutorial/Overview.html,
* https://reference.wolfram.com/language/Compile/tutorial/Introduction.html,

name::
* McsEngl.Wlfmlitpr'Infrsc,

memory of Wlfmlsys

description::
"Particularly for symbolic computations, memory is usually the primary resource which limits the size of computations you can do. If a computation runs slowly, you can always potentially let it run longer. But if the computation generates intermediate expressions which simply cannot fit in the memory of your computer system, then you cannot proceed with the computation.
The Wolfram System is careful about the way it uses memory. Every time an intermediate expression you have generated is no longer needed, the Wolfram System immediately reclaims the memory allocated to it. This means that at any point in a session, the Wolfram System stores only those expressions that are actually needed; it does not keep unnecessary objects which have to be "garbage collected" later."
[{2021-02-16} https://reference.wolfram.com/language/tutorial/GlobalAspectsOfWolframSystemSessions.html#15717]

name::
* McsEngl.Wlfmlsys'memory,

info-resource of Wlfmlsys

description::
* https://reference.wolfram.com/language/guide/SystemInformation.html,

name::
* McsEngl.Wlfmlsys'Infrsc,

structure of Wlfmlsys

description::
"A full Wolfram System installation consists of thousands of separate files, arranged in several hundred directories under the main installation directory."
[http://reference.wolfram.com/language/tutorial/WolframSystemFileOrganization.html]

name::
* McsEngl.Wlfmlsys'structure,

Wlfmlsys.SPECIFIC

description::
* Wolfram|Alpha,
* Wolfram-Cloud,
* Wolfram-Mathematica,

name::
* McsEngl.Wlfmlsys.specific,

WloframScript of Wlfmltech

description::
"WolframScript runs Wolfram Language code, functions and deployed APIs locally, in the cloud or on WSTPServer, allowing input from standard input, command-line arguments, files, URLs, etc."
[{2021-02-20} https://reference.wolfram.com/language/ref/program/wolframscript.html]
"WolframScript enables Wolfram Language code to be run from any terminal, whether or not a Wolfram kernel is available on the system.
With WolframScript, users can decide where and how to run their Wolfram Language code. Along with configurable kernel and cloud settings, WolframScript includes a collection of convenience functions for a wide variety of tasks, including running pure functions with command-line input, calling cloud APIs and exporting data to any format supported by the Wolfram Language. Now you can run code in the Wolfram Cloud with wolframscript -cloud or with #!/usr/bin/env wolframscript -cloud in Unix shell scripts."
[{2021-02-20} https://www.wolfram.com/wolframscript/]
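
· example (a minimal sketch, run from a terminal):
wolframscript -code 'StringReverse["hello"]'
· this evaluates the given Wolfram Language code in a kernel (or in the cloud, with -cloud) and prints olleh.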

name::
* McsEngl.Wlfmltech.WolframScript,
* McsEngl.WolframScript,

Mathematica of Wlfmltech

description::
"WOLFRAM MATHEMATICA
The world's definitive system for modern technical computing
For three decades, Mathematica has defined the state of the art in technical computing—and provided the principal computation environment for millions of innovators, educators, students, and others around the world.
Widely admired for both its technical prowess and elegant ease of use, Mathematica provides a single integrated, continually expanding system that covers the breadth and depth of technical computing—and seamlessly available in the cloud through any web browser, as well as natively on all modern desktop systems."
[{2021-02-16} https://www.wolfram.com/mathematica/]

name::
* McsEngl.Wlfmlmath,
* McsEngl.Wlfmlmath!=Wolfram-Mathematica,
* McsEngl.Wlfmltech.Mathematica!⇒Wlfmlmath,

Wolfram|Alpha of Wlfmltech

description::
"Wolfram|Alpha is a unique engine for computing answers and providing knowledge.
It works by using its vast store of expert-level knowledge and algorithms to automatically answer questions, do analysis and generate reports."
[{2021-02-10} https://www.wolframalpha.com/tour/]

name::
* McsEngl.Wlfmltech'Wolfram|Alpha,
* McsEngl.Wolfram|Alpha,

Wolfram-cloud of Wlfmltech

description::
"The Wolfram Cloud combines a state-of-the-art notebook interface with the world's most productive programming language—scalable for programs from tiny to huge, with immediate access to a vast depth of built-in algorithms and knowledge.
Natively running the Wolfram Language, the Wolfram Cloud in effect makes every aspect of cloud operations programmable—introducing an unprecedented level of integration, flexibility, simplicity and power into a new generation of cloud technology.
Products Powered by the Wolfram Cloud
Wolfram Cloud Basic offers free access to get started immediately. Sign up for free access
WOLFRAM|ONE »
Your entry point to the Wolfram technology stack.
MATHEMATICA ONLINE »
Bring Mathematica to life in the cloud.
WOLFRAM PROGRAMMING LAB »
Get introduced to modern computational thinking with the Wolfram Language.
WOLFRAM CLOUD APP »
Mobile-optimized interface to view and edit notebooks, and access cloud-deployed apps.
WOLFRAM DATA DROP »
Open service that makes it easy to accumulate data of any kind, from anywhere.
WOLFRAMSCRIPT »
Run Wolfram Language code from any terminal, even if a local Wolfram kernel is unavailable.
WOLFRAM ENTERPRISE PRIVATE CLOUD »
Install a complete private instance of the Wolfram Cloud inside your organization.
WOLFRAM|ALPHA APPLIANCE »
Get a custom, private Wolfram|Alpha within your organization or with your data."
[{2021-02-14} https://www.wolfram.com/cloud/]

name::
* McsEngl.Wlfmlcloud,
* McsEngl.Wlfmltech'Wolfram-cloud!⇒Wlfmlcloud,
* McsEngl.Wolfram-cloud!⇒Wlfmlcloud,

info-resource of Wlfmlcloud

description::
* https://www.wolframcloud.com/,

name::
* McsEngl.Wlfmlcloud'Infrsc,

input of lagWlfm

description::
"SOME OF WHAT'S BUILT INTO THE WOLFRAM LANGUAGE:
* 2D, 3D Visualization,
* Graph Analysis,
* Data Analytics,
* Image Processing,
* Audio Processing,
* Machine Learning,
* Equation Solving,
* Algebraic Computation,
* Arbitrary Precision,
* Calculus Computation,
* Matrix Computation,
* String Manipulation,
* Combinatorial Optimization,
* Computational Geometry,
* Database Connectivity,
* Built-In Testing,
* Device Connectivity,
* Functional Programming,
* Natural Language Understanding,
* Sequence Analysis,
* Time Series,
* Geographic Data,
* Geomapping,
* Weather Data,
* Physics & Chemistry Data,
* Genomic Data,
* Units & Measures,
* Control Theory,
* Reliability Analysis,
* Parallel Computation,
* Engineering Data,
* Financial Data,
* Financial Computation,
* Socioeconomic Data,
* Popular Culture Data,
* Boolean Computation,
* Number Theory,
* Document Generation,
* Table Formatting,
* Mathematical Typesetting,
* Interactive Controls,
* Interface Building,
* Form Construction,
* XML Templating,
* Data Import & Export,
* Semantic Interpretation,
* API Connectivity,
* Interprocess Communication,"
[http://www.wolfram.com/programming-cloud/?source=footer]

name::
* McsEngl.Wlfmlinput,
* McsEngl.lagWlfm'input!⇒Wlfmlinput,

output of lagWlfm

description::
· any output model of the-input.

name::
* McsEngl.Wlfmloutput,
* McsEngl.lagWlfm'output!⇒Wlfmloutput,

expression-(en) of Wlfmloutput

description::
"Everything in the Wolfram Language is a symbolic expression.
numbers strings images arrays graphs formulas documents interfaces code ...
All symbolic expressions have the same fundamental structure: head[arguments]"
[{2021-02-21} https://www.wolfram.com/language/fast-introduction-for-programmers/en/symbolic-expressions/]
===
"Everything Is an Expression
The Wolfram Language handles many different kinds of things: mathematical formulas, lists, and graphics, to name a few. Although they often look very different, the Wolfram Language represents all of these things in one uniform way. They are all expressions."
[{2021-02-15} https://reference.wolfram.com/language/tutorial/Expressions.html#4715]

name::
* McsEngl.Wlfmlen,
* McsEngl.Wlfmlen!=expression-of-lagWlfm,
* McsEngl.Wlfmloutput'expression!⇒Wlfmlen,
* McsEngl.expression@lagWlfm!⇒Wlfmlen,
* McsEngl.lagWlfm'expression!⇒Wlfmlen,
* McsEngl.lagWlfm'symbolic-expression!⇒Wlfmlen,
* McsEngl.symbolic-expression@lagWlfm!⇒Wlfmlen,

descriptionLong::
"At the core of the Wolfram Language is the foundational idea that everything—data, programs, formulas, graphics, documents—can be represented as symbolic expressions.
And it is this unifying concept that underlies the Wolfram Language's symbolic programming paradigm, and makes possible much of the unique power of the Wolfram Language and the Wolfram System."
[{2021-02-15} https://reference.wolfram.com/language/guide/Expressions.html]
"We’ve now seen all sorts of things that exist in the Wolfram Language: lists, graphics, pure functions and much more. And now we’re ready to discuss a very fundamental fact about the Wolfram Language: that each of these things—and in fact everything the language deals with—is ultimately constructed in the same basic kind of way. Everything is what’s called a symbolic expression.
Symbolic expressions are a very general way to represent structure, potentially with meaning associated with that structure. f[x,y] is a simple example of a symbolic expression."
[{2021-02-14} https://www.wolfram.com/language/elementary-introduction/2nd-ed/33-expressions-and-their-structure.html]

part of Wlfmlen

description::
* head,
* element,

name::
* McsEngl.Wlfmlen'part,

head of Wlfmlen

description::
"In f[x,y], f is called the head of the expression. x and y are called arguments. The function Head extracts the head of an expression.
... Every part of an expression has a head, even its atoms."
[{2021-02-14} https://www.wolfram.com/language/elementary-introduction/2nd-ed/33-expressions-and-their-structure.html]
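
· example (a minimal sketch):
Head[f[x, y]]    (* gives f *)
Head[{1, 2, 3}]  (* gives List *)
Head[3]          (* gives Integer *)
· even the atom 3 has a head, Integer, as the quote above notes.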

name::
* McsEngl.Wlfmlen'head,

element of Wlfmlen

description::
"A Wolfram Language expression internally consists of a contiguous array of pointers, the first to the head, and the rest to its successive elements.
[http://reference.wolfram.com/language/tutorial/SomeNotesOnInternalImplementation.html]"

name::
* McsEngl.Wlfmlen'element,