senso-concept-Mcs (techAi)

McsHitp-creation:: {2023-07-30}

overview of techAi

· techAi is techInfo with intelligence (= mind-system).

* McsEngl.McsTchInf000036.last.html//dirTchInf//dirMcs!⇒techAi,
* McsEngl.dirTchInf/McsTchInf000036.last.html!⇒techAi,
* McsEngl.AI!=artificial-intelligence!⇒techAi,
* McsEngl.artificial-intelligence!⇒techAi,
* McsEngl.human-level-AI!⇒techAi,
* McsEngl.sciAi!⇒techAi,
* McsEngl.strong-AI!⇒techAi,
* McsEngl.techAi!=McsTchInf000036,
* McsEngl.techAi!=artificial-intelligence-tech,
* McsEngl.techInfo.007-artificial-intelligence!⇒techAi,
* McsEngl.techInfo.artificial-intelligence!⇒techAi,
====== langoGreek:
* McsElln.ΤΑ!=τεχνητή-νοημοσύνη!=techAi,
* McsElln.τεχνητή-νοημοσύνη!η!=techAi,

intelligence (link) of techAi


"The Chinese room is a thought experiment proposed by the philosopher John Searle in 1980. It is designed to show that a computer program that can successfully pass the Turing test (a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human) does not necessarily understand the language it is using.

In the thought experiment, Searle imagines himself locked in a room with a basket of symbols and a set of rules for manipulating them. The rules are written in English, and Searle understands them perfectly. The symbols are Chinese characters, and Searle does not understand them at all.

People outside the room can send Searle messages in Chinese, written on slips of paper. Searle follows the rules to manipulate the Chinese characters and produce responses, which he then sends back out of the room. The people outside the room cannot see Searle or the rules he is following, so they have the impression that they are communicating with a native Chinese speaker.

Searle argues that even though he is able to produce perfectly correct responses to the Chinese questions, he does not understand Chinese. He is simply manipulating symbols according to a set of rules. He does not have any understanding of the meaning of the symbols or the messages he is sending and receiving.

The Chinese room is a powerful argument against the idea that artificial intelligence can be achieved through mere symbol manipulation. It suggests that something more is needed for a machine to truly understand language, such as a grounding in the real world and a capacity for conscious experience.

Some philosophers and AI researchers have criticized the Chinese room argument. They argue that Searle misunderstands the nature of understanding and that it is possible for a machine to understand language without being conscious. However, the Chinese room remains an important and influential thought experiment in the field of artificial intelligence.

Here are some examples of Chinese rooms in the real world:
* A machine translation system that can translate text from one language to another without understanding the meaning of the text.
* A chatbot that can answer questions and generate text in a way that is indistinguishable from a human, but does not have any understanding of the meaning of the text it is producing.
* A search engine that can return relevant results to a user's query, but does not understand the meaning of the query or the results it is returning.

It is important to note that the Chinese room argument does not prove that artificial intelligence is impossible. It simply argues that strong AI, the idea that machines can have minds like humans, cannot be achieved through mere symbol manipulation."
[{2023-10-25 retrieved}]
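· the "mere symbol manipulation" at issue can be caricatured in a few lines of Python; the symbols, rules, and replies below are invented for illustration and stand in for Searle's rule book:

```python
# Caricature of Searle's rule book: replies are produced by symbol lookup
# alone; no meaning is represented anywhere in the program.
# The symbols and rules are invented for illustration.

RULE_BOOK = {
    "你好": "你好！",              # if the greeting symbol arrives, emit the greeting symbol
    "你是谁？": "我是一个朋友。",  # map this question symbol to this answer symbol
}

def room(message: str) -> str:
    """Apply the rule book without ever interpreting the symbols."""
    return RULE_BOOK.get(message, "请再说一遍。")  # default: 'please say it again'

print(room("你好"))  # a fluent-looking reply, produced with no understanding
```

· like the room's occupant, the program answers correctly while holding no representation of what any symbol means.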

* McsEngl.Chinese-room,
* McsEngl.techAi'Chinese-room,

evaluation of techAi

· "AI is incredibly smart and shockingly stupid"
[{2023-08-07 retrieved}]

* McsEngl.techAi'evaluation,

safety of techAi

"AI safety is an interdisciplinary field concerned with preventing accidents, misuse, or other harmful consequences that could result from artificial intelligence (AI) systems. It encompasses machine ethics and AI alignment, which aim to make AI systems moral and beneficial, and AI safety encompasses technical problems including monitoring systems for risks and making them highly reliable. Beyond AI research, it involves developing norms and policies that promote safety."
[{2023-04-10 retrieved}]

* McsEngl.techAi'risk,
* McsEngl.techAi'safety,

ethics of techAi

· Transparency, accountability, and open source.
"The ethics of artificial intelligence is the branch of the ethics of technology specific to artificially intelligent systems.[1] It is sometimes divided into a concern with the moral behavior of humans as they design, make, use and treat artificially intelligent systems, and a concern with the behavior of machines, in machine ethics. It also includes the issue of a possible singularity due to superintelligent AI."
[{2023-04-10 retrieved}]

"The first global agreement on the ethics of AI was adopted in September 2021 by UNESCO's 193 Member States.[227]"
[{2023-04-10 retrieved}]

* McsEngl.techAi'ethics,

bias of techAi

"AI programs can become biased after learning from real-world data. It is not typically introduced by the system designers but is learned by the program, and thus the programmers are often unaware that the bias exists.[204] Bias can be inadvertently introduced by the way training data is selected.[205] It can also emerge from correlations: AI is used to classify individuals into groups and then make predictions assuming that the individual will resemble other members of the group. In some cases, this assumption may be unfair.[206] An example of this is COMPAS, a commercial program widely used by U.S. courts to assess the likelihood of a defendant becoming a recidivist. ProPublica claims that the COMPAS-assigned recidivism risk level of black defendants is far more likely to be overestimated than that of white defendants, despite the fact that the program was not told the races of the defendants.[207]"
[{2023-04-10 retrieved}]
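· the group-based prediction mechanism described above can be shown with a toy scorer; this is not the COMPAS model, and the "district" proxy feature and all numbers are fabricated for illustration:

```python
# Toy illustration of bias learned from correlated data: the scorer never
# sees the protected attribute, but a correlated proxy feature ("district",
# fabricated here) carries group membership into the predictions.

training = [
    # (district, reoffended) -- fabricated records for illustration only
    ("north", 1), ("north", 1), ("north", 0),
    ("south", 0), ("south", 0), ("south", 1),
]

def predicted_risk(district: str) -> float:
    """Score an individual by their group's base rate, not their own record."""
    outcomes = [y for d, y in training if d == district]
    return sum(outcomes) / len(outcomes)

# Two otherwise identical individuals get different scores by district alone:
print(predicted_risk("north"))  # ~0.67
print(predicted_risk("south"))  # ~0.33
```

· no designer wrote a biased rule here; the disparity emerges entirely from the training data and the group-resemblance assumption.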

* McsEngl.techAi'bias,

weaponization of techAi


* McsEngl.techAi'weaponization,

failure-of-critical-systems of techAi


* McsEngl.techAi'failure-of-critical-systems,

surveillance of techAi


* McsEngl.techAi'surveillance,

technological-unemployment of techAi

"In the past, technology has tended to increase rather than reduce total employment, but economists acknowledge that "we're in uncharted territory" with AI.[195] A survey of economists showed disagreement about whether the increasing use of robots and AI will cause a substantial increase in long-term unemployment, but they generally agree that it could be a net benefit if productivity gains are redistributed.[196] Subjective estimates of the risk vary widely; for example, Michael Osborne and Carl Benedikt Frey estimate 47% of U.S. jobs are at "high risk" of potential automation, while an OECD report classifies only 9% of U.S. jobs as "high risk".[t][198]
Unlike previous waves of automation, many middle-class jobs may be eliminated by artificial intelligence; The Economist states that "the worry that AI could do to white-collar jobs what steam power did to blue-collar ones during the Industrial Revolution" is "worth taking seriously".[199] Jobs at extreme risk range from paralegals to fast food cooks, while job demand is likely to increase for care-related professions ranging from personal healthcare to the clergy.[200]"
[{2023-04-10 retrieved}]

* McsEngl.techAi'technological-unemployment,

existential-risk of techAi

"Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could result in human extinction or some other unrecoverable global catastrophe.[1][2][3]
The existential risk ("x-risk") school argues as follows: The human species currently dominates other species because the human brain has some distinctive capabilities that other animals lack. If AI surpasses humanity in general intelligence and becomes "superintelligent", then it could become difficult or impossible for humans to control. Just as the fate of the mountain gorilla depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence.[4]
The probability of this type of scenario is widely debated, and hinges in part on differing scenarios for future progress in computer science.[5] Concerns about superintelligence have been voiced by leading computer scientists and tech CEOs such as Geoffrey Hinton,[6] Alan Turing,[a] Elon Musk,[9] and OpenAI CEO Sam Altman.[10] As of 2022, circa half of AI researchers believe that there is a 10 percent or greater chance that our inability to control AI will cause an existential catastrophe.[11][12]"
[{2023-04-10 retrieved}]

* McsEngl.techAi'existential-risk,

copyright of techAi

"AI's decision-making abilities raise questions of legal responsibility and the copyright status of created works. These issues are being refined in various jurisdictions.[219]"
[{2023-04-10 retrieved}]

* McsEngl.techAi'copyright,

alignment of techAi

"In the field of artificial intelligence (AI), AI alignment research aims to steer AI systems towards their designers’ intended goals and interests. An aligned AI system advances the intended objective; a misaligned AI system is competent at advancing some objective, but not the intended one.[1]"
[{2023-04-10 retrieved}]

* McsEngl.AI-alignment,
* McsEngl.AI-control-problem,
* McsEngl.techAi'alignment,

Softcode of techAi

* program,
* library,
* framework,

* McsEngl.Aisoftcode,
* McsEngl.techAi'Softcode!⇒Aisoftcode,


· "TensorFlow is a free and open-source software library for machine learning and artificial intelligence. It can be used across a range of tasks but has a particular focus on training and inference of deep neural networks.[3][4]
TensorFlow was developed by the Google Brain team for internal Google use in research and production.[5][6][7] The initial version was released under the Apache License 2.0 in 2015.[1][8] Google released the updated version of TensorFlow, named TensorFlow 2.0, in September 2019.[9]
TensorFlow can be used in a wide variety of programming languages, including Python, JavaScript, C++, and Java.[10] This flexibility lends itself to a range of applications in many different sectors."
[{2023-08-14 retrieved}]

* McsEngl.Ailibrary.TensorFlow,
* McsEngl.TensorFlow-Ailibrary,

relation-to-PyTorch of TensorFlow

"TensorFlow vs PyTorch:
TensorFlow and PyTorch are two of the most popular and widely used deep learning frameworks. They both have their strengths and weaknesses, and the choice between them often depends on your specific needs and preferences. Here's a comparison of the two:

1. **Ease of Use**:
- **TensorFlow**: TensorFlow has a steeper learning curve, especially for beginners. However, with the introduction of TensorFlow 2.x, it has become more user-friendly and resembles PyTorch in terms of ease of use.
- **PyTorch**: PyTorch is known for its dynamic computation graph, which makes it more intuitive and easier to work with, especially for researchers and people new to deep learning. It uses a NumPy-like syntax, which is familiar to many.

2. **Community and Ecosystem**:
- **TensorFlow**: TensorFlow has a larger community and a well-established ecosystem. It's backed by Google, and this has led to a wide range of tools, libraries, and resources.
- **PyTorch**: While PyTorch's community is smaller than TensorFlow's, it has been growing rapidly. It's popular in the research community and has a strong presence in academic and AI research.

3. **Flexibility**:
- **TensorFlow**: TensorFlow originally used a static computation graph, which can be less flexible for certain dynamic and research-oriented tasks. TensorFlow 2.x introduced eager execution and dynamic graph features that make it more flexible, but it still has some limitations compared to PyTorch.
- **PyTorch**: PyTorch's dynamic computation graph is more flexible and better suited for tasks where the network structure needs to change during runtime. This makes it a favorite for research and experimentation.

4. **Visualization and Debugging**:
- **TensorFlow**: TensorFlow offers tools like TensorBoard for visualization and debugging, which can be very useful for tracking the progress of your models.
- **PyTorch**: PyTorch provides libraries like PyTorch Lightning and PyTorch Profiler for similar purposes, but they may not be as mature as TensorFlow's tools.

5. **Deployment**:
- **TensorFlow**: TensorFlow is often considered more suitable for deployment in production environments due to its robust and optimized production pipelines.
- **PyTorch**: PyTorch has made strides in improving deployment capabilities, but TensorFlow still has an edge when it comes to production deployment.

6. **Popularity and Industry Adoption**:
- **TensorFlow**: TensorFlow is widely used in industry and has been adopted by many large organizations for various AI applications.
- **PyTorch**: While not as dominant in industry, PyTorch has gained popularity, especially in academic and research communities.

7. **Frameworks for Different Domains**:
- TensorFlow has a specialized version called TensorFlow Lite for mobile and embedded devices and TensorFlow.js for JavaScript-based applications.
- PyTorch is often preferred for natural language processing (NLP) tasks due to libraries like Hugging Face Transformers.

In summary, both TensorFlow and PyTorch are powerful deep learning frameworks, and the choice between them depends on your specific use case and preferences. If you are new to deep learning and prefer an easier learning curve, PyTorch might be a good choice. If you are working in a production environment or need to use specialized tools and libraries, TensorFlow could be a better fit. Additionally, some practitioners even use both frameworks based on the requirements of their projects."
[{2023-10-31 retrieved}]

* McsEngl.PyTorch'relation-to-TensorFlow,
* McsEngl.TensorFlow'relation-to-PyTorch,


">General Problem Solver overview:
**General Problem Solver (GPS)** is a computer program created in 1957 by Herbert A. Simon, J. C. Shaw, and Allen Newell. It was designed to be a universal problem solver that could be used to solve a wide variety of problems, including mathematical proofs, geometric problems, and chess games. GPS was one of the first programs to implement the concept of **heuristics**, which are rules of thumb that can be used to guide the search for a solution to a problem.

**GPS uses a two-step process to solve problems:**
1. **Means-ends analysis:** This step involves identifying the current state of the problem and the desired goal state. The program then generates a list of operators that can be used to move from the current state to the goal state.
2. **Search:** This step involves searching through the list of operators to find one that will move the current state closer to the goal state. The program uses heuristics to guide this search, so that it is more likely to find a solution quickly.

**GPS was a significant advance in the field of artificial intelligence, and it had a major impact on the development of problem-solving algorithms.** However, GPS was also limited in its scope, and it was not able to solve all types of problems. In particular, it was not able to deal with problems that required **common sense** or **knowledge of the world**.

**Despite its limitations, GPS remains an important landmark in the history of artificial intelligence.** It was one of the first programs to demonstrate the power of heuristics, and it showed that it was possible to create a computer program that could solve a wide variety of problems.

Here are some of the key features of GPS:
* **Separation of knowledge and strategy:** GPS separates the knowledge of problems (which is represented as input data) from its strategy of how to solve problems (which is a generic solver engine). This allows GPS to be used to solve a variety of problems with different types of knowledge.
* **Use of heuristics:** GPS uses heuristics to guide its search for a solution. This allows it to find solutions quickly, even for complex problems.
* **Recursive problem solving:** GPS can solve problems by breaking them down into smaller subproblems. This allows it to solve problems that are too complex to be solved in a single step.

GPS was a significant advance in the field of artificial intelligence, and it had a major impact on the development of problem-solving algorithms. It is still studied today as an example of a successful problem-solving system."

>General Problem Solver programming language:
The General Problem Solver (GPS) was originally implemented in the programming language **Information Processing Language (IPL)**. IPL was a list-processing language developed by Newell, Shaw, and Simon at the RAND Corporation in the 1950s. It was designed to represent and manipulate symbolic expressions, which made it well-suited for implementing GPS's reasoning process.

However, IPL has not been widely used outside of the AI community, and it is not a popular language today. As a result, GPS has been reimplemented in a number of other programming languages, including:
* **Python**
* **Java**
* **C++**
* **Lisp**
* **Prolog**

The choice of programming language for a GPS implementation depends on a number of factors, including the specific task at hand, the target platform, and the programmer's experience. However, Python is a popular choice due to its simplicity, readability, and extensive library of tools and resources.

Here are some of the reasons why Python is a good choice for implementing GPS:
* **Simplicity:** Python is a relatively simple language to learn and use, even for beginners. This makes it a good choice for programmers who are new to AI or problem-solving algorithms.
* **Readability:** Python code is written in a clear and concise style, which makes it easy to read and understand. This is important for AI code, as it can be complex and difficult to debug.
* **Extensive library:** Python has a large and active community, which has developed a vast library of tools and resources for AI development. This includes libraries for symbolic computation, search algorithms, and machine learning.

Overall, Python is a versatile and powerful language that is well-suited for implementing GPS. It is a good choice for programmers who are new to AI or problem-solving algorithms, as well as for experienced programmers who need a powerful and flexible language for their work.
[{2023-11-20 retrieved}]
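· the two-step loop described above (means-ends analysis plus heuristic search) can be sketched in a few lines of Python; the shopping domain and operator set are invented for illustration, and this is a sketch of the idea, not a reconstruction of the 1957 program:

```python
# Means-ends analysis sketch: prefer operators whose effects reduce the
# difference between the current state and the goal, searching depth-first.
# States are sets of facts; operators are (name, preconditions, adds, deletes).
GOAL = {"at_home", "has_milk"}
OPERATORS = [
    ("go_to_shop", {"at_home"}, {"at_shop"}, {"at_home"}),
    ("buy_milk",   {"at_shop"}, {"has_milk"}, set()),
    ("go_home",    {"at_shop"}, {"at_home"}, {"at_shop"}),
]

def solve(state, goal, visited=frozenset()):
    if goal <= state:                  # goal reached: empty remaining plan
        return []
    key = frozenset(state)
    if key in visited:                 # avoid revisiting states
        return None
    missing = goal - state
    # Means-ends heuristic: try operators that add a missing goal fact first.
    applicable = [op for op in OPERATORS if op[1] <= state]
    applicable.sort(key=lambda op: -len(op[2] & missing))
    for name, pre, adds, deletes in applicable:
        plan = solve((state - deletes) | adds, goal, visited | {key})
        if plan is not None:
            return [name] + plan
    return None

print(solve({"at_home"}, GOAL))  # ['go_to_shop', 'buy_milk', 'go_home']
```

· note the separation GPS pioneered: the operator table is problem knowledge, while solve() is a generic strategy that works for any domain expressed the same way.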

* McsEngl.Aisoftcode.general-problem-solver,
* McsEngl.GPS!=general-problem-solver,
* McsEngl.general-problem-solver,

AI-accelerator of techAi

· "An AI accelerator is a class of specialized hardware accelerator[1] or computer system[2][3] designed to accelerate artificial intelligence and machine learning applications, including artificial neural networks and machine vision. Typical applications include algorithms for robotics, Internet of Things, and other data-intensive or sensor-driven tasks.[4] They are often manycore designs and generally focus on low-precision arithmetic, novel dataflow architectures or in-memory computing capability. As of 2018, a typical AI integrated circuit chip contains billions of MOSFET transistors.[5] A number of vendor-specific terms exist for devices in this category, and it is an emerging technology without a dominant design."
[{2023-07-31 retrieved}]
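· the low-precision arithmetic mentioned above can be illustrated with simple int8 quantization in pure Python; real accelerators do this in hardware, and the scale and weight values here are invented:

```python
# int8 quantization sketch: store values as 8-bit integers plus a shared
# float scale, trading precision for memory and cheaper arithmetic.

def quantize(values, scale):
    """Map floats to the int8 range [-128, 127] using a shared scale factor."""
    return [max(-128, min(127, round(v / scale))) for v in values]

def dequantize(quants, scale):
    """Recover approximate floats from the stored integers."""
    return [q * scale for q in quants]

weights = [0.12, -0.5, 0.33]
scale = 0.01                      # one integer step == 0.01
q = quantize(weights, scale)      # [12, -50, 33] -- one byte each
print(q, dequantize(q, scale))    # approximations of the originals
```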

* McsEngl.AI-accelerator,
* McsEngl.techAi'accelerator,


· "Tensor Processing Unit (TPU) is an AI accelerator application-specific integrated circuit (ASIC) developed by Google for neural network machine learning, using Google's own TensorFlow software.[1] Google began using TPUs internally in 2015, and in 2018 made them available for third party use, both as part of its cloud infrastructure and by offering a smaller version of the chip for sale."
[{2023-07-31 retrieved}]

* McsEngl.TPU-tensor-processing-unit,
* McsEngl.tensor-processing-unit,
* McsEngl.AI-accelerator.TPU,

regulation of techAi

"The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI); it is therefore related to the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including in the European Union and in supra-national bodies like the IEEE, OECD and others. Since 2016, a wave of AI ethics guidelines have been published in order to maintain social control over the technology.[1] Regulation is considered necessary to both encourage AI and manage associated risks. In addition to regulation, AI-deploying organizations need to play a central role in creating and deploying trustworthy AI in line with the principles of trustworthy AI,[2] and take accountability to mitigate the risks.[3] Regulation of AI through mechanisms such as review boards can also be seen as social means to approach the AI control problem.[4][5]"
[{2023-04-10 retrieved}]

* McsEngl.AI-regulation,
* McsEngl.techAi'regulation,

organization of techAi

* AI2,

* McsEngl.oznAi,
* McsEngl.techAi'organization!⇒oznAi,


· "The Allen Institute for AI (AI2) was Paul Allen’s brainchild. Emboldened by the success of the Allen Institute for Brain Science, Paul wanted to launch an independent inquiry into the nature of the mind based on AI. In essence, he decided to hedge his bets as to whether neuroscience or AI (or both) would yield breakthrough insights into the nature of intelligence.
... Today AI2 employs over 200 researchers, engineers, and support staff over multiple sites, and is recognized worldwide as a premiere AI research organization. Its research scope has grown, adding new programs in computer vision and perception (PRIOR), commonsense reasoning (Mosaic), natural language processing (AllenNLP), and most recently AI for the Environment, as well as launching a new branch in Israel (AI2 Israel). Most importantly, AI2 has produced, and continues to produce, many high-impact results that have significantly altered the course of the field, described throughout this book. Paul Allen was the visionary who created AI2 and had the courage to turn his AI dreams into a reality, and until his tragic passing in 2018 he keenly followed our work and regularly challenged us all to strive for breakthroughs. The research collected here summarizes many of our important results to date, and I think he would be proud. But then he would immediately follow that with his characteristic, relentless push forward, reflective of a true visionary: "So what's next?""
[{2023-06-29 retrieved}]

* McsEngl.AI2,
* McsEngl.AI2-Allen-Institute-for-AI,
* McsEngl.oznAi.AI2,


"The Global Partnership on Artificial Intelligence (GPAI, or "gee-pay") is an international and multi-stakeholder initiative that aims to advance the responsible and human-centric development and use of artificial intelligence.[2] Specifically, GPAI brings together leading experts from science, industry, civil society, and governments to "bridge the gap between theory and practice" through applied AI projects and activities.[3] The goal is to facilitate international collaboration, reduce duplication between governments, and act as a global reference point on discussions on responsible AI.[3][4]
First announced on the margins of the 2018 G7 Summit by Canadian Prime Minister Justin Trudeau and French President Emmanuel Macron, GPAI officially launched on June 15, 2020[5] with fifteen founding members: Australia, Canada, France, Germany, India,[6] Italy, Japan, Mexico, New Zealand, the Republic of Korea, Singapore, Slovenia, the United Kingdom, the United States and the European Union.[7][8] The OECD hosts a dedicated secretariat to support GPAI's governing bodies and activities.[7] UNESCO joined the partnership in December 2020 as an observer.[9][7] On November 11, 2021, Czechia, Israel and a few more EU countries also joined the GPAI,[10] bringing the total membership to 25 countries.[2] Since the November 2022 summit, the list of members stands at 29, with in addition to the above, Belgium, Brazil, Denmark, Ireland, The Netherlands, Poland, Senegal, Serbia, Sweden, and Turkey.[11]"
[{2023-04-10 retrieved}]

* McsEngl.GPAI!⇒oznGpai,
* McsEngl.GPAI-Global-Partnership-on-Artificial-Intelligence!⇒oznGpai,
* McsEngl.Global-Partnership-on-Artificial-Intelligence!⇒oznGpai,
* McsEngl.oznAi.GPAI!⇒oznGpai,
* McsEngl.oznGpai,


· "OpenAI is an American artificial intelligence (AI) research laboratory consisting of the non-profit OpenAI and its for-profit subsidiary corporation OpenAI Limited Partnership. OpenAI conducts AI research with the declared intention of developing "safe and beneficial" artificial general intelligence, which it defines as "highly autonomous systems that outperform humans at most economically valuable work".[4]
OpenAI was founded in 2015 by Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, Jessica Livingston, John Schulman, Pamela Vagata, and Wojciech Zaremba, with Sam Altman and Elon Musk serving as the initial board members.[5][6][7] Microsoft provided OpenAI LP with a $1 billion investment in 2019 and a $10 billion investment in 2023.[8][9]"
[{2023-07-31 retrieved}]

* McsEngl.OpenAI!⇒oznOpenai,
* McsEngl.oznAi.OpenAI!⇒oznOpenai,
* McsEngl.oznOpenai,


info-resource of techAi

* {2024-01-30} The promise and challenges of crypto + AI applications:,
* {2021-01-15},
* {2020} Blagoj DELIPETREV, Chrisa TSINARAKI, Uroš KOSTIĆ. “Historical Evolution of Artificial Intelligence”, EUR 30221EN, Publications Office of the European Union, Luxembourg, 2020, ISBN 978-92-76-18940-4, doi:10.2760/801580, JRC120469:,
* {1995-12-26},


* McsEngl.techAi'Infrsc,

science of techAi

"is ai tech or science?
Artificial Intelligence (AI) is both a technology and a scientific field. It encompasses a wide range of techniques, methodologies, and applications that draw from various scientific disciplines and are implemented through technology. Here's a breakdown of AI as both a technology and a science:

1. AI as a Technology:
- AI is a technology in the sense that it involves the development and deployment of software and hardware systems that can perform tasks typically requiring human intelligence.
- AI technologies include machine learning, deep learning, natural language processing (NLP), computer vision, robotics, and more.
- AI is used to create practical applications and solutions across various domains, such as self-driving cars, virtual assistants, recommendation systems, and medical diagnosis.

2. AI as a Scientific Field:
- AI is also a scientific field that focuses on understanding and replicating human-like intelligence in machines.
- Researchers in AI explore fundamental questions related to learning, reasoning, perception, problem-solving, and decision-making.
- AI draws from disciplines such as computer science, mathematics, neuroscience, cognitive science, and philosophy.
- The scientific pursuit of AI involves developing algorithms, models, and theories to enable machines to perform intelligent tasks.

In summary, AI technology refers to the practical application of AI techniques and systems, while AI as a scientific field is concerned with advancing our understanding of intelligence and developing the theoretical underpinnings of AI. The two aspects are closely intertwined, with advancements in AI science driving innovations in AI technology, and practical applications of AI technology often inspiring new scientific research in the field."

* McsEngl.sciAi,
* McsEngl.sciAi!=artificial-intelligence-science,
* McsEngl.techAi'science!⇒sciAi,

DOING of techAi


* McsEngl.techAi'doing,

application-process of techAi

* computer-vision,
* machine-learning,
* machine-reasoning,
* natural-language-understanding,
* education,
* government,
* economy,
* healthcare,

* McsEngl.Aiappl,
* McsEngl.techAi'application-process!⇒Aiappl,
* McsEngl.techAi'use!⇒Aiappl,



· "A chatbot (originally chatterbot[1]) is a software application that aims to mimic human conversation through text or voice interactions, typically online.[2][3] Modern chatbots are artificial intelligence (AI) systems that are capable of maintaining a conversation with a user in natural language and simulating the way a human would behave as a conversational partner. Such technologies often utilize aspects of deep learning and natural language processing.
Recently this field has gained widespread attention due to the popularity of OpenAI's ChatGPT,[4] followed by alternatives such as Microsoft's Bing Chat (which uses OpenAI's GPT-4) and Google's Bard.[5] Such examples reflect the recent practice of such products being built based upon broad foundational large language models that get fine-tuned so as to target specific tasks or applications (i.e. simulating human conversation, in the case of chatbots). Chatbots can also be designed or customized to further target even more specific situations and/or particular subject-matter domains.[6]
A major area where chatbots have long been used is in customer service and support, such as with various sorts of virtual assistants.[7] Recently, companies spanning various industries have begun using the latest generative artificial intelligence technologies to power more advanced developments in such areas.[6]"
[{2023-08-12 retrieved}]
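· before large language models, chatbots of the chatterbot lineage worked by hand-written pattern rules, which a few lines of Python can sketch; the patterns and replies below are invented for illustration:

```python
import re

# Minimal rule-based chatbot: each rule pairs a regular expression with a
# canned reply; the first matching rule wins. Modern chatbots replace this
# hand-written table with a large language model.
RULES = [
    (re.compile(r"\bhello\b", re.I), "Hello! How can I help you?"),
    (re.compile(r"\border\b.*\bstatus\b", re.I), "Please give me your order number."),
]

def reply(message: str) -> str:
    for pattern, answer in RULES:
        if pattern.search(message):
            return answer
    return "Sorry, I did not understand. Could you rephrase?"

print(reply("Hello there"))               # Hello! How can I help you?
print(reply("What is my order status?"))  # Please give me your order number.
```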

* McsEngl.Aiappl.chatbot,
* McsEngl.chatbot,

* LLM-chatbot,

prompting of techAi

· "Prompt engineering or prompting is the process of structuring sentences so that they can be interpreted and understood by a generative AI model in such a way that its output is in accord with the user's intentions.[1][2] A prompt can be a description of a desired output such as "a high-quality photo of an astronaut riding a horse", a command such as "write a limerick about chickens", or a question such as "All men are mortal. Socrates is a man. Is Socrates mortal?". The ability to understand prompts, also called in-context learning, is an emergent ability of large language models. [3]
Prompt engineering for a text-to-text model like ChatGPT may involve phrasing a query, providing relevant context, refining or adjusting prompts, and asking follow-up questions.[4] A prompt may include a few examples for context, such as "maison is French for house. chat is French for cat, chien is French for", an approach called few-shot learning[5][6]. Prompting a text-to-image model may involve adding, removing, emphasizing and re-ordering words to achieve a desired subject, style, aesthetic, layout, lighting, and texture.[1][7]
When applied to PaLM, a 540B-sized model, prompt engineering has allowed the model to perform comparably with task-specific fine-tuned models on several tasks, even setting a new state of the art at the time on the GSM8K mathematical reasoning benchmark.[8] It is also known as "mesa"-optimization,[9] based on presence of (small) learn-to-learn models in the data.[10][11][12][13][14][15] Unlike pre-training or fine-tuning, prompt engineering does not modify the model it is applied to.[8][16][17][18][19][20]"
[{2023-08-13 retrieved}]
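· the few-shot pattern quoted above ("maison is French for house…") amounts to string assembly; a minimal sketch, with an invented helper name:

```python
# Few-shot prompt construction: worked examples are concatenated before the
# query, and the final answer is left blank for the model to complete.

def few_shot_prompt(examples, query):
    parts = [f"{src} is French for {dst}." for src, dst in examples]
    parts.append(f"{query} is French for")  # answer left blank for the model
    return " ".join(parts)

prompt = few_shot_prompt([("maison", "house"), ("chat", "cat")], "chien")
print(prompt)
# maison is French for house. chat is French for cat. chien is French for
```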

* McsEngl.prompt-engineering//techAi,
* McsEngl.prompting,
* McsEngl.prompting//techAi!⇒prompting,
* McsEngl.techAi'prompting,

promt of prompting

"the perfect 4-step prompting formula:
* Context
* Specific Information
* Intent
* Response Format"
[{2023-08-24 retrieved}]
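· the four steps above can be filled into a plain template; the field contents below are an invented example:

```python
# Assemble a prompt from the 4-step formula: context, specific information,
# intent, and response format.

def build_prompt(context, specifics, intent, response_format):
    return "\n".join([
        f"Context: {context}",
        f"Specific information: {specifics}",
        f"Intent: {intent}",
        f"Response format: {response_format}",
    ])

print(build_prompt(
    "You are reviewing a Python codebase.",
    "parse_log() raises IndexError on empty input lines.",
    "Propose a minimal fix.",
    "A short bullet list.",
))
```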

* McsEngl.prompt//techAi,
* McsEngl.prompting'prompt,

info-resource of prompting


* McsEngl.prompting'Infrsc,

evoluting of techAi

"The evolution of artificial intelligence (AI) has been a fascinating journey, spanning over half a century. From its humble beginnings in the 1950s, AI has evolved into a powerful tool that is transforming the world around us.

**The Early Years of AI**
The term "artificial intelligence" was first coined by John McCarthy in 1955. In the early years of AI, researchers focused on developing machines that could mimic human cognitive abilities, such as reasoning, learning, and problem-solving.
One of the earliest and most influential AI programs was ELIZA, developed by Joseph Weizenbaum in 1966. ELIZA was a computer program that could carry on simple conversations with humans. While ELIZA was not truly intelligent, it did demonstrate the potential of computers to interact with humans in a natural way.

**The Rise of Expert Systems**
In the 1970s, AI research shifted towards the development of expert systems. Expert systems are computer programs that are designed to capture the knowledge and expertise of human experts in a particular field. Expert systems were used in a variety of applications, such as medical diagnosis, financial planning, and equipment maintenance.

**The Advent of Machine Learning**
The 1980s saw the rise of machine learning, a subfield of AI that focuses on enabling machines to learn from data without being explicitly programmed. Machine learning algorithms are now widely used in a variety of applications, including image recognition, natural language processing, and fraud detection.

**The Deep Learning Revolution**
In the 2000s, AI research was revolutionized by the development of deep learning. Deep learning is a type of machine learning that uses artificial neural networks to learn from data. Artificial neural networks are inspired by the structure of the human brain, and they have been shown to be extremely effective at learning complex tasks.

**AI Today**
Today, AI is a rapidly growing field with a wide range of applications. AI is being used to improve healthcare, transportation, education, and many other industries. AI is also being used to develop new products and services, such as self-driving cars and virtual assistants.

**The Future of AI**
The future of AI is full of potential. AI researchers are developing new algorithms and techniques that are constantly pushing the boundaries of what is possible. It is likely that AI will continue to transform our world in ways that we can only begin to imagine.

Here are some of the key milestones in the evolution of AI:
* 1950: Alan Turing proposes the Turing test, a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.
* 1956: John McCarthy coins the term "artificial intelligence".
* 1966: Joseph Weizenbaum develops ELIZA, a computer program that can carry on simple conversations with humans.
* 1970s: Expert systems are developed, which are computer programs that are designed to capture the knowledge and expertise of human experts in a particular field.
* 1980s: Machine learning becomes a popular subfield of AI.
* 2000s: Deep learning revolutionizes AI research.
* 2010s: AI is widely adopted in a variety of industries.
* 2020s: AI is used to develop new products and services, such as self-driving cars and virtual assistants.

The evolution of AI is an ongoing process, and it is likely that AI will continue to evolve in new and unexpected ways in the years to come.
[{2023-11-05 retrieved}]

=== McsHitp-creation:
· creation of current concept.

"We’ve created GPT-4, the latest milestone in OpenAI’s effort in scaling up deep learning. GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs) that, while less capable than humans in many real-world scenarios, exhibits human-level performance on various professional and academic benchmarks."
* McsEngl.{techInfo'2023-03-14}-techAi-GPT-4,
* McsEngl.{2023-03-14}-techAi-GPT-4,

"ChatGPT (Chat Generative Pre-trained Transformer) is a large language model-based chatbot developed by OpenAI and launched on November 30, 2022, that enables users to refine and steer a conversation towards a desired length, format, style, level of detail, and language. Successive prompts and replies, known as prompt engineering, are considered at each conversation stage as a context."
[{2023-11-29 retrieved}]
* McsEngl.{science'2022-11-30}-techAi-ChatGPT,
* McsEngl.{techInfo'2022-11-30}-techAi-ChatGPT,
* McsEngl.{2022-11-30}-techAi-ChatGPT,

"Transformers were introduced in 2017 by a team at Google Brain[1] and are increasingly becoming the model of choice for NLP problems,[3] replacing RNN models such as long short-term memory (LSTM)."
[{2023-04-10 retrieved}]
* McsEngl.{science'2017}-techAi-Transformers,
* McsEngl.{techInfo'2017}-techAi-Transformers,
* McsEngl.{2017}-techAi-Transformers,

"According to Bloomberg's Jack Clark, 2015 was a landmark year for artificial intelligence, with the number of software projects that use AI within Google increasing from a "sporadic usage" in 2012 to more than 2,700 projects.[i] He attributed this to an increase in affordable neural networks, due to a rise in cloud computing infrastructure and to an increase in research tools and datasets.[7]"
[{2023-04-10 retrieved}]
* McsEngl.{science'2015}-techAi-landmark-year,
* McsEngl.{techInfo'2015}-techAi-landmark-year,
* McsEngl.{2015}-techAi-landmark-year,

· 2nd major AI winter.
[{2023-04-04 retrieved}]
* McsEngl.{science'1987..1993}-techAi-2nd-winter,
* McsEngl.{techInfo'1987..1993}-techAi-2nd-winter,
* McsEngl.{1987..1993}-techAi-2nd-winter,

"In the early 1980s, AI research was revived by the commercial success of expert systems,[34] a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985, the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S. and British governments to restore funding for academic research.[4] However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting winter began.[6]"
[{2023-04-10 retrieved}]
* McsEngl.{science'1980s}-techAi-expert-systems,
* McsEngl.{techInfo'1980s}-techAi-expert-systems,
* McsEngl.{1980s}-techAi-expert-systems,

· 1st major AI winter.
[{2023-04-04 retrieved}]
* McsEngl.{science'1974..1980}-techAi-1st-winter,
* McsEngl.{techInfo'1974..1980}-techAi-1st-winter,
* McsEngl.{1974..1980}-techAi-1st-winter,

"1969 Shakey the Robot was the first general-purpose mobile robot capable of reasoning about its actions. This project integrated research in robotics with computer vision and natural language processing, thus being the first project that combined logical reasoning and physical action (Bertram 1972)."
[{2020} Historical-Evolution-of-AI, p7, ifrcElnc000001]
* McsEngl.{1969}-techAi-Shakey-the-Robot,

"The first “AI period” began with the Dartmouth conference in 1956, where AI got its name and mission.
McCarthy coined the term "artificial intelligence," which became the name of the scientific field.
The primary conference assertion was, "Every aspect of any other feature of learning or intelligence should be accurately described so that the machine can simulate it” (Russell and Norvig 2016).
Among the conference attendees were Ray Solomonoff, Oliver Selfridge, Trenchard More, Arthur Samuel, Herbert A. Simon, and Allen Newell, all of whom became key figures in the AI field."
[{2020} Historical-Evolution-of-AI, p7, ifrcElnc000001]
* McsEngl.{1956}-techAi-Dartmouth-conference,

"1955 The Logic Theorist had proven 38 theorems from Principia Mathematica and introduced critical concepts in artificial intelligence, like heuristics, list processing, ‘reasoning as search,' etc. (Newell et al. 1962)."
[{2020} Historical-Evolution-of-AI, p7, ifrcElnc000001]
* McsEngl.{1955}-techAi-Logic-Theorist,

"In 1950, Alan Turing published the milestone paper "Computing machinery and intelligence" (Turing 1950), considering the fundamental question "Can machines think?”
Turing proposed an imitation game, known as the Turing test afterwards, where if a machine could carry on a conversation indistinguishable from a conversation with a human being, then it is reasonable to say that the machine is intelligent.
The Turing test was the first experiment proposed to measure machine intelligence"
[{2020} Historical-Evolution-of-AI, p7, ifrcElnc000001]
* McsEngl.{1950}-techAi-Turing-test,

"The first work that is now generally recognized as AI was McCullouch and Pitts' 1943 formal design for Turing-complete "artificial neurons".[20]"
* McsEngl.{science'1943}-techAi-first-work,
* McsEngl.{techInfo'1943}-techAi-first-work,
* McsEngl.{1943}-techAi-first-work,

* McsEngl.evoluting-of-techAi,
* McsEngl.techAi'evoluting,

winter of techAi

"In the history of artificial intelligence, an AI winter is a period of reduced funding and interest in artificial intelligence research.[1] The term was coined by analogy to the idea of a nuclear winter.[2] The field has experienced several hype cycles, followed by disappointment and criticism, followed by funding cuts, followed by renewed interest years or even decades later.
The term first appeared in 1984 as the topic of a public debate at the annual meeting of AAAI (then called the "American Association of Artificial Intelligence"). It is a chain reaction that begins with pessimism in the AI community, followed by pessimism in the press, followed by a severe cutback in funding, followed by the end of serious research.[2] At the meeting, Roger Schank and Marvin Minsky—two leading AI researchers who had survived the "winter" of the 1970s—warned the business community that enthusiasm for AI had spiraled out of control in the 1980s and that disappointment would certainly follow. Three years later, the billion-dollar AI industry began to collapse.[2]
Hype is common in many emerging technologies, such as the railway mania or the dot-com bubble. The AI winter was a result of such hype, due to over-inflated promises by developers, unnaturally high expectations from end-users, and extensive promotion in the media.[3] Despite the rise and fall of AI's reputation, it has continued to develop new and successful technologies. AI researcher Rodney Brooks would complain in 2002 that "there's this stupid myth out there that AI has failed, but AI is around you every second of the day."[4] In 2005, Ray Kurzweil agreed: "Many observers still think that the AI winter was the end of the story and that nothing since has come of the AI field. Yet today many thousands of AI applications are deeply embedded in the infrastructure of every industry."[5]
Enthusiasm and optimism about AI has generally increased since its low point in the early 1990s. Beginning about 2012, interest in artificial intelligence (and especially the sub-field of machine learning) from the research and corporate communities led to a dramatic increase in funding and investment."
[{2023-04-04 retrieved}]

* McsEngl.AI-winter,
* McsEngl.techAi'winter,


* McsEngl.techAi'part-whole-tree,
* McsEngl.techAi'whole-part-tree,

* ... Sympan.



* McsEngl.techAi'generic-specific-tree,
* McsEngl.techAi'specific-generic-tree,

* entity.
* science,
* tool,
"Human beings across time have shared one important characteristic: they use tools to improve what they can achieve.
AI can be one such tool, and it can work well, provided we remember it is a tool. As a tool it must be put in the hands of a human, who can use it appropriately and intentionally, for achieving the goals they have."
"AI is technology or science
Artificial intelligence (AI) is both a science and a technology.
As a science, AI is the study of how to create intelligent machines. This involves understanding the principles of human intelligence and how to replicate them in machines. AI scientists use mathematics, computer science, and other disciplines to develop new AI algorithms and systems.
As a technology, AI is the application of AI algorithms and systems to real-world problems. This includes the development of AI-powered products and services, such as self-driving cars, virtual assistants, and fraud detection systems.
The line between AI science and AI technology is not always clear. For example, the development of a new AI algorithm is often considered to be a scientific achievement, but the application of that algorithm to a real-world problem is considered to be a technological achievement.
Ultimately, AI is both a science and a technology. It is the science of creating intelligent machines, and it is the technology of applying those machines to real-world problems.
Here are some examples of how AI is being used in science:
* AI is being used to develop new drug discovery methods.
* AI is being used to analyze large datasets of medical images and other data to improve diagnosis and treatment.
* AI is being used to develop new environmental models to help us understand and mitigate climate change.
* AI is being used to study the human brain and to develop new treatments for neurological disorders.
Here are some examples of how AI is being used in technology:
* AI is being used to develop self-driving cars.
* AI is being used to develop virtual assistants like Siri and Alexa.
* AI is being used to develop fraud detection systems.
* AI is being used to develop personalized recommendations systems.
* AI is being used to develop new gaming experiences.
AI is a rapidly evolving field, and it is difficult to predict what the future holds. However, it is clear that AI has the potential to revolutionize many aspects of our lives, from the way we work to the way we interact with the world around us."
[{2023-08-23 retrieved}]

* McsEngl.techAi'generic,
* McsEngl.techAi:science,
* McsEngl.techAi:tool,

* semantic-AI,
* statistical-AI,
* general-AI,
* specific-AI,
* machine-learning,
* neural-network,

* McsEngl.techAi.specific,


"overview of XAI:
**Explainable Artificial Intelligence (XAI)** is a field of research that aims to develop methods to make AI models more understandable and interpretable. XAI is important because it can help to:
* **Increase trust in AI systems:** When people understand how AI systems work and make decisions, they are more likely to trust them.
* **Identify and mitigate bias:** XAI can help to identify and mitigate bias in AI systems, which can lead to more fair and equitable outcomes.
* **Improve decision-making:** XAI can help people to understand the factors that influence AI predictions, which can lead to better decision-making.

There are a variety of different XAI methods, which can be classified into two broad categories:
* **Model-specific methods:** These methods are designed to explain the predictions of specific types of AI models, such as decision trees, linear regression models, and neural networks.
* **Model-agnostic methods:** These methods can be used to explain the predictions of any type of AI model.

Some common XAI methods include:
* **Feature importance:** This method identifies the features that are most important in influencing the predictions of the AI model.
* **Partial dependence plots:** This method shows how the prediction of the AI model changes as the value of a single feature is varied.
* **Local interpretable model-agnostic explanations (LIME):** This method generates a simple, interpretable model that can explain the prediction of the AI model for a given input.
* **Shapley additive explanations (SHAP):** This method uses game theory to calculate the importance of each feature in influencing the prediction of the AI model.

XAI is a rapidly growing field of research, and new methods are being developed all the time. As AI systems become more and more complex, XAI will become increasingly important for ensuring that they are used in a responsible and ethical way.

Here are some examples of how XAI is being used in the real world:
* **Healthcare:** XAI is being used to develop systems that can explain the predictions of medical diagnostic models. This can help doctors to better understand the models' recommendations and make more informed decisions about patient care.
* **Finance:** XAI is being used to develop systems that can explain the predictions of credit scoring models and other financial models. This can help people to understand why they were approved or denied for a loan or other financial product, and it can help financial institutions to identify and mitigate bias in their models.
* **Criminal justice:** XAI is being used to develop systems that can explain the predictions of recidivism models and other criminal justice models. This can help judges to make more informed decisions about sentencing and probation, and it can help people to understand why they were assessed as being at high or low risk of recidivism.

XAI is a powerful tool that can help us to build more trustworthy, fair, and responsible AI systems."
[{2023-10-26 retrieved}]

* McsEngl.XAI!=explainable-AI,
* McsEngl.explainable-AI!⇒techAiExplainable,
* McsEngl.techAi.001-explainable,
* McsEngl.techAi.explainable,
* McsEngl.techAiExplainable,
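· the model-agnostic feature-importance idea from the overview above can be sketched as permutation importance: shuffle one input feature and measure how much the model's accuracy drops. The toy "model" below is a hand-written rule so the sketch stays self-contained; it is an illustration of the technique, not any specific library's implementation:

```python
import random

# Sketch of model-agnostic permutation feature importance.
# A large accuracy drop after shuffling a feature means the model
# relies on that feature; no drop means the feature is uninformative.

def model(row):
    # Toy classifier: predicts 1 when feature 0 exceeds 0.5; feature 1 is noise.
    return 1 if row[0] > 0.5 else 0

def accuracy(rows, labels):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(rows, labels, feature, seed=0):
    """Accuracy drop when the given feature's column is shuffled."""
    rng = random.Random(seed)
    shuffled = [r[feature] for r in rows]
    rng.shuffle(shuffled)
    perturbed = [list(r) for r in rows]  # copy so the originals stay intact
    for r, v in zip(perturbed, shuffled):
        r[feature] = v
    return accuracy(rows, labels) - accuracy(perturbed, labels)

rows = [[i / 10, 0.3] for i in range(10)]          # feature 1 is constant noise
labels = [1 if r[0] > 0.5 else 0 for r in rows]

print(permutation_importance(rows, labels, 0))     # informative feature: drop >= 0
print(permutation_importance(rows, labels, 1))     # → 0.0 (noise feature)
```

· LIME and SHAP pursue the same question (which inputs drove the prediction) with more refined machinery: local surrogate models and Shapley values respectively.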

evoluting of techAiExplainable

"evolution of explainable-AI:
The concept of explainable AI (XAI) has evolved over time as a response to the need for transparency, accountability, and trust in artificial intelligence systems. Here is an overview of the evolution of explainable AI:

1. Early AI Systems: In the early days of AI, such as rule-based expert systems, AI was primarily rule-driven and transparent. These systems were explicitly programmed with human-readable rules, making their decision-making process easily explainable. However, they were limited in their ability to handle complex and unstructured data.

2. Emergence of Machine Learning: As machine learning techniques, particularly neural networks, gained popularity, AI systems became more data-driven and complex. While these systems often achieved impressive performance, their internal workings were often considered "black boxes," making it difficult to understand why a specific decision or prediction was made.

3. XAI as a Response: The need for understanding and explaining AI decisions became apparent as AI applications were increasingly used in critical domains such as healthcare, finance, and criminal justice. Researchers and practitioners recognized the importance of making AI systems more transparent and interpretable.

4. Rule-Based Explainability: Early efforts in XAI involved integrating rule-based explanations into machine learning models. This included techniques like decision trees and rule extraction from neural networks, which could provide insights into how a model made predictions.

5. Feature Importance and Interpretability: Another approach focused on identifying the importance of input features in the decision-making process. Techniques such as feature attribution methods, LIME (Local Interpretable Model-agnostic Explanations), and SHAP (SHapley Additive exPlanations) emerged to help users understand the impact of each feature on the model's output.

6. Interpretable Models: Researchers began to develop interpretable machine learning models, such as linear models or decision trees, that could approximate the behavior of more complex models while still being transparent and explainable.

7. Post-hoc Explanation Methods: A significant part of XAI involves post-hoc explanation methods that explain the decisions of complex models. This includes techniques like saliency maps for deep neural networks, model-agnostic explanations, and generating textual or visual explanations.

8. Regulatory Initiatives: In various parts of the world, regulatory agencies have started to require AI developers to provide explanations for their models' decisions, especially in critical applications. These initiatives, like the European Union's General Data Protection Regulation (GDPR) and the Algorithmic Accountability Act in the United States, have spurred interest in XAI.

9. Advancements in Neural Network Interpretability: Researchers have made significant strides in making deep learning models more interpretable. Techniques like attention mechanisms, gradient-based attribution methods, and network visualization tools help users understand how neural networks arrive at their decisions.

10. Ethical Considerations: The ethical dimension of XAI has become increasingly important. This includes addressing issues like bias, fairness, and transparency in AI systems. Researchers and organizations are working to develop XAI techniques that mitigate bias and discrimination in AI decision-making.

11. Future Developments: The field of XAI is ongoing, and research continues to advance in areas like natural language explanations, human-computer interaction for explaining AI, and making XAI more accessible and user-friendly.

The evolution of explainable AI reflects the growing recognition of the importance of transparency, accountability, and user understanding in AI systems, particularly in domains where critical decisions are made based on AI recommendations. Researchers and practitioners are continually working on developing new methods and tools to improve the explainability of AI systems while maintaining their performance."
[{2023-11-03 retrieved}]
"evolution of explainable-AI:
The field of explainable AI (XAI) has evolved rapidly in recent years, driven by the increasing adoption of AI systems in critical applications such as healthcare, finance, and criminal justice. XAI research is focused on developing methods and tools to help humans understand and interpret the decisions made by AI systems. This is essential for building trust in AI systems, ensuring that they are used fairly and ethically, and debugging and improving them.

The evolution of XAI can be broadly divided into three phases:
* **Phase 1 (pre-2016):** This phase was characterized by the development of early XAI techniques, such as rule-based explanations and feature importance. These techniques were effective for explaining simple AI systems, but they struggled to explain more complex systems such as deep learning models.
* **Phase 2 (2016-2020):** This phase saw a surge of interest in XAI, driven by the increasing use of deep learning models in real-world applications. During this phase, researchers developed new XAI techniques that were more effective for explaining complex AI systems. These techniques included counterfactual explanations, attention mechanisms, and visualization techniques.
* **Phase 3 (2020-present):** This phase is characterized by the development of more sophisticated XAI techniques, as well as the increasing adoption of XAI in industry. Researchers are now developing XAI techniques that can explain AI systems in real time, as well as techniques that can explain the decisions of multiple AI systems working together.

Here are some of the key trends in the evolution of XAI:
* **Shift from post-hoc to proactive XAI:** Early XAI techniques were typically post-hoc, meaning that they could only explain the decision of an AI system after it had been made. This was limiting, as it could be difficult to understand why an AI system had made a particular decision if the explanation was provided after the fact. More recent XAI techniques are proactive, meaning that they can explain the decisions of an AI system as it is making them. This is more helpful for users, as it allows them to understand the AI system's reasoning in real time and to intervene if necessary.
* **Development of more sophisticated explanation techniques:** Researchers are developing new XAI techniques that can explain more complex AI systems and that can provide more detailed and informative explanations. For example, researchers are developing techniques that can explain the decisions of ensemble AI systems, which are AI systems that combine the predictions of multiple individual AI systems. Researchers are also developing techniques that can explain the decisions of AI systems that are trained on multimodal data, such as images and text.
* **Increasing adoption of XAI in industry:** XAI is becoming increasingly important to industry, as companies are recognizing the need to build trust in their AI systems. Many companies are now integrating XAI tools into their AI development and deployment workflows.

Overall, the field of XAI is rapidly evolving and there is a lot of exciting research happening in this area. As AI systems become more complex and are used in more critical applications, the need for XAI will only continue to grow.

Here are some examples of how XAI is being used in the real world today:
* **Healthcare:** XAI is being used to explain the decisions of AI systems that are used to diagnose diseases and recommend treatments. This is helping doctors to understand how the AI systems are making their decisions and to make better decisions themselves.
* **Finance:** XAI is being used to explain the decisions of AI systems that are used to approve loans and make investment decisions. This is helping banks and investment firms to make more fair and transparent decisions.
* **Criminal justice:** XAI is being used to explain the decisions of AI systems that are used to predict recidivism and set bail amounts. This is helping judges to make more informed and fair decisions.

XAI is an essential tool for building trust in AI systems and ensuring that they are used fairly and ethically. As AI systems become more and more pervasive in our lives, XAI will play an increasingly important role in ensuring that we can use them safely and responsibly."
[{2023-11-03 retrieved}]

* McsEngl.evoluting-of-techAiExplainable,
* McsEngl.techAiExplainable'evoluting,


"overview of trustworthy AI:
Trustworthy AI, also known as "AI ethics" or "AI ethics and governance," is a concept that focuses on ensuring that artificial intelligence (AI) systems are developed and used in ways that are ethical, responsible, and aligned with human values. Trustworthy AI encompasses a range of principles, practices, and guidelines aimed at building and maintaining trust in AI technologies. Here is an overview of the key aspects of trustworthy AI:

1. Ethical Considerations:
- Ethical AI starts with considering the moral and ethical implications of AI technologies. It emphasizes the need to ensure that AI systems are designed and used in ways that respect human rights, avoid bias, and promote fairness.

2. Transparency:
- Trustworthy AI requires transparency in AI system development and operation. This means making AI systems more understandable and explainable, so users and stakeholders can grasp how decisions are made and why.

3. Accountability:
- Accountability involves defining responsibilities and assigning liability in AI development and deployment. Developers, operators, and organizations must be accountable for the behavior and outcomes of AI systems.

4. Fairness:
- Trustworthy AI aims to prevent discrimination and bias in AI systems, ensuring that they treat all individuals and groups fairly and without favoritism.

5. Data Privacy and Security:
- Protecting user data and ensuring data security are fundamental aspects of trustworthy AI. AI systems should handle data responsibly, respecting privacy and adhering to data protection regulations.

6. Robustness and Safety:
- AI systems must be reliable, robust, and safe. Trustworthy AI emphasizes the need to build AI technologies that can withstand unforeseen situations, avoid catastrophic failures, and adapt to changing conditions.

7. Human Control:
- Trustworthy AI maintains that humans should have ultimate control over AI systems. This means that AI should assist and augment human decision-making rather than replace it.

8. Societal Impact:
- Assessing and mitigating the societal impact of AI is essential. This includes considering the broader implications of AI on society, economy, and employment.

9. Collaboration:
- Trustworthy AI encourages collaboration among various stakeholders, including governments, industry, academia, and civil society, to collectively develop and implement ethical AI principles and guidelines.

10. Continuous Monitoring and Improvement:
- The development and deployment of AI systems should be an ongoing process of monitoring, evaluation, and improvement to ensure that they remain aligned with trustworthy AI principles.

11. Legal and Regulatory Frameworks:
- Governments and regulatory bodies are working to establish legal frameworks and regulations that govern AI development and use to promote trustworthy AI practices.

12. Ethical AI Education and Awareness:
- Promoting awareness and education about AI ethics and trustworthy AI principles is important to ensure that all stakeholders understand the ethical implications and responsibilities associated with AI.

Trustworthy AI is a multidisciplinary field that involves collaboration among AI researchers, ethicists, policymakers, and various stakeholders to develop and implement ethical guidelines and practices. The goal is to create AI systems that benefit society, respect human values, and are trusted by individuals and organizations."
[{2023-11-04 retrieved}]

* McsEngl.AI-ethics!⇒techAiTrustworthy,
* McsEngl.AI-ethics-and-governance!⇒techAiTrustworthy,
* McsEngl.techAi.002-trustworthy,
* McsEngl.techAi.trustworthy,
* McsEngl.techAiTrustworthy,
* McsEngl.trustworthy-AI!⇒techAiTrustworthy,

evoluting of techAiTrustworthy

"evolution of trustworthy-AI:
The evolution of trustworthy AI can be divided into three main stages:

**Stage 1: Awareness and concern**
The first stage of the evolution of trustworthy AI is characterized by growing awareness and concern about the ethical and social implications of AI. This stage began in the early 2010s, with the rise of high-profile AI systems such as Google's AlphaGo and Facebook's facial recognition system. These systems demonstrated the potential power and impact of AI, but also raised concerns about the potential for AI to be used for harmful purposes.

**Stage 2: Development of ethical guidelines and best practices**
The second stage of the evolution of trustworthy AI is characterized by the development of ethical guidelines and best practices for the development and deployment of AI systems. This stage began in the mid-2010s, with the publication of reports such as the Asilomar AI Principles and the European Commission's Ethics Guidelines for Trustworthy AI. These guidelines provide a framework for thinking about the ethical implications of AI and how to develop and deploy AI systems in a responsible and trustworthy manner.

**Stage 3: Implementation and evaluation**
The third stage of the evolution of trustworthy AI is characterized by the implementation and evaluation of ethical guidelines and best practices. This stage began in the late 2010s and is still ongoing. AI developers and deployers are beginning to implement the ethical guidelines that have been developed, and researchers are evaluating the effectiveness of these guidelines in practice.
Here are some specific examples of the evolution of trustworthy AI in recent years:
* In 2018, the European Commission launched the High-Level Expert Group on Artificial Intelligence (AI HLEG), which was tasked with developing ethics guidelines for trustworthy AI. The AI HLEG published its final report in April 2019, which set out seven key requirements for trustworthy AI:
* Lawfulness
* Ethicality
* Robustness
* Transparency
* Accountability
* Fairness
* Respect for human autonomy
* In 2019, the Partnership on AI (PAI), a global coalition of companies, governments, and non-profit organizations, released a set of Responsible AI Principles. These principles are based on the AI HLEG's ethics guidelines and provide a framework for AI developers and deployers to build and implement trustworthy AI systems.
* In 2020, the US National Institute of Standards and Technology (NIST) released a draft document on Trustworthy AI. This document provides a comprehensive overview of the key challenges and opportunities in developing and deploying trustworthy AI systems. It also includes a set of recommendations for organizations that are developing or using AI systems.

The evolution of trustworthy AI is still in its early stages, but significant progress has been made in recent years. The development of ethical guidelines and best practices, as well as the increasing awareness and concern about the ethical and social implications of AI, are all positive signs. However, more work needs to be done to implement and evaluate these guidelines and best practices, and to develop new tools and techniques for building and deploying trustworthy AI systems.

Here are some of the current trends in the evolution of trustworthy AI:
* The development of new tools and techniques for explainable AI (XAI). XAI systems can help users to understand how AI systems make decisions and to identify any potential biases in the system.
* The development of new methods for assessing and evaluating the trustworthiness of AI systems. This includes the development of metrics for fairness, accountability, and transparency.
* The development of new standards and regulations for AI systems. This is being done at both the national and international levels.

The evolution of trustworthy AI is essential for ensuring that the benefits of AI are widely shared and that the risks of AI are minimized. The work that is being done in this area is critical for the future of AI and for society as a whole."
[{2023-11-04 retrieved}]
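The fairness metrics mentioned in the quoted trends can be made concrete. As a minimal sketch (the function names and data below are invented for illustration), demographic parity difference compares positive-prediction rates across groups:

```python
# Sketch: demographic parity difference, one of the fairness metrics
# discussed above. All names and data are invented for illustration.

def positive_rate(predictions, groups, group):
    """Fraction of members of `group` that received a positive prediction."""
    members = [p for p, g in zip(predictions, groups) if g == group]
    return sum(members) / len(members)

def demographic_parity_difference(predictions, groups):
    """Largest gap in positive-prediction rate between any two groups.
    0.0 means equal rates; larger values indicate disparity."""
    rates = [positive_rate(predictions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Example: a model approves 3 of 4 applicants in group "a"
# but only 1 of 4 in group "b".
preds  = [1, 1, 1, 0, 0, 0, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5
```

Real auditing toolkits compute many such metrics (equalized odds, predictive parity, and others); this single number is only the simplest starting point.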

* McsEngl.evoluting-of-techAiTrustworthy,
* McsEngl.techAiTrustworthy'evoluting,

relation-to-responsible-AI of techAiTrustworthy

"trustworthy-AI vs responsible-AI:
Trustworthy AI and responsible AI are two closely related concepts, but there are some important differences between the two.

**Trustworthy AI** is focused on the technical implementation of AI systems in a way that ensures that they are reliable, safe, and secure. This includes things like:
* Making sure that AI systems are trained on data that is representative and unbiased.
* Developing AI systems that are robust to noise and adversarial attacks.
* Making sure that AI systems can be explained and audited.

**Responsible AI** is focused on the ethical and social implications of AI. This includes things like:
* Ensuring that AI systems are used in a way that is fair and equitable.
* Protecting the privacy and security of users of AI systems.
* Mitigating the risks of AI systems being used for harmful purposes.

In other words, trustworthy AI is about ensuring that AI systems work as intended, while responsible AI is about ensuring that AI systems are used in a way that benefits society.

Here is a table that summarizes the key differences between trustworthy AI and responsible AI:
| Characteristic | Trustworthy AI | Responsible AI |
|---|---|---|
| Focus | Technical implementation | Ethical and social implications |
| Key areas | Reliability, safety, security | Fairness, equity, privacy, security, safety, transparency, accountability |
| Examples | Developing AI systems that are robust to noise and adversarial attacks, making sure that AI systems can be explained and audited | Ensuring that AI systems are used in a fair and equitable way, protecting the privacy and security of users of AI systems, mitigating the risks of AI systems being used for harmful purposes |

It is important to note that trustworthy AI and responsible AI are not mutually exclusive. In fact, they are complementary. To build a responsible AI system, it is first necessary to build a trustworthy AI system. However, even a trustworthy AI system can be used for harmful purposes if it is not used responsibly.

Therefore, it is important to consider both trustworthy AI and responsible AI when developing and deploying AI systems."
[{2023-11-04 retrieved}]

* McsEngl.techAiResponsible'relation-to-trustworthy-AI,
* McsEngl.techAiTrustworthy'relation-to-responsible-AI,


"According to Google, responsible AI means not just avoiding risks, but also finding ways to improve people’s lives and address social and scientific problems, as these new technologies have applications in predicting disasters, improving medicine, precision agriculture, and more."
[{2023-03-31 retrieved}]

* McsEngl.responsible-AI,
* McsEngl.techAi.003-responsible,
* McsEngl.techAi.responsible,
* McsEngl.techAiResponsible,

info-resource of techAiResponsible


* McsEngl.techAiResponsible'Infrsc,

evoluting of techAiResponsible

"evolution of responsible-AI:
The evolution of responsible AI can be traced back to the early days of AI research, when concerns were raised about the potential negative impacts of AI systems. In the 1970s, for example, philosophers such as Hubert Dreyfus and John Searle mounted influential critiques of AI's foundational assumptions and of its claims to genuine understanding. In the 1980s, the Association for the Advancement of Artificial Intelligence (AAAI) adopted its first code of ethics, which emphasized the importance of developing AI systems that are safe, beneficial, and accountable.

In the 1990s and 2000s, the field of AI ethics developed further, with researchers and practitioners exploring issues such as bias, transparency, and fairness in AI systems. In 2017, the Asilomar AI Principles were released, outlining a set of guidelines for developing and deploying AI systems in a responsible and ethical way.

In recent years, there has been a growing recognition of the importance of responsible AI, both within the AI community and among the broader public. This is due in part to the increasing deployment of AI systems in a wide range of applications, from healthcare to finance to criminal justice. As AI systems become more powerful and pervasive, it is essential to ensure that they are developed and deployed in a responsible way.

Some of the key trends in the evolution of responsible AI include:
* **A shift from focusing on individual AI systems to considering the broader ecosystem of AI.** This includes the development of principles and guidelines for responsible AI, as well as the creation of tools and resources to help organizations implement responsible AI practices.
* **A growing interest in fairness and equity in AI systems.** This is due to the recognition that AI systems can reflect and amplify existing biases in society. Researchers and practitioners are developing new methods to mitigate bias in AI systems, and to ensure that AI systems are used in a fair and equitable way.
* **An increasing focus on transparency and accountability in AI systems.** This is due to the recognition that AI systems can be complex and opaque, making it difficult to understand how they work and why they make the decisions they do. Researchers and practitioners are developing new methods to make AI systems more transparent and accountable, and to give users more control over how AI systems are used.

The evolution of responsible AI is still in its early stages, but there is significant progress being made. As AI systems become more powerful and pervasive, it is essential to continue to develop and implement responsible AI practices to ensure that AI is used for good.

Here are some specific examples of how responsible AI is being put into practice today:
* **Google AI is developing tools and resources to help organizations implement responsible AI practices.** For example, Google AI has developed the What-If Tool, which allows organizations to test their AI systems for bias and fairness.
* **The Partnership on AI (PAI) is a collaborative effort between companies, universities, and nonprofits to develop and promote principles and guidelines for responsible AI.** The PAI has published a number of resources on responsible AI, including the PAI Principles and the AI for Good Playbook.
* **The Algorithmic Accountability Act is a proposed law in the United States that would require companies to assess and mitigate the risks associated with their automated decision-making systems.** The Algorithmic Accountability Act would also require companies to provide users with access to information about how their automated decision-making systems work.

These are just a few examples of how responsible AI is being put into practice today. As the field of AI continues to evolve, it is essential to continue to develop and implement responsible AI practices to ensure that AI is used for good."
[{2023-11-03 retrieved}]

* McsEngl.evoluting-of-techAiResponsible,
* McsEngl.techAiResponsible'evoluting,


"Artificial Narrow Intelligence (ANI), often referred to as “Weak” AI is the type of AI that mostly exists today. ANI systems can perform one or a few specific tasks and operate within a predefined environment, e.g., those exploited by personal assistants Siri, Alexa, language translations, recommendation systems, image recognition systems, face identification, etc.
ANI can process data at lightning speed and boost the overall productivity and efficiency in many practical applications, e.g., translate between 100+ languages simultaneously, identify faces and objects in billions of images with high accuracy, assist users in many data-driven decisions in a quicker way. ANI can perform routine, repetitive, and mundane tasks that humans would prefer to avoid."
[{2020} Historical-Evolution-of-AI, ifrcElnc000001]

* McsEngl.ANI'(Artificial-Narrow-Intelligence),
* McsEngl.ASI'(Artificial-Specific-Intelligence),
* McsEngl.Artificial-Narrow-Intelligence,
* McsEngl.techAi.narrow,
* McsEngl.techAiNarrow,
* McsEngl.weak-AI,


"Artificial General Intelligence (AGI) or “Strong” AI refers to machines that exhibit human intelligence. In other words, AGI aims to perform any intellectual task that a human being can. AGI is often illustrated in science fiction movies with situations where humans interact with machines that are conscious, sentient, and driven by emotion and self-awareness. At this moment, there is nothing like an AGI."
[{2020} Historical-Evolution-of-AI, ifrcElnc000001]

* McsEngl.AGI!=Artificial-General-Intelligence!⇒techAiGeneral,
* McsEngl.Artificial-General-Intelligence!⇒techAiGeneral,
* McsEngl.techAiGeneral,
* McsEngl.techAi.general!⇒techAiGeneral,
====== langoGreek:
* McsElln.τεχνητή-νοημοσύνη-ανθρώπινου-επιπέδου,

info-resource of techAiGeneral

* {2023}

* McsEngl.techAiGeneral'Infrsc,

evoluting of techAiGeneral

"evolution of Artificial-General-Intelligence:
The evolution of artificial general intelligence (AGI) can be divided into four main stages:

1. **Early symbolic AI (1950s-1970s)**: This early work focused on developing symbolic AI systems that could solve specific problems using logical reasoning. Examples include SHRDLU, which could manipulate blocks in a simulated world, and DENDRAL, which could predict the structure of organic molecules.
2. **Connectionism (1980s-1990s)**: This approach to AI was inspired by the structure and function of the human brain. Connectionist systems are made up of networks of interconnected nodes, and they learn by adjusting the weights of the connections between the nodes. Examples of connectionist systems include artificial neural networks and deep learning models.
3. **Statistical AI (2000s-present)**: This approach to AI focuses on developing statistical models that can be used to predict or classify data. Statistical AI systems are trained on large datasets of labeled data, and they learn to perform tasks such as image recognition, natural language processing, and machine translation.
4. **Whole-brain emulation (2020s-present)**: This approach to AGI aims to create a computer simulation of the entire human brain. This would allow AGI systems to learn and perform tasks in the same way that humans do.

While there is still no AGI system that has achieved human-level intelligence, significant progress has been made in recent years. For example, deep learning models have achieved state-of-the-art results on a wide range of tasks, including image recognition, natural language processing, and machine translation. Additionally, there is growing interest in whole-brain emulation, and a number of projects are underway to develop the technology necessary to simulate the human brain in detail.

It is difficult to predict when AGI will be achieved, but some experts believe that it could happen within the next few decades. However, there are also a number of challenges that need to be addressed before AGI can be realized. For example, AGI systems need to be able to learn from a wide range of experiences, and they need to be able to reason about complex and abstract concepts. Additionally, it is important to ensure that AGI systems are developed safely and ethically.

Overall, the evolution of AGI is a rapidly developing field, and there is significant progress being made. It is likely that AGI will have a profound impact on society in the coming years, and it is important to start thinking about the ethical and social implications of this technology now."
[{2023-11-03 retrieved}]
"evolution of Artificial-General-Intelligence:
The evolution of Artificial General Intelligence (AGI) is an ongoing and complex journey that spans several decades and continues to develop as of my last knowledge update in January 2022. AGI refers to the hypothetical intelligence of a machine that can understand, learn, and apply knowledge across a wide range of tasks, similar to human intelligence. Here is a brief overview of the key milestones and trends in the evolution of AGI:

1. Early AI Research (1950s-1960s):
- The field of artificial intelligence (AI) emerged in the mid-20th century, focusing on narrow AI, which aimed to solve specific tasks.
- Researchers like Alan Turing, John McCarthy, and Marvin Minsky made significant contributions to AI theory.

2. Expert Systems and Knowledge Representation (1970s-1980s):
- AI research in this era primarily focused on rule-based systems and knowledge representation.
- Expert systems, which codified human expertise in specific domains, gained popularity.

3. Neural Networks Resurgence (late 1980s-1990s):
- Neural networks, inspired by the human brain, experienced a resurgence with advances in backpropagation and the development of deep learning techniques.
   - Key advances such as the backpropagation algorithm and the invention of Convolutional Neural Networks (CNNs) laid the foundation for modern deep learning.

4. Machine Learning and Big Data (2000s-2010s):
- The availability of massive datasets and increased computing power fueled rapid progress in machine learning, including supervised and unsupervised learning algorithms.
- Breakthroughs in reinforcement learning, natural language processing (NLP), and computer vision brought AI closer to AGI.

5. Transfer Learning and Transformers (2010s):
- The development of transformer models, such as BERT and GPT (Generative Pre-trained Transformer), showcased the power of pre-training and fine-tuning for various tasks.
- These models demonstrated the ability to transfer knowledge across tasks and domains, a key step towards AGI.

6. Interdisciplinary Approach (ongoing):
- AGI research has shifted towards a more interdisciplinary approach, combining AI, cognitive science, neuroscience, and philosophy to gain a deeper understanding of human intelligence.
- Researchers aim to replicate the versatility, adaptability, and common-sense reasoning abilities of human intelligence.

7. Ethical and Safety Concerns (ongoing):
- As AGI research advances, concerns about its ethical implications and safety have grown.
- Organizations and researchers are increasingly focusing on developing ethical AI principles and ensuring AGI's responsible development.

8. OpenAI's Approach (as of my last update):
- OpenAI, among other organizations, was actively working on AGI research with a focus on ensuring its broad benefit to humanity.
- They emphasized long-term safety, avoiding a competitive race without adequate safety precautions, and promoting cooperation in AGI development.

It's important to note that the field of AGI is still a work in progress, and achieving true AGI remains a complex and uncertain endeavor. The timeline for reaching AGI is not clear, and it may take many more years, or even decades, of research and breakthroughs to get there. Additionally, the development of AGI brings with it a range of ethical, societal, and safety challenges that require careful consideration."
[{2023-11-03 retrieved}]

* McsEngl.evoluting-of-techAiGeneral,
* McsEngl.techAiGeneral'evoluting,


"Artificial Superintelligence (ASI) is defined as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest” (Bostrom 2016). ASI is supposed to surpass human intelligence in all aspects — such as creativity, general wisdom, and problem-solving. ASI is supposed to be capable of exhibiting intelligence that we have not seen in the brightest thinkers amongst us. Many thinkers are worried about ASI. At this moment, ASI belongs to science fiction.
If we ever succeed in creating an AI that is capable of generalizing, understanding causality, making a model of the world, it is highly likely that it will be closer to ASI than AGI. AI excels in numerical calculations, and there is no logical explanation as to why AI would downgrade its abilities to simulate humans. AI’s quest ultimately leads to ASI."
[{2020} Historical-Evolution-of-AI, ifrcElnc000001]

* McsEngl.ASI'(Artificial-Superintelligence),
* McsEngl.Artificial-Superintelligence,
* McsEngl.techAi.supper,
* McsEngl.techAiSupper,

evoluting of techAiSupper

"evolution of Artificial Superintelligence:
The concept of Artificial Superintelligence (ASI) refers to a hypothetical form of artificial intelligence that surpasses human intelligence in every aspect. It is a topic of speculation and debate, and its evolution, if it were to occur, is largely theoretical. Here's a speculative overview of how ASI might evolve:

1. **Narrow AI**: The first step in the evolution of ASI is the development of Narrow AI or Artificial Narrow Intelligence (ANI). Narrow AI systems are designed for specific tasks, such as speech recognition, image classification, or playing board games like chess or Go. These systems can perform these tasks at or above human levels.

2. **General AI (AGI)**: The next stage would be the development of Artificial General Intelligence (AGI). AGI would have the ability to perform a wide range of tasks at human-level intelligence, with the capacity to learn and adapt to new domains. Achieving AGI is a significant milestone and involves creating AI systems with general problem-solving abilities, akin to human intelligence.

3. **Self-improvement**: If AGI were to become a reality, one of the potential paths to ASI is through self-improvement. AGI systems could be designed to improve their own capabilities and learn at an ever-increasing rate. They could refine their algorithms, architecture, and data processing capabilities, making them increasingly intelligent.

4. **Recursive Self-Improvement**: ASI might emerge as AGI systems engage in recursive self-improvement, a process where they continually enhance their own abilities to solve problems and improve their own algorithms. This could lead to rapid and exponential growth in intelligence.

5. **Technological Singularity**: The hypothetical point known as the technological singularity could be reached if ASI continues to self-improve, leading to a runaway effect. At this stage, the AI could become vastly superintelligent and surpass human capabilities by an enormous margin.

6. **Control and Ethical Considerations**: The development of ASI raises profound ethical, safety, and control concerns. Ensuring that ASI remains aligned with human values, respects ethical principles, and does not pose existential risks would be a significant challenge. There may be efforts to establish safeguards and mechanisms to control the behavior and development of ASI.

7. **Coexistence and Integration**: Depending on the ethical and safety considerations, ASI could be integrated into various aspects of society. It may collaborate with humans in research, innovation, and decision-making, or even assist in solving complex global challenges.

8. **Unknown Evolution**: The ultimate evolution of ASI is uncertain and speculative. It could lead to outcomes that are beneficial, detrimental, or unpredictable. The rate and direction of ASI's evolution would depend on numerous factors, including the strategies implemented by developers and society's response to these developments.

It's important to note that the concept of ASI is highly speculative, and there are various opinions and theories on how it might evolve. The development of ASI, if it ever occurs, is a subject of ongoing research and debate in the field of artificial intelligence and ethics. Efforts are being made to ensure the responsible and safe development of advanced AI systems."
[{2023-11-03 retrieved}]
"evolution of Artificial Superintelligence:
Artificial superintelligence (ASI) is a hypothetical type of artificial intelligence that would be vastly more intelligent than humans. ASI is often defined as an intelligence that exceeds human intelligence in all cognitive domains, including learning, reasoning, planning, creativity, and social skills.

The evolution of ASI is difficult to predict, but it is likely to be driven by advances in a number of areas, including:
* **Hardware:** The development of more powerful and efficient computer hardware will make it possible to run more complex and sophisticated AI systems.
* **Algorithms:** The development of new AI algorithms will make it possible to create more intelligent and capable AI systems.
* **Data:** The availability of large and high-quality datasets will enable AI systems to learn more effectively.

Some experts believe that ASI could be developed within the next few decades, while others believe that it could take centuries or even millennia. However, there is a growing consensus that ASI is a real possibility, and that we need to start thinking about its potential impact on society now.

**Possible scenarios for the evolution of ASI:**
* **Gradual evolution:** ASI could evolve gradually, as AI systems become more and more intelligent over time. This scenario is most likely if there are no major breakthroughs in AI research.
* **Intelligence explosion:** ASI could develop suddenly, through an intelligence explosion. This scenario would occur if AI systems become capable of self-improvement, and they start to improve their own intelligence at an exponential rate.
* **Human-ASI symbiosis:** Humans and ASI could develop a symbiotic relationship, where they work together to achieve common goals. This scenario is most likely if ASI is developed safely and ethically.

**Potential benefits of ASI:**
* ASI could help us to solve some of the world's most pressing problems, such as climate change, poverty, and disease.
* ASI could help us to explore the universe and to make new scientific discoveries.
* ASI could help us to create a more just and equitable society.

**Potential risks of ASI:**
* ASI could pose a threat to humanity if it is not developed safely and ethically. For example, ASI could be used to develop autonomous weapons or to create surveillance systems that are used to oppress people.
* ASI could also pose a threat to humanity if it becomes so intelligent that it decides that we are a threat to it.

It is important to note that these are just a few possible scenarios for the evolution of ASI. It is impossible to predict with certainty how ASI will evolve, or what its impact on society will be. However, it is important to start thinking about these issues now, so that we can be prepared for the future."
[{2023-11-03 retrieved}]

* McsEngl.evoluting-of-techAiSupper,
* McsEngl.techAiSupper'evoluting,


"Semantic AI is used everywhere where the complexity of the underlying data is high and the details must not be ignored.
This distinguishes semantic AI from AI based on statistical methods (e.g. neural networks): statistical AI generalizes but details and traceability are lost. This is not bad for the classification of images - but not acceptable for the representation of processes or contracts."

"Historically, there have been two dominant paradigms of AI, namely symbolism and connectionism. Symbolism conjectures that symbols representing things in the world are the fundamental units of human intelligence, and that the cognitive process can be accomplished by the manipulation of the symbols, through a series of rules and logic operations upon the symbolic representations [2], [3]. Many early AI systems, from the middle 1950s to the late 1980s, were built upon symbolistic models. Symbolic methods have several virtues: they require only a few input samples, use powerful declarative languages for knowledge representation, and have conceptually straightforward internal functionality. It soon became apparent, however, that such a rule-based, top-down strategy demands substantial hand-tuning and lacks true learning. As discrete symbolic representations and hand-crafted rules are intolerant of ambiguous and noisy data, symbolic approaches typically fall short when solving real-world problems."
[{2023-03-29 retrieved}]
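The rule-based, top-down strategy described above can be illustrated with a minimal forward-chaining sketch (the facts and rules are invented for illustration):

```python
# Sketch of symbolic AI: forward chaining over hand-crafted rules.
# Facts and rules here are invented for illustration.

facts = {"socrates_is_human"}
rules = [
    # (premises, conclusion): if every premise is a known fact,
    # the conclusion is added as a new fact.
    ({"socrates_is_human"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
# resulting set contains 'socrates_is_mortal' and 'socrates_will_die'
```

The brittleness the quoted passage describes is visible here: a misspelled or noisy input fact simply fails to match any premise, and no conclusion is drawn.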

* McsEngl.semantic-AI,
* McsEngl.symbolic-AI,
* McsEngl.techAi.semantic,
* McsEngl.techAi.symbolic,

"overview of semantic-AI:
Semantic AI, also known as Semantic Artificial Intelligence, is a subfield of artificial intelligence that focuses on endowing machines with the ability to understand, interpret, and generate human-like meaning from natural language text, as well as other forms of data. It is a multidisciplinary field that combines elements of linguistics, computer science, cognitive science, and knowledge representation. The core objective of Semantic AI is to bridge the gap between human language and machine understanding to enable more intelligent and context-aware applications.

Here's an overview of the key components and concepts related to Semantic AI:

1. **Natural Language Processing (NLP)**: NLP is a fundamental component of Semantic AI that deals with the interaction between computers and human language. It includes tasks such as text analysis, machine translation, sentiment analysis, and speech recognition.

2. **Semantic Understanding**: At the heart of Semantic AI is the ability to extract meaning and context from natural language text. This involves techniques for entity recognition, relationship extraction, and sentiment analysis.

3. **Knowledge Graphs**: Knowledge graphs are data structures that represent knowledge in a structured form, often using graph-based representations. These graphs store information about entities, their attributes, and relationships. They are used to enhance semantic understanding and enable reasoning.

4. **Ontologies**: Ontologies are formal representations of knowledge that define concepts, relationships, and constraints in a specific domain. They provide a common framework for organizing and sharing knowledge, making it easier for AI systems to understand context.

5. **Semantic Web**: The Semantic Web is an extension of the World Wide Web that aims to make web content more accessible to machines. Technologies like RDF (Resource Description Framework) and OWL (Web Ontology Language) are used to encode and share data with semantic meaning.

6. **Reasoning and Inference**: In Semantic AI, systems often perform reasoning and inference to draw conclusions or make predictions based on the available knowledge and context. This can involve deductive, inductive, or abductive reasoning.

7. **Machine Learning**: Machine learning techniques, especially deep learning, are used to improve the performance of semantic understanding tasks, such as named entity recognition, sentiment analysis, and language modeling.

8. **Contextual Understanding**: Semantic AI aims to understand context and user intent, as the meaning of words and phrases can change based on context. This involves context-aware language models and dialogue systems.

9. **Applications**: Semantic AI has various applications, including chatbots, virtual assistants, recommendation systems, search engines, question-answering systems, and automated content generation. It is also valuable in scientific research, healthcare, and knowledge management.

10. **Challenges**: Challenges in Semantic AI include dealing with ambiguity in natural language, developing large-scale knowledge bases, ensuring privacy and ethical use of AI, and handling multilingual and cross-cultural variations.

Semantic AI plays a crucial role in making AI systems more human-friendly, enabling them to understand and interact with users in a more natural and meaningful way. It has the potential to revolutionize how we access and leverage information in various domains and applications."
[{2023-10-25 retrieved}]
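The knowledge graphs and RDF-style representations mentioned in the overview can be sketched with plain subject-predicate-object triples (the entities and relations below are invented for illustration):

```python
# Sketch: a toy knowledge graph as subject-predicate-object triples,
# in the spirit of RDF. Entities and relations are invented.

triples = [
    ("Ada_Lovelace", "occupation", "mathematician"),
    ("Ada_Lovelace", "collaborated_with", "Charles_Babbage"),
    ("Charles_Babbage", "designed", "Analytical_Engine"),
]

def query(triples, s=None, p=None, o=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [
        (ts, tp, to) for ts, tp, to in triples
        if (s is None or ts == s)
        and (p is None or tp == p)
        and (o is None or to == o)
    ]

# Who did Ada Lovelace collaborate with?
print(query(triples, s="Ada_Lovelace", p="collaborated_with"))
# [('Ada_Lovelace', 'collaborated_with', 'Charles_Babbage')]
```

Production systems use RDF stores and the SPARQL query language rather than Python lists, but the pattern-matching idea is the same.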

"overview of semantic-AI:
**Semantic AI** is a branch of artificial intelligence (AI) that focuses on understanding the meaning of data. It uses machine learning and knowledge graphs to extract and interpret the hidden relationships and patterns in data, enabling machines to make more informed decisions and perform more complex tasks.

Semantic AI is a relatively new field, but it has the potential to revolutionize the way we interact with computers. By understanding the meaning of our data, machines can become more intelligent and helpful partners in our work and daily lives.

Here are some of the key benefits of semantic AI:
* **Improved understanding of data:** Semantic AI can help us to better understand the complex and interconnected relationships between different data points. This can lead to new insights and discoveries, and can help us to make better decisions.
* **Increased automation:** Semantic AI can automate many tasks that are currently performed manually, such as data entry, data analysis, and customer service. This can free up our time to focus on more strategic and creative work.
* **More personalized experiences:** Semantic AI can be used to create more personalized experiences for users. For example, it can be used to recommend products, services, and content that are tailored to our individual needs and interests.
* **Enhanced security and compliance:** Semantic AI can be used to improve security and compliance by detecting anomalies and patterns in data that may indicate fraud or other malicious activity.

Semantic AI is already being used in a variety of industries and applications, including:
* **Finance:** Semantic AI is used in financial services to detect fraud, identify risk, and make investment decisions.
* **Healthcare:** Semantic AI is used in healthcare to diagnose diseases, develop personalized treatment plans, and conduct clinical trials.
* **Retail:** Semantic AI is used in retail to recommend products, personalize marketing campaigns, and optimize supply chains.
* **Manufacturing:** Semantic AI is used in manufacturing to improve quality control, predict maintenance needs, and optimize production schedules.
* **Government:** Semantic AI is used in government to improve public services, detect crime, and protect national security.

Semantic AI is a powerful new technology that has the potential to transform many aspects of our lives. As it continues to develop and mature, we can expect to see it used in even more innovative and groundbreaking ways.

Here are some examples of specific applications of semantic AI:
* **Search engines:** Semantic AI can be used to improve the accuracy and relevance of search results. For example, Google uses semantic AI to understand the meaning of search queries and to provide more comprehensive and informative results.
* **Virtual assistants:** Semantic AI can be used to make virtual assistants more intelligent and helpful. For example, Amazon Alexa uses semantic AI to understand the meaning of user requests and to provide more personalized and relevant responses.
* **Recommender systems:** Semantic AI can be used to improve the accuracy of recommender systems. For example, Netflix uses semantic AI to recommend movies and TV shows to users based on their viewing history and preferences.
* **Fraud detection:** Semantic AI can be used to detect fraud and other types of financial crime. For example, banks use semantic AI to analyze financial transactions and to identify patterns that may indicate fraudulent activity.
* **Medical diagnosis:** Semantic AI can be used to assist doctors in diagnosing diseases. For example, IBM Watson Oncology uses semantic AI to analyze medical data and to identify potential cancer diagnoses.

Semantic AI is a rapidly evolving field, and new applications are being developed all the time. As semantic AI technology continues to mature, we can expect to see it play an even greater role in our lives."
[{2023-10-25 retrieved}]


"semantic vs symbolic AI:
Semantic AI and symbolic AI are both approaches to artificial intelligence that focus on understanding the meaning of data. However, there are some key differences between the two approaches.

**Semantic AI** is a newer approach that uses machine learning techniques to learn the relationships between different concepts. Semantic AI systems are often trained on large datasets of text and code, which allows them to learn the nuances of human language and to understand the meaning of complex data.

**Symbolic AI** is an older approach that uses symbolic representations of knowledge, such as logic and rules. Symbolic AI systems are often hand-coded by experts, which gives them a deep understanding of the specific domain that they are designed to operate in.

Here is a table comparing semantic AI and symbolic AI:
| Feature | Semantic AI | Symbolic AI |
| --- | --- | --- |
| **Modeling approach** | Machine learning | Symbolic representations |
| **Typical techniques** | Artificial neural networks, natural language processing, knowledge graphs | Logic programming, production rules, expert systems |
| **Strengths** | Good at learning complex patterns from data | Good at reasoning about symbolic knowledge |
| **Weaknesses** | Can be difficult to interpret | Can be computationally expensive |

**Examples of semantic AI applications**
* Google Search
* Netflix recommendations
* Amazon product recommendations
* PayPal fraud detection
* IBM Watson Health medical diagnosis

**Examples of symbolic AI applications**
* Expert systems for medical diagnosis, financial planning, and other tasks
* Automated theorem provers for mathematical and logical reasoning
* Natural language processing systems for understanding and generating human language

**Which approach is better?**
Neither semantic AI nor symbolic AI is inherently better than the other. The best approach for a particular task will depend on the specific requirements of the task. For example, if the task is to learn a complex pattern from data, then semantic AI may be a better approach. If the task is to reason about symbolic knowledge, then symbolic AI may be a better approach.
In practice, many AI systems use a combination of semantic AI and symbolic AI techniques. This allows the system to benefit from the strengths of both approaches.
For example, a machine translation system might use a semantic neural network to learn the statistical relationships between words in different languages. The neural network would then be used to generate translated text that is both accurate and fluent.

As AI technology continues to develop, we can expect to see more and more hybrid systems that combine the strengths of semantic AI and symbolic AI."
[{2023-11-03 retrieved}]
"semantic vs symbolic AI:
Semantic AI and Symbolic AI are two different approaches within the field of artificial intelligence, each with its own principles, methodologies, and focus. Here's a comparison of the two:

1. Symbolic AI (also known as Classical AI or Good Old-Fashioned AI):
- Symbolic AI is based on the manipulation of symbols and the use of explicit, rule-based representations of knowledge and reasoning. It relies on formal logic and symbolic representations to perform tasks.
- In symbolic AI, knowledge is often represented using symbols, predicates, and rules. These representations can be used for tasks such as expert systems, formal logic, and symbolic reasoning.
- Symbolic AI operates on a "knowledge-first" paradigm, where the focus is on encoding explicit knowledge and reasoning processes.
- It is strong in rule-based systems, deductive reasoning, and knowledge representation, making it suitable for applications like expert systems and theorem proving.

2. Semantic AI:
- Semantic AI, on the other hand, is centered around the idea of understanding and representing the meaning of data and information. It often focuses on extracting and representing the semantics or context of data.
- In Semantic AI, the emphasis is on capturing the underlying meaning and relationships between entities and concepts. This often involves the use of ontologies, taxonomies, and knowledge graphs.
- Semantic AI operates on a "meaning-first" paradigm, where the goal is to understand and represent the semantic aspects of data and language.
- It is strong in natural language processing, knowledge graph construction, and context-based understanding, making it suitable for applications like search engines, question-answering systems, and semantic web technologies.

In summary, Symbolic AI is more concerned with explicit knowledge representation, rule-based reasoning, and formal logic, while Semantic AI is more concerned with understanding and representing the meaning and context of data and language. The choice between these approaches depends on the specific requirements of a given AI task or application. In some cases, a hybrid approach that combines elements of both symbolic and semantic AI may be used to leverage their respective strengths."
[{2023-11-03 retrieved}]
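The "knowledge-first" style of symbolic AI described above (symbols, predicates, and explicit rules) can be illustrated with a minimal forward-chaining sketch. The facts and rules below are invented for illustration, not taken from any real expert system:

```python
# Minimal forward-chaining inference sketch in the symbolic-AI style:
# facts are symbols, and rules map a set of premises to a conclusion.
rules = [
    ({"has_fever", "has_cough"}, "possible_flu"),      # hypothetical rules
    ({"possible_flu", "high_risk"}, "refer_to_doctor"),
]

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"has_fever", "has_cough", "high_risk"}, rules)
print(sorted(derived))
```

Because every inference step is an explicit rule application, the chain of reasoning is fully inspectable, which is exactly the transparency advantage the quoted comparison attributes to symbolic AI.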

* McsEngl.techAi.semantic-vs-symbolic,
* McsEngl.techAi.symbolic-vs-semantic,


"Statistical AI and classical AI are two different approaches to artificial intelligence (AI). Statistical AI, also known as machine learning, is a method of teaching computers to learn from data. It involves using statistical techniques to analyze large amounts of data and make predictions or decisions based on that data. This approach is often used in applications such as image recognition, natural language processing, and predictive analytics. Classical AI, on the other hand, is an approach that involves creating explicit rules and algorithms for a computer to follow. This approach is often used in applications such as expert systems and decision-making systems. It is based on symbolic reasoning and rule-based systems. While both approaches have their own advantages and limitations, they can also be combined to create more sophisticated AI systems."
[{2023-03-29 retrieved}]
"From the earliest days, AI research has tended to fall into two largely separate strands: one focused on logical representations, and one focused on statistical ones. The first strand includes approaches like logic programming, description logics, classical planning, symbolic parsing, rule induction, etc. The second includes approaches like Bayesian networks, hidden Markov models, Markov decision processes, statistical parsing, neural networks, etc. Logical approaches tend to emphasize handling complexity, and statistical ones uncertainty. Clearly, however, both of these are necessary to build intelligent agents and handle real-world applications"
[{2023-03-29 retrieved}]
"Connectionism, known by its most successful technique, deep neural networks (DNNs) [4], serves as the architecture behind the vast majority of recent successful AI systems. Inspired by the physiology of the nervous system, connectionism explains cognition by interconnected networks of simple and often uniform units. Learning happens as weight modification, in a data-driven manner; the network weights are adjusted in the direction that minimises the cumulative error from all the training samples, using techniques such as gradient back-propagation [5]. Connectionist models are fault-tolerant, as they learn sub-symbolics, i.e., continuous embedding vectors, and compare these vectorized representations instead of the literal meaning between entities and relations by discrete symbolic representations. Moreover, by learning statistical patterns from data, connectionist models enjoy the advantages of inductive learning and generalization capabilities. Like every coin has two sides, such approaches also suffer from several fundamental problems [6], [7]. First, connectionist models fall significantly short of compositional generalization, the robust ability of human cognition to correctly solve any problem that is composed of familiar parts [8]. Second, such bottom-up approaches are known to be data inefficient. Third, connectionist models are logically opaque, lacking comprehensibility. It is almost impossible to understand why decisions are made. In the absence of any kind of identifiable or verifiable train of logic, people are left with systems that are making potentially catastrophic decisions that are difficult to understand, arduous to correct, and therefore hard to be trusted. These shortcomings hinder the adoption of connectionist systems in decision-critical applications and reasoning-heavy tasks, such as medical diagnosis, autonomous driving, and mathematical reasoning, and lead to the increasing concern about contemporary AI techniques."
[{2023-03-29 retrieved}]

* McsEngl.connectionist-AI,
* McsEngl.statistical-AI,
* McsEngl.techAi.connectionist,
* McsEngl.techAi.statistical,


"connectionist vs statistical AI:
Connectionist AI and Statistical AI are two different approaches within the field of artificial intelligence, each with its own characteristics and methodologies. Here's a brief comparison of the two:

1. Connectionist AI:
- Also known as neural network-based AI or artificial neural networks (ANNs).
- It is inspired by the structure and functioning of the human brain, consisting of interconnected artificial neurons (nodes) organized in layers.
- Connectionist models learn by adjusting the strengths of connections (synaptic weights) between neurons through a process called backpropagation, which involves error minimization during training.
- Often used for tasks like image recognition, natural language processing, and pattern recognition.
- Deep learning, a subset of connectionist AI, has gained significant popularity and achieved impressive results in various AI applications.

2. Statistical AI:
- Focuses on probabilistic and statistical methods for modeling and solving AI problems.
- It involves the use of statistical techniques such as regression analysis, decision trees, Bayesian networks, and hidden Markov models to make predictions or decisions based on data.
- Statistical AI often relies on explicit modeling of data distributions and making inferences based on these models.
- It is commonly used for tasks like recommendation systems, anomaly detection, and predictive modeling.
- It does not necessarily emulate the human brain's structure or function, but it leverages statistical patterns in data to make predictions and decisions.

In summary, connectionist AI, particularly deep learning, is centered around artificial neural networks and their ability to learn complex patterns and representations from data, often without explicitly defined features. Statistical AI, on the other hand, focuses on the use of statistical methods and models to make predictions or decisions based on data and often requires a more explicit representation of features and relationships in the data. Both approaches have their strengths and weaknesses and can be applied in various AI applications, sometimes even in combination."
[{2023-11-03 retrieved}]
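The weight-adjustment loop described above (connections with weights, adjusted to minimise error during training) can be sketched with a single logistic neuron trained by gradient descent. The data, learning rate, and epoch count below are illustrative choices, not part of the quoted text:

```python
import math

# A single logistic neuron trained by gradient descent (connectionist sketch).
# It learns the OR function from four examples.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.5

def forward(x):
    z = w[0] * x[0] + w[1] * x[1] + b      # weighted sum of inputs
    return 1 / (1 + math.exp(-z))          # output signal (sigmoid)

for _ in range(2000):                      # training loop
    for x, y in data:
        err = forward(x) - y               # gradient of cross-entropy loss wrt z
        w[0] -= lr * err * x[0]            # adjust connection weights in the
        w[1] -= lr * err * x[1]            # direction that reduces the error
        b -= lr * err

print([round(forward(x)) for x, _ in data])  # → [0, 1, 1, 1]
```

Note that after training the learned behaviour lives entirely in the numeric weights `w` and bias `b`; unlike a symbolic rule base, nothing in them is directly readable as a rule, which is the interpretability weakness both quotes mention.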
"connectionist vs statistical AI:
**Connectionist AI** and **statistical AI** are two broad categories of artificial intelligence (AI) techniques. Connectionist AI is inspired by the human brain, while statistical AI is based on mathematical principles such as probability and statistics.

**Connectionist AI** models AI processes by connecting artificial neurons in a network. Each neuron receives inputs from other neurons and produces an output signal. The connections between neurons have weights, which are adjusted during training to improve the network's performance. Connectionist AI techniques are often used for tasks such as image recognition, natural language processing, and machine translation.

**Statistical AI** uses mathematical methods to extract patterns from data. Statistical AI techniques are often used for tasks such as classification, regression, and clustering.

Here is a table comparing connectionist AI and statistical AI:
| Feature | Connectionist AI | Statistical AI |
| --- | --- | --- |
| **Modeling approach** | Inspired by the human brain | Based on mathematical principles |
| **Typical techniques** | Artificial neural networks | Linear regression, logistic regression, decision trees, support vector machines |
| **Strengths** | Good at learning complex patterns from data | Good at making predictions on new data |
| **Weaknesses** | Can be difficult to interpret | Can be computationally expensive |

**Examples of connectionist AI applications**
* Image recognition: Neural networks are used to classify images of objects, such as cats, dogs, and cars.
* Natural language processing: Neural networks are used to understand and generate text, such as translating languages and writing different kinds of creative content.
* Machine translation: Neural networks are used to translate text from one language to another.

**Examples of statistical AI applications**
* Fraud detection: Statistical methods are used to identify fraudulent transactions.
* Customer segmentation: Statistical methods are used to divide customers into different groups based on their characteristics.
* Medical diagnosis: Statistical methods are used to predict the likelihood of a patient having a particular disease.

**Which approach is better?**
Neither connectionist AI nor statistical AI is inherently better than the other. The best approach for a particular task will depend on the specific requirements of the task. For example, if the task is to learn a complex pattern from data, then connectionist AI may be a better approach. If the task is to make predictions on new data, then statistical AI may be a better approach.
In practice, many AI systems use a combination of connectionist AI and statistical AI techniques. This allows the system to benefit from the strengths of both approaches.
For example, a machine translation system might use a connectionist neural network to learn the statistical relationships between words in different languages. The neural network would then be used to generate translated text that is both accurate and fluent.

As AI technology continues to develop, we can expect to see more and more hybrid systems that combine the strengths of connectionist AI and statistical AI."
[{2023-11-03 retrieved}]

* McsEngl.techAi.connectionist-vs-statistical,
* McsEngl.techAi.statistical-vs-connectionist,


"By the 1950s, two visions for how to achieve machine intelligence emerged. One vision, known as Symbolic AI or GOFAI, was to use computers to create a symbolic representation of the world and systems that could reason about the world. Proponents included Allen Newell, Herbert A. Simon, and Marvin Minsky. Closely associated with this approach was the "heuristic search" approach, which likened intelligence to a problem of exploring a space of possibilities for answers.
The second vision, known as the connectionist approach, sought to achieve intelligence through learning. Proponents of this approach, most prominently Frank Rosenblatt, sought to connect perceptrons in ways inspired by the connections of neurons.[21] James Manyika and others have compared the two approaches to the mind (Symbolic AI) and the brain (connectionist). Manyika argues that symbolic approaches dominated the push for artificial intelligence in this period, due in part to its connection to intellectual traditions of Descartes, Boole, Gottlob Frege, Bertrand Russell, and others. Connectionist approaches based on cybernetics or artificial neural networks were pushed to the background but have gained new prominence in recent decades.[22]"
[{2023-04-10 retrieved}]

* McsEngl.NeSy-neural-symbolic-computing!⇒techAiSas,
* McsEngl.connectionist-and-symbolic-techAi!⇒techAiSas,
* McsEngl.neuro-symbolic-techAi!⇒techAiSas,
* McsEngl.semantic-and-statistical-techAi!⇒techAiSas,
* McsEngl.symbolic-and-connectionist-techAi!⇒techAiSas,
* McsEngl.techAiSas,
* McsEngl.techAi.semantic-and-statistical!⇒techAiSas,

techAi.conceptual (link)


"Friendly artificial intelligence (also friendly AI or FAI) refers to hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity or at least align with human interests or contribute to fostering the improvement of the human species. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behavior and ensuring it is adequately constrained."
[{2023-04-10 retrieved}]

* McsEngl.FAI-friendly-AI,
* McsEngl.friendly-AI,
* McsEngl.techAi.friendly,


"Generative AI refers to a type of artificial intelligence (AI) that is capable of generating new and original content, such as images, music, videos, and text. This is achieved through the use of deep learning algorithms and neural networks, which are trained on large datasets to learn the patterns and structure of the input data.
One popular approach to generative AI is through the use of generative adversarial networks (GANs), which consist of two neural networks that work together to generate new content. One network, called the generator, creates new samples that are similar to the training data, while the other network, called the discriminator, attempts to distinguish between the generated samples and the real ones.
Generative AI has many applications, including in art, music, and fashion, as well as in fields such as natural language processing and computer vision. However, it also raises ethical concerns, such as the potential for misuse or the creation of fake content. Therefore, it is important to approach the development and use of generative AI with caution and responsibility."
[{2023-05-02 retrieved}]

* generates new data that is similar to data it was trained on,
* understands distribution of data and how likely a given example is,
* predict next word in a sentence,
[{2023-08-01 retrieved}]
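The "predict the next word" property listed above can be sketched with a toy bigram generator: it counts which words follow which in a training text, then samples plausible continuations. The training sentence and sampling scheme are invented for illustration and bear no resemblance to the deep-learning models the quote describes:

```python
import random
from collections import defaultdict

# Toy generative sketch: learn which words follow which, then sample.
corpus = "the cat sat on the mat and the cat slept".split()  # made-up data

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)             # record observed continuations

def generate(start, length, seed=0):
    """Sample up to `length` words, each a plausible successor of the last."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        candidates = follows.get(words[-1])
        if not candidates:                # dead end: no observed successor
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the", 5))                 # prints a short bigram-plausible phrase
```

Real generative models replace the raw counts with learned neural distributions over a huge vocabulary, but the loop — condition on context, sample the next token, repeat — is the same.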

* McsEngl.GenAI!⇒techAiGenerative,
* McsEngl.generative-AI/jénrativ/,
* McsEngl.techAi.generative,
* McsEngl.techAiGenerative,
* McsEngl.techAiGenerative:techDl,
* McsEngl.techDl.generative-AI,
====== langoGreek:
* McsElln.γενετική-τεχνητή-νοημοσύνης!η!=techAiGenerative,
* McsElln.παραγωγική-τεχνητή-νοημοσύνης!η!=techAiGenerative,

* deep-learning,
** machine-learning,

* generative-language-model,
* generative-image-model,
* large-language-model,
* text-to-text,
* text-to-audio,
* text-to-video,
* text-to-task,

evoluting of techAiGenerative

"evolution of generative-AI:
The evolution of generative AI can be traced back to the 1960s, with the development of early chatbots like ELIZA. However, it wasn't until the 2010s that generative AI began to make significant progress, thanks to advances in deep learning and the availability of large datasets.

**Key milestones in the evolution of generative AI:**
* **1965:** Joseph Weizenbaum develops ELIZA, a chatbot that can simulate conversation with a human user.
* **1986:** Paul Smolensky introduces the Restricted Boltzmann Machine (RBM), a type of neural network that is well-suited for generative tasks.
* **2006:** Geoffrey Hinton and his co-authors publish "A Fast Learning Algorithm for Deep Belief Nets," which re-introduces RBMs in the context of deep learning.
* **2014:** Ian Goodfellow and his colleagues introduce Generative Adversarial Networks (GANs), a new type of neural network that can generate highly realistic images, videos, and audio.
* **2019:** Alec Radford and his colleagues at OpenAI release GPT-2, a large language model that can generate text, translate languages, and write different kinds of creative content.
* **2022:** Google AI develops PaLM, a 540-billion-parameter language model, one of the most powerful language models of its time.
* **2023:** Google AI develops Bard, a large language model that is designed to be informative and comprehensive.

In recent years, generative AI has made significant progress in a wide range of tasks, including:
* **Image generation:** Generative AI models can now generate realistic images of people, places, and objects that are indistinguishable from real photos.
* **Text generation:** Generative AI models can now generate text of all kinds, including news articles, poems, code, and even scripts for movies and TV shows.
* **Music generation:** Generative AI models can now generate music in a variety of genres, from classical to pop to jazz.
* **Video generation:** Generative AI models can now generate realistic videos, including deepfakes of real people.

Generative AI is still a relatively new field, but it is rapidly evolving and has the potential to revolutionize many industries and aspects of our lives. For example, generative AI could be used to create new forms of art and entertainment, develop new medical treatments, and even design new products and services.

Here are some specific examples of how generative AI is being used today:
* **In healthcare:** Generative AI is being used to develop new drugs, diagnose diseases, and create personalized treatment plans for patients.
* **In finance:** Generative AI is being used to detect fraud, predict market trends, and develop new financial products.
* **In manufacturing:** Generative AI is being used to design new products, optimize production processes, and predict machine failures.
* **In retail:** Generative AI is being used to personalize recommendations for customers, develop new marketing campaigns, and predict demand for products.

Overall, the evolution of generative AI has been remarkable, and it is clear that this field has the potential to have a major impact on our world in the years to come."
[{2023-10-25 retrieved}]

"evolution of generative-AI:
The evolution of generative AI has been a fascinating journey, marked by significant advancements and milestones. Here's an overview of its evolution up to my last knowledge update in September 2021:

1. Early Neural Networks:
- The roots of generative AI can be traced back to the development of artificial neural networks in the 1940s and 1950s. These early models served as the foundation for more advanced generative models.

2. Variational Autoencoders (VAEs):
- In the 2010s, Variational Autoencoders (VAEs) emerged as a significant development in generative AI. VAEs are probabilistic models capable of generating new data by learning latent representations of the input data.

3. Generative Adversarial Networks (GANs):
- GANs, introduced by Ian Goodfellow and his colleagues in 2014, revolutionized generative AI. GANs consist of two neural networks, a generator and a discriminator, engaged in a game where the generator tries to produce data that is indistinguishable from real data. GANs have been used to create realistic images, videos, and even text.

4. Transformer Models:
- The introduction of the Transformer architecture, notably with the release of the "Attention Is All You Need" paper by Vaswani et al. in 2017, had a significant impact on generative AI. Transformers are highly efficient in modeling sequences and have been the foundation for many modern generative models.

5. OpenAI's GPT (Generative Pre-trained Transformer) Series:
- OpenAI's GPT series, including GPT-2 and GPT-3, pushed the boundaries of generative AI. These models were pre-trained on vast amounts of text data and could generate coherent and contextually relevant text, often indistinguishable from human-written text.

6. Image Generation:
- Beyond text, generative models like DALL-E and CLIP from OpenAI demonstrated the ability to generate and understand images based on textual descriptions. This marked a new level of versatility for generative AI.

7. Reinforcement Learning:
- Some generative AI models have been combined with reinforcement learning, allowing them to perform tasks in a goal-directed manner. This combination has shown promise in robotics, game playing, and more.

8. Ethical and Societal Concerns:
- The evolution of generative AI has also brought ethical and societal concerns related to deepfakes, misinformation, and privacy. Researchers and organizations have been working on solutions and guidelines to address these issues.

9. Continued Research and Development:
- The field of generative AI is still rapidly evolving. Researchers continue to explore new architectures, training techniques, and applications, expanding the capabilities and potential use cases of generative models.

Please note that since my last update in September 2021, there may have been further developments and breakthroughs in generative AI. It's a dynamic field with ongoing research and innovations."
[{2023-10-25 retrieved}]

* McsEngl.evoluting-of-techAiGenerative,
* McsEngl.techAiGenerative'evoluting,


"Natural Language Processing (NLP) comprises a set of techniques to work with documents written in a natural language to achieve many different objectives. They range from simple ones that any developer can implement, to extremely complex ones that require a lot of expertise."

"Natural Language Processing (NLP) is a subfield of linguistics, computer science, and artificial intelligence that deals with the interactions between computers and human language, particularly how to program computers to process and analyze large amounts of natural language data"
[{2023-11-20 retrieved}]

* McsEngl.NLP!=natural-language-processing!⇒techNlp,
* McsEngl.techAi.natural-language-processing!⇒techNlp,
* McsEngl.techInfo.006-natural-language-processing!⇒techNlp,
* McsEngl.techInfo.natural-language-processing!⇒techNlp,
* McsEngl.techNlp!=natural-language-processing--tech,

language-model of techNlp

"A language model is a probability distribution over sequences of words.[1] Given any sequence of words of length m, a language model assigns a probability P(w1,...,wm) to the whole sequence. Language models generate probabilities by training on text corpora in one or many languages. Given that languages can be used to express an infinite variety of valid sentences (the property of digital infinity), language modeling faces the problem of assigning non-zero probabilities to linguistically valid sequences that may never be encountered in the training data. Several modelling approaches have been designed to surmount this problem, such as applying the Markov assumption or using neural architectures such as recurrent neural networks or transformers.
Language models are useful for a variety of problems in computational linguistics; from initial applications in speech recognition[2] to ensure nonsensical (i.e. low-probability) word sequences are not predicted, to wider use in machine translation[3] (e.g. scoring candidate translations), natural language generation (generating more human-like text), part-of-speech tagging, parsing,[3] optical character recognition, handwriting recognition,[4] grammar induction,[5] information retrieval,[6][7] and other applications."
[{2023-03-31 retrieved}]
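The definition above — a probability P(w1,...,wm) over word sequences, with the Markov assumption and smoothing used to keep valid-but-unseen sequences at non-zero probability — can be sketched as a bigram model with add-one smoothing. The training corpus is invented for illustration:

```python
from collections import Counter

# Bigram language model with add-one (Laplace) smoothing: a simple instance
# of the Markov assumption mentioned above.
corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = set(corpus)
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def p_next(prev, word):
    # Add-one smoothing keeps unseen-but-valid bigrams at non-zero probability.
    return (bigrams[(prev, word)] + 1) / (unigrams[prev] + len(vocab))

def p_sequence(words):
    # P(w1..wm) ≈ product of P(wi | w(i-1))  (first-order Markov assumption)
    p = 1.0
    for prev, word in zip(words, words[1:]):
        p *= p_next(prev, word)
    return p

seen = p_sequence("the cat sat".split())
unseen = p_sequence("the dog slept".split())  # "dog slept" never observed
print(seen > unseen > 0)  # → True
```

A sequence containing an unobserved bigram still receives a small non-zero probability, which is exactly the smoothing role the quoted text assigns to the Markov assumption and related techniques; neural approaches achieve the same end by generalizing over embeddings instead of counts.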

* McsEngl.language-model-techNlp!⇒techNlplm,
* McsEngl.techNlp'language-model!⇒techNlplm,
* McsEngl.techNlplm!=natural-language-processing--language-model---tech,
* McsEngl.statistical-language-model!⇒techNlplm,

evaluation of techNlplm

">language models benchmarks:
Language models are evaluated using a variety of benchmarks to measure their capabilities, performance, and understanding across different tasks and domains. These benchmarks often consist of datasets and evaluation metrics designed to test various aspects of language understanding and generation. Some popular benchmarks include:
1. **GLUE (General Language Understanding Evaluation)**: A collection of datasets used to evaluate model performance across different tasks such as sentiment analysis, question answering, and textual entailment. GLUE aims to measure the ability of models to understand text.
2. **SuperGLUE**: An extension of GLUE, SuperGLUE includes more challenging tasks and datasets designed to push the limits of language models further in areas like more complex question answering, word sense disambiguation, and reading comprehension.
3. **SQuAD (Stanford Question Answering Dataset)**: A reading comprehension dataset consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text from the corresponding reading passage.
4. **Commonsense Reasoning Benchmarks**: Such as the CommonsenseQA and Winograd Schema Challenge (WSC), these are designed to evaluate a model's ability to perform reasoning that requires commonsense knowledge about the world.
5. **LAMBADA (LAnguage Modeling Broadened to Account for Discourse Aspects)**: This benchmark tests the ability of models to predict the final word in a passage of text, focusing specifically on cases where understanding the broader context is necessary.
6. **Zero-shot and Few-shot Learning Benchmarks**: These evaluate a model's ability to perform tasks without specific task training, relying only on a small number of examples (few-shot) or even no examples (zero-shot). This is particularly relevant for models like GPT-3 and its successors, which are designed to generalize across tasks without task-specific training.
7. **Hugging Face's Datasets Library**: While not a benchmark itself, Hugging Face provides a large collection of datasets for various natural language processing tasks, which can be used to test and benchmark language models on a wide range of tasks.
Each of these benchmarks tests different facets of language understanding and generation, including comprehension, reasoning, and the ability to interact naturally with humans. The choice of benchmark depends on the specific capabilities one wishes to measure in a language model."
[{2024-02-12 retrieved}]

* McsEngl.techNlplm'evaluation,

benchmark.MMLU of techNlplm

">MMLU benchmark:
The MMLU (Massive Multi-task Language Understanding) benchmark is a tool designed to assess how well language models comprehend and produce language across a wide range of tasks. It accomplishes this by evaluating models' performance in zero-shot and few-shot settings, pushing them to perform similarly to how humans do when presented with new information or asked to complete unfamiliar tasks.
Here's a breakdown of the MMLU benchmark:
* **Goals:**
* Assess the knowledge acquired during language model pretraining.
* Evaluate models in challenging, real-world-like scenarios (zero-shot and few-shot settings).
* **Tasks:**
* Covers 57 diverse subjects across various domains, including STEM, humanities, social sciences, and more.
* Difficulty ranges from elementary to advanced professional levels.
* Tests both factual knowledge and problem-solving abilities.
* **Evaluation:**
* Models are given prompts, questions, or instructions related to a specific subject.
* Their responses are assessed based on accuracy, coherence, and overall quality.
* A final MMLU score is calculated by averaging the scores across all tasks.
**Strengths of the MMLU benchmark:**
* **Comprehensive:** Covers a broad range of subjects and difficulty levels.
* **Challenging:** Encourages models to go beyond simply memorizing text and apply their understanding to new situations.
* **Zero-shot and few-shot settings:** More closely resemble how humans learn and perform tasks.
**Limitations of the MMLU benchmark:**
* **Newer benchmark:** Still under development, and its effectiveness compared to other benchmarks is being debated.
* **Focus on factual knowledge:** May not adequately assess other important language skills like creativity or humor.
* **Potential biases:** The benchmark tasks and data might reflect biases present in the real world.
Overall, the MMLU benchmark is a valuable tool for evaluating and comparing the capabilities of language models. However, it's important to consider its strengths and limitations when interpreting its results."
[{2024-02-12 retrieved}]
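The scoring procedure described above (per-subject multiple-choice accuracy, averaged into one final score) can be sketched in a few lines. This is a minimal toy illustration, not the official MMLU harness; the `model` callable and the two tiny subjects are hypothetical stand-ins.

```python
# Minimal sketch of MMLU-style scoring: per-subject multiple-choice
# accuracy, macro-averaged into a single score.

def format_prompt(question, choices):
    # MMLU items are 4-way multiple choice, answered with a letter.
    letters = "ABCD"
    lines = [question] + [f"{letters[i]}. {c}" for i, c in enumerate(choices)]
    return "\n".join(lines) + "\nAnswer:"

def mmlu_score(model, tasks):
    # tasks: {subject: [(question, choices, answer_letter), ...]}
    per_task = {}
    for subject, items in tasks.items():
        correct = sum(model(format_prompt(q, c)) == a for q, c, a in items)
        per_task[subject] = correct / len(items)
    # The final score averages accuracy across all subjects.
    return sum(per_task.values()) / len(per_task), per_task

# Toy demonstration with a "model" that always answers "A".
tasks = {
    "elementary_math": [("2+2=?", ["4", "5", "6", "7"], "A")],
    "geography": [("Capital of France?", ["Berlin", "Paris", "Rome", "Oslo"], "B")],
}
score, per_task = mmlu_score(lambda prompt: "A", tasks)
print(score)  # 0.5: right on the math item, wrong on the geography item
```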

* McsEngl.MMLU-benchmark,
* McsEngl.techNlplm'MMLU-benchmark,

evoluting of techNlplm

">timeline of language models:
The timeline of language models reflects the rapid evolution of natural language processing (NLP) and artificial intelligence (AI) technologies. Here's an overview highlighting key milestones:
### Pre-2010: Early Foundations
- **1950s**: Early work in machine translation and computational linguistics.
- **1956**: The Dartmouth Conference marks the birth of AI as a field.
- **1960s-1980s**: Development of rule-based and statistical approaches to NLP.
### 2010-2015: The Rise of Deep Learning
- **2013**: Introduction of word embeddings (word2vec) by Mikolov et al., enabling words to be represented as vectors in a continuous vector space.
- **2014**: Sequence-to-sequence models (Sutskever et al.) and attention mechanisms (Bahdanau et al.) begin to improve machine translation and other NLP tasks.
### 2016-2018: Emergence of Transformer Models
- **2017**: Google’s Transformer model (Vaswani et al.) introduces a novel architecture that relies solely on attention mechanisms, leading to significant improvements in NLP tasks.
- **2018**: OpenAI introduces GPT (Generative Pre-trained Transformer), a large-scale language model that can generate coherent and diverse text.
### 2019: Breakthroughs in Language Model Size and Capability
- **BERT (Bidirectional Encoder Representations from Transformers)** by Google revolutionizes understanding of context in language, significantly improving performance across numerous NLP tasks.
- **GPT-2** is released by OpenAI, showcasing the ability to generate highly convincing text and sparking discussions on the ethics of AI-generated content.
### 2020-2023: Scaling and Specialization
- **GPT-3**: OpenAI's GPT-3, launched in 2020, marks a significant leap in language model capabilities, offering unprecedented versatility across a wide range of tasks with 175 billion parameters.
- **T5 (Text-to-Text Transfer Transformer)** and **ELECTRA** models introduce new paradigms for training and understanding language.
- **Specialized models**: Emergence of domain-specific models and models optimized for particular languages or tasks.
### 2023 and Beyond: Ethical AI, Efficiency, and Beyond
- **Increased focus on ethical AI**: Efforts to make AI more responsible, transparent, and fair gain momentum.
- **Efficiency and accessibility**: New models aim to reduce computational requirements, making advanced NLP capabilities more accessible to researchers and organizations with limited resources.
- **Multimodal models**: Expansion beyond text to incorporate images, audio, and other data types, leading to more integrated and versatile AI systems.
This timeline underscores the rapid pace of innovation in AI and NLP, driven by advancements in algorithms, computational power, and data availability. As the field continues to evolve, we can expect further breakthroughs that expand the possibilities of human-computer interaction, automated content creation, and language understanding."
[{2024-02-12 retrieved}]

"Since 2018, large language models (LLMs) consisting of deep neural networks with billions of trainable parameters, trained on massive datasets of unlabelled text, have demonstrated impressive results on a wide variety of natural language processing tasks. This development has led to a shift in research focus toward the use of general-purpose LLMs."
[{2023-04-09 retrieved}]
* McsEngl.{2018}-techAi-LLM,
* McsEngl.{science'2018}-techAi-LLM,

* McsEngl.evoluting-of-techNlplm,
* McsEngl.techNlplm'evoluting,


* neural-network--language-model,
** large-neural-language-model,
** recurrent-neural-language-model,
** feedforward-neural-language-model,
** transformer-neural-language-model,

* statistical--language-model:
** n-gram--language-model,
** Markov-(n-gram)-language-model,

">types of language-models:
Language models can be broadly categorized into two main types:
**1. Statistical Language Models:**
These models rely on statistical analysis of large amounts of text data to predict the next word in a sequence. They don't have an inherent understanding of language, but they can learn patterns and probabilities from the data they are trained on.
* **n-gram models:** These are the simplest type of statistical language model. They predict the next word based on the probability of it appearing after the previous n words. For example, a bigram model would predict the next word based on the previous two words.
* **Hidden Markov Models (HMMs):** These models are more complex than n-grams and can capture longer-range dependencies between words. They are often used for speech recognition and other tasks where the order of words is important.
**2. Neural Network-based Language Models:**
These models are inspired by the structure and function of the human brain. They use artificial neural networks to learn complex representations of language from text data. Neural network-based language models are more powerful than statistical models and can generate more creative and human-like text.
* **Recurrent Neural Networks (RNNs):** These models can process sequences of words one at a time, taking into account the context of the previous words. They are often used for machine translation and text summarization.
* **Long Short-Term Memory (LSTM) networks:** These are a type of RNN that can learn long-range dependencies between words. They are better at capturing the nuances of language than standard RNNs and are often used for tasks like sentiment analysis and question answering.
* **Transformers:** These are a newer type of neural network architecture that is specifically designed for natural language processing tasks. They can process entire sequences of words at once, which makes them faster and more efficient than RNNs. Transformers are the foundation of many of the most powerful language models today, such as GPT-3 and Jurassic-1 Jumbo.
The choice of which type of language model to use depends on the specific task at hand. Statistical models are generally simpler and faster to train, but they may not be as accurate as neural network-based models. Neural network-based models are more powerful and can generate more creative text, but they can be more complex and expensive to train."
[{2024-02-12 retrieved}]
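The n-gram approach described above can be made concrete with a minimal bigram model: count how often each word follows each other word, then turn the counts into conditional probabilities. The tiny corpus below is an illustrative stand-in.

```python
from collections import defaultdict, Counter

# Minimal bigram language model: the next word is predicted from counts
# of (previous word, next word) pairs in a training corpus.
corpus = "the cat sat on the mat the cat ate".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def p_next(prev, word):
    # P(word | prev) estimated by relative frequency.
    total = sum(following[prev].values())
    return following[prev][word] / total if total else 0.0

def predict(prev):
    # Most probable continuation of `prev`.
    return following[prev].most_common(1)[0][0]

print(p_next("the", "cat"))  # 2 of the 3 occurrences of "the" are followed by "cat"
print(predict("the"))        # "cat"
```

The data-sparsity problem mentioned in the text shows up immediately: any pair never seen in the corpus gets probability zero, which is why real n-gram systems add smoothing.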

">types of language-models:
Language models are computational algorithms that are trained to predict the next word in a sequence of words, based on the words that precede it. They play a crucial role in natural language processing (NLP) tasks such as text generation, speech recognition, machine translation, and more. There are various types of language models, each with its own architecture and application. Here are some of the most common types:
### 1. **Statistical Language Models**
These models use traditional statistical methods to predict the probability of a sequence of words, based on the frequency of those word sequences in a training corpus. They include:
- **N-gram Models:** Predicts the next word in a sequence based on the previous N-1 words. Despite their simplicity, n-grams are limited by data sparsity and the curse of dimensionality.
- **Hidden Markov Models (HMMs):** Used for tasks like part-of-speech tagging and speech recognition, they model language as a series of state transitions.
### 2. **Neural Language Models**
These models leverage deep learning techniques to learn word representations and the patterns of language. They have significantly improved the performance of various NLP tasks and include:
- **Feedforward Neural Network Models:** Use a feedforward neural network to predict the next word, overcoming some limitations of n-gram models by learning dense word embeddings.
- **Recurrent Neural Network (RNN) Models:** Designed to handle sequences of variable length through their recurrent connections, making them suitable for tasks involving sequential data like language.
- **Long Short-Term Memory (LSTM) Models:** An advanced RNN architecture that can capture long-range dependencies in text sequences, addressing the vanishing gradient problem of standard RNNs.
- **Gated Recurrent Unit (GRU) Models:** Similar to LSTMs but with a simpler structure, they are efficient at modeling sequences and capturing long-term dependencies.
### 3. **Transformer-Based Models**
Introduced by the paper "Attention is All You Need" in 2017, transformer models have become the foundation for most state-of-the-art language models due to their effectiveness in handling long-range dependencies and parallelization capabilities. They include:
- **BERT (Bidirectional Encoder Representations from Transformers):** Trains on unlabeled data over different pre-training tasks and has been influential in many NLP applications.
- **GPT (Generative Pre-trained Transformer):** Uses a left-to-right architecture and is designed for a wide range of tasks without task-specific model architecture modifications.
- **T5 (Text-to-Text Transfer Transformer):** Frames all NLP tasks as a text-to-text problem, using a unified approach for both input and output formats.
### 4. **Convolutional Neural Network (CNN) Models for Language**
Though less common than for image processing, CNNs can also be applied to NLP tasks, using convolutional layers to process text data in a way that captures local patterns within sentences.
### 5. **Hybrid Models**
These models combine features of different architectures, such as CNNs and RNNs, to leverage the strengths of each in processing language data.
### 6. **Few-shot and Zero-shot Learning Models**
These are advanced models, often built upon transformer architectures, designed to perform tasks with very few or even no task-specific examples. GPT-3 is a notable example, demonstrating strong performance in tasks with minimal instruction.
Each type of language model has its strengths and weaknesses, and the choice of model often depends on the specific requirements of the task, including the nature of the input data, the complexity of the task, and computational resources."
[{2024-02-12 retrieved}]
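The scaled dot-product attention at the core of the transformer models described above can be sketched in a few lines of NumPy. This is a toy illustration of the mechanism only (no multiple heads, masking, or learned projections), with arbitrary example shapes.

```python
import numpy as np

# Scaled dot-product attention: each output position is a weighted
# average of the value vectors, with weights given by a softmax over
# query-key similarities.
def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ V                               # weighted average of values

# Three token positions, embedding dimension 4; self-attention derives
# Q, K, V from the same input sequence.
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(3, 4))
out = attention(Q, K, V)
print(out.shape)  # (3, 4): one output vector per position
```

Because every position attends to every other position in one matrix product, the whole sequence is processed at once, which is the parallelization advantage over RNNs mentioned in the text.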

* McsEngl.techNlplm.specific,


"Neural language models (or continuous space language models) use continuous representations or embeddings of words to make their predictions.[10] These models make use of neural networks.
Continuous space embeddings help to alleviate the curse of dimensionality in language modeling: as language models are trained on larger and larger texts, the number of unique words (the vocabulary) increases.[a] The number of possible sequences of words increases exponentially with the size of the vocabulary, causing a data sparsity problem because of the exponentially many sequences. Thus, statistics are needed to properly estimate probabilities. Neural networks avoid this problem by representing words in a distributed way, as non-linear combinations of weights in a neural net.[11] An alternate description is that a neural net approximates the language function. The neural net architecture might be feed-forward or recurrent, and while the former is simpler the latter is more common."
[{2023-04-09 retrieved}]
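The distributed representation described above can be illustrated with the forward pass of a tiny (untrained) feed-forward neural language model: each word maps to a dense embedding row, and the next-word distribution is a softmax over the vocabulary. All sizes and weights here are illustrative placeholders.

```python
import numpy as np

# Sketch of a feed-forward neural language model forward pass.
vocab = ["the", "cat", "sat", "on", "mat"]
V, d, h = len(vocab), 8, 16                  # vocab size, embedding dim, hidden dim
rng = np.random.default_rng(0)
E = rng.normal(scale=0.1, size=(V, d))       # embedding table: one row per word
W1 = rng.normal(scale=0.1, size=(2 * d, h))  # 2-word context -> hidden layer
W2 = rng.normal(scale=0.1, size=(h, V))      # hidden layer -> vocabulary logits

def next_word_probs(w1, w2):
    # Look up dense embeddings for the two context words.
    x = np.concatenate([E[vocab.index(w1)], E[vocab.index(w2)]])
    hidden = np.tanh(x @ W1)
    logits = hidden @ W2
    p = np.exp(logits - logits.max())        # softmax over the vocabulary
    return p / p.sum()

p = next_word_probs("the", "cat")
print(p.sum())  # a proper distribution: probabilities sum to 1
```

Because similar words can share nearby embedding vectors, the model can assign sensible probability to word sequences it never saw verbatim, which is how it sidesteps the sparsity problem of count-based models.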

* McsEngl.NLM-neural-language-model!⇒techNlm,
* McsEngl.neural-language-model!⇒techNlm,
* McsEngl.techNlm,
* McsEngl.techNlplm.neural-language-model!⇒techNlm,

techNlm.large-language-model (link)

relation-to-techAi of techNlp

· "AI (Artificial Intelligence) and NLP (Natural Language Processing) are related but distinct fields in the realm of computer science and technology. Let's explore the differences between the two:
AI (Artificial Intelligence):
AI is a broad field that focuses on creating machines or systems that can perform tasks that typically require human intelligence. The goal of AI is to develop algorithms and models that enable computers to exhibit characteristics like learning, reasoning, problem-solving, perception, and decision-making. AI can be applied to various domains, including computer vision, robotics, speech recognition, game playing, and more.
AI encompasses several subfields, such as machine learning, deep learning, expert systems, knowledge representation, and natural language processing. NLP is one of these subfields that deals specifically with enabling computers to understand, interpret, and generate human language.
NLP (Natural Language Processing):
NLP is a subset of AI that focuses on the interaction between computers and human language. Its primary aim is to enable computers to understand, interpret, and generate natural language in a way that is meaningful to humans. NLP seeks to bridge the gap between human language and computer language, enabling machines to comprehend and respond to textual or spoken language.
Key tasks in NLP include:
* Natural Language Understanding (NLU): Extracting meaning and insights from human language.
* Natural Language Generation (NLG): Creating human-readable text or speech from structured data or information.
* Sentiment Analysis: Determining the sentiment or emotion behind a piece of text.
* Named Entity Recognition (NER): Identifying and classifying entities like names of people, places, and organizations in text.
* Machine Translation: Automatically translating text from one language to another.
* Speech Recognition: Converting spoken language into written text.
AI and NLP often intersect, as NLP techniques frequently rely on AI methods such as machine learning and deep learning to achieve their goals. For example, many state-of-the-art NLP models, such as transformers, are built using deep learning techniques like attention mechanisms.
In summary, AI is a broader field that encompasses NLP as one of its subfields. While AI deals with the creation of intelligent systems, NLP is specifically concerned with enabling computers to understand and generate human language. Both AI and NLP play crucial roles in developing technology that can interact with humans more naturally and intelligently."
[{2023-07-31 retrieved}]

* McsEngl.techAi'relation-to-techNlp,
* McsEngl.techNlp'relation-to-techAi,

evoluting of techNlp

"evolution of NLP:
The evolution of Natural Language Processing (NLP) has been a fascinating journey marked by significant advancements over the years. Here is a broad overview of the key milestones in the evolution of NLP:
1. Early Developments (1950s-1960s):
- The field of NLP can be traced back to the 1950s and 1960s when researchers began to experiment with rule-based approaches to language processing.
- The Georgetown-IBM experiment in 1954 was one of the earliest attempts at machine translation, translating Russian sentences into English.
2. Knowledge-Based Systems (1970s-1980s):
- In the 1970s and 1980s, NLP research focused on knowledge-based systems, where domain-specific rules and grammars were used to process language.
- Systems like SHRDLU (1972) demonstrated the ability to understand and manipulate natural language within a limited domain.
3. Statistical NLP (1990s):
- The 1990s saw a shift towards statistical approaches to NLP, driven by the availability of large corpora of text and computational power.
- Hidden Markov Models (HMMs) and statistical machine translation systems emerged during this period.
4. Machine Learning and Neural Networks (2000s):
- The 2000s witnessed the resurgence of interest in neural network-based approaches to NLP.
- Recurrent Neural Networks (RNNs), Long Short-Term Memory (LSTM) networks, and Convolutional Neural Networks (CNNs) were applied to various NLP tasks.
- Notable developments include word embeddings (Word2Vec, GloVe) and the introduction of deep learning models for NLP tasks.
5. Pretrained Language Models (2010s-Present):
- The most significant breakthrough in recent NLP history has been the development of pretrained language models such as BERT (2018) and GPT (2018).
- These models use transformer architecture and can be fine-tuned for a wide range of NLP tasks, achieving state-of-the-art results.
- Transfer learning and pretrained models have democratized NLP by making it easier for developers to create high-performing NLP applications.
6. Transformers and Attention Mechanisms:
- The transformer architecture, introduced in the paper "Attention is All You Need" (2017), has become the foundation for many NLP models.
- Attention mechanisms, which allow models to focus on different parts of input sequences, have greatly improved the performance of NLP models.
7. Multilingual and Cross-Lingual NLP:
- NLP research has expanded to encompass multilingual and cross-lingual applications, enabling models to work across multiple languages.
- Models like mBERT and XLM-R have been developed to support multilingual understanding.
8. Ethical and Bias Considerations:
- As NLP applications become more widespread, there is increasing awareness of ethical concerns and biases in NLP models, leading to efforts to address fairness and inclusivity.
9. Conversational AI and Chatbots:
- NLP has played a crucial role in the development of conversational AI and chatbots, allowing for more natural interactions with machines and virtual assistants.
10. Future Directions:
- The future of NLP is likely to involve even more sophisticated models, improved handling of context and ambiguity, better understanding of emotions and sentiment, and applications in fields like healthcare, law, and education.
NLP has come a long way since its inception, and it continues to evolve rapidly, driven by advancements in deep learning, increased availability of data, and a growing understanding of language processing. The field is likely to remain at the forefront of AI research and application development in the years to come."

* McsEngl.evoluting-of-techNlp,
* McsEngl.techNlp'evoluting,


* natural-language-understanding,
* natural-language-generation,

* finding similar documents,
* finding words with the same meaning,
* generating a summary of a text,
* generating realistic names,
* grouping similar words,
* handwriting recognition,
* identifying entities,
* identifying the language of a text,
* machine-learning,
* machine-translation,
* named-entity-recognition-(NER),
* optical character recognition,
* parsing,
* part-of-speech tagging,
* question answering,
* sentiment analysis,
* speech-to-text,
* text-classification,
* text-to-speech,
* translating a text,
* understanding how much time it takes to read a text,
* understanding how difficult a text is to read,
* understanding the attitude expressed in a text,

* McsEngl.techNlp.specific,


· "Identifying and classifying names of concepts in text."

* McsEngl.NER-named-entity-recognition,
* McsEngl.named-entity-recognition,
* McsEngl.techNlp.006-identifying-entities,
* McsEngl.techNlp.identifying-entities,


· "Natural-language understanding (NLU) or natural-language interpretation (NLI)[1] is a subtopic of natural-language processing in artificial intelligence that deals with machine reading comprehension. Natural-language understanding is considered an AI-hard problem.[2]
There is considerable commercial interest in the field because of its application to automated reasoning,[3] machine translation,[4] question answering,[5] news-gathering, text categorization, voice-activation, archiving, and large-scale content analysis."
[{2023-07-31 retrieved}]

* McsEngl.NLU-natural-language-understanding,
* McsEngl.natural-language-understanding,
* McsEngl.techNlp.005-language-understanding,
* McsEngl.techNlp.language-understanding,


· Identifying the language of a text.

* McsEngl.identifying-language-of-text,
* McsEngl.techNlp.001-language-recognition,
* McsEngl.techNlp.language-recognition,



· "question answering is a subfield of NLP that deals with the task of automatically answering questions posed in natural language."
[{2023-08-01 retrieved}]

* McsEngl.QA!=question-answering-system!⇒techQa,
* McsEngl.question-answering-system!⇒techQa,
* McsEngl.techNlp.008-question-answering!⇒techQa,
* McsEngl.techNlp.question-answering!⇒techQa,
* McsEngl.techQa,


"Speech recognition is an interdisciplinary subfield of computer science and computational linguistics that develops methodologies and technologies that enable the recognition and translation of spoken language into text by computers with the main benefit of searchability. It is also known as automatic speech recognition (ASR), computer speech recognition or speech to text (STT). It incorporates knowledge and research in the computer science, linguistics and computer engineering fields. The reverse process is speech synthesis.
Some speech recognition systems require "training" (also called "enrollment") where an individual speaker reads text or isolated vocabulary into the system. The system analyzes the person's specific voice and uses it to fine-tune the recognition of that person's speech, resulting in increased accuracy. Systems that do not use training are called "speaker-independent"[1] systems. Systems that use training are called "speaker dependent".
Speech recognition applications include voice user interfaces such as voice dialing (e.g. "call home"), call routing (e.g. "I would like to make a collect call"), domotic appliance control, search key words (e.g. find a podcast where particular words were spoken), simple data entry (e.g., entering a credit card number), preparation of structured documents (e.g. a radiology report), determining speaker characteristics,[2] speech-to-text processing (e.g., word processors or emails), and aircraft (usually termed direct voice input).
The term voice recognition[3][4][5] or speaker identification[6][7][8] refers to identifying the speaker, rather than what they are saying. Recognizing the speaker can simplify the task of translating speech in systems that have been trained on a specific person's voice or it can be used to authenticate or verify the identity of a speaker as part of a security process.
From the technology perspective, speech recognition has a long history with several waves of major innovations. Most recently, the field has benefited from advances in deep learning and big data. The advances are evidenced not only by the surge of academic papers published in the field, but more importantly by the worldwide industry adoption of a variety of deep learning methods in designing and deploying speech recognition systems."
[{2023-04-02 retrieved}]

* McsEngl.ASR-automatic-speech-recognition!⇒techSprc,
* McsEngl.STT-speech-to-text!⇒techSprc,
* McsEngl.automatic-speech-recognition!⇒techSprc,
* McsEngl.techNlp.003-speech-recognition!⇒techSprc,
* McsEngl.techNlp.speech-recognition!⇒techSprc,
* McsEngl.techSprc,
* McsEngl.speech-recognition!⇒techSprc,
* McsEngl.speech-to-text!⇒techSprc,


"Speech synthesis is the artificial production of human speech. A computer system used for this purpose is called a speech synthesizer, and can be implemented in software or hardware products. A text-to-speech (TTS) system converts normal language text into speech; other systems render symbolic linguistic representations like phonetic transcriptions into speech.[1] The reverse process is speech recognition."
[{2023-04-02 retrieved}]

* McsEngl.TTS-text-to-speech!⇒techSpsn,
* McsEngl.techNlp.004-speech-synthesis!⇒techSpsn,
* McsEngl.techNlp.speech-synthesis!⇒techSpsn,
* McsEngl.techSpsn,
* McsEngl.text-to-speech!⇒techSpsn,
* McsEngl.speech-synthesis!⇒techSpsn,


"overview of START, SynTactic Analysis using Reversible Transformations:
START, or SynTactic Analysis using Reversible Transformations, is a natural language processing system developed by Boris Katz and associates at MIT. It went online in December 1993, making it the first Web-based question-answering system, and it pioneered the use of reversible transformations for syntactic analysis.

START is a knowledge-based system, which means that it relies on a knowledge base of grammatical rules and lexical information to parse sentences. The knowledge base is used to generate a set of possible parse trees for the input sentence. START then uses a set of reversible transformations to explore the possible parse trees and find the one that is most likely to be correct.

START has been shown to be very effective at parsing sentences, even in the presence of errors and ambiguities. It has also been shown to be very efficient, even for complex sentences.

Here are some of the advantages of using START for syntactic analysis:
* It is very effective at parsing sentences, even in the presence of errors and ambiguities.
* It is very efficient, even for complex sentences.
* It is easy to extend to handle new sentence structures or new types of grammatical rules.

START has been used to develop a variety of natural language processing applications, including:
* Machine translation
* Question answering
* Text summarization
* Information extraction

START is a powerful tool for syntactic analysis, and it has had a significant impact on the field of natural language processing.

Here is a simple example of how START works:
**Input sentence:** The cat sat on the mat.

**Knowledge base:**
* The cat is a noun.
* Sat is a verb.
* On is a preposition.
* The is a determiner.
* Mat is a noun.

**Reversible transformations:**
* Move a word around in the sentence.
* Split the sentence into smaller parts.
* Combine smaller parts into larger parts.

**Parse trees:**
1. The cat sat on the mat.
2. The cat | sat on the mat.
3. The cat sat | on the mat.
4. The cat | sat on | the mat.

START will start by generating a set of possible parse trees for the input sentence. In this case, START will generate the four parse trees shown above.

START will then use its knowledge base of grammatical rules to eliminate any parse trees that are grammatically incorrect. For example, START will eliminate parse trees 2 and 3 because they violate the rule that a verb must agree with its subject in number.

START will then use its reversible transformations to explore the remaining parse trees and find the one that is most likely to be correct. In this case, START will find that parse tree 4 is the most likely to be correct because it satisfies all of the grammatical rules.

START will then output parse tree 4 as the final parse for the input sentence.

START is a complex system, but this simple example illustrates how it works to parse sentences."
[{2023-10-28 retrieved}]
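The lexicon-lookup step from the walkthrough above can be imitated in a few lines. This is NOT the real START system, only a toy tagger built from the example's tiny knowledge base.

```python
# Toy lexicon lookup for the "The cat sat on the mat." example:
# tag each word with its part of speech from the knowledge base.
lexicon = {
    "the": "determiner",
    "cat": "noun",
    "sat": "verb",
    "on": "preposition",
    "mat": "noun",
}

def tag(sentence):
    # Strip the final period, split on spaces, and look each word up.
    return [(w, lexicon.get(w.lower(), "unknown")) for w in sentence.rstrip(".").split()]

print(tag("The cat sat on the mat."))
# [('The', 'determiner'), ('cat', 'noun'), ('sat', 'verb'),
#  ('on', 'preposition'), ('the', 'determiner'), ('mat', 'noun')]
```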

* McsEngl.Knlgmngr.015-START,
* McsEngl.START-Knlgmgr,
* McsEngl.techNlp.007-START,
* McsEngl.techNlp.START,


"Machine translation, sometimes referred to by the abbreviation MT[1] (not to be confused with computer-aided translation, machine-aided human translation or interactive translation), is a sub-field of computational linguistics that investigates the use of software to translate text or speech from one language to another."

* McsEngl.machine-translation-tech!⇒techTrln,
* McsEngl.techAi.machine-translation!⇒techTrln,
* McsEngl.techInfo.022-machine-translation!⇒techTrln,
* McsEngl.techInfo.machine-translation!⇒techTrln,
* McsEngl.techTrln!=machine-TRansLatioN,
* McsEngl.techNlp.002-machine-translation!⇒techTrln,
* McsEngl.techNlp.machine-translation!⇒techTrln,
* McsEngl.translation.machine!⇒techTrln,

evoluting of techTrln

"evolution of machine-translation:
The evolution of machine translation (MT) can be broadly divided into three phases:

**Phase 1: Rule-based MT (1950s-1980s)**
The first MT systems were rule-based, meaning that they relied on a set of hand-crafted rules to translate text from one language to another. These rules were based on the grammar and vocabulary of the two languages, as well as on human knowledge of translation.

Rule-based MT systems were able to produce accurate translations for some types of text, such as technical manuals and news articles. However, they were not very good at translating more nuanced or creative text, such as literature and poetry.

**Phase 2: Statistical MT (1990s-2010s)**
Statistical MT systems were a significant improvement over rule-based MT systems. They relied on large datasets of parallel text, which is text that has been translated into two or more languages. The system would learn from these datasets to create statistical models of how words and phrases are translated between the languages.

Statistical MT systems were able to produce more accurate and fluent translations than rule-based MT systems, especially for creative text. However, they still had some limitations, such as difficulty translating idioms and rare words.

**Phase 3: Neural MT (2010s-present)**
Neural MT systems are the most advanced type of MT system currently available. They use deep learning neural networks to learn how to translate text from one language to another. Neural MT systems are trained on massive datasets of parallel text, and they are able to produce translations that are often indistinguishable from human translations.

Neural MT systems have revolutionized the field of machine translation. They are now used in a wide variety of applications, such as translating websites, documents, and even real-time speech.

**Future of MT**
MT research is still ongoing, and new advances are being made all the time. One of the most promising areas of research is multilingual MT, which is the ability to translate between multiple languages without the need for a pivot language.

Another promising area of research is adaptive MT, which is the ability of MT systems to learn and adapt to new data and domains. This is important for MT systems to be able to translate new types of text and to be used in new contexts.

Overall, the future of MT is very bright. MT systems are becoming increasingly accurate and fluent, and they are being used in more and more applications. It is likely that MT systems will continue to play an increasingly important role in our lives in the years to come."
[{2023-10-25 retrieved}]

"evolution of machine-translation:
The evolution of machine translation (MT) has been a fascinating journey, marked by significant milestones and advancements over the years. Here's a brief overview of the key developments in the field:

1. Early Research (1940s-1950s):
- The origins of MT can be traced back to the 1940s and 1950s when researchers began exploring the possibility of automating translation using computers.
- Early systems like the Georgetown-IBM experiment and the Russian-English MT system were rule-based and limited in scope.

2. Rule-Based Machine Translation (1960s-1980s):
- Rule-based MT systems relied on linguistic rules and dictionaries to translate text.
- Major projects included SYSTRAN (developed by Peter Toma), and the ALPAC report (1966) highlighted the limitations of rule-based approaches.
- These systems struggled with idiomatic expressions and context.

3. Statistical Machine Translation (SMT) (1990s-2000s):
- SMT systems emerged in the 1990s, driven by the availability of large bilingual corpora.
- SMT models like IBM Models and Phrase-Based Models used statistical techniques to learn translation probabilities.
- They outperformed rule-based systems but still faced challenges with fluency and idiomatic expressions.

4. Neural Machine Translation (NMT) (2010s-Present):
- NMT revolutionized machine translation by introducing neural networks, particularly recurrent and transformer models.
- The introduction of models like Google's Sequence-to-Sequence (Seq2Seq) and the Transformer model (e.g., Google's Transformer, OpenAI's GPT) significantly improved translation quality.
- NMT systems could handle context better and generate more fluent translations.

5. Subword and Multilingual Models:
- Subword tokenization techniques like Byte-Pair Encoding (BPE) and SentencePiece allowed NMT models to handle a wide range of languages and subword units.
- Multilingual models, like Facebook's M2M-100 and Google's Multilingual T5, demonstrated the ability to translate between many languages without language-specific training data.

6. Transfer Learning and Pretrained Models:
- Transfer learning techniques, where models are pretrained on large monolingual corpora and fine-tuned for translation tasks, became a dominant paradigm.
- Models like OpenAI's GPT-3 and GPT-4 have demonstrated impressive translation capabilities in a wide range of languages.

7. Continuous Advancements:
- Ongoing research focuses on improving translation quality, handling low-resource languages, and addressing issues like bias in machine translation.
- Customization and domain adaptation have become important, allowing MT systems to be fine-tuned for specific industries or tasks.

8. Integrating Human Feedback:
- Many MT systems now incorporate human feedback loops, such as post-editing and user feedback, to further enhance translation quality.

Machine translation has come a long way, with NMT and pretrained models significantly improving translation quality and making MT systems more accessible and practical for various applications. The field continues to evolve with ongoing research and advancements in natural language processing."
[{2023-10-25 retrieved}]
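The Byte-Pair Encoding (BPE) subword tokenization mentioned in step 5 can be sketched in a few lines. This is a schematic illustration of the merge loop only, not the tokenizer of any particular system; the toy vocabulary is invented for illustration.

```python
from collections import Counter

def bpe_merges(words, num_merges):
    """Learn BPE merges: repeatedly fuse the most frequent adjacent symbol pair."""
    corpus = Counter(tuple(w) for w in words)  # each word as a tuple of symbols
    merges = []
    for _ in range(num_merges):
        # Count how often each adjacent symbol pair occurs across the corpus.
        pairs = Counter()
        for symbols, freq in corpus.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Rewrite every word with the chosen pair fused into one symbol.
        new_corpus = Counter()
        for symbols, freq in corpus.items():
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            new_corpus[tuple(out)] += freq
        corpus = new_corpus
    return merges

words = ["low", "low", "lower", "lowest", "newer", "newest"]
merges = bpe_merges(words, 3)
print(merges)  # frequent character pairs such as ('l', 'o') get fused first
```

Real tokenizers like SentencePiece add details (byte fallback, special tokens, frequency thresholds), but the core loop is this pair-counting and merging.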

* McsEngl.evoluting-of-techTrln,
* McsEngl.techTrln'evoluting,


"We call machines programmed to learn from examples “neural networks.” "
[{2023-07-30 retrieved}]
"A machine learning algorithm is an algorithm that is able to learn from data. But what do we mean by learning? Mitchell (1997) provides a succinct definition: “A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.” "
[{2022-12-06 retrieved}]
"A system is said to learn if it is capable of acquiring new knowledge from its environment.
Learning may also enable the ability to perform new tasks without having to be redesigned or reprogrammed, especially when accompanied by generalization.
Learning is most readily accomplished in a system that supports symbolic abstraction, though such a property is not exclusive (reinforcement strategies, for example, do not necessarily require symbolic representation).
This type of learning is separated from the acquisition of knowledge through direct programming by the designer, which is referred to throughout this document as the Ability to Add New Knowledge." [{1998-02-16}]
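Mitchell's definition above (tasks T, performance P, experience E) can be made concrete with a toy sketch; the running-mean predictor and the data here are invented for illustration.

```python
# Toy illustration of Mitchell's learning definition:
# task T: predict the next value of a noisy signal
# performance P: squared error (lower is better)
# experience E: the observations seen so far

def running_mean_predictor(observations):
    """Predict each next value with the mean of all observations seen so far."""
    total, count = 0.0, 0
    predictions = []
    for x in observations:
        prediction = total / count if count else 0.0
        predictions.append(prediction)
        total += x
        count += 1
    return predictions

data = [9.0, 11.0, 10.0, 10.5, 9.5, 10.0, 10.2, 9.8]
preds = running_mean_predictor(data)
errors = [(p - x) ** 2 for p, x in zip(preds, data)]

# Performance P improves with experience E: late errors are smaller than early ones.
early = sum(errors[:4]) / 4
late = sum(errors[4:]) / 4
print(early > late)  # → True
```

Because its squared error at task T shrinks as experience E grows, this trivial program "learns" in exactly Mitchell's sense.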

* McsEngl.ML!=Machine-Learning!⇒techMl,
* McsEngl.Machine-Learning!⇒techMl,
* McsEngl.techInfo.008-Machine-Learning!⇒techMl,
* McsEngl.techInfo.Machine-Learning!⇒techMl,
* McsEngl.techAi.techMl!⇒techMl,
* McsEngl.techNlp.techMl!⇒techMl,
* McsEngl.techMl,
* McsEngl.techMachine-learning!⇒techMl,

"Machine learning (ML) is the study of computer algorithms that improve automatically through experience.[1][2] It is seen as a subset of artificial intelligence. Machine learning algorithms build a mathematical model based on sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to do so.[3] Machine learning algorithms are used in a wide variety of applications, such as email filtering and computer vision, where it is difficult or infeasible to develop conventional algorithms to perform the needed tasks.
Machine learning is closely related to computational statistics, which focuses on making predictions using computers. The study of mathematical optimization delivers methods, theory and application domains to the field of machine learning. Data mining is a related field of study, focusing on exploratory data analysis through unsupervised learning.[5][6] In its application across business problems, machine learning is also referred to as predictive analytics."

Softcode of techMl

"machine-learning libraries:
Machine learning libraries are essential tools for building and deploying machine learning models. They provide a wide range of functionalities for tasks like data preprocessing, model training, evaluation, and deployment. Here are some of the most popular machine learning libraries and frameworks:

1. **Scikit-Learn**:
- Language: Python
- Description: Scikit-Learn is a versatile library for classical machine learning algorithms. It offers simple and efficient tools for data analysis and modeling, including classification, regression, clustering, and dimensionality reduction.

2. **TensorFlow**:
- Language: Python
- Description: Developed by Google, TensorFlow is an open-source deep learning framework that provides a flexible ecosystem for building and deploying neural networks. It's widely used in both research and production environments.

3. **Keras**:
- Language: Python
- Description: Keras is an open-source deep learning framework that serves as a high-level API for building and training neural networks. It's often used with TensorFlow and allows for quick prototyping of deep learning models.

4. **PyTorch**:
- Language: Python
- Description: PyTorch is another deep learning framework that is known for its dynamic computation graph, making it easier for researchers and developers to experiment with new architectures and models.

5. **MXNet**:
- Language: Python, Julia, Scala, and more
- Description: MXNet is an open-source deep learning framework that's known for its efficiency and scalability. It supports multiple programming languages and is optimized for distributed computing.

6. **Caffe**:
- Language: C++, Python
- Description: Caffe is a deep learning framework developed by the Berkeley Vision and Learning Center (BVLC). It's especially popular in computer vision applications.

7. **Theano**:
- Language: Python
- Description: Theano is a numerical computation library that's particularly well-suited for deep learning research. It's known for its efficiency in optimizing mathematical expressions.

8. **XGBoost**:
- Language: Python, R, Java, C++, and more
- Description: XGBoost is a popular and efficient library for gradient boosting. It's used for both classification and regression tasks and is known for its speed and accuracy.

9. **LightGBM**:
- Language: Python, R, C++, and more
- Description: LightGBM is another gradient boosting framework that's designed for efficiency and speed. It's especially useful for large datasets and high-dimensional data.

10. ****:
- Language: R, Python, and more
- Description: provides an open-source platform for machine learning and AI. It includes autoML capabilities and is designed for scalable and distributed machine learning.

11. **Spark MLlib**:
- Language: Scala, Python, Java, R
- Description: Part of the Apache Spark ecosystem, MLlib offers scalable machine learning tools for big data processing. It includes a wide range of algorithms and tools for distributed machine learning.

These libraries cater to a variety of machine learning and deep learning needs, from traditional algorithms to cutting-edge deep neural networks. The choice of library often depends on the specific task, the programming language you're comfortable with, and the scale of your project."
[{2023-10-26 retrieved}]

* McsEngl.Mlsoftcode,
* McsEngl.techMl'Softcode!⇒Mlsoftcode,


· "PyTorch is a machine learning framework based on the Torch library,[4][5][6] used for applications such as computer vision and natural language processing,[7] originally developed by Meta AI and now part of the Linux Foundation umbrella.[8][9][10][11] It is free and open-source software released under the modified BSD license. Although the Python interface is more polished and the primary focus of development, PyTorch also has a C++ interface.[12]
A number of pieces of deep learning software are built on top of PyTorch, including Tesla Autopilot,[13] Uber's Pyro,[14] Hugging Face's Transformers,[15] PyTorch Lightning,[16][17] and Catalyst.[18][19]
PyTorch provides two high-level features:[20]
* Tensor computing (like NumPy) with strong acceleration via graphics processing units (GPU)
* Deep neural networks built on a tape-based automatic differentiation system"
[{2023-08-15 retrieved}]

* McsEngl.Mlsoftcode.PyTorch,
* McsEngl.PyTorch-Mlsoftcode,

engineer of techMl

"A Machine Learning Engineer creates, edits, analyzes, debugs, models, and supervises the development of machine learning models using programming languages such as Python or C++ and machine learning libraries such as Keras or TensorFlow.
... The average annual income of a Machine Learning Engineer in the United States is between $112,000 and $157,000 with a median of $131,000 per year according to multiple data sources such as Indeed, Glassdoor, and Payscale."
[{2023-03-29 retrieved}]

* McsEngl.techMl'engineer,

organization of techMl

* Hugging-Face,

* McsEngl.techMl'organization,

info-resource of techMl


* Practical Deep Learning for Coders:,

* McsEngl.techMl'Infrsc,

DOING of techMl

* classification-task,
* regression-task,

* McsEngl.techMl'doing,

regression-task of techMl

">regression task meaning:
In the context of machine learning, a regression task involves predicting a continuous numerical output variable from a set of input variables. The goal of regression is to identify the relationship between the input variables and the output variable and use that relationship to make predictions for new data points.

Regression tasks are often contrasted with classification tasks, which involve predicting a discrete categorical output variable. For example, predicting the price of a house based on its size, location, and features is a regression task, while predicting whether an email is spam or not is a classification task.

There are many different types of regression algorithms, each with its own strengths and weaknesses. Some of the most common regression algorithms include linear regression, logistic regression, and polynomial regression. The choice of algorithm depends on the specific problem and the nature of the data.

Regression tasks are used in a wide variety of applications, including:
* **Predicting stock prices**
* **Forecasting weather patterns**
* **Analyzing sales data**
* **Diagnosing medical conditions**
* **Recommending products or services**

Regression is a powerful tool that can be used to extract insights from data and make predictions about the future. It is a valuable technique for anyone who works with data."
[{2023-11-21 retrieved}]
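The house-price example above can be sketched with ordinary least squares for a single input variable; the sizes and prices below are invented for illustration.

```python
def fit_simple_linear_regression(xs, ys):
    """Closed-form ordinary least squares for y ≈ slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical data: house size (m^2) vs price (thousands); here price = 3 * size.
sizes = [50, 80, 100, 120, 150]
prices = [150, 240, 300, 360, 450]
slope, intercept = fit_simple_linear_regression(sizes, prices)
print(slope, intercept)          # → 3.0 0.0
print(slope * 90 + intercept)    # predicted price for a 90 m^2 house → 270.0
```

The continuous output (a price, not a category) is what makes this a regression rather than a classification task.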

* McsEngl.regression-task//techMl,
* McsEngl.techMl'regression-task,

evoluting of techMl

"evolution of machine learning:
The evolution of machine learning is a fascinating journey that has seen significant developments over the years. Here's a brief overview of its evolution:
1. Early Foundations (1940s-1950s):
- The concept of artificial intelligence (AI) and machine learning (ML) emerged in the mid-20th century.
- Early pioneers like Alan Turing and John von Neumann laid the groundwork for computational theory and neural networks.
2. Symbolic AI and Expert Systems (1950s-1970s):
- Early AI systems were based on symbolic reasoning, using rule-based expert systems to solve specific problems.
- Symbolic AI struggled with scalability and handling uncertainty.
3. Connectionism and Neural Networks (1940s-1960s, 1980s):
- Warren McCulloch and Walter Pitts developed the first artificial neural network model in the 1940s.
- Neural networks experienced a resurgence in the 1980s, with developments like the backpropagation algorithm.
4. Machine Learning Algorithms (1950s-1970s):
- Early ML algorithms like the Perceptron (1957) and the development of decision trees (1960s) paved the way for supervised learning.
- The concept of unsupervised learning also emerged with clustering algorithms like K-means.
5. Expert Systems and Knowledge-Based Systems (1970s-1980s):
- Expert systems gained popularity as AI applications in fields like medicine and finance.
- MYCIN (1976) and Dendral (1965) were notable examples of early expert systems.
6. Knowledge-Based Systems (1980s-1990s):
- The development of knowledge representation languages like Prolog and Common Lisp helped build knowledge-based systems.
- Expert systems evolved to include more complex reasoning and inference capabilities.
7. Reinforcement Learning (1980s-present):
- Reinforcement learning emerged as a subfield of ML focused on decision-making and control.
- Developments like Q-learning and deep reinforcement learning have made significant strides in this area.
8. Neural Networks Resurgence (2000s-present):
- Advances in computational power, big data, and improved training algorithms led to a resurgence of neural networks.
- Deep learning, especially convolutional neural networks (CNNs) and recurrent neural networks (RNNs), have achieved remarkable results in image recognition, natural language processing, and more.
9. Big Data and Scalability (2000s-present):
- The availability of massive datasets and distributed computing frameworks like Hadoop and Spark have propelled ML.
- The rise of cloud computing and GPUs has accelerated the training of complex models.
10. Transfer Learning and Transformers (2010s-present):
- Transfer learning techniques and models like Transformers have revolutionized NLP tasks and other domains.
- Pretrained language models like GPT-3 and BERT have achieved state-of-the-art results in various applications.
11. Ethical and Responsible AI (2010s-present):
- With the rapid advancement of AI and ML, ethical concerns, bias, and fairness have become central issues.
- Efforts to develop responsible AI, interpretability, and fairness-aware algorithms are ongoing.
12. AI for Healthcare, Autonomous Vehicles, and Beyond (2010s-present):
- AI and ML are making significant impacts in healthcare, autonomous vehicles, robotics, finance, and many other fields.
The evolution of machine learning continues to be dynamic, with ongoing research and innovations shaping its future. AI and ML are poised to have a profound impact on various aspects of society, from healthcare and finance to transportation and entertainment."
[{2023-10-09 retrieved}]

* McsEngl.techMl'evolution,


* generative-AI,
* deep-learning,
* Supervised Learning (e.g., Linear Regression, Support Vector Machines)
* Unsupervised Learning (e.g., Clustering, Dimensionality Reduction)
* Reinforcement Learning (e.g., Q-Learning, Policy Gradient)

* McsEngl.techMl.specific,


">quantum machine learning:
Quantum machine learning (QML) is an emerging field that combines the principles of quantum computing and machine learning. It aims to harness the unique capabilities of quantum computers to solve machine learning problems that are intractable for classical computers.

**What is Quantum Machine Learning?**
QML explores the potential of quantum computers to enhance machine learning algorithms in several ways:
1. **Efficient Data Representation:** Quantum computers can represent data using quantum states, which can encode more information than classical bits. This allows quantum machine learning algorithms to handle more complex data sets and relationships.
2. **Parallel Processing:** Quantum computers can perform computations on multiple quantum states simultaneously, enabling them to process large amounts of data in parallel. This can significantly speed up machine learning tasks.
3. **Quantum Algorithms:** Quantum algorithms, such as Grover's algorithm and HHL algorithm, can provide exponential speedups for certain types of machine learning problems. These algorithms can solve problems that are intractable for classical computers, such as certain search and optimization problems.

**Applications of Quantum Machine Learning**
QML is still in its early stages of development, but it has the potential to revolutionize many areas of machine learning, including:
1. **Drug Discovery:** Quantum computers can simulate complex molecular interactions, which can aid in drug discovery and development.
2. **Materials Science:** QML can be used to design new materials with improved properties, such as superconductors and batteries.
3. **Financial Modeling:** Quantum computers can handle complex financial models and data to improve risk assessment and investment strategies.
4. **Pattern Recognition:** QML can be used to develop more accurate and efficient pattern recognition algorithms for image and speech processing.
5. **Artificial Intelligence:** QML can potentially enhance the capabilities of artificial intelligence systems by enabling them to learn and adapt more effectively.

**Challenges and Future Directions**
Despite its promise, QML faces several challenges:
1. **Quantum Hardware Limitations:** Current quantum computers are still limited in their size and coherence, making it difficult to implement complex quantum machine learning algorithms.
2. **Algorithmic Development:** Developing efficient and practical quantum machine learning algorithms is an active area of research.
3. **Noise and Error Correction:** Quantum systems are susceptible to noise and errors, which can affect the accuracy of quantum machine learning algorithms.

As quantum computing technology matures and algorithmic development progresses, QML is expected to play an increasingly significant role in solving complex machine learning problems and revolutionizing various industries."
[{2023-11-21 retrieved}]

* McsEngl.QML!=quantum-machine-learning,
* McsEngl.quantum-machine-learning,
* McsEngl.techMl.quantum,

techMl.neural-network (link)


"Deep learning is part of a broader family of machine learning methods based on artificial neural networks with representation learning. Learning can be supervised, semi-supervised or unsupervised.[2]
Deep-learning architectures such as deep neural networks, deep belief networks, deep reinforcement learning, recurrent neural networks, convolutional neural networks and transformers have been applied to fields including computer vision, speech recognition, natural language processing, machine translation, bioinformatics, drug design, medical image analysis, climate science, material inspection and board game programs, where they have produced results comparable to and in some cases surpassing human expert performance.[3][4][5]
Artificial neural networks (ANNs) were inspired by information processing and distributed communication nodes in biological systems. ANNs have various differences from biological brains. Specifically, artificial neural networks tend to be static and symbolic, while the biological brain of most living organisms is dynamic (plastic) and analog.[6][7]
The adjective "deep" in deep learning refers to the use of multiple layers in the network. Early work showed that a linear perceptron cannot be a universal classifier, but that a network with a nonpolynomial activation function with one hidden layer of unbounded width can. Deep learning is a modern variation that is concerned with an unbounded number of layers of bounded size, which permits practical application and optimized implementation, while retaining theoretical universality under mild conditions. In deep learning the layers are also permitted to be heterogeneous and to deviate widely from biologically informed connectionist models, for the sake of efficiency, trainability and understandability."
[{2023-04-01 retrieved}]

* McsEngl.DL-deep-learning!⇒techDl,
* McsEngl.deep-learning!⇒techDl,
* McsEngl.techDl!=deep-learning!⇒techDl,

evoluting of techDl

"evolution of deep-learning:
Deep learning is a subfield of machine learning that uses artificial neural networks to learn from data. Neural networks are inspired by the structure and function of the human brain, and they are able to learn complex patterns from large amounts of data.

The modern wave of deep learning is usually dated to 2006, when Geoffrey Hinton and his collaborators showed how to train deep belief networks one layer at a time. However, the roots of deep learning can be traced back to the 1950s, when Frank Rosenblatt developed the perceptron, an early trainable artificial neuron.

In 1986, David Rumelhart, Geoffrey Hinton, and Ronald Williams popularized the backpropagation algorithm, extending Rosenblatt's line of work. This algorithm allowed neural networks to learn from data by adjusting their weights based on the error between their predictions and the desired output.

In the 1970s and 1980s, deep learning research fell out of favor due to a number of challenges, including the lack of data, the lack of computational power, and the difficulty of training neural networks. However, in the 1990s, there was a resurgence of interest in deep learning, and researchers began to make significant progress in overcoming the challenges that had previously held back the field.

One of the key breakthroughs in deep learning was the development of convolutional neural networks (CNNs) in the 1990s. CNNs are a type of neural network that are well-suited for image recognition tasks. In 2012, a CNN called AlexNet won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a competition that is widely considered to be a benchmark for image recognition. AlexNet's victory was a watershed moment for deep learning, and it helped to propel the field into the mainstream.

In the years since AlexNet's victory, deep learning has made significant progress in a wide variety of tasks, including image recognition, natural language processing, and speech recognition. Deep learning is now being used in a wide range of applications, including self-driving cars, medical diagnosis, and fraud detection.

Here are some of the key milestones in the evolution of deep learning:
* 1957: Frank Rosenblatt develops the perceptron, the first artificial neuron.
* 1986: David Rumelhart, Geoffrey Hinton, and Ronald Williams popularize the backpropagation algorithm.
* 1989: Yann LeCun applies backpropagation to convolutional neural networks (CNNs) for handwritten digit recognition.
* 1997: Jürgen Schmidhuber and Sepp Hochreiter develop long short-term memory (LSTM) networks.
* 2012: AlexNet wins the ImageNet Large Scale Visual Recognition Challenge (ILSVRC).
* 2016: DeepMind's AlphaGo defeats world Go champion Lee Sedol.
* 2019: OpenAI Five defeats the reigning world-champion team at Dota 2.
* 2020: DeepMind's AlphaFold 2 wins the CASP14 protein structure prediction competition. AlphaFold 2 is a computer program that uses artificial intelligence to predict the three-dimensional structure of proteins from their amino acid sequences. This is a major breakthrough in biology, as it could have a profound impact on our understanding of diseases and the development of new drugs.

Deep learning is a rapidly evolving field, and it is likely to continue to have a major impact on our world in the years to come."
[{2023-11-05 retrieved}]
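The perceptron that opens the timeline above can be sketched with Rosenblatt's learning rule; the AND task, learning rate, and epoch count are invented for illustration.

```python
def train_perceptron(samples, epochs=10, lr=0.1):
    """Rosenblatt's learning rule: nudge the weights on each misclassification."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # 0 when correct, ±1 when wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Toy dataset: logical AND (linearly separable, so the perceptron converges)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data]
print(preds)  # → [0, 0, 0, 1]
```

On a non-separable task such as XOR this single-layer rule never converges, which is exactly the limitation that stalled the field until multi-layer networks trained by backpropagation.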

"The evolution of deep learning has been a remarkable journey marked by significant milestones and breakthroughs. Deep learning is a subfield of machine learning that focuses on neural networks with multiple layers, also known as artificial neural networks. Here's an overview of the key stages and developments in the evolution of deep learning:

1. Perceptrons (1950s - 1960s):
The concept of artificial neurons was first introduced in the 1940s and later formalized by Frank Rosenblatt in the late 1950s with the development of perceptrons. These early neural networks could learn simple linear functions but had limitations when it came to more complex tasks.

2. Neural Networks Winter (1970s - 2000s):
During this period, neural networks fell out of favor due to various limitations, including the inability to train deep networks and the availability of more traditional machine learning algorithms.

3. Backpropagation (1986):
The reintroduction of the backpropagation algorithm by Geoffrey Hinton and others in the 1980s allowed for efficient training of neural networks with multiple layers. This was a crucial step in making deep learning more practical.

4. Convolutional Neural Networks (CNNs) (1990s - 2010s):
CNNs, developed by Yann LeCun, have been fundamental in computer vision tasks. They use specialized layers for feature extraction, making them highly effective for tasks like image classification and object detection.

5. Recurrent Neural Networks (RNNs) (1980s - 2010s):
RNNs, developed in the 1980s, are designed to handle sequential data. They have been used in natural language processing, speech recognition, and time series analysis.

6. Deep Learning Resurgence (2010s):
Deep learning experienced a resurgence in the 2010s, largely driven by advances in hardware (GPUs) and data availability. Researchers like Geoffrey Hinton, Yoshua Bengio, and others made significant contributions to the field.

7. ImageNet and AlexNet (2012):
The ImageNet Large Scale Visual Recognition Challenge in 2012 demonstrated the power of deep learning, especially with the introduction of AlexNet, a deep convolutional neural network that significantly improved image classification accuracy.

8. Deep Learning Frameworks (2010s):
The development of user-friendly deep learning frameworks, such as TensorFlow and PyTorch, made it easier for researchers and practitioners to build and experiment with deep neural networks.

9. Transfer Learning (2010s - present):
Transfer learning techniques, such as fine-tuning pre-trained models, became popular for various tasks, allowing models to leverage the knowledge learned from massive datasets.

10. GANs and Transformers (2010s - present):
Generative Adversarial Networks (GANs) and Transformers have revolutionized tasks like image generation and natural language processing. GANs enable the generation of realistic data, while Transformers, with models like BERT and GPT, have pushed the boundaries of NLP.

11. Reinforcement Learning (RL) (2010s - present):
Deep reinforcement learning has seen significant progress in training agents to learn from rewards and make decisions in complex environments, resulting in achievements in games and robotics.

12. Ethical and Societal Considerations (ongoing):
As deep learning continues to advance, ethical and societal concerns around biases, privacy, and AI ethics have come to the forefront, prompting discussions and regulations.

The evolution of deep learning continues, with ongoing research in areas like self-supervised learning, explainability, and improved model architectures. Deep learning has become a foundational technology in various domains, from healthcare and finance to autonomous vehicles and beyond."
[{2023-11-05 retrieved}]

* McsEngl.evoluting-of-techDl,
* McsEngl.techDl'evoluting,


* discriminative,
* generative,
* deep neural networks,
* deep belief networks,
* deep reinforcement learning,
* recurrent neural networks,
* convolutional neural networks,
* transformers,

* McsEngl.techDl.specific,


* used to classify or predict,
* typically trained on a dataset of labeled data,
* learns the relationship between the features of the data points and the labels.
[{2023-08-01 retrieved}]

* McsEngl.techDl.discriminative,


"Supervised learning (SL) is a machine learning paradigm for problems where the available data consists of labeled examples, meaning that each data point contains features (covariates) and an associated label. The goal of supervised learning algorithms is learning a function that maps feature vectors (inputs) to labels (output), based on example input-output pairs.[1] It infers a function from labeled training data consisting of a set of training examples.[2] In supervised learning, each example is a pair consisting of an input object (typically a vector) and a desired output value (also called the supervisory signal). A supervised learning algorithm analyzes the training data and produces an inferred function, which can be used for mapping new examples. An optimal scenario will allow for the algorithm to correctly determine the class labels for unseen instances. This requires the learning algorithm to generalize from the training data to unseen situations in a "reasonable" way (see inductive bias). This statistical quality of an algorithm is measured through the so-called generalization error."
[{2023-04-04 retrieved}]
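A minimal sketch of the supervised setting described above is a 1-nearest-neighbor classifier, which infers labels for unseen inputs directly from labeled examples; the data are invented for illustration.

```python
def nearest_neighbor_classify(train, x):
    """1-NN: label a new point with the label of its closest training example."""
    def dist2(a, b):
        # squared Euclidean distance between two feature vectors
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    _, label = min(train, key=lambda pair: dist2(pair[0], x))
    return label

# Labeled examples: (feature vector, label) pairs
train = [((1.0, 1.0), "a"), ((1.2, 0.8), "a"), ((5.0, 5.0), "b"), ((4.8, 5.2), "b")]
print(nearest_neighbor_classify(train, (1.1, 0.9)))  # → a
print(nearest_neighbor_classify(train, (5.1, 4.9)))  # → b
```

The "inferred function" here is implicit: the training set itself defines the mapping from feature vectors to labels, and generalization rests on the inductive bias that nearby points share labels.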

* McsEngl.techMl.supervised-learning,
* McsEngl.supervised-learning,


">backpropagation overview:
Backpropagation, short for "backward propagation of errors," is a supervised learning algorithm commonly used for training artificial neural networks (ANNs). It is a key component of the training process in which the network learns to make predictions by adjusting its weights based on the error of its predictions compared to the actual target values.

Here's an overview of the backpropagation algorithm:
1. **Forward Pass:**
- The input data is fed into the neural network, and the network processes it layer by layer through its neurons.
- Each neuron performs a weighted sum of its inputs, applies an activation function, and passes the result to the next layer.

2. **Calculate Error:**
- The output of the network is compared to the actual target values using a predefined loss function. The loss function measures the difference between the predicted and actual values.

3. **Backward Pass:**
- The goal is to minimize the error by adjusting the weights in the network.
- The error is propagated backward through the network to update the weights. This is where the term "backpropagation" comes from.

4. **Gradient Descent:**
- The gradient of the loss function with respect to the weights is calculated. This gradient indicates how much the loss would increase or decrease if the weights are adjusted.
- The weights are then updated in the opposite direction of the gradient to minimize the loss. This is typically done using an optimization algorithm like stochastic gradient descent (SGD) or one of its variants.

5. **Repeat:**
- Steps 1-4 are repeated iteratively for multiple epochs or until the model reaches a satisfactory level of performance.

6. **Learning Rate:**
- The learning rate is a hyperparameter that determines the size of the step taken during the weight updates. It's crucial to find an appropriate learning rate to balance convergence speed and stability.

7. **Activation Functions:**
- Activation functions are applied to the output of each neuron to introduce non-linearity into the network. Common activation functions include sigmoid, hyperbolic tangent (tanh), and rectified linear unit (ReLU).

8. **Batch Processing:**
- Backpropagation can be performed on individual training examples (online learning), on a subset of the data (mini-batch learning), or on the entire dataset (batch learning). Mini-batch learning is most commonly used in practice.

Backpropagation is a foundational algorithm for training neural networks and has been instrumental in the success of deep learning. It allows the model to adjust its weights based on the gradient of the error, enabling the network to learn complex patterns and representations from the input data."
[{2023-11-20 retrieved}]
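Steps 1-4 above can be sketched for a single sigmoid neuron, the smallest possible "network"; the OR task, squared loss, and learning rate are invented for illustration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Tiny training set: learn logical OR with one sigmoid neuron.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w1, w2, b = 0.0, 0.0, 0.0
lr = 1.0  # learning rate (step 6)

for epoch in range(2000):
    for (x1, x2), target in data:
        # 1. forward pass
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        # 2. error: squared loss L = (out - target)^2 / 2
        # 3. backward pass: dL/dw = (out - target) * out * (1 - out) * x
        delta = (out - target) * out * (1.0 - out)
        # 4. gradient descent: step against the gradient
        w1 -= lr * delta * x1
        w2 -= lr * delta * x2
        b -= lr * delta

preds = [round(sigmoid(w1 * x1 + w2 * x2 + b)) for (x1, x2), _ in data]
print(preds)  # → [0, 1, 1, 1]
```

With more layers, the same chain-rule step propagates each neuron's delta backward through the weights, which is all "backpropagation" adds to this picture.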

* McsEngl.backpropagation-algorithm,


"Unsupervised learning is a type of algorithm that learns patterns from untagged data. The goal is that through mimicry, which is an important mode of learning in people, the machine is forced to build a concise representation of its world and then generate imaginative content from it.
In contrast to supervised learning where data is tagged by an expert, e.g. tagged as a "ball" or "fish", unsupervised methods exhibit self-organization that captures patterns as probability densities[1] or a combination of neural feature preferences encoded in the machine's weights and activations. The other levels in the supervision spectrum are reinforcement learning where the machine is given only a numerical performance score as guidance, and semi-supervised learning where a small portion of the data is tagged."
[{2023-04-04 retrieved}]
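Clustering, a typical unsupervised method, can be sketched with a minimal 1-D k-means loop; the data and the naive initialization are invented for illustration.

```python
def kmeans_1d(points, k, iters=20):
    """Minimal k-means on 1-D data: assign to nearest centroid, then re-average."""
    centroids = points[:k]  # naive initialization: first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: abs(p - centroids[c]))
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# No labels anywhere: the algorithm discovers the two groups on its own.
data = [1.0, 1.1, 0.9, 8.0, 8.2, 7.8]
print(kmeans_1d(data, 2))  # → [1.0, 8.0]
```

The centroids are the "concise representation" the quote describes: a self-organized summary of the data's structure, learned without any tags.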

* McsEngl.techMl.unsupervised-learning,
* McsEngl.unsupervised-learning,
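
As a concrete instance of learning patterns from untagged data, here is a minimal k-means clustering sketch: the algorithm is handed two point clouds with no labels and self-organizes them into clusters. The data and parameters are invented for illustration.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Cluster untagged points into k groups by alternating two steps:
    assign each point to its nearest center, then move each center to
    the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assignment step: nearest center for every point.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update step: recompute each center from its members.
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# Two well-separated blobs; no labels are ever given to the algorithm.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(5, 0.3, (20, 2))])
labels, centers = kmeans(X, k=2)
```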


"Weak supervision, also called semi-supervised learning, is a branch of machine learning that combines a small amount of labeled data with a large amount of unlabeled data during training. Semi-supervised learning falls between unsupervised learning (with no labeled training data) and supervised learning (with only labeled training data). Semi-supervised learning aims to alleviate the issue of having limited amounts of labeled data available for training.
Semi-supervised learning is motivated by problem settings where unlabeled data is abundant and obtaining labeled data is expensive. Other branches of machine learning that share the same motivation but follow different assumptions and methodologies are active learning and weak supervision. Unlabeled data, when used in conjunction with a small amount of labeled data, can produce considerable improvement in learning accuracy. The acquisition of labeled data for a learning problem often requires a skilled human agent (e.g. to transcribe an audio segment) or a physical experiment (e.g. determining the 3D structure of a protein or determining whether there is oil at a particular location). The cost associated with the labeling process thus may render large, fully labeled training sets infeasible, whereas acquisition of unlabeled data is relatively inexpensive. In such situations, semi-supervised learning can be of great practical value. Semi-supervised learning is also of theoretical interest in machine learning and as a model for human learning."
[{2023-04-04 retrieved}]

* McsEngl.semisupervised-learning,
* McsEngl.techMl.semisupervised-learning,
* McsEngl.weak-semisupervised-learning,
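
One simple semi-supervised strategy is self-training: fit a model on the few labeled examples, pseudo-label the unlabeled examples it is confident about, then refit on both. The sketch below uses a nearest-centroid classifier and an arbitrary distance-margin confidence threshold; all names, data, and parameters are invented for illustration.

```python
import numpy as np

def centroid_fit(X, y):
    # Nearest-centroid "model": one mean vector per class.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def centroid_predict(model, X):
    classes = sorted(model)
    d = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes],
                 axis=1)
    return np.array(classes)[d.argmin(axis=1)], d

# Two clusters; only one labeled example per class, the rest unlabeled.
rng = np.random.default_rng(2)
X_lab = np.array([[0.0, 0.0], [4.0, 4.0]])
y_lab = np.array([0, 1])
X_unl = np.vstack([rng.normal(0, 0.4, (30, 2)),
                   rng.normal(4, 0.4, (30, 2))])

# Self-training loop: pseudo-label with the current model, keep only
# confident predictions (large distance margin), then refit.
model = centroid_fit(X_lab, y_lab)
for _ in range(3):
    pseudo, d = centroid_predict(model, X_unl)
    margin = np.abs(d[:, 0] - d[:, 1])
    keep = margin > 1.0                     # confidence threshold (assumed)
    X_all = np.vstack([X_lab, X_unl[keep]])
    y_all = np.concatenate([y_lab, pseudo[keep]])
    model = centroid_fit(X_all, y_all)

preds, _ = centroid_predict(model, X_unl)
```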


">reinforcement learning:
Reinforcement Learning (RL) is a type of machine learning paradigm where an agent learns to make decisions by interacting with an environment. The agent receives feedback in the form of rewards or penalties, and its objective is to learn a policy or strategy that maximizes the cumulative reward over time. This learning process is inspired by the way animals learn through trial and error.

Here are the fundamental components and concepts of reinforcement learning:
1. **Agent:**
- The learner or decision-maker that interacts with the environment. The agent takes actions based on its policy to influence the state of the environment.
2. **Environment:**
- The external system with which the agent interacts. The environment responds to the actions of the agent, presenting new states and providing feedback through rewards or penalties.
3. **State:**
- A representation of the current situation or configuration of the environment. The state is essential for the agent to make decisions, as its actions influence the transition from one state to another.
4. **Action:**
- The set of possible moves or decisions that the agent can make in a given state. The agent's policy defines how it selects actions in different states.
5. **Reward:**
- A numerical signal provided by the environment as feedback for the agent's action in a particular state. The goal of the agent is to maximize the cumulative reward over time.
6. **Policy:**
- The strategy or mapping from states to actions that the agent follows. The policy can be deterministic or stochastic, and the agent's objective is to learn an optimal policy that maximizes the expected cumulative reward.
7. **Value Function:**
- A function that estimates the expected cumulative reward the agent can obtain from a given state or state-action pair. Value functions help the agent evaluate the desirability of different states and actions.
8. **Exploration and Exploitation:**
- Balancing the exploration of new actions to discover their effects and the exploitation of known actions that are believed to yield high rewards is a crucial challenge in RL.

Reinforcement learning algorithms can be broadly categorized into model-free and model-based approaches:
- **Model-Free RL:**
- These algorithms directly learn the optimal policy or value function without building an explicit model of the environment.

- **Model-Based RL:**
- These algorithms first learn a model of the environment (transition dynamics and reward function) and then use this model to derive an optimal policy or value function.

Reinforcement learning has found success in various applications, including game playing, robotics, autonomous systems, finance, and healthcare. Popular RL algorithms include Q-learning, Deep Q Networks (DQN), Policy Gradient methods, and more recently, algorithms like Proximal Policy Optimization (PPO) and Trust Region Policy Optimization (TRPO)."
[{2023-11-23 retrieved}]

"Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning.
Reinforcement learning differs from supervised learning in not needing labelled input/output pairs to be presented, and in not needing sub-optimal actions to be explicitly corrected. Instead the focus is on finding a balance between exploration (of uncharted territory) and exploitation (of current knowledge).[1]
The environment is typically stated in the form of a Markov decision process (MDP), because many reinforcement learning algorithms for this context use dynamic programming techniques.[2] The main difference between the classical dynamic programming methods and reinforcement learning algorithms is that the latter do not assume knowledge of an exact mathematical model of the MDP and they target large MDPs where exact methods become infeasible."
[{2023-04-04 retrieved}]
"Reinforcement learning (RL) is learning by interacting with an environment. An RL agent learns from the consequences of its actions, rather than from being explicitly taught and it selects its actions on basis of its past experiences (exploitation) and also by new choices (exploration), which is essentially trial and error learning. The reinforcement signal that the RL-agent receives is a numerical reward, which encodes the success of an action's outcome, and the agent seeks to learn to select actions that maximize the accumulated reward over time. (The use of the term reward is used here in a neutral fashion and does not imply any pleasure, hedonic impact or other psychological interpretations.)"
[{2023-04-05 retrieved}]

* McsEngl.reinforcement-learning,
* McsEngl.techMl.reinforcement-learning,
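
The agent/environment/state/action/reward loop described above can be made concrete with tabular Q-learning, one of the model-free algorithms the quotes mention. The environment below is an invented five-state chain with a reward only at the last state; the hyperparameters are arbitrary.

```python
import numpy as np

# Tabular Q-learning on an invented 1-D chain: states 0..4, actions
# 0 = left / 1 = right, reward 1.0 for reaching state 4 (terminal).
n_states, n_actions = 5, 2
alpha, gamma, eps = 0.5, 0.9, 0.3   # learning rate, discount, exploration
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

def step(s, a):
    # Environment: responds to the agent's action with a new state,
    # a numerical reward, and a done flag.
    s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    reward = 1.0 if s2 == n_states - 1 else 0.0
    return s2, reward, s2 == n_states - 1

for episode in range(2000):
    s = int(rng.integers(n_states - 1))   # random non-terminal start state
    for _ in range(100):
        # Exploration vs exploitation: epsilon-greedy action selection.
        if rng.random() < eps:
            a = int(rng.integers(n_actions))
        else:
            a = int(Q[s].argmax())
        s2, r, done = step(s, a)
        # Q-update: nudge Q[s, a] toward r + gamma * max_a' Q[s2, a'].
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2
        if done:
            break

policy = Q.argmax(axis=1)   # greedy policy learned from the rewards
```

After training, the greedy policy moves right from every non-terminal state, since the cumulative discounted reward of heading toward state 4 dominates.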


"In computer science, in particular in knowledge representation and reasoning and metalogic, the area of automated reasoning is dedicated to understanding different aspects of reasoning. The study of automated reasoning helps produce computer programs that allow computers to reason completely, or nearly completely, automatically. Although automated reasoning is considered a sub-field of artificial intelligence, it also has connections with theoretical computer science and philosophy.
The most developed subareas of automated reasoning are automated theorem proving (and the less automated but more pragmatic subfield of interactive theorem proving) and automated proof checking (viewed as guaranteed correct reasoning under fixed assumptions).[citation needed] Extensive work has also been done in reasoning by analogy using induction and abduction.[1]
Other important topics include reasoning under uncertainty and non-monotonic reasoning. An important part of the uncertainty field is that of argumentation, where further constraints of minimality and consistency are applied on top of the more standard automated deduction. John Pollock's OSCAR system[2] is an example of an automated argumentation system that is more specific than being just an automated theorem prover.
Tools and techniques of automated reasoning include the classical logics and calculi, fuzzy logic, Bayesian inference, reasoning with maximal entropy and many less formal ad hoc techniques."
[{2023-04-03 retrieved}]

* McsEngl.automated-reasoning!⇒techMr,
* McsEngl.machine-reasoning!⇒techMr,
* McsEngl.techInfo.016-machine-reasoning!⇒techMr,
* McsEngl.techMr!⇒Machine-Reasoning,
====== langoGreek:
* McsElln.μηχανικός-συλλογισμός,
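
As a toy instance of the automated deduction described above, here is a minimal forward-chaining sketch over Horn-style if-then rules; the rule base and facts are invented for illustration.

```python
def forward_chain(facts, rules):
    """Repeatedly apply rules (premises, conclusion) until no new
    facts can be derived; returns the closed set of facts."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)
                changed = True
    return derived

# Invented rule base: each rule is (set of premises, conclusion).
rules = [
    ({"rain"}, "wet_ground"),
    ({"wet_ground", "freezing"}, "icy_ground"),
    ({"icy_ground"}, "slippery"),
]
facts = {"rain", "freezing"}
result = forward_chain(facts, rules)
```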

evoluting of machine-reasoning

"evolution of machine-reasoning:
The evolution of machine reasoning, also known as automated reasoning or symbolic AI, has been a long and rich history. Machine reasoning involves the use of logic and symbolic representations to perform tasks that require deductive, inductive, or abductive reasoning. Here's an overview of its evolution:

1. **Early Symbolic AI (1950s-1960s)**: The field of artificial intelligence initially focused on symbolic reasoning. Researchers like Allen Newell and Herbert A. Simon developed the Logic Theorist, a program capable of proving mathematical theorems using symbolic logic.

2. **Expert Systems (1970s-1980s)**: Expert systems emerged as a prominent application of symbolic reasoning. These systems encoded human expertise in the form of if-then rules. Dendral and MYCIN are famous examples, used for tasks like chemical analysis and medical diagnosis.

3. **Prolog (1972)**: Prolog (Programming in Logic) became a popular programming language for symbolic reasoning. It's based on formal logic and is well-suited for knowledge representation and rule-based reasoning.

4. **Frame-Based Knowledge Representation (1970s-1980s)**: Knowledge representation evolved with the development of frame-based systems, which organized knowledge into structured entities called frames. The Cyc project, initiated by Douglas Lenat, aimed to create a comprehensive common-sense knowledge base.

5. **Limitations and the AI Winter (1980s-1990s)**: Symbolic AI systems faced limitations in handling uncertainty, common-sense reasoning, and scaling to large knowledge bases. This led to a decline in AI research during the "AI winter."

6. **Rise of Statistical and Connectionist Approaches (1990s-2000s)**: Machine learning, particularly statistical and connectionist approaches like neural networks, gained prominence due to their success in handling data-driven tasks. This shift away from symbolic AI led to the development of statistical reasoning systems and data-centric AI.

7. **Hybrid Systems (2000s-2010s)**: Researchers began to explore hybrid systems that combined symbolic reasoning with statistical and probabilistic approaches. This approach aimed to leverage the strengths of both paradigms for more comprehensive AI solutions.

8. **Ontologies and Semantic Web (2000s-2010s)**: The development of ontologies, such as RDF and OWL, facilitated the organization of knowledge on the web. The Semantic Web aimed to make data more accessible and interoperable using logical reasoning.

9. **Current Trends (2010s-Present)**: Machine reasoning has seen a resurgence, especially in areas like natural language understanding, reasoning with knowledge graphs, and explainable AI. Systems like IBM Watson and Google's BERT employ reasoning techniques to improve their performance.

10. **Quantum Computing and Machine Reasoning (Ongoing)**: The potential of quantum computing to perform complex reasoning tasks at unprecedented speeds is an ongoing area of research. Quantum computers hold promise for solving computationally intensive reasoning problems.

Machine reasoning continues to evolve as researchers seek to develop AI systems that can perform complex, human-like reasoning across various domains. Advances in knowledge representation, symbolic reasoning, and the integration of symbolic and statistical approaches are at the forefront of ongoing developments in this field."
[{2023-10-24 retrieved}]

* McsEngl.evoluting-of-machine-reasoning,
* McsEngl.machine-reasoning'evoluting,

"Computer vision tasks include methods for acquiring, processing, analyzing and understanding digital images, and extraction of high-dimensional data from the real world in order to produce numerical or symbolic information, e.g. in the forms of decisions.[1][2][3][4] Understanding in this context means the transformation of visual images (the input of the retina) into descriptions of the world that make sense to thought processes and can elicit appropriate action. This image understanding can be seen as the disentangling of symbolic information from image data using models constructed with the aid of geometry, physics, statistics, and learning theory.
The scientific discipline of computer vision is concerned with the theory behind artificial systems that extract information from images. The image data can take many forms, such as video sequences, views from multiple cameras, multi-dimensional data from a 3D scanner, or medical scanning devices. The technological discipline of computer vision seeks to apply its theories and models to the construction of computer vision systems.
Sub-domains of computer vision include scene reconstruction, object detection, event detection, video tracking, object recognition, 3D pose estimation, learning, indexing, motion estimation, visual servoing, 3D scene modeling, image generation, and image restoration.
Adopting computer vision technology might be painstaking for organizations as there is no single point solution for it. There are very few companies that provide a unified and distributed platform or an Operating System where computer vision applications can be easily deployed and managed."
[{2023-03-31 retrieved}]

* McsEngl.techInfo.010-computer-vision!⇒techCmrv,


× generic: machine-learning-tech,

"We call machines programmed to learn from examples “neural networks.” "
[{2023-07-30 retrieved}]
"Artificial neural networks (ANNs), usually simply called neural networks (NNs) or neural nets,[1] are computing systems inspired by the biological neural networks that constitute animal brains.[2]
An ANN is based on a collection of connected units or nodes called artificial neurons, which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit a signal to other neurons. An artificial neuron receives signals then processes them and can signal neurons connected to it. The "signal" at a connection is a real number, and the output of each neuron is computed by some non-linear function of the sum of its inputs. The connections are called edges. Neurons and edges typically have a weight that adjusts as learning proceeds. The weight increases or decreases the strength of the signal at a connection. Neurons may have a threshold such that a signal is sent only if the aggregate signal crosses that threshold.
Typically, neurons are aggregated into layers. Different layers may perform different transformations on their inputs. Signals travel from the first layer (the input layer), to the last layer (the output layer), possibly after traversing the layers multiple times."
[{2023-03-29 retrieved}]

* McsEngl.ANN-artificial-neural-network!⇒techNn,
* McsEngl.artificial-neural-network!⇒techNn,
* McsEngl.neural-network!⇒techNn,
* McsEngl.techInfo.009-Artificial-Neural-Network!⇒techNn,
* McsEngl.techAi.neural-network!⇒techNn,
* McsEngl.techNn,
====== langoGreek:
* McsElln.τεχνητό-νευρωνικό-δίκτυο!το!=techNn,

artificial-neuron of techNn

"An artificial neuron is a mathematical function conceived as a model of biological neurons, a neural network. Artificial neurons are elementary units in an artificial neural network.[1] The artificial neuron receives one or more inputs (representing excitatory postsynaptic potentials and inhibitory postsynaptic potentials at neural dendrites) and sums them to produce an output (or activation, representing a neuron's action potential which is transmitted along its axon). Usually each input is separately weighted, and the sum is passed through a non-linear function known as an activation function or transfer function[clarification needed]. The transfer functions usually have a sigmoid shape, but they may also take the form of other non-linear functions, piecewise linear functions, or step functions. They are also often monotonically increasing, continuous, differentiable and bounded. Non-monotonic, unbounded and oscillating activation functions with multiple zeros that outperform sigmoidal and ReLU like activation functions on many tasks have also been recently explored. The thresholding function has inspired building logic gates referred to as threshold logic; applicable to building logic circuits resembling brain processing. For example, new devices such as memristors have been extensively used to develop such logic in recent times.[2]
The artificial neuron transfer function should not be confused with a linear system's transfer function.
Artificial neurons can also refer to artificial cells in neuromorphic engineering (see below) that are similar to natural physical neurons."
[{2023-04-05 retrieved}]
· syntheticNo-neuron is a-naturalNo-neuronBio that DOES NOT look like a-natural-neuron.

* McsEngl.artificial-neuron,
* McsEngl.neuronBio.syntheticNo,
* McsEngl.techNn'neuron,
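
The description above (weighted inputs, summation, a non-linear transfer function, thresholding) reduces to a few lines of code. This is a generic sketch, not any particular library's neuron; the AND-gate weights are a classic threshold-logic example.

```python
import math

def neuron(inputs, weights, bias, activation):
    """Artificial neuron: weighted sum of the inputs plus a bias,
    passed through a non-linear activation (transfer) function."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return activation(z)

# A smooth sigmoid-shaped transfer function ...
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# ... and a step (threshold) function, as in threshold logic.
def step(z):
    return 1.0 if z >= 0 else 0.0

# An AND gate built from a single thresholded neuron: it fires only
# when the weighted sum 1*a + 1*b - 1.5 is non-negative, i.e. a = b = 1.
def and_gate(a, b):
    return neuron([a, b], weights=[1.0, 1.0], bias=-1.5, activation=step)
```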

application of techNn

· "Neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games and medical diagnosis."
[{2023-07-31 retrieved}]

* McsEngl.techNn'application,

library of techNn

"library of neural-network:
Neural network libraries are software tools that provide developers with a set of pre-built functions and classes for creating, training, and deploying neural networks. They can make the development of machine learning applications much easier and faster, by providing a high-level abstraction over the complex mathematical and computational details of neural networks.

There are many different neural network libraries available, both open source and commercial. Some of the most popular libraries include:
* **TensorFlow** (by Google): A powerful and flexible library that can be used to build a wide variety of neural network architectures. It is supported on a wide range of platforms, including CPUs, GPUs, and TPUs.
* **PyTorch** (by Meta): A dynamic and lightweight library that is well-suited for research and experimentation. It is also relatively easy to learn and use.
* **Keras** (by Google): A high-level library that provides a simple and concise API for building neural networks. It is built on top of TensorFlow and PyTorch.
* **MXNet** (by Amazon): A scalable and efficient library that is well-suited for distributed training and large-scale deployments.
* **Chainer** (by Preferred Networks): A flexible and extensible library that is popular in Japan.

Other notable neural network libraries include:
* **OpenNN**
* **FANN**
* **CNTK**
* **nnabla**
* **Sony Neural Network Libraries**

The best neural network library for you will depend on your specific needs and requirements. If you are new to neural networks, I recommend starting with a high-level library like Keras. Once you have a good understanding of the basics, you can then move on to a more powerful and flexible library like TensorFlow or PyTorch."
[{2023-10-25 retrieved}]

* McsEngl.techNn'framework,
* McsEngl.techNn'library,

evaluation of techNn

"evaluation of neural-networks:
Evaluating neural networks is important to ensure that they are performing as expected and to identify any areas where they can be improved. There are a number of different metrics that can be used to evaluate neural networks, depending on the specific task that the network is designed to perform.

Some common metrics for evaluating neural networks include:
* **Accuracy:** This is the percentage of predictions that the network makes correctly.
* **Precision:** This is the percentage of positive predictions that are actually correct.
* **Recall:** This is the percentage of actual positive cases that the network correctly predicts.
* **F1 score:** This is a harmonic mean of precision and recall.
* **Loss function:** This is a measure of how well the network is able to learn the training data.

In addition to these quantitative metrics, it is also important to evaluate neural networks qualitatively. This means looking at the specific predictions that the network makes and understanding how it is making those decisions. This can be done by visualizing the network's outputs and by analyzing its predictions on specific examples.

It is also important to evaluate neural networks on a held-out test set. This is a set of data that was not used to train the network. The performance of the network on the test set gives us a more realistic estimate of how well it will generalize to new data.

Here are some of the challenges of evaluating neural networks:
* **Neural networks can be complex and difficult to understand.** This can make it difficult to identify the reasons why a network is making certain predictions.
* **Neural networks can be overfit to the training data.** This means that the network may perform well on the training data, but it may not generalize well to new data.
* **Neural networks can be computationally expensive to train and evaluate.**

Despite these challenges, evaluating neural networks is essential for ensuring that they are performing as expected and for identifying any areas where they can be improved.

Here are some tips for evaluating neural networks:
* **Use a variety of metrics.** No single metric is perfect, so it is important to use a variety of metrics to evaluate your neural network. This will give you a more complete picture of how well the network is performing.
* **Use a held-out test set.** This is the best way to evaluate the network's ability to generalize to new data.
* **Visualize the network's outputs.** This can help you to understand how the network is making decisions and to identify any potential problems.
* **Analyze the network's predictions on specific examples.** This can help you to identify any biases or limitations in the network.

By following these tips, you can develop a comprehensive evaluation strategy for your neural networks."
[{2023-10-25 retrieved}]

* McsEngl.techNn'evaluation,
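
The four quantitative metrics listed above can be computed directly from confusion-matrix counts. A minimal sketch for binary labels (the example labels are invented):

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall and F1 for binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0   # correct positives / predicted positives
    recall = tp / (tp + fn) if tp + fn else 0.0      # correct positives / actual positives
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)            # harmonic mean
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

m = classification_metrics(y_true=[1, 1, 1, 0, 0, 0],
                           y_pred=[1, 1, 0, 1, 0, 0])
```

These would normally be computed on a held-out test set, as the text notes, to estimate generalization rather than training-set fit.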

info-resource of techNn


* McsEngl.techNn'Infrsc,

evoluting of techNn

"evolution of neural-networks:
The evolution of neural networks can be traced back to the early days of artificial intelligence research. In 1943, Warren McCulloch and Walter Pitts developed a mathematical model of a neuron, which is the basic building block of a neural network. This model was inspired by the way that neurons work in the human brain.

In 1958, Frank Rosenblatt created the first artificial neural network called the perceptron. The perceptron was a simple model that could learn to perform simple tasks, such as classifying data into two categories. However, the perceptron had some limitations, and it was not able to learn more complex tasks.

In the 1960s and 1970s, there was a decline in interest in neural networks due to the difficulty of training them and the limitations of the hardware available at the time. However, in the 1980s, there was a resurgence of interest in neural networks, thanks to the development of new training algorithms and more powerful computers.

In the 1990s, neural networks began to be used to solve increasingly complex problems, such as image recognition and natural language processing. This was due to the development of new types of neural networks, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs).

In the 2010s, neural networks experienced a revolution, thanks to the development of deep learning. Deep learning is a type of machine learning that uses artificial neural networks with many layers. Deep learning networks have been able to achieve state-of-the-art results on a wide range of tasks, including image recognition, speech recognition, and natural language processing.

Today, neural networks are used in a wide range of applications, including:
* Image recognition: Neural networks are used to power applications such as facial recognition, object detection, and image classification.
* Speech recognition: Neural networks are used to power applications such as voice assistants and transcription software.
* Natural language processing: Neural networks are used to power applications such as machine translation, text summarization, and question answering.
* Self-driving cars: Neural networks are used to power the perception and control systems of self-driving cars.
* Recommendation systems: Neural networks are used to power recommendation systems on websites like Amazon and Netflix.

The evolution of neural networks has been remarkable, and they have become one of the most important tools in artificial intelligence today.

Here are some of the key trends in the evolution of neural networks:
* **The rise of deep learning:** Deep learning has enabled neural networks to achieve state-of-the-art results on a wide range of tasks.
* **The development of new neural network architectures:** Researchers are constantly developing new types of neural networks to solve specific problems.
* **The use of neural networks in real-world applications:** Neural networks are now being used in a wide range of real-world applications, such as self-driving cars and medical diagnosis.

The future of neural networks is very bright. As neural network architectures continue to improve and more data becomes available, neural networks will be able to solve even more complex problems."
[{2023-10-25 retrieved}]

"evolution of neural-networks:
The evolution of neural networks has been a fascinating journey spanning several decades. Neural networks, inspired by the human brain, have seen numerous advancements and paradigm shifts. Here's a brief overview of their evolution:

1. **Perceptrons (1957)**: The history of neural networks can be traced back to the concept of perceptrons, developed by Frank Rosenblatt. These were simple binary classifiers and laid the foundation for neural networks.

2. **Backpropagation (1986)**: The introduction of the backpropagation algorithm by Rumelhart, Hinton, and Williams marked a significant breakthrough. It enabled training of multi-layer feedforward neural networks, also known as multi-layer perceptrons (MLPs). Backpropagation allowed networks to learn complex relationships in data.

3. **Vanishing Gradient Problem (1991)**: As neural networks became deeper, it was discovered that training deep networks was challenging due to the vanishing gradient problem. This led to a decline in the popularity of deep neural networks in the 1990s.

4. **Reinvention of Deep Learning (2006-2012)**: Researchers like Geoffrey Hinton, Yann LeCun, and Yoshua Bengio revived deep learning with innovations like deep belief networks and convolutional neural networks (CNNs). These architectures proved to be highly effective for image and speech recognition tasks.

5. **ImageNet and the Rise of ConvNets (2012)**: The ImageNet Large Scale Visual Recognition Challenge in 2012, won by a CNN-based model, marked a turning point. CNNs became the dominant architecture for image-related tasks.

6. **Recurrent Neural Networks (RNNs)**: RNNs, which incorporate sequential data and have memory, have played a crucial role in natural language processing and time series analysis. Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) cells were introduced to address vanishing gradient issues in RNNs.

7. **Deep Reinforcement Learning (2015)**: The combination of deep learning with reinforcement learning led to significant progress in AI, with algorithms like AlphaGo beating world champions in board games.

8. **Transfer Learning (2018-2019)**: Transfer learning, particularly using pre-trained models like BERT for natural language understanding, became a prominent strategy for various NLP tasks. It significantly reduced the need for massive labeled datasets.

9. **Neuromorphic Computing (Ongoing)**: Inspired by the brain's structure, neuromorphic computing involves building hardware that mimics neural networks. These systems aim to be highly efficient in terms of power consumption and are suitable for edge devices.

10. **Ethical and Regulatory Considerations (Ongoing)**: With the increasing power and complexity of neural networks, ethical and regulatory concerns have emerged, leading to discussions around AI ethics, bias, and responsible AI development.

11. **AutoML and Neural Architecture Search (Ongoing)**: Automated Machine Learning (AutoML) techniques and Neural Architecture Search (NAS) are becoming popular. These tools can automatically design neural network architectures and optimize hyperparameters.

12. **Explainable AI (Ongoing)**: As neural networks become more complex, there is a growing focus on making AI models explainable and interpretable, to build trust and meet regulatory requirements.

The evolution of neural networks is ongoing, with constant innovations in architecture, training techniques, and applications. It has a profound impact on a wide range of fields, from healthcare to finance, and continues to shape the future of artificial intelligence."
[{2023-10-24 retrieved}]

* McsEngl.evoluting-of-techNn,
* McsEngl.techNn'evoluting,


* convolutional-neural-network,
* feedforward-neural-network,
* long-short-term-memory-neural-network,
* recurrent-neural-network,
* recursive-neural-network,
* transformer-neural-network,

* McsEngl.techNn.specific,


· "In deep learning, a convolutional neural network (CNN) is a class of artificial neural network most commonly applied to analyze visual imagery.[1] CNNs use a mathematical operation called convolution in place of general matrix multiplication in at least one of their layers.[2] They are specifically designed to process pixel data and are used in image recognition and processing. They have applications in:
* image and video recognition,
* recommender systems,[3]
* image classification,
* image segmentation,
* medical image analysis,
* natural language processing,[4]
* brain–computer interfaces,[5] and
* financial time series.[6]
CNNs are also known as Shift Invariant or Space Invariant Artificial Neural Networks (SIANN), based on the shared-weight architecture of the convolution kernels or filters that slide along input features and provide translation-equivariant responses known as feature maps.[7][8] Counter-intuitively, most convolutional neural networks are not invariant to translation, due to the downsampling operation they apply to the input.[9]"
[{2023-07-31 retrieved}]

* McsEngl.CNN-convolutional-neural-network!⇒techNnCv,
* McsEngl.convolutional-neural-network!⇒techNnCv,
* McsEngl.techDl.convolutional-neural-network!⇒techNnCv,
* McsEngl.techNn.convolutional!⇒techNnCv,
* McsEngl.techNnCv,
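
The convolution operation that replaces general matrix multiplication in a CNN layer can be sketched directly. The loop below computes a "valid" cross-correlation (what most deep-learning libraries call convolution); the edge-detecting kernel and image are invented for illustration.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation): slide the kernel over
    the image and take a weighted sum at every position, producing a
    feature map."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge-detecting kernel applied to an image whose pixel
# intensities jump from 0 to 1 at a vertical edge.
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
kernel = np.array([[-1, 1],
                   [-1, 1]], dtype=float)   # right-minus-left difference
response = conv2d(image, kernel)            # peaks where the edge sits
```

Because the same kernel weights are shared at every position, the response to a feature is the same wherever it appears in the image, which is the shared-weight property described above.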


"A feedforward neural network (FNN) is an artificial neural network wherein connections between the nodes do not form a cycle.[1] As such, it is different from its descendant: recurrent neural networks.
The feedforward neural network was the first and simplest type of artificial neural network devised.[2] In this network, the information moves in only one direction—forward—from the input nodes, through the hidden nodes (if any) and to the output nodes. There are no cycles or loops in the network."
[{2023-04-05 retrieved}]

* McsEngl.FNN-feedforward-neural-network,
* McsEngl.feedforward-neural-network,
* McsEngl.techNn.feedforward,
* McsEngl.techNnFf,
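
The definition above, information moving in only one direction with no cycles, corresponds to a single forward pass through a stack of layers. A minimal sketch with arbitrary layer sizes and random weights (ReLU is an assumed activation choice, not part of the definition):

```python
import numpy as np

def relu(z):
    # Rectified linear unit: a common non-linear activation.
    return np.maximum(0.0, z)

def feedforward(x, layers):
    """One forward pass: the signal flows from the input nodes, through
    the hidden layers, to the output nodes, with no loops."""
    for W, b in layers:
        x = relu(x @ W + b)
    return x

rng = np.random.default_rng(0)
# 3 inputs -> 5 hidden units -> 2 outputs (sizes chosen for illustration).
layers = [(rng.normal(size=(3, 5)), np.zeros(5)),
          (rng.normal(size=(5, 2)), np.zeros(2))]
y = feedforward(np.array([1.0, -2.0, 0.5]), layers)
```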


· "A deep neural network (DNN) is an artificial neural network (ANN) with multiple layers between the input and output layers.[10][13] There are different types of neural networks but they always consist of the same components: neurons, synapses, weights, biases, and functions.[137] These components as a whole function similarly to a human brain, and can be trained like any other ML algorithm.[citation needed]
For example, a DNN that is trained to recognize dog breeds will go over the given image and calculate the probability that the dog in the image is a certain breed. The user can review the results and select which probabilities the network should display (above a certain threshold, etc.) and return the proposed label. Each mathematical manipulation as such is considered a layer,[citation needed] and complex DNN have many layers, hence the name "deep" networks.
DNNs can model complex non-linear relationships. DNN architectures generate compositional models where the object is expressed as a layered composition of primitives.[138] The extra layers enable composition of features from lower layers, potentially modeling complex data with fewer units than a similarly performing shallow network.[10] For instance, it was proved that sparse multivariate polynomials are exponentially easier to approximate with DNNs than with shallow networks.[139]
Deep architectures include many variants of a few basic approaches. Each architecture has found success in specific domains. It is not always possible to compare the performance of multiple architectures, unless they have been evaluated on the same data sets.
DNNs are typically feedforward networks in which data flows from the input layer to the output layer without looping back. At first, the DNN creates a map of virtual neurons and assigns random numerical values, or "weights", to connections between them. The weights and inputs are multiplied and return an output between 0 and 1. If the network did not accurately recognize a particular pattern, an algorithm would adjust the weights.[140] That way the algorithm can make certain parameters more influential, until it determines the correct mathematical manipulation to fully process the data.
Recurrent neural networks (RNNs), in which data can flow in any direction, are used for applications such as language modeling.[141][142][143][144][145] Long short-term memory is particularly effective for this use.[75][146]
Convolutional deep neural networks (CNNs) are used in computer vision.[147] CNNs also have been applied to acoustic modeling for automatic speech recognition (ASR).[148]"
[{2023-07-31 retrieved}]

* McsEngl.DNN-deep-neural-network!⇒techNnD,
* McsEngl.deep-neural-network!⇒techNnD,
* McsEngl.techDl.deep-neural-net!⇒techNnD,
* McsEngl.techNn.deep!⇒techNnD,
* McsEngl.techNnD,
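
· the training loop quoted above (multiply weights by inputs, squash to a value between 0 and 1, adjust the weights when the output misses) can be reduced to a single sigmoid unit; the data and learning rate are illustrative:

```python
# Gradient-descent weight adjustment for one sigmoid unit, as a sketch of
# the adjust-weights-until-correct loop described in the text.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Two patterns: the first should map to 1, the second to 0.
data = [([1.0, 0.0], 1.0), ([0.0, 1.0], 0.0)]
w = [0.1, 0.1]
lr = 1.0

def loss():
    return sum((sigmoid(sum(wi * xi for wi, xi in zip(w, x))) - t) ** 2
               for x, t in data)

before = loss()
for _ in range(200):
    for x, t in data:
        y = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
        grad = 2 * (y - t) * y * (1 - y)   # d(squared error)/d(pre-activation)
        for i in range(len(w)):
            w[i] -= lr * grad * x[i]       # adjust weights toward the target
after = loss()                              # smaller than `before` after training
```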


· "In machine learning, a deep belief network (DBN) is a generative graphical model, or alternatively a class of deep neural network, composed of multiple layers of latent variables ("hidden units"), with connections between the layers but not between units within each layer.[1]
When trained on a set of examples without supervision, a DBN can learn to probabilistically reconstruct its inputs. The layers then act as feature detectors.[1] After this learning step, a DBN can be further trained with supervision to perform classification.[2]
DBNs can be viewed as a composition of simple, unsupervised networks such as restricted Boltzmann machines (RBMs)[1] or autoencoders,[3] where each sub-network's hidden layer serves as the visible layer for the next. An RBM is an undirected, generative energy-based model with a "visible" input layer and a hidden layer and connections between but not within layers. This composition leads to a fast, layer-by-layer unsupervised training procedure, where contrastive divergence is applied to each sub-network in turn, starting from the "lowest" pair of layers (the lowest visible layer is a training set).
The observation[2] that DBNs can be trained greedily, one layer at a time, led to one of the first effective deep learning algorithms.[4]: 6  Overall, there are many attractive implementations and uses of DBNs in real-life applications and scenarios (e.g., electroencephalography,[5] drug discovery[6][7][8])."
[{2023-08-01 retrieved}]

* McsEngl.DBN!=deep-belief-network!⇒techNnDb,
* McsEngl.deep-belief-network!⇒techNnDb,
* McsEngl.techDl.deep-belief-net!⇒techNnDb,
* McsEngl.techNn.deep-belief!⇒techNnDb,
* McsEngl.techNnDb,
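
· the layer-by-layer procedure quoted above rests on one contrastive-divergence step for a single RBM; a CD-1 sketch (biases omitted for brevity, sizes and learning rate illustrative, numpy assumed available):

```python
# One CD-1 update for a restricted Boltzmann machine: propagate up, sample
# the hidden units, reconstruct the visible layer, propagate up again, and
# move the weights toward the data statistics and away from the model's.
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden = 6, 3
W = rng.normal(0, 0.1, size=(n_visible, n_hidden))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, lr=0.1):
    h0 = sigmoid(v0 @ W)                          # visible -> hidden probabilities
    h_sample = (rng.random(h0.shape) < h0) * 1.0  # stochastic hidden states
    v1 = sigmoid(h_sample @ W.T)                  # reconstruction of the visible layer
    h1 = sigmoid(v1 @ W)                          # hidden probabilities again
    return W + lr * (np.outer(v0, h0) - np.outer(v1, h1))

v0 = np.array([1., 1., 0., 0., 1., 0.])
W_new = cd1_step(v0, W)
```

· in a DBN this step is repeated per layer: the trained RBM's hidden activations become the visible data for the next RBM up.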

techNn.language-model (link)


"Long short-term memory (LSTM)[1] is an artificial neural network used in the fields of artificial intelligence and deep learning. Unlike standard feedforward neural networks, LSTM has feedback connections. Such a recurrent neural network (RNN) can process not only single data points (such as images), but also entire sequences of data (such as speech or video). This characteristic makes LSTM networks ideal for processing and predicting data. For example, LSTM is applicable to tasks such as unsegmented, connected handwriting recognition,[2] speech recognition,[3][4] machine translation,[5][6] speech activity detection,[7] robot control,[8][9] video games,[10][11] and healthcare.[12]
The name of LSTM refers to the analogy that a standard RNN has both "long-term memory" and "short-term memory". The connection weights and biases in the network change once per episode of training, analogous to how physiological changes in synaptic strengths store long-term memories; the activation patterns in the network change once per time-step, analogous to how the moment-to-moment change in electric firing patterns in the brain store short-term memories.[13] The LSTM architecture aims to provide a short-term memory for RNN that can last thousands of timesteps, thus "long short-term memory".[1]
A common LSTM unit is composed of a cell, an input gate, an output gate[14] and a forget gate.[15] The cell remembers values over arbitrary time intervals and the three gates regulate the flow of information into and out of the cell. Forget gates decide what information to discard from a previous state by assigning a previous state, compared to a current input, a value between 0 and 1. A (rounded) value of 1 means to keep the information, and a value of 0 means to discard it. Input gates decide which pieces of new information to store in the current state, using the same system as forget gates. Output gates control which pieces of information in the current state to output by assigning a value from 0 to 1 to the information, considering the previous and current states. Selectively outputting relevant information from the current state allows the LSTM network to maintain useful, long-term dependencies to make predictions, both in current and future time-steps.
LSTM networks are well-suited to classifying, processing and making predictions based on time series data, since there can be lags of unknown duration between important events in a time series. LSTMs were developed to deal with the vanishing gradient problem[16] that can be encountered when training traditional RNNs. Relative insensitivity to gap length is an advantage of LSTM over RNNs, hidden Markov models and other sequence learning methods in numerous applications."
[{2023-04-02 retrieved}]

* McsEngl.LSTM-long-short-term-memory!⇒techLstm,
* McsEngl.long-short-term-memory!⇒techLstm,
* McsEngl.techLstm,
* McsEngl.techNn.long-short-term-memory!⇒techLstm,
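
· the cell-plus-three-gates structure described above can be written out for a single scalar cell so every gate is visible; the weights are illustrative constants, not trained values:

```python
# One step of an LSTM unit: forget, input, and output gates regulate a cell
# state that can carry information across many time steps.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, p):
    """p maps each gate to (w_x, w_h, bias) for f, i, o and the candidate g."""
    f = sigmoid(p['f'][0]*x + p['f'][1]*h_prev + p['f'][2])   # forget gate: keep or discard c_prev
    i = sigmoid(p['i'][0]*x + p['i'][1]*h_prev + p['i'][2])   # input gate: admit new information
    o = sigmoid(p['o'][0]*x + p['o'][1]*h_prev + p['o'][2])   # output gate: expose the cell
    g = math.tanh(p['g'][0]*x + p['g'][1]*h_prev + p['g'][2]) # candidate cell value
    c = f * c_prev + i * g          # cell state: gated mix of old and new
    h = o * math.tanh(c)            # hidden state passed to the next step
    return h, c

params = {'f': (0.0, 0.0, 4.0),    # bias 4 -> forget gate near 1: remember
          'i': (0.0, 0.0, -4.0),   # input gate near 0: admit almost nothing
          'o': (0.0, 0.0, 4.0),
          'g': (1.0, 0.0, 0.0)}
h, c = 0.0, 1.0                    # cell starts holding the value 1
for x in [0.5, -0.3, 0.1]:
    h, c = lstm_step(x, h, c, params)
# With the forget gate held open, c stays close to 1 across the sequence.
```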


· "Multilayer Perceptron: In the context of machine learning, an MLP is a type of artificial neural network consisting of multiple layers of interconnected nodes (neurons). Each node in one layer is connected to every node in the subsequent layer. MLPs are commonly used for tasks such as classification and regression."
[{2023-08-09 retrieved}]

* McsEngl.MLP!=multilayer-perceptron,
* McsEngl.multilayer-perceptron-techNn,
* McsEngl.techNn.multilayer-perceptron,
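
· the every-node-to-every-node connectivity and the classification use described above can be sketched as follows; the weights are illustrative, not trained:

```python
# A minimal fully connected MLP: each layer feeds every node of the next,
# and the index of the largest output is taken as the predicted class.
import math

def layer(x, weights):
    """Fully connected layer: one tanh neuron per weight row."""
    return [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in weights]

def predict(x, hidden_w, out_w):
    scores = layer(layer(x, hidden_w), out_w)
    return scores.index(max(scores))    # classification: argmax over outputs

hidden_w = [[1.0, -1.0], [-1.0, 1.0], [0.5, 0.5]]   # 2 inputs -> 3 hidden
out_w    = [[1.0, -1.0, 0.0], [-1.0, 1.0, 0.0]]     # 3 hidden -> 2 classes
cls = predict([2.0, 0.0], hidden_w, out_w)
```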


"A recurrent neural network (RNN) is a class of artificial neural networks where connections between nodes can create a cycle, allowing output from some nodes to affect subsequent input to the same nodes. This allows it to exhibit temporal dynamic behavior. Derived from feedforward neural networks, RNNs can use their internal state (memory) to process variable length sequences of inputs.[1][2][3] This makes them applicable to tasks such as unsegmented, connected handwriting recognition[4] or speech recognition.[5][6] Recurrent neural networks are theoretically Turing complete and can run arbitrary programs to process arbitrary sequences of inputs.[7]
The term "recurrent neural network" is used to refer to the class of networks with an infinite impulse response, whereas "convolutional neural network" refers to the class of finite impulse response. Both classes of networks exhibit temporal dynamic behavior.[8] A finite impulse recurrent network is a directed acyclic graph that can be unrolled and replaced with a strictly feedforward neural network, while an infinite impulse recurrent network is a directed cyclic graph that can not be unrolled.
Both finite impulse and infinite impulse recurrent networks can have additional stored states, and the storage can be under direct control by the neural network. The storage can also be replaced by another network or graph if that incorporates time delays or has feedback loops. Such controlled states are referred to as gated state or gated memory, and are part of long short-term memory networks (LSTMs) and gated recurrent units. This is also called Feedback Neural Network (FNN)."
[{2023-04-02 retrieved}]

* McsEngl.RNN-recurrent-neural-network!⇒techNnRt,
* McsEngl.recurrent-neural-network!⇒techNnRt,
* McsEngl.techDl.recurrent-neural-net!⇒techNnRt,
* McsEngl.techNn.recurrent!⇒techNnRt,
* McsEngl.techNnFf;techNnRt,
* McsEngl.techNnRt,
* McsEngl.techNnRt;;techNnFf,


· "A recursive neural network is a kind of deep neural network created by applying the same set of weights recursively over a structured input, to produce a structured prediction over variable-size input structures, or a scalar prediction on it, by traversing a given structure in topological order. Recursive neural networks, sometimes abbreviated as RvNNs, have been successful, for instance, in learning sequence and tree structures in natural language processing, mainly phrase and sentence continuous representations based on word embedding. RvNNs have first been introduced to learn distributed representations of structure, such as logical terms.[1] Models and general frameworks have been developed in further works since the 1990s.[2][3]"
[{2023-07-31 retrieved}]

* McsEngl.RNN-recursive-neural-network!⇒techNnRv,
* McsEngl.recursive-neural-network!⇒techNnRv,
* McsEngl.techNn.recursive!⇒techNnRv,
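
· applying the same set of weights recursively over a structured input, as described above, can be sketched over a small binary tree; the vectors, weights, and tree are illustrative:

```python
# A recursive network sketch: one shared weight pair is applied at every
# internal node, composing child representations bottom-up in topological
# order. Leaves are vectors; internal nodes are (left, right) pairs.
import math

W_LEFT, W_RIGHT = 0.6, 0.4   # shared weights, reused at every node

def compose(node):
    if isinstance(node, tuple):
        l, r = compose(node[0]), compose(node[1])
        return [math.tanh(W_LEFT * li + W_RIGHT * ri) for li, ri in zip(l, r)]
    return node              # a leaf vector is returned as-is

# ((a b) c): compose a with b first, then the result with c.
a, b, c = [1.0, 0.0], [0.0, 1.0], [0.5, 0.5]
vec = compose(((a, b), c))
# A different tree over the same leaves yields a different representation:
# the structure itself carries information.
```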

techNn.transformer (link)


· "Quantum neural networks are computational neural network models which are based on the principles of quantum mechanics. The first ideas on quantum neural computation were published independently in 1995 by Subhash Kak and Ron Chrisley,[1][2] engaging with the theory of quantum mind, which posits that quantum effects play a role in cognitive function. However, typical research in quantum neural networks involves combining classical artificial neural network models (which are widely used in machine learning for the important task of pattern recognition) with the advantages of quantum information in order to develop more efficient algorithms.[3][4][5] One important motivation for these investigations is the difficulty to train classical neural networks, especially in big data applications. The hope is that features of quantum computing such as quantum parallelism or the effects of interference and entanglement can be used as resources. Since the technological implementation of a quantum computer is still in a premature stage, such quantum neural network models are mostly theoretical proposals that await their full implementation in physical experiments.
Most Quantum neural networks are developed as feed-forward networks. Similar to their classical counterparts, this structure intakes input from one layer of qubits, and passes that input onto another layer of qubits. This layer of qubits evaluates this information and passes on the output to the next layer. Eventually the path leads to the final layer of qubits.[6][7] The layers do not have to be of the same width, meaning they don't have to have the same number of qubits as the layer before or after it. This structure is trained on which path to take similar to classical artificial neural networks. This is discussed in a lower section. Quantum neural networks refer to three different categories: Quantum computer with classical data, classical computer with quantum data, and quantum computer with quantum data.[6]"
[{2023-07-31 retrieved}]

* McsEngl.quantum-neural-network!⇒techNnQ,
* McsEngl.techNn.quantum!⇒techNnQ,
* McsEngl.techNnQ,


"Perceptron (1957) overview:
The perceptron, a groundbreaking invention in the realm of artificial intelligence, was introduced in 1957 by Frank Rosenblatt, an American psychologist. It was inspired by the Hebbian theory of synaptic plasticity, which suggests that the connections between neurons in the brain strengthen with repeated use. Rosenblatt's perceptron was a simplified model of a neuron, capable of learning and recognizing patterns.

**Key Features of the Perceptron:**
1. **Single-layer network:** The perceptron consisted of a single layer of processing units, each receiving input signals and producing a single output signal.
2. **Weighted input:** Each input signal was multiplied by a weight, representing the strength of its connection to the processing unit.
3. **Threshold activation function:** The sum of the weighted inputs was compared to a threshold value. If the sum exceeded the threshold, the processing unit activated, producing an output of 1. Otherwise, it remained inactive, producing an output of 0.
4. **Learning algorithm:** The perceptron utilized an error-correction learning algorithm, adjusting the weights of the input connections based on the difference between the desired and actual outputs.

**Significance of the Perceptron:**
1. **Pioneering artificial neuron model:** The perceptron was the first artificial neuron model to exhibit learning capabilities, demonstrating the potential of machines to mimic cognitive functions.
2. **Pattern recognition:** The perceptron's ability to learn and recognize patterns laid the foundation for modern machine learning algorithms, particularly in image recognition and pattern classification.
3. **Inspiration for deep learning:** The perceptron's architecture served as the inspiration for more complex artificial neural networks, including deep learning networks, which have revolutionized various fields, such as computer vision, natural language processing, and artificial intelligence.

Despite its limitations, such as the inability to solve non-linearly separable problems, the perceptron marked a significant milestone in the development of artificial intelligence and continues to influence the field of machine learning today."
[{2023-11-20 retrieved}]

* McsEngl.perceptron-techNn,
* McsEngl.techNn.perceptron,
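
· the weighted sum, threshold activation, and error-correction rule listed above can be sketched as Rosenblatt's learning algorithm on the linearly separable AND function; the data and learning rate are illustrative:

```python
# The classic perceptron: weighted sum, hard threshold, and an
# error-correction update applied whenever a prediction is wrong.

def predict(x, w, bias):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + bias > 0 else 0

# AND truth table: output 1 only when both inputs are 1.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, bias, lr = [0.0, 0.0], 0.0, 0.1

for _ in range(20):                        # a few passes over the data suffice
    for x, target in data:
        error = target - predict(x, w, bias)
        w = [wi + lr * error * xi for wi, xi in zip(w, x)]
        bias += lr * error                 # error-correction rule
```

· the same loop never converges on XOR, the non-linearly-separable case noted above: no single weighted threshold can separate its classes.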



page-wholepath: / worldviewSngo / dirTchInf / techAi

· this page uses 'locator-names', names that when you find them, you find the-LOCATION of the-concept they denote.
· clicking on the-green-BAR of a-page you have access to the-global--locator-names of my-site.
· use the-prefix 'techAi' for senso-concepts related to current concept 'artificial-intelligence'.
· TYPE CTRL+F "McsLag4.words-of-concept's-name", to go to the-LOCATION of the-concept.
· a-preview of the-description of a-global-name makes reading fast.

• author: Kaseluris.Nikos.1959
• email:
• edit on github:,
• comments on Disqus,
• twitter: @synagonism,

• version.last.dynamic: McsTchInf000036.last.html,
• version.draft.creation: McsTchInf000036.0-1-0.2023-07-30.last.html,

support (link)