classes ::: Artificial Intelligence, decision making, Cybernetics
children :::
branches ::: neural net

bookmarks: Instances - Definitions - Quotes - Chapters - Wordnet - Webgen


object:neural net
object:neural networks
class:Artificial Intelligence
class:decision making
class:Cybernetics

--- CONCEPTION
  2020-06-24 - I created this note after masturbating. I have been logging, and often scoring, each entry (-1, +1, +x, -x), and just scored a -5 for the fap. It is very interesting how I weigh certain things higher and lower: Savitri gets +1 per page (very high), whereas a chapter of TSOY gets +1, though it should maybe be +1 per 5 pages or something.


see also :::

questions, comments, suggestions/feedback, take-down requests, contribute, etc
contact me @ integralyogin@gmail.com or
join the integral discord server (chatrooms)
if the page you visited was empty, it may be noted and I will try to fill it out. cheers



now begins generated list of local instances, definitions, quotes, instances in chapters, wordnet info if available and instances among weblinks


OBJECT INSTANCES [0] - TOPICS - AUTHORS - BOOKS - CHAPTERS - CLASSES - SEE ALSO - SIMILAR TITLES

TOPICS
scores,_ratings,_grades,_rank
SEE ALSO


AUTH

BOOKS

IN CHAPTERS TITLE

IN CHAPTERS CLASSNAME

IN CHAPTERS TEXT

PRIMARY CLASS

Artificial_Intelligence
Cybernetics
decision_making
SIMILAR TITLES
neural net

DEFINITIONS

neural_network ::: is a series of algorithms that endeavors to recognize underlying relationships in a set of data through a process that mimics the way the human brain operates. Neural networks can adapt to changing input, so the network generates the best possible result without needing to redesign the output criteria. The concept of neural networks is swiftly gaining popularity in the area of trading-system development.

activation function ::: In artificial neural networks, the activation function of a node defines the output of that node given an input or set of inputs.
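By way of illustration, a minimal sketch of a few commonly used activation functions (the particular choices here are illustrative, not a canonical list):

```python
import math

# Each activation function maps a node's weighted input sum to its output.

def step(x, threshold=0.0):
    """Heaviside step: the original McCulloch-Pitts style activation."""
    return 1.0 if x >= threshold else 0.0

def sigmoid(x):
    """Logistic sigmoid: a smooth, differentiable squashing function."""
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    """Rectified linear unit: passes positives through, zeroes negatives."""
    return max(0.0, x)
```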

AI accelerator ::: A class of microprocessor[16] or computer system[17] designed as hardware acceleration for artificial intelligence applications, especially artificial neural networks, machine vision, and machine learning.


AI koan "humour" /A-I koh'an/ One of a series of pastiches of Zen teaching riddles created by {Danny Hillis} at the {MIT AI Lab} around various major figures of the Lab's culture. See also {ha ha only serious}, {mu}. In reading these, it is at least useful to know that {Marvin Minsky}, {Gerald Sussman}, and Drescher are {AI} researchers of note, that {Tom Knight} was one of the {Lisp machine}'s principal designers, and that {David Moon} wrote much of Lisp Machine Lisp. * * * A novice was trying to fix a broken Lisp machine by turning the power off and on. Knight, seeing what the student was doing, spoke sternly: "You cannot fix a machine by just power-cycling it with no understanding of what is going wrong." Knight turned the machine off and on. The machine worked. * * * One day a student came to Moon and said: "I understand how to make a better garbage collector. We must keep a reference count of the pointers to each cons." Moon patiently told the student the following story:   "One day a student came to Moon and said: `I understand   how to make a better garbage collector... [Pure reference-count garbage collectors have problems with circular structures that point to themselves.] * * * In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6. "What are you doing?", asked Minsky. "I am training a randomly wired neural net to play Tic-Tac-Toe", Sussman replied. "Why is the net wired randomly?", asked Minsky. "I do not want it to have any preconceptions of how to play", Sussman said. Minsky then shut his eyes. "Why do you close your eyes?", Sussman asked his teacher. "So that the room will be empty." At that moment, Sussman was enlightened. * * * A disciple of another sect once came to Drescher as he was eating his morning meal. "I would like to give you this personality test", said the outsider, "because I want you to be happy." 
Drescher took the paper that was offered him and put it into the toaster, saying: "I wish the toaster to be happy, too." (1995-02-08)

adaptive neuro fuzzy inference system (ANFIS) ::: Also adaptive network-based fuzzy inference system. A kind of artificial neural network that is based on the Takagi–Sugeno fuzzy inference system. The technique was developed in the early 1990s.[6][7] Since it integrates both neural networks and fuzzy logic principles, it has potential to capture the benefits of both in a single framework. Its inference system corresponds to a set of fuzzy IF–THEN rules that have learning capability to approximate nonlinear functions.[8] Hence, ANFIS is considered to be a universal estimator.[9] For using the ANFIS in a more efficient and optimal way, one can use the best parameters obtained by genetic algorithm.[10][11]

artificial neural network ::: Also connectionist system. Any computing system vaguely inspired by the biological neural networks that constitute animal brains.

Boltzmann machine ::: Also stochastic Hopfield network with hidden units. A type of stochastic recurrent neural network and Markov random field.[68] Boltzmann machines can be seen as the stochastic, generative counterpart of Hopfield networks.


artificial neural network "artificial intelligence" (ANN, commonly just "neural network" or "neural net") A network of many very simple processors ("units" or "neurons"), each possibly having a (small amount of) local memory. The units are connected by unidirectional communication channels ("connections"), which carry numeric (as opposed to symbolic) data. The units operate only on their local data and on the inputs they receive via the connections. A neural network is a processing device, either an {algorithm}, or actual hardware, whose design was inspired by the design and functioning of animal brains and components thereof. Most neural networks have some sort of "training" rule whereby the weights of connections are adjusted on the basis of presented patterns. In other words, neural networks "learn" from examples, just like children learn to recognise dogs from examples of dogs, and exhibit some structural capability for generalisation. Neurons are often elementary non-linear signal processors (in the limit they are simple threshold discriminators). Another feature of NNs which distinguishes them from other computing devices is a high degree of interconnection which allows a high degree of parallelism. Further, there is no idle memory containing data and programs, but rather each neuron is pre-programmed and continuously active. The term "neural net" should logically, but in common usage never does, also include biological neural networks, whose elementary structures are far more complicated than the mathematical models used for ANNs. See {Aspirin}, {Hopfield network}, {McCulloch-Pitts neuron}. {Usenet} newsgroup: {news:comp.ai.neural-nets}. (1997-10-13)


artificial intelligence "artificial intelligence" (AI) The subfield of computer science concerned with the concepts and methods of {symbolic inference} by computer and symbolic {knowledge representation} for use in making inferences. AI can be seen as an attempt to model aspects of human thought on computers. It is also sometimes defined as trying to solve by computer any problem that a human can solve faster. The term was coined by Stanford Professor {John McCarthy}, a leading AI researcher. Examples of AI problems are {computer vision} (building a system that can understand images as well as a human) and {natural language processing} (building a system that can understand and speak a human language as well as a human). These may appear to be modular, but all attempts so far (1993) to solve them have foundered on the amount of context information and "intelligence" they seem to require. The term is often used as a selling point, e.g. to describe programming that drives the behaviour of computer characters in a game. This is often no more intelligent than "Kill any humans you see; keep walking; avoid solid objects; duck if a human with a gun can see you". See also {AI-complete}, {neats vs. scruffies}, {neural network}, {genetic programming}, {fuzzy computing}, {artificial life}. {ACM SIGART (http://sigart.acm.org/)}. {U Cal Davis (http://phobos.cs.ucdavis.edu:8001)}. {CMU Artificial Intelligence Repository (http://cs.cmu.edu/Web/Groups/AI/html/repository.html)}. (2002-01-19)


Aspirin "language, tool" A {freeware} language from {MITRE Corporation} for the description of {neural networks}. A compiler, bpmake, is included. Aspirin is designed for use with the {MIGRAINES} interface. Version: 6.0, as of 1995-03-08. {(ftp://ftp.cognet.ucla.edu/alexis/)}. (1995-03-08)

backpropagation ::: A method used in artificial neural networks to calculate a gradient that is needed in the calculation of the weights to be used in the network.[38] Backpropagation is shorthand for "the backward propagation of errors", since an error is computed at the output and distributed backwards throughout the network's layers. It is commonly used to train deep neural networks,[39] a term referring to neural networks with more than one hidden layer.[40]

back-propagation (Or "backpropagation") A learning {algorithm} for modifying a {feed-forward} {neural network} which minimises a continuous "{error function}" or "{objective function}." Back-propagation is a "{gradient descent}" method of training in that it uses gradient information to modify the network weights to decrease the value of the error function on subsequent tests of the inputs. Other gradient-based methods from {numerical analysis} can be used to train networks more efficiently. Back-propagation makes use of a mathematical trick when the network is simulated on a digital computer, yielding in just two traversals of the network (once forward, and once back) both the difference between the desired and actual output, and the derivatives of this difference with respect to the connection weights.
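The gradient-descent idea described above can be sketched, at its smallest, on a single sigmoid neuron: compute the error derivative at the output, then move each weight against the gradient. This is only the backward step for one unit (a full network repeats it layer by layer, back to front); the OR-gate data, learning rate, and epoch count are illustrative choices:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Train one sigmoid neuron on OR-gate data by gradient descent on the
# squared error E = (out - target)^2 / 2.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = [0.0, 0.0]
b = 0.0
rate = 1.0

for _ in range(2000):
    for (x1, x2), target in data:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)
        # delta = dE/dnet: output error times the sigmoid's derivative
        delta = (out - target) * out * (1.0 - out)
        w[0] -= rate * delta * x1
        w[1] -= rate * delta * x2
        b -= rate * delta

predictions = [round(sigmoid(w[0] * x1 + w[1] * x2 + b))
               for (x1, x2), _ in data]
```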


backpropagation through time (BPTT) ::: A gradient-based technique for training certain types of recurrent neural networks. It can be used to train Elman networks. The algorithm was independently derived by numerous researchers[41][42][43]

batch normalization ::: A technique for improving the performance and stability of artificial neural networks. It is a technique to provide any layer in a neural network with inputs that are zero mean/unit variance.[47] Batch normalization was introduced in a 2015 paper.[48][49] It is used to normalize the input layer by adjusting and scaling the activations.[50]
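The "zero mean/unit variance" step above, for a single feature across one batch, can be sketched as follows (gamma, beta, and eps are the conventional learned scale, learned shift, and numerical-stability constant; their defaults here are illustrative):

```python
import math

def batch_norm(values, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalise one feature across a batch to zero mean / unit variance,
    then apply the learned scale (gamma) and shift (beta)."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    return [gamma * (v - mean) / math.sqrt(var + eps) + beta for v in values]
```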

neural nets {artificial neural network}

neural network {artificial neural network}

capsule neural network (CapsNet) ::: A machine learning system that is a type of artificial neural network (ANN) that can be used to better model hierarchical relationships. The approach is an attempt to more closely mimic biological neural organization.[72]


Cellular Neural Network "architecture" (CNN) The CNN Universal Machine is a low cost, low power, extremely high speed {supercomputer} on a chip. It is at least 1000 times faster than equivalent {DSP} solutions of many complex {image processing} tasks. It is a stored program supercomputer where a complex sequence of image processing {algorithms} is programmed and downloaded into the chip, just like any digital computer. Because the entire computer is integrated into a chip, no signal leaves the chip until the image processing task is completed. Although the CNN universal chip is based on analogue and logic operating principles, it has an on-chip analog-to-digital input-output interface so that at the system design and application perspective, it can be used as a digital component, just like a DSP. In particular, a development system is available for rapid design and prototyping. Moreover, a {compiler}, an {operating system}, and a {user-friendly} CNN {high-level language}, like the {C} language, have been developed which makes it easy to implement any image processing algorithm. [Professor Leon Chua, University of California at Berkeley]. (1995-04-27)

CNN "architecture" {Cellular Neural Network}.


cognitive architecture "architecture" A computer architecture involving {non-deterministic}, multiple {inference} processes, as found in {neural networks}. Cognitive architectures model the human brain and contrast with single processor computers. The term might also refer to software architectures, e.g. {fuzzy logic}. [Origin? Better definition? Reference?] (1995-11-29)


committee machine ::: A type of artificial neural network using a divide and conquer strategy in which the responses of multiple neural networks (experts) are combined into a single response.[92] The combined response of the committee machine is supposed to be superior to those of its constituent experts. Compare ensembles of classifiers.
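A toy sketch of the combining step: the experts below are hard-coded threshold rules purely for illustration (real constituent experts would be separately trained networks), and the combination rule shown is majority vote, one common choice alongside averaging:

```python
def committee_predict(experts, x):
    """Combine the class votes of several expert models by majority."""
    votes = [expert(x) for expert in experts]
    return max(set(votes), key=votes.count)

# Hypothetical experts: three classifiers with slightly different thresholds.
experts = [
    lambda x: 1 if x > 0.4 else 0,
    lambda x: 1 if x > 0.5 else 0,
    lambda x: 1 if x > 0.6 else 0,
]
```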


Connection Definition Language "language" (condela) A {procedural}, parallel language for defining {neural networks}. {(ftp://tut.cis.ohio-state.edu/pub/condela)}. (1994-11-30)

connectionism ::: An approach in the fields of cognitive science, that hopes to explain mental phenomena using artificial neural networks.[120]

convolutional neural network ::: In deep learning, a convolutional neural network (CNN, or ConvNet) is a class of deep neural networks, most commonly applied to analyzing visual imagery. CNNs use a variation of multilayer perceptrons designed to require minimal preprocessing.[122] They are also known as shift invariant or space invariant artificial neural networks (SIANN), based on their shared-weights architecture and translation invariance characteristics.[123][124]

cybernetics "robotics" /si:`b*-net'iks/ The study of control and communication in living and man-made systems. The term was first proposed by {Norbert Wiener} in the book referenced below. Originally, cybernetics drew upon electrical engineering, mathematics, biology, neurophysiology, anthropology, and psychology to study and describe actions, feedback, and response in systems of all kinds. It aims to understand the similarities and differences in internal workings of organic and machine processes and, by formulating abstract concepts common to all systems, to understand their behaviour. Modern "second-order cybernetics" places emphasis on how the process of constructing models of the systems is influenced by those very systems, hence an elegant definition - "applied epistemology". Related recent developments (often referred to as {sciences of complexity}) that are distinguished as separate disciplines are {artificial intelligence}, {neural networks}, {systems theory}, and {chaos theory}, but the boundaries between those and cybernetics proper are not precise. See also {robot}. {The Cybernetics Society (http://cybsoc.org)} of the UK. {American Society for Cybernetics (http://asc-cybernetics.org/)}. {IEEE Systems, Man and Cybernetics Society (http://isye.gatech.edu/ieee-smc/)}. {International project "Principia Cybernetica" (http://pespmc1.vub.ac.be/DEFAULT.html)}. ["Cybernetics, or control and communication in the animal and the machine", N. Wiener, New York: John Wiley & Sons, Inc., 1948] (2002-01-01)


Cyc "artificial intelligence" A large {knowledge-based system}. Cyc is a very large, multi-contextual {knowledge base} and {inference engine}, the development of which started at the {Microelectronics and Computer Technology Corporation} (MCC) in Austin, Texas during the early 1980s. Over the past eleven years the members of the Cyc team, lead by {Doug Lenat}, have added to the knowledge base a huge amount of fundamental human knowledge: {facts}, rules of thumb, and {heuristics} for reasoning about the objects and events of modern everyday life. Cyc is an attempt to do symbolic {AI} on a massive scale. It is not based on numerical methods such as statistical probabilities, nor is it based on {neural networks} or {fuzzy logic}. All of the knowledge in Cyc is represented {declaratively} in the form of logical {assertions}. Cyc presently contains approximately 400,000 significant assertions, which include simple statements of fact, rules about what conclusions to draw if certain statements of fact are satisfied, and rules about how to reason with certain types of facts and rules. The {inference engine} derives new conclusions using {deductive reasoning}. To date, Cyc has made possible ground-breaking pilot applications in the areas of {heterogeneous} database browsing and integration, {captioned image retrieval}, and {natural language processing}. In January of 1995, a new independent company named Cycorp was created to continue the Cyc project. Cycorp is still in Austin, Texas. The president of Cycorp is {Doug Lenat}. The development of Cyc has been supported by several organisations, including {Apple}, {Bellcore}, {DEC}, {DoD}, {Interval}, {Kodak}, and {Microsoft}. {(http://cyc.com/)}. {Unofficial FAQ (http://robotwisdom.com/ai/cycfaq.html)}. (1999-09-07)

Darkforest ::: A computer go program developed by Facebook, based on deep learning techniques using a convolutional neural network. Its updated version Darkfores2 combines the techniques of its predecessor with Monte Carlo tree search.[125][126] The MCTS effectively takes tree search methods commonly seen in computer chess programs and randomizes them.[127] With the update, the system is known as Darkfmcts3.[128]

decision boundary ::: In the case of backpropagation-based artificial neural networks or perceptrons, the type of decision boundary that the network can learn is determined by the number of hidden layers the network has. If it has no hidden layers, then it can only learn linear problems. If it has one hidden layer, then it can learn any continuous function on compact subsets of Rn as shown by the Universal approximation theorem, thus it can have an arbitrary decision boundary.
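The no-hidden-layer case can be demonstrated with the classic perceptron rule: AND is linearly separable and is learned, while XOR is not, so no linear boundary (and hence no zero-hidden-layer network) can fit it. The learning rate and epoch count below are illustrative:

```python
def train_perceptron(data, epochs=25, rate=0.1):
    """Single-layer perceptron: can only learn a linear decision boundary."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += rate * err * x1
            w[1] += rate * err * x2
            b += rate * err
    return lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # linearly separable
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # not separable
```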

Deep_Learning ::: is an artificial intelligence function that imitates the workings of the human brain in processing data and creating patterns for use in decision making. Deep learning is a subset of machine learning in Artificial Intelligence (AI) that has networks capable of learning unsupervised from data that is unstructured or unlabeled.  Also known as Deep Neural Learning or Deep Neural Network.

DeepMind Technologies ::: A British artificial intelligence company founded in September 2010, currently owned by Alphabet Inc. The company is based in London, with research centres in Canada,[146] France,[147] and the United States. Acquired by Google in 2014, the company has created a neural network that learns how to play video games in a fashion similar to that of humans,[148] as well as a neural Turing machine,[149] or a neural network that may be able to access an external memory like a conventional Turing machine, resulting in a computer that mimics the short-term memory of the human brain.[150][151] The company made headlines in 2016 after its AlphaGo program beat human professional Go player Lee Sedol, the world champion, in a five-game match, which was the subject of a documentary film.[152] A more general program, AlphaZero, beat the most powerful programs playing Go, chess, and shogi (Japanese chess) after a few days of play against itself using reinforcement learning.[153]

echo state network (ESN) ::: A recurrent neural network with a sparsely connected hidden layer (with typically 1% connectivity). The connectivity and weights of hidden neurons are fixed and randomly assigned. The weights of output neurons can be learned so that the network can (re)produce specific temporal patterns. The main interest of this network is that although its behaviour is non-linear, the only weights that are modified during training are for the synapses that connect the hidden neurons to output neurons. Thus, the error function is quadratic with respect to the parameter vector and can be differentiated easily to a linear system.[163][164]

ensemble averaging ::: In machine learning, particularly in the creation of artificial neural networks, ensemble averaging is the process of creating multiple models and combining them to produce a desired output, as opposed to creating just one model.

generative adversarial network (GAN) ::: A class of machine learning systems. Two neural networks contest with each other in a zero-sum game framework.

Hebbian learning "artificial intelligence" The most common way to train a {neural network}; a kind of {unsupervised learning}; named after Canadian neuropsychologist Donald O. Hebb. The {algorithm} is based on Hebb's Postulate, which states that where one cell's firing repeatedly contributes to the firing of another cell, the magnitude of this contribution will tend to increase gradually with time. This means that what may start as little more than a coincidental relationship between the firing of two nearby neurons becomes strongly causal. Despite limitations with Hebbian learning, e.g., the inability to learn certain patterns, variations such as {Signal Hebbian Learning} and {Differential Hebbian Learning} are still used. {(http://neuron-ai.tuke.sk/NCS/VOL1/P3_html/node14.html)}. (2003-11-07)
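Hebb's Postulate reduces to a one-line update: a weight grows in proportion to the product of pre- and post-synaptic activity (delta_w = rate * pre * post). A minimal sketch, with an illustrative learning rate and pattern:

```python
def hebbian_update(weights, pre, post, rate=0.1):
    """Hebb's rule: strengthen each connection by rate * pre * post."""
    return [w + rate * x * post for w, x in zip(weights, pre)]

w = [0.0, 0.0]
# Input 0 repeatedly fires together with the output unit; input 1 is silent.
for _ in range(5):
    w = hebbian_update(w, pre=[1.0, 0.0], post=1.0)
# Only the co-active connection has grown.
```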


hidden layer ::: An internal layer of neurons in an artificial neural network, not dedicated to input or output.

hidden unit ::: A neuron in a hidden layer in an artificial neural network.

Hopfield network "artificial intelligence" (Or "Hopfield model") A kind of {neural network} investigated by John Hopfield in the early 1980s. The Hopfield network has no special input or output neurons (see {McCulloch-Pitts}), but all are both input and output, and all are connected to all others in both directions (with equal weights in the two directions). Input is applied simultaneously to all neurons which then output to each other and the process continues until a stable state is reached, which represents the network output. (1997-10-11)
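The "output to each other until a stable state is reached" dynamic can be sketched with a tiny bipolar Hopfield net: store one pattern with the Hebbian outer-product rule (equal weights in both directions, no self-connections), then recover it from a corrupted probe. The stored pattern and the fixed update count are illustrative:

```python
def train_hopfield(pattern):
    """Outer-product weights for one bipolar (+1/-1) pattern; no self-links."""
    n = len(pattern)
    return [[0 if i == j else pattern[i] * pattern[j] for j in range(n)]
            for i in range(n)]

def recall(weights, state, steps=5):
    """Repeatedly update every neuron until (in practice) a stable state."""
    state = list(state)
    for _ in range(steps):
        for i in range(len(state)):
            total = sum(w * s for w, s in zip(weights[i], state))
            state[i] = 1 if total >= 0 else -1
    return state

stored = [1, -1, 1, -1, 1]
W = train_hopfield(stored)
noisy = [1, -1, -1, -1, 1]   # one bit flipped
```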


IEEE Computational Intelligence Society ::: A professional society of the Institute of Electrical and Electronics Engineers (IEEE) focussing on "the theory, design, application, and development of biologically and linguistically motivated computational paradigms emphasizing neural networks, connectionist systems, genetic algorithms, evolutionary programming, fuzzy systems, and hybrid intelligent systems in which these paradigms are contained".[194]

intelligent control ::: A class of control techniques that use various artificial intelligence computing approaches like neural networks, Bayesian probability, fuzzy logic, machine learning, reinforcement learning, evolutionary computation and genetic algorithms.[195]

long short-term memory (LSTM) ::: An artificial recurrent neural network architecture[207] used in the field of deep learning. Unlike standard feedforward neural networks, LSTM has feedback connections that make it a "general purpose computer" (that is, it can compute anything that a Turing machine can).[208] It can not only process single data points (such as images), but also entire sequences of data (such as speech or video).


machine learning The ability of a machine to improve its performance based on previous results. {Neural networks} are one kind of machine learning. [More examples? Net resources? Web page?] (1995-02-15)

McCulloch-Pitts neuron "artificial intelligence" The basic building block of {artificial neural networks}. It receives one or more inputs and produces one or more identical outputs, each of which is a simple non-linear function of the sum of the inputs to the neuron. The non-linear function is typically a threshold or step function which is usually smoothed (i.e. a {sigmoid}) to facilitate {learning}. (1997-10-11)
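The definition above is small enough to write out directly: a step function of the weighted input sum. With equal weights, the threshold alone decides whether the unit behaves as OR or AND (the weights and thresholds below are illustrative):

```python
def mcp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts style unit: step function of the weighted input sum."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

def OR(a, b):
    return mcp_neuron([a, b], [1, 1], threshold=1)

def AND(a, b):
    return mcp_neuron([a, b], [1, 1], threshold=2)
```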

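A minimal sketch of both variants in the entry, the hard threshold and its smoothed sigmoid form (the AND-gate weights and threshold are an illustrative choice):

```python
import math

def mcculloch_pitts(inputs, weights, threshold):
    """Hard-threshold unit: fires (1) iff the weighted input sum reaches the threshold."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= threshold else 0

def sigmoid_neuron(inputs, weights, bias):
    """Smoothed variant whose output varies continuously, which facilitates learning."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-s))

# A single threshold unit computing logical AND:
truth_table = [(a, b, mcculloch_pitts([a, b], [1, 1], 2))
               for a in (0, 1) for b in (0, 1)]
# → [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
```

The sigmoid version gives a graded output for the same weighted sum, which is what makes gradient-based learning possible.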
MIGRAINES "tool" A {graphical user interface} for evaluating and interacting with the {Aspirin} {neural network} simulation. Utilities exist for moving quickly from an {Aspirin} description of a network directly to an executable program for simulating and evaluating that network. MIGRAINES has been kept separate from Aspirin so that its limitations do not restrict the performance of Aspirin. However, in practice, they are used together. This combination allows for simple specification and creation of efficient neural network systems that can be graphically analysed and tested. [Aspirin/MIGRAINES Neural Network Software User's Manual, Release v6.0 MP-91W00050, Copyright 1992 by Russel Leighton and the MITRE Corporation]. (1995-03-07)

net 1. "networking" {network}. 2. "networking" {network, the}. 3. "architecture" {neural network}. 4. "networking" The {top-level domain} originally for networks, although it sees heavy use for {vanity domains} of all types. [{Jargon File}] (1999-01-26)

neural machine translation (NMT) ::: An approach to machine translation that uses a large artificial neural network to predict the likelihood of a sequence of words, typically modeling entire sentences in a single integrated model.

Neural network - a computer system modeled on the human brain and nervous system.

neural Turing machine (NTM) ::: A recurrent neural network model. NTMs combine the fuzzy pattern matching capabilities of neural networks with the algorithmic power of programmable computers. An NTM has a neural network controller coupled to external memory resources, which it interacts with through attentional mechanisms. The memory interactions are differentiable end-to-end, making it possible to optimize them using gradient descent.[229] An NTM with a long short-term memory (LSTM) network controller can infer simple algorithms such as copying, sorting, and associative recall from examples alone.[230]
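The "attentional mechanisms" used for memory access can be sketched as a content-based read: a softmax over cosine similarity between a key and each memory row, followed by a weighted sum. This is an illustrative fragment of the read path only (the memory contents, key, and sharpness `beta` are made-up values), not the full NTM architecture:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def content_read(memory, key, beta=10.0):
    """Content-based attentional read: softmax over scaled cosine similarity,
    then a weighted sum of memory rows. Every step is differentiable."""
    scores = [beta * cosine(row, key) for row in memory]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    w = [e / z for e in exps]                    # attention weights over rows
    n_cols = len(memory[0])
    return [sum(w[i] * memory[i][j] for i in range(len(memory)))
            for j in range(n_cols)]

M = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]        # 3 memory slots, width 2
r = content_read(M, [1.0, 0.05])                # key resembles the first row
```

Because the read is a smooth weighted sum rather than a discrete lookup, its gradient with respect to the key and memory exists, which is what lets the whole controller-plus-memory system be trained by gradient descent.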

neuro-fuzzy ::: Combinations of artificial neural networks and fuzzy logic.

neuron {artificial neural network}

NN {artificial neural network}

pattern recognition "artificial intelligence, data processing" A branch of {artificial intelligence} concerned with the classification or description of observations. Pattern recognition aims to classify {data} (patterns) based on either a priori knowledge or on statistical information extracted from the patterns. The patterns to be classified are usually groups of measurements or observations, defining points in an appropriate multidimensional space. A complete pattern recognition system consists of a sensor that gathers the observations to be classified or described; a {feature extraction} mechanism that computes numeric or {symbolic} information from the observations; and a classification or description scheme that does the actual job of classifying or describing observations, relying on the extracted features. The classification or description scheme is usually based on the availability of a set of patterns that have already been classified or described. This set of patterns is termed the {training set} and the resulting learning strategy is characterised as {supervised}. Learning can also be {unsupervised}, in the sense that the system is not given an a priori labelling of patterns, instead it establishes the classes itself based on the statistical regularities of the patterns. The classification or description scheme usually uses one of the following approaches: statistical (or {decision theoretic}), syntactic (or structural), or neural. Statistical pattern recognition is based on statistical characterisations of patterns, assuming that the patterns are generated by a {probabilistic} system. Structural pattern recognition is based on the structural interrelationships of features. Neural pattern recognition employs the neural computing paradigm that has emerged with {neural networks}. (1995-09-22)
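The supervised, statistical approach the entry describes can be shown with the smallest possible classifier: training computes one centroid per class from a labelled training set, and classification assigns a new observation to the nearest centroid. The data and labels below are invented for illustration:

```python
import math

def nearest_centroid(train, x):
    """Supervised statistical classifier: training computes one centroid (mean
    feature vector) per class; a new observation gets the class whose centroid
    is nearest in feature space."""
    centroids = {}
    for label, points in train.items():
        n = len(points)
        centroids[label] = [sum(p[i] for p in points) / n
                            for i in range(len(points[0]))]
    return min(centroids, key=lambda lbl: math.dist(centroids[lbl], x))

training_set = {"A": [[0.0, 0.0], [1.0, 0.0]],
                "B": [[5.0, 5.0], [6.0, 5.0]]}
label = nearest_centroid(training_set, [0.4, 0.3])   # → "A"
```

Here the feature extraction step is trivial (the measurements are used directly); in a real system the sensor and feature-extraction stages would precede this classification stage.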

proceedings "publication" (Proc.) A printed collection of papers presented at a conference or meeting, e.g. "The Proceedings of the Fifth International Conference on Microelectronics for Neural Networks and Fuzzy Systems". Along with learned journals, conference proceedings are a major repository of peer-reviewed research results. (2008-07-16)

radial basis function network ::: In the field of mathematical modeling, a radial basis function network is an artificial neural network that uses radial basis functions as activation functions. The output of the network is a linear combination of radial basis functions of the inputs and neuron parameters. Radial basis function networks have many uses, including function approximation, time series prediction, classification, and system control. They were first formulated in a 1988 paper by Broomhead and Lowe, both researchers at the Royal Signals and Radar Establishment.[267][268][269]
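The "linear combination of radial basis functions" can be written out directly. A minimal sketch using Gaussian basis functions (the centers, widths, and weights below are illustrative, not fitted):

```python
import math

def rbf_network(x, centers, widths, weights, bias=0.0):
    """y(x) = bias + sum_i w_i * exp(-||x - c_i||^2 / (2 * s_i^2))
    -- a linear combination of Gaussian radial basis functions."""
    out = bias
    for c, s, w in zip(centers, widths, weights):
        d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        out += w * math.exp(-d2 / (2.0 * s * s))
    return out

# Two Gaussian bumps on the real line:
y_mid = rbf_network([0.0], [[-1.0], [1.0]], [0.5, 0.5], [1.0, 1.0])
y_at_center = rbf_network([-1.0], [[-1.0], [1.0]], [0.5, 0.5], [1.0, 1.0])
```

In practice the centers are often chosen by clustering the data and only the output weights are fitted, which makes training a linear least-squares problem.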

recurrent neural network (RNN) ::: A class of artificial neural networks where connections between nodes form a directed graph along a temporal sequence. This allows it to exhibit temporal dynamic behavior. Unlike feedforward neural networks, RNNs can use their internal state (memory) to process sequences of inputs. This makes them applicable to tasks such as unsegmented, connected handwriting recognition[273] or speech recognition.[274][275]
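The internal state (memory) the entry describes can be seen in a single Elman-style unit: the hidden value is fed back each step, so an early input influences later outputs. The weights below are illustrative constants, not trained values:

```python
import math

def rnn_run(xs, w_in=0.5, w_rec=0.9):
    """Elman-style unit: the hidden state h is fed back at every step
    (a directed cycle in time), giving the network internal memory."""
    h = 0.0
    hs = []
    for x in xs:
        h = math.tanh(w_in * x + w_rec * h)
        hs.append(h)
    return hs

# An impulse at t=0 is still visible in the state several steps later:
states = rnn_run([1.0, 0.0, 0.0, 0.0])
```

A feedforward network given the same zero inputs at later steps would output the same value every time; the recurrent unit's output keeps decaying traces of the earlier input.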

reservoir computing ::: A framework for computation that may be viewed as an extension of neural networks.[277] Typically an input signal is fed into a fixed (random) dynamical system called a reservoir and the dynamics of the reservoir map the input to a higher dimension. Then a simple readout mechanism is trained to read the state of the reservoir and map it to the desired output. The main benefit is that training is performed only at the readout stage and the reservoir is fixed. Liquid-state machines[278] and echo state networks[279] are two major types of reservoir computing.[280]
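The key point, that only the readout is trained while the reservoir stays fixed, can be sketched end to end. Everything here is an illustrative toy (reservoir size, weight ranges, the delayed-recall task, and plain SGD for the readout are my choices; echo state networks typically fit the readout by linear regression):

```python
import math, random

# Fixed random reservoir (never trained); only the linear readout learns.
rng = random.Random(1)
N = 20
W_in = [rng.uniform(-0.5, 0.5) for _ in range(N)]
W = [[rng.uniform(-0.3, 0.3) for _ in range(N)] for _ in range(N)]

def run_reservoir(xs):
    """Map the input sequence into the reservoir's high-dimensional state space."""
    h = [0.0] * N
    states = []
    for x in xs:
        h = [math.tanh(W_in[i] * x + sum(W[i][j] * h[j] for j in range(N)))
             for i in range(N)]
        states.append(h)
    return states

xs = [rng.uniform(-1.0, 1.0) for _ in range(200)]
ys = [0.0] + xs[:-1]                 # task: recall the input delayed one step
states = run_reservoir(xs)

def mse(w):
    return sum((sum(wi * si for wi, si in zip(w, s)) - y) ** 2
               for s, y in zip(states, ys)) / len(ys)

# Train only the readout weights, here by plain stochastic gradient descent.
w_out = [0.0] * N
baseline = mse(w_out)
for _ in range(30):
    for s, y in zip(states, ys):
        err = sum(wi * si for wi, si in zip(w_out, s)) - y
        w_out = [wi - 0.01 * err * si for wi, si in zip(w_out, s)]
```

The reservoir weights `W_in` and `W` are never updated; all learning happens in `w_out`, which is what makes training cheap.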

restricted Boltzmann machine (RBM) ::: A generative stochastic artificial neural network that can learn a probability distribution over its set of inputs.
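The "generative stochastic" part can be illustrated by the conditional sampling step an RBM uses: each hidden unit fires with a probability given by a sigmoid of its weighted input. This is one half of a Gibbs sampling step, with invented weights (full RBM training, e.g. contrastive divergence, builds on repeated steps like this):

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sample_hidden(v, W, b_h, rng):
    """P(h_j = 1 | v) = sigmoid(b_h[j] + sum_i v[i] * W[i][j]);
    units are stochastic: each fires with exactly that probability."""
    probs = [sigmoid(b_h[j] + sum(v[i] * W[i][j] for i in range(len(v))))
             for j in range(len(b_h))]
    return [1 if rng.random() < p else 0 for p in probs], probs

rng = random.Random(0)
W = [[2.0, -2.0],            # 2 visible x 2 hidden weights
     [2.0, -2.0]]
h, p_h = sample_hidden([1, 1], W, [0.0, 0.0], rng)
```

The symmetric counterpart, sampling the visible units given the hidden ones, uses the same formula with the weight matrix transposed; alternating the two steps is what lets an RBM both learn and generate samples from its learned distribution.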

Single Electron Tunneling Technology "hardware" A {neural network} hardware concept based on {single electron tunneling}. {Single electron tunneling transistors} have some properties which make them attractive for neural networks, among which their small size, low power consumption and potentially high speed. Simulations have been performed on some small circuits of SET transistors that exhibit functional properties similar to those required for neural networks. {(http://computer.org/conferen/proceed/mn96/ABSTRACT.HTM)}. [Proceedings of the Fifth International Conference on Microelectronics for Neural Networks and Fuzzy Systems (MicroNeuro '96). Martijn J. Goossens, Chris J.M. Verhoeven, and Arthur H.M. van Roermund]. (1999-01-06)

spiking neural network (SNN) ::: An artificial neural network that more closely mimics a natural neural network.[290] In addition to neuronal and synaptic state, SNNs incorporate the concept of time into their operating model.
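How time enters the model can be seen in a leaky integrate-and-fire unit, the simplest spiking neuron. The leak factor, weight, and input spike train below are illustrative values:

```python
def lif_neuron(spikes_in, tau=0.8, threshold=1.0, weight=0.6):
    """Leaky integrate-and-fire unit: the membrane potential integrates weighted
    input spikes, leaks each time step, and emits a spike (then resets) when it
    crosses the threshold -- time is part of the model, unlike a rate neuron."""
    v = 0.0
    out = []
    for s in spikes_in:
        v = tau * v + weight * s      # leak, then integrate the incoming spike
        if v >= threshold:
            out.append(1)
            v = 0.0                   # reset after firing
        else:
            out.append(0)
    return out

out_spikes = lif_neuron([1, 1, 0, 0, 1, 1, 1])   # → [0, 1, 0, 0, 0, 1, 0]
```

A single input spike is not enough to fire this unit; two spikes close together in time are, so the output depends on spike timing and not just the input rate.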

TensorFlow ::: A free and open-source software library for dataflow and differentiable programming across a range of tasks. It is a symbolic math library, and is also used for machine learning applications such as neural networks.[308]

T. Kohonen "person" A researcher at the {University of Helsinki} who has been studying {neural networks} for many years with the idea of modelling as closely as possible the behaviour of biological systems. His name is commonly associated with a particular kind of neural network in which there are only two kinds of {neurons} (see {McCulloch-Pitts}), input and others. All the input neurons are connected to all others and the others are connected only to their other nearest neighbors. The training {algorithm} is a relatively simple one based on the geometric layout of the neurons, and makes use of {simulated annealing}. (1994-10-19)

Turing Machine "computability" A hypothetical machine defined in 1935-6 by {Alan Turing} and used for {computability theory} proofs. It consists of an infinitely long "tape" with symbols (chosen from some {finite set}) written at regular intervals. A pointer marks the current position and the machine is in one of a finite set of "internal states". At each step the machine reads the symbol at the current position on the tape. For each combination of current state and symbol read, a program specifies the new state and either a symbol to write to the tape or a direction to move the pointer (left or right) or to halt. In an alternative scheme, the machine writes a symbol to the tape *and* moves at each step. This can be encoded as a write state followed by a move state for the write-or-move machine. If the write-and-move machine is also given a distance to move then it can emulate an write-or-move program by using states with a distance of zero. A further variation is whether halting is an action like writing or moving or whether it is a special state. [What was Turing's original definition?] Without loss of generality, the symbol set can be limited to just "0" and "1" and the machine can be restricted to start on the leftmost 1 of the leftmost string of 1s with strings of 1s being separated by a single 0. The tape may be infinite in one direction only, with the understanding that the machine will halt if it tries to move off the other end. All computer {instruction sets}, {high level languages} and computer architectures, including {parallel processors}, can be shown to be equivalent to a Turing Machine and thus equivalent to each other in the sense that any problem that one can solve, any other can solve given sufficient time and memory. 
Turing generalised the idea of the Turing Machine to a "Universal Turing Machine" which was programmed to read instructions, as well as data, off the tape, thus giving rise to the idea of a general-purpose programmable computing device. This idea still exists in modern computer design with low level {microcode} which directs the reading and decoding of higher level {machine code} instructions. A {busy beaver} is one kind of Turing Machine program. Dr. Hava Siegelmann of {Technion} reported in Science of 28 Apr 1995 that she has found a mathematically rigorous class of machines, based on ideas from {chaos} theory and {neural networks}, that are more powerful than Turing Machines. Sir Roger Penrose of {Oxford University} has argued that the brain can compute things that a Turing Machine cannot, which would mean that it would be impossible to create {artificial intelligence}. Dr. Siegelmann's work suggests that this is true only for conventional computers and may not cover {neural networks}. See also {Turing tar-pit}, {finite state machine}. (1995-05-10)
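The write-and-move scheme described above can be seen in a toy simulator. The program encoding (a dict from (state, symbol) to (new state, symbol to write, move)) and the bit-flipping example machine are illustrative choices:

```python
def run_tm(program, tape, state='A', pos=0, max_steps=1000):
    """Minimal write-and-move Turing Machine: at each step read the symbol,
    look up (state, symbol), write a symbol, move 'L', 'R' or 'N' (stay)."""
    cells = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == 'HALT':
            break
        sym = cells.get(pos, 'B')                # 'B' is the blank symbol
        state, write, move = program[(state, sym)]
        cells[pos] = write
        pos += {'L': -1, 'R': 1, 'N': 0}[move]
    return [cells[i] for i in sorted(cells)], state

# A one-state machine that flips every bit and halts on the first blank:
flip = {('A', 0): ('A', 1, 'R'),
        ('A', 1): ('A', 0, 'R'),
        ('A', 'B'): ('HALT', 'B', 'N')}
out_tape, final_state = run_tm(flip, [1, 0, 1, 'B'])   # → [0, 1, 0, 'B'], 'HALT'
```

Here halting is modelled as a special state, one of the variations the entry mentions; encoding it instead as an action would change the program tuples but not the machine's power.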

Whole Brain Emulation - the hypothetical process of copying mental content (including long-term memory and "self") from a particular brain substrate to another computational device, such as a digital, analog, quantum-based or software-based artificial neural network.

Xy-pic "graphics, publication" A package for {typesetting} graphs and diagrams using {TeX}. It is structured as several modules, each defining a custom notation for a particular kind of graphical object or structure. Example objects are arrows, curves, and frames. These can be organised in matrix, {directed graph}, path, polygon, knot, and 2-cell structure. Xy-pic works with {LaTeX}, {AMS-LaTeX}, {AMS-TeX}, and {plain TeX}, and has been used to typeset complicated diagrams from many application areas including {category theory}, {automata} theory, {algebra}, {neural networks} and {database} theory. {(http://ens-lyon.fr/~krisrose/Xy-pic.html)}. (1997-11-20)



QUOTES [0 / 0 - 3 / 3]


KEYS (10k)


NEW FULL DB (2.4M)


*** WISDOM TROVE ***

*** NEWFULLDB 2.4M ***

1:My CPU is a neural net processor; a learning computer. ~ Arnold Schwarzenegger,
2:We now know from neural-net technology that when there are more connections between points in a system, and there is greater strength between these connections, there will be sudden leaps in intelligence, where intelligence is defined as success rate in performing a task. ~ Pierre Teilhard de Chardin,
3:DeepMind soon published their method and shared their code, explaining that it used a very simple yet powerful idea called deep reinforcement learning.2 Basic reinforcement learning is a classic machine learning technique inspired by behaviorist psychology, where getting a positive reward increases your tendency to do something again and vice versa. Just like a dog learns to do tricks when this increases the likelihood of its getting encouragement or a snack from its owner soon, DeepMind’s AI learned to move the paddle to catch the ball because this increased the likelihood of its getting more points soon. DeepMind combined this idea with deep learning: they trained a deep neural net, as in the previous chapter, to predict how many points would on average be gained by pressing each of the allowed keys on the keyboard, and then the AI selected whatever key the neural net rated as most promising given the current state of the game. ~ Max Tegmark,

IN CHAPTERS [0/0]









WORDNET



--- Overview of noun neural_net

The noun neural net has 2 senses (no senses from tagged texts)
                
1. neural network, neural net ::: (computer architecture in which processors are connected in a manner suggestive of connections between neurons; can learn by trial and error)
2. neural network, neural net ::: (any network of neurons or nuclei that function together to perform some function in the body)


--- Synonyms/Hypernyms (Ordered by Estimated Frequency) of noun neural_net

2 senses of neural net                        

Sense 1
neural network, neural net
   => computer architecture
     => specification, spec
       => description, verbal description
         => statement
           => message, content, subject matter, substance
             => communication
               => abstraction, abstract entity
                 => entity

Sense 2
neural network, neural net
   => reticulum
     => network, web
       => system, scheme
         => group, grouping
           => abstraction, abstract entity
             => entity


--- Hyponyms of noun neural_net

1 of 2 senses of neural net                      

Sense 2
neural network, neural net
   => reticular formation, RF
   => reticular activating system, RAS


--- Synonyms/Hypernyms (Ordered by Estimated Frequency) of noun neural_net

2 senses of neural net                        

Sense 1
neural network, neural net
   => computer architecture

Sense 2
neural network, neural net
   => reticulum




--- Coordinate Terms (sisters) of noun neural_net

2 senses of neural net                        

Sense 1
neural network, neural net
  -> computer architecture
   => neural network, neural net

Sense 2
neural network, neural net
  -> reticulum
   => neural network, neural net




--- Grep of noun neural_net
neural net
neural network



IN WEBGEN [10000/123]

Wikipedia - Artificial Neural Networks
Wikipedia - Artificial neural networks
Wikipedia - Artificial Neural Network
Wikipedia - Artificial neural network -- Computational model used in machine learning, based on connected, hierarchical functions
Wikipedia - Backpropagation through structure -- Technique for training recursive neural nets
Wikipedia - Backpropagation through time -- Technique for training recurrent neural networks
Wikipedia - Backpropagation -- Optimization algorithm for artificial neural networks
Wikipedia - Bidirectional recurrent neural networks
Wikipedia - Biological neural networks
Wikipedia - Biological neural network
Wikipedia - Capsule neural network
Wikipedia - Category:Artificial neural networks
Wikipedia - Category:Neural networks
Wikipedia - Confabulation (neural networks)
Wikipedia - Connectionism -- Approach in cognitive science that hopes to explain mental phenomena using artificial neural networks
Wikipedia - Convolutional neural networks
Wikipedia - Convolutional Neural Network
Wikipedia - Convolutional neural network -- Artificial neural network
Wikipedia - Convolutional Sparse Coding -- Neural network coding model
Wikipedia - Cultured neural networks
Wikipedia - Deep convolutional neural network
Wikipedia - Deep neural networks
Wikipedia - Deep neural network
Wikipedia - Differentiable neural computer -- Artificial neural network architecture
Wikipedia - Dropout (neural networks)
Wikipedia - Efficiently updatable neural network -- A neural network based evaluation function
Wikipedia - European Neural Network Society
Wikipedia - Evolutionary acquisition of neural topologies -- A method that evolves both the topology and weights of artificial neural networks
Wikipedia - Extreme learning machine -- Type of artificial neural network
Wikipedia - Feedforward neural networks
Wikipedia - Feedforward neural network
Wikipedia - Fuzzy neural network
Wikipedia - History of artificial neural networks
Wikipedia - Hybrid neural network
Wikipedia - IEEE Neural Networks Society
Wikipedia - Keras -- Neural network library
Wikipedia - Large memory storage and retrieval neural network -- Type of neural network
Wikipedia - Large width limits of neural networks
Wikipedia - Mathematics of artificial neural networks
Wikipedia - Neocognitron -- Type of artificial neural network
Wikipedia - Neural gas -- Artificial neural network
Wikipedia - Neural nets
Wikipedia - Neural net
Wikipedia - Neural network (disambiguation)
Wikipedia - Neural Network Exchange Format -- artificial neural network data exchange format
Wikipedia - Neural Network Intelligence
Wikipedia - Neural Networks (journal)
Wikipedia - Neural network software
Wikipedia - Neural networks
Wikipedia - Neural Network
Wikipedia - Neural network
Wikipedia - Neuroevolution -- Form of artificial intelligence that uses evolutionary algorithms to generate artificial neural networks
Wikipedia - OpenAI Five -- Dota 2 bots trained with deep neural networks
Wikipedia - Open Neural Network Exchange
Wikipedia - Optical neural network
Wikipedia - Physical neural network
Wikipedia - Quantum neural network -- Quantum mechanics in neural networks
Wikipedia - Radial basis function network -- Type of artificial neural network that uses radial basis functions as activation functions
Wikipedia - Rectifier (neural networks)
Wikipedia - Recurrent neural networks
Wikipedia - Recurrent neural network
Wikipedia - Recursive neural network -- Type of neural network which utilizes recursion
Wikipedia - Self-organizing map -- Type of artificial neural network
Wikipedia - Spiking neural network
Wikipedia - Time delay neural network
Wikipedia - Types of artificial neural networks
Wikipedia - Universal approximation theorem -- A feed-forward neural network with a single hidden layer can approximate continuous functions
https://www.goodreads.com/book/show/11720073-neural-networks-and-learning-machines
https://www.goodreads.com/book/show/15948275-explanation-based-neural-network-learning
https://www.goodreads.com/book/show/18239810-artificial-neural-networks-for-intelligent-manufacturing
https://www.goodreads.com/book/show/1840610.Foundations_of_Neural_Networks
https://www.goodreads.com/book/show/1983887.Cellular_Neural_Networks_and_Visual_Computing
https://www.goodreads.com/book/show/24582662-neural-networks-and-deep-learning
https://www.goodreads.com/book/show/2523049.Bayesian_Learning_for_Neural_Networks
https://www.goodreads.com/book/show/29746976-make-your-own-neural-network
https://www.goodreads.com/book/show/38813824-a-guide-to-convolutional-neural-networks-for-computer-vision
https://www.goodreads.com/book/show/391008.Neural_Networks
https://www.goodreads.com/book/show/589006.Kalman_Filtering_and_Neural_Networks
https://www.goodreads.com/book/show/660448.Neural_Networks_and_Machine_Learning
https://www.goodreads.com/book/show/8928527-pulsed-neural-networks
https://www.goodreads.com/book/show/9099829-cellular-neural-networks-and-analog-vlsi
https://www.goodreads.com/book/show/92536.Neural_Networks_for_Pattern_Recognition
https://memory-alpha.fandom.com/wiki/Locomotor_neural_net
https://memory-alpha.fandom.com/wiki/Neural_net
https://memory-alpha.fandom.com/wiki/Neural_network
https://memory-alpha.fandom.com/wiki/Sensory_neural_net
https://memory-beta.fandom.com/wiki/Besrethine_neural_network
Artificial neural network
Bidirectional recurrent neural networks
Capsule neural network
Cellular neural network
Convolutional neural network
Dilution (neural networks)
Efficiently updatable neural network
Energy-based generative neural network
European Neural Network Society
Fast Artificial Neural Network
Feedforward neural network
History of artificial neural networks
IEEE Transactions on Neural Networks and Learning Systems
Large memory storage and retrieval neural network
Modular neural network
Neural network
Neural network (disambiguation)
Neural Network Exchange Format
Neural network Gaussian process
Neural Networks (journal)
Neural network software
Open Neural Network Exchange
Optical neural network
Physical neural network
Probabilistic neural network
Quantum neural network
Rectifier (neural networks)
Recurrent neural network
Recursive neural network
Region Based Convolutional Neural Networks
Residual neural network
Siamese neural network
Spiking neural network
Time delay neural network
Trion (neural networks)
Types of artificial neural networks



convenience portal:
recent: Section Maps - index table - favorites
Savitri -- Savitri extended toc
Savitri Section Map -- 1 2 3 4 5 6 7 8 9 10 11 12
authors -- Crowley - Peterson - Borges - Wilber - Teresa - Aurobindo - Ramakrishna - Maharshi - Mother
places -- Garden - Inf. Art Gallery - Inf. Building - Inf. Library - Labyrinth - Library - School - Temple - Tower - Tower of MEM
powers -- Aspiration - Beauty - Concentration - Effort - Faith - Force - Grace - inspiration - Presence - Purity - Sincerity - surrender
difficulties -- cowardice - depres. - distract. - distress - dryness - evil - fear - forget - habits - impulse - incapacity - irritation - lost - mistakes - obscur. - problem - resist - sadness - self-deception - shame - sin - suffering
practices -- Lucid Dreaming - meditation - project - programming - Prayer - read Savitri - study
subjects -- CS - Cybernetics - Game Dev - Integral Theory - Integral Yoga - Kabbalah - Language - Philosophy - Poetry - Zen
6.01 books -- KC - ABA - Null - Savitri - SA O TAOC - SICP - The Gospel of SRK - TIC - The Library of Babel - TLD - TSOY - TTYODAS - TSZ - WOTM II
8 unsorted / add here -- Always - Everyday - Verbs


change css options:
change font "color":
change "background-color":
change "font-family":
change "padding":
change "table font size":
last updated: 2022-04-28 02:11:34
261647 site hits