What is artificial intelligence?
Artificial intelligence is the ability of a digital computer or computer-controlled robot to perform tasks typically associated with intelligent beings. The term is often applied to the project of developing systems endowed with intellectual processes characteristic of humans, such as the ability to reason, generalize, or learn from past experience. The concept of AI also covers a set of related technologies and processes, such as machine learning, virtual agents and expert systems. In simple terms, AI is a rough representation of the neurons in the brain: signals are passed from neuron to neuron until an output is obtained, whether numerical, categorical or generative. An example illustrates this. If a system takes a photo of a cat and is trained to recognize whether it shows a cat, the first layer may identify the coarse gradients that define the cat's overall shape. The next layer may identify larger parts such as the ears and mouth, and the third layer smaller details such as the whiskers. Finally, based on this information, the program outputs "yes" or "no" to say whether it is a cat. The programmer does not need to "tell" the neurons which features to look for; the network learns them on its own by training on many images, both with and without cats.
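To make the layered picture above concrete, here is a minimal, purely illustrative sketch of such a network in Python with NumPy. The layer sizes and weights are placeholders invented for the example; in a real system the weights would be learned from many labeled images rather than drawn at random.

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Placeholder weights: in a real system these are learned from thousands
# of labeled "cat" / "not cat" images, not chosen at random.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(64, 16))  # layer 1: coarse gradients / overall shape
W2 = rng.normal(size=(16, 8))   # layer 2: larger parts (ears, mouth)
W3 = rng.normal(size=(8,))      # layer 3: fine details -> single score

def is_cat(image_pixels):
    """image_pixels: flattened vector of 64 grayscale values."""
    h1 = relu(image_pixels @ W1)   # low-level features
    h2 = relu(h1 @ W2)             # mid-level features
    score = sigmoid(h2 @ W3)       # value between 0 and 1
    return "yes" if score > 0.5 else "no"

print(is_cat(rng.random(64)))      # random noise will usually give "no"
```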
Description of artificial neuron
An artificial neuron is a mathematical function conceived as a model of a biological neuron; artificial neurons are the elementary units of artificial neural networks. An artificial neuron receives one or more inputs and combines them to produce an output, or activation, representing the neuron's action potential, which is transmitted along its axon. Typically, each input is weighted separately, the weighted inputs are summed, and the sum is passed through a nonlinear function known as an activation function, or transfer function.
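As an illustration only (generic code, not tied to any particular library), a single artificial neuron can be written as a weighted sum of its inputs plus a bias, passed through a sigmoid activation:

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of inputs plus bias,
    passed through a sigmoid activation function."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # activation value between 0 and 1

# Example with three inputs and hand-picked, illustrative weights.
print(neuron([0.5, 0.2, 0.9], [0.4, -0.6, 0.3], bias=0.1))
```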
When did AI research begin?
In 1935, British researcher A. M. Turing described an abstract computing machine consisting of an unlimited memory and a scanner that moves back and forth through the memory, symbol by symbol, reading what it finds and writing further symbols. The scanner's actions are dictated by a program of instructions that is also stored in memory as symbols. The earliest successful AI program was written in 1951 by Christopher Strachey; by 1952 it could play a complete game of checkers against a person, surprising observers with its ability to anticipate moves. In 1953, Turing published a classic early paper on chess programming.
The difference between artificial intelligence and natural intelligence
Intelligence can be defined as the general mental ability to reason, solve problems, and learn. Because of its general nature, intelligence integrates cognitive functions such as perception, attention, memory, language and planning. Natural intelligence is distinguished by a conscious attitude towards the world: human thinking is always emotionally colored and cannot be separated from the body, and because humans are social beings, thinking is always influenced by society. AI, by contrast, is not tied to emotions and is not socially oriented.
How to compare human and computer intelligence?
Human thinking can be compared with artificial intelligence on several general parameters of how the brain and the machine are organized. The activity of a computer, like that of the brain, includes four stages: encoding, storage, analysis and output of data. In addition, both the human brain and AI can learn from data received from the environment, and both solve problems (or tasks) using certain algorithms.
Do computer programs have IQ?
No. IQ measures the development of a person's intelligence relative to their age. AI exceeds human abilities in some respects, for example it can hold a huge quantity of numbers in memory, but this has nothing to do with IQ.
What is the Turing test?
Alan Turing developed an empirical test that shows whether a program can capture the nuances of human behavior so well that a person cannot tell whether they are communicating with an AI or a live interlocutor. Turing proposed that an outside observer evaluate a conversation between a person and a machine that answers questions. The judge does not see who exactly is answering but knows that one of the interlocutors is an AI. The conversation is limited to a text channel (a computer keyboard and screen), so the result does not depend on the machine's ability to render words as speech. If the program manages to deceive the judge, it is considered to have passed the test.
Symbolic approach
The symbolic approach to AI covers the research methods based on high-level symbolic (human-readable) representations of tasks, logic and search. It was widely used in AI research from the 1950s to the 1980s. One popular form of the symbolic approach is the expert system, which uses a set of production rules: if-then rules that link symbols into logical relationships. The expert system processes the rules to draw conclusions and to determine what additional information it needs, that is, what questions to ask, using human-readable symbols.
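For illustration, a production-rule system can be sketched as a set of if-then rules applied repeatedly to a working set of facts until nothing new can be concluded. The rules and facts below are invented for the example and do not come from any real expert system:

```python
# Each rule is (set of conditions, conclusion). Facts are plain strings.
rules = [
    ({"has_fever", "has_cough"}, "suspect_flu"),
    ({"suspect_flu", "short_of_breath"}, "recommend_doctor_visit"),
]

def forward_chain(facts, rules):
    """Apply if-then rules until no new conclusions can be drawn."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"has_fever", "has_cough", "short_of_breath"}, rules))
# -> the result includes 'suspect_flu' and 'recommend_doctor_visit'
```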
Logical approach
The term "logical approach" refers to tackling reasoning and problem solving through explicit logical steps. Logicians in the 19th century already developed precise notations for all kinds of objects in the world and the relationships between them. By 1965 there were programs that could, in principle, solve any solvable logic problem (the approach was most popular from the late 1950s to the 1970s). Proponents of logical artificial intelligence hoped to build intelligent systems on such programs, in particular ones written in the Prolog language. However, this approach has two limitations. First, it is not easy to take informal knowledge and state it in the formal terms that logical processing requires. Second, there is a big difference between solving a problem in theory and solving it in practice: even problems with a few hundred facts can exhaust the computing resources of any computer unless it has some guidance as to which reasoning steps to try first.
Agent-based approach
An agent is something that acts (from the Latin agere, "to do"). All computer programs do something, of course, but computer agents are expected to do more: operate autonomously, perceive their environment through sensors, adapt to change, and create and pursue goals. A rational agent is one that acts so as to achieve the best expected outcome.
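A rational agent can be sketched as a loop of sensing, choosing the action with the best expected outcome, and acting. The toy environment and utility function below are invented for the example (a thermostat-like agent keeping a room near 22 degrees):

```python
import random

def sense(environment):
    """Read the current (slightly noisy) state of the environment."""
    return environment["temperature"] + random.uniform(-0.5, 0.5)

def expected_utility(action, reading):
    """Toy utility: prefer actions that bring the temperature close to 22."""
    effect = {"heat": +1.0, "cool": -1.0, "idle": 0.0}[action]
    return -abs((reading + effect) - 22.0)

def act(environment, action):
    effect = {"heat": +1.0, "cool": -1.0, "idle": 0.0}[action]
    environment["temperature"] += effect

environment = {"temperature": 18.0}
for step in range(5):
    reading = sense(environment)
    # A rational agent picks the action with the best expected outcome.
    best = max(["heat", "cool", "idle"], key=lambda a: expected_utility(a, reading))
    act(environment, best)
    print(step, best, round(environment["temperature"], 1))
```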
Hybrid approach
This approach, which became popular in the late 1980s, is believed to work most effectively because it combines symbolic and neural models. The hybrid approach increases the cognitive and computational capabilities of the machine.
Artificial Intelligence Tutorial
The history of artificial intelligence is interesting, especially considering how long the concept has been with us. Glimpses of the idea can be found in Greek, Chinese and Egyptian myths, but the term "artificial intelligence" itself was coined in 1956 at Dartmouth College in New Hampshire. If Alan Turing is the father of computing, then (as mentioned earlier) the fathers of artificial intelligence are Marvin Minsky and John McCarthy, who worked on various AI projects at MIT in what is now the MIT Computer Science and Artificial Intelligence Laboratory. Since then, milestone after milestone has been reached in the field. While every year has contributed to the development of AI, the following years are the most important in its development:
1961: Although the creation of the MIT Artificial Intelligence Laboratory was a major milestone, itself preceded by Alan Turing's work on early computers in the 1940s, Isaac Asimov's Three Laws of Robotics in 1950 and Christopher Strachey's AI-based checkers program of 1951, the year 1961 brought the first large-scale industrial application of AI, when robots were introduced into General Motors factories to assist on assembly lines.
1964: Natural language processing and understanding were first achieved this year when Daniel Bobrow developed STUDENT as his MIT Ph.D. dissertation (“Natural Language Input for a Computer Problem Solving System”).
1965: Building on the previous year's achievement in NLP (natural language processing), Joseph Weizenbaum develops ELIZA, an early "interactive chatbot" that can converse with users in English.
1969: The famous paper "Perceptrons: An Introduction to Computational Geometry" by Marvin Minsky and Seymour Papert is published, pointing out the limitations of the simple single-layer neural networks popular at the time and their failure to solve a simple XOR problem.
1974: The Stanford Artificial Intelligence Laboratory introduces the Stanford Cart, a self-driving vehicle capable of detecting obstacles in its path and maneuvering around them; it reached its final form only in 1979.
1974 – 1979: The first AI winter hits, largely because of the limited memory and computing power available at the time. Government investment in AI falls, and so does overall academic interest.
1981: Japan launches its "fifth generation" computer project, aiming at machines that can converse, translate and reason like humans; the project prompts large matching investments from the US and the UK, ending the first AI winter.
1983: XCON (eXpert CONfigurer), an early rule-based expert system developed in 1978 to help configure orders for DEC's VAX computers, reaches its peak of about 2,500 rules and is adopted by a large number of corporations around the world. XCON thus becomes the first widely used system that applies artificial intelligence techniques to a real-world problem: selecting components based on customer requirements.
1984: At the annual meeting of the Association for the Advancement of Artificial Intelligence, held at the University of Texas, Roger Schank and Marvin Minsky warn the AI community of another AI winter, arguing that the ongoing AI bubble will burst as research funding and industry investment decline.
1986: David Rumelhart, Geoffrey Hinton, and Ronald Williams describe the backpropagation learning method for neural networks in their paper "Learning Representations by Back-propagating Errors."
1987 – 1993: As Schank and Minsky warned, the second AI winter hits. Slow and inflexible expert systems become too costly for corporations, which increasingly prefer personal computers running Microsoft and Apple operating systems. The Defense Advanced Research Projects Agency (DARPA), the U.S. Department of Defense's research and development agency, stops funding AI projects, concluding that AI will not catalyze the next wave of computing. In 1988, Minsky and Papert publish an expanded edition of Perceptrons, citing the AI community's continued repetition of old mistakes as a reason for the field's decline. Rumelhart's backpropagation technique is used in a multilayer neural network for handwriting recognition at AT&T Laboratories, but limited computing power remains a problem: training the network takes three days. In 1992, Bernhard Boser, Isabelle Guyon, and Vladimir Vapnik (who proposed the original SVM in 1963) find a way to apply the kernel trick to support vector machines (SVMs) to solve nonlinear classification problems. The resulting algorithm, much simpler and faster to train than the neural networks of the time, can perform tasks such as image classification and strongly influences the development of AI.
1995: Richard Wallace creates ALICE, a chatbot inspired by ELIZA but much more advanced thanks to the data made accessible by the World Wide Web.
1997: Sepp Hochreiter and Jürgen Schmidhuber propose the long short-term memory (LSTM) architecture, later widely used for handwriting and speech recognition. That same year, IBM's chess program Deep Blue beats Garry Kasparov, the world chess champion, in a highly publicized match.
1998 – 2000: Many robots are developed around the world. In 1998 the robotic toy Furby is released, and in 1999 Sony introduces the AIBO robotic dog. In 2000, Cynthia Breazeal of MIT develops Kismet, a robot able to recognize and simulate human emotions, and Honda unveils ASIMO, a humanoid robot capable of interacting with people and, for example, serving trays in a restaurant.
2004: DARPA recognizes the potential of artificial intelligence and launches a challenge to develop autonomous vehicles (though no one wins the competition).
2009: Google begins a self-driving car project.
2010: Automated storytelling emerges when a student project develops the Stats Monkey prototype, which writes sports stories automatically from widely available game data.
2011: IBM's Watson computer wins the quiz show Jeopardy!. The race to build the most sophisticated virtual assistant begins, and assistants such as Facebook M, Microsoft Cortana, Apple Siri and Google Now become popular.
2015: Elon Musk and other prominent figures pledge $1 billion to found OpenAI.
2016: Google DeepMind's AlphaGo beats the Korean Go champion Lee Sedol. That same year, Stanford releases the first report of its One Hundred Year Study on Artificial Intelligence (AI100), examining the state of AI and how it may affect people.
Present: A large number of universities around the world have started offering formal academic courses in AI, making it a major subject of study.
Artificial Intelligence Technology Market
The market is expected to grow to $190.61 billion by 2025, at an annual growth rate of 36.62%. Growth is driven by factors such as the adoption of cloud applications and services, the emergence of large data sets and strong demand for intelligent virtual assistants. However, there are still few specialists able to develop and implement AI technologies, which holds the market back; systems built on AI also require integration and ongoing technical support.
Processors for AI
Modern AI tasks require powerful processors that can process huge amounts of data. Processors must have access to large amounts of memory, and the device also requires high-speed data transfer channels.
In Russia
At the end of 2022, a series of Elbrus-804 servers was launched in Russia, showing high performance. Each machine is equipped with four eight-core processors. These devices can be used to build computing clusters and to run applications and databases.
World market
The market is driven and led by two corporations, Intel and AMD, manufacturers of the most powerful processors. Intel has traditionally focused on higher clock speeds, while AMD concentrates on increasing core counts and multi-threaded performance.
How to feed AI information
We all understand that AI needs access to information in order to study the world around it, but we rarely appreciate how much information is needed. According to Neil Lawrence, a professor of machine learning at the University of Sheffield and a member of Amazon's AI technology development team, these systems need hundreds or thousands of times more information than a person in order to learn to understand the world and recognize particular objects.
“If you look at all the industries and areas where engineers have had some success with deep machine learning, you immediately see the enormous amount of information that was involved in solving all these questions,” Lawrence says, citing as an example the same speech and image recognition technologies.
Companies like Google and Facebook have access to mountains of information, which, of course, makes it easier to create various useful tools (the same voice search technologies for Android, for example).
According to Lawrence, information is now what coal was in the early years of the Industrial Revolution. As an example, Lawrence cites Thomas Newcomen, an Englishman who created in 1712 (in fact, 60 years before James Watt created such a machine) a primitive version of a steam engine powered by coal. Newcomen's invention was not perfect. Compared to Watt's machine, it was ineffective and too expensive to use. For the most part, it could only be used in coal mines, where the volume of fuel required compensated for the machine's shortcomings.
An example of Facebook's open image recognition technology
Lawrence believes there could be hundreds of these Newcomens around the world, developing their own machine learning models. There may be truly revolutionary ideas among them, but without their technology having access to huge databases of information, most likely no one will ever know about them.
Large companies like Google, Facebook and Microsoft are the modern "coal miners". They have access to an almost unlimited amount of information, so they can create inefficient machine learning systems and then improve them.
Small startups may indeed have great ideas, but they will never achieve anything worthwhile without access to information bases.
This problem becomes even more obvious in areas where the necessary information is harder to obtain. Take healthcare, where AI could be used for computer-vision tasks such as finding and recognizing malignant tumors on X-rays, but access to such data is usually very limited. The main limiting factor, according to Lawrence, is the current perception that it is unethical for third parties to access this kind of information. The real problem, he argues, is not finding ways to share information but making machine learning systems more efficient and teaching them to work with less information. And those efficiency improvements, the scientist says, may take the same 60 years that separated Newcomen's engine from Watt's.
National development concept
Three dozen countries have already approved national AI development strategies. In October 2022, a draft National AI Development Strategy is expected to be adopted in Russia, and a special legal regime is expected to be introduced in Moscow to facilitate the development and implementation of AI technologies.
AI Research
Questions about what artificial intelligence is and how it works have occupied scientists in many countries for decades. The US government budget allocates about $200 million a year for research; in Russia, about 23 billion rubles were allocated over the 10 years from 2007 to 2017. Support for AI research will be an important part of the national strategy: new research centers are due to open in Russia, and the development of innovative AI software will continue.
Standardization in AI
Norms and rules in the field of AI in Russia are being continuously refined. National standards, currently being developed by market leaders, are expected to be approved at the end of 2019 and the beginning of 2020, and the National Standardization Plan for 2022 and beyond is being drawn up at the same time. Work is also under way on the standard "Artificial Intelligence. Concepts and Terminology": in 2019 experts began developing its Russian version, and the document is expected to be approved in 2022.
Impact of artificial intelligence
The implementation of AI is inextricably linked with scientific and technological progress, and its scope of application expands every year. We encounter it daily, when a large online retailer recommends a product, or when, on opening the computer, we see an advertisement for a movie we had just wanted to watch. These recommendations are based on algorithms that analyze what the consumer has bought or watched, and behind these algorithms is artificial intelligence.
Is there a risk for the development of human civilization?
Elon Musk believes that the development of AI could threaten humanity and that the consequences could be worse than the use of nuclear weapons. The British scientist Stephen Hawking warned that humans could create a superintelligent AI capable of harming them.
On economics and business
The penetration of AI technology into all areas of the economy will increase the volume of the global market for services and goods by $15.7 trillion by 2030. The USA and China are still leaders in terms of all kinds of projects in the field of AI. Developed countries - Germany, Japan, Canada, Singapore - are also striving to realize all the possibilities. Many countries whose economies are growing at a moderate pace, such as Italy, India, Malaysia, are developing strengths in specific areas of AI application.
To the labor market
The global impact of AI on the labor market will follow two scenarios. First, the spread of some technologies will lead to large-scale layoffs as computers take over many tasks. Second, as technology progresses, AI specialists will be in high demand across many industries.
Areas of application
Artificial intelligence and its applications are undergoing a transformation. The term Weak AI ("narrow AI") is used for systems that carry out narrow tasks such as medical diagnostics, electronic trading platforms, or robot control, whereas researchers define Strong AI ("general AI") as an intelligence that can be given broad, open-ended tasks of the kind posed to a human.
Use for defense and military purposes
By 2025, sales of related services, software and equipment globally will rise to $18.82 billion, and the annual market growth will be 14.75%. AI is used for data aggregation, in bioinformatics, in military training, and in the defense sector.
In education
Many schools include introductory lessons on AI in their computer science courses, and universities widely use big data technologies. Some programs monitor student behavior, grade tests and essays, recognize errors in word pronunciation and offer correction options.
There are also online courses on artificial intelligence, for example on the educational portal GeekBrains.
In business and trade
Over the next five years, leading retailers will offer mobile apps that work with digital assistants such as Siri to make shopping easier. AI makes it possible to earn huge sums on the Internet; one example is Amazon, which constantly analyzes consumer behavior and refines its algorithms.
In the electric power industry
AI helps predict generation and demand for energy resources, reduce losses, and prevent resource theft. In the electric power industry, the use of AI in analyzing statistical data helps to select the most profitable supplier or automate customer service.
In the manufacturing sector
According to a McKinsey survey of 1,300 executives, 20% of businesses are already using AI. One example comes from manufacturing: a company recently introduced AI in its packaging department, relying on AI's ability to recognize images. A camera records each employee's actions by scanning the barcode printed on their clothing and sends the data to a computer, and the number of operations performed directly affects the employee's pay.
In brewing
Carlsberg uses machine learning to select yeast and expand its range. The technology is implemented on the basis of a digital cloud platform.
In banking
The need for reliable data processing, the development of mobile technologies, the availability of information and the spread of open-source software make AI a sought-after technology in the banking sector. More and more banks are enlisting mobile application development companies to help attract funds. New technologies are improving customer service, and analysts predict that within five years AI in banks will make most decisions independently.
By transport
The development of AI technologies is a driver of the transport industry. Monitoring road conditions, detecting pedestrians or objects in inappropriate places, autonomous driving, cloud services in the automotive industry are just a few examples of the use of AI in transport.
In logistics
AI capabilities allow companies to more effectively forecast demand and build supply chains at minimal cost. AI helps reduce the number of vehicles needed for transportation, optimize delivery times, and reduce operating costs of transport and warehouse facilities.
In the market of luxury goods and services
Luxury brands have also turned to digital technologies to analyze customer needs. One of the challenges facing developers in this segment is managing and influencing customer emotions. Dior is already adapting AI to manage customer-brand interactions using chatbots. As luxury brands compete in the future, the deciding factor will be the level of personalization they can achieve with AI.
In public administration
Government agencies in many countries are not yet ready for the challenges hidden in AI technologies. Experts predict that many of the existing government structures and processes that have evolved over the past few centuries will likely become obsolete in the near future.
In criminology
Various AI approaches are being used to identify criminals in public places. In some countries, such as the Netherlands, police use AI to investigate complex crimes. Digital forensics is an evolving science that requires mining huge volumes of highly complex data.
In the judicial system
Developments in the field of artificial intelligence will help radically change the judicial system, making it fairer and free from corruption. China was one of the first to use AI in the judicial system. It can be assumed that robot judges will eventually be able to operate with big data from government service repositories. Machine intelligence analyzes huge amounts of data, and it does not experience emotions like a human judge. AI can have a huge impact on processing information and collecting statistics, as well as predicting possible crimes based on data analysis.
In sports
The use of AI in sports has become commonplace in recent years. Sports teams (baseball, soccer, etc.) analyze individual player performance data, taking into account various factors when recruiting. AI can predict the future potential of players by analyzing playing technique, physical condition and other data, as well as estimate their market value.
In medicine and healthcare
This area of application is growing rapidly. AI is used in disease diagnosis, clinical research, drug development and health insurance. In addition, there is now a boom in investment in numerous medical applications and devices.
Analysis of citizen behavior
Monitoring the behavior of citizens is widely used in the security field, including monitoring behavior on websites (social networks) and in instant messengers. For example, in 2022 Chinese scientists managed to identify 20 thousand people at risk of suicide and provide them with psychological help. In March 2022, Vladimir Putin ordered government agencies to step up efforts to combat the negative influence of destructive movements on social networks.
In the development of culture
AI algorithms are beginning to generate works of art that are difficult to distinguish from those created by humans. AI offers creative people many tools to bring their ideas to life. Right now, the understanding of the role of the artist in a broad sense is changing, since AI provides a lot of new methods, but also poses many new questions to humanity.
Painting
Art has long been considered the exclusive sphere of human creativity. But it turns out that machines can do much more in the creative field than people can imagine. In October 2018, Christie's sold the first painting created by AI for $432,500. A generative adversarial network algorithm was used that analyzed 15,000 portraits created between the 15th and 20th centuries.
Music
Several music programs have been developed that use AI to create music. As in other fields, AI in this case also simulates a mental task. A notable feature is the ability of an AI algorithm to learn from information received, such as computer tracking technology that is able to listen and follow a human performer. AI also drives what's known as interactive composition technology, in which a computer composes music in response to a live musician's performance. At the beginning of 2022, Warner Music entered into its first-ever contract with an artist, the Endel algorithm. Under the terms of the contract, the Endel neural network will release 20 unique albums within a year.
Photo
AI is rapidly changing the way we think about photography. Within a couple of years most advances in this area will come from AI rather than from optics or sensors, as before. For the first time, progress in photographic technology will not be tied to physics, which creates a completely new way of thinking about photography. Already, neural networks detect the slightest changes when faces are modeled in photo editors.
Video: face swap
In 2015, Facebook began testing its DeepFace technology on the site. In 2017, a Reddit user known as DeepFakes published an algorithm that uses neural networks and machine learning to create realistic face-swap videos.
Media and literature
In 2016, Google's AI began writing its first literary texts after analyzing 11 thousand unpublished books. Facebook AI Research later presented a neural network system that can write poetry on any topic. And in November 2015, a Russian company opened a division producing automatically generated texts.
Go games, poker, chess
In 2016, an AI beat a human at Go, a game with more than 10^100 possible variations. In chess, a supercomputer defeated a human player thanks to its ability to store previously played games in memory and to calculate new moves 10 steps ahead. Poker is now played by bots, although it was previously believed to be almost impossible to teach a computer this card game. Every year developers improve their algorithms further.
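The "looking several moves ahead" mentioned above is usually implemented with game-tree search. Below is a minimal, generic minimax sketch in Python; it is a simplified illustration with a toy game, not the actual engine used in chess or Go programs:

```python
def minimax(state, depth, maximizing, moves, evaluate, apply_move):
    """Look `depth` plies ahead and return the best achievable score,
    assuming both players play optimally."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    scores = [
        minimax(apply_move(state, m), depth - 1, not maximizing,
                moves, evaluate, apply_move)
        for m in legal
    ]
    return max(scores) if maximizing else min(scores)

# Toy game: the state is a number, each move adds -1, 0 or +1, and the
# maximizing player wants the final number to be as high as possible.
print(minimax(0, 4, True,
              moves=lambda s: [-1, 0, 1],
              evaluate=lambda s: s,
              apply_move=lambda s, m: s + m))  # -> 0 with best play by both sides
```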
Face recognition
Facial recognition technology is applied to both photos and video streams. Neural networks build a vector, or "digital template," of a face, and these templates are then compared within the system. The network finds reference points on the face that define its individual characteristics. The algorithm for computing these characteristics differs from system to system and is the developers' main secret.
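As a rough illustration, comparing two face templates typically comes down to measuring the distance or cosine similarity between their embedding vectors. The vectors and threshold below are made up for the example; real systems compute much longer embeddings with a deep network trained on faces:

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two face templates (embedding vectors)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Invented 5-dimensional templates; real systems use 128 or more dimensions.
template_photo = [0.12, -0.80, 0.45, 0.30, -0.10]
template_frame = [0.10, -0.76, 0.48, 0.28, -0.05]

THRESHOLD = 0.9  # system-specific; tuned by the developers
same_person = cosine_similarity(template_photo, template_frame) > THRESHOLD
print(same_person)
```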
For the further development and application of AI, it is first necessary to train the people who will build and use it.
In conclusion
One typo can have a huge impact on your brand image. As a result, any content you create must be polished, precise, and error-free. AI software helps companies and startups do just that and speed up the marketing process.
If you are looking for the right AI software, please select one from the list below:
- Jarvis: best overall AI writing software
- Grammarly: best for error-free proofreading
- Wordtune: best for paraphrasing content
- ProWritingAid: best for editing academic content
- Sapling: best for sales and support messages
- AI Writer: best for automated content creation
- Articoolo: best for automated WordPress articles
Whether you want to create great content or just produce short articles, the tools above can help you achieve that. You can also use a combination of several tools to improve the quality of your content.
Sergey Shirkin
Dean of the Faculty of Artificial Intelligence GeekUniversity
Artificial intelligence technologies in the form in which they are used now have existed for about 5–10 years, but in order to apply them, oddly enough, a large number of people are required.
Accordingly, the main costs in the field of artificial intelligence are the costs of specialists, while almost all the basic AI technologies (libraries, frameworks, algorithms) are free and publicly available. At one time, finding machine learning specialists was almost impossible, but now, largely thanks to the growth of MOOCs (massive open online courses), there are more of them. Higher education institutions also supply specialists, though they often have to complete their training through online courses. Artificial intelligence can now easily recognize that a person is planning to change jobs and offer them relevant online courses, many of which can be completed with nothing more than a smartphone. This means you can study even on the go, for example on the way to work. One of the first such projects was the online resource Coursera, but many similar educational projects have appeared since, each occupying its own niche in online education.
You need to understand that AI, like any program, is first of all code, that is, text formatted in a certain way. This code needs development, maintenance and improvement. Unfortunately, this does not happen by itself; without a programmer, the code cannot “come to life”. Therefore, all fears about the omnipotence of AI are unfounded. Programs are created for strictly defined tasks, they do not have feelings and aspirations like a person, they do not perform actions that the programmer did not put into them.
We can say that in our time, AI has only some human skills, although it can be faster than the average person in the speed of their application. True, thousands of programmers spend many hours of effort to develop each such skill. The most that AI can do so far is to automate some physical and mental operations, thereby freeing people from routine.
Does the use of AI pose any risks? If anything, the greater risk today is failing to recognize the opportunities that AI technologies offer. Many companies understand this and try to develop in several directions at once in the hope that one of them will take off. Online stores are a telling example: the ones still afloat are those that recognized the need for AI before it became a trend, while others preferred to "save money" and never hired the mathematicians and programmers they needed.
Prospect for the development of artificial intelligence
Computers can now do many things that only humans could do before: play chess, recognize letters of the alphabet, check spelling and grammar, recognize faces, take dictation, speak, win game shows and much more. But skeptics persist: as soon as another human ability is automated, they say it is just another computer program and not an example of self-learning AI. AI technologies are only beginning to find wide application and have huge growth potential in all areas. Over time, humanity will create ever more powerful computers, which will become more and more advanced in the development of AI.
Is the goal of AI to put human intelligence into a computer?
There is only a rough understanding of how the human brain works. So far, not all properties of the mind can be imitated using AI.
Will AI be able to reach human levels of intelligence?
Scientists are striving for AI to be able to solve even more diverse problems. But it is premature to talk about reaching the level of human intelligence, since thinking cannot be reduced to algorithms alone.
When will artificial intelligence reach the level of human thinking?
At this stage of accumulation and analysis of information, which has now been achieved by humanity, AI is far from human thinking. However, in the future, breakthrough ideas may arise that will influence a sharp leap in the development of AI.
Can a computer become an intelligent machine?
Part of any complex machine is a computer system, and here we can only talk about intelligent computer systems. The computer itself does not have intelligence.
Is there a connection between speed and the development of intelligence in computers?
No, speed is only responsible for some properties of intelligence. The speed of processing and analyzing information in itself is not enough for intelligence to appear.
Is it possible to create a children's machine that could develop through reading and self-learning?
This has been discussed by researchers for almost a hundred years. The idea will probably be implemented someday. Today, AI programs do not process or use as much information as children can.
How do computability theory and computational complexity relate to AI?
Computability theory studies which problems can be solved by the mechanical application of mathematical steps, that is, by an algorithm, while computational complexity theory classifies solvable problems according to their inherent difficulty and relates these classes to each other. Together they set the theoretical limits on what AI programs can compute and how efficiently.
AI must be able to multitask
Another key problem standing in the way of truly deep machine learning models is the fact that all of our current AI systems are, at their core, very narrow. According to Raia Hadsell, a research scientist at Google's DeepMind, these systems can be taught to recognize cats or to play a particular game and be very effective at those tasks, but "at the moment there is not a single full-fledged neural network in the world, nor methods, that could be trained to recognize images, play Space Invaders and contemplate music." Neural networks, in turn, are the key building block of deep machine learning systems.
And this problem is much more significant than it might seem at first glance. When DeepMind announced last February that it had built a system that could play 49 Atari games, it was indeed a major achievement. But it turned out that after the system finishes learning one game, it has to be retrained from scratch to play the next. Hadsell notes that we cannot teach a system to play all the games at once, because the rules of each would mix with one another and interfere with the task at hand: each time the machine must be taught anew, and each time it "forgets" how to play the previous game.
“In order to create artificial general intelligence, we need something that will help us teach a machine to perform multiple tasks at once. Now we can't even train them to play games,” says Hadsell.
The solution may lie in so-called progressive neural networks: combining independent deep learning systems into a single whole so that information is handled more efficiently. In a published paper on the subject, Hadsell and her team described how their progressive neural network adapted to a game of Pong whose conditions differed slightly each time (in one case the colors were changed, in another the controls were flipped) much faster than an ordinary neural network, which had to be retrained from scratch each time.
Basic principle of a progressive neural network
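A very rough sketch of the idea follows, simplified from the progressive-networks approach; the layer sizes, random weights and lateral connection scheme are illustrative, not those of the original paper. The column trained on the first task is frozen, and a new column for the second task receives lateral inputs from the old column's hidden activations:

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda x: np.maximum(0, x)

# Column 1: trained on task A, then frozen (its weights are no longer updated).
W1_a = rng.normal(size=(10, 8))
W2_a = rng.normal(size=(8, 4))

# Column 2: trained on task B; it receives lateral connections from column 1.
W1_b = rng.normal(size=(10, 8))
U_lateral = rng.normal(size=(8, 8))  # maps column-1 hidden units into column 2
W2_b = rng.normal(size=(8, 4))

def forward_task_b(x):
    h_a = relu(x @ W1_a)                    # frozen features learned on task A
    h_b = relu(x @ W1_b + h_a @ U_lateral)  # task-B features reuse them
    return h_b @ W2_b                       # task-B output head

print(forward_task_b(rng.random(10)).shape)  # (4,)
```

Because column 1 stays frozen, training on the new task cannot overwrite what was learned before, which is the property the paragraph above describes.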
The method has proved very promising and has recently been used to tune robotic arms, cutting their training time from a week to a single day. Unfortunately, it also has its limits. Hadsell notes that with progressive neural networks the learning process cannot be reduced to simply adding new tasks to their memory: if you keep chaining such systems together, sooner or later you end up with "a model that is too complex to track." At that point we are talking about a different level, one at which very different tasks would have to be handled in essentially the same way. Creating an AI that can design chairs is not the same thing as creating a human-level AI that can write poems and solve differential equations.