Artificial Intelligence and Self-Training

Artificial intelligence is advancing more rapidly every day, to the point where scientists are now betting on its ability to train itself. Even so, current approaches indicate that human action and intervention will still be needed in one way or another.

Over the last year, leading AI figures such as Yann LeCun and Yoshua Bengio have been exploring the possibilities of self-supervised learning and studying how AI models behave under it.

Currently, artificial intelligence is trained by learning from what it can “see” or hear, much as humans do, and through trial and error. However, someone, a human, is needed to tell the AI whether it is right or wrong. There is also supervised learning, which requires the model to consume large amounts of labeled, written data. This method has a fundamental limit, though: much of human knowledge is never written down, which makes it impossible to reach human-level intelligence by this route alone.
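
As an illustration, here is a minimal sketch of the supervised approach the article describes; the dataset and labels are hypothetical, and the point is simply that every example must be marked by a person before the model can learn from it.

    from sklearn.linear_model import LogisticRegression

    # Hypothetical human-labeled examples:
    # features = (hours of study, hours of sleep), label = pass (1) / fail (0).
    # A person had to mark every outcome by hand; that labeling effort is the
    # bottleneck the article describes.
    X = [[8, 7], [1, 4], [6, 8], [2, 3], [7, 6], [0, 5]]
    y = [1, 0, 1, 0, 1, 0]

    model = LogisticRegression().fit(X, y)
    print(model.predict([[5, 7]]))  # classify a new, unlabeled case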

This is why scientists and leading AI researchers are choosing to bet on self-supervised and/or predictive learning: methods in which a model can consume large amounts of unwritten information (such as many hours of video), make sense of it, and then carry out related tasks, which can even include predicting the outcome of a situation.
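
A minimal sketch of that predictive idea, under toy assumptions (a noisy sine wave standing in for video, and a linear least-squares predictor standing in for a neural network): the training targets come from the data itself, so no human labeling is involved.

    import numpy as np

    rng = np.random.default_rng(0)

    # Unlabeled data: a noisy sine wave standing in for raw, unannotated input.
    t = np.linspace(0, 8 * np.pi, 2000)
    signal = np.sin(t) + 0.05 * rng.standard_normal(t.size)

    window = 16  # the model sees the last 16 samples and predicts the next one

    # The key idea: (input, target) pairs are built from the data itself,
    # so the supervision signal requires no human labeling.
    X = np.stack([signal[i:i + window] for i in range(signal.size - window)])
    y = signal[window:]

    # Fit a linear predictor by least squares (a tiny stand-in for a network).
    w, *_ = np.linalg.lstsq(X, y, rcond=None)

    # Roll the predictor forward to "predict the outcome" of the sequence.
    context = signal[-window:].copy()
    predictions = []
    for _ in range(5):
        nxt = context @ w
        predictions.append(nxt)
        context = np.append(context[1:], nxt)

    print(np.round(predictions, 3))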

Despite the real possibility that artificial intelligence can train itself, there is an inherent risk: if a model obtains its information from data generated by other AI, it can end up “losing its mind.”

A study from Stanford and Rice universities demonstrates how AI models that consume AI-generated content end up producing meaningless images, texts, and symbols. This is a concrete risk for self-training models, since they are likely to encounter content generated by other AIs on the Internet, where such content is now very common.

[Image: AI images generated by models trained on their own outputs.]

This happens because it would create a MAD (Model Autophagy Disorder) loop, in which a model consumes content generated by itself and returns meaningless, or steadily less useful, results. Although AI self-training holds great potential, and it may one day be comparable to human learning, reaching that point will still take time, and a human presence in training remains all but essential.
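
A toy sketch of such a self-consuming loop, under a deliberately simple assumption (the “model” is just a Gaussian fitted to its own samples; this is not the universities' experiment): each generation is trained only on the previous generation's output.

    import numpy as np

    rng = np.random.default_rng(0)

    # "Real" starting data: 20 samples from a standard normal distribution (std = 1).
    data = rng.normal(loc=0.0, scale=1.0, size=20)

    for gen in range(1, 101):
        mu, sigma = data.mean(), data.std()    # "train" the model on the current data
        data = rng.normal(mu, sigma, size=20)  # next generation sees only synthetic data
        if gen % 20 == 0:
            print(f"generation {gen:3d}: spread (std) = {sigma:.4f}")

In a typical run, the printed spread shrinks generation after generation: the loop gradually loses the variety of the original data, a simple analogue of the degraded images and texts described above.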
