Re: Google plays God - Develops AI that evolves and improves on its own I was trying to resist commenting on any AI discussion on this forum thus far, but now I can't resist anymore. :-)
I work in AI/Data Science and I am co-founder and CEO of a company that develops advanced AI models for complex industrial problems. In this context, we have in fact been developing our own version of AutoML and our work is in progress as we speak.
First of all, I think that there is nothing to fear. Titles such as "Google is playing God" are just hyperbole. This AutoML is essentially a superset of all possible algorithms to model a given set of input-output data. This will definitely improve the results on some hard problems and even make it possible to solve some complex problems which are not solved today. So this is progress for sure, but it does not mean anybody has created magic. They have essentially only created a new, more powerful algorithm to solve that problem compared to the set of algorithms which we already knew.
It is important to note that the "problem" is still defined by humans. In other words, we start the process by defining the input data and what we want to achieve. We define what the "desired outcome" is. There are two fundamental types of learning algorithms: supervised learning and unsupervised learning. In supervised learning, the user has to give explicit input-output pairs of data (or labelled data, as we call it in data science). In unsupervised learning, we do not need to know the labels or give labelled data, but we still need to define the "goal". The AI algorithms, whether traditional algorithms or this new AutoML, find the best possible "model" that maps the inputs to the goals with some notion of the least possible overall error (I have oversimplified a lot of things here, but this description gives the gist of the process).
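To make the distinction concrete, here is a toy sketch in plain Python. The data, the model forms, and the clustering rule are all made up for illustration; the point is only that in the supervised case we fit a mapping from given (input, output) pairs, while in the unsupervised case there are no outputs and the human-defined "goal" is something like grouping similar inputs.

```python
# Supervised: labelled (input, output) pairs; the goal is to fit y = w*x
# by minimizing squared error (closed-form least squares, no intercept).
pairs = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # roughly y = 2x
num = sum(x * y for x, y in pairs)
den = sum(x * x for x, _ in pairs)
w = num / den  # best-fit slope, close to 2.0

# Unsupervised: only inputs, no labels. The human still defines the goal;
# here it is "split the points into two compact groups" (crude 1-D
# clustering: everything below the overall mean goes in group 0).
points = [0.9, 1.1, 1.0, 9.8, 10.2, 10.0]
threshold = sum(points) / len(points)
groups = [0 if p < threshold else 1 for p in points]
```

In both cases the objective is chosen by a person; the algorithm only searches for the model (or grouping) that best satisfies it.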
Now, the above process involves a few intricate tasks, including data processing/preparation and the tuning of many learning parameters, which require a good knowledge of data science. Usually, data scientists are involved in tuning the AI model continuously as it gets trained. Sometimes the data scientists decide that the algorithm they are trying is not ideal and they switch to a different algorithm. This is one decision that requires great insight into data science and algorithms. Now, all that this AutoML is doing is automating most of these tasks to a level where intervention by data scientists is minimal. This is actually great news, since it further democratizes AI, and even less experienced data scientists may be able to use something like this to develop better models than before. More power to everyone!
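The kind of automation described above can be sketched in a few lines. This is not Google's AutoML (which searches over neural architectures), just a minimal illustration of the general idea: a human fixes the goal (here, validation mean-squared error), and the search over candidate models runs without further intervention. The model names and data are invented for the example.

```python
# Two hypothetical candidate model families.
def fit_constant(xs, ys):
    mean = sum(ys) / len(ys)          # always predict the training mean
    return lambda x: mean

def fit_linear(xs, ys):
    w = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    return lambda x: w * x            # least-squares line through origin

def mse(model, xs, ys):
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

train_x, train_y = [1.0, 2.0, 3.0], [2.0, 4.1, 5.9]
val_x, val_y = [4.0, 5.0], [8.1, 9.8]

candidates = {"constant": fit_constant, "linear": fit_linear}

# The human defined the metric; the loop below is the "automated" part:
# try each candidate, score on held-out data, keep the best.
best_name, best_err = None, float("inf")
for name, fit in candidates.items():
    model = fit(train_x, train_y)
    err = mse(model, val_x, val_y)
    if err < best_err:
        best_name, best_err = name, err
```

Real AutoML systems search far larger spaces (architectures, hyperparameters, preprocessing pipelines), but the structure is the same: the objective is fixed by a person, and only the search is automated.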
The reason I am bringing this up is to emphasize this: The AI has no intent and no mind. It can't on its own define what it wants to do. The humans define the goal of the AI, even in AutoML, and the algorithms (including AutoML) find the best model that achieves the goal.
So, however powerful the AI algorithm is and however it is hyped (rightly or wrongly), it still can't take decisions it is not asked to take, let alone take any "actions" on its own. The "goal" of the algorithm is defined upfront by a human, and the algorithms have no capability to change these goals.
In summary, there is no need to worry. There are certainly some social and economic implications of AI becoming very powerful (such as whether it will replace human jobs), and we discuss those all the time in multiple AI forums. Those are valid concerns to some extent. But the concern that AI will play God, or conversely that AI will turn into some kind of robotic villain as in the movies, has no basis. That is totally unfounded.
Last edited by Dr.AD : 27th April 2020 at 20:18.