AI Useful in Hard, But Less So in Soft Sciences

Is there a mathematician in the world who does not want AI (or AGI) to start solving the greatest mathematical riddles of all time? When it does begin to do so, will it win the Nobel Prize and, from then on, relegate humans to bystanders?

It’s not as if AI isn’t already solving biological problems as well, such as:

  • the Moderna vaccine that was designed in two days;
  • IBM Watson Health and Google’s recent Med-PaLM, which may be more accurate than most doctors;
  • DeepMind’s AlphaFold, whose deep learning can predict the 3D structure of proteins.

Note that in well-established math, disagreement tends to be less common, because proofs are easier to verify than claims in the soft sciences. True, cutting-edge math is more contested, but AI may resolve much of that with time, using probability and large data sets.

Soft Sciences

But the real problems AI will run up against will likely be found in the soft and fuzzy sciences, where disagreement is often the norm: sociology, politics, philosophy, and even religion. These fields are ultimately guided more by an individual’s personal values, needs, wants, and perceptions than by concrete reason and universal facts. It is probably logical, then, for AI simply to lay out all the possible alternatives and facts based on data, yet leave humanity to make the decisions.

You would think that remaining neutral would be key, but even if that were the case, many people would actively seek out other AIs that were more biased, less filtered, or that molded themselves to the users’ preferences and responses. It would be much like how people today join fringe forums and groups online to reinforce their tribal instincts; sometimes that is bad, but other times perhaps useful (e.g., a weight-loss or addiction-recovery group).

Simultaneously, such people would dismiss a neutral AI that does not support their personal viewpoints. The whole world already suffers from a crisis of trust in information, and I do not see how AI will reduce it, for various reasons (many people barely trust their phone’s navigation).

A lot of the debate about AI seems to assume a single AI with a single viewpoint. Yet even if there were only a single AI at the top, it could still present millions of different viewpoints, one to each person. The truth may not be something to be determined by the mathematics of the hard sciences or by the whims of society.

On the other hand, remaining overly neutral may be equivalent to the accepting, “non-judgmental” psychologist who always accepts a patient’s behavior instead of simply telling them, at times, that they are wrong. At some point, refusing to judge poor behavior, and ambivalence generally, helps no one; extreme neutrality does not reinforce truth.

I asked ChatGPT a question, and it clearly tries to put “communism” and “capitalism” on equal footing, unable even to conclude that vastly more people died, and lived in severe poverty, under communist rule. It also conveniently ignores the reality that capitalism places accountability on oneself, while communism says other people are responsible for our well-being. Conversely, a system in which no one takes responsibility for themselves is the definition of a failed society. Constant “neutrality” may lead to ambiguity as well. Will AI learn this? I don’t know.

Assuming AI could actually teach us, based on concrete statistical and historical data, it may actually benefit society, if people choose to trust it and if its results could be verified by people, which is yet another problem. I am not very hopeful on either count currently, partly because people will remain predictably irrational if need be, or will simply have different goals than others. The truth may never unfold, although the appearance of it may.

We live in a world with more data than ever (although knowledge may be relatively sparse), and yet I see no sign that people are getting more intelligent. In fact, IQ scores seem to be falling in developed countries. How is adding more computational power, even wired directly into our minds, going to make us smarter? Data is not knowledge, nor wisdom.

Will People Still be Motivated to Solve Problems if AI Takes the Lead?

Ray Kurzweil said on the Lex Fridman podcast that even though a computer beat the world’s best chess player, chess has never been more popular. Well, chess is played purely for fun. Once applied to money-making ventures, it will be a different story. Just ask Jim Simons of Renaissance Technologies, the most successful investment fund of all time, which used quantitative strategies. Most investors would recreate something similar if they could. Yes, perhaps investing is a game of sorts too.

Unlike chess, I doubt people will invest the old-fashioned way for fun once AI gets ahold of it; and what is stopping AI from being applied to investing? Mainly the ability to act on a computer interface, and perhaps to do so at scale (operating millions of websites at once). Even value investing would be done much better by an AI that could digest every investing book and then make recommendations, or even make the investments itself.

So, maybe we will all just play games all day while computers work for us, with the government taxing them at 100%. Sounds fun, but maybe you should read up on the issues discussed here. If there is a working class and a ruling class, then elitism is the most likely outcome under this scenario as well.

Applying Leverage to Flawed People

What can I say: people are flawed, and computers are flawed. Combining those two negatives might well compound the risks. I am not even sure that greater knowledge will have a greater benefit for society, as I discuss here.

In the meantime, let’s hope that an AGI-created virus does not cross into the real world soon. If it does, we may have to install anti-virus AGI software (e.g., programmatic nanobots) on ourselves just to be able to exist. If you think most people will adapt, maybe you are right, but the many anti-vaxxers and COVID-vaccine skeptics show that plenty of people will not subject themselves to increasingly unknowable, debatable, and complex processes just to be able to breathe air.
