Non Artificial and Not Intelligence

Non Artificial and Not Intelligence explores why this is a misnomer. It is a computer program made by man, and prone to the same prejudices and errors that any human will make.

The Error of AI

The problem with AI is that it is a scam from the beginning. Most people think of AI as an electronic person: something you can talk to, see in digital form as if it were a person, and that will answer your inquiries like a real person, a slave person at your service, but nonetheless a real person. That is where the error begins. AI programs are just computer programs, made by men, specifically computer programmers. The concept is very straightforward: a human takes an area of inquiry (a point of science, history, medicine, engineering, etc.) and makes a point of inquiry ("I have these symptoms, so what is my disease, and what is the medicine to cure me?"). The human "user" asks this question.

In the making of the program, there are areas that the programmers work in. So if we want to make a medical AI bot, we get 1,000 doctors together, ask them first to list all known diseases, and then to list the symptoms of those diseases and their cures. So far so good. But a doctor could spend years researching and writing his part of the program. Do you think that any good doctor worth his salt would take years off from his medical practice to do that (most probably without being paid for it)? The simple answer is no. That is why AI is built on stealing other people's hard work and paying them nothing in return.

Hollywood’s Real Scary Monster, AI

If you look at what the actors' guild says about AI, none of it is good. They want laws against AI stealing their hard work. Why? Because they work hard to make a product that they can sell (i.e., a product the artist makes money from), and that product is themselves, and AI pays them nothing for it. The artist spends a lot of money, time, and personal energy, and gets basically nothing in return except the feeling that comes when you have been robbed and defrauded (it feels like a financial violation).

So the very principle by which AI comes into being is probably illegal. AI companies run scraping programs that pull information from many websites without permission, for financial gain. (If they are buying all the hard disks a manufacturer makes for an entire year, e.g., Western Digital in 2026, and building nuclear power plants to power their AI centers, somebody is making money in all of this.) But examine this closely: why would it be illegal to make a computer program that helps people? The answer is because it takes other people's hard work without permission and without reimbursing them. Hollywood right now is anti-AI, but all of that will change in the future, because the only difference between them is money. When laws are passed to make AI pay the artists, they will be delighted. They won't have to work, and they will get constant income.

Hard Work versus Filler

But let’s go back to the medical AI program. What is the rule in making a medical AI program? Whatever is free is the rule. These are sophisticated, massive computer programs. Thank you, Google, for birthing AI. Google is in the search engine business, and it runs what are probably millions of little junky computers all over the world, crawling the entire web together. What they did is impressive, but again, they are taking websites’ material without permission. Actually, websites are forced to bow down before Google and SEO (Search Engine Optimization rules) in order to “be visible.” There is bribery going on. Even through things like Google AdSense, where Google pays the site owner a little money for views, Google makes a fortune and the website owner makes next to nothing. The principle we see reappearing over and over again is stealing the work of others. That is where AI will be pinned to the wall. Interestingly, governments have already pinned Google to the wall for not paying a significant payout to the site owners who participate in AdSense.

But you get what you pay for. In this case, you pay nothing for AI, and the quality of what you get is probably going to be close to nothing, or worse than nothing: error. Here is the real problem of even using AI. You are at the mercy of the programmer. There is no “intelligence” in AI; there is only the smarts, knowledge, or “intelligence” of the computer programmer. But in a medical AI program, hardly any of those programmers are going to be doctors. So the programmer has to get information from a real doctor, which is not going to happen. They are not going to pay 1,000 doctors of the best quality in each field of medicine to do this. They are going to rake and sack websites on medicine.

Here is the next problem I see. If you go to YouTube.com and search long enough, you will find contradictory videos. Search for drinking water. You will probably find a lot of videos on the benefits of drinking water. But if you look long enough, you will also find videos on the dangers of drinking water. Yes! Drinking too much water is very dangerous because it will flush the electrolytes from your body. This is not great medical advice; anybody can see the common sense in avoiding extremes. But the computer doesn’t have common sense, nor any sense. It is an electronic machine that just does what it is programmed to do. Beyond the obvious bad-information problem, there is also a programming problem. Simply put, the program does not always do what it was supposed to do. Almost always this is because the programmer made a bad decision somewhere. But finding where, and even identifying what the problem is, would be excessively time-consuming and subject to opinion.

The Problem of Bad Raw Information

But when the program is made, those “professionals” who give the AI program its raw information can make mistakes. If you just think for a minute about the legal situation in the United States with doctors who make wrong medical decisions (and are sued for it), then you see the problem. Even if AI were paying the best doctors in each field of medicine to provide the raw input data, that would be an extremely intensive and conflict-filled battleground. For any disease, the best discernment about cause and cure keeps changing because of the medical research going on constantly. If you were very rich and went to the best doctors for a particular disease (Hollywood artists, for example, have deep pockets and do this regularly), those doctors could still make a bad decision about the disease and the cure. Remove the humans from the equation, and you do not get something better, but something infinitely worse.

The Problem of Bad Programming

Here the problem is really the poor ability of programmers to do their job. Add to that the limits of computers themselves, as well as the lack of money spent on the best expert advice for the raw information, and this is a disaster waiting to happen. When it happens, you can only blame the fools who depended on AI. All AI is “use with precaution”: the user is responsible for his own actions and decisions, not the AI. And with real doctors? The money doctors make is stripped from them in litigation when there is a malpractice claim. This will eventually come to settle on AI also. The only escapes are the government protecting AI errors through laws (and why would that be good?), or some kind of liability disclaimer when people use AI. Even that can be challenged.

The Problem of Views versus Wisdom

Take YouTube as an example here. What is the best video on YouTube? Not the most accurate and helpful one, but the one that gets the most views. There is a very important point to be seen in what is happening. What is “accepted” or given the label of “good” is what is popular, not what is wise. According to popularity, drugs are good. According to wisdom and experience, illicit drugs are bad. But with YouTube and AI, the most views is what makes something rate the highest or best. This is basic SEO, Search Engine Optimization. Money is made by what is seen the most. If a doctor defines a disease and prescribes medicine in black-and-white print, and that is the best assessment we have to date on that disease, it gets fewer “views” than a showy video with a lot of color, action, and a sexy voice or actress presenting it. Wisdom is the advantageous use of knowledge, so if that “knowledge” is in error, there is no gain to anybody, only damage. And those who do gain are those who are selling the lies.

Understanding is another factor in all of this. To understand something is to be able to discern what is an advantage and what is not. While many people making computer programs claim that their programs can really do this, understand, they cannot. Understanding requires a human. Anything else merely uses the understanding of an expert in a “canned” kind of way. Understanding takes a set of data, analyzes it, and looks for principles. A cut causes infection, which makes the area around the cut turn red. But that is not always true. Sometimes a cut is infected but there is very little or no redness. Circumstances change the situation, and humans are ultimately needed to recognize those circumstances. AI removes that active and present human element. That element has to be predefined and solved ahead of time, and that is not always acceptable.

Always remember, the computer cannot think; it can only proceed along the lines of thinking that its programmer has designed for it. It cannot free-think: take some situation and go completely outside of it seeking a gain, an advantage, a solution, etc. That is human, and computers do not have that human ability. The more we depend on AI instead of humans, the fewer humans are going to be available and prepared when it becomes absolutely necessary to call upon them. Therein lies another grave danger of AI. Instead of aiding humans to do a better job, it seeks to replace them. Yet the more it replaces them, the more it still needs humans to check it. So as a tool it can be useful, but as a replacement for thinking and other hard human work, it has to be constantly checked and rechecked by humans.

The Problem of “Smartness,” Easiness, and Laziness

The next problem is that of laziness, and of some people who are very “smart.” They are so smart that they don’t have to think or work hard to get the benefits of work. For example, there have been lawyers in court who made a brief using AI. In law, everything is citing the existing laws. That is where one side has authority against the other; the judge reads and evaluates each side’s references to the laws and judges which has the best support. So some lawyers have taken the easy road and used AI. The problem is that the cases and laws these AI-generated briefs cite are in error. What kind of error, spelling mistakes? No, they cite laws and case law that do not exist. Any judge has an army of court assistants who do basically one thing: they verify each side’s arguments against the law. In other words, what is cited is looked up and studied, and sometimes a law is cited to establish a point when that point was not in the case at all. There are computerized “libraries” of all the laws and case law (individual decisions in cases that become support for a particular understanding of a law). The assistants look these citations up to check and study them, and they don’t exist. This is basically suicide for the lawyer. His standing before the judge is now zero.
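The checking the court assistants do can be pictured in a few lines of code. This is only a hypothetical sketch: the case names and the `KNOWN_CASES` “library” are invented for the illustration, not a real legal database.

```python
# A sketch of the verification step the court assistants perform: every
# citation in a brief is looked up in a library of real case law, and
# anything not found is flagged. All case names here are invented for
# illustration; a real system would query an actual legal database.

KNOWN_CASES = {
    "Smith v. Jones, 123 U.S. 456 (1900)",
    "Doe v. State, 321 U.S. 654 (1944)",
}

def verify_brief(citations):
    """Return the citations that do NOT appear in the case-law library."""
    return [c for c in citations if c not in KNOWN_CASES]

brief = [
    "Smith v. Jones, 123 U.S. 456 (1900)",  # exists in our toy library
    "Acme v. Coyote, 999 U.S. 1 (2099)",    # fabricated, AI-style
]

for citation in verify_brief(brief):
    print("NOT FOUND:", citation)
```

The point of the sketch is the asymmetry: a computer is excellent at this kind of mechanical lookup, but it took a human to decide the lookup was needed at all.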

But the error coming out is important. The programmers have programmed “pseudo-information” to give to the fool using AI. AI has the selling point of doing what a person can do, but much faster and better. In a way, that is true. But always with the overwhelming factor that the information could be tainted by programmer prejudice, or by the prejudice of the raw information sources.

A while back, there was an AI program that made graphics. The output was very pretty. Nice pictures. But when asked to produce pictures of our founding fathers, they were all black. History tells us that they were not black. Black people were slaves at the time of the founding of our country, and they were not the principals in making our country’s laws. In other extreme cases the founding fathers were Asians, American Indians, etc. History does not support that as fact. It is fiction. But how did the program err? It was not a random error, because when other people made the same inquiry, they got the same results. The error was a programming prejudice that came out very clearly in a specific inquiry.

This all returns to the basic problem of AI. It is man-made, and the errors of humans will always be in it. You can “clean” these programs of prejudice, but it is like cleaning up history in a school. Your personal views (as the teacher) will guide what you say, how you say it, how much you emphasize some things, and how much you ignore others. That is human prejudice. I grew up in South Carolina, and in elementary school the state ordered all elementary-age students to study South Carolina history, which was really colonial history. But the books were excessive about George Washington’s wooden teeth and the many illegitimate children Washington supposedly had with black slave women up and down the eastern seaboard. What Washington did as General or President wasn’t really mentioned, or if it was, it wasn’t emphasized. “Normal people” saw through this as programming or brainwashing, and although every classroom had these books on a shelf, one for each student, nobody opened them. The teacher didn’t teach from them. I was a curious person, so I opened one and read some of it.

But there is nothing new in AI here. It is a brainwashing machine. I am a pastor and a Bible student. In evaluating different Bible versions and translations, it is extremely important to know the beliefs of the translator. They come through in hidden ways in how he translates the Bible. The same is true here with AI. The political, religious, and philosophical orientation of the programmers and of those who provide the raw information will come out in the end product, but you will be hard put to discern it unless it is an extreme, like a black George Washington picture.

The Problem of Intelligence

The fact of the matter is that, for all the hype, computers cannot think. People say, “No, not yet.” The point is: never. Computers are not people, not human beings; they are simply machines that serve mankind. They make errors. When a car (a machine) suddenly doesn’t work, or works poorly, did it decide not to work? No. It is a machine. What happened is that you forgot to do something, like change the oil or put gasoline in it. Barring that you did everything right as the user, some cars fail because of engineering “failures.” But the problems with machines always come back on humans, in one way or another. When a weapon like the atomic bomb is made and destroys a lot of lives, the people who designed it and decided to use it in a specific case are to blame. The “machine” did what it was preprogrammed to do, and the gravity of an error or misuse lies at the feet of humans.

But Does it Really Work?

Here we need to be careful. What does “work” really mean in this context? I use a calculator a lot sometimes (like when doing my taxes). It is a little bigger than a credit card. Does it work? For what I want it to do, yes it does. But if I mistype a number into it, it gives me something I don’t want: an error. The problem was with my chubby fingers typing on that little thing. The same often happens with the computer. But does it work? Well, it does what it was designed to do.

What is the “designed to do” for AI programs? Here things get difficult. For retrieving raw data, a computer is excellent. For searching out specific details, it is great. But for evaluating that information like a human in order to make a decision on it, not so great. In my example about lawyers misusing AI to make their court cases, both the judge’s legal assistants and the lazy lawyers used computers. The legal assistants used them to verify what a human had done; that was a good outcome. But the lawyer was lazy, sloppy, and uncaring about quality, and his computer (the AI program) extended his bad moral traits into a grave error that may well cost him his professional license. Doctors would be in the same situation.

So some would say, “Do not throw the baby out with the bathwater.” But we need to understand that the baby here is AI. It is a help or a crutch for some to get things done without doing the hard work themselves. Laziness. Sloppiness. But in the case of the lawyer, the judge uses legal assistants to detect exactly this laziness on the lawyer’s part. In that case, the computer does not “generate” a human-like response (a lawyer’s legal brief) but rather quickly finds and presents information in a large and difficult-to-navigate library. So in that case, the computer was very helpful and should be used in that way. But these legal assistants cannot think, “I can use an AI program to check and verify a legal brief.” They would then fall into the same problem as the lawyer, and if they use the same AI program, it will report that everything is great, in which case another group of people examining the case in the future will find the error (quoting authorities that don’t exist). The embarrassment is for the legal assistants and the judge, and a great scandal for the lawyers. If a defense attorney uses AI and gets bogus authorities cited, and, being lazy in the first place, does not check it, and the case goes forward for him, what about the prosecuting attorney? He is going to have great embarrassment if the fraud is discovered and he did not reveal it. He didn’t do his job and got hoodwinked by the scam. That is a legal mess. The state can fire the prosecutor and bring a legal case against him.

But leaving lawyers and judges (where it is the judges revealing the error of AI), consider engineers using AI, or medical doctors, or people at NASA. Their errors are not merely an embarrassment; lives are the cost. People die. Why? Because people trusted AI when they should have done the legwork involved.

We go back to the example of Hollywood artists. The principle is that if people do hard work, they should have the benefits of that hard work. When people are lazy and don’t want to do the hard work involved, there are always going to be problems. Since the beginning of time, this has been true. It is called robbery. One lazy person who won’t do the hard work for his own benefit takes, or robs, the fruit of another person’s hard work. I hope the Artists Guild sues the pants off of the AI companies. But you should be very careful how much you use and depend on AI for your life’s work. If you use it, you had better be very careful in checking the accuracy of what it gives you. Hidden errors are embarrassing.


Author: Pastor Dave
