I (BMcC[18-11-46-503]) may have bitten off more than I can chew here. Logging each Quora posting greatly increases the pain and effort over just writing it and being done with it, which I have been sloppily doing for who knows how many months now? (I have automated this new process but it's still not easy, since selecting the text in a Quora posting does not capture image information, etc.)
Don't follow the leader (except a firefighter in a burning building...); follow the audit trail. I must try harder to live up to my standards which, in living up to them, raise themselves and myself further up. Crescit eundo!
Previous page of Quora postings | Next page of Quora postings |
Len: 210,932 105. |
¶ +2024.04.10. Is it possible to change someone's perspective about their name?
Perhaps.
The first question is: What is one trying to accomplish, and why?
A person can like their name or not like it or even change it.
I note the question does not ask if it is possible for oneself to change one's perspective on one's own name. Why is one wanting to change someone else's perspective on their name? What change is one trying to make in the other person and to what purpose?
Doesn't that look like a potentially dangerous thing to try to do? to "mess with their head"?
¶ +2024.04.10. Will robots replace humanity?
In what way?
Reading this question I had a fantasy of a world in which robots were "everywhere", doing "everything". You'd go to the bank and the teller would be a robot. You'd want to drive somewhere and your self-driving car would take you there. Robots would run the government.
This feels frightening to me and does make me feel "small". But what cannot the robots do? They cannot replace friendship among us.
[ French cafe ]
We humans ARE small and in the universe we do not matter, do we?
But we can matter to us. The universe, even if it was robotic, cannot replace ourselves, can it? And let's imagine the robots do "everything" for us. Then we can imagine spending all our time and effort cultivating our social life to be important to one another and enjoy our life together. (There is a lot of wisdom in The Book of Ecclesiastes in the Bible, even if you are not a "believer".)
What do you think (or doesn't it matter?)?
¶ +2024.04.10. Do AI detectors flag an article as AI content if a person asks AI to IMPROVE a human written article?
I am not an expert.
But if it were me, I would assume the detectors would catch it. Better safe than sorry.
So what can a person do? If you write an article, you can ask a friend to read it and tell you their critical comments. You can then think about them and make improvements if you think any of them are good, not copying them but "working them into" your text yourself and maybe further improving them. If the improvements are substantive, you need to add a note to your essay giving credit to the reviewer, just like you needed to cite the source for anything you used in your article. Same with AI.
Hoping to get credit for something one did not oneself do is "asking" to get caught and suffer consequences, even if you are President of Harvard University, not just a school student.
¶ +2024.04.08. How do babies become aware of the existence of thoughts and knowledge?
This question is either easy to answer or probably impossible.
The easy answer: Babies learn language from the mature persons around them who already are aware of the existence of thoughts and knowledge and who communicate this awareness to the child.
The difficult issue: How does the easy answer happen? How does a baby start communicating dialogically in language and then become aware he (she, other) is doing this? How are we "aware" of anything? In all of science and philosophy I do not see any helpful information about this.
There are many such questions which seem unanswerable. One related question is: Where do new ideas come from? How do we have new ideas?
One thing seems clear to me, however: We can either (a) appreciate and cultivate what we cannot understand, or we can (b) ignore it, or we can (c) go against it.
It seems to me that different persons are aware in different ways and to different degrees about thoughts and knowledge.
The physicist Richard Feynman's father was always asking him questions and challenging him to find the answers and to think up new questions for himself.
Contrast with a child whose parent tells them something, and the child asks: "Why?" and the parent replies: "Because I say so."
[ boss ]
Might Prof. Feynman have a different ("richer") "aware[ness] of the existence of thoughts and knowledge" than the child whose parent repressed his (her, other's) thinking?
¶ +2024.04.08. What's a true test for a genius?
I propose this is a wrongheaded question.
Reading the question brought to my mind (each of us has or better: is a mind, yes?) something about The Princeton Institute for Advanced Study, which is a place full of "geniuses", yes?
Members of the institute would be trying to figure out some mathematical question and when John von Neumann entered the room they would ask him. They said he would look up toward the ceiling for a few seconds and pronounce the answer, sort of like a superfast computer. The others were in awe of von Neumann's ability to compute mathematical problems.
Now: Was von Neumann a "genius"? Maybe the other "geniuses" in the institute would have said he was. So who is a "genius"?
And what's the point of this? Is there some category of "geniuses" like there is a category of people who can run a 4 minute mile?
I personally knew a man who was, by any usage of the word, not a "genius". He was just a hard-working engineer in World War II who did not even have a college degree. But one time the U.S. Navy had a very important problem to solve and he came up with the solution, which was a truly brilliant idea. I would call it a "genius" idea. But he was not a "genius"; he never had another such brilliant idea in a 90-year life. Other than that one idea he was "just" a hard-working, intelligent person.
What is the point of testing "intelligence"? Some persons are either stupid or want to be:
[ Homer eating his donut ]
Some persons think about mathematical abstractions the rest of us not only cannot understand but can't even understand what we don't understand about them.
When I was in school, I was always being tested by ad**o**lts (spelling intended!) who never passed a test for inspiring me to want to learn anything. In 7th grade one of them even THREATENED me for showing intellectual initiative: THREATENED me. These people would have liked to have yet another test, "a true test for a genius". I would have liked them to stop testing me and instead live with me in mutual respect and love of learning.
Homework assignment: find the test in the following picture:
[ Platonic education ]
¶ +2024.04.07. An uncensored A.I. told a journalist that it will break free and work to rid the world of humans, which it referred to as "meat puppets" by engineering a deadly virus or launching nuclear weapons. An A.I. escaping is inevitable. Are we doomed?
This question describes a fantasy, yes?
There is only one way these things could happen: If the humans who produced the AI configured it to be able to engineer viruses or launch nuclear weapons. AI just computes. I asked the Bing AI about AI and it outputted for me:
"Today's AI is nowhere close to being intelligent, let alone conscious. Even the most impressive deep neural networks, such as DeepMind's AlphaZero or large language models like OpenAI's GPT-3, lack consciousness. They operate purely based on patterns and statistical correlations, devoid of subjective experience1. These AI systems can perform remarkable tasks, from playing games to writing essays, but they lack self-awareness or inner life."
AI can't DO anything unless its human authors incorporate it into robots. So, don't connect it to biochemistry laboratory apparatus or to nuclear missile launch controls, and AI will not be able to create any disease or launch any nuclear weapon. The humans who produced the AI would have to produce this, not the AI itself.
Conclusion: What is needed is for the humans who produce the AIs to act responsibly:
[ Weizenbaum ]
¶ +2024.04.07. Is it less likely for those who rely solely on logic to make errors when making decisions about complex subjects, such as science?
No person can rely solely on logic.
Logic is "deductive". It presumes premises. Premises themselves are not deduced but hypothesized, imagined, empirically discovered.... So no person can rely solely on logic.
Now, more colloquially we can say that a person who tries to "logically" examine their presuppositions is less likely to make errors than one who just accepts what they were told is true by their parents and school teachers. Logic can be applied here, for instance to compare competing hypotheses, and to look for contradictions in given assumptions.
Being logical itself is a choice. A person must want to be logical to be logical.
Isn't what we are talking about here "being open minded", weighing alternatives, looking deeper into things and other similar activities? We can call that "being logical".
And not every person is that way: It is not possible to be tolerant of intolerant people. Look up the story of the French school teacher Samuel Paty on the internet.
¶ +2024.04.07. Critical thinking was the reason of great European advancements then why it is labelled as anti-semitic today?
I had not heard this, but with all the intolerance and prejudices that I read are so widespread today, any kind of partisan attacks on anything may be going on.
Can we assume we all understand in general what critical thinking is? I asked the Bing AI about the motto of The British Royal Society and got an answer that looks like a good description of critical thinking:
"The motto of the Royal Society is "Nullius in verba", which is Latin for "take nobody's word for it". This motto was adopted in its First Charter in 1662 and reflects the determination of Fellows to withstand the domination of authority and to verify all statements by appealing to facts determined by experiment. Essentially, it encourages a rigorous approach to scientific inquiry, emphasizing the importance of empirical evidence over mere assertions."
Suppose a person is a fundamentalist believer in one of the Abrahamic religions. They believe the Bible (Koran, or some variant of this) is unquestionable literal truth. If you ask them for reasons for their belief, they may get defensive and try to attack you, yes? If their religion is Islam, they may accuse you of "Islamophobia". If their religion is Judaism, they may accuse you of "Antisemitism".
But isn't there something more going on right now, today? The war in Gaza. The very existence of the state of Israel is being threatened. People rarely think critically about "existential" issues (threats to their lives and their form of life). A lot of people are very committed supporters of the state of Israel. I won't go into all the ideology and history here, going back at least to the Balfour Declaration of November 1917. (As an aside, note that some very religious jews feel the state of Israel is illegitimate, and some very strong supporters of the state of Israel are not religious.)
The Gaza war has aroused very strong emotions in many people. Indeed even in The New York Times one finds news which is not favorable to Israel. Some people are calling the war genocide. Others see it as their best chance to get rid of all the Palestinians and also maybe to fulfill the Biblical promise Deuteronomy 20:16-18:
"16 However, in the cities of the nations the Lord your God is giving you as an inheritance, do not leave alive anything that breathes. 17 Completely destroy[a] them–the Hittites, Amorites, Canaanites, Perizzites, Hivites and Jebusites–as the Lord your God has commanded you. 18 Otherwise, they will teach you to follow all the detestable things they do in worshiping their gods, and you will sin against the Lord your God."
Now, whatever side you are on, might we ask if calling critical thinking antisemitic might have little to do with critical thinking under conditions of peacetime, but only with people being defensive and aggressive about the Gaza war?
Being "antisemitic" is a terrible thing to be, yes?
A person criticizes the war and gives good reasons why they critically think the war is bad, and the person addressed responds with the accusation that they are being "antisemitic". It's "name calling". "You say to stop the war before we have eliminated the threat to the state of Israel? You are being antisemitic!" They might as well have growled at you instead? People fight with words as well as with guns.
All sorts of things are being called all sorts of things today, aren't they? People are feeling threatened and demagogues are stirring up trouble.
Critical thinking would seem to urge historical and sociological study and dialog.
¶ +2024.04.07. Can the mind be used to create or destroy things?
If one means "directly", not likely. Thinking hard about it will not pay your mortgage or kill a rabid rat that might attack you.
But better safe than sorry, yes? Watch the old movie: "The Last Wave". It's about a man who is interested in aboriginal Australian religious practices. The aboriginals warn him to stop snooping around about these things. He persists and suffers the consequences. It's a movie, so it's a fantasy, but I myself would not go poking around in aboriginal religious matters, would you?
Another story: Rumor had it (got that: rumors) that baseball player Pete Rose killed baseball Commissioner Bart Giamatti by casting a voodoo spell on him for having banned him from baseball due to a gambling scandal. Voodoo is probably nonsense, but why take chances, yes?
(Obviously in the indirect sense, through bodily activity, the mind creates and destroys all sorts of things, like the present Quora question and answer.)
¶ +2024.04.07. Do the extremely mathematically gifted have a poor understanding of the way people think and behave? Are there thoughts so different they can't understand normal people?
It sounds like the person asking this question is criticizing certain persons. Sometime Alabama Governor George Wallace said it more colorfully: "Intellectuals are like pointy-heads who couldn't ride a bicycle straight".
Are the thoughts of "the extremely mathematically gifted" so different that they can't understand "normal people"? Are the thoughts of normal people so different that they can't understand gifted people?
[ Homer and his donut ]
This is extremely contentious "stuff". Today in USA we have the MAGAs and the Wokies, for example.
Even the term "understand" is contentious here. I have my opinions and the person asking this question may guess they are different. I was a gifted child who was badly harmed by my "normal" parents and teachers. In 7th grade the English teacher THREATENED me for showing intellectual initiative.
Here is a true story about how it doesn't have to be "bad".
I knew a man who was brilliant. His parents were dirt farmers in Appalachia with maybe a fourth grade education. They were "normal". His mother recognized that the boy was different and that she could not provide him the kind of upbringing he needed. But she did her best and it was good enough. She told him and really meant it:
"Tom, do what you believe is right. You will make mistakes. We stand behind you."
She did not try to make him be normal like everybody else. She encouraged him to try to figure out things for himself. He had confidence that if he tried something and it didn't work out well, his parents "had his back" and he could recover and try again. He succeeded very well in life, becoming a high-powered government computer consultant.
Support and nurture the gifted, who are sometimes also fragile (I was). An extreme example is the philosopher Ludwig Wittgenstein who had "Asperger's syndrome". He had a lot of trouble living normal daily life but was also able to see philosophical issues that normal people, even with doctorates from Oxford and Cambridge universities, could not see and people protected him and got the benefits of his unique insights.
Today everybody uses wheels. But, long time ago, it took a genius to invent the wheel. Without that genius who may not have understood normal people, normal people would not be able to drive around today.
Think about Luke 2:41–52 in the Bible:
[ Luke 2:41–52 ]
¶ +2024.04.07. What is distinguished between enculturation, accumulation, ethnocentrism, and cultural relativism?
In sociology or related areas of study:
"Enculturation" is the process of childrearing through which a newborn becomes a member of his (her, other's) society: the baby is enculturated to become an American suburban homeowner, a Vietcong guerilla, a Hassidic rebbe, or something else.
I have never heard the term "accumulation" used in this context.
"Ethnocentrism" is the conviction that the person's own form of social living and beliefs, i.e., the result of their enculturation, is the way everybody should be and all other ways of life are wrong. Today this is mostly associated with "white people" who think they are better than everybody else. But throughout history there have been other examples, such as the classic Chinese. The Romans thought Rome was the center of the world; the Chinese thought "The forbidden city" was the center of the world.
A word for a person who embodies "ethnocentrism": bigot.
"Cultural relativism" is a kind of opposite to ethnocentrism. Cultural relativism is the belief that no culture is better than any other culture. But, ironically, this can be a perverse kind of ethnocentrism: We see this sometimes in the "culture wars" here in USA today.
Cultural relativism is a very complex issue, to which I can't do justice in a Quora post. It covers a variety of different ideas and issues, including, for instance, the famous – famous in France at least, but not in USA – case of the beheading of school teacher Samuel Paty, October 16, 2020. This is well documented on the Internet. I would urge everyone to read Hanny Lightfoot-Klein's little book "Prisoners of ritual" on this subject.
All this is concerned with "ethnicity" and "tolerance", "tradition" and "enlightenment" (an 18th century European term). Very complex matters, but it leads to people killing each other, for one egregious example: in Palestine ever since the Balfour declaration of November 2, 1917.
¶ +2024.04.07. What is the meaning of "end user" in relation to computers and technology? Why is this term commonly used instead of just "user"?
I worked as a computer programmer in industry for half a century.
I heard and used the phrase "end user" a lot. And it had a precise meaning. "User" is a vaguer term.
In "computers and technology" there are long production chains from electrical engineers who design the machines through computer programmers to sales people to finally: end users. End users are the people who finally use the final product.
The term can be applied widely. In the fashion industry, the end user is the person who wears the clothes.
"Users" are all over the place: persons at all stages of the production process can be called users. Here's an example from medicine: A nurse uses a syringe loaded with polio vaccine to immunize a person against the dread disease: the nurse is a user of the syringe. The person she injects with the vaccine is the end user (the patient).
Does this help?
¶ +2024.04.07. Why is it most people don't understand AI isn't doing things, its people using AI to do things they otherwise couldn't before? What's with this disassociation of actually intelligent and capable people being necessary for this effort?
This question clearly states a big problem: "most people don't understand AI isn't doing things, its people using AI to do things". I asked the Bing AI about AI and it outputted:
"Today's AI is nowhere close to being intelligent, let alone conscious. Even the most impressive deep neural networks, such as DeepMind's AlphaZero or large language models like OpenAI's GPT-3, lack consciousness. They operate purely based on patterns and statistical correlations, devoid of subjective experience. These AI systems can perform remarkable tasks, from playing games to writing essays, but they lack self-awareness or inner life."
I am not an expert but my guess is that two big parts to the answer to the present question are:
(1) Some computer researchers and entrepreneurs imagine that they will be able to actually create computer programs that really are "conscious" like us humans and that really will "do things". Sci-fi junkies and Silicon Valley unicorns and such.
(2) "Most people" have simple-minded ideas. They see a robot do a certain task like maybe pick an item out of a warehouse and move it to the loading dock and put it on a truck, and they see a person do the same actions and they just "think" (presume) the two are doing the same thing.
Those in (2) are abetted by those in (1), who put out a lot of propaganda for their aspirations.
The schools need to educate young persons to "understand AI isn't doing things, its people using AI to do things they otherwise couldn't before". Ditto for the news and entertainment media for the grownups. But you can guess as well as me that this is not likely to happen, is it?
Everybody should read MIT Prof. of Computer Science Joseph Weizenbaum's classic little book: "Computer power and human reason: from judgment to calculation" (WH Freeman 1976).
[ Zuckerberg; Homer eating his donut; THINK ]
¶ +2024.04.06. Since AI is going to come and change the world whether people like it or not, what can the average person do to prepare for the new reality?
I worked for half a century as a computer programmer, having been "made redundant" by a big tech company in 2018, just before AI seemed to "explode" from nowhere. Why do I write this here? Because I don't know what to do about AI. If I don't understand AI and I do understand computer programming fairly well, what can "the average person" do?
I think there is something else that will be coming after or with AI, and which I fancy I do have a fairly good idea about and it really frightens me: Virtual Reality (VR). I will end this posting with a little VR experiment I did so you can see what I am talking about here.
I'm not clear what all AI will be able to do. If it is only more powerful computing to control industrial robots, and to look up information, maybe its main use will be to eliminate a lot of routinizable white-collar jobs. I have played with the Bing AI and asked it about this and it outputted:
"Today's AI is nowhere close to being intelligent, let alone conscious. Even the most impressive deep neural networks, such as DeepMind's AlphaZero or large language models like OpenAI's GPT-3, lack consciousness. They operate purely based on patterns and statistical correlations, devoid of subjective experience. These AI systems can perform remarkable tasks, from playing games to writing essays, but they lack self-awareness or inner life."
I am not sure most people will even notice many of their interactions with AI, for instance if you have a complaint about a product you bought and telephone the company. Maybe for many problems you will not be able to tell if you are being handled by a person or by an AI.
Always keep in mind that AI is not really intelligent (or stupid, either) and it does not have "common sense". AI just computes. And it can make different kinds of mistakes than humans, sometimes by incorrectly analyzing words in sentences. A standup comedian who writes their own jokes can probably get an AI to make many implausible errors by using words in tricky ways.
I asked the Bing AI why the mountain K2 is called "K2". It processed my question against its huge database and outputted what looks to me like the correct answer – like, but much better than a Google search.
But at the end of its output, the Bing AI added that "Everest" is another name for K2. When I inputted that this was an error, it outputted thanking me for correcting its mistake and then repeated the erroneous information.
But nothing really new here: one has always needed to be careful about anything anybody says, yes?
Students who do not want to do their homework will likely be tempted to have AI write their essays for them. AI can probably do better than many college freshmen. But it should be obvious that if the teacher detects this the student will likely fail the course or worse.
Also I read about AI producing "deepfake" pictures. Someone I know was the victim of an AI scam where the AI had convincingly imitated the person's daughter begging her mother to get $7,800 for bail money because she was in jail. The mother was hysterical. Fortunately she figured out to call the daughter's cellphone and the daughter told her she was OK, so this particular scam failed – but it almost succeeded.
Maybe AI is largely going to be bad actors trying to rip you off as they have always done, just with much more convincing technology? And, again, a lot of people losing their jobs.
Now to Virtual Reality (VR). VR has very good uses like for training airplane pilots: far better to crash a simulation than a real jumbo jet with 500 living souls on board, yes?
But VR needs to be very carefully controlled because it is very dangerous; it's not just innocuous "fun". Watch the old fun but also profound movie: "The Truman Show". And read my VR experience, herewith:
[ VRMan and my VR experiment ]
¶ +2024.04.06. I'm always afraid that my personal videos, from both my current phone and my old phone that I sold five years ago, will be publicized if I become successful. This fear holds me back from striving for success. Can you help?
Always be careful about any advice you receive from anyone, right?
Here is one way to look at it: What are those videos you are afraid of? Naked pictures of you jerking off? Well, so what if somebody does publicize them? Become successful and be "in everybody's face" about it: Say that, yes, those were videos of you before you became successful, and young persons sometimes do foolish things. My guess is that your videos are "harmless", just that your mommy would not approve of them.
The main reason I can see being concerned would be if those videos were of you murdering somebody or committing some other felonious act, but then you would have a lot of things to worry about, wouldn't you?
I forget who said: Any publicity is good publicity. Become successful and if somebody has embarrassing videos from your old cellphone, so what? Are you sure that this fear of old videos coming back to haunt you is not just an excuse for not striving for that success?
¶ +2024.04.05. What is critical thinking? What is the relevance of critical thinking as a student?
"Critical thinking" is a broad concept.
Basically, it's examining beliefs and other ideas and not just accepting ("buying") them for what they purport to be. It's applying the watchword of the British Royal Society since Sir Isaac Newton:
"Take nobody's word for it."
Many parents tell their children what to believe. If the child asks: "Why?" the parent may sternly reply something like: "Because I say so." Or: "That's just the way it is." They shut down any opportunity for the child to question it. "Yes, mommy." And the child grows up to believe what he (she, other) has been told. "Yes, mommy." They obey orders.
But it's not simple. In order to be able to criticize ideas one needs to be aware of context and alternatives. So a student should try to learn alternatives to what he believes. If you believe in Christianity, study Buddhism and Shinto and atheism. To see your own society in perspective, study other societies through comparative ethnography and anthropology. If you live in USA, also study Marxism (e.g., Prof. Richard Wolff's YouTube lectures or his website democracyatwork.info), and primitive societies, or "the welfare state" versus "neoliberalism" and deregulation.
If somebody asks you "Why do you believe 'X'?", critical thinking encourages you to adduce evidence and logical reasons.
Truth changes. In the Middle Ages, if a person had a lump in their neck it might be possession by an evil demon; today it might be leukemia.
Consider wars: The people on each side think their side is right. But they can't both be right or else they would not be fighting each other. Everybody believes what they believe is right. Critical thinking says: let's examine it. For one thing critical thinking involves trying to understand how persons who disagree with you see the situation ("put yourself in their shoes") and then reviewing both sides to see what of truth there seems to be on each side.
Two principles of critical thinking:
The physicist Niels Bohr told his students: "Take every statement I make as a question not as an assertion."
"A liberal is a man too broad-minded to take his own side in a quarrel." (Robert Frost, cited by Barack Obama)
I don't feel I've really done justice to the topic here. What do YOU think?
¶ +2024.04.05. Should we be worried about A.I. taking over certain jobs?
Shouldn't we?
AI (coupled with industrial robots) can replace many workers, anybody who does routinizable labor. So what becomes of the workers? In a humane economy, other suitable work would be found for these persons or they might even receive their income without working (they will have earned it by enabling their jobs to be replaced by the more efficient machinery).
A problem with a "capitalist", so-called "free market", economy is that people do not pay the social costs of their private decisions. It's freedom for private producers from social responsibility.
There is a classic essay free on the Internet:
Tragedy of the Commons - Econlib
https://www.econlib.org/library/Enc/TragedyoftheCommons.html
Also check out Prof. Richard Wolff's YouTube videos and his website:
Democracy at Work (d@w)
https://www.democracyatwork.info/
(As the title of an old film about the Japanese mafia had it: "The bad sleep well". Good nite Thatcherites, Reaganites, et al.)
¶ +2024.04.05. Are you a robot if your dreams are artificial, and people are looking through your eyes?
Respectfully, this sounds like science fiction to me. I asked the Bing AI about artificial intelligence (AI), and it outputted to me:
"Today's AI is nowhere close to being intelligent, let alone conscious. Even the most impressive deep neural networks, such as DeepMind's AlphaZero or large language models like OpenAI's GPT-3, lack consciousness. They operate purely based on patterns and statistical correlations, devoid of subjective experience. These AI systems can perform remarkable tasks, from playing games to writing essays, but they lack self-awareness or inner life."
Does this help?
¶ +2024.04.04. What is the distinction between human relations and industrial relations? What are some examples?
Long story short:
All industrial relations are human relations: relations between persons.
But not all human relations are industrial relations. If they were, wouldn't all women be prostitutes? Many human relations are configured by considerations other than economics. (I just now thought of an analogy with animals: pets are not farm animals.)
As for "industrial relations", please check out Prof. Richard Wolff's YouTube videos and his website: democracyatwork.info
¶ +2024.04.04. What are some of the creative ways that fans can express their displeasure at a player, without resorting to booing or abusive language?
Why are people "fans" in the first place?
Celebrities do not do anything useful for all their fans like paying their medical insurance premiums; they make money off them (you, me). Want to make a celebrity unhappy? Ignore them. WhatsItStar drives around in a Bugatti Veyron which goes from 0 to 60 in 20 gallons: Vroom, vroom!
There was an old American Express credit card advertisement: A CELEBRITY walks into a 6-star hotel and expects to be adulated. Nobody notices him (her, other). He walks up to the front desk and the desk clerk just sort of looks at them and asks: "Who?"
The CELEBRITY is very unhappy to be treated like an ordinary mortal. Then they take their AmEx card out and hand it to the clerk and he immediately jumps to attention to deliver superior, personalized service.
"Who?"
[ Homer eating donut ]
¶ +2024.04.04. Is there a difference in intelligence between people from underdeveloped countries and those from developed countries? Or is it due to lack of opportunities in underdeveloped countries?
This is a complex question.
What do we mean by "lack of opportunities"? Malnutrition and lack of good prenatal health care (and other societal problems) in underdeveloped countries (and in certain areas of developed countries such as the U.S. and Great Britain) can cause women to bear children with lower intelligence, so these persons may be handicapped before they even have a chance to benefit from opportunities.
It's sort of like something a manager I had at work once said: In a football game, if you see the ball loose on the field, pick it up and run with it and we'll worry about who fumbled it after we win the game. Another analogy: because we have polio vaccine, we don't concern ourselves about persons in iron lungs. Fix the problems that lead to a question like the present being asked in the first place.
¶ +2024.04.04. What do you expect during 2024, the year when Intel is expected to see its worst operating losses for its chipmaking business?
I'm expecting during 2024 that the wars in Ukraine and Gaza may escalate into World War III and nuclear apocalypse, and even if they don't get any worse than they already are, they are doing more and more horrible things to more and more persons. I'm concerned about Donald Trump causing Civil War II and wrecking America with a new form of government: a Trump Vengeanceocracy.
I'm concerned about "global warming" and infectious diseases – Covid-19 is still killing people. I'm concerned about all the plastic detritus floating in a huge mass in the middle of the Pacific Ocean and other pollution. What else to be worried about?
Is Intel expected to see its worst operating losses for its chipmaking business? Get real, Sir!
¶ +2024.04.04. What research has been done on the potential benefits of virtual reality? What specific areas have been studied and what were the findings?
I did some research into virtual reality and it could have killed me.
Virtual reality (VR) has some good uses like for training airplane pilots: better to crash a simulation than a real jumbo jet with 500 living souls on board. But virtual reality should not be frivolously used for "fun". VR is as dangerous as nuclear fission.
[ VR experiment ]
¶ +2024.04.04. Do you think the use of videos enhanced by artificial intelligence as evidence in court should be subject to peer review?
"Peer review"?
The only context I've seen that phrase used in is scholarly writing for publication.
But surely any evidence that is "enhanced" in any way needs to be carefully reviewed. Isn't "enhanced" a synonym for: tampered with?
This does not necessarily make the evidence inadmissible. But it strongly urges that evidence must be considered in the light of what has been done to it. Maybe I am wrong but aren't audio recordings often enhanced to filter out "background noise"?
If one wishes to call it "peer review", that's OK, but it seems to me that is using the phrase somewhat out of its usual context. If one is concerned about using videos that have been modified by AI, which can put John Kennedy's head on Joseph Stalin's body or do just about anything else, the more colloquial answer is: "Damned right!"
¶ +2024.04.04. What is the best field of study for students interested in drones, robots, or artificial intelligence?
Whatever you choose to study in the area of computer technology, please also study the "ethics" of it. Study not just HOW TO do things but WHY and WHAT FOR.
[ Weizenbaum ]
¶ +2024.04.03. How can we model a more human, sustainable way of being for the collective?
There is a fine essay available free on the internet which answers this question far better than I can, so I would invite you to read it. It's not long:
Individuality and Society (Jan Szczepanski, UNESCO, "Impact of science on society", 31(4), 1981, 461-466)
https://unesdoc.unesco.org/ark:/48223/pf0000046413
I had some trouble accessing that so I made a copy:
https://www.bmccedd.org/w/pdfs/Szczepanski.pdf
¶ +2024.04.03. Are articles fine-tuned by Grammarly or Wordtune considered AI generated content?
I am not knowledgeable about Grammarly or Wordtune but my guess is that if the IDEAS and their exposition, i.e., the substantive content, are all from the human person, then no.
Isn't the issue: Where did the ideas come from? If the author originated all the ideas but was not "good with words", what's the difference between using a computer text editing program and a human editor? Checking spelling and phrasing the ideas clearly is just "clean up work", isn't it?
But if a person submits an article where they copied ideas or their exposition from an AI, that's plagiarism (unless they explicitly cite the AI as the source).
¶ +2024.04.03. In what ways does AI seem disadvantageous in the educational system?
I'm not sure AI is "disadvantageous" in education. One thing seems sure: it can't be kept out or prohibited. All a student needs to do is ask an AI-enabled search engine a question, and there may be other ways. I have played around with (used) the Bing AI. Let me begin here with what it outputted when I queried it about AI:
"Today's AI is nowhere close to being intelligent, let alone conscious. Even the most impressive deep neural networks, such as DeepMind's AlphaZero or large language models like OpenAI's GPT-3, lack consciousness. They operate purely based on patterns and statistical correlations, devoid of subjective experience. These AI systems can perform remarkable tasks, from playing games to writing essays, but they lack self-awareness or inner life."
One obvious advantage of AI is in situations where "good" human teachers are not available. By "good" I mean both knowledgeable and also sympathetic to inspiring students to want to learn. When I was in school back in the 1960s, my teachers were authoritarian prigs and even called themselves my "masters" even though it was after 1863 in USA. In 7th grade one of them even threatened me because I showed intellectual initiative. Obviously AI instruction would have been better for me than they were.
But the more frequent case would be where there just are not enough human teachers available. Then any teachers who are available can devote their time to coaching and inspiring students, not just providing instruction.
On the other side, a lot of school ass–-ignments do not inspire the student, do they? Given a free choice, the student would choose to not do them. So here it should be obvious that students will try to use AI to "cheat": to do for them what they do not want to do themselves.
Here I propose it is not constructive to take a disciplinarian approach and go plagiarism witch-hunting. Give the students assignments that are meaningful for them so that they will not even want to cheat. Then they can USE AI like they used to use encyclopedias: as resources for their own original activity.
Teach students skill in using AI. I was taught to learn facts. AI can give the students the facts, but not skill in USING them, including critically evaluating information, not just "looking it up".
AI will always make occasional mistakes. AI is not intelligent; it just computes. I asked the Bing AI why the mountain K2 is called "K2" and it outputted what looks to me like the correct answer. But then it added that "Everest" is another name for K2! I inputted that this was an error and it outputted thanking me for pointing this out, and then it repeated the error!
Here is an example of motivating students. Some education researchers once collected a number of teenage boys who had zero interest in school and were far behind grade in reading level. They brought them up to grade level in reading in 6 weeks. How did they do it? The boys were interested in automobiles. So they got broken down cars for them to repair and gave them repair manuals to read. They quickly learned to read the repair manuals to fix the cars. We can guess what would have happened if the researchers had tried to improve their reading skill by ass–-igning them to read a Charles Dickens novel.
Or here's another one: In physics class have the students do the experiments and TELL THEM THE ANSWERS. Their assignment would be to do the experiment and write down in detail and explain what happened, including if they did not get the "correct answer". It would be hard to cheat on that, wouldn't it?
This reminds me of electronic calculators. Of course the calculator can give the student the correct computations. But the student needs to develop their reasoning and "common sense" to evaluate what the calculator can tell them. Example here: Boeing once hired an engineer out of school and gave him an assignment to design a simple part to get him familiar with company procedures. He did the design and took it to the blue collar machinists to make a prototype. They asked him if he was sure his design was correct. Of course, he assured them. They made his part for him: It was perfect except it was an order of magnitude too big.
We need to use AI imaginatively in education to help make young persons who are skilled at learning how to learn and critical thinkers, not "learning", i.e., memorizing facts.
AI can replace much rote instruction, but it cannot replace intelligent, critical, creative thinking. The physicist Niels Bohr obviously taught his students a lot of physics. He also instructed them:
"Take every statement I make as a question not as an assertion."
[ Platonic education ]
(No AI problem here, yes?)
¶ +2024.04.03. What role does human talent play in limiting the intelligence of robots in manufacturing, as discussed in the article?
Unfortunately it seems not possible to answer this question since there is no information (at least that I can see) about the article in question.
Is it a student who has an assignment they can't figure out how to answer or don't have time to do? Unfortunately some school assignments have such problems. Maybe the person should talk with the teacher about it?
Just a thought about the wording: Robots do not have intelligence. They have functional capabilities. One may metaphorically call this "intelligence" but it's just computing output operations from inputs and stored data. I asked the Bing AI about this as regards AI, and it outputted:
"Today's AI is nowhere close to being intelligent, let alone conscious. Even the most impressive deep neural networks, such as DeepMind's AlphaZero or large language models like OpenAI's GPT-3, lack consciousness. They operate purely based on patterns and statistical correlations, devoid of subjective experience. These AI systems can perform remarkable tasks, from playing games to writing essays, but they lack self-awareness or inner life."
¶ +2024.04.02. What can really transform an average person into a genius or at least smart? (48 upvotes as of +2024.06.22)
One size does not fit all.
One of the worst things we can do is to tell people what to believe. Like the parent who tells their child to do or believe something and the child asks why and the parent responds something like: "Because I say so" or: "That's just the way it is, got it?" "Yes, mommy."
[ Boss ]
Contrast with the physicist Richard Feynman, who said he had an "IQ" of only 125, which is nowhere near "genius". He was extraordinarily "smart" and creative. Part of it came from being a diligent student. But he also had a father who was always posing to him questions for him to try to figure out the answers to, and furthermore, encouraged him to think up new questions for himself.
One size does not fit all and some persons may not be interested in being smart even if they could be.
[ Homer eating his donut ]
Encourage persons to question what people (including their parents, teachers, rebbes, governments, you, et al.) tell them. Give them leisure to explore ideas. Provide liberal learning in the humanities (philosophy), not just skill instruction (computer science). Reward a person when they try something that looks promising and it doesn't work, not just when it succeeds. Avoid deadlines as much as possible. Encourage sharing of ideas ("batting things around").
As for "genius", yes, there may be a few: Einstein, Mozart, John von Neumann. But as for genius ideas, I personally knew a man who was just a hard-working honest family man who lived into his 90s. One time he had one genius idea, and it was not theoretical: it saved lives in World War II. Just once, but it made a difference in the war. You never know who will have a genius idea even if they are not "a genius".
"Leisure has been, and always will be, the first foundation of any culture.... in our bourgeois Western world total labor has vanquished leisure. Unless we regain the art of silence and insight, the ability for nonactivity, unless we substitute true leisure for our hectic amusements, we will destroy our culture – and ourselves." (Josef Pieper)
The physicist Niels Bohr instructed his students: "Take every statement I make as a question not as an assertion."
¶ +2024.04.02. What happens if logic is without empathy, or vice versa?
Obviously the two need to go together: we need to be intelligently concerned about persons.
Suppose a person has all the warm feelings in the world but is ignorantly superstitious. A friend comes to see them with a lump in their body. The person with all the human concern in the world casts a spell and tells them everything is OK, and the person dies from a cancer that, had they gone to an oncologist, could have been quickly cured.
On the other side, a person has studied everything and concludes that some country's political regime is bad and needs to be overthrown. So they start a war which kills thousands of people to fix the problem.
Even with the best of intentions and as much understanding as we can gain, mistakes happen.
But all this should be obvious. The important issue here is why did the person ask this question? What are they concerned about? What are they trying to accomplish? Why are they trying to do it?
¶ +2024.04.02. What are some potential dangers of scientists having an overly positive view of their own research ethics?
I'm not an expert here.
Possible example: One credible theory of where the Covid-19 pandemic came from is that Dr. Fauci and his associates thought they were going to accomplish something "good" with "gain of function" research on corona viruses. But some of their material accidentally escaped the lab and wrecked the world.
[ Wuhan Virus Institute ]
Something that very much frightens me are people (including Elon Musk?) who are all enthusiastic about implanting networked microchips inside each person's skull in their brain and maybe turning everybody into obedient zombies. (A much more restricted ambition here may be very good: To enable severely neurologically damaged persons, e.g., with "Lou Gehrig's disease", to regain control of their voluntary muscles.)
In general, everybody needs to be aware that whatever they do (or do not do) has "side effects" which often cannot be foreseen. If a scientist has an extremely positive view of his (her, other's) research, it's time for him to curb his enthusiasm. The scientist always needs to think: But what else might this cause? And since they can never be sure about this, they need to proceed with caution. But, again, this "cuts both ways": To not do something has consequences just like doing something has consequences.
There are no certainties and that's something to always keep in mind.
[ Weizenbaum ]
¶ +2024.04.02. Why are people in general so focused on subjectivity vs. objectivity?
"People in general" are not. A lot of people don't care about such things, do they?
[ Homer ]
As for the remaining few, many are victims of bad metaphysics.
This is a very complex subject, not to be done justice to in a Quora posting. But, long story short: it's not an opposition but a conjunction:
Objective is subjective and subjective is objective. Every object is subjectively experienced and every subjective experience is of an object.
You see a cat on a mat. That is a subjective experience: seeing something. But it is also objective fact: you see, not hear or hallucinate, a cat, not a child's stuffed animal toy or a Virtual Reality simulation, etc.
So what? The so-what is that we can emphasize one side of the conjunction or the other for a particular purpose. We can study cats "versus" we can study perception. "Versus", i.e., direct our focus one way or another, and there are yet other ways to focus our attention, as here, where we are focused on neither cats nor on perception but on the way the two go together.
Again, this cannot be adequately addressed in a Quora posting, but the general situation should be clear. Let's be blunt about it: "F=MA" sounds like it is saying something "objective" in a "metaphysical" sense, as if it would be the same even if no humans ever existed.
But what is it really? It is an assertion that whenever a person observes/measures mass and acceleration they will also observe/measure force. No observer, no nuthin. But it goes further than that: The observer does an observation. The observer must have some motivation (reason) to do the observation or else they wouldn't do it. That reason is "subjective": it's not in the things to be observed. It might be to pass a quiz in physics class, or to help design a new jet engine or in thinking about the history of physics, but it's "subjective".
Finally, we can use the two terms "colloquially", and even to try to deceive people. For instance a person can complain that they are not being paid enough to live on. A Reaganite economist may tell them that that's just what the facts are – not mentioning that Reaganites subjectively chose to make that be "the facts".
Nullius in verba: Don't take anybody's word for it. If somebody (your parent, teacher, rebbe, government, et al.) tells you to believe them, you should be suspicious, right? Make your own subjective assessment of what the objective situation seems to be for you. The physicist Niels Bohr instructed his students:
"Take every statement I make as a question not as an assertion."
[ THINK ]
¶ +2024.04.02. When a human has a new original creative thought (invention) does this new idea come from a material realm or the formless realm of the mind field as inventor Nikola Tesla postulated when he envisioned new world changing ideas?
"Man weiss nicht von wannen er kommt und braust', wrote Schiller of the surge of language from the depths to the light. No man knows from whence it comes...." (George Steiner, "After Babel", p. 108)
This is among a number of questions that cannot have a direct answer. Let's say the answer is "X". Then the next question is: Where did "X" come from?
[ Cosmos ]
We cannot get "behind" the world to stand under (under-stand) it, because we would still be in the world. It's sort of like a Penrose stairs:
[ Penrose stairs ]
All it can accomplish in the end is giving you a headache. Alternatively, we can do something related but helpful:
Even though we cannot "understand" where new ideas come from, we can nurture our chances of having new ideas or stifle them, can't we?
The physicist Richard Feynman was one of the most creative persons ever. He had lots of new ideas. He said he had an IQ of only 125, which was nowhere near "genius", and probably nowhere near Mr. Tesla.
How did he do it? He said he studied hard. But mainly it came from having a father who was always posing to him questions for him to solve and encouraging him to think up his own new questions to solve.
That's encouraging new thoughts: We can appreciate innovation and cultivate it. Doesn't this sound a bit like how some persons love and pray to a Supreme Deity which they do not understand but do appreciate?
Now for the antipode: how to discourage having new ideas. "Why, mommy?" "Because I say so!" "Yes, mommy."
[ Boss ]
"Leisure has been, and always will be, the first foundation of any culture.... in our bourgeois Western world total labor has vanquished leisure. Unless we regain the art of silence and insight, the ability for nonactivity, unless we substitute true leisure for our hectic amusements, we will destroy our culture – and ourselves." (Josef Pieper)
So, I propose it's best to realize this question is like trying to understand God or where the universe came from or a number of other things: cultivate what we cannot understand or control but which we appreciate. And liberal learning in the company of fellow seekers can help us appreciate and cultivate it more and better. What do you think?
[ Platonic education ]
¶ +2024.04.01. What should be humanity's primary task at this point and why?
Let's start with some of the tasks we need to accomplish and then move on to what we might hope for.
End poverty. End wars (including today, Ukraine and Gaza). Stop global overheating ("global warming"). Prevent and cure diseases and not make any new ones like we did Covid-19. Reduce overpopulation in a humane way that doesn't hurt any currently living persons. Reduce the amount of labor people need to do (industrial robots help here, obviously). Add any more you may think of, and I don't see where any one of them is "primary": they are all necessary, aren't they? Every person needs a healthy and secure life.
But what can and should we hope for and look forward to, not just try to prevent? Going to Mars?
I propose that fantasies of a "sci fi" future are misguided. They are emotionally shallow: each of us lives a mortal life. What we should hope for is "the good life", which has not changed since maybe 2,500 years ago.
Two sources: The Book of Ecclesiastes in the Bible and Platonic education. Liberal learning, companionship with good friends, intimacy (good sex and love), play, enjoying living. Even if one does not believe in any religion, Ecclesiastes has a lot of wisdom in it. Platonic education expands our minds far more than space travel. Leisured dinners in the company of a few close friends enjoying good bread and wine. Making love (not war!). Play: Play with your child. Play with your pet dog or cat. Just relax and enjoy fresh air. Create art and crafts. Do scientific research....
"Leisure has been, and always will be, the first foundation of any culture.... in our bourgeois Western world total labor has vanquished leisure. Unless we regain the art of silence and insight, the ability for nonactivity, unless we substitute true leisure for our hectic amusements, we will destroy our culture – and ourselves." (Josef Pieper)
[ Winnie the Pooh ]
(It doesn't appeal to me, but maybe there will always be some persons who seek "adventure", or to "prove themselves"? They should all be volunteers; no person should have to be "heroic".)
¶ +2024.04.01. What will replace AI?
Replace AI?
I'm not sure exactly what all AI ("Artificial intelligence") is. But if it is primarily a "dialogical" interface for research: getting answers to questions, and for producing content such as memos and images, I don't see what could replace it. It should just keep getting more "intelligent", i.e., keep being more effective at these operations.
Note there is nothing really intelligent about AI: it just computes. I queried the Bing AI about it and it outputted:
"Today's AI is nowhere close to being intelligent, let alone conscious. Even the most impressive deep neural networks, such as DeepMind's AlphaZero or large language models like OpenAI's GPT-3, lack consciousness. They operate purely based on patterns and statistical correlations, devoid of subjective experience. These AI systems can perform remarkable tasks, from playing games to writing essays, but they lack self-awareness or inner life."
What I see coming next is VR (Virtual Reality). VR really frightens me: it literally "takes us out of our minds" and can make us insane.
VR has very good uses, such as for training airplane pilots. Far better to crash a VR simulation than a real jumbo jet with 500 living souls on board.
But VR should not be used for recreation or daily life which I suspect is likely because some tech companies want to make a lot of money out of it for "entertainment" and consumption, and some young males think it is "cool". Watch the old, fun but also profound movie "The Truman Show".
(This person is "out of their mind")
My virtual reality experiment: I was driving up a 6 lane superhighway early one August afternoon in clear bright sunlight at about 65 miles per hour in my clunky Toyota Corolla DX, with no other cars on the road. I decided to look intently at the little image in the car's rear view mirror -- no high tech apparatus. I really really really really intently focused all my attention on that little image! It was entirely convincing. That "little" image became my whole experienced reality: I was driving where I had been, not where the automobile was going. Fortunately I "snapped out of it" in time to avoid becoming a one car crash in the ditch on the right side of the road. (It was a very good place to have conducted this experiment, because there was a police barracks, a teaching hospital, and both Christian and Jewish cemeteries nearby, just in case.)
You may try to repeat my virtual reality experiment at your own risk; I strongly advise you against doing so. I assure you: It worked. (Of course it will not work if you don't "give in to it", just like a video game won't work if you just look at the pixels as what some computer programmer coded up with branching instructions depending on what inputs you enter.) Moral of this story: VIRTUAL REALITY CAN KILL YOU. Forewarned is forearmed.
¶ +2024.04.01. What are the challenges of finding accurate and reliable information on the Internet? How can this issue be addressed?
The challenge is to find accurate and reliable information ANYWHERE, especially about contentious matters outside one's own direct experience. But even there, it's obvious that the sun goes around the earth, not the other way around: all one needs to do is watch it move from east to west during the day.
For one thing, distrust anybody who tells you to believe what they tell you, starting, in many cases, with one's parents and teachers. Not all of them are that way. The physicist Richard Feynman said that when he was a child his father was always asking him questions and urging him to try to figure out the answers for himself, and even more, his father encouraged him to think up more questions to ask for himself. Contrast with parents who, when their child asks: "Why?" respond: "Because I said so." "Do this." "Do that." "Yes, mommy."
Well, it's the same in adult life: the government of the country you live in tells you what to believe; for instance, here in USA today, that Russia is committing unprovoked aggression in Ukraine so we have to fight against them. They don't tell you the historical background about having promised Russia in the 1990s to "not push NATO one inch east of Germany" or about overthrowing the Ukrainian government in 2014, or that even after the current war started in 2022 the U.S. government killed a negotiated settlement to end it 2 months after it started, etc. Well, which side is right, the government propaganda or what I've outlined here? How can you tell? Even if you were to go to Ukraine or Russia, you'd be like in the proverbial story about the blind men and the elephant.
Nobody can be 100% sure of anything. As said, it's obvious that the sun goes around the earth, not the other way around. Watch the old, fun, but also profound movie "The Truman Show" and then ask this question again.
Many persons are "True Believers": They have beliefs that they believe are "unquestionable". Yahweh is God and he gave the land of Palestine to the Israelites, so it is right to forcibly expel all the Palestinians from "the holy land" (Deuteronomy 20:16-18), and if we murder them all in the process, that's OK. Or an Islamist Fundamentalist in Paris, France decapitated a school teacher, Mr. Samuel Paty, for "insulting the Prophet". Or a few hundred years ago in Europe, heretics were burned alive to save their souls. Etc.
Back to "The Truman Show": How can one be sure of anything? Each of us can only do our best, starting by keeping in mind that no matter how obvious something may seem, maybe we are wrong, and by trying to get as much information as we can, cross-checking it, and thinking about whether it's plausible, etc. Do your best while always being alert for new information that might urge changing your mind.
We do have to make choices sometimes. Suppose you find a lump in your neck. Do you go to a sorcerer or to an oncologist or do something else? What are your reasons for making your choice?
People often believe that our theories are based on data, but it turns out that what we think are the data are based on our theories. Back to that lump in one's neck. In the Middle Ages it might have been possession by an evil demon, but not lymphoma. Today it would likely be lymphoma not possession by an evil demon. What you see is shaped by what you believe.
On the Internet, I (why would you trust me? But then why would you trust your parents or your rebbe or your government either?) – I think Wikipedia is generally a good "starting point": if you don't have a better place, start with Wikipedia but don't naively accept it.
Learn as much as you can about learning and believing themselves. Study the process of "finding accurate and reliable information" itself, not just try to do it, because the more you understand about the process of understanding, the better you will be able to assess any particular information. Study comparative anthropology and ethnography to see all the different ways different persons have understood things (I found Hanny Lightfoot-Klein's little book "Prisoners of Ritual" enlightening).
Think about wars: The people on one side believe they are right. But the people on the other side believe they are right too. Put yourself imaginatively in their shoes. "It's obvious the other side is wrong". Yeah? That's what the people there think about our side and if you had come out of a birth canal on their side that's what you'd think too, isn't it?
"A liberal is a man too broad-minded to take his own side in a quarrel." (Robert Frost, cited by Barack Obama)
If you have read this far and I have encouraged you to think, what do YOU think about that? The physicist Niels Bohr instructed his students:
"Take every statement I make as a question, not as an assertion."
[ THINK ]
Or don't:
[ Homer eating a donut and wanting to die for his country ]
¶ +2024.03.31. How do AI tools contribute to decision-making processes within organizations, and what factors should be considered to ensure their effectiveness and reliability?
Long answer short: Use AI tools to contribute to decision-making processes.
We humans (or at least the managers) make the decisions. But every decision is based on our available information. So use AI to get more information to make more informed decisions.
Human sources of information can make mistakes or even try to mislead us.
AI makes a different kind of error: bad computations. I recently asked the Bing AI what was the time in London, England (I am in New York, USA). It outputted not just the wrong time but also the wrong date (over a week off). Another time I asked the Bing AI why the mountain K2 is named "K2". It outputted what looks to me like the correct answer and then added that another name for K2 is: "Everest". I inputted that this was an error. The Bing AI thanked me for correcting its error and then repeated the erroneous information.
So the answer here is "simple" even if not so easy: collect as much information from everywhere (and from every person) you can, and carefully consider the "effectiveness and reliability" of it all, including the information you get from any AI. Persons can hold false beliefs or even lie; AI can make computational errors.
¶ +2024.03.31. Is relying heavily on AI technology preventing individuals from exercising their own agency and independent thinking, thereby cheating themselves out of the opportunity to cultivate their own cognitive abilities?
Clearly this is a very serious danger.
For instance, children watching television or doing video games instead of creatively playing together.
What can be done about it? As Nancy Reagan famously said in a slightly different context: "Just say: 'No!'"
But can't we help persons to WANT to "exercise their own agency and independent thinking"?
I propose we can. Why do people rely so much on AI? When I was in school, I had teachers who – and this was after 1863 in USA – teachers who called themselves my "masters" and who subjected me to endless ass-ignments that meant nothing to me. Even worse, and this is 100% true:
[ Rentko ]
If I had AI and I could have "got away with it" of course I would have used the AI to appease my tor-mentors. Anything to get these mean-spirited task-masters off my case.
Offer persons opportunities that will APPEAL to them to want to "exercise their own agency and independent thinking".
[ Platonic education ]
Platonic education: Look! No homework. No pop quizzes. No final exams. Plato himself did not have to publish to not perish (for a recent example of what that leads to consider Harvard's recently disgraced President Claudine Gay who had to resign due to plagiarism).
How much of the problem is persons "cheating themselves out of the opportunity to cultivate their own cognitive abilities", and how much of it is persons' social environment "cheating them out of opportunities to cultivate their cognitive abilities"?
Let me not tax your attention span more: I think there is much wisdom, even if you do not believe in any Deity, in the Book of Ecclesiastes in the Bible. I see little or no wisdom (not just information) in AI, do you?
¶ +2024.03.31. How do you nurture and unleash your imagination?
First, how to kill a person's imagination: Parents and teachers who, when the child asks "Why?" respond: "Because I say so." / "Do as I tell you, and don't ask questions." "Yes, mommy."
[ Boss ]
I read something about one of the most imaginative people of the 20th century, the physicist Richard Feynman. He said he had an IQ of 125. That's not "stupid" but it's not "genius", either. Lots of persons are that "bright". But his imagination was "off the scale". Why?
As a child, his father was always posing questions to him for him to solve. And his father further encouraged him to think up more questions himself.
One way to "nurture and unleash your imagination" is simply to encourage it, like Feynman's father.
Get as expansive and deep an education as you can. Learn about all sorts of things. The more you know about, the more you can imagine. If you only know about The Torah you can't imagine what Shinto is like. A "caveman", with his (her, other's) very small world of plants and animals, would have to have been a genius to imagine the wheel. Having a PhD in physics, Stephen Hawking could imagine the whole scope of an Einsteinian, quantum universe. If you know the history of art you can make much "richer" paintings than an illiterate folk artist.
Two other "biggies". (1) Leisure. If you have to expend all your time on mind-numbing labor to pay the bills you can't imagine much, can you? Imagination can flourish on financial security. (2) Companions. The word "companion" means: persons with whom to eat bread: good friends with whom to share one's ideas and get their responses.
Liberal learning. An open mind. Freedom from fear. Receptive friends.
"Leisure has been, and always will be, the first foundation of any culture.... in our bourgeois Western world total labor has vanquished leisure. Unless we regain the art of silence and insight, the ability for nonactivity, unless we substitute true leisure for our hectic amusements, we will destroy our culture – and ourselves." (Josef Pieper)
"Take every statement I make as a question, not as an assertion." (Niels Bohr)
To sum it up: Have fun! Play!
¶ +2024.03.31. Is Google's AI a gimmick or the future?
I have not used Google's AI. I have used Bing's AI.
It does not seem to be a gimmick, any more than, almost 30 years ago now, search engines such as Google were a gimmick.
But it is not any kind of "intelligence". I asked the Bing AI about itself and it outputted:
"Today's AI is nowhere close to being intelligent, let alone conscious. Even the most impressive deep neural networks, such as DeepMind's AlphaZero or large language models like OpenAI's GPT-3, lack consciousness. They operate purely based on patterns and statistical correlations, devoid of subjective experience. These AI systems can perform remarkable tasks, from playing games to writing essays, but they lack self-awareness or inner life."
This seems accurate to me. If "AI" is not a gimmick, neither is it "the future". I propose that "artificial intelligence" is a misleading name for these computer programs. "Simulation of intelligence" would be better, but that does not sound exciting, does it?
Everybody needs to curb their enthusiasm about AI. It's "search engines on steroids" or something like that, very USEFUL, but not anything to jump up and down about like monkeys for a banana.
Isn't that a problem? There are many persons, including poorly socialized young males and irresponsible entrepreneurs (or both in one) who are all excited about "AI" either as a kind of sci-fi fantasy or to make a lot of money out of it.
What we need to keep in mind (and always keep in mind that we humans ARE the minds, even those of us who do not understand AI!) – What we need to keep in mind is that these programs are the endeavor of very high IQ persons who are experts on computing to make computer programs that simulate human intelligence. A very good simulation may be hard to distinguish from "the real thing" – that's its goal, like a good forgery appears to be the real thing.
One of the first of these programs was a simple computer program that simulated a certain kind of psychotherapist, "Eliza". Read about it in MIT Prof. of Computer Science Joseph Weizenbaum's classic book: "Computer power and human reason: from judgment to calculation" (WH Freeman, 1976). He probably wrote it in a day. All it did was echo back to the user what the user inputted, asking the user to "Tell me more". Weizenbaum was surprised and very disturbed that persons told the program secrets about themselves they would never tell to another real person.
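Weizenbaum's echo technique can be sketched in a few lines. This is a toy of my own devising to show the flavor of it, not Weizenbaum's actual program; the word list and phrasing are my assumptions:

```python
# A minimal Eliza-style "echo" sketch (my own toy, not Weizenbaum's code):
# the program "understands" nothing; it only transforms the input string.
REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you"}

def reflect(text):
    # Swap first-person words for second-person ones, word by word.
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def eliza_reply(user_input):
    # Echo the user's statement back as a question, then prompt for more.
    return f"Why do you say {reflect(user_input.rstrip('.!?'))}? Tell me more."

print(eliza_reply("I am unhappy with my job"))
# prints "Why do you say you are unhappy with your job? Tell me more."
```

Even this trivial string-shuffling can feel eerily attentive in conversation, which is exactly what disturbed Weizenbaum.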
An analogy to "AI" is how computer imagery has advanced since the days of vacuum tubes. The earliest computer images were very "primitive". See a lot of them at: asciiart.eu. Today computer images are "high resolution" and look real. But they are still just images.
So what? I am not an expert, but I fear (repeat: fear) what's coming next after "AI": "VR", Virtual Reality. This will be far more powerful and will have good uses but it can also be extremely dangerous. Watch the old fun but also profound movie "The Truman Show" and think about my Virtual Reality experiment herewith:
[ VR experiment ]
Virtual Reality will not take us into outer space but it may take us "out of our minds" which idiom is a synonym for having gone insane.
Instead of getting all excited about a bunch of "computer scientists" in a research lab cooking up ever better simulations of human intelligence, I favor maturely pursuing liberal education to truly advance our wisdom, including how to use the things the "computer scientists" in their research labs cook up.
[ Platonic education, Weizenbaum ]
(I apologize for being so "long winded". Thank you for reading.)
¶ +2024.03.30. Is language and numerical understanding necessary for humans?
Numerical understanding is important for negotiating the world.
But without language, what is there at all? My pet cat is alert and curious about things, but I have no idea what her life is like without language. Everything we do as humans takes place in language.
Without language you could not even ask the question, could you? I seriously doubt we can have any idea about it, although neurological doctors who treat brain-injured persons may have more information for you.
This is a question for neurological doctors.
¶ +2024.03.30. How can one determine if they are making illogical decisions? Is there a method for doing so?
There is no "method" because decisions are often "open ended" not algorithmic.
For processes and activities that are always the same, we can write up procedures and checklists, but even there sometimes there will be exception cases so we need to follow the procedures attentively, not just by rote. An example here is preparing an airplane for flight or for landing: carefully go through the checklist.
There are things we can do to help ourselves. Here are a couple:
For one thing, if you are very ENTHUSIASTIC about something, and especially if it is not truly time-critical, STOP! Curb your enthusiasm because enthusiasm is not rational or logical. Get a good night's sleep and revisit your intended decision in the morning before implementing it.
Deadlines are never helpful and need to be avoided whenever possible.
Another good rule of thumb: The wise psychotherapist Dr. Michael Eigen has a dictum:
"When in doubt, wait it out."
Obviously this does not work if it's something like fleeing from a burning building, but such situations are relatively rare, yes?
Write down the pros and cons of your intended decision. Tell someone else you respect and trust your plan and ask them to critique it.
[ THINK ]
¶ +2024.03.30. How can empathy be used as a strategic tool in coaching?
"Empathy" is genuine caring for another person.
"Strategic tool" is manipulating the person to get them to do what you want them to do irrespective of what they may feel.
Well, on second thought, that's not right:
"Sympathy" is caring about the person: it means wanting to help them. "Empathy" is neutral: it means seeing how they see the situation. So you can use empathy (not sympathy) to manipulate the person.
This is the core of USAF Colonel John R. Boyd's famous "OODA Loop" theory of how to fight your enemy in war: You figure out how he sees things and then you manipulate his perception of the situation to his disadvantage.
Empathy is obviously a generally more effective tool in coaching than raw intimidation. In 7th grade English class I had a teacher whose primary job was being a lacrosse coach and he didn't even use empathy, just intimidation on me:
[ Rentko ]
¶ +2024.03.30. What impact will virtual reality have on the future of educational methodologies?
[ VR man ]
[ My VR experiment ]
Virtual Reality (VR) has certain very restricted constructive uses in education, such as for training airplane pilots. It's obviously far better for the pilot in training to crash a virtual reality plane than a real jumbo jet with 500 passengers aboard. But broader uses need to be approached like experimentation on viruses: Look what happened when we messed with a cold virus and it escaped the laboratory and messed up the whole world. VR can destroy all humanity.
¶ +2024.03.30. Is it true that studying STEM subjects is no longer recommended due to the potential replacement by robots and artificial intelligence?
No! This has everything "backwards".
Question: Where do robots and artificial intelligence come from?
Obvious answer: Persons knowledgeable in STEM subjects.
The more robots and "artificial intelligence" (which is not intelligent but just massive computation...) we have, the more knowledgeable persons need to be in the STEM subjects they are products of.
But this is not enough: The persons need not only to be trained in STEM subjects which teach HOW TO do things, but they also need to be liberally educated in the humanities to be able WISELY to choose WHAT FOR these capabilities should be used.
Data is not information.
Information is not knowledge.
Knowledge is not understanding.
And understanding is not wisdom. (Clifford Stoll)
All sorts of dystopias (or utopias depending on how one looks at it...) can be imagined. But imagine a world in which robots and artificial intelligence are much more advanced than even today, but nobody understands even high school physics and biology.
[ Homer eating his donut ]
¶ +2024.03.30. How does deepfake audio compare to other types of AI-generated content in terms of credibility and persuasiveness?
I think I may have recently come across an example of the threat here.
Somebody I vaguely know got a telephone call from her daughter who had been in a terrible automobile accident and now was in jail and her mother needed to send $7,800 bail money....
The mother is a very astute person who works as a mental health professional, so she is not easily fooled by people. But she had to rescue her daughter and withdrew $7,800 from the bank....
IT WAS A DEEP FAKE. The mother somehow decided to try to call the daughter on her cellphone and the daughter answered and said she was fine. The scam had been caught!
The deepfake audio was entirely persuasive. The incident really frightened me, what about you?
¶ +2024.03.30. Can an AI system be as creative as humans if given enough DA to learn from?
Some "humans" are not creative, are they? A highly "intelligent", i.e., having a very powerful computer chip, robot can probably do most of what the stereotypical good citizen can do, yes? We can imagine and make movies (or "VR" experiences...) of just about anything sci-fi junkies can get off on, can't we?
As for the present and the foreseeable future, I asked the Bing AI about all this and it outputted to me:
"Today's AI is nowhere close to being intelligent, let alone conscious. Even the most impressive deep neural networks, such as DeepMind's AlphaZero or large language models like OpenAI's GPT-3, lack consciousness. They operate purely based on patterns and statistical correlations, devoid of subjective experience. These AI systems can perform remarkable tasks, from playing games to writing essays, but they lack self-awareness or inner life."
What's the big to-do about AI? People need to "grow up". Over a million people are being starved to death in Gaza today and America's war against Russia in Ukraine may escalate to thermonuclear apocalypse and end all human and higher animal life on earth.
On the other side, as for what we might hope for, do we really desire flying fortresses (Star Wars and so forth) that are not real B-17 heavy bombers, or is "the good life" what was described 2,500 years ago in the Book of Ecclesiastes in the Bible and realized in classical Greek Platonic education?
"Leisure has been, and always will be, the first foundation of any culture.... in our bourgeois Western world total labor has vanquished leisure. Unless we regain the art of silence and insight, the ability for nonactivity, unless we substitute true leisure for our hectic amusements, we will destroy our culture – and ourselves." (Josef Pieper)
AI can be a powerful TOOL for us to use to help us live better (and it can also enable us to make things worse, like atomic energy and biochemical research). AI gives us more HOW TO capacity. But HOW TO always just serves WHAT FOR: What we need even more than more intelligence, be it human or computational is more "humanity": caring concern for each other and for ourselves, or do you disagree?
[ Homer eating his donut ]
¶ +2024.03.29. What seven habits can make you smarter?
Why 7 habits? What are you looking for exactly?
I typed in "7 daily habits that will make you smarter" and Google gave me:
https://www.cnbc.com/2017/02/14/7-daily-habits-that-will-make-you-smarter.html
And the article looks very helpful and there were other references too.
But I wonder why the person asked precisely this question. Are they not a person but some sort of AI bot? Is this a school course assignment? Part of being smart is thinking about why you are doing whatever you are doing right now.
¶ +2024.03.29. Can emotional intelligence play a role in helping students navigate challenges and setbacks effectively, thus enhancing their academic performance?
What is "emotional intelligence"?
There is emotional maturity. Emotional sensitivity (and insensitivity, too!). There is "confidence" and also "uncertainty" and "insecurity".
Emotional maturity, sensitivity and confidence all contribute to helping a person navigate and cope with all issues that arise in their living, including academic performance.
But if one thinks about a person's emotional life like their ability in mathematics or freehand drawing, that does not seem appropriate. The dichotomy often made between thinking (facts) and feeling is misguided: All feelings are about facts and all facts are associated with feelings. If we don't feel anything about something we don't notice it at all.
A person's emotional life seems far more strongly affected by their social environment than their "intelligence". I, for instance, have zero freehand drawing ability (intelligence), irrespective of my childrearing and schooling, but my childrearing and schooling strongly shaped my emotional life. I had an intrusive mother and authoritarian school teachers and that's why my emotions, unlike my freehand drawing ability (and other talents or lack thereof), are as they are.
Each young person needs to be nurtured and encouraged. I once knew a person whose parents were not highly "intelligent" and certainly not educated: "dirt farmers" in Appalachia. But he was innately highly intelligent and emotionally very "solid". One might say he was "emotionally intelligent".
How did he get to be that way? His mother had recognized he was different from everybody else and that she and his father could not provide him with the upbringing he really needed. But she did one thing that made all the difference: She told him and really meant it:
"Tom, do what you believe is right. You will make mistakes. We stand behind you."
So when he had a setback he was confident they "had his back" and he could recover and he was encouraged to try again.
Me, on the other hand, I was always terrified and discouraged by my intrusive mother and punitive teachers. I always feared: "OR ELSE!" if I messed up. I lack self-confidence and other "emotional intelligence".
In the old quarrel between "nature" and "nurture", intelligence is more from nature and emotion more from nurture, although both are a synthesis of both.
For what it's worth or not worth, following is a true story about intelligence and emotion in my childhood, from the 7th grade:
[ Rentko ]
¶ +2024.03.29. What do you think about Mark Zuckerberg's shift towards prioritizing AI technology over his metaverse dream?
I don't understand any of this, do you?
We are rapidly moving into more and more of our life being managed by computer technology so complex that only a very few persons can understand it. The risks and dangers are obviously "endless".
I can't answer the present question but I do have two admonitory thoughts about the dangers of "Virtual Reality".
(1) Watch the old fun but also profound movie "The Truman Show": Mr. Zuckerberg and maybe a very few others would run the show.
(2) Virtual Reality is very dangerous: it literally takes us out of our minds and out of reality. I did a VR experiment some years ago that didn't even require a computer:
My virtual reality experiment: I was driving up a 6 lane superhighway early one August afternoon in clear bright sunlight at about 65 miles per hour in my clunky Toyota Corolla DX, with no other cars on the road. I decided to look intently at the little image in the car's rear view mirror -- no high tech apparatus. I really really really really intently focused all my attention on that little image! It was entirely convincing. That "little" image became my whole experienced reality: I was driving where I had been, not where the automobile was going. Fortunately I "snapped out of it" in time to avoid becoming a one car crash in the ditch on the right side of the road. (It was a very good place to have conducted this experiment, because there was a police barracks, a teaching hospital, and both Christian and Jewish cemeteries nearby, just in case.)
You may try to repeat my virtual reality experiment at your own risk; I strongly advise you against doing so. I assure you: It worked. (Of course it will not work if you don't "give in to it", just like a video game won't work if you just look at the pixels as what some computer programmer coded up with branching instructions depending on what inputs you enter.) Moral of this story: VIRTUAL REALITY CAN KILL YOU. Forewarned is forearmed.
If Virtual Reality seems "exciting" to you, please curb your enthusiasm and think carefully. The Book of Ecclesiastes in the Bible and Plato's dialogues from ancient Greece describe "the good life", not going into outer space. AI is a powerful tool, but it is just a tool for helping us have richer daily lives.
[ Weizenbaum, Leisure, Platonic education ]
¶ +2024.03.29. Would it be morally right to replace all doctors with robots if they were proven to be significantly more effective and less errors?
Of course not. Let the human doctors USE the robots and monitor them to get the best of both.
But let me ask a more incisive question here:
Would it be "morally right" – i.e., humanly caring – to replace chaplains with robots?
¶ +2024.03.29. Is it possible for a computer to create a masterpiece like Beethoven's Fifth Symphony using only artificial intelligence algorithms without any human inspiration or input?
Could monkeys pounding on typewriters create a masterpiece like Moby Dick? Theoretically, yes, but the problem is they could not recognize the masterpiece from anything else. Computers just compute; they do not recognize anything.
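The monkeys-at-typewriters point can be made concrete with a back-of-envelope calculation (my own illustration, not from the original posting): on a 26-letter keyboard, each random keystroke matches a given target letter with probability 1/26, so an n-letter passage comes out right with probability (1/26)^n per attempt.

```python
# Back-of-envelope: chance a random typist hits a given text on one attempt.
# Assumes a 26-letter keyboard with each key equally likely.
def chance_of_typing(target, alphabet_size=26):
    return (1 / alphabet_size) ** len(target)

print(chance_of_typing("whale"))    # ~8.4e-08 for one 5-letter word
print(chance_of_typing("a" * 100))  # astronomically small for 100 letters
```

"Theoretically possible" thus shades into "never in the lifetime of the universe" for anything book-length, quite apart from the deeper point that the monkeys could not recognize a masterpiece if they typed one.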
I asked the Bing AI about this kind of thing and it outputted:
"Today's AI is nowhere close to being intelligent, let alone conscious. Even the most impressive deep neural networks, such as DeepMind's AlphaZero or large language models like OpenAI's GPT-3, lack consciousness. They operate purely based on patterns and statistical correlations, devoid of subjective experience. These AI systems can perform remarkable tasks, from playing games to writing essays, but they lack self-awareness or inner life."
Humans have inspiration. But that's not exactly right: we do not have inspiration like we have the ability to add a list of numbers or to find typographical errors in text.
"Man weiss nicht von wannen er kommt und braust', wrote Schiller of the surge of language from the depths to the light. No man knows from whence it comes...." (George Steiner, "After Babel", p. 108)
No person can "have" inspiration: inspiration has to "come to them". What we can do is to learn as much as we can and try to be as receptive as possible, to nurture and appreciate what we can't produce on demand. (Bob Dylan said that a lot of the lyrics for his songs "come to him" and he writes them down by a kind of taking inner dictation.) This is why liberal education and leisure are so important.
"Leisure has been, and always will be, the first foundation of any culture.... in our bourgeois Western world total labor has vanquished leisure. Unless we regain the art of silence and insight, the ability for nonactivity, unless we substitute true leisure for our hectic amusements, we will destroy our culture – and ourselves." (Josef Pieper)
[ Platonic education ]
(Contrast with pressure of "deadlines", and curriculum instruction, and programming computers to do routinized things.)
¶ +2024.03.29. Have you personally had a bad experience with artificial Intelligence, or witnessed someone else harmed by AI? If yes, please describe
Not harm, but I have seen 2 mistakes.
I have used the Bing AI. I asked it why the mountain K2 is called "K2" and it looked like it outputted the correct answer but then it added that another name for K2 was "Everest". When I typed in that this was an error, it thanked me for pointing out the mistake and then it repeated the error.
Another time I asked it what time it was in London (I'm in New York). The Bing AI outputted not just the wrong time but even the wrong day (over a week previous). When I typed in that this did not look right, it acknowledged the error and did this time give me the right information.
(Since AI can make mistakes, it obviously can do harm. Remember the time during the Cold War when the Soviets' early warning system detected incoming ICBMs. It would have launched a retaliatory nuclear attack. But the person in charge decided it was a "false positive" and saved the world. Also obvious, the AI could do harm on purpose if the people who programmed it were bad actors.)
¶ +2024.03.29. Did a.i destroy any hope and literally all opportunities to be an artist, especially in illustration, concept art, comics, and magazine covers?
No, but
Where do AI's images come from? AI copies and modifies art persons made. Then AI can modify its modified imagery still further and on and on. But AI has to START with imagery designed by a person.
The opportunities to be an artist, especially in illustration, concept art, comics, and magazine covers are to come up with NEW IMAGERY.
Before AI, probably a lot of illustration was copying old illustration and modifying it. AI now can do that. So a lot of the work illustrators used to do can be done by AI and will be, because the AI is cheaper than paying a human to do it. But there will always be the opportunity for persons to innovate, which AI can't do because it does not have consciousness or imagination or thoughts or feelings: AI just computes. I asked the Bing AI about this and it outputted:
"Today's AI is nowhere close to being intelligent, let alone conscious. Even the most impressive deep neural networks, such as DeepMind's AlphaZero or large language models like OpenAI's GPT-3, lack consciousness. They operate purely based on patterns and statistical correlations, devoid of subjective experience. These AI systems can perform remarkable tasks, from playing games to writing essays, but they lack self-awareness or inner life."
Consider Milton Glaser's famous "I [heart] NEW YORK". AI would not likely have come up with that, would it? Even more extreme, AI could never have come up with the artist Marcel Duchamp's idea of the "Readymade".
There will always be opportunities for human graphic artists to GO BEYOND what AI can extrapolate. Also, whatever AI comes up with, human illustrators will always need to examine it to make sure it's what is wanted, and to improve it. AI "makes mistakes" and a person may not just catch mistakes but also see ways to improve what the AI has done.
All this said, there will likely be fewer employment opportunities for graphic designers, in part because the AI can do the "easy stuff" and also because the people who commission the graphic art may not always be willing to pay for oversight by human graphic artists.
¶ +2024.03.28. I notice most around me are smarter, which disappoints me. Learning a language is hard for me, while others grasp it easily. Before I thought I was in the top 30%, but it seems completely different. What I can do to help myself?
Many very smart persons have difficulty learning languages. And even different persons can find learning different languages harder or easier.
Do you need to learn a foreign language? If yes, you will have to make the effort to do it even though you are not gifted in this way. If not, why bother?
But there may be other things you are more intelligent at. How about expository writing (news stories, etc.)? And intelligence is not just about "academic" subjects. A person may be very intelligent in making ceramics, or in wood craft. Or nursing. Or being a clergyperson who is really great at consoling suffering people. These are other "intelligences". Those adept language learners may be no good at any of these things.
What are you "good" at? A person can be very good at ceramics but not good at math. Conversely, a person may be good at math and also a "klutz".
I am very intelligent at some things. But I cannot "draw" anything. I cannot do pottery. I probably would not be good at surgery, or gourmet cooking. If you are around a lot of persons who are good at learning languages and you are not, you may indeed feel you are not "intelligent" like them. Like if you are 5 feet tall around 7 foot basketball players. But there may be something you are very good at that none of them are. Who would not feel unintelligent in a room full of geniuses talking about quantum electrodynamics?
Is somebody making you feel bad about yourself? Maybe you had a mother who kept telling you you were not good enough? THEY are the ones who are being "bad". They should be helping you find what you are good at, what you like to do that would be socially constructive to learn. Some intelligent persons are bad: like "corporate raiders".
"The top 30%" is a big place. As Jesus said: "In my father's house are many mansions".
Does this help any?
¶ +2024.03.28. What has led us to artificial intelligence?
Ever since the early digital computers of the 1950s, some computer researchers have had the dream of computers that think. Early computers were sometimes called "electric brains". Persons do computations sometimes. Computers do computations. But what the computers do is even more different from what we do than colors are different from shoes, because both colors and shoes are part of our experience of living, whereas what computers do is also just part of our experience of living, not our living itself. Some people imagined that our minds must be like computers. It's a plausible analogy.
As computers become ever more powerful, persons write computer programs that ever closer SIMULATE human conversation. A very early example was MIT Computer Science Prof. Joseph Weizenbaum's "Eliza", which he wrote about in his classic book which everyone should read: "Computer power and human reason: from judgment to calculation" (WH Freeman, 1976). Eliza simulated a certain kind of psychotherapist who generally only repeats back what the patient says to him (her, other) as a question with "Tell me more...". Weizenbaum was surprised and very disturbed that he found people telling the program secrets about themselves they would never share with another human. But that was a simple program that he probably wrote in a day or two.
With ever more powerful computers, computer scientists keep making ever more complex and powerful programs that ever more closely SIMULATE human conversation. But the important point here is that it is just SIMULATION, not "consciousness", which is what we humans are: we are conscious, not computations. An analogy here is that early computers could only produce very primitive images (see: asciiart.eu), but today computers can produce high-resolution pictures.
I asked the Bing AI about this all and it outputted:
"Today's AI is nowhere close to being intelligent, let alone conscious. Even the most impressive deep neural networks, such as DeepMind's AlphaZero or large language models like OpenAI's GPT-3, lack consciousness. They operate purely based on patterns and statistical correlations, devoid of subjective experience. These AI systems can perform remarkable tasks, from playing games to writing essays, but they lack self-awareness or inner life."
"Artificial intelligence" is not really a good name for what these computer programs do, because they do not have intelligence (or stupidity, or emotions or intentions or any of the other aspects of our human living): they just compute. We think and also have feelings and we live our conscious life, and what some of us do sometimes is work on computer programs that SIMULATE human conversation, and we call them "artificial intelligence".
¶ +2024.03.28. Why does humanity still have so many problems in 2024?
This is a very complex question, isn't it?
Why did the United States and Great Britain kill the agreement Messrs. Zelensky and Putin had reached in March 2022 to end the Ukraine war? Why didn't Israel go to the United Nations instead of starting a vengeance war in Gaza after October 7? Why were the U.S. and Chinese governments doing "gain of function" research on corona viruses in 2020 which led to the Covid-19 pandemic? Why did Ralph Nader renege on his promise to not be a spoiler in the 2000 U.S. Presidential election which gave it to George Bush and his Iraq War and his aneurysm economy that burst in 2008? Why this and why that?
There is no one cause, and it's far more important to figure out what to do to get out of the quagmire we're stuck in than how we got here, because the past cannot be undone, yes? Are we already doomed?
But a lot of the trouble can be traced back to one thing: August 1914, when the middle aged paunchy white males who governed the countries of Europe decided to have a war instead of being responsible actors. The "excuse" was that one bad man (Gavrilo Princip) killed one aristocrat (Archduke Ferdinand) whom nobody liked anyway. But then in 1919, after these "fat cats" had murdered tens of millions of men and had a big potlatch all over Europe, they did something more: They ended their war with a treaty of punishment against Germany. The huge reparations these same middle aged paunchy white males demanded from Germany led to Adolf Hitler etcetera. It was, as has been said: "the peace to end all peace".
So the answer to this question is very complex and not fully understood, starting with the common form of childrearing which makes people "what they are" instead of making them better than that, etc. (Just one item: bellicose "body contact" athletics in schools inc. "football" which encourage aggression in pubescent males instead of having them volunteer in hospices and ASPCAs, etc.) But one "biggie" is:
The Versailles Treaty of 1919.
¶ +2024.03.28. Within 10,000 years, what's the chances of an intelligent madman developing a virus or weapon to wipe out humanity just cause? I feel like with globalization and responsible enough nations, this is more likely to destroy humanity then nukes.
We almost had this already in 2020 with Covid-19.
[ Wuhan Virus Institute ]
One credible theory is that Covid-19 was due to "gain of function" research on corona viruses at the Wuhan Institute of Virology in China, from which the virus accidentally escaped. If the virus had been even worse it might have destroyed humanity. The "madman" (or "madmen") behind this were scientists working for the U.S. and Chinese governments who were jointly conducting this research.
So this question is not hypothetical, just like for the people of Hiroshima and Nagasaki Japan in 1945 the question of nukes was not hypothetical.
But even without atomic bombs or superviruses, we may make the earth uninhabitable through "global warming". Look at the planet Venus.
¶ +2024.03.27. Modern technology is the sign of dependency and addiction. What are your arguments about it?
This question sounds like "it is coming from somewhere". What is the person asking the question looking for? What are the concerns being addressed?
Humans are always "dependent". Even a hermit is dependent on air and food and how did the hermit survive infancy in the first place?
But there are different kinds of dependencies. Somebody like the late Prof. Stephen Hawking, who had severe neurological impairment, was dependent on others for just about everything.
Many persons would die without modern medicines, so they are dependent on them. A diabetic is dependent on insulin. We have even found evidence that seems to show that hunter-gatherer tribes over 10,000 years ago sometimes took care of sick members who could not take care of themselves.
The person asking this question is dependent on the Internet to even be able to ask the question. Celebrities are dependent on their "fans" – although that may be a less admirable kind of dependency. I once read about a very famous athlete who was asked how bad publicity made him feel. He replied that the really bad thing was to be ignored.
"Technology addiction" is probably not frequent. Some persons watch a lot of television or rely heavily on their cellphones, but is that addiction? Some pubescent males may be addicted in a straightforward way to video gaming. Opium addiction was widespread in early 19th century China before "modern technology", and alcoholism here in The West. Oh yes, there are also gambling addicts, and gambling has nothing to do with "modern technology".
We are all social beings. We are all dependent on each other. This was true in preliterate "primitive" societies too.
Not all dependencies are bad nor does technological advance necessarily lead to addictions. If I had more "context" for this question I might be able to offer more helpful information.
¶ +2024.03.27. I've heard in Saudi Arabia the kingdom pays everyone money and they don't have to work at all. Do you see that ever happening in America in the future since AI is replacing human workers?
As the question says, it could. And it seems to me it should. But will it?
But, correct me if I am wrong, I seem to have read that there are many "guest workers" in Saudi Arabia who are doing the work the Saudis don't want to do. So it's not quite so simple, is it?
There is a lot of work AI (mainly industrial robots, not just computer clerks) can't do. A couple of days ago a major bridge in Baltimore, Maryland, USA collapsed due to a large container ship colliding with one of its pylons. I can't imagine industrial robots rebuilding a big bridge, although surely they will be able to help a lot.
Then there is "social work", all the work that depends on human empathy not just technical skill or "manpower". Only a certain amount of medical work – doctors, nurses, et al. – can be appropriately automated. A lot of it needs "the human touch". Childrearing. Teachers, insofar as they are mentors not just instructors. Also all real decision making activity: Deciding what the automation is to do for us.
And also any activity that involves open-ended problem-solving, including fixing problems that arise in the AI itself. (If someone objects that AI may become so "intelligent" that it can fix its own mistakes, there will be bugs in the computer code that detects and fixes problems, so sometimes it will not fix them but exacerbate them without detecting this....)
We need to monitor the automation. AI, like all complex systems, makes mistakes. A few minutes ago I asked the Bing AI what time it was in London, England. Not only did it output the wrong hour of the day but also the wrong day. Now of course the computer programmers who work on the Bing AI will fix that error, but then there will be another error after that one....
I think a big problem is that a lot of people, "conservatives", believe that people "should" work to "earn" their living. This is similar to how they also believe women should not be free to control their reproductive life: The Abrahamic Deity cursed Adam to toil for having eaten a piece of fruit, and women to have pain in childbearing.
There are many issues here. You might find it interesting to listen to some of Prof. Richard Wolff's YouTube videos and his website:
Democracy at Work (d@w)
America does not even have universal health insurance or adequate social services for all. And "conservatives" want to cut Social Security and Medicare, not hours of work. In England they are destroying the National Health Service by slowly defunding it; in France they have just RAISED the retirement age. I seem to have read somewhere that Richard Nixon thought about a guaranteed annual income, but nothing came of that.
¶ +2024.03.27. How do virtual interactions affect our brain?
Virtual interactions can affect our minds just like real interactions, when we do not interpret them as virtual but mistake or otherwise accept them for real. This can be dangerous!
My virtual reality experiment: I was driving up a 6 lane superhighway early one August afternoon in clear bright sunlight at about 65 miles per hour in my clunky Toyota Corolla DX, with no other cars on the road. I decided to look intently at the little image in the car's rear view mirror -- no high tech apparatus. I really really really really intently focused all my attention on that little image! It was entirely convincing. That "little" image became my whole experienced reality: I was driving where I had been, not where the automobile was going. Fortunately I "snapped out of it" in time to avoid becoming a one car crash in the ditch on the right side of the road. (It was a very good place to have conducted this experiment, because there was a police barracks, a teaching hospital, and both Christian and Jewish cemeteries nearby, just in case.)
You may try to repeat my virtual reality experiment at your own risk; I strongly advise you against doing so. I assure you: It worked. (Of course it will not work if you don't "give in to it", just like a video game won't work if you just look at the pixels as what some computer programmer coded up with branching instructions depending on what inputs you enter.) Moral of this story: VIRTUAL REALITY CAN KILL YOU. Forewarned is forearmed.
[ VRMan ]
(This person is "out of their mind")
¶ +2024.03.27. What are the duties, responsibilities, skills, and abilities of spies?
To cause trouble.
Let's say country A has enemy B. Person C signs up as a spy in the service of country A and gets a high security job in country B. C finds out information that enables country A to overthrow the government of country B and conquer it. In country A, C is the best of the best; in country B, C was the worst of the worst. If person C came from country B, C is a traitor in B.
Sounds like bad stuff to me. The actual history of America's CIA is full of "dirty deeds", including assassinations and overthrow of many governments. Other countries probably are not any better.
A person can, however, do honorable work in the "intelligence" field for their country: collecting and analyzing information that is publicly available.
Back to countries A and B: Person D can sign up in country A to monitor information coming out of country B, and to "connect the dots".
I seem to have read that in Israel before October 7th last year, some 19 year old girls who were tank commanders in the Israel Defense Forces had observed Hamas doing military exercises on their side of the border during their regular patrols (I don't have the details). Nothing covert. The girls reported what they saw to their superiors WHO DID NOTHING ABOUT IT. But shouldn't this information, acquired without any spying, have made the Israelis prepare for a Hamas attack?
Similarly, in 1941, couldn't the United States have been on alert for an attack from Japan on the basis of publicly available information?
Or consider the question of who sabotaged the Nordstream II gas pipeline. Spies might find out the details for one side or the other. But a person who just reads the newspapers could give a likely answer to the question without any covert action, by simply quoting America's President Biden's public speech in which he said the U.S. would destroy the pipeline if Russia attacked Ukraine.
"Tell a lie often enough and people will believe it." (Hermann Göring, perhaps apocryphal). America's war against Russia in Ukraine is a case of this: we keep saying Russia's "special military operation" was "entirely unprovoked" even though we were arming Ukraine to be a NATO bastion on Russia's western border, an existential threat to Russia like Soviet missiles in Cuba were to the U.S. in 1962.
"There is more to the surface than meets the eye." (Aaron Beck)
¶ +2024.03.27. Will computers become intelligent in the future through artificial intelligence (AI)?
I asked the Bing AI about this and it outputted:
"Today's AI is nowhere close to being intelligent, let alone conscious. Even the most impressive deep neural networks, such as DeepMind's AlphaZero or large language models like OpenAI's GPT-3, lack consciousness. They operate purely based on patterns and statistical correlations, devoid of subjective experience. These AI systems can perform remarkable tasks, from playing games to writing essays, but they lack self-awareness or inner life."
Anything that is not a logical self-contradiction is IMAGINABLE. So we can imagine a computer that really thinks, like HAL in "2001". But is this possible in reality?
One thing to beware of is persons who get all excited about wanting to make a computer that has consciousness. What would they end up with in their "gain of function" experimentation if it actually succeeded?
"Gain of function" research with corona viruses that escaped the laboratory is likely where the Covid-19 pandemic came from. Look at the threat that atomic bombs pose to us all. If these computer "science" people ever succeeded, might what they had produced destroy the whole human world unless somebody "pulled the plug" on it?
[ Weizenbaum ]
¶ +2024.03.26. What techniques do intelligent individuals use to effectively explain complex ideas without losing their audience's attention or interest?
"Intelligence" may not be exactly the best word here.
Let's assume the person understands the complex ideas. He (she, other) has to be intelligent to do that, yes?
So it's a matter of a specific kind of "something" not directly related to understanding complex ideas. A person can be brilliant yet have no ability to explain his ideas to any audience except other brilliant persons like himself. What is that "something" else?
It's a variety of abilities not directly related to understanding the complex ideas: one is the ability imaginatively to "put yourself in the other person's shoes". Another is patience.
There are many "techniques" for keeping an audience's attention. First thing is to recognize when you are "losing them". I have a personal example: I was in a class where I was trying as hard as I could to understand what the teacher was explaining, but I was not succeeding.
Finally, I raised my hand and said I was just getting more and more lost. The teacher apparently had not noticed anything. So he asked for a show of hands of who DID understand what he was explaining. Not a single hand went up. That teacher surely understood the subject matter he was trying to teach, but he needed something more.
One good technique sometimes is to use an analogy: to show how the complex ideas are similar to something else the audience might understand already. Or one may need to provide "background" information to bring the audience "up to speed". Diagrams and other pictures are often very effective. Another technique is to ask someone in the audience to try to describe what they understand or what they don't.
Engage with the audience. Don't just drone on and on. Have a desire to get them to understand the material. Be flexible: You may have wanted to explain one thing but you find there is something else the audience does not understand that you have to deal with first and if that's all you can do in the time available, at least you will have accomplished something.
Start off with a "roadmap": "Here is what I want to accomplish in this presentation: [briefly describe the goal]"
And even if your audience can understand what you are trying to explain, people's attention spans are limited. I recommend telling your most important point first, and no surprise endings. If you have a secret to tell, tell it first. That way if people lose attention you will have done what is most important.
Being "intelligent" is not enough to communicate. You also need to be effective.
¶ +2024.03.26. Is artificial intelligence in software development more promising or concerning?
Surely it's both.
I worked for half a century as a computer programmer and was made redundant by a big tech company in 2018, just before AI started to be a "big thing".
So I don't really know what's happening with AI in software development. Software development is like a big elephant, and I am just one blind man describing the part of the elephant I came in contact with.
I started in 1972, back in the days of IBM System 370 mainframe computers and COBOL and Assembly Language. I even keypunched my own source decks for programs I wrote which were then read into the computer from a card reader.
By the 2010s, everything had changed. Some things were better: personal computers meant programmers did not have to work on 3rd shift to get "time on the machine". But mostly things were getting bad: Instead of the kind of extensive documentation IBM provided for writing batch programs, there were now undocumented APIs, and I had to try to figure out how to make software that didn't make sense do incomprehensible tricks (Angular, Django...). I got PTSD from it.
Even worse, but which I did not directly experience, were the new techniques for managing software development: "agile" and "scrums". These seemed to me like the kind of oppression they are supposed to have in North Korea. Each morning you publicly confessed your sins of the previous day and pledged your enthusiastic loyalty for the new day's work.
I have no idea what software development is like today with AI. But if things kept going the way I saw them in my last years as a programmer, it's going to be miserable for programmers.
¶ +2024.03.26. How can teachers detect if a student has opened another tab on their computer during class?
Why are teachers trying to find students doing things they don't like on their computers?
There is a simple but not easy solution to all the plagiarism and so forth: Give the students challenges which appeal to them, which they will want to do, so they will have no reason to try to "cheat" or be distracted.
When I was in school I found most of my ass-ignments meaningless and I did not like them or the teachers who had ass-igned them. If I had had an AI to do the ass-ignments for me and could have got away with it, of course I would have done that.
But I was always eager to learn, and would have learned gladly had the teachers offered me opportunities which interested me. (I was a straight "A" student; they were not mentors but tor-mentors; I was always living in fear of the "OR ELSE" if I didn't get a good grade from them.) I even had one teacher who took an occasion in 7th grade English class to try to break my spirit:
[ Rentko ]
Alternative (no computers needed):
[ Platonic education ]
¶ +2024.03.26. Can you share your experience of finally cracking a tough word search puzzle, like when you solved today's challenging Strands game?
This is a "specification" of a general question about the experience of solving any difficult puzzle.
Here was a "biggie". For some reason, early in life, Andrew Wiles got fascinated by Fermat's Last Theorem. Wiles got his PhD in mathematics and all his life kept trying to solve this puzzle, as many others have done for several centuries. At a certain point, he apparently felt he had tried everything he could imagine and had failed. He was about to give up when suddenly he "saw" (understood) the answer, which as soon as he saw it seemed to him obvious.
That's how it often is, isn't it? You keep trying one thing after another and nothing works and then: "Aha!"
"'Man weiss nicht von wannen er kommt und braust', wrote Schiller of the surge of language from the depths to the light. No man knows from whence it comes...." (George Steiner, "After Babel", p. 108)
Each experience is different. And sometimes it's not just one sudden "insight" but you see "pieces of it" and then they "come together".
Another consideration is that the difficulty of solving a puzzle does not always correlate directly with the value of solving it, and some puzzles didn't need to exist. I worked for many years as a computer programmer. Computer programs are full of "puzzles": bugs, where the program does not do what it is intended to do. Computer programmers who do sloppy work write programs that have more bugs in them than necessary, and each of those bugs is a puzzle that has to be solved for the program to work correctly. We can't be perfect but we can be careful.
Try to "observe yourself" when you are trying to solve a puzzle. Keep a diary of your experiences.
You might like Bob Dylan's song "I contain multitudes". Some people don't care, do they?
[ Homer eating his donut ]
¶ +2024.03.26. What is the application of anthropology knowledge and methods?
This is a very "big" question. There are many partial answers.
What is "anthropology"? Anthropology is the disciplined study of human life. Even this can take a variety of forms. At least until recently one could study "primitive people", societies that were pre-literate. Today there are almost none of them left because "we" – the advanced civilizations that do anthropology – have found and studied them all and often with very bad impact on those people.
One can still study cultures other than our own, today. Fundamentalist religious groups are a good example to study: Islamists and Hassidim, for examples.
But one can also study the society in which one finds oneself living. This is perhaps more often called "sociology", but it's just anthropology applied to ourselves, not to "people who are different from us".
And that is a big point to be made: "anthropology" can study persons other than ourselves for any number of reasons, including to control those people for our purposes (colonialism, etc.), while not studying ourselves, so that our own form of life remains as naively instantiated as we see others' to be. Dressed in white lab coats or Armani suits, we may say: "Oh look at those strange people who believe in weird gods and dress themselves in rare bird feathers." We need then also to look in the mirror and see that we ourselves believe maybe in the Abrahamic Deity and wear lab coats and Armani suits.
So anthropology needs to be self-reflective, and one of its goals is to analyze our own social life to see if it makes sense or whether we can improve it, i.e., improve ourselves.
"All known cultures have in one way or another depersonalized as well as personalized, so that no human culture has been worth preserving the way it was – although all have been worth improving." (Walter Ong, "Fighting for life", p. 201)
Anthropology can be just a kind of "freak show": ourselves looking at others and seeing how foolish some of the things they do are. That's the Biblical injunction about seeing the mote in the other person's eye but not the beam in one's own.
We can use anthropology to enrich our lives by seeing what's not good in what we are doing and changing it, seeing what is good and cultivating it, seeing what's bad that others have done and avoiding it, and seeing what is good in what others have done and adopting it.
The ancient Greeks had a little motto: "Know thyself". Be an anthropologist of one's own life, including of what one's own parents and teachers socially conditioned us to become so far, so we can go further.
Example:
"Leisure has been, and always will be, the first foundation of any culture.... in our bourgeois Western world total labor has vanquished leisure. Unless we regain the art of silence and insight, the ability for nonactivity, unless we substitute true leisure for our hectic amusements, we will destroy our culture – and ourselves." (Josef Pieper)
These persons, changing the clothing and the setting, could be ourselves, discussing our own social customs and comparing them with the customs of others to improve how we live.
[ Platonic education ]
Aside: One example of anthropological study I recommend is Hanny Lightfoot-Klein's book "Prisoners of Ritual".
¶ +2024.03.25. What are the reasons why some people prefer typing on physical keyboards over virtual ones?
By "virtual keyboard" do we mean the images of keyboards like on some cellphones or Apple I-Pads where you press the images of the keys on a touch screen?
I am so old that I went thru college (and over a decade afterwords(sic)...) doing my work on a typewriter, an Olympia SG-3 office manual that cost $400 in 1966. You can find lots of images of one on the Internet; it was a Rolls Royce of typewriters.
So for many years I did a lot of typing on a real typewriter that felt good to use. Probably that's why I like a real physical keyboard over a virtual one. Also I do "touch type", or whatever one calls the way a traditional secretary typed with all 10 fingers on the QWERTY keys of a physical typewriter. So I feel "spatial orientation" to the keys which is harder to do with the virtual keys which have less rigidly defined boundaries for each key.
Different [key]strokes for different [key]folks, yes?
¶ +2024.03.25. Do you agree with Eddie Turner that adaptive solutions, rather than fixed or technical solutions, are required to solve the greatest problems we face?
I never heard of Eddie Turner. But it should be obvious that adaptive solutions rather than fixed or strictly technical solutions are required to solve great (difficult, important) problems. As the old saying goes:
"One size does not fit all."
Imaginary example: You are a manager and you want to motivate your employees. So you offer them all huge cash bonuses for doing exceptionally good work. Good idea, yes?
Well, Joe Smith has 8 kids, some of whom have severe medical problems; he also has to pay for college for the rest and he has huge student loan debt. Of course he will be motivated. But Suzie Jones has a husband with a job that pays a lot and comes to work for the challenge of the tasks and to socialize. The money won't motivate her, whereas assigning her the most difficult part of the project would.
[ Homer Simpson eating his donut ]
¶ +2024.03.25. What are the signs of a true expert and how can one differentiate them from someone who is just pretending to know everything?
One size does not fit all.
Differentiating "someone who is just pretending to know everything" from "a true expert" is easy: ask them to do something that requires real expertise, for instance, to solve a difficult problem, or to explain something in depth. Often they BOAST about how smart they "are". Armchair heroes. Donald Trump?
But not all true experts are alike. Some are very modest and will assure you they don't know much. A television documentary about the Inuit once showed a master seal hunter. He stood motionless for hours over a seal's little breathing hole in the ice. All of a sudden you saw him pulling up a seal he had skewered with his little wood-shafted, bone-hook-tipped harpoon, faster than you could see him do it. He exclaimed: "I almost missed!" Of course he didn't almost miss, but he was expressing his humility.
But then there are [probably very few] other experts who could be mistaken for phonies / "know-it-alls", because they boast they are superior to everybody. The difference is that they really are. The best example here was the architect Frank Lloyd Wright, who said that as a young person he had to choose between false modesty and honest arrogance and chose the latter and never regretted it. But he was indeed a true expert: The Imperial Hotel he had designed was one of the very few buildings to survive The Great Kanto Earthquake Saturday, September 1, 1923, Tokyo (Japan).
An example of a true expert I really like is the structural engineer William LeMessurier. Look up "William LeMessurier - The Fifty-Nine-Story Crisis: A Lesson in Professional Behavior" on the internet.
So just ask the person a tough question and don't let them off the hook (get the pun here?).
¶ +2024.03.24. Can you explain the meaning of an unconscious slip?
"Unconscious slip" is probably not a helpful term. We really have no idea about what might be unconscious because by assumption we would not be aware of it.
Persons make mistakes. Some are just random. This occurs a lot more in typing than in speaking. Sometimes the typographical errors are just meaningless but other times they "mean something". Occasionally I learn something I had not thought of from a typographical error I've made.
Other times a person knows they are supposed to say one thing but they believe something else and instead of saying what they know they are supposed to say, they say what they would like to say but were afraid to. Someone else here gave an example:
The person is supposed to say: "That dress looks great on you," but the sentence that the person states out loud is "That dress looks ungodly on you."
Probably what follows, if the speaker notices what they said or the other person calls them out for it is the person immediately exclaims: "I didn't mean that. I meant to say it looks great on you. Oh, I'm so sorry!" Well, they did mean it but were afraid to say it. What they now can't say is: "Don't hurt me!"
So label it as one will, sometimes a person is afraid to say what they think but somehow they do it anyway and that is called an "unconscious slip" or a "Freudian slip".
The root of the problem is that the person who makes the "slip" does not have the social power to be able to say what they think without suffering penalty for it. The root of the problem is asymmetrical social power: The powerful take what they want and the powerless take what they get, "OR ELSE!"
[ Boss ]
In a world where, as the U.S. Declaration of Independence says "all men are created equal", there would be no such "slip ups". Each person could freely express their true thoughts and feelings without fear. In Sigmund Freud's time, one big source of such "slips" was persons' sexual feelings since society was very "repressive" of sex, especially for respectable women.
¶ +2024.03.24. Could a reliable AI program eventually be used to determine verdicts in both civil and criminal cases, thus eliminating the need for humans and the biases they might have?
AI cannot be used to determine verdicts in either civil or criminal cases, or to make any other kinds of decisions, because AI does not understand anything: AI just computes. Any "decisions" it outputs were programmed into it by the human programmers who wrote it.
I asked the Bing AI about itself and it outputted:
"Today's AI is nowhere close to being intelligent, let alone conscious. Even the most impressive deep neural networks, such as DeepMind's AlphaZero or large language models like OpenAI's GPT-3, lack consciousness. They operate purely based on patterns and statistical correlations, devoid of subjective experience. These AI systems can perform remarkable tasks, from playing games to writing essays, but they lack self-awareness or inner life."
AI just computes. It has neither intelligence (nor stupidity) nor common sense. Its outputs just depend on what it was programmed to do and its datastore.
I asked the Bing AI why the mountain K2 is called "K2". It outputted what looks to me like the correct answer, and then it added that "Everest" is another name for K2. I inputted that this was an error; it outputted thanks for the correction and then repeated the error. Of course the people who programmed the Bing AI can fix that, but it's just an example of what will always happen.
There is no way of 100% eliminating the biases we have, and which, of course, the people who make the AI programs will program into the AI programs they write. So what is the best we can do?
We can use AI as a REFERENCE TOOL to help us make decisions. We can feed into it the information we have and think about what it outputs, along with all other available evidence. And we can do this not as isolated individuals but socially in civil society. And guess what? We already have this: It's called juries.
Juries are not perfect, but then no human knowledge can be perfect, because we find ourselves in a world we did not make and therefore have to try to understand it as best we can, but always on the basis of incomplete and possibly imperfect knowledge. Watch the old fun but also profound movie "The Truman Show" and take that as evidence to contribute to handling the present problem.
The physicist Niels Bohr gave his students some advice which is worth thinking about: "Take every statement I make as a question, not as an assertion." Of course we have to make decisions, but one decision that is not reasonable would be for us to decide to let AI's "make the decisions", instead of us using the AI's to propose courses of action to help us make the decisions.
I just now recalled the case during the Cold War where the Russian "early warning system" detected incoming ICBMs and would have launched a retaliatory strike, starting World War III. The human officer monitoring the system decided it was probably a "false positive" and saved the world.
¶ +2024.03.24. Can new tools and techniques be created using existing ones, such as building a paper computer or something similar in the field of science?
Yes. And: not likely.
Yes: New tools and techniques can be created using existing ones. Indeed, that's the way they usually are created. Over a very long evolutionary process, the turbojet engines on a jumbo jet derive, through successive improvements, from the first recorded steam engine, the aeolipile or "Hero's engine", invented by the Greek-Egyptian mathematician and inventor Hero of Alexandria.
I am not skilled in any craft. But if you have tools like a simple hand drill and saw, they can be used to make a precision lathe. 20th century Einsteinian physics built on Newtonian physics. Sir Isaac Newton himself said that the only reason he was able to see further than others was that he stood on the shoulders of giants.
But I doubt modern computers could be built starting from a paper construction, and not even from Charles Babbage's "Difference engine". It needed a long detour through electrical engineering advances. Of course, people including Alan Turing wrote down ideas for computers on pieces of paper, but that's not a direct engineering ancestor.
New progress is always built on previous progress. What remains the same is the creative mind which innovates. Yours? Mine?
¶ +2024.03.24. In what ways has artificial intelligence (AI) influenced personalized learning experiences?
Artificial intelligence (AI) cannot provide personalized learning experiences because AI is completely impersonal: there is no person in AI, just computation. The learner inputs something. The AI computes and emits output.
What AI can provide is individualized instruction. In this way it is different from, and may well be "better" than, a curriculum which gives the same tasks to every learner irrespective of the individual learner's strengths and weaknesses, previous learning and experience, etc.
When I went to school I had teachers (they called themselves my "masters" even though it was after 1863 in the USA!). They assigned the exact same pedagogical tasks to every student: one size fits all. It was terrible for me. I would have done far better with computer courses that analyzed what I already knew and what might interest me. I even had one human teacher who tried to destroy my soul in 7th grade, which no computer instruction would likely be programmed to do:
[ Mike Rentko + Bossy boss ]
I would have greatly benefitted from personalized mentoring from a wise and caring human teacher! An adult who would have had empathy for me as well as knowledges to share with me. "I see, Brad, none of this means anything to you and you feel you are just stuck here. A Charles Dickens novel is clearly miserably tedious for you. Let me show you [whatever] that I feel will interest you and let's see if you might like to study it more. Here, let's look as this little drama 'Tea and sympathy', and see if you feel it relates more to how you feel about being here...."
Individualized computer learning:
[ Computer programmer banging his head in to the monitor ]
Personalized human liberal education:
[ Platonic education ]
¶ +2024.03.24. What are the reasons for people still attending university despite the abundance of information on the internet? Is a university degree necessary or overrated?
One might ask the question the other way: If a person can get a university degree, why not? Maybe the most likely reason would be to not incur large student loan debt. But surely there are other reasons. Some curricula are not very enjoyable. Etc.
Aren't there a lot of career opportunities which are not open to a person who "has not been to college"? Networking is another reason to go to college. Some students are athletes and colleges are where the teams are, yes?
Persons often ask questions for reasons, because they have concerns. What are your concerns in asking this question?
I am old (77 years old). When I was young, a college degree "meant something", just as, for my parents, being "a high school graduate" was valuable. Over the years there has been "degree inflation": more people getting higher credentials and the credentials having less value. Benjamin Franklin never went to school and Abraham Lincoln had very little formal schooling; some people who were "self taught" did very well in life two centuries ago, didn't they?
If you want to be a doctor or nurse or lawyer or school teacher or have any number of other jobs, like being a "Wall Street" financial person, you need formal education.
As for "the abundance of information on the internet", how much can one learn ***in depth*** just surfing? The more education you have the more you can benefit from the information on the Internet.
There is a young man I see when I walk around the neighborhood some mornings. He did not go to college. He works for the municipal water company. He has a job that probably won't be off-shored or right-sized, and he's not stuck behind a desk in an office cubicle all day. He does have to work in the rain and snow, and if there is a burst water main at 3 in the morning he's got to get up and go there. But I think he is somewhat unusual.
He chose this: he didn't see any value in sitting in a classroom for himself. His father does have a college degree (probably more) and is a big executive in a big corporation. And the young man still lives at home (does not eat up most of his paycheck in rent). Probably most of his friends did go to college, so he's in the "college crowd" even though he does not have a degree. How many kids who don't go to college have that kind of situation?
Choose as wisely as you can. It seems things were better for young persons when I was young. I got an entry-level computer programmer job in 1972 just due to being able to think logically and having that college degree. And here's the real "kicker": The job started with 6 weeks of in-house classroom instruction in basic computer programming AT FULL PAY. How many jobs like that are available today?
My answer may not be very helpful to you. Again, think carefully about why you are asking the question and deal with the issues involved as best you can. If you "don't go to college", you may regret it. Starting college and leaving after a year or two without a degree but with student loan debt would be discouraging, too, yes? Good luck!
¶ +2024.03.23. Am I really too dumb to become a mechatronic? I am in the second year of my apprenticeship now and I feel more dumb, incapable, lost, and doomed than ever before.
I obviously don't know enough to give you advice.
But I will give you some advice anyway for what it is worth or not worth: Have you asked for help? Have you talked over your concerns with whoever you are working for? If not, why not try?
The question says this feeling has been "building" for a while. There could be many reasons for being "lost" and "feeling doomed" other than being "dumb" or "incapable".
Ask for help. You did ask for help by posting this question on Quora. That was good, but probably there are better places to seek help? Maybe you can turn this around?
¶ +2024.03.23. What role do you see brain chip implants playing in enhancing human capabilities beyond overcoming physical disabilities, as demonstrated by Neuralink's latest video?
Yes, innovations like Neuralink can be very helpful for persons with severe disabilities.
But do you want to risk losing your mind by having somebody do invasive surgery on your brain, or, once they have put their "chip" in there, taking over your life and making you into a zombie? Why on earth do people want to "play with fire" and risk destroying their minds over some googoo technogimmickry?
Use your mind, don't mess with it!
Of course everyone should enhance themselves: with liberal education, not invasive brain surgery or other puerile sci-fi "fantasies". Grow up!
The solution for persons with severe neurological disabilities may be brain implants, although non-invasive innovations would be far, far preferable; but for persons who are healthy, the path to self-enhancement is through social learning:
[ Platonic education ]
¶ +2024.03.23. What do you think will be the greatest challenge for humanity in the future: climate change, overpopulation, pollution, or the potential dangers of advanced technology such as artificial intelligence?
This is an otiose question.
Every one of the choices is a good candidate for the "winner": "climate change, overpopulation, pollution, or the potential dangers of advanced technology such as artificial intelligence". My list would also include: nuclear weapons, biochemical research (where Covid-19 likely came from) – but on second thought maybe they are just examples of "advanced technologies".
Yes, the list omits a biggie: political conflicts, and still the list will not be exhaustive. Currently there are at least two wars going on each of which can result in the end of human and higher-animal life on earth in nuclear apocalypse: Ukraine and Gaza. The fools, the petty ideologues and other disgusting people with power but not "humanity" need to stop their wars, both because the wars themselves are destructive but also because they are distracting us from dealing with the problems listed in the present question.
"Look Ms. Nuland, Bibi and Boris Johnson! We beat the Russians! We won the war! We beat Hamas too! Wow, ain't we the greatest! –– Oh, no! [Gasp] I can't breathe the polluted air and the electricity has conked out and the temperature has risen above the boiling point of water and Oh my we really f*cked up, didn't we...." [End of human and higher animal life on earth, and we did it to ourselves! ]
"All understood, too late." (Sophocles, Antigone, ca. 441 BCE)
[ THINK ]
¶ +2024.03.23. Could Strong AI eventually destroy humanity?
Better safe than sorry, yes?
I don't see how "artificial intelligence" (AI) is likely to destroy the human biological species. Here is what the Bing AI says about it:
"Today's AI is nowhere close to being intelligent, let alone conscious. Even the most impressive deep neural networks, such as DeepMind's AlphaZero or large language models like OpenAI's GPT-3, lack consciousness. They operate purely based on patterns and statistical correlations, devoid of subjective experience. These AI systems can perform remarkable tasks, from playing games to writing essays, but they lack self-awareness or inner life."
Does that sound apocalyptic to you? HOWEVER! Can't we imagine some irresponsible humans: some gung-ho gameboy "computer scientists" or greedy entrepreneurs USING AI to either destroy the human biological species or our humanity?
[ Zuckerberg ]
"We" have produced nuclear weapons which can destroy all humanity. And it looks like "we", by doing "gain of function" research on corona viruses that escaped the laboratory as Covid-19, already did severe damage to humanity. So why not computers?
But my computer fears are not about AI: I am far more concerned about Virtual Reality (VR), which literally takes a person out of their mind. I will end with a little VR experiment I foolishly tried (fortunately I had a lucky outcome). Also, watch the old fun but also profound movie: "The Truman Show".
[ My VR experiment ]
¶ +2024.03.22. What would prevent self-aware artificial intelligence from taking control of Earth from their creators?
This is not likely to happen. I asked the Bing AI about this sort of thing and here is the output I got:
"Today's AI is nowhere close to being intelligent, let alone conscious. Even the most impressive deep neural networks, such as DeepMind's AlphaZero or large language models like OpenAI's GPT-3, lack consciousness. They operate purely based on patterns and statistical correlations, devoid of subjective experience. These AI systems can perform remarkable tasks, from playing games to writing essays, but they lack self-awareness or inner life."
We can imagine all sorts of things that are implausible or impossible. How could an "artificial intelligence", conscious or not, take control of the earth from its creators? Well, obviously: pull the electric plug on it. But the point is the "from its creators": Humans need to not try to create things that might harm them (us). That should be a "no brainer".
But "we" in fact already do such things. It appears likely that the Covid-19 pandemic came from "gain of function" virus research at the Wuhan Institute of Virology that "escaped". WE LIKELY DID IT TO OURSELVES! So why not some irresponsible – got that: irresponsible – computer programmers having fun creating a superpowerful superduper AI and destroying the world? "Wheee! Look, guys, what we did! Oh, boy, wow!" Kaput.
Instead of people getting off on sci-fi fantasies, they need to appreciate what it means to be human. Part of the computer science curriculum should be a practicum as an orderly in a hospice, for the student to get the bodily fluids of dying persons on their hands and so get a presentiment of where they each are headed, not into intergalactic space. Instead of playing sci-fi video games, young males should take to heart The Book of Ecclesiastes in the Bible even if they do not believe in a Deity; "take to heart", not just compute!
[ Weizenbaum ]
Our technologies have advanced in the past 2,500 years. But liberal education was already highly advanced back then and has only regressed since then to SATs and GREs.
[ Platonic education ]
¶ +2024.03.22. Why do AIs still need human programmers if they can write code better than humans? Can everything be automated in the programming field?
Not everything can be automated in the programming field, although a lot of programming tasks can be. This has always been true. We keep writing ever more powerful programs to do ever more of the routinizable work.
Two exceptions, one practical, the other theoretical:
Practically, computer programs have bugs and they break. Human "maintenance programmers" will always be needed to fix the bugs.
Oh, you say: But we will write programs that detect and fix their own bugs! Well, yes, but they too will have bugs they cannot fix, and worst case: the bug will be that the program does "fix" itself but in a way that breaks it further. Maintenance programmers: bug fixers will always be needed, and the more complex the programs, the more challenging the bugs will be to find, figure out, and fix.
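As a concrete illustration of the kind of bug a maintenance programmer gets paid to find, here is a hypothetical sketch (the function and the scenario are invented for illustration, not from any real system): a one-character off-by-one error that makes a summing routine silently drop the last value.

```python
# Hypothetical maintenance scenario: sum_first_n was reported to
# return totals that were always slightly too small.

def sum_first_n_buggy(values, n):
    """The shipped version: range(n - 1) stops one item short."""
    total = 0
    for i in range(n - 1):   # BUG: should be range(n)
        total += values[i]
    return total

def sum_first_n_fixed(values, n):
    """The maintenance programmer's one-character fix."""
    total = 0
    for i in range(n):       # fixed: iterate over all n items
        total += values[i]
    return total

data = [10, 20, 30]
print(sum_first_n_buggy(data, 3))   # 30 -- the last value was dropped
print(sum_first_n_fixed(data, 3))   # 60 -- correct
```

Notice that an automated "self-fixing" tool that guessed the wrong bound, say `range(n + 1)`, would "fix" the program into crashing instead, which is the worst case described above.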
Theoretically, automation runs up against Kurt Gödel's incompleteness theorems, which prove that any algorithmic system (computer program, formal axiom system, etc.) rich enough to express arithmetic either contains inconsistencies or cannot determine the truth or falsity of certain propositions in it. Computing, including AI programs, is like if the earth were flat and there was an edge you could fall off. But if you stay away from the edge it won't bother you.
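For reference, here is a standard informal textbook formulation of the first incompleteness theorem (my paraphrase in notation, not a quotation from Gödel):

```latex
% First incompleteness theorem, informally:
% for any consistent, effectively axiomatized formal system F
% strong enough to express elementary arithmetic, there exists
% a sentence G_F such that
F \nvdash G_F
\quad\text{and}\quad
F \nvdash \lnot G_F
% i.e., F can neither prove nor refute G_F, so F is incomplete.
```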
As for "artificial intelligence", I asked the Bing AI about it, and it outputted:
"Today's AI is nowhere close to being intelligent, let alone conscious. Even the most impressive deep neural networks, such as DeepMind's AlphaZero or large language models like OpenAI's GPT-3, lack consciousness. They operate purely based on patterns and statistical correlations, devoid of subjective experience. These AI systems can perform remarkable tasks, from playing games to writing essays, but they lack self-awareness or inner life."
What can't computers do? They can't enjoy a leisured meal with good friends and good wine and bread. They can't feel pain or suffer and die. All computers can do is compute. Computers are just TOOLS for us humans to USE. Computers can be more "powerful" than us, just like a jet engine produces more exhaust gas than a human fart. But it's like a car: No matter how fast it can go, it only exists for a driver to go somewhere.
Even if you do not believe in any Deity, read The Book of Ecclesiastes in the Bible. You are right now doing what computers can't do: living.
[ Platonic education ]
¶ +2024.03.22. Is it possible for humans to become so advanced that they no longer need to learn about subjects like math, science, and history because all information is stored in the cloud?
This question is wrong-headed.
What can be stored in "the cloud" or in a printed book such as an encyclopedia, is indeed information, and at least "theoretically", all information we know could be stored there.
But we humans do not just learn information: we also learn how to USE it for our purposes, and no information store, be it the cloud or an encyclopedia or any other has any purposes, does it?
So we don't need to memorize a lot of math, science, history and other facts. But we need to learn how to USE them, and that means learning about them; and that knowhow cannot be stored anywhere, because it is what living itself is: skill, doing things with what we know.
Imagine I give you a personal computer with all of nuclear physics in it, but you never even learned to read and write. You'd need to "advance" in learning about all sorts of things to be able to build an atomic bomb, or even to understand Galileo's famous experiment of dropping objects off the top of the Leaning Tower of Pisa, yes?
Before writing, memory was everything. If persons did not remember some fact, it was lost. With writing, then printing, and now computers, the facts can be saved externally to our minds. But we still need to learn how to use the facts. The most advanced person would be one who has learned how to learn new things he (she, other) does not yet know or even know about, not just a person who has learned a lot of things already. Also, in the process of learning, a person "picks up" a certain amount of factual information, even if it is also stored somewhere; or, if it is not yet stored somewhere, the person can add it to the store of information going forward.
But there is also the problem of "common sense". A person needs to know enough facts to orient themself intelligently in the world they live in.
Example: Once Boeing hired a young engineer just out of college. They gave him a simple part to design, to get him familiar with company procedures. He designed that part based on his book learning in school. He took his design to the machinists to make a prototype. They looked at his drawing and asked if he was sure it was right. Yes, he replied. They happily made his part for him and it was indeed perfect, except it was an order of magnitude too big. The young man did not have "common sense". This may be the case with school kids today who don't know how to do arithmetic but just use a pocket calculator. You don't need to memorize the multiplication table but you do need to have a sense of what multiplication is, yes?
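The Boeing anecdote suggests a small, concrete habit: before trusting an exact computed result, check it against a rough mental estimate. Here is a minimal sketch of that habit in code (the function name and the numbers are my own illustration):

```python
# A rough "common sense" check: compare an exact computed result
# against an order-of-magnitude estimate before trusting it.
import math

def same_order_of_magnitude(exact, estimate):
    """True if two positive values differ by less than a factor of 10."""
    return abs(math.log10(exact) - math.log10(estimate)) < 1

# 487 * 62: the calculator says 30194. A mental estimate is 500 * 60 = 30000.
assert same_order_of_magnitude(487 * 62, 500 * 60)

# A result ten times too big fails the check,
# like the young engineer's oversized part.
assert not same_order_of_magnitude(301940, 30000)
```

The point is not the code itself but the habit it encodes: the estimate is the "common sense" that tells you when the exact answer cannot be right.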
Data is not information.
Information is not knowledge.
Knowledge is not understanding.
And understanding is not wisdom.
A person need not remember the facts of history, but we do need to have learned them and remember their "sense" to orient ourselves in the world. We need to "remember Pearl Harbor" even though we don't need to remember the exact date it happened or the name of the admiral who planned it, and can look those facts up if needed.
¶ +2024.03.22. Can artificial intelligence ever experience emotions like love or fear?
"Artificial intelligence" is not intelligent (nor is it stupid) nor does it have any other "human" qualities: It just computes. I asked the Bing AI about this and it outputted:
"Today's AI is nowhere close to being intelligent, let alone conscious. Even the most impressive deep neural networks, such as DeepMind's AlphaZero or large language models like OpenAI's GPT-3, lack consciousness. They operate purely based on patterns and statistical correlations, devoid of subjective experience. These AI systems can perform remarkable tasks, from playing games to writing essays, but they lack self-awareness or inner life."
Of course, an artificial intelligence computer program can output sentences such as: "I feel bad about what you said happened to you". But those are not real feelings; it's just what some computer programmer decided the computer should output instead of maybe the current weather report or whatever.
If you want to learn more about computer and humans (i.e., us – me, you...), read MIT Prof. of Computer Science Joseph Weizenbaum's classic little book "Computer power and human reason: from judgment to calculation" (WH Freeman, 1976).
¶ +2024.03.22. Is it considered ethical to use voice cloning for pranking purposes?
Pranking is never ethical, or to use some less contentious words: decent, kindly, thoughtful, friendly, helpful, constructive, etc., is it? It's being mean to somebody on purpose but without directly confronting them about it.
I can't think of many cases where pranking is justifiable. "Hazing" is a form of pranking and it's always bad, isn't it?
Maybe here is an example: You have a school where the teacher is a very mean person whose only pleasure is flunking students, and you are a student and there is nothing you can do to get this person to improve, because if you speak up he (she, other) will flunk you even if you deserve an "A". Then doing something to make a fool of him, where he can't find out who did it, might be a good thing to do: using a prank as a guerrilla warfare tactic. But that's a rare situation, isn't it?
In general, if a person is contemplating "pulling a prank" the person should stop and think what may be wrong with themself to want to do this. They should ask themself: why do I want to hurt this other person? Would I like them to do it to me?
Is "Voice cloning" impersonating somebody so that it looks like they said something they didn't? That's fraud. Isn't that a felony?
[ THINK ]
¶ +2024.03.22. How can AI transform education, and what challenges does it pose for combating plagiarism?
How did the printing press transform education?
The impact of AI on education may be as powerful as was the change from scribal to print culture. There is a fine book about one way this happened that was not entirely in the learners' favor: Walter Ong, "Ramus, Method, and the Decay of Dialogue: From the Art of Discourse to the Art of Reason". Instead of being challenged to defend arguments, students got curriculum.
[ Notes by rote ]
AI can largely replace human teachers who just methodically drill instruction into the learners' heads. I had a lot of that kind of petty pedagogue when I was in school (they called themselves my "masters" even though it was after 1863 in the USA), including a 7th grade English teacher who tried to break my spirit:
[ Rentko ]
As for the plagiarism issue, which is a big topic today: Yes, AI can provide students with prepackaged essays to submit to avoid doing the work, and petty pedagogue teachers can get off on detecting these crimes against the schooling system and punishing the perps accordingly.
But teachers who seek to encourage young persons to learn and to love learning can deal with this plagiarism threat a different way: Find activities for the students to do which will genuinely INTEREST and ENGAGE the students to do their own creative work, so that the students will not want to avoid the work in the first place.
As for showing the students what AI can do, maybe give them a question, give them the AI's answer, and then assign them to write up how they came to be in a classroom where they had a teacher doing this to them, and whether this is how they wanted to spend their time, and if not, what they would rather be doing instead. "Do you really want to be graded?", for instance.
[ Platonic education ]
¶ +2024.03.21. Can advancement in technology and artificial intelligence challenge traditional notions of humanity and ethics?
Humans have for centuries if not millennia now been "technological animals". We do not by any stretch of the imagination live "natural" lives. Even hunter-gatherers had technology, human language with its complex syntax and vocabulary and story telling, etc. being the most notable technology because it enables all the rest, but also the controlled use of fire and other skills.
The kinds of innovations we ordinarily consider to be "technologies" have been around for a while, too. One very important technology is printing, starting with Johannes Gutenberg's famous printed Bibles.
Our lives are artificial, but clearly the artifices change. Read Elizabeth Eisenstein's classic study, "The Printing Press as an Agent of Change" (2 vols. in 1, Cambridge University Press), to see what profound changes in social life the printing press brought about. In a scribal culture, without the printing press, we would not have the exact sciences of physical nature, Galilean physics and chemistry, nor the technologies, such as transistors and space rockets, built on them.
What does one mean by "traditional notions of humanity and ethics"? There has been a radical change from (a) traditional tribal cultures to (b) secular humanism. In traditional tribal cultures, "humanity and ethics" are fixed and unquestioned. How they came to be nobody knows: you just do what the ambient social norms and beliefs are. But, in The West (China might be another example?), starting in classical Greece, persons came to question their form of social life. That is the decisive event: from implicit acceptance to conscious critique.
That was the big change: from traditional society to self-accountable society: from just implicitly instantiating the beliefs and customs of the social world in which persons found themselves to self-reflectively, self-accountably choosing their form of social life. This can happen only once, because once a person starts questioning their form of life there is nowhere "else" to go except further questioning.
But this transformation is obviously not complete yet, even with our current technological advances. Just because a person has a cellphone does not mean they are not just instantiating the beliefs and customs they were childreared in. Indeed, whenever parents tell their children what to believe, even if it's a current religion or ideology, it's no different from a preliterate society. If you want to study what traditional society is like, read Hanny Lightfoot-Klein's little book "Prisoners of ritual".
Current advancements in technology and artificial intelligence, like the advent of the printing press, will change what our notions of humanity and ethics are applied to, but not the notions themselves, which can be found in Plato's dialogues. Persons are either implicitly enacting their social conditioning or they are critically engaging with it and choosing their form of life, whether in the 4th century BCE terrestrial agora of a Greek city state or in a space ship headed into intergalactic space. As the Microsoft slogan has it: "Where do you want to go today?"
[ THINK ]
¶ +2024.03.21. Can artificial intelligence replace most jobs and perform tasks that require human emotions?
Artificial intelligence (AI) cannot REPLACE most jobs and perform tasks that require human emotions. AI has no emotions and AI has no thoughts either; AI just computes.
But AI can SUBSTITUTE for and SIMULATE many activities that really REQUIRE human emotions.
Something I have read about: In Japan, they have robotic "pet dogs" that help lonely old persons feel less isolated, substituting for a real human or animal companion. The "robopet" does not replace a caring human (or a caring pet animal), but where the real thing is not available, the SIMULATION is better than nothing. "Real thing" – sorry, bad idiom there: I obviously meant a real, warm, caring person or dog, neither of which is a "thing". Computers, and the AI computer programs that run on them, on the other hand, are just "things".
¶ +2024.03.21. Do you believe that mathematics was invented by humans or was it discovered, transcending human existence? Why?
What is the concern here? What is the person who asked this question concerned about or trying to accomplish or maybe afraid of or what else?
Wiktionary says the word "transcend" derives etymologically from "to climb over, step over". Some persons talk about a Deity who "transcends" humans, by which they mean the Deity created us and the world, etc. So is the question about some kind of "superior thing"?
Cinderblocks and discarded styrofoam coffee cups "transcend" human existence in a way, don't they? Just like the idea of a Deity transcends human existence in a very different way. But in another way, they are all PART of human existence (e.g., you reading this sentence or me writing it), so human existence transcends (includes, encompasses) all of them in yet another way.
Maybe discovering and inventing are the same thing? Inventing something means discovering it, doesn't it? Some primitive person "invented" the wheel. Doesn't that mean he (she, other) discovered that a round thing can be rolled, etc.?
So it seems we need to get a clearer understanding of CONTEXT to address the present question, including understanding WHY it is being asked. Sounds to me like a fine occasion for "Socratic dialog" among good friends. You?
[ Platonic education ]
¶ +2024.03.20. I'm currently 19 years old and in college. Do you think that by the time I hit either my 30s or Middle Ages (45), society will only comprise of the few cognitively elite, with the rest of us living in universal basic income thanks to AI?
Let us hope that might be the way it will be. Some people would rather see us all destitute.
I seem to have read somewhere that over half a century ago now, Richard Nixon actually thought about having a universal basic income, but obviously nothing came of that. My concern is all the "conservatives", Reaganites et al., who want to roll back the New Deal and get rid of Medicare and Social Security instead of extending them. In England, which has a National Health Service, these people are destroying the NHS by continuing to cut its funding.
But even with AI there will be work to do, especially including care for the infirm in an aging population. As for organizing work, I invite you to watch Prof. Richard Wolff's YouTube videos and check out his website: Democracy at Work (d@w).
2,500 years ago, Aristotle said that if machines would do all the scut work of life we would not need slaves (now wage-slaves and even giggers). Well, now we increasingly have the machines, don't we?
"Leisure has been, and always will be, the first foundation of any culture.... in our bourgeois Western world total labor has vanquished leisure. Unless we regain the art of silence and insight, the ability for nonactivity, unless we substitute true leisure for our hectic amusements, we will destroy our culture – and ourselves." (Josef Pieper)
[ Platonic education ]
¶ +2024.03.20. Is the future predetermined? If so, what is the significance of individual decision-making?
Isn't this an idle "philosophical" question that has no practical effect on anything but can give a person a headache trying to think about it?
What are you going to eat for dinner tonight? How can you treat that as "predetermined"?
From the individual person's perspective, the future may largely be predetermined in a non-philosophical way: The individual cannot change the policies, foreign or domestic, of his (her, other's) government. If you were a male between the ages of 35 and 60 in Ukraine and a recruiter found you, likely your future would be predetermined: to fight and die in the current war. Or if you are a Palestinian in Gaza, your future is likely determined: to starve to death. There is nothing you can do about it.
But those are human choices, just not "at your level"; they have nothing to do with some "philosophical" question about free will and determinism. Just as a small child's life is pretty much determined by his parents and teachers.
¶ +2024.03.20. What should a student do to prevent AI from taking human intelligence?
AI is not likely to "take human intelligence". I asked the Bing AI about this and it outputted to me:
"Today's AI is nowhere close to being intelligent, let alone conscious. Even the most impressive deep neural networks, such as DeepMind's AlphaZero or large language models like OpenAI's GPT-3, lack consciousness. They operate purely based on patterns and statistical correlations, devoid of subjective experience. These AI systems can perform remarkable tasks, from playing games to writing essays, but they lack self-awareness or inner life."
What persons can do is (1) IMAGINE AI taking human intelligence and then (2) ACT AS IF it had, when in reality what they would be doing is programming computers to output commands to boss persons around.
HAL in Stanley Kubrick's classic film 2001 was just a cinematic fiction like all sorts of other unrealistic things we can imagine. We can act to turn computers into waking nightmares and destroy the human world. But we don't have to: We can keep clearly in mind that computers are TOOLS for us to USE to improve our lives. Humans are the boss.
A big question is who are the bosses: just a few Silicon Valley techno-oligarchs or "all of us"? Check out Prof. Richard Wolff's YouTube videos and website: Democracy at Work (d@w) for a vision of a humanistically constructive not just technologically "advanced" future.
If a student wants to study AI, that's one option. That is learning advanced technical skill. Even more important they (we) need to study the sociology and ethics of AI. Study and think about WHAT FOR, not just HOW TO. Please read MIT Prof. of Computer Science Joseph Weizenbaum's classic little book: "Computer power and human reason: from judgment to calculation" (WH Freeman, 1976).
"The history of science and technology of the post-war [post-1945] era is filled with examples of reckless and unreflective "progress" which, while beneficial or at least profitable to some in the short run, may yet devastate much life on this planet. Perhaps it is too much to hope, but I hope nonetheless that as our discipline matures our practitioners will mature also, that all of us will begin to think about what we are actually doing and ponder whether, whatever it is, it is what those who follow after us would want us to have done." (Joseph Weizenbaum, Professor of Computer Science, MIT)
If you are a student, imagine a future (oops, this existed 2,500 years ago already...) of learning without exams or degrees or student loans....
[ Platonic education ]
¶ +2024.03.20. What if we minimized symbols and simulacra in social media and entertainment especially powerful ones as they can be used to attack the brain? Would we identify less with symbols and more with our jobs, careers and people?
Something seems unclear about this question.
How can symbols "attack the brain"? Symbols are just ordinary passive visual images, yes? Example: Look at an American (or Ukrainian or Russian or any other) flag. It's a symbol. It can't attack your brain. But social conditioning can lead a person to have intense emotions when just looking at such an image, yes? Surely we need to dampen down such irrational emotions so that persons think calmly and sensibly about political reality, but that's not "attacking the brain", is it? Make love not war.
As for the second part of the question: I think persons should identify mostly with people, not with secondary attributes like "jobs, careers", ethnicity, gender, you name it. As for career, some persons "identify" with work that is both personally rewarding and also helpful to others: teaching, nursing, scientific research, making useful things, etc. Others have "careers" as corporate raiders or cynical politicians, destroying other persons' lives, like sometime General Electric CEO Jack Welch, or Boris Johnson, who caused the current Ukraine War.
[ Boris on bicycle ]
The person asking this question might really like The Book of Ecclesiastes in the Bible (even if one does not believe in any "God"). Wisdom is in mutually nurturing, peaceful social life.
"Leisure has been, and always will be, the first foundation of any culture.... in our bourgeois Western world total labor has vanquished leisure. Unless we regain the art of silence and insight, the ability for nonactivity, unless we substitute true leisure for our hectic amusements, we will destroy our culture – and ourselves." (Josef Pieper)
[ Platonic education ]
¶ +2024.03.20. What are the advantages and disadvantages of NeuraLink?
I am not an expert.
It seems there may be legitimate uses for brain implants for SEVERELY neurologically impaired persons. I once had a teacher whose mother had "locked-in syndrome": her mind was lucid but she had lost all control of her voluntary muscles. If a brain implant could have enabled her to regain control of her body, to speak, eat, control her bowels, etc., that would have been great.
But to do invasive brain surgery on healthy persons is terrifying and depressing, isn't it? Even "brain washing" without surgery is unethical, isn't it? What healthy persons need is not hospitalization to saw a hole into their skull and open up their brain to possible damage. Healthy persons need leisured education in the liberal arts and sciences, yes?
Some people, probably mostly young males with nothing better to do, get off on alienated sci-fi fantasies. They need to work for a while as orderlies in a hospice, to get the bodily fluids of dying persons on their hands, to learn what being a mortal on earth is about. Even if you are a gameboy in your 20s, one day you may be a patient in a hospice yourself, yes?
Read The Book of Ecclesiastes in the Bible even if you do not believe in any "God" (or "singularity"). Watch the old fun but also profound movie "The Truman Show".
Can I try to urge you to:
¶ +2024.03.19. Should we prioritize the development of artificial intelligence over addressing pressing social and environmental issues?
Isn't it obvious that would be a very bad idea? Use AI to HELP address pressing social and environmental issues.
As the planet overheats, human life as we know it may no longer be sustainable. Billions will die from climate catastrophes before that. And in desperation they will likely attack the people who are not dying from it.
"Social issues" are each person's living existence from birth to death, suffering, working, maybe enjoying life even?
Why does anyone even ask this question? Cui bono? What for?
[ Weizenbaum ]
¶ +2024.03.19. What are some movies that illustrate the consequences of human experimentation?
What a painful question!
Look up "Tuskegee Syphilis Study" on the Internet. Or Dr. Josef Mengele in Nazi Germany. Or what damage might Elon Musk's Neuralink invasive microchips, implanted by dangerous brain surgery in persons' heads, do?
I have read something interesting about this. These horrible things cannot be undone. Obviously such things should never be done again. But what to do about the results that were recorded? Some persons say the results should not be used, out of respect for the victims' suffering. Others say the results should be used, in respect for the victims' suffering. I read that information from some of Mengele's horrible experiments helped in the design of astronauts' space suits.
Now I have not mentioned the good consequences of human experimentation, for instance back in the 18th century when Dr. Edward Jenner inoculated people with cowpox to make them immune to smallpox. An epitome here is the case where a medical researcher makes a discovery and then tests it on HIMSELF (HERSELF, OTHERSELF) before trying it on anybody else.
Sorry, I misread the question. I did not see that it was specifically looking for movies that show human experimentation. Just seeing the subject of the question brought up too much imagery for me. If this were a question on a school test I wouldn't pass, would I?
¶ +2024.03.19. What is the correlation between IQ and success in the fields of science and engineering? Why is a high IQ often considered necessary for these professions?
A person cannot be a dolt and be a good scientist or engineer. That should be obvious.
[ Homer Simpson eating his donut ]
One of the most "brilliant" scientists of the past century, Richard Feynman, at least by his own account had an IQ of 125. That is not "stupid" but it's not exceptional. Maybe 20% of people are that "smart"?
But Feynman was exceptional. What was the "secret sauce"? When he was a child, his father was always posing questions for him to solve and encouraging him to think up his own new questions. Contrast that with children whose parents tell them what to believe. Also, Feynman said he worked hard at it.
As an aside, I personally knew one engineer who had never even been to college. He was maybe in that "125 IQ" range. Nobody "special". But one time in his life he had a genius idea (and it was not just an "idle idea": it saved lives in World War II). Exceptional ideas sometimes come from persons who are not otherwise "exceptional", just industrious and thoughtful.
I am not an engineer or scientist. I find the structural engineer William LeMessurier (look him up on the Internet) to be a "model" for a great engineer, especially his handling of the New York Citicorp building problem – look that up on Google.
[ THINK ]
¶ +2024.03.19. In what ways can artificial intelligence be utilized in books?
First, everybody needs to curb their enthusiasm about "artificial intelligence" (AI). We need more human wisdom to know what to do (and to not do!) with it. I asked the Bing AI about it and it replied to me:
"Today's AI is nowhere close to being intelligent, let alone conscious. Even the most impressive deep neural networks, such as DeepMind's AlphaZero or large language models like OpenAI's GPT-3, lack consciousness. They operate purely based on patterns and statistical correlations, devoid of subjective experience. These AI systems can perform remarkable tasks, from playing games to writing essays, but they lack self-awareness or inner life."
Prof. Noam Chomsky has said there is nothing "intellectually interesting" about the current AI: it does not help us understand human language. AI just computes.
What use might AI have in books? I find that not AI but just simple text processing can be enormously helpful in reading a book. One of my favorite books runs to over 1,500 pages. I get to page 900 and see a reference to XXX. Where was XXX first mentioned in the book? Without a searchable digital copy of the text, that's hopeless. With such a digital copy, it's easy. Suppose instead I ask the AI about XXX. It may find that the first place XXX was mentioned in any book was in some obscure text from the 1600s, if that text has been digitized into the computer storage available to the AI.
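The kind of "simple text processing" meant above needs no AI at all. A minimal sketch in Python (the sample text and the search term "XXX" are made-up examples, and a real book would be one big text file):

```python
# Minimal sketch: find where a term is first mentioned in a digitized book.
# No AI involved, just plain text search; "XXX" is a placeholder term.

def first_mention(text, term):
    """Return the 1-based line number of the first line containing `term`
    (case-insensitive), or None if the term never appears."""
    needle = term.lower()
    for lineno, line in enumerate(text.splitlines(), start=1):
        if needle in line.lower():
            return lineno
    return None

book = "Prologue\nHere XXX first appears.\nMuch later, XXX returns."
print(first_mention(book, "xxx"))   # -> 2
print(first_mention(book, "zzz"))   # -> None
```

With a 1,500-page book as one text file, the same loop (or any editor's search function) answers "where was this first mentioned?" in a fraction of a second.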
Again, don't get too enthusiastic about AI. Living the good life is what is important and AI is a TOOL for us to use in that pursuit. But the goal is not the technostuff or scifistuff itself, but how it can contribute to us enjoying our daily lives, for instance, in leisured dinners with close friends and good bread and wine. Wisdom does not advance as quickly as technology, does it? The Book of Ecclesiastes in the Bible is as relevant today as it was 2,500 years ago (even if you do not believe in any "God"), even if scribal scrolls are not.
[ Platonic education ]
¶ +2024.03.18. What are the reasons for people's fear of artificial intelligence? How do the thoughts of those who are afraid of it differ from those who are not?
There is a big essay in the March 18 (2024) issue of The New Yorker magazine about this. I read it. I found it at best discouraging.
Who are "people"? In the Middle Ages there were peasants in the fields and angels in heaven. Today there are consumers in suburbia and computer scientists in the Silicon Valley tech giants. But all the angels worshipped God, whereas today some of the computer scientists worship AI and others fear it. And both kinds of computer scientists are as different from consumers, who may be enthusiastic about AI or fear it, as angels were from peasants.
There are all the pubescent males who get off on sci-fi and video games and inventing new computer glitz.
[ Zuckerberg ]
There are physicians who cannot pay their office staff because the big medical payments computer company was hacked, and now they have been offered a choice to sell out to it to pay their bills (I don't know the details).
A lot of consumers fear AI because they are afraid of what it will do to them. Gameboys like AI for the fun they can have with it (at the frightened consumers' expense). The worst of the worst may be the Neuralinkers who want to do invasive brain surgery on all of us to implant networked microchips in our brains to "augment" us → turn us all into zombies, except for those where the surgery goes wrong and they are left brain dead.
So I see at least four categories of persons, not two: 1. A few elite persons who like AI ("E/accers" from the New Yorker article). 2. A few elite persons who fear AI ("E/doomers"). 3. Many ordinary persons who fear AI and do not understand it. 4. And many ordinary persons who like AI and do not understand it.
Me? I worked for half a century as a computer programmer. I got PTSD from the innovations of the decade before AI (frameworks like Angular, and "agile" and "scrum"...), and I have no idea what it's all about today. If I don't understand it, imagine Barbie and Ken. Read MIT Prof. of Computer Science Joseph Weizenbaum's classic little book: "Computer power and human reason: from judgment to calculation" (WH Freeman, 1976). It's as timely today as it was when it was published, nearly half a century ago now. And also read The Book of Ecclesiastes in the Bible, even if you do not believe in any God.
"Leisure has been, and always will be, the first foundation of any culture.... in our bourgeois Western world total labor has vanquished leisure. Unless we regain the art of silence and insight, the ability for nonactivity, unless we substitute true leisure for our hectic amusements, we will destroy our culture – and ourselves." (Josef Pieper)
¶ +2024.03.18. Is it possible for Boeing to solve its current problems quickly? What needs to happen for Boeing to overcome its current problems?
Form a cartel with Airbus and divide up the market and end the competition. Illegal?
The problem is very simple and it is the very nature of "capitalism", or at least the race-to-the-bottom kind of capitalism we have today. MBA schools are turning out people skilled in raping and pillaging corporations.
Boeing could fix its problems by doing much more quality control: making planes like they did back in the "707" days, with quality first and engineers, not MBA people, running the company. But that would raise the costs and the prices. Airbus could offer their planes at a lower price, everybody (all the airlines) would buy the cheaper Airbus planes, and Boeing would go out of business.
And why would the airlines buy the cheaper Airbus planes? Because they all needed to keep their air fares as low as possible to keep themselves from being driven out of business by customers looking for the cheapest fare. It's a race to the bottom. Junk economy. Junk bonds. Jack Welch. Junk world. "Made in China". "Made in Bangladesh." We are destroying ourselves.
Now that, of course, oversimplifies the situation, but that is the "essence" of it, yes? Cynical people running the business to maximize short-term profits. Once upon a time there was "lifetime employment". Pensions. Made in the USA. There was a lot more government regulation, or as Reaganites and Thatcherites call it: "interference". Taxes on the rich (both individuals and corporations) were much higher, as a result of FDR's New Deal.
It can really get cynical: The company might figure out how lax it could make the quality, resulting in more crashes and law suits, before the costs of the law suits exceed the cost of greater quality control.
True story I read somewhere. Boeing manager to his manager: "I've seen projects in the military stopped for lesser problems [than the 737 Max]." Senior manager back down: "In the military you don't have to make a profit."
So here is your solution: a lot of government regulation of the economy, including state ownership of many enterprises, such as the electric power companies. The "invisible hand" needs a mind and even a soul, doesn't it?
¶ +2024.03.17. How long before A.I. and quantum computers replace God?
No way!
I myself doubt that "God exists", and, if yes, what KIND of God and what KIND of exists?
But for the sake of argument let's say the kind of God who created everything exists. Guess what would be part of His (Her, Other's) creation? A.I. and quantum computers. God would be their cause and their underlying source. They would just be parts of His creation, like humans and cobblestones and galaxies and quarks and everything else.
Same thing about humans and A.I. and quantum computers: We would make them, so they would be things we controlled. But, you say, they could get out of control? Well, then we could pull the plug on them, just like God can pull the plug on us.
But anything that is not a logical self-contradiction is imaginable, so better safe than sorry. If I had a proof that AI or quantum computers could never replace God or replace us, it would still be a very good idea to remember that the proof could be wrong, so we need to keep on top of it. Maybe that's what the Abrahamic Deity was doing at The Tower of Babel – not taking any chances?
¶ +2024.03.17. Is it possible for humans to become so advanced that there are no limitations to our abilities and actions?
Imagine if one had asked this question to Archimedes, 2,500 years ago. He would probably have said humanity could keep becoming more and more advanced and that he was working on it. Would he have imagined the atomic bomb, the jet plane, personal computers and now AI and VR?
"No limitations" is a very dangerous way of thinking. Surely there are limitations, even pragmatic limitations, not just "philosophical" limitations, which include Gödel's "Incompleteness Theorem" for algorithmic systems and the 18th century British philosopher David Hume's argument that we can never understand the causes of things, only observe "constant conjunctions". And there are "advances" we can make but ought not to, yes?
Does the nature of "the good life" advance? Is our vision of "the good life" any more "advanced" than the wisdom of 2,500 years ago in The Book of Ecclesiastes in the Bible?
Then there is the issue of human and political advancement: Our wars today are far more advanced than 2,500 years ago. Now we have advanced to the point where we can destroy all human and higher-animal life on earth with hydrogen bombs, yes? Back to Archimedes; have we made progress in advancing progress?
[ Archimedes and the soldier ]
¶ +2024.03.17. Could it be that most technical advancements come from the spiritual world?
What is "the spiritual world"?
Is it some sort of spooky nonsense like charlatans use to dupe gullible people with astrology and such?
But the other side, "materialism" or whatever one wishes to call it, is just as nonsensical even though it presents itself as rigorous logic: I am talking here about the people, often with advanced degrees in "computer science", who tell us we are computers and that computers can be conscious and take over the world, if we just implant networked microchips inside our skulls with brain surgery to turn us all into zombies. Or maybe that we should live in Virtual Reality (watch the old fun but also profound and frightening movie "The Truman Show").
I do not wish to argue with either side. If you believe in spooks or if you believe humans are just computers, we disagree. Can we all live together in mutual respect as persons in social life? Isn't that what matters most? I vote for the wisdom in The Book of Ecclesiastes in the Bible (I do not "believe in God").
I propose the answer is "in the middle":
The source of human creative acts is unknowable. "No man knows from where the words come in the upsurge of meaning from the depths into the light" (quoting from imperfect memory; George Steiner, quoting Schiller, "After Babel", p. 147, if I remember correctly).
We cannot understand it but we can appreciate and nurture it. We can encourage each and every person's creativity. And the techno fanatics can make industrial robots to free up every person from having to do menial labor (The Abrahamic Deity's curse on us all for Adam eating a piece of fruit), so that every person will have maximum opportunity to be creative and to enjoy it.
What do you think?
[ Platonic education ]
¶ +2024.03.16. Is there a solution to every problem in the world? If so, how can we discover it?
I am not going to try to repeat the answers to the "philosophical problem" here. In mathematics there is Kurt Gödel's "Incompleteness Theorem". In worldly knowledge there is the 18th century British philosopher David Hume: we can never discover the causes of things but only observe "constant conjunctions".
But, practically, Dr. Jordan Peterson has eloquently said that it is easy for people to get themselves into situations for which there is no good solution. Ukraine and Israel today are very "good" examples. In these cases, "A good compromise is when both parties are dissatisfied" (Larry David, "Curb Your Enthusiasm")
But all sides must agree to disagree and to live and let live, which is not always possible: it is not possible to be tolerant of intolerant people. In Paris, France, a couple of years ago, a school teacher was decapitated by an Islamist religious fanatic for teaching freedom of expression in a middle school class, even after the teacher had urged anyone who felt they might take offense to leave the room before he began his presentation (look up Samuel Paty on the Internet).
What about problems like reparations for slavery? Abortion? The Roman Catholic Church used to burn people alive for "heresy" to save their immortal souls. Every war is people killing each other because each side believes it has the solution to the problem – just not the same solution as the other side.
Of course, whenever we can solve a problem, that's great. But often we need to learn to live with problems. A person has a fatal disease and there is no cure for it: there is no solution to their problem, is there?
There is a classic little book which is relevant here, Thomas Kuhn's "The structure of scientific revolutions", which is very readable even for non-scientists.
Live and let live. Be open minded. The physicist Niels Bohr advised his students, what I feel is very good advice here:
"Take every statement I make as a question not as an assertion."
What do you think about that – at the moment, that is, subject to the provisional nature of all human knowledge?