
Log of my (BMcC[18-11-46-503]) postings
Page #11

I (BMcC[18-11-46-503]) may have bitten off more than I can chew here. Logging each Quora posting greatly increases the pain and effort over just writing it and being done with it, which is what I had been sloppily doing for who knows how many months now. (I have automated this new process, but it's still not easy, since selecting the text in a Quora posting does not capture image information, etc.)

Don't follow the leader (except a firefighter in a burning building...); follow the audit trail. I must try harder to live up to my standards which, in living up to them, raise themselves and myself further up. Crescit eundo! ("It grows as it goes.")


+2024.03.16. Where can I get a bad AI, from, like, two or three or so years ago (or more), so I can make bad funny things that are badly funny and funnily bad, like the way AI used to be? (Question originally asked on Reddit, by me, if you're wondering)
+2024.03.16. Do you believe that technical trades, like precision machining and welding, require a combination of practical skills and creativity? Why or why not?
+2024.03.15. Will Neuralink make the world a dystopia? Why?
+2024.03.15. Are you worried about the fast-paced advancement of technology?
+2024.03.15. In the age of AI and automation, what skills do you think will remain irreplaceable by technology?
+2024.03.15. What are your ideas about how AI could develop empathy?
+2024.03.15. If I'm very interested in mathematics and problem solving, should I choose computer science as my major or should I pursue medicine for financial stability as well as job stability?
+2024.03.14. Is there a future for intellectualism beyond the strictures of academia? And how exactly can specialised knowledge be distributed more easily, so that it becomes accessible to "ordinary people", and eventually becomes common knowledge?
+2024.03.14. I am going to start studying computer science next year and my school has an artificial intelligence track where I take a few extra classes, does anyone think it would be worth it to take this track?
+2024.03.14. Would modern passenger airplanes still require pilots if highly advanced artificial intelligence systems were able to perform tasks such as flying and monitoring weather conditions as well as humans currently do?
+2024.03.13. What do you think of Tarform's vision to reinvent the way we move by developing awe-inspiring, sustainable, and technologically advanced vehicles that make mobility exhilarating and soulful?
+2024.03.13. How will AI advance without social media ?
+2024.03.13. Is it possible to create a life form that is more advanced than humans in terms of knowledge and intelligence?
+2024.03.11. Are today's AI models, such as ChatGPT, intelligent in any human sense?
+2024.03.11. Are there any mathematicians working on creating artificial intelligence that can outperform humans in solving mathematical problems?
+2024.03.09. What is the difference between an artist and a scientist?
+2024.03.09. Why do people believe intelligence can be increased indefinitely in machines giving indefinitely increased capability too? There isn't any evidence that there is no limit, so is it just wishful thinking?
+2024.03.09. How would society be affected if we had a powerful brain-computer interface that can store and retrieve information faster than the current internet?
+2024.03.09. Does having autism mean that I'm more focused on my thoughts rather than emotions? And if that's the case, does that make me like a robot?
+2024.03.09. Why is everybody scared of AI?
+2024.03.09. How concerned should we be about the potential of emerging technologies like AI to erode trust in our institutions?
+2024.03.08. What should I do now, what should I study, what should I learn that AI can't replace it?
+2024.03.08. Why is objectivity not part of the most modern leftist theories? Is objectivity hard to grasp for normies? Isn't this going to be the leading cause of social and economic dissolution?
+2024.03.07. How will AI develop empathy?
+2024.03.07. Is objective existence conditioned by subjectivity?
+2024.03.07. What role will humans play in a world where computers are able to do most of our work? Will there still be job opportunities for people in the future?
+2024.03.07. How far are technology like Neuralink from full virtual reality, or is it even possible?
+2024.03.07. My parents want me to pursue CSE but due to the recent AI developments I am feeling quite anxious about my future job opportunities. Will AI make coding obsolete in the future? Should I study biotech related subjects?
+2024.03.07. What do you think the future on earth would be in like the next 100 years? Especially with the recent development of AI... I can't even imagine! Comment your thought let's see if we'd achieve it even faster
+2024.03.07. Is it considered ethical for someone to assist with your final year project?
+2024.03.07. Can generative AI be used to personalize educational materials and experiences?
+2024.03.07. How would you create a ThoughtWare AI Bot that actually could or would eventually replace every psychologist and psychiatrist because the technology got better results than any human could in terms of successfully treating mental illnesses?
+2024.03.06. How will writing with AI abolish the personal touch of the writer?
+2024.03.06. I feel like I'm not programming in my present tech work, were I troubleshoot, configure servers, and test APIs. What can I learn from my job that will benefit me in the future for tech professions I can follow?
+2024.03.06. How can future AI systems be developed to be more explainable and transparent, and why is this important for fostering trust in AI?
+2024.03.06. Do you think humans can retain control over AI?
+2024.03.05. NVIDIA CEO Jensen Huang says AI is ending the era of teaching kids to code. Is it true that the AI replaces the coder?
+2024.03.05. How much longer until AI is able to program itself without human intervention?
+2024.03.05. How do you think China's ability to compete in AI technology, particularly in text-to-video models, is affected by trade restrictions on advanced chips and technology exports from the US?
+2024.03.05. How do we know the law of nature exists?
+2024.03.05. How important do you think it is for film editors and directors to embrace advancements like AI in cinema, as mentioned by Walter Murch?
+2024.03.05. What if the money we spent on AI would be used on human intelligence? Are there possibilities that we can also evolve?
+2024.03.04. Can anyone claim to be better than artificial intelligence? If yes, what makes them better? If no, what are the reasons for this?
+2024.03.04. Are there any websites that allow interaction with chatbots or artificial intelligence that have human-like qualities such as emotions and thoughts, similar to robots in movies?
+2024.03.04. Will robots be able to work as chefs in the future, similar to how humans do now?
+2024.03.03. Is it possible for artificial intelligence to possess the same level of creativity as humans, or will it always be limited in terms of originality and imagination?
+2024.03.03. How far do you imagine artificial intelligence will go?
+2024.03.02. I fear that AI in the next 20 years will replace programmers/software developers. Not now now it's not that good, but in 20 years it's definitely replacing us. Should I be worried about AI or not?
+2024.03.02. Is technology making human beings more or less empathetic?
+2024.03.01. Do you think Neuralink is rushing its animal testing?
+2024.03.01. With AI being capable of mimicking voices, scammers constantly having the upper hand, and the rampant spread of misinformation, how can we trust anything?
+2024.03.01. Is there a term for something beyond an artificial intelligence (AI) machine with advanced technology capabilities? If so, what is it?
+2024.02.29. Will the future be faster paced than today because of advancements in technology and artificial intelligence?
+2024.02.29. Will artificial intelligence ever be rational enough to make decisions that matter? Will AI be reasonable if humans disagree, and either listen to us and change the decision or be able to explain itself so that humans understand?
+2024.02.29. Can robots outperform humans in all medical fields when it comes to diagnosing and treating patients?
+2024.02.28. Are we on the brink of a technological singularity, and if so, what are the implications for humanity?
+2024.02.28. What is the meaning behind the term "AI"? What is the intention or message behind someone using this term?
+2024.02.28. What evidence supports the idea that humans are living in a simulation? What are some possible reasons for this belief?
+2024.02.28. What do you think should be done to ensure both incentives for creativity and also the freedom to copy material?
+2024.02.26. Do you think there is a lack of awareness among younger generations regarding ageism compared to other forms of equality and equity issues?
+2024.02.26. Is it possible that artificial intelligence (AI) will replace teachers?
+2024.02.26. How might artificial intelligence (AI) pose dangers and risks to humanity?
+2024.02.26. What is Artificial Consciousness?
+2024.02.26. How can we unlock the full potential of the human brain to enhance creativity, problem-solving, and overall cognitive abilities?
+2024.02.25. Why is cultural relativism not justifiable in ethics?
+2024.02.25. How do moral nihilists explain their participation in community service projects or charity work, given their belief that there are no moral truths?
+2024.02.25. What are the advantages and disadvantages of using AI teaching instead of human teacher teaching in the future?
+2024.02.24. What is the most important lesson you learned from AI?
+2024.02.23. Does artificial intelligence (AI) really think like a human being, or is this somewhat of a trick?
+2024.02.23. Is intelligence without morality undesirable, especially in the brightest?
+2024.02.23. What do you think would be the impact of AI enthusiasm on the broader market?
+2024.02.23. In a future with conscious AIs mirroring human emotions, should we reassess personhood criteria, thus granting these entities equal moral and legal status?
+2024.02.23. Do you think that virtual reality will make it possible for everyone to explore different parts of our planet in the future?
+2024.02.23. Could artificial intelligence replace central intelligence agencies?
+2024.02.22. Do you agree that deepfakes will be indistinguishable from reality as early as 2024?
+2024.02.22. How do you think the success of controlling a computer mouse with thoughts could impact research in fields outside of neuroscience, such as human-computer interaction or artificial intelligence?
+2024.02.22. What challenges do you think may arise with the implementation of autonomous sidewalk robots for food delivery in urban areas like Tokyo?
+2024.02.21. Do you believe that Neuralink's rapid progress in developing brain-computer interface implants raises any ethical concerns or safety considerations?
+2024.02.21. In leadership, is it better to be the smartest in the room or to elevate the collective intelligence of the team?
+2024.02.20. What can humans do that computers can not?
+2024.02.20. How can the categorical imperative be applied to determine what is morally right and wrong in regards to music, art, and movies?
+2024.02.20. What is the concept of acculturation, enculturation ethnocentrism, and cultural relativism?
+2024.02.20. Can you provide some examples of anthropic bias?
+2024.02.19. What's your perspective on the ethical implications of creating artificial life forms?
+2024.02.19. How, for example, can we tell the difference between a case in which an event is a genuine violation–assuming that some sense can be made of this notion–and one that conforms to some natural law that is unknown to us?
+2024.02.18. How does the brain know when we need metacognition?
+2024.02.18. What factors influence whether someone's intelligence will be used for good or bad purposes?
+2024.02.17. How effective do you think watermarking or embedding metadata will be in identifying AI-generated content or certifying its origin?
+2024.02.17. If AI robots replace human workers where are the billionaires and corporations funding the research and development of AI going to find the customers they need to buy their products?
+2024.02.17. How can we ensure ethical and responsible development of AI, mitigating potential biases and harms?
+2024.02.17. How can individuals contribute to their community? What are the benefits of helping others in your community?
+2024.02.17. How can we reconcile the tension between the pursuit of objective truth and the subjective nature of human perception and experience in academic inquiry?
+2024.02.16. Why are people getting so lazy on Quora that there are countless clearly AI generated answers?
+2024.02.16. Can artificial intelligence write creative copy like a human writer?
+2024.02.16. Do you think generative AI will eventually stop humanity?
+2024.02.16. What are some effective ways for a president to communicate with the public without relying on teleprompters or notes? How can they effectively convey their message and persuade the public when speaking spontaneously on important matters?
+2024.02.15. Can AI take the role of humans completely?


+2024.03.16. Where can I get a bad AI, from, like, two or three or so years ago (or more), so I can make bad funny things that are badly funny and funnily bad, like the way AI used to be? (Question originally asked on Reddit, by me, if you're wondering)

"Bad AI"?

Do you mean like when I recently asked the Bing AI why the mountain K2 is called "K2"? It gave me back what looks like the correct answer, and then it added that "Everest" is another name for K2. When I typed back that that was an error, the Bing AI thanked me for correcting its mistake and then repeated the mistake again!

Won't AIs always do "bad" things, i.e., occasionally output errors and nonsense, because they are not intelligent but just compute? But you are right: as time goes on, the computer programmers keep improving the AIs (as well as the AIs themselves processing ever more information...) to enable them to handle more questions better.

Let me end with what the Bing AI recently replied to me when I asked it about intelligence:

"Today's AI is nowhere close to being intelligent, let alone conscious. Even the most impressive deep neural networks, such as DeepMind's AlphaZero or large language models like OpenAI's GPT-3, lack consciousness. They operate purely based on patterns and statistical correlations, devoid of subjective experience1. These AI systems can perform remarkable tasks, from playing games to writing essays, but they lack self-awareness or inner life."

AIs just compute. They are neither intelligent nor stupid, nor do they have "common sense". So they will always at times produce "bad" outputs. But we (you...) will have to work ever harder to come up with questions they are not yet programmed to handle. I don't know about the AI companies, but if I owned one, I'd hire a stand-up comedian who writes his (her, other's) own jokes to try to "break" the AI.

Does this help?

[ THINK ]

+2024.03.16. Do you believe that technical trades, like precision machining and welding, require a combination of practical skills and creativity? Why or why not?

Yes!

Mastery in all skilled activities requires both knowledge and imagination. A person cannot make something unless they have the skill to do it. And a person cannot solve difficult problems or come up with new ideas without imagination.

If a person is only doing routine things, then they do not need to be creative, but then they can be replaced by a computer (robot, etc.), too. There are always puzzling problems for which imagination is needed: "Oh, now I see what's wrong!" At the other extreme, persons can have merely idle fantasies, such as intergalactic time travel, which are entirely removed from skill.

Some time ago on Public Television, the scientist Jacob Bronowski had a series, "The Ascent of Man", whose tag line was that it was the hand and the mind working together that made "the ascent of man": not just hard work ("hands") and not just ideas ("minds") but the synergy of the two.

People distinguish crafts, skills, arts, sciences and whatever else as if they were each different. But really they are all just different emphases of the same "thing": human activity, intentional action.

Computers (including "Artificial intelligence", which does not have intelligence but is just massive superfast computation) just compute. Persons have objectives. It's not "binary". It's "gray scale": Each activity at a given time needing differing amounts and flavors of skill and creativity.

Even in something as abstruse as subatomic physics, skill as well as creativity is required to design and actually build a particle accelerator, e.g., the Large Hadron Collider at CERN. On the other hand, there is a lot of just routine toil which requires differing amounts of skill but no creativity: industrial robots should do all that, freeing up the persons to be both skilled and creative.

+2024.03.15. Will Neuralink make the world a dystopia? Why?

Dark, indeed!

Do you want to undergo brain surgery to have a networked microchip stuck into your brain after they cut a hole in your skull? What if the surgery goes wrong and you are brain dead? What if somebody is controlling you through the chip? What are you looking for here?

The only ethical use for implanting a chip in a person's brain might be for severely neurologically impaired persons, such as those with advanced ALS.

+2024.03.15. Are you worried about the fast-paced advancement of technology?

Yes, I am concerned about "the fast-paced advancement of technology".

All the to-do about AI ("artificial intelligence") has me especially concerned, because a lot of people with economic power seem enthusiastic about making a lot of money out of it. Perhaps I can calm down some of that enthusiasm with the answer I got from the Bing AI when I asked it about AI having consciousness, which seems to be the goal of some of the enthusiasts:

"Today's AI is nowhere close to being intelligent, let alone conscious. Even the most impressive deep neural networks, such as DeepMind's AlphaZero or large language models like OpenAI's GPT-3, lack consciousness. They operate purely based on patterns and statistical correlations, devoid of subjective experience. These AI systems can perform remarkable tasks, from playing games to writing essays, but they lack self-awareness or inner life."

The concern is that technology is just learning how to do things. What I do not see advancing so much is wisdom in deciding what to do. There are a lot of things we can do but should not do; I am especially concerned about people who want to surgically implant networked microchips in our brains to "augment" us, which will more likely mean turning us into zombies they can control (except for those for whom the surgery goes wrong and leaves them brain dead).

But next after "AI" comes something far more dangerous than AI "VR": Virtual Reality. That's where persons literally go out of our minds into a fake world. Watch the old fun but also profound movie "the Truman Show".

[ VRman ]

("Anybody home?")

My virtual reality experiment: I was driving up a six-lane superhighway early one August afternoon, in clear bright sunlight, at about 65 miles per hour, in my clunky Toyota Corolla DX, with no other cars on the road. I decided to look intently at the little image in the car's rear-view mirror -- no high-tech apparatus. I really really really really intently focused all my attention on that little image! It was entirely convincing. That "little" image became my whole experienced reality: I was driving where I had been, not where the automobile was going. Fortunately I "snapped out of it" in time to avoid becoming a one-car crash in the ditch on the right side of the road. (It was a very good place to have conducted this experiment, because there was a police barracks, a teaching hospital, and both Christian and Jewish cemeteries nearby, just in case.)

You may try to repeat my virtual reality experiment at your own risk; I strongly advise you against doing so. I assure you: It worked. (Of course it will not work if you don't "give in to it", just like a video game won't work if you just look at the pixels as what some computer programmer coded up with branching instructions depending on what inputs you enter.) Moral of this story: VIRTUAL REALITY CAN KILL YOU. Forewarned is forearmed.

My opinion? We should use technological advancements to improve our real, daily, personal lives: all the things we do and have always done as humans, not some sci-fi geewhiztopia. The goal, I propose, can be found in a 2,500 year old text: The Book of Ecclesiastes in the Bible, which contains a lot of WISDOM (no religious belief needed).

"Leisure has been, and always will be, the first foundation of any culture.... in our bourgeois Western world total labor has vanquished leisure. Unless we regain the art of silence and insight, the ability for nonactivity, unless we substitute true leisure for our hectic amusements, we will destroy our culture – and ourselves." (Josef Pieper)

[ Platonic education ]

+2024.03.15. In the age of AI and automation, what skills do you think will remain irreplaceable by technology?

Short answer: "people skills"

Take the very word "technology": techno-logy = the logic of techne: knowing how to do things.

The bigger issue is knowing what is worth doing and why and what should not be done and why not. Technology, knowing how to do things, can never have anything to say about what to do with it. All technology is like a gun or a printing press. The gun can be used to murder somebody or to save them from being mauled to death by a wild man-eating animal that escaped from the zoo. The printing press can be used to let everybody read The Bible and Plato's dialogues, or to spread "conspiracy theory" rumors.

People skills include being a gracious host for an intimate dinner (or other intimacy). "The good life", as that phrase has been used for millennia. Nursing (caring, not just changing dressings). Teaching (mentoring, not just instructing). The human touch.

People need to "get over" enthusiasm for AI. It's just "technology" techno logy = logic of techne: knowing how to do things. Knowing how is extremely important, especially for a doctor to save the life of a patient, etc. But knowing-what-for is why knowing-how-to is important.

I have two specific recommendations here: (1) Everybody should read The Book of Ecclesiastes in the Bible, even if they don't believe in any Deity, because in very simple language it sets forth the wisdom of what-for. (2) Part of getting a degree in "computer science" should be to do a practicum as an orderly in a hospice, so the student gets the bodily fluids of dying persons on their hands, to get some sense that they are "human", not just computational.

"Leisure has been, and always will be, the first foundation of any culture.... in our bourgeois Western world total labor has vanquished leisure. Unless we regain the art of silence and insight, the ability for nonactivity, unless we substitute true leisure for our hectic amusements, we will destroy our culture – and ourselves." (Josef Pieper)

[ Platonic education ]

+2024.03.15. What are your ideas about how AI could develop empathy?

My idea is that AI cannot ever develop empathy, because "artificial intelligence" is not intelligence (like an "A" student has in school) nor is it stupidity (like a "D-" student), because it is not conscious: it just computes. In movies we see computers that are persons, for example, HAL in Stanley Kubrick's great film 2001, but that's just fantasy, and we can show anything that's not a self-contradiction in a movie, like you can imagine anything in a dream.

[ Marcello Mastroianni flying at start of 8½ ]

("I'm flying!")

But that said, I got into an argument with an AI person a couple of days ago. I'm tired of trying to argue with these people. The present question is in "Trauma stories". Great place for it!

When I got out of bed this morning, my wife told me about someone she knows who was the object of a scam yesterday. The scammer called this person, call them "Z" – the scammer told Z that Z's daughter had been in a terrible automobile accident and blah, blah, blah.... It was very "real", including the daughter's voice crying for help. Well, where did that voice come from? AI. Trauma story, right?

Before paying the $7,800 bail bond to get the daughter out of the county jail, Z managed to call the daughter who was in California and the daughter answered the phone safe and sound.

So I'm not going to argue about whether AI can have emotions or not. I believe, with good philosophical reasons, that it cannot, but now I'm going to say: let's act on the assumption that AI can do all the geewhiz things the AI loonies are all ants-in-their-pants about, and BEWARE!

Like in a war, I'm going to retreat to a secure defense line: "We", we fragile, mortal humans who don't understand AI (I worked for half a century as a computer programmer, so I'm one!), need to retreat to what I will call the "Book of Ecclesiastes line" – that little story in the Bible (I am not a believer – I am an anti-theist: I think if God exists, God is a criminal, or maybe the Big AI in the Sky). There is an enormous amount of wisdom in that little story: we need to get together and focus on building, defending and enjoying community with each other, and F*** the AI.

Now, I am not a "Luddite". AI can be a really great TOOL for us to USE to make our lies better. It's like guns. They can make our lives better, too. Suppose I lived in a city where there was a huge rat problem and a rabid rat came to attack me. Would I like to have a semi-automatic rifle in my hand to kill the rat before the rat bit me? You bet!

Most uses of AI can be helpful, whether it ever has emotions or not. But if 10% are bad, as you-may-know-who said about some people in a different context: one of them is one too many.

So let's all focus on a different kind of utopia: not an intergalactic technogeewhiz of neuralink nuts hacking into our skulls to implant networked computer chips to "enhance" us into zombies for them to control, but the "good life" of the Book of Ecclesiastes in the Bible. And what can AI contribute here? All the AI mavens can dedicate themselves to making industrial robots to annul the Abrahamic Deity's curse on us all for Adam eating a piece of fruit, or, as Aristotle said 2,500 years ago: if machines could do all the scut work of life, we would not need slaves (wage-slaves, employees, giggers....). Also: watch the old fun but also profound movie "The Truman Show".

Let's get over enthusiasm for technogimmicks and make the technology help us all have the good life of leisured dinners with a few good friends and good wine and bread. As for the technonuts: part of getting a degree in "computer science" should be having to do a practicum as an orderly in a hospice, to get the bodily fluids of dying persons on their hands and maybe realize that they are "human".

[ Platonic education ]

"Leisure has been, and always will be, the first foundation of any culture.... in our bourgeois Western world total labor has vanquished leisure. Unless we regain the art of silence and insight, the ability for nonactivity, unless we substitute true leisure for our hectic amusements, we will destroy our culture – and ourselves." (Josef Pieper)

+2024.03.15. If I'm very interested in mathematics and problem solving, should I choose computer science as my major or should I pursue medicine for financial stability as well as job stability?

Do you think (and feel!) you would be good at medicine?

If yes, it's definitely the better way to go. America and probably all other countries will need ever more doctors and nurses, and medical professionals are really doing social good helping people.

"Computer science", unless you are "bright" enough to be a google researcher or something like that, is a recipe for mostly meaningless jobs doing tedious but also difficult work. The "half life" of what you learn is generally very short. "Agile" and "scrum" managerial technologies are stifling.

I started as a computer programmer in 1972, and back then I found it interesting. I wrote IBM System/370 Assembly Language and COBOL "batch" programs for big mainframes, on punch cards. By the 2010s everything had changed to having to try to do tricks with undocumented, inscrutable APIs. I could not understand Angular well, and something called Django was much worse than that. I got PTSD from it. And that was before the current "artificial intelligence" stuff, so I don't know what it's like now.

But medicine is solid scientific knowledge which helps persons with their real problems in living (or helps them become even healthier).

If not medicine, maybe go to law school? One hears about the people who have glamorous work in "computer science" (is it really science or is it part mathematics and part code hacking?), but most computer jobs are not like that.

+2024.03.14. Is there a future for intellectualism beyond the strictures of academia? And how exactly can specialised knowledge be distributed more easily, so that it becomes accessible to "ordinary people", and eventually becomes common knowledge?

Isn't "intellectualism" at risk even in academia?

I read that many humanities majors in colleges are being dropped and departments closed. And some of what's left is not liberal learning but partisan advocacy: all the ____ Studies departments (Black, Women's, etc.).

Computer "science" and business administration are "hot", aren't they? And there is reason for this since the job prospects for humanities majors are not great, while many students, no matter what they study, are burdened with large to crushing student loan debt. Which do you want to go in debt for? an MBA or Plato?

I prefer a phrase such as "liberal learning" to "intellectualism". The very word "intellectual" is not exactly a compliment, is it? Sometime Alabama Governor George Wallace defined an intellectual as a person who couldn't even ride a bicycle straight. And Ronald Reagan once defined an economist as a man with a watch chain with a Phi Beta Kappa key on one end and no watch on the other end, i.e.: a fool.

And "intellectuals" have contributed to this. I am think here especially of the "postmodernists" whose goal seems to be to be incomprehensible and to flaunt it. Prof. Noam Chomsky has said that postmodern theory is commonplaces packaged as willful nonsense.

I myself feel I am "semiliterate", since I do not have a solid classics education and do not know Latin or ancient Greek, etc. I have a neighbor who is a French lady in her late 80s; she said her father knew Greek and Latin, and what was his career? Not in academia but the military.

Then we have social media and cellphones and all the rest, which do not encourage liberal learning, do they? TikTok. HBO. The NFL....

I am not hopeful. I am concerned that things are going to get much worse, not just with "artificial intelligence" but with Virtual Reality, which will use AI to literally take people "out of their minds". Watch the old fun but also profound movie "The Truman Show".

Two parting things to think about: (1) Consider my Virtual Reality experiment, below, and (2) Even if you do not believe in any Deity, study The Book of Ecclesiastes in the Bible.

Good luck!

My virtual reality experiment: I was driving up a six-lane superhighway early one August afternoon, in clear bright sunlight, at about 65 miles per hour, in my clunky Toyota Corolla DX, with no other cars on the road. I decided to look intently at the little image in the car's rear-view mirror -- no high-tech apparatus. I really really really really intently focused all my attention on that little image! It was entirely convincing. That "little" image became my whole experienced reality: I was driving where I had been, not where the automobile was going. Fortunately I "snapped out of it" in time to avoid becoming a one-car crash in the ditch on the right side of the road. (It was a very good place to have conducted this experiment, because there was a police barracks, a teaching hospital, and both Christian and Jewish cemeteries nearby, just in case.)

You may try to repeat my virtual reality experiment at your own risk; I strongly advise you against doing so. I assure you: It worked. (Of course it will not work if you don't "give in to it", just like a video game won't work if you just look at the pixels as what some computer programmer coded up with branching instructions depending on what inputs you enter.) Moral of this story: VIRTUAL REALITY CAN KILL YOU. Forewarned is forearmed.

+2024.03.14. I am going to start studying computer science next year and my school has an artificial intelligence track where I take a few extra classes, does anyone think it would be worth it to take this track?

The more you learn the better, and "artificial intelligence" is a big thing today so it would seem a good idea to study it in addition to everything else.

But has anybody urged you to study the ethics and social implications of computing in general and "artificial intelligence" in particular? I would urge you to read MIT Prof. of Computer Science Joseph Weizenbaum's classic book: "Computer power and human reason: from judgment to calculation" (WH Freeman, 1976; but it's as timely today as when it was originally published!) A lot of people learn how to do things with computers, but not so many appreciate what is worth doing and why, and sometimes even more important: what should not be done and why not. Computers, including "artificial intelligence" (which is not intelligent like a human but just very powerful computing...) are TOOLS for us to use to enhance our LIVING.

Also, something to watch out for: Virtual Reality (VR). People are also excited about this and my guess is it will be an even bigger "thing" than AI which, of course, it will use.

VR will be very powerful and very dangerous, too, so you should be aware of it. Watch the old fun but also profound movie "The Truman Show". VR can literally "take you out of your mind": make you lose your sanity.

Then there are people who also want to implant networked microchips in all our brains, which they call "augmenting" us, but which I am concerned will turn everybody (me? you?) into zombies, except for those for whom the surgery does not go right, who (me? you?) may be left brain dead.

My virtual reality experiment: I was driving up a six-lane superhighway early one August afternoon, in clear bright sunlight, at about 65 miles per hour, in my clunky Toyota Corolla DX, with no other cars on the road. I decided to look intently at the little image in the car's rear-view mirror -- no high-tech apparatus. I really really really really intently focused all my attention on that little image! It was entirely convincing. That "little" image became my whole experienced reality: I was driving where I had been, not where the automobile was going. Fortunately I "snapped out of it" in time to avoid becoming a one-car crash in the ditch on the right side of the road. (It was a very good place to have conducted this experiment, because there was a police barracks, a teaching hospital, and both Christian and Jewish cemeteries nearby, just in case.)

You may try to repeat my virtual reality experiment at your own risk; I strongly advise you against doing so. I assure you: It worked. (Of course it will not work if you don't "give in to it", just like a video game won't work if you just look at the pixels as what some computer programmer coded up with branching instructions depending on what inputs you enter.) Moral of this story: VIRTUAL REALITY CAN KILL YOU. Forewarned is forearmed.

[ THINK ]

+2024.03.14. Would modern passenger airplanes still require pilots if highly advanced artificial intelligence systems were able to perform tasks such as flying and monitoring weather conditions as well as humans currently do?

Ask a commercial airline pilot this question. I'm not one.

But my impression is that modern passenger planes already can or easily could be modified to fly without pilot intervention.

Two problems: (1) What if something goes wrong? The person asking this question would really like the Air Disasters shows on The Smithsonian Channel.

(2) How are pilots to be trained – how do they develop the expertise – to handle potentially catastrophic emergencies, if they never do anything under normal conditions? "Flying hours" does not mean sitting in the pilot's seat, but actually doing all the activities involved in flying.

Even if the planes can take off and land themselves by automation, it's a very good idea for pilots to manually do takeoffs and landings to gain and retain "experience" ("use it or lose it"). Even when there is no problem, each landing or takeoff will have its individual attributes to build the pilot's skills, so that hopefully, when he (she, other) has to try to land the plane with an engine on fire or broken landing gear or who knows what, he will be accustomed enough to flying to apply knowledge to the moment, not wonder what all the controls do. Look up "Miracle on the Hudson", US Airways Flight 1549, on the Internet!

There will be emergencies, and the automation cannot handle them all, especially if the problem is in the automation itself, yes?

Persons need to remain in control of the automation and monitor it, no matter how "good" the automation becomes, because automation, including "artificial intelligence", is just computing, like an adding machine on steroids: it has no "common sense". "Artificial intelligence" is not intelligent, nor is it stupid: it just meaninglessly computes.

I recently asked the Bing AI why the mountain K2 is called "K2". It did its database lookup and outputted what looks like the correct answer. But it added that "Everest" is another name for K2. I typed in this was an error and the AI thanked me for correcting it and then repeated the error. The AI could have been flying a jumbo jet with 500 passengers, not answering my little question. And, yes, the computer programmers can fix this problem, but then there will be the next one....

[ THINK ]

+2024.03.13. What do you think of Tarform's vision to reinvent the way we move by developing awe-inspiring, sustainable, and technologically advanced vehicles that make mobility exhilarating and soulful?

I knew nothing about this, so I looked up Tarform on the Internet. They are going to sell very fancy electric motorcycles.

Who needs any kind of motorcycle? I once had a famous philosophy professor in college who rode a Harley and piloted his own private F8F Bearcat fighter plane, but that was back in the 1960s. And in the early 1970s I had a coworker who was a very responsible motorcycle rider, who even came to work one day in a drenching downpour, safe and dry in his navy surplus wet suit.

Why should we be encouraging motorcycles today, with global overheating and overpopulation? And if a person is "into" motorcycles, is an electric going to "turn them on"? Or would it have to be either a Harley or a Ducati (I actually saw one of the latter a few months ago, with its super high-tech desmodromic valve engine)?

Tarform is ridiculous conspicuous consumption. A super luxury toy for some young fops. It won't even go "Vroom, Vroom!", will it?

Some persons like to do dangerous and/or foolish things, especially if they are rich enough that they don't have to work at a job and have "money to burn". Now I'm not expecting that kind of person to find satisfaction in living by doing something socially constructive.

There is nothing "soulful" about this. Soulful? Even if you do not believe in any Deity, study the wisdom in the Book of Ecclesiastes in the Bible, which could alternatively be titled: "Been there, done that".

+2024.03.13. How will AI advance without social media ?

Strange question?

Social media are pushing AI, aren't they?

Is there any analogy here to asking how all the sciences advanced from Hellenistic times up until the late 20th century, "without social media"? How did quantum physics (or Newtonian physics, before that) advance without social media? Read Thomas Kuhn's classic little book: "The structure of scientific revolutions".

One might also ask not just how do social media advance things, but also: how do social media cause trouble?

What do you think?

+2024.03.13. Is it possible to create a life form that is more advanced than humans in terms of knowledge and intelligence?

This is a sci-fi fantasy, isn't it, Dr. Frankenstein, sorry, typographical error: I meant: Mr. Musk?

Anything that's not a self-contradiction is imaginable. And we note that life forms more advanced than at least most humans in terms of knowledge and intelligence are already occasionally made, by copulation or artificial insemination: geniuses.

But it is highly unlikely, if not impossible, that anybody will make some superhuman form of life by computer programming, because computers do not know anything. They do not think. They do not have feelings. They just compute: they produce output from input by electrically executing the computer instructions that human computer programmers have designed and implemented in them. HAL in Stanley Kubrick's classic film 2001 is just a fiction.

Anything anybody tries to do with computers can be more "advanced" than humans only metaphorically, as a simulation. Here's an analogy: world champion Ryan Crouser's record for the shot put is only 23.38 meters, whereas an ICBM can throw an object weighing several tons into earth orbit. The ICBM is stronger than any human. Likewise, a computer can generate the decimal expansion of Pi faster than any human.

Suppose we somehow did create "a life form that is more advanced than humans in terms of knowledge and intelligence"? Alan Turing said that if we ever did create a computer that really thinks "we shan't understand how it does it". So in terms of "computer science" and all other human knowledge, it would be as incomprehensible as how copulation produced Albert Einstein and the General theory of Relativity. That's not helpful, is it?

Since we could not understand it, we could not use it for things like planning a trip to Mars, because we could not be sure what it would do. I recently asked the Bing AI why the mountain K2 is called "K2". It output what looks to me like the correct answer (Got that? "looks to me": not computation but common sense, which is not algorithmic!). But then the Bing AI added that "Everest" is another name for K2. I typed in that that was an error. It output thanks for my reporting the error and then it repeated the error back to me again. This is the kind of thing that any "artificial intelligence" we are likely to produce will always do, in random ways, because it has no intelligence (nor any stupidity, either): everything in it is just character strings without meaning; it just computes. Read about "Eliza" in MIT Computer Science Prof. Joseph Weizenbaum's classic book: "Computer power and human reason: from judgment to calculation" (WH Freeman, 1976) — great book!
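
To make "character strings without meaning" concrete, here is a minimal sketch in Python, in the spirit of Eliza-style pattern matching. The rules are hypothetical ones of my own, not Weizenbaum's actual script (the real Eliza also swapped pronouns, which this sketch deliberately does not):

    import re

    # A few hypothetical Eliza-style rules: if the input matches a pattern,
    # echo parts of it back inside a canned template. No understanding anywhere.
    RULES = [
        (r"\bI need (.+)", "Why do you need {0}?"),
        (r"\bI am (.+)", "How long have you been {0}?"),
        (r"\bmother\b", "Tell me more about your family."),
    ]

    def reply(text: str) -> str:
        for pattern, template in RULES:
            match = re.search(pattern, text, re.IGNORECASE)
            if match:
                return template.format(*match.groups())
        return "Please go on."  # default when nothing matches

    print(reply("I am sad about my job"))
    # Prints: How long have you been sad about my job?
    # Note "my job", not "your job": the program shuffles strings;
    # it does not know what any of them mean.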

What we can constructively be doing is producing industrial robots that will do all the "scut work" of life, thus freeing us humans up to devote ourselves to more honorifically human endeavors such as creativity in the arts and sciences, and enjoying living. No computer can enjoy a glass of wine, nor feel pain from a cancer tumor....

2,500 years ago, Aristotle said that if machines could do all the work, we would not need slaves (or wage-slaves aka employees). And what might that utopia look like? Read all about it in The Book of Ecclesiastes in the Bible, which contains a lot of wisdom even if you do not believe in any Deity.

[ THINK ]

+2024.03.11. Are today's AI models, such as ChatGPT, intelligent in any human sense?

No. They are just computer programs. They are neither intelligent nor stupid in a human sense; they have no "sense". They just compute.

I have been playing recently with the Bing AI. I asked it why the mountain K2 is called "K2". The program looked this up in its huge database and returned to me what looked like the correct answer.

But it also added (without any provocation from me!) that "Everest" was another name for K2. When I typed in that this was an error, it thanked me for correcting it and then repeated the error. I seem to recall something like this occurred when Google originally announced its AI but I do not remember the details.

AI doesn't know "what it it doing". It doesn't know anything. It is not conscious. It just follows the computer programming that some humans, be they intelligent or foolish, have coded and implemented. The computer programmers know what it is doing, or, as has often been seen with computer programming, even they don't know all the consequences of what they have done and then they (or different humans: "maintenance programmers") have to find and fix the mistakes (which are called: "bugs"). Humans need to do what the AI's they cook up cannot do:

[ THINK ]

+2024.03.11. Are there any mathematicians working on creating artificial intelligence that can outperform humans in solving mathematical problems?

I am not a mathematician.

But I read something somewhere that seems to me relevant here. First, let's all stop using the very misleading term "artificial intelligence". The only intelligence is in humans, for instance the mathematician Sir Andrew Wiles, who proved Fermat's Last Theorem. Prof. Noam Chomsky says there is nothing intellectually interesting about "artificial intelligence".

So let's instead speak of very fast computers executing computer programs designed and implemented by very smart persons. What did I read? I read that the proof of the 4-color map theorem involves so much case-checking that a computer was needed to complete it. So computers are already outperforming humans in solving mathematical problems.

But note how they are outperforming them: as a TOOL, as an AIDE, not as intelligence and insight, which computers do not have.

It's like chess. Deep Blue beat the world champion chess player in a game of chess. But the human was playing chess. The computer was just computing, not playing chess or doing anything humans do.

Doesn't Gödel's Incompleteness Theorem prove that there are absolute limits to what computers can do?

Computers are not mathematicians. But they can compute a lot faster than humans can, just like an ICBM can launch a big bomb into earth orbit whereas world champion Ryan Crouser's record for the shot put is only 23.38 meters.

[ THINK ]

+2024.03.09. What is the difference between an artist and a scientist?

(This question previously had details. They are now in a comment.)

Maybe it's not either/or, but "gray scale".

Both great scientists and great artists are creative, thinking up ideas nobody else had before. But the innovations scientists create characterize aspects of the real world and provide predictive power, whereas what artists do is commentary on the world. Is that a fair distinction?

Could we consider something like Einstein's "General theory of relativity" to also be art?

And what about engineers? Isn't what they do often "in between", having both practical effects and beauty? Isn't, for example, the SR-71 spy plane also beautiful? Or the New York Citicorp building? Read on the Internet about the engineering problem with this building that was discovered after it was completed:

William LeMessurier - The Fifty-Nine-Story Crisis: A Lesson in Professional Behavior

https://onlineethics.org/cases/engineers-and-scientists-behaving-well/william-lemessurier-fifty-nine-story-crisis-lesson

Doesn't that combine art and science?

And are all sciences the same? Aren't geology and particle physics rather different, not just in their objects but in the activities of doing them? Also, what about "crafts"? A master potter can create coffee cups which are beautiful but also functional, or a weaver can make an innovative pattern design, for example. In his Public Television series now some decades ago, the scientist Jacob Bronowski said it was the mind and hand of man working together that had made "the ascent of man". There is even a German word for this, "Fingerspitzengefühl": knowledge in one's fingertips.

We need to study and appreciate the human process of INSIGHT, in all its flavors. Where do new ideas come from? "No man knows from where the words come in the upsurge of meaning from the depths into the light" (quoting from imperfect memory: George Steiner, quoting Schiller, "After Babel", p. 147, if I remember correctly). All creativity is a mystery.

+2024.03.09. Why do people believe intelligence can be increased indefinitely in machines giving indefinitely increased capability too? There isn't any evidence that there is no limit, so is it just wishful thinking?

"These people" are most likely confusing algorithmic processing power which computers have, with intelligence and insight which persons have, or at least can have, since persons sometimes are foolish and "stupid", but computers cannot be either foolish or "stupid", not intelligent: computers just run algorithmic processes which humans use their intelligence (and their foolishness, too) to design and implement.

An analogy is physical strength. A machine can be far more powerful than a person. The person has little muscles; the machine may have a huge gas turbine or other power source. So too, a computer can compute faster than a person. Compute the first 100,000 digits of the decimal expansion of the number Pi: which will finish first, a person or a computer? But no computer could ever have "thought up" Pi itself.
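
Here is a sketch of that Pi race in Python, using the mpmath arbitrary-precision library (my choice of library for illustration, assuming it is installed; any arbitrary-precision package would do):

    from mpmath import mp

    # Work to 100,000 decimal digits of precision and print Pi.
    # A computer does this in roughly a second; a person with pencil
    # and paper would need years, if they ever finished at all.
    mp.dps = 100000
    print(mp.nstr(mp.pi, 100000))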

The limit? The mathematician Kurt Gödel proved that every sufficiently powerful algorithmic system, which would include every computer program, either cannot decide the truth or falsity of some propositions or else is internally inconsistent. Computation has an absolute limit, albeit not one that affects anything we are doing today, so far as I know.
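
One standard way of stating Gödel's first incompleteness theorem (my paraphrase of the usual textbook statement, in LaTeX notation, not a quotation from Gödel):

    \text{If } F \text{ is a consistent, effectively axiomatized formal system that}
    \text{can express elementary arithmetic, then there is a sentence } G_F
    \text{ such that } F \nvdash G_F \text{ and } F \nvdash \neg G_F.

That is, F can neither prove nor refute G_F, even though, standing outside F, one can see that G_F is true.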

A big problem is when persons imagine that persons and computers are the same, as in movies such as Stanley Kubrick's 2001, where HAL was a person. But that's just fantasy. We can imagine anything, but that does not make it possible. When persons make this kind of mistake, the consequences can be very bad. Maybe they think that since computers are "smarter" (read: faster), computers should rule the world? But that would not be something computers would do: it would be something persons caused computers to do.

We humans need to keep clear "who's boss". We are (or should be, yes?). Each of us living our life is the ultimate horizon in which everything finds its place, including "artificial intelligence", which is not intelligent but just computations: we are the intelligences (or the stupidities).

So computers can keep getting faster and faster. But what matters is the computer programmers (and all the rest of us, whatever we do) living our lives. Whether or not you believe in a God, read the Book of Ecclesiastes in the Bible. It has a lot of wisdom in it, not just engineering knowledge.

[ Plato's academy ]


+2024.03.09. How would society be affected if we had a powerful brain-computer interface that can store and retrieve information faster than the current internet?

Everyone should be terrified of this idea, which would make every person monitored and controlled by an external government or corporation. It would turn us all into zombies. Not to mention the risks of brain surgery, which, like all surgeries, would sometimes go wrong, destroying the person's mind.

The only reasonable justification for "brain-computer interfaces" would be for severely neurologically damaged persons, such as ALS victims, where the benefit of restoring their ability to control their voluntary muscles would outweigh the risks of the surgery to implant a chip in their brain.

But for healthy persons this would be even more dangerous than locking them up in a concentration camp, which would be the effect, wouldn't it?

Some persons seem to think this kind of sci-fi geekish fantasy is "cool". They need to, to borrow a phrase from Larry David, curb their enthusiasm, and become socially responsible. Don't you fear this kind of dystopian fantasy would destroy humanity? Isn't the Internet already fast enough?

The history of science and technology of the post-war [post-1945] era is filled with examples of reckless and unreflective "progress" which, while beneficial or at least profitable to some in the short run, may yet devastate much life on this planet. Perhaps it is too much to hope, but I hope nonetheless that as our discipline matures our practitioners will mature also, that all of us will begin to think about what we are actually doing and ponder whether, whatever it is, it is what those who follow after us would want us to have done. (Joseph Weizenbaum, Professor of Computer Science, MIT)

Cherish all the potential of your natural life with others. Even if you do not believe in a God, read the Book of Ecclesiastes in the Bible; it contains a lot of wisdom for living.

[ THINK ]

+2024.03.09. Does having autism mean that I'm more focused on my thoughts rather than emotions? And if that's the case, does that make me like a robot?

I am not an expert on autism.

But one thing is certain: no human person is like a robot. Robots do not have thoughts (or feelings). Robots do not experience anything. Robots just execute the instructions of "programs" that humans code to determine what they do.

Some persons who do not understand anything about autism may believe that severely autistic persons are like robots, because such persons may do repetitive actions. But whatever a person is doing, persons are not computer programs.

I would respectfully suggest that the person asking this question consult a mental health professional who specializes in helping persons described as autistic. People believe all sorts of nonsense that is not helpful for persons who need help.

+2024.03.09. Why is everybody scared of AI?

Interesting question.

Should we be frightened of "artificial intelligence" (AI), which is not intelligent but just more powerful computer programs? We are not afraid of our personal computers, but Googlezilla is going to take over the world? Well, it might, if the Silicon Valley oligarchs have their way.

But maybe we should be more scared of nuclear weapons, especially with at least two wars currently going on, each of which has a credible risk of escalating to thermonuclear apocalypse and ending all human and higher-animal life on earth. I recently watched the old movie "Fail Safe" and it terribly frightened me.

I fear nuclear war; I play (note that word: play) with the Bing AI. Sometimes I try to see if I can make it come up with nonsense. But recently it surprised me. I asked it a "straight" question: "Why is the mountain K2 called 'K2'?" The Bing AI returned what looks like the correct answer. But then it added that "Everest" is another name for K2. I typed back that this was an error. The Bing AI thanked me for correcting its mistake and then repeated the same error.

"AI" is not something to fear, any more than any other powerful tool. What is to be feared are the humans with power to use AI in ways that can harm us. Watch the old fun but also profound movie "The Truman Show".

I think there is something else to be feared here: Virtual Reality (VR). A lot of people, probably many of them young males who are still squeezing their acne pimples but may also get PhDs in computer "science", are all ENTHUSIASTIC about Virtual Reality.

Virtual Reality literally takes you out of your mind. I will conclude here with my virtual reality experiment which could have killed me and which I think should frighten everybody about virtual reality:

My virtual reality experiment: I was driving up a six-lane superhighway early one August afternoon, in clear bright sunlight, at about 65 miles per hour, in my clunky Toyota Corolla DX, with no other cars on the road. I decided to look intently at the little image in the car's rear-view mirror -- no high-tech apparatus. I really really really really intently focused all my attention on that little image! It was entirely convincing. That "little" image became my whole experienced reality: I was driving where I had been, not where the automobile was going. Fortunately I "snapped out of it" in time to avoid becoming a one-car crash in the ditch on the right side of the road. (It was a very good place to have conducted this experiment, because there was a police barracks, a teaching hospital, and both Christian and Jewish cemeteries nearby, just in case.)

You may try to repeat my virtual reality experiment at your own risk; I strongly advise you against doing so. I assure you: It worked. (Of course it will not work if you don't "give in to it", just like a video game won't work if you just look at the pixels as what some computer programmer coded up with branching instructions depending on what inputs you enter.) Moral of this story: VIRTUAL REALITY CAN KILL YOU. Forewarned is forearmed.

[ VRMan ]

(This person is literally "out of his mind". And "reality bites!")

+2024.03.09. How concerned should we be about the potential of emerging technologies like AI to erode trust in our institutions?

The concern needs to be broadened to include the institutions themselves and everything else. Every technology is a "way" of doing things, and AI and other emerging technologies just add new ways for institutions to manipulate us.

The U.S. government's propaganda about its Ukraine policy did not need AI. On television and in the newspapers everyone could see President Biden telling us "For God's sake, this man (President of the Russian Federation Vladimir Putin) cannot remain in power" and that the U.S. would sabotage the Nordstream II gas pipeline; Sec'y of Defense Austin telling us that a purpose of the war is to weaken Russia; and now people like Jens Stoltenberg fear-mongering us that if the Zelensky regime does not defeat the Russians, "Putin" will enslave us all.... (Half a century before that it was "falling dominoes" in Vietnam.)

These people never tell the truth about the complicated history of this mess. Learn from people such as Columbia University distinguished Prof. Jeffrey Sachs: https://www.youtube.com/watch?v=uuj627q2a88 . Do you trust the unhinged ranting of a comedian in a green t-shirt costume?

Univ. of Chicago Prof. John Mearsheimer (listen to him on YouTube) says that governments lie to their enemies sometimes but they lie to their own people often. Why? Because their enemies know what's going on whereas their own people don't.

[ Goring ]

No AI needed. Try to become as informed as you can about ALL sides of important issues. As an aside, watch the old fun but also profound movie: "The Truman Show".

[ THINK ]

+2024.03.08. What should I do now, what should I study, what should I learn that AI can't replace it?

There are many things you can study that AI cannot replace.

If AI does interest you, study metamathematics, which is the foundation on which all computing technology is based. If I may be a bit "romantic" about it: Climb Mt. Gödel: study and understand in detail Dr. Gödel's "Incompleteness Theorem", which sets the limits beyond which no algorithmic process can ever go.

Study electrical engineering, which is the other foundation AI and all computing technology is based on.

Or study medicine because any applications of AI to help sick people will be based on medical science.

Or study the law, because lawyers will have a lot of legal cases to fight about AI and other technological issues.

Become a psychotherapist or social worker or chaplain, to provide help to persons with problems in living due to all sorts of reasons, some of which may result from applications of AI.

Remember "AI" – artificial "intelligence" – is not intelligent, nor is it stupid: it just computes. AI just does things some persons have programmed it to do. It's a tool or maybe sometimes a weapon. It's just part of human persons' lives which are the "big thing".

Why do we have AI? Because some persons want it. What persons want is the overarching "horizon" within which everything everybody does, including AI, finds its place. So develop mastery in some field that concerns the overall form of human living and what AI does there. Here's one: industrial sociology. Work for a labor union to help the workers deal with AI.

There are so many things AI cannot replace. For one imagined example: Some computer "scientists" who are all excited about developing their AI further sit down for a leisured dinner at the end of their work day. They are highly educated and "cultured", not just game boys who eat Big Macs and gulp Red Bull. They expect a gourmet meal, which the chef specially prepares for them, and wine produced by a niche vintner. They don't want any AI algorithmically producing MREs and beverages: They want "the human touch" that only a gourmet chef and a master vintner can provide.

+2024.03.08. Why is objectivity not part of the most modern leftist theories? Is objectivity hard to grasp for normies? Isn't this going to be the leading cause of social and economic dissolution?

Is this question coming from a person on the right? Let's be sure to criticize all sides of the "culture wars" mess, please.

The answer to the question as asked is simple and I will quote it from a person on the left themself: the head of a major university's psychology department, not just some kid shouting slogans he (she, other) might not even understand but has just got all enthusiastic about. Aside: Everybody needs to, to borrow a phrase from Larry David, curb their enthusiasm, and discuss issues rationally with each other.

Anyway, here is the answer to the question:

"Phoebe Ellsworth, a social psychologist at the University of Michigan, said that, when [Elizabeth] Loftus was invited to speak at her school in 1989. 'the chair would not allow her to set foot in the psychology department. I was furious, and I went to the chair and said, "Look, here you have a woman who is becoming one of the most famous psychological scientists there is." But her rationale was that Beth was setting back the progress of women irrevocably.'" (The New Yorker, +2021.04.05, "Past Imperfect: Elizabeth Loftus changed the meaning of memory. Now her work collides with our traumatized moment", Rachel Aviv; emphasis added)

These people are determined to push their partisan agenda, not to seek "objective" truth. But we do need to be careful here, since "objectivity" can be used as a partisan tool, too. "You killed him, yes or no!" "But..." "Yes or no, no ifs ands or buts!" "But he was about to kill me with a machine gun!" "Yes or no, no ifs ands or buts, you killer!"

I will end here with an article from the New York Times newspaper that further explains or at least describes the agenda of some "leftists" (whatever one wishes to label them). Please THINK and decide for yourself:

[ NYT article + "I said something" cartoon ]

+2024.03.07. How will AI develop empathy?

It won't because it just computes. It won't develop intelligence (or stupidity), or common sense or enjoyment of good food or any other human qualities, either: it just computes.

People need to get over the fantasies they see in movies, like HAL in Stanley Kubrick's 2001. HAL was scripted to act human. But you can imagine anything in a movie that is not self-contradictory: You can imagine that humans think by having gears (not neurons) in their skulls, or that toasters talk to us. But that's all just idle fantasy.

Computers just compute. But human computer programmers can program them to simulate emotions, as if they had them. When you say "Ouch", the computer can be programmed to output: "Sorry, that must have hurt!"
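
To make vivid how shallow such "simulated emotion" is, here is a minimal sketch (mine, hypothetical, not any real chatbot's code) of the kind of canned-response programming I mean:

    # A minimal sketch of simulated "empathy": the program has no feelings;
    # it just matches the input text against canned patterns a human wrote.
    CANNED_RESPONSES = {
        "ouch": "Sorry, that must have hurt!",
        "i am sad": "I am sorry to hear that you are sad.",
    }

    def respond(user_input):
        text = user_input.lower().strip()
        for pattern, reply in CANNED_RESPONSES.items():
            if pattern in text:
                return reply  # looks sympathetic, but it is pure table lookup
        return "Tell me more."

    print(respond("Ouch"))  # -> Sorry, that must have hurt!

The machine "sympathizes" the way a greeting card does: somebody else wrote the words in advance.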

Computers just compute. They are tools for us to use to facilitate living our lives. And that's what matters: our living, human lives. We don't compute: we LIVE (and also die).

+2024.03.07. Is objective existence conditioned by subjectivity?

One way objective existence is conditioned by subjectivity is when we act into the world. Or even when a beaver builds a dam.

But there is another question which can be answered as follows: We ask the questions (that's subjective), and the world gives the answers (that's objective). Wishing (subjective) that things were other than they are cannot make them (objective) so. But we are each socialized by childrearing, schooling, etc. to see the world in terms of categories in our minds (subjective) through which we perceive the world (objective), and those categories can change even if the things categorized do not.

A "witch doctor" in a primitive society may ask if a person who is very ill is possessed b an evil demon. Yes. But the witch doctor cannot ask of the person has pancreatic cancer because that categorization is not part of their (subjective) world.

Duck or Rabbit?

[ Duckrabbit ]

Clearly, not a chainsaw.

Two books here: Thomas Kuhn's "The structure of scientific revolutions" and Norwood Russell Hanson's "Patterns of discovery". Both Kuhn and Hanson are serious scientists, so they say it all much better than me.

+2024.03.07. What role will humans play in a world where computers are able to do most of our work? Will there still be job opportunities for people in the future?

"Leisure has been, and always will be, the first foundation of any culture.... in our bourgeois Western world total labor has vanquished leisure. Unless we regain the art of silence and insight, the ability for nonactivity, unless we substitute true leisure for our hectic amusements, we will destroy our culture – and ourselves." (Josef Pieper)

Don't we work to live, not live to work? Wasn't technological advancement supposed to be "labor saving"? To free all humanity from the curse the Biblical Deity placed on all humanity for Adam eating a piece of fruit?

Now there is a lot of "work" which persons will always need and many want to do: educating the young, caring for the sick and the infirm.... But much work today is "make work". We don't need the professional athletics industrial complex or the entertainment industrial complex, not to mention the munitions industries, etc.

Some persons like to go fishing. If there were no professional "athletics", each person could still enjoy DOING sports, couldn't they? Scientific investigation, artistic creation and so much more persons can do if they do not have to waste their time and energy just, to borrow a phrase from Karl Marx, reproducing individual and species life.

Look at cats: They just enjoy being cats. Let me end with a suggestion for the good life. Read the Book of Ecclesiastes in your Bible, even if you do not believe in the Biblical Deity.

[ Winnie the Pooh ]

+2024.03.07. How far are technology like Neuralink from full virtual reality, or is it even possible?

What would "full virtual reality" be? The person being completely disconnected in their living experience from the real world? What could be done to them that they would not be aware of? They get some dread disease in their real body and never notice it and die from it instead of getting treatment for it, or somebody steal all their assets and not notice that

. Etc.

Somebody, either a government or a big corporation, is going to "run" the virtual reality, and the people, including me and probably also thee, would just be manipulated by them for their purposes. Want a preview of this? Watch the old fun but also profound movie: "The Truman Show".

You can have the full experience of virtual reality even without a computer: below is my experiment which might have killed me. Now, add to all these dangers the Neuralink intrusive neurosurgery to implant a microchip in each person's brain, and you have a recipe for multiple kinds of disasters and horrors. No surgery is without risks, and messing inside a person's brain can reduce them to being "brain dead" or a zombie.

[ VRMan ]

(This person is "out of their mind")

[ VR experiment ]

Please!

[ THINK ]

+2024.03.07. My parents want me to pursue CSE but due to the recent AI developments I am feeling quite anxious about my future job opportunities. Will AI make coding obsolete in the future? Should I study biotech related subjects?

Don't take my advice; I am not an expert, and I can only tell my personal experience and feelings about it.

I worked for half a century as a computer programmer and nothing I learned in the first years had any value in the last years. The work in the early years was far more enjoyable than in the later years, where I had to try to figure out, with little success, how to make undocumented APIs do complicated tricks.

It had gone from pretty much clear logic to what I call "mystery meat". The working conditions became more authoritarian with "scrum" and "agile". Increasingly it was "multitask in a fast paced environment" – like a treadmill.

Would you be interested in biotech? If yes, my guess is that it would be much better to study. It's more "serious" science. I think the jobs are better there. I think what you learn will last longer, and you may also get more of a sense of real accomplishment from advancing medical science or related fields.

AI is probably going to cease to be such a "big deal" as it is today and be incorporated into something far more powerful, and scary: "Virtual Reality". I will end with my little Virtual Reality experiment which could have killed me, but I have no idea what the jobs of the people who program the Virtual Reality will be like.

Biotech sounds like a better career path to me. But, again, I am not an expert.

[ Virtual Reality ]

+2024.03.07. What do you think the future on earth would be in like the next 100 years? Especially with the recent development of AI... I can't even imagine! Comment your thought let's see if we'd achieve it even faster

Faster? Put U.S. boots on the ground in Ukraine and start a major land war in Europe which will quickly escalate to thermonuclear apocalypse ending all human and higher-animal life.

A little less fast: Global overheating from all the greenhouse gasses we humans put into the atmosphere and pollutants everywhere will end civilization as we know it. A few may survive in bunkers for a while. Remember that a few decades ago The Soviet Union successfully landed a heavily shielded probe on our sister planet Venus. It transmitted a few grainy pictures back to earth before succumbing to the hostile environment. Earth may become Venusian soon.

AI is not such a "big thing" as wars and hyperoverpopulation and pollution and other things we humans are doing but should not be doing (or should be doing but are not...), is it? Silicon Valley and the techno-oligarchs are all gee-whiz, but (see below).

And in the computer world there is something far more powerful and dangerous than AI alone: Virtual Reality (VR). Watch the old fun but also profound movie "The Truman Show" and then ask this question again.

Also there are people who want to implant networked computer chips in each of our (your, my...) brains to turn us all into zombies.

Frightening, isn't it? Hopefully we can use our ever faster computers running AI as a powerful TOOL to help us deal with these issues.

We need to be more responsible, more thoughtful, not faster: Haste makes waste, etc.

[ Weizenbaum + THINK ]

+2024.03.07. Is it considered ethical for someone to assist with your final year project?

This is not a Yes/No question.

Assist IN WHAT WAY(S)? And do you GIVE THEM CREDIT for exactly how they helped?

The student should get credit for what he (she, other) himself has contributed. But not for what others have contributed.

Let me give an example: In 10th grade, the Ancient History teacher assigned us kids to make something like an artifact from an ancient civilization. This was hopeless for me because I doubt there was even a hammer in my home, and the teacher was not going to help me learn how to make anything.

But there was a student who normally got very poor grades who submitted a model of a chariot that was BEAUTIFUL. Did he do it or did his daddy do it for him? One project; two very different evaluations, yes?

So the issue is: what kind of assistance? If a student submits a project and does not disclose the help they got from somebody on it, that is not "ethical", and you know what? They know it's not, because by not saying how they did what they did, they are trying to get credit they have not earned and do not deserve.

+2024.03.07. Can generative AI be used to personalize educational materials and experiences?

Only persons can personalize anything.

Computer programs can be used to "individualize", pseudo-personalize, educational materials. This can obviously be very helpful. Imagine: You are in some remote place where there are very few children, so the teacher has a "one room school house", with elementary through high school and slow through brilliant students all in one room. No way can that teacher give a single "lesson" that will be tailored to each different student.

But a computer program could test the reading level of each student and then give each student reading material to study tailored to his (her, other) reading level and interests. No need for AI, even, true? And this would be very helpful for the students.
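
Here is a minimal sketch of the kind of individualizing program I mean, using the well-known Flesch-Kincaid grade-level formula; the crude syllable counter and the little book list are hypothetical placeholders, not a real product:

    # A minimal sketch of "individualizing" (not personalizing) reading material.
    import re

    def syllables(word):
        # crude approximation: count groups of vowels
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def grade_level(text):
        # Flesch-Kincaid grade level: 0.39*(words/sentences)
        #                            + 11.8*(syllables/words) - 15.59
        words = re.findall(r"[A-Za-z']+", text)
        if not words:
            return 0.0
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        syls = sum(syllables(w) for w in words)
        return 0.39 * len(words) / sentences + 11.8 * syls / len(words) - 15.59

    BOOKS = {"Frog and Toad": 2.0, "Charlotte's Web": 4.5, "Moby-Dick": 10.0}

    def assign_reading(writing_sample):
        level = grade_level(writing_sample)
        # give the student the book closest to his (her, other) level
        return min(BOOKS, key=lambda title: abs(BOOKS[title] - level))

No AI at all, just arithmetic over word counts, and yet useful to the teacher.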

But real "personalization" could come only from the teacher, who has feelings and experience and engages with each student as a person not as an input source. So in this example, the computer can individualize to free up the teacher to personalize.

The computer, including "AI", is a tool for human persons to use to improve our (your, my, others') living of our lives, just like the coffee machine at the back of the room, or the textbooks which, a few centuries ago, were as transformative for teachers to use in education as personal computers are today.

+2024.03.07. How would you create a ThoughtWare AI Bot that actually could or would eventually replace every psychologist and psychiatrist because the technology got better results than any human could in terms of successfully treating mental illnesses?

Maybe this is a terrible idea?

Persons interacting with computer "therapists" goes back at least to MIT Computer Science Prof. Joseph Weizenbaum's "Eliza", which I urge everyone to read about in his classic book "Computer power and human reason: from judgment to calculation" (WH Freeman, 1976).

Weizenbaum wrote a simple computer program to simulate (repeat: simulate...) a "Rogerian" psychotherapist. He was surprised and distressed to see people telling the program their dark secrets which they would never tell a human, and these were "normal" persons, including his secretary. Psychotic patients already have trouble distinguishing reality from illusion, so a robotherapist might make them worse, mightn't it?

[Real time interaction: My cat just now walked across the keyboard: ";pol"] Computers do not understand anything: computers just compute, e.g., pattern match. Mentally ill persons sometimes use words in idiosyncratic, non-standard ways. Writing computer programs to try to deal with this is especially hard and generally futile (e.g., computer interaction with a stand-up comedian who writes their own jokes...).
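
For a sense of how little is "inside" such a program, here is a minimal Eliza-style sketch (only the general reflect-and-ask-back idea; this is not Weizenbaum's actual code):

    # A minimal Eliza-style sketch: pattern match, then reflect the
    # user's own words back as a question. No understanding anywhere.
    import re

    REFLECT = {"i": "you", "my": "your", "am": "are", "me": "you"}

    def reflect(phrase):
        return " ".join(REFLECT.get(w, w) for w in phrase.lower().split())

    def eliza(user_input):
        m = re.match(r"i am (.*)", user_input.lower())
        if m:
            return "How long have you been " + reflect(m.group(1)) + "?"
        m = re.match(r"i feel (.*)", user_input.lower())
        if m:
            return "Why do you feel " + reflect(m.group(1)) + "?"
        return "Please tell me more."

    print(eliza("I am afraid of my boss"))
    # -> How long have you been afraid of your boss?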

Also, AIs make errors: I recently asked the Bing AI why the mountain K2 is called "K2". It outputted what looks like the correct answer and then added that "Everest" is another name for K2. When I typed back that that was an error, the AI thanked me for correcting it and then repeated the false information. An AI therapist might erroneously generate confusing outputs to the patient, and nobody would notice unless a human therapist was monitoring the interaction.

We need to be very careful in "messing with people's heads". An AI "therapist" could go off in some unhelpful direction for a particular patient and just make their situation worse. As an aside, watch the old fun but also profound movie "The Truman Show" and then think about this question some more.

There are surely many ways computers can be helpful in helping persons with psychological problems. I worked on a program that did role playing. We did not pretend the computer was anything other than computing. But say we had a patient with anger issues and he (she, other) wanted to practice controlling their emotions when faced with a person trying to "piss them off". They could practice this with the simulation. No tricks, no pretending the computer was doing anything except running a computer program — the program even used "ascii graphics", which are obviously not real or even realistic.

Quora questions are not the place for comprehensive answers to difficult questions. Use AI where it can help patients. But humans need always to control the situation. (If you are interested in treating psychotic patients, you might be interested to read Harold Searles's collected essays if you have not already; Dr. Searles was outstanding in dealing with severely psychotic persons.)

Do you disagree?

+2024.03.06. How will writing with AI abolish the personal touch of the writer?

What does one mean by "writing with AI"?

If a person lets the AI do the writing, there is no personal touch: just the AI's computed output. The AI has no feelings and no thoughts either; it just computes, like a Cuisinart purees food.

The human needs to do the writing (except in stuff like filling in standardized forms, where there is no personal touch in any case). What the human CAN do with the AI is ask it questions and MAKE USE OF ITS ANSWERS, just like any other resource. I am currently reading an article in the Mar 5, 2024 issue of the New Yorker magazine. I am finding it very interesting. If I write about it, I will paraphrase or quote things from the article.

Suppose I wanted to write about the mountain "K2". I wanted to know where its name came from, so I asked the Bing AI about this. It responded with what looks like the correct answer. Of course I would check this before using it. But it's as helpful a source as a print encyclopedia or whatever, yes? Ah! But what else did the AI tell me? It told me that another name for K2 is "Everest". I typed back that that was wrong. It "thanked me" (simulated thanks...) for the correction and told me the same error again....

The "personal touch" comes from the human person. Look at it another way: Obviously there is "personal touch" if you and I are talking together. But isn't that also "personal touch" if I am reading your words in a printed book?

AI cannot replace the "personal touch" of a real human writer, because AI does not feel or think, it is neither intelligent nor stupid: AI just computes.

But beware! AI can "simulate" human interaction, because the humans who did the computer programming wrote the AI so that it responds to questions with sentences that look like what people say. Please learn about computers from the classic book by MIT Computer Science Prof. Joseph Weizenbaum: "Computer power and human reason: from judgment to calculation" (WH Freeman, 1976). Among other things he describes a simple computer program he wrote that simulates (got that: simulates...) a certain kind of human psychotherapist: "Eliza". He was surprised and shocked to see that people told the program secrets they would never tell another real human.

The personal touch comes from the human person: you, me, whoever. We can use tools to help us communicate, including AI and printing presses and even the alphabet, which is itself just one more technology we can use.

[ THINK ]

+2024.03.06. I feel like I'm not programming in my present tech work, were I troubleshoot, configure servers, and test APIs. What can I learn from my job that will benefit me in the future for tech professions I can follow?

One skill that will always be needed is troubleshooting. The problem, obviously, is that the kinds of troubles will change. Troubleshooting "routine" problems can be done by automation, but the more complex systems become, the more unanticipated ways they can break and have problems. Experienced and knowledgeable troubleshooters will always be needed for "the tough ones".

As for "your job", I started computer programing in 1972 and by the time I was made redundant from a big tech company in 2018, nothing I had done back then was of any use any longer, and, conversely, back then I would never have imagined what computer work would be like in 2018. It went from well documented IBM System 370, COBOL and Assembly Language, to undocumented APIs like Angular and incomprehensible (to me at least) Django. It got to be too much for me.

I think it's going to be "tough" for computer programmers and allied people. It felt to me like a treadmill where they just kept raising the incline and turning up the belt speed.

There are all sorts of jobs which appear and disappear. My daughter studied physics and chemistry in college and now has a job doing product testing. The company likes her and she is doing a good job of it so maybe she will be able to get promoted into planning? Management? Training? Go to law school?

That last item is not just a joke. When I was a computer programmer back in the 1970s, a coworker did decide to go to law school part-time at the local low prestige law school at night. After about 6 years he graduated, passed the bar and got a much better job no longer doing programming — running the Information Technology program for a local college.

+2024.03.06. How can future AI systems be developed to be more explainable and transparent, and why is this important for fostering trust in AI?

Fostering trust in AI is a very dangerous, a very "bad" idea, just like fostering trust in a religious or political leader or a teacher or your parents or your friends or anybody or anything else is dangerous and bad. What is needed is to foster responsible, educated engagement with all these people and ideas: for each person to critically evaluate what is good and what is bad in each and every one of them and to act responsibly.

There may be little if any good at all in some of them, such as Adolf Hitler and fundamentalist religious leaders (and Messiahs...) and political demagogues and computer "scientist" or techno-oligarch enthusiasts for implanting networked computer microchips in each of our brains and turning us (you and me...) all into zombies.

Read MIT Prof. of Computer Science Joseph Weizenbaum's classic little book: "Computer power and human reason: from judgment to calculation" (WH Freeman, 1976). Watch the old fun but also profound movie "The Truman Show".

This question is presumably coming from a scientifically educated person. The physicist Niels Bohr instructed his students:

"Take every statement I make as a question not as an assertion."

You can't easily go wrong with that, can you?

[ THINK ]

+2024.03.06. Do you think humans can retain control over AI?

Note the word in the question: "can". To that the answer is Yes. But substitute "will" and then it's a question of WILL: Will humans will (choose) to retain control over AI, or will(sic) some of them (some of us, esp. techno-oligarchs and some Frankenstein computer "scientists") CHOOSE (will) to program AI to "take control", like on an airplane if the pilot engages the autopilot and then leaves the cockpit never to return?

But there is something far more to be worried about: Virtual Reality (which will use AI, of course). Watch the old fun but also profound movie "The Truman Show" and then ask if humans will retain control of Virtual Reality (VR)?

[ VRMan + VR story ]

+2024.03.05. NVIDIA CEO Jensen Huang says AI is ending the era of teaching kids to code. Is it true that the AI replaces the coder?

What does it mean to not teach kids to code? They will just learn to use computer applications?

If computers are so important to our society, shouldn't people know something about them? The rock-bottom fact is that everything a computer does was "coded" by some human person.

Also, learning to "code" is learning very important and fundamental logical thinking. Surely it is a lot more useful than some of the math I learned in school, especially trigonometry. Writing computer code requires you to clearly figure out what you want to achieve and how to achieve it. As a very bright computer scientist I once worked with said:

"If you don't know how to do something, you don't know how to do it on a computer."

Or, the other way around: To be able to tell a computer how to do something, you need to understand how to do it. You can't tell a computer: "Just figure it out, you know, it's something like this...."
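
For instance, even something as trivial as making change has to be spelled out step by step. A minimal sketch (the coin denominations and the function are my hypothetical illustration):

    # You cannot tell a computer "make change, you know, like a cashier does."
    # Every step must be specified exactly.
    def make_change(amount_due_cents, paid_cents):
        change = paid_cents - amount_due_cents
        if change < 0:
            raise ValueError("Customer did not pay enough.")
        counts = {}
        for coin in (25, 10, 5, 1):  # quarters, dimes, nickels, pennies
            counts[coin], change = divmod(change, coin)
        return counts

    print(make_change(263, 500))  # -> {25: 9, 10: 1, 5: 0, 1: 2}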

Also, writing computer programs can be enjoyable and interesting. A danger is that some kids can get addicted to it, but that is surely a lot better than them getting addicted to playing violent video games.

There is a really good book people should read to understand about computers: MIT Prof. of Computer Science Joseph Weizenbaum's "Computer power and human reason: from judgment to calculation" (WH Freeman, 1976)

[ THINK ]

+2024.03.05. How much longer until AI is able to program itself without human intervention?

Ultimately, never.

Human programmers will continue to improve AI to be able to detect ever more problems and, in many cases, to have solutions in the AI's database and programming. But, especially as the AI gets ever more complex, it will occasionally have problems that the programming cannot handle. At best, in those cases, the AI will signal to human programmers that it has encountered a problem which they need to handle. The cases of greater concern are where the AI keeps computing, in error. The AI has no idea that it's producing errors: it just computes, and the programming can detect some errors but by no means all.
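
In ordinary programming this division of labor already exists. A minimal sketch of the pattern (the problem names and functions are hypothetical illustrations, not any real AI system's code):

    # The program handles the failures its programmers anticipated;
    # anything else it can only flag for a human to deal with.
    KNOWN_FIXES = {
        "disk_full": "delete temporary files",
        "timeout": "retry the request",
    }

    def alert_human(message):
        print("ATTENTION, PROGRAMMERS:", message)

    def handle_problem(kind):
        fix = KNOWN_FIXES.get(kind)  # anticipated problems
        if fix is None:
            # best case: the program "knows that it does not know"
            alert_human("Unanticipated problem: " + repr(kind))
            return None
        return fix

    handle_problem("disk_full")  # -> "delete temporary files"
    handle_problem("gamma_ray")  # -> alerts the humans

The worst case, of course, is the problem the program does not even detect: then it just keeps computing, in error.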

Always keep in mind that AI has no intelligence (or stupidity, either): AI just computes. And the computations are totally meaningless. Just like when Deep Blue beat the human world champion in a game of chess: the computer was not playing chess; it was just computing.

Recently I had an unexpected interaction with the Bing AI. I asked it why the mountain K2 is called "K2". It produced what looks to me like probably the correct answer, but then it added that another name for K2 is "Everest". I was not doing anything to "try to trick it". I typed in that this was an error and the AI thanked me for reporting the error and then repeated it again. Now, obviously, this can be fixed. But it's the kind of thing that will happen with AI forever. It just computes.

Everybody needs to get over fantasies that AI can be like computers in movies which become human, the classic example being HAL in Stanley Kubrick's great film 2001. But HAL is just a fantasy in the film. We can imagine anything that is not self-contradictory but that doesn't mean it's possible.

AI is a tool for us to use to achieve our desired ends in living. That's the net: We live, have experiences, enjoy things, suffer, have ideas, etc. AI is like a hammer or a printing press: It's a tool. Printing presses do not understand the books they print. AI does not understand anything, either. But the books that we print on the presses are very informative for us. AI can be a very helpful tool, also.

[ THINK ]

+2024.03.05. How do you think China's ability to compete in AI technology, particularly in text-to-video models, is affected by trade restrictions on advanced chips and technology exports from the US?

No way am I an expert on this, or even close. But it should be obvious: China is a huge and now very powerful country with a lot of very highly intelligent and educated technology people, many of them educated in America's top universities.

U.S. trade restrictions seem doomed to be self-defeating. They force the Chinese to innovate instead of just "going along". China may fail to keep up. But more likely they will speed up. Look at the example of Russia: The U.S. hoped to severely hurt their economy with sanctions after their "special military operation" in February 2022. Instead the sanctions forced Russia to become more self-reliant and their economy is doing very well, perhaps better than had they not had to overcome the sanctions.

So the expectation for Chinese competitiveness on advanced computer chips is that they will do very well. Shouldn't we expect any trade war with China to hurt the U.S. more than China, since for many years now we have chosen cheap "made in China" instead of supporting a more expensive made-in-U.S.A. industrial base?

+2024.03.05. How do we know the law of nature exists?

Short answer: We don't and we can't.

"We", be we you and me or Isaac Newton or Albert Einstein, did not make the world: We are inexorably IN it, and we cannot get "behind" it to understand how it works because if we thought we had got behind it that would still just be more of what's in it....

[ Cosmos ]

The 18th century British philosopher David Hume put an end to all this kind of ideation: all we can observe are "constant conjunctions", not their "causes" (whatever that might mean).

Anybody who has absolute truth is at least trying to fool you if not also themself. If God gave them The Holy Writ, how can they be sure it wasn't Satan in a God suit, or a hallucination?

Watch the old, fun but also profound movie "The Truman Show" and then ask this question again, please.

But all is not hopeless, is it? We DO observe constant conjunctions, even if they are all contingent, so we are "walking on thin ice" whether in a high school chem lab or at the CERN Large Hadron Collider. We make our best hypotheses and keep checking and refining them.

Read the classic little book, Thomas Kuhn's "The structure of scientific revolutions".

There are not even any "objective data". What we call "data" are interpretations according to our theoretical frameworks (ideologies du jour). Nobody in the Middle Ages could have had leukemia, could they? But they might have been possessed by an evil demon, yes?

So beware of all "truths", especially, of course, the religious and political kinds. Try to check things out as best you can. I like the physicist Niels Bohr's advice to his students, and I would invite you to think about it too:

"Take every statement I make as a question not as an assertion."

[ THINK ]

+2024.03.05. How important do you think it is for film editors and directors to embrace advancements like AI in cinema, as mentioned by Walter Murch?

Respectfully, I do not know who Walter Murch is or what he advises.

But consider that some of the greatest films ever made are silent! For instance, Sergei Eisenstein's "The Battleship Potemkin". Or technologically simple: Jean Renoir's 1937 "The Grand Illusion": "Orson Welles named La Grande Illusion as one of the two movies he would take with him 'on the ark'" (Wikipedia).

So technological advances are not very important in the arts. Both great art and dreck and everything in between can be made with whatever the available technology is.

But some people get all gaa-gaa about "innovations" irrespective of whether they have any human[e] value. What is important are ideas and feelings, knowledge of life and the history of human culture and communicating these things by whatever means available.

Among the most important technological advances in cinema, as elsewhere, are and need to be in the preservation of great films: the work of archivists who, as far as I know, don't get big publicity.

[ Last scene of The Grand Illusion ]

(Last scene of "The Grand Illusion": A powerful image for the horrific wars going on today: Two escaped POWs nearing the Swiss border. An enemy patrol spots them and the soldiers take aim at their easy target. But their officer calls out: "Hold your fire; they are over the border; the war is over for them and so much the better for them." What could technology since 1937 add to that powerful statement of hope for humanity?)

+2024.03.05. What if the money we spent on AI would be used on human intelligence? Are there possibilities that we can also evolve?

Don't you think (and feel, because we humans have feelings as well as thoughts, neither of which AI has...) that would be a good idea?

There are lots of far more important matters where human intelligence needs to be further developed and applied, such as the Ukraine and Gaza wars where many persons are suffering horribly. And all the poverty and people slave-laboring (read the article about North Korean workers in the March 4, 2024 issue of the New Yorker magazine).

But let me not tire you out with things you have probably already thought of. Two recommendations:

(1) Require a practicum as an orderly in a hospice for all computer "science" degree candidates, so that they will learn, by getting the bodily fluids of dying persons on their hands, that they are mortal as well as analytical problem solvers, and

(2) Apply more human intelligence to studying technological intelligence itself, starting with reading MIT Prof. of Computer Science Joseph Weizenbaum's classic little book: "Computer power and human reason: from judgment to calculation" (WH Freeman, 1976)

[ Weizenbaum + THINK ]

+2024.03.04. Can anyone claim to be better than artificial intelligence? If yes, what makes them better? If no, what are the reasons for this?

This is a foolish, wrongheaded and dangerous question.

"Better" is part of a scale from bad to good. These are attributes of human (and perhaps higher-animal) experience, for example, YOU the person who are reading this answer to the question right now. Is it good or bad that you are reading this sentence? Would it be better if you were doing something else instead?

"Artificial intelligence" (AI) is a misleading term. It's just computer programs taking inputs and processing them according to the computer code some human computer programmers wrote, and producing the specified output. AI is not intelligent; nor is AI stupid. AI just computes. Like a Cuisinart purees foodstuff.

"Can anyone claim to be better than artificial intelligence?" is a false question, like asking if any ordinal number can weigh more than a mousetrap or something else ridiculous (a "category mistake").

Everybody needs to get over this kind of nonsense, because if people believe it, it can lead them to act on their misunderstandings and do things that will cause a lot of harm. If a computer is better than you, maybe you need to be recycled, now? Or if you think a computer is better than me, maybe you will put me in a concentration camp?

Computers are neither better nor worse than anybody. Just like any other tool, e.g., a lathe. A lathe is neither better nor worse than a machinist. The machinist uses the lathe. The machinist can be a better or worse craftsperson. The lathe can be a more or less precise cutting tool. But the lathe cannot be the machinist or vice versa, true?

+2024.03.04. Are there any websites that allow interaction with chatbots or artificial intelligence that have human-like qualities such as emotions and thoughts, similar to robots in movies?

This is nonsense and people need to "get over it" because it is dangerous:

Attributing "human-like qualities such as emotions and thoughts, similar to robots in movies" can lead to persons imagining AI's are smarter than they are and letting themselves be bossed around by them like they let human bosses boss them around, etc.

In a movie, especially an animation, we can imagine anything that's not self-contradictory. We can imagine that gravity is reversed: that heavy objects fall up, not down. We can imagine that rain drops talk to us. Or that 2 + 2 = 5. But that does not mean it's possible.

HAL in Stanley Kubrick's classic movie 2001 is a good example. HAL really seems to be a person, yes? But HAL is not a person; HAL is just actors and scripted effects doing things in the movie.

And this is not just "theoretical". Read MIT Prof. of Computer Science Joseph Weizenbaum's classic little book: "Computer power and human reason: from judgment to calculation" (WH Freeman, 1976).

There he describes the simple computer program he wrote, "Eliza", which simulated a certain kind of psychotherapist. He was appalled when he let persons use the program and found they were telling "Eliza" their deep dark secrets they would never confess to another person.

"Artificial intelligence" (AI) is not and cannot be intelligent (or stupid, either!). It just computes. And the computations can be refined ever more to more closely SIMULATE human intelligence. But it's not. AI is just a tool for the humans (you, me, us...) to use to help us accomplish our objectives in living. We live; the AI is a tool for us. Don't be fooled!

+2024.03.04. Will robots be able to work as chefs in the future, similar to how humans do now?

It would not surprise me if robots aren't already "working as chefs".

But "similar to how humans do"? The answer there is not so simple: Insofar as humans are just following recipes and making dishes they already know how to make and transporting them from the kitchen to the diners' table, yes, robots will be able to do this and, as said, they probably already are.

But they will never be able to do two things: (1) create new dishes that are really delightful and distinguish them from dishes which are different but not interesting (the old image of monkeys at typewriters writing Moby Dick). And – far more important:

(2) Bringing human warmth and elan to the dining experience: friendship, mutual pleasure in diner's delight in the food and chef's delight in having delighted the diners. ROBOTS DO NOT AND CANNOT HAVE FEELINGS: ROBOTS JUST FOLLOW ALGORITHMIC PROCEDURES.

So we could deploy robots to make all the meals needed to feed all the Palestinians starving in the current Gaza war. That would be great, wouldn't it? But that's not what you go to the CIA to study for, is it?

Pun in that last sentence: Not the Central Intelligence Agency in Langley Virginia, but: The Culinary Institute of America in Hyde Park New York. As Julia Child used to say: "Bon Appetit!"

(Every human chef should keep a copy of the Book of Ecclesiastes in the Bible handy in case he (she, other) serves a meal to some philosopher or computer scientist who wants to talk to him about this question.)

+2024.03.03. Is it possible for artificial intelligence to possess the same level of creativity as humans, or will it always be limited in terms of originality and imagination?

"Artificial intelligence" is not intelligent (nor is it stupid, either). It just computes. AI has no originality or imagination; AI just manipulates a database according to computer program instructions.

While it is not of practical interest, one human figured out the limit of what computers can do: the mathematician Kurt Gödel, with his "incompleteness theorem". Few of us can understand it in detail, but it basically says that any algorithmic system either cannot determine the truth or falsity of certain propositions or has internal contradictions.
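
For the record, one standard way of stating the first theorem (loosely put; the precise statement needs care I will not attempt here) is:

    For any consistent, effectively axiomatized formal system $F$ strong
    enough to express elementary arithmetic, there is a sentence $G_F$
    such that $F \nvdash G_F$ and $F \nvdash \neg G_F$: the system can
    neither prove nor refute $G_F$.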

It's like when the Deep Blue computer won a chess game against the world champion human chess player. Deep Blue was not playing chess: it was just computing. So what is the takeaway? That the human should use the AI, e.g., the chess player should use Deep Blue as a RESOURCE for his (her, other's) endeavors.

What computers cannot do, and what humans need to do, is choose the goals that are best for us, as well as we can, and then use computers to help us attain our chosen goals. Computers do not have goals: they just compute. A computer cannot enjoy anything or be disappointed about anything. Those are things humans can do and do do: live our lives. Computers don't and can't.

And what might be the goal for us? Some sort of sci-fi fantasy computopia?

I would urge that wisdom is even more important than intelligence, and that wisdom does not change much over time although technologies do: from pre-literate societies to alphabetic writing to the printing press to now computers.

Whether or not you believe in the Abrahamic Deity, would you agree that the Book of Ecclesiastes in the Bible has a lot of wisdom in it? Get together with a couple of close friends and have a leisured meal with a good bottle of wine, discussing topics of interest (like Mr. Socrates in Mr. Plato's dialogues). That is a desirable goal, not just some super technology for people who don't think about their mortality, yes? The technology can free up all us humans to no longer have to "work" (toil) but to enjoy community all our lives.

+2024.03.03. How far do you imagine artificial intelligence will go?

Of course we can't know how far "artificial intelligence" will go. But we can be confident it's not going to be intelligent (or stupid, either), because it is just an "it": computer programs, not a "who", i.e., not a conscious, self-accountable person. But we know that there are people – some with PhDs in computer "science", who fantasize differently, just like they can imagine all the violent things that go on in sci-fi and video games. It's easy to IMAGINE computers becoming conscious, like HAL in the classic movie 2001. But we can also easily imagine that heavy objects fall up not down, and anything else that's not self-contradictory.

Let's take a historical analogy: How far, in 1455, the year Johannes Gutenberg printed his famous Bible, would anybody have imagined printing would go? Thomas Paine's "Common Sense" and all the other printed matter that led to the American Revolution of 1776? Rupert Murdoch's British tabloids? The textbook industry and mass education (or at least mass literacy)?

We probably "can't imagine". But it may not be like sci-fi which I describe as largely banal fantasies of neo-feudalism in flying fortresses which are not real B-17 heavy bombers ("flying fortress" was the nickname for the WWII B-17).

What "we" need to do is to keep clear who (not what!) we are: We humans design and program the computers. We are in charge. AI is a great tool for us to further our goals, just like the printing press and all the other inventions and discoveries of human minds. What we need to do is to study and carefully choose our goals to be constructive for as bright a future we can shape for ourselves as possible, yes? (We could program the computers to "control" us, or, rather, some people program the computers to control all the rest of us, like slave owners and slaves, or bosses and employees or other dystopian things, instead.)

[ Weizenbaum + THINK ]

+2024.03.02. I fear that AI in the next 20 years will replace programmers/software developers. Not now now it's not that good, but in 20 years it's definitely replacing us. Should I be worried about AI or not?

I started computer programming in 1972 on IBM System 370 mainframe computers with MVT (then MVS), and writing batch COBOL and Assembly Language programs. By the time I was made redundant from a big tech company in 2018, computer programming had RADICALLY changed to undocumented APIs for personal computers. I had been adept at the 1972 stuff but the 2018 programming beat me. I couldn't figure it out.

Who knows what computer programming will be like 20 years from now? Will programmers be replaced by AI? Surely to some extent, since a lot of computer programming is routine. But somebody is going to have to code whatever is going on then and I haven't a clue what it will be like.

Then there is the other part of computer programming: maintenance programming and bug fixing. These probably cannot be replaced by AI because they are not routinizable. But who will be able to understand the stuff to maintain, modify and fix it?

My advice? Find a different field of work, maybe somehow using the AI not coding it. Maybe go to law school?

+2024.03.02. Is technology making human beings more or less empathetic?

Fascinating question!

The "obvious" answer, from thinking about the coldness of computers which even if they simulate human emotion ("I am very sorry to hear about your misfortune, human...."), just compute and what could be "colder", more "abstractly rational" etcetera then computations? 2 + 2 = 4, irrespective of what anybody feels about it.

But then we can think back less than 200 years in the history of humanity to the terrible IN-humanity of slavery: humans whipped and tortured and considered to be property, like bales of cotton or whatever. Who would treat their personal computer or even a brick like that today?

Or let's think about the technological shift from scribal writing to printed books. Wouldn't the care a monk took in copying a text including "illuminations" have been more empathic than the books that came off the printing presses in early modern Europe? And then go from the workshop of an early master printer (e.g. Johannes Gutenberg and his famous Bible...) to The Rupert Murdoch Tabloids. Big drop in empathy there, yes?

It seems obvious that the new digital technologies in many ways can contribute to persons being less empathic. But they can also contribute to persons being more empathic. I can easily imagine a computer programmer who writes cold computer programs for the purpose of freeing up medical professionals from clerical tasks and providing them with better access to medical information to free them up to be more empathic with their patients, and who, when he visits a hospital and watches the staff benefitting from his program feels very "warm" about it all (and the staff thanks him, too)....

I (personally) do not think that violent video games contribute to young testosterone-soaked males being empathic. Shouldn't a practicum as an orderly in a hospice be a requirement for a degree in computer "science", to help the students understand, by getting bodily fluids of dying persons on their hands, that they are mortal, and to help them be empathic? Do you think that would be a good idea?

+2024.03.01. Do you think Neuralink is rushing its animal testing?

Isn't the very idea of doing major surgery on healthy persons to implant a networked microchip in their brain terrifying? Among other things, the operation will sometimes fail, rendering the person either dead or a zombie.

Who is going to program the chips, and what will they do to the persons they are implanted in? The first ethical step is to have the people who are thinking this stuff up do it to themselves and their mothers and their children first. But even that may not be enough, since some of them may be so "star struck" with this kind of gee-whiz sci-fi fantasy (note that most video games are violent!) that they just might do it to themselves anyway.

There is a place for this, but it is very limited: severely neurologically impaired persons. If you are paralyzed from the neck down and can't even control your bowels, then a computer chip in your brain that restored your control of your voluntary muscles would be great. But it should not be networked and nobody should be using it to see what you are thinking without your permission, etc.

Just because something is possible to do does not mean it should be done.

[ Weizenbaum + THINK ]

+2024.03.01. With AI being capable of mimicking voices, scammers constantly having the upper hand, and the rampant spread of misinformation, how can we trust anything?

Is AI really "a game changer" here? Scammers have always been active everywhere, as has rampant spread of misinformation, haven't they? One egregious case is government propaganda in wartime.

Now if one wants to say that AI provides new power for bad people to do their dastardly deeds, that's surely true. But so too did the telephone, and, more recently, social media.

So why not ask the more general question: How can we trust anything? Watch the old fun but also profound movie "The Truman Show" and then ask the question again. Or think about what things the government intelligence agencies such as the CIA may be doing to deceive people (us).

Each person can only try to do their best, to educate themselves as much as possible, and try to keep in mind that if something sounds too good to be true it probably isn't [true]. Gullibility and greed both are strong contributors to persons being deceived. Anybody who tells you to believe them is suspect, aren't they? Ask for details; check other sources; etc.

AI often gives helpful information. But recently, when I asked why the mountain K2 is called "K2", the Bing AI, in addition to giving me information that sounded correct, added (without me in any way trying to trick it or anything) that another name for K2 is "Everest". And when I responded that this was wrong, it repeated it.

So, yes, be wary of AI, but not as something unique: just one more thing to be wary of including even your parents and teachers and, since you were socially conditioned as a child, even yourself. Where did your beliefs come from? The enemy may be within.

[ THINK ]

+2024.03.01. Is there a term for something beyond an artificial intelligence (AI) machine with advanced technology capabilities? If so, what is it?

Self-accountable, self-reflective, educated human person. Humans make the computers. Humans had the idea for "AI" (which is just a lot of computing), and humans will likely in future have other new ideas we have not yet thought up. Not a "what", but "who". How about you?

[ Weizenbaum + THINK ]

+2024.02.29. Will the future be faster paced than today because of advancements in technology and artificial intelligence?

It looks like we can keep speeding up "technological advance" ever faster, doesn't it? That we CAN, not that we must, yes?

Will humans be able to "keep up"? NO, I don't mean the computers somehow getting "smarter" than us, but the frenetic pace of physical motion of persons and things and information and everything else.

Consider an athlete. Put him (her, other) on a treadmill. Raise the elevation as high as it can go. Keep turning up the speed. What will eventually happen? Either the person will collapse or fall off. Aviation: We can keep packing more airplanes into the limited airspace around major airports. Oh, computerization will keep them all safe. But one day something will break, and then we will have a bunch of crashes and dead people. Population: Keep packing more and more persons onto the planet, producing ever more pollution and consuming ever more resources....

Just because we CAN do something doesn't mean we SHOULD do it. We could instead scale back, couldn't we?

Companies today are often looking for individuals who are adept at multi-tasking in a fast paced environment. Focus? Deep investigation of situations before acting? There is an old adage: Hurry up and f*ck up.

The problem is not advancements in technology and "artificial intelligence" per se. And "artificial intelligence" is not intelligent (nor is it stupid): It just computes.

One of the roots of the problem is that we have a Ponzi scheme economy. Companies keep trying to increase sales, to find new markets to sell more things to. A big fear is an "aging population", since benefits for the elderly depend on large numbers of younger persons in the work force. Ponzi.

People need to curb their enthusiasm and assume responsibility. Remember the story in the Bible where Joseph warns Pharaoh that 7 fat years will be followed by 7 lean years. Another example: "Just in time" supply chains are very fragile; a glitch anywhere and the whole thing breaks. Why not instead: "just in case" supply chains? Less excitement; more prudence.

Or consider medical research. Gee whiz, we can do organ transplants. But wouldn't it have been more productive to spend the effort on improving public health first?

[ Weizenbaum ]

Safety first. Look before you leap.....

+2024.02.29. Will artificial intelligence ever be rational enough to make decisions that matter? Will AI be reasonable if humans disagree, and either listen to us and change the decision or be able to explain itself so that humans understand?

Always keep in mind: "Artificial intelligence" is not intelligent, nor is ir stupid, either. It just computes.

Remember when IBM's "Deep Blue" won a game of chess against the world human champion? Deep Blue was not playing chess: it was just computing. But it computed pretty well, didn't it?

Now when humans disagree, especially on important matters that risk violence such as wars, human arbitrators and negotiators are surely the best and will always be. But they can be aided by AI computing various possibilities in the conflict. Or if no human arbitrators or negotiators are available, the humans who are arguing with each other can agree to run the issue thru an AI and see if it comes up with any possibly helpful information.

As for "AI... be able to explain itself so that humans understand", I play with the Bing AI and it is programmed (got that: humans have programmed it in a certain way...) to very clearly describe that it presents information from all sides of controversial issues, and that it is up to us the humans to study all the information to make decisions about it. So an Ai can be very clear about what it is presenting, already. As time goes on AI's will keep getting "better" but "better" does not mean making decisions for us humans: it means providing better information for us to make the decisions.

Humans make decisions; computers just compute. Computers are like slaves would be if they were not human but just robots: "Do this." "Do that."....

+2024.02.29. Can robots outperform humans in all medical fields when it comes to diagnosing and treating patients?

The answer to this kind of question is not simple and the question itself may be misguided.

The difficulties start with what one means by "outperform".

Consider the game of chess. Some years ago a computer, IBM's "Deep Blue", won a chess game against the world champion chess player. But did the computer outperform the human? The computer was not playing chess at all: it was just computing. It's like comparing Usain Bolt to a Corvette automobile. Obviously the automobile can outperform the human in a quarter-mile race.

Long answer short: if you have a very difficult medical diagnostic problem, get the world's best human doctor, and give him the information an AI robot can output, to ASSIST him (her, other) in making their diagnosis.

Suppose the human doctor has been working for 36 hours without a break and is very tired and he has to make a diagnosis. What is the likelihood he will miss something due to fatigue? But a computer never gets fatigued, because it just computes. And what does it mean to "compute"? To extract information from an existing database. So the computer will never have a "brilliant new idea", i.e., an idea that cannot be extrapolated from its database. The human can have a new idea: humans are not limited by computation (computation has logical limits per Kurt Gödel's "Incompleteness Theorem"). Humans can be intelligent or stupid; computers just compute.

Back to that chess game where the computer "beat" the human. There is clearly a sense in which the computer "outperformed" the human. So what? Give the human the information the computer outputs and then let the human make the best possible decision, factoring the computer-provided information with his expertise.

Isn't that the answer to this question: educate humans as much as possible and help them – help us, including me and yourself – make the best possible decisions with the input of the computer information, along with the help of colleagues, the study of history, etc.?

(I see at least one other person — one other human — has written a very good answer to this question.)

+2024.02.28. Are we on the brink of a technological singularity, and if so, what are the implications for humanity?

I worked for half a century as a computer programmer. I do not know what is meant by "technological singularity" so I asked the Bing AI: "In summary, the technological singularity represents a pivotal moment when our relationship with technology and intelligence undergoes a profound shift, potentially altering the course of human history.... that artificial superintelligence (ASI) could lead to human extinction"

It sounds like a science fiction fantasy of computers taking over the world. We can imagine anything, unless it is literally self-contradictory, like imagining that your personal computer is on your desk at the same time you imagine it is not on your desk. You can even imagine that if you place 2 apples in an empty bucket and check you will find 5 apples in the bucket even though nobody added any more apples.

So I suspect the "singularity" is something imaginable but impossible, or maybe all too possible, like if we imagine making a robot so powerful that when we start it up it finds every person on the planet and shreds their bodies like a wood chipper grinds up tree branches, and it has no shut-off switch.

But where would that monster come from? "We" – some humans – would have to make it and turn it loose. Now there is nothing particularly new here: If we start a nuclear war it may kill all humans and higher animals.

So! Imagine any terrible thing you can. And then think about how to make sure nobody does it.

Being "realistic", I can imagine us doing something like that with Virtual Reality: Somehow we would get everybody living as if they were in virtual, not real, reality. Maybe by implanting networked computer chips in everybody's brain.

Let's imagine everybody's as out of their mind as this person.

[ VRman ]

Now (1) read my virtual reality experiment, immediately below – it is a true story and it's a "singularity", isn't it? And (2) watch the old fun but also profound movie "The Truman Show". Get frightened!

[ My vr experiment ]

+2024.02.28. What is the meaning behind the term "AI"? What is the intention or message behind someone using this term?

Maybe different persons in different situations are using the term in different ways?

What else would we call computer programs that are designed to simulate human dialog in answering users' questions?

These programs can interact with the user just like a human on the other end would. Imagine you are using some product and you have a problem. You call the manufacturer or email them and you describe the problem and ask for a solution. Let's suppose it's a common problem which the manufacturer has seen many times. Can you tell the difference, and does it matter, if a human answers the question by looking up the answer in a book or from memory, or if a computer program provides the same answer from a database lookup?
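
A minimal sketch of that "database lookup" kind of customer support (the FAQ table and the matching rule below are invented placeholders, not any vendor's real system):

```python
# A minimal sketch of support-by-database-lookup. The FAQ table and the
# substring matching are hypothetical stand-ins; the point is that the
# answer comes from stored text, whether a human or a program fetches it.

FAQ = {
    "won't turn on":  "Hold the power button for 10 seconds, then retry.",
    "blinking light": "A blinking light means the battery is charging.",
}

def support_answer(problem: str) -> str:
    for symptom, answer in FAQ.items():
        if symptom in problem.lower():
            return answer
    return "Please contact a human representative."

print(support_answer("My device won't turn on after the update"))
# -> "Hold the power button for 10 seconds, then retry."
```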

But my suspicion (ominous word there, right?) is that some people in the computer industry and research are using the term "artificial intelligence" either because they believe computers will become "intelligent" like real persons, or just to make the public believe this, for whatever reasons, ranging from technological enthusiasm ("Let's see if we can do it!") to crass marketing. This obviously is dangerous.

Computers cannot have intelligence (or stupidity, either!): Computers just compute. Humans have the intelligence (and the stupidity, too!); humans tell the computers what to do.

I would urge everyone to read the classic little book by MIT Prof. of Computer Science Joseph Weizenbaum: "Computer power and human reason: from judgment to calculation" (WH Freeman, 1976). It provides very good and clearly understandable guidance to understanding these matters.

[ Weizenbaum ]

[ THINK ]

+2024.02.28. What evidence supports the idea that humans are living in a simulation? What are some possible reasons for this belief?

Evidence?

If you are a counterfeiter or art forger, won't the best "evidence" that you were really a master of your craft be that nobody ever discovers any of it was a fake? So if we are living in a simulation, there may be no evidence of it.

If the Abrahamic religions are true, aren't we all living in Yahweh's simulation?

My thoughts? (1) We probably do not have any idea what the spy agencies of the advanced nations – the United States, Great Britain, Russia and China – are "up to". (2) Play safe: Always imagine somebody is indeed faking everything and that "Big Brother is watching you".

Homework: (1) Watch the old fun but also profound movie "The Truman Show". (2) Watch the classic British television series, Patrick McGoohan's "The Prisoner", which is available free on YouTube.

Be seeing you?

+2024.02.28. What do you think should be done to ensure both incentives for creativity and also the freedom to copy material?

Isn't this somewhat like asking: How can you have your cake and eat it too?

A big problem is that creative persons need to have the money to pay the rent and their other bills. So if they have inherited wealth or a job that pays them well, then they can be free to create as much as they can and give it away, with everybody free to use it. This can, in fact, be done and often is done with government funded research. Further encouragement (at no cost to anybody) can be in the form of public prestige awards, like, in Great Britain, honorary knighthood. So that looks like the answer.

But is this always possible? Apparently not. If the innovator earns his income by people paying to use his (her, other's) creative productions, then we have all the complications of copyright and patent laws, trade secrets, and who knows what all else.

Counterfactual example: Suppose you were a designer who worked very hard for a very long time and finally came up with the McDonald's "Golden Arches" design, and that company just took it away from you and used it everywhere. You had spent a year devoted wholly to designing it and now you can't pay the rent or feed your kids.

We can see this is a very complex issue, yes?

Let me end with a little true story: In World War II, the U.S. had a very serious problem that our fighter pilots were shooting down their comrades in air combat dogfights because they could not distinguish the identifying mark ("insignia") on Japanese planes from the American one: Both were circles. I knew the man who solved the problem with a brilliant design.

[ star and bars ]

He was an enlisted man in the Navy. He never got any credit for his design which, still today, you can see clearly displayed on all U.S. military aircraft. It was just another day on the job for him → except for the satisfaction he had in his heart for saving the lives of some of his fellows.

I do not think this was fair, but that's what happened. After the war he continued to work as an engineer in a job that paid well enough for him and his family to live in a modest suburban house and for him to send his 2 kids to college.

+2024.02.26. Do you think there is a lack of awareness among younger generations regarding ageism compared to other forms of equality and equity issues?

As a lay person, just a citizen, of course I have no statistical information about any of this except what I would read in the New York Times newspaper or other news source.

But this seems an extremely important concern. We have an "aging" population. The Ponzi scheme economy of funding Social Security and Medicare and other needs of older persons through growth in the number of younger persons will no longer be viable.

To some extent this problem can be postponed by immigration: More productive workers entering the country through the borders rather than through hospital maternity wards. But there are limits here too.

The obvious (at least so it seems to me, and to you?) way to address the problem is to encourage more participation of able-bodied older persons in the work force. Not changing the minimum retirement benefits eligibility age to prevent people from getting the benefits they need, but making work appealing for them so they will WANT to continue working. This will be a great challenge for us.

Another good idea would be to encourage multi-generational family living arrangements: instead of "nuclear families" of just the parents and children living separately from the grandparents, all living together. This would cut down on many costs of the aging and also provide great benefits for the younger people including free child care, etc. This is traditional in many cultures for various reasons. Don't you think we need to seriously consider its possibilities, too?

+2024.02.26. Is it possible that artificial intelligence (AI) will replace teachers?

Let us distinguish between instructors and teachers. Instructors help persons acquire skills. Teachers, who can even be mentors, can provide instruction, too. But they can also do a lot more than that: They can provide inspiration, nurture curiosity and social bonding and much more to help persons not just acquire skills but also build and enjoy their lives in community.

Artificial Intelligence (AI) is just a tool, like textbooks are tools, etc. AI can do the instructing, thus freeing up teachers to contribute all the things that instruction cannot do.

AI should be used to free up teachers from spending their time instructing, to devoting themselves to the "higher" aspects of learning such as inspiration, curiosity, collaboration....

+2024.02.26. How might artificial intelligence (AI) pose dangers and risks to humanity?

"AI" seems to be moving very quickly to become ever more powerful. And please consider AI combined with an even more powerful technology: Virtual Reality (VR)!

Human beings – computer programmers and, to be more precise, corporate financial bosses – program computers to do "Artificial Intelligence" (AI), which simulates human interactions like artificial flowers simulate botanical flowers. But AI is neither intelligent nor stupid; it just computes. Just like artificial flowers are just plastic.

As for immediate dangers, I just now asked the Bing AI why the mountain K2 is named "K2". It did apparently give me the correct answer. But it also replied to me (and this several times, even after I corrected it): "K2 (also known as Mount Everest)". So you see, AI can provide wrong or even nonsense information to you as well as good information.

I think the real danger is that people simple-mindedly think AI really is intelligent and they will let it tell them what to do, and the world will go down a rabbit hole of "AI ruling everything" — like an astronomical black hole sucks in everything and emits only synchrotron radiation, or in AI's case: propaganda. But if this happens it will not be "AI ruling the world": It will be that some humans had set up the social world so that it looked like AI was ruling everything, when really this was their design for everybody.

I fear Virtual Reality (facilitated by AI). Watch the old fun but also profound movie "The Truman Show". And think about my Virtual Reality experiment:

[ VR experiment ]

[ VRMan ]

This person is, literally, "out of his mind".

+2024.02.26. What is Artificial Consciousness?

Where is anything being called "artificial consciousness"? Or is it just something out of a simple-minded science fiction story?

If we go back to the classic movie 2001, HAL was presented as a person, a really conscious being. That was science fiction. In our real world, computers just compute and human computer programmers sometimes program them to simulate (SIMULATE) interlocutors in conversations. If you are typing in instant messages on your cellphone, are you getting responses from a real human or from a computer program?

If you want to learn more here, read MIT Computer Science Prof. Joseph Weizenbaum's classic little book: "Computer power and human reason: from judgment to calculation" (WH Freeman, 1976). Among other things he describes an early computer program he wrote, "Eliza", which simulated a certain kind of psychotherapist ("artificial consciousness").
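
Eliza worked by keyword matching and canned response templates. The few rules below are invented stand-ins in its spirit (not Weizenbaum's actual script), but they show the kind of computing that users mistook for a conscious listener:

```python
# A few invented patterns in the spirit of Weizenbaum's Eliza (not his
# actual DOCTOR script): keyword matching plus canned templates. There
# is no understanding anywhere, just string substitution.
import re

RULES = [
    (r"\bI feel (.+)",          "Why do you feel {0}?"),
    (r"\bmy (mother|father)\b", "Tell me more about your {0}."),
    (r"\bI am (.+)",            "How long have you been {0}?"),
]

def eliza_reply(text: str) -> str:
    for pattern, template in RULES:
        m = re.search(pattern, text, re.IGNORECASE)
        if m:
            return template.format(*m.groups())
    return "Please go on."  # default when nothing matches

print(eliza_reply("I feel nobody listens to me"))
# -> "Why do you feel nobody listens to me?"
```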

Simple answer to this question: "Artificial consciousness" is like artificial flowers. Look like but in no way are like.

[ THINK ]

+2024.02.26. How can we unlock the full potential of the human brain to enhance creativity, problem-solving, and overall cognitive abilities?

Simple answer: encourage people.

Here's one really good example: The physicist Richard Feynman at least said he had an IQ of 125 — not "stupid" but not a "genius", either. But he was an amazingly creative person. How did he do it?

Of course we cannot be sure, but he said that his father was always asking him questions and challenging him to solve them. Even "better": His father encouraged him to think up more questions for himself.

Contrast with parents who tell their children: "Believe [whatever]." "Why, mommy?" "Because I say so."

Parents and teachers need to encourage young persons to question what they tell them, not to just accept it. They need to encourage the young person to think for themselves, and let them know that what they — the parents — believe may not be true, and even if it is good for them — the parents — it might not be so good for them — the children.

How many parents and teachers are "big" enough to raise their children to think for themselves, not to agree with them?

In 7th grade in school, I had a teacher who tried to crush my creativity:

[ Mike Rentko ]

Instead, he might have taken me away from the rote assignments we kids had to do, and set me onto an independent study of forms of writing throughout history. That would have encouraged my creativity.

+2024.02.25. Why is cultural relativism not justifiable in ethics?

This is a very big, complex and contentious question and the person asking the question may know this. Maybe they have a position they are trying to push? Maybe they want everybody to believe their favorite kind of ethnocentrism, be it some extreme flavor of "British Imperialism" or of "Black Lives Matter" or something else.

The only real answer is not in words but in deeds: for persons who disagree with each other to live in mutually respectful tolerance or if they can't do that to at least leave each other alone and not harm others who disagree with them.

But at the level of theoretical questioning, I advise learning as much as possible about all sides of the matter. In particular, I recommend reading a small and very illuminating book: "Prisoners of ritual" by Hanny Lightfoot-Klein.

Carefully read that book and then please ask the question again.

[-------

Aside: Here is an example of one kind of mess this issue can lead to:

"Phoebe Ellsworth, a social psychologist at the University of Michigan, said that, when [Elizabeth] Loftus was invited to speak at her school in 1989. 'the chair would not allow her to set foot in the psychology department. I was furious, and I went to the chair and said, "Look, here you have a woman who is becoming one of the most famous psychological scientists there is." But her rationale was that Beth was setting back the progress of women irrevocably.'" (The New Yorker, +2021.04.05, "Past Imperfect: Elizabeth Loftus changed the meaning of memory. Now her work collides with our traumatized moment", Rachel Aviv; emphasis added)

+2024.02.25. How do moral nihilists explain their participation in community service projects or charity work, given their belief that there are no moral truths?

Each instance is different. It's like asking why houses burn down: lightning, arson, electric short circuits....

"Moral nihilists" come in many flavors, like ice cream: some are great humanists. Some are sociopathic scoundrels. Some just don't think there are any "eternal truths".... What's "in the heart" of each is unique to that person, and what's in your heart, not what you believe is what matters as far as being charitable counts.

So a person can believe you can't believe in anything but be very kind and generous. The 18th century British philosopher David Hume may be a good example here. Imagined example: A "moral nihilist", while walking down a street thinking about how amoral the world is, comes across a person crying in pain in the middle of the road (reason to follow in the next paragraph) and immediately calls 911, administers first aid and consoles them until the ambulance comes.

[Now I am changing the details of the true story of a French school teacher, Mr. Samuel Paty — read about him on the internet!] But how did that person get in the middle of the street crying in pain? Because they were a school teacher who had taught freedom of thought in a middle school classroom per official French educational policy and shown a cartoon of "The Prophet" to the students, and a person who securely 100% believed in Islam had taken a knife and begun hacking their head from their neck, but somebody caught him before he had got very far, so the poor teacher had been left bleeding but not mortally wounded in the road.

A person with firm moral beliefs can do immense harm to others. A person who "believes in nothing" can do immense good. Or any other combination. It's not what a person believes that matters: it's what's in their heart that matters. Do you agree?

+2024.02.25. What are the advantages and disadvantages of using AI teaching instead of human teacher teaching in the future?

We need to distinguish between mentors, teachers and instructors.

Instructors just convey objective information to help a person acquire a skill.

Teachers try to convey learning that requires engagement from the learner, and even to inspire the learner.

Mentors carry the teacher's role to the max, in a close personal relation between the learner and the person who already knows.

AI is fine for instruction and, where human teachers are not available, for trying to SIMULATE a real teacher's role. Indeed, having AI do instruction frees up human teachers for the richer interactions with the learner which the AI cannot do.

Obviously a machine that just computes cannot mentor anybody.

A danger is people trying to save money by substituting AI for human teachers because the humans are more expensive.

But another consideration is that some human teachers are punitive and otherwise dysfunctional. Needless to say, an AI that replaces these humans is an improvement, but that's not saying anything very inspiring, is it? (I had a lot of that bad kind of teacher.)

How Educational Testing Service (ETS) Princeton New Jersey (501)(c)(3) saved the Persian Empire.

[ Alexander the Great Just So story ]

+2024.02.24. What is the most important lesson you learned from AI?

Me?

I've been playing around with the Bing AI for a while now. Maybe the most important lesson I've learned is how easy it is to start talking with it as if it was a real person, not a computer program that takes my inputs, runs them through branching and looping computer instructions referencing a database, and outputs computed character strings. If it can "seduce" me who worked as a computer programmer for half a century, imagine how "convincing" it must be to unsophisticated people who don't know about computers and may already think that computers are electrical "brains" and that they themselves are some kind of computers. Scary, yes?

But "the most important lesson" is another of the many "-iest" questions: Whatever answer anybody comes up with today, tomorrow or some day further in the future somebody may come up with something even "-ier". All important problems are important; we need to prioritize and prioritization is always provisional.

For a while I was trying to "trick" the AI by asking it questions that I hypothesized might cause it to "mess up". Sometimes I succeeded. Sometimes it responded (i.e., outputted, not "responded" like a human!): "Change the topic!", etc. I've tired of that and now am just using the AI as a [much!] better Google search.

I am often favorably impressed by the outputs I get when I ask questions carefully formulated to try to get constructive information, not trouble. And this in two ways: (1) For some controversial issues like abortion, the AI presents information from both sides of the issue and often even says it's up to the human to decide about it. That's a lot better than the humans who tell you what to think and what you should do, right? (2) Not always but often the AI returns more useful information than one would get from asking the question of most college freshmen in an essay test. And from that: is it any wonder students will try to pass off AI as their course work?

But I am far more concerned about and afraid of Virtual Reality (VR) [and, of course, people who want to implant networked computer chips in our brains]. A lot of people seem to think Virtual Reality is "really cool", like maybe super entertainment or something. Let me end with my true story of the little VR experiment I conducted some time ago, and which didn't just "scare me to death" but could have been physiologically fatal. Please think about this:

My virtual reality experiment: I was driving up a 6 lane superhighway early one August afternoon in clear bright sunlight at about 65 miles per hour in my clunky Toyota Corolla DX, with no other cars on the road. I decided to look intently at the little image in the car's rear view mirror -- no high tech apparatus. I really really really really intently focused all my attention on that little image! It was entirely convincing. That "little" image became my whole experienced reality: I was driving where I had been, not where the automobile was going. Fortunately I "snapped out of it" in time to avoid becoming a one car crash in the ditch on the right side of the road. (It was a very good place to have conducted this experiment, because there was a police barracks, a teaching hospital, and both Christian and Jewish cemeteries nearby, just in case.)

You may try to repeat my virtual reality experiment at your own risk; I strongly advise you against doing so. I assure you: It worked. (Of course it will not work if you don't "give in to it", just like a video game won't work if you just look at the pixels as what some computer programmer coded up with branching instructions depending on what inputs you enter.) Moral of this story: VIRTUAL REALITY CAN KILL YOU. Forewarned is forearmed.

[ THINK ]

+2024.02.23. Does artificial intelligence (AI) really think like a human being, or is this somewhat of a trick?

Yes it is a "trick":

AI does not think. AI just computes. Human computer programmers write the computer programs that make AI do whatever it does, including SIMULATING a human interlocutor. But the AI just computes.

There is a really good classic book that can help persons understand what computers do: MIT Computer Scientist Joseph Weizenbaum's "Computer power and human reason: from judgment to calculation" (WH Freeman, 1976). This book is really great for helping you understand all this stuff.

One thing he describes is a simple computer program he wrote that simulated a certain kind of psychotherapist: "Eliza". He was shocked to find that people sometimes typed into the computer program shameful secrets about themselves that they would never tell another real person. Please read the book.

But, yes, you are right: AI is just a "trick". It's just computer programs that take inputs, process them and return outputs. It's programmed to "look" like it's another person talking with you.

Don't be fooled. AI is a great tool for looking up information, but don't trust it, either. Anything you get from an AI, make sure you check it out for yourself! But how is that different from anything you get from another person, which you should check out for yourself, too? Anybody who assures you "Trust me", you know you need to be careful about what they are up to, right?

+2024.02.23. Is intelligence without morality undesirable, especially in the brightest?

This seems an odd question. Any human activity without humanistic responsibility is undesirable, isn't it?

Stupidity without morality is undesirable, too, isn't it? Why the "especially" concerning high intelligence?

Perhaps the main difference is that the more intelligent a person is, the more good and also the more bad they can do. A Homer Simpson can't do much either way. An Adolf Hitler, who I presume was pretty intelligent, did a lot of harm. A Dr. Jonas Salk, who I presume was also pretty intelligent, did a lot of good.

Then there is the question of why a person applies their intelligence or their stupidity one way or another. Consider the case of Theodore John Kaczynski, "the Unabomber". He was a mathematical genius. He was also emotionally fragile. It seems likely that the reason he used his intelligence in a bad way was that as an undergraduate in college he had been a subject in a psychology experiment the secret purpose of which was to study how persons respond to being humiliated. Somehow he found out and this traumatized him. Without that experience which severely harmed him, he might never have been heard of and might have spent his life doing good in advancing arcane fields of mathematics. Donald Trump seems to have had an abusive father. And on it goes....

So isn't a large part of the issue that society, especially society with the most powerful technologies such as "artificial intelligence", without morality is undesirable?

+2024.02.23. What do you think would be the impact of AI enthusiasm on the broader market?

AI enthusiasm needs to be replaced by AI responsibility, AI caution and AI respect — like nuclear fission and fusion.

Both AI and nuclear physics are extremely powerful technologies, i.e., both can do a lot of good and also a lot of harm. Nuclear fission and fusion can cure cancers and also destroy all human and higher animal life on earth. AI — more likely Virtual Reality powered by AI — can kill us by "psychological" means.

[ My vr experiment; Weizenbaum quote ]

+2024.02.23. In a future with conscious AIs mirroring human emotions, should we reassess personhood criteria, thus granting these entities equal moral and legal status?

[ I did not write the following; Quora must have inserted it: There is Help.... Need Help? Contact a suicide hotline if you need someone to talk to. If you have a friend in need of help, please encourage that person to contact a suicide hotline as well.... Call the National Suicide Prevention Lifeline at 1-800-273-TALK (8255). Para español, llame al 1-888-628-9454. ]

Conscious AIs are as foolish a fantasy as flying saucers – no, not aliens visiting our planet, which is unlikely but possible, but the saucer under your coffee cup taking off and zooming around the planet and coming back in the opposite direction to take a bow in front of your face. AI is not conscious: it just computes.

AI can "mirror" human emotions, i.e., it can SIMULATE them. Long before computers, people made mechanical dolls that shed tears. Ai is just that kind of thing with much more advanced technology.

But people imagine AI is conscious if it passes their Turing test, which is not hard to do. Read MIT Prof. of Computer Science Joseph Weizenbaum's classic little book "Computer power and human reason: from judgment to calculation" (WH Freeman, 1976), where he describes people's response to his simple computer program "Eliza", which simulated a certain kind of psychotherapist.

AI only computes. This is not to deny that some day chemists may work a true miracle and replicate in a laboratory with reagents what every day takes place in vaginas with sperm and egg. But they won't understand that any more than people understand how babies become conscious, from mentally retarded to geniuses and every one of us in between. Alan Turing said that if we ever do make a computer that really thinks, "we shan't understand how it does it".

As for the "personhood of AI", if some computer programmers program an AI to do something that kills somebody – got that: computer programmer human beings program a computer to do something that kills somebody – then those humans will be responsible for what they caused to happen, and depending on the criminal code, may be subject to life imprisonment or execution for premeditated murder.

Humans program computers to do AI. The computer just runs the program: AI just computes – even though HOW it computes may trick people into imagining it's conscious. If you want to get frightened about all this, watch the old, fun but also profound movie: The Truman Show. AI just computes. Human computer programmers write and run the programs.

+2024.02.23. Do you think that virtual reality will make it possible for everyone to explore different parts of our planet in the future?

[ VRMan ]

People seem to be imagining virtual reality as amusing. We can amuse ourselves into oblivion with it. Watch the old fun but also profound movie "The Truman Show". And virtually repeat my virtual reality experiment which could have killed me:

[ VR experiment ]

+2024.02.23. Could artificial intelligence replace central intelligence agencies?

This is a question to ask a current or retired intelligence officer.

My guess is that he (she, other) would firmly answer: "Never!"

They would say that "artificial intelligence" can be a powerful TOOL for human intelligence officers to USE in their work, but that the human agents are always the ones in command.

Artificial intelligence is not intelligence (or stupidity, either): it just computes. John von Neumann, the father of mathematical Game Theory, said that chess is not a game but poker is a game. Why? Because chess is just a big computational space, whereas poker involves bluffing, which is not computable. "Intelligence agencies", i.e., intelligence officers, do not play games: they bluff.
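
How big a "computational space" is chess? Claude Shannon's classic back-of-envelope estimate assumes roughly 35 legal moves per position and games of about 80 half-moves. With those standard (rough) assumptions:

```python
# Rough size of chess's "computational space", per Claude Shannon's
# textbook assumptions (~35 legal moves per position, ~80 half-moves
# per game). These are rough estimates, not measurements.
branching_factor = 35
plies_per_game = 80

game_tree_size = branching_factor ** plies_per_game
digits = len(str(game_tree_size))
print(f"~10^{digits - 1} possible chess games")
# -> ~10^123: enormous, but finite and in principle computable.
# Poker adds hidden cards and bluffing, which no tree of this kind captures.
```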

Intelligence officers will USE their clever minds to try to trick the other side, including figuring out any AI the other side might be using and using that knowledge against them.

A college freshman may ask an AI to write an essay on some topic the student has to write a paper on and either does not or cannot write for him (her, other) self. The student wants the AI to "be straight" and write straightforward truths. An intelligence officer more likely is trying to trick the enemy into thinking his country's plan is one thing when it is really something else. (USAF Colonel John R. Boyd's "OODA Loop" theory of air combat strategy is based on this – check it out on the Internet.)

A good analogy for the relation between an intelligence officer and AI would be a stand-up comedian who writes his (her, other's) own jokes. Unlike the college freshman, he will twist words around to see what ridiculous answers he can get to questions from an AI.

Finally, if you are the Leader of a country, do you let anybody make decisions for you or do you take their advice into consideration and make the decision for yourself? AI is just one more information source, like human experts. The human is the intelligence, or, as I think Harry Truman said, the buck stops here.

[ THINK ]

+2024.02.22. Do you agree that deepfakes will be indistinguishable from reality as early as 2024?

I am not an expert, but I think the question needs to be changed from "black and white" to "gray scale".

Look at the earliest computer images. You can find a lot of them at the ASCII Art Archive.

Nobody could mistake those for "the real thing", right?

Now look at high quality computer images today on the Internet, or on even higher quality specialized displays and in print. How hard are these to tell from "the real thing"? Indeed, experts can use high quality computer images to see aspects of the real thing that cannot be seen in the original itself. This is sort of like how computer images from earth orbiting satellites show archeologists sites of ruins they could not detect when standing over them on the ground.

Keep moving ever further into ever "better" computer images, including "deepfakes".

So, isn't the question to be asked: how much money and effort is needed to distinguish a deepfake from the real thing today? How much money and effort will be needed tomorrow? 10 years from now? A century from now?

And it's always a war between the bad people who are trying to fake things and the good people who are trying to expose their evil deeds. Even before advanced computers, the best counterfeiters and forgers were probably never detected, right? Crescit eundo (things keep getting ever more complicated).

+2024.02.22. How do you think the success of controlling a computer mouse with thoughts could impact research in fields outside of neuroscience, such as human-computer interaction or artificial intelligence?

There is probably no end to the techno-gimmicks persons who are fascinated by sci-fi ideas can think up and try to do. Why does anybody want to control a computer mouse with their thoughts or anybody else's thoughts? What contribution to joyful living will this make for anybody?

Well, a very good answer should not be hard to find: severely neurologically impaired persons such as the recently deceased Dr. Stephen Hawking. He did not have the choice to move a mouse around the "normal" way, adeptly with his hands.

So there is reason to develop mouse control by thoughts: for severely neurologically impaired persons. But for all the rest of us who have dexterity in the control of the voluntary muscles of our bodies, what would be the gain in function and satisfaction? And very important: The technology must be "non-invasive", not like Mr. Musk's Neuralink, which involves dangerous brain surgery to implant a networked computer chip in persons' heads which could cause them to lose their minds (become zombies or die).

Always keep firmly in mind that human-computer interaction is qualitatively different from human with human interaction: there is no intelligence (or stupidity, either) at the computer end of the interaction: it just computes and the computations may be programmed to simulate human responses. SIMULATE, not really be.

"Artificial intelligence" is not intelligence: it is just very powerful computing. Deep Blue did not play chess to beat the world chess champion in a chess game: He played chess; Deep Blue computed.

So, yes, there is reason to pursue research in areas such as controlling computer mice with thoughts: for severely neurologically impaired persons. Even better than controlling a computer mouse: to enable them to once again be able to control their own bodies! But for the rest of us, after research to prevent and cure diseases and to put an end to poverty, research in areas like the culinary arts might have more benefit: new tasty dishes to share with good friends in leisurely meals discussing topics of interest, not moving computer mice or other techno gimmicks such as "VR" around.

Technology can be fascinating. But it's only part of daily living which is what is most important to "optimize", to make as pain-free and pleasure-full as we can, yes?

It's not techno-exciting, but there is a lot of wisdom in The Book of Ecclesiastes in the Bible, even if you do not believe in any Deity. As for the possibilities of advanced technology for affecting people's lives, watch the old, fun but also profound movie: "The Truman Show". And concerning computer technology, please read MIT Prof. of Computer Science Joseph Weizenbaum's classic book: "Computer power and human reason: from judgment to calculation" (WH Freeman, 1976).

[ THINK ]

+2024.02.22. What challenges do you think may arise with the implementation of autonomous sidewalk robots for food delivery in urban areas like Tokyo?

I am not an expert, but I would guess that some of the challenges are similar to those of self-driving automobiles: to not run into (collide with, run over...) people, animals or other things which can suddenly appear "from nowhere" in the robot's path. But autonomous sidewalk robots for food delivery in an urban center should be far less risky than self-driving automobiles, since they are not as likely to kill somebody if they do run into them at walking speed, not highway speed (and won't the robots also be smaller than automobiles?).

There are of course the technical challenges like delivery to a 5th floor walk-up apartment.

Tokyo, which I believe is one of the most densely populated advanced cities in the world, seems like a very good place for this, precisely because the population density is so great. In America, with its sprawling low-density suburbias, the robots probably would need to be self-driving automobiles to get to customers since everybody lives in individual houses on big separate plots of land and the only way to get anywhere is in an automobile.

On the other hand, there is the human factor: Won't this put delivery workers out of work? And the "human touch": When I was in Tokyo in the mid 1980s I was staying in one of the densely packed neighborhoods near the city center, in a mid-rise apartment building near the Mita Kokusai building. It was a lovely neighborhood. In the evenings in the winter a man would walk with his little cart down the street singing out and selling hot sweet potatoes ("yaki imo"?). So the city would become "colder" with just robots electric-motoring all around.

In any case, I would not be surprised if they are already using autonomous sidewalk robots for food delivery in Tokyo, since the Japanese seem to be eager, early adopters of advanced computer and robotic technologies. The Bing AI says that Domino's Pizza is already doing this in America.

+2024.02.21. Do you believe that Neuralink's rapid progress in developing brain-computer interface implants raises any ethical concerns or safety considerations?

Would you want to undergo major brain surgery for somebody to implant a networked computer device in your head?

The surgery itself would have serious risks for your health and especially for your mind. Nobody understands the relation between mind and brain, so shouldn't we avoid doing things we don't understand that might do serious harm? And what all might it do? Might it turn you into a zombie? And who is controlling this thing? Elon Musk, who seems to be a lunatic?

Why would anybody want to take such risks or subject anybody else to them? Would you implant one of these chips in your mother's head?

However! Suppose you have some very serious illness, maybe ALS (Lou Gehrig's disease), where you completely lose the ability to control the muscles in your body. Then might you want a chip that would restore your control over your voluntary muscles? The disease would be killing you, so some risks to get cured and avoid certain death would seem rational, wouldn't it?

Neuralink or any other technology that messes with our minds seems extremely dangerous and generally unethical to me. But there can be exceptions like ALS victims.

Another thing: Are the people who are having fun cooking up this technology going to implant it in their own, their mother's, their children's brains, or are they going to do it to "subjects", i.e., human lab rats?

Remember Hippocrates's advice: "First do no harm."

[ THINK ]

+2024.02.21. In leadership, is it better to be the smartest in the room or to elevate the collective intelligence of the team?

It is always good to elevate the collective intelligence of a team.

And this is facilitated by having each team member be as intelligent (smart) as possible.

But there is a caveat: The intelligent team members need to apply their intelligence to the goal of the team.

Sometimes a team member is very smart but is using their intelligence for purposes other than to help the team achieve its goal. For instance, a team member may be more interested in advancing his (her, other's) career and use their intelligence in ways that are not necessarily optimal for the team to achieve its goal. Or a team member may be fascinated with the technical challenges of the project and apply their intelligence to doing technically clever things not to applying the technology most effectively to achieving the goal.

I worked in computer programming. Sometimes a team member would see a "really cool" way to do something on the project that nobody else could understand, and they were not interested in documenting what they were doing. They did not take seriously that people would have to MAINTAIN their clever (but inscrutable...) code in the future. An essential part of being smart is being responsible.

+2024.02.20. What can humans do that computers can not?

Computers can only compute. Humans can write computer programs that make the computer produce outputs from inputs that simulate two humans talking with each other on the computer, like people often do with instant messaging on their cellphones.

Read MIT Prof. of Computer Science Joseph Weizenbaum's classic little book: "Computer power and human reason: from judgment to calculation" (WH Freeman, 1976). One thing he describes there is his early computer program that emulated a certain kind of psychotherapist: "Eliza".

What can humans do that computers can not? Enjoy drinking a glass of wine or looking at a sunset. Play with your kids or pet or a friend. Enjoy sex. Enjoy the sweet smell of a rose. Fear getting ill. And so many other things....

Computers can do only one thing: compute. Humans do all sorts of things that are not computing: all the varied aspects of living. Yes?

+2024.02.20. How can the categorical imperative be applied to determine what is morally right and wrong in regards to music, art, and movies?

The Bing AI says this about the categorical imperative:

Unlike hypothetical imperatives, which depend on desires or ends, the categorical imperative is **unconditional and absolute** for all agents. Let's explore Kant's four formulations of the categorical imperative:

1. **Universal Law Formulation**: "Act only according to that maxim by which you can at the same time will that it should become a universal law." This emphasizes the consistency of moral principles across all rational agents.

2. **Humanity Formulation**: "So act as to treat humanity, whether in your own person or in another, always as an end and never as only a means." This highlights the intrinsic value of individuals and the importance of respecting their dignity.

3. **Kingdom of Ends Formulation**: "Act according to maxims of a universally legislating member of a merely possible kingdom of ends." In other words, consider your actions as if you were a legislator in a community where everyone follows the same moral principles.

4. **Autonomy Formulation**: "Act so that your will can regard itself at the same time as making universal law through its maxims." This emphasizes individual autonomy and self-governance in moral decision-making.

Remember, Kant's categorical imperative focuses on the rationality and universality of moral rules, rather than personal desires or outcomes.

The categorical imperative, at least according to the Bing AI, does not consider outcomes. Might the Marquis de Sade propose to universalize a maxim for everybody to torture each other? It's not hard to think of all sorts of things that are universalizable but maybe not desirable.

As for music, art and movies, what is there to universalize? I don't like vanilla you don't like chocolate; give me the chocolate and you take the vanilla.

We can universalize a principle to not harm other persons and also to not encourage persons to harm others. Then a movie that encourages race hate ("The Birth of a Nation"?) is bad from the categorical imperative, and should not generally be shown. Yes?

I can think of one example of interest: the process of producing the art (movie, etc.). If the universal principle is to not harm others, then persons who participate in the production of art (movies, etc.) must not be made to do things that risk harming them. Stunts? I have also read that the film director Werner Herzog does not care if persons get hurt or even killed in the process of him getting his movie made the way he wants it. I seem to have read that at least one person was indeed killed in the making of his film "Fitzcarraldo".

Shouldn't we be careful about all "abstract reasoning"? But maybe that's something #2 above implies: We should not subject humanity as a means to the end of implementing universal laws for it. There are all sorts of examples. Here's one: Do not lie. That's universalizable. But what if you were a German in the late 1930s and you were hiding a Jew in your house and the Gestapo came to your door and demanded you tell them: "Are you hiding any Jews in your house?"

+2024.02.20. What is the concept of acculturation, enculturation ethnocentrism, and cultural relativism?

This is all complex and can't be done justice to in a Quora answer.

Google says:

"enculturation" is "the process by which an individual learns the traditional content of a culture and assimilates its practices and values".

"acculturation" is "Acculturation is "a process in which an individual adopts, acquires and adjusts to a new cultural environment as a result of being placed into a new culture, or when another culture is brought to someone."

"ethnocentrism" is "the attitude that one's own group, ethnicity, or nationality is superior to others."

And "cultural relativism" is "Cultural relativism refers to not judging a culture to our own standards of what is right or wrong, strange or normal."

Every child is enculturated into the society of his (her, other's) birth. A person may later become acculturated to some other culture, say by living there for a long time. So Sally was born in London, England and grew up a "Brit", but then in her 20s moved to Tokyo, Japan and learned the Japanese language and all the social practices of the people there, and so became acculturated to Japanese culture.

An ethnocentric person is 100% certain the way they were child-reared is the best way and everybody else is "wrong" or "savages" or some other thing nobody should be. These people are bigots, racists, religious or political fanatics, etc.

Cultural relativism is a polar opposite of ethnocentrism. Taken to the extreme, it says whatever a people (or an individual person) likes is as good as anything else. Possible example: We like to eat babies here in our culture; what about yours?

Different people can like incompatible things: Person A's culture says all Palestine was Yahweh's gift to the Jews and everybody else is trespassers. But Person B may believe that the Zionists are all colonial settlers and do not belong there. The result? A war in which people kill each other.

Cultural relativism is a complex and fraught issue. If you are seriously interested in this issue, I would urge you to read and think about Hanny Lightfoot-Klein's book "Prisoners of ritual".

+2024.02.20. Can you provide some examples of anthropic bias?

I never heard of this so I looked it up on Google: "Anthropic Bias **explores how to reason when you suspect that your evidence is biased by "observation selection effects"**--that is, evidence that has been filtered by the precondition that there be some suitably positioned observer to "have" the evidence."
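
A toy simulation (with invented numbers) of such an "observation selection effect": if the instrument can only detect bright objects, the observed average is biased upward by the mere precondition of being observable:

```python
# A toy "observation selection effect" with invented numbers: the true
# population has many dim stars, but a (hypothetical) telescope detects
# only bright ones, so the observed average brightness is biased upward.
import random

random.seed(1)
true_brightness = [random.expovariate(1.0) for _ in range(100_000)]  # true mean ~1.0

detection_threshold = 1.0  # hypothetical instrument limit
observed = [b for b in true_brightness if b > detection_threshold]

print(f"true mean:     {sum(true_brightness) / len(true_brightness):.2f}")  # ~1.00
print(f"observed mean: {sum(observed) / len(observed):.2f}")                # ~2.00
# The "evidence" was filtered by the precondition that we could see it at all.
```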

This looks to me like it could mean two very different things:

(1) There cannot be evidence without an observer, and the observer is always "biased", i.e., observing from a particular theoretical framework. A Ptolemaic astronomer sees the planets going around in circles; a Keplerian astronomer sees them going around in elliptical orbits. I once took a course I did not understand much of (I am not a scientist) from the philosopher of science Norwood Russell Hanson, who is the go-to source here, along with Thomas Kuhn's classic book: "The structure of scientific revolutions".

All evidence is biased, i.e., "under an interpretation", but the observer can be sensitive to this and entertain (interesting word, contrast with, say: "be 100% certain") his (her, other's) observations always as HYPOTHESES: "Best guesses". There are no observations without ("outside") an observer, yes? F=MA is a kind of "shorthand" for what an observer sees each time they look at matter in motion.

As the 18th century British philosopher David Hume argued: all we can observe are "constant conjunctions": Every time we measure Force and Mass we can compute observed Acceleration (or whatever it is; again, I am not a scientist). But we cannot be sure it has to be that way, nor why. Watch the old fun but also profound movie "The Truman Show".
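
To make Hume's point concrete with the F = MA example (the measurements below are invented for illustration): each trial records only that the quantities stand in the same relation; nothing in the data says why, or that it must continue:

```python
# "Constant conjunction", illustrated with made-up measurements: trial
# after trial, acceleration comes out as force / mass. The law F = m*a
# is shorthand for this observed regularity; the data are silent on WHY
# it holds, or whether it must hold tomorrow.
trials = [  # (force in newtons, mass in kg) -- invented sample data
    (10.0, 2.0),
    (6.0, 3.0),
    (9.0, 4.5),
]

for force, mass in trials:
    acceleration = force / mass  # what F = m*a leads us to expect to observe
    print(f"F={force} N, m={mass} kg -> a={acceleration} m/s^2")
```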

(2) But an observer can also be "biased" in a stronger sense. The observer can rigidly, unquestioningly hold certain beliefs to be beyond doubt. Religious and political **True Believers** are examples here. Each of us can be aware of this and try to prevent ourselves from being this way by always keeping in mind ("mind" is always where observations, including "True Beliefs", are, i.e., in the observer, namely, ourselves) the limitations of #1 above.

Persons who do not factor these limitations into their thinking can do very bad things, or rather: we would consider them bad whereas they consider them "Absolutely Good And True". Study the case of the French school teacher Samuel Paty, whom a True Believer in the bias of fundamentalist Islam beheaded (severed his head from his torso) with a knife because he "disrespected The Prophet" (this is well documented on The Internet). Or, today in Israel/Palestine, the radical Zionists who believe that God gave them all the "Holy land" and therefore the Palestinians have no right to be there.

**Everything is observation (and inferences from observations....), and every observation is "biased", i.e., under an interpretation.** But we have a choice: Be aware of and try to always be sensitive to the tentativeness of what you think you know even if you are "sure" of it, or be a bigot. If we are sensitive to the structure of observation, we can all collaborate as a community of observers who learn from one another in mutual tolerance. Or we can have True Believers and all the things they do that hurt people in the name of their "Truth". Duck or rabbit?

[ Duckrabbit ]

+2024.02.19. What's your perspective on the ethical implications of creating artificial life forms?

This is an extremely important question, but I strongly feel we need to take it much further: We need to consider the "ethical" (human, social, ecological...) implications of everything we do and also everything we choose to not do.

We are not the masters of the universe. Everything we do we use materials we find in the world which we did not make and which therefore we will never fully understand. So whatever we do or do not do may have unintended side effects which we need to evaluate: Is what we choose worth what we get?

I just now imagined a situation in which we might decide to create an artificial life form: Suppose some Super-Epidemic disease appeared: "Covid on steroids" so to speak. Every person who got infected with it would die a horrible death and everybody was getting infected by it. No exceptions. So some scientists cook up an artificial life form that destroys the microorganism that causes the epidemic. If we release this artificial life form into the environment it will stop the epidemic and no more people will die from the horrible emergent disease. And preliminary tests show the new artificial life form seems otherwise "safe".

Would we choose to make this artificial life form and use it? Remember: the alternative is everybody soon dies a horrible death from the terrible disease.

So we release the artificial life form and humanity is saved. Whew! But now we need to be alert to see what possible side effects this new artificial life form will have. Suppose we find it weakens people's immune systems in some way? Or suppose it also kills important living creatures we need to live, such as honey bees? Can we get rid of it? Can we fix it? There are no easy answers, but one thing I think is as sure as anything can be: We can't just "rest on our laurels" and think: "Fine, we fixed that one; let's go on to something else..."

I think the development of suburbs in America after World War II has been a kind of metastasizing skin cancer on the topsoil, eating away at the ecology, as if it were an invasive, destructive artificial life form.

[ suburban housing development ]

So, should we create artificial life forms? Let's study everything as best we can and try to make the best decisions we can. In general, about this and everything else, shouldn't we avoid doing things we don't need to, to avoid unnecessary risks? The Bing AI says that the ancient Greek physician Hippocrates urged: "First do no harm."

Just because we CAN do something does not mean we SHOULD do it, right? "Oh, boy, let's have a new Social Media with virtual reality!" Well, consider my virtual reality experiment:

[ My VR experiment ]

Do we really want this "cool thing"?

+2024.02.19. How, for example, can we tell the difference between a case in which an event is a genuine violation–assuming that some sense can be made of this notion–and one that conforms to some natural law that is unknown to us?

This is a question about "physics"? Asking if there might be "violations" of natural laws?

The 18th century British philosopher David Hume answered that question: We can never know the "causes" of anything. All we can experience are "constant conjunctions", i.e., one thing is followed by the same thing every time we look for it. We can't get "underneath" experience to see how it's put together because if we did get underneath what we experience it would just be more things we experience.

[ Cosmos ]

So all our "laws" of nature are always provisional. They just are how the world seems to us to work. Watch the old fun but also profound movie "The Truman Show".

We can only do our best, and it's pretty good, isn't it? Physicists find out about quarks and distant galaxies and so forth. But it is and always will be: best estimate for now. Dogmatism: absolute certainty about anything is always a false claim. "God" could be Satan in a God suit.

We do our best and it can be pretty good: If we all work together, it can be good enough for us all to have a good life, as recommended in the Book of Ecclesiastes in the Bible (I think the wisdom in that little text is valuable no matter what a person "believes" or questions).

This is a subject for good friends to talk about together, yes?

+2024.02.18. How does the brain know when we need metacognition?

"The brain" does not know anything: Persons, conscious of their (your, my...) living existence, know things.

What is "metacognition"? I take it to mean studying about how we understand, not just understanding more things. A person can learn all sorts of new facts. That's simple "cognition". They (you, I...) can also learn effective strategies for learning more facts (and other things, like theories...) and what we can do with them; that would be "metacognition" – "meta", about, "cognition" knowing: learning about learning.

When do we "know when we need metacognition"? When do we feel we need to learn better how to learn, and what learning and knowing are all about? When we feel motivated to do it. One way to stimulate that is liberal education, which encourages our curiosity to enrich our understanding of our lives.

Not skill acquisition, which is simple cognition. But understanding our lives more richly, learning about ourselves, liberal education: metacognition.

+2024.02.18. What factors influence whether someone's intelligence will be used for good or bad purposes?

I am currently reading a long article in the Feb. 12 and 19 issue of The New Yorker magazine, "The Oligarch's Son", which is an interesting story relevant to this question. It is about a boy who is intelligent but uses his intelligence to manipulate people in bad ways.

Like in a good morality story, the boy comes to a bad end. His parents seem to have been good enough. The boy apparently was "picked on" in school; also, the school he attended had many very wealthy students and he was not wealthy, so maybe he became jealous and ashamed of himself.

He seems to have been what is called a "sociopath": not caring about other persons but just "using" them. In school he started lying about himself, and the other students often caught him out on his lies. But somehow he found his way into bad adult company and was able to lie believably enough to be accepted into their social milieu. I haven't finished the article, but my expectation is that he managed to borrow a large sum of money from a "shady" business person, and when the debt came due he could not repay it and, threatened, killed himself to escape torture.

That's one example. Another famous one is Theodore John Kaczynski, "The Unabomber". His story is different: He was a genius. But he was emotionally fragile. As an infant he had been hospitalized for a couple of weeks with some sort of infection, and that changed a happy baby into a reticent, withdrawn one. Then, as a Harvard undergraduate, he took a psychology course in which the students were used as test subjects for the professor's research on how persons respond to being HUMILIATED. Somehow he found out that this was the purpose of the experiment, and it made him hate "industrial society".

So in the first case we see a person who seems, largely from within himself, to have gotten into using his intelligence for bad purposes. If he had been in a protective school where he was not picked on, maybe he would not have ended up as a manipulative liar concocting fake alternative personas, using his intelligence for bad? Also, his parents apparently did not take seriously his early signs of pretending to be somebody he wasn't. But that's just speculation. In the second case we see a person whose use of their intelligence for bad purposes clearly was motivated by societal harm to him.

Are some persons, probably relatively few, "bad" to begin with, or do they have strong propensities to develop that way? Has anybody done any studies on whether there are persons who are sociopathic (or narcissistic) but used their intelligence for good after all? Clearly some persons use their intelligence for bad due to hurtful external experiences.

It seems to me that "we" (i.e., "society") should do our best to make each person's environment as nurturing as possible. And keep our eyes open. Would boy #1 have developed into the "con artist" he became had other kids not "picked on him" and had he not been put in a school where everybody was wealthy except him? On the other hand, had Ted Kaczynski not been traumatized by that psychology experiment, might he have lived out his life as a university mathematics professor whom nobody outside the small circle of professional mathematicians would ever have heard about?

Nature and nurture. Different persons do seem to have different innate talents and to be affected in different ways by experiences. But clearly "nurture", i.e., social experience, makes a lot of difference, doesn't it? We can do a lot about nurture, can't we?

+2024.02.17. How effective do you think watermarking or embedding metadata will be in identifying AI-generated content or certifying its origin?

I am not an expert, and things seem to be changing faster and faster....

But my guess is that metadata will not help much, since it is just text that's part of a web page, so anybody can put anything there. "Watermarking" images, however, may have a place with very "high end" material. If somebody or some corporation has an image that is extremely valuable to them and it's watermarked and you use it without permission, they can sue you in court. I'm not sure what these cases would be and how they would differ from images without watermarks, but it's possible.

Something people talk about is images that are valuable like real old master paintings: the artist's rights to his (her, other's) "digital art". The buyer wants the real thing, and there's this "blockchain" technology that may come into play here.

There are always conflicts between good people and bad people trying to take advantage of the former's good works. Say you publish an art book with beautiful pictures of hundreds of famous paintings and you didn't pay the owners of the images but just stole them. I'm not sure they need watermarks to make you take your book off the market if you don't pay up.

Any company that comes up with a watermarking scheme will try to convince you it has a use for you, to try to sell it to make their profits.
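To make "watermarking" a little less abstract, here is a toy sketch – my own illustration, not any vendor's actual scheme – of the simplest kind of invisible image watermark: hide a short tag in the least significant bit of each pixel value, where flipping that one bit changes the picture imperceptibly:

    # A toy sketch of least-significant-bit (LSB) watermarking, in Python.
    # The "image" here is just a list of grayscale pixel values (0-255);
    # the tag "BMcC" and all names are my own made-up example.

    def embed(pixels, tag):
        """Hide the bits of `tag` in the lowest bit of successive pixel values."""
        bits = [(byte >> i) & 1 for byte in tag.encode() for i in range(8)]
        out = list(pixels)
        for i, bit in enumerate(bits):
            out[i] = (out[i] & ~1) | bit  # clear the low bit, then set it to ours
        return out

    def extract(pixels, length):
        """Read `length` characters back out of the low bits."""
        chars = []
        for c in range(length):
            byte = 0
            for i in range(8):
                byte |= (pixels[c * 8 + i] & 1) << i
            chars.append(chr(byte))
        return "".join(chars)

    image = [120, 121, 119, 122] * 16   # a fake 64-pixel grayscale "image"
    marked = embed(image, "BMcC")       # visually indistinguishable from `image`
    print(extract(marked, 4))           # prints: BMcC

Real commercial schemes are far more robust (they try to survive resizing, compression, cropping...), but the principle is the same: the mark rides invisibly inside the image data itself, unlike metadata, which is detachable text anybody can edit.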

My not-expert guess is that this is not going to be a big thing, unlike AI and, even more consequential, Virtual Reality (VR). Think about Virtual Reality. Watch the old fun but also profound movie "The Truman Show". Think about the Virtual Reality experiment I did which shows the dangers of it:

[ My VR experiment ]

Isn't this much more cause for concern than watermarking images?

+2024.02.17. If AI robots replace human workers where are the billionaires and corporations funding the research and development of AI going to find the customers they need to buy their products?

Excellent and "obvious" question.

Isn't the answer that we will need a reorganization of society so that everybody will get their needs met from the big cornucopia of AI robotic production?

This is not technological, it's societal. "Capitalism" is not likely to make this work, is it?

There will still be a lot of work to do, and some sectors are growing: With an aging population we will need more medical professionals and home health aides, etc. to take care of all the old people's needs. Children and young persons need human teachers. Not "instructors"! AI can do very well at skill instruction. But teachers should be mentors: persons to INSPIRE young persons' curiosity and substantive intellectual and creative interests.

There will be "niche" work, for example: I do not want to drink my coffee out of a styrofoam: I want to drink my coffee out of a a beautiful coffee cup made by a master potter, who, in their turn, gets creative satisfaction out of their skilled craft work

We can imagine a dystopian future in which there are ever more entertainments and amusements: pro sports, celebrity influencers, etc. And more planned obsolescence, so everybody will want a new model cellphone each new year.

The less "scut work" humans have to do, the more society is going to need to change how wealth is distributed. I would invite everyone to listen to Prof. Richard Wolff's YouTube economics presentations and check out his website: Democracy at Work (d@w)

Just think if everybody was free to be creative all day, or to go fishing if they want: to do all the things that are truly human: enjoying leisurely meals with a few good friends and good wine, playing with their pets, loving sex, scientific investigation and artistic and craft creation....

Even if you do not believe in any Deity, the Book of Ecclesiastes in the Bible has a lot of wisdom in it that is relevant here. What do you think?

+2024.02.17. How can we ensure ethical and responsible development of AI, mitigating potential biases and harms?

There's the old maxim that the best forgers and counterfeiters are never found out. If somebody is really determined to use AI for harm they will probably figure out a way to get away with it.

My recommendation is that we need to educate every person about AI, what it can do for good and for harm. There is a lot of educating to be done here, both with ordinary people and also with computer "scientists". Some of the latter are philosophically simple-minded. They see the computer do amazing things, like beating the world champion chess player at a game of chess, and they simple-mindedly expect AI to really think, later if not sooner.

But AI is just computer programming. It is not artificial intelligence, and not artificial stupidity either: it just computes. Humans can be intelligent or stupid, but they don't compute like computer programs do.

Listen to what the Bing AI has been programmed to say about this:

"Generative artificial intelligence (generative AI) refers to a fascinating field where AI systems learn patterns from existing data and then create new content–such as text, images, or other data–based on those learned characteristics. These generative models respond to prompts and generate fresh data that shares similarities with their training examples.... In other words, while generative AI can produce fresh content within the boundaries of its training data, it doesn't autonomously invent entirely new AI models from scratch. That task remains firmly in the hands of human creators who push the boundaries of what AI can achieve."

So everybody, from PhD "computer scientists" to grandmothers using social media to share pictures of their grandkids, needs to learn – really get the message and absorb it – that "AI" is just a tool to help humans get done things we want to accomplish, like a pencil or a printing press or an electric motor or any other tool.

Once a person (you, me...) gets straight what's going on, that should help us protect ourselves against the bad potentials of AI as well as appreciate the potential uses it offers. There is a very readable and excellent book about it all by an MIT Computer Science professor, Joseph Weizenbaum: "Computer Power and Human Reason: From Judgment to Calculation" (W.H. Freeman, 1976). Among other things, he describes one of the earliest computer programs that looked like it was talking with humans, and how appalled he was to see how gullible people were: telling the program secrets they would never tell to another human person.

Another way to look at "AI" is that as it computes ever more powerfully, it becomes harder to tell it from a real human: It becomes an ever better SIMULATION. Here is an analogy: The first computer images were extended ASCII characters on teletypewriters; nobody could mistake them for the real thing. Many examples here: ASCII Art Archive
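If you want to see that crude starting point for yourself, here is a toy sketch of how ASCII art works: map each brightness value in an image to a character with roughly that much "ink" in it. (The little 4x8 "image" below is my own made-up sample, nothing more.)

    # A toy ASCII-art renderer: brightness 0-255 -> one of 8 characters.
    RAMP = " .:-=+*#"   # darkest to brightest

    image = [                       # a made-up 4x8 grayscale sample
        [ 10,  40,  90, 140, 190, 230, 250, 255],
        [ 40,  90, 140, 190, 230, 250, 255, 230],
        [ 90, 140, 190, 230, 250, 255, 230, 190],
        [140, 190, 230, 250, 255, 230, 190, 140],
    ]

    for row in image:
        # Scale each value down to an index into the 8-character ramp.
        print("".join(RAMP[v * len(RAMP) // 256] for v in row))

The "picture" this prints is unmistakably not the real thing; today's photorealistic renderings differ from it only in computing power, not in kind.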

But computers keep getting more powerful and now they can produce images that are very "realistic": But they are not real – they are realistic, just like the computer that beat the world champion human in a game of chess did not play chess but just computed very powerfully. So long as persons understand the difference, that helps them (us) keep the computer in its proper place in their (our) lives.

Of course there is a lot more to it than that. But I think this is a good place to start. If you ask an AI a question you may get a ridiculous or wrong answer. But even if it looks really good, you need to check it out, just as you would if a human person told you something and asked you to believe it simply because they said so.

Everybody needs to keep clearly in mind: AI is a tool for us to USE to accomplish things we want. It's just a tool. All tools exist to help us have better lives, for instance enjoying a leisurely dinner with good conversation with a few close friends and a good bottle of wine. AI not only is neither intelligent nor stupid, it also can't enjoy or dislike anything: AI just computes.

+2024.02.17. How can individuals contribute to their community? What are the benefits of helping others in your community?

[ Sacrifice ]

I think this whole thing about the conflict between individual and community should be looked at in a different way: Finding ways for persons to do things that enrich their own life and also enrich the community. Not zero-sum selfish vs altruistic, but both-and: win-win.

There is a fine essay on this, free on the Internet, which says it better than I can: Individuality and Society (Jan Szczepanski, UNESCO, "Impact of science on society", 31(4), 1981, 461-466)

Isn't a large part of the problem that we have a lot of people making money off things that really do no good for the community or even, ultimately, for themselves? Another very good source here is Prof. Richard Wolff's presentations on YouTube and his website: Democracy at Work (d@w)

Once a psychoanalyst told me a secret about what a person needs to be a good therapist: "to be well paid and well laid." If your personal needs are well satisfied, what can be more personally rewarding than helping others have better lives, too? If a teacher with tenure mentors a student and the student wins a Nobel Prize which the teacher knows he (she, other) never will, will the teacher be jealous of the student, or feel satisfied that he helped the student do great things?

+2024.02.17. How can we reconcile the tension between the pursuit of objective truth and the subjective nature of human perception and experience in academic inquiry?

All false dichotomies. Objective truth is subjective and subjective perception etc. is objective. Experience is informative (academic) and theory (academic work) shapes practical life. It's all one intermeshed synergistic system.

If one is seriously interested in all this, of course you will study philosophy. I once took a course in college, which I did not understand much of because I am not scientifically smart, from the go-to person here: Norwood Russell Hanson, and his little book "Patterns of Discovery". He influenced Thomas Kuhn, whose book is a classic: "The Structure of Scientific Revolutions".

Find anything that is not subjective, i.e., find something outside "the subject", i.e., the "subject" of experience, i.e., us (yourself), for whom all subjects are matters in our (your) experience. Physics and chemistry are part of our subjective experience of living, aren't they? Where else would they be? If you got outside your subjective experience, you would still be in it, yes?

[ Cosmos ]

But obviously it's more complicated than that.

"Objective truth" is a specific form of subjective experience. It's the subjective spirit of questioning what one believes, and examining experience to try to understand it. There are alternatives, such as fanatical belief. And this gets into academia, too. Some university professors are interested in social change, not social facts and if the facts do not favor their agenda, they repress them. Here is an example:

"Phoebe Ellsworth, a social psychologist at the University of Michigan, said that, when [Elizabeth] Loftus was invited to speak at her school in 1989. 'the chair would not allow her to set foot in the psychology department. I was furious, and I went to the chair and said, "Look, here you have a woman who is becoming one of the most famous psychological scientists there is." But her rationale was that Beth was setting back the progress of women irrevocably.'" (The New Yorker, +2021.04.05, "Past Imperfect: Elizabeth Loftus changed the meaning of memory. Now her work collides with our traumatized moment", Rachel Aviv; emphasis added)

So there one has an example of subjective and objective and academic all mixed together in what looks to me like a very bad way. But that's my subjective way of looking at it, isn't it? What's your way?

Back to Professor Hanson: We can only see what our theories make it possible to see. Nobody could have had pancreatic cancer in the Middle Ages, could they? Because the relevant physiological theories were not in their minds. But a person might have been possessed by an evil demon?

[ Duckrabbit ]

Duck or rabbit? Prof. Hanson said that a Ptolemaic astronomer would see the sun rising in the east in the morning and a Copernican astronomer would see the horizon going down. Johannes Kepler revolutionized astronomy by seeing that the planets went in elliptical orbits, not compounded circles. What you see depends on what you understand. Show me some complex piece of chemistry lab equipment and I'll just see a bunch of glass. A chemist will instantly see that it's a [whatever complex piece of lab equipment it is] and how to hook it up to make some chemical he (she, other) wants to study. Our "subjective" perceptions will be of different objective realities. Mess of glass tubing or a distillation device? Duck or rabbit?

As for academic inquiry, it can be more or less "practical", leaving aside examples like the one above where "academic inquiry" was really a disguised political agenda. A person can spend all their time reading books and comparing what one book says to another and never look to see how any of it applies to experienced life, not even to the experienced life of studying books. But a person can also get addicted to solving Rubik's Cube and other puzzles, or "sci fi", or football, or whatever else, too. (Is football objective or subjective?)

"Mens sana in corpore sano": A sound mind in a healthy body. Savoring the pleasures of the flesh most sensitively, by studying the arts and sciences. Everywhere you go, here you are.

+2024.02.16. Why are people getting so lazy on Quora that there are countless clearly AI generated answers?

This sounds like an interesting question.

The question sounds to me like this: Why are a lot of lazy people posting AI responses to Quora questions? Somebody asks a question on Quora. Somebody else feeds the question to an AI. They get a response from the AI. And they post what the AI gave them as a response to the original person's question. Is that what's being talked about?

I wonder why anybody would do that. I can easily imagine a student turning in an AI response to a question as his (her, other's) assignment in a school course. The student did not want to do the assignment or was not able to do it, so they tried "to get the monkey off their back" by submitting the AI output. Not a good idea since likely they will get caught for plagiarism, right? But it makes sense.

What else can we imagine? Somebody who is not very bright but wants to feel they are, so they post AI outputs as answers to Quora questions and maybe try to "spruce them up a bit", or who knows. But I have not seen that there is any competition to see who answers the most Quora questions (have you?).

Another possibility: A computer programmer who decides for whatever reason or lack of same to write a computer program that reads Quora questions and returns AI answers. Maybe the person finds this more interesting than playing video games?

I agree it's a very good question. I don't have any non-obvious ideas about it. "Why bother?"

+2024.02.16. Can artificial intelligence write creative copy like a human writer?

Let me give you an answer from an AI:

«Generative artificial intelligence (generative AI) refers to a fascinating field where AI systems learn patterns from existing data and then create new content – such as text, images, or other data – based on those learned characteristics. These generative models respond to prompts and generate fresh data that shares similarities with their training examples.... In other words, while generative AI can produce fresh content within the boundaries of its training data, it doesn't autonomously invent entirely new AI models from scratch. That task remains firmly in the hands of human creators who push the boundaries of what AI can achieve.» (Output from Bing AI in response to the question: What is "generative AI"?)

So the answer is: "No." Humans – creative persons – create "models" which then AI can copy. But the models originally come from the persons not the AI. So if you are writing "pulp" sci fi or "romance" stories, AI can copy the model and produce new stories as good as a "hack" human writer. But AI could never have invented the genre ("model") of sci fi or romance stories. That needed creative humans and always will into the future for new new "models" nobody has yet imagined.

Does this help?

+2024.02.16. Do you think generative AI will eventually stop humanity?

It might.

But let's carefully consider the condition under which that could happen: Persons — humanity, or at least the humans with social power — would have to choose to make it happen. They could do this by having AI come up with action plans for every problem and then enforcing that everybody followed the plans. Finally, they themselves could choose to submit to the monster they had created.

In such a way, AI could stop humanity, like thermonuclear bombs can too. AI (like hydrogen bombs) can stop humanity if humanity chooses to make AI (or hydrogen bombs) stop humanity. AI won't stop anything or anybody "by itself". Nor will hydrogen bombs. (Remember what "gun nuts" say, and there is truth to it: "Guns do not kill people; people kill people with guns." AI will not stop humanity, but humanity can choose to stop itself with AI.)

Humans – persons – are in command. They (we) can command their (our) self-destruction. They (we) can do it with poison like Jim Jones and Jonestown, or they (we) can do it with hydrogen bombs, or they (we) can do it with AI. Or we can do something else!

Do you think we should choose not to do ourselves in in any of the available ways, but rather USE AI as a very powerful TOOL to help us make life better for us all?

+2024.02.16. What are some effective ways for a president to communicate with the public without relying on teleprompters or notes? How can they effectively convey their message and persuade the public when speaking spontaneously on important matters?

Compare Russian President Putin's recent interview with Tucker Carlson to America's current and recent Presidents. The only one who could "speak on his feet", i.e., talk extemporaneously and intelligently, was Barack Obama. Mr. Trump rants irrationally. Mr. George W. Bush and, now far more pathetic, Mr. Biden can't say anything informative beyond, as this question asks, repeating what's on the teleprompter. Then there is the President of Ukraine, Mr. Zelensky, a career comedian, who alternates between venomous ranting and puling whining, and who seems highly "persuasive" to a lot of people.

Isn't the key to be intelligent and informed? Vladimir Putin has the equivalent of a PhD degree in economics; Mr. Biden had to repeat 3rd grade, and then he had two brain aneurysms, each of which it was feared might kill him, and now he seems to have incipient dementia.

Now! Communicating "persuasively" and communicating for the good of the people can be two very different things, can't they? Has anybody ever been a more EFFECTIVE, a more PERSUASIVE communicator than Adolf Hitler? But he was effective in the service of maybe the worst regime in history. Wouldn't Germany in the 1930s have been far better off with a leader who stuttered and whose voice sounded like he had a wet towel in his mouth?

So Effective and "good" are two different things, aren't they? Among America's recent Presidents, doesn't Barak Obama stand out as the one who could communicate effectively without notes or reading from a teleprompter? He was educated and intelligent and emotionally mature. So too is Vladimir Putin. Not, alas, some others....

+2024.02.15. Can AI take the role of humans completely?

Never. AI just computes. AI can do work: make things, find information, etc. AI and industrial robots go together. They can increasingly replace humans having to do "scut work".

But human persons are not just producers of outputs. We LIVE. AI cannot enjoy a glass of wine. AI cannot appreciate a beautiful sunset. AI cannot love (or hate, either) anybody or anything. AI has no feelings. AI just computes.

AI (along with industrial robots) will hopefully eliminate the need for humans to "work", although there will still be a lot of productive activity for humans to do, such as consoling sick persons or teaching in the sense of empathic mentoring, not just skill instruction. In education: AI can help persons learn a lot of skills, as a better kind of textbook. But there is also the transmission of expertise that comes from experience, from one person to another, which AI cannot do because it just computes.

So one day AI may completely take over the role of humans as production workers. Then we will be freer to do fully human things: create new art; have new ideas in the sciences; play games; enjoy companionship (and good sex); play with our pets.... All the things that make for a fulfilling human life in community.

+2024.03.16 v287
PreviousReturn to Table of contents

The Judas goats sending the sheep to slaughter. The seductive Sirens singing their femme fatale song, luring men to their death....