SKEPTIC Volume 12, Number 2

Discussion of Skeptic magazine and Letters to the Editor
Newsfeed
Poster
Posts: 103
Joined: Sun Mar 19, 2006 7:36 pm

SKEPTIC Volume 12, Number 2

Postby Newsfeed » Sun Mar 19, 2006 6:48 pm

Artificial Intelligence
A.I. Gone Awry: The Futile Quest for Artificial Intelligence

Special Section: Intelligent Design

The Dover Decision

The Omnitron is Still With Us

The Origin of Alien Faces

The 2005 IgNobel Prizes


…and more!

CURRENT SUBSCRIBERS will receive
this issue around the end of March, 2006.


Have a peek at the content listing:

http://www.skeptic.com/the_magazine/arc ... 12n02.html
Last edited by Pyrrho on Sat May 03, 2008 10:03 pm, edited 1 time in total.
Reason: Updated the hyperlink.

mindmaker
Posts: 1
Joined: Thu Mar 23, 2006 4:05 am
Location: Guantanamo Concentration Camp Mental Reservation

A.I. Gone A-Whitewash (Rebuttal to A.I. Gone Awry)

Postby mindmaker » Thu Mar 23, 2006 5:13 am

This morning I chanced upon Kassan: A.I. Gone Awry in a forum devoted to AI and the Singularity. Later in the day I stopped in at a magazine store and they said that the new issue would not come in until mid-April. But then I remembered to check the University Book Store where I had worked in the summer before I entered graduate school at U Cal Berkeley: Bingo! The clerk said that the new issue of The Skeptic had come in a few minutes ago.

The "A.I. Gone Awry" article by Peter Kassan on pages 30-39 was very impressive but extremely disappointing. Like some puff-piece websites like TheEdge.com, the AI article was written entirely from the point-of-view (POV) of the academic AI Establishment. Sure, the article gave an exhaustive history of academic AI, but it made no mention at all of the exciting progress in independent AI projects -- where the race is on and there is no publish-or-perish academic foot-dragging.

Peter Kassan's article stated that there is no general theory of neuroscience -- but I beg to differ, because I spent fourteen years of my collegiate youth and beyond in a mighty and successful effort to formulate a theory of neuroscience as a basis for True AI.

That the article ended with three entire pages of academic references was truly impressive, as were Kassan's observations interspersed amid the citations, but the references were all non-hacker, non-maverick, non-garage-tinkerer publications of the glacially slow academic AI Establishment. In short, the cover article was a waste of paper and a waste of front-page prominence.

You have been warned, The Skeptic Magazine. Do not publish such AI Establishment puffery in futuro.

All your front-page AI article are belong to us.

Thorn
Regular Poster
Posts: 744
Joined: Fri Mar 03, 2006 9:07 pm
Location: In your side.

Postby Thorn » Thu Mar 23, 2006 5:45 am

As a person with a good number of years' experience, I am quite interested in the issue. I haven't seen it on shelves yet, but I am, with many, skeptical of AI. A program is just that: programmed to do what it is told. No matter what you try, no amount of code will allow a computer to act outside of its programming. A computer cannot learn; it can only remember, at best. There is a distinct difference.

But I could be wrong, as I am all too aware; I guess I'll have to wait for the magazine.
"In science, "fact" can only mean "confirmed to such a degree that it would be perverse to withhold provisional assent." I suppose that apples might start to rise tomorrow, but the possibility does not merit equal time in physics classrooms."
-S.J. Gould

NicoleTedesco
New Member
Posts: 6
Joined: Fri Aug 25, 2006 3:26 pm
Location: Seattle, WA

Criticizing Kassan's "Testing Problem"

Postby NicoleTedesco » Fri Aug 25, 2006 3:42 pm

While Kassan is correct about the "testing problem" in general, it does not apply to the AI situations he describes. First, Kassan introduces the testing problem in relation to connectionist software. Just because there may be some 100 billion neurons in the brain doesn't mean that a developer must test 100 billion neurons. In reality, only a single neuron needs to be tested, along with the "growth" algorithm that will replicate that neuron in memory and inject variance when appropriate. The testing problem is more appropriate, however, to his "G.O.F.A.I." or "expert system" description (I like his quip about "beginner systems"). Regardless, testing an A.I. system will be nothing like testing Microsoft Windows: an operating system must have predictable responses to a pre-specified set of inputs, and the operating system is tested against that matrix. The test of an AI system is the Turing Test, which is much simpler, cheaper, and very much smaller in scale than the quality-assurance strategy implemented by the army of testers that my neighbor Microsoft has hired to assure the usefulness of its popular operating system.
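
To make that concrete, here is a rough, purely illustrative sketch (the neuron model, the names, and the growth rule are my own inventions, not anything from Kassan's article) of what "test one neuron plus the growth routine" might look like in Python:

[code]
# Hypothetical illustration: test ONE neuron model and ONE growth routine,
# rather than every unit in a 100-billion-neuron network.
import random
import unittest

class Neuron:
    """A minimal weighted-sum neuron with a step activation."""
    def __init__(self, weights, threshold=0.5):
        self.weights = list(weights)
        self.threshold = threshold

    def fire(self, inputs):
        total = sum(w * x for w, x in zip(self.weights, inputs))
        return 1 if total >= self.threshold else 0

def grow(prototype, count, variance=0.1, rng=None):
    """Replicate a prototype neuron 'count' times, injecting small
    random variance into each copy's weights."""
    rng = rng or random.Random(0)
    copies = []
    for _ in range(count):
        jittered = [w + rng.uniform(-variance, variance) for w in prototype.weights]
        copies.append(Neuron(jittered, prototype.threshold))
    return copies

class TestSingleNeuronAndGrowth(unittest.TestCase):
    def test_neuron_fires_above_threshold(self):
        n = Neuron([1.0, 1.0], threshold=1.5)
        self.assertEqual(n.fire([1, 1]), 1)
        self.assertEqual(n.fire([1, 0]), 0)

    def test_growth_preserves_structure(self):
        proto = Neuron([0.2, 0.8])
        population = grow(proto, count=1000, variance=0.05)
        self.assertEqual(len(population), 1000)
        # Every copy keeps the prototype's shape; only the weights vary slightly.
        self.assertTrue(all(len(n.weights) == 2 for n in population))

if __name__ == "__main__":
    unittest.main()
[/code]

The point is that the test surface is the size of the prototype and the growth rule, not the size of the grown network.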

Peter Kassan appears never to have developed much software in his life, except perhaps a few desktop applications for his personal, occasional academic use.
Last edited by NicoleTedesco on Sun Aug 27, 2006 7:49 pm, edited 2 times in total.
Nicole Tedesco

macros_man
Frequent Poster
Posts: 1025
Joined: Tue Jan 10, 2006 12:25 pm
Location: Union City, California

Postby macros_man » Fri Aug 25, 2006 4:05 pm

Thorn wrote:As a person with a good number of years' experience, I am quite interested in the issue. I haven't seen it on shelves yet, but I am, with many, skeptical of AI. A program is just that: programmed to do what it is told. No matter what you try, no amount of code will allow a computer to act outside of its programming. A computer cannot learn; it can only remember, at best. There is a distinct difference.

But I could be wrong, as I am all too aware; I guess I'll have to wait for the magazine.


Your comments above make me wonder what you mean by "good number of years' experience".

What is a good number? And of what kind of experience are you speaking?
The meaning of life... is to defer entropy

macros_man
Frequent Poster
Posts: 1025
Joined: Tue Jan 10, 2006 12:25 pm
Location: Union City, California

Postby macros_man » Fri Aug 25, 2006 4:18 pm

Jim Dominic:

Sorry.... this is sort of unrelated to this particular posting.... but I was just curious...

A yearly subscription to Skeptic magazine for Canada seems to be $40, and you get 4 issues per year.

That sounds reasonable... but then to back-order a magazine, it seems to be only $6 per issue...

Is this right? If so, it would seem to be cheaper to back-order all of the issues, rather than getting them as part of a subscription.

I'm not cheap or anything... and I like the idea of contributing to the Skeptic Society... but I was just curious whether this is a discrepancy, or what?
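
For what it's worth, the back-of-the-envelope arithmetic I'm doing is just this (a throwaway sketch, taking the listed prices at face value and ignoring shipping):

[code]
# Quick comparison of the two prices quoted above (US dollars, shipping ignored).
subscription_per_year = 40.00   # 4 issues per year by subscription
issues_per_year = 4
back_order_per_issue = 6.00     # price to back-order a single issue

per_issue_subscribed = subscription_per_year / issues_per_year
savings_per_year = (per_issue_subscribed - back_order_per_issue) * issues_per_year

print(f"Subscription: ${per_issue_subscribed:.2f} per issue")   # $10.00
print(f"Back-order:   ${back_order_per_issue:.2f} per issue")   # $6.00
print(f"Back-ordering everything would save ${savings_per_year:.2f} per year")  # $16.00
[/code]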
The meaning of life... is to defer entropy

NicoleTedesco
New Member
Posts: 6
Joined: Fri Aug 25, 2006 3:26 pm
Location: Seattle, WA

Postby NicoleTedesco » Fri Aug 25, 2006 5:23 pm

I am in partial agreement with Thorn. I do believe the modern computing paradigm, given sufficient power and clever enough programming, will one day be able to simulate human behavior to some useful level. A simulated AI may or may not pass the Turing Test, but I do believe we will get close enough, perhaps even to "bond" with these future programs, because that is what our own genetic programming predisposes us to do given the right stimuli. However, before machines can become conscious in the traditional "cogito, ergo sum" sense of the word, before they can have subjective experience analogous to what we humans understand it to be, we will have to learn a lot more about the nature of consciousness itself (e.g., is it a quantum mechanical phenomenon?) and perhaps even employ novel computing technologies to replicate that physical characteristic we call "having a soul" (e.g., employing quantum computing to replicate quantum phenomena).

Most misunderstandings about AI come from the inability of the debaters to understand the difference between the phenomena of intention, action, response, perception and experience, and to place their criticisms into the appropriate phenomenological frame. For instance, the question of when a computing system will simulate human action and "free will" (e.g., the Turing Test) is different from the question of when computing systems will implement truly subjective experience.
Nicole Tedesco

macros_man
Frequent Poster
Posts: 1025
Joined: Tue Jan 10, 2006 12:25 pm
Location: Union City, California

Postby macros_man » Fri Aug 25, 2006 5:38 pm

NicoleTedesco wrote:I am in partial agreement with Thorn.

...



Sorry... but I don't see where you are agreeing with Thorn, here. Thorn appears to be under the assumption that there is some fundamental difference between thinking and computing. You, on the other hand, seem to think that some bridge exists between thinking and computing, and that this bridge is passable, if only we could better understand what thinking is.

Forgive me if I'm misinterpreting you, Thorn, but you seem to think that bridge is not passable, or that no bridge even exists.
The meaning of life... is to defer entropy

macros_man
Frequent Poster
Posts: 1025
Joined: Tue Jan 10, 2006 12:25 pm
Location: Union City, California

Postby macros_man » Fri Aug 25, 2006 5:57 pm

macros_man wrote:Sorry... but I don't see where you are agreeing with Thorn, here. Thorn appears to be under the assumption that there is some fundamental difference between thinking and computing. You, on the other hand, seem to think that some bridge exists between thinking and computing, and that this bridge is passable, if only we could better understand what thinking is.

Forgive me if I'm misinterpreting you, Thorn, but you seem to think that bridge is not passable, or that no bridge even exists.


Sorry Thorn... I'm probably misrepresenting your thoughts by stating them far too strongly... I understand you are just kind of unsure about AI and consciousness, and whether there's any interaction there... but I sort of got the idea that you feel there is some unbridgeable gap between information processing and phenomenological experience.
The meaning of life... is to defer entropy

NicoleTedesco
New Member
Posts: 6
Joined: Fri Aug 25, 2006 3:26 pm
Location: Seattle, WA

When I Do, and When I Don't Agree With Thorn

Postby NicoleTedesco » Sat Aug 26, 2006 3:09 pm

macros_man, I apologize: I agree with Thorn in that traditional "computational" AI approaches will not get us very far in terms of producing something that is truly analogous to human intelligence, but I disagree with Thorn in that I don't believe the creation of an analog of the human mind is beyond our reach. I believe that conscious experience is a physical process which will one day be described by a coherent theory, and that one day we will replicate that experiential process artificially. I also disagree with Thorn in that it may be possible to closely simulate human interaction with a combination of known approaches (heuristic processing, neural networks, very rapid Darwinian programming and so on) once they scale up, given sufficient computing speed, memory and clever programming gimmicks.

Whether or not I agree with Thorn depends on what one is trying to produce with a Turing-based system (the modern computing paradigm): a reasonable simulation (passing a Turing Test) or a true analog of the phenomenon of conscious human experience.
Nicole Tedesco

pertti_jarla
Poster
Posts: 125
Joined: Mon Feb 13, 2006 5:03 pm

Postby pertti_jarla » Sun Aug 27, 2006 6:49 pm

One thing seems certain: we are very far from making a simulation of the human brain. Any optimism about the schedule seems misplaced.
It is surprising how many people are so certain that we will see this breakthrough. A revolution in digital technology and in the understanding of the nervous system/brain would be needed.

NicoleTedesco
New Member
Posts: 6
Joined: Fri Aug 25, 2006 3:26 pm
Location: Seattle, WA

When will a patient not be aware of the ball flying towards

Postby NicoleTedesco » Sun Aug 27, 2006 7:23 pm

I believe Ray Kurzweil's estimation (see "The Singularity Is Near") of 2050 has a good "feel" to it, if and only if 2050 is considered a time period in which we can expect convincing simulations of human behavior (and, perhaps, even human form) to emerge. In terms of replicating the human experience however, the related timeline cannot be estimated since we have no theoretical notion at all of the physical processes associated with it.

On the other hand, Kurzweil develops potential timelines for various brain-structure replacements which may aid us in the discovery of the processes underlying human experience. For instance, I suggest that we can at least narrow down our search as we learn to replace brain structures such as the occipital lobe, the first stage of visual processing in our brain. Imagine replacing the occipital lobe, one layer at a time, with humans reporting their subjective experiences in each case. At some point I suggest that a "split brain"-like experience may arise: at which point does a patient seem to process some visual stimulus but not be aware of its processing? For instance, when do these patients react to objects flying towards them in their field of vision but not report a related subjective experience of "seeing" that flying object? The situation would be analogous to "split brain" patients whose left hands literally do not know what their right hands are doing. By finding subjective-experience boundaries in this way we improve our chances of creating a complete theory of mind immensely. Kurzweil's timeline for such advances, I believe, ignores the slowness with which such layered replacements can and will be made with real, live humans. The federal approval process for each replacement layer will by itself stretch the entire discovery process out, perhaps until the end of the century (at least).
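
To make that boundary search concrete, here is a purely hypothetical sketch (the layer numbers, reactions and reports are all imaginary) of how one might record where behaviour and reported experience diverge:

[code]
# Hypothetical illustration of the "subjective experience boundary" search described above.
# Each observation: (layer_replaced, reacted_to_stimulus, reported_seeing_it).
def find_awareness_boundary(observations):
    """Return the first replaced layer at which the patient still reacts to a
    visual stimulus but no longer reports a subjective experience of seeing it."""
    for layer, reacted, reported in observations:
        if reacted and not reported:
            return layer
    return None

# Imaginary data: after layer 4 is replaced, the patient still ducks the
# flying ball but no longer reports "seeing" it.
data = [(1, True, True), (2, True, True), (3, True, True),
        (4, True, False), (5, True, False)]

print(find_awareness_boundary(data))  # -> 4
[/code]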
Nicole Tedesco

NicoleTedesco
New Member
Posts: 6
Joined: Fri Aug 25, 2006 3:26 pm
Location: Seattle, WA

Yes, Virginia, There Will Be A Revolution

Postby NicoleTedesco » Sun Aug 27, 2006 8:04 pm

pertti_jarla, I do believe that you are correct and that it will require a revolution or two to create a true analog of human subjective, or "qualic," experience (i.e., creating a soul). I also have some faith, however, that Ray Kurzweil's description of nonlinear technological acceleration has some merit. The feedback loop produced by concomitant and interdependent advances in computing, bioengineering and nanotechnology may accelerate the timeline to those revolutions to points sooner in future history than you may think. I also believe, however, that Kurzweil's 2050 timeline for the emergence of human replacements may be a tad optimistic. After all, humans are involved in the entire process, which implies bureaucracies, laws, uncertainty, errors, errors and more errors, all of which will threaten Kurzweil's "Happy Day" scenario. Oh yeah, don't forget to take into account the effects (both positive and negative) that human [i]wars[/i] will have on his timeline.
Nicole Tedesco

Scheletro
Posts: 1
Joined: Fri Mar 07, 2014 7:09 am

Intelligence (mind) vs Brain

Postby Scheletro » Fri Mar 07, 2014 7:55 am

The notion that human-level intelligence is an “emergent property” of brains (or other systems) of a certain size or complexity is nothing but hopeful speculation.


This baffles me. AI skepticism aside, what are intelligence and mind if not emergent properties of the material brain immersed in the totality of its physical environment?

NicoleTedesco
New Member
Posts: 6
Joined: Fri Aug 25, 2006 3:26 pm
Location: Seattle, WA

Re: SKEPTIC Volume 12, Number 2

Postby NicoleTedesco » Sat Mar 08, 2014 2:01 am

For me, given my experience, the term "emergent properties" has come to be associated with a connotation of "epiphenomenalism", which I am not a fan of with respect to the mind.
Nicole Tedesco

kennyc
Has No Life
Posts: 12192
Joined: Sun Apr 18, 2010 11:21 am
Custom Title: The Dank Side of the Moon
Location: Denver, CO

Re: Intelligence (mind) vs Brain

Postby kennyc » Sat Mar 08, 2014 1:05 pm

Scheletro wrote:
The notion that human-level intelligence is an “emergent property” of brains (or other systems) of a certain size or complexity is nothing but hopeful speculation.


This baffles me. AI skepticism aside, what are intelligence and mind if not emergent properties of the material brain immersed in the totality of its physical environment?



Hmmm.....welcome to the skeptic forum.....interesting that you pulled up an 8-year-old post.....any reason for this?

There are a number of threads about intelligence, consciousness, and brain/mind that are much more up to date, or of course you can start one if you'd like to discuss it. It's definitely one of my favorite topics...
Kenny A. Chaffin
Art Gallery - Photo Gallery - Writing&Poetry - The Bleeding Edge
"Strive on with Awareness" - Siddhartha Gautama

Gord
Real Skeptic
Posts: 29090
Joined: Wed Apr 29, 2009 2:44 am
Custom Title: Silent Ork
Location: Transcona

Re: Intelligence (mind) vs Brain

Postby Gord » Sun Mar 09, 2014 12:22 pm

kennyc wrote:...any reason for this?

It baffled him.
"Knowledge grows through infinite timelessness" -- the random fictional Deepak Chopra quote site
"You are also taking my words out of context." -- Justin
"Nullius in verba" -- The Royal Society ["take nobody's word for it"]
#ANDAMOVIE

kennyc
Has No Life
Posts: 12192
Joined: Sun Apr 18, 2010 11:21 am
Custom Title: The Dank Side of the Moon
Location: Denver, CO

Re: Intelligence (mind) vs Brain

Postby kennyc » Sun Mar 09, 2014 12:25 pm

Gord wrote:
kennyc wrote:...any reason for this?

It baffled him.


Ah, I see....bafflement....

https://www.google.com/search?q=baffle+ ... 68&bih=729
Kenny A. Chaffin
Art Gallery - Photo Gallery - Writing&Poetry - The Bleeding Edge
"Strive on with Awareness" - Siddhartha Gautama

