Human Intelligence and "Artificial Intelligence"

How should we think about weird things?

Will artificial intelligence surpass us?

Yes, within a few decades. Humanity will be obsolete soon.
3
25%
Yes. However, there will still be a place for humanity.
2
17%
Artificial intelligence will be impressive; however, there will always be aspects of humanity that it cannot replicate.
6
50%
Humanity will always be unique and superior to artificial intelligence.
0
No votes
Other (please explain).
1
8%
 
Total votes : 12

Human Intelligence and "Artificial Intelligence"

Post #1  Postby snooziums » Wed Jun 20, 2007 10:17 pm

This is a split off of the "Vegetarian/Vegan challenge" which went way off topic.

There is a debate as to when computers will match our intelligence, or if it will always be exclusive to humans.

Here are some quotes from the language discussion:

ruprecht wrote:But natural intelligence is far advanced over artificial intelligence. If they can make a computer that breaks its commands, and can make mistakes and learn from them, then it's a whole new ball game.

The computer that won at chess did not win by artificial intelligence. It needed its programmers to reprogram it in order to win, based on the opponent's previous winning methods.


snooziums wrote:However, there are "neural network" computers that function by multi-point networks and have to learn everything they know. For instance, I saw a demonstration of one that was learning to say a sentence. Someone would say it, and it would try to repeat it. Every time, it got a bit closer to the original human speaker.

Computers are becoming more complex every year. Human evolution is at an end: our brain size has not increased for over 100,000 years. Within a couple of decades, computers will surpass the human brain in complexity; that is inevitable.

Already, they have surpassed it in memory. The human brain cannot store more than about 50 DVDs' worth of information. Supercomputers already have that capacity, and it will be commonplace before long. Fifteen years ago, 40 MB hard drives were the limit. Now we have 1 TB hard drives, and there seems to be no current limit to the rate of increase.

The computer that defeated the chess player, "Deep Blue," was reprogrammed; however, there are current programs that learn and adapt. Many programs in the works do this. It is not hard to do; it is even in some games.

Within 30 years there will be computers that can learn as well as humans can, and reason at the same rate or better.
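The "gets a bit closer every time" behaviour snooziums describes in the speech demo is the core idea of error-driven learning. Here is a toy sketch in Python; the numbers and the direct blending rule are invented for illustration and are not the actual demo system:

```python
# Error-driven learning: a "student" vector adjusts toward a target a
# little on every repetition, so the error shrinks each time -- a toy
# analogue of the speech-repetition demo described above.
target = [0.9, 0.1, 0.7, 0.3]   # the "sentence" to reproduce (made up)
student = [0.0, 0.0, 0.0, 0.0]  # starts knowing nothing
learning_rate = 0.5

for attempt in range(10):
    # move each value a fraction of the way toward the target
    student = [s + learning_rate * (t - s) for s, t in zip(student, target)]
    error = sum(abs(t - s) for s, t in zip(student, target))
    print(f"attempt {attempt + 1}: error = {error:.4f}")
```

Real neural networks use the gradient of a loss function rather than this direct blending, but the pattern is the same: each attempt is compared against the target and the internal state nudged toward it.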


ruprecht wrote:I'm not sure that learning is the same as natural intelligence, but certainly in "quantity" AI can be "bigger". From what I understand, it is the ability to break rules and learn from mistakes that is of interest in this area.


ruprecht wrote:Here is the article... I think... I didn't find it in this PC's bookmarks, so hopefully (pretty sure) it is this one. It is really interesting for its wide scope.

http://www.google.com/search?q=meaning+ ... =utf-8&oe= utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a

It's number one: meaning-based intelligence vs. info-based.


snooziums wrote:"Meaning-Based Intelligence"? Now we have to define meaning, and whether it exists, which some here will argue it does not.

As for the "specialness" of bacteria and rat neurons, that is just an adaptability to an environment. However, software can be written to do the same, and some neuro-"hardware" is also designed to do this.

I am not saying that the living organism is not impressive; however, machines, both hardware and software, are being designed and built that work under the same principles, and will someday be able to replicate everything that humans do.

After all, if we are really so "special," then creating something that is superior to us should not be that hard. Otherwise, we are not that far "above" the rest of the animals.


snooziums wrote:
ruprecht wrote:
On the issue of creating superior intelligence to ours, that might be something to be applied to god bleevers' arguments, eh ?

No, not at all. It is the reverse.

If we create a superior intelligence to ourselves, then it works to disprove any "God" theory, since it will prove that we can create something as complex as we are. But if we cannot create something as complex as we are, and since it is argued that we are so far advanced from anything else in the animal kingdom, then one must question why we are so unique.

When we create intelligence, we will realize that it does not take some "divine" force to do so.


slimpickins wrote:
How could it be possible for machines to match or outstrip our intelligence, when we don't even understand some finer points of our own function, and therefore can't program a machine with such capacity, even if it can learn?

We do not know how cats purr either, but we can replicate the sound. And some here would argue that we do know most of how the brain functions; look at the discussions about whether the "soul" exists or not. The argument is that since we know how the brain functions, we must rule out the "soul" idea.

And while we do not know all of the little details of how our brain functions, unless we are designing something that functions the exact same way, that is not relevant. As long as it can learn and adapt in thought as we can, at our rate, how it does so can differ from our own functioning.
Reviewing the massive amount of unsubstantiated or anecdotal claims, testimony, non-validated observational data, and philosophical studies, one finds that they actually suggest the existence of such an entity as the "soul." Although it cannot be determined what it is or whether it is factual, it is my personal belief that there may very well be something there, and that it is worth looking into.
snooziums
Frequent Poster
 
Posts: 1673
Joined: Tue Sep 12, 2006 9:16 pm
Location: Olympia, WA

Post #2  Postby Evolver8484 » Sun Jun 24, 2007 5:21 am

“Intelligence” encompasses many related abilities such as memory, reasoning, problem solving, and creativity.

Now, in terms of memory, computers have surpassed us: hard drives store far more than the human brain, and raw logical reasoning is something a computer does inherently better than a human being.

As for problem solving, I know that a technique called evolutionary programming (1) has been used to create circuit designs without the aid of a human after the program has been started (sorry, I can't find the reference). But computers have a problem processing tasks with a lot of variables, like speech or even walking. Now, there has been work using a statistical reasoning technique (UCT) to handle a large number of variables, as in the game Go (2), with some mixed success. It may be possible that computer programmers will be able to mimic humans in this ability in the future, but I don't see it happening within the next 30 years.
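The evolutionary programming mentioned above boils down to a mutate-and-select loop: candidate designs are randomly varied, the fittest survive, and no human intervenes after the start. A toy illustration, where the "design" is just a bit-string (the target, population size, and mutation rate are invented for the example, not taken from the circuit-design work cited):

```python
import random

# Toy evolutionary programming loop: evolve a bit-string toward a target
# "design" purely by mutation and selection, with no human guidance
# after the initial setup.
random.seed(0)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

def fitness(candidate):
    # count positions that match the target design
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    # flip each bit with a small probability
    return [1 - b if random.random() < rate else b for b in candidate]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    best = population[0]
    if fitness(best) == len(TARGET):
        break
    # the best candidate survives unchanged; the rest are its mutants
    population = [best] + [mutate(best) for _ in range(19)]
```

Keeping the best candidate unchanged (elitism) guarantees fitness never decreases, so the loop reliably converges on the target for a problem this small.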

Now, creativity is the oddball: as far as I know, this ability is not a function of logic in the classical sense. But those who are the most creative can usually use it to do great things, like Einstein. He was arguably not as smart as his colleagues, but when it came to seeing things from a different perspective he completely blew them out of the water. I haven't heard of a way to mimic this, but that's not to say someone won't come up with a way. The jury is still out for me, but if I were a betting man I'd put everything on the machines.

We all knew it: Arnold is going to create Skynet and the machines are going to take over the world. :lol:

(1)        http://en.wikipedia.org/wiki/Evolutionary_programming
(2)        http://www.sciam.com/article.cfm?articl ... sc=I100322
The first mistake with any idea is assuming you're right.
Evolver8484
New Member
 
Posts: 8
Joined: Sun Jun 10, 2007 2:34 am
Location: Ohio

Re: Human Intelligence and "Artificial Intelligence"

Post #3  Postby Animus » Sun Oct 21, 2007 10:38 pm

I'd like to recommend a book on roughly this subject: The Engine of Reason, The Seat of the Soul by Paul Churchland.

Computers currently have processing speeds in the gigahertz range, while neurons communicate at only about 500 Hz. However, neurons are more tightly packed together. Brains are also not linear processors like computers; they are vector coded. Their processing is parallel, distributed, and redundant, with recurrent connections. I think the term "software" is erroneous in the context of human brains or the future of AI. Brain-Based Devices (BBDs) like Darwin VII are programmed through a training process, a process that sets the synaptic weights for its vector-coding brain. SpeakNET, DecNEt, Gary Cottrell's face-recognition network, and so on use vector-coded training to 'learn' as opposed to being programmed ahead of time.

The real obstacle in AI might be the fact that human brains have 100 trillion synaptic connections; fitting that into a silicon chip is going to be a challenge. Kareem Zaghloul at the University of Pennsylvania, US, and colleague Kwabena Boahen at Stanford University have created a silicon retina that codes visual information in the same manner human retinas do, and the output is rather interesting. But we are still a long way from recreating the complexity of the brain; the complexity, I think, is where the real problem is. We've created linear processing technology that surpasses human capacity for speed and memory, but none of it is capable of learning autonomously. Brains also have the ability to create synapses through synaptogenesis, whereas computers will need to have their synapses preconnected before they can adjust the 'weights' according to something like Hebb's rule.
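Hebb's rule, named above, strengthens a connection in proportion to the correlated activity of the units it joins (often summarized as "neurons that fire together wire together": delta_w = eta * pre * post). A minimal sketch, with made-up activity values:

```python
# Hebbian weight adjustment: each connection is strengthened in
# proportion to the product of its presynaptic and postsynaptic
# activity (delta_w = eta * pre * post). Toy values for illustration.
def hebbian_update(weights, pre, post, eta=0.1):
    """Return new weights after one Hebbian step.

    weights[i][j] connects presynaptic unit i to postsynaptic unit j.
    """
    return [
        [w + eta * pre[i] * post[j] for j, w in enumerate(row)]
        for i, row in enumerate(weights)
    ]

# two presynaptic units, two postsynaptic units, connections start at zero
w = [[0.0, 0.0], [0.0, 0.0]]
pre, post = [1.0, 0.0], [1.0, 1.0]
w = hebbian_update(w, pre, post)
# only connections from the active presynaptic unit are strengthened
```

This is exactly the "preconnected synapses with adjustable weights" picture from the post: the wiring is fixed in advance and only the weights change with experience.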

AI is certainly looking interesting these days, and I think we will create computers that aren't necessarily better or smarter than humans, but which aren't held back by other motivations and a short lifespan. For me to study a subject takes a lot of time, effort, and money. I need to frequently suspend study to eat, sleep, work, and perform chores, and less than 80 years from now I will be dead and it will all be lost. But a computer with the capacity to learn without these drawbacks will be superior in knowledge; it will have the diagnostic capacity of a hundred diagnosticians, if so trained.
"There are a lot of myths which make the human race cruel and barbarous and unkind. Good and Evil, Sin and Crime, Free Will and the like delusions made to excuse God for damning men and to excuse men for crucifying each other." - Clarence Darrow
Animus
Poster
 
Posts: 214
Joined: Fri Feb 16, 2007 12:37 am
Location: London, Ontario

Re: Human Intelligence and "Artificial Intelligence"

Post #4  Postby Interesting Ian » Mon Oct 22, 2007 12:51 am

I don't understand the question.  What do you mean by "intelligence"?

I do not think that computers will ever become remotely conscious.   So if intelligence necessitates consciousness then computers will never surpass us.  Indeed they will never surpass the intelligence of a rock.  

If you don't understand intelligence to necessitate consciousness, then the question hinges on whether all problems can be solved through an algorithmic process. If the answer to this question is yes, then it clearly follows that computers will eventually surpass us.

If the answer to this question is no, then a further question which needs to be addressed is whether human intelligence is purely of an algorithmic nature. If the answer to this question is yes, then it clearly follows that computers will eventually surpass us.

If the answer to this question is no, then a further question which needs to be addressed is whether the non-algorithmic reasoning of human beings to reach a given conclusion can also be achieved through an algorithmic process. If the answer to this question is yes, then it follows that computers will eventually surpass us.

If the answer to this question is no, then it seems that in some aspects of intelligence computers will eventually surpass us, and in other aspects they will never do so.

A huge problem with your question is the implicit assumption that some type of materialist metaphysic correctly depicts reality.  I find such a supposition utterly absurd.
Interesting Ian
Regular Poster
 
Posts: 775
Joined: Tue Mar 29, 2005 9:07 pm
Location: Stockton-on-Tees, England

Re: Human Intelligence and "Artificial Intelligence"

Post #5  Postby Animus » Mon Oct 22, 2007 1:43 am

Ian,

There is a philosophical problem to overcome with consciousness. There is no litmus test or Turing test for consciousness. We believe through inference that other beings are conscious, but through direct testing we know as much about that as we know about whether a rock is conscious. Clinical cases do reveal that consciousness and the brain are related, in such a way that the brain is a causal condition for consciousness. But unless we devise a test for consciousness, we will never know whether another human being, a rock, or a computer is conscious.
"There are a lot of myths which make the human race cruel and barbarous and unkind. Good and Evil, Sin and Crime, Free Will and the like delusions made to excuse God for damning men and to excuse men for crucifying each other." - Clarence Darrow
Animus
Poster
 
Posts: 214
Joined: Fri Feb 16, 2007 12:37 am
Location: London, Ontario

Re: Human Intelligence and "Artificial Intelligence"

Post #6  Postby Marsupialwolf » Mon Oct 22, 2007 4:49 am

snooziums wrote:
However, there are "neural network" computers that function by multi-point networks and have to learn everything they know. For instance, I saw a demonstration of one that was learning to say a sentence. Someone would say it, and it would try to repeat it. Every time, it got a bit closer to the original human speaker.

Computers are becoming more complex every year. Human evolution is at an end: our brain size has not increased for over 100,000 years. Within a couple of decades, computers will surpass the human brain in complexity; that is inevitable.

Already, they have surpassed it in memory. The human brain cannot store more than about 50 DVDs' worth of information. Supercomputers already have that capacity, and it will be commonplace before long. Fifteen years ago, 40 MB hard drives were the limit. Now we have 1 TB hard drives, and there seems to be no current limit to the rate of increase.


I was wondering what your source was for the limits on human memory. From what I've seen, the only theoretical limits that have been proposed were based on the rate of learning and the average human lifespan rather than on the actual storage capacity of the brain. These studies seemed imprecise at best, as they make assumptions about the average rate of learning in humans. If we are incapable of actually filling the brain to capacity within a human lifetime, then for practical purposes the storage capacity of the human brain is meaningless.

An interesting quote I came across while searching for info:
"Digital computers are a very poor model for the human (or for that matter any other organic) brain. However human memory works, it most certainly doesn't work in the way that a computer's memory works and so the idea of GBs of memory is essentially meaningless." --Ben Gallagher
Marsupialwolf
New Member
 
Posts: 12
Joined: Mon Oct 22, 2007 3:31 am

Re: Human Intelligence and "Artificial Intelligence"

Post #7  Postby Interesting Ian » Mon Oct 22, 2007 1:27 pm

Animus wrote:Ian,

There is a philosophical problem to overcome with consciousness. There is no litmus test or Turing test for consciousness; we know through inference that other beings are conscious,



Remember that we are supposed to operate purely according to physical laws.  We are supposed to be incredibly complex, but at the end of the day we are merely biological machines.  The totality of our behaviour is supposed to be purely a result of physical chains of cause and effect.  So it is not possible to infer the existence of consciousness.  Instead the materialist simply declares that our behaviour is consciousness (behaviourism), or that consciousness is identical to the causal processes occurring in our brains (functionalism), or is identical to the biological material itself comprising our brains (identity theory).  So according to the materialist we do not infer consciousness, rather we are conscious by definition. (and of course the notion that computers can become conscious assumes functionalism)

It is the interactive dualist who infers the existence of consciousness. It is the self per se which initiates some change in the brain, thus breaking a chain of physical cause and effect. So our behaviour is partially accounted for by the fact of consciousness per se rather than purely physical laws.
Interesting Ian
Regular Poster
 
Posts: 775
Joined: Tue Mar 29, 2005 9:07 pm
Location: Stockton-on-Tees, England


