
Posted: Tue Jun 13, 2017 12:40 pm
by ruby sparks
[quote="Koyaanisqatsi"]I don't think anyone questions whether or not machines can calculate. They clearly can and very efficiently. I'm not sure that's the same thing as what we're talking about, nor would I grant that being able to calculate is the full extent of human intelligence. The only thing that my culture-laden mind can come up with as a reference is the I, Mudd episode from the original Star Trek. Simplistic, I know, but the thing what did them thar robots was illogic. They couldn't handle nonsense. The human brain, however is a;liver aoggood aat ssdrivng ssensf om nnoonssens. I believe (thanks to sub) that this is a result of having to overcome the stochastic nature of the brain; that it was actually the malfunction that resulted in improved function.

Can we program chaos? Seems an oxymoron. And I don't mean mimic chaos, such that it's not actually chaotic; I mean actual, dynamic chaos that disrupts the system, not as something that is part of the system. And we may not need to in order for AI to simulate/mimic human-like choices. Maybe we can create a simulation of an amygdala, but then how do we program in survivor-based perspective (the base standard by which empathy is measured against)?

I have little doubt that we will create--and already have created--machines that fool us, but I think that has more to do with the fact that we fill-in-the-gaps and basically ignore (on a "conscious level") non-survivor based information. Iow, I think it's more a matter of our own thresholds for trivial flaws being so low than it may be for AI technology to be so sophisticated/nuanced. If it doesn't threaten our survival, we don't really give much of a shit. Though we still have an interesting (and perhaps related) issue when it comes to animation.

And, once again, I should make it clear that I'm making a delineation between what we program as opposed to anything "organic." We have the benefit of millions of years of evolution, again driven by survival. Do we need to Roy Batty our Replicants in order to jumpstart agency*?

So there seems to be (at least) two fundamental conditions that would appear to be central to human agency (*I'm using that as a more "meta" category that would include intelligence) and yet external to it; stochasticity and mortality. It is evidently not about creating a perfect, flawless system and turning it on and presto, agency!; it's about creating an imperfect, cataclysmic system and turning it on and hoping that it can somehow independently overcome all the flaws that evidently gives rise to (human) agency.

How do you build a malfunctioning robot/system and then hope that it overcomes the malfunctions, particularly if it has no innate fear of or understanding of death?[/quote]

The short answer, koy, speaking as someone who sometimes has trouble assembling Ikea furniture properly, is that I don't myself know the full answers or limitations. :)

Though if we wanted to, for example, instil the 'you are going to die' factor as a working parameter, that doesn't seem outlandishly difficult in principle (said the layman).

Whether the robot will ever 'feel fear' about that is another matter, but it might not need to. It's the sort of question I'm asking in another thread here in the P & M forum.

Will any robot have to be capable of implementing 'our' capacities (including the Intentional Stance) in order to 'be as good as or better than us'? Will it in fact need to be conscious? Maybe. I don't know. It may be anthropocentric to automatically say yes. In any case, if robots can do as well as or better than us, in at least certain activities and predicaments, then our conscious, sentient, possibly chimeric game is arguably not the only one in town* (even if we ourselves are largely stuck with it for now), and my somewhat eliminativist, largely reductionist and more bottom-up alternative may have some legs.

I hope I haven't kick-started an AI or p-zombie discussion, as I sometimes avoid getting into those, the former for lack of detailed knowledge. :)

Thanks for the link to the creepy animation thing. That looks interesting, and I hadn't heard of the 'uncanny valley' before. I intend to read it properly later, not least because I've come across stuff like how we treat and feel about corpses (for example) before, both in this context and in the context of the various stances which Dennett writes of.

* That was arguably self-evident even 500 or 500 million years ago, if we consider all the other biomachines (other lifeforms). It may be more easily arguable that robots can outstrip their capacities before they get as far as ours. That said, information-processing, cognition, prediction and decision-making are only part of the picture. Robots might have to start reproducing, or at least find a way of obtaining more autonomy as regards sustenance and survival. Autopoiesis, as someone else called it. Self-maintenance. I think biological machines currently have a monopoly on this, as far as I know.