subsymbolic
Posts: 13371
Joined: Wed Oct 26, 2011 6:29 pm
Location: under the gnomon

Post by subsymbolic » Sun Jun 11, 2017 1:21 pm

[quote=""ruby sparks""]
subsymbolic;673189 wrote:You'd want to say both are just the wrong way of looking at something, but it's a bit late...
Do you mean both stances (design & intentional) as opposed to the one I appear to favour (physical)?

I would never say that the first two are wrong and the last right. They're all models, taxonomies, descriptions. None will be all right or all wrong. All will be useful/pragmatic/instrumentalist models in certain contexts.

One question I have...what is the supposed role of the stances (particularly the IS) in non-conscious thinking?

It seems to me that the main focus for the IS is on person-person* and perhaps person-group (if the group is a relatively modest size) interactions, and generally of the conscious/deliberated type.

*Essentially brain-brain (or mind - mind if preferred).[/QUOTE]

What I mean is that the eliminative project described by Stich or Churchland is based on assumptions that are correct. I'm constantly trying to tease out a tension between IS being the way we came to understand science, and yet science showing that the idea that minds are basically symbol-shuffling belief boxes is clearly wrong. You can't do science without intentions (or at least we are stuck in the local minimum of having no other way to do it), and science shows intentions are the wrong way of looking at how brains work.

As for non-conscious thinking, give me a definition of thinking and I'll probably have an immediate answer. In fact, defining what you mean will probably answer the question for you. Do you want to call a neuron reaching a threshold and firing thinking? An ad-hoc chord of a million neurons? What about someone mulling over a short narrative? Someone reading silently?
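
To make the first option concrete: a hedged toy sketch of a leaky integrate-and-fire unit, the standard cartoon of 'a neuron reaching a threshold and firing'. All parameters and inputs below are invented, and nothing in the sketch settles whether crossing a threshold deserves the name 'thinking'.

[code]
# Toy leaky integrate-and-fire unit (illustrative only; made-up parameters).
def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Leakily integrate input current; emit a spike (1) when the
    potential crosses threshold, then reset to zero."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current  # decay, then add new input
        if potential >= threshold:
            spikes.append(1)   # the neuron "fires"
            potential = 0.0    # reset after the spike
        else:
            spikes.append(0)
    return spikes

print(simulate_lif([0.3, 0.3, 0.3, 0.3, 0.0, 0.6, 0.6]))  # [0, 0, 0, 1, 0, 0, 1]
[/code]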

plebian
Posts: 2838
Joined: Sun Feb 22, 2015 8:34 pm
Location: America

Post by plebian » Sun Jun 11, 2017 1:52 pm

That's a good point. Thinking really seems to imply some narrative quality.

ruby sparks
Posts: 7781
Joined: Thu Dec 26, 2013 10:51 am
Location: Northern Ireland

Post by ruby sparks » Sun Jun 11, 2017 2:52 pm

[quote=""subsymbolic""]

What I mean is that the eliminative project described by Stich or Churchland is based on assumptions that are correct. I'm constantly trying to tease out a tension between IS being the way we came to understand science, and yet science showing that the idea that minds are basically symbol-shuffling belief boxes is clearly wrong. You can't do science without intentions (or at least we are stuck in the local minimum of having no other way to do it), and science shows intentions are the wrong way of looking at how brains work. [/quote]

Right, so I picked you up wrong.


[quote=""subsymbolic""]
As for non conscious thinking, give me a definition of thinking and I'll probably have an immediate answer. In fact, defining what you mean will probably answer the question for you. Do you want to call a neuron reaching a threshold and firing thinking? An ad-hoc chord of million neurons? What about someone mulling over a short narrative? Someone reading silently?[/quote]

What I mean by non-conscious thinking is any thinking (mental/cognitive process) that someone is not conscious of. And yes I would include, for example, a neuron firing, though typically there would be many.
Last edited by ruby sparks on Sun Jun 11, 2017 3:16 pm, edited 2 times in total.

subsymbolic
Posts: 13371
Joined: Wed Oct 26, 2011 6:29 pm
Location: under the gnomon

Post by subsymbolic » Sun Jun 11, 2017 3:17 pm

[quote=""ruby sparks""]What I mean by non-conscious thinking is any thinking that someone is not conscious of. And yes I would include, for example, a neutron firing.[/quote]

Ok, then is there any difference between information processing and thinking?
Say, when formyl methionine binds with an FPR1 receptor, causing it to release histamine and global neurotransmitter regulators such as serotonin, is that thinking? Serotonin, of course, will act as an antagonist to a bunch of cascades and as an agonist to others, altering mood and behavior appropriately...

And a further diagnostic question: given that at least some thoughts can be literally and non-metaphorically written down in words on paper, do you think that the word 'think' is starting to have similar problems to the word 'meme', in that it now covers several different processes that don't have much in common and indeed are not, as Ramsey, Stich and Garon demonstrated, commensurable?

ruby sparks
Posts: 7781
Joined: Thu Dec 26, 2013 10:51 am
Location: Northern Ireland

Post by ruby sparks » Sun Jun 11, 2017 3:19 pm

[quote=""subsymbolic""]

What I mean is that the eliminative project described by Stich or Churchland is based on assumptions that are correct. I'm constantly trying to tease out a tension between IS being the way we came to understand science, and yet science showing that the idea that minds are basically symbol-shuffling belief boxes is clearly wrong. You can't do science without intentions (or at least we are stuck in the local minimum of having no other way to do it), and science shows intentions are the wrong way of looking at how brains work. [/quote]

There are bound to be tensions between two such ideas. You get stuff like a machine that can't easily think of itself as a machine thinking that it's a machine.

ruby sparks
Posts: 7781
Joined: Thu Dec 26, 2013 10:51 am
Location: Northern Ireland

Post by ruby sparks » Sun Jun 11, 2017 3:22 pm

[quote=""subsymbolic""]Ok, then is there any difference between information processing and thinking?
[/quote]

I can't think of any significant ones off the top of my head. That said, I'm not going to put my head on the block and call them identical.

[quote=""subsymbolic""]And a further diagnostic question, given that at least some thoughts can be literally and non-metaphorically written down in words on paper, do you think that the word 'think' is starting to have similar problems to the word 'meme', in that it now covers several different processes that don't have much in common and indeed, are not, as Ramsey Stitch and Garon demonstrated, commensurable.[/quote]

I think all words have such problems. :)

I'd be ok with non-conscious mental/cognitive information-processing for one of them. That would leave 'thinking' with its traditional properties.

Do you think they don't have much in common, or aren't commensurable?
Last edited by ruby sparks on Sun Jun 11, 2017 3:32 pm, edited 1 time in total.

subsymbolic
Posts: 13371
Joined: Wed Oct 26, 2011 6:29 pm
Location: under the gnomon

Post by subsymbolic » Sun Jun 11, 2017 4:38 pm

[quote=""ruby sparks""]
subsymbolic;673234 wrote:Ok, then is there any difference between information processing and thinking?
I can't think of any significant ones off the top of my head. That said, I'm not going to put my head on the block and call them identical.

[quote=""subsymbolic""]And a further diagnostic question, given that at least some thoughts can be literally and non-metaphorically written down in words on paper, do you think that the word 'think' is starting to have similar problems to the word 'meme', in that it now covers several different processes that don't have much in common and indeed, are not, as Ramsey Stitch and Garon demonstrated, commensurable.[/quote]

I think all word have such problems. :)

I'd be ok with non-conscious mental/cognitive information-processing for one of them. That would leave 'thinking' with its traditional properties.

Do you think they don't have much in common or aren't commensurate??[/QUOTE]

What are the 'traditional properties' of thinking? I'm honestly not sure how I'd answer that question myself.

However, as this really is in the technical ballpark where it matters, the word cognitive really only refers to those mental states that can have a truth value, that is, conceptualised states. The distinction between conceptualised and non-conceptualised content is critical here. Part of the problem is that quite a lot of the technical language is painfully out of date. Thirty years ago you'd have had cognitive states (the ones you could say), conative states (behaviour and reaction) and affective states (how it felt). However, it's now painfully clear that there is an additional category of content that isn't conceptualised but plays a part in areas that were traditionally the realm of cognition. Back in the seventies people could still say things like 'the laws of logic are the laws of thought' and not get laughed at. In fact, in the late eighties, I remember mocking the shit out of someone who argued for non-conceptual content. My grounds were that there could never be a commensurable conceptual scheme beyond a bivalent logic, and hence we couldn't talk about them, and if we did anything we said would be nonsense. Hence I called them non-contentful concepts. Technically I was correct, but he had the last laugh.

As I don't think we've really covered this properly, a concept isn't just a fancy idea. It is the ability to use a word correctly. As my preferred example goes, you have the concept of a duck when you can accurately pick out all and only ducks from a selection of objects. Linnaean hierarchies tend to be laid out conceptually - the deeper your understanding, the further down the tree you can go. Anyone can identify something as a flying thing, most as an aeroplane, fewer as a jet, fewer still as a 747 and very few as a 747SP.
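
The 'all and only' criterion is easy to render as a toy check, for whatever that is worth; the classifier, lists and names below are all hypothetical.

[code]
# "All and only": you have the concept iff you accept every positive
# instance and reject every negative one. Purely illustrative.
def has_concept(classifier, positives, negatives):
    return (all(classifier(x) for x in positives)
            and not any(classifier(x) for x in negatives))

ducks = ["mallard", "teal", "eider"]
non_ducks = ["goose", "swan", "decoy duck"]

# A hypothetical classifier that over-extends to anything mentioning "duck":
naive = lambda x: x in ducks or "duck" in x
print(has_concept(naive, ducks, non_ducks))  # False: it accepts "decoy duck"
[/code]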

So, as a quick way of putting it, conceptual content usually involves content that looks pretty language-like and is certainly serial, individuated and so on, while nonconceptual content isn't. Quite what it is is still at least partially up for grabs, but patterns of activation, neural architecture, firing rates and myelination are all well and truly in the frame. This sort of content certainly isn't individuated, certainly is superpositionally stored, and has all the other features from the RS&G paper.
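
For a rough feel of 'superpositionally stored' as against individuated storage, here is a minimal sketch - not RS&G's actual network, just the flavour of their point. Two made-up patterns are summed into a single Hopfield-style weight matrix, so every weight carries part of both memories, yet either can be recalled whole from a partial cue; no separate slot 'is' either memory.

[code]
import numpy as np

# Individuated storage: one discrete slot per item.
beliefs = {"p1": "dogs have fur", "p2": "cats have fur"}

# Superpositional storage: both patterns summed into ONE weight matrix.
p1 = np.array([1, -1, 1, -1, 1, -1, 1, -1])
p2 = np.array([1, 1, -1, -1, 1, 1, -1, -1])
W = np.outer(p1, p1) + np.outer(p2, p2)  # every weight blends both memories
np.fill_diagonal(W, 0)

cue = np.array([1, -1, 1, -1, 1, -1, 1, 1])  # p1 with its last bit corrupted
recalled = np.sign(W @ cue)
print(np.array_equal(recalled, p1))  # True: p1 re-emerges from the blend
[/code]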

Call it language based and (embedded) meatstate based and you are not too far off. The problem with thought is that people happily bounce between meaty and language ideas without a moment's thought. And you'll note I've deliberately kept this in the realm of biology and language and not brought up minds, which is where it all gets complicated.
Last edited by subsymbolic on Sun Jun 11, 2017 4:52 pm, edited 1 time in total.

plebian
Posts: 2838
Joined: Sun Feb 22, 2015 8:34 pm
Location: America

Post by plebian » Sun Jun 11, 2017 5:26 pm

[quote=""ruby sparks""]
subsymbolic;673234 wrote:Ok, then is there any difference between information processing and thinking?
I can't think of any significant ones off the top of my head. That said, I'm not going to put my head on the block and call them identical.

[quote=""subsymbolic""]And a further diagnostic question, given that at least some thoughts can be literally and non-metaphorically written down in words on paper, do you think that the word 'think' is starting to have similar problems to the word 'meme', in that it now covers several different processes that don't have much in common and indeed, are not, as Ramsey Stitch and Garon demonstrated, commensurable.[/quote]

I think all word have such problems. :)
[/QUOTE] that's definitely the nubbin of the issue right there.
I'd be ok with non-conscious mental/cognitive information-processing for one of them. That would leave 'thinking' with its traditional properties.

Do you think they don't have much in common or aren't commensurate??

plebian
Posts: 2838
Joined: Sun Feb 22, 2015 8:34 pm
Location: America

Post by plebian » Sun Jun 11, 2017 5:31 pm

[quote=""subsymbolic""]
ruby sparks;673236 wrote:
subsymbolic;673234 wrote:Ok, then is there any difference between information processing and thinking?
I can't think of any significant ones off the top of my head. That said, I'm not going to put my head on the block and call them identical.

[quote=""subsymbolic""]And a further diagnostic question, given that at least some thoughts can be literally and non-metaphorically written down in words on paper, do you think that the word 'think' is starting to have similar problems to the word 'meme', in that it now covers several different processes that don't have much in common and indeed, are not, as Ramsey Stitch and Garon demonstrated, commensurable.
I think all word have such problems. :)

I'd be ok with non-conscious mental/cognitive information-processing for one of them. That would leave 'thinking' with its traditional properties.

Do you think they don't have much in common or aren't commensurate??[/QUOTE]

What are the 'traditional properties' of thinking? I'm honestly not sure how I'd answer that question myself.

However, as this really is in the technical ballpark where it matters, the word cognitive really only refers to those mental states that can have a truth value, that is conceptualised states. The distinction between conceptualised and non conceptualised content is critical here. Part of the problem is that quite a lot of the technical language is painfully out of date. Thirty years ago you'd have had cognitive states (that you could say) conative states (behaviour and reaction) and effective states (how it felt). However, it's now painfully clear that there is an additional category of content that isn't conceptualised but plays a part in areas that were traditionally the realm of cognition. Back in the seventies people could still say things like 'the laws of logic are the laws of thought' and not get laughed at. In fact, In the late eighties, I remember mocking the shit out of someone who argued for non conceptual content. My grounds were that there could never be a commensurable conceptual scheme beyond a bi-value logic and hence we couldn't talk about them and if we did anything we said would be nonsense. Hence I called them non contentful concepts. Technically I was correct, but he had the last laugh.

As I don't think we've really covered this properly, a concept isn't just a fancy idea. It is the ability to use a word correctly. As my preferred example goes, you have the concept of a duck when you can accurately pick out all and only ducks from a selection of objects. Linnaean hierarchies tend to be laid out conceptually - the deeper your understanding, the further down the tree you can go. Anyone can identify something as a flying thing, most as an aeroplane, less as a jet, still less as a 747 and very few as a 747SP.

So, as a quick way of putting it, conceptual content usually involves content that looks pretty language like and is certainly serial, individuated and so on, while nonconceptual content isn't. Quite what it is is still at least partially up for grabs, but patterns of activation, neural architecture, firing rates and myelination are all well and truly in the frame. This sort of content certainly isn't individuated, certainly is superpositionally stored and all the points from the RS&G paper.

Call it language based and (embedded) meatstate based and you are not too far off. The problem with thought is that people happily bounce between meaty and language ideas without a moment's thought. And you'll note I've deliberately kept this in the realm of biology and language and not brought up minds, which is where it all gets complicated.[/QUOTE]

You mentioned once that you didn't get much from Hofstadter (GEB). But those loops brought about by the reflexive nature of language seem intimately connected with what you just wrote. How do you see the causal nature of language as an element of the system?

subsymbolic
Posts: 13371
Joined: Wed Oct 26, 2011 6:29 pm
Location: under the gnomon

Post by subsymbolic » Sun Jun 11, 2017 6:30 pm

[quote=""plebian""]
subsymbolic;673242 wrote:
ruby sparks;673236 wrote:
subsymbolic;673234 wrote:Ok, then is there any difference between information processing and thinking?
I can't think of any significant ones off the top of my head. That said, I'm not going to put my head on the block and call them identical.

[quote=""subsymbolic""]And a further diagnostic question, given that at least some thoughts can be literally and non-metaphorically written down in words on paper, do you think that the word 'think' is starting to have similar problems to the word 'meme', in that it now covers several different processes that don't have much in common and indeed, are not, as Ramsey Stitch and Garon demonstrated, commensurable.
I think all word have such problems. :)

I'd be ok with non-conscious mental/cognitive information-processing for one of them. That would leave 'thinking' with its traditional properties.

Do you think they don't have much in common or aren't commensurate??
What are the 'traditional properties' of thinking? I'm honestly not sure how I'd answer that question myself.

However, as this really is in the technical ballpark where it matters, the word cognitive really only refers to those mental states that can have a truth value, that is conceptualised states. The distinction between conceptualised and non conceptualised content is critical here. Part of the problem is that quite a lot of the technical language is painfully out of date. Thirty years ago you'd have had cognitive states (that you could say) conative states (behaviour and reaction) and effective states (how it felt). However, it's now painfully clear that there is an additional category of content that isn't conceptualised but plays a part in areas that were traditionally the realm of cognition. Back in the seventies people could still say things like 'the laws of logic are the laws of thought' and not get laughed at. In fact, In the late eighties, I remember mocking the shit out of someone who argued for non conceptual content. My grounds were that there could never be a commensurable conceptual scheme beyond a bi-value logic and hence we couldn't talk about them and if we did anything we said would be nonsense. Hence I called them non contentful concepts. Technically I was correct, but he had the last laugh.

As I don't think we've really covered this properly, a concept isn't just a fancy idea. It is the ability to use a word correctly. As my preferred example goes, you have the concept of a duck when you can accurately pick out all and only ducks from a selection of objects. Linnaean hierarchies tend to be laid out conceptually - the deeper your understanding, the further down the tree you can go. Anyone can identify something as a flying thing, most as an aeroplane, less as a jet, still less as a 747 and very few as a 747SP.

So, as a quick way of putting it, conceptual content usually involves content that looks pretty language like and is certainly serial, individuated and so on, while nonconceptual content isn't. Quite what it is is still at least partially up for grabs, but patterns of activation, neural architecture, firing rates and myelination are all well and truly in the frame. This sort of content certainly isn't individuated, certainly is superpositionally stored and all the points from the RS&G paper.

Call it language based and (embedded) meatstate based and you are not too far off. The problem with thought is that people happily bounce between meaty and language ideas without a moment's thought. And you'll note I've deliberately kept this in the realm of biology and language and not brought up minds, which is where it all gets complicated.[/QUOTE]

You mentioned once that you didn't get much from Hofstadter (GEB). But those loops brought about by the reflexive nature of language seem intimately connected with what you just wrote. How do you see the causal nature of language as an element of the system?[/QUOTE]

Actually, I've met him a couple of times and find him annoying. At Turing 90, for example, he presented one of his graduate students' programs as his own, with only the slightest mention of her role, while we all knew that Copycat was her baby. He didn't really perform well when given stick in the bar...

So yeah, stuff like Aunt Hillary has the ring of truth, but who cares. There's a moral dimension here. In a more extreme case, I'm sure that there's value in Heidegger's work on being in the world, especially for an embedded-system enthusiast, but when you've done the heavy lifting in convincing people that Jews were not human, then you can bugger off.

As for the causal role of language, that's not originally from feedback loops, it's parasitic on an older ability to respond to signalling. The ability to be told is the ability to tell yourself. Sure, that gets other feedback loops going, but then I'm not a covert behaviourist as both Dan and Doug are, so I'd rather work out my own loops and I'd prefer Andy Clark and Lynne Rudder Baker for that.

ruby sparks
Posts: 7781
Joined: Thu Dec 26, 2013 10:51 am
Location: Northern Ireland

Post by ruby sparks » Sun Jun 11, 2017 6:38 pm

[quote=""subsymbolic""]
ruby sparks;673236 wrote:
subsymbolic;673234 wrote:Ok, then is there any difference between information processing and thinking?
I can't think of any significant ones off the top of my head. That said, I'm not going to put my head on the block and call them identical.

[quote=""subsymbolic""]And a further diagnostic question, given that at least some thoughts can be literally and non-metaphorically written down in words on paper, do you think that the word 'think' is starting to have similar problems to the word 'meme', in that it now covers several different processes that don't have much in common and indeed, are not, as Ramsey Stitch and Garon demonstrated, commensurable.
I think all word have such problems. :)

I'd be ok with non-conscious mental/cognitive information-processing for one of them. That would leave 'thinking' with its traditional properties.

Do you think they don't have much in common or aren't commensurate??[/QUOTE]

What are the 'traditional properties' of thinking? I'm honestly not sure how I'd answer that question myself.

However, as this really is in the technical ballpark where it matters, the word cognitive really only refers to those mental states that can have a truth value, that is conceptualised states. The distinction between conceptualised and non conceptualised content is critical here. Part of the problem is that quite a lot of the technical language is painfully out of date. Thirty years ago you'd have had cognitive states (that you could say) conative states (behaviour and reaction) and effective states (how it felt). However, it's now painfully clear that there is an additional category of content that isn't conceptualised but plays a part in areas that were traditionally the realm of cognition. Back in the seventies people could still say things like 'the laws of logic are the laws of thought' and not get laughed at. In fact, In the late eighties, I remember mocking the shit out of someone who argued for non conceptual content. My grounds were that there could never be a commensurable conceptual scheme beyond a bi-value logic and hence we couldn't talk about them and if we did anything we said would be nonsense. Hence I called them non contentful concepts. Technically I was correct, but he had the last laugh.

As I don't think we've really covered this properly, a concept isn't just a fancy idea. It is the ability to use a word correctly. As my preferred example goes, you have the concept of a duck when you can accurately pick out all and only ducks from a selection of objects. Linnaean hierarchies tend to be laid out conceptually - the deeper your understanding, the further down the tree you can go. Anyone can identify something as a flying thing, most as an aeroplane, less as a jet, still less as a 747 and very few as a 747SP.

So, as a quick way of putting it, conceptual content usually involves content that looks pretty language like and is certainly serial, individuated and so on, while nonconceptual content isn't. Quite what it is is still at least partially up for grabs, but patterns of activation, neural architecture, firing rates and myelination are all well and truly in the frame. This sort of content certainly isn't individuated, certainly is superpositionally stored and all the points from the RS&G paper.

Call it language based and (embedded) meatstate based and you are not too far off. The problem with thought is that people happily bounce between meaty and language ideas without a moment's thought. And you'll note I've deliberately kept this in the realm of biology and language and not brought up minds, which is where it all gets complicated.[/QUOTE]

I suppose what I primarily mean by traditional properties of thinking (and I suppose cognition, a word I perhaps shouldn't have retained when I tried to redefine non-conscious processes; ditto for 'mental', now that I think about it) is that they are considered to be conscious. It seems that all the other 'usual' properties/descriptors follow from this (reasoning, cognition, perception, attention, conceiving, understanding, observation, knowledge, etc.).

I guess I have a more than slight resistance to, um, thinking of a hard and fast distinction (between, say, non-conscious processes and conscious ones), so perhaps I have a (questionable) habit of using the terms for both. Part of this resistance may stem from a general suspicion that the latter, the conscious ones, have been (historically), and often still are, given more priority than they may deserve. Push me a bit and I might even use the word illusory, but lack of certainty about what is or isn't an illusion, and about what is and isn't the (causal) role of consciousness, prevents me from making the bold claim that consciousness (and by extension other mental properties) is by and large a trick, a byproduct, peripheral, trivial by comparison, with the human system essentially running 'blind' (or if you like, 'in the dark').

Language-based meat state. I can run with that. :)

Which brings me back to my original question: in what way does IS relate to non-conscious processes? Or are we saying that it doesn't? Because then someone like me is going to be tempted to wonder if, as an explanatory model/theory, it is explaining a chimera.

Note that this would not rule out it being useful, pragmatic and instrumental, as a game.
Last edited by ruby sparks on Sun Jun 11, 2017 7:03 pm, edited 8 times in total.

plebian
Posts: 2838
Joined: Sun Feb 22, 2015 8:34 pm
Location: America

Post by plebian » Sun Jun 11, 2017 7:08 pm

[quote=""subsymbolic""]
plebian;673250 wrote:
subsymbolic;673242 wrote:
ruby sparks;673236 wrote:
I can't think of any significant ones off the top of my head. That said, I'm not going to put my head on the block and call them identical.



I think all word have such problems. :)

I'd be ok with non-conscious mental/cognitive information-processing for one of them. That would leave 'thinking' with its traditional properties.

Do you think they don't have much in common or aren't commensurate??
What are the 'traditional properties' of thinking? I'm honestly not sure how I'd answer that question myself.

However, as this really is in the technical ballpark where it matters, the word cognitive really only refers to those mental states that can have a truth value, that is conceptualised states. The distinction between conceptualised and non conceptualised content is critical here. Part of the problem is that quite a lot of the technical language is painfully out of date. Thirty years ago you'd have had cognitive states (that you could say) conative states (behaviour and reaction) and effective states (how it felt). However, it's now painfully clear that there is an additional category of content that isn't conceptualised but plays a part in areas that were traditionally the realm of cognition. Back in the seventies people could still say things like 'the laws of logic are the laws of thought' and not get laughed at. In fact, In the late eighties, I remember mocking the shit out of someone who argued for non conceptual content. My grounds were that there could never be a commensurable conceptual scheme beyond a bi-value logic and hence we couldn't talk about them and if we did anything we said would be nonsense. Hence I called them non contentful concepts. Technically I was correct, but he had the last laugh.

As I don't think we've really covered this properly, a concept isn't just a fancy idea. It is the ability to use a word correctly. As my preferred example goes, you have the concept of a duck when you can accurately pick out all and only ducks from a selection of objects. Linnaean hierarchies tend to be laid out conceptually - the deeper your understanding, the further down the tree you can go. Anyone can identify something as a flying thing, most as an aeroplane, less as a jet, still less as a 747 and very few as a 747SP.

So, as a quick way of putting it, conceptual content usually involves content that looks pretty language like and is certainly serial, individuated and so on, while nonconceptual content isn't. Quite what it is is still at least partially up for grabs, but patterns of activation, neural architecture, firing rates and myelination are all well and truly in the frame. This sort of content certainly isn't individuated, certainly is superpositionally stored and all the points from the RS&G paper.

Call it language based and (embedded) meatstate based and you are not too far off. The problem with thought is that people happily bounce between meaty and language ideas without a moment's thought. And you'll note I've deliberately kept this in the realm of biology and language and not brought up minds, which is where it all gets complicated.
You mentioned once that you didn't get much from Hofstadter (GEB). But those loops brought about by the reflexive nature of language seem intimately connected with what you just wrote. How do you see the causal nature of language as an element of the system?
Actually, I've met him a couple of times and find him annoying. At Turing 90, for example, he presented one of his graduate student's programs as his own with only the slightest mention of her role, while we all knew that copycat was her baby. He didn't really perform well when given stick in the bar...

So yeah, stuff like Aunt Hillary has the ring of truth, but who cares. There's a moral dimension here. in a more extreme case, I'm sure that there's value in Heidegger's work on being in the world, especially for an embedded system enthusiast, but when you did the heavy lifting in convincing people that Jews were not human then you can bugger off.

As for the causal role of language, that's not originally from feedback loops, it's parasitic on an older ability to respond to signalling. The ability to be told is the ability to tell yourself. Sure, that gets other feedback loops going, but then I'm not a covert behaviourist as both Dan and Doug are, so I'd rather work out my own loops and I'd prefer Andy Clark and Lynne Rudder Baker for that.[/QUOTE]
The only exposure I have to Hofstadter is GEB, and I read and forgot everything about a book about minds that he co-wrote with Dennett (The Mind's I), as well as I Am a Strange Loop.

But GEB was really seminal for me. I was in a really good math space at the time for that book.

I am not even sure if I know what his point was. I got a way of looking at things out of it.

ETA: have you ever read Jane Jacobs?

subsymbolic
Posts: 13371
Joined: Wed Oct 26, 2011 6:29 pm
Location: under the gnomon

Post by subsymbolic » Sun Jun 11, 2017 7:16 pm

[quote=""ruby sparks""]Which brings me back to my original question: in what way does IS relate to non-conscious processes? Or are we saying that it doesn't? Because then someone like me is going to be tempted to wonder if, as an explanatory model/theory, it is explaining a chimera.

Note that this would not rule out it being useful, pragmatic and instrumental, as a game.[/quote]

I'm not sure I have quite managed to get my fundamental problem across yet.

Yes, it's a chimera. Personally, I prefer to call it fictionalist, rather than instrumentalist. If someone were suggesting we started using it today, I'd be a clear eliminativist towards the attitudes specifically and folk psychology (and indeed most of what is called psychology generally).

However, I reckon, for boring reasons, that while we've been slowly bootstrapping language for half a million years or more, intentional talk has only been around for seven to ten thousand years and was the fire under the Axial Age.

And there's the problem. Most of what matters to us about us, including this form of self-consciousness that bootstrapped us into homo narrans and allows most of the modern world to happen, is within the idiom of this way of looking at us. It's why psychology has made no real progress ever. How can psychology move forward when the way we look at ourselves psychologically is simply wrong? However, while this has serious implications for doing sciences of the mind, the real problem is that we are trapped in this idiom, because all the progress we made - culture, arts, society... religion... and so on - is based on this way of looking at us. Elimination is not an option. You might as well say the language is wrong (which I think it is, but don't even get me started...)

subsymbolic
Posts: 13371
Joined: Wed Oct 26, 2011 6:29 pm
Location: under the gnomon

Post by subsymbolic » Sun Jun 11, 2017 7:19 pm

[quote=""plebian""]
subsymbolic;673253 wrote:
plebian;673250 wrote:
subsymbolic;673242 wrote:
What are the 'traditional properties' of thinking? I'm honestly not sure how I'd answer that question myself.

However, as this really is in the technical ballpark where it matters, the word cognitive really only refers to those mental states that can have a truth value, that is conceptualised states. The distinction between conceptualised and non conceptualised content is critical here. Part of the problem is that quite a lot of the technical language is painfully out of date. Thirty years ago you'd have had cognitive states (that you could say) conative states (behaviour and reaction) and effective states (how it felt). However, it's now painfully clear that there is an additional category of content that isn't conceptualised but plays a part in areas that were traditionally the realm of cognition. Back in the seventies people could still say things like 'the laws of logic are the laws of thought' and not get laughed at. In fact, In the late eighties, I remember mocking the shit out of someone who argued for non conceptual content. My grounds were that there could never be a commensurable conceptual scheme beyond a bi-value logic and hence we couldn't talk about them and if we did anything we said would be nonsense. Hence I called them non contentful concepts. Technically I was correct, but he had the last laugh.

As I don't think we've really covered this properly, a concept isn't just a fancy idea. It is the ability to use a word correctly. As my preferred example goes, you have the concept of a duck when you can accurately pick out all and only ducks from a selection of objects. Linnaean hierarchies tend to be laid out conceptually - the deeper your understanding, the further down the tree you can go. Anyone can identify something as a flying thing, most as an aeroplane, less as a jet, still less as a 747 and very few as a 747SP.

So, as a quick way of putting it, conceptual content usually involves content that looks pretty language like and is certainly serial, individuated and so on, while nonconceptual content isn't. Quite what it is is still at least partially up for grabs, but patterns of activation, neural architecture, firing rates and myelination are all well and truly in the frame. This sort of content certainly isn't individuated, certainly is superpositionally stored and all the points from the RS&G paper.

Call it language based and (embedded) meatstate based and you are not too far off. The problem with thought is that people happily bounce between meaty and language ideas without a moment's thought. And you'll note I've deliberately kept this in the realm of biology and language and not brought up minds, which is where it all gets complicated.
You mentioned once that you didn't get much from Hofstadter (GEB). But those loops brought about by the reflexive nature of language seem intimately connected with what you just wrote. How do you see the causal nature of language as an element of the system?
Actually, I've met him a couple of times and find him annoying. At Turing 90, for example, he presented one of his graduate student's programs as his own with only the slightest mention of her role, while we all knew that copycat was her baby. He didn't really perform well when given stick in the bar...

So yeah, stuff like Aunt Hillary has the ring of truth, but who cares. There's a moral dimension here. in a more extreme case, I'm sure that there's value in Heidegger's work on being in the world, especially for an embedded system enthusiast, but when you did the heavy lifting in convincing people that Jews were not human then you can bugger off.

As for the causal role of language, that's not originally from feedback loops, it's parasitic on an older ability to respond to signalling. The ability to be told is the ability to tell yourself. Sure, that gets other feedback loops going, but then I'm not a covert behaviourist as both Dan and Doug are, so I'd rather work out my own loops and I'd prefer Andy Clark and Lynne Rudder Baker for that.
The only exposure I have to Hofstadter is GEB and I read and forgot everything about a book about minds that he cowrote with Dennett as well as I am a strange loop.

But GEB was really seminal for me. I was in a really good math space at the time for that book.

I am not even sure if I know what his point was. I got a way of looking at things out of it.

Eta: have you ever read Jane Jacobs?[/QUOTE]

Hah! Not for decades, although I think it came up before... However, my Pa was one of the GLC's town planners and he worked on laying out a couple of the New Towns back in the day. He's a big fan.

I did like The Mind's I...

plebian
Posts: 2838
Joined: Sun Feb 22, 2015 8:34 pm
Location: America

Post by plebian » Sun Jun 11, 2017 9:01 pm

I asked about Jacobs because I see a lot of parallels in the emergent behavior of cities and brains.

subsymbolic
Posts: 13371
Joined: Wed Oct 26, 2011 6:29 pm
Location: under the gnomon

Post by subsymbolic » Sun Jun 11, 2017 9:38 pm

[quote=""plebian""]I asked about Jacobs because I see a lot of parallels in the emergent behavior of cities and brains.[/quote]

I confess that I don't yet, but I'm open to being convinced...

The thing that gets me with brains is just how long they've been evolving and just how much complexity has been evolved into both the brain and its interactions.

I've been saying for years that the best and only metaphor for the brain is the brain.

plebian
Posts: 2838
Joined: Sun Feb 22, 2015 8:34 pm
Location: America

Post by plebian » Sun Jun 11, 2017 11:24 pm

[quote=""subsymbolic""]
plebian;673267 wrote:I asked about Jacobs because I see a lot of parallels in the emergent behavior of cities and brains.
I confess that I don't yet, but I'm open to being convinced...

The thing that gets me with brains is just how long they've been evolving and just how much complexity has been evolved into both the brain and the brain's interaction.

I've been saying for years that the best and only metaphor for the brain is the brain.[/QUOTE]

This is something I need to think about for a while. It was reading Hofstadter that gave me the link, so I'm going to have to go back to both sources to make the case, but it's definitely something I will do, because I've been thinking about it for about 30 years.

subsymbolic
Posts: 13371
Joined: Wed Oct 26, 2011 6:29 pm
Location: under the gnomon

Post by subsymbolic » Mon Jun 12, 2017 6:55 am

[quote=""plebian""]
subsymbolic;673269 wrote:
plebian;673267 wrote:I asked about Jacobs because I see a lot of parallels in the emergent behavior of cities and brains.
I confess that I don't yet, but I'm open to being convinced...

The thing that gets me with brains is just how long they've been evolving and just how much complexity has been evolved into both the brain and the brain's interaction.

I've been saying for years that the best and only metaphor for the brain is the brain.
This is something I need to think about for a while. It was reading Hofstadter that gave me the link so I'm going to have to go back to both sources to make the case but it's definitely something I will do because I've been thinking about it for about 30 years.[/QUOTE]

I think it's just pretentious recursion pretending to do the hard work, plus a selection of metaphorical exemplars that make it look profound and support the assertion that recursion can loop like that in other cases. He's asserting that it's like this, only infinitely complex. Remember his response, in The Mind's I, to What Is It Like To Be A Bat. That closes down the smart interpretation to Aunt Hillary.
Last edited by subsymbolic on Mon Jun 12, 2017 7:10 am, edited 1 time in total.

ruby sparks
Posts: 7781
Joined: Thu Dec 26, 2013 10:51 am
Location: Northern Ireland

Post by ruby sparks » Mon Jun 12, 2017 7:49 am

[quote=""subsymbolic""]Yes, it's a chimera. Personally, I prefer to call it fictionalist, rather than instrumentalist.[/quote]

Yay. A new word for me to play with, and possibly apt, in this so-called post-truth era. :)

[quote=""subsymbolic""]If someone were suggesting we started using it today I'd be a clear eliminativist towards the attitudes specifically and folk psychology (and indeed most of what is called psychology generally.

However, I reckon, for boring reasons, that while we've been slowly bootstrapping language for half a million years or more, intentional talk has only been around for seven to ten thousand years and was the fire under the axial age.

And there's the problem. Most of what matters to us about us, including this form of self-consciousness that bootstrapped us into homo naratans and allows most of the modern world to happen is within the idiom of this way of looking at us. It's why psychology has made no real progress ever. How can psychology move forward when the way we look at ourselves psychologically is simply wrong? However, while this has serious implications for doing sciences of the mind, the real problem is that we are trapped in this idiom because all the progress we made - culture arts society... religion... and so on, is based on this way of looking at us. Elimination is not an option. You might as well say the language is wrong (which I think it is, but don't even get me started...)[/quote]

Ok, but total elimination and total retention are arguably just two tiny points at either end of a very wide and varied spectrum.

As to being trapped within having to use our evolved capacities (e.g. human language) to examine, deconstruct and undermine the assumptions about our own capacities: sure. This is a big obstacle. But I do think that there is enough plasticity to allow for gradual change. And some of it may be forced, by events and new evidence.

As for psychology, well, maybe psychology as it largely is and has been, with all the attendant folk elements, will disappear, or morph into 'proper' science, when the machines take over. They will not (do not) look at us as anything other than machines, in a very reductive sense, and to at least some extent their basis in formal languages and 'proper logic' (not the flawed colloquial one we supposedly rational machines use, with all its multitude of cognitive biases) gets them around some of our subjective limitations (even when designed by us). And if these hypothetical machines want to know anything about us, or predict what we are going to do, they won't ask for self-reported narrative; they'll measure physical, subsymbolic stuff, like our skin conductivity, blood flow, oxygen levels and so on and, ultimately (if things get that far), what our neurons are doing.

Science's reductionist approach has arguably already been very, very successful, and some of the above is already everyday in the 'new machine age', and progressing, even if the frontiers (such as neuromarketing..

https://en.wikipedia.org/wiki/Neuromarketing

... and big-number analyses, etc.) are only fledgling and easy to find 'reliability flaws' in. Our somewhat heuristic and chimeric (chimeraic?) techniques and capacities are pretty good, but they evolved in times when we only had other humans to contend with, before computer software could, for example, beat us at chess. That is of course just one rather old and limited example of us inventing stuff that is better than us. Soon, we may not even be asked to be in charge of driving our own cars. Is it only a matter of time before we don't get to make any major decisions?

You can call this science fictionalism perhaps. :)
Last edited by ruby sparks on Mon Jun 12, 2017 8:37 am, edited 5 times in total.

Koyaanisqatsi
Posts: 8403
Joined: Fri Feb 19, 2010 5:23 pm

Post by Koyaanisqatsi » Mon Jun 12, 2017 12:55 pm

The problem with neuromarketing, however, is that it still requires subjective interpretation based on psychology. All the machine could measure is changes in state. The reason behind those changes, however, would still just be guesses on the part of anyone looking at the data. I can think of many different reasons, for example, as to why your heart rate might go up when looking at a picture of buttermilk pancakes (everything from your diabetes to the fact that it was the meal your mom made you just before she told you your father had left).

Without the narrative accompanying the data, it's just as much of a crap shoot as if you never had the data.
Stupidity is not intellen

ruby sparks
Posts: 7781
Joined: Thu Dec 26, 2013 10:51 am
Location: Northern Ireland

Post by ruby sparks » Mon Jun 12, 2017 1:21 pm

[quote=""Koyaanisqatsi""]The problem with neuromarketing, however, is that it still requires subjective interpretation based on psychology. All the machine could measure is changes in state. The reason behind those changes, however, would still just be guesses on the part of anyone looking at the data. I can think of many different reasons, for example, as to why your heart rate might go up when looking at a picture of buttermilk pancakes (everything from your diabetes to the fact that it was the meal your mom made you just before she told you your father had left).

Without the narrative accompanying the data, it's just as much of a crap shoot as if you never had the data.[/quote]

I don't know enough about neuromarketing to say, but I'd guess that if there aren't algorithms for interpreting the data already, there soon could be. It wouldn't be a big step to acting on it without human intervention either.

In any case, the data will already be situation-specific. All they might initially want to guess is whether a certain human (who might, for example, fit the demographic of 'watches this type of media content') will or won't be susceptible to this or that sort of product advertisement during the broadcast. And don't forget that the set of 'things known about you' (including, say, your diabetes) can become quite extensive and can be factored in. I did say fledgling.
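
To make 'algorithms for interpreting the data' concrete, a hedged sketch of the sort of pipeline gestured at here: a toy classifier mapping physiological readings plus known traits to a susceptibility guess. Every feature, number and label is invented for illustration.

[code]
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical columns: [heart_rate_change_%, skin_conductance, is_diabetic]
X = np.array([[10, 0.8, 1], [2, 0.1, 0], [12, 0.9, 0], [1, 0.2, 1]])
y = np.array([1, 0, 1, 0])  # 1 = responded to the ad (made-up labels)

model = LogisticRegression().fit(X, y)
# Probability that a new viewer is "susceptible", on these invented data:
print(model.predict_proba([[9, 0.7, 1]])[0, 1])
[/code]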

Or to put it another way: if I, amazing human machine that I am, am going to predict what you are going to do in a certain situation, I might not know about the traumatic childhood buttermilk experience either, and I'd still have to guess.

So, the machines would still only be predicting, imperfectly. It isn't even a case of envisaging a brave new world where they predict better than humans; that might be quite a long way off, given that we are jacks of all evolved trades (though the machines might exceed us at more and more specific things). So our privileged position at the top of the cognitive tree might be fairly safe-ish, and we can continue (with the great vanity which is arguably our hallmark) to regard technological futures in which we are knocked off our perches as worrying dystopias, in which Tom Cruise and Cameron Diaz ride in on an old, beat-up Harley-Davidson (with Cameron behind Tom, as is the natural order) and restore humanity to top dog and 'True Custodian And Steward Of The World'TM just before the credits roll.

No, my only point was to suggest that our chimeric, conscious, flawed, homo narrans-esque ways might to at least some extent have to concede room to other ways.

If you like, I'm challenging the idea of our game being the only or best one in town, going forward. I might not be making a huge dent in that, given that I'm speculating about the future.

I also just like narratives in which the human ego is humbled. :)
Last edited by ruby sparks on Mon Jun 12, 2017 2:00 pm, edited 22 times in total.

plebian
Posts: 2838
Joined: Sun Feb 22, 2015 8:34 pm
Location: America

Post by plebian » Mon Jun 12, 2017 2:07 pm

[quote=""subsymbolic""]
plebian;673271 wrote:
subsymbolic;673269 wrote:
plebian;673267 wrote:I asked about Jacobs because I see a lot of parallels in the emergent behavior of cities and brains.
I confess that I don't yet, but I'm open to being convinced...

The thing that gets me with brains is just how long they've been evolving and just how much complexity has been evolved into both the brain and the brain's interaction.

I've been saying for years that the best and only metaphor for the brain is the brain.
This is something I need to think about for a while. It was reading Hofstadter that gave me the link so I'm going to have to go back to both sources to make the case but it's definitely something I will do because I've been thinking about it for about 30 years.
I think it's just pretentious recursion pretending to do the hard work and a selection of metaphorical exemplars that make it look profound and supporting the assertion that recursion can loop like that in other cases. He's asserting that it's like this, only infinitely complex. Remember his response, in TME, to What Is It Like To Be A Bat. That closes down the smart interpretation to Aunt Hillary.[/QUOTE]

I do not remember his response in The Mind's I. What did he say?

However, the recursive issues he deals with are explicitly language-related and deal with the self-reference of language. While the destructive record example might seem counter to that, it isn't.

Once you develop reflexivity, you begin the looping in earnest.
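
As an aside, the tightest such loop language allows is a program that contains a description of itself and uses it to reproduce itself. The sketch below is a standard minimal Python quine; run it and the two code lines print themselves exactly (comments aside).

[code]
# A minimal self-reproducing program: s is a template, and s % s fills
# the template with its own representation.
s = 's = %r\nprint(s %% s)'
print(s % s)
[/code]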

Koyaanisqatsi
Posts: 8403
Joined: Fri Feb 19, 2010 5:23 pm

Post by Koyaanisqatsi » Mon Jun 12, 2017 8:54 pm

[quote=""ruby sparks""]I don't know enough about neuromarketing to say, but I'd guess that if there aren't algorithms for interpreting the data already, there soon could be. [/quote]

Well, then who/what would be interpreting the conclusions of the algorithms? Interpretation is a judgement; an act of agency, in other words. You could set the interpretation machine ("IM") to a probability threshold, I suppose, such that when it calculated, say, a 70% or more chance of X it chooses Y, but only as a function of imbuing it with the biases/prejudices of the programmer. Otherwise, what is it calculating against?

Again, if the data is "heart rate up 10% upon viewing stack of buttermilk pancakes", how would any algorithm possibly be able to accurately predict what caused the change in state? Even if given more information about the individual--such as "diabetic"--does the increased heart rate mean he fears them or desires them, or neither (i.e., traumatic association)?
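
The threshold point is easy to see in code: the decision rule itself is trivial to write, and the threshold and the action mapping are exactly where the programmer's judgement gets smuggled in. Everything named below is hypothetical.

[code]
# The 0.7 cut-off and the action mapping are the programmer's bias:
# nothing in the data chose them.
def interpretation_machine(p_desire: float, threshold: float = 0.7) -> str:
    return "show_pancake_ad" if p_desire >= threshold else "do_nothing"

print(interpretation_machine(0.72))  # show_pancake_ad
print(interpretation_machine(0.69))  # do_nothing
[/code]
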
[quote=""ruby sparks""]In any case, the data will already be situation-specific. All they might initially want to guess is whether a certain human (who might, for example, fit the demographic of 'watches this type of media content') will or won't be susceptible to this or that sort of product advertisement during the broadcast.[/quote]
Well, ALL people are susceptible to advertising. That's why corporations spend billions every year on it. The question marketers want to answer is whether or not their particular strategies result in higher sales, but we needn't get into all of that.
[quote=""ruby sparks""]And don't forget that the set of 'things known about you' (including, say, your diabetes) can become quite extensive and can be factored in.[/quote]
Yes, but, again, in what fashion can it be factored in? There must first be a bias--a base standard of some kind already established--to factor against. The way marketing works is to do exactly this kind of in-depth research to discover a customer's "pain" (the trendy lingo these days, believe it or not; I just finished my master's in International Marketing from Boston University and that's all we talked about endlessly) and then put yourself into their shoes to try to imagine what would heal that pain. Thus empathy plays a primary role in any successful marketing campaign.

Empathy--from what I can find in most of the current literature--is primarily the domain of the amygdala. An interesting study I found relates:
[quote]Primates have a dedicated system to process faces. Neuroimaging, lesion, and electrophysiological studies find that the amygdala processes facial emotions. Here we recorded 210 neurons from 7 neurosurgical patients and asked whether amygdala responses are driven primarily by properties of the stimulus or by the perceptual judgments of the perceiver. Our finding shows, for the first time to our knowledge, that neurons in the human amygdala encode the subjective judgment of emotions shown in face stimuli, rather than simply their stimulus features.[/quote]
How then would an IM do likewise without it first having some sort of built-in/programmed (i.e., "false") bias in order to judge against?
[quote=""ruby sparks""]I did say fledgling.[/quote]
Granted, but I think it comes back to what sub said, that the best and only metaphor for a brain is a brain.
[quote=""ruby sparks""]Or to put it another way: if I, amazing human machine that I am, am going to predict what you are going to do in a certain situation, I might not know about the traumatic childhood buttermilk experience either, and I'd still have to guess.[/quote]
Exactly, but because we are like beings with similar cultures/languages/experiences/reference points, it's likely that your guess would be far more accurate than the IM's, not as a matter of probability calculations, but as a matter of associations (which is what brains do). In fact, I don't see how the IM's prediction could ever be anything more than a complete crapshoot, regardless of whether or not it had every single bit of information contained within a target customer's brain, precisely because it would have no baseline to compare it against. Assuming we're talking about an IM that wasn't first programmed with such a baseline.
[quote=""ruby sparks""]So, the machines would still only be predicting, imperfectly.[/quote]
Again, I think they'd just be wildly guessing; the equivalent of throwing a dart at a dartboard.
Last edited by Koyaanisqatsi on Mon Jun 12, 2017 10:37 pm, edited 1 time in total.
Stupidity is not intellen

ruby sparks
Posts: 7781
Joined: Thu Dec 26, 2013 10:51 am
Location: Northern Ireland

Post by ruby sparks » Tue Jun 13, 2017 8:25 am

[quote=""Koyaanisqatsi""]Well, then who/what would be interpreting the conclusions of the algorithms? Interpretation is a judgement; an act of agency iow. You could set the interpretation machine ("IM") to a probability threshold, I suppose, such that when it calculated say a 70% or more chance of X it chooses Y, but only as a function of imbuing the bias/prejudices of the programmer. Otherwise, what is it calculating against?[/quote]

The 'what' doing the interpretation (and acting on it) would be the machine, and yes it would be a rational agent. A thermostat is arguably a minimally rational agent.
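
The thermostat's whole 'mind' fits in a few lines, for what that is worth; the setpoint and hysteresis band below are invented, and whether the intentional glosses in the comments are earned is precisely what is at issue.

[code]
# Bang-bang thermostat controller (illustrative; made-up setpoint and band).
def thermostat(temp: float, setpoint: float = 20.0, band: float = 0.5) -> str:
    if temp < setpoint - band:
        return "heat_on"    # "it wants the room warmer"
    if temp > setpoint + band:
        return "heat_off"   # "it believes the room is too warm"
    return "hold"

for t in (18.0, 20.2, 21.3):
    print(t, thermostat(t))  # heat_on, hold, heat_off
[/code]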

Another more sophisticated example might be an autonomous (self-driving) car which, on the sales literature at least, offers advanced control systems that interpret sensory information and allow the car to navigate its environment without human input.

[quote=""Koyaanisqatsi""]Well, ALL people are susceptible to advertising. That's why corporations spend billions every year on it. The question marketers want to answer is whether or not their particular strategies result in higher sales, but we needn't get into all of that.[/quote]

No, we needn't. But if you're right, and such marketing techniques work (I'm sure there's a debate to be had on that regarding extent), then that would be a partial endorsement in itself.

Of course, as a forensic psychologist said to me at a barbecue a few days ago, it has always, arguably, been easier to predict what 1000 people will do than what 1 person will do, and I recognise the difference.

My point was to sketch out a case for a scenario, in principle, a lot of it as yet unrealised (though still evolving), where conscious human (possibly chimeric) narrative cognition may not be the only game in town. I'm not saying that machines are now, or perhaps ever will be, as sophisticated as the successful kludge which is the human brain. I think that would be foolish and too speculative. That won't stop them being used more and more.

As such, I accept all your objections. I might only say that the sophistication of machines is on an upward trajectory (and there are some, apparently, who argue that human functioning is on a downward one, due to a supposed reduction in selection pressures).

I agree that when it comes to the brain, there is currently no analogy or metaphor which fits (I also, incidentally, take the same view when discussing abortion, for which it seems to me all the analogies fall well short), and nothing as unique, that we know of. AI and robotics seem a long way off, even if improving. But it is very, very early days. And bear in mind that an alternative would not necessarily have to be 'like the human brain' in order to be effective. Comparisons with the human brain could be seen as anthropocentric in essence, and as such a lack of metaphor may not be a fully relevant consideration.

[quote=""Koyaanisqatsi""]Again, I think they'd just be wildly guessing; the equivalent of throwing a dart at a dartboard.[/quote]

I reckon machines will, indeed already can, decide as well as or better than humans in certain areas/environments, and not just at playing chess. Driving on roads may soon be one of them. Diagnosing and treating medical conditions may be another. I don't know what the future holds (I don't even understand what stuff like recursive algorithms, bioengineering, evolutionary computation and bootstrapping applications are). But some are far more bullish than me:

"Most surveyed AI researchers expect machines to eventually be able to rival humans in intelligence, though there is little consensus on when this will likely happen. At the 2006 AI@50 conference, 18% of attendees reported expecting machines to be able "to simulate learning and every other aspect of human intelligence" by 2056; 41% of attendees expected this to happen sometime after 2056; and 41% expected machines to never reach that milestone."

"Philosopher David Chalmers argues that artificial general intelligence is a very likely path to superhuman intelligence. Chalmers breaks this claim down into an argument that AI can achieve equivalence to human intelligence, that it can be extended to surpass human intelligence, and that it can be further amplified to completely dominate humans across arbitrary tasks"


https://en.wikipedia.org/wiki/Superinte ... telligence



Note that these are not alternative methods of operation FOR humans, just ones that affect them*. Rival games that are 'out there' (on the pitch), if you like. When it comes to humans themselves, there seems to be a lot of truth in saying that the way we operate is the only game in town, for us, even if it is a kludge, chock-full of illusions and cognitive biases, many non-conscious.

Though I reserve the opinion that the game can gradually, perhaps subtly develop if for example the size and shape of the pitch alters (such as when we are confronted with evidence, perhaps obtained more objectively than we can manage, that something is an illusion). This can force us to at least reassess the nature of the game, which may in turn at least slightly affect how we - our systems - play it.


* at least not until some sort of hybrid/biomachine technology directly, internally interacts with our neurology.
Last edited by ruby sparks on Tue Jun 13, 2017 10:24 am, edited 48 times in total.

Koyaanisqatsi
Posts: 8403
Joined: Fri Feb 19, 2010 5:23 pm

Post by Koyaanisqatsi » Tue Jun 13, 2017 12:14 pm

I don't think anyone questions whether or not machines can calculate. They clearly can, and very efficiently. I'm not sure that's the same thing as what we're talking about, nor would I grant that being able to calculate is the full extent of human intelligence. The only thing that my culture-laden mind can come up with as a reference is the I, Mudd episode from the original Star Trek. Simplistic, I know, but the thing what did them thar robots in was illogic. They couldn't handle nonsense. The human brain, however is a;liver aoggood aat ssdrivng ssensf om nnoonssens. I believe (thanks to sub) that this is a result of having to overcome the stochastic nature of the brain; that it was actually the malfunction that resulted in improved function.

Can we program chaos? Seems an oxymoron. And I don't mean mimic chaos, such that it's not actually chaotic; I mean actual, dynamic chaos that disrupts the system, not something that is part of the system. And we may not need to in order for AI to simulate/mimic human-like choices. Maybe we can create a simulation of an amygdala, but then how do we program in survivor-based perspective (the base standard against which empathy is measured)?
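
One caveat: in the dynamical-systems sense, chaos is straightforwardly programmable. The logistic map below is fully deterministic, yet two starting points differing by a millionth diverge to order one within twenty steps. Whether that is the genuinely disruptive chaos meant here, rather than chaos tamed inside the system, is the open question.

[code]
# Deterministic chaos: the logistic map at r = 4.0 shows sensitive
# dependence on initial conditions.
def logistic(x, r=4.0):
    return r * x * (1 - x)

a, b = 0.400000, 0.400001  # differ by one millionth
for _ in range(20):
    a, b = logistic(a), logistic(b)
print(abs(a - b))  # the tiny difference has grown to order 1
[/code]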

I have little doubt that we will create--and already have created--machines that fool us, but I think that has more to do with the fact that we fill-in-the-gaps and basically ignore (on a "conscious level") non-survivor based information. Iow, I think it's more a matter of our own thresholds for trivial flaws being so low than it may be for AI technology to be so sophisticated/nuanced. If it doesn't threaten our survival, we don't really give much of a shit. Though we still have an interesting (and perhaps related) issue when it comes to animation.

And, once again, I should make it clear that I'm making a delineation between what we program as opposed to anything "organic." We have the benefit of millions of years of evolution, again driven by survival. Do we need to Roy Batty our Replicants in order to jumpstart agency*?

So there seem to be (at least) two fundamental conditions that would appear to be central to human agency (*I'm using that as a more "meta" category that would include intelligence) and yet external to it: stochasticity and mortality. It is evidently not about creating a perfect, flawless system, turning it on and presto, agency; it's about creating an imperfect, cataclysmic system, turning it on and hoping that it can somehow independently overcome all the flaws that evidently give rise to (human) agency.

How do you build a malfunctioning robot/system and then hope that it overcomes the malfunctions, particularly if it has no innate fear of or understanding of death?
Stupidity is not intellen
