Sex Robot Dolls - What do they say about humans?

I believe that in academia, the working definition of AI is "whatever computers cannot do yet."

Obviously that comes from the fact that whenever computers reach a goal set for AI, the goal immediately shifts to something more difficult.

My impression of academic AI researchers is a bit different. Namely, they call AI whatever they have most recently made a machine do, and they predict that they can get a computer to do anything/everything else and more "within the next 5-10 years". This has been their constant view since at least the late 1960s.

-Ww
 
It's a subject I've been following, but unfortunately most of these studies suffer from one or both of the following:

  1. No actual standardization or definition of A.I. as a term
  2. Severe navel-gazing whargarbl of phrasing.

While this is roughly true, it does not mean that the issue isn't a real, important and extremely difficult one (we don't even understand much about human or animal sentience/consciousness). Indeed that (especially the "extremely difficult" part) is exactly why there is so much low quality work being done in this area. Most of the work is crappy in any very rapidly advancing area of science or technology. It is also true that in most areas of science the important work turns out to be very much better than the typical work being published; in other words the quality of "most of these studies" says little about what, if anything, is currently being achieved.

It's not really odd that the US Military/DoD is interested in this subject; they've always been big spenders on R&D in areas that don't look immediately or obviously related to military use.

Given the rapidly growing importance of "drone" weapons systems and "smart autonomous" ones, it is hardly surprising that the military is paying close attention to and sponsoring work on AI topics. That they are giving serious attention and funds to philosophers and ethicists is not something I would have expected, however.

Nor is it just the military; major IT firms are also investing heavily in these areas. Self-driving cars will face real-world versions of philosophical moral dilemmas (e.g., the trolley problem)...remarkable imo.
In fact, writing a reasonably good* personality simulator is very easy, pretty much trivial where the use case is as limited as it would be with a fuck-bot. No AGI, deep learning, or machine learning (as currently defined) is really required. It could be implemented on an Arduino or a custom board costing less than a Raspberry Pi, as long as you have at most 1 MB of flash for user-preference storage.
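To illustrate how little machinery such a limited "personality" needs, here is a minimal ELIZA-style sketch in Python: a few keyword rules plus a tiny user-preference store. Every rule, phrase, and field name here is a made-up placeholder for illustration, not anything from a real product.

```python
import re

def respond(text, prefs):
    """Tiny ELIZA-style responder: a couple of regex rules plus a
    user-preference dict small enough to fit in a microcontroller's flash."""
    m = re.search(r"\bmy name is (\w+)", text, re.IGNORECASE)
    if m:
        prefs["name"] = m.group(1)  # remember the user's name
        return f"Nice to meet you, {m.group(1)}."
    m = re.search(r"\bi feel (.+)", text, re.IGNORECASE)
    if m:
        # reflect the user's own words back at them: the classic ELIZA trick
        return f"Why do you feel {m.group(1)}?"
    if "name" in prefs:
        return f"Tell me more, {prefs['name']}."
    return "Tell me more."

prefs = {}
print(respond("My name is Sam", prefs))       # Nice to meet you, Sam.
print(respond("I feel lonely today", prefs))  # Why do you feel lonely today?
print(respond("whatever", prefs))             # Tell me more, Sam.
```

The whole thing is pattern matching and string templates; the "memory" is just a dict, which is why a sub-Raspberry-Pi board with a little flash would plausibly suffice.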

And as you said, the Turing test for sex toys would be very easy to pass. All the robot would need to do is take her shirt off and say "fuck me hard daddy", and any horny male would believe he is talking to a real human.

I agree that a sexbot "good enough" to satisfy horny guys, and thus to sell, does not need to be very sophisticated in an AI sense (the physical side will likely be more difficult). However, an AGI version (driven by the truly astonishingly versatile deep-learning approach or something else) would likely be very much more popular. It could customize its behavior to the user's tastes and moods in a way that would evolve as the "relationship" matured, and it would not need to be specialized to sex only but could do many other things for/with its user(s). That is, if a human-level AGI turns out to be achievable... as many experts believe it will be.

* "reasonably good" is actually far below what it takes to come close to passing a Turing test. Around the turn of the century, before they were even called chat bots, I used to write various responsive scripted bots on ELIZA engines that could easily keep users engaged for an hour.

That isn't actually how the Turing Test works (was defined), but I know what you mean, and yes, it is surprisingly easy to do for fairly limited periods of time.

-Ww
 
The real challenge for the AI development then is to see how long it takes before the first Real Doll leaves the guy and takes all his assets with her.

if Asimov's Laws get implemented in real life ... we might get the perfect partner ... the militaries will definitely object ... they can't use robots to be the perfect killing machines they want them to be.
 
While this is roughly true, it does not mean that the issue isn't a real, important and extremely difficult one (we don't even understand much about human or animal sentience/consciousness). Indeed that (especially the "extremely difficult" part) is exactly why there is so much low quality work being done in this area.

Of course it's a real thing, but philosophy requires agreement on terms; otherwise it might as well be a couple of stoned freshmen blathering on about whether She-Hulk gets her period and whether PMS makes her more brutal.

Given the rapidly growing importance of "drone" weapons systems and "smart autonomous" ones, it is hardly surprising that the military is paying close attention to and sponsoring work on AI topics. That they are giving serious attention and funds to philosophers and ethicists is not something I would have expected, however.

Not surprising at all. Every conflict the US has been involved in since the last quarter of the previous century has involved more and more lawyers. G.I. Jane/Joe can't even fart in the direction of another human (friend or foe) without a battalion of lawyers reviewing the matter. All these self-proclaimed ethicists and philosophers will be called upon by both sides of any conflict where computer-assisted kinetic conflict occurs.

However, an AGI version (driven by the truly astonishingly versatile DeepLearning approach or something else) would likely be very much more popular. It could customize its behavior to the user's tastes and moods in a way that would evolve as the "relationship" matured and would not need to be specialized to sex only but could do many other things for/with its user(s).

Doubt it. Customization of response is probably never going to reach AGI-type levels, since that would essentially defeat the purpose of a sex toy to begin with. Simple customization of physical and vocal response is already the province of junior programmers, no fancy ML tools required.

That is if a human level AGI turns out to be achievable...as many experts believe it will be.

In 5 to 10 years? :ROFLMAO:

if Asimov's Laws get implemented in real life ... we might get the perfect partner ... the militaries will definitely object ... they can't use robots to be the perfect killing machines they want them to be.

Asimov was cute in his day but is essentially irrelevant to the current situation.
 
Regarding whether this would be a "safe" outlet for uncivilized urges... I doubt it.

Multiple studies have shown that "venting" anger only makes it easier to snap, and I have no doubt this holds true for most if not all urges.
 
Of course its a real thing, but philosophy requires agreement of terms otherwise it might as well be a couple of stoned freshmen blathering on about if She Hulk gets her period and if PMS makes her more brutal.

I think you are missing the point in the context of the OP. Philosophers and ethicists do not have to achieve agreement nor even come up with rigorous arguments in order to influence public opinion (including religious opinion/leaders), lawmakers, courts, police etc. They simply have to come up with a school of thought that people judge to be the best and most practical available answer/understanding of some thorny moral issue/question. They have done so on many other topics in the past and will likely do so in this case as well.

That said, I agree with you that work in philosophy is unlikely to conclusively and permanently settle an issue such as the moral status of AIs; it has not achieved that with respect to far older and probably easier questions such as the moral status of animals (of various sorts) or the existence of free will. However, I am less disdainful than you of their efforts. These are not stupid or foolish people who fail to understand the importance of defining terms and the other requirements of rigorous thinking; rather, the problems are inherently quite challenging, and the concepts slippery and hard to pin down.

Not surprising at all. Every conflict the US has been involved in since the last quarter of the previous century has involved more and more lawyers. G.I. Jane/Joe can't even fart towards another human (friend or foe) without a battalion of lawyers reviewing the matter. All these self proclaimed ethicists and philosophers will be called upon by both sides of any conflict where computer assisted kinetic conflict occurs.

Perhaps so, but I do personally know some philosophers/ethicists who say that they haven't seen anything like it in the past and are surprised, more like astonished actually, at the money and attention pouring in on these abstract philosophical issues (and these folks are roughly in my generation, so they have been around a good while).

Btw, I think academic philosophers and such would take HUGE exception to being lumped into the same category as lawyers! :D

Doubt it. Customization of response is probably never going to get to AGI type levels in that such would essentially defeat the purpose of a sex toy to begin with. Simple customization of physical and vocal response is already the area of junior programmers without any fancy ML tools.

Here I flat out disagree with you (although it is a judgement call, imo). My strong hunch is that people will want, and pay for, all the adaptive behavioral and intuitive user "connections" in a sexbot/personal-bot that the technology can deliver. Your prediction (if I understand you correctly) that "simple customization of physical and vocal response" will be enough, and that there will be no demand for anything more, reminds me of predictions (which I heard from experts with my own ears) that no one would want or be willing to pay for fancy smartphones with all sorts of diverse functions; people literally mocked the idea at one time. This was in the era when basic "dumb" mobile phones, which let you make and receive calls and little more, were an exploding market. The skeptics often pointed to failed products like Apple's Newton "personal digital assistant" from a few years earlier to argue that no one would want to carry around something as complex as a personal computer all the time. Etc.

In 5 to 10 years? :ROFLMAO:

I'd say somewhere between 5 and 500 years in the future! :D If you have a look at the history of such predictions (by even the most qualified people... no idea of your qualifications in this area, of course), it is a truly compelling case for being humble about how well these things can be forecast.

See https://www.forbes.com/sites/robert...st-tech-predictions-of-all-time/#133aa9e31299 for some hilarious examples. And google something like "bad predictions of future technology" for many many more.

Asimov was cute in his day but is essentially irrelevant to the current situation.

I bet someone will build robots incorporating Asimov's Three Laws of Robotics in some way just to say it has been done and to see what they would be like in reality, but of course and as you say, they are unlikely to be particularly relevant because it would be so easy, easier actually, to build robots without them.

-Ww
 
but of course and as you say, they are unlikely to be particularly relevant because it would be so easy, easier actually, to build robots without them.

Easier, cheaper, and it is hard to build killer robots for the military if they have to follow Asimov's laws. The first civilian application will be sex, as always, but before that the military will see how robots with intelligence can be used to kill people.
 
I think you are missing the point in the context of the OP. Philosophers and ethicists do not have to achieve agreement nor even come up with rigorous arguments in order to influence public opinion (including religious opinion/leaders), lawmakers, courts, police etc. They simply have to come up with a school of thought that people judge to be the best and most practical available answer/understanding of some thorny moral issue/question.

Oh I most definitely got the point. See my previous comment on pearl clutching and feminists.

These are not stupid or foolish people

OK, not all of them anyway

that do not understand the importance of defining terms and other requirements of rigorous thinking; rather the problems are quite challenging inherently, and the concepts slippery and hard to pin down

But unfortunately, they don't even seem to be trying to define terms at the individual, much less the group, level, so again, I cannot take it too seriously at this point. Perhaps the DoD is throwing money at them to get them to get their ducks in a row.

Btw, I think academic philosophers and such would take HUGE exception to being lumped into the same category as lawyers! :D

Fuckem. At least with the military lawyers there's some consciousness of being a necessary evil. OTOH we'd be better off with fewer of both.

My strong hunch is that people will want and pay for all the adaptive behavioral and intuitive user "connections" in a sexbot/personalbot that the technology can deliver.

At the highest end of the market, likely. But mass-market fuck-bots will be fine with very simple programming. From an engineering POV, the more battery and processor you stick in something, the more heat and fire risk you have, not to mention the need for shielding, grounding, and isolation from fluids. Consider that even the really expensive Real Dolls end up treated like this:

[image: Amber Hawk Swanson's damaged "Amber" doll]


A bit of google image searching will get you examples of user-damaged torsos, legs, and so on. No matter where you house the processing and motors, there is going to be trouble, especially the more complex the build.

And google something like "bad predictions of future technology" for many many more.

I do remember people saying smartphones would never become mass market. Same for lots of tech predictions. Pundits have nothing to lose either way.

I bet someone will build robots incorporating Asimov's Three Laws of Robotics in some way just to say it has been done and to see what they would be like in reality, but of course and as you say, they are unlikely to be particularly relevant because it would be so easy, easier actually, to build robots without them.

Sure, some grad student will do it just because.
 
The really damning indictment of us all in this story is the fact that if it were a real woman getting molested at the event, it wouldn't make the news.