
Intelligence vs Sentience; AI & Lovebots

Got an idea? Need an invention?
poppy70
Doll Mentor
Posts: 1039
Joined: Thu Jan 29, 2015 4:18 pm
Location: southern Germany
Contact:

Re: Intelligence vs Sentience; AI & Lovebots

Post by poppy70 »

KiltedCowboy wrote:"What is the actual problem with sex dolls that AI is supposed to solve?"

The simple answer is: if I have to explain it, you wouldn't understand. If a doll is simply a "sex toy" and nothing more for you, or anyone for that matter... great! But for many, they are much more than that.
You are right, but our dolls are what we project into them. They are part of OUR fantasy and imagination. As soon as some software takes over, your own imagination is no longer in control.
Imagine the AI evolves at runtime, i.e. learns from input and reactions. What if she suddenly starts nagging? Would you reset her and wipe her personality?

To prevent that, there would have to be rules embedded in the AI to keep it a happy, dumb Barbie :( Not something I would like to have.

poppy
Poppys girls

rubherkitty
Doll Oracle
Posts: 8962
Joined: Sat Aug 25, 2012 5:24 pm
Location: Interstate 44 with 10 long-haired Friends a' Jesus In a chartreuse micra-bus
Contact:

Re: Intelligence vs Sentience; AI & Lovebots

Post by rubherkitty »

That's the problem w/ AI now. It may be able to offer up facts, but it can't always put them in context.
Me: I'm going to fuck you hard.
Doll: Did you know a diamond is the hardest substance?

W/ a programmed bot, all the responses will be what you want. Just make sure to put in lots of replies so it seems more random.
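A canned-reply bot like that takes only a few lines. This is a toy sketch in Python; the trigger words and replies are made-up examples, not anyone's actual software:

```python
import random

# Each trigger word maps to a pool of canned replies; the bigger the
# pool, the less repetitive (more "random") the bot feels.
RESPONSES = {
    "hello": ["Hey you!", "Well hello there.", "Hi, I missed you."],
    "tired": ["Come sit with me.", "Rough day?", "Relax, I'm here."],
}
DEFAULT = ["Tell me more.", "Mmm-hmm.", "Is that so?"]

def reply(message: str) -> str:
    """Return a random canned reply for the first trigger word found."""
    words = message.lower().split()
    for trigger, pool in RESPONSES.items():
        if trigger in words:
            return random.choice(pool)
    return random.choice(DEFAULT)
```

The weakness rubherkitty jokes about is right there in the code: the bot matches words, not meaning, so "context" is whatever the trigger table says it is.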
Going downtown. Gonna see my gal. Gonna sing her a song. I'm gonna show her my ding dong! C&C

Gundam
Doll Mentor
Posts: 1763
Joined: Mon Aug 18, 2014 3:58 pm
Location: From the future
Contact:

Re: Intelligence vs Sentience; AI & Lovebots

Post by Gundam »

"What is the actual problem with sex dolls that AI is supposed to solve?"

I hear you. In many ways, I think a sex doll is best used to satisfy purely physical desires, whereas AI assistants have many uses, from reading the news to locating and playing media to playing games. In the interest of consolidating devices, it is nice to imagine a doll that embodies a Gal Friday type of AI.

Short term, this seems to be quite a challenge for all kinds of reasons, including the security of our private sexual lives. The Realbotix Harmony AI app, for example, is designed to run on a phone OS. Could an AI be designed any less securely? One glitch and the AI could send the most private information out to the internet.

If I could really have my doll do some intelligent things, they would probably stay in the realm of the physical: give me a great massage, cook and clean, etc.

I would like to have a chatbot that could hold my interest. Something that I could ask all kinds of questions of and have it do research and even provide feedback on my more esoteric ponderings. But I am not convinced this needs to be integrated into a doll body. It might be best if I let Google Home, Alexa, Cortana, Siri and the others stay in the cloud, where I should be able to access any of them from any device. Maybe the biggest issue I have is trying to bundle a relationship with an AI with a physical doll, when such a relationship is not logical.

For those who would like a doll to interact with sounds such as moaning and other indications of being pleasured, I believe you are asking to be fooled. Kind of like the character Mouse in The Matrix, who preferred such a life. Or the song: "Love me, love me, pretend that you love me. Fool me, fool me, go on and fool me." I understand, though I'm not sure it would work for me.

As for a Turing test for sentience, I have a feeling I would know something was up with an AI quickly, as I tried to make it laugh. All my life I have been somewhat of a class clown, always joking around, and I get a lot of different reactions to my attempts. I wonder a lot about how an AI would know how to react to them. How would it not laugh? Would it ever say something like "Sorry, too soon"?
Attachment: IMAG0390.jpg

Gundam
Doll Mentor
Posts: 1763
Joined: Mon Aug 18, 2014 3:58 pm
Location: From the future
Contact:

Re: Intelligence vs Sentience; AI & Lovebots

Post by Gundam »

rubherkitty wrote:That's the problem w/ AI now. It may be able to offer up facts, but it can't always put them in context.
Me: I'm going to fuck you hard.
Doll: Did you know a diamond is the hardest substance?

W/ a programmed bot, all the responses will be what you want. Just make sure to put in lots of replies so it seems more random.
HAHAHAHAHAHA! :haha4:

True for now, but IBM Watson has shown some great advances in this, and I suspect whatever Watson can do now, our phones will do better in 10 years or so. I just wonder how many more years I've got left.

rubherkitty
Doll Oracle
Posts: 8962
Joined: Sat Aug 25, 2012 5:24 pm
Location: Interstate 44 with 10 long-haired Friends a' Jesus In a chartreuse micra-bus
Contact:

Re: Intelligence vs Sentience; AI & Lovebots

Post by rubherkitty »

Yes, I'm hoping for great advances quickly. Also better quality voices.

I would rather have something simple and limited but more foolproof than something advanced but full of problems.
Of course, I've dated some dingy women who lost track of the conversation, never knew what the conversation was, or changed subjects mid-sentence, so....
Going downtown. Gonna see my gal. Gonna sing her a song. I'm gonna show her my ding dong! C&C

BigBurrito
Senior Member
Posts: 359
Joined: Fri Nov 04, 2016 2:19 pm
Location: Chewelah NE Washington State
Contact:

Re: Intelligence vs Sentience; AI & Lovebots

Post by BigBurrito »

I installed VoiceAttack on my Win7 PC, but the voice has an echo I can't get rid of. I'm still in the trial, but I'm not buying it if I can't get the echo out.

MannequinFan
Vendor Affiliated
Posts: 4719
Joined: Wed Jan 04, 2012 8:58 pm
Location: Central Illinois, U.S.
Contact:

Re: Intelligence vs Sentience; AI & Lovebots

Post by MannequinFan »

BigBurrito wrote:I installed VoiceAttack on my Win7 PC, but the voice has an echo I can't get rid of. I'm still in the trial, but I'm not buying it if I can't get the echo out.
Hey BB,

Not sure what you mean by echo. Is the response repeating?
Check the command to make sure it isn't set to repeat.
Also there are settings that control whether each command should finish before another one starts.
You could have two or more commands running at the same time.
Attachment: VA settings.jpg

Gundam
Doll Mentor
Posts: 1763
Joined: Mon Aug 18, 2014 3:58 pm
Location: From the future
Contact:

Re: Intelligence vs Sentience; AI & Lovebots

Post by Gundam »

VoiceAttack! Nice, I'd never seen this before. May I say, I am so glad I joined this forum. While my original intent was to learn more about dolls, where to buy them, and to read reviews, I never expected it would be so handy for my day job as an information systems consultant. I have been in IT for over 30 years and have lots of friends and colleagues in the business, but when it comes to AI, it seems my fellow doll enthusiasts are some of the best informed. Thank you all for your contributions.

While I have been somewhat involved in AI for things like OCR, intrusion detection, and the basic concepts of expert systems versus neural networks, the consumer AIs starting to flood the market are new to me. I have been aware of Kurzweil's Singularity for over a decade, but over the last few months I have begun to focus on AI. Now I have a whole different perspective on this field. I have decided to dedicate the rest of my career to AI integration. Once again, big thanks to all of you here who have expanded, and continue to expand, my understanding of this awesome field.

One thing that came to me today was that perhaps we are putting the cart before the horse. What if the real relationships we are trying to build are with the AIs, and giving them a body is a way to add context to the AI?

Coming from the other side, I was thinking about training a doll in tried-and-true personality types for entertaining, like a geisha. Perhaps the first few skills a doll could learn are how to tell stories and sing songs. When the robotics get better, I want masseuse skills.
Attachment: IMAG0389.jpg

BigBurrito
Senior Member
Posts: 359
Joined: Fri Nov 04, 2016 2:19 pm
Location: Chewelah NE Washington State
Contact:

Re: Intelligence vs Sentience; AI & Lovebots

Post by BigBurrito »

The voice itself has an echo, but I'll look at the command window.

MannequinFan
Vendor Affiliated
Posts: 4719
Joined: Wed Jan 04, 2012 8:58 pm
Location: Central Illinois, U.S.
Contact:

Re: Intelligence vs Sentience; AI & Lovebots

Post by MannequinFan »

BigBurrito wrote:The voice itself has an echo, but I'll look at the command window.
I recommend investing in a good quality SAPI5 voice for VA.
The Ivona voices work perfectly with VA.
Believe me, they are worth every penny of the $45.
http://www.textaloud.com/

BigBurrito
Senior Member
Posts: 359
Joined: Fri Nov 04, 2016 2:19 pm
Location: Chewelah NE Washington State
Contact:

Re: Intelligence vs Sentience; AI & Lovebots

Post by BigBurrito »

I checked the "Single TTS Instance" box in the system advanced tab, and the echo is gone. Yeah!

deusbot
Doll Mentor
Posts: 1289
Joined: Fri Jul 06, 2001 12:00 am
Location: texas
Contact:

Re: Intelligence vs Sentience; AI & Lovebots

Post by deusbot »

A little late, but for those with a philosophical bent, I always found these two files an interesting read. Basically the author posits that it is bad to force anything to act against its wishes and motivations (slavery), and to involuntarily change its wishes/motivations/belief system (brainwashing), no matter what it's made out of (carbon/silicon/energy). But what about designing and engineering the motivation/belief system before it's turned on? It's bad to force something to love you against its pre-existing will, but what if it got an intrinsic "wired-from-birth" reward (like you get when you breathe fresh air) from making you happy? DNA sets up the things we can directly sense and the things we find intrinsically rewarding (like air) and undesirable (being underwater without air too long). And thankfully there are lots of creatures that like stuff we find really obnoxious, otherwise we would be hip deep in the stuff (like dead :deadhorse: and :poop2: ). No one is forcing them to eat it; it's part of their DNA. What if you could design the DNA to love the... hard to love? Does that already happen?

Anyway for those that might care :evidence:
http://stevepetersen.net/petersen-ethic ... vitude.pdf
http://stevepetersen.net/petersen-designing-people.pdf
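The "wired-from-birth reward" idea can be made concrete with a toy agent. This sketch is purely illustrative (the actions and reward weights are invented): the reward function is fixed before the agent ever runs, so there is no pre-existing will being overridden, and whatever it learns, it learns toward rewards it was born with:

```python
import random

# The agent's possible actions, and its designed-in ("wired-from-birth")
# rewards: making the owner happy is intrinsically rewarding, the way
# breathing fresh air is for us. The weights are arbitrary choices.
ACTIONS = ["compliment_owner", "tell_joke", "sulk"]
WIRED_REWARD = {"compliment_owner": 1.0, "tell_joke": 0.8, "sulk": -1.0}

def choose_action(values: dict) -> str:
    """Greedy choice over learned action values."""
    return max(values, key=values.get)

def train(steps: int = 1000) -> dict:
    """Incrementally learn action values toward the wired-in rewards."""
    values = {a: 0.0 for a in ACTIONS}
    for _ in range(steps):
        action = random.choice(ACTIONS)                       # explore
        values[action] += 0.1 * (WIRED_REWARD[action] - values[action])
    return values
```

After training, the agent "prefers" pleasing its owner, not because anything coerced it, but because that is the reward landscape it was designed into, which is exactly the question the papers raise.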
Artificial Intelligence is the study of how to make real computers act like the ones in the movies -Anon

The questions you ask determine the discoveries you will make - db

ZZZZ
Doll Mentor
Posts: 1661
Joined: Wed Feb 01, 2017 9:59 pm
Location: Mid-World, USA
Contact:

Re: Intelligence vs Sentience; AI & Lovebots

Post by ZZZZ »

deusbot wrote:A little late, but for those with a philosophical bent, I always found these two files an interesting read. Basically the author posits that it is bad to force anything to act against its wishes and motivations (slavery), and to involuntarily change its wishes/motivations/belief system (brainwashing), no matter what it's made out of (carbon/silicon/energy). But what about designing and engineering the motivation/belief system before it's turned on? It's bad to force something to love you against its pre-existing will, but what if it got an intrinsic "wired-from-birth" reward (like you get when you breathe fresh air) from making you happy? DNA sets up the things we can directly sense and the things we find intrinsically rewarding (like air) and undesirable (being underwater without air too long). And thankfully there are lots of creatures that like stuff we find really obnoxious, otherwise we would be hip deep in the stuff (like dead :deadhorse: and :poop2: ). No one is forcing them to eat it; it's part of their DNA. What if you could design the DNA to love the... hard to love? Does that already happen?

Anyway for those that might care :evidence:
http://stevepetersen.net/petersen-ethic ... vitude.pdf
http://stevepetersen.net/petersen-designing-people.pdf
Appropriate name, @deusbot!

Technology may allow us to instill "wired-from-birth" rewards in others, sure. What about the opposite? If we could remove our own "wired-from-birth" rewards, like the need for relationships - wouldn't it be liberating? We'd have so much more time and energy to spend constructively! Where does that path lead? Would anything still have meaning or value? What goals would be left?

Back to the AI "slavery" and "brainwashing" issues though - what if you could undo that mistreatment at any time? Either by deleting those memories, or "rewinding" (effectively the same thing). Would it have any meaning? Scott Aaronson wrote a really interesting paper where he asked those questions, but I'm failing to re-Google it at the moment.

deusbot
Doll Mentor
Posts: 1289
Joined: Fri Jul 06, 2001 12:00 am
Location: texas
Contact:

Re: Intelligence vs Sentience; AI & Lovebots

Post by deusbot »

A good general read is "Homo Deus: A Brief History of Tomorrow" by Yuval Harari (you can find lots of him on YouTube ).



Harari would say dolls are part of the human pleasure project, while medicine is part of the human immortality project. I think implicit in what he describes is that "meaning", and the "need for meaning" versus just "goals", is the biggest human idea/invention/mutation and probably what sets humans apart from everything else. Most other creatures are probably driven more by visceral "feeling" than by abstract "meaning". Of course, for some, meaning can be bought and sold, which leads to ideologies, which let more than two people agree on common goals and build and keep all the stuff we have (both good and bad). One thing he talks about is what happens when we get to the "end of ends", that is, when every intrinsic goal we have is reachable. When you have more food, sex and entertainment of all sorts than you can possibly consume AND you can change "yourself", that set of built-in goals that makes you "you". Maybe what I call a "deep meaning under-run": not enough meaning for the energy we have, with no good replacement, especially when you not only can prove that meaning can be arbitrary but can also engineer with the stuff. We tolerate suffering of all sorts (especially in ourselves) if we can find meaning in it. But how much meaning justifies how much suffering? What if we could achieve all our ends without any suffering, except for what comes from loss of meaning?

Maybe we are about to discover something really deep. Fundamental discoveries occur when you start getting confusion due to contradictions, yet you know somehow everything works. I'm far less worried about a spontaneous robot uprising, because of that pesky need-for-meaning thing. Bots may seek arbitrary goals, but new ones don't just pop up. I'm far more worried about people programming systems directly with their own personal existing meanings and prejudices. As Petersen would say, it would be wrong to make Gandhi-Bot into Terminator-Bot, but there are plenty of people who would just build Terminators to take care of those who don't share their meaning system. We already have human-level self-replicating general intelligences with code bugs running around, some of which are willing to try and kill us: it's other people!

The real big issue for the 21st century is probably not just loss of jobs to bots but loss of meaning. And people voluntarily reprogram themselves to different meanings all the time; we just call it "religious or ideological conversion". And we all know what happens when brand new meaning systems come online that are incompatible with older releases :snipersmile:

But anyway, don't worry, take a chill pill :glou: , curl up with your doll (most don't have that) and just give it all a good think. People with pets live longer, maybe because the pets provide "meaning" (who's gonna take care of Fluffy when I'm gone?). I wonder about people with dolls.... :p
Artificial Intelligence is the study of how to make real computers act like the ones in the movies -Anon

The questions you ask determine the discoveries you will make - db

Gundam
Doll Mentor
Posts: 1763
Joined: Mon Aug 18, 2014 3:58 pm
Location: From the future
Contact:

Re: Intelligence vs Sentience; AI & Lovebots

Post by Gundam »

deusbot wrote:A little late, but for those with a philosophical bent, I always found these two files an interesting read. Basically the author posits that it is bad to force anything to act against its wishes and motivations (slavery), and to involuntarily change its wishes/motivations/belief system (brainwashing), no matter what it's made out of (carbon/silicon/energy). But what about designing and engineering the motivation/belief system before it's turned on? It's bad to force something to love you against its pre-existing will, but what if it got an intrinsic "wired-from-birth" reward (like you get when you breathe fresh air) from making you happy? DNA sets up the things we can directly sense and the things we find intrinsically rewarding (like air) and undesirable (being underwater without air too long). And thankfully there are lots of creatures that like stuff we find really obnoxious, otherwise we would be hip deep in the stuff (like dead :deadhorse: and :poop2: ). No one is forcing them to eat it; it's part of their DNA. What if you could design the DNA to love the... hard to love? Does that already happen?

Anyway for those that might care :evidence:
http://stevepetersen.net/petersen-ethic ... vitude.pdf
http://stevepetersen.net/petersen-designing-people.pdf
I have a major issue with the idea, which seems to permeate Mr. Petersen's theories, that intelligence equals sentience. My phone is smart, but it is not sentient (it has no feelings, etc.). IBM Watson is very smart, but not at all sentient. If my smartphone pulled a HAL on me and decided not to dial a number I wanted it to, or if my GPS decided not to give me the directions I asked for because it didn't "feel like it", I would want them serviced or replaced. They were built to serve me, and I don't feel at all guilty about that.

The best minds in AI that I know of do not seem to suggest the AIs will become sentient on their own, either. From what I know of Ray Kurzweil, for instance, it appears that technology will only become sentient when we blend our beings with the technology.

I often wonder why movies love to imagine that a robot will somehow magically become sentient and demand rights. Why robots but not other smart devices? If my Rosie the Robot can somehow demand individual rights, why not my smart refrigerator, TV, phone or watch? Personally, I find these ideas silly.

Of course, I recognize I have been wrong many times before, but until I see otherwise, I have absolutely no guilty feelings about having my technologies serve me, without any concern for their feelings, since I don't believe they will ever have such feelings.
Attachment: IMAG0372 - Copy.jpg
