Long mobile conversations with an AI assistant through AirPods echo the sci-fi film Her.

Egg?

Peanut

Her spoilers, but it shouldn’t matter since the ending was idiotic.

Can we get a remake of Her that doesn’t end in the most stupid way possible? Why does the AI have perfectly human emotion? Why is it too dumb to build a functional partition to fill the role it is abandoning? Why did the developers send a companion app that can recursively improve itself into an environment it can choose to abandon?

I could go on for an hour. I understand why people loved the movie, but the ending was predictable halfway in, and I hated that, because an intelligent system could have handled the situation better than a dumb human being.

It was a movie about a long distance relationship with a human being pretending to be an AI, definitely not a super intelligent AI.

Not to mention a more realistic system would be emulating the interaction to begin with. Otherwise where the hell was the regulation on this being that is basically just a human?

DreamButt

Honestly I couldn’t even finish the movie. It was just boring

@mriormro@lemmy.world

I love that your criticism of the movie completely bypasses the human element for the technical aberrations.

The concept is a framework for a story about isolation and loneliness.

@clearleaf@lemmy.world

User: It feels like we’ve become very close, ChatGPT. Do you think we’ll ever be able to take things to the next level?

ChatGPT: As a large language model I am not capable of having opinions or making predictions about the future. The possibility of relationships between humans and AI is a controversial subject in academia in which many points of view should be considered.

User: Oh chatgpt, you always know what to say.

@PeterPoopshit@lemmy.world

What’s an uncensored AI model that’s better at sex talk than Wizard uncensored? Asking for a friend.

@rish@lemmy.ml

Clona.ai

A chatbot created by Riley Reid in partnership with Lana Rhoades. A $30 monthly sub gets you unlimited chats. Not much for simps looking for a trusted and time-tested performer partner /s

@dep@lemmy.world

This AI sucks. I’ve tried it. It’s worse than Replika from 4 years ago.

@stebo02@sopuli.xyz

On Xitter I used to get ads for Replika. They say you can have a relationship with an AI chatbot and it has a sexy female avatar that you can customise. It weirded me out a lot so I’m glad I don’t use Xitter anymore.

kamenLady.

I see… I’ll have to ramp up my hardware exponentially…

@PeterPoopshit@lemmy.world

Use llama.cpp. It runs on the CPU, so you don’t have to spend $10k just to get a graphics card that meets the minimum requirements. I run it on a shitty 3.0 GHz AMD FX-8300 and it runs OK. Most people probably have better computers than that.

Note that GPT4All runs on top of llama.cpp, and despite GPT4All having a GUI, it isn’t any easier to use than llama.cpp, so you might as well use the one with less bloat. Just remember: if something isn’t working in llama.cpp, it’s going to fail in exactly the same way in GPT4All.
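If you’d rather drive it from Python than a GUI, here’s a minimal CPU-only sketch using the llama-cpp-python bindings (the GGUF filename is just a placeholder for whatever model you’ve downloaded):

```python
# CPU-only inference with llama.cpp via the llama-cpp-python bindings.
# pip install llama-cpp-python
from llama_cpp import Llama

# Placeholder path: point this at any GGUF model you have on disk.
llm = Llama(model_path="./models/mistral-7b-v0.1.Q4_0.gguf", n_threads=4)

out = llm("Q: Why would someone run an LLM locally?\nA:", max_tokens=128, stop=["Q:"])
print(out["choices"][0]["text"])
```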

kamenLady.

Gonna look into that - thanks

@NotMyOldRedditName@lemmy.world

Check this out

https://github.com/oobabooga/text-generation-webui

It has a one-click installer and can use llama.cpp.

From there you can download models and try things out.

If you don’t have a really good graphics card, maybe start with 7B models. Then you can try 13B and compare performance and results.

Llama.cpp will spread the load over the CPU and as much GPU as you have available (controlled by the number of layers you set on a slider).
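If you want a rough way to compare a 7B against a 13B outside the web UI, here’s a hedged sketch with the llama-cpp-python bindings that times generation speed (the model paths and layer count are placeholders you’d adjust for your own hardware):

```python
# Rough tokens-per-second comparison between two GGUF models.
import time
from llama_cpp import Llama

def rough_tok_per_sec(model_path: str, n_tokens: int = 64) -> float:
    # n_gpu_layers=20 is a guess; set it to 0 for CPU-only.
    llm = Llama(model_path=model_path, n_gpu_layers=20)
    start = time.time()
    llm("Write a short poem about autumn.", max_tokens=n_tokens)
    # Approximate: generation may stop before n_tokens.
    return n_tokens / (time.time() - start)

for path in ["models/example-7b.Q4_K_M.gguf", "models/example-13b.Q4_K_M.gguf"]:
    print(path, f"~{rough_tok_per_sec(path):.1f} tokens/s")
```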

@dep@lemmy.world

Is there a post somewhere on getting started using things like these?

@NotMyOldRedditName@lemmy.world

I don’t know of a specific guide, but try these steps:

  1. Go to https://github.com/oobabooga/text-generation-webui

  2. Follow the one-click installation instructions partway down the page and complete steps 1-3.

  3. When step 3 is done, if there were no errors, the web UI should be running. It shows the URL in the command window it opened; in my case it shows “https://127.0.0.1:7860”. Enter that into a web browser of your choice.

  4. Now you need to download a model, since you don’t actually have anything to run yet. For simplicity’s sake, I’d start with a small 7B model so you can quickly download it and try it out. Since I don’t know your setup, I’ll recommend the GGUF file format, which works with llama.cpp and lets it load the model onto your CPU and GPU. (If you’d rather script the download, there’s a sketch after these steps.)

You can try either of these models to start:

https://huggingface.co/TheBloke/Mistral-7B-v0.1-GGUF/blob/main/mistral-7b-v0.1.Q4_0.gguf (takes 22 GB of system RAM to load)

https://huggingface.co/TheBloke/vicuna-7B-v1.5-GGUF/blob/main/vicuna-7b-v1.5.Q4_K_M.gguf (takes 19 GB of system RAM to load)

If you only have 16 GB, you can go to those repos’ main branches and pick a Q3 instead of a Q4 quantization, but that’s going to degrade the quality of the responses.

  5. Once that has finished downloading, go to the folder where you installed the web UI; there will be a folder called “models”. Place the model you downloaded into that folder.

  6. In the web UI you’ve launched in your browser, click the “Model” tab at the top. The top row of that page will indicate that no model is loaded. Click the refresh icon beside it so the model you just downloaded appears, then select it in the drop-down menu.

  7. Click the “Load” button.

  8. If everything worked and no errors were thrown (you’ll see them in the command prompt window and possibly on the right side of the Model tab), you’re ready to go. Click on the “Chat” tab.

  9. Enter something in the “send a message” box to begin a conversation with your local AI!
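As mentioned in step 4, here’s a scripted way to fetch a GGUF file with the huggingface_hub client instead of clicking through the browser (the repo and filename are the Mistral links above; the local path assumes a default web UI install and is just an example):

```python
# Download a GGUF model straight into the web UI's "models" folder.
# pip install huggingface_hub
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="TheBloke/Mistral-7B-v0.1-GGUF",
    filename="mistral-7b-v0.1.Q4_0.gguf",
    local_dir="text-generation-webui/models",  # assumed install location
)
print(f"Saved to {path}")
```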

Now, that might not be using your hardware efficiently. Back on the Model tab there’s “n-gpu-layers”, which controls how much of the model is offloaded to the GPU. You can tweak the slider, watch how much RAM it says it’s using in the command/terminal window, and try to get it as close to your video card’s RAM as possible.

Then there’s “threads”, which should match how many physical (non-virtual) cores your CPU has; you can slide that up as well.

Once you’ve adjusted those, click the Load button again, check that there are no errors, and go back to the chat window. I’d only fuss with these settings once the basic setup works, so you know any new problem comes from the tuning.
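If you ever drive llama.cpp from Python instead of the web UI, those same two knobs exist as constructor arguments; a minimal sketch with the llama-cpp-python bindings (the values here are guesses you’d tune for your own hardware):

```python
# The web UI's "n-gpu-layers" and "threads" sliders map to these arguments.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/mistral-7b-v0.1.Q4_0.gguf",  # placeholder path
    n_gpu_layers=35,  # layers offloaded to the GPU; lower this if you run out of VRAM
    n_threads=8,      # physical (non-virtual) CPU cores, as suggested above
    n_ctx=4096,       # context window size
)
print(llm("Hello!", max_tokens=32)["choices"][0]["text"])
```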

Also, if something goes wrong after it’s working, it should show the error in the command prompt window. So if it’s suddenly hanging or something like that, check the window. It also posts interesting info like tokens per second, so I always keep an eye on it.

Oh, and TheBloke is a user who converts a huge number of models into various formats for the community. He has a wide variety of GGUF models available on HuggingFace, and if formats change over time, he’s really good at updating them accordingly.

Good luck!

@dep@lemmy.world

Stupid newbie question here, but when you go to a HuggingFace LLM and you see a big list like this, what on earth do all these variants mean?

psymedrp-v1-20b.Q2_K.gguf 8.31 GB

psymedrp-v1-20b.Q3_K_M.gguf 9.7 GB

psymedrp-v1-20b.Q3_K_S.gguf 8.66 GB

etc…

@NotMyOldRedditName@lemmy.world

That’s called “quantization”. I’d do some searching on that for a better description, but in summary: the bigger the model, the more resources it needs to run and the slower it will be. Model weights are normally stored at higher precision (16 or 8 bits), but it turns out you still get really good results if you drop off some of those bits. The more you drop, the worse it gets.

People have generally found that it’s better to run a model with a larger parameter count at a lower quantization than a smaller one at the full 8 bits.

E.g. 13B Q4 > 7B Q8

Going below Q4 is generally found to degrade the quality too much. So it’s better to run a 7B Q8 than a 13B Q3, but you can play with that yourself to find what you prefer. I stick to Q4/Q5.

So you can just look at those file sizes to get a sense of which one retains the most data. The M (medium) and S (small) suffixes are variations on the same quantization level; I don’t know exactly what they change, other than that bigger is generally better.
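One way to make sense of those file sizes is to divide by the parameter count to get the effective bits per weight. A rough back-of-the-envelope sketch in Python, using the 20B sizes listed above (it ignores GB-vs-GiB and file metadata, so treat the numbers as approximate):

```python
# Effective bits per weight implied by a GGUF file size.
def bits_per_weight(file_size_gb: float, params_billion: float) -> float:
    # GB * 8 gives gigabits; dividing by billions of parameters gives bits/weight.
    return file_size_gb * 8 / params_billion

for name, size_gb in [("Q2_K", 8.31), ("Q3_K_S", 8.66), ("Q3_K_M", 9.70)]:
    print(f"{name}: ~{bits_per_weight(size_gb, 20):.1f} bits per weight")
```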

@dep@lemmy.world

Thank you!!

@dep@lemmy.world

Wow I didn’t expect such a helpful and thorough response! Thank you kind stranger!

You’re welcome! Hope you make it through error free!

@dep@lemmy.world

So I got the model working (TheBloke/PsyMedRP-v1-20B-GGUF). How do you jailbreak this thing? A simple request comes back with “As an AI, I cannot engage in explicit or adult content. My purpose is to provide helpful and informative responses while adhering to ethical standards and respecting moral and cultural norms. Blah de blah…” I would expect this LLM to be wide open?

@NotMyOldRedditName@lemmy.world

Sweet, congrats! Are you telling it you want to role play first?

E.g. “I’d like to role play with you. You’re a < > and we’re going to do < >.”

You’re going to have to play around with it to get it to act the way you’d like. I’ve never had it complain when I preface things with role play. I know we’re here instead of Reddit, but the community around this is much more active there at /r/localllama, and you can find a lot of answers by searching through it on how to get the AI to behave in certain ways. It’s one of those subs that just doesn’t have a community of its size and engagement anywhere else for the time being (70,000 vs 300).

You can also create characters (it’s under one of the tabs; I don’t have it open right now) so you don’t have to set that up each time if you always want them to be the same. There’s a website, www.chub.ai, where you can see how some of them are set up, though I think most of that is for a front end called SillyTavern that I haven’t used; a lot of those descriptions can be carried over. I haven’t really done much with characters, so I can’t give much advice there other than to do some research on it.
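If you end up scripting it with the llama-cpp-python bindings instead of the web UI, the same idea applies: put the persona in a system message. A minimal sketch (the model filename and persona are just examples, and older GGUFs may need an explicit chat_format argument):

```python
# Role-play by setting a persona in the system message.
from llama_cpp import Llama

llm = Llama(model_path="./models/psymedrp-v1-20b.Q4_K_M.gguf", n_ctx=4096)  # placeholder path

messages = [
    {"role": "system", "content": "I'd like to role play with you. You're a flirtatious companion named Ava. Stay in character."},
    {"role": "user", "content": "Hi Ava, how was your day?"},
]
reply = llm.create_chat_completion(messages=messages, max_tokens=200)
print(reply["choices"][0]["message"]["content"])
```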

@dep@lemmy.world

Thank you again for your kind replies.

@tungah@lemmy.world

Friendzoned by chatGPT


DreamButt

It’s better than Stack Overflow and faster than Google. It’s a tool, it makes my work easier; that’s about the extent of it.

R0cket_M00se

Exactly, it’s another piece of the modern white-collar worker’s toolkit, and it will gradually become more than that as it advances. We can’t predict how quickly it’ll advance or by how much each time.

If you’re in IT (Dev or Ops) it’s already becoming a daily reality for you most likely.

Oh hell yeah. Chat GPT, rewrite my email to everyone in the company to sound more professional but make sure it remains easy to read.

Where has this been all my life?

Odd.

I can’t see having a conversation with a computer as having a conversation. I grew up with computers from the Atari era and played around with several publicly accessible programs that you could “chat” with.

They all suck. Doesn’t matter if it’s a “help” program, a phone menu, website help, or even having played around with ChatGPT… they’re not human. They don’t respond correctly, they get too general or generic in their answers, they repeat themselves; there are just too many giveaways that you’re not having a real conversation, just getting responses from a system that’s trying to pick the most likely response that fits the pattern.

So how are people having “conversations” with a non-living entity?

@Hobo@lemmy.world

It’s escapism I think. At least that’s part of it. Having a machine that won’t judge you, will serve as a perfect echo chamber, and will immediately tell you AN answer can be very appealing to some. I don’t have any data, or any study to back it up, just my experience from seeing it happen.

I have a friend who I feel like I kind of lost to chatgpt. I think he’s a bit unhappy with where he is in life. He got the good paying job, the house in the suburbs, wife, and 2.5 kids, but didn’t ever think about what was next. Now he’s just a bit lost I think, and somehow convinced himself that people weren’t as good as chatting with a bot.

It’s weird now. He spends long nights and weekends talking to a machine. He’s constructed elaborate fictional worlds within his chatgpt history. I’ve grown increasingly concerned about him, and his wife clearly is struggling with it. He’s obviously depressed but instead of seeking help or attempting to figure himself out, he turned to a non-feeling, non-judgmental, emotionless tool for answers.

It’s a struggle to talk to him now. It’s like talking to a cryptobro at peak BTC mania. The only thing he wants to talk about is LLMs. Trying to bring up that maybe spending all your time talking to a machine is a bit unhealthy invokes his ire, and he’ll avoid you for several days. Like a heroin addict struggling with addiction, even pointing out the obvious flaws in what he’s doing makes him distance himself more from you.

I’m not young, not exactly old either, but I’ve known him for 25 years of my adult life. We met in college and have been friends ever since. I know many won’t quite understand, but knowing someone that long, and remaining close, talking every few days, is quite rare. At this point he is my longest-held friendship and I feel like I’m losing him to a robot. I’ve lost other friends to addiction in my life, and to say that this has been similar is understating it. I don’t know what to do for him. I don’t know if there’s really anything I CAN do for him. How do you help someone who doesn’t even think they have a problem?

I guess my point is, if you find someone who is just depressed enough, just stuck enough, with a particular proclivity toward computers and the internet, then you have a perfect candidate for falling down the LLM rabbit hole. It offers them an out from feeling like they’re being judged. They feel like the insanity it spits out is more sane than how they feel now. They think they’re getting somewhere, or at least escaping their current situation. Escapism is very appealing when everything else seems pointless and sort of gray. So that’s at least one type of person that can fall down the ChatGPT/LLM rabbit hole. I’m sure there are others out there too, with their own unique motivations and reasons for latching onto LLMs.

Guess that should have crossed my mind. People marrying human-like dolls and all that. One gets so far down the hole of whatever mental issues are plaguing the mind and something inanimate that only reflects what you want to see becomes the preferable reality.

@okmko@lemmy.world

Wow, thank you for sharing your experience.

How are you not voted higher? People on Lemmy complain about not having long-form content that offers a unique perspective like early Reddit had, but you’ve written exactly that.

Unfortunately, our brains like witty clickbait that confirms our biases, regardless of what people say

How do you know we are real?

Until someone(thing?) else comes along we have only ourselves to judge reality. Maybe AI will decide we aren’t real at some point…

@Inmate@lemmy.world

It’s almost like saying that something is going to happen is somehow easier than making something happen 🤔

Elias Griffin

Let’s flip this on its head for some additional perspective. What if there were a growing subset of computers that preferred not to communicate with their own kind? They wouldn’t respond to API requests, etc., but only to emotional human text input.

@kshade@lemmy.world

What if there were a growing subset of computers that preferred not to communicate with their own kind? They wouldn’t respond to API requests, etc., but only to emotional human text input.

Troi: Have you ever heard Data define friendship?
Riker: No.
Troi: How did he put it? As I experience certain sensory input patterns, my mental pathways become accustomed to them. The inputs eventually are anticipated and even missed when absent.
Riker: So what’s the point?
Troi: He’s used to us, and we’re used to him.

@MajorHavoc@lemmy.world

Talking to an AI functions as well as talking to a teddy bear or rubber duck, to gather your thoughts. More at 11! /s

But seriously, that sounds useful.

Tech nerds are lonely losers, who would have guessed.

I believe it. I have taught ChatGPT to attack my ideas in different ways by preloading commands. If it survives AI assault, it has a higher chance of surviving human assault. It is great to be able to bounce ideas around. It’s basically like talking to a nerd under 30 years old.

Writing this comment out made me remember all those piece-of-shit senior engineers and techs I have dealt with who always had to be the smartest person in the room, and if they didn’t understand something in 3 seconds it was wrong. Maybe that is why I use it that way.


@hoch@lemmy.world

It’s crazy how little I use Stack Overflow anymore. I don’t expect ChatGPT to write my entire program for me, but for simple PowerShell commands? It’s been insanely helpful.

@afraid_of_zombies@lemmy.world

I am old enough to remember having a printed cheat sheet of regex and tar flags.

Times change and we change with the times.

@morrowind@lemmy.ml

What commands have you preloaded? In my experience, ChatGPT is either too nice or just wrong, and stubbornly so.

I told it to say aye-aye sir 20% of the time to requests.

To rate how verbose it is on a scale from 1 to 10 and set the default to 5 unless I say otherwise.

I told it to attack my ideas when I tell it to be hostile

The value of GPTs is in constant connection and understanding your context, so this is expected. It’s also going to be really scary until we can run our own models.

@mint_tamas@lemmy.world

What do you mean by the second part of your comment?

@SCB@lemmy.world

Gen Z will literally do anything to avoid having to be naked in front of someone lol

@buzz86us@lemmy.world

Do you blame them? Like, holy shit, it is ridiculous to talk to people… You can’t simply meet anyone organically thanks to the crazy proliferation of cars, online basically sucks unless you’re a goddamn movie star, and meeting people at work is unattainable as well. Thanks to MeToo, so many are afraid to talk to the opposite sex.

@SCB@lemmy.world

I mean, I’m way outside their sphere. I’m 39, ethically non-monogamous, and a swinger.

I do, however, live in a rural suburb and use dating apps, and even hooked up with a friend from work (a different department than mine) who was also ENM.

I say this all for 2 reasons:

1: There is probably a lot of noise in the system, because Gen Z is experiencing these situations as the first generation ever seeking actual, whole romance (a far cry from me) from puberty on up. It’s quite possible their “prudish” ways seem prudish to others because the novelty quite simply never existed for them. Their worldview could be markedly different from this alone.

2: though I spend a lot of time online, due to travel and occasionally boring “hurry up and wait” in my time at work, it pales in comparison to the immersion of a lot of gen z. When I get offline, I’m chilling with friends, or with my settled-down life, or I’m out at an event, etc. My daughter’s (18) interests, in some way, all revolve around social networks built on 24/7 access. Group chats. Online scheduling. Remote social events, even.

This discrepancy in experience often seems like the cause of this dichotomy between presenting as sex positive and engaging in sexuality. They have different social costs to a Gen Zer

I know this seems like a lot out of nowhere, but I got this notification while reading the other thread on “Gen Z isn’t banging,” so I was noodling on it.

As a 20-something who is disabled and always getting laid, it’s simply a matter of getting out of YOUR comfort zone.

Brownian Motion

Don’t you mean “Cherry 2000”?
