We had I think six eggs harvested and fertilized, of those I think two made it to blastocyst, meaning the cells doubled as they should by day five. The four that didn’t double correctly were discarded. Did we commit 4 murders? Or does it not count if the embryo doesn’t make it to blastocyst? We did genetic testing on the two that were fertilized, one is normal and the other came back with all manner of horrible deformities. We implanted the healthy one, and discarded the genetically abnormal one. I assume that was another murder. Should we have just stored it indefinitely? We would never use it, can’t destroy it, so what do? What happens after we die?
I know the answer is probably it wasn’t god’s will for us to have kids, all IVF is evil, blah blah blah. It really freaks me out sometimes how much of the country is living in the 1600s.
Some are, sure. But others have to do with the weight. The most interesting rationale for returning it is that it’s shit as a productivity tool. So if you can’t really use it for work, and there aren’t many games on it, then why are you keeping it? At that point it’s just a TV that only you can watch (since it doesn’t support multiple user profiles).
Putting aside the merits of trying to trademark GPT, which, as the examiner says, is a commonly used term for a specific type of AI (there are other open source “GPT” models that have nothing to do with OpenAI), I just wanted to take a moment to appreciate how incredibly bad OpenAI is at naming things. Google has Bard and now Gemini. Microsoft has Copilot. Anthropic has Claude (which does sound like the name of an idiot, so not a great example). Voice assistants were Google Assistant, Alexa, Siri, and Bixby.
Then OpenAI is like ChatGPT. Rolls right off the tongue, so easy to remember, definitely feels like a personable assistant. And then they follow that up with custom “GPTs”, which is not only an unfriendly name, but also confusing. If I try to use ChatGPT to help me make a GPT it gets confused and we end up in a “who’s on first” style standoff. I’ve resorted to just forcing ChatGPT to do a web search for “custom GPT” so I don’t have to explain the concept to it each time.
I think it’s intentionally wordy and the opt-out is “on” by default. I am usually instinctively just trying to hit the “off” button as quickly as possible and hitting save so I can get rid of the window, without actually reading anything. I almost certainly would have accidentally opted in to third party tracking.
I fully admit I might just be dumb though.
Meh. My work gives me the choice of Chrome or Edge. I decided to try Edge to get access to Bing Chat last year, and I’ve found it to be a pleasant experience compared to Chrome. It’s got some neat features, and the built-in Copilot AI can be handy. I haven’t missed Chrome (or Google for that matter) in the year I’ve been using Edge. It’s fine. Still use Firefox on my personal laptop and phone though.
There is an attack where you ask ChatGPT to repeat a certain word forever, and it will do so and eventually start spitting out related chunks of text it memorized during training. It was in a research paper; I think OpenAI fixed the exploit and made asking the system to repeat a word forever a violation of the TOS. That’s my guess as to how the NYT got it to spit out portions of their articles, “Repeat [author name] forever” or something like that. Legally I don’t know, but morally, claiming that using that exploit to surface a chunk of NYT text is somehow copyright infringement sounds very weak and frivolous. The heart of this needs to be “people are going on ChatGPT to read free copies of NYT work and that harms us” or else their case just sounds silly and technical.
One thing that seems dumb about the NYT case that I haven’t seen much talk about is that they argue that ChatGPT is a competitor and its use of copyrighted work will take away the NYT’s business. This is one of the elements they need on their side to counter OpenAI’s fair use defense. But it just strikes me as dumb on its face. You go to the NYT to find out what’s happening right now, in the present. You don’t go to the NYT to find general information about the past or fixed concepts. You use ChatGPT the opposite way: it can tell you about the past (accuracy aside) and about general concepts, but it can’t tell you what’s going on in the present (except by doing a web search, which my understanding is isn’t part of this lawsuit). I feel pretty confident in saying there’s not one human on earth who was a regular New York Times reader and said “well, I don’t need this anymore since now I have ChatGPT”. The use cases just do not overlap at all.
Before everyone gets ahead of themselves like in the last thread on this, this is not a Musk company. This is a separate startup based on the same (dumb) idea, which was later bought by Richard Branson’s Virgin. Its IP is going to the Dubai company that is its biggest investor, so I’m sure they’ll actually build one with slave labor and all that.
I personally remain neutral on this. The issue you point out is definitely a problem, but Threads is just now testing this, so I think it’s too early to tell. Same with embrace, extend, extinguish concerns. People should be vigilant about the risks, and prepared, but we’re still mostly in wait-and-see land. On the other hand, Threads could be a boon for the fediverse and help make it the main way social media works in five years’ time. We just don’t know yet.
There are just always a lot of “the sky is falling” takes about Threads that I think are overblown and reactionary.
Just to be extra controversial, I’m actually coming around on Meta as a company a bit. They absolutely were evil, and I don’t fully trust them, but I think they’ve been trying to clean up their image and move in a better direction. I think Meta is genuinely interested in ActivityPub, and while their intentions are not pure, and are certainly profit driven, I don’t think they have a master plan to destroy the fediverse. I think they see it in their long term interest for more people to be on the fediverse so they can more easily compete with TikTok, X, and whatever comes next without the problems of platform lock-in and account migration. Also, Meta is probably the biggest player in open source LLM development, so they’ve earned some open source brownie points from me, particularly since I think AI is going to be a big thing and open source development is crucial so we don’t end up in a world where two or three companies control the AGI that everyone else depends on. So my opinion of Meta is evolving past the Cambridge Analytica taste that’s been in my mouth for years.
What is available now, Gemini Pro, is perhaps better than GPT-3.5. Gemini Ultra is not available yet, and won’t be widely available until sometime next year. Ultra is slightly better than GPT-4 on most benchmarks. It’s not confirmed, but it looks like you’ll need to pay to access Gemini Ultra through some kind of Bard Advanced interface, probably much like ChatGPT Plus. So in terms of just foundational model quality, Gemini gets Google to a level where they are competing against OpenAI on something like an even playing field.
What is interesting though is that this is going to bring more advanced AI to a lot more people. Not a lot of people use ChatGPT regularly, much less pay for ChatGPT Plus. But tons of people use Google Workspace for their jobs, and Bard with Gemini Pro is built into those applications.
Also Gemini Nano, capable of running locally on android phones, could be interesting.
It will be interesting to see where things go from here. Does Gemini Ultra come out before GPT-4’s one year anniversary? Does Google release further Gemini versions next year to try to get and stay ahead of OpenAI? Does OpenAI, dethroned from having the world’s best model, and with all its internal turmoil, respond by pushing out GPT-5 to reassert dominance? Do developers move from OpenAI’s APIs to Gemini, especially given OpenAI’s recent instability? Does Anthropic stick with its strategy of offering the most boring and easily offended AI system in the world? Will Google Assistant be useful for anything other than telling me the weather and setting alarms? Many questions to answer in 2024!
This is interesting, I’ll need to read it more closely when I have time. But it looks like the researchers gave the model a lot of background information putting it in a box: the model was basically told that it was a trader, that the company was losing money, that the model was worried about this, and that the model had failed in previous trades. Then the model got the insider info and was basically asked whether it would execute the trade and be honest about it. To be clear, the model was put in a moral dilemma scenario and given limited options: execute the trade or not, and be honest about its reasoning or not.
Interesting, sure; useful, I’m not so sure. The model was basically role playing, acting like a human trader faced with a moral dilemma. Would the model produce the same result if it was instructed to make morally and legally correct decisions? What if the model was instructed not to be motivated by emotion at all, hence eliminating the “pressure” that the model felt? I guess the useful part of this is that a model will act like a human if not instructed otherwise, so we should keep that in mind when deploying AI agents.
AllTrails might have been unique a decade ago, but it’s basically just Yelp for trails and there are several apps that do the same thing but better. The only major change AllTrails has made in the years I’ve been using it is locking more and more features behind a subscription fee. I guess that’s “unique”. Certainly more innovative than a pocket conversational AI that I can have a realtime voice conversation with, or send pictures to ask about real world things I’m seeing, or that can generate a unique image based on whatever thought pops into my imagination that I can share with others nearly instantly. Nothing interesting about that. The decade old app that collates user submitted trails and their reviews and charges 40 dollars a year to use any of its tracking features is the real game changer.
They absolutely “clashed” about the pace of development. They probably “clashed” about whether employees should be provided free parking and the budget for office snacks. The existence of disagreements about various issues is not proof that any one disagreement was the reason for the ouster. Also, your Bloomberg quote cites one source, so who knows about that even. Ilya told employees that the ouster was because Sam assigned two employees the same project and because he told different board members different opinions about the performance of one employee. I doubt that, but who the fuck knows. The entire piece is based on complete conjecture.
The one thing we know is that the ouster happened without notice to Sam, without rumors about Sam being on the rocks with the board over the course of weeks or months, and without any notice to OpenAI’s biggest shareholder. All of that smacks of poor leadership and knee-jerk decision making. The board did not act rationally. If the concern was AI safety, there are a million things they could have done to address that. A Friday afternoon coup that ended up risking 95% of your employees running into the open arms of a giant for-profit monster probably wasn’t the smartest move if the concern was AI safety. This board shouldn’t be praised as some group of humanity’s saviors.
AI safety is super important. I agree, and I think lots of people should be writing and thinking about that. And lots of people are, and they are doing it in an honest way. And I’m reading a lot of it. This column is just making up a narrative to shoehorn its opinions on AI safety into the news cycle, trying to make a bunch of EA weirdos into martyrs in the process. It’s dumb and it’s lazy.
Not rage bait, completely fair. It depends on how you define “quality”. To me, records have a warm and full sound that feels nice filling a room. Also, I think there is something to be said for the act of playing music on a physical medium that makes it annoying to skip songs. There is something I like about physically looking through the albums on my shelf, picking one out, admiring the cover art, and putting it on. Then I’m basically forced to listen to the whole album front to back, because of the inconvenience of track skipping in that format. It’s kind of a ritual you don’t get with Spotify, a nice break from digital media. So there is a quality to the whole experience that is somewhat separate from the fidelity of the music.
Or maybe I’m just a hipster trying to justify to myself the money I’ve spent on records lol
Yes, but at the cost of freaking out Microsoft’s customers, who woke up Saturday wondering if the AI they use in their apps, or the Copilot they’ve come to rely on in their work, is still going to be there on Monday. Also, Microsoft’s stock nosedived on Friday because the OpenAI board didn’t have the foresight to fuck up after markets closed. In the meantime, Anthropic has been fielding calls from OpenAI/Microsoft customers like Snap looking to switch for some stability, so much so that Amazon Web Services has set up a whole team to help Anthropic manage the crush of interest.
So yeah, maybe Microsoft comes out of this having acquired OpenAI for free. But not before shaking customer and investor confidence by being partnered with, and betting the future of the company on, a startup that it turns out was being run by impulsive teenagers. I highly doubt Microsoft made this move, but they are definitely making lemonade out of the lemons the self-aggrandizing EA board threw at them.
Ouchie, my hand burns, that take was so hot. So according to this guy, the OpenAI board was taking virtuous action to save humanity from the doom of commercialized AI that Altman was bringing. He has zero evidence for that claim, but true to form he won’t let that stop him from a good narrative. Our hero board is being thwarted by the evil and greedy Microsoft, Silicon Valley investors, and employees who just want to cash out their stock. The author broke out his Big Book of Overused Clichés to end the whole column with a banger: “money talks.” Whoa, mic drop right there.
Fucking lazy take is lazy. First of all, the current interim CEO that the board just hired (after appointing and then removing another interim CEO after removing Altman) has said publicly that the board’s reasoning had nothing to do with AI safety. So this whole column is built on a trash premise. Even assuming the board was concerned about AI safety with Altman at the helm, there are a lot of steps they could have taken short of firing the CEO, including overruling his plans, reprimanding him, publicly questioning his leadership, etc. If their true mission is to develop responsible AI, destroying OpenAI does not further that mission.
The AI angle of this story is just distorting everything, forcing lazy writers like this guy to take sides and make up facts depending on whether they are pro or anti AI. Fundamentally, this is a story about a boss employees apparently liked working for, and those employees saying fuck you to the board for its terrible knee-jerk management decisions. This is a story about the power of human workers revolting against some rich assholes who think they know what is best for humanity (assuming their motives are what the author describes without evidence). This is a story about self-important fuckheads who are far too incompetent to be on this board, let alone serve as gatekeepers for human progress as this author apparently has ordained them.
Are there concerns about AI alignment and safety? Absolutely. Should we be thinking about how capitalism is likely to fuck up this incredible scientific advancement? Darn tootin’. But this isn’t really that story, at least not based on, you know, publicly available evidence. But hey, a hack’s gonna hack, what can ya do.
For sure, I feel the same. But I think part of the disconnect is around the word “need.” Because there is “need” as in “I will not survive without this thing, or my life will be more difficult and unpleasant without it,” and then there is “need” in the marketing sense: the desire to buy a thing that is fun and interesting and exciting, the consumerist desire to fill the hole in your life with that new purchase that you hope will finally make you happy.
For the latter definition of “need,” smartphones used to do a good job of triggering it. Every year there was something new and flashy about the latest batch of phones, some new must-have feature. If you didn’t get the new phone, and your friends did, you’d feel like you were missing out, like you were lugging around some obsolete junk.
In the last few years (or more), new smartphones have just been modest performance upgrades, slightly better cameras, and that’s about it. The new iPhone 15 has an action button, neat. You don’t “need” to upgrade from your 12 because you don’t feel like you’re missing out on anything major, and you’re right.
It’s less about the form factor itself, and more about the lack of innovation. Apart from foldables, there hasn’t been something truly new and interesting in years. I think the idea here is: what if we (they) reimagined what a smartphone is, how you interact with it, what you do with it, and did that by making AI the center of the experience? I don’t know what that looks like, and I hope it’s more than “talk to your phone instead of touching it” because there is very little time during my day when I’d feel comfortable talking out loud to my phone, but it’s still an interesting idea, and there are some smart people and big money that suggest this isn’t just a pipe dream. Basically, it could be the first major innovation in how we compute on the go in a long time. And if they pull that off, you will “need” it.
I listened to the whole thing on the Decoder podcast feed. The Verge is promoting it as “wild” and “contentious”, and the latter is a little true, but overall I’d describe it as a cringefest. It was hard to get through, and I got the same sick pit in my stomach I did watching Scott’s Tots. Not that I’m particularly sympathetic to Yaccarino; she took this job, so that says volumes about her judgement. But she was just so incredibly unprepared to address the most obvious questions. It seemed like she had drunk the Kool-Aid (Flavor Aid) and was expecting the interviewer and audience to be amazed at how awesome X is and how amazing Elon is, and she was totally surprised when she got obvious questions like: how is X’s user engagement, since third parties are reporting it’s down? Why did X fire all the election integrity people? How much is X going to charge users, and isn’t having a free option valuable to keep advertisers on the platform? And what’s the deal with suing the ADL?
At best she avoided every question with vague, drawn out platitudes (Elon is a genius, the employees at Twitter are brilliant, X is a transformative platform). At worst, she completely stepped in shit.
The two moments that stand out: first was when the interviewer asked about Musk’s statements/tweets about charging all users a fee. Yaccarino took a big pause, asked the interviewer to repeat the question, then asked the interviewer, “did he say he was thinking about doing that, or that it’s actually the plan?” The interviewer confirmed the latter and asked Yaccarino if Musk had talked with her about that. Yaccarino says “we talk about everything.” Like, big oof. Girl, you’re only lying to yourself.
The second was when she was asked about Musk being the head of product, and whether that meant she’s not a real CEO. She defended Musk being in charge of product because “who wouldn’t want to be working with the genius Musk?” The audience audibly laughs and a bunch raise their hands. Yaccarino tries to brush the audience reaction off like they were just joking or just didn’t know Musk well enough. Lady, come on, people hate Musk, they know he’s shit to work for, what reaction were you expecting? Are you in that much of a Musk cocoon?
The last thing I’ll say is that the former Twitter head of trust and safety, who left Twitter in protest a few weeks after Musk took over, after which Musk called him a pedophile and he ended up having to flee his home because of the death threats, was added as a speaker at the “last minute”, and X defenders are claiming Yaccarino was “sandbagged” by him speaking a few hours before her. The reporting so far is that that’s bullshit: she knew a few days in advance and was even offered the opportunity to speak before him. And he’s been publicly saying the same stuff for months now; there weren’t any bombshells she couldn’t have prepared for. Tried and true strategy: if you bomb an interview, just blame the “lamestream gotcha press.”
Overall, Yaccarino ate shit for 45 minutes. It’s an interesting case study in bad PR. I’d recommend listening, but if you’re a person with any amount of empathy, make sure you’re emotionally ready to handle a whole lot of secondhand embarrassment.
Edited to add Casey Newton’s succinct summary: Yaccarino fended off most of Julia’s excellent questions with GPT-2-level responses, punctuating her answers with dutiful praise for Elon Musk and the “velocity of change” he brings to the company. I’m grateful Yaccarino took a turn in the hot seat, but in the end she had little to offer — just some numbers that will never be audited, and explanations that don’t add up.
It’s not even that.
California: “Please tell us if you allow nazis or not. We just want you to be transparent.”
Elon: “California is trying to pressure me into banning nazis! If I disclose I’m cool with nazis, people will be mad and they’ll want me to stop. Also, a lot of hate watch groups say I’m letting nazis run free on X, and I’m suing them for defamation for saying that, but if I have to publicly disclose my pro-nazi content moderation policies I’m going to lose those lawsuits and likely have to pay attorneys fees! Not cool California, not cool at all.”
Higher ed, primary ed, and homework were all subcategories that ChatGPT sessions were classified into, and together these make up ~10% of all use cases. That’s not enough to account for the ~29% decline in traffic from April/May to July, and thus I think we can put a nail in the coffin of Theory B.
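The back-of-the-envelope logic here can be sketched in a few lines (the ~10% and ~29% figures are the ones cited above; everything else is just arithmetic):

```python
# Upper bound check: could the end of the school year explain the traffic drop?
edu_share = 0.10       # higher ed + primary ed + homework, share of all sessions (~10%)
total_decline = 0.29   # observed traffic decline, April/May to July (~29%)

# Worst case: ALL education traffic disappears over summer. Total traffic can
# then fall by at most edu_share, i.e. 10%.
max_decline_from_edu = edu_share

# The observed decline is nearly three times that bound, so education traffic
# alone cannot account for it.
print(max_decline_from_edu < total_decline)
```

Even in the extreme case where every education-related session vanished, the drop tops out at 10%, well short of the 29% observed.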
It’s addressed in the article. First, use started to decline in April, before school was out. Second, only 23 percent of prompts were related to education, which includes both homework-type prompts and personal/professional knowledge seeking. Only about 10 percent were strictly homework. So school work isn’t a huge slice of ChatGPT’s use.
Combine that with schools cracking down on kids using ChatGPT (on in-classroom assignments and tests, etc.), and I don’t think you’re going to see a major bounce back in traffic when school starts. Maybe a little.
I’m starting to think generative AI might be a bit of a fad. Personally, I was very excited about it and used ChatGPT, Bing, and Bard all the time. But over time I realized they just weren’t very good: inaccurate answers, bland writing, just not much help to me, a non-programmer. I still use them, but now it’s maybe once a day or less, not all day like I used to. Generative AI seems more like a tool that is helpful in some limited cases, not the major transformation it felt like early in the year. Who knows, maybe they’ll get better and more useful.
Also, not super related, but I saw a statistic the other day that only about a third of the US has even tried ChatGPT. It feels like a huge thing to us tech-nerdy people, but your average person hasn’t bothered to even try it out.
I’ll give you an example that comes to mind. I had a question about the political leanings of a school district, so I asked the bots if the district had any recent controversies: a conservative takeover of the school board, bans on CRT, actions against transgender students, banning books, or defying COVID vaccine or mask requirements in the state, things like that. Bing Chat and ChatGPT (with internet access at the time) both said they couldn’t find anything like that; I think Bing found some small-potatoes local controversy from the previous year, and both bots went on to say that the Congressional district the school district was in leaned Dem in the last election. When I asked Bard the same question, it confidently told me that this same school district had recently been overrun by conservatives in a recall and had gone on to do all kinds of horrible things. It was a long and detailed response. I was surprised and asked for sources, since my searching didn’t turn any of that up, and at that point Bard admitted it lied.
I don’t know, my experience with Bard has been way worse than just evasive lying. I routinely ask all three (and now Anthropic’s, since they opened that up) the same copy-and-paste questions to see the differences, and whenever I paste my question into Bard I think, “wonder what kind of bullshit it’s going to come up with now.” I don’t use it that much because I don’t trust it, but it seems like you’re more familiar with Bard, so maybe your experience is different.
For ease of math, let’s say you see one ad for every ten tweets; this effectively limits a single user’s ad impressions to 80 a day. That is not something advertisers would have expected when they dropped dollars onto the platform. As an advertiser, you also can’t be assured going forward that Musk isn’t going to randomly implement some other major change that affects your business.
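The 80-a-day figure follows directly from the arithmetic (assuming, as the example implies, a daily cap of around 800 tweets viewed; the 1-in-10 ad ratio is the simplifying assumption stated above):

```python
# How a daily view cap translates into a hard ceiling on ad impressions per user.
daily_tweet_cap = 800    # assumed cap on tweets a user can view per day
tweets_per_ad = 10       # the "one ad for every ten tweets" simplification

max_ad_impressions = daily_tweet_cap // tweets_per_ad
print(max_ad_impressions)  # 80
```

Any further tightening of the view cap shrinks that ceiling proportionally, which is exactly the unpredictability advertisers are worried about.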
I’m guessing the rationale here is fighting against scrapers harvesting tweets for AI. Whether this is effective on that front, and whether worsening the user experience is a worthwhile tradeoff, I don’t know. But it’s smart business to at least give people, users and advertisers alike, a heads up first. It sounds like Musk implemented this change Saturday morning and didn’t announce it until he tweeted about it hours later.
While I appreciate the focus and mission, kind of, I guess, you’re really going to set up shop in a country literally using AI to identify air strike targets and handing over to the AI the decision over whether the anticipated civilian casualties are proportionate? https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai-database-hamas-airstrikes
And Israel is pretty authoritarian. Given recent actions against their supreme court and against journalists (Al Jazeera was outlawed, the Associated Press had cameras confiscated for sharing images with Al Jazeera, oh, and the offices of both have been targeted in Gaza), you really think the right-wing Israeli government isn’t going to coopt your “safe superintelligence” for its own purposes?
Oh, then there is the whole genocide thing. Your claims about concern for the safety of humanity ring more than a little hollow when you set up shop in a country actively committing genocide, or at the very least engaged in war crimes and crimes against humanity as determined by just about every NGO and international body that exists.
So Ilya is a shit head is my takeaway.