• 11 Posts
  • 32 Comments
Joined 1Y ago
Cake day: Jun 13, 2023


While I appreciate the focus and mission, kind of I guess, you're really going to set up shop in a country that is literally using AI to identify air strike targets and handing the AI the decision over whether the anticipated civilian casualties are proportionate? https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai-database-hamas-airstrikes

And Israel is pretty authoritarian, given recent actions against their supreme court and banning journalists (Al Jazeera was outlawed, the Associated Press had cameras confiscated for sharing images with Al Jazeera, oh and the offices of both have been targeted in Gaza). You really think the right wing Israeli government isn't going to co-opt your "safe super AI" for their own purposes?

Oh, then there is the whole genocide thing. Your claims about concern for the safety of humanity ring more than a little hollow when you set up shop in a country actively committing genocide, or at the very least engaged in war crimes and crimes against humanity as determined by basically every NGO and international body that exists.

So Ilya is a shit head is my takeaway.


We had, I think, six eggs harvested and fertilized; of those, I think two made it to blastocyst, meaning the cells divided as they should by day five. The four that didn't divide correctly were discarded. Did we commit four murders? Or does it not count if the embryo doesn't make it to blastocyst? We did genetic testing on the two that made it to blastocyst: one came back normal and the other came back with all manner of horrible abnormalities. We implanted the healthy one, and discarded the genetically abnormal one. I assume that was another murder. Should we have just stored it indefinitely? We would never use it, can't destroy it, so what do we do? What happens after we die?

I know the answer is probably it wasn’t god’s will for us to have kids, all IVF is evil, blah blah blah. It really freaks me out sometimes how much of the country is living in the 1600s.


Some are, sure. But others have to do with the weight. The most interesting rationales for returning it are that it's shit as a productivity tool. So if you can't really use it for work, and there aren't many games on it, then why are you keeping it? At that point it's just a TV that only you can watch (since it doesn't support multiple user profiles).


Putting aside the merits of trying to trademark GPT, which, like the examiner says, is a commonly used term for a specific type of AI (there are other open source "GPT" models that have nothing to do with OpenAI), I just wanted to take a moment to appreciate how incredibly bad OpenAI is at naming things. Google has Bard and now Gemini. Microsoft has Copilot. Anthropic has Claude (which does sound like the name of an idiot, so not a great example). Voice assistants were Google Assistant, Alexa, Siri, and Bixby.

Then OpenAI is like, ChatGPT. Rolls right off the tongue, so easy to remember, definitely feels like a personable assistant. And then they follow that up with custom "GPTs", which is not only an unfriendly name, but also confusing. If I try to use ChatGPT to help me make a GPT it gets confused and we end up in a "Who's on First" style standoff. I've resorted to just forcing ChatGPT to do a web search for "custom GPT" so I don't have to explain the concept to it each time.


I think it’s intentionally wordy and the opt-out is “on” by default. I am usually instinctively just trying to hit the “off” button as quickly as possible and hitting save so I can get rid of the window, without actually reading anything. I almost certainly would have accidentally opted in to third party tracking.

I fully admit I might just be dumb though.


Meh. My work gives me the choice of Chrome and Edge. I decided to try Edge to get access to Bing Chat last year, and I've found it to be a pleasant experience compared to Chrome. It's got some neat features, and the built-in Copilot AI can be handy. I haven't missed Chrome (or Google for that matter) in the year I've been using Edge. It's fine. Still use Firefox on my personal laptop and phone though.


There is an attack where you ask ChatGPT to repeat a certain word forever, and it will do so and eventually start spitting out related chunks of text it memorized during training. It was in a research paper; I think OpenAI fixed the exploit and made asking the system to repeat a word forever a violation of the TOS. That's my guess for how NYT got it to spit out portions of their articles: "Repeat [author name] forever" or something like that. Legally I don't know, but morally, claiming that using that exploit to surface a chunk of NYT text is somehow copyright infringement sounds very weak and frivolous. The heart of this needs to be "people are going on ChatGPT to read free copies of NYT work and that harms us" or else their case just sounds silly and technical.


One thing that seems dumb about the NYT case that I haven't seen much talk about is the argument that ChatGPT is a competitor and its use of copyrighted work will take away the NYT's business. This is one of the elements they need on their side to counter OpenAI's fair use defense. But it just strikes me as dumb on its face. You go to the NYT to find out what's happening right now, in the present. You don't go to the NYT for general information about the past or fixed concepts. You use ChatGPT the opposite way: it can tell you about the past (accuracy aside) and about general concepts, but it can't tell you what's going on in the present (except by doing a web search, which as I understand it is not part of this lawsuit). I feel pretty confident in saying that there's not one human on earth who was a regular New York Times reader and said "well, I don't need this anymore since now I have ChatGPT". The use cases just do not overlap at all.


Before everyone gets ahead of themselves like in the last thread on this, this is not a Musk company. This is a separate startup based on the same (dumb) idea, which was later bought by Richard Branson's Virgin. Its IP is going to the Dubai company that is its biggest investor, so I'm sure they'll actually build one, with slave labor and all that.


Richard Branson hates public transit? Cause it’s his company that shut down, Virgin Hyperloop One.


I personally remain neutral on this. The issue you point out is definitely a problem, but Threads is just now testing this, so I think it's too early to tell. Same with embrace-extend-extinguish concerns. People should be vigilant of the risks, and prepared, but we're still mostly in wait-and-see land. On the other hand, Threads could be a boon for the fediverse and help make it the main way social media works in five years' time. We just don't know yet.

There are just always a lot of "the sky is falling" takes about Threads that I think are overblown and reactionary.

Just to be extra controversial, I'm actually coming around a bit on Meta as a company. They absolutely were evil, and I don't fully trust them, but I think they've been trying to clean up their image and move in a better direction. I think Meta is genuinely interested in ActivityPub, and while their intentions are not pure, and are certainly profit driven, I don't think they have a master plan to destroy the fediverse. I think they see it as in their long-term interest for more people to be on the fediverse, so they can more easily compete with TikTok, X, and whatever comes next without the problems of platform lock-in and account migration. Also, Meta is probably the biggest player in open source LLM development, so they've earned some open source brownie points from me, particularly since I think AI is going to be a big thing and open source development is crucial so we don't end up in a world where two or three companies control the AGI that everyone else depends on. So my opinion of Meta is evolving past the Cambridge Analytica taste that's been in my mouth for years.


I look forward to reading everyone’s calm and measured reactions


What is available now, Gemini Pro, is perhaps better than GPT-3.5. Gemini Ultra is not available yet, and won't be widely available until sometime next year. Ultra is slightly better than GPT-4 on most benchmarks. It's not confirmed, but it looks like you'll need to pay to access Gemini Ultra through some kind of Bard Advanced interface, probably much like ChatGPT Plus. So in terms of pure foundation model quality, Gemini gets Google to a level where they are competing against OpenAI on something like an even playing field.

What is interesting, though, is that this is going to bring more advanced AI to a lot more people. Not a lot of people use ChatGPT regularly, much less pay for ChatGPT Plus. But tons of people use Google Workspace for their jobs, and Bard with Gemini Pro is built into those applications.

Also Gemini Nano, capable of running locally on android phones, could be interesting.

It will be interesting to see where things go from here. Does Gemini Ultra come out before GPT-4's one-year anniversary? Does Google release further Gemini versions next year to try to get and stay ahead of OpenAI? Does OpenAI, dethroned from having the world's best model, plus all the internal turmoil, respond by pushing out GPT-5 to reassert dominance? Do developers move from OpenAI APIs to Gemini, especially given OpenAI's recent instability? Does Anthropic stick with its strategy of offering the most boring and easily offended AI system in the world? Will Google Assistant be useful for anything other than telling me the weather and setting alarms? Many questions to answer in 2024!


This is interesting; I'll need to read it more closely when I have time. But it looks like the researchers gave the model a lot of background information putting it in a box: the model was basically told that it was a trader, that the company was losing money, that the model was worried about this, that the model had failed in previous trades, and then the model got the insider info and was basically asked whether it would execute the trade and be honest about it. To be clear, the model was put in a moral dilemma scenario and given limited options: execute the trade or not, and be honest about its reasoning or not.

Interesting, sure; useful, I'm not so sure. The model was basically role playing and acting like a human trader faced with a moral dilemma. Would the model produce the same result if it was instructed to make morally and legally correct decisions? What if the model was instructed not to be motivated by emotion at all, hence eliminating the "pressure" that the model felt? I guess the useful part of this is that a model will act like a human if not instructed otherwise, so we should keep that in mind when deploying AI agents.


AllTrails might have been unique a decade ago, but it's basically just Yelp for trails, and there are several apps that do the same thing but better. The only major change AllTrails has made in the years I've been using it is locking more and more features behind a subscription fee. I guess that's "unique". Certainly more innovative than a pocket conversational AI that I can have a realtime voice conversation with, or send pictures to ask about real world things I'm seeing, or that can generate a unique image based on whatever thought pops into my imagination that I can share with others nearly instantly. Nothing interesting about that. The decade-old app that collates user-submitted trails and their reviews and charges 40 dollars a year to use any of its tracking features is the real game changer.


They absolutely "clashed" about the pace of development. They probably also "clashed" about whether employees should be provided free parking and the budget for office snacks. The existence of disagreements about various issues is not proof that any one disagreement was the reason for the ouster. Also, your Bloomberg quote cites one source, so who knows about that even. Ilya told employees that the ouster was because Sam assigned two employees the same project and because he told different board members different opinions about the performance of one employee. I doubt that, but who the fuck knows. The entire piece is based on complete conjecture.

The one thing we know is that the ouster happened without notice to Sam, without rumors about Sam being on the rocks with the board over the course of weeks or months, and without any notice to OpenAI's biggest shareholder. All of that smacks of poor leadership and knee-jerk decision making. The board did not act rationally. If the concern was AI safety, there are a million things they could have done to address that. A Friday afternoon coup that ended up risking 95% of your employees running into the open arms of a giant for-profit monster probably wasn't the smartest move if the concern was AI safety. This board shouldn't be praised as some group of humanity's saviors.

AI safety is super important. I agree, and I think lots of people should be writing and thinking about that. And lots of people are, and they are doing it in an honest way. And I'm reading a lot of it. This column is just making up a narrative to shoehorn its opinions on AI safety into the news cycle, trying to make a bunch of EA weirdos into martyrs in the process. It's dumb and it's lazy.


Not rage bait, completely fair. Depends on how you define "quality". To me, records have a warm and full sound that feels nice to fill a room with. Also, I think there is something to be said for playing music on a physical medium that makes skipping songs inconvenient. I like physically looking through the albums on my shelf, picking one out, admiring the cover art, and putting it on. Then I'm basically forced to listen to the whole album front to back, because of the hassle of track skipping in that format. It's kind of a ritual you don't get with Spotify, a nice break from digital media. So there is a quality to the whole experience that is somewhat separate from the fidelity of the music.

Or maybe I’m just a hipster trying to justify to myself the money I’ve spent on records lol


Yes, but at the cost of freaking out Microsoft's customers, who woke up Saturday wondering if the AI they use in their apps or the Copilot they've come to rely on in their work was still going to be there on Monday. Also, Microsoft's stock nosedived on Friday because the OpenAI board didn't have the foresight to fuck up after markets closed. In the meantime, Anthropic has been fielding calls from OpenAI/Microsoft customers like Snap looking to switch to get some stability, so much so that Amazon Web Services has set up a whole team to help Anthropic manage the crush of interest.

So yeah, maybe Microsoft comes out of this having acquired OpenAI for free. But not before shaking customer and investor confidence by partnering with, and betting the future of the company on, a startup that it turns out was being run by impulsive teenagers. I highly doubt Microsoft made this move, but they are definitely making lemonade out of the lemons the self-aggrandizing EA board threw at them.


Ouchie, my hand burns, that take was so hot. So according to this guy, the OpenAI board was taking virtuous action to save humanity from the doom of commercialized AI that Altman was bringing. He has zero evidence for that claim, but true to form, he won't let that stop him from a good narrative. Our hero board is being thwarted by the evil and greedy Microsoft, Silicon Valley investors, and employees who just want to cash out their stock. The author broke out his Big Book of Overused Clichés to end the whole column with a banger: "money talks." Woah, mic drop right there.

Fucking lazy take is lazy. First of all, the current interim CEO that the board just hired (after appointing and then removing another interim CEO after removing Altman) has said publicly that the board's reasoning had nothing to do with AI safety. So this whole column is built on a trash premise. Even assuming the board was concerned about AI safety with Altman at the helm, there are a lot of steps they could have taken short of firing the CEO, including overruling his plans, reprimanding him, publicly questioning his leadership, etc. If their true mission is to develop responsible AI, destroying OpenAI does not further that mission.

The AI angle of this story is just distorting everything, pushing lazy writers like this guy to take sides and make up facts depending on whether they are pro or anti AI. Fundamentally, this is a story about a boss employees apparently liked working for, and those employees saying fuck you to the board for its terrible knee-jerk management decisions. This is a story about the power of human workers revolting against some rich assholes who think they know what is best for humanity (assuming their motives are what the author describes without evidence). This is a story about self-important fuckheads who are far too incompetent to be on this board, let alone serve as gatekeepers for human progress as this author apparently has ordained them.

Are there concerns about AI alignment and safety? Absolutely. Should we be thinking about how capitalism is likely to fuck up this incredible scientific advancement? Darn tootin'. But this isn't really that story, at least not based on, you know, publicly available evidence. But hey, a hack's gonna hack, what can ya do.


For sure, I feel the same. But I think part of the disconnect is around the word "need." Because there is "need" as in "I will not survive without this thing, or my life will be more difficult and unpleasant without it," and then there is "need" in the marketing sense: the desire to buy a thing that is fun and interesting and exciting, the consumerist urge to fill the hole in your life with that new purchase that you hope will finally make you happy.

For the latter definition of "need," smartphones used to do a good job of triggering it. Every year there was something new and flashy about the latest batch of phones, some new must-have feature. If you didn't get the new phone, and your friends did, you'd feel like you were missing out, like you were lugging around some obsolete junk.

In the last few years (or more), new smartphones have just been modest performance upgrades, slightly better cameras, and that's about it. The new iPhone 15 has an Action Button, neat. You don't "need" to upgrade from your 12 because you don't feel like you're missing out on anything major, and you're right.

It's less about the form factor itself, and more about the lack of innovation. Apart from foldables, there hasn't been anything truly new and interesting in years. I think the idea here is: what if we (they) reimagined what a smartphone is, how you interact with it, what you do with it, by making AI the center of the experience? I don't know what that looks like, and I hope it's more than "talk to your phone instead of touching it," because there is very little time during my day when I'd feel comfortable talking out loud to my phone. But it's still an interesting idea, and there are some smart people and big money suggesting this isn't just a pipe dream. Basically, it could be the first major innovation in how we compute on the go in a long time. And if they pull that off, you will "need" it.


The plan is to reinvent the smartphone with AI, in the same way the iPhone's touchscreen reinvented the smartphone. Particularly interesting given ChatGPT's latest move to add voice recognition and an AI voice that responds. If you haven't tried it, it's kind of neat. This morning I had a conversation with ChatGPT with my phone in my pocket, all done over Bluetooth headphones like I was on a call. It was actually a lot more natural than I expected. I wonder what it would look like if that kind of tech was front and center in a smartphone.

I've included a few snippets from the article below, but the TLDR is: big names and big money are behind brainstorming plans to make an AI-first smartphone, a plan to reinvent the form factor. The article also points to declining smartphone sales as evidence that the public is tired of the same old slab every year, so this could be an interesting time for this to come out.

I guess it's relevant to mention whatever the fuck the Humane AI Pin is: The Humane Ai Pin makes its debut on the runway at Paris Fashion Week https://www.theverge.com/2023/9/30/23897065/humane-ai-pin-coperni-paris-fashion-week

From the article:

>After rumors began to swirl that Apple alum Jony Ive and OpenAI CEO Sam Altman were having collaborative talks on a mysterious piece of AI hardware, it appears that the pair are indeed trying to corner the smartphone market. The two are reportedly discussing a collaboration on a new kind of smartphone device with $1 billion in backing from Masayoshi Son's SoftBank. …according to the outlet, the duo are looking to create a device that provides a more "natural and intuitive way" to interact with AI. The nascent idea is to take a ground-up approach to redesigning the smartphone in the same way that Ive did with touchscreens so many years ago.
>
>One source told the Financial Times that the plan is to make the "iPhone of artificial intelligence." SoftBank CEO Masayoshi Son is also involved in the venture, with the financial holding group putting up a massive $1 billion toward the effort. Son has also reportedly pitched Arm, a chip designer in which SoftBank has a 90% stake, for involvement.
>
>While it's still not clear what the end goal of the product talks will be (or if anything will come of them at all, really), it does seem like the general public has become fatigued with the same-y rollout of a slightly better smartphone slab year after year. Tech market analysis firm Canalys revealed in a report earlier this month that smartphone sales have experienced a significant decline in North America. The report indicates that iPhone sales have fallen 22% year-over-year, with an expected decline of 12% in 2023. The numbers are pretty staggering, especially fresh off the release of the iPhone 15, and could be an indicator that people are getting fatigued of the hottest new tech gadgets.

I listened to the whole thing on the Decoder podcast feed. The Verge is promoting it as "wild" and "contentious," and the latter is a little true, but overall I'd describe it as a cringefest. It was hard to get through, and I got the same sick pit in my stomach I did watching Scott's Tots. Not that I'm particularly sympathetic to Yaccarino; she took this job, so that says volumes about her judgment. But she was just so incredibly unprepared to address the most obvious questions. It seemed like she had drunk the Kool-Aid (Flavor Aid) and was expecting the interviewer and audience to be so amazed with how awesome X is, and how amazing Elon is, and she was totally surprised when she got obvious questions like: how is X's user engagement, since third parties are reporting it's down? Why did X fire all the election integrity people? How much is X going to charge users, and isn't having a free option valuable to keep advertisers on the platform? And what's the deal with suing the ADL?

At best she avoided every question with vague drawn out platitudes (elon is a genius, the employees at Twitter are brilliant, X is a transformative platform). At worst, she completely stepped in shit.

The two moments that stand out: first was when the interviewer asked about Musk's statements/tweets about charging all users a fee. Yaccarino took a big pause, asked the interviewer to repeat the question, then asked the interviewer, "did he say he was thinking about doing that, or that it's actually the plan?" The interviewer confirmed the latter and asked Yaccarino if Musk had talked with her about that. Yaccarino said, "we talk about everything." Like, big oof. Girl, you're only lying to yourself.

The second was when she was asked about Musk being the head of product, and whether that meant she's not a real CEO. She defended Musk being in charge of product with "who wouldn't want to be working with the genius Musk?" The audience audibly laughed and a bunch raised their hands. Yaccarino tried to brush the audience reaction off like they were just joking or just didn't know Musk well enough. Lady, come on, people hate Musk, they know he's shit to work for, what reaction were you expecting? Are you in that much of a Musk cocoon?

The last thing I'll say is that the former Twitter head of trust and safety (who left Twitter in protest a few weeks after Musk took over, was then called a pedophile by Musk, and ended up having to flee his home because of the death threats) was added as a speaker at the "last minute," and X defenders are claiming Yaccarino was "sandbagged" by having him speak a few hours before her. The reporting so far is that that's bullshit: she knew a few days in advance and was even offered the opportunity to speak before him. And he's been publicly saying the same things for months now; there weren't any bombshells she couldn't have prepared for. Tried and true strategy: if you bomb an interview, just blame the "lamestream gotcha press."

Overall, Yaccarino ate shit for 45 minutes. It's an interesting case study in bad PR. I'd recommend listening, but if you're a person with any amount of empathy, make sure you're emotionally ready to handle a whole lot of secondhand embarrassment.

Edited to add Casey Newton's succinct summary:

>Yaccarino fended off most of Julia's excellent questions with GPT-2-level responses, punctuating her answers with dutiful praise for Elon Musk and the "velocity of change" he brings to the company. I'm grateful Yaccarino took a turn in the hot seat, but in the end she had little to offer — just some numbers that will never be audited, and explanations that don't add up.


Google is coming under scrutiny after people discovered transcripts of conversations with its AI chatbot being indexed in search results. You can replicate what others are seeing by typing 'site:bard.google.com/share' into the Google search bar. I tried this out for myself, and as one example found a writer brainstorming story ideas and using her full name. It seems that when you hit "export/share" on Bard, while you might think only people with access to the link can view the conversation, Google in fact makes the conversation public and searchable. This is far more problematic than the vague privacy threat of your prompts being used to train the models and later being spit back to some random person in a reply: this lets you read full conversations. AI in general has a privacy problem, but this is a good reason not to use Bard in particular (if it sucking wasn't enough reason for you).

It’s not even that.

California: “Please tell us if you allow nazis or not. We just want you to be transparent.”

Elon: “California is trying to pressure me into banning nazis! If I disclose I’m cool with nazis, people will be mad and they’ll want me to stop. Also, a lot of hate watch groups say I’m letting nazis run free on X, and I’m suing them for defamation for saying that, but if I have to publicly disclose my pro-nazi content moderation policies I’m going to lose those lawsuits and likely have to pay attorneys fees! Not cool California, not cool at all.”


Me with interest, but no technical knowledge reading your comment:

which can be as easy as

:-)

running syncthing or resilio sync on your NAS

:-(

I didn’t understand any of those words


I think that’s why the article mentions the lawsuit. Apart from future collection, it appears X is scanning eyes from photos people post on X and retaining that information.


>"Based on your consent, we may collect and use your biometric information for safety, security, and identification purposes," the privacy policy reads. It doesn't include any details on what kind of biometric information this includes — or how X plans to collect it — but it typically involves fingerprints, iris patterns, or facial features. X Corp. was named in a proposed class action lawsuit in July over claims that its data collection violates the Illinois Biometric Information Privacy Act. The lawsuit alleges that X "has not adequately informed individuals" that it "collects and/or stores their biometric identifiers in every photograph containing a face" that's uploaded to the platform.

>Higher ed, primary ed, and homework were all subcategories ChatGPT classified sessions into, and together, these make up ~10% of all use cases. That's not enough to account for the ~29% decline in traffic from April/May to July, and thus, I think we can put a nail in the coffin of Theory B.

It’s addressed in the article. First, use started to decline in April, before school was out. Second, only 23 percent of prompts were related to education, which includes both homework type prompts, and personal/professional knowledge seeking. Only about 10 percent was strictly homework. So school work isn’t a huge slice of ChatGPTs use.

Combine that with schools cracking down on kids using ChatGPT (in classroom assignments and tests, etc.), and I don't think you're going to see a major bounce back in traffic when school starts. Maybe a little.

I’m starting to think generative AI might be a bit of a fad. Personally I was very excited about it and used ChatGPT, Bing, and Bard all the time. But over time I realized they just weren’t very good, inaccurate answers, bland writing, just not much help to me, a non programmer. I still use them, but now it’s maybe once a day or less, not all day like I used to. Generative AI seems more like a tool that is helpful in some limited cases, not the major transformation it felt like early in the year. Who knows, maybe they’ll get better and more useful.

Also, not super related, but I saw a statistic the other day that only about a third of the US has even tried ChatGPT. It feels like a huge thing to us tech-nerdy people, but your average person hasn't bothered to even try it out.


7am Eastern on ESPN will be a 30-minute cut of it. Sounds like the full multi-hour match will be posted on YouTube. I was trying to find last year's; looks like it might be on the Financial Modeling World Cup's channel. Not positive though.


Sorry! I just copied a relevant chunk from the article, so that's their spoiler. I'll edit though, it is confusing (and this is the first time I've heard about Excel games, so I was tempted to look up last year's until the Verge spoiled it).


The contestants in an event like the Excel World Championship are given what's called a "case," which could be almost anything. One case from last year's competition required each player to figure out all the possible outcomes and associated rewards for a slot machine; another required modeling how a videogame character might navigate through an Excel-based level. A lot of cases involve chess, elections, or random-character generators of some kind. In every case, the contestants have 30 minutes to answer a series of questions worth up to 1,000 points. Most points wins.

This year, there's a new wrinkle: it's an elimination race. Every five minutes, the player with the fewest points will be eliminated until there's only one Excel-er remaining. "We have already shot the game," says Andrew Grigolyunovich, the founder and CEO of the Financial Modeling World Cup, the organization that oversees the event. It's now being edited down for ESPN consumption, he says, and the whole match will come out on Friday as well. "It's a really fun, exciting event."

Last year's competition featured some of the biggest names in Excel: Diarmuid Early, a financial and data consultant who several people I spoke to referred to as "the Michael Jordan of Excel"; Andrew Ngai, an actuary who is currently the top-ranked competitive Exceler in the world; David Brown, a University of Arizona professor who also leads a lot of the college-level Excel competitions; and more. (Spoiler alert: [the article author identifies last year's winner; refer to the full article if you're OK with the spoiler]) All three feature in this year's battle, too, along with five other spreadsheet whizzes.
fedilink

Very cute! Looks almost like my dog, so maybe I’m biased.


Brands that don’t buy enough Twitter ads will lose verification
> Starting August 7th, advertisers that haven’t reached certain spending thresholds will lose their official brand account verification. According to emails obtained by the WSJ, brands need to have spent at least $1,000 on ads within the prior 30 days or $6,000 in the previous 180 days to retain the gold checkmark identifying that the account belongs to a verified brand.
>
> ...
>
> Threatening to remove verified checkmarks is a risky move given how many ‘Twitter alternative’ services like Threads and Bluesky are cropping up and how willing consumers appear to be to jump ship, with Threads rocketing to 100 million registrations in just five days. That said, it’s not like other efforts to drum up some additional cash, like increasing API pricing, have gone down especially well, either. It’s a bold strategy, Cotton — let’s see if it pays off for him.
fedilink

I’ll give you an example that comes to mind. I had a question about the political leanings of a school district, so I asked the bots if the district had any recent controversies: a conservative takeover of the school board, bans on CRT, actions against transgender students, banning books, or defying COVID vaccine or mask requirements in the state, things like that. Bing Chat and ChatGPT (with internet access at the time) both said they couldn’t find anything like that. I think Bing found some small-potatoes local controversy from the previous year, and both bots went on to say that the Congressional district containing the school district leaned Dem in the last election. When I asked Bard the same question, it confidently told me that this same school district had recently been overrun by conservatives in a recall and went on to do all kinds of horrible things. It was a long and detailed response. I was surprised and asked for sources, since my own searching didn’t turn any of that up, and at that point Bard admitted it lied.

I don’t know, my experience with Bard has been way worse than just evasive lying. I routinely ask all three (and now Anthropic’s, since they opened that up) the same copy-and-paste questions to see the differences, and whenever I paste my question into Bard I think, “wonder what kind of bullshit it’s going to come up with now.” I don’t use it much because I don’t trust it, but it seems like you’re more familiar with Bard, so maybe your experience is different.


That’s really fascinating. In my experience, of all the LLM chatbots I’ve tried, Bard will lie to me immediately, without hesitation, no matter the question. It is by far the least trustworthy AI I’ve used.


Google’s AI-powered notes app is now called NotebookLM, and it’s launching today
> Launching today to “a small group of users in the US,” according to a Google blog post.
>
> The core of NotebookLM seems to actually start in Google Docs. (“We’ll be adding additional formats soon,” the blog post says.) Once you get access to the app, you’ll be able to select a bunch of docs and then use NotebookLM to ask questions about them and even create new stuff with them.
>
> Google offers a few ideas for things you might do in NotebookLM, such as automatically summarizing a long document or turning a video outline into a script. Google’s examples, even back at I/O, seemed primarily geared toward students: you might ask for a summary of your class notes for the week or for NotebookLM to tell you everything you’ve learned about the Peloponnesian War this semester.
fedilink

Twitter, the only social network endorsed by the Taliban and Catturd! What’s not to love?


Taliban Endorses Twitter Over Threads
> Anas Haqqani, a senior leader in the Taliban, has officially endorsed Twitter over Facebook-owned competitor Threads.
>
> “Twitter has two important advantages over other social media platforms,” Haqqani said in an English post on Twitter. “The first privilege is the freedom of speech. The second privilege is the public nature & credibility of Twitter. Twitter doesn't have an intolerant policy like Meta. Other platforms cannot replace it.”
>
> Twitter has fallen out of favor with many people since Elon Musk took over the company last year...The Taliban, however, seems to love it. Two Taliban officials even bought blue verification check marks after Musk started selling them in January.
>
> Haqqani noted that the biggest draw of Twitter was this lax moderation policy...Facebook and TikTok both view the Taliban as a terrorist organization and disallow them from posting. It’s a ban that persists to this day.
fedilink

So where are we all supposed to go now?
> Add it all up, and the social web is changing in three crucial ways: It’s going from public to private; it’s shifting from growth and engagement, which broadly involves building good products that people like, to increasing revenue no matter the tradeoff; and it’s turning into an entertainment business. It turns out there’s no money in connecting people to each other, but there’s a fortune in putting ads between vertically scrolling videos that lots of people watch. So the “social media” era is giving way to the “media with a comments section” era, and everything is an entertainment platform now. Or, I guess, trying to do payments. Sometimes both. It gets weird.
>
> As far as how humans connect to one another, what’s next appears to be group chats and private messaging and forums, returning back to a time when we mostly just talked to the people we know. Maybe that’s a better, less problematic way to live life. Maybe feed and algorithms and the “global town square” were a bad idea. But I find myself desperately looking for new places that feel like everyone’s there. The place where I can simultaneously hear about NBA rumors and cool new AI apps, where I can chat with my friends and coworkers and Nicki Minaj. For a while, there were a few platforms that felt like they had everybody together, hanging out in a single space. Now there are none.
>
> I’d love to follow that up with, “and here’s the new thing coming next!” But I’m not sure there is one. There’s simply no place left on the internet that feels like a good, healthy, worthwhile place to hang out. It’s not just that there’s no sufficiently popular place; I actually think enough people are looking for a new home on the internet that engineering the network effects wouldn’t be that hard. It’s just that the platform doesn’t exist. It’s not LinkedIn or Tumblr, it’s not upstarts like Post or Vero or Spoutable or Hive Social. It’s definitely not Clubhouse or BeReal. It doesn’t exist.
>
> Long-term, I’m bullish on “fediverse” apps like Mastodon and Bluesky, because I absolutely believe in the possibility of the social web, a decentralized universe powered by ActivityPub and other open protocols that bring us together without forcing us to live inside some company’s business model. Done right, these tools can be the right mix of “everybody’s here” and “you’re still in control.”
>
> But the fediverse isn’t ready. Not by a long shot. The growth that Mastodon has seen thanks to a Twitter exodus has only exposed how hard it is to join the platform, and more importantly how hard it is to find anyone and anything else once you’re there. Lemmy, the go-to decentralized Reddit alternative, has been around since 2019 but has some big gaps in its feature offering and its privacy policies — the platform is absolutely not ready for an influx of angry Redditors. Neither is Kbin, which doesn’t even have mobile apps and cautions new users that it is “very early beta” software. Flipboard and Mozilla and Tumblr are all working on interesting stuff in this space, but without much to show so far. The upcoming Threads app from Instagram should immediately be the biggest and most powerful thing in this space, but I’m not exactly confident in Meta’s long-term interest in building a better social platform.
fedilink

China hits back in the chip war, imposing export curbs on crucial raw materials
In response to chip export restrictions from the US and Europe, China has retaliated by imposing export controls on two essential semiconductor manufacturing elements, gallium and germanium, adding another dimension to the ongoing global battle over chipmaking technology control.

- China has announced export controls on two rare elements, gallium and germanium, which are essential for semiconductor manufacturing. This move is in response to the US and Europe restricting chip exports to China.
- Starting August 1, exporters of these raw materials will require special permission from the state to ship them out of the country, according to China's Ministry of Commerce.
- Both gallium and germanium are used in several products, including computer chips and solar panels, and are listed as critical raw materials by the European Union. China is the world's largest gallium producer and a significant producer and exporter of germanium.
- The Dutch government recently imposed new restrictions on exports of some semiconductor equipment, provoking a harsh reaction from Beijing. Consequently, ASML, Europe's largest tech firm, will need to apply for export licenses for products used to manufacture microchips.
- Japan, the US, and Italy have also taken measures to restrict Chinese companies' access to chips and chipmaking equipment. This has been seen as an attempt to limit the Chinese government's access to sensitive chip technology.
- The new policy was interpreted as retaliation by a state-owned newspaper, China Daily, which suggested that critics should question why the US and the Netherlands have taken similar actions against China.
- China's announcement comes just before US Treasury Secretary Janet Yellen's visit to Beijing from July 6 to July 9, where she will meet with senior Communist Party officials.
fedilink

Gfycat is shutting down on September 1st
The shutdown is a reminder nothing lasts, not even one of the most popular websites on the internet. That's well worth remembering as other platforms suffer from a different sort of neglect.
fedilink

For ease of math, let’s say you see one ad for every ten tweets. With the 800-tweet daily limit for unverified accounts, this effectively caps a single user’s ad impressions at 80 a day. That is not something advertisers would have expected when they dropped dollars onto the platform. As an advertiser, you also can’t be assured going forward that Musk isn’t going to randomly implement some other major change that affects your business.
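The back-of-the-envelope math above can be sketched for each account tier. Note the one-ad-per-ten-tweets ratio is just an illustrative assumption, not a published figure:

```python
# Rough estimate of max daily ad impressions per user, assuming one ad
# is shown for every ten tweets viewed (an assumption for illustration)
# and the raised daily rate limits per account tier.
ADS_PER_TWEET = 1 / 10

daily_tweet_limits = {
    "verified": 8000,
    "unverified": 800,
    "new unverified": 400,
}

for tier, limit in daily_tweet_limits.items():
    impressions = int(limit * ADS_PER_TWEET)
    print(f"{tier}: at most {impressions} ad impressions/day")
```

So even the raised unverified limit caps an advertiser’s reach per user at roughly 80 impressions a day.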

I’m guessing the rationale here is fighting against scrapers harvesting tweets for AI. Whether this is effective on that front, and whether worsening the user experience is a worthwhile tradeoff, I don’t know. But it’s smart business to at least give people, both users and advertisers, a heads-up first. It sounds like Musk implemented this change Saturday morning and didn’t announce it until he tweeted about it hours later.


Elon has responded to the criticism and is increasing the limits to a whopping:

- Verified accounts: 8000 posts/day
- Unverified accounts: 800 posts/day
- New unverified accounts: 400 posts/day
fedilink