That there is no perfect defense. There is no protection. Being alive means being exposed; it’s the nature of life to be hazardous—it’s the stuff of living.

  • 59 Posts
  • 46 Comments
Joined 4M ago
Cake day: Jun 09, 2024

Reports: Tesla’s prototype Optimus robots were controlled by humans
cross-posted from: https://lemmy.world/post/20907783

Palm Pilot: The Tablet That Schooled Apple | TechSpot
cross-posted from: https://lemmy.world/post/20907375






You don’t even need to go that far.

We just need real courts (grounded in principles of justice and a sober interpretation of corruption and criminality) and proper incentives: full asset seizure and mandatory community service (a decade minimum) working as a junior janitor at an Alzheimer's patient facility, with restricted access to smartphones/computers and mobility limited to the immediate area around the facility. You could even get minimum wage while taking part in your community service program.


This is the kind of thing that makes me support use of extra-judicial methods (at least in a temporary and limited context) against global oligarchs and senior lackeys.

The host then followed up with, “Do you think we can meet AI’s energy needs without totally blowing out climate goals?” and Schmidt answered with, “We’re not going to hit the climate goals anyway because we’re not organized to do it — and the way to do it is with the ways that we’re talking about now — and yes, the needs in this area will be a problem. But I’d rather bet on AI solving the problem than constraining it and having the problem if you see my plan.”

This is outright malicious. How exactly would AI “solve the problem”? Later on, the article (I am not watching the propaganda video) alludes to “AI … will make energy generation systems at least 15% more efficient or maybe even better,” but he clearly just made that up on the spot. And at any rate, even if “AI” helps discover a method to make (all?) energy generation 15% more efficient, that would still require trillion-dollar investments to retrofit current energy generation plants with the new technology.

Who is Schmidt to say that the money spent in the above-mentioned scenario wouldn’t deliver better returns if invested in wind and solar instead?



One other note, to my understanding (I am not American), the “U.S. District Court for the Eastern District of Texas” is known for its corruption.



Perhaps the ad-free Prime Video subscription could be a viable option if Prime has a lot of your favourite shows and you are opposed to piracy?

Not judging or telling you what to do. Just thinking out loud.

I would just go with piracy if you don’t want to pay for the ad-free tier.






Depends on what kind of games you play. Economic strategy games (tycoons, city-builders, large-scale simulation games) can easily bring even a modern CPU to its knees.



It seems that the ~$3.7 billion revenue figure is from this NYT article.

Some interesting background:

Roughly 10 million ChatGPT users pay the company a $20 monthly fee, according to the documents. OpenAI expects to raise that price by $2 by the end of the year, and will aggressively raise it to $44 over the next five years, the documents said.

It will be interesting to see if their predictions turn out to be true. $44 a month seems steep for an LLM, not to mention there will likely be a lot of competition, both from cloud LLM providers and local LLM initiatives.
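A quick back-of-envelope check of what those quoted subscription numbers imply (this naively assumes the ~10 million subscriber count stays flat, which it obviously won't):

```python
# Annualized subscription revenue implied by the figures quoted above.
subscribers = 10_000_000     # ~10M paying ChatGPT users, per the documents
current_price = 20           # USD per month today
planned_price = 44           # USD per month, the five-year target

annual_now = subscribers * current_price * 12
annual_planned = subscribers * planned_price * 12

print(f"At $20/month: ${annual_now / 1e9:.1f}B per year")      # $2.4B
print(f"At $44/month: ${annual_planned / 1e9:.2f}B per year")  # $5.28B
```

So subscriptions at today's price roughly line up with the ~$3.7B revenue figure only if you add other income streams; the $44 price would more than double the subscription side on its own, assuming no churn.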


His involvement in the infamous WorldCoin provides useful insight into his character.

An oligarch and a degenerate (outside the US many oligarchs have a more or less sober understanding of who they are, although degeneracy among oligarchs is a global issue).







cross-posted from: https://lemm.ee/post/43113750

It really is exciting to see alternative battery systems beginning to see wider commercialization.

I am not aware of sodium-ion batteries for home use; I believe they’re mostly for industrial-scale battery systems. I could be wrong though, and would be interested in learning more.

In an apartment setting, IMO the current gold standard is LiFePO4 (Lithium iron phosphate) batteries.

I live in Ukraine and we have constant problems with electricity supply (thank you, dear russians). At times you have 1-2 full charge/discharge cycles per day on a 1 kWh battery system. Several LiFePO4 systems in my extended family seem to work close to baseline even after 1.5 years (not used daily though).

I have not seen any options for sodium-ion batteries for home use, but this may be a local thing.

In a more rural/suburban setting, generators work as backup power supplies for most people. Typically only the well-off get a high-capacity LiFePO4 system for a whole-house setup.
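The "close to baseline after 1.5 years" observation above is consistent with rough cycle math. A sketch, assuming the worst-case 2 cycles/day mentioned above and a ~3000-cycle rating, which is a commonly cited ballpark for LiFePO4 cells rather than a figure from any specific product:

```python
# Rough cycle-count estimate for the usage pattern described above.
days = 1.5 * 365            # ~1.5 years of outages
cycles_per_day = 2          # worst case: 2 full charge/discharge cycles/day
rated_cycles = 3000         # common LiFePO4 ballpark (to ~80% capacity)

used = days * cycles_per_day
print(f"Cycles used: {used:.0f} of ~{rated_cycles}")  # ~1095 cycles
print(f"Share of rated life consumed: {used / rated_cycles:.0%}")
```

Even under constant worst-case cycling, that's only about a third of a typical rated life, so minimal observable degradation is what you'd expect.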






I was surprised to see that their negotiations broke down over price/cost as opposed to technology (an unproven node, and to my knowledge Intel doesn’t really have any experience with the semi-custom x86 business).










This would be an excellent law/regulation that makes complete sense.

The major companies can most definitely manage this (although they will cry crocodile tears).



That would not be a good thing. The CPU/GPU design and semiconductor fab industries are already massively concentrated.



Nvidia Gets DOJ Subpoena in Escalating Antitrust Probe [Bloomberg]
cross-posted from: https://lemmy.world/post/19388103 > [Source Bloomberg article](https://www.bloomberg.com/news/articles/2024-09-03/nvidia-gets-doj-subpoena-in-escalating-antitrust-investigation)

cross-posted from: https://lemmy.world/post/19284817

[RAND Report - The Root Causes of Failure for Artificial Intelligence Projects and How They Can Succeed](https://www.rand.org/pubs/research_reports/RRA2680-1.html)


I didn’t really get this either.

I did think the final paragraph was notable, a “zeitgeist of our times” if you will:

The absurdity of the situation prompted tech author and journalist James Vincent to write on X, “current tech trends are resistant to satire precisely because they satirize themselves. a car park of empty cars, honking at one another, nudging back and forth to drop off nobody, is a perfect image of tech serving its own prerogatives rather than humanity’s.”


cross-posted from: https://lemmy.world/post/18626085

Vietnam turns chip sector magnet with affordable, quality talent pool [Nikkei]
cross-posted from: https://lemmy.world/post/18625727 > [Source Nikkei article](https://asia.nikkei.com/Spotlight/The-Big-Story/Vietnam-turns-chip-sector-magnet-with-affordable-quality-talent-pool)

An Arcade Designed in 1991, but Built in 2024! The Lemmings Prototype
cross-posted from: https://lemmy.world/post/18625032


A very interesting and unique metaphor. Very true too. :)



Gaming laptops (especially 17 inch devices) tend to have removable wireless networking cards.


It may not be news for you (or for me), but it could be news for the average American geezer. :)


Any proof or just speculation?

Read their privacy policy.

While it is written in a seemingly pro-consumer manner, if you get down to the basics, there are two main points:

  1. They can and do collect absolutely any and all information
  2. They can and do share all this information with 3rd parties when it benefits them

This was a general comment, not aimed at you. Honestly, it wasn’t my intention to accuse you specifically. Apologies for that.


Thanks for the reply.

I guess we’ll see what happens.

I still find it difficult to get my head around how a decrease in novel training data will not eventually cause problems (even with techniques to work around this in the short term, which I am sure work well on a relative basis).

A bit of an aside: I also have zero trust in the people behind current LLMs, whether the leadership (e.g. Altman) or the rank and file. If it’s in their interest to downplay the scope and impact of model degeneracy, they will not hesitate to lie about it.


I’ve read the source Nature article (skimmed through the parts that were beyond my understanding) and I did not get the same impression.

I am aware that LLM service providers regularly use AI-generated text for additional training (from my understanding, this is done to “tune” the results to give a certain style). This is not a new development.

From my limited understanding, LLM model degeneracy is still relevant in the medium to long term. If an increasing % of your net new training content is originally LLM generated (and you have difficulties in identifying LLM generated content), it would stand to reason that you would encounter model degeneracy eventually.

I am not saying you’re wrong. Just looking for more information on this issue.


According to the report, the company’s chief financial officer, Susan Li, told staff the division has lost $55 billion since 2019.

$55 billion in losses over ~5 years? That’s a substantial amount.


Microsoft CTO Kevin Scott is of course not a reliable source due to conflict of interest and his position in the US corporate world.

If anything, the fact that he is doing damage control PR around “LLM scaling laws” suggests something is amiss. Let’s see how things develop.


Progress is definitely happening. One area that I am somewhat knowledgeable about is image/video upscaling. Neural net enhanced upscaling has been around for a while, but we are increasingly getting to a point where SD (DVD source, older videos from the 90s/2000s) to HD upscaling is working almost like in the science fiction movies. There are still issues of course, but the results are drastically better than simply scaling the source media by x2.

The framing of LLMs as some sort of techno-utopian “AI oracle” is indeed a damning reflection of our society. Although I think this topic is outside the scope of current “AI” discussions and would likely involve a fundamental reform of our broader social, economic, political and educational models.

Even the term “AI” (and its framing) is extremely misleading. There is no “artificial intelligence” involved in a LLM.


I am increasingly starting to believe that all these rumors and “hush hush” PR initiatives about “reasoning AI” are an attempt to keep the hype going (and the VC investments flowing) until the vesting period for their stock closes out.

I wouldn’t be surprised if all these “AI” companies have come to a point where they’re basically at the limits of LLM capabilities (due to problems with its fundamental architecture) while not being able to solve its core drawbacks (hallucinations, ridiculously high capex and opex costs).



Hahaha, that’s a pretty wild read.

The fucking Gamaverse.


Some context on what the fuck rabbit and the r1 are would have been helpful.


While a lot of the technical details are beyond my paygrade, this seems to be a potentially large game-changer in the medium term (with possibly a massive impact on Nvidia’s share price).


They are correct in that encoding is a super geeky topic even by the standards of technology discussions.

It is fascinating to see how encoding has changed across generations. Take a relatively high-bitrate source file and encode it with XviD, x264, x265, and whatever the top AV1 encoder is, at the same bitrate and resolution.

Not surprisingly, the biggest jump in quality will be from XviD to x264, but x265 does offer notable improvements.


183,200 TV episodes is pretty modest compared to alternative “non-approved” sources.

One data point: a single source (which has a rule against any TV/show content released in the last 5 years) has a total of ~19.5K shows and TV movies/specials, with ~80K releases. For many shows a single release can be a full season.


Thank you for the clarification regarding ASI. That still leaves the question of the definition of “safe ASI”; a key point that is emphasized in their manifesto.

To use your example, it’s like an early mass-market car industry professional (say, in 1890) discussing road safety and ethical dilemmas on roads dominated by regular drivers and a large share of L4/L5 cars (with some of them being used as part-time taxis). I just don’t buy it.

Mind you, I am not anti-ML/AI. I am an avid user of “AI” (ML?) upscaling (specifically video) and, to a lesser extent, stable diffusion. While AI video upscaling is very fiddly and good results can be hard to get, it is clearly on another level with respect to quality compared to “classical” upscaling algorithms. I was truly impressed when I was able to run my own SD upscale with good results.

What I am opposed to is oligarchs, oligarch-wannabes, and shallow-sounding proclamations of grandiose this or that. As far as I am concerned it’s all bullshit, and they are all, to one degree or another, soulless ghouls that will eat your children alive for the right price and the correct mental excuse model (I am only partially exaggerating; happy to clarify if needed).

If one has all these grand plans for safe ASI, concern for humanity and whatnot, set up a public repo and release all your code under the GPL (along with all relevant documentation, patent indemnification, no trademark tricks, etc.). Considering Sutskever’s status as AI royalty who is also allegedly concerned about humanity, he would be the ideal person to pull this off.

If you can’t do that, then chances are you’re lying about your true motives. It’s really as simple as that.


Just noticed that the cropped image makes it look like he is doing a nazi salute and then the first sentence of their “manifesto” is “Superintelligence is within reach.” :)


I don’t consider tech company boardroom drama to be an indicator of anything (in and of itself). This is not some complex dilemma around morality and “doing the right thing”.

Is my take on their PR copytext unreasonable? Is my interpretation purely a matter of subjectivity?

Why should I buy into this “AI god-mommy” and “skynet” stuff? Guy can’t even provide a definition of “superintelligence”. Seems very suspicious for a “top mind in AI” (paraphrasing your description).

Don’t get me wrong, I am not saying he acts like a movie antagonist IRL, but that doesn’t mean we have any reason to trust his motives or ignore the long history of similar proclamations.


What do you mean by anti-commercial style? I am not from North America, but this seems like pretty typical PR copytext for local tech companies. Lots of pomp, banality, bombast, and vague assertions of caring about the world. It almost reads like satire at this point, like they’re trying to take the piss.

If his intentions are literal and clear, what does he mean by “superintelligence” (please be specific) and in what way is it safe?


This honestly looks like a grift to get a nice salary for a few years on VC money. These are not random sales goons peddling shit they don’t understand. They don’t even bother to define “superintelligence”, let alone what they mean by “safe superintelligence”.

I find it hard to believe this wasn’t written with malicious intent. But maybe I am too cynical and they are so used to people kissing their asses, that they think their shit doesn’t smell. But money definitely plays some role in this, they would be stupid to not cash in while the AI hype is hot.


Fascinating, I am not surprised at all.

Even beyond AI, some of the implicit messaging has got to strike a nerve with that kind of crowd.

I don’t think this is satire either, more like a playful rant (as opposed to a formal critique).


I don’t think it’s supposed to have a cohesive narrative structure (at least in context of a structured, more formal critique). I read the whole thing and it’s more like a longer shitpost with a lot of snark.


Pretty dystopian article.

But this will continue, until oligarchs like Altman, Cook, Nadella etc. start getting put into difficult situations; ones that create very strong incentives for them to show humanity (or at least emulate it).