• 0 Posts
  • 15 Comments
Joined 1Y ago
Cake day: Jul 23, 2023


I used to use FL Studio but hated using Windows. I got almost all features (including VSTs) working in Ubuntu under Wine, but ran into problems with WineASIO, which I seemed to need in order to use my USB sound card properly.

Because of that, I switched to a DAW called REAPER, which has a native Linux build, works flawlessly, and is very nice. There is a program called yabridge to help run Windows VSTs; I even got more complicated plugins with authentication, like Addictive Drums 2, working under Wine no problem.

If you want a fully FOSS solution, there is Ardour, which is also great but a little less slick than REAPER, IMO.


The advances in LLMs and diffusion models over the past couple of years are remarkable technological achievements that should be celebrated. We shouldn’t be stifling scientific progress in the name of protecting intellectual property; we should be keen to develop the next generation of systems that mitigate hallucination and achieve new capabilities, as proposed in Yann LeCun’s Autonomous Machine Intelligence concept.

I can sorta sympathise with those whose work is “stolen” for use as training data, but really, whatever you put online in any form is fair game to be consumed by any kind of crawler or surveillance system, so if you don’t want that, don’t put your shit in the street. This “right” to be omitted from training datasets directly conflicts with our ability to progress a new frontier of science.

The actual problem is that all this work is undertaken by a cartel of companies with a stranglehold on the compute power and resources needed to crawl and clean all that data. As with all natural monopolies (transportation, utilities, etc.), it should be undertaken for the public good, in such a way that we can all benefit from the profits.

And the millionth argument quibbling about whether LLMs are “truly intelligent” is a totally orthogonal philosophical tangent.


Since the forces that determine policy are largely tied up with corporate profit, with promoting the interests of domestic companies against those of other states, and with access to resources and markets, our system will misuse AI technology whenever and wherever those imperatives conflict with the wider social good. As is the case with any technology, really.

Even if “banning” AI were possible as a protectionist measure for those in white-collar and artistic professions, I think it would ultimately be unpopular with the ruling classes, since it would concede ground to rival geopolitical blocs who are in a kind of arms race to develop the technology. My personal prediction is that people in those industries will just have to roll with the punches and accept AI encroaching on their space. This wouldn’t necessarily be a bad thing if society made the appropriate accommodations to retrain them and/or otherwise redistribute the dividends of this technological progress. But that’s probably wishful thinking.

To me, one of the most worrying trends, as the technology has gained popularity in the public consciousness over the last year or two, has been the tendency to silo it within large companies and build “moats” to protect it. What was once an open and vibrant community, with strong principles of sharing models, data, code, and peer-reviewed papers full of implementation details, is increasingly tending towards closed-source productized software, with the occasional vague “technical report” that reads like an advertising spiel. IMO one of the biggest things we can lobby for is openness and transparency in the field, to guard against the natural monopolies and perverse incentives of hoarding data, technical know-how, and compute power. Not to mention the positive externality spillovers of the open-source scientific community refining and developing new ideas.

It’s similar to how knowledge of the atomic structure gave us the ability both to destroy the world and to power it (relatively) cleanly. Knowledge itself is never a bad thing; only what we choose to do with it can be.


I take your point, but in this specific application (synthetically generated influencer images) it’s largely something that falls out for free from a wider stream of research (namely Denoising Diffusion Probabilistic Models). It’s not like it’s really coming at the expense of something else.
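
For context, the core DDPM recipe is surprisingly simple: progressively corrupt an image with Gaussian noise, then train a network to predict that noise so the process can be run in reverse. Here’s a rough, purely illustrative sketch of the closed-form forward (noising) step, assuming the linear schedule from the original Ho et al. paper; the `x0` array is just a hypothetical stand-in for a normalized image:

```python
import numpy as np

T = 1000                             # number of diffusion steps
betas = np.linspace(1e-4, 0.02, T)   # linear noise schedule beta_t
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)       # abar_t = product of (1 - beta_s) up to t

def q_sample(x0, t, rng=None):
    """Sample x_t ~ q(x_t | x_0) directly:
    x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps, with eps ~ N(0, I)."""
    rng = rng or np.random.default_rng()
    eps = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return x_t, eps

# A network eps_theta(x_t, t) is trained to predict eps with an MSE loss;
# generating an image then runs the learned reverse process from pure noise.
x0 = np.random.default_rng(0).standard_normal((3, 64, 64))  # fake "image"
x_t, eps = q_sample(x0, t=500)
```

Once you have that denoiser, conditioning it on text, poses, or robot states is a comparatively small step, which is why applications like this “fall out for free”.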

As for what it’s eventually progressing towards, who knows… It has proven to be quite an unpredictable and fruitful field. For example, Toyota’s research lab recently created a very inspired method of applying diffusion models to robotic control, which I don’t think many people were expecting.

That said, there are definitely societal problems surrounding AI, its proposed uses, legislation regarding the acquisition of data, etc. Oftentimes markets incentivize its use for trivial, pointless, or even damaging applications. But IMO it’s important to note that this is the fault of the structure of our political economy, not the technology itself.

The ability to extract knowledge and capabilities from large datasets with neural models is truly one of humanity’s great achievements (along with metallurgy, the printing press, electricity, digital computing, networked communication, etc.), so the cat’s out of the bag. We just have to try and steer it as best we can.


The Japanese SCMaglev only needs its cryogenic cooling equipment on the train, where the superconducting magnets are, not along the entire length of the track.

And I think there is a “high-temperature SC Maglev” in development in China too.



Free speech online doesn’t even seem to be a particularly well-defined concept. Those who extol it the loudest are often looking to have the millionth “good faith discussion” about The Bell Curve, or use slurs as “just a joke”, or promote a “dating and lifestyle coaching” business to teenage boys. If all they want is carte blanche to say absolutely anything without being censored, I guess they only need to spin up a web server of their own, or run a Lemmy instance. But what they actually want is to bypass the moderation rules on widely-used platforms and shit on the social contract. It’s the same reason they don’t show pornography, snuff footage, or other damaging content on television.


Does anyone else kinda miss when YouTube was more informal, more random, less edited, and more janky? Nowadays everybody has a title card, a two-minute intro greeting, a high-end camera setup, and a tightly rehearsed script. It’s like they all decided to recreate the unnecessary bloat and ceremony of classical television for the sake of “appearing professional” or something.

For example, a tutorial doesn’t need to begin with a “Hey guys, it’s your pal ASDFGHJKL. Have you ever got your foreskin trapped in a whatever and yada yada yada? Well today I’m gonna show you how to blah blah blah. Now let’s get into the video. But first a word from our sponsor Lockheed Martin…”

What’s with the “today”? I’m always watching it “today” by definition. And I wouldn’t have clicked it if I wasn’t in that particular predicament. Why not just immediately start showing the solution?


If you know of a better ML-related instance than sigmoid.social, let me know, but none of those influential figures I mentioned post there, and the discussion is pretty much non-existent.


I joined Twitter fairly recently, as Machine Learning Twitter is/was a thing and I wanted to stay abreast of news from people like Andrej Karpathy, Chris Olah, Andrew Ng, etc., especially since r/MachineLearning went down the shitter.

But I can’t even: I log on and just instantly see ragebait posts from Daily Mail talking heads and other bullshit.

Are there any better alternatives for this purpose?


TFW cute pictures: 😱 😠 🤬 😤

TFW sanctimonious drivel: 😌 👍 😍 🥳


👍good👍 idea💡 bro 💪 fuck🤬 😤😤😤 emogys🫠 they ruin💩 the 😌sanctity🙏 of 🧑‍💻online🌍 discourse🗣️ and∧ 😩debase😈 👩‍👩‍👦‍👦us👥 all∀

→If➡️ I 👁️ ever see👀 another🫴 🍑emojee💯 I’m ⏰gonna🪬 💦💦💦 🍆 ⚰️ 🚾 ⚠️ ☯️🅱️ 😎😎😎