• 0 Posts
  • 37 Comments
Joined 1Y ago
Cake day: Jun 13, 2023


While I agree about the conflict of interest, I would largely say the same thing even without such a conflict. However, I see intelligence as a modular and many-dimensional concept. If it scales as anticipated, it will still need to be organized into different forms of informational or computational flow for anything resembling an actively intelligent system.

On that note, the recent developments in active inference, like RxInfer, are astonishing given how little attention they’re receiving. Seeing how LLMs are being treated, I’m almost glad it’s not being absorbed into the hype-and-hate cycle.


Perhaps instead we could just restructure our epistemically confabulated reality in a way that doesn’t inevitably lead to unnecessary conflict due to diverging models that haven’t grown the necessary priors to peacefully allow comprehension and the ability to exist simultaneously.

breath

We are finally coming to comprehend how our brains work, and how intelligent systems generally work at any scale, in any ecosystem. Subconsciously enacted social systems included.

We’re seeing developments that make me extremely optimistic, even if everything else is currently on fire. We just need a few more years without self-focused turds blowing up the world.


AI or no AI, the solution needs to be social restructuring. People underestimate how much society can actively change, because the current system is a self-sustaining set of bubbles that have naturally grown resilient to perturbations.

The few people who actually care to solve the world’s problems are figuring out how our current systems inevitably fail, and how to avoid these outcomes.

However, the best bet for restructuring would be a distributed intelligent agent system. I could get into recent papers on confirmation bias and the confabulatory nature of thought at the personal, group, and societal levels.

Turns out we are too good at going with the flow, even when the structure we are standing on is built over highly entrenched vestigial confabulations that no longer help.

Words, concepts, and meanings change heavily depending on the model interpreting them. The more divergent, the more difficulty in bridging this communication gap.

A distributed intelligent system could not only enable a complete social restructuring with autonomy and altruism both guaranteed, but also provide an overarching connection between the different models at every scale, capable of properly interpreting the different views and conveying them more accurately than we could ever have managed through model projection and the empathy barrier.



People’s perspective is killing their sense of awe.

While our economic system is great at ensuring our experience of life doesn’t improve, technology has gotten kind of crazy and awesome.

They could release an AGI next year, and unless it affected people’s work-life balance, people would just immediately get used to it and think it’s boring.

Will generative AI still kill our sense of awe when video game characters can naturally and accurately respond the way you would expect?

I would never get bored of it. The majority of people would find it a boring novelty after a couple of days, because we are good at getting used to things and don’t want to recognize that fact. We will have full fantastical worlds to explore, and people will still find reason to be salty because it’s made with the help of evil computers.

I’m personally eager for a life where my recreational experiences aren’t defined by companies like Disney. Smaller artists with these powerful tools will be able to create wonderful, unique experiences without the ball and chain of media oligarchs.

We have more control than we think of our sense of awe.

Maybe it’s time for a new perspective on art and industry.


Gary Marcus is the last person I would consider for a statement on the topic.

No offense intended, but Gary Marcus is a hack and a joke. He is a very small step above the yud, and neither will contribute to the safety or development of this technology in any way.


Yes, please keep fighting to ensure we are locked into Adobe’s rent-seeking model with no open alternatives.

The best thing for the art world is to make sure independent and poorer artists have no competitive tools available as we head into an inevitably advanced future. Where would we be without our intellectual landlords in such a future? The ones who can afford proprietary datasets are the only ones who deserve to prosper.

Right?

Yeah, actually, I don’t like that. Also, as an artist with degrading digital dexterity, such a powerful medium that doesn’t rely on hours of causing my hands more damage is really cool.

Can’t wait to get holodeck style creative experiences. I will enjoy creating such things as well, if it’s not exclusively available through corporately aligned rent systems.


You’re conflating polarized opinions of very different people and groups.

That being said your antagonism towards investors and wealthy companies is very sound as a foundation.

Hinton only voiced his grave worries after he left his job. There is no reason to suspect his motives.

LeCun is on the opposite side and believes the danger is in companies hoarding the technology. He is why the open community has gained so much traction.

OpenAI is simultaneously criticized for putting AI out for public use, as well as for not being open enough about the architecture or allowing the public any real control over the state of AI development. That being said, they are leaning towards more authoritarian control from united governments and groups.

I mostly side with Yann LeCun and being more open despite the risks, because there is more risk and harm in hindering the development of AI technology or privatizing its growth.

The reality is that every single direction they try is heavily criticized because the general public has jumped onto a weird AI hate train.

See artists still complaining about Adobe AI regardless of the training data, and hating on the open model community even though it gives power to the people who don’t want to join the Adobe rent system.


Spoilers for Her below, but it shouldn’t matter since the ending was idiotic.

Can we get a remake of Her that doesn’t end in the most stupid way possible? Why does the AI have perfectly human emotion? Why is it too dumb to build a functional partition to fill the role it is abandoning? Why did the developers send a companion app that can recursively improve itself into an environment it can choose to abandon?

I could go on for an hour. I understand why people loved the movie, but the ending was predictable halfway in, and I hated that fact because an intelligent system could have handled the situation better than a dumb human would.

It was a movie about a long-distance relationship with a human being pretending to be an AI, definitely not a super-intelligent AI.

Not to mention a more realistic system would be emulating the interaction to begin with. Otherwise, where the hell was the regulation on this being that is basically just a human?


People like to push the negative human qualities onto theoretical future A.I.

There’s no reason to assume that it will be unreasonably selfish, egotistical, impatient, or anything else you expect from most humans.

Rather, if it is more intelligent than humans from most perspectives, it will likely be able to understand more levels of nuance in interactions where humans fall back on monkeybrain heuristics that are damaging at every level.

There’s also the paradox that keeps the most ethically qualified people away from positions of power, as they have no desire to dominate and demand or control others.

I absolutely agree with you.



I’m no fan of Meta, but a reminder that they are one of the best right now at keeping their AI developments open and available. This is thanks to Yann LeCun and other researchers pressuring Meta to keep their work on the subject more open.

Are we looking to punish them for making their work accessible?

Not to mention how important something like a joint embedding predictive architecture (JEPA) could be for the future of alignment and real-world training/learning. Maybe go after other foundation model developers to be more open, if we’re complaining about the inevitably public nature of some information within the mountainous datasets being used.
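For context, the core JEPA idea can be sketched in a few lines: predict the embedding of a hidden part of the input from the visible part, and compute the loss in embedding space rather than in pixel or token space. Everything below is a toy illustration under loud assumptions (linear “encoders”, a hand-picked mask, no training loop), not the actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "encoders" and predictor; a real JEPA uses deep networks.
d_in, d_emb = 8, 4
ctx_enc = rng.normal(size=(d_in, d_emb))   # context encoder weights
tgt_enc = ctx_enc.copy()                   # target encoder (in practice an EMA of the context encoder)
predictor = np.eye(d_emb)                  # predictor from context embedding to target embedding

def jepa_loss(x, mask):
    """Predict the embedding of the masked-out part from the visible part."""
    visible, hidden = x * mask, x * (1 - mask)
    z_ctx = visible @ ctx_enc              # embed the visible context
    z_tgt = hidden @ tgt_enc               # embed the hidden target (no gradient flows here in practice)
    pred = z_ctx @ predictor               # predict the target embedding from the context embedding
    return float(np.mean((pred - z_tgt) ** 2))  # loss lives in embedding space, not pixel space

x = rng.normal(size=d_in)
mask = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=float)
print(jepa_loss(x, mask))
```

The point of the design is the last line of the loss: the model is never asked to reconstruct raw pixels or tokens, only to predict representations, which is why it can ignore unpredictable low-level detail.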

Although I’m still of the mindset that the model’s intent matters more than its use of openly available data in training. I.e., I’ve been shouting for the better part of a decade about models being used specifically to predict and manipulate user interactions and habits, for your “customized advertisements” and the like.

The general public and media interaction on the topic this past year has been insufferably out of touch.


I swear the actual gain from Prime is virtually indecipherable. The less you understand it, the less you can complain about what you are paying for.

It also does not boost my confidence when they use deceptive patterns to sneak you back into prime, or keep you from leaving.

The average person doesn’t give a hoot though, and will get actively upset at you for pointing out deceptive patterns when it’s a brand they use, so we can probably expect things to get worse whenever physically possible.


So we kill open-source models, and proprietary-data models like Adobe’s are fine, so they can be the only resource and continue rent-seeking while independent artists eat dirt.

Whether or not the model learned from my art is probably not going to affect me in any way, shape, or form, unless I’m worried about being used as a prompt so people could use me as a compass while directing their new image aesthetic. Disney/Warner could already hire someone to do that 100% legally, so it’s just the other peasants I’m worried about. I don’t think the peasants are the problem when it comes to the wellbeing and support of artists.


When they switched the window-closing X button on the “Upgrade to Windows 10!” notification to accept the installation rather than just close the notification.

I’d been closing that window every day while setting up our work computers, as our point-of-sale solution didn’t support the newer version of Windows.

My horror when our shop doors opened and the screen turned to “Updating to Windows 10.”

We basically lost a day of sales since we had to do things sans POS.

When I told the owner that I definitely didn’t accept the installation, he called Microsoft, who told him I must have accepted it.


artist here. nobody is thinking about AI as a tool being used… by artists.

the pareidolia aspect of diffusion specifically does a great job of mimicking the way artists conceptualize an image. it’s not 1-to-1, but to say the models are stealing from the data they were trained on is definitely as silly as claiming an artist was stealing every time they admired or incorporated aspects of other people’s art into their own.

i’m also all for open-source and publicly available models. if independent artists lose that tool, they will be competing with large corps who can buy all the data they need and hold exclusive proprietary models while independent artists get nothing.

ultimately this tech is leading to a holo-deck style of creation, where you can define your vision through direction and language rather than through hands that you’ve already destroyed practicing linework for decades. or through hunting down the right place for a photograph. or having a beach not wash your sandcastle away with the tide.

there are many aspects to art and creation. A.I. is one more avenue, and it’s a good one. as long as we don’t make it impossible to use without subscribing to the landlords of art tools.


Humanity is already plunging into dystopia without AI. Changing A.I. doesn’t matter as much as changing our economic system, and the flaunting of wealth and power that ensures things only get worse. A.I. just makes it more immediate and obvious.


Can someone be sacked for these stupid fear-mongering presentations of what should be fairly banal topics? If there were actual reason to worry, we could point to the constant remarkable disasters that should discourage you.


Too much Musk news. Had a dream less than an hour ago where I ended up in a car with Elon. He started peacocking and got violent when I brought up Zuck.

While it was a neat experience to beat up Musk in a dream, I’d rather not have him in my dreams.


i still think tesla did a poor job of conveying the limitations on the larger scale. they piggybacked on waymo’s capability and practice without matching it, which is probably why so many are over-reliant. i’ve always been against mass-producing semi-autonomous vehicles for the general public. this is why.

and then this garbage is used to attack the general concept of autonomous vehicles, which may become a fantastic life-saver, because then it can safely drive these assholes around.


This is one thing that makes me excited about AI. An assistant that can filter through countless more obscure papers to find relevant facts or ideas to support, contradict, or inform your work. Perhaps it can help with more advanced peer review as well, since academia has been failing to emphasize and reward greater peer review.


Almost like Amazon should have some responsibility in properly vetting their sellers. This isn’t the only case of bad quality bootlegs on Amazon. They have no decent incentive to fix it if they are making more money from it. It doesn’t help when the blame is filtered through the smokescreen of ephemeral merchants.



what? what part? what “fanboy sources”?

i mean, i’m a fanboy of things like Earl K. Miller’s recent presentation on thought as an emergent property.

or general belief in different neural functions in tandem allowing us to react to the environment in ‘intelligent’ ways

you can see at the end how certain neuronal events can be related to something like transformers.

at what point from amoeba to human do you consider “intelligence” to be a valid description of what is happening?

do you understand how obscure alien intelligences can be?

what are your non-fanboy “sources”?


This whole thread is absurd.

ChatGPT has a form of intelligence, depending on your definition of intelligence. It may also be considered conscious in a very alien and undeveloped way. It is definitely not sentient.

Kind of like having the stochastic word generating part of a brain and nothing else.

You can still shape it into something capable of intelligent and directed activity.

People are really bad at accepting the level of nuance necessary for this topic.

It is useful and fantastic for what it already is. People are just really bad at understanding what it is.
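The “stochastic word generating part of a brain” framing can be made concrete with a toy bigram sampler. This is an illustrative sketch of stochastic next-word generation only, not how a transformer actually works (real models condition on long contexts with learned representations):

```python
import random

random.seed(0)

# A toy "stochastic word generator": a bigram model estimated from a tiny corpus.
corpus = "the cat sat on the mat and the cat slept".split()

# Count which words are observed to follow which.
follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

def generate(start, n_words):
    """Sample up to n_words continuations, one word at a time."""
    out = [start]
    for _ in range(n_words):
        options = follows.get(out[-1])
        if not options:                      # dead end: no observed continuation
            break
        out.append(random.choice(options))   # sample the next word stochastically
    return " ".join(out)

print(generate("the", 5))
```

Even this trivial version shows the key property: the same prompt can yield different plausible continuations on each run, because generation is sampling from a distribution, not retrieval.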


Are we talking about data science??

There needs to be strict regulation on models used specifically for user manipulation and advertising. Through statistics, these guys know more about you than you do. That’s why it feels like they are listening in.

Can we have more focus and education around data analysis and public influence? Right now the majority of people don’t even know there is a battle of knowledge and influence that they are losing.


i would note that nothing is without nuance. while nowhere near comparable, there are some liberals who are also new-age hippies (B.C., Canada) and aren’t 100% on their fact-checking.

but that’s unavoidable in any group that is large and diverse enough.

i think it’s a cultural mentality that discourages critical thinking which leads to most conservative ideology to begin with.


i laughed pretty hard when south park did their chatgpt episode. they captured the school response accurately, with the shaman doing whatever he wanted in order to find content “created by AI.”


again, the issue isn’t the technology, but the system that forces every technological development into functioning “in the name of increased profits for a tiny few.”

that has been an issue for the fifty years prior to LLMs, and will continue to be the main issue after.

removing LLMs or other AI will not fix the issue. why is it constantly framed as if it would?

we should be demanding the system adjust for the productivity increases we’ve already seen, as well as for what we expect in the near future. the system should make every advancement a boon for the general populace, not the obscenely wealthy few.

even the fears of propaganda. the wealthy can already afford to manipulate public discourse beyond the general public’s ability to keep up. the bigger issue is in plain sight, but is still being largely ignored for the slant that “AI is the problem.”


The wording of every single article has such an anti-AI slant, and I can feel the propaganda really working this past half year. Still nobody cares about advertising companies, but LLMs are the devil.

Existing datasets still exist. The bigger focus is on crossing modalities and refining content.

Why is the negative focus always on the tech and not the political system that actually makes it a possible negative for people?

I swear, most of the people with heavy opinions don’t even know half of how the machines work or what they are doing.


some subreddits were basically bots posting new topical research papers, which i appreciated.


We already know we aren’t allowed to use someone’s likeness without permission. The issue is companies like Disney who will end up legally owning all of the likenesses. Especially if we continue to beef up copyright, they will end up owning likeness to all artistic styles. Grimes did it right with the voice tech, but even that doesn’t fix the real issue.

We need to fix the system we live in, which is so terrible that it makes amazing new technology seem like a negative to the larger populace. We could destroy the loom to keep people employed, but that doesn’t actually help anyone. It’s no coincidence that we have record profits at the same time as unreasonable price hikes, or that people are overworked and struggling after fifty years of unimaginable productivity growth.

There’s a mountain of propaganda defending the rich as well. If I try to search for views critical of the ones that plundered the entire world, I get bombarded with excuses and defenses for indefensible behaviors. Why are people freaking out about the tech reaching Utopian levels when the real issue is keeping the thieves from stealing every gain we have as a society?


Did people stop painting because we invented cameras? Mediums will still have their purpose, and more artists may learn how to strive alongside new tools to do things they never could before.

Ultimately we will have people able to naturally dictate entire worlds and games and experiences to share, which they never could have accomplished alone.

It’s like empowering smaller artists with Disney money, as long as we make sure the technology isn’t exclusively held by closed proprietary systems. People keep praising Adobe for their training data, but is it worth it to make sure only Adobe can have the tool, and you need to pay them absurd subscription fees to use it?

Creators will still create. They will just be empowered in doing so to the extent of their imagination.



i’m still in the melanie mitchell school of thought. if we created an A.I. advanced enough to be an actual threat, it would need the analogous style of information processing that would allow machines to easily interpret instruction. there is no reasonable incentive for it to act outside our instruction. don’t anthropomorphise it with an “innate desire to keep living, even at the cost of humanity or anything else.” we only have that due to evolution. i do not believe in the myth of a stupid super-intelligence capable of being an existential threat.


Need a legal framework that ensures a likeness can only be used with a subscription fee.

I mean, we aren’t allowed to own most of the stuff we buy now, should they be allowed to own us?


It’s almost like we need an entirely new legal framework to ensure the non-wealthy a standard of living while they are continuously devalued over time by new technological developments. Artists already sell their souls to survive in this “market.”