It's not lying or hallucinating. It's describing exactly what it found in the search results. There's a web page with that title from that date; the problem is that the web page is Pinterest and the title is the result of aggressive SEO. These SEO practices are what made Google largely useless for the past several years, and an AI built on those useless results will be just as useless.


How are Crashlytics and Firebase Analytics profiting off of users? I can't imagine not including them in an app you're actually hoping to improve.


Deep learning did not shift any paradigm; it's just more advanced programming. And gen AI is not intelligence, it's just really well-trained ML. ChatGPT can generate text that looks true and relevant, and that's its goal. It doesn't have to be true or relevant, it just has to look convincing, and it does. But there's no form of intelligence at play there, just advanced ML models taking an input and guessing the most likely output.
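
To make that last point concrete, here's a toy sketch of "guess the most likely next token", using a made-up corpus and simple bigram counts instead of a neural network. The principle, scaled up enormously, is the same:

```python
# Minimal sketch of "take an input, guess the most likely output".
# The corpus and word-level tokens are invented for illustration;
# real models use neural networks, not bigram counts.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    # Pick whichever continuation was seen most often in training.
    return follows[word].most_common(1)[0][0]

# Generate text one "most likely" token at a time.
word = "the"
out = [word]
for _ in range(5):
    word = most_likely_next(word)
    out.append(word)
print(" ".join(out))  # plausible-looking text, no understanding involved
```

The output looks like English because the statistics of English produced it, not because anything understood it.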

Here’s another interesting article about this debate: https://ourworldindata.org/ai-timelines

What we have today does not exhibit even the faintest signs of actual intelligence. Gen AI models don't actually understand the output they produce; that's why they so often contradict themselves. The algorithms will keep being fine-tuned to make fewer such mistakes, but that won't change what gen AI fundamentally is. You can't teach ChatGPT to play chess, or a new language, or music. The same base model can be trained to do one of those tasks instead of chatting, but that's not how intelligence works.


See the sources above, and many more. We don't need one or two breakthroughs, we need a complete paradigm shift. We don't even know where to start with AGI: there's a bunch of research, but nothing has really come out of it yet. Weak AI has made impressive leaps in the past few years, but the only connection between weak and strong AI is the name. Weak AI will not become strong AI as it continues to evolve; the two are completely separate avenues of research. Weak AI is still advanced algorithms. You can't get to AGI with just code. We'll need a completely new type of hardware for it.


https://www.lifewire.com/strong-ai-vs-weak-ai-7508012

Strong AI, also called artificial general intelligence (AGI), possesses the full range of human capabilities, including talking, reasoning, and emoting. So far, strong AI examples exist in sci-fi movies

Weak AI is easily identified by its limitations, but strong AI remains theoretical since it should have few (if any) limitations.

https://en.m.wikipedia.org/wiki/Artificial_general_intelligence

As of 2023, complete forms of AGI remain speculative.

Boucher, Philip (March 2019). How artificial intelligence works

Today’s AI is powerful and useful, but remains far from speculated AGI or ASI.

https://www.itu.int/en/journal/001/Documents/itu2018-9.pdf

AGI represents a level of power that remains firmly in the realm of speculative fiction as on date


That would be a danger if real AI existed. We are very far from that, and what is being called "AI" today (which is advanced ML) is not a path to actual AI. So don't worry, we're not heading for the singularity.


As a European, I grew up with PC Zone and PC Gamer (both from the UK). Just looked them up and I see PC Gamer is still running and has a US edition too.


To create a specific model and then reproduce that same exact model in different clothing and poses is not something a manager just did with an off-the-shelf, pre-trained Stable Diffusion solution. They might not have given a human model a gig, but they hired at least one full-time AI specialist.
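
For reference, this is roughly what "off-the-shelf" looks like with Hugging Face's diffusers library (the model id and prompt here are just examples, not what was actually used):

```python
# Hedged sketch of an off-the-shelf, pre-trained Stable Diffusion run
# using Hugging Face's diffusers library. Model id and prompt are
# examples only.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# Each call invents a new face; nothing ties one image's "model" to
# the next, no matter how carefully you word the prompt.
image = pipe("fashion model wearing a red coat, studio photo").images[0]
image.save("look_01.png")
```

Getting the same synthetic person across many shots takes fine-tuning techniques like DreamBooth or LoRA, which is exactly the kind of work that specialist would be doing.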




In my native country gigabit fiber internet is less than $9/mo. Broadband prices in the US are absolutely ridiculous.


If you want something to last 1,000 years, you design it to last 10,000. In 1,000 years, the descendants of the ultra-rich will come out of their bunkers with the technology to read these chips.


And the results are clearly different. There are people who replaced artists with Photoshop, there are people who replaced artists with AI, and each new tool will further empower people to try things on their own. If those results are good enough for them, they probably wouldn't have paid for a good artist anyway.


AI is supposed to work with human input. AI is a tool for the artist, not a replacement for the artist. The human artist is the one calling the shots, deciding when the final result is good or when it needs improvement.


Sampling music literally places parts of that music in the final product. Gen AI does not place pieces of other people's art in the final image; in fact, it doesn't store any image data at all. Using an image in the training data is akin to an artist including that image on their moodboard, except the AI's moodboard has far more images, and the odds of the result being too similar to any single image are lower than when a human does it.
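
A toy illustration of that point: training adjusts a fixed set of weights and keeps nothing else. Everything below (shapes, data, learning rate) is invented for the sketch, and a real model is vastly larger, but the principle holds:

```python
# Toy illustration: training nudges a fixed set of weights; it doesn't
# archive the training images. Shapes, data, and learning rate are
# invented for this sketch.
import numpy as np

rng = np.random.default_rng(0)
images = rng.random((1_000, 32 * 32))  # pretend training set (1,024,000 numbers)
targets = rng.random((1_000, 1))

weights = np.zeros((32 * 32, 1))       # all the model ever keeps (1,024 numbers)

for _ in range(100):                   # crude gradient descent
    grad = images.T @ (images @ weights - targets) / len(images)
    weights -= 0.001 * grad

del images                             # the training data can now vanish;
print(weights.shape)                   # the "model" is just these weights
```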


Are you using Tor or a VPN? Sharing the same IP with thousands of other people is exactly what leads to getting captchas every time you visit a site. Most sites use Cloudflare or other CDNs, and when they see the same IP making tons of requests every second, they flag it as a potential bot IP and issue the captcha challenge.
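
The heuristic looks roughly like this sketch. Real CDNs combine many more signals, and the window and threshold here are invented, but it shows why thousands of users behind one exit IP all get challenged:

```python
# Hedged sketch of a per-IP rate heuristic. WINDOW_SECONDS and
# MAX_REQUESTS are made-up values; real systems use many more signals.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10
MAX_REQUESTS = 100           # beyond this, serve a captcha instead

recent = defaultdict(deque)  # ip -> timestamps of recent requests

def should_challenge(ip: str) -> bool:
    now = time.monotonic()
    q = recent[ip]
    q.append(now)
    # Drop timestamps that fell out of the sliding window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    return len(q) > MAX_REQUESTS

# One household's IP stays under the limit; thousands of people
# sharing a single Tor/VPN exit IP blow right past it.
for _ in range(5000):
    challenged = should_challenge("198.51.100.7")  # example shared exit IP
print(challenged)  # True
```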


Typewriters are also irrelevant today; it was an analogy. I agree that AI can be used in some evaluations, depending on what you're evaluating.

I allow and encourage Googling for information when I interview software engineering candidates. I don't consider it "cheating"; on the contrary, being able to unblock themselves is one of the skills they should have. They will be using external help when doing their job, so why should the test be any different?

That also reminds me that I once had a candidate using generative AI in a coding interview. It did feel like cheating when it was at the level of asking for the full solution, not just help getting unblocked. It didn't help, though, because the candidate didn't have the skill to tell the good suggestions from the bad ones, or to know what to iterate on.


AI is a tool that can indeed be of great benefit when used properly. But using it without comprehending and verifying the source material can be downright dangerous (like those lawyers citing fake cases). The point of the essay/exam is to test comprehension of the material.

Using AI at this point is like using a typewriter in a calligraphy test, or autocorrect in a spelling and grammar test.

Although asking for handwritten essays does nothing to combat the use of AI. You can still generate the content and then transcribe it by hand.