He sort of invented it, so you have to think he’s commenting on the concept here, not the implementation.
I have tried a lot of medium and small models, and there is just no good replacement for the larger ones for natural text output. And they won’t run on device.
Still, fine-tuning smaller models can do wonders, so my guess would be that Apple Intelligence is really 20+ small, fine-tuned models that kick in based on which action you take.
The concept is useful. A well-known capture of the idea is the famous “As We May Think” article by Vannevar Bush all the way back in 1945, which conceptualized a machine, the “Memex”, that would enhance human capabilities such as memory and recall. A lot of humans need help with this and use devices for it daily: notes, map lookups of where you parked, Find My for devices, analytics for photo libraries, etc.
The only issue here is the implementation.
Just use https://www.summarize.tech/, and if the transcript is too long, send it to a summarizer like Kagi.
The Internet is actually very fine and alternatives to the big guys will keep popping up.
For tracking in general there are several options: Pi-hole, AdGuard, and NextDNS on the DNS level; the Firefox and Orion browsers; and VPNs and services like Proton and Mullvad.
For search I’ve been fairly happy with DuckDuckGo for some years, but now swear by Kagi.
What is gone are the early days of the seventies / early eighties, with free servers at universities accessible to anyone. It doesn’t scale.
Various models tried to figure it out until we got what we’ve had for the last 10 years, “free” services where you are the product.
What you won’t get going forward is free services that give you what you want without also tracking you, collecting data on you, and using it for ads.
What you can get is high quality services that you choose to pay for.
For now, a fair number of them are niche and somewhat expensive. Hopefully that will expand to give us fairly broad service coverage from providers that are mostly crowdfunded and open.
While the root issue was still unknown, we actually wrote one. It sort of made sense: check that `date.from` isn’t later than `date.to` in the generated range used for the synchronization request. Obviously. You never know what some idiot future coder (usually yourself some weeks from now) would do, am I right?
However, it was far worse to write the code that fulfilled the test. In the very same few lines of code, we fetched the current date from `time.now()` plus some time span as `date.to`, fetched the last synchronization timestamp from the DB as `date.from`, and then validated that `date.from` wasn’t greater than `date.to`, logging an error if it was.
The validation code made no logical sense when you looked at it.
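A minimal sketch of the shape of that code, in Python (all names here, like `SYNC_WINDOW` and `build_sync_range`, are hypothetical reconstructions, not the actual system’s code):

```python
from datetime import datetime, timedelta

SYNC_WINDOW = timedelta(minutes=5)  # hypothetical "some time span" added to now


def build_sync_range(last_sync: datetime, now: datetime) -> tuple[datetime, datetime]:
    """Build the (from, to) date range for the next synchronization request."""
    date_from = last_sync        # last synchronization timestamp from the DB
    date_to = now + SYNC_WINDOW  # current time plus some time span
    if date_from > date_to:
        # Looks impossible when read in isolation: date_from was persisted
        # by a *previous* run, so it "must" be in the past. It isn't, if the
        # server clock jumped into the future and back in between runs.
        raise ValueError(f"inverted sync range: {date_from} > {date_to}")
    return date_from, date_to
```

Read top to bottom, the `if` looks dead; it only ever fires when the persisted timestamp came from a clock that was ahead of the current one.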
This bug wreaked havoc for me. We had a “last synchronized” timestamp persisted to a DB so that the system could robustly deal with server restarts / bootstrapping in new environments.
The synchronization was used to continuously fetch critical incidents and visualize them on a map. The data came through a third-party API that broke down if we asked for too much data at a time, so we had to reason about when we fetched data last time, and only ask for new updates since then.
Each time the synchronization ran, it would persist an updated timestamp to the DB.
Of course this routine ran just as the server jumped several months into the future for a few minutes. After this, the last-run timestamp was some time next year. Subsequent runs of the synchronization routine never found any updates, as the date range they asked for didn’t really make sense.
It just ran successfully without finding any new issues. We were quite happy about it. It took months before we figured out we actually had a major discrepancy in our visualization map.
We had plenty of unit tests, integration tests, and system tests. We just didn’t think of having one that checked whether the server had time traveled to the future or not.
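Such a check could have been a one-liner. A hedged sketch of what that missing test might look like (hypothetical names, assuming the persisted timestamp is available as a timezone-aware `datetime`):

```python
from datetime import datetime, timedelta, timezone


def last_sync_is_sane(last_sync: datetime,
                      tolerance: timedelta = timedelta(minutes=5)) -> bool:
    """Return False if the persisted 'last synchronized' timestamp is in the future.

    A future timestamp means every later sync range is empty, so the routine
    "succeeds" forever without fetching anything.
    """
    return last_sync <= datetime.now(timezone.utc) + tolerance
```

Run as a startup assertion or a scheduled health check, this would have flagged the time-traveled timestamp months earlier instead of letting the sync silently return nothing.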
I don’t know if it’s maybe a legal thing, or if they are technically required to do an official recall registered in some system even when they can actually solve it OTA.
I would suspect the rules around required recalls haven’t really been updated to reflect the extended range of issues that a vertical system integrator like Tesla can solve OTA.
Lemmy and Tildes have really shown me how much more interesting information is when it’s given by a real person with some context or an opening post.
A news article posted by a bot to… farm karma on Lemmy?? It’s just noise. Many such articles have clickbait titles, and the comments under them tend to have less value, based around uninformed opinions formed from the (often misleading) title alone.
Quality aside, I guess I don’t really see the point. To whose benefit is bot content posted? It feels like advertising to me. I’d like to think that the community is able to sustain itself on content someone cared enough about to bother posting here.
In short, bot-created content is noise to me, while content posted by real people has value.
The genie is out of the bottle. It was shown early on how you can use an AI like ChatGPT to create and enhance the datasets needed to train AI language models like ChatGPT. Now OpenAI says that isn’t allowed, but since it’s already been done, it’s too late.
Rogue AIs with specialized purposes will spring up en masse over the next six months, and many of them we’ll never hear about.
It’s not a distant future; the benefits are already here and increasing with each launch.
I’ve been tracking a sailboat crossing the Atlantic Ocean the past weeks, whose crew has been able to upload videos to YouTube every day, something that would be impossible without Starlink.
Of course, this specific use case isn’t important; I just used it to point out that Starlink is already working well.
That’s why it’s at the OS level. For example, for text it seems to work in any text app that uses the standard text input API, which Apple controls.
The user activates the “AI overlay” in the OS, not in the app; the OS reads the selected text from the app and sends text suggestions back.
The app is (possibly) unaware that AI has been used or activated, and has not received any user information.
Of course, if you don’t trust the OS, don’t use this. And I’m 100% speculating here based on what we saw for the macOS demo.