• 0 Posts
  • 17 Comments
Joined 1Y ago
Cake day: Jun 30, 2023


You can scan before the encryption step. It defeats the purpose of the encryption, since the privileged actor gets plaintext while everyone downstream only gets encrypted bytes, but technically it’s possible.
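The flow being described is scan → encrypt → send, with the scanner as the one privileged component that ever sees plaintext. Here’s a minimal sketch assuming a hash-blocklist style scanner; every name here (`report_match`, the blocklist, the toy XOR “cipher”) is an illustrative placeholder, not any real system’s API:

```python
from hashlib import sha256

# Hypothetical blocklist of known-bad content hashes (placeholder data)
BLOCKLIST = {sha256(b"known bad content").hexdigest()}

def scan(plaintext: bytes) -> bool:
    """Client-side scan: runs on plaintext BEFORE encryption."""
    return sha256(plaintext).hexdigest() in BLOCKLIST

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    """Toy XOR stand-in for a real cipher; do not use for anything real."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))

def send_message(plaintext: bytes, key: bytes) -> bytes:
    if scan(plaintext):
        # Only this privileged hook ever sees plaintext; everyone
        # downstream of encrypt() sees ciphertext only.
        report_match(plaintext)
    return encrypt(plaintext, key)

def report_match(plaintext: bytes) -> None:
    # Stand-in for the privileged reporting channel
    print("match reported")
```

The point is that the encryption itself stays intact; the weakening happens entirely in the privileged pre-encryption hook, which is exactly the part that becomes the attack surface.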

It’s only a matter of time until a vulnerability in that privileged access is found and silently exploited by a nefarious monkey, and that’s precisely why adding backdoors should never be done.


Stop giving me Thermo nightmares; I lived through that shit already, I don’t need to sleep through it too.


How are you planning on handling the induced phase shifts due to the rapid polarity reversals that occur in the transgravitational electron flux arrays? I mean, this is a nonstarter if you can’t get that to work—the electropositron fields are going to decay too quickly to be useful otherwise and the quite-expensive phosphokinesis-generator will be wasted.


Oh that was such an evil trick. I liked prox mining the bottom of the vertical sliding doors in that one level that looked like a stone temple. Or the grates in Bunker cuz the mines are nearly invisible on those.


No Oddjob!

That was the standard house rule in my circle of friends. We hated mines too, but allowed them. But no fucking Oddjob.


I’m an AI Engineer, been doing this for a long time. I’ve seen plenty of projects that stagnate, wither and get abandoned. I agree with the top 5 in this article, but I might change the priority sequence.

Five leading root causes of the failure of AI projects were identified

  • First, industry stakeholders often misunderstand — or miscommunicate — what problem needs to be solved using AI.
  • Second, many AI projects fail because the organization lacks the necessary data to adequately train an effective AI model.
  • Third, in some cases, AI projects fail because the organization focuses more on using the latest and greatest technology than on solving real problems for their intended users.
  • Fourth, organizations might not have adequate infrastructure to manage their data and deploy completed AI models, which increases the likelihood of project failure.
  • Finally, in some cases, AI projects fail because the technology is applied to problems that are too difficult for AI to solve.

4 & 2 → 1: IF they even have enough data to train an effective model, most organizations have no clue how to handle the sheer variety, volume, velocity, and veracity of the big data that AI needs. Handling that is a specialized engineering discipline (data engineer). Let alone how to deploy and manage the infra that models need; a specialized discipline has emerged to handle that aspect too (ML engineer). Often they sit at the same desk.

1 & 5 → 2: Stakeholders seem to want AI to be a boil-the-ocean solution. They want it to do everything and be awesome at it. What they often don’t realize is that AI can be a really awesome specialist tool that really sucks on scenarios it hasn’t been trained on. Transfer learning is a thing, but that requires fine-tuning and additional training. Huge models like LLMs are starting to bridge this somewhat, but at the expense of the really sharp specialization. So without a really clear understanding of what AI can do really well, and perhaps more importantly, what problems are a poor fit for AI solutions, of course they’ll be destined to fail.

3 → 3: This isn’t a problem with just AI; it’s all shiny new tech. Standard Gartner hype cycle stuff. Remember how they were saying we’d have crypto-refrigerators back in 2016?



Fuck, does this mean LibreOffice might get actual sponsorship, funding, and organizational support? And not be a buggy steaming pile of shit that crashes my computer every ten minutes???

An engineer can dream, right?

I hate spreadsheet and slide deck days. Please oh universe help me get back to my happy place: codeland.



Because starting with ‘X’ does not guarantee the ‘sh’ sound. See ‘xylophone’, ‘Xavier’, ‘Xenon’.

Xitter looks like ‘exiter’ to me.


Would you rather have 100,000 kg of tasty supreme pizza, or 200 kg of steaming manure?

Choose wisely.


Also by design. Tech companies collude like this all the fucking time.




The enshittification will continue until morale improves. Fall in line, USER.


You, and nobody can stop them from doing so. It turns out that web UI technologies are very easily and conveniently usable for OS GUI features as well. Browsing a file system? Web UI. Navigating settings and configurations pages? Web UI.

And these browsers are open-source. Chromium. Edge is a derivative of Chromium, and so is Chrome. The fact that Google controls the Chromium upstream matters not at all, because anyone is free to fork it and modify it to their needs.

Freedom is a double-edged sword, but this is many times better than locked-in proprietary software.


Well, Twitter, Spotify, and Netflix are all standard system design/architecture case studies and interview questions. Pretty sure Twitter has been reinvented like 300,000 times in various iterations. It’s not exactly Coca-Cola’s recipe.