• 1 Post
  • 9 Comments
Joined 1Y ago
Cake day: Jul 14, 2023


I would stand behind the idea of splitting Google into its separate branches with no shared assets. Basically, Google Search becomes a separate corporation, as do Google AI, Google Web Services, Google Ad Services, YouTube, etc. This would hopefully undo some of the web's enshittification, since the most powerful company on the web would now have to actually offer a good product for profit instead of compensating for a bad product with a more profitable one.


The whole point of having ads be separate from the video is for YouTube to easily distance itself from malicious ads. If an ad is malicious, it can easily be reported and taken out of commission. But if ads are now part of the video, what stops an ad from being an ISIS beheading clip in the middle of a video made for children? And if there is still a way to report it, then there is a way to recognize the ad.

Also, how will this interfere with creators? Editing a video and giving it a proper pace is already a huge challenge. But now ads can just be automatically cut into it without the creator's control? That's gonna fuck up so many quality channels. That's already a big problem with the current system, but at least you can skip or block them.


Mod Idea: Resources in Construction Queue
For years I've been looking for a mod that does a simple thing: showing the resources needed for all placed construction blueprints, ideally in parentheses on the vanilla resource list next to the corresponding resource.

I'm tired of selecting all the placed floor tiles and then multiplying the count by the number of resources per tile by hand, especially because you can only select a limited number of entities at once. You have to fiddle around with the camera to section off portions of your blueprints if they exceed this limitation. It would save a lot of hassle for larger builds if the number of resources in the construction or production queue were visible at a glance.

Additionally, the approximate number of days your food will last should also be visible on the resource list.
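For illustration, the arithmetic such a mod would automate is just a per-resource tally over all placed blueprints. A minimal sketch (the tile names and costs below are made-up placeholders, not actual game values):

```python
from collections import Counter

# Assumed cost table: resources consumed per blueprint tile (illustrative only).
COST_PER_TILE = {
    "wood_floor": {"wood": 3},
    "stone_wall": {"stone blocks": 5},
}

def queued_resources(blueprints):
    """Sum resource costs over a list of (tile_type, tile_count) pairs."""
    totals = Counter()
    for tile_type, count in blueprints:
        for resource, amount in COST_PER_TILE[tile_type].items():
            totals[resource] += amount * count
    return dict(totals)

# 120 queued floor tiles and 40 queued wall segments:
print(queued_resources([("wood_floor", 120), ("stone_wall", 40)]))
# {'wood': 360, 'stone blocks': 200}
```

The game already knows every placed blueprint and its cost, so displaying this total next to each resource should be cheap to compute.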
fedilink

Copyright protection only exists in the context of generating profit from someone else's work. If you were to figure out cold fusion and I looked at your research and said, "That's cool, but I am going to go do some woodworking," I would not be infringing any copyright. It only ever becomes an issue if the financial incentive to trace the profits back to their copyrighted source outweighs the cost of doing so. That's why China has had free rein to steal any Western technology: fighting them in their courts is not worth it. But with AI it's way easier to trace the output back to its source (especially for art), so the incentive is there.

The main issue is the extraction of value from the original data. If I were to steal some bricks from your infinite brick pile and build a house out of them, do you have a right to my house? Technically, I never stole a house from you.


I wonder how this will affect trophies and achievements. Imagine a conversation where one guy says "Man, getting this trophy was so hard, I had to fight a boss and his 5 goons that kept healing themselves, and I burned through all my powerups and barely made it" and the other guy is like "I only had to fight the boss and 2 goons and they never healed, I didn't even know there were powerups." The first guy is going to be like "YOU WHAT?"


No, they can't: the membranes of fuel cells degrade extremely quickly, as in a couple of hundred cycles before significant efficiency loss. That's currently one of the biggest issues with fuel cells and one of the biggest areas of research. For now, batteries are far more reliable as an energy source.


It's funny how the loss of storage space can be valued differently. If it's 3 TB of video footage for a newspaper, that's weeks if not months of work and money lost. But it could also just be the last three Call of Duty games with patches.


I think you might have misunderstood the article. In one case they used the sound input from a Zoom meeting, and as a reference they used the chat messages from said Zoom meetings. No keyloggers required.

I haven’t read the paper yet, but the article doesn’t go into detail about possible flaws. Like, how would the software differentiate between double assigned symbols on the numpad and the main rows? Does it use spell check to predict words that are not 100% conclusive? What about external keyboards? What if the distance to the microphone changes? What about backspace? People make a lot of mistakes while typing. How would the program determine if something was deleted if it doesn’t show up in the text? Etc.

I have no doubt that under lab conditions a recognition rate of 93% is realistic, but I doubt that this is applicable in the real world. Nobody sits in a video conference quietly typing away at their keyboard. A single uttered word can throw off your whole training data. Most importantly, all video or audio call apps have an activation threshold for the microphone enabled by default to save on bandwidth, and typing is mostly below that threshold. Any other means of collecting the data would require access to the device to a point where installing a keylogger is easier.


OK, I quickly skimmed through the research paper without going into the math, but here's the skinny of it.

They used two WiFi routers with three antennas each as a cheap makeshift radar. Router antennas aren't designed to natively provide elevation and angle information, so they had to get smart with the data processing. Once they had the data from the antennas, they used cameras to train a proven AI model for recognizing human poses and mapping them to a 3D mesh on that data. They cycled through 15 different room layouts while training their model. Then they switched to a new, untrained room layout to test the model's performance. The results were always below image-based recognition and plummeted even lower after switching to an unknown room layout.

Unless it's buried between the math paragraphs, I don't see "looking through walls" mentioned in the paper. The introduction section has a quick mention that visual obstacles pose difficulties for other human recognition technologies. Unless it's because of the implication of WiFi going through walls, I cannot discern where this article got that idea from. The superimposed example images in the research paper even cut off at the legs if the person happens to stand behind a table.

My takeaway from this is: as long as you don't make the specific placement of your multiple WiFi routers and the exact layout of your house public knowledge, and don't set up multiple cameras with overlapping views to cover every angle of your home, you should be safe. Or just get single-antenna routers.


I had to dig around a little, but here's the link to the research paper detailing the technology described in this article:

Paper: https://arxiv.org/abs/2301.00250

Source: https://www.popularmechanics.com/technology/security/a42575068/scientists-use-wifi-to-see-through-walls/