Yes, but that’s kind of my point
We see it learn something with insane precision, but most often that’s really an effect of over-training. It would probably take less time to learn another layout, but it’s not learning the general rules (can’t go through walls, holes are bad, we want to get to X); it learns the specific layout. Each time the layout changes, it has to re-learn it
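To make that concrete, here’s a toy sketch of my own (nothing from any real system, and the maze layouts are made up): a tabular Q-learner whose Q-values are keyed by raw grid coordinates. It solves the maze it was trained on, but move the walls and the “expert” policy walks straight into one, because the table encodes the layout, not the rules.

```python
import random

# Toy tabular Q-learning on a tiny grid maze.
# Q-values are keyed by raw (row, col) coordinates, so the "knowledge"
# is tied to one specific layout, not to rules like "walls block you".

MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def find(maze, ch):
    return next((r, c) for r, row in enumerate(maze)
                for c, cell in enumerate(row) if cell == ch)

def train_q(maze, episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rows, cols = len(maze), len(maze[0])
    start, goal = find(maze, "S"), find(maze, "G")
    rng = random.Random(seed)
    q = {}

    def step(s, a):
        r, c = s[0] + MOVES[a][0], s[1] + MOVES[a][1]
        if not (0 <= r < rows and 0 <= c < cols) or maze[r][c] == "#":
            return s, -1.0, False       # bumped a wall: stay put, penalty
        if (r, c) == goal:
            return (r, c), 10.0, True   # reached the goal
        return (r, c), -0.1, False      # small step cost

    for _ in range(episodes):
        s = start
        for _ in range(100):
            if rng.random() < eps:
                a = rng.randrange(4)
            else:
                a = max(range(4), key=lambda x: q.get((s, x), 0.0))
            s2, reward, done = step(s, a)
            best_next = max(q.get((s2, x), 0.0) for x in range(4))
            old = q.get((s, a), 0.0)
            q[(s, a)] = old + alpha * (reward + gamma * best_next * (not done) - old)
            s = s2
            if done:
                break
    return q

def greedy_solves(q, maze, limit=50):
    """Follow the learned policy greedily; True iff it reaches G without hitting a wall."""
    rows, cols = len(maze), len(maze[0])
    s = find(maze, "S")
    for _ in range(limit):
        a = max(range(4), key=lambda x: q.get((s, x), 0.0))
        r, c = s[0] + MOVES[a][0], s[1] + MOVES[a][1]
        if not (0 <= r < rows and 0 <= c < cols) or maze[r][c] == "#":
            return False  # the "expert" walks straight into a wall
        if maze[r][c] == "G":
            return True
        s = (r, c)
    return False

LAYOUT_A = ["S.#",
            ".##",
            "..G"]
LAYOUT_B = ["S..",   # same size, same rules, walls moved
            "##.",
            "G.."]

q_table = train_q(LAYOUT_A)
```

With these (made-up) settings the trained table solves LAYOUT_A greedily but fails on LAYOUT_B: its first confident move, learned on A, is blocked in B, so it would have to re-learn from scratch.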
It is impressive and enables automation in a lot of areas, but in the end it is still only machine learning: adapting weights to a specific scenario
I get that. But my point is: are we really sure that this is the problem?
One of the bases of our scientific method is the repeatability of experiments. But once we can produce a lot of experiments, a problem appears: we can run out of people with the time and resources to repeat them. One way to mitigate that is to strengthen the requirements on data gathering. Then, when you do find something weird, you can analyze how your parameters differ from other similar runs, and if someone else manages to repeat it, you have an easier time finding which variable is responsible. Without a consistent “we measured X after setting that to Y” it’s hard to repeat the experiment, or even to recognize whether you’re really observing something new.
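The “we measured X after setting that to Y” idea can be sketched as structured run records (the field names below are entirely hypothetical): if every run logs its settings and measurements under the same keys, spotting which variable differs between a weird run and a normal one becomes trivial.

```python
# Hypothetical run records: every experiment logs its settings and
# measurements under the same keys, so runs are directly comparable.
baseline_run = {"temp_K": 295, "pressure_atm": 1.0,
                "sample_purity": 0.999, "resistance_ohm": 0.02}
weird_run    = {"temp_K": 295, "pressure_atm": 1.0,
                "sample_purity": 0.97,  "resistance_ohm": 0.0}

def differing_params(a, b):
    # Keys present in both records whose values disagree.
    return sorted(k for k in a.keys() & b.keys() if a[k] != b[k])
```

Here `differing_params(baseline_run, weird_run)` immediately points at the purity and the resistance reading; without the shared schema you’d be comparing lab notebooks by hand.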
Take the error a few months ago that had us thinking a new superconductor working at roughly room conditions had been found. If we didn’t have a precise description of what they did and what they measured, we could still be trying to reproduce their observations
modern inventions are orders of magnitude more complex than anything in the past
Well, in a way that’s always the case with inventions. I think when the first modern submarine (it’s just an example) was built, it was also a marvel of alloy purity and manufacturing precision compared to anything before it. It’s just that over the last century we saw a lot of technological progress because we started doing research in far more directions and in much higher volume. We’ve caught up to our technological and theoretical knowledge, and now progress will slow down, only to explode again after another breakthrough. We often move in sinusoids, but that will be one field (plus how it can help other fields), not a bunch of fields developing all at once in a short timeframe
I’m not convinced by the premise of this statement
Scientific innovations should make ‘zero to one’ breakthroughs, such as the mobile phone or the combustion engine, but are instead making ‘one to many’ improvements to existing innovations
Maybe we are simply past the point where a few people can innovate a breakthrough, and now it has to come from a lot of data gathered from existing implementations? To invent the cellphone, a lot of technologies had to be improved well beyond their first introduction, and get cheap enough to enable experimentation
That I don’t know. I haven’t been looking into single-board computers for a while. The one I bought ~10 years ago was running out of juice when I tried to run Kodi on it last year. Wi-Fi shouldn’t be a problem IMO; I’ve used mine as a torrent downloader and hosted a few university projects (dynamic web apps) on it. The graphics might be. I’d guess that as long as you find one with decent specs (so probably not the $10 one) it should work. I’m sure someone out there is doing exactly that and could either answer what to buy/look for, or has written a blog about it