Yeah, true, maybe not as big of a problem in that case.
Radiation is another challenge for computers in space, so just sticking existing hardware in a space data center won’t work as expected. Massive shielding or more specialized hardware and software will be required, like what is described here: https://www.nasa.gov/missions/artemis/clps/nasa-to-test-solution-for-radiation-tolerant-computing-in-space/
Existing hardware might work with a lot of mass for shielding, but as others have already mentioned, the rocket equation makes every extra kilogram expensive to launch (rough numbers below the excerpt).
Here’s a highly relevant excerpt:
“…computers in space are susceptible to ionizing solar and cosmic radiation. Just one high-energy particle can trigger a so-called ‘single event effect,’ causing minor data errors that lead to cascading malfunctions, system crashes, and permanent damage.”
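To put rough, illustrative numbers on the rocket equation point (ballpark figures only, not a real launch calculation):

```latex
% Tsiolkovsky rocket equation: the required mass ratio grows exponentially with delta-v.
\Delta v = v_e \ln\frac{m_0}{m_f}
\quad\Longrightarrow\quad
\frac{m_0}{m_f} = e^{\Delta v / v_e}
% With ballpark values of roughly 9.4 km/s of delta-v to reach low Earth orbit
% and an effective exhaust velocity around 3.5 km/s for a chemical rocket,
% m_0 / m_f is about e^{2.7}, i.e. roughly 15, so each extra kilogram of shielding
% costs on the order of 15 kg of fueled rocket at liftoff (ignoring staging).
```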
I originally thought it was one of my drives in my RAID1 array that was failing, but I noticed copying data was yielding btrfs corruption errors on both drives that could not be fixed with a scrub, and I was getting btrfs corruption errors on the root volume as well. I figured it would be quite an odd coincidence if my main SSD and both hard disks all went bad at once, and then I happened upon an article talking about how corrupt data can also occur if the RAM is bad. I also ran SMART tests and everything came back with a clean bill of health. So, I installed and booted into Memtest86+ and it immediately started showing errors on the single 16 GiB stick I was using. I happened to have a spare stick from a different brand, and that one passed the memory test with flying colors. After that, all the corruption errors went away and everything has been working perfectly ever since.
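For anyone troubleshooting something similar, the checks involved look roughly like this (the device name and mount point are just examples, not my actual setup):

```bash
# Long SMART self-test on each drive, then review the results
sudo smartctl -t long /dev/sda
sudo smartctl -a /dev/sda

# Verify btrfs checksums across the filesystem and check per-device error counters
sudo btrfs scrub start /mnt/raid1
sudo btrfs scrub status /mnt/raid1
sudo btrfs device stats /mnt/raid1

# Memtest86+ runs from its own boot media outside the OS, so there's no command for it here
```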
I will also say that legacy file systems like ext4, with no checksums, wouldn’t even complain about corrupt data. I originally had ext4 on my main drive and at one point thought my OS install went bad, so I reinstalled with btrfs on top of LUKS and saw I was getting corruption errors on the main drive at that point, so it occurred to me that 3 different drives could not possibly have all had a hardware failure and something else must be going on. I was also previously using ext4 and mdadm for my RAID1 and migrated it to btrfs a while back. As far back as a year ago, I was noticing that certain installers, etc. that previously worked no longer did. It happened infrequently and didn’t really register with me as a potential hardware problem at the time, but I think the RAM was actually progressively going bad for quite a while. btrfs with regular scrubs would’ve made it abundantly clear much sooner that I had files getting corrupted and that something was wrong.
So, I’m quite convinced at this point that RAID is not a backup, even with the abilities of btrfs to self-heal, and simply copying data elsewhere is not a backup either, because in both cases something like bad RAM can destroy data during the copying process, whereas older snapshots in the cloud will survive such a hardware failure. Older backups that weren’t copied with the faulty RAM may be fine as well, but you’re taking a chance that a recent update may overwrite good data with bad data.

I was previously using Rclone for most backups while testing Restic with daily, weekly, and monthly snapshots for a small subset of important data over the last few months. After finding some data that was only recoverable from a previous Restic snapshot, I’ve since switched to using Restic exclusively for anything important enough for cloud backups. I was mainly concerned about the space requirements of keeping historical snapshots, and I’m still working on tweaking retention policies and taking separate snapshots of different directories with different retention policies according to my risk tolerance for each directory I’m backing up. For some things, I think even btrfs local snapshots would suffice, with the understanding that they reduce recovery time but aren’t really a backup. However, any irreplaceable data really needs monthly Restic snapshots in the cloud. I suppose if you don’t have something like btrfs scrubs to alert you that you have a problem, even snapshots from months ago may have an unnoticed problem.
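For reference, a Restic workflow with a retention policy looks roughly like this (the repo location, paths, and retention numbers are placeholders, not a recommendation):

```bash
# Assumes the repo already exists (restic init) and credentials are set via environment variables
export RESTIC_REPOSITORY=s3:s3.amazonaws.com/my-backup-bucket   # placeholder repo
export RESTIC_PASSWORD_FILE=~/.config/restic/password           # placeholder path

restic backup ~/important                                                # take a new snapshot
restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --prune   # apply retention policy
restic snapshots                                                         # list what's still retained
restic check                                                             # verify repository integrity
```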
Don’t understand the downvotes. This is the type of lesson people have learned from losing data, and there’s no sense in learning it the hard way yourself.
TS transpiles to JS, and then when that JS is executed in Deno, Node.js, a Blink browser like Chrome, etc., it gets just-in-time compiled to native machine code instead of only being interpreted. Hope that helps.
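To make that more concrete, the pipeline looks roughly like this (app.ts is just an example file name):

```bash
npx tsc app.ts    # transpile the TypeScript to plain JavaScript (emits app.js)
node app.js       # V8 interprets it, then JIT-compiles hot code paths to machine code
deno run app.ts   # or let Deno strip the types and run the TS directly
```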
I believe Waymo’s strategy has always been to shoot for Level 5 autonomous driving and not bother with the intermediate levels. Tesla not following that strategy has proven Waymo correct. You either have a system that is safe, reliable, and fully autonomous, or you’ve got nothing. Not that Waymo has a system at this point that can work under all conditions, but their approach is definitely superior to Tesla’s if nothing else.
The JavaScript code is compiled to native machine code and heavily optimized, as opposed to just being interpreted.
Having a synced copy elsewhere is not an adequate backup, and snapshots are pretty important. I recently had RAM go bad and my most recent backups had corrupt data, but having previous snapshots saved the day.
I had to deal with large JavaScript codebases targeting IE8 back in the day and probably would’ve slapped anyone back then who suggested using JavaScript for everything. I have to say, though, that faster runtimes like V8 and TypeScript have done wonders, and TypeScript nowadays is actually one of my favorite languages.
This article sums up a Stanford study of AI and developer productivity. TL;DR - the net productivity boost is a modest 15-20%, ranging from negative to around 10% in complex, brownfield codebases. This tracks with my own experience as a dev.
I heard a rumor that Amazon did it to dominate the toy market.
I certainly would not put it past them.
Is there anything open source that provides the same experience as Google Admin Console, where IT admins can manage everything from a single pane of glass? I’d imagine schools use Chromebooks because Google has put a lot of resources into making it a simple and cost-effective option for schools, where IT budgets and staffing are usually pretty limited. An open source software suite that provides a similar experience would seemingly be a compelling alternative. I’d imagine there would need to be a company hosting the software for a fee, with the funds used to build on top of existing open source software to make a seamless and unified experience that works well. Barring that, I don’t imagine any school IT admin has sufficient bandwidth to buy a bunch of cheap laptops, install Linux on them, self-host Nextcloud, secure and lock down everything, etc. I know next to nothing about how IT in schools is managed, so this is a lot of conjecture that could be wrong.