bjoern_tantau

joined 2 years ago
[–] bjoern_tantau@swg-empire.de 11 points 5 hours ago (1 children)

Upvotes minus downvotes

[–] bjoern_tantau@swg-empire.de 6 points 7 hours ago

And then Ebay has to delete all the trained models and populated databases, right?

[–] bjoern_tantau@swg-empire.de 4 points 9 hours ago (2 children)

Try growing up with fdisk.exe!

[–] bjoern_tantau@swg-empire.de 26 points 12 hours ago (3 children)

Satire is dead.

[–] bjoern_tantau@swg-empire.de 11 points 1 day ago

"Ohne Arbeit"

Aber besser als in die hohle Hand geschissen.

Just the need to go in general.

[–] bjoern_tantau@swg-empire.de 25 points 1 day ago (11 children)

Shitting. It stinks. It's shit. It's tedious.

Let me live in a Star Trek utopia where they can teleport your shit right out of your colon.

And it won't be long until someone spills something on the flap and it becomes sticky. Or an item is too large and gets stuck.

[–] bjoern_tantau@swg-empire.de 10 points 1 day ago

Correct. The whole thing is lauded as this revolutionary new thing, but in reality it's just a bullshit VM isolated from the rest of the system. We've had that for almost as long as Android has existed, along with Termux and similar apps that can actually access everything.

[–] bjoern_tantau@swg-empire.de 11 points 1 day ago (2 children)

Yeah, but that means the entire storage isn't available, contrary to what the headline implies.

[–] bjoern_tantau@swg-empire.de 1 points 1 day ago (2 children)

In Germany, the registry office where you register your child has a book with tons of names in use around the world. But apart from that it depends on the bureaucrat handling your case. And of course you can always sue if you don't like their decision.

There is a German actress whose first name is Wolke (Cloud). She doesn't know how her parents managed to get that approved. But now she is a case you can point to if you want to name your own child Wolke.

[–] bjoern_tantau@swg-empire.de 27 points 1 day ago (17 children)

Press X to doubt.

The root filesystem will very likely still be locked down.

cross-posted from: https://discuss.online/post/18562922

2006-01-19

https://www.smbc-comics.com/comic/2006-01-19

No alt text

Bonus panel

22
submitted 4 weeks ago* (last edited 3 weeks ago) by bjoern_tantau@swg-empire.de to c/linux@lemmy.ml

Edit 2: Through all of my shenanigans I ended up on a read-only snapshot as root. The error I got just looked similar to previous out-of-space errors. I set a later snapshot as the default and everything is working great!
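For anyone ending up in the same spot, these are roughly the commands to check whether / is a read-only snapshot and to make a newer one the default. The snapshot number is a placeholder, and on openSUSE `snapper rollback` is the supported way to do the switch:

```
# Which subvolume is currently mounted as /?
findmnt -no SOURCE,OPTIONS /

# Is that subvolume flagged read-only?
btrfs property get -ts / ro

# Create a writable copy of a known-good snapshot and make it the
# new default (368 is a placeholder number from `snapper list`)
snapper rollback 368
```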

My openSUSE Tumbleweed has been wonky since I last did a dist-upgrade with about 4000 packages. Midway through it errored out with an error indicating that the filesystem was full, although df showed plenty of free space.

BTRFS seemed to be the culprit. Removing snapshots let me continue the upgrade until it errored out again. Rinse and repeat until it was done.
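The rinse-and-repeat loop was essentially this (the snapshot number is a placeholder; pick old ones from the list):

```
# Free some space by dropping an old snapshot, then resume the upgrade
snapper list
snapper rm 123        # placeholder snapshot number
zypper dup
```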

Edit: My root subvolume is read-only, so there must be some error in it. The other subvolumes work correctly. So I guess it isn't about free space after all.

But now the BTRFS volume seems to be almost full and I cannot update anymore.

```
...
Checking for file conflicts: .....................[done]
error: can't create transaction lock on /usr/lib/sysimage/rpm/.rpm.lock (Read-only file system)
( 1/40) Removing: ovpn-dco-kmp-default-0.2.202412[error]
Removal of (76899)ovpn-dco-kmp-default-0.2.20241216~git0.a08b2fd_k6.13.7_1-2.2.x86_64(@System) failed:
Error: Subprocess failed. Error: RPM failed: Command exited with status 1.
Abort, retry, ignore? [a/r/i] (a):
Problem occurred during or after installation or removal of packages:
Installation has been aborted as directed.
Please see the above error message for a hint.
```

I've tried a full balance but even that didn't seem to help. So I suspect that the space is caught up in snapshots, but I can't delete them.
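For reference, this is roughly what I ran; a usage-filtered balance is the usual gentler variant (the 50 is just an example threshold):

```
# Full balance: rewrites every block group (slow)
btrfs balance start --full-balance /

# Gentler: only compact data block groups that are at most 50% used
btrfs balance start -dusage=50 /
```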

```
# snapper list
   # │ Type   │ Pre # │ Date                             │ User │ Used Space │ Cleanup │ Description           │ Userdata
─────┼────────┼───────┼──────────────────────────────────┼──────┼────────────┼─────────┼───────────────────────┼─────────────
  0  │ single │       │                                  │ root │            │         │ current               │
  1  │ single │       │ Thu 18 Apr 2024 05:58:31 PM CEST │ root │  12.51 GiB │ number  │ first root filesystem │
365* │ pre    │       │ Wed 26 Mar 2025 04:28:33 PM CET  │ root │  16.00 KiB │ number  │ zypp(zypper)          │ important=no
366  │ pre    │       │ Wed 26 Mar 2025 07:28:09 PM CET  │ root │  16.00 KiB │ number  │ zypp(zypper)          │ important=no
367  │ pre    │       │ Wed 26 Mar 2025 07:36:53 PM CET  │ root │  16.00 KiB │ number  │ zypp(zypper)          │ important=no
```
```
# snapper rm 1
Deleting snapshot failed.
# snapper rm 365
Cannot delete snapshot 365 since it is the currently mounted snapshot.
```
```
# btrfs filesystem usage /
Overall:
    Device size:                 476.44GiB
    Device allocated:            389.06GiB
    Device unallocated:           87.37GiB
    Device missing:                  0.00B
    Device slack:                  3.50KiB
    Used:                        382.53GiB
    Free (estimated):             90.80GiB      (min: 47.12GiB)
    Free (statfs, df):            90.80GiB
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB      (used: 0.00B)
    Multiple profiles:                  no

Data,single: Size:381.00GiB, Used:377.57GiB (99.10%)
   /dev/mapper/cr_root   381.00GiB

Metadata,DUP: Size:4.00GiB, Used:2.48GiB (61.97%)
   /dev/mapper/cr_root     8.00GiB

System,DUP: Size:32.00MiB, Used:80.00KiB (0.24%)
   /dev/mapper/cr_root    64.00MiB

Unallocated:
   /dev/mapper/cr_root    87.37GiB
```
```
# btrfs qgroup show /
Qgroupid    Referenced    Exclusive   Path
--------    ----------    ---------   ----
0/5           16.00KiB     16.00KiB   <toplevel>
0/256         16.00KiB     16.00KiB   @
0/257         14.25GiB     14.25GiB   @/var
0/258         16.00KiB     16.00KiB   @/usr/local
0/259         16.00KiB     16.00KiB   @/srv
0/260         54.32MiB     54.32MiB   @/root
0/261         24.09GiB     24.09GiB   @/opt
0/262        289.02GiB    288.95GiB   @/home
0/263         16.00KiB     16.00KiB   @/boot/grub2/x86_64-efi
0/264         16.00KiB     16.00KiB   @/boot/grub2/i386-pc
0/265         16.00KiB     16.00KiB   @/.snapshots
0/266         24.00GiB     12.51GiB   @/.snapshots/1/snapshot
0/473         16.00GiB     16.00GiB   @/.snapshots/1/snapshot/swap
0/657         23.68GiB     16.00KiB   @/.snapshots/365/snapshot
0/661         23.68GiB     16.00KiB   @/.snapshots/366/snapshot
0/662         23.68GiB     16.00KiB   @/.snapshots/367/snapshot
1/0           36.19GiB     36.12GiB   <0 member qgroups>
```

Any tips?

cross-posted from: https://discuss.online/post/17161733

http://www.smbc-comics.com/comic/cosmology-3

Alt text: I think we could increase the number of STEM PhDs just by changing the title from Doctor to Ultimate Grandmaster.

Bonus panel

In our eternal quest to declutter our lives, we found an approach that seems to work better than "Does it spark joy?":

"Would I take it with me if I were moving to a one-room apartment?"

I hope this open letter can achieve something. ME is the most common manifestation of Long Covid, but it was already known for decades before that. Even so, most doctors don't even know that it exists, or dismiss it as depression or some other mental illness.

Simply more awareness would mean a lot to me and other sufferers.

1
submitted 1 month ago* (last edited 1 month ago) by bjoern_tantau@swg-empire.de to c/lemmy_support@lemmy.ml

In my quest to tame my pict-rs instance I've disabled thumbnail proxying and deleted all thumbnails and images not uploaded by me, and still the files folder is several GB in size. And growing.

I am the only user of my instance. But iotop shows constant disk activity for pict-rs. It's writing at about 1 MB/s. The webserver logs only show my own activity and occasionally someone requesting an image, but not nearly enough to explain this constant activity.

When I look at the actual files at random, they seem to be mostly memes and such, not what I uploaded myself.

Does anyone have an idea how I can find out what is causing all this activity?
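In case it gives anyone ideas, this is the kind of thing I've been poking at to attribute the writes. The process name and filters are from my setup and may differ on yours (and won't work as-is if pict-rs runs in a container):

```
# Watch file access events system-wide and filter for pict-rs (needs root)
fatrace | grep pict-rs

# Or attach to the running process and watch its file syscalls
strace -f -e trace=openat,write -p "$(pidof pict-rs)"

# And list the regular files it currently has open
lsof -p "$(pidof pict-rs)" | grep REG
```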

Edit: I've opened an issue at https://git.asonix.dog/asonix/pict-rs/issues/79

I'm currently migrating pict-rs's database from sled to PostgreSQL, and even during that it seems to be constantly writing data to sled. That doesn't feel right to me.
