jcolag

joined 2 years ago
[–] jcolag@lemmy.sdf.org 2 points 1 month ago

It seems interesting, and we have needed new thinking on licenses for a long while, but a few details mildly concern me.

First, brevity usually improves readability, but it often leads to loopholes. Mature licenses grow so long because lawyers have needed to repeatedly game out threats, and how to resolve them, before arguments break out.

Then, I don't quite know how to phrase such a thing, but it'd be nice to bar Contributor License Agreements and other copyright assignments. Prior to LLMs laundering Free Software and helping novices flood everybody with low-quality patches, the biggest threat to Free Software was (and probably will be again when the AI bubble pops) projects exercising the copyright holder's authority to re-release the code under different terms, taking the community's work with them behind the paywall.

The LLM condition also seems concerning. I don't like how the big companies have behaved either, but the problem is corporate exploitation, not the technology. If a teenager wanted to train their own model with a selection of projects, we don't want to tell them to stop, and if a company wants to repeatedly scrape projects for a PowerPoint deck, that's just as bad. Plus, training a neural network probably falls well inside the bounds of Fair Use, though publishing the output probably wouldn't.

Otherwise, you might want to reach out to the folks working on Copyleft Next, which has a similar interest in building on the GPL, since Kuhn and Fontana have been at this for a long time.

[–] jcolag@lemmy.sdf.org 18 points 5 months ago (1 children)

I agree that creating is inherently political, because politics pervades creation whether we choose the politics or not, but that's not a useful argument after somebody says "it doesn't matter to me." If you want to get into that shouting match, it's your time to waste.

My point is that, behind the garbage philosophy, we also now know that it's garbage technology, so all these people telling us about their utopian meritocracy where we just ignore bigotry are exposed as full of it. Cloudflare, Framework, and so forth, are not only OK with Great Replacement rhetoric, but also incapable of telling solid software from broken, and that's a stronger indictment than just trying to drag the conversation back to the bigotry.

[–] jcolag@lemmy.sdf.org 32 points 5 months ago (7 children)

That's probably a good choice, since it makes a perfect response to "politics shouldn't matter, here, just merit." Turns out...

[–] jcolag@lemmy.sdf.org 2 points 6 months ago

I primarily use GNOME Authenticator, but after an inopportune crash, I now also run 2FAuth on my home server as a backup, and I just hope that I remember to do the export/import dance going forward.

[–] jcolag@lemmy.sdf.org 5 points 6 months ago

I'm another conflicted person on this. I ran Tiny for years, so I never hated it. But it had so many updates that assumed I'd know in advance to update something else on the system (PHP libraries, database schema, etc.), and then putting the git repository behind Cloudflare led to a cycle of being notified that I needed an update, then waiting for Brigadoon to reemerge so that I could pull the latest source. And any time that I needed to look for a solution to a problem, reading through the forums made me regret the choice a tiny bit more.

It's reasonable software, but I ended up moving to FreshRSS on an in-house server, and that has gone better. Still, I hope that the Tiny community pulls together something better to keep the space diverse.

[–] jcolag@lemmy.sdf.org 4 points 6 months ago

In my case (not necessarily your case, of course), the cheapest selling point has become that I already have a browser open for almost everything else, so that's one less thing to install and check in on. But it also makes it easier to keep up with my reading when individual computers have problems, and it usually has a nicer API for scripting, if you need that sort of thing.

[–] jcolag@lemmy.sdf.org 2 points 6 months ago

That's close to how I think about it, yeah, but I'd push more in terms of the investment. Since Jekyll, Hugo, Svelte, Eleventy, and the rest just generate flat HTML to upload, there's nothing wrong with using one for a single page. But you end up needing to learn the whole build-and-deploy process and all the layout quirks, which (especially if you're starting from scratch) will make it take longer to get the page out. And like you point out, the more material you have, the better that investment looks.

But then, if you already know the system, there's no new investment, so it becomes more of a toss-up whether to build things that way, since a page of Markdown is slightly faster to write than the equivalent HTML.

[–] jcolag@lemmy.sdf.org 9 points 6 months ago (2 children)

Personally, after churning through all the static site generator options, I landed on Jekyll, one of the first of them. It's definitely not the sexiest solution, but it's Markdown-in and HTML-out (my main page is still raw HTML/CSS from like twenty years ago, though), it was the easiest for me to match the styling that I wanted from the base theme, and it's been around for long enough that it's mostly surprise-free.

That said, if you only want the equivalent of a business card, I might argue that setting up anything is probably overkill, all overhead for just a tiny bit of content. In that case, you can grab some modern-ish HTML boilerplate like this one, then use Pandoc to convert the Markdown (which you presumably already know if you're messing with Hugo) to the HTML that goes between <body> and </body> in the boilerplate. Add CSS, and you're done.
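To sketch that out (the file names here are just placeholders, and the awk splice stands in for whatever editor step you'd actually prefer), the whole pipeline is only a couple of commands:

```shell
#!/bin/sh
# Hypothetical example: a tiny Markdown "business card".
printf '# Jane Doe\n\nEmail: jane@example.com\n' > card.md

# Pandoc turns it into an HTML fragment; if Pandoc isn't installed,
# fall back to a trivial stand-in so the splice below still has input.
pandoc --from markdown --to html card.md --output fragment.html 2>/dev/null ||
  printf '<h1>Jane Doe</h1>\n' > fragment.html

# A minimal stand-in for the downloaded boilerplate:
printf '<html>\n<body>\n</body>\n</html>\n' > boilerplate.html

# Splice the fragment in right after <body>, writing the finished page:
awk '{ print } /<body>/ { while ((getline l < "fragment.html") > 0) print l }' \
  boilerplate.html > index.html
```

From there, it's just a matter of uploading `index.html` (plus your CSS) wherever the page lives.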

Oh, and actually, depending on how broadly you want just the "business card" idea, something like Littlelink might also fit your needs, where you hack out the links that you don't care about and fill in destinations for the rest.

[–] jcolag@lemmy.sdf.org 3 points 7 months ago

I developed this script for creating permanent/static archives of social media exports, so it's not a full solution: it's not a web service, it expects file inputs, and it uses a probably incomplete list of shorteners to avoid pulling real pages. But the script, along with the shorteners.txt file in the same repository and the idea of iterating until you find a domain not on the list, might at least inspire a solution, even if it's not good for your specific cases.
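As a rough sketch of that iteration idea (this is not the linked script, the helper names are made up, and the actual redirect-following step is only described in a comment):

```shell
#!/bin/sh
# Hypothetical helpers: decide whether a URL's domain appears in a
# shorteners.txt-style list, so a caller knows to keep following
# redirects until it lands on a "real" page.
domain_of() {
  printf '%s\n' "$1" | awk -F/ '{print $3}'
}
is_shortener() {
  # $1 = URL, $2 = path to the list of shortener domains (one per line)
  grep -qxF "$(domain_of "$1")" "$2"
}
# To walk the chain, something like
#   curl -sI -o /dev/null -w '%{redirect_url}' "$url"
# yields the next hop; loop while is_shortener succeeds and a redirect
# target keeps coming back.
```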

[–] jcolag@lemmy.sdf.org 3 points 7 months ago

I buy it.

As it turns out, a couple of months ago when a laptop crapped out at an inopportune time, I needed to retreat to a much older machine with barely enough memory to keep a browser running all day. As I tried to work out a recovery plan for the things that didn't seem properly backed up (they were, just not where I expected them), I remembered that I had a couple of old Raspberry Pi units that I never did much with, and decided that they could take the load off of the laptop if I tossed them in the corner.

So far, I have code-server to substitute for Visual Studio Code, CryptPad for LibreOffice, Forgejo just because I really should have done that a long time ago, FreshRSS for a rotating list of RSS readers since I dropped my Internet-accessible Tiny Tiny RSS installation, InfCloud and Radicale for a calendar/address book, Jellyfin that used to run on the then-in-use old laptop, SnappyMail to replace Thunderbird and the bunch of heavy webpages from mail providers, YaCy because I've wanted to use it more for many years, and a few others.

Moving onto a more functional computer, I decided to keep the servers running, because the setup works about as well as the desktop setups that I've run for years, if I use a few pinned tabs. I'm sure that I'll scream about it when something goes wrong, but it does the job...

[–] jcolag@lemmy.sdf.org 1 points 7 months ago

Yeah, it's on the local network, so I'll need to mess around with aliases again. And they seem to think that it's possible to set this up on a subfolder, with the APP_SUBDIRECTORY variable, but it doesn't exactly give the impression of rigorous deployment testing, so you're right that I should assume that part doesn't work. Thanks!


(Apologies in advance if this is the wrong spot to ask for help, and/or if the length annoys people.)

I'm trying to set up 2FAuth on a local server (old Raspberry Pi, Debian), alongside some other services.

Following the self-hosting directions, I believe that I managed to get the code running, and I can get at the page, but can't register the first/administrative/only account. Presumably, something went wrong in either the configuration or the reverse-proxy, and I've run out of ideas, so could use an extra pair of eyes on it, if somebody has the experience.

The goal is to serve it from http://the-server.local/2fa, where I have a...actually the real name of the server is worse. Currently, the pages (login, security device, about, reset password, register) load, but when I try to register an account, it shows a "Resource not found / 404" ("Item" in the title) page.

Here's the (lightly redacted) .env file, mostly just the defaults.

APP_NAME=2FAuth
APP_ENV=local
APP_TIMEZONE=UTC
APP_DEBUG=false
SITE_OWNER=mail@example.com
APP_KEY=base64:...
APP_URL=http://the-server.local/2fa
APP_SUBDIRECTORY=2fa
IS_DEMO_APP=false
LOG_CHANNEL=daily
LOG_LEVEL=notice
CACHE_DRIVER=file
SESSION_DRIVER=file
DB_CONNECTION=sqlite
DB_DATABASE=/var/www/2fauth/database/database.sqlite
DB_HOST=
DB_PORT=
DB_USERNAME=
DB_PASSWORD=
MYSQL_ATTR_SSL_CA=
MAIL_MAILER=log
MAIL_HOST=my-vps.example
MAIL_PORT=25
MAIL_USERNAME=null
MAIL_PASSWORD=null
MAIL_ENCRYPTION=null
MAIL_FROM_NAME=2FAuth
MAIL_FROM_ADDRESS=2fa@my-vps.example
MAIL_VERIFY_SSL_PEER=true
THROTTLE_API=60
LOGIN_THROTTLE=5
AUTHENTICATION_GUARD=web-guard
AUTHENTICATION_LOG_RETENTION=365
AUTH_PROXY_HEADER_FOR_USER=null
AUTH_PROXY_HEADER_FOR_EMAIL=null
PROXY_LOGOUT_URL=null
WEBAUTHN_NAME=2FAuth
WEBAUTHN_ID=null
WEBAUTHN_USER_VERIFICATION=preferred
TRUSTED_PROXIES=null
PROXY_FOR_OUTGOING_REQUESTS=null
CONTENT_SECURITY_POLICY=true
BROADCAST_DRIVER=log
QUEUE_DRIVER=sync
REDIS_HOST=127.0.0.1
REDIS_PASSWORD=null
REDIS_PORT=6379
PUSHER_APP_ID=
PUSHER_APP_KEY=
PUSHER_APP_SECRET=
PUSHER_APP_CLUSTER=mt1
VITE_PUSHER_APP_KEY="${PUSHER_APP_KEY}"
VITE_PUSHER_APP_CLUSTER="${PUSHER_APP_CLUSTER}"
MIX_ENV=local

Then, there's the hard-won progress on the NGINX configuration.

server {
    listen 80;
    server_name the-server.local;
# Other services
    location /2fa/ {
        alias /var/www/2fauth/public/;
        index index.php;
        try_files $uri $uri/ /index.php?$query_string;
    }
    location ~ ^/2fa/(.+?\.php)(/.*)?$ {
        alias /var/www/2fauth/public/;
        fastcgi_pass unix:/var/run/php/php8.3-fpm.sock;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        set $path_info $fastcgi_path_info;
        fastcgi_param PATH_INFO $path_info;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root/$1;
        include fastcgi_params;
    }
# ...and so on

I have tried dozens of variations, here, especially in the fastcgi_param lines, almost all of which either don't impact the situation or give me a 403 or 404 error for the entire app. This version at least shows login/register/about pages.

While I would've loved to, I can't work from the documentation's example, unfortunately, because (a) it presumes that I only want to run the one service on the machine, and (b) it doesn't seem to work when transposed into a location block. They do have the Custom Base URL option, but that doesn't work either: it just gives me a 403 error for the entire app (directory index of "/var/www/2fauth/public/" is forbidden, client: 192.168.1.xxx, server: the-server.local, request: "GET /2fa/ HTTP/1.1", host: "the-server.local"; and again, I emphasize that the permissions are set correctly), making me think that maybe nobody on the team uses NGINX.

Setting both NGINX and 2FAuth to produce debugging output, the NGINX debug log gives me this, for the parts that look relevant.

*70 try files handler
*70 http script var: "/2fa/user"
*70 trying to use file: "user" "/var/www/2fauth/public/user"
*70 http script var: "/2fa/user"
*70 trying to use dir: "user" "/var/www/2fauth/public/user"
*70 http script copy: "/index.php?"
*70 trying to use file: "/index.php?" "/var/www/2fauth/public//index.php?"
*70 internal redirect: "/index.php?"

And the Laravel log is empty, so it's not getting that far.

Permissions and ownership of 2FAuth seem fine. No, there's no /var/www/2fauth/public/user, which seems to make sense, since that's almost certainly an API endpoint and none of the other "pages" have files by those names.

I have theories on what the application needs (probably the path as an argument of some sort), but (a) I'm not in the mood to slog through a PHP application that I don't intend to make changes to, and (b) I don't have nearly the experience with NGINX to know how to make that happen.
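To be concrete, my current best theory looks something like the following, though it's untested and may be just as wrong as the other dozens of variations:

```nginx
# Untested theory: keep the try_files fallback *inside* the /2fa/ prefix,
# so the request re-enters the PHP location block instead of internally
# redirecting to a bare /index.php at the server root, which is what the
# debug log above appears to show.
location /2fa/ {
    alias /var/www/2fauth/public/;
    index index.php;
    try_files $uri $uri/ /2fa/index.php?$query_string;
}
```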

It seems impossible that I'm the first one doing this, but this also feels like a small enough problem (especially with a working desktop authenticator app) that it's not worth filing a GitHub issue, especially when their existing NGINX examples are so...worryingly off. So, if anybody can help, I'd appreciate it.

[–] jcolag@lemmy.sdf.org 1 points 1 year ago

I've been using different versions of SearX for a long while (sometimes on my own server, sometimes through a provider like Disroot) as my standard search engine, since I've never had great luck with the big names. It's decent, but between upstream provider quota limits and the fact that it relies on corporate search APIs at all, the quality sometimes craters.

I don't have nearly as much experience with YaCy, since I haven't had the energy to run it on my own and public instances tend not to have a long life, but when I have gotten to try it out, the search itself looked great, even though the index generally wasn't as broad or current. Long-term, though, it (and its protocol) is probably the way to go, if only because a company can't randomly tank it the way they can with the meta-search systems or their own interfaces.

Looking at Presearch for the first time now, the search results look almost surprisingly good, if poorly sorted, but the fact that I now know orders of magnitude more about their finances and their cryptocurrency token than about what and how the thing actually searches makes me worry a bit about its future.
