SmokeyDope

joined 1 week ago
[–] SmokeyDope@piefed.social 1 points 2 hours ago* (last edited 2 hours ago)

why? I run it.

Mmm, how to say this. I suppose what I'm getting at is a philosophy of development and the known behaviors of corporate products.

So, here's what I understand about CrowdSec. It's essentially a centralized collection of continuously updated iptables rules and bot-scanning detectors that clients install locally.

In a way its crowdsourcing works like a centralized mesh network: each client is a scanner node that phones home threat data to the corporate hub, which updates the shared ruleset.

Notice the operative word: centralized. The company owns that central hub, and it's their proprietary black box to do what they want with. And you know what for-profit companies like to do to their services over time? Enshittify them by

  • adding subscription-tier pricing models

  • putting once-free features behind paywalls

  • changing data-sharing requirements as a condition for free access

  • restricting free API access tighter and tighter to push paid tiers

  • making paid tiers cost more to do less

  • intentionally ruining features in one service to drive power users to a different one

They can and do use these tactics to drive up profit or reduce overhead once a critical mass has been reached. I do not expect altruism and respect for users from corporations. I expect bean counters using altruism as a vehicle to attract users in the growth phase, then flipping the switch in their ToS to go full penny-pinching once they're too big to fail.

CrowdSec's pricing updates from last year:
CrowdSec updated pricing policy

Hi everyone,

Our former pricing model led to some incomprehensions and was sub-optimal for some use-cases.

We remade it entirely here. As a quick note, in the former model, one never had to pay $2.5K to get premium blocklists. This was Support for Enterprise, which we poorly explained. Premium blocklists were and are still available from the premium SaaS plan, accessible directly from the SaaS console.

Here are the updates:

Security Engine: All its embedded features (IDS, IPS and WAF) were, are and will remain free.

SAAS: The free plan offers up to three silver-grade blocklists (on top of receiving IP related to signals your security engines share). Premium plans can use any free, premium and gold-grade blocklists. Previously, we had a premium and an enterprise plan with more features. All features are now merged into a unique SaaS enterprise plan. The one starting at $31/month. As before, those are available directly from the SaaS console page: https://app.crowdsec.net/

SUPPORT: The $2.5K (which were mostly support for Enterprise) are now becoming optional. Instead, a client can contract $1K for Emergency bug & security fixes and $1K for support if they want to.

BLOCKLISTS: Very specific (country targeted, industry targeted, stack targeted, etc.) or AI-enhanced are now nested in a different offer named "Platinum blocklists subscription". You can subscribe to them, regardless of whether you use the FOSS Security Engine or not. They can be joined, tuned, and injected directly into most firewalls with regular automatic remote updates of their content. As long as you do not resell them (meaning you are the final client), you can use the subscription in any part of your company.

CTI DATA: They can be consumed through API keys with associated quotas. These are affordable and intended for use in tools like OpenCTI, MISP, The Hive, Xsoar, etc. Costs are in the range of hundreds of dollars per month. The Full CTI database can also be locally replicated at your place and constantly synced for deltas. Those are the largest plans we have, and they are usually destined to L/XL enterprises, governmental bodies, OEM & hardware vendors.

Safer together.
Comments Section
ShroomShroomBeepBeep

1y ago

Whilst I'm pleased to see it made clearer, £290 a year for each security engine is still far too expensive for me to consider it.
GuitarEven

1y ago

We get that £290 is too high for individual home labs. Those offers are made for companies.
Free tier features should cover homelabs correctly.

The paid features are oriented toward enterprise clients.
If a company cannot invest $300 yearly in its security, no judgment and the free tier will still be very helpful until it recovers some budget margins to strengthen its security posture.
[deleted]

1y ago

Any idea why we don't have any good free/freemium (max $5 per month) app yet? Reason I'm asking: AdGuard, uBlock Origin etc. had filters which match JS/domains and filter them out. The same logic can be applied at least for the IP lists, so these IPs can be added to iptables to block. A lot of things are easy to make. The tough ones are things like scenarios and maybe SSH bw etc. I wonder why there's no real competition.
GuitarEven

1y ago

hi u/ElizabethThomas44

Well you actually do. To date, for free, you get:

  • the security engine (IDS/IPS/WAF)
  • all scenarios
  • the blocklist of IPs you are participating to detect when you use scenarios and share signals
  • the free tier of the console

The IPs you automatically get for free are already added to your nftables or iptables using the related remediation component.

<TL/DR> You already have it.

(damn, personal reddit account, sorry, this is Philippe@CrowdSec)
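For the curious, the "remediation component" mentioned in that reply boils down to taking a feed of bad IPs and loading them into the firewall. A minimal sketch of that idea in Python (the set name is hypothetical, and it assumes ipset and iptables are available on the box):

```python
# Sketch of what a blocklist "remediation component" does at its core:
# turn a feed of bad IPs into firewall commands. Using an ipset means
# one iptables rule covers the whole list, however long it grows.

def remediation_commands(bad_ips, set_name="crowd-blocklist"):
    """Build the shell commands that would load a blocklist into an ipset."""
    cmds = [f"ipset create {set_name} hash:ip -exist"]
    cmds += [f"ipset add {set_name} {ip} -exist" for ip in bad_ips]
    # Single rule referencing the whole set:
    cmds.append(f"iptables -I INPUT -m set --match-set {set_name} src -j DROP")
    return cmds

for cmd in remediation_commands(["192.0.2.1", "198.51.100.7"]):
    print(cmd)
```

The real component handles expiry and deltas on top, but the shape of the job is the same.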

At the end of the day it's not the thousands of anonymous users contributing their logs, or the FOSS volunteers on git, who get a quarterly payout. They're the product: free compute plus live-action pen-testing guinea pigs, no matter what PR spin says about how much the company cares about the security of the plebs using its network for free.

It's always about maximizing money with these people; your security can get fucked if they don't get some use out of you. Expect that at some point the ToS will change so that anonymized data sharing is no longer optional for the free tier.

What happens if the company goes bankrupt? Does it all just stop working when their central servers shut down? Can their open source security engine be forked and run from local servers?

It doesn't have to be like this. Peer-to-peer decentralized mesh networks like YaCy already show it's possible for a crowdsourced network of users to all contribute to an open database, something that can be run entirely as a local node which federates and updates information in a global pool. Something like that updating a global iptables ruleset would already be a step in the right direction. In that theoretical system there is no central monopoly; like the fediverse, everyone contributes to hosting the global network as a mesh, and altruistic hobbyists can contribute free compute on their own terms.

https://github.com/yacy/yacy_search_server
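To make the idea concrete: the core of such a decentralized blocklist is just a grow-only set that any node can merge from any peer. Merge order doesn't matter, so no central server is needed. A toy sketch (all names hypothetical):

```python
# Toy peer-to-peer shared blocklist. Each node's state is a set of
# (ip -> earliest sighting) observations; syncing is a commutative merge,
# so nodes converge regardless of who talks to whom first.

class BlocklistNode:
    def __init__(self):
        self.entries = {}  # ip -> earliest timestamp any node saw it

    def observe(self, ip, timestamp):
        """Record a detection, keeping the earliest sighting."""
        self.entries[ip] = min(timestamp, self.entries.get(ip, timestamp))

    def merge(self, peer):
        """Pull a peer's observations into local state."""
        for ip, ts in peer.entries.items():
            self.observe(ip, ts)

a, b = BlocklistNode(), BlocklistNode()
a.observe("203.0.113.9", 100)
b.observe("198.51.100.7", 120)
a.merge(b)
b.merge(a)
assert a.entries == b.entries  # both nodes converge to the same blocklist
```

The hard parts a real system would need on top are trust and poisoning resistance (a malicious node reporting innocent IPs), which is exactly where the interesting design work lives.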

I"I dont see anything wrong with people getting paid" is something I see often on discussions. Theres nothing wrong with people who do work and make contributions getting paid. What's wrong is it isnt the open source community on github or the users contributing their precious data getting paid, its a for profit centralized monopoly that controls access to the network which the open source community built for free out of alturism.

The pattern is nearly always the same. The thing that once worked well and that you relied on gets slowly worse with each ToS update, while the pricing inches a dollar higher each quarter and you get less and less control over how you're allowed to use the product. It's pattern recognition.

The only solution is to cut the head off the snake. If I can't fully host all of the components, see the source code of the mechanisms at every layer, and own a local copy of the global database, then it's not really mine.

Again, it's a philosophy thing. It's very easy to look at all that, shrug, and go "whatever, not my problem, I'll just switch if it becomes an issue". But the problem festers the longer it's ignored or enabled for convenience. The community needs to truly own the services it runs on every level, it has to be open, and for-profit bean counters can't be part of the equation, especially for hosting. There are homelab hobbyists out there who will happily eat cents on an electric bill to serve an open service to a community; get 10,000 of them on a truly open source decentralized mesh network and you can accomplish great things without fear of being the product.

[–] SmokeyDope@piefed.social 2 points 8 hours ago

I'm smoking weed about you smoking weed about it.

[–] SmokeyDope@piefed.social 6 points 9 hours ago* (last edited 8 hours ago) (2 children)

If CrowdSec works for you that's great, but it's also a corporate product whose premium sub tier starts at $900/month, not exactly a pure self-hosted solution.

I'm not a hypernerd; I'm still figuring all this out among the myriad of possible solutions with different complexity and setup times. All the self-hosters in my internet circle started adopting Anubis, so I wanted to try it. Anubis was relatively plug-and-play, with prebuilt packages and great install-guide documentation.

Allow me to expand on the problem I was having. It wasn't just that I was getting a knock or two; I was getting 40 knocks every few seconds, scraping every page and searching for a bunch of paths that don't exist but would be exploit points on unsecured production VPS systems.

On a computational level, the constant network activity of bytes from webpages, zip files, and images downloaded by scrapers pollutes traffic. Anubis stops this by trapping them on a landing page that transmits very little data from the server side. By trapping the bot on an Anubis page, which it hammers 40 times on a single open connection before giving up, it reduces overall network activity and data transferred (often billed as a metered thing), as well as the log noise.

And this isn't all or nothing. You don't have to pester all your visitors, only those with sketchy clients. Anubis uses a weighted priority system which grades how legit a browser client is. Most regular connections get through without triggering anything; weird connections get various grades of checks based on how sketchy they are. Some checks don't require proof of work or JavaScript.

On a psychological level it gives me a bit of relief knowing that the bots are getting properly sinkholed, and that I'm punishing and wasting the compute of some asshole trying to find exploits in my system to expand their botnet. And a bit of pride knowing I did this myself on my own hardware, without having to cop out to a corporate product.

It's nice that people of different skill levels and philosophies have options to work with. One tool can often complement another, too. Anubis worked for what I wanted: filtering out bots from wasting network bandwidth and giving me peace of mind where before I had no protection, all while not being noticeable for most people, because I can configure it not to heckle every client every 5 minutes like some sites do.

[–] SmokeyDope@piefed.social 31 points 15 hours ago* (last edited 15 hours ago) (1 children)

Something that hasn't been mentioned much in discussions about Anubis is that it has a graded tier system for how sketchy a client is, changing the kind of challenge based on a weighted priority system.

The default bot policy it comes with passes squeaky-clean regular clients straight through; slightly weighted clients/IPs get the meta refresh; only at the moderate-suspicion level does the JavaScript proof of work kick in. The bot policy and weight triggers for these levels, the challenge action, and the duration of a client's validity are all configurable.
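To make the grading concrete, here's a toy sketch of how weight thresholds might map to challenge types. The thresholds and tier names are invented for illustration, not Anubis's actual defaults:

```python
# Illustrative only: maps a client's suspicion weight to a challenge tier,
# mimicking the graded policy described above. Thresholds are made up,
# not Anubis's real values.

def pick_challenge(weight):
    if weight <= 0:
        return "allow"          # squeaky-clean clients pass straight through
    elif weight < 10:
        return "metarefresh"    # light check, no JavaScript required
    else:
        return "proof-of-work"  # moderate suspicion and up

assert pick_challenge(0) == "allow"
assert pick_challenge(5) == "metarefresh"
assert pick_challenge(20) == "proof-of-work"
```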

It seems to me that the sites that heavy-hand the proof of work for every client, with validity that lasts only 5 minutes, are the ones giving Anubis a bad rap. The default bot policy settings Anubis ships with don't trigger PoW on the regular Firefox Android clients I've tried, including hardened IronFox; meanwhile other sites show the finger-wag on every connection no matter what.

It's understandable why some choose strict policies, but they give the impression this is the only way it should be done, which is overkill. I'm glad there are config options to mitigate the impact on the normal user experience.

[–] SmokeyDope@piefed.social 2 points 16 hours ago

What use cases does perplexity do that Claude doesn't for you?

 

It's winter where I live, and winter means cold. Currently it's below freezing outside. My off-grid heating is limited to the point that I sometimes have to layer up to be comfortable.

I discovered a technique that helps me out a lot this season: a 12V car blanket wrapped around your shoulders like a poncho, with a heavy coat layered over it. It's a really effective way to insulate both yourself and the blanket.

You might be wondering why a 12V blanket instead of a household electric blanket. For off-grid power, a 12V blanket is much easier on limited battery systems. Also, 12V blankets don't have 8-hour safety shutoffs; they stay on forever.

In the picture are my polar-grade Baffin booties, the most comfortable cold-weather slippies you can get. I highly recommend them if you get cold feet.

The only other tip I have is good layering. Wool socks, multiple thermal underwear layers, hat and gloves. Each piece of clothing added helps even out the difference. Electric heat just amplifies the effect and means you don't need as much.

Put thermally insulating blankets on your furniture. Wool blankets are good, and duvet-type puffy blankets are useful. Consider making an insulated hot table.

[–] SmokeyDope@piefed.social 53 points 18 hours ago* (last edited 18 hours ago) (3 children)

There's a compute option that doesn't require JavaScript. The responsibility lies with site owners to configure it properly, IMO, though you can make the argument that it's not the default, I guess.

https://anubis.techaro.lol/docs/admin/configuration/challenges/metarefresh

From the docs on the Meta Refresh method:

Meta Refresh (No JavaScript)

The metarefresh challenge sends a browser a much simpler challenge that makes it refresh the page after a set period of time. This enables clients to pass challenges without executing JavaScript.

To use it in your Anubis configuration:

# Generic catchall rule
- name: generic-browser
  user_agent_regex: >-
    Mozilla|Opera
  action: CHALLENGE
  challenge:
    difficulty: 1 # Number of seconds to wait before refreshing the page
    algorithm: metarefresh # Specify a non-JS challenge method

This is not enabled by default while this method is tested and its false positive rate is ascertained. Many modern scrapers use headless Google Chrome, so this will have a much higher false positive rate.
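As background, a meta refresh challenge rides on a plain HTML feature rather than anything exotic; the served page is something in this spirit (illustrative markup, not Anubis's actual page):

```html
<!-- After 1 second the browser re-requests the page with a challenge
     token, proving it honors basic HTML semantics without running JS.
     The URL and token parameter here are hypothetical. -->
<meta http-equiv="refresh" content="1; url=/challenge/verify?token=...">
```

Trivial for a real browser, but a naive scraper that just slurps the first response and moves on never follows the refresh.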

[–] SmokeyDope@piefed.social 5 points 18 hours ago

Security issues are always a concern; the question is how much of one. Looking at them, they seem at most to be ways to circumvent the Anubis redirect system to get to your page using very specific exploits. These are marked as low to moderate priority, and I don't see anything that implies system-level access, which is the big concern. Obviously do what you feel is best, but IMO it's not worth sweating about. The nice thing about open source projects is that anyone can look through and fix them; if this gets more popular you can expect bug bounties and professional pen-testing submissions.

[–] SmokeyDope@piefed.social 21 points 18 hours ago (1 children)

You know, the thing is that they know the character is a problem/annoyance; that's how they grease the wheels on selling subscription access to a commercial version with different branding.

https://anubis.techaro.lol/docs/admin/botstopper/

Pricing from the site:

Commercial support and an unbranded version

If you want to use Anubis but organizational policies prevent you from using the branding that the open source project ships, we offer a commercial version of Anubis named BotStopper. BotStopper builds off of the open source core of Anubis and offers organizations more control over the branding, including but not limited to:

  • Custom images for different states of the challenge process (in process, success, failure)
  • Custom CSS and fonts
  • Custom titles for the challenge and error pages
  • "Anubis" replaced with "BotStopper" across the UI
  • A private bug tracker for issues

In the near future this will expand to:

  • A private challenge implementation that does advanced fingerprinting to check if the client is a genuine browser or not
  • Advanced fingerprinting via Thoth-based advanced checks

In order to sign up for BotStopper, please do one of the following:

  • Sign up on GitHub Sponsors at the $50 per month tier or higher
  • Email sales@techaro.lol with your requirements for invoicing, please note that custom invoicing will cost more than using GitHub Sponsors for understandable overhead reasons

I have to respect the play, tbh, it's clever. Absolutely the kind of greasy shit play that Julian from Trailer Park Boys would do if he were an open source developer.

 

I got into the self-hosting scene this year when I wanted to start up my own website run on an old recycled ThinkPad. A lot of time was spent learning about ufw, reverse proxies, header security hardening, and fail2ban.

Despite all that, I still had a problem with bots knocking on my ports and spamming my logs. I tried some hackery getting fail2ban to read Caddy logs, but that didn't work for me. I nearly considered giving up and going with Cloudflare like half the internet does, but my stubbornness for open source self-hosting, plus the recent Cloudflare outages this year, encouraged trying alternatives.

Coinciding with that has been increased exposure to this thing in the places I frequent, like Codeberg. This is Anubis, a proxy-type firewall that forces the browser client to do a proof-of-work security check, plus some other clever things to stop bots from knocking. I got interested and started thinking about beefing up security.

I'm here to tell you to try it if you have a public-facing site and want to break away from Cloudflare. It was VERY easy to install and configure with a Caddyfile on a Debian distro with systemctl. Within an hour it had filtered multiple bots, and so far the knocks seem to have slowed down.

https://anubis.techaro.lol/

My botspam woes have seemingly been seriously mitigated, if not completely eradicated. I'm very happy with tonight's little security upgrade project, which took no more than an hour of my time to install and read through the documentation. Current chain: Caddy reverse proxy -> Anubis -> services.

A good place to start for installation is here:

https://anubis.techaro.lol/docs/admin/native-install/

[–] SmokeyDope@piefed.social 2 points 1 day ago (1 children)

Hi Frosty o/ it's been a while! Sorry I don't comment more often, but I see every stoner meme you post here lol. Hope you're doing well, and if you celebrate turkey day I hope it went well :)

[–] SmokeyDope@piefed.social 12 points 1 day ago (1 children)

The only reason the first one was any good was because they copied the homework of Unreal. All the worldbuilding after the first has been shit, and people only watched the first because of the 3D, which was a massive flex at the time.

Contributed by Rúben Alvim on 18.07.2004:

One of director James Cameron's pet projects after Titanic was an epic sci-fi extravaganza called Avatar, much hyped in Hollywood circles at the time and poised to redefine the notion of a truly alien world on the big screen.

The project fell apart some years ago, but the scriptment (a hybrid between a script and a treatment) by James Cameron still exists. Interestingly, you can find quite a few similarities between it and Unreal:


Both feature a basic plot premise where, by virtue of circumstances mostly beyond his control, a reluctant hero becomes the saviour of the native race of an alien planet forced to mine their land for ore of utmost importance to an invading race coming from the skies. In both cases the saviour is seen by the natives as someone who also came from the skies and is thus initially met with some alarm or distrust only to be later hailed as a pseudo-messiah.


The native race is called "Na'vi" in Avatar and "Nali" in Unreal. The physical description of the Na'vi by Cameron can be visualised as basically a cross between the Nalis' tall, lean, slender bodies and the IceSkaarjs' blueish skin colour patterns, facial features, ponytail-like dreadlocks and caudal appendages.


The Nali in Unreal worship goddess Vandora. The home planet of the Na'vi in Avatar (which the Na'vi worship as a goddess entity) is named Pandora.


In Avatar, one of the most dazzling alien settings described is a huge set of sky mountains, "like floating islands among the clouds". One of the most memorable vistas in Unreal is Na Pali, thousands of miles up in the cloudy sky amidst a host of floating mountains. The main sky mountain range in Avatar is called "Hallelujah Mountains". The main Unreal level set in Na Pali is called "Na Pali Haven". Both include beautiful visual references to waterfalls streaming down the cliffs and dissolving into the clouds below.


The Earth ship in Avatar is called "ISV-Prometheus". One of the levels in Unreal takes place in the wreck of a Terran ship called "ISV-Kran". Even more striking, in the expansion pack Return to Na Pali, the crashed ship the player is asked to salvage is called "Prometheus".


One of the deadly examples of local fauna in Unreal is the Manta, essentially a flying manta-ray. In Avatar, one of the most lethal aerial creatures is the Bansheeray, basically a flying manta-ray. The expansion Return to Na Pali even features a Giant Manta, while in Avatar one of the most formidable predators is a giant Bansheeray, which Cameron dubbed "Great Leonopteryx".


In the two stories (especially Return to Na Pali, on Unreal's end), a plot point arises from the fact the precious ore behind the invasion of the planet ("tarydium" in Unreal, "unobtanium" in Avatar) causes problems in the scanners.


Unreal was in development for several years before its release in 1998. The Avatar scriptment was probably finished as early as 1996-97. Bearing all the above in mind, the temptation to start wondering about further suspicious parallels may be quite strong, but in spite of these similarities the two titles have little else in common, and many aspects actually veer off in wildly different directions. Even so, the coinciding factors can make for an interesting minutiae comparison.

 

I wanted to share some short 5-second clips in a post, and I noticed that GIFs are too big. What are my options? What file formats do Lemmy and PieFed accept for embedded videos?

[–] SmokeyDope@piefed.social 3 points 5 days ago* (last edited 5 days ago) (3 children)

Plot twist: there are still sewing needles and thimbles in there.

[–] SmokeyDope@piefed.social 10 points 6 days ago (2 children)

The electrical fire thing is mostly because there are people in this world who will absolutely try running two 1500W space heaters on a single multi-plug extension. It's not the quantity of plugs, it's how much power is flowing through them, as well as the quality of the cabling used. From the image it looks like a couple of phone chargers, which isn't the worst thing in the world.
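The arithmetic behind that warning is simple; a quick sketch (assuming 120 V circuits and a 15 A breaker, typical for North America):

```python
# Why two space heaters on one strip is a fire risk: current = power / voltage.

def amps(watts, volts=120):
    return watts / volts

one_heater = amps(1500)   # 12.5 A -- already most of a 15 A circuit
two_heaters = amps(3000)  # 25.0 A -- well past the breaker's rating

assert one_heater == 12.5
assert two_heaters == 25.0
assert two_heaters > 15  # trips the breaker, or overheats cheap cabling if it doesn't
```

A couple of phone chargers, by contrast, draw a few watts each, which is why they barely register on the same math.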

 

I got a spore syringe with shrooms of my choice and some Uncle Ben's rice bags a while ago. One guy I watch shows off a super easy method of injecting into the bags and fruiting directly from there.

https://www.youtube.com/watch?v=1cZe1Og2tro

But then I go on the Uncle Ben's reddit and read through shroomscout's guide, with dub tubs, and putting space heaters with thermostats in a closet, and mixing coconut coir with spawn, and it starts to make me feel anxious, like maybe this isn't for me. I just want some mushrooms, man; why does this have to be a 20-step process from inoculation to fruiting? And then everyone hypes up contamination, like unless you make a still air box it's guaranteed to get contaminated, and how the Uncle Ben's ready rice I got is too watery compared to alternatives, and how fruiting from the bag gives so much less yield than the tub-spawning method, and on and on. Am I reading too deep into it, and should I just shove the syringe in, tape over it, and see what happens? Is it okay if I just go with the first guy's method and not worry about the tubs and maintaining water droplets and all that?

 

Never worry about commie crap like public citations getting in the way of misinformation rhetoric again! (Because the LLM trained on fuckin twitter made it up lmao)

On the flipside, for an actually cool, non-cucked integration of LLMs with Wikipedia, check out this post on LocalLLaMA, where the person shares their project using a local private LLM to search through a local Kiwix server instance of Wikipedia: https://piefed.social/post/1333130

 

A while ago I was having fun theorycrafting, seeing if there were potential connections relating knot geometry/formation to wavefunction field excitation, as well as better understanding the computational complexity classes of knot formation.

During my research I came across an interesting class of fractal "wild knots" that recursively cross into each other infinitely. The wild knot shown in the post displays Cantor-set-class bifurcation.


Heres a 3d animation of a knot known as the ' Alexander Horned Sphere '


These knots are pretty cool to look at.
