
Programmer Humor

[–] sunoc@sh.itjust.works 2 points 1 week ago (2 children)

Sure thing, but what is a monad anyway?

[–] gnutrino@programming.dev 1 points 1 week ago

It's a monoid in the category of endofunctors. Obviously.

[–] Pencilnoob@lemmy.world 0 points 1 week ago* (last edited 1 week ago) (1 children)

In practical terms, it's most commonly a code pattern where any function that interacts with something outside your code (database, filesystem, external API) has to be "given permission" to do so, so all the external interactions are accounted for. You have to pass around something like a permission token to allow a function to interact with anything external. Kind of like dependency injection on steroids.

This allows the compiler to enhance the code in ways it otherwise couldn't. It also prevents many kinds of bugs. However, it's quite a bit of extra hassle, so it's frustrating if you're not used to it. The way you pass around the "permission" is unusual, so it gives a lot of people a headache at first.
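
In Haskell, the "permission" shows up directly in a function's type. A minimal sketch of the idea (file name made up for illustration):

```haskell
import Data.Char (toUpper)

-- IO in the type is the "permission" to touch the outside world.
readGreeting :: IO String
readGreeting = readFile "greeting.txt"

-- No IO in the type, no permission: this function provably can't
-- read files, hit a database, or call an external API.
shout :: String -> String
shout = map toUpper

main :: IO ()
main = do
  greeting <- readGreeting    -- external interaction, accounted for
  putStrLn (shout greeting)   -- pure computation sandwiched between
```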

This is also used for internal "permissions", like grabbing the first element of an array: you only get permission if the array has at least one thing inside, and if it's empty, you don't. As such, there's a lot of code around checking for permission. Languages like Haskell or Unison have a lot of tricks that make it much easier than you'd think, but you still have to account for it. That's where all the weird functions in Haskell like fmap and >>= come from: they're helpers that make it easier to pass those "permissions" around.
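
Here's roughly what that "first element" permission looks like with Maybe (a minimal sketch):

```haskell
-- You only get a value back if the list has at least one thing inside.
safeHead :: [a] -> Maybe a
safeHead []    = Nothing
safeHead (x:_) = Just x

-- fmap lets you work "inside" the permission wrapper:
--   fmap (* 2) (safeHead [1, 2, 3])      == Just 2
-- >>= chains steps that each need their own permission:
--   safeHead [[1, 2], [3]] >>= safeHead  == Just 1
```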

What's the point, you ask? There are all kinds of powerful performance optimizations available when you know a certain block of code never touches the outside world: you can split execution between different CPU cores, and so on. This is still in its infancy, but new languages like Unison are breaking incredible ground here. As this is developed further, it will be much easier to build software that runs across multiple cores, or even multiple machines in distributed swarms, without having to build microservice hell. It'll all just be one program, but it runs across as many machines as needed. Monads are just one of the first features that needed to exist to allow these later features.
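
For a taste of the multi-core part, a minimal sketch using Haskell's parallel package (the workload is made up): because the function is pure, farming calls out to different cores can't change the program's behavior.

```haskell
import Control.Parallel.Strategies (parMap, rdeepseq)

-- Pure: no permission to touch the outside world, so the runtime is
-- free to evaluate calls on whatever core it likes.
expensive :: Int -> Int
expensive n = sum [1 .. n * 10000]

-- Evaluate all the calls in parallel across available cores.
-- (compile with -threaded and run with +RTS -N to actually use them)
results :: [Int]
results = parMap rdeepseq expensive [1 .. 100]

main :: IO ()
main = print (sum results)
```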

There's a whole math background to it, but I'm much more a "get things done" engineer than a "show me the original math that inspired this language feature" engineer, so I think of it more practically. Same way I explain functions as a way to group a bunch of related actions, not as an implementation of the lambda calculus. I think people who start talking about burritos and endofunctors are just hazing.

[–] sacredfire@programming.dev 0 points 1 week ago (1 children)

I don’t know if this is correct, but if it is, this is the best answer to this question I’ve ever seen.

[–] Pencilnoob@lemmy.world 0 points 1 week ago (1 children)

I'm sure someone will be like "um akchuly" to my explanation. But for me it's good enough to think of it that way.

I've worked in Haskell and F# for a decade, and added some of the original code to the Unison compiler, so I'm at least passingly familiar with the subject. Enough that I've had to explain it to new hires a bunch of times to get them up to speed. I find it easier to learn something when I'm given a practical use for it and shown how it solves that problem.

[–] harryprayiv@infosec.pub 3 points 1 week ago* (last edited 1 week ago)

Lovely response! Very cool to see Unison mentioned. Haskell and Purescript are my daily drivers but I have a huge crush on it even though it intimidates me.

P.S. Unison doesn’t have monads. They are replaced by “abilities”.

[–] Cargon@lemmy.ml 1 points 1 week ago

my_balls |> ligma() |> gotem(laugh=TRUE)

[–] Samskara@sh.itjust.works 0 points 1 week ago (1 children)

Functional programmers still pretending side effects and real-world applications don’t exist.

[–] expr@programming.dev 1 points 1 week ago (3 children)

As a senior engineer writing Haskell professionally, this just isn't really true. We just push side effects to the boundaries of the system and do as much logic and computation in pure functions.

It's basically just about minimizing external touch points and making your code easier to test and reason about. Which, incidentally, is also good design in non-FP languages. FP programmers are just generally more principled about it.
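
A minimal sketch of that shape (file name and logic made up): pure core, thin IO shell.

```haskell
-- Pure core: all the logic lives here, easy to test, no IO in sight.
summarize :: [Int] -> String
summarize xs = "count=" ++ show (length xs) ++ ", total=" ++ show (sum xs)

-- Boundary: side effects only at the edges, pure function in the middle.
main :: IO ()
main = do
  contents <- readFile "numbers.txt"        -- side effect (input)
  let numbers = map read (lines contents)   -- pure parsing
  putStrLn (summarize numbers)              -- side effect (output)
```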

[–] Samskara@sh.itjust.works 1 points 3 days ago

Reasoning about memory use, for example, is difficult with FP.

Using pure functions is a good idea for non FP languages as well.

[–] CanadaPlus@lemmy.sdf.org 1 points 1 week ago* (last edited 1 week ago) (2 children)

It heavily depends on the application, right? Haskell is life for algorithmically generating or analysing data, but I'm not really convinced by the ways available in it for interacting with users or outside systems. After a while it pretty much feels like you're writing imperative code again, just in the form of monads. Which is actually worse from a locality of ~~reference~~ behavior perspective.

[–] dreadbeef@lemmy.dbzer0.com 1 points 1 week ago* (last edited 1 week ago)

Not really, it's just good practice. You write your application in layers, and the outer layer/boundary is where you want your side effects and that outer layer takes the crazy effectful world and turns it sane with nice data types and type classes and whatnot and then your inner layers operate on that. Data goes down the layers then back up, at least in my experience with functional projects in OCaml, F#, Clojure, and Haskell.

The real sauce is immutability by default and hard-to-do mutation. I love refs in OCaml and Clojure; explicit refs are so much better than ambient, unrestricted mutation. Most of the benefits of FP are that and algebraic data types, in that order, imo.
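
Haskell's rough equivalent of those refs would be IORef: mutation still exists, but it's opt-in and visible in the types (a minimal sketch):

```haskell
import Data.IORef

main :: IO ()
main = do
  counter <- newIORef (0 :: Int)  -- the one mutable cell, explicitly created
  modifyIORef' counter (+ 1)      -- mutation is explicit, never ambient
  readIORef counter >>= print     -- prints 1
```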

[–] expr@programming.dev 0 points 1 week ago* (last edited 1 week ago) (2 children)

I'm not sure what you mean by "locality of reference". I assume you mean something other than the traditional meaning regarding how processors access memory?

Anyway, it's often been said (half-jokingly) that Haskell is a nicer imperative language than imperative languages. Haskell gives you control over what executing an "imperative" program actually means in a way that imperative languages don't.

To give a concrete example: we have a custom monad type at work that I'm simply going to call Transaction (it has a different name in reality). What it does is allow you to execute database calls inside of the same transaction (and can be arbitrarily composed with other code blocks of type Transaction while still being guaranteed to be inside of the same transaction), and any other side effects you write inside the Transaction code block are actually collected and deferred until after the transaction successfully commits, and are otherwise discarded. Very useful, and not something that's very easy to implement in imperative languages. In Haskell, it's maybe a dozen lines of code and a few small helper functions.
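
I obviously don't know their real implementation, but a minimal sketch of the idea might look something like this, using mtl and a hypothetical Connection type: database work runs inside the transaction, and other side effects are collected and only fired after commit.

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
import Control.Monad.Reader (ReaderT, runReaderT)
import Control.Monad.Writer (WriterT, runWriterT, tell)

data Connection = Connection  -- hypothetical stand-in for a DB connection

-- Database work, plus a queue of effects deferred until after commit.
newtype Transaction a =
  Transaction (ReaderT Connection (WriterT [IO ()] IO) a)
  deriving (Functor, Applicative, Monad)

-- Queue a side effect; it only runs if the transaction commits.
afterCommit :: IO () -> Transaction ()
afterCommit eff = Transaction (tell [eff])

-- Run everything in one transaction, then fire the deferred effects.
runTransaction :: Connection -> Transaction a -> IO a
runTransaction conn (Transaction body) = do
  -- (real code would BEGIN here, and discard `deferred` on rollback)
  (result, deferred) <- runWriterT (runReaderT body conn)
  -- (real code would COMMIT here)
  sequence_ deferred
  pure result
```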

It also has a type system that is far, far more powerful than what mainstream imperative programming languages are capable of. For example, our API specifications are described entirely using types (using the servant library), which allows us to do things like statically generate API docs, type-check our API implementation against the specification (so our API handlers are statically guaranteed to return the response types they say they do), automatically generate type-safe API clients, and more.
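
For a flavor of what servant's type-level API specs look like (a toy endpoint, not their actual API):

```haskell
{-# LANGUAGE DataKinds, TypeOperators #-}
import Servant

data User = User { userId :: Int, userName :: String }

-- The whole API spec is one type. Handlers are checked against it at
-- compile time, and docs and clients can be generated from it.
-- (actually serving it would also need a ToJSON instance for User)
type UserAPI =
       "users" :> Get '[JSON] [User]                    -- GET /users
  :<|> "users" :> Capture "id" Int :> Get '[JSON] User  -- GET /users/:id
```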

We have about half a million lines of Haskell in production serving as a web API backend powering our entire platform, including a mobile app, web app, and integrations with many third parties. It's served us very well.

[–] CanadaPlus@lemmy.sdf.org 1 points 1 week ago

I’m not sure what you mean by “locality of reference”. I assume you mean something other than the traditional meaning regarding how processors access memory?

Shit! Sorry, got my wires crossed; I actually meant locality of behavior. Basically, if you're passing a monad around a bunch without sugar, you can't easily tell what's in it after a while. Or at least I assume so; I've never written anything big in Haskell, just tons of little things.

To give a concrete example:

Yeah, that makes tons of sense. It sounds like Transaction is doing what a string might in another language, but just way more elegantly, which fits into the data generation kind of application. I have no idea how you'd code a game or embedded real-time system in a non-ugly way, though.

It also has a type system that is far, far more powerful than what mainstream imperative programming languages are capable of.

Absolutely. Usually the type system is just kind of what the person who wrote the language came up with. The Haskell system by contrast feels maximally precise and clear; it's probably getting close to the best way to do it.

[–] victorz@lemmy.world 1 points 1 week ago

Excellent write-up. People who complain about Haskell and purely functional languages just don't understand it, I think. Take me for example: I tried learning Haskell many years ago, and while I picked up so many new and incredibly useful concepts from that short adventure, concepts I use every day in my career, I just couldn't wrap my head around the more abstract ones, like monads. The feeling I got was that Haskell is a difficult language, but it's probably the terminology and abstract mathematical concepts that were the real issue for me, because the syntax isn't really that complicated. Especially the way whitespace is used to call functions. I'm really sick of all the parentheses in other languages.

But for those who do understand all about functional programming, it seems to really enrich the way they write and maintain code, from what I've seen. People who dog on it just don't understand it (including me). Of course it's hard to maintain something you don't understand. But if you do understand it, it's easy to maintain. 🤷‍♂️ Seems logical.

What next? Where is the line drawn for what kind of code we can write? Why introduce any more useful concepts in programming at all, if we risk losing maintainability because some devs won't learn the new concepts?

Life means change. Adapt. Learn new things. Expand the mind. Learn how to do things in a good way, and then do things that way. Why stagnate just because we don't understand something? Better to learn the new thing and understand the better way than to dumb it down to a worse state just so we understand it.

Bah.

I've never had the chance to use a functional language in my work, but I have tried to use principles like these.

Once I had a particularly badly written Python codebase. It had all kinds of duplicated logic and data all over the place, and I was asked to add an algorithm to it. So I found the point where my algorithm had to go, figured out what input data I needed and what output data I had to return, and then wrote all the algorithm's logic in one clean, side-effect-free module. All the complicated processing and logic happened internally without side effects, and it didn't have to interact with the larger codebase at all. It made understanding what I had to do much easier and relieved the burden of having to know what was going on outside.

These are the things functional languages teach you to do: to define boundaries, and do sane things inside those boundaries. Everything else that's going on outside is someone else's problem.

I'm not saying that functional programming is the only way you can learn something like this, but what made it click for me was understanding how Haskell provides the IO monad, but recommends that you keep that functionality at as high a level as possible while keeping the lower-level internals pure and functional.