this post was submitted on 22 Jun 2025
44 points (97.8% liked)
you are viewing a single comment's thread
Stopped reading after the first caption, because it says:
The model doesn't add "further" details. It composites a pretty little picture and puts it into the image. It's not what the black hole actually looks like. The "details" are not "further details", they're fabricated. Nonsense.
To clarify: they used data that had been deemed unusable because it was too noisy for conventional algorithms. So the neural net was trained on more than just a couple of images; it was trained on all the raw data available. Even if the results aren't that accurate, it's a legitimate way of approaching this type of problem. Scientifically speaking, the results may not be accurate, but they could give us a new perspective on the problem.
Kalman filters can be used to filter noise from multivariate data, and the update is just a small set of matrix operations, so there's no risk of hallucinations.
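To make that concrete, here's a minimal scalar Kalman filter sketch in Python. It's a toy with made-up noise variances, not anything resembling the EHT pipeline, but it shows the point: every step is a deterministic arithmetic update, nothing is generated or invented.

```python
import numpy as np

def kalman_filter_1d(measurements, process_var=1e-4, meas_var=0.25):
    """Denoise a scalar signal with a basic Kalman filter (toy parameters)."""
    x_est = measurements[0]   # initial state estimate
    p_est = 1.0               # initial estimate uncertainty
    out = []
    for z in measurements:
        # Predict: state assumed roughly constant, only uncertainty grows
        p_pred = p_est + process_var
        # Update: blend prediction and measurement via the Kalman gain
        k = p_pred / (p_pred + meas_var)
        x_est = x_est + k * (z - x_est)
        p_est = (1 - k) * p_pred
        out.append(x_est)
    return np.array(out)

# Usage: recover a constant value buried in Gaussian noise
true_value = 1.5
noisy = true_value + np.random.normal(0, 0.5, size=200)
smoothed = kalman_filter_1d(noisy)
print(smoothed[-1])  # should end up close to 1.5
```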
Neural networks are notoriously bad at dealing with small data sets. They "overfit" the data and extrapolate badly as soon as new data falls even slightly outside the range of the training set. The way to make them useful is to have huge amounts of data: train the model on one portion of it and hold out the rest for validation.
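A toy illustration of that failure mode (hypothetical numbers, with plain NumPy polynomial fits standing in for a neural net): give a high-capacity model roughly as many parameters as data points, then ask it for a prediction just outside the training range.

```python
import numpy as np

# Fit a high-degree polynomial to a handful of noisy samples of a simple
# underlying function, then evaluate it slightly outside the training range.
rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 8)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, size=x_train.size)

overfit = np.polynomial.Polynomial.fit(x_train, y_train, deg=7)   # degree ~ number of points
sensible = np.polynomial.Polynomial.fit(x_train, y_train, deg=3)  # lower-capacity model

x_test = 1.15  # just outside the training interval
print("true value     :", np.sin(2 * np.pi * x_test))
print("degree-7 model :", overfit(x_test))   # typically far off the mark
print("degree-3 model :", sensible(x_test))  # usually stays closer to the truth
```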
Fully agree, overfitting might be an issue. We don't know how much training data was available, only that it's more than the first assumption suggests. It might still not be enough.
There aren't a lot of high resolution images of black holes. I know of one. So not a lot.
Oh, they don't train on image data. They train on raw sensor data, and, as mentioned earlier, they used all the data that was too noisy to produce images from.
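Purely as a hedged sketch of what "training on raw sensor data" could look like in miniature (synthetic 1-D signals and an off-the-shelf scikit-learn MLP; nothing here resembles the actual EHT model): the network learns a mapping from noisy raw segments to clean ones, without any image reconstruction in between.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical illustration: denoise short segments of a raw 1-D "sensor"
# signal, rather than working on already-reconstructed images.
rng = np.random.default_rng(42)
n_samples, seg_len = 5000, 32

t = np.linspace(0, 1, seg_len)
clean = np.sin(2 * np.pi * rng.uniform(1, 5, (n_samples, 1)) * t)  # random-frequency tones
noisy = clean + rng.normal(0, 0.8, clean.shape)                    # heavy measurement noise

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(noisy, clean)  # map noisy raw segments -> clean segments

test_clean = np.sin(2 * np.pi * 3.0 * t)
test_noisy = test_clean + rng.normal(0, 0.8, seg_len)
denoised = model.predict(test_noisy.reshape(1, -1))[0]
print("RMS error before:", np.sqrt(np.mean((test_noisy - test_clean) ** 2)))
print("RMS error after: ", np.sqrt(np.mean((denoised - test_clean) ** 2)))  # typically lower
```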
Of that one mission, right? Until you have thousands of these data sets, it's the wrong approach.
The EHT produced about 9 petabytes of raw data in 2017 and 2018. After filtering, only about 100 terabytes were left, and after final calibration about 150 gigabytes were used to generate the images.
So clearly a lot of data was thrown away, as it was not usable for generating images. However, a machine learning model might be able to use this data.
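Taking the figures above at face value (and assuming decimal units), the surviving fractions are easy to work out:

```python
# Rough fractions of EHT data surviving each processing stage, using the
# approximate sizes quoted above.
raw = 9e15          # ~9 PB of raw recordings (2017 + 2018)
filtered = 100e12   # ~100 TB after filtering
calibrated = 150e9  # ~150 GB after final calibration

print(f"kept after filtering:   {filtered / raw:.3%}")      # ~1.1%
print(f"kept after calibration: {calibrated / raw:.5%}")    # ~0.0017%
print(f"discarded overall:      {1 - calibrated / raw:.4%}")
```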
It's still only one black hole. It's one huge datapoint.