And an additional response, because I didn't fully answer your question. LLMs don't reason. They traverse a data structure based on weightings derived from occurrence frequencies in their training content. Loosely speaking, it's a graph (https://en.wikipedia.org/wiki/Graph_(abstract_data_type)). It looks like reasoning because the LLM is iterating over material that has already been reasoned out. An LLM can't reason through a problem it hasn't previously seen, unlike, say, a squirrel.
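To make the "weighted graph traversal" framing concrete, here's a toy sketch: a first-order Markov chain over words, where edge weights are co-occurrence counts from the training text and generation is just a weighted random walk. This is an illustration of the idea, not a claim about transformer internals; the corpus and function names are made up.

```python
import random
from collections import defaultdict

def build_graph(text):
    """Edge weight = how often word b follows word a in the training text."""
    graph = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for a, b in zip(words, words[1:]):
        graph[a][b] += 1
    return graph

def walk(graph, start, steps, rng):
    """Generate by repeatedly taking a frequency-weighted step in the graph."""
    out = [start]
    node = start
    for _ in range(steps):
        if node not in graph:  # dead end: no outgoing edges
            break
        nexts = list(graph[node].keys())
        weights = list(graph[node].values())
        node = rng.choices(nexts, weights=weights)[0]
        out.append(node)
    return " ".join(out)

corpus = "the cat sat on the mat the cat ate the rat"
g = build_graph(corpus)
print(walk(g, "the", 5, random.Random(0)))
```

The output is locally plausible word-following, with zero understanding of cats or rats, which is roughly the point being argued above, just at a vastly smaller scale.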
JayleneSlide
By the same logic, raytracing is ancient tech that should be abandoned.
Nice straw man argument you have there.
I'll restate, since my point didn't seem to come across. All of the "AI" garbage getting jammed into everything is merely scaled up from what came before. Scaling up is not advancement. A possible analogy is American automobiles from the late 60s through the 90s: just add more cubic inches and a bigger chassis! More power from more displacement does not mean more advanced. Continuing that analogy, a 2.0L engine cranking out 400 ft-lb and 500 HP while averaging 28 MPG is advanced engineering. Right now, the software and hardware running LLMs are just MOAR cubic inches. We haven't come up with more advanced data structures.
These types of solutions have a place and can produce something adjacent to the desired results. We make great use of expert systems constantly within narrow domains. Camera autofocus systems leap to mind. When "fuzzy logic" autofocus was introduced, it was a boon to photography. Another example of narrow-ish domain ML software is medical decision support software, which I developed at a previous job in the early 2000s. There was nothing advanced about most of it; the data structures used were developed in the 50s by a medical doctor from Columbia University (Larry Weed: https://en.wikipedia.org/wiki/Lawrence_Weed). The advanced part was the computer language he also developed for quantifying medical knowledge. Any computer with enough storage, RAM, and the hardware to quickly traverse those data structures can be made to appear advanced when fed enough collated data, i.e., turning data into information.
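For a sense of how small those "fuzzy logic" systems really are, here's a minimal sketch in their spirit: crisp inputs become graded memberships, a couple of rules combine them, and a weighted average defuzzifies the result. All the names and thresholds here are invented for illustration; this isn't any real camera's firmware.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 at a and c, peaking at 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def focus_adjustment(sharpness):
    """Map a sharpness reading in [0, 1] to a lens step size in [0, 1]."""
    blurry = tri(sharpness, -0.5, 0.0, 0.6)  # membership in "image is blurry"
    sharp = tri(sharpness, 0.4, 1.0, 1.5)    # membership in "image is sharp"
    # Rules: blurry -> take a big step (1.0); sharp -> take a tiny step (0.05).
    num = blurry * 1.0 + sharp * 0.05
    den = blurry + sharp
    return num / den if den else 0.0

print(focus_adjustment(0.1))  # mostly blurry: large step
print(focus_adjustment(0.9))  # mostly sharp: small step
```

A handful of hand-tuned membership functions and rules like these, running on an 8-bit microcontroller, was enough to feel like magic in a late-80s camera body.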
Since I never had the chance to try it out myself: how well did your neural network and LLMs reason back in the day? IMO that's the most impressive part, not that it can write.
It was slick for the time. It obviously wasn't an LLM per se, but both were a form of LM. The OCR and auto-suggest for DOS were pretty shit-hot for an x386. The two together inspired one of my huge projects in engineering school: a whole-book scanner* that removed page curl and gutter shadow, and then generated a text-under-image PDF. By training the software on a large body of varied physical books and retentively combing over the OCR output and retraining, the results approached what one would see in the modern suite that now comes with your scanner. I only achieved my results because I had unfettered use of a quad Xeon beast in the college library where I worked. That software drove the early digitization processes for this (which I also built): http://digitallib.oit.edu/digital/collection/kwl/search
*in contrast to most book scanning at the time, which required the book to be cut apart and the pages run through a sheet-fed scanner; lots of books couldn't be damaged like that.
Edit: a word
Wait, isn't this how everyone does WFH lunch?
No, no they're not. These are just repackaged and scaled-up neural nets. Anyone remember those? The concept and good chunks of the math are over 200 years old. Hell, there was two-layer neural-net software in the early 90s that ran on my x386: specifically, Neural Network PC Tools by Russell Eberhart. The DIY implementation of OCR in that book is a great example of a roll-your-own neural net. What we have today, much like most modern technology, is just lots MORE of the same. Back in the DOS days, there was even an ML application that offered contextual suggestions for mistyped command-line entries.
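A two-layer net of that era fits in a page of code, which is rather the point. Here's a from-scratch sketch in that spirit: sigmoid units trained by plain backprop, no libraries, the kind of thing a 386 could grind through. It learns XOR rather than OCR, and it's my illustration, not code from Eberhart's book.

```python
import math
import random

random.seed(42)

def sig(x):
    return 1.0 / (1.0 + math.exp(-x))

# Weights: 2 inputs -> 2 hidden units -> 1 output, last slot is the bias.
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_o = [random.uniform(-1, 1) for _ in range(3)]

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

for _ in range(20000):
    x, t = random.choice(data)
    # Forward pass.
    h = [sig(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    y = sig(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
    # Backprop: output delta, then hidden deltas (sigmoid derivative s*(1-s)).
    d_o = (y - t) * y * (1 - y)
    d_h = [d_o * w_o[i] * h[i] * (1 - h[i]) for i in range(2)]
    lr = 0.5
    for i in range(2):
        w_o[i] -= lr * d_o * h[i]
    w_o[2] -= lr * d_o
    for i in range(2):
        w_h[i][0] -= lr * d_h[i] * x[0]
        w_h[i][1] -= lr * d_h[i] * x[1]
        w_h[i][2] -= lr * d_h[i]

def predict(x):
    h = [sig(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    return sig(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])

for x, t in data:
    print(x, round(predict(x), 2), "target", t)
```

Swap the four XOR patterns for bitmaps of characters and widen the layers, and you have the skeleton of that book's OCR demo. Today's models are this, scaled up by many orders of magnitude.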
Typical of Silicon Valley, they are trying to rent out old garbage and use it to replace workers and creatives.
Sailors know your pain all too well. The key to preventing this is air movement. The less expensive option is some kind of material to put between your cot and mattress, such as Hypervent Aire-Flow or Dri-Deck. An expensive solution is a Froli System, which has the added benefit of letting you tune the firmness for different parts of your body. I have a Froli under all of the bunks on my boat; condensation and mildew are no longer a thing. But the price is steep.
And Cascadia too, please.
our highly trained Canadian Geese
This explains so damned much of their behavior. I for one look forward to these operatives helping us out with our myriad domestic issues.
If you look at it from a different perspective, it all makes more sense. Right now, you're trying to apply logic and an ethical consistency to anti-trans efforts that were never there. The anti-trans efforts are a test to move the Overton window rightward. Trans and NB people are such a tiny minority. By targeting and othering that demographic, conservatives are testing how much the rest of the citizenry will tolerate the next steps in fascism: targeting other minorities, banning miscegenation, segregation, concentration camps... whatever it takes to make a white xian US.
This right here. I fell down the "wild boar problem" rabbit hole a couple years ago. I was curious about what controls have been tried and what could be done to bring things back into balance. The statistic I read said that 75,000 boars must be killed per year in Texas just to keep their numbers stable there. Holy hell. That's a lot of dangerous game hunting.
I'm not a hunter. But I do understand a lot about environmental conservation and the need for balance. We have eliminated enough of the animals that prey on deer that some other means, i.e., hunters, are required to control deer populations. The other option is mass culls, which strike me as wasteful on so many levels.
When I lived in Vermont, there was a conservation movement to attract younger people to deer hunting because natural controls just aren't there anymore. Where I live now, a distemper outbreak decimated the coyotes, and the deer are out of control. The coyotes are finally bouncing back, but it's going to take a while. In my small city, the deer are so rampant that it's common to see dozens on a short bike ride through town. Their food supply is depleted enough that most deer here appear unhealthy and undernourished. The exploded deer population has follow-on effects: increased expense for deer control measures, collisions (one almost slammed into me on my bike two days ago; not the first time), destruction of plantings meant to control erosion, and spreading ticks.
I would like to see prospering wild animal populations, rather than this mess we made.
the DEA visited them last year and performed “accountability audits” that uncovered violations of the federal Controlled Substances Act, namely through inadequate record keeping, according to records obtained by The Baltimore Banner.
At the scale of prisons, these pharmacies are called institutional pharmacies. The size, operation, automation, and throughput of institutional pharmacies are mind-blowing. For example, the biggest Costco pharmacies might process 300 scrips a day; institutional pharmacies generally handle 15,000 to 30,000 per day, with some being even larger.
The "inadequate record keeping" part is just idiocy. Automation and auditing software for this already exists. I know because I wrote the last-mile portion of a suite that manages end-to-end compliance automation for institutional pharmacies. A single failed audit generally costs more than the licensing fees for most auditing and compliance suites. And even in small pharmacies, there's usually more than one failed C-2 audit when it happens. And let's be clear: these audits are always for C-2 drugs (opioids and stimulants).
So I read the paper. I'm sincerely curious: how is this new or special? Ground loops have been used for heat pumps for decades, providing both heat source and heat sink. Is what they're researching qualitatively different from ground loops?