There is a lot of heated debate about this. They’re saying due to COVID, that “early” can mean anything from 0-8 now
I would love to hear the logic behind that
Since always, without a subpoena. Until PRISM, at least.
It should be called The Endarkenment, just saying
It’s not a combination of the names, it’s wordplay: “splayd” => “splay” (like splayed tines, to cover “fork”) + “spade” (a shovel, sharper than a spoon, which covers “knife” and “spoon”)
Tenty years ago
Actually, after “ninety” comes “one hundred”
This is an increasingly bad take. If you work in an industry where LLMs are becoming very useful, you realize that hallucinations are a minor inconvenience at best for the applications they are well suited to, and the tools are getting better by leaps and bounds, week by week.
edit: Like it or not, it’s true. I use LLMs at work, most of my colleagues do too, and none of us use the output raw. Hallucinations are not an issue when you are actively collaborating with the model rather than using it to either “know things for you” or “do the work for you.” Neither of those is what LLMs are really good at, but that’s what most laypeople use them for, so these criticisms look very obviously short-sighted to those of us with real-world experience in a domain where they work well.
No, it’s obvious to anyone with a brain. If the commenter seriously thought it might have been a false positive when they read the original comment, they never would have relayed their thought the way they did in their reply, and it is so clearly a reference to the content of the post that to analyze it even that deeply is overkill. To anyone reading this who is a native English speaker: if you think that comment needs a “/s”, you need to work on your reading comprehension. Read things more carefully.