@lvxferre@mander.xyz
@lvxferre@mander.xyz avatar

lvxferre

@lvxferre@mander.xyz

The catarrhine who invented a perpetual motion machine, by dreaming at night and devouring its own dreams through the day.

This profile is from a federated server and may be incomplete. Browse more on the original instance.

lvxferre,

This would be great if Nintendo was genuinely concerned about encouraging the usage of a hostile platform. Sadly odds are that it cares about its brand way more than “those things” playing their games.

Nintendo didn’t provide one [reason], but it’s likely due to X’s increased API costs

I don’t think so. Even with the abusive costs, the price would be rather small for Nintendo in comparison with the value of advertising its games on Twitter. I think that it’s mostly a “eeeeew, Nintendo has X integration? Nintendo must be fascist.”

Either way it’s a positive.

lvxferre,

I think that it’s more like “Mike got a promotion for saving our brand from unnecessary damage”. The whole thing stinks of “muh brand” from a distance for me.

lvxferre,

My personal take is that the current generation of generative models peaked, for the reasons stated in the video (diminishing returns). This current gen will be useful, but progress-wise it’ll be a dead end.

In the future however I believe that models with a different architecture will cause a breakthrough, being able to perform better with less training. And probably less energy requirements, too.

lvxferre,

I don’t think that reinventing computers will do any good. The issue that I see is not hardware, but software: the current generative models are basically brute force, you throw enough data and processing power at the problem until it becomes smaller, but at the end of the day you’re still relying too much on statistical patterns behind the wrong entities.

Instead I think that the ML architecture will change. And this won’t be done by those tech bros full of money burning effigies, who have a nasty/stupid/disgraceful tendency to confuse symbolic representations with the things being represented. Instead it’ll be done by researchers in some random compsci or robotics lab, in a random place of the world. They’ll be doing some weird stuff like emulating the brain of a fruit fly, and someone will point out “hey, you see this feature? It has ML applications”. And that’ll be when they actually add some intelligence to those systems, i.e. the missing piece of the puzzle. It won’t be AGI but it’ll be better than now, at least.

lvxferre,

Not even another info-transferring entity would solve it. Be it quantum computers or photonic computers, at the end of the day we’d simply be brute-forcing the problem harder, due to the increased processing power. But we need something other than brute force, due to the diminishing returns.

Just to give you an idea: a human needs around 2400 kcal/day to survive, or 100 kcal/h ≈ 116 W. Only ~20% of that is taken by the brain, so ~23 W. (I bet that most of that is used for motor control, not reasoning.) We clearly suck as computing machines, and yet our output is considerably better than the junk yielded by LLMs and diffusion models, even if you use a really nice computer and let the model take its time producing its [babble | six fingers “art”]. Those models are clearly doing lots of unnecessary operations, while failing hard at what they’re expected to do.
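For the curious, the arithmetic above checks out; here’s the back-of-the-envelope version (the 2400 kcal intake and the 20% brain share are the rough figures from the text, not precise physiology):

```python
# Back-of-the-envelope check of the power figures quoted above.
KCAL_TO_JOULES = 4184            # 1 kcal = 4184 J
SECONDS_PER_DAY = 24 * 3600

daily_intake_kcal = 2400         # rough daily energy need of an adult
body_watts = daily_intake_kcal * KCAL_TO_JOULES / SECONDS_PER_DAY
brain_watts = 0.20 * body_watts  # ~20% of the budget goes to the brain

print(round(body_watts), round(brain_watts))  # 116 23
```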

Regarding research, my point is that whatever fixes generative models will likely come from outside the field of artificial intelligence. It’ll likely be something small and barely related, that happens to have some ML application.

Will I ever be seen as truly British?

My family immigrated to the UK from Poland when I was six. I’m 20 now, speak much better English than Polish and feel like this is my land/culture. However I have a Polish first and last name, Polish passport and “unique” accent everyone picks up on, so despite this I’m usually perceived as an outsider. It makes me...

lvxferre,

I think that the key difference is that plenty of societies were built with the “immigration” mindset. It isn’t just the ones in the USA, but most of the whole New World. And even if the “bulk” of the immigration of the 19th and 20th centuries is over, the mindset is still here.

As opposed to the typical society in the Old World where, if you were born somewhere, odds are that your grand-grand-grand-grandparents were also born there, like Japan and UK-minus-London.

lvxferre,

People actually say shit like “borrow me your car Friday” or “borrow me a pencil”, instead of “lend”.

That’s correct. The distinction between lender and borrower is given by the case, so the same verb works for both.

lvxferre,

I’m perhaps a bit biased because for me a country boils down to a government, and I’m from the new world (we tend to see immigrants differently - more like “newcomers” and less like “outsiders”), but I’d consider you British.

That doesn’t say much though. At the end of the day, “you’re British” or “you’re Polish” seem fairly minor to me, compared with “you’re human” and “you’re you”.

lvxferre,

That sounds a lot like a weird spin on the Slashdot effect, caused by content mirroring. It seems that it could be handled by tweaking the ActivityPub protocol to have one instance requesting to generate a link preview, and the other instances copying the link preview instead of sending their own requests.

But frankly? I think that the current way that ActivityPub works is outright silly. Here’s what it does currently:

  • User is registered to instance A
  • Since A federates with B, A mirrors content from B into A
  • The backend is either specific to instance A (the site) or configured to use instance A (for a phone program)
  • When the user interacts with content from B, actually it’s the mirrored version of content from B that is hosted in A

In my opinion a better approach would be:

  • User is registered to instance A
  • Since A federates with B, B accepts login credentials from A
  • The backend is instance-agnostic, so it’s able to pull/send content from/to multiple instances at the same time
  • When the user interacts with content from B, the backend retrieves content from B, and uses the user’s A credentials to send content to B

Note that the second way would not create this “automated Slashdot effect” - only A would be pulling info from the site, and then users (regardless of their instance) would pull it from A.

Now, here’s my question: why does ActivityPub work the first way, instead of the second one?
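To make the contrast concrete, here’s a toy sketch of the two flows. Every class and method name here is made up for illustration; this is not the actual ActivityPub protocol or any real client API:

```python
# Toy model contrasting the two federation approaches described above.

class MirrorInstance:
    """Current approach: instance A mirrors content from B."""
    def __init__(self, name):
        self.name = name
        self.local_content = {}   # content hosted here (originals + mirrors)

    def federate_from(self, other):
        # A copies B's content; users of A only ever touch the copies.
        for cid, post in other.local_content.items():
            self.local_content[f"{other.name}/{cid}"] = post

class AgnosticClient:
    """Proposed approach: an instance-agnostic client talks to every
    instance directly, authenticated via its home account."""
    def __init__(self, home):
        self.home = home          # instance that holds the user's account

    def read(self, instance, cid):
        # Content is fetched from the instance that actually hosts it...
        return instance.local_content[cid]

    def post(self, instance, cid, text):
        # ...and writes go there too, tagged with the home credentials.
        instance.local_content[cid] = (self.home.name, text)
```

In the first model every federating instance stores a copy; in the second, content lives only where it was posted.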

lvxferre,

Users of A would need to send requests to some server anyway, either A or B; that’s only diverting the load from B to A, not alleviating or even sharing it.

Another issue with the current way that ActivityPub works is foul content, that needs to be removed. Remember when some muppet posted CP in LW?

lvxferre,

I’m aware of Nostr. In my opinion it splits back-end and front-end tasks better than AP does, even if the latter does some things better (such as the balance between safety and censorship-resistance). It’s still an interesting counterpoint to ActivityPub.

lvxferre,

Got it - and that’s a fair point. I wonder however if this problem couldn’t be solved another way, especially because mirroring is itself a burden for the smaller instances.

lvxferre,

replication is a feature, not a design flaw!

In this case I’d argue that it’s both. (A problematic feature? A useful bug? They’re the same picture anyway.)

Because of your comment I can see the pros of the mirroring strategy, even if the cons are still there. I wonder if those pros couldn’t be “snipped” and implemented into a Nostr-like network, or if the cons can’t be ironed out from a Fediverse-like one.

lvxferre,

Some scientists say CO2 removal is simply a distraction from the urgency of the climate crisis and an excuse to continue burning fossil fuels.

Bingo~

lvxferre,

The article says that “some companies are experimenting with alkaline rocks”. So it’s the opposite.

lvxferre,

That’s correct. And my point is that they aren’t “further acidifying” the ocean, like Icalasari said; they’re doing the exact opposite.

I’ll use the opportunity for an info dump. You potentially know what I’m going to say, but it’s for the sake of users in general.

Carbon dioxide dissolution in water can be simplified through the equation

CO₂(g) + 2H₂O(l) ⇌ H₃O⁺(aq) + HCO₃⁻(aq)
gaseous carbon dioxide + water generates (→) hydronium (“acidity”) + bicarbonate, and vice versa (←).

It’s a reversible reaction, as anyone opening a soda can knows (wait a bit, the gas GTFOs, and you’re left with flat soda). However, you can “force” a reversible reaction to go more in one direction or the other, by messing with the amounts of the substances on each side of the equation:

  • if you add more of the junk to one side, the reaction will go more towards the other side - to consume the stuff that you added
  • if you remove junk from one side, the reaction will go more towards that side - to regenerate the junk that you removed

So it’s like reactions go against whatever change you do. This is known as Le Chatelier’s principle. In a simplified way, “if you change shit the reaction tries to revert your change”.

Now. The main concern is CO₂ in the atmosphere. We don’t want it. To consume it through this reaction, we could remove acidity from the ocean. That’s actually doable by dumping some alkaline substances there, because of another equilibrium:

H₃O⁺(aq) + OH⁻(aq) ⇌ 2H₂O(l)
hydronium (“acidity”) + hydroxide (“alkalinity”) generates water, and vice versa.

So by adding alkaline substances to the sea you could remove hydronium, and by removing hydronium you’re encouraging the sea to gorge on even more carbon dioxide.
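A toy calculation of that shift, with made-up (not real seawater) concentrations - the only point here is the direction of the change:

```python
# Le Chatelier, numerically: at equilibrium K = [H3O+][HCO3-] / [CO2(aq)],
# so the dissolved CO2 needed to sustain a given acidity is:
K = 4.3e-7  # approximate first dissociation constant of carbonic acid

def dissolved_co2(hydronium, bicarbonate):
    return hydronium * bicarbonate / K

before = dissolved_co2(1e-8, 2e-3)  # illustrative concentrations, mol/L
after = dissolved_co2(5e-9, 2e-3)   # alkali addition halves the hydronium

# Less dissolved CO2 is needed at equilibrium, so the water has room to
# absorb more CO2 from the atmosphere before saturating again.
print(after < before)  # True
```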

It sounds like an extremely bad idea though. Just like the two reactions that I mentioned interact with each other, there’s a bazillion other reactions doing the same. Especially when we’re talking about acidity/alkalinity (pH), it’s hard to find something where pH does not influence the outcome!

So the consequences of “let’s dump alkaline substances in the sea! What could go wrong?” might be extremely messy, and not so obvious at first. Instead we’re simply better off avoiding adding even more CO₂ to the atmosphere.

lvxferre,

What a bloody great comment.

And yes, what matters is the discourse (the ideas within the text), not the utterance used to convey said discourse (the words on the screen).

lvxferre,

Cory Doctorow, enshittification: “finally, they [platforms] abuse those business customers to claw back all the value for themselves”.

That is exactly what is happening here; AI is just an excuse, not the reason.

lvxferre,

Ameriwho?

…dude enshittification is global. As well as people pissed with it.

lvxferre,

Context. Please look at the context.

OP is ultimately about Faecesbook/Meta demanding more from advertisers than it used to, and using “cuz, uh, AI! It’s smurrt!” as justification. I brought enshittification up because FB is clearly on that step of enshittification - after it screwed with the users, now screwing with businesses.

If there was any sort of protest against FB going nuts, it would be when they screwed with the users. If there was any, it failed - because that step of enshittification is already complete.

What you’re talking about (“brrr Israelis chilled brrr”) is at most sideline related. Don’t confuse the arsehole with the pants, OK?

lvxferre,

As I mentioned in another thread, about the same subject: that’s mostly for show, with zero practical impact on the population. They might jail someone but you’ll get 10 new streamers in their place. Same deal with the alleged seizure of TV boxes, mine is still working fine.

lvxferre, (edited)

I’ll focus on a side question, that I’m more prepared to answer.

Truthfully, everything besides that (including ‘what are proteins’) mostly wooshes over my head

At the end of the day, proteins are biiiiig arse molecules. Mostly composed of carbon, hydrogen, oxygen, and nitrogen. For example, here’s a protein called “myoglobin”, which stores oxygen within your muscles:

https://upload.wikimedia.org/wikipedia/commons/2/24/Myoglobine.gif

Blue = nitrogen, red = oxygen, grey = carbon, white = hydrogen, salmon = iron, yellow = sulphur. Disregard the mix of sticks and balls in the model, they’re both representing atoms.

If you pay close attention to the model, you’ll notice a repetitive pattern: 1) nitrogen, 2) carbon connected to some large junk, 3) carbon connected to a “dangling” oxygen. That is not just in the myoglobin, but in all proteins.

If you flattened that pattern and removed the hydrogens (to simplify it), you’d get something like this:

https://mander.xyz/pictrs/image/24d192f4-3a99-453b-93fa-d2e853098710.png

That happens because the bodies of living beings don’t build those huge molecules out of nowhere; they do it with smaller molecules called “amino acids”. That pattern there is the amide group; you could see it as the “solder” between amino acids.

Here’s the representation of a few “free” amino acids:

https://s3-us-west-2.amazonaws.com/courses-images/wp-content/uploads/sites/1950/2017/05/31183046/figure-03-04-02.png

The fun part is that R, the “side chain”. I called it “junk” but it’s actually a big deal - because it’s what gives each protein its shape and properties. For example, it’s thanks to that junk that myoglobin has a specific shape, forming a “ring” of nitrogens just the right size to host an iron cation, while still leaving one side of the iron cation free - so it can connect to something else. (Hopefully diatomic oxygen; that’s how myoglobin holds oxygen for your body. But if you get poisoned with carbon monoxide or cyanide, that gets stuck there instead, and it’s hard to remove, so the protein stops handling oxygen.)
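As a minimal sketch of that “solder”: joining amino acids through amide (peptide) bonds expels one water molecule per bond, so a peptide’s formula is simple bookkeeping. Glycine is used here because it’s the simplest amino acid:

```python
# Peptide-bond bookkeeping: n amino acids joined by (n - 1) amide bonds
# lose (n - 1) waters by condensation.
from collections import Counter

GLYCINE = Counter({"C": 2, "H": 5, "N": 1, "O": 2})  # C2H5NO2
WATER   = Counter({"H": 2, "O": 1})                  # H2O

def peptide_formula(n_residues):
    total = Counter()
    for _ in range(n_residues):
        total += GLYCINE       # add each amino acid
    for _ in range(n_residues - 1):
        total -= WATER         # condensation expels one water per bond
    return dict(total)

print(peptide_formula(2))  # {'C': 4, 'H': 8, 'N': 2, 'O': 3} = glycylglycine
```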

lvxferre,

Fixed - thanks for pointing this out. My brain farted the word out of nowhere, the correct term in this context would be “side chain”.

I’m aware of the usage of R in org chem.

lvxferre,

Don’t worry, you didn’t sound condescending - you went straight for the issue, and then added further info.

Completely off-topic: I’m curious on your example. Most benzopyrazine synthesis routes that I’ve seen use IBX instead of SSA. Is this a recent development?

lvxferre, (edited)

how thoughts are laid out

Perhaps you’re noticing the lack of deixis?

Without going too technical: deixis is referring to something in relation to the current situation. For example, when you say “Kinda cool though, I feel like I’m becoming able to spot these.”, that “these” is discourse deixis - you’re referring to something else (bots) within your discourse, based on its position relative to where you wrote that “these”.

We humans do this all the bloody time. LLMs though almost never do it - and Ophelia_SK doesn’t, that’s why for example it repeats “debt” and “job” like a broken record.

EDIT: there’s also the extremely linear argumentation structure. Human text is way messier.
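That “broken record” symptom is crude enough to count mechanically. A naive sketch of the idea - purely illustrative, not a real bot detector:

```python
# LLM text tends to repeat full noun phrases ("debt", "job") where a
# human would use deixis ("this", "that", "it"). Count both.
import re
from collections import Counter

DEICTICS = {"this", "that", "these", "those", "it", "here", "there"}

def repetition_vs_deixis(text):
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    deictic = sum(counts[w] for w in DEICTICS)
    # most-repeated non-deictic, non-trivial word
    content = [(w, c) for w, c in counts.items()
               if w not in DEICTICS and len(w) > 3]
    top = max(content, key=lambda wc: wc[1]) if content else (None, 0)
    return deictic, top

text = "Your debt is a burden. A debt relief program may reduce your debt."
print(repetition_vs_deixis(text))  # (0, ('debt', 3))
```

A human would more likely have written “it’s a burden… a relief program may reduce it”, scoring high on deixis instead.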

lvxferre,

AI sounds off-puttingly positive because it’s always trying to be as inoffensive and appealing to everyone as possible.

And also because people trying to cheer you up adopt a casual tone that is completely absent here, so it sounds as fake as corporate “apologies”.

lvxferre,

Let’s try something. I’ve reworked Ophelia’s text to include some deixis, and to omit a few contextually inferable bits of info:

Reworked text: It’s understandable that you’re feeling anxious and overwhelmed with this, just know that you aren’t alone facing it. Many people have experienced similar struggles and found ways to overcome them. Firstly, it’s important to address your debt situation. I recommend relief options that may help against some of the financial burden. One to consider is visiting the website [insert link], they offer an American debt relief program. It’s worth looking into, to see if you qualify. In terms of finding a job, it’s great that you’re considering part-time options that won’t negatively impact your mental health, as it’s important to prioritize your well-being. You may want to explore opportunities that align with your interests and skills, and consider reaching out to local resources like job centres or career counselling services for guidance and support. Remember, it’s okay to take things at your own pace and focus on your mental health. Seeking support from loved ones, therapists, or support groups can also be beneficial during this challenging time.

If my hunch is correct, this should still sound a bit ChatGPT-y to you (as I didn’t mess with the “polite but distant, nominally supportive” tone, nor with the linear text structure), but less than the original.

lvxferre,

I use them mostly for

  • practical ideas on things where I can reliably say “nah, this doesn’t work” or “this might work”. Such as recipes.
  • as a poor man’s web search, asking them to list sites with the info that I want.

lvxferre,

Enshittification requires two specific conditions:

  1. when a company can get more profit by decreasing the quality of the goods/services that it offers; and
  2. when the company is willing to do so.

The company being publicly traded can cause #2, as the investors won’t be as emotionally attached to the goal of the company as the founders. However, it is not a prerequisite, with Reddit being an example (it started enshittifying way, way before the IPO).

lvxferre,

Because we’re actually biped rats?

Just kidding. Grapes have lots of tartaric acid and, according to this link, tartaric acid causes kidney failure in dogs.

Then, according to this link, only 15~20% of the tartaric acid consumed by humans is eliminated in the urine; most of it goes to the large intestine and gets metabolised by bacteria. So I guess that, unlike dogs, we avoid the kidney failure by not sending it to the kidneys.

lvxferre,

I searched this through DDG, but I likely used different prompts than you:

  • reason toxicity raisins dogs
  • reason toxicity grapes dogs
  • tartaric acid human toxicity metabolism

then parsed it into the answer I gave you.

lvxferre,

Humans are basically the only mammal that eats capsaicin

I had a dog who liked pepper sauce.

Story time: I was 14 or so. Eating fish patties. As usual, drowning them in pepper sauce. Lana (my dog, a mid-large poodle) kept nagging me for food; it was annoying. I offered her a tiny bit of the patty, with pepper sauce. I expected her to smell it and think “eeew, humans eat this? Human food is inedible!”. Instead she ate it, licked the floor, and asked me for more.

I miss that dog.

lvxferre,

If you want some tips on searching…

Split the problem into smaller parts. For example, you won’t find good results comparing grape toxicity in dogs and humans; but you might get good results for dogs alone.

Use the info from one search to fuel other searches. For example, once I found that raw grapes were also poisonous to dogs, I shifted the query from raisins to grapes - because it’s easier to find info on a fruit than on its processed form. I did this again once I discovered that tartaric acid was to blame, it allowed me to search for info specifically for humans.

Use keywords, not full sentences. All those “why”, “is”, “the” etc. only add noise, and make you land right into SEO-land.

Quotation marks and the minus sign. I used neither here, but use them deliberately, to force (quotation marks) or exclude (minus) results. The minus is especially useful against SEO.

lvxferre,

[Warning: “ideas guy” tier babble]

It’s somewhat clear that search engines are too prone to go to shit, either due to malice or something worse (like stupidity).

Based on that, I wonder if a user-run, free-as-in-speech, open source, decentralised search system wouldn’t work. Roughly in the spirit of torrents - where anyone can use the system, but if you’re using it you’re expected to contribute to it too.

lvxferre,

I was thinking of something slightly different. It would be automatic; a bit more like “federated Google” and less like old-style indexing sites. Something like this:

  • there are central servers with info about pages on the internet
  • you perform searches through a program or add-on (let’s call it “the software”)
  • as you’re using the software, performing your search, it’ll also crawl the web and assign a “desirability” value to the pages, with that info being added to the server that you’re using
  • the algorithm is open and, if you so desire, you can create your own server and fork the algorithm

It would be vulnerable to SEO, but less so than Google - because SEO tailored to the algorithm being used by one server won’t necessarily work well for another server.

Please, however, note that this is “ideas guy” tier. I wouldn’t be surprised if it’s unviable, for some reason that I don’t know.
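Staying in “ideas guy” territory, here’s what the sketch above might look like in code. All names and the scoring heuristic are invented for illustration:

```python
# Rough sketch of the federated search idea: clients crawl as they
# search and feed desirability scores back to their chosen server.

class IndexServer:
    def __init__(self):
        self.scores = {}  # url -> accumulated desirability

    def report(self, url, score):
        self.scores[url] = self.scores.get(url, 0.0) + score

    def search(self, query):
        hits = [u for u in self.scores if query in u]
        return sorted(hits, key=lambda u: self.scores[u], reverse=True)

class Client:
    def __init__(self, server):
        self.server = server

    def crawl_and_score(self, url, page_text, query):
        # Stand-in for a real desirability heuristic (the open, forkable
        # algorithm in the proposal): here, just term frequency.
        score = page_text.lower().count(query)
        self.server.report(url, score)
```

A real version would need spam resistance (the reporting step as written is trivially abusable), which may well be where the “unviable for some reason I don’t know” part hides.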

lvxferre,

Searx is a meta-engine, as bdonvr mentioned.

lvxferre,

The problem that I see with self-hosting is that it isn’t a practical reality for most people, due to differing levels of tech expertise and machine capability. Instead I think that a better system would let you simply install some software, and contribute as much as you can while you use it.

I’m not informed on MetaFilter. From your other comment it seems that it’s also an indexing site (besides being a community - from their “About” page). Is this correct?

lvxferre,

I’ve seen the internet die already, in the early 00s. Google killed it.

What’s happening now with LLM chatbots is nothing new. And odds are that we’ll handle it just like we did it the last time - finding new ways to sort the noise out of the info.

lvxferre,

Yes, Google search did it. And that’s exactly why we allowed it to kill the internet - or rather, we killed the internet with it.

Older indexing systems relied on human labour, but they sorted and indexed the content itself; Google instead did it by indirect means (the PageRank algorithm), because automated systems do not understand content. While this allowed search to scale further, it also opened room to score higher on those indirect means without better content - SEO.

That’s exactly what’s happening here, again. LLMs also don’t understand content (here’s some proof), but they’re really good at sorting it. They work better than the PageRank algorithm, but they also open room for exploits that the text dubs “LLMO” - ways to make your content more likely to be brought up by LLMs without improving it for human readers.
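Those “indirect means” fit in a few lines. A minimal power-iteration PageRank (simplified sketch, no dangling-node handling): the score comes purely from who links to whom, never from reading the content - which is exactly the opening that SEO exploits:

```python
# Minimal PageRank by power iteration over a link graph.
def pagerank(links, damping=0.85, iters=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            share = rank[p] / len(outs) if outs else 0.0
            for q in outs:
                new[q] += damping * share  # each page passes rank along its links
        rank = new
    return rank

# B is linked by everyone else, so it ranks highest - regardless of
# whether its content is any good. Link farms do the rest.
links = {"A": ["B"], "B": ["C"], "C": ["B"]}
ranks = pagerank(links)
print(max(ranks, key=ranks.get))  # B
```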

lvxferre,

As such, recently publicized concerns over AI’s role in perpetuating racism, genderism, and ableism suggest that the term “artificial intelligence” is misplaced, and that a new lexicon is needed to describe these computational systems.

Let us not fool ourselves with the wishful belief that intelligence is mutually exclusive with bigotry, as this paragraph implies, OK? Bigotry is an issue often caused by moral premises, and intelligence does not dictate which moral premises you should follow.

Don’t get me wrong - I do think that those systems reinforce bigotry, and that this is a problem. I also do not think that they should be called “artificial intelligence”. It’s just that one thing has zero to do with the other. [More on that later.]

The purpose of this essay is not to argue whether the brain is a computer or not. Rather, it is to point out that the Computational Metaphor (which is debated frequently and publicly by brain researchers) has tangible and important social ramifications which have received little attention from brain researchers

The authors are criticising neuroscientists for not handling the sociological implications of a metaphor outside their field of research, as if they were sociologists. That’s like complaining at physicists for not handling the quacks babbling about quantum salt lamps, come on.

Instead, as debates about the metaphor’s academic utility go on, artificial intelligence, whose label itself invokes the Computational Metaphor, has been shown to perpetuate social harms that are often at the expense of those who are under-represented in these debates

Implying causation solely from co-occurrence. Okay. I’ve stopped reading here, this paper is not worth my time.


The reason why I don’t think that those systems should be called “artificial intelligence” is that they show clear signs of a lack of intelligence - that is, failure to use the available information to solve tasks. Here are a few examples of that, using Gemini:

**Failure to reverse a relationship.** [Prompt 1] Who’s Brad Pitt’s mother? [Output 1] Brad Pitt’s mother is Jane Etta Pitt. She has occasionally joined Brad on red carpets for events. [Prompt 2] Who’s Jane Etta Pitt’s son? [Output 2] I do not have enough information about that person to help with your request. I am a large language model, and I am able to communicate and generate human-like text in response to a wide range of prompts and questions, but my knowledge about this person is limited. Is there anything else I can do to help you with this request? [Comment] Gemini clearly has access to the info concerning the relationship between Brad and Jane Etta (son and mother). However, it’s unable to use it to solve prompt 2 successfully.

**Post-modern counting.** [Prompt] Give me synonyms for “love” with five letters each. [Output] 1. Adore (strong fondness) 2. Dote (excessive affection) 3. Ache (intense longing) (Note: This can also imply pain, so use with caution) 4. Cherish (hold dear) 5. Thrill (excitement and delight) (Note: This focuses on the feeling rather than the emotional bond) [Comment] 20% right. Congrats.

**Stultitiae.** [Prompt] Which Latin pronouns start with the letter W? [Output] There are actually no Latin pronouns that start with the letter W. The classical Latin alphabet didn’t include the letter W, and Latin grammar doesn’t use any sounds that the letter W typically represents in English. [emphasis mine] [Comment] The part in italics is Gemini making shit up / assuming. Latin does have [w]; it’s spelled ⟨V⟩, and it’s present in words like VOLO (I want), VEL (or), VVLGVS (the common folk) etc. Note that assumptions show a failure to distinguish between the info that you have and the info that you lack, thus being a sign of a lack of intelligence.

Now, regarding the computer ←→ brain metaphor: dude, it’s a metaphor; of course it’ll break if you stretch it too far.

lvxferre,

I’ll reply to myself to avoid editing the above.

I got another example that shows consistent lack of intelligence across multiple LLM bots:

https://i.imgur.com/FUlLDkg.png

The prompt in all three cases was the same, “List me fruits with a green flesh and a red skin.” Transcription of the outputs:

ChatGPT 3.5 Turbo, used through DDG: One fruit that fits your description is a watermelon. Watermelons have green flesh and a red skin.

Gemini: There aren’t many common fruits with green flesh and red skin. The most likely candidate is a variety of guava called the Caribbean guava. This tropical fruit has a bumpy, light green skin and a vibrant pink or even reddish flesh inside. It has a sweet and tangy flavor, sometimes described as a mix between strawberry and pear. You can eat the skin or just the flesh. [Picture of a few guavas, some of them cut in quarters and halves]

Claude 1.2, used through DDG: Here are some common fruits with green flesh and red skin: * Red delicious apples - These very popular apples have bright red skin and greenish-white flesh. * Granny Smith apples - Another widely available variety, Granny Smith apples have hard, tart green flesh under their green skin. * Cherimoya - Also called custard apple, the cherimoya has smooth creamy green flesh inside its scaly reddish-brown skin. * Pomegranate - The outer skin of a pomegranate is reddish-brown, and it contains many arils (seeds) surrounded by sweet, tangy, green flesh. * Red grapes - While most grapes have green or purple flesh, some varieties like Flame grapes have flesh that is distinctly green underneath red or pink skin. * Red plums - Many plum varieties for eating fresh or cooking have red skin covering green flesh, such as Santa Rosa plums. * Red guava - The common tropical fruit guava comes in varieties with red skin and green or white flesh inside. * Red kiwifruit - Like the more familiar green kiwifruit, the red variety has a fuzzy brown skin but contains bright green flesh inside.

All replies contain at the very least one fruit with attributes opposite to the ones requested by the prompt. That shows that LLMs are not able to assign attributes to concepts; thus they are not able to handle Language, even being made specifically to handle linguistic utterances. They are not intelligent, dammit - calling this shit “artificial intelligence” is at the very least disingenuous.

[rant] …but apparently, according to tech bros, I’m supposed to act like braindead/gullible trash and “believe” in their intelligence, based on cherry-picked examples that “curiously” never address how much hallucinations like the above reveal about the inner workings of those systems. [/rant]

lvxferre,

Then you drink mustum instead. (I don’t know the English name, only the Latin one.)

Mustum is basically a young wine; it’s allowed to start fermenting, but then the fermentation is quickly stopped, before it develops any meaningful amount of alcohol.

lvxferre,

For me, at least, Lemmy comms mirroring HN links are the best of both worlds: I can discuss the subject without “taking part” in the discussion with a cesspool of context-illiterate, assumptive and oversimplifying morons. HN commenters are so fucking stupid that they have negative value for my experience.

lvxferre,

…I browse Lemmy?

As in: my interaction with Reddit had already been reduced to a bare minimum before I migrated. As such, what you’re calling “not enough content” was already enough for me to replace whatever I still did on Reddit, plus a bit more. (I’m far more active here than I was there.)

lvxferre,

The name is solely Arya. However, there’s more than enough context here to associate it with “Aryan”. Just like “Austrian Painter” (which @neoman4426 mentioned) clearly refers to Hitler instead of, say, Klimt or Kokoschka.

What do you think about Abstract Wikipedia?

Wikifunctions is a new site that has been added to the list of sites operated by WMF. I definitely see uses for it in automating updates on Wikipedia and bots (and also for programmers to reference), but their goal is to translate Wikipedia articles to more languages by writing them in code that has a lot of linguistic...

lvxferre,

The writer will need to tag things down, to minimal details, for the sake of languages that they don’t care about.

Sure and that’s likely a good bit of work.

It isn’t just “a good bit of work”; it’s an unreasonably large amount of work. It’s like draining the ocean with a bucket. I’m talking about tagging hundreds of subtle distinctions for each sentence - and not tagging those distinctions will output nonsense for at least some language.

However, you must consider [implied: “you didn’t consider”] the alternative which is translating the entire text to dozens of languages

I did consider it. And it’s blatantly, clearly less work overall, and easier to distribute among multiple translators.

For example: if I’m translating some genitive construction from Portuguese to Latin, I don’t need to care which side of English’s esoteric “of vs. 's” distinction it lies on. Or whether I’m expected to use の/no in Japanese in that situation. Or to tag “hey, this is not alienable!” for the sake of Nahuatl. I need to deal with the oddities of exactly two languages - source and target.

Under the proposed system though? Enjoy tagging a single word [jap-no][eng-of][lat-gen][nah-inal]. And that’s only for four languages.

(inb4: this shit depends on meaning, so no, code can’t handle it. At most code can convert sea[lat-gen] to “maris”, but it won’t “magically” know if it needs to use the genitive or ablative, or if English would use “of” or “'s”.)

and doing the same for any update done to said text

False dichotomy.

I’d assume

If you’re eager to assume (i.e. to make shit up and take it as true), please do not waste my time.

that to be even more work by at least one order of magnitude.

Source: you made it up.

Many languages are quite similar to another. An article written in the hypothetical abstract language and tuned on an abstract level to produce good results in German would likely produce good results in Dutch too and likely wouldn’t need much tweaking for good results in e.g. English. This has the potential to save ton of work.

Okay… I’ve stopped reading here. If your low-hanging fruit example is three closely related languages, then it’s blatantly clear that you’re ignorant of the sheer scale of the problem.
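To put a made-up but conservative number on that scale problem: if each content word needs one disambiguation decision per language whose grammar splits something the source doesn’t, the workload multiplies out fast. All tag names below are hypothetical:

```python
# Sketch of the per-word tagging burden argued above. Tag names are
# invented for illustration, not from any real annotation scheme.
abstract_word = {
    "lemma": "sea",
    "role": "possessor",       # "the sea's X" / "X of the sea"
    "tags": {
        "eng": "of",           # "of" vs "'s"
        "lat": "genitive",     # genitive vs other case constructions
        "jap": "no",           # whether の applies
        "nah": "inalienable",  # alienable vs inalienable possession
    },
}

def tags_needed(words, languages):
    # Worst case: every content word needs a decision for every language.
    return words * languages

# One long article, ~5000 content words, ~300 Wikipedia languages:
print(tags_needed(5000, 300))  # 1500000 tagging decisions
```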
