yetAnotherUser

joined 1 year ago
[–] [email protected] 1 points 1 month ago* (last edited 1 month ago)

Uh, I had to quickly look at Wikipedia, but apparently the reason it's transcribed with Ph is:

At the time these letters were borrowed, there was no Greek letter that represented /f/: the Greek letter phi 'Φ' then represented an aspirated voiceless bilabial plosive /pʰ/, although in Modern Greek it has come to represent /f/.

And so out of the various vav variants in the Mediterranean world, the letter F entered the Roman alphabet attached to a sound which the Greeks did not have.

So the Greeks pronounced Phi differently from F, and at some point someone decided it should be transcribed as Ph because it sounded different from the transcriber's own F sound. Maybe the Phi symbol just looked like a P.

[–] [email protected] 1 points 1 month ago (2 children)

Smh just learn Ancient Greek:

philosophy <=> φιλοσοφία <=> Phi Iota Lambda Omicron Sigma Omicron Phi Iota Alpha

[–] [email protected] 7 points 1 month ago

How do you know they're not in the toilet stall next to yours?

[–] [email protected] 6 points 1 month ago

Counter-counterpoint:

Display the exact value of pi with 64 digits in any base N number system.
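You can't display the *exact* value of pi, of course (it's irrational in every integer base), but 64 digits in an arbitrary base is doable with the standard `decimal` module. A sketch using Machin's formula (the `pi_digits_base` helper name is an assumption, not an established API):

```python
import math
from decimal import Decimal, getcontext

DIGITS = "0123456789abcdefghijklmnopqrstuvwxyz"

def pi_digits_base(base: int, ndigits: int = 64) -> str:
    """Approximate pi to `ndigits` fractional digits in `base` (2..36),
    using Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    if not 2 <= base <= 36:
        raise ValueError("base must be in 2..36")
    # working precision in decimal digits: enough to carry ndigits base-N digits
    prec = int(ndigits * math.log10(base)) + 15
    getcontext().prec = prec
    eps = Decimal(1).scaleb(-prec)

    def arctan_inv(x: int) -> Decimal:
        # Taylor series: arctan(1/x) = sum_k (-1)^k / ((2k+1) * x^(2k+1))
        total, term, k = Decimal(0), Decimal(1) / x, 0
        while term > eps:
            total += (-term if k % 2 else term) / (2 * k + 1)
            term /= x * x
            k += 1
        return total

    pi = 16 * arctan_inv(5) - 4 * arctan_inv(239)

    # peel off base-N digits of the fractional part one at a time
    frac, out = pi - 3, []
    for _ in range(ndigits):
        frac *= base
        d = int(frac)
        out.append(DIGITS[d])
        frac -= d
    return "3." + "".join(out)

print(pi_digits_base(16, 8))  # → 3.243f6a88
```

Still an approximation to 64 digits, though, so the counter-counterpoint stands.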

[–] [email protected] 12 points 1 month ago (1 children)

Do Italian professors know their students' names? Over here, two countries to the North, no professor knows anything about their students.

[–] [email protected] 2 points 1 month ago

The J is in lower case too; the line to the right of the i is shorter than the others.

[–] [email protected] 66 points 1 month ago

Placebos work even when you know it's a placebo, though. Pointing out that something is a placebo is important because many are at best overpriced scams (homeopathy) and at worst actively harmful (chiropractic). The culture around many placebos is also rife with pseudoscience and advocates against seeking genuine care, so you should ensure nobody gets invested in placebos past a certain point.

One can make an informed decision regarding taking placebos if and only if one knows it's a placebo, else one will be scammed and/or harmed.

[–] [email protected] 2 points 2 months ago

Killing 50% of any one people is genocide, right? For example, the Nazis killed up to 50% of the European Romani people, and it is classified as a genocide.

Let's assume killing 50% of n peoples is genocide.

Since killing 50% of n peoples is genocide, killing 50% of n+1 peoples must also be genocide; otherwise, a number N would exist such that killing 50% of N - 1 peoples is genocide but killing 50% of N peoples is not. The existence of such an N would be quite contradictory, as it would imply one could undo genocide by killing more people. Additionally, if one were to first kill 50% of N - 1 peoples and then kill 50% of one more people some time later, both events would be classified as genocide, since killing 50% of one people is assumed to be genocide.

Therefore, Thanos did in fact commit genocide.

[–] [email protected] 0 points 7 months ago (1 children)
[–] [email protected] 3 points 9 months ago

Although I'm only vaguely aware of the German laws, I don't think other EU nations' laws differ significantly.

Here's the corresponding law:

The insurer shall not be obligated to effect payment if the policyholder has intentionally and unlawfully caused the loss suffered by the third party.

Source:

Since this was clearly negligence, I think they would be fine. After all, they didn't intend to damage the statue. Gross negligence is still negligence.

[–] [email protected] 2 points 10 months ago

AI and robotics companies don’t want this to happen. OpenAI, for example, has reportedly fought to “water down” safety regulations and reduce AI-quality requirements. According to an article in Time, it lobbied European Union officials against classifying models like ChatGPT as “high risk,” which would have brought “stringent legal requirements including transparency, traceability, and human oversight.” The reasoning was supposedly that OpenAI did not intend to put its products to high-risk use—a logical twist akin to the Titanic owners lobbying that the ship should not be inspected for lifeboats on the principle that it was a “general purpose” vessel that also could sail in warm waters where there were no icebergs and people could float for days.

What would've been high risk? Well:

In one section of the White Paper OpenAI shared with European officials at the time, the company pushed back against a proposed amendment to the AI Act that would have classified generative AI systems such as ChatGPT and Dall-E as “high risk” if they generated text or imagery that could “falsely appear to a person to be human generated and authentic.”

That does make sense, considering ELIZA from the 60s would fit this description. It pretty much repeated what you wrote to it in a different style.

I don't see how generative AI can be considered high risk when it's literally just fancy keyboard autofill. If a doctor asks ChatGPT what the correct dose of medication for a patient is, it's not ChatGPT that should be considered high risk but rather the doctor.
