If the number of souls is fixed, why does the total living human population keep growing?
Moderator: Softfist
#7 Re: If the number of souls is fixed, why does the total living human population keep growing?
Because there are fewer and fewer animals.
This suggests that some people were animals, not humans, in their previous lives.
That would also explain why there is so much crime in the world.
==============================
Google Detects Black People as ‘Gorillas’ https://www.wsj.com/articles/BL-DGB-42522
By Alistair Barr
Updated July 1, 2015 3:41 pm ET
Google is a leader in artificial intelligence and machine learning. But the company’s computers still have a lot to learn, judging by a major blunder by its Photos app this week.
The app tagged two black people as “Gorillas,” according to Jacky Alciné, a Web developer who spotted the error and tweeted a photo of it.
“Google Photos, y’all f**ked up. My friend’s not a gorilla,” he wrote on Twitter.
Google apologized and said it’s tweaking its algorithms to fix the problem.
“We’re appalled and genuinely sorry that this happened,” a company spokeswoman said. “There is still clearly a lot of work to do with automatic image labeling, and we’re looking at how we can prevent these types of mistakes from happening in the future.”
The gorilla tags turned up in the search feature of the Google Photos app, which the company released a few weeks ago. When users start a search, Google suggests categories developed from machine learning, the science of training computers to perform human tasks such as labeling. The company has removed the gorilla categories, so those suggestions will no longer appear.
“Lots of work being done, and lots still to be done. But we’re very much on it,” Yonatan Zunger, chief architect of social at Google, wrote on Twitter in reply to Alciné. Google is working to improve its recognition of skin tones and will be more careful about its labels for people in photos, he added.
The episode shows the shortcomings of artificial intelligence and machine learning, especially when used for consumers. Google likes to release software that may still have flaws and then update to fix any problems. This gets products out to users fast, but risks upsetting consumers if bugs are major.
Google launched a YouTube Kids app earlier this year that aims to exclude adult content using a combination of automatic filters, user feedback and manual reviews. But the system missed some inappropriate content, sparking complaints. A Google spokeswoman said at the time that it was “nearly impossible to have 100% accuracy.”
When it launched the Photos app, Google acknowledged that it was imperfect. But the “gorillas” tag shines a harsh light on the system’s shortcomings.
Getting this technology right is increasingly important as machine learning is used for more everyday tasks. Google’s self-driving cars, which are being tested on public roads, use the technology to recognize objects and decide whether to stop, avoid or continue.
The machine-learning system in Google’s cars isn’t taught to recognize specific objects. Rather, it is taught to recognize generally that there’s an object and decide what to do next based on the object’s motion and speed.
The algorithms for photo recognition need to be more accurate. “We need to fundamentally change machine learning systems to feed in more context so they can understand cultural sensitivities that are important to humans,” said Babak Hodjat, chief scientist at Sentient Technologies, an artificial-intelligence startup.
He said machine-learning systems don’t understand the difference between mistaking a chimp for a gorilla, which may be OK, and mislabeling a human as a gorilla, which is offensive.
Google’s system may not have “seen” enough pictures of gorillas to learn the differences – and wouldn’t understand the significance of such a mistake, he said.
“Humans are very sensitive and zoom in on certain differences that are important to us culturally,” Hodjat said. “Machines cannot do that. They can’t zoom in and understand this type of context.”
Feeding more pictures of gorillas into Google’s machine learning system would help. But such systems also have to be trained to be more cautious in certain settings. Now, most systems are set up to make their best guess at a label, even if they’re not 100% sure, Hodjat explained.
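Hodjat's point about caution can be sketched in code. This is a hypothetical illustration, not Google's actual system: a classifier that abstains when its top-label confidence falls below a threshold, with a stricter bar for labels whose misuse would be harmful. The function name, thresholds, and sensitive-label set are all made up for the example.

```python
import numpy as np

def label_with_abstain(probs, labels, threshold=0.9, sensitive={"gorilla"}):
    """Return the top label, or None (abstain) when the model's
    confidence is below `threshold` -- with a stricter bar for
    labels whose misapplication would be offensive."""
    probs = np.asarray(probs, dtype=float)
    i = int(np.argmax(probs))          # index of the most likely label
    label, conf = labels[i], probs[i]
    # Require near-certainty before emitting a culturally sensitive label.
    bar = 0.99 if label in sensitive else threshold
    return label if conf >= bar else None

labels = ["person", "gorilla", "dog"]
print(label_with_abstain([0.55, 0.40, 0.05], labels))  # low confidence -> None
print(label_with_abstain([0.95, 0.03, 0.02], labels))  # confident -> person
```

In this sketch a "best guess" system would always return `labels[argmax]`; the abstain path is exactly the extra caution Hodjat describes, traded off against leaving some photos unlabeled.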
Artificial intelligence expert Vivienne Ming said machine-learning systems often reflect biases in the real world. Some systems struggle to recognize non-white people because they were trained on Internet images which are overwhelmingly white, she explained.
“The bias of the Internet reflects the bias of society,” she said.
Google said that as more images are loaded into Google Photos and more people correct mistaken tags, its algorithms will get better at categorizing photos.
Google took a similar approach with its voice-search feature that lets users ask questions verbally rather than by typing search queries. That service initially had lots of speech recognition errors, but as more people have used the service, Google’s machines have got better at understanding speech.
#9 Re: If the number of souls is fixed, why does the total living human population keep growing?
An astronomical number is still a number; even infinity is a form of quantification. What humans have not perceived may exist; what humans have perceived certainly exists and is real. Hallucinations excluded, of course.
#11 Re: If the number of souls is fixed, why does the total living human population keep growing?
If you believe in the six realms of samsara and reincarnation, this is not a problem at all. If you believe only humans have souls, then it seems you could treat the soul as entropy. Isn't there a law of increasing entropy? Just bind the number of souls to that "entropy".
#16 Re: If the number of souls is fixed, why does the total living human population keep growing?
In second grade, a bunch of us kids were doing homework together and got into a fight over a scented eraser, hurling insults at each other:
"You reproduce vegetatively!"
"You reproduce by spores!"
"You reproduce by fission!"
Then came the harshest one of all: "You reproduce by budding!"
A classmate's grandmother brought us peeled fruit, overheard the argument, and, overcome with emotion, her white hair trembling, declared: these kids' IQ is just way too high, they're so "clevver", they know way too much, they've got so much "lernning"!
I always thought that argument was just kids talking nonsense. Who knew, holy crap, it was actually a spoiler!

Last edited by LittleBear on May 1, 2024, 17:41.
#17 Re: If the number of souls is fixed, why does the total living human population keep growing?
LittleBear wrote on May 1, 2024, 11:07: In second grade, a bunch of us kids were doing homework together and got into a fight over a scented eraser, hurling insults at each other:
"You reproduce vegetatively!"
"You reproduce by spores!"
"You reproduce by fission!"
Finally came the harshest one of all: "You reproduce by budding!"
A classmate's grandmother brought us peeled fruit, overheard the argument, and, moved, her white hair trembling, declared: these kids' IQ is just way too high, they're so "clevver", they know way too much, they've got so much "lernning"!
I always thought that argument was just kids talking nonsense. Who knew, holy crap, it was actually a spoiler!