Most companies don't have OpenAI's R&D capability; even Meta and Google don't. Yet most AIGC companies still run on models open-sourced by Meta's lab, and a whole crowd of companies is waiting for OpenAI to open-source GPT-3.5. In autonomous driving there are too many leaders and con artists and too few people doing the real work; in fact, in most fields like this only a small group of people actually pushes the industry forward. OpenAI's R&D capability comes from an extremely high density of talent.
Uncle thinks the autonomous driving companies will all fail in the end
Moderator: Softfist
#57 Re: Uncle thinks the autonomous driving companies will all fail in the end
Loosening robes, opening trade, Tang poetry, Song lyrics, Qin Gang, Qincheng Prison
No regrets entering Huaxia in this life; in the next, I'd still be a little Anglo-Saxon
#58 Re: Uncle thinks the autonomous driving companies will all fail in the end
Right now R&D isn't actually what matters; engineering capability is the key.
The principles behind GPT are public, but how to build the datasets, and how to train it faster, better, and cheaper, that's the real high tech.
ILoveBainiu wrote: 2023-10-19 04:25 Most companies don't have OpenAI's R&D capability; even Meta and Google don't. Yet most AIGC companies still run on models open-sourced by Meta's lab, and a whole crowd of companies is waiting for OpenAI to open-source GPT-3.5. In autonomous driving there are too many leaders and con artists and too few people doing the real work; in fact, in most fields like this only a small group of people actually pushes the industry forward. OpenAI's R&D capability comes from an extremely high density of talent.
#63 Re: Uncle thinks the autonomous driving companies will all fail in the end
This is purely an English comprehension problem on your part.
Residual Networks were developed by Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, which won the ImageNet 2015 competition
starJ0101 wrote: 2023-10-19 08:38 Give me a link and I'll take a look.
https://en.wikipedia.org/wiki/Residual_neural_network
In the book[12] written by Frank Rosenblatt, published in 1961, a three-layer Multilayer Perceptron (MLP) model with skip connections was presented (Chapter 15, p313 in [12]). The model was referred to as a "cross-coupled system", and the skip connections were forms of cross-coupled connections.
In two books published in 1994 [13] and 1996,[14] "skip-layer" connections were presented in feed-forward MLP models: "The general definition [of MLP] allows more than one hidden layer, and it also allows 'skip-layer' connections from input to output" (p261 in,[13] p144 in [14]), "... which allows the non-linear units to perturb a linear functional form" (p262 in [13]). This description suggests that the non-linear MLP performs like a residual function (perturbation) added to a linear function.
Sepp Hochreiter analyzed the vanishing gradient problem in 1991 and attributed to it the reason why deep learning did not work well.[15] To overcome this problem, Long Short-Term Memory (LSTM) recurrent neural networks[3] had skip connections or residual connections with a weight of 1.0 in every LSTM cell (called the constant error carrousel) to compute
y_{t+1} = F(x_t) + x_t. During backpropagation through time, this becomes the above-mentioned residual formula
y = F(x) + x for feedforward neural networks. This enables training very deep recurrent neural networks with a very long time span t. A later LSTM version published in 2000[16] modulates the identity LSTM connections by so-called forget gates such that their weights are not fixed to 1.0 but can be learned. In experiments, the forget gates were initialized with positive bias weights,[16] thus being opened, addressing the vanishing gradient problem.
The Highway Network of May 2015[2][17] applies these principles to feedforward neural networks. It was reported to be "the first very deep feedforward network with hundreds of layers".[18] It is like an LSTM with forget gates unfolded in time,[16] while the later Residual Nets have no equivalent of forget gates and are like the unfolded original LSTM.[3] If the skip connections in Highway Networks are "without gates", or if their gates are kept open (activation 1.0) through strong positive bias weights, they become the identity skip connections in Residual Networks.
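The gating described here can be sketched as follows. A Highway layer computes y = H(x)·T(x) + x·(1 − T(x)), where T is the transform gate and 1 − T the carry gate; the weights and sizes below are arbitrary illustrations, and note that in this parameterization it is a strongly *negative* transform-gate bias that keeps the carry path open:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def highway_layer(x, Wh, Wt, bt):
    H = np.tanh(x @ Wh)            # candidate transformation
    T = sigmoid(x @ Wt + bt)       # transform gate; carry gate is 1 - T
    return T * H + (1.0 - T) * x

rng = np.random.default_rng(0)
Wh = rng.normal(size=(4, 4))
Wt = rng.normal(size=(4, 4))
x = rng.normal(size=(1, 4))

# With the transform gate nearly closed (T ~ 0), the carry gate is ~1 and
# the layer passes x through almost unchanged, like an identity skip.
y_carry = highway_layer(x, Wh, Wt, bt=-20.0)
```

When the gate is pinned open on the carry side the layer degenerates to the identity connection, which is the limiting case the excerpt compares to ResNet skips.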
The original Highway Network paper[2] not only introduced the basic principle for very deep feedforward networks, but also included experimental results with 20-, 50-, and 100-layer networks, and mentioned ongoing experiments with up to 900 layers. Networks with 50 or 100 layers had lower training error than their plain network counterparts, but no lower training error than their 20-layer counterpart (on the MNIST dataset, Figure 1 in [2]). No improvement on test accuracy was reported with networks deeper than 19 layers (on the CIFAR-10 dataset; Table 1 in [2]). The ResNet paper,[9] however, provided strong experimental evidence of the benefits of going deeper than 20 layers. It argued that the identity mapping without modulation is crucial and mentioned that modulation in the skip connection can still lead to vanishing signals in forward and backward propagation (Section 3 in [9]). This is also why the forget gates of the 2000 LSTM[16] were initially opened through positive bias weights: as long as the gates are open, it behaves like the 1997 LSTM. Similarly, a Highway Net whose gates are opened through strongly positive bias weights behaves like a ResNet. The skip connections used in modern neural networks (e.g., Transformers) are dominantly identity mappings.
DenseNets in 2016 [19] were designed as deep neural networks that attempt to connect each layer to every other layer. DenseNets approached this goal by using identity mappings as skip connections. Unlike ResNets, DenseNets merge the layer output with skip connections by concatenation, not addition.
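The difference between the two merge operations can be shown directly. The arrays below are arbitrary placeholders standing in for a layer output F(x) and the skipped input x:

```python
import numpy as np

x = np.ones((1, 4))            # skip-connection input
fx = np.full((1, 4), 2.0)      # layer output F(x)

# ResNet-style merge: elementwise addition, shape stays (1, 4).
resnet_merge = fx + x

# DenseNet-style merge: concatenation along channels, grows to (1, 8).
densenet_merge = np.concatenate([x, fx], axis=-1)
```

Addition keeps the feature width fixed, while concatenation makes every later layer see the raw outputs of all earlier ones, at the cost of a growing channel count.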
Neural networks with Stochastic Depth [20] were made possible given the Residual Network architectures. This training procedure randomly drops a subset of layers and lets the signal propagate through the identity skip connection. Also known as "DropPath", this is an effective regularization method for training large and deep models, such as the Vision Transformer (ViT).
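A minimal sketch of the stochastic-depth idea, assuming a simple Bernoulli drop per residual branch; the survival probability and the test-time scaling convention below are illustrative, not the paper's exact layer-wise schedule:

```python
import numpy as np

rng = np.random.default_rng(0)

def drop_path(x, residual, survival_prob, training=True):
    """Randomly drop the residual branch; the identity skip always carries x."""
    if not training:
        # Scale at test time so the expected output matches training.
        return x + survival_prob * residual
    if rng.random() < survival_prob:
        return x + residual
    return x
```

Because the identity path is never dropped, the signal always propagates even when an entire residual branch is skipped, which is what makes this usable as regularization for very deep models.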
#65 Re: Uncle thinks the autonomous driving companies will all fail in the end
How did Princeton get dragged into this?
The first author, Kaiming He, did his undergrad at Tsinghua and his PhD at CUHK, and was a researcher at MSRA; after ResNet made a big splash, Facebook poached him to the US.
starJ0101 wrote: 2023-10-19 08:42 Residual Networks were developed by Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, which won the ImageNet 2015 competition.[4][5]
Does Princeton University count as a Chinese university?
#66 Re: Uncle thinks the autonomous driving companies will all fail in the end
Musk is a man whose vision has been proven right.
“We achieve inner peace when our schedule aligns with our values.”
#67 Re: Uncle thinks the autonomous driving companies will all fail in the end
Robots can't defend themselves; they'd get looted ("zero-dollar shopped") in no time. And if they could defend themselves, that would require enormous social change.
ILoveBainiu wrote: 2023-10-18 22:00 On the application side: even if you can pull it off, in your country's environment it boils down to robotaxis and food delivery. Investing tens of billions of dollars just to deliver takeout is idiotic. In that environment robotaxis can hardly beat the labor cost of cheap model workers, to say nothing of the enormous upfront development cost. What autonomous driving companies least lack is grinders and strivers, but when the end uses have no real prospects, these people can't develop anything worthwhile. On top of that, in-vehicle sensors are monopolized by China, so gaining an edge on pure software is next to impossible.
#69 Re: Uncle thinks the autonomous driving companies will all fail in the end
Automated ports and the road-going autonomous driving being pursued now are two different things.
An automated port is fully enclosed, the ground is covered with sensors, vehicles run along pre-computed routes, and things like lighting are largely unnecessary; the difficulty drops by a huge margin.
#70 Re: Uncle thinks the autonomous driving companies will all fail in the end
With autonomous driving, the core issue isn't technology but liability, much like AI in medicine. Even if the error rate is lower than a human's, it isn't easy to roll out; the law needs to keep pace.
For example, below L3 the driver is responsible, while at L4 and above it's entirely the manufacturer's responsibility. So at L4 and beyond, should there still be a driver override/takeover option? And how do you adjudicate it?
#71 Re: Uncle thinks the autonomous driving companies will all fail in the end
Clearly, manufacturers are unwilling to take that responsibility and lack the confidence to, so for now it's mostly L3: after massive investment, the autonomous car still ends up needing a human to supervise it.
萧武达 wrote: 2023-10-19 09:33 With autonomous driving, the core issue isn't technology but liability, much like AI in medicine. Even if the error rate is lower than a human's, it isn't easy to roll out; the law needs to keep pace. For example, below L3 the driver is responsible, while at L4 and above it's entirely the manufacturer's responsibility. So at L4 and beyond, should there still be a driver override/takeover option? And how do you adjudicate it?