
Incompatible shapes: [1020,1,1] vs. [1019,1,1] - TensorFlow

慕田峪7331174 2022-09-27 10:50:25
A problem occurs when I set the output of the neural network to 1: it fails on the data arrays, complaining that one shape is larger than the other. Everything works fine if I use periods = len(valuesAnalisys) - 1. With periods = 1 it returns: Incompatible shapes: [1020,1,1] vs. [1019,1,1]. The neural network:

datecollect = [x[0] for x in dataSet]
servers = [x[1] for x in dataSet]
valuesAnalisys = [float(x[2]) for x in dataSet]
base = np.array(valuesAnalisys)

periods = 1
future_forecast = 1

X = base[0:(len(base) - (len(base) % periods))]
X_batches = X.reshape(-1, periods, 1)

y = base[1:(len(base) - (len(base) % periods)) + future_forecast]
y_batches = y.reshape(-1, periods, 1)

X_test = base[-(periods + future_forecast):]
X_test = X_test[:periods]
X_test = X_test.reshape(-1, periods, 1)
y_test = base[-(periods):]
y_test = y_test.reshape(-1, periods, 1)

tf.reset_default_graph()

appetizer = 1
hidden_neurons = 100
exit_neurons = 1

xph = tf.placeholder(tf.float32, [None, periods, appetizer])
yph = tf.placeholder(tf.float32, [None, periods, exit_neurons])

cell = tf.contrib.rnn.BasicRNNCell(num_units = hidden_neurons, activation = tf.nn.relu)
cell = tf.contrib.rnn.OutputProjectionWrapper(cell, output_size = 1)

exit_rnn, _ = tf.nn.dynamic_rnn(cell, xph, dtype = tf.float32)

calculateError = tf.losses.mean_squared_error(labels = yph, predictions = exit_rnn)
otimizador = tf.train.AdamOptimizer(learning_rate = 0.001)
training = otimizador.minimize(calculateError)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(2000):
        _, cost = sess.run([training, calculateError], feed_dict = {xph: X_batches, yph: y_batches})
        if epoch % 100 == 0:
            print("[INFO] Epoch: {} - Level Error: {}".format(epoch, cost))
    forecast = sess.run(exit_rnn, feed_dict = {xph: X_test})

y_test2 = np.ravel(y_test)
final_forecast = np.ravel(forecast)
mae = mean_absolute_error(y_test2, final_forecast)

for (host, forecast, date) in list(zip(servers, final_forecast, datecollect)):
    send.postForecastMemory(host, forecast, cost, date)
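The mismatch can be reproduced with plain NumPy slicing, independent of TensorFlow. Assuming base holds 1020 values (inferred from the error message): with periods = 1, the X slice keeps all 1020 elements, but the y slice asks for base[1:1021], which NumPy silently clips to 1019 elements — hence [1020,1,1] vs. [1019,1,1].

```python
import numpy as np

# Hypothetical stand-in for the question's data: 1020 values.
base = np.arange(1020, dtype=np.float32)

periods = 1
future_forecast = 1

# Same slicing as in the question.
X = base[0:(len(base) - (len(base) % periods))]                    # 1020 elements
y = base[1:(len(base) - (len(base) % periods)) + future_forecast]  # base[1:1021], clipped to 1019

print(X.reshape(-1, periods, 1).shape)  # (1020, 1, 1)
print(y.reshape(-1, periods, 1).shape)  # (1019, 1, 1)
```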

1 Answer

月关宝盒

Contributed 1,772 pieces of experience · received 5+ upvotes

The culprit seems to be the fixed time dimension in the RNN cell.


xph = tf.placeholder(tf.float32, [None, periods, appetizer])

yph = tf.placeholder(tf.float32, [None, periods, exit_neurons])


cell = tf.contrib.rnn.BasicRNNCell(num_units = hidden_neurons, activation = tf.nn.relu)

Here, in both xph and yph, you have specified the time dimension as periods. So if you feed a longer or shorter signal, you get an error.


I cannot infer the exact dimensions of the model layers because you did not specify an input shape or a model summary, so placeholder numbers are used below.


There are two possible fixes.


Instead of a fixed time dimension of periods, use None.

xph = tf.placeholder(tf.float32, [None, None, appetizer])

yph = tf.placeholder(tf.float32, [None, None, exit_neurons])

The drawback, however, is that all signals within a single batch must still have the same length; alternatively, you can simply train with batch size = 1 and not worry about the time length at all.
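Separately from the placeholder shapes, the direct cause of [1020,1,1] vs. [1019,1,1] is that X and y end up with different lengths. A NumPy-only sketch (assuming 1020 values, as in the question) that shifts y one step ahead of X and then trims both to a common length, so the batch dimensions always match:

```python
import numpy as np

base = np.arange(1020, dtype=np.float32)
periods = 1

# y is the one-step-ahead target of X; trimming both to the same
# length n guarantees X_batches and y_batches have identical shapes.
X = base[:-1]
y = base[1:]
n = len(X) - (len(X) % periods)
X_batches = X[:n].reshape(-1, periods, 1)
y_batches = y[:n].reshape(-1, periods, 1)

print(X_batches.shape)  # (1019, 1, 1)
print(y_batches.shape)  # (1019, 1, 1)
```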


Use truncation/padding to solve the length problem. Simply pass the signal through a preprocessing function that adds or removes the extra time steps.

import numpy as np

def pre_process(x, fixed_len = 1000): # x.shape -> (100, 1000, 1)
    if x.shape[1] >= fixed_len:
        return x[:, :fixed_len, :]
    else:
        z_ph = np.zeros((x.shape[0], fixed_len, x.shape[2]))
        z_ph[:, :x.shape[1], :] = x
        return z_ph


X_batches = pre_process(X_batches, YOU_CHOOSE_THIS_LENGTH) # based on the length of your data

X_test = pre_process(X_test, YOU_CHOOSE_THIS_LENGTH)
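As a quick sanity check (with an illustrative fixed_len of 1000 and dummy data), the function truncates signals longer than the target length and zero-pads shorter ones:

```python
import numpy as np

def pre_process(x, fixed_len=1000):
    # Truncate along the time axis if too long, otherwise zero-pad.
    if x.shape[1] >= fixed_len:
        return x[:, :fixed_len, :]
    z_ph = np.zeros((x.shape[0], fixed_len, x.shape[2]))
    z_ph[:, :x.shape[1], :] = x
    return z_ph

long_signal = np.ones((4, 1200, 1))   # longer than fixed_len -> truncated
short_signal = np.ones((4, 800, 1))   # shorter than fixed_len -> padded

print(pre_process(long_signal).shape)   # (4, 1000, 1)
print(pre_process(short_signal).shape)  # (4, 1000, 1)
```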


Replied 2022-09-27