Code structure of a simple regression function

Function definition:

def run_training(train_X, train_Y):

Input variables:

X = tf.placeholder(tf.float32, [m, n])
Y = tf.placeholder(tf.float32, [m, 1])
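
The placeholder shapes assume that m (the number of training samples) and n (the number of features) have already been derived from the training data, and that NumPy is imported as np. A minimal sketch of that step, assuming train_X is a 2-D array:

m, n = np.asarray(train_X).shape   # m rows = samples, n columns = features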

Weight and bias representation:

W = tf.Variable(tf.zeros([n, 1], dtype=np.float32), name="weight")
b = tf.Variable(tf.zeros([1], dtype=np.float32), name="bias")

Linear model:

with tf.name_scope("linear_Wx_b") as scope:
    activation = tf.add(tf.matmul(X, W), b)

Cost:

with tf.name_scope("cost") as scope:
    cost = tf.reduce_sum(tf.square(activation - Y)) / (2 * m)
    tf.summary.scalar("cost", cost)
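
Written out, the cost node computes the mean squared error of the linear model over the m training samples (the factor of 2 only simplifies the gradient):

J(W, b) = \frac{1}{2m} \sum_{i=1}^{m} \left( X_i W + b - Y_i \right)^2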

Training:

with tf.name_scope("train") as scope:
    optimizer = tf.train.GradientDescentOptimizer(0.07).minimize(cost)
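
Each optimizer step applies one gradient descent update with the learning rate of 0.07 chosen above:

W \leftarrow W - 0.07 \, \frac{\partial J}{\partial W}, \qquad b \leftarrow b - 0.07 \, \frac{\partial J}{\partial b}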

TensorFlow session:

with tf.Session() as sess:
    merged = tf.summary.merge_all()
    writer = tf.summary.FileWriter(log_file, sess.graph)

Note: merged and writer are part of the TensorBoard strategy used to track the model's behavior.

    init = tf.global_variables_initializer()
    sess.run(init)

Repeat the training loop 1,500 times:

    for step in range(1500):
        result, _ = sess.run([merged, optimizer], feed_dict={X: np.asarray(train_X), Y: np.asarray(train_Y)})
        writer.add_summary(result, step)
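
The summaries written by writer can be inspected with TensorBoard during or after training. Assuming log_file points at a directory such as ./logs (a hypothetical path), the dashboard is launched from the command line with:

tensorboard --logdir=./logs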

Print the training cost:

    training_cost = sess.run(cost, feed_dict={X: np.asarray(train_X), Y: np.asarray(train_Y)})
    print "Training Cost: ", training_cost, "W=", sess.run(W), "b=", sess.run(b), '\n'

A concrete prediction based on the trained model:

    print "Prediction for 3.5 years"
    predict_X = np.array([3.5], dtype=np.float32).reshape([1, 1])

    predict_X = (predict_X - mean) / std
    predict_Y = tf.add(tf.matmul(predict_X, W), b)
    print "Child height(Y) =", sess.run(predict_Y)